The Multi-Tasking Myth

This is the trouble with developers capitulating to noisy environments by donning headphones. I simply can’t code as well listening to music as I can in silence. I suppose I could if I didn’t actually like music. If I listen to music and write code at the same time, I’m only programming at half-capacity.

Darn, I just forgot what I was about to do.

OK, Ken, let’s assume you’re twice as good at multitasking as everybody else, which is probably unreasonably favorable to you, because the switching costs are fundamental to how our minds are structured, not a lack of practice.

At 5 projects, you’re still losing 40% of your productivity, which is one hell of a hit: you’ve turned what should take one employee into two, all else being equal, and that’s before allowing for the extra penalty from communication, knowledge-base splitting, and synchronization.

And remember, the argument in favor of multitasking isn’t that “it doesn’t hurt too much”… you’re supposed to be getting more done. I see no evidence it works that way whatsoever.

Fundamentally, the problem is that we can’t intuitively gauge what we lose in switching; multitaskers feel the time is productive, which makes “multitasking is hurting you” a tough sell. But perceived productivity is only loosely related to actual productivity. It may even be true that your productivity isn’t zero in the first fifteen minutes of switching and coming back up to speed; the problem is that compared to what you could do with those fifteen minutes if you were already up to speed, the multitasker is doing horribly.

What about dead time?

I find that if I am only working on one project, then I spend half my time waiting for other people. By having several projects, I can jump to the next one whenever my current one becomes blocked.

Steve, I agree with you in that human beings (it’s not just the software industry) tend to always gravitate to extremes - we aren’t good at treading the middle ground and most “best practices” fall into the middle ground. For example, there is nothing wrong with socialism and nothing wrong with capitalism unless you take either to the extreme - then they fail. But go to any financial district in the U.S./Canada and you’ll find extreme capitalists - go to any university and you’ll find extreme socialists.

Where I will disagree with you is your statement “They’re used to coding in nested if statements and aren’t really going to jive to a polymorphic command pattern”, with its implication that nested if statements are simpler than a polymorphic solution (I wasn’t discussing just the Command pattern - that is usually overkill for most tasks). If you are nesting a couple of if statements in a one-shot case, you’re most likely correct. Too often, however, an element of code has a set of state variables that, when interpreted, tell you which operations to perform, plus common tasks that must be done without duplicating all that code. The if statements become disastrous when a program needs to repeatedly (for example, in each method it enters) decipher the state variables to determine what special action must occur. Understanding that there are objects that specialize in specific behaviour, and calling those objects where appropriate, is far simpler than constantly referencing these state variables and trying to a) remember what they mean and b) decipher the nested if/switch statements that select the appropriate behaviour.

All things in moderation is usually best. If the differential behaviour is fairly simple and there is not much of it, use if statements. If the differential behaviour is significantly different and there are many cases where it exhibits itself, use polymorphism. Everywhere in between those extremes you need to consider which option is best. BUT, and this is a big but (with one ‘t’, as opposed to what I am sitting on), if you are having a hard time knowing which is the correct decision, expect to have to change whichever decision you make at some point and build the system accordingly.
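To make the trade-off concrete, here is a minimal sketch - the domain, class, and method names are all invented for illustration. The first version re-deciphers a state flag in every method that cares about it; the second chooses a behaviour object once and simply delegates.

```java
// Hypothetical illustration: a "state variable" every method must
// re-interpret, versus an object chosen once that specializes in
// the behaviour.

interface Shipping {
    double cost(double weightKg);
}

class StandardShipping implements Shipping {
    public double cost(double weightKg) { return 5.0 + 0.5 * weightKg; }
}

class ExpressShipping implements Shipping {
    public double cost(double weightKg) { return 15.0 + 1.0 * weightKg; }
}

class Order {
    // The if/switch version: this flag must be deciphered again in
    // every method that needs shipping behaviour.
    static double costByFlag(int shippingType, double weightKg) {
        if (shippingType == 0) {
            return 5.0 + 0.5 * weightKg;       // standard
        } else if (shippingType == 1) {
            return 15.0 + 1.0 * weightKg;      // express
        }
        throw new IllegalArgumentException("unknown shipping type");
    }

    // The polymorphic version: the decision is made once, when the
    // Shipping object is created; callers just delegate.
    static double costByPolicy(Shipping policy, double weightKg) {
        return policy.cost(weightKg);
    }
}
```

With one flag and one method, the if version is arguably simpler; the polymorphic version pays off once several methods would otherwise each repeat the flag-deciphering.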

“Just my humble thoughts”, he lies unconvincingly …

I’ve only skimmed the comments on this post, so I’m not sure if I am repeating something already written, but:

I find that some tasks can be done in parallel with no loss in productivity. In fact, I think there may actually be a gain. I think that one major task is the limit, though; the other tasks must all be trivial. The example I am using is instant messages:

When I am programming, I am capable of conducting 2 or 3 light IM conversations at the same time. My IM client is configured to give a one-time notice of a pending message, which disappears after 2 seconds, so messages don’t distract me. I can put them in a queue and get to them when I am ready to switch. I have found that when the conversation matter is trivial, 2 to 3 conversations while programming non-complex code are no problem.

To go along with the analogy of a computer: the conversations must not require any registers. Once the subject matter becomes complex, I begin to slow at programming. Also when the programming is complex, it requires so many registers that I can’t spare any to IM.

The reason for doing this falls in line with what Rob Conery was saying about “reboots”. The IMs provide tiny windows where I can switch away from it and come back with a fresh perspective. Of course, these breaks are so small that it isn’t a “fresh” perspective, but the break does modify it slightly.

On a larger scale, taking a break to read other subject matter (recreational) when an obstacle is encountered can be very helpful. It goes with the expression to “sleep on it” but on a smaller scale.

In conclusion, some task switching can be helpful. The kinds described in the original post, though, are generally very detrimental: working on multiple complex problems at the same time, or unexpected interrupts.

I think we’re aware of the inefficiencies of context switching, which creates more inefficiencies.

Say you’re programming some hairy code before your 2:00 meeting. The phone rings at 1:30; it’s someone from the business side with a question that requires some digging. You figure out the answer and get off the phone at 1:50. Now do you try to get back to your programming task for 10 minutes, knowing it will take that long just to get everything back in your head? I don’t – I check my e-mail and the news.

Of course, the more tasks you’re on, the more meetings and questions you get, and the more this happens.

At my last job it got to the point where I had to count on only doing development work at home at night on my laptop, since I knew I couldn’t get stuff done in the office.

One more note. This may be a failing on my part, but sometimes I got to the point where I assumed interruptions would happen and wouldn’t want to get engaged in anything.

“Those distracted by incoming email and phone calls saw a 10-point fall in their IQ - more than twice that found in studies of the impact of smoking marijuana, said researchers.”

–hmm so I guess smoking marijuana while checking my email on my Blackberry during work hours explains my demotion.

Oh no! Another topic that comes down to the same answer: ok in moderation.

Does anyone else find it frustrating that as a programmer used to dealing in pass/fail, true/false that we’re completely surrounded by practices and advice that are neither wrong nor right?

I find myself nodding at Tim’s remarks; I come across tons of code that was obviously written to keep the compiler from complaining. catch {} and the like. But I’ve seen the high-design school fail miserably too. Many designs are so absurdly simple and brilliant that no one can understand them. If I’m a code monkey sent in to fix a bug, there’s no way I’m going to read 20 pages of design documentation first. Looks like the issue is right there: if(bad) don’t do this. Done!
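A tiny sketch of what “written to keep the compiler from complaining” looks like in practice - the class and method names here are invented for illustration. The empty catch block compiles cleanly, but a failed read becomes indistinguishable from an empty config file.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

class ConfigLoader {
    // The "catch {}" style: the compiler is satisfied, but the
    // failure silently disappears.
    static String loadQuietly(Path path) {
        try {
            return Files.readString(path);
        } catch (IOException e) {
            // swallowed - nobody will ever know the read failed
        }
        return "";
    }

    // The honest alternative: let the caller see that the load failed
    // and decide what to do about it.
    static String load(Path path) throws IOException {
        return Files.readString(path);
    }
}
```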

I’m no Architect, and I want to trust and believe in my coworkers and their abilities. But some times simple and elegant is easy for me and 2 other people to understand, but someone else has to work on the code and just can’t grasp it. They’re used to coding in nested if statements and aren’t really going to jive to a polymorphic command pattern.

Yeah, yeah, there’s no silver bullet. At least my compiler understands me…

It’s definitely good that some concrete measurements and studies are mentioned here.

Besides context switching, a lot of other bad things can happen because of multitasking, things that are not at first as obvious to an individual as losing time and lowered productivity. For example, in companies where multitasking is a “normal thing”, meaning that most employees have to practice it for some reason, people who work on only one task at a time are seen by their colleagues as lazy and incompetent. That’s not nice, but the worst thing is that they usually are! How sick is that?

I think multitasking brings many hidden problems.

Love the post. Regards!

One final comment on this, to perhaps clarify my words or to perhaps further vilify myself.

I had this very discussion with a former developer today. You cannot rationalize that all projects must have a long bug fixing cycle or must be complex just because those you have worked on always do or, as this former developer said, because all the senior developers he talked to said that always happened. That is a fallacious argument in that it assumes that the development team is not the problem. I would allege that in most, if not all those cases, it is the development team that is the problem - there are very few truly complex applications but there are many applications that are truly complex (if you get my drift). Failing that, I’ll quote Albert:

“Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius – and a lot of courage – to move in the opposite direction.”
– Albert Einstein

Having been on fairly large scale projects that did not require lengthy bug-fixing cycles at project end and did not become huge, cumbersome, complex beasts that were nearly impossible to alter, I have personal experience that this need not be the case. However, having been on projects where developers had no overall plan save what was in their heads (and therefore virtually guaranteed to be different than what was in the heads of the rest of the team) and where those developers simply sat down and started to “cut code”, I’ve seen that software developed in this manner always ends up with a lengthy bug-fix cycle and almost always ends up being complex, convoluted and headache inducing.

I’m not implying that Paul was saying this, because I don’t believe he was. Another rationalization for why a project resulting in bad code is “expected and normal” goes like this: almost all projects that live long enough to undergo many revisions and become legacy apps (a very small percentage indeed) are complex and difficult to understand. But that does not imply that all projects will be complex and difficult to understand, nor does it justify expecting projects to become complex and difficult to understand right off the bat. Rather, it is a rationalization that allows us to believe we are really great developers even though our code sucks.

Finally, with respect to the statement about “allow the debugger to do their thinking for them”. What I am trying to say here is that, when you are stuck in the gory details of some code and you are revamping a loop or a class because you just discovered it doesn’t quite do the trick, it is nearly impossible to maintain a grasp of the complexity of that loop/class and keep track of the second order effects changing it will have on the code around it. By relying on the debugger to assure you your logic is correct, you are virtually giving up on trying to understand the first order effects of your code, let alone the second order. It is through design that we determine that the whole works as a unit - it is through code that we determine how the small units work and, as a consequence, realize the whole - at least that’s how it should be.

Well that makes perfect sense to me but, as it is late, I will acknowledge that it might not make any sense to others.

I think you guys are totally missing the point here. It seems to me that the book discussed in this thread is about the loss of productivity due to multitasking. From trying to do things like talk on the phone, GOTO meetings, IM, email, and code all at the same time. So the context switch we’re talking about here is between email and code or meeting and code, and not the context switch between project A and project B.

I think we’ve all had those days where there are constant interruptions and you forget what you were supposed to be working on in the first place. One minute someone from sales is telling you the changes they want you to make to a report, and the next someone from marketing is telling you that we just have to have customizable messages print on our invoices. And of course the e-mails keep pouring in and the phone keeps ringing. This is why I think programmers get a bad reputation for being antisocial: there are a lot of demands on our time, and the cost of interruptions is higher for us than for people with less complicated professions. I know when I’m in the “zone” it doesn’t take much to knock me out of it, like the programmer in the next office talking extremely loudly on the phone, which he always does, but I’m kind enough not to mention it to him. And at times I can be cross when someone interrupts me while I’m working on an important issue. After a day of such interruptions, that simple, elegant, almost beautiful code you wrote just this morning might as well be in Chinese.

P.S. If you really want a good, or at least interesting, read in the project management genre you should pick up “The Mythical Man Month”.

Everybody seems to think that listening to music decreases your focus, but I’ve found that in a loud environment (a shared open space), it’s sometimes the only way to isolate yourself from the noisy conversations around you.
Oh wait, maybe that means I should change workplaces to one that actually limits the possible sources of interruption!?

I recognize this problem. I’m ‘currently’ ‘working’ on a lot of things: writing a bunch of interpreters, cryptographic communication tools, and lots more. The problem is that I barely ever finish anything.
Only for me the problem appears to be that I like to code, have many ideas and want to implement them all. I just can’t, and that’s the problem. My to-do list grows a damn lot every day, yet I barely ever finish anything, but I’m repeating myself.
Over-motivation appears to be the cause. Unfortunately.

jbn: Music may decrease your focus (and harm productivity) in comparison to a perfectly quiet, distraction-free environment. However, I think what the music listeners are suggesting is that it will have less impact on productivity than a noisy, open cubicle environment with Dilbert’s overly loud guy in every cube around you and with flashing disco balls spinning … (c’mon, how often do you get to talk about disco balls in a software setting - I couldn’t pass that up).

In other words, to many people, listening to music on their headphones is the lesser of two evils.

Tim claims with some reason that complexity in context-switches has an impact on the effectiveness of the switch.

However, I can’t help but feel one of the quotes in Jeff’s post was probably true; “…just how much they suck at it”.

Am I the only person ever to have driven home, and realise as I get home that I have totally forgotten the journey? This is a wonderful ‘skill’ to have, but I have to assume that I was far less attentive (and therefore more dangerous?) on those trips.

JohnMcG, and William have recently posted their observations about the long-term impact of interruptions (even if they are ‘anticipated’, like meetings). I think this is where Joel’s historic discussions on programmer workspaces have been immensely interesting to me (and made me mucho jealous!)

I for one have experienced days where nothing happens due to interruptions. Days where I simply try and fill time because I know that I’ll get the work done when I get home (probably interrupting loads of people for a chit-chat in the process).
I wonder how many people actually get their (programming) work done in the office?

Returning to the driving analogy; my gut feeling is that providing your own distractions whilst coding is likely to turn off the critical-judgement part of your brain. So your ‘for’ loop worked? Great… you just didn’t remember that you’d better put something else in a try block or catch an exception in a proper manner!

I’m pretty sure that I have read (possibly in ‘Peopleware’ by DeMarco and Lister) that even listening to music substantially reduces a programmer’s ability to code a simple routine…


– Surely, Tim, code that is complex to understand and difficult to maintain could be assumed to be legacy code.

Kind of apropos of task switching, and of comments further up the line (many of which didn’t come up until just now; page cached, I suppose): it is naive to assume that current code is not difficult to maintain. Just because a codebase is written in the au courant OO language doesn’t imply that said codebase is transparent.

I have commended Allen Holub’s “Bank of Allen” articles before, perhaps even here. They describe what OO is intended to do, and how. Most of what exists in Java space is procedural COBOL dreck, just in lower-case text; at least the Enterprise stuff. Many of its creators are blissfully unaware of this.

Maintaining such codebases involves lots of task switching simply to follow how they work.

I also commend Meyer’s book; while he doesn’t quite get the evil of get/set (Stroustrup and Arnold do, and I can find the quotes if anyone’s interested), he does get the notion of Design by Contract. It’s his term.

I’ve always been amazed that folks don’t find the existence of the Sequence Diagram in UML just the teeniest bit ironic (well, dumb): a fig leaf to cover the proceduralness of what OO designers are really doing.

Finally, I wonder how many of this blog’s readers come from the ranks of Management? I doubt that they’re listening. To paraphrase one of our prime Knuckleheads: “you’re dealing with reality-based decision making; we aren’t”. The Mythical Man Month was mentioned above; the title originally came from describing how OS/360 came to be born. That was 1964. True fact.

These are some interesting statistics. As a university student, I have to wonder if the same effects can be seen in students trying to juggle 5 different courses at once.

2 things can help eliminate some of the wasted time here, in my experience. Few companies, however, will actually spend the time or money to do these things, but when they do, it helps.

1 - Actual duplication/triplication/whatever of the physical resources to facilitate the multiple tasks, i.e., separate development machines, each left in a set-up state with all software/hardware/network configured and left as-is until needed by that task.

2 - I know this sounds crazy, but I had the pleasure of working for a time with a gentleman who was a brilliant and productive coder whose productivity was increased by drinking fine beers (in moderation, of course). I joined him once, at his insistence, and found it didn’t work so well for me. I think there might be way too many individual traits, quirks and variables for any kind of metric to be established.

I do agree that many people think they are good at this, even take pride in it and brag about their abilities in this area, and yet they suck so hard.