I'm concerned that a "programming and human factors" blog is jumping to this conclusion rather than addressing the human factors issues.
(My background: I'm a computer scientist and HCI researcher who has interviewed and hired programmers.)
Suppose I were interviewing Jeff for a human factors job. I ask him:
"Jeff, why do you think programming interview candidates usually fail simple coding tests, even when all the other empirical evidence (their career thus far, references, and degree from a top rated university) suggests they should be competent?"
And he responds:
"Because they're just all that dumb; top tier university professors are monkeys who can't assess students; the exam results are all fake; and programmers' bosses are all just chimpanzees that can't see whether any code is being produced even though they are programmers themselves. The empirical evidence from the rest of their careers is wrong and my toy interview question asked by an untrained interviewer over the telephone is right!"
Jeff would not be getting the job -- however much the grumpy misanthrope in me might want to cheer him on!
Like it or not, from a human factors perspective, coding at interview is very different from coding on a job. To use a loose but obvious analogy: a great many people struggle at public speaking -- shove a microphone in front of them and say "talk about rain" and they go "um, er, um... Mummy can I go home now?". That does not mean they don't know English or what rain is. Dumbing the question down to "ok, well just talk about water then" doesn't solve the problem. Similarly, if I'm hiring an engineer to design the plans for an extension to my house, the best test is not "draw me a cartoon of an igloo in the next ten seconds".
In an interview, we have a fake task (nobody wants to use the code), in a fake setting (an interview), via a false interaction (over the telephone!), with a false assessment (one interviewer whose word is God, no compiler, no user, no sales, no code metrics or unit tests), a fake timeframe (a few minutes on each 'project'), false pressures (your job depends on the next ten lines of code), and somehow we expect to have valid results. Speaking as a scientist, that's just nuts.
Fine. At this point, most people reply "Sure, but we don't care about missing out on good candidates, only not hiring bad ones." But I have worse news for you. You are probably still hiring as many bad candidates as you would by rolling dice. Most interview coding tasks are so over-simplified that they no longer select for programming or thinking skills at all -- the "programming on stage" skill dominates completely.
The irony is that by selecting for "skill at interview coding tasks" you are effectively selecting for people who have done a lot of interviews -- but you actually want to hire the person who has hardly done any interviews, because no company ever wants to let him leave.