Here’s a nice idea about captcha cracking:
http://ardoino.com/41-online-social-and-unaware-captcha-cracking/
At an abstract level a CAPTCHA is attempting to perform a specific Turing test to determine if an unknown participant is a human or machine.
As the variety of CAPTCHAs increases, the Turing tests shift from specific to general.
A program capable of discriminating between a human and another machine on these ‘general’ Turing tests would be capable of passing itself off as human to itself (and possibly to humans too).
You end up with infinite recursion in a CAPTCHA arms race. As a side effect, spam solves a key problem of machine intelligence (who said it was useless!).
What if you showed a picture which the user had to describe in one word, but also randomly changed the pictures?
So you could have a few thousand of each (i.e. dog, cat, house, market, man, woman, etc.) and thousands of different words.
Then change all of the pictures every so often.
It sounds daft, I know.
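As a rough illustration of how that picture-in-one-word scheme might work (the directory layout, file names, and helper functions here are invented, not anyone's actual implementation):

```python
import os
import random

# Hypothetical sketch: each label ("dog", "cat", "house", ...) owns a pool of
# interchangeable images, and the pools are swapped out every so often.
IMAGE_ROOT = "captcha_images"  # e.g. captcha_images/dog/0001.jpg

def pick_challenge():
    """Pick a random label and a random image filed under that label."""
    label = random.choice(os.listdir(IMAGE_ROOT))
    image = random.choice(os.listdir(os.path.join(IMAGE_ROOT, label)))
    return label, os.path.join(IMAGE_ROOT, label, image)

def check_answer(expected_label, user_word):
    """Accept the answer if the visitor's single word matches the label."""
    return user_word.strip().lower() == expected_label.lower()
```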
Sorry, I didn’t read the above posts.
I use a modified captcha-type Turing test on my blog. Now, I don’t get the traffic that some sites get, but I had a post make the front page of digg.com recently. That post garnered well over 100 comments, without a single spam comment.
How did I modify it? Well, I don’t use reading or images. I use a form of intelligence test with questions that should be easy for a human to answer, but hard for anything automated to guess. Some answers are text, some are numbers. It’s not perfect, and it could probably be broken pretty quickly and easily by anyone with the will to do so, but really, if we are honest with ourselves, all a captcha or any other Turing test is going to do is help eliminate the nuisances. It’s like putting a lock on the front door of your house: it won’t prevent a thief with intent, but it will stop the casual opportunist from trying the door.
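Something like that question-and-answer check could be sketched as follows; the questions, accepted answers, and function names are invented for illustration, not the commenter’s actual test:

```python
import random

# Hypothetical question bank: easy for humans, annoying for a generic bot.
# Some answers are text, some are numbers, as described above.
QUESTIONS = {
    "What colour is the sky on a clear day?": {"blue"},
    "How many legs does a dog have?": {"4", "four"},
    "What is two plus three?": {"5", "five"},
}

def new_question():
    """Pick a random question to embed in the comment form."""
    return random.choice(list(QUESTIONS))

def is_human(question, answer):
    """Accept text or numeric answers, compared case-insensitively."""
    return answer.strip().lower() in QUESTIONS.get(question, set())
```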
With CAPTCHA breached, do you think that Google system issues like the meltdown Google Groups group-owners are experiencing
http://groups.google.com/group/Google-Groups-Basics/browse_thread/thread/1427ec5996001762/
are the result of Google overreacting to this security threat?
I run a web site that has a registration form that was getting bombed by spammers. I threw in two very simple tests:
- I scan every submission against a list of “unlikely words”. This list includes words that were routinely showing up in the spam ads, like “mortgage” and names of sex drugs, including a few common “obfuscated spellings” like “/iagra”. (Obviously, if you are running a web site for a bank, blocking anyone who asks about mortgages may not be a good plan. The list of prohibited words would have to be tailored to the site.) (I see from my first attempt to submit this post that you’re blocking names of sex drugs also.)
- The funny part: one field on the form asked the user to place himself in a category with a set of radio buttons. I noticed that the spammers picked the first radio button well over 90% of the time. So I added a new first choice, “I am a spammer”, and if they picked that, I rejected the entry. (A sketch of both checks follows below.)
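A rough sketch of how those two checks might be wired together; the word list, field names, and function are placeholders, not the actual site’s code:

```python
# "form" is assumed to be a dict of submitted field names to values.
UNLIKELY_WORDS = {"mortgage", "/iagra"}  # tailored to the site

def looks_like_spam(form):
    text = " ".join(str(v) for v in form.values()).lower()
    # Check 1: any "unlikely word" anywhere in the submission.
    if any(word in text for word in UNLIKELY_WORDS):
        return True
    # Check 2: the decoy first radio button, "I am a spammer".
    if form.get("category") == "i-am-a-spammer":
        return True
    return False
```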
 
Since making the above two changes several months ago, only a handful of irrelevant entries have made it through, and those look too coherent to be machine-generated spam; I think they’re “manual spam”.
The big caveat on this sort of strategy is that my site gets about 60,000 unique visitors a month and the only thing anyone has to gain by spamming my site is getting his ads or links to his site onto my pages. That is, I’m not a big target. I’m sure if Google or a big bank or somebody tried my tactics the spammers would see what they were up to and easily circumvent it.
But I think it stands to reason that “adequate security” for a small site with little to steal is much different from adequate security for a big site that could potentially give a successful hacker access to megabucks. Like, I lock my front door and I keep a gun handy for self-defense. I consider that adequate security. I certainly hope that First National Bank, not to mention nuclear weapons depots, have more stringent security than that. I have no illusions that the lock on my front door is going to keep a skilled team of terrorists from breaking into my house. But I also pretty much assume that no skilled team of terrorists is likely to target my house.
On a totally different direction: how about if we just start compiling a big list of web sites and email addresses of spammers? It should be easy enough to collect these using spam filters on email programs. Then post many copies of this list, with hot links, all over the net. The spammers’ robots will find it, and they’ll start spamming each other! It may not do much to solve the problem, but it would certainly be poetic justice.
Idea #2: Put together an organization dedicated to tracking down the home phone numbers of spammers. Post this on the net. Encourage hundreds of thousands of people to call them at all hours of the day and night. Maybe they’d sue for harassment, but it would make for a fun day in court.
Jay: because blacklists don’t work. Enumerating badness is like trying to count grains of sand.
There’s a service that hires captcha typers from Bulgaria.
To Jay: most addresses are faked or are joe jobs.
graylist
Captcha is a hurdle for visitors. Why should visitors have to jump through hoops because of spammers? (And still, it’s not 100% effective.)
Blacklists / greylists / whitelists are a PITA to maintain and distribute, and they make errors.
Moderation puts the onus on the blogger and adds a delay to comment posting - who wants either?
Bots make oodles of assumptions and can be tested for; you just need to think like a bot.
No hurdles, open commenting, no maintenance, no delay … simple. 
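For example, two assumptions many comment bots make are that every field should be filled in and that the form can be posted instantly. A hypothetical test for both (field names and threshold invented, not the commenter’s actual setup):

```python
import time

MIN_SECONDS_TO_WRITE_A_COMMENT = 5

def probably_a_bot(form, form_rendered_at):
    # Honeypot: a text field hidden with CSS. Humans never see it; bots,
    # assuming every field should be filled in, usually complete it.
    if form.get("website_confirm"):
        return True
    # Humans take time to type; bots tend to post the instant the form loads.
    if time.time() - form_rendered_at < MIN_SECONDS_TO_WRITE_A_COMMENT:
        return True
    return False
```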
Interesting ideas, but most won’t work:
- ASCII art: take a PNG of the web page and OCR it (piece of cake, an hour of work).
- JavaScript: comments are made by HTTP GET or POST requests; no JavaScript is involved. And if the check runs in the browser, you can look at what it does, simulate it, and POST the result. Robots don’t use a web page, they use a socket to send the HTTP request.
- Dogs/cats/ugly people: 9 pictures, 3 choices, that would be about 1 in 1,000? It could work, but I saw some guys labeled ugly that I wouldn’t call ugly. It can’t work for Google/Hotmail: spammers would just harvest the images and create one big database of ugly/not-ugly results. Home users can’t use it either; they don’t have an ugly-people database.
- “Jane has 4 oranges, take away one, how many does she have left?”: useless. You can’t phrase this question in 20 different ways, so hackers will be able to calculate the answer very easily (a parsing sketch follows below).
- Math: if there is one thing a computer can do, it’s solving math, so useless.
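To make the word-problem point concrete, here is a hypothetical sketch of how a bot could solve a templated question like the oranges one; one regex per observed template is all it would take (the patterns and names are invented):

```python
import re

WORDS_TO_NUMBERS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}

def to_number(token):
    return int(token) if token.isdigit() else WORDS_TO_NUMBERS[token.lower()]

def solve_oranges(question):
    # Matches e.g. "Jane has 4 oranges, take away one, how many ... left?"
    m = re.search(r"has (\w+) oranges?, take away (\w+)", question, re.I)
    if not m:
        return None
    return to_number(m.group(1)) - to_number(m.group(2))

print(solve_oranges("Jane has 4 oranges, take away one, how many does she have left?"))  # 3
```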
 
Actually, no system will work. People in China get $30 a month (!) to make my NIKE running shoes. Give $50 to some friends from India and they’ll solve captchas all day long, defeating every captcha.
Tricks (a sketch of the timing checks follows after this list):
- No human can enter a captcha within 1 second, so if the message is posted 2 seconds after generation, delete it.
- No one is supposed to post more than 1 message per minute.
- Limit regeneration of the captcha: 1 minute before the 2nd chance, 2 for the 3rd, 5 for the 4th, 10 for the 5th…
- If the captcha isn’t solved within 10 seconds of generation (make them solve the captcha first, before entering user details/comments), it fails. That solves the farming / forwarding-to-p*rn-sites problem.
- Internet police: log IPs, and IP + time = user. That user’s internet access is blocked for 7 days. Countries not cooperating: cut off from the internet.
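A sketch of the timing checks above, assuming each captcha gets an id and we remember when it was generated and when each IP last posted; the in-memory dicts and thresholds are purely for illustration:

```python
import time

MIN_SOLVE_SECONDS = 2     # faster than any human can read and type it
MAX_SOLVE_SECONDS = 10    # stops farming the captcha out to other sites
MIN_SECONDS_BETWEEN_POSTS = 60

captcha_created_at = {}   # captcha_id -> timestamp
last_post_by_ip = {}      # ip -> timestamp

def accept_post(captcha_id, ip, now=None):
    now = now or time.time()
    age = now - captcha_created_at.get(captcha_id, 0)
    if age < MIN_SOLVE_SECONDS or age > MAX_SOLVE_SECONDS:
        return False
    if now - last_post_by_ip.get(ip, 0) < MIN_SECONDS_BETWEEN_POSTS:
        return False
    last_post_by_ip[ip] = now
    return True
```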
Also:
Use a captcha to inform the user whether the registration/comment was successful.
- That way, a bot doesn’t know if it solved the captcha correctly, since knowing that would require solving another captcha.
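A minimal sketch of that result-as-captcha idea, assuming Pillow is available for rendering the message; the usual distortion and noise are left out for brevity:

```python
from PIL import Image, ImageDraw

def result_image(success):
    """Render the outcome as an image, so a bot can't read whether it passed."""
    message = "Your comment was posted" if success else "Captcha failed, try again"
    img = Image.new("RGB", (400, 60), "white")
    ImageDraw.Draw(img).text((10, 20), message, fill="black")
    return img

result_image(True).save("result.png")
```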
 
I don’t mean to spam 
But for forums/blogs: registered users should be able to flag something as spam. Make use of “web 2.0 social” techniques to fight spammers.
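A hypothetical sketch of that flag-as-spam mechanism: hide a comment once enough distinct registered users have flagged it (the threshold and storage here are invented):

```python
from collections import defaultdict

FLAG_THRESHOLD = 3
flags = defaultdict(set)   # comment_id -> set of user_ids who flagged it

def flag_comment(comment_id, user_id):
    """Record a flag from a registered user; report whether the comment is now hidden."""
    flags[comment_id].add(user_id)
    return is_hidden(comment_id)

def is_hidden(comment_id):
    return len(flags[comment_id]) >= FLAG_THRESHOLD
```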
http://www.theregister.co.uk/2008/04/14/msn_captcha_breaking/
MSN is truly broken…and definitely by script, not cheap labor.
First, judging by Poker’s comment above, your captcha is broken 
Second, your captcha has made the news: http://www.news.com/8301-10784_3-9929073-7.html
How about ReCaptcha? Has anybody heard of it being broken?
It is the Captcha 2.0 green technology (recycling human computer interaction power)!
Hi Jeff
Why didn’t your captcha control work when rejecting users from the default page?
Captchas are a thing of the past, no offense… youclash.com