Suspension, Ban or Hellban?

For almost eight months after launching Stack Overflow to the public, we had no concept of banning or blocking users. Like any new frontier town in the wilderness of the internet, I suppose it was inevitable that we'd be obliged to build a jail at some point. But first we had to come up with some form of government.

This is a companion discussion topic for the original blog entry at:

Identities are free. Just create a new user, carry on the same.

As I see it, the point of these kinds of bans is that nobody (including the banned person) knows that they are banned.
Besides, these can be done per-IP too, and those aren’t switched that easily.

@Nick Yes and no. A new identity will have a low reputation and limited capability for mischief.

Be very careful with the technique you call “slowbanning”. It’s all too easy to open yourself up to DoS issues. If your web server holds open a process for the duration of the artificial delay then make sure that your rate-limiting happens before the delay.
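The ordering this comment warns about can be made concrete. Here is a minimal sketch (names, window, and limits are invented; a real deployment would use shared storage like Redis rather than an in-process dict) showing the rate limiter running before the slowban delay:

```python
import time
from collections import defaultdict

# Hypothetical in-memory limiter; a production system would keep these
# counters in shared storage (e.g. Redis) across worker processes.
WINDOW_SECONDS = 60
MAX_REQUESTS = 30
_hits = defaultdict(list)

def allow_request(ip):
    """Return False once an IP exceeds its per-window request budget."""
    now = time.time()
    _hits[ip] = [t for t in _hits[ip] if now - t < WINDOW_SECONDS]
    if len(_hits[ip]) >= MAX_REQUESTS:
        return False
    _hits[ip].append(now)
    return True

def handle(ip, user, render):
    # The rate limit runs BEFORE the artificial delay, so a flood of
    # requests from a slowbanned client can't pin workers open while
    # they sit in the sleep.
    if not allow_request(ip):
        return "429 Too Many Requests"
    if user.get("slowbanned"):
        time.sleep(0.5)  # artificial delay, applied only to admitted requests
    return render()
```

If the delay ran first, each throttled request would still occupy a worker for the full sleep, which is exactly the DoS exposure described above.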

I work for a large-ish (~2 million unique visitors/month) discussion forum and we currently do not use any of the techniques listed above. We used to use “hellbanning” but the implementation of the feature in the software was poor and the tiny benefit was outweighed by the additional complexity.

If the problem you’re trying to solve is “how do I cut down on the meta-discussion around each ban” then one of the most useful fixes is to exclude bystanders from discussions about specific bans. If you allow general discussion of ban scenarios but prohibit specific discussion of individual bans by uninvolved 3rd parties, you should be able to limit the amount of time you waste. It mostly works for us.

@Nick: You’d be surprised at how little of your identity is made up of login/OpenID/e-mail/IP/etc and how much of it is made up of behavior. Duplicate identities are very easy to spot in the vast majority of cases.

What I see here is that while these methods work, in theory, if someone finds out they’ll just create another account. So, you have to keep them secret.

Another issue: All of these seem like permanent solutions to hide the bad community members. Since they, in theory, don’t know what is going on, they won’t learn a lesson. So, for instance, you can’t make the slow ban temporary. If it’s a week and they just come back and resume their behavior (you know, “when the site performance is back on track”), nothing learned. In fact, they’ll probably start complaining about how poor the site performs.

For the hellban, it’s even worse. Because they get to continue their bad behavior, they really don’t learn anything. And you can never re-enable the account, in whole, because of that.

So, it seems that in all cases, these are effectively permanent bans. The main difference is that the idea is to bore the person into self-exclusion, rather than instantly anger them into creating new accounts and causing more trouble.

(Per the reputation: It’s most likely that most of the worst offenders have less than 100 reputation – or maybe 200, since 100 is the base now once you start linking. If people with thousands of reputation start causing trouble, something else might be wrong.)

@Nick: People need to know that they are banned to get a new identity. The point of banning people secretly is to ban them without letting them know.

@Jeff: On Stack Exchange there would be another possibility:
Prevent the questions in which the user posts from rising to the top.
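That suggestion could look something like the following sketch (field names are invented for illustration): when ordering the front page by recent activity, ignore “bumps” caused by hellbanned users, so their posts never push a question back to the top.

```python
# Rough sketch of the idea above; "posts", "author", and "time" are
# assumed field names, not Stack Exchange's actual schema.
def last_legitimate_activity(question, hellbanned_ids):
    """Most recent post time, ignoring posts by hellbanned users."""
    times = [
        post["time"]
        for post in question["posts"]
        if post["author"] not in hellbanned_ids
    ]
    return max(times, default=0)

def front_page(questions, hellbanned_ids):
    """Order questions by their last non-hellbanned activity."""
    return sorted(
        questions,
        key=lambda q: last_legitimate_activity(q, hellbanned_ids),
        reverse=True,
    )
```

The effect is the same invisibility trick as hellbanning, applied to ranking instead of display: the hellbanned user sees their post “bump” the question, but everyone else’s front page ignores it.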

What about giving the control to the users, by allowing them to block other users they do not want to see content from? Or is that not absolute enough?

@LukeMorton: That’s a good idea, and in fact I think the best approach is to use a combination of the suggestions in this post and thread.

Hellbanning/slowbanning/errorbanning shouldn’t be tools of first resort – they should be reserved for users who cannot learn to behave better, or who are deliberately trying to be destructive and have no intention of changing their behavior, because public-banning these users just makes them go create new accounts and try again.

Warnings followed by timed suspensions (known to the user) should be the first approach; some users will realize they’ve been misbehaving and improve their behavior. And all such actions taken by moderators should be public, despite the possibility of notoriety; in a democracy, as Jeff puts it, the actions of government need to be public and transparent.

But after the first couple of offenses, or if it’s clear that a user is being intentionally disruptive, moving to hellbanning (maybe not permanently, but for a while) is entirely reasonable.

In fact, it would be interesting to implement a trial system, where when a user is deemed (by mods) to be disruptive, they’re placed on trial, and “jury service” is randomly assigned to users. (Probably you’d try to exclude users who the accused had posted replies to. ;-)) The users individually vote on whether to suspend the user. It could maybe require a unanimous vote, as in a criminal trial, or maybe just a supermajority (two-thirds?). The identities of the jury might not have to be public, since they’d be pulled randomly by computer and could issue their verdicts without having to be physically present in the courtroom – instead, the “testimony” would be presented to them on a special page that showed all the accused’s recent posts. There are all sorts of issues with a trial approach, I’m sure, but the SO sites could be an interesting place for it to be (no pun intended) tried.

A user being able to /ignore other users is something that should always exist. (With an option: Do I see the user’s presence on a thread, but the content of their posts/replies is hidden, or is their presence entirely hidden?) Normally, being ignored by a few people shouldn’t have any effect on whether you end up getting banned, but if you end up getting blocked by a lot of people with high reputation (maybe each user has a “block score” which is the sum of all the reputation of everyone who’s blocked them), then that user should be flagged for the moderators to look into. (Someone who gets blocked occasionally but has been around for a long time might just be crotchety and not harmful, but someone who gets blocked by a huge number of people in a short period would probably indicate malice.)
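The “block score” idea in that comment is simple enough to sketch directly. In this illustration the threshold and all names are invented, not any real Stack Exchange mechanism:

```python
# Hypothetical sketch: sum the reputation of everyone who has /ignore'd
# an account, and flag the account for moderator review past a threshold.
BLOCK_SCORE_THRESHOLD = 10_000  # invented cutoff for illustration

def block_score(blocked_by, reputation):
    """blocked_by: ids of users ignoring this account;
    reputation: mapping of user id -> reputation points."""
    return sum(reputation.get(uid, 0) for uid in blocked_by)

def flag_for_moderators(blocked_by, reputation):
    """True when high-rep users have collectively blocked this account."""
    return block_score(blocked_by, reputation) >= BLOCK_SCORE_THRESHOLD
```

Weighting by blocker reputation matches the comment’s intent: a handful of established users blocking someone counts for more than a pile of throwaway accounts doing the same.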

Suspension and the various banning types should require multiple moderators agreeing about it. If you search for “suspended” on meta.stackoverflow, you’ll find a lot of posts where the suspension doesn’t seem that justified.

(Risking hellbanning)…

The Great Brain would make a fantastic movie, don’t you think? Somebody like Walden Media should get right on that.

@LukeMorton: You’re talking about PLONK, as it was called on usenet. I am not sure if you’re an older user, but to PLONK was to put someone in a “killfile”, which the usenet client would use to magically erase a poster’s messages from the newsgroup(s).

I am sure there were downsides, but it was a well-used and, I think, well-regarded system.

@Tcv, to be clear, your killfile hid problematic posts from you, but it did not actually erase those posts from the newsgroup.

I’m probably stating the obvious, but for people who have already accrued large amounts of rep, a representation of respect, shouldn’t we have a sort of penalty system which detracts that hard-earned rep in BIG ways? After all, they’re losing respect.

I’d like it so people who offend get a black mark on their profile, visible only to people with lots of rep, creating a “criminal record” so to speak of bad behaviour on an account, and each criminal act associated with an account has respective rep reductions connected to it.


[ Suspended for 2 days for { crime type }, and charged 2000 rep, <link to some sort of controlled discussion on this individual charge which includes all the cited “bad behaviour” stored on record as permanent evidence> ]

At least that way, if the discussion at some later date concludes that the original decision was wrong, the rep charge can be repealed.

( to be similar to our real-world criminal justice system )

At least this way, we’ve got more “sane” tools to deal with high-rep people on occasional offenses, and more sane tools to track habitual offenders.

As for people already with low rep, I don’t see what any of the above techniques would do; all they have to do is suspect they’re being hellbanned/slowbanned and then do what it takes to thwart the system, whether it be in-band abuse (account jumping) or out-of-band (trolling networks outside the scope of the ban).

Bah. Fail. I put some comment in there between “<” for style purposes, and it nuked the whole thing.

Aforementioned “criminal record” entry would have a link to a controlled discussion containing the members involved with the incident and cited evidence of the alleged abusive behaviour (non-editable content, a hard copy), allowing “offenders” and accusers to flesh it out, but not have the general public giving their 2 cents all over the place.

Perhaps we can add “jurors” at some stage who can vote on guilty/not guilty at some point in the discussion, but I haven’t thought that part through yet.

@Shane The point of hellbanning is not to teach a lesson, neither is it punishment, it’s to save an online community. It should only be applied to people who have failed and failed again to learn their lessons.

Though hellbanning seems on its surface to be cruel and unusual, when you run a community where you have to deal with malicious internet trolls, you quickly realize that there are precious few options for dealing with serious trolls. If you want to have a relatively open community, then it becomes trivial for the troll to register new accounts. You can ban IPs, but proxies are widespread and easy to use. When you don’t have the normal meatspace methods of enforcing social norms, you have to be creative.
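The core trick that makes hellbanning work is decided at read time, not write time. A minimal sketch, assuming an invented post schema: a hellbanned author still sees their own posts, so the site looks normal to them, while everyone else sees nothing.

```python
# Minimal read-time hellban filter; the "author"/"body" schema and the
# hellbanned_ids set are assumptions for illustration.
def visible_posts(posts, viewer_id, hellbanned_ids):
    """Hide hellbanned users' posts from everyone except the author."""
    return [
        p for p in posts
        if p["author"] == viewer_id or p["author"] not in hellbanned_ids
    ]
```

Doing the filtering on read rather than deleting on write is what keeps the ban invisible: nothing about the hellbanned user’s own experience changes, which is the entire point.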

Sorry, but you have to be some sort of stupid if you don’t realise that you are hellbanned/slowbanned/errorbanned. These things are incredibly simple to figure out and work around.

That said, at sites like SO I find it easy to ignore the fools, because the good answers are voted up by real users.

The problem with hellbanning (or any other invisible banning) is that there is no feedback loop from the banned user.
That means it’s hard to identify cases where a moderator banned a user by mistake.

If there is no feedback from the banned user to the moderator, it’s hard to improve moderation skills.
That problem is especially serious when banning is done by an automoderator.

Bottom line: it’s better to be open about bans and not hide them at all.

@DontCare4Free, I wouldn’t be too receptive to banning by IP address; if somebody in the office is screwing with SO, it’s a bit of a sledgehammer-to-crack-a-nut approach to excluding somebody, making everybody else suffer in the process.