Your Community Door

What are the real world consequences to signing up for a Twitter or Facebook account through Tor and spewing hate toward other human beings?

This is a companion discussion topic for the original entry at

Thanks for this. I’ve been the organizer of a few communities, and in one in particular, I’ve been much quicker with the banhammer recently (ban first, sort it out later, bans can always be removed). It can be hard to be the arbiter of speech, but permitting abusive attitudes and behaviors allows them to infect and destroy the community.


There are real-world consequences to the community, and to the individual doing the negative posting - including genuine mental and physical health consequences. The larger issue, of course, is that such people can destroy a community, especially when they begin to predominate. But even one individual can poison the well. We see this everywhere on the “free” Internet today - which isn’t really free if it tolerates negative behavior, unless you consider being trapped in a cell with a psychopath to be “freedom.”

I’ve lived in small cooperative communities for 38 of my 72 years, and I find that communities where the preponderance of members are positive and constructive tend to be self-healing, in the sense that negative people can’t tolerate positive people after a while - kind of funny, really, how they self-eliminate. Online, the best example I know of a community that cleans itself is the voting system on the sports pages of, where readers can thumbs-up/down a comment. Because most readers are positive, fair-minded, and keenly tuned to good sportsmanship, the occasional trolls, psychos, and jerks are so overwhelmingly down-voted that they don’t stay long - or the sysadmin takes notice and bans them. Either way, it’s very important to eliminate those whose energy is consistently down-pulling and unconstructive, because they can cause big wounds.

Definitely comes back to a moderator/admin who is active within their own community. I hate reddit’s up/down vote visibility system because it creates an echo chamber and encourages hive mentality, BUT the idea of mods/admins being able to filter out unhealthy members of the community via these votes is an encouraging system.

I think one of the BEST examples lately is Steam’s recommendation system. It doesn’t show you everything all at once, just a general overview of how positive or negative a game’s feedback is. Applied to users in a forum, this could be rather useful - as opposed to a reddit-style “make them disappear,” or a well-rounded person’s comment vanishing under hordes of downvotes because of the avalanche effect.
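For illustration, a Steam-style summary applied to forum users might look like the sketch below (Python; the bucket names, cutoffs, and minimum-sample rule are my own invention, not Steam’s actual formula). The point is to show only a coarse aggregate label rather than raw vote counts:

```python
def feedback_summary(upvotes: int, downvotes: int) -> str:
    """Bucket a user's vote history into a coarse label,
    Steam-review style, instead of exposing raw counts.
    Thresholds here are invented for illustration."""
    total = upvotes + downvotes
    if total < 10:
        # Too little data to judge anyone fairly.
        return "not enough feedback"
    ratio = upvotes / total
    if ratio >= 0.80:
        return "very positive"
    if ratio >= 0.50:
        return "mostly positive"
    if ratio >= 0.20:
        return "mostly negative"
    return "very negative"

print(feedback_summary(95, 5))   # → very positive
print(feedback_summary(30, 70))  # → mostly negative
```

Because readers only ever see the bucket, a handful of drive-by downvotes can’t make a well-rounded person’s standing “vanish” the way a raw score can.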

I like reddit’s system because, just like you said, it encourages the hive mentality. That hive mentality isn’t really the problem, though; rather, it’s the lack of attentive moderation - reddit mods aren’t always active, and don’t always enforce positive discussion. Reddit doesn’t do anything to enforce, encourage, or coach the hive to be positive to other users.

The upvote/downvote is a good idea, especially since it’s a really great way for the community to let a user know when they’re being a jerk, but it still needs to be moderated well.
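As a concrete illustration of “voting plus moderation”, here is a minimal Python sketch (the `HIDE_THRESHOLD` value and all class names are made up for this example, not how reddit or Discourse actually implements it): votes can hide a comment, but hiding also queues it for human review, so the hive never gets the final word on its own.

```python
from dataclasses import dataclass, field

HIDE_THRESHOLD = -5  # hypothetical cutoff, tuned per community


@dataclass
class Comment:
    author: str
    text: str
    score: int = 0
    hidden: bool = False


@dataclass
class ModerationQueue:
    """Votes hide a comment, but a moderator still reviews it,
    so the crowd can't silently bury someone."""
    pending: list = field(default_factory=list)

    def vote(self, comment: Comment, delta: int) -> None:
        comment.score += delta
        if comment.score <= HIDE_THRESHOLD and not comment.hidden:
            comment.hidden = True
            self.pending.append(comment)  # flagged for human review

queue = ModerationQueue()
c = Comment("troll", "being a jerk")
for _ in range(6):
    queue.vote(c, -1)
print(c.hidden, len(queue.pending))  # → True 1
```

The design choice worth noting is that the community signal only triggers review; the final keep/remove decision stays with a person.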

Using the community as a gauge of positive or negative effect can be a double-edged sword. It can be used by negative people to overwhelm the positive (manipulation or vote brigading), or by the community to ignore anything that doesn’t agree with its world view (the filter-bubble effect). If the majority has absolute power, then things like gay rights, African-American rights, women’s rights, and so on become harder to change.

This isn’t to say that negative influences should be allowed or that all opinions should be treated equally, but that care should be taken in how this is done to try to balance between the two extremes. I otherwise agree with everything else said about victim blaming and addressing problematic behavior. It’s a novel view of the standard tools available to communities that I think should be explored more to find a better solution.


The problem is also that the product might need to be renamed from Discourse to Propaganda, based on “truth in labeling”.

“Bad Behavior” can mean lots of things, including simple disagreement or dissent from the conclusions - even from the same data or facts, but with a different interpretation.

Same with “Hate”. If I disagree with your opinion, you may label me a hater. Or harasser.

You don’t bother defining any of these things clearly in the post, so that is left open, but is the essence of the problem.


Since I received online death threats for saying that I think house cats should stay in the house and not in your neighbour’s garden, I’ve given up on all online discussions.

@tomz: I’m not sure, to quote from the example given in the post, that “Then she can cry about misogyny in hell while being raped by a pack of feral* demons” is really an “eye of the beholder” type thing. It’s not hate because of a different interpretation of facts or whatever. It’s just hate.

Obviously people can quibble over where exactly the line should be drawn, but that’s true of pretty much any aspect of human interaction. I don’t think that makes it any less true that many online posts, tweets, and the like are objectively hateful.

*(are there domesticated demons?)


As to the offensive Twitter post, it depends on the context. Was some troll calling the tweeter a misogynistic-racist-sexist-homophobic rapist? In response to something like cats?

If the discussion thread is about something technical, then a lot of things are unwelcome. Some people hate first-person shooters because they hate guns. Spam and going ad hominem are trolling. But if the purpose of the discussion is to express opinions, then the counter “I consider you hateful” and the reply “I don’t care what you think” are both on topic - including adding more specific adjectives.

If the topic goes to something political or religious, it will get heated.

One example is Atheism Plus, from which thunderf00t was ejected.

Free speech and thought? Not there. He was also banned from Twitter - and reinstated, apparently when they couldn’t find an actual violation of the terms of service.

Right now there is a controversy called #GamerGate which is testing the limits. One side claims the media has a payola-type scandal or worse; the other says they are all just misogynists. But there is content under the mud, and important issues.

If it is “your home”, fine, but do not advertise an “open house” and then object to who shows up. If all you want is people to say you are right, that is OK, but few are likely to join such discussions.

Twitter and Facebook advertise themselves as open. As did 4chan and reddit, but they started politically motivated banning and censorship. Hence 8chan and others.

It is important to remember that the freedom and right to speech and expression is meant to cover things which are detestable, not just likable.

The only thing I’m intolerant of is censorship.

Agreed. Muting and ignoring jerks on the 'net is not a very good solution. But until someone invents a way to punch someone in the face through the Internet, our options are limited.

One idea that might be worth considering is public shaming. A good example of this is Jimmy Kimmel’s ‘Celebrities Read Mean Tweets’ videos. This could work on Facebook: any comment or post could be flagged as mean/hateful/etc., which would make it appear on a special page that everyone can see. Anyone would also be able to vote for any of these posts/comments, and the meanest item of the week would be promoted across the entire site for everyone to ridicule.

Right now there is a controversy called #GamerGate which is testing the limits. One side claims the media has a payola-type scandal or worse; the other says they are all just misogynists. But there is content under the mud, and important issues.

The problem is that because of a lack of policing, all that’s left is the mud. The originating question of journalistic ethics having been resolved, the question has moved onto the ethics of a sustained campaign of abuse, in which multiple journalists have been threatened, forced to cancel appointments, and in some cases leave their homes.

Now, you can argue faults on both sides, but frankly, that incident shows how vital community policing is, and how a lack of censorship often leads to “community-led censorship”, where people are afraid to speak out and become vulnerable to the abusers. You can claim “anyone can speak out”, but some people are vulnerable to abuse and some are not; for example, if you are a public figure or have a family, you are more vulnerable than an anonymous conspiracy theorist.

And “Intolerant of Censorship” is too often a synonym for “Tolerant of Abuse”.


Right. Okay, so only the owner of the account targeted by the mean post would be able to nominate that post for public shaming, and they could choose to do it anonymously. Also, they would be able to control how much context is included.

Disclaimer: this idea is far from perfect. I just think it merits discussion.

Abuse also seems to be a very general term. In a de minimis way, your post is abusive to me. Random, anonymous, amorphous threats are also abusive, but still just talk. Or, as children are taught, “Sticks and stones may break my bones, but words will never hurt me.” That is true except for slander and libel. To continue with the clichés, “People who live in glass houses shouldn’t throw stones.”

I am not tolerant of any PHYSICAL abuse. I’m not sure virtual abuse is real. No one I know is going to some lady’s Facebook page where she has uncontroversial pictures and discussions on things like kitchen utensils and party decorations and making death threats.

However, there are men and women who aren’t merely trying to discuss #GamerGate; they want to sling not just mud but stones (virtual stones?). Some enter into the discussion, and they ought to be wearing appropriate attire. It is something like playing paintball, then complaining that your clothes got dirty and you got a small bruise from a direct hit. I know of no one who has not intentionally entered the fray who has been subject to “abuse”, and what you might label as “abuse” occurs on both sides. But on reddit and 4chan only one side has been censored. That is fine, but “free speech” vs. “only politically correct pro-gender-feminist speech” are different policies, and they seem to have changed toward the latter recently.

This even applies to something purely technical. Code reviews can be brutal, and maybe ought to be. If you will be reduced to tears when someone finds a critical bug, then the problem is with you, not with the reviewer.

If you wish to do battle, you should wear armor and shield. Many places require helmets when riding a motorcycle. You don’t have to ride a motorcycle. And if you run barefoot, you need to develop a thick skin - calluses - there.

I don’t think oversensitivity and fear on someone’s part makes ordinary aggressive, even tough, discussion “abuse” toward them. Victims don’t get to define the crime; the law and the rules must be objective and enforced equally.

Here is a case of “if a man said it…” It cannot be equality, the rule of law, or “abuse” if it is only wrong when men do it to women, but not when women do the IDENTICAL thing to men. And that is my problem - I am intolerant of anything being judged differently based on the sex, gender, race, creed, or whatever of the person. Either the act itself is abusive or it is not. If the place is supposed to be “polite”, then rudeness on the part of anyone and everyone should not be tolerated. Instead, I find excuses being made for one party or the other - for persons, not actions.


I agree; too often I see people claiming victim blaming/shaming when the person decided to air various topics in a public forum. Public forums encourage public discussion, and there are often ideas and angles that will, in fact, point out that circumstances could have been mitigated or entirely altered by a number of variables. The prevalence of the idea, especially in the US, that victims of a crime are always 100% dissociated from the acts that led to the crime being committed is absurdly high.

Each case should be taken under individual inspection, rather than with broad strokes that completely remove responsibility from the harmed party. It seems to be very hard to get across the point that what people deserve - not to be victims of crimes - is not related to the reality of what you can do to mitigate becoming a victim.

The summary of what I’m trying to say is: armor is necessary, both virtual and in real life. Not only that, but sometimes, even as a victim, you can learn new mitigating factors - ones that might actually improve your chances of not being a victim a second time - from the very people who are attacking you.

Odd that Facebook is the primary example of offensive comments, when Facebook is the only venue where I’ve never seen one, as opposed to Usenet newsgroups, YouTube comments, and numerous news sites. I think that’s because every comment I see on Facebook is from either a friend of mine, or a friend of a friend, or possibly a friend of a friend of a friend (but still linked to me in some traceable way). All my Facebook friends are people I know in real life, and it seems that most of them appear to also have only real-life acquaintances as their Facebook friends. I do receive friend requests from people I don’t know, but I ignore them. I’m curious how Bob Beschizza is linked to Gerald Witt (the two people in the Facebook example).

I believe the majority of hateful people are unlikely to let loose their inner vitriol unless they are either in a circle of similarly-inclined friends or in an anonymous forum, but that they are usually restrained when they are in a heterogeneous group of acquaintances. Therefore a social network such as Facebook is uniquely positioned to squelch such comments simply through its structure rather than any active reporting system, but it would be difficult to replicate this effect in a community such as, well, this one on Coding Horror, where many commenters are unlikely to know anybody else in the community in real life.

[quote=“tomz, post:16, topic:2681, full:true”]
Random, anonymous, amorphous threats are also abusive, but still just talk. Or as children are taught “Sticks and Stones may break my bones, but words will never hurt me”. That is true except for slander and libel.[/quote]

Actually, “Sticks and Stones” is a complete fiction. It’s an attempt to dismiss attacks on a person, the equivalent of “Suck it up, pussy” when you’re subjected to physical abuse. Words have power, specifically the power to define relationships and attitudes. An anonymous threat is not just talk; it is an attempt to alter someone’s behaviour by instilling fear of what might happen to them. False accusations, when believed, affect your ability to function in society. And anyone who was bullied, or who deals with the bullied, will tell you that non-physical abuse has a far more long-term effect than the physical, in terms of the future behaviour of the bullied.

Basically, my claim is that the people making these threats are well aware of the power of words, and that’s why they’re using them in a way which is defined as abuse in the same way that we separate physical violence and abuse by the way that violence is used. By saying “I’m not willing to judge whether this is abuse”, you are not supporting it, but you are certainly tolerating it. You can substitute the term abuse for whatever term you like, but you are certainly tolerating the actual actions.

And that’s fine. You can tolerate that, either because it doesn’t affect you personally, or because you are emotionally safe from criticism. But if you then say “it’s wrong to protect people against this and put structures in place to prevent it”, you are like a prizefighter saying it’s fine to beat up people because he can take it. Again, that’s fine, and I doubt I can convince you otherwise.

But I can at least draw to your attention that that is what you are doing, and argue that tolerating such behaviour is not as ethically sound as “intolerant to censorship” appears at first sight.

I’m not sure the distinction of “ignore bad, moderation good” is as clear-cut as this post makes it sound, either - the problem of ignores being ‘silent’ is not inherent to the idea, but rather an implementation detail.

In fact, over two years ago I suggested an extension to XenForo’s ignore system, which would make it considerably more useful for encouraging good behavior by hybridizing it with Hellbanning:

The core of the idea is that you have a “loserboard” (cf. leaderboard) of those who are “most ignored”, and when they hit a threshold they are hellbanned. That provides a social cue of “You’re being a poor citizen” without forcing the people hitting the ignore button to de-anonymize themselves. In addition, mods can examine the loserboard proactively to find problem users.

The actual post has a number of other things in order to make it actually work, but I think it illustrates that “ignore” can be turned into a useful tool for exactly the kind of environment Discourse is trying to support.
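For the curious, a minimal Python sketch of how such a loserboard might work (the threshold, class, and method names are invented for illustration here, not taken from the XenForo proposal):

```python
from collections import Counter

HELLBAN_THRESHOLD = 10  # hypothetical: distinct ignorers before hellban


class Loserboard:
    """Sketch of the 'loserboard' idea: count how many distinct
    users ignore each member; past a threshold, hellban them.
    Ignorers stay anonymous - only the counts are visible."""

    def __init__(self):
        self._ignorers = {}        # target -> set of users ignoring them
        self.hellbanned = set()

    def ignore(self, by_user: str, target: str) -> None:
        # A set makes repeat ignores from the same user idempotent,
        # which blunts single-user brigading of the counter.
        self._ignorers.setdefault(target, set()).add(by_user)
        if len(self._ignorers[target]) >= HELLBAN_THRESHOLD:
            self.hellbanned.add(target)

    def board(self):
        """Most-ignored users first, for proactive mod review."""
        counts = Counter({u: len(s) for u, s in self._ignorers.items()})
        return counts.most_common()
```

Usage would be something like `board.ignore("alice", "troll")` each time a member hits the ignore button; mods can then scan `board.board()` for brewing problems before anyone files a complaint.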

And by analogy with it being “your house”… If you see all of your guests ignoring a few others pointedly (and with the above, it is pointed), that’s a hint to you as the host that something is up - without them coming up to you-the-mod and complaining, which can in some circumstances be stigmatized (that, too, is a problem, but one thing at a time :stuck_out_tongue:)

The use of “your house” in this article makes it clear that this is being written to discourse site administrators, not to users of those sites, which really makes the situation no different than it is with Twitter or Facebook or any other specific site controlled by the whims of its administrators.

Regular debates, such as this one, are attempts at altering people’s behavior as well, with an appeal to logic, rather than an appeal to survival. They do the same thing in different ways. It’s clearly socially acceptable to appeal to logic, and sometimes emotion, but not survival unless you aren’t the one threatening them directly (“move or you’ll get hit by that car!”).

You’re conflating most attacks with survival though, and that’s simply not true. The vast majority of things people are calling attacks on each other are not death threats, they are using emotionally charged words. This is important to note, because the people who are being emotionally attacked are using emotional appeals themselves as well. Obviously not all debates are emotional, but a great deal of back and forth is entirely an emotional battle.

Person 1 - “These words make me feel bad, nobody should do/say this”

Person 2 - “It makes me feel good, you should understand my feelings/thoughts”

Person 1 - “MY wants and needs are more important.”

And so goes the argument (downhill usually). Person 1 and 2 BOTH have valid points, but in nearly every debate one of these people is demonized and a “winner” is picked. It’s ridiculously rare that Person 1 or 2 wants to actively harm or hold privilege over the other (although this is a highly used demonizing tactic, especially in war).

What you are doing by saying people shouldn’t tolerate certain “attacks” is not protecting people, but really “pick a winner, and make sure it’s the one I agree with”.

Addendum: This is one of the reasons I largely disagree with publicly viewable open forums that define themselves as “safe spaces”. Pick one, a public forum, or a safely moderated private one that only accepts opinions from people they already agree with.