a companion discussion area for blog.codinghorror.com

What If We Could Weaponize Empathy?


Interesting. Maybe one equivalent is the Ars Banana Experiment?

Yesterday the popular news site Ars Technica published an article titled “Guns at home more likely to be used stupidly than in self-defense.” Unfortunately, the purpose of the article was not so much to report on a recent study as to act as an experiment to see how much of the content readers truly take in before moving to the comment section.

The author slipped in a sentence towards the end of the article that states, “If you have read this far, please mention Bananas in your comment below. We’re pretty sure 90% of the respondents to this story won’t even read it first.” As a redditor astutely points out, it took three pages’ worth of comments before someone finally mentioned the secret word, bananas.


Unsurprisingly, someone took this as an opportunity to grind an axe:

I rarely see GMOs, but often see Lizzie Borden, Social Justice
Warrior. (Or Carrie Nation, if you prefer, since you axed). This is
one of the roots of #Gamergate - gamers want to discuss game play, not
misogyny, portrayals, in-game demographics, etc. especially with someone
who doesn’t care about games.

Of course this is not the place to discuss GMOs, people fighting for social justice, etc., but sure enough someone used a mere example referring to such things as an opportunity to spread his own views about them. My empathy – my understanding of and familiarity with the internal states associated with this behavior – makes me all the more critical of it.

And someone else noted the attack on people under 19 (or 25) by a commenter who just had to voice his opinion on “the brain-addling influences of modern culture” and advocate a blanket ban based on age. This came after he lauded his teacher’s definition of “maturity as the ability to relate to realities other than our own,” and after he said this same teacher “rightly gave no energy to negative people,” for which “he was accused of discouraging free discussion” (I have some empathy for the accusers). When this person writes of trolls harming “our physical and emotional health”, I wonder how much self-examination he has done. If he really wants to “expand the heart and decrease our own and others’ suffering, and cultivate happiness and fulfillment for all”, perhaps he should step down from his soapbox and remember that, before healing the world, one needs to heal oneself, or at least start on that journey.


I’ve been looking for that XKCD comic for so long! Thank you!


I don’t think it is useful to get into a who-is-offending-whom-most contest.


Sometimes when we’re called out on our behavior, the best thing to do is reflect on it rather than immediately, reflexively trying to echo the criticism back. Also, criticism should generally flow “up” toward the powers that be and people with more privilege. This is also known as “don’t punch down.”


That’s exactly what being offended boils down to, though. Clearly someone who is either purposely or accidentally offending someone else is not offended (let’s say, on a scale of 0-10, they’re at a zero). The person who is offended feels it at a 6. Now they call the person who made the joke or comment a bad name. The person who WAS at a zero now feels offended at a 7. Escalation, yah.

This is basically every internet argument ever.

As for don’t punch down, that’s never going to work, as explained in the link you provided (everyone finds punching down pretty hilarious in comedy). As an argument tactic, it’s EXTREMELY unfair to ask one party not to engage in any sort of disparagement while the other gets to do whatever they want (i.e. don’t punch down) because one is white, male, Christian, a CEO, etc. It’s an excuse to invalidate someone’s arguments because they have different experiences or “privilege” (or you THINK they do).


With a volatile issue like gun control I’m not surprised by the experiment’s outcome (the article itself could be viewed as trollbait). It is pretty interesting to see how the comments evolve after the “banana” has been outed. On one hand, it divides the community into the people who got the joke and everyone else, creating an ingroup/outgroup mentality. There’s actually a micro-debate on page 10 of the comments as to whether the users with “banana” in their posts have more credible arguments than posters who cite research papers. On the other hand, it’s a joke that defuses tensions and allows empathy to build between those who laugh. If it’s received well, it could even redraw the ingroup/outgroup lines among users.

Do you feel the banana strategy created empathy between users with relevant contributions, while spurning trolls out of the community? Or was it just a way to attract fruit flies and drive up page views for a day?


It’s an interesting point, but I find that where a lot of communities suffer is not in a lack of empathy between individuals, but in a lack of empathy between groups of people (and between groups and individuals).

I find that when communities reach a certain mass (not as big as you’d expect; it seems to start appearing once you get into the thousands), you see these destructive, non-empathetic behaviors but can’t really point to any individual. I have come to realize that there are “mob personalities”: when you put certain people together, their pros and cons cancel each other out to form a new personality that may have traits that none of them have individually.

Membership in mobs is many-to-many and a complex thing. If a population is big enough, membership can be transient (so even if we somehow got rid of the permanent members, the problem would be hard to deal with). Every one of the negative traits can appear out there:

  1. Endless contrarianism: this one is the easiest to learn, since there’s always someone out there to give you the opposing view. At some point there is always someone who says you are wrong just because you are wrong, and it doesn’t build to anything.
  2. Axe grinding: the name for a mob doing this is circle-jerking.
  3. Griefing is something that happens more against an individual who gains fame. Say a new member comes over to the community and at first doesn’t get it; it takes them time to learn the right way of doing things. Yet someone is always reminding people of “that one time,” and those mistakes are repeated and parroted blindly by other members who heard of the mistakes but not of the new member’s redemption and improvement.
  4. Persistent negativity is yet another thing that is very common in communities. Even though there isn’t a specific attacker, it’s easy to see the negative comments and attacks; no one person has to be the source of them.
  5. Ranting: as a community’s membership approaches infinity, the probability that someone feels extremely strongly about a specific subject approaches 1.
  6. Grudges are the ones that change the most: they become prejudice against people of a group. A community begins to form subgroups that are extremely closed-minded and splinter off.
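Point 5 is really just the law of truly large numbers at work. A minimal Python sketch (with a made-up per-member probability and an assumption of independence, purely for illustration) shows how quickly it saturates:

```python
# Probability that at least one of n members feels extremely strongly
# about a given subject, assuming (hypothetically) each member does so
# independently with probability p.
def p_at_least_one(n: int, p: float) -> float:
    return 1 - (1 - p) ** n

# Even a tiny p approaches certainty as the community grows:
for n in (100, 1_000, 10_000, 100_000):
    print(n, round(p_at_least_one(n, 0.001), 4))
```

With p = 0.001, a 100-member group rarely has a ranter, but by 10,000 members it is a near certainty.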

The thing is that reducing the percentage of “bad apples” and of “misguided apples” only slows this down; it doesn’t stop it. The number needed to reach this “critical mass” is an absolute that is probably tied to our limited ability to empathize with more than about 250 people at the same time.

I have heard and read many articles about how to deal with bad apples and dangerous individuals, but what do you do when it isn’t the actions of a single individual, or even a few, but the result of the actions of the whole community? The only solution I can think of is to splinter into smaller, more specific groups, and let a few cross-pollinators transfer the best ideas and discussions from one group to the others. Yet segmentation and splintering wouldn’t really work for everyone, so it can’t be a good enough solution.


Nice post. It’s clear that the internet is having a huge social impact. We’re seeing companies like Facebook, Twitter, etc. becoming huge. While it’s great that we can all connect so easily over the internet, there are inevitably going to be some consequences. Yet, we don’t get much chance to step back and see all the implications the internet is having. Hopefully, there isn’t too much negativity, trolling, lying, etc.


There is one, you know?


I think a lot about communities too; where they succeed and where they fall short. Just wanted to add that one of the most insidiously harmful and all too common problems is simple indifference.

When the people in a group share a genuine concern for the others and their issues and concerns, that’s a community. Everything else is just a group of people. That distinction is pivotal. Communities work together; other groups of people usually don’t. Families, neighborhoods, workplaces, development projects, forums, … are always groups of people, but only sometimes communities.


A lot of the comments on posts describing bad community behaviour work on the assumption that bad behaviour comes from bad people, or people who consistently exhibit tendencies towards bad behaviour, which apparently isn’t the case.

I can’t find the source any more (if anyone knows what I’m talking about, I’d be keen to retrieve it), but the community blog for one of the big online games / community services (it could have been League of Legends or DOTA) recently discussed their finding that the behaviour they deemed ‘measurably bad’ came, by an overwhelming majority, from people whose behaviour was usually good.

This thesis goes some way to nerfing the idea that participants just need to be educated, or that you can ban your way out of community negligence.


There’s a bunch.

The biggest one?

One of Riot’s experiments in curbing negative behavior was a simple one: turning off all-player chat as a default. Players had to opt-in to it. Prior to the experiment, Riot says that more than 80 percent of player chat was “extremely negative,” compared to 8.7 percent positive.

A week later, after turning off all-player chat as the default, many players still opted in, but behavior changed. According to Riot, negative chat decreased by 32.7 percent, while positive chat went up by 34.5 percent. A drop in offensive language and verbal abuse was also observed.

Turning off cross-team (aka “all player”) chat just feels like cheating to me, but I guess it works. In general these guys do great work and should be studied closely.


I recently donated to a charity. Before they let me donate, they made me say “I will try my best to be a long-term donor” as part of the enrollment process. I thought this was a pretty transparent tactic, but later, when I thought about pulling my charity dollars and moving them somewhere that didn’t use high-pressure sales tactics, I felt a little guilty, and I feel like that pledge is part of the reason. They have my money, but they’ve also found a way to create a sense of obligation.

That sort of stuff is sneakily, insidiously powerful.

What - and I know this is ridiculous, cumbersome, and a little silly - if you made your users record themselves with a camera and microphone pledging to treat everybody in your community with respect?


Yeah, Riot really does put a lot of effort into being community friendly…

Come to think of it, Counter-Strike never allowed cross-team / all-player chat, and it had nothing else in place to prevent toxic behaviour. In fact, talking with a microphone was much easier and better than in most non-FPS games nowadays, which could have made room for more toxicity, but I’m not sure it did.

I wonder how much it would help to remove counting stats / numbers from the game as well… It would probably also reduce player engagement, though.


There is a great podcast with Tim Minchin talking about empathy being key to everything:


What if we stopped weaponizing all the things?


I think of it more like this