Can self-moderation of a game community reduce abuse and dickwadery?

[Image: PA's Greater Internet Dickwad Theory]

I recently entered the beta for an upcoming DotA (almost-)clone called Heroes of Newerth (HoN), after discovering through reddit that they were giving away 100 beta keys for the most absurd names (I submitted the Flying Spaghetti Monster, of course). I had always wanted to try the DotA experience, but I didn't have Warcraft 3 available, and Demigod is quite different from it (and a bit disappointing as well). Plus, a native GNU/Linux client was an offer I couldn't resist. ;)

But what does this have to do with the title above? Well, one of the main issues DotA has is the sheer number of elitist assholes who heap tons of abuse on people trying to learn the ropes, or even on those who just don't play perfectly. I am talking total nerd-rage here. Unfortunately, this mentality seems to have migrated to the HoN community, most likely because it's been marketed as the spiritual successor to DotA (items and heroes are almost the same).

While this general level of fucktardery is not such a big issue in a free mod like DotA, for a commercial game with developers to pay and big plans for the future, it can make or break the game's life expectancy. The fewer people interested in nurturing and growing the "newbie scene", the fewer newcomers will stick around long enough to reach a level where they no longer suffer abuse simply for not having climbed the (very steep) learning curve.

As I was reading similar sentiments from other people in the fora, I got to thinking about how those who would like to help new players might overcome this obstacle and alleviate, if not eliminate, the rampant dickwadery. While technical solutions might be proposed and coded, such as improving the match-making system, I think the solution lies in direct action and cooperation from the community.

Of course, the community cannot take effective action unless the game provides at least some tools to combat the problem, which is incidentally why the DotA community is what it is. Fortunately, even at this beta stage, the game has two controls that could be used for such purposes: Permaban and Ignore. If I understand the first one correctly, you can mark a specific account as always banned from games you host. Ignore just… well, ignores chat messages from a particular player.

So how can these two be used for self-moderation? My idea is a blacklist. Let's say that a known newbie-friendly player (let's call him/her a "Mentor"), while playing in a newb-only game, discovers that one of his team members is constantly ranting and cursing at the others for being worthless, n00bs, sucky and whatnot. The Mentor then grabs a few screencaps or a replay as evidence and adds the dickwad's alias to a blacklist he maintains. This can be as simple as a blog, with each new post covering a particular dickwad and a full list in a prominent location.

Now all the other like-minded people, i.e. those who want to promote a healthier community, subscribe to this blacklist. Each time a new person is added to it, subscribers judge the evidence and, if it's solid1, add that account to their permaban and ignore lists. If just 10% of the HoN playerbase is subscribed to this blacklist, the abusive players are quickly going to start running into problems joining games or talking to people.
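To make the subscription step concrete, here is a minimal sketch of how a subscriber-side sync tool could work. Everything here is an assumption for illustration: the feed format (one alias per line, with optional `# reason` comments) and all the names are hypothetical, since HoN exposes no such API and the blacklist itself doesn't exist yet.

```python
# Hypothetical sketch: merge a community blacklist feed into a local
# permaban list. The plain-text feed format and all aliases below are
# made up for illustration; nothing here is an actual HoN feature.

def parse_blacklist(feed_text):
    """Extract account aliases from a plain-text blacklist feed.

    Lines starting with '#' are comments; an optional inline
    '# reason' after the alias is ignored.
    """
    aliases = set()
    for line in feed_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        alias = line.split("#", 1)[0].strip()
        if alias:
            aliases.add(alias)
    return aliases

def merge_permabans(local_bans, feed_text):
    """Return the updated permaban set plus the newly added aliases,
    so a subscriber can review only the new entries' evidence."""
    feed_aliases = parse_blacklist(feed_text)
    newly_added = feed_aliases - local_bans
    return local_bans | feed_aliases, newly_added

# Example feed a Mentor might publish on the blacklist blog:
feed = """
# Community blacklist -- judge the evidence before applying!
RageLord99   # screams at newbies, replay posted on the blog
l33tgriefer  # intentional feeding, screencaps posted
"""

bans, added = merge_permabans({"l33tgriefer"}, feed)
# 'added' contains only the entries the subscriber hasn't seen yet,
# which is exactly what they would vet before permabanning.
```

Reporting only the *new* entries matters: the whole point is that each subscriber judges the evidence individually rather than blindly mirroring the list.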

The effects of this tactic would be akin to peer pressure in a normal social situation. Suddenly the dickwads would find out that being a jerk online has some drawbacks. Hopefully some might reconsider, since generally, not being a dickwad is not that difficult. They should then be given a chance to earn their way off the blacklist (a probation period?) and, who knows, maybe they'll join the other side for a change.

So why is this better than simply using system-based changes? First of all, because no programmed system is perfect, especially at catching a concept as vague as dickwadery. Matchmaking may not work well enough, and options to mark others as abusive (say, via a game function like Permaban) may in turn be abused for griefing purposes. A self-moderated solution, on the other hand, avoids these issues.

Let’s say for example that someone was added to the dickwads blacklist but some think this was wrong. Perhaps his frustration was warranted, or there is not enough evidence and whatnot. What would probably happen is that not all subscribers to the blacklist would add him as they wouldn’t feel he deserves it. As such his “pain” would be much less. Dialogue will be also had and perhaps more evidence requested.

Let's take another example, where the Mentor goes on a power trip and starts adding people he doesn't like to the blacklist without evidence, just because he expects to be trusted. Seeing as none of this is official, nothing would prevent people from calling him out on it; a new blacklist could be forked from the old one under the supervision of another Mentor (or a collaboration of them), and the old Mentor might quickly find himself in a prominent position on the new blacklist.

All of these, then, are ideas that might allow a game community to moderate itself into a healthy environment, one conducive to new people joining, without requiring any authoritarian measures on the part of the developers or moderators. Rather, it would be based on direct action by the members themselves, who would then get the community they deserve, and as such it would be far less prone to corruption.

Who knows, if I stick around with HoN once it comes out (curse my short attention span), I might actually start this for the heck of it, just to see if a purely community-driven initiative can make a difference. It would be an interesting test to put some of my principles to. ;)

  1. Although of course, if the Mentor or the maintainers of the blacklist are trusted, many will not even need to look at the evidence.