The classic film Troll 2 prefigured the internet by at least a decade. http://bit.ly/1pV3fVZ
Like many people out there, I get discouraged and outraged by the tenor of commentary around the web: the cesspool of ignorance, homophobia, racism, overreaction, and conspiracy theories that accompanies everything on the internet. It doesn’t matter if you’re a journalist providing a thoughtful analysis of the situation in Gaza, or a proud fur-mommy posting Kitten’s First Meow on YouTube: you WILL be labeled something horrible. (Almost always “fag,” though.)
Unlike some people, though, I have given some thought to what might turn this Hellscape into a digital utopia, and the theory I landed on was stripping the comments section of its anonymity. It makes sense on the surface: people act out when they are certain they can do so from behind a veil; websites are kinda like modern-day newspapers, and newspapers don’t generally elevate the anonymous rantings they receive to publication; and so on and so forth.
Well, it turns out that such disclosure isn’t the silver bullet I might have assumed. Ironically enough, I found an excellent discussion of these matters in the comments section of a political blog (original post). Rather than reframe these views myself, I thought I’d just reproduce part of the discussion as-is here, and let you, my always-respectful commenters, add to it. What would be your strategy to defeat the plague of Internet trolls? Do services such as Twitter and Facebook have a responsibility to police their users? Where’s the line between keeping the peace and stifling expression?
For easy reference, here are two resources cited by one of the commenters below:
And now, the discussion.
Yeah, removing anonymity from commenting won't prevent much. As the comment thread on the Zelda Williams piece proved, there are plenty of people who think anonymity protects them from offline harassment, and there are plenty of people who are fully aware they're assholes, and don't mind being publicly identified as such.
Moderation in combination with account limiting (so it won't be so easy to open another account when one gets banned or shut down) may help, but really, it is always going to be a battle between preserving the freedom to comment without fear and providing the security that allows others to do the same. Really, what most sites need is a clear, concise, and objective statement of what constitutes harassment and abuse, and the wherewithal to stick to it. That is Twitter's problem: it claims to take such matters seriously, but its system is broken and it makes no attempt to enforce even the barest of rules.
[Ed. note: Doesn’t sound too different from librarians’ efforts to craft thoughtful library usage policies that protect patrons and their rights!]
The problem with comments is that you get what you pay for. The answer to trolls is moderators, but good moderation is as much work as writing an article. Good moderation has to tread the fine line between suppressing alternative points of view that lead to lively discussions and expelling the just plain loathsome. Writers may be willing to "pay their dues" to get published and build a career, but there is no career path for moderators, and volunteers are likely to just suppress what they don't agree with.
I kind of like some blogs that work another way, pulling a few of the best comments for display between the end of the article and the start of the rabble; it lets you get the best reactions without wading through the muck.
[Ed. note: I like this guy’s thoughtful understanding of the way volunteer moderation could give way to personal bias. The “featured comment” idea also strikes me as a good stop-gap, but could it be the end-all? Should we be aiming to “reform” trolls, or is that too much social engineering, and impossible besides? What will the end result be if we have a permanent division between “good” commenters and “bad” ones?]
As DB [Daily Banter, the site hosting this discussion] grows they should consider getting a couple volunteer moderators and some concrete commenting guidelines so that their writers can focus their energy on actually writing.
The staff would then simply ensure that the moderators are enforcing the guidelines and not their personal agendas or vendettas.
The chilling effect of insisting on real names stifles political and other controversial discussions, inhibiting people from stating their views on gun laws, feminism, terrorism, abortion, climate change, and so on. When such debates are held face to face, in cafes and over dinner tables, there is little concern that, say, a future employer will learn what you said and decline to hire you (unless you have the misfortune to live in a regime with a Stasi-like network of citizen-spies), but as the internet increasingly becomes the venue of choice for such discussions, any opinion stated under your real name is trivially accessible. For anyone in a vulnerable position – people seeking a job, people whose beliefs are at odds with their neighbors or co-workers – the ability to participate in such discussions depends, effectively, on being able to do so pseudonymously.
YouTube has joined a growing list of social media companies who think that forcing users to use their real names will make comment sections less of a trolling wasteland, but there’s surprisingly good evidence from South Korea that real name policies fail at cleaning up comments. In 2007, South Korea temporarily mandated that all websites with over 100,000 viewers require real names, but scrapped it after it was found to be ineffective at cleaning up abusive and malicious comments (the policy reduced unwanted comments by an estimated 0.09%). We don’t know how this hidden gem of evidence skipped the national debate on real identities, but it’s an important lesson for YouTube, Facebook and Google, who have assumed that fear of judgement will change online behavior for the better.
The Wired post above is the only reason I agree with anonymity. I used to post under my real name, but a quick googling pulled up a large number of comments I have made on this and other sites. They aren't rude or offensive per se, but they do expose a lot more of myself than I'm comfortable revealing to just any random person, one who may or may not have an impact on my life, now or at some time in the future.
[Ed. note: An interesting angle. I had never given much thought to the picture my comments might conjure; I couldn’t imagine an employer sifting through so much data to find and judge me. But Google is all-powerful and can deliver my comments without much hassle. This line of discussion strikes me as overly self-interested, though. My focus here is on raising the level of discourse, and limiting the prevalence of abuse, in the realm where we increasingly spend most of our time. But there are surely practical implications as well, especially for those of us whose comment history is not rude, but rather politically charged.]