In a post earlier this week, I discussed Randy Cohen's "guideline" for anonymous blogging. Specifically, Cohen argued in a recent New York Times piece that, "The effects of anonymous posting have become so baleful that it should be forsworn unless there is a reasonable fear of retribution. By posting openly, we support the conditions in which honest conversation can flourish." While sympathetic to that guideline, I noted that I agreed with it as an ethical principle, not as a legal matter. In other words, what might make sense as a "best practice" for the Internet and its users would not make sense as a regulatory standard. I prefer using social norms and public pressure to drive these standards, not regulation that could have an unintended chilling effect on beneficial forms of anonymous online speech.
Dan Gillmor of the Center for Citizen Media at Harvard's Berkman Center has a new column up at the UK Guardian in which he takes a slightly different cut at a new standard or social norm for dealing with some of the more caustic anonymous speech out there:
One of the norms we'd be wise to establish is this: People who don't stand behind their words deserve, in almost every case, no respect for what they say. In many cases, anonymity is a hiding place that harbours cowardice, not honour. The more we can encourage people to use their real names, the better. But if we try to force this, we'll create more trouble than we fix. But we don't want, in the end, to turn everything over to the lawyers. The rest of us -- the audience, if you will -- need to establish some new norms as well.
When you read or hear an anonymous or pseudonymous attack on someone else, you should not just assume -- barring persuasive evidence of the charge -- that it's false. Assume that the accuser is an outright, contemptible liar.

I am generally sympathetic to Gillmor's principle, but I think he goes a bit overboard in asking us to assume that all anonymous or pseudonymous attacks are false. So, here's a reformulation of it: We should discount, by at least some small measure, anonymous online speech that attacks others in a heated manner and lacks supporting evidence for the assertions made or charges levied. And the more heated or vicious the attack, the more we should discount the veracity of the claims asserted.
Of course, this is simply a guideline for readers, not for speakers or the sites that host online speech. Each speaker will have to decide whether to post anonymously or reveal his or her identity. As I noted in my previous essay, however, I think it makes sense to generally encourage people to reveal their true identities when blogging or commenting. I have always lived by that rule personally when blogging or posting comments on other sites, whether they are blogs, discussion boards, or even shopping sites.
For sites that host speech, things get trickier. Luckily, we have Section 230 of the CDA to protect online operators from onerous forms of liability for the content they host on their sites, although some would like to change that. Also, as I've discussed here before, some critics of online anonymity would like to see "civility checks" or "cooling-off periods" instituted that would prevent instantaneous comments from being posted without some sort of human or automated review of the content. But tweaking Sec. 230 liability norms or requiring "cooling-off periods" for comments could have a profoundly chilling effect on many beneficial forms of online speech. As Gillmor wisely notes in his essay:
anonymity has crucially important value. We need it for whistleblowers, for political dissidents in dictatorships -- for those who have important stories to tell but whose lives or livelihoods would be in jeopardy if their identities were exposed.
People who'd ban anonymity don't seem to realise that it's technically impossible unless we're willing to turn over all of our communications in every venue to a central authority -- a system that would herald the end of liberty. They can't really want such a regime, can they? Meanwhile, even that kind of structure could and would be hacked by motivated types, though with more difficulty.