David Margolick has penned a lengthy piece for Portfolio.com about the AutoAdmit case, which has important ramifications for the future of Section 230 and online speech in general. Very brief background: AutoAdmit is a discussion board for students looking to enter, or just discuss, law schools. Some threads on the site have included ugly -- insanely ugly -- insults about some women. A couple of those women sued to reveal the identities of their attackers and hold them liable for supposedly wronging them. The case has been slowly moving through the courts ever since. Again, read Margolick's article for all the details. The important point here is that the women could not sue AutoAdmit directly for defamation or harassment because Section 230 of the Communications Decency Act of 1996 immunizes websites from liability for the actions of their users. Consequently, those looking to sue must go after the actual individuals behind the comments which (supposedly) caused the harm in question.
I am a big defender of Section 230 and have argued that it has been the cornerstone of Internet freedom. Keeping online intermediaries free from burdensome policing requirements and liability threats has created the vibrant marketplace of expression and commerce that we enjoy today. If not for Sec. 230, we would likely live in a very different world.
Sec. 230 has come under attack, however, from those who believe online intermediaries should "do more" to address various concerns, including cyber-bullying, defamation, and other problems. For those of us who believe passionately in the importance of Sec. 230, the better approach is to preserve immunity for intermediaries while encouraging more voluntary policing and self-regulation by intermediaries, increased public pressure on those sites that turn a blind eye to such behavior, more efforts to establish "community policing" by users so that they can report or counter abusive language, and so on.
Of course, those efforts will never be foolproof, and a handful of bad apples will still be able to cause a lot of grief for some users on certain discussion boards, blogs, and so on. In those extreme cases where legal action is necessary, it would be optimal if every effort were exhausted to go after the actual end user who is causing the problem before tossing Sec. 230 and current online immunity norms to the wind in an effort to force the intermediaries to police speech. After all, how do the intermediaries know what is defamatory? Why should they be forced to sit in judgment of such things? If, under threat of lawsuit, they are petitioned by countless users to remove content or comments that those individuals find objectionable, the result will be a massive chilling effect on online free speech, since those intermediaries would likely play it safe most of the time and just take everything down.
Which brings us back to the danger of a 230 backlash following the AutoAdmit case. As Margolick notes of the case:
By any standard, the plaintiffs' catch has been meager. Even with one of the country's top intellectual-property lawyers, backed by a super-elite law firm, going after them, most of the worst offenders got off scot-free. The fact that so few prey were netted could prompt calls to modify Section 230(c), if only to give victims of internet abuse more of a chance. Brian Leiter, the professor and vocal critic of AutoAdmit, sees it coming. He calls the free pass enjoyed by Google and other carriers "a disaster" and says change is inevitable. "The point at which some senator's daughter becomes the target of this kind of campaign of online vilification and harassment on the next iteration of AutoAdmit -- something's going to happen," predicted Leiter, who now teaches at the University of Chicago Law School.
If that day comes, the danger is that the response will be a dramatic evisceration or even elimination of Section 230 immunity. We might end up with some kind of notice-and-takedown regime that could be abused just as it is in the DMCA setting, allowing anyone to effectively force the elimination of web content they dislike with the mere untested allegation that it was tortious. Worse, we might see an effort to repeal Section 230 altogether, making it impossible to run an open online forum for user-generated content without risking significant liability.
Importantly, however, the "[AutoAdmit] case has already made a difference," Margolick notes:
Things have calmed down on AutoAdmit, where, Cohen says, he's driven away the worst actors and enlisted volunteer moderators. Some posters, moreover, have announced their "retirement"; any further self-expression, they've concluded, is clearly not worth the risk. Thanks to the case, casual defamers -- those who take potshots for sport -- may now refrain out of empathy for the plaintiffs, while the more malicious may have been intimidated into silence. The case may also have helped Heller and Iravani [the plaintiffs in the case] cleanse their Google pages, as the old slurs have fallen farther down the screen. And last spring, Cohen quietly removed the offending threads. He'd have done so sooner, he says, had he been asked more nicely.
Some additional reading on the case: