Short but very important essay here from Santa Clara University Law School Prof. Eric Goldman about calls to alter Sec. 230 of the Communications Decency Act (CDA) to address concerns about online harassment. Generally speaking, Sec. 230 immunizes online intermediaries from punishing liability for the content that travels over their networks / services. Specifically, Sec. 230 stipulates that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." In other words: Don't shoot the messenger!
As we've noted here before, it is probably not an overstatement to think of Sec. 230 as the very cornerstone of Internet Freedom, since it makes possible an online "utopia for utopias," to borrow a phrase from our favorite modern political philosopher, the late Robert Nozick. Without Sec. 230, intermediaries would likely be forced to shut down many avenues of communication and would have to become deputized conduct and morality police for every cyber-street corner.
Goldman, America's leading expert on Sec. 230-related jurisprudence, correctly notes that, "Frequently, § 230's critics do not attack the immunization generally, but instead advocate a new limited exception for their pet concern." He's got that right. Indeed, we are increasingly hearing calls from numerous quarters these days to "tweak 230" for one pet concern after another. We've illustrated some of those concerns in this exhibit.
Regulatory advocates can be found for each of these issues who would like to see the protections afforded by Sec. 230 scaled back by Congress or the courts. But Goldman rightly warns:
As tempting as minor tweaks to § 230 may sound, however, we should be reluctant to entertain these proposals. Any new exceptions to § 230, even if relatively narrow, would undercut these benefits for several reasons. First, new exceptions would reduce the clarity of § 230's rule to judges. Second, service providers will be less confident in their immunity, leading them to remove content more frequently and to experiment with alternative techniques less. Third, plaintiffs' lawyers will try to exploit any new exception and push it beyond its intent.
Again, exactly right. The idea of a "simple tweaking" of Sec. 230 is dangerous. If Congress were to reopen the law, it's more likely that lawmakers would instead invert its purpose and seek to appease every concern under the sun. Consider how this might play out on the child safety front. In their otherwise excellent book, Born Digital: Understanding the First Generation of Digital Natives, Harvard Berkman Center professors John Palfrey and Urs Gasser argue that: "The scope of the immunity the CDA provides for online service providers is too broad" and that the law "should not preclude parents from bringing a claim of negligence against [a social networking site] for failing to protect the safety of its users." They also suggest that "There is no reason why a social network should be protected from liability related to the safety of young people simply because its business operates online." Specifically, they call for "strengthening private causes of action by clarifying that tort claims may be brought against online service providers when safety is at stake," although they do not define those instances.
More recently, Prof. Palfrey elaborated on these proposals in testimony before a House hearing in September 2009 about cyberbullying legislation. He joined us (see our written testimony) in rejecting sweeping criminal penalties for cyberbullying, but called for an "affirmative obligation [on social networking operators to act] in cases where harm to minors is clear," arguing that, "In the context of online safety, the law needs to provide an incentive for technology companies to do the right thing." Specifically, Palfrey suggested:
at least three ways to amend the safe harbor. The light-touch approach would be to require intermediaries to retain log files for a certain period of time and to participate in law enforcement efforts to bring those who defame others to justice. Alternately, one could require online intermediaries to respond to notice from those who have been defamed by taking down the defamatory content if the intermediary wishes to be protected by the safe harbor (which is what we do in the context of copyright, through Section 512 of the Digital Millennium Copyright Act). A third approach could be to exempt intermediaries from the safe harbor of CDA 230 altogether in cases where there has been harm to young people as a result of harmful speech, a carve-out that parallels the carve-out in CDA 230 for copyright complaints.
These are essentially the same three proposals Palfrey suggested at the end of a friendly debate he had with Adam on these issues at about this time last year. In that debate, Adam argued that:
Section 230 was the legal cornerstone that gave rise to many of the online freedoms we enjoy today. I fear that the proposal you have set forth could reverse that. It could lead to crushing liability for many online operators--and not just giants like MySpace or Facebook--that might not be able to absorb the litigation costs.
Adam also worried that, because "not all social networking sites are alike or serve the same interests... a new liability standard might not leave sufficient room for flexibility or experimentation." The liability threat could absolutely crush new online innovation and speech as a result.
But perhaps the biggest problem raised by Palfrey's proposals is that he can't seem to make up his mind as to what exactly he means by child safety and "harm." His most sympathetic hypothetical involves the "case of a young person who is physically harmed after meeting someone in an online environment," but his first two legislative proposals focus on defamation. This certainly doesn't reflect intellectual sloppiness on Palfrey's part--he's one of the brightest minds in the field! But it does reflect just how difficult it is to write a "narrow" carve-out to Sec. 230 focused on "child safety" (see our attempt to map this out above!) and Goldman's core point about undermining the critical legal clarity currently offered by Sec. 230.
And the online child safety issue is really just the beginning. When you look at those other concerns raised above--especially those related to privacy and defamation, which we think are particularly problematic for 230--you quickly begin to realize that as soon as the law is teed up again in Congress, everybody will come out swinging against Sec. 230. The "tweaking" proposed by well-meaning folks like Palfrey will quickly become a legislative gutting. Indeed, we wouldn't be surprised to see lawmakers invert the purpose of this uniquely deregulatory statute and impose an affirmative obligation on intermediaries to police their networks.
Regardless, when it comes to many of these concerns, especially online child safety and cyberbullying, Goldman rightly suggests that the better approach is more education and "netiquette." "While the debate about regulating intermediaries' role in online harassment continues," he argues, "education may provide a complementary--or possibly substitutive--method of curbing online harassment." That's very much consistent with the approach we've taken in our work. We've focused on education, empowerment, and self-regulation as the foundations of a constructive plan for dealing with concerns about objectionable online content of any sort, so long as it is not illegal content or conduct.