Anyone interested in the long-running debate over how to balance online privacy with anonymity and free speech, whether Section 230's broad immunity for Internet intermediaries should be revised, and whether we need new privacy legislation must read the important and enthralling NYT Magazine piece "The Trolls Among Us" by Mattathias Schwartz about the very real problem of Internet "trolls"--a term dating to the 1980s and defined as "someone who intentionally disrupts online communities."
While all trolls "do it for the lulz" ("for kicks" in Web-speak), they range from the merely puckish to the truly "malwebolent." For some, trolling is essentially senseless web harassment or "violence" (e.g., griefers), while for others it is intended to make a narrow point or even to serve a broader movement. These purposeful trolls might be thought of as the Yippies of the Internet, whose generally harmless anti-war counter-cultural antics in the late 1960s were the subject of the star-crossed Vice President Spiro T. Agnew's witticism:
And if the hippies and the yippies and the disrupters of the systems that Washington and Lincoln as presidents brought forth in this country will shut up and work within our free system of government, I will lower my voice.
But the more extreme of these "disrupters of systems" might also be compared to the plainly terroristic Weathermen or even the more familiar Al-Qaeda. While Schwartz himself does not explicitly draw such comparisons, the scenario he paints of human cruelty is truly nightmarish: After reading his article before heading to bed last night, I myself had Kafka-esque dreams about complete strangers invading my own privacy for no intelligible reason. So I can certainly appreciate how terrifying Schwartz's story will be to many readers, especially those less familiar with the Internet or simply less comfortable with the increasing readiness of so many younger Internet users to broadcast their lives online.
But Schwartz leaves unanswered two important questions. The first he does not ask: Just how widespread is trolling? However real and tragic for its victims, without some sense of the scale of the problem, it is difficult to answer the second question Schwartz raises but, wisely, does not presume to answer: What should be done about it? The policy implications of Schwartz's article might be summed up as follows: Do we need new laws, or should we focus on some combination of enforcing existing laws, user education and technological solutions? While Schwartz focuses on trolling, the same questions can be asked about other forms of malwebolence--best exemplified by the high-profile online defamation case against Autoadmit.com, which demonstrates the effectiveness of existing legal tools to deal with such problems.
Schwartz begins by noting that:
Many trolling practices ... violate existing laws against harassment and threats. The difficulty is tracking down the perpetrators. In order to prosecute, investigators must subpoena sites and Internet service providers to learn the original author's IP address, and from there, his legal identity. Local police departments generally don't have the means to follow this digital trail, and federal investigators have their hands full with spam, terrorism, fraud and child pornography.
He then asks, quite fairly, what the consequences of more aggressive enforcement might be:
But even if we had the resources to aggressively prosecute trolls, would we want to? Are we ready for an Internet where law enforcement keeps watch over every vituperative blog and backbiting comments section, ready to spring at the first hint of violence? Probably not. All vigorous debates shade into trolling at the perimeter; it is next to impossible to excise the trolling without snuffing out the debate.
Certainly, proposals to ban online anonymity would seriously threaten legitimate anonymous speech, as my TLF colleagues Ryan Radia and Adam Thierer have pointed out. Schwartz is probably correct that part of the answer to the problem of trolling and other serious malwebolences lies in equipping law enforcement at all levels with, and training them to use, the basic tools already available to "pierce the veil" of online anonymity and prosecute truly bad actors under existing laws. But Schwartz is also right to highlight the danger of relying on government to enforce even existing laws, and to take on responsibility for monitoring online activity.
But like most commentators, Schwartz seems to assume that the enforcement of existing laws is solely the province of the "law enforcement" community (police, prosecutors and government investigators). To be sure, there are a variety of state and federal laws criminalizing certain acts of "malwebolence." But those who find themselves victimized online generally have recourse to bring a lawsuit on their own (a "private right of action") under well-established causes of action in tort law--a crucial part of the "free system of government" lauded by Agnew.
Specifically, such a plaintiff may bring a defamation claim ("libel" if written, "slander" if oral) or one of the four categories of privacy claims that have emerged since 1890, defined by the magisterial Second Restatement of Torts as follows:
(a) unreasonable intrusion upon the seclusion of another;
(b) appropriation of the other's name or likeness;
(c) unreasonable publicity given to the other's private life; or
(d) publicity that unreasonably places the other in a false light before the public.
If the defendant is known, pursuing such claims is commonplace. The obstacle facing plaintiffs who do not know the legal identity of those who may have defamed them or intruded upon their privacy is the same one facing law enforcement: to "subpoena sites and Internet service providers [and other intermediaries] to learn the original author's IP address, and from there, his legal identity." Such "third party subpoenas" are a vital part of the solution to the problem of malwebolence: By enabling lawsuits under established causes of action against even anonymous defendants, they provide a real remedy to true victims. The use of such subpoenas does not require finding new appropriations for "law enforcement," new privacy laws or re-thinking Section 230's grant of broad immunity to online intermediaries--a policy prescription that has gathered momentum in recent years.
For example, Daniel Solove has argued in his book The Future of Reputation that Section 230 should be re-interpreted:
to grant immunity only before the operator of a website is alerted that something posted there by another violates somebody's privacy or defames her. If the operator of a website becomes aware of the problematic material on the site, yet doesn't remove it, then the operator could be liable.
Frank Pasquale has argued that we ought to require Internet search engines to provide a "right of reply"--allowing someone to post a "reply" that would appear on a search engine next to content concerning them that they consider inaccurate or defamatory (essentially the "fairness doctrine" applied online). Others (one example) have argued for replacing Section 230 with something akin to the notice-and-takedown regime of copyright, so that publishers' immunity would be contingent on compliance with takedown notices. But Mark Lemley, an Internet law guru who is representing the plaintiffs in the Autoadmit case, has argued that Section 230 should instead be "rationalized" along with other Internet safe harbors under a unified safe harbor drawn from current trademark law: "innocent infringers" would have immunity and would not be required to take down allegedly defamatory content, but plaintiffs could get courts to issue injunctions requiring intermediaries to take down content. What unites advocates of all these proposals is that, like Schwartz, they downplay or ignore the effectiveness of existing tort remedies and third-party subpoenas.
Indeed, if the public is aware of third party subpoenas at all, it is probably only because of their use by copyright-holders in attempting to identify those caught using peer-to-peer software to share copyrighted materials. Whatever one's opinions of copyright law and the recording industry's enforcement strategy, it is safe to say that the overall impression created by such lawsuits against users has been less than favorable. Regardless, these lawsuits have established an effective legal process for identifying anonymous defendants. While we can expect that this process--and the safeguards that accompany it--will continue to evolve, it is critical to appreciate the basics of how the third party subpoena process works if one is to evaluate the policy arguments raised by articles like Schwartz's.
The infamous Autoadmit.com case provides a clear illustration of how this process works and of the evolving safeguards for anonymous speech. As one report summarizes the case--and its most recent development:
"Women named Jill and Hillary should be raped."
Those are the words of "AK-47" -- a poster to the college-admissions web forum AutoAdmit.com. AK-47 was one of a handful of students heaping misogynist scorn on women attending the nation's top law schools in 2007, in posts so vile they spurred a national debate on the limits of online anonymity, and an unprecedented federal lawsuit aimed at unmasking and punishing the posters.
Now lawyers for two female Yale Law School students have ascertained AK-47's real identity, along with the identities of other AutoAdmit posters, who all now face the likely publication of their names in court records -- potentially marking a death sentence for the comment trolls' budding legal careers even before the case has gone to trial.
The plaintiff law students in this case originally sued Autoadmit.com and its operator in a Connecticut Federal District Court, but eventually removed them as defendants in recognition of the fact that Section 230 immunizes them from liability. But Section 230 did not stop them from suing those who had defamed them anonymously on Autoadmit.com. And third party subpoenas have since made it possible for the plaintiffs to uncover the identity of most of those defendants.
The Process. The procedure, made possible by Federal Rule of Civil Procedure 45, is relatively straightforward: A plaintiff brings a lawsuit against a John or Jane Doe, a pseudonymous defendant whose identity is as yet unknown. The lawsuit must clearly state the facts, cause(s) of action and remedy sought--just as with any lawsuit (see the Autoadmit complaint, for example).
Having filed such a lawsuit, the plaintiffs may then have a court issue subpoenas (subject to certain limitations) under FRCP 45 to parties who may have identifying information about the defendants. For example, if the plaintiff has the IP address associated with a defamatory blog comment, the plaintiff can subpoena the ISP for further identifying information about that user. There may be several steps to the process: for example, Autoadmit might disclose under subpoena an email address, leading to a subpoena to a webmail provider and ultimately a subpoena to an ISP. Once the John/Jane Doe has been identified, the lawsuit can proceed.
The Safeguards. In the Autoadmit case, one of the John Does did indeed file under FRCP 45 a "motion to quash" a subpoena to AT&T by which the plaintiffs sought the disclosure of identifying information about the John Doe. Plaintiffs, of course, opposed the motion, and the Court ultimately denied the motion. The Court's discussion (pp 6-13) is instructive for those wondering just how the First Amendment would protect anonymity when a plaintiff seeks to force an Internet intermediary to disclose identifying information about an anonymous speaker.
At least since the Supreme Court's 1958 decision in NAACP v. Alabama ex rel. Patterson, the First Amendment has limited the ability of courts to order the disclosure of identifying information (in that case, the NAACP's membership list). Since then, U.S. courts have developed a balancing test that ensures that:
the First Amendment rights of anonymous Internet speakers are not lost unnecessarily, and that plaintiffs do not use discovery to "harass, intimidate or silence critics in the public forum opportunities presented by the Internet."
Understanding the way in which the Autoadmit.com court applied that test is critical to understanding how courts might balance privacy with free speech in the future:
First, the Court should consider whether the plaintiff has undertaken efforts to notify the anonymous posters that they are the subject of a subpoena and withheld action to afford the fictitiously named defendants a reasonable opportunity to file and serve opposition to the [subpoena]. In this case, the plaintiffs have satisfied this factor by posting notice regarding the subpoenas on AutoAdmit ... which allowed the posters ample time to respond, as evidenced by Doe 21's [motion to quash].
Second, the Court should consider whether the plaintiff has identified and set forth the exact statements purportedly made by each anonymous poster that the plaintiff alleges constitutes actionable speech. Doe II has identified the allegedly actionable statements by AK47/Doe 21: the first such statement is "Alex Atkind, Stephen Reynolds, [Doe II], and me: GAY LOVERS;" and the second such statement is "Women named Jill and Doe II should be raped...."
The Court should also consider the specificity of the discovery request and whether there is an alternative means of obtaining the information called for in the subpoena. Here, the subpoena sought, and AT&T provided, only the name, address, telephone number, and email address of the person believed to have posted defamatory or otherwise tortious content about Doe II on AutoAdmit, and is thus sufficiently specific. Furthermore, there are no other adequate means of obtaining the information because AT&T's subscriber data is the plaintiffs' only source regarding the identity of AK47.
Similarly, the Court should consider whether there is a central need for the subpoenaed information to advance the plaintiffs' claims. Here, clearly the defendant's identity is central to Doe II's pursuit of her claims against him.
Finally, and most importantly, the Court must consider whether the plaintiffs have made an adequate showing as to their claims against the anonymous defendant.
The court noted that there is a range of competing standards for this last prong, but dismissed those standards most deferential to the plaintiff--requiring only that the plaintiff show a "good faith basis" to contend it may have an actionable cause or that there is "probable cause" for a claim--as "set[ting] the threshold for disclosure too low to adequately protect the First Amendment rights of anonymous defendants." The court also dismissed other standards very favorable to the defendant, such as requiring plaintiffs to show their claims could withstand a motion for summary judgment, noting the obvious point that "it would be impossible to meet this standard for any cause of action which required evidence within the control of the defendant." Ultimately, the court settled on the standard requiring the plaintiffs to "make a concrete showing as to each element of a prima facie case against the defendant" as striking "the most appropriate balance between the First Amendment rights of the defendant and the interest in the plaintiffs of pursuing their claims, ensuring that the plaintiff is not merely seeking to harass or embarrass the speaker or stifle legitimate criticism."
While Solove, Pasquale and others would make it far easier for a victim to require an online intermediary to take down content that truly defames them or invades their privacy--or to rein in a troll posting such content--relying on existing tort law of course requires that a victim actually file a lawsuit and pursue third-party subpoenas. Those who demand changes to Section 230 will likely argue that this is too burdensome and costly to be an effective remedy for a widespread problem. But, again, one must ask how widespread that problem really is before leaping to conclusions about what kind of remedies are required. As UCLA law professor and Internet law guru Eugene Volokh noted in the Yale Daily News' coverage of this story, even a small number of lawsuits like Autoadmit "might remind some potential would-be defamers that their anonymity may not be secure." One wonders whether the trolls described by Schwartz would really be so brazen if more of their coven were unmasked and sued.
One obvious advantage of relying on the combination of tort law and third party subpoenas is that requiring the actual filing of a lawsuit minimizes the problem of Internet users attempting to squelch legitimate speech--for example, by sending frivolous take-down notices to intermediaries, a serious problem in the copyright context. Those truly concerned with protecting anonymous speech should take a far greater interest in the balancing test chosen by courts following in Autoadmit's footsteps. Marc Randazza, former counsel for Autoadmit administrator Anthony Ciolli, summarized the balance struck by the court as follows: "If you're doing right, the First Amendment will protect you," Randazza said. "If you're doing wrong, it won't."
Much more could be said about third-party subpoenas, but it cannot be said that the law does not already provide every American with a remedy against the trolls identified by Schwartz, the villains of the Autoadmit case or other "disrupters of the systems." Any inquiry into whether we need new laws or regulations should begin by looking at the processes described above.