Join Us for pii2010 "Privacy Identity Innovation 2010" Conference in Seattle 8/17-19!
If you're as fascinated as I am by the interplay of privacy, identity and innovation, I hope to see you at the pii2010 conference in Seattle, August 17-19! Organized by the folks who've put on the top-notch Tech Policy Summit since 2003, and co-sponsored by The Progress & Freedom Foundation (among others), this event offers a unique perspective on privacy: not just another policy food fight, but a genuine roll-up-our-sleeves, in-depth seminar on what to do about privacy, especially through technological innovation.
I'll be on the "pii & Digital Advertising: Navigating the Regulatory Landscape" panel on the 18th at 10am, giving my usual talk about the need to be careful about the trade-offs inherent in privacy regulation. Check out the detailed agenda here.
TLFers Larry Downes and Carl Gipson will also be attending, so we're planning a long-overdue "Alcohol Liberation Front" happy hour after the conference on August 18--details to be announced soon.
Check out the discussion around the #pii2010 hashtag on Twitter. And register today! Mid-August is supposed to be paradise in Seattle, and the week of the conference also happens to be Seattle GeekWeek, so there are a bunch of other events worth checking out in town before and after the pii2010 conference.
I summed up most of my thoughts on the online privacy issue in my written testimony to the FTC's privacy roundtable last fall. Also check out my paper Privacy Polls v. Real-World Trade-Offs, which explains why Prof. Turow's polls can't really show us what choices consumers would make if actually presented with the trade-off between locking down on the use of their data and the content and services supported by advertising that relies on that data for its value.
After all that complaining (and bashing their Socialist Realist-style statue, "Man Controlling Trade"), you might think we had it in for the agency. But as I've said repeatedly, we're actually big fans of the FTC's core consumer protection mission: holding companies to their promises. (Indeed, we want to make sure they stay focused on that mission, and have the staff, resources and technological tools to pursue it effectively--which might mean, as I've pointed out, increased funding rather than increased powers.) We've also repeatedly praised the FTC's efforts to educate kids, parents, and Internet users in general about things like online privacy, advertising, spyware, user empowerment tools, online scams, etc.
But I don't want to be accused of being only a fair-weather friend of the agency. So I wanted to point out a particularly good concrete example of the FTC doing what we talk about in the abstract: holding companies to their promises. Grant Gross notes that the FTC sent a stern letter earlier this month to the company seeking to buy the subscriber information, photos, and other assets of the now-defunct XY Magazine, which served primarily gay U.S. teens, warning that the FTC would hold the buyer to the terms of the privacy policy under which XY collected information from its subscribers.
This is a great example of how the FTC can effectively use its existing authority to protect consumers against clear harms from the disclosure of truly sensitive data (here, the potential outing of around 100,000 gay youths and young adults) when companies have made unambiguous promises to protect users' data, sometimes even acting prophylactically. This incident also illustrates how privacy law can evolve organically from a growing body of such well-justified preemptive warnings, enforcement actions against truly bad actors, and ultimately court decisions on whether the FTC has properly weighed the interests at stake. In other words, just because we don't have a privacy code enforced by a Data Protection Authority, as in Europe, doesn't mean our legal system doesn't protect privacy!
Google Street View/Wi-Fi Privacy Technopanic Continues but Real Cybersecurity Begins at Home
Congressmen working on national intelligence and homeland security either don't know how to secure their own home Wi-Fi networks (it's easy!) or don't understand why they should bother. If you live outside the Beltway, you might think the response to this problem would be to redouble efforts to educate everyone about the importance of personal responsibility for data security, starting with Congressmen and their staffs. But of course those who live inside the Beltway know that the solution isn't education or self-help but... you guessed it... to excoriate Google for spying on members of Congress (and, of course, to demand bigger government)!
Consumer Watchdog (which doesn't actually claim any consumers as members) held a press conference this morning about their latest anti-Google stunt, announced last night on their "Inside Google" blog: CWD drove by five Congressmen's houses in the DC area last week looking for unencrypted Wi-Fi networks. At Jane Harman's (D-CA) home, they found two unencrypted networks named "Harmanmbr" and "harmantheater" that suggest the networks are Harman's. So they sent Harman a letter demanding that she hold hearings on Google's collection of Wi-Fi data, charging Google with "WiSpying." This is a classic technopanic and the most craven, cynical kind of tech politics--dressed in the "consumer" mantle.
The Wi-Fi/Street View Controversy
Rewind to mid-May, when Google voluntarily disclosed that the cars it used to build a photographic library of what's visible from public streets for Google Maps Street View had been unintentionally collecting small amounts of information from unencrypted Wi-Fi hotspots like Harman's. These hotspots can be accessed by anyone who might drive or walk by with a Wi-Fi device--thus potentially exposing data sent over those networks between, say, a laptop in the kitchen, and the wireless router plugged into the cable modem.
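To see why an unencrypted hotspot exposes data to any passerby while an encrypted one does not, here is a toy sketch in Python. This is deliberately simplified: the keystream construction below is invented for illustration and is not how WPA2 actually encrypts Wi-Fi frames.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from key + nonce.
    Toy construction for illustration only -- NOT real WPA2/CCMP."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, payload: bytes) -> tuple[bytes, bytes]:
    """XOR the payload with a fresh keystream; return (nonce, ciphertext)."""
    nonce = secrets.token_bytes(12)
    ks = keystream(key, nonce, len(payload))
    return nonce, bytes(a ^ b for a, b in zip(payload, ks))

# On an "open" (unencrypted) network, the payload rides in the clear:
open_frame = b"GET /inbox HTTP/1.1\r\nCookie: session=abc123\r\n"

# On an encrypted network, the same payload is opaque without the key:
key = secrets.token_bytes(32)
nonce, protected = encrypt(key, open_frame)

print(b"session=abc123" in open_frame)   # a passerby can read the cookie
print(b"session=abc123" in protected)    # a passerby sees only noise
```

Only someone holding the network key can reverse the XOR and recover the plaintext, which is why enabling WPA encryption on a home router closes the door that Street View cars (and anyone else driving by) could otherwise peek through.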
Google's Street View allows you to virtually walk down any public street and check out the neighborhood
Privacy MythBusters: No, Facebook Doesn't Give Advertisers Your Data!
Working in any field of public policy is a bit like living in a haunted house: You spend most of your day dodging bogeymen, ghosts, phantasms, phantoms and specters of imagined harms, frauds, invasions and various conspiracies supposedly perpetrated by evil companies against helpless consumers, justice, God, Gaia, small woodland creatures and every sort of underserved, disadvantaged and/or underprivileged group of man, animal, vegetable and mineral imaginable.
But Internet policy--particularly online privacy--tends to be haunted by such groundless imaginings far more than most other areas of policy, largely because it manifests itself in ways that are far more real and immediate to ordinary users. For example, as outraged as any of us might feel about the Gulf oil spill, how many of us have the slightest clue what's really involved (beyond what we've learned watching TV anchors stumble through a vocabulary they don't understand)?
By contrast, huge numbers of Americans have daily interaction with web services like those provided by Google, Microsoft, Yahoo, Twitter and Facebook. That doesn't mean we necessarily understand how these technologies work. Indeed, quite the contrary! As Arthur C. Clarke said, "Any sufficiently advanced technology is indistinguishable from magic." But we often think we know how these technological marvels work, and certainly sound much more informed when we spout off (pun intended) about these things than, say, about "top kills" on the bottom of the ocean floor. In short, we know just enough about web services to be dangerous when we ground strong policy positions in our unsophisticated understanding of how things really work online.
There are few better examples of this than the constantly repeated bugaboo that "Facebook sells your data to advertisers!" Or "Facebook only wants you to share more information with more people for advertising purposes!" These myths bear no relation to how advertising on social networking sites actually works, as Facebook COO Sheryl Sandberg explains beautifully in a short tutorial video. Here's the key portion:
Common Sense Media (CSM) is a media "watchdog" group that provides a terrifically useful service to the public through independent reviews of popular media content (movies, music, TV, games, and more). As a parent, I find their service indispensable and, as a policy analyst, I have praised their rating system and their media literacy / digital citizenship programs again and again, including numerous endorsements in my special report on Parental Controls & Online Child Protection and other testimony and filings before Congress and federal regulatory agencies.
Thus, being such a big fan of CSM, I was quite dismayed to see the comments they just submitted to the Federal Trade Commission (FTC) as part of the agency's review of the Children's Online Privacy Protection Act (COPPA). They advocate not just expanded educational efforts, which are great, but also expanding COPPA's age scope to cover all kids under 18 and imposing opt-in mandates for the collection and use of any "personal information" or "behavioral marketing." For all the background on the law and the FTC's resulting COPPA rule, see this beefy paper Berin Szoka and I authored last year and this testimony and follow-up submission Berin did for the Senate Commerce Committee. And then read the joint submission made by PFF, CDT, and EFF in the same FTC proceeding that CSM just filed in.
Sadly, it's clear to me that Common Sense Media didn't take anything we warned about in those papers or filings seriously--or perhaps that they just didn't bother to read them very carefully, if at all. Their filing is a classic example of good intentions gone wrong. I understand that they want to take additional steps to protect children online, but they completely ignore the practical realities of COPPA expansion and its associated trade-offs:
TechCast #6: OSTWG Report "Youth Safety on a Living Internet"
In PFF TechCast #6, Adam Thierer provides an excellent overview of an important new report from NTIA's Online Safety & Technology Working Group, entitled "Youth Safety on a Living Internet."
What We Didn't Hear at Yesterday's FTC COPPA Workshop
Yesterday, the Federal Trade Commission (FTC) hosted an all-day workshop on "Protecting Kids' Privacy Online," which looked into the Children's Online Privacy Protection Act of 1998 (COPPA) and the challenges posed to its enforcement by new technological developments. The FTC staff did a nice job bringing together and moderating five panels' worth of participants, all of whom had plenty of interesting things to say about the future of COPPA. But I was more struck by what was not said yesterday. Namely, there was:
ZERO explanation of the supposed harms of advertising, marketing, and data collection. Advertising-bashing is an old sport here in Washington, so I guess I should not have been surprised to hear several panelists yesterday engaging in teeth-gnashing and hand-wringing about advertising, marketing, and the data collection methods that make it possible. But this grousing just went on and on without any explanation by the critics of the supposed harms that would result from these practices.
ZERO appreciation of the benefits of advertising, marketing, and data collection. Not once yesterday -- NOT ONCE -- did anyone pause to ask what it is that makes all these wonderful online sites, services and content free (or dirt cheap) to consumers. Everyone at this show was guilty of the "manna fallacy" (that all this stuff just falls magically to Earth from the Net Gods above). Well, back here in the real world, something has to pay for all those goodies, and that something is advertising and marketing, which are facilitated by data collection! Or would you like to pay $19.95 a month for each of those currently free sites and services? Yeah, I didn't think so.
At the April hearing, Senators asked whether COPPA could be improved. Today, as in my April oral and written testimony, I again urged lawmakers to "tread carefully" because COPPA, as implemented, basically works. I explained why COPPA's technological neutrality and flexibility should allow the FTC to keep pace with technological convergence and change without the need for legislative changes. But expanding the statute beyond its limited purposes, especially to cover adolescents under 18, could raise serious constitutional questions about the First Amendment rights of adults, older teens, and site and service operators. It could also have unintended consequences for the health of online content and services without necessarily making children significantly safer or more private online.
The Committee's follow-up questions also inquired about COPPA's implementation, the subject of today's FTC Roundtable. I noted that COPPA implementation has gone reasonably well, meeting its primary goal of enhancing parental involvement in children's online activities, but that it has come at a price: the costs of obtaining verifiable parental consent and otherwise complying with COPPA have discouraged some site and service operators from allowing children on their sites or offering child-oriented content at all, and have raised costs for the child-oriented sites that remain. The FTC could do more to lower compliance costs for website operators, thus achieving COPPA's goals at a lower cost to parents and kids in foregone content and services.
Finally, I raised concerns about the FTC's seeming invitation for changes to the COPPA statute itself. As a general matter, regulatory agencies should not be in the business of re-assessing the adequacy of their own powers, since the natural impulse of all bureaucracy is to grow. Though the agency has done a yeoman's job of implementing COPPA, ultimately it is the responsibility of Congress, not the FTC, to make decisions about modifying the statute.
Three Cheers for Facebook's Privacy Management Upgrade
Last week, Facebook announced significant improvements to its privacy management tools. As explained in the new Privacy Guide, this upgrade allows users to exercise greater and easier choice over sharing of their information on the site and through the site to third party applications and external websites.
By giving users powerful new tools to further protect their privacy, Facebook has employed a potent weapon to deal with marketplace apprehensions: self-regulation. Government intervention stands little chance of acting as swiftly or as effectively to tackle such matters. Rather than short-circuiting the self-regulatory process, we should trust that users are capable of choosing for themselves if given the right tools, and that companies like Facebook will respond to reputational pressure to develop, and constantly improve, those tools. That approach is far more likely to move us toward the ideal of user empowerment than is heavy-handed government regulation, which would override marketplace experimentation and have many unintended consequences for free online sites and services like Facebook.
This announcement represents a major leap forward for privacy controls, but of course the company will have to keep innovating in this area as it does in others. In particular, I hope Facebook and other social networking services like MySpace, Buzz, LinkedIn and Flickr will all work on the next logical step forward: building Application Programming Interfaces (APIs) that will allow third party tools to tap into each site's unique privacy settings so that users can have a single "dashboard" for controlling how they share data across platforms. Such a "Privacy API" would take one step further what Facebook has now started: the challenging problem of giving users both granularity/complexity and ease/simplicity, depending on what they want in any particular context. Ideally, such tools would also allow users to harmonize their lists of friends across multiple platforms so they can manage their sharing more easily. For example, Facebook offers powerful privacy functionality by letting users restrict access to particular information or shared items to, say, their family, or specific groups of friends configured by the user. Portability of those lists would make privacy empowerment far easier for users.
But Rome wasn't built in a day, and it's important to remember that opening up this kind of access comes with its own risks. Again, innovation is an iterative process and, as such, takes time. This announcement should instill great confidence that there is strong reputational pressure on companies like Facebook to meet this challenge, and to vie with each other for leadership in privacy empowerment.
One thing Facebook CEO Mark Zuckerberg mentioned in the press conference bears special emphasis: It's a myth that Facebook is hell-bent on getting users to share more information more widely for the sake of advertisers. In fact, advertising on Facebook doesn't involve sharing information about users with advertisers. Instead, advertisers buy ads that Facebook shows to users whom Facebook (or rather, its algorithms) thinks might be interested. If anything, sharing more information can actually help Facebook's competitors if users take advantage of Facebook Connect's data portability to port their data over to competing platforms. So the widely perceived conflict of interest between Facebook's economic interests and users' privacy just doesn't exist. The site gains from having more users spend more time on the site, not from tricking users into "giving up their privacy."