IPcentral Weblog
  The DACA Blog

Wednesday, December 17, 2008

Bandwidth, Storewidth, and Net Neutrality

I was happy to see the discussion over The Wall Street Journal's Google/net neutrality story. Always good to see holes poked and the truth set free.

But let's not allow the eruptions, backlashes, recriminations, and "debunkings" -- "This topic has been debunked. End of story. Over. Sit down!" -- to obscure the still-fundamental issues. This is a terrific starting point for the debate, not an end to it.

Content delivery networks (CDNs) and caching have always been a part of my analysis of the net neutrality debate. Here is the testimony that George Gilder and I prepared for a Senate Commerce Committee hearing almost five years ago, in April 2004, where we predicted that a somewhat obscure new MCI "network layers" proposal, as it was then called, would become the next big communications policy issue. (At about the same time, my now-colleague Adam Thierer was also identifying this as an emerging issue/threat.)

Gilder and I tried to make the point that this "layers" -- or network neutrality -- proposal would, even if attractive in theory, be very difficult to define or implement. Networks are a dynamic realm of ever-shifting bottlenecks. Bandwidth, storage, caching, and peering -- in the core, at the edge, and in the access network; in the data center and on end-user devices; from the heavens and under the seas -- constantly require new architectures, upgrades, and investments, which in turn trigger further cascades of hardware, software, and protocol changes elsewhere in this growing global web. It seemed to us at the time that this new policy proposal, ill-defined as it was, was probably a weapon for one group of Internet companies, with one type of business model, to bludgeon another set of Internet companies with a different business model.

We wrote extensively about storage, caching, and content delivery networks in the pages of the Gilder Technology Report, first laying out the big conceptual issues in a 1999 article, "The Antediluvian Paradigm." Gilder coined a word for this nexus of storage and bandwidth: Storewidth. Gilder and I even hosted a conference, also dubbed "Storewidth," dedicated to these storage, memory, and content delivery network technologies. See, for instance, this press release for the 2001 conference, which featured all the big players in the field, including Akamai, EMC, Network Appliance, Mirror Image, and one Eric Schmidt, chief executive officer of . . . Novell. In 2002, Google's Larry Page spoke, as did Jay Adelson, founder of the big data-center peering company Equinix, along with Yahoo! and many of the big network and content companies.

This interplay among bandwidth, storage, and latency -- caching, content, and conduit -- was the very point of the conference. What are the technical and economic trade-offs? Where will the Net be modular, and where will it be integrated? Where will content be stored, and who will pay? In many ways, the conference was ahead of its time. And my humble view is that Schmidt and Page may even have adopted some of the key insights of these conferences and turned them into some of Google's most successful applications and architectures. A talk by Yale computer scientist David Gelernter, I remember, seemed to have a profound impact on the way attendees visualized the coming "cloud" that would enable the death of the desktop. Remember, at the time, Google was still just a search engine company that hosted its then-thousands of servers in the data centers of Equinix and a few other hosting companies. Today, Google, with its global cloud platform and desktop-killing apps, has become the supreme storewidth company.

I offer this background because some of us have been thinking about these topics for a (relatively) long time. When we first began analyzing this new "network layers" and then "network neutrality" policy concept five or more years ago, we did so with these profound architectural questions in mind. The Net, and the bits and applications traversing it, move so fast that we need all these technical solutions -- routing, switching, QoS, CDNs, and the like -- just to make the network work, let alone make it fast and robust.
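The storewidth trade-off behind those solutions can be sketched in a few lines of code. The model below is purely illustrative and not from the post or the conference: a tiny LRU cache stands in for a CDN edge node, and the latency figures are hypothetical round-trip times chosen only to show why storing popular content near users beats hauling every request back to the origin.

```python
# Illustrative "storewidth" model: trading storage at the edge for latency.
# All numbers are hypothetical assumptions, chosen only for illustration.

ORIGIN_LATENCY_MS = 120   # assumed long-haul round trip to the origin server
EDGE_LATENCY_MS = 15      # assumed round trip to a nearby edge cache

class EdgeCache:
    """A minimal LRU cache standing in for a CDN edge node."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}  # dict insertion order tracks recency (Python 3.7+)

    def fetch(self, key):
        """Return (latency_ms, was_hit) for a request for `key`."""
        if key in self.items:
            # Cache hit: re-insert to mark as most recently used.
            self.items[key] = self.items.pop(key)
            return EDGE_LATENCY_MS, True
        # Cache miss: evict the least recently used item if full,
        # then pay the origin round trip on top of the edge hop.
        if len(self.items) >= self.capacity:
            self.items.pop(next(iter(self.items)))
        self.items[key] = True
        return ORIGIN_LATENCY_MS + EDGE_LATENCY_MS, False

cache = EdgeCache(capacity=2)
requests = ["video-a", "video-a", "video-b", "video-a"]
latencies = [cache.fetch(r)[0] for r in requests]
# The first request for each object pays the origin round trip;
# repeat requests are served cheaply from the edge.
```

The point of the sketch is the one the post makes: whether the cache sits in an ISP's network or a CDN's, the architecture is the same, which is exactly why drawing a regulatory line between them is so hard.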

So yesterday's Wall Street Journal story was not noteworthy for exposing some brand new network technology or architectural scheme. No, it seemed noteworthy (again, pending the accuracy of the reporting and the follow-on assertions) because (1) it highlighted the reality of this already existing architecture -- something a few of us have been trying for years to expose and highlight as a shortcoming of the neutrality concept -- and (2) suggested Google and others were softening their stance on the net neutrality policy issue.

Now it's perfectly possible the article is mistaken, and that no one is softening on the push for net neutrality regulation. Let's have the truth, indeed. But it is a good thing that we are getting deeper into the technology and architecture of the Net, because a clearer understanding will expose net neutrality's big flaws. As Gilder and I surmised five years ago, net neutrality, as ill-defined as it still is after all this time, seems to be one group's attempt to gain the upper hand on competitors using the heavy hand of government. My networks, good; your networks, bad. My content-delivery, bandwidth-saving, latency-reducing fix, good; your content-delivery, bandwidth-saving, latency-reducing method, "evil."

More to come. . . .

posted by Bret Swanson @ 11:56 AM | Net Neutrality

The Progress & Freedom Foundation