WORKING DRAFT December 6, 2012

A Deference to Protocol: Fashioning A Three-Dimensional Public Policy Framework for the Internet Age

Richard S. Whitt*

"Truth happens to an idea." William James

Abstract

This paper discusses how public policy grounded in the Internet's architecture can best ensure that the Net fully enables tangible benefits such as innovation, economic growth, free expression, and user empowerment. In particular, recognizing that the Internet is rapidly becoming society's chief operating system, this paper shows how an overarching public policy framework should be faithful to the multifaceted nature of the online world. As part of such a framework, this paper will explore one key aspect of the Internet: the "logical" Middle Layers functions, its inner workings derived from open software protocols. Adhering to the deferential principle of "respect the functional integrity of the Internet," in combination with the appropriate institutional and organizational implements, can help ensure that any potential regulation of Internet-based activities enables, rather than hinders, tangible and intangible benefits for end users.

I. Introduction

No Nerds Allowed

In late 2011, the United States Congress was heavily involved in debating legislation aimed at stopping foreign websites from hosting content that violates U.S. copyright laws. The House bill, called SOPA ("Stop Online Piracy Act"), and the Senate bill, known as PIPA ("Protect Intellectual Property Act"), shared a common element: they sought to impose certain technical requirements on website owners, search engines, ISPs, and other entities, intended to block the online dissemination of unlawful content.

On November 16, 2011, the House Judiciary Committee held a public hearing on SOPA. Representatives from the affected content industries were on hand to testify in favor of the legislation. The sole voice in opposition was Katie Oyama, copyright counsel for Google. There were no other witnesses called to provide testimony – including, notably, no technical experts on the actual workings of the Internet. No software engineers, no

* Rick currently is global head for public policy and government relations at Motorola Mobility LLC, a wholly-owned subsidiary of Google Inc. Previously he spent over five years in Google's public policy shop in Washington, DC. Rick wishes to thank in particular Max Senges, his colleague in Google's Berlin office, for his unfailing enthusiasm and insights about polycentric governance and other pertinent matters. Vint Cerf and Brett Frischmann also provided some helpful detailed feedback on an earlier version of the paper.
hardware engineers, no technologists, no scientists, no economists, no historians. This apparent oversight was surprising, because many prominent Internet engineers had been making known their concerns about the legislation. In May 2011, for example, various network engineering experts produced a straightforward technical assessment of the many infirmities of the legislation.1 The engineers took no issue with the goal of lawfully removing infringing content from Internet hosts with suitable due process.2 Instead, they pointed out how the proposed means of filtering of the Internet's Domain Name System (DNS) would be easily circumvented.3 They also argued that collateral damage to innocent online activities would result from such circumvention techniques, as well as from the act of DNS filtering itself.4 These experts readily identified both the under-inclusion and over-inclusion risks from the pending legislation.5

Other engineering experts, including Steve Crocker, Paul Vixie, Esther Dyson, Dave Farber, and many others, expressed similar concerns in a short letter they submitted to Congress just following the November hearing.6 The engineers all agreed that the bills' proposed restrictions on DNS and other functional elements of the Internet would utilize the highly problematic targeting of basic networking functions. The bills in particular would interfere with the Internet's naming and routing systems, in a way that would be both ineffective (because many technical work-arounds are possible) and overinclusive (because many legitimate uses and users would be adversely affected).7 In other words, Congress was considering legislation that, while laudable in its overall objective, was aimed at the wrong functional target.

In their arguments, the engineers relied on the design principles embedded in the Internet's architecture. "When we designed the Internet the first time," they explained, "our priorities were reliability, robustness, and minimizing central points of failure or control. We are alarmed that Congress is so close to mandating censorship-compliance as a design requirement for new Internet innovations."8

Little evidence of these views actually made its way into the November 16th hearing, and no expert testimony about the inner workings of the Internet was heard. At a subsequent markup of the House bill, members debated whether such knowledge was even necessary. For example, Congressman Mel Watt (D-NC), ranking member of the subcommittee that governs Internet policy, exclaimed that "I don't know about the

1 Steve Crocker, David Dagon, Dan Kaminsky, Danny McPherson, and Paul Vixie, Security and Other Technical Concerns Raised by the DNS Filtering Requirements in the Protect IP Bill, May 2011.
2 Security and Other Technical Concerns, at 3.
3 Id. at 7-10.
4 Id. at 10-13.
5 See also Allan A. Friedman, Cybersecurity in the Balance: Weighing the Risks of the PROTECT IP Act and the Stop Online Piracy Act, The Brookings Institution, November 15, 2011 (Legislation's attempt to "execute policy through the Internet architecture" creates real threats to cybersecurity, by both harming legitimate security measures (over-inclusive) and missing many potential work-arounds (under-inclusive)).
6 Vint Cerf, Paul Vixie, Tony Li et al., An Open Letter from Internet Engineers to the United States Congress, December 15, 2011.
7 An Open Letter, at 1.
8 An Open Letter, at 1.
technical stuff, but I don't believe the experts." He added, "I'm not a nerd." Congresswoman Maxine Waters (D-CA) complained that those who raised questions about the bill's impact on the Internet were "wasting time." A few voices were raised on the other side.9 Congressman Jason Chaffetz (R-Utah) said that "We are doing surgery on the Internet without understanding what we are doing…. If you don't know what DNSSEC is, we don't know what we're doing." As he put it, "Maybe we oughta ask some nerds what this thing really does."

Nonetheless the political skids appeared greased. After a markup on December 15th, it was becoming increasingly apparent to the pundits that this legislation was going to pass, and likely be signed into law, albeit reluctantly, by President Obama.10 Except it did not happen. On January 18, 2012, a host of Internet companies participated in what became known as the "Internet Blackout Day," seeking to enlist their users to protest against the bills. Over 115,000 websites committed what was the Web's version of a collective work stoppage, with some like Wikipedia closing access to their content while others like Google posted messages exhorting users to voice their concerns to Congress. Lawmakers received emails from some 14 million people.11 The response was swift, and predictable: the legislation would not be brought to a floor vote in either chamber.

The interesting question is where things go from here. Despite the best efforts of the Internet community to explain and educate, their voices were taken seriously only in the wake of the Internet Blackout Day, leaving politicians scrambling to publicly back away from bills previously endorsed. It is fair to say that many in Congress still do not have an informed appreciation for the structural and functional integrity of the Internet. Instead, the debate turned into a classic political battle, won only by unconventional but straightforward lobbying tactics, rather than the power of legitimate ideas.

What would John Kingdon say about this dilemma?

Accepting the Kingdon Challenge

This paper complements and extends the analysis proposed in four previous papers by the author,12 as well as a seminal paper by Professor Lawrence Solum calling on policymakers not to disrupt the integrity of the different protocol layers of the Internet.13

9 Steve King tweeted that "We are debating the Stop Online Piracy Act and Shiela [sp] Jackson has so bored me that I'm killing time by surfing the Internet."
10 On January 17th, the White House issued a statement that the President would not support legislation that "reduces freedom of expression, increases cybersecurity risks, or undermines the dynamic, innovative global internet." It is not clear whether this was a legitimate veto threat, or something short of one.
11 Wikipedia, "Protests against SOPA and PIPA," last visited July 22, 2012.
12 Richard S. Whitt & Stephen Schultze, The New "Emergence Economics" of Innovation and Growth, and What It Means for Communications Policy, 7 J. TELECOMM. & HIGH TECH. 217 (2009) ("Emergence Economics"); Richard S. Whitt, Adaptive Policymaking: Evolving and Applying Emergent Solutions for U.S. Communications Policy, 61 FED. COMM. L.J. 483 (2009) ("Adaptive Policymaking"); Richard S. Whitt, Evolving Broadband Policy: Taking Adaptive Stances to Foster Optimal Internet Platforms, 17 COMMLAW CONSPECTUS 417 (2009) ("Broadband Policy"); Richard S. Whitt, A Horizontal Leap Forward: Formulating a New Communications Public Policy Framework Based on the Network Layers Model, 56 FED. COMM. L.J. 587 (2004) ("Horizontal Leap").
13 Lawrence B. Solum & Minn Chung, The Layers Principle: Internet Architecture and the Law (University
These papers share a certain perspective about the desirability of creating an overarching conceptual framework for the Internet that helps us explore and craft policy solutions that work with, and not against, its generative nature. But the chief message here is straightforward: policymakers should do what they can to understand and take advantage of the Internet's structural and functional design.

This paper focuses on discovering an optimal fit between the means and ends of proposed regulation of the Internet's inner workings – what I call its "Middle Layers" – so as to best preserve the integrity of its basic design. The policy that works best in the market is the one best fit to the technology, institutions, and organizations involved, and not merely the one most convenient politically. Fortunately, there are a number of ways that public policy can be applied to "the Internet" without doing violence to its working internal structure. The key is to match the right policy instruments to the right functional solution. As Solum points out, for Internet-related issues "public policy choices can be grounded not in vague theoretical abstractions, but in the ways that communications networks actually are designed, constructed, and operated."14 In that way, we would be "drawing public policy lessons from Internet topology and experience."15

In my previous work, I have argued that traditional economic theories have been rooted in basic misconceptions about micro and macro human behavior. Modern political theory suffers accordingly from the same drawbacks: persistent misunderstandings about how ordinary people think and operate, particularly in pervasive networked environments like the Internet. In their place, I respectfully suggested some new ways of seeing and framing the relevant issues related to online markets (Emergence Economics) and communications technology (Adaptive Policymaking). In Broadband Policy I applied the market and technology framings to broadband transmission networks, offering concrete examples of novel policy options. We now face a similar crisis when it comes to the thing, the process, the system, we call the Internet.

Political scientist John Kingdon famously asserted that in the political sphere it is often ideas, not pressure, that matter most at the end of the day.16 While power, influence, and strategy are important, "the content of the ideas themselves, far from being mere smokescreens, or rationalizations, are integral parts of decision making in and around government."17 Keynes agreed that in politics "the power of vested interests is vastly exaggerated compared with the gradual encroachment of ideas."18

And yet, the still-fresh SOPA/PIPA legislative battle sorely tests that optimistic thesis, and crystallizes the challenge. After all, various concerted educational efforts were

13 (cont.) of San Diego School of Law, Public Law and Legal Theory Research Paper No. 55) (2003), available at http://ssrn.com/abstract=416263 (Solum, Layers Principle).
14 Solum & Chung, Layers Principle, at 595.
15 Solum & Chung, Layers Principle, at 614.
16 JOHN W. KINGDON, AGENDAS, ALTERNATIVES, AND PUBLIC POLICIES (Updated 2nd edition 2011), at 125.
17 KINGDON, AGENDAS, at 125.
18 JOHN MAYNARD KEYNES, THE GENERAL THEORY OF EMPLOYMENT, INTEREST, AND MONEY (1936), at 383.
launched, and failed. In some respects, this has become a battle of ideas about the proper role of the Internet in society. As a purely intellectual matter, the idea of not imposing certain mandates on the Internet's design architecture was not accepted. Or rather, the counter idea was winning out that the Internet as it currently exists must be "fixed" because, allegedly, it facilitates massive theft of content.

As it turned out, politics as usual (with a unique Internet-fueled twist) won the moment, with the Internet Blackout Day forcing a hasty retreat even by staunch bill sponsors. Frankly, this is not a desirable outcome, one driven by confusion and fear rather than understanding. Such shows of political force are usually difficult to replicate, complicated to harness, and can quickly lose their novelty and impact. Moreover, while most impressive and effective, at least for the moment, the show of force convinced politically, without convincing intellectually.

For all practical purposes, the Net is becoming the chief operating system for society. And yet, confusion and misapprehension about how the Net functions – its basic design attributes and architecture – remains frighteningly high, even in policymaking circles. Ironically, the collective ignorance of our body politic is slowly strangling that which we should want most to preserve. Perhaps the Net community only has itself to blame for this predicament. For too long we urged policymakers simply to look the other way whenever talk about Internet regulation surfaced. After all, many of us simply laughed when then-Senator Ted Stevens railed about the Net as a "series of tubes," or then-President George W. Bush referred to "the internets." We were convinced that ignorance about the Internet – just a big mysterious, amorphous cloud, right? – would lead our politicians to shy away from imposing laws and rules.

One can make a case that the lack of understanding was willful, that the fault lies not with the Net's partisans but with the politicals who chose blindness over vision. And clearly quite serious efforts had been undertaken, often in vain, to educate policymakers about the potential errors of their ways.19 On the other hand, for some entities, hiding commercial dealings behind the rhetorical cloak of Internet "unregulation" helped ward off unwanted government scrutiny. Deliberate obfuscation protected pecuniary interests. Regardless of motivations on both sides, however, the larger point is crucial: the days of easy sloganeering are over. It is time for the Internet community to come out from behind the curtain and explain itself. This paper is intended as a modest contribution to that end.

To be clear at the outset, this piece is not going to argue for a form of what some have termed "Internet exceptionalism." The rallying cry of "Don't regulate the Internet" no longer makes sense, at least as commonly understood. The Internet and all the myriad activities it facilitates will be regulated, to some degree, by someone.20 The chief

19 While there have been such educational efforts in the past, too often their voices have failed to reach the ears of policymakers. See, e.g., Doc Searls and David Weinberger, "World of Ends, What the Internet Is, and How to Stop Mistaking It for Something Else" (March 10, 2003), found at www.worldofends.net.
20 As de La Chapelle puts it, "The Internet is far from being unregulated; numerous national laws directly or indirectly impact human activities on the Internet, whether we like it or not." Bertrand de La Chapelle,
question is, how? My paper will attempt to explain that what we really need is a new form of "Internet contextualism," where the basic workings of the Net are understood and fully accounted for as we wrestle with difficult questions about social concerns. Under this banner, government involvement – directly or indirectly, through a variety of institutional and organizational vehicles – will happen only for the right reasons, and aimed in the right way at the pertinent uses and abuses of the network.

The philosophical pragmatists will observe that it is not enough for an idea to be true; for purposes of public acceptance, it must be seen to be true.21 Assuring politicians that it is acceptable not to regulate what they don't comprehend just won't fly anymore. In Kingdonian terms, we need to couple the three policy streams of recognizing problems, formulating proposals, and connecting to politics.22 We cannot afford to ignore or downplay any of those three elements in a policy framework that actually works. Nor should we invest in slogans whose time has come and gone. But as we shall see, a new slogan may now be appropriate: a more modest and grounded exhortation to "respect the functional integrity of the Internet." It is an idea whose veracity and validity in the minds of too many policymakers is still very much in doubt.

II. The Internet's Fundamental Design Features

A. The Net's Framing Years

A technology is not easily severable from the culture in which it is embedded.23 It is a truism that the Internet was born and raised not from the market, but from an unlikely confluence of government and academic forces. Literally hundreds of people contributed to what eventually became the "Internet project" over several decades of development, from designers and implementers to writers and critics. The participants came from universities, research laboratories, government agencies, and corporations.24 What many of them worked on were the technical standards that would provide the essential building blocks for the various online technologies to follow.25

There is little doubt that the Internet "represents one of the most successful examples of sustained investment and commitment to research and development in information infrastructure."26 A brief overview of the Net's roots, processes, and people will shed some light on how it actually operates.

20 (cont.) Multistakeholder Governance, Principles and Challenges of an Innovative Political Paradigm, MIND Co:llaboratory Discussion Paper Series No. 1, #2 Internet Policy Making, September 2011, at 16.
21 "The truth of an idea is not a stagnant property inherent in it. Truth happens to an idea. It becomes true, is made true by events. Its verity is in fact an event, a process, the process namely of its verifying itself, its verification. Its validity is the process of its validation." WILLIAM JAMES, THE MEANING OF TRUTH, (18__), at __. See also Whitt, Adaptive Policymaking, at __.
22 KINGDON, AGENDAS, at __.
23 Whitt and Schultze, Emergence Economics, at 251.
24 David D. Clark, The Design Philosophy of the DARPA Internet Protocols, ACM (1988), 106, 114.
25 Center for Democracy and Technology, "The Importance of Voluntary Technical Standards for the Internet and its Users," August 29, 2012, at 3.
26 Barry M. Leiner, Vinton G. Cerf, David D. Clark, et al., The Past and Future History of the Internet, Communications of the ACM, Vol. 40, No. 2 (February 1997), at 102.
1. From top-down government management to bottom-up guidance

The Internet was actually born from several different projects in the late 1960s and 1970s, all of which were funded and controlled in some manner by national government agencies. However, the early homogeneity of design and top-down control slowly gave way over time to a heterogeneity of design and bottom-up governance. In some sense, the nature of process followed the nature of function.

In 1968 the U.S. Department of Defense's Advanced Research Projects Agency (ARPA) awarded to contractor BBN the first government grant to construct and develop ARPANET. This single network was intended to allow dissimilar computers operating at different sites to share online resources. ARPANET eventually became ARPA's host-to-host communications system.27 One key feature was the Interface Message Processors (IMPs), the network's packet-switching nodes, accessible only by project engineers. ARPA provided direct management and control over this project, alongside introduction of the Network Working Group (NWG) in 1969 and the ARPA Internet Experiment group in 1973.

Vint Cerf and Bob Kahn would do much of their work on the TCP/IP protocol suite under the auspices and funding of ARPA, starting in 1968 and leading to their landmark 1974 paper.28 As opposed to addressing how to communicate within the same network, Cerf and Kahn tackled a far more challenging problem: linking together disparate packet-switching networks with a common set of protocols. The Cerf/Kahn paper developed TCP as the means of sharing resources that exist in different data networks (the paper focused on TCP, but IP later was split out to logically separate router addressing from host packet sending).29 TCP/IP was adopted as a Defense Department standard in 1980, and incorporated within ARPANET in 1983. However, this work helped serve as a crucial bridge to the next phase in the development of what now is known as the Internet.

Moving beyond ARPANET, the top-level goal for the new Internet project was to develop "an effective technique for multiplexed utilization of existing interconnected networks."30 The National Science Foundation (NSF) and others recognized TCP/IP as the primary means of solving that difficult task, and the protocol suite was incorporated into its NSFNET research network in 1985. NSF and other government agencies were in control of this particular networking project, and access remained strictly limited to academic institutions and the U.S. military. Nonetheless, starting in the late 1970s other bodies began to appear to help steer the course of this growing new "network of networks," including the Internet Configuration Control Board (founded by Vint Cerf in 1979), the International Cooperation Board, and the Internet Activities Board. The Internet Engineering Task Force (IETF) was launched in 1986.

27 All told, Congress spent some $25 million on ARPANET – a fine return on investment.
28 Vinton G. Cerf and Robert E. Kahn, "A Protocol for Packet Network Intercommunication," IEEE Trans. on Comms, Vol. 22, No. 5, May 1974.
29 In essence, TCP creates and organizes data packets, while IP wraps a header with routing instructions around each packet. UDP was another host-to-host protocol developed in this same timeframe.
30 Clark, Design Philosophy, at 106.
Commercial services were authorized on "the Internet" beginning in 1989, and with them came a plethora of new bodies involved in some element of Internet governance or standards. The Internet Society (ISOC) arrived in 1992, along with the Internet Architecture Board (IAB), which replaced the ICCB. The World Wide Web Consortium (W3C) followed two years later, and the Internet Corporation for Assigned Names and Numbers (ICANN) made its appearance in 1998. As government control and funding declined, commercial and non-commercial entities alike stepped into the breach to guide the Internet's continuing evolution.

2. From government roots to market reality

Amazingly, some would contest this long-settled version of the Net's history. Gordon Crovitz, former publisher of the Wall Street Journal, proclaimed in a recent opinion piece that "It's an urban legend that the government launched the Internet."31 The reaction to this piece was swift, and appropriately merciless.32 The larger lessons should not be missed, however. First, it is troubling that a major publication would publish such nonsense, and assume there would be a ready audience for it. As much as there is ignorance about the basic workings of the Internet, apparently there is willful effort to confuse the public about how it even began. Second, and more fundamentally, election-year posturing should not be allowed to obscure the simple fact that the Internet is a different animal because of its origins. The Net's design derives directly from its birth as an unusual confluence of government, academic, and libertarian culture, which only gradually gave way to commercialization.

Indeed, it is a fascinating question whether the Internet would have developed on its own as a purely commercial creature of the marketplace. Networking pioneer and entrepreneur Charles Ferguson for one says no. He argues that many new technologies like the Internet typically come not from the free market or the venture capital industry; rather, "virtually all the critical technologies in the Internet and Web revolution were developed between 1967 and 1993 by government research agencies and/or in universities."33

Steve Crocker, an original technical pioneer of the Internet, shares that view. He points out that the Internet never could have been created without government's assistance as

31 Gordon Crovitz, "Who Really Invented the Internet?" Wall Street Journal, July 22, 2012. Instead, he claims, Xerox should get "full credit" for such an invention. His motivation for holding and explicating such a view, apparently, is that the Net's history is "too often wrongly cited to justify big government." Id.
32 See, e.g., Farhad Manjoo, "Obama Was Right: The Government Invented the Internet," Slate, July 24, 2012 ("Crovitz' yarn is almost hysterically false…. A ridiculously partisan thing."); Harry McCracken, "How Government Did (and Did Not) Invent the Internet," Time Techland, July 25, 2012 (Crovitz' argument "is bizarrely, definitively false."). Vint Cerf also provided a pointed response. See http://news.cnet.com/8301-1023_3-57479781-93/no-credit-for-uncle-sam-in-creating-net-vint-cerf-disagrees/ ("I would happily fertilize my tomatoes with Crovitz' assertion.").
33 CHARLES H. FERGUSON, HIGH STAKES, NO PRISONERS: A WINNER'S TALE OF GREED AND GLORY IN THE INTERNET WARS (1999), at 13.
funder and convener.34 In particular, the Internet's open architecture was a fundamental principle that was a hallmark of the government research effort, one that would not have come about if the Net had been created instead by private industry.35 Indeed, without the existence of a ready alternative like the Internet, the relatively "closed" online networks may well have remained the prevailing marketplace norm.36

Regardless, given its distinctive origins at the "unlikely intersection of big science, military research, and libertarian culture,"37 it is not surprising that the players, processes, and guiding philosophies pertinent to how the Net was designed and operated are rather unique. This may well mean that we need some unconventional tools to fully assess the Net, as a technological and social phenomenon not originating from the market.

3. Rough consensus and running code

With the Net's roots stretching at least as far back as 1962, "the initial impulse and funding for the Internet came from the government military sector," with members of the academic community enjoying great freedom as they helped create the network of networks.38 According to Hofmokl, that freedom remained as "a major source of path dependency," as shown in the early shaping principles and operating rules of the Net: the lack of a central command unit (with consensus-driven, democratic processes to define operations), the principle of network neutrality (a simple network with intelligence residing at the end points), and an open access principle (local networks joining the emerging global Internet structure).39

The Net's design criteria were conceptualized during its first decade in powerfully path-dependent ways that have been foundational for the treatment of legal and policy issues then and now – what Braman calls "the framing years."40 Key to the design criteria are technical standards, the language that computers, phones, software, and network equipment use to talk to each other.41 Protocols became widely recognized technical agreements among computers and other devices about how data moves between physical networks.42 Internet pioneer Steve Crocker states that a "culture of open processes" led to the development of standards and protocols that became building blocks for the Net.43 Informal rules became the pillars of Internet culture, including a loose set of values and

34 Steve Crocker, "Where Did the Internet Really Come From?" techpresident.com, August 3, 2012.
35 Crocker, "Where Did the Internet Really Come From?"
36 See Whitt and Schultze, Emergence Economics, at 254.
37 MANUEL CASTELLS, THE INTERNET GALAXY 17 (2001).
38 Hofmokl, at 230.
39 Hofmokl, at 230.
40 Braman, The Framing Years, at 3.
41 CDT, "Voluntary Technical Standards," at 1.
42 SEARLS, INTENTION ECONOMY, at Chapter 9. In 1969 the Network Working Group adopted the word "protocol" (then in widespread use in the medical and political fields to mean "agreed procedures") to denote the set of rules created to enable communications via ARPANET. Interestingly, the Greek root "protokollon" refers to a bit of papyrus affixed to the beginning of a scroll to describe its contents – much like the header of a data packet. Whitt, A Horizontal Leap, at 601-02.
43 Whitt, Broadband Policy, at 507 n.533.
  10. 10. norms shared by group members.44 The resulting broader global vision of both processand rules ―overshadowed the orientation that initially had been pursued by thegovernment agencies focused on building specific military applications.‖45Unconventional entities accompany these informal rules. Today there is no singlegoverning body or process that directs the development of the Internet‘s protocols.46Instead, we have multiple bodies and processes of consensus. Much of the ―governance‖of the Internet is carried out by so-called multistakeholder organizations (MSOs), such asISOC, W3C, and ICANN. Over the last two decades, although these entities have largelyestablished the relevant norms and standards for the global Internet, ―they are littleknown to the general public, and even to most regulators and legislators.‖47ISOC is the first and one of the most important MSO, with a stated mission "to assure theopen development, evolution and use of the Internet for the benefit of all peoplethroughout the world."48 Since 1992 engineers, users, and the companies that assembleand run the Internet debate at ISOC about what particular course the Net should take.The Internet Engineering Task Force (IETF) now operates under the auspices of ISOC,and its stated goal is ―to make the Internet work better.‖49 It grew out of the InternetActivities Board, and previously the Internet Configuration Control Board. The IETF isthe institution that has developed the core networking protocols for the Internet, includingIPv4, IPv6, TCP, UDP, and countless others.50 The body is open to any interestedindividual, meets three times a year, and conducts activities through working groups invarious technical areas. Its standards-setting process includes electronic publishing andbroad distribution of proposed standards.The IETF has articulated its own cardinal principles for operation. 
The body employs an open process (where any interested person can participate, know what is being decided, and be heard), relies on technical competence (where input and output is limited to areas of technical competence, or "engineering quality"), has a volunteer core of leaders and participants, utilizes "rough consensus and running code" (standards are derived from a combination of engineering judgment and real-world experience), and accepts responsibility for all aspects of any protocol for which it takes ownership.51 An early document states that IETF should act as a trustee for the public good, with a requirement that all groups be treated equitably, and an express recognition of the role for stakeholders.52 Some have argued that this statement alone created "the underpinning of

44 Hofmokl, at 230.
45 Hofmokl, at 231.
46 Bernbom, Analyzing the Internet as a CPR, at 13.
47 Joe Waz and Phil Weiser, Internet Governance: The Role of Multistakeholder Organizations, Silicon Flatirons Roundtable Series on Entrepreneurship, Innovation, and Public Policy (2012), at 1.
48 www.ietf.org.
49 RFC 3935, "A Mission Statement for the IETF," October 2004, at 1.
50 DeNardis, Internet Governance, at 7.
51 RFC 3935, "Mission Statement," at 1-2. These processes stand as an interesting contrast to, say, the workings of the U.S. Congress.
52 RFC 1591, "Domain Name System Structure and Delegation," Jon Postel, at __.
the multi-stakeholder governance system that is the foundation of Internet governance."53

The Request for Comments (RFC) series was first established by Steve Crocker of UCLA in April 1969. The memos were intended as an informal means of distributing shared ideas among network researchers on the ARPANET project.54 "The effect of the RFCs was to create a positive feedback loop, so ideas or proposals presented in one RFC would trigger other RFCs."55 A specification document would be created once consensus came together within the governing organization (eventually the IETF), and used as the basis for implementation by various research teams. RFCs are now viewed as the "document of record" in the Net standards community,56 with over 6,000 documents now in existence.

An RFC does not automatically carry the full status of a standard. Three types of RFCs can be promulgated: the proposed standard (specification and some demonstrated utility), the draft standard (implementations and at least some limited operational capability), and the standard itself (demonstrated operational capacity). A proposed or draft standard can only become an actual standard once it has been readily accepted and used in the market. In fact, standards ultimately succeed or fail based on the response of the marketplace.57

Other organizations involved in governing the Internet include W3C and ICANN. W3C was formed in 1994 to evolve the various protocols and standards associated with the World Wide Web. The body produces widely-available specifications, called Recommendations, which describe the building blocks of the Web.
ICANN was formed in 1998, in what Milton Mueller calls "cyberspace's constitutional moment."58 Its role is to manage the unique system of identifiers of the Internet, including domain names, Internet addresses, and parameters of the Internet Protocol suite.

As we shall see, the Internet's "running code" is a reflection of its unique heritage: open standards and public commons, rather than proprietary standards and private property. While much of its underlying physical networks and "over-the-top" applications and content come from the commercial, privately-owned and -operated world, its logical architectural platform does not.

B. The Internet's Designed Architecture

"Complex systems like the Internet can only be understood in their entirety by abstraction, and reasoned about by reference to principles."59 "Architecture" is a high-level

53 Doria, Study Report, at 19.
54 Leiner et al., The Past and Future History of the Internet, at 106.
55 Leiner et al., The Past and Future History of the Internet, at 106.
56 Leiner et al., The Past and Future History of the Internet, at 106.
57 Werbach, Higher Standard, at 199.
58 MILTON L. MUELLER, RULING THE ROOT: INTERNET GOVERNANCE AND THE TAMING OF CYBERSPACE (2002). He believes there is no suitable legal or organizational framework in place to govern the Net. Id. at __.
59 Matthias Barwolff, End-to-End Arguments in the Internet: Principles, Practices, and Theory, Dissertation submitted to the Department of Electrical Engineering and Computer Science at Technical Universitat Berlin, presented on October 22, 2010, at 133.
description of a complex system's organization of basic building blocks, its fundamental structures.60 How the Internet runs is completely dependent on the implementing code, its fundamental nature created and shaped by engineers.61 "The Internet's value is found in its technical architecture."62

Technology mediates and gives texture to certain kinds of private relationships, weighing in on the side of one vested interest over others.63 "Design choices frequently have political consequences – they shape the relative power of different groups in society."64 Or, put differently, "technology has politics."65 Law and technology both have the ability to organize and impose order on society.66 Importantly, "technology design may be the instrument of law, or it may provide a means of superseding the law altogether."67 Langdon Winner may have put it best (in a pre-Internet formulation): "The issues that divide or unite people in society are settled not only in the institutions and practices of politics proper, but also, and less obviously, in tangible arrangements of steel and concrete, wires and semiconductors, nuts and bolts."68 Indeed, "like laws or social norms, architecture shapes human behavior by imposing constraints on those who interact with it."69

David Clark and others remind us that "there is no such thing as value-neutral design. What choices designers include or exclude, what interfaces are defined or not, what protocols are open or proprietary, can have a profound influence on the shape of the Internet, the motivations of the players, and the potential for distortion of the architecture."70 Those responsible for the technical design of the early days of the Internet may not always have been aware that their preliminary, often tentative and experimental, decisions were simultaneously creating enduring frames not only for the Internet, but also for their treatment of social policy issues.71 As time went on, however, those decisions came more into focus.
The IETF proclaims that "the Internet isn't value-neutral, and neither is the IETF…. We embrace technical concepts such as decentralized control, edge-user empowerment, and sharing of resources, because those concepts resonate with the core values of the IETF community."72

It may well be true that "engineering feedback from real implementations is more

60 VAN SCHEWICK, INTERNET ARCHITECTURE AND INNOVATION (2010), at Part I, Chapter 1.
61 Solum & Chung, Layers Principle, at 12.
62 Searls and Weinberger, "World of Ends," at __.
63 Nissenbaum, From Preemption to Circumvention, at 1375.
64 Brown, Clark, and Trossen, Should Specific Values Be Embedded in the Internet Architecture, at 3.
65 Nissenbaum, From Preemption to Circumvention, at 1377.
66 Helen Nissenbaum, From Preemption to Circumvention: If Technology Regulates, Why Do We Need Regulation (and Vice Versa)?, Berkeley Technology Law Journal, Vol. 26 No. 3 (2011), 1367, 1373.
67 Braman, Defining Information Policy, at 4.
68 Langdon Winner, Do Artifacts Have Politics, in THE WHALE AND THE REACTOR: A SEARCH FOR LIMITS IN AN AGE OF HIGH TECHNOLOGY (1986), at 19.
69 VAN SCHEWICK, INTERNET ARCHITECTURE, at Part I, Chapter 1.
70 Clark, Sollins, Wroclawski, and Braden, Tussle in Cyberspace, at __.
71 Braman, The Framing Years, at 29.
72 RFC 3935, at 4.
important than any architectural principles."73 And yet, the fundamental design attributes of the Net have stood the challenge of time. Through several decades of design and implementation, and several more decades of actual use and endless tinkering, these are the elements that simply work.

C. The Internet's Four Design Attributes

The Internet is a network of networks, an organic arrangement of disparate underlying infrastructures melded together through common protocols. Understanding the what, where, why, and how of this architecture goes a long way toward understanding how the Net serves the role it does in modern society, and the many benefits (and some challenges) it provides.

It would be quite useful to come up with "a set of principles and concerns that suffice to inform the problem of the proper placement of functions in a distributed network such as the Internet."74 Data networks like the Internet actually operate at several different levels. Avri Doria helpfully divides up the world of Internet protocols and standards into three buckets.75 First we have the general communications engineering principles, consisting of elements like simplicity, flexibility, and adaptability. Next we have the design attributes of the Internet, such as no top-down design, packet-switching, end-to-end transmission, layering, and the Internet Protocol hourglass. Finally we have the operational resources, those naming and numbering features dedicated to carrying out the design principles; these include the Domain Name System (DNS), IP addressing, and Autonomous System Numbers.76 This paper will focus on Doria's second bucket of the Net's fundamental design attributes, the home of many of the Net's software protocols.

DeNardis points out the key role played by protocol standards in the logical layers: "The Internet works because it is universally based upon a common protocological language.
Protocols are sometimes considered difficult to grasp because they are intangible and often invisible to Internet users. They are not software and they are not material hardware. They are closer to text. Protocols are literally the blueprints, or standards, that technology developers use to manufacture products that will inherently be compatible with other products based on the same standards. Routine Internet use involves the direct engagement of hundreds of standards…."77

Scholars differ on how to define and number the Net's design attributes. As mentioned, Doria identifies the lack of top-down design, packet-switching, end-to-end transmission, layering, and the Internet Protocol hourglass.78 Barwolff sees five fundamental design

73 RFC 1958, at 4.
74 Barwolff, End-to-End Arguments, at 135.
75 Avri Doria, "Study Report: Policy implications of future network architectures and technology," Paper prepared for the 1st Berlin Symposium on Internet and Society, Conference Draft, October 2, 2011.
76 DeNardis, "Internet Governance," at 3. She calls these "Critical Internet Resources."
77 DeNardis, "Internet Governance," at 6.
78 Doria, Study Report, at 8-12.
principles of the Internet: end-to-end, modularity, best efforts, cascadability, and complexity avoidance. Bernbom comes up with five principles: distributed systems, network of networks, peer-to-peer, open standards, and best efforts.79 Barbara van Schewick weighs in with her own short list of design principles: layering, modularity, and two forms (the narrow version and the strong version) of the end-to-end principle.80

RFC 1958 probably best summed it up back in 1996: "In very general terms, the community believes that the goal is connectivity, the tool is the Internet Protocol, and the intelligence is end to end rather than hidden in the network."81 And, one can add with confidence, modularity or layering is the logical scaffolding that makes it all work together. So, consistent with RFC 1958 and other sources, I come up with a list of four major design attributes: the structure of layering (the what), the goal of connectivity (the why), the tool of the Internet Protocol (the how), and the ends-based location of function (the where).82

As with Doria's exercise, it is important at this point to separate out the actual network function from both the impetus and the effect, even though both aspects are critical to understanding the function's role. Design attributes also are not the same as the actual network instantiations, like DNS and packet-switching. My list may not be definitive, but it does seek to capture much of the logical essence of the Net.

1. The Law of Code: Modularity

The modular nature of the Internet describes the "what," or its overall structural architecture.
The use of layering means that functional tasks are divided up and assigned to different software-based protocol levels.83 For example, the "physical" layers of the network govern how electrical signals are carried over physical wiring; independently, the "transport" layers deal with how data packets are routed to their correct destinations, and what they look like, while the "application" layers control how those packets are used by an email program, web browser, or other user application or service.

This simple and flexible system creates a network of modular "building blocks," where applications or protocols at higher layers can be developed or modified with no impact on lower layers, while lower layers can adopt new transmission and switching technologies without requiring changes to upper layers. Reliance on a modular system of layers greatly facilitates the unimpeded delivery of packets from one point to another. Importantly, the creation of interdependent layers also creates interfaces between them. These stable interfaces are the key features that allow each layer to be implemented in different ways.

79 Bernbom, at 3-5.
80 BARBARA VAN SCHEWICK, INTERNET ARCHITECTURE AND INNOVATION (2010), at __. Van Schewick sees layering as a special kind of modularity. I agree, which is why I refrain from assigning modularity as a wholly separate design attribute. Modularity is more of a general systems element applicable in most data networks. For this paper, I use the two terms interchangeably.
81 RFC 1958, at 2.
82 See also Whitt and Schultze, Emergence Economics.
83 Whitt and Schultze, Emergence Economics, at 256-257.
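The division of labor that layering creates can be sketched in a few lines of code. This is a toy illustration of my own, not any real protocol format: each layer wraps the payload handed down from above with its own header, and on the receiving side each layer strips only its own header, so the internals of any one layer can change without disturbing the interfaces the other layers rely on.

```python
# Toy protocol stack (invented header formats, not real IP/TCP):
# each layer adds its own header on the way down, and removes only
# that header on the way back up.

def app_send(message: str) -> bytes:
    return message.encode("utf-8")                  # application-layer payload

def transport_send(payload: bytes, port: int) -> bytes:
    return b"TPT|port=%d|" % port + payload         # transport-layer header

def network_send(segment: bytes, dst: str) -> bytes:
    return b"NET|dst=%s|" % dst.encode() + segment  # network-layer header

def network_recv(frame: bytes) -> bytes:
    return frame.split(b"|", 2)[2]                  # strip NET header only

def transport_recv(segment: bytes) -> bytes:
    return segment.split(b"|", 2)[2]                # strip TPT header only

# Each layer talks only to the layer directly adjacent to it.
frame = network_send(transport_send(app_send("hello"), port=80), dst="10.0.0.2")
message = transport_recv(network_recv(frame)).decode("utf-8")
print(frame)    # b'NET|dst=10.0.0.2|TPT|port=80|hello'
print(message)  # hello
```

Swapping out the hypothetical network layer (say, for a new addressing scheme) requires preserving only the `network_send`/`network_recv` interface; the transport and application functions above it never change, which is the upgrade efficiency the text describes.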
RFC 1958 reports that "Modularity is good. If you can keep things separate, do it."84 In particular, layers create a degree of "modularity," which allows for ease of maintenance within the network. Layering organizes separate modules into a partially ordered hierarchy.85 This independence, and interdependence, of each layer creates a useful level of abstraction as one moves through the layered stack. Stable interfaces between the layers fully enable this utility. In particular, the user's ability to alter functionality at a certain layer without affecting the rest of the network can yield tremendous efficiencies when one seeks to upgrade an existing application (higher layer) that makes extensive use of underlying physical infrastructure (lower layer).86 So, applications or protocols at higher layers can be developed or modified with little or no impact on lower layers.87

In all engineering-based models of the Internet, the fundamental point is that the horizontal layers, defined by code or software, serve as the functional components of an end-to-end communications system. Each layer operates on its own terms, with its own unique rules and constraints, and interfaces with other layers in carefully defined ways.88

2. Smart Edges: End-to-End

The end-to-end ("e2e") design principle describes the "where," or the place for network functions to reside in the layered protocol stack.
The general proposition is that the core of the Internet (the network itself) tends to support the edge of the Internet (the end user applications, content, and other activities).89 RFC 1958 states that "the intelligence is end to end rather than hidden in the network," with most work "done at the fringes."90 Some have rendered this broadly as dumb networks supporting smart applications.91 A more precise technical translation is that a class of functions generally can be more completely and correctly implemented by the applications at each end of a network communication. By removing interpretation of applications from the network, one also vastly simplifies the network's job: just deliver IP packets, and the rest will be handled at a higher layer. In other words, the network should support generality, as well as functional simplicity.92

The e2e norm/principle arose in the academic communities of the 1960s and 1970s, and only managed to take hold when the US Government compelled adoption of the TCP/IP

84 RFC 1958, at 4. On the other hand, some forms of layering (or vertical integration) can be harmful if the complete separation of functions makes the network operate less efficiently. RFC 3439, at 7-8.
85 VAN SCHEWICK, INTERNET ARCHITECTURE, at Part II, Ch. 2.
86 Whitt, Horizontal Leap, at 604.
87 Whitt and Schultze, Emergence Economics, at 257.
88 Whitt, Horizontal Leap, at 602. In the "pure" version of layering, a layer is allowed to use only the layer immediately below it; in the "relaxed" version, a layer is permitted to utilize any layer that lies beneath it. VAN SCHEWICK, INTERNET ARCHITECTURE, at Part II, Ch. 2. The Internet uses a version of relaxed layering, with IP acting as the "portability" layer for all layers above it.
Id.
89 Whitt and Schultze, Emergence Economics, at 257-58.
90 RFC 1958, at 2, 3.
91 David Isenberg, id., at n.194.
92 As Van Schewick puts it, e2e requires not "stupidity" or simplicity in the network core, but that network functions need only be general in order to support a wide range of functions in the higher layers. VAN SCHEWICK, INTERNET ARCHITECTURE, at Part II, Ch. 4. One wonders if excessive talk about "stupid networks" has made these basic concepts more contentious for some than they need to be.
protocols, mandated a regulated separation of conduit and content, and granted nondiscriminatory network access to computer device manufacturers and dial-up online companies. These authoritative "nudges" pushed the network to the e2e norm.93 Consequently end-to-end arguments "have over time come to be widely considered the defining if vague normative principle to govern the Internet."94

While end-to-end was part of Internet architecture for a number of years prior, the concept was first identified, named, and described by Jerome Saltzer, David Reed, and David Clark in 1981.95 The simplest formulation of the end-to-end principle is that packets go into the network, and those same packets come out without change, and that is all that happens in the network. This formulation echoes the 1974 Cerf/Kahn paper, which describes packet encapsulation as a process that contemplates no change to the contents of the packets.

The e2e principle suggests that specific application-level functions ideally operate on the edges, at the level of client applications that individuals set up and manipulate.96 By contrast, from the network's perspective, shared ignorance is built into the infrastructure through widespread compliance with the end-to-end design principle.97 In addition, and contrary to some claims, e2e is not really neutral; it effectively precludes prioritization based on the demands of users or uses, and favors one set of applications over another.98 The e2e principle also generally preferences reliability, at the expense of timeliness.

The concept of e2e design is closely related to, and provides substantial support for, the concept of protocol layering.99 End-to-end tells us where to place the network functions within a layered architecture.100 In fact, end-to-end guides how functionality is distributed in a multilayer network, so that layering must be applied first.101 Both relate to the overall general design objectives of keeping the basic Internet protocols simple, general, and
open.102

With regard to the Internet, the end-to-end argument now has been transformed into a broader principle to make the basic Internet protocols simple, general, and open, leaving the power and functionality in the hands of the application.103 Of course, the e2e

93 Whitt, Broadband Policy, at 507-08.
94 Barwolff, at 134.
95 VAN SCHEWICK, INTERNET ARCHITECTURE, at Part II, Ch. 2.
96 Whitt and Schultze, Emergence Economics, at 258.
97 FRISCHMANN, INFRASTRUCTURE (2012), at 322.
98 FRISCHMANN, INFRASTRUCTURE, at 324, 326. Richard Bennett, a persistent critic of the "mythical history" and "magical properties" of the Internet, claims that end-to-end arguments "don't go far enough to promote the same degree of innovation in network-enabled services as they do for network-independent applications." Richard Bennett, "Designed for Change: End-to-End Arguments, Internet Innovation, and the Net Neutrality Debate," ITIF, September 2009, at 27, 38. To the extent this is a true statement, it reflects again that the Internet's many founding engineers made deliberate design choices.
99 Whitt, Horizontal Leap, at 604-05.
100 Whitt, Horizontal Leap, at 604-05.
101 VAN SCHEWICK, INTERNET ARCHITECTURE, at Part I, Ch. 2.
102 Whitt, Horizontal Leap, at 605.
103 Whitt and Schultze, Emergence Economics, at 259.
principle can be prone to exaggeration, and there are competing versions in the academic literature.104 One cannot have a modern data network without a core, and in particular the transport functionality to connect together the myriad constituents of the edge, as well as the widespread distribution of the applications and content and services provided by the edge. Elements of the core network, while erecting certain barriers (such as firewalls and traffic shaping) that limit pure e2e functionality,105 may still allow relatively unfettered user-to-user connectivity at the applications and content layers. To have a fully functioning network, the edge and the core need each other. And they need to be connected together.

3. A Network of Networks: Interconnection

RFC 1958 puts it plainly: the goal of the Internet is "connectivity."106 The Internet has a physical architecture as much as a virtual one.107 Unlike the earlier ARPANET, the Internet is a collection of IP networks owned and operated in part by private telecommunications companies, and in part by governments, universities, individuals, and other types of entities, each of which needs to connect together.108 Kevin Werbach has pointed out that connectivity is an often under-appreciated aspect of Internet architecture.109 "The defining characteristic of the Net is not the absence of discrimination, but a relentless commitment to interconnectivity."110

Jim Speta agrees that the Internet's utility largely depends on "the principle of universal interconnectivity, both as a technical and as an economic matter."111 In order to become part of the Internet, owners and operators of individual networks voluntarily connect to the other networks already on the Internet. This aspect of the Net goes to its "why," which is the overarching rationale of moving traffic from Point A to Point B.
The early Internet was designed with an emphasis on internetworking and interconnectivity, and moving packets of data transparently across a network of networks. Steve Crocker reports that in a pre-Internet environment all hosts would benefit from interconnecting with ARPANET, but that "the interconnection had to treat all of the networks with equal status" with "none subservient to any other."112

104 Barbara van Schewick devotes considerable attention to the task of separating out what she calls the "narrow" and "broad" versions of the end-to-end principle. VAN SCHEWICK, INTERNET ARCHITECTURE, at Part II, Ch. 1. She finds "real differences in scope, content, and validity" between the two, and plausibly concludes that the Net's original architecture was based on the broader version that more directly constrains the placement of functions in the lower layers of the network. Id. As one example, the RFCs and other IETF documents are usually based on the broad version. Id. at Part II, Ch. 2. For purposes of this paper, however, we need only recognize that some form of the end-to-end concept has been firmly embedded in the Net as one of its chief design attributes.
105 Whitt, Broadband Policy, at 453 n.199.
106 RFC 1958, at 2.
107 Laura DeNardis, "The Emerging Field of Internet Governance," Yale Information Society Project Working Paper Series, September 2010, at 12.
108 DeNardis, "Internet Governance," at 12.
109 Whitt and Schultze, Emergence Economics, at 259 n.205.
110 Kevin Werbach, quoted in Whitt, Broadband Policy, at 522.
111 Jim Speta, quoted in Whitt, Broadband Policy, at 522.
112 Crocker, "Where Did the Internet Really Come From?"
Today's Internet embodies a key underlying technical idea: open-architecture networking. Bob Kahn first articulated this concept of open architecture in 1972, and it became the basis for later Internet design. Under this design principle, network providers can freely interwork with other networks through "a meta-level 'internetworking architecture.'"113 Critical ground rules include that each distinct network must stand on its own, communications will be on a best-effort basis, and there is no global control at the operations level.114 The impetus for the best-efforts concept, then, is the desire for as many different networks as possible to voluntarily connect, even if strong guarantees of packet delivery were not possible.

The Internet's goal of open and voluntary connectivity requires technical cooperation between different network service providers.115 Networks of all types, shapes, and sizes voluntarily choose to interoperate and interconnect with other networks. They do so by agreeing to adopt the Internet's protocols as a way of passing data traffic to and from other entities on the Internet.
For example, it has always been legally and technically permissible for a private network, such as a broadband operator, to opt out – to cease offering Internet access or transport services, to reject TCP/IP – and instead provide only proprietary services.116 So, "if you want to put a computer – or a cell phone or a refrigerator – on the network, you have to agree to the agreement that is the Internet."117

Bernbom says that the key elements of the network of networks include "the autonomy of network participants, the rule-based requirements for interconnection, and the peer-to-peer nature of interconnection agreements."118 Ignoring one or more of these elements diminishes or even eliminates a network's interoperability with the rest of the Internet.119

In their recent book Interop, Palfrey and Gasser observe that "the benefits and costs of interoperability are most apparent when technologies work together so that the data they exchange prove useful at the other end of the transaction."120 Without interoperability at the lower layers of the Internet, interoperability at the higher layers – the human and institutional layers – is often impossible.121 Their concept of "interop" is to "embrace certain kinds of diversity not by making systems, applications, and components the same, but by enabling them to work together."122 If the underlying platforms are open and designed with interoperability in mind, then all players – including end users and intermediaries – can contribute to the development of new products and services.123

113 Leiner et al., Past and Future History of the Internet, at 103.
114 Leiner et al., at 103-04.
115 RFC 1958, at 2.
116 FRISCHMANN, INFRASTRUCTURE, at 345.
117 Doc Searls and David Weinberger, World of Ends, March 10, 2003, quoted in Whitt, Broadband Policy, at 504 n.515.
118 Bernbom, at 5. Interestingly he ties these same elements to maintaining the Internet as a commons.
Id.
119 Bernbom, at 23.
120 INTEROP: THE PROMISE AND PERILS OF HIGHLY INTERCONNECTED SYSTEMS, PALFREY AND GASSER (2012), at 22.
121 INTEROP, at 23.
122 INTEROP, at 108.
123 INTEROP, at 121. Standards processes play a particularly important role in getting to interoperability.
Interconnection agreements are unseen, "in that there are no directly relevant statutes, there is no regulatory oversight, and there is little transparency in private contracts and agreements."124 The fundamental goal is that the Internet must be built by interconnecting existing networks, so employing "best efforts" as the baseline quality of service for the Internet makes it easier to interconnect a wide variety of network hardware and software.125 This also facilitates a more robust, survivable network of networks, or "assured continuity."126 As a result, "the best effort principle is embedded in today's interconnection agreements across IP-networks taking the form of transit and peering agreements."127

We must not overlook the obvious financial implications of interconnecting disparate networks. "Interconnection agreements do not just route traffic in the Internet, they route money."128 A healthy flow of money between end users and access ISPs is important to sustain infrastructure investment, consistent with concerns about potential market power abuses.129 Traditionally, interconnection agreements on the backbone were part of a relatively informal process of bargaining.130 Transit is where an ISP provides access to the entire Internet for its customers; peering is where two ISPs interconnect to exchange traffic on a revenues-neutral basis. The changing dynamics of Net interconnection economics include paid peering between content delivery networks (CDNs) and access ISPs.131

Interconnecting, then, is the baseline goal embedded in the Internet's architecture, creating incentives and opportunities for isolated systems to come together, and for edges to become embedded in tightly interconnected networks.
Werbach has shown that interconnectivity creates both decentralizing and centralizing trends in the Internet economy, and both centripetal force (pulling networks and systems into the Internet commons) and centrifugal force (towards the creation of isolated gated communities).132 Thus far, however, "the Internet ecosystem has managed to adapt IP interconnection arrangements to reflect (inter alia) changes in technology, changes in (relative) market power of players, demand patterns, and business models."133

At least 250 technical interoperability standards are involved in the manufacture of the average laptop computer produced today. INTEROP, at 160, 163.
124 DeNardis, "Internet Governance," at 13.
125 Solum, Layers Principle, at 107.
126 Solum, at 107.
127 BEREC, IP-interconnection, at 5.
128 David Clark, William Lehr, Steven Bauer, Interconnection in the Internet: the policy challenge, prepared for TPRC, August 9, 2011, at 2.
129 Clark, Lehr, and Bauer, Interconnection in the Internet, at 2.
130 Clark, Lehr, and Bauer, Interconnection in the Internet, at 3.
131 Clark, Lehr, and Bauer, Interconnection in the Internet, at 2. See also BEREC, IP-interconnection, at 48 (the emergence of CDNs and regional peering together have resulted in a reduced role for IP transit providers).
132 Whitt and Schultze, Emergence Economics, at 260 n.209; see also Whitt, Broadband Policy, at 522.
133 BEREC, An assessment of IP-interconnection in the context of Net Neutrality, Draft report for public consultation, May 29, 2012, at 48.
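The voluntary, pairwise nature of interconnection can be modeled as a simple graph. This is a sketch of my own construction, with hypothetical network names rather than real autonomous systems: each network chooses its own peering or transit partners, and reachability across the network of networks emerges only from those bilateral agreements.

```python
from collections import deque

# Hypothetical interconnection agreements: each network lists only the
# peers it has voluntarily agreed to exchange traffic with.
peering = {
    "AS-A": {"AS-B"},
    "AS-B": {"AS-A", "AS-C"},   # AS-B effectively provides transit for A and C
    "AS-C": {"AS-B"},
    "AS-D": set(),              # a proprietary network that opts out entirely
}

def reachable(src: str, dst: str) -> bool:
    """Breadth-first search across the agreement graph."""
    seen, queue = {src}, deque([src])
    while queue:
        net = queue.popleft()
        if net == dst:
            return True
        for peer in peering.get(net, ()):
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return False

print(reachable("AS-A", "AS-C"))  # True: no direct agreement, but AS-B carries the traffic
print(reachable("AS-A", "AS-D"))  # False: AS-D made no agreements at all
```

Nothing in the model compels the hypothetical "AS-D" to join; as the text notes, a network that declines the common agreements simply falls outside the Internet.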
4. Agnostic Protocols: IP

RFC 1958 calls IP "the tool" for making the Internet what it is.134 The design of the Internet Protocol ("IP"), or the "how," allows for the separation of the networks from the services that ride on top of them. IP was designed to be an open standard, so that anyone could use it to create new applications and new networks. By nature, IP is completely indifferent to both the underlying physical networks, and to the countless applications and devices using those networks. In particular, IP does not care what underlying transport is used (such as fiber, copper, cable, or radio waves), what application it is carrying (such as browsers, e-mail, Instant Messaging, or MP3 packets), or what content it is carrying (text, speech, music, pictures, or video). Thus, IP enables any and all user applications and content. "By strictly separating these functions across a relatively simple protocol interface the two parts of the network were allowed to evolve independently but yet remain connected."135

In 1974, Vint Cerf and Robert Kahn issued their seminal paper on the TCP protocol suite, in which the authors "present a protocol design and philosophy that supports the sharing of resources that exist in different packet switching networks."136 IP later was split off in 1977 to facilitate the different functionality of the two types of protocols. Based in large part on how Cerf and Kahn designed the Internet architecture, the Internet Protocol has become a wildly successful open standard that anyone can use. By 1990, when ARPANET was finally decommissioned, TCP/IP had supplanted or marginalized other wide-area computer network protocols worldwide, and the IETF has overseen further development of the protocol suite. Thus, IP was on the way to becoming the bearer service for the Net.137

TCP and IP make possible the Net's design as general infrastructure.
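IP's indifference to its cargo can be shown in a toy sketch. This is my own construction; the fields are simplified stand-ins, not the actual IPv4 header layout: the same datagram structure carries an email fragment and a few audio bytes without inspecting, or caring, which is which.

```python
# Simplified stand-in for an IP datagram (invented fields, not the real header):
# the payload is an opaque bit stream that the network layer never interprets.

def ip_datagram(src: str, dst: str, payload: bytes) -> dict:
    # Connectionless and best-effort: no per-flow state, no delivery guarantee.
    return {"src": src, "dst": dst, "length": len(payload), "payload": payload}

email = ip_datagram("10.0.0.1", "10.0.0.2", b"Subject: hello")
audio = ip_datagram("10.0.0.1", "10.0.0.2", bytes([0x4F, 0x67, 0x67, 0x53]))

# The datagram structure is identical no matter what rides inside it;
# only a higher-level process at the edge gives the payload meaning.
print(set(email) == set(audio))  # True
```

The design choice the sketch illustrates is exactly the separation the text describes: because the datagram treats every payload as opaque bits, new applications and contents need no changes to, or permission from, the network layer beneath them.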
138 IP is the single protocol that constitutes the "Internet layer" in the protocol stack, while TCP is one of the protocols in the "transport layer." To higher layers, IP provides a function that is connectionless (each datagram is treated independently from all others) and unreliable (delivery is not guaranteed) between end hosts. By contrast, TCP provides a reliable and connection-oriented continuous data stream within an end host.139 IP also provides best efforts delivery because, although it does its best to deliver datagrams, it does not provide any guarantees regarding delays, bandwidth, or losses.140

On the Internet, TCP/IP are the dominant uniform protocols (UDP is a parallel to TCP, and heavily used today for streaming video and similar applications). Because they are standardized and non-proprietary, the things we can do on top of them are incredibly diverse. "The system has standards at one layer (homogeneity) and diversity in the ways

134 RFC 1958, at 2.
135 Doria, Study Report, at 11.
136 Whitt and Schultze, Emergence Economics, at n.212.
137 Leiner et al, The Past and Future History of the Internet, at 106.
138 Leiner et al, The Past and Future History of the Internet, at 104.
139 VAN SCHEWICK, INTERNET ARCHITECTURE, at Part II, Ch. 4.
140 VAN SCHEWICK, INTERNET ARCHITECTURE, at Part II, Chapter 4.
that ordinary people care about (heterogeneity)."141

About the ARPANET, RFC 172 states that one should "assume nothing about the information and treat it as a bit stream whose interpretation is left to a higher level process, or a user."142 That design philosophy plainly carried over to the Internet. As Barwolff puts it, IP creates "the spanning layer" that creates "an irreducibly minimal coupling between the functions above and below itself."143 Not only does IP separate the communications peers at either end of the network, it generally maintains a firm separation between the entities above and below it.144 This is another example of how two discrete elements, in this case modular design and agnostic protocols, work closely together to create a distinctive set of network functions. IP also interconnects physical networks through routers in the networks. Moreover, Frischmann believes that TCP/IP actually implements the end-to-end design.145

C. The End Result: A Simple, General, and Open Feedback Network

These four fundamental architectural components of the Internet are not standalones or absolutes; instead, they each exist, and interact, in complex and dynamic ways, along a continuum. Together the four design attributes can be thought of as creating a network that is, in David Clark's formulation, simple, general, and open.146 At the same time, the different layers create the logical traffic lanes through which the other three attributes travel and are experienced. So in that one sense modularity provides the Internet's foundational superstructure.

We must keep in mind that these four attributes describe the Internet in its native environment, with no alterations or impediments imposed by other agents in the larger ecosystem. Where laws or regulations, or other activities, would curtail one or more of the design attributes, the Net becomes less than the sum of its parts. It is only when the design features are able to work together that we see the full emergent phenomenon of the Net.
In this paper, I will use the term "integrity" to describe how the design elements fit together and function cohesively to create the user's overall experience of the Internet. Every design principle, instantiated in the network, has its drawbacks and its compromises. Technical improvements are a given.147 The Internet certainly could be more simple (or complex), more general (or specialized), more open (or closed).148 Nor

141 INTEROP, at 108.
142 RFC 172, at 6.
143 Barwolff, at 136.
144 Barwolff, at 137.
145 FRISCHMANN, INFRASTRUCTURE, at 320. Frischmann also observes that the e2e concept is found in many infrastructure systems. See also Whitt and Schultze, Emergence Economics, at 260-61 (IP was designed to follow the e2e principle).
146 [citation]
147 Whitt, Broadband Policy, at 453.
148 As one example, David Clark reports that the decision to impose the datagram model on the logical layers deprived them of an important source of information which they could use in achieving the lower layer goals of resource management and accountability. Clark, Design Philosophy, at 113. Certainly a future version of the Net could provide a different building block for the datagram. Id.
is the Internet an absolutely neutral place, a level playing field for all comers.

The design features reinforce one another. For example, the layering attribute is related to the end-to-end principle in that it provides the framework for putting functionality at a relative edge within the network's protocol stack.149 RFC 1958 states that keeping the complexity of the Net at the edges is ensured by keeping the IP layer as simple as possible.150 Putting IP in a central role in the Internet is also "related loosely to layering."151 At the same time, the "best efforts" paradigm is "intrinsically linked" to the nature of IP operating in the transmission network,152 because IP defines passing packets on a best efforts basis.153 Further, "TCP/IP defines what it means to be part of the Internet."154 Certainly the combination of the four design attributes has allowed end users to utilize the Net as a ubiquitous platform for their activities.155

The end result is that IP helps fashion what some have called a "virtuous hourglass" from disparate activities at the different network layers. In other words, the Net drives convergence at the IP (middle) layer, while at the same time facilitating divergence at the physical networks (lower) and applications/content (upper) layers. The interconnected nature of the network allows innovations to build upon each other in self-feeding loops. In many ways layering is the key element that ties it all together.

As the networks and users that comprise it continue to change and evolve, the Net's core attributes of modularity, e2e, interconnectivity, and agnosticism are constantly being pushed and prodded by technology, market, and legal developments. That is not to say these developments inherently are unhealthy. Clearly there are salient exceptions to every rule, if not new rules altogether.
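The hourglass metaphor can be made concrete with a toy encapsulation sketch. The header layouts below are invented purely for illustration (real IP and transport headers are far more elaborate); the structural point is the one made above: each layer adds its own header and treats everything handed down to it as opaque bytes, which is what lets upper and lower layers diverge while converging on a common middle.

```python
# Toy layered encapsulation: each layer wraps the bytes above it
# and never looks inside them. Header formats are invented for
# illustration only; they are not real IP or UDP formats.

def app_encode(text: str) -> bytes:
    return text.encode("utf-8")                  # upper layers: content

def transport_wrap(payload: bytes, port: int) -> bytes:
    header = port.to_bytes(2, "big") + len(payload).to_bytes(2, "big")
    return header + payload                      # transport: adds port + length

def internet_wrap(segment: bytes, dst: str) -> bytes:
    header = dst.encode().ljust(16, b"\0")       # "IP": adds an address, nothing more
    return header + segment                      # the payload stays opaque

def internet_unwrap(packet: bytes) -> tuple[str, bytes]:
    dst = packet[:16].rstrip(b"\0").decode()
    return dst, packet[16:]                      # hands the payload up untouched

def transport_unwrap(segment: bytes) -> tuple[int, bytes]:
    port = int.from_bytes(segment[:2], "big")
    length = int.from_bytes(segment[2:4], "big")
    return port, segment[4:4 + length]

# Convergence in the middle: any content, any port, same "IP" format beneath.
packet = internet_wrap(transport_wrap(app_encode("hello, hourglass"), 8080),
                       "192.0.2.1")
dst, segment = internet_unwrap(packet)
port, payload = transport_unwrap(segment)
print(dst, port, payload.decode())   # 192.0.2.1 8080 hello, hourglass
```

Because `internet_wrap` never parses its payload, the transport and application layers can change freely without touching it, a small-scale analogue of the independent evolution the text describes.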
The Internet needs to be able to adjust to the realities of security concerns like DoS attacks, and the needs of latency-sensitive applications like streaming video and real-time gaming. The question is not whether the Net will evolve, but how.156

III. Internet Design as Foundational, Dynamic, and Collective

The first part of this paper seeks to respond to a basic question: What types of technologies and technical design features afford the greatest potential to drive user benefits? In the previous section we took a relatively micro view, attempting to isolate and describe the Internet's four fundamental design attributes. This section takes the discussion to a more macro-level perspective on the Internet as a whole. Depending on who you ask – a technologist, a scientist, or an economist – the answer to that question is the same: the Internet. Whether as a general platform technology, a complex adaptive

149 Doria, Study Report, at 11.
150 RFC 1958, at 3. The layer principle is related to, but separate from, the broad version of the end-to-end principle. VAN SCHEWICK, INTERNET ARCHITECTURE, at Part II, Chapter 4.
151 Doria, Study Report, at 11.
152 BEREC, IP-interconnection, at 4.
153 SEARLS, INTENTION ECONOMY, at Chapter 9. "Best effort" is what IP requires. Id. at Chapter 14.
154 Werbach, Breaking the Ice, at 194.
155 Whitt and Schultze, Emergence Economics, at 300-01.
156 Whitt and Schultze, Emergence Economics, at 262.
system, or a common pool resource, the Net serves as the ideal platform to promote and enhance a myriad of human activities. In brief, the Net's basic design enables massive spillovers, emergent phenomena, and shared resources.

A. The Internet as General Platform Technology

One potential answer to that question has been articulated in the ongoing research on "General Purpose Technologies" ("GPTs"). A GPT is a special type of technology that has broad-ranging enabling effects across many sectors of the economy. Technologists typically define a GPT as a generic technology that eventually comes to be widely used, to have many uses, and to have many spillover effects.157

The foundational work on GPTs was first published by Timothy Bresnahan and Manuel Trajtenberg in 1992. They describe how this particular type of technology is most likely to generate increasing returns in line with economist Paul Romer, with growth coming from specific applications that depend on ideas in the "general" layer of technology. Specifically, GPTs play the role of "enabling technologies" by opening up new opportunities rather than offering complete, final solutions.
The result, as they found it, is "innovational complementarities," meaning that "the productivity of R&D in a downstream sector increases as a consequence of innovation in the GPT technology. These complementarities magnify the effects of innovation in the GPT, and help propagate them throughout the economy."158

The Internet has been labeled a GPT, with "the potential to contribute disproportionately to economic growth" because it generates value "as inputs into a wide variety of productive activities engaged in by users."159 Concurrently the Net is an infrastructure resource that enables the production of a wide variety of private, public, and social goods.160 As the early Net pioneers see it, "The Internet was not designed for just one application but as a general infrastructure on which new applications could be conceived, exemplified later by the emergence of the Web. The general-purpose nature of the service provided by TCP and IP made this possible."161

The GPT literature demonstrates that Internet technologies share key features of a GPT, all of which help make the Net an enabling technology. These features include: widespread use across key sectors of the economy and social life; great scope for potential improvement over time; facilitation of innovations generating new products and processes; and strong complementarities with existing and emerging technologies.162

By its nature a GPT maximizes the overall utility to society. Lipsey observes that GPTs

157 Whitt and Schultze, Emergence Economics, at 276.
158 Whitt and Schultze, Emergence Economics, at 276 n.289.
159 Whitt and Schultze, Emergence Economics, at 277 n.294.
160 FRISCHMANN, INFRASTRUCTURE, at 334.
161 Leiner et al, The Past and Future History of the Internet, at 104.
162 Hofmokl, at 241.
help "rejuvenate the growth process by creating spillovers that go far beyond the concept of measurable externalities," and far beyond those agents that initiated the change.163 This has important implications when trying to tally the sum total of beneficial value and activity generated by the Internet.164 GPT theory emphasizes "the broader complementarity effects of the Internet as the enabling technology changing the characteristics, as well as the modes of production and consumption of many other goods."165

Perhaps the most important policy-related takeaway about GPTs is that keeping them "general" is not always in the clear interest of firms that might seek to control them. A corporation might envision greater profits or efficiency through making a tremendously useful resource more scarce, by charging much higher than marginal cost, or by customizing solely for a particular application. While these perceptions might be true in the short term, or for that one firm's profits, they can have devastating effects for growth of the economy overall. The more general purpose the technology, the greater are the growth-dampening effects of allowing it to become locked down in the interest of a particular economic agent.166 The important feature of generative platforms, such as the Internet, is that users easily can do numerous things with them, many of which may not have been envisioned by the designers. If, for example, the Internet had been built solely as a platform for sending email, and required retooling to do anything else, most applications and business models never would have developed.167

B. The Internet as Complex Adaptive System

In addition to serving as a GPT, the Internet is also a complex adaptive system ("CAS"), whose architecture is much richer than the sum of its parts. As such, the micro interactions of ordinary people on the Internet lead to macro structures and patterns, including emergent and self-organizing phenomena.

Complexity can be architectural in origin.
It is believed that the dense interconnections within the "network of networks" produce strongly non-linear effects that are difficult to anticipate.168 Engineers understand that more complex systems like the Internet display more non-linearities; these occur (are "amplified") at large scales but do not occur at smaller scales.169 Moreover, more complex systems often exhibit increased interdependence between components, due to "coupling" between or within protocol layers.170 As a result, global human networks linked together by the Internet constitute a complex society, where more is different.171

163 Whitt and Schultze, Emergence Economics, at 280.
164 Whitt and Schultze, Emergence Economics, at 279-280.
165 Hofmokl, The Internet commons, at 241.
166 Whitt and Schultze, Emergence Economics, at 277.
167 Whitt and Schultze, Emergence Economics, at 277-78.
168 de La Chappelle, Multistakeholder Governance, at 16.
169 RFC 3439, at 4.
170 RFC 3439, at 5.
171 de La Chappelle, at 17.
As scientists are well aware, emergence is not some mystical force that magically comes into being when agents collaborate. Emergent properties are physical aspects of a system not otherwise exhibited by the component parts. They are macro-level features of a system arising from interactions among the system's micro-level components, bringing forth novel behavior. Characteristics of emergent systems include micro-macro effects, radial symmetry, coherence, interacting parts, dynamism, decentralized control, bi-directional links between the macro and micro levels, and robustness and flexibility.172 The brain is an example of a CAS: the single neuron has no consciousness, but a network of neurons brings forth, say, the perception of and appreciation for the smell of a rose.

Similarly, when agents interact through networks, they evolve their ways of doing work and discover new techniques. Out of this combined activity, a spontaneous structure emerges. Without any centralized control, emergent properties take shape based on agent relationships and the conditions in the overall environment. Thus, emergence stems from behavior of agents, system structures, and exogenous inputs.173

Emergent systems exist in an ever-changing environment and consist of complex interactions that continuously reshape their internal relationships. The many independent actions of agents unify, but they do not necessarily work toward one particular structure or equilibrium. For example, emergent systems can be robust to change, and they can be far better at evolving toward efficiency than top-down systems. On the other hand, emergent structures can fall apart when their basic conditions are altered in such a way that they work against the health of the system as a whole. The line between emergence-fostering actions and emergence-stifling actions sometimes can be difficult to discern.174

C.
The Internet as Common Pool Resource

A third perspective on the Internet comes to us from modern economic theory, where the Net is seen by many as a "common pool resource" or CPR. The term "commons" has had many uses historically, almost all contested.175 Elinor Ostrom has defined it simply as "a resource shared by a group of people and often vulnerable to social dilemmas."176 Yochai Benkler states that "the commons refer to institutional devices that entail government abstention from designating anyone as having primary decision-making power over use of a resource."177 The two principal characteristics that have been utilized widely in the analysis of traditional commons are non-excludability and joint (non-rivalrous) consumption.178

172 Whitt and Schultze, Emergence Economics, at 248 n.141.
173 Whitt and Schultze, Emergence Economics, at 247-48.
174 Whitt and Schultze, Emergence Economics, at 248-49.
175 Hess and Ostrom, Ideas, Artifacts, and Facilities, at 115.
176 Hofmokl, The Internet commons, at 229, quoting Ostrom (2007), at 349.
177 Benkler, The Commons as a Neglected Factor of Information Policy, Remarks at TPRC (September 1998).
178 Justyna Hofmokl, The Internet commons: towards an eclectic theoretical framework, International Journal of the Commons, Vol. 4, No. 1, 226 (February 2010), at 227. See also LEWIS HYDE, COMMON AS AIR (20 ) (Commons is a kind of property – including the rights, customs, and institutions that preserve its communal use – in which more than one person has rights).
In turn, a CPR has been defined as "a natural or man-made resource system that is sufficiently large as to make it costly (but not impossible) to exclude potential beneficiaries from obtaining benefits for its use."179 Ostrom and Hess have concluded that cyberspace is a CPR, similar as a resource to fishing grounds, grazing lands, or national security, that is constructed for joint use. Such a system is self-governed and held together by informal, shared standards and rules among a local and global technical community. This is so even as the resource units themselves – in this case data packets – typically are individually owned.180

Frischmann points out that traditional infrastructures are generally managed as commons, which well fits their role as a shared means to many ends.181 In the United States and elsewhere, government traditionally plays the role of "provider, subsidizer, coordinator, and/or regulator" of infrastructure.182 Studies written about the Internet as a CPR tend to focus on the technology infrastructure and the social network issues, rather than the institutions developed about the distributed information per se.183 However, Bernbom believes that many of the design elements of the Internet create the basic rules for managing it as a commons.184

Much of the thinking about the Internet as a CPR glosses over the functional specifics, which is a correctable mistake. Contrary to how some describe its resource role, the Internet is actually a complicated blend of private goods and public goods, with varying degrees of excludability and joint consumption.
Hofmokl does an excellent job analyzing the physical, logical, and content layers of the Internet, pointing out where its attributes as a resource match up to those of the commons.185 In particular the physical layers (physical networks and computing devices) and the content layers (digitized information) are mostly pure private goods, showing excludability combined with rivalrous consumption.186

However, when we are talking about the design attributes of the Internet, the elements we are focused on – the technical standards and protocols, including TCP-IP-HTTP, that define how the Internet and World Wide Web function – all constitute exclusively public goods, free for everyone to use without access restrictions.187 Further, as Frischmann duly notes, the Net's logical infrastructure – the open, shared protocols and standards –

179 ELINOR OSTROM, GOVERNING THE COMMONS: THE EVOLUTION OF INSTITUTIONS FOR COLLECTIVE ACTION (1990), at __.
180 Hess and Ostrom, Ideas, Artifacts, and Facilities, at 120-21.
181 FRISCHMANN, INFRASTRUCTURE, at 3-4.
182 FRISCHMANN, INFRASTRUCTURE, at 4.
183 Hess and Ostrom, Ideas, Artifacts, and Facilities: Information as a Common-Pool Resource, Law and Contemporary Problems, Vol. 66:111 (Winter/Spring 2003), at 128.
184 Gerald Bernbom, Analyzing the Internet as a Common Pool Resource: The Problem of Network Congestion, Pre-Conference Draft, International Association for the Study of Common Property, IASCP 2000, April 29, 2000, at 5.
185 Hofmokl, The Internet commons, at 232-238.
186 Hofmokl, The Internet commons, at 232-238. Hofmokl calls this "a dual structure within the Internet, of commercial and free access segments." Id. at 232.
187 Hofmokl, The Internet commons, at 235-36.
are managed as commons.188 So the key architectural components of the Net constitute a common pool resource, managed as a commons, even if many of the network's actual component parts – individual communications networks, proprietary applications and content, etc. – are private goods, or a blend of private and public goods.189 The Net's design attributes are what make it a commons resource.

*********************

Like a funhouse mirror, each of the three cross-functional perspectives sketched out above is a partially correct reflection of reality, and yet remains incomplete without the others. As a GPT, the Net serves a vital function as a general, foundational platform for many people. As a CAS, the Net presents emergent properties, often with dynamic and unanticipated consequences. As a CPR, the Net provides a shared resource for all to utilize for a mix of purposes and ends. From its micro-functions, the Internet generates the macro-phenomena of a general platform, a complex system, and a shared resource.

IV. The Emergence of Net Effects: Benefits and Challenges

It seems almost a truism to point out that the Internet on the whole has done some very good things for modern society. In his recent book Infrastructure, Brett Frischmann does an admirable job explicating a lengthy list of such benefits.190 The more interesting point is to figure out exactly why that would be the case. Using the power of abductive reasoning (from effect to cause),191 we can determine that the very design attributes outlined above have led to a raft of benefits for users, as well as some challenges.
In other words, we can safely come to the presumption that the modular, end-to-end, interconnected, and agnostic Internet provides real economic and social "spillovers" value.192 That means the Internet's social returns exceed its private returns, because society realizes benefits above and beyond those realized by individual network providers and users.193

Frischmann explains in some detail precisely how infrastructure generates such spillovers that result in large social gains.194 In particular, managing the Internet's infrastructure as a commons sustains a spillovers-rich environment.195 Here are a few of the more important economic, social, and personal gains from the Internet's design attributes.

188 FRISCHMANN, INFRASTRUCTURE, at 320 n.10.
189 For example, Bernbom divides the Internet into the network commons, the information commons, and the social commons. However, the notion that each of the Net's resources has the characteristics of a CPR seems to ignore the private goods nature of many of them. See Bernbom, Analyzing the Internet as a Common Pool Resource, at 1-2.
190 FRISCHMANN, INFRASTRUCTURE, at 317.
191 DANIEL W. BROMLEY, SUFFICIENT REASON, VOLITIONAL PRAGMATISM AND THE MEANING OF ECONOMIC INSTITUTIONS (2006).
192 Whitt and Schultze, Emergence Economics, at 297.
193 FRISCHMANN, INFRASTRUCTURE, at 12.
194 FRISCHMANN, INFRASTRUCTURE, at 5.
195 FRISCHMANN, INFRASTRUCTURE, at 318.
A. Engine of Innovation

Ideas are the raw material for innovation. In the ordinary transformational cycle, ideas become concepts, which become inventions, which are utilized for commercial or other purposes. They are the recipes for combining atoms and bits into useful things. While the physical components are limited, the ideas themselves essentially are unlimited – characterized by increasing returns, continued re-use, and ease of sharing. Innovation, by contrast, is the application of ideas – invention plus implementation. Ideas and innovation form an essential feedback cycle, where input becomes output, becomes input again.196

If there is any one business lesson of the last decade that has acquired near-universal empirical support and expert agreement, it is this: innovation is a good thing. The creation of new and different objects, processes, and services is at the heart of any rational conception of economic growth and the fulfillment of human potential. No matter what you call it – creativity, entrepreneurism, novelty, ingenuity – the global economy feeds on the constant infusion of the products of innovation.197 The proliferation of new ideas and inventions, channeled through generative networks of agents, provides powerful fuel for economic growth and other important emergent effects.198

Network architectures affect economic systems, by both enabling and constraining certain behaviors.199 Barbara van Schewick has explained the strong linkage between the way the Net has been constructed – including modularity, layering, and a broad version of the end-to-end principle – and the prevalence of innovation.200 Not surprisingly, the Internet as a networked platform helps enable all the attributes of an innovative environment. Generally speaking, a greater ability of agents to connect and explore new modes of production will facilitate the contingent connections that a top-down designer likely will not foresee.
Better global information sharing and feedback between agents facilitates better local decisions. The system as a whole can take a leap forward when new innovations emerge from this process and are replicated throughout the network by willing agents. As a result, the Internet serves as a particularly effective innovation engine, rapidly developing, diffusing, and validating scores of novel inventions.

Indeed, numerous empirical studies show conclusively the types of institutional, organizational, and networked environments within which innovation actually thrives. In brief, innovation tends to flow:

-- from the users, not the consumers or providers;
-- from the many, not the few;
-- from the connected, not the isolated;
-- from individuals and small groups, not larger organizations;

196 Whitt and Schultze, Emergence Economics, at 278-284.
197 Whitt and Schultze, Emergence Economics, at 267.
198 Whitt, Adaptive Policymaking, at 494.
199 VAN SCHEWICK, INTERNET ARCHITECTURE, at Part I.
200 VAN SCHEWICK, INTERNET ARCHITECTURE, at Part III.
-- from the upstarts, not the established;
-- from the decentralized, not the concentrated;
-- from the flat, not the hierarchical; and
-- from the autonomous, not the controlled.201

Innovation is produced from those users motivated by many incentives, including profit, pride, and personal fulfillment. There is also a separate "demand side" perspective to innovation, based on extensive research showing that "venturesome" consumers adopting and using technology are crucial to maintaining economic prosperity.202

The Internet provides the enabling background conditions for the creation and dissemination of innovation, and feedback loops: open, connected, decentralized, autonomous, upstarts, etc. Commentators have observed the strong correlation between robust, ends-oriented innovation and the architecture of the Internet.203 Lee McKnight notes that "the Internet works its magic through rapid development and diffusion of innovations." The Internet Protocol acts as a "bearer service" – the general purpose platform technology linking technologies, software, services, customers, firms, and markets – so that the Internet is "an innovation engine that enables creation of a remarkable range of new products and services."204 Michael Katz believes that "[t]he hourglass architecture allows innovations to take place at the application and transport layers separately. This ability for independent innovation speeds the rate of innovation and increases the ability of entrepreneurs to take advantage of new opportunities."205

In functional terms, one can envision the open interface to the Internet Protocol serving as the virtual gateway to its functionality, leaving all the applications and content and services residing in the higher layers free to evolve in a vast number of ways.

B. Spur to Economic Growth

Even a cursory review of contemporary economic statistics shows that the Internet has been and continues to be a real boon for global economies.
As one example, the McKinsey study "Internet Matters" shows that over the past five years the Internet has accounted for over one-fifth of GDP growth in mature countries.206 That same report explains that, for every job "lost" to the Internet, some 2.6 new jobs are created.207 The Net also increases the productivity of smaller businesses by at least ten percent, and enables them to export twice as much as before.208 Further, for every ten-percentage-point increase in broadband penetration (which of course enables high-speed Internet access), from 0.9 to 1.5 percent is added to per capita GDP growth, with similar increases in labor

201 Whitt and Schultze, Emergence Economics, at 267-68.
202 Whitt and Schultze, Emergence Economics, at 268.
203 Whitt and Schultze, Emergence Economics, at 269.
204 [citation]
205 [citation]; Whitt, Horizontal Leap, at 630 n.160.
206 McKinsey & Company, Internet Matters (2011), at __.
207 McKinsey, Internet Matters, at __.
208 McKinsey, Internet Matters, at __.