Wednesday, November 19, 2014

My Network Neutrality Paper of Eight Years Ago

Back when I was fresh out of school, and unsure of what I was going to do, I wrote this paper as part of an internship application to the John Locke Foundation.  I'm reproducing it here mainly as evidence that the push-back against strong new regulations (or rather, against applying really old regulations to new technology) is not some recent, anti-Obama fervor.

Looking back, the prose is kinda horrifying at times.  That first sentence in particular is extra-stuffed.  So I'm not trying to show off by sharing.  In fact, I know it must have been bad, because I never heard back.  Oh well, being an engineer has been pretty great.

I've also noticed a bit of a miss in the analysis, as I've annotated below (Update: I've since heard about this story, which highlights that no, basically what I said in the original paper was how Netflix was operating - they simply chose a different transit provider).  In all, though, I think it still captures the issue pretty well.  One big miss was the rise of wireless broadband, which in retrospect should have been pretty obvious, but is only mentioned in passing.  But its growth in the marketplace only strengthens the point of the paper.
Network Neutrality
John Covil
As the ubiquity of the Internet becomes more complete in modern life, new policy issues arise as many people form different opinions as to just what the Internet should be.  There is currently very heated debate on the subject of so-called ‘Network Neutrality’.  The concept is born out of very old regulatory laws, and the discussion is hampered by a lack of understanding of some or all of the key issues surrounding it.  When talking of network neutrality, people are generally referring to the principle that transmission of data across the Internet is content neutral.  That is, data is treated the same as it traverses the Internet regardless of the application, source, or destination.  One must consider both the technical and regulatory aspects of the Internet when making policy decisions on a neutrality stance.
First of all, the concept that most people have about the Internet is shaped only by their experiences sitting at their PCs, and not by an understanding of how the Internet actually behaves.  The Internet is, of course, a network of networks.  These networks allow computers to communicate for a variety of purposes.  Instead of one single, vast connection of individual computers, the Internet is best described as an interconnection of autonomous systems.  Each network is maintained by an individual or individuals as part of some institution.  One network does not have control over another network.  Harmony is achieved through adherence to commonly accepted industry standards.  So instead of an Internet that is sent down from on high as a free and open land for the mutual benefit of all mankind, the real Internet is one that is controlled at great expense by private institutions, corporations, municipalities, universities, &c.  The connections between these networks are provided by Internet Service Providers (ISPs), which are often tiered into two or three levels, and owned and operated by telecommunications companies at great expense in terms of equipment and labor.
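To make the "network of networks" idea concrete, here is a minimal sketch in Python of a toy map of autonomous systems (all names and links are invented).  No node knows more than its own neighbors, yet end-to-end paths still emerge, and a path can even be found around a network one would rather avoid:

```python
from collections import deque

# Toy interconnection of autonomous systems (all AS names are made up).
# Each network only lists its own neighbors; no one controls the whole map.
LINKS = {
    "AS-home-isp":     ["AS-transit-1", "AS-transit-2"],
    "AS-transit-1":    ["AS-home-isp", "AS-content-host"],
    "AS-transit-2":    ["AS-home-isp", "AS-content-host"],
    "AS-content-host": ["AS-transit-1", "AS-transit-2"],
}

def find_path(src, dst, avoid=frozenset()):
    """Breadth-first search for any route that skips the networks in `avoid`."""
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for neighbor in LINKS[path[-1]]:
            if neighbor not in seen and neighbor not in avoid:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no route exists without the avoided networks

print(find_path("AS-home-isp", "AS-content-host"))
# If one transit network misbehaves, traffic can be steered around it:
print(find_path("AS-home-isp", "AS-content-host", avoid={"AS-transit-1"}))
```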
Important for understanding the behavior of the Internet with regard to neutrality is an understanding of the way the Internet is abstracted.  The Open Systems Interconnection (OSI) reference model divides networks into seven layers (often simplified to five).  The behavior of a network at each layer is governed by different protocols.  The data is sent in packets, and as each data packet is constructed, the data from the higher layer is encapsulated (and ignored) by the layer below it.  The top four layers are governed by the end systems.  The bottom three are governed by the media that moves and routes the data through the Internet.  The way the Internet currently behaves, the routers that perform the yeoman’s work in moving data only operate up to layer three.  This is the Internet Protocol (IP) layer.  Devices operating at the IP layer keep routing tables, which tell the device which IP address to forward packets to, based on the destination address (or range of addresses).  Therefore, much of the Internet traffic is managed with no concern at all as to what application is actually being used.  Also, packets are forwarded on a ‘best effort’ basis.  Due to the transient nature of routing tables, and some attempts at congestion avoidance, packets transmitted from one source to the same destination, for the same application, may not take the same path.  Therefore, they may arrive out of order.  All of this means that the Internet, operating as it does now, cannot offer any effective kind of Quality of Service (QoS) guarantees.  This has a tremendous impact on the future of the Internet as a potentially new way to deliver entertainment.
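That destination-only lookup is easy to sketch.  Below is a minimal Python illustration of a layer-three forwarding decision: a toy routing table (the prefixes and next-hop names are invented) matched by longest prefix, with nothing about the application ever consulted.

```python
import ipaddress

# A toy routing table: each entry maps a destination prefix to a next hop.
# Real routers build these dynamically via routing protocols; the prefixes
# and hop names here are invented for illustration.
ROUTES = [
    (ipaddress.ip_network("10.0.0.0/8"),     "router-a"),
    (ipaddress.ip_network("10.1.0.0/16"),    "router-b"),
    (ipaddress.ip_network("192.168.1.0/24"), "router-c"),
    (ipaddress.ip_network("0.0.0.0/0"),      "default-gateway"),
]

def next_hop(dest: str) -> str:
    """Forward by longest-prefix match on the destination address only.

    Note what is *not* consulted: the application, the source, and the
    payload.  This is the content-blind layer-three behavior described above.
    """
    addr = ipaddress.ip_address(dest)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    # The most specific (longest) matching prefix wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.2.3"))   # router-b (the /16 beats the /8)
print(next_hop("8.8.8.8"))    # default-gateway
```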
Telecommunications companies are looking to compete with the cable companies by offering high-definition television over the Internet (IPTV).  With the Internet as it is now, this is effectively impossible.  While in some small instances high quality, live broadcasts have been transmitted over the Internet, ‘toll quality’ television would require more QoS guarantees.  The early days of the Internet did not create a need for separate treatment of data based on the application (though the protocols were designed with that purpose in mind).  But now, there are an increasing number of interactive and bandwidth intensive applications that require more speed and lower latency (the time it takes data to go from sender to receiver) than simple data transfers.  While some attempts have been made at implementing differentiated services, near-future rollout of IPTV would require a fundamental shift in the way the Internet operates.  Live television places immense demands on network resources for low jitter.  Jitter (in this context) is variation in the amount of time information takes to arrive.  To put it simply, if you’re displaying the first set of frames of a television broadcast, you need to have the second set of frames there in time to show them when necessary, whether or not frame set two arrived before frame set three.  Jitter is a result of variable latency, and is aggravated by out-of-order packet delivery.  Some secondary network protocols, like Asynchronous Transfer Mode (ATM), guarantee in-order packet delivery for teleconferencing applications, but usually at the expense of other qualities of service, and they would be impractical to implement on the scale needed for IPTV.  So overcoming the challenges posed by transmitting television across IP networks requires treating IPTV traffic differently than other traffic.  This violates the concept of network neutrality.
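As a rough illustration of why jitter is the binding constraint for video, consider a receiver that buffers briefly and then must show one frame every 33 ms.  The arrival times below are invented; the point is that a frame arriving out of order can still be played, but a frame arriving after its playout deadline cannot.

```python
# Toy playout (de-jitter) buffer; arrival times in ms are invented.
# The receiver delays playback so it can reorder frames and absorb
# variation in network delay -- but only up to a point.
arrivals = [(1, 10), (3, 38), (2, 55), (4, 70), (5, 210)]  # (frame, arrival ms)

PLAYOUT_DELAY_MS = 60    # initial buffering before frame 1 is shown
FRAME_INTERVAL_MS = 33   # roughly 30 frames per second

for frame, arrived_ms in sorted(arrivals):   # the buffer reorders by frame number
    deadline_ms = PLAYOUT_DELAY_MS + (frame - 1) * FRAME_INTERVAL_MS
    if arrived_ms <= deadline_ms:
        print(f"frame {frame}: play at {deadline_ms} ms (arrived {arrived_ms} ms)")
    else:
        print(f"frame {frame}: missed its {deadline_ms} ms deadline -> visible stutter")
```

A larger playout delay hides more jitter, but live television cannot buffer far ahead, which is why raw bandwidth alone does not solve the problem.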
Controversy over the elimination of neutrality is a fairly recent phenomenon.  The IP header format actually includes an eight-bit field for differentiated service.  By flagging certain packets as higher priority, they can move to the front of queues and buffers, and face far fewer traffic and congestion problems across a given network.  While this possibility has always been in place, as a general rule, networks did not treat data this way.  The principles of neutrality have been largely followed, although in recent years many residential ISPs have been blocking or throttling traffic corresponding to illegal file transfers, or transfers with the BitTorrent protocol.  What is causing the most commotion now is that telecommunications companies want to charge content providers for different levels of service.  The fear is that the free and open Internet that we know and love today will vanish overnight.  As a technical matter, these fears are unwarranted.  While the Internet has been largely content neutral up to this point, the fact is that each network is autonomous.  Routing to other networks is handled by ‘exterior’ routing protocols.  If one network does not like the way a neighboring network behaves, it can simply adjust its traffic to route around the problematic network [in practice, this hasn't always been the case for home consumers, as seen with the infamous fight between Comcast and Netflix, but maybe it is, even when people don't see it that way].  What’s contributing to the emotion is the scale of the proposed change, and the legal history surrounding the telecommunications industry.
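Those priority bits are directly visible to applications today.  The sketch below (Python, with an invented payload and a reserved documentation address) marks a UDP socket's traffic with the standardized "Expedited Forwarding" code point; whether any network along the path honors the marking remains each network's own choice.

```python
import socket

# Mark outgoing packets with a differentiated-services value.
# DSCP 46 ("Expedited Forwarding") is a standardized code point for
# latency-sensitive traffic; it occupies the upper 6 bits of the
# 8-bit differentiated-services field in the IP header.
EF_DSCP = 46
TOS_VALUE = EF_DSCP << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# IP_TOS is available on Linux and most Unix-like systems.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Every datagram from this socket now carries the EF marking;
# 192.0.2.10 is a reserved documentation address.
sock.sendto(b"latency-sensitive payload", ("192.0.2.10", 5004))
```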
The current controversy is magnified by the lack of options for ‘last mile’ broadband to the home.  A few corporations have invested large amounts of money in stringing high speed Internet to the home, most notably Verizon.  Verizon is rolling out a fiber optic network all the way to the home in many areas.  It is leading the charge to bill for differentiated services, both to make IPTV possible and to recoup the costs of its investments in the last mile.  As noted, operation of these networks is not free, and they are the property of their respective owners.  The argument that a corporation should be able to do what it wants with what it owns is met with charges of monopolization.  The problem is that the monopolization was a result of regulation.  The Telecommunications Act of 1996 categorized ISPs as providing either Information Services (IS) or Telecommunications Services (TS).  Cable based ISPs were categorized as IS providers, while DSL providers (usually the old telecommunications companies) were categorized as TS providers.  While IS providers don’t face much regulation, TS providers were required to ‘unbundle’ services, as they always had under common carrier laws.  This put the telecommunications companies at an unfair disadvantage, as they had to open up their networks to other users.  Investment in last-mile wiring has suffered as a result.  However, it has not been dead.  Even in the face of such regulation, Verizon has stepped in to begin wiring fiber to the home.  This is in areas where DSL options already exist.  So even with current regulation, last-mile monopolies are not set in stone.  If unbundling regulations were repealed, and no pro net neutrality regulations were added, the last-mile marketplace would be opened considerably, and even more companies would invest in the last mile.  However, by adding regulations, as the pro net neutrality supporters are wont to do, they will actually remove incentive for last-mile investment, and further cement the monopoly of the cable and telecommunications providers [emphasis added 11/19/14].  While the Internet would remain as content neutral as it is today, Verizon would have little reason to continue its fiber rollout, and consumers would face, at most, three or four broadband service options: cable, DSL, wireless, and fiber.
Net neutrality proponents want to throw bad policy after bad, proposing new regulations to make up for the problems caused by old regulations.  Deregulation of the broadband industry would result in more options for the consumer, along with an increase in the ability of the Internet to carry entertainment.  Fears that disfavored content will be stifled are based on the old situation in which a consumer has only one or two broadband choices.  With more competition in the last mile, even without content neutrality, consumers will demand and receive access to all of their old favorite websites.  It is important that we not fall prey to the rash appeals of the Internet content providers, and that we take a stoic, reasoned look at the telecommunications policy that will have an increasingly large impact on all our lives.
