A response to Barbara van Schewick: code needs (only a little) help from the law

Barbara van Schewick posted a really thoughtful analysis of how application-specific vs. application-agnostic discrimination directly affects innovation, looking at an actual example of a Silicon Valley startup. I think her points are right on, and I strongly support the rationale for resisting “application-specific” discrimination.

In fact, Barbara’s point is the key to the whole debate. The future of the Internet requires that applications can be invented by anyone and made available to everyone, and that information shared on the net by anyone be accessible to anyone. That property is “under fire” today from Internet access providers, from nation states, and from others who wish to limit the Internet’s reach and capabilities. I wholeheartedly support her points and her proposal.

I think it’s important to remind ourselves of a further point, one that is quite obvious to those who have participated in the Internet at a technical level, from its original design to the present, so I thought I’d write a bit more, focusing on the fact that the Internet was designed in a way that makes application-specific discrimination difficult. Barbara knows this, since her work is derived from that observation, but policymakers not steeped in the Internet’s design may not. As we pursue the process of making such rules, it is important to remember that such rules are synergistic with the Internet’s own code, reinforcing the fundamental strengths of the Internet.

So I ask the question here: what do we need from the “law” when the “code” was designed to do most of the job? Well, the issue here is the evolution of the Internet’s “code” – the implementation of the original architecture. The Internet’s code will continue to make application-specific discrimination difficult as long as a key aspect of its original design is preserved – that the transport portion of the network need not know the meaning of the bits being transported on any link. We struggled to make all our design decisions so that this would remain true. Barbara has made the case that this design choice is probably the most direct contributor to the success of the Internet as a platform for innovation.

My experience with both startups and large companies deciding to invest in building on general purpose platforms reinforces her point. Open platforms really stimulate innovation when it is clear that there is no risk of the platform being used as a point where the platform vendor can create uncertainty that affects a product’s ability to reach the market. This is especially true for network communications platforms, but was also true for operating systems platforms like Microsoft DOS and Windows, and hardware platforms like the Apple II and Macintosh in their early days. In their later days, there is a tendency for the entities that control the platform’s evolution to begin to compete with the innovators who have succeeded on the platform, and also to try to limit the opportunities of new entrants to the platform.

What makes the Internet different from an operating system, however, is that the Internet is not itself a product – it is a set of agreements and protocols that create a universal “utility” that runs on top of the broadest possible set of communications transport technologies, uniting them into a global framework that provides the ability for any application, product or service that involves communications to reach anyone on the globe who can gain access to a local part of the Internet.

The Internet is not owned by anyone (though the ISOC and ICANN and others play important governance roles). Its growth is participatory – anyone can extend it and get the benefits in exchange for contributing to extending it. So controlling the Internet as a whole is incredibly hard. However, certain parts of the Internet can be controlled in limited ways. In particular, given that local authorities tend to restrict the right to deploy fiber, and countries tend to restrict the right to transmit or receive radio signals, the first or last mile of the Internet is often a de facto monopoly, controlled by a small number of large companies. Those companies have the incentives and the ability to engage in certain kinds of discrimination.

However, a key part of the Internet’s design, worth repeating over and over, is that the role of the network is solely to deliver bits from one user of the Internet to another. The meaning of those bits never, under any circumstances, needs to be known to anyone other than the source or the destination for the bits to be delivered properly. In fact, it is part of the specification of the Internet that the source’s bits are to be delivered to the destination unmodified and with “best efforts” (a technical term that doesn’t matter for this post).

In the early days of the Internet design, my officemate at MIT, Steven T. Kent, who is now far better known as one of the premier experts on secure and trustworthy systems, described how the Internet could in fact be designed so that all of the bits delivered from source to destination were encrypted with keys unknown to the intermediate nodes, and we jointly proposed that this be strongly promoted for all users of the Internet. While this proposal was not accepted, because encryption was thought to be too expensive to require for every use, the protocol design of TCP and all the other standard protocols has carefully preserved the distinctions needed so that end-to-end encryption can be used. That forces the design not to depend in any way on the content: since encryption means that no one other than the source or destination can possibly understand the meaning of the bits, the network must be able to do a perfectly correct job without knowing them.
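To make that concrete, here is a minimal sketch of what end-to-end encryption implies for the network: only the payload is sealed, and everything needed for delivery stays outside it. The use of the third-party cryptography library’s Fernet construction and the addresses shown are my own illustrative assumptions, not part of the original proposal.

```python
# A minimal sketch (my own illustration, not the original proposal) of why
# end-to-end encryption forces the network to ignore content: the key is shared
# only by the endpoints, so intermediate nodes see an opaque payload, yet the
# addressing information they need for delivery remains readable.
from cryptography.fernet import Fernet   # third-party: pip install cryptography

key = Fernet.generate_key()              # known only to source and destination

packet = {
    "src": "198.51.100.10",              # readable by every router on the path
    "dst": "203.0.113.7",                # readable by every router on the path
    "payload": Fernet(key).encrypt(b"application data the network never interprets"),
}

# A router can deliver this packet correctly using only src/dst (plus any
# handling labels); it cannot tell which application produced the payload.
```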

Similarly, while recommendations were made for standard “port numbers” to be associated with some common applications, the designers of the Internet recognized that they should not assign any semantic meaning to those port numbers that the network would require to do its job of delivering packets. Instead, we created a notion of labeling packets in their header for various options and handling types, including any prioritization that might be needed to do the job. This separation of functions in the design meant that the information needed for network delivery was always separate from application information.
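As an illustration of that separation, here is a minimal sketch of how a forwarder reads an IPv4 header. The field layout follows RFC 791, but the function and the packet it parses are a hypothetical illustration of my own, not any particular router’s code: the only fields the network uses are the addresses and the handling label, and the rest of the packet, including port numbers and application data, is an opaque payload slice.

```python
# A sketch of the "envelope vs. contents" separation described above.
# The header layout is standard IPv4 (RFC 791); the function itself is an
# illustration, not production router code.
import socket
import struct

def parse_ipv4_header(packet: bytes) -> dict:
    """Extract only what the network needs to deliver the packet."""
    (ver_ihl, dscp_ecn, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    header_len = (ver_ihl & 0x0F) * 4
    return {
        "src": socket.inet_ntoa(src),     # source address
        "dst": socket.inet_ntoa(dst),     # destination address: where to deliver
        "dscp": dscp_ecn >> 2,            # handling/priority label set at the edge
        "ttl": ttl,
        "payload": packet[header_len:],   # ports and application data live here,
                                          # and the network never needs to read them
    }
```

Everything a router touches comes from the header; the payload slice, where TCP and UDP port numbers sit, passes through unexamined.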

Why did we do this? We did not do it to prevent some future operator from controlling the network, but for a far more important reason – we were certain that we could not predict what applications would be invented. So it was prudent to make the network layer able to carry any kind of application, without having to change the network to provide the facilities needed (including prioritization, which would be specified by the application code running at the endpoints controlled by the users).

So here’s my concern with Barbara’s latest post, and in fact with much of the policy debate at the FCC and elsewhere: the Internet’s design already requires that the network be application agnostic as a matter of “code”. More to the point, because applications don’t have to tell the network of their existence, the network can’t be application specific if it follows the Internet standards.

So why are we talking about this question at all, in the context of rules about the Open Internet at the FCC? Well, it turns out that there are technologies out there that try to guess what applications generated particular packets, usually implemented in relatively expensive add-on hardware that inspects every packet flowing through the network. Generically called “deep packet inspection” technologies and “smart firewall” technologies, they look at various properties of the packets between a source and destination, including the user data contents and port numbers, and make an inference about what the packet means. Statistically, given current popular applications, they can be pretty good at this. But they would be completely stymied by a new application they have never seen before, and also by encrypted data.
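A toy sketch of that kind of guessing makes the limitation obvious. The signature table and port list below are invented examples of my own, not any vendor’s ruleset; the point is only that the inference is pattern matching against what is already known.

```python
# A toy illustration (a hypothetical example, not any vendor's product) of why
# DPI classification is statistical guesswork: it matches known port numbers
# and payload signatures, so anything novel or encrypted falls through.
KNOWN_PORTS = {80: "http", 443: "https", 25: "smtp"}       # assumed port table
KNOWN_SIGNATURES = {b"GET ": "http", b"\x16\x03": "tls"}   # assumed first-bytes heuristics

def guess_application(dst_port: int, payload: bytes) -> str:
    """Guess which application produced a packet, using pure heuristics."""
    for prefix, name in KNOWN_SIGNATURES.items():
        if payload.startswith(prefix):
            return name
    if dst_port in KNOWN_PORTS:
        return KNOWN_PORTS[dst_port]
    return "unknown"          # novel application or encrypted payload
```

Any packet whose payload is encrypted, or that belongs to a protocol nobody has written a signature for yet, lands in the “unknown” bucket, and that bucket is precisely where tomorrow’s innovative applications start out.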

What’s most interesting about these technologies is that they are inherently unreliable, given the open design of the Internet, but they can still be attractive to someone who wants to limit applications to a small known set. An access network that wants to charge extra for certain applications might be quite happy to block or exclude any applications that generate packets its deep packet inspection or smart firewall technologies cannot understand.

Support for such an idea is growing – allowing only very narrow sets of traffic through, and blocking everything else, including, by necessity, any novel or innovative applications. The gear to scan and block packets is now cheap enough, and the returns from charging application innovators for access to customers are thought to be incredibly high by many of those operators, who want a “piece of the pie”.

So here’s the thing: on the Internet, merely requiring those who offer Internet service to implement the Internet design as it was intended – without trying to assign meaning to the data content of the packets – would automatically make the service application agnostic.

In particular: we don’t need a complex rule defining “applications” in order to implement an application agnostic Internet. We have the basis of that rule – it’s in the “code” of the Internet. What we need from the “law” is merely a rule that says a network operator is not supposed to make routing decisions, packet delivery decisions, etc. based on the contents of the packet. Only the source and destination addresses and the labels put on the packet to tell the network about special handling, priority, etc. need to be understood by the network transport, and that is how things should stay, if we believe that Barbara is correct that only application-agnostic discrimination makes sense.

In other words, the rule would simply embody a statement of the “hourglass model” – that IP datagrams consist of an envelope that contains the information needed by the transport layer to deliver the packets, and that the contents of that envelope – the data itself – are private and are to be delivered unchanged and unread to the destination. The other part of the hourglass model is that port numbers do not affect delivery – they merely tell the recipient which process is to receive the datagram, and have no other inherent meaning to the transport.
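Both halves of that statement fit in a few lines. In the sketch below, the forwarding table, the process table, and the field names are all hypothetical, chosen only for illustration: the network-side function never sees a port number, and the endpoint-side function is the only place a port means anything.

```python
# A minimal sketch of the hourglass rule described above, using assumed data
# structures (FORWARDING_TABLE and LISTENING_PROCESSES are hypothetical).
FORWARDING_TABLE = {"203.0.113.7": "link-2"}   # dst address -> next hop (assumed)
LISTENING_PROCESSES = {443: "web-server"}      # dst port -> local process (assumed)

def forward(header: dict) -> str:
    """Network side: route on the destination address (plus any handling label)."""
    # header["dscp"] could select a queue, but never a different policy per application.
    return FORWARDING_TABLE[header["dst"]]

def deliver_to_process(dst_port: int, data: bytes) -> None:
    """Endpoint side: the port only says which local process gets the bytes."""
    process = LISTENING_PROCESSES.get(dst_port, "default-handler")
    print(f"handing {len(data)} bytes, unread and unmodified, to {process}")
```

A rule that says “deliver based on the envelope only” is, in effect, a rule that the second function’s inputs are none of the network’s business.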

Such a rule would reinforce the actual Internet “code”, because that original design is under attack by access providers who claim that discrimination against applications is important. A major claim that has been made is that “network management” and “congestion control” require application-specific controls. That claim is false, but it is justified by complex hand-waving references to piracy and application-specific “hogging”. Upon examination, there is nothing specific about the applications that hog or the technologies used by pirates. Implementing policies that eliminate hogging or detect piracy doesn’t require changes to the transport layer of the Internet.

There has been a long tradition in the IETF of preserving the application-agnostic nature of the Internet transport layer. It is often invoked by the shorthand phrase “violating the end-to-end argument”. That phrase was meaningful in the “old days”, but to some extent the reasons why it was important have been lost on the younger members of the IETF community, many of whom were not even born when the Internet was designed. They need reminding, too – there is a constant temptation to throw application-specific “features” into the network transport, from vendors of equipment, from representatives of network operators wanting a handle to control competition against non-Internet providers, and so on, as well as a constant tension driven by smart engineers wanting to make the Internet faster, better, and cheaper by questioning every aspect of the design. This design tradition pushed designers to implement functions outside the network transport layer whenever possible, and to put only the most general and simple elements into the network to achieve the necessary goal. For example, network congestion control is managed by having the routers merely detect and signal the existence of congestion back to the edges of the network, where the sources can decide to re-route traffic and where traffic engineers can decide to modify the network’s hardware connectivity. This decision means that the only function needed in the network transport itself is application-agnostic – congestion detection and signalling.
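Here is a toy sketch of that congestion design, under assumptions of my own (the queue threshold, class names, and feedback path are invented for illustration, though the marking idea is essentially what ECN later standardized):

```python
# A toy sketch of the application-agnostic congestion mechanism described above.
# The threshold and classes are illustrative assumptions, not a real router or TCP stack.
from collections import deque

CONGESTION_THRESHOLD = 50   # assumed queue depth at which the router signals

class Router:
    def __init__(self):
        self.queue = deque()

    def enqueue(self, packet: dict) -> None:
        # The router never asks which application a packet belongs to;
        # it only flips a congestion flag when its queue is getting deep.
        if len(self.queue) >= CONGESTION_THRESHOLD:
            packet["congestion_experienced"] = True
        self.queue.append(packet)

class Sender:
    def __init__(self):
        self.window = 10

    def on_feedback(self, congestion_seen: bool) -> None:
        # The endpoint, not the network, decides how to respond to the signal.
        self.window = max(1, self.window // 2) if congestion_seen else self.window + 1
```

The router’s only job is to flip a bit when its queue gets long; deciding what to do about it stays at the edges, with the sources, exactly as described above.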

So I really value Barbara’s contribution, reminding us of two things:

  • Application specific discrimination harms everyone who uses the Internet, because it destroys the generativity of the Internet, and
  • The Internet’s design needs a little help these days, from the law, to reinforce what the original code was designed to do.

The law needs to be worked out in synergy with the fundamental design notion of the Internet, and I believe it can be a small bit of law at this point in time, because the Internet is supposed to be that way, by design. If the Internet heads in a bad direction through new designs, perhaps we might want to seek more protections for this generativity, which is so important to the world.

Note: My personal view is that the reason this has become such an issue is that policymakers are trying to generalize from the Internet to a broad “network neutrality” for networks that are not the Internet and that don’t have the key design elements the Internet has had from the start. For example, the telephone network’s design did not originally separate content from control – it used “in-band signalling” to control the routing of phone calls. To implement “neutrality” in the phone network would require actually struggling with the fact that certain sounds (the 2600 Hz tone, for example) had to be observed by the network even though they occurred in the user’s own channel. (This also led to security problems, but it was done to make “interconnect” between phone switches easy.)

