Networking (I)
Summarized by Neil Stratford, Cambridge University


Active Network Vision and Reality: Lessons From a Capsule-Based System
David Wetherall, Univ. of Washington

The talk focused on the experiences gained from developing the ANTS Active Network Toolkit, a user-level implementation of some 10,000 lines of code. The lessons were generally positive. The ANTS architecture is believed to enable the deployment of useful services, with the exception of policy enforcement (firewalls) and services that override resource allocation (e.g., a guaranteed rate). Performance was only a few orders of magnitude off the ideal, but the results show the current implementation would be usable up to T1 rates. The levels of protection provided are equivalent to those in today's Internet: services are unable to corrupt each other, but resource management remains an unsolved issue and is the subject of future research in the active-networks community. Wetherall specifically pointed out that the use of TTLs is a weak solution, since it ignores the possibility of attacks that cause work to be carried out on other nodes; the current architecture falls back on trust mechanisms.
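To make the capsule idea and the TTL concern concrete, here is a minimal, hypothetical sketch. ANTS itself is a Java system, so this OCaml fragment and its names are illustrative assumptions only, not the ANTS API; it shows a per-capsule resource bound being decremented at each node, and why such a bound limits one capsule's hops but only loosely bounds the aggregate work a fanning-out capsule can cause elsewhere in the network.

    (* Hypothetical sketch of a TTL-style resource bound on capsules.
       Not the ANTS API; ANTS capsules are Java classes. *)
    type capsule = { ttl : int; dest : string }

    (* At each active node the capsule's code runs, its TTL is
       decremented, and zero or more capsules are forwarded onward. *)
    let process_at_node (c : capsule) : capsule list =
      if c.ttl <= 0 then []                    (* bound exhausted *)
      else [ { c with ttl = c.ttl - 1 } ]      (* normal forwarding *)

    (* The weakness: a capsule that fans out (e.g., for multicast)
       spawns children that each carry their own TTL, so the total
       work performed on other nodes is only loosely bounded, and a
       node must fall back on trusting the code it loads. *)
    let fan_out (c : capsule) (next_hops : string list) : capsule list =
      if c.ttl <= 0 then []
      else List.map (fun d -> { ttl = c.ttl - 1; dest = d }) next_hops

    let () =
      fan_out { ttl = 3; dest = "r0" } [ "r1"; "r2"; "r3" ]
      |> List.iter (fun c -> Printf.printf "forward to %s (ttl %d)\n" c.dest c.ttl)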

Someone from INRIA kicked off the questions by asking whether advantages could be found by using domain-specific languages such as PLAN, for example through proofs that an active program cannot loop. Wetherall replied that this is possible for some programs, but unlikely in general. He added that analysing programs to prove that code will not loop is valuable, and that such techniques will help in the future.

Jeff Mogul from Compaq WRL observed that there appears to be a contradiction in the design of Dave's system: the performance is best at the edges of the network, where there are plenty of cycles per packet, but the applications described are better suited to the center of the network. Dave answered that there is no contradiction: the applications also work at the edges, and some things are possible in the core as well, such as Random Early Discard and Explicit Congestion Notification. It's a continuum.

Ken Birman from Cornell stated that it is time to call the question of whether active networks are ever going to come to anything; he has yet to see one good reason for using an active network. Dave's response: you don't need novel applications; old problems like multicast still aren't solved, and that should be reason enough. Ken asked whether we really need active networks for those old problems. The answer was that for fast deployment we absolutely do. Dave believes that we are going to succeed, but that in the end it won't be called "active networks" any more.

David Cheriton from Stanford followed up by declaring his distaste at finding himself agreeing with Ken, but argued that multicast is in all routers and it is just a matter of turning it on; people don't want to risk reliability, and they see extra activity as a threat. He also pointed out the trend toward hardware forwarding in ASICs: the forwarding path will not be active, so active packets will have to be pushed off it, and once off the forwarding path the processing might as well be carried out anywhere in the network. Why does the processing have to be carried out that close? After some confusion and general laughter, Wetherall replied that he did not believe multicast was a solved problem; even if people want to use it, it is still not usable, and it would be great if you could deploy your own version. Cheriton replied that it is the ISPs that don't trust multicast.

Timothy Roscoe from Sprint was only allowed to ask another question on the condition that he not agree with Ken. Roscoe replied that he only slightly agreed, and proceeded to declare that multicast is indeed in every router but not turned on, and that ISPs offer it only under special circumstances; it is generally disabled for reliability and resource-management reasons. Playing the bad telco guy, he added that billing was the real key that had been overlooked: how do we charge for chewing up resources? Wetherall replied that he had not considered billing, but that he will look at it once the other problems have been solved! Roscoe captured the mood by asserting that once billing is a solved problem, there are no more problems. When the laughter had eventually subsided, Wetherall pointed out that everything can be tied back to bandwidth and packet rates.


Building Reliable, High-Performance Communication Systems from Components
Xiaoming Liu, Christoph Kreitz, Robbert van Renesse, Jason Hickey, Mark Hayden (Compaq SRC), Kenneth Birman, and Robert Constable (Cornell)

Robbert van Renesse presented recent research by the Ensemble communication group and the Nuprl automated formal reasoning group at Cornell, who are collaborating on a system for developing robust communication software from components. Component-based programming offers many advantages including software reusability and run-time system extensibility. However, typical component-based software may have poor performance due to excessive inter-component communication or redundant computation. Further, it is often difficult to verify that particular component compositions are correct (i.e., implement the properties required of the overall system). The goal of the Ensemble and Nuprl groups' collaboration is to tackle these problems in the context of communication protocols.

Van Renesse then described their "logical programming environment" for developing protocol software. The first piece of the system is Ensemble: an architecture in which programmers create protocol stacks by combining individual micro-protocol components. Ensemble provides a library of over sixty micro-protocol components that implement behaviors such as sliding windows, fragmentation and reassembly, flow control, group membership, message ordering, and other functions. The second piece of the system is Nuprl, a formal methods tool that operates on Ensemble components. Nuprl can read the (OCaml) source code for a protocol component and reason about it. More important, Nuprl is able to reason about component compositions in order to synthesize optimized protocol code.
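Since Ensemble's layers are written in OCaml, a rough OCaml sketch may help make the idea of composing micro-protocols concrete. The layer type, the two toy layers, and the compose function below are illustrative assumptions only, not Ensemble's actual interfaces; real Ensemble layers are event-driven and handle the receive path, membership, timers, and so on, and it is these richer compositions that Nuprl reasons about and optimizes.

    (* Sketch of building a protocol stack by composing layers.
       Hypothetical, greatly simplified; not Ensemble's interfaces. *)
    type packet = { seq : int; payload : string }

    (* A layer transforms the send path; one input packet may become
       several output packets (e.g., after fragmentation). *)
    type layer = packet -> packet list

    (* Toy micro-protocol: split a payload into MTU-sized fragments. *)
    let fragment ~mtu : layer = fun p ->
      let rec split s =
        if String.length s <= mtu then [ s ]
        else String.sub s 0 mtu :: split (String.sub s mtu (String.length s - mtu))
      in
      List.map (fun chunk -> { p with payload = chunk }) (split p.payload)

    (* Toy micro-protocol: stamp each outgoing packet with a sequence number. *)
    let add_sequence : layer =
      let next = ref 0 in
      fun p -> incr next; [ { p with seq = !next } ]

    (* Composition: the output of the upper layer feeds the lower layer. *)
    let compose (upper : layer) (lower : layer) : layer =
      fun p -> List.concat_map lower (upper p)

    (* A two-layer stack: sequence numbering above fragmentation. *)
    let stack : layer = compose add_sequence (fragment ~mtu:4)

    let () =
      stack { seq = 0; payload = "hello world" }
      |> List.iter (fun p -> Printf.printf "seq=%d payload=%s\n" p.seq p.payload)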

Questions

Jay Lepreau from the University of Utah kicked off the questions by announcing that he was sympathetic to the approach but had reservations about its practicality today. He observed that the specification needs to be written by hand, and anecdotal evidence suggests it will be difficult to write. He quoted the paper, which describes the proof of the layers as "daunting": even though it took less than a week to generate, it could take months for four layers. Van Renesse replied that they had proved only one layer of the ten, which took 40 pages and about a month, and added that formal tools will help to reduce many of the boring details. Lepreau said he had understood that a proof was required for each layer, but Van Renesse responded that they do not have proofs for the other layers: it is possible to prove the correctness of a layer, or simply to assume it and prove things about the composition of layers. He agreed that it was hard and that the verification part had gone further than he had originally expected.

Brian Noble from the University of Michigan noted that when you have proofs of each layer you can compose them, but asked whether the optimization continues to respect the conditions of the original. Van Renesse replied that Nuprl outputs not only the code but also a proof of correctness, and that this is fully automated.

Karin Petersen from Xerox PARC concluded by asking how they chose which properties to specify, since the proofs can only be as strong as the specification. Van Renesse responded that they could list properties, but it wouldn't make sense: they started by trying to enumerate the properties but ended up with long and complicated lists that were difficult to make internally consistent, so they used abstract behavioral specifications instead.