Client Systems
Summarized by Jay Lorch, U.C. Berkeley


The Interactive Performance of SLIM: A Stateless, Thin-Client Architecture
Brian K. Schmidt, Monica S. Lam, and J. Duane Northcutt (Stanford and Sun Microsystems)

Brian Schmidt gave the session's first talk, about SLIM, the Stateless Low-level Interface Machine. The central point of this work is that today's fast, switched networks are finally powerful enough to allow a transition back to the era of dumb terminals, paving the way for end devices that need no administration and thus have a lower total cost of ownership. SLIM uses a simple, low-level encoding to efficiently transfer frame buffer contents to these consoles. The evidence that this approach works, and can even be competitive with higher-level protocols like X, comes from user studies, benchmarks, and an actual deployment. Brian also talked briefly at the end about his ongoing work on using persistent process sets, private namespaces, and a stateless operating system to ensure that recovering from server crashes in this system is as easy as recovering from console crashes.
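To make the encoding concrete, here is a minimal sketch of what such low-level display commands might look like. The opcodes, field layout, and helper names are illustrative assumptions, not SLIM's actual wire format, which is specified in the paper.

    # Sketch of a SLIM-style low-level display protocol (hypothetical
    # opcodes and layout; the real SLIM wire format differs).
    import struct

    SET, FILL, COPY = 1, 2, 3  # assumed opcodes

    def encode_fill(x, y, w, h, rgb):
        # Fill a rectangle with one color: tiny to send, trivial to decode.
        return struct.pack("!BHHHHI", FILL, x, y, w, h, rgb)

    def encode_copy(sx, sy, w, h, dx, dy):
        # Move a region within the console's own framebuffer (e.g. a
        # scroll), so no pixel data crosses the network at all.
        return struct.pack("!BHHHHHH", COPY, sx, sy, w, h, dx, dy)

    def encode_set(x, y, w, h, pixels):
        # Fall back to raw pixels only when nothing cheaper applies.
        return struct.pack("!BHHHH", SET, x, y, w, h) + pixels

The server's job is then to pick the cheapest command that reproduces each screen update, so a scroll becomes one small COPY rather than a retransmission of the whole window's pixels.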

Peter Chen from the University of Michigan asked the first question. He wanted to know why Brian decided to design a new display encoding instead of using a standard one like MPEG. After pointing out that it wasn't really his decision, Brian gave two motivations for his colleagues' choice. First, the encoding was designed to be generic, not targeted at any particular imaging mechanism, so that this single technology could serve many different types of installations and keep the system useful for years to come. Second, it is simple and cheap to implement, unlike Peter's example of MPEG, which is "tricky."

The next question was from Michael Scott, from the University of Rochester. He pointed out that since we have already shown a tendency to ping-pong between a preference for dumb terminals and a preference for lots of smarts on the desktop, even if this is the proper solution for today, it's not very convincing that it'll be the solution for much longer. For instance, if immersive computing or other 3-D interfaces become the wave of the future, we could find ourselves throwing away all this cool console hardware that was supposed to last through many years of computing. And even today there are plenty of applications that require more smarts at the terminal than SLIM provides. Brian responded to the latter point by saying that this system is targeted at installations that run business applications, and will for the foreseeable future. He agreed that if immersive computing or something like it took over, SLIM couldn't accommodate it.

The third question, from Jonathan Shapiro of IBM, harked back to the good old days of 1982, when the BLIT tried to do the same thing over 300-baud connections. The lesson learned at the time was that intelligence at the terminal was an inescapable necessity. So, Jonathan asked, what made Brian think this work changes that invariant? The answer was that fast, switched networks are what change the game now. Brian conceded again that there are cases where you do need something like an MPEG decoder or a 3-D pipeline on the terminal, but there are plenty of users who just plain don't need that and know they don't.

Sitaram Ayer, from Rice University, pointed out the pitiful performance of the recent and similar VNC, and essentially asked why SunRay could do so much better. Brian answered that it comes down to client-driven versus server-driven updates. While VNC has the client decide when to request an update, SunRay puts that decision in the hands of the server, which is far better positioned to know when updates are needed and how much updating to do.
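A toy contrast of the two models (the classes and names here are hypothetical, not code from either system):

    # Toy contrast: who decides when a screen update is sent.
    class Server:
        def __init__(self):
            self.dirty = []                # regions changed since last send
        def render(self, region):
            self.dirty.append(region)      # an application drew something

    def client_poll(server):
        # VNC-style: changes sit in the queue until the client asks,
        # adding latency and batching artifacts.
        updates, server.dirty = server.dirty, []
        return updates

    def server_push(server, send):
        # SLIM-style: the server sends each change as soon as it happens.
        while server.dirty:
            send(server.dirty.pop(0))

    s = Server()
    s.render("menu redraw")
    server_push(s, lambda region: print("push:", region))

In the push model, the decision logic lives where the information is: the server already knows exactly which regions changed and when.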


Energy-Aware Adaptation for Mobile Applications
Jason Flinn and M. Satyanarayanan, Carnegie Mellon University

Jason Flinn, a CMU student, began the second talk by speaking passionately about how improved energy management can address one of the great frontiers of system optimization: improving battery lifetime. He then got down to the details of his and Professor Satyanarayanan's work on energy-aware adaptation, as follows. Many multimedia applications can save a lot of energy in software by switching to lower levels of fidelity. With the authors' adaptation techniques, the user can specify a target battery lifetime, and the system will automatically and dynamically notify applications of the adjustments they should make to their fidelity levels in order to achieve the desired battery lifetime with impressive accuracy. An incidental observation from this work is that background power is a major impediment to large battery-lifetime increases; zoned backlighting, the ability to selectively power down parts of the backlight, might help with this if the hardware folks could be persuaded to implement it.
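The core of the approach is a feedback loop. A minimal sketch of that kind of loop follows; the function name, control rule, and step size are assumptions for illustration, not the actual Odyssey implementation.

    # One step of a hypothetical energy-adaptation control loop: compare
    # the measured drain rate against what the remaining battery can
    # sustain, and nudge application fidelity up or down accordingly.
    def adaptation_step(residual_energy, time_remaining, drain_rate, fidelity):
        # residual_energy in joules, time_remaining in seconds,
        # drain_rate in watts, fidelity a level in [0.0, 1.0].
        sustainable_rate = residual_energy / time_remaining
        if drain_rate > sustainable_rate:          # on pace to die early
            fidelity = max(0.0, fidelity - 0.1)    # degrade (e.g. lower JPEG quality)
        elif drain_rate < 0.9 * sustainable_rate:  # comfortable margin
            fidelity = min(1.0, fidelity + 0.1)    # restore some fidelity
        return fidelity

    # Example: 18 kJ left, one hour to go, currently drawing 6.2 W.
    # 18000 J / 3600 s = 5.0 W sustainable, so fidelity drops one notch.
    level = adaptation_step(18000.0, 3600.0, 6.2, 0.8)   # -> 0.7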

Margo Seltzer, from Harvard University, began the questioning. She complimented the author on his work and agreed that this seemed like a good way to adapt multimedia applications. However, she pointed out that she, and most people she could think of, used laptops very differently, for business and personal information management applications, which don't do any multimedia except perhaps to put an animated paper clip on the screen when you make the mistake of asking for help. She asked what can be done for these applications. Jason responded that there wasn't much that could be done in the way of adaptation for these types of applications, and that you have to rely on hardware energy management for the most part. He viewed his work as paying off in the near future, when multimedia applications seem sure to become much more prevalent, even on laptops.

Ken Birman, from Cornell University, pointed out that the paper's graphs of fidelity versus time were generally jumpy. He asked whether the right thing to do instead was to degrade an application and then just leave it degraded, even if you later decided there was a small benefit to reversing the degradation. In essence, he said, there is a substantial cost to the user for any adaptation, and this wasn't reflected in Jason's scheme. Jason responded that an important piece of future work is indeed to identify the magnitude of those costs relative to the costs of lower battery life and lower fidelity, so that the system can find the proper middle ground. Perhaps speaking from his own experience watching the experiments run at lowest fidelity, he added that there is something to be gained from running at higher fidelity even when the cost is an obvious transition.
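Ken's suggestion amounts to adding hysteresis to a loop like the one sketched above. One way such a rule might look (the cost constant and names are invented for illustration):

    # Hypothetical hysteresis rule: reverse a degradation only when the
    # sustained benefit outweighs a fixed per-switch cost to the user.
    ADAPTATION_COST = 0.15   # assumed "annoyance" cost of one visible switch

    def should_upgrade(benefit_per_second, expected_duration):
        # Upgrade fidelity only if the accumulated benefit beats the cost.
        return benefit_per_second * expected_duration > ADAPTATION_COST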

Andy Tanenbaum, from Vrije Universiteit, jokingly answered Margo's question by suggesting that business applications could be adapted too: a spreadsheet could just make guesses instead of doing actual calculations when battery life was low. He then asked if the real answer for business applications was to reduce the clock frequency. Jason said that these kinds of applications do indeed require hardware-level solutions, not the high-level software adaptations his work focused on. As for making guesses instead of doing actual calculations, there is a real-world example where you can reduce the amount of computation in exchange for satisfactory but not ideal answers: speech recognition, which one of his group members is looking at.

Brian Noble, from the University of Michigan, remarked that PowerScope was cool, but that his gut reaction to it as a computer science person was that it was at base a multimeter, and thus icky hardware. He asked what could be done to give application writers reasonable information from PowerScope in a context they would understand and be comfortable with. Jason said that application writers should understand that using less hardware means consuming less energy, so energy profiling can present an application's behavior to them in exactly those terms. Energy profiling can also be useful in determining when a tradeoff between different pieces of hardware, like choosing a network-intensive algorithm instead of a disk-intensive one, pays off.
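As a toy example of presenting behavior in those terms (all device names and power figures below are invented, not PowerScope measurements):

    # Toy energy accounting: charge each phase of a program to the
    # hardware it keeps powered up, then compare algorithm variants.
    POWER_W = {"cpu": 1.2, "disk": 1.9, "wireless": 1.4}  # assumed draw

    def energy_joules(schedule):
        # schedule: list of (device, seconds_active) pairs.
        return sum(POWER_W[dev] * secs for dev, secs in schedule)

    disk_version = energy_joules([("cpu", 2.0), ("disk", 5.0)])      # 11.9 J
    net_version  = energy_joules([("cpu", 2.0), ("wireless", 4.0)])  #  8.0 J
    # The comparison shows when the network-intensive variant pays off.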

Preston Crow, from EMC, remarked on the similarity between this work and the work of the realtime community. The main difference is that the realtime community wants applications to scale their resource usage to meet deadlines, while energy-aware adaptation wants them to do so to meet energy targets. He asked how results from the realtime community could be leveraged in work like this. Jason agreed that there was a lot of similarity, but pointed out one important difference: in the energy realm, there is only one goal that matters, namely making the battery last longer. This allows a lot of extra flexibility in finding solutions.

Erik Cota-Robles, from Intel, pointed out that stochastic predictions of energy demand are all well and good in small benchmarks under controlled conditions, but that the real world has annoying statistical properties, such as the long tails Vogels described in his paper on NT file systems. Jason said that user studies were naturally necessary, but his intuition was that his methods would do fine in the real world. Erik then cautioned that he'd held the same views in his work on realtime issues: things would look good analytically, and would start out great in practice, but then "something would happen." Jason reiterated that his techniques work well in experimental cases but that real-world validation would be necessary.