In the past, every company that used a network bought its own networking equipment and set up services such as firewalls, content filters, and access policies. Now, however, a growing number of companies forgo buying networking hardware and instead pay dedicated providers to manage these services. The theme of this BoF session was the impact of this trend. One benefit of outsourcing network service management is easier maintenance: updating a centrally managed policy is as simple as specifying the desired changes, without having to implement them, and hardware maintenance is handled entirely by the service provider. Because service providers benefit from economies of scale, centrally managed network services are also cheaper than each company hiring its own networking staff.

The downside of easy management, though, is loss of control. Do companies still have as much control over their network services if they outsource management? In a model where the central service is providing infrastructure hardware, the client company has just as much control as if it owned the hardware. In a model where the provided services are abstracted away from the hardware, however, some control is lost. The hardware may be configured by more qualified people, but misconfigurations become harder for the client to debug. This possibility of misconfiguration leads to another problem: can clients trust external companies to manage their network services properly and honestly? Applications often depend on reliable network services and assume that everything will work correctly, but that may not be a valid assumption.

In addition to questions of trust, the relocation of network applications to the cloud raises questions of availability. In particular, the session debated the types of guarantees that cloud providers can make about their services. Though the network within the data center is reliable and under the cloud provider's control, the connection to the data center is not. What end-to-end guarantees can be made in such an environment? Is a best-effort network based on IP really capable of enforcing isolation and providing strong resource guarantees to specific tenants? What changes when cloud providers oversubscribe their resources? Can, or should, cloud providers game routing protocols to favor certain applications? Because of these questions, mission-critical applications will need to examine the assumptions they make about the network: whether those assumptions are realistic, and how vulnerable the application is when they do not hold.

Beyond the question of which guarantees cloud providers can make, there is the question of whether customers can verify that those guarantees are met. One problem is whether customers can measure service levels in a way that a third party can verify. Although this is an interesting engineering problem, the session debated whether a solution is even necessary: the cost of litigation often outweighs its gains, and market mechanisms should let customers switch providers when faced with unacceptable service. However, it is unclear whether customers will actually have enough providers to choose from or, even given a choice, whether switching is practical.
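To make the measurement question concrete, the following is a minimal sketch, assuming a client simply wants timestamped evidence of the service levels it observes. The provider endpoint, probe interval, and log format are illustrative assumptions, not something discussed in the session; producing measurements that a third party would actually accept is the harder, open problem.

import json
import socket
import time

PROVIDER_HOST = "service.example.com"   # hypothetical provider endpoint
PROVIDER_PORT = 443
PROBE_INTERVAL_S = 60                   # illustrative probe period
LOG_PATH = "sla_probes.jsonl"

def probe_once(host: str, port: int, timeout: float = 5.0) -> dict:
    """Measure TCP connect latency to the provider; record failures as well."""
    start = time.time()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            latency_ms = (time.time() - start) * 1000.0
            return {"ts": start, "ok": True, "latency_ms": round(latency_ms, 2)}
    except OSError as exc:
        # Timeouts and connection errors count as observed unavailability.
        return {"ts": start, "ok": False, "error": str(exc)}

def run(samples: int) -> None:
    """Append timestamped samples to a log that could later be shared with an auditor."""
    with open(LOG_PATH, "a") as log:
        for _ in range(samples):
            log.write(json.dumps(probe_once(PROVIDER_HOST, PROVIDER_PORT)) + "\n")
            log.flush()
            time.sleep(PROBE_INTERVAL_S)

if __name__ == "__main__":
    run(samples=3)

Even such a simple probe illustrates why third-party verification is difficult: the client controls the measurement points and the log, so a provider has little reason to accept the record as impartial evidence of a service-level violation.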