ACM SIGCOMM 2014 Conference, SIGCOMM'14, Chicago, IL, USA, August 17-22, 2014. ACM 【DBLP Link】
【Paper Link】 【Pages】:1-2
【Authors】: George Varghese
【Abstract】: The most compelling ideas in systems are abstractions such as virtual memory, sockets, or packet scheduling. Algorithmics is the servant of abstraction, allowing system performance to approach that of the underlying hardware, sometimes by using efficient algorithms but often by simply leveraging other aspects of the system. I will survey the trajectory of network algorithmics, starting with a focus on speed and scale in the 1990s and moving to measurement and security in the 2000s. While doing so, I will reflect on my experiences in choosing problems and conducting research. I will conclude by describing my passion for the emerging field of network verification and its confluence with programming language research. George Varghese worked at DEC designing DECNET protocols before obtaining his Ph.D. in 1992 from MIT. He worked from 1993 to 1999 at Washington University and from 1999 to 2012 at UCSD, both as a professor of computer science. He joined Microsoft Research as a Principal Researcher in 2012. He won the ONR Young Investigator Award in 1996 and was elected a Fellow of the Association for Computing Machinery (ACM) in 2002. He helped design the 40 Gbps forwarding engine for Procket Networks, subsequently acquired by Cisco Systems. His book "Network Algorithmics" was published in December 2004 by Morgan Kaufmann. He co-founded NetSift Inc. in May 2004; NetSift was acquired by Cisco in 2005. He was the 2014 winner of the IEEE Koji Kobayashi Computers and Communications Award.
【Keywords】: algorithmics; networks
【Paper Link】 【Pages】:3-14
【Authors】: Vimalkumar Jeyakumar ; Mohammad Alizadeh ; Yilong Geng ; Changhoon Kim ; David Mazières
【Abstract】: This paper presents a practical approach to rapidly introducing new dataplane functionality into networks: End-hosts embed tiny programs into packets to actively query and manipulate a network's internal state. We show how this "tiny packet program" (TPP) interface gives end-hosts unprecedented visibility into network behavior, enabling them to work with the network to achieve a desired functionality. Our design leverages what each component does best: (a) switches forward and execute tiny packet programs (at most 5 instructions) in-band at line rate, and (b) end-hosts perform arbitrary (and easily updated) computation on network state. By implementing three different research proposals, we show that TPPs are useful. Using a hardware prototype on a NetFPGA, we show our design is feasible at a reasonable cost.
【Keywords】: active networks; design; measurement; performance
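To make the TPP execution model concrete, here is a minimal Python sketch of a switch interpreting a tiny packet program as it forwards the packet. The instruction name (PUSH) and the statistic keys are illustrative assumptions, not the paper's actual instruction set.

```python
# Sketch of the tiny-packet-program idea, under assumed instruction names.
MAX_INSTRUCTIONS = 5  # the paper caps TPPs at 5 instructions per packet

class Switch:
    def __init__(self, switch_id, stats):
        self.switch_id = switch_id
        self.stats = stats  # e.g. {"queue_bytes": ...}; keys are hypothetical

    def execute_tpp(self, packet):
        """Run the packet's program against this switch's local state."""
        program = packet["tpp"]
        assert len(program) <= MAX_INSTRUCTIONS
        for op, arg in program:
            if op == "PUSH":  # append a local statistic to the packet
                packet["results"].append((self.switch_id, arg, self.stats[arg]))
        return packet

# An end-host builds a packet whose TPP samples queue depth at every hop.
packet = {"tpp": [("PUSH", "queue_bytes")], "results": []}
path = [Switch("s1", {"queue_bytes": 1500}), Switch("s2", {"queue_bytes": 90000})]
for sw in path:
    packet = sw.execute_tpp(packet)
print(packet["results"])  # the end-host now sees per-hop queue occupancy
```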
【Paper Link】 【Pages】:15-26
【Authors】: Kirill Kogan ; Sergey I. Nikolenko ; Ori Rottenstreich ; William Culhane ; Patrick Eugster
【Abstract】: Efficient packet classification is a core concern for network services. Traditional multi-field classification approaches, in both software and ternary content-addressable memory (TCAMs), entail tradeoffs between (memory) space and (lookup) time. TCAMs cannot efficiently represent range rules, a common class of classification rules confining values of packet fields to given ranges. The exponential space growth of TCAM entries relative to the number of fields is exacerbated when multiple fields contain ranges. In this work, we present a novel approach that identifies properties of many classifiers allowing them to be implemented in linear space with guaranteed worst-case logarithmic lookup time, and that allows the addition of more fields, including range constraints, without impacting space and time complexities. On real-life classifiers from Cisco Systems and additional classifiers from ClassBench (with real parameters), 90-95% of rules are thus handled, and the remaining 5-10% of rules can be stored in TCAM to be processed in parallel.
【Keywords】: TCAM; packet classification
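The linear-space, logarithmic-time target can be illustrated for a single field of disjoint ranges with a sorted array and binary search. This is only a toy for the complexity claim, not the paper's multi-field construction; the rules are invented.

```python
import bisect

# Disjoint port ranges -> action; sorted once, looked up in O(log n) time
# using O(n) space, the complexity regime the abstract targets.
rules = sorted([((0, 1023), "allow"), ((1024, 49151), "inspect"),
                ((49152, 65535), "drop")])
starts = [lo for (lo, _hi), _action in rules]

def classify(port):
    i = bisect.bisect_right(starts, port) - 1  # rightmost range starting <= port
    if i >= 0:
        (lo, hi), action = rules[i]
        if lo <= port <= hi:
            return action
    return "default"

assert classify(80) == "allow"
assert classify(50000) == "drop"
```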
【Paper Link】 【Pages】:27-38
【Authors】: Rohan Gandhi ; Hongqiang Harry Liu ; Y. Charlie Hu ; Guohan Lu ; Jitendra Padhye ; Lihua Yuan ; Ming Zhang
【Abstract】: Load balancing is a foundational function of datacenter infrastructures and is critical to the performance of online services hosted in datacenters. As the demand for cloud services grows, expensive and hard-to-scale dedicated hardware load balancers are being replaced with software load balancers that scale using a distributed data plane running on commodity servers. Software load balancers offer low cost, high availability and high flexibility, but suffer high latency and low capacity per load balancer, making them less than ideal for applications that demand high throughput, low latency, or both. In this paper, we present Duet, which offers all the benefits of software load balancers, along with low latency and high availability -- at next to no cost. We do this by exploiting a hitherto overlooked resource in datacenter networks -- the switches themselves. We show how to embed the load balancing functionality into existing hardware switches, thereby achieving organic scalability at no extra cost. For flexibility and high availability, Duet seamlessly integrates the switch-based load balancer with a small deployment of software load balancers. We enumerate and solve several architectural and algorithmic challenges involved in building such a hybrid load balancer. We evaluate Duet using a prototype implementation, as well as extensive simulations driven by traces from our production data centers. Our evaluation shows that Duet provides 10x more capacity than a software load balancer at a fraction of the cost, while reducing latency by a factor of 10 or more, and is able to quickly adapt to network dynamics, including failures.
【Keywords】: SDN; datacenter; load balancing
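A sketch of the switch-side primitive a Duet-style design builds on: hashing a flow's five-tuple over an ECMP-style group so each virtual IP's (VIP's) traffic is split across backend direct IPs (DIPs). The table contents and hash choice here are illustrative assumptions.

```python
import hashlib

# Hypothetical VIP -> DIP table, as a switch might hold in its ECMP group.
vip_table = {"10.0.0.100": ["192.168.1.1", "192.168.1.2", "192.168.1.3"]}

def select_dip(vip, five_tuple):
    dips = vip_table[vip]
    h = int(hashlib.md5("|".join(map(str, five_tuple)).encode()).hexdigest(), 16)
    return dips[h % len(dips)]  # the same flow always hashes to the same DIP

flow = ("10.9.9.9", 51000, "10.0.0.100", 80, "tcp")
print(select_dip("10.0.0.100", flow))  # packets are then tunneled to this DIP
```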
【Paper Link】 【Pages】:39-50
【Authors】: Tong Yang ; Gaogang Xie ; Yanbiao Li ; Qiaobin Fu ; Alex X. Liu ; Qi Li ; Laurent Mathy
【Abstract】: The Forwarding Information Base (FIB) of backbone routers has been rapidly growing in size. An ideal IP lookup algorithm should achieve constant, yet small, IP lookup time and on-chip memory usage. However, no prior IP lookup algorithm achieves both requirements at the same time. In this paper, we first propose SAIL, a Splitting Approach to IP Lookup. One splitting is along the dimension of the lookup process, namely finding the prefix length and finding the next hop, and another splitting is along the dimension of prefix length, namely IP lookup on prefixes of length less than or equal to 24 and IP lookup on prefixes of length longer than 24. Second, we propose a suite of algorithms for IP lookup based on our SAIL framework. Third, we implemented our algorithms on four platforms: CPU, FPGA, GPU, and many-core. We conducted extensive experiments to evaluate our algorithms using real FIBs and real traffic from a major ISP in China. Experimental results show that our SAIL algorithms are several times or even two orders of magnitude faster than well-known IP lookup algorithms.
【Keywords】: IP lookup; LPM; sail; virtual router multi-FIB lookup
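A minimal sketch of SAIL's split at prefix length 24, under a toy FIB: prefixes of length at most 24 are expanded into a flat table indexed by the top 24 address bits (an on-chip array in the paper; a dict here), and the few longer prefixes take a marked detour to a second table.

```python
import ipaddress

# Toy FIB of (prefix, next hop); the marker scheme below is a simplification
# of the paper's on-chip bitmap plus off-chip table design.
fib = [("10.0.0.0/8", "A"), ("10.1.0.0/16", "B"), ("10.1.2.128/25", "C")]

short_table = {}  # top 24 address bits -> next hop (a flat 2^24 array on chip)
long_table = {}   # prefixes longer than /24, off the fast path

for prefix, nh in sorted(fib, key=lambda e: int(e[0].split("/")[1])):
    net = ipaddress.ip_network(prefix)
    if net.prefixlen <= 24:
        base = int(net.network_address) >> 8
        for idx in range(base, base + (1 << (24 - net.prefixlen))):
            short_table[idx] = nh  # longer prefixes overwrite shorter ones
    else:
        long_table[net] = nh
        idx = int(net.network_address) >> 8
        short_table[idx] = ("LONG", short_table.get(idx))  # detour + fallback

def lookup(ip):
    addr = ipaddress.ip_address(ip)
    entry = short_table.get(int(addr) >> 8)
    if not isinstance(entry, tuple):
        return entry  # one on-chip access resolves most traffic
    _, fallback = entry
    matches = [n for n in long_table if addr in n]
    if matches:
        return long_table[max(matches, key=lambda n: n.prefixlen)]
    return fallback

print(lookup("10.1.2.200"))  # C via the /25
print(lookup("10.1.2.5"))    # B via the covering /16 fallback
```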
【Paper Link】 【Pages】:51-62
【Authors】: Ethan Heilman ; Danny Cooper ; Leonid Reyzin ; Sharon Goldberg
【Abstract】: The Resource Public Key Infrastructure (RPKI) is a new infrastructure that prevents some of the most devastating attacks on interdomain routing. However, the security benefits provided by the RPKI are accomplished via an architecture that empowers centralized authorities to unilaterally revoke any IP prefixes under their control. We propose mechanisms to improve the transparency of the RPKI, in order to mitigate the risk that it will be used for IP address takedowns. First, we present tools that detect and visualize changes to the RPKI that can potentially take down an IP prefix. We use our tools to identify errors and revocations in the production RPKI. Next, we propose modifications to the RPKI's architecture to (1) require any revocation of IP address space to receive consent from all impacted parties, and (2) detect when misbehaving authorities fail to obtain consent. We present a security analysis of our architecture, and estimate its overhead using data-driven analysis.
【Keywords】: RPKI; public key infrastructures; security; transparency
【Paper Link】 【Pages】:63-74
【Authors】: Zhiyong Zhang ; Ovidiu Mara ; Katerina J. Argyraki
【Abstract】: When can we reason about the neutrality of a network based on external observations? We prove conditions under which it is possible to (a) detect neutrality violations and (b) localize them to specific links, based on external observations. Our insight is that, when we make external observations from different vantage points, these will most likely be inconsistent with each other if the network is not neutral. Where existing tomographic techniques try to form solvable systems of equations to infer network properties, we try to form unsolvable systems that reveal neutrality violations. We present an algorithm that relies on this idea to identify sets of non-neutral links based on external observations, and we show, through network emulation, that it achieves good accuracy for a variety of network conditions.
【Keywords】: network neutrality; network tomography
【Paper Link】 【Pages】:75-86
【Authors】: David Naylor ; Matthew K. Mukerjee ; Peter Steenkiste
【Abstract】: Though most would agree that accountability and privacy are both valuable, today's Internet provides little support for either. Previous efforts have explored ways to offer stronger guarantees for one of the two, typically at the expense of the other; indeed, at first glance accountability and privacy appear mutually exclusive. At the center of the tussle is the source address: in an accountable Internet, source addresses undeniably link packets and senders so hosts can be punished for bad behavior. In a privacy-preserving Internet, source addresses are hidden as much as possible. In this paper, we argue that a balance is possible. We introduce the Accountable and Private Internet Protocol (APIP), which splits source addresses into two separate fields --- an accountability address and a return address --- and introduces independent mechanisms for managing each. Accountability addresses, rather than pointing to hosts, point to accountability delegates, which agree to vouch for packets on their clients' behalf, taking appropriate action when misbehavior is reported. With accountability handled by delegates, senders are now free to mask their return addresses; we discuss a few techniques for doing so.
【Keywords】: accountability; privacy; source address
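A sketch of APIP's source-address split and the delegate's role. Class and method names (brief/verify/shutoff) follow the abstract's description, but they are hypothetical simplifications; the real protocol messages are richer.

```python
from dataclasses import dataclass

@dataclass
class ApipPacket:
    accountability_addr: str  # points at a delegate, not the sender
    return_addr: str          # may be masked; only reply traffic needs it
    dest_addr: str
    payload: bytes

class Delegate:
    """Vouches for its clients' packets and honors misbehavior reports."""
    def __init__(self):
        self.briefed = set()  # fingerprints of packets clients claimed to send
        self.blocked = set()  # clients reported for misbehavior

    def brief(self, client, pkt):
        self.briefed.add((client, hash(pkt.payload)))

    def verify(self, client, pkt):
        return ((client, hash(pkt.payload)) in self.briefed
                and client not in self.blocked)

    def shutoff(self, client):
        self.blocked.add(client)  # stop vouching after a misbehavior report

d = Delegate()
pkt = ApipPacket("delegate-7", "masked", "203.0.113.9", b"hello")
d.brief("alice", pkt)
assert d.verify("alice", pkt)      # a verifying router accepts the packet
d.shutoff("alice")
assert not d.verify("alice", pkt)  # after a report, packets are dropped
```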
【Paper Link】 【Pages】:87-98
【Authors】: Jakub Czyz ; Mark Allman ; Jing Zhang ; Scott Iekel-Johnson ; Eric Osterweil ; Michael Bailey
【Abstract】: After several IPv4 address exhaustion milestones in the last three years, it is becoming apparent that the world is running out of IPv4 addresses, and the adoption of the next generation Internet protocol, IPv6, though nascent, is accelerating. In order to better understand this unique and disruptive transition, we explore twelve metrics using ten global-scale datasets to create the longest and broadest measurement of IPv6 adoption to date. Using this perspective, we find that adoption, relative to IPv4, varies by two orders of magnitude depending on the measure examined and that care must be taken when evaluating adoption metrics in isolation. Further, we find that regional adoption is not uniform. Finally, and perhaps most surprisingly, we find that over the last three years, the nature of IPv6 utilization (in terms of traffic, content, reliance on transition technology, and performance) has shifted dramatically from prior findings, indicating a maturing of the protocol into production mode. We believe IPv6's recent growth and this changing utilization signal a true quantum leap.
【Keywords】: IP; IPv4; IPv6; dns; internet; measurement
【Paper Link】 【Pages】:99-110
【Authors】: Simon Peter ; Umar Javed ; Qiao Zhang ; Doug Woos ; Thomas E. Anderson ; Arvind Krishnamurthy
【Abstract】: A longstanding problem with the Internet is that it is vulnerable to outages, black holes, hijacking, and denial of service. Although architectural solutions have been proposed to address many of these issues, they have had difficulty being adopted due to the need for widespread adoption before most users would see any benefit. This is especially relevant as the Internet is increasingly used for applications where correct and continuous operation is essential. In this paper, we study whether a simple, easy-to-implement model is sufficient for addressing the aforementioned Internet vulnerabilities. Our model, called ARROW (Advertised Reliable Routing Over Waypoints), is designed to allow users to configure reliable and secure end-to-end paths through participating providers. With ARROW, a highly reliable ISP offers tunneled transit through its network, along with packet transformation at the ingress, as a service to remote paying customers. Those customers can stitch together reliable end-to-end paths through a combination of participating and non-participating ISPs in order to improve the fault-tolerance, robustness, and security of mission-critical transmissions. Unlike efforts to redesign the Internet from scratch, we show that ARROW can address a set of well-known Internet vulnerabilities, for most users, with the adoption of only a single transit ISP. To demonstrate ARROW, we have added it to a small-scale wide-area ISP we control. We evaluate its performance and failure recovery properties in both simulation and live settings.
【Keywords】: BGP; internet; overlay networks; reliability; source routing
【Paper Link】 【Pages】:111-112
【Authors】: Hyunwoo Nam ; Kyung-Hwa Kim ; Doru Calin ; Henning Schulzrinne
【Abstract】: Adaptive bitrate (ABR) technologies are widely used in today's popular HTTP-based video streaming services such as YouTube and Netflix. The rate-switching algorithm embedded in a video player is designed to improve video quality-of-experience (QoE) by selecting an appropriate resolution based on an analysis of network conditions while the video is playing. However, a bad viewing experience is often caused by the video player having difficulty estimating transit or client-side network conditions accurately. In order to analyze ABR streaming performance, we developed YouSlow, a web browser plug-in that can detect and report live buffer stalling events to our analysis tool. To date, YouSlow has collected more than 20,000 YouTube video stalling events from over 40 countries.
【Keywords】: HTTP video streaming; adaptive bitrate streaming (ABR); video quality of experience
【Paper Link】 【Pages】:113-114
【Authors】: Aanchal Malhotra ; Sharon Goldberg
【Abstract】: BGP, the Internet's interdomain routing protocol, is highly vulnerable to routing failures that result from unintentional misconfigurations or deliberate attacks. To defend against these failures, recent years have seen the adoption of the Resource Public Key Infrastructure (RPKI), which currently authorizes 4% of the Internet's routes. The RPKI is a completely new security infrastructure (requiring new servers, caches, and the design of new protocols), a fact that has given rise to some controversy. Thus, an alternative proposal has emerged: Route Origin Verification (ROVER), which leverages the existing reverse DNS (rDNS) and DNSSEC to secure the interdomain routing system. Both RPKI and ROVER rely on a hierarchy of authorities to provide trusted information about the routing system. Recently, however, it has been argued that misconfigured, faulty, or compromised RPKI authorities introduce new vulnerabilities in the routing system, which can take IP prefixes offline. Meanwhile, the designers of ROVER claim that it operates in a "fail-safe mode", where "[o]ne could completely unplug a router verification application at any time and Internet routing would continue to work just as it does today". There has been debate in Internet community mailing lists about the pros and cons of both approaches. This poster therefore compares the impact of ROVER failures to those of the RPKI, in a threat model that covers misconfigurations, faults, or compromises of their trusted authorities.
【Keywords】: DNS; RPKI; public key infrastructure; routing security
【Paper Link】 【Pages】:115-116
【Authors】: Baobao Zhang ; Jun Bi ; Jianping Wu ; Fred Baker
【Abstract】:
【Keywords】: access control list; static routes; traffic engineering
【Paper Link】 【Pages】:117-118
【Authors】: Sajad Shirali-Shahreza ; Yashar Ganjali
【Abstract】: One of the limitations of wildcard rules in Software Defined Networks, such as OpenFlow, is the loss of visibility. FleXam is a flexible sampling extension for OpenFlow that allows the controller to define which packets should be sampled, what parts of each packet should be selected, and where they should be sent. Here, we present an interactive demo showing how FleXam enables the controller to dynamically adjust sampling rates and change the sampling scheme to optimally stay within a sampling budget in the context of a traffic statistics collection application.
【Keywords】: SDN; openflow; sampling; traffic statistics
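The demo's budget-tracking behavior might look like the following sketch, where a controller rescales the per-packet sampling probability as observed traffic changes. The control rule and all numbers are illustrative assumptions, not FleXam's actual mechanism.

```python
import random

class BudgetedSampler:
    def __init__(self, budget_pps, rate=0.1):
        self.budget = budget_pps  # samples per second we can afford
        self.rate = rate          # current per-packet sampling probability

    def adjust(self, observed_pps):
        # Aim for observed_pps * rate == budget.
        self.rate = min(1.0, self.budget / max(observed_pps, 1))

    def sample(self, packet):
        return packet if random.random() < self.rate else None

s = BudgetedSampler(budget_pps=1000)
s.adjust(observed_pps=50000)  # heavy traffic: rate drops to 0.02
picked = [p for p in range(10000) if s.sample(p) is not None]
print(s.rate, len(picked))    # roughly 2% of packets sampled
```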
【Paper Link】 【Pages】:119-120
【Authors】: Arash Molavi Kakhki ; Abbas Razaghpanah ; Rajesh Golani ; David R. Choffnes ; Phillipa Gill ; Alan Mislove
【Abstract】: The goal of this research is to detect traffic differentiation in cellular data networks. We define service differentiation as any attempt to change the performance of network traffic traversing an ISP's boundaries. ISPs may implement differentiation policies for a number of reasons, including load balancing, bandwidth management, or business reasons. Specifically, we focus on detecting whether certain types of network traffic receive better (or worse) performance. As an example, a wireless provider might limit the performance of third-party VoIP or video calling services (or any other competing services) by introducing delays or reducing transfer rates to encourage users to use services provided by the wireless provider. Previous work explored this problem in limited environments. Glasnost focused on BitTorrent in the desktop/laptop environment, and lacked the ability to conduct controlled experiments to provide strong evidence of differentiation. NetDiff covered a wide range of passively gathered traffic from a large ISP but likewise did not support targeted, controlled experiments. We address these limitations with Mobile Replay.
【Keywords】: net neutrality; traffic differentiation
【Paper Link】 【Pages】:121-122
【Authors】: Rui Miao ; Minlan Yu ; Navendu Jain
【Abstract】:
【Keywords】: cloud attack; software-defined networking
【Paper Link】 【Pages】:123-124
【Authors】: Ricky K. P. Mok ; Weichao Li ; Rocky K. C. Chang
【Abstract】: Crowdtesting is increasingly popular among researchers for carrying out subjective assessments of different services. Experimenters can easily access a huge pool of human subjects through crowdsourcing platforms. The workers are usually anonymous, and they participate in the experiments independently. Therefore, a fundamental problem threatening the integrity of these platforms is detecting various types of cheating by the workers. In this poster, we propose a cheat-detection mechanism based on an analysis of the workers' mouse cursor trajectories. It provides a jQuery-based library to record browser events. We compute a set of metrics from the cursor traces to identify cheaters. We deployed our mechanism on the survey pages for our video quality assessment tasks published on Amazon Mechanical Turk. Our results show that cheaters' cursor movement is usually more direct and contains fewer pauses.
【Keywords】: cheat-detection; crowdsourcing; cursor submovement
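Two cursor-trace metrics of the kind the poster describes (more-direct movement, fewer pauses) can be computed as below. The metric definitions and thresholds are illustrative, not the authors' exact features.

```python
import math

def directness(trace):
    """Straight-line distance over actual path length (1.0 = perfectly direct)."""
    path = sum(math.dist(a, b) for a, b in zip(trace, trace[1:]))
    return math.dist(trace[0], trace[-1]) / path if path else 1.0

def pause_count(timestamps, gap=0.5):
    """Number of inter-sample gaps longer than `gap` seconds."""
    return sum(1 for t0, t1 in zip(timestamps, timestamps[1:]) if t1 - t0 > gap)

trace = [(0, 0), (40, 5), (90, 8), (120, 10)]  # nearly straight: suspicious
stamps = [0.0, 0.1, 0.2, 0.3]                  # no pauses: suspicious
if directness(trace) > 0.95 and pause_count(stamps) == 0:
    print("flag worker for review")
```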
【Paper Link】 【Pages】:125-126
【Authors】: Attila Csoma ; Balázs Sonkoly ; Levente Csikor ; Felician Németh ; András Gulyás ; Wouter Tavernier ; Sahel Sahhaf
【Abstract】: Mininet is a great prototyping tool which combines existing SDN-related software components (e.g., Open vSwitch, OpenFlow controllers, network namespaces, cgroups) into a framework, which can automatically set up and configure customized OpenFlow testbeds scaling up to hundreds of nodes. Standing on the shoulders of Mininet, we implement a similar prototyping system called ESCAPE, which can be used to develop and test various components of the service chaining architecture. Our framework incorporates Click for implementing Virtual Network Functions (VNF), NETCONF for managing Click-based VNFs and POX for taking care of traffic steering. We also add our extensible Orchestrator module, which can accommodate mapping algorithms from abstract service descriptions to deployed and running service chains.
【Keywords】: NETCONF; SDN; click; mininet; prototyping; service chain
【Paper Link】 【Pages】:127-128
【Authors】: Maksym Gabielkov ; Ashwin Rao ; Arnaud Legout
【Abstract】: Online social networks (OSNs) are an important source of information for scientists in different fields such as computer science, sociology, and economics. However, OSNs are hard to study because they are very large. For instance, Facebook had 1.28 billion active users in March 2014 and Twitter claimed 255 million active users in April 2014. Also, companies take measures to prevent crawls of their OSNs and refrain from sharing their data with the research community. For these reasons, we argue that sampling techniques will be the best way to study OSNs in the future. In this work, we take an experimental approach to studying the characteristics of well-known sampling techniques on a full social graph of Twitter crawled in 2012 [2]. Our contribution is to evaluate the behavior of these techniques on a real directed graph by considering two sampling scenarios: (a) obtaining the most popular users and (b) obtaining an unbiased sample of users, and to find the most suitable sampling technique for each scenario.
【Keywords】: sampling; social graph; social networks; twitter
【Paper Link】 【Pages】:129-130
【Authors】: Ravi Netravali ; Anirudh Sivaraman ; Keith Winstein ; Somak Das ; Ameesh Goyal ; Hari Balakrishnan
【Abstract】: This demo presents a measurement toolkit, Mahimahi, that records websites and replays them under emulated network conditions. Mahimahi is structured as a set of arbitrarily composable UNIX shells. It includes two shells to record and replay Web pages, RecordShell and ReplayShell, as well as two shells for network emulation, DelayShell and LinkShell. In addition, Mahimahi includes a corpus of recorded websites along with benchmark results and link traces (https://github.com/ravinet/sites). Mahimahi improves on prior record-and-replay frameworks in three ways. First, it preserves the multi-origin nature of Web pages, present in approximately 98% of the Alexa U.S. Top 500, when replaying. Second, Mahimahi isolates its own network traffic, allowing multiple instances to run concurrently with no impact on the host machine and collected measurements. Finally, Mahimahi is not inherently tied to browsers and can be used to evaluate many different applications. A demo of Mahimahi recording and replaying a Web page over an emulated link can be found at http://youtu.be/vytwDKBA-8s. The source code and instructions to use Mahimahi are available at http://mahimahi.mit.edu/.
【Keywords】: page load time; record-and-replay; web measurements
【Paper Link】 【Pages】:131-132
【Authors】: Zachary S. Bischof ; Fabián E. Bustamante
【Abstract】: When a new technology reaches the market, we typically focus on the want or need that it can fulfill. As the technology becomes a commodity and its market matures, reliability often becomes a key differentiating factor between competing products. We posit that as broadband capacities continue to improve and users migrate to over-the-Internet services, such as on-demand video and voice-over-IP services, we will see this common pattern emerge for broadband services. In this poster, we present the first study of reliability in broadband networks. Using data collected from residential gateways (via FCC/SamKnows), we study the availability and reliability of fixed-line broadband services across the US. Using natural experiments, we look at the impact of increased network downtime on user network demand. We use traditional metrics (e.g., failure rate, MTBF, MTTR) to quantify broadband services, as well as each ISP's configured DNS. Since the impact of a network outage depends on when it occurs (e.g., time of day), we compare ISP services by the annual average number of bytes lost, based on typical user demand during periods of network downtime.
【Keywords】: access link reliability; broadband access networks
【Paper Link】 【Pages】:133-134
【Authors】: Pierdomenico Fiadino ; Mirko Schiavone ; Pedro Casas
【Abstract】: WhatsApp, the new giant in instant multimedia messaging in mobile networks, is rapidly increasing in popularity, taking over traditional SMS/MMS messaging. In this paper we present the first large-scale characterization of WhatsApp, useful among others to ISPs willing to understand the impacts of this and similar applications on their networks. Through the combined analysis of passive measurements at the core of a national mobile network, worldwide geo-distributed active measurements, and traffic analysis at end devices, we show that: (i) the WhatsApp hosting architecture is highly centralized and exclusively located in the US; (ii) video sharing covers almost 40% of the total WhatsApp traffic volume; (iii) flow characteristics depend on the OS of the end device; (iv) despite the big latencies to US servers, download throughputs are as high as 1.5 Mbps; (v) users react immediately and negatively to service outages through social network feedback.
【Keywords】: instant multimedia messaging; large-scale measurements; mobile networks; service outages; whatsapp
【Paper Link】 【Pages】:135-136
【Authors】: John P. Rula ; Fabian E. Bustamante
【Abstract】:
【Keywords】: cellular dns; content delivery networks; domain name system
【Paper Link】 【Pages】:137-138
【Authors】: Yuliang Li ; Guang Yao ; Jun Bi
【Abstract】:
【Keywords】: data plane; software-defined network; visibility
【Paper Link】 【Pages】:139-140
【Authors】: Angela H. Jiang ; Zachary S. Bischof ; Fabian E. Bustamante
【Abstract】: A social news site presents user-curated content, ranked by popularity. Popular curators like Reddit or Facebook have become an effective way of crowdsourcing news or sharing personal opinions. Traditionally, these services require a centralized authority to aggregate data and determine what to display. However, the trust issues that arise from a centralized system are particularly damaging to the "Web democracy" that social news sites are meant to provide. In this poster, we present cliq, a decentralized social news curator. cliq is a P2P-based social news curator that provides private and unbiased reporting. All users in cliq share responsibility for tracking and providing popular content. Any user data that cliq needs to store is also managed across the network. We first inform our design of cliq through an analysis of Reddit. We then design a way to provide content curation without a persistent moderator or usernames.
【Keywords】: P2P systems; social news curation
【Paper Link】 【Pages】:141-142
【Authors】: Matthias Vallentin ; Dominik Charousset ; Thomas C. Schmidt ; Vern Paxson ; Matthias Wählisch
【Abstract】: When an organization detects a security breach, it undertakes a forensic analysis to figure out what happened. This investigation involves inspecting a wide range of heterogeneous data sources spanning a long period of time. The iterative nature of the analysis procedure requires an interactive experience with the data. However, the distributed processing paradigms we find in practice today fail to meet this requirement: the batch-oriented nature of MapReduce cannot deliver sub-second round-trip times, and distributed in-memory processing cannot store the terabytes of activity logs that must be inspected during an incident. We present the design and implementation of Visibility Across Space and Time (VAST), a distributed database to support interactive network forensics, and libcppa, its exceptionally scalable messaging core. The extended actor framework libcppa enables VAST to distribute lightweight tasks at negligible overhead. In our live demo, we showcase how VAST enables security analysts to grapple with the huge amounts of data often associated with incident investigations.
【Keywords】: message-oriented middleware; network forensics; security
【Paper Link】 【Pages】:143-144
【Authors】: David Koll ; Jun Li ; Xiaoming Fu
【Abstract】: With increasing frequency, users raise concerns about data privacy and protection in centralized Online Social Networks (OSNs), in which providers have the unprecedented privilege to access and exploit every user's private data at will. To mitigate these concerns, researchers have suggested decentralizing OSNs and thereby enabling users to control and manage access to their data themselves. However, previously proposed decentralization approaches suffer from several drawbacks. To tackle their deficiencies, we introduce the Self-Organized Universe of People (SOUP). In this demonstration, we present a prototype of SOUP and share our experiences from a real-world deployment.
【Keywords】: DOSN; decentralization; online social networks
【Paper Link】 【Pages】:145-146
【Authors】: Bo Zhang ; Jinfan Wang ; Xinyu Wang ; Tracy Yingying Cheng ; Xiaohua Jia ; Jianfei He
【Abstract】: In the current Internet architecture, application service providers (ASPs) own users' data and social-group information, which has allowed a handful of ASP companies to grow ever larger and has kept small and medium companies from entering this business. We propose a new architecture, called Application Independent Information Infrastructure (AI3). The design goals of AI3 are: 1) decoupling users' data and social relations from ASPs, such that ASPs become independent of users' data and social relations; and 2) an open architecture, such that different ASPs can interoperate with each other. This demo shows a prototype of AI3. The demo has four parts: 1) ASP-independent data management in AI3; 2) ASP-independent management of users' social relations in AI3; 3) inter-domain data transport and user roaming; and 4) real-time communications using AI3. The demo video can be watched at: http://www.cs.cityu.edu.hk/~jia/AI3_DemoVideo.mp4
【Keywords】: internet architecture; network infrastructure; storage system
【Paper Link】 【Pages】:147-148
【Authors】: Dinesh Bharadia ; Kiran Raj Joshi ; Sachin Katti
【Abstract】: This paper presents a demonstration of a real-time full duplex point-to-point link, where transmission and reception occur in the same spectrum band simultaneously between a pair of full-duplex radios. The demo first builds a full duplex radio by implementing a self-interference cancellation technique on top of a traditional half duplex radio architecture. We then establish a point-to-point link using a pair of these radios that can transmit and receive OFDM packets. By changing the environmental conditions around the full-duplex radios, we then demonstrate the robustness of the self-interference cancellation in adapting to the changing environment.
【Keywords】: full duplex; interference cancellation; wireless radio
【Paper Link】 【Pages】:149-150
【Authors】: Florian Wamser ; Thomas Zinner ; Lukas Iffländer ; Phuoc Tran-Gia
【Abstract】:
【Keywords】: YouTube; application-aware networking; dynamic resource allocation; home networks
【Paper Link】 【Pages】:151-162
【Authors】: Ryan Craven ; Robert Beverly ; Mark Allman
【Abstract】: Understanding, measuring, and debugging IP networks, particularly across administrative domains, is challenging. One particularly daunting aspect of the challenge is the presence of transparent middleboxes, which are now common in today's Internet. In-path middleboxes that modify packet headers are typically transparent to a TCP, yet can impact end-to-end performance or cause blackholes. We develop TCP HICCUPS to reveal packet header manipulation to both endpoints of a TCP connection. HICCUPS permits endpoints to cooperate with currently opaque middleboxes without prior knowledge of their behavior. For example, with visibility into end-to-end behavior, a TCP can selectively enable or disable performance enhancing options. This cooperation enables protocol innovation by allowing new IP or TCP functionality (e.g., ECN, SACK, Multipath TCP, Tcpcrypt) to be deployed without fear of such functionality being misconstrued, modified, or blocked along a path. HICCUPS is incrementally deployable and introduces no new options. We implement and deploy TCP HICCUPS across thousands of disparate Internet paths, highlighting the breadth and scope of subtle and hard to detect middlebox behaviors encountered. We then show how path diagnostic capabilities provided by HICCUPS can benefit applications and the network.
【Keywords】: TCP; header integrity; header modifications; middlebox
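A sketch of the HICCUPS idea of comparing sender-side and receiver-side views of the headers to reveal in-path modification. How the hash is computed and where it is carried are simplified assumptions here; the real system packs the hash into spare bits of existing TCP fields rather than a separate field.

```python
import hashlib

def header_hash(headers):
    """Deterministic digest over the header fields both ends can observe."""
    blob = "|".join(f"{k}={headers[k]}" for k in sorted(headers)).encode()
    return hashlib.sha256(blob).hexdigest()[:8]  # truncated to fit spare bits

sent = {"src_port": 443, "dst_port": 52100, "seq": 1000, "wscale": 7}
pkt = dict(sent, integrity=header_hash(sent))

# A middlebox silently rewrites the window-scale option in flight...
pkt["wscale"] = 0

received = {k: v for k, v in pkt.items() if k != "integrity"}
if header_hash(received) != pkt["integrity"]:
    print("header manipulation detected; disable the affected TCP option")
```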
【Paper Link】 【Pages】:163-174
【Authors】: Aaron Gember-Jacobson ; Raajay Viswanathan ; Chaithan Prakash ; Robert Grandl ; Junaid Khalid ; Sourav Das ; Aditya Akella
【Abstract】: Network functions virtualization (NFV) together with software-defined networking (SDN) has the potential to help operators satisfy tight service level agreements, accurately monitor and manipulate network traffic, and minimize operating expenses. However, in scenarios that require packet processing to be redistributed across a collection of network function (NF) instances, simultaneously achieving all three goals requires a framework that provides efficient, coordinated control of both internal NF state and network forwarding state. To this end, we design a control plane called OpenNF. We use carefully designed APIs and a clever combination of events and forwarding updates to address race conditions, bound overhead, and accommodate a variety of NFs. Our evaluation shows that OpenNF offers efficient state control without compromising flexibility, and requires modest additions to NFs.
【Keywords】: middleboxes; network functions; software-defined networking
【Paper Link】 【Pages】:175-186
【Authors】: Ilias Marinos ; Robert N. M. Watson ; Mark Handley
【Abstract】: Contemporary network stacks are masterpieces of generality, supporting many edge-node and middle-node functions. Generality comes at a high performance cost: current APIs, memory models, and implementations drastically limit the effectiveness of increasingly powerful hardware. Generality has historically been required so that individual systems could perform many functions. However, as providers have scaled services to support millions of users, they have transitioned toward thousands (or millions) of dedicated servers, each performing a few functions. We argue that the overhead of generality is now a key obstacle to effective scaling, making specialization not only viable, but necessary. We present Sandstorm and Namestorm, web and DNS servers that utilize a clean-slate userspace network stack that exploits knowledge of application-specific workloads. Based on the netmap framework, our novel approach merges application and network-stack memory models, aggressively amortizes protocol-layer costs based on application-layer knowledge, couples tightly with the NIC event model, and exploits microarchitectural features. Simultaneously, the servers retain use of conventional programming frameworks. We compare our approach with the FreeBSD and Linux stacks using the nginx web server and NSD name server, demonstrating 2--10x and 9x improvements in web-server and DNS throughput, lower CPU usage, linear multicore scaling, and saturated NIC hardware.
【Keywords】: clean-slate design; network performance; network stacks; network-stack specialization
【Paper Link】 【Pages】:187-198
【Authors】: Te-Yuan Huang ; Ramesh Johari ; Nick McKeown ; Matthew Trunnell ; Mark Watson
【Abstract】: Existing ABR algorithms face a significant challenge in estimating future capacity: capacity can vary widely over time, a phenomenon commonly observed in commercial services. In this work, we suggest an alternative approach: rather than presuming that capacity estimation is required, it is perhaps better to begin by using only the buffer, and then ask when capacity estimation is needed. We test the viability of this approach through a series of experiments spanning millions of real users in a commercial service. We start with a simple design which directly chooses the video rate based on the current buffer occupancy. Our own investigation reveals that capacity estimation is unnecessary in steady state; however using simple capacity estimation (based on immediate past throughput) is important during the startup phase, when the buffer itself is growing from empty. This approach allows us to reduce the rebuffer rate by 10-20% compared to Netflix's then-default ABR algorithm, while delivering a similar average video rate, and a higher video rate in steady state.
【Keywords】: http-based video streaming; video rate adaptation algorithm
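A minimal sketch of the steady-state, buffer-only rate selection the abstract describes: a linear map from buffer occupancy to the rate ladder, with a reservoir to guard against rebuffering. The ladder and thresholds are illustrative values, not Netflix's.

```python
RATES = [235, 375, 560, 750, 1050, 1750, 2350, 3000]  # kbps ladder (assumed)
RESERVOIR, CUSHION = 10.0, 110.0                       # seconds of buffer

def pick_rate(buffer_sec):
    if buffer_sec <= RESERVOIR:
        return RATES[0]                 # protect against rebuffering
    if buffer_sec >= RESERVOIR + CUSHION:
        return RATES[-1]
    frac = (buffer_sec - RESERVOIR) / CUSHION
    target = RATES[0] + frac * (RATES[-1] - RATES[0])  # linear rate map
    return max(r for r in RATES if r <= target)

for b in (5, 30, 60, 118):
    print(b, pick_rate(b))  # rate rises with buffer; no capacity estimate used
```

Note how no throughput measurement appears anywhere: in steady state the buffer alone drives the decision, which is exactly the design question the paper tests at scale.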
【Paper Link】 【Pages】:199-210
【Authors】: Dinesh Bharadia ; Sachin Katti
【Abstract】: This paper presents FastForward (FF), a novel full duplex relay that constructively forwards signals such that wireless network throughput and coverage are significantly enhanced. FF is a Layer 1 in-band full duplex device: it receives and transmits signals directly and simultaneously on the same frequency. It cleanly integrates into existing networks (both WiFi and LTE) as a separate device and does not require changes to the clients. FF's key invention is a constructive filtering algorithm that transforms the signal at the relay such that when it reaches the destination, it constructively combines with the direct signals from the source and provides a significant throughput gain. We prototype FF using off-the-shelf software radios running a stock WiFi PHY and show experimentally that it provides a 3× median throughput increase and nearly a 4× gain at the edge of the coverage area.
【Keywords】: full duplex; full duplex relay; interference cancellation; low latency cancellation
【Paper Link】 【Pages】:211-222
【Authors】: Swarun Kumar ; Ezzeldin Hamed ; Dina Katabi ; Li Erran Li
【Abstract】: Despite the rapid growth of next-generation cellular networks, researchers and end-users today have limited visibility into the performance and problems of these networks. As LTE deployments move towards femto and pico cells, even operators struggle to fully understand the propagation and interference patterns affecting their service, particularly indoors. This paper introduces LTEye, the first open platform to monitor and analyze LTE radio performance at a fine temporal and spatial granularity. LTEye accesses the LTE PHY layer without requiring private user information or provider support. It provides deep insights into the PHY-layer protocols deployed in these networks. LTEye's analytics enable researchers and policy makers to uncover serious deficiencies in these networks due to inefficient spectrum utilization and inter-cell interference. In addition, LTEye extends synthetic aperture radar (SAR), widely used for radar and backscatter signals, to operate over cellular signals. This enables businesses and end-users to localize mobile users and capture the distribution of LTE performance across spatial locations in their facility. As a result, they can diagnose problems and better plan deployment of repeaters or femto cells. We implement LTEye on USRP software radios, and present empirical insights and analytics from multiple AT&T and Verizon base stations in our locality.
【Keywords】: LTE; PHY; analytics; cellular; wireless
【Paper Link】 【Pages】:223-234
【Authors】: Guan-Hua Tu ; Yuanjie Li ; Chunyi Peng ; Chi-Yu Li ; Hongyi Wang ; Songwu Lu
【Abstract】: Control-plane protocols are complex in cellular networks. They communicate with one another along three dimensions of cross layers, cross (circuit-switched and packet-switched) domains, and cross (3G and 4G) systems. In this work, we propose signaling diagnosis tools and uncover six instances of problematic interactions. Such control-plane issues span both design defects in the 3GPP standards and operational slips by carriers. They are more damaging than data-plane failures. In the worst-case scenario, users may be out of service in 4G, or get stuck in 3G. We deduce root causes, propose solutions, and summarize learned lessons.
【Keywords】: cellular networks; control-plane; protocol verification
【Paper Link】 【Pages】:235-246
【Authors】: Jue Wang ; Deepak Vasisht ; Dina Katabi
【Abstract】: Prior work in RF-based positioning has mainly focused on discovering the absolute location of an RF source, where state-of-the-art systems can achieve an accuracy on the order of tens of centimeters using a large number of antennas. However, many applications in gaming and gesture-based interfaces see more benefit in knowing the detailed shape of a motion. Such trajectory tracing requires a resolution several fold higher than what existing RF-based positioning systems can offer. This paper shows that one can provide a dramatic increase in trajectory tracing accuracy, even with a small number of antennas. The key enabler for our design is a multi-resolution positioning technique that exploits an intrinsic tradeoff between improving the resolution and resolving ambiguity in the location of the RF source. The unique property of this design is its ability to precisely reconstruct the minute details in the trajectory shape, even when the absolute position might have an offset. We built a prototype of our design with commercial off-the-shelf RFID readers and tags and used it to enable a virtual touch screen, which allows a user to interact with a desired computing device by gesturing or writing her commands in the air, where each letter is only a few centimeters wide.
【Keywords】: RFID; grating lobes; trajectory tracing; virtual touch screen
【Paper Link】 【Pages】:247-258
【Authors】: Abhigyan Sharma ; Xiaozheng Tie ; Hardeep Uppal ; Arun Venkataramani ; David Westbrook ; Aditya Yadav
【Abstract】: Mobile devices dominate the Internet today; however, the Internet, rooted in its tethered origins, continues to provide poor infrastructure support for mobility. Our position is that in order to address this problem, a key challenge that must be addressed is the design of a massively scalable global name service that rapidly resolves identities to network locations under high mobility. Our primary contribution is the design, implementation, and evaluation of auspice, a next-generation global name service that addresses this challenge. A key insight underlying auspice is a demand-aware replica placement engine that intelligently replicates name records to provide low lookup latency, low update cost, and high availability. We have implemented a prototype of auspice and compared it against several commercial managed DNS providers as well as state-of-the-art research alternatives, and shown that auspice significantly outperforms both. We demonstrate proof-of-concept that auspice can serve as a complete end-to-end mobility solution as well as enable novel context-based communication primitives that generalize name- or address-based communication in today's Internet.
【Keywords】: distributed systems; mobility; network architecture
【Paper Link】 【Pages】:259-270
【Authors】: Zhaoyu Gao ; Arun Venkataramani ; James F. Kurose ; Simon Heimlicher
【Abstract】: This paper presents a quantitative methodology and results comparing different approaches for location-independent communication. Our approach is empirical and is based on real Internet topologies, routing tables from real routers, and a measured workload of the mobility of devices and content across network addresses today. We measure the extent of network mobility exhibited by mobile devices with a home-brewed Android app deployed on hundreds of smartphones, and measure the network mobility of Internet content from distributed vantage points. We combine this measured data with our quantitative methodology to analyze the different cost-benefit tradeoffs struck by location-independent network architectures with respect to routing update cost, path stretch, and forwarding table size. We find that more than 20% of users change over 10 IP addresses a day, suggesting that mobility is the norm rather than the exception, so intrinsic and efficient network support for mobility is critical. We also find that with purely name-based routing approaches, each event involving the mobility of a device or popular content may result in an update at up to 14% of Internet routers; but the fraction of impacted routers is much smaller for the long tail of unpopular content. These results suggest that recent proposals for pure name-based networking are suitable for highly aggregateable content that does not move frequently, but may need to be augmented with addressing-assisted approaches to handle device mobility.
【Keywords】: location-independence; mobility; network architecture
【Paper Link】 【Pages】:271-282
【Authors】: Tiffany Hyun-Jin Kim ; Cristina Basescu ; Limin Jia ; Soo Bum Lee ; Yih-Chun Hu ; Adrian Perrig
【Abstract】: In-network source authentication and path validation are fundamental primitives to construct higher-level security mechanisms such as DDoS mitigation, path compliance, packet attribution, or protection against flow redirection. Unfortunately, currently proposed solutions either fall short of addressing important security concerns or require a substantial amount of router overhead. In this paper, we propose lightweight, scalable, and secure protocols for shared key setup, source authentication, and path validation. Our prototype implementation demonstrates the efficiency and scalability of the protocols, especially for software-based implementations.
【Keywords】: path validation; retroactive key setup; source authentication
【Paper Link】 【Pages】:283-294
【Authors】: Yunpeng James Liu ; Peter Xiang Gao ; Bernard Wong ; Srinivasan Keshav
【Abstract】: Most datacenter network (DCN) designs focus on maximizing bisection bandwidth rather than minimizing server-to-server latency. We explore architectural approaches to building low-latency DCNs and introduce Quartz, a design element consisting of a full mesh of switches. Quartz can be used to replace portions of either a hierarchical network or a random network. Our analysis shows that replacing high port-count core switches with Quartz can significantly reduce switching delays, and replacing groups of top-of-rack and aggregation switches with Quartz can significantly reduce congestion-related delays from cross-traffic. We overcome the complexity of wiring a complete mesh using low-cost optical multiplexers that enable us to efficiently implement a logical mesh as a physical ring. We evaluate our performance using both simulations and a small working prototype. Our evaluation results confirm our analysis, and demonstrate that it is possible to build low-latency DCNs using inexpensive commodity elements without significant concessions to cost, scalability, or wiring complexity.
【Keywords】: WDM; datacenter; latency; optical technologies
【Paper Link】 【Pages】:295-306
【Authors】: Anuj Kalia ; Michael Kaminsky ; David G. Andersen
【Abstract】: This paper describes the design and implementation of HERD, a key-value system designed to make the best use of an RDMA network. Unlike prior RDMA-based key-value systems, HERD focuses its design on reducing network round trips while using efficient RDMA primitives; the result is substantially lower latency, and throughput that saturates modern, commodity RDMA hardware. HERD makes two unconventional design decisions: First, it does not use RDMA reads, despite the allure of operations that bypass the remote CPU entirely. Second, it uses a mix of RDMA and messaging verbs, despite the conventional wisdom that the messaging primitives are slow. A HERD client writes its request into the server's memory; the server computes the reply. This design uses a single round trip for all requests and supports up to 26 million key-value operations per second with 5μs average latency. Notably, for small key-value items, our full system throughput is similar to native RDMA read throughput and is over 2X higher than recent RDMA-based key-value systems. We believe that HERD further serves as an effective template for the construction of RDMA-based datacenter services.
【Keywords】: RDMA; ROCE; infiniband; key-value stores
【Paper Link】 【Pages】:307-318
【Authors】: Jonathan Perry ; Amy Ousterhout ; Hari Balakrishnan ; Devavrat Shah ; Hans Fugal
【Abstract】: An ideal datacenter network should provide several properties, including low median and tail latency, high utilization (throughput), fair allocation of network resources between users or applications, deadline-aware scheduling, and congestion (loss) avoidance. Current datacenter networks inherit the principles that went into the design of the Internet, where packet transmission and path selection decisions are distributed among the endpoints and routers. Instead, we propose that each sender should delegate control---to a centralized arbiter---of when each packet should be transmitted and what path it should follow. This paper describes Fastpass, a datacenter network architecture built using this principle. Fastpass incorporates two fast algorithms: the first determines the time at which each packet should be transmitted, while the second determines the path to use for that packet. In addition, Fastpass uses an efficient protocol between the endpoints and the arbiter and an arbiter replication strategy for fault-tolerant failover. We deployed and evaluated Fastpass in a portion of Facebook's datacenter network. Our results show that Fastpass achieves high throughput comparable to current networks with a 240x reduction in queue lengths (4.35 Mbytes reduced to 18 Kbytes), achieves much fairer and more consistent flow throughputs than the baseline TCP (5200x reduction in the standard deviation of per-flow throughput with five concurrent connections), scales from 1 to 8 cores in the arbiter implementation with the ability to schedule 2.21 Terabits/s of traffic in software on eight cores, and achieves a 2.5x reduction in the number of TCP retransmissions in a latency-sensitive service at Facebook.
【Keywords】: arbiter; centralized; data plane; datacenter; high throughput; low latency; scheduling; zero-queue
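The arbiter's timeslot allocation can be illustrated with a greedy matcher that grants each demand the earliest slot in which both endpoints are free. This is only a sketch of the idea; Fastpass's actual pipelined algorithm is far faster and also assigns paths.

```python
from collections import defaultdict

def allocate(demands, horizon=10):
    """Assign each (src, dst) demand the earliest slot where both are idle."""
    busy_src = defaultdict(set)  # timeslot -> sources already sending
    busy_dst = defaultdict(set)  # timeslot -> destinations already receiving
    schedule = []
    for src, dst in demands:     # one entry per packet-sized demand
        for t in range(horizon):
            if src not in busy_src[t] and dst not in busy_dst[t]:
                busy_src[t].add(src)
                busy_dst[t].add(dst)
                schedule.append((t, src, dst))
                break
    return schedule

demands = [("A", "C"), ("B", "C"), ("A", "D"), ("B", "D")]
for slot, src, dst in allocate(demands):
    print(f"t={slot}: {src} -> {dst}")  # no endpoint is double-booked per slot
```

Because every transmission is pre-cleared by the arbiter, no two packets contend for the same sender or receiver in a slot, which is how the design drives queue lengths toward zero.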
【Paper Link】 【Pages】:319-330
【Authors】: Navid Hamed Azimi ; Zafar Ayyub Qazi ; Himanshu Gupta ; Vyas Sekar ; Samir R. Das ; Jon P. Longtin ; Himanshu Shah ; Ashish Tanwer
【Abstract】: Conventional static datacenter (DC) network designs offer extreme cost vs. performance tradeoffs---simple leaf-spine networks are cost-effective but oversubscribed, while "fat tree"-like solutions offer good worst-case performance but are expensive. Recent results make a promising case for augmenting an oversubscribed network with reconfigurable inter-rack wireless or optical links. Inspired by the promise of reconfigurability, this paper presents FireFly, an inter-rack network solution that pushes DC network design to the extreme on three key fronts: (1) all links are reconfigurable; (2) all links are wireless; and (3) non top-of-rack switches are eliminated altogether. This vision, if realized, can offer significant benefits in terms of increased flexibility, reduced equipment cost, and minimal cabling complexity. In order to achieve this vision, we need to look beyond traditional RF wireless solutions due to their interference footprint which limits range and data rates. Thus, we make the case for using free-space optics (FSO). We demonstrate the viability of this architecture by (a) building a proof-of-concept prototype of a steerable small form factor FSO device using commodity components and (b) developing practical heuristics to address algorithmic and system-level challenges in network design and management.
【Keywords】: data centers; free-space optics; reconfigurability
【Paper Link】 【Pages】:331-342
【Authors】: K. V. Rashmi ; Nihar B. Shah ; Dikang Gu ; Hairong Kuang ; Dhruba Borthakur ; Kannan Ramchandran
【Abstract】: Erasure codes such as Reed-Solomon (RS) codes are being extensively deployed in data centers since they offer significantly higher reliability than data replication methods at much lower storage overheads. These codes however mandate much higher resources with respect to network bandwidth and disk IO during reconstruction of data that is missing or otherwise unavailable. Existing solutions to this problem either demand additional storage space or severely limit the choice of the system parameters. In this paper, we present "Hitchhiker", a new erasure-coded storage system that reduces both network traffic and disk IO by around 25% to 45% during reconstruction of missing or otherwise unavailable data, with no additional storage, the same fault tolerance, and arbitrary flexibility in the choice of parameters, as compared to RS-based systems. Hitchhiker 'rides' on top of RS codes, and is based on novel encoding and decoding techniques that will be presented in this paper. We have implemented Hitchhiker in the Hadoop Distributed File System (HDFS). When evaluating various metrics on the data-warehouse cluster in production at Facebook with real-time traffic and workloads, during reconstruction, we observe a 36% reduction in the computation time and a 32% reduction in the data read time, in addition to the 35% reduction in network traffic and disk IO. Hitchhiker can thus reduce the latency of degraded reads and perform faster recovery from failed or decommissioned machines.
【Keywords】: degraded reads; disk IO; distributed storage; erasure codes; fault tolerance; network traffic; recovery
【Paper Link】 【Pages】:343-344
【Authors】: Matthew K. Mukerjee ; JungAh Hong ; Junchen Jiang ; David Naylor ; Dongsu Han ; Srinivasan Seshan ; Hui Zhang
【Abstract】: User-created live video streaming is marking a fundamental shift in the workload of live video delivery. However, live-video-specific challenges and the viral nature of user-created content make it difficult for current CDNs to deliver 1) high-quality, 2) highly-scalable, and 3) highly-responsive service. We present the design and implementation of VDN, a new control plane for CDNs designed to optimize the delivery of live streams within the CDN. VDN satisfies these requirements by using two approaches: 1) optimizing directly for video quality (not just throughput) and 2) combining centralized control with local control, allowing VDN to adapt to traffic dynamics and network failures at fine timescales.
【Keywords】: CDNs; central optimization; hybrid control; live video
【Paper Link】 【Pages】:345-346
【Authors】: Sean P. Donovan ; Nick Feamster
【Abstract】: Home and business network operators have limited network statistics available over which management decisions can be made. Similarly, few triggered behaviors, such as usage or bandwidth caps for individual users, are available. By looking at sources of traffic, based on Domain Name System (DNS) cues for the content of particular web addresses or the source Autonomous System (AS) of the traffic, network operators could create new and interesting rules for their networks. NetAssay is a Software-Defined Networking (SDN)-based, network-wide monitoring and reaction framework. By integrating information from the Border Gateway Protocol (BGP) and the Domain Name System, NetAssay is able to integrate formerly disparate sources of control information and use it to provide better monitoring, more useful triggered events, and security benefits for network operators.
【Keywords】: network management; network monitoring; software-defined networking
【Paper Link】 【Pages】:347-348
【Authors】: Jinzhen Bao ; Baokang Zhao ; Wanrong Yu ; Zhenqian Feng ; Chunqing Wu ; Zhenghu Gong
【Abstract】: In recent years, with the rapid development of satellite technology including On Board Processing (OBP) and Inter Satellite Links (ISLs), satellite network devices such as space IP routers have been experimentally carried in space. However, building a future satellite network with current terrestrial Internet technologies is difficult due to the distinctive features of the space environment, such as severely limited resources and the need for remote hardware/software upgrades in space. In this paper, we propose OpenSAN, a novel architecture for software-defined satellite networks. By decoupling the data plane and control plane, OpenSAN provides satellite networks with high efficiency, fine-grained control, and the flexibility to support future advanced network technology. Moreover, we also discuss some practical challenges in the deployment of OpenSAN.
【Keywords】: satellite network; software-defined network
【Paper Link】 【Pages】:349-350
【Authors】: Abdulla Alwabel ; Minlan Yu ; Ying Zhang ; Jelena Mirkovic
【Abstract】: We propose a new software-defined security service -- SENSS -- that enables a victim network to request services from remote ISPs for traffic that carries source or destination IPs from this network's address space. These services range from statistics gathering, to filtering or quality-of-service guarantees, to route reports or modifications. The SENSS service has very simple, yet powerful, interfaces. This enables it to handle a variety of data plane and control plane attacks, while being easily implementable in today's ISPs. Through extensive evaluations on realistic traffic traces and Internet topologies, we show how SENSS can be used to quickly, safely and effectively mitigate a variety of large-scale attacks that are largely unhandled today.
【Keywords】: SDN; design; management; privacy; security
【Paper Link】 【Pages】:351-352
【Authors】: Srikanth Sundaresan ; Nick Feamster ; Renata Teixeira
【Abstract】: We present a demonstration of WTF (Where's The Fault?), a system that localizes performance problems in home and access networks. We implement WTF as custom firmware that runs on an off-the-shelf home router. WTF uses timing and buffering information from passively monitored traffic at home routers to detect both access link and wireless network bottlenecks.
【Keywords】: bottleneck location; home networks; performance diagnosis; troubleshooting
【Paper Link】 【Pages】:353-354
【Authors】: Xiongzi Ge ; Yi Liu ; David H. C. Du ; Liang Zhang ; Hongguang Guan ; Jian Chen ; Yuping Zhao ; Xinyu Hu
【Abstract】: Dedicated accelerator resources (e.g., FPGAs) are still required to bridge the performance gap between software-based middleboxes (MBs) and commodity hardware. To consolidate various hardware resources in an elastic, programmable and reconfigurable manner, we design and build a flexible and consolidated framework, OpenANFV, which supports virtualized accelerators for MBs in the cloud environment. OpenANFV integrates seamlessly and efficiently into OpenStack to provide high performance on top of commodity hardware and to cope with various virtual function requirements. OpenANFV works as an independent component that manages and virtualizes the acceleration resources (just as Cinder manages block storage resources and Nova manages computing resources). Specifically, OpenANFV has the following three features. (1) Automated management. Provisioning for multiple Virtualized Network Functions (VNFs) is automated to meet the dynamic requirements of the NFV environment. Such automation alleviates the time pressure of complicated provisioning and configuration and reduces the probability of manually induced configuration errors. (2) Elasticity. VNFs are created, migrated, and destroyed on demand in real time. The reconfigurable hardware resources in the pool can rapidly and flexibly offload the corresponding services to the accelerator platform in the dynamic NFV environment. (3) Coordination with OpenStack. The OpenANFV APIs are designed and implemented to coordinate with OpenStack mechanisms to support the required virtualized MBs for multiple tenants.
【Keywords】: FPGA; middlebox; network function virtualization; openstack
【Paper Link】 【Pages】:355-356
【Authors】: Filipe Manco ; João Martins ; Felipe Huici
【Abstract】: Traditionally, the number of VMs that can run on a server and the speed with which they can be migrated have been less than optimal, mostly because of the memory and CPU requirements imposed on the system by the full-fledged OSes that the VMs run. More recently, work on VMs based on minimalistic or specialized OSes has started pushing the envelope of how reactive or fluid the cloud can be. In this demo we will show how to concurrently execute thousands of Xen-based VMs on a single inexpensive server. We will also show instantiation and migration of such VMs in tens of milliseconds, and transparent, wide-area migration of virtualized middleboxes by combining such VMs with the Multipath TCP (MPTCP) protocol.
【Keywords】: cloud; performance; server consolidation; virtualization; xen
【Paper Link】 【Pages】:357-358
【Authors】: Gordon Stewart ; Mahanth Gowda ; Geoffrey Mainland ; Bozidar Radunovic ; Dimitrios Vytiniotis ; Doug Patterson
【Abstract】: Software-defined radios (SDRs) have the potential to bring major innovation to wireless networking design. However, their impact so far has been limited by complex programming tools. Most existing tools are either too slow to achieve the full line speeds of contemporary wireless PHYs or too complex to master. In this demo we present our novel SDR programming environment called Ziria. Ziria consists of a novel programming language and an optimizing compiler. The compiler is able to synthesize very efficient SDR code from high-level PHY descriptions written in the Ziria language. To illustrate its potential, we present the design of an LTE-like PHY layer in Ziria. We run it on the Sora SDR platform and demonstrate on a testbed that it is able to operate in real time.
【Keywords】: DSL; SDR; domain specific language; programming; software-defined radio; wireless
【Paper Link】 【Pages】:359-360
【Authors】: Steffen Gebert ; David Hock ; Thomas Zinner ; Phuoc Tran-Gia ; Marco Hoffmann ; Michael Jarschel ; Ernst-Dieter Schmidt ; Ralf-Peter Braun ; Christian Banse ; Andreas Köpsel
【Abstract】:
【Keywords】: function placement; network functions virtualisation
【Paper Link】 【Pages】:361-362
【Authors】: Benjamin Hesmans ; Olivier Bonaventure
【Abstract】: Multipath TCP is a new extension to TCP that enables a host to transmit the packets of a given connection over several interfaces. We propose mptcptrace, a tool that enables a detailed analysis of Multipath TCP packet traces.
【Keywords】: Multipath TCP
【Paper Link】 【Pages】:363-364
【Authors】: Mark Schmidt ; Florian Heimgaertner ; Michael Menth
【Abstract】: This demo presents a testbed for computer networking education. It leverages hardware virtualization to accommodate 6 PCs and 2 routers on a single testbed host, reducing costs, energy consumption, space requirements, and heat emission. The testbed stands out by providing dedicated physical Ethernet and USB interfaces for virtual machines, so that students can interconnect them with cables and switches as in a non-virtualized testbed.
【Keywords】: VLAN; computer networking education; virtual machines
【Paper Link】 【Pages】:365-366
【Authors】: Mo Dong ; Qingxi Li ; Doron Zarchy ; Brighten Godfrey ; Michael Schapira
【Abstract】: After more than two decades of evolution, TCP and its end-host-based modifications can still suffer severely degraded performance under challenging real-world network conditions. The reason, as we observe, is a fundamental architectural deficiency of the TCP family: it hardwires packet-level events to control responses and ignores empirical performance. Breaking away from this architectural deficiency, we propose Performance-oriented Congestion Control (PCC), a new congestion control architecture in which each sender controls its sending strategy based on empirically observed performance metrics. We show through preliminary experimental results that PCC achieves consistently high performance under various challenging network conditions.
【Keywords】: congestion control
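【Example】: A minimal sketch of PCC's control idea, assuming a toy fixed-capacity link and a made-up utility function (the paper defines its own utility functions and a more careful randomized controller): the sender empirically probes rates slightly above and below its current rate and keeps whichever yields higher observed utility.

    def send_at(rate_mbps):
        """Toy network: a 100 Mbps bottleneck; excess sending becomes loss."""
        capacity = 100.0
        delivered = min(rate_mbps, capacity)
        loss = max(0.0, (rate_mbps - capacity) / rate_mbps)
        return delivered, loss

    def utility(throughput, loss):
        # Illustrative shape only: reward goodput, heavily penalize loss.
        return throughput * (1.0 - loss) - 10.0 * throughput * loss

    def pcc_step(rate, eps=0.05):
        trials = {}
        for r in (rate * (1 - eps), rate * (1 + eps)):
            thr, loss = send_at(r)
            trials[r] = utility(thr, loss)
        return max(trials, key=trials.get)  # move toward the better-performing rate

    rate = 10.0
    for _ in range(200):
        rate = pcc_step(rate)
    print(f"settles near the 100 Mbps capacity: {rate:.1f} Mbps")

Note that no packet-level event (loss, timeout) is hardwired to a response; the controller only compares empirically measured utility, which is the architectural shift the abstract describes.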
【Paper Link】 【Pages】:367-368
【Authors】: Arup Raton Roy ; Md. Faizul Bari ; Mohamed Faten Zhani ; Reaz Ahmed ; Raouf Boutaba
【Abstract】: With the growing adoption of Software Defined Networking (SDN) technology, there is a compelling need for an SDN emulator that can facilitate experimenting with new SDN solutions. Unfortunately, Mininet, the de facto standard emulator for software defined networks, fails to scale with network size and traffic volume. To address these limitations, we developed Distributed OpenFlow Testbed (DOT), a highly scalable emulator for SDN. It can emulate large SDN deployments by distributing the workload over a cluster of compute nodes. Moreover, DOT can emulate a wider range of network services compared to other publicly available SDN emulators and simulators. Our demonstration will illustrate several features of DOT including: (i) how easy it is to setup the emulator, (ii) how to deploy a topology using a single configuration file, (iii) how to run a connectivity test to ensure that the emulated network is properly deployed, and (iv) how to control and monitor the emulated components from a centralized location. We will also showcase DOT by emulating two applications: (i) policy based traffic steering through middleboxes and (ii) traffic monitoring.
【Keywords】: emulator; software defined networking; testbed
【Paper Link】 【Pages】:369-370
【Authors】: Adrian Gämperli ; Vasileios Kotronis ; Xenofontas Dimitropoulos
【Abstract】:
【Keywords】: BGP; emulation; software defined networks
【Paper Link】 【Pages】:371-372
【Authors】: Zhenlong Yuan ; Yongqiang Lu ; Zhaoguo Wang ; Yibo Xue
【Abstract】: As smartphones and mobile devices rapidly become indispensable for many network users, mobile malware has become a serious threat to network security and privacy. Especially on the popular Android platform, many malicious apps hide among a large number of normal apps, which makes malware detection more challenging. In this paper, we propose an ML-based method that utilizes more than 200 features extracted from both static and dynamic analysis of Android apps for malware detection. The comparison of modeling results demonstrates that the deep learning technique is especially suitable for Android malware detection, achieving 96% accuracy on real-world Android application sets.
【Keywords】: android malware; deep learning; detection
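【Example】: A self-contained sketch of the kind of classifier the abstract describes, with synthetic binary feature vectors standing in for the ~200 static/dynamic features; this is a tiny one-hidden-layer network in NumPy, not the authors' actual model or dataset.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for ~200 binary static/dynamic features per app.
    n_apps, n_feat = 1000, 200
    X = rng.integers(0, 2, size=(n_apps, n_feat)).astype(float)
    w_true = rng.normal(size=n_feat)
    y = (X @ w_true + rng.normal(scale=0.5, size=n_apps) > 0).astype(float)

    # One hidden layer, trained with full-batch gradient descent.
    H, lr = 32, 0.5
    W1 = rng.normal(scale=0.1, size=(n_feat, H)); b1 = np.zeros(H)
    W2 = rng.normal(scale=0.1, size=H); b2 = 0.0

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(500):
        h = np.tanh(X @ W1 + b1)             # hidden activations
        p = sigmoid(h @ W2 + b2)             # predicted P(malware)
        g = (p - y) / n_apps                 # dLoss/dlogit for cross-entropy
        W2 -= lr * h.T @ g; b2 -= lr * g.sum()
        gh = np.outer(g, W2) * (1 - h ** 2)  # backprop through tanh
        W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(axis=0)

    print(f"training accuracy: {((p > 0.5) == y).mean():.2%}")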
【Paper Link】 【Pages】:373-374
【Authors】: Payman Samadi ; Varun Gupta ; Berk Birand ; Howard Wang ; Gil Zussman ; Keren Bergman
【Abstract】: We present a control plane architecture to accelerate multicast and incast traffic delivery for data-intensive applications in cluster-computing interconnection networks. The architecture is experimentally examined by enabling physical layer optical multicasting on-demand for the application layer to achieve non-blocking performance.
【Keywords】: hybrid data center networks; incast; multicast; optics
【Paper Link】 【Pages】:375-376
【Authors】: Arjuna Sathiaseelan ; M. Said Seddiki ; Stoyan Stoyanov ; Dirk Trossen
【Abstract】:
【Keywords】: home networks; online social networks; software defined networking
【Paper Link】 【Pages】:377-378
【Authors】: Masoud Moshref ; Apoorv Bhargava ; Adhip Gupta ; Minlan Yu ; Ramesh Govindan
【Abstract】:
【Keywords】: software-defined network; state machine
【Paper Link】 【Pages】:379-380
【Authors】: Liang Zhu ; Zi Hu ; John S. Heidemann ; Duane Wessels ; Allison Mankin ; Nikita Somaiya
【Abstract】: DNS is the canonical example of a protocol built on connectionless UDP. Yet DNS today is challenged by eavesdropping that compromises privacy, source-address spoofing that enables denial-of-service (DoS) attacks on the server and third parties, injection attacks that exploit fragmentation, and size limitations that constrain policy and operational choices. We propose T-DNS to address these problems. It uses TCP to smoothly support large payloads and to mitigate spoofing and amplification for DoS. T-DNS uses transport-layer security (TLS) to provide privacy from users to their DNS resolvers and optionally to authoritative servers. Our model shows that end-to-end latency with TLS to the recursive resolver is only about 9% higher when UDP is used to the authoritative server, and 22% higher with TCP to the authoritative. With diverse traces we show that frequent connection reuse is possible (60-95% for stub and recursive resolvers, although half that for authoritative servers). Our experiments show that after connection establishment, TCP and TLS latency is equivalent to UDP. With conservative timeouts (20 s at authoritative servers and 60 s elsewhere) and conservative estimates of connection-state memory requirements, we show that server memory requirements are well within current, commodity server hardware. We identify the key design and implementation decisions needed to minimize overhead: query pipelining, out-of-order responses, TLS connection resumption, and plausible timeouts. This poster abstract summarizes work we describe in detail in ISI-TR-2014-693.
【Keywords】: domain name system (DNS); network protocols; performance; privacy; security; transport layer security (TLS)
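【Example】: DNS over TLS using the standard 2-byte TCP length framing (RFC 1035), the transport combination T-DNS builds on. The well-known DNS-over-TLS port 853 and the public resolver used here postdate the paper and are conveniences assumed for the sketch; the framing and TLS handshake match the mechanism described.

    import socket, ssl, struct

    def build_query(name, qtype=1):
        # Minimal DNS query: 12-byte header with RD set, then QNAME/QTYPE/QCLASS.
        header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
        qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split("."))
        return header + qname + b"\x00" + struct.pack("!HH", qtype, 1)

    def recv_exact(s, n):
        buf = b""
        while len(buf) < n:
            chunk = s.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("short read")
            buf += chunk
        return buf

    def query_over_tls(server, name):
        ctx = ssl.create_default_context()
        with socket.create_connection((server, 853), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=server) as tls:
                q = build_query(name)
                tls.sendall(struct.pack("!H", len(q)) + q)  # 2-byte length prefix
                (rlen,) = struct.unpack("!H", recv_exact(tls, 2))
                return recv_exact(tls, rlen)

    resp = query_over_tls("dns.google", "example.com")
    print(len(resp), "byte response,", struct.unpack("!H", resp[6:8])[0], "answers")

Reusing one established connection for many length-prefixed queries (pipelining) is exactly the overhead-reducing decision the abstract highlights.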
【Paper Link】 【Pages】:381-382
【Authors】: Oliver Michel ; Michael Coughlin ; Eric Keller
【Abstract】: Given that Software-Defined Networking has been highly successful in solving many of today's manageability, flexibility, and scalability issues in large-scale networks, in this paper we argue that the concept of SDN can be extended even further. Many applications (especially stream-processing and big-data applications) rely on graph-based inter-process communication patterns that are very similar to those in computer networks. In our view, this network abstraction spanning different types of entities is highly suitable for and would benefit from central (SDN-inspired) control, for the same reasons classical networks do. In this work, we investigate the commonalities between such intra-host networks and classical computer networking. Based on this, we study the feasibility of a central network controller that manages both network traffic and intra-host communication over a custom bus system.
【Keywords】: SDN; multithreading; stream processing
【Paper Link】 【Pages】:383-394
【Authors】: Yang Wu ; Mingchen Zhao ; Andreas Haeberlen ; Wenchao Zhou ; Boon Thau Loo
【Abstract】: When debugging a distributed system, it is sometimes necessary to explain the absence of an event - for instance, why a certain route is not available, or why a certain packet did not arrive. Existing debuggers offer some support for explaining the presence of events, usually by providing the equivalent of a backtrace in conventional debuggers, but they are not very good at answering 'Why not?' questions: there is simply no starting point for a possible backtrace. In this paper, we show that the concept of negative provenance can be used to explain the absence of events in distributed systems. Negative provenance relies on counterfactual reasoning to identify the conditions under which the missing event could have occurred. We define a formal model of negative provenance for distributed systems, and we present the design of a system called Y! that tracks both positive and negative provenance and can use them to answer diagnostic queries. We describe how we have used Y! to debug several realistic problems in two application domains: software-defined networks and BGP interdomain routing. Results from our experimental evaluation show that the overhead of Y! is moderate.
【Keywords】: debugging; diagnostics; provenance
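【Example】: A heavily simplified illustration of counterfactual "Why not?" reasoning over Datalog-style rules. The rules and facts are invented; Y!'s actual provenance model (time, distribution, query optimization) is far richer than this sketch.

    # Rules: head -> list of alternative bodies (sets of preconditions).
    RULES = {
        "packet_at(h2)": [{"flow_entry(s1,h2)", "link_up(s1,h2)"}],
        "flow_entry(s1,h2)": [{"controller_installed(s1,h2)"}],
    }
    FACTS = {"link_up(s1,h2)"}  # controller_installed(...) was never derived

    def why_not(event, depth=0):
        pad = "  " * depth
        if event in FACTS:
            print(pad + f"+ {event} holds")
            return
        bodies = RULES.get(event)
        if not bodies:
            print(pad + f"- {event} is absent: base fact never inserted")
            return
        print(pad + f"- {event} is absent: every rule deriving it fails:")
        for body in bodies:
            for pre in sorted(body):
                why_not(pre, depth + 1)

    why_not("packet_at(h2)")

The recursion bottoms out at the missing base fact (here, the flow entry the controller never installed), which is the kind of root cause a negative-provenance query surfaces.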
【Paper Link】 【Pages】:395-406
【Authors】: Colin Scott ; Andreas Wundsam ; Barath Raghavan ; Aurojit Panda ; Andrew Or ; Jefferson Lai ; Eugene Huang ; Zhi Liu ; Ahmed El-Hassany ; Sam Whitlock ; Hrishikesh B. Acharya ; Kyriakos Zarifis ; Scott Shenker
【Abstract】: Software bugs are inevitable in software-defined networking control software, and troubleshooting is a tedious, time-consuming task. In this paper we discuss how to improve control software troubleshooting by presenting a technique for automatically identifying a minimal sequence of inputs responsible for triggering a given bug, without making assumptions about the language or instrumentation of the software under test. We apply our technique to five open source SDN control platforms---Floodlight, NOX, POX, Pyretic, ONOS---and illustrate how the minimal causal sequences our system found aided the troubleshooting process.
【Keywords】: SDN control software; test case minimization; troubleshooting
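【Example】: The flavor of input minimization the abstract describes can be illustrated with classic delta debugging (ddmin). The authors' system additionally replays events against the real control software and copes with timing and asynchrony, which this sketch ignores; the `triggers_bug` predicate is a stand-in for replaying a candidate sequence.

    def ddmin(events, triggers_bug):
        n = 2
        while len(events) >= 2:
            chunk = max(1, len(events) // n)
            subsets = [events[i:i + chunk] for i in range(0, len(events), chunk)]
            reduced = False
            for s in subsets:
                complement = [e for e in events if e not in s]
                if triggers_bug(complement):
                    events, n = complement, max(n - 1, 2)  # keep the smaller input
                    reduced = True
                    break
            if not reduced:
                if n >= len(events):
                    break          # finest granularity reached
                n = min(len(events), n * 2)
        return events

    # Example: the "bug" fires whenever events 3 and 7 are both present.
    bug = lambda seq: 3 in seq and 7 in seq
    print(ddmin(list(range(10)), bug))   # -> [3, 7]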
【Paper Link】 【Pages】:407-418
【Authors】: Jeff Rasley ; Brent Stephens ; Colin Dixon ; Eric Rozner ; Wes Felter ; Kanak Agarwal ; John B. Carter ; Rodrigo Fonseca
【Abstract】: Software-defined networking introduces the possibility of building self-tuning networks that constantly monitor network conditions and react rapidly to important events such as congestion. Unfortunately, state-of-the-art monitoring mechanisms for conventional networks require hundreds of milliseconds to seconds to extract global network state, like link utilization or the identity of "elephant" flows. Such latencies are adequate for responding to persistent issues, e.g., link failures or long-lasting congestion, but are inadequate for responding to transient problems, e.g., congestion induced by bursty workloads sharing a link. In this paper, we present Planck, a novel network measurement architecture that employs oversubscribed port mirroring to extract network information at 280 µs--7 ms timescales on a 1 Gbps commodity switch and 275 µs--4 ms timescales on a 10 Gbps commodity switch, over 11x and 18x faster than recent approaches, respectively (and up to 291x if switch firmware allowed buffering to be disabled on some ports). To demonstrate the value of Planck's speed and accuracy, we use it to drive a traffic engineering application that can reroute congested flows in milliseconds. On a 10 Gbps commodity switch, Planck-driven traffic engineering achieves aggregate throughput within 1--4% of optimal for most workloads we evaluated, even with flows as small as 50 MiB, an improvement of up to 53% over previous schemes.
【Keywords】: networking measurement; software-defined networking; traffic engineering
【Paper Link】 【Pages】:419-430
【Authors】: Masoud Moshref ; Minlan Yu ; Ramesh Govindan ; Amin Vahdat
【Abstract】: Software-defined networks can enable a variety of concurrent, dynamically instantiated measurement tasks that provide fine-grained visibility into network traffic. Recently, there have been many proposals to configure TCAM counters in hardware switches to monitor traffic. However, the TCAM memory at switches is fundamentally limited, and the accuracy of the measurement tasks is a function of the resources devoted to them on each switch. This paper describes an adaptive measurement framework, called DREAM, that dynamically adjusts the resources devoted to each measurement task, while ensuring a user-specified level of accuracy. Since the trade-off between resource usage and accuracy can depend upon the type of tasks, their parameters, and traffic characteristics, DREAM does not assume an a priori characterization of this trade-off, but instead dynamically searches for a resource allocation that is sufficient to achieve a desired level of accuracy. A prototype implementation and simulations with three network-wide measurement tasks (heavy hitter, hierarchical heavy hitter and change detection) and diverse traffic show that DREAM can support more concurrent tasks with higher accuracy than several other alternatives.
【Keywords】: resource allocation; software-defined measurement
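【Example】: A sketch of the adaptive-allocation loop with an assumed task interface (name, current TCAM entries, current accuracy estimate); DREAM's real estimators infer per-task accuracy rather than being handed it, and allocation is per switch.

    def rebalance(tasks, target=0.9, step=8):
        """tasks: dict name -> {'entries': int, 'accuracy': float} (assumed shape).
        Each epoch, shift entries from tasks above the accuracy target to the
        task furthest below it."""
        donors = [t for t in tasks.values() if t["accuracy"] > target]
        needy = [t for t in tasks.values() if t["accuracy"] < target]
        for donor in donors:
            if not needy:
                break
            give = min(step, donor["entries"] - 1)  # never drop below 1 entry
            donor["entries"] -= give
            worst = min(needy, key=lambda t: t["accuracy"])
            worst["entries"] += give
        return tasks

    tasks = {"heavy-hitters": {"entries": 64, "accuracy": 0.97},
             "change-detect": {"entries": 16, "accuracy": 0.71}}
    print(rebalance(tasks))  # entries flow toward the under-performing task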
【Paper Link】 【Pages】:431-442
【Authors】: Fahad R. Dogar ; Thomas Karagiannis ; Hitesh Ballani ; Antony I. T. Rowstron
【Abstract】: Many data center applications perform rich and complex tasks (e.g., executing a search query or generating a user's news-feed). From a network perspective, these tasks typically comprise multiple flows, which traverse different parts of the network at potentially different times. Most network resource allocation schemes, however, treat all these flows in isolation -- rather than as part of a task -- and therefore only optimize flow-level metrics. In this paper, we show that task-aware network scheduling, which groups flows of a task and schedules them together, can reduce both the average and tail completion times for typical data center applications. To achieve these benefits in practice, we design and implement Baraat, a decentralized task-aware scheduling system. Baraat schedules tasks in a FIFO order but avoids head-of-line blocking by dynamically changing the level of multiplexing in the network. Through experiments with Memcached on a small testbed and large-scale simulations, we show that Baraat outperforms state-of-the-art decentralized schemes (e.g., pFabric) as well as centralized schedulers (e.g., Orchestra) for a wide range of workloads (e.g., search and analytics).
【Keywords】: datacenter; response time; scheduling; transport
【Paper Link】 【Pages】:443-454
【Authors】: Mosharaf Chowdhury ; Yuan Zhong ; Ion Stoica
【Abstract】: Communication in data-parallel applications often involves a collection of parallel flows. Traditional techniques to optimize flow-level metrics do not perform well in optimizing such collections, because the network is largely agnostic to application-level requirements. The recently proposed coflow abstraction bridges this gap and creates new opportunities for network scheduling. In this paper, we address inter-coflow scheduling for two different objectives: decreasing communication time of data-intensive jobs and guaranteeing predictable communication time. We introduce the concurrent open shop scheduling with coupled resources problem, analyze its complexity, and propose effective heuristics to optimize either objective. We present Varys, a system that enables data-intensive frameworks to use coflows and the proposed algorithms while maintaining high network utilization and guaranteeing starvation freedom. EC2 deployments and trace-driven simulations show that communication stages complete up to 3.16X faster on average and up to 2X more coflows meet their deadlines using Varys in comparison to per-flow mechanisms. Moreover, Varys outperforms non-preemptive coflow schedulers by more than 5X.
【Keywords】: coflow; data-intensive applications; datacenter networks
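【Example】: Varys orders coflows with a Smallest-Effective-Bottleneck-First (SEBF) heuristic; the sketch below computes each coflow's bottleneck completion time over ingress/egress ports and sorts by it. The demands and uniform port capacity are invented, and none of Varys's rate allocation or starvation-freedom machinery is shown.

    def bottleneck_time(coflow, capacity_gbps=10.0):
        # coflow: list of (src_port, dst_port, gigabits) flows.
        load = {}
        for src, dst, gbits in coflow:
            load[("out", src)] = load.get(("out", src), 0.0) + gbits
            load[("in", dst)] = load.get(("in", dst), 0.0) + gbits
        return max(load.values()) / capacity_gbps  # slowest port decides

    def sebf_order(coflows):
        return sorted(coflows, key=lambda nc: bottleneck_time(nc[1]))

    coflows = [
        ("shuffle-A", [(1, 2, 40.0), (1, 3, 40.0)]),  # bottlenecked at port 1 out
        ("shuffle-B", [(2, 3, 10.0)]),
    ]
    for name, cf in sebf_order(coflows):
        print(name, f"{bottleneck_time(cf):.1f}s")    # shuffle-B goes first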
【Paper Link】 【Pages】:455-466
【Authors】: Robert Grandl ; Ganesh Ananthanarayanan ; Srikanth Kandula ; Sriram Rao ; Aditya Akella
【Abstract】: Tasks in modern data-parallel clusters have highly diverse resource requirements along CPU, memory, disk and network. Any of these resources may become a bottleneck, and hence the likelihood of wasting resources due to fragmentation is now larger. Today's schedulers do not explicitly reduce fragmentation. Worse, since they only allocate cores and memory, the resources that they ignore (disk and network) can be over-allocated, leading to interference, failures and hogging of cores or memory that could have been used by other tasks. We present Tetris, a cluster scheduler that packs, i.e., matches multi-resource task requirements with resource availabilities of machines so as to increase cluster efficiency (reduce makespan). Further, Tetris uses an analog of shortest-running-time-first to trade off cluster efficiency for speeding up individual jobs. Tetris' packing heuristics seamlessly work alongside a large class of fairness policies. Trace-driven simulations and deployment of our prototype on a 250-node cluster show median gains of 30% in job completion time while achieving nearly perfect fairness.
【Keywords】: cluster schedulers; completion time; fairness; makespan; multi-dimensional; packing
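【Example】: The core packing idea in miniature: score each feasible (task, machine) pair by the dot product between the task's multi-resource demand vector and the machine's free-resource vector, then place greedily. All numbers are invented, and the real scheduler also blends in a shortest-remaining-time term and fairness policies.

    def alignment(demand, free):
        # Dot-product alignment score; -1 if the task does not fit at all.
        fits = all(d <= f for d, f in zip(demand, free))
        return sum(d * f for d, f in zip(demand, free)) if fits else -1.0

    def place(tasks, machines):
        free = {m: list(v) for m, v in machines.items()}
        placements = []
        for name, demand in tasks.items():
            score, best = max((alignment(demand, f), m) for m, f in free.items())
            if score < 0:
                placements.append((name, None))  # nothing fits; task waits
                continue
            free[best] = [f - d for f, d in zip(free[best], demand)]
            placements.append((name, best))
        return placements

    # Vectors are [cores, mem, disk, net]; units and values are illustrative.
    machines = {"m1": [16, 64, 8, 10], "m2": [8, 32, 4, 10]}
    tasks = {"t1": [4, 8, 1, 2], "t2": [8, 16, 2, 1]}
    print(place(tasks, machines))

The dot product favors machines whose spare capacity is shaped like the task's demand, which is what reduces fragmentation.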
【Paper Link】 【Pages】:467-478
【Authors】: Jeongkeun Lee ; Yoshio Turner ; Myungjin Lee ; Lucian Popa ; Sujata Banerjee ; Joon-Myung Kang ; Puneet Sharma
【Abstract】: Providing bandwidth guarantees to specific applications is becoming increasingly important as applications compete for shared cloud network resources. We present CloudMirror, a solution that provides bandwidth guarantees to cloud applications based on a new network abstraction and workload placement algorithm. An effective network abstraction should enable applications to easily and accurately specify their requirements, while simultaneously enabling the infrastructure to provision resources efficiently for deployed applications. Prior research has approached the bandwidth guarantee specification by using abstractions that resemble physical network topologies. We present a contrasting approach of deriving a network abstraction based on application communication structure, called Tenant Application Graph or TAG. CloudMirror also incorporates a new workload placement algorithm that efficiently meets bandwidth requirements specified by TAGs while factoring in high availability considerations. Extensive simulations using real application traces and datacenter topologies show that CloudMirror can handle 40% more bandwidth demand than the state of the art (e.g., the Oktopus system), while improving high availability from 20% to 70%.
【Keywords】: application; availability; bandwidth; cloud; datacenter; virtual network
【Paper Link】 【Pages】:479-490
【Authors】: Anirudh Sivaraman ; Keith Winstein ; Pratiksha Thaker ; Hari Balakrishnan
【Abstract】: When designing a distributed network protocol, typically it is infeasible to fully define the target network where the protocol is intended to be used. It is therefore natural to ask: How faithfully do protocol designers really need to understand the networks they design for? What are the important signals that endpoints should listen to? How can researchers gain confidence that systems that work well on well-characterized test networks during development will also perform adequately on real networks that are inevitably more complex, or future networks yet to be developed? Is there a tradeoff between the performance of a protocol and the breadth of its intended operating range of networks? What is the cost of playing fairly with cross-traffic that is governed by another protocol? We examine these questions quantitatively in the context of congestion control, by using an automated protocol-design tool to approximate the best possible congestion-control scheme given imperfect prior knowledge about the network. We found only weak evidence of a tradeoff between operating range in link speeds and performance, even when the operating range was extended to cover a thousand-fold range of link speeds. We found that it may be acceptable to simplify some characteristics of the network---such as its topology---when modeling for design purposes. Some other features, such as the degree of multiplexing and the aggressiveness of contending endpoints, are important to capture in a model.
【Keywords】: congestion control; learnability; machine learning; measurement; protocol; simulation
【Paper Link】 【Pages】:491-502
【Authors】: Ali Munir ; Ghufran Baig ; Syed Mohammad Irteza ; Ihsan Ayyub Qazi ; Alex X. Liu ; Fahad R. Dogar
【Abstract】: Many data center transports have been proposed in recent times (e.g., DCTCP, PDQ, pFabric, etc). Contrary to the common perception that they are competitors (i.e., protocol A vs. protocol B), we claim that the underlying strategies used in these protocols are, in fact, complementary. Based on this insight, we design PASE, a transport framework that synthesizes existing transport strategies, namely, self-adjusting endpoints (used in TCP-style protocols), in-network prioritization (used in pFabric), and arbitration (used in PDQ). PASE is deployment friendly: it does not require any changes to the network fabric; yet, its performance is comparable to, or better than, the state-of-the-art protocols that require changes to network elements (e.g., pFabric). We evaluate PASE using simulations and testbed experiments. Our results show that PASE performs well for a wide range of application workloads and network settings.
【Keywords】: datacenter; scheduling; transport
【Paper Link】 【Pages】:503-514
【Authors】: Mohammad Alizadeh ; Tom Edsall ; Sarang Dharmapurikar ; Ramanan Vaidyanathan ; Kevin Chu ; Andy Fingerhut ; Vinh The Lam ; Francis Matus ; Rong Pan ; Navindra Yadav ; George Varghese
【Abstract】: We present the design, implementation, and evaluation of CONGA, a network-based distributed congestion-aware load balancing mechanism for datacenters. CONGA exploits recent trends including the use of regular Clos topologies and overlays for network virtualization. It splits TCP flows into flowlets, estimates real-time congestion on fabric paths, and allocates flowlets to paths based on feedback from remote switches. This enables CONGA to efficiently balance load and seamlessly handle asymmetry, without requiring any TCP modifications. CONGA has been implemented in custom ASICs as part of a new datacenter fabric. In testbed experiments, CONGA has 5x better flow completion times than ECMP even with a single link failure and achieves 2-8x better throughput than MPTCP in Incast scenarios. Further, the Price of Anarchy for CONGA is provably small in Leaf-Spine topologies; hence CONGA is nearly as effective as a centralized scheduler while being able to react to congestion in microseconds. Our main thesis is that datacenter fabric load balancing is best done in the network, and requires global schemes such as CONGA to handle asymmetry.
【Keywords】: datacenter fabric; distributed; load balancing
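【Example】: A sketch of flowlet switching on invented data: a flow may change paths only after an inactivity gap (a flowlet boundary), and each new flowlet takes the path with the smallest congestion metric. In CONGA this metric comes from in-band feedback from remote switches and the logic runs in the switch ASIC, not in host software; the gap value below is assumed.

    import time

    FLOWLET_GAP_S = 500e-6  # inactivity gap that opens a new flowlet (assumed)

    class FlowletRouter:
        def __init__(self, path_congestion):
            self.paths = path_congestion   # path -> congestion metric in [0, 1]
            self.state = {}                # flow -> (path, time of last packet)

        def route(self, flow):
            now = time.monotonic()
            path, last = self.state.get(flow, (None, 0.0))
            if path is None or now - last > FLOWLET_GAP_S:
                # New flowlet: free to pick the currently least congested path.
                path = min(self.paths, key=self.paths.get)
            self.state[flow] = (path, now)
            return path

    r = FlowletRouter({"spine1": 0.7, "spine2": 0.2})
    print(r.route(("10.0.0.1", "10.0.0.2", 5201)))   # -> spine2

Switching only at flowlet boundaries keeps the packets of a burst on one path, which is how load can be rebalanced without triggering TCP reordering.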
【Paper Link】 【Pages】:515-526
【Authors】: Srikanth Kandula ; Ishai Menache ; Roy Schwartz ; Spandana Raj Babbula
【Abstract】: Datacenter WAN traffic consists of high priority transfers that have to be carried as soon as they arrive alongside large transfers with pre-assigned deadlines on their completion (ranging from minutes to hours). The ability to offer guarantees to large transfers is crucial for business needs and impacts overall cost-of-business. State-of-the-art traffic engineering solutions only consider the current time epoch and hence cannot provide pre-facto promises for long-lived transfers. We present Tempus, an online traffic engineering scheme that exploits information on transfer size and deadlines to appropriately pack long-running transfers across network paths and time, thereby leaving enough capacity slack for future high-priority requests. Tempus builds on a tailored approximate solution to a mixed packing-covering linear program, which is parallelizable and scales well in both running time and memory usage. Consequently, Tempus is able to quickly and effectively update its solution when new transfers arrive or unexpected changes happen. These updates involve only small edits to existing transfers. Therefore, as experiments on traces from a large production WAN show, Tempus can offer and keep promises to long-lived transfers well in advance of their actual deadline; the promise on minimal transfer size is comparable with an offline optimal solution and outperforms state-of-the-art solutions by 2-3X.
【Keywords】: deadlines; inter-datacenter; mixed packing covering; online temporal planning; software-defined networking; wide area network
【Paper Link】 【Pages】:527-538
【Authors】: Hongqiang Harry Liu ; Srikanth Kandula ; Ratul Mahajan ; Ming Zhang ; David Gelernter
【Abstract】: Faults such as link failures and high switch configuration delays can cause heavy congestion and packet loss. Because it takes time to detect and react to faults, these conditions can last long---even tens of seconds. We propose forward fault correction (FFC), a proactive approach to handling faults. FFC spreads network traffic such that freedom from congestion is guaranteed under arbitrary combinations of up to k faults. We show how FFC can be practically realized by compactly encoding the constraints that arise from this large number of possible faults and solving them efficiently using sorting networks. Experiments with data from real networks show that, with negligible loss in overall network throughput, FFC can reduce data loss by a factor of 7--130 in well-provisioned networks, and reduce the loss of high-priority traffic to almost zero in well-utilized networks.
【Keywords】: congestion-free; fault tolerance; traffic engineering
【Paper Link】 【Pages】:539-550
【Authors】: Xin Jin ; Hongqiang Harry Liu ; Rohan Gandhi ; Srikanth Kandula ; Ratul Mahajan ; Ming Zhang ; Jennifer Rexford ; Roger Wattenhofer
【Abstract】: We present Dionysus, a system for fast, consistent network updates in software-defined networks. Dionysus encodes as a graph the consistency-related dependencies among updates at individual switches, and it then dynamically schedules these updates based on runtime differences in the update speeds of different switches. This dynamic scheduling is the key to its speed; prior update methods are slow because they pre-determine a schedule, which does not adapt to runtime conditions. Testbed experiments and data-driven simulations show that Dionysus improves the median update speed by 53--88% in both wide area and data center networks compared to prior methods.
【Keywords】: network update; software-defined networking
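【Example】: The dependency-graph idea in miniature, with an invented update set: any update whose prerequisites have completed may be issued immediately, so the schedule adapts to whichever switches finish first. Real Dionysus issues ready updates in parallel and lets per-switch completion speeds drive the order; this sketch applies them synchronously.

    deps = {  # update -> set of updates that must complete first
        "s1:add_new_route": set(),
        "s2:add_new_route": set(),
        "s3:shift_traffic": {"s1:add_new_route", "s2:add_new_route"},
        "s1:remove_old_route": {"s3:shift_traffic"},
    }

    def dynamic_schedule(deps, apply_update):
        done = set()
        while len(done) < len(deps):
            ready = [u for u, d in deps.items() if u not in done and d <= done]
            if not ready:
                raise RuntimeError("dependency cycle")
            for u in ready:           # in reality: issued concurrently
                apply_update(u)
                done.add(u)

    dynamic_schedule(deps, lambda u: print("applied", u))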
【Paper Link】 【Pages】:551-562
【Authors】: Arpit Gupta ; Laurent Vanbever ; Muhammad Shahbaz ; Sean P. Donovan ; Brandon Schlinker ; Nick Feamster ; Jennifer Rexford ; Scott Shenker ; Russell J. Clark ; Ethan Katz-Bassett
【Abstract】: BGP severely constrains how networks can deliver traffic over the Internet. Today's networks can only forward traffic based on the destination IP prefix, by selecting among routes offered by their immediate neighbors. We believe Software Defined Networking (SDN) could revolutionize wide-area traffic delivery, by offering direct control over packet-processing rules that match on multiple header fields and perform a variety of actions. Internet exchange points (IXPs) are a compelling place to start, given their central role in interconnecting many networks and their growing importance in bringing popular content closer to end users. To realize a Software Defined IXP (an "SDX"), we must create compelling applications, such as "application-specific peering"---where two networks peer only for (say) streaming video traffic. We also need new programming abstractions that allow participating networks to create and run these applications and a runtime that both behaves correctly when interacting with BGP and ensures that applications do not interfere with each other. Finally, we must ensure that the system scales, both in rule-table size and computational overhead. In this paper, we tackle these challenges and demonstrate the flexibility and scalability of our solutions through controlled and in-the-wild experiments. Our experiments demonstrate that our SDX implementation can support representative policies for hundreds of participants who advertise full routing tables while achieving sub-second convergence in response to configuration changes and routing updates.
【Keywords】: BGP; internet exchange point (IXP); software defined networking (SDN)
【Paper Link】 【Pages】:563-574
【Authors】: Peng Sun ; Ratul Mahajan ; Jennifer Rexford ; Lihua Yuan ; Ming Zhang ; Ahsan Arefin
【Abstract】: We present Statesman, a network-state management service that allows multiple network management applications to operate independently, while maintaining network-wide safety and performance invariants. Network state captures various aspects of the network such as which links are alive and how switches are forwarding traffic. Statesman uses three views of the network state. In observed state, it maintains an up-to-date view of the actual network state. Applications read this state and propose state changes based on their individual goals. Using a model of dependencies among state variables, Statesman merges these proposed states into a target state that is guaranteed to maintain the safety and performance invariants. It then updates the network to the target state. Statesman has been deployed in ten Microsoft Azure datacenters for several months, and three distinct applications have been built on it. We use the experience from this deployment to demonstrate how Statesman enables each application to meet its goals, while maintaining network-wide invariants.
【Keywords】: datacenter network; network state; software-defined networking
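【Example】: A toy version of the observed/proposed/target-state pipeline, with an invented state encoding and invariant; the real service models dependencies among many kinds of state variables and runs across entire datacenters.

    observed = {"link(s1,s2)": "up", "link(s1,s3)": "up"}

    def invariant_ok(state):
        # Toy invariant: at least one link out of s1 must stay up.
        return any(v == "up" for k, v in state.items() if k.startswith("link(s1"))

    def merge(observed, proposals):
        target = dict(observed)
        for app, change in proposals:
            candidate = {**target, **change}
            if invariant_ok(candidate):
                target = candidate                 # accept this app's proposal
            else:
                print(f"rejected {app}: would violate invariant")
        return target

    proposals = [("maintenance-1", {"link(s1,s2)": "down"}),
                 ("maintenance-2", {"link(s1,s3)": "down"})]
    print(merge(observed, proposals))  # second proposal is rejected

Each application reasons only against the observed state; the merge step is what keeps independently written applications from jointly violating a network-wide invariant.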
【Paper Link】 【Pages】:575-576
【Authors】: Aisha Mushtaq ; Asad Khalid Ismail ; Abdul Wasay ; Bilal Mahmood ; Ihsan Ayyub Qazi ; Zartash Afzal Uzmi
【Abstract】: Data center operators face extreme challenges in simultaneously providing low latency for short flows, high throughput for long flows, and high burst tolerance. We propose a buffer management strategy that addresses these challenges by isolating short and long flows into separate buffers, sizing these buffers based on flow requirements, and scheduling packets to meet different flow-level objectives. Our design provides new opportunities for performance improvements that complement transport layer optimisations.
【Keywords】: TCP; buffer management; data center
【Paper Link】 【Pages】:577-578
【Authors】: Joel Obstfeld ; Simon Knight ; Ed Kern ; Qiang Sheng Wang ; Tom Bryan ; Dan Bourque
【Abstract】: The increasing demand to provide new network services in a timely and efficient manner is driving the need to design, test and deploy networks quickly and consistently. Testing and verifying at scale is a challenge: network equipment is expensive, requires space, power and cooling, and there is never enough test equipment for everyone who wants to use it! Network virtualization technologies enable a flexible environment for educators, researchers, and operators to create functional models of current, planned, or theoretical networks. This demonstration will show VIRL --- the Virtual Internet Routing Lab --- a platform that can be used for network change validation, training, education, research, or network-aware applications development. The platform combines network virtualization technologies with virtual machines (VMs) running open-source and commercial operating systems; VM orchestration capabilities; a context-aware configuration engine; and an extensible data-collection framework. The system simplifies the process to create both simple and complex environments, run simulations, and collect measurement data.
【Keywords】: emulation; network design; network modelling; simulation
【Paper Link】 【Pages】:579-580
【Authors】: Arpit Gupta ; Laurent Vanbever ; Muhammad Shahbaz ; Sean Patrick Donovan ; Brandon Schlinker ; Nick Feamster ; Jennifer Rexford ; Scott Shenker ; Russell J. Clark ; Ethan Katz-Bassett
【Abstract】: BGP severely constrains how networks can deliver traffic over the Internet. Today's networks can only forward traffic based on the destination IP prefix, by selecting among routes offered by their immediate neighbors. We believe Software Defined Networking (SDN) could revolutionize wide-area traffic delivery, by offering direct control over packet-processing rules that match on multiple header fields and perform a variety of actions. Internet exchange points (IXPs) are a compelling place to start, given their central role in interconnecting many networks and their growing importance in bringing popular content closer to end users. To realize a Software Defined IXP (an "SDX"), we need new programming abstractions that allow participating networks to create and run these applications and a runtime that both behaves correctly when interacting with BGP and ensures that applications do not interfere with each other. We must also ensure that the system scales, both in rule-table size and computational overhead. In this demo, we show how we tackle these challenges demonstrating the flexibility and scalability of our SDX platform. The paper also appears in the main program.
【Keywords】: BGP; internet exchange point (IXP); software defined networking (SDN)
【Paper Link】 【Pages】:581-582
【Authors】: Han Hu ; Yichao Jin ; Yonggang Wen ; Tat-Seng Chua ; Xuelong Li
【Abstract】: The emergence of portable devices and online social networks (OSNs) has changed the traditional video consumption paradigm by simultaneously providing multi-screen video watching, social networking engagement, etc. One challenge is to design a unified solution that supports ever-growing features while guaranteeing system performance. In this demo, we design and implement a multi-screen technology that provides multi-screen interactions over the wide area network (WAN). Furthermore, we incorporate face-detection technology into our system to identify users' biometric features and employ a machine-learning-based traffic scheduling mechanism to improve system performance.
【Keywords】: cloud; internet video; second screen
【Paper Link】 【Pages】:583-584
【Authors】: Jiaqiang Liu ; Yong Li ; Depeng Jin
【Abstract】:
【Keywords】: software defined network; virtual machine migration
【Paper Link】 【Pages】:585-586
【Authors】: Wentao Chang ; An Wang ; Aziz Mohaisen ; Songqing Chen
【Abstract】:
【Keywords】: botnet; collaborations; measurement
【Paper Link】 【Pages】:587-588
【Authors】: Shaofeng Chen ; Dingyi Fang ; Xiaojiang Chen ; Tingting Xia ; Meng Jin
【Abstract】: This poster presents GuideLoc, a highly efficient aerial wireless localization system that uses directional antennas mounted on a mini multi-rotor Unmanned Aerial Vehicle (UAV) to enable detection and positioning of targets. Taking advantage of the angle and signal strength information of frames transmitted by targets, GuideLoc can fly directly towards targets with minimal flight distance and time. We implement a prototype of GuideLoc using ArduCopter and evaluate its performance through simulations and experiments. Experimental results show that GuideLoc achieves an average location accuracy of 2.7 meters and reduces flight distance by more than 50% compared with other known UAV-based wireless localization approaches.
【Keywords】: flight routes; multi-rotor UAV; wireless localization
【Paper Link】 【Pages】:589-590
【Authors】: Keunhong Lee ; Joongi Kim ; Sue B. Moon
【Abstract】:
【Keywords】: automated test suite; educational networking framework; full layer implementation
【Paper Link】 【Pages】:591-592
【Authors】: Jun Li ; Skyler Berg ; Mingwei Zhang ; Peter L. Reiher ; Tao Wei
【Abstract】: End hosts in today's Internet have the best knowledge of the type of traffic they should receive, but they play no active role in traffic engineering. Traffic engineering is conducted by ISPs, which unfortunately are blind to specific user needs. End hosts are therefore subject to unwanted traffic, particularly from Distributed Denial of Service (DDoS) attacks. This research proposes a new system called DrawBridge to address this traffic engineering dilemma. By realizing the potential of software-defined networking (SDN), in this research we investigate a solution that enables end hosts to use their knowledge of desired traffic to improve traffic engineering during DDoS attacks.
【Keywords】: DDoS; software-defined networking; traffic engineering
【Paper Link】 【Pages】:593-594
【Authors】: Chengchen Hu ; Ji Yang ; Zhimin Gong ; Shuoling Deng ; Hongbo Zhao
【Abstract】:
【Keywords】: data center; openflow; programmable
【Paper Link】 【Pages】:595-606
【Authors】: Vivek Yenamandra ; Kannan Srinivasan
【Abstract】: Global synchronization across time and frequency domains significantly benefits wireless communications. Multi-cell (network) MIMO, interference alignment solutions, opportunistic routing techniques in ad-hoc networks, OFDMA, and the like all require synchronization in the time domain, the frequency domain, or both. This paper presents Vidyut, a system that exploits the easily accessible and ubiquitous power line infrastructure to achieve time and frequency synchronization across nodes distributed beyond a single collision domain. Vidyut uses the power lines to transmit a reference frequency tone to which each node locks its frequency, and it exploits the steady periodicity of the delivered power signal itself to synchronize distributed nodes in time. We validate the extent of Vidyut's synchronization and evaluate its effectiveness, verifying its suitability for wireless applications such as OFDMA and multi-cell MIMO by validating the benefits of global synchronization in an enterprise wireless network. Our experiments show a throughput gain of 8.2x over MegaMIMO, 7x over NemoX, and 2.5x over OFDMA systems. Enterprise wireless networks are supported by an Ethernet backbone, and researchers have explored techniques over that backbone to improve wireless performance; sharing even as little as synchronization information between nodes has been shown to open new avenues (such as MU-MIMO, physical-layer network coding, and opportunistic routing) for enhancing network performance. However, another medium shared by the majority of nodes in the enterprise, the power line infrastructure, has been largely left untapped to assist the wireless network. While power lines are noisy and have a frequency-selective transmission characteristic, their range can extend beyond that of the air interface, and they are unburdened by the switching and other responsibilities of the Ethernet backbone. This paper poses the following question: how best can the opportunity presented by the power lines be exploited to further enhance enterprise wireless networks? The key contributions of this paper are: identifying and demonstrating the feasibility of utilizing power lines as a medium to achieve synchronization (in time and frequency domains) between nodes in the network, and demonstrating the scalability of this technique by synchronizing nodes beyond the transmission range of any individual node. The paper presents empirical results on the accuracy of synchronization and the benefit of the proposed synchronization method to existing distributed wireless techniques by virtue of extension across multiple collision domains.
【Keywords】: frequency synchronization; network mimo; power line communications; time synchronization; wireless networks
【Paper Link】 【Pages】:607-618
【Authors】: Bryce Kellogg ; Aaron N. Parks ; Shyamnath Gollakota ; Joshua R. Smith ; David Wetherall
【Abstract】: RF-powered computers are small devices that compute and communicate using only the power that they harvest from RF signals. While existing technologies have harvested power from ambient RF sources (e.g., TV broadcasts), they require a dedicated gateway (like an RFID reader) for Internet connectivity. We present Wi-Fi Backscatter, a novel communication system that bridges RF-powered devices with the Internet. Specifically, we show that it is possible to reuse existing Wi-Fi infrastructure to provide Internet connectivity to RF-powered devices. To show Wi-Fi Backscatter's feasibility, we build a hardware prototype and demonstrate the first communication link between an RF-powered device and commodity Wi-Fi devices. We use off-the-shelf Wi-Fi devices including Intel Wi-Fi cards, Linksys Routers, and our organization's Wi-Fi infrastructure, and achieve communication rates of up to 1 kbps and ranges of up to 2.1 meters. We believe that this new capability can pave the way for the rapid deployment and adoption of RF-powered devices and achieve ubiquitous connectivity via nearby mobile devices that are Wi-Fi enabled.
【Keywords】: backscatter; energy harvesting; internet of things; wireless
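【Example】: The decoding side of the idea in miniature: the tag conveys bits by toggling whether it backscatters during each bit slot, which perturbs the RSSI/CSI that nearby Wi-Fi receivers observe, and thresholding per-slot averages recovers the bits. All numbers are invented, and the direction of the perturbation (assumed here to lower RSSI) depends on the multipath environment.

    def decode(rssi_dbm_per_slot, baseline_dbm, margin_db=1.0):
        # A slot whose average RSSI deviates from the baseline by more than
        # the margin is read as '1' (tag reflecting), otherwise '0'.
        return "".join("1" if r < baseline_dbm - margin_db else "0"
                       for r in rssi_dbm_per_slot)

    slots = [-40.2, -42.1, -40.1, -42.3, -42.2, -40.0]  # one average per bit slot
    print(decode(slots, baseline_dbm=-40.0))            # -> "010110"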
【Paper Link】 【Pages】:619-630
【Authors】: Aaron N. Parks ; Angli Liu ; Shyamnath Gollakota ; Joshua R. Smith
【Abstract】: Communication primitives such as coding and multiple antenna processing have provided significant benefits for traditional wireless systems. Existing designs, however, consume significant power and computational resources, and hence cannot be run on low complexity, power constrained backscatter devices. This paper makes two main contributions: (1) we introduce the first multi-antenna cancellation design that operates on backscatter devices while retaining a small form factor and power footprint, (2) we introduce a novel coding mechanism that enables long range communication as well as concurrent transmissions and can be decoded on backscatter devices. We build hardware prototypes of the above designs that can be powered solely using harvested energy from TV and solar sources. The results show that our designs provide benefits for both RFID and ambient backscatter systems: they enable RFID tags to communicate directly with each other at distances of tens of meters and through multiple walls. They also increase the communication rate and range achieved by ambient backscatter systems by 100X and 40X respectively. We believe that this paper represents a substantial leap in the capabilities of backscatter communication.
【Keywords】: backscatter; energy harvesting; internet of things; wireless
【Paper Link】 【Pages】:631-642
【Authors】: Konstantinos Nikitopoulos ; Juan Zhou ; Ben Congdon ; Kyle Jamieson
【Abstract】: This paper presents the design and implementation of Geosphere, a physical- and link-layer design for access point-based MIMO wireless networks that consistently improves network throughput. To send multiple streams of data in a MIMO system, prior designs rely on a technique called zero-forcing, a way of "nulling" the interference between data streams by mathematically inverting the wireless channel matrix. In general, zero-forcing is highly effective, significantly improving throughput. But in certain physical situations, the MIMO channel matrix can become "poorly conditioned," harming performance. With these situations in mind, Geosphere uses sphere decoding, a more computationally demanding technique that can achieve higher throughput in such channels. To overcome the sphere decoder's computational complexity when sending dense wireless constellations at a high rate, Geosphere introduces search and pruning techniques that incorporate novel geometric reasoning about the wireless constellation. These techniques reduce computational complexity of 256-QAM systems by almost one order of magnitude, bringing computational demands in line with current 16- and 64-QAM systems already realized in ASIC. Geosphere thus makes the sphere decoder practical for the first time in a 4 × 4 MIMO, 256-QAM system. Results from our WARP testbed show that Geosphere achieves throughput gains over multi-user MIMO of 2× in 4 × 4 systems and 47% in 2 × 2 MIMO systems.
【Keywords】: MIMO; distributed MIMO; sphere decoder
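【Example】: A minimal depth-first sphere decoder for y = Hx + n over 4-QAM, showing the radius-pruning idea the abstract builds on. The constellation and dimensions are illustrative; Geosphere's contribution is the geometric search and pruning that make this viable for dense constellations like 256-QAM, which this sketch does not attempt.

    import numpy as np

    CONST = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])  # 4-QAM symbols

    def sphere_decode(H, y):
        n = H.shape[1]
        Q, R = np.linalg.qr(H)          # upper-triangularize the channel
        z = Q.conj().T @ y
        best = {"dist": np.inf, "x": None}

        def search(level, partial, dist):
            if dist >= best["dist"]:
                return                  # prune: outside the current sphere
            if level < 0:
                best["dist"], best["x"] = dist, partial.copy()
                return
            for s in CONST:
                partial[level] = s
                resid = z[level] - R[level, level:] @ partial[level:]
                search(level - 1, partial, dist + abs(resid) ** 2)

        search(n - 1, np.zeros(n, dtype=complex), 0.0)
        return best["x"]

    rng = np.random.default_rng(1)
    H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
    x = rng.choice(CONST, size=2)
    y = H @ x + 0.05 * (rng.normal(size=2) + 1j * rng.normal(size=2))
    print("sent:", x, "decoded:", sphere_decode(H, y))

Every branch whose partial distance already exceeds the best full solution is cut, which is the pruning that a poorly conditioned channel makes expensive and that Geosphere's geometric reasoning accelerates.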