2018 IEEE Conference on Computer Communications, INFOCOM 2018, Honolulu, HI, USA, April 16-19, 2018. IEEE 【DBLP Link】
【Paper Link】 【Pages】:1-9
【Authors】: Wasiur R. KhudaBukhsh ; Bastian Alt ; Sounak Kar ; Amr Rizk ; Heinz Koeppl
【Abstract】: Collaborative uploading describes a type of crowd-sourcing scenario in networked environments where a device utilizes multiple paths over neighboring devices to upload content to a centralized processing entity such as a cloud service. Intermediate devices may aggregate and preprocess this data stream. Such scenarios arise in the composition and aggregation of information, e.g., from smart phones or sensors. We use a queuing theoretic description of the collaborative uploading scenario, capturing the ability to split data into chunks that are then transmitted over multiple paths, and finally merged at the destination. We analyze replication and allocation strategies that control the mapping of data to paths and provide closed-form expressions that pinpoint the optimal strategy given a description of the paths' service distributions. Finally, we provide an online path-aware adaptation of the allocation strategy that uses statistical inference to sequentially minimize the expected waiting time for the uploaded data. Numerical results show the effectiveness of the adaptive approach compared to the proportional allocation and a variant of the join-the-shortest-queue allocation, especially for bursty path conditions.
【Keywords】: Resource management; Delays; Collaboration; Sensors; Cloud computing; Closed-form solutions
【Paper Link】 【Pages】:1-9
【Authors】: Zhuotao Liu ; Kai Chen ; Haitao Wu ; Shuihai Hu ; Yih-Chun Hu ; Yi Wang ; Gong Zhang
【Abstract】: Today's cloud networks are shared among many tenants. Bandwidth guarantees and work conservation are two key properties to ensure predictable performance for tenant applications and high network utilization for providers. Despite significant efforts, very little prior work actually achieves both properties simultaneously, even though some of it claims to. In this paper, we present QShare, a comprehensive in-network solution that achieves bandwidth guarantees and work conservation simultaneously. QShare leverages weighted fair queuing on commodity switches to slice network bandwidth for tenants, and solves the challenge of queue scarcity through balanced tenant placement and dynamic tenant-queue binding. We have implemented a QShare prototype and evaluated it extensively via both testbed experiments and simulations. Our results show that QShare ensures bandwidth guarantees while driving network utilization to over 91% even under unpredictable traffic demands.
【Keywords】: Bandwidth; Resource management; Cloud computing; Hoses; Conferences; Computational modeling; Topology
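【Sketch】: The abstract does not give QShare's exact weight computation; as a minimal illustration of the weighted-fair-queuing idea it builds on, the Python sketch below normalizes per-tenant bandwidth guarantees into queue weights (all names are hypothetical, not from the paper).

    # Illustrative sketch, not from the paper: deriving per-queue WFQ weights
    # from tenant bandwidth guarantees on a link, assuming one queue per tenant.
    def wfq_weights(guarantees_mbps):
        # WFQ shares unused capacity in proportion to these weights, so
        # guarantees hold under congestion (bandwidth guarantees) while the
        # link stays fully utilized otherwise (work conservation).
        total = sum(guarantees_mbps.values())
        return {tenant: g / total for tenant, g in guarantees_mbps.items()}

    weights = wfq_weights({"tenantA": 400, "tenantB": 600})  # {'tenantA': 0.4, 'tenantB': 0.6}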
【Paper Link】 【Pages】:1-9
【Authors】: You Zhou ; Yian Zhou ; Shigang Chen ; Youlin Zhang
【Abstract】: Per-flow traffic measurement is a fundamental problem in the era of big network data and has been widely used in many applications, including capacity planning, anomaly detection, load balancing, and traffic engineering. To keep up with the line speed of modern network devices (e.g., routers), the online per-flow measurement module is often implemented using on-chip cache memory (such as SRAM) to minimize per-packet processing time, but on-chip SRAM is expensive and limited in size, which poses a major challenge for traffic measurement. In response, much recent research is geared towards designing highly compact data structures for approximate estimation that can provide probabilistic guarantees for per-flow measurement. The state of the art, called Counter Tree (CT), requires at least 2 bits per flow in memory consumption and more than 2 memory accesses per packet in processing time. In this paper, we propose a novel design of a highly compact and efficient counter architecture, called Virtual Active Counter estimation (VAC), which achieves faster processing speed (slightly more than 1 memory access per packet on average) and provides more accurate measurement results than CT under the same allocated memory. Moreover, VAC performs well even with a very tight memory space (less than 1 bit per flow, or even one fifth of a bit per flow). Theoretical analysis and experiments based on real network traces demonstrate the superior performance of VAC.
【Keywords】: Size measurement; Memory management; Estimation; Time measurement; System-on-chip; Probabilistic logic; Gain measurement
【Paper Link】 【Pages】:10-18
【Authors】: Francesco De Pellegrini ; Lorenzo Maggi ; Antonio Massaro ; Damien Saucez ; Jeremie Leguay ; Eitan Altman
【Abstract】: To optimize the routing of flows in datacenters, SDN controllers receive a packet-in message whenever a new flow appears in the network. Unfortunately, flow arrival rates can peak at millions per second, impairing the ability of controllers to treat them on time. Flow scheduling copes with such sheer numbers by segmenting the traffic into elephant and mice flows and by treating elephant flows with priority, as they disrupt short-lived TCP flows and create bottlenecks. We propose a learning algorithm called SOFIA that is able to perform optimal online flow segmentation. Our solution, based on stochastic approximation techniques, is implemented at the switch level and updated by the controller, with minimal signaling over the control channel. SOFIA is blind, i.e., it is oblivious to the flow size distribution. It is also adaptive, since it can track traffic variations over time. We prove its convergence properties and its message complexity. Moreover, we specialize our solution to be robust to traffic classification errors. Extensive numerical experiments characterize the performance of our approach in vitro. Finally, results of the implementation in a real OpenFlow controller demonstrate the viability of SOFIA as a solution in production environments.
【Keywords】: software defined networks; flow segmentation; stochastic approximation; adaptive algorithms; traffic classifiers
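【Sketch】: SOFIA's actual update rule is not given in the abstract; the hypothetical Python sketch below shows the generic flavor of stochastic-approximation flow segmentation, tracking a size threshold that splits mice from elephants without knowing the flow size distribution (the target mice fraction and step size are assumed parameters).

    # Hypothetical Robbins-Monro style threshold tracker for elephant/mice
    # segmentation (a generic technique, not SOFIA's exact rule).
    def update_threshold(theta, flow_size, p_mice=0.95, step=10.0):
        # theta drifts toward the p_mice-quantile of the unknown, possibly
        # time-varying flow size distribution: it rises when a flow exceeds
        # it and falls slightly otherwise.
        indicator = 1.0 if flow_size <= theta else 0.0
        return theta + step * (p_mice - indicator)

    theta = 1000.0  # bytes, initial guess
    for size in [200, 150000, 800, 90, 2500000]:
        theta = update_threshold(theta, size)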
【Paper Link】 【Pages】:19-27
【Authors】: Haoyu Wang ; Haiying Shen
【Abstract】: With the rapid development of web applications in datacenters, network latency has become more important to user experience. Network latency is greatly increased by incast congestion, in which a huge number of requests arrive at the front-end server simultaneously. Previous solutions to the incast problem usually handle the data transmission between the data servers and the front-end server directly, and they are not sufficiently effective at proactively avoiding incast congestion. To further improve effectiveness, in this paper we propose a Proactive Incast Congestion Control system (PICC). Since each connection has a bandwidth limit, PICC takes the novel approach of limiting the number of data servers concurrently connected to the front-end server through data placement. Specifically, the front-end server gathers popular data objects (i.e., frequently requested data objects) into as few data servers as possible, but without overloading them. It also re-allocates data objects that are likely to be concurrently or sequentially requested to the same server. As a result, PICC reduces the number of data servers concurrently connected to the front-end server (which avoids incast congestion), as well as the number of connection establishments (which reduces network latency). Since the selected data servers tend to have long queues of outgoing data, PICC incorporates a queuing delay reduction algorithm that assigns higher transmission priorities to data objects with smaller sizes and longer queuing times. Experimental results from simulation and a benchmark-driven real cluster show the superior performance of PICC over previous incast congestion solutions.
【Keywords】: Servers; Delays; Bandwidth; Silicon; Time-frequency analysis; Conferences; Throughput
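【Sketch】: The abstract states that PICC prioritizes small, long-waiting objects but does not specify the weighting; the sketch below (hypothetical field names and priority function) illustrates one way such a send order could be realized with a priority queue.

    # Illustrative sketch of PICC-style transmission ordering: smaller objects
    # that have waited longer are sent first. The exact priority function is
    # an assumption, not taken from the paper.
    import heapq

    def priority_key(obj_size_bytes, waited_seconds):
        # Smaller key = sent earlier; size is discounted by waiting time.
        return obj_size_bytes / (1.0 + waited_seconds)

    queue = [(priority_key(4096, 5.0), "objA"),      # small, waited long
             (priority_key(1 << 20, 1.0), "objB")]   # large, waited little
    heapq.heapify(queue)
    _, first = heapq.heappop(queue)  # "objA" goes out first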
【Paper Link】 【Pages】:28-36
【Authors】: Zhenzhe Zheng ; R. Srikant ; Guihai Chen
【Abstract】: As more applications and businesses move to the cloud, pricing for inter-datacenter links has become an important problem. In this paper, we study revenue-maximizing pricing from the perspective of a network provider in inter-datacenter networks. Designing a practical bandwidth pricing scheme requires us to jointly consider the requirements of envy-freeness and arbitrage-freeness, where envy-freeness guarantees the fairness of resource allocation and arbitrage-freeness induces users to truthfully reveal their data transfer requests. Considering the non-convexity of the revenue maximization problem and the lack of information about the users' utilities, we propose a framework for computationally efficient pricing to approximately maximize revenue in a range of environments. We first study the case of a single link accessed by many users, and design a (1 + ε)-approximation pricing scheme with polynomial time complexity and information complexity. Based on dynamic programming, we then extend the pricing scheme to the tollbooth network, preserving the (1 + ε) approximation ratio and the computational complexity. For the general network setting, we analyze the revenue generated by uniform pricing, which sets a single per-unit price for all potential users. We show that when users have similar utilities, uniform pricing can achieve a good approximation ratio, which is independent of network topology and data transfer requests. The pricing framework can be extended to multiple time slots, enabling time-dependent pricing.
【Keywords】: Pricing; Data transfer; Wide area networks; Bandwidth; Computational complexity; Data centers; Network topology
【Paper Link】 【Pages】:37-45
【Authors】: Sowndarya Sundar ; Ben Liang
【Abstract】: We study the scheduling decision for an application consisting of dependent tasks, in a generic cloud computing system comprising a network of heterogeneous local processors and a remote cloud server. We formulate an optimization problem to find the offloading decision that minimizes the overall application execution cost, subject to an application completion deadline. Since this problem is NP-hard, we propose a heuristic algorithm termed Individual Time Allocation with Greedy Scheduling (ITAGS) to obtain an efficient solution. ITAGS first uses a binary-relaxed version of the original problem to allocate a completion deadline to each individual task, and then greedily optimizes the scheduling of each task subject to its time allowance. Through trace-based simulation using real applications, as well as various randomly generated task trees, we study the performance of ITAGS, highlighting the effect of the application deadline, communication delay, number of processors, and number of tasks. We further demonstrate the substantial performance advantage of ITAGS over existing alternatives.
【Keywords】: Task analysis; Program processors; Cloud computing; Delays; Processor scheduling; Mobile handsets; Scheduling
【Paper Link】 【Pages】:46-54
【Authors】: Yeli Geng ; Yi Yang ; Guohong Cao
【Abstract】: Modern mobile devices are equipped with multicore processors, which introduce new challenges for computation offloading. With the big.LITTLE architecture, instead of only deciding whether to run a task locally or remotely as in the traditional architecture, we have to consider how to exploit the new architecture to minimize energy while satisfying application completion time constraints. In this paper, we address the problem of energy-efficient computation offloading on multicore-based mobile devices running multiple applications. We first formalize the problem as a mixed-integer nonlinear programming problem that is NP-hard, and then propose a novel heuristic algorithm to jointly solve the offloading decision and task scheduling problems. The basic idea is to prioritize tasks from different applications to make sure that both application time constraints and task-dependency requirements are satisfied. To find a better schedule while reducing the schedule-searching overhead, we propose a critical-path-based solution that recursively checks the tasks and moves them to the right CPU cores to save energy. Simulation and experimental results show that our offloading algorithm can significantly reduce the energy consumption of mobile devices while satisfying the application completion time constraints.
【Keywords】: Task analysis; Multicore processing; Cloud computing; Mobile handsets; Energy consumption; Time factors
【Paper Link】 【Pages】:55-62
【Authors】: Shuang Jiang ; Dong He ; Chenxi Yang ; Chenren Xu ; Guojie Luo ; Yang Chen ; Yunlu Liu ; Jiangwei Jiang
【Abstract】: Recently, Edge Computing has emerged as a new computing paradigm dedicated to mobile applications for performance enhancement and energy efficiency purposes. Specifically, it benefits today's interactive applications on power-constrained devices by offloading compute-intensive tasks to edge nodes in close proximity. Meanwhile, the Field Programmable Gate Array (FPGA) is well known for its excellence in accelerating compute-intensive tasks such as deep learning algorithms with high performance and energy efficiency, thanks to its hardware-customizable nature. In this paper, we make the first attempt to combine the advantages of these two, and propose a new network-assisted computing model, namely FPGA-based edge computing. As a case study, we choose three computer vision (CV)-based interactive mobile applications and implement their backend computation parts on FPGA. By deploying such application-customized accelerator modules for computation offloading at the network edge, we experimentally demonstrate that this approach can effectively reduce response time for the applications and energy consumption for the entire system in comparison with traditional CPU-based edge/cloud offloading approaches.
【Keywords】: Cloud computing; Field programmable gate arrays; Servers; Acceleration; Computational modeling; Edge computing; Time factors
【Paper Link】 【Pages】:63-71
【Authors】: Shiqiang Wang ; Tiffany Tuor ; Theodoros Salonidis ; Kin K. Leung ; Christian Makaya ; Ting He ; Kevin Chan
【Abstract】: Emerging technologies and applications including Internet of Things (IoT), social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning models are often built from the collected data to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent based approaches. We analyze the convergence rate of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best trade-off between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimental results show that our proposed approach performs close to the optimum for various machine learning models and different data distributions.
【Keywords】: Machine learning; Data models; Distributed databases; Machine learning algorithms; Convergence; Computational modeling; Task analysis
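【Sketch】: As a minimal illustration of the local-update/global-aggregation pattern the paper analyzes, the Python sketch below runs tau local gradient steps per node before averaging; note that the paper's contribution is choosing this trade-off adaptively under a resource budget, whereas tau is fixed here and all function names are illustrative.

    # Minimal sketch of distributed gradient descent with periodic global
    # aggregation. Only model parameters are exchanged, never raw data.
    import numpy as np

    def local_steps(w, data, grad_fn, tau, lr=0.01):
        for _ in range(tau):
            w = w - lr * grad_fn(w, data)
        return w

    def global_round(w_global, node_datasets, grad_fn, tau):
        # Each edge node refines the global model on its local data;
        # the aggregator then averages the resulting parameters.
        updates = [local_steps(w_global.copy(), d, grad_fn, tau)
                   for d in node_datasets]
        return np.mean(updates, axis=0)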
【Paper Link】 【Pages】:72-80
【Authors】: Soteris Demetriou ; Puneet Jain ; Kyu-Han Kim
【Abstract】: An increasing number of depth sensors and surrounding-aware cameras are being installed in the new generation of cars. For example, Tesla Motors uses a forward radar, a front-facing camera, and multiple ultrasonic sensors to enable its Autopilot feature. Meanwhile, older or legacy cars are expected to remain on the road in volume for at least the next 10 to 15 years. Legacy car drivers rely on traditional GPS for navigation services, whose accuracy varies from 5 to 10 meters under clear line-of-sight and degrades to up to 30 meters in a downtown environment. At the same time, a sensor-rich car achieves better accuracy due to its high-end sensing capabilities. To bridge this gap, we propose CoDrive, a system that provides a sensor-rich car's accuracy to a legacy car. We achieve this by correcting GPS errors of a legacy car on an opportunistic encounter with a sensor-rich car. CoDrive uses the smartphone GPS of all participating cars, the RGB-D sensors of sensor-rich cars, and the road boundaries of a traffic scene to generate optimization constraints. Our algorithm collectively reduces GPS errors, resulting in accurate reconstruction of a traffic scene's aerial view. CoDrive does not require stationary landmarks or 3D maps. We empirically evaluate CoDrive, which achieves a 90% and a 30% reduction in cumulative GPS error for legacy and sensor-rich cars respectively, while preserving the shape of the traffic scene.
【Keywords】: Automobiles; Global Positioning System; Sensors; Visualization; Optimization; Cloud computing
【Paper Link】 【Pages】:81-89
【Authors】: Luca Bedogni ; Marco Fiore ; Christian Glacet
【Abstract】: Upcoming mobile network technologies developed in the context of 5G and DSRC are expected to finally legitimize direct data transfers among vehicles as a standard communication paradigm. We investigate fundamental properties of the topology of vehicular networks built on top of these emerging vehicle-to-vehicle communication technologies. Our study yields multiple elements of originality: (i) it addresses temporal connectivity, which has been poorly investigated despite its high relevance for vehicular network operations; (ii) it introduces exact but computationally efficient models of the temporal connectivity of vehicular networks; (iii) it evaluates the proposed models in urban settings that exhibit an unprecedented combination of dependability, scale and generality of vehicular mobility. This approach lets us unveil an apparent scale- and city-invariant law of temporal reachability in vehicular networks. Finally, we open our original scenarios to the research community, so as to ensure reproducibility of our results and foster further investigations of vehicular network performance.
【Keywords】: Trajectory; Global Positioning System; Roads; Calibration; Public transportation; 5G mobile communication; Computational modeling
【Paper Link】 【Pages】:90-98
【Authors】: Daxin Tian ; Jianshan Zhou ; Min Chen ; Zhengguo Sheng ; Qiang Ni ; Victor C. M. Leung
【Abstract】: Vehicular ad hoc networks (VANETs) have the potential to promote vehicular telematics and infotainment applications, where a key and challenging issue is the design of robust and efficient vehicular content transmissions to combat lossy inter-vehicle links. In this paper, we focus on the robust optimization of content transmissions over cooperative VANETs. We first derive a stochastic model for estimating the time-varying inter-vehicle distance, which depends on the vehicles' real-time kinematics and the distribution of the initial space headway. With this model, we analytically formulate the transient inter-vehicle connectivity, assuming Nakagami fading channels for the physical (PHY) layer. We also model the contention nature of the medium access control (MAC) layer, on which we base our evaluation of the throughput achieved by each vehicle equipped with dedicated short-range communication (DSRC). Combining these models, we derive a closed-form expression for the upper bound of the probability of failure in intact-content transmissions. Based upon this theoretical bound, we develop a robust optimization model for assigning content data traffic among different cooperative transmission paths, where the objective is to minimize the maximum likelihood of unsuccessful content transmissions over the cooperative VANET. We mathematically transform the optimization model into an equivalent form, such that it can be practically deployed. Finally, we validate our theoretical development with extensive simulations. Numerical results are also provided to confirm the power of cooperation in boosting VANET performance, as well as to demonstrate the advantage of the proposed robust optimization in terms of content data reception reliability.
【Keywords】: Fading channels; Robustness; Optimization; Stochastic processes; Mathematical model; Electronic mail; Relays
【Paper Link】 【Pages】:99-107
【Authors】: Chi Lin ; Zhiyuan Wang ; Jing Deng ; Lei Wang ; Jiankang Ren ; Guowei Wu
【Abstract】: Benefiting from recent breakthroughs in wireless power transfer technology, the lifetime of wireless sensor networks (WSNs) can be prolonged significantly, giving rise to the concept of wireless rechargeable sensor networks (WRSNs). While most recent works have focused on WRSNs with a single wireless charging vehicle (WCV), we investigate the issue of online collaborative charging schedules for multiple WCVs in this work. In our design, termed mTS, the network area is divided into subdomains for designated WCVs. Each WCV schedules its charging path in response to the interdependent temporal and spatial correlations among charging requests. Higher priorities are given to sensor requests with a mixture of closer charging deadlines and closer distances. We further analyze the system performance with an M/M/n/mTS queueing model. Our simulation study revealed that our scheme excels in successful charging rate, sensor survival rate, and other related performance metrics. Our field experiments further confirmed these results and revealed some interesting findings on different charging hardware and methods.
【Keywords】: Nickel; Wireless sensor networks; Scheduling; Task analysis; Measurement; Clustering algorithms; Wireless communication
【Paper Link】 【Pages】:108-116
【Authors】: Tuo Shi ; Jianzhong Li ; Hong Gao ; Zhipeng Cai
【Abstract】: The Battery-Free Wireless Sensor Network (BF-WSN) is a newly proposed network architecture that addresses the limitations of traditional Wireless Sensor Networks (WSNs). The special features of BF-WSNs make the coverage problem quite different from, and even more challenging than, that in traditional WSNs. This paper defines a new coverage problem in BF-WSNs which aims at maximizing coverage quality rather than prolonging network lifetime. The newly defined coverage problem is proved to be at least NP-Hard. Two sufficient conditions, under which the optimal solution of the problem can be derived in polynomial time, are given in this paper. Furthermore, two approximation algorithms are proposed to derive nearly optimal coverage when the sufficient conditions are not satisfied. The time complexity and approximation ratio of the two algorithms are analyzed. Extensive simulations are carried out to examine the performance of the proposed algorithms. The simulation results show that these algorithms are efficient and effective.
【Keywords】: Wireless sensor networks; Monitoring; Approximation algorithms; Batteries; Energy resources; Sensors; Conferences
【Paper Link】 【Pages】:117-125
【Authors】: Quan Chen ; Hong Gao ; Zhipeng Cai ; Lianglun Cheng ; Jianzhong Li
【Abstract】: Emerging energy harvesting technology enables charging sensor batteries from renewable energy sources and has been effectively integrated into Wireless Sensor Networks, yielding Energy-Harvesting WSNs (EH-WSNs). Meanwhile, data aggregation is an essential operation in a WSN. The problem of Minimum Latency Aggregation Scheduling (MLAS), which seeks a fast and collision-free aggregation schedule, has been well studied when nodes are energy-abundant. However, due to the limited energy harvesting capacities of tiny sensors, the captured energy remains scarce and differs greatly among nodes. Thus, none of the previous algorithms for MLAS are suitable for EH-WSNs. In this paper, we investigate the MLAS problem in EH-WSNs. To make smart use of the harvested energy, we construct an aggregation tree adaptively according to the residual battery level at each node. Furthermore, we identify a new kind of collision, termed energy-collision, and design a special structure to help avoid it. By considering transmission time, residual energy, and energy-collision, we propose three scheduling algorithms for the MLAS problem in EH-WSNs. Theoretical analysis and simulation results verify that the proposed algorithms achieve high performance in terms of aggregation latency compared with the baseline methods.
【Keywords】: Data aggregation; Batteries; Schedules; Wireless sensor networks; Scheduling algorithms; Energy harvesting; Interference
【Paper Link】 【Pages】:126-134
【Authors】: Huiqi Liu ; Xiangyang Li ; Lan Zhang ; Yaochen Xie ; Zhenan Wu ; Qian Dai ; Ge Chen ; Chunxiao Wan
【Abstract】: With the proliferation of mobile devices equipped with various sensors (e.g., GPS, magnetometers, accelerometers, gyroscopes), richer services, e.g., location-based services, are provided to users. A series of methods have been proposed to protect users' privacy, especially trajectory privacy. Hardware fingerprinting has been demonstrated to be a surprising and effective source for identifying/authenticating devices. In this work, we show that a few data samples collected from the motion sensors are enough to uniquely identify the source mobile device, i.e., the raw motion sensor data serves as a fingerprint of the mobile device. Specifically, we first analytically characterize the fingerprinting capacity using features extracted from hardware data. To capture the essential device features automatically, we design a multi-LSTM neural network to fingerprint mobile device sensors in real-life use, instead of relying on the handcrafted features of existing work. Using data collected over 6 months, for arbitrary user movements, our fingerprinting model achieves a 93% F-score given one second of data, while the state-of-the-art work achieves a 79% F-score. Given ten seconds of randomly sampled data, our model achieves 98.8% accuracy. We also propose a novel generative model that modifies the original sensor data to yield anonymized data with little fingerprint information while retaining good data utility.
【Keywords】: Feature extraction; Mobile handsets; Hardware; Data models; Robustness; Neural networks; Fingerprint recognition
【Paper Link】 【Pages】:135-143
【Authors】: Bin Tang
【Abstract】: Many emerging sensor network applications operate in challenging environments wherein sensor nodes do not always have connected paths to the base station. Data generated from such intermittently connected sensor networks must therefore be stored inside the network for some unpredictable period of time before uploading opportunities become available. Consequently, sensory data could overflow the limited storage capacity available in the entire network, making the discarding of valuable data inevitable. To overcome such overall storage overflow in intermittently connected sensor networks, we propose and study a new algorithmic problem called data aggregation for overall storage overflow (DAO2). Utilizing the spatial data correlation that commonly exists among sensory data, DAO2 employs data aggregation techniques to reduce the overflow data size while minimizing the total energy consumption. To solve DAO2, we uncover a new graph-theoretic problem called multiple traveling salesman walks (MTSW), and show that with proper graph transformation, DAO2 is equivalent to MTSW. We prove that MTSW is NP-hard and design a (2 - 1/q)-approximation algorithm, where q is the number of nodes to visit (i.e., the number of sensor nodes that aggregate their overflow data). The approximation algorithm is based on a novel routing structure called the minimum q-edge forest, which accurately captures the information needed for energy-efficient data aggregation. We further put forward a heuristic algorithm and empirically show that it consistently outperforms the approximation algorithm by 15%-30% in energy consumption. Finally, we propose a distributed data aggregation algorithm that can achieve the same approximation ratio as the centralized algorithm under certain conditions, while incurring comparable energy consumption.
【Keywords】: Intermittently Connected Sensor Networks; Data Aggregation; Approximation Algorithms; Graph Theory
【Paper Link】 【Pages】:144-152
【Authors】: Piotr Gawlowicz ; Anatolij Zubow ; Adam Wolisz
【Abstract】: LTE in Unlicensed spectrum (LTE-U) constitutes a new source of interference in the 5 GHz ISM band with a potentially strong impact on WiFi performance. Cross-technology interference and radio resource management are the best ways to assure efficient coexistence, but they require proper signaling channels. We present LtFi, a system that enables setting up cross-technology communication between nodes of co-located LTE-U and WiFi networks. LtFi follows a two-step approach: using an innovative side channel on their air interface, LTE-U BSs broadcast connection and identification information to adjacent WiFi nodes, which is used in a subsequent step to create a bi-directional control channel over the wired backhaul. LtFi is simple, fully compliant with LTE-U, and works with COTS WiFi hardware. The achievable data rate on the air-interface-based broadcast side channel (up to 665 bps) is sufficient for this and multiple other purposes. Experimental evaluation of a fully operational prototype demonstrated reliable data transmission even in crowded wireless environments, for LTE-U receive power levels down to -92 dBm. Moreover, system-level simulations demonstrate accurate recognition of the complete set of interfering LTE-U BSs in a typical LTE-U multi-cell environment.
【Keywords】: Cross-technology communication; LTE-U; WiFi; coexistence; cooperation; heterogeneous networks
【Paper Link】 【Pages】:153-161
【Authors】: Yongrui Chen ; Zhijun Li ; Tian He
【Abstract】: Cross-Technology Communication (CTC) is an enabling technology for efficient coexistence and effective cooperation among heterogeneous wireless devices, allowing them to exchange data frames directly without gateways. Recent advances in physical-layer CTC achieve speeds thousands of times faster than previous packet-level CTC techniques. However, physical-layer CTC still faces the challenge of inherent unreliability due to imperfect physical-layer signal emulation. Our work, named TwinBee, aims to recover the intrinsic errors of physical-layer CTC by exploring chip-level error patterns. Interestingly, this is achieved without observing the chip information and without any hardware modification. System evaluation shows that our key idea, namely symbol-level chip-combining decoding with soft mapping, significantly improves the Packet Reception Ratio (PRR) of physical-layer CTC from 50%-60% to more than 99%. We also demonstrate the reliability of TwinBee in a data dissemination application over a network of 20 TelosB nodes, achieving a more than 40x reduction in data dissemination delay compared to Deluge.
【Keywords】: Wireless fidelity; ZigBee; Emulation; Quadrature amplitude modulation; Reliability; Receivers; OFDM
【Paper Link】 【Pages】:162-170
【Authors】: Zhijun Li ; Tian He
【Abstract】: Cross-Technology Communication (CTC) supports direct message exchange among heterogeneous wireless technologies (e.g., Wi-Fi, ZigBee, and Bluetooth) in the same ISM band, enabling explicit cross-technology control and coordination. For instance, a Wi-Fi AP can directly control ZigBee-enabled smart light bulbs without an expensive dual-radio gateway. Such CTC capability can be further amplified by extending the communication range of CTC to support long-range wide-area IoT applications such as environmental monitoring, smart metering, and precision agriculture. Our work, named LongBee, is the first to extend the range of Cross-Technology Communication. At the transmitter side, LongBee concentrates the effective TX power through down-clocked operations; at the receiver side, LongBee improves the RX sensitivity with an innovative transition coding to ensure reliable preamble detection and payload reception. All of this is achieved without modifying hardware and without introducing extra Wi-Fi RF energy cost. We implemented LongBee on the USRP platform and commodity ZigBee devices. Our comprehensive evaluation reveals that LongBee, with concentrated TX power and higher RX sensitivity, reliably achieves over a 10x range extension compared to native ZigBee communication, and a 2x extension over the longest distance achieved by existing CTC schemes so far.
【Keywords】: ZigBee; Wireless fidelity; Hardware; Receivers; Wireless communication; Payloads; Sensitivity
【Paper Link】 【Pages】:171-179
【Authors】: Xiaolong Zheng ; Yuan He ; Xiuzhen Guo
【Abstract】: Cross-Technology Communication (CTC) is an emerging technique that enables direct communication among different wireless technologies. A main category of existing CTC proposals modulates packets at the sender side and demodulates them into 1 and 0 bits at the receiver side. The performance of those proposals is likely to degrade in a densely coexisting environment: judged solely by received signal strength, a symbol 0 that is modulated as packet absence is generally indistinguishable from dynamic interference. In this paper, we propose StripComm, an interference-resilient CTC for coexisting environments. A sender in StripComm adopts an interference-resilient coding scheme in which each symbol contains both the presence and absence of packets. The receiver strips the interference from the signal of interest by exploiting the self-similarity of StripComm signals. We prototype StripComm with commercial WiFi and ZigBee devices and a software radio platform. Theoretical and experimental evaluations demonstrate that StripComm offers a data rate of up to 1.1 kbps with a Symbol Error Rate (SER) lower than 0.01, and a data rate of 0.89 kbps even against strong interference.
【Keywords】: Interference; Wireless fidelity; Receivers; ZigBee; Wireless communication; Encoding; Signal to noise ratio
【Paper Link】 【Pages】:180-188
【Authors】: Yinggen Xu ; Wei Chen ; Shaoqi Wang ; Xiaobo Zhou ; Changjun Jiang
【Abstract】: Modern datacenter schedulers apply a static policy to partition resources among different tasks. The amount of allocated resources does not change during a task's lifetime. However, we found that resource usage during a task's runtime is highly dynamic and reaches full usage only at a few moments. The static allocation policy therefore does not exploit the dynamic nature of resource usage, leading to low system resource utilization. To address this hard problem, a recently proposed task-consolidation approach packs as many tasks as possible on the same node based on real-time resource demands. However, this approach may cause resource over-allocation and harm application performance. In this paper, we propose and develop ECS, an elastic container-based scheduler that leverages resource usage variation within the task lifetime to exploit the potential utilization and parallelism. The key idea is to proactively select and shift tasks backward so that inherently parallel tasks can be identified without over-allocation. We formulate the scheduling scheme as an online optimization problem and solve it using a resource leveling algorithm. We have implemented ECS in Apache Yarn and performed evaluations with various MapReduce benchmarks in a cluster. Experimental results show that ECS can efficiently utilize resources and achieves up to a 29% reduction in average job completion time while increasing CPU utilization by 25%, compared to stock Yarn.
【Keywords】: Task analysis; Containers; Dynamic scheduling; Resource management; Benchmark testing; Parallel processing; Runtime
【Paper Link】 【Pages】:189-197
【Authors】: Kun Suo ; Yong Zhao ; Wei Chen ; Jia Rao
【Abstract】: Containers, a form of lightweight virtualization, provide an alternative means to partition hardware resources among users and expedite application deployment. Compared to virtual machines (VMs), containers incur less overhead and allow a much higher consolidation ratio. Container networking, a vital component in container-based virtualization, is still not well understood. Many techniques have been developed to provide connectivity between containers on a single host or across multiple machines. However, an in-depth analysis of their respective advantages, limitations, and performance in a cloud environment is lacking. In this paper, we perform a comprehensive study of representative container networks. We first conduct a qualitative comparison of their applicable scenarios, levels of security isolation, and overhead. Then we quantitatively evaluate the throughput, latency, scalability, and startup cost of various container networks in a realistic cloud environment. We find that virtualized networking in containers incurs non-negligible overhead compared to physical networks. Performance degradation varies depending on the type of network protocol and the packet size. Our experiments show that there is no clear winner in performance, and users need to select an appropriate container network based on the requirements and characteristics of their workloads.
【Keywords】: Containers; IP networks; Bridges; Security; Cloud computing; Virtualization; Throughput
【Paper Link】 【Pages】:198-206
【Authors】: Yipei Niu ; Fangming Liu ; Zongpeng Li
【Abstract】: With the advent of cloud container technology, enterprises develop applications as microservices, breaking monolithic software into a suite of small services whose instances run independently in containers. User requests are served by a series of microservices forming a chain, and the chains often share microservices. Existing load balancing strategies either incur significant networking overhead or ignore the competition for shared microservices across chains. Furthermore, typical load balancing solutions leverage a hybrid technique combining HTTP with message queues to support microservice communication, bringing additional operational complexity. To address these challenges, we propose a chain-oriented load balancing algorithm (COLBA) based solely on message queues, which balances load according to the microservice requirements of chains to minimize response time. We model the load balancing problem as a non-cooperative game, and leverage Nash bargaining to coordinate microservice allocation across chains. Employing convex optimization with rounding, we efficiently solve the problem, which is proven NP-hard. Extensive trace-driven simulations demonstrate that COLBA reduces the overall average response time by at least 13% compared with existing load balancing strategies.
【Keywords】: Load management; Time factors; Containers; Computer architecture; Load modeling; Conferences; Logic gates
【Paper Link】 【Pages】:207-215
【Authors】: Jie Xu ; Lixing Chen ; Pan Zhou
【Abstract】: Mobile Edge Computing (MEC) pushes computing functionalities away from the centralized cloud to the network edge, thereby meeting the latency requirements of many emerging mobile applications and saving backhaul network bandwidth. Although many existing works have studied computation offloading policies, service caching is an equally, if not more, important design topic of MEC, yet it receives much less attention. Service caching refers to caching application services and their related databases/libraries in the edge server (e.g., an MEC-enabled BS), thereby enabling the corresponding computation tasks to be executed. Because only a small number of application services can be cached in a resource-limited edge server at the same time, which services to cache has to be judiciously decided to maximize edge computing performance. In this paper, we investigate the extremely compelling but much less studied problem of dynamic service caching in MEC-enabled dense cellular networks. We propose an efficient online algorithm, called OREO, which jointly optimizes dynamic service caching and task offloading to address a number of key challenges in MEC systems, including service heterogeneity, unknown system dynamics, spatial demand coupling, and decentralized coordination. Our algorithm is developed based on Lyapunov optimization and Gibbs sampling, works online without requiring future information, and achieves provably close-to-optimal performance. Simulation results show that our algorithm can effectively reduce computation latency for end users while keeping energy consumption low.
【Keywords】: Task analysis; Cloud computing; Edge computing; Servers; Cellular networks; Heuristic algorithms; Computational modeling
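【Sketch】: The abstract names Lyapunov optimization and Gibbs sampling but gives no formulas; the sketch below shows a generic Gibbs-style randomized update for a single binary caching decision (the cost estimate and temperature parameter are assumptions, not OREO's exact formulation).

    # Hypothetical Gibbs-sampling step for one service's cache/no-cache bit.
    import math, random

    def gibbs_cache_update(delta_cost, temperature=1.0):
        # delta_cost: estimated cost increase from caching the service
        # rather than not. A lower temperature concentrates the random
        # choice on the lower-cost option, trading exploration for greed.
        p_cache = 1.0 / (1.0 + math.exp(delta_cost / temperature))
        return random.random() < p_cache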
【Paper Link】 【Pages】:216-224
【Authors】: Sourav Mondal ; Goutam Das ; Elaine Wong
【Abstract】: Cloud-based computing technology is one of the most significant technical advances of the last decade, and extending this facility towards access networks by aggregating cloudlets is a step further. To fulfill the ravenous demand for computational resources, entangled with the stringent latency requirements of computationally heavy applications related to augmented reality, cognitive assistance, and context-aware computation, installing cloudlets near the access segment is a very promising solution because of its support for wide geographical network distribution, low latency, mobility, and heterogeneity. In this paper, we propose a novel framework, Cloudlet Cost OptiMization over PASSIve Optical Network (CCOMPASSION), and formulate a nonlinear mixed-integer program to identify optimal cloudlet placement locations such that installation cost is minimized whilst meeting capacity and latency constraints. Considering urban, suburban, and rural scenarios as commonly-used network deployment models, we investigate the feasibility of the proposed model in each of them and provide guidance on overall cloudlet facility installation over optical access networks. We also study the percentage increase in energy budget of the existing network in the presence of cloudlets. The final results from our proposed model can be considered fundamental cornerstones for network planning with hybrid cloudlet network architectures.
【Keywords】: Cloudlet; low latency; non-linear mixed-integer programming; optical access network
【Paper Link】 【Pages】:225-233
【Authors】: Mohammad Noormohammadpour ; Cauligi S. Raghavendra ; Srikanth Kandula ; Sriram Rao
【Abstract】: Several organizations have built multiple datacenters connected via dedicated wide area networks, over which large inter-datacenter transfers take place. Since many such transfers move the same data from one source to multiple destinations, using multicast forwarding trees can reduce bandwidth needs and improve completion times. However, using a single forwarding tree per transfer can lead to poor performance, as the slowest receiver dictates the completion time for all receivers. Using multiple forwarding trees per transfer alleviates this concern: the average receiver could finish early. However, if done naively, bandwidth usage would also increase, and it is a priori unclear how best to partition receivers, how to construct the multiple trees, and how to determine the rate and schedule of flows on these trees. This paper presents QuickCast, a first solution to these problems. Using simulations on real-world network topologies, we see that QuickCast can speed up the average receiver's completion time by as much as 10× while only using 1.04× more bandwidth; further, the completion time for all receivers also improves at high loads. Thus, while some implementation challenges remain, we advocate using a cohort of forwarding trees.
【Keywords】: Software Defined WAN; Datacenter; Scheduling; Completion Times; Replication
【Paper Link】 【Pages】:234-242
【Authors】: Mathieu Leconte ; Apostolos Destounis ; Georgios S. Paschos
【Abstract】: This paper addresses a major challenge in traffic engineering: the selection of a set of paths that minimizes routing cost for a random traffic matrix. We introduce the concept of a pathbook: a small set of paths to which we restrict routing. The use of a pathbook accelerates centralized traffic engineering algorithms, and is therefore appealing for instantiating, configuring, and optimizing large software-based networks. However, restricting routing to a few paths may lead to higher cost or infeasibility. To this end, we introduce the problem of pathbook design, wherein we search for a pathbook of constrained size that minimizes the expected routing cost of the random traffic matrix, which represents a prediction of future traffic. The pathbook design problem is combinatorial in nature, and we show that it is NP-hard. We then study its convex relaxation, for which we propose an optimal algorithm based on the projected subgradient method. For large networks, the subgradient vector is of prohibitive dimension, hence we propose a coordinate-descent method using the Gauss-Southwell rule, which prescribes a move along the direction of the largest subgradient element. We test the performance of our solution on dynamic traffic matrices from GEANT and find that our Gauss-Southwell pathbooks can accelerate standard methods by two orders of magnitude.
【Keywords】: Routing; Acceleration; Bandwidth; Optimization; Conferences; Robustness; Quality of service
【Paper Link】 【Pages】:243-251
【Authors】: Balázs Sonkoly ; Marton Szabo ; Balázs Németh ; András Majdán ; Gergely Pongrácz ; László Toka
【Abstract】: Future services and applications, such as the Tactile Internet, coordinated remote driving, or wirelessly controlled exoskeletons, pose serious challenges to the underlying networks and IT platforms in terms of reliability, latency, and capacity, just to mention a few. Towards those services, virtualization is a key enabler from both technological and economic aspects, and it has significantly reshaped the IT and networking ecosystem. On the one hand, cloud computing and the services based on it are evident results of recent years' efforts; on the other hand, networking is in the middle of a momentous revolution, with important changes mainly driven by Network Function Virtualization (NFV) and Software Defined Networking (SDN). In order to enable carrier-grade network services with strict QoS requirements, we need a novel data plane supporting high performance and flexible, fine-granular programmability and control. As the network functions (implemented by virtual machines or containers) use the same hardware resources (CPU, memory) as the components responsible for networking, we need a low-level resource orchestrator that is capable of jointly controlling these resources. In this paper, we propose a novel resource orchestrator (RO) for a data plane that makes use of open source components such as Docker, DPDK, and OVS. Our goal is threefold. First, we propose a novel data plane resource model capable of abstracting several hardware architectures. Second, we provide an adapter module that can automatically discover the underlying hardware and build the model on the fly. Third, we design and implement a novel RO building on the aforementioned components and a publicly available Service Graph embedding engine. As a proof of concept, two software switches (OVS, ERFS) are adapted and different hardware platforms are evaluated.
【Keywords】: Cloud computing; Hardware; Data models; Computer architecture; Open source software; Computational modeling
【Paper Link】 【Pages】:252-260
【Authors】: Wangkit Wong ; S.-H. Gary Chan
【Abstract】: Interference Alignment (IA) has emerged as a promising interference coordination approach for cooperative MIMO systems. Due to heavy CSI feedback overhead, APs (Access Points) need to be partitioned into cooperation groups no larger than a certain size, where only APs in the same group are able to cooperate via IA. We consider a general MIMO network using a hybrid interference coordination approach, i.e., intra-group interference is managed with IA, while inter-group interference is overcome with traditional orthogonal multiple access techniques. Users are usually non-uniformly distributed, and their throughput can be improved by association optimization. We study the novel problem of minimizing AP load by joint AP grouping and user association. The problem is shown to be NP-hard. Based on alternating direction optimization, we propose DAGA (Distributed Joint AP Grouping and User Association) to tackle the problem. DAGA is distributed and uses only long-term CSI. Based on the current AP grouping, it produces an approximate user association solution that is at most e·log m times the optimum, where m is the number of APs. Based on the current user association, it adjusts the AP grouping with local search. Extensive simulation results show that it substantially outperforms other comparison schemes.
【Keywords】: MU-MIMO network; Interference alignment; Load balancing; Joint Optimization; Approximation algorithm
【Paper Link】 【Pages】:261-269
【Authors】: Ehsan Aryafar ; Alireza Keshavarz-Haddad
【Abstract】: We present the design and implementation of PAFD, a design methodology that enables full-duplex (FD) in hybrid beamforming systems with constant-amplitude phased array antennas. The key novelty in PAFD's design is the construction of analog beamformers that maximize the beamforming gains in the desired directions while simultaneously reducing the self-interference (SI). PAFD is implemented on the WARP platform, and its performance is extensively evaluated in both indoor and outdoor environments. Our experimental results reveal that (i) PAFD sacrifices a few dB in beamforming gain to provide large reductions in SI power; (ii) the reduction in SI depends on the number of phased array antennas and increases as the number of antennas increases; and (iii) finally, PAFD significantly outperforms half-duplex (HD) for small cells even in the presence of high interference caused by uplink clients to the downlink clients. The gains increase with a larger array size or less multipath in the propagation environment.
【Keywords】: Phased arrays; Array signal processing; Gain; Radio frequency; Phase shifters
【Paper Link】 【Pages】:270-278
【Authors】: Xin Tan ; Zhi Sun ; Dimitrios Koutsonikolas ; Josep Miquel Jornet
【Abstract】: The millimeter-wave (mmWave) frequency band has been utilized in the IEEE 802.11ad standard to achieve multi-Gbps throughput. Despite the advantages, mmWave links are highly vulnerable to both user and environmental mobility. Since mmWave radios use highly directional antennas, the line-of-sight (LOS) signal can be easily blocked by various obstacles, such as walls, furniture, and humans. In the complicated indoor environment, it is highly possible that the blocked mmWave link cannot be restored no matter how the access point and the mobile user change their antenna directions. To address the problem and enable indoor mobile mmWave networks, in this paper, we introduce the reconfigurable 60 GHz reflect-arrays to establish robust mmWave connections for indoor networks even when the links are blocked by obstructions. First, the reconfigurable 60 GHz reflect-array is designed, implemented, and modeled. Then a three-party beam-searching protocol is designed for reflect-array-assisted 802.11ad networks. Finally, an optimal array deployment strategy is developed to minimize the link outage probability in indoor mobile mmWave networks. The proposed solution is validated and evaluated by both in-lab experiments and computer simulations.
【Keywords】: Indoor environments; Directive antennas; Protocols; Relays; Transceivers; Antenna arrays; Systems architecture
【Paper Link】 【Pages】:279-287
【Authors】: Kaidong Wang ; Konstantinos Psounis
【Abstract】: 802.11ax introduces OFDMA to WiFi, enabling the multiplexing of users/user groups in the frequency domain. WiFi networks usually operate in a multipath environment which generates a frequency-selective channel; hence, the capacity of a user/user group changes over different subcarriers. A good scheduling and resource allocation scheme can maximize the sum rate by allocating users and user groups to subcarriers based on their CSI and other system considerations. In this paper we investigate how to optimally assign users and user groups to subcarriers with the goal of maximizing the user sum rate in the context of 802.11ax. We introduce a novel divide-and-conquer based algorithm which we prove to be optimal under the assumption that a user can be assigned to more than one resource unit (RU), where an RU consists of one or more subcarriers. This serves as a tight upper bound on the actual problem, where users/user groups can be assigned to a single RU only, per the 802.11ax standard. We then introduce two practical algorithms for the actual problem: a greedy one, and a recursive one which jointly splits the bandwidth into RUs and schedules users on them. Extensive simulations comparing the performance of the aforementioned algorithms establish that our practical schemes achieve very good performance in all studied scenarios.
【Keywords】: Bandwidth; Channel capacity; OFDM; Resource management; Schedules; Wireless fidelity; Downlink
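【Sketch】: As a rough illustration of the greedy scheme's flavor (a simplification; the paper's algorithm also handles user groups and the splitting of bandwidth into RUs), the sketch below repeatedly assigns the highest-rate remaining (user, RU) pair, respecting the one-RU-per-user constraint.

    # Illustrative greedy RU assignment: rate[u][r] is user u's achievable
    # rate on RU r; each user and each RU is used at most once.
    def greedy_ru_assignment(rate):
        users, rus = set(range(len(rate))), set(range(len(rate[0])))
        assignment = {}
        while users and rus:
            # Pick the (user, RU) pair with the highest remaining rate.
            u, r = max(((u, r) for u in users for r in rus),
                       key=lambda p: rate[p[0]][p[1]])
            assignment[u] = r
            users.remove(u)
            rus.remove(r)
        return assignment

    print(greedy_ru_assignment([[5.0, 2.0], [4.5, 4.0]]))  # {0: 0, 1: 1}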
【Paper Link】 【Pages】:288-296
【Authors】: Arpan Mukhopadhyay ; Nidhi Hegde ; Marc Lelarge
【Abstract】: We consider models of content delivery networks in which the servers are constrained by two main resources: memory and bandwidth. In such systems, the throughput crucially depends on how contents are replicated across servers and how the requests of specific contents are matched to servers storing those contents. In this paper, we first formulate the problem of computing the optimal replication policy which if combined with the optimal matching policy maximizes the throughput of the caching system in the stationary regime. It is shown that computing the optimal replication policy for a given system is an NP-hard problem. A greedy replication scheme is proposed and it is shown that the scheme provides a constant factor approximation guarantee. We then propose a simple randomized matching scheme which avoids the problem of interruption in service of the ongoing requests due to re-assignment or repacking of the existing requests in the optimal matching policy. The dynamics of the caching system is analyzed under the combination of proposed replication and matching schemes. We study a limiting regime, where the number of servers and the arrival rates of the contents are scaled proportionally, and show that the proposed policies achieve asymptotic optimality. Extensive simulation results are presented to evaluate the performance of different policies and study the behavior of the caching system under different service time distributions of the requests.
【Keywords】: Servers; Resource management; Microsoft Windows; Bandwidth; Conferences; Electronic mail; Computational modeling
【Paper Link】 【Pages】:297-305
【Abstract】: Power-of-d-choices is a popular load balancing algorithm for many-server systems such as large-scale data centers. For each incoming job, the algorithm probes d servers, chosen uniformly at random from a total of N servers, and routes the job to the least loaded one. It is well known that power-of-d-choices reduces queueing delays by orders of magnitude compared to the policy that routes each incoming job to a randomly selected server. The question addressed in this paper is how large d needs to be so that power-of-d-choices achieves asymptotically zero delay like the join-the-shortest-queue (JSQ) algorithm, which is the special case of power-of-d-choices with d = N. We are interested in the heavy-traffic regime where the load of the system, denoted by λ, approaches one as N increases, and assume λ = 1 - γN^(-α) for some constants γ and α. This paper establishes that when d = ω(1/(1-λ)), the probability that an incoming job is routed to a busy server is asymptotically zero, i.e., a job experiences zero queueing delay with probability one asymptotically; and when d = O(1/(1-λ)), the probability that a job is routed to a busy server is lower bounded by a positive constant independent of N. Therefore, our results show that d = ω(1/(1-λ)) is sufficient and almost necessary for achieving zero delay with the power-of-d-choices load balancing policy.
【Keywords】: Servers; Delays; Load management; Steady-state; Data centers; Conferences; Probes
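【Sketch】: The routing rule itself is simple enough to state in a few lines of Python; the sketch below implements power-of-d-choices exactly as described in the abstract (the queue-length list representing server state is illustrative).

    # Power-of-d-choices: probe d servers chosen uniformly at random and
    # route the job to the least loaded one (JSQ is the special case d = N).
    import random

    def route(queue_lengths, d):
        probed = random.sample(range(len(queue_lengths)), d)
        return min(probed, key=lambda i: queue_lengths[i])

    # Per the paper, d growing faster than 1/(1 - lambda) suffices for an
    # arriving job to find an idle server with probability tending to one.
    server = route([3, 0, 5, 1, 2, 0, 4, 2], d=3)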
【Paper Link】 【Pages】:306-314
【Authors】: Mingyan Li ; Xinping Guan ; Cunqing Hua ; Cailian Chen ; Ling Lyu
【Abstract】: Driven by mission-critical applications in modern industrial systems, the 5th generation (5G) communication system is expected to provide ultra-reliable low-latency communications (URLLC) services to meet the quality of service (QoS) demands of industrial applications. However, these stringent requirements cannot be guaranteed by its conventional dynamic access scheme due to the complex signaling procedure. A promising solution to reduce the access delay is the pre-allocation scheme based on the semi-persistent scheduling (SPS) technique, which however may lead to low spectrum utilization if the allocated resource blocks (RBs) are not used. In this paper, we aim to address this issue by developing DPre, a predictive pre-allocation framework for uplink access scheduling of delay-sensitive applications in industrial process automation. The basic idea of DPre is to explore and exploit the correlation of data acquisition and access behavior between nodes through static and dynamic learning mechanisms, in order to make judicious resource pre-allocation decisions. We evaluate the effectiveness of DPre based on several monitoring applications in a steel rolling production process. Simulation results demonstrate that DPre achieves better prediction accuracy, which effectively increases the rewards of the reserved resources.
【Keywords】: Uplink; Correlation; Automation; Job shop scheduling; Sensors; Quality of service; Heuristic algorithms
【Paper Link】 【Pages】:315-323
【Authors】: Yanbing Yang ; Jun Luo
【Abstract】: LED-Camera Visible Light Communication (VLC) is gaining increasing attention, thanks to its readiness to be implemented with Commercial Off-The-Shelf devices and its potential to deliver pervasive data services indoors. Nevertheless, existing LED-Camera VLC systems employ mainly low-order modulations such as On-Off Keying (OOK) given the simplicity of their implementation, yet such rudimentary modulations cannot yield a high throughput. In this paper, we investigate various opportunities of using a high-order modulation to boost the throughput of LED-Camera VLC systems, and we determine that Amplitude-Shift Keying (ASK) is the most suitable scheme given the limited operating frequency of such systems. However, directly driving an LED to emit different levels of luminance may suffer heavy distortions caused by the nonlinear behavior of LEDs. As a result, we propose to generate ASK through the composition of light emissions. In other words, we digitally control the On-Off states of several groups of LED chips, so that their light emissions compose in the air to produce various ASK symbols. We build a prototype of this novel ASK-based VLC system and demonstrate its superior performance over existing systems: it achieves a rate of 2 kbps at a 1 m distance with only a single LED luminaire.
【Keywords】: Visible Light Communication; Collaborative Transmissions; Amplitude-Shift Keying
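The composition idea above lends itself to a short sketch. Assuming, hypothetically, three LED chip groups with power-of-two luminous weights, digitally switching groups on and off yields eight composite ASK levels with no analog LED driving:

    from itertools import islice

    GROUP_WEIGHTS = (1, 2, 4)             # assumed relative luminous outputs
    BITS_PER_SYMBOL = len(GROUP_WEIGHTS)  # 3 bits -> 8 luminance levels

    def modulate(bits):
        # Each chunk of bits selects which groups are ON; the receiver sees
        # the sum of the ON groups' emissions, composed in the air.
        symbols, it = [], iter(bits)
        while chunk := list(islice(it, BITS_PER_SYMBOL)):
            chunk += [0] * (BITS_PER_SYMBOL - len(chunk))   # zero-pad tail
            symbols.append(sum(w for b, w in zip(chunk, GROUP_WEIGHTS) if b))
        return symbols

    def demodulate(symbols):
        bits = []
        for level in symbols:
            chunk = []
            for w in sorted(GROUP_WEIGHTS, reverse=True):   # greedy decompose
                chunk.append(1 if level >= w else 0)
                level -= w if level >= w else 0
            bits.extend(reversed(chunk))                    # back to group order
        return bits

    data = [1, 0, 1, 1, 1, 0]
    assert demodulate(modulate(data)) == data
    print(modulate(data))                                   # [5, 3]

Greedy decomposition is exact here only because the assumed weights are powers of two; a real luminaire must also contend with per-group brightness mismatch and the camera's response, which the paper addresses in its system design.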
【Paper Link】 【Pages】:324-332
【Authors】: Hao Cai ; Tilman Wolf
【Abstract】: Neighbor discovery is a critical first step in establishing communication in a wireless ad-hoc network. Existing quorum-based neighbor discovery algorithms only consider a pair of nodes and ensure that this pair can communicate at least once in a bounded interval. However, when the node density of a wireless network increases, collisions are more likely to happen, which makes these quorum-based algorithms inefficient in practice. We propose a novel self-adapting quorum-based neighbor discovery algorithm that can dynamically adjust its cycle pattern to decrease the impact of such collisions. We first assess the collision problem in wireless networks when using quorum-based neighbor discovery algorithms and then establish a theoretical framework to analyze the discovery delay when considering collision effects. Guided by these theoretical results, we design a self-adapting mechanism for cycle patterns in quorum-based algorithms. Simulation results show that our algorithm can achieve complete neighbor discovery in less time than existing quorum-based neighbor discovery algorithms.
【Keywords】: Protocols; Schedules; Wireless sensor networks; Heuristic algorithms; Delays; Ad hoc networks; Analytical models
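For readers unfamiliar with the quorum guarantee the abstract builds on: in the classic k x k grid quorum, each node is active in the slots of one row plus one column of a k*k-slot cycle, and any two such schedules overlap in at least one slot under arbitrary rotation, which is what bounds the pairwise discovery delay. A small Python self-check (parameter values are arbitrary):

    def grid_quorum(k, row, col):
        # Active slots in a cycle of k*k slots: one full row + one full column.
        return ({row * k + c for c in range(k)} |
                {r * k + col for r in range(k)})

    k = 5
    a = grid_quorum(k, row=1, col=3)
    b = grid_quorum(k, row=4, col=0)
    for shift in range(k * k):            # simulate unaligned duty cycles
        b_shifted = {(s + shift) % (k * k) for s in b}
        assert a & b_shifted, "rotation closure violated"
    print("every rotation of b overlaps a")

The paper's point is that this pairwise guarantee degrades under collisions in dense networks, which is what its self-adapting cycle patterns are designed to mitigate.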
【Paper Link】 【Pages】:333-341
【Authors】: Tien Tran ; Dung T. Huynh
【Abstract】: In this paper, we investigate the Antenna Orientation (AO) and Antenna Orientation and Power Assignment (AOPA) problems concerning symmetric connectivity in Directional Wireless Sensor Networks (DWSNs), where each sensor node is equipped with 2 ≤ k ≤ 5 directional antennas having beamwidth θ ≥ 0. The AO problem for DWSNs is closely related to the well-known Euclidean Degree-Bounded Minimum Bottleneck Spanning Tree (EBMBST) problem, where different cases of the degree bound have been studied. While existing works on DWSNs focus on solving each case of k (= 2, 3, 4) separately, we propose a uniform approach for the AO problem that yields constant-factor approximation algorithms for the AO as well as the EBMBST problem when the degree bound is between 2 and 4. Our method achieves the same constant factors. For the AOPA problem, to the best of our knowledge, our paper provides the first results concerning this problem. We show that the problem is NP-hard when 2 ≤ k ≤ 4. We also establish the first constant-factor approximation algorithms for the problem. Finally, we perform simulations to understand the practical performance of our algorithms.
【Keywords】: Approximation algorithms; Directional antennas; Directive antennas; Wireless sensor networks; Transmitting antennas; Conferences
【Paper Link】 【Pages】:342-350
【Authors】: Johnny Verhoeff ; S. N. Akshay Uttama Nambi ; Marco Zuniga Zamalloa ; Bontor Humala
【Abstract】: Artificial lighting is a pervasive element in our daily lives. Researchers from different communities are investigating challenges and opportunities related to artificial lighting but from different angles: energy disaggregation, to monitor the status of light bulbs in buildings; and communication, to transmit information wirelessly. We argue that there is an unexplored synergy between these two communities. When a light bulb modulates its intensity for communication, it also affects the current it draws. This current signature is unique and could be used by energy disaggregation methods to identify the lights' status. These signatures however will be exposed to interference (collisions of signatures) and distortions due to power line effects. To overcome these problems, we build upon coding schemes to assign interference-resilient signatures, and we develop custom hardware to ameliorate distortions introduced by power lines. We validate our framework in a proof-of-concept testbed, perform simulations to test scalability, and use energy traces from real homes to evaluate the impact of other electric loads.
【Keywords】: Buildings; Lighting; Monitoring; Meters; Interference; Light emitting diodes; Distortion
【Paper Link】 【Pages】:351-359
【Authors】: Viet Nguyen ; Mohamed Ibrahim ; Siddharth Rupavatharam ; Minitha Jawahar ; Marco Gruteser ; Richard E. Howard
【Abstract】: This paper explores the feasibility of localizing and detecting activities of building occupants using visible light sensing across a mesh of light bulbs. Existing Visible Light activity sensing (VLS) techniques require either light sensors to be deployed on the floor or a person to carry a device. Our approach integrates photosensors with light bulbs and exploits the light reflected off the floor to achieve an entirely device-free and light source based system. This forms a mesh of virtual light barriers across networked lights to track shadows cast by occupants. The design employs a synchronization circuit that implements a time division signaling scheme to differentiate between light sources and a sensitive sensing circuit to detect small changes in weak reflections. Sensor readings are fed into indoor supervised tracking algorithms as well as occupancy and activity recognition classifiers. Our prototype uses modified off-the-shelf LED flood light bulbs and is installed in a typical office conference room. We evaluate the performance of our system in terms of localization, occupancy estimation and activity classification, and find a 0.89m median localization error as well as 93.7% and 93.78% occupancy and activity classification accuracy, respectively.
【Keywords】: Sensors; Light emitting diodes; Light sources; Lighting; Tracking; Receivers; Buildings
【Paper Link】 【Pages】:360-368
【Authors】: Xiuzhen Guo ; Yuan He ; Xiaolong Zheng ; Liangcheng Yu ; Omprakash Gnawali
【Abstract】: Cross-technology communication (CTC) is a technique that enables direct communication among different wireless technologies. Recent works in this area have made positive progress, but high-throughput CTC from ZigBee to WiFi remains an open problem. In this paper, we propose ZigFi, a novel CTC framework that enables direct communication from ZigBee to WiFi. Without impacting the ongoing WiFi transmissions, ZigFi carefully overlaps ZigBee packets with WiFi packets. Through experiments we show that Channel State Information (CSI) of the overlapped packets can be used to convey data from ZigBee to WiFi. Based on this finding, we propose a receiver-initiated protocol and translate the decoding problem into a problem of CSI classification with a Support Vector Machine. We further build, through experiments, a generic model that describes the relationship between the Signal to Interference and Noise Ratio (SINR) and the symbol error rate (SER). We implement ZigFi on commercial off-the-shelf WiFi and ZigBee devices and evaluate its performance under different experimental settings. The results demonstrate that ZigFi achieves a throughput of 215.9 bps, which is 18X faster than the state-of-the-art.
【Keywords】: ZigBee; Wireless fidelity; Wireless communication; Throughput; Interference; Bandwidth; Conferences
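The decoding step above, classifying CSI to recover ZigBee symbols, can be sketched with synthetic data. The feature model below (an overlapped ZigBee symbol perturbing a few subcarrier amplitudes) is an illustrative assumption, not the paper's measured CSI:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n, subcarriers = 2000, 30
    labels = rng.integers(0, 2, n)            # ZigBee symbol present or not
    csi = rng.normal(0, 1, (n, subcarriers))  # baseline WiFi CSI amplitudes
    csi[labels == 1, 10:15] += 1.5            # localized perturbation

    X_tr, X_te, y_tr, y_te = train_test_split(csi, labels, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    print("symbol error rate:", round(1 - clf.score(X_te, y_te), 4))

In ZigFi the analogous SVM is trained on real overlapped packets, and the paper's SINR-to-SER model captures how the error rate degrades as the ZigBee signal weakens relative to noise.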
【Paper Link】 【Pages】:369-377
【Authors】: Wei Wang ; Tiantian Xie ; Xin Liu ; Ting Zhu
【Abstract】: Recent advances in cross-technology communication have significantly improved spectrum efficiency in the same ISM band among heterogeneous wireless devices (e.g., WiFi and ZigBee). However, further performance improvement in the whole network is hampered because the cross-technology network layer is missing. As the first cross-technology network layer design, our work, named ECT, opens a promising direction for significantly reducing the packet delivery delay via collaborative and concurrent cross-technology communication between WiFi and ZigBee devices. Specifically, ECT can dynamically change the nodes' priorities and reduce the delivery delay from high-priority nodes under unreliable links. The key idea of ECT is to leverage the concurrent transmission of important data and raw data from ZigBee nodes to the WiFi AP. We extensively evaluate ECT under different network settings, and results show that ECT reduces the packet delivery delay by more than 29 times compared with the current state-of-the-art solution.
【Keywords】: ZigBee; Wireless fidelity; Delays; Servers; Schedules; Receivers; Sensors
【Paper Link】 【Pages】:378-386
【Authors】: Haipeng Dai ; Yang Zhao ; Guihai Chen ; Wanchun Dou ; Chen Tian ; Xiaobing Wu ; Tian He
【Abstract】: One critical issue for wireless power transfer is to avoid human health impairments caused by electromagnetic radiation (EMR) exposure. Existing studies mainly focus on scheduling wireless chargers so that the (expected) EMR at any point in the area does not exceed a threshold Rt. Nevertheless, they overlook EMR jitter, which can push the EMR above Rt even when the expected EMR is no more than Rt. This paper studies the fundamental problem of RObustly SafE charging for wireless power transfer (ROSE), that is, scheduling the power of chargers so that the charging utility for all rechargeable devices is maximized while the probability that the EMR anywhere does not exceed Rt is no less than a given confidence. We first build our empirical probabilistic charging model and EMR model. Then, we present EMR approximation and area discretization techniques to formulate ROSE as a Second-Order Cone Program, together with the first algorithm for reducing redundant second-order cone constraints to lower the computational cost, and thereby obtain a (1-ε)-approximation centralized algorithm. Further, we propose a (1-ε)-approximation fully distributed algorithm for ROSE that scales with network size. Simulations and field experiments show that our algorithms can outperform comparison algorithms by 480.19%.
【Keywords】: Wireless communication; Wireless sensor networks; Safety; Probabilistic logic; Jitter; Gaussian distribution; Approximation algorithms
【Paper Link】 【Pages】:387-395
【Authors】: Nan Yu ; Haipeng Dai ; Alex X. Liu ; Bingchuan Tian
【Abstract】: In this paper, we first study the problem of Connected wIReless Charger pLacEment (CIRCLE): given a fixed number of directional wireless chargers and candidate positions, determine the placement position and orientation angle for each charger, subject to a connectivity constraint among the chargers, such that the overall charging utility is maximized. To address CIRCLE, we first consider a relaxed version of CIRCLE (CIRCLE-R for short). We prove that CIRCLE-R falls into the realm of maximizing a submodular set function subject to a connectivity constraint, and propose an algorithm whose approximation ratio is at least 1.5 times better than that of the state-of-the-art algorithm. Next, we reduce the solution space of CIRCLE from infinite to finite, and propose an algorithm with a constant approximation ratio for CIRCLE. We conduct both simulations and field experiments to verify our theoretical findings. The results show that our algorithm can outperform comparison algorithms by 83.35%.
【Keywords】: Wireless communication; Wireless sensor networks; Approximation algorithms; Conferences; Sensors; Optimization; Mathematical model
【Paper Link】 【Pages】:396-404
【Authors】: Kai Bu ; Yutian Yang ; Zixuan Guo ; Yuanyuan Yang ; Xing Li ; Shigeng Zhang
【Abstract】: Software-Defined Networking (SDN) greatly simplifies middlebox policy enforcement. Middleboxes need to tag packet headers to avoid forwarding ambiguity on SDN switches. In this paper, we present a new attack, called the middlebox-bypass attack, to breach SDN-based middlebox policy enforcement. Such an attack manipulates a compromised switch to locally tag attacking packets without handing them over to the attached middlebox for inspection. Existing SDN security solutions, however, cannot detect the middlebox-bypass attack under practical constraints of efficiency, robustness, and applicability. We design and implement FlowCloak, the first protocol for per-packet real-time detection and prevention of middlebox-bypass attacks. FlowCloak enables middleboxes to generate tags that are probabilistically unknown to an attacker, confining the attacker to random guessing. We propose a multi-tag verification technique to address the tradeoff between FlowCloak's robustness and the TCAM usage of tag verification rules on the egress switch. Experimental results show that dozens of verification rules can confine the attacking probability to under 0.1%. FlowCloak imposes only a 0.3 ms packet processing delay on middleboxes and no noticeable delay on the egress switch.
【Keywords】: Middleboxes; Firewalls (computing); Delays; Robustness; Control systems; Tagging
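A back-of-envelope computation shows why bypassing the middlebox reduces a compromised switch to guessing. The tag width and the number of currently valid tags below are hypothetical:

    def per_packet_guess(tag_bits=16, valid_tags=1):
        # A bypassing switch never sees the middlebox-generated tag, so a
        # forged packet passes egress verification only by guessing one of
        # the currently valid tag values.
        return valid_tags / 2 ** tag_bits

    def any_success(n_packets, p=per_packet_guess()):
        # Probability that at least one of n forged packets slips through.
        return 1 - (1 - p) ** n_packets

    print(per_packet_guess())    # ~1.5e-05 per forged packet
    print(any_success(1000))     # ~0.015 even after 1000 attempts

FlowCloak's multi-tag verification spends egress TCAM rules to keep the effective fraction of valid tags small; the numbers above only illustrate the shape of that tradeoff.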
【Paper Link】 【Pages】:405-413
【Authors】: Andreas Blenk ; Patrick Kalmbach ; Johannes Zerwas ; Michael Jarschel ; Stefan Schmid ; Wolfgang Kellerer
【Abstract】: Network virtualization enables increasingly diverse network services to cohabit and share a given physical infrastructure and its resources, with the possibility to rely on different network architectures and protocols optimized towards specific requirements. In order to ensure predictable performance despite shared resources, network virtualization requires strict performance isolation and hence resource reservations. Moreover, the creation of virtual networks should be fast and efficient. The underlying NP-hard algorithmic problem is known as the Virtual Network Embedding (VNE) problem and has been studied intensively in recent years. This paper presents NeuroViNE, a novel approach to speed up and improve a wide range of existing VNE algorithms: NeuroViNE is based on a search space reduction mechanism and preprocesses a problem instance by extracting relevant subgraphs, i.e., good combinations of substrate nodes and links. These subgraphs can then be fed to an existing algorithm for faster and more resource-efficient embeddings. NeuroViNE relies on a Hopfield network, and its performance benefits are investigated in simulations for random networks, real substrate networks, and data center networks.
【Keywords】: Substrates; Neurons; Bandwidth; Heuristic algorithms; Manganese; Conferences; Virtualization
【Paper Link】 【Pages】:414-422
【Authors】: Sheng-Hao Chiang ; Jian-Jhih Kuo ; Shan-Hsiang Shen ; De-Nian Yang ; Wen-Tsuen Chen
【Abstract】: Previous research on SDN traffic engineering mostly focuses on static traffic, whereas dynamic traffic, though more practical, has drawn much less attention. In particular, online SDN multicast that supports IETF dynamic group membership (i.e., any user can join or leave at any time) has not been explored. Different from traditional shortest-path trees (SPT) and graph-theoretic Steiner trees (ST), which concentrate on routing one tree at any instant, online SDN multicast traffic engineering is more challenging because it needs to support dynamic group membership and optimize a sequence of correlated trees without knowledge of future joins and leaves, while the scalability of SDN due to limited TCAM is also crucial. In this paper, we therefore formulate a new optimization problem, named Online Branch-aware Steiner Tree (OBST), to jointly consider the bandwidth consumption, SDN multicast scalability, and rerouting overhead. We prove that OBST is NP-hard and does not admit a |Dmax|^(1-ε)-competitive algorithm for any ε > 0, where |Dmax| is the largest group size at any time. We design a |Dmax|-competitive algorithm equipped with the notions of budget, deposit, and Reference Tree to achieve this tightest bound. Simulations and an implementation on real SDNs with YouTube traffic show that the total cost can be reduced by at least 25% compared with SPT and ST, and that the computation time remains small for large SDNs.
【Keywords】: Unicast; Steiner trees; Optical switches; Routing; Bandwidth; Scalability
【Paper Link】 【Pages】:423-431
【Authors】: Hao Li ; Kaiyue Chen ; Tian Pan ; Yadong Zhou ; Kun Qian ; Kai Zheng ; Bin Liu ; Peng Zhang ; Yazhe Tang ; Chengchen Hu
【Abstract】: Software-Defined Networking (SDN) enables flexible update of network functions with a well-defined abstraction between the control and the data plane. However, multiple active network functions with the same priority can trigger conflicts among policies with overlapped flow space, causing flow table explosion. In contrast to the local, per-switch conflict resolution schemes proposed by previous works, this paper tackles the problem from a different angle and resolves policy conflicts by coordinating all switches under a global centralized view. Specifically, we propose COnflict RAzor (CORA), which tremendously reduces the storage cost of conflicting policies by leveraging the global network information available at the controller. The basic idea of CORA is to migrate policies causing large explosions across the network where necessary, while keeping semantic equivalence. We prove CORA's NP-hardness and propose a heuristic to efficiently search for a near-optimal policy migration strategy. Our experiments demonstrate that CORA can effectively reduce flow table storage occupation by at least 49% within less than 40 seconds.
【Keywords】: Switches; Routing; Conferences; Aerospace electronics; Explosions; Semantics; Performance evaluation
【Paper Link】 【Pages】:432-440
【Authors】: Giuliano Casale
【Abstract】: List-based caches can offer lower miss rates than single-list caches, but their analysis is challenging due to state-space explosion. In this setting, we analyze randomized replacement policies for caches with non-uniform access costs. In our model, costs can depend on the stream a request originated from, the target item, and the list that contains it. We first show that, similarly to the uniform-cost case, the random replacement (RR) and first-in first-out (FIFO) policies can be exactly analyzed using a product-form expression for the equilibrium state probabilities of the cache. We then tackle the state-space explosion by means of the singular perturbation method, deriving limiting expressions for the equilibrium performance measures as the number of items and the cache capacity grow in a fixed ratio. Simulations indicate that our asymptotic formulas rapidly converge to the cache equilibrium distribution.
【Keywords】: Indexes; Markov processes; Conferences; Explosions; Computational modeling; Analytical models; Numerical models
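For intuition on the random replacement (RR) policy analyzed above, here is a single-list Python sketch under Zipf-like popularity (one request stream and uniform costs, so it is far simpler than the paper's multi-list, non-uniform-cost model):

    import bisect, itertools, random

    def rr_miss_ratio(n_items=2000, cache_size=200, requests=100_000,
                      zipf_a=0.8, seed=0):
        rng = random.Random(seed)
        cum = list(itertools.accumulate(
            1.0 / (i + 1) ** zipf_a for i in range(n_items)))
        cache, misses = set(), 0
        for _ in range(requests):
            item = bisect.bisect(cum, rng.random() * cum[-1])  # Zipf sample
            if item not in cache:
                misses += 1
                if len(cache) >= cache_size:
                    cache.remove(rng.choice(tuple(cache)))     # RR eviction
                cache.add(item)
        return misses / requests

    print(rr_miss_ratio())

The paper derives such equilibrium quantities exactly via a product form and approximates them asymptotically via singular perturbation; a simulation like this one serves only as a numerical cross-check.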
【Paper Link】 【Pages】:441-449
【Authors】: Jian Li ; Truong Khoa Phan ; Wei Koong Chai ; Daphné Tuncer ; George Pavlou ; David Griffin ; Miguel Rio
【Abstract】: The dominant application in today's Internet is content streaming, which increasingly relies on caches to meet the stringent conditions on the latency between content servers and end-users. These systems routinely face the challenges of limited bandwidth capacities and network server failures, which degrade caching performance. In this paper, we study the problem of optimally allocating content over a resilient caching network, in which each cache may fail under some situations. Given content request rates and multiple routing paths, we formulate an optimization problem to maximize the expected caching gain, i.e., the reduction of latency due to intermediate caching. The offline version of this problem is NP-hard. We first propose a centralized, offline algorithm and show that a solution with a (1-1/e) approximation ratio to the optimal can be constructed. We then propose a distributed ascent algorithm based on the concave relaxation of the expected gain. Informed by the results of our analysis, we finally propose a distributed resilient caching algorithm (DR-Cache) that is simple and adaptive to network failures. We show numerically that DR-Cache significantly outperforms other candidate algorithms under synthetic requests, as well as real-world traces, over a class of network topologies.
【Keywords】: Routing; Delays; Servers; Approximation algorithms; Resource management; Optimization; Network topology
【Paper Link】 【Pages】:450-458
【Authors】: Kaiyi Ji ; Guocong Quan ; Jian Tan
【Abstract】: To efficiently scale data caching infrastructure to support emerging big data applications, many caching systems rely on consistent hashing to group a large number of servers into a cooperative cluster. These servers are organized together according to a random hash function. They jointly provide a unified but distributed hash table to serve swift and voluminous data item requests. Different from the single least-recently-used (LRU) server that has already been extensively studied, theoretically characterizing a cluster that consists of multiple LRU servers remains yet to be explored. These servers are not simply added together; the random hashing complicates the behavior. To this end, we derive the asymptotic miss ratio of data item requests on an LRU cluster with consistent hashing. We show that the individual cache spaces on different servers can be effectively viewed as if they were pooled together to form a single virtual LRU cache space parametrized by an appropriate cache size. This equivalence can be established rigorously under the condition that the cache sizes of the individual servers are large, which is common for typical data caching systems. Our theoretical framework provides a convenient abstraction that can directly apply the results from the simpler single LRU cache to the more complex LRU cluster with consistent hashing.
【Keywords】: Servers; Distributed databases; Partitioning algorithms; Conferences; Clustering algorithms; Random variables; Electronic mail
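The pooling equivalence claimed above is easy to probe numerically. In the hedged sketch below (ring parameters, Zipf popularity, and cache sizes are all illustrative), a consistent-hash ring spreads requests over m LRU servers, and the cluster's miss ratio is compared with a single LRU of the pooled capacity on the same trace:

    import bisect, hashlib, itertools, random
    from collections import OrderedDict

    def hv(x):
        return int(hashlib.md5(repr(x).encode()).hexdigest(), 16)

    class LRU:
        def __init__(self, size):
            self.size, self.d = size, OrderedDict()
        def request(self, key):                 # returns True on a hit
            if key in self.d:
                self.d.move_to_end(key)
                return True
            if len(self.d) >= self.size:
                self.d.popitem(last=False)      # evict least recently used
            self.d[key] = None
            return False

    class Ring:                                 # consistent hashing w/ vnodes
        def __init__(self, servers, vnodes=100):
            pts = sorted((hv((s, v)), s)
                         for s in servers for v in range(vnodes))
            self.keys = [k for k, _ in pts]
            self.srv = [s for _, s in pts]
        def lookup(self, item):                 # first virtual node clockwise
            i = bisect.bisect(self.keys, hv(item)) % len(self.keys)
            return self.srv[i]

    def zipf_trace(n_items=5000, n_req=100_000, a=0.9, seed=0):
        rng = random.Random(seed)
        cum = list(itertools.accumulate(
            1.0 / (i + 1) ** a for i in range(n_items)))
        return [bisect.bisect(cum, rng.random() * cum[-1])
                for _ in range(n_req)]

    m, per_server = 10, 200
    ring, trace = Ring(range(m)), zipf_trace()
    cluster = {s: LRU(per_server) for s in range(m)}
    pooled = LRU(m * per_server)
    miss_c = sum(not cluster[ring.lookup(x)].request(x) for x in trace)
    miss_p = sum(not pooled.request(x) for x in trace)
    print(miss_c / len(trace), miss_p / len(trace))  # nearly identical

The paper proves the equivalence rigorously in the large-cache limit; the sketch merely exhibits it for one configuration.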
【Paper Link】 【Pages】:459-467
【Authors】: Guocong Quan ; Kaiyi Ji ; Jian Tan
【Abstract】: Least-recently-used (LRU) caching systems have been widely used, and are increasingly deployed driven by emerging trends for big data. In a typical scenario, these systems are used to serve multiple flows of dependent data item requests that are also correlated over time. These flows compete for the limited cache space. Characterizing the miss ratios of these competing flows can facilitate the design and improve the system performance. The existing asymptotic analyses for correlated requests give explicit results for Zipf's distributions with the index greater than a critical value (one). Consequently, the asymptotic result is inaccurate around this critical point, which notably is also the typical parameter region reported by many empirical measurements. In contrast, we derive the asymptotic miss ratios of multiple flows for a large class of truncated heavy-tailed data item popularity distributions with time dependency. Importantly, it significantly improves the accuracy in numerical computations when the index of a Zipf's distribution is close to one. Moreover, the result generalizes beyond Zipf's distributions, e.g., to Weibull, for multiple flows of correlated data item requests. Our asymptotic result directly exploits the critical properties of the distribution and the truncated support region. As our versatile expression is explicit, it avoids the numerical computations required by the characteristic time approximation. Interestingly, it also validates the characteristic time approximation with new forms for multiple flows of competing requests that are correlated over time under certain conditions.
【Keywords】: Correlation; Weibull distribution; Conferences; Indexes; Market research; Big Data; Numerical models
【Paper Link】 【Pages】:468-476
【Authors】: Lin Wang ; Lei Jiao ; Ting He ; Jun Li ; Max Mühlhäuser
【Abstract】: While social Virtual Reality (VR) applications such as Facebook Spaces are becoming popular, they are not compatible with classic mobile- or cloud-based solutions due to the tremendous volumes of data they process and their exchange of delay-sensitive metadata. Edge computing may fulfill these demands better, but it is still an open problem to deploy social VR applications in an edge infrastructure while supporting economic operation of the edge clouds and satisfactory quality-of-service for the users. This paper presents the first formal study of this problem. We model and formulate a combinatorial optimization problem that captures all intertwined goals. We propose ITEM, an iterative algorithm with fast and big "moves" where, in each iteration, we construct a graph to encode all the costs and convert the cost optimization into a graph cut problem. By obtaining the minimum s-t cut via existing max-flow algorithms, we can simultaneously determine the placement of multiple service entities, and thus the original problem can be addressed by solving a series of graph cuts. Our evaluations with large-scale, real-world data traces demonstrate that ITEM converges fast and outperforms baseline approaches by more than 2x in one-shot placement and around 1.3x in dynamic, online scenarios where users move arbitrarily in the system.
【Keywords】: Cloud computing; Edge computing; Optimization; Quality of service; Delays; Social network services; Urban areas
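The cut construction at the heart of ITEM can be illustrated on a toy instance with networkx. Every number below (per-entity cloud/edge placement costs and the coupling cost paid when two communicating entities are split) is invented for illustration and is not the paper's cost model:

    import networkx as nx

    cost_cloud = {1: 4, 2: 1, 3: 3}   # cost if the entity stays in the cloud
    cost_edge = {1: 2, 2: 5, 3: 2}    # cost if it is placed on the edge
    couple = {(1, 2): 2, (2, 3): 1}   # penalty if the pair is split

    G = nx.DiGraph()
    for i in cost_cloud:
        # Cutting s->i (entity ends up cloud-side) pays its cloud cost;
        # cutting i->t (entity ends up edge-side) pays its edge cost.
        G.add_edge("s", i, capacity=cost_cloud[i])
        G.add_edge(i, "t", capacity=cost_edge[i])
    for (i, j), c in couple.items():
        G.add_edge(i, j, capacity=c)  # crossing the cut pays the coupling
        G.add_edge(j, i, capacity=c)

    cut_value, (edge_side, cloud_side) = nx.minimum_cut(G, "s", "t")
    print("total cost:", cut_value)
    print("edge:", sorted(edge_side - {"s"}),
          "cloud:", sorted(cloud_side - {"t"}))

Because one max-flow computation decides all entities at once, repeating such cuts yields the fast, big moves the abstract describes, rather than moving one entity at a time.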
【Paper Link】 【Pages】:477-485
【Authors】: Wang Zhang ; Xiaokang Hu ; Jian Li ; Haibing Guan
【Abstract】: In a Virtual Symmetric Multiprocessing (VSMP) environment, the behavior of the hypervisor scheduler can significantly influence a guest's I/O responsiveness. The interrupt remapping mechanism, which can leverage multiple virtual CPUs in the VSMP guest to process I/O events, is known to be an efficient and prevalent solution to improve I/O performance. However, in this paper we identify a novel challenge in the interrupt remapping mechanism, which we call the "Interruptability Holder Preemption" (IHP) problem: a virtual CPU (vCPU) that has disabled the interruptability of the guest's network device may be descheduled by the hypervisor scheduler, which can easily invalidate the efficiency of interrupt remapping. To solve this problem, we propose CoINT, a gasket coordinator residing in the hypervisor, which substantially enhances network I/O performance by making the hypervisor proactively aware of the interruptability information of the guest's network device. CoINT completely eliminates the IHP problem and largely reduces the I/O interrupt processing delay caused by the hypervisor scheduler. We implement CoINT in the KVM hypervisor and evaluate its efficiency and responsiveness using both macro- and micro-level benchmarks. The results show that CoINT can improve netperf throughput by up to 3x compared with native KVM, and up to 1.3x compared with the traditional hypervisor-level interrupt remapping solution, at the cost of a negligible overhead in the hypervisor.
【Keywords】: Virtual machine monitors; Throughput; Conferences; Delays; Registers; Performance evaluation; Virtualization
【Paper Link】 【Pages】:486-494
【Authors】: Xincai Fei ; Fangming Liu ; Hong Xu ; Hai Jin
【Abstract】: With the evolution of Network Function Virtualization (NFV), enterprises are increasingly outsourcing their network functions to the cloud. However, using virtualized network functions (VNFs) to provide flexible services in today's cloud is challenging due to the inherent difficulty in intelligently scaling VNFs to cope with traffic fluctuations. To best utilize cloud resources, NFV providers need to dynamically scale the VNF deployments and reroute traffic demands for their customers. Since most existing work is reactive in nature, we seek a proactive approach that provisions new instances for overloaded VNFs ahead of time based on estimated flow rates. We formulate the VNF provisioning problem such that the cost incurred by inaccurate prediction and VNF deployment is minimized. In the proposed online algorithm, we first employ an efficient online learning method that aims at minimizing the error in predicting the service chain demands. We then derive the requested instances with adaptive processing capacities and invoke two further algorithms for new instance assignment and service chain rerouting, respectively, while achieving good competitive ratios. The joint online algorithm is proven to provide good performance guarantees by both theoretical analysis and trace-driven simulation.
【Keywords】: Prediction algorithms; Routing; Cloud computing; Bandwidth; Servers; Outsourcing; Conferences
【Paper Link】 【Pages】:495-503
【Authors】: Yixin Bao ; Yanghua Peng ; Chuan Wu ; Zongpeng Li
【Abstract】: Nowadays large-scale distributed machine learning systems have been deployed to support various analytics and intelligence services in IT firms. To train a large dataset and derive the prediction/inference model, e.g., a deep neural network, multiple workers are run in parallel to train partitions of the input dataset and update shared model parameters. In a shared cluster handling multiple training jobs, a fundamental issue is how to efficiently schedule jobs and set the number of concurrent workers to run for each job, such that server resources are maximally utilized and model training can be completed in time. Targeting a distributed machine learning system using the parameter server framework, we design an online algorithm for scheduling the arriving jobs and deciding the adjusted numbers of concurrent workers and parameter servers for each job over its course, to maximize the overall utility of all jobs, contingent on their completion times. Our online algorithm design utilizes a primal-dual framework coupled with efficient dual subroutines, achieving good long-term performance guarantees with polynomial time complexity. The practical effectiveness of the online algorithm is evaluated using trace-driven simulation and testbed experiments, which demonstrate that it outperforms commonly adopted scheduling algorithms in today's cloud systems.
【Keywords】: Servers; Training; Data models; Machine learning; Computational modeling; Resource management; Graphics processing units
【Paper Link】 【Pages】:504-512
【Authors】: Chen Chen ; Wei Wang ; Bo Li
【Abstract】: Efficient resource management is of paramount importance in today's production clusters. In this paper, we identify the demand elasticity of data-parallel jobs. Demand elasticity allows jobs to run with significantly fewer resources than they ideally need, at the expense of only a modest performance penalty. Our EC2 experiment using popular Spark benchmark suites confirms that running a job with 50% of the demanded slots is sufficient to achieve at least 75% of the ideal performance. We show that such elasticity is an intrinsic property of data-parallel jobs and can be exploited to speed up average job completion. In this regard, we propose the Performance-Aware Fair (PAF) scheduler to identify demand elasticity and use it to improve average job performance, while still attaining a near-optimal isolation guarantee close to fair sharing. PAF starts with a fair allocation and iteratively adjusts it by transferring resources from one job to another, improving the performance of the resource-taker without penalizing the resource-giver by a noticeable amount. We implemented PAF in Spark and evaluated its effectiveness through both EC2 experiments and large-scale simulations. Evaluation results show that, compared with fair allocation, PAF improves average job performance by 13%, while penalizing resource-givers by no more than 1%.
【Keywords】: Task analysis; Elasticity; Resource management; Sparks; Data analysis; Runtime; Parallel processing
【Paper Link】 【Pages】:513-521
【Authors】: Xiaoda Zhang ; Zhuzhong Qian ; Sheng Zhang ; Xiangbo Li ; Xiaoliang Wang ; Sanglu Lu
【Abstract】: Typical data analytics systems abstract jobs as directed acyclic graphs (DAGs). It is crucial in practice to maximize throughput and speed up completions for DAG jobs. Existing works propose clairvoyant schedulers optimizing these goals; however, they assume complete job information as prior knowledge, which limits their applicability. Instead, we remove the complete-prior-knowledge assumption and rely solely on partial prior information, which is more practical. We design Cobra, a semi-clairvoyant task scheduler that works within each job. Cobra adaptively adjusts its resource desires in a multiplicative-increase multiplicative-decrease (MIMD) manner according to recent resource utilization and the currently waiting tasks. On the other hand, Cobra seeks to satisfy task locality preferences by allowing each task to wait for some time, bounded by a parameterized threshold. Surprisingly, even with partial prior job information, we theoretically prove that Cobra, when working with the widely used fair job scheduler, is O(1)-competitive with respect to both makespan and average job response time. We experimentally validate the performance improvement of Cobra in both a real system deployment and trace-driven simulations.
【Keywords】: Task analysis; Containers; Time factors; Dynamic scheduling; Data analysis; Processor scheduling
【Paper Link】 【Pages】:522-530
【Authors】: Liang Bao ; Chase Q. Wu ; Haiyang Qi ; Weizhao Chen ; Xin Zhang ; Weina Han ; Wei Wei ; En Tail ; Hao Wang ; Jiahao Zhai ; Xiang Chen
【Abstract】: Parallel computing combined with distributed data storage and management has been widely adopted by most big data analytics systems. Scheduling computing tasks to improve data locality is crucial to the performance of such systems. While existing schedulers target near-data scheduling on top of physical data blocks, these systems face a new scheduling problem where computing tasks process table-based datasets directly and access large physical blocks indirectly through their indices stored in associated small logical blocks. This new problem invalidates the basic assumption made by many existing algorithms on near-data scheduling. In this paper, we propose a Logical-block Affinity Scheduling (LAS) algorithm to coordinate the near-data scheduling of computing tasks and the placement of logical blocks for a desired balance between data-locality and load-balancing to maximize system throughput. The proposed algorithm is implemented and evaluated using a well-known big data benchmark and a practical production system deployed in public clouds. Extensive experimental results illustrate the performance superiority of LAS over three existing scheduling algorithms.
【Keywords】: Task analysis; Scheduling; Servers; Big Data; Scheduling algorithms; Sparks
【Paper Link】 【Pages】:531-539
【Authors】: Shuhao Liu ; Li Chen ; Baochun Li ; Aiden Carnegie
【Abstract】: Graph analytics has emerged as one of the fundamental techniques to support modern Internet applications. As real-world graph data is generated and stored globally, the scale of the graph that needs to be processed keeps growing. It is critical to efficiently process graphs across multiple geographically distributed datacenters, running wide-area graph analytics. Existing graph analytics frameworks are not designed to run across multiple datacenters well, as they implement a Bulk Synchronous Parallel model that requires excessive wide-area data transfers. In this paper, we present a new Hierarchical Synchronous Parallel model designed and implemented for synchronization across datacenters with a much improved efficiency in inter-datacenter communication. Our new model requires no modifications to graph analytics applications, yet guarantees their convergence and correctness. Our prototype implementation on Apache Spark can achieve up to 32% lower WAN bandwidth usage, 49% faster convergence, and 30% less total cost for benchmark graph algorithms, with input data stored across five geographically distributed datacenters.
【Keywords】: Synchronization; Analytical models; Convergence; Computational modeling; Partitioning algorithms; Bandwidth; Message passing
【Paper Link】 【Pages】:540-548
【Authors】: Harsh Gupta ; Atilla Eryilmaz ; R. Srikant
【Abstract】: We consider the problem of transmitting at the optimal rate over a rapidly-varying wireless channel with unknown statistics when the feedback about channel quality is very limited. One motivation for this problem is that, in emerging wireless networks, the use of mmWave bands means that the channel quality can fluctuate rapidly, and thus one cannot rely on full channel-state feedback to make transmission rate decisions. Inspired by related problems in the context of multi-armed bandits, we consider a well-known algorithm called Thompson sampling to address this problem. However, unlike the traditional multi-armed bandit problem, a direct application of Thompson sampling results in a computational and storage complexity that grows exponentially with time. Therefore, we propose an algorithm called Modified Thompson sampling (MTS), whose computational and storage complexity is simply linear in the number of channel states and which achieves at most logarithmic regret as a function of time when compared to an optimal algorithm which knows the probability distribution of the channel states.
【Keywords】: Link Rate Selection; Thompson Sampling; Regret Minimization; Computational Complexity
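For contrast, the textbook per-rate Bernoulli Thompson sampling baseline is only a few lines; the rate set and hidden delivery probabilities below are invented, and this naive per-arm variant ignores the channel-state structure that the paper's Modified Thompson Sampling exploits to keep complexity linear:

    import random

    rates = [6, 12, 24, 48]              # Mbps options (illustrative)
    true_p = [0.95, 0.90, 0.60, 0.20]    # hidden per-rate delivery probability
    alpha = [1.0] * len(rates)           # Beta posterior: successes + 1
    beta = [1.0] * len(rates)            # Beta posterior: failures + 1

    def pick_rate():
        # Sample a plausible success probability per rate and pick the rate
        # with the best expected throughput under the sampled values.
        scores = [r * random.betavariate(a, b)
                  for r, a, b in zip(rates, alpha, beta)]
        return max(range(len(rates)), key=scores.__getitem__)

    goodput = 0.0
    for _ in range(20_000):
        i = pick_rate()
        ok = random.random() < true_p[i]
        alpha[i] += ok
        beta[i] += 1 - ok
        goodput += rates[i] * ok
    print("avg goodput:", round(goodput / 20_000, 2))  # nears 24*0.6 = 14.4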
【Paper Link】 【Pages】:549-557
【Authors】: Jing Chen ; Feilong Tang ; Heteng Zhang ; Laurence T. Yang
【Abstract】: Existing random access protocols designed for satellite networks have poor performance in short burst communications because of the difficulty of global time synchronization and frequent collisions. In this paper, we propose a Lightweight Retransmission (LwR) mechanism for random access in satellite networks to reduce collisions and eliminate the synchronization requirement. In our LwR, only partial bits of a packet are retransmitted. Firstly, we formulate the lightweight retransmission problem and prove that it is NP-hard. Next, we focus on the construction of partial replicas, which is the core of our LwR, and propose regular and random construction methods. In particular, we prove sufficient conditions for successfully decoding two collided packets via ZigZag decoding. Finally, we propose an algebraic model and derive upper and lower bounds on the successful decoding probability under the different construction methods. Both theoretical analysis and experimental results reveal that the random construction method achieves a higher decoding probability than the regular construction method. Simulation results also demonstrate that our LwR significantly outperforms related schemes designed for satellite networks.
【Keywords】: Decoding; Satellites; Synchronization; Throughput; Silicon carbide; Optimization; Conferences
【Paper Link】 【Pages】:558-566
【Authors】: Hongzhi Guo ; Zhi Sun
【Abstract】: Many important applications in extreme environments require wireless communications to connect smart devices. Metamaterial-enhanced magnetic induction (M2I) has been proposed as a promising solution thanks to its long communication range in lossy media. M2I communication relies on magnetic coupling, which makes it intrinsically full-duplex without self-interference. Moreover, the engineered active metamaterial provides reconfigurability in communication range and interference. In this paper, the new networking paradigm based on the reconfigurable and full-duplex M2I communication technique is investigated. In particular, theoretical analysis and electromagnetic simulation are first provided to prove feasibility. Then, a medium access control protocol is proposed to avoid collisions. Finally, the capacity and delay of the full-duplex M2I network are derived to show the advantage of the new networking paradigm. The analysis in this paper indicates that in a full-duplex M2I network, the distance between the source and destination can be arbitrarily long and the end-to-end delay can be as short as a single-hop delay. As a result, each node in such a network can reach any other node in one hop, which can greatly enhance network robustness and efficiency. This is important for the timely transmission of emergency information or real-time control signals.
【Keywords】: Relays; Metamaterials; Magnetic materials; Antennas; Delays; Couplings; Receivers
【Paper Link】 【Pages】:567-575
【Authors】: Avinash Mohan ; Aditya Gopalan ; Anurag Kumar
【Abstract】: Motivated by medium access control for resource-challenged wireless sensor networks whose main purpose is data collection, we consider the problem of queue scheduling with reduced queue state information. In particular, we consider a model with N sensor nodes with pair-wise dependence, such that nodes i and i+1, 1 ≤ i ≤ N-1, cannot transmit together. For N=3, 4, and 5, we develop new throughput-optimal scheduling policies requiring only the empty-nonempty state of each queue, and also revisit previously proposed policies to rigorously establish their throughput- and delay-optimality. For N=3, there exists a sum-queue-length-optimal scheduling policy that requires only the empty-nonempty state of each queue. We show, however, that for N ≥ 4, there is no scheduling policy that uses only the empty-nonempty states of the queues and is sum-queue-length optimal uniformly over all arrival rate vectors. We then extend our results to a more general class of interference constraints, namely, a star of cliques. Our throughput-optimality results rely on two new arguments: a Lyapunov drift lemma specially adapted to policies that are queue-length-agnostic, and a priority queueing analysis for showing strong stability. Our study yields some counterintuitive conclusions: 1) knowledge of queue length information is not necessary to achieve optimal throughput/delay performance for a large class of interference networks, 2) it is possible to perform throughput-optimal scheduling by merely knowing whether queues in the network are empty or not, and 3) it is also possible to be throughput-optimal without always scheduling the maximum possible number of nonempty queues. We also show the results of numerical experiments on the performance of queue-length-agnostic scheduling vs. queue-length-aware scheduling on several interference networks.
【Keywords】: Wireless Sensor Networks; Medium Access Control (MAC) protocols; Optimal Polling; Delay Minimization; Hybrid MACs; Self-Organizing Networks; Internet of Things (IoT)
【Paper Link】 【Pages】:576-584
【Authors】: Tingjun Chen ; Jelena Diakonikolas ; Javad Ghaderi ; Gil Zussman
【Abstract】: Full-duplex (FD) wireless is an attractive communication paradigm with high potential for improving network capacity and reducing delay in wireless networks. Despite significant progress on the physical layer development, the challenges associated with developing medium access control (MAC) protocols for heterogeneous networks composed of both legacy half-duplex (HD) and emerging FD devices have not been fully addressed. Therefore, we focus on the design and performance evaluation of scheduling algorithms for infrastructure-based heterogeneous networks (composed of HD and FD users). We develop the hybrid Greedy Maximal Scheduling (H-GMS) algorithm, which is tailored to the special characteristics of such heterogeneous networks and combines both centralized GMS and decentralized Q-CSMA mechanisms. Moreover, we prove that H-GMS is throughput-optimal. We then demonstrate by simple examples the benefits of adding FD nodes to a network. Finally, we evaluate the performance of H-GMS and its variants in terms of throughput, delay, and fairness between FD and HD users via extensive simulations. We show that in heterogeneous HD-FD networks, H-GMS achieves 5-10x better delay performance and improves fairness between HD and FD users by up to 50% compared with the fully decentralized Q-CSMA algorithm.
【Keywords】: Full-duplex wireless; scheduling; distributed throughput maximization
【Paper Link】 【Pages】:585-593
【Authors】: Paolo Castagno ; Vincenzo Mancuso ; Matteo Sereno ; Marco Ajmone Marsan
【Abstract】: In this paper we study the queuing system that describes the operations of data services in cellular networks, e.g., UMTS, LTE/LTE-A, and most likely the forthcoming 5G standard. The main characteristic of all these systems is that after service access, resources remain allocated to the end user for some time before release, so that if the same user requests access to service again, before a system timeout, the same resources are still available. For the resulting queuing model, we express the blocking probability in closed form, and we also provide recursive expressions in the number of connections that can be handled by the base station. Closed-form expressions are also derived for other useful performance metrics, i.e., throughput and network service time. Analytical results are validated against results of a detailed simulation model, and compared to the results of traditional queueing models, such as the Erlang B formula iteratively applied to the resources that are not blocked by potentially returning users. Our analysis complements the performance evaluation of the other key mechanism used to access data services in cellular networks, namely the random access, which precedes the resource allocation and utilization phase studied in this paper.
【Keywords】: Base stations; Cellular networks; Analytical models; Closed-form solutions; Throughput; Measurement; Queueing analysis
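The Erlang B baseline mentioned above is easiest to compute with its standard recursion in the number of channels c, namely B(0) = 1 and B(c) = aB(c-1)/(c + aB(c-1)) for offered load a:

    def erlang_b(channels, offered_load):
        # Numerically stable recursion for the Erlang B blocking probability.
        b = 1.0
        for c in range(1, channels + 1):
            b = offered_load * b / (c + offered_load * b)
        return b

    # Blocking probability with 10 channels offered 8 Erlangs of traffic:
    print(round(erlang_b(10, 8.0), 4))   # 0.1217

The paper's model departs from this baseline because resources linger after service and users returning within the timeout are not blocked again, which the plain recursion cannot capture.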
【Paper Link】 【Pages】:594-602
【Authors】: Qingkai Liang ; Eytan Modiano
【Abstract】: Stochastic models have been dominant in network optimization theory for over two decades, due to their analytical tractability. However, these models fail to capture non-stationary or even adversarial network dynamics which are of increasing importance for modeling the behavior of networks under malicious attacks or characterizing short-term transient behavior. In this paper, we consider the network utility maximization problem in adversarial network settings. In particular, we focus on the tradeoffs between total queue length and utility regret which measures the difference in network utility between a causal policy and an “oracle” that knows the future within a finite time horizon. Two adversarial network models are developed to characterize the adversary's behavior. We provide lower bounds on the tradeoff between utility regret and queue length under these adversarial models, and analyze the performance of two control policies (i.e., the Drift-plus-Penalty algorithm and the Tracking Algorithm).
【Keywords】: Analytical models; Computational modeling; Stochastic processes; Optimization; Microsoft Windows; Heuristic algorithms; Wireless networks
【Paper Link】 【Pages】:603-611
【Authors】: Kun Chen ; Longbo Huang
【Abstract】: Motivated by the increasing importance of providing delay-guaranteed services in general computing and communication systems, and the recent wide adoption of learning and prediction in network control, in this work we consider a general stochastic single-server multi-user system and investigate the fundamental benefit of predictive scheduling in improving timely-throughput, i.e., the rate of packets that are delivered to destinations before their deadlines. By adopting an error-rate-based prediction model, we first derive a Markov decision process (MDP) solution to optimize the timely-throughput objective subject to an average resource consumption constraint. Based on a packet-level decomposition of the MDP, we explicitly characterize the optimal scheduling policy and rigorously quantify the timely-throughput improvement due to predictive service, which scales as Θ(p[C1((a - a_max q)/(p - q))ρ^τ + C2]D), where C1 ≥ 0, C2 ≥ 0 are constants, p is the true-positive rate in prediction, q is the false-negative rate, τ is the packet deadline and D is the prediction window size. We also conduct extensive simulations to validate our theoretical findings. Our results provide novel insights into how prediction and system parameters impact performance and provide useful guidelines for designing predictive low-latency control algorithms.
【Keywords】: Servers; Predictive models; Delays; Optimal scheduling; Markov processes; Microsoft Windows; Scheduling
【Paper Link】 【Pages】:612-620
【Authors】: Linqi Guo ; John Z. T. Pang ; Anwar Walid
【Abstract】: In this work, we investigate the problem of joint optimization over placement and routing of network function chains in data centers. In the offline case, we demonstrate that a classical randomization algorithm works well and we derive a new bound on the performance sub-optimality gap. In the online case, we prove a fundamental lower bound in resource violation and propose a new algorithm that combines techniques from multiplicative weight update and primal-dual update paradigms. This online algorithm asymptotically achieves the best possible performance in terms of resource allocation among all online algorithms. We demonstrate the applicability of our solutions to address practical problems by conducting simulation-based evaluations over different data center architectures using data generated from real trace distribution.
【Keywords】: Optimization; Routing; Data centers; Heuristic algorithms; Approximation algorithms; Noise measurement; Silicon
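Of the two ingredients named in the abstract above, the multiplicative weight update is the simpler to sketch in isolation. Below is a generic Hedge-style loop over k options (purely illustrative; the paper couples such updates with a primal-dual step for chain placement and routing):

    import math

    def mwu(loss_rounds, eta=0.5):
        # Hedge/MWU: keep one weight per option; after each round, decay each
        # weight by exp(-eta * observed loss) and play the normalized mix.
        weights = [1.0] * len(loss_rounds[0])
        played = []
        for losses in loss_rounds:
            total = sum(weights)
            played.append([w / total for w in weights])
            weights = [w * math.exp(-eta * l)
                       for w, l in zip(weights, losses)]
        return played

    rounds = [[0.9, 0.1, 0.5]] * 30      # option 1 is consistently best
    print([round(p, 3) for p in mwu(rounds)[-1]])  # mass moves to option 1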
【Paper Link】 【Pages】:621-629
【Authors】: Garegin Grigoryan ; Yaoqing Liu ; Michael Leczinsky ; Jun Li
【Abstract】: Due to network practices such as traffic engineering and multi-homing, the number of routes (also known as IP prefixes) in the global forwarding tables has been increasing significantly in the last decade and continues to grow at a superlinear rate. One of the most promising solutions is to use smart Forwarding Information Base (FIB) aggregation algorithms to aggregate the prefixes and convert a large table into a small one. Doing so, however, poses a research question: how can we quickly verify that the original table yields the same forwarding behaviors as the aggregated one? We answer this question in this paper, addressing the challenges caused by longest prefix matching (LPM) lookups. In particular, we propose the VeriTable algorithm, which employs a single tree/trie traversal to quickly check whether multiple forwarding tables are forwarding-equivalent, as well as whether they could result in routing loops or black holes. The VeriTable algorithm significantly outperforms the state-of-the-art work for both IPv4 and IPv6 tables in every aspect, including total running time, number of memory accesses, and memory consumption.
【Keywords】: Routing; IP networks; Routing protocols; Binary trees; Memory management; Engines; Heuristic algorithms
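The single-traversal idea can be sketched on a binary trie. The check below is a simplified illustration of forwarding equivalence under longest prefix matching (prefixes are bit strings, and two tables live side by side in one joint trie); it is not the full VeriTable, which also detects routing loops and black holes:

    class Node:
        def __init__(self):
            self.nh = [None, None]       # next hop per table at this prefix
            self.kids = [None, None]     # children for bit 0 / bit 1

    def insert(root, prefix, table, nexthop):
        node = root
        for b in prefix:                 # prefix given as a '0'/'1' string
            i = int(b)
            node.kids[i] = node.kids[i] or Node()
            node = node.kids[i]
        node.nh[table] = nexthop

    def equivalent(node, inherited=(None, None)):
        # One joint traversal: the effective LPM next hop per table is the
        # last one seen on the path. Any address subspace with no deeper
        # entry resolves here, so both tables must agree on it.
        eff = tuple(own if own is not None else inh
                    for own, inh in zip(node.nh, inherited))
        ok = True
        for kid in node.kids:
            ok &= eff[0] == eff[1] if kid is None else equivalent(kid, eff)
        return ok

    root = Node()
    insert(root, "", 0, "A")                              # table 0: default -> A
    insert(root, "0", 1, "A"); insert(root, "1", 1, "A")  # table 1: two halves
    print(equivalent(root))        # True: identical forwarding behavior
    insert(root, "11", 1, "B")
    print(equivalent(root))        # False: addresses under 11* now differ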
【Paper Link】 【Pages】:630-638
【Authors】: Long Luo ; Hongfang Yu ; Zilong Ye ; Xiaojiang Du
【Abstract】: Many large-scale compute-intensive and mission-critical online service applications are being deployed on geo-distributed datacenters, which require transfers of bulk business data over Wide Area Networks (WANs). The bulk transfers are often associated with different requirements on deadlines: either a complete transfer before a hard deadline or a best-effort delivery within a soft deadline. In this paper, we study the online bulk transfer problem over inter-datacenter WANs, taking into consideration requests with a mixture of hard and soft deadlines. We use Linear Programming (LP) to mathematically formulate the problem with the objective of maximizing a system utility represented by the service provider's revenue, accounting for the revenue earned from deadline-met transfers and the penalty paid for deadline-missed ones. We propose an online framework to efficiently manage mixed bulk transfers and design a competitive algorithm that applies the primal-dual method to make routing and resource allocation decisions based on the LP. We perform theoretical analysis to prove that the proposed approach can achieve a competitive ratio of (e-1)/e with little link capacity augmentation. In addition, we conduct comprehensive simulations to evaluate the performance of our method. Simulation results show that our method, irrespective of the revenue model, can accept at least 25% more transfer requests and improve network utilization by at least 35%, compared to prior solutions.
【Keywords】: Resource management; Electronics packaging; Routing; Bandwidth; Channel allocation; Heuristic algorithms; Linear programming
【Paper Link】 【Pages】:639-647
【Authors】: Vasanta G. Chaganti ; James F. Kurose ; Arun Venkataramani
【Abstract】: Future Internet Architectures must support the rapid growth of traffic generated by mobile endpoints in a manner that is scalable and ensures low latency. We present a quantitative evaluation of three distinct approaches towards handling endpoint mobility: name-based forwarding, indirection and a global name service (GNS). Using a range of parameterized mobility distributions and real ISP topologies, we describe representative instantiations of each approach and evaluate their performance using four key metrics: update cost and update propagation cost in the control plane; and forwarding traffic cost and time-to-connect (TTC) in the data plane. (1) We show that by leveraging the fact that realistic endpoint mobility distributions show a high probability of being at a small subset of visited locations, name-based forwarding strategies can provide up to 60% improvement in control costs over simple best-port forwarding. (2) We show that the TTC in these name-based forwarding strategies is comparable to the TTC in the GNS. (3) Finally we show that a GNS-based approach offers the most suitable balance of total (combined data and control) cost to TTC across all approaches, all endpoint mobility distributions, and all ISP topologies considered.
【Keywords】: Servers; Measurement; Routing protocols; Network topology; Topology; Conferences; Internet
【Paper Link】 【Pages】:648-656
【Authors】: Jiancheng Ye ; Ka-Cheong Leung ; Victor O. K. Li ; Steven H. Low
【Abstract】: Bufferbloat is a phenomenon where router buffers are constantly being filled, resulting in high queueing delay and delay variation. Larger buffer sizes and more delay-sensitive applications on the Internet have made this phenomenon a pressing issue. Active queue management (AQM) algorithms, which play an important role in combating bufferbloat, have not been widely deployed due to complicated manual parameter tuning. Moreover, AQM algorithms are often designed and analyzed based on models with a single bottleneck link, rendering their performance and stability unclear in multi-bottleneck networks. In this paper, we propose a general framework to combat bufferbloat in multi-bottleneck networks. We first conduct an equilibrium analysis for a general multi-bottleneck TCP/AQM system and develop an algorithm to compute the equilibrium point. We then decompose the system into single-bottleneck subsystems and derive sufficient conditions for the local asymptotic stability of the subsystems. Using the proposed framework, we present a case study to analyze the stability of the recently proposed Controlled Delay (CoDel) in multi-bottleneck networks and devise Self-tuning CoDel to improve the system stability and performance. Extensive simulation results show that Self-tuning CoDel effectively stabilizes queueing delay in multi-bottleneck scenarios, and thus contributes to combating bufferbloat.
【Keywords】: Stability analysis; Asymptotic stability; Delays; Mathematical model; Nickel; Heuristic algorithms; Internet
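For readers unfamiliar with CoDel's control law, its dequeue-side logic reduces to a small state machine: drop once the packet sojourn time has exceeded TARGET continuously for INTERVAL, then keep dropping at instants spaced INTERVAL/sqrt(count) apart until the queue drains. The sketch below simplifies RFC 8289 (re-entry refinements omitted); Self-tuning CoDel, per the abstract, adapts the control parameters using the derived stability conditions rather than fixing them:

    class CoDel:
        TARGET, INTERVAL = 0.005, 0.100    # seconds; RFC 8289 defaults

        def __init__(self):
            self.first_above = None        # deadline for "sojourn stayed high"
            self.dropping = False
            self.count = 0
            self.drop_next = 0.0

        def should_drop(self, sojourn, now):
            if sojourn < self.TARGET:      # queue drained enough: stand down
                self.first_above = None
                self.dropping = False
                return False
            if self.first_above is None:   # start the grace interval
                self.first_above = now + self.INTERVAL
                return False
            if not self.dropping:
                if now >= self.first_above:    # high for a full interval
                    self.dropping = True
                    self.count = 1
                    self.drop_next = now + self.INTERVAL / self.count ** 0.5
                    return True
                return False
            if now >= self.drop_next:          # escalate the drop rate
                self.count += 1
                self.drop_next = now + self.INTERVAL / self.count ** 0.5
                return True
            return False

    q = CoDel()
    print([q.should_drop(0.02, t / 100) for t in range(0, 40, 5)])
    # [False, False, True, False, True, False, True, False]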
【Paper Link】 【Pages】:657-665
【Authors】: Yongshu Bai ; Pengzhan Hao ; Yifan Zhang
【Abstract】: We show that many popular mobile services suffer from excessive network bandwidth consumption. The root cause is that the existing mobile/cloud communication interfaces are designed and optimized for service providers rather than end-user devices. Solving the problem is challenging because of the conflicting interests of service providers and mobile devices. We propose Edge-hosted Personal Service (EPS), with which device-oriented solutions can be easily deployed without affecting service providers. EPS also enjoys other notable advantages, including enabling new mobile services, reducing load on the cloud, and benefiting delay-sensitive applications. We demonstrate the usefulness of EPS by designing ETA (Edge-based web Traffic Adaptation), an effective solution to the excessive bandwidth consumption problem, and deploy ETA on a prototype EPS system. By exploiting lightweight virtualization techniques, our EPS prototype system is highly scalable in terms of concurrent EPS instances, and secure in terms of resource isolation. A real-world evaluation shows that ETA on EPS can effectively reduce bandwidth consumption for mobile devices with small overhead.
【Keywords】: Mobile handsets; Google; Bandwidth; Web services; Payloads; Servers; Cloud computing
【Paper Link】 【Pages】:666-674
【Authors】: Gayane Vardoyan ; C. V. Hollot ; Don Towsley
【Abstract】: The Transmission Control Protocol (TCP) utilizes congestion avoidance and control mechanisms as a preventive measure against congestive collapse and as an adaptive measure in the presence of changing network conditions. The set of available congestion control algorithms is diverse, and while many have been studied from empirical and simulation perspectives, there is a notable lack of analytical work for some variants. To gain more insight into the dynamics of these algorithms, we: (1) propose a general modeling scheme consisting of a set of functional differential equations of retarded type (RFDEs) describing the congestion window as a function of time; (2) apply this scheme to TCP Reno and demonstrate its equivalence to a previous, well-known model for TCP Reno; (3) show an application of the new framework to the widely deployed congestion control algorithm TCP CUBIC, for which analytical models are few and limited; and (4) validate the model using simulations. Our modeling framework yields a fluid model for TCP CUBIC. From a theoretical analysis of this model, we discover that TCP CUBIC is locally uniformly asymptotically stable, a property of the algorithm previously unknown.
【Keywords】: Mathematical model; Analytical models; Stability analysis; Asymptotic stability; Computational modeling; Adaptation models; Heuristic algorithms
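For reference, the deterministic window-growth law that any fluid model of TCP CUBIC must capture is the standard cubic function, as specified in RFC 8312; the paper's framework embeds such dynamics into retarded functional differential equations with delayed loss feedback:

```latex
W(t) = C\,(t - K)^{3} + W_{\max}, \qquad
K = \sqrt[3]{\frac{W_{\max}\,\beta}{C}}
```

Here $t$ is the time elapsed since the last loss event, $W_{\max}$ is the window size at that event, $\beta$ is the multiplicative-decrease factor, and $C$ is a scaling constant.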
【Paper Link】 【Pages】:675-683
【Authors】: Ahmed M. Abdelmoniem ; Brahim Bensaou
【Abstract】: We first study, at a microscopic level, the effects of various types of packet losses on TCP performance in a small data center. Then, based on the findings, we propose a simple recovery mechanism to combat the drawbacks of the long retransmission timeout. Our empirical study shows that packet losses that occur at the tail of short-lived flows and/or bursty losses that span a large fraction of the congestion window are frequent in data center networks; and, in most cases, especially for short-lived flows, they result in a loss recovery that incurs waiting for a long retransmission timeout (RTO). The negative effect of frequent RTOs on the flow completion time (FCT) is dramatic, yet recovery via RTO is merely a symptom of the pathological design of TCP's minimum RTO mechanism (set by default to the Internet scale). We propose the so-called Timely Retransmitted ACKs (T-RACKs), a very simple recovery mechanism for data centers, implemented as a shim layer between the virtual machine layer and the end-host NIC, to bridge the gap between TCP's huge RTO and the actual round-trip times experienced in the data center. Compared to alternative solutions such as DCTCP, T-RACKs has the virtue of not requiring any modification to TCP, which makes it readily deployable in virtualized multi-tenant public data centers. Experimental results show considerable improvements in the FCT distribution.
【Keywords】: Data centers; Packet loss; Microsoft Windows; Delays; Probes; Computational efficiency
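The following is a schematic sketch, not the paper's exact logic, of the T-RACKs idea: a shim below the VMs tracks the last ACK seen per flow and, when a flow stalls for a few datacenter RTTs (far below TCP's default 200 ms minimum RTO), replays duplicate ACKs toward the sender so it enters fast retransmit instead of waiting out the RTO. Flow keys, thresholds, and the replay count here are assumptions.

```python
# Schematic sketch of a T-RACKs-style shim (thresholds and interfaces
# are assumptions, not the paper's implementation).
import time

RACK_TIMEOUT = 0.002   # a few datacenter RTTs, in seconds (assumed)
DUP_ACKS_TO_SEND = 3   # enough duplicates to trigger fast retransmit

class TRacksShim:
    def __init__(self, send_ack_fn):
        self.last_ack = {}       # flow 4-tuple -> (ack packet, timestamp)
        self.send_ack = send_ack_fn

    def on_ack(self, flow, ack_pkt):
        # Called for every ACK traversing the shim toward the sender.
        self.last_ack[flow] = (ack_pkt, time.monotonic())

    def tick(self):
        # Called periodically; replay dup ACKs for stalled flows.
        now = time.monotonic()
        for flow, (ack_pkt, ts) in self.last_ack.items():
            if now - ts > RACK_TIMEOUT:
                for _ in range(DUP_ACKS_TO_SEND):
                    self.send_ack(flow, ack_pkt)  # spoofed duplicate ACK
                self.last_ack[flow] = (ack_pkt, now)  # back off
```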
【Paper Link】 【Pages】:684-692
【Authors】: Qiaofeng Qin ; Konstantinos Poularakis ; George Iosifidis ; Leandros Tassiulas
【Abstract】: Fog architectures at the network edge are becoming a popular research trend for providing elastic resources and services to end-users, with processing capacity residing at the network periphery as opposed to traditional data centers. Despite their momentum, the control plane of these architectures remains complex and challenging to implement. To enhance control capability, in this work we propose to use Software Defined Networking (SDN). SDN moves the control logic off data-plane devices and onto external network entities, the controllers. We provide a proof-of-concept implementation of a multi-controller edge system and measure traffic delay and overheads. The results reveal the sensitivity of delay to the location of controllers and the magnitude of inter-controller and controller-node overheads. Guided by the above, we model the problem of determining the placement of controllers in the edge network. Using linearization and supermodular function techniques, we present approximation solutions that perform close to optimal and better than state-of-the-art methods.
【Keywords】: Delays; Smart phones; Peer-to-peer computing; Computer architecture; Emulation; Wireless communication; Approximation algorithms
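As a generic illustration of the latency-driven placement problem (not the paper's linearization/supermodular approximation), a greedy heuristic that repeatedly opens the controller site most reducing total node-to-nearest-controller delay looks as follows; the delay matrix is assumed given, e.g., from shortest-path latencies measured on the edge topology.

```python
# Generic greedy sketch of delay-aware controller placement; an
# illustration of the problem, not the paper's algorithm.

def greedy_placement(delay, sites, k):
    """delay: dict-of-dicts of pairwise delays; sites: candidate nodes."""
    chosen = []
    nodes = list(delay.keys())
    for _ in range(k):
        def cost(extra):
            placed = chosen + [extra]
            # Each node is served by its nearest placed controller.
            return sum(min(delay[u][c] for c in placed) for u in nodes)
        best = min((s for s in sites if s not in chosen), key=cost)
        chosen.append(best)
    return chosen

# Toy line topology: delay between nodes u and v is |u - v|.
delay = {u: {v: abs(u - v) for v in range(4)} for u in range(4)}
print(greedy_placement(delay, sites=range(4), k=2))
```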
【Paper Link】 【Pages】:693-701
【Authors】: Richard Cziva ; Christos Anagnostopoulos ; Dimitrios P. Pezaros
【Abstract】: Future networks are expected to support low-latency, context-aware and user-specific services in a highly flexible and efficient manner. One approach to supporting emerging use cases such as virtual reality and in-network image processing is to introduce virtualized network functions (vNFs) at the edge of the network, placed in close proximity to the end users to reduce end-to-end latency, time-to-response, and unnecessary utilisation in the core network. While placement of vNFs has been studied before, prior work has mostly focused on reducing the utilisation of server resources (i.e., minimising the number of servers required in the network to run a specific set of vNFs), without taking network conditions into consideration, such as end-to-end latency, constantly changing network dynamics, or user mobility patterns. In this paper, we formulate the Edge vNF placement problem to allocate vNFs to a distributed edge infrastructure, minimising end-to-end latency from all users to their associated vNFs. We present a way to dynamically re-schedule the optimal placement of vNFs based on temporal network-wide latency fluctuations using optimal stopping theory. We then evaluate our dynamic scheduler over a simulated nation-wide backbone network using real-world ISP latency characteristics. We show that our proposed dynamic placement scheduler minimises vNF migrations compared to other schedulers (e.g., periodic and always-on scheduling of a new placement), and offers Quality of Service guarantees by not exceeding a maximum number of latency violations that can be tolerated by certain applications.
【Keywords】: Network Function Virtualization; Latency; Edge Network; Resource Orchestration; Optimal Stopping Theory
【Paper Link】 【Pages】:702-710
【Authors】: Yi Ren ; Tzu-Ming Huang ; Kate Ching-Ju Lin ; Yu-Chee Tseng
【Abstract】: The emergence of Network Function Virtualization (NFV) enables flexible and agile service function chaining in a Software Defined Network (SDN). While this virtualization technology efficiently offers customization capability, it comes, however, at the cost of consuming precious TCAM resources. Due to this, the number of service chains that an SDN can support is limited by the flowtable size of a switch. To break this limitation, this paper presents CRT-Chain, a service chain forwarding protocol that requires only a constant number of flowtable entries, regardless of the number of service chain requests. The core of CRT-Chain is an encoding mechanism that leverages the Chinese Remainder Theorem (CRT) to compress the forwarding information into small labels. A switch does not need to insert forwarding rules for every service chain request; it only needs to conduct very simple modular arithmetic to extract the forwarding rules directly from CRT-Chain's labels attached in the header. We further incorporate prime reuse and path segmentation in CRT-Chain to reduce the header size and, hence, save bandwidth. Our evaluation results show that, when a chain consists of no more than 5 functions, CRT-Chain actually generates a header smaller than the legacy 32-bit header defined by the IETF. By enabling prime reuse and segmentation, CRT-Chain further reduces the total signaling overhead to a level below that of the conventional scheme, showing that CRT-Chain not only enables scalable flowtable-free chaining but also improves network efficiency.
【Keywords】: Switches; Protocols; Encoding; Cathode ray tubes; Routing; Network function virtualization; Bandwidth
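The CRT mechanism at the protocol's core can be made concrete with a short sketch: assign each switch on the path a distinct prime p_i, pack the per-switch output ports r_i into a single label x with x mod p_i = r_i, and let each switch recover its port with one modulo operation. The primes and ports below are illustrative values, not taken from the paper.

```python
# Sketch of the CRT encoding at the heart of CRT-Chain: the controller
# packs per-switch output ports into one label; each switch recovers
# its own port with a single modulo. Requires Python 3.8+ for pow(a, -1, m).
from functools import reduce

def crt_encode(primes, ports):
    """Return the smallest x with x % p == r for each (p, r) pair."""
    M = reduce(lambda a, b: a * b, primes)
    x = 0
    for p, r in zip(primes, ports):
        Mi = M // p
        x += r * Mi * pow(Mi, -1, p)  # modular inverse of Mi mod p
    return x % M

primes = [5, 7, 11, 13]   # one distinct prime per switch on the path
ports  = [2, 4, 1, 3]     # desired output port at each switch (< prime)
label = crt_encode(primes, ports)
assert all(label % p == r for p, r in zip(primes, ports))
print(label)              # single header label carried by the packet
```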
【Paper Link】 【Pages】:711-719
【Authors】: Christopher Leet ; Xin Wang ; Y. Richard Yang ; James Aspnes
【Abstract】: High-level programming and programmable datapaths are two key capabilities of software-defined networking (SDN). A fundamental problem linking these two capabilities is whether a given high-level SDN program can be realized onto a given low-level SDN datapath structure. Considering all high-level programs that can be realized onto a given datapath as the programming capacity of the datapath, we refer to this problem as the SDN datapath programming capacity problem. In this paper, we conduct the first study of the SDN datapath programming capacity problem, in the general setting of high-level, datapath-oblivious, algorithmic SDN programs and state-of-the-art multi-table SDN datapath pipelines. In particular, considering datapath-oblivious SDN programs as computations and datapath pipelines as computation capabilities, we introduce a novel framework called SDN characterization junctions to map both SDN programs and datapaths into a unifying space, deriving the first rigorous result on SDN datapath programming capacity. We not only prove our results but also conduct realistic evaluations to demonstrate the tightness of our analysis.
【Keywords】: Pipelines; Routing; Registers; Programming profession; Conferences; Switches
【Paper Link】 【Pages】:720-728
【Authors】: Kai Gao ; Jingxuan Zhang ; Y. Richard Yang ; Jun Bi
【Abstract】: As modern network applications (e.g., large data analytics) become more distributed and can conduct application-layer traffic adaptation, they demand better network visibility to better orchestrate their data flows. As a result, the ability to predict the available bandwidth for a set of flows has become a fundamental requirement of today's networking systems. While there are previous studies addressing the case of non-reactive flows, prediction for reactive flows, e.g., flows managed by TCP congestion control algorithms, remains an open problem. In this paper, we identify three challenges in providing throughput prediction for reactive flows: throughput dynamics, heterogeneous reactive control mechanisms, and source-constrained flows. Based on a previous theoretical model, we introduce a novel learning-based prediction system whose key component is the fast factor learning (FFL) model. We adopt novel techniques to overcome practical concerns such as scalability, convergence, and unknown system parameters. A system, Prophet, is proposed that leverages the emerging technologies of Software Defined Networking (SDN) to realize the model. Evaluations demonstrate that our solution achieves high accuracy in a wide range of settings.
【Keywords】: Throughput; Bandwidth; Estimation; Optimization; Conferences; Computational modeling; Predictive models
【Paper Link】 【Pages】:729-737
【Authors】: Colton Harper ; Massimiliano Pierobon ; Maurizio Magarini
【Abstract】: Biological cells naturally exchange information to adapt to the environment, or even to influence other cells. One of the latest frontiers of synthetic biology lies in engineering cells to harness these natural communication processes for tissue engineering and cancer treatment, amongst others. Although experimental success has been achieved in this direction, approaches to characterize these systems in terms of communication performance and its dependence on design parameters are currently limited. In contrast to more classical communication systems, information in biological cells is propagated through molecules and biochemical reactions, which in general result in nonlinear input-output behaviors with system-evolution-dependent stochastic effects that are not amenable to analytical closed-form characterization. In this paper, a computational approach is proposed to characterize the information exchange in these systems, based on stochastic simulation of biochemical reactions and the estimation of information-theoretic parameters from sample distributions. In particular, this approach focuses on engineered cell-to-cell communications with a single transmitter and receiver, and it is applied to characterize the performance of a realistic system. Numerical results confirm the feasibility of this approach as a basis for future forward engineering practices for these communication systems.
【Keywords】: Molecular Communication; Synthetic Biology; Mutual Information; Stochastic Simulation
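The estimation step the abstract describes, recovering information-theoretic quantities from sample distributions produced by stochastic simulation, can be illustrated with a plug-in (histogram) mutual information estimator; the binning, sample sizes, and toy Gaussian channel below are assumptions, not the paper's setup.

```python
# Plug-in estimate of I(X;Y) in bits from paired input/output samples.
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram (plug-in) estimate of mutual information in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)                 # e.g., transmitted signal
y = x + rng.normal(scale=0.5, size=x.size)   # noisy received signal
# For this Gaussian toy channel, the true value is 0.5*log2(1 + 1/0.25).
print(mutual_information(x, y))
```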
【Paper Link】 【Pages】:738-746
【Authors】: Yan Qiao
【Abstract】: This paper addresses the problem of inferring link loss rates based on network performance tomography in noisy network systems. Since network tomography emerged, existing tomography-based methods have been limited by the basic condition that both network topologies and end-to-end routes must be absolutely accurate, which in most cases is impractical, especially for large-scale heterogeneous networks. To overcome this impracticability, we propose a robust tomography-based loss inference method capable of accurately inferring all link loss rates even when the network system changes dynamically. The new method first measures the end-to-end loss rates of selected paths to reduce the probing cost, and then calculates an upper bound for the loss rate of each link using the measurement results. Finally, it finds the link loss rates that most closely conform to the measurement results within their upper bounds. In comparisons with two traditional loss inference methods (with and without path selection, respectively), the results strongly confirm the promising performance of our proposed approach.
【Keywords】: Network tomography; Loss inference; Uncertain network; Least-squares problem
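The inference step can be sketched as a bound-constrained least-squares problem: with per-link success rate s_j = 1 - loss_j, a path's success rate is the product over its links, so x_j = -log(s_j) makes the measurements linear in the {0,1} routing matrix, and the per-link upper bounds on loss become box constraints. The routing matrix, measurements, and bounds below are toy values, not the paper's.

```python
# Bound-constrained loss inference sketch using log-transformed rates.
import numpy as np
from scipy.optimize import lsq_linear

R = np.array([[1, 1, 0],             # path 1 traverses links 1, 2
              [0, 1, 1],             # path 2 traverses links 2, 3
              [1, 0, 1]], float)     # path 3 traverses links 1, 3
path_loss = np.array([0.03, 0.05, 0.04])   # measured end-to-end loss
y = -np.log(1.0 - path_loss)               # linearized measurements

loss_ub = np.array([0.05, 0.05, 0.05])     # per-link upper bounds
res = lsq_linear(R, y, bounds=(0.0, -np.log(1.0 - loss_ub)))
link_loss = 1.0 - np.exp(-res.x)
print(link_loss)                           # inferred per-link loss rates
```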
【Paper Link】 【Pages】:747-755
【Authors】: Alfonso Iacovazzi ; Sanat Sarda ; Yuval Elovici
【Abstract】: TOR is a well-known and established anonymous network that has increasingly been abused by services distributing and hosting content, in most cases images and videos, that is illegal or morally deplorable (e.g., child pornography). Law enforcement continually tries to identify the users and providers of such content. State-of-the-art techniques to breach TOR's anonymity are usually based on passive and active network traffic analysis, and rely on the ability of the deanonymization entity to control TOR's edge communication. Despite this, locating hidden servers and linking illegal content with those providing and spreading it remains an open and controversial issue. In this paper, we describe Inflow, a new technique to identify hidden servers based on inverse flow watermarking. Inflow exploits the influence of congestion mechanisms on the traffic passing through the TOR network. Inflow drops bursts of packets for short time intervals on the receiving side of a traffic flow coming from a hidden server and passing through the TOR network. Packet dropping affects the TOR flow control and causes time gaps in flows observed on the hidden server side. By controlling the communication edges and detecting the watermarking gaps, Inflow is able to detect the hidden server. Our results, obtained by means of empirical experiments performed on the real TOR network, show true-positive rates in the range of 90% to 98%.
【Keywords】: Traceback; Watermark; TOR; Hidden service
【Paper Link】 【Pages】:756-764
【Authors】: Qiang Liu ; Siqi Huang ; Johnson Opadere ; Tao Han
【Abstract】: Mobile augmented reality (MAR) involves high-complexity computation which cannot be performed efficiently on resource-limited mobile devices. The performance of MAR can be significantly improved by offloading the computation tasks to servers deployed in close proximity to the users. In this paper, we design an edge network orchestrator to enable fast and accurate object analytics at the network edge for MAR. Measurement-based analytical models are built to characterize the tradeoff between service latency and analytics accuracy in edge-based MAR systems. As a key component of the edge network orchestrator, a server assignment and frame resolution selection algorithm named FACT is proposed to mitigate the latency-accuracy tradeoff. Through network simulations, we evaluate the performance of the FACT algorithm and offer insights into optimizing the performance of edge-based MAR systems. We have implemented the edge network orchestrator and developed the corresponding communication protocol. Our experiments validate the performance of the proposed edge network orchestrator.
【Keywords】: Servers; Mars; Computational modeling; Cloud computing; Analytical models; Wireless communication; Mobile handsets
【Paper Link】 【Pages】:765-773
【Authors】: Chaobing Zeng ; Fangming Liu ; Shutong Chen ; Weixiang Jiang ; Miao Li
【Abstract】: Network function virtualization (NFV) decouples network functions from dedicated hardware and enables them to run on commodity servers, facilitating widespread deployment of virtualized network functions (VNFs). Network operators tend to deploy VNFs in virtual machines (VMs) due to the VM's ease of duplication and migration, which enables flexible VNF placement and scheduling. Efforts have been made to provide efficient VNF placement approaches, aiming at minimizing the resource cost of VNF deployment and reducing the latency of service chains. However, existing placement approaches may result in hardware resource competition among co-located VNFs, leading to performance degradation. In this paper, we present a measurement study on the performance interference among different types of co-located VNFs and analyze how VNFs' competition for hardware resources and packet characteristics affect the performance interference. We find that performance interference between co-located VNFs is ubiquitous, causing throughput degradation ranging from 12.36% to 50.3%, and that competition for network I/O bandwidth plays a key role in the performance interference. Based on our measurement results, we offer advice on how to design more efficient VNF placement approaches.
【Keywords】: Interference; Servers; Bandwidth; Degradation; Hardware; Resource management; Throughput
【Paper Link】 【Pages】:774-782
【Authors】: Andrea Tomassilli ; Frédéric Giroire ; Nicolas Huin ; Stéphane Pérennes
【Abstract】: A Service Function Chain (SFC) is an ordered sequence of network functions, such as load balancing, content filtering, and firewalls. With the Network Function Virtualization (NFV) paradigm, network functions can be deployed as pieces of software on generic hardware, leading to flexibility of network service composition. Along with its benefits, NFV brings several challenges to network operators, such as the placement of virtual network functions. In this paper, we study the problem of how to optimally place the network functions within the network in order to satisfy all the SFC requirements of the flows. Our optimization task is to minimize the total deployment cost. We show that the problem can be seen as an instance of the Set Cover Problem, even in the case of ordered sequences of network functions. This allows us to propose two logarithmic-factor approximation algorithms, which achieve the best possible asymptotic factor. Further, we devise an optimal algorithm for tree topologies. Finally, we evaluate the performance of our proposed algorithms through extensive simulations. We demonstrate that near-optimal solutions can be found with our approach.
【Keywords】: Approximation algorithms; Network function virtualization; Conferences; Servers; Software; Optimization
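The logarithmic factor comes from the classical greedy algorithm for weighted Set Cover, sketched below on toy data: repeatedly pick the candidate placement with the lowest cost per newly covered SFC requirement. This is the textbook greedy, not the paper's full reduction, which also handles the ordering of the functions in a chain.

```python
# Classical greedy for weighted Set Cover, which achieves the
# logarithmic approximation factor the paper's reduction relies on.

def greedy_set_cover(universe, candidates):
    """candidates: list of (cost, frozenset of covered elements)."""
    uncovered, picked = set(universe), []
    while uncovered:
        # Pick the candidate minimizing cost per newly covered element.
        cost, covered = min(
            ((c, s) for c, s in candidates if s & uncovered),
            key=lambda cs: cs[0] / len(cs[1] & uncovered))
        picked.append((cost, covered))
        uncovered -= covered
    return picked

requests = {1, 2, 3, 4, 5}                  # SFC requirements to satisfy
placements = [(3.0, frozenset({1, 2, 3})),  # (cost, requests it serves)
              (2.0, frozenset({3, 4})),
              (1.0, frozenset({5})),
              (4.0, frozenset({1, 4, 5}))]
print(greedy_set_cover(requests, placements))
```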
【Paper Link】 【Pages】:783-791
【Authors】: Ruozhou Yu ; Guoliang Xue ; Xiang Zhang
【Abstract】: The emergence of the Internet-of-Things (IoT) has inspired numerous new applications. However, due to the limited resources in current IoT infrastructures and the stringent quality-of-service requirements of the applications, providing computing and communication support for the applications is becoming increasingly difficult. In this paper, we consider IoT applications that receive continuous data streams from multiple sources in the network, and study joint application placement and data routing to support all data streams with both bandwidth and delay guarantees. We formulate the application provisioning problem both for a single application and for multiple applications, and prove both cases to be NP-hard. For the case with a single application, we propose a fully polynomial-time approximation scheme. For the multi-application scenario, if the applications can be parallelized among multiple distributed instances, we propose a fully polynomial-time approximation scheme; for general non-parallelizable applications, we propose a randomized algorithm and analyze its performance. Simulations show that the proposed algorithms greatly improve the quality-of-service of the IoT applications compared to the heuristics.
【Keywords】: Internet-of-things; quality-of-service; service provisioning; fog computing; approximation algorithms
【Paper Link】 【Pages】:792-800
【Authors】: Shengshan Hu ; Chengjun Cai ; Qian Wang ; Cong Wang ; Xiangyang Luo ; Kui Ren
【Abstract】: Enabling search directly over encrypted data is a desirable technique that allows users to effectively utilize encrypted data outsourced to a remote server such as a cloud service provider. So far, most existing solutions focus on an honest-but-curious server, while security designs against a malicious server have not drawn enough attention. Only recently have a few works addressed the issue of verifiable designs that enable the data owner to verify the integrity of search results. Unfortunately, these verification mechanisms are highly dependent on the specific encrypted search index structures, and fail to support complex queries. There is a lack of a general verification mechanism that can be applied to all search schemes. Moreover, no effective countermeasures (e.g., punishing the cheater) are available when an unfaithful server is detected. In this work, we explore the potential of smart contracts in Ethereum, an emerging blockchain-based decentralized technology that provides a new paradigm for trusted and transparent computing. By replacing the central server with a carefully designed smart contract, we construct a decentralized privacy-preserving search scheme where the data owner can receive correct search results with assurance and without worrying about potential wrongdoings of a malicious server. To better support practical applications, we introduce fairness to our scheme by designing a new smart contract for a financially fair search construction, in which every participant (especially in the multiuser setting) is treated equally and incentivized to conform to correct computations. In this way, an honest party can always gain what he deserves while a malicious one gets nothing. Finally, we implement a prototype of our construction and deploy it to a locally simulated network and an official Ethereum test network, respectively. The extensive experiments and evaluations demonstrate the practicability of our decentralized search scheme over encrypted data.
【Keywords】: Contracts; Cryptography; Servers; Indexes
【Paper Link】 【Pages】:801-809
【Authors】: Wenhai Sun ; Ruide Zhang ; Wenjing Lou ; Y. Thomas Hou
【Abstract】: Search over encrypted data (SE) enables a client to delegate his search task to a third-party server that hosts a collection of encrypted documents while still guaranteeing some measure of query privacy. Software-based solutions using diverse cryptographic primitives have been extensively explored, leading to a rich set of secure search indexes and algorithm designs. However, each scheme can only implement a small subset of information retrieval (IR) functions, often with considerable search information leaked. Recently, hardware-based secure execution has emerged as an effective mechanism to securely execute programs in an untrusted software environment. In this paper, we exploit the hardware-based trusted execution environment (TEE) and explore a combined software and hardware approach to address the challenging secure search problem. For functionality, our design can support the same spectrum of plaintext IR functions. For security, we present oblivious keyword search techniques to mitigate index search trace leakage. We build a prototype of the system using Intel SGX. We demonstrate that the proposed system provides broad support for a variety of search functions and achieves computation efficiency comparable to plaintext data search with elevated security protection.
【Keywords】: Indexes; Encryption; Software; Keyword search; Hardware
【Paper Link】 【Pages】:810-818
【Authors】: Guoxing Chen ; Ten-Hwang Lai ; Michael K. Reiter ; Yinqian Zhang
【Abstract】: Searchable encryption enables searches to be performed on encrypted documents stored on an untrusted server without exposing the documents or the search terms to the server. Nevertheless, the server typically learns which encrypted documents match the query (the so-called access pattern), since the server must return those documents. Recent studies have demonstrated that access patterns can be used to infer the search terms in some scenarios. In this paper, we propose a framework to protect systems using searchable symmetric encryption from access-pattern leakage. Our technique is based on d-privacy, a generalized version of differential privacy that provides provable security guarantees against adversaries with arbitrary background knowledge.
【Keywords】: Servers; Encryption; Indexes; Probabilistic logic
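For reference, d-privacy replaces the adjacency relation of standard differential privacy with an arbitrary metric d on inputs; in our transcription (parameter names assumed), a randomized mechanism K is epsilon-d-private if for all inputs x, x' and all output sets S:

```latex
\Pr[K(x) \in S] \;\le\; e^{\epsilon\, d(x,\, x')}\, \Pr[K(x') \in S]
```

Choosing d as the Hamming metric between databases recovers standard epsilon-differential privacy.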
【Paper Link】 【Pages】:819-827
【Authors】: Xiangyu Wang ; Jianfeng Ma ; Yinbin Miao ; Ruikang Yang ; Yijia Chang
【Abstract】: With the development of the Mobile Healthcare Monitoring Network (MHMN), patients' personal data collected by body sensors not only allows patients to monitor their health or obtain an online pre-diagnosis but also enables clinicians to make proper decisions by utilizing data mining techniques. However, the privacy of this sensitive data remains a major concern. In this paper, we first propose an Efficient Privacy-preserving Sensor data Monitoring and online Diagnosis (EPSMD) system for outsourced computing, and then present an improved Multidimensional Range Query Technique (MRQT) that supports a broad range of applications in practice. In addition, a privacy-preserving naive Bayesian classifier based on MRQT is designed to protect patients' data efficiently in data mining and online diagnosis. Security analysis proves that patients' data privacy can be well protected without loss of data confidentiality, and performance evaluation demonstrates the system's efficiency and accuracy in data monitoring and disease pre-diagnosis, respectively.
【Keywords】: Monitoring; Cloud computing; Cryptography; Bayes methods; Data privacy; Privacy
【Paper Link】 【Pages】:828-836
【Authors】: Han Ding ; Jinsong Han ; Chen Qian ; Fu Xiao ; Ge Wang ; Nan Yang ; Wei Xi ; Jian Xiao
【Abstract】: We study a new problem, refined localization, in this paper. Refined localization calculates the location of an object with high precision, given that the object is in a relatively small region such as the surface of a table. Refined localization is useful in many cyber-physical systems such as industrial autonomous robots. Existing vision-based approaches suffer from several disadvantages, including the need for good lighting conditions and line of sight, a pre-learning process, and high computation overhead. Moreover, vision-based approaches cannot differentiate objects with similar colors and shapes. This paper presents a new refined localization system, called Trio, which uses passive Radio Frequency Identification (RFID) tags for low cost and easy deployment. Trio provides a new angle on utilizing RF interference for tag localization by modeling the equivalent circuits of coupled tags. We implement our prototype using commercial off-the-shelf RFID readers and tags. Extensive experimental results demonstrate that Trio effectively achieves high accuracy of refined localization, i.e., < 1 cm errors for several types of mainstream tags.
【Keywords】: Impedance; Radiofrequency identification; Dipole antennas; Manipulators; Integrated circuit modeling
【Paper Link】 【Pages】:837-845
【Authors】: Yanling Bu ; Lei Xie ; Yinyin Gong ; Chuyu Wang ; Lei Yang ; Jia Liu ; Sanglu Lu
【Abstract】: Nowadays, the demand for novel approaches to 2D human-computer interaction has enabled the emergence of a number of intelligent devices, such as the Microsoft Surface Dial. Surface Dial realizes 2D interactions with the computer via simple clicks and rotations. In this paper, we propose RF-Dial, a battery-free solution for 2D human-computer interaction based on RFID tag arrays. We attach an array of RFID tags on the surface of an object, and continuously track the translation and rotation of the tagged object with an orthogonally deployed RFID antenna pair. In this way, we are able to transform an ordinary object like a board eraser into an intelligent HCI device. According to the RF signals from the tag array, we build a geometric model to depict the relationship between the phase variations of the tag array and the rigid transformation of the tagged object, including its translation and rotation. By referring to the fixed topology of the tag array, we are able to accurately extract the translation and rotation of the tagged object during the moving process. Moreover, considering the variation of the phase contours of the RF signals at different positions, we divide the overall scanning area into a linear region and a non-linear region with regard to the relationship between the phase variation and the tag movement, and propose tracking solutions for the two regions, respectively. We implemented a prototype system and evaluated the performance of RF-Dial in a real environment. The experiments show that RF-Dial achieved an average accuracy of 0.6 cm in translation tracking, and an average accuracy of 1.9° in rotation tracking.
【Keywords】: Two dimensional displays; Human computer interaction; Tracking; Radar tracking; Antenna arrays; Radiofrequency identification
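The geometric model builds on the standard RFID backscatter phase equation: for a reader-to-tag distance d and wavelength lambda, the signal travels 2d round trip, so the measured phase is (with device-specific offsets lumped into a single term, as commonly modeled):

```latex
\theta \;=\; \left( \frac{2\pi}{\lambda} \cdot 2d \;+\; \theta_{\mathrm{dev}} \right) \bmod 2\pi
```

Differencing phases across the tags of the array cancels the device offset $\theta_{\mathrm{dev}}$ and exposes the rigid translation and rotation of the known tag topology.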
【Paper Link】 【Pages】:846-854
【Authors】: Youlin Zhang ; Shigang Chen ; You Zhou ; Yuguang Fang
【Abstract】: Radio-frequency identification (RFID) technologies have been widely used in inventory management, object tracking and supply chain management. One of the fundamental system functions is called cardinality estimation, which is to estimate the number of tags in a covered area. We extend the research of this function in two directions. First, we perform joint cardinality estimation among tags that appear at different geographical locations and at different times. Moreover, we collect category-level information, which is more significant in practical scenarios where we need to monitor the tagged objects of many different types. Second, we require anonymity in the process of information gathering in order to preserve the privacy of the tagged objects. These capabilities will enable new applications such as tracking how products are moved in a large, distributed supply network. We propose a novel protocol design to meet the requirements of anonymous category-level joint estimation over multiple tag sets. We formally analyze the performance of our estimator and determine the optimal system parameters. Extensive simulations show that the proposed protocol can efficiently obtain accurate category-level estimation, while preserving tags' anonymity.
【Keywords】: Estimation; Protocols; Radiofrequency identification; Monitoring; Encoding; Conferences; Measurement
【Paper Link】 【Pages】:855-863
【Authors】: Chunhui Duan ; Lei Yang ; Huanyu Jia ; Qiongzheng Lin ; Yunhao Liu ; Lei Xie
【Abstract】: Conventional spinning inspection systems, equipped with separate sensors (e.g., accelerometer, laser, etc.) and communication modules, are either very expensive and/or suffer from occlusion and a narrow field of view. The recently proposed RFID-based sensing solution draws much attention due to its intriguing features, such as cost-effectiveness, applicability to occluded objects, and auto-identification. However, this solution only works in quiet settings where the reader and the spinning object remain absolutely stationary, as their shaking would ruin the periodicity and sparsity of the spinning signal, making it impossible to recover. This work introduces Tagtwins, a robust spinning sensing system that can work in noisy settings. It addresses the challenge by attaching dual RFID tags on the spinning surface and developing a new formulation of the spinning signal that is shaking-resilient, even if the shaking involves unknown trajectories. Our main contribution lies in two newly developed techniques, relative spinning signal and dual compressive reading. We analytically demonstrate that our solution can work in various settings. We have implemented Tagtwins with COTS RFID devices and evaluated it extensively. Experimental results show that Tagtwins can inspect the rotation frequency with high accuracy and robustness.
【Keywords】: RFID; spinning sensing; robust; dual-tag
【Paper Link】 【Pages】:864-872
【Authors】: Bingchuan Tian ; Chen Tian ; Haipeng Dai ; Bingquan Wang
【Abstract】: Datacenter networks are critical to cloud computing. The coflow abstraction is a major leap forward for application-aware network scheduling. In the context of multi-stage jobs, there are dependencies among coflows. As a result, there is a large divergence between coflow completion time (CCT) and job completion time (JCT). To the best of our knowledge, this is the first work that systematically studies how to schedule dependent coflows of multi-stage jobs so that the total weighted job completion time is minimized. We present a formal mathematical formulation. We also prove that this problem is strongly NP-hard. Inspired by the optimal solution of the relaxed linear program, we design an algorithm that runs in polynomial time and solves this problem with an approximation ratio of (2M + 1), where M is the number of machines. Evaluation results demonstrate that the largest gap between our algorithm and the lower bound is only 9.14%. We reduce the average JCT by up to 33.48% compared with Aalo, a heuristic multi-stage coflow scheduler. We reduce the total weighted JCT by up to 83.31% compared with LP-OV-LS, the state-of-the-art approximation algorithm for coflow scheduling.
【Keywords】: Approximation algorithms; Schedules; Processor scheduling; Linear programming; Task analysis; Conferences; Scheduling
【Paper Link】 【Pages】:873-881
【Authors】: Wenxin Li ; Xu Yuan ; Keqiu Li ; Heng Qi ; Xiaobo Zhou
【Abstract】: Coflow scheduling is crucial to improving the communication performance of data-parallel jobs, especially when these jobs run in inter-datacenter networks with limited and heterogeneous link bandwidth. However, prior solutions for coflow scheduling assume the endpoints of the flows in a coflow to be fixed, making them insufficient for optimizing the coflow completion time (CCT). In this paper, we focus on the problem of jointly considering endpoint placement and coflow scheduling to minimize the average CCT of coflows across geo-distributed datacenters. We first develop a mathematical model and formulate a mixed integer linear programming (MILP) problem to characterize the intertwined relationship between endpoint placement and coflow scheduling, and reveal their impact on the average CCT. Then, we present SmartCoflow, a coflow-aware optimization framework, to solve the MILP problem without any prior knowledge of coflow arrivals. In SmartCoflow, we first apply an approximation algorithm to obtain the endpoint placement and scheduling decisions for a single coflow. Based on the single-coflow solution, we then develop an efficient online algorithm to handle dynamically arriving coflows. To validate the efficiency and practical feasibility of SmartCoflow, we implement it as a real-world coflow scheduler based on the Varys open-source framework. Through experimental results from both a small-scale testbed implementation and large-scale simulations, we demonstrate that SmartCoflow achieves significant improvement in the average CCT compared to the state-of-the-art scheduling-only method.
【Keywords】: Task analysis; Processor scheduling; Bandwidth; Mathematical model; Scheduling; Optimal scheduling; Approximation algorithms
【Paper Link】 【Pages】:882-890
【Authors】: Shih-Hao Tseng ; Bo Bai ; John C. S. Lui
【Abstract】: A switching/forwarding fabric with high-bandwidth one-to-one and low-bandwidth many-to-many data forwarding can be achieved by combining electronic packet switches (EPS) and optical circuit switches (OCS). This hybrid solution scales well, but it is not suitable for cloud computing/datacenter applications, which typically rely on one-to-many and many-to-one communications. Recently, composite-path switching (CP-switching), which adds paths between the EPS and the OCS, was introduced to deal with this skewed traffic pattern. The state-of-the-art scheduling algorithm for CP-switches reduces, by heuristics, a CP-switching problem with one composite path to a hybrid switching problem without the composite path, and leverages existing scheduling techniques to tackle the latter problem. Unfortunately, this approach provides neither a performance guarantee nor support for multiple composite paths. In this paper, we systematically study the shortest-time CP-switch scheduling problem with multiple composite paths and show that the problem can be expressed as an optimization problem with sparsity constraints. An LP-based algorithm is derived accordingly, which supports multiple composite paths and, more importantly, provides a performance guarantee. Simulations demonstrate that adding more composite paths can help shorten the schedule, and our approach outperforms existing methods by 30% to 70%.
【Keywords】: Schedules; Optical switches; Approximation algorithms; Bandwidth; Conferences; Scheduling
【Paper Link】 【Pages】:891-899
【Authors】: Luping Wang ; Wei Wang ; Bo Li
【Abstract】: Performance and service isolation are two top objectives for coflow scheduling. However, the common wisdom is that these two objectives often conflict with each other and cannot be achieved simultaneously. Existing coflow scheduling frameworks either focus only on minimizing the average coflow completion time (CCT) (e.g., Varys), or provide optimal isolation between contending coflows by means of fair network sharing (e.g., HUG). In this paper, we make an attempt to achieve the best of both worlds through a novel coflow scheduler, Utopia, which attains near-optimal performance with a provable isolation guarantee. This is particularly challenging given the correlation of bandwidth demands across multiple links from coflows. We show that Utopia is capable of reducing the average CCT dramatically, while still guaranteeing that no coflow will ever be delayed beyond a constant time relative to its CCT under a fair scheme. Both trace-driven simulation and EC2 deployment confirm that Utopia outperforms the fair sharing policy by 1.8× in terms of average CCT, while producing no completion time delay for a single coflow. Even compared with the performance-optimal Varys, Utopia speeds up average coflow completion by 9%.
【Keywords】: Bandwidth; Resource management; Fabrics; Delays; Conferences; Correlation; Production
【Paper Link】 【Pages】:900-907
【Authors】: Yongce Chen ; Yan Huang ; Yi Shi ; Y. Thomas Hou ; Wenjing Lou ; Sastry Kompella
【Abstract】: In recent years, degree-of-freedom (DoF) based models have proven very successful in studying MIMO-based wireless networks. However, most of these studies assume that the channel matrix is of full rank. Such an assumption, although attractive, quickly becomes problematic as the number of antennas increases and the propagation environment is not close to ideal. In this paper, we address this problem by developing a general theory for DoF-based models under rank-deficient conditions. We start with a fundamental understanding of how MIMO's DoFs are consumed for spatial multiplexing (SM) and interference cancellation (IC) in the presence of rank deficiency. Based on this understanding, we develop a general DoF model that can be used for identifying the DoF region of a multi-link MIMO network and for studying DoF scheduling in MIMO networks. Specifically, we find that shared DoF consumption at transmit and receive nodes is critical for optimal allocation of DoFs for IC. The results of this paper serve as an important tool for future research on many-antenna MIMO networks.
【Keywords】: MIMO; rank deficiency; degree of freedom (DoF); interference cancellation
【Paper Link】 【Pages】:908-916
【Authors】: Gam D. Nguyen ; Sastry Kompella ; Clement Kam ; Jeffrey E. Wieselthier ; Anthony Ephremides
【Abstract】: Communication over an interference channel, which is fundamental and pervasive in wireless and wireline environments, is often intended to carry information among different transmitter-receiver pairs. For applications that require time-critical updates, it is desirable to maintain the freshness of the received information, which is quantified by the age metric (unlike the familiar delay metric). In this paper, we consider the case of two transmitter-receiver pairs, and address the impact of interference on information freshness by formulating a two-player “interference” game, in which each player is a transmitter desiring to maintain the freshness of the information updates it sends to its receiver. The strategy of a player is the choice of power level at which it will transmit. We then derive both Nash and Stackelberg strategies for the game. Our analysis shows that the Stackelberg strategy uses less power than the Nash strategy, and that it dominates the Nash strategy (i.e., the Stackelberg total cost function is lower than the Nash total cost function). The obtained Nash and Stackelberg strategies represent desirable user operating points in competitive situations.
【Keywords】: Age of information; information freshness; game theory; interference channel; Nash strategy; Stackelberg strategy
【Paper Link】 【Pages】:917-925
【Authors】: Zhanzhan Zhang ; Yin Sun ; Ashutosh Sabharwal ; Zhiyong Chen
【Abstract】: The robustness of system throughput under scheduling to misreported channel state is a critical issue. In this paper, we analyze the sensitivity of multi-user scheduling performance to channel misreporting in systems with massive antennas. The main result is that for the round-robin scheduler combined with max-min power control, channel magnitude misreporting is harmful to scheduling performance and has a different impact from that seen in purely physical-layer analysis. Specifically, for homogeneous users that have equal average signal-to-noise ratios (SNRs), underreporting is harmful, while overreporting is beneficial to others. For underreporting, the asymptotic rate loss imposed on others is derived, which is tight when the number of antennas is huge. One interesting observation in our research is that the rate loss “periodically” increases and decreases as the number of misreporters grows. For heterogeneous users that have various SNRs, both underreporting and overreporting can degrade scheduler performance. We observe that strong misreporting changes the user grouping decision and hence greatly decreases some users' rates even as others gain rate improvements, while with carefully designed weak misreporting, the scheduling decision remains fixed and the rate loss imposed on others is shown to grow nearly linearly with the number of misreporters.
【Keywords】: MIMO communication; Downlink; Signal to noise ratio; Resource management; Processor scheduling; Antennas; Precoding
【Paper Link】 【Pages】:926-934
【Authors】: Subhramoy Mohanti ; Elif Bozkaya ; M. Yousof Naderi ; Berk Canberk ; Kaushik R. Chowdhury
【Abstract】: Wireless RF energy transfer for indoor sensors is an emerging paradigm that ensures continuous operation without battery limitations. However, high-power radiation within the ISM band interferes with packet reception at existing WiFi devices. This paper proposes the first effort to merge RF energy transfer functions into a standards-compliant 802.11 protocol to realize practical and WiFi-friendly Energy Delivery (WiFED). The WiFED architecture is composed of a centralized controller that coordinates the actions of multiple distributed energy transmitters (ETs), and a number of deployed sensors that periodically request energy from the ETs. The paper first describes the specific 802.11-supported protocol features that can be exploited by sensors to request energy and by the ETs to participate in the energy delivery process. Second, it devises a controller-driven bipartite-matching-based algorithmic solution that assigns the appropriate number of ETs to energy-requesting sensors for an efficient energy transfer process. The proposed in-band and protocol-supported coexistence in WiFED is validated via simulations and partly in a software-defined radio testbed, showing 15% improvement in network lifetime and 31% reduction in charging delay compared to classical nearest-distance-based charging schemes that do not anticipate future energy needs of the sensors and are not designed to coexist with WiFi systems.
【Keywords】: Sensors; Energy exchange; Wireless fidelity; Array signal processing; Receivers; Protocols; Radio frequency
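The assignment step can be illustrated as a bipartite matching solved with the Hungarian algorithm; the harvested-power utility below (decaying with the square of distance) is an assumption for illustration, and the paper's matching additionally anticipates future energy needs, which is not modeled here.

```python
# Sketch of controller-driven ET-to-sensor assignment as a bipartite
# matching; the power model is a toy assumption.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
dist = rng.uniform(1.0, 10.0, size=(4, 6))  # 4 ETs x 6 requesting sensors
harvested = 1.0 / dist**2                   # toy RF power-transfer model

# Maximize total harvested power == minimize its negation.
et_idx, sensor_idx = linear_sum_assignment(-harvested)
for et, s in zip(et_idx, sensor_idx):
    print(f"ET {et} -> sensor {s}, power {harvested[et, s]:.3f}")
```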
【Paper Link】 【Pages】:935-943
【Authors】: Giacomo Calvigioni ; Ramon Aparicio-Pardo ; Lucile Sassatelli ; Jeremie Leguay ; Paolo Medagliani ; Stefano Paris
【Abstract】: The surge of video traffic is a challenge for service providers that need to maximize Quality of Experience (QoE) while optimizing the cost of their infrastructure. In this paper, we address the problem of routing multiple HTTP-based Adaptive Streaming (HAS) sessions to maximize QoE. We first design a QoS-QoE model incorporating different QoE metrics, which is able to learn network variations online and predict their impact for representative classes of adaptation logic, video motion, and client resolution. The different QoE metrics are then combined into a QoE score based on ITU-T Rec. P.1202.2. This rich score is used to formulate the routing problem. We show that, even with a piece-wise linear QoE function in the objective, the routing problem without controlled rate allocation is non-linear. We therefore express a joint routing and rate allocation problem and make it scalable with a dual subgradient approach based on Lagrangian relaxation, where subproblems select a single path for each request with a trivial search, thereby explicitly connecting QoS, QoE, and HAS bitrate. We show with ns-3 simulations that our algorithm provides values for HAS QoE metrics (quality, rebufferings, variation) equivalent to those of the MILP and better than those of QoS-based approaches.
【Keywords】: Quality of experience; Routing; Measurement; Quality of service; Streaming media; Bandwidth; Resource management
【Paper Link】 【Pages】:944-952
【Authors】: Diego N. da Hora ; Karel Van Doorselaer ; Koen Van Oost ; Renata Teixeira
【Abstract】: Poor Wi-Fi quality can disrupt home users' internet experience, or Quality of Experience (QoE). Detecting when Wi-Fi degrades QoE is extremely valuable for residential Internet Service Providers (ISPs), as home users often hold the ISP responsible whenever QoE degrades. Yet, ISPs have little visibility within the home to assist users. Our goal is to develop a system that runs on commodity access points (APs) to assist ISPs in detecting when Wi-Fi degrades QoE. Our first contribution is to develop a method to detect instances of poor QoE based on passive observation of Wi-Fi quality metrics available in commodity APs (e.g., PHY rate). We use support vector regression to build predictors of QoE given Wi-Fi quality for popular internet applications. We then use K-means clustering to combine per-application predictors and identify regions of Wi-Fi quality where QoE is poor across applications. We call samples in these regions poor QoE samples. Our second contribution is to apply our predictors to Wi-Fi metrics collected over one month from 3479 APs of customers of a large residential ISP. Our results show that QoE is good most of the time; still, we find 11.6% of poor QoE samples. Worse, approximately 21% of stations have more than 25% poor QoE samples. In some cases, we estimate that Wi-Fi quality causes poor QoE for many hours, though in most cases poor QoE events are short.
【Keywords】: Wireless fidelity; Quality of experience; Measurement; Streaming media; Quality of service; Degradation; YouTube
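A compact sketch of the two-stage pipeline the abstract describes: per-application support vector regression from Wi-Fi quality metrics to a QoE score, followed by K-means over the metric space to flag regions where predicted QoE is poor for every application. The features, labels, and poor-QoE threshold below are synthetic, not the paper's data.

```python
# Two-stage QoE detection sketch: per-app SVR, then K-means regions.
import numpy as np
from sklearn.svm import SVR
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(2000, 2))   # [normalized PHY rate, retry rate]
apps = {                                 # synthetic per-app QoE labels
    "video": 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(0, 0.2, 2000),
    "web":   4 * X[:, 0] - 1 * X[:, 1] + rng.normal(0, 0.2, 2000),
}

predictors = {name: SVR().fit(X, y) for name, y in apps.items()}

# Cluster the Wi-Fi metric space, then flag clusters whose mean
# predicted QoE is poor for every application.
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)
POOR = 1.5  # assumed threshold on the synthetic QoE scale
for c in range(km.n_clusters):
    mask = km.labels_ == c
    if all(p.predict(X[mask]).mean() < POOR for p in predictors.values()):
        print("poor-QoE region around", km.cluster_centers_[c])
```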
【Paper Link】 【Pages】:953-961
【Authors】: Mengbai Xiao ; Chao Zhou ; Viswanathan Swaminathan ; Yao Liu ; Songqing Chen
【Abstract】: Today, 360-degree video streaming has become a popular Internet service with the rise of affordable virtual reality (VR) technologies. However, streaming 360-degree videos suffers from prohibitive bandwidth demand. Existing bandwidth-efficient solutions mainly focus on exploiting the inherent spatial adaptability of 360-degree videos, delivering only the video content (spatially-cut tiles) in the viewer's region of interest (ROI) at higher quality. Temporal adaptability, which has been widely leveraged in HTTP streaming, has not been well exploited to select proper quality for video segments according to bandwidth variations. When these two dimensions of adaptability are jointly considered, bitrate selection for the tiles becomes more complicated and challenging. The importance of a tile at a given spatial coordinate played at a specific time should be quantified so that we can determine how to allocate bandwidth to improve the viewer's quality of experience. Furthermore, viewer head-orientation prediction is highly variable, which makes the determination of important tiles highly dynamic. In addition, network fluctuations are very common on the Internet. To overcome these challenges, we propose Bi-Adaptive Streaming for 360-degree videos (BAS-360°). In BAS-360°, both spatial and temporal adaptability are explored in the bitrate selection for different tiles. The objective is to minimize bandwidth waste by allocating bandwidth to more important tiles (the tiles that are more likely to be watched). To tackle the high variability of visual region prediction and unpredictable network fluctuations, we employ two features provided by HTTP/2, stream termination and stream priority, to efficiently organize tile delivery. Evaluation results show that BAS-360° outperforms naive tile-based 360-degree video streaming strategies when network fluctuations or errors in viewport prediction occur.
【Keywords】: HTTP streaming; 360-degree video streaming; HTTP/2
【Paper Link】 【Pages】:962-970
【Authors】: Chao Zhou ; Mengbai Xiao ; Yao Liu
【Abstract】: 360-degree video has the potential to transform the video streaming experience by providing a more immersive environment for users to interact with than standard streaming video. This experience is hampered, however, by the high bandwidth requirements resulting from the extra information associated with 360-degree frames. Because users cannot see the full 360-degree view, yet the full view is transmitted in the majority of 360-degree streaming systems, there is much potential to reduce wasted bandwidth in this domain. We propose ClusTile, a tiling approach formulated to select a set of tiles that minimizes the bandwidth needed to stream 360-degree video over an expected set of views. These tiles are selected by solving a set of integer linear programs (ILPs) independently on clusters of collected user views. The clustering approach reduces the computation requirements of the ILPs to practical levels. Tilings computed by ClusTile can save up to 76% bandwidth compared to standard 360-degree streaming and up to 52% bandwidth compared to the best-performing fixed tiling schemes.
【Keywords】: Streaming media; Bandwidth; Standards; Encoding; Bit rate; Conferences; Image segmentation
【Paper Link】 【Pages】:971-979
【Authors】: Bing Leng ; Liusheng Huang ; Chunming Qiao ; Hongli Xu
【Abstract】: The combination of Network Function Virtualization (NFV) and Software Defined Networking (SDN) has great potential for accommodating dynamic network control via cloning/migration of virtualized NFs and steering of traffic flows. A great challenge is that the proprietary internal NF state information is unavailable to the control system (including the SDN controller and NFV orchestrator), which may lead to incorrect packet/flow processing at newly created NF instances. In this work, we design a lightweight approach that can function either independently or as a plug-in to the network control system to reveal internal NF states. Unlike previous work, we propose to learn internal NF states through normal network functions instead of designing extra APIs for certain NFs. Moreover, we propose a feasible way to detect state violations and even correct them automatically. Our approach is tested through experiments, and the results confirm its efficiency and practicability.
【Keywords】: Noise measurement; IP networks; Networked control systems; Switches; Routing; Conferences
【Paper Link】 【Pages】:980-988
【Authors】: Jaehyun Nam ; Hyeonseong Jo ; Yeonkeun Kim ; Phillip A. Porras ; Vinod Yegneswaran ; Seungwon Shin
【Abstract】: As the network operating system (NOS) is the strategic control center of a software-defined network (SDN), its design is critical to the welfare of the network. Contemporary research has largely focused on specialized NOSs that seek to optimize controller design across one or a few dimensions (e.g., scalability, performance, or security) due to fundamental differences in the architectural trade-offs needed to support competing demands. We thus designed Barista, a new framework that enables flexible and customizable instantiations of NOSs supporting diverse design choices. The Barista framework incorporates two mechanisms to harmonize architectural differences across design choices: component synthesis and dynamic event control. First, the modular design of the Barista framework enables flexible composition of functionalities prevalent in contemporary SDN controllers. Second, its event-handling mechanism enables dynamic adjustment of control flows in a NOS. These capabilities allow operators to easily enable functionalities and dynamically handle associated events, thereby satisfying network operating requirements. Our results demonstrate that Barista can synthesize NOSs with many functionalities found in commodity NOSs, with competitive performance profiles.
【Keywords】: Security; Scalability; Conferences; Network operating systems; Computer architecture; Robustness
【Paper Link】 【Pages】:989-997
【Authors】: Roberto Iraja Tavares da Costa Filho ; William Lautenschlager ; Nicolas Kagami ; Marcelo Caggiani Luizelli ; Valter Roesler ; Luciano Paschoal Gaspary
【Abstract】: To deal with the massive traffic produced by video applications, mobile operators rely on offloading technologies such as Small Cells, Content Delivery Networks and, shortly, Cloud Edge and 5G Device-to-Device communications. Although these techniques are fundamental for improving network efficiency, they produce a multitude of paths onto which user traffic can be forwarded. Thus, a critical problem arises: how to handle the increasing video traffic while managing the interplay between infrastructure optimization and the user's Quality of Experience (QoE). Solving this problem is remarkably difficult, and recent investigations do not consider the large-scale context of mobile operator networks. To address this issue, we present a novel QoE-aware path deployment scheme for large-scale SDN-based mobile networks. The scheme relies on both a polynomial-time algorithm for composing multiple QoS metrics and a scalable QoS-to-QoE translation strategy. Using real mobile operator network and video traffic traces, we show that the proposed algorithm outperforms state-of-the-art approaches, reducing videos impaired in aggregate MOS by at least 37% and lowering accumulated video stall length by a factor of four.
【Keywords】: Quality of experience; Quality of service; Streaming media; Delays; Loss measurement; Throughput
【Paper Link】 【Pages】:998-1006
【Authors】: Chunhui He ; Xinyu Feng
【Abstract】: SDN programming has been challenging because programmers have to not only implement the control logic, but also handle low-level details such as the generation of flow tables and the communication between the controller and switches. The new generation of SDN, with protocol-oblivious forwarding and multi-table pipelining, introduces even more low-level details to consider. We propose POMP, the first SDN programming environment supporting both protocol-oblivious forwarding and automatic multi-table pipelining. POMP applies static taint analysis to automatically infer compact and efficient multi-table pipelines from a data-plane-agnostic network policy written by the programmer. The runtime system tracks the execution of the network policy and automatically generates table entries. POMP also introduces a novel notion of dependent labels in the taint analysis, which, combined with runtime information about the network policy, can further reduce the number of table entries. Like P4, POMP supports protocol-oblivious programming by providing a network protocol specification language. Packet parsers can be automatically generated based on the protocol specification. POMP supports the two main emerging SDN platforms, POF and P4; therefore, network policies written in POMP are portable over any switches supporting POF or P4.
【Keywords】: Protocols; Pipeline processing; Runtime; Programming; Optical fibers; Control systems; Pipelines
【Paper Link】 【Pages】:1007-1015
【Authors】: Xinyu Wu ; Xiaohua Tian ; Xinbing Wang
【Abstract】: Cellular network positioning is a mandatory requirement for localizing emergency callers, such as E911 in North America. Although smartphones normally carry GPS modules, a large number of users still have basic cell phones only, and GPS can be ineffective in urban-canyon environments. To this end, the fingerprinting positioning mechanism has been incorporated into the LTE architecture by 3GPP, where the major challenge is collecting geo-tagged wireless fingerprints over vast areas. This paper proposes to utilize a subspace identification approach for large-scale wireless fingerprint prediction. We formulate it as the problem of finding the optimal subspace over the Stiefel manifold, and redesign the Stiefel-manifold optimization method for a fast convergence rate. Moreover, we propose a sliding-window mechanism for the practical large-scale fingerprint prediction scenario, where fingerprints are unevenly distributed over the area. Combining the two proposed mechanisms yields an efficient method for large-scale fingerprint prediction at the city level. Further, we validate our theoretical analysis and proposed mechanisms through experiments with real mobile data, which show that the resulting localization accuracy and reliability with our predicted fingerprints exceed the E911 requirement.
【Keywords】: Manifolds; Cellular networks; Global Positioning System; Optimization; Prediction algorithms; Wireless communication; Smart phones
【Paper Link】 【Pages】:1016-1024
【Authors】: Suining He ; Kang G. Shin
【Abstract】: Mobile crowdsensing, with the increasing pervasiveness of smartphones, has enabled a myriad of applications, including urban-scale signal map monitoring and revision. Despite the importance of signal-map quality, dense crowdsensing over a large site is neither cost-effective nor convenient for participants, making it critical and challenging to balance signal quality against crowdsourcing cost. To address this problem, we propose a novel incentive mechanism, BCCS, based on Bayesian Compressive Crowdsensing (BCS). BCCS iteratively determines the spatial grids to sense for crowdsourcing quality and predicts the remaining unexplored grids for deployment efficiency. BCS returns not only the predicted signal values, but also confidence intervals for convergence and incentive control. A probabilistic user participation and measurement model is applied in the incentive design, which is flexible for crowdsensing deployment. Our extensive evaluation based on two different data sets shows that BCCS achieves much higher prediction accuracy (often by more than 20%) with lower payments to the participants and fewer iterations (often by 30%) than existing solutions.
【Keywords】: Crowdsourcing; Sensors; Compressed sensing; Task analysis; Bayes methods; Monitoring; Correlation
【Paper Link】 【Pages】:1025-1033
【Authors】: Sihua Shao ; Abdallah Khreishah ; Issa Khalil
【Abstract】: Indoor localization is very important for enabling Internet-of-Things (IoT) applications. Visible light communication (VLC)-based indoor localization approaches enjoy many advantages, such as utilization of the existing ubiquitous lighting infrastructure, high location and orientation accuracy, and no interruption to RF-based devices. However, existing VLC-based localization methods lack a real-time backward channel from the device to landmarks and necessitate computation at the device, which makes them unsuitable for real-time tracking of small IoT devices. In this paper, we propose and prototype a retroreflector-based visible light localization system (RETRO) that establishes an almost zero-delay backward channel by using a retroreflector to reflect light back to its source. RETRO localizes passive IoT devices without requiring computation or heavy sensing (e.g., a camera) at the devices. Multiple photodiodes (i.e., landmarks) are mounted on any single unmodified light source to sense the retroreflected optical signal (i.e., the location signature). We theoretically derive a closed-form expression relating the reflected optical power to the location and orientation of the retroreflector, and validate the theory by experiments. The characterization of received optical power is applied in a received-signal-strength-indicator and trilateration based localization algorithm. Extensive experiments demonstrate centimeter-level location accuracy and single-digit angular error.
【Keywords】: Indoor localization; retroreflector; visible light
【Paper Link】 【Pages】:1034-1042
【Authors】: Tao Li ; Yimin Chen ; Rui Zhang ; Yanchao Zhang ; Terri Hedgpeth
【Abstract】: Indoor positioning systems (IPSes) can enable many location-based services in large indoor environments where GPS is not available or reliable. Mobile crowdsourcing is widely advocated as an effective way to construct IPS maps. This paper presents the first systematic study of security issues in crowdsourced WiFi-based IPSes to promote security considerations in designing and deploying crowdsourced IPSes. We identify three attacks on crowdsourced WiFi-based IPSes and propose the corresponding countermeasures. The efficacy of both the attacks and our countermeasures is experimentally validated on a prototype system. The attacks and countermeasures can be easily extended to other crowdsourced IPSes.
【Keywords】: IP networks; Databases; Crowdsourcing; Prototypes; Wireless fidelity; Indoor environments; Security
【Paper Link】 【Pages】:1043-1051
【Authors】: Angelo Trotta ; Fabio D'Andreagiovanni ; Marco Di Felice ; Enrico Natalizio ; Kaushik Roy Chowdhury
【Abstract】: This paper proposes a network architecture and supporting optimization framework that allow Unmanned Aerial Vehicles (UAVs) to perform city-scale video monitoring of a set of Points of Interest (PoI). Our approach is systems-driven, relying on experimental studies to identify the permissible number of hops for multi-UAV video relaying in a noisy 3-D environment. Our architecture is innovative in that it defines a mathematical framework for selecting UAVs for periodic re-charging by landing on public transportation buses, and then "riding" the bus to the next chosen PoI. Specifically, we show that our UAV scheduler can be modeled as an instance of the multicommodity flow problem and solved through Mixed Integer Linear Programming (MILP) techniques. Thus, our centralized formulation identifies the UAV, the next bus, and the next PoI, given information about energy thresholds, the bus routes in the city, and their next arrival times, to ensure persistent and reliable video coverage of all PoIs in the city. Finally, our work is validated via emulation of a city environment with live traffic updates from a real bus transportation network.
【Keywords】: Urban areas; Relays; Streaming media; Monitoring; Unmanned aerial vehicles; Batteries; Charging stations
【Paper Link】 【Pages】:1052-1060
【Authors】: Linsong Cheng ; Jiliang Wang
【Abstract】: Nowadays, video surveillance systems are widely deployed in various places, e.g., schools, parks, airports, roads, etc. However, existing video surveillance systems are far from fully utilized due to the high computation overhead of video processing. In this work, we present ViTrack, a framework for efficient multi-video tracking that uses computation resources on the edge for commodity video surveillance systems. At the heart of ViTrack lies a two-layer spatial/temporal compressive target detection method that significantly reduces computation overhead by combining videos from multiple cameras. Further, ViTrack derives the video relationship and camera information even in the absence of camera location, direction, etc. To address varying video quality and missing targets, ViTrack leverages a Markov-model-based approach to efficiently recover missing information and finally derive the complete trajectory. We implement ViTrack on a deployed video surveillance system with 110 cameras. The experimental results demonstrate that ViTrack provides efficient trajectory tracking with processing time 45x lower than the existing approach. For 110 video cameras, ViTrack can run on a Dell OptiPlex 390 computer to track given targets in almost real time. We believe ViTrack can enable practical video analysis for widely deployed commodity video surveillance systems.
【Keywords】: Cameras; Video surveillance; Trajectory; Image edge detection; Object detection; Markov processes; Conferences
【Paper Link】 【Pages】:1061-1069
【Authors】: Lin Yang ; Wei Wang ; Zeyu Wang ; Qian Zhang
【Abstract】: Since mobile cameras are small and easy to conceal, existing anti-piracy solutions are inefficient at defeating mobile-camera-based piracy, which thus remains a serious threat to the copyright of intellectual property. This paper presents Rainbow, a low-cost lighting system that prevents mobile-camera-based piracy attacks on intellectual property in the physical world, e.g., art paintings. By embedding invisible illuminance flickers and chromatic changes into the light, our system can significantly degrade the imaging quality of a camera while maintaining a good visual experience for human eyes. Extensive objective evaluations under different scenarios demonstrate that Rainbow is robust to different confounding factors and can effectively defeat piracy attacks on various mobile devices. Subjective tests on volunteers further show that our system not only significantly pollutes pirated photos but also provides good lighting conditions.
【Keywords】: Cameras; Distortion; Fading channels; Visualization; Lighting; Delays; Sensors
【Paper Link】 【Pages】:1070-1078
【Authors】: Cheng Zhang ; Fan Yang ; Gang Li ; Qiang Zhai ; Yi Jiang ; Dong Xuan
【Abstract】: Recently, intelligent sports analytics has become a hot area in both industry and academia for coaching, practice, and tactical and technical analysis. With the growing trend of bringing sports analytics to live broadcasting, sports robots, and common playfields, a low-cost system that is easy to deploy and performs real-time, accurate sports analytics is highly desirable. However, existing systems, such as Hawk-Eye, cannot satisfy these requirements due to various factors. In this paper, we present MV-Sports, a cost-effective system for real-time sports analysis based on motion and vision sensor integration. Taking tennis as a case study, we aim to recognize player shot types and measure ball states. For fine-grained player action recognition, we leverage the motion signal for fast action highlighting and propose a long short-term memory (LSTM)-based framework to integrate MV data for training and classification. For ball state measurement, we compute the initial ball state via motion sensing and devise an extended Kalman filter (EKF)-based approach that combines ball-motion physics-based tracking and vision positioning-based tracking to obtain a more accurate ball state. We implement MV-Sports on commercial off-the-shelf (COTS) devices and conduct real-world experiments to evaluate the performance of our system. The results show our approach achieves accurate player action recognition and ball state measurement with sub-second latency.
【Keywords】: Tracking; Real-time systems; Sensors; Cameras; Position measurement; Motion segmentation
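To make the sensor-fusion step above concrete, here is a minimal sketch of a physics-plus-vision tracker in Python. It uses a plain linear Kalman filter (constant velocity plus gravity) rather than the paper's full EKF, and all noise magnitudes and the 120 Hz rate are illustrative assumptions:

    import numpy as np

    def track_ball(x0, vision_positions, dt=1.0 / 120):
        """Fuse physics-based prediction with vision-based 3-D position fixes.
        x0: initial state [x, y, z, vx, vy, vz] (e.g., from motion sensing).
        Simplified linear stand-in for the paper's EKF; noise levels assumed."""
        F = np.eye(6)
        F[:3, 3:] = dt * np.eye(3)                    # position integrates velocity
        g = np.array([0, 0, 0, 0, 0, -9.81 * dt])     # gravity acts on vz
        H = np.hstack([np.eye(3), np.zeros((3, 3))])  # vision observes position only
        Q = 1e-3 * np.eye(6)                          # process noise (assumed)
        R = 1e-2 * np.eye(3)                          # vision noise (assumed)
        x, P = np.asarray(x0, float), np.eye(6)
        for z in vision_positions:
            x, P = F @ x + g, F @ P @ F.T + Q         # physics-based prediction
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
            x = x + K @ (z - H @ x)                   # vision-based correction
            P = (np.eye(6) - K @ H) @ P
        return x                                      # final ball-state estimate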
【Paper Link】 【Pages】:1079-1087
【Authors】: Jianwei Qian ; Feng Han ; Jiahui Hou ; Chunhong Zhang ; Yu Wang ; Xiang-Yang Li
【Abstract】: Privacy-preserving data publishing has been a heated research topic in the last decade. Numerous ingenious attacks on users' privacy and defensive measures have been proposed for the sharing of various data, ranging from relational data, social network data, and spatiotemporal data to images and videos. Speech data publishing, however, remains untouched in the literature. To fill this gap, we study the privacy risk in speech data publishing and explore the possibilities of performing data sanitization to achieve privacy protection while simultaneously preserving data utility. We formulate this optimization problem in a general fashion and present thorough quantifications of privacy and utility. We analyze the sophisticated impacts of possible sanitization methods on privacy and utility, and design a novel method, key term perturbation, for speech content sanitization. A heuristic algorithm is proposed to personalize the sanitization for speakers, restricting their privacy leakage (p-leak limit) while minimizing the utility loss. Simulations of linkage attacks and sanitization on real datasets validate the necessity and feasibility of this work.
【Keywords】: Data privacy; Privacy; Couplings; Publishing; Spectrogram; Computer science; Optimization
【Paper Link】 【Pages】:1088-1096
【Authors】: Shaowei Wang ; Liusheng Huang ; Yiwen Nie ; Pengzhan Wang ; Hongli Xu ; Wei Yang
【Abstract】: Set-valued data is useful for representing a rich family of information in numerous areas, such as market basket data from online shopping, apps on mobile phones, and web browsing history. By analyzing set-valued data collected from users, service providers can learn the demographics of the users and the patterns of their usage, and ultimately improve the quality of services for them. However, privacy has been an increasing concern in collecting and analyzing users' set-valued data, since these data may reveal sensitive information (e.g., identities, preferences, and diseases) about individuals. In this work, we propose a privacy-preserving aggregation mechanism for set-valued data: PrivSet. It provides rigorous data privacy protection locally (e.g., on mobile phones or wearable devices) and efficiently (its computational overhead is linear in the item domain size) for each user, while allowing effective statistical analyses (e.g., distribution estimation of items, distribution estimation of set cardinality) on set-valued data for service providers. More specifically, in PrivSet, within the constraints of local ε-differential privacy, each user independently responds with a subset of the set-valued data domain with calibrated probabilities, so that the true-positive/false-positive rate of each item is balanced and the performance of distribution estimation is optimized. Besides presenting theoretical error bounds for PrivSet and proving its optimality over existing approaches, we experimentally validate the mechanism; the experimental results show that the estimation error of PrivSet is reduced by half compared to state-of-the-art approaches.
【Keywords】: Estimation; Data models; Servers; Data aggregation; Itemsets
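For readers unfamiliar with local differential privacy over set-valued data, the sketch below shows a simplified per-item randomized-response baseline with an unbiased frequency estimator. It is a stand-in for intuition only: the actual PrivSet mechanism instead responds with a calibrated random subset of the domain to balance true/false positive rates.

    import math, random

    def perturb_set(user_items, domain, epsilon):
        """Flip each item's membership bit independently (epsilon-LDP per bit).
        NOT PrivSet's subset-response mechanism, only a simple baseline."""
        p = math.exp(epsilon) / (math.exp(epsilon) + 1)  # keep-truth probability
        report = set()
        for item in domain:
            truth = item in user_items
            reported = truth if random.random() < p else not truth
            if reported:
                report.add(item)
        return report

    def estimate_item_frequency(reports, item, epsilon):
        """Debias the observed frequency of one item across all noisy reports."""
        p = math.exp(epsilon) / (math.exp(epsilon) + 1)
        observed = sum(item in r for r in reports) / len(reports)
        return (observed - (1 - p)) / (2 * p - 1)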
【Paper Link】 【Pages】:1097-1105
【Authors】: Ning Zhang ; Kun Sun ; Deborah Shands ; Wenjing Lou ; Y. Thomas Hou
【Abstract】: With the emergence of the Internet of Things, mobile devices are generating more network traffic than ever. TrustZone is a hardware-enabled trusted execution environment for ARM processors. While TrustZone is effective in providing the much-needed memory isolation, we observe that it is possible to derive secret information from the secure world via cache contention, due to its high-performance cache-sharing design. In this work, we propose TruSense to study the timing-based cache side-channel information leakage of TrustZone. TruSense can be launched not only from the normal-world operating system but also from a non-privileged user application. Without access to the virtual-to-physical address mapping in user applications, we devise a novel method that uses the expected channel statistics to allocate memory for cache probing. We also show how an attacker might use the less accurate performance-event interface as a timer. Using the T-table based AES implementation in OpenSSL 1.0.1f as an example, we demonstrate how a normal-world attacker can steal fine-grained secrets from the secure world. We also discuss possible mitigations for the information leakage.
【Keywords】: Side-channel attacks; Kernel; Program processors; Timing; Hardware
【Paper Link】 【Pages】:1106-1114
【Authors】: Jinxue Zhang ; Jingchao Sun ; Rui Zhang ; Yanchao Zhang ; Xia Hu
【Abstract】: User-generated social media data are exploding and in high demand in public and private sectors. The disclosure of intact social media data exacerbates the threats to user privacy. In this paper, we first identify a text-based user-linkage attack on current data outsourcing practices, in which the real users in an anonymized dataset can be pinpointed based on the users' unprotected text data. Then we propose the first framework in the literature for differentially private social media data outsourcing. Within our framework, social media data service providers can outsource perturbed datasets to provide users differential privacy while offering high data utility to social media data consumers. Our differential privacy mechanism is based on a novel notion of ε-text indistinguishability, which we propose to thwart the text-based user-linkage attack. Extensive experiments on real-world and synthetic datasets confirm that our framework can enable high-level differential privacy protection along with high data utility.
【Keywords】: Data privacy; Outsourcing; Privacy; Twitter; Companies
【Paper Link】 【Pages】:1115-1123
【Authors】: Han Ding ; Jinsong Han ; Yanyong Zhang ; Fu Xiao ; Wei Xi ; Ge Wang ; Zhiping Jiang
【Abstract】: As Ultra High Frequency (UHF) passive Radio Frequency IDentification (RFID) technology becomes increasingly deployed, it faces an array of new security attacks. In this paper, we consider a type of attack in which a malicious RFID reader can arbitrarily modify tags via standard commands, e.g., altering IDs or other data in tag memory. To deal with this type of attack, we propose a physical-layer RF-signal based reader authentication solution, namely Arbitrator, which passively listens on RF channels, analyzes the communication signals, identifies unauthorized readers, and jams the commands from such readers. Our solution does not require modifying RFID devices or the underlying communication standards, and is hence fully compatible with the existing RFID infrastructure. In this study, we have implemented a prototype of Arbitrator on the Universal Software Radio Peripheral (USRP) platform and conducted extensive experiments to evaluate its performance. Our results show that Arbitrator detects unauthorized RFID readers with high accuracy, and thus effectively curbs unauthorized access attacks.
【Keywords】: Password; Radiofrequency identification; Encoding; Jamming; Feature extraction; Authentication
【Paper Link】 【Pages】:1124-1132
【Authors】: Xiulong Liu ; Jiannong Cao ; Keqiu Li ; Jia Liu ; Xin Xie
【Abstract】: This paper takes the first step in studying the problem of range query for sensor-augmented RFID systems, which is to classify target tags according to the range of their tag information. Related schemes that might address this problem suffer from either low time-efficiency or information corruption. To overcome their limitations, we first propose a basic classification protocol called Range Query (RQ), in which each tag pseudo-randomly chooses a slot in the time frame and uses ON-OFF keying modulation to reply with its range identifier. RQ then employs a collaborative decoding method to extract the tag information range even from collision slots. The numerical results reveal that the number of queried ranges significantly affects the performance of RQ. To optimize the number of queried ranges, we further propose the Partition&Mergence (PM) approach, which consists of two steps: top-down partitioning and bottom-up merging. Thorough theoretical analyses are provided to optimize the involved parameters, thereby minimizing the time cost of RQ+PM. The prominent advantages of RQ+PM over previous schemes are two-fold: (i) it is able to make use of the collision slots, which are treated as useless in previous schemes; (ii) it is immune to interference from unexpected tags. We use USRP devices and WISP tags to conduct a set of experiments, which demonstrate the feasibility of RQ+PM. Moreover, extensive simulation results reveal that RQ+PM ensures 100% query accuracy while reducing the time cost by as much as 40% compared with existing schemes.
【Keywords】: RFID; Sensor-augmented Tags; Range Query
【Paper Link】 【Pages】:1133-1141
【Authors】: Jihong Yu ; Wei Gong ; Jiangchuan Liu ; Lin Chen
【Abstract】: Searching for a particular group of tags in an RFID system is a key service in such important Internet-of-Things applications as inventory management. When the system scale is large, with a massive number of tags, deterministic search can be prohibitively expensive, and probabilistic search has been advocated, seeking a balance between reliability and time efficiency. Given a failure probability 1/O(K), where K is the number of tags, state-of-the-art solutions have achieved a time cost of O(K log K) through multi-round hashing and verification. Further improvement, however, faces a critical bottleneck of repetitively verifying each individual target tag in each round. In this paper, we present a novel Tree-based Tag Search (TTS) that approaches O(K) through batched verification. TTS smartly hashes multiple tags into each internal tree node and adaptively controls the node degrees. It conducts a bottom-up search to verify tags group by group, with the number of groups decreasing rapidly. We derive the optimal hash code length and node degrees to accommodate hash collisions, and demonstrate the superiority of TTS through both theoretical analysis and extensive simulations. In particular, we show that, with increasing reliability demand and system size, TTS achieves an even higher performance gain, making it a highly scalable solution.
【Keywords】: Reliability; Radiofrequency identification; Search problems; Servers; Probabilistic logic; Conferences; RF signals
【Paper Link】 【Pages】:1142-1150
【Authors】: Ziling Zhou ; Binbin Chen
【Abstract】: For many applications that use RFID technology, it is important to count the number of RFID tags accurately. However, the wireless channel between the RFID tags and readers can introduce communication errors, and the error rate may vary significantly over time. No existing protocol can perform RFID counting robustly (i.e., maintaining the estimation quality) over time-varying channels. In this paper, we design RRC, a Robust RFID Counting protocol that offers provable guarantees on estimation quality over time-varying channels. Specifically, regardless of how the communication errors occur, the final output generated by RRC is always a standard (ε, δ)-estimate of the correct count n. Furthermore, the expected amount of time needed by RRC is O(Y + 1/ε² + (log log n)²) for a constant δ, where Y is the number of communication errors encountered by RRC. This makes the efficiency of RRC asymptotically near-optimal.
【Keywords】: Protocols; Robustness; Estimation; Error analysis; Time-varying channels; RFID tags
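Restated in standard notation (with the symbols that the extraction garbled above restored under our reading of the abstract), the guarantee is:

    \Pr\big[\,|\hat{n} - n| \le \epsilon n\,\big] \ge 1 - \delta,
    \qquad
    \mathbb{E}[\text{time}] = O\!\left(Y + \frac{1}{\epsilon^{2}} + (\log\log n)^{2}\right)
    \quad \text{for constant } \delta,

where Y counts the communication errors encountered during the execution.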
【Paper Link】 【Pages】:1151-1159
【Authors】: Xinyu Wu ; Zhongzhao Hu ; Xinzhe Fu ; Luoyi Fu ; Xinbing Wang ; Songwu Lu
【Abstract】: The advent of social networks poses severe threats to user privacy, as adversaries can de-anonymize users' identities by mapping them to correlated cross-domain networks. Without a ground-truth mapping, prior literature proposes various cost functions in the hope of measuring the quality of mappings. However, there is generally a lack of rationale behind these cost functions, whose minimizers also remain algorithmically unknown. We jointly tackle the above concerns under a more practical social network model parameterized by overlapping communities, which, neglected by prior art, can serve as side information for de-anonymization. Regarding the unavailability of the ground-truth mapping to adversaries, by virtue of the Minimum Mean Square Error (MMSE), our first contribution is a well-justified cost function minimizing the expected number of mismatched users over all possible true mappings. While proving the NP-hardness of minimizing MMSE, we validly transform it into the weighted-edge matching problem (WEMP), which, as disclosed theoretically, resolves the tension between optimality and complexity: (i) WEMP asymptotically returns a negligible mapping error for large network sizes under mild conditions facilitated by higher overlapping strength; (ii) WEMP can be algorithmically characterized via the convex-concave based de-anonymization algorithm (CBDA), which finds the optimum of WEMP. Extensive experiments further confirm the effectiveness of CBDA under overlapping communities: on average 90% of users are re-identified in the rare true cross-domain co-author networks when communities overlap densely, and the re-identification ratio improves by roughly 70% compared to non-overlapping cases.
【Keywords】: Social network services; Cost function; Privacy; Conferences; Estimation; Computer science; Art
【Paper Link】 【Pages】:1160-1168
【Authors】: Yuqing Zhu ; Deying Li
【Abstract】: We study the problem of maximizing the profit for a social network host who offers viral marketing to multiple company campaigners. Each campaigner has her special interest in social network users, and she pays the host a commission when a target user adopts her product. The campaigners decide the cost they are willing to pay for viral marketing, and the host collects the cost from all campaigners and uses it for marketing. We call our optimization problem Competitive PROfit maximization for the host (CPro); its solution is the seed allocation for campaigners such that the profit (commission) the campaigners give back to the host is maximized. CPro is NP-hard with a non-monotone and non-submodular objective function, which means existing techniques for influence or profit maximization cannot give guaranteed performance. To solve this issue, we design an efficient approximation algorithm that works on billion-scale networks, and more importantly, we give the performance bound of our algorithm. As far as we know, this is the first bounded, scalable approximation algorithm for competitive profit maximization. A comprehensive set of experiments is conducted on various real networks, with up to several billion edges, from diverse disciplines, and our solution identifies the top choices for the host in only a few minutes on a network containing 1.5 billion edges.
【Keywords】: Scalable Algorithm; Profit Maximization; Competitive influence; Viral Marketing
【Paper Link】 【Pages】:1169-1177
【Authors】: Vineeth S. Varma ; Irinel-Constantin Morarescu ; Yezekael Hayel
【Abstract】: This paper proposes and analyzes a stochastic multiagent opinion dynamics model. We are interested in the multi-leveled opinion of each agent, which is randomly influenced by the binary actions of its neighbors. It is shown that, as long as the number of agents in the network is finite, the model asymptotically produces consensus. The consensus value corresponds to one of the absorbing states of the associated Markov system. However, when the number of agents is large, we emphasize that partial agreements are reached and these transient states are metastable, i.e., their expected persistence duration is arbitrarily large. These states are characterized using an N-intertwined mean field approximation (NIMFA) of the Markov system. Numerical simulations validate the proposed analysis.
【Keywords】: Opinion dynamics; Social computing and networks; Markov chains; agent based models
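A toy simulation conveys the flavor of the model; the specific update rule below (move one opinion level toward a random neighbor's binary action) is our illustrative assumption, not the paper's exact dynamics:

    import numpy as np

    def simulate_opinions(A, levels=5, steps=100000, seed=0):
        """A: binary adjacency matrix; each agent holds an opinion level in
        {0, ..., levels-1} and acts (binary action 1) iff its opinion lies in
        the upper half. Returns the opinion vector after `steps` updates."""
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        x = rng.integers(0, levels, size=n)
        for _ in range(steps):
            i = rng.integers(n)                  # pick a random agent
            nbrs = np.flatnonzero(A[i])
            if nbrs.size == 0:
                continue
            action = 1 if x[rng.choice(nbrs)] >= levels / 2 else 0
            x[i] = min(x[i] + 1, levels - 1) if action else max(x[i] - 1, 0)
        return x                                 # an all-equal vector signals consensus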
【Paper Link】 【Pages】:1178-1186
【Authors】: Jing Tang ; Xueyan Tang ; Junsong Yuan
【Abstract】: Online Social Networks (OSNs) attract billions of users to share information and communicate, and viral marketing has emerged on them as a new way to promote product sales. An OSN provider is often hired by an advertiser to conduct viral marketing campaigns. The OSN provider generates revenue from the commission paid by the advertiser, which is determined by the spread of its product information. Meanwhile, to propagate influence, the activities performed by users, such as viewing video ads, normally induce diffusion cost to the OSN provider. In this paper, we aim to find a seed set that optimizes a new profit metric combining the benefit of influence spread with the cost of influence propagation for the OSN provider. Under many diffusion models, our profit metric is the difference between two submodular functions, which is challenging to optimize as it is neither submodular nor monotone. We design a general two-phase framework to select seeds for profit maximization and develop several bounds to measure the quality of the seed set constructed. Experimental results with real OSN datasets show that our approach can achieve high approximation guarantees and significantly outperform the baseline algorithms, including state-of-the-art influence maximization algorithms.
【Keywords】: Measurement; Social network services; Diffusion processes; Conferences; Electronic mail; Approximation algorithms; Integrated circuit modeling
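To see why this objective is hard, consider the obvious greedy baseline for f(S) = spread(S) - cost(S) sketched below; since f is neither monotone nor submodular, this heuristic carries no approximation guarantee, which is precisely the gap the paper's two-phase framework and quality bounds address. The spread and cost callables are placeholders (e.g., Monte Carlo influence estimates under a diffusion model).

    def greedy_profit(nodes, spread, cost):
        """Repeatedly add the node with the largest marginal profit until no
        addition helps. `spread(S)` and `cost(S)` are assumed callables."""
        S, best = set(), spread(set()) - cost(set())
        while True:
            candidates = {v: spread(S | {v}) - cost(S | {v})
                          for v in set(nodes) - S}
            if not candidates:
                break
            v, f_v = max(candidates.items(), key=lambda kv: kv[1])
            if f_v <= best:      # no node improves the profit; stop
                break
            S.add(v)
            best = f_v
        return S, best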
【Paper Link】 【Pages】:1187-1195
【Authors】: Chaojie Gu ; Rui Tan ; Xin Lou ; Dusit Niyato
【Abstract】: Separation of control and data planes (SCDP) is a desirable paradigm for low-power multi-hop wireless networks requiring high network performance and manageability. Existing SCDP networks generally adopt an in-band control plane scheme in which the control-plane messages are delivered by their data-plane networks. The physical coupling of the two planes may lead to undesirable consequences. To advance network architecture design, we propose to leverage the long-range communication capability of increasingly available low-power wide-area network (LPWAN) radios to form one-hop out-of-band control planes. We choose LoRaWAN, an open, inexpensive, ISM-band LPWAN radio, to prototype our out-of-band control plane, called LoRaCP. Several characteristics of LoRaWAN, such as downlink-uplink asymmetry and the primitive ALOHA media access control (MAC), present challenges to achieving reliability and efficiency. To address these challenges, we design a TDMA-based multi-channel MAC featuring an urgent channel and negative acknowledgment. On a testbed of 16 nodes, we demonstrate applying LoRaCP to physically separate the control-plane network of the Collection Tree Protocol (CTP) from its ZigBee-based data-plane network. Extensive experiments show that LoRaCP increases CTP's packet delivery ratio from 65% to 80% in the presence of external interference, while consuming an average per-node radio power of only 2.97 mW.
【Keywords】: Routing; Spread spectrum communication; Protocols; Wireless networks; Wireless sensor networks; Network interfaces
【Paper Link】 【Pages】:1196-1204
【Authors】: Dongxiao Yu ; Yong Zhang ; Yuyao Huang ; Hai Jin ; Jiguo Yu ; Qiang-Sheng Hua
【Abstract】: In this paper, we present the first algorithm for exactly implementing the abstract MAC (absMAC) layer in the physical SINR model. The absMAC layer, first presented by Kuhn et al. [15], provides reliable local broadcast communication, with timing guarantees stated in terms of a collection of abstract delay functions, such that high-level algorithms can be designed in terms of these functions, independent of specific channel behavior. Implementing the absMAC layer means designing a distributed algorithm for the local broadcast communication primitives over a particular communication model that defines concrete channel behaviors, with the objective of minimizing the bounds of the abstract delay functions. Halldórsson et al. [10] have shown that in the standard SINR model (synchronous communication, without physical carrier sensing or location information), there cannot be efficient exact implementations. In this work, we show that physical carrier sensing, a function commonly performed by wireless devices, can enable efficient exact implementation algorithms. Specifically, we propose an algorithm that exactly implements the absMAC layer and provides asymptotically optimal bounds for both the acknowledgement and progress functions defined in the absMAC layer. Our algorithm can lead to many new, faster algorithms for solving high-level problems in the SINR model. We demonstrate this by giving algorithms for the problems of Consensus, Multi-Message Broadcast, and Single-Message Broadcast. It is worth pointing out that our implementation algorithm is designed based on an optimal algorithm for a General Local Broadcast (GLB) problem, which takes the number of distinct messages into consideration for the first time. The GLB algorithm can handle many more communication scenarios beyond those defined in the absMAC layer. Simulation results show that our proposed algorithms perform well in practice.
【Keywords】: Interference; Signal to noise ratio; Sensors; Delays; Wireless sensor networks; Distributed algorithms; Probabilistic logic
【Paper Link】 【Pages】:1205-1213
【Authors】: Dingwen Yuan ; Hsuan-Yin Lin ; Jörg Widmer ; Matthias Hollick
【Abstract】: Millimeter-wave (mmWave) communication is a promising technology for coping with the expected exponential increase in data traffic in 5G networks. mmWave networks typically require a very dense deployment of mmWave base stations (mmBSs). To reduce cost and increase flexibility, wireless backhauling is needed to connect the mmBSs. The characteristics of mmWave communication, and specifically its high directionality, imply new requirements for efficient routing and scheduling paradigms. We propose an efficient scheduling method, called schedule-oriented optimization, based on matching theory, which optimizes QoS metrics jointly with routing. It is capable of solving any scheduling problem that can be formulated as a linear program whose variables are link times and QoS metrics. As an example of schedule-oriented optimization, we show the optimal solution of the maximum throughput fair scheduling (MTFS) problem. In practice, the optimal schedule can be obtained even for networks with over 200 mmBSs. To further increase runtime performance, we propose an efficient edge-coloring based approximation algorithm with a provable performance bound. It achieves over 80% of the optimal max-min throughput and runs 5 to 100 times faster than the optimal algorithm in practice. Finally, we extend the optimal and approximation algorithms to the cases of multi-RF-chain mmBSs and integrated backhaul and access networks.
【Keywords】: Schedules; Optimal scheduling; Throughput; Radio frequency; Routing; Base stations
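The schedule-oriented idea, treating schedule activation times as LP variables alongside the QoS metric, reduces to a small linear program for the MTFS special case. In the sketch below, the candidate schedules (sets of links that may be active together) are assumed to be enumerated beforehand, with rates[f][j] the rate flow f receives while schedule j is active:

    import numpy as np
    from scipy.optimize import linprog

    def max_min_throughput(rates):
        """Maximize t such that every flow's throughput sum_j rates[f,j]*x_j >= t
        and the schedule time shares x_j sum to at most 1."""
        F, J = rates.shape
        c = np.zeros(J + 1)
        c[-1] = -1.0                                   # linprog minimizes, so use -t
        # t - sum_j rates[f,j] * x_j <= 0 for each flow f
        A_ub = np.hstack([-rates, np.ones((F, 1))])
        b_ub = np.zeros(F)
        # total air time: sum_j x_j <= 1
        A_ub = np.vstack([A_ub, np.append(np.ones(J), 0.0)])
        b_ub = np.append(b_ub, 1.0)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (J + 1))
        return res.x[:J], res.x[-1]                    # time shares, min throughput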
【Paper Link】 【Pages】:1214-1222
【Authors】: Fan Zhou ; M. Yousof Naderi ; Kunal Sankhe ; Kaushik Chowdhury
【Abstract】: Emerging network architectures in the 60 GHz millimeter-wave bands will likely use dense deployments of access points (APs) given the high attenuation and frequent line-of-sight related outages. We show in this paper that naively connecting to any available AP, or even multiple APs, may not fully realize the promise of efficiently utilizing the extremely large bandwidths available in this band. This paper holistically addresses these problems through the Multi-AP Association Protocol (MAP), which: (i) indicates ideal durations for beam-searching at the physical/link layers of the protocol stack that result in minimal interruptions to user traffic, (ii) devises a multi-AP association framework based on multipath TCP, which increases network robustness by allowing immediate redirection of traffic to alternative APs whenever an existing connection is interrupted, and (iii) designs an exploration-exploitation aware flow scheduling algorithm that dynamically activates the sub-flows to optimize transport-layer performance. MAP is a lightweight, client-side system that needs zero modifications to the existing protocol stack, though it interacts with the latter to opportunistically trigger standards-defined functions. The paper also presents a convergence analysis of throughput, experimental validation on a 60 GHz network testbed, and trace-driven simulation studies that show MAP achieving a 5-7x reduction in re-buffering rate in HD video streaming, rapid convergence to the best possible AP, and fair allocation of resources among clients.
【Keywords】: Throughput; Protocols; Streaming media; Bandwidth; Standards; Quality of experience; IP networks
【Paper Link】 【Pages】:1223-1231
【Authors】: Zheng Yang ; Kimmo Järvinen
【Abstract】: Localization based on premeasured WiFi fingerprints is a popular method for indoor positioning where satellite-based positioning systems are unavailable. In these systems, the privacy of the user's location is lost because the location is computed by the service provider. In INFOCOM'14, Li et al. presented PriWFL, a WiFi fingerprint localization system based on additively homomorphic Paillier encryption, which was claimed to protect both the users' location privacy and the service provider's database privacy. In this paper, we demonstrate a severe weakness in PriWFL that allows an attacker to compromise the service provider's database under a realistic attack model, and we also identify certain other problems in PriWFL that decrease its localization accuracy. Hence, we show that PriWFL does not solve the privacy problems of WiFi fingerprint localization. We also explore different solutions for implementing secure privacy-preserving WiFi fingerprint localization and propose two schemes based on Paillier encryption which do not suffer from the weakness of PriWFL and offer the same localization accuracy as the privacy-violating schemes.
【Keywords】: Localization; privacy; security; WiFi fingerprint; cryptanalysis; homomorphic encryption; attack
【Paper Link】 【Pages】:1232-1240
【Authors】: Yang Liu ; Zhenjiang Li
【Abstract】: We revisit a crucial privacy problem in this paper: can sensitive information, like the passwords and personal data frequently typed by users on mobile devices, be inferred through the motion sensors of a wearable device on the user's wrist, e.g., a smart watch or wrist band? Existing works have achieved initial success under certain context-aware conditions, such as 1) a horizontal keypad plane, 2) a known keyboard size, 3) and/or a last keystroke on a fixed "enter" button. Taking one step further, the key contribution of this paper is to fully demonstrate, and more importantly to alert people to, the further risks of typing privacy leakage in much more generalized, context-free scenarios, which concern most of us in our daily use of mobile devices. We validate this feasibility by addressing a series of unsolved challenges and developing a prototype system, aLeak. Extensive experiments show the efficacy of aLeak, which achieves promising success rates in attacks over more than 300 rounds of different users' typing on various mobile platforms, without any context-related information.
【Keywords】: Keyboards; Trajectory; Side-channel attacks; Privacy; Wearable sensors; Password
【Paper Link】 【Pages】:1241-1249
【Authors】: Shams Zawoad ; Ragib Hasan ; Mohammad Kamrul Islam
【Abstract】: The black-box nature of clouds introduces a lack of trust in them. Since provenance can provide a complete history of an entity, trustworthy provenance management for data, applications, or workflows can make the cloud more accountable. Current research on cloud provenance mainly focuses on collecting provenance records while trusting the cloud providers to manage them. However, a dishonest cloud provider can alter the provenance records, as the records are stored under the cloud provider's control. To solve this problem, we first propose CloProv, a provenance model that captures the complete provenance of any type of entity in the cloud. We analyze the threats to the CloProv model, considering collusion among malicious users and dishonest cloud providers. Based on the threat model, we propose a secure data provenance scheme, SECProv, for cloud-based, multi-user, shared data storage systems. We integrate SECProv with the object storage module of an open-source cloud framework, OpenStack Swift, and analyze the efficiency of the proposed scheme.
【Keywords】: Cloud computing; Data models; History; Conferences
【Paper Link】 【Pages】:1250-1258
【Authors】: Xuhui Gong ; Qiang-Sheng Hua ; Lixiang Qian ; Dongxiao Yu ; Hai Jin
【Abstract】: Privacy-preserving data aggregation has been extensively studied in the past decades. However, most of these works target specific aggregation functions, such as additive or multiplicative ones, and assume there exists a trusted authority that facilitates the distribution of keys and other information. In this paper, we aim to devise a communication-efficient and privacy-preserving protocol that can exactly compute arbitrary data aggregation functions without a trusted authority. In our model, there exist one untrusted aggregator and n participants. We assume that all communication channels are insecure and subject to eavesdropping attacks. Our protocol is designed under the semi-honest model, and it can tolerate k (k ≤ n-2) collusive adversaries. Our protocol achieves (n - k)-source anonymity: for each collected data item originating from a participant other than the colluding ones, the aggregator learns only that its source is one of the (n - k) non-colluding participants. Compared with recent work [1] that computes arbitrary aggregation functions by collecting all the participants' data via a trusted authority, our protocol increases the computation time and communication cost merely by at most a factor of O((log n / log log n)²). The key to our protocol is that we have designed algorithms that can efficiently assign unique sequence numbers to the participants without a trusted authority.
【Keywords】: Protocols; Data aggregation; Computational modeling; Privacy; Additives; Encryption
【Paper Link】 【Pages】:1259-1267
【Authors】: Wei Gong ; Si Chen ; Jiangchuan Liu ; Zhi Wang
【Abstract】: In the past few years, various backscatter nodes have been invented for many emerging mobile applications, such as sports analytics, interactive gaming, and mobile healthcare. Backscatter networks are expected to provide a high-throughput and stable communication platform for those interconnected mobile nodes. Yet, through experiments, we find that state-of-the-art rate adaptation methods for backscatter networks share a fundamental limitation in accommodating the hardware diversity of nodes, because the common mapping paradigm, which chooses the optimal rate based on the received signal strength indicator (RSSI) or the like, is hardly adaptable to hardware-dependent RSSIs. To address this issue, we propose MobiRate (Mobility-aware Rate adaptation), which fully exploits mobility hints from PHY information and the characteristics of backscatter systems. The key insight is that mobility hints, like velocity and position, can greatly benefit rate selection and channel probing. Specifically, we introduce a novel velocity-based loss rate estimation method that dynamically re-weights packets based on time and mobility. In addition, we design a mobility-assisted probing trigger and a new selective-probing mechanism, significantly saving probing time. As MobiRate is fully compatible with the current standard, it is prototyped using a COTS RFID reader and a variety of commercial tags. Our extensive experiments demonstrate that MobiRate achieves up to a 3.8x throughput gain over state-of-the-art methods across a wide range of mobility, channel conditions, and tag types.
【Keywords】: Backscatter; Throughput; Estimation; Encoding; Standards; Loss measurement; Downlink
【Paper Link】 【Pages】:1268-1276
【Authors】: Taekyung Kim ; Wonjun Lee
【Abstract】: The emerging deployment of IoT devices increasingly requires more energy- and cost-efficient wireless links between devices. Backscatter networks, one of the most feasible technologies to meet this requirement in IoT spaces, have evolved toward better usability and wider coverage. The most likely outcome of this evolution is Wi-Fi backscatter networks that harmonize with widely deployed commercial devices. However, recent Wi-Fi backscatter techniques face several hurdles. Backscatter techniques built on 802.11b devices impair the spectral efficiency of wireless channels and break backward compatibility, while those built on the other types of 802.11 devices support only per-packet backscatter, resulting in poor performance. To tackle all these problems, we propose a flicker detector that achieves per-symbol in-band backscatter by exploiting the residual channel of Wi-Fi packets. Our approach shows robust performance without any hardware modification and without any side effects on wireless channels. Extensive experiments on a software-defined radio testbed demonstrate that our approach overcomes the hurdles of existing Wi-Fi backscatter networks.
【Keywords】: Backscatter; Wi-Fi; Internet of Things
【Paper Link】 【Pages】:1277-1285
【Authors】: Wei Wang ; Shiyue He ; Lin Yang ; Qian Zhang ; Tao Jian
【Abstract】: Conventional high-speed Wi-Fi has recently become a contender for low-power Internet-of-Things (IoT) communications. OFDM continues to be adopted in the new IoT Wi-Fi standard due to its spectrum efficiency, which can support the demands of massive IoT connectivity. While the IoT Wi-Fi standard offers many new features to improve power and spectrum efficiency, the basic physical-layer (PHY) structure of the transceiver design still conforms to the conventional design rationale, in which access points (APs) and clients employ the same OFDM PHY. In this paper, we argue that the current Wi-Fi PHY design does not take full advantage of the inherent asymmetry between the AP and IoT devices. To fill the gap, we propose an asymmetric design in which IoT devices transmit uplink packets using the lowest power while pushing all the decoding burden to the AP side. Such a design utilizes the ample power and computational resources at the AP to trade for the transmission (TX) power of IoT devices. The core technique enabling this asymmetric design is that the AP takes full advantage of its high clock rate to boost its decoding ability. We provide an implementation of our design and show that it can reduce the IoT devices' TX power by boosting the decoding capability at the receivers.
【Keywords】: Wireless fidelity; Clocks; OFDM; Standards; Decoding; Timing; Transceivers
【Paper Link】 【Pages】:1286-1294
【Authors】: Ali Sehati ; Majid Ghaderi
【Abstract】: This paper considers energy management on LTE-enabled Internet of Things (IoT) devices. A characteristic feature of IoT applications is the periodic generation of small messages, whose transmission over LTE is highly energy-inefficient. In this paper, we consider application-message bundling to alleviate the effect of short message transmissions on energy consumption. Specifically, we model the interplay between energy consumption and the extended DRX mechanism introduced in LTE to deal with IoT traffic. We formulate bundling as a cost minimization problem and develop an online algorithm to solve it. Detailed analysis shows that, depending on DRX and application parameters, our algorithm is 1-, 2-, or 4-competitive with respect to the optimal offline algorithm that knows the entire sequence of application messages a priori. We evaluate the performance of the proposed algorithm and the accuracy of our analysis in a range of realistic scenarios, using both model-driven simulations and real experiments on an IoT testbed. Our results show that i) depending on application requirements, energy savings ranging from zero to about 100% can be achieved using our algorithm, and ii) ignoring DRX could significantly overestimate or underestimate energy consumption.
【Keywords】: Delays; Long Term Evolution; Energy consumption; Internet of Things; Smart phones; Energy management; Computational modeling
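The intuition behind such competitive ratios is ski-rental-like: buffer messages until the accumulated waiting cost matches the fixed per-transmission energy cost (radio wake-up plus tail), then send the bundle. The sketch below is this generic rule, not the authors' DRX-aware algorithm; tx_cost and delay_weight are assumed parameters:

    def online_bundle(messages, tx_cost, delay_weight):
        """messages: list of (arrival_time, payload), sorted by time.
        Transmit the pending bundle once its accumulated delay cost
        reaches tx_cost, balancing energy against delay."""
        pending, bundles = [], []
        for t, payload in messages:
            delay_cost = sum(delay_weight * (t - t0) for t0, _ in pending)
            pending.append((t, payload))
            if delay_cost >= tx_cost:
                bundles.append((t, [p for _, p in pending]))  # one LTE transmission
                pending = []
        if pending:                                  # flush leftovers at the end
            bundles.append((pending[-1][0], [p for _, p in pending]))
        return bundles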
【Paper Link】 【Pages】:1295-1303
【Authors】: Jiaqi Liu ; Qi Lian ; Luoyi Fu ; Xinbing Wang
【Abstract】: Social recommendation has been widely applied to offer users suggestions on whom to connect to, yet most existing strategies overlook the existence of multi-type connections among users. To overcome this limitation, we characterize each type of connection by a corresponding network layer and then propose a novel algorithm for joint recommendations in cross-layer social networks. Two types of results are presented in the paper. (i) Our proposed algorithm, named the Cross-layer 2-hop Path (C2P) algorithm, implements joint recommendation by suggesting that a user establish connections to his cross-layer two-hop neighbors, i.e., those who link to the user via two-hop paths whose two hops belong to two different layers. In doing so, each recommendation item combines user relationships in both layers and can thus better meet user demands. (ii) Through analytical derivations, along with further empirical validation on real datasets, we evaluate the performance of our proposed algorithm. First, we prove that the algorithm is efficiently implementable, with constant complexity per recommendation. Then, we evaluate its recommendation performance with two metrics, acceptance and diversity, where the former measures recommendation accuracy and the latter measures an algorithm's capability to provide diverse recommendation items. Our results show that the C2P algorithm is optimal in terms of acceptance, and that for diversity its performance is of the same order as the theoretical upper bound. Finally, the effectiveness of the proposed algorithm is validated by simulations on three real datasets, where it outperforms baseline algorithms with up to a 38% acceptance gain and achieves a diversity ratio of around 0.5 relative to the theoretical upper bound.
【Keywords】: Bipartite graph; Mathematical model; Computational modeling; Measurement; Prototypes; Conferences; Social network services
【Paper Link】 【Pages】:1304-1312
【Authors】: Zhibo Wang ; Yongquan Zhang ; Honglong Chen ; Zhetao Li ; Feng Xia
【Abstract】: Event-based social networks (EBSNs) are newly emerging social platforms where users publish events online and attract others to attend the events offline. The content information of events plays an important role in event recommendation. However, the content-based approaches in existing event recommender systems cannot fully represent each user's preference for events, since most of them exploit content information only from the events' perspective, and the bag-of-words model they commonly use can only capture word frequency while ignoring word order and sentence structure. In this paper, we shift the focus from the events' perspective to the users' perspective and propose a Deep User Modeling framework for Event Recommendation (DUMER) that characterizes users' preferences by exploiting the contextual information of events that users have attended. Specifically, we utilize a convolutional neural network (CNN) with word embeddings to deeply capture the contextual information of a user's interested events and build a latent model for each user. We then incorporate the user latent model into a probabilistic matrix factorization (PMF) model to enhance recommendation accuracy. We conduct experiments on a real-world dataset crawled from a typical EBSN, Meetup.com, and the experimental results show that DUMER outperforms the compared benchmarks.
【Keywords】: Recommender systems; Machine learning; Social network services; Feature extraction; Context modeling; Conferences; Probabilistic logic
【Paper Link】 【Pages】:1313-1321
【Authors】: Fan Zhou ; Lei Liu ; Kunpeng Zhang ; Goce Trajcevski ; Jin Wu ; Ting Zhong
【Abstract】: The typical aim of User Identity Linkage (UIL) is to detect when users across different social platforms are actually one and the same individual. Existing efforts to address this practically relevant problem span user-profile-based, user-generated-content-based, and user-behavior-based approaches, supervised and unsupervised learning frameworks, and subspace-learning-based models. Most of them require extracting relevant features (e.g., profile, location, biography, networks, behavior, etc.) to model the user consistently across different social networks. However, these features are mainly derived from prior knowledge and may vary across platforms and applications. Inspired by the recent successes of deep learning in different tasks, especially in automatic feature extraction and representation, we propose a deep neural network based algorithm for UIL, called DeepLink. It is a novel end-to-end, semi-supervised approach that involves no hand-crafted features. Specifically, DeepLink samples the networks and learns to encode network nodes into vector representations that capture local and global network structures, which, in turn, can be used to align anchor nodes through deep neural networks. A dual-learning paradigm is exploited to learn how to transfer knowledge and update the linkage using the policy gradient method. Experiments conducted on several public datasets show that DeepLink outperforms state-of-the-art methods in terms of both linking precision and identity-match ranking.
【Keywords】: user identity linkage; social networks; deep learning; reinforcement learning
【Paper Link】 【Pages】:1322-1330
【Authors】: Tsung-Yen Yang ; Christopher G. Brinton ; Carlee Joe-Wong
【Abstract】: We consider the problem of predicting link formation in Social Learning Networks (SLNs), a type of social network that forms when people learn from one another through structured interactions. While link prediction has been studied for general types of social networks, the evolution of SLNs over their lifetimes, coupled with their dependence on which topics are being discussed, presents new challenges for this type of network. To address these challenges, we develop a time-series prediction methodology that uses a recurrent neural network architecture to pass network state between time periods, and that models three types of SLN features updated in each period: neighborhood-based (e.g., resource allocation), path-based (e.g., shortest path), and post-based (e.g., topic similarity). Through evaluation on four real-world datasets from Massive Open Online Course (MOOC) discussion forums, we find that our method obtains substantial improvements over a Bayesian model and an unsupervised baseline, with AUCs typically above 0.75 and reaching 0.97 depending on the dataset. Our feature importance analysis shows that while neighborhood-based features contribute the most to the results, post-based and path-based features add information that significantly improves the predictions. We also find that several input features have opposite directions of correlation between link formation and post quality, suggesting that response time and quality are two competing objectives to be accounted for in SLN link recommendation systems.
【Keywords】: Predictive models; Message systems; Feature extraction; Social network services; Computational modeling; Discussion forums; Bayes methods
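The three feature families named above are easy to compute for a candidate pair; a sketch using networkx follows, where topic_sim is an assumed precomputed post-based similarity score:

    import networkx as nx

    def sln_link_features(G, u, v, topic_sim):
        """Neighborhood-, path-, and post-based features for candidate link (u, v)."""
        # Neighborhood-based: resource allocation index over common neighbors.
        ra = sum(p for _, _, p in nx.resource_allocation_index(G, [(u, v)]))
        # Path-based: shortest-path length (infinite if disconnected).
        try:
            sp = nx.shortest_path_length(G, u, v)
        except nx.NetworkXNoPath:
            sp = float("inf")
        # Post-based: topic similarity, assumed precomputed from forum posts.
        return {"resource_allocation": ra,
                "shortest_path": sp,
                "topic_similarity": topic_sim(u, v)}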
【Paper Link】 【Pages】:1331-1339
【Authors】: M. Hammad Mazhar ; Zubair Shafiq
【Abstract】: The widespread deployment of end-to-end encryption protocols such as HTTPS and QUIC has reduced operators' visibility into traffic on their networks. Network operators need this visibility to monitor and mitigate Quality of Experience (QoE) impairments in popular applications such as video streaming. To address this problem, we propose a machine learning based approach to monitor QoE metrics for encrypted video traffic. We leverage network- and transport-layer information as features to train machine learning classifiers for inferring video QoE metrics such as startup delay and rebuffering events. Using our proposed approach, network operators can detect and react to encrypted video QoE impairments in real time. We evaluate our approach for YouTube adaptive video streams using HTTPS and QUIC. The experimental evaluations show that our approach achieves up to 90% classification accuracy for HTTPS and up to 85% classification accuracy for QUIC.
【Keywords】: Quality of experience; Streaming media; Measurement; Cryptography; Monitoring; Machine learning; Real-time systems
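A minimal rendition of this pipeline trains a classifier on per-window flow features to flag rebuffering; the random-forest choice and feature layout are our assumptions rather than the paper's exact setup:

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    def train_rebuffering_detector(X, y):
        """X: rows of network/transport-layer features per time window
        (e.g., downlink throughput, packet interarrival statistics);
        y: 1 if a rebuffering event occurred in that window, else 0."""
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                                  random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_tr, y_tr)
        print("window-level accuracy:", accuracy_score(y_te, clf.predict(X_te)))
        return clf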
【Paper Link】 【Pages】:1340-1348
【Authors】: Chenhong Cao ; Wei Gong ; Wei Dong ; Jihong Yu ; Chun Chen ; Jiangchuan Liu
【Abstract】: Multihop wireless networking is a key enabling technology for interconnecting a vast number of IoT devices. Measurement is fundamental to various network operations, including management, diagnostics, and optimization. Out-of-band measurement approaches use external sniffers to monitor network traffic passively, and they provide detailed information about the network. However, existing approaches do not carefully consider lossy and correlated links, which are common in low-power wireless networks, resulting in unsatisfactory packet capture ratios and low measurement quality. In this paper, we present NetVision, a practical out-of-band measurement system with special consideration for sniffer deployment. By explicitly considering link quality and link correlation, we are able to achieve high measurement quality while minimizing deployment cost. We formulate sniffer deployment as an optimization problem and propose efficient algorithms for solving it. We further design a set of instructions and APIs to simplify a variety of common measurement tasks. We implement NetVision on the TinyOS/TelosB platform and evaluate its performance extensively, both in simulation and on an indoor testbed with 80 TelosB nodes. Results show that NetVision is accurate, generic, and robust. Three typical case studies demonstrate that NetVision can facilitate various measurement and debugging tasks.
【Keywords】: Monitoring; Spread spectrum communication; Wireless networks; Loss measurement; Wireless sensor networks; Correlation
【Paper Link】 【Pages】:1349-1357
【Authors】: Xiang Li ; J. David Smith ; Thang N. Dinh ; My T. Thai
【Abstract】: In this work, we examine the problem of adaptively uncovering network topology in an incomplete network, to support more accurate decision making in various real-world applications, such as modeling for reconnaissance attacks and network probing. While this problem has been partially studied, we provide a novel take on it by modeling it with a set of crawlers termed “bots” which can uncover independent portions of the network in parallel. Accordingly, we develop three adaptive algorithms, which make decisions based on previous observations due to incomplete information: AGP, a sequential method; FastAGP, a parallel algorithm; and ALSP, an extension of FastAGP that uses local search to improve its guarantees. These algorithms are proven to have 1/3, 1/7, and 1/(5 + ϵ) approximation ratios, respectively. The key to analyzing these algorithms is the connection between adaptive algorithms and an intersection of multiple partition matroids. We conclude with an evaluation of these algorithms to quantify the impact of both adaptivity and parallelism. We find that in practice, adaptive approaches perform significantly better, while FastAGP performs nearly as well as AGP in most cases despite operating in a massively parallel fashion. Finally, we show that a balance between the quantity and quality of bots is ideal for maximizing observation of the network.
【Keywords】: Network Probing; Adaptive Approximation Algorithms; Online Social Networks; Matroid Intersection; Optimization
【Paper Link】 【Pages】:1358-1366
【Authors】: Di Xiao ; Xiaoyong Li ; Daren B. H. Cline ; Dmitri Loguinov
【Abstract】: Since inception, DNS has used a TTL-based replication scheme that allows the source (i.e., an authoritative domain server) to control the frequency of record eviction from client caches. Existing studies of DNS predominantly focus on reducing query latency and source bandwidth, both of which are optimized by increasing the cache hit rate. However, this causes less-frequent contacts with the source and results in higher staleness of retrieved records. Given high data-churn rates at certain providers (e.g., dynamic DNS, CDNs) and the importance of consistency to their clients, we propose that cache models include the probability of freshness as an integral performance measure. We derive this metric under general update/download processes and present a novel framework for measuring its value using remote observation (i.e., without access to the source or the cache). Besides freshness, our methods can estimate the inter-update distribution of DNS records, cache hit rate, distribution of TTL, and query arrival rate from other clients. Furthermore, these algorithms do not require any changes to the existing infrastructure/protocols.
【Keywords】: Servers; Internet; Delays; Conferences; Random variables; Frequency control
【Paper Link】 【Pages】:1367-1375
【Authors】: Sunjung Kang ; Changhee Joo
【Abstract】: In Cognitive Radio Networks (CRNs), dynamic spectrum access allows (unlicensed) users to identify and access unused channels opportunistically, thus improving spectrum utility. In this paper, we address the user-channel allocation problem in multi-user multi-channel CRNs without prior knowledge of channel statistics. The reward of a channel is stochastic with an unknown distribution, and statistically different for each user. Each user either explores a channel to learn the channel statistics, or exploits the channel with the highest expected reward based on the information collected so far. Further, a channel should be accessed by only one user at a time, since simultaneous access results in a collision. Using the multi-armed bandit framework, we develop a provably efficient solution whose computational complexity is linear in the number of users and channels.
【Keywords】: Optimal matching; Greedy algorithms; Cognitive radio; Computational complexity; Channel estimation; Bipartite graph; Conferences
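【Illustration】: As an illustration of the explore/exploit framework described above, here is a minimal sketch assuming a UCB1-style index per (user, channel) pair and a greedy conflict-free assignment; it conveys the flavor of bandit-based channel allocation, not the authors' exact algorithm.

```python
# Hedged sketch: greedy user-channel assignment by descending UCB index.
import numpy as np

def ucb_assign(mean, count, t):
    """mean, count: (n_users, n_channels) empirical reward means and play counts."""
    ucb = mean + np.sqrt(2 * np.log(t + 1) / np.maximum(count, 1))
    flat = np.argsort(-ucb, axis=None)                    # pairs by descending index
    pairs = np.column_stack(np.unravel_index(flat, ucb.shape))
    used_u, used_c, assign = set(), set(), {}
    for u, c in pairs:                                    # one channel per user: no collisions
        if u not in used_u and c not in used_c:
            assign[int(u)] = int(c)
            used_u.add(u); used_c.add(c)
    return assign

# Usage: each round, call ucb_assign, let every user access its channel,
# observe the stochastic reward, and update mean/count for that pair.
```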
【Paper Link】 【Pages】:1376-1384
【Authors】: Moinul Hossain ; Jiang Xie
【Abstract】: Cognitive Radio (CR) has garnered much attention in the last decade, while its security issues are not yet fully studied. Existing research on attacks and defenses in CR-based networks focuses mostly on individual network layers, whereas cross-layer attacks remain resilient to single-layer defenses. In this paper, we shed light on a new vulnerability in cross-layer routing protocols and demonstrate how a perpetrator can exploit it to manipulate traffic flow around itself. We propose this cross-layer attack in CR-based wireless mesh networks (CR-WMNs), which we call the off-sensing and route manipulation (OS-RM) attack. In this cross-layer assault, the off-sensing attack is launched at the lower layers as the point of attack, but the final intention is to manipulate traffic flow around the perpetrator. We also introduce a learning strategy for a perpetrator, so that it can gather information from collaboration with other network entities and turn this information into knowledge to accelerate its malicious intentions. Simulation results show that this attack is far more detrimental than those we have experienced in the past and needs to be addressed before the commercialization of CR-based networks.
【Keywords】: Sensors; Routing protocols; Knowledge engineering; Wireless sensor networks; Wireless communication; FCC; Routing
【Paper Link】 【Pages】:1385-1393
【Authors】: Sigit Aryo Pambudi ; Wenye Wang ; Cliff Wang
【Abstract】: The explosive number of IoT nodes and the adoption of software-defined radio have enabled an efficient method of exploiting idle frequency spectrum called dynamic spectrum access (DSA). The foremost problem in DSA is for a pair of nodes to rendezvous and form a control channel prior to communication. Existing schemes require a channel hopping (CH) pattern with length O(N^2), which is overly complex especially when the number of channels N is large. Moreover, these CH patterns are designed assuming DSA nodes have unlimited CH capability, which is hardly satisfied by nodes with long frequency switching times and limited sensing capacity. In this paper, we design a low-complexity rendezvous scheme that accounts for CH capability limits. The CH capability is captured using spectrum slice graphs that describe the possible channels for the next hop, given the currently visited channel. By viewing the CH patterns as random walks over the spectrum graphs, we assign the walks optimal transition probabilities that achieve the smallest rendezvous delay. The resulting symmetric random CH (S-RCH) scheme, which is suitable for IoT nodes without predetermined roles, achieves a lower rendezvous delay than the existing Modular Modified Clock (MMC) scheme and offers more than 80% successful rendezvous in mobile networks.
【Keywords】: Bandwidth; Tuning; Sensors; Time-frequency analysis; Wireless communication; Conferences; Electronic mail
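【Illustration】: The random-walk view of channel hopping can be conveyed with a small sketch; the slice graph and uniform transition probabilities below are toy assumptions (S-RCH instead optimizes the transition weights for minimal expected rendezvous delay).

```python
# Hedged sketch: channel hopping as a random walk over a spectrum slice graph,
# where neighbors[c] lists the channels reachable from channel c under the
# radio's tuning limits.
import random

def random_walk_hop(current, neighbors, steps):
    """Return a channel-hopping sequence of length steps+1 over the slice graph."""
    seq = [current]
    for _ in range(steps):
        current = random.choice(neighbors[current])  # uniform; S-RCH uses optimized weights
        seq.append(current)
    return seq

# Two unsynchronized nodes rendezvous when their walks land on the same
# channel in the same time slot.
neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}     # toy 3-channel slice graph
print(random_walk_hop(0, neighbors, 10))
```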
【Paper Link】 【Pages】:1394-1402
【Authors】: Xu Wang ; Randall A. Berry
【Abstract】: Unlicensed spectrum has been viewed as a way to increase competition in wireless access and promote innovation in new technologies and business models. However, several recent papers have shown that the openness of such spectrum can also lead to it becoming overly congested when used by competing wireless service providers (SPs). This in turn can result in the SPs making no profit and may deter them from entering the market. However, this prior work assumes that unlicensed access is a separate service from any service offered using licensed spectrum. Here, we instead consider the more common case where service providers bundle both licensed and unlicensed spectrum as a single service and offer it at a single price. We analyze a model for such a market and show that in this case SPs are able to gain higher profit than in the case without bundling. It is also possible to get higher social welfare with bundling. Moreover, we explore the case where SPs are allowed to manage the customers' average percentage of time they receive service on unlicensed spectrum, and characterize the social welfare gap between the profit-maximizing and social-welfare-maximizing settings.
【Keywords】: Wireless communication; Computational modeling; Bandwidth; Conferences; Biological system modeling; Games; Licenses
【Paper Link】 【Pages】:1403-1411
【Authors】: Haipeng Dai ; Meng Li ; Alex X. Liu
【Abstract】: This paper concerns the problem of finding persistent items in distributed datasets, which has many applications such as port scanning and intrusion detection. To the best of our knowledge, there is no existing solution for finding persistent items in distributed datasets. In this paper, we propose DISPERSE, a probabilistic algorithm that can find persistent items in distributed datasets without collecting all the datasets. Our basic idea is that each monitor compresses each item ID in its dataset in a lossy fashion and sends the set of lossily compressed item IDs to the server, and the server then recovers the IDs of the persistent items. We design the lossy compression so that, given one lossily compressed item ID, the server cannot recover the item ID, but when the number of lossily compressed versions of the same item ID exceeds a threshold, which means that such items are persistent, the server can recover the item ID with high probability. This threshold is exactly the threshold in the definition of persistent items. We implemented DISPERSE and evaluated its performance. In comparison with the straightforward solution, DISPERSE achieves a compression ratio of 26.5% with FNR=3.5% and FPR=0. In comparison with a Bloom-filter-based scheme we developed and the adapted kBF and IBF schemes, our scheme achieves 7.9, 5.7, and 6.6 times performance gains, respectively, in terms of compression ratio.
【Keywords】: Monitoring; Servers; IP networks; Master-slave; Conferences; Intrusion detection; Probabilistic logic
【Paper Link】 【Pages】:1412-1420
【Authors】: Sándor Z. Kiss ; Éva Hosszu ; János Tapolcai ; Lajos Rónyai ; Ori Rottenstreich
【Abstract】: Bloom filters and their variants are widely used as space-efficient probabilistic data structures for representing set systems and are very popular in networking applications. They support fast element insertion and deletion, along with membership queries, with the drawback of false positives. Bloom filters can be designed to match the false positive rates that are acceptable for the application domain. However, in many applications a common engineering solution is to set the false positive rate very small and ignore the existence of the very unlikely false positive answers. This paper is devoted to closing the gap between these two design concepts: making false positives unlikely and having no false positives at all. We propose a data structure, called the EGH filter, that supports the Bloom filter operations and, in addition, can guarantee false-positive-free operation for a finite universe and a restricted number of elements stored in the filter. We refer to the limited universe and filter size as the false-positive-free zone of the filter. We describe necessary conditions for the false-positive-free zone of a filter and generalize the filter to support listing of the elements. We evaluate the performance of the filter in comparison with traditional Bloom filters. Our data structure is based on recently developed combinatorial group testing techniques.
【Keywords】: Testing; Probabilistic logic; Probes; Conferences; Arrays; Focusing
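【Illustration】: A plausible sketch of the group-testing idea behind a false-positive-free zone, assuming a prime-residue construction in the spirit of the combinatorial techniques the paper builds on: each element sets one bit per prime sub-array, and with primes chosen so their product exceeds n^d, a Chinese-Remainder-style argument rules out false positives while at most d elements are stored. The parameter choices below are toy assumptions, not the paper's tuning.

```python
# Hedged sketch: a group-testing-style filter with a false-positive-free zone.
PRIMES = [5, 7, 11, 13]  # toy choice; real deployments size these to n and d

class GroupTestingFilter:
    def __init__(self, primes=PRIMES):
        self.primes = primes
        self.bits = [[False] * p for p in primes]  # one bit array per prime

    def add(self, x):
        for i, p in enumerate(self.primes):
            self.bits[i][x % p] = True             # deterministic "hash": residue mod p

    def query(self, x):
        return all(self.bits[i][x % p] for i, p in enumerate(self.primes))

f = GroupTestingFilter()
for x in (3, 42):
    f.add(x)
print(f.query(42), f.query(99))  # True, and False while within the FP-free zone
```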
【Paper Link】 【Pages】:1421-1429
【Authors】: Xukan Ran ; Haoliang Chen ; Xiaodan Zhu ; Zhenming Liu ; Jiasi Chen
【Abstract】: Deep learning shows great promise in providing more intelligence to augmented reality (AR) devices, but few AR apps use deep learning due to a lack of infrastructure support. Deep learning algorithms are computationally intensive, and front-end devices cannot deliver sufficient compute power for real-time processing. In this work, we design a framework that ties together front-end devices with more powerful backend “helpers” (e.g., home servers) to allow deep learning to be executed locally or remotely in the cloud/edge. We consider the complex interaction between model accuracy, video quality, battery constraints, network data usage, and network conditions to determine an optimal offloading strategy. Our contributions are: (1) extensive measurements to understand the tradeoffs between video quality, network conditions, battery consumption, processing delay, and model accuracy; (2) a measurement-driven mathematical framework that efficiently solves the resulting combinatorial optimization problem; (3) an Android application that performs real-time object detection for AR applications, with experimental results that demonstrate the superiority of our approach.
【Keywords】: Machine learning; Computational modeling; Streaming media; Measurement; Real-time systems; Neural networks; Batteries
【Paper Link】 【Pages】:1430-1438
【Authors】: Zongqing Lu ; Kevin S. Chan ; Thomas F. La Porta
【Abstract】: Mobile devices such as smartphones are enabling users to generate and share videos at increasing rates. In some cases, these videos may contain valuable information, which can be exploited for a variety of purposes. However, instead of centrally collecting and processing videos for information retrieval, we consider crowdprocessing videos, where each mobile device locally processes stored videos. While the computational capability of mobile devices continues to improve, processing videos using deep learning, i.e., convolutional neural networks, is still a demanding task for mobile devices. To this end, we design and build CrowdVision, a computing platform that enables mobile devices to crowdprocess videos using deep learning in a distributed and energy-efficient manner leveraging cloud offload. CrowdVision can quickly and efficiently process videos with offload under various settings and different network connections, and greatly outperforms an existing computation offload framework (e.g., with a 2× speed-up). In doing so CrowdVision tackles several challenges: (i) how to exploit the characteristics of the computing of deep learning for video processing; (ii) how to parallelize processing and offloading for acceleration; and (iii) how to optimize both time and energy at runtime by just determining the right moments to offload.
【Keywords】: Mobile handsets; Task analysis; Machine learning; Performance evaluation; Object detection; Batteries; Batch production systems
【Paper Link】 【Pages】:1439-1447
【Authors】: Qianyi Huang ; Zhice Yang ; Qian Zhang
【Abstract】: Mobile sensing enables unobtrusive monitoring of our daily activities, sleep quality, breathing and heart rate, revolutionizing the health-care system. Dietary information is also a critical dimension for health management but has no convenient solution yet. In this paper, we ask whether we can track meal composition unobtrusively. We introduce Smart-U, a new utensil design that can recognize meal composition during the intake process, without user intervention or on-body instruments. Smart-U makes use of the fact that light spectra reflected by foods are dependent on the food ingredients. By analyzing the reflected light spectra, Smart-U can recognize what food is on top of the utensil. We describe the prototype design of Smart-U and the food recognition algorithm. We demonstrate that Smart-U can recognize 20 types of foods with 93% accuracy. It can work robustly under different conditions. We envision that Smart-U can enable automatic food intake tracking, and provide personalized food suggestions based on nutrient recommendations and prior consumption. In the long run, Smart-U can contribute to the study of chronic diseases.
【Keywords】: Light emitting diodes; Monitoring; Photodiodes; Receivers; Prototypes; Photonics; Light sources
【Paper Link】 【Pages】:1448-1456
【Authors】: Haishi Du ; Ping Li ; Hao Zhou ; Wei Gong ; Gan Luo ; Panlong Yang
【Abstract】: This paper presents WordRecorder, an efficient and accurate handwriting recognition system that identifies words using acoustic signals generated by pens and paper, thus enabling ubiquitous handwriting recognition. To achieve this, we carefully craft a new deep-learning based acoustic sensing framework with three major components, i.e., segmentation, classification, and word suggestion. First, we design a dual-window approach to segment the raw acoustic signal into a series of words and letters by exploiting subtle acoustic signal features of handwriting. Then we integrate a set of simple yet effective signal processing techniques to further refine raw acoustic signals into normalized spectrograms which are suitable for deep-learning classification. After that, we customize a deep neural network that is suitable for smart devices. Finally, we incorporate a word suggestion module to enhance the recognition performance. Our framework achieves both computation efficiency and desirable classification accuracy simultaneously. We prototype our design using off-the-shelf smartwatches and conduct extensive evaluations. Our results demonstrate that WordRecorder robustly achieves an 81% accuracy rate for trained users, and 75% for users without training, across a range of different environments, users, and writing habits.
【Keywords】: Acoustics; Handwriting recognition; Writing; Microsoft Windows; Feature extraction; Conferences; Machine learning
【Paper Link】 【Pages】:1457-1465
【Authors】: Tianming Zhao ; Jian Liu ; Yan Wang ; Hongbo Liu ; Yingying Chen
【Abstract】: This paper subverts the traditional understanding of Photoplethysmography (PPG) and opens up a new direction of the utility of PPG in commodity wearable devices, especially in the domain of human computer interaction of fine-grained gesture recognition. We demonstrate that it is possible to leverage the widely deployed PPG sensors in wrist-worn wearable devices to enable finger-level gesture recognition, which could facilitate many emerging human-computer interactions (e.g., sign-language interpretation and virtual reality). While prior solutions in gesture recognition require dedicated devices (e.g., video cameras or IR sensors) or leverage various signals in the environments (e.g., sound, RF or ambient light), this paper introduces the first PPG-based gesture recognition system that can differentiate fine-grained hand gestures at finger level using commodity wearables. Our innovative system harnesses the unique blood flow changes in a user's wrist area to distinguish the user's finger and hand movements. The insight is that hand gestures involve a series of muscle and tendon movements that compress the arterial geometry with different degrees, resulting in significant motion artifacts to the blood flow with different intensity and time duration. By leveraging the unique characteristics of the motion artifacts to PPG, our system can accurately extract the gesture-related signals from the significant background noise (i.e., pulses), and identify different minute finger-level gestures. Extensive experiments are conducted with over 3600 gestures collected from 10 adults. Our prototype study using two commodity PPG sensors can differentiate nine finger-level gestures from American Sign Language with an average recognition accuracy over 88%, suggesting that our PPG-based finger-level gesture recognition system is promising to be one of the most critical components in sign language translation using wearables.
【Keywords】: Gesture recognition; Wearable sensors; Assistive technology; Wrist; Blood; Biomedical monitoring
【Paper Link】 【Pages】:1466-1474
【Authors】: Li Lu ; Jiadi Yu ; Yingying Chen ; Hongbo Liu ; Yanmin Zhu ; Yunfei Liu ; Minglu Li
【Abstract】: To prevent leakage of users' privacy, more and more mobile devices employ biometric-based authentication approaches, such as fingerprint, face recognition, and voiceprint authentication, to enhance privacy protection. However, these approaches are vulnerable to replay attacks. Although state-of-the-art solutions utilize liveness verification to combat such attacks, existing approaches are sensitive to ambient environments, such as ambient light and surrounding audible noise. Towards this end, we explore liveness verification for user authentication leveraging users' lip movements, which are robust to noisy environments. In this paper, we propose a lip reading-based user authentication system, LipPass, which extracts unique behavioral characteristics of users' speaking lips leveraging built-in audio devices on smartphones for user authentication. We first investigate Doppler profiles of acoustic signals caused by users' speaking lips, and find that there are unique lip movement patterns for different individuals. To characterize the lip movements, we propose a deep learning-based method to extract efficient features from Doppler profiles, and employ Support Vector Machine and Support Vector Domain Description to construct binary classifiers and spoofer detectors for user identification and spoofer detection, respectively. Afterwards, we develop a binary tree-based authentication approach to accurately identify each individual leveraging these binary classifiers and spoofer detectors with respect to registered users. Through extensive experiments involving 48 volunteers in four real environments, LipPass achieves 90.21% accuracy in user identification and 93.1% accuracy in spoofer detection.
【Keywords】: Lips; Authentication; Acoustics; Doppler effect; Smart phones; Feature extraction
【Paper Link】 【Pages】:1475-1483
【Authors】: Shuangshuang Xu ; Lan Zhang ; Anran Li ; Xiang-Yang Li ; Chaoyi Ruan ; Wenchao Huang
【Abstract】: A better understanding of mobile applications' behaviors would lead to better malware detection/classification and better app recommendations for users. In this work, we design a framework, AppDNA, to automatically generate a compact representation for each app that comprehensively profiles its behaviors. The behavior difference between two apps can be measured by the distance between their representations. As a result, the versatile representation can be generated once for each app and then be used for a wide variety of objectives, including malware detection, app categorization, plagiarism detection, etc. Based on a systematic and deep understanding of an app's behavior, we propose to perform function-call-graph-based app profiling. We carefully design a graph-encoding method to convert a typically extremely large call graph into a 64-dimensional fixed-size vector to achieve robust app profiling. Our extensive evaluations based on 86,332 benign and malicious apps demonstrate that our system performs app profiling (and thus malware detection, classification, and app recommendation) with high accuracy at extremely low computation cost: it classifies 4,024 (benign/malware) apps in around 5.06 seconds with accuracy of about 93.07%; it classifies 570 malware samples into their families (21 families in total) in around 0.83 seconds with accuracy of 82.3%; and it classifies the functionality of 9,730 apps with accuracy of 33.3% for a total of 7 categories and 88.1% for 2 categories.
【Keywords】: Malware; Feature extraction; Encoding; Task analysis; Plagiarism; Neural networks; Machine learning
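【Illustration】: The fixed-size encoding idea can be conveyed with a much simpler stand-in than the paper's encoder: hash caller-callee edges of a call graph into a 64-bucket vector and compare apps by Euclidean distance. The function names below are hypothetical and the hashing scheme is an assumption for illustration.

```python
# Hedged sketch: a 64-dimensional fixed-size call-graph signature via edge hashing.
import hashlib
import math

def encode_call_graph(edges, dim=64):
    vec = [0.0] * dim
    for caller, callee in edges:
        h = int(hashlib.md5(f"{caller}->{callee}".encode()).hexdigest(), 16)
        vec[h % dim] += 1.0                      # count edges landing in each bucket
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]               # L2-normalize for distance comparison

app_a = encode_call_graph([("onCreate", "sendSMS"), ("onCreate", "getLocation")])
app_b = encode_call_graph([("onCreate", "sendSMS")])
dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(app_a, app_b)))
print(f"behavior distance: {dist:.3f}")          # small distance ~ similar behavior
```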
【Paper Link】 【Pages】:1484-1492
【Authors】: Ting Chen ; Yuxiao Zhu ; Zihao Li ; Jiachi Chen ; Xiaoqi Li ; Xiapu Luo ; Xiaodong Lin ; Xiaosong Zhang
【Abstract】: Being the largest blockchain with the capability of running smart contracts, Ethereum has attracted wide attention and its market capitalization has reached 20 billion USD. Ethereum not only supports its cryptocurrency named Ether but also provides a decentralized platform to execute smart contracts in the Ethereum virtual machine. Although Ether's price is approaching 200 USD and nearly 600K smart contracts have been deployed to Ethereum, little is known about the characteristics of its users, smart contracts, and the relationships among them. To fill this gap, in this paper, we conduct the first systematic study on Ethereum by leveraging graph analysis to characterize three major activities on Ethereum, namely money transfer, smart contract creation, and smart contract invocation. We design a new approach to collect all transaction data, construct three graphs from the data to characterize major activities, and discover new observations and insights from these graphs. Moreover, we propose new approaches based on cross-graph analysis to address two security issues in Ethereum. The evaluation through real cases demonstrates the effectiveness of our new approaches.
【Keywords】: Conferences
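【Illustration】: A hedged sketch of the graph-construction step, assuming the networkx library: build a directed, weighted money-flow graph from (sender, receiver, ether) transaction records and inspect simple statistics. The sample transactions are fabricated for illustration; the paper collects real traces.

```python
# Hedged sketch: directed, weighted money-flow graph from transaction records.
import networkx as nx

txs = [("0xA", "0xB", 1.5), ("0xB", "0xC", 0.7), ("0xA", "0xC", 2.0)]
G = nx.DiGraph()
for sender, receiver, ether in txs:
    # accumulate transferred ether on each directed edge
    w = G.get_edge_data(sender, receiver, {"weight": 0.0})["weight"]
    G.add_edge(sender, receiver, weight=w + ether)

print(G.out_degree("0xA", weight="weight"))  # total ether sent by 0xA (3.5)
```

From graphs like this, degree distributions, clustering, and strongly connected components can then be computed to characterize activity patterns.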
【Paper Link】 【Pages】:1493-1501
【Authors】: Yizhen Jia ; Yinhao Xiao ; Jiguo Yu ; Xiuzhen Cheng ; Zhenkai Liang ; Zhiguo Wan
【Abstract】: Smart home IoT devices are more prevalent than ever before, but the relevant security considerations fail to keep up due to device and technology heterogeneity and resource constraints, making IoT systems susceptible to various attacks. In this paper, we propose a novel graph-based mechanism to identify vulnerabilities in the communication of IoT devices for smart home systems. Our approach takes one or more packet capture files as inputs to construct a traffic graph by parsing the captured messages, identify the correlated subgraphs by examining the attribute-value pairs associated with each message, and then quantify their vulnerabilities based on the sensitivity levels of different keywords. To test the effectiveness of our approach, we set up a smart home system that can control a smart bulb LB100 via either the smartphone app for LB100 or the Google Home speaker. We collected and analyzed 58,714 messages and exploited 6 vulnerable correlated subgraphs, based on which we implemented 6 attack cases that can be easily reproduced by attackers with little knowledge of IoT. This study is novel in that our approach takes only the collected traffic files as inputs, without requiring knowledge of the device firmware, while being able to identify new vulnerabilities. With this approach, we won the third prize out of 20 teams in a hacking competition.
【Keywords】: Google; Servers; Smart homes; Cryptography; Protocols; Sensitivity
【Paper Link】 【Pages】:1502-1510
【Authors】: Kun Xie ; Xiaocan Li ; Xin Wang ; Gaogang Xie ; Jigang Wen ; Dafang Zhang
【Abstract】: Detecting anomalous traffic is a crucial task of managing networks. Many anomaly detection algorithms have been proposed recently. However, constrained by their matrix-based traffic data model, existing algorithms often suffer from low detection accuracy. To fully utilize the multi-dimensional information hidden in the traffic data, this paper takes an initiative to investigate the potential and methodologies of performing tensor factorization for more accurate Internet anomaly detection. Considering only the low-rank linearity features hidden in the data, current tensor factorization techniques would result in low anomaly detection accuracy. We propose a novel Graph-based Tensor Recovery model (Graph-TR) to explore both the low-rank linearity features and the non-linear proximity information hidden in the traffic data for better anomaly detection. We encode the non-linear proximity information of the traffic data by constructing nearest-neighbor graphs and incorporate this information into the tensor factorization using the graph Laplacian. Moreover, to facilitate the quick building of the neighbor graph, we propose a nearest-neighbor searching algorithm with simple locality-sensitive hashing (LSH). We have conducted extensive experiments using the Internet traffic trace data Abilene and GEANT. Compared with state-of-the-art algorithms for matrix-based anomaly detection and a tensor recovery approach, our Graph-TR can achieve a significantly lower false positive rate and a higher true positive rate.
【Keywords】: Traffic anomaly detection; Tensor Recovery; Graph
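【Illustration】: A minimal sketch of the graph-regularization ingredient, assuming scikit-learn for neighbor search: build a k-nearest-neighbor graph over traffic samples and form its Laplacian L = D - W, which can then penalize factorizations that pull neighboring samples apart. The data and k are toy assumptions; the paper accelerates neighbor search with LSH.

```python
# Hedged sketch: k-NN graph and its Laplacian for graph-regularized factorization.
import numpy as np
from sklearn.neighbors import kneighbors_graph

X = np.random.default_rng(1).random((100, 8))      # 100 traffic samples, 8 features
W = kneighbors_graph(X, n_neighbors=5, mode="connectivity")
W = 0.5 * (W + W.T)                                # symmetrize the adjacency
L = np.diag(np.asarray(W.sum(axis=1)).ravel()) - W.toarray()
# In the factorization objective, a term like tr(F^T L F) is small exactly when
# the factor rows F of graph-linked samples stay close to each other.
```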
【Paper Link】 【Pages】:1511-1519
【Authors】: Tran Viet Xuan Phuong ; Rui Ning ; Chunsheng Xin ; Hongyi Wu
【Abstract】: While the Internet of Things (IoT) is embraced as an important tool for efficiency and productivity, it is becoming an increasingly attractive target for cybercriminals. This work represents the first endeavor to develop practical Puncturable Attribute-Based Encryption schemes that are lightweight and applicable to the IoT. In the proposed scheme, attribute-based encryption is adopted for fine-grained access control. The secret keys are puncturable to revoke the decryption capability for selected messages, recipients, or time periods, thus protecting selected important messages even if the current key is compromised. In contrast to conventional forward encryption, a distinguishing merit of the proposed approach is that recipients can update their keys by themselves without key re-issuing from the key distributor. It does not require frequent communications between IoT devices and the key distribution center, nor does it need to delete components to expunge existing keys to produce a new key. Moreover, we devise a novel approach that efficiently integrates attribute-based keys and punctured keys such that the key size is roughly the same as that of the original attribute-based encryption. We prove the correctness of the proposed scheme and its security under the Decisional Bilinear Diffie-Hellman (DBDH) assumption. We also implement the proposed scheme on a Raspberry Pi and observe that the computation efficiency of the proposed approach is comparable to the original attribute-based encryption. Both encryption and decryption can be completed within tens of milliseconds.
【Keywords】: Attribute-Based Encryption; Internet-of-Things; Lagrange Polynomial; Linear Secret Sharing
【Paper Link】 【Pages】:1520-1528
【Authors】: Zhen Ling ; Junzhou Luo ; Yaowen Liu ; Ming Yang ; Kui Wu ; Xinwen Fu
【Abstract】: Smart mobile devices have become an integral part of people's lives, and users often input sensitive information on these devices. However, various side channel attacks against mobile devices pose a plethora of serious threats to user security and privacy. To mitigate these attacks, we present a novel secure Back-of-Device (BoD) input system, SecTap, for mobile devices. To use SecTap, a user tilts her mobile device to move a cursor on the keyboard and taps the back of the device to secretly input data. We design a tap detection method that processes the stream of accelerometer readings to identify the user's taps in real time. The orientation sensor of the mobile device is used to control the direction and the speed of cursor movement. We also propose an obfuscation technique to randomly and effectively accelerate the cursor movement. This technique not only preserves the input performance but also keeps the adversary from inferring the tapped keys. Extensive empirical experiments were conducted on different smartphones to demonstrate the usability and security of SecTap on both Android and iOS platforms.
【Keywords】: Motion Sensor; Smart Phone; Secure Input
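【Illustration】: A simple illustrative tap detector in the spirit of the description above (the paper's method is more refined): flag a back-of-device tap when the jump in acceleration magnitude between consecutive samples exceeds a threshold, with a refractory period to avoid double counting. The threshold and window values are assumptions.

```python
# Hedged sketch: threshold-based tap detection on an accelerometer stream.
import math

def detect_taps(samples, thresh=3.0, refractory=10):
    """samples: iterable of (ax, ay, az) readings; returns indices of detected taps."""
    taps, cooldown, prev = [], 0, None
    for i, (ax, ay, az) in enumerate(samples):
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if prev is not None and cooldown == 0 and abs(mag - prev) > thresh:
            taps.append(i)               # sharp magnitude jump -> candidate tap
            cooldown = refractory        # suppress detections for a few samples
        cooldown = max(0, cooldown - 1)
        prev = mag
    return taps
```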
【Paper Link】 【Pages】:1529-1537
【Authors】: Nirnimesh Ghose ; Loukas Lazos ; Ming Li
【Abstract】: We address the problem of trust establishment between wireless devices that do not share any prior secrets. This includes the mutual authentication and agreement to a common key that can be used to further bootstrap essential cryptographic mechanisms. We propose SFIRE, a secret-free trust establishment protocol that allows the secure pairing of commercial off-the-shelf (COTS) wireless devices with a hub. Compared to the state-of-the-art, SFIRE does not require any out-of-band channels, special hardware, or firmware modification, but can be applied to any COTS device. Moreover, SFIRE is resistant to the most advanced active signal manipulations that include recently demonstrated signal nullification at an intended receiver. These security properties are achieved in-band with the assistance of a helper device such as a smartphone and by using the RSS fluctuation patterns to build a robust “RSS authenticator”. We perform extensive experiments using COTS devices and USRP radios and verify the validity of the proposed protocol.
【Keywords】: Wireless communication; Communication system security; Protocols; Wireless sensor networks; Performance evaluation; Channel estimation; Security
【Paper Link】 【Pages】:1538-1546
【Authors】: Jiaxi Gu ; Jiliang Wang ; Zhiwen Yu ; Kele Shen
【Abstract】: Video streaming takes up an increasing proportion of network traffic nowadays. Dynamic Adaptive Streaming over HTTP (DASH) has become the de facto standard of video streaming and is adopted by YouTube, Netflix, etc. Despite this popularity, network traffic during video streaming shows identifiable patterns that threaten user privacy. In this paper, we propose a video identification method that uses network traffic observed during streaming. Although DASH adapts the bitrate, we observe that the video bitrate trend remains relatively stable because of the widely used Variable Bit-Rate (VBR) encoding. Accordingly, we design a robust video feature extraction method for eavesdropped video streaming traffic. Meanwhile, we design a VBR-based video fingerprinting method for the candidate video set, which can be built from downloaded video files. Finally, we propose an efficient partial matching method for computing similarities between video fingerprints and streaming traces to derive video identities. We evaluate our attack method in different scenarios for various video contents, segment lengths, and quality levels. The experimental results show that the identification accuracy can reach up to 90% using only three minutes of continuous network traffic eavesdropping.
【Keywords】: video streaming; network traffic; privacy; side-channel attack
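【Illustration】: The partial matching step might look like the following sketch: slide the captured bitrate trace along each candidate fingerprint and score the alignment with normalized cross-correlation; the paper's features and similarity measure differ in detail.

```python
# Hedged sketch: partial matching of a captured trace against a fingerprint.
import numpy as np

def best_match_score(fingerprint, trace):
    """Max normalized cross-correlation of `trace` over all offsets.

    Requires len(fingerprint) >= len(trace), i.e., the eavesdropped trace is
    a (possibly partial) window of the full video fingerprint.
    """
    f, t = np.asarray(fingerprint, float), np.asarray(trace, float)
    t = (t - t.mean()) / (t.std() + 1e-9)
    best = -1.0
    for off in range(len(f) - len(t) + 1):
        w = f[off:off + len(t)]
        w = (w - w.mean()) / (w.std() + 1e-9)
        best = max(best, float(np.dot(w, t)) / len(t))
    return best

# Identify the eavesdropped video as the candidate with the highest score.
```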
【Paper Link】 【Pages】:1547-1555
【Authors】: Kunal Sankhe ; Ufuk Muncuk ; M. Yousof Naderi ; Kaushik Chowdhury
【Abstract】: This paper presents FreeIoT, a control plane paradigm that allows fine grained signaling for city-scale IoT deployments without installing any additional infrastructure. FreeIoT overlays control/wake-up information for sensors over existing standards compliant LTE through the following contributions: First, we develop a novel encoding scheme that changes the spatial positioning of Almost Blank Subframes (ABS) within a standard LTE frame to convey control information. ABS was originally defined in the standard to allow coexistence between the macro-cell eNB and nearby small cells, which FreeIoT leverages as a side channel for IoT signaling. Our approach works with any number of ABS settings chosen by the LTE eNB, and accordingly adjusts the encoding of control messages at maximum possible transmission rates. Second, a session management protocol is introduced to maintain contextual information of the control signaling. This allows FreeIoT to handle situations where the control message may span multiple frames, or when the LTE operator temporarily reduces the number of ABS. FreeIoT also incorporates an error detection and correction mechanism to counter channel and fading errors. Finally, we implement a proof of concept testbed to validate the operation of FreeIoT using a software defined LTE eNB and custom-designed RF energy harvesting circuit interfaced with off-the-shelf sensors.
【Keywords】: Long Term Evolution; Sensors; Encoding; Radio frequency; Indexes; Modulation
【Paper Link】 【Pages】:1556-1564
【Authors】: Chuang Hu ; Wei Bao ; Dan Wang
【Abstract】: Nowadays, manufacturers want to collect data from their sold products into the cloud, so that they can conduct analysis and improve the operation, maintenance, and services of these products. Manufacturers are looking for a self-contained solution for data transmission, since their products are typically deployed in a large number of different buildings, and it is neither feasible to negotiate with each building to use the building's network (e.g., WiFi) nor practical to establish their own network infrastructure. ISPs are aware of this market. Since the readily available 3G/4G is overly costly for most IoT devices, ISPs are developing new choices. Nevertheless, it can be expected that the choices from ISPs will not be fine-grained enough to match hundreds or thousands of requirements on different costs and data volumes from IoT applications. To address this problem, we for the first time propose IoT communication sharing (ICS). We first clarify the ICS scenarios. We then formulate the ICS problem and develop a set of algorithms with provable performance. We further present our implementation of a fully functioning system. Our evaluations show that ICS and our algorithms can lead to cost reductions of five times and eight times, respectively, for the two real-world cases.
【Keywords】: Buildings; Integrated circuits; Maintenance engineering; Wireless fidelity; Computational modeling; Cloud computing; Wireless sensor networks
【Paper Link】 【Pages】:1565-1573
【Authors】: Yilun Zheng ; Yuan He ; Meng Jin ; Xiaolong Zheng ; Yunhao Liu
【Abstract】: Eccentricity detection is a crucial issue for high-speed rotating machinery, as it concerns the stability and safety of the machinery. Conventional techniques in industry for eccentricity detection are mainly based on measuring certain physical indicators, which are costly and hard to deploy. In this paper, we propose RED, a non-intrusive, low-cost, and real-time RFID-based eccentricity detection approach. Differing from existing RFID-based sensing approaches, RED utilizes the temporal and phase distributions of tag readings as effective features for eccentricity detection. RED includes a Markov chain based model called RUM, which needs only a few sample readings from the tag to make a highly accurate and precise judgement. We implement RED with a commercial off-the-shelf RFID reader and tags, and evaluate its performance across various scenarios. The overall accuracy is 93.59% and the detection latency is 0.68 seconds on average.
【Keywords】: Radiofrequency identification; Rotors; Feature extraction; Real-time systems; Antennas
【Paper Link】 【Pages】:1574-1582
【Authors】: Kun Qian ; Chenshu Wu ; Fu Xiao ; Yue Zheng ; Yi Zhang ; Zheng Yang ; Yunhao Liu
【Abstract】: Vital signs such as heart rate and heartbeat interval are currently measured by electrocardiograms (ECG) or wearable physiological monitors. These techniques either require contact with the patient's skin or are usually uncomfortable to wear, rendering them too expensive and user-unfriendly for daily monitoring. In this paper, we propose a new noninvasive technology to generate an Acousticcardiogram (ACG) that precisely monitors heartbeats using inaudible acoustic signals. ACG uses only commodity microphones and speakers commonly equipped on ubiquitous off-the-shelf devices, such as smartphones and laptops. By transmitting an acoustic signal and analyzing its reflections off the human body, ACG is capable of recognizing the heart rate as well as the heartbeat rhythm. We employ frequency-modulated sound signals to separate the reflection of the heart from that of background motions and breath, and continuously track the phase changes of the acoustic data. To translate these acoustic data into heart and breath rates, we leverage the dual-microphone design on COTS mobile devices to suppress the direct echo from speaker to microphones, identify the heart rate in the frequency domain, and adopt an advanced algorithm to extract individual heartbeats. We implement ACG on commercial devices and validate its performance in real environments. Experimental results demonstrate that ACG monitors the user's heartbeat accurately, with a median heart rate estimation error of 0.6 beats per minute (bpm), and a median heartbeat interval estimation error of 19 ms.
【Keywords】: Heart beat; Monitoring; Biomedical monitoring; Microphones; Sonar; Smart devices
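【Illustration】: An illustrative reduction of the pipeline's last stage, assuming the phase stream has already been separated from breathing and background motion: locate the FFT peak within a plausible heart-rate band. The synthetic signal and band edges are assumptions for demonstration.

```python
# Hedged sketch: heart rate from a pre-separated acoustic phase signal via FFT peak.
import numpy as np

fs = 100.0                                   # sample rate of the phase stream (Hz)
t = np.arange(0, 30, 1 / fs)
phase = 0.05 * np.sin(2 * np.pi * 1.2 * t)   # toy 1.2 Hz heartbeat component = 72 bpm
phase += 0.01 * np.random.default_rng(2).standard_normal(t.size)

spec = np.abs(np.fft.rfft(phase - phase.mean()))
freqs = np.fft.rfftfreq(phase.size, 1 / fs)
band = (freqs > 0.8) & (freqs < 3.0)         # restrict to ~48-180 bpm
hr = 60 * freqs[band][np.argmax(spec[band])]
print(f"estimated heart rate: {hr:.1f} bpm")
```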
【Paper Link】 【Pages】:1583-1591
【Authors】: Diala Naboulsi ; Assia Mermouri ; Razvan Stanica ; Hervé Rivano ; Marco Fiore
【Abstract】: The development of virtualization techniques enables an architectural shift in mobile networks, where resource allocation, or even signal processing, become software functions hosted in a data center. The centralization of computing resources and the dynamic mapping between baseband processing units (BBUs) and remote antennas (RRHs) provide increased flexibility to mobile operators, with important reductions in operational costs. Most research efforts on Cloud Radio Access Networks (CRAN) indeed consider an operator perspective and network-side performance indicators. The impact of such new paradigms on user experience has instead been overlooked. In this paper, we shift the viewpoint, and show that the dynamic assignment of computing resources enabled by CRAN generates a new class of mobile terminal handover that can impair user quality of service. We then propose an algorithm that mitigates the problem by optimizing the mapping between BBUs and RRHs on a time-varying graph representation of the system. Furthermore, we show that a practical online BBU-RRH mapping algorithm achieves results similar to an oracle-based scheme with perfect knowledge of future traffic demand. We test our algorithms with two large-scale real-world datasets, where the total number of handovers, compared with current architectures, is reduced by more than 20%. Moreover, if a small tolerance to dropped calls is allowed, 30% fewer handovers can be obtained.
【Keywords】: Handover; Computer architecture; Radio access networks; Quality of experience; Conferences; Cloud computing
【Paper Link】 【Pages】:1592-1600
【Authors】: Hanif Rahbari ; Peyman Siyari ; Marwan Krunz ; Jung-Min "Jerry" Park
【Abstract】: Carrier frequency offset (CFO) arises from the intrinsic mismatch between the operating frequencies of the transmitter and the receiver, as well as their relative speeds (i.e., Doppler effect). Despite advances in CFO estimation techniques, estimation errors are still present. Residual CFO creates time-varying phase error. Modern wireless systems, including WLANs, 5G cellular systems, and satellite communications, use high-order modulation schemes, which are characterized by dense constellation maps. Accounting for the phase error is critical for the demodulation performance of such schemes. In this paper, we analyze the post-estimation probability distribution of residual CFO and use it to develop a CFO-aware demodulation approach for a set of modulation schemes (e.g., QAM and APSK). For a given distribution of the residual CFO, symbols with larger amplitudes are less densely distributed on the constellation map. We explore one important application of our adaptive demodulation approach in the context of PHY-layer security, and more specifically modulation obfuscation (MO) mechanisms. In such mechanisms, the transmitter attempts to hide the modulation order of a frame's payload from eavesdroppers, which could otherwise exploit such information to breach user privacy or launch selective attacks. We go further and complement our CFO-aware demodulation scheme by optimizing the design of a low-complexity MO technique with respect to phase errors. Our results show that when combined, our CFO-aware demodulation and optimized MO techniques achieve up to 5 dB gain over conventional demodulation schemes that are not obfuscated and are oblivious to residual CFO.
【Keywords】: Frequency offset; demodulation; PHY-layer security; modulation obfuscation; WLAN
【Paper Link】 【Pages】:1601-1609
【Authors】: Junse Lee ; François Baccelli
【Abstract】: We propose and analyze a new shadowing field model meant to capture spatial correlations. The interference field associated with this new model is compared to that of the widely used independent shadowing model. Independent shadowing over links is adopted because of the resulting closed forms for performance metrics, and in spite of the well-known fact that the shadowing fields of networks are spatially correlated. The main purpose of this paper is to challenge this independent shadowing approximation. For this, we analyze the interference measured at the origin in networks where 1) nodes which are in the same cell of some random shadowing tessellation share the same shadow, or 2) nodes which share a common mother point in some cluster process share the same shadow. By leveraging stochastic comparison techniques, we give the order relation of the three main user performance metrics, namely coverage probability, Shannon throughput and local delay, under both the correlated and the independent shadowing assumptions. We show that the evaluation of the considered metrics under the independent approximation is systematically pessimistic compared to the correlated shadowing model. The improvement in each metric when adopting the correlated shadow model is quantified and shown to be quite significant.
【Keywords】: Shadow mapping; Stochastic processes; Interference; Base stations; Measurement; Random variables; Computational modeling
【Paper Link】 【Pages】:1610-1618
【Authors】: Amrit S. Bedi ; Ketan Rajawat ; Marceau Coupechoux
【Abstract】: This paper considers the problem of designing the user trajectory in a device-to-device communications setting. We consider a pair of pedestrians connected through a D2D link. The pedestrians seek to reach their respective destinations, while using the D2D link for data exchange applications such as file transfer, video calling, and online gaming. In order to enable better D2D connectivity, the pedestrians are willing to deviate from their respective shortest paths, at the cost of reaching their destinations slightly late. A generic trajectory optimization problem is formulated and solved for the case when full information about the problem is known in advance. Motivated by the D2D users' need to keep their destinations private, we also formulate a regularized variant of the problem that can be used to develop a fully online algorithm. The proposed online algorithm is quite efficient, and is shown to achieve sublinear offline regret while satisfying the required mobility constraints exactly. The theoretical results are backed by detailed numerical tests that establish the efficacy of the proposed algorithms under various settings.
【Keywords】: Device-to-device communication; Trajectory optimization; Heuristic algorithms; Robots
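【Illustration】: The flavor of such an online algorithm can be conveyed by a generic projected online gradient descent sketch, which attains sublinear regret under standard convexity assumptions; the per-step cost and the feasible box below are placeholders, not the paper's D2D mobility model.

```python
# Hedged sketch: projected online gradient descent with O(1/sqrt(t)) step sizes.
import numpy as np

def project_box(x, lo, hi):
    """Project onto a box feasible set (stand-in for mobility constraints)."""
    return np.clip(x, lo, hi)

x = np.zeros(2)                      # toy waypoint decision variable
for t in range(1, 101):
    target = np.array([1.0, -0.5])   # revealed only after committing to x
    grad = 2 * (x - target)          # gradient of the step-t cost ||x - target||^2
    x = project_box(x - grad / np.sqrt(t), -2.0, 2.0)
print(x)                             # approaches the per-step minimizer over time
```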
【Paper Link】 【Pages】:1619-1627
【Authors】: Foivos Michelinakis ; Hossein Doroud ; Abbas Razaghpanah ; Andra Lutu ; Narseo Vallina-Rodriguez ; Phillipa Gill ; Joerg Widmer
【Abstract】: Mobile applications outsource their cloud infrastructure deployment and content delivery to cloud computing services and content delivery networks. Studying how these services, which we collectively denote Cloud Service Providers (CSPs), perform over Mobile Network Operators (MNOs) is crucial to understanding some of the performance limitations of today's mobile apps. To that end, we perform the first empirical study of the complex dynamics between applications, MNOs and CSPs. First, we use real mobile app traffic traces that we gathered through a global crowdsourcing campaign to identify the most prevalent CSPs supporting today's mobile Internet. Then, we investigate how well these services interconnect with major European MNOs at a topological level, and measure their performance over European MNO networks through a month-long measurement campaign on the MONROE mobile broadband testbed. We discover that the top 6 most prevalent CSPs are used by 85% of apps, and observe significant differences in their performance across different MNOs due to the nature of their services, peering relationships with MNOs, and deployment strategies. We also find that CSP performance in MNOs is affected by inflated path length, roaming, and presence of middleboxes, but not influenced by the choice of DNS resolver.
【Keywords】: Cloud computing; Servers; IP networks; Conferences; Europe; Tools
【Paper Link】 【Pages】:1628-1636
【Authors】: Haibo Wang ; Jilong Wang ; Weizhen Dang ; Jing'an Xue ; Fenghua Li
【Abstract】: Dynamic Host Configuration Protocol (DHCP) is widely used to dynamically assign IP addresses. However, due to little knowledge of the behavior and performance of DHCP, it is challenging to configure a proper lease time in a complicated wireless network. In this paper, we conduct the largest known measurement of the behavior and performance of DHCP, based on the wireless network of T University (TWLAN). TWLAN has more than 59,000 users, 10,000 wireless access points and 130,000 IP addresses. We find the performance of DHCP is far from satisfactory: (1) non-authenticated devices lead to a waste of 25% of IP addresses at the rush hour; (2) a device does not generate traffic for 67% of the lease time on average. Meanwhile, we find that devices at different locations and with different operating systems show diverse online patterns. A unified lease time setting could result in an inefficient utilization of addresses. To address these problems, taking account of authentication information and device online patterns, we propose a new leasing strategy. The results show it reduces the number of assigned addresses by 24% and reduces the time during which an IP address is occupied by 17%, without significantly increasing the DHCP server load.
【Keywords】: DHCP; measurement; performance; optimization
【Paper Link】 【Pages】:1637-1645
【Authors】: Babak Alipour ; Leonardo Tonetto ; Aaron Yi Ding ; Roozbeh Ketabi ; Jörg Ott ; Ahmed Helmy
【Abstract】: Two major factors affecting mobile network performance are mobility and traffic patterns. Simulations and analytical performance evaluations rely on models to approximate factors affecting the network. Hence, the understanding of mobility and traffic is imperative to the effective evaluation and efficient design of future mobile networks. Current models target either mobility or traffic, but do not capture their interplay. Many trace-based mobility models have largely used pre-smartphone datasets (e.g., AP logs) or much coarser-granularity traces (e.g., cell towers). This raises questions regarding the relevance of existing models, and motivates our study to revisit this area. In this study, we conduct a multidimensional analysis to quantitatively characterize mobility and traffic spatio-temporal patterns, for laptops and smartphones, leading to a detailed integrated mobility-traffic analysis. Our study is data-driven, as we collect and mine capacious datasets (30TB, 300k devices) that capture all of these dimensions. The investigation is performed using our systematic (FLAMeS) framework. Overall, dozens of mobility and traffic features have been analyzed. The insights and lessons learnt serve as guidelines and a first step towards future integrated mobility-traffic models. In addition, our work acts as a stepping-stone towards a richer, more realistic suite of mobile test scenarios and benchmarks.
【Keywords】: Smart phones; Portable computers; Wireless LAN; Correlation; Systematics; Fires; Conferences
【Paper Link】 【Pages】:1646-1654
【Authors】: Yue Li ; Haining Wang ; Kun Sun
【Abstract】: Account recovery (usually through a password reset) on many websites has mainly relied on accessibility to a registered email due to its favorable deployability and usability. However, it makes a user's online accounts vulnerable to a single point of failure when the registered email account is compromised. While previous research focuses on strengthening user passwords, the security risk imposed by email-based account recovery has not yet been well studied. In this paper, we investigate the possibility of mounting an email-based account recovery attack. Specifically, we examine the account authentication and recovery protocols in 239 traffic-heavy websites, confirming that most of them use emails for account recovery. We further scrutinize the security policy of major email service providers and show that a significant portion of them take no or marginal effort to protect user email accounts, leaving compromised email accounts readily available for mounting account recovery attacks. Then, we conduct case studies to assess potential losses caused by such attacks. Finally, we propose a lightweight email security enhancement called Secure Email Account Recovery (SEAR) to defend against account recovery attacks as an extra layer of protection to account recovery emails.
【Keywords】: Electronic mail; Password; Protocols; Authentication; Tools; Google
【Paper Link】 【Pages】:1655-1663
【Authors】: Seongho Byeon ; Hwijae Kwon ; Youngwook Son ; Changmok Yang ; Sunghyun Choi
【Abstract】: State-of-the-art IEEE 802.11ac supports wide-bandwidth operation, which enables aggregating multiple 20 MHz channels into up to 160 MHz of bandwidth, as a key feature to achieve high throughput. In this paper, our experimental results reveal various situations where bandwidth adaptation without changing the receiver's baseband bandwidth, called the operating channel width (OCW), leads to poor reception performance due, surprisingly, to time-domain interference that does not overlap with the incoming desired signal in the frequency domain. To cope with this problem, we develop RECONN, a standard-compliant and receiver-driven OCW adaptation scheme that is easy to implement. Our prototype implementation on commercial 802.11ac devices shows that RECONN achieves up to 1.85× higher throughput by completely eliminating time-domain interference. To the best of our knowledge, this is the first work to discover the time-domain interference problem and to develop an OCW adaptation scheme for 802.11ac systems.
【Keywords】: Interference; Bandwidth; Time-domain analysis; Throughput; Wireless LAN; Baseband; Frequency-domain analysis
【Paper Link】 【Pages】:1664-1672
【Authors】: Yanjiao Chen ; Long Lin ; Guiyan Cao ; Zhenzhong Chen ; Baochun Li
【Abstract】: The use of a combinatorial auction is believed to be an effective way to distribute spectrum to buyers who have diversified valuations for different spectrum combinations. However, the allocation of spectrum with combinatorial auctions mainly aims at optimizing over certain utility functions, e.g., social welfare, but ignores individual preferences of buyers and sellers, who have incentives to deviate from globally optimal allocation results to improve their own utility. In this paper, we explore the possibility of designing a new stable matching algorithm for combinatorial spectrum allocations. Starkly different from existing efforts on spectrum matching mechanism design, our proposed combinatorial spectrum matching framework not only allows buyers to express preferences towards spectrum combinations (rather than individual channels), but also computes the payment that should be transferred from buyers to sellers. Payment determination, while essential in spectrum exchange, has never been addressed in existing spectrum matching frameworks. We design a novel algorithm to achieve a stable combinatorial spectrum matching and to compute the corresponding payment profiles. We conducted an extensive array of experiments to compare the performance of stable combinatorial spectrum matching with spectrum auctions. It is shown that the combinatorial spectrum matching sacrifices little allocation efficiency in terms of social welfare and spectrum utilization, but achieves a much higher individual buyer utility, which will incentivize buyers to participate and comply with the allocation results.
【Keywords】: Resource management; Interference; Cost accounting; Wireless communication; Stability analysis; Conferences; Computer science
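【Illustration】: Stability via deferred acceptance, the classical building block behind such matching mechanisms, can be sketched as follows; real combinatorial matching with payment determination (as in the paper) is substantially more involved, and the preference structures below are toy assumptions.

```python
# Hedged sketch: Gale-Shapley-style deferred acceptance between buyers and sellers.
def deferred_acceptance(buyer_prefs, seller_ranks):
    """buyer_prefs: buyer -> ordered seller list; seller_ranks: seller -> {buyer: rank}."""
    held = {}                                   # seller -> buyer tentatively accepted
    free = list(buyer_prefs)
    nxt = {b: 0 for b in buyer_prefs}           # next seller each buyer will propose to
    while free:
        b = free.pop()
        if nxt[b] >= len(buyer_prefs[b]):
            continue                            # b has exhausted its preference list
        s = buyer_prefs[b][nxt[b]]; nxt[b] += 1
        cur = held.get(s)
        if cur is None or seller_ranks[s][b] < seller_ranks[s][cur]:
            if cur is not None:
                free.append(cur)                # displaced buyer proposes again later
            held[s] = b
        else:
            free.append(b)                      # rejected; b will try its next choice
    return {b: s for s, b in held.items()}

print(deferred_acceptance({"b1": ["s1", "s2"], "b2": ["s1"]},
                          {"s1": {"b1": 1, "b2": 0}, "s2": {"b1": 0, "b2": 1}}))
```

Deferred acceptance guarantees that no buyer-seller pair would both prefer each other over their assigned partners, which is the stability notion the paper extends to bundles and payments.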
【Paper Link】 【Pages】:1673-1681
【Authors】: Mariya Zheleva ; Petko Bogdanov ; Timothy LaRock ; Paul Schmitt
【Abstract】: The current paradigm of exclusive spectrum assignment and allocation is creating artificial spectrum scarcity that has a dramatic impact on network performance and user experience. Thus, governments, industry and academia have endeavored to create novel spectrum management mechanisms that allow multi-tiered access. A key component of such an approach is a deep understanding of spectrum utilization in time, frequency and space. To address this challenge, we propose AirVIEW, a one-pass, unsupervised spectrum characterization approach for rapid transmitter detection with high tolerance to noise. AirVIEW autonomously learns its parameters and employs wavelet decomposition in order to amplify and reliably detect transmissions at a given time instant. We show that AirVIEW can robustly identify transmitters even when their power is only 5 dB above the noise floor. Furthermore, we demonstrate AirVIEW's ability to inform next-generation Dynamic Spectrum Access by characterizing essential transmitter properties in wideband spectrum measurements from 50MHz to 4.4GHz.
【Keywords】: Radio transmitters; Robustness; Noise measurement; Next generation networking; Time-frequency analysis; Floors
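A toy illustration of the wavelet idea behind AirVIEW: one level of Haar decomposition averages adjacent frequency bins (suppressing noise) while its detail branch reacts to abrupt occupancy edges. The synthetic power scan, the 3 dB margin, and the median noise-floor estimate below are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic wideband power scan (dB): a flat noise floor plus one weak
# transmitter only 5 dB above it, occupying bins 400-479 (made-up numbers).
psd = -100 + rng.normal(0.0, 1.0, 1024)
psd[400:480] += 5.0

# One level of Haar wavelet decomposition. The approximation branch
# averages adjacent bins; the detail branch (their difference) would
# highlight sharp occupancy edges whenever an edge splits a pair.
pairs = psd.reshape(-1, 2)
approx = pairs.mean(axis=1)
detail = pairs[:, 0] - pairs[:, 1]

# Robust noise-floor estimate, then flag coefficients above it.
floor = np.median(approx)
occupied = np.where(approx > floor + 3.0)[0] * 2   # back to bin indices
print("occupied bins roughly", occupied.min(), "to", occupied.max() + 1)
```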
【Paper Link】 【Pages】:1682-1690
【Authors】: Ayon Chakraborty ; Arani Bhattacharya ; Snigdha Kamal ; Samir R. Das ; Himanshu Gupta ; Petar M. Djuric
【Abstract】: We use a crowdsourcing approach for RF spectrum patrolling, where heterogeneous, low-cost spectrum sensors are deployed widely and are tasked with detecting unauthorized transmissions in a collaborative fashion while consuming only a limited amount of resources. We pose this as a collaborative signal detection problem where the individual sensors' detection performance may vary widely with their respective hardware or software configurations and is hard to model using traditional approaches. Still, an optimal subset of sensors and their configurations must be chosen to maximize the overall detection performance subject to given resource (cost) limitations. We outline the challenges of this problem in crowdsourced settings and present a set of methods to address them. The proposed methods use data-driven approaches to model individual sensors and develop mechanisms for sensor selection and fusion while accounting for their correlated nature. We present performance results using examples of commodity spectrum sensors and show significant improvements relative to baseline approaches.
【Keywords】: Sensor fusion; Measurement; Task analysis; Hardware; Crowdsourcing; Signal detection
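The selection step lends itself to a simple baseline: greedily add the sensor with the best marginal detection gain per unit cost until the budget runs out. The miss probabilities, costs, budget, and the independence assumption below are all illustrative; the paper's data-driven models also capture correlation between sensors, which this sketch ignores:

```python
def detection_prob(subset, miss):
    """P(at least one sensor detects), assuming independent misses."""
    p_all_miss = 1.0
    for s in subset:
        p_all_miss *= miss[s]
    return 1.0 - p_all_miss

def greedy_select(miss, cost, budget):
    """Repeatedly pick the affordable sensor with the best gain/cost ratio."""
    chosen, spent = [], 0.0
    while True:
        best, best_ratio = None, 0.0
        for s in miss:
            if s in chosen or spent + cost[s] > budget:
                continue
            gain = detection_prob(chosen + [s], miss) - detection_prob(chosen, miss)
            if gain / cost[s] > best_ratio:
                best, best_ratio = s, gain / cost[s]
        if best is None:
            return chosen
        chosen.append(best)
        spent += cost[best]

miss = {"rtl-sdr": 0.4, "hackrf": 0.25, "usrp": 0.1}   # per-sensor miss prob.
cost = {"rtl-sdr": 1.0, "hackrf": 3.0, "usrp": 8.0}    # deployment cost
print(greedy_select(miss, cost, budget=5.0))
```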
【Paper Link】 【Pages】:1691-1699
【Authors】: Chuyu Wang ; Jian Liu ; Yingying Chen ; Hongbo Liu ; Lei Xie ; Wei Wang ; Bingbing He ; Sanglu Lu
【Abstract】: Recently, gesture recognition has gained considerable attention in emerging applications (e.g., AR/VR systems) to provide a better user experience for human-computer interaction. Existing solutions usually recognize gestures based on wearable sensors or specialized signals (e.g., WiFi, acoustic and visible light), but they either incur high energy consumption or are susceptible to the ambient environment, which prevents them from efficiently sensing fine-grained finger movements. In this paper, we present RF-finger, a device-free system based on Commercial-Off-The-Shelf (COTS) RFID, which leverages a tag array on a letter-size paper to sense the fine-grained finger movements performed in front of the paper. In particular, we focus on two sensing modes: finger tracking recovers the moving trace of finger writing; multi-touch gesture recognition identifies multi-touch gestures involving multiple fingers. Specifically, we build a theoretical model to extract a fine-grained reflection feature from the raw RF signal, which describes the finger's influence on the tag array at cm-level resolution. For finger tracking, we leverage K-Nearest Neighbors (KNN) to pinpoint the finger position based on the fine-grained reflection features, and obtain a smoothed trace via a Kalman filter. Additionally, we construct a reflection image of each multi-touch gesture from the reflection features by regarding the multiple fingers as a whole. Finally, we use a Convolutional Neural Network (CNN) to identify the multi-touch gestures from these images. Extensive experiments validate that RF-finger can achieve as high as 88% and 92% accuracy for finger tracking and multi-touch gesture recognition, respectively.
【Keywords】: Gesture recognition; Tracking; Feature extraction; RFID tags; Interference
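A minimal sketch of the tracking half of the pipeline: KNN over calibrated reflection features to pinpoint the finger, followed by a per-axis Kalman filter to smooth the trace. The reference grid, the stand-in "features" (here just noisy positions), and the noise parameters are invented for illustration; the paper extracts real reflection features from the tag array:

```python
import numpy as np

def knn_position(feature, ref_feats, ref_pos, k=3):
    """Pinpoint the finger by averaging the positions of the k reference
    reflection features closest to the observed one (Euclidean)."""
    d = np.linalg.norm(ref_feats - feature, axis=1)
    return ref_pos[np.argsort(d)[:k]].mean(axis=0)

def kalman_smooth(zs, q=0.01, r=0.25):
    """Random-walk Kalman filter per axis to smooth the raw KNN trace.
    q: process noise (finger motion), r: measurement noise (KNN error)."""
    x, p, out = zs[0], 1.0, [zs[0]]
    for z in zs[1:]:
        p = p + q                  # predict
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # correct
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

# Illustrative calibration grid: stand-in features at known positions.
rng = np.random.default_rng(1)
ref_pos = np.array([[i, j] for i in range(4) for j in range(4)], float)
ref_feats = ref_pos + rng.normal(0, 0.05, ref_pos.shape)

raw = np.array([knn_position(f, ref_feats, ref_pos)
                for f in ref_pos[:8] + rng.normal(0, 0.3, (8, 2))])
trace = np.column_stack([kalman_smooth(raw[:, 0]), kalman_smooth(raw[:, 1])])
print(np.round(trace, 2))
```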
【Paper Link】 【Pages】:1700-1708
【Authors】: Jingyu Hua ; Hongyi Sun ; Zhenyu Shen ; Zhiyun Qian ; Sheng Zhong
【Abstract】: Due to the loose authentication requirements between access points (APs) and clients, WLANs notoriously face long-standing threats such as rogue APs and network freeloading. Take the rogue AP problem as an example: unfortunately, encryption alone does not provide authentication. APs need to be equipped with certificates that clients trust ahead of time. This requires either a PKI for APs or other forms of pre-established trust (e.g., distributing the certificates offline), none of which is widely used. Before any strong security solution is deployed, we still need a practical solution that can mitigate the problem. In this paper, we explore a non-cryptographic solution that is readily deployable today on end hosts (e.g., smartphones and laptops) without requiring any changes to the APs or the network infrastructure. The solution infers the Carrier Frequency Offsets (CFOs) of wireless devices from Channel State Information (CSI) and uses them as hardware fingerprints, without any special hardware requirement. CFO is attributed to oscillator drift, a fundamental physical property that cannot be manipulated easily and remains fairly consistent over time but varies significantly across devices. Real-world experiments on 23 smartphones and 34 APs (of both identical and different brands) in different scenarios demonstrate that the detection rate can exceed 94%.
【Keywords】: device fingerprinting; attack detection; authentication; wireless networks
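The enrollment-and-verify idea can be sketched in a few lines: learn each legitimate AP's CFO, then flag frames whose measured CFO deviates beyond a tolerance. The CFO values, BSSID labels, and the 0.5 kHz tolerance are illustrative assumptions; the paper estimates CFO from CSI rather than taking it as given:

```python
# Minimal sketch of CFO-based rogue-AP detection, not the paper's system.
enrolled_cfo_hz = {            # BSSID -> CFO learned during enrollment
    "ap-lobby": 12_300.0,
    "ap-office": -4_850.0,
}
TOLERANCE_HZ = 500.0           # oscillator drift stays fairly stable

def check_ap(bssid, measured_cfo_hz):
    baseline = enrolled_cfo_hz.get(bssid)
    if baseline is None:
        return "unknown AP"
    if abs(measured_cfo_hz - baseline) > TOLERANCE_HZ:
        return "possible rogue AP (CFO mismatch)"
    return "fingerprint OK"

print(check_ap("ap-lobby", 12_410.0))   # fingerprint OK
print(check_ap("ap-lobby", -3_900.0))   # possible rogue AP (CFO mismatch)
```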
【Paper Link】 【Pages】:1709-1717
【Authors】: Yunting Zhang ; Jiliang Wang ; Weiyi Wang ; Zhao Wang ; Yunhao Liu
【Abstract】: Acoustic motion tracking has been viewed as a promising user interaction technique in many scenarios such as Virtual Reality (VR), smart appliances, video gaming, etc. Existing acoustic motion tracking approaches, however, suffer from long windows of accumulated signal and time-consuming signal processing. Consequently, they inherently struggle to achieve both high accuracy and low delay. We propose Vernier, an efficient and accurate acoustic tracking method on commodity mobile devices. At the heart of Vernier lies a novel method to efficiently and accurately derive phase change, and thus moving distance. Vernier significantly reduces the tracking delay/overhead by removing the complicated frequency analysis and the long window of signal accumulation, while keeping a high tracking accuracy. We implement Vernier on Android and evaluate its performance on COTS mobile devices including the Samsung Galaxy S7 and Sony L50t. Evaluation results show that Vernier outperforms previous approaches with a tracking error of less than 4 mm. The tracking speed achieves a 3× improvement over existing phase-based approaches and 10× over Doppler-effect-based approaches. Vernier is also validated in applications such as controlling and drawing, and we believe it is generally applicable to many real applications.
【Keywords】: Mobile handsets; Tracking; Acoustics; Microsoft Windows; Delays; Doppler effect; Frequency modulation
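The core relation Vernier-style phase tracking exploits is that a phase change Δφ of a received tone corresponds to a path-length change of Δφ·λ/(2π). A minimal sketch on synthetic baseband samples, assuming a one-way path, a 20 kHz tone, and noiseless I/Q (none of which come from the paper):

```python
import numpy as np

C_SOUND = 343.0              # speed of sound in air, m/s
F_TONE = 20_000.0            # near-inaudible tone frequency, Hz
WAVELEN = C_SOUND / F_TONE   # ~17.15 mm

def phase_to_distance(iq):
    """Moving distance from the unwrapped phase of the baseband tone:
    each 2*pi of phase change is one wavelength of path change."""
    phase = np.unwrap(np.angle(iq))
    return (phase - phase[0]) / (2 * np.pi) * WAVELEN

# Synthetic baseband samples for a device moving 10 mm away from the
# speaker (one-way path; a reflected path would halve the distance).
true_d = np.linspace(0, 0.010, 200)                 # metres
iq = np.exp(1j * 2 * np.pi * true_d / WAVELEN)      # phase grows with range
print(f"estimated displacement: {phase_to_distance(iq)[-1] * 1000:.2f} mm")
```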
【Paper Link】 【Pages】:1718-1726
【Authors】: Yao Yao ; Yan Li ; Xin Liu ; Zicheng Chi ; Wei Wang ; Tiantian Xie ; Ting Zhu
【Abstract】: Researchers have demonstrated the feasibility of detecting human motion behind a wall with radio frequency (RF) sensing techniques. With these techniques, an eavesdropper can monitor people's behavior from outside a room without needing access to it. This introduces a severe privacy-leakage issue. To address this issue, we propose Aegis, an interference-negligible RF sensing shield that i) incapacitates the RF sensing of eavesdroppers operating at any unknown locations outside the protected area; ii) causes minimal interference to ongoing WiFi communication; and iii) preserves authorized RF sensing inside the private region. Our extensive evaluation shows that when Aegis is activated, it i) has a negligible impact on the legitimate sensing system, and ii) effectively prevents the illegitimate sensing system from sensing human motions. Moreover, the throughput of the ongoing data communication is even increased.
【Keywords】: Sensors; Radio frequency; Delays; Doppler shift; Hardware; Distortion; Receivers
【Paper Link】 【Pages】:1727-1735
【Authors】: Luca Baldesi ; Carter T. Butts ; Athina Markopoulou
【Abstract】: Community structure is an important property that captures inhomogeneities common in large networks, and modularity is one of the most widely used metrics for such community structure. In this paper, we introduce a principled methodology, the Spectral Graph Forge, for generating random graphs that preserve the community structure of a real network of interest, in terms of modularity. Our approach leverages the fact that the spectral structure of matrix representations of a graph encodes global information about community structure. The Spectral Graph Forge uses a low-rank approximation of the modularity matrix to generate synthetic graphs that match a target modularity within a user-selectable degree of accuracy, while allowing other aspects of structure to vary. We show that the Spectral Graph Forge outperforms state-of-the-art techniques in terms of accuracy in targeting the modularity and the randomness of the realizations, while also preserving other local structural properties and node attributes. We discuss extensions of the Spectral Graph Forge to target other properties beyond modularity, and its applications to anonymization.
【Keywords】: Matrix decomposition; Eigenvalues and eigenfunctions; Symmetric matrices; Measurement; Conferences; Social network services; Topology
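A compact sketch of the generation idea: form the modularity matrix B = A - d·dᵀ/(2m), keep its k dominant eigenpairs, map the low-rank reconstruction back to edge probabilities, and sample a synthetic graph. Clipping scores to [0, 1] and the toy two-clique input are our own simplifications of the paper's procedure:

```python
import numpy as np

def spectral_forge(A, k, rng=np.random.default_rng(2)):
    """Sample a random graph from a rank-k approximation of the
    modularity matrix, whose leading eigenpairs encode communities."""
    d = A.sum(axis=1)
    two_m = d.sum()
    B = A - np.outer(d, d) / two_m
    w, V = np.linalg.eigh(B)
    top = np.argsort(np.abs(w))[-k:]                 # dominant eigenpairs
    B_k = (V[:, top] * w[top]) @ V[:, top].T
    P = np.clip(B_k + np.outer(d, d) / two_m, 0, 1)  # back to edge scores
    U = rng.random(A.shape)
    sample = (np.triu(U, 1) < np.triu(P, 1)).astype(int)
    return sample + sample.T                         # symmetric simple graph

# Toy input: two 4-node cliques joined by one edge (clear communities).
A = np.zeros((8, 8), int)
A[:4, :4] = 1; A[4:, 4:] = 1; A[3, 4] = A[4, 3] = 1
np.fill_diagonal(A, 0)
print(spectral_forge(A, k=2))
```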
【Paper Link】 【Pages】:1736-1744
【Authors】: Shouling Ji ; Tianyu Du ; Zhen Hong ; Ting Wang ; Raheem A. Beyah
【Abstract】: In this paper, we study the correlation of graph data's anonymity, utility, and de-anonymity. Our main contributions include four perspectives. First, to the best of our knowledge, we conduct the first Anonymity-Utility-De-anonymity (AUD) correlation quantification for graph data and obtain closed forms for such correlation under both a preliminary mathematical model and a general data model. Second, we integrate our AUD quantification into SecGraph [31], a recently published Secure Graph data sharing/publishing system, and extend it to SecGraph+. Compared to SecGraph, SecGraph+ is an improved and enhanced uniform, open-source system for comprehensively studying graph anonymization, de-anonymization, and utility evaluation. Third, based on our AUD quantification, we evaluate the anonymity, utility, and de-anonymity of 12 real-world graph datasets generated from various computer systems and services. The results show that the achievable anonymity/de-anonymity depends on multiple factors, e.g., the preserved data utility and the quality of the employed auxiliary data. Finally, we apply our AUD quantification to evaluate the performance of state-of-the-art anonymization and de-anonymization techniques. Interestingly, we find that there is still significant space to improve state-of-the-art de-anonymization attacks, and we explicitly and quantitatively demonstrate this possible improvement space.
【Keywords】: Correlation; Data models; Erbium; Measurement; Electronic mail; Conferences; Mathematical model
【Paper Link】 【Pages】:1745-1753
【Authors】: Joseph Lubars ; R. Srikant
【Abstract】: Approximate graph matching refers to the problem of finding the best correspondence between the node labels of two correlated graphs. The problem has been applied to a number of domains, including social network de-anonymization. Recently, a number of algorithms have been proposed for seeded graph matching, which uses a few seed matches between two graphs to determine the remaining correspondence. We adapt the ideas from seeded algorithms to develop a graph matching correction algorithm, which takes a partially correct correspondence as input and returns an improved correspondence. We show that this algorithm can correct all errors in graph matching for stochastic block model graphs with high probability. Finally, we apply our algorithm as a post-processing step for other approximate graph matching algorithms to significantly improve the performance of state-of-the-art algorithms for seedless graph matching.
【Keywords】: Approximation algorithms; Social network services; Stochastic processes; Inference algorithms; Mathematical model; Conferences; Measurement
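One natural correction loop consistent with the seeded-matching idea above: score each candidate pair by how many neighbours the current correspondence already matches, then re-solve a maximum-weight assignment and iterate. The refinement rule and the easy isomorphic toy instance are our illustrative choices, not the paper's exact algorithm or its stochastic-block-model guarantee:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def correct_matching(A1, A2, pi, rounds=5):
    """Refine a partially correct correspondence pi (pi[u] = node of G2
    matched to node u of G1). S[u, v] counts neighbours of u currently
    mapped onto neighbours of v; re-solve the assignment until stable."""
    for _ in range(rounds):
        P = np.eye(len(pi))[pi]          # permutation matrix of pi
        S = A1 @ P @ A2                  # neighbourhood-agreement scores
        _, new_pi = linear_sum_assignment(S, maximize=True)
        if np.array_equal(new_pi, pi):
            break
        pi = new_pi
    return pi

rng = np.random.default_rng(3)
A1 = np.triu((rng.random((30, 30)) < 0.2).astype(int), 1)
A1 = A1 + A1.T                           # random undirected graph
perm = rng.permutation(30)
A2 = A1[np.ix_(perm, perm)]              # isomorphic copy of A1
pi_true = np.argsort(perm)               # ground-truth correspondence
pi0 = pi_true.copy()
pi0[:8] = rng.permutation(pi0[:8])       # corrupt some of the matches
pi_fixed = correct_matching(A1, A2, pi0)
print((pi0 != pi_true).sum(), "errors ->", (pi_fixed != pi_true).sum())
```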
【Paper Link】 【Pages】:1754-1762
【Authors】: Joffroy Beauquier ; Janna Burman ; Fabien Dufoulon ; Shay Kutten
【Abstract】: The beeping model is an extremely restrictive broadcast communication model that relies only on carrier sensing. We consider two problems in this model: (Δ+1)-vertex coloring and maximal independent set (MIS), for a network of unknown size n and unknown maximum degree Δ. Solving these problems allows one to overcome communication interference and to break symmetry, a core component of many distributed protocols. The presented results apply to general graphs, but are efficient in graphs with low edge density (sparse graphs), such as bounded-degree graphs, planar graphs, and graphs of bounded arboricity. We present O(Δ² log n + Δ³) time deterministic uniform MIS and coloring protocols, which are asymptotically time-optimal for bounded-degree graphs. Furthermore, we devise O(a² log² n + a³ log n) time MIS and coloring protocols, as well as O(a²Δ² log² n + a³Δ³ log n) time 2-hop MIS and 2-hop coloring protocols, where a is the arboricity of the communication graph. Building upon the 2-hop coloring protocols, we show how the strong CONGEST model can be simulated, and using this simulation we obtain an O(a)-coloring protocol. No results about coloring with fewer than Δ+1 colors were known until now in the beeping model.
【Keywords】: Protocols; Computational modeling; Color; Collision avoidance; Complexity theory; Conferences; Radio networks
【Paper Link】 【Pages】:1763-1771
【Authors】: Yue Cao ; Ahmed Osama Fathy Atya ; Shailendra Singh ; Zhiyun Qian ; Srikanth V. Krishnamurthy ; Thomas F. La Porta ; Prashant Krishnamurthy ; Lisa M. Marvel
【Abstract】: Eavesdroppers can exploit exposed packet headers towards attacks that profile clients and their data flows. In this paper, we propose FOG, a framework for effective header blinding using MIMO, to thwart eavesdroppers. FOG effectively tracks header bits as they traverse physical (PHY) layer sub-systems that perform functions like scrambling and interleaving. It combines multiple blinding signals for more effective and less predictable obfuscation, as compared to using a fixed blinding signal. We implement FOG on the WARP platform and demonstrate via extensive experiments that it yields better obfuscation than prior schemes that deploy full packet blinding. It causes a bit error rate (BER) of >40% at an eavesdropper if two blinding streams are sent during header transmissions. Furthermore, FOG incurs a very small throughput hit of ≈5% with one blinding stream (and 9% with two streams). Full packet blinding incurs much higher throughput hits (25% with one stream and 50% with two streams).
【Keywords】: MIMO communication; Encryption; Throughput; Receivers; Transmitting antennas; Wireless communication
【Paper Link】 【Pages】:1772-1780
【Authors】: Qian Zhou ; Mohammed Elbadry ; Fan Ye ; Yuanyuan Yang
【Abstract】: Scalable, fine-grained access control for Internet-of-Things is needed in enterprise environments, where thousands of subjects need to access possibly one to two orders of magnitude more objects. Existing solutions offer all-or-nothing access, or require all access to go through a cloud backend, greatly impeding access granularity, robustness and scale. In this paper, we propose Heracles, an IoT access control system that achieves robust, fine-grained access control at enterprise scale. Heracles adopts a capability-based approach using secure, unforgeable tokens that describe the authorizations of subjects, to either individual or collections of objects in single or bulk operations. It has a 3-tier architecture to provide centralized policy and distributed execution desired in enterprise environments, and delegated operations for responsiveness of resource-constrained objects. Extensive security analysis and performance evaluation on a testbed prove that Heracles achieves robust, responsive, fine-grained access control in large-scale enterprise environments.
【Keywords】: Access control; Permission; Robustness; Public key; Conferences; Computer architecture
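To make the capability idea concrete, here is a minimal token sketch: the controller signs a statement of who may do what to which objects, and objects verify it locally with no cloud round trip. All field names are hypothetical, and we use a shared-secret HMAC for brevity where Heracles' keyword list suggests public-key signatures:

```python
import hmac, hashlib, json, time

SECRET = b"controller-signing-key"   # held by the central policy tier

def issue_token(subject, objects, ops, ttl_s=3600):
    """Issue a capability describing what `subject` may do to `objects`.
    The HMAC makes the token unforgeable without the signing key."""
    body = {"sub": subject, "obj": sorted(objects), "ops": sorted(ops),
            "exp": int(time.time()) + ttl_s}
    payload = json.dumps(body, separators=(",", ":"), sort_keys=True).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify(payload, tag, wanted_obj, wanted_op):
    """Objects check tokens locally, with no round trip to a backend."""
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False
    body = json.loads(payload)
    return (time.time() < body["exp"] and wanted_obj in body["obj"]
            and wanted_op in body["ops"])

payload, tag = issue_token("alice", ["lamp-12", "lock-3"], ["read", "actuate"])
print(verify(payload, tag, "lock-3", "actuate"))   # True
print(verify(payload, tag, "camera-7", "read"))    # False
```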
【Paper Link】 【Pages】:1781-1789
【Authors】: Xiaoran Fan ; Zhijie Zhang ; Wade Trappe ; Yanyong Zhang ; Richard E. Howard ; Zhu Han
【Abstract】: Ensuring confidentiality of communication is fundamental to securing the operation of a wireless system, where eavesdropping is easily facilitated by the broadcast nature of the wireless medium. By applying distributed beamforming among a coalition, we show that a new approach for assuring physical layer secrecy, without requiring any knowledge about the eavesdropper or injecting any additional cover noise, is possible if the transmitters frequently perturb their phases around the proper alignment phase while transmitting messages. This approach is readily applied to amplitude-based modulation schemes, such as PAM or QAM. We present our secrecy mechanisms, prove several important secrecy properties, and develop a practical secret communication system design. We further implement and deploy a prototype that consists of 16 distributed transmitters using USRP N210s in a 20×20×3 m³ area. Sending more than 160M bits over our system to the receiver, we measure that, depending on system parameter settings, the eavesdroppers failed to decode 30%-60% of the bits across multiple locations, while the intended receiver has an estimated bit error ratio of 3×10⁻⁶.
【Keywords】: Secret communication; distributed beamforming; physical layer security
【Paper Link】 【Pages】:1790-1798
【Authors】: Kaiming Xiao ; Cheng Zhu ; Junjie Xie ; Yun Zhou ; Xianqiang Zhu ; Weiming Zhang
【Abstract】: Stealth malware, a representative tool of advanced persistent threat (APT) attacks, poses an increased threat to cyber-physical systems (CPS) in particular. Due to the use of stealthy and evasive techniques (e.g., zero-day exploits, obfuscation techniques), stealth malware usually renders conventional heavyweight countermeasures (e.g., exploit patching, specialized anti-malware programs) inapplicable. Lightweight countermeasures (e.g., containment techniques), on the other hand, can help retard the spread of stealth malware, but the ensuing side effects might violate the primary safety requirements of CPS. Hence, defenders need to find a balance between the gain and loss of deploying lightweight countermeasures. To address this challenge, we model the persistent anti-malware process as a shortest-path tree interdiction (SPTI) Stackelberg game, and introduce the safety requirements of CPS as constraints in the defender's decision model. Specifically, we first propose a static game (SSPTI), and then extend it to a multi-stage dynamic game (DSPTI) to meet the need for real-time decision making. Both games are modelled as bi-level integer programs and proved to be NP-hard. We then develop a Benders decomposition algorithm to achieve the Stackelberg Equilibrium of SSPTI. Finally, we design a model predictive control strategy that solves DSPTI approximately by sequentially solving an approximation of SSPTI. Extensive simulation results demonstrate that the proposed dynamic defense strategy can achieve a balance between fail-secure ability and fail-safe ability while retarding stealth malware propagation in CPS.
【Keywords】: Malware; Safety; Games; Security; Loss measurement; Conferences; Cyber-physical systems
【Paper Link】 【Pages】:1799-1807
【Authors】: Stefan Schmid ; Jirí Srba
【Abstract】: While automated network verification is emerging as a critical enabler to manage large complex networks, current approaches come with a high computational complexity. This paper initiates the study of communication networks whose configurations can be verified fast, namely in polynomial time. In particular, we show that in communication networks based on prefix rewriting, which include MPLS networks, important network properties such as reachability, loop-freedom, and transparency, can be verified efficiently, even in the presence of failures. This enables a fast what-if analysis, addressing a major concern of network administrators: while configuring and testing network policies for a fully functional network is challenging, ensuring policy compliance in the face of (possibly multiple) failures, is almost impossible for human administrators. At the heart of our approach lies an interesting connection to the theory of prefix rewriting systems, a subfield of language and automata theory.
【Keywords】: Routing; Multiprotocol label switching; Complexity theory; Automata; Security; Middleboxes
【Paper Link】 【Pages】:1808-1816
【Authors】: Xiaoyang Zhang ; Yuchong Hu ; Patrick P. C. Lee ; Pan Zhou
【Abstract】: To adapt to the increasing storage demands and varying storage redundancy requirements, practical distributed storage systems need to support storage scaling by relocating currently stored data to different storage nodes. However, the scaling process inevitably transfers substantial data traffic over the network. Thus, minimizing the bandwidth cost of the scaling process is critical in distributed settings. In this paper, we show that optimal storage scaling is achievable in erasure-coded distributed storage based on network coding, by allowing storage nodes to send encoded data during scaling. We formally prove the information-theoretically minimum scaling bandwidth. Based on our theoretical findings, we also build a distributed storage system prototype NCScale, which realizes network-coding-based scaling while preserving the necessary properties for practical deployment. Experiments on Amazon EC2 show that the scaling time can be reduced by up to 50% over the state-of-the-art.
【Keywords】: Bandwidth; Maintenance engineering; Redundancy; Network coding; Encoding; Reed-Solomon codes; Distributed databases
【Paper Link】 【Pages】:1817-1825
【Authors】: Masaaki Nishino ; Takeru Inoue ; Norihito Yasuda ; Shin-ichi Minato ; Masaaki Nagata
【Abstract】: Communication networks are an essential infrastructure and must be designed carefully to ensure high reliability. Identifying a highly reliable design is, however, computationally very tough since it requires that a reliability evaluation, which is known to be #P-complete, be repeated an exponential number of times. Existing studies, therefore, attempt to avoid exact optimization and reduce the computational burden by applying heuristics. Given the importance of communication networks, and to better assess the accuracy of heuristic approaches, exact optimization remains a key goal. This paper proposes an exact method for two network design problems: reliability maximization under budget constraints and cost minimization with an assurance of reliability. Our method employs a common idea to solve both problems, i.e., a best-first search algorithm that runs on decision diagrams. It employs just a single binary decision diagram (BDD) to compute the reliability of any solution; this BDD also serves as the basis of a novel heuristic function, called the cost-aware BDD heuristic, that guides the search. Numerical experiments show that our method scales well; it successfully optimizes a network with 189 links. In addition, our method reveals the poor performance of existing heuristic approaches; a well-known existing heuristic method is shown to yield a solution that offers less than half the optimal reliability.
【Keywords】: Boolean functions; Computer network reliability; Reliability engineering; Optimization; Reliability theory; Communication networks
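The quantity at the core of the paper, network reliability, is easy to state and #P-complete to compute: it is a sum over exponentially many edge-state combinations. The brute-force sketch below (a toy bridge network with made-up edge probabilities) computes by enumeration what the paper's single shared BDD evaluates compactly even for 189 links:

```python
from itertools import product

def two_terminal_reliability(nodes, edges, p, s, t):
    """Probability that s can reach t when each edge works independently
    with probability p[e]. Exponential enumeration: feasible only for
    tiny graphs, which is exactly why the paper relies on a BDD."""
    total = 0.0
    for states in product([0, 1], repeat=len(edges)):
        prob = 1.0
        for e, up in zip(edges, states):
            prob *= p[e] if up else 1 - p[e]
        # simple graph search over the working edges
        adj = {n: [] for n in nodes}
        for (u, v), up in zip(edges, states):
            if up:
                adj[u].append(v); adj[v].append(u)
        seen, stack = {s}, [s]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w); stack.append(w)
        if t in seen:
            total += prob
    return total

edges = [("a","b"), ("b","d"), ("a","c"), ("c","d"), ("b","c")]  # bridge net
p = {e: 0.9 for e in edges}
print(round(two_terminal_reliability("abcd", edges, p, "a", "d"), 4))
```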
【Paper Link】 【Pages】:1826-1834
【Authors】: Yi Cao ; Darryl Veitch
【Abstract】: We collect high-resolution timing packet data from 459 public Stratum-1 NTP servers during the leap second event of Dec. 2016, including all those participating in the NTP Pool Project, using a testbed with GPS and atomic-clock-synchronized DAG cards. We report in detail on a wide variety of anomalous behaviors found both at the NTP protocol level and in the detailed timestamp performance of the server clocks themselves, which can persist for days or even weeks after the event. Overall, only 37.3% of the servers delivered adequate performance.
【Keywords】: Conferences
【Paper Link】 【Pages】:1835-1843
【Authors】: Ming Shi ; Xiaojun Lin ; Sonia Fahmy ; Dong-Hoon Shin
【Abstract】: We investigate competitive online algorithms for online convex optimization (OCO) problems with linear in-stage costs, switching costs and ramp constraints. While OCO problems have been extensively studied in the literature, there are limited results on the corresponding online solutions that can attain small competitive ratios. We first develop a powerful computational framework that can compute an optimized competitive ratio based on the class of affine policies. Our computational framework can handle a fairly general class of costs and constraints. Compared to other competitive results in the literature, a key feature of our proposed approach is that it can handle scenarios where infeasibility may arise due to hard feasibility constraints. Second, we design a robustification procedure to produce an online algorithm that can attain good performance for both average-case and worst-case inputs. We conduct a case study on Network Functions Virtualization (NFV) orchestration and scaling to demonstrate the effectiveness of our proposed methods.
【Keywords】: Robustness; Uncertainty; Switches; Network function virtualization; Heuristic algorithms; Convex functions; Conferences
【Paper Link】 【Pages】:1844-1852
【Authors】: Igor Kadota ; Abhishek Sinha ; Eytan Modiano
【Abstract】: Age of Information (AoI) is a performance metric that captures the freshness of the information from the perspective of the destination. The AoI measures the time that elapsed since the generation of the packet that was most recently delivered to the destination. In this paper, we consider a single-hop wireless network with a number of nodes transmitting time-sensitive information to a Base Station and address the problem of minimizing the Expected Weighted Sum AoI of the network while simultaneously satisfying timely-throughput constraints from the nodes. We develop three low-complexity transmission scheduling policies that attempt to minimize AoI subject to minimum throughput requirements and evaluate their performance against the optimal policy. In particular, we develop a randomized policy, a Max-Weight policy and a Whittle's Index policy, and show that they are guaranteed to be within a factor of two, four and eight, respectively, away from the minimum AoI possible. In contrast, simulation results show that Max-Weight outperforms the other policies, both in terms of AoI and throughput, in every network configuration simulated, and achieves near optimal performance.
【Keywords】: Throughput; Monitoring; Sensor phenomena and characterization; Measurement; Optimized production technology
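A minimal simulation of a Max-Weight-style AoI policy in the spirit of the schedulers above: each slot, transmit from the node with the largest weighted expected age reduction. The node weights, channel reliabilities, and generate-at-will assumption are illustrative, and this sketch omits the paper's timely-throughput constraints:

```python
import numpy as np

rng = np.random.default_rng(4)
N, T = 4, 100_000
w = np.array([4.0, 2.0, 1.0, 1.0])   # per-node importance weights
p = np.array([0.9, 0.7, 0.8, 0.5])   # per-node channel reliability

aoi = np.ones(N)                     # age of information at the BS
total = 0.0
for _ in range(T):
    # Max-Weight-style rule: largest weighted expected age reduction.
    i = np.argmax(w * p * aoi)
    if rng.random() < p[i]:
        aoi[i] = 0.0                 # fresh delivery: age becomes 1 below
    aoi += 1.0
    total += w @ aoi
print("average weighted-sum AoI per slot:", round(total / T, 2))
```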
【Paper Link】 【Pages】:1853-1861
【Authors】: Hao Yu ; Michael J. Neely
【Abstract】: This paper considers utility-optimal power control for energy harvesting wireless devices with a finite-capacity battery. The distribution information of the underlying wireless environment and harvestable energy is unknown, and only outdated system state information is known at the device controller. This scenario shares similarity with Lyapunov opportunistic optimization and online learning but is different from both. By a novel combination of Zinkevich's online gradient learning technique and the drift-plus-penalty technique from Lyapunov opportunistic optimization, this paper proposes a learning-aided algorithm that achieves utility within O(ϵ) of the optimal, for any desired ϵ > 0, by using a battery with an O(1/ϵ) capacity. The proposed algorithm has low complexity and makes power investment decisions based on system history, without requiring knowledge of the system state or its probability distribution.
【Keywords】: Batteries; Wireless communication; Power control; Wireless sensor networks; Heuristic algorithms; Energy harvesting; Optimization
【Paper Link】 【Pages】:1862-1870
【Authors】: Jia Liu
【Abstract】: In recent years, the rapid growth of mobile data demands has introduced many stringent requirements on latency and convergence performance in wireless network optimization. To address these challenges, several momentum-based algorithms have been proposed to improve the classical queue-length-based algorithmic framework (QLA). By combining queue-length updates and one-slot weight changes (known as the first-order momentum), it has been shown that these algorithms dramatically improve delay and convergence compared to QLA, while maintaining the same throughput-optimality and low-complexity. These exciting attempts have sparked a lot of conjectures about whether it is useful to further exploit high-order momentum information to improve delay and convergence speed. In this paper, we show that the answer is yes. Specifically, we first propose a new weight updating scheme that enables the incorporation of high-order momentum. We then prove the throughput-optimality and queue-stability of the proposed high-order momentum-based approach and characterize its delay and convergence performances. Through these analytical results, we finally show that delay and convergence would continue to improve as more high-order momentum information is utilized.
【Keywords】: Optimization; Convergence; Delays; Wireless networks; Queueing analysis; Conferences
【Paper Link】 【Pages】:1871-1879
【Authors】: Zhiyuan Xu ; Jian Tang ; Jingsong Meng ; Weiyi Zhang ; Yanzhi Wang ; Chi Harold Liu ; Dejun Yang
【Abstract】: Modern communication networks have become very complicated and highly dynamic, which makes them hard to model, predict and control. In this paper, we develop a novel experience-driven approach that can learn to well control a communication network from its own experience rather than an accurate mathematical model, just as a human learns a new skill (such as driving, swimming, etc). Specifically, we, for the first time, propose to leverage emerging Deep Reinforcement Learning (DRL) for enabling model-free control in communication networks; and present a novel and highly effective DRL-based control framework, DRL-TE, for a fundamental networking problem: Traffic Engineering (TE). The proposed framework maximizes a widely-used utility function by jointly learning network environment and its dynamics, and making decisions under the guidance of powerful Deep Neural Networks (DNNs). We propose two new techniques, TE-aware exploration and actor-critic-based prioritized experience replay, to optimize the general DRL framework particularly for TE. To validate and evaluate the proposed framework, we implemented it in ns-3, and tested it comprehensively with both representative and randomly generated network topologies. Extensive packet-level simulation results show that 1) compared to several widely-used baseline methods, DRL-TE significantly reduces end-to-end delay and consistently improves the network utility, while offering better or comparable throughput; 2) DRL-TE is robust to network changes; and 3) DRL-TE consistently outperforms a state-of-the-art DRL method (for continuous control), Deep Deterministic Policy Gradient (DDPG), which, however, does not offer satisfying performance.
【Keywords】: Experience-driven Networking; Deep Reinforcement Learning; Traffic Engineering
【Paper Link】 【Pages】:1880-1888
【Authors】: Jianan Zhang ; Abhishek Sinha ; Jaime Llorca ; Antonia Maria Tulino ; Eytan Modiano
【Abstract】: Distributed computing networks, tasked with both packet transmission and processing, require the joint optimization of communication and computation resources. We develop a dynamic control policy that determines both routes and processing locations for packets upon their arrival at a distributed computing network. The proposed policy, referred to as Universal Computing Network Control (UCNC), guarantees that packets i) are processed by a specified chain of service functions, ii) follow cycle-free routes between consecutive functions, and iii) are delivered to their corresponding set of destinations via proper packet duplications. UCNC is shown to be throughput-optimal for any mix of unicast and multicast traffic, and is the first throughput-optimal policy for non-unicast traffic in distributed computing networks with both communication and computation constraints. Moreover, simulation results suggest that UCNC yields substantially lower average packet delay compared with existing control policies for unicast traffic.
【Keywords】: Unicast; Routing; Computational modeling; Process control; Cloud computing; Heuristic algorithms
【Paper Link】 【Pages】:1889-1897
【Authors】: He Huang ; Yu-e Sun ; Shigang Chen ; Shaojie Tang ; Kai Han ; Jing Yuan ; Wenjian Yang
【Abstract】: Traffic measurement in high-speed networks has many applications in improving network performance, assisting resource allocation, and detecting anomalies. In this paper, we study a new problem called k-persistent spread estimation, which measures the persistent elements in each flow, i.e., those that appear during at least k out of t measurement periods, where k and t can be arbitrarily defined in user queries. Solutions to this problem have interesting applications in network attack detection, popular content identification, user access profiling, etc. Yet, it is under-investigated, as prior work only addresses a special case under a questionable assumption. Designing an efficient and accurate k-persistent estimator requires us to use bitwise SUM (instead of the bitwise AND typical in the prior art) to join the information collected from different periods. This seemingly simple change has a fundamental impact on the mathematical process of deriving an estimator, particularly over space-saving virtual bitmaps. Based on real network traces, we show that our new estimator can accurately estimate the k-persistent spreads of flows. It also performs much better than existing work on the special case of measuring elements that appear in all periods.
【Keywords】: Traffic measurement; persistent traffic; spread estimation
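A toy rendering of the bitwise-SUM join that the abstract highlights: keep one bitmap per period, add them elementwise, and positions with a sum of at least k point to persistent elements. Hash collisions, the paper's statistical correction, and virtual-bitmap sharing across flows are deliberately ignored here; the bitmap size, workload, and thresholds are made up:

```python
import numpy as np, zlib

M, t, k = 256, 5, 4                  # bitmap bits, periods, threshold

def bit(e):
    return zlib.crc32(e.encode()) % M

# Element "p0" appears in all 5 periods and "p1" in 4; the rest churn.
periods = [{f"e{i}-{tau}" for i in range(20)} | {"p0"}
           | ({"p1"} if tau > 0 else set()) for tau in range(t)]

bitmaps = np.zeros((t, M), dtype=np.uint32)
for tau, elems in enumerate(periods):
    for e in elems:
        bitmaps[tau, bit(e)] = 1

sums = bitmaps.sum(axis=0)           # bitwise SUM join across periods
print("positions with sum >= k:", int((sums >= k).sum()))  # expect ~2
```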
【Paper Link】 【Pages】:1898-1906
【Authors】: Kehao Wang ; Lin Chen ; Jihong Yu ; Moe Win
【Abstract】: We consider the multichannel opportunistic access problem, in which a user decides, at each time slot, which of multiple Gilbert-Elliott channels to access in order to maximize his aggregated utility (e.g., the expected transmission throughput), given that the observation of the channel state is error-prone. The problem can be cast as a restless multi-armed bandit problem, which is proved to be PSPACE-hard. Given the problem's hardness, an alternative approach is to look for simple channel access policies. The Whittle index policy is a very popular heuristic for restless bandits, which is provably optimal asymptotically and has good empirical performance. In the case of imperfect observation, the traditional approach to computing the Whittle index policy cannot be applied because the channel state belief evolution is no longer linear, leaving the indexability of our problem open. In this paper, we mathematically establish indexability and derive the Whittle index in closed form, based on which the index policy can be constructed. The major technique in our analysis is a fixed-point-based approach that enables us to divide the belief information space into a series of regions and then establish a set of periodic structures of the underlying nonlinear dynamic evolving system; based on these structures, we devise a linearization scheme for each region to establish indexability and compute the Whittle index.
【Keywords】: Restless bandit; Whittle index; indexability; fixed point; nonlinear operator
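The nonlinearity that blocks the classical Whittle-index derivation comes from the Bayesian correction step under imperfect observation: prediction through the Gilbert-Elliott chain is linear in the belief, but conditioning on an error-prone observation is not. A small sketch with assumed transition and error probabilities:

```python
def predict(omega, p11, p01):
    """One-step prior: probability the channel is good next slot
    (linear in the belief omega)."""
    return omega * p11 + (1 - omega) * p01

def correct(omega, obs_good, eps):
    """Bayes update with observation error rate eps. This step is what
    makes the belief evolution nonlinear under imperfect observation."""
    like_good = (1 - eps) if obs_good else eps
    like_bad = eps if obs_good else (1 - eps)
    num = omega * like_good
    return num / (num + (1 - omega) * like_bad)

omega, p11, p01, eps = 0.5, 0.8, 0.2, 0.1
for obs in [True, True, False, True]:
    omega = correct(predict(omega, p11, p01), obs, eps)
    print(round(omega, 3))
```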
【Paper Link】 【Pages】:1907-1915
【Authors】: Ting Deng ; Jianguo Yao ; Haibing Guan
【Abstract】: Cloud service brokerage (CSB), which procures cloud services from multiple cloud service providers (CSPs) and resells them to cloud customers, has been put forward to facilitate the delivery of cloud services. However, it is challenging to address the economic issues of CSB caused by the insufficient-provisioning problem under dynamic conditions, for example, dynamic customer demands, dynamic cloud service prices, and differing availabilities of CSPs. In this paper, we propose a novel mechanism called CSB Demand Response (DR-CSB), which aims to maximize the profit of the CSB under dynamic customer demands, with respect to the capacity and availability constraints, to mitigate the insufficient-provisioning problem. To this end, we formulate an optimization problem of profit maximization for the CSB, and employ an economic demand response mechanism to allow cloud customers to adjust their consumption as cloud service prices change. Our evaluations, driven by Google cluster-usage traces, verify that DR-CSB not only helps the CSB achieve profit maximization, but also handles the impact of dynamic conditions in CSB. As the results show, the profit of the CSB with DR-CSB can increase by up to 20%, and customers also achieve a 37% aggregate cost saving, compared with the scenario without DR-CSB.
【Keywords】: cloud service brokerage; profit; demand response; cost
【Paper Link】 【Pages】:1916-1924
【Authors】: Gueyoung Jung ; Parisa Rahimzadeh ; Zhang Liu ; Sangtae Ha ; Kaustubh Joshi ; Matti A. Hiltunen
【Abstract】: VM redundancy is the foundation of resilient cloud applications. While active-active approaches combined with load balancing and autoscaling are usually resource-efficient, the stateful nature of many cloud applications often necessitates 1+1 (or 1+n) active-standby approaches. Keeping the standbys, however, can result in inefficient utilization of cloud resources. We explore an intriguing cloud-based solution, where standby VMs from active-standby applications are selectively overbooked to reduce the resources reserved for failures. The approach requires careful VM placement to avoid a situation where multiple standby VMs activate simultaneously on the same host and thus cannot get their full resource entitlement. Today's clouds do not have this visibility into applications. We rectify this situation through ShadowBox, a novel redundancy-aware VM scheduler that optimizes the placement and activation of standby VMs while assuring applications' resource entitlements. Evaluation on a large-scale cloud shows that ShadowBox can significantly improve resource utilization (by more than 2.5× compared to traditional approaches) while minimizing the impact on applications' entitlements.
【Keywords】: Cloud computing; Servers; Ear; Redundancy; Resource management; Topology; Conferences
【Paper Link】 【Pages】:1925-1933
【Authors】: Yining Zhu ; Randall A. Berry
【Abstract】: By not requiring expensive licenses, unlicensed spectrum lowers the barriers for firms to offer wireless services. However, incumbent firms may still try to erect other entry barriers. For example, recent work has highlighted how customer contracts may be used as one such barrier, by penalizing customers for switching to a new entrant. However, this work did not account for another potential benefit of unlicensed spectrum: access to this open resource may incentivize entrants to invest in new and potentially better technology. This paper studies the interaction of contracts and the incentives of firms to invest in developing new technology. We use a game-theoretic model to study this interaction and characterize the effect of contracts on economic welfare. The role of subsidies or taxes imposed by a social planner is also considered.
【Keywords】: Contracts; Investment; Wireless communication; Switches; Liquids; Quality of service; Games
【Paper Link】 【Pages】:1934-1942
【Authors】: Haoran Yu ; George Iosifidis ; Biying Shou ; Jianwei Huang
【Abstract】: Many mobile applications (abbrev. apps) reward the users who physically visit some locations tagged as POIs (places-of-interest) by the apps. In this paper, we study the POI-based collaboration between apps and venues (e.g., restaurants and cafes). On the one hand, an app charges a venue and tags the venue as a POI, which attracts users to visit the venue and potentially increases the venue's sales. On the other hand, the venue can invest in the app-related infrastructure (e.g., Wi-Fi networks and smartphone chargers), which enhances the users' experience of using the app. However, the existing POI pricing schemes of the apps (e.g., Pokemon Go and Snapchat) cannot incentivize the venue's infrastructure investment, and hence cannot achieve the most effective app-venue collaboration. We model the interactions among an app, a venue, and users by a three-stage Stackelberg game, and design an optimal two-part pricing scheme for the app. This scheme has a charge-with-subsidy structure: the app first charges the venue for becoming a POI, and then subsidizes the venue every time a user interacts with the POI. Compared with the existing pricing schemes, our two-part pricing better incentivizes the venue's investment, attracts more users to interact with the POI, and achieves a much larger app revenue. We analyze the impacts of the app's and venue's characteristics on the app's optimal revenue, and show that the apps with small and large congestion effects should collaborate with opposite types of venues.
【Keywords】: Pricing; Games; Investment; Collaboration; Advertising; Wireless fidelity
【Paper Link】 【Pages】:1943-1951
【Authors】: Satyam Agarwal ; Francesco Malandrino ; Carla-Fabiana Chiasserini ; Swades De
【Abstract】: Thanks to network slicing, 5G networks will support a variety of services in a flexible and swift manner. In this context, we seek to make high-quality, joint optimal decisions concerning the placement of VNFs across the physical hosts for realizing the services, and the allocation of CPU resources in VNFs sharing a host. To this end, we present a queuing-based system model, accounting for all the entities involved in 5G networks. Then, we propose a fast and efficient solution strategy yielding near-optimal decisions. We evaluate our approach in multiple scenarios that well represent real-world services, and find it to consistently outperform state-of-the-art alternatives and closely match the optimum.
【Keywords】: Resource management; Servers; 5G mobile communication; Computational modeling; Games; Routing; Quality of service
【Paper Link】 【Pages】:1952-1960
【Authors】: Xiaoda Jiang ; Hancheng Lu ; Chang Wen Chen
【Abstract】: Recently, non-orthogonal multiple access (NOMA) has been proposed to achieve higher spectral efficiency than conventional orthogonal multiple access. Although it has the potential to meet the increasing demands of video services, it remains challenging to provide high-performance video streaming. In this research, we investigate, for the first time, a multi-user NOMA system design for video transmission. Various NOMA systems have been proposed for data transmission in terms of throughput or reliability. However, the perceived quality, or quality of experience, of users is more critical for video transmission. Based on this observation, we design a quality-driven scalable video transmission framework with cross-layer support for multi-user NOMA. To enable low-complexity multi-user NOMA operations, a novel user grouping strategy is proposed. The key features of the proposed framework include the integration of the quality model for encoded video with the physical-layer model for NOMA transmission, and the formulation of multi-user NOMA-based video transmission as a quality-driven power allocation problem. As the problem is non-concave, a globally optimal algorithm based on its hidden monotonic property and a suboptimal algorithm with polynomial time complexity are developed. Simulation results show that the proposed multi-user NOMA system outperforms existing schemes in various video delivery scenarios.
【Keywords】: Multi-media transmission; NOMA; cross-layer framework; monotonic optimization
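For context, the textbook two-user downlink NOMA rate computation with successive interference cancellation (SIC), which underlies any such power-allocation problem. The channel gains and power split below are arbitrary; the paper's framework chooses the split to maximize perceived video quality rather than these raw rates:

```python
import numpy as np

def noma_rates(h_near, h_far, p_near, p_far, n0=1.0):
    """Two-user downlink NOMA with superposition coding. The far (weak)
    user decodes treating the near user's signal as noise; the near
    (strong) user first cancels the far user's signal via SIC."""
    r_far = np.log2(1 + p_far * h_far / (p_near * h_far + n0))
    r_near = np.log2(1 + p_near * h_near / n0)
    return r_near, r_far

# Illustrative gains and power split: most power to the weak user.
r_near, r_far = noma_rates(h_near=10.0, h_far=1.0, p_near=2.0, p_far=8.0)
print(f"near user: {r_near:.2f} b/s/Hz, far user: {r_far:.2f} b/s/Hz")
```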
【Paper Link】 【Pages】:1961-1969
【Authors】: Arash Asadi ; Sabrina Müller ; Gek Hong Sim ; Anja Klein ; Matthias Hollick
【Abstract】: Millimeter-Wave (mmWave) bands have become the de-facto candidate for 5G vehicle-to-everything (V2X), since future vehicular systems demand Gbps links to acquire the necessary sensory information for (semi-)autonomous driving. Nevertheless, the directionality of mmWave communications and its susceptibility to blockage raise serious questions about the feasibility of mmWave vehicular communications. The dynamic nature of 5G vehicular scenarios and the complexity of directional mmWave communication call for higher context-awareness and adaptability. To this aim, we propose the first online learning algorithm addressing the problem of environment-aware beam selection in mmWave vehicular systems. In particular, we model this problem as a contextual multi-armed bandit problem. Next, we propose a lightweight context-aware online learning algorithm, namely FML, with a proven performance bound and guaranteed convergence. FML exploits coarse user location information and aggregates received data to learn from and adapt to its environment. We also perform an extensive evaluation using realistic traffic patterns derived from Google Maps. Our evaluation shows that FML enables mmWave base stations to achieve near-optimal performance on average within 33 minutes of deployment by learning from the available context. Moreover, FML remains within ~5% of the optimal performance through swift adaptation to system changes such as blockage and traffic.
【Keywords】: Base stations; Structural beams; Direction-of-arrival estimation; 5G mobile communication; Long Term Evolution; Context modeling; Random variables
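A stand-in for the contextual-bandit structure just described: discretize the coarse location context into cells and run a UCB index per cell over candidate beams. This sketch uses plain UCB1 and synthetic throughput rewards; FML's actual index, data aggregation, and performance guarantees differ:

```python
import math, random

class ContextualBeamUCB:
    """UCB1 per context cell (coarse vehicle location) over beam indices."""
    def __init__(self, n_cells, n_beams):
        self.n = [[0] * n_beams for _ in range(n_cells)]     # pull counts
        self.s = [[0.0] * n_beams for _ in range(n_cells)]   # reward sums

    def choose(self, cell):
        t = sum(self.n[cell]) + 1
        best, best_idx = -1.0, 0
        for b in range(len(self.n[cell])):
            nb = self.n[cell][b]
            idx = (float("inf") if nb == 0
                   else self.s[cell][b] / nb + math.sqrt(2 * math.log(t) / nb))
            if idx > best:
                best, best_idx = idx, b
        return best_idx

    def update(self, cell, beam, throughput):
        self.n[cell][beam] += 1
        self.s[cell][beam] += throughput

# Toy run: in cell 0, beam 2 is truly best (highest mean throughput).
random.seed(5)
means = [0.3, 0.5, 0.9, 0.4]
agent = ContextualBeamUCB(n_cells=4, n_beams=4)
for _ in range(2000):
    b = agent.choose(0)
    agent.update(0, b, random.gauss(means[b], 0.1))
print("most-pulled beam in cell 0:", max(range(4), key=lambda b: agent.n[0][b]))
```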
【Paper Link】 【Pages】:1970-1978
【Authors】: Arjun Anand ; Gustavo de Veciana ; Sanjay Shakkottai
【Abstract】: Emerging 5G systems will need to efficiently support both broadband traffic (eMBB) and ultra-low-latency (URLLC) traffic. In these systems, time is divided into slots which are further sub-divided into minislots. From a scheduling perspective, eMBB resource allocations occur at slot boundaries, whereas to reduce latency URLLC traffic is pre-emptively overlapped at the minislot timescale, resulting in selective superposition/puncturing of eMBB allocations. This approach enables minimal URLLC latency at a potential rate loss to eMBB traffic. We study joint eMBB and URLLC schedulers for such systems, with the dual objectives of maximizing utility for eMBB traffic while satisfying instantaneous URLLC demands. For a linear rate loss model (loss to eMBB is linear in the amount of superposition/puncturing), we derive an optimal joint scheduler. Somewhat counter-intuitively, our results show that our dual objectives can be met by an iterative gradient scheduler for eMBB traffic that anticipates the expected loss from URLLC traffic, along with an URLLC demand scheduler that is oblivious to eMBB channel states, utility functions and allocations decisions of the eMBB scheduler. Next we consider a more general class of (convex) loss models and study optimal online joint eMBB/URLLC schedulers within the broad class of channel state dependent but time-homogeneous policies. We validate the characteristics and benefits of our schedulers via simulation.
【Keywords】: wireless scheduling; URLLC traffic; 5G systems
【Paper Link】 【Pages】:1979-1987
【Authors】: Tongqing Zhou ; Bin Xiao ; Zhiping Cai ; Ming Xu ; Xuan Liu
【Abstract】: Traditional mobile crowdsensing photo selection process focuses on selecting photos from participants to a server. The server may contain tons of photos for a certain area. A new problem is how to select a set of photos from the server to a smartphone user when the user requests to view an area (e.g., a hot spot). The challenge of the new problem is that the photo set should attain both photo coverage and view quality (e.g., with clear Points of Interest). However, contributions of these geo-tagged photos could be uncertain for a target area due to unavailable information of photo shooting direction and no reference photos. In this paper, we propose a novel and generic server-to-requester photo selection approach. Our approach leverages a utility measure to quantify the contribution of a photo set, where photos' spatial distribution and visual correlation are jointly exploited to evaluate their performance on photo coverage and view quality. Finding the photo set with the maximum utility is proven to be NP-hard. We then propose an approximation algorithm based on a greedy strategy with rigorous theoretical analysis. The effectiveness of our approach is demonstrated with real-world datasets. The results show that the proposal outperforms other approaches with much higher photo coverage and better view quality.
【Keywords】: Servers; Sensors; Correlation; Visualization; Mobile handsets; Uncertainty; Cameras
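Since exact utility maximization is NP-hard, the natural baseline is a greedy marginal-gain loop like the paper's approximation algorithm. The utility below (best view quality per covered cell, summed) is our own simplification of the paper's joint spatial-distribution and visual-correlation measure; photos, cells, and qualities are invented:

```python
def select_photos(photos, budget):
    """Greedily pick the photo with the largest marginal utility gain,
    where utility rewards covering new cells of the target area,
    weighted by per-photo view quality."""
    def utility(chosen):
        best = {}                        # cell -> best quality seen
        for ph in chosen:
            for cell in ph["cells"]:
                best[cell] = max(best.get(cell, 0.0), ph["quality"])
        return sum(best.values())
    chosen = []
    while len(chosen) < budget:
        gains = [(utility(chosen + [p]) - utility(chosen), p)
                 for p in photos if p not in chosen]
        gain, pick = max(gains, key=lambda g: g[0])
        if gain <= 0:
            break
        chosen.append(pick)
    return chosen

photos = [
    {"id": 1, "cells": {"a", "b"}, "quality": 0.9},
    {"id": 2, "cells": {"b", "c"}, "quality": 0.6},
    {"id": 3, "cells": {"c", "d"}, "quality": 0.8},
]
print([p["id"] for p in select_photos(photos, budget=2)])   # [1, 3]
```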
【Paper Link】 【Pages】:1988-1996
【Authors】: Xiaoting Tang ; Cong Wang ; Xingliang Yuan ; Qian Wang
【Abstract】: In crowd sensing, truth discovery (TD) refers to finding reliable information from noisy/biased data collected from different providers. To protect providers' data while enabling truth distillation, privacy-preserving truth discovery (PPTD) has received wide attention recently. However, all existing approaches require iterative interaction between server(s) and individual providers, which inevitably demands that all providers be always online. Otherwise, the protocol would fail or expose extra provider information. In this paper, we design and implement the first non-interactive PPTD system that completely removes the online requirement with strong privacy guarantees. Our framework follows the same two-server model as the best-known prior solution, and leverages Yao's Garbled Circuit (GC). Yet, we devise non-trivial speedup techniques for a TD-optimized implementation. Firstly, we identify reusable computations in TD to accelerate circuit generation. Secondly, we securely evaluate the burdensome non-linear functions in TD via customized approximation with preserved accuracy and improved efficiency. Thirdly, we reduce the online execution time by bridging the latest advancements in component-based GC with the various computations needed in TD. Unlike prior art, our framework does not reveal any intermediate results, and further supports "late-join" providers without protocol suspension/restart. The practical performance of our proof-of-concept implementation is verified through extensive evaluations.
【Keywords】: Protocols; Servers; Estimation; Security; Sensors; Privacy; Systems architecture
【Paper Link】 【Pages】:1997-2005
【Authors】: Qiuxi Zhu ; Md. Yusuf Sarwar Uddin ; Nalini Venkatasubramanian ; Cheng-Hsin Hsu
【Abstract】: In urban environments, mobile crowdsensing can be used to augment in-situ sensing deployments (e.g. for environmental and community monitoring) in a flexible and cost-efficient manner. The additional participation provided by crowdsensing enables improved data collection coverage and enhances timeliness of data delivery. However, as the number of participating devices/users increases, efficient management is required to handle the increased operational cost of the infrastructure and associated cloud services - exploiting spatiotemporal redundancy in sensing can help cost-efficient utilization of resources. In this paper, we develop solutions to exploit the mobility of the crowd and manage the sensing capability of participating devices to effectively meet application/user demands for hybrid urban sensing applications. Specifically, we address the spatiotemporal scheduling problem to create high-resolution maps (e.g. for pollution sensing) by developing a common framework to capture spatiotemporal impact of multiple sensor types that generate heterogeneous data at different levels of granularity. We develop an online scheduling approach that leverages the knowledge of device location and sensing capability to selectively activate nodes and sensors. We build a multi-sensor platform that enables data collection, data exchange, and node management. Prototype deployments in three different campus/community testbeds were instrumented for measurements. Traces collected from the testbeds are used to drive extensive large scale simulations. Results show that our proposed solution achieves improved data coverage and utility under data constraints with lower costs (30% fewer active nodes) than naive approaches.
【Keywords】: Sensors; Spatiotemporal phenomena; Pollution; Processor scheduling; Monitoring; Redundancy; Urban areas
【Paper Link】 【Pages】:2006-2014
【Authors】: Kechao Cai ; Xutong Liu ; Yu-Zhen Janice Chen ; John C. S. Lui
【Abstract】: Network application optimization is essential for improving the performance of an application as well as its user experience. Network application parameters are crucial in making proper decisions for network application optimization. However, many works are impractical because they assume a priori knowledge of these parameters, which are usually unknown and need to be estimated. There have been studies that consider optimizing network applications in an online learning context using multi-armed bandit models. However, existing frameworks are problematic as they only consider finding the optimal decisions to minimize the regret, while neglecting the constraint (or guarantee) requirements, which may be excessively violated. In this paper, we propose a novel online learning framework for network application optimizations with guarantees. To the best of our knowledge, we are the first to formulate the stochastic constrained multi-armed bandit model with time-varying "multi-level rewards" by taking both "regret" and "violation" into consideration. We are also the first to design a constrained bandit policy, Learning with Minimum Guarantee (LMG), with provable sub-linear regret and violation bounds. We illustrate how our framework can be applied to several emerging network application optimizations, namely, (1) opportunistic multichannel selection, (2) data-guaranteed crowdsensing, and (3) stability-guaranteed crowdsourced transcoding. To show the effectiveness of LMG in optimizing these applications with different minimum requirements, we also conduct extensive simulations comparing LMG with existing state-of-the-art policies.
【Keywords】: Optimization; Compounds; Stochastic processes; Throughput; Transcoding; Task analysis; Random processes
【Paper Link】 【Pages】:2015-2023
【Authors】: Yuchi Chen ; Wei Gong ; Jiangchuan Liu ; Yong Cui
【Abstract】: Recent years have seen various acoustic applications on mobile devices, e.g., range finding, gesture recognition, and device-to-device data transport, which use near-ultrasound signals at frequencies around 18-24 kHz. Due to the fixed low sound sample rate and hardware limitations, the highest detectable sound frequency on commercial-off-the-shelf (COTS) mobile devices is capped at 24 kHz, presenting a daunting barrier that prevents high-frequency ultrasound from benefiting acoustic applications. To bridge this gap, we present iChemo, a technology that enables COTS mobile devices to sense high-frequency ultrasound signals. Specifically, we demonstrate how to detect the power spectral density (PSD) of a high-frequency ultrasound signal by customizing the coprime sampling algorithm on COTS devices. Through our prototype and evaluation on a wide range of mobile devices, we demonstrate that iChemo can sense the PSD of ultrasound at a frequency of 60 kHz, more than twice the current detectable frequency threshold.
【Keywords】: Ultrasonic imaging; Sensors; Frequency measurement; Ultrasonic variables measurement; Smart phones; Frequency modulation
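The disambiguation principle behind coprime-style sampling can be shown with a single tone: each low sample rate observes only the tone's alias, but only the true frequency is consistent with both readings. This sketch assumes complex (unfolded) sampling and one tone, a large simplification of iChemo's PSD estimation; the rates and search cap are arbitrary:

```python
FS1, FS2 = 44_100, 48_000   # two common COTS audio rates; gcd is 300, so
F_TRUE = 60_000             # aliases disambiguate tones far below their lcm

def alias(f, fs):
    """Apparent frequency of a complex tone sampled at rate fs."""
    return f % fs

def recover(a1, a2, f_max=200_000):
    """Frequencies below f_max whose aliases match both readings; with
    suitable rates the true ultrasound tone is the unique candidate."""
    c1 = {a1 + m * FS1 for m in range(f_max // FS1 + 1)}
    c2 = {a2 + m * FS2 for m in range(f_max // FS2 + 1)}
    return sorted(c1 & c2)

print(recover(alias(F_TRUE, FS1), alias(F_TRUE, FS2)))   # [60000]
```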
【Paper Link】 【Pages】:2024-2032
【Authors】: Ning Xiao ; Panlong Yang ; Yubo Yan ; Hao Zhou ; Xiang-Yang Li
【Abstract】: Recently, several ground-breaking RF-based motion-recognition systems were proposed to detect and/or recognize macro/micro human movements. These systems often suffer from various interferences caused by multiple users moving simultaneously, resulting in extremely low recognition accuracy. To tackle this challenge, we propose a novel system, called Motion-Fi, which marries battery-free wireless backscattering and device-free sensing. Motion-Fi is an accurate, interference-tolerant motion-recognition system that counts repetitive motions without using scenario-dependent templates or profiles, and enables multiple users to perform certain motions simultaneously thanks to the relatively short transmission range of backscattered signals. Although repetitive motions are fairly well detectable from the backscattered signals in theory, in reality they become blended into various other system noises during the motion. Moreover, irregular motion patterns among users would lead to expensive computation for motion recognition. We built a backscattering wireless platform to validate our design in various scenarios over more than 6 months, incorporating different persons, distances, and orientations. In our experiments, the periodicity of motions could be recognized without any learning or training process, and such motions can be counted to within a 5% count error. With little effort in learning the patterns, our method achieves 93.1% motion-recognition accuracy for a variety of motions. Moreover, by leveraging the periodicity of motions, the recognition accuracy can be further improved to nearly 100% with only 3 repetitions. Our experiments also show that the motions of multiple persons separated by around 2 meters cause little accuracy reduction in the counting process.
【Keywords】: Backscatter; Wireless communication; Wireless sensor networks; Antennas; Wireless fidelity; Impedance; Interference
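Recognizing periodicity “without any learning or training process” is typically done via autocorrelation. The sketch below counts repetitions of a noisy periodic 1-D signal this way; it is a simplified stand-in for Motion-Fi's actual pipeline, and the sampling rate and period bounds are assumptions.

```python
import numpy as np

def count_repetitions(signal, fs, min_period=0.5, max_period=3.0):
    """Estimate the number of repetitive motions in a 1-D signal via
    autocorrelation (illustrative; not Motion-Fi's actual pipeline).
    fs: sampling rate in Hz; the period bounds in seconds are assumptions."""
    x = signal - np.mean(signal)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags
    ac /= ac[0]                                        # normalize
    lo, hi = int(min_period * fs), int(max_period * fs)
    period = lo + int(np.argmax(ac[lo:hi]))            # dominant lag
    return len(signal) / period                        # repetitions in trace

# Synthetic example: 12 arm swings at a ~1.2 s period, sampled at 100 Hz.
fs, period_s, reps = 100, 1.2, 12
t = np.arange(0, period_s * reps, 1 / fs)
rss = np.sin(2 * np.pi * t / period_s) + 0.3 * np.random.randn(len(t))
print(round(count_repetitions(rss, fs)))  # ~12
```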
【Paper Link】 【Pages】:2033-2041
【Authors】: Shimon Bitton ; Yuval Emek ; Shay Kutten
【Abstract】: This study was carried out in the context of developing technologies for a cloud that uses an optical network for internal communication. The problem addressed in this paper deals with dispatching jobs - units of work to be performed by machines on the cloud. Sending (or migrating) a job to a machine involves establishing a lightpath (à la circuit switching); this incurs a significant setup cost, but while it exists, the lightpath's capacity is very high. Hence, moving one job is about as expensive as moving a set of jobs over the same lightpath. Our goal is to develop online network dispatching algorithms for work-conserving job scheduling. That is, consider a set of jobs whose execution on some set SM of machines the dispatcher is responsible for. Any machine in the network may join or (when not executing a job) leave SM according to decisions made outside the scope of this paper. Whenever a machine joins the set, or is in the set and has just finished executing a job, it issues a request for a new job to perform, and the dispatcher must send this machine a job that has not been executed yet (if such exists). Every machine can perform any of the jobs, and each job is performed on a single machine. The main algorithmic challenge in this context boils down to the following questions: How many jobs should we send to a requesting machine (or to some intermediate storage to be distributed from there)? From the storage on which machine should these jobs be taken? The algorithms developed here are shown to be efficient in reducing the costs of establishing lightpaths. As opposed to related algorithms for delivering consumable resources (in other contexts), we prove that our online algorithms are fully competitive. We present randomized online algorithms for two different settings: in the first, it is assumed that each message requires establishing a lightpath and thus incurs a setup cost; in the second, we distinguish between messages that carry job sets and small control messages sent by the algorithm, where the latter type of message is assumed to be sent over a designated (non-optical) control plane at a negligible cost. Our algorithms are quite simple, though the analysis turns out to be rather involved. They are designed (and rigorously analyzed) for a general architecture, but would be especially efficient in fat-tree architectures - the common choice in many data centers.
【Keywords】: Dispatching; Optical fiber networks; Data centers; Cloud computing; Conferences; Industrial engineering; Electronic mail
【Paper Link】 【Pages】:2042-2050
【Authors】: Sungjin Im ; Maryam Shadloo ; Zizhan Zheng
【Abstract】: Coflow has recently been introduced to capture communication patterns that are widely observed in the cloud and in massively parallel computing. A coflow consists of a number of flows, each representing data communication from one machine to another, and is completed when all of its flows are completed. Due to its elegant abstraction of the complicated communication processes found in various parallel computing platforms, it has received significant attention. In this paper, we consider coflow scheduling with the objective of maximizing partial throughput. This objective measures the progress made on partially completed coflows before their deadline. Partially processed coflows can still be useful when their flows send out data that can be used in the next round of computation. In our measure, a coflow is processed to a certain fraction when all of its flows are processed to the same fraction or more. We consider a natural class of greedy algorithms, which we call myopic concurrent, that seek to maximize the marginal increase of the partial throughput objective at each time. We analyze the performance of our algorithm against the optimal scheduler. In fact, our result is more general, as a flow can be extended to demand various heterogeneous resources. Our experiments demonstrate our algorithm's superior performance.
【Keywords】: Task analysis; Throughput; Computational modeling; Schedules; Optimal scheduling; Cloud computing; Processor scheduling
【Paper Link】 【Pages】:2051-2059
【Authors】: Pinghui Wang ; Peng Jia ; Jing Tao ; Xiaohong Guan
【Abstract】: Mining user behaviors over high-speed links is important for applications such as network anomaly detection. Previous work focuses on monitoring anomalies such as extremely frequent users occurring within a short timeslot such as 1 minute. Little attention has been paid to detecting users with stealthy behaviors, such as persistent frequent and co-occurrence behaviors, over a long period of time at the timeslot granularity (e.g., the 1-minute granularity level). Unlike frequent users, persistent users do not necessarily occur more frequently than other users in a single timeslot, but persist and occur in a larger number of timeslots. Due to limited computation and storage resources on routers, it is prohibitive to collect massive network traffic over a long period of time. We develop an end-to-end method that solves the challenges of both long-term online traffic collection and offline user behavior analysis. To achieve this goal, we design a user embedding (UE) method to quickly build compact sketches of user-occurrence events over time. To reduce the estimation error introduced by the Bloom filter, we model UE as a sampling method and propose methods to accurately mine a variety of user behaviors from user-occurrence events rebuilt from UE sketches. In addition, we introduce another new embedding method, reversible UE (RUE), to detect persistent frequent behaviors when the monitored users' IDs are not given in advance for offline analysis. We conduct extensive experiments on real-world traffic, and the results demonstrate that our methods significantly outperform state-of-the-art methods.
【Keywords】: sketch; anomaly detection; data streams; network traffic monitoring
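A simple way to picture the timeslot-granularity sketching described above is one Bloom filter per timeslot: a user's persistence is the number of slot filters that contain it. The sketch below uses a plain Bloom filter for illustration; the paper's UE sketches are more compact and are modeled as a sampling method to correct the Bloom-filter error.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter (an illustrative stand-in for the paper's
    user-embedding sketches, which are more compact)."""
    def __init__(self, m=1 << 16, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m // 8)
    def _hashes(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, item):
        for h in self._hashes(item):
            self.bits[h // 8] |= 1 << (h % 8)
    def __contains__(self, item):
        return all(self.bits[h // 8] & (1 << (h % 8))
                   for h in self._hashes(item))

def persistence(user, slot_filters):
    """Number of timeslots in which `user` (probably) occurred."""
    return sum(user in bf for bf in slot_filters)

# One filter per 1-minute timeslot over an hour; a persistent user
# appears in every slot, a bursty user in only a few.
slots = [BloomFilter() for _ in range(60)]
for i, bf in enumerate(slots):
    bf.add("10.0.0.7")             # persistent user
    if i < 3:
        bf.add("10.0.0.99")        # bursty user, 3 slots only
print(persistence("10.0.0.7", slots), persistence("10.0.0.99", slots))
```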
【Paper Link】 【Pages】:2060-2068
【Authors】: Jing Chen ; Shixiong Yao ; Quan Yuan ; Kun He ; Shouling Ji ; Ruiying Du
【Abstract】: In recent years, real-world attacks against PKI have taken place frequently. For example, certificates for malicious domains issued by compromised CAs are widespread, and revoked certificates are still trusted by clients. Despite extensive research on improving the security of SSL/TLS connections, some problems remain unsolved. On one hand, although log-based schemes provide a certificate audit service to quickly detect CA misbehavior, the security and data consistency of the log servers themselves are ignored. On the other hand, revocation checking is neglected due to incomplete, insecure, and inefficient certificate revocation mechanisms. Further, existing revocation checking schemes are centralized, which introduces security bottlenecks. In this paper, we propose CertChain, a blockchain-based public and efficient audit scheme for TLS connections. Specifically, we propose a dependability-rank-based consensus protocol for our blockchain system and a new data structure to support certificate forward traceability. Furthermore, we present a method that utilizes a dual counting Bloom filter (DCBF), which eliminates false positives, to achieve economical space usage and efficient queries for certificate revocation checking. The security analysis and experimental results demonstrate that CertChain is suitable in practice with moderate overhead.
【Keywords】: Protocols; Servers; Data structures; Electronic mail; Monitoring
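The building block behind the paper's dual counting Bloom filter (DCBF) is the counting Bloom filter, which supports deletions (needed when revoked certificates expire). A minimal single-filter sketch follows; CertChain's actual DCBF pairs two filters to eliminate false positives, which this toy version does not do.

```python
import hashlib

class CountingBloomFilter:
    """Counting Bloom filter with 8-bit counters, supporting deletion.
    A single-filter sketch; the paper's DCBF combines two such filters
    to eliminate false positives."""
    def __init__(self, m=1 << 12, k=3):
        self.m, self.k, self.counters = m, k, [0] * m
    def _idx(self, item):
        return [int.from_bytes(hashlib.sha256(f"{i}:{item}".encode())
                               .digest()[:8], "big") % self.m
                for i in range(self.k)]
    def add(self, item):       # e.g., on certificate revocation
        for i in self._idx(item):
            self.counters[i] = min(self.counters[i] + 1, 255)
    def remove(self, item):    # e.g., when a revoked certificate expires
        if item in self:
            for i in self._idx(item):
                self.counters[i] = max(self.counters[i] - 1, 0)
    def __contains__(self, item):
        return all(self.counters[i] > 0 for i in self._idx(item))

cbf = CountingBloomFilter()
cbf.add("serial:0x1a2b3c")
print("serial:0x1a2b3c" in cbf)   # True -> treat certificate as revoked
cbf.remove("serial:0x1a2b3c")
print("serial:0x1a2b3c" in cbf)   # False
```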
【Paper Link】 【Pages】:2069-2077
【Authors】: Xu Zhang ; Jeffrey Knockel ; Jedidiah R. Crandall
【Abstract】: We present ONIS, a new scanning technique that can perform network measurements such as inferring TCP/IP-based trust relationships off-path, stealthily port scanning a target without using the scanner's IP address, and detecting off-path packet drops between two international hosts. These tasks typically rely on a core technique called the idle scan, a special kind of port scan that appears to come from a third machine called a zombie. The scanner learns the target's status from the zombie by using its TCP/IP side channels. Unfortunately, the idle scan assumes that the zombie has IP identifiers (IPIDs) that exhibit the now-discouraged behavior of being globally incrementing. The use of this kind of IPID counter is becoming increasingly rare in practice. Our technique, unlike the idle scan, is based on a much more advanced IPID generation scheme, that of the prevalent Linux kernel. Although Linux's IPID generation scheme is specifically intended to reduce information flow, we show that using Linux machines as zombies in an indirect scan is still possible. ONIS achieves 87% accuracy, which is comparable to nmap's implementation of the idle scan at 86%. ONIS's much broader choice of zombies will enable it to be a widely used technique that can fulfill various network measurement tasks.
【Keywords】: Linux; IP networks; Kernel; Probes; Extraterrestrial measurements; Current measurement; Conferences
【Paper Link】 【Pages】:2078-2086
【Authors】: Abhishek Roy ; Charles A. Kamhoua ; Prasant Mohapatra
【Abstract】: Recent observations have shown that most attacks are the fruit of collaboration among attackers. In this work we develop a coalition formation game to model the collusive behavior among attackers. The novelty of this work is that we are the first to investigate coalition formation dynamics among attackers with different efficiencies; most related works model the attacker as a single entity. We define a new parameter called friction to represent the unwillingness of an attacker to collude. We show that the proportion of attackers in the Maximum Average Payoff Coalition (MAPC) decreases with efficiency, and that as friction increases, the size and heterogeneity of the MAPC decrease. Using text analysis of chat data from a hacker web forum, we show that the hacker collaboration network exhibits strong small-world characteristics, and we identify the leaders in these coalitions. The cluster compositions of the hacker collaboration network agree with our model. We also develop a method to estimate the friction parameters with which attackers decide the optimal coalition to join. As this model provides insight into coalition formation among attackers, e.g., leaders, composition, and homogeneity, it will be helpful for developing better defender strategies.
【Keywords】: Computer hacking; Friction; Games; Data models; Collaboration; Computational modeling; Computer crime
【Paper Link】 【Pages】:2087-2095
【Authors】: Lingchen Zhao ; Lihao Ni ; Shengshan Hu ; Yanjiao Chen ; Pan Zhou ; Fu Xiao ; Libing Wu
【Abstract】: Data mining has heralded a major breakthrough in data analysis, serving as a “super cruncher” to discover hidden information and valuable knowledge in big data systems. For many applications, the collection of big data involves various parties who are interested in pooling their private data sets together to jointly train machine-learning models that yield more accurate prediction results. However, data owners may not be willing to disclose their own data due to privacy concerns, making it imperative to provide privacy guarantees in collaborative data mining over distributed data sets. In this paper, we focus on tree-based data mining. To begin with, we design novel privacy-preserving schemes for the two most common tasks, regression and binary classification, in which individual data owners can perform training locally in a differentially private manner. Then, for the first time, we design and implement a privacy-preserving system for gradient boosting decision trees (GBDT), where regression trees trained by multiple data owners can be securely aggregated into an ensemble. We conduct extensive experiments to evaluate the performance of our system on multiple real-world data sets. The results demonstrate that our system provides strong privacy protection for individual data owners while maintaining the prediction accuracy of the original trained model.
【Keywords】: Decision trees; Boosting; Privacy; Data models
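The local differential privacy step can be illustrated by perturbing a locally trained regression tree's leaf values with Laplace noise before sharing them for aggregation. This is a sketch of the general mechanism only: the clipping bound, the sensitivity argument, and the noise scale below are assumptions, not the paper's exact scheme.

```python
import numpy as np

def privatize_leaves(leaf_values, epsilon, clip=1.0):
    """Perturb a regression tree's leaf predictions with Laplace noise
    before sharing them for aggregation (a sketch of the general idea,
    not the paper's exact mechanism).

    Clipping bounds each leaf output to [-clip, clip]; under the
    assumption that one record can shift a leaf value by at most 2*clip,
    adding Laplace(2*clip/epsilon) noise yields epsilon-differential
    privacy for the shared leaf values."""
    clipped = np.clip(leaf_values, -clip, clip)
    noise = np.random.laplace(scale=2 * clip / epsilon, size=len(clipped))
    return clipped + noise

# Each data owner trains a tree locally, privatizes its leaves, and only
# the noisy trees are aggregated into the boosted ensemble.
local_leaves = np.array([0.31, -0.12, 0.78, -0.55])
print(privatize_leaves(local_leaves, epsilon=1.0))
```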
【Paper Link】 【Pages】:2096-2104
【Authors】: Yossi Kanizo ; Ori Rottenstreich ; Itai Segall ; Jose Yallouz
【Abstract】: Enabling functionality in modern networks is achieved through the use of middleboxes, which suffer from temporary unavailability due to various reasons, such as hardware faults. We design a backup scheme that takes advantage of Network Function Virtualization (NFV), an emerging paradigm of implementing network functions in software deployed on commodity servers. We utilize the agility of software-based systems, and the gap between the resource utilization of active and standby components, to design an optimal limited-resource backup scheme. We focus on the case where a small number of middleboxes fail simultaneously, and study the backup resources required for guaranteeing full recovery from any set of failures up to some limited size. Via a novel graph-based representation, we develop a provably optimal construction of such backup schemes. Since full recovery is guaranteed, our construction does not rely on failure statistics, which are typically hard to obtain. Simulation results show that our proposed approach is applicable even for larger numbers of failures.
【Keywords】: Middleboxes; Bipartite graph; Hardware; Conferences; Probability; Computer science; Network function virtualization
【Paper Link】 【Pages】:2105-2113
【Authors】: János Tapolcai ; Balazs Vass ; Zalán Heszberger ; József Bíró ; David Hay ; Fernando A. Kuipers ; Lajos Rónyai
【Abstract】: In order to evaluate the expected availability of a service, a network administrator should consider all possible failure scenarios under the specific service availability model stipulated in the corresponding service-level agreement. Given the increase in natural disasters and malicious attacks with geographically extensive impact, considering only independent single-link failures is often insufficient. In this paper, we build a stochastic model of geographically correlated link failures caused by disasters, in order to estimate the hazards a network may be prone to and to understand the complex correlations between possible link failures. With such a model, one can quickly extract information such as the probability that an arbitrary set of links fails simultaneously, the probability that two nodes become disconnected, or the probability that a path survives a failure. Furthermore, we introduce a pre-computation process that enables us to succinctly represent the joint probability distribution of link failures. In particular, we generate, in polynomial time, a quasilinear-sized data structure with which the joint failure probability of any set of links can be computed efficiently.
【Keywords】: Computational modeling; Correlation; Shape; Stochastic processes; Hazards; Earthquakes; Conferences
【Paper Link】 【Pages】:2114-2122
【Authors】: Subhendu Khatuya ; Niloy Ganguly ; Jayanta Basak ; Madhumita Bharde ; Bivas Mitra
【Abstract】: A large population of users is affected by the sudden slowdown or shutdown of an enterprise application. System administrators and analysts spend a considerable amount of time dealing with functional and performance bugs. These problems are particularly hard to detect and diagnose in most computer systems, since there is a huge amount of system-generated supportability data (counters, logs, etc.) that needs to be analyzed. Most often, there isn't a clear or obvious root cause. Timely identification of significant changes in application behavior is very important to prevent negative impact on the service. In this paper, we present ADELE, an empirical, data-driven methodology for early detection of anomalies in data storage systems. The key features of our solution are the diligent selection of features from system logs and the development of effective machine learning techniques for anomaly prediction. ADELE learns from a system's own history to establish a baseline of normal behavior, and gives accurate indications of the time periods when something is amiss. Validation on more than 4800 actual support cases shows ~83% true positive rate and ~12% false positive rate in identifying periods when the machine is not performing normally. We also establish the existence of problem “signatures” that help map customer problems to issues already seen in the field. ADELE's early prediction capability paves the way for online failure prediction for customer systems.
【Keywords】: System Log; Anomaly Detection
【Paper Link】 【Pages】:2123-2131
【Authors】: Qingyu Liu ; Lei Deng ; Haibo Zeng ; Minghua Chen
【Abstract】: We consider the scenario where a source streams a flow at fixed rate to a receiver across a network, possibly using multiple paths. Transmission over a link incurs a delay modeled as a non-negative, non-decreasing and differentiable function of the link's aggregate transmission rate. This setting models various practical network communication scenarios. We study network delay optimization with respect to two popular metrics, namely the maximum delay and the average delay experienced by the flow. A well-known pessimistic result says that a flow cannot simultaneously achieve optimal maximum delay and optimal average delay, or even come within constant-ratio gaps of the two optimums. In this paper, we pose an optimistic note on the fundamental compatibility of the two delay metrics. Specifically, we design two polynomial-time solutions to deliver a (1-ε) fraction of the flow with maximum delay and average delay simultaneously within 1/ε of the optimums, for any ε ∈ (0,1). Hence, the two delay metrics are “largely” compatible. The ratio 1/ε is independent of the network size and link delay functions, and we show that it is tight or near-tight. Simulations based on a real-world continent-scale network topology verify our theoretical findings. Note that the proposed delay gap 1/ε, upon sacrificing an ε fraction of the flow rate, is guaranteed even in the worst theoretical case. Our simulation results show that the empirical delay gaps observed under practical settings can be much smaller than 1/ε. Our results are of particular interest to delay-centric networking applications that can tolerate a small fraction of traffic loss, including cloud video conferencing, which has recently attracted substantial attention.
【Keywords】: Delays; Optimization; Network topology; Routing; Approximation algorithms; Data models
【Paper Link】 【Pages】:2132-2140
【Authors】: Gamal Sallam ; Gagan Ray Gupta ; Bin Li ; Bo Ji
【Abstract】: With the advent of Network Function Virtualization (NFV), Physical Network Functions (PNFs) are gradually being replaced by Virtual Network Functions (VNFs) that are hosted on general-purpose servers. Depending on the call flows for specific services, packets need to pass through an ordered set of network functions (physical or virtual), called a Service Function Chain (SFC), before reaching the destination. Conceivably, for the next few years during this transition, networks will contain a mix of PNFs and VNFs, which gives rise to the interesting mix of network problems studied in this paper: (1) How can we find an SFC-constrained shortest path between any pair of nodes? (2) What is the achievable SFC-constrained maximum flow? (3) How should the VNFs be placed so that the cost (the number of nodes to be virtualized) is minimized, while the maximum flow of the original network can still be achieved even under the SFC constraint? In this work, we address these emerging questions. First, for the SFC-constrained shortest path problem, we propose a transformation of the network graph that minimizes the computational complexity of subsequent applications of any shortest path algorithm. Second, we formulate the SFC-constrained maximum flow problem as a fractional multicommodity flow problem, and develop a combinatorial algorithm for a special case of practical interest. Third, we prove that the VNF placement problem is NP-hard and present an alternative Integer Linear Programming (ILP) formulation. Finally, we conduct simulations to elucidate our theoretical results.
【Keywords】: Wide area networks; Servers; Network function virtualization; Shortest path problem; Conferences; Approximation algorithms; Integer linear programming
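For the first question, a standard way to reduce an SFC-constrained shortest path to an ordinary shortest-path computation is a layered-graph construction, where state (node, i) means “reached this node having already traversed the first i functions of the chain.” The sketch below implements that construction with Dijkstra; it may differ from the authors' exact graph transformation, and the toy topology is hypothetical.

```python
import heapq

def sfc_shortest_path(adj, funcs_at, chain, src, dst):
    """SFC-constrained shortest path via a layered-graph construction.

    adj: node -> list of (neighbor, weight); funcs_at: node -> set of
    hosted network functions; chain: required function order.
    State (node, i) means "reached node having traversed chain[:i]".
    This is the standard layering trick, not necessarily the paper's
    exact transformation."""
    start = (src, 0)
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (u, i) = heapq.heappop(pq)
        if d > dist.get((u, i), float("inf")):
            continue
        # Consume the next required function at u (zero-cost layer jump).
        if i < len(chain) and chain[i] in funcs_at.get(u, set()):
            if d < dist.get((u, i + 1), float("inf")):
                dist[(u, i + 1)] = d
                heapq.heappush(pq, (d, (u, i + 1)))
        for v, w in adj.get(u, []):
            if d + w < dist.get((v, i), float("inf")):
                dist[(v, i)] = d + w
                heapq.heappush(pq, (d + w, (v, i)))
    return dist.get((dst, len(chain)), float("inf"))

# Toy topology: traffic must pass a firewall, then a NAT, then reach d.
adj = {"s": [("m1", 1)], "m1": [("m2", 1)], "m2": [("d", 1)]}
funcs = {"m1": {"fw"}, "m2": {"nat"}}
print(sfc_shortest_path(adj, funcs, ["fw", "nat"], "s", "d"))  # 3.0
```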
【Paper Link】 【Pages】:2141-2149
【Authors】: Rongwei Yang ; Cuiying Feng ; Luning Wang ; Weiwei Wu ; Kui Wu ; Jianping Wang ; Yinlong Xu
【Abstract】: In the “network-as-a-service” paradigm, network operators have a strong need to know the metrics of critical paths running services to their users/tenants. However, it is usually prohibitive to directly measure the metrics of all such paths due to the measurement overhead. A practical solution is to use network tomography to infer the metrics of such paths based on observations from a small number of monitoring nodes. We term this the path identifiability problem, a new problem that differs substantially from existing link identifiability problems. We show that the new problem is harder than link identifiability problems, in the sense that fewer monitors are required to identify the metrics of given paths than to identify the metrics of the links along those paths. To solve the problem, we develop sufficient and necessary conditions for the identifiability of a given set of paths of interest, and design an efficient algorithm that deploys the minimum number of monitors. Experiments show savings of up to 40% in the number of monitors needed to guarantee the identifiability of a given set of paths.
【Keywords】: Network Tomography; Communication Network; Monitor Placement
【Paper Link】 【Pages】:2150-2158
【Authors】: Marcelo Caggiani Luizelli ; Danny Raz ; Yaniv Sa'ar
【Abstract】: Network Function Virtualization (NFV) is a novel paradigm that enables flexible and scalable implementation of network services on cloud infrastructure. A key factor in the success of NFV is the ability to dynamically allocate physical resources according to the demand. This is particularly important when dealing with the data plane, since additional resources are required in order to support the virtual switching of packets between Virtual Network Functions (VNFs). The exact amount of these resources depends on the way service chains are deployed and the amount of network traffic being handled. Thus, orchestrating service chains that require high traffic throughput is a very complex task, and most existing solutions either concentrate on handcrafted tuning of the servers to achieve the needed performance level, or present theoretical placement functions that assume the switching cost is part of the input. In this work, we bridge this gap by presenting a deployment algorithm for service chains that optimizes performance by minimizing the actual cost of virtual switching. The results are based on extensive measurements of the actual switching cost and the performance of service chains in a realistic NFV environment. Our evaluation indicates that this new algorithm significantly reduces virtual switching resource utilization when compared to the de facto standard placement in OpenStack/Nova, allowing a much higher acceptance ratio of network services.
【Keywords】: Servers; Switches; Virtual machine monitors; Conferences; Virtualization; Cloud computing; Task analysis
【Paper Link】 【Pages】:2159-2167
【Authors】: Rahul Vaze
【Abstract】: We consider the online knapsack problem, where items with two attributes, value and weight, arrive sequentially. Each arriving item has to be accepted or rejected irrevocably on arrival. The objective is to maximize the total value of the accepted items subject to the sum of their weights staying below a budget/capacity. Conventionally, a hard budget/capacity constraint is considered, for which a variety of results are available. In modern applications, e.g., in wireless networks, data centres, cloud computing, etc., enforcing the capacity constraint in expectation is sufficient. With this motivation, we consider the knapsack problem with an expected capacity constraint. For the special case of the knapsack problem called the secretary problem, where the weight of each item is unity, we propose an algorithm whose probability of selecting any one of the optimal items is equal to 1-1/e, and provide a matching lower bound. For the general knapsack problem, we propose an algorithm whose competitive ratio is shown to be 1/4e, significantly better than the best known competitive ratio of 1/10e for the knapsack problem with a hard capacity constraint.
【Keywords】: Approximation algorithms; Cloud computing; Resource management; Conferences; Capacity planning; Load management; Measurement
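The unit-weight special case builds on the classical secretary problem, whose textbook rule is: observe the first n/e items, then accept the first later item that beats everything seen so far, succeeding with probability about 1/e. The sketch below shows this baseline rule; the paper's policy for the expected-capacity setting is different (and stronger, achieving 1-1/e selection probability per optimal item).

```python
import math
import random

def classical_secretary(values):
    """Classical secretary rule: observe the first n/e items, then accept
    the first later item that beats everything seen so far. Selects the
    single best item with probability ~1/e. Shown as the baseline the
    paper's expected-capacity policy departs from."""
    n = len(values)
    cutoff = int(n / math.e)
    best_seen = max(values[:cutoff], default=float("-inf"))
    for v in values[cutoff:]:
        if v > best_seen:
            return v
    return values[-1]  # forced to take the last item

# Empirical check of the ~1/e success probability.
n, trials, hits = 100, 20000, 0
for _ in range(trials):
    vals = random.sample(range(10 * n), n)  # distinct values, random order
    hits += classical_secretary(vals) == max(vals)
print(hits / trials)  # ~0.37
```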
【Paper Link】 【Pages】:2168-2176
【Authors】: Zhiwei Zhao ; Wei Dong ; Geyong Min ; Gonglong Chen ; Tao Gu ; Jiajun Bu
【Abstract】: Wireless network simulation is a fundamental service aiming to provide a controlled and repeatable environment for protocol design, performance testing, etc. Existing simulators focus on reproducing packet behaviors on individual links. However, as observed in some recent works, individual link behaviors alone are not enough to characterize protocol performance. As a result, while existing works can mimic link behaviors very closely, they often fail to simulate protocol-level performance. In this paper, we propose a novel performance-aware simulation approach that preserves not only the link-level behaviors but also the performance-level behaviors. We first devise an accurate performance model by combining link quality and spatial-temporal link correlation. Based on this performance model, we then propose a Performance-Aware Hidden Markov Model (PA-HMM), where the protocol performance is directly fed into the Markov state transitions. PA-HMM is able to simulate both link-level behaviors and high-level protocol performance. We conduct extensive testbed and simulation experiments with broadcast and anycast protocols. The results show that, compared to the state-of-the-art, 1) the performance model accurately characterizes wireless communication performance and 2) the simulated protocol performance closely matches the empirical results.
【Keywords】: Hidden Markov models; Protocols; Correlation; Measurement; Wireless networks; Markov processes
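A concrete starting point for Markov-chain link simulation is the classic two-state Gilbert-Elliott loss model, which PA-HMM generalizes by feeding protocol performance into the state transitions. The sketch below shows the basic chain; all parameter values are illustrative.

```python
import random

def gilbert_elliott(n_packets, p_gb=0.05, p_bg=0.3,
                    loss_good=0.01, loss_bad=0.6):
    """Two-state Gilbert-Elliott loss model: a 'good' and a 'bad' link
    state with different loss rates and Markov transitions between them.
    PA-HMM generalizes this kind of chain by conditioning transitions on
    protocol-level performance; the parameters here are illustrative."""
    state, trace = "good", []
    for _ in range(n_packets):
        loss = loss_good if state == "good" else loss_bad
        trace.append(random.random() >= loss)  # True = packet delivered
        if state == "good" and random.random() < p_gb:
            state = "bad"
        elif state == "bad" and random.random() < p_bg:
            state = "good"
    return trace

trace = gilbert_elliott(100000)
print("delivery ratio:", sum(trace) / len(trace))
# Losses are bursty: consecutive drops are far more likely than under an
# i.i.d. loss model with the same average rate.
```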
【Paper Link】 【Pages】:2177-2185
【Authors】: Mathieu Leconte ; Georgios S. Paschos ; Panayotis Mertikopoulos ; Ulas C. Kozat
【Abstract】: Telecommunication networks are converging to a massively distributed cloud infrastructure interconnected with software defined networks. In the envisioned architecture, services will be deployed flexibly and quickly as network slices. Our paper addresses a major bottleneck in this context, namely the challenge of computing the best resource provisioning for network slices in a robust and efficient manner. With tractability in mind, we propose a novel optimization framework which allows fine-grained resource allocation for slices both in terms of network bandwidth and cloud processing. The slices can be further provisioned and auto-scaled optimally based on a large class of utility functions in real-time. Furthermore, by tuning a slice-specific parameter, system designers can trade off traffic-fairness with computing-fairness to provide a mixed fairness strategy. We also propose an iterative algorithm based on the alternating direction method of multipliers (ADMM) that provably converges to the optimal resource allocation and we demonstrate the method's fast convergence in a wide range of quasi-stationary and dynamic settings.
【Keywords】: Resource management; Cloud computing; Bandwidth; Routing; Network slicing; Conferences; Computer architecture
【Paper Link】 【Pages】:2186-2194
【Authors】: Jin-Hee Cho ; Terrence J. Moore
【Abstract】: Percolation theory has been used to investigate network resilience by identifying a critical value of the node occupation probability at which a giant component (i.e., a largest component in a network) emerges, representing high network resilience. However, the concept of network resilience studied in percolation theory has been limited to measuring network fault tolerance in the presence of failures/attacks. In this work, we take a step toward extending the concept of network resilience beyond fault tolerance by introducing network adaptability. We consider a tactical network where a node executes multiple tasks by belonging to multiple task groups. In this type of tactical network, a node has limited resources while aiming to maximize its resource utilization and task execution without being overloaded. We investigate the network resilience and resource utilization of a tactical network under either infectious or non-infectious attacks. To mitigate the impact of failed/attacked nodes in the network, we propose a suite of adaptation strategies to deal with correlated, cascading failures caused by nodes overloaded by the additional workload introduced by other failed nodes. We conduct a comparative performance analysis of the proposed adaptation strategies and a baseline scheme with no adaptation, based on metrics including the size of the giant component, node resource utilization, the number of active tasks in execution, and adaptation cost. Our simulation results show that a large giant component does not necessarily ensure high node resource utilization and task performance in the given tactical network.
【Keywords】: Resilience; Task analysis; Resource management; Fault tolerance; Fault tolerant systems; Power system faults; Power system protection
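The percolation metric that the paper argues is insufficient on its own (giant component size after node failures) is easy to compute with union-find, as the sketch below shows for a random graph under random node removal; the graph size and occupation probabilities are arbitrary.

```python
import random

def giant_component_size(n, edges, occupied):
    """Size of the largest component among `occupied` nodes, via
    union-find. This is the basic percolation metric that the paper
    argues is insufficient on its own as a resilience measure."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        if u in occupied and v in occupied:
            parent[find(u)] = find(v)
    sizes = {}
    for x in occupied:
        r = find(x)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values(), default=0)

# Random graph with 1000 nodes; remove each node with probability 1 - p.
n = 1000
edges = [(random.randrange(n), random.randrange(n)) for _ in range(3000)]
for p in (0.9, 0.5, 0.2):
    occ = {v for v in range(n) if random.random() < p}
    print(p, giant_component_size(n, edges, occ))
```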
【Paper Link】 【Pages】:2195-2203
【Authors】: Cong Liu ; Yong Cui ; Kun Tan ; Quan Fan ; Kui Ren ; Jianping Wu
【Abstract】: The increasing deployment of middleboxes makes the middle of the network more and more complex. Today, many middleboxes operate at the application layer and offer significant network services over plain-text traffic, such as firewalling, intrusion detection, and application-layer gateways. At the same time, more and more network applications are encrypting their data transmission to protect security and privacy. Continuing to provide application-layer middlebox services in the encrypted Internet has become a critical task and a hot topic; however, the state of the art is far from deployable in real networks. In this paper, we propose a practical architecture, named PlainBox, to enable session key sharing between the communication client and the middleboxes on the network path. It employs Attribute-Based Encryption (ABE) in the key sharing protocol to support multiple chained middleboxes efficiently and securely. We develop a prototype system and apply it to popular security protocols such as TLS and SSH. We have tested our prototype system in a lab testbed as well as on real-world websites. Our results show that PlainBox introduces very little overhead and its performance makes it practically deployable.
【Keywords】: Middleboxes; Protocols; Servers; Encryption
【Paper Link】 【Pages】:2204-2212
【Authors】: Eran Assaf ; Ran Ben-Basat ; Gil Einziger ; Roy Friedman
【Abstract】: For many networking applications, recent data is more significant than older data, motivating the need for sliding window solutions. Various capabilities, such as DDoS detection and load balancing, require insights about multiple metrics, including Bloom filters, per-flow counting, count distinct, and entropy estimation. In this work, we present a unified construction that solves all of the above problems in the sliding window model. Our single solution offers a better space-to-accuracy tradeoff than the state of the art for each of these individual problems. We show this both analytically and by running it on multiple real Internet backbone and datacenter packet traces.
【Keywords】: Entropy; Approximation algorithms; Microsoft Windows; Estimation; Conferences; Memory management; Windows
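A common generic approach to sliding-window membership, useful for intuition here, is to rotate among Bloom sub-filters covering consecutive sub-windows, retiring the oldest as the window slides. The sketch below illustrates that technique only; it is not the paper's unified construction (which covers counting, count distinct, and entropy in less space), and it uses Python sets in place of bit arrays for brevity.

```python
import hashlib

class SlidingWindowBloom:
    """Approximate membership over roughly the last `window` items by
    rotating Bloom sub-filters over consecutive sub-windows. A generic
    illustration, not the paper's construction; sets stand in for the
    bit arrays a real filter would use."""
    def __init__(self, window=1000, blocks=2, m=1 << 14, k=4):
        self.block_len = window // blocks
        self.m, self.k = m, k
        self.filters = [set() for _ in range(blocks + 1)]  # +1 overlap
        self.count = 0
    def _hashes(self, item):
        return [int.from_bytes(hashlib.sha256(f"{i}:{item}".encode())
                               .digest()[:8], "big") % self.m
                for i in range(self.k)]
    def add(self, item):
        if self.count % self.block_len == 0:
            self.filters.pop(0)          # retire the oldest sub-filter
            self.filters.append(set())
        self.filters[-1].update(self._hashes(item))
        self.count += 1
    def __contains__(self, item):
        hs = self._hashes(item)
        return any(all(h in f for h in hs) for f in self.filters)

swb = SlidingWindowBloom(window=1000)
for pkt in range(2500):
    swb.add(f"flow-{pkt}")
print("flow-2400" in swb, "flow-10" in swb)  # recent: True, expired: False
```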
【Paper Link】 【Pages】:2213-2221
【Authors】: Kaiping Xue ; Xiang Zhang ; Qiudong Xia ; David S. L. Wei ; Hao Yue ; Feng Wu
【Abstract】: Information Centric Networking (ICN) has been regarded as an ideal architecture for the next-generation network to handle users' increasing demand for content delivery with in-network caching. While ICN makes better use of network resources and provides better delivery service, an effective access control mechanism is needed due to the wide dissemination of content. However, in existing solutions, making cache-enabled routers or content providers authenticate users' requests causes high computation overhead and unnecessary delay, and the straightforward use of advanced encryption algorithms increases the opportunities for DoS attacks. Besides, privacy protection and service accountability are rarely taken into account in this scenario. In this paper, we propose a secure, efficient, and accountable access control framework for ICN, called SEAF, in which authentication is performed at the network edge to block unauthorized requests at the very beginning. We adopt group signatures to achieve anonymous authentication, and use a hash chain technique to greatly reduce the overhead when users make continuous requests for the same file. Furthermore, content providers can confirm the amount of service received from the network and extract feedback information from the signatures and hash chains. Through formal security analysis and comparison with related works, we show that SEAF achieves the expected security goals and possesses more useful features. The experimental results also demonstrate that our design is efficient for routers and content providers, and introduces only slight delay for users' content retrieval.
【Keywords】: Access control; Authentication; Encryption; Delays; Privacy
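The hash chain trick for continuous requests works as follows: the user binds the tail of a one-way hash chain to a single (group-)signed initial request, then authenticates each follow-up request by revealing the next preimage, which costs the verifier one hash instead of a signature check. A minimal sketch, assuming SHA-256 and ignoring SEAF's protocol details such as binding tokens to file names:

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int):
    """Hash chain: chain[i+1] = H(chain[i]); the final value chain[n] is
    the anchor bound to the user's signed first request. The k-th
    follow-up request reveals chain[n-k], verified with a single hash
    against the previously seen value."""
    chain = [seed]
    for _ in range(n):
        chain.append(H(chain[-1]))
    return chain

chain = make_chain(b"user-secret-seed", n=100)
last_seen = chain[-1]              # anchor from the signed first request
for k in range(1, 4):              # three follow-up requests
    token = chain[-1 - k]
    assert H(token) == last_seen   # one hash authenticates the request
    last_seen = token
    print(f"request {k}: token verified")
```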
【Paper Link】 【Pages】:2222-2230
【Authors】: Minghui Li ; Mingxue Zhang ; Qian Wang ; Sherman S. M. Chow ; Minxin Du ; Yanjiao Chen ; Chenliang Li
【Abstract】: Image retrieval is crucial for social media sites such as Instagram to identify similar images and make recommendations for users who share similar interests. To offload the storage and computation burden of image retrieval, outsourcing to a remote cloud is now a trend. Yet, privacy concerns mandate the use of encryption before outsourcing the images, so we need a secure way of retrieving images from a not-fully-trusted server. This paper proposes InstantCryptoGram, a secure image retrieval service. We first design a new data structure called sub-simhash, which fits the inverted index used by many searchable symmetric encryption schemes. It leads to our modular solution that supports efficient similarity queries and updates over encrypted images. Our experiments on Amazon AWS EC2 over representative datasets show that our scheme is efficient and accurate in finding similar images while preserving privacy.
【Keywords】: Image retrieval; Indexes; Encryption; Servers; Feature extraction
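Sub-simhash builds on the simhash fingerprint, where each feature contributes signed bits and similar feature sets produce fingerprints at small Hamming distance. The sketch below computes the base 64-bit simhash; the sub-simhash slicing into inverted-index tokens is not shown, and the image feature names are hypothetical.

```python
import hashlib

def simhash(features, bits=64):
    """64-bit simhash: similar feature sets yield fingerprints with small
    Hamming distance. The paper's sub-simhash slices such fingerprints
    into index tokens; this sketch shows only the base hash."""
    acc = [0] * bits
    for f in features:
        h = int.from_bytes(hashlib.sha256(f.encode()).digest()[:8], "big")
        for b in range(bits):
            acc[b] += 1 if (h >> b) & 1 else -1
    return sum(1 << b for b in range(bits) if acc[b] > 0)

def hamming(a, b):
    return bin(a ^ b).count("1")

# Hypothetical image descriptors: overlapping feature sets -> close hashes.
img1 = [f"feat-{i}" for i in range(100)]
img2 = img1[:95] + [f"other-{i}" for i in range(5)]   # 95% overlap
img3 = [f"unrelated-{i}" for i in range(100)]
print(hamming(simhash(img1), simhash(img2)))  # small
print(hamming(simhash(img1), simhash(img3)))  # ~32 on average
```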
【Paper Link】 【Pages】:2231-2239
【Authors】: Agustin Formoso ; Josiah Chavula ; Amreesh Phokeer ; Arjuna Sathiaseelan ; Gareth Tyson
【Abstract】: The Internet in Africa is evolving rapidly, yet remains significantly behind other regions in terms of performance and ubiquity of access. This clearly has negative consequences for the residents of Africa, but also has implications for organisations designing future networked technologies that might see deployment in the region. This paper presents a measurement campaign methodology to explore the current state of the African Internet. Using vantage points across the continent, we perform the first large-scale mapping of inter-country delays in Africa. Our analysis reveals a number of clusters where countries have built up low-delay interconnectivity, dispelling the myth that intra-Africa communications are universally poor. Unfortunately, this does not extend to the remainder of the continent, which typically suffers from excessively high delays, often exceeding 300 ms. We find that in many cases it is faster to reach European or North American networks than those in other regions of Africa. By mapping the internetwork topology, we identify a number of shortcomings in the infrastructure, most notably an excessive reliance on intercontinental transit providers.
【Keywords】: Africa; Delays; Probes; Servers; Internet; Topology
【Paper Link】 【Pages】:2240-2248
【Authors】: Jianfeng Li ; Xiaobo Ma ; Li Guodong ; Xiapu Luo ; Junjie Zhang ; Wei Li ; Xiaohong Guan
【Abstract】: The Domain Name System (DNS) is one of the pillars of today's Internet. Due to its appealing properties, such as low data volume, wide-ranging applications, and lack of encryption, DNS traffic has been extensively utilized for network monitoring. Most existing studies of DNS traffic, however, focus on domain name reputation. Little attention has been paid to understanding and profiling what people are doing from DNS traffic, a fundamental problem in areas including Internet demographics and network behavior analysis. Consequently, simple questions like “How can we determine whether a DNS query for www.google.com indicates searching or some other behavior?” cannot be answered by existing studies. In this paper, we take the first step toward identifying user activities from raw DNS queries. We advance a multiscale hierarchical framework to tackle two practical challenges, namely behavior ambiguity and behavior polymorphism. Under this framework, a series of novel methods, such as pattern upward mapping and a multi-scale random forest classifier, are proposed to characterize and identify user activities of interest. Evaluation using both synthetic and real-world DNS traces demonstrates the effectiveness of our method.
【Keywords】: Conferences; Monitoring; Forestry; Google; Training; Internet; Encryption
【Paper Link】 【Pages】:2249-2257
【Authors】: Ioana Livadariu ; Karyn Benson ; Ahmed Elmokashfi ; Amogh Dhamdhere ; Alberto Dainotti
【Abstract】: Given the increasing scarcity of IPv4 addresses, network operators are resorting to measures to expand their address pools or prolong the life of existing addresses. One such approach is Carrier-Grade NAT (CGN), where many end-users in a network share a single public IPv4 address. There is limited data about the prevalence of CGN, despite its implications for performance, security, and ultimately, the adoption of IPv6. In this work, we present passive measurement-based techniques for detecting CGN deployments across the entire Internet, without requiring access to machines behind a CGN. Specifically, we identify patterns in how client IP addresses are observed at M-Lab servers and at the UCSD network telescope to infer whether those clients are behind a CGN. We apply our methods to data collected from 2014 to 2016 and find that CGN deployment is increasing rapidly. Overall, we infer that 4.1K autonomous systems are deploying CGN, 6 times the number inferred by the most recent studies.
【Keywords】: IP networks; Internet; Conferences; Telescopes; Autonomous systems; Organizations; Measurement
【Paper Link】 【Pages】:2258-2266
【Authors】: Minglai Shao ; Jianxin Li ; Feng Chen ; Xunxun Chen
【Abstract】: Detecting evolving anomalous subgraphs in dynamic networks is an important and challenging problem that arises in multiple applications and is NP-hard in general. The evolving nature of the problem makes most existing methods incapable of tackling it effectively and efficiently, as it involves huge search spaces and continuous changes of evolving connected subgraphs, especially when no distributional assumptions can be made about the data. This paper presents a generic, efficient framework, namely dynamic evolving anomalous subgraph scanning (dGraphScan), to address this problem. We generalize traditional nonparametric scan statistics and propose a large class of scan statistic functions for measuring the significance of evolving subgraphs in dynamic networks. Furthermore, we conduct a number of computational studies to optimize this large class of nonparametric scan statistic functions. Specifically, we first decompose each scan statistic function into a sequence of subproblems with provable guarantees, and then propose efficient approximation algorithms for each subproblem, analyzing their theoretical properties and providing rigorous approximation guarantees. Extensive experiments on three real-world datasets demonstrate that our framework outperforms state-of-the-art methods.
【Keywords】: evolving anomalous subgraphs detection; dynamic networks; nonparametric scan statistics; approximation algorithm
【Paper Link】 【Pages】:2267-2275
【Authors】: Fei Wu ; Yin Sun ; Lu Chen ; Jackie Xu ; Kannan Srinivasan ; Ness B. Shroff
【Abstract】: A fundamental challenge in wireless multicast has been how to simultaneously achieve high-throughput and low-delay for reliably serving a large number of users. In this paper, we show how to harness substantial throughput and delay gains by exploiting multi-channel resources. We develop a new scheme called Multi-Channel Moving Window Codes (MC-MWC) for multi-channel multi-session wireless multicast. The salient features of MC-MWC are three-fold. (i) High throughput: we show that MC-MWC achieves order-optimal throughput in the many-user many-channel asymptotic regime. Moreover, the number of channels required by a conventional channel-allocation based scheme is shown to be doubly-exponentially larger than that required by MC-MWC. (ii) Low delay: using large deviations theory, we show that the delay of MC-MWC decreases linearly with the number of channels, while the delay reduction of conventional schemes is no more than a finite constant. (iii) Low feedback overhead: the feedback overhead of MC-MWC is a constant that is independent of both the number of receivers in each session and the number of sessions in the network. Finally, our trace-driven simulation and numerical results validate the analytical results and show that the implementation complexity of MC-MWC is low.
【Keywords】: Receivers; Delays; Throughput; Wireless communication; Merging; Transmitters; Encoding
【Paper Link】 【Pages】:2276-2284
【Authors】: Zhichao Cao ; Jiliang Wang ; Daibo Liu ; Xin Miao ; Qiang Ma ; XuFei Mao
【Abstract】: Due to the limited energy supply of many Internet of Things (IoT) devices, asynchronous duty-cycle radio management is widely adopted to save energy. Flooding is a critical way to quickly disseminate system parameters to adapt to diverse network requirements, and concurrent broadcast enabled by the capture effect is appealing for accelerating network flooding in asynchronous duty-cycle networks. However, when the flooding payload is long, the capture effect frequently fails to hold and the performance of concurrent broadcast is far from efficient. Intuitively, senders can send short packets, each containing part of the flooding payload, to keep concurrent broadcast efficient. In practice, two challenges remain. Considering packet loss, a receiver needs an effective way to recover the entire flooding payload from several received packets as soon as possible. Moreover, given the diverse channel states of different senders, it is not easy for a sender to choose the optimal packet length in a lightweight way that guarantees high channel utilization. In this paper, we propose Chase++, a Fountain-code-based concurrent broadcast control layer that enables fast flooding in asynchronous duty-cycle networks. Chase++ uses a Fountain code to alleviate the negative influence of continuous loss of certain parts of the flooding payload, and it adaptively selects the packet length based on a local estimation of channel utilization. Specifically, Chase++ partitions the long payload into several short payload blocks, which are further encoded into many encoded payload blocks by a Fountain code. Then, using temporal and spatial features of the sampled RSS (received signal strength) sequence, a sender estimates the number of concurrent senders, determines the optimal number of encoded payload blocks per packet accordingly, and assembles the encoded blocks into packets, which the concurrent broadcast layer continuously transmits. Receivers can recover the original flooding payload after collecting enough independent encoded payload blocks. We implement Chase++ in TinyOS with TelosB nodes and evaluate it on a local testbed with 50 nodes and on the Indriya testbed with 95 nodes. The improvement in network flooding speed reaches 23.6% and 13.4%, respectively.
【Keywords】: Payloads; Receivers; Delays; Face; Channel estimation; Conferences; Schedules
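The Fountain-code principle that Chase++ relies on can be sketched with a toy LT-style code: each packet XORs a random subset of payload blocks, and a peeling decoder recovers the originals from any sufficiently large subset of packets. Real Fountain codes draw degrees from a soliton-style distribution, and Chase++ additionally sizes packets from its estimate of concurrent senders; the version below is illustrative only.

```python
import random

def lt_encode(blocks, n_packets, max_deg=3, seed=7):
    """Toy rateless (LT-style) encoder: each packet XORs a random subset
    of payload blocks. Degree choice is uniform here for simplicity."""
    rng = random.Random(seed)
    pkts = []
    for _ in range(n_packets):
        idxs = set(rng.sample(range(len(blocks)), rng.randint(1, max_deg)))
        val = 0
        for i in idxs:
            val ^= blocks[i]
        pkts.append((idxs, val))
    return pkts

def lt_decode(packets, n_blocks):
    """Peeling decoder: substitute recovered blocks until some packet has
    exactly one unknown block left, then recover it; repeat."""
    known, pkts = {}, [(set(s), v) for s, v in packets]
    progress = True
    while progress and len(known) < n_blocks:
        progress = False
        for j, (s, v) in enumerate(pkts):
            for i in list(s):
                if i in known:      # substitute already-recovered blocks
                    v ^= known[i]
                    s.discard(i)
            pkts[j] = (s, v)
            if len(s) == 1:
                known[s.pop()] = v
                progress = True
    return [known.get(i) for i in range(n_blocks)]

blocks = [random.getrandbits(32) for _ in range(8)]
received = random.sample(lt_encode(blocks, 20), 15)  # 5 packets lost
print(lt_decode(received, 8) == blocks)  # True once enough packets arrive
```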
【Paper Link】 【Pages】:2285-2293
【Authors】: Yahia Shabara ; C. Emre Koksal ; Eylem Ekici
【Abstract】: The surge in mobile broadband data demands is expected to surpass the available spectrum capacity below 6 GHz. This expectation has prompted the exploration of millimeter wave (mm-wave) frequency bands as a candidate technology for next generation wireless networks. However, numerous challenges to deploying mm-wave communication systems, including channel estimation, need to be met before practical deployments are possible. This work addresses the mm-wave channel estimation problem and treats it as a beam discovery problem in which locating beams with strong path reflectors is analogous to locating errors in linear block codes. We show that a significantly small number of measurements (compared to the original dimensions of the channel matrix) is sufficient to reliably estimate the channel. We also show that this can be achieved using a simple and energy-efficient transceiver architecture.
【Keywords】: Channel estimation; Receivers; Sparse matrices; Block codes; Sensors; Measurement uncertainty; Array signal processing
【Paper Link】 【Pages】:2294-2302
【Authors】: Jinbin Hu ; Jiawei Huang ; Wenjun Lv ; Yutao Zhou ; Jianxin Wang ; Tian He
【Abstract】: Modern data-center applications generate a diverse mix of short and long flows with different performance requirements and weaknesses. Short flows are typically delay-sensitive but suffer from head-of-line blocking and out-of-order delivery. Recent solutions prioritize short flows to meet their latency requirements, but in doing so damage the throughput-sensitive long flows. To solve these problems, we design Coding-based Adaptive Packet Spraying (CAPS), which effectively mitigates the negative impact of short and long flows on each other. To exploit the availability of multiple paths and avoid head-of-line blocking, CAPS spreads the packets of short flows across all paths, while long flows are limited to a few paths with Equal Cost Multi-Path (ECMP) routing. Meanwhile, to resolve the out-of-order problem with low overhead, CAPS encodes short flows using forward error correction (FEC) and adjusts the coding redundancy according to the blocking probability. The coding layer is deployed between the TCP and IP layers, without any modifications to the existing TCP/IP protocols. Experimental results from NS2 simulations and a Mininet implementation show that CAPS significantly reduces the average flow completion time of short flows by ~30%-70% over state-of-the-art multipath transmission schemes, and achieves high throughput for long flows with negligible traffic overhead.
【Keywords】: Data center; TCP; packet spray; multipath
【Paper Link】 【Pages】:2303-2311
【Authors】: Lingnan Gao ; George N. Rouskas
【Abstract】: Efficient allocation of resources is an essential yet challenging problem in a virtual network environment, especially in an online setting where virtual network requests may arrive, depart, or be modified in real time. Virtual network reconfiguration can help improve network performance by remapping a subset of virtual nodes or links to better align the allocation of resources with current network conditions. In this paper, we develop a virtual network reconfiguration scheme that aims to balance the load on the substrate network by dynamically reconfiguring the embedding of both virtual nodes and links. Our solution decomposes the problem into two subproblems: i) virtual node selection, for which we present a linear-programming-based fully polynomial time approximation scheme to select the virtual nodes to be migrated, and ii) virtual node remapping, for which we use a random walk on a Markov chain to select new substrate nodes for the migrated virtual nodes.
【Keywords】: Substrates; Resource management; Indium phosphide; III-V semiconductor materials; Virtualization; Conferences; Bandwidth
【Paper Link】 【Pages】:2312-2320
【Authors】: Peng-Jun Wan ; Huaqiang Yuan ; Jiliang Wang ; Ju Ren ; Yaoxue Zhang
【Abstract】: Consider a set of communication requests in a multichannel wireless network, each of which is associated with a traffic demand of at most one unit of transmission time, and a weight representing the utility if its demand is fully met. A subset of requests is said to be feasible if they can be scheduled within one unit of time. The problem Maximum-Weighted Feasible Set (MWFS) seeks a feasible subset with maximum total weight, together with a transmission schedule whose length is at most one unit of time. This paper develops an efficient O(log²α)-approximation algorithm for MWFS under the physical interference model (a.k.a. the SINR model) with a fixed monotone and sub-linear power assignment, where α is the maximum number of requests that can transmit successfully at the same time over the same channel.
【Keywords】: Interference; Approximation algorithms; Schedules; Signal to noise ratio; Conferences; Wireless communication; Silicon
【Paper Link】 【Pages】:2321-2329
【Authors】: Riccardo Cavallari ; Stavros Toumpis ; Roberto Verdone
【Abstract】: We study a mobile wireless network in which nodes travel on an infinite plane along trajectories comprised of linear segments; a single packet is traveling towards a destination located at an infinite distance, employing both wireless transmissions and sojourns on node buffers. In this setting, we calculate analytically the long-term averages of (i) the speed with which the packet travels towards its destination and (ii) the rate with which the wireless transmission cost accumulates with distance traveled. Due to the problem's complexity, we use two approximations; simulations show that the resulting errors are modest.
【Keywords】: Delay-Tolerant Routing; Geographic Routing; Mobile Wireless Network; Stochastic Geometry
【Paper Link】 【Pages】:2330-2338
【Authors】: Hamideh Ramezani ; Tooba Khan ; Özgür B. Akan
【Abstract】: Communication among neurons is a highly evolved and efficient nanoscale communication paradigm, and hence the most promising technique for biocompatible nanonetworks. This necessitates an understanding of neuro-spike communication from an information-theoretic perspective in order to reach a reference model for nanonetworks; it would also contribute to developing ICT-based diagnostic techniques for neurodegenerative diseases. Thus, in this paper, we focus on the fundamental building block of neuro-spike communication, i.e., signal transmission over a synapse, to evaluate its information transfer rate. We analyze a realistic synaptic communication model which, for the first time, encompasses the variation of vesicle release probability with time, the synaptic geometry, and the re-uptake of neurotransmitters by the pre-synaptic terminal. To achieve this objective, we formulate the mutual information between the input and output of the synapse. Then, since this communication paradigm has memory, we evaluate the average mutual information over multiple transmissions to find its overall capacity. We derive a closed-form expression for the capacity of synaptic communication and calculate the capacity-achieving input probability distribution. Finally, we determine the effects of variations in different synaptic parameters on the information capacity, and prove that the diffusion process does not decrease the information a neural response carries about the stimulus in realistic scenarios.
【Keywords】: Nanonetworks; molecular communication; neuro-spike communication; information capacity; synaptic transmission
【Paper Link】 【Pages】:2348-2356
【Authors】: Zhenhua Han ; Haisheng Tan ; Rui Wang ; Shaojie Tang ; Francis C. M. Lau
【Abstract】: Heterogeneous cellular networks (HetNets) can significantly improve spectrum efficiency by deploying low-power, low-complexity base stations (Pico-BSs) inside the coverage of macro base stations (Macro-BSs). Due to cross-tier interference, joint detection of uplink signals is widely adopted, whereby a Pico-BS can either detect uplink signals locally or forward them to the Macro-BS for processing. The latter achieves increased throughput at the cost of additional backhaul transmission. However, the existing literature often neglects the delay of the backhaul links. In this paper, we study the delay-optimal uplink scheduling problem in HetNets with limited backhaul capacity, where local signal detection and joint signal detection are scheduled in a unified delay-optimal framework. Specifically, we first prove that the problem is NP-hard and then formulate it as a Markov Decision Process. We propose an efficient and effective algorithm, called OLIUS, that can deal with the exponentially growing state and action spaces. Furthermore, OLIUS is based on online learning and does not require any prior statistical knowledge of user behavior or channel characteristics. We prove the convergence of OLIUS and derive an upper bound on its approximation error. Extensive experiments in various scenarios show that our algorithm outperforms existing methods in reducing delay and power consumption.
【Keywords】: Uplink; Interference; Delays; Scheduling; Signal detection; Macrocell networks; Base stations
【Paper Link】 【Pages】:2357-2365
【Authors】: Nishant Budhdev ; Mun Choon Chan ; Tulika Mitra
【Abstract】: In order to provide greater network capacity, the use of small base stations such as femtocells has increased to allow higher spectrum reuse. In these femtocells, base station designers have started to explore the use of general-purpose multi-core architectures to provide greater flexibility. Multi-core architectures allow power-performance trade-offs through techniques such as Dynamic Voltage and Frequency Scaling (DVFS) and power gating. In this work, we propose PR3, a power management framework based on reinforcement learning that uses both DVFS and power gating. Our approach is unique in that it introduces feedback from the network scheduler and baseband processor to the power governor, so that information about both the network and computation workloads is included in the decision making. Evaluation on a hardware platform (Odroid XU3) running a PHY LTE uplink baseband processing benchmark shows that PR3 performs well in terms of both power and latency: it saves up to 50% power while maintaining low processing latency. PR3 is also adaptive, making it effective over a wide range of traffic loads.
【Keywords】: Baseband; Femtocells; Power demand; Long Term Evolution; Computer architecture; Cellular networks
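Reinforcement-learning-based frequency selection can be pictured with a tabular Q-learning toy: states are coarse load buckets, actions are DVFS frequency levels, and the reward trades power draw against deadline misses. Everything below (states, reward shape, environment) is hypothetical and much simpler than PR3, which also applies power gating and uses feedback from the baseband processor and network scheduler.

```python
import random

# Toy Q-learning power governor: pick a frequency level given the current
# load bucket, trading power draw against processing-deadline misses.
FREQS = [0.6, 1.0, 1.4, 2.0]            # GHz levels (actions, assumed)
LOADS = range(5)                        # coarse load buckets (states)
Q = {(s, a): 0.0 for s in LOADS for a in range(len(FREQS))}
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(load, freq):
    """Hypothetical environment: quadratic power cost, plus a penalty if
    the chosen frequency cannot keep up with the offered load."""
    power = freq ** 2
    deadline_miss = 5.0 if freq < 0.4 * (load + 1) else 0.0
    return -(power + deadline_miss)

state = 0
for t in range(50000):
    a = (random.randrange(len(FREQS)) if random.random() < eps
         else max(range(len(FREQS)), key=lambda x: Q[(state, x)]))
    r = step(state, FREQS[a])
    nxt = random.choice(list(LOADS))    # i.i.d. traffic for simplicity
    best_next = max(Q[(nxt, x)] for x in range(len(FREQS)))
    Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
    state = nxt

# The learned policy picks the lowest frequency that avoids misses.
for s in LOADS:
    best = max(range(len(FREQS)), key=lambda x: Q[(s, x)])
    print(f"load bucket {s}: run at {FREQS[best]} GHz")
```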
【Paper Link】 【Pages】:2366-2374
【Authors】: Andres Garcia-Saavedra ; Xavier Costa-Pérez ; Douglas J. Leith ; George Iosifidis
【Abstract】: Virtualized Radio Access Network (vRAN) architectures constitute a promising solution for the densification needs of 5G networks, as they decouple Base Station (BS) functions from Radio Units (RUs), allowing processing power to be pooled at cost-efficient Central Units (CUs). vRAN facilitates flexible function relocation (split selection), and therefore enables splits with less stringent network requirements compared to state-of-the-art fully Centralized RAN (C-RAN) systems. In this paper, we study the important and challenging vRAN design problem. We propose a novel modeling approach and a rigorous analytical framework, FluidRAN, that minimizes RAN costs by jointly selecting the splits and the RU-CU routing paths. We also consider the increasingly relevant scenario where the RAN needs to support multi-access edge computing (MEC) services, which naturally favor distributed RAN (D-RAN) architectures. Our framework provides a joint vRAN/MEC solution that minimizes operational costs while satisfying the MEC needs. We follow a data-driven evaluation method using the topologies of 3 operational networks. Our results reveal that (i) pure C-RAN is rarely a feasible upgrade solution for existing infrastructure, (ii) FluidRAN achieves significant cost savings compared to D-RAN systems, and (iii) MEC can substantially increase the operator's cost as it pushes vRAN function placement back toward the RUs.
【Keywords】: Delays; Copper; Routing; Topology; Computational modeling; Radio access networks; 5G mobile communication
【Paper Link】 【Pages】:2375-2383
【Authors】: Anfu Zhou ; Leilei Wu ; Shaoqing Xu ; Huadong Ma ; Teng Wei ; Xinyu Zhang
【Abstract】: 60 GHz networks, with multi-Gbps bitrates, are considered the enabling technology for emerging applications such as wireless Virtual Reality (VR) and 4K/8K real-time Miracast. However, user motion, and even orientation change, can cause mis-alignment between 60 GHz transceivers' directional beams, thus causing severe link outage. Within practical 3D spaces, the combination of location and orientation dynamics leads to exponential growth of beam searching complexity, which substantially exacerbates the outage and hinders fast recovery. In this paper, we first conduct an extensive measurement study to analyze the impact of 3D motion on 60 GHz link performance, in the context of VR and Miracast applications. We find that 3D motion exhibits inherent non-predictability, so conventional beam steering solutions, which target 2D scenarios with a smaller search space and short-term motion coherence, fail in practical 3D setups. Motivated by these observations, we propose a model-driven 3D beam-steering mechanism called Orthogonal Scanner (OScan), which can maintain high performance for mobile 60 GHz links in 3D space. OScan discovers and leverages a hidden interaction between 3D beams and the spatial channel profile of 60 GHz radios, and strategically scans the 3D space so as to reduce the search latency by more than one order of magnitude. Experimental results based on a custom-built 60 GHz platform along with a trace-driven emulator demonstrate OScan's remarkable throughput gain, up to 5×, compared with the state of the art.
【Keywords】: Three-dimensional displays; Wireless communication; Beam steering; Two dimensional displays; Dynamics; Wireless sensor networks; Conferences
【Paper Link】 【Pages】:2384-2392
【Authors】: Guillermo Bielsa ; Joan Palacios ; Adrian Loch ; Daniel Steinmetzer ; Paolo Casari ; Joerg Widmer
【Abstract】: The very large bandwidth available in the 60 GHz band allows, in principle, the design of highly accurate positioning systems. Integrating such systems with standard protocols (e.g., IEEE 802.11ad) is crucial for the deployment of location-based services, but it is also challenging and limits the design choices. Another key problem is that consumer-grade 60 GHz hardware only provides coarse channel state information, and has highly irregular beam shapes due to its cost-efficient design. In this paper, we explore the location accuracy that can be achieved using such hardware, without modifying the 802.11ad standard. We consider a typical 802.11ad indoor network with multiple access points (APs). Each AP collects the coarse signal-to-noise ratio of the directional beacons that clients transmit periodically. Given the irregular beam shapes, the challenge is to relate each beacon to a set of transmission angles that allows a user to be triangulated. We design a location system based on particle filters along with linear programming and Fourier analysis. We implement and evaluate our algorithm on commercial off-the-shelf 802.11ad hardware in an office scenario with mobile human blockage. Despite the strong limitations of the hardware, our system operates in real-time and achieves sub-meter accuracy in 70% of the cases.
【Keywords】: Hardware; Signal to noise ratio; Shape; Phased arrays; Estimation; Standards
【Paper Link】 【Pages】:2393-2401
【Authors】: Morteza Hashemi ; Ashutosh Sabharwal ; C. Emre Koksal ; Ness B. Shroff
【Abstract】: In this paper, we investigate the problem of beam alignment in millimeter wave (mmWave) systems, and design an optimal algorithm to reduce the overhead. Specifically, due to directional communications, the transmitter and receiver beams need to be aligned, which incurs high delay overhead since without a priori knowledge of the transmitter/receiver location, the search space spans the entire angular domain. This is further exacerbated under dynamic conditions (e.g., moving vehicles) where the access to the base station (access point) is highly dynamic with intermittent on-off periods, requiring more frequent beam alignment and signal training. To mitigate this issue, we consider an online stochastic optimization formulation where the goal is to maximize the directivity gain (i.e., received energy) of the beam alignment policy within a time period. We exploit the inherent correlation and unimodality properties of the model, and demonstrate that contextual information improves the performance. To this end, we propose an equivalent structured Multi-Armed Bandit model to optimally exploit the exploration-exploitation tradeoff. In contrast to the classical MAB models, the contextual information makes the lower bound on regret (i.e., performance loss compared with an oracle policy) independent of the number of beams. This is a crucial property since the number of all combinations of beam patterns can be large in transceiver antenna arrays, especially in massive MIMO systems. We further provide an asymptotically optimal beam alignment algorithm, and investigate its performance via simulations.
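As a rough flavor of this bandit view of beam alignment, here is a plain UCB1 sketch over a small beam codebook. It omits exactly what the paper adds (contextual information and the correlation/unimodality structure that make regret independent of the number of beams); the reward model, codebook size, and noise are assumptions.

```python
# Plain UCB1 over beam indices; the reward model below is a made-up
# proxy for received energy and is not the paper's channel model.
import math
import random

NUM_BEAMS, HORIZON, BEST = 16, 2000, 7   # assumed codebook, horizon, best beam

def received_energy(beam):
    # Hypothetical: energy decays with angular distance to the unknown
    # best beam, plus measurement noise.
    return math.exp(-abs(beam - BEST) / 3.0) + random.gauss(0, 0.05)

counts = [0] * NUM_BEAMS
means = [0.0] * NUM_BEAMS

for t in range(1, HORIZON + 1):
    if t <= NUM_BEAMS:
        beam = t - 1   # initialization: try each beam once
    else:
        # UCB1 index: empirical mean plus a confidence bonus
        beam = max(range(NUM_BEAMS),
                   key=lambda b: means[b] + math.sqrt(2 * math.log(t) / counts[b]))
    reward = received_energy(beam)
    counts[beam] += 1
    means[beam] += (reward - means[beam]) / counts[beam]

print("most-played beam:", counts.index(max(counts)))  # should approach BEST
```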
【Keywords】: Receivers; Transmitters; Array signal processing; Antenna arrays; Base stations; Delays; Correlation
【Paper Link】 【Pages】:2402-2410
【Authors】: Joan Palacios ; Guillermo Bielsa ; Paolo Casari ; Joerg Widmer
【Abstract】: Millimeter wave (mmWave) communications are an essential component of 5G-and-beyond ultra-dense Gbit/s wireless networks, but also pose significant challenges related to the communication environment. Especially beam-training and tracking, device association, and fast handovers for highly directional mmWave links may potentially incur a high overhead. At the same time, such mechanisms would benefit greatly from accurate knowledge about the environment and device locations that can be provided through simultaneous localization and mapping (SLAM) algorithms. In this paper we tackle the above issues by proposing CLAM, a distributed mmWave SLAM algorithm that works with no initial information about the network deployment or the environment, and achieves low computational complexity thanks to a fundamental reformulation of the angle-difference-of-arrival mmWave anchor location estimation problem. All information required by CLAM is collected by a mmWave device thanks to beam training and tracking mechanisms inherent to mmWave networks, at no additional overhead. Our results show that CLAM achieves sub-meter accuracy in the great majority of cases. These results are validated via an extensive experimental measurement campaign carried out with 60-GHz mmWave hardware.
【Keywords】: Shape; Simultaneous localization and mapping; Training; Estimation; Standards; Shape measurement; Handover
【Paper Link】 【Pages】:2411-2419
【Authors】: Shuo Yang ; Kunyan Han ; Zhenzhe Zheng ; Shaojie Tang ; Fan Wu
【Abstract】: In mobile crowdsensing, finding the best match between tasks and users is crucial to ensure both the quality and effectiveness of a crowdsensing system. Existing works usually assume a centralized task assignment by the platform, without addressing the need of fine-grained personalized task matching. In this paper, we argue that it is essential to match tasks to users based on a careful characterization of both the users' preferences and reliability levels. To that end, we propose a personalized task recommender system for mobile crowdsensing, which recommends tasks to users based on a recommendation score that jointly takes each user's preference and reliability into consideration. We first present a simple but effective method to profile the users' preferences by exploiting the implicit feedback from their historical performance. Then, to profile the users' reliability levels, we formalize the problem as a semi-supervised learning model, and propose an efficient block coordinate descent algorithm to solve the problem. For some tasks that lack historical information, we further propose a matrix factorization method to infer the users' reliability on those tasks. We conduct extensive experiments to evaluate the performance of our system, and the evaluation results demonstrate that our system can achieve superior performance to our benchmarks in both user profiling and personalized task matching.
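A minimal sketch of the kind of recommendation score the abstract describes, combining a per-user preference estimate with a reliability estimate. The convex-combination form, weight, and data are assumptions; the paper's actual profiling uses implicit feedback, semi-supervised learning, and matrix factorization.

```python
# Toy recommendation score: a weighted combination of preference and
# reliability, both normalized to [0, 1]. ALPHA and the data are assumed.
ALPHA = 0.6  # weight on preference vs. reliability

def recommendation_score(preference, reliability, alpha=ALPHA):
    return alpha * preference + (1 - alpha) * reliability

# (preference, reliability) profiles of three hypothetical users on one task
profiles = {"u1": (0.9, 0.4), "u2": (0.6, 0.8), "u3": (0.3, 0.9)}
ranking = sorted(profiles, key=lambda u: recommendation_score(*profiles[u]),
                 reverse=True)
print(ranking)   # order in which the task would be recommended
```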
【Keywords】: Task analysis; Reliability; Sensors; Recommender systems; Conferences; Benchmark testing; Measurement
【Paper Link】 【Pages】:2420-2428
【Authors】: Xiong Wang ; Riheng Jia ; Xiaohua Tian ; Xiaoying Gan
【Abstract】: The crowdsensing paradigm facilitates a wide range of data collection, and great efforts have been made to address its fundamental issue of matching workers to their assigned tasks. In this paper, we reexamine this issue by considering the spatiotemporal worker mobility and task arrivals, which better fits the actual situation. Specifically, we study a location-aware and location-diversity-based dynamic crowdsensing system, where workers move over time and tasks arrive stochastically. We first exploit offline crowdsensing by proposing a combinatorial algorithm for efficiently distributing tasks to workers. After that, we mainly study online crowdsensing, and further consider the indispensable aspect of fair allocation among workers. Apart from the stochastic characteristics and discontinuous coverage, the nonlinear expectation incurred by the fairness requirement poses a new challenge. Based on Lyapunov optimization with perturbation parameters, we propose an online control policy to overcome these challenges. Thereby we can maintain system stability and achieve a time-average sensing utility arbitrarily close to the optimum. Performance evaluation on a real data set validates the proposed algorithm, where a 116% gain in fairness is achieved at the expense of a 12% loss of sensing value on average.
【Keywords】: Task analysis; Sensors; Location awareness; Resource management; Conferences; Optimization; Gallium nitride
【Paper Link】 【Pages】:2429-2437
【Authors】: Huajie Shao ; Shuochao Yao ; Yiran Zhao ; Chao Zhang ; Jinda Han ; Lance M. Kaplan ; Lu Su ; Tarek F. Abdelzaher
【Abstract】: This paper develops a constrained expectation maximization algorithm (CEM) that improves the accuracy of truth estimation in unguided social sensing applications. Unguided social sensing refers to the act of leveraging naturally occurring observations on social media as “sensor measurements”, when the sources post at will and not in response to specific sensing campaigns or surveys. A key challenge in social sensing, in general, lies in estimating the veracity of reported observations, when the sources reporting these observations are of unknown reliability and their observations themselves cannot be readily verified. This problem is known as fact-finding. Unsupervised solutions have been proposed for the fact-finding problem that explore notions of internal data consistency in order to estimate observation veracity. This paper observes that unguided social sensing gives rise to a new (and very simple) constraint that dramatically reduces the space of feasible fact-finding solutions, hence significantly improving the quality of fact-finding results. The constraint relies on a simple approximate test of source independence, applicable to unguided sensing, and incorporates information about the number of independent sources of an observation to constrain the posterior estimate of its probability of correctness. Two different approaches are developed to test the independence of sources for purposes of applying this constraint, leading to two flavors of the CEM algorithm, which we call CEM and CEM-Jaccard. We show, using both simulation and real data sets collected from Twitter, that by forcing the algorithm to converge to a solution in which the constraint is satisfied, the quality of solutions is significantly improved.
【Keywords】: social networks; truth discovery; constrained expectation maximization (CEM); estimation accuracy
【Paper Link】 【Pages】:2438-2446
【Authors】: Jian Lin ; Ming Li ; Dejun Yang ; Guoliang Xue
【Abstract】: Crowdsensing leverages the rapid growth of sensor-embedded smartphones and human mobility for pervasive information collection. To incentivize smartphone users to participate in crowdsensing, many auction-based incentive mechanisms have been proposed for both offline and online scenarios. It has been demonstrated that the Sybil attack may undermine these mechanisms. In a Sybil attack, a user illegitimately assumes multiple identities to gain benefits. Sybil-proof incentive mechanisms have been proposed for the offline scenario. However, the problem of designing Sybil-proof online incentive mechanisms for crowdsensing is still open. Compared to the offline scenario, the online scenario provides users one more dimension of flexibility, i.e., active time, to conduct Sybil attacks, which makes this problem more challenging. In this paper, we design Sybil-proof online incentive mechanisms to deter the Sybil attack in crowdsensing. Depending on users' flexibility in performing their tasks, we investigate both single-minded and multi-minded cases and propose SOS and SOM, respectively. SOS achieves computational efficiency, individual rationality, truthfulness, and Sybil-proofness. SOM achieves individual rationality, truthfulness, and Sybil-proofness. Through extensive simulations, we evaluate the performance of SOS and SOM.
【Keywords】: Task analysis; Smart phones; Sensors; Microsoft Windows; Conferences; Privacy; Cost function
【Paper Link】 【Pages】:2447-2455
【Authors】: Bainan Xia ; Srinivas Shakkottai ; Vijay Subramanian
【Abstract】: We consider a general small-scale market for agent-to-agent resource sharing, in which each agent could be either a server (seller) or a client (buyer) in each time period. In every time period, a server has a certain amount of resources that any client could consume, and randomly gets matched with a client. Our goal is to maximize the resource utilization in such an agent-to-agent market, where the agents are strategic. During each transaction, the server gets money and the client gets resources; hence, trade ratio maximization implies efficiency maximization of our system. We model the proposed market system through a Mean Field Game approach and prove the existence of a Mean Field Equilibrium, which can achieve an almost 100% trade ratio. Finally, we carry out a simulation study motivated by an agent-to-agent computing market, as well as a case study on a proposed photovoltaic market, and show that the designed market benefits both individuals and the system as a whole.
【Keywords】: Servers; Currencies; Games; Computational modeling; Switches; Conferences
【Paper Link】 【Pages】:2456-2464
【Authors】: Xiaoxi Zhang ; Chuan Wu ; Zhiyi Huang ; Zongpeng Li
【Abstract】: State-of-the-art cloud platforms adopt pay-as-you-go pricing, where users pay for the resources on demand according to occupation time. Simple and intuitive as it is, such a pricing scheme is a mismatch for new workloads today, such as large-scale machine learning, whose completion time is hard to estimate beforehand. To supplement existing cloud pricing schemes, we propose an occupation-oblivious online pricing mechanism for cloud jobs without pre-specified time durations and for users who prefer a pre-determined cost for job execution. Our strategy posts unit resource prices upon user arrival and decides a fixed charge for completing the user's job, without the need to know how long the job will occupy the requested resources. At the core of our design is a novel multi-armed bandit based online learning algorithm for estimating unknown input by exploration and exploitation of past resource sales, and for deciding resource prices to maximize the profit of the cloud provider in an online setting. Our online learning algorithm achieves low regret, sublinear in the time horizon, in terms of overall provider profit, compared with an omniscient benchmark. We also conduct trace-driven simulations to verify the efficacy of the algorithm in real-world settings.
【Keywords】: Cloud computing; Pricing; Training; Machine learning; Graphics processing units; Servers; Google
【Paper Link】 【Pages】:2465-2473
【Authors】: Boyuan He ; Haitao Xu ; Ling Jin ; Guanyu Guo ; Yan Chen ; Guangyao Weng
【Abstract】: In-app advertising has served as the major revenue source for millions of app developers in the mobile Internet ecosystem. Ad networks play an important role in app monetization by providing third-party libraries for developers to choose and embed into their apps. However, developers lack guidelines on how to choose from hundreds of ad networks and various ad features to maximize their revenues without hurting the user experience of their apps. Our work aims to uncover best practices and provide app developers guidelines on ad network selection and ad placement. To this end, we investigate 697 unique APIs from 164 ad networks which are extracted from 277,616 Android apps, develop a methodology of ad type classification based on UI interaction and behavior, and perform a large-scale measurement study of in-app ads with static analysis techniques at the API granularity. We found that developers have more choices of ad networks than several years ago. Most developers are conservative about ad placement, and about 71% of apps contain at most one ad library. In addition, the likelihood of an app containing ads depends on the app category to which it belongs. App categories featuring young audiences usually contain the most ad libraries, perhaps because of the ad tolerance of young people. Furthermore, we propose a terminology and classify mobile ads into five ad types: Embedded, Popup, Notification, Offerwall, and Floating. We found that embedded and popup ad types are popular with apps in nearly all categories. Our results also suggest that developers should embed at most 6 ad libraries into an app, beyond which app users tend to be annoyed. Also, a developer should use at most one ad network when her app is still at the initial stage, and could start using more (2 or 3) ad networks when the app becomes popular. Our research is the first to reveal the preferences of both developers and users for ad networks and ad types.
【Keywords】: Libraries; Androids; Humanoid robots; Advertising; Ecosystems; Conferences; Guidelines
【Paper Link】 【Pages】:2474-2482
【Authors】: Liang Zheng ; Carlee Joe-Wong ; Matthew Andrews ; Mung Chiang
【Abstract】: As the U.S. mobile data market matures, Internet service providers (ISPs) generally charge their users with some variation on a quota-based data plan with overage charges. Common variants include unlimited, prepaid, and usage-based data plans. However, despite a recent flurry of research on optimizing mobile data pricing, few works have considered how these data plans affect users' consumption behavior. In particular, while users with such plans have a strong incentive to plan their usage over the month, they also face uncertainty in their future data usage needs that makes such planning difficult. In this work, we develop a dynamic programming model of users' consumption decisions over the month that takes this uncertainty into account. We use this model to quantify which types of users would benefit from different types of data plans, using these conditions to extrapolate the optimal types of data plans that ISPs should offer. Our theoretical findings are complemented by numerical simulations on a dataset of user usage from a large U.S. ISP. The results help mobile users choose data plans that maximize their utility, and help ISPs gain profit by understanding user behavior when deciding which data plans to offer.
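To make the flavor of such a dynamic program concrete, here is a tiny finite-horizon sketch: each day the user observes a random demand and decides how much of it to actually consume, trading concave utility against overage fees once the quota is exhausted. The horizon, quota, fee, utility function, and demand distribution are all invented, and this is far simpler than the paper's model.

```python
# Toy finite-horizon DP for quota-constrained data consumption.
# All numbers are illustrative assumptions.
import functools

DAYS, QUOTA, FEE = 5, 6, 2.0
DEMANDS, PROB = [0, 1, 2, 3], 0.25   # i.i.d. uniform daily demand

def util(g):
    return g ** 0.5                   # concave utility of data consumed

@functools.lru_cache(maxsize=None)
def value(day, used):
    """Expected optimal utility minus overage cost from `day` onward."""
    if day == DAYS:
        return 0.0
    expected = 0.0
    for demand in DEMANDS:            # demand is observed, then usage g chosen
        best = max(
            util(g)
            - FEE * (max(0, used + g - QUOTA) - max(0, used - QUOTA))
            + value(day + 1, used + g)
            for g in range(demand + 1)  # consume at most today's demand
        )
        expected += PROB * best
    return expected

print(round(value(0, 0), 3))          # value of the plan at month start
```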
【Keywords】: Data models; Pricing; Uncertainty; Conferences; Smart phones; Stochastic processes; Web and internet services
【Paper Link】 【Pages】:2483-2491
【Authors】: Yihan Zou ; Xiaojun Lin ; Dionysios Aliprantis ; Minghua Chen
【Abstract】: The uncertainty and variability of renewable generation pose significant challenges to reliable power-grid operations. This paper designs robust online strategies for jointly operating energy storage units and fossil-fuel generators to achieve provably reliable grid operations at all times under high renewable uncertainty, without the need for renewable curtailment. In particular, we jointly consider two power system operations, namely day-ahead reliability assessment commitment (RAC) and real-time dispatch. We first extend the concept of “safe-dispatch sets” to our setting. While finding such safe-dispatch sets and checking their non-emptiness provide crucial answers to both RAC and real-time dispatch, their computation incurs high complexity in general. To develop computationally efficient solutions, we first study a single-bus case with one generator-storage pair, where we derive necessary conditions and sufficient conditions for the safe-dispatch sets. Our results reveal fundamental trade-offs between storage capacity and generator ramp-up/-down limits to ensure grid reliability. Then, for the more general multi-bus scenario, we split the net demand among virtual generator-storage pairs (VGSPs) and apply our single-bus decision strategy to each VGSP. Simulation results on an IEEE 30-bus system show that, compared with state-of-the-art solutions, our scheme requires significantly less storage to ensure reliable grid operation without any renewable curtailment.
【Keywords】: Generators; Reliability; Uncertainty; Power system reliability; Energy storage; Real-time systems; ISO
【Paper Link】 【Pages】:2492-2500
【Authors】: Yongmin Zhang ; Jiayi Chen ; Lin Cai ; Jianping Pan
【Abstract】: Connected electric vehicles (EVs) are a key component of future intelligent and green transportation systems, and the penetration of EVs depends on convenient and cost-effective charging services. In addition to charging at home or in parking lots, EVs need a charging network right off the road. This paper first focuses on the optimal charging network design for charging service providers, considering the time-varying and location-dependent demands of vehicles and the constraints of power grids. To optimize the charging station locations and the number of chargers in each station, we first model the coverage area of each possible location to estimate the dynamic charging requirements of EVs. Then, we formulate the problem as profit maximization, which is a mixed-integer program. To make the problem tractable, we investigate the features of the problem, obtain a necessary condition for deploying a charging station, and derive upper and lower bounds on the number of chargers in each station. Given this analysis, we take two steps to transform and relax the problem into a convex optimization. A fast-converging search algorithm is further proposed based on the profit of each possible location. Using real vehicle traces, simulation results show that the proposed algorithm can maximize the total profit when fewer charging stations and chargers are initially needed, which is more attractive for charging service providers.
【Keywords】: Electric Vehicles; Charging Service; Charging Station; Power Grid
【Paper Link】 【Pages】:2501-2509
【Authors】: Mingkui Wei ; Zhuo Lu ; Yufei Tang ; Xiang Lu
【Abstract】: By utilizing advanced communication technologies to facilitate power system monitoring and control, the smart grid is envisioned to be more robust and resilient against cascading failures. Although the integration of the communication network does benefit the smart grid in many aspects, such benefits should not overshadow the fact that the interdependence between the communication network and the power infrastructure makes the smart grid more vulnerable to cascading failures. Thus, it is essential to understand the impact of such cyber-physical integration with interdependence from both positive and negative perspectives. In this paper, we develop a systematic framework to analyze the benefits and drawbacks of the cyber-physical interdependence. We use theoretical analysis and system-level simulations to characterize the impact of such interdependence. We identify two phases during the progress of failure propagation in which the integrated communication network and the interdependence respectively help and hinder the mitigation of the failure, which provides practical guidance for smart grid system design and optimization.
【Keywords】: Smart grid; failure propagation; cascading failure; load shedding control; modeling and simulations
【Paper Link】 【Pages】:2510-2518
【Authors】: Jose Cordova-Garcia ; Dongliang Xie ; Xin Wang
【Abstract】: Modern communication technologies are expected to be available in future Smart Grids to enable the control of equipment over the whole power grid. In this paper, we consider such a networked control approach to address failures that may occur at any location of the grid, due to attacks or unit malfunction, and provide a wide-scale solution that prevents the failure impacts from spreading over a large area. Different from prior work that focuses on modifying power equations under the standard constraints of the power system, we estimate the impact of controlling different nodes on topological areas of the grid based on social metrics, which are derived from a graph capturing both the topological and electrical properties of the power grid. We propose a failure control algorithm for the topological containment of failures in the smart grid. Our algorithm also takes careful consideration of the impact the planned control has on the grid, to avoid inadvertently extending the failure. We show that social metrics can efficiently trade off the topological and electrical characteristics revealed by the power grid graph representation. We evaluate the performance against networked control strategies that only use power models to determine the actions to be performed at power nodes. Our results show that the proposed control scheme can effectively contain failures within their original location range.
【Keywords】: Monitoring; Smart grids; Measurement; Load modeling; Admittance; Communication networks
【Paper Link】 【Pages】:2519-2527
【Authors】: Haitao Xu ; Shuai Hao ; Alparslan Sari ; Haining Wang
【Abstract】: Today's online marketing industry has widely employed email tracking techniques, such as embedding a tiny tracking pixel, to track email opens of potential customers and measure marketing effectiveness. However, email tracking could allow miscreants to collect metadata associated with email reading without user awareness and then leverage the information for stealthy surveillance, which has raised serious privacy concerns. In this paper, we present an in-depth and comprehensive study on the privacy implications of email tracking. First, we develop an email tracking system and perform real-world tracking on hundreds of solicited crowdsourcing participants. We estimate the amount of privacy-sensitive information available from email reading, assess the privacy risks of information leakage, and demonstrate how easy it is to launch a long-term targeted surveillance attack in real scenarios by simply sending an email with tracking capability. Second, we investigate the prevalence of email tracking through a large-scale measurement, which includes more than 44,000 email samples obtained over a period of seven years. Third, we conduct a user study to understand users' perception of the privacy infringement caused by email tracking. Finally, we evaluate existing countermeasures against email tracking and propose guidelines for developing more comprehensive and fine-grained prevention solutions.
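For concreteness, the core tracking technique the study examines looks roughly like the following sketch: an HTML email embeds a 1x1 remote image whose URL carries a recipient identifier, so the sender's server logs the time, IP address, and client of every open. The domain, path, and parameter name here are hypothetical placeholders, not from the paper.

```python
# Build an HTML email containing a hypothetical 1x1 tracking pixel.
# tracker.example.com and the rid parameter are made-up placeholders.
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def build_tracked_email(sender, recipient, recipient_id):
    msg = MIMEMultipart("alternative")
    msg["From"], msg["To"], msg["Subject"] = sender, recipient, "Hello"
    pixel = ('<img src="https://tracker.example.com/open.gif?rid='
             f'{recipient_id}" width="1" height="1" alt="">')
    msg.attach(MIMEText("Hi there!", "plain"))                 # text fallback
    msg.attach(MIMEText(f"<html><body><p>Hi there!</p>{pixel}"
                        "</body></html>", "html"))
    return msg

print(build_tracked_email("a@example.com", "b@example.com", "42")
      .as_string()[:300])
```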
【Keywords】: Electronic mail; Privacy; Target tracking; Servers; Metadata; Crowdsourcing; Surveillance
【Paper Link】 【Pages】:2528-2536
【Authors】: Ruomu Hou ; Irvan Jahja ; Loi Luu ; Prateek Saxena ; Haifeng Yu
【Abstract】: In a sybil attack, an adversary creates a large number of fake identities/nodes and has them join the system. Computational puzzles have long been investigated as a possible sybil defense: If a node fails to solve the puzzle in a timely fashion, it will no longer be accepted by other nodes. However, it is still possible for a malicious node to behave in such a way that it is accepted by some honest nodes but not by other honest nodes. This results in different honest nodes having different views on which set of nodes should form the system. Such view divergence, unfortunately, breaks the overarching assumption required by many existing security protocols. Partly spurred by the growing popularity of Bitcoin, researchers have recently formalized the above view divergence problem and proposed interesting solutions (which we call view reconciliation protocols). For example, in CRYPTO 2015, Andrychowicz and Dziembowski proposed a view reconciliation protocol with Θ(N) time complexity, with N being the number of honest nodes in the system. All existing view reconciliation protocols so far have a similar Θ(N) time complexity. As this paper's main contribution, we propose a novel view reconciliation protocol with a time complexity of only Θ(ln N / ln ln N). To achieve such an exponential improvement, we aggressively exploit randomization.
【Keywords】: Protocols; Bitcoin; Time complexity; Error probability; Computer science
【Paper Link】 【Pages】:2537-2545
【Authors】: Qiang Li ; Xuan Feng ; Haining Wang ; Zhi Li ; Limin Sun
【Abstract】: An increasing number of embedded devices are connecting to the Internet at a surprising rate. Those devices usually run firmware and are exposed to the public by device search engines. Firmware in embedded devices comes from different manufacturers and product versions. More importantly, many embedded devices are still using outdated versions of firmware due to compatibility and release-time issues, raising serious security concerns. In this paper, we propose generating fine-grained fingerprints based on the subtle differences between the filesystems of various firmware images. We leverage the natural language processing technique to process the file content and the document object model to obtain the firmware fingerprint. To validate the fingerprints, we have crawled 9,716 firmware images from official websites of device vendors and conducted real-world experiments for performance evaluation. The results show that the recall and precision of the firmware fingerprints exceed 90%. Furthermore, we have deployed the prototype system on Amazon EC2 and collected firmware in online embedded devices across the IPv4 space. Our findings indicate that thousands of devices are still using vulnerable firmware on the Internet.
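As a rough illustration of filesystem-based fingerprinting (not the paper's method, which additionally applies natural language processing to file content and a document object model), the sketch below reduces an unpacked firmware filesystem to a digest over its file paths and contents, so that subtle differences between firmware versions produce different fingerprints. The path in the usage comment is hypothetical.

```python
# Coarse firmware fingerprint: hash sorted file paths plus per-file
# content hashes of an unpacked filesystem. Illustrative only.
import hashlib
import os

def firmware_fingerprint(root):
    digest = hashlib.sha256()
    for dirpath, _, filenames in sorted(os.walk(root)):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            digest.update(path.encode())          # directory structure matters...
            with open(path, "rb") as f:           # ...and so does file content
                digest.update(hashlib.sha256(f.read()).digest())
    return digest.hexdigest()

# Usage, assuming a firmware image unpacked at this hypothetical path:
# print(firmware_fingerprint("/tmp/fw_rootfs"))
```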
【Keywords】: Internet; Microprogramming; Computer security; Natural language processing; Prototypes; Cameras; Conferences
【Paper Link】 【Pages】:2546-2554
【Authors】: Nikolaos Papadis ; Sem C. Borst ; Anwar Walid ; Mohamed Grissa ; Leandros Tassiulas
【Abstract】: The Blockchain paradigm provides a popular mechanism for establishing trust and consensus in distributed environments. While Blockchain technology is currently primarily deployed in crypto-currency systems like Bitcoin, the concept is also expected to emerge as a key component of the Internet-of-Things (IoT), enabling novel applications in digital health, smart energy, asset tracking and smart transportation. As Blockchain networks evolve to industrial deployments with large numbers of geographically distributed nodes, block transfer and processing delays arise as a critical issue which may create greater potential for forks and vulnerability to adversarial attacks. Motivated by these issues, we develop stochastic network models to capture the Blockchain evolution and dynamics, and analyze the impact of the block dissemination delay and the hashing power of the member nodes on Blockchain performance, in terms of the overall block generation rate and the computational power required for launching a successful attack. The results provide useful insight into crucial design issues, e.g., how to adjust the `difficulty-of-work' in the presence of delay so as to achieve a target block generation rate or an appropriate level of immunity from adversarial attacks. We employ a combination of analytical calculations and simulation experiments to investigate both stationary and transient performance features, and demonstrate close agreement with measurements on a wide-area network testbed running the Ethereum protocol.
【Keywords】: Delays; Peer-to-peer computing; Analytical models; Stochastic processes; Bitcoin
【Paper Link】 【Pages】:2555-2563
【Authors】: Yuanxing Zhang ; Chengliang Gao ; Yangze Guo ; Kaigui Bian ; Xin Jin ; Zhi Yang ; Lingyang Song ; Jiangang Cheng ; Hu Tuo ; Xiaoming Li
【Abstract】: Decentralizing content delivery to edge devices has become a popular solution for saving CDN bandwidth consumption when CDN bandwidth is expensive. One successful realization is the hybrid CDN-P2P VoD system, where a client is allowed to request video content from a number of seeds (seed clients) in the P2P network. However, the seed scarcity problem may arise for a video resource when there is an insufficient number of seeds to satisfy requests for the video. To alleviate this problem, many commercial VoD systems have employed a video push mechanism that directly sends recently scarce video resources to randomly chosen seeds to serve more requests. However, the current video push mechanism fails to consider which videos will become scarce in the future, or to differentiate the uploading capabilities of different seeds. In this paper, we propose Proactive-Push, a video push mechanism that lowers the bandwidth consumption of the CDN by predicting future scarce videos and proactively sending them to competent seeds with strong uploading capabilities. Proactive-Push trains neural network models to correctly predict 80% of future scarce video resources and identify over 90% of competent seeds. We evaluate Proactive-Push using a trace-driven emulation and a real-world pilot deployment over a commercial VoD system. Results show that Proactive-Push can further reduce the proportion of direct downloads from the CDN by 21%, and save the CDN bandwidth cost at peak time by 18%.
【Keywords】: Video streaming; CDN; deep learning; edge computing; network traffic control
【Paper Link】 【Pages】:2564-2572
【Authors】: Xin Wang ; Yinlong Xu ; Richard T. B. Ma
【Abstract】: With the rapid growth of congestion-sensitive and data-intensive applications, traditional settlement-free peering agreements with best-effort delivery often do not meet the QoS requirements of content providers (CPs). Meanwhile, Internet access providers (IAPs) feel that revenues from end-users are not sufficient to recoup the upgrade costs of network infrastructures. Consequently, some IAPs have begun to offer CPs a new type of peering agreement, called paid peering, under which they provide CPs with better data delivery quality for a fee. In this paper, we model a network platform where an IAP makes decisions on the peering types offered to CPs and the prices charged to CPs and end-users. We study the optimal peering schemes for the IAP, i.e., to offer CPs both the paid and settlement-free peering to choose from or only one of them, as the objective is profit or welfare maximization. Our results show that 1) the IAP should always offer the paid and settlement-free peering under the profit-optimal and welfare-optimal schemes, respectively, 2) whether to simultaneously offer the other peering type is largely driven by the type of data traffic, e.g., text or video, and 3) regulators might want to encourage the IAP to allocate more network capacity to the settlement-free peering for increasing user welfare.
【Keywords】: Internet; Data models; Regulators; Pricing; Resource management; Elasticity; Sensitivity
【Paper Link】 【Pages】:2573-2581
【Authors】: Vamseedhar R. Reddyvari ; Parimal Parag ; Srinivas Shakkottai
【Abstract】: The ability of a P2P network to scale its throughput up in proportion to the arrival rate of peers has recently been shown to be crucially dependent on the chunk sharing policy employed. Some policies can result in low frequencies of a particular chunk, known as the missing chunk syndrome, which can dramatically reduce throughput and lead to instability of the system. For instance, commonly used policies that nominally “boost” the sharing of infrequent chunks, such as the well-known rarest-first algorithm, have been shown to be unstable. Recent efforts have largely focused on the careful design of boosting policies to mitigate this issue. We take a complementary viewpoint, and instead consider a policy that simply prevents the sharing of the most frequent chunk(s). Following terminology from statistics, wherein the most frequent value in a data set is called the mode, we refer to this policy as mode suppression. We prove the stability of this algorithm using Lyapunov techniques. We also design a distributed version that suppresses the mode via an estimate obtained by sampling three randomly selected peers. We show numerically that both algorithms perform well at minimizing total download times, with distributed mode suppression outperforming all the others we tested against.
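A toy sketch of the distributed mode-suppression idea follows: when choosing a chunk to fetch from a contacted peer, first estimate the globally most frequent chunk(s) by sampling three random peers (as in the abstract) and exclude them from the candidates. The chunk universe, peer populations, and fallback rule are invented for illustration.

```python
# Toy chunk-selection policy with mode suppression; everything except
# the "sample three peers" detail is an illustrative assumption.
import random
from collections import Counter

NUM_CHUNKS = 8

def estimate_mode(peers, samples=3):
    """Estimate the most frequent chunk(s) by sampling a few random peers."""
    freq = Counter()
    for p in random.sample(peers, min(samples, len(peers))):
        freq.update(p)
    top = max(freq.values())
    return {chunk for chunk, n in freq.items() if n == top}

def choose_chunk(my_chunks, peer_chunks, peers):
    """Pick a needed chunk from a contacted peer, skipping the mode."""
    mode = estimate_mode(peers)
    candidates = (peer_chunks - my_chunks) - mode
    if not candidates:                  # fall back if only the mode remains
        candidates = peer_chunks - my_chunks
    return random.choice(sorted(candidates)) if candidates else None

peers = [set(random.sample(range(NUM_CHUNKS), random.randint(1, NUM_CHUNKS)))
         for _ in range(20)]
print(choose_chunk(set(), peers[0], peers))
```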
【Keywords】: Servers; Peer-to-peer computing; Stability analysis; Throughput; Boosting; Frequency estimation; Clocks
【Paper Link】 【Pages】:2582-2590
【Authors】: Leonardo Maccari ; Lorenzo Ghiro ; Alessio Guerrieri ; Alberto Montresor ; Renato Lo Cigno
【Abstract】: Centrality metrics are a key instrument for graph analysis and play a central role in many problems related to networking, such as service placement, robustness analysis and network optimization. Betweenness centrality is one of the most popular and well-studied metrics. While distributed algorithms to compute this metric exist, they are either approximate or limited to certain topologies (directed acyclic graphs or trees). Exact distributed algorithms for betweenness centrality are computationally complex, because its calculation requires the knowledge of all possible shortest paths within the graph. In this paper we consider load centrality, a metric that usually converges to betweenness, and we present the first distributed and exact algorithm to compute it. We prove its convergence, we estimate its complexity, and we show it is directly applicable, with minimal modifications, to any distance-vector routing protocol based on Bellman-Ford. We finally implement it on top of the Babel routing protocol and show that, by exploiting centrality, we can significantly reduce Babel's convergence time upon node failure without increasing signalling overhead. Our contribution is relevant in the realm of wireless distributed networks, but the algorithm can be adopted in any distributed system where it is not possible, or computationally impractical, to reconstruct the whole network graph at each node and compute betweenness centrality with the classical approach based on Dijkstra's algorithm.
【Keywords】: Multi-hop networks; Mesh networks; Ad-hoc networks; Bellman-Ford; Load centrality; Distributed Algorithms; Failure recovery
【Paper Link】 【Pages】:2591-2599
【Authors】: Paolo Castagno ; Vincenzo Mancuso ; Matteo Sereno ; Marco Ajmone Marsan
【Abstract】: In this paper we develop a simple, yet accurate, performance model to understand if and how evolutions of traditional cellular network protocols can be exploited to allow large numbers of devices to gain control of transmission resources in smart factory radio access networks. The model results shed light on the applicability of evolved access procedures and help understand how many devices can be served per base station. In addition, considering the simultaneous presence of different traffic classes, we investigate the effectiveness of prioritised access, exploiting access class barring techniques. Our model shows that, even with the sub-millisecond time slots foreseen in LTE Advanced Pro and 5G, a base station can accommodate at most a few thousand devices while guaranteeing access latencies below 100 ms with high transmission success probabilities. This calls for a rethinking of wireless access strategies to avoid ultra-dense cell deployments within smart factory infrastructures.
【Keywords】: 5G mobile communication; Base stations; Smart manufacturing; Analytical models; Conferences; Computational modeling; Performance evaluation
【Paper Link】 【Pages】:2600-2608
【Authors】: Mingyue Ji ; Rong-Rong Chen
【Abstract】: We consider a wireless distributed computing network, where all computing nodes (workers) are connected via a wireless medium obeying the seminal protocol channel model. In particular, we focus on the MapReduce-type platform, where each worker is assigned to compute some arbitrary output functions from F input files, which are distributively cached in all workers. The overall computation is decomposed into computing a set of “Map” and “Reduce” functions across all workers. The goal is to characterize the minimum computing latency as a function of the computation load. Unlike other related works, which consider either wireline settings or restrict the communication among workers to single-hop, here we focus on the wireless scenario and do not constrain the communication schemes. We propose a data set cache strategy based on a deterministic assignment of Maximum Distance Separable (MDS)-coded data sets over all input files, and a coded multicast transmission strategy where the workers send linearly coded computing results to each other in order to collectively satisfy their assigned tasks. We show that our approach can achieve a scalable communication latency, outperform state-of-the-art schemes in the order sense, and achieve the information-theoretic outer bound within a multiplicative constant factor in practical parameter regimes.
【Keywords】: Wireless communication; Wireless sensor networks; Communication system security; Protocols; Cloud computing; Conferences
【Paper Link】 【Pages】:2609-2617
【Authors】: Madhumitha Harishankar ; Nagarjun Srinivasan ; Carlee Joe-Wong ; Patrick Tague
【Abstract】: As demand for Internet usage increases, Internet service providers (ISPs) have begun to explore pricing-based solutions to dampen data demand. However, few explicitly consider the dual problem of monetizing idle network capacity at uncongested times. PopData is a recent initiative from Verizon that does so by offering supplemental discount offers (SDOs) at these times, in which users can pay a fixed fee in exchange for unlimited data in the next hour. This work is the first of its kind to assess the benefits and viability of SDOs by modeling user and ISP decisions as a game, considering both overall monthly decisions and hour-to-hour decisions throughout the month. We first use our monthly model to show that users are generally willing to accept some SDO offers, allowing the ISP to increase its revenue. We then show that users face a complex hourly decision problem as to which SDOs they should accept over their billing cycles, since they are unaware of their exact future needs or when future SDOs will be made. The ISP faces a similarly challenging problem in deciding when to offer SDOs so as to maximize its revenue, subject to users' decisions. We develop optimal decision criteria for users and ISPs to decide whether to make or accept SDO offers. Our analysis shows that both users and ISPs can benefit from these offers, and we verify this through numerical experiments on a one-week trace of 20 cellular data users. We find that ISPs can exploit user uncertainty about when future SDOs will be made to optimize their revenue.
【Keywords】: Uncertainty; Pricing; Games; Wireless fidelity; Computational modeling; Data models; Cellular networks
【Paper Link】 【Pages】:2618-2626
【Authors】: Rashid Tahir ; Ali Raza ; Faizan Ahmad ; Jehangir Kazi ; Fareed Zaffar ; Chris Kanich ; Matthew Caesar
【Abstract】: Typosquatting is a blackhat practice that relies on human error and low-cost domain registrations to hijack legitimate traffic from well-established websites. The technique is typically used for phishing, driving traffic towards competitors or disseminating indecent or malicious content, and as such remains a concern for businesses. We take a fresh look at this well-studied phenomenon to explore why some URLs are more vulnerable to typing mistakes than others. We explore the relationship between human hand anatomy, keyboard layouts and typing mistakes using various URL datasets. We create an extensive user-centric typographical model and compute a Hardness Quotient (likelihood of mistyping) for each URL using a quantitative measure of finger and hand effort. Furthermore, our model predicts the most likely typos for each URL, which can then be defensively registered. Cross-validation against actual URL and DNS datasets suggests that this is a meaningful and effective defense mechanism.
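A hedged sketch of a keyboard-adjacency typo model in the spirit of the abstract follows: generate single-substitution typos from neighboring keys, and score how "mistypable" a string is by how many adjacent-key alternatives its characters have. The adjacency map and the scoring are simplified assumptions; the paper's model additionally accounts for hand anatomy and per-finger effort.

```python
# Simplified QWERTY adjacency model for typo generation; not the
# paper's Hardness Quotient, just an illustrative proxy for it.
ADJACENT = {
    "q": "wa", "w": "qes", "e": "wrd", "r": "etf", "t": "ryg",
    "y": "tuh", "u": "yij", "i": "uok", "o": "ipl", "p": "o",
    "a": "qsz", "s": "awdx", "d": "sefc", "f": "drgv", "g": "fthb",
    "h": "gyjn", "j": "hukm", "k": "jil", "l": "ko",
    "z": "ax", "x": "zsc", "c": "xdv", "v": "cfb", "b": "vgn",
    "n": "bhm", "m": "nj",
}

def likely_typos(url):
    """Generate single-substitution typos using neighboring keys."""
    typos = set()
    for i, ch in enumerate(url):
        for neighbor in ADJACENT.get(ch, ""):
            typos.add(url[:i] + neighbor + url[i + 1:])
    return typos

def mistype_score(url):
    """Toy proxy: more adjacent-key alternatives per character suggests
    more opportunities to mistype."""
    return sum(len(ADJACENT.get(ch, "")) for ch in url) / max(len(url), 1)

print(sorted(likely_typos("bank"))[:5])   # candidates to register defensively
print(round(mistype_score("bank"), 2))
```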
【Keywords】: Uniform resource locators; Keyboards; Layout; Measurement; Phishing; Browsers
【Paper Link】 【Pages】:2636-2644
【Authors】: Tong Yang ; Alex X. Liu ; Yulong Shen ; Qiaobin Fu ; Dagang Li ; Xiaoming Li
【Abstract】: Software-Defined Networking (SDN), which separates the control plane and data plane, is a promising new network architecture for the Future Internet. OpenFlow is the de facto standard which defines the communication protocol between the controller and switches. The most challenging issue in OpenFlow switches is the lookup of multiple OpenFlow tables. The lookup of OpenFlow tables is so complicated that state-of-the-art research is still focusing on the design of lookup pipeline architectures, and there is no specific algorithm for the lookup of OpenFlow tables. In this paper, we revise the long-pipeline architecture of OpenFlow 1.4 into a 5-stage pipeline architecture to strike a trade-off between flexibility and implementability, and decompose the lookup of OpenFlow tables into three kinds of lookup: longest prefix matching (IP lookup), multi-field matching (packet classification), and exact matching. We then design new algorithms for packet classification, because state-of-the-art solutions seldom support the fast updates that OpenFlow demands; the other two kinds of lookup can be well handled by the state of the art. Experimental results show that our proposed algorithms perform excellently and outperform state-of-the-art solutions.
【Keywords】: Computer architecture; IP networks; Pipelines; Pipeline processing; Classification algorithms; Conferences; Protocols
【Paper Link】 【Pages】:2645-2653
【Authors】: Wenjun Li ; Xianfeng Li ; Hui Li ; Gaogang Xie
【Abstract】: Efficient algorithmic solutions for multi-field packet classification have been a challenging problem for many years. This problem is becoming even worse in the era of Software Defined Networking (SDN), where flow tables of increasing complexity play a central role in the forwarding plane. In this paper, we first conduct an unprecedented in-depth analysis of the issues that caused major quests for scalable algorithmic solutions to fall short. With the insights obtained, we propose a practical framework called CutSplit, which can exploit the benefits of cutting and splitting techniques adaptively. By addressing the central problem of uncontrollable rule replication that previous efforts suffered from, CutSplit not only pushes the performance of algorithmic packet classification closer to that of hardware-based solutions, but also reduces memory consumption to a practical level. Moreover, our work achieves low pre-processing time for rule updates, a problem that has long been ignored by previous decision-tree approaches, but is becoming more relevant in the context of SDN due to frequent rule updates. Experimental results on ClassBench show that CutSplit achieves a memory reduction of over 10x, as well as a 3x improvement in performance in terms of the average number of memory accesses.
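To illustrate the "cutting" primitive that decision-tree classifiers such as CutSplit build on (in its simplest equal-width form, not CutSplit's adaptive cut/split selection), the sketch below partitions one field's range into equal intervals and buckets rules into children, replicating any rule that spans several intervals, which is exactly the replication problem the paper attacks. The rule set and field layout are invented.

```python
# Toy equal-width "cutting" step on one field of a tiny rule set.
# Rules are (priority, (src_lo, src_hi), (dst_lo, dst_hi)); all made up.
RULES = [
    (0, (0, 63), (0, 255)),
    (1, (64, 127), (0, 127)),
    (2, (0, 255), (128, 255)),   # spans everything -> replicated everywhere
]

def cut(rules, lo, hi, pieces=4):
    """Split [lo, hi] of the src field into equal pieces; bucket rules,
    replicating each rule into every interval it overlaps."""
    width = (hi - lo + 1) // pieces
    children = []
    for i in range(pieces):
        c_lo, c_hi = lo + i * width, lo + (i + 1) * width - 1
        bucket = [r for r in rules
                  if not (r[1][1] < c_lo or r[1][0] > c_hi)]
        children.append(((c_lo, c_hi), bucket))
    return children

for (c_lo, c_hi), bucket in cut(RULES, 0, 255):
    print(f"src [{c_lo},{c_hi}]: rules {[r[0] for r in bucket]}")
```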
【Keywords】: Packet Classification; OpenFlow; Decision Tree; Algorithm; Firewall
【Paper Link】 【Pages】:2654-2662
【Authors】: James Daly ; Eric Torng
【Abstract】: Many networking devices, such as firewalls and routing tables, rely upon packet classifiers to define their behavior for various kinds of network traffic. Since these devices have real-time constraints, it is important for packet classification to be as fast as possible. We present a new method, ByteCuts, which includes two major improvements over existing tree-based packet classifiers. First, it introduces a new cutting method that is more efficient than existing methods. Second, ByteCuts intelligently partitions the rule list into multiple trees in a way that supports the cutting method and reduces rule replication. We compare ByteCuts to several existing methods such as HyperCuts, HyperSplit, and SmartSplit. We find that ByteCuts outperforms SmartSplit, the previous fastest classifier, in every metric; specifically, ByteCuts is able to classify packets 58% faster, can be constructed in seconds rather than minutes, and uses orders of magnitude less memory.
【Keywords】: Vegetation; Decision trees; Memory management; Conferences; Real-time systems; Measurement; Indexes
【Paper Link】 【Pages】:2663-2671
【Authors】: Pranav Madadi ; François Baccelli ; Gustavo de Veciana
【Abstract】: Ultra densification along with the use of wider bands at higher frequencies are likely to be key elements towards meeting the throughput/coverage objectives of 5G wireless networks. In addition to increased parallelism, densification leads to improved, but eventually bounded, benefits from proximity of users to base stations, while resulting in increased aggregate interference. Such networks are expected to be interference limited, and in higher frequency regimes, the interference is expected to become spatially variable due to the increased sensitivity of propagation to obstructions and the proximity of active interferers. This paper studies the characteristics of the spatial random fields associated with interference and Shannon capacity in ultra-dense limiting regimes. They rely on the theory of Gaussian random fields which arise as natural limits under densification. Our models show how densification and operation at higher frequencies, could lead to increasingly rough temporal variations in the interference process. This is characterized by the Hölder exponent of the interference field. We show that these fluctuations make it more difficult for mobile users to adapt modulation and coding. We further study how the spatial correlations in users' rates impact backhaul dimensioning. Therefore, this paper identifies and quantifies challenges associated with densification in terms of the resulting unpredictability and the correlation of interference on the achievable rates.
【Keywords】: Interference; Base stations; Wireless communication; Modulation; Adaptation models; 5G mobile communication; Encoding
【Paper Link】 【Pages】:2672-2680
【Authors】: Fabio Cecchi ; Sem C. Borst ; J. S. H. van Leeuwaarden ; Philip A. Whiting
【Abstract】: As the Internet-of-Things (IoT) emerges, connecting immense numbers of sensors and devices, the continual growth in wireless communications increasingly manifests itself in terms of a larger and denser population of nodes with intermittent traffic patterns. A crucial issue that arises in these conditions is how to set the activation rates as a function of the network density and traffic intensity. Depending on the scaling of the activation rates, dense node populations may either result in excessive activations and potential collisions, or long delays that may increase with the number of nodes, even at low load. Motivated by these issues, we examine optimal activation rate scalings in ultra-dense networks with intermittent traffic sources. We establish stability conditions, and provide closed-form expressions which indicate that the mean delay is roughly inversely proportional to the nominal activation rate. We also discuss a multi-scale mean-field limit, and use the associated fixed point to determine the buffer content and delay distributions. The results provide insight into the scalings that minimize the delay while preventing excessive activation attempts. Extensive simulation experiments demonstrate that the mean-field asymptotics yield highly accurate approximations, even when the number of nodes is moderate.
【Keywords】: Delays; Aggregates; Multiaccess communication; Sociology; Statistics; Throughput
【Paper Link】 【Pages】:2681-2689
【Authors】: Nikolaos Sapountzis ; Thrasyvoulos Spyropoulos ; Navid Nikaein ; Umer Salim
【Abstract】: Ultra-dense small cell networks will require sophisticated user association algorithms that consider (i) channel characteristics, (ii) base station load, and (iii) uplink/downlink (UL/DL) traffic profiles. They will also be characterized by high spatio-temporal variability in UL/DL traffic demand, due to the smaller number of users per BS. In this direction, Dynamic TDD is a promising new technique to match BS resources to actual demand. While plenty of literature exists on the problem of user association, and some recent work on dynamic TDD, most works consider these separately. In this paper, we argue that user association policies are strongly coupled with the allocation of resources between UL and DL. We propose an algorithm that decomposes the problem into separate subproblems that can each be solved efficiently and in a distributed manner, and prove convergence to the global optimum. Simulation results suggest that our approach can improve UL and DL performance at the same time, with an aggregate improvement of more than 2×, compared to user association under static TDD allocation.
【Keywords】: Interference; Signal to noise ratio; Optimization; Resource management; Heuristic algorithms; Base stations; Downlink
【Paper Link】 【Pages】:2690-2698
【Authors】: Nikolaos Liakopoulos ; Georgios S. Paschos ; Thrasyvoulos Spyropoulos
【Abstract】: We study the user association problem in the context of dense networks, where standard adaptive algorithms become ineffective. The paper proposes a novel data-driven technique leveraging the theory of robust optimization. The main idea is to predict future traffic fluctuations, and use the predictions to design association maps before the actual arrival of traffic. Although the actual playout of a map is random due to prediction error, the maps are robustly designed to handle uncertainty, preventing constraint violations and maximizing the expectation of a convex utility function, which allows base station loads to be balanced accurately. We propose a generic iterative algorithm, referred to as GRMA, which is shown to converge to the optimal robust map. The optimal maps have the intriguing property that they jointly optimize the predicted load and the variance of the prediction error. We validate our robust maps on Milano-area traces with dense coverage, and find that we can reduce violations from 25% (achieved by an adaptive algorithm) down to almost zero.
【Keywords】: Base stations; Robustness; Quality of service; Optimization; Wireless communication; Conferences; 5G mobile communication
【Paper Link】 【Pages】:2699-2707
【Authors】: Qiulin Lin ; Lei Deng ; Jingzhou Sun ; Minghua Chen
【Abstract】: We consider the problem of exploiting travel demand statistics to optimize ride-sharing routing, in which the driver of a vehicle determines a route to transport multiple customers with similar itineraries and schedules in a cost-effective and timely manner. This problem is important for unleashing the economic and societal benefits of ride-sharing. Meanwhile, it is challenging due to the need of (i) meeting the travel delay requirements of customers, and (ii) making online decisions without knowing the exact travel demands beforehand. We present a general framework for exploring the new design space enabled by the demand-aware approach. We show that demand-aware ride-sharing routing is fundamentally a two-stage stochastic optimization problem, and that the problem is NP-Complete in the weak sense. We exploit the two-stage structure to design an optimal solution with pseudo-polynomial time complexity, which makes it amenable to practical implementation. We carry out extensive simulations based on real-world travel demand traces from Manhattan. The results show that using our demand-aware solution instead of the conventional greedy-routing scheme increases the driver's revenue by 10%. The results further show that, compared to the case without ride-sharing, our ride-sharing solution reduces the customers' payments by 9% and the total vehicle travel time (an indicator of greenhouse gas emission) by 17%. The driver can also earn 26% extra revenue per slot by participating in ride-sharing.
【Keywords】: Vehicles; Routing; Roads; Delays; Space exploration; Planning; Conferences
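The two-stage structure can be seen in miniature: stage one commits to a route under a delay budget, and stage two realizes random demand along it. The toy graph, probabilities, and fares below are hypothetical, and the brute-force enumeration is only for illustration; the paper's pseudo-polynomial algorithm is far more general.

    # Toy illustration (assumed model, not the paper's algorithm) of the
    # two-stage structure: stage 1 commits to a route; stage 2 realizes
    # random demand along it. We pick the route maximizing expected
    # extra revenue, subject to the current passenger's delay budget.
    import itertools

    travel = {("A","B"): 4, ("B","C"): 3, ("A","C"): 6,
              ("C","D"): 2, ("B","D"): 6, ("A","D"): 9}
    def t(u, v): return travel.get((u, v)) or travel.get((v, u))

    p_request = {"B": 0.6, "C": 0.3}   # hypothetical pickup probabilities
    fare = {"B": 5.0, "C": 4.0}
    delay_budget = 12                  # passenger tolerates <= 12 units A->D

    best, best_val = None, float("-inf")
    for mids in itertools.chain([()], itertools.permutations(["B", "C"], 1),
                                itertools.permutations(["B", "C"], 2)):
        route = ("A",) + mids + ("D",)
        dur = sum(t(route[i], route[i+1]) for i in range(len(route) - 1))
        if dur > delay_budget:
            continue                   # stage-1 feasibility: respect deadline
        # Stage 2: expected revenue from demand realized at visited stops.
        exp_rev = sum(p_request[m] * fare[m] for m in mids)
        if exp_rev > best_val:
            best, best_val = route, exp_rev

    print("best route:", "->".join(best),
          "expected extra revenue:", round(best_val, 2))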
【Paper Link】 【Pages】:2708-2716
【Authors】: Takuma Oda ; Carlee Joe-Wong
【Abstract】: Modern vehicle fleets, e.g., for ridesharing platforms and taxi companies, can reduce passengers' waiting times by proactively dispatching vehicles to locations where pickup requests are anticipated in the future. Yet it is unclear how best to do this: optimal dispatching requires optimizing over several sources of uncertainty, including vehicles' travel times to their dispatched locations, as well as coordinating vehicles so that they do not attempt to pick up the same passenger. While prior works have developed models for this uncertainty and used them to optimize dispatch policies, in this work we introduce a model-free approach. Specifically, we propose MOVI, a Deep Q-network (DQN)-based framework that directly learns the optimal vehicle dispatch policy. Since DQNs scale poorly with a large number of possible dispatches, we streamline our DQN training by letting each individual vehicle independently learn its own optimal policy, ensuring scalability at the cost of less coordination between vehicles. We then formulate a centralized receding-horizon control (RHC) policy against which to compare our DQN policies. To compare these policies, we design and build MOVI as a large-scale realistic simulator, based on 15 million taxi trip records, that simulates policy-agnostic responses to dispatch decisions. We show that the DQN dispatch policy reduces the number of unserviced requests by 76% compared to no dispatching and by 20% compared to the RHC approach, emphasizing the benefits of a model-free approach and suggesting that there is limited value in coordinating vehicle actions. This finding may help to explain the success of ridesharing platforms, on which drivers make individual decisions.
【Keywords】: Public transportation; Vehicles; Uncertainty; Real-time systems; Dispatching; Computational modeling; Training
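A tabular Q-learning stand-in conveys the per-vehicle learning idea: each vehicle independently learns where to reposition from pickup rewards alone, with no demand model. A real DQN would replace the Q-table with a neural network; the one-dimensional city and demand probabilities below are assumptions for illustration.

    # Minimal stand-in (assumed toy, not MOVI): independent per-vehicle
    # Q-learning on a 1-D city of 5 zones; an action moves the vehicle
    # and the reward is the chance of a pickup in the new zone.
    import random

    random.seed(3)
    n_zones = 5
    pickup_prob = [0.05, 0.1, 0.6, 0.2, 0.05]  # hypothetical hot spot: zone 2
    actions = [-1, 0, 1]                       # move left / stay / move right
    Q = [[0.0] * len(actions) for _ in range(n_zones)]
    alpha, gamma, eps = 0.1, 0.9, 0.1

    state = 0
    for step in range(20000):
        a = (random.randrange(3) if random.random() < eps
             else max(range(3), key=lambda i: Q[state][i]))
        nxt = min(n_zones - 1, max(0, state + actions[a]))
        reward = 1.0 if random.random() < pickup_prob[nxt] else 0.0
        # Standard Q-learning target: r + gamma * max_a' Q(s', a').
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

    policy = [actions[max(range(3), key=lambda i: Q[s][i])] for s in range(n_zones)]
    print("greedy move per zone:", policy)  # steers vehicles toward zone 2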
【Paper Link】 【Pages】:2717-2725
【Authors】: Jianhui Zhang ; Pengqian Lu ; Zhi Li ; Jiayu Gan
【Abstract】: Public Bike Systems (PBSs) offer convenient and green travel services and have become popular around the world. In many cities, local governments build thousands of fixed stations for the PBS to alleviate traffic jams and solve the last-mile problem. However, the increasing use of PBSs leads to new congestion problems, in which users find no bike to rent or no dock at which to return one. Users therefore wish for assistance in selecting bike trips with minimal time cost while taking congestion into account. Meanwhile, crowdsourcing has attracted increasing attention in recent years. This paper applies crowdsourcing to help users share information and select bike trips before the bikes or docks are occupied. An interesting and important problem is how to help users select bike trips so that the time spent on the trips is minimized. We model the problem as a Bike Trip Selection (BTS) game, which we show to be equivalent to a symmetric network congestion game. This equivalence allows us to design a BTS algorithm by which the users can distributively find at least one Nash Equilibrium (NE). Furthermore, this paper evaluates the algorithm on real datasets collected from the PBS of Hangzhou, China. We also design a BTS system, including an Android app and a server, to test the distributed BTS algorithm in practice.
【Keywords】: Bike Trip Selection; Public Bike System; Crowd-sourcing; Game Theory
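Because the BTS game is equivalent to a symmetric network congestion game, best-response dynamics converge to a Nash Equilibrium by a potential-function argument. The sketch below runs such dynamics on a toy instance; the linear cost model and constants are illustrative assumptions, not the paper's BTS algorithm.

    # Sketch under assumptions: a symmetric congestion game where each
    # user picks one of several trip options whose time cost grows with
    # the number of users choosing it. Best-response dynamics converge
    # to a Nash Equilibrium because the game admits an exact potential.
    import random

    random.seed(4)
    n_users, options = 20, 3
    base = [3.0, 4.0, 5.0]   # free-flow trip times (hypothetical)
    slope = [0.8, 0.4, 0.2]  # congestion sensitivity per option
    choice = [random.randrange(options) for _ in range(n_users)]

    def cost(o, load):       # time cost of option o with `load` users on it
        return base[o] + slope[o] * load

    changed = True
    while changed:           # asynchronous best responses until no one moves
        changed = False
        for u in range(n_users):
            load = [choice.count(o) for o in range(options)]
            def my_cost(o):  # cost user u would face after switching to o
                return cost(o, load[o] + (0 if choice[u] == o else 1))
            best = min(range(options), key=my_cost)
            if my_cost(best) < my_cost(choice[u]) - 1e-9:
                choice[u], changed = best, True

    print("NE loads:", [choice.count(o) for o in range(options)])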
【Paper Link】 【Pages】:2726-2734
【Authors】: Yulong Tian ; Wei Wei ; Qun Li ; Fengyuan Xu ; Sheng Zhong
【Abstract】: The great potential of mobile crowdsourcing has started to attract the attention of both industry and the research community. However, current commercial mobile crowdsourcing marketplaces are unsatisfactory because of their limited worker base and functionality. In this paper, we first revisit the foundation of performing mobile crowdsourcing on location-based social networks (LBSNs) through specially designed survey studies and comparison experiments involving hundreds of users. Our results reveal that active check-ins are good indicators for picking the right user to perform tasks, and that an LBSN can be an ideal platform for mobile crowdsourcing, given proper supporting services. We then propose both a centralized and a decentralized design of MobiCrowd, a mobile crowdsourcing service built on LBSNs. Our evaluation, through trace-driven simulation and real-world experiments, demonstrates that the proposed schemes can effectively find workers for mobile crowdsourcing tasks associated with different venues by analyzing workers' location check-in histories.
【Keywords】: Task analysis; Crowdsourcing; Facebook; Conferences; Sensors; History
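One plausible way to act on the check-in finding, assumed here for illustration and not the paper's exact method, is to score candidate workers by recency-weighted check-in activity at the task's venue, as in this sketch.

    # Illustrative sketch (assumed scoring rule): rank LBSN users for a
    # venue-based task by how actively and recently they check in at that
    # venue, since active check-ins indicate a suitable worker.
    import math
    from collections import defaultdict

    # Hypothetical check-in log: (user, venue, days_ago).
    checkins = [("alice", "cafe", 1), ("alice", "cafe", 3),
                ("bob", "cafe", 30), ("bob", "gym", 2),
                ("carol", "cafe", 2), ("carol", "cafe", 10)]

    def score_workers(venue, half_life_days=7.0):
        scores = defaultdict(float)
        for user, v, age in checkins:
            if v == venue:
                # Exponential decay: recent check-ins count more.
                scores[user] += math.exp(-math.log(2) * age / half_life_days)
        return sorted(scores.items(), key=lambda kv: -kv[1])

    print(score_workers("cafe"))   # e.g. alice and carol outrank bob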
【Paper Link】 【Pages】:2735-2743
【Authors】: Lan Zhang ; Yannan Li ; Xiang Xiao ; Xiang-Yang Li ; Junjun Wang ; Anxin Zhou ; Qiang Li
【Abstract】: In recent years, advanced machine learning techniques have achieved remarkable results in many areas. Despite this success, one bottleneck in applying machine learning to real-world applications is the lack of large amounts of high-quality training data from diverse domains. Meanwhile, massive personal data is being generated by mobile devices and is often underutilized. To bridge this gap, we propose a general crowdsourcing-based dataset purchasing framework, named CrowdBuy and CrowdBuy++, with which a buyer can efficiently buy desired data from available mobile users with quality guarantees, in a way that respects users' data ownership and privacy. We present a complete set of tools, including privacy-preserving image dataset quality measurements and image selection mechanisms, that are budget feasible, truthful, and highly efficient for mobile users. We conducted extensive evaluations of our framework on large-scale image sets and demonstrate that the system is capable of crowdsourcing high-quality datasets while preserving image privacy with little computation and communication overhead.
【Keywords】: Data privacy; Privacy; Machine learning; Crowdsourcing; Mobile handsets; Feature extraction; Biomedical imaging
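As a hint of how budget-feasible, truthful selection can work, the sketch below applies a standard greedy proportional-share rule to hypothetical (quality, cost) bids. In the actual system the quality scores would come from the privacy-preserving measurements; the concrete rule and the numbers here are assumptions, and payments are omitted.

    # Minimal sketch of budget-feasible greedy selection in the spirit of
    # the mechanisms described above; quality scores are just numbers here.
    budget = 10.0
    # (seller, quality, cost) for candidate images -- all hypothetical.
    bids = [("u1", 9.0, 4.0), ("u2", 7.0, 2.0),
            ("u3", 4.0, 4.0), ("u4", 3.0, 1.0)]

    # Greedy by quality per unit cost, with the proportional-share
    # stopping rule: keep a bid only while its cost stays below its
    # quality's share of the budget.
    ranked = sorted(bids, key=lambda b: -b[1] / b[2])
    selected, q_sum = [], 0.0
    for seller, q, c in ranked:
        if c <= budget * q / (q_sum + q):   # proportional-share condition
            selected.append(seller)
            q_sum += q
    print("selected sellers:", selected, "total quality:", q_sum)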
【Paper Link】 【Pages】:2744-2752
【Authors】: Jonatan Krolikowski ; Anastasios Giovanidis ; Marco Di Renzo
【Abstract】: Caching popular content at the wireless edge has recently been proposed as a means to reduce congestion at the backbone of cellular networks. The two main actors involved are Mobile Network Operators (MNOs) and Content Providers (CPs). In this work, we consider the following arrangement: an MNO pre-installs memory on its wireless equipment (e.g., Base Stations) and invites a single CP to use it for a monetary cost. The CP leases memory space and places its content; the MNO associates network users to stations. For a given association policy, the MNO may or may not help the CP offload traffic, depending on whether the association takes content placement into account. We formulate an optimization problem from the CP's perspective, which aims at maximizing traffic offloading with minimum leasing costs. This is a joint optimization problem that can incorporate any association policy and can also derive the optimal one. We present a general exact solution using Benders decomposition, which iteratively updates the decisions of the two actors separately and converges to the global optimum. We illustrate the optimal CP leasing/placement strategy and the hit probability gains under different association policies. Performance is maximized when the MNO association follows the CP's actions.
【Keywords】: Wireless communication; Data centers; Optimization; Communication system security; Conferences; Base stations; Cache memory
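The sketch below is not Benders decomposition; it only mimics the back-and-forth between the two actors with a simplified alternating scheme, pairing a content-aware MNO association with a greedy CP leasing/placement rule on an assumed toy instance.

    # Simplified sketch (not the paper's exact Benders scheme): alternate
    # between the CP's placement/leasing choice and a content-aware MNO
    # association, to illustrate the coupling between the two actors.
    import random

    random.seed(5)
    n_items, n_bs, cache_slots, lease_cost = 6, 2, 2, 3
    # Each user: (requested item, list of reachable BSs) -- hypothetical.
    users = [(random.randrange(n_items), random.sample(range(n_bs), 2))
             for _ in range(40)]
    placement = {b: set() for b in range(n_bs)}

    def associate():
        # MNO: prefer a reachable BS that caches the requested item.
        return [next((b for b in cand if item in placement[b]), cand[0])
                for item, cand in users]

    for _ in range(5):
        assoc = associate()
        # CP: at each BS, lease slots for the most-requested items, keeping
        # a slot only if its hit count outweighs the leasing cost.
        for b in range(n_bs):
            demand = {}
            for (item, _), ab in zip(users, assoc):
                if ab == b:
                    demand[item] = demand.get(item, 0) + 1
            top = sorted(demand, key=demand.get, reverse=True)[:cache_slots]
            placement[b] = {i for i in top if demand[i] > lease_cost}

    assoc = associate()
    hits = sum(1 for (item, _), ab in zip(users, assoc) if item in placement[ab])
    print(f"offloaded {hits}/{len(users)} requests; placement: {placement}")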
【Paper Link】 【Pages】:2753-2761
【Authors】: Hao Ge ; Randall A. Berry
【Abstract】: A fundamental problem in many network systems is how to allocate limited resources among competing agents, who may have their own incentives. The well-known Vickrey-Clarke-Groves (VCG) mechanism provides an elegant solution to this incentive issue. In particular, VCG implements the socially optimal outcome in dominant strategies. However, it is also well-known that this mechanism can require an excessive amount of communication. Approaches have been studied that relax the communication requirements while also relaxing the incentive guarantees to use Nash equilibria instead of dominant strategies. Here, we take a different approach and study mechanisms with limited information that still have dominant strategy outcomes, but suffer an efficiency loss. We characterize this loss for the case of a single divisible resource. We first consider a mechanism in which information is limited by quantizing the resource into a finite number of units and allocating each of these to one agent via a VCG mechanism. This limits each agent to submitting a finite number of real values. We subsequently consider the case where each value is also quantized before being reported by each agent. Finally, we present numerical examples of the performance of these mechanisms.
【Keywords】: Resource management; Optimization; Conferences; Information exchange; Nash equilibrium; Electronic mail; Bandwidth
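A minimal sketch of the first mechanism, assuming decreasing marginal values: the divisible resource is quantized into K units, each agent reports one marginal value per unit, units are assigned greedily to the highest reported marginals (which is the efficient allocation under decreasing marginals), and payments follow the VCG (Clarke pivot) rule. The agents and their values are made up.

    # Quantized-VCG sketch: allocate K units of a divisible resource via
    # greedy selection of reported marginal values, then charge each
    # agent the externality it imposes on the others (Clarke pivot).
    import heapq

    K = 4   # number of quantized units
    # Marginal value of the 1st, 2nd, ... unit per agent (decreasing).
    marginals = {"a": [10, 6, 2, 1], "b": [8, 7, 3, 2], "c": [5, 4, 3, 1]}

    def allocate(agents):
        """Greedy over marginals == welfare-maximizing split of K units."""
        heap = [(-marginals[i][0], i, 0) for i in agents]
        heapq.heapify(heap)
        got, welfare = {i: 0 for i in agents}, 0.0
        for _ in range(K):
            v, i, k = heapq.heappop(heap)
            welfare += -v
            got[i] += 1
            if k + 1 < len(marginals[i]):
                heapq.heappush(heap, (-marginals[i][k + 1], i, k + 1))
        return got, welfare

    got, welfare = allocate(list(marginals))
    for i in marginals:
        others = [j for j in marginals if j != i]
        _, w_without = allocate(others)               # welfare if i absent
        w_others_with = welfare - sum(marginals[i][:got[i]])
        print(f"agent {i}: units={got[i]}, payment={w_without - w_others_with}")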
【Paper Link】 【Pages】:2762-2770
【Authors】: Mikhail Khodak ; Liang Zheng ; Andrew S. Lan ; Carlee Joe-Wong ; Mung Chiang
【Abstract】: As infrastructure-as-a-service clouds become more popular, cloud providers face the complicated problem of maximizing their resource utilization by handling the dynamics of user demand. Auction-based pricing, such as Amazon EC2 spot pricing, provides an option for users to use idle resources at highly reduced yet dynamic prices; under such a pricing scheme, users place bids for cloud resources, and the provider chooses a threshold “spot” price above which bids are admitted. In this paper, we propose a nonlinear dynamical system model for the time-evolution of the spot price as a function of latent states that characterize user demand in the spot and on-demand markets. This model enables us to adaptively predict future spot prices given past spot price observations, allowing us to derive user bidding strategies for heterogeneous cloud resources that minimize the cost to complete a job with negligible probability of interruption. Along the way, the model also yields novel, empirically verifiable insights into cloud provider behavior. We experimentally validate our model and bidding strategy on two months of Amazon EC2 spot price data and find that our proposed bidding strategy is up to 4 times closer to the optimal strategy in hindsight compared to a baseline regression approach while incurring the same negligible probability of interruption.
【Keywords】: Cloud computing; Pricing; Hidden Markov models; Predictive models; Computational modeling; Conferences; Resource management
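The sketch below substitutes a much simpler predictor (exponential smoothing plus a volatility margin) for the paper's latent-state dynamical model, purely to illustrate the bid-above-prediction idea; the price trace and all constants are made up.

    # Simple illustrative sketch (not the paper's model): predict the
    # next spot price with exponential smoothing, then bid the prediction
    # plus a volatility margin so the chance of being outbid
    # (interrupted) during the job stays small.
    import statistics

    spot = [0.12, 0.11, 0.13, 0.35, 0.14,
            0.12, 0.13, 0.15, 0.14, 0.13]   # hypothetical $/hour trace

    alpha, level = 0.3, spot[0]
    for p in spot[1:]:
        level = alpha * p + (1 - alpha) * level   # exponential smoothing

    sigma = statistics.stdev(spot[-8:])           # recent price volatility
    k = 2.0                                       # risk margin in std devs
    bid = level + k * sigma
    print(f"predicted price {level:.3f}, bid {bid:.3f} $/hour")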