2015 IEEE Conference on Computer Communications, INFOCOM 2015, Kowloon, Hong Kong, April 26 - May 1, 2015. IEEE 【DBLP Link】
【Paper Link】 【Pages】:1-9
【Authors】: Eilwoo Baik ; Amit Pande ; Chris Stover ; Prasant Mohapatra
【Abstract】: The quality of mobile videos is usually quantified through the Quality of Experience (QoE), which is typically based on network QoS measurements, user engagement, or post-view subjective scores. Such quantifications are inadequate for real-time evaluation: they cannot provide online feedback for improving visual acuity, which represents the actual viewing experience of the end user. We present a visual acuity framework that performs fast online computations on a mobile device and provides an accurate estimate of mobile video QoE. We identify and study the three main causes that impact visual acuity in mobile videos: spatial distortions, buffering types, and resolution changes. Each of them can be accurately modeled using our framework. We use machine learning techniques to build a prediction model for visual acuity, which achieves more than 78% accuracy. We present an experimental implementation on the iPhone 4 and 5s to show that the proposed visual acuity framework is feasible to deploy on mobile devices. Using a data corpus of over 2852 mobile video clips for the experiments, we validate the proposed framework.
【Keywords】: learning (artificial intelligence); mobile computing; quality of experience; quality of service; video signal processing; iPhone 4; iPhone 5s; machine learning techniques; mobile devices; mobile video clips; network QoS measurements; prediction model; quality of experience; spatial distortions; video acuity assessment; visual acuity framework; Accuracy; Distortion; Measurement; Mobile communication; Mobile handsets; Streaming media; Visualization; Mobile Video; Quality of Experience; Video Quality
【Paper Link】 【Pages】:10-18
【Authors】: Kien A. Hua ; Ning Jiang ; Jason Kuhns ; Vaithiyanathan Sundaram ; Cliff Zou
【Abstract】: Statistics show that 79% of Internet traffic is video, and much of it is redundant. Video-on-demand in particular follows a 90/10 access pattern, in which 90% of users access the same 10% of all video content. As a result, redundant data are repeatedly transmitted over the Internet. In this paper, we propose a novel traffic deduplication technique to achieve more efficient network communication between video sources (video servers or proxy servers in a CDN) and clients. The proposed SMART (Small packet Merge-Able RouTers) overlay network employs an opportunistic traffic deduplication approach and allows each SMART router to dynamically merge independent streams of the same video content, forming a video streaming tree (VST). The merged streams are tunneled through the overlay together with TCP session information before eventually being de-multiplexed and delivered to the clients in a manner fully compatible with the TCP protocol. We present a theoretical analysis of the merging strategy between the video source and clients, the efficiency of a SMART router in saving traffic during a merge, and the overall performance of a SMART overlay topology between a video source and clients. Finally, we prototyped SMART in the PlanetLab environment. The evaluation results are consistent with our theoretical analysis, and significant bandwidth savings are achieved.
【Keywords】: Internet; overlay networks; telecommunication network routing; telecommunication network topology; telecommunication traffic; transport protocols; video on demand; video streaming; Internet traffic; PlanetLab environment; SMART; SMART overlay topology; TCP protocol; VST; access pattern; network communication; proxy servers; redundancy control; small packet mergeable routers; traffic deduplication; video content; video servers; video source; video streaming tree; video-on-demand; Bandwidth; IP networks; Internet; Merging; Servers; Software; Streaming media; Buffers; Multicast; Overlay Network; Routing; Video streaming; Video-On-Demand
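The bandwidth saving that VST-style merging targets can be sketched with a toy model (not from the paper; `merged_link_load` and its parameters are our own illustration): with deduplication, a router forwards one copy of a popular stream upstream regardless of how many downstream clients request it.

```python
def merged_link_load(num_clients: int, stream_rate: float) -> tuple[float, float]:
    """Upstream link load with and without SMART-style stream merging.

    Without deduplication every client pulls its own copy; with merging,
    the router forwards a single copy upstream for all downstream clients.
    """
    without_merge = num_clients * stream_rate
    with_merge = stream_rate if num_clients > 0 else 0.0
    return without_merge, with_merge

# Under the 90/10 access pattern, popular streams attract many concurrent
# clients, which is exactly when merging saves the most.
before, after = merged_link_load(num_clients=9, stream_rate=2.0)  # a 2 Mbps stream
print(f"unmerged: {before} Mbps, merged: {after} Mbps")
```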
【Paper Link】 【Pages】:19-27
【Authors】: Sangki Yun ; Daehyeok Kim ; Xiaofan Lu ; Lili Qiu
【Abstract】: Wireless video traffic has grown at an unprecedented rate and placed a significant burden on wireless networks. Multicast can significantly reduce traffic by sending a single video to multiple receivers simultaneously. On the other hand, wireless receivers are heterogeneous in both channel and antenna configuration, the latter of which is rapidly increasing with the emergence of 802.11n and 802.11ac. In this paper, we develop optimized layered integrated video encoding (LIVE) to guarantee reasonable performance to weaker receivers (with worse channels and/or fewer antennas) while allowing stronger receivers to enjoy better quality. Our approach has three distinct features: (i) it uses a novel layered coding to naturally accommodate the heterogeneity of different video receivers; (ii) it uses an optimization framework to determine the amount of transmission time and the amount of information to transmit at each layer under the current channel condition; and (iii) it uses an integrated modulation, where most video data are transmitted using soft modulation for efficiency and resilience, while the most important video data are transmitted using a combination of soft and conventional hard modulation to further enhance their reliability. To our knowledge, this is the first approach that handles MIMO antenna heterogeneity in wireless video multicast. We demonstrate its effectiveness through extensive Matlab simulations and USRP testbed experiments.
【Keywords】: MIMO communication; antenna arrays; modulation; multicast communication; radio receivers; telecommunication traffic; video coding; video communication; LIVE; MIMO antenna; Matlab simulation; USRP; antenna heterogeneity; channel heterogeneity; hard modulation; optimized layered integrated video encoding; soft modulation; video data; video receivers; wireless networks; wireless receiver; wireless video multicast; wireless video traffic; Discrete cosine transforms; Encoding; Modulation; Receiving antennas; Transmitting antennas; Wireless communication
【Paper Link】 【Pages】:28-36
【Authors】: Xiaoli Wang ; Jiasi Chen ; Aveek Dutta ; Mung Chiang
【Abstract】: The recently proposed 3-Tier access model for Whitespace by the Federal Communications Commission (FCC) mandates that certain classes of devices share frequency bands in space and time. These devices are envisioned to be a heterogeneous mixture of licensed (Tier-1 and Tier-2) and unlicensed, opportunistic devices (Tier-3). The hierarchy in channel access requires Tier-3 devices to adapt well to varying spectral opportunity. While policies for efficient sharing are being ratified, many common applications must also be redesigned for this new paradigm. In this paper, we focus on the ever-increasing demand for video streaming and present a methodology suitable for Tier-3 devices in the shared access model. Our analysis begins with a stress test of commonly adopted video streaming methods under the new sharing model. This is followed by the design of a robust MDP-based solution that proactively adapts to fast-varying channel conditions, providing better user quality of experience than existing solutions such as MPEG-DASH. We evaluate our solution on an experimental testbed and find that our MDP-based algorithm outperforms DASH, and that partial information about Tier-2 dynamics improves video quality.
【Keywords】: quality of experience; radio spectrum management; video coding; video streaming; 3-tier access model; 3-tiered spectrum sharing; SVC; adaptive video streaming method; frequency band sharing; robust MDP design; scalable video coding; tier-3 device; user quality of experience; video quality improvement; whitespace; Adaptation models; Heuristic algorithms; Quality assessment; Static VAr compensators; Streaming media; Throughput; Video recording
【Paper Link】 【Pages】:37-45
【Authors】: Chuan Ma ; Weijie Wu ; Ying Cui ; Xinbing Wang
【Abstract】: Device-to-device (D2D) communication underlaying cellular networks is a promising technology for improving network resource utilization. In D2D-enabled cellular networks, the interference among spectrum-sharing links is more severe than in traditional cellular networks, which motivates the adoption of interference cancellation techniques such as successive interference cancellation (SIC) at the receivers. However, to date, how SIC affects the performance of D2D-enabled cellular networks has remained unknown. In this paper, we present an analytical framework for studying the performance of SIC in large-scale D2D-enabled cellular networks using tools from stochastic geometry. To facilitate the interference analysis, we propose the approach of stochastic equivalence of the interference, which converts the two-tier interference (from both the cellular tier and the D2D tier) into an equivalent single-tier interference. Based on the proposed stochastic equivalence models, we derive general expressions for the successful transmission probabilities of cellular uplinks and D2D links with infinite and finite SIC capabilities, respectively. We demonstrate how SIC affects the performance of large-scale D2D-enabled cellular networks through both analytical and numerical results.
【Keywords】: cellular radio; interference suppression; spread spectrum communication; D2D-enabled cellular networks; SIC; device-to-device communication; spectrum-sharing links; stochastic equivalence models; stochastic geometry; successive interference cancellation; Integrated circuits; Interference cancellation; Receivers; Silicon carbide; Stochastic processes; Transmitters
【Paper Link】 【Pages】:46-54
【Authors】: Jiajia Liu ; Shangwei Zhang ; Hiroki Nishiyama ; Nei Kato ; Jun Guo
【Abstract】: Based on tools from stochastic geometry, we present in this paper a framework for analyzing the coverage probability and ergodic rate in a D2D overlaying multi-channel downlink cellular network. Different from previous works, 1) we consider a flexible new scheme in which each mobile UE selects its operation mode individually, deciding to establish a cellular link (with a BS) or a D2D link (with a neighboring UE) based on the pilot signal strength received from its nearest BS; and 2) we allow a mobile UE located far from the BSs to connect to a nearby BS via another intermediate UE in a two-hop manner. Our results indicate that the developed framework helps network designers efficiently determine the network parameters at which optimum system performance is achieved. Furthermore, as corroborated by extensive numerical results, enabling D2D-link-based two-hop connections can significantly improve network coverage, especially in the low-SIR regime.
【Keywords】: cellular radio; probability; resource allocation; stochastic processes; D2D overlaying multi-channel downlink cellular networks; cellular link; coverage probability; ergodic rate; optimal network parameters; stochastic geometry analysis; two-hop connection; Channel allocation; Data communication; Downlink; Interference; Mobile communication; Mobile computing; Resource management; Device-to-device communication; downlink; multi-channel; performance analysis
【Paper Link】 【Pages】:55-63
【Authors】: Wenchi Cheng ; Xi Zhang ; Hailin Zhang
【Abstract】: Recently, both academia and industry have been moving their research attention to the fifth-generation (5G) wireless networks - the next era of wireless networks. Wireless full-duplex transmission, one of the promising candidate techniques for 5G, can significantly boost the spectrum efficiency of wireless networks, providing a powerful thrust for optimizing their quality-of-service (QoS) performance. However, due to the heterogeneity of the different types of traffic carried simultaneously over a wireless full-duplex link, supporting QoS guarantees in wireless full-duplex networks imposes a new challenge: heterogeneous QoS guarantees must be provided for different types of traffic over the same link simultaneously. To overcome this problem, in this paper we propose a heterogeneous statistical QoS provisioning framework for bidirectional-transmission-based wireless full-duplex networks. In particular, we formulate optimization problems to maximize the system throughput subject to heterogeneous statistical delay-bound QoS requirements. We then convert the resulting non-convex optimization problem into an equivalent convex optimization problem, whose solution yields the optimal QoS-driven power allocation scheme that maximizes the system throughput while guaranteeing the heterogeneous statistical delay-bound QoS requirements. Extensive simulation results show that our proposed QoS-driven power allocation scheme for heterogeneous statistical delay-bound QoS requirements achieves larger aggregate system throughput than the scheme for the homogeneous statistical delay-bound QoS requirement over 5G mobile wireless full-duplex networks.
【Keywords】: 5G mobile communication; channel allocation; concave programming; convex programming; delays; quality of service; radio spectrum management; statistical analysis; 5G wireless full duplex transmission network; bidirectional transmission based wireless full duplex networks; equivalent convex optimization problem; heterogeneous statistical QoS provisioning; homogeneous statistical delay bound QoS requirement; nonconvex optimization problem; optimal QoS-driven power allocation scheme; quality of service; spectrum efficiency; system throughput maximization; Lead; Optimization; Quality of service; Resource management; Wireless networks; 5G mobile wireless networks; full-duplex wireless networks; heterogeneous statistical delay-bounded quality-of-service (QoS) provisioning; non-convex optimization
【Paper Link】 【Pages】:73-81
【Authors】: Weiwei Wu ; Jianping Wang ; Minming Li ; Kai Liu ; Junzhou Luo
【Abstract】: In a wireless system, when multiple applications can share data transmitted by rate-adaptive wireless devices, there exists a trade-off between transmission redundancy and energy efficiency. This paper conducts the first theoretical analysis of such a trade-off. We formulate the problem as a bi-objective optimization problem that simultaneously minimizes the transmission redundancy and the energy consumption. In the offline setting, where full information is known in advance, we provide optimal algorithms for the bi-objective optimization problem. In the online setting, we provide an online algorithm with a proven performance bound that approximates the optimal solution without relying on any assumed distribution or future information. The proposed online algorithm is proved O(ln T)-competitive with respect to transmission redundancy and also O(ln T)-competitive with respect to energy consumption, where T is the number of time slots. That is, the output of the algorithm always approximates the optimal solution within a logarithmic factor over all possible inputs. Our simulation results further validate the efficiency of our online algorithm.
【Keywords】: optimisation; radio networks; O(ln T)-competitive; adaptive wireless devices; bi-objective optimization problem; data sharing; energy efficiency; energy-efficient transmission; transmission redundancy; wireless system; Algorithm design and analysis; Approximation algorithms; Energy consumption; Optimal scheduling; Redundancy; Schedules; Wireless communication
【Paper Link】 【Pages】:82-90
【Authors】: Philipp Kindt ; Han Jing ; Nadja Peters ; Samarjit Chakraborty
【Abstract】: Reducing energy consumption to a minimum is a crucial design requirement for all body area sensor networks. Sensors deployed on the human body, especially at the limbs, often move through different positions. Usually, the transmit power is set to a value high enough to achieve reliable transmission in the constellation with the highest attenuation. For periodic movements, data transmission can instead be carried out at the position of lowest path loss between sender and receiver, provided this position can be reliably identified. We propose a novel framework that predicts this position using acceleration data and the received signal strength. By learning the correlation between these signals, accurate predictions can be made, and up to 24.7% of the power spent by a Bluetooth Low Energy module on the transmission of a packet can be saved while still achieving the same packet error rate as sending at the higher transmit power.
【Keywords】: Bluetooth; RSSI; body area networks; body sensor networks; computer network reliability; energy conservation; medical computing; telecommunication power management; Bluetooth low energy module; ExPerio; body area sensor network; energy consumption reduction; exploiting periodicity; human body; opportunistic energy efficient data transmission; packet error rate; power transmission reliability; received signal strength; Acceleration; Accelerometers; Attenuation; Computers; Receivers; Wireless sensor networks; Wrist
【Paper Link】 【Pages】:91-99
【Authors】: Michael J. Neely
【Abstract】: This paper considers a wireless link with randomly arriving data that is queued and served over a time-varying channel. It is known that any algorithm that comes within ε of the minimum average power required for queue stability must incur an average queue size of at least Ω(log(1/ε)). However, the optimal convergence time is unknown, and prior algorithms give convergence time bounds of O(1/ε^2). This paper shows that it is possible to achieve the optimal O(log(1/ε)) average queue size tradeoff with an improved convergence time of O(log(1/ε)/ε). Further, this is shown to be within a logarithmic factor of the best possible convergence time. The method uses the simple drift-plus-penalty technique with an improved convergence time analysis.
【Keywords】: convergence; queueing theory; radio links; telecommunication power management; telecommunication scheduling; time-varying channels; average queue size; convergence time analysis; convergence time bounds; drift-plus-penalty technique; energy-aware wireless scheduling; logarithmic factor; optimal convergence time; queue stability; time-varying channel; wireless link; Computers; Conferences; Convergence; Optimization; Processor scheduling; Resource management; Wireless communication
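The drift-plus-penalty method the abstract refers to is easy to sketch: each slot, the controller greedily minimizes V·(power) − Q·(service), so a larger parameter V buys lower average power at the cost of a larger backlog. The following toy single-queue simulation is our own illustration of that tradeoff, not the paper's algorithm or analysis.

```python
import random

def drift_plus_penalty(T: int, V: float, arrival_prob: float = 0.3, seed: int = 1):
    """Toy drift-plus-penalty scheduler for one wireless queue.

    Each slot the channel offers rate 1 or 2 with equal probability and a
    packet arrives with probability arrival_prob. Transmitting costs one
    unit of power; the greedy rule transmits iff Q * rate >= V, which
    minimizes V*power - Q*service for that slot.
    """
    rng = random.Random(seed)
    Q, transmissions = 0, 0
    for _ in range(T):
        rate = rng.choice([1, 2])
        if Q * rate >= V:          # drift-plus-penalty decision
            transmissions += 1
            Q -= min(Q, rate)
        if rng.random() < arrival_prob:
            Q += 1
    return Q, transmissions / T   # final backlog, average power

# Raising V lowers average power but lets the queue grow -- the
# queue-size/power tradeoff whose convergence time the paper tightens.
```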
【Paper Link】 【Pages】:100-108
【Authors】: Swetank Kumar Saha ; Pratik Deshpande ; Pranav P. Inamdar ; Ramanujan K. Sheshadri ; Dimitrios Koutsonikolas
【Abstract】: This paper presents the first, to the best of our knowledge, detailed experimental study of 802.11n/ac throughput and power consumption in modern smartphones. We experiment with a variety of smartphones, supporting different subsets of 802.11n/ac features. We investigate the power consumption in various states of the wireless interface (sleep, idle, active), the impact of various features of 802.11n/ac (PHY bitrate, frame aggregation, channel bonding, MIMO) on both throughput and power consumption, and the tradeoffs between these two metrics. Some of our findings are significantly different from the findings of previous studies using 802.11n/ac wireless cards for laptop/desktop computers. We believe that these findings will help in understanding various performance and power consumption issues in today's smartphones and will guide the design of power optimization algorithms for the next generation of mobile devices.
【Keywords】: smart phones; wireless LAN; wireless channels; 802.11n/ac; MIMO; channel bonding; laptop-desktop computers; mobile devices; power consumption; power optimization algorithms; power throughput tradeoffs; smartphones; wireless interface; IEEE 802.11n Standard; MIMO; Portable computers; Power demand; Power measurement; Smart phones; Throughput
【Paper Link】 【Pages】:109-117
【Authors】: Márton Csernai ; Florin Ciucu ; Ralf-Peter Braun ; András Gulyás
【Abstract】: Among data center structures, flattened butterfly (FBFly) networks have been shown to outperform common counterparts such as fat-trees in terms of energy proportionality and cost efficiency. This efficiency is achieved by using less networking equipment (switches, ports, cables) at the expense of increased control plane complexity. In this paper we show that cabling complexity can be further reduced by an order of magnitude by reconfiguring the optical fully meshed components into optical “pseudo”-fully meshed components. Following established methods, optical star networks are obtained by exchanging the FBFly's regular (grey) optical transceivers for dense wavelength division multiplexing (DWDM, or colored) optical transceivers and placing an arrayed waveguide grating router (AWGR) at the center. Depending on the data center configuration and equipment prices, our colored FBFly (C-FBFly) proposal yields lower capital expenditure than the original FBFly. The key advantage of our structural modification, however, is that in large FBFly networks (e.g., > 50K nodes) it reduces the number of inter-rack cables by a factor as large as 48.
【Keywords】: cable laying; computer centres; hypercube networks; optical cables; optical fibre networks; telecommunication network topology; arrayed waveguide grating router; cabling complexity reduction; colored optical transceivers; cost efficiency; data center structures; dense wavelength division multiplexing; energy proportionality; large flattened butterfly networks; optical fully meshed component reconfigutation; optical pseudofully meshed components; optical star networks; Communication cables; Optical devices; Optical switches; Ports (Computers); Servers; Topology; Transceivers
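The order-of-magnitude cabling reduction can be seen with a simple count (an idealized sketch, not the paper's exact accounting): a fully meshed group of k switches needs one cable per pair, while a star through a central AWGR needs one cable per switch.

```python
def cable_counts(k: int) -> tuple[int, int]:
    """Inter-switch cables for one fully meshed FBFly dimension of k
    switches versus a star through a central AWGR."""
    full_mesh = k * (k - 1) // 2   # one cable per switch pair
    awgr_star = k                  # one cable per switch, to the AWGR
    return full_mesh, awgr_star

mesh, star = cable_counts(97)
print(mesh, star, mesh / star)    # reduction factor is (k - 1) / 2
```

Under this idealized count a factor of 48 corresponds to k = 97; the paper's exact factor depends on its concrete topology and rack layout.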
【Paper Link】 【Pages】:118-126
【Authors】: Danfeng Shan ; Wanchun Jiang ; Fengyuan Ren
【Abstract】: In data center networks, micro-burst is a common traffic pattern, and the packet dropping it causes usually leads to serious performance degradation. Meanwhile, most current commodity switches employ on-chip shared memory, and their buffer management policies ensure fair sharing of memory among all ports. Among various policies, Dynamic Threshold (DT) is widely used by switch vendors. However, because DT reserves a fraction of the switch buffer, packets from micro-burst traffic can be dropped while free buffer space remains. In this paper, we theoretically derive sufficient conditions for packet dropping caused by micro-burst traffic and estimate the corresponding free buffer size. The results show that the free buffer size is very large when the number of overloaded ports is small. Worse, to ensure fair sharing of memory among output ports, packets from micro-burst traffic may be dropped even when the traffic size is much smaller than the buffer size. In light of these results, we propose the Enhanced Dynamic Threshold (EDT) policy, which alleviates packet dropping caused by micro-burst traffic by fully utilizing the switch buffer and temporarily relaxing the fairness constraint. Simulation results show that EDT can absorb more micro-burst traffic than DT.
【Keywords】: telecommunication switching; telecommunication traffic; EDT policy; buffer management policy; data center networks; data center switches; enhanced dynamic threshold policy; memory fair sharing; microburst traffic; on-chip shared memory; switch buffer; Computers; Conferences; Memory management; Nickel; Ports (Computers); Queueing analysis; Steady-state; dynamic threshold; micro-burst; packet dropping; shared memory; switch buffer management
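The "free buffer while dropping" effect follows directly from the classic Dynamic Threshold rule (Choudhury and Hahne), which admits a packet only while the queue is shorter than α times the free buffer. A short sketch of the steady-state arithmetic (our illustration, not the paper's derivation):

```python
def dt_free_buffer(B: float, alpha: float, n_overloaded: int) -> float:
    """Steady-state free buffer under the Dynamic Threshold (DT) policy.

    DT admits a packet to a queue only while its length is below
    alpha * (free buffer). With n persistently overloaded ports each such
    queue settles at alpha * F, so B = F + n * alpha * F, giving
    F = B / (1 + n * alpha).
    """
    return B / (1 + n_overloaded * alpha)

# With alpha = 1 and a single overloaded port, half the buffer stays free --
# headroom that a micro-burst is nevertheless dropped into under plain DT.
print(dt_free_buffer(B=100.0, alpha=1.0, n_overloaded=1))  # 50.0
```

As the abstract notes, the reserved fraction 1/(1 + nα) is largest when few ports are overloaded, which is when temporarily relaxing fairness helps most.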
【Paper Link】 【Pages】:127-135
【Authors】: Han Zhang ; Xingang Shi ; Xia Yin ; Fengyuan Ren ; Zhiliang Wang
【Abstract】: The data center network has become an important facility for hosting various online services and applications, and thus its performance and underlying technologies are attracting more and more interest. To achieve better network performance, recent studies have proposed to tailor data center network traffic management in different aspects, devising various routing and transport schemes. In particular, for applications that must serve users in a timely manner, strict deadlines for their internal traffic flows should be met, and these deadlines are explicitly taken into consideration in some recent flow rate control and scheduling algorithms for data center networks. In this paper, we advocate that the design of such deadline-aware rate control schemes should follow a simple principle: flows with different deadlines should be differentiated in their bandwidth allocation/occupation, and the heavier the traffic load, the greater the differentiation should be. We derive sufficient and necessary conditions for a flow rate control scheme to follow this principle and present a simple congestion control algorithm called Load Proportional Differentiation (LPD) as an application. We have evaluated LPD under different topologies and load scenarios, both in simulation and on a real testbed. Our results show that LPD nearly always outperforms D2TCP, a recent deadline-aware rate control scheme, and often reduces the number of flows missing their deadlines by more than 50%. We also give other applications of this principle, for example in reducing flow completion time.
【Keywords】: bandwidth allocation; computer centres; computer networks; telecommunication network management; telecommunication network routing; telecommunication network topology; telecommunication traffic; LPD; bandwidth allocation; bandwidth occupation; data center network traffic management; deadline aware congestion control design principle; load proportional differentiation; network topology; routing scheme; scheduling algorithm; transport scheme; Algorithm design and analysis; Bandwidth; Channel allocation; Computers; Conferences; Hardware; Switches
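The design principle can be illustrated with a toy allocation rule (our sketch of the qualitative behavior only; LPD's actual control law is defined in the paper): weight each flow by the inverse of its deadline, raised to a load-dependent exponent, so that differentiation steepens as load grows.

```python
def deadline_shares(deadlines: list[float], load: float) -> list[float]:
    """Illustrative bandwidth split: earlier deadlines get larger shares,
    and the differentiation grows with the traffic load (used here as the
    exponent). A sketch of the principle, not the LPD algorithm.
    """
    weights = [(1.0 / d) ** load for d in deadlines]
    total = sum(weights)
    return [w / total for w in weights]

# At low load the shares are nearly equal; at high load the tight-deadline
# flow dominates.
print(deadline_shares([1.0, 2.0], load=0.1))
print(deadline_shares([1.0, 2.0], load=2.0))
```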
【Paper Link】 【Pages】:136-144
【Authors】: Jun Duan ; Zhiyang Guo ; Yuanyuan Yang
【Abstract】: Most of today's data center networks (DCNs) adopt a multi-rooted tree structure called fat-tree, which delivers large bisection bandwidth through rich path multiplicity. In fat-tree DCNs, core switch modules play an important role in providing nonblocking capability and simultaneously form a significant part of the network cost. Reducing core switches while guaranteeing performance has been a constant challenge. For example, multicast is an essential communication pattern in cloud services that needs to be supported efficiently. In this paper, we propose virtual network embedding schemes to deal with this problem. In the first scheme, we place the virtual machines (VMs) of a multicast-capable virtual network (MVN) as compactly as possible, without any disturbance to existing traffic. In the second scheme, we keep VMs in an even more compact arrangement to reduce cost by allowing a small degree of VM migration. Both schemes are guaranteed to support any multicast communication within MVNs, and both achieve significant cost savings in terms of core switches compared to the best currently known result. Moreover, we show that our schemes incur only a small overhead in terms of migrations. Finally, we evaluate the performance of the proposed schemes and validate the theoretical analysis through extensive simulations.
【Keywords】: computer centres; embedded systems; multicast communication; trees (mathematics); virtual machines; virtualisation; MVN; VM migration; core switch modules; cost saving; data center networks; fat-tree DCN; multi-rooted tree structure; multicast-capable virtual network; network cost; nonblocking capability; rich path multiplicity; virtual machines; virtual network embedding schemes; Bandwidth; Computers; Resource management; Servers; Switches; Virtual machining; Virtualization; Data center networks; fat-tree; multicast; nonblocking; virtual machine migration
【Paper Link】 【Pages】:145-153
【Authors】: Pei Huang ; Chin-Jung Liu ; Xi Yang ; Li Xiao
【Abstract】: To improve spectrum utilization, cognitive radio (CR) is introduced to detect and exploit available spectrum resources autonomously. Flexible spectrum use imposes special challenges on broadcast, because different CR devices may detect different available spectrum fragments at different locations. The sender and the receivers have to agree on the spectrum fragments that will be used for broadcast, and a common spectrum fragment available to all receivers may not exist. Most existing work assumes that a device operates on only a single channel, so the sender has to broadcast multiple times on different channels to reach all receivers; the broadcast problem is then studied as a channel rendezvous and minimum-latency scheduling problem. Recent spectrum-agile designs have enabled a device to utilize partially occupied spectrum. We thus view a wideband channel as an aggregation of multiple narrow channels that can be evaluated independently. This paper introduces a Spectrum Fragment Agile Broadcast (SFAB) scheme to support efficient broadcast on fragmented spectrum. It aims to achieve spectrum agreement between the transmitter and the receivers efficiently and to maximize the channel width used for broadcast regardless of the spectrum availability differences at the receivers. We validate the effectiveness of SFAB through an implementation on the GNU Radio / USRP platform and use ns-2 simulations to evaluate its performance in large deployments.
【Keywords】: cognitive radio; radio spectrum management; telecommunication scheduling; channel rendezvous; cognitive radio networks; efficient broadcast; fragmented spectrum; minimum latency scheduling problem; spectrum fragment agile broadcast scheme; spectrum resources; wideband channel; Interference; OFDM; Radio transmitters; Receivers; Time-domain analysis; Unicast; Wideband
【Paper Link】 【Pages】:154-162
【Authors】: Zhaoquan Gu ; Haosen Pu ; Qiang-Sheng Hua ; Francis C. M. Lau
【Abstract】: Cognitive radio networks (CRNs) have been proposed to solve the spectrum scarcity problem. One of their fundamental procedures is to construct a communication link on a common channel for the users, which is referred to as rendezvous. In reality, the capability to sense the spectrum may vary from user to user, and such users form what is known as a heterogeneous cognitive radio network (HCRN). The licensed spectrum is divided into n channels, U = {1, 2,..., n}. We denote the capability of user i as Ci ⊆ U and the set of available channels (i.e., the channels not occupied by the paying users) as Vi ⊆ Ci. We study the rendezvous problem in HCRN under two circumstances: fully available spectrum (Vi = Ci) and partially available spectrum (Vi ≠ Ci). For any two users a, b, we propose the Traversing Pointer (TP) algorithm, which guarantees rendezvous in O(max{|Ca|,|Cb|} log log n) time slots in the fully available spectrum scenario. This result is only an O(log log n) factor above our constructive lower bound. Moreover, it removes an O(min{|Ca|, |Cb|}) factor compared to the state-of-the-art result (O(|Ca||Cb|) in [26]). For the partially available spectrum scenario, we propose the Moving Traversing Pointers (MTP) algorithm, which guarantees rendezvous in O((max{|Va|, |Vb|})^2 log log n) time slots and works more efficiently than the previous best result (O(|Ca||Cb|) in [25]) in various circumstances. We also conduct extensive simulations, and the results corroborate our analysis.
【Keywords】: cognitive radio; radio links; radio spectrum management; HCRN; MTP algorithm; communication link; heterogeneous cognitive radio networks; licensed spectrum; moving traversing pointers; rendezvous algorithms; spectrum scarcity problem; Algorithm design and analysis; Cognitive radio; Computers; Conferences; Sensors; Wireless sensor networks; Fully available spectrum; Heterogeneous Cognitive Radio Network; Partially available spectrum; Rendezvous
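For comparison with the TP/MTP guarantees, the classic randomized baseline is easy to simulate (our illustration; the paper's algorithms are deterministic): each user hops to a uniformly random channel in its available set every slot, and rendezvous occurs when both land on the same channel, which takes |Va||Vb|/|Va ∩ Vb| slots in expectation.

```python
import random

def random_rendezvous(Va: set, Vb: set, max_slots: int = 10_000, seed: int = 7):
    """Random channel hopping: each user picks a uniformly random channel
    from its own available set every slot; rendezvous happens when both
    pick the same channel. Returns the 1-based slot index, or None."""
    rng = random.Random(seed)
    assert Va & Vb, "rendezvous is impossible without a common channel"
    for slot in range(1, max_slots + 1):
        if rng.choice(sorted(Va)) == rng.choice(sorted(Vb)):
            return slot
    return None

# A single shared channel is found in the very first slot when both
# available sets are the same singleton.
```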
【Paper Link】 【Pages】:163-171
【Authors】: Yi Song
【Abstract】: The size of a packet has a strong influence on the quality of wireless data communications. In traditional wireless networks, there is an inherent trade-off in determining the packet size: short packets are less likely than long packets to be corrupted by error-prone wireless channels, but they incur proportionally higher header overhead. In mobile cognitive radio (CR) networks, determining the secondary user (SU) packet size becomes much more complicated and critical. In addition to all the factors present in traditional wireless networks, the primary user (PU) activity and the mobility of SUs and PUs have significant impacts on the SU packet size, as does the channel fading caused by SU mobility. More importantly, all these impacts constantly vary with time and space, which makes this issue extremely challenging. Without a careful design of the SU packet size, both SU and PU transmissions may suffer severe performance degradation. In this paper, the optimal SU packet size in mobile CR networks under fading channels is studied. We mathematically model these impacts and derive the optimal SU packet size. To the best of our knowledge, this is the first work that systematically investigates the optimal SU packet size in mobile CR networks.
【Keywords】: cognitive radio; data communication; fading channels; mobile radio; error-prone wireless fading channel; inherent trade-off; mobile CR wireless network; mobile cognitive radio network; optimal secondary user packet size; primary user activity; wireless data communication quality; Data communication; Mobile communication; Mobile computing; Rayleigh channels; Throughput; Wireless communication
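The packet-size trade-off the abstract opens with has a classic form in the static-channel case: throughput efficiency is the payload share of the packet times the probability the whole packet survives. A sketch under textbook assumptions (independent bit errors, fixed header length; this is not the paper's mobile CR/fading model, and the function names are illustrative):

```python
def throughput_efficiency(L, H, p):
    """Useful fraction of airtime for payload length L (bits), header
    H (bits) and independent bit-error rate p: the payload share times
    the probability that the whole (L + H)-bit packet arrives intact."""
    return (L / (L + H)) * (1.0 - p) ** (L + H)

def optimal_payload(H, p, max_L=20_000):
    # Efficiency is unimodal in L; a linear scan is enough for a sketch.
    return max(range(1, max_L + 1),
               key=lambda L: throughput_efficiency(L, H, p))

# A noisier channel pushes the optimum toward shorter packets:
L_noisy = optimal_payload(H=64, p=1e-3)
L_clean = optimal_payload(H=64, p=1e-4)
```

The paper's contribution is precisely that in mobile CR networks this optimum also depends on PU activity, mobility, and fading, so it varies over time and space rather than being a single static value.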
【Paper Link】 【Pages】:172-180
【Authors】: Xiaocong Jin ; Jingchao Sun ; Rui Zhang ; Yanchao Zhang ; Chi Zhang
【Abstract】: Dynamic spectrum access (DSA) is the key to solving the worldwide spectrum shortage. The open wireless medium subjects DSA systems to unauthorized spectrum use by illegitimate users. This paper presents SpecGuard, the first crowdsourced spectrum misuse detection framework for DSA systems. In SpecGuard, a transmitter is required to embed a spectrum permit into its physical-layer signals, which can be decoded and verified by ubiquitous mobile users. We propose three novel schemes for embedding and detecting a spectrum permit at the physical layer. Detailed theoretical analyses, MATLAB simulations, and USRP experiments confirm that our schemes can achieve correct, low-intrusive, and fast spectrum misuse detection.
【Keywords】: mobile computing; radio spectrum management; DSA systems; MATLAB simulations; Specguard; USRP; crowdsourced spectrum misuse detection; dynamic spectrum access systems; open wireless medium; spectrum shortage; ubiquitous mobile users; Conferences; Detectors; Encoding; Mobile communication; Phase shift keying; Receivers; Transmitters
【Paper Link】 【Pages】:181-189
【Authors】: Shan-Hsiang Shen ; Liang-Hao Huang ; De-Nian Yang ; Wen-Tsuen Chen
【Abstract】: Current traffic engineering in SDN mostly focuses on unicast. By contrast, multicast can effectively reduce network resource consumption by serving multiple clients jointly. Since many important applications require reliable transmissions, it is envisaged that reliable multicast will play a crucial role when an SDN operator plans to provide multicast services. However, the shortest-path tree (SPT) adopted in the current Internet is not bandwidth-efficient, while the Steiner tree (ST) in graph theory is not designed to support reliable transmissions, since the selection of recovery nodes is not examined. In this paper, therefore, we propose a new reliable multicast tree for SDN, named the Recover-aware Steiner Tree (RST). The goal of RST is to minimize both tree and recovery costs, but finding an RST is very challenging: we prove that the RST problem is NP-hard and inapproximable within a factor of k, the number of destination nodes. Thus, we design an approximation algorithm, called the Recover-Aware Edge Reduction Algorithm (RAERA), to solve the problem. The simulation results on real networks and large synthetic networks, together with experiments on our SDN testbed with real YouTube traffic, all show that RST outperforms both SPT and ST. Also, the implementation of RAERA in SDN controllers shows that an RST can be returned within a few seconds and is thereby practical for SDN networks.
【Keywords】: Internet; computer network reliability; multicast communication; social networking (online); software defined networking; telecommunication network routing; telecommunication traffic; trees (mathematics); Internet; NP-Hard; RAERA; RST; SDN network; SPT; YouTube traffic; approximate algorithm; multicast routing reliability; network resource consumption reduction; recover aware edge reduction algorithm; recover-aware Steiner tree; shortest-path tree; software defined network; traffic engineering; Algorithm design and analysis; Approximation algorithms; Approximation methods; Computer network reliability; Reliability; Routing; TV; SDN; multicast; reliable transmissions; traffic engineering
【Paper Link】 【Pages】:190-198
【Authors】: Marco Canini ; Petr Kuznetsov ; Dan Levin ; Stefan Schmid
【Abstract】: Software-defined networking (SDN) is a novel paradigm that outsources the control of programmable network switches to a set of software controllers. The most fundamental task of these controllers is the correct implementation of the network policy, i.e., the intended network behavior. In essence, such a policy specifies the rules by which packets must be forwarded across the network. This paper studies a distributed SDN control plane that enables concurrent and robust policy implementation. We introduce a formal model describing the interaction between the data plane and a distributed control plane (consisting of a collection of fault-prone controllers). Then we formulate the problem of consistent composition of concurrent network policy updates (termed the CPC Problem). To anticipate scenarios in which some conflicting policy updates must be rejected, we enable the composition via a natural transactional interface with all-or-nothing semantics. We show that the ability of an f-resilient distributed control plane to process concurrent policy updates depends on the tag complexity, i.e., the number of policy labels (a.k.a. tags) available to the controllers, and describe a CPC protocol with optimal tag complexity f + 2.
【Keywords】: software defined networking; telecommunication control; CPC problem; all-or-nothing semantics; concurrent network policy updates; concurrent policy implementation; conflicting policy updates; distributed SDN control plane; f-resilient distributed control plane; fault-prone controllers; intended network behavior; natural transactional interface; optimal tag complexity; programmable network; robust SDN control plane; robust policy implementation; software controller; software defined networking; transactional network updates; Complexity theory; Conferences; Decentralized control; History; Ports (Computers); Protocols
【Paper Link】 【Pages】:199-207
【Authors】: Zhiming Hu ; Jun Luo
【Abstract】: The outputs of network monitoring, such as the traffic matrix (TM) and elephant flow identification, are essential inputs to many network operations and system designs in DCNs, but most solutions for network monitoring rely on direct measurement or inference alone, and so suffer from either high network overhead or low precision. In contrast, we combine the direct measurements offered by software defined networking (SDN) with inference techniques based on network tomography to derive a hybrid network monitoring scheme that strikes a balance between measurement overhead and accuracy. Essentially, we use SDN to turn the severely under-determined network tomography (TM estimation) problem in DCNs into a better-determined one. Thus many classic network tomography algorithms from ISP networks become feasible for DCNs. By combining SDN with network tomography, we can also identify elephant flows with high precision while occupying very little network resource. According to our experimental results, the accuracy of the estimated TM is far higher than that inferred from SNMP link counters alone, and the performance of identifying elephant flows is also very promising.
【Keywords】: computer centres; inference mechanisms; software defined networking; DCN direct measurement; ISP network; SDN direct measurement; TM estimation problem; data center network; high network overhead; hybrid network monitoring scheme; inference technique; low network precision; software defined network tomography; Computers; Estimation; Monitoring; Optimization; Radiation detectors; Servers; Tomography
【Paper Link】 【Pages】:208-216
【Authors】: Di Wu ; Dmitri I. Arkhipov ; Eskindir Asmare ; Zhijing Qin ; Julie A. McCann
【Abstract】: The growth of Internet of Things (IoT) devices has resulted in a number of urban-scale deployments of IoT multinetworks, where heterogeneous wireless communication solutions coexist. Managing these multinetworks for mobile IoT access is a key challenge. Software-defined networking (SDN) is emerging as a promising paradigm for quick configuration of network devices, but its application in multinetworks with frequent IoT access is not well studied. In this paper we present UbiFlow, the first software-defined IoT system for ubiquitous flow control and mobility management in multinetworks. UbiFlow adopts distributed controllers to divide urban-scale SDN into different geographic partitions. A distributed-hashing-based overlay structure is proposed to maintain network scalability and consistency. Based on this UbiFlow overlay structure, relevant mobility-management issues such as scalable control, fault tolerance, and load balancing are carefully examined and studied. The UbiFlow controller differentiates flow scheduling based on per-device requirements and whole-partition capability. Therefore, it can present a network status view and an optimized selection of access points in multinetworks to satisfy IoT flow requests, while guaranteeing network performance in each partition. Simulation and realistic testbed experiments confirm that UbiFlow can successfully achieve scalable mobility management and robust flow scheduling in IoT multinetworks.
【Keywords】: Internet of Things; mobility management (mobile radio); overlay networks; software defined networking; ubiquitous computing; Internet of Things; IoT devices; IoT multinetworks; SDN; UbiFlow; UbiFlow controller; UbiFlow overlay structure; distributed controllers; distributed hashing; fault tolerance; geographic partitions; load balancing; mobility management; network devices; overlay structure; per device requirement; scalable control; software defined networking; ubiquitous flow control; urban scale software defined IoT; urban-scale SDN; urban-scale deployments; wireless communication; Delays; Handover; Mobile radio mobility management; Switches
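UbiFlow's distributed-hashing overlay can be illustrated with a toy consistent-hash ring that maps partitions to controllers, so that adding or removing a controller only remaps the partitions it owned. This is a generic sketch of the technique, not UbiFlow's actual data structure; the class and names are hypothetical:

```python
import hashlib

def _h(key):
    # Deterministic position on the ring, derived from a SHA-1 digest.
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class HashRing:
    """Toy consistent-hash ring: each partition maps to the first
    controller clockwise from its hash, so removing a controller only
    remaps the partitions that controller owned."""
    def __init__(self, controllers):
        self.ring = sorted((_h(c), c) for c in controllers)

    def lookup(self, partition):
        h = _h(partition)
        for hc, c in self.ring:
            if hc >= h:
                return c
        return self.ring[0][1]  # wrap around past the largest hash
```

The property being exploited is exactly the consistency the abstract mentions: a controller failure or addition disturbs only its own arc of the ring, keeping handovers between geographic partitions cheap.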
【Paper Link】 【Pages】:217-225
【Authors】: Jing Gao ; Jianzhong Li ; Zhipeng Cai ; Hong Gao
【Abstract】: Event monitoring is a popular task carried out by Wireless Sensor Networks (WSNs). A composite event involves multiple properties requiring different types of sensors to monitor. Considering the deployment costs of heterogeneous sensors and the total budget for a monitored region, this paper investigates the composite event coverage problem, with the purpose of optimizing coverage quality subject to the constraint of not exceeding the total budget. This is a novel coverage problem, different from traditional ones in which deployment costs of sensors, total budget, and composite events are not considered. Two exact algorithms are proposed, whose worst-case time complexities are O(n^k) and O(n^{k-1}) respectively, and a (1 - e^{-1})-approximation algorithm is designed. The simulation results indicate the efficiency and effectiveness of the proposed algorithms.
【Keywords】: approximation theory; sensor placement; wireless sensor networks; WSN; composite event coverage problem; coverage quality; deployment costs; event monitoring; heterogeneous sensors deployment; time complexities; wireless sensor networks; Approximation algorithms; Monitoring; Optimization; Temperature measurement; Temperature sensors; Wireless sensor networks
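The (1 - e^{-1}) factor quoted above is characteristic of budgeted submodular coverage, where a cost-benefit greedy (combined with taking the best single candidate) is the standard building block of the guarantee. A minimal sketch of that greedy, not the paper's exact algorithms (function and parameter names are illustrative):

```python
def greedy_budgeted_coverage(candidates, budget):
    """candidates: list of (cost, set_of_targets). Repeatedly pick the
    deployment with the best marginal-coverage-per-cost ratio that still
    fits in the budget. Cost-benefit greedy of this kind is the usual
    route to (1 - 1/e)-style bounds for budgeted coverage."""
    covered, chosen, spent = set(), [], 0.0
    remaining = list(candidates)
    while True:
        # Keep only deployments that fit the budget and add new coverage.
        usable = [(cost, pts) for cost, pts in remaining
                  if spent + cost <= budget and pts - covered]
        if not usable:
            return chosen, covered
        best = max(usable, key=lambda c: len(c[1] - covered) / c[0])
        chosen.append(best)
        spent += best[0]
        covered |= best[1]
        remaining.remove(best)
```

On its own, the ratio greedy can be fooled by one expensive high-coverage item; the standard fix (also reflected in the literature behind such bounds) is to return the better of the greedy solution and the single best affordable candidate.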
【Paper Link】 【Pages】:226-234
【Authors】: Bahram Alinia ; Mohammad Hassan Hajiesmaili ; Ahmad Khonsari
【Abstract】: In deadline-constrained data aggregation in wireless sensor networks (WSNs), the imposed sink deadline in an interference-limited network hinders participation of all sensor nodes in data aggregation. Thus, only a subset of nodes can contribute to aggregation, and the quality of aggregation (QoA) increases with the number of participating nodes. Scheduling the nodes' transmissions is a central problem, which aims to maximize the QoA while satisfying the sink deadline, i.e., on-time delivery of the sensed data to the sink node. Although previous studies have proposed optimal scheduling algorithms for this problem given a particular aggregation tree, there is no work on constructing an optimal tree in this context. The underlying aggregation tree can make a big difference in QoA, since we demonstrate that the ratio between the maximum achievable QoAs of different trees can be as large as O(2^D), where D is the sink deadline. In this paper, we cast an optimization problem to address optimal tree construction for deadline-constrained data aggregation in WSNs. The problem is combinatorial in nature and difficult to solve, as we prove its NP-hardness. We employ the Markov approximation framework and devise two distributed algorithms with different computation overheads to find bounded close-to-optimal solutions. Simulation experiments in a set of representative randomly-generated scenarios show that the proposed algorithms improve QoA by 101% and 93% on average compared to the best existing alternative methods known to us.
【Keywords】: Markov processes; approximation theory; interference; trees (mathematics); wireless sensor networks; Markov approximation framework; NP-hardness problem; QoA; deadline-constrained WSN; deadline-constrained data aggregation; distributed algorithms; imposed sink deadline; interference-limited network; maximum-quality aggregation trees; optimal scheduling algorithms; optimization problem; quality of aggregation; wireless sensor networks; Approximation algorithms; Approximation methods; Delays; Markov processes; Optimal scheduling; Vegetation; Wireless sensor networks
【Paper Link】 【Pages】:235-243
【Authors】: Steffen Bondorf ; Jens B. Schmitt
【Abstract】: Sensor Network Calculus (SensorNC) provides a framework for worst-case analysis of wireless sensor networks. The analysis proceeds in two steps: for a given flow, (1) the network is reduced to a tandem of nodes by computing the arrival bounds of cross-traffic; (2) the flow is separated from the cross-traffic by subtracting cross-flows and concatenating nodes on its path. While the second step has received much attention, the first has received almost none. This is in sharp contrast to the fact that arrival bounding takes roughly 80% of the total analysis time and is equally crucial for the tightness of the bounds. Therefore, we turn our attention to this first SensorNC analysis step with the goal of boosting the performance and applicability of the overall framework. The main technical contribution is a generalized version of the concatenation theorem within the SensorNC setting. This generalization is instrumental in simplifying and streamlining the cross-traffic arrival bound computations, such that run times can be reduced by more than a factor of 5. Even more importantly, it localizes the information necessary to execute the calculations at the node level, thus enabling a distribution of the SensorNC analysis within a self-modeling WSN.
【Keywords】: telecommunication traffic; wireless sensor networks; SensorNC analysis; WSN self-modeling; concatenating node subtraction; cross-flow subtraction; cross-traffic arrival bound computation; sensor network calculus boosting; wireless sensor network; Aggregates; Calculus; Computers; Delays; Servers; Topology; Wireless sensor networks
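For the rate-latency service curves typically used in network calculus, the concatenation step the abstract builds on has a simple closed form: a tandem of servers offers rate min(R_i) and latency sum(T_i), which yields the familiar delay bound T + b/R for a token-bucket flow. A sketch under these textbook assumptions (function names are illustrative, and this is generic network calculus, not the paper's generalized theorem):

```python
def concat_rate_latency(servers):
    """Concatenation of rate-latency service curves
    beta_{R,T}(t) = R * max(0, t - T): a tandem of servers
    (R_1, T_1), ..., (R_m, T_m) offers the service curve with
    rate min(R_i) and latency sum(T_i)."""
    rate = min(R for R, T in servers)
    latency = sum(T for R, T in servers)
    return rate, latency

def delay_bound(burst, arrival_rate, servers):
    """Worst-case delay for a token-bucket flow (burst b, rate r)
    through the tandem, assuming r does not exceed the tandem rate:
    D = T + b / R."""
    R, T = concat_rate_latency(servers)
    assert arrival_rate <= R, "flow rate must not exceed service rate"
    return T + burst / R
```

Concatenating first and bounding once ("pay bursts only once") gives tighter results than adding per-node delay bounds, which is why the concatenation theorem is central to the tightness argument above.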
【Paper Link】 【Pages】:244-252
【Authors】: Shuangjuan Li ; Hong Shen
【Abstract】: Border surveillance for intrusion detection is an important application of wireless sensor networks. Given a set of mobile sensors and their initial positions, how to move these sensors to a region border to achieve barrier coverage energy-efficiently is challenging. In this paper, we study the 2-D MinMax barrier coverage problem of moving n sensors in a two-dimensional plane to form a barrier covering a specified line segment in the plane, while minimizing the maximum sensor movement so as to balance battery power consumption. Previously, this problem was shown to be NP-hard in the general case, and it was an open problem whether it is polynomial-time solvable when sensors have a fixed number of sensing ranges. We study a special case of great practical significance, in which all sensors have the same sensing range, and present an O(n^3 log n) time algorithm. Our algorithm computes a permutation of the left and right endpoints of the moving ranges of all the sensors forming a barrier coverage, and minimizes the maximum sensor movement distance by characterizing the permutation switches that are critical. To the best of our knowledge, this is the first result for the 2-D MinMax barrier coverage problem in the case where all sensors have a uniform sensing range.
【Keywords】: minimax techniques; wireless sensor networks; 2-D MinMax barrier coverage problem; border surveillance; intrusion detection; maximum sensor movement; wireless sensor networks; Algorithm design and analysis; Computers; Conferences; Mobile communication; Sensors; Silicon; Wireless sensor networks
【Paper Link】 【Pages】:253-261
【Authors】: Zhuo Lu ; Yalin Evren Sagduyu ; Jason H. Li
【Abstract】: The backpressure algorithm is known to provide throughput optimality in routing and scheduling decisions for multi-hop networks with dynamic traffic. The essential assumption in the backpressure algorithm is that all nodes are benign and obey the algorithm rules governing the information exchange and underlying optimization needs. Nonetheless, such an assumption does not always hold in realistic scenarios, especially in the presence of security attacks intended to disrupt network operations. In this paper, we propose a novel mechanism, called virtual trust queuing, to protect backpressure-based routing and scheduling protocols from various insider threats. Our objective is not to design yet another trust-based routing scheme that heuristically trades off security against performance, but to develop a generic solution with strong guarantees of attack resilience and throughput performance in the backpressure algorithm. To this end, we quantify a node's algorithm-compliance behavior over time and construct a virtual trust queue that maintains deviations from expected algorithm outcomes. We show that by jointly stabilizing the virtual trust queue and the real packet queue, the backpressure algorithm not only achieves resilience but also sustains its throughput performance under an extensive set of security attacks.
【Keywords】: queueing theory; radio networks; routing protocols; telecommunication scheduling; telecommunication security; telecommunication traffic; dynamic traffic; heuristic bargain security; information exchange; multihop wireless network threat; routing protocol; scheduling protocol; secure backpressure algorithm; virtual trust queuing; Algorithm design and analysis; Heuristic algorithms; Optimization; Queueing analysis; Routing; Scheduling; Throughput
【Paper Link】 【Pages】:262-270
【Authors】: Jing Chen ; Quan Yuan ; Guoliang Xue ; Ruiying Du
【Abstract】: Digital signatures have been widely employed in wireless mobile networks to ensure the authenticity of messages and the identity of nodes. A paramount concern in signature verification is reducing the verification delay to ensure network QoS. To address this issue, researchers have proposed batch cryptography technology. However, most existing works focus on designing batch verification algorithms without sufficiently considering the impact of invalid signatures. The performance of batch verification can drop dramatically when verification failures are caused by invalid signatures. In this paper, we propose a Game-theory-based Batch Identification Model (GBIM) for wireless mobile networks, enabling nodes to find invalid signatures with optimal delay under heterogeneous and dynamic attack scenarios. Specifically, we design an incomplete-information game model between a verifier and its attackers, and prove the existence of a Nash Equilibrium, to select the dominant algorithm for identifying invalid signatures. Moreover, we propose an auto-match protocol to optimize the identification algorithm selection when the attack strategies can be estimated from history information. Comprehensive simulation results demonstrate that GBIM can identify invalid signatures more efficiently than existing algorithms.
【Keywords】: cryptography; digital signatures; game theory; mobile communication; quality of service; telecommunication security; GBIM; Nash Equilibrium; QoS network; batch cryptography technology; batch identification; batch verification; digital signature; dynamic attack; game theory; game theory based batch identification model; invalid signatures; message authentication; signature verification; wireless mobile networks; Algorithm design and analysis; Games; Heuristic algorithms; Magnetic resonance imaging; Mobile communication; Mobile computing; Testing; Batch identification; game theory; wireless mobile networks
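When batch verification fails, the cost of locating the invalid signatures depends on the identification strategy, which is exactly what GBIM's game model selects among. One classic candidate strategy is divide-and-conquer splitting; a generic sketch with toy "signatures" (integers, negative meaning invalid; not GBIM itself):

```python
def find_invalid(sigs, batch_valid):
    """Divide-and-conquer batch identification: verify the whole batch
    in one (cheap) batch operation; on failure, split in half and
    recurse until the invalid items are isolated."""
    if batch_valid(sigs):
        return []                 # whole batch verifies: nothing invalid
    if len(sigs) == 1:
        return list(sigs)         # isolated an invalid signature
    mid = len(sigs) // 2
    return (find_invalid(sigs[:mid], batch_valid) +
            find_invalid(sigs[mid:], batch_valid))

# Toy stand-in for a real batch verifier: negative ints are "invalid".
batch_valid = lambda batch: all(s >= 0 for s in batch)
bad = find_invalid([1, -2, 3, 4, -5, 6], batch_valid)
```

Splitting is efficient when invalid signatures are rare; with many invalid signatures, individual verification can be cheaper, which is why the choice of identification algorithm is worth optimizing against the attacker's strategy.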
【Paper Link】 【Pages】:271-279
【Authors】: Kuan Zhang ; Xiaohui Liang ; Rongxing Lu ; Kan Yang ; Xuemin Sherman Shen
【Abstract】: In this paper, we propose a Social-based Mobile Sybil Detection (SMSD) scheme to detect Sybil attackers from their abnormal contacts and pseudonym changing behaviors. Specifically, we first define four levels of Sybil attackers in mobile environments according to their attacking capabilities. We then exploit mobile users' contacts and their pseudonym changing behaviors to distinguish Sybil attackers from normal users. To alleviate the storage and computation burden of mobile users, the cloud server is introduced to store mobile user's contact information and to perform the Sybil detection. Furthermore, we utilize a ring structure associated with mobile user's contact signatures to resist the contact forgery by mobile users and cloud servers. In addition, investigating mobile user's contact distribution and social proximity, we propose a semi-supervised learning with Hidden Markov Model to detect the colluded mobile users. Security analysis demonstrates that the SMSD can resist the Sybil attackers from the defined four levels, and the extensive trace-driven simulation shows that the SMSD can detect these Sybil attackers with high accuracy.
【Keywords】: cloud computing; hidden Markov models; learning (artificial intelligence); network servers; security of data; Sybil attackers; abnormal contacts; cloud server; hidden Markov model; mobile environments; mobile social behaviors; mobile user contact distribution; mobile user contact signatures; pseudonym changing behaviors; security analysis; semisupervised learning; social proximity; social-based mobile Sybil detection; trace-driven simulation; Aggregates; Computers; Hidden Markov models; Mobile communication; Mobile computing; Resists; Servers
【Paper Link】 【Pages】:280-288
【Authors】: Zhuo Lu ; Cliff Wang
【Abstract】: Network inference is an effective mechanism to infer end-to-end flow rates and has enabled a variety of applications (e.g., network surveillance and diagnosis). This paper focuses on the opposite side of network inference, i.e., how to make inference inaccurate, which we call network anti-inference. As most research efforts have focused on developing efficient inference methods, the design of anti-inference is largely overlooked. Anti-inference scenarios can arise when network inference is not desirable, such as in clandestine communication and military applications. Our objective is to explore network dynamics to provide anti-inference. In particular, we consider two proactive strategies that cause network dynamics: transmitting deception traffic and changing routing to mislead the inference. We build an analytical framework to quantify the inference errors induced by proactive strategies that maintain limited costs. We find via analysis and simulations that for deception traffic, a simple random transmission strategy can achieve inference errors on the same order as the best coordinated transmission strategy, while changing routing can cause inference errors of higher order than any deception traffic strategy. Our results not only reveal a fundamental perspective on proactive strategies, but also offer guidance for the practical design of anti-inference.
【Keywords】: network theory (graphs); telecommunication network routing; telecommunication traffic; end-to-end flow rate; network antiinference; network routing; proactive strategy; random transmission strategy; transmitting deception traffic; Computers; Conferences; Degradation; Delays; Least squares approximations; Routing; Routing protocols
【Paper Link】 【Pages】:289-297
【Authors】: Dmytro Karamshuk ; Nishanth Sastry ; Andrew Secker ; Jigna Chandaria
【Abstract】: In search of scalable solutions, CDNs are exploring P2P support. However, the benefits of peer assistance can be limited by various obstacle factors such as ISP friendliness (requiring peers to be within the same ISP), bitrate stratification (the need to match peers with others needing a similar bitrate), and partial participation (some peers choosing not to redistribute content). This work relates the potential gains from peer assistance to the average number of users in a swarm and its capacity, and empirically studies the effects of these obstacle factors at scale, using a month-long trace of over 2 million users in London accessing BBC shows online. Results indicate that even when P2P swarms are localised within ISPs, up to 88% of traffic can be saved. Surprisingly, bitrate stratification results in two large sub-swarms and does not significantly affect savings. However, partial participation and the need for a minimum swarm size do affect gains. We investigate improving gains by increasing content availability through two well-studied techniques: content bundling (combining multiple items to increase availability) and historical caching of previously watched items. Bundling proves ineffective, as the increased server traffic from larger bundles outweighs the availability benefits, but simple caching can considerably boost traffic gains from peer assistance.
【Keywords】: peer-to-peer computing; telecommunication traffic; video on demand; video streaming; BBC iPlayer; CDN; ISP-friendly peer-assisted on-demand streaming; P2P; bitrate stratification; content bundling-combining multiple; content delivery network; historical caching; partial participation; traffic saving; Analytical models; Bandwidth; Bit rate; Computers; Conferences; Peer-to-peer computing; Servers
【Paper Link】 【Pages】:298-306
【Authors】: Stefanie Roos ; Thorsten Strufe
【Abstract】: Virtual overlays generate topologies for greedy routing, such as rings or hypercubes, on connectivity-restricted networks. They have been proposed to achieve efficient content discovery in the Darknet mode of Freenet, for instance, which provides a private and secure communication platform for dissidents and whistle-blowers. Virtual overlays create tunnels between nodes with neighboring addresses in the topology. The routing performance is hence directly related to the length of the tunnels, which have to be set up and maintained at the cost of communication overhead in the absence of an underlying routing protocol. In this paper, we show the impossibility of efficiently maintaining sufficiently short tunnels. Specifically, we prove that in a dynamic network either the maintenance or the routing eventually exceeds polylogarithmic cost in the number of participants. Our simulations additionally show that the length of the tunnels increases quickly when standard maintenance protocols are applied. Thus, we show that virtual overlays can only offer efficient routing at the price of high maintenance costs.
【Keywords】: overlay networks; routing protocols; telecommunication network topology; telecommunication security; Darknet mode; churn; connectivity restricted networks; dynamic network; greedy routing; hypercubes; routing protocol; secure communication platform; self-stabilization; topologies; topology; virtual overlays; whistle-blowers; Maintenance engineering; Network topology; Random processes; Random variables; Routing; Topology; Zinc
【Paper Link】 【Pages】:307-315
【Authors】: Bang Liu ; Di Niu ; Zongpeng Li ; H. Vicky Zhao
【Abstract】: With the increasing popularity of real-time applications such as live chat and gaming, latency prediction between personal devices, including mobile devices, becomes an important problem. Traditional approaches recover all-pair latencies in a network from sampled measurements using either Euclidean embedding or matrix factorization. However, these approaches, which target static or mean network latency prediction, are insufficient for predicting personal device latencies, due to unstable and time-varying network conditions, triangle inequality violations, and the unknown rank of latency matrices. In this paper, by analyzing latency measurements from the Seattle platform, we propose new methods both for static latency estimation and for the dynamic estimation problem given 3D latency matrices sampled over time. We propose a distance-feature decomposition algorithm that decomposes latency matrices into a distance component and a network feature component, and further leverage the structured pattern inherent in the 3D sampled data to increase estimation accuracy. Extensive evaluations driven by real-world traces show that our proposed approaches significantly outperform various state-of-the-art latency prediction techniques.
【Keywords】: Internet; matrix decomposition; mobile computing; telecommunication traffic; 3D latency matrices; 3D sampling; Euclidean embedding; Seattle platform; distance-feature decomposition algorithm; dynamic estimation problem; matrix factorization; mobile devices; network latency prediction; personal devices; static latency estimation; time-varying network conditions; triangle inequality violation; Estimation; Extraterrestrial measurements; Linear matrix inequalities; Matrix decomposition; Peer-to-peer computing; Prediction algorithms; Three-dimensional displays
【Paper Link】 【Pages】:316-324
【Authors】: Qiben Yan ; Yao Zheng ; Tingting Jiang ; Wenjing Lou ; Y. Thomas Hou
【Abstract】: Advanced botnets adopt a peer-to-peer (P2P) infrastructure for more resilient command and control (C&C). Traditional detection techniques become less effective in identifying bots that communicate via a P2P structure. In this paper, we present PeerClean, a novel system that detects P2P botnets in real time using only high-level features extracted from C&C network flow traffic. PeerClean reliably distinguishes P2P bot-infected hosts from legitimate P2P hosts by jointly considering flow-level traffic statistics and network connection patterns. Instead of working on individual connections or hosts, PeerClean clusters hosts with similar flow traffic statistics into groups. It then extracts the collective and dynamic connection patterns of each group by leveraging a novel dynamic group behavior analysis. Compared with individual host-level connection patterns, the collective group patterns are more robust and more differentiable. Multi-class classification models are then used to identify different types of bots based on the established patterns. To increase the detection probability, we further propose to train the model with average group behavior but to explore extreme group behavior for detection. We evaluate PeerClean on real-world flow records from a campus network. Our evaluation shows that PeerClean achieves high detection rates with few false positives.
【Keywords】: command and control systems; feature extraction; invasive software; pattern classification; peer-to-peer computing; probability; statistical analysis; telecommunication traffic; C&C network flow traffic; P2P bot-infected host; P2P botnet; PeerClean; command and control; detection probability; detection technique; dynamic group behavior analysis; flow level traffic statistic; high-level feature extraction; multiclass classification model; network connection pattern; peer-to-peer botnet; Computers; Conferences; Feature extraction; Peer-to-peer computing; Robustness; Support vector machines; Training
【Paper Link】 【Pages】:325-333
【Authors】: Wei Wang ; Xiaobing Wu ; Lei Xie ; Sanglu Lu
【Abstract】: Heterogeneous cellular networks use small base stations, such as femtocells and WiFi APs, to offload traffic from macrocells. While network operators wish to globally balance the traffic, users may selfishly select the nearest base stations and make some base stations overcrowded. In this paper, we propose an auction-based algorithm, Femto-Matching, to achieve both load balancing among base stations and fairness among users. Femto-Matching optimally solves the global proportional fairness problem in polynomial time by transforming it into an equivalent matching problem. Furthermore, it can efficiently utilize the capacity of randomly deployed small cells. Our trace-driven simulations show that Femto-Matching can reduce the load of macrocells by more than 30% compared to non-cooperative game based strategies.
【Keywords】: femtocellular radio; game theory; multi-access systems; polynomials; telecommunication traffic; wireless LAN; WiFi AP; auction-based algorithm; base stations; equivalent matching problem; femto-matching; femtocells; heterogeneous cellular networks; load balancing; macrocells; network operators; noncooperative game based strategies; polynomial time; traffic offloading; Algorithm design and analysis; Femtocells; Macrocell networks; Optimization; Signal to noise ratio; Throughput
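The reduction from global proportional fairness to matching can be illustrated with a toy sketch (this is not the paper's Femto-Matching algorithm; it assumes one user per base station and brute-forces the matching, which only works for tiny instances):

```python
from itertools import permutations
from math import log

def pf_matching(rates):
    """rates[u][b] = achievable rate of user u on base station b.
    With one user per base station, globally proportional-fair assignment
    reduces to the matching that maximizes the sum of log-rates."""
    n = len(rates)
    best, best_val = None, float("-inf")
    for perm in permutations(range(n)):   # brute force: fine for tiny n
        val = sum(log(rates[u][perm[u]]) for u in range(n))
        if val > best_val:
            best, best_val = perm, val
    return best

rates = [[10.0, 4.0, 1.0],
         [9.0, 8.0, 1.0],
         [1.0, 2.0, 3.0]]
print(pf_matching(rates))  # -> (0, 1, 2)
```

Here users 0 and 1 both achieve their highest rate on base station 0, so selfish nearest-BS selection would overload it; the proportional-fair matching instead assigns user 1 to base station 1, spreading the load.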
【Paper Link】 【Pages】:334-342
【Authors】: Wei Bao ; Ben Liang
【Abstract】: We study optimal radio resource allocation across multiple tiers of a heterogeneous wireless network in order to maximize the downlink sum throughput. Unlike prior works, we consider both the randomness of base stations in space and dynamic user traffic session arrivals in time, accounting for both elastic and inelastic user traffic. A new stochastic analysis framework, which accommodates both spatial and temporal dimensions, is proposed to quantify the throughput objective. The derived throughput function is not in closed form and is non-concave in the radio resource allocation factors to be optimized, hindering the search for an efficient optimization solution. Therefore, we further develop closed-form concave bounds that envelop the throughput function, to form convex approximations of the original optimization problem that can be solved efficiently. We characterize the performance gap when these bounds are used instead of the original objective. Both analytical bounding and simulation experiments demonstrate that the proposed solution is nearly optimal.
【Keywords】: approximation theory; convex programming; radio links; radio networks; stochastic processes; telecommunication traffic; base stations; concave bounds; convex approximations; downlink sum throughput; dynamic user traffic session arrivals; heterogeneous wireless networks; inelastic user traffic; optimal radio resource allocation; spatial temporal perspective; stochastic analysis framework; Computers; Downlink; Interference; Optimization; Resource management; Stochastic processes; Throughput
【Paper Link】 【Pages】:343-351
【Authors】: Hao Zhou ; Yusheng Ji ; Xiaoyan Wang ; Baohua Zhao
【Abstract】: Interference management is one of the most important issues in heterogeneous cellular networks (HetNets) with macro and pico cells. Enhanced inter-cell interference coordination (eICIC) has been proposed to protect downlink pico cell transmissions by mitigating interference from neighboring macro cells. The adaptive eICIC configuration problem is studied in this paper to adjust parameters including the ratio of Almost Blank Subframes (ABS) and the bias of cell range expansion (RE). We formulate the problem as a general-form consensus problem with regularization, and solve it with an efficient distributed optimization framework. Our algorithm is based on the alternating direction method of multipliers (ADMM), in which the solutions to local subproblems on each macro cell and pico cell are coordinated to find a solution to the global problem for the whole network. We also propose dynamic-programming-based algorithms to solve the local subproblems on each macro or pico cell. The simulation results demonstrate the efficiency of the proposed algorithm compared with existing approaches, and verify its convergence properties.
【Keywords】: dynamic programming; interference suppression; optimisation; picocellular radio; ADMM based algorithm; almost blank subframes; alternating direction method of multipliers; cell range expansion; distributed optimization framework; downlink picocell transmissions; dynamic programming; eICIC configuration; enhanced inter cell interference coordination; general form consensus problem; heterogeneous cellular networks; interference management; mitigating interference; neighboring macrocells; Computers; Conferences; Convergence; Heuristic algorithms; Interference; Optimization; Resource management
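The general-form consensus structure that ADMM exploits can be illustrated on a toy problem (a sketch, not the paper's eICIC formulation: each "cell" i holds a local quadratic f_i(x) = (x - a_i)^2, and ADMM drives all local copies to a common value, whose optimum is the mean of the a_i):

```python
def consensus_admm(a, rho=1.0, iters=100):
    """Consensus ADMM for min sum_i (x - a_i)^2 over a shared x."""
    n = len(a)
    x = [0.0] * n   # local variables (one per agent/"cell")
    u = [0.0] * n   # scaled dual variables
    z = 0.0         # global consensus variable
    for _ in range(iters):
        # local step: argmin_x (x - a_i)^2 + (rho/2)(x - z + u_i)^2
        x = [(2 * a[i] + rho * (z - u[i])) / (2 + rho) for i in range(n)]
        # global step: average of (x_i + u_i)
        z = sum(x[i] + u[i] for i in range(n)) / n
        # dual step
        u = [u[i] + x[i] - z for i in range(n)]
    return z

print(round(consensus_admm([1.0, 2.0, 6.0]), 4))  # -> 3.0, the mean
```

Each local step uses only that agent's data, mirroring how the paper coordinates per-cell subproblems into a global solution.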
【Paper Link】 【Pages】:352-360
【Authors】: Maryam Ahmadi ; Fei Tong ; Lei Zheng ; Jianping Pan
【Abstract】: Tiered networks have been introduced to mitigate the issues related to poor cellular coverage in dead zones and indoor environments. However, the large-scale deployment of multiple tiers can result in severe intra-tier and inter-tier interference that can considerably degrade the performance of users in all tiers. Thus, network interference analysis has been an important topic in tiered networks. In this paper, we focus on the uplink resource reusing scenario in a two-tier cellular network consisting of a macro cell and multiple femto cells. Without imposing any limitations on the shape of the macro/femto cells (as long as they can be approximated by polygons), for the first time in the literature, we obtain the distance distributions associated with tiered structures. Utilizing these distance distributions and the path-loss model in an interference-limited environment, we obtain the distributions of the received signal and interference for both tiers. Further, we give details on how our approach applies to the downlink resource reusing scenario as well as a network with multiple macro cells. Our performance study provides insights into the Signal-to-Interference Ratio and outage probability for macro/femto-cell base stations.
【Keywords】: femtocellular radio; indoor radio; probability; radiofrequency interference; dead zones; distance distributions; downlink resource reusing scenario; femtocell base station; indoor environments; intertier interference; intratier interference; macrocell base station; network interference analysis; outage probability; path-loss model; performance analysis; poor cellular coverage; probabilistic distance models; signal-to-interference ratio; tiered networks; two-tier cellular systems; uplink resource reusing scenario; Geometry; Interference; Probabilistic logic; Radio frequency; Shape; Stochastic processes; Uplink; Tiered heterogeneous networks; arbitrary polygons; distance distributions; femto cells; interference
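The paper derives these distance distributions analytically for arbitrary polygonal cells; as a quick sanity-check, the same quantities can be estimated by Monte Carlo. A minimal sketch (a square cell with the base station at its center, not one of the paper's polygon cases):

```python
import random

def mc_mean_distance(samples=100_000, seed=1):
    """Estimate the mean user-to-BS distance for a uniformly random user
    in a unit-square cell with the BS at the center."""
    random.seed(seed)
    total = 0.0
    for _ in range(samples):
        x, y = random.random() - 0.5, random.random() - 0.5  # user position
        total += (x * x + y * y) ** 0.5                      # distance to BS
    return total / samples

print(mc_mean_distance())  # the analytic value for this cell is ~0.3826
```

The full distance distribution (not just the mean) feeds directly into received-signal and interference distributions via the path-loss model.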
【Paper Link】 【Pages】:361-369
【Authors】: Kaixin Sui ; Youjian Zhao ; Dan Pei ; Zimu Li
【Abstract】: The Enterprise 802.11 Network (EWLAN) is an important infrastructure for the Mobile Internet, but its performance is being significantly impacted by ever-increasing rogue access points (RAPs). For example, in the university EWLAN we studied, the number of RAPs is more than seven times that of the enterprise APs. In this paper, we propose a generic methodology that measures a RAP's carrier sense interference and hidden terminal interference using only readily available SNMP metrics, without any additional measurement hardware. Our results show that, on average, the carrier sense interference due to RAPs causes only a 5% access delay increase at the MAC layer, thanks to careful engineering and software optimization. However, hidden terminal interference due to RAPs is much more severe, causing up to a 30% MAC layer loss rate increase on average, because no existing approach has explicitly dealt with the hidden terminal impact of rogue APs. Overall, RAP interference can increase the IP layer delay at the WiFi hop by up to 50%.
【Keywords】: mobile communication; wireless LAN; enterprise 802.11 network performance; hidden terminal interference; mobile Internet; rogue access points; sense interference; Buildings; IEEE 802.11 Standard; Interference; Logic gates; Mobile communication; Radiation detectors; Software
【Paper Link】 【Pages】:370-378
【Authors】: Weichao Li ; Ricky K. P. Mok ; Daoyuan Wu ; Rocky K. C. Chang
【Abstract】: As most mobile apps rely on network connections for their operations, measuring and understanding the performance of mobile networks is becoming very important for end users and operators. Despite the availability of many measurement apps, their measurement accuracy has not received sufficient scrutiny. In this paper, we appraise the accuracy of smartphone-based network performance measurement using the Android platform and the network round-trip time as the metric. We use a multiple-sniffer testbed to overcome the challenge of obtaining a complete trace for acquiring the required timestamps. Our experiment results show that the RTTs measured by the apps are all inflated, ranging from a few milliseconds (ms) to tens of milliseconds. Moreover, the 95% confidence interval can be as high as 2.4 ms. A finer-grained analysis reveals that the delay inflation can be introduced both in the Dalvik VM (DVM) and below the Linux kernel. The in-DVM overhead can be mitigated, but the other cannot. Finally, we propose and implement a native app which uses HTTP messages for network measurement, and the delay inflation can be kept under 5 ms for almost all cases.
【Keywords】: Android (operating system); delay estimation; smart phones; time measurement; Android platform; Dalvik VM; HTTP messages; Linux kernel; RTT; delay inflation; in-DVM overhead; measurement accuracy; measurement apps; mobile apps; mobile networks; multiple-sniffer testbed; network connections; network measurement; network round-trip time; smartphone-based network performance measurement; timestamps; Accuracy; Clocks; Delays; Kernel; Servers; Smart phones; Wireless communication
【Paper Link】 【Pages】:379-387
【Authors】: Song Min Kim ; Shuai Wang ; Tian He
【Abstract】: Contradicting the widely believed assumption of link independence, the phenomenon of reception correlation among nearby receivers has recently been revealed and exploited in a variety of protocols [3], [8], [17], [21], [23], [24]. However, despite the diverse correlation-aware designs proposed to date, they commonly suffer from a shortcoming: link correlation is inaccurately measured, which leads to sub-optimal performance. In this work we propose a general framework for accurately capturing link correlation, enabling better utilization of the phenomenon by protocols built on top of it. Our framework uses SINR (Signal to Interference plus Noise Ratio) to detect correlations, followed by modeling the correlations for in-network use. We show that our design is lightweight, both computation- and storage-wise. We apply our model to opportunistic routing and network coding on a physical 802.15.4 test-bed, achieving energy savings of 25% and 15%, respectively.
【Keywords】: Zigbee; diversity reception; energy conservation; network coding; radio links; radio receivers; routing protocols; SINR; energy saving; network coding; opportunistic routing; physical 802.15.4 test-bed; reception correlation phenomenon; signal to interference plus noise ratio; wireless link correlation effect; wireless receiver; Correlation; IEEE 802.11 Standard; IEEE 802.15 Standard; Interference; Receivers; Shadow mapping; Signal to noise ratio
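The link-correlation phenomenon itself is easy to expose from per-packet reception traces (an illustrative computation, not the paper's SINR-based estimator): under link independence, the probability that receiver B gets a packet would not change when conditioned on receiver A losing it.

```python
def conditional_prr(trace_a, trace_b):
    """P(B receives | A loses), from aligned per-packet traces
    where 1 = received and 0 = lost."""
    lost_a = [i for i, ok in enumerate(trace_a) if not ok]
    if not lost_a:
        return None
    return sum(trace_b[i] for i in lost_a) / len(lost_a)

# Synthetic traces where losses tend to hit both nearby receivers at once.
a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
b = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
prr_b = sum(b) / len(b)        # unconditional PRR of B = 0.7
cond = conditional_prr(a, b)   # P(B ok | A lost) = 0.25
print(prr_b, cond)             # cond << prr_b reveals correlated losses
```

A correlation-aware protocol (e.g., opportunistic routing) exploits exactly this gap: when links are positively correlated, adding a nearby overhearing receiver buys less diversity than the independence assumption predicts.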
【Paper Link】 【Pages】:388-396
【Authors】: Dziugas Baltrunas ; Ahmed Elmokashfi ; Amund Kvalbein
【Abstract】: This paper demonstrates that end-to-end active measurements can give invaluable insights into the nature and characteristics of packet loss in cellular networks. We conduct a large-scale measurement study of packet loss in four UMTS networks. The study is based on active measurements from hundreds of measurement nodes over a period of one year. We find that a significant fraction of loss occurs during pathological and normal Radio Resource Control (RRC) state transitions. The remaining loss exhibits pronounced diurnal patterns and shows a relatively strong correlation between geographically diverse measurement nodes. Our results indicate that the causes of a significant part of the remaining loss lie beyond the radio access network.
【Keywords】: 3G mobile communication; cellular radio; losses; UMTS network; cellular network edge; end-to-end active measurement; mobile broadband network; normal radio resource control; packet loss; pathological radio resource control; radio access network; 3G mobile communication; Artificial neural networks; Computers; Correlation; Packet loss; Pathology
【Paper Link】 【Pages】:397-405
【Authors】: Yousi Zheng ; Ness B. Shroff ; R. Srikant ; Prasun Sinha
【Abstract】: The number and size of data centers have seen rapid growth in the last few years. It is no longer uncommon to see large data centers with thousands or even tens of thousands of machines. Hence, it is critical to develop scalable scheduling mechanisms for processing the enormous number of jobs handled by popular paradigms such as the MapReduce framework. This work explores the possibility of simplifying the scheduling procedure by exploiting the “largeness” of the data center system. Specifically, we consider the problem of minimizing the total flow time of a sequence of jobs under the MapReduce framework, where the jobs arrive over time and need to be processed through both Map and Reduce procedures before leaving the system. We show that any work-conserving scheduler is asymptotically optimal under a wide range of traffic loads, including the heavy traffic limit. Our results are shown for scenarios in which the tasks can be preempted and served in parallel over different machines, as well as scenarios where each task has to be served on only one machine and cannot be preempted. This result implies, somewhat surprisingly, that when we have a large number of machines, there is little to be gained by optimizing beyond ensuring that a scheduler is work-conserving. For long-running applications, we also study the relationship between the number of machines and the total running time, and show sufficient conditions that guarantee the asymptotic optimality of work-conserving schedulers. Further, we run extensive simulations, which indeed verify that when the total number of machines is large, state-of-the-art work-conserving schedulers have similar and close-to-optimal delay performance.
【Keywords】: computer centres; data handling; large-scale systems; optimisation; parallel processing; scheduling; MapReduce framework; asymptotic optimality; data center scheduler design; large system dynamics; optimization; scheduling mechanism; work-conserving scheduler; Complexity theory; Computers; Conferences; Delays; Minimization; Multicore processing
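The defining property of the scheduler class the paper analyzes, work conservation, is simply that no machine idles while work is waiting. A minimal sketch (single-task jobs on m identical machines, far simpler than the paper's two-stage Map/Reduce model) computes the total flow time such a policy achieves:

```python
import heapq

def total_flow_time(jobs, m):
    """jobs: list of (arrival_time, size), sorted by arrival time.
    Flow time of a job = completion time - arrival time."""
    free = [0.0] * m          # heap of machine free times
    heapq.heapify(free)
    flow = 0.0
    for arrival, size in jobs:
        t = heapq.heappop(free)
        start = max(t, arrival)       # work-conserving: start as soon as possible
        finish = start + size
        heapq.heappush(free, finish)
        flow += finish - arrival
    return flow

jobs = [(0.0, 3.0), (0.0, 1.0), (1.0, 2.0)]
print(total_flow_time(jobs, m=2))     # -> 3 + 1 + 2 = 6.0
```

The paper's message is that in the large-system regime, any policy with this never-idle property is already asymptotically optimal for total flow time, so little is gained by more elaborate ordering.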
【Paper Link】 【Pages】:406-414
【Authors】: Chang-Heng Wang ; Tara Javidi ; George Porter
【Abstract】: This paper considers end-to-end scheduling for all-optical data center networks with zero in-network buffering and non-negligible reconfiguration delay. It is known that in the regime where the scheduling reconfiguration delay is non-negligible, the rate of schedule reconfiguration should be limited so as to minimize the impact of reduced duty-cycles and to ensure bounded delay. However, when the scheduling rate is restricted, the existing literature also tends to restrict the rate of the monitoring and decision processes. We first present a framework for scheduling with reconfiguration delay that decouples the rate of scheduling from the rate of monitoring. Under this framework, we then present two scheduling algorithms for switches with reconfiguration delay, both based on the well-known MaxWeight scheduling policy. The first is Periodic MaxWeight (PMW), which is computationally simpler but requires prior knowledge of the traffic load. The other is Adaptive MaxWeight (AMW), which, in contrast, requires no such prior knowledge. We show the stability condition for both algorithms and evaluate their delay performance through simulations.
【Keywords】: computer centres; optical switches; telecommunication network topology; telecommunication scheduling; telecommunication traffic; adaptive maxweight; all-optical data center networks; end-to-end scheduling; maxweight scheduling policy; non-negligible reconfiguration delay; periodic maxweight; scheduling rate; traffic load; zero in-network buffer; Delays; Monitoring; Optical buffering; Optical switches; Processor scheduling; Schedules; Stability analysis
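The MaxWeight policy at the heart of PMW/AMW can be sketched for a tiny n-port switch (a simplified illustration, not the paper's algorithms; the adaptive threshold `delta` below is a hypothetical stand-in for AMW's reconfiguration criterion):

```python
from itertools import permutations

def maxweight_schedule(Q):
    """Q[i][j] = backlog from input i to output j. MaxWeight picks the
    matching (here: permutation) serving the most total backlog."""
    n = len(Q)
    return max(permutations(range(n)),
               key=lambda p: sum(Q[i][p[i]] for i in range(n)))

def should_reconfigure(Q, current, delta=0.5):
    """Hypothetical adaptive rule: reconfigure only when the current
    schedule's weight drops below (1 - delta) of the max weight, so the
    non-negligible reconfiguration delay is amortized."""
    n = len(Q)
    w_cur = sum(Q[i][current[i]] for i in range(n))
    best = maxweight_schedule(Q)
    w_max = sum(Q[i][best[i]] for i in range(n))
    return w_cur < (1 - delta) * w_max

Q = [[5, 1], [2, 7]]
s = maxweight_schedule(Q)
print(s, should_reconfigure(Q, s))   # -> (0, 1) False
```

Keeping a still-heavy schedule in place while queues drift is exactly what lets monitoring run faster than reconfiguration in the paper's framework.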
【Paper Link】 【Pages】:415-423
【Authors】: Kai Han ; Zhiming Hu ; Jun Luo ; Liu Xiang
【Abstract】: The recent development of 60GHz technology has made hybrid Data Center Networks (hybrid DCNs) possible, i.e., augmenting wired DCNs with highly directional 60GHz wireless links to provide flexible network connectivity. Although a few recent proposals have demonstrated the feasibility of this hybrid design, it remains an open problem how to route DCN traffic with guaranteed performance in a hybrid DCN environment. In this paper, we make the first attempt to tackle this challenge, and propose the RUSH framework to minimize network congestion in hybrid DCNs by jointly routing flows and scheduling wireless (directional) antennas. Though the problem is shown to be NP-hard, the RUSH algorithms offer guaranteed performance bounds. Our algorithms are able to handle both batched arrivals and sequential arrivals of flow demands, and our theoretical analysis shows that they achieve competitive ratios of O(log n), where n is the number of switches in the network. We also conduct extensive simulations using ns-3 to verify the effectiveness of RUSH. The results demonstrate that RUSH produces nearly optimal performance and significantly outperforms the current practice and a simple greedy heuristic.
【Keywords】: computer centres; directive antennas; millimetre wave antennas; radio links; telecommunication congestion control; telecommunication network routing; telecommunication scheduling; DCN traffic routing; NP-hard; RUSH framework; directional antenna; frequency 60 GHz; hybrid data center network; network congestion minimization; ns-3; routing and scheduling; wireless link; Algorithm design and analysis; Computers; Directional antennas; Routing; Schedules; Topology; Wireless communication
【Paper Link】 【Pages】:424-432
【Authors】: Yangming Zhao ; Kai Chen ; Wei Bai ; Minlan Yu ; Chen Tian ; Yanhui Geng ; Yiming Zhang ; Dan Li ; Sheng Wang
【Abstract】: In the data flow models of today's data center applications such as MapReduce, Spark and Dryad, multiple flows can semantically form a coflow group. Only the completion of all flows in a coflow is meaningful to an application. To optimize application performance, routing and scheduling must be jointly considered at the level of a coflow rather than individual flows. However, prior solutions have a significant limitation: they only consider scheduling, which is insufficient. To this end, we present Rapier, a coflow-aware network optimization framework that seamlessly integrates routing and scheduling for better application performance. Using a small-scale testbed implementation and large-scale simulations, we demonstrate that Rapier significantly reduces the average coflow completion time (CCT) by up to 79.30% compared to the state-of-the-art scheduling-only solution, and it is readily implementable with existing commodity switches.
【Keywords】: computer centres; optimisation; telecommunication network routing; telecommunication scheduling; RAPIER; coflow-aware data center networks; coflow-aware network optimization framework; Algorithm design and analysis; Approximation algorithms; Bandwidth; Optimal scheduling; Processor scheduling; Routing; Scheduling
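The coflow abstraction fits in one line (a sketch of the metric itself, not of Rapier): a coflow finishes only when its slowest member flow finishes, so its completion time is the max over member flows, not the sum or the average.

```python
def coflow_completion_time(flows):
    """flows: list of (size_in_bits, rate_in_bits_per_second)
    for the member flows of one coflow."""
    return max(size / rate for size, rate in flows)

# Two flows of one coflow on 1 Gbps paths: speeding up only the
# small flow cannot reduce the CCT; the 400 Mb flow dominates.
print(coflow_completion_time([(100e6, 1e9), (400e6, 1e9)]))  # -> 0.4 (s)
```

This max structure is why per-flow optimization is insufficient and why routing (which sets each flow's achievable rate) must be co-designed with scheduling.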
【Paper Link】 【Pages】:433-441
【Authors】: Dong-Hoon Shin ; Shibo He ; Junshan Zhang
【Abstract】: While most existing efforts on dynamic spectrum access have focused on spectrum sensing of a narrow band in a given region, this paper takes a holistic perspective to determine the usage profile of wide spectrum bands over a large geographic region. Specifically, a mobile crowdsensing approach is taken to develop a spectrum-profiling framework, which leverages the wisdom of many mobile devices to accomplish large-scale sensing tasks. A key step for spectrum profiling via mobile crowdsensing is to strategically assign sensing tasks to mobile users, so as to maximize the utility of the sensing data acquired. We cast this problem as a joint sensing task and subband allocation problem for utility maximization, capturing the location-specific characteristics of spectrum sensing. Since the problem is NP-hard, we design approximation algorithms. First, we design a greedy approximation algorithm as a baseline. Our analysis shows that the proposed greedy algorithm achieves an approximation ratio of 1/6, i.e., it obtains at least 1/6 of the utility of the optimal allocation. Next, we design a Linear Program (LP) rounding based approximation algorithm, aiming to achieve a better approximation ratio than the greedy algorithm. We show that the proposed LP-rounding algorithm attains an approximation ratio of (1/2)(1 - 1/e) for the general case, and achieves 1 - 1/e for a special case of the problem, which is the best possible approximation ratio. We also present the complexity analysis of the two proposed algorithms. We perform numerical experiments to evaluate the average performance of the proposed algorithms.
【Keywords】: linear programming; radio spectrum management; resource allocation; NP-hard; greedy approximation algorithm; joint sensing task and subband allocation; large-scale spectrum profiling; linear program rounding; spectrum sensing; utility maximization; Algorithm design and analysis; Approximation algorithms; Approximation methods; Greedy algorithms; Mobile communication; Resource management; Sensors
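A generic greedy assignment sketch shows both the baseline's shape and why its ratio is strictly below 1 (this is a simplification, not the paper's algorithm: the 1/6-ratio greedy also handles subband allocation; here each user takes one task and each task one user, both hypothetical restrictions):

```python
def greedy_assign(utility):
    """utility[(user, task)] -> value of that user sensing that task.
    Repeatedly pick the feasible pair with the largest utility."""
    pairs = sorted(utility, key=utility.get, reverse=True)
    used_u, used_t, picked, total = set(), set(), [], 0.0
    for u, t in pairs:
        if u not in used_u and t not in used_t:
            used_u.add(u)
            used_t.add(t)
            picked.append((u, t))
            total += utility[(u, t)]
    return picked, total

util = {("u1", "t1"): 5.0, ("u1", "t2"): 4.0,
        ("u2", "t1"): 4.5, ("u2", "t2"): 1.0}
print(greedy_assign(util))  # -> ([('u1', 't1'), ('u2', 't2')], 6.0)
```

Greedy collects utility 6.0 here, while the optimal assignment (u1 to t2, u2 to t1) collects 8.5: a concrete instance of the gap that approximation ratios quantify.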
【Paper Link】 【Pages】:442-450
【Authors】: Jelena V. Misic ; Md. Mizanur Rahman ; Vojislav B. Misic
【Abstract】: Cognitive radio networks rely on spectrum sensing performed in a collaborative manner by the cognitive nodes themselves. Priority differentiation in such a network can be accomplished through different scheduling policies, differentiated duration of mandatory spectrum sensing, or a combination of the two. This differentiation will affect not only packet delays, but also the accuracy of channel sensing and, by extension, the probability of collisions with primary user transmissions which will critically affect the operation of the network. In this paper we provide a probabilistic analysis of the interplay between priority differentiation and network performance, and investigate the resulting tradeoffs under different prioritization approaches.
【Keywords】: cognitive radio; probability; radio spectrum management; signal detection; telecommunication scheduling; telecommunication traffic; channel sensing; cognitive radio network; collision probability; packet delay; primary user transmission; priority differentiation; probabilistic analysis; scheduling policy; spectrum sensing; Bandwidth; Conferences; Media Access Protocol; Personal area networks; Probability; Quality of service; Sensors
【Paper Link】 【Pages】:451-459
【Authors】: Narendra Anand ; Jeongkeun Lee ; Sung-Ju Lee ; Edward W. Knightly
【Abstract】: A Multi-User MIMO (MU-MIMO) Access Point (AP) can obtain a capacity gain by simultaneously transmitting to multiple clients. This technique requires Channel State Information (CSI) at the transmitting AP to set antenna gains and phases to enable simultaneous reception through beamforming. The AP must also select both the mode (number of transmit and collective receive antennas) and the user set prior to transmission. While the ideal mode and user selection is a function of CSI, CSI must be estimated with an overhead-intensive channel sounding process. We design, implement, and evaluate the Pre-sounding User and Mode selection Algorithm (PUMA), a method for mode and user selection prior to channel sounding. We show that even without CSI, PUMA (i) exploits theoretical properties of MU-MIMO system scaling with respect to mode, (ii) characterizes the relative cost of each potential mode, and (iii) estimates the per-stream transmission rate and aggregate throughput in each mode for a potential user set. Once PUMA has selected the appropriate mode and user group, the chosen protocol's channel sounding method is used on the intended user subset to carry out the transmission. We show that, on average, PUMA selects the mode and group that achieves an aggregate rate within 3% of the saturation throughput of sounding all users (which would require significant additional overhead). Moreover, we show that PUMA obtains 30% higher aggregate throughput compared to the best fixed-mode policy that uses the maximum number of available transmit and receive antennas.
【Keywords】: MIMO communication; channel allocation; multi-access systems; wireless LAN; MIMO access point; PUMA algorithm; aggregate throughput; multiuser MIMO WLAN; overhead intensive channel sounding process; perstream transmission rate; presounding user and mode selection algorithm; relative cost; user selection; Aggregates; Interference; Receiving antennas; Signal to noise ratio; Throughput; Transmitting antennas
【Paper Link】 【Pages】:460-468
【Authors】: Wei Wang ; Yingjie Victor Chen ; Zeyu Wang ; Jin Zhang ; Kaishun Wu ; Qian Zhang
【Abstract】: Fixed channelization configuration in today's wireless devices proves inefficient in the presence of growing data traffic and heterogeneous devices. A number of fairly recent studies have provided spectrum adaptation capabilities for current wireless devices; however, they are limited to in-band adaptation or incur substantial coordination overhead. The goal of this paper is to fill the gaps in spectrum adaptation by overcoming these limitations. We propose Seer, a frame-level wideband spectrum adaptation system which consists of two major components: i) a specially constructed preamble that can be detected by receivers with arbitrary RF bands, and ii) a spectrum detection algorithm that identifies the intended transmission band in the presence of multiple asynchronous senders by exploiting the preamble's temporal and spectral properties. Seer can be realized on commodity radios, and can be easily integrated into devices running different PHY/MAC protocols. We have prototyped Seer on the GNURadio/USRP platform and tested it under various environments. Furthermore, our evaluation using 1.6GHz spectrum measurements shows that Seer substantially improves system throughput over fixed channel configuration and state-of-the-art spectrum adaptation approaches.
【Keywords】: access protocols; radio spectrum management; signal detection; GNURadio-USRP platform; MAC protocols; PHY protocols; Seer; commodity radios; coordination overhead; fixed channel configuration; fixed channelization configuration; frame-level wideband spectrum adaptation system; frequency 1.6 GHz; growing data traffic; heterogeneous devices; inband adaptation; multiple asynchronous senders; preamble; receivers; spectrum adaptation capabilities; spectrum detection algorithm; wireless devices; Encoding; Protocols; Radio frequency; Receivers; Wideband; Wireless communication
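The basic mechanism behind preamble detection is correlation against a known pattern. A toy illustration (Seer's actual detector exploits the preamble's temporal and spectral structure across arbitrary RF bands; this sketch only shows the sliding-correlation idea on real-valued samples):

```python
def detect_preamble(signal, preamble):
    """Return the offset where the signal correlates best with the
    known preamble pattern."""
    n, m = len(signal), len(preamble)
    best_off, best_corr = 0, float("-inf")
    for off in range(n - m + 1):
        corr = sum(signal[off + i] * preamble[i] for i in range(m))
        if corr > best_corr:
            best_off, best_corr = off, corr
    return best_off

preamble = [1.0, -1.0, 1.0, 1.0]
noise = [0.1, -0.2, 0.05]
signal = noise + preamble + [0.0, 0.1]
print(detect_preamble(signal, preamble))  # -> 3, where the preamble starts
```

Because correlation peaks sharply only at alignment, the same idea tolerates senders that are asynchronous with respect to one another, a property Seer's frame-level design relies on.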
【Paper Link】 【Pages】:469-477
【Authors】: Stefano Vissicchio ; Luca Cittadini ; Olivier Bonaventure ; Geoffrey G. Xie ; Laurent Vanbever
【Abstract】: Network operators can and do deploy multiple routing control-planes, e.g., by running different protocols or instances of the same protocol. With the rise of SDN, multiple control-planes are likely to become even more popular, e.g., to enable hybrid SDN or multi-controller deployments. Unfortunately, previous works do not apply to arbitrary combinations of centralized and distributed control-planes. In this paper, we develop a general theory for coexisting control-planes. We provide a novel, exhaustive classification of existing and future control-planes (e.g., OSPF, EIGRP, and OpenFlow) based on fundamental control-plane properties that we identify. Our properties are general enough to study centralized and distributed control-planes under a common framework. We show that multiple uncoordinated control-planes can cause forwarding anomalies whose type solely depends on the identified properties. To show the wide applicability of our framework, we leverage our theoretical insight to (i) provide sufficient conditions to avoid anomalies, (ii) propose configuration guidelines, and (iii) define a provably-safe procedure for reconfigurations from any (combination of) control-planes to any other. Finally, we discuss prominent consequences of our findings on the deployment of new paradigms (notably, SDN) and previous research works.
【Keywords】: routing protocols; software defined networking; centralized routing control-planes; distributed routing control-planes; Computers; Conferences; Process control; Routing; Routing protocols; Taxonomy
【Paper Link】 【Pages】:478-486
【Authors】: Xuan Nam Nguyen ; Damien Saucez ; Chadi Barakat ; Thierry Turletti
【Abstract】: The Software-Defined Networking approach makes it possible to realize new policies. In OpenFlow in particular, a controller decides on behalf of the switches which forwarding rules must be installed and where. With this flexibility, however, comes the challenge of computing a rule allocation matrix that meets both high-level policies and network constraints such as memory or link capacity limitations. Nevertheless, in many situations (e.g., data-center networks), the exact path followed by packets does not severely impact performance as long as packets are delivered according to the endpoint policy. It is thus possible to deviate part of the traffic to alternative paths so as to better use network resources without violating the endpoint policy. In this paper, we propose a linear optimization model of the rule allocation problem in resource-constrained OpenFlow networks with a relaxed routing policy. We show that the general problem is NP-hard and propose a polynomial-time heuristic, called OFFICER, which aims to maximize the amount of carried traffic in under-provisioned networks. Our numerical evaluation on four different topologies shows that exploiting various paths makes it possible to increase the amount of traffic supported by the network without significantly increasing the path length.
【Keywords】: computational complexity; computer centres; optimisation; polynomials; software defined networking; telecommunication network routing; telecommunication network topology; telecommunication traffic; NP-hard problem; OFFICER; OpenFlow networks; OpenFlow rule allocation; carried traffic; data-center networks; endpoint policy enforcement; general optimization framework; high-level policy; linear optimization; link capacity limitations; memory capacity limitations; network constraints; network resources; polynomial time heuristic; relaxing routing policy; rule allocation matrix; software defined networking; under-provisioned networks; Control systems; Linear programming; Network topology; Optimization; Resource management; Routing; Topology
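A simplified greedy in the spirit of OFFICER (not the paper's exact heuristic; the one-rule-per-switch cost model and the volume-first ordering below are illustrative assumptions): install rules for the highest-volume flows first, subject to per-switch rule-memory limits, so that the amount of carried traffic is maximized.

```python
def allocate_rules(flows, path_of, capacity):
    """flows: {flow: traffic volume}; path_of: {flow: [switches on path]};
    capacity: {switch: free rule slots}. Flows that do not fit are assumed
    to fall back to a default path permitted by the relaxed routing policy."""
    installed, carried = [], 0.0
    for f in sorted(flows, key=flows.get, reverse=True):
        path = path_of[f]
        if all(capacity[s] > 0 for s in path):
            for s in path:
                capacity[s] -= 1          # one rule per switch on the path
            installed.append(f)
            carried += flows[f]
    return installed, carried

flows = {"f1": 10.0, "f2": 6.0, "f3": 5.0}
paths = {"f1": ["s1", "s2"], "f2": ["s1", "s3"], "f3": ["s2", "s3"]}
print(allocate_rules(flows, paths, {"s1": 1, "s2": 2, "s3": 1}))
# -> (['f1', 'f3'], 15.0): f2 is squeezed out by s1's rule memory
```

Even this toy version shows the core tension: rule memory, not link capacity, can be the binding constraint in under-provisioned OpenFlow networks.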
【Paper Link】 【Pages】:487-495
【Authors】: Huandong Wang ; Yong Li ; Ying Zhang ; Depeng Jin
【Abstract】: Live migration is a key technique for virtual machine (VM) management in data center networks, enabling flexibility in resource optimization, fault tolerance, and load balancing. Despite its usefulness, live migration still introduces performance degradation during the migration process. Thus, there have been continuous efforts to reduce the migration time in order to minimize this impact. From the network's perspective, the migration time is determined by the amount of data to be migrated and the available bandwidth for the transfer. In this paper, we examine the problem of how to schedule the migrations and how to allocate network resources for migration when multiple VMs need to be migrated at the same time. We consider the problem in the Software-Defined Network (SDN) context since it provides flexible control over routing. More specifically, we propose a method that computes the optimal migration sequence and the network bandwidth used for each migration. We formulate this problem as a mixed integer program, which is NP-hard. To make it computationally feasible for large-scale data centers, we propose an approximation scheme via linear approximation plus fully polynomial time approximation, and obtain its theoretical performance bound. Through extensive simulations, we demonstrate that our fully polynomial time approximation (FPTA) algorithm performs well compared with the optimal solution and two state-of-the-art algorithms. That is, our proposed FPTA algorithm approaches the optimal solution with less than 10% variation and much less computation time. Meanwhile, it reduces the total migration time and the service downtime by up to 40% and 20%, respectively, compared with the state-of-the-art algorithms.
【Keywords】: computational complexity; fault tolerance; integer programming; polynomial approximation; resource allocation; software defined networking; telecommunication network planning; virtual machines; FPTA algorithm; NP-hard problem; data center networks; fault tolerance; fully polynomial time approximation algorithm; linear approximation; live migration; load balancing; mixed integer programming; resource optimization; software-defined networks; virtual machine migration planning; Approximation algorithms; Bandwidth; Linear approximation; Linear programming; Polynomials; Virtual machining
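As a toy illustration of the quantities involved, the classic geometric-series pre-copy model and a shortest-migration-first ordering can be sketched as follows (a textbook approximation, not the paper's MIP formulation or FPTA algorithm):

```python
def migration_time(mem, dirty_rate, bandwidth):
    # pre-copy rounds retransmit pages dirtied in the previous round, so
    # total data ~ mem * (1 + r + r^2 + ...) with r = dirty_rate/bandwidth,
    # giving time = mem / (bandwidth - dirty_rate)
    assert dirty_rate < bandwidth, "migration never converges otherwise"
    return mem / (bandwidth - dirty_rate)

def best_sequence(vms, bandwidth):
    # vms: list of (mem, dirty_rate); migrating one VM at a time over a
    # shared link, average completion time is minimized by the classic
    # shortest-processing-time-first order
    return sorted(vms, key=lambda v: migration_time(v[0], v[1], bandwidth))
```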
【Paper Link】 【Pages】:496-504
【Authors】: Calvin Newport ; Wenchao Zhou
【Abstract】: A software defined network (SDN) separates the centralized control plane from the distributed data plane. This approach simplifies control logic at the cost of a heavy burden on the software-based controller and potentially long reaction times to data plane events. One solution to this problem is to distribute control logic to multiple controllers spread across the network. Such a solution, however, requires additional mechanisms to enforce correctness properties (e.g., consistency) among the controllers, and it still does not fully eliminate latency, as controller decisions happen in software. In this paper, we explore a novel approach to this problem: configuring the rules used by the data plane switches to allow these switches to effectively handle latency-sensitive network management tasks without the direct intervention of the control plane. We are not suggesting adding distributed control logic capability to the switches; we are instead exploring the feasibility of encoding such logic using the standard forwarding rules already available to these devices. To this end, we formally model a network of SDN switches, and then prove using tools from computability theory that such systems are capable of simulating polynomial space Turing Machines, indicating a surprising amount of computational power.
【Keywords】: Turing machines; centralised control; control engineering computing; multivariable control systems; software defined networking; SDN data plane; centralized control plane; computability theory; computational power; control logic; latency-sensitive network management; polynomial space Turing machines; software defined network; software-based controller; Computational modeling; Conferences; Control systems; Data models; Pattern matching; Standards; Turing machines
【Paper Link】 【Pages】:505-512
【Authors】: Rahul Singh ; Xueying Guo ; P. R. Kumar
【Abstract】: A problem of much current practical interest is the replacement of the wiring infrastructure connecting approximately 200 sensor and actuator nodes in automobiles by an access point. This is motivated by the considerable savings in automobile weight, simplification of manufacturability, and future upgradability. A key issue is how to schedule the nodes on the shared access point so as to provide regular packet delivery. In this and other similar applications, the mean of the inter-delivery times of packets, i.e., throughput, is not sufficient to guarantee service regularity. The time-averaged variance of the inter-delivery times is also an important metric. So motivated, we consider a wireless network where an access point schedules real-time generated packets to nodes over a fading wireless channel. We are interested in designing simple policies that achieve the optimal mean-variance tradeoff in inter-delivery times of packets by minimizing the sum of time-averaged means and variances over all clients. Our goal is to explore the full range of the Pareto frontier of all weighted linear combinations of mean and variance so that one can fully exploit the design possibilities. We transform this problem into a Markov decision process and show that the problem of choosing which node's packet to transmit in each slot can be formulated as a bandit problem. We establish that this problem is indexable and explicitly derive the Whittle indices. The resulting Index policy is optimal in certain cases. We also provide upper and lower bounds on the cost for any policy. Extensive simulations show that Index policies perform better than previously proposed policies.
【Keywords】: Markov processes; fading channels; wireless sensor networks; Markov decision process; access point; bandit problem; fading wireless channel; index policies; inter-delivery times; optimal mean-variance trade-off of inter-delivery times; optimal mean-variance tradeoff; real-time sensor networks; regular packet delivery; weighted linear combinations; wireless network; Computers; Conferences; Couplings; Indexes; Markov processes; Steady-state; Throughput
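A minimal sketch of what an index-style scheduler looks like (this is not the paper's Whittle index; the elapsed-time-times-weight priority below is a hypothetical stand-in that only illustrates how starvation, and thus inter-delivery variance, is curbed):

```python
def index_policy_step(clients, now):
    # serve the client with the largest weighted elapsed time since its
    # last delivery; long-starved clients win, which keeps inter-delivery
    # times regular rather than merely frequent on average
    return max(clients, key=lambda c: c['weight'] * (now - c['last_delivery']))
```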
【Paper Link】 【Pages】:513-521
【Authors】: Laura Galluccio ; Sebastiano Milardo ; Giacomo Morabito ; Sergio Palazzo
【Abstract】: In this paper SDN-WISE, a software defined networking (SDN) solution for wireless sensor networks, is introduced. Unlike existing SDN solutions for wireless sensor networks, SDN-WISE is stateful and pursues two objectives: (i) to reduce the amount of information exchanged between sensor nodes and the SDN network controller, and (ii) to make sensor nodes programmable as finite state machines, thus enabling them to run operations that cannot be supported by stateless solutions. A detailed description of SDN-WISE is provided in this paper. SDN-WISE offers APIs that allow software developers to implement the SDN Controller in the programming language they prefer. This is a major advantage of SDN-WISE over existing solutions because it increases flexibility and simplicity in network programming. A prototype of SDN-WISE has been implemented and is described in this paper. The implementation contains the modules that allow a real SDN Controller to manage an OMNeT++ simulated network. Finally, the paper illustrates the results obtained through an experimental testbed developed to evaluate the performance of SDN-WISE under several operating conditions.
【Keywords】: finite state machines; software defined networking; wireless sensor networks; API; OMNeT++ simulated network; SDN network controller; SDN-WISE; application program interface; finite state machines; programmable sensor node; software defined networking; stateful SDN solution; wireless sensor networks; Arrays; Network topology; Protocols; Software; Topology; Wireless communication; Wireless sensor networks
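The finite-state-machine idea can be illustrated with a toy stateful flow table (the entry format below is hypothetical, not the actual WISE table layout):

```python
class StatefulNode:
    def __init__(self, rules, state=0):
        # rules: {(current_state, packet_field): (action, next_state)}
        self.rules = rules
        self.state = state

    def handle(self, pkt):
        # an unmatched packet is escalated to the controller, OpenFlow-style;
        # matched packets can change the node's state, so the same packet
        # can trigger different actions over time without controller traffic
        action, self.state = self.rules.get(
            (self.state, pkt), ('ask_controller', self.state))
        return action
```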
【Paper Link】 【Pages】:522-530
【Authors】: Dora Spenza ; Michele Magno ; Stefano Basagni ; Luca Benini ; Mario Paoli ; Chiara Petrioli
【Abstract】: Emerging wake-up radio technologies have the potential to bring the performance of sensing systems and of the Internet of Things to the levels of low latency and very low energy consumption required to enable critical new applications. This paper provides a step towards this goal with a twofold contribution. We first describe the design and prototyping of a wake-up receiver (WRx) and its integration with a wireless sensor node. Our WRx features very low power consumption (< 1.3 μW), high sensitivity (up to -55 dBm), fast reactivity (wake-up time of 130 μs), and selective addressing, a key enabler of new high performance protocols. We then present ALBA-WUR, a cross-layer solution for data gathering in sensing systems that redesigns a previous leading protocol, ALBA-R, extending it to exploit the features of our WRx. We evaluate the performance of ALBA-WUR via simulations, showing that the use of the WRx produces remarkable energy savings (up to five orders of magnitude), and achieves lifetimes that are decades longer than those obtained by ALBA-R in sensing systems with duty cycling, while keeping latencies at bay.
【Keywords】: protocols; radio receivers; telecommunication power management; wireless sensor networks; ALBA-R extension; ALBA-WUR protocol; cross layer data gathering solution; high performance protocol; long lived wireless sensing systems; selective addressing; selective awakening; wake-up radio; wake-up receiver; wireless sensor node; Color; Power demand; Prototypes; Receivers; Relays; Sensitivity; Wireless sensor networks
【Paper Link】 【Pages】:531-539
【Authors】: Siyao Cheng ; Zhipeng Cai ; Jianzhong Li ; Xiaolin Fang
【Abstract】: The amount of sensory data is growing explosively due to the increasing popularity of Wireless Sensor Networks. The scale of the sensory data in many applications already exceeds several petabytes annually, which is beyond the computation and transmission capabilities of conventional WSNs. On the other hand, the information carried by big sensory data has high redundancy because of the strong correlation among sensory readings. In this paper, we define the concept of the ε-dominant dataset, a small dataset that can represent the vast information carried by big sensory data with an information loss rate of less than ε, where ε can be arbitrarily small. We prove that finding the minimum ε-dominant dataset is polynomial-time solvable and provide a centralized algorithm with O(n³) time complexity. Furthermore, a distributed algorithm with constant complexity (O(1)) is also designed. It is shown that the result returned by the distributed algorithm satisfies the ε requirement with a near-optimal size. Finally, extensive real experiments and simulations are carried out. The results indicate that all the proposed algorithms perform well in terms of accuracy and energy efficiency.
【Keywords】: polynomials; wireless sensor networks; WSN; big sensory data; centralized algorithm; computation capabilities; distributed algorithm; dominant dataset; polynomial time; sensory data; time complexity; transmission capabilities; wireless sensor networks; Complexity theory; Correlation; Distributed algorithms; Maintenance engineering; Nickel; Sensors; Wireless sensor networks
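A minimal greedy sketch conveys the flavor of an ε-dominant dataset for one-dimensional readings (the paper's polynomial-time algorithms are more sophisticated; this only illustrates the coverage requirement):

```python
def epsilon_dominant(readings, eps):
    # keep a reading as a representative only if no kept representative
    # is already within eps of it; every discarded reading is then
    # recoverable from some representative with error at most eps
    reps = []
    for x in readings:
        if not any(abs(x - r) <= eps for r in reps):
            reps.append(x)
    return reps
```

Strongly correlated readings collapse onto a handful of representatives, which is exactly why big sensory data admits a small dominant dataset.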
【Paper Link】 【Pages】:540-548
【Authors】: Michela Becchi ; Anat Bremler-Barr ; David Hay ; Omer Kochba ; Yaron Koral
【Abstract】: This paper focuses on regular expression matching over compressed traffic. The need for such matching arises from two independent trends. First, the volume and share of compressed HTTP traffic is constantly increasing. Second, due to their superior expressibility, current Deep Packet Inspection engines use regular expressions more and more frequently. We present an algorithmic framework to accelerate such matching, taking advantage of information gathered when the traffic was initially compressed. HTTP compression is typically performed through the GZIP protocol, which uses back-references to repeated strings. Our algorithm is based on calculating (for every byte) the minimum number of (previous) bytes that can be part of a future regular expression matching. When inspecting a back-reference, only these bytes should be taken into account, thus enabling one to skip repeated strings almost entirely without missing a match. We show that our generic framework works with either NFA-based or DFA-based implementations and gains performance boosts of more than 70%. Moreover, it can be readily adapted to most existing regular expression matching algorithms, which usually are based either on NFA, DFA or combinations of the two. Finally, we discuss other applications in which calculating the number of relevant bytes becomes handy, even when the traffic is not compressed.
【Keywords】: data compression; deterministic automata; finite automata; hypermedia; pattern matching; transport protocols; DFA-based implementation; GZIP protocol; NFA-based implementation; compressed HTTP traffic; deterministic finite automata; hypertext transfer protocol; nondeterministic finite automata; regular expression matching; Acceleration; Automata; Computers; Conferences; Estimation; Inspection; Pattern matching
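The skipping idea can be sketched on a toy GZIP-like token stream of literal characters and (distance, length) back-references: only bytes near a back-reference's boundaries can start or finish a new match, so the interior is copied without rescanning. This is a simplification of the paper's framework — matches wholly inside a repeated region duplicate ones already found at the referenced string and are not re-reported here:

```python
def match_compressed(tokens, pattern):
    # tokens: GZIP-style stream of literal characters and (distance,
    # length) back-references into already-emitted text
    pat, w = list(pattern), len(pattern)
    text, hits = [], []

    def scan():
        pos = len(text)
        if pos >= w and text[pos - w:pos] == pat:
            hits.append(pos - w)

    for tok in tokens:
        if isinstance(tok, str):          # literal: always scanned
            text.append(tok)
            scan()
        else:                             # back-reference: mostly skipped
            dist, length = tok
            start = len(text) - dist
            for i in range(length):
                text.append(text[start + i])
                # only the first/last w-1 copied bytes can take part in a
                # match crossing the reference boundary; interior bytes
                # were already scanned where the referenced string first
                # appeared, so they are skipped
                if i < w - 1 or i >= length - (w - 1):
                    scan()
    return hits
```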
【Paper Link】 【Pages】:549-557
【Authors】: Jingyu Hua ; Yue Gao ; Sheng Zhong
【Abstract】: Trajectory data, i.e., human mobility traces, are extremely valuable for a wide range of mobile applications. However, publishing raw trajectories without special sanitization poses serious threats to individual privacy. Recently, researchers have begun to leverage differential privacy to address this challenge. Nevertheless, existing mechanisms make the implicit assumption that the trajectories contain many identical prefixes or n-grams, which is not true in many applications. This paper aims to remove this assumption and proposes a differentially private publishing mechanism for more general time-series trajectories. One natural solution is to generalize the trajectories, i.e., merge the locations at the same time instant. However, trivial merging schemes may breach differential privacy. We thus propose the first differentially private generalization algorithm for trajectories, which leverages a carefully designed exponential mechanism to probabilistically merge nodes based on trajectory distances. Afterwards, we propose another efficient algorithm that releases the generalized trajectories in a differentially private manner. Our experiments with real-life trajectory data show that the proposed mechanism maintains high data utility and scales to large trajectory datasets.
【Keywords】: data privacy; time series; differential privacy; differentially private publication; differentially-private generalization algorithm; human mobility traces; time-serial trajectory data; time-series trajectories; Computers; Conferences; Data Publishing; Differential Privacy; Trajectory
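The standard exponential mechanism at the core of such a generalization step can be sketched as follows (scoring merge candidates by negative trajectory distance, and the candidate set itself, are illustrative assumptions, not the paper's exact construction):

```python
import math
import random

def exponential_mechanism(candidates, score, eps, sensitivity, rng=random):
    # select one candidate with probability proportional to
    # exp(eps * score / (2 * sensitivity)); higher-scoring (e.g. closer)
    # merge candidates are chosen more often, but never deterministically,
    # which is what preserves differential privacy
    weights = [math.exp(eps * score(c) / (2.0 * sensitivity)) for c in candidates]
    r = rng.uniform(0.0, sum(weights))
    acc = 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return c
    return candidates[-1]
```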
【Paper Link】 【Pages】:558-566
【Authors】: Jiefei Ma ; Franck Le ; Alessandra Russo ; Jorge Lobo
【Abstract】: Signature-based network intrusion detection systems (S-IDSs) have become an important security tool for protecting an organisation's infrastructure against external intruders. S-IDSs detect network intrusions by analysing network traffic. An organisation may deploy one or multiple S-IDSs, each working independently under the assumption that it can monitor all packets of a given flow to detect intrusion signatures. However, emerging technologies (e.g., Multi-Path TCP) violate this assumption, as traffic can be sent concurrently across different paths (e.g., WiFi, Cellular) to boost network performance. Attackers may exploit this capability and split malicious payloads across multiple paths to evade traditional signature-based network intrusion detection systems. Although multiple monitors may be deployed, none of them has the full coverage of the network traffic needed to detect the intrusion signature. In this paper, we formalise this distributed signature-based intrusion detection problem as an asynchronous online exact string matching problem, and propose an algorithm for it. To demonstrate its effectiveness we conducted comprehensive experiments. Our results show that the behaviour of our algorithm depends only on the packet arrival rate: the delay in detecting a signature grows linearly with the packet arrival rate, with small communication overhead.
【Keywords】: computer network security; multipath channels; telecommunication traffic; asynchronous online exact string matching problem; distributed signature-based intrusion; intrusion signatures; multi-path routing attacks; network intrusion detection systems; network traffic; packet arrival rate; Automata; Computers; Conferences; Intrusion detection; Monitoring; Payloads; Synchronization
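A toy single-monitor version of the problem, buffering out-of-order packets and advancing a KMP automaton over the in-order prefix, looks like this (the paper's algorithm coordinates multiple monitors, none seeing the whole flow; this sketch only shows the asynchronous online matching flavor):

```python
def kmp_failure(pat):
    # classic KMP failure function
    fail, k = [0] * len(pat), 0
    for i in range(1, len(pat)):
        while k and pat[i] != pat[k]:
            k = fail[k - 1]
        if pat[i] == pat[k]:
            k += 1
        fail[i] = k
    return fail

class DistributedMatcher:
    def __init__(self, signature):
        self.pat, self.fail = signature, kmp_failure(signature)
        self.state = 0          # KMP automaton state over the stream
        self.next_seq = 0       # next in-order packet sequence number
        self.pending = {}       # out-of-order packets awaiting their turn
        self.offset = 0         # bytes of the stream consumed so far
        self.hits = []          # start offsets of detected signatures

    def receive(self, seq, payload):
        self.pending[seq] = payload
        while self.next_seq in self.pending:   # consume the in-order prefix
            for ch in self.pending.pop(self.next_seq):
                while self.state and ch != self.pat[self.state]:
                    self.state = self.fail[self.state - 1]
                if ch == self.pat[self.state]:
                    self.state += 1
                if self.state == len(self.pat):
                    self.hits.append(self.offset - len(self.pat) + 1)
                    self.state = self.fail[self.state - 1]
                self.offset += 1
            self.next_seq += 1
```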
【Paper Link】 【Pages】:567-575
【Authors】: Chao Zhang ; Mehrdad Niknami ; Kevin Zhijie Chen ; Chengyu Song ; Zhaofeng Chen ; Dawn Song
【Abstract】: Web browsers are among the most important end-user applications to browse, retrieve, and present Internet resources. Malicious or compromised resources may endanger Web users by hijacking web browsers to execute arbitrary malicious code in the victims' systems. Unfortunately, the widely adopted Just-In-Time compilation (JIT) optimization technique, which compiles source code to native code at runtime, significantly increases this risk. By exploiting JIT compiled code, attackers can bypass all currently deployed defenses. In this paper, we systematically investigate threats against JIT compiled code and the challenges of protecting it. We propose a general defense solution, JITScope, to enforce Control-Flow Integrity (CFI) on both statically compiled and JIT compiled code. Our solution furthermore enforces the W⊕X policy on JIT compiled code, preventing it from being overwritten by attackers. We show that our prototype implementation of JITScope on the popular Firefox web browser introduces a reasonably low performance overhead, while defeating existing real-world control-flow hijacking attacks.
【Keywords】: Internet; data protection; online front-ends; source code (software); CFI; Firefox Web browser; Internet resources; JIT compiled code; JIT optimization technique; JITScope; W⊕X policy; Web user protection; arbitrary malicious code; control-flow hijacking attacks; control-flow integrity; just-in-time compilation; source code compilation; Browsers; Engines; Instruments; Layout; Runtime; Safety; Security
【Paper Link】 【Pages】:576-584
【Authors】: Jian Zhao ; Xiaowen Chu ; Hai Liu ; Yiu-Wing Leung ; Zongpeng Li
【Abstract】: The latest developments in cloud computing technologies have enabled a plethora of cloud-based data storage services. Cloud storage service providers face significant bandwidth costs as the user population scales. Such bandwidth costs can be substantially slashed by exploring a hybrid cloud storage architecture that takes advantage of under-utilized storage and network resources at storage clients. A critical component in this hybrid architecture is an economic mechanism that incentivizes clients to contribute their local resources, while at the same time minimizing the provider's cost of pooling those resources. This work studies online procurement auction mechanisms towards these goals. The online nature of the auction is in line with asynchronous user request arrivals in practice. After carefully characterizing truthfulness conditions under the online procurement auction paradigm, we prove that truthfulness can be guaranteed by a price-based allocation rule and payment rule. Our truthfulness characterization converts the mechanism design problem into an online algorithm design problem, with a marginal pricing function for resources as the variable set by cloud storage service providers. We derive the marginal pricing function for the online algorithm. We also prove the competitive ratio of our algorithm's social cost against that of the offline VCG mechanism, and of its resource pooling cost against that of the offline optimal auction. Simulation studies driven by real-world traces demonstrate the efficacy of our online auction mechanism.
【Keywords】: cloud computing; cost reduction; pricing; procurement; resource allocation; storage management; asynchronous user request arrivals; bandwidth cost; client-assisted cloud storage systems; cloud based data storage service; cloud computing technology; cloud storage service provider; competitive ratio; economic mechanism; hybrid cloud storage architecture; local resource; marginal pricing function; network resource; offline VCG mechanism; online algorithm design problem; online procurement auction mechanism; online procurement auctions; payment rule; price-based allocation rule; provider cost minimization; resource pooling cost; social cost; storage client; truthfulness conditions; underutilized storage; Algorithm design and analysis; Bandwidth; Cloud computing; Pricing; Procurement; Resource management; Servers
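A minimal posted-price sketch shows why a price-based rule yields truthfulness: the payment never depends on the reported ask. The exponential price curve and its parameters below are illustrative assumptions, not the paper's derived marginal pricing function:

```python
class OnlineProcurement:
    def __init__(self, capacity, p_min, p_max):
        self.capacity = capacity   # resource units the provider may pool
        self.bought = 0
        self.p_min, self.p_max = p_min, p_max

    def marginal_price(self):
        # posted price decays exponentially from p_max to p_min as the
        # pool fills: early contributions are valuable, late ones less so
        frac = self.bought / self.capacity
        return self.p_max * (self.p_min / self.p_max) ** frac

    def offer(self, ask, amount=1):
        # accept iff the ask is at or below the posted marginal price, and
        # pay the posted price rather than the ask: since the payment does
        # not depend on the reported ask, truthful reporting is a dominant
        # strategy for an arriving client
        p = self.marginal_price()
        if ask <= p and self.bought + amount <= self.capacity:
            self.bought += amount
            return p
        return None
```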
【Paper Link】 【Pages】:585-593
【Authors】: Yin Sun ; Zizhan Zheng ; Can Emre Koksal ; Kyu-Han Kim ; Ness B. Shroff
【Abstract】: One key requirement for storage clouds is the ability to retrieve data quickly. Recent system measurements have shown that the data retrieval delay in storage clouds is highly variable, which may result in a long latency tail. One crucial idea to improve the delay performance is to retrieve multiple data copies by using parallel downloading threads. However, how to optimally schedule these downloading threads to minimize the data retrieval delay remains an important open problem. In this paper, we develop low-complexity thread scheduling policies for several important classes of data downloading time distributions, and prove that these policies are either delay-optimal or within a constant gap from the optimum delay performance. These theoretical results hold for an arbitrary arrival process of read requests, finite or infinite, and for heterogeneous MDS storage codes that can support diverse storage redundancy and reliability requirements for different data files. Our numerical results show that the delay performance of the proposed policies is significantly better than that of the First-Come-First-Served (FCFS) policies considered in prior work.
【Keywords】: cloud computing; information retrieval; multi-threading; reliability; storage management; FCFS policies; data downloading time distributions; data retrieving delay; first-come-first-served policies; heterogeneous MDS storage codes; infinite read requests; latency tail; low-complexity thread scheduling policies; optimal downloading thread scheduling; optimum delay performance; parallel downloading threads; reliability requirements; storage cloud; system measurements; Cloud computing; Conferences; Delays; Instruction sets; Optimized production technology; Redundancy; Servers
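The order-statistic view behind MDS-coded reads can be stated in one line: with an (n, k) MDS code, a read completes when any k of the n parallel threads finish. This sketch captures only that view, not the paper's scheduling policies (which decide which threads to launch and when):

```python
def mds_retrieval_delay(thread_delays, k):
    # any k chunk downloads suffice to reconstruct the object, so the
    # read completes at the k-th smallest per-thread delay; redundant
    # threads thus trim the latency tail
    return sorted(thread_delays)[k - 1]
```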
【Paper Link】 【Pages】:594-602
【Authors】: Yanfei Guo ; Jia Rao ; Dazhao Cheng ; Changjun Jiang ; Cheng-Zhong Xu ; Xiaobo Zhou
【Abstract】: Virtualizing Hadoop clusters provides many benefits, including rapid deployment, on-demand elasticity and secure multi-tenancy. However, a simple migration of Hadoop to a virtualized environment does not fully exploit these benefits. The dual role of a Hadoop worker, acting as both a compute node and a data node, makes it difficult to achieve efficient IO processing, maintain data locality, and exploit resource elasticity in the cloud. We find that decoupling per-node storage from its computation opens up opportunities for IO acceleration, locality improvement, and on-the-fly cluster resizing. To fully exploit these opportunities, we propose StoreApp, a shared storage appliance for virtual Hadoop worker nodes co-located on the same physical host. To completely separate storage from computation and prioritize IO processing, StoreApp proactively pushes intermediate data generated by map tasks to the storage node. StoreApp also implements late-binding task creation to take advantage of prefetched data in the presence of misaligned records. Experimental results show that StoreApp achieves up to 61% performance improvement compared to stock Hadoop and resizes the cluster to the (near-)optimal degree of parallelism.
【Keywords】: cloud computing; data handling; input-output programs; storage management; virtualisation; IO acceleration; StoreApp; cloud computing; data locality; efficient IO processing; late-binding task creation; on-the-fly cluster resizing; prefetched data; resource elasticity; shared storage appliance; virtualized Hadoop clusters; Benchmark testing; Cloud computing; Computers; Conferences; Elasticity; Parallel processing; Prefetching
【Paper Link】 【Pages】:603-611
【Authors】: Boyang Yu ; Jianping Pan
【Abstract】: Data-intensive applications need to address the problem of how to properly place a set of data items on distributed storage nodes. Traditional techniques use hashing to achieve load balance among nodes, as in Hadoop and Cassandra, but they do not work efficiently for requests that read multiple data items in one transaction, especially when the source locations of the requests are also distributed. Recent works have proposed managed data placement schemes for online social networks, but these have a limited scope of application due to their focus. We propose an associated data placement (ADP) scheme, which improves the co-location of associated data and localized data serving while ensuring balance between nodes. In ADP, we employ the hypergraph partitioning technique to efficiently partition the set of data items and place them on the distributed nodes, and we also take replicas and incremental adjustment into consideration. Through extensive experiments with both synthesized and trace-based datasets, we evaluate the performance of ADP and demonstrate its effectiveness.
【Keywords】: data handling; storage management; associated data colocation; data intensive applications; geo-distributed applications; localized data serving; location aware associated data placement; Computers; Conferences; Data models; Distributed databases; Measurement; Routing; System performance
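A greedy stand-in for the hypergraph-partitioning step conveys the goal: co-locate items that appear in the same multi-item request, subject to per-node capacity. Real ADP uses hypergraph partitioning with replicas and incremental adjustment; everything below is a toy:

```python
def place_items(items, requests, n_nodes, capacity):
    # items: data item ids; requests: sets of items read together (the
    # hyperedges); greedily put each item where most of its already-placed
    # co-requested items live, subject to per-node capacity
    placement, load = {}, [0] * n_nodes
    for item in items:
        score = [0] * n_nodes
        for req in requests:
            if item in req:
                for other in req:
                    if other in placement:
                        score[placement[other]] += 1
        open_nodes = [n for n in range(n_nodes) if load[n] < capacity]
        best = max(open_nodes, key=lambda n: score[n])
        placement[item] = best
        load[best] += 1
    return placement
```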
【Paper Link】 【Pages】:612-620
【Authors】: Shimin Gong ; Lingjie Duan ; Ping Wang
【Abstract】: We consider a cognitive radio network where primary users (PUs) share their spectrum with energy harvesting (EH) enabled secondary users (SUs), conditioned on limited SU interference at PU receivers. Due to the lack of information exchange between SUs and PUs, the SU-PU interference channels are subject to uncertainty in channel estimation. Besides channel uncertainty, the SUs' EH profile is also subject to spatial and temporal variations, which enforce an energy causality constraint on the SUs' transmit power control and affect the SUs' interference at PU receivers. Considering both the channel and EH uncertainties, we propose a robust design for SU power control that maximizes SU throughput. Our robust design targets the worst-case interference constraint to provide robust protection for PUs, while guaranteeing a transmission probability that reflects the SUs' minimum QoS requirements. To make the non-convex throughput maximization problem tractable, we develop a convex approximation for each robust constraint and design a successive approximation approach that converges to the global optimum of the throughput objective. Simulations show that SUs change transmission strategies according to the PUs' sensitivity to interference, and we also explore the impact of the SUs' EH profile (e.g., mean, variance, and correlation) on SU power control.
【Keywords】: approximation theory; channel estimation; cognitive radio; energy harvesting; optimisation; power control; radio networks; radio receivers; radiofrequency interference; telecommunication control; telecommunication power management; PU receivers; SU-PU interference channels; channel estimation; cognitive radio networks; energy causality constraint; energy harvesting; nonconvex throughput maximization problem; power control; primary users; robust constraint; robust optimization; secondary users; successive approximation; transmission probability; worst-case interference constraint; Channel estimation; Interference; Power control; Receivers; Robustness; Throughput; Uncertainty
【Paper Link】 【Pages】:621-629
【Authors】: Rahul Urgaonkar ; Prithwish Basu ; Saikat Guha ; Ananthram Swami
【Abstract】: We study the problem of maximizing the multicast throughput in a dense multi-channel multi-radio (MC-MR) wireless network with multiple multicast sessions. Specifically, we consider a fully connected network topology where all nodes are within transmission range of each other. In spite of its simplicity, this topology is practically important since it is encountered in several real-world settings. Further, a solution to this network can serve as a building block for more general scenarios that are otherwise intractable. For this network, we show that the problem of maximizing the uniform multicast throughput across multiple sessions is NP-hard. However, its special structure allows us to derive useful upper bounds on the achievable uniform multicast throughput. We show that an intuitive class of algorithms that maximally exploit the wireless broadcast feature can result in very poor worst case performance. Using a novel group splitting idea, we then design two polynomial time approximation algorithms that are guaranteed to achieve a constant factor of the throughput bound under arbitrary multicast group memberships. These algorithms are simple to implement and provide interesting tradeoffs between the achievable throughput and the total number of transmissions used.
【Keywords】: multicast communication; polynomial approximation; radio networks; telecommunication network topology; achievable uniform multicast throughput; arbitrary multicast group memberships; dense MC-MR wireless network; dense multi-channel multi-radio wireless network; fully connected network topology; group splitting idea; multiple multicast sessions; polynomial time approximation algorithms; wireless broadcast feature; Algorithm design and analysis; Approximation algorithms; Channel allocation; Schedules; Throughput; Transceivers; Upper bound
【Paper Link】 【Pages】:630-638
【Authors】: Jincheng Zhang ; Wenjie Zhang ; Minghua Chen ; Zhi Wang
【Abstract】: The Federal Communications Commission (FCC) released its final rule approving TV white spaces (TVWS), i.e., locally vacant TV channels, for unlicensed use in 2010. This TV spectrum will mitigate the shortage of wireless spectrum resources and provide opportunities for new applications. TVWS differ from the conventional Wi-Fi spectrum in three aspects: spectrum fragmentation, spatial variation, and temporal variation. These differences make network design over TVWS challenging and fundamentally different from Wi-Fi networks. While most prior work on TVWS network design focused on outdoor large-area scenarios, the important indoor scenario is largely open for investigation. In this paper, we present WINET (White-space Indoor NETwork), the first design framework for indoor multi-AP white space networks. We optimize AP placement, spectrum allocation, and AP association. Spectrum fragmentation, spatial variation, and temporal variation are all tackled in our network design. We built a testbed and conducted extensive measurements inside an office building across four months to obtain real-world traces. Experimental results show that WINET can increase AP coverage area by an average of 62.2% and obtain 67.9% higher system throughput while achieving fairness among users, compared to alternative approaches.
【Keywords】: radio networks; television broadcasting; wireless LAN; AP association; FCC; TV channels; TV white spaces; TVWS; TVWS network design; WINET; Wi-Fi networks; Wi-Fi spectrum; federal communications commission; indoor white space network design; optimize AP placement; spatial variation; spectrum allocation; spectrum fragmentation; temporal variation; white space indoor NETwork; wireless spectrum resources; Buildings; FCC; IEEE 802.11 Standard; Resource management; TV; Throughput; White spaces
【Paper Link】 【Pages】:639-647
【Authors】: Yanzhi Dou ; Kexiong Curtis Zeng ; Yaling Yang ; Danfeng (Daphne) Yao
【Abstract】: Cognitive Radio (CR) is an intelligent radio technology to boost spectrum utilization and is likely to be widely deployed in the near future. However, its flexible software-oriented design may be exploited by an adversary to control CR devices to launch large scale attacks on a wide range of critical wireless infrastructures. To proactively mitigate the potentially serious threat, this paper presents MadeCR, a Correlation-based Malware detection system for CR. MadeCR exploits correlations among CR applications' component actions to detect malicious behaviors. In addition, a significant contribution of the paper is a general experimentation method referred to as mutation testing to comprehensively evaluate the effectiveness of the anomaly detection method against a large number of artificial malware cases. Evaluation shows that MadeCR detects malicious behaviors within 1.10s with an accuracy of 94.9%.
【Keywords】: cognitive radio; invasive software; radio spectrum management; anomaly detection method; cognitive radio; correlation-based malware detection system for CR; intelligent radio technology; madeCR; software-oriented design; spectrum utilization; wireless infrastructures; Databases; Detectors; Hidden Markov models; Malware; Runtime; Training
【Paper Link】 【Pages】:648-656
【Authors】: Weiguo Dai ; Zhaoquan Gu ; Xiao Lin ; Qiang-Sheng Hua ; Francis C. M. Lau
【Abstract】: Controlling a dynamic network, i.e., driving it from any initial state to any desired state, is both interesting and important in practical applications. Much research has been conducted on revealing the controllability of networks and on the underlying correlations within them. However, no existing work has considered the time needed to control the network, which we refer to as control latency. In this paper, we initiate the study of the control latency of dynamic networks. First, we formulate the minimum control latency (MCL) problem of designing a controlling pattern with the minimum number of controllers. We show that the MCL problem is NP-hard by reducing the multiprocessor scheduling problem to it. Then, we propose a greedy algorithm for designing a controlling pattern that can control the network within two times the minimum control latency. Moreover, when the control latency is bounded by a given value, we propose another constant-factor approximation algorithm to design a controlling pattern that uses at most three times the minimum number of controllers. We conduct extensive simulations on both synthetic and real networks to corroborate our theoretical analysis.
【Keywords】: approximation theory; computational complexity; greedy algorithms; processor scheduling; MCL problem; NP-hard problem; constant approximation algorithm; greedy algorithm; minimum control latency problem; multiprocessor scheduling problem; Approximation algorithms; Approximation methods; Computers; Conferences; Controllability; Heuristic algorithms; Processor scheduling; Controlling pattern design; Minimum control latency; Structural controllability
【Paper Link】 【Pages】:657-665
【Authors】: Randeep Bhatia ; Fang Hao ; Murali S. Kodialam ; T. V. Lakshman
【Abstract】: Segment Routing is a proposed IETF protocol to improve traffic engineering and online route selection in IP networks. The key idea in segment routing is to break up the routing path into segments in order to enable better network utilization. Segment routing also enables finer control of the routing paths and can be used to route traffic through middle boxes. This paper considers the problem of determining the optimal parameters for segment routing in the offline and online cases. We develop a traffic matrix oblivious algorithm for robust segment routing in the offline case and a competitive algorithm for online segment routing. We also show that both these algorithms work well in practice.
【Keywords】: IP networks; matrix algebra; protocols; telecommunication network routing; telecommunication traffic; IETF protocol; IP networks; competitive algorithm; online route selection; online segment routing; optimal parameters; optimized network traffic engineering; route traffic; traffic engineering; traffic matrix; Computers; Conferences; Games; IP networks; Multiprotocol label switching; Routing
【Paper Link】 【Pages】:666-674
【Authors】: Feng Wang ; Lixin Gao ; Xiaozhe Shao ; Hiroaki Harai ; Kenji Fujikawa
【Abstract】: The Internet is facing the double-challenge of accelerating growth of routing table size and ever higher reliability requirements. Considerable progress has been made toward the scalability and reliability of the Internet. However, most of the proposals are only partial solutions that address some of the challenges. In this paper, we present a new addressing encoding scheme and a corresponding forwarding mechanism for Internet routing to solve the aforementioned problems. Underlying our design is a succinct data structure that allows us to compactly embed a set of addresses into packet headers. At the same time, the structure allows the data plane to efficiently extract multiple address information for the same destination without decompression. We provide time and space complexity analysis, and present experimental results evaluating the performance of our encoding method. It shows that the proposed encoding method can achieve a good compression factor without degrading packet-forwarding performance.
【Keywords】: Internet; encoding; telecommunication network routing; Internet routing; addressing encoding scheme; compact location encodings; compression factor; forwarding mechanism; multiple address information; packet headers; routing table size; time and space complexity analysis; Computers; Conferences; Encoding; Internet; Peer-to-peer computing; Routing; Scalability
【Paper Link】 【Pages】:675-683
【Authors】: Rowan Klöti ; Vasileios Kotronis ; Bernhard Ager ; Xenofontas Dimitropoulos
【Abstract】: How many links can be cut before a network is bisected? What is the maximal bandwidth that can be pushed between two nodes of a network? These questions are closely related to network resilience, path choice for multipath routing or bisection bandwidth estimations in data centers. The answer is quantified using metrics such as the number of edge-disjoint paths between two network nodes and the cumulative bandwidth that can flow over these paths. In practice though, such calculations are far from simple due to the restrictive effect of network policies on path selection. Policies are set by network administrators to conform to service level agreements, protect valuable resources or optimize network performance. In this work, we introduce a general methodology for estimating lower and upper bounds for the policy-compliant path diversity and bisection bandwidth between two nodes of a network, effectively quantifying the effect of policies on these metrics. Exact values can be obtained if certain conditions hold. The approach is based on regular languages and can be applied in a variety of use cases.
【Keywords】: channel estimation; computer network reliability; telecommunication network routing; bisection bandwidth estimations; data center; edge disjoint paths; multipath routing; network policies; network resiliency; policy compliant path diversity; Approximation methods; Automata; Bandwidth; Internet; Routing; Tensile stress; Transforms
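As a concrete baseline for the metrics in this abstract: without policy constraints, the number of edge-disjoint paths between two nodes equals the unit-capacity maximum flow between them (Menger's theorem). The following is only a minimal stdlib-Python sketch of that unconstrained computation, with made-up node names; the paper's policy-compliant bounds via regular languages are well beyond this illustration.

```python
from collections import defaultdict, deque

def edge_disjoint_paths(edges, s, t):
    """Count edge-disjoint s-t paths as a unit-capacity max flow
    (Menger's theorem): each BFS augmentation adds one path."""
    cap = defaultdict(int)   # residual capacities, default 0
    adj = defaultdict(set)   # neighbors in the residual graph
    for u, v in edges:
        cap[(u, v)] += 1
        adj[u].add(v)
        adj[v].add(u)        # residual arc for pushing flow back
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Push one unit of flow along the path found
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1
```

Restricting which paths count as policy-compliant (the paper's contribution) is what makes the real problem hard; this unconstrained count is the upper bound such policies can only reduce.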
【Paper Link】 【Pages】:684-692
【Authors】: Zuoming Yu ; Fan Yang ; Jin Teng ; Adam C. Champion ; Dong Xuan
【Abstract】: Barrier coverage in visual camera sensor networks (visual barrier coverage) has important real-world applications like battlefield surveillance, environmental monitoring, and protection of government property. Cost-effective deployment, a fundamental issue of visual barrier coverage, considers how to deploy the fewest camera sensors along the barrier to detect intruders (e.g., capture faces) with desirable performance. Existing visual barrier coverage approaches like full-view coverage require numerous camera sensors because they aim to capture intruders' faces deterministically for any trajectory and facing angle. In practice, however, intruders' trajectories and facing angles are bounded, so deterministic intruder detection expends many camera sensors on rare intrusion cases. Certain practical applications can tolerate limited intrusion mis-detection given budget limitations. This paper proposes local face-view barrier coverage, a novel concept that achieves statistical barrier coverage in camera sensor networks leveraging intruders' trajectory lengths ℓ along the barrier and head rotation angles δ. Using (ℓ, δ) and other parameters, we derive a rigorous probability bound for intruder detection for local face-view barrier coverage via a feasible deployment pattern. Our detection probability bound and deployment pattern can guide practical camera sensor network deployments with camera sensor budgets. Extensive evaluations show that local face-view barrier coverage requires up to 50% fewer camera sensors than full-view barrier coverage.
【Keywords】: cameras; signal detection; statistical analysis; wireless sensor networks; intruder detection; intrusion misdetection; local face-view barrier coverage; probability bound; statistical barrier coverage; visual barrier coverage; visual camera sensor network; Cameras; Computers; Conferences; Sensors; Trajectory; Visualization; Wireless sensor networks; Barrier coverage; camera sensor networks
【Paper Link】 【Pages】:693-701
【Authors】: Lin Chen ; Ruolin Fan ; Kaigui Bian ; Lin Chen ; Mario Gerla ; Tao Wang ; Xiaoming Li
【Abstract】: Neighbor discovery plays a crucial role in the formation of wireless sensor networks and mobile networks where the power of sensors (or mobile devices) is constrained. Due to the difficulty of clock synchronization, many asynchronous protocols based on wake-up scheduling have been developed over the years in order to enable timely neighbor discovery between neighboring sensors while saving energy. However, existing protocols are not fine-grained enough to support all heterogeneous battery duty cycles, which can lead to a more rapid deterioration of long-term battery health for those without support. Existing research can be broadly divided into two categories according to their neighbor-discovery techniques: the quorum based protocols and the co-primality based protocols. In this paper, we propose two neighbor discovery protocols, called Hedis and Todis, that optimize the duty cycle granularity of quorum and co-primality based protocols respectively, by enabling the finest-grained control of heterogeneous duty cycles. We compare the two optimal protocols via analytical and simulation results, which show that although the optimal co-primality based protocol (Todis) is simpler in its design, the optimal quorum based protocol (Hedis) performs better, achieving a lower relative error rate and a smaller discovery delay while still allowing the sensor nodes to wake up at a more infrequent rate.
【Keywords】: synchronisation; wireless sensor networks; Hedis and Todis; asynchronous protocols; clock synchronization; co-primality based protocols; duty cycle granularity; heterogeneous neighbor discovery; mobile networks; quorum based protocols; wake-up scheduling; wireless sensor networks; Clocks; Computers; Conferences; Delays; Protocols; Schedules; Synchronization; Neighbor discovery; heterogeneous duty cycles
【Paper Link】 【Pages】:702-710
【Authors】: Hao Cai ; Tilman Wolf
【Abstract】: Neighbor discovery is a crucial first step in configuring and managing a wireless network. Most existing studies on neighbor discovery are based on broadcast algorithms, where nodes send 1-way messages without getting response from their neighbors. However, when directional antennas are used, the ability to coordinate with a neighbor is crucial for later communication between nodes, which requires handshake-based (at least 2-way) protocols. In this paper, we provide a detailed analysis of neighbor discovery protocols with 2-way communication when using directional antennas. Based on this analysis, we present the design of a randomized 2-way neighbor discovery algorithm that uses a selective feedback. Our result shows that a node needs Θ(n2/k) time to discover its n neighbors with k antenna sectors, which yields a significant performance improvement over pure randomized algorithms. We also extend our schemes to practical cases, where the number of neighbors is unknown, and show a factor of no more than 4/3 slowdown in performance.
【Keywords】: directive antennas; protocols; radio networks; 2-way neighbor discovery; directional antennas; neighbor discovery protocols; selective feedback; wireless networks; Algorithm design and analysis; Computers; Conferences; Directional antennas; Omnidirectional antennas; Protocols
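The Θ(n²/k) bound above is for the paper's 2-way selective-feedback protocol. As a point of comparison only (this is not the paper's algorithm), a toy Monte Carlo, assuming each of two nodes independently points one of its k sectors at random per slot so that mutual alignment succeeds with probability 1/k², shows why naive random sector selection already costs on the order of k² slots to align a single pair:

```python
import random

def slots_until_aligned(k, rng):
    """Toy model: two nodes each sweep k antenna sectors at random.
    A slot succeeds when both happen to point at each other, i.e.
    with probability 1/k^2 per slot (sector 0 = 'toward the peer')."""
    slots = 0
    while True:
        slots += 1
        if rng.randrange(k) == 0 and rng.randrange(k) == 0:
            return slots

def mean_discovery_slots(k, trials=2000, seed=0):
    """Empirical mean of the geometric(1/k^2) discovery time."""
    rng = random.Random(seed)
    return sum(slots_until_aligned(k, rng) for _ in range(trials)) / trials
```

For k = 3 the empirical mean lands near the analytic value k² = 9, and it grows quadratically in k, which is the pairwise cost a coordinated, feedback-based design has to beat.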
【Paper Link】 【Pages】:711-719
【Authors】: Qiang Zhai ; Sihao Ding ; Xinfeng Li ; Fan Yang ; Jin Teng ; Junda Zhu ; Dong Xuan ; Yuan F. Zheng ; Wei Zhao
【Abstract】: Human tracking in video has many practical applications such as visually guided navigation, assisted living, etc. In such applications, it is necessary to accurately track multiple humans across multiple cameras, subject to real-time constraints. Despite recent advances in visual tracking research, tracking systems that rely purely on visual information fail to meet the accuracy and real-time requirements at the same time. In this paper, we present a novel accurate and real-time human tracking system called VM-Tracking. The system aggregates information from motion (M) sensors carried by humans and integrates it with visual (V) data based on physical locations. The system has two key features, location-based VM fusion and appearance-free tracking, which significantly distinguish it from other existing human tracking systems. We have implemented the VM-Tracking system and conducted comprehensive experiments on challenging scenarios.
【Keywords】: image sensors; object tracking; real-time systems; video signal processing; VM-tracking; assisted living; multiple cameras; real-time constraints; real-time human tracking; real-time human tracking system; real-time requirements; track multiple humans across multiple cameras; visual guided navigation; visual information; visual motion sensing integration; visual tracking research; Acceleration; Accuracy; Cameras; Real-time systems; Tracking; Trajectory; Visualization
【Paper Link】 【Pages】:720-728
【Authors】: Xu Zhang ; Jeffrey Knockel ; Jedidiah R. Crandall
【Abstract】: We present an Internet measurement technique for finding machines that are hidden behind firewalls. That is, if a firewall prevents outside IP addresses from sending packets to an internal protected machine that is only accessible on the local network, our technique can still find the machine. We employ a novel TCP/IP side channel technique to achieve this. The technique uses side channels in “zombie” machines to learn information about the network from the perspective of a zombie. Unlike previous TCP/IP side channel techniques, our technique does not require a high packet rate and does not cause denial-of-service. We also make no assumptions about globally incrementing IPIDs, as do idle scans. This paper addresses two key questions about our technique: how many machines are there on the Internet that are hidden behind firewalls, and how common is ingress filtering that prevents our scan by not allowing spoofed IP packets into the network. We answer both of these questions, respectively, by finding 1,296 hidden machines and measuring that only 23.9% of our candidate zombie machines are on networks that perform ingress filtering.
【Keywords】: IP networks; Internet; firewalls; IP addresses; IP identification; Internet measurement technique; TCP/IP side channel technique; firewalls; hidden machines; idle scans; original SYN; zombie machines; IP networks; Internet; Kernel; Linux; Ports (Computers); Probes; Size measurement
【Paper Link】 【Pages】:729-737
【Authors】: Fida Gillani ; Ehab Al-Shaer ; Samantha Lo ; Qi Duan ; Mostafa H. Ammar ; Ellen W. Zegura
【Abstract】: DDoS attacks have been a persistent threat to network availability for many years. Most of the existing mitigation techniques attempt to protect against DDoS by filtering out attack traffic. However, as critical network resources are usually static, adversaries are able to bypass filtering by sending stealthy, low-rate traffic from a large number of bots that mimic benign traffic behavior. Sophisticated stealthy attacks on critical links can cause a devastating effect such as partitioning domains and networks. In this paper, we propose to defend against DDoS attacks by proactively changing the footprint of critical resources in an unpredictable fashion to invalidate an adversary's knowledge and plan of attack against critical network resources. Our present approach employs virtual networks (VNs) to dynamically reallocate network resources using VN placement and offers constant VN migration to new resources. Our approach has two components: (1) a correct-by-construction VN migration planning that significantly increases the uncertainty about critical links of multiple VNs while preserving the VN placement properties, and (2) an efficient VN migration mechanism that identifies the appropriate configuration sequence to enable node migration while maintaining the network integrity (e.g., avoiding session disconnection). We formulate and implement this framework using SMT logic. We also demonstrate the effectiveness of our implemented framework on both PlanetLab and Mininet-based experimentations.
【Keywords】: computer network security; formal logic; virtualisation; DDoS attacks; Mininet; PlanetLab; SMT logic; VN migration; VN placement; agile virtualized infrastructure; attack mitigation techniques; critical network resources; cyber attacks; distributed denial-of-service attack; network availability; network resource reallocation; virtual networks; Computational modeling; Computer crime; Mathematical model; Reconnaissance; Routing protocols; Servers; Substrates
【Paper Link】 【Pages】:738-746
【Authors】: Jafar Haadi Jafarian ; Ehab Al-Shaer ; Qi Duan
【Abstract】: Network reconnaissance of IP addresses and ports is prerequisite to many host and network attacks. Meanwhile, static configurations of networks and hosts simplify this adversarial reconnaissance. In this paper, we present a novel proactive-adaptive defense technique that turns end-hosts into untraceable moving targets, and introduces dynamics into otherwise static systems by monitoring the adversarial behavior and reconfiguring the addresses of network hosts adaptively. This adaptability is achieved by discovering hazardous network ranges and addresses and evacuating network hosts from them quickly. Our approach maximizes adaptability by (1) using fast and accurate hypothesis testing for characterization of adversarial behavior, and (2) achieving a very fast IP randomization (i.e., update) rate by separating randomization from end-hosts and managing it via network appliances. The architecture and protocols of our approach can be transparently deployed on legacy networks, as well as software-defined networks. Our extensive analysis and evaluation show that by adaptive distortion of adversarial reconnaissance, our approach slows down the attack and increases its detectability, thus significantly raising the bar against stealthy scanning, major classes of evasive scanning and worm propagation, as well as targeted (hacking) attacks.
【Keywords】: IP networks; computer network security; software defined networking; adversary-aware IP address randomization; network hosts; proactive agility; software-defined networks; sophisticated attackers; Conferences; IP networks; Logic gates; Probes; Protocols; Reconnaissance; Servers
【Paper Link】 【Pages】:747-755
【Authors】: Pengfei Hu ; Hongxing Li ; Hao Fu ; Derya Cansever ; Prasant Mohapatra
【Abstract】: The landscape of cyber security has been reformed dramatically by the recently emerging Advanced Persistent Threat (APT). It is uniquely featured by a stealthy, continuous, sophisticated and well-funded attack process for long-term malicious gain, which renders current defense mechanisms inapplicable. A novel design of defense strategy, continuously combating APT over a long time span with imperfect/incomplete information on the attacker's actions, is urgently needed. The challenge is even more escalated when APT is coupled with the insider threat (a major threat in cyber-security), where insiders could trade valuable information to the APT attacker for monetary gains. The interplay among the defender, APT attacker and insiders should be judiciously studied to shed insights on a more secure defense system. In this paper, we consider the joint threats from the APT attacker and the insiders, and characterize this interplay as a two-layer game model, i.e., a defense/attack game between defender and APT attacker and an information-trading game among insiders. Through rigorous analysis, we identify the best response strategies for each player and prove the existence of a Nash Equilibrium for both games. Extensive numerical study further verifies our analytic results and examines the impact of different system configurations on the achievable security level.
【Keywords】: game theory; security of data; APT; Nash equilibrium; advanced persistent threat; attack process; cyber security; defense/attack game; dynamic defense strategy; information-trading game; malicious gain; two-layer game model; Computer security; Computers; Cost function; Games; Joints; Nash equilibrium
【Paper Link】 【Pages】:756-764
【Authors】: Jad Hachem ; Nikhil Karamchandani ; Suhas N. Diggavi
【Abstract】: Emerging heterogeneous wireless architectures consist of a dense deployment of local-coverage wireless access points (APs) with high data rates, along with sparsely-distributed, large-coverage macro-cell base stations (BS). We design a coded caching-and-delivery scheme for such architectures that equips APs with storage, enabling content pre-fetching prior to knowing user demands. Users requesting content are served by connecting to local APs with cached content, as well as by listening to a BS broadcast transmission. For any given content popularity profile, the goal is to design the caching-and-delivery scheme so as to optimally trade off the transmission cost at the BS against the storage cost at the APs and the user cost of connecting to multiple APs. We design a coded caching scheme for non-uniform content popularity that dynamically allocates user access to APs based on requested content. We demonstrate the approximate optimality of our scheme with respect to information-theoretic bounds. We numerically evaluate it on a YouTube dataset and quantify the trade-off between transmission rate, storage, and access cost. Our numerical results also suggest the intriguing possibility that, to gain most of the benefits of coded caching, it suffices to divide the content into a small number of popularity classes.
【Keywords】: 5G mobile communication; broadcast communication; cache storage; radio networks; 5G systems; BS broadcast transmission; YouTube dataset; access cost; coded caching-and-delivery scheme; content caching; content delivery; content prefetching; data rates; heterogeneous wireless architectures; heterogeneous wireless networks; information-theoretic bounds; large-coverage macrocell base stations; local AP; local-coverage wireless access points; nonuniform content popularity profile; storage cost; transmission cost; transmission rate; user demands; Cache memory; Color; Computers; Conferences; Joining processes; Wireless networks
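The transmission-versus-storage trade-off this abstract optimizes builds on the canonical coded caching result of Maddah-Ali and Niesen: with K users, N equally popular files, and a per-user cache of M files, coded multicast cuts the worst-case broadcast rate by a global factor beyond the local caching gain. A small sketch of that baseline rate follows (the paper's multi-AP, non-uniform-popularity scheme is strictly more general than this single-cache-per-user formula):

```python
def coded_caching_rate(K, N, M):
    """Worst-case broadcast rate (in units of files) of the canonical
    centralized coded caching scheme: K users, N files, per-user cache
    of M files.  Uncoded prefetching alone would need K * (1 - M/N)."""
    local_gain = 1 - M / N             # fraction of each file not cached
    global_gain = 1 / (1 + K * M / N)  # extra gain from coded multicast
    return K * local_gain * global_gain
```

For example, with K = 10 users, N = 10 files and M = 5, uncoded delivery needs rate 5 while the coded scheme needs only 5/6 of a file, illustrating the multicast gain the BS broadcast exploits.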
【Paper Link】 【Pages】:765-773
【Authors】: Qiao Xiang ; Hongwei Zhang ; Jianping Wang ; Guoliang Xing ; Shan Lin ; Xue Liu
【Abstract】: Network coding (NC) based opportunistic routing has been well studied, but the impact of routing diversity on the performance of NC-based routing remains largely unexplored. Towards understanding the importance of routing diversity in NC-based routing, we study the problems of estimating and minimizing the data delivery cost in NC-based routing. In particular, we propose an analytical framework for estimating the total number of packet transmissions for NC-based routing in arbitrary topologies. We design a greedy algorithm that minimizes the total transmission cost of NC-based routing and determines the corresponding forwarder set for each node. We prove the optimality of this algorithm and show that 1) nodes on the shortest path may not always be favored when selecting forwarders for NC-based routing and 2) the minimal cost of NC-based routing is upper-bounded by the cost of shortest path routing. Based on the greedy, optimal algorithm, we design and implement ONCR, a distributed minimal cost NC-based routing protocol. Using the NetEye sensor testbed, we comparatively study the performance of ONCR and existing approaches such as the single path routing protocol CTP and the NC-based opportunistic routing protocols MORE and CodeOR. Results show that ONCR achieves close to 100% delivery reliability while having the lowest delivery cost among all the protocols and 25-28% less than the second best protocol CTP. This low delivery cost also enables ONCR to achieve the highest network goodput, i.e., about two-fold improvement over MORE and CodeOR. Our findings demonstrate the significance of optimizing data forwarding diversity in NC-based routing for data delivery reliability, efficiency, and goodput.
【Keywords】: greedy algorithms; network coding; routing protocols; wireless sensor networks; CTP; ONCR; greedy algorithm; network-coding-based routing; opportunistic routing protocols; optimal diversity; packet transmissions; shortest path routing; single path routing protocol; wireless networks; Greedy algorithms; Routing; Routing protocols; Silicon; Topology; Wireless networks
【Paper Link】 【Pages】:774-782
【Authors】: Fangzhou Chen ; Bin Li ; Can Emre Koksal
【Abstract】: We consider a system in which two nodes take correlated measurements of a random source with time-varying and unknown statistics. The observations of the source at the first node are to be losslessly replicated with a given probability of outage at the second node, which receives data from the first node over a constant-rate channel. We develop a system and associated strategies for joint distributed source coding (encoding and decoding) and transmission control in order to achieve low end-to-end delay. Slepian-Wolf coding in its traditional form cannot be applied in our scenario, since the encoder requires the joint statistics of the observations and the associated decoding delay is very high. We analytically evaluate the performance of our strategies and show that the delays they achieve are order-optimal, as the conditional entropy of the source approaches the channel rate. We also evaluate the performance of our algorithms based on real-world experiments using two cameras recording videos of a scene at different angles. Using our implementation, we demonstrate that, even with a very low-complexity quantizer, a compression ratio of approximately 50% is achievable for lossless replication at the decoder, at an average delay of a few seconds.
【Keywords】: decoding; probability; quantisation (signal); source coding; statistical analysis; telecommunication network reliability; video cameras; video coding; video recording; Slepian-Wolf coding; camera; channel rate; compression ratio; conditional entropy; constant-rate channel; decoding delay; encoding; end-to-end delay; low-complexity quantizer; low-delay distributed source coding; outage probability; time-varying source; transmission control; unknown statistic; video recording; Cameras; Decoding; Delays; Joints; Source coding; Videos; Lossless distributed source coding; delay optimal control; heavy-traffic analysis; universal algorithms
【Paper Link】 【Pages】:783-791
【Authors】: Yuben Qu ; Chao Dong ; Haipeng Dai ; Fan Wu ; Shaojie Tang ; Hai Wang ; Chang Tian
【Abstract】: The benefits of network coding on multicast in traditional multi-hop wireless networks have already been demonstrated in previous works. However, most existing approaches cannot be directly applied to multi-hop cognitive radio networks (CRNs), given the unpredictable primary user occupancy on licensed channels. Specifically, due to the unpredictable occupancy, the channel's bandwidth is uncertain and thus the capacity of the link using this channel is also uncertain, which may result in severe throughput loss. In this paper, we study the problem of network coding-based multicast in multi-hop CRNs considering the uncertain spectrum availability. To capture the uncertainty of spectrum availability, we first formulate our problem as a chance-constrained program. Given the computational intractability of this program, we transform the original problem into a tractable convex optimization problem, through an appropriate Bernstein approximation together with relaxation on link scheduling. We further leverage Lagrangian relaxation-based optimization techniques to propose an efficient distributed algorithm for the original problem. Extensive simulation results show that the proposed algorithm achieves higher multicast rates, compared to a state-of-the-art non-network coding algorithm in multi-hop CRNs, and a conservative robust algorithm that treats the link capacity as a constant value in the optimization.
【Keywords】: cognitive radio; convex programming; network coding; Bernstein approximation; Lagrangian relaxation-based optimization techniques; chance-constrained program; cognitive radio networks; link capacity; link scheduling; multi-hop CRN; multi-hop wireless networks; network coding-based multicast; tractable convex optimization problem; uncertain spectrum availability; Approximation methods; Bandwidth; Encoding; Optimization; Random variables; Spread spectrum communication; Uncertainty
【Paper Link】 【Pages】:792-800
【Authors】: Shubhadip Mitra ; Sayan Ranu ; Vinay Kolar ; Aditya Telang ; Arnab Bhattacharya ; Ravi Kokku ; Sriram Raghavan
【Abstract】: We address the problem of efficient user-mobility driven macro-cell planning in cellular networks. As cellular networks embrace heterogeneous technologies (including long range 3G/4G and short range WiFi, Femto-cells, etc.), most traffic generated by static users gets absorbed by the short-range technologies, thereby increasingly leaving mobile user traffic to macro-cells. To this end, we consider a novel approach that factors in the trajectories of mobile users as well as the impact of city geographies and their associated road networks for macro-cell planning. Given a budget k of base-stations that can be upgraded, our approach selects a deployment that improves the largest number of user trajectories. The generic formulation incorporates the notion of quality of service of a user trajectory as a parameter to allow different application-specific requirements and operator choices. We show that the proposed trajectory utility maximization problem is NP-hard, and design multiple heuristics. We evaluate our algorithms with real and synthetic datasets emulating different city geographies to demonstrate their efficacy. For instance, with an upgrade budget k of 20%, our algorithms perform 3-8 times better in improving the user quality of service on trajectories when compared to greedy location-based base-station upgrades.
【Keywords】: cellular radio; optimisation; quality of service; telecommunication network planning; application-specific requirements; base-stations; cellular networks; city geographies; heterogeneous technologies; mobile user traffic; mobile users trajectories; multiple heuristics; quality of service; road networks; short-range technologies; static users; trajectory utility maximization problem; user-mobility driven macro-cell planning; Bismuth; Mobile communication; Mobile computing; Quality of service; Streaming media; Throughput; Trajectory
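The budget-constrained upgrade selection described above has the flavor of a set-cover-style greedy heuristic. The sketch below, under the simplifying (and hypothetical) assumption that a trajectory is "improved" as soon as any base station on it is upgraded, picks k base stations maximizing the number of improved trajectories; it is an illustration of one plausible heuristic, not the paper's algorithm.

```python
def greedy_upgrade(trajectories, k):
    """Greedily pick up to k base stations so that as many trajectories
    as possible contain at least one upgraded base station.

    trajectories: list of sets of base-station ids visited by each user."""
    remaining = [set(t) for t in trajectories]  # not-yet-improved trajectories
    chosen = set()
    for _ in range(k):
        gain = {}
        for t in remaining:
            for bs in t:
                gain[bs] = gain.get(bs, 0) + 1  # trajectories bs would improve
        if not gain:
            break
        best = max(gain, key=gain.get)
        chosen.add(best)
        remaining = [t for t in remaining if best not in t]
    return chosen
```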
【Paper Link】 【Pages】:801-809
【Authors】: François Baccelli ; Xinchen Zhang
【Abstract】: This paper presents an analytically tractable stochastic geometry model for urban wireless networks, where the locations of the nodes and the shadowing are highly correlated and different path loss functions can be applied to line-of-sight (LOS) and non-line-of-sight (NLOS) links. Using a distance-based LOS path loss model and a blockage (shadowing)-based NLOS path loss model, we are able to derive the distribution of the interference observed at a typical location and the joint distribution at different locations. When applied to cellular networks, this model leads to tractable expressions for the coverage probability (SINR distribution). We show that this model captures important features of urban wireless networks, which cannot be analyzed using existing models. The numerical results also suggest that even in the presence of significant penetration loss, ignoring the NLOS interference can lead to erroneous estimations on coverage. They also suggest that allowing users to be associated with NLOS BSs may provide a non-trivial gain on coverage.
【Keywords】: cellular radio; probability; wireless channels; NLOS interference; analytically tractable stochastic geometry; blockage shadowing; cellular networks; correlated shadowing; coverage probability; distance-based LOS path loss; interference distribution; nonline-of-sight links; path loss functions; urban wireless networks; Analytical models; Fading; Interference; Joints; Laplace equations; Numerical models; Shadow mapping
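The coverage analysis above can be mimicked numerically. The sketch below is a deliberately simplified Monte Carlo stand-in — independent per-link blockage rather than the paper's correlated shadowing, and hypothetical parameter values throughout — that estimates coverage probability when LOS and NLOS links have different effective path loss.

```python
import math
import random

def _poisson(rng, mu):
    """Knuth's Poisson sampler; adequate for moderate mu."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def coverage_prob(lam, beta, tau_db, trials=500, radius=300.0,
                  noise=1e-11, alpha=3.5, nlos_loss=0.01, seed=7):
    """Toy coverage estimate: base stations form a PPP of intensity `lam`
    in a disk; a link of length d is LOS w.p. exp(-beta*d); NLOS links
    take an extra penetration loss. The user attaches to the strongest BS
    and is covered if SINR exceeds tau."""
    rng = random.Random(seed)
    tau = 10.0 ** (tau_db / 10.0)
    hits = 0
    for _ in range(trials):
        n = _poisson(rng, lam * math.pi * radius ** 2)
        powers = []
        for _ in range(n):
            d = max(radius * math.sqrt(rng.random()), 1.0)  # uniform in disk
            los = rng.random() < math.exp(-beta * d)
            pl = d ** (-alpha)
            powers.append(pl if los else pl * nlos_loss)
        if not powers:
            continue
        s = max(powers)
        sinr = s / (noise + sum(powers) - s)
        if sinr > tau:
            hits += 1
    return hits / trials
```

Even this toy version exhibits the abstract's qualitative point: NLOS links carry non-negligible interference, so dropping them changes the coverage estimate.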
【Paper Link】 【Pages】:810-818
【Authors】: Sangki Yun ; Lili Qiu
【Abstract】: Motivated by the recent push to deploy LTE in unlicensed spectrum, this paper develops a novel system to enable co-existence between LTE and WiFi. Our approach leverages LTE and WiFi antennas already available on smartphones to let LTE and WiFi transmit together and successfully decode the interfered signals. Our system offers several distinct advantages over existing MIMO work: (i) it can decode all the interfering signals under cross technology interference even when the interfering signals have similar power and occupy similar frequency, (ii) it does not need clean reference signals from either WiFi or LTE transmission, (iii) it can decode interfering WiFi MIMO and LTE transmissions, and (iv) it has a simple yet effective carrier sense mechanism for WiFi to access the medium under interfering LTE signals while avoiding other WiFi transmissions. We use USRP implementation and experiments to show its effectiveness.
【Keywords】: Long Term Evolution; MIMO communication; antenna arrays; radiofrequency interference; wireless LAN; LTE antennas; LTE co-existence; LTE transmission; MIMO work; USRP implementation; WiFi antennas; WiFi co-existence; WiFi transmission; cross technology interference; decode signal; interfered signal; unlicensed spectrum; Channel estimation; Decoding; IEEE 802.11 Standard; Interference; MIMO; OFDM; Receivers
【Paper Link】 【Pages】:819-827
【Authors】: Eugene Chai ; Kang G. Shin ; Sung-Ju Lee ; Jeongkeun Lee ; Raúl H. Etkin
【Abstract】: Cloud-RANs (Radio Access Networks) assume the existence of a high-capacity, low-delay/latency fronthaul to support cooperative transmission schemes such as CoMP (Coordinated Multi-Point) and coordinated beamforming. However, building such hierarchical wired fronthauls is challenging as the typical I/Q data stream is non-elastic - I/Q data over the wired fronthaul has little tolerance for delay jitters and zero tolerance for losses. Any distortion to the I/Q data stream will make the resulting wireless transmission completely unintelligible. We propose Spiro, a mechanism that efficiently transports RF signals over a wired fronthaul network. The primary goal of Spiro is to make I/Q data streams elastic and resilient to unexpected network condition changes. This is accomplished through a novel combination of compression and data prioritization of I/Q data on the wired fronthaul. For a given wireless throughput, Spiro can reduce the bandwidth demand of the fronthaul data stream by up to 50% without any noticeable degradation in the wireless reception quality. Further bandwidth reduction via compression and frame losses only have a limited impact on the wireless throughput.
【Keywords】: array signal processing; cooperative communication; jitter; radio access networks; CoMP; I/Q data stream; RF transport; SPIRO; cloud-RAN; cooperative transmission schemes; coordinated beamforming; coordinated multipoint; data prioritization; delay jitters; hierarchical wired fronthauls; radio access networks; wired fronthaul network; wireless reception quality; wireless throughput; wireless transmission; Bandwidth; Computer architecture; Noise; Quantization (signal); Radio frequency; Uplink; Wireless communication
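The fronthaul bandwidth reduction discussed above ultimately trades I/Q fidelity for bit rate. As a minimal, hypothetical illustration of that trade-off — not Spiro's actual compression or prioritization scheme — the sketch below re-quantizes complex I/Q samples to a chosen bit depth per component.

```python
def quantize_iq(samples, bits):
    """Uniform re-quantization of complex I/Q samples (components assumed
    in [-1, 1]) to `bits` bits per component: the simplest form of lossy
    fronthaul compression. Halving the bit depth halves the stream's
    bandwidth at the cost of quantization noise."""
    levels = (1 << bits) - 1
    def q(x):
        # snap x to the nearest of `levels + 1` evenly spaced points
        return round((x + 1.0) * levels / 2.0) * 2.0 / levels - 1.0
    return [complex(q(s.real), q(s.imag)) for s in samples]
```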
【Paper Link】 【Pages】:828-836
【Authors】: Sookhyun Yang ; Jim Kurose ; Simon Heimlicher ; Arun Venkataramani
【Abstract】: Physical human mobility has played an important role in the design and operation of mobile networks. Physical mobility, however, differs from user identity (name) mobility in both traditional mobility management protocols such as Mobile-IP and in new architectures, such as XIA and MobilityFirst, that support identity mobility and location independence as first class objects. A multi-homed stationary user or a stationary user shifting among multiple devices attached to different networks will persistently keep his/her identity but will change access networks and the IP address to which his/her identity is associated. We perform a measurement study of such user transitioning among networks from a network-level point of view, characterizing the sequence of networks to which a user is attached and discuss insights and implications drawn from these measurements. We characterize network transitioning in terms of network residency time, degree of multi-homing, transition rates and more. We find that users typically spend time attached to a small number of access networks, and that a surprisingly large number of users access two networks contemporaneously. We develop and validate a parsimonious Markov chain model of canonical user transitioning among networks that can be used to provision network services and to analyze mobility protocols.
【Keywords】: IP networks; Markov processes; mobility management (mobile radio); Markov chain model; MobilityFirst; XIA; canonical user transitioning; mobile-IP; mobility management protocols; multihomed stationary user; physical human mobility; user transitioning; Aggregates; Electronic mail; IP networks; Mobile communication; Mobile computing; Protocols; Servers
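A parsimonious Markov chain of the kind described above can be fitted from attachment logs by simple transition counting. The sketch below is an illustrative maximum-likelihood estimator over per-user network sequences; the network labels are hypothetical.

```python
from collections import defaultdict

def fit_markov(sequences):
    """ML estimate of a first-order Markov chain over access networks.

    sequences: list of per-user attachment sequences, e.g.
               [["wifi", "lte", "wifi"], ...]
    Returns {src: {dst: transition probability}}."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):   # consecutive attachments
            counts[a][b] += 1
    return {src: {dst: c / sum(row.values()) for dst, c in row.items()}
            for src, row in counts.items()}
```

A fitted chain like this can then drive provisioning simulations, in the spirit of the model the authors validate.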
【Paper Link】 【Pages】:837-845
【Authors】: Dmytro Karamshuk ; Nishanth Sastry ; Andrew Secker ; Jigna Chandaria
【Abstract】: Using nine months of access logs comprising 1.9 Billion sessions to BBC iPlayer, we survey the UK ISP ecosystem to understand the factors affecting adoption and usage of a high bandwidth TV streaming application across different providers. We find evidence that connection speeds are important and that external events can have a huge impact for live TV usage. Then, through a temporal analysis of the access logs, we demonstrate that data usage caps imposed by mobile ISPs significantly affect usage patterns, and look for solutions. We show that product bundle discounts with a related fixed-line ISP, a strategy already employed by some mobile providers, can better support user needs and capture a bigger share of accesses. We observe that users regularly split their sessions between mobile and fixed-line connections, suggesting a straightforward strategy for offloading by speculatively pre-fetching content from a fixed-line ISP before access on mobile devices.
【Keywords】: mobile television; video streaming; BBC iPlayer; UK ISP ecosystem; access logs; adoption factor; fixed-line ISP; fixed-line connections; live TV usage; mobile ISP; mobile connections; mobile devices; mobile providers; nation-wide TV streaming service; pre-fetching content; product bundle; temporal analysis; usage factor; Broadband communication; Computers; Internet; Mobile communication; Mobile computing; Mobile handsets; TV
【Paper Link】 【Pages】:846-854
【Authors】: Xuetao Wei ; Nicholas Valler ; Harsha V. Madhyastha ; Iulian Neamtiu ; Michalis Faloutsos
【Abstract】: The Bring-Your-Own-Handheld-device (BYOH) phenomenon continues to make inroads as more people bring their own handheld devices to work or school. While convenient to device owners, this trend presents novel management challenges to network administrators. Prior efforts focused either on comparatively characterizing aggregate network traffic between BYOHs and non-BYOHs, or on network performance issues such as TCP behavior, download times, and mobility. We identify one critical question that network administrators need to answer: how do these BYOHs behave individually? In response, we design and deploy Brofiler, a behavior-aware profiling framework that improves visibility into the management of BYOHs. The contributions of our work are two-fold. First, we present Brofiler, a time-aware device-centric approach for grouping devices into intuitive behavioral groups. Second, we conduct an extensive study of BYOHs using our approach with real data collected over a year, and highlight several novel insights into the behavior of BYOHs. These observations underscore that BYOHs need to be managed explicitly, as they behave in unique and unexpected ways.
【Keywords】: smart phones; Brofiler; aggregate network traffic; behavior-aware profiling; bring-your-own-handheld-device; handheld devices; Androids; Computers; Conferences; IP networks; Mobile communication; Protocols; Servers
【Paper Link】 【Pages】:855-863
【Authors】: Shu Wang ; Vignesh Venkateswaran ; Xinyu Zhang
【Abstract】: Full-duplex radio technology is becoming mature and holds potential to boost the spectrum efficiency of a point-to-point wireless link. However, a fundamental understanding is still lacking, with respect to its advantage over half-duplex in multi-cell wireless networks with contending links. In this paper, we establish a spatial stochastic framework to analyze the mean network throughput gain from full-duplex, and pinpoint the key factors that determine the gain. Our framework extends classical stochastic geometry analysis with a new tool-set, which allows us to model a trade-off between the benefit from concurrent full-duplex transmissions and the loss of spatial reuse, particularly for CSMA-based transmitters with random backoff. The analysis derives closed-form expressions for the full-duplex gain as a function of link distance, interference range, network density, and carrier sensing schemes. It can be easily applied to guide the deployment choices during the early stage of network planning.
【Keywords】: access protocols; radio networks; carrier sensing; closed-form expressions; full-duplex gains; interference range; link distance; multi-cell wireless networks; network density; point-to-point wireless link; spatial stochastic framework; stochastic geometry analysis; Analytical models; Interference; Radio transmitters; Receivers; Sensors; Stochastic processes
【Paper Link】 【Pages】:864-872
【Authors】: Pengfei Zhang ; Xi Li ; Rui Chu ; Huaimin Wang
【Abstract】: In IaaS cloud environments, peak memory demand caused by hotspot applications in a Virtual Machine (VM) often results in performance degradation within and outside of this VM. Solutions such as host swapping and ballooning have been proposed for memory consolidation and overcommitment. These solutions, however, do not help address guest-swapping issues inside the VM. Even when the host holds sufficient memory pages, the guest OS is unable to utilize free pages in the host directly, due to the semantic gap between the VMM and the guest. Our goal is to alleviate the performance degradation by decreasing the disk I/O operations generated by guest swapping. Based on an insightful analysis of the behavioral features of guest swapping, we design HybridSwap, a distributed, scalable framework that organizes surplus memory across all hosts in a data center into virtual pools for swapping. The framework builds up a synthetic swapping mechanism in a peer-to-peer way, in which a VM can adaptively choose suitable pools for swapping. We implement a prototype of HybridSwap and evaluate it with different benchmarks. The results demonstrate that our solution indeed improves guest-swapping efficiency, in some cases achieving a 2-5x performance improvement over the baseline setup.
【Keywords】: cloud computing; virtual machines; virtualisation; HybridSwap design; IaaS cloud environments; VM; VMM; data center; disk I/O operations; distributed scalable framework; guest OS; guest swapping efficiency; host swapping; memory pages; peer-to-peer computing; performance degradation; synthetic swapping mechanism; virtual machine; virtual pools; virtualization platform; Benchmark testing; Degradation; Instruction sets; Operating systems; Semantics; Servers; Virtualization; IaaS; Memory Consolidation; Overcommitment; Synthetic Swapping; Virtualization
【Paper Link】 【Pages】:873-881
【Authors】: Mohammad Y. Hajjat ; Ruiqi Liu ; Yiyang Chang ; T. S. Eugene Ng ; Sanjay G. Rao
【Abstract】: Provider policy (e.g., bandwidth rate limits, virtualization, CPU scheduling) can significantly impact application performance in cloud environments. This paper takes a first step towards understanding the impact of provider policy and tackling the complexity of selecting configurations that can best meet the cost and performance requirements of applications. We make three contributions. First, we conduct a measurement study, spanning a 19-month period, of a wide variety of applications on Amazon EC2 to understand the issues involved in configuration selection. Our results show that provider policy can impact communication and computation performance in unpredictable ways. Moreover, seemingly sensible rules of thumb are inappropriate - e.g., VMs with the latest hardware or larger VM sizes do not always provide the best performance. Second, we systematically characterize the overheads and resulting benefits of a range of testing strategies for configuration selection. A key focus of our characterization is understanding the overheads of a testing approach in the face of variability in performance across deployments and measurements. Finally, we present configuration pruning and short-listing techniques for minimizing testing overheads. Evaluations on a variety of compute-, bandwidth- and data-intensive applications validate the effectiveness of these techniques in selecting good configurations with low overheads.
【Keywords】: cloud computing; computational complexity; processor scheduling; Amazon EC2; CPU scheduling; VMs; application-specific configuration selection; bandwidth rate limits; cloud environments; configuration pruning technique; configuration selection; provider policy; short-listing technique; systematic testing; testing strategies; Bandwidth; Computers; Conferences; Hardware; Systematics; Testing; Throughput
【Paper Link】 【Pages】:882-890
【Authors】: Jing Fu ; Jun Guo ; Eric W. M. Wong ; Moshe Zukerman
【Abstract】: Energy efficiency of server farms is an important design consideration of data centers. One effective approach is to optimize energy consumption by controlling carried load on the networked servers. In this paper, we propose a robust heuristic policy for job assignment in a server farm, aiming to improve the energy efficiency by maximizing the ratio of the long-run average throughput to the expected energy consumption. Our model of the server farm considers parallel processor-sharing queues with finite buffer sizes, heterogeneous server speeds, and an arbitrary energy consumption function. We devise the new energy-efficient (EE) policy in a way that the state distribution of the system depends on the service requirement distribution only through the mean. We show that the state-of-the-art slowest server first (SSF) policy can be obtained as a special case of EE and both policies have the same computational complexity. We provide a rigorous analysis of EE and derive conditions under which EE is guaranteed to outperform SSF in terms of energy efficiency. Extensive numerical results are presented and demonstrate that, in comparison with SSF, EE yields a consistently better system throughput and yet improves the energy efficiency by up to 70%.
【Keywords】: computational complexity; computer centres; network servers; parallel processing; power consumption; SSF policy; arbitrary energy consumption function; computational complexity; data centers; energy-efficient heuristics; energy-efficient policy; finite buffer size; heterogeneous server speeds; job assignment; networked servers; parallel processor-sharing queues; processor-sharing server farms; robust heuristic policy; service requirement distribution; state-of-the-art slowest server first policy; Computers; Conferences; Energy consumption; Processor scheduling; Robustness; Servers; Throughput
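The job-assignment objective above — throughput per unit of energy — can be illustrated with a much-simplified dispatch rule. The sketch below routes an arriving job to the non-full server with the best service-rate-per-watt ratio; it is a hypothetical stand-in for the paper's EE policy, which reasons over the full state distribution rather than a per-arrival ratio.

```python
def ee_assign(speeds, powers, occupancy, buffers):
    """Pick a server index for an arriving job, or None if all buffers
    are full (the job is blocked).

    speeds[i]    -- service rate of server i
    powers[i]    -- energy draw of server i when busy
    occupancy[i] -- current number of jobs at server i
    buffers[i]   -- finite buffer size of server i"""
    best, best_ratio = None, -1.0
    for i, (mu, w, n, b) in enumerate(zip(speeds, powers, occupancy, buffers)):
        if n >= b:            # finite buffer is full: skip this server
            continue
        if mu / w > best_ratio:
            best, best_ratio = i, mu / w
    return best
```

Note how this differs from a slowest-server-first rule: a slow but very frugal server can win on the rate-per-watt criterion, which is the intuition behind EE outperforming SSF on energy efficiency.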
【Paper Link】 【Pages】:891-899
【Authors】: Zhe Huang ; Bharath Balasubramanian ; Michael Wang ; Tian Lan ; Mung Chiang ; Danny H. K. Tsang
【Abstract】: There is an increasing need for cloud service performance that can be tailored to customer requirements. In the context of jobs submitted to cloud computing clusters, a crucial requirement is the specification of job completion-times. A natural way to model this specification, is through client/job utility functions that are dependent on job completion-times. We present a method to allocate and schedule heterogeneous resources to jointly optimize the utilities of jobs in a cloud. Specifically: (i) we formulate a completion-time optimal resource allocation (CORA) problem to apportion cluster resources across the jobs that enforces max-min fairness among job utilities, and (ii) starting with an integer programming problem, we perform a series of steps to transform it into an equivalent linear programming problem, and (iii) we implement the proposed framework as a utility-aware resource scheduler in the widely used Hadoop data processing framework, and finally (iv) through extensive experiments with real-world datasets, we show that our prototype achieves significant performance improvement over existing resource-allocation policies.
【Keywords】: cloud computing; data handling; formal specification; integer programming; linear programming; minimax techniques; parallel processing; resource allocation; scheduling; CORA scheduler; Hadoop data processing framework; client-job utility function; cloud computing cluster; cloud service performance; completion-time optimal resource allocation problem; completion-time optimization; customer requirement; equivalent linear programming problem; heterogeneous resource allocation; heterogeneous resource scheduling; integer programming problem; job completion time specification; max-min fairness; resource allocation policy; specification modeling; utility-aware resource scheduler; Conferences; Containers; Convex functions; Linear programming; Resource management; Sensitivity; Transforms
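The max-min fairness objective that CORA enforces among job utilities can be illustrated, for a single divisible resource, by the classic progressive-filling algorithm. This is a textbook sketch of the fairness notion only, not the paper's LP-based allocation over heterogeneous cluster resources.

```python
def max_min_fair(capacity, demands):
    """Progressive-filling max-min fair split of one resource.

    Repeatedly divides the leftover capacity equally among unsatisfied
    jobs; jobs whose demand fits within their share are capped at their
    demand and removed, and the surplus is redistributed."""
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))
    left = float(capacity)
    while active and left > 1e-12:
        share = left / len(active)
        sat = {i for i in active if demands[i] - alloc[i] <= share}
        if not sat:
            for i in active:      # nobody saturates: equal split and stop
                alloc[i] += share
            left = 0.0
        else:
            for i in sat:         # cap saturated jobs at their demand
                left -= demands[i] - alloc[i]
                alloc[i] = demands[i]
            active -= sat
    return alloc
```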
【Paper Link】 【Pages】:900-908
【Authors】: Yuan Luo ; Lin Gao ; Jianwei Huang
【Abstract】: We propose a hybrid spectrum and information market for database-assisted TV white space networks, where a geo-location white space database serves as the platform for both the spectrum market and the information market. We study the interactions among the database operator, the spectrum licensee, and unlicensed users systematically, using a three-layer hierarchical model. In Layer I, the licensee negotiates with the database regarding the commission fee of using the spectrum market platform. In Layer II, the database and the licensee compete for selling information or channels to unlicensed users. In Layer III, unlicensed users determine whether to buy the exclusive usage right of licensed channels from the licensee, or to buy the information regarding unlicensed channels from the database. Analyzing such a three-layer model is challenging, due to the coexistence of both positive and negative network externalities in the information market. We characterize the market equilibrium systematically, and analyze how the network externalities affect the equilibrium behaviours of all parties involved. Our numerical results show that the proposed hybrid market can improve the network profit more than 80%, compared with a pure information market. Meanwhile, the achieved network profit is very close to the coordinated benchmark (e.g., the gap is less than 4%).
【Keywords】: information services; marketing; radio spectrum management; telecommunication computing; HySIM; database assisted TV white space networks; database operator; geolocation white space database; hybrid spectrum-information market; spectrum licensee; spectrum market; unlicensed user; Analytical models; Computers; Conferences; Databases; Interference; TV; White spaces
【Paper Link】 【Pages】:909-917
【Authors】: Ming Li ; Pan Li ; Linke Guo ; Xiaoxia Huang
【Abstract】: Many truthful spectrum auction schemes have recently been proposed to ensure that the dominant strategy for bidders is to bid truthfully, and thus to protect the auctioneer's benefits. However, most of them assume the auctioneer is trustworthy and do not protect bidders' interests. An auctioneer can manipulate the winner's charging price if it knows bidders' bids. Thus, it is critical to protect bids from the auctioneer. Towards this end, we develop a Privacy-Preserving Economic-Robust spectrum auction scheme, namely PPER. Not only does it protect users' bid privacy well, but it also guarantees economic robustness, another important auction property. Moreover, most previous spectrum auctions consider only transmitters, not receivers, resulting in many unexpected collisions during transmission. In this work, we consider interference constraints from transmissions, instead of transmitters, in spectrum allocation. Extensive privacy analysis and simulation results show the effectiveness and efficiency of our scheme.
【Keywords】: radio spectrum management; radio transmitters; radiofrequency interference; telecommunication security; telecommunication traffic; PPER; auction property; interference constraints; privacy analysis; privacy-preserving economic-robust spectrum auction scheme; spectrum allocation; user bid privacy; wireless networks; Computers; Conferences; Interference; Linear programming; Pricing; Privacy; Resource management
【Paper Link】 【Pages】:918-926
【Authors】: Ruihao Zhu ; Kang G. Shin
【Abstract】: The rapid growth of wireless mobile users and applications has led to high demand of spectrum. Auction is a powerful tool to improve the utilization of spectrum resource, and many auction mechanisms have been proposed thus far. However, none of them has considered both the privacy of bidders and the revenue gain of the auctioneer together. In this paper, we study the design of privacy-preserving auction mechanisms. We first propose a differentially private auction mechanism which can achieve strategy-proofness and a near optimal expected revenue based on the concept of virtual valuation. Assuming the knowledge of the bidders' valuation distributions, the near optimal differentially private and strategy-proof auction mechanism uses the generalized Vickrey-Clarke-Groves auction payment scheme to achieve high revenue with a high probability. To tackle its high computational complexity, we also propose an approximate differentially PrivAte, Strategy-proof, and polynomially tractable Spectrum (PASS) auction mechanism that can achieve a suboptimal revenue. PASS uses a monotone allocation algorithm and the critical payment scheme to achieve strategy-proofness. We also evaluate PASS extensively via simulation, showing that it can generate more revenue than existing mechanisms in the spectrum auction markets.
【Keywords】: mobile communication; polynomial approximation; telecommunication security; PASS auction mechanism; Vickrey-Clarke-Groves auction payment scheme; approximate revenue maximization; auction mechanisms; computational complexity; monotone allocation algorithm; near optimal expected revenue; polynomially tractable spectrum; privacy preserving auction mechanisms; revenue gain; spectrum auction markets; spectrum demand; spectrum resource; strategy proof spectrum auction; valuation distributions; virtual valuation; wireless mobile users; Computational complexity; Computers; Conferences; Cost accounting; Privacy; Resource management; Wireless communication
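Two building blocks named in the abstract above can be made concrete in a few lines: Myerson's virtual valuation (which the near-optimal mechanism maximizes in place of raw bids) and a single-item second-price auction with a reserve (the degenerate form of the VCG payment scheme). This is a standard-textbook sketch, not the paper's differentially private mechanism; distributions and names are illustrative.

```python
def virtual_valuation(v, F, f):
    """Myerson virtual valuation phi(v) = v - (1 - F(v)) / f(v),
    for a bidder with valuation CDF F and density f."""
    return v - (1.0 - F(v)) / f(v)

def second_price_with_reserve(bids, reserve):
    """Single-item second-price auction with a reserve price.
    Returns (winner index, price) or (None, 0.0) if no bid qualifies;
    the winner pays max(second-highest qualifying bid, reserve)."""
    qual = [(b, i) for i, b in enumerate(bids) if b >= reserve]
    if not qual:
        return None, 0.0
    qual.sort(reverse=True)
    winner = qual[0][1]
    price = qual[1][0] if len(qual) > 1 else reserve
    return winner, max(price, reserve)
```

For Uniform[0,1] valuations, phi(v) = 2v - 1, so the revenue-optimal reserve solves phi(r) = 0, i.e. r = 0.5 — the standard example of how virtual valuations raise expected revenue over a plain second-price auction.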
【Paper Link】 【Pages】:927-935
【Abstract】: The hardware improvement of mobile devices and the pervasiveness of wireless technology expedite convergence with the fast-growing cloud computing trend, where the abundant resources on the cloud meet well with the deficiency of hand-held devices. Cloudlet, a newly emerging paradigm that "brings the cloud closer" to end users, features a more scalable deployment fashion in which idle personal servers can be efficiently harnessed. Despite an envisioned monetary saving, such a paradigm confines itself to limited application scenarios and fails to reach the wide realm of roaming users outside the coverage of its access points. In this paper, we propose a cloudlet-based multi-lateral resource exchange framework for mobile users, relying on no central entities. Inspired by the success of BitCoin, we design a novel virtual currency tailored for our framework. To realize an efficient resource exchange market, we also introduce flexible pricing strategies adopted by the individual users, whom we assume to be rational price-takers, with solid theoretical analysis of the equilibrium state and its stability. After elaborating the key functional modules, we introduce a prototype design enabling seamless trading of Internet bandwidth among mobile users as a proof-of-concept, with minimal user intervention. Both simulations and experiments are conducted to verify the practicality and efficiency of our system.
【Keywords】: cloud computing; computer network reliability; mobile computing; mobile radio; pricing; resource allocation; Internet bandwidth; cloud computing; cloudlet-based multilateral resource exchange framework; flexible pricing strategy; handheld device; least user intervention; mobile device user; proof-of-concept; seamless trading; wireless technology; Cloud computing; Computer architecture; Computers; Mobile communication; Mobile handsets; Online banking; Servers
【Paper Link】 【Pages】:936-944
【Authors】: Mostafa Dehghan ; Anand Seetharam ; Bo Jiang ; Ting He ; Theodoros Salonidis ; Jim Kurose ; Don Towsley ; Ramesh K. Sitaraman
【Abstract】: We investigate the problem of optimal request routing and content caching in a heterogeneous network supporting in-network content caching with the goal of minimizing average content access delay. Here, content can either be accessed directly from a back-end server (where content resides permanently) or be obtained from one of multiple in-network caches. To access a piece of content, a user must decide whether to route its request to a cache or to the back-end server. Additionally, caches must decide which content to cache. We investigate the problem complexity of two problem formulations, where the direct path to the back-end server is modeled as i) a congestion-sensitive or ii) a congestion-insensitive path, reflecting whether or not the delay of the uncached path to the back-end server depends on the user request load, respectively. We show that the problem is NP-complete in both cases. We prove that under the congestion-insensitive model the problem can be solved optimally in polynomial time if each piece of content is requested by only one user, or when there are at most two caches in the network. We also identify a structural property of the user-cache graph that potentially makes the problem NP-complete. For the congestion-sensitive model, we prove that the problem remains NP-complete even if there is only one cache in the network and each content is requested by only one user. We show that approximate solutions can be found for both models within a (1 - 1/e) factor of the optimal solution, and demonstrate a greedy algorithm that is found to be within 1% of optimal for small problem sizes. Through trace-driven simulations we evaluate the performance of our greedy algorithms, which show up to a 50% reduction in average delay over solutions based on LRU content caching.
【Keywords】: graph theory; mobile radio; optimisation; telecommunication congestion control; telecommunication network routing; NP-complete problem; average content access delay; back-end server; congestion insensitive path; congestion sensitive path; heterogeneous networks; in-network cache; in-network content caching; optimal request routing; optimal routing; polynomial time; user cache graph; Complexity theory; Delays; Joints; Load modeling; Polynomials; Routing; Servers
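The (1 - 1/e) greedy approximation mentioned above is characteristic of submodular maximization. The sketch below is a simplified, hypothetical version of such a greedy: it places contents into capacity-limited caches to maximize the served request rate, ignoring the congestion-sensitive delay coupling that the paper's actual objective includes.

```python
def greedy_placement(requests, reach, caches, capacity):
    """Greedy submodular placement of contents into caches.

    requests: dict (user, content) -> request rate
    reach:    dict user -> set of cache ids the user can route to
    caches:   list of cache ids, each holding up to `capacity` contents
    Returns {cache id: set of placed contents}."""
    placed = {c: set() for c in caches}
    served = set()   # (user, content) pairs already served by some cache
    items = {item for (_, item) in requests}
    while True:
        best, gain = None, 0.0
        for c in caches:
            if len(placed[c]) >= capacity:
                continue
            for it in items:
                # marginal gain: rate of unserved requests for `it`
                # from users who can reach cache c
                g = sum(r for (u, i), r in requests.items()
                        if i == it and (u, i) not in served and c in reach[u])
                if g > gain:
                    best, gain = (c, it), g
        if best is None:         # no placement adds value
            return placed
        c, it = best
        placed[c].add(it)
        served |= {(u, i) for (u, i) in requests
                   if i == it and c in reach[u]}
    
```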
【Paper Link】 【Pages】:945-953
【Authors】: Felix Poloczek ; Florin Ciucu
【Abstract】: This paper proposes a martingale extension of effective capacity, a concept which has been instrumental in teletraffic theory for modeling the link-layer wireless channel and analyzing QoS metrics. Together with the recently developed concept of an arrival-martingale, the proposed service-martingale concept enables the queueing analysis of a bursty source sharing a MAC channel. In particular, the paper derives the first rigorous and accurate stochastic delay bounds for a Markovian source sharing either an Aloha or CSMA/CA channel, and further considers two extended scenarios accounting for 1) in-source scheduling and 2) spatial multiplexing MIMO. By leveraging the powerful martingale methodology, the obtained bounds are remarkably tight and improve state-of-the-art bounds by several orders of magnitude. Moreover, the obtained bounds indicate that MIMO spatial multiplexing is subject to the fundamental power-of-two phenomenon.
【Keywords】: MIMO communication; Markov processes; carrier sense multiple access; channel capacity; delay estimation; quality of service; queueing theory; radio links; space division multiplexing; telecommunication scheduling; telecommunication traffic; wireless channels; ALOHA channel; CSMA/CA channel; MAC channel; MIMO spatial multiplexing; Markovian source sharing; QoS metrics; arrival martingale; bursty source sharing; delay analysis; in-source scheduling; link layer wireless channel; martingale extension; queueing analysis; random access protocol; service martingale; stochastic delay bound; teletraffic theory; Computers; Conferences; Delays; MIMO; Multiaccess communication; Queueing analysis; Stochastic processes
【Paper Link】 【Pages】:954-962
【Authors】: Ari Arapostathis ; Anup Biswas ; Guodong Pang
【Abstract】: We consider the optimal scheduling problem for a large-scale parallel server system with one large pool of statistically identical servers and multiple classes of jobs under the expected long-run average (ergodic) cost criterion. Jobs of each class arrive as a Poisson process, are served in the FCFS discipline within each class and may elect to abandon while waiting in their queue. The service and abandonment rates are both class-dependent. Assume that the system is operating in the Halfin-Whitt regime, where the arrival rates and the number of servers grow appropriately so that the system gets critically loaded while the service and abandonment rates are fixed. The optimal solution is obtained via the ergodic diffusion control problem in the limit, which forms a new class of problems in the literature of ergodic controls. A new theoretical framework is provided to solve this class of ergodic control problems. The proof of the convergence of the values of the multiclass parallel server system to that of the diffusion control problem relies on a new approximation method, spatial truncation, where the Markov policies follow a fixed priority policy outside a fixed compact set.
【Keywords】: Markov processes; approximation theory; parallel processing; scheduling; statistical mechanics; FCFS discipline; Halfin-Whitt regime; Markov policies; Poisson process; abandonment rates; approximation method; ergodic cost criterion; ergodic diffusion control problem; large-scale multiclass parallel server system; long-run average cost criterion; optimal scheduling; service rates; spatial truncation; statistically identical servers; Computers; Conferences; Cost function; Diffusion processes; Markov processes; Mathematical model; Servers
【Paper Link】 【Pages】:963-972
【Authors】: Qiaomin Xie ; Yi Lu
【Abstract】: The prevalence of data-parallel applications has made near-data scheduling an important problem. An example is map task scheduling in the map-reduce framework. Wang et al. [13] were the first to identify its capacity region and proposed a throughput-optimal algorithm based on MaxWeight. However, the study of the algorithm's delay performance revealed that it is only heavy-traffic optimal for a very special traffic scenario, where all traffic concentrates on a subset of servers. We propose a simple “local-tasks first” priority algorithm and show that it is throughput-optimal and heavy-traffic optimal for all traffic scenarios, i.e., it asymptotically minimizes the average delay as the arrival rate vector approaches the boundary of the capacity region. So far, it is the only known heavy-traffic optimal algorithm for this setting. As the algorithm is based on pre-determined priority, a direct application of the Lyapunov drift technique does not work. The main proof ideas are the construction of an ideal load decomposition and the separate treatment of two subsystems based on their ideal load. To the best of our knowledge, this is the only setup of affinity scheduling where a simple priority algorithm is shown to be heavy-traffic optimal. Simulations show that our algorithm also significantly outperforms existing algorithms at loads away from the boundary of the capacity region.
【Keywords】: telecommunication scheduling; telecommunication traffic; vectors; MaxWeight; affinity scheduling; arrival rate vector; capacity region boundary; data-parallel applications; delay performance; heavy-traffic optimal algorithm; ideal load decomposition; local-tasks first priority algorithm; map task scheduling; map-reduce framework; near-data scheduling; predetermined priority; throughput-optimal algorithm; very special traffic scenario; Computers; Conferences; Delays; Job shop scheduling; Load modeling; Servers; Throughput
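The "local-tasks first" priority rule described above can be illustrated with a minimal dispatch sketch (a toy model with hypothetical data structures, not the paper's exact algorithm or analysis):

```python
from collections import deque

def next_task(server, local_queues, remote_queue):
    # Local-tasks-first: an idle server always prefers a task whose input
    # data it stores locally; only when its local queue is empty does it
    # pull a task that requires remote data access.
    if local_queues[server]:
        return local_queues[server].popleft(), "local"
    if remote_queue:
        return remote_queue.popleft(), "remote"
    return None, "idle"
```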
【Paper Link】 【Pages】:972-980
【Authors】: Jiguo Yu ; Lili Jia ; Dongxiao Yu ; Guangshun Li ; Xiuzhen Cheng
【Abstract】: Discrete beeping is an extremely rigorous local broadcast model depending only on carrier sensing. It describes an anonymous broadcast network where the nodes do not need unique identifiers and have no knowledge about the topology and size of the network. Within such a model, time is divided into slots, and nodes can either beep or keep silent at each slot. In this paper, we consider the problems of constructing a minimum dominating set (MDS) and a minimum connected dominating set (MCDS), respectively, under the discrete beeping model. Assuming that an upper bound N on the network size is known, we first propose and analyze a distributed synchronous algorithm termed BMDS for constructing an MDS, and then propose a distributed synchronous algorithm BCDS for CDS construction based on a maximal independent set (MIS) algorithm and a weakly connected dominating set (WCDS). To the best of our knowledge, we are the first to study MCDS construction under the discrete beeping model. We prove that the time complexity of BMDS is O(log^2 N) rounds with a constant approximation ratio of at most 2, and that BCDS converges to a CDS within O(log^3 N) rounds.
【Keywords】: broadcast communication; radio networks; telecommunication network topology; BCDS; BMDS; MCDS construction; MIS algorithm; WCDS; broadcast model; broadcast network; carrier sensing; discrete beeping model; distributed synchronous algorithm; maximal independent set; minimum connected dominating set; topology; weakly CDS; wireless networks; Algorithm design and analysis; Approximation algorithms; Approximation methods; Computational modeling; Conferences; Distributed algorithms; Wireless networks
【Paper Link】 【Pages】:981-989
【Authors】: Chen Wang ; Hongbo Jiang
【Abstract】: Many applications in wireless sensor networks (WSNs) require that sensor observations in a given monitoring area be aggregated in a serial fashion. This demands that a routing path be constructed traversing all sensors in that area, which is also referred to as linearizing the network. In this paper, we present SURF, a Space filling cURve construction scheme for high genus 3D surFace WSNs, yielding a traversal path that is provably aperiodic (that is, any node is covered at most a constant number of times). SURF first utilizes the hop-count distance function to construct iso-contours in discrete settings; it then uses the concept of the Reeb graph and the maximum cut set to divide the network into different regions. Finally, it conducts a novel serial traversal scheme, enabling traversal within and between regions. To the best of our knowledge, SURF is the first purely connectivity-based solution for linearizing high genus 3D surface WSNs. It is fully distributed and highly scalable, requiring a nearly constant storage and communication cost per node in the network. Extensive simulations on several representative networks demonstrate that SURF works well on high genus 3D surface WSNs.
【Keywords】: graph theory; wireless sensor networks; Reeb graph; SURF; high genus 3D surface WSN; hop count distance function; novel serial traversal scheme; space filling curve construction algorithm; traversal path; wireless sensor networks; Computers; Conferences; Level set; Routing; Three-dimensional displays; Topology; Wireless sensor networks
【Paper Link】 【Pages】:990-998
【Authors】: Zhiwei Zhao ; Wei Dong ; Gaoyang Guan ; Jiajun Bu ; Tao Gu ; Chun Chen
【Abstract】: Wireless link correlation can greatly affect the performance of wireless protocols such as flooding, and opportunistic routing. Researchers have proposed a variety of approaches to optimize existing protocols exploiting link correlation. Most existing works directly measure link correlation using packet-level transmissions and receptions. Measurement alone is insufficient because it lacks predictive power and scalability. In this paper, we present CorModel, a model for predicting link correlation in low-power wireless networks. Based on the underlying causes of link correlation, we explore four easily measurable parameters for our modeling. Besides PHY-layer parameters that previous studies have explored, we find that network-layer parameters can also have significant impact on link correlation. We validate our model and illustrate its usefulness by integrating it into existing protocols for more accurate correlation estimation. Experimental results show that our model can significantly increase the accuracy of wireless link estimation, resulting in better protocol performance.
【Keywords】: protocols; radio links; CorModel; low-power wireless network; packet-level reception; packet-level transmission; wireless link correlation modeling; wireless protocol; Correlation; Interference; Measurement; Protocols; Receivers; Signal to noise ratio
【Paper Link】 【Pages】:999-1007
【Authors】: Lin Chen ; Wei Wang ; Hua Huang ; Shan Lin
【Abstract】: Data harvesting using mobile data ferries has recently emerged as a promising alternative to the traditional multi-hop transmission paradigm. The use of data ferries can significantly reduce energy consumption at sensor nodes and increase network lifetime. However, it usually incurs longer data delivery latency as the data ferry needs to travel through the network to collect data, during which some delay-sensitive data may become obsolete. Therefore, optimizing the trajectory of the data ferry under a data delivery latency bound is important for this approach to be effective in practice. To address this problem, we formally define the time-constrained data harvesting problem, which seeks an optimal data harvesting path in a network to collect as much data as possible within a time duration. We first characterize the performance bound achieved by the optimal data harvesting algorithm and show that the optimal algorithm significantly outperforms the random algorithm, especially as the network scales. Motivated by this theoretical analysis, we prove the NP-completeness of the time-constrained data harvesting problem and then devise polynomial-time approximation schemes (PTAS), mathematically proving that their output is a constant-factor approximation of the optimal solution.
【Keywords】: optimisation; polynomial approximation; wireless sensor networks; NP-completeness; WSN; polynomial-time approximation schemes; time-constrained data harvesting; Algorithm design and analysis; Approximation algorithms; Approximation methods; Computers; Conferences; Polynomials; Robot sensing systems
【Paper Link】 【Pages】:1008-1016
【Authors】: Xiaowen Gong ; Xu Chen ; Kai Xing ; Dong-Hoon Shin ; Mengyuan Zhang ; Junshan Zhang
【Abstract】: With the increasing popularity of location-based services (LBSs), there have been growing concerns about location privacy. To protect location privacy in an LBS, mobile users in physical proximity can work in concert to collectively change their pseudonyms, in order to hide the spatial-temporal correlation in their location traces. In this study, we leverage the social tie structure among mobile users to motivate them to participate in pseudonym change. Drawing on a social group utility maximization (SGUM) framework, we cast users' decision making of whether to change pseudonyms as a socially-aware pseudonym change game (PCG). The PCG further assumes a general anonymity model that allows a user to have its specific anonymity set for personalized location privacy. For the SGUM-based PCG, we show that there exists a socially-aware Nash equilibrium (SNE), and quantify the system efficiency of the SNE with respect to the optimal social welfare. Then we develop a greedy algorithm that myopically determines users' strategies, based on the social group utility derived from only those users whose strategies have already been determined. It turns out that this algorithm can efficiently find a Pareto-optimal SNE with social welfare higher than that of the socially-oblivious PCG, demonstrating the impact of exploiting the social tie structure. We further show that the Pareto-optimal SNE can be achieved in a distributed manner.
【Keywords】: data privacy; game theory; mobile computing; optimisation; telecommunication security; LBS; Pareto-optimal SNE; SGUM-based PCG; location traces; location-based services; mobile networks; optimal social welfare; personalized location privacy; physical proximity; social group utility maximization framework; social tie structure; socially-aware Nash equilibrium; socially-aware pseudonym change game; spatial-temporal correlation; system efficiency quantification; Computers; Games; Mobile communication; Mobile handsets; Nash equilibrium; Privacy; Tin
【Paper Link】 【Pages】:1017-1025
【Authors】: Ben Niu ; Qinghua Li ; Xiaoyan Zhu ; Guohong Cao ; Hui Li
【Abstract】: Privacy protection is critical for Location-Based Services (LBSs). In most previous solutions, users query service data from the untrusted LBS server when needed, and discard the data immediately after use. However, the data can be cached and reused to answer future queries. This prevents some queries from being sent to the LBS server and thus improves privacy. Although a few previous works recognize the usefulness of caching for better privacy, they use caching in a fairly straightforward way, and do not show the quantitative relation between caching and privacy. In this paper, we propose a caching-based solution to protect location privacy in LBSs, and rigorously explore how much caching can improve privacy. Specifically, we propose an entropy-based privacy metric which for the first time incorporates the effect of caching on privacy. Then we design two novel caching-aware dummy selection algorithms which enhance location privacy by maximizing both the privacy of the current query and the dummies' contribution to the cache. Evaluations show that our algorithms provide much better privacy than previous caching-oblivious and caching-aware solutions.
【Keywords】: data privacy; entropy; query processing; caching-aware dummy selection; caching-based solution; entropy-based privacy metric; location-based services; privacy enhancement; privacy protection; untrusted LBS server; users query service data; Algorithm design and analysis; Computers; Entropy; Measurement; Mobile communication; Privacy; Servers
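The entropy-based metric and the greedy caching-aware dummy selection described above can be sketched roughly as follows (the scoring rule and the `bonus` weight for uncached cells are assumptions for illustration, not the paper's actual algorithms):

```python
import math

def entropy(weights):
    # Shannon entropy of a normalized query-probability profile: the more
    # uniform the attacker's view over the candidate cells, the higher
    # the location privacy.
    total = sum(weights)
    probs = [w / total for w in weights if w > 0]
    return -sum(p * math.log2(p) for p in probs)

def select_dummies(candidates, k, query_prob, cached, bonus=0.1):
    # Greedy caching-aware selection: grow the dummy set one cell at a
    # time, preferring cells that raise the entropy of the resulting
    # profile and (via a small bonus) cells not yet in the cache.
    chosen = []
    pool = list(candidates)
    for _ in range(k):
        def score(c):
            return entropy([query_prob[x] for x in chosen + [c]]) + \
                   (bonus if c not in cached else 0.0)
        best = max(pool, key=score)
        chosen.append(best)
        pool.remove(best)
    return chosen
```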
【Paper Link】 【Pages】:1026-1034
【Authors】: Linke Guo ; Yuguang Fang ; Ming Li ; Pan Li
【Abstract】: Widely deployed mHealth systems enable patients to efficiently collect, aggregate, and report their Personal Health Records (PHRs), thereby lowering costs and shortening response times. The increasing need for PHR monitoring requires the involvement of healthcare companies that provide monitoring programs for analyzing PHRs. Unfortunately, healthcare companies lack the computation, storage, and communication capabilities to support millions of patients. To tackle this problem, they seek help from the cloud. However, delegating monitoring programs to the cloud may incur serious security and privacy breaches because people have to provide their identity information and PHRs to the public domain. Even worse, the cloud may mistakenly return incorrect computation results, which can put patients' lives in jeopardy. In this paper, we propose a verifiable privacy-preserving monitoring scheme for cloud-assisted mHealth systems. Our scheme allows patients to verify the correctness of computation results from the cloud without revealing their PHRs and identity information. In addition, our advanced schemes offer efficient PHR updates and PHR computations on complex monitoring programs. Through detailed performance evaluation, we demonstrate the security and efficiency of our proposed scheme.
【Keywords】: cloud computing; data privacy; electronic health records; health care; mobile computing; PHR computations; PHR updates; cloud-assisted mHealth systems; healthcare companies; personal health records; privacy breaches; serious security; verifiable privacy-preserving monitoring; Companies; Computers; Cryptography; Medical services; Monitoring; Polynomials; Privacy; PHR; Privacy; Verifiable Computation; mHealth
【Paper Link】 【Pages】:1035-1043
【Authors】: Sergio Salinas ; Changqing Luo ; Xuhui Chen ; Pan Li
【Abstract】: Solving large-scale linear systems of equations (LSEs) is one of the most common and fundamental problems in big data, but such problems are often too expensive for resource-limited users to solve themselves. Cloud computing has been proposed as a timely, efficient, and cost-effective way of carrying out such computing tasks. Nevertheless, one critical concern in cloud computing is data privacy. In particular, in many cases clients' LSEs contain private data that should remain hidden from the cloud for ethical, legal, or security reasons. Many previous works on secure outsourcing of LSEs have high computational complexity. More importantly, they share a common serious problem, i.e., a huge number of external memory I/O operations. This problem has been largely neglected in the past, but is in fact of particular importance and may eventually render those outsourcing schemes impractical. In this paper, we develop an efficient and practical secure outsourcing algorithm for solving large-scale LSEs, which has both low computational complexity and low memory I/O complexity and protects clients' privacy well. We implement our algorithm on a real-world cloud server and a laptop, and find that it offers significant time savings for the client (up to 65%) compared to previous algorithms.
【Keywords】: Big Data; cloud computing; data privacy; input-output programs; linear systems; outsourcing; LSE; big data; cloud computing; cloud server; data privacy; external memory I/O operations; large-scale linear systems of equations; resource-limited users; secure outsourcing; Computational complexity; Computers; Outsourcing; Privacy; Random access memory; Symmetric matrices
【Paper Link】 【Pages】:1044-1052
【Authors】: Yipei Niu ; Bin Luo ; Fangming Liu ; Jiangchuan Liu ; Bo Li
【Abstract】: With the rapid development of online shopping, e-commerce websites are facing intensive user requests from an increasing number of customers. Especially in promotion seasons, these websites may encounter flash crowds that put heavy pressure on private infrastructure and can even make the website unavailable. Such severe flash crowds can be addressed by leveraging a hybrid cloud solution, which relieves the workload of the private cloud by offloading excessive user requests to the IaaS public cloud. However, the burstiness and fluctuation of flash crowds make it challenging to distribute user requests with the twin targets of minimizing delay and saving cost. In this paper, we apply queueing theory to evaluate the average response time and explore the tradeoff between performance and cost in the hybrid cloud. By taking advantage of Lyapunov optimization techniques, we design an online decision algorithm for request distribution that achieves an average response time arbitrarily close to the theoretical optimum and controls the outsourcing cost according to a given budget. The simulation results demonstrate that in a hybrid cloud, our solution can reduce the cost of e-commerce services as well as guarantee performance when encountering flash crowds.
【Keywords】: Lyapunov methods; Web sites; cloud computing; delays; electronic commerce; optimisation; outsourcing; queueing theory; retail data processing; IaaS public cloud; Lyapunov optimization technique; Website unavailability; cost-effective service provisioning; cost-saving; delay minimization; e-commerce Websites; e-commerce service; flash crowd; hybrid cloud solution; online decision algorithm; online shopping; outsourcing cost; performance-cost tradeoff; private cloud workload relief; private infrastructure; promotion season; queueing theory; response time; user request distribution; Cloud computing; Computers; Conferences
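The response-time side of the tradeoff above can be grounded in a textbook queueing formula; a minimal sketch using the M/M/1 mean response time T = 1/(μ − λ) (a standard building block, not the paper's Lyapunov-based algorithm):

```python
def mm1_response_time(arrival_rate, service_rate):
    # M/M/1 mean response time: T = 1 / (mu - lambda). As the arrival
    # rate approaches the service rate, delay blows up -- which is why
    # offloading excess flash-crowd requests to the public cloud helps.
    assert 0 < arrival_rate < service_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate)
```

For example, splitting a flash-crowd stream so the private cloud's residual arrival rate stays well below its service rate keeps its response time small.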
【Paper Link】 【Pages】:1053-1061
【Authors】: Xiaoxi Zhang ; Chuan Wu ; Zongpeng Li ; Francis C. M. Lau
【Abstract】: On-demand resource provisioning in cloud computing provides tailor-made resource packages (typically in the form of VMs) to meet users' demands. Public clouds nowadays provide more and more elaborate types of VMs, but have yet to offer the most flexible dynamic VM assembly, which is partly due to the lack of a mature mechanism for pricing tailor-made VMs on the spot. This work proposes an efficient randomized auction mechanism based on a novel application of smoothed analysis and randomized reduction, for dynamic VM provisioning and pricing in geo-distributed cloud data centers. This auction, to the best of our knowledge, is the first in the literature that achieves (i) truthfulness in expectation, (ii) polynomial running time in expectation, and (iii) (1 - ϵ)-optimal social welfare in expectation for resource allocation, where ϵ can be arbitrarily close to 0. Our mechanism consists of three modules: (1) an exact algorithm to solve the NP-hard social welfare maximization problem, which runs in polynomial time in expectation; (2) a perturbation-based randomized resource allocation scheme which produces a VM provisioning solution that is (1 - ϵ)-optimal; and (3) an auction mechanism that applies the perturbation-based scheme for dynamic VM provisioning and prices the customized VMs using a randomized VCG payment, with a guarantee of truthfulness in expectation. We validate the efficacy of the mechanism through careful theoretical analysis and trace-driven simulations.
【Keywords】: cloud computing; computational complexity; pricing; resource allocation; virtual machines; (1-ε)-optimal social welfare; NP-hard social welfare maximization problem; cloud computing; dynamic VM pricing; dynamic VM provisioning; flexible dynamic VM assembly; geo-distributed cloud data centers; on-demand cloud resource provisioning; perturbation-based randomized resource allocation scheme; perturbation-based scheme; polynomial time; randomized VCG payment; randomized auction mechanism; tailor-made VM pricing; tailor-made resource packages; trace-driven simulations; truthful (1-ε)-optimal mechanism; user demands; Algorithm design and analysis; Approximation algorithms; Approximation methods; Pareto optimization; Polynomials; Pricing; Resource management
【Paper Link】 【Pages】:1062-1070
【Authors】: Doron Zarchy ; David Hay ; Michael Schapira
【Abstract】: Cloud computing platforms provide computational resources (CPU, storage, etc.) for running users' applications. Often, the same application can be implemented in various ways, each with different resource requirements. Taking advantage of this flexibility when allocating resources to users can both greatly benefit users and lead to much better global resource utilization. We develop a framework for fair resource allocation that captures such implementation tradeoffs by allowing users to submit multiple “resource demands”. We present and analyze two mechanisms for fairly allocating resources in such environments: the Lexicographically-Max-Min-Fair (LMMF) mechanism and the Nash-Bargaining (NB) mechanism. We prove that NB has many desirable properties, including Pareto optimality and envy freeness, in a broad variety of environments whereas the seemingly less appealing LMMF fares better, and is even immune to manipulations, in restricted settings of interest.
【Keywords】: Pareto optimisation; cloud computing; resource allocation; storage management; LMMF mechanism; NB mechanism; Nash-Bargaining mechanism; Pareto optimality; cloud computing; computational resource tradeoff; fair resource allocation; global resource utilization; lexicographically-max-min-fair mechanism; multiresource allocation; Cloud computing; Computers; Conferences; Economics; Memory management; Niobium; Resource management
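The lexicographic max-min idea behind LMMF can be illustrated on a single resource with the classic progressive-filling algorithm (a one-resource sketch only; the paper's LMMF mechanism operates over multiple submitted resource demands per user):

```python
def max_min_fair(capacity, demands):
    # Progressive filling: raise all allocations at the same rate; a user
    # stops growing once its demand is met, and the leftover capacity is
    # shared among the remaining users.
    alloc = [0.0] * len(demands)
    active = list(range(len(demands)))
    remaining = float(capacity)
    while active and remaining > 0:
        share = remaining / len(active)
        bottlenecked = [i for i in active if demands[i] - alloc[i] <= share]
        if not bottlenecked:
            for i in active:          # capacity exhausted evenly
                alloc[i] += share
            remaining = 0.0
        else:
            for i in bottlenecked:    # satisfy saturated users, recurse on rest
                remaining -= demands[i] - alloc[i]
                alloc[i] = demands[i]
                active.remove(i)
    return alloc
```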
【Paper Link】 【Pages】:1071-1079
【Authors】: Huanle Xu ; Wing Cheong Lau
【Abstract】: A parallel processing job can be delayed substantially as long as one of its many tasks is assigned to an unreliable machine. To tackle this so-called straggler problem, most parallel processing frameworks such as MapReduce have adopted various strategies under which the system may speculatively launch additional copies of the same task if its progress is abnormally slow or simply because extra idling resource is available. In this paper, we focus on the design of speculative execution schemes for a parallel processing cluster under different loading conditions. For the lightly loaded case, we analyze and propose two optimization-based schemes; one of them, the Smart Cloning Algorithm (SCA), is based on maximizing the job utility. We also derive the workload threshold below which SCA should be used for speculative execution. Our simulation results show that SCA can reduce the total job flowtime by nearly 22% compared to the speculative execution strategy of Microsoft Mantri. For the heavily loaded case, we propose the Enhanced Speculative Execution (ESE) algorithm, which is an extension of the Microsoft Mantri scheme. We show that the ESE algorithm can beat the Mantri baseline scheme by 35% in terms of job flowtime while consuming the same amount of resources.
【Keywords】: data handling; optimisation; parallel processing; pattern clustering; resource allocation; ESE algorithm; MapReduce; SCA; enhanced speculative execution algorithm; parallel processing cluster; resource consumption; smart cloning algorithm; speculative execution optimization; Algorithm design and analysis; Cloning; Computers; Conferences; Delays; Monitoring; Optimization; Job scheduling; cloning; optimization; speculative execution; straggler detection
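The progress-based trigger common to such speculative execution strategies can be sketched as follows (a generic straggler heuristic with an assumed `slow_factor` threshold, not the paper's SCA or ESE algorithms):

```python
def should_speculate(progress, elapsed, cluster_mean_rate,
                     slow_factor=0.5, idle_slots=0):
    # Launch a backup copy if the task's observed progress rate falls well
    # below the cluster-wide mean rate, or opportunistically whenever idle
    # resource slots are available.
    rate = progress / elapsed
    return rate < slow_factor * cluster_mean_rate or idle_slots > 0
```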
【Paper Link】 【Pages】:1080-1085
【Authors】: Zhao Zhang ; Yishuo Shi
【Abstract】: In a wireless sensor network, the virtual backbone plays an important role. Due to accidental damage or energy depletion, it is desirable that the virtual backbone be fault-tolerant. Such a consideration leads to the problem of finding a minimum weight k-connected m-fold dominating set ((k, m)-MWCDS for short). In this paper, we give an (α + 2.5ρ)-approximation for (2, m)-MWCDS with m ≥ 2 in unit disk graphs, where α is the performance ratio for the minimum weight m-fold dominating set problem, and ρ is the performance ratio for the {0,1,2}-Steiner Network Design problem. In view of the currently best known ratios for α and ρ, (2, m)-MWCDS has a (9 + ε)-approximation for m ≥ 3 and an (8 + ε)-approximation for m = 2, where ε is an arbitrary positive real number.
【Keywords】: approximation theory; fault tolerance; graph theory; virtualisation; wireless sensor networks; MWCDS; Steiner network design problem; approximation algorithm; minimum weight fault-tolerant virtual backbone; minimum weight k-connected m-fold dominating set; unit disk graph; wireless sensor network; Algorithm design and analysis; Approximation algorithms; Approximation methods; Computers; Conferences; Sensors; Wireless sensor networks
【Paper Link】 【Pages】:1086-1094
【Authors】: Cing-yu Chu ; Kang Xi ; Min Luo ; H. Jonathan Chao
【Abstract】: As service providers have started deploying SDN in their networks, traditional IP routers are gradually being upgraded to SDN-enabled switches. In other words, traditional IP routers and SDN switches will coexist in the network, which is called a hybrid SDN network. In such a network, we take advantage of SDN and propose an approach that guarantees traffic reachability in the presence of any single link failure. By redirecting traffic on the failed link to SDN switches through pre-configured IP tunnels, the proposed approach is able to react to failures very fast. With the help of coordination among SDN switches, we are also able to explore multiple backup paths for failure recovery. This allows the proposed approach to avoid potential congestion in the post-recovery network by choosing proper backup paths. Simulation results show that our proposed scheme requires only a very small number of SDN switches in the hybrid SDN network to achieve fast recovery and guarantee 100% reachability for any single link failure. It also shows that the proposed approach better load-balances the post-recovery network compared to IP Fast Reroute and shortest path re-calculation.
【Keywords】: resource allocation; software defined networking; telecommunication network reliability; telecommunication network routing; telecommunication traffic; IP routers; SDN switches; congestion-aware single link failure recovery; hybrid SDN networks; load balance; post recovery network; pre-configured IP tunnels; service providers; software defined networking; traffic reachability; Computers; Conferences; IP networks; Peer-to-peer computing; Routing; Routing protocols
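The congestion-aware backup-path choice described above can be sketched as a bottleneck-headroom selection over precomputed candidates (an illustrative fragment with hypothetical data structures, not the paper's full recovery scheme):

```python
def pick_backup_path(candidate_paths, link_load, link_capacity):
    # Choose the backup path whose most-loaded link retains the most spare
    # capacity after redirection, to avoid post-recovery congestion.
    def headroom(path):
        return min(link_capacity[l] - link_load[l] for l in path)
    return max(candidate_paths, key=headroom)
```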
【Paper Link】 【Pages】:1095-1103
【Authors】: Shehla S. Rana ; Nitin H. Vaidya
【Abstract】: This paper considers reliable communication in presence of Byzantine faulty nodes, using multiple node-disjoint routes. To tolerate f Byzantine faults, at least 2f + 1 node-disjoint paths are needed between a source and destination node pair. However, often the faulty nodes' misbehavior manifests itself as a "disagreement" between information provided by the faulty node and its neighbors. This disagreement can be captured in the form of a conflict graph. Even though the conflict graph does not always allow us to identify faulty nodes precisely, we show that it can still be used to reduce the number of paths necessary for reliable communication (to smaller than 2f+1). We consider two strategies for using the node-disjoint paths for reliable delivery of messages: replication and coding across different paths. For each strategy, we propose iPath, a scheme to identify the optimal set of paths that needs to be used to achieve reliable communication for a given conflict graph.
【Keywords】: electronic messaging; fault tolerance; graph theory; network coding; radio networks; telecommunication network reliability; telecommunication network routing; Byzantine fault tolerant communication reliability; conflict graph; iPath; intelligent and optimal path selection; message coding; message delivery; message replication; multiple node-disjoint route; Algorithm design and analysis; Computers; Conferences; Encoding; Fault tolerance; Throughput
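For the replication strategy, the classical 2f+1 bound can be seen in a few lines: with at most f Byzantine paths, the correct message always holds a majority (a sketch of that baseline only; iPath's conflict-graph analysis is what permits fewer paths):

```python
from collections import Counter

def decode_replicated(copies, f):
    # With 2f+1 node-disjoint paths and at most f Byzantine paths, the
    # correct message arrives at least f+1 times, so a majority vote
    # recovers it.
    assert len(copies) >= 2 * f + 1
    value, count = Counter(copies).most_common(1)[0]
    return value if count >= f + 1 else None
```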
【Paper Link】 【Pages】:1104-1112
【Authors】: Xiaoyong Li ; Daren B. H. Cline ; Dmitri Loguinov
【Abstract】: We analyze synchronization issues arising between two stochastic point processes, one of which models data churn at an information source and the other periodic downloads from its replica (e.g., search engine, web cache, distributed database). Due to lazy (pull-based) synchronization, the replica experiences recurrent staleness, which translates into some form of penalty stemming from its reduced ability to perform consistent computation and/or provide up-to-date responses to customer requests. We model this system under non-Poisson update/refresh processes and obtain sample-path averages of various metrics of staleness cost, generalizing previous results and exposing novel problems in this field.
【Keywords】: data handling; data models; stochastic processes; synchronisation; data churn models; information source; lazy data replication; nonPoisson update-refresh processes; sample-path staleness; stochastic point processes; synchronization analysis; Computational modeling; Delays; Gold; Limiting; Random variables; Synchronization
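As a concrete baseline for such staleness metrics, the simplest special case (Poisson updates at rate λ at the source, periodic refresh every T at the replica) admits a closed form for the time-average probability that the replica is stale. A sketch under that simplifying assumption (the paper's results cover non-Poisson update/refresh processes):

```python
import math

def avg_staleness_fraction(lam, T):
    # At time t into a refresh interval, P(stale) = 1 - exp(-lam * t)
    # under Poisson(lam) updates; averaging over t in [0, T] gives
    # 1 - (1 - exp(-lam * T)) / (lam * T).
    return 1.0 - (1.0 - math.exp(-lam * T)) / (lam * T)
```

For small λT this is approximately λT/2, i.e., frequent refreshes make the replica stale for only about half an update inter-arrival per interval.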
【Paper Link】 【Pages】:1113-1121
【Authors】: Kristen Gardner ; Sem C. Borst ; Mor Harchol-Balter
【Abstract】: This paper considers the problem of server-side scheduling for jobs composed of multiple pieces with consecutive (progressive) deadlines. One example is server-side scheduling for video service, where clients request flows of content from a server with limited capacity, and any content not delivered by its deadline is lost. We consider the simultaneous goals of 1) minimizing overall loss, and 2) differentiating loss fractions across classes of flows in proportion to relative weights. State-of-the-art policies, like Discriminatory Processor Sharing and Weighted Fair Queueing, use a fixed static proportional allocation of service rate and fail to achieve both goals. The well-known Earliest Deadline First policy minimizes overall loss, but fails to provide proportional loss across flows, because it treats packets as independent jobs. This paper introduces the Earliest Progressive Deadline First (EPDF) class of policies. We prove that all policies in this broad class minimize overall loss. Furthermore, we demonstrate that many EPDF policies accurately differentiate loss fractions in proportion to class weights, satisfying the second goal.
【Keywords】: client-server systems; video servers; EPDF; discriminatory processor sharing; earliest deadline first policy; fixed static proportional allocation; optimal scheduling; progressive deadline; video service; weighted fair queueing; Bismuth; Bit rate; Buffer storage; Conferences; Optimal scheduling; Servers; Streaming media
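The EDF baseline the paper builds on can be sketched in a few lines, treating pending pieces as (deadline, id) pairs (EPDF itself additionally tracks per-flow progressive deadlines, which this sketch omits):

```python
def edf_pick(pending, now):
    # Earliest-Deadline-First: pieces whose deadline has passed are dropped
    # (counted as loss); among live pieces, serve the closest deadline.
    live = [p for p in pending if p[0] > now]
    return min(live) if live else None
```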
【Paper Link】 【Pages】:1122-1130
【Authors】: Florin Ciucu ; Felix Poloczek
【Abstract】: This paper analyzes queueing behavior subject to multiplexing a stochastic process M(n) of flows, and not a constant as conventionally assumed. By first considering the case when M(n) is iid, it is shown that flows' multiplexing "hurts" the queue size (i.e., the queue size increases in distribution). The simplicity of the iid case enables the quantification of the "best" and "worst" distributions of M(n), i.e., minimizing/maximizing the queue size. The more general, and also realistic, case when M(n) is Markov-modulated reveals an interesting behavior: flows' multiplexing "hurts" but only when the multiplexed flows are sufficiently long. An important caveat raised by such observations is that the conventional approximation of M(n) by a constant can be very misleading for queueing analysis.
【Keywords】: Markov processes; multiplexing; queueing theory; Markov modulation; multiplexing flow; queue size maximization; queue size minimization; queueing analysis; stochastic process; Computers; Conferences; Delays; Multiplexing; Queueing analysis; Stochastic processes
【Paper Link】 【Pages】:1131-1139
【Authors】: Lei Ying ; R. Srikant ; Xiaohan Kang
【Abstract】: In many computing and networking applications, arriving tasks have to be routed to one of many servers, with the goal of minimizing queueing delays. When the number of processors is very large, a popular routing algorithm works as follows: select two servers at random and route an arriving task to the least loaded of the two. It is well-known that this algorithm dramatically reduces queueing delays compared to an algorithm which routes to a single randomly selected server. In recent cloud computing applications, it has been observed that even sampling two queues per arriving task can be expensive and can even increase delays due to messaging overhead. So there is an interest in reducing the number of sampled queues per arriving task. In this paper, we show that the number of sampled queues can be dramatically reduced by using the fact that tasks arrive in batches (called jobs). In particular, we sample a subset of the queues such that the size of the subset is slightly larger than the batch size (thus, on average, we only sample slightly more than one queue per task). Once a random subset of the queues is sampled, we propose a new load balancing method called batch-filling to attempt to equalize the load among the sampled servers. We show that our algorithm dramatically reduces the sample complexity compared to previously proposed algorithms.
【Keywords】: cloud computing; computational complexity; queueing theory; randomised algorithms; resource allocation; batch-filling; cloud computing applications; popular routing algorithm; queueing delays; randomized load balancing method; Algorithm design and analysis; Complexity theory; Computers; Delays; Markov processes; Queueing analysis; Servers
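The classical power-of-d-choices routing rule that this abstract builds on can be reproduced in a small slotted simulation (a toy sketch with assumed parameters, not the paper's batch-filling algorithm): sampling just two queues per task already collapses the average backlog relative to purely random routing.

```python
import random

def simulate(n_servers=100, arrivals_per_slot=90, slots=1000, d=1, seed=1):
    """Route tasks to servers; each server completes one task per slot.

    d=1: route each task to a single uniformly random server.
    d=2: sample two servers, join the shorter queue (power of two choices).
    Returns the time-averaged queue length per server.
    """
    rng = random.Random(seed)
    q = [0] * n_servers
    total = 0
    for _ in range(slots):
        for _ in range(arrivals_per_slot):
            choices = rng.sample(range(n_servers), d)
            q[min(choices, key=lambda s: q[s])] += 1     # join least loaded sample
        q = [max(0, x - 1) for x in q]                   # one departure per server
        total += sum(q)
    return total / (slots * n_servers)
```

At 90% load, `simulate(d=2)` yields a much smaller average queue than `simulate(d=1)`; the paper's contribution is getting comparable gains while sampling only slightly more than one queue per task by exploiting batch arrivals.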
【Paper Link】 【Pages】:1140-1148
【Authors】: Malhar Mehta ; Veeraruna Kavitha ; N. Hemachandra
【Abstract】: When agents compete for a common resource, and the utilities they derive upon allocation are independent across agents and time slots, an opportunistic scheduler is used. The instantaneous utility of any one agent can be low, yet with high probability a few among many will have 'good' utility. Opportunistic schedulers exploit these opportunities, allocating the resource at any time to a 'good' agent. Efficient schedulers maximize the sum of accumulated utilities; thus, the 'best' agent is allocated every time. This can result in negligible (unfair) accumulations for agents whose instantaneous utilities are 'low' with high probability. Fair opportunistic schedulers are thus introduced (e.g., alpha-fair schedulers). We study their price of fairness (PoF). We group the agents into finitely many classes, each class having identical utilities and QoS requirements. We study the asymptotic PoF as the number of agents increases, while keeping the class-wise proportions constant. The asymptotic PoF is less than one, depends only upon the differences in the largest utilities of the individual classes, and is bounded by the maximum such normalized difference. The PoF is zero initially and increases with the fairness requirements to an upper bound strictly less than one. We observe that the fair schedulers are essentially priority schedulers, which facilitates easy analysis of the PoF.
【Keywords】: multi-agent systems; optimisation; quality of service; resource allocation; telecommunication scheduling; QoS requirement; asymptotic price of fairness; best agent; fairness price; good agent; opportunistic scheduler; priority scheduler; resource allocation; Computers; Conferences; Manganese; Optimization; Sociology; Statistics; Upper bound; α-fair schedulers; Constrained optimization; Fairness; Opportunistic schedulers; Price of Fairness; Priority
【Paper Link】 【Pages】:1149-1157
【Authors】: Qiao Xiang ; Xi Chen ; Linghe Kong ; Lei Rao ; Xue Liu
【Abstract】: Vehicle-to-vehicle safety data dissemination plays an increasingly important role in ensuring the safety and efficiency of vehicle transportation. When collecting safety data, vehicles always prefer data generated at a closer location over data generated at a distant location, and prefer recent data over outdated data. However, these data preferences have been overlooked in most existing safety data dissemination protocols, preventing vehicles from getting more precise traffic information. In this paper, we explore the feasibility and benefits of incorporating the data preferences of vehicles in designing efficient safety data dissemination protocols. In particular, we propose the concept of packet-value to quantify these data preferences. We then design PVCast, a packet-value-based safety data dissemination protocol for VANETs. PVCast makes the dissemination decision for each packet based on its packet-value and effective dissemination coverage in order to satisfy the data preferences of all the vehicles in the network. In addition, PVCast is lightweight and fully distributed. We evaluate the performance of PVCast on the ns-2 platform by comparing it with three representative data dissemination protocols. Simulation results in a typical highway scenario show that PVCast provides significant improvements in per-vehicle throughput and per-packet dissemination coverage with small per-packet delay. Our findings demonstrate the importance and necessity of comprehensively considering the data preferences of vehicles when designing an efficient safety data dissemination protocol for VANETs.
【Keywords】: access protocols; packet radio networks; transportation; vehicular ad hoc networks; PVCast; VANET; data preference matters; dissemination decision; ns-2 platform; packet value concept; per-packet dissemination coverage; per-vehicle throughput; safety data dissemination protocols; small per-packet delay; vehicle transportation; vehicle-to-vehicle safety data dissemination; vehicular ad hoc networks; Data models; Delays; Mathematical model; Protocols; Safety; Vehicles; Vehicular ad hoc networks
【Paper Link】 【Pages】:1158-1166
【Authors】: Stefania Santini ; Alessandro Salvi ; Antonio Saverio Valente ; Antonio Pescapè ; Michele Segata ; Renato Lo Cigno
【Abstract】: Automated and coordinated vehicle driving (platooning) is gaining more and more attention and represents a challenging scenario that relies heavily on wireless Inter-Vehicular Communication (IVC). In this paper, we propose a novel consensus-based controller for vehicle platooning. As opposed to current approaches, where the logical control topology is fixed a priori and the control law designed accordingly, we design a system whose control topology can be reconfigured depending on the actual network status. Moreover, the controller does not require the vehicles to be radar-equipped and automatically compensates for outdated information caused by network delays. We define the control law and analyze it both analytically and by simulation, showing its robustness in different network scenarios. We consider three different wireless network settings: uncorrelated Bernoulli losses, correlated losses using a Gilbert-Elliott channel, and a realistic traffic scenario with interference caused by other vehicles. Finally, we compare our strategy with a state-of-the-art controller. The results show the ability of the proposed approach to maintain a stable string of vehicles even in the presence of strong interference, delays, and fading conditions, providing higher comfort and safety for platoon drivers.
【Keywords】: mobile communication; road traffic; telecommunication network topology; Bernoullian losses; Gilbert-Elliott channel; coordinated vehicles driving; correlated losses; inter-vehicular communications; logical control topology; network delays; platooning; wireless intervehicular communication; Algorithm design and analysis; Delays; Heuristic algorithms; Stability analysis; Topology; Vehicle dynamics; Vehicles
【Paper Link】 【Pages】:1167-1175
【Authors】: Shuo Zhang ; Fei He ; Ming Gu
【Abstract】: As part of the international standard IEC 61375, the multifunction vehicle bus (MVB) is used in most modern train control systems. It is highly desirable to check the temporal properties of the data transmitted on the bus; however, we are not aware of any published work on this problem. We propose VeRV, the first temporal and data-concerned verification framework for vehicle bus systems. A domain-specific language, called VeSpec, is proposed to specify the packet formats and the desired properties. The language is expressive, modular, and easy to use. Given a VeSpec script, VeRV automatically generates a runtime analyzer. We have applied our technique to a real tube train system and succeeded in diagnosing a real failure in that system. This industrial application illustrates the effectiveness and efficiency of our technique.
【Keywords】: IEC standards; programming languages; railway communication; telecommunication control; VeRV; VeSpec domain-specific language; VeSpec script; automatic generation; data-concerned verification framework; international standard IEC 61375; modern train control systems; multifunction vehicle bus; packet formats; real tube train system; runtime analyzer; temporal properties; temporal verification framework; vehicle bus systems; Automata; History; Java; Monitoring; Temperature measurement; Temperature sensors; Vehicles; Vehicle bus systems; domain-specific language; online monitoring; runtime verification
【Paper Link】 【Pages】:1176-1184
【Authors】: Sanjib Sur ; Xinyu Zhang
【Abstract】: We explore the use of TV White Space (TVWS) wireless networks for providing robust and long range connectivity to vehicles. A key distinctive requirement of TVWS networks is the power asymmetry - the static APs are allowed to transmit at up to 4 W, while the mobile clients in vehicles are limited to only 100 mW. Our measurements reveal that the power asymmetry not only causes severe uplink blackouts but also poses significant coexistence problems, as high-power fixed nodes can easily starve the low-power mobile ones due to carrier sensing loss. To tackle these unique challenges, we propose a cross-layer design of a Direct-Sequence Spread Spectrum (DSSS) based system. We employ an adaptive DSSS mechanism that strategically configures the spreading code, so as to boost uplink coverage while maximizing throughput. We further design a traffic-aware code assignment algorithm for uplink packets to balance the requirement of throughput-intensive and latency-sensitive flows. We have implemented the design on a TVWS software-radio platform on a moving vehicle in an urban environment, and demonstrated that link asymmetry can be completely removed to support realistic application traffic, while the carrier sense loss rate at fixed nodes can be reduced by around 85%.
【Keywords】: code division multiple access; mobile radio; spread spectrum communication; TV white space wireless networks; TVWS software-radio platform; adaptive DSSS mechanism; direct-sequence spread spectrum; link power asymmetry; mobile whitespace networks; traffic-aware code assignment algorithm; Downlink; Gain; Mobile communication; Signal to noise ratio; Spread spectrum communication; Throughput; Uplink
【Paper Link】 【Pages】:1185-1193
【Authors】: Wenjie Hu ; Guohong Cao
【Abstract】: Video streaming on smartphones consumes a lot of energy. One common solution is to download and buffer future video data for playback so that the wireless interface can be turned off most of the time, saving energy. However, this may waste energy and bandwidth if the user skips ahead or quits before the end of the video. Using a small buffer can reduce the bandwidth wastage, but may consume more energy and introduce rebuffering delay. In this paper, we analyze the power consumption during video streaming considering user skip and early-quit scenarios. We first propose an offline method to compute the minimum power consumption, and then introduce an online solution that saves energy based on whether the user tends to watch the video for a long time or tends to skip. We have implemented the online solution on Android-based smartphones. Experimental results and trace-driven simulation results show that our method saves energy while achieving a better tradeoff between delay and bandwidth compared to existing methods.
【Keywords】: power consumption; smart phones; telecommunication power management; video streaming; Android based smartphones; bandwidth wastage; early quit scenarios; future video data; minimum power consumption; playback; rebuffering delay; user skip scenarios; video streaming; wireless interface; Bandwidth; Data communication; Delays; Smart phones; Streaming media; Watches; Wireless communication
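The buffering tradeoff this abstract analyzes can be illustrated with a back-of-the-envelope expected-energy model. All power and timing parameters below are assumed for illustration, and the two strategies are simplifications of ours; this is not the paper's offline method.

```python
def expected_energy(n_chunks, p_quit, p_active=1.2, p_tail=0.6,
                    t_dl=2.0, t_tail=6.0):
    """Expected download energy (joules) for two toy buffering strategies.

    Toy model: each chunk takes t_dl s to fetch at p_active W; after each
    download burst the radio lingers t_tail s in a tail state at p_tail W.
    The user quits after each chunk with probability p_quit.
    Returns (prefetch_all_energy, chunk_by_chunk_energy).
    """
    # Expected number of chunks actually watched before quitting.
    watched = sum((1 - p_quit) ** k for k in range(n_chunks))
    # Prefetch-all: one long burst and one tail, paid even if the user quits early.
    prefetch = n_chunks * t_dl * p_active + t_tail * p_tail
    # Chunk-by-chunk: one burst plus one tail per chunk the user actually watches.
    per_chunk = watched * (t_dl * p_active + t_tail * p_tail)
    return prefetch, per_chunk
```

With these assumed numbers, a high quit probability makes chunk-by-chunk downloading cheaper (little data is wasted), while a patient viewer makes full prefetching cheaper (the tail energy is paid once) — exactly the tension the paper's online solution adapts to.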
【Paper Link】 【Pages】:1194-1202
【Authors】: Yanzhi Ren ; Chen Wang ; Jie Yang ; Yingying Chen
【Abstract】: Sleep monitoring has drawn increasing attention, as the quality and quantity of sleep are important to maintaining a person's health and well-being. For example, inadequate and irregular sleep is usually associated with serious health problems such as fatigue, depression, and cardiovascular disease. Traditional sleep monitoring systems, such as PSG, involve wearable sensors with professional installation, and are thus limited to clinical usage. Recent work using smartphone sensors for sleep monitoring can detect several events related to sleep, such as body movement, coughing, and snoring. Such coarse-grained sleep monitoring, however, is unable to detect the breathing rate, which is an important vital sign and health indicator. This work presents a fine-grained sleep monitoring system capable of detecting the breathing rate by leveraging smartphones. Our system exploits the readily available smartphone earphone placed close to the user to reliably capture the human breathing sound. Given the captured acoustic signal, our system performs noise reduction to remove environmental noise and then identifies the breathing rate based on signal envelope detection. Our system can further detect detailed sleep events, including snoring, coughing, turning over, and getting up, based on the acoustic features extracted from the sound. Our experimental evaluation of six subjects over a six-month period demonstrates that the breathing rate monitoring and sleep event detection are highly accurate and robust in various environments. By combining the breathing rate and sleep events, our system can provide continuous and noninvasive fine-grained sleep monitoring for healthcare-related applications, such as sleep apnea monitoring, as evidenced by our experimental study.
【Keywords】: diseases; earphones; hearing; pneumodynamics; sleep; smart phones; acoustic features; breathing rate monitoring; captured acoustic sound; environmental noise; fine-grained sleep monitoring; health indicator; healthcare related applications; hearing; human breathing sound; important vital sign; signal envelope detection; sleep apnea monitoring; sleep events; smartphone earphone; smartphone sensors; wearable sensors; Acoustics; Headphones; Microphones; Monitoring; Noise; Sleep apnea; Smart phones
【Paper Link】 【Pages】:1203-1211
【Authors】: Boyuan Sun ; Qiang Ma ; Shanfeng Zhang ; Kebin Liu ; Yunhao Liu
【Abstract】: To meet the demand for more intelligent automation services on smartphones, more and more applications are developed based on users' emotions and personality. It is a consensus that a relationship exists between personal emotions and smartphone usage patterns. Most existing work studies this relationship by learning from manually labeled samples collected from smartphone users. The manual labeling process, however, is time-consuming, labor-intensive, and costly. To address this issue, we propose iSelf, a system that provides a general service for automatic detection of users' emotions in cold-start conditions with a smartphone. Using transfer learning, iSelf achieves high accuracy given only a few labeled samples. We also develop a hybrid public/personal inference engine and validation system, so that iSelf remains continuously updated. Through extensive experiments, the inference accuracy is measured at about 75% and improves steadily through validation and updates.
【Keywords】: learning (artificial intelligence); smart phones; automatic detection; cold-start emotion labeling; hybrid public personal inference engine; iSelf; manual labeling process; smartphones; transfer learning technology; Accuracy; Data collection; Feature extraction; IEEE 802.11 Standard; Labeling; Mobile communication; Smart phones
【Paper Link】 【Pages】:1212-1220
【Authors】: Ge Peng ; Gang Zhou ; David T. Nguyen ; Xin Qi
【Abstract】: Smartphones save energy by entering a low-power suspend mode (<20 mW) when they are idle. We find that on some smartphones, WiFi broadcast frames interrupt suspend mode and force the phone to switch to active mode (>120 mW). As a result, power consumption increases dramatically. To improve energy efficiency, some phones employ a hardware broadcast filter in the WiFi driver. All UDP broadcast frames other than Multicast DNS frames are blocked, so none is received by upper-layer applications. We thus face a dilemma in handling WiFi broadcast traffic during smartphone suspend mode: either receive all of it and suffer high power consumption, or receive none of it and sacrifice functionality. In this paper, we propose the Software Broadcast Filter (SBF) to address this dilemma. SBF is smarter than the hardware broadcast filter, as it filters out only useless broadcast frames and does not impair application functionality. SBF is also more energy efficient than the “receive all” method. Our trace-driven evaluation shows that SBF can save up to 52% of the energy consumed by the “receive all” method.
【Keywords】: smart phones; telecommunication traffic; wireless LAN; SBF; WiFi broadcast frames; WiFi broadcast traffic; WiFi driver; active mode; hardware broadcast filter; multicast DNS frames; power consumption; smartphone suspend mode; software broadcast filter; Computers; Conferences; Energy consumption; IEEE 802.11 Standard; Ports (Computers); Power demand; Power measurement
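The filtering dilemma this abstract describes can be illustrated with a toy delivery predicate. This is only a sketch: the idea of consulting the set of ports that local applications have registered is our assumption about what a "smarter" software filter could do, not necessarily SBF's actual mechanism.

```python
MDNS_PORT = 5353  # Multicast DNS, the one protocol the hardware filter passes

def should_deliver(dst_port, registered_ports, is_broadcast):
    """Decide whether a UDP frame should wake the application layer.

    The hardware-style filter described in the abstract drops every broadcast
    frame except mDNS. A software filter can instead deliver a broadcast frame
    only when some application on the phone actually listens on its port.
    """
    if not is_broadcast:
        return True                      # unicast traffic is always delivered
    if dst_port == MDNS_PORT:
        return True                      # mDNS is kept even by the hardware filter
    return dst_port in registered_ports  # deliver only if some app wants it
```

For example, a NetBIOS broadcast on port 137 would be dropped on a phone with no listener, avoiding a wakeup, but delivered unchanged once an application registers that port, preserving functionality.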
【Paper Link】 【Pages】:1221-1229
【Authors】: Ashwin Pananjady ; Vivek Kumar Bagaria ; Rahul Vaze
【Abstract】: Given a universe U of n elements and a collection of subsets S of U, the maximum disjoint set cover problem (DSCP) is to partition S into as many set covers as possible, where a set cover is defined as a collection of subsets whose union is U. We consider the online DSCP, in which the subsets arrive one by one (possibly in an order chosen by an adversary) and must be irrevocably assigned to some partition on arrival, with the objective of minimizing the competitive ratio. The competitive ratio of an online DSCP algorithm A is defined as the maximum, over all inputs, of the ratio of the number of disjoint set covers obtained by the optimal offline algorithm to the number obtained by A. We propose an online algorithm for solving the DSCP with competitive ratio ln n. We then show a lower bound of Ω(√ln n) on the competitive ratio of any online DSCP algorithm. The online disjoint set cover problem has wide-ranging applications in practice, including the online crowd-sourcing problem, the online coverage lifetime maximization problem in WSNs, and online resource allocation problems.
【Keywords】: computational complexity; set theory; Ω(√ln n) bound; WSN; competitive ratio minimization; maximum disjoint set cover problem; online DSCP algorithm; online coverage lifetime maximization problem; online crowd-sourcing problem; online disjoint set cover problem; online resource allocation problem; optimal offline algorithm; subsets; Algorithm design and analysis; Color; Computers; Conferences; Partitioning algorithms; Resource management; Silicon
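The notions of disjoint set covers and irrevocable online assignment can be made concrete with a naive first-fit greedy. This sketch is illustrative only: it is not the paper's ln n-competitive algorithm, just the simplest policy satisfying the online constraints.

```python
def online_disjoint_covers(universe, subsets):
    """Greedy online assignment of arriving subsets to candidate set covers.

    Each arriving subset is irrevocably placed into the first partition it can
    still help complete (i.e., where it covers some uncovered element);
    otherwise it opens a new partition. Returns the number of partitions whose
    union equals the universe, i.e., the disjoint set covers obtained.
    """
    universe = set(universe)
    partitions = []                       # each entry: elements covered so far
    for s in subsets:
        s = set(s) & universe
        for cov in partitions:
            if cov != universe and (s - cov):
                cov |= s                  # the subset adds new elements here
                break
        else:
            partitions.append(set(s))     # open a new candidate cover
    return sum(1 for cov in partitions if cov == universe)
```

On the universe {1, 2, 3} with arrival order {1,2}, {3}, {1}, {2,3}, first-fit completes two disjoint covers; an adversarial order can of course do much worse to this greedy, which is exactly what the competitive-ratio analysis in the paper quantifies.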
【Paper Link】 【Pages】:1230-1238
【Authors】: Maialen Larrañaga ; Urtzi Ayesta ; Ina Maria Verloop
【Abstract】: We develop a unifying framework to obtain efficient index policies for restless multi-armed bandit problems with birth-and-death state evolution. This is a broad class of stochastic resource allocation problems whose objective is to determine efficient policies for sharing resources among competing projects. In seminal work, Whittle developed a methodology to derive well-performing (Whittle's) index policies, obtained by solving a relaxed version of the original problem. Our first main contribution is the derivation of a closed-form expression for Whittle's index as a function of the steady-state probabilities. It can be efficiently calculated; however, it requires several technical conditions to be verified and, in addition, does not provide qualitative insight into Whittle's index. We therefore formulate a fluid version of the relaxed optimization problem, and as our second main contribution we develop a fluid index policy. The latter does provide qualitative insight and is close to Whittle's index. The applicability of our approach is illustrated by two important problems: optimal class selection and optimal load balancing. By allowing state-dependent capacities, we can model important phenomena, e.g., power-aware server farms and opportunistic scheduling in wireless systems. Numerical simulations show that Whittle's index and our fluid index policy are both nearly optimal.
【Keywords】: indexing; probability; resource allocation; stochastic processes; Whittle index policies; birth-and-death state evolution; closed-form expression; fluid index policies; opportunistic scheduling; optimal class selection; optimal load balancing; power-aware server-farms; resource allocation problems; restless multiarmed bandit problems; state-dependent capacities; steady-state probabilities; stochastic policies; stochastic resource allocation problems; wireless systems; Conferences; Indexes; Load management; Optimization; Resource management; Servers; Stochastic processes
【Paper Link】 【Pages】:1239-1247
【Authors】: Abhishek Dixit ; Bart Lannoo ; Didier Colle ; Mario Pickavet ; Piet Demeester
【Abstract】: A variety of dynamic bandwidth allocation (DBA) algorithms have been proposed to foster the performance of Ethernet passive optical networks (EPONs). These DBA algorithms use packet delay as an important quality of service (QoS) metric. This has led to a significant interest in developing mathematical models for analyzing the delay. These delay models often provide valuable qualitative results and worthwhile insights in understanding the mechanism of the delay and the manner in which it depends upon algorithm characteristics. Up to now, the delay models have been developed under some approximations, e.g., fixed packet sizes, negligible distances between a server and its users, a gated bandwidth assignment method, and Poisson traffic. In this paper, we develop the delay models for more realistic scenarios than the current state-of-the-art, including gated and limited bandwidth assignment methods, Poisson and Pareto traffic, and long-reach PONs in which the distance between the server and the users is significant and hence not negligible. We model different DBA paradigms, such as REPORT after data, REPORT before data, and multi-thread polling. The results from simulation experiments confirm the accuracy of the delay models.
【Keywords】: Pareto analysis; bandwidth allocation; optical fibre LAN; passive optical networks; quality of service; stochastic processes; DBA algorithm; EPON; Ethernet long-reach passive optical network; Pareto traffic; Poisson traffic; QoS metric; delay model; dynamic bandwidth allocation algorithm; limited bandwidth assignment method; quality of service metric; Bandwidth; Delays; EPON; IEEE 802.3 Standard; Logic gates; Optical network units; Passive optical networks; delay models; dynamic bandwidth allocation; long-reach PON (LR-PON); multi-threads
【Paper Link】 【Pages】:1248-1256
【Authors】: Abhishek Sinha ; Georgios S. Paschos ; Chih-Ping Li ; Eytan Modiano
【Abstract】: We study the problem of broadcasting packets in wireless networks. At each time slot, a network controller activates non-interfering links and forwards packets to all nodes at a common rate; the maximum rate is referred to as the broadcast capacity of the wireless network. Existing policies achieve the broadcast capacity by balancing traffic over a set of spanning trees, which are difficult to maintain in a large and time-varying wireless network. We propose a new dynamic algorithm that achieves the broadcast capacity when the underlying network topology is a directed acyclic graph (DAG). This algorithm utilizes local queue-length information, does not use any global topological structures such as spanning trees, and uses the idea of in-order packet delivery to all network nodes. Although the in-order packet delivery constraint leads to degraded throughput in cyclic graphs, we show that it is throughput optimal in DAGs and can be exploited to simplify the design and analysis of optimal algorithms. Our simulation results show that the proposed algorithm has superior delay performance as compared to tree-based approaches.
【Keywords】: broadcast communication; channel capacity; radio links; radio networks; telecommunication network topology; telecommunication traffic; acyclic graphs; broadcast capacity; broadcasting packets; directed acyclic graph; network controller; network topology; non-interfering links; queue-length information; spanning trees; throughput-optimal broadcast; time-varying wireless network; wireless networks; Algorithm design and analysis; Heuristic algorithms; Network topology; Throughput; Upper bound; Wireless networks
【Paper Link】 【Pages】:1257-1265
【Authors】: Matthew Clark ; Konstantinos Psounis
【Abstract】: With limited opportunities to open up new unencumbered bands to mobile wireless services, interest in enhancing methods for sharing of spectrum between services is high. For example, the band 1695-1710 MHz is expected to be made available to 3GPP Long-Term Evolution cellular network uplinks by sharing with incumbent meteorological satellite services already in the band. The LTE networks are to be operated in a manner that ensures no loss of incumbent capability by adhering to protection requirements such as a limit on the aggregate interference power at fixed incumbent earth station locations. In this paper, we consider this specific spectrum sharing scenario as motivation and formulate an optimization framework for power control and time-frequency resource scheduling on the LTE uplink with an aggregate interference constraint. We design and propose a novel algorithm inspired by numerical solution and analysis of the optimization problem. Using theory and simulation, we show that our algorithm significantly outperforms more simplistic approaches, well approximates the optimal solution, and is of sufficient scope and complexity for practical implementation, even in relatively large LTE networks. Algorithms of this kind are necessary for mobile wireless networks to make the most of constrained spectrum resources in shared bands.
【Keywords】: Long Term Evolution; cellular radio; optimisation; radio spectrum management; telecommunication scheduling; 3GPP Long-Term Evolution cellular network uplinks; LTE networks; aggregate interference constraint; aggregate interference power; constrained spectrum resources; fixed incumbent earth station locations; frequency 1695 MHz to 1710 MHz; incumbent meteorological satellite services; mobile wireless networks; optimization framework; power control; protection requirements; shared bands; spectrum sharing; time-frequency resource scheduling; unencumbered bands; Aggregates; Approximation algorithms; Gain; Interference; Optimization; Receivers; Resource management
【Paper Link】 【Pages】:1266-1274
【Authors】: Jiasi Chen ; Mung Chiang ; Jeffrey Erman ; Guangzhi Li ; K. K. Ramakrishnan ; Rakesh K. Sinha
【Abstract】: With recent standardization and deployment of LTE eMBMS, cellular multicast is gaining traction as a method of efficiently using wireless spectrum to deliver large amounts of multimedia data to multiple cell sites. Cellular operators still seek methods of performing optimal resource allocation in eMBMS based on a complete understanding of the complex interactions among a number of mechanisms: the multicast coding scheme, the resources allocated to unicast users and their scheduling at the base stations, the resources allocated to a multicast group to satisfy the user experience of its members, and the number of groups and their membership, all of which we consider in this work. We determine the optimal allocation of wireless resources for users to maximize proportional fair utility. To handle the heterogeneity of user channel conditions, we efficiently and optimally partition multicast users into groups so that users with good signal strength do not suffer by being grouped together with users of poor signal strength. Numerical simulations are performed to compare our scheme to practical heuristics and state-of-the-art schemes. We demonstrate the tradeoff between improving unicast user rates and improving spectrum efficiency through multicast. Finally, we analyze the interaction between the globally fair solution and individual user's desire to maximize its rate. We show that even if the user deviates from the global solution in a number of scenarios, we can bound the number of selfish users that will choose to deviate.
【Keywords】: Long Term Evolution; cellular radio; multicast communication; multimedia communication; radio spectrum management; resource allocation; LTE eMBMS; base stations scheduling; cellular multicast; cellular operators; group partitioning; multicast coding scheme; multicast users; multimedia data; multiple cell sites; optimal resource allocation; proportional fair utility; selfish users; spectrum efficiency; unicast user rates; unicast users; user channel conditions; wireless resources; wireless spectrum; Computer architecture; Conferences; Encoding; Optimization; Resource management; Streaming media; Unicast
【Paper Link】 【Pages】:1275-1283
【Authors】: Hamzeh Beyranvand ; Martin Lévesque ; Martin Maier ; Jawad A. Salehi
【Abstract】: To cope with the unprecedented growth of mobile data traffic, we investigate the performance gains obtained from unifying coverage-centric 4G mobile networks and capacity-centric fiber-wireless (FiWi) broadband access networks based on data-centric Ethernet technologies, with the resulting fiber backhaul sharing and WiFi offloading capabilities. Despite recent progress on backhaul-aware 4G studies with capacity-limited backhaul links, the performance-limiting impact of backhaul latency and reliability has not previously been examined in sufficient detail. In this paper, we evaluate the maximum aggregate throughput, offloading efficiency, and in particular the delay performance of FiWi-enhanced LTE-A heterogeneous networks (HetNets), including the beneficial impact of various localized fiber-lean backhaul redundancy and wireless protection techniques. We do so by means of probabilistic analysis and verifying simulation, paying close attention to fiber backhaul reliability issues and to WiFi offloading limitations due to WiFi mesh node failures as well as temporal and spatial WiFi coverage constraints.
【Keywords】: 4G mobile communication; Long Term Evolution; broadband networks; local area networks; optical fibre communication; probability; statistical analysis; telecommunication network reliability; telecommunication security; telecommunication traffic; FiWi broadband access networks; LTE-A HetNets; Long Term Evolution; WiFi coverage constraints; WiFi mesh node failures; WiFi offloading capabilities; backhaul latency; backhaul-aware 4G studies; capacity-centric fiber-wireless broadband access networks; capacity-limited backhaul links; coverage-centric 4G mobile networks; data-centric Ethernet technologies; fiber backhaul reliability; fiber backhaul sharing; localized fiber-lean backhaul redundancy; mobile data traffic; probabilistic analysis; wireless protection techniques; Delays; IEEE 802.11 Standard; Mobile communication; Optical network units; Passive optical networks; Wireless communication
【Paper Link】 【Pages】:1284-1292
【Authors】: Rajarajan Sivaraj ; Ioannis Broustis ; N. K. Shankaranarayanan ; Vaneet Aggarwal ; Prasant Mohapatra
【Abstract】: LTE network service reliability is highly dependent on the wireless coverage that is provided by cell towers (eNB). Therefore, the network operator's response to outage scenarios needs to be fast and efficient, in order to minimize any degradation in the Quality of Service (QoS). In this paper, we propose an outage mitigation framework for LTE-Advanced (LTE-A) wireless networks. Our framework exploits the inherent design features of LTE-A; it performs a dual optimization of the transmission power and beamforming weight parameters at each neighbor cell sector of the outage eNBs, while taking into account both the channel characteristics and residual eNB resources, after serving its current traffic load. Assuming statistical Channel State Information about the users at the eNBs, we show that this problem is theoretically NP-hard; thus we relax it as a convex optimization problem and solve for the optimal points using an iterative algorithm. Contrary to previously-proposed power control studies, our framework is specifically designed to alleviate the effects of sudden LTE-A eNB outages, where a large number of mobile users need to be efficiently offloaded to nearby towers. We present the detailed analytical design of our framework, and we assess its efficacy via extensive NS-3 simulations on an LTE-A topology. Our simulations demonstrate that our framework provides adequate coverage and QoS across all examined outage scenarios.
【Keywords】: Long Term Evolution; array signal processing; computational complexity; optimisation; quality of service; telecommunication network reliability; LTE network service reliability; LTE-A eNB outage; Long Term Evolution-Advanced deployment; NP-hard problem; NS-3 simulation; QoS; beamforming; cell tower; channel characteristic; convex optimization problem; iterative algorithm; macrocell outage mitigation; mobile user; quality of service; residual eNB resource; wireless coverage; wireless network; Approximation methods; Array signal processing; Interference; Optimization; Quality of service; Signal to noise ratio; Transmitting antennas
【Paper Link】 【Pages】:1293-1301
【Authors】: Chi-Yu Li ; Chunyi Peng ; Songwu Lu ; Xinbing Wang ; Ranveer Chandra
【Abstract】: Latency-sensitive applications (e.g., wireless gaming and TV remote play) are increasingly popular in home WiFi networks. Such millisecond-level latency requirements call for new fine-grained approaches at the link layer. In this paper, we show that current solutions work well for throughput but not for latency due to the long tail of the packet delay distribution. We thus propose LLRA, a new latency-aware rate adaptation scheme that reduces the tail latency for delay-sensitive applications. LLRA takes concerted design in rate control, frame aggregation scheduling and software/hardware retransmission dispatching. Our implementation and evaluation confirm the viability of LLRA in 802.11n home networks.
【Keywords】: home networks; wireless LAN; 802.11n home networks; LLRA; TV remote play; delay-sensitive applications; frame aggregation scheduling; hardware retransmission dispatching; home WiFi networks; latency-aware rate adaptation scheme; latency-sensitive applications; link layer; packet delay distribution; rate control; software retransmission dispatching; tail latency reduction; wireless gaming; Delays; Hardware; IEEE 802.11n Standard; MIMO; Software; Wireless communication
【Paper Link】 【Pages】:1302-1310
【Authors】: Bin Li ; Atilla Eryilmaz ; R. Srikant
【Abstract】: It is well-known that maximum weight scheduling, with link weights which are either functions of queue lengths or the ages of the Head-of-Line (HoL) packets in each queue, maximizes the throughput region of wireless networks with persistent flows. In particular, with only persistent flows, it does not matter for throughput optimality whether one uses queue lengths or HoL ages as weights. In this paper, we show the following interesting result: when some flows in the network are dynamic (i.e., they arrive and depart from the network and are not persistent), then HoL-age-based scheduling algorithms are throughput-optimal, while it has previously been shown that queue-length-based algorithms are not. This reveals that age-based algorithms are universal in the sense that their throughput optimality does not depend on whether the arriving traffic is persistent or not. We also present a distributed implementation of the proposed age-based algorithm using CSMA techniques, where each flow only knows its own age and carrier sensing information. Finally, we support our analytical results through simulations. The proof of throughput optimality may be interesting in its own right: it uses a novel Lyapunov function which is the sum of the ages of all the packets in the network.
【Keywords】: queueing theory; radio networks; scheduling; CSMA techniques; Head-of-Line packets; HoL packets; Lyapunov function; age based scheduling; carrier sensing information; maximum weight scheduling; network packets; queue lengths; scheduling algorithms; wireless networks; Dynamic scheduling; Heuristic algorithms; Lyapunov methods; Multiaccess communication; Scheduling algorithms; Throughput; Wireless networks
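A minimal sketch of the HoL-age weight described above, assuming a single server choosing among per-flow queues of packet arrival timestamps; the paper's setting is a full interference-constrained wireless network with a distributed CSMA implementation, neither of which is reproduced here.

```python
from collections import deque

def hol_age_schedule(queues, now):
    """Pick the queue whose Head-of-Line packet is oldest.  Each queue holds
    packet arrival times; HoL age = now minus the arrival time of the front
    packet.  The paper's point is that this age weight stays
    throughput-optimal even when flows arrive and depart."""
    best, best_age = None, -1.0
    for name, q in queues.items():
        if q:                             # skip empty queues
            age = now - q[0]              # age of the front (oldest) packet
            if age > best_age:
                best, best_age = name, age
    return best

queues = {"A": deque([1.0, 4.0]), "B": deque([2.5]), "C": deque()}
served = hol_age_schedule(queues, now=5.0)   # A's HoL age 4.0 beats B's 2.5
```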
【Paper Link】 【Pages】:1311-1319
【Authors】: Lixin Wang ; Peng-Jun Wan ; Kyle Young
【Abstract】: Beaconing is a primitive communication task in which every node locally broadcasts a packet to all of its neighbors within a fixed distance. The problem Minimum Latency Beaconing Schedule (MLBS) seeks a shortest schedule for beaconing subject to the interference constraint. MLBS has been well studied when all the nodes are always awake. However, it is well-known that networking nodes often switch between the active state and the sleep state to save energy. A node in duty-cycled scenarios may need to transmit multiple times to inform all of its neighbors due to their different active times. Thus, none of the known algorithms for MLBS is suitable for duty-cycled multihop wireless networks. In this paper, we study MLBS in Duty-Cycled multihop wireless networks (MLBSDC). We first present two constant-approximation algorithms for MLBSDC under the protocol interference model, with approximation bounds independent of the length of a scheduling period. Then, we develop an efficient algorithm for MLBSDC under the physical interference model. To the best of our knowledge, this is the first paper that develops efficient algorithms for MLBSDC under either of these two interference models.
【Keywords】: approximation theory; energy conservation; protocols; radio networks; radiofrequency interference; telecommunication power management; telecommunication scheduling; MLBS; MLBSDC; constant-approximation algorithm; duty-cycled multihop wireless network; energy saving; minimum-latency beaconing schedule; physical interference model; primitive communication task; protocol interference model; Approximation algorithms; Approximation methods; Interference; Protocols; Schedules; Spread spectrum communication; Wireless networks; Beaconing schedule; approximation algorithms; duty cycle; physical interference model; protocol interference model
【Paper Link】 【Pages】:1320-1327
【Authors】: Marcin Bienkowski ; Jaroslaw Byrka ; Krzysztof Chrobak ; Tomasz Jurdzinski ; Dariusz R. Kowalski
【Abstract】: We consider the task of assigning time slots on a user-dependent and time-varying wireless channel. This scheduling problem occurs in cellular networks due to the presence of channel fading and user mobility. We introduce a simple notion of global fairness, where each of n users is guaranteed a 1/(n + ε) fraction of its total possible throughput, for some approximation parameter ε ≥ 0, and study its limitations from theoretical and experimental perspectives. We formally prove that a slight modification of the standard proportional fair algorithm satisfies the global fairness constraint. To the best of our knowledge, this is the first formal analysis that provides a global fairness guarantee for the channel in any execution and under any channel conditions. As confirmed by our simulations, our global fairness constraint is in fact satisfied by a wide class of algorithms. Our framework allows optimization of an arbitrary metric subject to the global fairness constraint. In particular, we have analyzed a variant of the provably fair algorithm that optimizes the total throughput. It turns out that the channel utilization of this algorithm is significantly better than that of the classical Proportional Fair algorithm.
【Keywords】: approximation theory; cellular radio; fading channels; mobility management (mobile radio); telecommunication scheduling; time division multiple access; time-varying channels; TDMA scheduling; approximation parameter; arbitrary metric; cellular networks; channel fading; channel utilization; global fairness constraint; proportional fair algorithm; provable fairness; time slots; time-varying wireless channel; user mobility; user-dependent wireless channel; Algorithm design and analysis; Channel capacity; Computers; Conferences; Stability analysis; Throughput; Wireless communication; Proportional Fair algorithms; Wireless channel; fairness; throughput
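For reference, the classical Proportional Fair rule that the paper modifies can be sketched as follows; the modification that enforces the 1/(n + ε) fairness floor is not specified in the abstract and is not reproduced here. The exponential-averaging parameter alpha is an assumption of this sketch.

```python
def proportional_fair_step(inst_rates, avg_tput, alpha=0.1):
    """One slot of the classical Proportional Fair scheduler: serve the user
    maximizing instantaneous rate divided by its exponentially averaged
    throughput, then update the averages in place.  Returns the served
    user's index."""
    eps = 1e-9                       # avoid division by zero for new users
    user = max(range(len(inst_rates)),
               key=lambda i: inst_rates[i] / (avg_tput[i] + eps))
    for i in range(len(avg_tput)):
        served = inst_rates[i] if i == user else 0.0
        avg_tput[i] = (1 - alpha) * avg_tput[i] + alpha * served
    return user
```

A user with a modest instantaneous rate but a low historical average can win the slot, which is exactly the fairness/throughput balance the abstract's framework tunes.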
【Paper Link】 【Pages】:1328-1336
【Authors】: Bo Wang ; Jinlei Jiang ; Guangwen Yang
【Abstract】: As a widely used programming model and implementation for processing large data sets, MapReduce performs poorly on heterogeneous clusters, which, unfortunately, are common in current computing environments. To deal with the problem, this paper: 1) analyzes the causes of performance degradation and identifies the key one as the large volume of inter-node data transfer resulting from even data distribution among nodes of different computing capabilities, and 2) proposes ActCap, a solution that uses a Markov chain based model to do node-capability-aware data placement for the continuously incoming data. ActCap has been incorporated into Hadoop and evaluated on a 24-node heterogeneous cluster with 13 benchmarks. The experimental results show that ActCap can reduce the percentage of inter-node data transfer from 32.9% to 7.7%, gain an average speedup of 49.8% when compared with Hadoop, and achieve an average speedup of 9.8% when compared with Tarazu, the latest related work.
【Keywords】: Markov processes; data handling; parallel programming; ActCap; MapReduce acceleration; Markov chain; Tarazu; data distribution; heterogeneous clusters; inter-node data transfer; node-capability-aware data placement; Benchmark testing; Computational modeling; Computers; Conferences; Data transfer; Hardware; Markov processes; Big Data; Data Placement; Heterogeneous Clusters; Load Balancing; MapReduce
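A hedged illustration of capability-aware placement: ActCap's Markov-chain model is replaced here by a simple greedy rule that keeps each node's share of incoming blocks proportional to its (assumed known) computing capability, which is the effect the abstract attributes to the real system.

```python
def place_blocks(capabilities, n_blocks):
    """Greedy capability-proportional placement: each incoming block goes
    to the node currently furthest below its weighted fair share.
    Returns the per-node block counts."""
    total = sum(capabilities)
    placed = [0] * len(capabilities)
    for _ in range(n_blocks):
        # deficit: target share after this block minus blocks already placed
        node = max(range(len(capabilities)),
                   key=lambda i: capabilities[i] / total * (sum(placed) + 1)
                                 - placed[i])
        placed[node] += 1
    return placed
```

A node three times as capable ends up holding three times the data, so reduce tasks find most of their input locally instead of pulling it across the network.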
【Paper Link】 【Pages】:1337-1345
【Authors】: Yucheng Zhang ; Hong Jiang ; Dan Feng ; Wen Xia ; Min Fu ; Fangting Huang ; Yukun Zhou
【Abstract】: Data deduplication, a space-efficient and bandwidth-saving technology, plays an important role in bandwidth-efficient data transmission in various data-intensive network and cloud applications. Rabin-based and MAXP-based Content-Defined Chunking (CDC) algorithms, while robust in finding suitable cut-points for chunk-level redundancy elimination, face the key challenges of (1) low chunking throughput, which renders the chunking stage the deduplication performance bottleneck, and (2) large chunk-size variance, which decreases deduplication efficiency. To address these challenges, this paper proposes a new CDC algorithm called the Asymmetric Extremum (AE) algorithm. The main idea behind AE is the observation that, in dealing with the boundary-shift problem, the extreme value in an asymmetric local range is not likely to be replaced by a new extreme value; this motivates AE's use of an asymmetric (rather than symmetric, as in MAXP) local range to identify cut-points, simultaneously achieving high chunking throughput and low chunk-size variance. As a result, AE simultaneously addresses the problems of low chunking throughput in MAXP and Rabin and of high chunk-size variance in Rabin. The experimental results based on four real-world datasets show that AE improves the throughput performance of the state-of-the-art CDC algorithms by 3x while attaining comparable or higher deduplication efficiency.
【Keywords】: computer networks; data handling; AE algorithms; CDC algorithm; asymmetric extremum algorithm; asymmetric extremum content defined chunking algorithm; bandwidth efficient data transmission; bandwidth saving technology; bandwidth-efficient data deduplication; cloud applications; content defined chunking algorithms; fast data deduplication; Algorithm design and analysis; Arrays; Computers; Conferences; Power capacitors; Redundancy; Throughput
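The AE cut-point rule described above can be sketched directly: scan the stream tracking the running maximum, and once a fixed asymmetric window passes without a new maximum, declare a cut. Using raw byte values in place of the fingerprints a real deduplicator would compute is a simplification of this sketch, and the window size is an assumed parameter.

```python
def ae_chunk(data, window):
    """Asymmetric Extremum chunking sketch: a cut-point is declared when
    `window` consecutive bytes pass without exceeding the running maximum,
    so the extreme sits in an asymmetric local range (unbounded on the
    left, fixed on the right).  Returns the list of chunks."""
    chunks, start = [], 0
    max_val, max_pos = -1, 0
    i = start
    while i < len(data):
        if data[i] > max_val:                  # new extreme value found
            max_val, max_pos = data[i], i
        elif i - max_pos == window:            # extreme survived the window
            chunks.append(data[start:i + 1])   # cut after position i
            start = i + 1
            max_val, max_pos = -1, start       # restart for the next chunk
        i += 1
    if start < len(data):
        chunks.append(data[start:])            # trailing partial chunk
    return chunks
```

One linear pass with a single comparison per byte is what gives AE its throughput edge over Rabin fingerprinting, which must evaluate a rolling hash at every position.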
【Paper Link】 【Pages】:1346-1354
【Authors】: Rami Cohen ; Liane Lewin-Eytan ; Joseph Naor ; Danny Raz
【Abstract】: Network Function Virtualization (NFV) is a new networking paradigm where network functions are executed on commodity servers located in small cloud nodes distributed across the network, and where software defined mechanisms are used to control the network flows. This paradigm is a major turning point in the evolution of networking, as it introduces high expectations for enhanced economical network services, as well as major technical challenges. In this paper, we address one of the main technical challenges in this domain: the actual placement of the virtual functions within the physical network. This placement has a critical impact on the performance of the network, as well as on its reliability and operation cost. We perform a thorough study of the NFV location problem, show that it introduces a new type of optimization problems, and provide near optimal approximation algorithms guaranteeing a placement with theoretically proven performance. The performance of the solution is evaluated with respect to two measures: the distance cost between the clients and the virtual functions by which they are served, as well as the setup costs of these functions. We provide bi-criteria solutions reaching constant approximation factors with respect to the overall performance, and adhering to the capacity constraints of the networking infrastructure by a constant factor as well. Finally, using extensive simulations, we show that the proposed algorithms perform well in many realistic scenarios.
【Keywords】: computational complexity; computer networks; graph theory; virtualisation; NFV; NFV location problem; approximation factor; cloud nodes; commodity servers; distance cost; economical network services; near optimal placement; network function virtualization; networking evolution; networking paradigm; setup cost; software defined mechanism; virtual network functions; Approximation algorithms; Approximation methods; Bismuth; Computers; Conferences; Optimization; Servers
【Paper Link】 【Pages】:1355-1363
【Authors】: Zhan Qiu ; Juan F. Pérez
【Abstract】: Computing clusters have been widely deployed for scientific and engineering applications to support intensive computation and massive data operations. As applications and resources in a cluster are subject to failures, fault-tolerance strategies are commonly adopted, sometimes at the expense of additional delays in job response times or unnecessarily increased resource usage. In this paper, we explore concurrent replication with canceling, a fault-tolerance approach where jobs and their replicas are processed concurrently, and the successful completion of either one triggers the removal of the other. We propose a stochastic model to study how this approach affects the cluster service level objectives (SLOs), particularly the offered response time percentiles. In addition to the expected gains in reliability, the proposed model allows us to determine the regions of utilization where introducing replication with canceling effectively reduces the response times. Moreover, we show how this model can support resource provisioning decisions with reliability and response time guarantees.
【Keywords】: concurrency (computers); fault tolerant computing; software reliability; system recovery; SLO; computing clusters; concurrent replication; failures; fault-tolerance strategies; intensive computation; job response times; massive data operations; reliability; service level objectives; Computational modeling; Computers; Conferences; Reliability; Servers; Switches; Time factors
【Paper Link】 【Pages】:1364-1372
【Authors】: James Willson ; Zhao Zhang ; Weili Wu ; Ding-Zhu Du
【Abstract】: Energy efficiency is an important issue in the study of wireless sensor networks. Given a homogeneous set of sensors with unit lifetime and a set of target points, find an active/sleeping schedule for sensors to maximize the lifetime of k-coverage, i.e., the time period during which every target point is covered by at least k active sensors. This is a well known problem in wireless sensor networks concerning with energy efficiency. When k = 1, it is called the maximum lifetime coverage problem which has been proved to have a polynomial-time (4 + ε)-approximation. When k ≥ 2, it is the maximum lifetime fault-tolerant coverage problem. Previous to this work, only in the case k = 2, a polynomial-time (6 + ε)-approximation is found. In this paper, we will make a significant progress by showing that for any positive integer k, there exists a polynomial-time (4 + ε)-approximation, and for k = 1,2, the performance ratio can be improved to (3 + ε).
【Keywords】: polynomial approximation; scheduling; wireless sensor networks; active-sleeping schedule; fault tolerant coverage; least k active sensors; polynomial time approximation; target points; wireless sensor networks; Approximation algorithms; Approximation methods; Computers; Conferences; Sensors; Strips; Wireless sensor networks
【Paper Link】 【Pages】:1373-1381
【Authors】: Andrés J. Gonzalez ; Bjarne E. Helvik ; Prakriti Tiwari ; Denis M. Becker ; Otto Wittner
【Abstract】: The dependability of ICT systems is vital for today's society. However, operational systems are not fault free. Providers and customers have to define clear availability requirements and penalties on the delivered services by using SLAs. Fulfilling the stipulated availability may be expensive. The lack of mechanisms that allow fine control of the SLA risk may lead to over-dimensioning of the provided resources. Therefore, a relevant question for ICT service providers is: how can the SLA availability be guaranteed in a cost-efficient way? This paper studies how to combine different fault tolerant techniques, with different costs and properties, in order to economically fulfill a given SLA requirement. GEARSHIFT is a mechanism that enables ICT providers to set the fault tolerance technique (gear ratio) needed, depending on the current service conditions and requirements. We illustrate how to use the proposed model in a backbone network scenario, using measurements from a production national network. Finally, we show that the total cost of delivering an ICT service follows a simple convex function, which allows an easy selection of the optimal risk by properly tuning the combination of fault tolerant techniques.
【Keywords】: contracts; convex programming; costing; fault tolerant computing; risk management; GEARSHIFT; ICT service delivery cost; ICT service provider; ICT system dependability; SLA availability; SLA risk; availability requirement guarantee; backbone network scenario; convex function; cost efficiency; fault tolerant technique; hybrid fault tolerance; operational systems; optimal risk selection; production national network; service condition; service requirement; Approximation methods; Computers; Conferences; Convolution; Fault tolerance; Fault tolerant systems; Switches; SLA; accumulated downtime; fault tolerance; network recovery; renewal theory; risk optimization
【Paper Link】 【Pages】:1382-1390
【Authors】: Rein Houthooft ; Sahel Sahhaf ; Wouter Tavernier ; Filip De Turck ; Didier Colle ; Mario Pickavet
【Abstract】: Although geometric routing is proposed as a memory-efficient alternative to traditional lookup-based routing and forwarding algorithms, it still lacks: (i) adequate mechanisms to trade stretch against load balancing, and (ii) robustness to cope with network topology changes. The main contribution of this paper is the proposal of a family of routing schemes, called Forest Routing, based on the principles of geometric routing with added flexibility in load balancing characteristics. This is achieved by using an aggregation of greedy embeddings along with a configurable distance function. Incorporating link load information in the forwarding layer enables load balancing behavior while still attaining low path stretch. In addition, the proposed schemes are validated with respect to their resilience against network failures.
【Keywords】: radio links; resource allocation; telecommunication network reliability; telecommunication network routing; greedy embedding aggregation; link load information; network failure; robust geometric forest routing; tunable load balancing; Computers; Conferences; Extraterrestrial measurements; Load management; Robustness; Routing
【Paper Link】 【Pages】:1391-1399
【Authors】: Mohammed Shatnawi ; Mohamed Hefeeda
【Abstract】: Current data mining techniques used to create failure predictors for online services require massive amounts of data to build, train, and test the predictors. These operations are tedious, time-consuming, and not done in real time. Also, the accuracy of the resulting predictor is highly compromised by changes that affect the environment and working conditions of the predictor. We propose a new approach to creating a dynamic failure predictor for online services in real time and keeping its accuracy high through the service's run-time changes. We use synthetic transactions during the run-time lifecycle to generate current data about the service. This data is used in its ephemeral state to build, train, test, and maintain an up-to-date failure predictor. We implemented the proposed approach in a large-scale online ad service that processes billions of requests each month in six data centers distributed across three continents. We show that the proposed predictor is able to maintain failure prediction accuracy as high as 86% during online service changes, whereas the accuracy of state-of-the-art predictors may drop to less than 10%.
【Keywords】: Web services; computer centres; contracts; data mining; failure analysis; real-time systems; system recovery; data mining technique; distributed data centers; dynamic failure predictor; large-scale online ad service; online service changes; real-time failure prediction; synthetic transactions; up-to-date failure predictor; working conditions; Accuracy; Data mining; Monitoring; Production; Real-time systems; Testing; Time factors
【Paper Link】 【Pages】:1400-1408
【Abstract】: Network functions are widely deployed in modern networks, providing various network services ranging from intrusion detection to HTTP caching. Various virtual network function instances can be consolidated into one physical middlebox. Depending on the type of services, packet processing for different flows consumes different hardware resources in the middlebox. Previous solutions of multi-resource packet scheduling suffer from high computational complexity and memory cost for packet buffering and scheduling, especially when the number of flows is large. In this paper, we design a novel low-complexity and space-efficient packet scheduling algorithm called Myopia, which supports multi-resource environments such as network function virtualization. Myopia is developed based upon the fact that most Internet traffic is contributed by a small fraction of elephant flows. Myopia schedules elephant flows with precise control and treats mice flows using FIFO, to achieve simplicity of packet buffering and scheduling. We will demonstrate, via theoretical analysis, prototype implementation, and simulations, that Myopia achieves multi-resource fairness at low cost with short packet delay.
【Keywords】: Internet; security of data; telecommunication scheduling; telecommunication traffic; transport protocols; virtualisation; FIFO; HTTP caching; Internet traffic; Myopia schedule elephant flow; intrusion detection; multiresource environments; multiresource packet scheduling; network function virtualization; packet buffering; space-efficient packet scheduling algorithm; virtual network function; Arrays; Mice; Middleboxes; Resource management; Schedules; Scheduling algorithms
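A toy sketch of the elephant/mice split described above, with plain round-robin standing in for Myopia's precise multi-resource fair scheduling; the threshold-based elephant detection and per-resource accounting of the real design are omitted.

```python
from collections import deque

class MyopiaLikeScheduler:
    """Heavy (elephant) flows get their own queues and round-robin service;
    all light (mice) flows share a single FIFO, keeping per-flow state
    proportional to the small number of elephants."""
    def __init__(self, elephants):
        self.elephant_q = {f: deque() for f in elephants}
        self.mice_q = deque()
        self.rr = deque(elephants)          # round-robin order for elephants

    def enqueue(self, flow, pkt):
        (self.elephant_q[flow] if flow in self.elephant_q
         else self.mice_q).append(pkt)

    def dequeue(self):
        # serve the next backlogged elephant in round-robin order,
        # falling back to the shared mice FIFO
        for _ in range(len(self.rr)):
            f = self.rr[0]
            self.rr.rotate(-1)
            if self.elephant_q[f]:
                return self.elephant_q[f].popleft()
        return self.mice_q.popleft() if self.mice_q else None
```

Because most Internet traffic comes from a few elephant flows, precise control over just those queues captures most of the fairness benefit at a fraction of the per-flow state.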
【Paper Link】 【Pages】:1409-1417
【Authors】: Yin-Chi Chan ; Jun Guo ; Eric W. M. Wong ; Moshe Zukerman
【Abstract】: Overflow loss systems have wide applications in telecommunications and multimedia systems. In this paper, we consider an overflow loss system consisting of a set of finite-buffer processor-sharing (PS) queues, and develop effective methods for evaluation of its blocking probability. For such a problem, an existing approximation of the blocking probability is based on decomposition of the system into independent PS queues. We provide a new approximation which instead performs decomposition on a surrogate model of the original system, and demonstrate via extensive numerical results that our new approximation is more accurate and robust than the existing approach. We also examine the sensitivity of the blocking probability to the service time distribution, and demonstrate that an exponential distribution is a good approximation for a wide range of service time distributions.
【Keywords】: probability; queueing theory; blocking probability; exponential distribution; finite-buffer processor-sharing queues; overflow loss systems; service time distribution; Approximation methods; Computers; Conferences; Information exchange; Mathematical model; Numerical models; Peer-to-peer computing
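As a reference point for the decomposition approach mentioned above: under the independence approximation, each finite-buffer queue is evaluated in isolation, and with exponential service the blocking probability of a single queue has the classical M/M/1/K closed form (for K = buffer + server, PS and FIFO share the same queue-length distribution). The paper's surrogate-model decomposition itself is more involved and is not reproduced here.

```python
def mm1k_blocking(lam, mu, k):
    """Closed-form blocking probability of an M/M/1/K queue: the stationary
    probability that all K positions are occupied, i.e. the fraction of
    Poisson arrivals that are lost."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (k + 1)               # limit as rho -> 1
    return (1 - rho) * rho**k / (1 - rho**(k + 1))
```

For example, at half load (ρ = 0.5) with K = 1, a third of arrivals are blocked, while at ρ = 1 blocking is simply 1/(K + 1).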
【Paper Link】 【Pages】:1418-1426
【Authors】: Pavel Chuprikov ; Sergey I. Nikolenko ; Kirill Kogan
【Abstract】: Modern network elements are increasingly required to deal with heterogeneous traffic. Recent works consider processing policies for buffers that hold packets with different processing requirements (the number of processing cycles needed before a packet can be transmitted out) but uniform value, aiming to maximize the throughput, i.e., the number of transmitted packets. Other developments deal with packets of varying value but uniform processing requirement (each packet requires one processing cycle); the objective here is to maximize the total transmitted value. In this work, we consider a more general problem, combining packets with both nonuniform processing requirements and nonuniform values in the same queue. We study the properties of various processing orders in this setting. We show that in the general case natural processing policies have poor performance guarantees, with linear lower bounds on their competitive ratio. Moreover, we show an adversarial lower bound that holds for every online policy. On the positive side, in the special case when only two different values are allowed, 1 and V, we present a policy that achieves competitive ratio (1 + (W + 2)/V), where W is the maximal number of required processing cycles. We also consider copying costs during admission.
【Keywords】: diversity reception; packet radio networks; queueing theory; telecommunication traffic; combining packet transmission; heterogeneous traffic; multiple packet characteristics; priority queueing; Admission control; Computers; Conferences; Optimized production technology; Process control; Throughput; Upper bound
【Paper Link】 【Pages】:1427-1435
【Authors】: Sucha Supittayapornpong ; Michael J. Neely
【Abstract】: One practical open problem is the development of a distributed algorithm that achieves near-optimal utility using only a finite (and small) buffer size for queues in a stochastic network. This paper studies utility maximization (or cost minimization) in a finite-buffer regime and considers the corresponding delay and reliability (or rate of packet drops) tradeoff. A floating-queue algorithm allows the stochastic network optimization framework to be implemented with finite buffers at the cost of packet drops. Further, the buffer size requirement is significantly smaller than in previous works in this area. With a finite buffer size of B packets, the proposed algorithm achieves within O(e^-B) of the optimal utility while maintaining an average per-hop delay of O(B) and an average per-hop drop rate of O(e^-B) in steady state. From an implementation perspective, the floating-queue algorithm requires little modification of the well-known Drift-Plus-Penalty policy (including the MaxWeight and Backpressure policies). As a result, the floating-queue algorithm inherits the distributed and low-complexity nature of these policies.
【Keywords】: delays; optimisation; queueing theory; telecommunication network reliability; Backpressure policy; MaxWeight policy; distributed algorithm; drift-plus-penalty policy modification; finite buffer size; floating queue algorithm; near optimal utility; stochastic network optimization; stochastic network queueing; utility-delay-reliability tradeoff; Computers; Conferences; Delays; Heuristic algorithms; Optimization; Standards; Steady-state
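A simplified stand-in for the floating-queue idea: the scheduler keeps an unbounded virtual backlog as its Drift-Plus-Penalty weight while the physical buffer admits at most B packets and drops the rest. The exact bookkeeping of the paper's algorithm differs; this sketch only shows the separation between the scheduling weight and the physical occupancy.

```python
class FloatingQueue:
    """Physical buffer capped at b packets; an unbounded 'virtual' backlog
    tracks what Drift-Plus-Penalty would use as its scheduling weight.
    Arrivals that do not fit physically are counted as drops (the paper
    shows the drop rate decays like O(e^-B))."""
    def __init__(self, b):
        self.b = b
        self.physical = 0      # packets actually buffered (<= b)
        self.virtual = 0       # unbounded backlog used as scheduling weight
        self.drops = 0

    def arrive(self, n=1):
        self.virtual += n
        admitted = min(n, self.b - self.physical)
        self.physical += admitted
        self.drops += n - admitted

    def serve(self, n=1):
        served = min(n, self.physical)
        self.physical -= served
        self.virtual = max(0, self.virtual - n)   # weight drains regardless
        return served
```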
【Paper Link】 【Pages】:1436-1444
【Authors】: Zhice Yang ; Jiansong Zhang ; Kun Tan ; Qian Zhang ; Yongguang Zhang
【Abstract】: Today's WLANs are struggling to provide desirable features like high efficiency, fairness and QoS because of the use of Distributed Coordination Function (DCF). In this paper we present OpenTDMF, an architecture to enable TDMA on commodity WLAN devices. Our hope is to provide the desirable features without entirely rebuilding the WLAN infrastructure. OpenTDMF is inspired by and architecturally similar to Software Defined Networking (SDN). Specifically, we leverage the backhaul of WLAN to coordinate all the stations for channel access. This fine-grained coordination is performed in a decoupled control plane which includes a central controller and programmable APs. To realize OpenTDMF on commodity WLAN devices, we develop several novel techniques to achieve μs-level time synchronization among all the APs. We also enable AP-triggered uplink transmission so that all the transmissions in the WLAN can be determined. We implemented a prototype of OpenTDMF based on commodity WLAN devices. Empirical results validate the OpenTDMF design and demonstrate its benefits.
【Keywords】: quality of service; software defined networking; synchronisation; telecommunication control; time division multiple access; wireless LAN; AP-triggered uplink transmission; DCF; OpenTDMF design; QoS; SDN; TDMA; WLAN devices; central controller; channel access; decoupled control plane; distributed coordination function; fine-grained coordination; programmable AP; quality of service; software defined networking; time division multiple access; time synchronization; wireless LAN; Delays; IEEE 802.11 Standard; Software; Synchronization; Time division multiple access; Uplink; Wireless LAN
【Paper Link】 【Pages】:1445-1453
【Authors】: Ehsan Monsef ; Alireza Keshavarz-Haddad ; Ehsan Aryafar ; Jafar Saniie ; Mung Chiang
【Abstract】: We study the convergence properties of distributed network selection in HetNets with priority-based service. Clients in such networks have different priority weights (e.g., QoS requirements, scheduling policies, etc.) for different access networks and act selfishly to maximize their own throughput. We formulate the problem as a non-cooperative game, and study its convergence for two models: (i) a purely client-centric model where each client uses its own preference to select a network, and (ii) a hybrid client-network model that uses a combination of client and network preferences to arrive at pairings. Our results reveal that: (a) pure client-centric network selection with generic weights can result in infinite oscillations for any improvement path (i.e., it exhibits strongly cyclic behavior); however, we show that under several classes of practical priority weights (e.g., weights that achieve different notions of fairness) or under additional client-side policies, convergence can be guaranteed; (b) we study convergence time under the client-centric model and provide tight polynomial and linear bounds; (c) we show that applying a minimal amount of network control in the hybrid model guarantees convergence for clients with generic weights. We also introduce a controllable knob that the network controller can employ to balance between convergence time and its network-wide objective with a predictable tradeoff.
【Keywords】: 5G mobile communication; game theory; radio access networks; 5G network; HetNets; access network; client-centric model; distributed network selection; general network selection game convergence property; heterogeneous network; hybrid client-network model; network controller; noncooperative game; priority-based service; Biological system modeling; Convergence; Games; IEEE 802.11 Standard; Quality of service; Switches; Throughput
【Paper Link】 【Pages】:1454-1462
【Authors】: Ashish Patro ; Suman Banerjee
【Abstract】: In dense wireless deployments, such as apartment buildings, neighboring home WLANs share the same unlicensed spectrum through consumer-grade access points deployed in individual homes. In such environments, WiFi networks can suffer from intermittent performance issues such as wireless packet losses and interference from WiFi and non-WiFi sources, owing to the rapid growth and increasing diversity of devices that share the spectrum. In this paper, we propose COAP, a vendor-neutral, cloud-based centralized framework to configure, coordinate, and manage individual home APs through an open API implemented by these commodity APs. The framework, implemented using OpenFlow extensions, allows the APs to share various types of information with a centralized controller - interference and traffic phenomena and various flow contexts - and in turn receive instructions - configuration parameters (e.g., channel) and transmission parameters (through coarse-grained schedules and throttling parameters). This paper describes the framework and its associated techniques, presents applications that motivate its potential benefits, such as up to 47% reduction in channel congestion, and reports our experiences from deploying it in actual home environments.
【Keywords】: application program interfaces; home networks; radio access networks; OpenFlow extensions; channel congestion; dense wireless deployments; home wireless access points; open API; vendor-neutral cloud-based centralized framework; Buildings; Channel allocation; IEEE 802.11 Standard; Interference; Packet loss; Streaming media; Wireless communication
【Paper Link】 【Pages】:1463-1471
【Authors】: Jun Huang ; Guoliang Xing ; Jianwei Niu ; Shan Lin
【Abstract】: Prior studies show that repairing partially corrupted packets, instead of retransmitting them in their entirety, holds potential for improving the performance of 802.11 networks. However, the efficiency of existing packet recovery approaches is severely limited by the overheads associated with redundant transmission and repeated channel contention. In this paper, we propose CodeRepair, a practical coding-based protocol that recovers partially corrupted 802.11 packets without these drawbacks. The design of CodeRepair is based on two novel ideas. First, CodeRepair pushes the limit of the 802.11 PHY to piggyback parities in the padded bits of OFDM, obviating the need to transmit extra information for error correction. Second, CodeRepair corrects errors at the PHY layer, which is significantly more efficient than traditional link-layer approaches, because a single coded bit usually affects the decoding of a group of data bits in the 802.11 convolutional code. As a result, CodeRepair can salvage a partially corrupted packet by correcting a small number of erroneous coded bits using the padded parities. To reduce the computational cost of error recovery, CodeRepair employs a single parity code for correcting coded bit errors. We propose several techniques that augment the error-correcting capability of the single parity code without compromising its computational efficiency. Our evaluation shows that CodeRepair recovers an average of 34% of partially corrupted packets, and improves end-to-end link goodput by 59% on lossy 802.11 links.
【Keywords】: convolutional codes; error correction; parity check codes; protocols; wireless LAN; 802.11 networks; CodeRepair; OFDM; PHY layer; coded bit errors; convolutional code; end-to-end link goodput; erroneous coded bits; error correcting capability; error recovery; lossy 802.11 links; packet recovery approaches; padded parities; partially corrupted packets; practical coding-based protocol; single parity code; Convolutional codes; Decoding; Forward error correction; IEEE 802.11 Standard; OFDM; Receivers; Reflective binary codes
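The single parity code at the heart of CodeRepair's error localization can be illustrated with a toy example. The sketch below is not the paper's implementation (the block size and function names are invented): it appends one even-parity bit per block of coded bits, letting the receiver localize which block contains an odd number of bit errors:

```python
def add_parity(bits, block=8):
    """Append one even-parity bit after each block of coded bits."""
    out = []
    for i in range(0, len(bits), block):
        chunk = bits[i:i + block]
        out.extend(chunk)
        out.append(sum(chunk) % 2)  # even parity: chunk + parity sums to 0 mod 2
    return out

def failed_blocks(coded, block=8):
    """Return indices of blocks whose parity check fails (odd number of errors)."""
    bad = []
    step = block + 1
    for idx in range(len(coded) // step):
        if sum(coded[idx * step:(idx + 1) * step]) % 2 != 0:
            bad.append(idx)
    return bad

data = [1, 0, 1, 1, 0, 0, 1, 0] * 2      # 16 coded bits -> 2 blocks
tx = add_parity(data)
rx = list(tx)
rx[3] ^= 1                               # channel flips one bit in block 0
assert failed_blocks(tx) == []
assert failed_blocks(rx) == [0]
```

CodeRepair additionally piggybacks these parities in OFDM padding bits and combines them with decoder information; this sketch shows only the parity arithmetic.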
【Paper Link】 【Pages】:1472-1480
【Authors】: Heba Abdelnasser ; Moustafa Youssef ; Khaled A. Harras
【Abstract】: We present WiGest: a system that leverages changes in WiFi signal strength to sense in-air hand gestures around the user's mobile device. Compared to related work, WiGest is unique in using standard WiFi equipment, with no modifications, and requiring no training for gesture recognition. The system identifies different signal-change primitives, from which we construct mutually independent gesture families. These families can be mapped to distinguishable application actions. We address various challenges, including denoising the signals, detecting gesture types and attributes, reducing false positives due to interfering humans, and adapting to changing signal polarity. We implement a proof-of-concept prototype using off-the-shelf laptops and extensively evaluate the system in both an office environment and a typical apartment with standard WiFi access points. Our results show that WiGest detects the basic primitives with an accuracy of 87.5% using a single AP only, including in through-the-wall non-line-of-sight scenarios. This accuracy increases to 96% using three overheard APs. In addition, when evaluating the system using a multimedia player application, we achieve a classification accuracy of 96%. This accuracy is robust to the presence of other interfering humans, highlighting WiGest's ability to enable future ubiquitous hands-free gesture-based interaction with mobile devices.
【Keywords】: gesture recognition; mobile computing; mobile handsets; wireless LAN; WiFi signal strength; WiGest; in-air hand gesture; mobile device; multimedia player; ubiquitous WiFi-based gesture recognition system; Accuracy; Discrete wavelet transforms; Gesture recognition; IEEE 802.11 Standard; Image edge detection; Mobile handsets; Wireless communication
【Paper Link】 【Pages】:1481-1489
【Authors】: Qiang Xu ; Yong Liao ; Stanislav Miskovic ; Zhuoqing Morley Mao ; Mario Baldi ; Antonio Nucci ; Thomas Andrews
【Abstract】: Many network management, traffic engineering, and security practices in today's networks rely on knowing which applications' traffic is passing through them. These practices can fail with mobile apps, whose identity remains hidden in generic HTTP traffic. The main reason is that, unlike traditional applications, most mobile apps do not use specific protocols or IP ports with distinctive features. Many enterprises and service providers are in great need of regaining control over their networks, which increasingly carry mobile traffic. In this paper we propose FLOWR, a system that automatically identifies mobile apps by continually learning the apps' distinguishing features via traffic analysis. FLOWR focuses solely on key-value pairs in HTTP headers and intelligently identifies the pairs suitable for app signatures. Our system employs a custom supervised learning approach that leverages very limited knowledge of app-signature seeds and autonomously grows its capacity for app identification. The approach is motivated by a simple but effective hypothesis: unknown app-identifying features should co-occur with known signatures. Our experimental results show a significant growth in flow-identification coverage provided by FLOWR. Specifically, we show that FLOWR can identify 86-95% of flows related to their generating apps.
【Keywords】: digital signatures; mobile communication; telecommunication network management; telecommunication traffic; transport protocols; FLOW recognition; FLOWR; custom supervised learning approach; generic HTTP traffic; mobile app signatures automatic generation; mobile application; mobile traffic; network management; traffic analysis; traffic engineering; traffic observation; Computers; Conferences; IP networks; Mobile communication; Mobile computing; Protocols; Web services
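FLOWR's co-occurrence hypothesis can be sketched in a few lines. This is a toy reconstruction, not the authors' code; `grow_signatures` and the support threshold are invented: key-value pairs that repeatedly appear in flows already matched by a seed signature become candidate signatures themselves:

```python
from collections import Counter

def grow_signatures(flows, seeds, min_support=2):
    """flows: iterable of sets of HTTP-header (key, value) pairs.
    Any flow containing a seed pair is attributed to that app; pairs
    co-occurring with seeds often enough are promoted to signatures."""
    counts = Counter()
    for kv_pairs in flows:
        if kv_pairs & seeds:                  # flow matched by a seed
            counts.update(kv_pairs - seeds)   # count co-occurring pairs
    return {kv for kv, c in counts.items() if c >= min_support}

seed = {("x-app-id", "maps")}
flows = [
    {("x-app-id", "maps"), ("ua", "maps/1.2"), ("host", "cdn.example")},
    {("x-app-id", "maps"), ("ua", "maps/1.2")},
    {("host", "cdn.example"), ("ua", "other/9")},   # no seed: ignored
]
new_sigs = grow_signatures(flows, seed)
# ("ua", "maps/1.2") co-occurs with the seed twice and is promoted
```

The real system additionally scores candidate pairs for uniqueness before treating them as app signatures.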
【Paper Link】 【Pages】:1490-1498
【Authors】: Shusen Yang ; Usman Adeel ; Julie A. McCann
【Abstract】: The use of sensor-enabled smartphones is considered a promising solution to large-scale urban data collection. In current approaches to mobile phone sensing systems (MPSS), phones directly transmit their sensor readings through cellular radios to the server. However, this simple solution not only incurs significant costs in terms of energy and mobile data usage, but also produces heavy traffic loads on bandwidth-limited cellular networks. To address this issue, this paper investigates cost-effective data collection solutions for MPSS using hybrid cellular and opportunistic short-range communications. We first develop an adaptive and distributed algorithm, OptMPSS, to maximize phone users' financial rewards while accounting for their costs across the MPSS. To incentivize phone users to participate without subverting the behavior of OptMPSS, we then propose BMT, the first algorithm that merges stochastic Lyapunov optimization with mechanism design theory. We show that our provably incentive-compatible approaches achieve an asymptotically optimal gross profit for all phone users. Experiments with Android phones and trace-driven simulations verify our theoretical analysis and demonstrate that our approach improves system performance significantly (around 100%), while confirming that our system achieves incentive compatibility, individual rationality, and server profitability.
【Keywords】: cellular radio; distributed algorithms; optimisation; smart phones; stochastic games; wireless sensor networks; BMT algorithm; OptMPSS algorithm; adaptive algorithm; cellular radio; cost effective data collection; distributed algorithm; faithful data collection; hybrid cellular-opportunistic short range communications; mechanism design theory; optimal gross profit; phone user financial rewards; sensor enabled smart phones; stochastic Lyapunov optimization; stochastic mobile phone sensing systems; urban data collection; Algorithm design and analysis; Heuristic algorithms; IEEE 802.11 Standard; Mobile communication; Mobile handsets; Sensors; Servers
【Paper Link】 【Pages】:1499-1507
【Authors】: Carlee Joe-Wong ; Sangtae Ha ; Mung Chiang
【Abstract】: In January 2014, AT&T introduced sponsored data to the U.S. mobile data market, allowing content providers (CPs) to subsidize users' cost of mobile data. As sponsored data gains traction in industry, it is important to understand its implications. This work considers CPs' choice of how much content to sponsor and the implications for users, CPs, and ISPs (Internet service providers). We first formulate a model of user, CP, and ISP interaction for heterogeneous users and CPs and derive their optimal behaviors. We then show that these behaviors can reverse our intuition as to how user demand and utility change with different user and CP characteristics. While all three parties can benefit from sponsored data, we find that sponsorship disproportionately favors less cost-constrained CPs and more cost-constrained users, exacerbating CP inequalities but making user demand more even. We also show that users' utilities increase more than CPs' with sponsored data. We finally illustrate these results in practice through numerical simulations with data from a commercial pricing trial and introduce a framework for CPs to decide which, in addition to how much, content to sponsor.
【Keywords】: mobile radio; numerical analysis; traction; CP user characteristics; U.S. mobile data market; content provider framework; economic analysis; industrial traction; mobile data sponsoring; numerical simulation; user demand; Computers; Conferences; Data models; Elasticity; Mobile communication; Pricing; Quality of service
【Paper Link】 【Pages】:1508-1516
【Abstract】: Efficient use of shared resources is a key problem in a wide range of computer systems, from cloud computing to multicore processors. Optimized allocation of resources among users can result in dramatically improved overall system performance. Resource allocation is in general NP-complete, and past works have mostly focused on studying concave performance curves, applying heuristics to nonconcave curves, or finding optimal solutions using slow dynamic programming methods. These approaches have drawbacks in terms of generality, accuracy and efficiency. In this paper, we observe that realistic performance curves are often not concave, but rather can be broken into a small number of concave or convex segments. We present efficient algorithms for optimal and approximately optimal resource allocation leveraging this idea. We also introduce several algorithmic techniques that may be of independent interest. Our optimal algorithm runs in O(snα(m)m(log m)²) time, and our approximation algorithm finds a (1 - ε)-optimal allocation for any ε > 0 in O((s/ε)α(n/ε)n² log(n/ε) log m) time; here, s is the number of segments, n the number of processes, m the amount of shared resource, and α is the inverse Ackermann function, which is ≤ 4 in practice. Existing exact and approximation algorithms have O(nm²) and O(n²m/ε) running times, respectively, so our algorithms are much faster in the practical case where n ≪ m. Experiments show that our algorithms are 215 times faster than dynamic programming for finding optimal solutions when m = 1M, and produce solutions with 33% better performance than greedy algorithms.
【Keywords】: cloud computing; computational complexity; dynamic programming; multiprocessing systems; resource allocation; storage allocation; NP-complete problem; O(n²m/ε) running time; O(nm²) running time; O((s/ε)α(n/ε)n² log(n/ε) log m) time; O(snα(m)m(log m)²) time; cloud computing; concave performance curve; dynamic programming method; inverse Ackermann function; multicore processor; optimal nonconcave resource allocation; realistic performance curve; Approximation algorithms; Approximation methods; Computers; Dynamic programming; Heuristic algorithms; Resource management; Throughput; Resource allocation; approximation algorithms; optimization
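As a point of reference for the claimed speedups, the classical O(nm²) dynamic program that this paper improves upon can be written directly. This is a generic sketch; the `perf` tables are invented example curves, not the paper's data:

```python
def dp_allocate(perf, m):
    """perf[i][r] = performance of process i given r resource units
    (r = 0..m). Returns the best total performance over all ways to
    split m units among the processes; O(n * m^2) time."""
    best = perf[0][:]              # best[r]: optimum for process 0 alone
    for curve in perf[1:]:
        best = [max(best[r - g] + curve[g] for g in range(r + 1))
                for r in range(m + 1)]
    return best[m]

# Two processes sharing 4 units; neither curve is concave.
perf = [
    [0, 1, 5, 6, 7],    # jump at 2 units: convex then concave segment
    [0, 4, 5, 5, 5],    # diminishing returns after 1 unit
]
assert dp_allocate(perf, 4) == 10   # e.g., 2 units each: 5 + 5
```

The DP handles arbitrary curves but pays the O(nm²) price; the paper's contribution is exploiting the small number of concave/convex segments to do far better.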
【Paper Link】 【Pages】:1517-1525
【Authors】: Chris Milling ; Constantine Caramanis ; Shie Mannor ; Sanjay Shakkottai
【Abstract】: In many networks the operator is faced with nodes that report potentially important phenomena such as failures, illnesses, and viruses. The operator then faces the question: is the phenomenon spreading over the network, or simply occurring at random? We seek to answer this question from highly noisy and incomplete data, where at a single point in time we are given a possibly very noisy subset of the infected population (including false positives and negatives). While previous work has focused on uniform spreading rates for the infection, heterogeneous graphs with unequal edge weights are more faithful models of reality. Critically, the network structure may not be fully known, and modeling epidemic spread on unknown graphs relies on non-homogeneous edge (spreading) weights. Such heterogeneous graphs pose considerable challenges, requiring both algorithmic and analytical development. We develop an algorithm that can distinguish between a spreading phenomenon and a randomly occurring phenomenon using only local information, without knowing the complete network topology and weights. Further, we show that this algorithm can succeed even in the presence of noise, false positives, and unknown graph edges.
【Keywords】: computer network security; computer viruses; graph theory; critical network structure; false negatives; false positives; graph edge; heterogeneous graph; heterogeneous networks; infected population; infection local detection; local information; Analytical models; Approximation algorithms; Computers; Conferences; Electronic mail; Noise measurement; Probabilistic logic
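The core question above, spreading or random, can be illustrated with a simple distance statistic, under the simplifying assumption (unlike the paper) that the full topology is known: nodes infected by an epidemic cluster together, so their average pairwise graph distance is small compared with an equally sized spread-out set:

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src by breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def mean_pair_dist(adj, nodes):
    """Average pairwise graph distance within a node set."""
    nodes = list(nodes)
    total, pairs = 0, 0
    for i, u in enumerate(nodes):
        d = bfs_dist(adj, u)
        for v in nodes[i + 1:]:
            total += d[v]
            pairs += 1
    return total / pairs

# Path graph 0-1-...-29. An "epidemic" seeded at node 0 infects the
# contiguous block 0..9; a random phenomenon hits spread-out nodes.
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < 30] for i in range(30)}
epidemic = range(10)
scattered = range(0, 30, 3)          # same size, spread over the graph
assert mean_pair_dist(adj, epidemic) < mean_pair_dist(adj, scattered)
```

The paper's algorithm makes this kind of distinction rigorous while tolerating noise, missing data, and unknown edges and weights.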
【Paper Link】 【Pages】:1526-1534
【Authors】: Peng-Jun Wan ; Fahad Al-dhelaan ; Sai Ji ; Lei Wang ; Ophir Frieder
【Abstract】: Linear interference alignment (LIA) is one of the key interference mitigation techniques for enhancing wireless MIMO network capacity. Generic LIA feasibility amounts to whether or not a well-structured random matrix, with entries drawn from a continuous distribution, has full row rank almost surely. Recently, a randomized algebraic test of feasibility was proposed in the literature. It is a pseudo-polynomial bounded-error probabilistic algorithm in nature, with the intrinsic limitations of requiring an inordinate amount of running time and memory even for moderate-sized inputs and being prone to round-off errors in floating-point computations. This paper presents necessary conditions and sufficient conditions for generic LIA feasibility and develops fast and robust tests for them based on network flow. In certain settings, these conditions are both necessary and sufficient, and their flow-based tests yield an efficient algorithm for the feasibility test.
【Keywords】: MIMO communication; polynomials; radiofrequency interference; telecommunication network topology; LIA; arbitrary interference topology; floating-point computations; flow based feasibility test; key interference mitigation techniques; linear interference alignment; pseudo polynomial bounded error probabilistic algorithm; randomized algebraic test; wireless MIMO network capacity; Computers; Conferences; Interference; MIMO; Network topology; Topology; Wireless communication
【Paper Link】 【Pages】:1535-1543
【Authors】: Fang Dong ; Kui Wu ; S. Venkatesh
【Abstract】: The tightness of stochastic performance bounds has been a lingering issue in the theory and practical application of stochastic network calculus (SNC). More often than not, inappropriate stochastic traffic arrival models and/or service models lead to loose stochastic bounds. In practice, loose bounds occur due to inaccurate a priori assumptions about traffic arrivals and/or obliviousness to possible correlation between arrivals. To alleviate this problem, this paper uses copula analysis to capture the correlation in traffic flows and introduces a statistical method to infer traffic arrival models. Using copula theory, we show the range of performance bounds that SNC can achieve. With concrete numerical examples and real-world experiments, we demonstrate that copula analysis offers a new opportunity for extending SNC research and augmenting its impact in practice.
【Keywords】: calculus; stochastic processes; SNC; copula analysis; statistical method; statistical network calculus; stochastic bounds; stochastic network calculus; stochastic performance; stochastic traffic arrival models; traffic arrival models; Analytical models; Calculus; Distribution functions; IP networks; Joints; Random variables; Stochastic processes; Copulas Analysis; Network Calculus
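A Gaussian copula, the simplest instance of the copula machinery invoked above, can be sampled with the standard library alone. This is an illustrative sketch, not the paper's traffic model; `gaussian_copula_pairs` is an invented helper: correlated normals are pushed through the normal CDF to give dependent variables with uniform marginals:

```python
import math
import random

def gaussian_copula_pairs(rho, n, seed=0):
    """Sample n pairs (u, v) with uniform marginals whose dependence
    is induced by a bivariate normal with correlation rho."""
    rng = random.Random(seed)
    phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # normal CDF
    out = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
        out.append((phi(z1), phi(z2)))
    return out

pairs = gaussian_copula_pairs(rho=0.9, n=2000)
# Each marginal is uniform on (0, 1), yet u and v are strongly dependent:
# exactly the kind of arrival correlation that makes naive SNC bounds loose.
```

Fitting such a copula to measured flows is one way to encode inter-flow correlation in an arrival model.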
【Paper Link】 【Pages】:1544-1552
【Authors】: Tobias Friedrich ; Anton Krohmer
【Abstract】: Most complex real-world networks display scale-free features. This has motivated the study of numerous random graph models with a power-law degree distribution. There is, however, no established and simple model that also has the high clustering of vertices typically observed in real data. Hyperbolic random graphs bridge this gap. This natural model was recently introduced by Papadopoulos, Krioukov, Boguñá, Vahdat (INFOCOM, pp. 2973-2981, 2010) and has been shown, theoretically and empirically, to fulfill all typical properties of real-world networks, including power-law degree distribution and high clustering. We study cliques in hyperbolic random graphs G and present new results on the expected number of k-cliques E[K_k] and the size of the largest clique ω(G). We observe a phase transition at power-law exponent γ = 3. More precisely, for γ ∈ (2,3) we prove E[K_k] = n^(k(3-γ)/2) Θ(k)^(-k) and ω(G) = Θ(n^((3-γ)/2)), while for γ ≥ 3 we prove E[K_k] = n Θ(k)^(-k) and ω(G) = Θ(log(n)/log log n). We empirically compare the ω(G) values of several scale-free random graph models with real-world networks. Our experiments show that the ω(G)-predictions by hyperbolic random graphs are much closer to the data than those of other scale-free random graph models.
【Keywords】: complex networks; computational complexity; graph theory; complex real-world networks; high clustering; hyperbolic random graphs; k-cliques; phase transition; power-law degree distribution; power-law exponent; scale-free features; scale-free random graph models; Computational modeling; Computers; Conferences; Geometry; Internet; Predictive models; Routing
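The phase transition in the clique bounds above is easy to tabulate if one keeps only the leading-order exponents and drops the constants hidden in Θ(·) (an illustrative simplification, not a statement of the theorem's constants):

```python
import math

def predicted_clique_size(n, gamma):
    """Leading-order prediction for omega(G) in a hyperbolic random
    graph with power-law exponent gamma (Theta-constants dropped)."""
    if 2 < gamma < 3:
        return n ** ((3 - gamma) / 2)               # polynomial regime
    return math.log(n) / math.log(math.log(n))      # gamma >= 3

n = 10 ** 6
poly = predicted_clique_size(n, gamma=2.2)   # n**0.4, about 251
logy = predicted_clique_size(n, gamma=3.5)   # log n / log log n, about 5
# Already at n = 10^6 the two regimes differ by well over an order
# of magnitude, which is what makes the transition at gamma = 3 visible.
```

This gap is what the paper's empirical comparison against real-world clique numbers exploits.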
【Paper Link】 【Pages】:1553-1561
【Authors】: Minas Gjoka ; Balint Tillman ; Athina Markopoulou
【Abstract】: In networking research, it is often desirable to generate synthetic graphs with certain properties. In this paper, we present a new algorithm, 2K_Simple, for exact construction of simple graphs with a target joint degree matrix (JDM). We prove that the algorithm constructs exactly the target JDM and that its running time is linear in the number of edges. Furthermore, we show that the algorithm imposes fewer constraints on the graph structure than previous state-of-the-art construction algorithms. We exploit this flexibility to extend 2K_Simple and design two algorithms that achieve additional network properties on top of the exact target JDM. In particular, 2K_Simple_Clustering produces simple graphs with a target JDM and an average clustering coefficient close to a target, while 2K_Simple_Attributes produces simple graphs with exactly a target JDM and joint occurrence of node-attribute pairs. We exhaustively evaluate our algorithms through simulation for small graphs, and we also demonstrate their benefits in generating graphs that resemble real-world social networks in terms of accuracy and speed; we reduce the running time by orders of magnitude compared to previous approaches that rely on Markov Chain Monte Carlo.
【Keywords】: Markov processes; Monte Carlo methods; computer networks; graph theory; matrix algebra; pattern clustering; social networking (online); Monte Carlo Markov chain; average clustering coefficient; computer network; simple graph construction; social network; synthetic graph generation; target JDM; target joint degree matrix; Algorithm design and analysis; Approximation algorithms; Clustering algorithms; Conferences; Joints; Social network services; Switches
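The target object here, the joint degree matrix, is straightforward to compute for any graph, which is also how one would verify that a construction such as 2K_Simple hit its target exactly. A minimal sketch (JDM[(k, l)] counts edges between degree-k and degree-l nodes):

```python
from collections import Counter

def joint_degree_matrix(edges):
    """JDM[(k, l)] = number of edges joining a degree-k and a
    degree-l node (k <= l), for an undirected simple graph."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    jdm = Counter()
    for u, v in edges:
        k, l = sorted((deg[u], deg[v]))
        jdm[(k, l)] += 1
    return jdm

# Path a-b-c-d: two edges between degree-1 and degree-2 nodes,
# one edge between two degree-2 nodes.
edges = [("a", "b"), ("b", "c"), ("c", "d")]
assert joint_degree_matrix(edges) == {(1, 2): 2, (2, 2): 1}
```

A JDM fixes both the degree distribution and the degree-degree correlations, which is why matching it exactly is a stronger target than matching degrees alone.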
【Paper Link】 【Pages】:1562-1570
【Authors】: Jungseul Ok ; Jinwoo Shin ; Yung Yi
【Abstract】: We study how an innovation (e.g., a product or technology) diffuses over a social network when individuals strategically make selfish, rational choices in adopting the new innovation. This diffusion has been studied by modeling individuals' interactions with a noisy best-response dynamic over a networked coordination game, but mainly in the non-progressive setup. In this paper, we study the case when people are progressive, i.e., never going back to the old technology once the new technology is chosen, where such progressive behavior is explained using the notion of sunk cost fallacy in social psychology. Our main focus is on the diffusion time, i.e., the time until all choose the new innovation. To this end, we first provide a combinatorial characterization of the diffusion time, which corresponds to the time to reach the absorbing state of a Markov chain. Based on this, we propose a polynomial-time algorithm that computes the diffusion time, a task known to be computationally intractable for non-progressive diffusion. Second, we asymptotically quantify the diffusion times for a class of well-known social graph topologies and compare them to those under non-progressive diffusion. Finally, we study the impact of seeding to speed up the diffusion in the progressive setup, and show that the diffusion cannot be significantly accelerated with only a small seeding budget, which is, in part, in stark contrast to the non-progressive case. Our results provide not only an understanding of progressive strategic diffusion in social networks, but also computational tractability for related problems, e.g., seeding, which we believe should be of broader interest.
【Keywords】: Markov processes; computational complexity; game theory; graph theory; psychology; social aspects of automation; social networking (online); technology transfer; Markov chain; asymptotic quantification; combinatorial characterization; diffusion speed; diffusion time; individual interaction modeling; innovation adoption; networked coordination game; noisy best response dynamic; nonprogressive diffusion; polynomial-time algorithm; progressive behavior; progressive strategic diffusion; social graph topology; social network; social psychology; sunk cost fallacy; Computational modeling; Computers; Conferences; Games; Integrated circuit modeling; Social network services; Technological innovation
【Paper Link】 【Pages】:1571-1579
【Authors】: Carla-Fabiana Chiasserini ; Michele Garetto ; Emilio Leonardi
【Abstract】: We address the problem of social network de-anonymization when relationships between people are described by scale-free graphs. In particular, we propose a rigorous, asymptotic mathematical analysis of the network de-anonymization problem that captures the impact of power-law node degree distribution, a fundamental and quite ubiquitous feature of many complex systems such as social networks. By applying bootstrap percolation and a novel graph slicing technique, we prove that large inhomogeneities in the node degrees lead to a dramatic reduction of the initial set of nodes that must be known a priori (the seeds) in order to successfully identify all other users. We characterize the size of this set when seeds are selected using different criteria, and we show that their number can be as small as n^ε for any small ε > 0. Our results are validated through simulation experiments on real social network graphs.
【Keywords】: complex networks; graph theory; network theory (graphs); social networking (online); asymptotic mathematical analysis; complex systems; graph slicing technique; percolation graph matching; power-law node degree distribution; real social network graphs; scale-free graphs; scale-free social network de-anonymization problem; Algorithm design and analysis; Analytical models; Computers; Conferences; Privacy; Radiation detectors; Social network services
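Bootstrap percolation, the engine of the analysis above, is itself a short algorithm (a generic sketch, not the paper's graph-slicing machinery): starting from a seed set, a node becomes identified once at least r of its neighbors are identified, and the process repeats until nothing changes:

```python
from collections import deque

def bootstrap_percolation(adj, seeds, r=2):
    """Return the set of nodes eventually activated when a node
    activates once at least r of its neighbors are active."""
    active = set(seeds)
    hits = {}                      # count of active neighbors per node
    q = deque(seeds)
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v in active:
                continue
            hits[v] = hits.get(v, 0) + 1
            if hits[v] >= r:
                active.add(v)
                q.append(v)
    return active

# 5-cycle 0-1-2-3-4 with chords 0-2 and 1-3: two seeds suffice to
# take over the whole graph with threshold r = 2.
adj = {0: [1, 4, 2], 1: [0, 2, 3], 2: [1, 3, 0],
       3: [2, 4, 1], 4: [3, 0]}
assert bootstrap_percolation(adj, {0, 1}, r=2) == {0, 1, 2, 3, 4}
```

In the de-anonymization setting, "active" means "matched across the two networks", and the paper's result is that high-degree inhomogeneity lets percolation start from strikingly few seeds.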
【Paper Link】 【Pages】:1580-1588
【Authors】: Liang Zheng ; Carlee Joe-Wong ; Chee Wei Tan ; Sangtae Ha ; Mung Chiang
【Abstract】: The growing volume of mobile data traffic has led many Internet service providers (ISPs) to cap their users' monthly data usage, with overage fees for exceeding their caps. In this work, we examine a secondary data market in which users can buy and sell leftover data caps from each other. China Mobile Hong Kong recently introduced such a market. While similar to an auction in that users submit bids to buy and sell data, it differs from traditional double auctions in that the ISP serves as the middleman between buyers and sellers. We derive the optimal prices and amount of data that different buyers and sellers are willing to bid in this market and then propose an algorithm for ISPs to match buyers and sellers. We compare the optimal matching for different ISP objectives and derive conditions under which an ISP can obtain higher revenue with the secondary market: while the ISP loses revenue from overage fees, it can assess administration fees and take the differences between the buyer and seller prices. Finally, we use one year of usage data from 100 U.S. mobile users to illustrate that the conditions for a revenue increase can hold in practice.
【Keywords】: Internet; mobile radio; pricing; telecommunication network management; telecommunication services; tendering; China Mobile Hong Kong; ISP objectives; Internet service providers; administration fee; auction; bidding; buyer price; data buy and sell; leftover data caps; mobile data secondary market; mobile data traffic; optimal price; revenue increase; seller price; usage data; user monthly data usage; Computers; Conferences; Data models; Mobile communication; Optimal matching; Pricing; Web and internet services
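The buyer-seller matching step that the paper optimizes can be illustrated with a greedy baseline (a toy sketch; this is neither the authors' algorithm nor China Mobile Hong Kong's actual rules): sort buy bids descending and sell bids ascending, match while the buy price covers the sell price, and let the ISP keep the spread:

```python
def match_bids(buys, sells):
    """buys/sells: lists of (price_per_GB, gigabytes).
    Returns (matched_GB, isp_spread_revenue) under greedy matching."""
    buys = sorted(buys, reverse=True)          # highest buyer first
    sells = sorted(sells)                      # cheapest seller first
    matched = spread = 0.0
    i = j = 0
    while i < len(buys) and j < len(sells) and buys[i][0] >= sells[j][0]:
        gb = min(buys[i][1], sells[j][1])
        matched += gb
        spread += (buys[i][0] - sells[j][0]) * gb
        buys[i] = (buys[i][0], buys[i][1] - gb)
        sells[j] = (sells[j][0], sells[j][1] - gb)
        if buys[i][1] == 0: i += 1
        if sells[j][1] == 0: j += 1
    return matched, spread

buyers = [(10.0, 1.0), (6.0, 2.0)]   # ($/GB, GB)
sellers = [(5.0, 2.0), (8.0, 1.0)]
gb, revenue = match_bids(buyers, sellers)
# 1 GB at (10 - 5) plus 1 GB at (6 - 5): 2 GB matched, $6 spread
```

The paper's point is that the ISP can tune this matching toward different objectives (volume, spread revenue, user surplus) with predictable revenue consequences.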
【Paper Link】 【Pages】:1589-1597
【Authors】: Geoffrey Alexander ; Jedidiah R. Crandall
【Abstract】: We present a novel technique for estimating the round-trip-time network latency between two off-path end hosts. That is, given two arbitrary machines, A and B, on the Internet, our technique measures the round trip time from A to B. We take advantage of information side channels present in the TCP/IP network stack of modern Linux kernels to infer information about off-path routes. Compared to previous tools, ours requires no additional resources, machines, or protocols beyond TCP. The only requirements are that one end host have an open port and run a modern Linux kernel, and that the other end host respond to unsolicited SYN-ACK packets with a RST packet. We evaluate our technique “in the wild” and compare our off-path estimates to on-path measurements. Our experiments show that our technique provides accurate, real-time estimates of off-path network latency. In over 80% of measurements, our technique provides off-path round-trip-time estimates within 20% of the actual round trip time. We also discuss possible causes of errors that impact the accuracy of our measurements.
【Keywords】: IP networks; Internet; telecommunication channels; transport protocols; Internet; Linux kernel; RST packet; TCP-IP network stack protocol; TCP-IP side channel; off-path round trip time measurement; round trip time network latency estimation; unsolicited SYN-ACK packet; Extraterrestrial measurements; IP networks; Kernel; Linux; Loss measurement; Servers; Time measurement
【Paper Link】 【Pages】:1598-1606
【Authors】: Sarker Tanzir Ahmed ; Clint Sparkman ; Hsin-Tsang Lee ; Dmitri Loguinov
【Abstract】: Exponential growth of the web continues to present challenges to the design and scalability of web crawlers. Our previous work on a high-performance platform called IRLbot [28] led to the development of new algorithms for realtime URL manipulation, domain ranking, and budgeting, which were tested in a 6.3B-page crawl. Since very little is known about the crawl itself, our goal in this paper is to undertake an extensive measurement study of the collected dataset and document its crawl dynamics. We also propose a framework for modeling the scaling rate of various data structures as crawl size goes to infinity and offer a methodology for comparing crawl coverage to that of commercial search engines.
【Keywords】: information retrieval; search engines; IRLbot platform; URL manipulation; Web crawlers; budgeting; crawl coverage; crawl dynamics; crawl size; data structures; domain ranking; large-scale crawl documentation; Admission control; Bandwidth; Crawlers; HTML; Robots; Servers; Uniform resource locators
【Paper Link】 【Pages】:1607-1615
【Authors】: Yanjiao Chen ; Lingjie Duan ; Qian Zhang
【Abstract】: Major cellular operators are planning to upgrade to high-speed 4G networks, but due to budget constraints, they have to dynamically plan and deploy the 4G networks over multiple time stages. By considering one-time deployment cost, daily operational cost, and 3G network congestion, this paper studies how an operator financially manages the cash flow and plans the 4G deployment over a finite time horizon to maximize its final-stage profit. The operator provides both the traditional 3G service and the new 4G service, and we show that users will start to use the 4G service only when it reaches a sizable coverage. At each time stage, the operator first decides an additional 4G deployment size by predicting users' responses in choosing between the 3G and 4G services. We formulate this problem as a dynamic programming problem and propose an optimal threshold-based 4G deployment policy. We show that the operator will not deploy to full 4G coverage in an area with low user density or high deployment/operational cost. Perhaps surprisingly, we show that during the 4G deployment process the number of 4G subscribers first increases and then decreases, because the 4G service mitigates 3G network congestion and thereby improves 3G QoS.
【Keywords】: 4G mobile communication; financial management; profitability; 4G network deployment; dynamic programming problem; final-stage profit; financial analysis; finite time horizon; Computational modeling; Computers; Conferences; Dynamic programming; Planning; Quality of service; Wireless communication
【Paper Link】 【Pages】:1616-1624
【Authors】: Yu Hua ; Wenbo He ; Xue Liu ; Dan Feng
【Abstract】: Rapid disaster relief is important to save human lives and reduce property loss. With the wide use of smartphones and their ubiquitous, easy access to the Internet, sharing and uploading images to the cloud via smartphones offers a nontrivial opportunity to provide information about disaster zones. However, due to limited available bandwidth and energy, smartphone-based crowdsourcing fails to support real-time data analytics. The key to sharing and analyzing the images efficiently and in a timely manner is to determine the value/worth of the images based on their significance and redundancy, and to upload only those images that are valuable and unique. In this paper, we propose a near-realtime and cost-efficient scheme, called SmartEye, for the cloud-assisted disaster environment. The idea behind SmartEye is to implement QoS-aware in-network deduplication over DiffServ in software-defined networks (SDN). Owing to its ease of use, simplicity, and scalability, DiffServ supports in-network deduplication that meets the needs of differentiated QoS. SmartEye aggregates flows with similar features via semantic hashing, and provides communication services for the aggregated, rather than individual, flows. To achieve these goals, we leverage two main optimization schemes: semantic hashing and space-efficient filters. Efficient image sharing is helpful for disaster detection and scene recognition. To demonstrate the feasibility of SmartEye, we conduct two real-world case studies in which the losses from Typhoon Haiyan (2013) and Hurricane Sandy (2012) are identified in a timely fashion by analyzing massive data consisting of more than 22 million images using our SmartEye system. Extensive experimental results illustrate that SmartEye is efficient and effective in achieving real-time analytics in disasters.
【Keywords】: DiffServ networks; cloud computing; cryptography; disasters; emergency management; quality of service; smart phones; software defined networking; storms; DiffServ; Hurricane Sandy; QoS-aware in-network deduplication; SDN; SmartEye; Typhoon Haiyan; cloud image sharing; cloud-assisted disaster environment; communication services; data analysis; differentiated QoS; disaster detection; disaster environments; disaster zones; flows aggregates; image analysis; image upload; optimization schemes; scene recognition; semantic hashing; smart phones; software-defined networks; space-efficient filters; Bandwidth; Computers; Diffserv networks; Feature extraction; Quality of service; Servers; Smart phones
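The semantic hashing SmartEye uses to aggregate flows of similar images can be illustrated with a random-hyperplane (SimHash-style) sketch. Everything below — the feature dimensionality, bit count, and example vectors — is a hypothetical stand-in for whatever features SmartEye actually extracts; the point is only that near-duplicate feature vectors land at nearly identical hash codes, so redundant images can be grouped and deduplicated in-network.

```python
import random

def random_hyperplanes(dim, n_bits, seed=0):
    """One Gaussian hyperplane per hash bit."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n_bits)]

def semantic_hash(features, planes):
    """Random-hyperplane LSH (SimHash-style): bit b is the sign of the dot
    product with hyperplane b, so similar vectors agree on most bits."""
    return tuple(int(sum(f * w for f, w in zip(features, p)) >= 0.0)
                 for p in planes)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

planes = random_hyperplanes(dim=8, n_bits=32)
img = [0.9, 0.1, 0.4, 0.7, 0.2, 0.8, 0.5, 0.3]           # hypothetical features
near_dup = [f + 0.01 for f in img]                        # almost the same image
unrelated = [-0.6, 0.9, -0.2, 0.1, 0.7, -0.8, 0.3, -0.5]

d_dup = hamming(semantic_hash(img, planes), semantic_hash(near_dup, planes))
d_other = hamming(semantic_hash(img, planes), semantic_hash(unrelated, planes))
# Near-duplicates differ in few (often zero) bits; unrelated vectors in many.
```

Flows whose hashes fall within a small Hamming radius can then share one DiffServ aggregate rather than being served individually.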
【Paper Link】 【Pages】:1625-1633
【Authors】: Sarker Tanzir Ahmed ; Dmitri Loguinov
【Abstract】: Many BigData applications (e.g., MapReduce, web caching, search in large graphs) process streams of random key-value records that follow highly skewed frequency distributions. In this work, we first develop stochastic models for the probability to encounter unique keys during exploration of such streams and their growth rate over time. We then apply these models to the analysis of LRU caching, MapReduce overhead, and various crawl properties (e.g., node-degree bias, frontier size) in random graphs.
【Keywords】: Big Data; cache storage; information retrieval; parallel processing; stochastic processes; Big Data applications; LRU caching; MapReduce overhead; caching application; crawl properties; crawling application; data processing; frequency distribution; probability; random graphs; randomized data streams; stochastic model; Analytical models; Computational modeling; Computers; Conferences; Random variables; Stochastic processes; Yttrium
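The core phenomenon this paper models — the growth rate of unique keys in a heavily skewed key-value stream — can be reproduced with a small simulation. The Zipf exponent, corpus size, and checkpoints below are arbitrary illustration parameters, not values from the paper.

```python
import random

def zipf_stream(n_keys, length, s=1.0, seed=7):
    """Stream of keys with Zipf-like popularity: rank-r weight ~ 1/r^s."""
    rng = random.Random(seed)
    weights = [1.0 / (r ** s) for r in range(1, n_keys + 1)]
    return rng.choices(range(n_keys), weights=weights, k=length)

def unique_growth(stream, checkpoints):
    """Number of distinct keys seen after each checkpoint position."""
    seen, out = set(), []
    cps = iter(checkpoints)
    nxt = next(cps, None)
    for i, key in enumerate(stream, 1):
        seen.add(key)
        if i == nxt:
            out.append(len(seen))
            nxt = next(cps, None)
    return out

stream = zipf_stream(n_keys=10_000, length=100_000)
growth = unique_growth(stream, [1_000, 10_000, 100_000])
# Growth is sublinear: a 10x longer prefix yields far fewer than 10x new keys,
# which is exactly what makes LRU caching and MapReduce combiners effective.
```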
【Paper Link】 【Pages】:1634-1642
【Authors】: Xiaoyong Li ; Daren B. H. Cline ; Dmitri Loguinov
【Abstract】: Network applications commonly maintain local copies of remote data sources in order to provide caching, indexing, and data-mining services to their clients. Modeling the performance of these systems and predicting future updates usually requires knowledge of the inter-update distribution at the source, which can only be estimated through blind sampling - periodic downloads and comparison against previous copies. In this paper, we first introduce a stochastic modeling framework for this problem, where the update and sampling processes are both renewal processes. We then show that all previous approaches are biased unless the observation rate tends to infinity or the update process is Poisson. To overcome these issues, we propose four new algorithms that achieve various levels of consistency, depending on the amount of temporal information revealed by the source and the capabilities of the download process.
【Keywords】: blind source separation; signal sampling; stochastic processes; Poisson process; blind sampling; consistency level; download process capabilities; interupdate source distribution; network applications; observation rate; periodic downloads; remote data sources; renewal process; sampling process; stochastic modeling framework; temporal information; temporal update dynamics; update process; Computational modeling; Computers; Conferences; Delays; Gold; Observers
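The bias this paper proves for blind sampling is easy to observe in simulation: when several updates fall between two consecutive downloads, they are detected as a single change, so a naive estimator overstates the mean inter-update time unless the sampling rate is very high or the update process is Poisson. The Pareto gap distribution and the sampling period below are illustrative assumptions.

```python
import random

def renewal_times(mean, horizon, seed=1):
    """Update instants of a renewal process with Pareto(alpha=1.5) gaps,
    scaled so the mean gap equals `mean` (heavy-tailed, i.e. non-Poisson)."""
    rng = random.Random(seed)
    alpha = 1.5
    xm = mean * (alpha - 1) / alpha        # Pareto scale giving E[gap] = mean
    t, times = 0.0, []
    while t < horizon:
        u = 1.0 - rng.random()             # uniform in (0, 1]
        t += xm / (u ** (1.0 / alpha))     # inverse-CDF Pareto sample
        times.append(t)
    return times

def blind_sample_estimate(update_times, period, horizon):
    """Naive blind-sampling estimator: download every `period` time units,
    note whether the copy changed, and report horizon / detected_changes.
    Several updates inside one interval collapse into a single detection."""
    detected, i, t = 0, 0, period
    while t <= horizon:
        changed = False
        while i < len(update_times) and update_times[i] <= t:
            changed = True
            i += 1
        if changed:
            detected += 1
        t += period
    return horizon / max(detected, 1)

true_mean = 1.0
horizon = 200_000.0
updates = renewal_times(true_mean, horizon)
naive = blind_sample_estimate(updates, period=2.0, horizon=horizon)
# The naive estimate cannot fall below the sampling period, so it sits far
# above the true mean inter-update time of 1.0.
```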
【Paper Link】 【Pages】:1643-1651
【Authors】: William Culhane ; Kirill Kogan ; Chamikara Jayalath ; Patrick Eugster
【Abstract】: Aggregation of computed sets of results fundamentally underlies the distillation of information in many of today's big data applications. To this end, many systems have been introduced that allow users to obtain aggregate results by aggregating along communication structures such as trees, but they do not focus on optimizing performance by optimizing the underlying structure that performs the aggregation. We consider two cases of the problem - aggregation of (1) single blocks of data, and of (2) streaming input. For each case we determine which metric of “fast” completion is the most relevant and mathematically model the resulting systems, based on aggregation trees, to optimize that metric. Our assumptions and model are laid out in depth. From our model we determine how to create a provably ideal aggregation tree (i.e., with optimal fan-in) using only limited information about the aggregation function being applied. Experiments in the Amazon Elastic Compute Cloud (EC2) confirm the validity of our models in practice.
【Keywords】: Big Data; data handling; Amazon Elastic Compute Cloud; Big Data aggregation; Big Data applications; EC2; aggregation trees; communication structures; information distillation; Aggregates; Bandwidth; Big data; Computational modeling; Computers; Conferences; Mathematical model
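The single-block case can be sketched with a deliberately simplified cost model (an assumption, not the paper's model): each tree level costs time linear in its fan-in plus a fixed overhead, while the number of levels shrinks logarithmically as fan-in grows — so an intermediate fan-in minimizes completion time.

```python
def levels_needed(n_leaves, fanin):
    """Depth of a complete fan-in-f tree covering n_leaves leaves."""
    levels, cover = 0, 1
    while cover < n_leaves:
        cover *= fanin
        levels += 1
    return levels

def completion_time(n_leaves, fanin, per_child_cost=1.0, fixed_cost=1.0):
    """Toy model: every level waits for `fanin` children (linear receive
    cost) plus a fixed per-level overhead; total = levels * level cost."""
    return levels_needed(n_leaves, fanin) * (per_child_cost * fanin + fixed_cost)

def best_fanin(n_leaves, **kw):
    """Exhaustive search for the fan-in minimizing modeled completion time."""
    return min(range(2, n_leaves + 1),
               key=lambda f: completion_time(n_leaves, f, **kw))

f_star = best_fanin(1024)
t_star = completion_time(1024, f_star)
# Deep binary trees (many levels) and flat trees (huge per-level cost) both
# lose; under this toy model the optimum sits at a moderate fan-in.
```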
【Paper Link】 【Pages】:1652-1660
【Authors】: Han Ding ; Jinsong Han ; Alex X. Liu ; Jizhong Zhao ; Panlong Yang ; Wei Xi ; Zhiping Jiang
【Abstract】: In this paper, we propose a system called R# to estimate the number of human objects using passive RFID tags, without attaching anything to the humans. The idea is based on our observation that the more human objects are present, the higher the variance in the RSS values of the tag-backscattered RF signal. Thus, based on the received RF signal, the reader can estimate the number of human objects. R# includes an RFID reader and some (say 20) passive tags, deployed in the region where we want to monitor the number of human objects, such as the area in front of a painting. The RFID reader periodically emits an RF signal to identify all tags, and the tags simply respond with their IDs via C1G2 standard protocols. We implemented R# using commercial Impinj H47 passive RFID tags and the Impinj reader model R420. We conducted experiments in a simulated picking-aisle area of a supermarket environment. The experimental results show that R# can achieve high estimation accuracy (more than 90%).
【Keywords】: object detection; protocols; C1G2 standard protocols; Impinj reader model R420; R#; RF signal; backscattered radio frequency signal; commercial Impinj H47 passive RFID tags; human object estimation; Entropy; Estimation; Feature extraction; Monitoring; Passive RFID tags; RF signals
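The abstract's key observation — more people present means higher RSS variance — suggests a simple estimator: profile the variance at known crowd sizes, then map an observed variance back to the nearest profiled count. The Gaussian perturbation model and every constant below are invented for illustration; R#'s actual features and training procedure differ.

```python
import random
import statistics

def rss_samples(n_people, n=4000, seed=0):
    """Toy propagation model (an assumption, not the paper's): each person
    adds independent multipath perturbation, so RSS variance grows with
    the crowd size."""
    rng = random.Random(seed + n_people)
    sigma = (0.5 + 0.4 * n_people) ** 0.5
    return [-60.0 + rng.gauss(0.0, sigma) for _ in range(n)]

# "Training": profile the RSS variance for each known crowd size 0..5.
profile = {k: statistics.pvariance(rss_samples(k)) for k in range(6)}

def estimate_people(samples):
    """Map an observed RSS variance to the nearest profiled crowd size."""
    v = statistics.pvariance(samples)
    return min(profile, key=lambda k: abs(profile[k] - v))

est = estimate_people(rss_samples(3, seed=99))
```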
【Paper Link】 【Pages】:1661-1669
【Authors】: Qiongzheng Lin ; Lei Yang ; Yuxin Sun ; Tianci Liu ; Xiang-Yang Li ; Yunhao Liu
【Abstract】: For today's computer users, the mouse plays such an important role that it dominates the interaction interface in personal computer for nearly half a century since it was invented. However, the mouse is gradually unfit for the demand of modern 3D display techniques, e.g. 3D-projection or -screen, for the reason that the relevant interactions are confined in a surface. Although some new methods such as computer vision based techniques attempt to bridge the human-computer barrier, they suffer from many limitations such as ambiguity in multitargets and dependence on light. This paper presents a battery-free device called Tagball for 3D human-computer interaction via RFID tags. Tagball devises a control ball, on which N passive tags are attached, for users to generate two basic kinds of interactive commands: translation and rotation. Instead of locating N tags independently, we model the ball as a whole in a more cooperative way under the circumstance that their geometric relationships are known in advance. In addition, we consider the phase values measured by M RF antennas for these N tags as observations of the ball state. Our key innovations are the studies on motion behaviors of a group of tags by using Extended Kalman Filter, and the implementation based on purely Commercial Off-The-Shelf (COTS) RFID products. The systematical evaluation shows that Tagball traces the ball translation to 1.5cm and identifies ball orientation to 1.8° in 3D space.
【Keywords】: Kalman filters; human computer interaction; microcomputers; mouse controllers (computers); radiofrequency identification; 3D display techniques; 3D human-computer interaction; 3D-projection; 3D-screen; RF antennas; RFID tags; Tagball; battery-free device; commercial off-the-shelf RFID products; computer users; computer vision; control ball; extended Kalman filter; human-computer barrier; passive tags; personal computer; Antenna measurements; Antennas; Kalman filters; Mice; Phase measurement; Radio frequency; Three-dimensional displays; Extended Kalman Filter; Interaction Peripheral; RFID; Tagball
【Paper Link】 【Pages】:1670-1678
【Authors】: Tianci Liu ; Lei Yang ; Xiang-Yang Li ; Huaiyi Huang ; Yunhao Liu
【Abstract】: To stay competitive, plenty of data mining techniques have been introduced to help stores better understand consumers' behaviors. However, these studies are generally confined within the customer transaction data. Actually, another kind of `deep shopping data', e.g. which and why goods receiving much attention are not purchased, offers much more valuable information to boost the product design. Unfortunately, these data are totally ignored in legacy systems. This paper introduces an innovative system, called TagBooth, to detect commodities' motion and further discover customers' behaviors, using COTS RFID devices. We first exploit the motion of tagged commodities by leveraging physical-layer information, like phase and RSS, and then design a comprehensive solution to recognize customers' actions. The system has been tested extensively in the lab environment and used for half a year in real retail store. As a result, TagBooth generally performs well to acquire deep shopping data with high accuracy.
【Keywords】: consumer behaviour; data acquisition; data mining; marketing data processing; radiofrequency identification; COTS RFID devices; RFID tags; RSS; TagBooth; consumer behaviors; customer transaction data; data mining techniques; deep shopping data acquisition; legacy systems; physical-layer information; received signal strength; Accuracy; Conferences; Interference; Legged locomotion; Motion detection; Radio frequency; Radiofrequency identification; Action Recognition; Deep Shopping Data; Motion Detection; RFID; TagBooth
【Paper Link】 【Pages】:1679-1687
【Authors】: Xiulong Liu ; Bin Xiao ; Keqiu Li ; Jie Wu ; Alex X. Liu ; Heng Qi ; Xin Xie
【Abstract】: Widely used RFID tags impose serious privacy concerns, as a tag responds to queries from readers whether or not they are authorized. The common solution is to use a commercially available blocker tag, which behaves as if a set of tags with known blocking IDs were present. The use of blocker tags makes RFID estimation much more challenging, as some genuine tag IDs are covered by the blocker tag and some are not. In this paper, we propose REB, the first RFID estimation scheme that works in the presence of blocker tags. REB uses the framed slotted Aloha protocol specified in the C1G2 standard. For each round of the Aloha protocol, REB first executes the protocol on the genuine tags and the blocker tag, and then virtually executes the protocol on the known blocking IDs using the same Aloha protocol parameters. The basic idea of REB is to conduct statistical inference from the two sets of responses and estimate the number of genuine tags. We conduct extensive simulations to evaluate the performance of REB in terms of time-efficiency and estimation reliability. The experimental results reveal that our REB scheme runs tens of times faster than the fastest identification protocol with the same accuracy requirement.
【Keywords】: protocols; radiofrequency identification; statistical analysis; telecommunication network reliability; telecommunication security; Aloha protocol parameters; C1G2 standard; RFID cardinality estimation; RFID estimation scheme; RFID tags; blocker tags; blocking ID; estimation reliability; genuine tags; slotted Aloha protocol; statistically inference; Accuracy; Computers; Conferences; Estimation; Privacy; Protocols; Radiofrequency identification; Blocker Tags; RFID Estimation; RFID Privacy
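The REB idea of comparing a real Aloha frame against a virtual replay on the known blocking IDs admits a compact sketch. Among the slots that the blocker IDs leave empty, each stays empty with probability (1 - 1/f)^n, where n is the genuine-tag count, so n can be solved for from the two empty-slot counts. The frame size, round count, and hash-based slot choice are illustrative assumptions, not the paper's parameters.

```python
import math

def run_frame(ids, frame, seed):
    """One framed-slotted-Aloha round: every tag answers in slot
    hash((seed, id)) mod frame; return the set of occupied slots."""
    return {hash((seed, i)) % frame for i in ids}

def estimate_genuine(genuine_ids, blocker_ids, frame=4096, rounds=16):
    """REB-style sketch: compare the real frame (genuine tags + blocker)
    with a virtual frame replayed on the known blocking IDs. Among slots
    the blockers leave empty, each also stays empty in the real frame with
    probability (1 - 1/frame)^n_genuine; invert that to estimate n."""
    estimates = []
    for seed in range(rounds):
        empty_all = frame - len(run_frame(genuine_ids | blocker_ids, frame, seed))
        empty_blk = frame - len(run_frame(blocker_ids, frame, seed))
        ratio = empty_all / empty_blk
        estimates.append(math.log(ratio) / math.log(1.0 - 1.0 / frame))
    return sum(estimates) / len(estimates)

genuine = {f"G{i}" for i in range(1000)}    # unknown to the reader in reality
blockers = {f"B{i}" for i in range(2000)}   # known blocking IDs
n_hat = estimate_genuine(genuine, blockers)
# Averaged over rounds, n_hat concentrates near the true count of 1000.
```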
【Paper Link】 【Pages】:1688-1696
【Authors】: Wen Chen ; Fengyuan Ren ; Jing Xie ; Chuang Lin ; Kevin Yin ; Fred Baker
【Abstract】: Since TCP Incast has been identified as a catastrophic problem in many typical data center applications, many efforts have been made to analyze or solve it. The analysis work intends to model the Incast problem from a particular perspective, and the solutions try to solve the problem by designing enhanced mechanisms or algorithms. However, the proposed models are either closely coupled with a particular protocol version or dependent on empirical observations, and the solutions cannot eliminate the Incast problem entirely because the underlying issues have not been identified completely. Little work attempts to close the gap between “analyzing” and “solving” and to present a comprehensive understanding. In this paper, we provide an in-depth understanding of how the TCP Incast problem happens. We build an interpretive model that emphasizes describing qualitatively how various factors, including system parameters and mechanism variables, affect network performance under the Incast traffic pattern, rather than calculating the accurate throughput. With this model, we give plausible explanations of why the various solutions to the TCP Incast problem help, but do not solve it entirely.
【Keywords】: transport protocols; Incast traffic pattern; TCP incast problem; catastrophic problem; data center applications; empirical observations; protocol version; Analytical models; Computers; Conferences; Data models; Gaussian distribution; Receivers; Throughput; Modeling; Solutions; TCP Incast; Timeout; Window Size Distribution
【Paper Link】 【Pages】:1697-1705
【Authors】: Soojeon Lee ; Myungjin Lee ; Dongman Lee ; Hyungsoo Jung ; Byoung-Sun Lee
【Abstract】: As many-to-one traffic patterns prevail in data center networks, TCP flows often suffer from severe unfairness in sharing bottleneck bandwidth, which is known as the TCP outcast problem. The cause of the TCP outcast problem is bursty packet losses at a drop-tail queue, which trigger TCP timeouts and lead to a decreasing congestion window. This paper proposes TCPRand, a transport layer solution to TCP outcast. The main idea of TCPRand is the randomization of the TCP payload size, which breaks synchronized packet arrivals between flows from different input ports. We investigate how TCPRand reduces consecutive packet drops and demonstrate various benefits of TCPRand with extensive experiments and ns-3 simulation. Our evaluation results show that TCPRand guarantees a superior enhancement of TCP fairness with negligible overheads in all of our test cases.
【Keywords】: computer centres; transport protocols; TCP fairness; TCP outcast problem; TCP payload size; TCPRand; bursty packet losses; congestion window; data center networks; drop tail queue; randomizing TCP payload size; synchronized packet arrivals; traffic patterns; transport layer; Linux; Network topology; Packet loss; Payloads; Ports (Computers); Topology; Data center networks; Fairness; TCP outcast
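TCPRand's core mechanism — drawing each TCP payload size at random instead of always sending full-MSS segments — can be sketched at the level of segment sizes. The lower bound of 1200 bytes is an assumed parameter for illustration, not the paper's choice.

```python
import random

MSS = 1460  # bytes of payload in a full-sized segment

def segment_sizes(data_len, rng, randomize=True, lo=1200):
    """Split data_len bytes into TCP segments. TCPRand's idea: draw each
    payload size uniformly from [lo, MSS] instead of always sending full-MSS
    segments, desynchronizing the packet arrivals of competing flows.
    (lo = 1200 is an assumed bound, not the paper's parameter.)"""
    sizes, left = [], data_len
    while left > 0:
        size = rng.randint(lo, MSS) if randomize else MSS
        size = min(size, left)
        sizes.append(size)
        left -= size
    return sizes

rng = random.Random(42)
fixed = segment_sizes(1_000_000, rng, randomize=False)
randomized = segment_sizes(1_000_000, rng)
# Fixed segmentation emits identical 1460-byte packets in lockstep; the
# randomized flow's sizes vary, breaking synchronized drop-tail losses.
```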
【Paper Link】 【Pages】:1706-1714
【Authors】: Peng-Jun Wan ; Boliu Xu ; Lei Wang ; Sai Ji ; Ophir Frieder
【Abstract】: Multiflow problems are among the most fundamental problems in both wired and wireless networks. Due to their cross-layer nature, multiflow problems in wireless networks are significantly harder than their counterparts in wired networks and have received much research interest over the past decade. As with most other early-stage research, the characterization of computational hardness and the “war” on achievable approximation bounds have been the priority of existing studies of multiflow problems in wireless networks, while their practical feasibility in both running time and memory requirements is ignored as long as they are polynomial. In fact, almost all of the state-of-the-art approximation algorithms for multiflow problems in wireless networks resort exclusively to traditional linear programming (LP) methods. However, those traditional LP methods can require an inordinate amount of running time and memory even for a moderately sized input, and consequently they often prove unusable in practice. This paper presents a completely new paradigm for multiflow problems in general wireless networks, radically different from the prevailing LP-based paradigm, and develops practical algorithmic solutions that are much faster and simpler.
【Keywords】: linear programming; radio networks; LP method; computational hardness characterization; cross-layer nature; linear programming method; multiflow problem; state-of-the-art approximation algorithm; wired network; wireless network; Algorithm design and analysis; Approximation algorithms; Approximation methods; Games; Interference; Schedules; Wireless networks
【Paper Link】 【Pages】:1715-1723
【Authors】: Georgios Tychogiorgos ; Athanasios Gkelias ; Kin K. Leung
【Abstract】: The continuously growing number of multimedia applications in current communication networks highlights the necessity of an efficient resource allocation mechanism that captures the unique characteristics of multi-tiered multimedia applications and allocates network capacity efficiently. This paper examines the problem of sharing network throughput in the presence of inelastic traffic flows that follow a multi-tiered utility function. First, the concept of multi-sigmoidal utilities is introduced to describe user satisfaction; then, the implications of using such utilities are discussed for two different allocation policies: bandwidth-proportional and utility-proportional fairness. In the former case, the intrinsic causes of possible network oscillations are analyzed in detail and a heuristic to overcome such situations is proposed. In the latter, where such oscillations are not possible, efficient ways to calculate a closed-form solution for the optimal rate allocation are described. Moreover, a novel mathematical representation of such a multi-sigmoidal utility is presented and closed-form solutions for a number of application types are calculated. Finally, the efficiency and robustness of the proposed algorithms are evaluated by simulations on different network topologies and compared against other work in the literature.
【Keywords】: multimedia systems; resource allocation; bandwidth-proportional fairness allocation policy; communication networks; distributed network resource allocation; inelastic traffic flow; multi-sigmoidal utilities concept; multi-tiered multimedia applications; multi-tiered utility function; network capacity allocation; network oscillation; network throughput; network topology; rate allocation; user satisfaction; utility-proportional fairness allocation policy; Aggregates; Mathematical model; Multimedia communication; Optimization; Oscillators; Resource management; Shape
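A multi-sigmoidal utility is simply a sum of logistic steps, one per application tier. The sketch below pairs it with a simple "equal utility level" allocation found by bisection — an illustrative stand-in for the paper's utility-proportional fairness policy, with all tier parameters invented.

```python
import math

def multi_sigmoid(x, tiers=((5.0, 1.5), (15.0, 1.0))):
    """Multi-tiered utility in (0, 1): one logistic step per quality tier,
    each with its own (center, steepness). Tier values are invented."""
    return sum(1.0 / (1.0 + math.exp(-k * (x - c))) for c, k in tiers) / len(tiers)

def rate_for_utility(u, hi=50.0, iters=60):
    """Invert the (monotone) utility by bisection: rate with U(rate) ~= u."""
    a, b = 0.0, hi
    for _ in range(iters):
        m = (a + b) / 2.0
        if multi_sigmoid(m) < u:
            a = m
        else:
            b = m
    return b

def equal_utility_rates(n_users, capacity, iters=60):
    """Illustrative 'equal utility level' policy (not the paper's exact
    policy): bisect on the common utility u until per-user rates fill
    capacity. With identical users this degenerates to an even split."""
    lo_u, hi_u = 1e-6, 1.0 - 1e-6
    for _ in range(iters):
        mid = (lo_u + hi_u) / 2.0
        if rate_for_utility(mid) * n_users > capacity:
            hi_u = mid
        else:
            lo_u = mid
    return [rate_for_utility(lo_u)] * n_users

rates = equal_utility_rates(n_users=4, capacity=40.0)
```

Because the utility is monotone, both bisections converge; sigmoid (rather than concave) utilities are exactly what makes closed-form inverses like `rate_for_utility` the workhorse of this policy family.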
【Paper Link】 【Pages】:1724-1732
【Authors】: Bo-Xian Wu ; Kate Ching-Ju Lin ; Kai-Cheng Hsu ; Hung-Yu Wei
【Abstract】: Multi-user MIMO (MU-MIMO) has recently been specified in wireless standards, e.g., LTE-Advanced and 802.11ac, to allow an access point (AP) to transmit multiple unicast streams simultaneously to different clients. These protocols, however, have no specific mechanism for multicasting. Existing systems hence simply allow a single multicast transmission, thereby underutilizing the AP's multiple antennas. Even worse, in most systems, multicast is by default sent at the base rate, wasting a considerable link margin that could deliver extra information. To address this inefficiency, we present the design and implementation of HybridCast, a MU-MIMO system that enables joint unicast and multicast. HybridCast efficiently leverages the unused MIMO capability and link margin to send unicast streams concurrently with a multicast session, while ensuring not to harm the achievable rate of multicasting. We evaluate the performance of HybridCast via both testbed experiments and simulations. The results show that HybridCast always outperforms single multicast transmission. The average throughput gain for 4-antenna AP scenarios is 6.22× and 1.54× when multicast is sent at the base rate and at the best rate of the bottleneck receiver, respectively.
【Keywords】: Long Term Evolution; MIMO communication; antenna arrays; multicast communication; AP multiple antennas; HybridCast; LTE-Advance; MU-MIMO system; access point; joint multicast-unicast design; multicast transmission; multiple unicast streams; multiuser MIMO networks; wireless standards; Decoding; Interference; MIMO; Receiving antennas; Signal to noise ratio; Unicast
【Paper Link】 【Pages】:1733-1741
【Authors】: Diep N. Nguyen ; Marwan Krunz
【Abstract】: Full-duplex (FD) radios have the potential to double a link's capacity. However, it has been recently reported that the network throughput gain of FD radios over half-duplex (HD) ones is unexpectedly marginal or even negative. This is because, with both ends of each link transmitting at the same time, a set of concurrent FD links experiences more network interference (hence, a reduction in spatial reuse). This article identifies the unique advantages of FD radios and leverages multi-input multi-output (MIMO) communications to translate the FD spectral efficiency gain at the PHY level into throughput and power efficiency gains at the network layer. To that end, we first study the power minimization problem subject to rate demands in a FD-MIMO network. Sufficient conditions under which the FD network throughput can asymptotically double that of an HD network are then established. These conditions also guarantee the existence of a unique Nash Equilibrium to which the game quickly converges. By capturing the “spatial signatures” of other radios, a FD-MIMO radio can instantly adjust its ongoing radiation pattern to avoid interfering with the reception directions of other radios. We exploit this to develop a novel MAC protocol that allows multiple FD links to communicate concurrently while adapting their radiation patterns to minimize network interference. The protocol does not require any feedback or coordination among nodes, but relies on the network interference perceived by the FD radios. Extensive simulations show that the proposed MAC design dramatically outperforms traditional FD-based CSMA protocols and HD radios w.r.t. both throughput and energy efficiency. A centralized algorithm for the FD network-wide transmit power minimization problem is also developed. Simulations show that the proposed MAC protocol on average achieves almost the same power efficiency as the centralized algorithm.
Interestingly, we even observe cases where the proposed distributed algorithm outperforms the centralized approach.
【Keywords】: MIMO communication; access protocols; antenna arrays; antenna radiation patterns; game theory; minimisation; radiofrequency interference; FD network-wide; FD-MIMO network; MAC protocol; Nash equilibrium; PHY level; communications scheme; full-duplex MIMO radios; multiinput multioutput communications; network interference; network throughput gain; power minimization problem; radiation pattern; spatial signatures; Games; High definition video; Interference; MIMO; Media Access Protocol; Minimization; Throughput; MAC; MIMO; Nash equilibrium; Power efficiency; beamforming; full-duplex; network throughput; optimization
【Paper Link】 【Pages】:1742-1750
【Authors】: Omid Abari ; Hariharan Rahul ; Dina Katabi ; Mondira Pant
【Abstract】: Distributed coherent transmission is necessary for a variety of high-gain communication protocols such as distributed MIMO and creating codes over the air. Unfortunately, however, distributed coherent transmission is intrinsically difficult because different nodes are driven by independent clocks, which do not have the exact same frequency. This causes the nodes to have frequency offsets relative to each other, and hence their transmissions fail to combine coherently over the air. This paper presents AirShare, a primitive that makes distributed coherent transmission seamless. AirShare transmits a shared clock on the air and feeds it to the wireless nodes as a reference clock, hence eliminating the root cause for incoherent transmissions. The paper addresses the challenges in designing and delivering such a shared clock. It also implements AirShare in a network of USRP software radios, and demonstrates that it achieves tight phase coherence. Further, to illustrate AirShare's versatility, the paper uses it to deliver a coherent-radio abstraction on top of which it demonstrates two cooperative protocols: distributed MIMO, and distributed rate adaptation.
【Keywords】: MIMO communication; cooperative communication; protocols; software radio; AirShare versatility; USRP software radio network node; cooperative protocol; distributed MIMO; frequency offset; high-gain communication protocol; reference clock; seamless distributed coherent transmission; shared clock; Clocks; MIMO; Phase locked loops; Protocols; Radio transmitters; Wireless communication; Wireless sensor networks
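The effect AirShare eliminates can be shown numerically: two carriers with zero frequency offset add coherently (4x the single-transmitter power), while any residual offset makes the sum beat and average only 2x. The sample rate, carrier frequency, and offset below are arbitrary illustration values.

```python
import math

def combined_power(freq_offset_hz, fc=100_000.0, fs=1_000_000, n=10_000):
    """Mean power of the sum of two unit-amplitude carriers, where the
    second transmitter's carrier is off by freq_offset_hz."""
    acc = 0.0
    for i in range(n):
        t = i / fs
        s = math.cos(2 * math.pi * fc * t) \
            + math.cos(2 * math.pi * (fc + freq_offset_hz) * t)
        acc += s * s
    return acc / n

single = 0.5                             # mean power of one unit cosine
coherent = combined_power(0.0)           # shared reference clock: no offset
incoherent = combined_power(5_000.0)     # independent clocks: 5 kHz offset
# Coherent sum averages 4x the single-carrier power; with an offset the
# beating sum averages only 2x, i.e. the coherent combining gain is lost.
```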
【Paper Link】 【Pages】:1751-1759
【Authors】: Mohammad Amir Khojastepour ; Karthikeyan Sundaresan ; Sampath Rangarajan ; Mohammad Farajzadeh-Tehrani
【Abstract】: We investigate the open problem of characterizing the multiplexing gain offered by FD in a network of M cells (compared to the gain of two available on a single link). While self-interference cancellation is fundamental to realizing full-duplex (FD) capability, the more challenging problem in a network-wide deployment of FD communication is a new form of uplink-downlink interference (UDI), caused by the transmissions of uplink clients interfering with the downlink reception of other clients operating in the same frequency band during FD. We leverage spatial interference alignment (IA) as an effective approach to address UDI and characterize the scalability of FD's multiplexing gain (in terms of M) by providing a closed-form expression. To the best of our knowledge, this is the first characterization of FD's multiplexing gain in a multi-cell network. We also provide an IA construction that can achieve the best scaling possible. Further, we extend our results to practical settings with a limited number of clients and limited information sharing between access points.
【Keywords】: cellular radio; interference (signal); multiplexing; access points; closed-form expression; multi-cell networks; multiplexing gain; spatial interference alignment; wireless full-duplex; Antennas; Downlink; High definition video; Interference; MIMO; Multiplexing; Uplink
【Paper Link】 【Pages】:1760-1768
【Authors】: Eli A. Meirom ; Shie Mannor ; Ariel Orda
【Abstract】: We establish a network formation game for the Internet's Autonomous System (AS) interconnection topology. The game includes different types of players, accounting for the heterogeneity of ASs in the Internet. We incorporate reliability considerations in the player's utility function, and analyze static properties of the game as well as its dynamic evolution. We provide dynamic analysis of topological quantities, and explain the prevalence of some “network motifs” in the Internet graph. We assess our predictions with real-world data.
【Keywords】: Internet; computer games; graph theory; telecommunication network reliability; telecommunication network topology; Internet autonomous system interconnection topology; Internet graph; dynamic evolution; reliable networks; Computer network reliability; Cost function; Games; Internet; Reliability theory; Topology
【Paper Link】 【Pages】:1769-1777
【Authors】: Avhishek Chatterjee ; Lav R. Varshney ; Sriram Vishwanath
【Abstract】: Crowdsourcing of jobs to online freelance markets is rapidly gaining popularity. Most crowdsourcing platforms are uncontrolled and offer customers and freelancers the freedom to choose each other. This works well for unskilled jobs (e.g., image classification) with no specific quality requirement, since freelancers are functionally identical. For skilled jobs (e.g., software development) with specific requirements, however, this does not ensure that the maximum number of job requests is satisfied. In this work we determine the capacity of freelance markets, in terms of maximum satisfied job requests, and propose centralized schemes that achieve capacity. To ensure decentralized operation and freedom of choice for customers and freelancers, we propose simple schemes, compatible with the operation of current crowdsourcing platforms, that approximately achieve capacity. Further, for settings where job requests exceed capacity, we propose an optimal and fair scheme for declining jobs without making them wait.
【Keywords】: image classification; job specification; software engineering; current crowd-sourcing platforms; decentralized operation; decentralized schemes; fair scheme; fundamental limits; image classification; job crowdsourcing; maximum satisfied job requests; online freelance markets; optimal scheme; software development; unskilled jobs; work capacity; Approximation methods; Computers; Conferences; Crowdsourcing; Random variables; Resource management; Sociology
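The capacity notion above, the maximum number of simultaneously satisfiable job requests under skill constraints, reduces in the simplest static case to maximum bipartite matching between jobs and qualified freelancers. A minimal sketch of that primitive (the classic augmenting-path algorithm, not the paper's schemes; `compat` is an invented toy instance):

```python
def max_matching(compat, n_freelancers):
    """Maximum bipartite matching between jobs and freelancers.

    compat[j] lists the freelancers qualified for job j; each
    freelancer serves at most one job (Kuhn's augmenting paths).
    """
    match = [-1] * n_freelancers          # match[f] = job assigned to f

    def try_assign(j, seen):
        for f in compat[j]:
            if f not in seen:
                seen.add(f)
                # f is free, or f's current job can be reassigned elsewhere
                if match[f] == -1 or try_assign(match[f], seen):
                    match[f] = j
                    return True
        return False

    return sum(try_assign(j, set()) for j in range(len(compat)))

# Three jobs, two freelancers: at most two requests can be satisfied.
satisfied = max_matching([[0], [0, 1], [1]], 2)
```

The paper's decentralized schemes approximate this centralized optimum while preserving freedom of choice.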
【Paper Link】 【Pages】:1778-1786
【Authors】: Aemen Lodhi ; Nikolaos Laoutaris ; Amogh Dhamdhere ; Constantine Dovrolis
【Abstract】: Peering in the Internet interdomain network has long been considered a “black art”, understood in depth only by a select few peering experts, while the majority of the network operator community only scratches the surface, employing conventional rules of thumb to form peering links through ad hoc personal interactions. Why is peering considered a black art? What are the main sources of complexity in identifying potential peers, negotiating a stable peering relationship, and optimizing utility through peering? How do contemporary operational practices approach these problems? In this work we address these questions for Tier-2 Network Service Providers (NSPs). We identify and explore three major sources of complexity in peering: (a) the inability to predict traffic flows prior to link formation; (b) the inability to predict economic utility owing to a complex transit and peering pricing structure; and (c) the computational infeasibility of identifying the optimal set of peers because of the network structure. We show that framing optimal peer selection as a formal optimization problem and solving it is rendered infeasible by the nature of these problems. Our results for traffic complexity show that 15% of NSPs lose some fraction of customer traffic after peering. Additionally, our results for economic complexity show that 15% of NSPs lose utility after peering, approximately 50% of NSPs end up with higher cumulative costs with peering than with transit only, and only 10% of NSPs get paid-peering customers.
【Keywords】: Internet; computational complexity; peer-to-peer computing; telecommunication traffic; Internet interdomain network; Internet peering; ad hoc personal interactions; black art; complex transit structure; customer traffic fraction; economic complexity; economic utility prediction inability; formal optimization problem; framing optimal peer selection; peering links; peering pricing structure; potential peer identification; tier-2 network service providers; traffic complexity; traffic flow prediction inability; Complexity theory; Economics; Internet; Peer-to-peer computing; Ports (Computers); Pricing; Topology; Autonomous System interconnections; IXPs; Internet; economic utility; paid peering; settlement-free
【Paper Link】 【Pages】:1787-1795
【Authors】: Xiaofan He ; Huaiyu Dai ; Peng Ning
【Abstract】: With the advancement of modern technologies, the security battle between a legitimate system (LS) and an adversary is becoming increasingly sophisticated, involving complex interactions in unknown dynamic environments. Stochastic games (SG), together with multi-agent reinforcement learning (MARL), offer a systematic framework for the study of information warfare in current and emerging cyber-physical systems. In practical security games, each player usually has only incomplete information about the opponent, which induces information asymmetry. This work exploits information asymmetry from a new angle, considering how a player can use local information unknown to the opponent to its own advantage. Two new MARL algorithms, termed minimax-PDS and WoLF-PDS, are proposed, which enable the LS to learn and adapt faster in dynamic environments by exploiting its private local information. The proposed algorithms are provably convergent and rational, respectively. Numerical results are presented to show their effectiveness through two concrete anti-jamming examples.
【Keywords】: learning (artificial intelligence); multi-agent systems; security of data; stochastic games; LS; MARL; SG; WoLF-PDS; adaptation; concrete anti-jamming; cyber-physical systems; information asymmetry; information warfare; legitimate system; minimax-PDS; multiagent reinforcement learning; security games; stochastic game; unknown dynamic environments; Computers; Conferences; Games; Heuristic algorithms; Jamming; Security; Sensors
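The minimax solution concept underlying minimax-PDS can be illustrated on a toy zero-sum anti-jamming game. The sketch below uses plain fictitious play, not the paper's post-decision-state learning; the 2-channel payoff matrix is an invented example where the transmitter earns 1 whenever the jammer misses it:

```python
def fictitious_play(payoff, rounds=20000):
    """Fictitious play for a zero-sum game between a legitimate system
    (row, maximizer) and a jammer (column, minimizer).

    payoff[a][j] is the row player's reward.  Returns the row player's
    empirical strategy, which converges to a minimax strategy.
    """
    n, m = len(payoff), len(payoff[0])
    row_counts = [1] * n      # fictitious prior: one play of each action
    col_counts = [1] * m
    for _ in range(rounds):
        # each side best-responds to the opponent's empirical mixture
        a = max(range(n), key=lambda i:
                sum(payoff[i][j] * col_counts[j] for j in range(m)))
        j = min(range(m), key=lambda jj:
                sum(payoff[i][jj] * row_counts[i] for i in range(n)))
        row_counts[a] += 1
        col_counts[j] += 1
    total = sum(row_counts)
    return [c / total for c in row_counts]

# Two channels, reward 1 iff the jammer picks the other channel:
# the minimax strategy is to hop uniformly, (0.5, 0.5).
strategy = fictitious_play([[0, 1], [1, 0]])
```

By Robinson's theorem, the empirical frequencies converge to the mixed minimax equilibrium, here uniform channel hopping.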
【Paper Link】 【Pages】:1796-1804
【Authors】: Wei Wang ; Bei Liu ; Donghyun Kim ; Deying Li ; Jingyi Wang ; Yaolin Jiang
【Abstract】: Over the years, the virtual backbone has attracted much attention as a promising approach to deal with the broadcast storm problem in wireless networks. One popular way to construct a quality virtual backbone is to solve the minimum connected dominating set problem. However, a virtual backbone computed in this way is not resilient against topology changes, since the graph induced by the connected dominating set is only one-vertex-connected. As a result, the minimum k-connected m-dominating set problem was introduced to construct a fault-tolerant virtual backbone. Currently, the best known approximation algorithm for the problem in unit disk graphs assumes k ≤ 3 and m ≥ 1, and its performance ratio is 280 when k = m = 3. In this paper, we use a classical result from graph theory, Tutte decomposition, to design a new approximation algorithm for the problem in unit disk graphs for k ≤ 3 and m ≥ 3. In particular, the algorithm features a much simpler structure and a much smaller performance ratio, e.g., nearly 66 when k = m = 3. We also conduct simulations to evaluate the performance of our algorithm.
【Keywords】: approximation theory; fault tolerance; graph theory; set theory; wireless sensor networks; Tutte decomposition; constant approximation algorithm; fault-tolerance; induced graph; minimum 3-connected m-dominating set problem; minimum k-connected m-dominating set problem; one-vertex-connected set; performance ratio; topology change; unit disk graph; Algorithm design and analysis; Approximation algorithms; Approximation methods; Computers; Conferences; Particle separators; Wireless networks; 3-connected m-dominating set; Tutte decomposition; approximation algorithm; fault-tolerant; virtual backbone; wireless networks
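For intuition on the underlying problem, the classic greedy heuristic for a plain (1-connected, 1-dominating) dominating set on a unit disk graph is only a few lines; the paper's Tutte-decomposition algorithm for k-connected m-dominating sets is far more involved, so this is just the textbook baseline, on randomly generated points:

```python
import math
import random

def unit_disk_graph(points, radius):
    """Adjacency sets of the unit disk graph on 2-D points."""
    n = len(points)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) <= radius:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def greedy_dominating_set(adj):
    """Greedy minimum dominating set: repeatedly pick the node that
    dominates the most still-uncovered nodes (O(log n)-approximate)."""
    uncovered = set(adj)
    dom = set()
    while uncovered:
        v = max(adj, key=lambda u: len((adj[u] | {u}) & uncovered))
        dom.add(v)
        uncovered -= adj[v] | {v}
    return dom

rng = random.Random(7)
pts = [(rng.random(), rng.random()) for _ in range(60)]
g = unit_disk_graph(pts, 0.25)
d = greedy_dominating_set(g)
```

The fault-tolerant variant additionally requires the chosen set to stay connected and dominating after vertex failures, which is what drives the algorithmic difficulty the abstract describes.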
【Paper Link】 【Pages】:1805-1813
【Authors】: Peng-Jun Wan ; Fahad Al-dhelaan ; Xiaohua Jia ; Baowei Wang ; Guowen Xing
【Abstract】: Multi-packet reception (MPR) technology provides a means of boosting wireless network capacity without requiring additional spectrum. It has received widespread attention over the past two decades from both industry and academic researchers. Despite the huge promise and considerable attention, provable good algorithms for maximizing network capacity in MPR-capable wireless networks are missing in the state of the art. One major technical obstacle is due to the complicated non-binary nature of the link independence; something which appears intractable with existing graph-theoretic methods. In this paper, we present practical polynomial-time approximation algorithms for variants of capacity optimization problems in MPR-capable wireless networks which achieve constant approximation bounds for the first time ever. In addition, polynomial-time approximation schemes are developed for those variants in wireless networks with constant-bounded MPR capabilities.
【Keywords】: optimisation; polynomial approximation; radio networks; MPR-capable wireless networks; capacity optimization problems; constant approximation bounds; link independence; multi-packet reception technology; polynomial-time approximation algorithms; wireless network capacity; Algorithm design and analysis; Approximation algorithms; Approximation methods; Computers; Interference; Schedules; Wireless networks
【Paper Link】 【Pages】:1814-1821
【Authors】: Jesús Gómez-Vilardebó
【Abstract】: This paper investigates the problem of finding optimal paths in single-source single-destination accumulative multi-hop networks. We consider a single source that communicates with a single destination assisted by several relays through multiple hops. At each hop, only one node transmits, while the rest of the nodes receive the transmitted signal and store it after processing/decoding and mixing it with the signals received in previous hops. That is, we consider that terminals make use of advanced energy-accumulation transmission/reception techniques, such as maximal ratio combining of repetition codes or information accumulation with rateless codes. Accumulative techniques increase communication reliability, reduce energy consumption, and decrease latency. We investigate the properties that a routing metric must satisfy in these accumulative networks to guarantee that optimal paths can be computed with Dijkstra's algorithm. We model the problem of routing in an accumulative multi-hop network as the problem of routing in a hypergraph. We show that the optimality properties of traditional multi-hop networks (monotonicity and isotonicity) are no longer valid and derive a new set of sufficient conditions for optimality.
【Keywords】: diversity reception; relay networks (telecommunication); telecommunication network routing; accumulative multi-hop networks; hypergraph; information accumulation; maximal ratio combining; Computers; Conferences; High definition video; Measurement; Relays; Routing; Spread spectrum communication
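Plain Dijkstra over an additive metric, shown below for reference, is the primitive whose optimality conditions (monotonicity and isotonicity) the paper revisits for accumulative hypergraph routing; the toy weighted digraph is an invented example:

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src.  graph: {u: {v: weight}}.
    Correctness relies on the metric being monotone and isotone,
    exactly the properties the accumulative setting can violate."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

g = {'s': {'a': 1.0, 'b': 4.0},
     'a': {'b': 2.0, 'd': 7.0},
     'b': {'d': 1.0}}
dist = dijkstra(g, 's')
```

In the accumulative setting the "cost" of a hop depends on everything received so far, so the per-edge weight above must be replaced by a metric satisfying the paper's sufficient conditions before Dijkstra remains optimal.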
【Paper Link】 【Pages】:1822-1830
【Authors】: Jie Chuai ; Victor O. K. Li
【Abstract】: The minimum amount of information that should be supplied to transmitters to resolve traffic conflicts in a multiple access system is investigated in this paper. The arriving packets are modeled as the random points of a homogeneous Poisson point process distributed within a unit interval. The minimum information required is equal to the minimum entropy of a random partition that separates the points of the Poisson point process. Only a lower bound on this minimum was known from previous work. We provide an upper bound on this minimum entropy, and the gap with the existing lower bound is shown to be smaller than log2 e bits. The upper bound asymptotically achieves the minimum entropy required to resolve per unit traffic. We then analyze the control information used to resolve traffic conflicts in the splitting algorithm and in the slotted-ALOHA protocol, and identify their gaps with the theoretical bound.
【Keywords】: access protocols; entropy; radio transmitters; stochastic processes; telecommunication control; telecommunication traffic; arriving packets; control information; homogeneous Poisson point process; minimum entropy; multiple access communications; radio transmitters; random partition; slotted-ALOHA protocol; splitting algorithm; traffic conflicts; upper bound; Entropy; Mathematical model; Media Access Protocol; Throughput; Transmitters; Upper bound
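The splitting algorithm whose control cost the paper analyzes can be simulated directly. The sketch below implements the textbook binary-tree collision-resolution scheme (a standard variant, not necessarily the exact protocol analyzed) and estimates the expected number of slots to resolve a two-packet collision, which is 5 for this variant:

```python
import random

def resolve_slots(k, rng):
    """Slots used by binary-tree (Capetanakis-style) splitting to
    resolve a collision among k packets: each collider flips a fair
    coin, subset 0 retransmits first, then subset 1, recursively."""
    if k <= 1:
        return 1            # an idle or a success slot
    left = sum(rng.random() < 0.5 for _ in range(k))
    return 1 + resolve_slots(left, rng) + resolve_slots(k - left, rng)

rng = random.Random(42)
trials = 20000
avg2 = sum(resolve_slots(2, rng) for _ in range(trials)) / trials
```

Each slot's ternary feedback (idle / success / collision) is the control information whose entropy the paper compares against the information-theoretic minimum.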
【Paper Link】 【Pages】:1831-1839
【Authors】: François Baccelli ; Avhishek Chatterjee ; Sriram Vishwanath
【Abstract】: Traditional models in opinion dynamics involve agents updating their opinions based on the opinions of their neighbors in a static social-graph, regardless of their differences in opinions. In contrast, the bounded confidence opinion dynamics does not presume a static interaction graph, and instead models interactions between those agents that share similar opinions (i.e., are close to one another, capturing online discussion groups and conventional meetings). We generalize the bounded confidence opinion dynamics model by incorporating pairwise stochastic interactions based on opinion differences as well as the self or endogenous evolution of the agent opinions, which is represented by a random process. We analytically characterize the conditions under which this stochastic dynamics is stable in an appropriate sense. This characterization relates well to what is observed in social systems. Moreover, this generalization sheds light on dynamics that combine aspects of graph-based updates and bounded confidence models.
【Keywords】: graph theory; random processes; social sciences; stochastic processes; agent opinion evolution; graph-based update; opinion difference; pairwise stochastic bounded confidence opinion dynamics; pairwise stochastic interaction; random process; social system; Biological system modeling; Computers; Conferences; Mathematical model; Noise; Stability analysis; Stochastic processes
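A minimal simulation of pairwise bounded-confidence dynamics (Deffuant-style updates, with optional additive noise standing in for the paper's endogenous self-evolution; all parameter values are invented) looks like this:

```python
import random

def bounded_confidence(opinions, eps, mu=0.5, steps=20000,
                       noise=0.0, seed=0):
    """Pairwise bounded-confidence dynamics: a uniformly random pair
    interacts only if their opinions differ by less than eps, and each
    then moves a fraction mu toward the other.  Gaussian noise models
    endogenous opinion evolution."""
    rng = random.Random(seed)
    x = list(opinions)
    n = len(x)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j and abs(x[i] - x[j]) < eps:
            d = x[j] - x[i]
            x[i] += mu * d
            x[j] -= mu * d
        if noise > 0.0:
            k = rng.randrange(n)
            x[k] += rng.gauss(0.0, noise)
    return x

# Full confidence (eps covers the whole spread) and no noise
# contracts to consensus at the initial mean.
final = bounded_confidence([i / 9 for i in range(10)], eps=1.0)
```

Shrinking eps produces opinion clusters, and adding noise raises the stability questions the paper characterizes analytically.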
【Paper Link】 【Pages】:1840-1848
【Authors】: Chul-Ho Lee ; Do Young Eun
【Abstract】: The Metropolis-Hastings (MH) algorithm, in addition to its application for Markov Chain Monte Carlo sampling or simulation, has been popularly used for constructing a random walk that achieves a given, desired stationary distribution over a graph. Applications include crawling-based sampling of large graphs or online social networks, statistical estimation or inference from massive scale of networked data, efficient searching algorithms in unstructured peer-to-peer networks, randomized routing and movement strategies in wireless sensor networks, to list a few. Despite its versatility, the MH algorithm often causes self-transitions of its resulting random walk at some nodes, which is not efficient in the sense of the Peskun ordering - a partial order between off-diagonal elements of transition matrices of two different Markov chains, and in turn results in deficient performance in terms of asymptotic variance of time averages and expected hitting times with slower speed of convergence. To alleviate this problem, we present simple yet effective distributed algorithms that are guaranteed to improve the MH algorithm over time when running on a graph, and eventually reach `efficiency-optimality', while ensuring the same desired stationary distribution throughout.
【Keywords】: Markov processes; Monte Carlo methods; matrix algebra; network theory (graphs); peer-to-peer computing; randomised algorithms; sampling methods; search problems; wireless sensor networks; MH algorithm; Markov chain Monte Carlo sampling; Metropolis-Hastings algorithm; asymptotic variance; crawling-based sampling; distributed algorithm; efficiency optimality; expected hitting times; graph theory; movement strategy; off-diagonal elements; online social network; random walk; randomized routing; searching algorithm; stationary distribution throughout; statistical estimation; time averages; transition matrices; unstructured peer-to-peer networks; wireless sensor networks; Computers; Conferences; Distributed algorithms; Eigenvalues and eigenfunctions; Markov processes; Peer-to-peer computing; Proposals
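The Metropolis-Hastings construction the abstract refers to is compact: with a uniform-over-neighbors proposal, the acceptance ratio leaves any prescribed distribution pi stationary, and rejected proposals pile up as the self-transitions the paper sets out to remove. A textbook sketch on a 3-node path graph:

```python
def mh_transition_matrix(adj, pi):
    """Metropolis-Hastings random walk on a graph with target
    stationary distribution pi, proposing uniformly over neighbors.

    The leftover probability mass P[u][u] is the self-transition
    (rejection) rate that hurts efficiency in the Peskun sense."""
    n = len(adj)
    P = [[0.0] * n for _ in range(n)]
    for u in range(n):
        du = len(adj[u])
        for v in adj[u]:
            dv = len(adj[v])
            # propose v w.p. 1/du, accept w.p. min(1, pi(v)du / pi(u)dv)
            P[u][v] = (1.0 / du) * min(1.0, (pi[v] * du) / (pi[u] * dv))
        P[u][u] = 1.0 - sum(P[u])
    return P

adj = {0: [1], 1: [0, 2], 2: [1]}       # path graph 0 - 1 - 2
pi = [0.5, 0.3, 0.2]
P = mh_transition_matrix(adj, pi)
```

Detailed balance (pi[u]P[u][v] = min(pi[u]/du, pi[v]/dv), which is symmetric in u and v) guarantees pi is stationary; note P[0][0] > 0, the kind of self-loop the paper's distributed algorithms progressively eliminate.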
【Paper Link】 【Pages】:1849-1857
【Authors】: AmirMahdi Ahmadinejad ; Sina Dehghani ; MohammadTaghi Hajiaghayi ; Hamid Mahini ; Saeed Seddighin ; Sadra Yazdanbod
【Abstract】: People make decisions and express their opinions according to their communities. A natural idea for controlling the diffusion of a behavior is to find influential people and employ them to spread a desired behavior. We investigate an influencing problem in which individuals' behaviors are affected by their friends in an opinion formation process. Our goal is to design efficient algorithms for finding opinion leaders such that changing their opinions has a great impact on the overall external behaviors in society. We study directed social networks and define a set of problems such as maximizing the sum of individuals' behaviors or maximizing the number of individuals whose external behaviors are above a threshold. We discuss the complexity of the defined problems and design polynomial-time optimum algorithms for the variants that are not NP-hard. We also propose polynomial-time approximation algorithms with guaranteed performance and prove inapproximability results for the NP-hard variants of these problems. Furthermore, we run simulations on real-world social networks and show that our proposed algorithm outperforms classical algorithms such as the degree-based, closeness-based, and PageRank-based algorithms.
【Keywords】: behavioural sciences computing; polynomial approximation; social networking (online); behavior diffusion; closeness-based algorithm; degree-based algorithm; directed social networks; influential people; opinion formation process; opinion leaders; overall external behaviors; pagerank-based algorithm; polynomial-time approximation algorithms; polynomial-time optimum algorithms; real-world social networks; Algorithm design and analysis; Approximation algorithms; Approximation methods; Cost function; Games; Social network services; Stochastic processes
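As a toy illustration of the leader-selection objective (not the paper's model, which is directed and threshold-based), one can greedily pin nodes to opinion 1 in a linear Friedkin-Johnsen-style averaging process and pick whichever node maximizes the resulting opinion sum; on a star graph the greedy choice is the hub:

```python
def steady_opinions(adj, fixed, iters=500):
    """Linear opinion formation with innate opinion 0 at every node;
    nodes in `fixed` are pinned at opinion 1.  Each free node averages
    its innate opinion with its neighbors' current opinions."""
    n = len(adj)
    x = [1.0 if i in fixed else 0.0 for i in range(n)]
    for _ in range(iters):
        x = [1.0 if i in fixed else
             sum(x[j] for j in adj[i]) / (1 + len(adj[i]))
             for i in range(n)]
    return x

def greedy_leaders(adj, k):
    """Greedily pick k opinion leaders maximizing the opinion sum."""
    chosen = set()
    for _ in range(k):
        best = max((i for i in range(len(adj)) if i not in chosen),
                   key=lambda i: sum(steady_opinions(adj, chosen | {i})))
        chosen.add(best)
    return chosen

star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
leaders = greedy_leaders(star, 1)
```

Pinning the hub lifts every leaf to 0.5 (opinion sum 3), whereas pinning a leaf barely moves the rest, which is why the greedy step selects node 0.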
【Paper Link】 【Pages】:1858-1866
【Authors】: Li Yan ; Haiying Shen ; Kang Chen
【Abstract】: Node searching in delay tolerant networks (DTNs), in which a locator node physically finds a target node, is of great importance for many applications. In the previous distributed node searching method, a locator traces the target along its movement path starting from its most frequently visited location. For this purpose, nodes leave traces during their movements and also store their long-term movement patterns in their frequently visited locations (i.e., preferred locations). However, such tracing leads to long delays and high overhead on the locator due to long-distance movement. Our trace data study confirms these problems and provides the foundation for our design of a new node searching method, called the target-oriented method (TSearch). By leveraging social network properties, TSearch aims to enable a locator to move directly towards the target. Nodes create encounter records (ERs) indicating the locations and times of their encounters and make the ERs easily accessible to locators through message exchanges or a hierarchical structure. In node searching, a locator follows the target's latest ER, the latest ERs of the target's friends (i.e., frequently met nodes), and its preferred locations, in that order. Extensive trace-driven and real-world experiments show that TSearch achieves a significantly higher success rate and lower delay in node searching compared with previous methods.
【Keywords】: delay tolerant networks; mobile computing; mobility management (mobile radio); search problems; social networking (online); DTN; TSearch; delay tolerant networks; distributed node searching method; encounter records; locator node; social network properties; target-oriented low-delay node searching; target-oriented method; Computers; Conferences
【Paper Link】 【Pages】:1867-1875
【Authors】: Han Deng ; I-Hong Hou
【Abstract】: WiFi offloading, where mobile users opportunistically obtain data through WiFi rather than through cellular networks, is a promising technique to greatly improve spectrum efficiency and reduce cellular network congestion. We consider a system where the service provider deploys multiple WiFi hotspots to offload mobile traffic, and study scheduling policies that maximize the amount of offloaded data. Since user movements are unpredictable, we focus on online scheduling policies in which APs have no knowledge of the users' mobility patterns. We study the performance of online policies by comparing them against the optimal offline policy. We prove that any work-conserving policy offloads at least half as much data as the offline policy, and then propose an online policy that can offload (e-1)/e as much data as the offline policy. We further study the case where the service provider can increase the WiFi capacity so as to provide guarantees on the amount of offloaded data. We propose a simple online policy and prove that it needs only half as much capacity as the current mechanism to provide the same performance guarantee.
【Keywords】: cellular radio; mobile radio; telecommunication congestion control; telecommunication scheduling; wireless LAN; Wi-Fi offloading; cellular network congestion network; delayed mobile offloading; online scheduling policy; spectrum efficiency; work conserving policy; Computers; Conferences; IEEE 802.11 Standard; Linear programming; Mobile communication; Optimized production technology; Schedules
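A work-conserving policy of the kind analyzed above is easy to state in code: in every slot, serve any present user with remaining demand. The sketch below breaks ties by earliest departure (one reasonable work-conserving rule, not the paper's (e-1)/e policy) on an invented two-user instance:

```python
def offloaded(users, horizon):
    """Slot-by-slot work-conserving AP schedule.

    users: list of (start, end, demand); the AP serves one unit per
    slot to some user that is in range and still has demand, breaking
    ties by earliest departure.  Returns total data offloaded."""
    rem = [d for (_, _, d) in users]
    total = 0
    for t in range(horizon):
        present = [i for i, (s, e, _) in enumerate(users)
                   if s <= t < e and rem[i] > 0]
        if present:
            i = min(present, key=lambda i: users[i][1])  # leaves soonest
            rem[i] -= 1
            total += 1
    return total

# User A is in range only for slots 0-1; serving it first loses nothing,
# so all 4 units of demand are offloaded.
data = offloaded([(0, 2, 2), (0, 4, 2)], horizon=4)
```

The paper's guarantee says any such work-conserving rule offloads at least half of what an offline scheduler with full knowledge of the mobility pattern could.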
【Paper Link】 【Pages】:1876-1884
【Abstract】: Mobile Cloud Computing (MCC) is of particular importance in addressing the contradiction between the increasing complexity of user applications and the limited battery lifespan of mobile devices, by offloading computational workloads from local devices to the remote cloud. Current offloading schemes either require programmer annotations, which restricts their wide application, or transmit too much unnecessary data, resulting in bandwidth and energy waste. In this paper, we propose a novel method-level offloading methodology that offloads local computational workload with as little data transmission as possible. Our basic idea is to identify the contexts that are necessary for a method's execution by parsing application binaries in advance, and to apply this parsing result to selectively migrate heap data while still allowing successful remote method execution. Our implementation of this design is built upon the Dalvik Virtual Machine. Our experiments and evaluation against applications downloaded from Google Play show that our approach can reduce data transmission significantly compared to existing schemes.
【Keywords】: cloud computing; mobile computing; program compilers; virtual machines; Dalvik virtual machine; Google Play; MCC; code offloading; data transmission; least context migration; local computational workload offloading; mobile cloud computing; parsing application binaries; Androids; Context; Humanoid robots; Instruction sets; Java; Registers
【Paper Link】 【Pages】:1885-1893
【Authors】: Ozlem Bilgir Yetim ; Margaret Martonosi
【Abstract】: Today's worldwide mobile data traffic is roughly 18× larger than the full Internet traffic in 2000, and continued large growth is expected. High mobile data usage has implications both for users and providers. For individual users, relying on cellular data connectivity incurs high cellular data fees. For cellular network providers, high mobile data usage requires expensive, ongoing infrastructure upgrades. Cellular data usage can be reduced by offloading to WiFi when available. If not available, prior work has considered delaying transmissions to wait for WiFi availability. While exploiting such application delay tolerance offers significant energy and performance leverage for data offloading and other techniques, a key question is: how long to wait? Prior work does not discuss how to estimate application delay tolerance without explicit help from programmers, nor how to adjust the estimate dynamically. This work proposes, implements, and evaluates four schemes to dynamically and adaptively deduce an application's delay tolerance. These schemes (Adaptive, Decision Tree-Based, Hybrid, and Lazy) are low-overhead and effective. In our experiments, they cut cellular usage by 2× or more compared to non-delay-tolerant approaches. Furthermore, our dynamically adaptive decision schemes achieve up to 15% further cellular data reduction compared to fixed static delay tolerance values.
【Keywords】: Internet; cellular radio; delays; telecommunication traffic; Internet traffic; WiFi availability; cellular data connectivity; cellular data fees; cellular data reduction; cellular network providers; delay tolerance; dynamic adaptive techniques; fixed static delay tolerance values; learning application delay tolerance; mobile data offloading; mobile data traffic; Decision trees; Delays; Electronic mail; IEEE 802.11 Standard; Mobile communication; Receivers
【Paper Link】 【Pages】:1894-1902
【Authors】: Yi-Hsuan Kao ; Bhaskar Krishnamachari ; Moo-Ryong Ra ; Fan Bai
【Abstract】: With mobile devices increasingly able to connect to cloud servers from anywhere, resource-constrained devices can potentially perform offloading of computational tasks to either improve resource usage or improve performance. It is of interest to find optimal assignments of tasks to local and remote devices that take into account the application-specific profile, the availability of computational resources, and link connectivity, and that balance the energy consumption costs of mobile devices against the latency of delay-sensitive applications. Given an application described by a task dependency graph, we formulate an optimization problem to minimize the latency while meeting prescribed resource utilization constraints. Different from most existing works, which either rely on an integer linear programming formulation (NP-hard and not applicable to general task dependency graphs with latency metrics) or on intuitively derived heuristics that offer no theoretical performance guarantees, we propose Hermes, a novel fully polynomial time approximation scheme (FPTAS) to solve this problem. Hermes provides a solution with latency no more than (1 + ε) times the minimum while incurring complexity that is polynomial in the problem size and 1/ε. We evaluate the performance using real data sets collected from several benchmarks, and show that Hermes improves the latency by 16% (36% for larger-scale applications) compared to a previously published heuristic, while increasing CPU computing time by only 0.4% of the overall latency.
【Keywords】: cloud computing; computational complexity; file servers; graph theory; integer programming; linear programming; mobile computing; mobile handsets; resource allocation; CPU computing time; FPTAS algorithm; Hermes; application-specific profile; cloud servers; computational resources; computational task offloading; delay-sensitive applications; energy consumption costs; fully polynomial time approximation scheme algorithm; integer linear programming formulation; latency metrics; latency minimization; latency optimal task assignment; link connectivity; mobile devices; optimal task assignments; polynomial complexity; remote devices; resource utilization constraints; resource-constrained devices; resource-constrained mobile computing; task dependency graph; Approximation algorithms; Approximation methods; Heuristic algorithms; Optimization; Performance evaluation; Polynomials
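The placement trade-off Hermes optimizes can be seen on the special case of a serial task chain, where a simple dynamic program is exact (Hermes itself targets general task dependency graphs via an FPTAS; the timing numbers here are invented):

```python
def min_latency(local, remote, tx):
    """Optimal local/remote placement for a chain of tasks.

    local[i] / remote[i]: execution time of task i on the device /
    in the cloud; tx: transfer time paid whenever consecutive tasks
    change location.  Input starts on, and output must return to,
    the device."""
    # dp = (best latency with current task local, ... remote)
    dp = [local[0], remote[0] + tx]
    for i in range(1, len(local)):
        dp = [min(dp[0], dp[1] + tx) + local[i],
              min(dp[0] + tx, dp[1]) + remote[i]]
    return min(dp[0], dp[1] + tx)

# Heavy tasks, fast cloud, cheap link: offloading both tasks wins
# (tx + 1 + 1 + tx = 4 versus 10 fully local).
lat = min_latency([5, 5], [1, 1], tx=1)
```

On general DAGs the state space blows up with the number of concurrent cut edges, which is why Hermes resorts to an approximation scheme rather than exact DP.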
【Paper Link】 【Pages】:1903-1911
【Authors】: Kai Chen ; Xitao Wen ; Xingyu Ma ; Yan Chen ; Yong Xia ; Chengchen Hu ; Qunfeng Dong
【Abstract】: Optical data center networks (DCNs) are becoming increasingly attractive due to their technological strengths compared to traditional electrical networks. However, prior optical DCNs are either hard to scale, vulnerable to a single point of failure, or provide limited network bisection bandwidth for many practical DCN workloads. To this end, we present WaveCube, a scalable, fault-tolerant, high-performance optical DCN architecture. To scale, WaveCube removes MEMS, a potential bottleneck, from its design. WaveCube is fault-tolerant since it has no single point of failure and there are multiple node-disjoint parallel paths between any pair of Top-of-Rack (ToR) switches. WaveCube delivers high performance by exploiting multi-pathing and dynamic link bandwidth along the path. Our extensive evaluation results show that WaveCube outperforms previous optical DCNs by up to 400% and delivers network bisection bandwidth that is 70%-85% of an ideal non-blocking network under both realistic and synthetic traffic patterns. WaveCube's performance degrades gracefully under failures: it drops only 20% even with 20% of links cut. WaveCube also holds promise in practice: its wiring complexity is orders of magnitude lower than that of Fattree, BCube, and c-Through at large scale, and its power consumption is 35% of theirs.
【Keywords】: computer centres; computer networks; fault tolerance; microswitches; optical switches; telecommunication network topology; BCube; DCN workloads; Fattree; MEMS; ToR switches; WaveCube; c-Through; dynamic link bandwidth; electrical networks; fault-tolerant optical data center architecture; high-performance optical data center architecture; network bisection bandwidth; node-disjoint parallel paths; nonblocking network; optical DCN; optical data center networks; power consumption; synthetic traffic patterns; top-of-rack switches; Bandwidth; Fault tolerance; Micromechanical devices; Optical fiber networks; Optical switches; Ports (Computers); Topology
【Paper Link】 【Pages】:1912-1920
【Authors】: Feiyang Liu ; Haibo Zhang ; Yawen Chen ; Zhiyi Huang ; Huaxi Gu
【Abstract】: Optical Network on Chip (ONoC) is a promising technology for the next-generation many-core chip multiprocessors owing to its tremendous advantages in low power consumption, low communication delay, and high bandwidth. In this paper we present WRH-ONoC, a novel wavelength-reused hierarchical architecture that is capable of interconnecting thousands of cores using a limited number of wavelengths while providing extremely high-throughput data communication between connected cores. In WRH-ONoC, the cores are divided into small subsystems that are interconnected using multiple λ-routers and gateways in a hierarchical manner. Each λ-router can provide non-blocking parallel communication among the directly connected cores or gateways, and all λ-routers can reuse the limited number of available wavelengths. Communications between cores in different subsystems are routed via gateways in which optical signals can change their wavelengths via optical-electrical signal conversions. For a given number of cores, we give the minimum number of levels, λ-routers, and gateways required to interconnect these cores, and derive the expected end-to-end data communication delay under the Uniform-Poisson traffic pattern. Both theoretical analysis and simulation results demonstrate that WRH-ONoC can achieve significant improvement on performance and reduction on hardware cost in comparison with the existing solutions.
【Keywords】: internetworking; network-on-chip; optical fibre networks; optical interconnections; stochastic processes; telecommunication power management; telecommunication traffic; wavelength assignment; WRH-ONoC; cores interconnecting; high bandwidth; high-throughput data communication; low communication delay; low power consumption; multiple λ-routers; multiple gateways; next generation many-core chip multiprocessor; nonblocking parallel communication; optical network on chip; optical-electrical signal conversion; uniform Poisson traffic pattern; wavelength-reused hierarchical architecture; High-speed optical techniques; Logic gates; Optical buffering; Optical interconnections; Optical resonators; Optical waveguides; Ports (Computers); λ-Router; ONoC; On-Chip Communication
【Paper Link】 【Pages】:1921-1929
【Authors】: Zizhong Cao ; Paul Claisse ; René-Jean Essiambre ; Murali S. Kodialam ; T. V. Lakshman
【Abstract】: It is well established that physical layer impairments significantly affect the performance of optical networks. The management of these impairments is critical for successful transmission, and may significantly affect network layer routing decisions. Hence the traditional divide-and-conquer layered approach is sub-optimal, which has led to work on cross-layer techniques for routing in optical networks. Apart from fiber loss, one critical physical layer impairment that limits the capacity of optical networks is fiber nonlinearity. Handling nonlinearity introduces significant complexity to the traditional cross-layer approaches. We formulate and solve a joint routing and power control problem to optimize the system throughput that takes into consideration both fiber loss and nonlinearity. The joint power control and routing problem considered is a nonlinear integer programming problem. By characterizing the feasible solution space of the power control problem we find a set of universal power settings that transforms the complex power control and routing problem into a constrained path routing problem. We then propose an efficient Fully Polynomial Time Approximation Scheme (FPTAS) to solve the constrained path routing problem. Simulation results show that our proposed algorithm significantly improves network throughput and greatly outperforms greedy heuristics by providing a guaranteed performance bound.
【Keywords】: integer programming; nonlinear programming; optical fibre networks; polynomial approximation; power control; telecommunication control; telecommunication network routing; complex power control; constrained path routing problem; cross-layer techniques; divide-and-conquer layered approach; fiber loss; fiber nonlinearity; fully polynomial time approximation scheme; joint routing problem; network layer routing decisions; nonlinear integer programming problem; optical networks; physical layer impairments; power control problem; universal power settings; Nonlinear optics; Optical fiber networks; Optical noise; Physical layer; Power control; Routing; Signal to noise ratio
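The constrained path routing problem mentioned in the abstract can be made concrete with a small pseudo-polynomial dynamic program. This is only an illustration of the problem class the FPTAS approximates, not the paper's algorithm; the edge tuples and the integer "impairment" budget are assumptions for the sketch:

```python
def constrained_shortest_path(edges, n, src, dst, budget):
    """Pseudo-polynomial DP for a constrained path problem: minimize
    cost subject to a total integer 'impairment' budget.

    edges: list of (u, v, cost, impairment) tuples over nodes 0..n-1.
    Illustrates the problem class the FPTAS approximates, not the FPTAS.
    """
    INF = float("inf")
    # best[v][w] = min cost of reaching v using total impairment <= w
    best = [[INF] * (budget + 1) for _ in range(n)]
    for w in range(budget + 1):
        best[src][w] = 0.0
    # Bellman-Ford-style relaxation: simple paths have at most n-1 edges.
    for _ in range(n - 1):
        for u, v, c, d in edges:
            for w in range(d, budget + 1):
                if best[u][w - d] + c < best[v][w]:
                    best[v][w] = best[u][w - d] + c
    return best[dst][budget]
```

The FPTAS in the paper avoids this pseudo-polynomial dependence on the budget by scaling and rounding; the DP above only shows what "constrained path routing" asks for.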
【Paper Link】 【Pages】:1930-1938
【Authors】: Yiting Xia ; T. S. Eugene Ng ; Xiaoye Steven Sun
【Abstract】: Multicast data dissemination is the performance bottleneck for high-performance data analytics applications in cluster computing, because terabytes of data need to be distributed routinely from a single data source to hundreds of computing servers. The state-of-the-art solutions for delivering these massive data sets all rely on application-layer overlays, which suffer from inherent performance limitations. This paper presents Blast, a system for accelerating data analytics applications by optical multicast. Blast leverages passive optical power splitting to duplicate data at line rate on a physical-layer broadcast medium separate from the packet-switched network core. We implement Blast on a small-scale hardware testbed. Multicast transmission can start 33ms after an application issues the request, resulting in a very small control overhead. We evaluate Blast's performance at the scale of thousands of servers through simulation. Using only a 10Gbps optical uplink per rack, Blast achieves up to 102× better performance than the state-of-the-art solutions even when they are used over a non-blocking core network with a 400Gbps uplink per rack.
【Keywords】: data analysis; multicast communication; optical fibre networks; Blast; bit rate 400 Gbit/s; cluster computing; high-performance data analytics application acceleration; massive data sets; multicast data dissemination; nonblocking core network; optical multicast communication; packet switched network core; Adaptive optics; Optical fiber networks; Optical packet switching; Optical receivers; Optical sensors; Optical switches; Unicast
【Paper Link】 【Pages】:1939-1947
【Authors】: Muhammad Shahzad ; Alex X. Liu
【Abstract】: RFID systems have been deployed to detect missing products by affixing them with cheap passive RFID tags and monitoring them with RFID readers. Existing missing tag detection protocols require the tag population to contain only those tags whose IDs are already known to the reader. However, in reality, tag populations often contain tags with unknown IDs, called unexpected tags, which cause unexpected false positives, i.e., missing tags being detected as present. We take the first step towards addressing the problem of detecting the missing tags from a population that contains unexpected tags. Our protocol, RUN, mitigates the adverse effects of unexpected false positives by executing multiple frames with different seeds. It minimizes the missing tag detection time by first estimating the number of unexpected tags and then using it along with the false positive probability to obtain optimal frame sizes and the number of times Aloha frames should be executed. RUN works with multiple readers with overlapping regions. It is easy to deploy because it is implemented on readers as a software module and does not require modifications to tags or to the communication protocol between tags and readers. We implemented RUN along with four major missing tag detection protocols and the fastest tag ID collection protocol and compared them side-by-side. Our experimental results show that RUN always achieves the required reliability whereas the best existing protocol achieves a maximum reliability of only 67%.
【Keywords】: protocols; radiofrequency identification; telecommunication network reliability; Aloha frames; RFID systems; communication protocol; fast detection; missing RFID tags; missing products; passive RFID tags; reliable detection; software module; tag ID collection protocol; tag detection protocols; Probabilistic logic; Protocols; Radiofrequency identification; Reliability; Sociology; Standards; Statistics
【Paper Link】 【Pages】:1948-1956
【Authors】: Jia Liu ; Bin Xiao ; Shigang Chen ; Feng Zhu ; Lijun Chen
【Abstract】: In RFID systems, the grouping problem is to efficiently group all tags according to a given partition such that tags in the same group will have the same group ID. Unlike previous research on the unicast transmission from a reader to a tag, grouping provides a fundamental mechanism for efficient multicast transmissions and aggregate queries in large RFID-enabled applications. A message can be transmitted to a group of m tags simultaneously in multicast, which improves the efficiency by m times compared with unicast. We study fast grouping protocols in large RFID systems. To the best of our knowledge, this is the first attempt to tackle this practically important yet uninvestigated problem. We start with a straightforward solution called the Enhanced Polling Grouping (EPG) protocol. We then propose a time-efficient FIltering Grouping (FIG) protocol that uses Bloom filters to remove the costly ID transmissions. We point out the limitation of the Bloom-filter based solution due to its intrinsic false positive problem, which leads to our final ConCurrent Grouping (CCG) protocol. With a drastically different design, CCG is able to outperform FIG by exploiting collisions to inform multiple tags of their group ID simultaneously and by removing any wasteful slots in its frame-based execution. Simulation results demonstrate that our best protocol CCG can reduce the execution time by a factor of 11 compared with a baseline polling protocol.
【Keywords】: data structures; protocols; radiofrequency identification; Bloom filters; CCG protocol; ConCurrent Grouping; EPG protocol; FIG protocol; RFID enabled applications; RFID grouping protocols; RFID systems; aggregate queries; baseline polling protocol; enhanced polling grouping; group ID; grouping problem; multicast transmissions; reader; time-efficient filtering grouping; unicast transmission; Computers; Conferences; Filtering; Labeling; Protocols; Radiofrequency identification; Unicast
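The intrinsic false-positive behavior of Bloom filters, which the abstract cites as FIG's limitation, can be seen in a generic sketch. This is an illustrative textbook construction, not the paper's implementation; the parameters m and k and the way indices are derived from a SHA-256 digest are assumptions:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions over an m-bit array."""

    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item):
        # Derive k indices from one SHA-256 digest (illustrative; k <= 8).
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.k):
            yield int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def query(self, item):
        # May report a false positive, never a false negative --
        # the intrinsic limitation the abstract attributes to FIG.
        return all(self.bits[pos] for pos in self._positions(item))
```

A tag ID that was never added can still hit k already-set bits and be wrongly reported present, which is why CCG abandons the filter-based design.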
【Paper Link】 【Pages】:1957-1965
【Authors】: Yuxiao Hou ; Jiajue Ou ; Yuanqing Zheng ; Mo Li
【Abstract】: Estimating the number of RFID tags is a fundamental operation in RFID systems and has recently attracted wide attention. Despite the subtleties in their designs, previous methods estimate the tag cardinality from slot measurements, which distinguish idle and busy slots and, based on that, derive the cardinality following some probability model. In order to fundamentally improve the counting efficiency, in this paper we introduce PLACE, a physical layer based cardinality estimator. We show that it is possible to extract more information and infer integer states from the same slots in RFID communications. We propose a joint estimator that optimally combines multiple sub-estimators, each of which independently counts the number of tags with different inferred PHY states. Extensive experiments based on the GNURadio/USRP platform and large-scale simulations demonstrate that PLACE achieves approximately 3-4× performance improvement over state-of-the-art cardinality estimation approaches.
【Keywords】: probability; radiofrequency identification; GNURadio/USRP platform; PLACE; RFID communications; cardinality estimator; large-scale RFID systems; large-scale simulations; physical layer cardinality estimation; probability models; Accuracy; Clustering algorithms; Computers; Estimation; Noise; Physical layer; Radiofrequency identification
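The conventional slot-level approach that PLACE improves upon can be sketched as follows: each tag replies in a uniformly random slot of a frame, the idle-slot fraction concentrates around e^(-n/f), and inverting that gives an estimate of n. This is a generic textbook-style estimator, not PLACE itself; the frame size and random seed are arbitrary:

```python
import math
import random

def simulate_frame(n_tags, f, rng):
    """Each tag replies in one uniformly chosen slot; return the idle-slot count."""
    slots = [0] * f
    for _ in range(n_tags):
        slots[rng.randrange(f)] += 1
    return sum(1 for s in slots if s == 0)

def estimate_cardinality(idle, f):
    # E[idle/f] ~= exp(-n/f) for n tags, so invert: n_hat = -f * ln(idle/f).
    if idle == 0:
        return float("inf")  # frame too small for this estimator
    return -f * math.log(idle / f)

# Average the estimate over 50 frames of 4096 slots with 1000 tags.
rng = random.Random(7)
f = 4096
est = sum(estimate_cardinality(simulate_frame(1000, f, rng), f)
          for _ in range(50)) / 50
```

PLACE's point is that the binary idle/busy observation above discards information; inferring richer integer states per slot at the PHY layer yields more accurate sub-estimators from the same frames.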
【Paper Link】 【Pages】:1966-1974
【Authors】: Lei Yang ; Pai Peng ; Fan Dang ; Cheng Wang ; Xiang-Yang Li ; Yunhao Liu
【Abstract】: RFID has been widely adopted as an effective method for anti-counterfeiting. Legacy systems based on security protocols are either too heavy to be affordable by passive tags or suffer from various protocol-layer attacks, e.g. reverse engineering, cloning, and side-channel attacks. In this work, we present a novel anti-counterfeiting system, TagPrint, using COTS RFID tags and readers. Achieving low-cost and offline genuineness validation utilizing passive tags has been a daunting task. Our system achieves these three goals by leveraging a few federated tags' fingerprints and geometric relationships. In TagPrint, we exploit a new kind of fingerprint, called phase fingerprint, extracted from the phase value of the backscattered signal provided by the COTS RFID readers. To further solve the separation challenge, we devise a geometric solution to validate genuineness. We have implemented a prototype of TagPrint using COTS RFID devices. The system has been tested extensively over 6,000 tags. The results show that our new fingerprint closely fits a uniform distribution and the system achieves a surprising Equal Error Rate of 0.1% for anti-counterfeiting.
【Keywords】: access protocols; error analysis; radiofrequency identification; COTS RFID readers; COTS RFID tags; TagPrint; anti-counterfeiting system; backscattered signal; equal error rate; federated RFID tags fingerprints; geometric relationships; legacy systems; low-cost offline genuineness validation; passive tags; phase fingerprint; protocol-layer attacks; security protocol; Antennas; Counterfeiting; Cryptography; Fingerprint recognition; Phase measurement; Radiofrequency identification; Anti-counterfeiting; Phase fingerprint; RFID; Tag-Print
【Paper Link】 【Pages】:1975-1983
【Authors】: Wei-Liang Shen ; Kate Ching-Ju Lin ; Ming-Syan Chen ; Kun Tan
【Abstract】: Multi-user multiple input and multiple output (MU-MIMO) is one predominant approach to improve the wireless capacity. However, since the aggregate capacity of MU-MIMO heavily depends on the channel correlations among the mobile users in a beamforming group, unwisely selecting beamforming groups may result in reduced overall capacity instead of increasing it. How to select users into a beamforming group becomes the bottleneck of realizing the MU-MIMO gain. The fundamental challenge for user selection is the large search space, and hence there exists a tradeoff between search complexity and achievable capacity. Previous works have proposed several low-complexity heuristic algorithms, but they suffer a significant capacity loss. In this paper, we present a novel MU-MIMO MAC, called SIEVE. The core of the SIEVE design is its scalable multi-user selection module that provides a knob to control the aggressiveness in searching for the best beamforming group. SIEVE maintains a central database to track the channel and the coherence time for each mobile user, and largely avoids unnecessary computation with a progressive update strategy. Our evaluation, via both small-scale testbed experiments and large-scale trace-driven simulations, shows that SIEVE can achieve around 90% of the capacity of exhaustive search.
【Keywords】: MIMO communication; access protocols; array signal processing; correlation methods; mobile communication; multi-access systems; search problems; telecommunication control; MU-MIMO MAC; MU-MIMO gain; MU-MIMO systems; SIEVE design; beamforming group; central database; channel correlations; heuristic algorithms; mobile users; multiuser multiple input and multiple output systems; multiuser selection module; scalable user grouping; search complexity; searching space; wireless capacity; Antennas; Array signal processing; Coherence; Complexity theory; Mobile communication; Signal to noise ratio; Wireless communication
【Paper Link】 【Pages】:1984-1992
【Authors】: Jae-Han Lim ; Katsuhiro Naito ; Ji-Hoon Yun ; Mario Gerla
【Abstract】: In wireless networks, broadcasting is a fundamental communication primitive for network management and information sharing. However, in multi-channel networks, broadcast efficiency is very poor as devices are distributed across various channels. Thus, a sender must try all channels to broadcast a single message, which causes large overhead. In this paper, we propose a novel scheme for efficient broadcast in multi-channel networks. Our scheme leverages the overlapped band, which is the frequency range that partially overlapped channels (i.e., adjacent channels) share within their channel boundaries. Specifically, a sender advertises the rendezvous channel through the overlapped band of adjacent channels; the message sharing via broadcast is done on the rendezvous channel. Our scheme employs Signaling via Overlapped Band (SOB), which defines a new signal processing mechanism for communication via the overlapped band. SOB is integrated with MAC layer mechanisms: 1) Reserve Idle Spectrum Fragment (RISF) to reduce waiting time, 2) Reinforce Switch Notification (RSN) to reduce the residing time on a wrong channel, and 3) Multi-sender Agreement on Rendezvous CHannel (MARCH) to support multi-sender broadcasts. We implemented our scheme on the SORA platform. Experiment results validated communication through the overlapped band. Intensive simulation studies showed that our scheme drastically outperforms previous approaches.
【Keywords】: radio networks; telecommunication network management; wireless channels; MAC layer mechanisms; RISF; RSN; SOB; adjacent channels; channel boundaries; information sharing; multichannel networks; multichannel wireless networks; network management; overlapped band; reinforce switch notification; reserve idle spectrum fragment; revisiting overlapped channels; signal processing mechanism; signaling via overlapped band; Bandwidth; Broadcasting; Conferences; IEEE 802.11 Standard; Receivers; Signal processing; Switches; 802.11 Wi-Fi; broadcast; multi-channel network; overlapped band
【Paper Link】 【Pages】:1993-2001
【Authors】: Ehsan Aryafar ; Alireza Keshavarz-Haddad
【Abstract】: We present the design and implementation of FD2, a directional full-duplex (FD) communication system for indoor wireless networks. An FD2 AP uses directional transmit and receive antennas to reduce self-interference, and to combat AP-AP and client-client interferences that arise due to FD operation in multi-cell networks. FD2 addresses the joint problem of scheduling and beam selection by proposing efficient practical algorithms. FD2 is implemented on the WARP platform, and its performance is compared against CSMA/CA and other FD and directional communication systems. Our experimental results reveal that: (i) Simple application of FD to multi-cell networks can result in significant loss of capacity due to high FD induced interference, while FD2 can effectively overcome the problem and provide an average gain of ninefold; (ii) FD2's performance depends on the hardware capture properties and the corresponding rate table, and increases when packets can be captured at lower SINR margins, or when dynamic range of the rate table is high; and (iii) FD2's uplink and downlink performances are susceptible to channel dynamics, and are impacted differently due to mobility. However, we show that training FD2's rates according to traffic direction, mobility, and feedback rate, increases its robustness to channel dynamics.
【Keywords】: indoor radio; radio networks; receiving antennas; transmitting antennas; channel dynamics; directional full duplex communication system; directional receive antennas; directional transmit antennas; indoor wireless networks; multi-cell networks; Directional antennas; Directive antennas; Interference; Scheduling; Silicon; Wireless networks
【Paper Link】 【Pages】:2002-2010
【Authors】: Abishek Sankararaman ; François Baccelli
【Abstract】: The CSMA/CA protocol is based on the “Interference as Noise” (IAN) paradigm i.e. it always gets rid of strong interference near a receiver to ensure quality of reception. However, it is well known from Multi-user Information Theory that treating Interference as Noise is not optimal. This paper proposes a class of protocols that employ the Successive Interference Cancellation (SIC) technique in a systematic fashion to move beyond always treating interference as noise. Such protocols allow one to pack more links than the classical CSMA. We describe the protocols along with their signaling mechanism to implement them in a distributed fashion. We then perform Monte Carlo simulations to evaluate the performance and show significant gains over the IAN based CSMA/CA protocol in large random networks.
【Keywords】: Monte Carlo methods; carrier sense multiple access; multiuser channels; radiofrequency interference; CSMA k-SIC; CSMA-CA protocol; IAN paradigm; Monte Carlo simulations; distributed MAC protocols; interference as noise paradigm; multiuser information theory; performance evaluation; random networks; successive interference cancellation; Decoding; Interference; Multiaccess communication; Protocols; Receivers; Silicon carbide; Transmitters
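The SIC primitive these protocols build on decodes the strongest signal first, re-encodes and subtracts it, then decodes the next. A deliberately simplified, noise-free scalar sketch with perfectly known channel gains (not the paper's protocol) makes the idea concrete:

```python
def sic_decode(received, gains, constellation=(-1.0, 1.0)):
    """Toy successive interference cancellation on a scalar superposition.

    received: sum of gains[i] * symbol_i, noise-free for illustration.
    gains: channel gains, one per transmitter, assumed perfectly known.
    Decodes strongest-first, subtracting each re-encoded signal.
    """
    residual = received
    decoded = {}
    # Visit transmitters from strongest to weakest channel gain.
    for idx in sorted(range(len(gains)), key=lambda i: -abs(gains[i])):
        # Nearest-constellation-point decision on the remaining signal.
        sym = min(constellation, key=lambda s: abs(residual - gains[idx] * s))
        decoded[idx] = sym
        residual -= gains[idx] * sym  # cancel the decoded contribution
    return [decoded[i] for i in range(len(gains))]

# Two BPSK transmitters with gains 4.0 and 1.0 sending +1 and -1:
# received = 4.0 * 1 + 1.0 * (-1) = 3.0
```

Where CSMA/CA would treat the weaker transmission as noise and forbid the concurrent link, SIC recovers both, which is exactly the extra packing the proposed protocols exploit.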
【Paper Link】 【Pages】:2011-2019
【Authors】: Hao Huang ; Jihoon Yun ; Ziguo Zhong
【Abstract】: This paper presents LDSP (i.e., low-duty-cycle synchronization protocol), a design that enables scalable clock synchronization in wireless networks with low-duty-cycle radio operations. LDSP prevents the exponential error proliferation that many available solutions would exhibit in the low-duty-cycle scenario by introducing a new mechanism of parallel synchronization that is naturally immune to the excessive message delays widely existing in such networks. The key novelty behind LDSP is its separation of clock drift rate estimation from the error-polluted global reference time in a unique manner, which helps eliminate compound errors that would otherwise amplify aggressively over the message delay at each hop during time dissemination. With LDSP, the time error is bounded to low-order polynomial growth as O(h√h), where h is the hop distance to the reference node, according to theoretical analysis verified by numerical simulation. To evaluate, LDSP was implemented on two hardware platforms driven by different clock sources and compared with representative synchronization protocols via experiments in both indoor and outdoor environments. Results show that the proposed design is practical, effective, and features significantly improved scalability under real-world conditions.
【Keywords】: access protocols; polynomials; synchronisation; wireless channels; LDSP; clock drift rate estimation; clock sources; error polluted global reference time; excessive message delays; low duty cycle radio operations; low duty cycle synchronization protocol; low-order polynomial growth; parallel synchronization; reference node; scalable clock synchronization; wireless networks; Clocks; Delays; Estimation; Hardware; Jitter; Synchronization; Time dissemination
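The idea of estimating a drift rate separately from (possibly delayed) reference timestamps can be illustrated with a least-squares fit of local clock readings against reference times. This is a generic sketch under an idealized linear-clock assumption, not LDSP's actual estimator:

```python
def estimate_drift(local_times, reference_times):
    """Least-squares fit reference ~= a * local + b.

    a - 1 approximates the relative clock drift rate and b the offset;
    an idealized linear-clock sketch, not LDSP's actual estimator.
    """
    n = len(local_times)
    mx = sum(local_times) / n
    my = sum(reference_times) / n
    sxx = sum((x - mx) ** 2 for x in local_times)
    sxy = sum((x - mx) * (y - my) for x, y in zip(local_times, reference_times))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# A local clock running 50 ppm fast relative to the reference:
local = [t * 1.00005 for t in (0, 1000, 2000, 3000)]
ref = [0, 1000, 2000, 3000]
a, b = estimate_drift(local, ref)
```

The slope a is insensitive to a constant delay added to every reference timestamp (it only shifts b), which is the intuition behind keeping drift estimation separate from the error-polluted reference time.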
【Paper Link】 【Pages】:2020-2028
【Authors】: Yuan Li ; Antonio Capone ; Di Yuan
【Abstract】: The problem of scheduling transmission in single hop and multi-hop wireless networks with arbitrary topology under the physical interference model has been extensively studied. The focus has been on optimizing the efficiency of transmission parallelization through a minimum-frame-length schedule that meets a given set of traffic demands using the smallest number of time slots, each of which is associated with a set of compatible (according to the interference model) transmissions. This approach maximizes the resource reuse efficiency, but in general does not correspond to the best performance in terms of end-to-end packet delivery delay for multiple source-destination pairs, due to the inherent restriction of frame periodicity. In this paper, we study the problem of scheduling to minimize the end-to-end delay in wireless networks under the Signal to Interference plus Noise Ratio (SINR) constraints, and propose two schemes. The first scheme extends the minimum-frame-length approach with a phase of time slot ordering to account for the delay metric. The second scheme directly optimizes delay without the constraint of periodic framing. We propose novel mixed integer programming models for the two schemes and study their properties and complexity. Moreover, we present an efficient heuristic method that provides good quality solutions time-efficiently.
【Keywords】: delays; integer programming; minimisation; radiocommunication; radiofrequency interference; telecommunication scheduling; SINR; end-to-end delay minimization; end-to-end packet delivery delay; mixed integer programming model; multihop wireless networks; multiple source-destination; periodic framing; physical interference model; signal to interference plus noise ratio; single hop wireless networks; transmission scheduling; Delays; Interference; Optimization; Routing; Scheduling; Signal to noise ratio; Wireless networks; SINR model; link scheduling; mathematical programming; multi-hop wireless networks; optimization; routing
【Paper Link】 【Pages】:2029-2037
【Authors】: Dongxiao Yu ; Yuexuan Wang ; Yu Yan ; Jiguo Yu ; Francis C. M. Lau
【Abstract】: This paper initiates the study of distributed information exchange in multi-channel wireless ad hoc networks. Information exchange is a basic operation in which each node of the network sends an information packet to other nodes within a specific distance R. Our study is motivated by the increasing presence and popularity of wireless networks and devices that operate on multiple channels. Consequently, there is a need for a better understanding of how and by how much multiple channels can improve communication. Based on the SINR interference model, we propose a multi-channel network model which incorporates certain features commonly seen in wireless ad hoc networks, including asynchrony, little non-local knowledge, limited message size, and limited power control. We then present a randomized algorithm that can accomplish information exchange in O((Δ/F + (Δ log n)/P) log n + log Δ log n) timeslots with high probability, where n is the number of nodes in the network, Δ is the maximum number of nodes within the range R, F is the number of available channels and P is the maximum number of packets that can fit in a message. Our algorithm significantly surpasses the best known results in single-channel networks, achieving a Θ(F) times speedup if Δ and P are sufficiently large. We conducted empirical studies that confirmed the performance of the proposed algorithm as derived in the analysis.
【Keywords】: ad hoc networks; computational complexity; probability; radiofrequency interference; randomised algorithms; wireless channels; SINR interference model; communication improvement; distributed information exchange; high probability; information packet; limited message size; limited power control; little nonlocal knowledge; multichannel wireless ad hoc networks; randomized algorithm; Algorithm design and analysis; Bismuth; Conferences; Information exchange; Interference; Signal to noise ratio; Synchronization
【Paper Link】 【Pages】:2038-2046
【Authors】: Chenshu Wu ; Zheng Yang ; Zimu Zhou ; Kun Qian ; Yunhao Liu ; Mingyan Liu
【Abstract】: WiFi technology has fostered numerous mobile computing applications, such as adaptive communication, fine-grained localization, gesture recognition, etc., which often achieve better performance with, or rely on the availability of, Line-Of-Sight (LOS) signal propagation. Thus the awareness of LOS and Non-Line-Of-Sight (NLOS) conditions serves as a key enabler for them. Real-time LOS identification on commodity WiFi devices, however, is challenging due to the limited bandwidth of WiFi and the resulting coarse multipath resolution. In this work, we explore and exploit the phase feature of PHY layer information, harnessing both space diversity with antenna elements and frequency diversity with OFDM subcarriers. On this basis, we propose PhaseU, a real-time LOS identification scheme that works in both static and mobile scenarios on commodity WiFi infrastructure. Experimental results in various indoor scenarios demonstrate that PhaseU consistently outperforms previous approaches, achieving overall LOS and NLOS detection rates of 94.35% and 94.19% in static cases and rates higher than 80% in mobile contexts. Furthermore, PhaseU achieves real-time capability with millisecond-level delay for a connected AP and 1-second delay for unconnected APs, far exceeding existing approaches.
【Keywords】: OFDM modulation; antennas; diversity reception; mobile computing; radiowave propagation; wireless LAN; OFDM subcarriers; PHY layer information; PhaseU; WiFi technology; adaptive communication; antenna elements; commodity WiFi devices; fine-grained localization; frequency diversity; gesture recognition; line-of-sight signal propagation; millisecond-level delay; mobile computing applications; overall LOS detection rate; overall NLOS detection rate; real-time LOS identification scheme; space diversity; Antenna measurements; Antennas; Feature extraction; IEEE 802.11 Standard; Phase measurement; Real-time systems; Wireless communication
【Paper Link】 【Pages】:2047-2055
【Authors】: Hongxing Li ; Chuan Wu ; Zongpeng Li
【Abstract】: Spectrum auctions are efficient mechanisms for licensed users to relinquish their under-utilized spectrum to secondary links for monetary remuneration. Truthfulness and social welfare maximization are two natural goals in such auctions, but cannot be achieved simultaneously with polynomial-time complexity by existing methods, even in a static network with fixed parameters. The challenge escalates in practical systems with QoS requirements and volatile traffic demands for secondary communication. Online, dynamic decisions are required for rate control, channel evaluation/bidding, and packet dropping at each secondary link, as well as for winner determination and pricing at the primary user. This work proposes an online spectrum auction framework with cross-layer decision making and randomized winner determination on the fly. The framework is truthful-in-expectation, and achieves close-to-offline-optimal time-averaged social welfare and individual utilities with polynomial time complexity. A new method is introduced for online channel evaluation in a stochastic setting. Simulation studies further verify the efficacy of the proposed auction in practical scenarios.
【Keywords】: electronic commerce; radio spectrum management; QoS requirements; channel bidding; cross-layer decision making; online channel evaluation; packet dropping; polynomial time complexity; randomized winner determination; rate control; secondary communication; secondary link; secondary wireless communication; socially-optimal online spectrum auctions; volatile traffic demands; Algorithm design and analysis; Channel allocation; Computers; Conferences; Delays; Optimization; Quality of service
【Paper Link】 【Pages】:2056-2064
【Authors】: Dan Peng ; Shuo Yang ; Fan Wu ; Guihai Chen ; Shaojie Tang ; Tie Luo
【Abstract】: Auctions are believed to be effective methods to solve the problem of wireless spectrum allocation. Existing spectrum auction mechanisms are all centralized and suffer from several critical drawbacks of centralized systems, which motivates the design of distributed spectrum auction mechanisms. However, extending a centralized spectrum auction to a distributed one broadens the strategy space of agents from one dimension (bid) to three dimensions (bid, communication, and computation), yielding a problem that cannot be solved by traditional approaches from mechanism design. In this paper, we propose two distributed spectrum auction mechanisms, namely distributed VCG and FAITH. Distributed VCG implements the celebrated Vickrey-Clarke-Groves mechanism in a distributed fashion to achieve optimal social welfare, at the cost of exponential communication overhead. In contrast, FAITH achieves sub-optimal social welfare with tractable computation and communication overhead. We prove that both of the two proposed mechanisms achieve faithfulness, i.e., the agents' individual utilities are maximized if they follow the intended strategies. We also implement FAITH and evaluate its performance in various setups. Evaluation results show that FAITH achieves superior performance compared with the Nash equilibrium based approach.
【Keywords】: radio spectrum management; FAITH; Nash equilibrium; VCG; Vickrey-Clarke-Groves mechanism; distributed wireless spectrum auction mechanism; exponential communication overhead; social welfare; three-dimensional manipulation; wireless spectrum allocation; Channel allocation; Conferences; Cost accounting; Interference; Nash equilibrium; Resource management; Wireless communication
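The Vickrey-Clarke-Groves rule that distributed VCG decentralizes can be stated compactly for the simple case of k identical licenses with unit demand: winners pay the externality they impose on the other bidders. This is an illustrative centralized sketch, not the paper's distributed protocol; the bidder names and valuations are made up:

```python
def vcg(bids, k):
    """VCG auction for k identical licenses, one per bidder.

    bids maps bidder -> valuation. The k highest bidders win; each
    winner pays the externality it imposes on the others, i.e. the
    extra welfare the others would obtain if that winner were absent.
    """
    order = sorted(bids, key=bids.get, reverse=True)
    winners = order[:k]
    payments = {}
    for w in winners:
        # Best welfare of the other bidders if w were removed ...
        without_w = sorted((v for b, v in bids.items() if b != w),
                           reverse=True)[:k]
        # ... minus the others' welfare when w participates.
        others_with_w = [bids[b] for b in winners if b != w]
        payments[w] = sum(without_w) - sum(others_with_w)
    return winners, payments

winners, pay = vcg({"A": 10, "B": 8, "C": 5, "D": 2}, k=2)
```

Each payment is independent of the winner's own bid, which is what makes truthful bidding a dominant strategy; the distributed version must preserve this property while also disciplining the agents' communication and computation.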
【Paper Link】 【Pages】:2065-2073
【Authors】: Zhili Chen ; Liusheng Huang ; Lin Chen
【Abstract】: Truthful auctions make bidders reveal their true valuations for goods to maximize their utilities. Currently, almost all spectrum auction designs are required to be truthful. However, disclosure of one's true value causes numerous security vulnerabilities. Secure spectrum auctions are thus called for to address such information leakage. Previous secure auctions either did not achieve enough security or were very slow due to heavy computation and communication overhead. In this paper, inspired by the idea of secret sharing, we design an information-theoretically secure framework (ITSEC) for truthful spectrum auctions. As a distinguishing feature, ITSEC not only achieves information-theoretic security for spectrum auction protocols in the sense of cryptography, but also greatly reduces both computation and communication overhead by ensuring security without using any encryption/decryption algorithm. To our knowledge, ITSEC is the first information-theoretically secure framework for truthful spectrum auctions in the presence of semi-honest adversaries. We also design and implement circuits for both single-sided and double spectrum auctions under the ITSEC framework. Extensive experimental results demonstrate that ITSEC achieves comparable performance in terms of computation with respect to spectrum auction mechanisms without any security measure, and incurs only limited communication overhead.
【Keywords】: cryptography; radio spectrum management; telecommunication security; ITSEC; cryptography; encryption-description algorithm; information leakage; information theoretically secure framework; radio spectrum; secret sharing; secure spectrum auctions; spectrum auction designs; spectrum auction protocols; truthful spectrum auctions; Conferences; Cryptography; Logic gates; Privacy; Protocols; Random variables
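The secret-sharing idea behind ITSEC can be illustrated with additive shares over a public modulus: all but one share are uniformly random, so any n-1 shares together reveal nothing about the value, yet shares of different values can be added locally. This is a generic textbook sketch, not the ITSEC protocol; the modulus and the sample values are arbitrary:

```python
import random

P = 2**61 - 1  # a public prime modulus (illustrative choice)

def share(secret, n, rng):
    """Split secret into n additive shares mod P; any n-1 shares alone
    are uniformly distributed and reveal nothing about the secret."""
    shares = [rng.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

rng = random.Random(42)
bid_a = share(1_000_000, 3, rng)   # e.g. one bidder's valuation, shared
bid_b = share(250, 3, rng)         # a second shared value
# Shares add component-wise, so sums can be computed without revealing bids:
combined = reconstruct([(a + b) % P for a, b in zip(bid_a, bid_b)])
```

Because security here rests on the uniform randomness of the shares rather than on a computational hardness assumption, no encryption/decryption step is needed, which is the source of the overhead savings the abstract claims.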
【Paper Link】 【Pages】:2074-2082
【Authors】: Yueming Wei ; Yanmin Zhu ; Hongzi Zhu ; Qian Zhang ; Guangtao Xue
【Abstract】: Stimulating both service users and service providers is of paramount importance to mobile crowdsourcing. A few incentive mechanisms have been proposed, but all of them have focused only on one-sided interactions, either among service users or among service providers. For the first time, to the best of our knowledge, we investigate the important two-sided online interactions between service users and service providers in mobile crowdsourcing. We model such interactions as online double auctions, explicitly taking the dynamic nature of both users and providers into account. We propose a general framework for the design of truthful online double auctions for dynamic mobile crowdsourcing. The framework is expressive and can work with different price schedules. We propose price-ranked online double auctions with four price schedules to implement the framework, which are suitable for different scenarios. With theoretical analysis and extensive simulations, we demonstrate that the proposed auctions are strategy-proof, individually rational, and ensure budget balance.
【Keywords】: commerce; mobile radio; telecommunication scheduling; dynamic mobile crowdsourcing; ensure budget balance; general framework; incentive mechanisms; individual rational; one-sided interactions; price schedules; price-ranked online double auctions; service providers; service users; strategy-proof; truthful online double auctions; two-sided online interactions; Computers; Conferences; Mobile crowdsourcing; auction; double; online; smartphones; truthful
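A classic way a static double auction achieves strategy-proofness, individual rationality, and budget balance is trade reduction (McAfee-style): sort buyers descending and sellers ascending, find the last feasible buyer-seller pair, and let all earlier pairs trade at prices taken from that sacrificed pair. This sketch illustrates the mechanism family only, not the paper's price-ranked online schedules; the bid values are made up:

```python
def trade_reduction(buyer_bids, seller_asks):
    """Trade-reduction (McAfee-style) double auction sketch.

    Buyers sorted descending, sellers ascending; let k be the number of
    feasible pairs (buyer_i >= seller_i). The first k-1 pairs trade:
    buyers pay the k-th highest bid and sellers receive the k-th lowest
    ask, making truthful bidding dominant at the cost of one lost trade.
    """
    buyers = sorted(buyer_bids, reverse=True)
    sellers = sorted(seller_asks)
    k = 0
    while k < min(len(buyers), len(sellers)) and buyers[k] >= sellers[k]:
        k += 1
    if k <= 1:
        return 0, None, None  # not enough overlap for any reduced trade
    return k - 1, buyers[k - 1], sellers[k - 1]

trades, buyer_price, seller_price = trade_reduction([9, 7, 5, 2], [1, 3, 6, 8])
```

Since every trading buyer pays no more than its bid and every trading seller receives no less than its ask, with the buyer price at least the seller price, the mechanism never runs a deficit; the online setting in the paper must additionally cope with users and providers arriving and departing over time.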
【Paper Link】 【Pages】:2083-2091
【Authors】: Jiawei Yuan ; Shucheng Yu ; Linke Guo
【Abstract】: Image search has been widely deployed in many applications for the rich content that images contain. In the era of big data, image search engines have to be hosted in data centers. As a viable solution, outsourcing the image search to public clouds is an economic choice for many small organizations. However, as many images contain sensitive information, e.g., healthcare information and personal faces/locations, directly outsourcing image search services to public clouds obviously raises privacy concerns. With this observation, several attempts have been made towards secure image search over encrypted datasets, but they are limited in either search accuracy or search efficiency. In this paper, we propose a lightweight secure image search scheme over encrypted data, namely SEISA. Compared with image search techniques over plaintexts, SEISA increases the search cost by only about 9% and sacrifices about 3% of search accuracy. SEISA also efficiently supports search access control by employing a novel polynomial based design, which enables data owners to define who can search a specific image. Furthermore, we design a secure k-means outsourcing algorithm that significantly saves the data owner's cost. To demonstrate SEISA's performance, we implement a prototype of SEISA on the Amazon EC2 cloud over a dataset with 10 million images.
【Keywords】: Big Data; authorisation; cloud computing; computer centres; cryptography; data privacy; image retrieval; polynomials; search engines; Amazon EC2 cloud; SEISA; big data; data centers; economic choice; efficient encrypted image search; image search engines; polynomial based design; privacy concerns; public clouds; search access control; search accuracy; search efficiency; secure image search scheme; secure k-means outsourcing algorithm; Access control; Accuracy; Cloud computing; Encryption; Indexes; Servers
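The secure k-means outsourcing above protects an otherwise standard clustering computation. For reference, the plaintext Lloyd's k-means that such a protocol would run over encrypted data can be sketched as follows (the cryptographic protection is the paper's contribution and is omitted here):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's k-means on 2-D points: alternate between assigning
    each point to its nearest center and recomputing cluster means."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)              # random initial centers
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                          # assignment step
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2 +
                                  (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        for c in range(k):                        # update step
            if clusters[c]:
                n = len(clusters[c])
                centers[c] = (sum(p[0] for p in clusters[c]) / n,
                              sum(p[1] for p in clusters[c]) / n)
    return centers
```

In the outsourced setting, both steps would be evaluated on encrypted coordinates so the cloud learns neither the points nor the centers.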
【Paper Link】 【Pages】:2092-2100
【Authors】: Bing Wang ; Wei Song ; Wenjing Lou ; Y. Thomas Hou
【Abstract】: With the growing awareness of data privacy, more and more cloud users choose to encrypt their sensitive data before outsourcing it to the cloud. Search over encrypted data is therefore a critical function facilitating efficient cloud data access given the high data volume that each user has to handle nowadays. The inverted index is one of the most efficient searchable index structures and has been widely adopted in plaintext search. However, securing an inverted index and its associated search schemes is not a trivial task. A major challenge exposed by existing efforts is the difficulty of protecting the user's query privacy. The challenge is rooted in two facts: 1) the existing solutions use a deterministic trapdoor generation function for queries; and 2) once a keyword is searched, the encrypted inverted list for this keyword is revealed to the cloud server. We denote this second property of the existing solutions as the one-time-only search limitation. Additionally, conjunctive multi-keyword search, which is the most common form of query nowadays, is not supported in those works. In this paper, we propose a public-key searchable encryption scheme based on the inverted index. Our scheme preserves the high search efficiency inherited from the inverted index while lifting the one-time-only search limitation of the previous solutions. Our scheme features a probabilistic trapdoor generation algorithm and protects the search pattern. In addition, our scheme supports conjunctive multi-keyword search. Compared with the existing public key based schemes that heavily rely on expensive pairing operations, our scheme is more efficient, using only multiplications and exponentiations. To meet stronger security requirements, we strengthen our scheme with an efficient oblivious transfer protocol that hides the access pattern from the cloud. The simulation results demonstrate that our scheme is suitable for practical usage with moderate overhead.
【Keywords】: cloud computing; data privacy; public key cryptography; cloud computing; cloud data access; cloud server; cloud users; conjunctive multikeyword search; data privacy; data volume; inverted index; multikeyword public key searchable encryption; plaintext search; probabilistic trapdoor generation algorithm; public key searchable encryption scheme; search pattern; searchable index structures; sensitive data; trapdoor generation function; user query privacy; Encryption; Indexes; Polynomials; Privacy; Public key; Servers
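The inverted index and conjunctive multi-keyword search that the scheme secures can be illustrated in plaintext; the encrypted counterpart replaces keywords with trapdoors and posting lists with ciphertexts. A minimal sketch (function and variable names are ours, not the paper's):

```python
from collections import defaultdict

def build_inverted_index(docs):
    """docs: {doc_id: text}. Returns keyword -> set of doc_ids (posting list)."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def conjunctive_search(index, keywords):
    """Documents containing every keyword: intersect the posting lists."""
    postings = [index.get(w.lower(), set()) for w in keywords]
    if not postings:
        return set()
    return set.intersection(*postings)
```

The one-time-only search limitation arises precisely because revealing one encrypted posting list lets the server replay that keyword's results; the paper's probabilistic trapdoors avoid this.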
【Paper Link】 【Pages】:2101-2109
【Authors】: Dongsheng Wang ; Xiaohua Jia ; Cong Wang ; Kan Yang ; Shaojing Fu ; Ming Xu
【Abstract】: Searchable encryption is an important and challenging problem: it allows users to search over encrypted data. This is a very useful function as more and more people choose to host their data in the cloud while the cloud server is not fully trusted. Existing solutions for searchable encryption are limited to simple search functions, such as boolean search or similarity search. In this paper, we propose a scheme for Generalized Pattern-matching String-search on Encrypted data (GPSE) in cloud systems. GPSE allows users to specify their search queries by using generalized wildcard-based string patterns (such as SQL-like patterns). It gives users great expressive power in specifying highly targeted search queries. In the framework of GPSE, we particularly implemented the two most commonly used pattern matching search functions on encrypted data, substring matching and longest-prefix-first matching. We also prove that GPSE is secure under the known-plaintext model. Experiments over real data sets show that GPSE achieves high search accuracy.
【Keywords】: cloud computing; cryptography; query processing; string matching; GPSE scheme; cloud systems; encrypted data; generalized pattern matching string search; generalized wildcard-based string patterns; known-plaintext model; longest-prefix-first matching; search query specification; searchable encryption; substring matching; Accuracy; Cryptography; Euclidean distance; Indexes; Pattern matching; Servers
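The SQL-like wildcard patterns GPSE supports (`%` for any run of characters, `_` for any single character) can be illustrated by their plaintext translation into regular expressions; evaluating such patterns over ciphertext is the paper's contribution. A hypothetical plaintext analogue:

```python
import re

def sql_like_to_regex(pattern):
    """Translate a SQL LIKE pattern ('%' = any run, '_' = one char)
    into an anchored compiled regular expression."""
    parts = []
    for ch in pattern:
        if ch == '%':
            parts.append('.*')
        elif ch == '_':
            parts.append('.')
        else:
            parts.append(re.escape(ch))   # literals, with metacharacters escaped
    return re.compile('^' + ''.join(parts) + '$')

def like_match(pattern, s):
    """True iff string s matches the LIKE pattern in full."""
    return sql_like_to_regex(pattern).match(s) is not None
```

For example, `like_match("se%re", "secure")` holds, while `like_match("c_t", "cart")` does not, since `_` matches exactly one character.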
【Paper Link】 【Pages】:2110-2118
【Authors】: Wenhai Sun ; Xuefeng Liu ; Wenjing Lou ; Y. Thomas Hou ; Hui Li
【Abstract】: Encrypted data search allows the cloud to offer fundamental information retrieval service to its users in a privacy-preserving way. In most existing schemes, the search result is returned by a semi-trusted server and usually considered authentic. However, in practice, the server may malfunction or even be malicious itself. Therefore, users need a result verification mechanism to detect potential misbehavior in this computation outsourcing model and rebuild their confidence in the whole search process. On the other hand, the cloud typically hosts large outsourced data of users in its storage. The verification cost should be efficient enough for practical use, i.e., it should only depend on the corresponding search operation, regardless of the file collection size. In this paper, we are among the first to investigate the efficient search result verification problem and propose an encrypted data search scheme that enables users to conduct secure conjunctive keyword search, update the outsourced file collection, and verify the authenticity of the search result efficiently. The proposed verification mechanism is efficient and flexible: it can either be delegated to a public trusted authority (TA) or be executed privately by data users. We formally prove the universally composable (UC) security of our scheme. Experimental results show its practical efficiency even with a large dataset.
【Keywords】: cloud computing; cryptography; trusted computing; computation outsourcing model; data users; dynamic encrypted cloud data; efficient verifiable conjunctive keyword search; encrypted data search scheme; file collection size; public trusted authority; result verification mechanism; semitrusted server; universally composable security; Conferences; Cryptography; Indexes; Keyword search; Polynomials; Servers
【Paper Link】 【Pages】:2119-2127
【Authors】: Jian Li ; Rajarshi Bhattacharyya ; Suman Paul ; Srinivas Shakkottai ; Vijay Subramanian
【Abstract】: We consider the problem of streaming live content to a cluster of co-located wireless devices that have both an expensive unicast base-station-to-device (B2D) interface, as well as an inexpensive broadcast device-to-device (D2D) interface, which can be used simultaneously. Our setting is a streaming system that uses a block-by-block random linear coding approach to achieve a target percentage of on-time deliveries with minimal B2D usage. Our goal is to design an incentive framework that would promote such cooperation across devices, while ensuring good quality of service. Based on ideas drawn from truth-telling auctions, we design a mechanism that achieves this goal via appropriate transfers (monetary payments or rebates) in a setting with a large number of devices, and with peer arrivals and departures. Here, we show that a Mean Field Game can be used to accurately approximate our system. Furthermore, the complexity of calculating the best responses under this regime is low. We implement the proposed system on an Android testbed, and illustrate its efficient performance using real world experiments.
【Keywords】: linear codes; radio equipment; radio networks; real-time systems; video streaming; Android testbed; B2D interface; D2D interface; block-by-block random linear coding; colocated wireless devices; incentivizing sharing; inexpensive broadcast device-to-device; mean field game perspective; realtime D2D streaming networks; streaming live content; streaming system; unicast base-station-to-device; Computers; Conferences; Games; Performance evaluation; Quality of service; Resource management; Wireless communication
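The block-by-block random linear coding that underlies such streaming systems can be sketched in plaintext. The sketch below works over GF(2) for brevity (practical RLC typically uses a larger field such as GF(2^8)); packet contents and sizes are illustrative:

```python
import random

def rlc_encode(block, rng):
    """One coded packet from a block of equal-length byte strings:
    a random GF(2) coefficient vector plus the XOR of the selected packets."""
    k = len(block)
    coeffs = 0
    while coeffs == 0:                       # skip the useless all-zero vector
        coeffs = rng.getrandbits(k)
    payload = bytes(len(block[0]))
    for i in range(k):
        if coeffs >> i & 1:
            payload = bytes(a ^ b for a, b in zip(payload, block[i]))
    return coeffs, payload

def rlc_decode(k, coded):
    """Recover the k source packets once k linearly independent coded
    packets arrive (Gaussian elimination over GF(2), coefficient
    vectors represented as integer bitmasks)."""
    pivots = {}                              # pivot bit -> (coeffs, payload)
    for coeffs, payload in coded:
        while coeffs:                        # reduce against stored rows
            low = coeffs & -coeffs           # lowest set bit
            if low not in pivots:
                pivots[low] = (coeffs, payload)
                break
            c, p = pivots[low]
            coeffs ^= c
            payload = bytes(a ^ b for a, b in zip(payload, p))
    if len(pivots) < k:
        return None                          # not yet decodable
    for b in sorted(pivots, reverse=True):   # back-substitution (Gauss-Jordan)
        cb, pb = pivots[b]
        for b2, (c2, p2) in list(pivots.items()):
            if b2 != b and c2 & b:
                pivots[b2] = (c2 ^ cb, bytes(x ^ y for x, y in zip(p2, pb)))
    return [pivots[1 << i][1] for i in range(k)]
```

A receiver that has gathered any k independent combinations of a block, from the B2D or the D2D interface, can reconstruct the whole block, which is what makes the cooperative sharing robust to which particular packets each peer received.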
【Paper Link】 【Pages】:2128-2136
【Authors】: Bo Chen ; Vivek Yenamandra ; Kannan Srinivasan
【Abstract】: The time variance of a channel is exploited in interference alignment techniques. However, in the case of static channels, these interference alignment techniques do not apply, particularly for single-antenna systems. In this paper, we introduce simple data processing techniques that enable us to create the effect of a time-varying channel from the underlying static channel. We call this channel the shadow channel. We exploit the time-varying nature of the relative channels introduced by the shadow channel for interference alignment. We demonstrate the throughput benefits of this interference alignment technique for different topologies. This technique can be thought of as operating over the two dimensions of the complex space of typical communication systems. In this paper, we present the feasibility of interference alignment techniques for single antenna nodes in static channels. We establish a bound on the throughput gain due to this technique. Finally, we implement interference alignment over the shadow channel on an NI PXIe 1082 based software-defined radio. We achieve throughput gains of up to 1.44X over TDMA systems using this interference alignment technique, and up to 1.61X when the technique is coupled with interference cancellation.
【Keywords】: antennas; radio networks; radiofrequency interference; telecommunication network topology; NI PXIe 1082; data processing techniques; interference alignment technique; interference alignment techniques; interference cancellation; radio architecture; shadow channel; single antenna nodes; single-antenna systems; static channel; time variance; time varying channel; topologies; Antennas; Interference; Radio transmitters; Receivers; Throughput; Uplink
【Paper Link】 【Pages】:2137-2145
【Authors】: Chao Kong ; Zengwen Yuan ; Xushen Han ; Feng Yang ; Xinbing Wang ; Tao Wang ; Songwu Lu
【Abstract】: Multiple-Input Multiple-Output (MIMO) technology has become an efficient way to improve the capacity and reliability of wireless networks. Traditional MIMO schemes are designed mainly for the scenario of contiguous spectrum ranges. However, in cognitive radio networks, the available spectrum is discontiguous, making traditional MIMO schemes inefficient in spectrum usage. This motivates the design of new MIMO schemes that apply to networks with discontiguous spectrum ranges. In this paper, we propose a scheme called VSMC MIMO, which enables MIMO nodes to transmit variable numbers of streams in multiple discontiguous spectrum ranges. This scheme can largely improve spectrum utilization while maintaining the same spatial multiplexing and diversity gains as traditional MIMO schemes. To implement this spectrum-efficient scheme on cooperative MIMO relays in cognitive radio networks, we propose a joint relay selection and spectrum allocation algorithm and a corresponding MAC protocol for the system. We also build a testbed with Universal Software Radio Peripherals (USRPs) to evaluate the performance of the proposed scheme in practical networks. The experimental results show that VSMC MIMO can efficiently utilize the discontiguous spectrum and greatly improve the throughput of cognitive radio networks.
【Keywords】: MIMO communication; access protocols; cognitive radio; cooperative communication; diversity reception; multiplexing; radio spectrum management; relay networks (telecommunication); software radio; telecommunication network reliability; MAC protocol; USRP; VSMC MIMO scheme; cognitive radio network; contiguous spectrum range; cooperative relay; diversity gain; joint relay selection and spectrum allocation algorithm; multiple discontiguous spectrum range; multiple input multiple output technology; spatial multiplexing; spectral efficient scheme; spectrum utilization; universal software radio peripheral; wireless network reliability; Antennas; Cognitive radio; MIMO; Receivers; Relays; Resource management; Throughput
【Paper Link】 【Pages】:2146-2154
【Authors】: Dimitris Syrivelis ; George Iosifidis ; Dimosthenis Delimpasis ; Konstantinos Chounos ; Thanasis Korakis ; Leandros Tassiulas
【Abstract】: The recent mobile data explosion has increased the interest in mobile user-provided networks (MUPNs), where users share their Internet access by exploiting the diversity in their needs and resource availability. Although promising, MUPNs raise unique challenges. Namely, the success of such services relies on user participation, which in turn can be achieved on the basis of a fair and efficient resource (i.e., Internet access and battery energy) exchange policy. The latter should be devised and imposed on a very fast time scale, based on near real-time feedback from mobile users regarding their needs, resources, and rapidly changing network conditions. To address these challenges, we design and implement a novel cloud-controlled MUPN system that employs software defined networking support on mobile terminals to dynamically apply data forwarding policies with adaptive flow control. We devise these policies by solving a coalitional game that is played among the users. We prove that the game has a non-empty core and hence the solution, which determines the servicing policy, incentivizes the users to participate. Finally, we evaluate the performance of the service in a prototype, where we investigate its performance limits, quantify the implementation overheads, and justify our architecture design choices.
【Keywords】: Internet; adaptive control; cloud computing; game theory; mobility management (mobile radio); software defined networking; Internet access sharing; adaptive flow control; cloud controlled MUPN system; coalitional game; collaborative consumption; data forwarding policies; mobile Internet; mobile terminals; mobile user provided network; nonempty core; software defined networking; user participation; Batteries; Games; IEEE 802.11 Standard; Internet; Logic gates; Mobile communication; Mobile computing
【Paper Link】 【Pages】:2155-2163
【Authors】: Jason Cloud ; Douglas J. Leith ; Muriel Médard
【Abstract】: Reducing the in-order delivery, or playback, delay of reliable transport layer protocols over error prone networks can significantly improve application layer performance. This is especially true for applications that have time sensitive constraints such as streaming services. We explore the benefits of a coded generalization of selective repeat ARQ for minimizing the in-order delivery delay. An analysis of the delay's first two moments is provided so that we can determine when and how much redundancy should be added to meet a user's requirements. Numerical results help show the gains over selective repeat ARQ, as well as the trade-offs between meeting the user's delay constraints and the costs inflicted on the achievable rate. Finally, the analysis is compared with experimental results to help illustrate how our work can be used to help inform system decisions.
【Keywords】: automatic repeat request; encoding; protocols; telecommunication network reliability; ARQ; application layer; automatic repeat request; coded generalization; in-order delivery delay; streaming services; transport layer protocols; Automatic repeat request; Computers; Conferences; Delays; Encoding; Redundancy; Servers
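A coded generalization of selective repeat ARQ adds r repair packets per block of k data packets; under an ideal (MDS) code, a block decodes if and only if at most r of its k + r transmissions are lost. A sketch of the resulting redundancy/reliability trade-off on an i.i.d. erasure channel (an idealization we assume here; the paper's analysis of the delay's first two moments is more detailed):

```python
from math import comb

def block_decode_prob(k, r, e):
    """P(a block of k data + r coded packets is decodable) on an i.i.d.
    erasure channel with loss rate e, assuming an ideal (MDS) code:
    decodable iff at most r of the k + r transmissions are lost."""
    n = k + r
    return sum(comb(n, j) * e**j * (1 - e)**(n - j) for j in range(r + 1))

def min_redundancy(k, e, target):
    """Smallest r such that a block decodes with probability >= target."""
    r = 0
    while block_decode_prob(k, r, e) < target:
        r += 1
    return r
```

For instance, at a 10% loss rate a block of k = 10 needs 4 repair packets to decode with probability at least 0.99, which quantifies the trade-off between meeting delay constraints and the cost inflicted on the achievable rate.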
【Paper Link】 【Pages】:2164-2172
【Authors】: Yang Yang ; Ness B. Shroff
【Abstract】: The recent breakthrough in wireless full-duplex communication makes possible a brand new way of multi-hop wireless communication, namely full-duplex cut-through transmission, where for a traffic flow that traverses multiple links, every node along the route can receive a new packet and simultaneously forward the previously received packet. This wireless transmission scheme brings new challenges to the design of MAC layer algorithms that aim to reap its full benefit. First, the MAC layer rate region of the cut-through enabled network is directly a function of the routing decision, leading to a strong coupling between routing and scheduling. Second, it is unclear how to dynamically form/change cut-through routes based on the traffic rates and patterns. In this work, we introduce a novel method to characterize the interference relationship between links in a network with cut-through transmission, which decouples the routing decision from the scheduling decision and enables a seamless adaptation of traditional half-duplex routing/scheduling algorithms to wireless networks with full-duplex cut-through capabilities. Based on this interference model, a queue-length based CSMA-type scheduling algorithm is proposed, which both leverages the flexibility of full-duplex cut-through transmission and permits distributed implementation.
【Keywords】: carrier sense multiple access; radio networks; telecommunication network routing; telecommunication scheduling; telecommunication traffic; MAC layer algorithms; full-duplex cut-through transmission; queue-length based CSMA-type scheduling algorithm; routing decision; traffic flow; wireless networks; wireless transmission; Interference; Routing; Schedules; Scheduling; Scheduling algorithms; Wireless networks; Cut-through Transmission; Dynamic Routing; Scheduling; Wireless Full-duplex
【Paper Link】 【Pages】:2173-2181
【Authors】: Hojin Lee ; Sangwoo Moon ; Yung Yi
【Abstract】: Optimal CSMA, a fully distributed wireless MAC theory, provides rules for dynamically adapting CSMA parameters according to theoretically developed principles, and has been reported to offer nice analytical guarantees on throughput and fairness. Despite a couple of research efforts to transfer Optimal CSMA to practical protocols, e.g., O-DCF, our evaluation results show that they are still far from deployable in practice, mainly due to poor performance with TCP. In this paper, we first investigate how an Optimal CSMA based MAC, if poorly transferred to practice, conflicts with TCP and degrades end-to-end performance. Then, we propose a new wireless MAC protocol, called A-DCF, that inherits the basic framework and rationale of Optimal CSMA and O-DCF, but is largely redesigned to work well with TCP. The key idea of A-DCF lies in smartly exploiting both queue length and delay, which widens the design space for compatibility with TCP. Our extensive simulation and experimental results demonstrate that A-DCF outperforms both traditional 802.11 and O-DCF. In particular, we provide our implementation of A-DCF as a device driver module. To our knowledge, it is the first driver-level implementation of an Optimal CSMA based MAC protocol, which should be of broad interest to the community.
【Keywords】: carrier sense multiple access; queueing theory; wireless channels; A-DCF; CSMA parameters; TCP compatibility; delay length; device driver module; distributed wireless MAC theory; end-to-end performance; optimal CSMA; queue length; wireless MAC protocol; Boosting; Delays; IEEE 802.11 Standard; Multiaccess communication; Radiation detectors; Topology; Wireless communication
【Paper Link】 【Pages】:2182-2190
【Authors】: Mostafa Uddin ; Tamer Nadeem
【Abstract】: In this paper, we utilize a novel communication framework, Acoustic-WiFi, to develop a smart contention resolution scheme, Harmony, among contending devices, addressing the overhead of the traditional Wi-Fi backoff scheme (i.e., contention window countdown, DIFS) and reducing the overall collisions among devices. Harmony uses the acoustic channel for contention resolution in Wi-Fi networks. To the best of our knowledge, Harmony is the first to leverage the acoustic interface on commodity smart devices as an additional control channel in parallel with the Wi-Fi interface. We evaluate our scheme using a real testbed and simulation. Testbed experiments show more than 40% throughput gain over traditional Wi-Fi networks, while simulation results show more than 27% gain for dense networks.
【Keywords】: wireless LAN; Wi-Fi backoff scheme; acoustic channel; acoustic-WiFi; commodity smart devices; smart contention resolution scheme; Acoustics; Conferences; Data communication; Hardware; IEEE 802.11 Standard; Smart phones; Wireless communication
【Paper Link】 【Pages】:2191-2199
【Authors】: Tomasz Jurdzinski ; Dariusz R. Kowalski ; Michal Rozanski ; Grzegorz Stachowiak
【Abstract】: This paper studies the task of setting up ad hoc wireless networks. In such networks, it is often the case that nodes become active at different times, without coordination or knowledge of the network topology. We consider the following tasks: wake-up, clock synchronization, leader election, and multi-message broadcast. We show how to achieve these goals in scalable O(D polylog(n)) time. As a tool, we define and solve a quasi-backbone problem, which aims to set up transmission probabilities at nodes such that they can be efficiently used to solve other tasks. Our results are obtained by minimalistic algorithms, which do not require power control or carrier sensing capabilities, and use very little energy, local computation, and memory. Moreover, unlike much previous work, they remain scalable even if the network is not highly connected.
【Keywords】: ad hoc networks; electronic messaging; probability; signal detection; synchronisation; telecommunication network topology; asynchronous ad hoc wireless network topology; clock synchronization; leader election; minimalistic algorithm; multimessage broadcast; quasibackbone problem; transmission probability; wake up; Ad hoc networks; Interference; Nominations and elections; Signal to noise ratio; Synchronization; Wireless networks; SINR model; Wireless ad hoc network; clock synchronization; leader election; multi-message broadcast; no GPS; quasi-backbone; random linear network coding; randomized distributed algorithms; wake-up
【Paper Link】 【Pages】:2200-2208
【Authors】: Okhwan Lee ; Weiping Sun ; Jihoon Kim ; Hyuk Lee ; Bo Ryu ; Jungwoo Lee ; Sunghyun Choi
【Abstract】: Due to considerable increases in user mobility and frame length through aggregation, the wireless channel no longer remains time-invariant during the (aggregated) frame transmission time. However, the existing IEEE 802.11 standards still define the channel estimation to be performed only once at the preamble for coherent OFDM receivers, and the same channel information to be used throughout the entire (aggregated) frame processing. Our experimental results reveal that this baseline channel estimation approach seriously deteriorates WiFi performance, especially for pedestrian mobile users and the recently adopted frame aggregation scheme. In this paper, we propose Channel-Aware Symbol Error Reduction (ChASER), a new practical channel estimation and tracking scheme for WiFi receivers. ChASER utilizes the re-encoding and re-modulation of the received data symbols to keep up with the wireless channel dynamics at the granularity of OFDM symbols. Our extensive, trace-driven link-level simulation shows significant performance gains over a wide range of channel conditions, based on real wireless channel traces collected by an off-the-shelf WiFi device. In addition, its low complexity and standard compliance are demonstrated by a prototype implementation and experimentation on Microsoft's Software Radio (Sora). To our knowledge, ChASER is the first IEEE 802.11n-compatible channel tracking algorithm, since other approaches addressing time-varying channel conditions over a single (aggregated) frame duration require costly modifications of the IEEE 802.11n standard.
【Keywords】: OFDM modulation; channel estimation; time-varying channels; wireless LAN; wireless channels; ChASER; IEEE 802.11 standards; channel estimation; channel tracking algorithm; channel-aware symbol error reduction; coherent OFDM receivers; dynamic channel environment; high-performance WiFi systems; time-varying channel; wireless channel; Channel estimation; Dispersion; IEEE 802.11n Standard; OFDM; Receivers; Wireless communication
【Paper Link】 【Pages】:2209-2217
【Authors】: Ronghui Hou ; Yu Cheng ; Jiandong Li ; Min Sheng ; King-Shan Lui
【Abstract】: Hybrid wireless networks are networks that are composed of both ad hoc transmissions and cellular transmissions. Many existing works have analyzed the capacity of hybrid wireless networks. By assuming the uniform traffic model, in which a source node selects a random node as the destination, the network capacity is a function of the number of nodes and the number of base stations. Nevertheless, the real network traffic pattern is related to the social behaviors of users. In this work, we study the capacity of hybrid wireless networks with a social traffic model under the L-maximum-hop routing policy. If two nodes are within L hops of each other, packets are transmitted in the ad hoc mode; otherwise, packets are transmitted through the base stations. To the best of our knowledge, we are the first to study this problem, and we develop the capacity as a function of the number of nodes, the number of base stations, the traffic model parameters, and L.
【Keywords】: ad hoc networks; radio networks; telecommunication network routing; telecommunication traffic; L-maximum-hop routing policy; ad hoc transmissions; capacity analysis; cellular transmissions; hybrid wireless networks; long-range social contacts behavior; network capacity; social traffic model; uniform traffic model; Ad hoc networks; Bandwidth; Base stations; Computers; Routing; Throughput; Wireless networks
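The L-maximum-hop policy reduces to a simple mode-selection rule: route in the ad hoc mode when the destination is within L hops, and through the base stations otherwise. A minimal sketch of that rule (the adjacency-list representation and names are ours):

```python
from collections import deque

def hop_distance(adj, src, dst):
    """BFS hop count between two nodes in an adjacency-list graph;
    returns None if dst is unreachable from src."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return None

def transmission_mode(adj, src, dst, L):
    """'ad hoc' if dst is within L hops of src, else 'cellular'."""
    d = hop_distance(adj, src, dst)
    return "ad hoc" if d is not None and d <= L else "cellular"
```

On a 5-node chain with L = 2, a 2-hop neighbor is reached ad hoc while the 4-hop endpoint goes through the base station.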
【Paper Link】 【Pages】:2218-2226
【Authors】: Yanchao Zhao ; Wenzhong Li ; Jie Wu ; Sanglu Lu
【Abstract】: Conflict graphs have been widely used in wireless network optimization for channel assignment, spectrum allocation, link scheduling, etc. Despite its simplicity, the traditional conflict graph suffers from two drawbacks. On one hand, it is a rough representation of the interference condition, which is inaccurate and can cause suboptimal results in wireless network optimization. On the other hand, it only defines the interference between two entities, neglecting the accumulative effect of small amounts of interference. In this paper, we propose the quantized conflict graph (QCG) model to tackle these issues. The properties, usage, and construction methods of QCGs are explored. We show that in its matrix form, a QCG exhibits low rank and high similarity. These properties lead to three complementary QCG estimation strategies, namely a low-rank approximation approach, a similarity based approach, and a comprehensive approach, to construct the QCG efficiently and accurately from partial interference measurement results. We further explore the potential of QCGs for wireless network optimization by applying them to minimizing the total network interference. Extensive experiments using data collected from a real wireless network are conducted to evaluate the system performance and confirm the efficiency of the proposed algorithms.
【Keywords】: channel allocation; graph theory; matrix algebra; optimisation; radiofrequency interference; telecommunication scheduling; wireless channels; QCG estimation; channel assignment; links scheduling; low-rank approximation; matrix form; network interference; partial interference measurement; quantized conflict graphs; real collected wireless network; spectrum allocation; wireless network optimization; Computers; Correlation; Estimation; Interference; Optimization; Wireless networks; Conflict graph; Interference model; Matrix completion; Wireless network optimization
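The low-rank approximation strategy can be sketched as a generic matrix-completion heuristic: alternate a truncated SVD projection with re-imposing the measured entries. This is a minimal sketch under our own assumptions (the paper's actual construction may differ):

```python
import numpy as np

def complete_qcg(M, mask, r, iters=200):
    """Fill the unmeasured entries of a quantized conflict graph matrix.

    M:    matrix whose values at unmeasured positions are arbitrary
    mask: boolean array, True where interference was actually measured
    r:    target rank, exploiting the low-rank property of QCGs
    """
    X = np.where(mask, M, M[mask].mean())         # init unknowns with the mean
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = U[:, :r] * s[:r] @ Vt[:r, :]          # project onto rank-r matrices
        X = np.where(mask, M, X)                  # keep measured entries exact
    return X
```

On an exactly rank-1 matrix with a single unmeasured entry, the iteration recovers the missing interference value to high accuracy.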
【Paper Link】 【Pages】:2227-2235
【Authors】: Mingjun Xiao ; Jie Wu ; Liusheng Huang ; Yunsheng Wang ; Cong Liu
【Abstract】: Mobile crowdsensing is a new paradigm in which a crowd of mobile users exploit their carried smart devices to conduct complex computation and sensing tasks in mobile social networks (MSNs). In this paper, we focus on the task assignment problem in mobile crowdsensing. Unlike traditional task scheduling problems, the task assignment in mobile crowdsensing must follow the mobility model of users in MSNs. To solve this problem, we propose an oFfline Task Assignment (FTA) algorithm and an oNline Task Assignment (NTA) algorithm. Both FTA and NTA adopt a greedy task assignment strategy. Moreover, we prove that the FTA algorithm is an optimal offline task assignment algorithm, and give a competitive ratio of the NTA algorithm. In addition, we demonstrate the significant performance of our algorithms through extensive simulations, based on four real MSN traces and a synthetic MSN trace.
【Keywords】: graph theory; mobile computing; network theory (graphs); outsourcing; FTA algorithm; MSN; NTA algorithm; greedy task assignment strategy; mobile crowdsensing; mobile social network; offline task assignment; online task assignment; smart device; Algorithm design and analysis; Computers; Conferences; IEEE 802.11 Standard; Mobile communication; Mobile computing; Sensors; Crowdsensing; delay tolerant network; mobile social network; task assignment
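The greedy strategy shared by FTA and NTA can be illustrated generically: assign each task to the user with the smallest marginal cost given the load already assigned. The cost function below is a placeholder of ours; the paper's algorithms use costs derived from the users' mobility model in the MSN:

```python
def greedy_assign(tasks, users, cost):
    """Greedy task assignment: each task goes to the user whose current
    load plus the task's cost is smallest (illustrative, not the paper's
    exact FTA/NTA rule)."""
    load = {u: 0.0 for u in users}
    assignment = {}
    for t in tasks:
        best = min(users, key=lambda u: load[u] + cost(u, t))
        assignment[t] = best
        load[best] += cost(best, t)
    return assignment
```

Such a greedy rule naturally extends to the online setting, where tasks are assigned one by one as they arrive, which is the regime in which the paper proves a competitive ratio for NTA.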
【Paper Link】 【Pages】:2236-2244
【Authors】: John P. Rula ; Fabián E. Bustamante
【Abstract】: Crowdsensing leverages the pervasiveness and power of mobile devices, such as smartphones and tablets, to enable ordinary citizens to collect, transport and verify data. Application domains range from environment monitoring, to infrastructure management and social computing. Crowdsensing services' effectiveness is a direct result of their coverage, which is driven by the recruitment and mobility patterns of participants. Due to the typically uneven population distributions of most areas, and the regular mobility patterns of participants, less popular or populated areas suffer from poor coverage. In this paper, we present Crowd Soft Control (CSC), an approach to exert limited control over the actions of participants by leveraging the built-in incentives of location-based gaming and social applications. By pairing crowdsensing with location-based applications, CSC allows sensing services to reuse the incentives of location-based apps to steer the actions of participating users and increase the effectiveness of sensing campaigns. While there are several domains where this intentional movement is useful such as data muling, this paper presents the design, implementation and evaluation of CSC applied to crowdsensing. We built a prototype of CSC and integrated it with two location-based applications, and crowdsensing services. Our experimental results demonstrate the low-cost of integration and minimal overhead of CSC.
【Keywords】: mobile computing; smart phones; CSC; crowd soft control; crowdsensing services; data muling; location-based apps incentives; location-based gaming; mobile device pervasiveness; mobile device power; mobility patterns; population distributions; sensing services; smartphones; social applications; tablets; Context; Games; Mobile handsets; Noise; Pollution; Runtime; Sensors
【Paper Link】 【Pages】:2245-2253
【Authors】: Yang Tian ; Kaigui Bian ; Guobin Shen ; Xiaochen Liu ; Xiaoguang Li ; Thomas Moscibroda
【Abstract】: The popularity of QR code clearly indicates the strong demand of users to acquire (or pull) further information from interested sources (e.g., a poster) in the physical world. However, existing information pulling practices such as a mobile search or QR code scanning incur heavy user involvement to identify the targeted posters. Meanwhile, businesses (e.g., advertisers) are also interested to learn about the behaviors of potential customers, such as where, when, and how users show interest in their offerings. Unfortunately, little such context information is provided by existing information pulling systems. In this paper, we present Contextual-Code (C-Code) - an information pulling system that greatly relieves users' efforts in pulling information from targeted posters and, in the meantime, provides rich context information about user behavior to businesses. C-Code leverages the rich contextual information captured by the smartphone sensors to automatically disambiguate information sources in different contexts. It assigns simple codes (e.g., a character) to sources whose contexts are not discriminating enough. To pull the information from an interested source, users only need to input the simple code shown on the targeted source. Our experiments demonstrate the effectiveness of the C-Code design. Users can effectively and uniquely identify targeted information sources with an average accuracy of over 90%.
【Keywords】: binary codes; smart phones; ubiquitous computing; QR code; contextual code; information pulling; interested source; physical world; quick-response code; rich context information; smartphone sensors; targeted information sources; targeted sources; user behavior; Business; Context; IEEE 802.11 Standard; Interference; Magnetic separation; Sensor phenomena and characterization
【Paper Link】 【Pages】:2254-2262
【Authors】: Merkourios Karaliopoulos ; Orestis Telelis ; Iordanis Koutsopoulos
【Abstract】: We look into the realization of mobile crowdsensing campaigns that draw on the opportunistic networking paradigm, as practised in delay-tolerant networks but also in the emerging device-to-device communication mode in cellular networks. In particular, we ask how mobile users can be optimally selected in order to generate the required space-time paths across the network for collecting data from a set of fixed locations. The users hold different roles in these paths, from collecting data with their sensing-enabled devices to relaying them across the network and uploading them to data collection points with Internet connectivity. We first consider scenarios with deterministic node mobility and formulate the selection of users as a minimum-cost set cover problem with a submodular objective function. We then generalize to more realistic settings with uncertainty about the user mobility. A methodology is devised for translating the statistics of individual user mobility to statistics of space-time path formation and feeding them to the set cover problem formulation. We describe practical greedy heuristics for the resulting NP-hard problems and compute their approximation ratios. Our experimentation with real mobility datasets (a) illustrates the multiple tradeoffs between the campaign cost and duration, the bound on the hop count of space-time paths, and the number of collection points; and (b) provides evidence that in realistic problem instances the heuristics perform much better than what their pessimistic worst-case bounds suggest.
【Keywords】: mobile communication; NP-hard problems; cellular networks; delay tolerant networks; device-to-device communication mode; greedy heuristics; mobile crowdsensing; mobile users; opportunistic networks; space time paths; submodular objective function; user recruitment; Approximation methods; Conferences; Data collection; Mobile communication; Mobile computing; Recruitment; Sensors
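The minimum-cost set cover formulation above is typically attacked with the textbook greedy heuristic: repeatedly pick the set with the best cost per newly covered element. This sketch shows that generic heuristic family, not the paper's exact user-recruitment algorithm.

```python
# Textbook greedy heuristic for minimum-cost set cover. Here a "set" would
# be the locations a user's space-time path can cover; the greedy rule
# picks the user with the lowest cost per newly covered location.

def greedy_set_cover(universe, sets, cost):
    """sets: name -> frozenset of covered elements; cost: name -> float."""
    uncovered, chosen = set(universe), []
    while uncovered:
        # cost-effectiveness = cost / number of newly covered elements
        best = min((s for s in sets if sets[s] & uncovered),
                   key=lambda s: cost[s] / len(sets[s] & uncovered),
                   default=None)
        if best is None:
            raise ValueError("universe cannot be covered")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen
```

This greedy rule is the classic H(n)-approximation for weighted set cover, which matches the abstract's point that worst-case ratios are pessimistic compared with observed performance.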
【Paper Link】 【Pages】:2263-2271
【Authors】: Michele Garetto ; Emilio Leonardi ; Stefano Traverso
【Abstract】: In this paper we develop a novel technique to analyze both isolated and interconnected caches operating under different caching strategies and realistic traffic conditions. The main strength of our approach is the ability to consider dynamic contents which are constantly added into the system catalogue, and whose popularity evolves over time according to desired profiles. We do so while preserving the simplicity and computational efficiency of models developed under stationary popularity conditions, which are needed to analyze several caching strategies. Our main achievement is to show that the impact of content popularity dynamics on cache performance can be effectively captured into an analytical model based on a fixed content catalogue (i.e., a catalogue whose size and objects' popularity do not change over time).
【Keywords】: cache storage; cataloguing; cache performance; caching strategies; content popularity dynamics; dynamic content popularity; fixed content catalogue; interconnected caches; isolated caches; realistic traffic conditions; stationary popularity conditions; system catalogue; Analytical models; Approximation methods; Computational modeling; Computers; Conferences; Shape; Standards; Cache Networks; Caching; Content Popularity; Dynamic Scenarios
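As a brute-force counterpart to the analytical models discussed above, cache performance can always be measured by replaying a request trace through a simulated cache. The minimal LRU simulator below is such a baseline; the paper's contribution is precisely to avoid this kind of simulation by capturing popularity dynamics in a fixed-catalogue model.

```python
from collections import OrderedDict

# Minimal LRU cache simulator: replay a request trace and report the
# hit ratio. A brute-force baseline against which analytical cache
# models (like the one in the paper) are usually validated.

def lru_hit_ratio(trace, capacity):
    cache, hits = OrderedDict(), 0
    for obj in trace:
        if obj in cache:
            hits += 1
            cache.move_to_end(obj)         # refresh recency on a hit
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict the least recently used
            cache[obj] = True
    return hits / len(trace)
```

For instance, the trace `a b a b c a c b` with capacity 2 yields 3 hits out of 8 requests.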
【Paper Link】 【Pages】:2272-2280
【Authors】: Sean Sanders ; Jasleen Kaur
【Abstract】: Web page classification is useful in many domains, including ad targeting, traffic modeling, and intrusion detection. In this paper, we investigate whether learning-based techniques can be used to classify web pages based only on anonymized TCP/IP headers of traffic generated when a web page is visited. We do this in three steps. First, we select informative TCP/IP features for a given downloaded web page, and study which of these remain stable over time and are also consistent across client browser platforms. Second, we use the selected features to evaluate four different labeling schemes and learning-based classification methods for web page classification. Lastly, we empirically study the effectiveness of the classification methods for real-world applications.
【Keywords】: Web sites; online front-ends; security of data; telecommunication traffic; transport protocols; TCP/IP header; Web page classification; ad targeting; client browser platforms; intrusion detection; labeling schemes; learning-based classification methods; learning-based techniques; traffic modeling; Browsers; Feature extraction; IP networks; Labeling; Navigation; Streaming media; Web pages; Traffic Classification; Web Page Measurement
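The overall pipeline (summarize a page download into header-derived features, then classify) can be sketched in a few lines. The three features and the nearest-centroid classifier here are illustrative guesses for the sake of the sketch, not the informative features or the four methods the paper actually evaluates.

```python
# Header-only classification sketch: summarize a page download by a small
# feature vector computed from anonymized packet sizes, then label new
# pages by the nearest class centroid (squared Euclidean distance).
# The feature set is a hypothetical stand-in for the paper's selection.

def features(pkt_sizes):
    n, total = len(pkt_sizes), sum(pkt_sizes)
    return (n, total, total / n)   # packet count, byte volume, mean size

def nearest_centroid(train, sample):
    """train: label -> list of feature tuples from known page loads."""
    centroids = {lab: tuple(sum(col) / len(vecs) for col in zip(*vecs))
                 for lab, vecs in train.items()}
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist(centroids[lab], features(sample)))
```

A real deployment would of course normalize features and use a stronger learner, but the shape of the computation is the same.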
【Paper Link】 【Pages】:2281-2289
【Authors】: Emilio Leonardi ; Giovanni Luca Torrisi
【Abstract】: In this paper we develop an analytical framework, based on Che's approximation [2], for the analysis of Least Recently Used (LRU) caches operating under the Shot Noise requests Model (SNM). The SNM was recently proposed in [10] to better capture the main characteristics of today's Video on Demand (VoD) traffic. In this context, Che's approximation is derived as the application of a mean field principle to the cache eviction time. We investigate the validity of this approximation through an asymptotic analysis of the cache eviction time. In particular, we provide a large deviation principle and a central limit theorem for the cache eviction time, as the cache size grows large. Furthermore, we obtain a non-asymptotic analytical upper bound on the error entailed by Che's approximation of the hit probability.
【Keywords】: approximation theory; cache storage; probability; shot noise; video on demand; Che approximation; LRU caches; SNM; Vod traffic; analytical framework; asymptotic analysis; central limit theorem; hit probability; large deviation principle; least recently used caches; shot noise requests model; video on demand; Analytical models; Approximation methods; Computational modeling; Computers; Conferences; Noise; Random variables
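For background, Che's approximation in its standard stationary (IRM) form is easy to compute: with Poisson request rates λ_i and cache size C, the characteristic time T solves Σ_i (1 − e^(−λ_i T)) = C, and the per-object hit probability is h_i = 1 − e^(−λ_i T). The sketch below solves this fixed point by bisection; the paper's contribution is extending the idea to the non-stationary Shot Noise Model, which this sketch does not cover.

```python
import math

# Che's approximation for LRU under stationary Poisson (IRM) demand.
# Solve sum_i (1 - e^{-l_i T}) = C for the characteristic time T, then
# report the request-weighted hit ratio. Requires cache_size < len(rates).

def che_hit_ratio(rates, cache_size):
    def filled(t):  # expected number of distinct objects requested in (0, t]
        return sum(1 - math.exp(-l * t) for l in rates)
    lo, hi = 0.0, 1.0
    while filled(hi) < cache_size:   # bracket the root
        hi *= 2
    for _ in range(100):             # bisection on the monotone `filled`
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if filled(mid) < cache_size else (lo, mid)
    t_c = (lo + hi) / 2
    total = sum(rates)
    return sum(l * (1 - math.exp(-l * t_c)) for l in rates) / total
```

With equal rates the formula degenerates to hit ratio = C/n, which is a handy sanity check.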
【Paper Link】 【Pages】:2290-2298
【Authors】: Huichen Dai ; Jianyuan Lu ; Yi Wang ; Bin Liu
【Abstract】: Named Data Networking (NDN) as an instantiation of the Content-Centric Networking (CCN) approach, embraces the major shift of the network function - from host-to-host conversation to content dissemination. The NDN forwarding architecture consists of three tables - Content Store (CS), Pending Interest Table (PIT) and Forwarding Information Base (FIB), as well as two lookup rules - Longest Prefix Match (LPM) and Exact Match (EM). A software-based implementation of this forwarding architecture would be low-cost, flexible and have rich memory resources, but may also make the pipelining technique not readily applicable to table lookups. Therefore, forwarding a packet would go through multiple tables sequentially without pipelining, leading to high latency and low throughput. In order to take advantage of the software-based implementation and overcome its shortcoming, we find that a single unified index that supports all three tables and both the LPM and EM lookup rules would benefit the forwarding performance. In this paper, we present such an index data structure called BFAST (Bloom Filter-Aided haSh Table). BFAST employs a Counting Bloom Filter to balance the load among hash table buckets, making the number of prefixes in each non-empty bucket close to 1, and thus enabling high lookup throughput and low latency. Evaluation results show that, for LPM lookup alone, BFAST achieves 36.41 million lookups per second (M/s) using 24 threads, with a latency of around 0.46 μs. When utilized to build the NDN forwarding architecture, BFAST obtains remarkable performance improvement under various request compositions, e.g., BFAST achieves a lookup speed of 81.32 M/s with a synthetic request trace where 30% of the requests hit CS, another 30% hit PIT and the remaining 40% hit FIB, while the lookup latency is only 0.29 μs.
【Keywords】: computer networks; data communication; data structures; information dissemination; pipeline processing; table lookup; BFAST; CCN approach; NDN forwarding architecture; bloom filter-aided hash table; content centric networking; content dissemination; content storage; exact match lookup rule; forwarding information base; hash table bucket; longest prefix match lookup rule; named data networking; pending interest table; pipelining technique; scalable index; unified index; Bismuth; Conferences; Data structures; Indexes; Radiation detectors; Throughput
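The Counting Bloom Filter that BFAST builds on is a standard structure: by replacing bits with counters, it supports deletion while keeping the one-sided error of a Bloom filter. The generic sketch below shows the building block only, not BFAST's bucket-load-balancing logic.

```python
import hashlib

# A generic counting Bloom filter -- the building block BFAST uses to steer
# keys toward lightly loaded hash buckets. Counters (instead of bits) make
# deletion possible; membership answers may have false positives but never
# false negatives.

class CountingBloomFilter:
    def __init__(self, m=1024, k=4):
        self.counts, self.m, self.k = [0] * m, m, k

    def _slots(self, key):
        for i in range(self.k):  # k independent hash slots per key
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for s in self._slots(key):
            self.counts[s] += 1

    def remove(self, key):  # caller must only remove keys actually added
        for s in self._slots(key):
            self.counts[s] -= 1

    def maybe_contains(self, key):
        return all(self.counts[s] > 0 for s in self._slots(key))
```

A production filter would use cheaper hashes than SHA-256; it is used here only to keep the sketch dependency-free.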
【Paper Link】 【Pages】:2299-2307
【Authors】: Christopher G. Brinton ; Mung Chiang
【Abstract】: We study student performance prediction in Massive Open Online Courses (MOOCs), where the objective is to predict whether a user will be Correct on First Attempt (CFA) in answering a question. In doing so, we develop novel techniques that leverage behavioral data collected by MOOC platforms. Using video-watching clickstream data from one of our MOOCs, we first extract summary quantities (e.g., fraction played, number of pauses) for each user-video pair, and show how certain intervals/sets of values for these behaviors indicate whether a pair is more likely to be CFA or not for the corresponding question. Motivated by these findings, our methods are designed to determine suitable intervals from training data and to use the corresponding success estimates as learning features in prediction algorithms. Tested against a large set of empirical data, we find that our schemes outperform standard algorithms (i.e., without behavioral data) for all datasets and metrics tested. Moreover, the improvement is particularly pronounced when considering the first few course weeks, demonstrating the "early detection" capability of such clickstream data. We also discuss how CFA prediction can be used to depict graphs of the Social Learning Network (SLN) of students, which can help instructors manage courses more effectively.
【Keywords】: courseware; educational courses; social networking (online); CFA; MOOC performance prediction; SLN; correct-on-first attempt; course management; learning features; massive open online courses; prediction algorithms; social learning networks; summary quantities extraction; user-video pair; video-watching clickstream data; Algorithm design and analysis; Computers; Conferences; Hidden Markov models; Measurement; Prediction algorithms; Standards
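Two of the summary quantities named above (fraction played, number of pauses) can be computed directly from a per-video event log. The event schema below is a hypothetical stand-in for what a MOOC platform records, used only to make the extraction concrete.

```python
# Computing two of the abstract's summary quantities from a video-watching
# clickstream. The (type, position) event schema is an assumed, simplified
# stand-in for real platform logs.

def summarize(events, video_len):
    """events: list of (type, position_sec) with type in {'play', 'pause'}."""
    pauses = sum(1 for typ, _ in events if typ == "pause")
    played, start = 0.0, None
    for typ, pos in events:
        if typ == "play":
            start = pos                 # playback resumes here
        elif typ == "pause" and start is not None:
            played += pos - start       # accumulate the watched interval
            start = None
    return {"fraction_played": played / video_len, "num_pauses": pauses}
```

Per-user-video dictionaries like this one are then the raw material for interval-based features in a downstream predictor.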
【Paper Link】 【Pages】:2308-2316
【Authors】: Jie Xu ; Mihaela van der Schaar ; Jiangchuan Liu ; Haitao Li
【Abstract】: This paper presents Pop-Forecast, a systematic method for accurately forecasting the popularity of videos promoted through social networks. Pop-Forecast aims to optimize the forecasting accuracy and the timeliness with which forecasts are issued, by explicitly taking into account the dynamic propagation of videos in social networks. The forecasting is performed online and requires no training phase or a priori knowledge. We analytically bound the performance loss of Pop-Forecast as compared to that obtained by an omniscient oracle and prove that the bound is sublinear in the number of video arrivals, thereby guaranteeing its fast rate of convergence as well as its asymptotic convergence to the optimal performance. We validate the performance of Pop-Forecast through extensive experiments using real-world data traces collected from the videos shared in RenRen, one of the largest online social networks in China. These experiments show that our proposed method outperforms existing approaches for popularity prediction (which do not take into account the propagation in social network) by more than 30% in terms of prediction rewards.
【Keywords】: optimisation; social networking (online); technological forecasting; video signal processing; China; Pop-Forecast; RenRen; asymptotic convergence; dynamic video propagation; forecasting accuracy optimization; online social networks; performance loss; prediction rewards; real-world data traces; timely video popularity forecasting; video arrivals; Accuracy; Context; Forecasting; Hypercubes; Partitioning algorithms; Prediction algorithms; Social network services
【Paper Link】 【Pages】:2317-2325
【Authors】: Felix Ming Fai Wong ; Zhenming Liu ; Mung Chiang
【Abstract】: We study a fundamental question that arises in social recommender systems: whether it is possible to simultaneously maximize (a) an individual's benefit from using a social network and (b) the efficiency of the network in disseminating information. To tackle this question, our study consists of three components. First, we introduce a stylized stochastic model for recommendation diffusion. Such a model allows us to highlight the connection between user experience at the individual level, and network efficiency at the macroscopic level. We also propose a set of metrics for quantifying both user experience and network efficiency. Second, based on these metrics, we extensively study the tradeoff between the two factors in a Yelp dataset, concluding that Yelp's social network is surprisingly efficient, though not optimal. Finally, we design a friend recommendation and news feed curation algorithm that can simultaneously address individuals' need to connect to high quality friends, and service providers' need to maximize network efficiency in information propagation.
【Keywords】: recommender systems; social networking (online); Yelp dataset; Yelp social network; friend recommendation; high-quality friends; information dissemination; information propagation; macroscopic level; network efficiency; network efficiency maximization; news feed curation algorithm; recommendation diffusion; service providers; social recommender networks; stylized stochastic model; user experience; Business; Computational modeling; Eigenvalues and eigenfunctions; Measurement; Recommender systems; Social network services; Stochastic processes
【Paper Link】 【Pages】:2326-2334
【Authors】: Shahzad Ali ; Gianluca Rizzo ; Vincenzo Mancuso ; Marco Ajmone Marsan
【Abstract】: This work presents the first experimental evaluation of the Floating Content (FC) communication paradigm in a campus/large office setting. By logging information transfer events we have characterized mobility patterns, and we have assessed the performance of services implemented using the FC paradigm. Our results unveil the key relevance of group dynamics in user movements for FC performance. Surprisingly, in such an environment, our results show that a relatively low user density is enough to guarantee content persistence over time, contrary to predictions from available models. Based on these experimental findings, we develop a novel, simple analytical model that accounts for the peculiarities of the mobility patterns in such a setting, and that can accurately predict the effectiveness of FC for the implementation of services in a campus/large office setting.
【Keywords】: computer aided instruction; educational institutions; mobile computing; smart phones; FC communication paradigm; campus environment; campus setting; floating content availability; floating content persistence; information transfer events; large office setting; mobility pattern characterization; user density; Analytical models; Androids; Bluetooth; Computers; Conferences; Humanoid robots; IEEE 802.11 Standard
【Paper Link】 【Pages】:2335-2343
【Authors】: Shizhen Zhao ; Xiaojun Lin ; Minghua Chen
【Abstract】: We study competitive online algorithms for EV (electrical vehicle) charging under the scenario of an aggregator serving a large number of EVs together with its background load, using both its own renewable energy (for free) and the energy procured from the external grid. The goal of the aggregator is to minimize its peak procurement from the grid, subject to the constraint that each EV has to be fully charged before its deadline. Further, the aggregator can predict the future demand and the renewable energy supply with some levels of uncertainty. The key challenge here is how to develop a model that captures the prior knowledge from such prediction, and how to best utilize this prior knowledge to reduce the peak under future uncertainty. In this paper, we first propose a 2-level increasing precision model (2-IPM), to capture the system uncertainty. We develop a powerful computation approach that can compute the optimal competitive ratio under 2-IPM over any online algorithm, and also online algorithms that can achieve the optimal competitive ratio. A dilemma for online algorithm design is that an online algorithm with good competitive ratio may exhibit poor average-case performance. We then propose a new Algorithm-Robustification procedure that can convert an online algorithm with reasonable average-case performance to one with both the optimal competitive ratio and good average-case performance. The robustified version of a well-known heuristic algorithm, Receding Horizon Control (RHC), is found to demonstrate superior performance via trace-based simulations.
【Keywords】: electric vehicles; minimisation; renewable energy sources; 2-IPM; 2-level increasing precision model; RHC; algorithm robustification; electrical vehicle charging; external grid; heuristic algorithm; online algorithms; peak-minimizing online EV charging; price-of-uncertainty; receding horizon control; renewable energy supply; trace-based simulations; Algorithm design and analysis; Optimization; Prediction algorithms; Predictive models; Renewable energy sources; Robustness; Uncertainty
【Paper Link】 【Pages】:2344-2352
【Authors】: Sheng Zhang ; Zhuzhong Qian ; Fanyu Kong ; Jie Wu ; Sanglu Lu
【Abstract】: Wireless power transfer is a promising technology to extend the lifetime, and thus enhance the usability, of energy-hungry battery-powered devices. It enables energy to be wirelessly transmitted from power chargers to energy receiving devices. Existing studies have mainly focused on maximizing network lifetime, optimizing charging efficiency, minimizing charging delay, etc. Different from these works, our objective is to optimize charging quality in a 2-D target area. Specifically, we consider the following charger Placement and Power allocation Problem (P3): Given a set of candidate locations for placing chargers, find a charger placement and a corresponding power allocation to maximize the charging quality, subject to a power budget. We prove that P3 is NP-complete. We first study P3 with fixed power levels, for which we propose a (1-1/e)-approximation algorithm; we then design an approximation algorithm of factor (1-1/e)/(2L) for P3, where e is the base of the natural logarithm and L is the maximum power level of a charger. We also show how to extend P3 in a cycle. Extensive simulations demonstrate that the gap between our design and the optimal algorithm is within 4.5%, validating our theoretical results.
【Keywords】: approximation theory; computational complexity; inductive power transmission; optimisation; 2D target area; NP-complete; P3; approximation algorithm; charger placement; energy receiving devices; energy-hungry battery-powered devices; joint optimization; power allocation problem; power budget; power chargers; wireless power transfer; Algorithm design and analysis; Approximation algorithms; Approximation methods; Computers; Resource management; Wireless communication; Wireless sensor networks; Wireless power transfer; approximation algorithm; power allocation; sub-modularity
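The (1-1/e)-style guarantees above come from the classic greedy rule for monotone submodular maximization: repeatedly add the candidate with the best marginal gain per unit of budget. The sketch below shows that rule in the shape of the charger-placement problem; the additive coverage "quality" used in the example is an illustrative placeholder, not the paper's charging model.

```python
# Classic greedy for budgeted monotone submodular maximization, phrased as
# charger placement: add the candidate location with the best marginal
# charging-quality gain per unit of power until the budget is exhausted.

def greedy_placement(candidates, quality_of, power_of, budget):
    """quality_of(set_of_locations) -> float must be monotone submodular."""
    chosen, spent = set(), 0.0
    while True:
        gain = lambda c: quality_of(chosen | {c}) - quality_of(chosen)
        feasible = [c for c in candidates
                    if c not in chosen and spent + power_of[c] <= budget]
        best = max(feasible, key=lambda c: gain(c) / power_of[c], default=None)
        if best is None or gain(best) <= 0:
            return chosen
        chosen.add(best)
        spent += power_of[best]
```

With a coverage-style quality function (number of devices reached by the chosen chargers), the greedy picks the two most useful of three candidate sites under a budget of 2.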
【Paper Link】 【Pages】:2353-2361
【Authors】: Subhankar Mishra ; Xiang Li ; Alan Kuhnle ; My T. Thai ; Jungtaek Seo
【Abstract】: The Smart Grid addresses the existing power grid's increasing complexity, growing demand and requirement for greater reliability, through two-way communication and automated residential load control, among others. These features also make the Smart Grid a target for a number of cyber attacks. In this paper, we study the problem of the rate alteration attack (RAA) through fabrication of price messages, which induces changes in the load profiles of individual users and eventually causes a major alteration in the load profile of the entire network. Combined with cascading failure, it ends up as a highly damaging attack. We prove that the problem is NP-complete and prove its inapproximability. We devise two approaches for the problem: the former deals with maximizing the failure of lines with the given resource and then extending the effect with cascading failure, while the latter takes cascading potential into account while choosing the lines to fail. To get more insight into the impact of RAA, we also extend our algorithms to maximize the number of node failures. Empirical results on both IEEE Bus data and a real network help us evaluate our approaches under various settings of grid parameters.
【Keywords】: optimisation; power system security; smart power grids; IEEE bus data; NP-complete problem; RAA; cascading failure; load profile; node failures; price messages fabrication; rate alteration attacks; smart grid; Generators; Mathematical model; Power system faults; Power system protection; Power transmission lines; Smart grids
【Paper Link】 【Pages】:2362-2370
【Authors】: Valentino Pacifici ; György Dán
【Abstract】: Internet service providers increasingly deploy internal CDNs with the objective of reducing the traffic on their transit links and improving their customers' quality of experience. Once ISP-managed CDNs (nCDNs) become commonplace, ISPs would likely provide common interfaces to interconnect their nCDNs for mutual benefit, as they do with peering today. In this paper we consider the problem of using distributed algorithms for computing a content allocation for nCDNs. We show that if every ISP aims to minimize its cost and bilateral payments are not allowed, then it may be impossible to compute a content allocation. For the case of bilateral payments we propose two distributed algorithms, the aggregate value compensation (AC) and the object value compensation (OC) algorithms, which differ in terms of the level of parallelism they allow and in terms of the amount of information exchanged between nCDNs. We prove that the algorithms converge, and we propose a scheme to ensure ex-post individual rationality. Simulations performed on a real AS-level network topology and synthetic topologies show that the algorithms have a geometric rate of convergence, and scale well with the graphs' density and the nCDN capacity.
【Keywords】: Internet; distributed algorithms; graph theory; quality of experience; telecommunication network topology; telecommunication traffic; AS-level network topology; Internet service providers; OC algorithms; bilateral payments; content allocation; distributed algorithms; graph density; interconnected content distribution networks; object value compensation; quality of experience; synthetic topologies; transit links; Aggregates; Convergence; Cost function; Distributed algorithms; Games; Nash equilibrium; Resource management
【Paper Link】 【Pages】:2371-2379
【Authors】: Wenji Chen ; Yong Guan
【Abstract】: We consider a new type of distinct element counting problem in dynamic data streams, where (1) insertions and deletions of an element can appear not only in the same data stream but also in two or more different streams, (2) a deletion of a distinct element cancels out all the previous insertions of this element, and (3) a distinct element can be re-inserted after it has been deleted. Our goal is to count the number of distinct elements that were inserted but have not been deleted in a continuous data stream. We also solve this new type of distinct element counting problem in a distributed setting. This problem is motivated by several network monitoring and attack detection applications where network traffic can be modelled as single or distributed dynamic streams and the number of distinct elements in the data streams, such as unsuccessful TCP connection setup requests, is used as an indicator to detect certain network events such as service outages and DDoS attacks. Although there are known tight bounds for distinct element counting in insertion-only data streams, no good bounds are known for it in dynamic data streams, nor for this new type of problem. None of the existing solutions for distinct element counting can solve our problem. In this paper, we present the first solution to this problem, using a space-bounded data structure with a computation-efficient probabilistic data streaming algorithm to estimate the number of distinct elements in single or distributed dynamic data streams. We have carried out both theoretical analysis and experimental evaluations of our algorithm, using synthetic and real data traces, to show its effectiveness.
【Keywords】: computer network security; transport protocols; DDoS attacks; TCP connection; attack detection applications; continuous data stream; distinct element counting; distributed dynamic data streams; distributed setting; network monitoring; network traffic; probabilistic data streaming algorithm; service outage; space bounded data structure; Computers; Data structures; Distributed databases; Estimation; Heuristic algorithms; Monitoring; Servers
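The counting semantics defined by conditions (1)-(3) above can be pinned down with an exact, space-unbounded reference implementation, which is useful as a correctness baseline for any sketch. This is deliberately not the paper's space-bounded probabilistic data structure; it only fixes what a correct answer looks like.

```python
# Exact (space-unbounded) reference for the paper's counting semantics:
# a deletion cancels *all* earlier insertions of an element, and the
# element may later be re-inserted. A space-bounded sketch with the same
# semantics is what the paper contributes.

class ExactDistinctCounter:
    def __init__(self):
        self.live = set()

    def process(self, op, element):
        if op == "+":
            self.live.add(element)       # duplicate insertions collapse
        elif op == "-":
            self.live.discard(element)   # cancels every prior insertion

    def count(self):
        return len(self.live)
```

Feeding several streams into one counter models the distributed setting, since the semantics are defined over the union of the streams.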
【Paper Link】 【Pages】:2380-2388
【Authors】: Thang N. Dinh ; My T. Thai
【Abstract】: A considerable amount of research effort has focused on developing metrics and approaches to assess network vulnerability. However, most of them neglect the network uncertainty arising from various causes, such as the mobility and dynamics of the network, or noise introduced in the data collection process. To this end, we introduce a framework to assess the vulnerability of networks under uncertainty, modeling such networks as probabilistic graphs. We adopt expected pairwise connectivity (EPC) as a measure to quantify global connectivity and use it to formulate vulnerability assessment as a stochastic optimization problem. The objective is to identify a small number of critical nodes whose removal minimizes EPC in the residual network. While solutions for stochastic optimization problems are often limited to small networks, we present a practical solution that works for larger networks. The key advantages of our solution include 1) the application of a weighted averaging technique that avoids considering all, exponentially many, possible realizations of probabilistic graphs and 2) a Fully Polynomial Time Randomized Approximation Scheme (FPRAS) to efficiently estimate the EPC with any desired accuracy. Extensive experiments demonstrate significant improvement in the performance of our solution over other heuristic approaches.
【Keywords】: graph theory; polynomial approximation; radio networks; stochastic processes; stochastic programming; EPC; FPRAS; expected pairwise connectivity; fully polynomial time randomized approximation scheme; network attack vulnerability assessment; network uncertainty; probabilistic graph; stochastic optimization problem; Approximation methods; Computer network reliability; Monte Carlo methods; Optimization; Probabilistic logic; Reliability; Uncertainty
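The quantity being estimated, expected pairwise connectivity, has a simple plain Monte Carlo estimator: sample realizations of the probabilistic graph and count connected node pairs in each. The sketch below shows that baseline only; it does not reproduce the paper's weighted-averaging refinement or FPRAS analysis.

```python
import random

# Plain Monte Carlo estimation of expected pairwise connectivity (EPC):
# sample edge realizations, and in each one count node pairs joined by a
# path (sum of k*(k-1)/2 over connected components, via union-find).

def connected_pairs(n, edges):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    sizes = {}
    for v in range(n):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return sum(k * (k - 1) // 2 for k in sizes.values())

def epc(n, prob_edges, samples=2000, seed=1):
    """prob_edges: list of (u, v, existence_probability)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(samples):
        realized = [(u, v) for u, v, p in prob_edges if rng.random() < p]
        total += connected_pairs(n, realized)
    return total / samples
```

For a two-node graph with a single edge of probability 0.5, the estimate converges to 0.5, matching the closed-form EPC.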
【Paper Link】 【Pages】:2389-2397
【Authors】: Fei Chen ; Tao Xiang ; Yuanyuan Yang ; Cong Wang ; Shengyu Zhang
【Abstract】: Cloud storage has gained remarkable success in recent years, with an increasing number of consumers and enterprises outsourcing their data to the cloud. To assure the availability and integrity of the outsourced data, several protocols have been proposed to audit cloud storage. Despite their formally guaranteed security, these constructions employ heavy cryptographic operations as well as advanced concepts (e.g., bilinear maps over elliptic curves and digital signatures), and are thus too inefficient to admit wide applicability in practice. In this paper, we design a novel secure cloud storage protocol, which is conceptually and technically simpler and significantly more efficient than previous constructions. Inspired by a classic string equality checking protocol in distributed computing, our protocol uses only basic integer arithmetic (without advanced techniques and concepts). As simple as the protocol is, it supports both randomized and deterministic auditing to fit different applications. We further extend the proposed protocol to support data dynamics, i.e., adding, deleting and modifying data, using a novel technique. As a further contribution, we find a systematic way to design secure cloud storage protocols based on verifiable computation protocols. Theoretical and experimental analyses validate the efficacy of our protocol.
【Keywords】: cloud computing; cryptography; data integrity; digital signatures; distributed processing; protocols; cloud storage protocol; cloud storage security; computation protocol; cryptographic operation; data integrity; digital signature; distributed computing; string equality checking protocol; Cloud computing; Computational modeling; Computers; Conferences; Protocols; Secure storage; Security
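The classic string equality checking protocol referred to above works by polynomial fingerprinting: both sides evaluate their data as a polynomial at a random challenge point modulo a prime and compare the residues, using only integer arithmetic. The sketch below shows that classic primitive, not the paper's full auditing protocol.

```python
import random

# Classic randomized string-equality check via polynomial fingerprints.
# Files that differ collide with probability at most (len - 1) / P for a
# random challenge r, and only integer arithmetic is needed.

P = (1 << 61) - 1  # a Mersenne prime modulus

def fingerprint(data: bytes, r: int) -> int:
    acc = 0
    for byte in data:            # Horner evaluation of the data polynomial
        acc = (acc * r + byte) % P
    return acc

def audit(local: bytes, remote: bytes) -> bool:
    r = random.randrange(1, P)   # fresh random challenge per audit
    return fingerprint(local, r) == fingerprint(remote, r)
```

In an actual audit the verifier would send only `r` and compare the single residue returned by the server, so the communication cost is constant regardless of file size.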
【Paper Link】 【Pages】:2398-2406
【Authors】: Jun Zhou ; Zhenfu Cao ; Xiaolei Dong ; Xiaodong Lin
【Abstract】: Cloud-assisted e-healthcare systems make it significantly easier for patients to outsource their personal health information (PHI) for medical treatment of high quality and efficiency. Unfortunately, a series of unaddressed security and privacy issues dramatically impede their practicability and popularity. In e-healthcare systems, it is expected that only the primary physicians responsible for a patient's treatment can not only access the PHI content but also verify the real identity of the patient. Secondary physicians participating in medical consultation and/or research tasks, however, are only permitted to view or use the content of the protected PHI, while unauthorized entities cannot obtain anything. Existing work mainly focuses on patients' conditional identity privacy by exploiting group signatures, which are very computationally costly. In this paper, we propose a white-box traceable and revocable multi-authority attribute-based encryption scheme named TR-MABE to efficiently achieve multilevel privacy preservation without introducing additional special signatures. It can efficiently prevent secondary physicians from learning the patient's identity. Also, it can efficiently track the physicians who leak the secret keys used to protect patients' identities and PHI. Finally, formal security proof and extensive simulations demonstrate the effectiveness and practicability of our proposed TR-MABE in e-healthcare cloud computing systems.
【Keywords】: cloud computing; cryptography; data privacy; digital signatures; health care; medical information systems; PHI; TR-MABE encryption; cloud-assisted e-healthcare systems; e-healthcare cloud computing systems; electronic health care; formal security proof; group signatures; medical consultation; medical research; medical treatment; multilevel privacy-preserving e-healthcare; patient identity; patient treatment; patients conditional identity privacy; personal health information; privacy issue; security issue; white-box traceable revocable multiauthority attribute-based encryption; Access control; Cloud computing; Encryption; Medical services; Privacy; Cloud computing system; attribute-based encryption; multi-authority; traceability and revocability
【Paper Link】 【Pages】:2407-2415
【Authors】: Hyewon Lee ; Tae Hyun Kim ; Jun Won Choi ; Sunghyun Choi
【Abstract】: Smart devices such as smartphones and tablet/wearable PCs are equipped with a voice user interface, i.e., a speaker and a microphone. Accordingly, various aerial acoustic communication techniques have been introduced to utilize the voice user interface as a communication interface. In this paper, we propose an aerial acoustic communication system using inaudible audio signals for low-rate communication in indoor environments. By adopting chirp signals, which are widely used in radar applications due to their capability of resolving multi-path propagation, the proposed acoustic modem supports long-range communication independent of device characteristics over severely frequency-selective acoustic channels. We also design a backend server architecture to compensate for the low data rate of the chirp-based acoustic modem. Via extensive experiments, we evaluate various characteristics of the proposed modem, including multi-path resolution and multiple chirp signal detection. We also verify that the proposed chirp signal can deliver data at 16 bps in typical indoor environments, where its maximum transmission range is drastically extended up to 25 m, compared to the few meters reported in previous research.
【Keywords】: acoustic communication (telecommunication); acoustic wave propagation; indoor radio; microphones; multipath channels; radar resolution; speaker recognition; user interfaces; wireless LAN; wireless channels; Wi-Fi; acoustic modem; aerial acoustic communication technique; backend server architecture design; communication interface; frequency selective acoustic channel; inaudible audio chirp signal; indoor environment; microphone device; multipath propagation; multipath resolution; multiple chirp signal detection; radar applications; smart device; speaker device; voice user interface; Chirp; Correlation; Frequency shift keying; Microphones; Receivers; Chirp signal; aerial acoustic communication; smart devices; software-based digital modem
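The chirp-based scheme described above can be sketched in a few lines: generate a linear chirp in the near-inaudible band and detect it with a matched filter (cross-correlation), whose sharp autocorrelation peak is what makes chirps robust to multipath. This is a generic illustration under assumed parameters (48 kHz sampling, an 18-20 kHz sweep), not the authors' modem.

```python
import numpy as np

def chirp(f0, f1, duration, fs):
    """Linear up-chirp sweeping f0 -> f1 Hz over `duration` seconds at rate fs."""
    t = np.arange(int(duration * fs)) / fs
    k = (f1 - f0) / duration              # sweep rate in Hz/s
    return np.cos(2 * np.pi * (f0 * t + 0.5 * k * t * t))

def detect(rx, template):
    """Matched filter: index of the correlation peak, i.e. the chirp's start
    sample. Delayed multipath copies correlate at distinct, resolvable lags."""
    corr = np.correlate(rx, template, mode="valid")
    return int(np.argmax(np.abs(corr)))

fs = 48_000                                  # typical smartphone sampling rate
tpl = chirp(18_000, 20_000, 0.05, fs)        # near-inaudible band
rng = np.random.default_rng(0)
rx = np.concatenate([np.zeros(1000), tpl, np.zeros(500)])
rx += 0.5 * rng.standard_normal(rx.size)     # additive noise
start = detect(rx, tpl)
```

Even with substantial noise, the correlation peak pinpoints the chirp's arrival sample, which is the property the modem exploits for both symbol detection and multipath resolution.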
【Paper Link】 【Pages】:2416-2424
【Authors】: Thomas Nitsche ; Adriana B. Flores ; Edward W. Knightly ; Joerg Widmer
【Abstract】: Millimeter-wave communication achieves multi-Gbps data rates via highly directional beamforming to overcome pathloss and provide the desired SNR. Unfortunately, establishing communication with sufficiently narrow beamwidth to obtain the necessary link budget is a high-overhead procedure in which the search space scales with device mobility and the product of the sender-receiver beam resolution. In this paper, we design, implement, and experimentally evaluate Blind Beam Steering (BBS), a novel architecture and algorithm that removes in-band overhead for directional mm-Wave link establishment. Our system architecture couples mm-Wave and legacy 2.4/5 GHz bands using out-of-band direction inference to establish (overhead-free) multi-Gbps mm-Wave communication. Further, BBS evaluates direction estimates retrieved from passively overheard 2.4/5 GHz frames to ensure the highest mm-Wave link quality on unobstructed direct paths. By removing in-band overhead, we leverage mm-Wave's very high throughput capabilities and beam-width scalability, and provide robustness to mobility. We demonstrate that BBS achieves 97.8% accuracy in estimating direction between pairing nodes using at least 5 detection-band antennas. Further, BBS successfully detects unobstructed direct-path conditions with an accuracy of 96.5% and reduces the IEEE 802.11ad beamforming training overhead by 81%.
【Keywords】: antenna radiation patterns; array signal processing; beam steering; blind source separation; directive antennas; millimetre wave antenna arrays; radio links; wireless LAN; BBS; IEEE 802.11ad beamforming training overhead; SNR; blind beam steering; detection band antenna; device mobility; directional beamforming; directional mm-Wave link quality; eyes closed; in-band overhead removal; link budget; millimeter wave communication; mm-Wave beam steering; out-of-band direction inference; sender-receiver beam resolution; Attenuation; Computer architecture; Directive antennas; History; Throughput; Training
【Paper Link】 【Pages】:2425-2433
【Authors】: Zhangyu Guan ; Giuseppe Enrico Santagati ; Tommaso Melodia
【Abstract】: We consider the problem of designing optimal network control algorithms for distributed networked systems of implantable medical devices wirelessly interconnected by means of ultrasonic waves, which are known to propagate better than radio-frequency electromagnetic waves in aqueous media such as human tissues. Specifically, we propose lightweight, asynchronous, and distributed algorithms for joint rate control and stochastic channel access designed to maximize the throughput of ultrasonic intra-body area networks under energy constraints. We first develop (and validate through testbed experiments) a statistical model of the ultrasonic channel and of the spatial and temporal variability of ultrasonic interference. Compared to in-air radio frequency (RF) propagation, ultrasonic propagation in human tissues is much slower, which further causes unaligned interference at the receiver. It is therefore inefficient to perform adaptation based on instantaneous channel state information (CSI). Based on this model, we formulate the problem of maximizing the network throughput by jointly controlling the transmission rate and the channel access probability over a finite time horizon based only on a statistical characterization of interference. We then propose a fully distributed solution algorithm, and through both simulation and testbed results, we show that the algorithm achieves considerable throughput gains compared with traditional algorithms.
【Keywords】: biological tissues; biomedical ultrasonics; body area networks; interference (signal); stochastic processes; CSI; aqueous media; channel access probability; distributed algorithms; distributed networked systems; energy constraints; finite time horizon; human tissues; implantable medical devices; instantaneous channel state information; joint rate control; network throughput; optimal network control algorithms; spatial variability; statistical model; stochastic channel access; temporal variability; transmission rate; ultrasonic channel; ultrasonic interference; ultrasonic intra-body area networks; ultrasonic waves; unaligned interference; Acoustics; Interference; Nakagami distribution; Radio frequency; Receivers; Throughput; Ultrasonic variables measurement
【Paper Link】 【Pages】:2434-2442
【Authors】: Jialiang Zhang ; Xinyu Zhang ; Gang Wu
【Abstract】: Visible Light Communication (VLC) is emerging as an appealing technology to complement WiFi in indoor environments. Yet maintaining VLC performance under link dynamics remains a challenging problem. In this paper, we build a VLC software-radio testbed and examine VLC channel dynamics through comprehensive measurement. We find that minor device movement or orientation changes can cause the VLC link SNR to vary by tens of dB even within one packet duration, which renders existing WiFi rate adaptation protocols ineffective. We thus propose a new mechanism, DLit, which leverages two unique properties of VLC links (predictability and full-duplex operation) to realize fine-grained, in-frame rate adaptation. Our prototype implementation and experiments demonstrate that DLit achieves near-optimal performance for mobile VLC use cases, and outperforms conventional packet-level adaptation schemes severalfold.
【Keywords】: optical communication; optical links; software radio; wireless LAN; VLC links; VLC software radio; WiFi; WiFi rate adaptation protocols; appealing technology; conventional packet-level adaptation schemes; indoor environments; packet duration; predictive in-frame rate selection; visible light communications; visible light networks; Calibration; Gain; Light emitting diodes; Modulation; Receivers; Signal to noise ratio; Transmitters
【Paper Link】 【Pages】:2443-2451
【Authors】: Kun Xie ; Lele Wang ; Xin Wang ; Gaogang Xie ; Guangxing Zhang ; Dongliang Xie ; Jigang Wen
【Abstract】: End-to-end network monitoring is essential to ensure transmission quality for Internet applications. However, in large-scale networks, full-mesh measurement of network performance between all transmission pairs is infeasible. As a newly emerging sparse representation technique, matrix completion allows the recovery of a low-rank matrix using only a small number of random samples. Existing schemes often fix the number of samples assuming the rank of the matrix is known, while in practice the data features, and thus the matrix rank, vary over time. In this paper, we propose to exploit matrix completion techniques to derive the end-to-end network performance among all node pairs by measuring only a small subset of end-to-end paths. To address the challenge of rank change in a practical system, we propose a sequential, information-based adaptive sampling scheme, along with a novel sampling stopping condition. Our scheme is based only on the data observed, without relying on the reconstruction method or on knowledge of the sparsity of the unknown data. We have performed extensive simulations based on real-world trace data, and the results demonstrate that our scheme can significantly reduce the measurement cost while ensuring high accuracy in obtaining the whole network performance data.
【Keywords】: matrix algebra; signal sampling; wireless mesh networks; end-to-end network monitoring system performance; information-based adaptive sampling scheme; large-scale network full-mesh measurement; low-rank matrix completion technique; sampling stopping condition; sequential sampling scheme; Accuracy; Computers; Conferences; Internet; Matrix decomposition; Monitoring; Sparse matrices; Matrix Completion; Round-Trip Time Measurement; Sampling Stopping Condition
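The core recovery step behind such schemes can be illustrated with a generic iterative truncated-SVD completion (a hard-impute-style sketch), which alternates projecting onto rank-r matrices with refitting the observed entries. This is a standard textbook baseline under assumed parameters, not the paper's adaptive sampling scheme.

```python
import numpy as np

def complete(M, mask, rank=2, iters=500):
    """Recover a low-rank matrix from sampled entries (mask==True) by
    alternating a truncated-SVD projection with refitting the samples."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # project to rank-r
        X[mask] = M[mask]                          # keep observed entries
    return X

rng = np.random.default_rng(0)
# Ground truth: a rank-2 "performance" matrix among 20 nodes.
A = rng.random((20, 2)) @ rng.random((2, 20))
mask = rng.random(A.shape) < 0.5                   # measure ~50% of pairs
X = complete(A, mask, rank=2)
err = np.abs(X - A).max()
```

With roughly half the node pairs measured, the remaining entries are recovered to high accuracy, which is precisely why sampling only a subset of paths suffices; the open question the paper addresses is how many samples to take when the rank is unknown and drifting.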
【Paper Link】 【Pages】:2452-2460
【Authors】: Maria Rosário de Oliveira ; João C. Neves ; Rui Valadas ; Paulo Salvador
【Abstract】: The classification of Internet traffic using supervised or semi-supervised statistical learning techniques, both for anomaly detection and for the identification of Internet applications, has been impaired by difficulties in obtaining a reliable ground-truth, required both to train the classifier and to evaluate its performance. A perfect ground-truth is increasingly difficult, or sometimes impossible, to obtain due to the growing percentage of encrypted traffic, the sophistication of network attacks, and the constant updates of Internet applications. In this paper, we study the impact of the ground-truth on training the classifier and estimating its performance measures. We show both theoretically and through simulation that ground-truth imperfections can severely bias the performance estimates. We then propose a latent class model that overcomes this problem by combining the estimates of several classifiers over the same dataset. The model is evaluated using a high-quality dataset that includes the most representative Internet applications and network attacks. The results show that our latent class model produces very good performance estimates under mild levels of ground-truth imperfection, and can thus be used to correctly benchmark Internet traffic classifiers when only an imperfect ground-truth is available.
【Keywords】: Internet; learning (artificial intelligence); statistical analysis; telecommunication traffic; Internet traffic classification; ground-truth imperfection; latent class model; semisupervised statistical learning technique; Computers; Conferences; Estimation; IP networks; Internet; Standards; Training; Anomaly Detection; Identification of Internet Applications; Latent Class Models; Traffic Classification
【Paper Link】 【Pages】:2461-2469
【Authors】: Jianfeng Li ; Jing Tao ; Xiaobo Ma ; Junjie Zhang ; Xiaohong Guan
【Abstract】: With the growing stickiness of the Internet, numerous automated programs running in terminal facilities (e.g., laptops) tend to stay closely connected to the Internet by repetitively interacting with remote services. It is of fundamental importance to study such repeating behaviors of automated programs in areas like traffic engineering and network monitoring. This paper focuses on repeating behaviors of interest in packet arrivals, aiming at a hierarchical characterization of packet arrivals, detection methods, and quantitative metrics. To this end, we present a structure-oriented characterization of packet arrivals, which reflects the temporal structure of repeating behaviors at different scales. Based on this characterization, a repeating behavior detection method is proposed by leveraging online-learning prediction, and two novel metrics of repeating behaviors are proposed from different aspects. In addition, a denoising method is developed to enhance the noise tolerance of detection and measurement in the face of noise. Experimental results based on real-world traces demonstrate the effectiveness of our proposed approaches in automated program behavior detection and behavioral botnet analysis.
【Keywords】: Internet; invasive software; automated programs; botnet analysis; denoising method; noise tolerant capability; packet arrival detection method; packet arrival repeating behaviors; quantitative metrics; repeating behavior detection method; structure oriented characterization; Computers; Conferences; Couplings; Electronic mail; Indexes; Internet; Measurement; repeating behavior; temporal structure; traffic modeling
【Paper Link】 【Pages】:2470-2478
【Authors】: Romain Fontugne ; Johan Mazel ; Kensuke Fukuda
【Abstract】: Monitoring delays in the Internet is essential to understand the network condition and ensure the good functioning of time-sensitive applications. Large-scale measurements of round-trip time (RTT) are promising data sources for gaining better insights into Internet-wide delays. However, the lack of an efficient methodology to model RTTs prevents researchers from leveraging the value of these datasets. In this work, we propose a log-normal mixture model to identify, characterize, and monitor the spatial and temporal dynamics of RTTs. This data-driven approach provides a coarse-grained view of numerous RTTs in the form of a graph, thus enabling efficient and systematic analysis of Internet-wide measurements. Using this model, we analyze more than 13 years of RTTs from about 12 million unique IP addresses in passively measured backbone traffic traces. We evaluate the proposed method by comparison with external datasets, and present examples where the proposed model highlights interesting delay fluctuations due to route changes or congestion. We also introduce an application based on the proposed model to identify hosts deviating from their typical RTT fluctuations, and we envision various applications for this empirical model.
【Keywords】: IP networks; Internet; log normal distribution; mixture models; telecommunication traffic; IP address; Internet-wide delays; Internet-wide measurements; data-driven approach; delay fluctuations; empirical mixture model; large-scale RTT measurements; log-normal mixture model; passively measured backbone traffic traces; round-trip time; spatial dynamics; systematic analysis; temporal dynamics; Delays; Geology; IP networks; Internet; Monitoring; Tin; RTT; backbone traffic; mixture model
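A log-normal mixture can be fit by running ordinary Gaussian EM on log(RTT). The following self-contained sketch (a generic two-component EM with quantile initialization, assumed hyperparameters, and synthetic data, not the paper's pipeline) shows how two latent delay regimes, e.g. a direct path and a congested or rerouted path, are separated from a stream of RTT samples.

```python
import numpy as np

def fit_lognormal_mixture(rtts, k=2, iters=200):
    """Fit a k-component log-normal mixture by Gaussian EM on log(RTT).
    Returns (weights, means, stds), the latter two in log space."""
    x = np.log(np.asarray(rtts, dtype=float))
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread initial means
    sigma = np.full(k, x.std() + 1e-6)
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        d = (x[:, None] - mu) / sigma
        logp = -0.5 * d * d - np.log(sigma) + np.log(w)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: reweighted component statistics.
        n = r.sum(axis=0)
        w = n / n.sum()
        mu = (r * x[:, None]).sum(axis=0) / n
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-9
    return w, mu, sigma

rng = np.random.default_rng(1)
# Synthetic RTTs: a fast path (~10 ms) mixed with a congested one (~100 ms).
rtts = np.concatenate([rng.lognormal(np.log(10), 0.1, 500),
                       rng.lognormal(np.log(100), 0.1, 500)])
w, mu, sigma = fit_lognormal_mixture(rtts)
medians_ms = np.sort(np.exp(mu))    # per-component median RTT, in ms
```

Tracking how the fitted component medians and weights drift over time is the kind of signal the paper uses to flag route changes and congestion.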
【Paper Link】 【Pages】:2479-2487
【Authors】: Yutian Wen ; Xiaohua Tian ; Xinbing Wang ; Songwu Lu
【Abstract】: Indoor localization has been an active research field for decades, in which the received signal strength (RSS) fingerprinting based methodology is widely adopted and has induced many important localization techniques, such as the recently proposed approach of building the fingerprint database through crowdsourcing. While efforts have been dedicated to improving the accuracy and efficiency of localization, the fundamental limits of the RSS fingerprinting based methodology itself are still unknown from a theoretical perspective. In this paper, we present a general probabilistic model to shed light on a fundamental question: how accurate can RSS fingerprinting based indoor localization be? Concretely, we present the probability that a user can be localized in a region of a certain size, given the RSS fingerprints submitted to the system. We reveal the interaction among the localization accuracy, the reliability of location estimation, and the number of measurements in RSS fingerprinting based location determination. Moreover, we present the optimal fingerprint reporting strategy that achieves the best accuracy for a given reliability and number of measurements, which provides a design guideline for RSS fingerprinting based indoor localization facilitated by the crowdsourcing paradigm.
【Keywords】: RSSI; fingerprint identification; indoor communication; probability; telecommunication network reliability; RSS fingerprinting based indoor localization; crowdsourcing paradigm; general probabilistic model; location estimation reliability; received signal strength fingerprinting method; Accuracy; Crowdsourcing; Databases; Mobile handsets; Probabilistic logic; Reliability; Wireless communication
【Paper Link】 【Pages】:2488-2496
【Authors】: Kaikai Sheng ; Zhicheng Gu ; Xueyu Mao ; Xiaohua Tian ; Weijie Wu ; Xiaoying Gan ; Xinbing Wang
【Abstract】: With the pervasiveness of mobile devices, crowdsourcing-based received signal strength (RSS) fingerprint collection has drawn much attention as a means of facilitating indoor localization, since it is effective and requires no pre-deployment. However, in large open indoor environments like museums and exhibition centres, RSS measurement points cannot be collocated densely, which degrades localization accuracy. This paper focuses on measurement point collocation in different cases and its effects on localization accuracy. We first study two simple preliminary cases under the assumption that users are uniformly distributed: when measurement points are collocated regularly, we propose a collocation pattern that is most beneficial to localization accuracy; when measurement points are collocated randomly, we prove that localization accuracy is limited by a tight bound. Under the general case that users are distributed asymmetrically, we show the best allocation scheme of measurement points: the measurement point density ρ is proportional to (cμ)^(2/3) in every part of the region, where μ is the user density and c is a constant determined by the collocation pattern. We also give some guidelines on collocation choice and perform extensive simulations to validate our assumptions and results.
【Keywords】: RSSI; indoor navigation; indoor radio; mobile handsets; RSS fingerprint collection method; crowdsourcing based received signal strength fingerprint collection method; large open indoor environment; measurement point collocation; measurement point density; mobile device; Accuracy; Area measurement; Computers; Conferences; Density measurement; Indoor environments; Loss measurement
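The ρ ∝ (cμ)^(2/3) allocation rule is easy to apply in practice: given per-sub-region user densities and a fixed budget of measurement points, allocate the budget in proportion to (cμ)^(2/3). A minimal worked example (the sub-region densities and budget are illustrative, not from the paper):

```python
def allocate_points(user_density, total_points, c=1.0):
    """Split a budget of measurement points across sub-regions in
    proportion to (c * mu)**(2/3), following the allocation rule above."""
    weights = [(c * mu) ** (2.0 / 3.0) for mu in user_density]
    total_w = sum(weights)
    return [total_points * w / total_w for w in weights]

# Example: three halls of equal area with user densities in ratio 1 : 8 : 27.
# The (2/3)-power compresses this to weights 1 : 4 : 9, so dense halls get
# more points, but sub-linearly in their user density.
alloc = allocate_points([1.0, 8.0, 27.0], total_points=100)
```

The sub-linear exponent is the interesting design consequence: doubling the user density of a hall does not double its share of measurement points.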
【Paper Link】 【Pages】:2497-2505
【Authors】: Chenshu Wu ; Zheng Yang ; Chaowei Xiao ; Chaofan Yang ; Yunhao Liu ; Mingyan Liu
【Abstract】: The proliferation of mobile computing has prompted WiFi-based indoor localization to become one of the most attractive and promising techniques for ubiquitous applications. A primary concern for these technologies to be fully practical is to combat harsh indoor environmental dynamics, especially for long-term deployment. Despite numerous research efforts on WiFi fingerprint-based localization, the problem of radio map adaptation has not been sufficiently studied and remains open. In this work, we propose AcMu, an automatic and continuous radio map self-updating service for wireless indoor localization that exploits the static behaviors of mobile devices. By accurately pinpointing mobile devices with a novel trajectory matching algorithm, we employ them as mobile reference points to collect real-time RSS samples when they are static. With these fresh reference data, we adapt the complete radio map by learning an underlying relationship of RSS dependency between different locations, which is expected to be relatively constant over time. Extensive experiments over 20 days across 6 months demonstrate that AcMu effectively accommodates RSS variations over time and derives accurate predictions of the fresh radio map with average errors of less than 5 dB. Moreover, AcMu provides a 2x improvement in localization accuracy by maintaining an up-to-date radio map.
【Keywords】: mobile computing; wireless LAN; AcMu; WiFi fingerprint-based localization; WiFi-based indoor localization; automatic and continuous radio map self-updating service; harsh indoor environmental dynamics; long-term deployment; mobile computing; mobile devices; mobile reference points; novel trajectory matching algorithm; real-time RSS samples; wireless indoor localization; Estimation; Mobile communication; Mobile handsets; Real-time systems; Sensors; Trajectory; Wireless communication
【Paper Link】 【Pages】:2506-2514
【Authors】: Suining He ; S.-H. Gary Chan ; Lei Yu ; Ning Liu
【Abstract】: Fusing fingerprints with mutual distance information potentially improves indoor localization accuracy. Such distance information may be spatial (e.g., via inter-node measurement) or temporal (e.g., via dead reckoning). Previous approaches to distance fusion often require exact distance measurement, assume knowledge of the distance distribution, or apply narrowly to some specific sensing technology or scenario. Due to random signal fluctuation, wireless fingerprints are inherently noisy and distance cannot be exactly measured. We hence propose Wi-Dist, a highly accurate indoor localization framework fusing noisy fingerprints with uncertain mutual distances (given by their bounds). Wi-Dist is a generic framework applicable to a wide range of sensors (peer-assisted, INS, etc.) and wireless fingerprints (Wi-Fi, RFID, CSI, etc.). It achieves low errors through a convex-optimization formulation that jointly considers distance bounds and only the first two moments of the measured fingerprint signals. We implement Wi-Dist, and conduct extensive simulation and experimental studies based on Wi-Fi in an international airport and a university campus. Our results show that Wi-Dist achieves significantly better accuracy than other state-of-the-art schemes (often by more than 40%).
【Keywords】: airports; educational institutions; optimisation; radiofrequency identification; wireless LAN; RFID; Wi-Dist; Wi-Fi; convex-optimization formulation; dead reckoning; distance bounds; distance measurement; fusing noisy fingerprints; indoor localization; inter-node measurement; international airport; mutual distance information; random signal fluctuation; uncertain mutual distances; university campus; wireless fingerprints; Accuracy; Dead reckoning; Distance measurement; IEEE 802.11 Standard; Manganese; Noise measurement; Sensors; Indoor localization; convex optimization; distance bounds; fusion; measurement uncertainty; noisy fingerprint
【Paper Link】 【Pages】:2515-2523
【Authors】: Tie Luo ; Salil S. Kanhere ; Hwee-Pink Tan ; Fan Wu ; Hongyi Wu
【Abstract】: Incentive mechanisms for crowdsourcing have been extensively studied under the framework of all-pay auctions. Along a distinct line, this paper proposes to use Tullock contests as an alternative tool to design incentive mechanisms for crowdsourcing. We are inspired by the conduciveness of Tullock contests to attracting user entry (yet not necessarily a higher revenue) in other domains. In this paper, we explore a new dimension in optimal Tullock contest design by superseding the contest prize, which is fixed in conventional Tullock contests, with a prize function that depends on the (unknown) winner's contribution, in order to maximize the crowdsourcer's utility. We show that this approach leads to attractive practical advantages: (a) it is well suited for rapid prototyping in fully distributed web agents and smartphone apps; (b) it overcomes the disincentive to participate caused by players' antagonism toward an increasing number of rivals. Furthermore, we optimize conventional, fixed-prize Tullock contests to construct the most competitive benchmark to compare against our mechanism. Through extensive evaluations, we show that our mechanism significantly outperforms the optimal benchmark, by over threefold on the crowdsourcer's utility cum profit and by up to ninefold on the players' social welfare.
【Keywords】: commerce; optimal systems; smart phones; all-pay auctions; contest prize; crowdsourcer utility cum profit; distributed Web agents; fixed-prize Tullock contests; incentive mechanisms; optimal Tullock contest design; optimal benchmark; players antagonism; players social welfare; prize function; rapid prototyping; smartphone apps; user entry; Bayes methods; Benchmark testing; Computers; Conferences; Cost accounting; Crowdsourcing; Games
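In a Tullock contest, player i wins the prize with probability x_i^r / Σ_j x_j^r, where x_i is i's effort and r is the discriminatory power. The paper's twist replaces the fixed prize V with a prize function of the winner's contribution. A minimal sketch of the standard contest success function and a player's expected utility under such a prize function (the particular prize function below is a hypothetical placeholder, not the paper's optimal design):

```python
def win_probabilities(efforts, r=1.0):
    """Tullock contest success function: player i wins with
    probability efforts[i]**r / sum_j efforts[j]**r."""
    powered = [x ** r for x in efforts]
    total = sum(powered)
    return [p / total for p in powered]

def expected_utility(efforts, i, prize, r=1.0):
    """Expected utility of player i: win probability times the prize for
    i's own contribution, minus the cost of effort. Pass a constant
    function (lambda x: V) to recover the conventional fixed-prize contest."""
    return win_probabilities(efforts, r)[i] * prize(efforts[i]) - efforts[i]

probs = win_probabilities([1.0, 3.0])                        # [0.25, 0.75]
u = expected_utility([1.0, 3.0], 1, prize=lambda x: 8.0)     # 0.75*8 - 3 = 3.0
```

Note that a player's win probability and utility depend only on the efforts, not on knowing who the rivals are, which is part of why such mechanisms suit fully distributed agents.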
【Paper Link】 【Pages】:2524-2532
【Authors】: Fei Chen ; Cong Zhang ; Feng Wang ; Jiangchuan Liu
【Abstract】: Empowered by today's rich tools for media generation and distribution, and by convenient Internet access, crowdsourced streaming generalizes the single-source streaming paradigm by including massive contributors for a video channel. It calls for joint optimization along the path from crowdsourcers, through streaming servers, to the end-users to minimize the overall latency. The dynamics of the video sources, together with the globalized request demands and the high computation demand from each crowdsourcer, make crowdsourced live streaming challenging even with powerful support from modern cloud computing. In this paper, we present a generic framework that facilitates a cost-effective cloud service for crowdsourced live streaming. Through adaptive leasing, the cloud servers can be provisioned at a fine granularity to accommodate geo-distributed video crowdsourcers. We present an optimal solution to deal with service migration among cloud instances of diverse lease prices. It also addresses the impact of location on streaming quality. To understand the performance of the proposed strategies in the real world, we have built a prototype system running over PlanetLab and the Amazon/Microsoft clouds. Our extensive experiments demonstrate the effectiveness of our solution in terms of deployment cost and streaming quality.
【Keywords】: cloud computing; optimisation; outsourcing; video servers; video streaming; Amazon cloud; Microsoft cloud; cloud service; crowdsourced live streaming; geodistributed video crowdsourcer; joint optimization; service migration; streaming server; video channel; Cloud computing; Computers; Media; Production; Servers; Streaming media
【Paper Link】 【Pages】:2533-2541
【Authors】: Aifang Xu ; Xiaonan Feng ; Ye Tian
【Abstract】: Crowdsourcing services have emerged and become popular on the Internet in recent years. However, evidence shows that crowdsourcing can be maliciously manipulated. In this paper, we focus on the “dark side” of crowdsourcing services. More specifically, we investigate the spam campaigns that originate and are orchestrated on a large Chinese crowdsourcing website, namely ZhuBaJie.com, and track the crowd workers to their spamming behaviors on Baidu Zhidao, the largest community-based question answering (QA) site in China. By linking the spam campaigns, workers, spammer accounts, and spamming behaviors together, we are able to reveal the entire ecosystem that underlies crowdsourcing spam attacks. We present a comprehensive and insightful analysis of the ecosystem from multiple perspectives, including the scale and scope of the spam attacks, the Sybil accounts and colluding strategies employed by the spammers, the workers' efforts and monetary rewards, and the quality control performed by the spam campaigners. We also analyze the behavioral discrepancies between the spammer accounts and legitimate users in community QA, and present methodologies for detecting the spammers based on our understanding of the crowdsourcing spam ecosystem.
【Keywords】: Internet; Web sites; outsourcing; security of data; unsolicited e-mail; Baidu Zhidao; China; Chinese-based crowdsourcing Website; Internet; Sybil accounts; ZhuBaJie.com; community Q&A; community-based question answering site; crowd workers; crowdsourcing services; crowdsourcing spam attacks; crowdsourcing spammer characterization; crowdsourcing spammer detection; quality control; spam campaigns; spammer accounts; spamming behaviors; Computers; Conferences; Crowdsourcing; Ecosystems; Knowledge discovery; Unsolicited electronic mail
【Paper Link】 【Pages】:2542-2550
【Authors】: Zongjian He ; Jiannong Cao ; Xuefeng Liu
【Abstract】: The potential of crowdsourcing for complex problem solving has been demonstrated by smartphones. Nowadays, vehicles have also been increasingly adopted as participants in crowdsourcing applications. Different from smartphones, vehicles have the distinct advantage of predictable mobility, which brings new insight into improving crowdsourcing quality. Unfortunately, utilizing predictable mobility in participant recruitment poses a new challenge: considering not only the current locations but also the future trajectories of participants. Therefore, existing participant recruitment algorithms that use only the current location may not perform well. In this paper, based on the predicted trajectory, we present a new participant recruitment strategy for vehicle-based crowdsourcing. This strategy guarantees that the system can perform well using the currently recruited participants for a period of time in the future. The participant recruitment problem is proven to be NP-complete, and we propose two algorithms, a greedy approximation and a genetic algorithm, to find solutions for different application scenarios. The performance of our algorithms is demonstrated with a traffic trace dataset. The results show that our algorithms outperform some existing approaches in terms of crowdsourcing quality.
【Keywords】: computational complexity; genetic algorithms; greedy algorithms; mobility management (mobile radio); smart phones; traffic information systems; vehicles; NP-complete problem; complex problem solving; crowdsourcing quality; genetic algorithm; greedy approximation; high quality participant recruitment; participant location; participant recruitment algorithm; participant trajectory; predictable mobility; smartphones; traffic trace dataset; vehicle-based crowdsourcing; Algorithm design and analysis; Approximation algorithms; Crowdsourcing; Monitoring; Recruitment; Trajectory; Vehicles
【Paper Link】 【Pages】:2551-2559
【Authors】: Tal Mizrahi ; Ori Rottenstreich ; Yoram Moses
【Abstract】: Network configuration and policy updates occur frequently, and must be performed in a way that minimizes the transient effects caused by intermediate states of the network. It has been shown that accurate time can be used for coordinating network-wide updates, thereby reducing temporary inconsistencies. However, this approach presents a great challenge: even if network devices have perfectly synchronized clocks, how can we guarantee that updates are performed at the exact time for which they were scheduled? In this paper we present a practical method for implementing accurate time-based updates, using TimeFlips. A TimeFlip is a time-based update that is implemented using a timestamp field in a Ternary Content Addressable Memory (TCAM) entry. TimeFlips can be used to implement Atomic Bundle updates, and to coordinate network updates with high accuracy. We analyze the amount of TCAM resources required to encode a TimeFlip, and show that if there is enough flexibility in determining the scheduled time, a TimeFlip can be encoded by a single TCAM entry, using a single bit to represent the timestamp, and allowing the update to be performed with an accuracy on the order of 1 microsecond.
【Keywords】: content-addressable storage; scheduling; TCAM ranges; TCAM resources; TimeFlip; network configuration; scheduling network; ternary content addressable memory; time based updates; transient effects; Accuracy; Clocks; Encoding; Optimal scheduling; Performance evaluation; Schedules; Software
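The TCAM encoding question here is essentially range-to-ternary-prefix conversion: a timestamp range matched by TCAM entries collapses into very few entries when its endpoints can be chosen flexibly (in the extreme, one entry whose low bits are all don't-cares). A toy converter, with the bit width and ranges invented for illustration; the paper's analysis of entry counts is more refined:

```python
def range_to_prefixes(lo, hi, width):
    """Minimal prefix cover of the integer range [lo, hi] as ternary
    strings over '0'/'1'/'*' (one string per TCAM entry)."""
    if lo > hi:
        return []
    # largest aligned power-of-two block [lo, lo+size-1] inside [lo, hi]
    size = 1
    while lo % (size * 2) == 0 and lo + size * 2 - 1 <= hi:
        size *= 2
    k = size.bit_length() - 1          # number of wildcard bits
    prefix = format(lo >> k, '0%db' % (width - k)) if width > k else ''
    return [prefix + '*' * k] + range_to_prefixes(lo + size, hi, width)
```

An aligned range such as [0, 15] needs one all-wildcard entry, while a misaligned range such as [5, 7] needs several — which is why scheduling flexibility lets a TimeFlip fit in a single entry.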
【Paper Link】 【Pages】:2560-2568
【Authors】: Gil Einziger ; Benny Fellman ; Yaron Kassner
【Abstract】: Measurement capabilities are essential for a variety of network applications, such as load balancing, routing, fairness and intrusion detection. These capabilities require large counter arrays in order to monitor the traffic of all network flows. While commodity SRAM memories are capable of operating at line speed, they are too small to accommodate large counter arrays. Previous works suggested estimators, which trade precision for reduced space. However, in order to accurately estimate the largest counter, these methods compromise the accuracy of the rest of the counters. In this work we present a closed form representation of the optimal estimation function. We then introduce Independent Counter Estimation Buckets (ICE-Buckets), a novel algorithm that improves estimation accuracy for all counters. This is achieved by separating the flows to buckets and configuring the optimal estimation function according to each bucket's counter scale. We prove an improved upper bound on the relative error and demonstrate an accuracy improvement of up to 57 times on real Internet packet traces.
【Keywords】: estimation theory; closed form representation; independent counter estimation buckets; optimal estimation function; Accuracy; Computers; Conferences; Estimation; Monitoring; Radiation detectors; Random access memory
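The precision-versus-range trade-off that ICE-Buckets tunes per bucket can be seen in a classic Morris-style probabilistic counter, where the base of the estimation function sets the counter's scale. The sketch below is the textbook estimator, not the paper's optimal estimation function:

```python
import random

class ApproxCounter:
    """Morris-style counter: stores a small value c and estimates
    n ~ (b**c - 1)/(b - 1). A smaller base b gives better accuracy over a
    smaller countable range -- the trade-off ICE-Buckets configures per
    bucket of flows according to that bucket's counter scale."""
    def __init__(self, base=1.1):
        self.b, self.c = base, 0

    def increment(self):
        # increment with probability b**-c, keeping the estimate unbiased
        if random.random() < self.b ** (-self.c):
            self.c += 1

    def estimate(self):
        return (self.b ** self.c - 1) / (self.b - 1)
```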
【Paper Link】 【Pages】:2569-2577
【Authors】: Marat Radan ; Isaac Keslassy
【Abstract】: The growing demand for network programmability has led to the introduction of complex packet processing features that are increasingly hard to provide at full line rates. In this paper, we introduce a novel load-balancing approach that provides more processing power to congested linecards by tapping into the processing power of underutilized linecards. Using different switch-fabric models, we introduce algorithms that aim at minimizing the total average delay and maximizing the capacity region. Our simulations with real-life traces then confirm that our algorithms outperform current algorithms as well as simple alternative load-balancing algorithms. Finally, we discuss the implementation issues involved in this new way of sharing the router processing power.
【Keywords】: channel capacity; minimisation; packet radio networks; telecommunication network routing; capacity region maximisation; complex packet processing; congested linecards; delay minimization; load balancing approach; network programmability; router unutilized processing power; switch fabric model; Complexity theory; Computers; Conferences; Delays; Fabrics; Load modeling; Switches
【Paper Link】 【Pages】:2578-2586
【Authors】: Anat Bremler-Barr ; Shimrit Tzur-David ; Yotam Harchol ; David Hay
【Abstract】: Deep Packet Inspection (DPI) plays a major role in contemporary networks. Specifically, in datacenters of content providers, the scanned data may be highly repetitive. Most DPI engines are based on identifying signatures in the packet payload. This pattern matching process is expensive both in memory and CPU resources, and thus, often becomes the bottleneck of the entire application. In this paper we show how DPI can be accelerated by leveraging repetitions in the inspected traffic. Our new mechanism makes use of these repetitions to allow the repeated data to be skipped rather than scanned again. The mechanism consists of a slow path, in which frequently repeated strings are identified and stored in a dictionary, along with some succinct information for accelerating the DPI process, and a data path, where the traffic is scanned byte by byte but strings from the dictionary, if encountered, are skipped. Upon skipping, the data path recovers to the state it would have been in had the scanning continued byte by byte. Our solution achieves a significant performance boost, especially when data is from the same content source (e.g., the same website). Our experiments show that for such cases, our solution achieves a throughput gain of 1.25-2.5 times the original throughput, when implemented in software.
【Keywords】: computer networks; telecommunication traffic; data path; high-speed deep packet inspection; packet payload; pattern matching process; traffic repetitions; Acceleration; Automata; Conferences; Dictionaries; Engines; Hardware; Software
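The "skip and recover state" idea can be illustrated with a single-pattern DFA: for each frequently repeated string w, precompute the state-to-state map induced by scanning w, so the data path jumps over w with one table lookup instead of |w| transitions. The pattern and names below are invented, and a real engine must also record any matches occurring inside w:

```python
def build_dfa(pattern, alphabet):
    """KMP-style matching DFA; state = number of pattern chars matched."""
    m = len(pattern)
    dfa = [dict() for _ in range(m + 1)]
    for state in range(m + 1):
        for ch in alphabet:
            if state < m and ch == pattern[state]:
                dfa[state][ch] = state + 1
            else:
                # longest suffix of (matched text + ch) that is a prefix
                s = (pattern[:state] + ch)[1:]
                while s and not pattern.startswith(s):
                    s = s[1:]
                dfa[state][ch] = len(s)
    return dfa

def run(dfa, state, text):
    for ch in text:
        state = dfa[state][ch]
    return state

def compose(dfa, w):
    """delta_w: state reached from each start state after scanning w;
    skipping w in the data path is then a single lookup in this list."""
    return [run(dfa, q, w) for q in range(len(dfa))]
```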
【Paper Link】 【Pages】:2587-2595
【Authors】: Xin Wang ; Richard T. B. Ma ; Yinlong Xu ; Zhipeng Li
【Abstract】: Most sampling techniques for online social networks (OSNs) are based on a particular sampling method on a single graph, which is referred to as a statistic. However, various sampling methods realized on different graphs could be used in the same OSN, and they may lead to different sampling efficiencies, i.e., asymptotic variances. To utilize multiple statistics for accurate measurements, we formulate a mixture sampling problem, through which we construct a mixture unbiased estimator that minimizes the asymptotic variance. Given fixed sampling budgets for different statistics, we derive the optimal weights to combine the individual estimators; given a fixed total budget, we show that a greedy allocation towards the most efficient statistic is optimal. In practice, the sampling efficiencies of statistics can be quite different for various targets and are unknown before sampling. To solve this problem, we design a two-stage framework which adaptively spends a partial budget to test different statistics and allocates the remaining budget to the inferred best statistic. We show that our two-stage framework is a generalization of 1) randomly choosing a statistic and 2) evenly allocating the total budget among all available statistics, and our adaptive algorithm achieves higher efficiency than these benchmark strategies in theory and experiment.
【Keywords】: greedy algorithms; sampling methods; social networking (online); OSN; asymptotic variance; greedy allocation; heterogeneous statistics; online social network; sampling technique; Benchmark testing; Computers; Conferences; Estimation; Resource management; Sampling methods; Social network services
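For intuition, the minimum-variance unbiased combination of estimators with known asymptotic variances is the standard inverse-variance weighting; the paper's contribution lies in handling unknown variances and budget allocation. A sketch of the standard result:

```python
def inverse_variance_combine(estimates, variances):
    """Combine unbiased estimates of one quantity: weight each by the
    inverse of its variance. The combined variance 1/sum(1/v_i) is no
    larger than any individual v_i."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    weights = [w / total for w in weights]
    combined = sum(w * e for w, e in zip(weights, estimates))
    combined_var = 1.0 / sum(1.0 / v for v in variances)
    return combined, combined_var, weights
```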
【Paper Link】 【Pages】:2596-2604
【Authors】: Mohammad Rezaur Rahman ; Jinyoung Han ; Chen-Nee Chuah
【Abstract】: This paper demystifies the adoption and cascading process of OSN-based applications that grow via user invitations. We analyze a detailed large-scale dataset of a popular Facebook gifting application, iHeart, that contains more than 2 billion entries of user activities generated by 190 million users during a span of 64 weeks. We investigate: (1) how users invite their friends to an OSN-based application, (2) what factors drive the cascading process of application adoptions, and (3) what are the good predictors of the ultimate cascade sizes. We find that sending or receiving a large number of invitations does not necessarily help to recruit new users to iHeart. We also identify a set of distinctive features that are good predictors of the growth of the application adoptions in terms of final population size. Finally, based on the insights learned from our analyses, we propose a prediction model to infer whether a cascade of application adoption will continue to grow in the future based on observing the initial adoption process. Results show our proposed model can achieve high precision (over 80%) in iHeart as well as in another OSN-based gifting application, Hugged.
【Keywords】: social networking (online); Hugged; OSN-based gifting applications; adoption process; cascading process; detailed large-scale dataset; good predictors; iHeart; over online social networks; popular Facebook gifting application; ultimate cascade sizes; user invitations; Analytical models; Computers; Conferences; Electronic mail; Facebook; Predictive models; Twitter
【Paper Link】 【Pages】:2605-2613
【Authors】: Swapna Buccapatnam ; Jian Tan ; Li Zhang
【Abstract】: Information sharing is an important issue for stochastic bandit problems in a distributed setting. Consider N players dealing with the same multi-armed bandit problem. All players receive requests simultaneously and must choose one of M actions for each request. Sharing information among these N players can decrease the regret for each of them but also incurs cooperation and communication overhead. In this setting, we study how cooperation and communication can impact the system performance measured by regret and communication cost. For both scenarios, we establish a uniform lower bound to the regret for the entire system as a function of time and network size. Concerning cooperation, we study the problem from a game-theoretic perspective. When each player's actions and payoffs are immediately visible to all others, we identify strategies for all players under which cooperative exploration is ensured. Regarding the communication cost, we consider incomplete information sharing such that a player's payoffs and actions are not entirely available to others. The players communicate observations to each other to reduce their regret, though at a cost. We show that a logarithmic communication cost is necessary to achieve the optimal regret. For Bernoulli arrivals, we specify a policy that achieves the optimal regret with a logarithmic communication cost. Our work opens a novel direction towards understanding information sharing for active learning in a distributed environment.
【Keywords】: game theory; statistical distributions; Bernoulli arrivals; distributed stochastic bandit; game-theoretic perspective; information sharing; logarithmic communication cost; multiarmed bandit problem; Computers; Conferences; Games; Information management; Monitoring; Nash equilibrium; Probability distribution
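A minimal picture of why information sharing helps: if N players pool all arm counts and reward sums, a textbook UCB1 index concentrates N times faster than it would for any player alone. The code below is plain UCB1 over pooled statistics with Bernoulli rewards, not the paper's policy, and every constant is illustrative:

```python
import math
import random

def shared_ucb(arms_mean, players, rounds, seed=0):
    """All players share one (counts, sums) table; each round every
    player picks the arm maximizing the pooled UCB1 index."""
    rng = random.Random(seed)
    M = len(arms_mean)
    counts, sums = [0] * M, [0.0] * M
    total_reward = 0.0
    for _ in range(rounds):
        for _ in range(players):
            if 0 in counts:
                a = counts.index(0)  # play every arm once first
            else:
                n = sum(counts)
                a = max(range(M),
                        key=lambda i: sums[i] / counts[i]
                        + math.sqrt(2 * math.log(n) / counts[i]))
            r = 1.0 if rng.random() < arms_mean[a] else 0.0
            counts[a] += 1
            sums[a] += r
            total_reward += r
    return total_reward, counts
```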
【Paper Link】 【Pages】:2614-2622
【Authors】: Huanle Xu ; Pili Hu ; Wing Cheong Lau ; Qiming Zhang ; Yang Wu
【Abstract】: Social networking services have become an essential part of our life today. However, many privacy concerns have recently been raised due to the centralized nature of such services. Decentralized Social Networks (DSNs) are believed to be a viable solution to these problems. In this paper, we design a protocol to coordinate the pulling operation of DSN nodes. The protocol is the result of forward engineering via utility maximization that takes the communication-layer congestion level as well as social-network-layer centrality into consideration. We solve the pulling rate control problem using the primal-dual approach and prove that the protocol converges quickly when executed in a decentralized manner. Furthermore, we develop a novel “drumbeats” algorithm to estimate node centrality purely based on passively observed information. Simulation results show that our protocol reduces the average message propagation delay by 15% compared to the baseline Fixed Equal Gap Pull protocol. In addition, the estimated node centrality matches well with the ground truth derived from the actual topology of the social network.
【Keywords】: protocols; social networking (online); telecommunication traffic; DPCP; communication layer congestion; decentralized social networks; drumbeats algorithm; fixed equal gap pull protocol; social network layer; social networking service; Algorithm design and analysis; Measurement; Optimization; Peer-to-peer computing; Protocols; Social network services; Topology
【Paper Link】 【Pages】:2623-2631
【Authors】: Haihang Zhou ; Jianguo Yao ; Haibing Guan ; Xue Liu
【Abstract】: To reduce the operation cost incurred by rapidly growing energy consumption in Internet data centers (IDCs), more and more Internet service providers have started using energy storage in various forms. Energy storage is used to store cheap electricity when the price from the smart grid is low or when renewable energy is available. Two forms of energy storage are typical in IDCs: battery storage and thermal energy storage. Recent work shows that energy storage can significantly reduce the operation cost of IDCs. However, the cost of energy storage devices is still high, and it may itself increase the operation cost. In this paper, we develop a comprehensive understanding of operation cost reduction for IDCs using energy storage. To this end, we conduct a quantitative analysis of the normalized electricity price under the two energy storage forms. The experiments demonstrate that the cost of storage devices and renewable energy supply is largely affected by the storage capacity and the location of the data center, and we conclude that using energy storage does not always reduce the operation cost of IDCs.
【Keywords】: Internet; cells (electric); computer centres; cost reduction; energy consumption; power aware computing; power markets; pricing; thermal energy storage; IDCs; Internet data centers; Internet service providers; battery storage; electricity energy storage devices; energy consumption; normalized electricity price; operation cost reduction; renewable energy supply; smart grid; storage capacity; thermal energy storage; Batteries; Discharges (electric); Optimization; Renewable energy sources; Smart grids; Thermal energy; Internet data center (IDC); battery storage; operation cost; thermal energy storage (TES)
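The paper's conclusion that storage does not always pay can be seen in a back-of-envelope comparison of price-arbitrage savings against per-cycle battery wear. All numbers and the linear wear model below are invented for illustration; the paper's analysis is far more detailed:

```python
def storage_worthwhile(peak, offpeak, efficiency,
                       battery_cost_per_kwh, cycle_life):
    """Charge 1 kWh at the off-peak price, deliver `efficiency` kWh at
    the peak price; each cycled kWh also consumes battery life worth
    battery_cost_per_kwh / cycle_life. Prices in $/kWh."""
    saving_per_kwh = peak * efficiency - offpeak
    wear_per_kwh = battery_cost_per_kwh / cycle_life
    return saving_per_kwh > wear_per_kwh
```

With a wide peak/off-peak spread and a cheap, long-lived battery the comparison favors storage; a pricier battery flips it, which mirrors the abstract's "not always" conclusion.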
【Paper Link】 【Pages】:2632-2640
【Authors】: Linquan Zhang ; Shaolei Ren ; Chuan Wu ; Zongpeng Li
【Abstract】: Data centers are key participants in demand response programs, including emergency demand response (EDR), where the grid coordinates large electricity consumers for demand reduction in emergency situations to prevent major economic losses. While existing literature concentrates on owner-operated data centers, this work studies EDR in multi-tenant colocation data centers where servers are owned and managed by individual tenants. EDR in colocation data centers is significantly more challenging, because tenants control their own servers, are typically on fixed power contracts with the colocation operator, and thus lack incentives to reduce energy consumption. Consequently, to achieve the demand reduction goals set by the EDR program, the operator has to rely on highly expensive and/or environmentally unfriendly on-site energy backup/generation. To reduce cost and environmental impact, an efficient incentive mechanism is therefore needed to motivate tenants' voluntary energy reduction during EDR. This work proposes a novel incentive mechanism, Truth-DR, which leverages a reverse auction to provide monetary remuneration to tenants according to their agreed energy reduction. Truth-DR is computationally efficient, truthful, and achieves a 2-approximation in colocation-wide social cost. Trace-driven simulations verify the efficacy of the proposed auction mechanism.
【Keywords】: computer centres; energy conservation; EDR; Truth-DR incentive mechanism; auction mechanism; colocation operator; colocation-wide social cost; demand reduction; demand reduction goals; emergency demand response; energy backup; energy consumption reduction; energy generation; multitenant colocation data centers; owner-operated data centers; reverse auction; trace-driven simulation; truthful incentive mechanism; Algorithm design and analysis; Approximation algorithms; Energy consumption; Load management; Power demand; Power grids; Servers
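Winner determination in a reverse auction of this kind can be sketched as picking the cheapest bids per kWh until the grid's reduction target is met. Payments, which Truth-DR sets via a critical-value rule to make bidding truthful, are omitted here, and the tenants and bids are invented:

```python
def select_winners(bids, target_kwh):
    """bids: list of (tenant, kwh_reduction, asking_price).
    Greedily accept bids in increasing price-per-kWh order until the
    accumulated reduction reaches target_kwh."""
    winners, got = [], 0.0
    for tenant, kwh, price in sorted(bids, key=lambda b: b[2] / b[1]):
        if got >= target_kwh:
            break
        winners.append(tenant)
        got += kwh
    return winners, got
```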
【Paper Link】 【Pages】:2641-2649
【Authors】: Ruiting Zhou ; Zongpeng Li ; Chuan Wu
【Abstract】: The quintessential problem in a smart grid is the matching between power supply and demand - a perfect balance across the temporal domain, for the stable operation of the power network. Recent studies have revealed the critical role of electricity storage devices, as exemplified by rechargeable batteries and plug-in electric vehicles (PEVs), in helping achieve the balance through power arbitrage. Such potential from batteries and PEVs cannot be fully realized without an appropriate economic mechanism that incentivizes energy discharging at times when supply is tight. This work aims at a systematic study of such a demand response problem in storage-assisted smart grids through a well-designed online procurement auction mechanism. The long-term social welfare maximization problem is naturally formulated into a linear integer program. We first apply a primal-dual optimization algorithm to decompose the online auction design problem into a series of one-round auction design problems, incurring only a small loss in competitive ratio. For the one-round auction, we show that social welfare maximization is still NP-hard, and design a primal-dual approximation algorithm that works in concert with the decomposition algorithm. The end result is a power procurement auction that is online, truthful, and 2-competitive in typical scenarios.
【Keywords】: demand side management; integer programming; linear programming; procurement; smart power grids; demand response problem; linear integer program; one-round auction design problems; online auction design problem; online procurement auction; power demand response; power procurement auction; primal-dual optimization algorithm; social welfare maximization; social welfare maximization problem; storage-assisted smart grids; Algorithm design and analysis; Approximation algorithms; Approximation methods; Batteries; Load management; Procurement; Smart grids
【Paper Link】 【Pages】:2650-2658
【Authors】: Zhi Zhou ; Fangming Liu ; Zongpeng Li ; Hai Jin
【Abstract】: Datacenter demand response is envisioned as a promising tool for mitigating operational stability issues faced by smart grids. It offers significant potential for peak load reduction and facilitates the incorporation of distributed generation. Monetary refunds from the smart grid can also alleviate the cloud's burden of escalating electricity cost. However, the current demand response paradigm is inefficient at incentivizing a cloud that runs over geo-distributed datacenters. Leveraging auction theory, this work presents an efficient incentive mechanism to elicit demand response from geo-distributed clouds. To determine the winning bids and their corresponding payments, the cloud that acts as the auctioneer needs to solve a set of highly challenging winner determination problems. By integrating techniques from the Gibbs sampling method and the alternating direction method of multipliers, we propose a decentralized algorithm for each datacenter to make autonomous decisions on winning bid selection and workload management, striking a balance among economic efficiency, truthfulness and computational efficiency. Through extensive trace-driven evaluations, we demonstrate that our incentive mechanism constitutes a win-win mechanism for both the geo-distributed cloud and the smart grid.
【Keywords】: Monte Carlo methods; cloud computing; computer centres; distributed power generation; power engineering computing; smart power grids; Gibbs sampling method; auction approach; datacenter demand response; geodistributed cloud; operational stability; smart grid; winner determination problem; winning bid selection; workload management; Computers; Conferences; Cost accounting; Load management; Power demand; Servers; Smart grids
【Paper Link】 【Pages】:2659-2667
【Authors】: Helei Cui ; Xingliang Yuan ; Cong Wang
【Abstract】: In storage outsourcing, highly correlated datasets can occur commonly, where the rich information buried in correlated data can be useful for many cloud data generation/dissemination services. In light of this, we propose to enable a secure and efficient cloud-assisted image sharing architecture for mobile devices, by leveraging outsourced encrypted image datasets with privacy assurance. Different from traditional image sharing, the proposed design aims to save the transmission cost from mobile clients, by directly utilizing outsourced correlated images to reproduce the image of interest inside the cloud for immediate dissemination. While the benefits are obvious, how to leverage the encrypted image datasets makes the problem particularly challenging. To tackle the problem, we first propose a secure and efficient index design that allows the mobile client to securely find from the encrypted image datasets the candidate selection pertaining to the image of interest for sharing. We then design two specialized encryption mechanisms that support the secure image reproduction inside the cloud directly from the encrypted candidate selection. We formally analyze the security strength of the design. Our experiments show that up to 90% of the transmission cost at the mobile client can be saved, while achieving all service requirements and security guarantees.
【Keywords】: cloud computing; correlation methods; cryptography; data privacy; image processing; mobile computing; outsourcing; visual databases; cloud data dissemination services; cloud data generation services; cloud-assisted image sharing architecture; correlated datasets; encrypted candidate selection; encrypted data; index design; mobile clients; mobile devices; outsourced encrypted image datasets; privacy assurance; secure image reproduction; security guarantees; security strength analysis; service requirements; specialized encryption mechanisms; storage outsourcing; transmission cost saving; Encryption; Feature extraction; Indexes; Mobile communication; Servers
【Paper Link】 【Pages】:2668-2676
【Authors】: Zijiang Hao ; Yutao Tang ; Yifan Zhang ; Edmund Novak ; Nancy Carter ; Qun Li
【Abstract】: Mobile devices are now ubiquitous in the modern world. In this paper, we propose a novel and practical mobile-cloud platform for smart mobile devices. Our platform allows users to run the entire mobile device operating system and arbitrary applications on a cloud-based virtual machine. It has two design fundamentals. First, applications can freely migrate between the user's mobile device and a backend cloud server. We design a file system extension to enable this feature, so users can freely choose to run their applications either in the cloud (for high security guarantees), or on their local mobile device (for better user experience). Second, in order to protect user data on the smart mobile device, we leverage hardware virtualization technology, which isolates the data from the local mobile device operating system. We have implemented a prototype of our platform using off-the-shelf hardware, and performed an extensive evaluation of it. We show that our platform is efficient, practical, and secure.
【Keywords】: cloud computing; data protection; mobile computing; security of data; virtual machines; backend cloud server; cloud-based virtual machine; file system extension; hardware virtualization technology; mobile device operating system; secure mobile cloud computing platform; smart mobile devices; user data protection; Hardware; Keyboards; Mobile communication; Mobile handsets; Security; Virtual machine monitors; Virtualization
【Paper Link】 【Pages】:2677-2685
【Authors】: Jun Shao ; Rongxing Lu ; Xiaodong Lin
【Abstract】: Due to its convenience, data sharing in cloud computing via mobile devices has become more and more popular. However, data confidentiality and online computational cost still present practical concerns to the deployment of data sharing in cloud computing for mobile devices. Existing data sharing protocols in cloud computing either cannot support a flexible sharing style for encrypted data, or suffer from massive online computational cost that scales with the complexity of the access policy. In this paper, to cope with these challenging concerns, we propose a new data sharing protocol for cloud computing by using a new cryptographic primitive named online/offline attribute-based proxy re-encryption together with the transform key technique. To the best of our knowledge, the proposed data sharing protocol is the first one featuring fine-grained access control, flexible sharing, data confidentiality, and minimum online computational cost on the user side at the same time. Furthermore, the proposed online/offline attribute-based proxy re-encryption scheme may be of independent interest. Finally, extensive analysis shows that our proposed data sharing protocol is secure in terms of data confidentiality, and suitable for mobile devices in terms of online computational cost.
【Keywords】: cloud computing; cryptography; data privacy; mobile computing; protocols; access policy complexity; cloud computing; cryptographic primitive; data confidentiality; data encryption; data sharing protocol; mobile device; online computational cost; online/offline attribute-based proxy reencryption; transform key technique; Cloud computing; Computational efficiency; Encryption; Mobile handsets; Protocols; Servers
【Paper Link】 【Pages】:2686-2694
【Authors】: Yimin Chen ; Jingchao Sun ; Rui Zhang ; Yanchao Zhang
【Abstract】: Multi-touch mobile devices have penetrated into everyday life to support personal and business communications. Secure and usable authentication techniques are indispensable for preventing illegitimate access to mobile devices. This paper presents RhyAuth, a novel two-factor rhythm-based authentication scheme for multi-touch mobile devices. RhyAuth requires a user to perform a sequence of rhythmic taps/slides on a device screen to unlock the device. The user is authenticated and admitted only when the features extracted from her rhythmic taps/slides match those stored on the device. RhyAuth is a two-factor authentication scheme that depends on a user-chosen rhythm and also the behavioral metrics for inputting the rhythm. Through a 32-user experiment on Android devices, we show that RhyAuth is highly secure against various attacks and also very usable for both sighted and visually impaired people.
【Keywords】: security of data; smart phones; touch sensitive screens; Android device; RhyAuth; device screen; multitouch mobile devices; rhythm based two-factor authentication; rhythmic slides; rhythmic taps; secure authentication technique; two-factor rhythm based authentication scheme; usable authentication technique; Authentication; Computers; Data processing; Feature extraction; Measurement; Mobile handsets; Rhythm
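The timing factor of a rhythm-based scheme can be sketched by normalizing the inter-tap intervals (so a slower or faster rendition of the same rhythm still matches) and thresholding the distance to an enrolled template. The threshold and the omission of RhyAuth's behavioral features (pressure, tap size, etc.) make this purely illustrative:

```python
def rhythm_features(tap_times):
    """Inter-tap intervals normalized by total duration."""
    gaps = [b - a for a, b in zip(tap_times, tap_times[1:])]
    total = sum(gaps)
    return [g / total for g in gaps]

def authenticate(template_taps, attempt_taps, threshold=0.1):
    t, a = rhythm_features(template_taps), rhythm_features(attempt_taps)
    if len(t) != len(a):
        return False  # different number of taps
    dist = sum((x - y) ** 2 for x, y in zip(t, a)) ** 0.5
    return dist < threshold
```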
【Paper Link】 【Pages】:2695-2703
【Authors】: Philipp Kindt ; Daniel Yunge ; Mathias Gopp ; Samarjit Chakraborty
【Abstract】: Bluetooth Low Energy (BLE) is a time-slotted wireless protocol aimed at low-power communication for battery-driven devices. As a power-management capability, whenever there is less data to send, the slave is allowed to remain in a low-power mode for a given number of consecutive time-slots. However, since the master does not know the exact sleep behavior of the slave, it has to wake up at every time-slot and repeat its packets until the slave is awake. As a result, applications with variable throughput lead to many energy-consuming idle slots at the master. In such applications, the connection parameters are usually chosen for the worst case at design time and remain constant during operation. In this paper, we propose a novel power-management framework for BLE. Rather than skipping slots at the slave side, the proposed system updates the interval between two consecutive time-slots at runtime by applying online algorithms. To avoid data loss or high delays, the framework guarantees that latency constraints are met and buffers never overflow. Energy measurements of three different test cases show that up to 42 percent of the energy consumption of a BLE master can be saved with our power-management system.
【Keywords】: Bluetooth; protocols; telecommunication power management; Bluetooth low energy; adaptive online power management; data loss avoidance; idle slots; online algorithms; time interval updating; time slotted wireless protocol; Bluetooth; Computers; Payloads; Protocols; Throughput; Wireless communication; Wireless sensor networks
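The runtime interval-update idea can be sketched as choosing the longest connection interval that still meets a latency bound and never overflows a buffer at the current data rate. The constraint set, constants, and per-event capacity below are invented; only the 1.25 ms interval granularity is taken from the BLE specification:

```python
def choose_interval(data_rate_bps, buffer_bits, latency_s,
                    bits_per_event=160):
    """Longest interval (fewest wake-ups, least energy) satisfying:
    no buffer overflow, latency bound met, and the data accumulated per
    interval fits in one connection event (hypothetical capacity)."""
    candidates = [n * 0.00125 for n in range(6, 3200)]  # 7.5 ms .. ~4 s
    best = candidates[0]
    for iv in candidates:
        overflows = data_rate_bps * iv > buffer_bits
        too_late = iv > latency_s
        too_much_data = data_rate_bps * iv > bits_per_event
        if overflows or too_late or too_much_data:
            break
        best = iv
    return best
```

When the data rate drops, the binding constraint relaxes and a longer interval is chosen, which is the energy-saving lever the paper's online algorithms pull.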
【Paper Link】 【Pages】:2704-2712
【Authors】: Keyu Wang ; Zheng Yang ; Zimu Zhou ; Yunhao Liu ; Lionel M. Ni
【Abstract】: The continual proliferation of mobile devices has stimulated the development of opportunistic encounter-based networking and has spurred a myriad of proximity-based mobile applications. A primary cornerstone of such applications is to discover neighboring devices effectively and efficiently. Despite extensive protocol optimization, current neighbor discovery modalities mainly rely on radio interfaces, whose energy and wake-up delay required to initiate, configure and operate these protocols hamper practical applicability. Unlike conventional schemes that actively emit radio tones, we exploit ubiquitous audio events to discover neighbors passively. The rationale is that spatially adjacent neighbors tend to share similar ambient acoustic environments. We propose AIR, an effective and efficient neighbor discovery protocol via low-power acoustic sensing to reduce discovery latency. In particular, AIR substantially increases the probability of discovering a neighbor the first time the radio is turned on. Compared with the state-of-the-art neighbor discovery protocol, AIR significantly decreases the average discovery latency by around 70%, which is promising for supporting vast proximity-based mobile applications.
【Keywords】: acoustic applications; mobile radio; protocols; AIR protocol; ambient rendezvous; discovery latency; energy efficient neighbor discovery; low power acoustic sensing; neighbor discovery protocol; protocol optimization; proximity based mobile applications; Acoustics; Encoding; Entropy; Feature extraction; Microphones; Protocols; Sensors
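The underlying test, "do two devices hear the same ambient events", can be sketched with a crude binary loudness fingerprint and a bit-agreement threshold. The fingerprint scheme and threshold are invented; AIR's actual acoustic features, encoding, and protocol are more elaborate:

```python
def fingerprint(frame_energies):
    """1 if a frame is louder than the previous one, else 0 -- robust to
    absolute volume differences between microphones."""
    return [1 if b > a else 0
            for a, b in zip(frame_energies, frame_energies[1:])]

def likely_neighbors(energies_a, energies_b, threshold=0.8):
    """Co-located devices hearing the same ambient events should agree on
    most fingerprint bits."""
    fa, fb = fingerprint(energies_a), fingerprint(energies_b)
    agree = sum(x == y for x, y in zip(fa, fb))
    return agree / len(fa) >= threshold
```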
【Paper Link】 【Pages】:2713-2721
【Authors】: Ioannis Pefkianakis ; Henrik Lundgren ; Augustin Soule ; Jaideep Chandrashekar ; Pascal Le Guyadec ; Christophe Diot ; Martin May ; Karel Van Doorselaer ; Koen Van Oost
【Abstract】: In this paper, we analyze a large dataset of passive wireless measurements and obtain insights about wireless performance. We monitor 167 homes continuously for 4 months from the vantage point of the gateway, which allows us to capture all the activity on the home wireless network. We report on the makeup of the home wireless network, traffic activity, and performance characteristics. We find that in most homes, a small number of devices account for most of the observed traffic volume and the bulk of this traffic activity occurs in the evenings. Studying link performance, we find that overall, the vast majority of transmissions are carried out at high data rates and the wireless networks have good coverage. We find a small number of episodes where performance is poor; a few homes have a disproportionate number of poor performance reports. Investigating further, we observe that most of these are not caused by poor coverage (pointing to network interference). Our results significantly add to the understanding of home wireless networks and will help ISPs to understand their subscriber networks.
【Keywords】: computer network performance evaluation; home networks; internetworking; radio networks; subscriber loops; gateway vantage point; home wireless network; home wireless performance; network interference; passive wireless measurement; performance reports; subscriber network; traffic activity; IEEE 802.11n Standard; Logic gates; Performance evaluation; Throughput; Wireless communication
【Paper Link】 【Pages】:2722-2730
【Authors】: Jianwei Niu ; Fei Gu ; Ruogu Zhou ; Guoliang Xing ; Wei Xiang
【Abstract】: This paper presents VINCE - a novel visible light sensing design for smartphone-based Near Field Communication (NFC) systems. VINCE encodes information as different brightness levels of smartphone screens, while receivers capture the light signal via light sensors. In contrast to RF technologies, the direction and distance of such a Visible Light Communication (VLC) link can be easily controlled, preserving communication privacy and security. As a result, VINCE can be used in a wide range of NFC applications such as contactless payments and device pairing. We experimentally profile the impact of screen brightness levels and refresh rates of smartphones, and then use the results to guide the design of light intensity encoding scheme of VINCE. We adopt several signal processing techniques and empirically derive a model to deal with the significant variation of received light intensity caused by noises and low screen refresh rates. To improve the communication reliability, VINCE adopts a feedback-based retransmission scheme, and dynamically adjusts the number of encoding brightness levels based on the current light channel condition. We also derive an analytical model that characterizes the relation among the distance, SNR (Signal to Noise Ratio), and BER (Bit Error Rate) of VINCE. Our design and theoretical model are validated via extensive evaluations using a hardware implementation of VINCE on Android smartphones and the Arduino platform.
【Keywords】: near-field communication; optical communication; smart phones; Android smartphones; Arduino platform; VINCE; near field communication systems; signal processing techniques; smartphone-based NFC systems; visible light communication; visible light sensing; Brightness; Decoding; Encoding; Receivers; Sensors; Signal to noise ratio
【Paper Link】 【Pages】:2731-2739
【Authors】: Li Li ; Ke Xu ; Dan Wang ; Chunyi Peng ; Qingyang Xiao ; Rashid Mijumbi
【Abstract】: TCP has been the dominant transport protocol for the mobile internet since its origin. Its behaviors play an essential role in determining quality of service/experience (QoS and QoE) for mobile apps. While TCP has been extensively studied under static, walking, or driving mobility, it has not been well explored in high-speed (>200 km/h) mobility cases. With increasing investment in and deployment of high-speed rails (HSRs), a critical demand for understanding TCP performance under extremely high-speed mobility arises. In this paper, we conduct an in-depth study to investigate TCP behaviors on HSR. We collect 90 GB of measurement data on HSPA+ networks in Chinese high-speed trains with a peak speed of 310 km/h, along various routes (covering 5,000 km) during an 8-month period. We analyze the impacts of high-speed mobility and handoff on performance metrics including RTT, packet loss and network disconnection. Then we demystify the grand challenges posed on TCP operations (TCP establishment, transmission, congestion control and termination). Our study shows that performance greatly declines on HSR, where RTT spikes, packet drops and network disconnections are more significant and occur more frequently than in static, slowly moving or driving mobility cases. Moreover, TCP fails to adapt well to such extreme speeds and yields severely abnormal behaviors, such as a high spurious RTO rate, aggressive congestion window reduction, long delays in connection establishment and closure, and transmission interruption. All these findings indicate that extremely high-speed mobility indeed poses a serious threat to today's TCP, and they call for urgent efforts to develop HSR-friendly protocols and wireless networks to address the even more complicated challenges raised by faster trains and aircraft in the foreseeable future.
【Keywords】: Internet; mobile computing; quality of experience; quality of service; rail traffic; transport protocols; HSPA; HSR-friendly protocols; QoE; QoS; RTT spikes; TCP behaviors; high-speed rails; mobile Internet; mobile apps; network disconnections; packet drops; quality of experience; quality of service; transport protocol; wireless networks; Base stations; Mobile communication; Mobile computing; Packet loss; Rail transportation; Servers
【Paper Link】 【Pages】:2740-2748
【Authors】: Abner Mendoza ; Kapil Singh ; Guofei Gu
【Abstract】: The growing popularity of smartphones and continuous user demand for a rich web experience have resulted in an exponential surge in cellular bandwidth requirements. Cellular providers have struggled to keep pace with the new requirements, while users often face a monetary cost associated with the data downloaded to their devices. While many modern websites have adapted to the new mobile habitat, they often take shortcuts to transition from their desktop to mobile versions, frequently carrying redundant content that is never utilized. Moreover, mobile users are effectively paying for certain undesirable content, such as advertisements, in the form of their bandwidth costs. In this paper, we study the composition and complexity of modern websites, from both a mobile and a desktop perspective, to identify sources of wasted bandwidth. We developed a custom crawler-based framework to perform an in-depth analysis of the top 100,000 popular sites ranked by Alexa. Our results show that 23% or more of the content size on an average website is unnecessary, unused, or redundant. Our results serve as motivation for developing optimized websites and enhancing the web infrastructure to better suit the mobile environment, with an emphasis on reducing bandwidth costs while also improving performance and efficiency.
【Keywords】: Internet; Web sites; cellular radio; mobile computing; smart phones; Web infrastructure enhancement; Websites; advertisements; bandwidth cost reduction; cellular bandwidth requirements; custom crawler-based framework; data plan; desktop version; efficiency improvement; measurement study; mobile Web overhead; mobile habitat; mobile version; monetary cost; performance improvement; smartphones; Cascading style sheets; Computers; Conferences; HTML; Mobile communication
【Paper Link】 【Pages】:2749-2757
【Authors】: Xenofon Foukas ; Antonio Carzaniga ; Alexander L. Wolf
【Abstract】: Mixing time is a global property of a network that indicates how fast a random walk gains independence from its starting point. Mixing time is an essential parameter for many distributed algorithms, but especially those based on gossip. We design, implement, and evaluate a distributed protocol to measure mixing time. The protocol extends an existing algorithm that models the diffusion of information seen from each node in the network as the impulse response of a particular dynamic system. In its original formulation, the algorithm was susceptible to topology changes (or “churn”) and was evaluated only in simulation. Here we present a concrete implementation of an enhanced version of the algorithm that exploits multiple parallel runs to obtain a robust measurement, and evaluate it using a network testbed (Emulab) in combination with a peer-to-peer system (FreePastry) to assess both its performance and its ability to deal with network churn.
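As a concrete (if simplified) illustration of the quantity being measured: the ε-mixing time is the number of steps after which the walk's distribution is within total-variation distance ε of stationarity from every start state. The sketch below computes it by direct matrix powering for a toy chain — nothing like the paper's decentralized impulse-response protocol, just the global property it estimates:

```python
def mixing_time(P, eps=0.25, max_steps=1000):
    """Steps until the walk's distribution from every start state is within
    total-variation distance eps of the stationary distribution.  Sketch
    assumes P is doubly stochastic, so the stationary distribution is uniform."""
    n = len(P)
    pi = [1.0 / n] * n
    # one distribution per start state, initially a point mass
    dists = [[1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]
    for t in range(1, max_steps + 1):
        dists = [[sum(d[k] * P[k][j] for k in range(n)) for j in range(n)]
                 for d in dists]
        tv = max(0.5 * sum(abs(d[j] - pi[j]) for j in range(n)) for d in dists)
        if tv < eps:
            return t
    return max_steps

# lazy random walk on a 4-cycle (stay with prob. 1/2, move to a neighbor else)
P = [[0.5 if j == i else 0.25 if abs(i - j) in (1, 3) else 0.0
      for j in range(4)] for i in range(4)]
```

The centralized matrix powering above is exactly what a distributed protocol cannot afford, which is why the paper infers mixing time from locally observed diffusion dynamics instead.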
【Keywords】: distributed algorithms; peer-to-peer computing; transient response; Emulab; FreePastry; distributed algorithm; distributed protocol; gossip algorithm; impulse response; network churn; network mixing time; network testbed; peer-to-peer system; Algorithm design and analysis; Eigenvalues and eigenfunctions; Estimation; Heuristic algorithms; Peer-to-peer computing; Protocols; Time measurement
【Paper Link】 【Pages】:2758-2766
【Authors】: Chien-Chun Ni ; Yu-Yao Lin ; Jie Gao ; Xianfeng David Gu ; Emil Saucan
【Abstract】: Analysis of Internet topologies has shown that the Internet topology has negative curvature, measured by Gromov's “thin triangle condition”, which is tightly related to core congestion and route reliability. In this work we analyze the discrete Ricci curvature of the Internet, defined by Ollivier [1], Lin et al. [2], etc. Ricci curvature measures whether local distances diverge or converge. It is a more local measure which allows us to understand the distribution of curvatures in the network. We show by various Internet data sets that the distribution of Ricci curvature is spread out, suggesting that the network topology is non-homogeneous. We also show that the Ricci curvature has interesting connections to local measures such as node degree and clustering coefficient, to global measures such as betweenness centrality and network connectivity, and to auxiliary attributes such as geographical distances. These observations add to the richness of geometric structures in complex network theory.
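For readers unfamiliar with the notion: the Ollivier-Ricci curvature of an edge (x, y) compares the Wasserstein distance W1 between uniform measures on the two neighborhoods to the graph distance, κ = 1 − W1/d(x, y). The toy sketch below brute-forces W1 as a minimum-cost matching, which is exact when both measures are uniform on equal-sized supports; real implementations solve a transport LP instead:

```python
from itertools import permutations

def shortest_dist(adj, s, t):
    """Graph distance by BFS."""
    if s == t:
        return 0
    seen, frontier, d = {s}, [s], 0
    while frontier:
        d += 1
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v == t:
                    return d
                if v not in seen:
                    seen.add(v)
                    nxt.append(v)
        frontier = nxt
    raise ValueError("disconnected")

def ollivier_ricci(adj, x, y):
    """Ollivier-Ricci curvature of edge (x, y), measures uniform on the
    neighborhoods.  W1 via brute-force matching; fine for tiny graphs only."""
    nx, ny = sorted(adj[x]), sorted(adj[y])
    assert len(nx) == len(ny), "sketch assumes equal degrees"
    w1 = min(sum(shortest_dist(adj, a, b) for a, b in zip(nx, p)) / len(nx)
             for p in permutations(ny))
    return 1.0 - w1 / shortest_dist(adj, x, y)

# triangle: positively curved; long cycle: flat (under this choice of measures)
K3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
C5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
```

The contrast between the triangle (κ = 1/2) and the 5-cycle (κ = 0) is the kind of local divergence/convergence signal the paper aggregates over Internet graphs.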
【Keywords】: Internet; complex networks; telecommunication network reliability; telecommunication network topology; Gromov thin triangle condition; Internet data sets; Internet topology; betweenness centrality; clustering coefficient; complex network theory; curvatures distribution; discrete Ricci curvature; network connectivity; route reliability; Histograms; Internet topology; Measurement; Network topology; Peer-to-peer computing; Power grids; Topology
【Paper Link】 【Pages】:2767-2775
【Authors】: Ayon Chakraborty ; Luis E. Ortiz ; Samir R. Das
【Abstract】: We address the problem of network-side localization, where cellular operators are interested in localizing cellular devices by means of signal strength measurements alone. While fingerprinting-based approaches have been used recently to address this problem, they require a significant amount of geo-tagged (“labeled”) measurement data that is expensive for the operator to collect. Our goal is to use semi-supervised and unsupervised machine learning techniques to reduce or eliminate this effort without compromising the accuracy of localization. Our experimental results in a university campus (6 sq. km) demonstrate that sub-100m median localization accuracy is achievable with very little or no labeled data, so long as enough training is possible with “unlabeled” measurements. This provides an opportunity for the operator to improve the model over time. We present an extensive analysis of the error characteristics to gain insight and improve performance, including understanding spatial properties and developing confidence measures.
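As context for what the labeled data buys: the standard supervised fingerprinting baseline is kNN in signal-strength space. A minimal sketch with hypothetical RSSI vectors and positions (the paper's contribution is precisely reducing the need for such labeled fingerprints):

```python
def knn_localize(fingerprints, sample, k=3):
    """Fingerprinting kNN sketch: estimate position as the centroid of the k
    training fingerprints whose RSSI vectors are closest to the sample.
    fingerprints: list of (rssi_vector, (x, y)) pairs."""
    def rssi_dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    nearest = sorted(fingerprints, key=lambda fp: rssi_dist(fp[0], sample))[:k]
    xs = [pos[0] for _, pos in nearest]
    ys = [pos[1] for _, pos in nearest]
    return (sum(xs) / k, sum(ys) / k)

# toy database: two cells' RSSI readings tagged with ground-truth positions
db = [((-50, -70), (0, 0)), ((-52, -68), (0, 1)),
      ((-80, -40), (5, 5)), ((-82, -42), (5, 6))]
```

Every entry in `db` is a geo-tagged measurement the operator would have to collect; the semi-supervised techniques in the paper aim to shrink that list toward zero.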
【Keywords】: cellular radio; learning (artificial intelligence); mobility management (mobile radio); cellular-band devices; fingerprinting-based approaches; geo-tagged measurement data; machine learning techniques; median localization accuracy; network-side localization; network-side positioning; signal strength measurements; Accuracy; Base stations; Computational modeling; Data models; Maximum likelihood estimation; Training; Training data
【Paper Link】 【Pages】:2776-2784
【Authors】: Danilo Cicalese ; Diana Joumblatt ; Dario Rossi ; Marc-Olivier Buob ; Jordan Augé ; Timur Friedman
【Abstract】: Use of IP-layer anycast has increased in the last few years: once relegated to DNS root and top-level domain servers, anycast is now commonly used to assist distribution of general-purpose content by CDN providers. Yet the measurement techniques for discovering anycast replicas have been designed around DNS, limiting their usefulness to this particular service. This raises the need for protocol-agnostic methodologies, which should additionally be as lightweight as possible in order to scale up anycast service discovery. This is precisely the aim of this paper, which proposes a new method for exhaustive and accurate enumeration and city-level geolocation of anycast instances, requiring only a handful of latency measurements from a set of known vantage points. Our method exploits an iterative workflow that enumerates (an optimization problem) and geolocates (a classification problem) anycast replicas. We thoroughly validate our methodology on available ground truth (several DNS root servers), using multiple measurement infrastructures (PlanetLab, RIPE), obtaining extremely accurate results (even with simple algorithms, which we compare against the global optimum) that we make available to the scientific community. Compared to the state-of-the-art work that appeared in INFOCOM 2013 and IMC 2013, our technique (i) is not bound to a specific protocol, (ii) requires 1000 times fewer vantage points, (iii) not only achieves over 50% recall but also (iv) accurately identifies the city-level geolocation for over 78% of the enumerated servers, with (v) a mean geolocation error of 361 km for all enumerated servers.
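One intuition behind latency-based anycast enumeration is a speed-of-light argument: each vantage point's RTT bounds a geographic disc that must contain the replica it reached, and two non-overlapping discs prove two distinct replicas exist. A sketch assuming the common ~200 km/ms propagation rule of thumb (the paper's actual workflow solves enumeration and geolocation as optimization and classification problems):

```python
import math

C_KM_PER_MS = 200.0  # rough propagation speed in fiber (~2/3 speed of light)

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def implies_anycast(vp1, rtt1_ms, vp2, rtt2_ms):
    """True if the two latency discs cannot contain a common server location,
    i.e. the two vantage points must be reaching different anycast replicas."""
    r1 = (rtt1_ms / 2) * C_KM_PER_MS
    r2 = (rtt2_ms / 2) * C_KM_PER_MS
    return haversine_km(*vp1, *vp2) > r1 + r2

# Paris and Tokyo each seeing ~4 ms RTT yields two ~400 km discs that
# cannot overlap across ~9,700 km, so the target must be anycast.
paris, tokyo = (48.86, 2.35), (35.68, 139.69)
```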
【Keywords】: IP networks; Internet; iterative methods; protocols; CDN provider; DNS root server; IP-layer anycast; city-level geolocation identification; content delivery network; iterative workflow; latency measurement; lightweight anycast enumeration; protocol agnostic methodology; top-level domain server; Airports; Cities and towns; Extraterrestrial measurements; Geology; IP networks; Servers; Unicast
【Paper Link】 【Pages】:2785-2793
【Authors】: Qingquan Zhang ; Ziqiao Zhou ; Wei Xu ; Jing Qi ; Chenxi Guo ; Ping Yi ; Ting Zhu ; Sheng Xiao
【Abstract】: Wireless sensor networks are often deployed for tracking moving objects. Many tracking algorithms have been proposed under two general assumptions: preset fingerprints (prior landmark or context information) and an interference-free environment. These algorithms, however, cannot be used for on-demand deployment where fingerprints are unavailable, and they would perform poorly in interference-rich environments. In this paper, we present a fingerprint-free localization and tracking algorithm called Enhanced Field Division (EFD). EFD dynamically divides the field into areas with unique signatures and tracks the target without any fingerprints. We also implemented a proof-of-concept localization platform to demonstrate the tracking accuracy and algorithm performance in a practical, interference-rich environment.
【Keywords】: object tracking; radiofrequency interference; wireless sensor networks; EFD algorithm; dynamic enhanced field division; fingerprint-free localizing algorithm; fingerprint-free tracking; interference-free environment; interference-rich environments; preset fingerprints; tracking accuracy; tracking algorithms; wireless sensor networks; Accuracy; Conferences; Heuristic algorithms; Interference; Mathematical model; Target tracking
【Paper Link】 【Pages】:2794-2802
【Authors】: Fu Xiao ; Chaoheng Sha ; Lei Chen ; Lijuan Sun ; Ruchuan Wang
【Abstract】: Accurate and sufficient range measurements are essential for range-based localization in wireless sensor networks. However, noise and missing data are inevitable in distance ranging, which may drastically degrade localization accuracy. Existing localization approaches often lose accuracy when incomplete and corrupted range measurements coexist. To address this challenge, a noise-tolerant localization algorithm called NLIRM is presented. By utilizing the natural low-rank property of the Euclidean distance matrix, the reconstruction of a partially sampled and noisy distance matrix is formulated as a norm-regularized matrix completion problem, where Gaussian noise and outliers are smoothed by Frobenius-norm and L1-norm regularization, respectively. To the best of our knowledge, this is the first scheme that can recover missing range measurements while explicitly sifting out Gaussian noise and outliers simultaneously. Simulation results demonstrate that, compared with traditional algorithms, NLIRM achieves better localization performance under the same experimental settings. In addition, our algorithm provides an accurate prediction of outlier positions, which is a prerequisite for malfunction diagnosis in WSNs.
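The low-rank property the abstract leans on is that a squared Euclidean distance matrix of points in d dimensions has rank at most d + 2 regardless of how many points there are, which is what makes completion of a partially observed distance matrix well-posed. A small self-contained check (pure-Python rank via elimination; NLIRM itself solves a norm-regularized completion problem, not shown here):

```python
def matrix_rank(M, tol=1e-9):
    """Numerical rank via Gaussian elimination (pure-Python sketch)."""
    A = [row[:] for row in M]
    rank, rows, cols = 0, len(A), len(A[0])
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if abs(A[r][c]) > tol), None)
        if pivot is None:
            continue
        A[rank], A[pivot] = A[pivot], A[rank]
        for r in range(rows):
            if r != rank and abs(A[r][c]) > tol:
                f = A[r][c] / A[rank][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[rank])]
        rank += 1
    return rank

def edm(points):
    """Squared Euclidean distance matrix of 2-D points."""
    return [[(p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 for q in points]
            for p in points]

# five generic points in the plane: the 5x5 squared EDM has rank d + 2 = 4
pts = [(0, 0), (1, 0), (0, 2), (3, 1), (2, 2)]
```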
【Keywords】: Gaussian noise; signal reconstruction; wireless sensor networks; Euclidean distance matrix; Gaussian noises; incomplete range measurements; natural low rank property; noise-tolerant localization; noisy distance matrix; norm-regularized matrix completion problem; wireless sensor networks; Algorithm design and analysis; Distance measurement; Matrix decomposition; Noise; Noise measurement; Sparse matrices; Wireless sensor networks
【Paper Link】 【Pages】:2803-2811
【Authors】: Lin Gao ; Fen Hou ; Jianwei Huang
【Abstract】: Providing an adequate long-term user participation incentive is important for a participatory sensing system to maintain a sufficient number of active users (sensors), so as to collect enough data samples and support a desired level of service quality. In this work, we consider the sensor selection problem in a general time-dependent and location-aware participatory sensing system, taking the long-term user participation incentive into explicit consideration. We study the problem systematically under different information scenarios, regarding both future information and current information (realization). In particular, we propose a Lyapunov-based VCG auction policy for online sensor selection, which converges asymptotically to the optimal offline benchmark performance, even with no future information and under asymmetry of current information. Extensive numerical results show that our proposed policy outperforms the state-of-the-art policies in the literature, in terms of both user participation (e.g., reducing the user dropping probability by 25% ~ 90%) and social performance (e.g., increasing the social welfare by 15% ~ 80%).
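To see the auction ingredient in isolation: in a plain reverse VCG auction that selects the k cheapest sensors, each winner is paid its externality, which collapses to the (k+1)-th lowest bid. A sketch of that baseline (illustrative only; the paper's Lyapunov-based policy additionally weighs virtual queue backlogs to secure long-term participation):

```python
def vcg_select(bids, k):
    """Reverse VCG sketch: pick the k lowest-cost sensors; pay each winner
    its externality, i.e. the (k+1)-th lowest bid.  bids: name -> cost."""
    order = sorted(bids, key=bids.get)       # names by ascending cost
    winners = order[:k]
    threshold = bids[order[k]] if len(order) > k else float("inf")
    return {w: threshold for w in winners}
```

Paying the threshold rather than the bid is what makes truthful cost reporting a dominant strategy, the property the paper preserves while adding the long-term participation objective.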
【Keywords】: Lyapunov methods; convergence; incentive schemes; mobility management (mobile radio); quality of service; sensor placement; Lyapunov-based VCG auction policy; asymptotic convergence; data samples; location aware participatory sensing system; long-term user participation incentive; optimal offline benchmark performance; sensor selection problem; service quality; time dependent participatory sensing system; Mobile communication; Optimization; Queueing analysis; Resource management; Sensor systems; Stability analysis
【Paper Link】 【Pages】:2812-2820
【Authors】: Qi Zhang ; Yutian Wen ; Xiaohua Tian ; Xiaoying Gan ; Xinbing Wang
【Abstract】: Crowdsourcing systems, which allocate tasks to a group of workers over the Internet, have become an effective paradigm for human-powered problem solving such as image classification, optical character recognition, and proofreading. In this paper, we focus on incentivizing crowd workers to label a set of binary tasks under a strict budget constraint. We profile the tasks' difficulty levels and the workers' quality in crowdsourcing systems, where the collected labels are aggregated with a sequential Bayesian approach. To stimulate workers to undertake crowd labeling tasks, the interaction between workers and the platform is modeled as a reverse auction. We reveal that platform utility maximization can be intractable, for which we develop an incentive mechanism that determines the winning bid and payments with polynomial-time computation complexity. Moreover, we theoretically prove that our mechanism is truthful, individually rational, and budget feasible. Through extensive simulations, we demonstrate that our mechanism utilizes the budget efficiently to achieve high platform utility with polynomial computation complexity.
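The sequential Bayesian aggregation step can be made concrete for a single binary task: each incoming label shifts the posterior that the true answer is 1, weighted by that worker's accuracy. A minimal sketch (worker accuracies are assumed known here; the paper models them jointly with task difficulty):

```python
def posterior_true(labels, accuracies, prior=0.5):
    """Sequentially update P(true answer = 1) for one binary task, given each
    worker's reported label and that worker's accuracy."""
    p = prior
    for label, acc in zip(labels, accuracies):
        like_if_1 = acc if label == 1 else 1 - acc      # P(label | answer = 1)
        like_if_0 = 1 - acc if label == 1 else acc      # P(label | answer = 0)
        p = p * like_if_1 / (p * like_if_1 + (1 - p) * like_if_0)
    return p
```

Two 80%-accurate workers voting 1 outweigh a 60%-accurate worker voting 0, which is exactly the quality-weighted behavior majority voting lacks.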
【Keywords】: Bayes methods; Internet; budgeting; computational complexity; optimisation; outsourcing; Internet; budget constraint; crowd labeling task; crowdsourcing system; incentive mechanism; incentivize crowd labeling; platform utility maximization; polynomial-time computation complexity; sequential Bayesian approach; Approximation methods; Computational modeling; Computers; Conferences; Crowdsourcing; Labeling; Resource management
【Paper Link】 【Pages】:2821-2829
【Authors】: Alberto Tarable ; Alessandro Nordio ; Emilio Leonardi ; Marco Ajmone Marsan
【Abstract】: This paper presents the first systematic investigation of the potential performance gains for crowdsourcing systems deriving from information available at the requester about individual worker earnestness (reputation). In particular, we first formalize the optimal task assignment problem when workers' reputation estimates are available, as the maximization of a monotone (submodular) function subject to matroid constraints. Then, since the optimal problem is NP-hard, we propose a simple but efficient greedy heuristic task allocation algorithm. We also propose a simple “maximum a posteriori” decision rule. Finally, we test and compare different solutions, showing that system performance can greatly benefit from information about workers' reputation. Our main findings are that: i) even largely inaccurate estimates of workers' reputation can be effectively exploited in the task assignment to greatly improve system performance; ii) the performance of the maximum a posteriori decision rule quickly degrades as worker reputation estimates become inaccurate; iii) when workers' reputation estimates are significantly inaccurate, the best performance can be obtained by combining our proposed task assignment algorithm with the LRA decision rule introduced in the literature.
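The greedy heuristic rests on a classical guarantee: greedily maximizing a monotone submodular function under a cardinality or matroid constraint achieves a constant-factor approximation. A coverage-style sketch with hypothetical worker-to-task sets (the paper's objective and matroid are richer, but the greedy skeleton is the same):

```python
def greedy_max_coverage(workers, k):
    """Greedy sketch: pick up to k workers maximizing the set of tasks
    covered.  Coverage is monotone submodular, so greedy is a
    (1 - 1/e)-approximation under this cardinality constraint.
    workers: name -> set of tasks that worker can handle."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(workers,
                   key=lambda w: len(workers[w] - covered)
                   if w not in chosen else -1)
        if best in chosen or not (workers[best] - covered):
            break  # no remaining marginal gain
        chosen.append(best)
        covered |= workers[best]
    return chosen, covered
```

Note the second pick below is 'w', not the individually cheaper-looking 'v': greedy ranks candidates by marginal gain over what is already covered, which is where submodularity enters.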
【Keywords】: combinatorial mathematics; computational complexity; computer networks; decision theory; greedy algorithms; matrix algebra; maximum likelihood estimation; optimisation; LRA decision rule; crowdsourcing systems; greedy heuristic task allocation algorithm; low rank approximation; matroid constraints; maximum a-posteriori decision rule; monotone function; optimal NP-hard problem; optimal task assignment problem; submodular function; worker reputation estimates; Computers; Crowdsourcing; Error probability; Mutual information; Optimization; Reliability; Resource management
【Paper Link】 【Pages】:2830-2838
【Authors】: Xiang Zhang ; Guoliang Xue ; Ruozhou Yu ; Dejun Yang ; Jian Tang
【Abstract】: With the proliferation of smart devices, crowdsourcing has emerged as a new computing/networking paradigm. Through the crowdsourcing platform, service requesters can buy services from service providers. An important component of crowdsourcing is its incentive mechanism. We study three models of crowdsourcing, which involve cooperation and competition among the service providers. Our simplest model generalizes the well-known user-centric model studied in a recent MobiCom paper. We design an incentive mechanism for each of the three models, and prove that these incentive mechanisms are individually rational, budget-balanced, computationally efficient, and truthful.
【Keywords】: incentive schemes; outsourcing; Mobicom paper; crowdsourcing platform; smart device; truthful incentive mechanism; user-centric model; Biological system modeling; Computational modeling; Computers; Conferences; Cost accounting; Crowdsourcing; Monopoly