31. INFOCOM 2012: Orlando, FL, USA

Proceedings of the IEEE INFOCOM 2012, Orlando, FL, USA, March 25-30, 2012. IEEE 【DBLP Link】

Paper Num: 397 || Session Num: 0

1. Exact throughput capacity under power control in mobile ad hoc networks.

【Paper Link】 【Pages】:1-9

【Authors】: Jiajia Liu ; Xiaohong Jiang ; Hiroki Nishiyama ; Nei Kato

【Abstract】: The lack of a general capacity theory for mobile ad hoc networks (MANETs) remains a challenging roadblock hindering the application of such networks. Available works along this line mainly focus on deriving order-sense results, which help us explore the general scaling laws of throughput capacity but tell us little about the exact achievable throughput. This paper studies the exact per-node throughput capacity of a MANET, where the transmission power of each node can be controlled to adapt to a specified transmission range v, and a generalized two-hop relay with limited packet redundancy f is adopted for packet routing. Based on the concept of automatic feedback control and the Markov chain model, we first develop a general theoretical framework to fully depict the complicated packet delivery process in the challenging MANET environment. With the help of the framework, we are then able to derive the exact per-node throughput capacity for a fixed setting of both v and f. Based on the new throughput result, we further explore the optimal throughput capacity for any f but a fixed v, and determine the corresponding optimum setting of f to achieve it. This result helps us understand how such optimal capacity varies with v (and thus transmission power) and to find the maximum possible throughput capacity of such a network for any f and v. Surprisingly, our results indicate that such maximum throughput capacity usually cannot be achieved through local transmission, a fact different from what is generally believed in the literature.

【Keywords】: Markov processes; mobile ad hoc networks; power control; telecommunication control; telecommunication network routing; Markov chain model; automatic feedback control; general capacity theory; generalized two hop relay; local transmission; mobile ad hoc networks; node throughput capacity; optimal throughput capacity; packet delivery process; packet redundancy; packet routing; power control; scaling laws; transmission power; Ad hoc networks; Markov processes; Mobile computing; Redundancy; Relays; Routing; Throughput

2. Design and performance study of a Topology-Hiding Multipath Routing protocol for mobile ad hoc networks.

【Paper Link】 【Pages】:10-18

【Authors】: Yujun Zhang ; Guiling Wang ; Qi Hu ; Zhongcheng Li ; Jie Tian

【Abstract】: Existing multipath routing protocols for MANETs ignore the topology-exposure problem. This paper analyzes the threat of topology exposure and proposes a Topology-Hiding Multipath Routing protocol (THMR). THMR does not allow packets to carry routing information, so malicious nodes cannot deduce topology information and launch attacks based on it. The protocol can also establish multiple node-disjoint routes in a single route discovery attempt and exclude unreliable routes before transmitting packets. We formally prove that THMR is loop-free and topology-hiding. Simulation results show that our protocol is better at finding routes and greatly increases the packet delivery capability in scenarios with attackers, at the cost of only low routing overhead.

【Keywords】: mobile ad hoc networks; routing protocols; telecommunication network topology; telecommunication security; MANET; loop-free routing; mobile ad hoc networks; route discovery; topology hiding multipath routing protocol; topology information; topology-exposure threat; topology-hiding routing; Mobile ad hoc networks; Network topology; Probes; Routing; Routing protocols; Topology

3. Capacity of distributed content delivery in large-scale wireless ad hoc networks.

【Paper Link】 【Pages】:19-27

【Authors】: Wang Liu ; Kejie Lu ; Jianping Wang ; Yi Qian ; Tao Zhang ; Liusheng Huang

【Abstract】: In most existing wireless networks, end users obtain data content from the wired network, typically the Internet. In this manner, virtually all of their traffic must go through a few access points, which implies that the capacity of the wireless network is limited by the aggregate transmission data rate of these access points. To fully exploit the capability of wireless networks, we envision that future wireless networks shall be able to provide data content within themselves. In this paper, we address the behavior of such networks from a theoretical perspective. Specifically, we consider that multicast is used for distributed content delivery, and we investigate the asymptotic upper bound of the throughput capacity for distributed content delivery in large-scale wireless ad hoc networks (DCD-WANET). Our analysis shows how the upper bound of throughput capacity is affected by the geometric size of the network, the number of data items, the popularity of the data content, and the number of storage nodes that contain those data items. In particular, our theoretical results show that, if the number of storage nodes exceeds a critical threshold, the upper bound grows with the number of storage nodes according to a power law whose scaling exponent depends on the popularity of the data items. We also provide a data item placement strategy that achieves the upper bound of throughput capacity for DCD-WANET.

【Keywords】: Internet; ad hoc networks; computer networks; distributed processing; multicast communication; asymptotic upper bound; data item placement strategy; distributed content delivery capacity; large scale wireless ad hoc networks; multicast communication; power law; storage nodes; Base stations; Internet; Mobile ad hoc networks; Peer to peer computing; Throughput; Upper bound; Wireless networks

4. On the spatial modeling of wireless networks by random packing models.

【Paper Link】 【Pages】:28-36

【Authors】: Nguyen Tien Viet ; François Baccelli

【Abstract】: In order to represent the set of transmitters simultaneously accessing a wireless network using carrier sensing based medium access protocols, one needs tractable point processes satisfying certain exclusion rules. Such exclusion rules forbid the use of Poisson point processes within this context. It has been observed that Matérn point processes, which have been advocated in the past because of their exclusion based definition, are rather conservative within this context. The present paper confirms that it would be more appropriate to use the point processes induced by the Random Sequential Algorithm in order to describe such point patterns. It also shows that this point process is in fact as tractable as the Matérn model. The generating functional of this point process is shown to be the solution of a differential equation, which is the main new mathematical result of the paper. In comparison, no equivalent result is known for the Matérn hard-core model. Using this differential equation, a new heuristic method is proposed, which leads to simple bounds and estimates for several important network performance metrics. These bounds and estimates are evaluated by Monte Carlo simulation.

【Keywords】: Monte Carlo methods; access protocols; differential equations; radio networks; Matern hard-core model; Matern point process; Monte Carlo simulation; Poisson point process; carrier sensing-based medium access protocols; differential equation; exclusion rules; exclusion-based definition; heuristic method; network performance metrics; random packing models; random sequential algorithm; spatial modeling; transmitters; wireless networks; Interference; Mathematical model; Measurement; Multiaccess communication; Receivers; Transmitters; Wireless networks
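As a generic illustration of the exclusion-based constructions discussed in the abstract above (not the authors' model; the intensity, window size and exclusion radius below are arbitrary), a Matérn type-II hard-core process can be simulated by thinning a marked Poisson parent process: a parent survives only if no other parent within the exclusion radius carries a smaller mark.

```python
import math
import random


def sample_poisson(rng, mean):
    """Knuth's method for a Poisson random variate with the given mean."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1


def matern_type_ii(lam, side, r_excl, seed=0):
    """Matérn type-II hard-core thinning of a Poisson parent process
    of intensity `lam` on a side x side square (edge effects ignored).

    Each parent gets an independent uniform mark; a parent is retained
    only if no other parent within distance `r_excl` has a smaller mark.
    """
    rng = random.Random(seed)
    n = sample_poisson(rng, lam * side * side)
    parents = [(rng.uniform(0, side), rng.uniform(0, side), rng.random())
               for _ in range(n)]  # (x, y, mark)
    retained = []
    for x, y, m in parents:
        # Keep the point only if it has the smallest mark in its neighborhood.
        if all(m < m2 or (x2 - x) ** 2 + (y2 - y) ** 2 > r_excl ** 2
               for x2, y2, m2 in parents if (x2, y2, m2) != (x, y, m)):
            retained.append((x, y))
    return retained
```

By construction, any two retained points are farther than the exclusion radius apart, which is the hard-core property the abstract contrasts with Poisson models.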

5. Spectrum mobility games.

【Paper Link】 【Pages】:37-45

【Authors】: Richard Southwell ; Jianwei Huang ; Xin Liu

【Abstract】: Cognitive radio gives users the ability to switch channels and make use of dynamic spectrum opportunities. However, switching channels takes time and may affect the quality of a user's transmission. When a cognitive radio user's channel becomes unavailable, it may sometimes be better to wait until the current channel becomes available again. Motivated by the recent FCC ruling on TV white space, we consider the scenario where cognitive radio users are given foreknowledge of channel availabilities. Using this information, each user must decide when and how to switch channels. The users wish to exploit spectrum opportunities, but they must account for the cost of switching channels and the congestion that comes from sharing channels with one another. We model the scenario as a game which, as we show, is equivalent to a network congestion game in the literature after proper and non-trivial transformations. This allows us to design a protocol that the users can apply to find Nash equilibria in a distributed manner. We further evaluate how the performance of the proposed schemes depends on switching cost, using real channel availability measurements.

【Keywords】: cognitive radio; game theory; radio spectrum management; wireless channels; Nash equilibria; channel availability measurement; channel switching; cognitive radio; dynamic spectrum opportunities; network congestion game; spectrum mobility games; switching cost; Availability; Cognitive radio; Databases; Games; Licenses; Switches; Time frequency analysis
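The distributed search for Nash equilibria referenced in the abstract above can be illustrated with generic best-response dynamics in a singleton congestion game (a simplified stand-in, not the paper's protocol or cost model; congestion games are potential games, so best responses converge to a pure Nash equilibrium).

```python
def best_response_dynamics(n_players, n_resources, cost, max_rounds=100):
    """Run round-robin best-response dynamics in a singleton congestion
    game: each player occupies exactly one resource and pays cost(load)
    on that resource.  Returns the profile once no player wants to move.
    """
    choice = [0] * n_players              # everyone starts on resource 0
    for _ in range(max_rounds):
        moved = False
        for i in range(n_players):
            load = [sum(1 for c in choice if c == r) for r in range(n_resources)]
            # Cost of staying, versus the cost of joining another resource.
            best_r, best_c = choice[i], cost(load[choice[i]])
            for r in range(n_resources):
                if r != choice[i] and cost(load[r] + 1) < best_c:
                    best_r, best_c = r, cost(load[r] + 1)
            if best_r != choice[i]:
                choice[i], moved = best_r, True
        if not moved:
            return choice                 # pure Nash equilibrium reached
    return choice
```

For example, four players on two identical channels with cost equal to load settle into a balanced 2/2 split, from which no unilateral switch is profitable.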

6. Scaling laws for cognitive radio network with heterogeneous mobile secondary users.

【Paper Link】 【Pages】:46-54

【Authors】: Yingzhe Li ; Xinbing Wang ; Xiaohua Tian ; Xue Liu

【Abstract】: We study the capacity and delay scaling laws for a cognitive radio network (CRN) with static primary users and heterogeneous mobile secondary users coexisting in a unit planar area. The primary network consists of n randomly and uniformly distributed static primary users (PUs) with higher priority to access the spectrum. The secondary network consists of m = (h + 1)n^(1+ϵ) heterogeneous mobile secondary users (SUs) which should access the spectrum opportunistically, where h = O(log n) and ϵ > 0. Each secondary user moves within a circular area centered at its initial position with a restricted speed. The moving area of each mobile SU is n^(−α), where α is a random variable following the discrete uniform distribution over h + 1 different values ranging from 0 to α0 (α0 > 0). α0 and h together determine the mobility heterogeneity of the secondary users. By allowing the secondary users to relay packets for primary users, we propose a joint routing and scheduling scheme to fully utilize the mobility heterogeneity of the secondary users. We show that the primary network and secondary network can achieve optimal capacity and delay scalings if we increase the mobility heterogeneity of the secondary users, i.e., the values of h and α0, until h = Θ(log n) and α0 ≥ 1 + ϵ. Under this optimal condition, both the primary network and part of the secondary network achieve almost constant capacity and delay scalings up to a poly-logarithmic factor.

【Keywords】: channel capacity; cognitive radio; telecommunication network routing; CRN; capacity scaling law; cognitive radio network; delay scaling law; discrete uniform distribution; heterogeneous mobile secondary user; mobility heterogeneity; poly-logarithmic factor; random variable; routing scheme; scheduling scheme; static primary user; Delay; Interference; Mobile communication; Relays; Routing; Throughput; Transmitters

7. Localization in 3D surface sensor networks: Challenges and solutions.

【Paper Link】 【Pages】:55-63

【Authors】: Yao Zhao ; Hongyi Wu ; Miao Jin ; Su Xia

【Abstract】: This work aims to address the problem of localization in 3D surface wireless sensor networks. First, it reveals the unique hardness in localization on 3D surface in comparison with the well-studied localization problems in 2D and 3D space, and offers useful insight into the necessary conditions to achieve desired localizability. Second, it formulates the localization problem under a practical setting with estimated link distances (between nearby nodes) and nodal height measurements, and introduces a layered approach to promote the localizability of such 3D surface sensor networks. Crossbow sensor-based experiments and large-scale simulations are carried out to evaluate the performance of the proposed localization algorithm. The numeric results show that it can effectively improve localizable rate and achieve low location errors and computational overhead, with the desired tolerability to measurement errors and high scalability to large-size wireless sensor networks.

【Keywords】: height measurement; measurement errors; wireless sensor networks; 3D surface wireless sensor network; crossbow sensor-based experiment; large-scale simulation; layered approach; link distance estimation; localization problem; measurement error; nodal height measurement; performance evaluation

8. WILL: Wireless indoor localization without site survey.

【Paper Link】 【Pages】:64-72

【Authors】: Chenshu Wu ; Zheng Yang ; Yunhao Liu ; Wei Xi

【Abstract】: Indoor localization is of great importance for a range of pervasive applications, and has attracted many research efforts in the past two decades. Most radio-based solutions require a site survey, in which radio signatures are collected and stored for later comparison and matching. Site survey involves intensive costs in manpower and time. In this work, we study unexploited RF signal characteristics and leverage user motions to construct the radio floor plan that previously had to be obtained by site survey. On this basis, we design WILL, an indoor localization approach based on off-the-shelf WiFi infrastructure and mobile phones. WILL is deployed in a real building covering over 1600 m^2; its deployment is easy and rapid since a site survey is no longer needed. The experimental results show that WILL achieves competitive performance compared with traditional approaches.

【Keywords】: indoor radio; mobile radio; ubiquitous computing; wireless LAN; RF signal characteristics; mobile phones; off-the-shelf WiFi infrastructure; pervasive application; radio floor plan; radio signature; radio-based solution; wireless indoor localization; Databases; Fingerprint recognition; Floors; IEEE 802.11 Standards; Mobile handsets; Skeleton

9. Priv-Code: Preserving privacy against traffic analysis through network coding for multihop wireless networks.

【Paper Link】 【Pages】:73-81

【Authors】: Zhiguo Wan ; Kai Xing ; Yunhao Liu

【Abstract】: Traffic analysis presents a serious threat to wireless network privacy due to the open nature of the wireless medium. Traditional solutions are mainly based on the mix mechanism proposed by David Chaum, whose main drawback is low network performance due to mixing and cryptographic operations. We propose Priv-Code, a novel privacy-preserving scheme based on network coding that counters traffic analysis attacks on wireless communications. Priv-Code provides strong privacy protection for wireless networks, as the mix system does, because of its intrinsic mixing feature; moreover, it achieves better network performance owing to the advantages of network coding. We first construct a hypergraph-based network coding model for wireless networks, under which we formalize an optimization problem whose objective is to make each node transmit at an identical rate. We then provide a decentralized algorithm for this optimization problem. After that, we develop an information-theoretic metric for privacy measurement using entropy, and based on this metric we demonstrate that Priv-Code achieves stronger privacy protection than the mix system while attaining better network performance.

【Keywords】: cryptography; data privacy; entropy; graph theory; network coding; optimisation; radio networks; Priv-Code; cryptographic operations; decentralized algorithm; entropy; hypergraph-based network coding model; information theoretic metric; multihop wireless networks; network performance; objective function; optimization problem; privacy measurement; privacy preserving scheme; strong privacy protection; traffic analysis; wireless network privacy; Optimization
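Entropy-based privacy metrics of the kind mentioned in the abstract above are commonly computed as the Shannon entropy of the attacker's probability assignment over candidate senders. A minimal sketch (function names are illustrative, not from the paper):

```python
import math


def anonymity_entropy(probs):
    """Shannon entropy (in bits) of an attacker's probability assignment
    over candidate senders; higher entropy means stronger sender anonymity."""
    return -sum(p * math.log2(p) for p in probs if p > 0)


def normalized_anonymity(probs):
    """Entropy normalized by its maximum log2(n), giving a degree of
    anonymity in [0, 1] (1 = the attacker learns nothing)."""
    n = len(probs)
    return anonymity_entropy(probs) / math.log2(n) if n > 1 else 0.0
```

Under this metric a uniform distribution over n senders achieves the maximum log2(n) bits, while a distribution concentrated on one sender yields zero anonymity.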

10. UFlood: High-throughput flooding over wireless mesh networks.

【Paper Link】 【Pages】:82-90

【Authors】: Jayashree Subramanian ; Robert Morris ; Hari Balakrishnan

【Abstract】: This paper proposes UFlood, a flooding protocol for wireless mesh networks. UFlood targets situations such as software updates where all nodes need to receive the same large file of data, and where limited radio range requires forwarding. UFlood's goals are high throughput and low airtime, defined respectively as the rate of completion of a flood to the slowest receiving node and the total time spent transmitting. The key to achieving these goals is a good choice of sender for each transmission opportunity. The best choice evolves as a flood proceeds, in ways that are difficult to predict. UFlood's core new idea is a distributed heuristic to dynamically choose the senders likely to lead to all nodes receiving the flooded data in the least time. The mechanism takes into account which data nearby receivers already have, as well as inter-node channel quality. The mechanism includes a novel bit-rate selection algorithm that trades off the speed of high bit-rates against the larger number of nodes likely to receive at low bit-rates. Unusually, UFlood uses both random network coding to increase the usefulness of each transmission and detailed feedback about what data each receiver already has; the feedback is critical in deciding which node's coded transmission will have the most benefit to receivers. The required feedback is potentially voluminous, but UFlood includes novel techniques to reduce its cost. The paper presents an evaluation on a 25-node 802.11 test-bed. UFlood achieves 150% higher throughput than MORE, a high-throughput flooding protocol, using 65% less airtime. UFlood uses 54% less airtime than MNP, an existing efficient protocol, and achieves 300% higher throughput.

【Keywords】: network coding; protocols; random codes; wireless LAN; wireless mesh networks; 25-node 802.11 test-bed; MNP; MORE; UFlood; bit-rate selection algorithm; distributed heuristic; high-throughput flooding protocol; internode channel quality; radio range; random network coding; software updates; wireless mesh network; Encoding; Interpolation; Network coding; Protocols; Receivers; Throughput; Wireless mesh networks

11. Memory-assisted universal compression of network flows.

【Paper Link】 【Pages】:91-99

【Authors】: Mohsen Sardari ; Ahmad Beirami ; Faramarz Fekri

【Abstract】: Recently, the existence of a considerable amount of redundancy in Internet traffic has stimulated the deployment of several redundancy elimination techniques within the network. These techniques are often based on either packet-level Redundancy Elimination (RE) or Content-Centric Networking (CCN). However, these techniques cannot exploit sub-packet redundancies. Further, alternative techniques such as end-to-end universal compression would not perform well on Internet traffic either, as they require infinite-length traffic to effectively remove redundancy. This paper proposes a memory-assisted universal compression technique that holds significant promise for reducing the amount of traffic in networks. The proposed work is based on the observation that if a source is to be compressed and sent over a network, the associated universal code entails a substantial transmission overhead due to finite-length traffic. However, intermediate nodes can learn the source statistics, and this can be used to reduce the cost of describing them, reducing the transmission overhead for such traffic. We present two algorithms (statistical and dictionary-based) for the memory-assisted universal lossless compression of information sources. These schemes are universal in the sense that they do not require any prior knowledge of the traffic's statistical distribution. We demonstrate the effectiveness of both algorithms and characterize the memorization gain using real Internet traces. Furthermore, we apply these compression schemes to Internet-like power-law graphs and solve the routing problem for compressed flows. We characterize the network-wide gain of memorization from the information-theoretic point of view. In particular, through our analysis on power-law graphs, we show that a non-vanishing network-wide gain of memorization is obtained even when the number of memory units is a tiny fraction of the total number of nodes in the network. Finally, we validate our predictions of the memorization gain by simulation on real traffic traces.

【Keywords】: Internet; information theory; telecommunication network routing; telecommunication transmission lines; Internet traffic; associated universal code; content-centric networking; information sources; information theory; memory-assisted universal compression; memory-assisted universal lossless compression; network flows; packet-level redundancy elimination; power-law graph; routing problem; transmission overhead; Context; Decoding; Dictionaries; Internet; Redundancy; Routing; Source coding; Compression; Information Flow; Memory-Assisted Source Coding; Network Memory; Random Power-Law Graph; Redundancy Elimination

12. CodePipe: An opportunistic feeding and routing protocol for reliable multicast with pipelined network coding.

【Paper Link】 【Pages】:100-108

【Authors】: Peng Li ; Song Guo ; Shui Yu ; Athanasios V. Vasilakos

【Abstract】: Multicast is an important mechanism in modern wireless networks and has attracted significant efforts to improve its performance with respect to different metrics including throughput, delay, and energy efficiency. Traditionally, an ideal loss-free channel model is widely used to facilitate routing protocol design. However, the quality of wireless links can be affected or even jeopardized by many factors such as collisions, fading or environmental noise. In this paper, we propose a reliable multicast protocol, called CodePipe, with advanced performance in terms of energy efficiency, throughput and fairness in lossy wireless networks. Built upon opportunistic routing and random linear network coding, CodePipe not only simplifies transmission coordination between nodes, but also improves multicast throughput significantly by exploiting both intra-batch and inter-batch coding opportunities. In particular, four key techniques, namely an LP-based opportunistic routing structure, opportunistic feeding, fast batch moving and inter-batch coding, are proposed to offer substantial improvements in throughput, energy efficiency and fairness. We evaluate CodePipe in the ns-2 simulator against two other state-of-the-art multicast protocols, MORE and Pacifier. Simulation results show that CodePipe significantly outperforms both of them.

【Keywords】: linear codes; multicast protocols; network coding; random codes; routing protocols; telecommunication network reliability; CodePipe; LP based opportunistic routing structure; MORE protocols; Pacifier protocols; fast batch moving; interbatch coding; intrabatch coding; loss free channel model; lossy wireless networks; ns2 simulator; opportunistic feeding protocol; pipelined network coding; random linear network coding; reliable multicast protocols; routing protocol; transmission coordination; wireless links; Encoding; Linear programming; Network coding; Reliability; Routing; Throughput; Wireless networks

13. E-V: Efficient visual surveillance with electronic footprints.

【Paper Link】 【Pages】:109-117

【Authors】: Jin Teng ; Junda Zhu ; Boying Zhang ; Dong Xuan ; Yuan F. Zheng

【Abstract】: Video cameras have been deployed at almost every critical location, and they keep generating huge volumes of video data. Current visual processing technologies are not efficient at handling all these data for surveillance purposes, and a large amount of human effort is needed to process them. In this paper, we propose the E-V system, which uses electronic footprints to help sort through this swamp of data. Electronic footprints are wireless signals emitted by the mobile devices people carry. They are ubiquitous and amenable to collection and indexing. We study how to use electronic footprints to quickly and accurately identify an object's appearance model from large volumes of video data. We formulate the problem and provide efficient algorithms to achieve the identification on large data sets. Real-world experiments and large-scale simulations have been performed, confirming the feasibility and efficiency of the proposed algorithms.

【Keywords】: video cameras; video surveillance; electronic footprints; human power; mobile devices; video cameras; video data; visual processing; visual surveillance; wireless signals; Cameras; Monitoring; Sensors; Smoothing methods; Visualization; Wireless communication; Wireless sensor networks

14. Energy-efficient intrusion detection with a barrier of probabilistic sensors.

【Paper Link】 【Pages】:118-126

【Authors】: Junkun Li ; Jiming Chen ; Ten H. Lai

【Abstract】: Intrusion detection is a significant application of wireless sensor networks (WSNs). S. Kumar et al. introduced the concept of barrier coverage, which deploys sensors in a narrow belt region to guarantee that any intrusion across the region is detected. However, practical issues have not been investigated, such as scheduling sensors energy-efficiently while guaranteeing the detection probability of any intrusion across the region under a probabilistic sensing model, which is more realistic. Besides, intruders may be humans, animals, fighter planes or other objects, which obviously have diverse moving speeds. In this paper, we theoretically analyze the detection probability of an arbitrary path across the barrier of sensors and take the maximum speed of possible intruders into consideration, since sensor networks are designed for different intruders in different scenarios. Based on the theoretical analysis of detection probability, we formulate a Minimum Weight ϵ-Barrier Problem concerning how to schedule sensors energy-efficiently. We show the problem is NP-hard and propose a bounded approximation algorithm, called the Minimum Weight Barrier Algorithm (MWBA), to schedule the activation of sensors. To evaluate our design, we analyze the performance of MWBA theoretically and perform extensive simulations to demonstrate the effectiveness of the proposed algorithm.

【Keywords】: approximation theory; scheduling; security of data; wireless sensor networks; WSN; animals; approximation algorithm; barrier coverage; fighter planes; humans; intruders; intrusion detection; minimum weight barrier algorithm; probabilistic sensing model; probabilistic sensors; wireless sensor networks; Algorithm design and analysis; Belts; Image edge detection; Probabilistic logic; Sensor phenomena and characterization; Wireless sensor networks; barrier coverage; energy-efficient; probabilistic sensing model; wireless sensor networks
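Probabilistic sensing analyses of the kind described above typically combine per-sensor miss probabilities along a path. The sketch below assumes, purely for illustration, an exponential distance-decay detection model p(d) = exp(-λd) evaluated at discrete waypoints; it is not the paper's formulation.

```python
import math


def detection_prob(path, sensors, lam=1.0):
    """Probability that a path, sampled at discrete waypoints, is detected
    by at least one sensor under the assumed exponential sensing model
    p(d) = exp(-lam * d) per independent (waypoint, sensor) trial."""
    p_miss = 1.0
    for (px, py) in path:
        for (sx, sy) in sensors:
            d = math.hypot(px - sx, py - sy)
            # Multiply in the probability that this trial misses the intruder.
            p_miss *= 1.0 - math.exp(-lam * d)
    return 1.0 - p_miss
```

As expected under this model, a sensor co-located with a waypoint detects with certainty, and adding sensors can only raise the detection probability.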

15. Submodular game for distributed application allocation in shared sensor networks.

【Paper Link】 【Pages】:127-135

【Authors】: Chengjie Wu ; You Xu ; Yixin Chen ; Chenyang Lu

【Abstract】: Wireless sensor networks are evolving from single-application platforms towards an integrated infrastructure shared by multiple applications. Given the resource constraints of sensor nodes, it is important to optimize the allocation of applications to maximize the overall Quality of Monitoring (QoM). Recent solutions to this challenging application allocation problem are centralized in nature, limiting their scalability and robustness against network failures and dynamics. This paper presents a distributed game-theoretic approach to application allocation in shared sensor networks. We first transform the optimal application allocation problem into a submodular game and then develop a decentralized algorithm that employs only localized interactions among neighboring nodes. We prove that the network converges to a pure-strategy Nash equilibrium with an approximation bound of 1/2. Simulations based on three real-world datasets demonstrate that our algorithm is competitive with a state-of-the-art centralized algorithm in terms of QoM.

【Keywords】: game theory; resource allocation; wireless sensor networks; Nash equilibrium; distributed application allocation; distributed game theory; optimal application allocation problem; quality of monitoring; shared sensor networks; submodular game; wireless sensor network; Approximation algorithms; Approximation methods; Games; Nash equilibrium; Optimization; Resource management; Wireless sensor networks

16. Energy-efficient reporting mechanisms for multi-type real-time monitoring in Machine-to-Machine communications networks.

【Paper Link】 【Pages】:136-144

【Authors】: Huai-Lei Fu ; Hou-Chun Chen ; Phone Lin ; Yuguang Fang

【Abstract】: In Machine-to-Machine (M2M) communications, machines are wirelessly connected to accomplish collaborative tasks without human intervention, providing ubiquitous solutions for real-time monitoring. Real-time monitoring is one of the killer applications for M2M communications: M2M nodes transmit sensed data to an M2M gateway, which can then monitor each sensing region in real time. In this application, the energy the M2M nodes consume in sending sensed data to the M2M gateway is an important factor that significantly affects system performance. In this paper, we first consider both the energy consumption and the validity of sensed data to design centralized and distributed energy-efficient reporting mechanisms. We then analyze the complexity of the reporting mechanisms. Simulation experiments are conducted to investigate the performance of the proposed mechanisms, and show that the distributed mechanism outperforms the centralized one when the M2M nodes are mobile.

【Keywords】: computerised monitoring; internetworking; mobile radio; network servers; M2M communication network; M2M gateway; distributed energy-efficient reporting mechanism; energy consumption; machine-to-machine communications network; multitype real-time monitoring; sensed data transmission; Logic gates; Minimization; Monitoring; Real time systems; Schedules; Sensors; Strontium; Energy efficiency; Machine-to-machine communications; Reporting mechanism; Sensed data validity

17. A generic framework for throughput-optimal control in MR-MC wireless networks.

【Paper Link】 【Pages】:145-153

【Authors】: Hongkun Li ; Yu Cheng ; Xiaohua Tian ; Xinbing Wang

【Abstract】: In this paper, we study throughput-optimal control in multi-radio multi-channel (MR-MC) wireless networks, which is particularly challenging due to the coupled link scheduling and channel/radio assignment. This paper makes threefold contributions: 1) We develop a new model by transforming a network node into multiple node-radio-channel (NRC) tuples. Such modeling facilitates the development of a tuple-based back-pressure algorithm, whose solution jointly solves link scheduling, routing and channel/radio assignment in the MR-MC network. 2) The tuple-based model enables the extension of some well-known algorithms, e.g., greedy maximal scheduling and maximal scheduling, to MR-MC networks with guaranteed performance. We provide stability and capacity efficiency ratio analysis for the tuple-based scheduling algorithms. 3) The tuple-based framework facilitates a decomposable cross-layer formulation that enhances the delay performance of throughput-optimal control by integrating link-layer scheduling with network-layer path selection, where both hop count and queuing delay are considered. Simulation results are presented to demonstrate the capacity region and delay performance of the proposed methodology, in comparison with the existing approach [3].

【Keywords】: channel allocation; optimal control; queueing theory; radio networks; scheduling; telecommunication control; telecommunication network routing; MR-MC wireless networks; NRC tuple; channel-radio assignment; greedy maximal scheduling; hop-count; link-layer scheduling; multiradio multichannel wireless networks; network node; network-layer path selection; node-radio-channel tuple; queuing delay; routing; stability-capacity efficiency ratio analysis; throughput-optimal control; tuple-based back pressure algorithm; tuple-based scheduling algorithms; Context; Context modeling; Delay; Resource management; Schedules; Scheduling; Scheduling algorithms

18. CSI-SF: Estimating wireless channel state using CSI sampling & fusion.

【Paper Link】 【Pages】:154-162

【Authors】: Riccardo Crepaldi ; Jeongkeun Lee ; Raúl H. Etkin ; Sung-Ju Lee ; Robin Kravets

【Abstract】: One of the key features of high speed WLAN such as 802.11n is the use of MIMO (Multiple Input Multiple Output) antenna technology. The MIMO channel is described with fine granularity by Channel State Information (CSI) that can be utilized in many ways to improve network performance. Many complex parameters of a MIMO system require numerous samples to obtain CSI for all possible channel configurations. As a result, measuring the complete CSI space requires excessive sampling overhead and thus degrades network performance. We propose CSI-SF (CSI Sampling & Fusion), a method for estimating CSI for every MIMO configuration by sampling a small number of frames transmitted with different settings and extrapolating data for the remaining settings. For instance, we predict CSI of multi-stream settings using CSI obtained only from single stream packets. We evaluate the effectiveness of CSI-SF in various scenarios using our 802.11n testbed and show that CSI-SF provides an accurate, complete knowledge of the MIMO channel with reduced overhead from traditional sampling. We also show that CSI-SF can be applied to network algorithms such as rate adaptation, antenna selection and association control to significantly improve their performance and efficiency.

【Keywords】: MIMO communication; antennas; channel estimation; state estimation; telecommunication standards; wireless LAN; wireless channels; CSI-SF; IEEE 802.11n; MIMO configuration; antenna selection; association control; channel state information sampling and fusion; multiple input multiple output antenna technology; multistream settings; network performance; rate adaptation; sampling overhead; single stream packets; wireless LAN; wireless channel state estimation; Bandwidth; Channel estimation; IEEE 802.11n Standard; MIMO; OFDM; Signal to noise ratio; Transmitting antennas

19. EleSense: Elevator-assisted wireless sensor data collection for high-rise structure monitoring.

【Paper Link】 【Pages】:163-171

【Authors】: Feng Wang ; Dan Wang ; Jiangchuan Liu

【Abstract】: Wireless sensor networks have been widely suggested for use in Cyber-Physical Systems for Structural Health Monitoring. However, for today's high-rise structures (e.g., the Guangzhou New TV Tower, peaking at 600m above ground), the extensive vertical dimension creates enormous challenges for sensor data collection, beyond those addressed in state-of-the-art mote-like systems. One example is the data transmission from the sensor nodes to the base station. Given the long span of these civil structures, neither long-range one-hop data transmission nor short-range hop-by-hop communication is cost-efficient. In this paper, we propose EleSense, a novel high-rise structure monitoring framework that uses elevators to assist data collection. In EleSense, an elevator is attached with the base station and collects data while it moves to serve passengers; as such, the communication distance can be effectively reduced. To maximize the benefit, we formulate the problem as a cross-layer optimization problem and propose a centralized algorithm to solve it optimally. We further propose a distributed implementation to accommodate the hardware capability of sensor nodes and address other practical issues. Through extensive simulations, we show that EleSense achieves a significant throughput gain over both the case without elevators and a straightforward 802.11 MAC scheme without cross-layer optimization. Moreover, EleSense can greatly reduce communication costs while maintaining good fairness and reliability. We also conduct a case study with real experiments and data sets on the Guangzhou New TV Tower, which further validates the effectiveness of EleSense.

【Keywords】: access protocols; computerised monitoring; condition monitoring; lifts; sensor fusion; structural engineering; wireless LAN; wireless sensor networks; EleSense; base station; centralized algorithm; civil structure; cyber-physical systems; data transmission; elevator assisted wireless sensor data collection; high rise structure monitoring; straightforward 802.11 MAC; structural health monitoring; wireless sensor networks; Base stations; Elevators; Monitoring; Schedules; Strain; Temperature sensors; Wires

20. Stochastic optimal multirate multicast in socially selfish wireless networks.

【Paper Link】 【Pages】:172-180

【Authors】: Hongxing Li ; Chuan Wu ; Zongpeng Li ; Wei Huang ; Francis C. M. Lau

【Abstract】: Multicast supporting non-uniform receiving rates is an effective means of data dissemination to receivers with diversified bandwidth availability. Designing efficient rate control, routing and capacity allocation to achieve optimal multirate multicast has been a difficult problem in fixed wireline networks, let alone wireless networks with random channel fading and volatile node mobility. The challenge escalates if we consider also the selfishness of users who prefer to relay data for others with strong social ties. Such social selfishness of users is a new constraint in network protocol design. Its impact on efficient multicast in wireless networks has yet to be explored especially when multiple receiving rates are allowed. In this paper, we design an efficient, social-aware multirate multicast scheme that can maximize the overall utility of socially selfish users in a wireless network, and its distributed implementation. We model social preferences of users as differentiated costs for packet relay, which are weighted by the strength of social tie between the relay and the destination. Stochastic Lyapunov optimization techniques are utilized to design optimal scheduling of multicast transmissions, which are combined with multi-resolution coding and random linear network coding. With rigorous theoretical analysis, we study the optimality, stability, and complexity of our algorithm, as well as the impact of social preferences. Empirical studies further confirm the superiority of our algorithm under different social selfishness patterns.

【Keywords】: Lyapunov methods; linear codes; multicast communication; network coding; optimal control; optimisation; radio networks; random codes; stochastic processes; telecommunication control; differentiated costs; distributed implementation; multicast transmission; multiresolution coding; nonuniform receiving rates; optimal scheduling; packet relay; random linear network coding; social aware multirate multicast scheme; social selfishness pattern; socially selfish user; socially selfish wireless networks; stochastic Lyapunov optimization techniques; stochastic optimal multirate multicast; Heuristic algorithms; Network coding; Optimization; Receivers; Resource management; Routing; Wireless networks

21. VDN: Virtual machine image distribution network for cloud data centers.

【Paper Link】 【Pages】:181-189

【Authors】: Chunyi Peng ; Minkyong Kim ; Zhe Zhang ; Hui Lei

【Abstract】: Cloud computing centers face the key challenge of provisioning diverse virtual machine instances in an elastic and scalable manner. To address this challenge, we have performed an analysis of VM instance traces collected at six production data centers over four months. One key finding is that the number of instances created from the same VM image is relatively small at any given time, and thus conventional file-based p2p sharing approaches may not be effective. Based on the understanding that different VM image files often have many common chunks of data, we propose a chunk-level Virtual machine image Distribution Network (VDN). Our distribution scheme takes advantage of the hierarchical network topology of data centers to reduce VM instance provisioning time and to minimize the overhead of maintaining chunk location information. Evaluation shows that VDN achieves as much as a 30-80× speedup for large VM images under heavy traffic.
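The premise that different VM images share many common chunks can be sanity-checked with a toy sketch (fixed-size chunking and the `shared_fraction` helper are illustrative assumptions; VDN's actual chunking and indexing scheme may differ):

```python
import hashlib

def chunk_hashes(data: bytes, chunk_size: int = 4096) -> set:
    """Hash fixed-size chunks of an image; equal chunks collapse to one hash."""
    return {hashlib.sha256(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)}

def shared_fraction(img_a: bytes, img_b: bytes) -> float:
    """Jaccard similarity of the two images' chunk sets (1.0 = identical content)."""
    a, b = chunk_hashes(img_a), chunk_hashes(img_b)
    return len(a & b) / max(len(a | b), 1)
```

A chunk-level distribution network exploits exactly this overlap: chunks already present nearby (e.g., from a sibling image) need not be fetched again.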

【Keywords】: cloud computing; computer centres; peer-to-peer computing; telecommunication traffic; virtual machines; VDN; VM instance provisioning time reduction; chunk location information maintenance; chunk-level virtual machine image distribution network; cloud computing; cloud data centers; file-based P2P sharing approach; heavy traffic; hierarchical network topology; overhead minimization; Collaboration; Linux; Network topology; Peer to peer computing; Servers; Virtual machining

22. Optimal bidding in spot instance market.

【Paper Link】 【Pages】:190-198

【Authors】: Yang Song ; Murtaza Zafer ; Kang-Won Lee

【Abstract】: Amazon introduced the Spot Instance Market to utilize the idle resources of Amazon Elastic Compute Cloud (EC2) more efficiently. The price of a spot instance changes dynamically according to the current supply and demand for cloud resources. Users can bid for a spot instance and the job request will be granted if the current spot price falls below the bid, whereas the job will be terminated if the spot price exceeds the bid. In this paper, we investigate the problem of designing a bidding strategy from a cloud service broker's perspective, where the cloud service broker accepts job requests from cloud users, and leverages the opportunistic yet less expensive spot instances for computation in order to maximize its own profit. In this context, we propose a profit aware dynamic bidding (PADB) algorithm, which observes the current spot price and selects the bid adaptively to maximize the time average profit of the cloud service broker. We show that our bidding strategy achieves a near-optimal solution, i.e., (1-ε) of the optimal solution to the profit maximization problem, where ε can be arbitrarily small. The proposed dynamic bidding algorithm is self-adaptive and requires no a priori statistical knowledge of the distribution of random job sizes from cloud users.
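The bid-selection trade-off can be illustrated with a brute-force stand-in (this is an illustrative sketch under a simplified profit model, not the PADB algorithm): the broker earns fixed revenue while the job runs, pays the spot price, and earns nothing in slots where the price exceeds the bid.

```python
def best_bid(price_history, revenue_per_slot, candidate_bids):
    """Illustrative brute-force bid selection over a spot-price trace.

    The job runs (earning revenue_per_slot, paying the spot price) only in
    slots where the price is at or below the bid; otherwise it is terminated
    and earns nothing. Returns the candidate bid with the highest empirical
    time-average profit.
    """
    def avg_profit(bid):
        return sum(revenue_per_slot - p
                   for p in price_history if p <= bid) / len(price_history)
    return max(candidate_bids, key=avg_profit)
```

This captures the tension PADB resolves online: a low bid forfeits cheap slots, while a high bid admits slots where the spot price wipes out the revenue.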

【Keywords】: cloud computing; electronic commerce; optimisation; resource allocation; supply and demand; Amazon elastic compute cloud resources; PADB algorithm; cloud service broker perspective; cloud users; near-optimal solution; optimal bidding; profit aware dynamic bidding algorithm; profit maximization problem; random job size distribution; self-adaptive dynamic bidding algorithm; spot instance market; spot price; statistical knowledge; supply and demand; Cloud computing; Heuristic algorithms; History; Kernel; Resource management; Stochastic processes; Tin

23. CALMS: Cloud-assisted live media streaming for globalized demands with time/region diversities.

【Paper Link】 【Pages】:199-207

【Authors】: Feng Wang ; Jiangchuan Liu ; Minghua Chen

【Abstract】: Live media streaming has become one of the most popular applications on the Internet. We have witnessed the successful deployment of commercial systems with CDN- or peer-to-peer based engines. While each is effective in certain aspects, an all-round scalable, reliable, responsive and cost-effective solution remains an elusive goal. Moreover, today's live streaming services have become highly globalized, with subscribers from all over the world. Such globalization makes user behaviors and demands even more diverse and dynamic, further challenging state-of-the-art system designs. The emergence of cloud computing, however, sheds new light on this dilemma. Leveraging elastic resource provisioning from the cloud, we present CALMS (Cloud-Assisted Live Media Streaming), a generic framework that facilitates a migration to the cloud. CALMS adaptively leases and adjusts cloud server resources at a fine granularity to accommodate temporal and spatial dynamics of demands from live streaming users. We present optimal solutions to deal with cloud servers with diverse capacities and lease prices, as well as the potential latencies in initiating and terminating leases on real-world cloud platforms. Our solution accommodates location heterogeneity well, mitigating the impact of user globalization. It also enables seamless migration for existing streaming systems, e.g., peer-to-peer, and fully explores their potentials. Simulations with data traces from both a cloud service provider (Amazon EC2) and a live media streaming service provider (PPTV) demonstrate that CALMS effectively reduces overall system deployment costs while providing users with satisfactory streaming latency and rate.

【Keywords】: cloud computing; globalisation; media streaming; peer-to-peer computing; CALMS; CDN; Internet; cloud-assisted live media streaming; commercial systems; elastic resource provisioning; globalization; globalized demands; peer-to-peer based engines; time/region diversities; Availability; Bandwidth; Organizations; Peer to peer computing; Schedules; Servers; Streaming media

24. SageShift: Managing SLAs for highly consolidated cloud.

【Paper Link】 【Pages】:208-216

【Authors】: Orathai Sukwong ; Akkarit Sangpetch ; Hyong S. Kim

【Abstract】: Maximizing the consolidation ratio, the number of virtual machines (VMs) in a physical machine, without violating customers' SLAs is an important goal in the cloud. We show that it is difficult to achieve this goal with existing hypervisor schedulers. The schedulers control only the amount of resource allocation, but not the sequence of VM execution. This sequence can significantly impact the response time when requests arrive concurrently for the VMs sharing the same CPU. We find that the response time can increase by as much as 100% for every additional VM in the system, even if the utilization does not exceed the maximum capacity. Therefore, existing schedulers have to reduce the consolidation ratio to meet SLAs. Previous resource-provisioning works rely on existing schedulers that cannot guarantee SLAs without reducing the consolidation ratio. We propose SageShift, a system that can achieve SLAs without penalizing the consolidation ratio. SageShift consists of a VM admission controller, Sage, and a hypervisor scheduler, Shift. To admit a VM, Sage assesses the feasibility of its SLA based on the patterns of incoming requests. Shift maintains the admitted SLAs by adjusting both the amount of resource allocation and the sequence of VM execution. The dynamic adjustment is based on the observed response time and the SLAs. We modify the KVM scheduler in the Linux kernel to implement Shift. We show that Shift can improve the consolidation ratio by 66% without compromising the SLAs. Under bursty incoming requests, Shift maintains all SLAs within 3% of the percentile target, while existing schedulers in VMware ESXi, Xen and KVM fail to meet one or more SLAs, falling up to 33% below the percentile target. Shift is also work-conserving: it allows best-effort VMs to run in the background to maximize hardware utilization without impacting SLAs.

【Keywords】: Linux; cloud computing; processor scheduling; resource allocation; virtual machines; CPU sharing; KVM scheduler; Linux kernel; SLA management; SageShift; VM admission control; VM execution; VMware ESXi; Xen; consolidation ratio maximization; customer SLA violation; hardware utilization maximization; highly consolidated cloud; hypervisor schedulers; resource allocation; resource-provisioning works; service level agreement; virtual machines; Admission control; Kernel; Linux; Resource management; Time factors; Virtual machine monitors; Admission control; Dynamic scheduling; Quality of service; Virtual machine monitors; Web services

25. Minimum camera barrier coverage in wireless camera sensor networks.

【Paper Link】 【Pages】:217-225

【Authors】: Huan Ma ; Meng Yang ; Deying Li ; Yi Hong ; Wenping Chen

【Abstract】: Barrier coverage is an important issue in wireless sensor networks. In wireless camera sensor networks, the cameras capture images or videos of target objects, and the position and angle of a camera sensor affect its sensing range. Therefore, the barrier coverage problem in camera sensor networks differs from that in scalar sensor networks. In this paper, based on the definition of full-view coverage, we focus on the Minimum Camera Barrier Coverage Problem (MCBCP) in wireless camera sensor networks in which the camera sensors are deployed randomly in a target field. First, we partition the target field into disjoint subregions that are either full-view-covered or not-full-view-covered. Then we model the full-view-covered regions and their relationships as a weighted directed graph. Based on this graph, we propose an algorithm to find a feasible solution to the MCBCP problem, and prove the correctness of the solution. Furthermore, we propose an optimal algorithm for the MCBCP problem. Finally, simulation results demonstrate that our algorithm outperforms the existing algorithm.

【Keywords】: cameras; image sensors; optimisation; wireless sensor networks; MCBCP; minimum camera barrier coverage problem; not-full-view-covered regions; optimal algorithm; scalar sensor network; wireless camera sensor networks; Cameras; Partitioning algorithms; Robot sensing systems; Vectors; Wireless communication; Wireless sensor networks

26. A statistical approach for target counting in sensor-based surveillance systems.

【Paper Link】 【Pages】:226-234

【Authors】: Dengyuan Wu ; Dechang Chen ; Kai Xing ; Xiuzhen Cheng

【Abstract】: Target counting in sensor-based surveillance systems is an interesting task that could have many important practical applications. In such a system, each sensor outputs the number of targets in its sensing region, and the problem is how to combine all the reported numbers from sensors into an estimate of the total number of targets present in the entire monitored area. The main challenge is handling sensor outputs that count the same targets falling into the overlapped portions of the sensors' sensing regions. This paper introduces a statistical approach to estimate the target count in such a surveillance system. Our approach avoids direct handling of the overlapping issue by adopting statistical methods. First, depending on whether prior knowledge of the target distribution is available, either residual-sum-of-squares minimization or kernel regression is used to estimate the distribution of targets. Then the estimated total target count is obtained by likelihood estimation based on a sequence of binomial distributions derived from a sampling procedure. Comparisons based on simulations show that our proposed counting approach outperforms state-of-the-art counting algorithms. Extensive simulations also show that our approach is very fast and very promising for estimating the target count in sensor-based surveillance systems.

【Keywords】: maximum likelihood estimation; radiotelemetry; regression analysis; statistical distributions; surveillance; wireless sensor networks; art counting algorithms; binomial distribution sequence; kernel regression residual sum minimization; likelihood estimation method; sensor-based surveillance systems; square residual sum minimization; statistical approach; target counting; target distribution; wireless counting sensor network; Approximation methods; Kernel; Maximum likelihood estimation; Monitoring; Sensors; Wireless sensor networks

27. A simpler and better design of error estimating coding.

【Paper Link】 【Pages】:235-243

【Authors】: Nan Hua ; Ashwin Lall ; Baochun Li ; Jun (Jim) Xu

【Abstract】: We study error estimating codes with the goal of establishing better bounds for the theoretical and empirical overhead of such schemes. We explore the idea of using sketch data structures for this problem, and show that the tug-of-war sketch gives an asymptotically optimal solution. The optimality of our algorithms is proved using communication complexity lower bound techniques. We then propose a novel enhancement of the tug-of-war sketch that greatly reduces the communication overhead for realistic error rates. Our theoretical analysis and assertions are supported by extensive experimental evaluation.
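The tug-of-war sketch at the core of this design can be sketched as follows (an illustrative AMS-style estimator with hypothetical function names, not the authors' exact enhancement): sender and receiver project their bit vectors with shared random ±1 signs, and each squared difference of counters is an unbiased estimate of the Hamming distance between the vectors (dividing by the frame length then gives the bit error rate).

```python
import random

def tow_sketch(bits, k=200, seed=0):
    """Tug-of-war (AMS) sketch: k counters, each a random ±1 projection of
    the bit vector. Sender and receiver share the seed so projections match."""
    rng = random.Random(seed)
    signs = [[rng.choice((-1, 1)) for _ in bits] for _ in range(k)]
    return [sum(s * b for s, b in zip(row, bits)) for row in signs]

def estimate_hamming(bits_tx, bits_rx, k=200, seed=0):
    """Each (sx - sy)^2 is an unbiased estimate of the Hamming distance
    (the squared L2 norm of the difference vector); average over k counters."""
    sx = tow_sketch(bits_tx, k, seed)
    sy = tow_sketch(bits_rx, k, seed)
    return sum((a - b) ** 2 for a, b in zip(sx, sy)) / k
```

Only the k counters travel with the frame, so the overhead is independent of the frame length; the variance shrinks as 1/k.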

【Keywords】: error detection codes; error statistics; asymptotically optimal solution; communication complexity lower bound technique; communication overhead reduction; error estimating coding; sketch data structure; tug-of-war sketch; Bit error rate; Complexity theory; Encoding; Estimation; Hamming distance; Radiation detectors; Vectors

28. Strategizing surveillance for resource-constrained event monitoring.

【Paper Link】 【Pages】:244-252

【Authors】: Xi Fang ; Dejun Yang ; Guoliang Xue

【Abstract】: Surveillance systems, such as sensor networks and surveillance camera networks, have been widely deployed to monitor events in many different scenarios. One common way to conserve resource (such as energy) usage is to have only a subset of devices activated at any given time. In this paper, we look at this classic problem from a new perspective: we do not try to cover all the event areas as usually studied, but aim to find the most valuable event areas among all the event areas (i.e., the ones leading to the most utility) to monitor, subject to resource constraints. This problem poses two major challenges. First, the utility brought by monitoring an event area is not known beforehand. Second, even if this information is known in advance, solving the problem of which event areas should be monitored to maximize the total utility, subject to resource constraints, is NP-hard. We formulate this problem as a novel programming system, called online integer linear programming, and present a polynomial time algorithm to solve it. For any given σ∈(0, 1), we prove a bound on the gap between σ times the expected utility obtained by constantly using the globally optimal strategy and the expected utility obtained by following our algorithm.

【Keywords】: integer programming; linear programming; sensor placement; video surveillance; wireless sensor networks; NP-hard problems; event area; online integer linear programming; polynomial time algorithm; resource constrained event monitoring; surveillance strategy; Approximation algorithms; Cameras; Integer linear programming; Surveillance; Vectors; Wireless communication

29. Multicast capacity, delay and delay jitter in intermittently connected mobile networks.

【Paper Link】 【Pages】:253-261

【Authors】: Jiajia Liu ; Xiaohong Jiang ; Hiroki Nishiyama ; Nei Kato

【Abstract】: Many important real networks can be modeled as intermittently connected mobile networks (ICMNs), like the vehicular ad hoc networks, wildlife tracking and habitat monitoring sensor networks, military networks, etc. However, the fundamental performance limits of ICMNs are still largely unknown so far. This paper explores the capability of these networks to support multicast traffic, where each source node desires to send packets to k distinct destinations and all nodes move according to the generalized hybrid random walk mobility model. We show how the network capacity and related delay/delay jitter for supporting multicast in such ICMNs are scaling with the basic network parameters under three transmission protocols: one-hop relay, two-hop relay without packet redundancy and two-hop relay with packet redundancy.

【Keywords】: channel capacity; jitter; mobile radio; multicast communication; protocols; telecommunication traffic; delay jitter; habitat monitoring sensor networks; hybrid random walk mobility model; intermittently connected mobile networks; military networks; multicast capacity; network capacity; one-hop relay; transmission protocols; two-hop relay with packet redundancy; two-hop relay without packet redundancy; vehicular ad hoc networks; wildlife tracking sensor networks; Ad hoc networks; Delay; Jitter; Mobile computing; Redundancy; Relays; Throughput

30. Cooperative topology control with adaptation for improved lifetime in wireless ad hoc networks.

【Paper Link】 【Pages】:262-270

【Authors】: Xiaoyu Chu ; Harish Sethu

【Abstract】: Topology control algorithms allow each node in a wireless multi-hop network to adjust the power at which it makes its transmissions and choose the set of neighbors with which it communicates directly, while preserving global goals such as connectivity or coverage. This allows each node to conserve energy and contribute to increasing the lifetime of the network. Previous work on topology control has largely used an approach based on considering only the energy costs across links without considering the amount of energy available on a node. Further, previous work has largely used a static approach where the topology is determined at the beginning of the network's life and does not incorporate the varying rates of energy consumption at different nodes. In this paper, we address these weaknesses and introduce a new topology control algorithm that dynamically adapts to current energy levels at nodes. The algorithm, called Cooperative Topology Control with Adaptation (CTCA), employs a game-theoretic approach that maps the problem of maximizing the network's lifetime into an ordinal potential game. This allows a node running the CTCA algorithm to make a sacrifice by increasing its transmission power if it can help reduce energy consumption at another node with a smaller lifetime. We prove the existence of a Nash equilibrium for the game. Our simulation results indicate that the CTCA algorithm extends the life of a network by more than 50% compared to the best previously known algorithm.

【Keywords】: ad hoc networks; cooperative communication; game theory; telecommunication control; telecommunication network reliability; telecommunication network topology; CTCA algorithm; Nash equilibrium; cooperative topology control algorithm; cooperative topology control with adaptation algorithm; energy conservation; energy consumption; energy cost; game-theoretic approach; networks lifetime maximization; ordinal potential game; power transmission; wireless ad hoc network; wireless multihop network; Positron emission tomography

31. Multicast capacity in mobile wireless ad hoc network with infrastructure support.

【Paper Link】 【Pages】:271-279

【Authors】: Xi Chen ; Wentao Huang ; Xinbing Wang ; Xiaojun Lin

【Abstract】: We study the multicast capacity under a network model featuring both node mobility and infrastructure support. Combinations of mobility and infrastructure, as well as of multicast transmission and infrastructure, have already been shown to be effective ways to increase capacity. In this work, we jointly consider the impact of these three factors on network capacity. We assume that m static base stations and n mobile users are placed in an ad hoc network whose area scales with n as f²(n). A general mobility model is adopted, such that each user moves within a bounded distance from its home point with an arbitrary pattern. In addition, each mobile node serves as the source of a multicast transmission, which results in a total of n multicast transmissions. We focus on the situations in which base stations actually benefit the capacity, and prove that the multicast capacity of a mobile hybrid network falls into three regimes. For each regime, matching upper and lower bounds are derived.

【Keywords】: mobile ad hoc networks; mobility management (mobile radio); multicast communication; general mobility model; infrastructure support; mobile hybrid network; mobile users; mobile wireless ad hoc network; multicast capacity; multicast transmission; network capacity; network model; node mobility; static base stations; upper-bound-lower-bound matching; Ad hoc networks; Bandwidth; Mobile communication; Mobile computing; Routing; Upper bound; Wireless communication; MANETs; Multicast capacity; hybrid networks; mobility

32. Resource allocation in load-constrained multihop wireless networks.

【Paper Link】 【Pages】:280-288

【Authors】: Xi Fang ; Dejun Yang ; Guoliang Xue

【Abstract】: In this paper, we study the influence of network entity load constraints on network resource allocation. We focus on the problem of allocating network resources to optimize the total utility of multiple users in a wireless network, taking into account four resource and social requirements: 1) user QoS rate constraints, 2) node max-load constraints, 3) node load balance constraints, and 4) node-user load constraints. We formulate this problem as a programming system. In order to solve this programming system, we first propose an optimization framework, called the α-approximation dual subgradient algorithm, which may be applicable to many networking optimization problems. Given an approximation/optimal algorithm for solving the subproblem at each iteration, the framework leads to a result that can provide the following bounds at each iteration: 1) the bounds on the Lagrangian multipliers; 2) the bound on the amount of feasibility violation of the generated primal solutions; and 3) the upper and lower bounds on the gap between the optimal solution and the generated primal solutions. Based on this framework, we then present a distributed iterative algorithm to solve the network resource allocation problem. At each iteration, we provide bounds on the amount of feasibility violation, the gap between our solution and the optimal solution, node queue lengths, user utility deficits, and node load violation ratios.

【Keywords】: approximation theory; iterative methods; optimisation; quality of service; queueing theory; radio networks; α-approximation dual subgradient algorithm; Lagrangian multipliers; approximation-optimal algorithm; distributed iterative algorithm; generated primal solutions; load-constrained multihop wireless networks; network entity load constraints; network resource allocation; networking optimization problems; node load balance constraints; node load violation ratios; node max-load constraints; node queue lengths; node-user load constraints; optimization framework; programming system; user QoS rate constraints; user utility deficits; Approximation algorithms; Interference; Optimization; Quality of service; Resource management; Vectors; Wireless networks

33. What details are needed for wireless simulations? - A study of a site-specific indoor wireless model.

【Paper Link】 【Pages】:289-297

【Authors】: Mustafa Al-Bado ; Cigdem Sengul ; Ruben Merz

【Abstract】: The wireless networking community continuously questions the accuracy and validity of simulation-based performance evaluations. The main reason is the lack of widely accepted models that represent real wireless characteristics, especially at the physical (PHY) layer. Hence, the trend in wireless networking is to rely more and more on testbeds, which on one hand bring more realism to network and protocol evaluation, but on the other hand present a high implementation barrier before an idea is ready to be tested. Therefore, realistic network simulators are still very much needed to reduce the time and effort for “concept testing” of novel ideas. In this case, the main question is how detailed wireless simulators should be to evaluate network and protocol performance. In this paper, we attempt a first answer to this question by using the Berlin Open Wireless Lab (BOWL) indoor model (BIM) in the ns-3 simulator. BIM includes several measurement-based models to characterize wireless communication, such as frame detection ratio (FDR), frame error ratio (FER), capture and interference models. Through extensive measurements, we analyze the accuracy that we obtain with these PHY-layer models. Our experiments also show whether the detailed models at the PHY layer play an important role in representing transport layer performance in simulations.

【Keywords】: discrete event simulation; indoor radio; telecommunication computing; BOWL BIM; Berlin Open Wireless Lab; FDR; FER; PHY-layer models; capture-interference models; concept testing; extensive measurements; frame detection ratio; frame error ratio; measurement-based models; network performance evaluation; ns-3 simulator; physical layer; protocol performance evaluation; real-wireless characteristics; realistic network simulators; simulation-based performance evaluations; site-specific indoor wireless model; transport layer performance; widely-accepted models; wireless communication; wireless networking community; wireless simulations; Accuracy; Databases; Interference; Load modeling; Receivers; Wireless networks

34. TurfCast: A service for controlling information dissemination in wireless networks.

【Paper Link】 【Pages】:298-306

【Authors】: Xinfeng Li ; Jin Teng ; Boying Zhang ; Adam C. Champion ; Dong Xuan

【Abstract】: Recent years have witnessed mass proliferation of mobile devices with rich wireless communication capabilities as well as emerging mobile device based information dissemination applications that leverage these capabilities. This paper proposes TurfCast, a novel information dissemination service that selectively broadcasts information in particular “turfs,” abstract logical spaces in which receivers are situated. Such turfs can be temporal or spatial based on receivers' lingering time or physical areas, respectively. TurfCast has many applications such as electronic proximity advertising and mobile social networking. To enable TurfCast, we propose two supporting technologies: TurfCode and TurfBurst. TurfCode is a nested 0-1 fountain code that enables the broadcaster to transmit either all information or none at all to receivers. TurfBurst exploits the Shannon bound to differentiate among receivers: those who cannot receive information fast enough receive none at all, even if they linger near the broadcaster. We implement TurfCast on real-world devices and conduct experiments in both indoor and outdoor environments. Our experimental results illustrate TurfCast's potential for controlling information dissemination in wireless networks.
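
TurfCode itself is a nested 0-1 fountain code whose construction is specific to the paper; as generic background only, the all-or-nothing flavor of rateless coding it builds on can be illustrated with a minimal LT-style fountain sketch (the function names and the uniform degree choice below are illustrative assumptions, not the paper's design):

```python
import random

def fountain_encode(blocks, n_symbols, rng):
    """Emit rateless symbols: each is the XOR of a random subset of
    source blocks, tagged with the indices of that subset."""
    k = len(blocks)
    symbols = []
    for _ in range(n_symbols):
        degree = rng.randint(1, k)               # toy degree distribution
        idx = rng.sample(range(k), degree)
        value = 0
        for i in idx:
            value ^= blocks[i]
        symbols.append((frozenset(idx), value))
    return symbols

def fountain_decode(symbols, k):
    """Peeling decoder: resolve degree-1 symbols, substitute recovered
    blocks into the remaining symbols, repeat until all k are known."""
    known, pending = {}, [(set(i), v) for i, v in symbols]
    changed = True
    while changed and len(known) < k:
        changed = False
        remaining = []
        for idx, val in pending:
            for i in list(idx):                  # substitute known blocks
                if i in known:
                    idx.discard(i)
                    val ^= known[i]
            if len(idx) == 1:
                known[idx.pop()] = val
                changed = True
            elif idx:
                remaining.append((idx, val))
        pending = remaining
    return [known[i] for i in range(k)] if len(known) == k else None
```

A receiver that gathers enough symbols recovers every block; one that gathers too few recovers, with high probability, none of them — the all-or-nothing property the abstract attributes to TurfCode.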

【Keywords】: codes; information dissemination; mobile handsets; Shannon bound; TurfBurst; TurfCast; TurfCode; electronic proximity advertising; fountain code; indoor environment; information dissemination control; mobile devices; mobile social networking; outdoor environment; wireless networks; Broadcasting; Decoding; Encoding; Mobile handsets; Receivers; Signal to noise ratio; Wireless networks

35. A low-cost channel scheduling design for multi-hop handoff delay reduction in internet-based wireless mesh networks.

【Paper Link】 【Pages】:307-315

【Authors】: Haopeng Li ; Jiang Xie

【Abstract】: Seamless handoff support is an essential issue to ensure continuous communications in multi-hop wireless mesh networks (WMNs). Due to the multi-hop transmission of network-layer handoff signaling packets, the handoff performance in WMNs can be largely degraded by the long queueing delay and medium access delay at each mesh router, especially when the backbone traffic volume is high. However, this issue is ignored in existing handoff solutions and multi-channel allocation schemes. In this paper, we address the seamless handoff support from a different perspective and propose a novel channel allocation design in hierarchical WMNs to reduce the queueing delay and medium access delay of handoff signaling packets over multihop wireless links and to improve the average channel utilization simultaneously. Both analytical and OPNET simulation results show that the performance of the average channel utilization and total handoff delay can be improved significantly using our proposed channel allocation scheme under various scenarios, as compared to other existing channel allocation and handoff solutions in multi-hop WMNs.

【Keywords】: Internet; channel allocation; scheduling; wireless mesh networks; Internet-based wireless mesh networks; channel allocation design; hierarchical WMN; low-cost channel scheduling design; medium access delay; multichannel allocation; multihop handoff delay reduction; multihop transmission; multihop wireless links; multihop wireless mesh networks; network-layer handoff signaling packets; queueing delay; seamless handoff support; Channel allocation; Delay; Directional antennas; Internet; Spread spectrum communication; Wireless communication

36. Dynamic index coding for wireless broadcast networks.

【Paper Link】 【Pages】:316-324

【Authors】: Michael J. Neely ; Arash Saber Tehrani ; Zhen Zhang

【Abstract】: We consider a wireless broadcast station that transmits packets to multiple users. The packet requests for each user may overlap, and some users may already have certain packets. This presents a problem of broadcasting in the presence of side information, and is a generalization of the well known (and unsolved) index coding problem of information theory. Rather than achieving the full capacity region, we develop a code-constrained capacity region, which restricts attention to a pre-specified set of coding actions. We develop a dynamic max-weight algorithm that allows for random packet arrivals and supports any traffic inside the code-constrained capacity region. Further, we provide a simple set of codes based on cycles in the underlying demand graph. We show these codes are optimal for a class of broadcast relay problems.
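
The paper's cycle-based codes operate on the demand graph; the core mechanism they exploit can be shown with a toy sketch (illustrative, not the paper's algorithm): XOR-ing the packets along a demand cycle lets every user on the cycle, who already holds the other packets as side information, recover its wanted packet from a single broadcast.

```python
def cycle_transmission(packets, cycle):
    """Broadcast one coded packet: the XOR of every packet on the cycle."""
    coded = 0
    for name in cycle:
        coded ^= packets[name]
    return coded

def user_decode(coded, side_info_values):
    """A user XORs out the packets it already holds, leaving only the
    packet it requested."""
    for v in side_info_values:
        coded ^= v
    return coded
```

One coded transmission thus serves every user on the cycle, instead of one uncoded transmission per requested packet.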

【Keywords】: encoding; radio networks; broadcast relay problems; code-constrained capacity region; demand graph; dynamic index coding; dynamic max-weight algorithm; index coding problem; information theory; random packet arrivals; side information; wireless broadcast networks; wireless broadcast station; Encoding; Heuristic algorithms; Indexes; Queueing analysis; Relays; Vectors; Wireless communication

37. On benefits of network coding in bidirected networks and hyper-networks.

【Paper Link】 【Pages】:325-333

【Authors】: Xunrui Yin ; Xin Wang ; Jin Zhao ; Xiangyang Xue ; Zongpeng Li

【Abstract】: Network coding is a technique that allows information flows to be encoded while routed across a data network. It was shown that network coding helps increase the throughput and reduce the cost of data transmission, especially for one-to-many multicast applications. An important direction in network coding research is to understand and quantify the coding advantage and cost advantage, i.e., the potential benefits of network coding, as compared to routing, in terms of increasing throughput and reducing transmission cost, respectively. Two classic network models were considered in previous studies of coding advantage: directed networks and undirected networks. The study of coding advantage in this work further focuses on two types of parameterized networks, bidirected networks and hyper-networks, which generalize the directed and the undirected network models, respectively. With proper parameter settings, more realistic modeling of networks in practice can be achieved. We prove upper and lower bounds on the coding advantage for multicast in these models. Some of our bounds are entirely new, some improve upon previously proven bounds, and some answer open questions in the literature.

【Keywords】: multicast communication; network coding; telecommunication network routing; bidirected networks; coding advantage; data network routing; data transmission; hyper-network; information flow; network coding; network model; network throughput; one-to-many multicast application; parameterized network; transmission cost reduction; undirected network; Encoding; Internet; Network coding; Receivers; Relays; Routing; Throughput

38. Cooperative multicasting in network-coding enabled multi-rate wireless relay networks.

【Paper Link】 【Pages】:334-342

【Authors】: Hsiao-Chen Lu ; Wanjiun Liao

【Abstract】: Network coding has been broadly applied to improve the efficiency of wireless multicast. In this paper, we consider the multicast process in modern relay-assisted wireless communication systems such as the IEEE 802.16j and the LTE-Advanced networks, where the relay stations can cooperatively forward network-coded packets to the subscriber stations using different transmission rates. We show that under such multi-rate environments, previous solutions which seek to minimize the packet forwarding counts may lead to longer multicast delay. To solve this problem, in this work, we aim at finding a minimal delay transmission schedule of the relay stations under multi-rate considerations. We first show that this problem is NP-hard. Then we use a Markov decision process to model the relay station re-transmission process. Via this model, we derive the formulations for optimal re-transmission strategies as well as optimal re-transmission delays. Moreover, based on the recursive structure of the re-transmission delays derived from the model, we propose a dynamic programming algorithm which can solve optimal re-transmission strategies for the system. For complexity considerations, we also propose two light-weight on-line re-transmission heuristics. Simulation results show that the Markov decision process can accurately characterize the relay re-transmission process in network-coding-enabled wireless relay networks, and that minimal multicast delay can be achieved by dynamic programming-based relay re-transmissions. Moreover, simulation results suggest that the two heuristics may be suited to different scenarios, and both can achieve near-optimal performance efficiently.

【Keywords】: Markov processes; computational complexity; dynamic programming; multicast communication; network coding; radio networks; IEEE 802.16j; LTE-advanced networks; Markov decision process; NP-hard problem; cooperative multicasting; dynamic programming; minimal delay transmission schedule; multicast delay; multirate wireless relay networks; network coding; optimal retransmission delays; optimal retransmission strategies; packet forwarding; recursive structure; relay station retransmission process; relay-assisted wireless communication systems; wireless multicast; Delay; Dynamic programming; Markov processes; Network coding; Relays; Wireless networks; MBMS; Network coding; cooperative communications; multi-rate networks; wireless relay networks

39. On detecting pollution attacks in inter-session network coding.

【Paper Link】 【Pages】:343-351

【Authors】: Anh Le ; Athina Markopoulou

【Abstract】: Dealing with pollution attacks in inter-session network coding is challenging due to the fact that sources, in addition to intermediate nodes, can be malicious. In this work, we first define precisely corrupted packets in inter-session pollution based on the commitment of the source packets. We then propose three detection schemes: one hash-based and two MAC-based schemes: InterMacCPK and SpaceMacPM. InterMacCPK is the first multi-source homomorphic MAC scheme that supports multiple keys. Both MAC schemes can replace traditional MACs, e.g., HMAC, in networks that employ inter-session coding. All three schemes provide in-network detection, are collusion-resistant, and have very low online bandwidth and computation overhead.

【Keywords】: access protocols; cryptography; network coding; pollution; InterMacCPK; MAC-based scheme; SpaceMacPM; collusion resistance; cryptographic primitives; hash-based scheme; in-network detection; inter-session coding; inter-session network coding; inter-session pollution; multisource homomorphic MAC scheme; pollution attack detection; Encoding; Games; Network coding; Pollution; Security; Silicon; Vectors

40. Optimal routing and scheduling for a simple network coding scheme.

【Paper Link】 【Pages】:352-360

【Authors】: Nathaniel M. Jones ; Brooke Shrader ; Eytan Modiano

【Abstract】: We consider jointly optimal routing, scheduling, and network coding strategies to maximize throughput in wireless networks. While routing and scheduling techniques for wireless networks have been studied for decades, network coding is a relatively new technique that allows for an increase in throughput under certain topological and routing conditions. In this work we introduce k-tuple coding, a generalization of pairwise coding with next-hop decodability, and fully characterize the region of arrival rates for which the network queues can be stabilized under this coding strategy. We propose a dynamic control policy for routing, scheduling, and k-tuple coding, and prove that our policy is throughput optimal subject to the k-tuple coding constraint. We provide analytical bounds on the coding gain of our policy, and present numerical results to support our analytical findings. We show that most of the gains are achieved with pairwise coding, and that the coding gain is greater under 2-hop than 1-hop interference. Simulations show that under 2-hop interference our policy yields median throughput gains of 31% beyond optimal scheduling and routing on random topologies with 16 nodes.
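
The classic special case of pairwise coding with next-hop decodability is the two-way relay: two endpoints exchange packets through a relay, which broadcasts the XOR of the two packets instead of forwarding each one separately. A minimal sketch (the slot counts and packet values are illustrative, not taken from the paper):

```python
def relay_exchange_plain(a_pkt, b_pkt):
    """Uncoded exchange: 4 transmissions (2 uplink, 2 downlink)."""
    slots = 4
    return a_pkt, b_pkt, slots

def relay_exchange_coded(a_pkt, b_pkt):
    """Coded exchange: 2 uplink transmissions plus one XOR broadcast.
    Each endpoint decodes using its own packet as side information."""
    coded = a_pkt ^ b_pkt          # the relay broadcasts this once
    at_bob = coded ^ b_pkt         # Bob recovers Alice's packet
    at_alice = coded ^ a_pkt       # Alice recovers Bob's packet
    slots = 3
    return at_bob, at_alice, slots
```

Three slots instead of four: a 25% reduction in transmissions for this pair, which is the kind of pairwise coding gain the abstract reports dominating in practice.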

【Keywords】: network coding; radio networks; scheduling; telecommunication network routing; 1-hop interference; 2-hop interference; coding gain; dynamic control policy; k-tuple coding constraint; network coding strategy; network queues; next-hop decodability; optimal routing condition; optimal scheduling; pairwise coding; random topology; simple network coding scheme; wireless networks; Encoding; Interference; Network coding; Routing; Schedules; Throughput; Wireless networks

41. Robust multi-pipeline scheduling in low-duty-cycle wireless sensor networks.

【Paper Link】 【Pages】:361-369

【Authors】: Yongle Cao ; Shuo Guo ; Tian He

【Abstract】: Data collection is one of the major traffic patterns in wireless sensor networks: regular source nodes send data packets to a common sink node within a limited end-to-end delay. However, the sleep latency introduced by duty-cycled operation significantly increases delivery latency. To reduce unnecessary forwarding interruption, the state of the art has proposed a pipeline scheduling technique that allocates sequential wakeup time slots along the forwarding path. We experimentally show that the previously proposed pipeline is fragile and ineffective in practice when wireless communication links are unreliable. To overcome these challenges and improve delivery latency, we propose the Robust Multi-pipeline Scheduling (RMS) algorithm, which coordinates multiple parallel pipelines and switches a packet among pipelines in a timely manner when earlier transmission attempts fail. RMS combines the pipeline features with the advantages of multi-parent forwarding. Large-scale simulations and testbed implementations verify that end-to-end delivery latency can be reduced by 40% by exploiting multi-pipeline scheduled forwarding paths, with tolerable energy overhead.
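
Pipeline scheduling staggers wakeup slots along the forwarding path so a packet advances one hop per slot instead of waiting a full wakeup period at every hop. A minimal single-pipeline sketch (the slot assignment and the latency model are illustrative assumptions, not the paper's RMS algorithm):

```python
def pipeline_slots(path, period):
    """Assign each node on the path a wakeup slot equal to its hop
    distance from the source, modulo the duty-cycle period."""
    return {node: hop % period for hop, node in enumerate(path)}

def delivery_latency(path, period, pipelined):
    """Slots until the packet reaches the sink: one slot per hop when
    pipelined, up to a full period per hop otherwise (worst case)."""
    hops = len(path) - 1
    return hops if pipelined else hops * period
```

A single lost transmission breaks this lock-step schedule, which is the fragility the paper addresses by maintaining multiple parallel pipelines.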

【Keywords】: radio links; scheduling; wireless sensor networks; RMS algorithm; data collection; data packets; duty cycling mode; end-to-end delay; end-to-end delivery latency; low-duty-cycle wireless sensor networks; multiparent forwarding; multiple-parallel pipelines; packet switching; robust multipipeline scheduling; sequential wakeup time slots; sink node; sleep latency; source nodes; wireless communication links; Delay; Network topology; Pipelines; Schedules; Scheduling; Switches; Wireless sensor networks

42. On capacity of magnetic induction-based wireless underground sensor networks.

【Paper Link】 【Pages】:370-378

【Authors】: Zhi Sun ; Ian F. Akyildiz

【Abstract】: Magnetic induction (MI)-based wireless underground sensor networks (WUSNs) use the novel MI waveguide technique to establish long-range and low-cost wireless communications in harsh underground environments, enabling a large variety of novel and important applications. One of the main research challenges is the theoretical study of the channel and network capacities of these networks. Compared to traditional wireless networks, both the channel and network capacities of MI-based WUSNs have significantly different characteristics due to the completely different signal propagation techniques and network geometric structure. Moreover, the use of multiple resonant MI relay coils in MI-based WUSNs raises additional reliability concerns. In this paper, mathematical models are developed to evaluate the channel capacity, network capacity, and reliability of MI-based WUSNs. Specifically, the closed-form expression for the channel capacity in MI-based WUSNs is first derived to capture the effects of multiple system parameters. Then the network capacity scaling laws of MI-based WUSNs are investigated under different deployment strategies. Finally, the system reliability of MI-based WUSNs in terms of the channel capacity and network capacity is discussed. The results of this paper provide principles and guidelines for the design and deployment of MI-based WUSNs.
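
The paper's closed-form MI capacity depends on coil and soil parameters not reproduced here; as generic background, any such derivation bottoms out in the Shannon capacity C = B·log2(1 + SNR), evaluated at the (typically narrow) MI bandwidth. A sketch with illustrative numbers:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon capacity C = B * log2(1 + SNR), in bits per second."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

def db_to_linear(db):
    """Convert a decibel SNR to a linear power ratio."""
    return 10.0 ** (db / 10.0)
```

For example, a hypothetical 1 kHz MI band at 20 dB SNR yields roughly 6.7 kbit/s, illustrating why MI links trade capacity for range and reliability underground.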

【Keywords】: channel capacity; coils; electromagnetic induction; electromagnetic wave propagation; telecommunication network reliability; underground communication; wireless sensor networks; MI waveguide technique; MI-based WUSN; channel capacity; deployment strategy; long range wireless communication; low cost wireless communication; magnetic induction-based wireless underground sensor network; mathematical model; multiple resonant MI relay coils; network capacity scaling law; network geometric structure; network reliability; signal propagation technique; system parameter; system reliability; underground environment; Bandwidth; Channel capacity; Coils; Relays; Reliability; Resistance; Wireless communication

43. A simple asymptotically optimal energy allocation and routing scheme in rechargeable sensor networks.

【Paper Link】 【Pages】:379-387

【Authors】: Shengbo Chen ; Prasun Sinha ; Ness B. Shroff ; Changhee Joo

【Abstract】: In this paper, we investigate the utility maximization problem for a sensor network with energy replenishment. Each sensor node consumes energy in its battery to generate and deliver data to its destination via multi-hop communications. Although the battery can be replenished from renewable energy sources, the energy allocation should be carefully designed in order to maximize system performance, especially when the replenishment profile is unknown in advance. In this paper, we address the joint problem of energy allocation and routing to maximize the total system utility, without prior knowledge of the replenishment profile. We first characterize the optimal throughput of a single node under a general replenishment profile, and then extend our idea to the multi-hop network case. After characterizing the optimal network utility with an upper bound, we develop a low-complexity online solution that achieves asymptotic optimality. By focusing on long-term system performance, we can greatly simplify computational complexity while maintaining high performance. We also show that our solution can be approximated by a distributed algorithm using standard optimization techniques. Through simulations with replenishment profile traces for solar and wind energy, we numerically evaluate our solution, which outperforms a state-of-the-art scheme based on the Lyapunov optimization technique.

【Keywords】: Lyapunov methods; computational complexity; optimisation; renewable energy sources; telecommunication network routing; wireless sensor networks; Lyapunov optimization technique; asymptotically optimal energy allocation; computational complexity; energy replenishment; general replenishment profile; multi-hop communications; rechargeable sensor networks; renewable energy sources; routing scheme; system performance; Batteries; Distributed algorithms; Optimization; Resource management; Routing; Throughput; Upper bound

44. Exploiting prediction to enable Secure and Reliable routing in Wireless Body Area Networks.

【Paper Link】 【Pages】:388-396

【Authors】: Xiaohui Liang ; Xu Li ; Qinghua Shen ; Rongxing Lu ; Xiaodong Lin ; Xuemin Shen ; Weihua Zhuang

【Abstract】: In this paper, we propose a distributed Prediction-based Secure and Reliable routing framework (PSR) for emerging Wireless Body Area Networks (WBANs). It can be integrated with a specific routing protocol to improve the latter's reliability and prevent data injection attacks during data communication. In PSR, using past link quality measurements, each node predicts the quality of every incidental link, and thus any change in its neighbor set, for the immediate future. When there are multiple possible next hops for packet forwarding (according to the routing protocol used), PSR selects the one with the highest predicted link quality. Specially tailored lightweight source and data authentication methods are employed by nodes to secure data communication. Further, each node adaptively enables or disables source authentication according to the predicted neighbor set change and prediction accuracy, so as to quickly filter false source authentication requests. We demonstrate through in-depth security analysis and extensive simulation study that PSR significantly increases routing reliability and effectively resists data injection attacks.
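
The abstract does not specify PSR's prediction model, so the sketch below substitutes a simple exponentially weighted moving average as an illustrative stand-in (the alpha value and function names are assumptions, not the paper's predictor); it shows the shape of "predict each link's quality, then forward via the best predicted next hop":

```python
def ewma_predict(history, alpha=0.5):
    """Predict the next link-quality sample as an exponentially
    weighted moving average of past measurements."""
    if not history:
        raise ValueError("need at least one measurement")
    est = history[0]
    for sample in history[1:]:
        est = alpha * sample + (1.0 - alpha) * est
    return est

def pick_next_hop(candidates, histories, alpha=0.5):
    """Among the possible next hops, pick the one with the highest
    predicted link quality."""
    return max(candidates, key=lambda n: ewma_predict(histories[n], alpha))
```

A node whose recent samples trend downward is avoided even if its older measurements were good, which is the behavior any reasonable link-quality predictor should exhibit.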

【Keywords】: body area networks; radio links; routing protocols; telecommunication network reliability; telecommunication security; data authentication; data injection attacks; distributed prediction; link quality measurements; packet forwarding; reliable routing; routing protocol; secure data communication; secure routing; source authentication; wireless body area networks; Artificial neural networks; Jamming; Reliability; Wireless body area networks; data injection attacks; prediction; reliability; routing; security

45. Application-aware MIMO Video Rate Adaptation.

【Paper Link】 【Pages】:397-405

【Authors】: Sobia Jangsher ; Syed Ali Khayam ; Qasim M. Chaudhari

【Abstract】: High data rates and multiple channel profiles of a MIMO system are naturally well-suited to carry video content. However, video communication schemes that exploit these desirable properties of MIMO systems remain largely unexplored. Even the most sophisticated MIMO rate adaptation methods rely on channel BER, which decreases monotonically as the channel quality improves. However, video quality, for which PSNR is a better measure than BER, does not show a proportional increase with improved channel quality because of error concealment. In this paper, we present a novel application-aware rate adaptation method which can detect variations in a MIMO channel at a video receiver, quantify the impact of these variations on the received video quality, and adaptively select a transmission profile (consisting of modulation, coding, and MIMO mode) to provide unprecedented improvements in video quality. The proposed application-aware MIMO Video Rate Adaptation (MVRA) method relies on a comprehensive model of source- and channel-induced distortions in video quality. Using this model, a MIMO receiver can select an appropriate transmission profile on a per-GOP basis. Trace-driven evaluations over an 802.11n channel show that the proposed MVRA method's PSNR is very close to the PSNR of an optimal RA scheme. Comparison with state-of-the-art MIMO RA methods shows that the proposed MVRA method provides consistently better PSNR, with an average improvement of 19.29% (5.5486 dB) over the best existing MIMO RA scheme.
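
The abstract's point that PSNR, not BER, tracks perceived video quality rests on the standard definition PSNR = 10·log10(MAX²/MSE), with MAX = 255 for 8-bit video. A minimal sketch of the metric itself (illustrative, not the paper's distortion model):

```python
import math

def psnr(reference, reconstructed, max_value=255):
    """Peak signal-to-noise ratio between two equal-length pixel lists."""
    if len(reference) != len(reconstructed):
        raise ValueError("frames must have the same size")
    mse = sum((a - b) ** 2 for a, b in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return math.inf                 # identical frames
    return 10.0 * math.log10(max_value ** 2 / mse)
```

Error concealment keeps MSE, and hence PSNR, nearly flat even as BER rises, which is exactly the BER/PSNR mismatch that MVRA exploits.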

【Keywords】: MIMO communication; multimedia communication; video coding; video communication; wireless channels; 802.11n channel; MIMO RA methods; MIMO channel; MIMO systems; MVRA method; PSNR; application-aware MIMO video rate adaptation; channel BER; channel quality; channel-induced distortion; comprehensive model; error concealment; optimal RA scheme; per-GOP basis; source-induced distortion; transmission profile; video communication scheme; video content; video quality; video receiver; Adaptation models; Encoding; MIMO; PSNR; Quantization; Receivers; Video sequences; MIMO video communication; Rate adaptation; Source and channel distortion; Wireless video communication

46. On the effect of channel fading on greedy scheduling.

【Paper Link】 【Pages】:406-414

【Authors】: Aneesh Reddy ; Sujay Sanghavi ; Sanjay Shakkottai

【Abstract】: Greedy Maximal Scheduling (GMS) is an attractive low-complexity scheme for scheduling in wireless networks. Recent work has characterized its throughput for the case when there is no fading/channel variation. This paper aims to understand the effect of channel variations on the relative throughput performance of GMS vis-à-vis that of an optimal scheduler facing the same fading. The effect is not a priori obvious because, on the one hand, fading could help by decoupling/precluding global states that lead to poor GMS performance, while on the other hand fading adds another degree of freedom in which an event unfavorable to GMS could occur. We show that both these situations can occur when fading is adversarial. In particular, we first define the notion of a Fading Local Pooling factor (F-LPF), and show that it exactly characterizes the throughput of GMS in this setting. We also derive general upper and lower bounds on F-LPF. Using these bounds, we provide two example networks - one where the relative performance of GMS is worse than if there were no fading, and one where it is better.

【Keywords】: channel allocation; fading channels; F-LPF; GMS; channel fading; channel variation; fading local pooling factor; greedy maximal scheduling; low-complexity scheme; relative throughput performance; wireless network; Fading; Interference; Optimal scheduling; Schedules; Throughput; Upper bound; Vectors; Channel Fading; Greedy Maximal Scheduling; Local Pooling factor; Throughput Region

47. Maximizing capacity with power control under physical interference model in duplex mode.

【Paper Link】 【Pages】:415-423

【Authors】: Peng-Jun Wan ; Dechang Chen ; Guojun Dai ; Zhu Wang ; F. Frances Yao

【Abstract】: This paper addresses the joint selection and power assignment of a largest set of given links which can communicate successfully at the same time under the physical interference model in the duplex (i.e. bidirectional) mode. For the special setting in which all nodes have unlimited maximum transmission power, Halldorsson and Mitra [5] developed an approximation algorithm with a huge constant approximation bound. For the general setting in which all nodes have bounded maximum transmission power, the existence of constant approximation algorithm remains open. In this paper, we resolve this open problem by developing an approximation algorithm which not only works for the general setting of bounded maximum transmission power, but also has a much smaller constant approximation bound.

【Keywords】: approximation theory; interference (signal); radio networks; telecommunication power supplies; approximation algorithm; duplex mode; joint selection; physical interference model; power assignment; power control; unlimited maximum transmission power; Algorithm design and analysis; Approximation algorithms; Approximation methods; Interference; Measurement; Power control; Wireless networks; Link scheduling; approximation algorithms; physical interference

48. Squeezing the most out of interference: An optimization framework for joint interference exploitation and avoidance.

【Paper Link】 【Pages】:424-432

【Authors】: Canming Jiang ; Yi Shi ; Y. Thomas Hou ; Wenjing Lou ; Sastry Kompella ; Scott F. Midkiff

【Abstract】: There is a growing interest in exploiting interference (rather than avoiding it) to increase network throughput. In particular, the so-called successive interference cancellation (SIC) scheme appears very promising, due to its ability to enable concurrent receptions from multiple transmitters as well as interference rejection. Although SIC has been extensively studied as a physical layer technology, its research and advances in the context of multi-hop wireless networks remain limited. In this paper, we try to answer the following fundamental questions. What are the limitations of SIC? How to overcome such limitations? How to optimize the interaction between SIC and interference avoidance? How to incorporate multiple layers (physical, link, and network) in an optimization framework? We find that SIC alone is not adequate to handle interference in a multi-hop wireless network, and advocate the use of joint SIC and interference avoidance. To optimize a joint scheme, we propose a cross-layer optimization framework that incorporates variables at the physical, link, and network layers. This is the first work that combines successive interference cancellation and interference avoidance in multi-hop wireless networks. We use numerical results to affirm the validity of our optimization framework and give insights on how SIC and interference avoidance can complement each other in an optimal manner.
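
SIC's core idea can be stated in a few lines: decode the strongest signal treating the rest as noise, subtract its reconstruction, then decode the next. A toy two-transmitter SINR feasibility check (the powers, noise, and threshold are illustrative numbers, not from the paper):

```python
def sic_decodable(powers, noise, threshold):
    """Return True if every signal can be decoded by successive
    cancellation: strongest first, each checked against the residual
    interference of the not-yet-decoded signals plus noise."""
    remaining = sorted(powers, reverse=True)
    while remaining:
        strongest = remaining.pop(0)
        interference = sum(remaining)
        if strongest / (interference + noise) < threshold:
            return False
    return True
```

In the first test case below, the weaker stream would face SINR 1/(4 + 0.5) ≈ 0.22 without cancellation and fail the same threshold; with SIC both streams pass, which is the concurrent-reception ability the abstract highlights. When the two powers are nearly equal, neither can be decoded first, so SIC alone is not enough — hence the paper's joint SIC/avoidance design.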

【Keywords】: optimisation; radio networks; radio transmitters; radiofrequency interference; concurrent reception; cross-layer optimization framework; interference avoidance; interference rejection; joint interference exploitation; multihop wireless network; network throughput; physical layer technology; successive interference cancellation scheme; transmitter; Interference; Lead; Optimization; Time division multiple access; Wireless communication

49. Intra-cloud lightning: Building CDNs in the cloud.

【Paper Link】 【Pages】:433-441

【Authors】: Fangfei Chen ; Katherine Guo ; John Lin ; Thomas F. La Porta

【Abstract】: Content distribution networks (CDNs) using storage clouds have recently started to emerge. Compared to traditional CDNs, storage cloud-based CDNs have the advantage of cost-effectively offering hosting services to Web content providers without owning infrastructure. However, existing work on replica placement in CDNs does not readily apply in the cloud. In this paper, we investigate the joint problem of building distribution paths and placing Web server replicas in cloud CDNs to minimize the cost incurred by the CDN providers while satisfying QoS requirements for user requests. We formulate the cost optimization problem with accurate cost models and QoS requirements and show that the monthly cost can be as low as 2.62 US dollars for a small Web site. We develop a suite of offline, online-static, and online-dynamic heuristic algorithms that take as input network topology and workload information such as user location and request rates. We then evaluate the heuristics via Web trace-based simulation, and show that our heuristics behave very close to optimal under various network conditions.

【Keywords】: cloud computing; storage management; Web content providers; Web site; Web trace-based simulation; content distribution networks; cost optimization problem; hosting services; input network topology; intra-cloud lightning; storage cloud-based CDN; Bandwidth; Heuristic algorithms; Network topology; Quality of service; Servers; Topology; Web sites

50. Measurement and utilization of customer-provided resources for cloud computing.

【Paper Link】 【Pages】:442-450

【Authors】: Haiyang Wang ; Feng Wang ; Jiangchuan Liu ; Justin Groen

【Abstract】: Recent years have witnessed cloud computing emerge as an efficient means of providing resources as a form of utility. Driven by strong demand, industrial leaders such as Amazon, Google, and Microsoft have all offered practical cloud platforms, mostly datacenter-based. These platforms are known to be powerful and cost-effective. Yet, as cloud customers are pure consumers, their local resources, though abundant, have been largely ignored. In this paper, we investigate, for the first time, a novel customer-provided cloud platform, SpotCloud, through extensive measurements. Complementing data centers, SpotCloud enables customers to contribute/sell their private resources to collectively offer cloud services. We find that, although the capacity as well as the availability of this platform is not yet comparable to enterprise datacenters, SpotCloud can provide very flexible services to customers in terms of both performance and pricing. It is friendly to customers who often seek to run short-term and customized tasks at minimum cost. However, different from the standardized enterprise instances, SpotCloud instances are highly diverse, which greatly increases the difficulty of instance selection. To solve this problem, we propose an instance recommendation mechanism for cloud service providers to recommend short-listed instances to customers. Our model analysis and real-world experiments show that it can help customers find the best trade-off between benefit and cost.

【Keywords】: cloud computing; recommender systems; SpotCloud; cloud computing; cloud service providers; cloud services; customer-provided cloud platform; customer-provided resources measurement; customer-provided resources utilization; data centers; instance recommendation mechanism; IEL; Integrated circuits

51. Achieving usable and privacy-assured similarity search over outsourced cloud data.

Paper Link】 【Pages】:451-459

【Authors】: Cong Wang ; Kui Ren ; Shucheng Yu ; Karthik Mahendra Raje Urs

【Abstract】: As the data produced by individuals and enterprises that need to be stored and utilized are rapidly increasing, data owners are motivated to outsource their local complex data management systems into the cloud for its great flexibility and economic savings. However, as sensitive cloud data may have to be encrypted before outsourcing, which obsoletes the traditional data utilization service based on plaintext keyword search, how to enable privacy-assured utilization mechanisms for outsourced cloud data is thus of paramount importance. Considering the large number of on-demand data users and huge amount of outsourced data files in cloud, the problem is particularly challenging, as it is extremely difficult to meet also the practical requirements of performance, system usability, and high-level user searching experiences. In this paper, we investigate the problem of secure and efficient similarity search over outsourced cloud data. Similarity search is a fundamental and powerful tool widely used in plaintext information retrieval, but has not been quite explored in the encrypted data domain. Our mechanism design first exploits a suppressing technique to build storage-efficient similarity keyword set from a given document collection, with edit distance as the similarity metric. Based on that, we then build a private trie-traverse searching index, and show it correctly achieves the defined similarity search functionality with constant search time complexity. We formally prove the privacy-preserving guarantee of the proposed mechanism under rigorous security treatment. To demonstrate the generality of our mechanism and further enrich the application spectrum, we also show our new construction naturally supports fuzzy search, a previously studied notion aiming only to tolerate typos and representation inconsistencies in the user searching input. 
The extensive experiments on Amazon cloud platform with real data set further demonstrate the validity and practicality of the proposed mechanism.
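
The paper's similarity metric is edit distance, built into a storage-efficient similarity keyword set. A minimal sketch of both ideas follows; `wildcard_set` illustrates the standard wildcard trick from fuzzy-search work (the paper's actual suppressing technique and trie-traverse index are more involved, and these helper names are illustrative):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming (Levenshtein) edit distance, one row at a time."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                       # deletion
                        dp[j - 1] + 1,                   # insertion
                        prev + (a[i - 1] != b[j - 1]))   # substitution (or match)
            prev = cur
    return dp[n]

def wildcard_set(word: str) -> set:
    """Wildcard variants covering all edits within distance 1: two keywords are
    within edit distance 1 iff their wildcard sets intersect."""
    variants = {word}
    for i in range(len(word) + 1):
        variants.add(word[:i] + '*' + word[i:])      # insertion position
    for i in range(len(word)):
        variants.add(word[:i] + '*' + word[i + 1:])  # substitution/deletion position
    return variants
```

The wildcard set is what makes storage efficient: a keyword of length L needs only O(L) variants instead of the O(L·|alphabet|) literal misspellings.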

【Keywords】: cloud computing; computational complexity; cryptography; data privacy; document handling; fuzzy set theory; information retrieval; Amazon cloud platform; cloud data outsourcing; data management systems; data utilization service; document collection; edit distance; encryption; fuzzy search; plaintext information retrieval; plaintext keyword search; privacy-assured similarity search; privacy-assured utilization mechanisms; private trie-traverse searching index; sensitive cloud data; similarity metric; similarity search usability; storage-efficient similarity keyword set; suppressing technique; time complexity; Encryption; Indexes; Keyword search; Servers; Usability

52. Quality-assured cloud bandwidth auto-scaling for video-on-demand applications.

Paper Link】 【Pages】:460-468

【Authors】: Di Niu ; Hong Xu ; Baochun Li ; Shuqiao Zhao

【Abstract】: There has been a recent trend that video-on-demand (VoD) providers such as Netflix are leveraging resources from cloud services for multimedia streaming. In this paper, we consider the scenario that a VoD provider can make reservations for bandwidth guarantees from cloud service providers to guarantee the streaming performance in each video channel. We propose a predictive resource auto-scaling system that dynamically books the minimum bandwidth resources from multiple data centers for the VoD provider to match its short-term demand projections. We exploit the anti-correlation between the demands of video channels for statistical multiplexing and for hedging the risk of under-provision. The optimal load direction from channels to data centers is derived with provable performance. We further provide suboptimal solutions that balance bandwidth and storage costs. The system is backed up by a demand predictor that forecasts the demand expectation, volatility and correlations based on learning. Extensive simulations are conducted driven by the workload traces from a commercial VoD system.
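
The statistical-multiplexing argument rests on a simple variance identity: anti-correlated channel demands partially cancel, so less headroom must be reserved for the aggregate. A small sketch (the function names and the fixed safety-margin rule are illustrative assumptions, not the paper's optimal load-direction solution):

```python
import math

def aggregate_std(sigmas, rho):
    """Std dev of the summed demand of two channels with per-channel std devs
    `sigmas` and demand correlation `rho`: Var(X+Y) = s1^2 + s2^2 + 2*rho*s1*s2."""
    s1, s2 = sigmas
    return math.sqrt(s1 ** 2 + s2 ** 2 + 2 * rho * s1 * s2)

def reservation(mean_sum, std_sum, theta=3.0):
    """Bandwidth to book: expected aggregate demand plus a theta-sigma margin
    to hedge the risk of under-provision."""
    return mean_sum + theta * std_sum
```

For two channels with std dev 10 each, rho = -0.8 shrinks the aggregate std dev from about 14.1 to about 6.3, directly reducing the booked bandwidth.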

【Keywords】: cloud computing; media streaming; video on demand; cloud services; commercial VoD system; multimedia streaming; predictive resource auto-scaling system; quality-assured cloud bandwidth auto-scaling; statistical multiplexing; video channels; video-on-demand applications; Aggregates; Bandwidth; Channel estimation; Correlation; Load modeling; Monitoring; Streaming media

53. Inter-Call Mobility model: A spatio-temporal refinement of Call Data Records using a Gaussian mixture model.

Paper Link】 【Pages】:469-477

【Authors】: Michal Ficek ; Lukas Kencl

【Abstract】: With global mobile phone penetration nearing 100%, cellular Call Data Records (CDRs) provide a large-scale and ubiquitous, but also sparse and skewed, snapshot of human mobility. It may be difficult or inappropriate to reach strong conclusions about user movement based on such data without a proper understanding of user movement between call records. Based on an analysis of a real-world trace, we propose a novel, probabilistic Inter-Call Mobility (ICM) model of users' position in between calls. The ICM model combines Gaussian mixtures to build a general, comprehensive spatio-temporal refinement of CDRs. We demonstrate that the ICM model's application yields strikingly different conclusions from the existing models when applied to basic CDR analyses, such as user proximity probability.
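
Evaluating a Gaussian mixture density is the basic building block such a model rests on. A minimal one-dimensional sketch (the ICM model itself is spatio-temporal and fitted to CDR traces; the component values below are made up for illustration):

```python
import math

def gauss_pdf(x, mu, sigma):
    """Density of a single Gaussian component at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x, components):
    """Gaussian mixture density: `components` is a list of (weight, mu, sigma)
    tuples whose weights sum to 1."""
    return sum(w * gauss_pdf(x, mu, s) for w, mu, s in components)
```

With two components, e.g. [(0.6, 0.0, 1.0), (0.4, 5.0, 2.0)], the mixture places most probability mass near the two call locations rather than uniformly in between.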

【Keywords】: Gaussian processes; cellular radio; mobile handsets; mobility management (mobile radio); Gaussian mixture model; cellular call data records; global mobile phone penetration; human mobility; probabilistic intercall mobility model; real-world trace; spatio-temporal refinement; user movement; user proximity probability; Accuracy; Analytical models; Estimation; Mobile communication; Mobile computing; Mobile handsets; Trajectory

54. Expected loss bounds for authentication in constrained channels.

Paper Link】 【Pages】:478-485

【Authors】: Christos Dimitrakakis ; Aikaterini Mitrokotsa ; Serge Vaudenay

【Abstract】: We derive bounds on the expected loss for authentication protocols in channels which are constrained due to noisy conditions and communication costs. This is motivated by a number of authentication protocols, where at least some part of the authentication is performed during a phase, lasting n rounds, with no error correction. This requires assigning an acceptable threshold for the number of detected errors and taking into account the cost of incorrect authentication and of communication. This paper describes a framework enabling an expected loss analysis for all the protocols in this family. Computationally simple methods to obtain nearly optimal values for the threshold, as well as for the number of rounds are suggested and upper bounds on the expected loss, holding uniformly, are given. These bounds are tight, as shown by a matching lower bound. Finally, a method to adaptively select both the number of rounds and the threshold is proposed for a certain class of protocols.
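
The threshold trade-off can be made concrete with binomial tail probabilities: a lax threshold admits adversaries who guess, a strict one rejects legitimate provers hit by noise. A simplified sketch (the cost model and the adversary-guesses-uniformly assumption are illustrative, not the paper's exact loss framework):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

def expected_loss(n, tau, noise, c_false_reject, c_false_accept, c_round):
    """Expected loss of an n-round authentication phase accepting up to `tau`
    errors: legitimate responses flip w.p. `noise`, an adversary guesses
    (error probability 1/2 per round), plus a per-round communication cost."""
    p_reject_legit = 1 - binom_cdf(tau, n, noise)
    p_accept_adv = binom_cdf(tau, n, 0.5)
    return (c_false_reject * p_reject_legit
            + c_false_accept * p_accept_adv
            + c_round * n)

def best_threshold(n, noise, cr, ca, cc):
    """Exhaustively pick the error threshold minimizing expected loss."""
    return min(range(n + 1), key=lambda t: expected_loss(n, t, noise, cr, ca, cc))
```

For n = 20 rounds and 5% channel noise the minimizing threshold sits strictly between the two extremes, mirroring the paper's point that both tau and n must be tuned jointly.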

【Keywords】: message authentication; protocols; authentication protocols; communication costs; constrained channels; expected loss analysis; expected loss bounds; incorrect authentication; noisy conditions; Authentication; Error analysis; Error correction; Noise; Protocols; Radiofrequency identification; Upper bound

55. Location Aware Peak Value Queries in sensor networks.

Paper Link】 【Pages】:486-494

【Authors】: Siyao Cheng ; Jianzhong Li ; Lei Yu

【Abstract】: In the applications of wireless sensor networks, peak values, such as the largest sensed values and their locations, are very useful for detecting abnormal events happening in the monitored region. Although the results returned by traditional top-k queries provide the k largest sensed values, they ignore the spatial correlation of the sensed data, so the locations of the returned values are very close to each other and only indicate a small abnormal area or a small number of abnormal events. For this reason, the Location Aware Peak Value Query, denoted as the LAP-(D,k) query, is proposed in this paper. For any given D and k, the LAP-(D,k) query returns the k largest sensed values and their locations, such that the distance between any two locations is larger than D. The problem of processing the LAP-(D,k) query is proved to be NP-hard, and two distributed approximation algorithms are proposed to solve this problem. One is a distributed greedy algorithm with ratio bound 5.8. The other is a region partition based algorithm with ratio bound 3. The theoretical analysis and experimental results show that the proposed algorithms have high performance in terms of accuracy and energy consumption.
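
The greedy idea is easy to state centrally: repeatedly take the largest remaining reading whose location is farther than D from everything already picked. A minimal sketch (the paper's algorithms run distributed in-network; this centralized toy captures only the selection rule):

```python
import math

def lap_greedy(readings, D, k):
    """Greedy LAP-(D,k) selection: pick up to k largest sensed values whose
    pairwise locations are all more than D apart.
    `readings` is a list of (value, (x, y)) tuples."""
    picked = []
    for value, loc in sorted(readings, key=lambda r: -r[0]):
        if all(math.dist(loc, p[1]) > D for p in picked):
            picked.append((value, loc))
            if len(picked) == k:
                break
    return picked
```

Note how the second-largest reading is skipped when it sits inside the D-neighborhood of the largest one, which is exactly the behavior a plain top-k query lacks.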

【Keywords】: approximation theory; energy consumption; greedy algorithms; wireless sensor networks; LAP-(D,k) query; NP-hard; abnormal event detection; distributed approximation algorithm; distributed greedy algorithm; energy consumption; location aware peak value query; region partition based algorithm; spatial-correlation; wireless sensor network; Algorithm design and analysis; Approximation algorithms; Complexity theory; Greedy algorithms; Partitioning algorithms; Radiation detectors; Wireless sensor networks

56. Traffic clustering and online traffic prediction in vehicle networks: A social influence perspective.

Paper Link】 【Pages】:495-503

【Authors】: Bowu Zhang ; Kai Xing ; Xiuzhen Cheng ; Liusheng Huang ; Rongfang Bie

【Abstract】: In this paper we investigate the dynamic traffic relationship, characterized by a similarity value from one road point to another, in vehicle networks. Due to the regularity of human mobility, traffic exhibits strong correlations in both the temporal and spatial domains. By exploiting the similarity values, we derive application-specific message update rules for affinity propagation, based on which we propose an instant traffic clustering algorithm to partition the road points into time variant clusters, where the traffic within the same cluster is strongly spatially correlated. Online traffic clustering is also considered by clustering combination via evidence accumulation for further influence study. We also present a neural network based traffic prediction algorithm to predict the traffic conditions cluster by cluster for a future time based on the current and historical traffic data. Simulation study on real traffic data demonstrates that our proposed algorithms are able to identify the true influences among road points and provide accurate traffic predictions.

【Keywords】: neural nets; pattern clustering; road traffic; traffic engineering computing; affinity propagation; application-specific message update rule; dynamic traffic relationship; evidence accumulation; human mobility; neural network based traffic prediction algorithm; online traffic clustering algorithm; online traffic prediction; road point; similarity value; spatial domain; temporal domain; time variant cluster; traffic condition; vehicle network; Algorithm design and analysis; Clustering algorithms; Correlation; Partitioning algorithms; Prediction algorithms; Roads; Vehicles

57. Minimizing data collection latency in wireless sensor network with multiple mobile elements.

Paper Link】 【Pages】:504-512

【Authors】: Donghyun Kim ; Baraki H. Abay ; R. N. Uma ; Weili Wu ; Wei Wang ; Alade O. Tokuta

【Abstract】: This paper considers the problem of computing the optimal trajectories of multiple mobile elements (e.g. robots, vehicles, etc.) to minimize data collection latency in wireless sensor networks (WSNs). By relying on slightly different assumptions, we define two interesting problems, the k-traveling salesperson problem with neighborhood (k-TSPN) and the k-rooted path cover problem with neighborhood (k-PCPN). Since both problems are NP-hard, we propose constant factor approximation algorithms for them. Our simulation results indicate that our algorithms outperform their alternatives.
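
The flavor of a k-rooted path cover can be shown with a simple two-stage heuristic: assign each sensor to its closest mobile element, then route each element nearest-neighbor first. This is only an illustrative baseline, not the paper's constant-factor approximation, and it ignores the "neighborhood" (communication-range) aspect of k-PCPN:

```python
import math

def nearest_neighbor_path(start, points):
    """Greedy nearest-neighbor path from `start` through all `points`;
    returns (visit order, total travel distance)."""
    order, total, cur = [], 0.0, start
    remaining = list(points)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(cur, p))
        total += math.dist(cur, nxt)
        order.append(nxt)
        remaining.remove(nxt)
        cur = nxt
    return order, total

def k_rooted_paths(roots, sensors):
    """Assign each sensor to the closest root (mobile element's depot), then
    build one nearest-neighbor collection path per root."""
    groups = {r: [] for r in roots}
    for s in sensors:
        groups[min(roots, key=lambda r: math.dist(r, s))].append(s)
    return {r: nearest_neighbor_path(r, pts) for r, pts in groups.items()}
```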

【Keywords】: computational complexity; travelling salesman problems; wireless sensor networks; NP-hard problem; constant factor approximation; data collection latency; k-rooted path cover problem with neighborhood; k-traveling salesperson problem with neighborhood; multiple mobile elements; wireless sensor network; Approximation algorithms; Approximation methods; Mobile communication; Optimized production technology; Robot sensing systems; Trajectory; Wireless sensor networks; Approximation algorithm; graph theory; k-rooted tree cover problem; mobile elements; traveling salesperson problem with neighborhood; wireless sensor network

58. Percolation and connectivity on the signal to interference ratio graph.

Paper Link】 【Pages】:513-521

【Authors】: Rahul Vaze

【Abstract】: A wireless communication network is considered where any two nodes are connected if the signal-to-interference ratio (SIR) between them is greater than a threshold. Assuming that the nodes of the wireless network are distributed as a Poisson point process (PPP), percolation (formation of an unbounded connected cluster) on the resulting SIR graph is studied as a function of the density of the PPP. It is shown that for a small enough threshold, there exists a closed interval of densities for which percolation happens with non-zero probability. Conversely, it is shown that for a large enough threshold, there exists a closed interval of densities for which the probability of percolation is zero. Connectivity properties of the SIR graph are also studied by restricting all the nodes to lie in a bounded area. Assigning separate frequency bands or time-slots proportional to the logarithm of the number of nodes to different nodes for transmission/reception is shown to be necessary and sufficient for guaranteeing connectivity in the SIR graph.
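
The SIR graph itself is straightforward to construct for a fixed node placement. A minimal sketch with unit transmit power and power-law path loss d^-alpha (no processing gain or fading, which the percolation analysis would add on top):

```python
import math

def sir(tx, rx, nodes, alpha=4.0, noise=0.0):
    """Signal-to-interference(-plus-noise) ratio at `rx` for transmitter `tx`,
    treating every other node in `nodes` as a unit-power interferer."""
    signal = math.dist(tx, rx) ** (-alpha)
    interference = sum(math.dist(n, rx) ** (-alpha)
                       for n in nodes if n not in (tx, rx))
    return signal / (interference + noise)

def sir_edges(nodes, beta, alpha=4.0):
    """Directed edges of the SIR graph: (u, v) present iff SIR(u -> v) > beta."""
    return {(u, v) for u in nodes for v in nodes
            if u != v and sir(u, v, nodes, alpha) > beta}
```

Nearby pairs clear the threshold while distant pairs drown in interference, which is why the edge set (and hence percolation) depends jointly on the threshold and the node density.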

【Keywords】: graph theory; probability; radio networks; radiofrequency interference; stochastic processes; Poisson point process; connectivity property; nonzero probability; percolation probability; signal to interference ratio graph; wireless communication network; Ad hoc networks; Attenuation; Face; Interference; Lattices; Signal to noise ratio; Wireless networks

59. Impact of directional transmission in large-scale multi-hop wireless ad hoc networks.

Paper Link】 【Pages】:522-530

【Authors】: Chi-Kin Chau ; Richard J. Gibbens ; Don Towsley

【Abstract】: In multi-hop wireless networks, per-hop forwarding strategies that optimize local transmissions can have a subtle impact on network performance. Motivated by a number of scenarios for improving signal strength or mitigating interference, we study a fundamental problem that arises in a wireless ad hoc network with directional transmission (e.g., using directional antennas), where nodes are randomly placed with their transmission footprints (each as a sector) aligned toward the destinations. Only the nodes located in the transmission footprint of a transmitter act as forwarders. Our study addresses connectivity of this setting. We first examine through simulation the percolation probability and the number of cross-area paths available to directional transmission, at different spread angles of transmission footprints. We observe that there is a critical spread angle, above which there is little impact on these properties. Analytically, we derive upper and lower bounds for the critical spread angle. Moreover, we show that with high probability there exist at least Ω(n/ log n) number of disjoint paths across a strip area of n × Θ(n), when the critical spread angle lies above the threshold. Our results provide insights on optimizing directional transmission in wireless ad hoc networks.
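
The footprint test underlying this model is plain geometry: a node forwards only if it falls inside the transmitter's sector. A minimal sketch (parameter names are illustrative; the paper studies what happens to connectivity as the spread angle varies):

```python
import math

def in_sector(tx, heading, spread, radius, node):
    """True iff `node` lies inside the transmission footprint of `tx`: a sector
    of angular width `spread` (radians) centred on `heading`, range `radius`."""
    dx, dy = node[0] - tx[0], node[1] - tx[1]
    if math.hypot(dx, dy) > radius:
        return False
    # Signed angular difference normalized to (-pi, pi].
    diff = (math.atan2(dy, dx) - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= spread / 2
```

Shrinking `spread` toward zero thins out the forwarder set, which is exactly the regime where the paper's critical spread angle becomes the binding constraint on cross-area paths.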

【Keywords】: ad hoc networks; communication complexity; directive antennas; interference suppression; probability; telecommunication network routing; critical spread angle; cross-area path; directional antennas; directional transmission; interference mitigation; large-scale wireless ad hoc network; local transmission; multihop wireless ad hoc network; per-hop forwarding strategy; percolation probability; signal strength; transmission footprint; Directional antennas; Lattices; Mobile ad hoc networks; Road transportation; Strips; Transmitters; Wireless networks; Connectivity; Directional Antenna; Directional Forwarding; Oriented Percolation Theory

60. Asymptotic laws for content replication and delivery in wireless networks.

Paper Link】 【Pages】:531-539

【Authors】: Savvas Gitzenis ; Georgios S. Paschos ; Leandros Tassiulas

【Abstract】: A key consideration in novel communication paradigms in multihop wireless networks regards the scalability of the network. We investigate the case of nodes making random requests on content stored in multiple replicas over the wireless network. We show that, in contrast to the conventional paradigm of random communicating pairs, multihop communication is a sustainable scheme for certain values of file popularity, cache and network size. In particular, we formulate the joint problem of replication and routing and compute an order optimal solution. Assuming a Zipf file popularity distribution, we vary the number of files M in the system as a function of the nodes N, let both go to infinity and identify the scaling regimes of the required link capacity, from O(√N) down to O(1).
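
The two inputs to the joint replication-routing problem are easy to sketch: a Zipf popularity law and a popularity-dependent replica count under a total cache budget. The square-root allocation below is a well-known heuristic stand-in, not the paper's order-optimal solution:

```python
import math

def zipf_popularity(M, s=1.0):
    """Zipf popularity over M files: p_i proportional to i^-s, normalized."""
    w = [i ** -s for i in range(1, M + 1)]
    z = sum(w)
    return [x / z for x in w]

def replicas(pop, N, cache_per_node=1):
    """Replica count per file growing with sqrt(popularity), scaled to fit the
    total cache budget of N nodes; every file keeps at least one copy."""
    budget = N * cache_per_node
    raw = [math.sqrt(p) for p in pop]
    scale = budget / sum(raw)
    return [max(1, round(r * scale)) for r in raw]
```

More replicas of popular files shorten the typical fetch distance, which is the mechanism behind the O(sqrt(N))-to-O(1) range of required link capacity in the abstract.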

【Keywords】: radio networks; Zipf file popularity distribution; asymptotic laws; content replication; link capacity; multihop communication; multihop wireless networks; multiple replicas; order optimal solution; random communicating pairs; random request; Approximation methods; Routing; Spread spectrum communication; Throughput; Topology; Wireless networks

61. Distributed Opportunistic Scheduling: A control theoretic approach.

Paper Link】 【Pages】:540-548

【Authors】: Andres Garcia-Saavedra ; Albert Banchs ; Pablo Serrano ; Joerg Widmer

【Abstract】: Distributed Opportunistic Scheduling (DOS) techniques have been recently proposed to improve the throughput performance of wireless networks. With DOS, each station contends for the channel with a certain access probability. If a contention is successful, the station measures the channel conditions and transmits in case the channel quality is above a certain threshold. Otherwise, the station does not use the transmission opportunity, allowing all stations to recontend. A key challenge with DOS is to design a distributed algorithm that optimally adjusts the access probability and the threshold of each station. To address this challenge, in this paper we first compute the configuration of these two parameters that jointly optimizes throughput performance in terms of proportional fairness. Then, we propose an adaptive algorithm based on control theory that converges to the desired point of operation. Finally, we conduct a control theoretic analysis of the algorithm to find a setting for its parameters that provides a good tradeoff between stability and speed of convergence. Simulation results validate the design of the proposed algorithm and confirm its advantages over previous proposals.

【Keywords】: distributed algorithms; probability; radio networks; scheduling; wireless channels; access probability; channel quality; control theoretic approach; distributed algorithm; distributed opportunistic scheduling; throughput performance improvement; wireless channel; wireless network; Adaptive algorithms; Algorithm design and analysis; Control theory; Equations; Stability analysis; Throughput; Transfer functions

62. Use your frequency wisely: Explore frequency domain for channel contention and ACK.

Paper Link】 【Pages】:549-557

【Authors】: Xiaojun Feng ; Jin Zhang ; Qian Zhang ; Bo Li

【Abstract】: The promise of high speed (over 1Gbps) wireless transmission rate at the physical layer can be significantly compromised with the current design of 802.11 DCF. There are three overheads in the 802.11 MAC that contribute to the performance degradation: DIFS, random backoff and ACK. Motivated by the recent progress in OFDM and self-interference cancellation technologies, in this paper, we propose a novel MAC design called REPICK (REversed contention and PIggy-backed ACK) to collectively address all the three overheads. The key idea in our proposal is to take advantage of OFDM subcarriers in the frequency domain to enhance the MAC efficiency. In REPICK, we propose a novel reverse contention algorithm, which enables and facilitates receivers to contend channel in the frequency domain (reversed contention). We also design an efficient mechanism which allows ACKs from receivers to be piggy-backed through subcarriers together with the contention information (piggy-backed ACK). We prove through rigorous analysis that the proposed scheme can substantially reduce the overheads associated with 802.11 DCF and a guaranteed throughput gain can be obtained. In addition, results from extensive simulations demonstrate that REPICK can improve the throughput by up to 170%.
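
One common realization of frequency-domain contention, which this abstract builds on, is that each contender signals on a randomly chosen OFDM subcarrier and the highest occupied subcarrier wins in a single symbol time. A toy sketch (REPICK's actual reversed-contention and ACK piggy-backing details differ; names here are illustrative):

```python
import random

def frequency_contention(stations, n_subcarriers=16, rng=random):
    """One round of frequency-domain contention: every station signals on one
    randomly chosen subcarrier; stations on the highest occupied subcarrier
    win. Ties are possible and would re-contend in a following round."""
    choice = {s: rng.randrange(n_subcarriers) for s in stations}
    top = max(choice.values())
    return [s for s, c in choice.items() if c == top]
```

Resolving contention in one symbol is what eliminates the time-domain random backoff overhead the abstract attributes to 802.11 DCF.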

【Keywords】: OFDM modulation; access protocols; channel allocation; frequency-domain analysis; interference suppression; wireless LAN; 802.11 DCF; 802.11 MAC; DIFS; MAC design; MAC efficiency; OFDM subcarriers; REPICK; channel contention; contention information; frequency domain; guaranteed throughput gain; high speed wireless transmission rate; performance degradation; physical layer; piggy-backed ACK; random backoff; reverse contention algorithm; reversed contention; rigorous analysis; self-interference cancellation technology; Data communication; Frequency domain analysis; IEEE 802.11 Standards; Media Access Protocol; OFDM; Receivers; Throughput

63. RxIP: Monitoring the health of home wireless networks.

Paper Link】 【Pages】:558-566

【Authors】: Justin Manweiler ; Peter Franklin ; Romit Roy Choudhury

【Abstract】: Deploying home access points (AP) is hard. Untrained users typically purchase, install, and configure a home AP with very little awareness of wireless signal coverage and complex interference conditions. We envision a future of autonomous wireless network management that uses the Internet as an enabling technology. By leveraging a P2P architecture over wired Internet connections, nearby APs can coordinate to manage their shared wireless spectrum, especially in the face of network-crippling faults. As a specific instance of this architecture, we build RxIP, a network diagnostic and recovery tool, initially targeted towards hidden terminal mitigation. Our stable, in-kernel implementation demonstrates that APs in real home settings can detect hidden interferers, and agree on a mutually beneficial channel access strategy. Consistent throughput and fairness gains with TCP traffic and in-home micro-mobility confirm the viability of the system. We believe that using RxIP to address other network deficiencies opens a rich area for further research, helping to ensure that smarter homes of the future embed smarter networks. In the near term, with the wireless and entertainment industries poised for home-centric wireless gadgets, RxIP-type home management systems will become increasingly relevant.

【Keywords】: Internet; interference suppression; peer-to-peer computing; telecommunication network management; telecommunication traffic; wireless channels; P2P architecture; RxIP; TCP traffic; autonomous wireless network management; channel access strategy; complex interference condition; home AP; home access point; home management system; home wireless network; home-centric wireless gadgets; in-home micro-mobility; network diagnostic tool; network recovery tool; network-crippling fault; wired Internet connection; wireless signal coverage; wireless spectrum; Arrays; Interference; Internet; Radiation detectors; Synchronization; Throughput; Wireless communication

64. Jointly optimizing multi-user rate adaptation for video transport over wireless systems: Mean-fairness-variability tradeoffs.

Paper Link】 【Pages】:567-575

【Authors】: Vinay Joseph ; Gustavo de Veciana

【Abstract】: User perceived video quality depends on a variety of only partially understood factors, e.g., the application domain, content, compression, transport mechanism, and most importantly the psycho-visual systems determining the ultimate Quality of Experience (QoE) of users. This paper centers on two key observations in addressing the problem of joint rate adaptation for video streams sharing a congested resource. First, we note that a user viewing a given video will experience temporal variations in the dependence of perceived video quality on the compression rate. Intuitively this is due to the possibly changing nature of the content, e.g., from an action scene to a slower one. Thus, in allocating rates to users sharing a congested resource, in particular a wireless system where additional temporal variability in users' capacity may be high, content-dependent tradeoffs can be realized to deliver a better overall average perceived video quality. Second, we note that such adaptation of users' rates may result in temporal variations in video quality which, combined with perceptual hysteresis effects, will degrade users' QoE. We develop an asymptotically optimal online algorithm, requiring minimal statistical information, for optimizing users' QoE by realizing tradeoffs across mean, variance and fairness. Simulations show that our approach achieves significant gains in viewers' QoE. The novelty of this work lies not only in tackling the fundamental problem of achieving fair allocations of perceived video quality across a user population with time varying sensitivities and capacity, but, in addition, in integrating the deleterious impact that variations in perceived quality have on their QoE.

【Keywords】: data compression; radio networks; statistical analysis; video coding; video communication; video streaming; application domain; compression rate; content-changing nature; content-dependent tradeoff; fair allocations; mean-fairness-variability tradeoffs; minimal statistical information; multiuser rate adaptation; optimal online algorithm; perceptual hysteresis effects; psycho-visual systems; quality-of-experience; temporal variations; time-varying sensitivities; transport mechanism; user QoE; user capacity; user-perceived video quality; video stream sharing; video transport; viewer QoE; wireless systems; Convex functions; Equations; Joints; Measurement; Resource management; Streaming media; Wireless communication

65. Evaluating service disciplines for mobile elements in wireless ad hoc sensor networks.

Paper Link】 【Pages】:576-584

【Authors】: Liang He ; Zhe Yang ; Jianping Pan ; Lin Cai ; Jingdong Xu

【Abstract】: The introduction of mobile elements in wireless sensor networks creates a new dimension to reduce and balance the energy consumption for resource-constrained sensor nodes; however, it also introduces extra latency in the data collection process due to the limited mobility of mobile elements. Therefore, how to arrange and schedule the movement of mobile elements throughout the sensing field is of ultimate importance. In this paper, the online scenario where data collection requests arrive progressively is investigated, and the data collection process is modeled as an M/G/1/c-NJN queuing system, where NJN stands for nearest-job-next, a simple and intuitive service discipline. Based on this model, the performance of data collection is evaluated through both theoretical analysis and extensive simulation. The NJN discipline is further extended by considering the possibility of requests combination (NJNC). The simulation results validate our analytical models and give more insights when comparing with the first-come-first-serve (FCFS) discipline. In contrast to the conventional wisdom of the starvation problem, we reveal that NJN and NJNC have a better performance than FCFS, in both the average and more importantly the worst cases, which gives the much needed assurance to adopt NJN and NJNC in the design of more sophisticated data collection schemes for mobile elements in wireless ad hoc sensor networks, as well as many other similar scheduling application scenarios.
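
The NJN-versus-FCFS comparison the abstract analyzes can be illustrated on a static snapshot of pending requests (the paper models the full M/G/1/c-NJN queue with progressive arrivals; this sketch only compares travel distance for a fixed request set):

```python
import math

def tour_length(start, stops):
    """Total travel distance visiting `stops` in the given order from `start`."""
    total, cur = 0.0, start
    for s in stops:
        total += math.dist(cur, s)
        cur = s
    return total

def fcfs_order(requests):
    """First-come-first-serve: serve requests in arrival order."""
    return list(requests)

def njn_order(start, requests):
    """Nearest-job-next: always travel to the closest pending request."""
    order, cur, pending = [], start, list(requests)
    while pending:
        nxt = min(pending, key=lambda p: math.dist(cur, p))
        order.append(nxt)
        pending.remove(nxt)
        cur = nxt
    return order
```

On a line of requests arriving out of spatial order, NJN's tour is far shorter than FCFS's, matching the abstract's finding that NJN beats FCFS on average latency despite the folk worry about starvation.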

【Keywords】: ad hoc networks; energy consumption; mobile radio; queueing theory; wireless sensor networks; M/G/1/c-nearest-job-next queuing system; data collection process; energy consumption; mobile element; requests combination; resource-constrained sensor node; service discipline evaluation; wireless ad hoc sensor network; Analytical models; Data models; Mobile communication; Sensors; Steady-state; Wireless communication; Wireless sensor networks; Wireless ad hoc sensor networks; mobile elements; nearest-job-next; queue-based modeling; service disciplines

66. Di-Sec: A distributed security framework for heterogeneous Wireless Sensor Networks.

Paper Link】 【Pages】:585-593

【Authors】: Marco Valero ; Sang Shin Jung ; A. Selcuk Uluagac ; Yingshu Li ; Raheem A. Beyah

【Abstract】: Wireless Sensor Networks (WSNs) are deployed for monitoring in a range of critical domains (e.g., health care, military, critical infrastructure). Accordingly, these WSNs should be resilient to attacks. The current approach to defending against malicious threats is to develop and deploy a specific defense mechanism for a specific attack. However, the problem with this traditional approach to defending sensor networks is that the solution for the Jamming attack does not defend against other attacks (e.g., Sybil and Selective Forwarding). In reality, one cannot know a priori what type of attack an adversary will launch. This work addresses the challenges with the traditional approach to securing sensor networks and presents a comprehensive framework, Di-Sec, that can defend against all known and forthcoming attacks. At the heart of Di-Sec lies the monitoring core (M-Core), which is an extensible and lightweight layer that gathers statistics relevant for the defense mechanisms. The M-Core allows for the monitoring of both internal and external threats and supports the execution of multiple detection and defense mechanisms (DDMs) against different threats in parallel. Along with Di-Sec, a new user-friendly domain-specific language was developed, the M-Core Control Language (MCL). Using the MCL, a user can implement new defense mechanisms without the overhead of learning the details of the underlying software architecture (i.e., TinyOS, Di-Sec). Hence, the MCL expedites the development of sensor defense mechanisms by significantly simplifying the coding process for developers. The Di-Sec framework has been implemented and tested on real sensors to evaluate its feasibility and performance. Our evaluation of memory, communication, and sensing components shows that Di-Sec is feasible on today's resource-limited sensors and has a nominal overhead. 
Furthermore, we illustrate the basic functionality of Di-Sec by implementing and simultaneously executing DDMs for attacks at various layers of the communication stack (i.e., Jamming, Selective Forwarding, Sybil, and Internal attacks).

【Keywords】: encoding; jamming; telecommunication security; wireless sensor networks; Di-Sec; M-Core control language; Sybil; TinyOS; coding process; communication; critical infrastructure; distributed security framework; health care; internal attacks; jamming attack; memory; military; monitoring core; multiple detection; selective forwarding; sensing components; sensor defense mechanisms; software architecture; wireless sensor networks; Computational modeling; Computer architecture; Jamming; Monitoring; Security; Sensors; Wireless sensor networks; Distributed Security Framework; M-Core Control Language (MCL); Wireless Sensor Network Security

67. A binary-classification-tree based framework for distributed target classification in multimedia sensor networks.

Paper Link】 【Pages】:594-602

【Authors】: Liang Liu ; Anlong Ming ; Huadong Ma ; Xi Zhang

【Abstract】: With rapid improvements and miniaturization in hardware, sensor nodes equipped with acoustic and visual information collection modules promise an unprecedented opportunity for target surveillance applications. This paper investigates a critical task of target surveillance, multi-class classification, in distributed multimedia sensor networks. We first analyze the procedure of target classification utilizing the acoustic and visual information. Then, we propose a binary classification tree based framework for distributed target classification in multimedia sensor networks. The proposed framework includes three main components: Generation of binary classification tree, Division of binary classification tree, and Selection of multimedia sensor nodes. Finally, we conduct an experimental application of target classification and extensive simulations to validate and evaluate our proposed framework and related schemes.
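The tree-generation and routing idea in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation (which uses trained classifiers over acoustic/visual features); the class names, 1-D feature means, and threshold rule below are all hypothetical, chosen only to show how a binary classification tree reduces a K-class decision to a sequence of binary ones.

```python
# Sketch, not the authors' method: each internal node of the binary
# classification tree splits the remaining class set in half and a simple
# 1-D threshold (midway between the groups' nearest means) routes a sample.

def train_node(classes, means):
    """Split the class set by sorted feature mean; return (threshold, left, right)."""
    ordered = sorted(classes, key=lambda c: means[c])
    half = len(ordered) // 2
    left, right = ordered[:half], ordered[half:]
    thr = (means[left[-1]] + means[right[0]]) / 2.0
    return thr, left, right

def classify(x, classes, means):
    """Walk the binary tree until a single class remains."""
    while len(classes) > 1:
        thr, left, right = train_node(classes, means)
        classes = left if x < thr else right
    return classes[0]

# Toy acoustic "feature" means for four hypothetical target classes.
means = {"car": 1.0, "truck": 2.0, "person": 4.0, "animal": 5.0}
print(classify(1.2, list(means), means))
print(classify(4.8, list(means), means))
```

In the paper's distributed setting, different sub-trees of such a structure would be assigned to different sensor nodes; the sketch keeps everything on one node for brevity.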

【Keywords】: multimedia communication; tree searching; binary classification tree based framework; distributed multimedia sensor networks; distributed target classification; multiclass classification; multimedia sensor nodes; target surveillance application; visual information collection modules; Acoustics; Feature extraction; Multimedia communication; Support vector machines; Training; Visualization; Wireless sensor networks; Multimedia sensor network; classification tree; node selection; target classification

68. Data gathering in wireless sensor networks through intelligent compressive sensing.

Paper Link】 【Pages】:603-611

【Authors】: Jin Wang ; Shaojie Tang ; Baocai Yin ; Xiang-Yang Li

【Abstract】: The recently emerged compressive sensing (CS) theory provides a whole new avenue for data gathering in wireless sensor networks with benefits of universal sampling and decentralized encoding. However, existing compressive sensing based data gathering approaches assume the sensed data has a known constant sparsity, ignoring that the sparsity of natural signals vary in temporal and spatial domain. In this paper, we present an adaptive data gathering scheme by compressive sensing for wireless sensor networks. By introducing autoregressive (AR) model into the reconstruction of the sensed data, the local correlation in sensed data is exploited and thus local adaptive sparsity is achieved. The recovered data at the sink is evaluated by utilizing successive reconstructions, the relation between error and measurements. Then the number of measurements is adjusted according to the variation of the sensed data. Furthermore, a novel abnormal readings detection and identification mechanism based on combinational sparsity reconstruction is proposed. Internal error and external event are distinguished by their specific features. We perform extensive testing of our scheme on the real data sets and experimental results validate the efficiency and efficacy of the proposed scheme. Up to about 8dB SNR gain can be achieved over conventional CS based method with moderate increase of complexity.

【Keywords】: autoregressive processes; compressed sensing; encoding; signal detection; signal reconstruction; signal sampling; wireless sensor networks; AR model; CS theory; SNR gain; abnormal readings detection; adaptive data gathering; autoregressive model; combinational sparsity reconstruction; compressive sensing theory; constant sparsity; decentralized encoding; extensive testing; external event; identification mechanism; intelligent compressive sensing; internal error; local adaptive sparsity; local correlation; natural signals; real data sets; recovered data; sensed data reconstruction; spatial domain; temporal domain; universal sampling; wireless sensor networks; Correlation; Current measurement; Data models; Pollution measurement; Temperature measurement; Temperature sensors; Wireless sensor networks

69. Maximizing profit on user-generated content platforms with heterogeneous participants.

Paper Link】 【Pages】:612-620

【Authors】: Shaolei Ren ; Jaeok Park ; Mihaela van der Schaar

【Abstract】: In this paper, we consider a user-generated content platform monetized through advertising and managed by an intermediary. To maximize the intermediary's profit given the rational decision-making of content viewers and heterogeneous content producers, a payment scheme is proposed in which the intermediary can either tax or subsidize the content producers. First, we use a model with a representative content viewer to determine how the content viewers' attention is allocated across available content by solving a utility maximization problem. Then, by modeling the content producers as self-interested agents making independent production decisions, we show that there exists a unique equilibrium in the content production stage, and propose a best-response dynamics to model the decision-making process. Next, we study the intermediary's optimal payment based on decisions made by the representative content viewer and the content producers. In particular, by considering the well-known quality-adjusted Dixit-Stiglitz utility function for the representative content viewer, we derive explicitly the optimal payment maximizing the intermediary's profit and characterize analytical conditions under which the intermediary should tax or subsidize the content producers. Finally, we generalize the analysis by considering heterogeneity in terms of production costs among the content producers.
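The best-response dynamics mentioned in the abstract can be sketched on a toy model. This is an illustrative assumption, not the paper's exact model: here producer i earns a payment p times its attention share q_i/(q_i + Q_-i) minus a linear cost c_i·q_i, which gives the closed-form best response q_i = max(0, sqrt(p·Q_-i/c_i) - Q_-i); the payment p and costs are made-up numbers.

```python
import math

def best_response(p, c_i, q_others):
    # Closed-form maximizer of p*q/(q+Q) - c*q over q >= 0 (toy payoff).
    return max(0.0, math.sqrt(p * q_others / c_i) - q_others)

def best_response_dynamics(p, costs, iters=200):
    """Producers repeatedly best-respond to the others' current production."""
    q = [1.0] * len(costs)             # arbitrary initial production levels
    for _ in range(iters):
        for i, c in enumerate(costs):
            q_others = sum(q) - q[i]
            q[i] = best_response(p, c, q_others)
    return q

# Heterogeneous costs: the high-cost producer is driven out at equilibrium.
q_star = best_response_dynamics(p=10.0, costs=[1.0, 1.0, 2.0])
print([round(v, 3) for v in q_star])
```

With these numbers the dynamics settle at (2.5, 2.5, 0): the two low-cost producers share the market and the high-cost producer's best response is to produce nothing, echoing the abstract's point that heterogeneity in production costs shapes the equilibrium.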

【Keywords】: advertising; decision making; profitability; Dixit-Stiglitz utility function; advertising; best-response dynamics; content production stage; heterogeneous content producers; heterogeneous participants; independent production decision making; intermediary optimal payment; intermediary profit maximization; payment scheme; production costs; rational decision-making; representative content viewer; selfinterested agents; user-generated content platforms; utility maximization problem; Advertising; Finance; Internet; Pricing; Production; User-generated content; YouTube

70. Profiling Skype video calls: Rate control and video quality.

Paper Link】 【Pages】:621-629

【Authors】: Xinggong Zhang ; Yang Xu ; Hao Hu ; Yong Liu ; Zongming Guo ; Yao Wang

【Abstract】: Video telephony has recently gained its momentum and is widely adopted by end-consumers. But there have been very few studies on the network impacts of video calls and the user Quality-of-Experience (QoE) under different network conditions. In this paper, we study the rate control and video quality of Skype video calls. We first measure the behaviors of Skype video calls on a controlled network testbed. By varying packet loss rate, propagation delay and bandwidth, we observe how Skype adjusts its rates, FEC redundancy and video quality. We find that Skype is robust against mild packet losses and propagation delays, and can efficiently utilize the available network bandwidth. We also find that Skype employs an overly aggressive FEC protection strategy. Based on the measurement results, we develop rate control model, FEC model, and video quality model for Skype. Extrapolating from the models, we conduct numerical analysis to study the network impacts of Skype. We demonstrate that user back-offs upon quality degradation serve as an effective user-level rate control scheme. We also show that Skype video calls are indeed TCP-friendly and respond to congestion quickly when the network is overloaded.

【Keywords】: Internet telephony; bandwidth allocation; forward error correction; numerical analysis; transport protocols; FEC redundancy; Skype video call profiling; TCP-friendly; mild packet losses; network bandwidth utilization; numerical analysis; overly aggressive FEC protection strategy; packet loss rate; propagation delay; quality degradation; user quality-of-experience; user-level rate control scheme; video quality; video telephony; Bandwidth; Delay; Loss measurement; Propagation delay; Propagation losses; Protocols; Streaming media

71. Fully decentralized estimation of some global properties of a network.

Paper Link】 【Pages】:630-638

【Authors】: Antonio Carzaniga ; Cyrus P. Hall ; Michele Papalini

【Abstract】: It is often beneficial to architect networks and overlays as fully decentralized systems, in the sense that any computation (e.g., routing or search) would only use local information, and no single node would have a complete view or control over the whole network. Yet sometimes it is also important to compute global properties of the network. In this paper we propose a fully decentralized algorithm to compute some global properties that can be derived from the spectrum of the network. More specifically, we compute the most significant eigenvalues of a descriptive matrix closely related to the adjacency matrix of the network graph. Such spectral properties can then lead to, for example, the “mixing time” of a network, which can be used to parametrize random walks and related search algorithms typical of peer-to-peer networks. Our key insight is to view the network as a linear dynamic system whose impulse response can be computed efficiently and locally by each node. We then use this impulse response to identify the spectral properties of the network. This algorithm is completely decentralized and requires only minimal local state and local communication. We show experimentally that the algorithm works well on different kinds of networks and in the presence of network instability.
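The key insight, reading the spectrum off an impulse response of the network viewed as a linear system, can be shown with a centralized toy (the paper's contribution is the decentralized protocol, which this sketch does not attempt): drive x(t+1) = A·x(t) with an impulse and recover the dominant eigenvalue from the growth rate of the state.

```python
# Centralized sketch of the impulse-response idea; A is the adjacency
# matrix of a small hypothetical graph.

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def dominant_eigenvalue(A, steps=60):
    x = [1.0] + [0.0] * (len(A) - 1)   # impulse at node 0
    ratio = 0.0
    for _ in range(steps):
        y = matvec(A, x)
        ratio = max(abs(v) for v in y)  # growth factor of a unit-norm state
        x = [v / ratio for v in y]      # renormalize to avoid overflow
    return ratio

# Triangle graph K3: eigenvalues {2, -1, -1}, so the dominant one is 2.
K3 = [[0, 1, 1],
      [1, 0, 1],
      [1, 1, 0]]
print(round(dominant_eigenvalue(K3), 6))
```

Each matrix-vector product here corresponds to one round of neighbor-to-neighbor exchange, which is why the computation admits a local, decentralized realization.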

【Keywords】: eigenvalues and eigenfunctions; graph theory; matrix algebra; multivariable systems; network theory (graphs); overlay networks; peer-to-peer computing; random processes; search problems; spectral analysis; transient response; adjacency matrix; decentralized systems; descriptive matrix; eigenvalues; fully decentralized algorithm; fully decentralized estimation; global network property; global property; impulse response; linear dynamic system; local communication; local information; minimal local state communication; network graph; network instability; network mixing time; network spectrum; overlay network; peer-to-peer networks; random walks; search algorithms; spectral property; Approximation algorithms; Approximation methods; Eigenvalues and eigenfunctions; Estimation; Heuristic algorithms; Peer to peer computing; Vectors

72. Optimal resource allocation to defend against deliberate attacks in networking infrastructures.

Paper Link】 【Pages】:639-647

【Authors】: Xun Xiao ; Minming Li ; Jianping Wang ; Chunming Qiao

【Abstract】: Protecting networking infrastructures from malicious attacks is important as a successful attack on a high data rate link can cause the loss or delay of large amounts of data. In this paper, we consider a proactive approach where the ISPs are willing to allocate some (limited) resources to defend the networking infrastructures against the attacks. We aim to answer where, and in what amounts, the defending resources should be placed so that the expected data loss can be minimized no matter where the attacker may launch the attack. We model the problem as a 2-player zero-sum game where the payoffs are measured by the maximum network flow. In order to overcome the unique challenges of such payoffs, we transform the payoffs into explicit piece-wise functions through multi-parametric linear programming (MP-LP) and divide the entire strategy space into a set of critical regions. We prove that a global Nash Equilibrium (NE) exists when there is only one critical region. However, when the number of critical regions is greater than 1, there is no global NE. We also prove that there exists one and only one local NE in each critical region. We then design a mixed-strategy solution. Our results show that dedicating all defending resources to one min-cut set when there are multiple min-cut sets is not optimal; min-cut strategies do, however, have higher probabilities of being selected in the mixed-strategy solution when the defending resource is limited.
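The mixed-strategy flavor of such a zero-sum game can be illustrated on a deliberately tiny instance. This is not the paper's MP-LP machinery: the two-link payoff matrix and the fictitious-play solver below are illustrative assumptions, chosen so the equilibrium is easy to check by hand (defend the capacity-3 link with probability 3/4).

```python
def fictitious_play(M, iters=20000):
    """Approximate a mixed equilibrium of a zero-sum game by fictitious play.
    M[i][j] = payoff to the attacker when attacker plays i, defender plays j."""
    nA, nD = len(M), len(M[0])
    cntA, cntD = [0] * nA, [0] * nD
    a, d = 0, 0
    for _ in range(iters):
        cntA[a] += 1
        cntD[d] += 1
        # Each side best-responds to the opponent's empirical mix so far.
        a = max(range(nA), key=lambda i: sum(M[i][j] * cntD[j] for j in range(nD)))
        d = min(range(nD), key=lambda j: sum(M[i][j] * cntA[i] for i in range(nA)))
    return [c / iters for c in cntA], [c / iters for c in cntD]

# Two links with capacities 3 and 1: attacking an undefended link loses its
# capacity, attacking a defended link loses nothing.
M = [[0, 3],
     [1, 0]]
attacker_mix, defender_mix = fictitious_play(M)
print([round(v, 2) for v in defender_mix])
```

The empirical frequencies converge toward defending the large link 3/4 of the time and attacking it 1/4 of the time, with game value 3/4: a small-scale echo of the abstract's finding that the defense must randomize rather than commit everything to one cut.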

【Keywords】: game theory; linear programming; resource allocation; telecommunication network management; telecommunication security; 2-player zero-sum game; ISP; data loss; defending resource; deliberate attacks; explicit piecewise function; global Nash equilibrium; high data rate link; malicious attacks; maximum network flow; min-cut set; min-cut strategy; mixed-strategy solution; multiparametric linear programming; networking infrastructures; optimal resource allocation; Games; Linear programming; Nash equilibrium; Network topology; Resource management; Transforms

73. Optimal max-min fairness rate control in wireless networks: Perron-Frobenius characterization and algorithms.

Paper Link】 【Pages】:648-656

【Authors】: Desmond W. H. Cai ; Chee Wei Tan ; Steven H. Low

【Abstract】: Rate adaptation and power control are two key resource allocation mechanisms in multiuser wireless networks. In the presence of interference, how do we jointly optimize end-to-end source rates and link powers to achieve weighted max-min rate fairness for all sources in the network? This optimization problem is hard to solve as physical layer link rate functions are nonlinear, nonconvex, and coupled in the transmit powers. We show that the weighted max-min rate fairness problem can, in fact, be decoupled into separate fairness problems for flow rate and power control. For a large class of physical layer link rate functions, we characterize the optimal solution analytically by a nonlinear Perron-Frobenius theory (through solving a conditional eigenvalue problem) that captures the interaction of multiuser interference. We give an iterative algorithm to compute the optimal flow rate that converges geometrically fast without any parameter configuration. Numerical results show that our iterative algorithm is computationally fast for the Shannon capacity, CDMA, and piecewise linear link rate functions.
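The conditional eigenvalue problem has a concrete fixed-point flavor that a short sketch can convey. This is a standard normalized iteration in the Perron-Frobenius power-control literature, not a reproduction of the authors' algorithm, and the channel gains and noise powers below are made up: find (λ, p) with f(p) = λ·p and ||p||∞ = 1, where f_i(p) is link i's interference-plus-noise divided by its direct gain, by iterating p ← f(p)/||f(p)||∞.

```python
# Normalized fixed-point iteration for the conditional eigenvalue problem
# f(p) = lambda * p, ||p||_inf = 1 (hypothetical 2-link channel).

def iterate(G, noise, steps=200):
    n = len(G)
    p = [1.0] * n
    lam = 0.0
    for _ in range(n * 0 + steps):
        f = [(sum(G[i][j] * p[j] for j in range(n) if j != i) + noise[i]) / G[i][i]
             for i in range(n)]
        lam = max(f)                   # infinity-norm normalization
        p = [v / lam for v in f]
    return lam, p

G = [[1.0, 0.1],                       # G[i][j]: gain from transmitter j to link i
     [0.2, 1.0]]
noise = [0.05, 0.05]
lam, p = iterate(G, noise)
# At the fixed point every link achieves the same SIR = 1/lambda: the
# max-min fair operating point.
sirs = [G[i][i] * p[i] / (sum(G[i][j] * p[j] for j in range(2) if j != i) + noise[i])
        for i in range(2)]
print(round(lam, 4), [round(s, 4) for s in sirs])
```

The geometric convergence the abstract mentions shows up here as the iteration settling to machine precision in a few dozen steps, with no step size or other parameter to tune.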

【Keywords】: code division multiple access; eigenvalues and eigenfunctions; iterative methods; minimax techniques; piecewise linear techniques; radio networks; radiofrequency interference; resource allocation; CDMA; Perron-Frobenius algorithm; Perron-Frobenius characterization; Shannon capacity; conditional eigenvalue problem; end-to-end source rate; iterative algorithm; multiuser interference; multiuser wireless network; nonlinear Perron-Frobenius theory; optimal flow rate; optimal max-min fairness rate control; optimization problem; physical layer link rate function; piecewise linear link rate function; power control; power transmission; rate adaptation; resource allocation; weighted max-min rate fairness problem; Interference; Optimization; Physical layer; Power control; Resource management; Signal to noise ratio; Vectors; Max-min fairness; convex optimization; nonlinear Perron-Frobenius theory; nonnegative matrix theory; power control; wireless network

74. Jointly optimal bit loading, channel pairing and power allocation for multi-channel relaying.

Paper Link】 【Pages】:657-665

【Authors】: Mahdi Hajiaghayi ; Min Dong ; Ben Liang

【Abstract】: We aim to enhance the end-to-end rate of a general dual-hop relay network with multiple channels and finite modulation formats, by jointly optimizing channel pairing, power allocation, and integer bit loading. Such an optimization problem has both a discrete feasible region, due to the combinatoric nature of channel pairing, and a discrete objective, due to the bit loading requirement. For this type of mixed-integer programming problems, the Lagrange dual method generally is inapplicable, due to the non-zero duality gap. However, by exploring the structure of our problem, we are able to bound the gap to within one bit, allowing the extraction of an exact optimal integer solution. We further present complexity reduction techniques, and demonstrate that the proposed solution only requires a computational complexity that is polynomial in the number of channels, realizing efficient implementation in practical systems. Through numerical experiments, we show that the jointly optimal solution can significantly outperform common sub-optimal alternatives.

【Keywords】: channel allocation; duality (mathematics); integer programming; wireless channels; Lagrange dual method; channel pairing; complexity reduction technique; discrete feasible region; discrete objective; dual-hop relay network; end-to-end rate; finite modulation format; integer bit loading; mixed-integer programming problem; multichannel relaying; multiple channels format; nonzero duality gap; optimal bit loading; optimal integer solution; power allocation; Bit rate; Loading; Modulation; OFDM; Optimization; Relays; Resource management

75. Dynamic spectrum access as a service.

Paper Link】 【Pages】:666-674

【Authors】: Chunsheng Xin ; Min Song

【Abstract】: Recently there have been various studies on dynamic spectrum access (DSA) approaches, e.g., opportunistic spectrum access and spectrum auction, to address spectrum scarcity and inefficient spectrum utilization caused by today's static spectrum allocation policy. In this paper, we propose a new approach, dynamic spectrum access as a service (DSAS), to achieve DSA. We consider a spectrum service provider that dynamically offers spectrum service to users such that the users can set up dynamic topologies for data communication, e.g., transport a bulk data flow between two nodes, or carry out a video conference among a set of nodes. Through DSAS, the precious spectrum is dynamically shared and efficiently utilized by users. In this paper, we consider two spectrum services, SameBand and DiffBand, and develop efficient online algorithms to allocate spectrum for the two services, so that users can set up dynamic topologies. The performance of the algorithms is evaluated through both analysis and simulation.

【Keywords】: radio spectrum management; telecommunication network topology; telecommunication services; DiffBand; SameBand; bulk data flow; data communication; demand spectrum access as a service; dynamic spectrum access; dynamic topology; online algorithm; opportunistic spectrum access; spectrum auction; spectrum scarcity; spectrum service provider; spectrum utilization; static spectrum allocation policy; video conference; Base stations; Dynamic scheduling; Interchannel interference; Resource management; Switches; Topology

76. Equilibrium selection in power control games on the interference channel.

Paper Link】 【Pages】:675-683

【Authors】: Gesualdo Scutari ; Francisco Facchinei ; Jong-Shi Pang ; Lorenzo Lampariello

【Abstract】: In recent years, game-theoretic tools have been increasingly used to study many important resource allocation problems in communications and networking. One common feature shared by all these approaches is that, when it comes to (distributed) computation of equilibria, assumptions are always made that imply uniqueness of the Nash Equilibrium. This considerably simplifies the analysis of the games under investigation and permits the design of distributed solution methods with convergence guarantees. However, requiring the uniqueness of the solution may be too demanding in many practical situations, thus strongly limiting the applicability of current game theoretical methodologies. In this paper, we overcome this limitation and propose novel distributed algorithms for noncooperative games having multiple solutions. The new methods, whose convergence analysis is based on variational inequality techniques, are able to select, among all the equilibria of a game, those which optimize a given performance criterion. We apply the developed methods to a power control problem over parallel Gaussian interference channels and show that they yield a considerable performance improvement over classical power control schemes.

【Keywords】: Gaussian channels; game theory; interference (signal); resource allocation; Nash equilibrium; convergence analysis; equilibrium selection; game theoretic tools; parallel Gaussian interference channels; power control games; resource allocation; variational inequality techniques; Convergence; Distributed algorithms; Games; Optimization; Partitioning algorithms; Power control; Vectors

77. Scaling social media applications into geo-distributed clouds.

Paper Link】 【Pages】:684-692

【Authors】: Yu Wu ; Chuan Wu ; Bo Li ; Linquan Zhang ; Zongpeng Li ; Francis C. M. Lau

【Abstract】: Federation of geo-distributed cloud services is a trend in cloud computing which, by spanning multiple data centers at different geographical locations, can provide a cloud platform with much larger capacities. Such a geo-distributed cloud is ideal for supporting large-scale social media streaming applications (e.g., YouTube-like sites) with dynamic contents and demands, owing to its abundant on-demand storage/bandwidth capacities and geographical proximity to different groups of users. Although promising, its realization presents challenges on how to efficiently store and migrate contents among different cloud sites (i.e. data centers), and to distribute user requests to the appropriate sites for timely responses at modest costs. These challenges escalate when we consider the persistently increasing contents and volatile user behaviors in a social media application. By exploiting social influences among users, this paper proposes efficient proactive algorithms for dynamic, optimal scaling of a social media application in a geo-distributed cloud. Our key contribution is an online content migration and request distribution algorithm with the following features: (1) future demand prediction by a novel characterization of social influences among the users in a simple but effective epidemic model; (2) one-shot optimal content migration and request distribution based on efficient optimization algorithms to address the predicted demand; and (3) a Δ(t)-step look-ahead mechanism to adjust the one-shot optimization results towards the offline optimum. We verify the effectiveness of our algorithm using solid theoretical analysis, as well as large-scale experiments under dynamic realistic settings on a home-built cloud platform.
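The epidemic-model ingredient of the demand predictor can be sketched in a few lines. The logistic SI-style update and all numbers below are illustrative assumptions, not the paper's calibrated model: current viewers "infect" a fraction of their not-yet-viewing social neighbors each step.

```python
# Toy epidemic (SI/logistic) demand predictor; beta and the population
# size are hypothetical parameters, not values from the paper.

def predict_demand(current, population, beta, steps=1):
    """One or more logistic-growth steps: x <- x + beta * x * (N - x) / N."""
    x = float(current)
    for _ in range(steps):
        x = x + beta * x * (population - x) / population
    return x

# 1000 current viewers in a population of 100k, contact rate 0.5.
print(round(predict_demand(current=1000, population=100000, beta=0.5), 1))
```

A scaling controller would then compare such a one-step forecast against currently provisioned capacity at each cloud site to decide where to replicate content ahead of demand.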

【Keywords】: cloud computing; computer centres; media streaming; social networking (online); cloud computing; epidemic model; future demand prediction; geo-distributed cloud services; geographical locations; geographical proximity; home-built cloud platform; large-scale social media streaming applications; multiple data center spanning; on-demand storage-bandwidth capacities; one-shot optimization algorithm; oneshot optimal content migration; online content migration; proactive algorithms; request distribution algorithm; social influences characterization; social media application scaling; step look-ahead mechanism; Algorithm design and analysis; Cloud computing; Heuristic algorithms; Media; Optimization; Servers; Streaming media

78. LT codes-based secure and reliable cloud storage service.

Paper Link】 【Pages】:693-701

【Authors】: Ning Cao ; Shucheng Yu ; Zhenyu Yang ; Wenjing Lou ; Y. Thomas Hou

【Abstract】: With the increasing adoption of cloud computing for data storage, assuring data service reliability, in terms of data correctness and availability, remains an outstanding challenge. While redundancy can be added to the data for reliability, the problem becomes challenging in the “pay-as-you-use” cloud paradigm where we always want to efficiently resolve it for both corruption detection and data repair. Prior distributed storage systems based on erasure codes or network coding techniques have either high decoding computational cost for data users, or place too great a burden of data repair and staying online on data owners. In this paper, we design a secure cloud storage service which addresses the reliability issue with near-optimal overall performance. By allowing a third party to perform the public integrity verification, data owners are significantly released from the onerous work of periodically checking data integrity. To completely free the data owner from the burden of being online after data outsourcing, this paper proposes an exact repair solution so that no metadata needs to be generated on the fly for repaired data. The performance analysis and experimental results show that our designed service has comparable storage and communication cost, but much less computational cost during data retrieval than erasure codes-based storage solutions. It introduces less storage cost, much faster data retrieval, and comparable communication cost compared to network coding-based distributed storage systems.
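The coding ingredient behind the title, LT codes decoded by peeling, can be shown in miniature. This sketch covers only the XOR encoding/decoding mechanics, not the paper's integrity-verification or exact-repair protocols; for clarity it uses a hand-built deterministic set of encoded symbols, whereas a real LT code draws each symbol's degree from the robust soliton distribution and its neighbors at random.

```python
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def lt_decode(symbols, k):
    """Peeling decoder: repeatedly strip already-recovered blocks out of each
    encoded symbol and release any symbol reduced to a single unknown block."""
    out = [None] * k
    syms = [[set(s), v] for s, v in symbols]   # work on copies
    progress = True
    while progress:
        progress = False
        for sym in syms:
            s, v = sym
            for i in [i for i in s if out[i] is not None]:
                v = xor(v, out[i])
                s.discard(i)
            sym[1] = v
            if len(s) == 1:                    # degree-1 symbol: block revealed
                out[s.pop()] = v
                progress = True
    return out

blocks = [bytes([b]) * 4 for b in range(4)]    # 4 source blocks of 4 bytes
# One degree-1 seed plus a chain of degree-2 XORs; peeling unravels the
# chain one block at a time.
symbols = [({0}, blocks[0]),
           ({0, 1}, xor(blocks[0], blocks[1])),
           ({1, 2}, xor(blocks[1], blocks[2])),
           ({2, 3}, xor(blocks[2], blocks[3]))]
print(lt_decode(symbols, 4) == blocks)
```

The low decoding cost the abstract claims over erasure-code baselines comes precisely from this peeling process being pure XOR, with no finite-field matrix inversion.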

【Keywords】: cloud computing; data integrity; security of data; storage management; LT codes; cloud computing; cloud storage service reliability; cloud storage service security; corruption detection; data availability; data correctness; data integrity; data repair; data service reliability; data storage; erasure codes; near-optimal overall performance; network coding-based distributed storage systems; pay-as-you-use cloud paradigm; public integrity verification; Cloud computing; Decoding; Distributed databases; Encoding; Maintenance engineering; Reliability; Servers

79. Stochastic models of load balancing and scheduling in cloud computing clusters.

Paper Link】 【Pages】:702-710

【Authors】: Siva Theja Maguluri ; R. Srikant ; Lei Ying

【Abstract】: Cloud computing services are becoming ubiquitous, and are starting to serve as the primary source of computing power for both enterprises and personal computing applications. We consider a stochastic model of a cloud computing cluster, where jobs arrive according to a stochastic process and request virtual machines (VMs), which are specified in terms of resources such as CPU, memory and storage space. While there are many design issues associated with such systems, here we focus only on resource allocation problems, such as the design of algorithms for load balancing among servers, and algorithms for scheduling VM configurations. Given our model of a cloud, we first define its capacity, i.e., the maximum rates at which jobs can be processed in such a system. Then, we show that the widely-used Best-Fit scheduling algorithm is not throughput-optimal, and present alternatives which achieve any arbitrary fraction of the capacity region of the cloud. We then study the delay performance of these alternative algorithms through simulations.
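For reference, the Best-Fit rule the abstract shows to be non-throughput-optimal is simple to state: send each arriving VM request to the server with the smallest residual capacity that can still host it. The one-dimensional capacities below are hypothetical; the paper's model has multi-resource (CPU/memory/storage) VM configurations.

```python
# Sketch of Best-Fit VM placement (the baseline the paper critiques),
# with made-up request sizes and server capacities.

def best_fit(requests, capacities):
    residual = list(capacities)
    placement = []
    for size in requests:
        fits = [i for i, r in enumerate(residual) if r >= size]
        if not fits:
            placement.append(None)      # blocked: no server can host this VM
            continue
        i = min(fits, key=lambda i: residual[i])   # tightest remaining fit
        residual[i] -= size
        placement.append(i)
    return placement, residual

placement, residual = best_fit([2, 3, 1, 4], capacities=[4, 5])
print(placement, residual)
```

In this run the size-4 request is blocked even though total free capacity is 3, which hints at how greedy packing can strand capacity; the paper's throughput-optimal alternatives avoid such losses by randomizing or delaying configuration choices.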

【Keywords】: cloud computing; resource allocation; scheduling; stochastic processes; virtual machines; VM configurations; best-fit scheduling algorithm; cloud computing clusters; cloud computing services; enterprises; load balancing; load scheduling; personal computing applications; resource allocation problems; stochastic models; stochastic process; virtual machines; Cloud computing; Resource management; Routing; Servers; Stochastic processes; Throughput; Vectors

80. A theory of cloud bandwidth pricing for video-on-demand providers.

Paper Link】 【Pages】:711-719

【Authors】: Di Niu ; Chen Feng ; Baochun Li

【Abstract】: Current-generation cloud computing is offered with usage-based pricing, with no bandwidth capacity guarantees, which is however unappealing to bandwidth-intensive applications such as video-on-demand (VoD). We consider a new type of service where VoD providers, such as Netflix and Hulu, make reservations for bandwidth guarantees from the cloud at negotiable prices to support continuous media streaming. We argue that it is beneficial to multiplex such bandwidth reservations in the market using a profit-making broker while controlling the performance risks. We ask the question: in such a market, how much should each VoD provider pay for bandwidth reservation? We prove that the market has a unique Nash equilibrium where the bandwidth reservation price for a VoD provider critically depends on its demand correlation to the market. Real-world traces verify that our theory can significantly lower the market price for cloud bandwidth reservation.

【Keywords】: bandwidth allocation; cloud computing; media streaming; pricing; telecommunication industry; video on demand; Hulu; Netflix; bandwidth reservations; cloud bandwidth pricing; cloud computing; continuous media streaming; usage-based pricing; video-on-demand providers; Aggregates; Bandwidth; Bismuth; Cloud computing; Correlation; Multiplexing; Pricing

81. STROBE: Actively securing wireless communications using Zero-Forcing Beamforming.

Paper Link】 【Pages】:720-728

【Authors】: Narendra Anand ; Sung-Ju Lee ; Edward W. Knightly

【Abstract】: We present the design and experimental evaluation of Simultaneous TRansmission with Orthogonally Blinded Eavesdroppers (STROBE). STROBE is a cross-layer approach that exploits the multi-stream capabilities of existing technologies such as 802.11n and the upcoming 802.11ac standard where multi-antenna APs can construct simultaneous data streams using Zero-Forcing Beamforming (ZFBF). Instead of using this technique for simultaneous data stream generation, STROBE utilizes ZFBF by allowing an AP to use one stream to communicate with an intended user and the remaining streams to orthogonally “blind” (actively interfere with) any potential eavesdropper thereby preventing eavesdroppers from decoding nearby transmissions. Through extensive experimental evaluation, we show that STROBE consistently outperforms Omnidirectional, Single-User Beamforming (SUBF), and directional antenna based transmission methods by keeping the transmitted signal at the intended receiver and shielded from eavesdroppers. In an indoor Wireless LAN environment, STROBE consistently serves an intended user with an SINR 15 dB greater than an eavesdropper.

【Keywords】: array signal processing; directive antennas; indoor radio; radio receivers; telecommunication security; wireless LAN; 802.11ac standard; 802.11n standard; STROBE; SUBF; ZFBF; cross-layer approach; data stream generation; directional antenna based transmission method; indoor wireless LAN environment; multiantenna AP; multistream capabilities; receiver; simultaneous transmission with orthogonally blinded eavesdroppers; single-user beamforming; transmitted signal; wireless communication security; zero-forcing beamforming; Array signal processing; Interference; Receivers; Signal to noise ratio; Transmitting antennas; Vectors

82. Location privacy preservation in collaborative spectrum sensing.

Paper Link】 【Pages】:729-737

【Authors】: Shuai Li ; Haojin Zhu ; Zhaoyu Gao ; Xinping Guan ; Kai Xing ; Xuemin Shen

【Abstract】: Collaborative spectrum sensing has been regarded as a promising approach to enable secondary users to detect primary users by exploiting spatial diversity. In this paper, we consider a converse question: could spatial diversity be exploited by a malicious entity, e.g., an external attacker or an untrusted Fusion Center (FC), to achieve involuntary geolocation of a secondary user by linking his location-dependent sensing report to his physical position. We answer this question by identifying a new security threat in collaborative sensing from testbed implementation, and it is shown that the attackers could geo-locate a secondary user from its sensing report with a success rate above 90% even in the presence of data aggregation. We then introduce a novel location privacy definition to quantify the location privacy leaking in collaborative sensing. We propose a Privacy Preserving collaborative Spectrum Sensing (PPSS) scheme, which includes two primitive protocols: the Privacy Preserving Sensing Report Aggregation protocol (PPSRA) and the Distributed Dummy Report Injection protocol (DDRI). Specifically, the PPSRA scheme utilizes applied cryptographic techniques to allow the FC to obtain the aggregated result from various secondary users without learning each individual's values, while the DDRI algorithm can provide differential location privacy for secondary users by introducing a novel sensing data randomization technique. We implement and evaluate the PPSS scheme in a real-world testbed. The evaluation results show that PPSS can significantly improve the secondary user's location privacy with a reasonable security overhead in collaborative sensing.
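The aggregation goal of PPSRA, letting the fusion center learn the sum of the reports but no individual value, can be illustrated with pairwise additive masking, a standard building block. This toy is an assumption for illustration only: the paper's actual protocol is cryptographic and handles key setup and collusion, which the sketch ignores.

```python
import random

# Toy secure-sum: each pair (i, j) shares a random mask that user i adds
# and user j subtracts modulo a public modulus, so individual reports are
# hidden while the masks cancel in the sum.

def masked_reports(values, rng, modulus=2**31):
    n = len(values)
    masked = list(values)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.randrange(modulus)
            masked[i] = (masked[i] + m) % modulus
            masked[j] = (masked[j] - m) % modulus
    return masked

rng = random.Random(1)
readings = [3, 7, 4, 6]                 # per-user sensing reports (hypothetical)
masked = masked_reports(readings, rng)
print(sum(masked) % 2**31 == sum(readings) % 2**31)
```

Each masked value is uniformly distributed on its own, so the fusion center sees nothing linkable to a user's location-dependent report, yet the aggregate it needs for detection is exact.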

【Keywords】: cognitive radio; cryptography; data privacy; protocols; telecommunication security; cognitive radio security; collaborative sensing; cryptographic technique; data aggregation; differential location privacy; distributed dummy report injection protocol; involuntary geolocation; location privacy definition; location privacy preservation; location-dependent sensing report; malicious entity; primary user detection; primitive protocol; privacy preserving collaborative spectrum sensing scheme; privacy preserving sensing report aggregation protocol; security threat identification; sensing data randomization technique; space diversity; spatial diversity; untrusted fusion center; Collaboration; Cryptography; Data privacy; Privacy; Protocols; Sensors; Cognitive Radio Security; Collaborative Sensing; Location Privacy

83. Joint UFH and power control for effective wireless anti-jamming communication.

Paper Link】 【Pages】:738-746

【Authors】: Kaihe Xu ; Qian Wang ; Kui Ren

【Abstract】: Jamming-resistant communication without pre-shared secrets has received extensive attention recently and is commonly tackled by utilizing the technique of uncoordinated frequency hopping (UFH). However, existing approaches exhibit significant performance constraints due to the use of UFH at both the sender and the receiver sides. To improve the state of the art, in this paper we aim to significantly improve the performance of the anti-jamming system in the presence of a power-limited jammer. Specifically, we for the first time jointly consider UFH and power control and pose these two techniques into a uniform framework. The proposed approach utilizes online learning theory to determine both the hopping channels and the transmitting powers based on the history of channel status. By dividing the transmission power into multiple levels, the sender with a limited power budget is able to choose both the sending channels and the corresponding transmission power. The sender keeps refining its knowledge of channel status to improve future channel selection and power allocation based on the feedback information from the receiver. We analytically show that, in presence of a power-limited jammer, the average transmission delay of our system is bounded by 2A √2T|Cs|ln|Ns| with high probability. Extensive simulations are conducted to demonstrate the effectiveness of our scheme against various jamming attacks.

【Keywords】: frequency hop communication; jamming; power control; probability; telecommunication control; telecommunication security; channel selection; channel status; feedback information; hopping channels; jamming-resistant communication; joint UFH-power control; online learning theory; power allocation; power-limited jammer; pre-shared secrets; probability; sending channels; transmission power; transmitting powers; uncoordinated frequency hopping; wireless antijamming communication; Jamming; Power control; Probability distribution; Receivers; Resource management; Spread spectrum communication; Vectors

84. Detection of channel degradation attack by Intermediary Node in Linear Networks.

Paper Link】 【Pages】:747-755

【Authors】: Eric Graves ; Tan F. Wong

【Abstract】: We consider the problem of two sources wanting to share information through a potentially untrustworthy intermediary node. We assume that the two sources transmit random symbols simultaneously and that the intermediary node relays the information in the amplify-and-forward manner. We show that under a certain sufficient condition on the channel, it is possible to asymptotically detect whether or not the intermediary node is degrading the channel by sending out manipulated symbols. This can be done solely by each source examining its received distribution conditioned on what it transmitted; thus allowing for a minimally invasive approach to determining if the intermediary node is acting maliciously. More specifically, we model the potential malicious action of the intermediate node by an “attack” channel. An estimate of the attack channel is obtained from the received conditional distribution empirically observed by a source node. We show that the estimated attack channel converges in probability to the true attack channel if the intermediate node is not acting maliciously. Otherwise there is a separation between the estimated and true attacking channels with high probability. This result provides us a clear-cut criterion to determine whether the intermediate node is malicious or not.

【Keywords】: amplify and forward communication; probability; telecommunication security; Intermediary node; amplify-and-forward manner; channel degradation attack detection; conditional distribution; linear network; manipulated symbol; minimally invasive approach; random symbol; untrustworthy intermediary node; Channel estimation; Measurement; Null space; Poles and towers; Random variables; Relays; Vectors

85. Motioncast with general Markovian mobility.

Paper Link】 【Pages】:756-764

【Authors】: Shangxing Wang ; Youyun Xu ; Xinbing Wang

【Abstract】: This paper investigates the capacity, delay and energy consumption for MotionCast (a multicast mechanism for MANETs) with general Markovian mobility. We consider MotionCast in an extended cell partitioned network under a Markovian node mobility model and exactly compute the pernode throughput capacity. A Two hop relay algorithm is proposed to guarantee such capacity, which also achieves a better delay-capacity tradeoff, i.e., Θ(N log k). Considering that redundancy can significantly improve network delay, we present a Two hop relay algorithm with redundancy and study the general influence of redundancy on the capacity and delay of MotionCast network. Moreover, we use the minimum energy function to characterize the energy consumption for the MotionCast network. An accurate piecewise minimum energy function to keep network stable is derived. Furthermore, a simple Minimum energy algorithm is designed, which reduces actual energy consumption arbitrarily close to the minimum energy function at the cost of increasing delay. Our result shows that the proposed algorithm achieves the optimal energy-delay tradeoff.

【Keywords】: Markov processes; communication complexity; delays; mobile ad hoc networks; mobility management (mobile radio); multicast communication; MANET; Markovian node mobility model; MotionCast network; delay-capacity tradeoff; energy consumption; extended cell partitioned network; general Markovian mobility; minimum energy algorithm; motioncast; multicast mechanism; network delay; optimal energy-delay tradeoff; pernode throughput capacity; piecewise minimum energy function; redundancy; stable network; two hop relay algorithm; Algorithm design and analysis; Delay; Partitioning algorithms; Receivers; Redundancy; Relays; Wireless networks

86. On the throughput-delay trade-off in georouting networks.

Paper Link】 【Pages】:765-773

【Authors】: Philippe Jacquet ; Salman Malik ; Bernard Mans ; Alonso Silva

【Abstract】: We study the scaling properties of a georouting scheme in a wireless multi-hop network of n mobile nodes. Our aim is to increase the network capacity quasi linearly with n while keeping the average delay bounded. In our model, mobile nodes move according to an i.i.d. random walk with velocity v and transmit packets to randomly chosen destinations. The average packet delivery delay of our scheme is of order 1/v and it achieves the network capacity of order n/(log n log log n). This shows a practical throughput-delay trade-off, in particular when compared with the seminal result of Gupta and Kumar which shows network capacity of order √(n/log n) and negligible delay and the groundbreaking result of Grossglauser and Tse which achieves network capacity of order n but with an average delay of order √n/v. The foundation of our improved capacity and delay trade-off relies on the fact that we use a mobility model that contains free space motion, a model that we consider more realistic than classic brownian motions. We confirm the generality of our analytical results using simulations under various interference models.

【Keywords】: communication complexity; delays; mobile communication; telecommunication network routing; georouting networks; interference models; mobile nodes; network capacity; packet delivery delay; random walk; throughput-delay trade-off; wireless multi-hop network; Delay; Mobile communication; Mobile computing; Protocols; Relays; Throughput; Vectors

87. Toward simple criteria to establish capacity scaling laws for wireless networks.

Paper Link】 【Pages】:774-782

【Authors】: Canming Jiang ; Yi Shi ; Y. Thomas Hou ; Wenjing Lou ; Sastry Kompella ; Scott F. Midkiff

【Abstract】: Capacity scaling laws offer fundamental understanding on the trend of user throughput behavior when the network size increases. Since the seminal work of Gupta and Kumar, there have been active research efforts in developing capacity scaling laws for ad hoc networks under various advanced physical layer technologies. These efforts led to many custom-designed solutions, most of which were intellectually challenging and lacked universal properties that can be extended to address scaling laws of ad hoc networks with other physical layer technologies. In this paper, we present a set of simple yet powerful tool that can be applied to quickly determine the capacity scaling laws for various physical layer technologies under the protocol model. We prove the correctness of our proposed criteria and demonstrate their usage through a number of case studies, such as ad hoc networks with directional antenna, MIMO, multi-channel multi-radio, cognitive radio, and multiple packet reception. These simple criteria will serve as powerful tools to networking researchers to obtain throughput scaling laws of ad hoc networks under different physical layer technologies, particularly those to be developed in the future.

【Keywords】: ad hoc networks; protocols; MIMO; ad hoc networks; advanced physical layer technologies; capacity scaling laws; cognitive radio; directional antenna; multichannel multiradio; multiple-packet reception; protocol model; user throughput behavior; wireless networks; Irrigation; MIMO; Receivers

88. Asymmetric topology control: Exact solutions and fast approximations.

Paper Link】 【Pages】:783-791

【Authors】: Gruia Calinescu ; Kan Qiao

【Abstract】: We study the problem of assigning transmission power to the nodes of ad hoc wireless networks to minimize power consumption while ensuring strong network connectivity (unidirectional links allowed). We give (1) an exact algorithm based on new integer linear program formulations, solving instances with up to 30 nodes in one minute, (2) a fast variant of a recent greedy approximation algorithm, finishing within 85 seconds on instances with up to 2000 nodes, (3) a comprehensive experimental study comparing new and previously proposed heuristics with the above exact and approximation algorithms, showing tradeoffs between the running time and the quality of the output. Thus we deal with the original power assignment problem introduced by Chen and Huang in 1989, who proved that a minimum spanning tree (MST) based approximation algorithm has ratio of 2. Our experimental results show that the recent 1.98-approximation algorithm improves the MST solution by an average of up to 15%, and comes within 4-16% of optimum, in average, on the instances where we can compute the optimum.

【Keywords】: ad hoc networks; approximation theory; greedy algorithms; integer programming; linear programming; power consumption; telecommunication control; 1.98-approximation algorithm; MST based approximation; ad hoc wireless networks; asymmetric topology control; fast approximations; greedy approximation algorithm; integer linear program formulations; minimum spanning tree based approximation; power consumption minimization; transmission power assignment problem; Ad hoc networks; Approximation algorithms; Approximation methods; Heuristic algorithms; Linear programming; Solids; Wireless networks

89. Sherlock is around: Detecting network failures with local evidence fusion.

Paper Link】 【Pages】:792-800

【Authors】: Qiang Ma ; Kebin Liu ; Xin Miao ; Yunhao Liu

【Abstract】: Traditional approaches for wireless sensor network diagnosis are mainly sink-based. They actively collect global evidences from sensor nodes to the sink so as to conduct centralized analysis at the powerful back-end. On the one hand, long distance proactive information retrieval incurs huge transmission overhead; On the other hand, due to the coupling effect between diagnosis component and the application itself, sink often fails to obtain complete and precise evidences from the network, especially for the problematic or critical parts. To avoid large overhead in evidence collection process, self-diagnosis injects fault inference modules into sensor nodes and let them make local decisions. Diagnosis results from single nodes, however, are generally inaccurate due to the narrow scope of system performances. Besides, existing self-diagnosis methods usually lead to inconsistent results from different inference processes. How to balance the workload among the sensor nodes in a diagnosis task is a critical issue. In this work, we present a new in-network diagnosis approach named Local-Diagnosis (LD2), which conducts the diagnosis process in a local area. LD2 achieves diagnosis decision through distributed evidence fusion operations. Each sensor node provides its own judgements and the evidences are fused within a local area based on the Dempster-Shafer theory, resulting in the consensus diagnosis report. We implement LD2 on TinyOS 2.1 and examine the performance on a 50 nodes indoor testbed.

【Keywords】: fault diagnosis; inference mechanisms; wireless sensor networks; Dempster-Shafer theory; TinyOS 2.1; centralized analysis; coupling effect; distributed evidence fusion; fault inference modules; in-network diagnosis approach; local diagnosis; local evidence fusion; long distance proactive information retrieval; network failure detection; self-diagnosis; sensor nodes; transmission overhead; wireless sensor network diagnosis; Accuracy; Bayesian methods; Computer crashes; Measurement; Protocols; Reliability; Wireless sensor networks

90. Towards energy-fairness in asynchronous duty-cycling sensor networks.

Paper Link】 【Pages】:801-809

【Authors】: Zhenjiang Li ; Mo Li ; Yunhao Liu

【Abstract】: In this paper, we investigate the problem of controlling node sleep intervals so as to achieve the min-max energy fairness in asynchronous duty-cycling sensor networks. We propose a mathematical model to describe the energy efficiency of such networks and observe that traditional sleep interval setting strategy, i.e., operating sensor nodes with identical sleep intervals, or intuitive control heuristics, i.e., greedily increasing sleep intervals of sensor nodes with high energy consumption rates, hardly perform well in practice. There is an urgent need to develop an efficient sleep interval control strategy for achieving fair and high energy efficiency. To this end, we theoretically formulate the Sleep Interval Control (SIC) problem and find it a convex optimization problem. By utilizing the convex property, we decompose the original problem and propose a distributed algorithm, called GDSIC. In GDSIC, sensor nodes can tune sleep intervals through a local information exchange such that the maximum energy consumption rate in the network approaches to be minimized. The algorithm is self-adjustable to the traffic load variance and is able to serve as a unified framework for a variety of asynchronous duty-cycling MAC protocols. We implement our approach in a prototype system and test its feasibility and applicability on a 50-node testbed. We further conduct extensive trace-driven simulations to examine the efficiency and scalability of our algorithm with various settings.

【Keywords】: access protocols; convex programming; distributed algorithms; minimax techniques; GDSIC; asynchronous duty-cycling MAC protocols; asynchronous duty-cycling sensor networks; convex optimization problem; distributed algorithm; identical sleep interval; intuitive control heuristics; mathematical model; maximum energy consumption rate; min-max energy fairness; network energy efficiency; node sleep interval controlling; operating sensor nodes; sleep interval control problem; sleep interval setting strategy; trace-driven simulation; traffic load variance; Data communication; Energy consumption; Protocols; Receivers; Silicon carbide; Switching circuits; Wireless sensor networks

91. Distributed node placement algorithms for constructing well-connected sensor networks.

Paper Link】 【Pages】:810-818

【Authors】: Arthur J. Friend ; Vahideh H. Manshadi ; Amin Saberi

【Abstract】: We study the problem of node placement in a sensor network. We consider proximity-based communication models where each sensor can only communicate with the ones within a given distance from it and the quality of communication between two sensors decreases with their distance. Each sensor can move locally and our goal is to improve the network connectivity by locally relocating the sensors. We use tools from spectral graph theory to determine the criticality of each edge to the global network connectivity. Based on the criticality measure, we develop algorithms that iteratively move the sensors in directions that improve the communication along more critical edges. Our algorithms are fully decentralized and only use local information exchange which are essential features for the sensor network application due to lack of centralized control and access to information in such networks. We formulate our problem as a convex optimization and use techniques from proximal minorant methods to prove the convergence of our iterative algorithms. Further, to make the algorithms fully local we use ideas such as the alternating direction method of multipliers from the distributed optimization literature. We also quantitatively illustrate the effectiveness of our schemes using simulation on a few sample networks.

【Keywords】: convex programming; graph theory; wireless sensor networks; centralized control; convex optimization; criticality measure; distributed node placement; distributed optimization; global network connectivity; iterative algorithm; proximal minorant method; proximity-based communication model; spectral graph theory; well-connected sensor networks; Convergence; Eigenvalues and eigenfunctions; Iterative methods; Laplace equations; Optimization; Robot sensing systems; Vectors

92. Cost-effective barrier coverage by mobile sensor networks.

Paper Link】 【Pages】:819-827

【Authors】: Shibo He ; Jiming Chen ; Xu Li ; Xuemin Shen ; Youxian Sun

【Abstract】: Barrier coverage problem in emerging mobile sensor networks has been an interesting research issue. Existing solutions to this problem aim to decide one-time movement for individual sensors to construct as many barriers as possible, which may not work well when there are no sufficient sensors to form a single barrier. In this paper, we try to achieve barrier coverage in sensor scarcity case by dynamic sensor patrolling. In specific, we design a periodic monitoring scheduling (PMS) algorithm in which each point along the barrier line is monitored periodically by mobile sensors. Based on the insight from PMS, we then propose a coordinated sensor patrolling (CSP) algorithm to further improve the barrier coverage, where each sensor's current movement strategy is decided based on the past intruder arrival information. By jointly exploiting sensor mobility and intruder arrival information, CSP is able to significantly enhance barrier coverage. We prove that the total distance that the sensors move during each time slot in CSP is the minimum. Considering the decentralized nature of mobile sensor networks, we further introduce two distributed versions of CSP: S-DCSP and G-DCSP. Through extensive simulations, we demonstrate that CSP has a desired barrier coverage performance and S-DCSP and G-DCSP have similar performance as that of CSP.

【Keywords】: mobile radio; wireless sensor networks; CSP algorithm; G-DCSP algorithm; PMS algorithm; S-DCSP algorithm; barrier coverage problem; coordinated sensor patrolling algorithm; cost-effective barrier coverage; dynamic sensor patrolling; intruder arrival information; mobile sensor networks; one-time movement; periodic monitoring scheduling algorithm; sensor current movement strategy; sensor mobility; sensor scarcity; Algorithm design and analysis; Belts; Mobile communication; Mobile computing; Monitoring; Robot sensing systems

93. How to split a flow?

Paper Link】 【Pages】:828-836

【Authors】: Tzvika Hartman ; Avinatan Hassidim ; Haim Kaplan ; Danny Raz ; Michal Segalov

【Abstract】: Many practically deployed flow algorithms produce the output as a set of values associated with the network links. However, to actually deploy a flow in a network we often need to represent it as a set of paths between the source and destination nodes. In this paper we consider the problem of decomposing a flow into a small number of paths. We show that there is some fixed constant β >; 1 such that it is NP-hard to find a decomposition in which the number of paths is larger than the optimal by a factor of at most β. Furthermore, this holds even if arcs are associated only with three different flow values. We also show that straightforward greedy algorithms for the problem can produce much larger decompositions than the optimal one, on certain well tailored inputs. On the positive side we present a new approximation algorithm that decomposes all but an c-fraction of the flow into at most O(1/ϵ2) times the smallest possible number of paths. We compare the decompositions produced by these algorithms on real production networks and on synthetically generated data. Our results indicate that the dependency of the decomposition size on the fraction of flow covered is exponential. Hence, covering the last few percent of the flow may be costly, so if the application allows, it may be a good idea to decompose most but not all the flow. The experiments also reveal the fact that while for realistic data the greedy approach works very well, our novel algorithm which has a provable worst case guarantee, typically produces only slightly larger decompositions.

【Keywords】: approximation theory; computational complexity; greedy algorithms; telecommunication network routing; NP-hard problem; approximation algorithm; destination nodes; flow algorithm; greedy algorithm; network links; source nodes; Algorithm design and analysis; Approximation methods; Greedy algorithms; Optimized production technology; Polynomials; Routing; Shortest path problem

94. Upward Max Min Fairness.

Paper Link】 【Pages】:837-845

【Authors】: Emilie Danna ; Avinatan Hassidim ; Haim Kaplan ; Alok Kumar ; Yishay Mansour ; Danny Raz ; Michal Segalov

【Abstract】: Often one would like to allocate shared resources in a fair way. A common and well studied notion of fairness is Max-Min Fairness, where we first maximize the smallest allocation, and subject to that the second smallest, and so on. We consider a networking application where multiple commodities compete over the capacity of a network. In our setting each commodity has multiple possible paths to route its demand (for example, a network using MPLS tunneling). In this setting, the only known way of finding a max-min fair allocation requires an iterative solution of multiple linear programs. Such an approach, although polynomial time, scales badly with the size of the network, the number of demands, and the number of paths. More importantly, a network operator has limited control and understanding of the inner working of the algorithm. Finally, this approach is inherently centralized and cannot be implemented via a distributed protocol. In this paper we introduce Upward Max-Min Fairness, a novel relaxation of Max-Min Fairness and present a family of simple dynamics that converge to it. These dynamics can be implemented in a distributed manner. Moreover, we present an efficient combinatorial algorithm for finding an upward max-min fair allocation, which is a natural extension of the well known Water Filling Algorithm for a multiple path setting. We test the expected behavior of this new algorithm and show that on realistic networks upward max-min fair allocations are comparable to the max-min fair allocations both in fairness and in network utilization.

【Keywords】: combinatorial mathematics; computational complexity; iterative methods; linear programming; minimax techniques; multiprotocol label switching; resource allocation; telecommunication network routing; MPLS tunneling; combinatorial algorithm; iterative solution; linear program; max-min fair allocation; max-min fairness relaxation; network operator; network utilization; networking application; path setting; polynomial time; shared resource allocation; upward max-min fairness; water filling algorithm

95. A practical algorithm for balancing the max-min fairness and throughput objectives in traffic engineering.

Paper Link】 【Pages】:846-854

【Authors】: Emilie Danna ; Subhasree Mandal ; Arjun Singh

【Abstract】: One of the goals of traffic engineering is to achieve a flexible trade-off between fairness and throughput so that users are satisfied with their bandwidth allocation and the network operator is satisfied with the utilization of network resources. In this paper, we propose a novel way to balance the throughput and fairness objectives with linear programming. It allows the network operator to precisely control the trade-off by bounding the fairness degradation for each commodity compared to the max-min fair solution or the throughput degradation compared to the optimal throughput. We also present improvements to a previous algorithm that achieves max-min fairness by solving a series of linear programs. We significantly reduce the number of steps needed when the access rate of commodities is limited. We extend the algorithm to two important practical use cases: importance weights and piece-wise linear utility functions for commodities. Our experiments on synthetic and real networks show that our algorithms achieve a significant speedup and provide practical insights on the trade-off between fairness and throughput.

【Keywords】: bandwidth allocation; linear programming; minimax techniques; telecommunication network routing; telecommunication traffic; bandwidth allocation; fairness degradation; fairness objectives; importance weights; linear programming; max-min fairness; piecewise linear utility functions; throughput degradation; throughput objectives; traffic engineering; Approximation algorithms; Bandwidth; Degradation; Linear programming; Resource management; Throughput; Upper bound

96. Wireless capacity and admission control in cognitive radio.

Paper Link】 【Pages】:855-863

【Authors】: Magnús M. Halldórsson ; Pradipta Mitra

【Abstract】: We give algorithms with constant-factor performance guarantees for several capacity and throughput problems in the SINR model. The algorithms are all based on a novel LP formulation for capacity problems. First, we give a new constant-factor approximation algorithm for selecting the maximum subset of links that can be scheduled simultaneously, under any non-decreasing and sublinear power assignment. For the case of uniform power, we extend this to the case of variable QoS requirements and link-dependent noise terms. Second, we approximate a problem related to cognitive radio: find a maximum set of links that can be simultaneously scheduled without affecting a given set of previously assigned links. Finally, we obtain constant-factor approximation of weighted capacity under linear power assignment.

【Keywords】: cognitive radio; quality of service; telecommunication congestion control; SINR model; admission control; cognitive radio; constant-factor approximation algorithm; constant-factor performance guarantees; link-dependent noise terms; sublinear power assignment; variable QoS requirements; wireless capacity; Admission control; Approximation algorithms; Approximation methods; Interference; Noise; Optimized production technology

97. Full/half duplex based resource allocations for statistical quality of service provisioning in wireless relay networks.

Paper Link】 【Pages】:864-872

【Authors】: Wenchi Cheng ; Xi Zhang ; Hailin Zhang

【Abstract】: Integrating information theory with the principle of effective capacity, we propose the optimal resource allocation schemes for wireless full duplex and half duplex relay networks, respectively, to support the statistical quality-of-service (QoS) provisioning. In particular, we introduce a new control parameter, termed cancellation coefficient, to characterize the performance of full duplex relay transmission mode. For both amplitude-and-forward (AF) and decode-and-forward (DF) relay networks, we develop the dynamic hybrid resource allocation policies under full duplex and half duplex transmission modes to maximize the network throughput for the given delay QoS constraint measured by the QoS exponent. The numerical results obtained verify that our proposed resource allocation schemes can support diverse QoS requirements over wireless relay networks under full duplex and half duplex transmission modes. Our analysis indicates that the optimal effective capacity of perfect full duplex transmission mode is not the just twice as much as the optimal effective capacity of half duplex transmission mode. Our numerical analyses also show that the hybrid transmission mode can achieve better performance than just using full duplex or half duplex transmission mode alone.

【Keywords】: amplify and forward communication; decode and forward communication; information theory; quality of service; resource allocation; QoS provisioning; amplitude-and-forward relay networks; cancellation coefficient; decode-and-forward relay networks; dynamic hybrid resource allocation; full duplex relay transmission mode; full/half duplex; half duplex relay networks; information theory; optimal resource allocation; statistical quality of service provisioning; wireless full duplex; wireless relay networks; Convex functions; Delay; Protocols; Quality of service; Relays; Resource management; Wireless communication; Wireless relay networks; convex optimization; effective capacity; full duplex; quality-of-service (QoS); resource allocations

98. Truthful spectrum auction design for secondary networks.

Paper Link】 【Pages】:873-881

【Authors】: Yuefei Zhu ; Baochun Li ; Zongpeng Li

【Abstract】: Opportunistic wireless channel access by non-licensed users has emerged as a promising solution for addressing the bandwidth scarcity challenge. Auctions represent a natural mechanism for allocating the spectrum, generating an economic incentive for the licensed user to relinquish channels. A severe limitation of existing spectrum auction designs lies in the over-simplifying assumption that every non-licensed user is a single-node or single-link secondary user. While such an assumption makes the auction design easier, it does not capture practical scenarios where users have multihop routing demands. For the first time in the literature, we propose to model non-licensed users as secondary networks (SNs), each of which comprises of a multihop network with end-to-end routing demands. We aim to design truthful auctions for allocating channels to SNs in a coordinated fashion that maximizes social welfare of the system. We use simple examples to show that such auctions among SNs differ drastically from simple auctions among single-hop users, and previous solutions suffer severely from local, per-hop decision making. We first design a simple, heuristic auction that takes inter-SN interference into consideration, and is truthful. We then design a randomized auction based on primal-dual linear optimization, with a proven performance guarantee for approaching optimal social welfare. A key technique in our solution is to decompose a linear program (LP) solution for channel assignment into a set of integer program (IP) solutions, then applying a pair of tailored primal and dual LPs for computing probabilities of choosing each IP solution. We prove the truthfulness and performance bound of our solution, and verify its effectiveness through simulation studies.

【Keywords】: bandwidth allocation; channel allocation; integer programming; linear programming; probability; radio spectrum management; radiofrequency interference; telecommunication network routing; wireless channels; IP solution; LP solution; bandwidth scarcity; channel allocation; channel assignment; economic incentive; end-to-end routing demands; heuristic auction; integer program; interSN interference; linear program; multihop network; nonlicensed user; opportunistic wireless channel; optimal social welfare; primal-dual linear optimization; probability; randomized auction; secondary network; single-hop user; single-link secondary user; single-node secondary user; spectrum allocation; truthful spectrum auction design; Channel allocation; Cost accounting; IP networks; Interference; Joints; Resource management; Tin

99. Channel allocation in non-cooperative multi-radio multi-channel wireless networks.

【Paper Link】 【Pages】:882-890

【Authors】: Dejun Yang ; Xi Fang ; Guoliang Xue

【Abstract】: While tremendous efforts have been made on channel allocation problems in wireless networks, most of them focus on cooperative networks, with few exceptions [6, 8, 31, 32]. Among the works on non-cooperative networks, none considers networks with multiple collision domains. Instead, they all assume a single collision domain, where all transmissions interfere with each other if they are on the same channel. In this paper, we fill this void and generalize the channel allocation problem to non-cooperative multi-radio multi-channel wireless networks with multiple collision domains. We formulate the problem as a strategic game, called ChAlloc. We show that the ChAlloc game may result in an oscillation when there are no exogenous factors to influence players' strategies. To avoid this possible oscillation, we design a charging scheme to induce players to converge to a Nash Equilibrium (NE). We bound the convergence speed and prove that the system performance in an NE is at least (1 - r̅/h) of the system performance in an optimal solution, where r̅ is the maximum number of radios equipped on a wireless device and h is the number of available channels. In addition, we develop a localized algorithm for players to find an NE strategy. Finally, we evaluate our design through extensive experiments. The results validate our analysis of the possible oscillation in the ChAlloc game without the charging scheme, confirm the convergence of the ChAlloc game with the charging scheme, and verify our performance bound against the upper bounds returned by an LP-based algorithm.

【Keywords】: game theory; radio networks; ChAlloc game; LP-based algorithm; NE strategy; Nash equilibrium; channel allocation problems; charging scheme; convergence speed; multiple collision domains; noncooperative multi-radio multichannel wireless networks; player strategies; single collision domain; strategic game; system performance; wireless devices; Channel allocation; Games; Interference; Oscillators; System performance; Wireless networks

100. Efficient data retrieval scheduling for multi-channel wireless data broadcast.

【Paper Link】 【Pages】:891-899

【Authors】: Zaixin Lu ; Yan Shi ; Weili Wu ; Bin Fu

【Abstract】: Wireless data broadcast is an efficient technique for disseminating data simultaneously to a large number of mobile clients. In many information services, users may query multiple data items at a time. In this paper, we study the data retrieval scheduling problem from the client's point of view. We formally define the Largest Number Data Retrieval (LNDR) problem, with the objective of downloading the largest number of requested data items in a given time duration, and the Minimum Cost Data Retrieval (MCDR) problem, which aims at downloading a set of data items with the minimum energy consumption. When the time needed for channel switching can be ignored, an optimal polynomial-time algorithm based on Maximum Matching is presented for LNDR; when the switching time cannot be neglected, LNDR is proven to be NP-hard and a greedy algorithm with a constant approximation ratio is developed. We also prove that the MCDR problem is NP-hard to approximate within any nontrivial factor, and a parameterized heuristic is devised to solve MCDR non-optimally.

【Keywords】: computational complexity; greedy algorithms; information retrieval; information services; radio networks; scheduling; NP-hard problem; channel switching; data retrieval scheduling problem; efficient data retrieval scheduling; greedy algorithm; information services; largest number data retrieval problem; maximum matching optimal algorithm; minimum cost data retrieval problem; minimum energy consumption; mobile clients; multichannel wireless data broadcast; Approximation methods; Energy consumption; Optimized production technology; Polynomials; Schedules; Switches; Wireless communication; Approximation and Inapproximation; Data retrieval; Multi-channel; NP-hard; Wireless data broadcast

101. Vulnerability and protection for distributed consensus-based spectrum sensing in cognitive radio networks.

【Paper Link】 【Pages】:900-908

【Authors】: Qiben Yan ; Ming Li ; Tingting Jiang ; Wenjing Lou ; Y. Thomas Hou

【Abstract】: Cooperative spectrum sensing is key to the success of cognitive radio networks. Recently, fully distributed cooperative spectrum sensing has been proposed for its high performance benefits, particularly in cognitive radio ad hoc networks. However, the cooperative and fully distributed nature of such protocols makes them highly vulnerable to malicious attacks, and makes defense very difficult. In this paper, we analyze the vulnerabilities of the distributed sensing architecture based on a representative distributed consensus-based spectrum sensing algorithm. We find that such a distributed algorithm is particularly vulnerable to a novel form of attack called the covert adaptive data injection attack. The vulnerabilities are even magnified under multiple colluding attackers. We further propose effective protection mechanisms, which include a robust distributed outlier detection scheme with an adaptive local threshold to thwart the covert adaptive data injection attack, and a hash-based computation verification approach to cope with collusion attacks. Through simulation and analysis, we demonstrate the destructive power of the attacks, and validate the efficacy and efficiency of our proposed protection mechanisms.

【Keywords】: ad hoc networks; cognitive radio; cooperative communication; cryptography; radio spectrum management; telecommunication security; adaptive local threshold; cognitive radio ad hoc network; covert adaptive data injection attack; distributed consensus-based spectrum sensing; distributed cooperative spectrum sensing; distributed sensing architecture; hash-based computation verification; malicious attack; multiple colluding attacker; protection mechanism; robust distributed outlier detection scheme; Cognitive radio; Computational modeling; Detection algorithms; Distributed databases; Protocols; Robustness; Sensors

102. BitTrickle: Defending against broadband and high-power reactive jamming attacks.

【Paper Link】 【Pages】:909-917

【Authors】: Yao Liu ; Peng Ning

【Abstract】: Reactive jamming is not only cost-effective, but also hard to track and remove due to its intermittent jamming behavior. Frequency Hopping Spread Spectrum (FHSS) and Direct Sequence Spread Spectrum (DSSS) have been widely used to defend against jamming attacks. However, both fail if the jammer jams all frequency channels or has high transmit power. In this paper, we propose BitTrickle, an anti-jamming wireless communication scheme that allows communication in the presence of a broadband, high-power reactive jammer by exploiting the jammer's reaction time. We develop a prototype of BitTrickle using the USRP platform running GNURadio. Our evaluation shows that under powerful reactive jamming, BitTrickle still maintains communication, whereas other schemes such as 802.11 DSSS fail completely.

【Keywords】: code division multiple access; frequency hop communication; jamming; spread spectrum communication; telecommunication standards; wireless LAN; BitTrickle; GNURadio; IEEE 802.11 DSSS; antijamming wireless communication; direct sequence spread spectrum; frequency channels; frequency hopping spread spectrum; high-power reactive jamming attacks; intermittent jamming; Constellation diagram; Decoding; Jamming; Receivers; Spread spectrum communication; Wireless communication

103. A formal analysis of IEEE 802.11w deadlock vulnerabilities.

【Paper Link】 【Pages】:918-926

【Authors】: Martin Eian ; Stig Fr. Mjølsnes

【Abstract】: Formal methods can be used to discover obscure denial of service (DoS) vulnerabilities in wireless network protocols. The application of formal methods to the analysis of DoS vulnerabilities in communication protocols is not a mature research area. Although several formal models have been proposed, they lack a clear and convincing demonstration of their usefulness and practicality. This paper bridges the gap between theory and practice, and shows how a simple protocol model can be used to discover protocol deadlock vulnerabilities. A deadlock vulnerability is the most severe form of DoS vulnerability; thus, checking for deadlock vulnerabilities is an essential part of robust protocol design. We demonstrate the usefulness of the proposed method through the discovery and experimental validation of deadlock vulnerabilities in the published IEEE 802.11w amendment to the 802.11 standard. We present the complete procedure of our approach, from model construction to verification and validation. An appendix includes the complete model source code, which facilitates the replication and extension of our results. The source code can also be used as a template for modeling other protocols.

【Keywords】: computer network security; formal verification; protocols; wireless LAN; IEEE 802.11w deadlock vulnerabilities; communication protocols; formal analysis; model construction; obscure denial of service vulnerabilities; protocol deadlock vulnerabilities; robust protocol design; simple protocol model; validation; verification; wireless network protocols; Authentication; Computer crime; IEEE 802.11 Standards; Protocols; Switches; System recovery

104. Collaborative secret key extraction leveraging Received Signal Strength in mobile wireless networks.

【Paper Link】 【Pages】:927-935

【Authors】: Hongbo Liu ; Jie Yang ; Yan Wang ; Yingying Chen

【Abstract】: Securing communication in mobile wireless networks is challenging because traditional cryptographic methods are not always applicable in dynamic mobile wireless environments. Using physical-layer information of the radio channel to secretly generate keys among wireless devices has been proposed as an alternative in mobile wireless networks, and Received Signal Strength (RSS) based secret key extraction has gained much attention because RSS readings are readily available in wireless infrastructure. However, the problem of using RSS to generate keys among multiple devices to ensure secure group communication remains open. In this work, we propose a framework for collaborative key generation among a group of wireless devices leveraging RSS. The proposed framework consists of a secret key extraction scheme exploiting the trend exhibited in RSS resulting from shadow fading, which is robust to an outside adversary performing stalking attacks. To deal with mobile devices not within each other's communication range, we employ relay nodes to achieve reliable key extraction. To enable secure group communication, two protocols, namely star-based and chain-based, are developed in our framework by exploiting RSS from multiple devices to perform group key generation collaboratively. Our experiments in both outdoor and indoor environments confirm the feasibility of using RSS for group key generation among multiple wireless devices under various mobile scenarios. The results also demonstrate that our collaborative key extraction scheme can achieve a lower bit mismatch rate than existing works while maintaining a comparable bit generation rate.

【Keywords】: cryptography; mobile communication; telecommunication security; collaborative key generation; collaborative secret key extraction; mobile wireless networks; physical layer information; radio channel; received signal strength; secure group communication; wireless devices; Collaboration; Communication system security; Fading; Mobile communication; Quantization; Relays; Wireless communication

105. When cloud meets eBay: Towards effective pricing for cloud computing.

【Paper Link】 【Pages】:936-944

【Authors】: Qian Wang ; Kui Ren ; Xiaoqiao Meng

【Abstract】: The rapid deployment of cloud computing promises network users elastic, abundant, and on-demand cloud services. The pay-as-you-go model allows users to be charged only for the services they use. Current purchasing designs, however, are still primitive, with significant constraints. Spot Instance, the first deployed auction-style pricing model of Amazon EC2, fails to enforce fair competition among users in the face of resource scarcity and may thus lead to untruthful bidding and unfair resource allocation. Dishonest users are able to abuse the system and obtain (at least) short-term advantages by deliberately setting large maximum price bids while being charged only at lower Spot Prices. Meanwhile, this may also prevent the demands of honest users from being satisfied due to resource scarcity. Furthermore, Spot Instance is inefficient and may not adequately meet users' overall demands because it limits users to bidding for each computing instance individually instead of multiple different instances at a time. In this paper, we formulate and investigate the problem of cloud resource pricing. We propose a suite of computationally efficient and truthful auction-style pricing mechanisms, which enable users to fairly compete for resources and cloud providers to increase their overall revenue. We analytically show that the proposed algorithms can achieve truthfulness without collusion, or (t, p)-truthfulness tolerating a collusion group of size t with probability at least p. We also show that the two proposed algorithms have polynomial complexities O(nm + n²) and O(nm), respectively, when n users compete for m different computing instances with multiple units. Extensive simulations show that, in a competitive cloud resource market, the proposed mechanisms can increase the revenue of cloud providers, especially when allocating relatively limited computing resources to a potentially large number of cloud users.

【Keywords】: Web sites; cloud computing; computational complexity; pricing; Amazon EC2; auction-style pricing mechanisms; cloud computing; cloud resource pricing; eBay; on-demand cloud services; pay-as-you-go model; polynomial complexities; probability; Algorithm design and analysis; Cloud computing; Computational modeling; Cost accounting; Polynomials; Pricing; Resource management

106. ThinkAir: Dynamic resource allocation and parallel execution in the cloud for mobile code offloading.

【Paper Link】 【Pages】:945-953

【Authors】: Sokol Kosta ; Andrius Aucinas ; Pan Hui ; Richard Mortier ; Xinwen Zhang

【Abstract】: Smartphones have exploded in popularity in recent years, becoming ever more sophisticated and capable. As a result, developers worldwide are building increasingly complex applications that require ever-increasing amounts of computational power and energy. In this paper we propose ThinkAir, a framework that makes it simple for developers to migrate their smartphone applications to the cloud. ThinkAir exploits the concept of smartphone virtualization in the cloud and provides method-level computation offloading. Advancing on previous work, it focuses on the elasticity and scalability of the cloud and enhances the power of mobile cloud computing by parallelizing method execution using multiple virtual machine (VM) images. We implement ThinkAir and evaluate it with a range of benchmarks, from simple micro-benchmarks to more complex applications. First, we show that execution time and energy consumption decrease by two orders of magnitude for an N-queens puzzle application and by one order of magnitude for a face detection and a virus scan application. We then show that a parallelizable application can invoke multiple VMs to execute in the cloud in a seamless and on-demand manner so as to achieve greater reductions in execution time and energy consumption. We finally use a memory-hungry image combiner tool to demonstrate that applications can dynamically request VMs with more computational power in order to meet their computational requirements.

【Keywords】: cloud computing; computer games; computer viruses; distributed programming; face recognition; mobile computing; resource allocation; smart phones; telecommunication computing; virtual machines; virtualisation; N-queens puzzle application; ThinkAir; VM; cloud elasticity; cloud scalability; dynamic resource allocation; energy consumption; execution time; face detection; memory-hungry image combiner; method-level computation offloading; mobile cloud computing; mobile code offloading; multiple virtual machine images; parallel execution; smartphone virtualization; virus scan application; Cloning; Energy consumption; Hardware; Mobile communication; Servers; Smart phones; Software

107. Energy-aware load balancing in content delivery networks.

【Paper Link】 【Pages】:954-962

【Authors】: Vimal Mathew ; Ramesh K. Sitaraman ; Prashant J. Shenoy

【Abstract】: Internet-scale distributed systems such as content delivery networks (CDNs) operate hundreds of thousands of servers deployed in thousands of data center locations around the globe. Since the energy costs of operating such a large IT infrastructure are a significant fraction of the total operating costs, we argue for redesigning CDNs to incorporate energy optimizations as a first-order principle. We propose techniques to turn off CDN servers during periods of low load while seeking to balance three key design goals: maximize energy reduction, minimize the impact on client-perceived service availability (SLAs), and limit the frequency of on-off server transitions to reduce wear-and-tear and its impact on hardware reliability. We propose an optimal offline algorithm and an online algorithm to extract energy savings both at the level of local load balancing within a data center and global load balancing across data centers. We evaluate our algorithms using real production workload traces from a large commercial CDN. Our results show that it is possible to reduce the energy consumption of a CDN by 51% while ensuring a high level of availability that meets customer SLA requirements and incurring an average of one on-off transition per server per day. Further, we show that keeping even 10% of the servers as hot spares helps absorb load spikes due to global flash crowds and minimize any impact on availability SLAs. Finally, we show that redistributing load across highly proximal data centers can enhance service availability significantly, but has only a modest impact on energy savings.

【Keywords】: Internet; client-server systems; computer centres; file servers; power aware computing; reliability; resource allocation; CDN servers; IT infrastructure; Internet-scale distributed systems; client-perceived service availability; content delivery networks; customer SLA requirements; energy costs; energy optimizations; energy reduction; energy-aware load balancing; first-order principle; hardware reliability; on-off server transitions; optimal offline algorithm; optimal online algorithm; proximal data centers; Availability; Clustering algorithms; Energy consumption; Load management; Load modeling; Servers; Turning

108. Network aware resource allocation in distributed clouds.

【Paper Link】 【Pages】:963-971

【Authors】: Mansoor Alicherry ; T. V. Lakshman

【Abstract】: We consider resource allocation algorithms for distributed cloud systems, which deploy cloud-computing resources that are geographically distributed over a large number of locations in a wide-area network. This distribution of cloud-computing resources over many locations may be done for several reasons, such as to locate resources closer to users, to reduce bandwidth costs, or to increase availability. To get the maximum benefit from a distributed cloud system, we need efficient resource allocation algorithms that minimize communication costs and latency. In this paper, we develop efficient resource allocation algorithms for use in distributed clouds. Our contributions are as follows. Assuming that users specify their resource needs, such as the number of virtual machines needed for a large computational task, we develop an efficient 2-approximation algorithm for the optimal selection of data centers in the distributed cloud. Our objective is to minimize the maximum distance, or latency, between the selected data centers. Next, we consider the use of a similar algorithm to select, within each data center, the racks and servers where the requested virtual machines for the task will be located. Since the network inside a data center is structured, and typically a tree, we make use of this structure to develop an optimal algorithm for rack and server selection. Finally, we develop a heuristic for partitioning the requested resources for the task among the chosen data centers and racks. We use simulations to evaluate the performance of our algorithms over example distributed cloud systems and find that our algorithms provide significant gains over other, simpler allocation algorithms.

【Keywords】: approximation theory; cloud computing; computer centres; resource allocation; virtual machines; 2-approximation algorithm; bandwidth cost reduction; cloud computing resource deployment; communication cost minimization; communication latency minimization; distributed cloud systems; network aware resource allocation; optimal data center selection; rack selection; resource needs; server selection; virtual machines; wide area network; Switches

109. Traffic-aware multiple mix zone placement for protecting location privacy.

【Paper Link】 【Pages】:972-980

【Authors】: Xinxin Liu ; Han Zhao ; Miao Pan ; Hao Yue ; Xiaolin Li ; Yuguang Fang

【Abstract】: Privacy protection is of critical concern to Location-Based Service (LBS) users in mobile networks. Long-term pseudonyms, although they appear anonymous, in fact empower third-party service providers to continuously track users' movements. Researchers have proposed the mix zone model to allow pseudonym changes in protected areas. In this paper, we investigate a new form of privacy attack on the LBS system, in which an adversary reveals a user's true identity and complete moving trajectory with the aid of side information. We propose a new metric to quantify the system's resilience to such attacks, and suggest using multiple mix zones to tackle this problem. A mathematical model is presented that treats the deployment of multiple mix zones as a cost-constrained optimization problem. Furthermore, the influence of traffic density is also taken into account to enhance the protection effectiveness. The placement optimization problem is NP-hard. We therefore design two heuristic algorithms as practical and effective means to strategically select mix zone locations, and consequently reduce the privacy risks to mobile users' trajectories. The effectiveness of our proposed solutions is demonstrated through extensive simulations on real-world mobile user data traces.

【Keywords】: communication complexity; data privacy; mobility management (mobile radio); optimisation; telecommunication traffic; NP-hard; cost constrained optimization problem; heuristic algorithm design; location privacy protection; location-based service; long-term pseudonyms; mathematical model; mobile network; placement optimization problem; privacy attack; privacy risk; traffic density; traffic-aware multiple mix zone placement; Entropy; Joining processes; Large scale integration; Mobile communication; Privacy; Roads; Trajectory

110. On the design of scheduling algorithms for end-to-end backlog minimization in multi-hop wireless networks.

【Paper Link】 【Pages】:981-989

【Authors】: Shizhen Zhao ; Xiaojun Lin

【Abstract】: In this paper, we study the problem of link scheduling for multi-hop wireless networks with per-flow delay constraints. Specifically, we are interested in algorithms that maximize the asymptotic decay-rate of the probability with which the maximum end-to-end backlog among all flows exceeds a threshold, as the threshold becomes large. We provide both positive and negative results in this direction. By minimizing the drift of the maximum end-to-end backlog in the converge-cast on a tree, we design an algorithm, Largest-Weight-First (LWF), that achieves the optimal asymptotic decay-rate for the overflow probability of the maximum end-to-end backlog as the threshold becomes large. However, such a drift minimization algorithm may not exist for general networks. We provide an example in which no algorithm can minimize the drift of the maximum end-to-end backlog. Finally, we simulate the LWF algorithm together with a well-known algorithm (the back-pressure algorithm) and a large-deviations optimal algorithm in terms of the sum-queue (the P-TREE algorithm) in converge-cast networks. Our simulations show that our algorithm performs significantly better not only in terms of the asymptotic decay-rate, but also in terms of the actual overflow probability.

【Keywords】: minimisation; probability; queueing theory; radio links; radio networks; trees (mathematics); LWF algorithm; P-tree algorithm; back-pressure algorithm; converge-cast network; drift minimization algorithm; end-to-end backlog minimization; large-deviations optimal algorithm; largest-weight-first algorithm; link scheduling; maximum end-to-end backlog; multihop wireless network; optimal asymptotic decay-rate; overflow probability; per-flow delay constraint; scheduling algorithm; sum-queue; Algorithm design and analysis; Delay; Network topology; Scheduling algorithms; Spread spectrum communication; Topology; Wireless networks

111. Scheduling in networks with time-varying channels and reconfiguration delay.

【Paper Link】 【Pages】:990-998

【Authors】: Guner D. Celik ; Eytan Modiano

【Abstract】: We consider the optimal control problem for networks subject to time-varying channels, reconfiguration delays, and interference constraints. We model the network by a graph consisting of nodes, links, and a set of link interference constraints, where, based on the current network state, the controller decides either to stay with the current link-service configuration or to switch to another service configuration at the cost of idling during schedule reconfiguration. Reconfiguration delay occurs in many telecommunications applications and is a new modeling component of this problem that has not been previously addressed. We show that the simultaneous presence of time-varying channels and reconfiguration delays significantly reduces the system stability region and changes the structure of optimal policies. We first consider memoryless channel processes and characterize the stability region in closed form. We prove that a frame-based Max-Weight scheduling algorithm that sets frame durations dynamically, as a function of the current queue sizes and average channel gains, is throughput-optimal. Next, we consider arbitrary Markov-modulated channel processes and show that memory in the channel processes can be exploited to improve the stability region. We develop a novel approach to characterizing the stability region of such systems using state-action frequencies, which are stationary solutions to a Markov Decision Process (MDP) formulation. Finally, we develop a frame-based dynamic control (FBDC) policy, based on the state-action frequencies, and show that it is throughput-optimal asymptotically in the frame length. The FBDC policy is applicable to a broad class of network control systems, with or without reconfiguration delays, and provides a new framework for developing throughput-optimal network control policies using state-action frequencies.

【Keywords】: Markov processes; interference suppression; memoryless systems; optimal control; queueing theory; telecommunication control; time-varying channels; wireless channels; FBDC policy; MDP formulation; Markov decision process; Markov modulated channel process; average channel gain; frame-based dynamic control policy; frame-based max-weight scheduling algorithm; link interference constraint; link-service configuration; memoryless channel process; network control system; optimal control problem; queue size; reconfiguration delay; state-action frequency; system stability; throughput-optimal network control policy; time-varying channel; Delay; Schedules; Servers; Stability analysis; Switches; Time-varying channels; Vectors; Markov Decision Process; Reconfiguration delay; queueing; scheduling; switching delay; time-varying channels

112. On sample-path optimal dynamic scheduling for sum-queue minimization in trees under the K-hop interference model.

【Paper Link】 【Pages】:999-1007

【Authors】: Srikanth Hariharan ; Ness B. Shroff

【Abstract】: We investigate the problem of minimizing the sum of the queue lengths of all the nodes in a wireless network with a tree topology. Nodes send their packets to the tree's root (sink). We consider a time-slotted system, and a K-hop interference model. We characterize the existence of causal sample-path optimal scheduling policies in these networks, i.e., we wish to find a policy such that at each time slot, for any traffic arrival pattern, the sum of the queue lengths of all the nodes is minimum among all policies. We provide an algorithm that takes any tree and K as inputs, and outputs whether a causal sample-path optimal policy exists for this tree under the K-hop interference model. We show that when this algorithm returns FALSE, there exists a traffic arrival pattern for which no causal sample-path optimal policy exists for the given tree structure. We further show that for certain tree structures, even non-causal sample-path optimal policies do not exist. We provide causal sample-path optimal policies for those tree structures for which the algorithm returns TRUE. Thus, we completely characterize the existence of such policies for all trees under the K-hop interference model. The non-existence of sample-path optimal policies in a large class of tree structures implies that we need to study other (relatively) weaker metrics for this problem.

【Keywords】: radio networks; scheduling; telecommunication network topology; telecommunication traffic; K-hop interference model; causal sample-path optimal scheduling policies; sample-path optimal dynamic scheduling; sum-queue minimization; time-slotted system; traffic arrival pattern; tree structure; tree topology; wireless network; Classification algorithms; Interference; Schedules; Spread spectrum communication; Topology; Wireless networks

113. Efficient algorithms for sensor deployment and routing in sensor networks for network-structured environment monitoring.

【Paper Link】 【Pages】:1008-1016

【Authors】: Shuguang Xiong ; Lei Yu ; Haiying Shen ; Chen Wang ; Wei Lu

【Abstract】: When monitoring environments with wireless sensor networks, optimal sensor deployment is a fundamental issue and an effective means to achieve desired performance. Selecting the best sensor deployment depends on the deployment environment. Existing works address sensor deployment within three types of environments: one-dimensional lines, 2-D fields, and 3-D spaces. However, in many applications the deployment environments have network structures, which cannot simply be classified as any of these three types. The deployed locations and communications of sensor nodes are restricted to the network edges, which makes the deployment problem distinct from that in other types of environments. In this paper, we study sensor deployment in network-structured environments and aim to achieve k-coverage while minimizing the number of sensor nodes. Furthermore, we jointly consider the optimization of sink deployment and routing strategies with the goal of minimizing the network communication cost of data collection. To the best of our knowledge, this paper is the first to tackle sensor/sink deployment under the deployment constraints imposed by the network structure. The hardness of the problems is shown. Polynomial-time algorithms are proposed to determine optimal sensor/sink deployment and routing strategies in tree-topology network structures. Efficient approximation algorithms are proposed for general graph network structures, and their performance is analyzed. Theoretical results and extensive simulations show the efficiency of the proposed algorithms.

【Keywords】: optimisation; sensor placement; telecommunication network routing; telecommunication network topology; trees (mathematics); wireless sensor networks; data collection; hard problems; k-coverage; network communication cost; network structured environment monitoring; polynomial time algorithms; sensor deployment; sensor network routing; wireless sensor network; Approximation algorithms; Monitoring; Pipelines; Routing; Sensors; Topology; Wireless sensor networks

114. Environment-aware clock skew estimation and synchronization for wireless sensor networks.

Paper Link】 【Pages】:1017-1025

【Authors】: Zhe Yang ; Lin Cai ; Yu Liu ; Jianping Pan

【Abstract】: Clock synchronization is a fundamental requirement for networked systems. It is particularly crucial and challenging in wireless sensor networks (WSNs), because WSN environments are dynamic and unpredictable. To tackle this problem, we investigate how to accurately estimate clock skew, the root cause of clock desynchronization. According to our measurement results, clock skew is a non-stationary random process highly correlated with temperature, and its measurements contain severe noise. Based on this observation, we propose an additional-information-aided multi-model Kalman filter (AMKF) algorithm that uses temperature measurements to assist clock skew estimation. Using AMKF, we propose an environment-aware clock synchronization (EACS) scheme that dynamically compensates for clock skew. The scheme is simple, scalable, and of low computation and energy cost. Used as an additional component of conventional synchronization protocols, EACS updates the clock with local information before the clock re-synchronization process is triggered, so it can substantially prolong the re-synchronization period; this not only reduces energy consumption but is also essential for scenarios where frequent synchronization is infeasible. The theoretical lower bound of the clock skew estimation error is derived as a benchmark. Extensive simulation and experimental results demonstrate the feasibility and effectiveness of the proposed scheme, which can prolong the re-synchronization period by an order of magnitude in dynamic environments.
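
The temperature-assisted estimation idea can be illustrated with a scalar Kalman filter that fuses noisy skew measurements with a temperature-based skew prediction (a minimal sketch, not the paper's AMKF; the `skew_model` mapping and all constants below are hypothetical):

```python
import random

# Hypothetical quadratic temperature-to-skew model (ppm); illustrative only.
def skew_model(temp_c):
    return 0.04 * (temp_c - 25.0) ** 2

def kalman_track_skew(temps, noisy_skews, q=1e-3, r=0.5):
    """Scalar Kalman filter: blend the temperature-predicted skew with
    noisy skew measurements, in the spirit of temperature-aided estimation."""
    x, p = skew_model(temps[0]), 1.0        # state estimate and its variance
    estimates = []
    for temp, z in zip(temps, noisy_skews):
        # Predict: pull the estimate toward the temperature-based prediction.
        x = 0.5 * x + 0.5 * skew_model(temp)
        p = p + q
        # Update with the noisy skew measurement z.
        k = p / (p + r)                      # Kalman gain
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

random.seed(1)
temps = [25.0 + 0.2 * t for t in range(50)]
true_skew = [skew_model(T) for T in temps]
meas = [s + random.gauss(0, 0.7) for s in true_skew]
est = kalman_track_skew(temps, meas)
raw_err = sum(abs(m - s) for m, s in zip(meas, true_skew)) / len(true_skew)
kf_err = sum(abs(e - s) for e, s in zip(est, true_skew)) / len(true_skew)
```

Because the temperature model here predicts the true skew well, the filtered error falls below the raw measurement error, which mirrors the motivation for extending the re-synchronization period.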

【Keywords】: Kalman filters; clocks; energy consumption; protocols; synchronisation; wireless sensor networks; WSN; aided multi-model Kalman filter algorithm; clock synchronization; energy consumption; environment-aware clock skew estimation; network systems; synchronization protocols; temperature measurements; wireless sensor networks; Clocks; Estimation; Kalman filters; Noise; Synchronization; Temperature measurement; Temperature sensors

115. Locating sensors in the forest: A case study in GreenOrbs.

Paper Link】 【Pages】:1026-1034

【Authors】: Cheng Bo ; Danping Ren ; Shaojie Tang ; Xiang-Yang Li ; XuFei Mao ; Qiuyuan Huang ; Lufeng Mo ; Zhiping Jiang ; Yongmei Sun ; Yunhao Liu

【Abstract】: As a large-scale real sensor network system, GreenOrbs reveals that locating sensor nodes in the forest still faces great challenges because of volatile and fluctuating environmental factors. In this paper, we present a novel localization scheme, EARL, which provides accurate reference nodes and good ranging quality. We examine the ranging quality along routing paths by taking complex environmental factors into account, such as forest density, temperature, and humidity. To improve localization accuracy, we use a power-scanning technique to identify accurate reference nodes and further calibrate the bad nodes through reverse localization. To limit error propagation, we assign different weights to the range measurements according to their ranging quality. We implemented our localization scheme on the GreenOrbs testbed and evaluated it through extensive experiments. The results demonstrate that EARL outperforms current localization approaches; the localization accuracy achieved by our method is around 20% higher than that of the best existing methods.

【Keywords】: environmental factors; sensor placement; telecommunication network routing; EARL; GreenOrbs; environmental factors; error propagation; large scale real sensor network system; localization accuracy; power scanning; ranging quality; reference nodes; reverse localization; routing paths; sensor localisation; sensor nodes; Accuracy; Green products; Humidity; Radio frequency; Sensors; Vegetation; Wireless sensor networks

116. Snapshot/Continuous Data Collection capacity for large-scale probabilistic Wireless Sensor Networks.

Paper Link】 【Pages】:1035-1043

【Authors】: Shouling Ji ; Raheem A. Beyah ; Zhipeng Cai

【Abstract】: Data collection is a common operation of Wireless Sensor Networks (WSNs). The performance of data collection can be measured by its achievable network capacity. Most current works on network capacity are based on the deterministic network model, which is not practical for real applications due to the "transitional region phenomenon" [22]. The probabilistic network model is a more practical one. In this paper, we investigate the achievable Snapshot/Continuous Data Collection (SDC/CDC) capacity of WSNs under the probabilistic network model. For SDC, we propose a novel Cell-based Multi-Path Scheduling (CMPS) algorithm whose achievable network capacity is Ω(po/3ω · W) in the worst case and Ω(po/ω · W) in the average case, where po is the promising transmission threshold probability, ω is a constant, and W is the data transmission rate over a wireless channel, i.e., the channel bandwidth; both bounds are order-optimal. For CDC, we propose a Zone-based Pipeline Scheduling (ZPS) algorithm. ZPS significantly speeds up the data collection process and achieves surprising network capacities in both the worst case and the average case. The simulation results also validate that the proposed algorithms significantly improve network capacity compared with existing works.

【Keywords】: channel capacity; probability; scheduling; wireless channels; wireless sensor networks; cell based multipath scheduling algorithm; continuous data collection capacity; deterministic network; large scale probabilistic wireless sensor networks; network capacity; probabilistic network model; snapshot data collection capacity; transitional region phenomenon; transmission threshold probability; wireless channel; zone based pipeline scheduling; Capacity planning; Data communication; Probabilistic logic; Schedules; Scheduling; Sensors; Wireless sensor networks

117. Mitigating timing based information leakage in shared schedulers.

Paper Link】 【Pages】:1044-1052

【Authors】: Sachin Kadloor ; Negar Kiyavash ; Parv Venkitasubramaniam

【Abstract】: In this work, we study information leakage in timing side channels that arise in the context of shared event schedulers. Consider two processes, one innocuous (referred to as Alice) and the other malicious (referred to as Bob), using a common scheduler to process their jobs. Based on when his jobs get processed, Bob wishes to learn the pattern (size and timing) of Alice's jobs. Depending on the context, knowledge of this pattern could have serious implications for Alice's privacy and security: for instance, shared routers can reveal traffic patterns, and shared memory access can reveal cloud usage patterns. We present a formal framework to study information leakage in shared resource schedulers, using the pattern estimation error as a performance metric. Within this framework, a uniform upper bound is derived to benchmark different scheduling policies. The first-come-first-serve scheduling policy is analyzed and shown to leak significant information when the scheduler is heavily loaded. To mitigate the timing information leakage, we propose an "Accumulate-and-Serve" policy, which buys privacy at the cost of higher delay. The policy is analyzed under the proposed framework and shown to leak minimal information to the attacker, while incurring lower delay than a fixed scheduler that preemptively assigns service times irrespective of traffic patterns.
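
The accumulate-and-serve idea can be sketched as a batching scheduler that releases buffered jobs only at fixed epoch boundaries, so completion times reveal arrivals only at epoch granularity (a toy illustration; the paper's policy and its delay analysis are more refined):

```python
# Toy accumulate-and-serve scheduler: jobs are buffered and released only at
# the end of the epoch in which they arrive. Within an epoch, an observer of
# release times cannot distinguish different arrival patterns.

def accumulate_and_serve(arrivals, epoch=10):
    """arrivals: list of (arrival_time, job_id) pairs.
    Returns a dict mapping job_id -> release_time (end of its epoch)."""
    return {job: (t // epoch + 1) * epoch for t, job in arrivals}

# Two different arrival patterns inside the same epoch...
rel_1 = accumulate_and_serve([(1, "a"), (3, "b")], epoch=10)
rel_2 = accumulate_and_serve([(7, "a"), (9, "b")], epoch=10)
# ...produce identical release times, hiding the timing information.
```

The privacy gain comes at a delay cost that grows with the epoch length, the privacy-for-delay trade-off the abstract describes.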

【Keywords】: data privacy; scheduling; telecommunication channels; telecommunication network routing; telecommunication security; telecommunication traffic; Alice; Bob; accumulate-and-serve policy; cloud usage patterns; first-come-first-serve scheduling policy; information leakage; innocuous process; malicious process; pattern estimation error; shared event schedulers; shared memory access; shared routers; timing side channels; traffic patterns; Delay; Estimation error; Privacy; Scheduling; Time division multiple access

118. Fair background data transfers of minimal delay impact.

Paper Link】 【Pages】:1053-1061

【Authors】: Costas Courcoubetis ; Antonis Dimakis

【Abstract】: In this paper we present a methodology for the design of congestion control protocols for background data transfers that have minimal delay impact on short TCP transfers and compete for a target share of the leftover average capacity with other background TCP transfers. We analytically compute the optimal policy and show that its delay performance can be well approximated by a weighted TCP policy, which maintains a target proportion of TCP's bandwidth at all times in order to achieve the same share of excess capacity. The relative approximation error is always less than 17.2% and quickly decreases to zero as the number of coexisting background TCP flows increases. Next, we consider a general utility-based fairness criterion for sharing the leftover average capacity, including a penalty term capturing the delay impact on short flows. The criterion is jointly optimized over all allocations of excess capacity to background flows (including TCP ones) on long timescales and all bandwidth-sharing policies on fast timescales. Even though the delay-optimal sharing policy that solves the above optimization problem does not lead to distributed congestion control algorithms and, more significantly, requires knowledge of the number of competing background TCP flows, both problems disappear under the weighted TCP policy. A distributed weight adjustment policy is considered where, at equilibrium, the overall performance is nearly optimal, with a vanishing relative optimality error as the number of background TCP flows increases. We illustrate the methodology by giving two examples of congestion control algorithms for background transfers. Both achieve low delay for short flows relative to TCP, yet at the same time they present strong incentives for adoption against incumbent low-priority solutions in public environments.

【Keywords】: approximation theory; delays; optimisation; telecommunication congestion control; transport protocols; TCP bandwidth; Transport Control Protocol; background TCP flows; background data transfers; bandwidth sharing policies; congestion control protocols; delay optimal sharing policy; distributed weight adjustment policy; leftover average capacity; minimal delay impact; optimization problem; penalty term; public environments; relative approximation error; relative optimality error; short TCP transfers; utility-based fairness criterion; weighted TCP policy; Algorithm design and analysis; Approximation algorithms; Approximation methods; Bandwidth; Delay; Optimization; Throughput

119. Optimal scheduling policies with mutual information accumulation in wireless networks.

Paper Link】 【Pages】:1062-1070

【Authors】: Jing Yang ; Yanpei Liu ; Stark C. Draper

【Abstract】: In this paper, we aim to develop scheduling policies that maximize the stability region of a wireless network under the assumption that mutual information accumulation is implemented at the physical layer. This enhanced physical-layer capability enables the system to accumulate information even when the link between two nodes is poor and a packet cannot be decoded within a slot. The result is an expansion of the stability region of the system. The accumulation process does not satisfy the i.i.d. assumption that underlies much previous analysis in this area, and therefore brings new challenges to the problem. We propose two dynamic scheduling algorithms to overcome this difficulty. One performs scheduling every T slots, which inevitably increases the average delay in the system but approaches the boundary of the stability region. The second constructs a virtual system with the same stability region; by controlling the virtual queues in the constructed system, we avoid the non-i.i.d. difficulty and attain the stability region. We derive performance bounds under both algorithms and compare them through simulation results.

【Keywords】: cooperative communication; queueing theory; scheduling; T slot; average delay; dynamic scheduling algorithm; mutual information accumulation; optimal scheduling policies; physical layer capability; stability region; virtual queues; virtual system; wireless networks; Decoding; Mutual information; Receivers; Routing; Stability analysis; Vectors; Wireless networks

120. Low-delay wireless scheduling with partial channel-state information.

Paper Link】 【Pages】:1071-1079

【Authors】: Aditya Gopalan ; Constantine Caramanis ; Sanjay Shakkottai

【Abstract】: We consider a server serving a time-slotted queued system of multiple packet-based flows, where not more than one flow can be serviced in a single time slot. The flows have exogenous packet arrivals and time-varying service rates. At each time, the server can observe instantaneous service rates for only a subset of flows (selected from a fixed collection of observable subsets) before scheduling a flow in the subset for service. We are interested in queue-length-aware scheduling to keep the queues short. The limited availability of instantaneous service rate information requires the scheduler to make a careful choice of which subset of service rates to sample. We develop scheduling algorithms that use only partial service rate information from subsets of channels, and that minimize the likelihood of queue overflow in the system. Specifically, we present a new joint subset-sampling and scheduling algorithm called Max-Exp that uses only the current queue lengths to pick a subset of flows, and subsequently schedules a flow using the Exponential rule. When the collection of observable subsets is disjoint, we show that Max-Exp achieves the best exponential decay rate, among all scheduling algorithms using partial information, of the tail of the longest queue in the system. To accomplish this, we introduce novel analytical techniques for studying the performance of scheduling algorithms using partial state information, which are of independent interest. These include new sample-path large deviations results for processes obtained by nonrandom, predictable sampling of sequences of independent and identically distributed random variables, which show that scheduling with partial state information yields a rate function significantly different from the full-information case. As a special case, Max-Exp reduces to simply serving the flow with the longest queue when the observable subsets are singleton flows, i.e., when there is effectively no a priori channel-state information; thus, our results show that this greedy scheduling policy is large-deviations optimal.
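
The two-stage decision in Max-Exp can be sketched as follows (a simplified illustration under assumed queue and rate values; the exact subset-selection rule and Exponential-rule constants in the paper differ):

```python
import math

def max_exp_schedule(queues, subsets, rates):
    """Two-stage sketch: (1) use only queue lengths to pick which observable
    subset of flows to sample; (2) after observing that subset's service
    rates, serve the flow with the largest Exponential-rule-style index.
    queues: flow -> backlog; subsets: disjoint tuples of flow names;
    rates: flow -> instantaneous rate (only read for the chosen subset)."""
    # Stage 1: pick the subset with the largest total backlog (queue-only info).
    chosen = max(subsets, key=lambda s: sum(queues[f] for f in s))
    # Stage 2: Exponential-rule-style index using the revealed rates.
    qbar = sum(queues.values()) / len(queues)
    return max(chosen,
               key=lambda f: rates[f] * math.exp(queues[f] / (1.0 + math.sqrt(qbar))))

queues = {"A": 9, "B": 1, "C": 4, "D": 5}
subsets = [("A", "B"), ("C", "D")]
rates = {"A": 1.0, "B": 2.0, "C": 1.5, "D": 1.5}
served = max_exp_schedule(queues, subsets, rates)   # samples ("A", "B"), serves "A"
```

With singleton subsets, stage 1 alone reduces to serving the longest queue, matching the special case noted in the abstract.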

【Keywords】: greedy algorithms; optimisation; queueing theory; radiocommunication; scheduling; wireless channels; Max-Exp algorithm; greedy scheduling policy; joint subset sampling-scheduling algorithm; low delay wireless scheduling; multiple packet based flow; partial channel state information; partial service rate information; partial state information; queue length aware scheduling; scheduling algorithms; time slotted queued system; Delay; Joints; Schedules; Scheduling; Scheduling algorithms; Throughput; Wireless communication

121. Adaptive resource scheduling in wireless OFDMA relay networks.

Paper Link】 【Pages】:1080-1088

【Authors】: Karthikeyan Sundaresan ; Sampath Rangarajan

【Abstract】: The ability of relay networks to improve capacity and coverage has led to their adoption in next-generation wireless broadband networks (WiMAX, LTE-Advanced). Unlike in conventional cellular networks, a multitude of design features impacts the performance of these relay networks, which has consequently increased the need for efficient scheduling algorithms to optimize such features. The focus of this work is to improve relay network throughput through adaptive resource usage in the form of two key design features allowed by the relay standard: (i) adaptive frame segmentation, and (ii) spatial reuse. To this end, we design efficient scheduling algorithms that optimize these two features jointly. We provide both algorithms with performance guarantees and algorithms with fast running times. Our study reveals that with properly designed scheduling algorithms, adaptive resource usage can (i) boost network throughput by up to 50%; and (ii) optimize network throughput effectively, providing an effect similar to dynamic relay placement.

【Keywords】: Long Term Evolution; WiMax; adaptive scheduling; broadband networks; cellular radio; frequency division multiple access; next generation networks; LTE-advanced; WiMAX; adaptive frame segmentation; adaptive resource scheduling; adaptive resource usage; cellular networks; dynamic relay placement; next generation wireless broadband networks; orthogonal frequency division multiple access; relay network throughput; spatial reuse; wireless OFDMA relay networks; Adaptation models; Adaptive systems; Relays; Resource management; Schedules; Throughput; Tiles

122. Performance bounds and associated design principles for multi-cellular wireless OFDMA systems.

Paper Link】 【Pages】:1089-1097

【Authors】: Rohit Aggarwal ; Can Emre Koksal ; Philip Schniter

【Abstract】: In this paper, we consider the downlink of large-scale multi-cellular OFDMA-based networks and study performance bounds of the system as a function of the number of users K, the number of base stations B, and the number of resource blocks N. Here, a resource block is a collection of subcarriers such that disjoint collections have independently fading channels. We derive novel upper and lower bounds on the sum-utility for a general spatial geometry of base stations, a truncated path loss model, and a variety of fading models (Rayleigh, Nakagami-m, Weibull, and LogNormal). We also establish the associated scaling laws and show that, in the special case of a fixed number of resource blocks, a grid-based network of base stations, and Rayleigh-fading channels, the sum information capacity of the system scales as Θ(B log log K/B) for extended networks, and as O(B log log K) and Ω(log log K) for dense networks. Interpreting these results, we develop design principles for service providers, along with guidelines for regulators, to achieve the provisioning of various QoS guarantees for end users while maximizing revenue for service providers.

【Keywords】: Nakagami channels; OFDM modulation; Rayleigh channels; cellular radio; geometry; quality of service; LogNormal fading channel; Nakagami-m fading channel; QoS; Rayleigh fading channel; Weibull fading channel; base station spatial geometry; information capacity; large-scale multicellular OFDMA-based network; performance bound; resource-block; service provider; truncated path loss model; Bandwidth; Base stations; Downlink; Rayleigh channels; Throughput; Upper bound

123. Queue-based sub-carrier grouping for feedback reduction in OFDMA systems.

Paper Link】 【Pages】:1098-1106

【Authors】: Harish Ganapathy ; Constantine Caramanis

【Abstract】: Sub-carrier grouping is a popular feedback reduction approach for orthogonal-frequency-division multiple-access (OFDMA) systems that has been adopted into fourth-generation standards such as 3GPP Long Term Evolution (LTE). Feedback reduction is motivated by the fact that the bandwidth expenditure in acquiring full information in a downlink OFDMA system scales as the product of the number of users and the number of OFDMA bands. As this is infeasible in most systems, sub-carrier grouping calls for users to report a single channel state value per predesignated group of OFDM bands. Such an approach would reduce the amount of feedback by a factor that is equal to the size of the group, albeit at a loss in throughput. In this paper, we propose a throughput-optimal joint sub-carrier grouping and data scheduling policy that makes decisions based on queue-lengths and channel states in each time slot. The feedback allocation or sub-carrier grouping policy, inspired by the current approach in LTE, operates under a total feedback budget and must periodically decide a sub-carrier grouping size for each user that obeys this resource constraint. However, as we show, the optimal allocation algorithm has complexity that, in general, scales exponentially in the number of users. Thus, we turn our attention to the important issue of computational efficiency and propose a greedy algorithm that allocates full feedback bandwidth to a constant-sized subset of users based on the network state. We evaluate the performance of this simple approach through extensive numerical experiments. We show that under asymmetric arrival rate settings, our greedy algorithm is within 10% of the optimal (throughput-wise) when consuming only 25% of the full feedback bandwidth while paying only a logarithmic (in the number of users) price in control overhead.
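
The constant-sized-subset idea behind the greedy allocation can be sketched as follows (a hypothetical simplification: full feedback goes to the k users with the longest queues and none to the rest; the paper's greedy policy also accounts for channel state):

```python
# Sketch of budgeted feedback allocation: spend the whole feedback budget on a
# constant-sized subset of users, chosen by queue length. Hypothetical
# simplification of the greedy policy described in the abstract.

def greedy_feedback(queues, k):
    """queues: user -> backlog. Returns the set of k users granted
    full per-band feedback; all other users report nothing this period."""
    return set(sorted(queues, key=queues.get, reverse=True)[:k])

chosen = greedy_feedback({"u1": 3, "u2": 9, "u3": 5, "u4": 1}, k=2)
```

Relative to collecting per-band feedback from every user, this consumes only a k/|users| fraction of the budget, which is how such a scheme can operate well below the full feedback bandwidth.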

【Keywords】: 4G mobile communication; Long Term Evolution; decision making; frequency division multiple access; greedy algorithms; queueing theory; scheduling; 3GPP Long Term Evolution; OFDMA systems; asymmetric arrival rate settings; bandwidth expenditure; channel state value; control overhead; data scheduling policy; decision making; downlink OFDMA system; feedback allocation; feedback reduction; fourth generation standards; greedy algorithm; orthogonal frequency-division multiple access; queue based subcarrier grouping; queue lengths; resource constraint; throughput optimal joint sub carrier grouping; total feedback budget; Bandwidth; Base stations; Fading; OFDM; Radio frequency; Resource management; Throughput

124. FemtoCaching: Wireless video content delivery through distributed caching helpers.

Paper Link】 【Pages】:1107-1115

【Authors】: Negin Golrezaei ; Karthikeyan Shanmugam ; Alexandros G. Dimakis ; Andreas F. Molisch ; Giuseppe Caire

【Abstract】: We suggest a novel approach to handle the ongoing explosive increase in the demand for video content on wireless/mobile devices. We envision femtocell-like base stations, which we call helpers, with weak backhaul links but large storage capacity. These helpers form a wireless distributed caching network that assists the macro base station by handling requests for popular files that have been cached. Due to the short distances between helpers and requesting devices, the transmission of cached files can be done very efficiently. A key question for such a system is the wireless distributed caching problem, i.e., which files should be cached by which helpers. If every mobile device has access to exactly one helper, then clearly each helper should cache the same files, namely the most popular ones. However, when each mobile device can access multiple caches, the assignment of files to helpers becomes nontrivial. The theoretical contribution of our paper lies in (i) formalizing the distributed caching problem, (ii) showing that this problem is NP-hard, and (iii) presenting approximation algorithms that lie within a constant factor of the theoretical optimum. On the practical side, we present a detailed simulation of a university campus scenario covered by a single 3GPP LTE R8 cell and several helpers using a simplified 802.11n protocol. We use a real campus trace of video requests and show how distributed caching can increase the number of served users by as much as 400-500%.
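
The approximation-algorithm flavor of the placement problem can be illustrated with a standard greedy heuristic for cache placement (a sketch only; the helpers, file popularities, and capacities below are made up, and the paper's algorithms and guarantees are more involved):

```python
def greedy_placement(users, popularity, capacity):
    """users: user -> set of reachable helpers; popularity: file -> request
    probability; capacity: helper -> number of files it can store.
    Greedily adds the (helper, file) pair with the largest marginal gain in
    average cache-hit probability until no positive gain remains."""
    placed = {h: set() for h in capacity}          # helper -> cached files

    def hit_prob():
        # A user's request for file f is a hit if any reachable helper holds f.
        total = sum(p for u, hs in users.items()
                    for f, p in popularity.items()
                    if any(f in placed[h] for h in hs))
        return total / len(users)

    while True:
        base, best, best_gain = hit_prob(), None, 0.0
        for h, cap in capacity.items():
            if len(placed[h]) >= cap:
                continue
            for f in popularity:
                if f in placed[h]:
                    continue
                placed[h].add(f)               # try the candidate placement
                gain = hit_prob() - base
                placed[h].discard(f)           # undo the trial
                if gain > best_gain:
                    best, best_gain = (h, f), gain
        if best is None:
            return placed
        placed[best[0]].add(best[1])

users = {"u1": {"h1"}, "u2": {"h1", "h2"}}     # u2 reaches both helpers
popularity = {"f1": 0.7, "f2": 0.3}
capacity = {"h1": 1, "h2": 1}
placement = greedy_placement(users, popularity, capacity)
```

Note that the helpers do not all cache the most popular file: once h1 holds f1, the marginal value of caching f1 again at h2 is zero, so greedy diversifies, which is exactly the multi-cache effect the abstract highlights. Greedy maximization of a monotone submodular objective under capacity constraints is the classic route to constant-factor guarantees of the kind the paper establishes.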

【Keywords】: 3G mobile communication; approximation theory; cache storage; computational complexity; protocols; video signal processing; wireless LAN; 3GPP LTE R8 cell; 802.11n protocol; FemtoCaching; NP-hard; approximation algorithms; distributed caching helpers; distributed caching problem; femtocell-like base stations; macro base station; mobile devices; multiple cache access; requests handling; storage capacity; university campus scenario; video requests; weak backhaul links; wireless devices; wireless distributed caching network; wireless video content delivery; Approximation methods; Base stations; Delay; Greedy algorithms; Optimization; Vectors; Wireless communication

125. REWIRE: An optimization-based framework for unstructured data center network design.

Paper Link】 【Pages】:1116-1124

【Authors】: Andrew R. Curtis ; Tommy Carpenter ; Mustafa Elsheikh ; Alejandro López-Ortiz ; Srinivasan Keshav

【Abstract】: Despite the many proposals for data center network (DCN) architectures, designing a DCN remains challenging. DCN design is especially difficult when expanding an existing network, because traditional DCN design places strict constraints on the topology (e.g., a fat-tree). Recent advances in routing protocols allow data center servers to fully utilize arbitrary networks, so there is no need to require restricted, regular topologies in the data center. We therefore propose a data center network design framework, which we call REWIRE, that designs networks using an optimization algorithm. Our algorithm finds a network with maximal bisection bandwidth and minimal end-to-end latency while meeting user-defined constraints and accurately modeling the predicted cost of the network. We evaluate REWIRE on a wide range of inputs and find that it significantly outperforms previous solutions: its network designs have up to 100-500% more bisection bandwidth and lower end-to-end network latency than equivalent-cost DCNs built with best practices.

【Keywords】: bandwidth allocation; computer centres; network servers; optimisation; routing protocols; telecommunication network topology; DCN design; REWIRE; arbitrary networks; bisection bandwidth; data center network architectures; data center servers; end-to-end network latency; maximal bisection bandwidth; minimal end-to-end latency; optimization-based framework; routing protocols; unstructured data center network design framework; user-defined constraints; Bandwidth; Delay; Network topology; Optical switches; Routing; Servers; Topology

126. CARPO: Correlation-aware power optimization in data center networks.

Paper Link】 【Pages】:1125-1133

【Authors】: Xiaodong Wang ; Yanjun Yao ; Xiaorui Wang ; Kefa Lu ; Qing Cao

【Abstract】: Power optimization has become a key challenge in the design of large-scale enterprise data centers. Existing research efforts focus mainly on lowering the energy consumption of computer servers, while only a few studies have tried to address the energy consumption of data center networks (DCNs), which can account for 20% of the total energy consumption of a data center. In this paper, we propose CARPO, a correlation-aware power optimization algorithm that dynamically consolidates traffic flows onto a small set of links and switches in a DCN and then shuts down unused network devices for energy savings. In sharp contrast to existing work, CARPO is designed based on a key observation from the analysis of real DCN traces: the bandwidth demands of different flows do not peak at exactly the same time. As a result, if the correlations among flows are considered during consolidation, more energy savings can be achieved. In addition, CARPO integrates traffic consolidation with link rate adaptation for maximized energy savings. We implement CARPO on a hardware testbed composed of 10 virtual switches configured on a production 48-port OpenFlow switch and 8 servers. Our empirical results with Wikipedia traces demonstrate that CARPO can save up to 46% of network energy for a DCN with only negligible delay increases. CARPO also outperforms two state-of-the-art baselines by 19.6% and 95% in energy savings, respectively. Our simulation results with 61 flows also show the superior energy efficiency of CARPO over the baselines.
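
The correlation-aware consolidation idea can be sketched as first-fit packing of flows onto links, sizing each link by a high percentile of the combined demand time series rather than the sum of individual peaks (an illustration of the key observation only; CARPO's actual algorithm, thresholds, and rate adaptation differ):

```python
# Pack flows onto as few links as possible. Because flow demands do not peak
# simultaneously, the 90th percentile of the *combined* series can fit a link
# even when the sum of the individual peaks cannot.

def pct90(series):
    s = sorted(series)
    return s[int(0.9 * (len(s) - 1))]

def consolidate(flows, link_cap):
    """flows: name -> list of demand samples (one per interval).
    First-fit by 90th-percentile combined demand; returns a list of
    (set of flow names, combined demand series) pairs, one per active link."""
    links = []
    for name, series in flows.items():
        for bundle, combined in links:
            merged = [a + b for a, b in zip(combined, series)]
            if pct90(merged) <= link_cap:
                bundle.add(name)
                combined[:] = merged
                break
        else:                                  # no existing link fits
            links.append(({name}, list(series)))
    return links

# Two anti-correlated flows, each peaking at 8 on a 10-unit link: peak-based
# packing needs two links (8 + 8 > 10); correlation-aware packing needs one.
flows = {"f1": [8, 2, 8, 2, 8, 2, 8, 2, 8, 2],
         "f2": [2, 8, 2, 8, 2, 8, 2, 8, 2, 8]}
links = consolidate(flows, link_cap=10)
```

Every consolidated-away link is a candidate for shutdown, which is where the energy savings come from.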

【Keywords】: bandwidth allocation; computer centres; delays; network servers; power aware computing; 48-port OpenFlow switch; CARPO; bandwidth demands; computer servers; correlation-aware power optimization algorithm; delays; dynamic traffic flow consolidation; energy consumption; energy savings; hardware testbed; large-scale enterprise data center networks; network devices; virtual switches; Bandwidth; Correlation; Energy consumption; Internet; Optimization; Power demand; Servers

127. Bargaining towards maximized resource utilization in video streaming datacenters.

Paper Link】 【Pages】:1134-1142

【Authors】: Yuan Feng ; Baochun Li ; Bo Li

【Abstract】: Datacenters can host large-scale video streaming services with better operational efficiency, as the multiplexing achieved by virtualization technologies allows different videos to share resources on the same physical server. Live migration of videos, in the form of virtual machines (VMs), from overloaded servers to under-utilized ones may be a solution to handle a flash crowd of requests. However, such migration has to be performed with a well-designed mechanism to fully utilize available resources in all three resource dimensions: storage, bandwidth, and CPU cycles. In this paper, we show why the challenge of maximizing resource utilization in a video streaming datacenter is equivalent to maximizing the joint profit in the context of Nash bargaining solutions, by defining utility functions properly. With servers participating as players that bargain with each other, and VMs as commodities in the game, trades conducted after bargaining govern VM migration decisions at each server. With extensive simulations driven by real-world traces from UUSee Inc., we show that our new VM migration algorithm based on Nash bargaining solutions increases both the resource utilization ratio and the number of video streaming requests handled by the datacenter, while remaining lightweight.

【Keywords】: computer centres; decision making; game theory; profitability; resource allocation; video streaming; virtual machines; CPU cycles; Nash bargaining solutions; UUSee Inc; VM migration algorithm; bandwidth; decision making; flash request crowd; large-scale video streaming services; live video migration; multiplexing; physical servers; profit maximization; resource dimensions; resource sharing; resource utilization maximization; resource utilization ratio; storage; utility functions; video streaming data centers; virtual machines; virtualization technologies; Bandwidth; Games; Joints; Optimization; Resource management; Servers; Streaming media

128. Joint scheduling of processing and Shuffle phases in MapReduce systems.

Paper Link】 【Pages】:1143-1151

【Authors】: Fangfei Chen ; Murali S. Kodialam ; T. V. Lakshman

【Abstract】: MapReduce has emerged as an important paradigm for processing data in large data centers. MapReduce is a three-phase algorithm comprising Map, Shuffle, and Reduce phases. Due to its widespread deployment, several recent papers have outlined practical schemes to improve the performance of MapReduce systems. All these efforts focus on one of the three phases to obtain performance improvements. In this paper, we consider the problem of jointly scheduling all three phases of the MapReduce process, with a view to understanding the theoretical complexity of joint scheduling and working towards practical heuristics for scheduling the tasks. We give guaranteed approximation algorithms and outline several heuristics to solve the joint scheduling problem.

【Keywords】: approximation theory; computational complexity; computer centres; data handling; scheduling; software performance evaluation; MapReduce systems; Shuffle phases joint scheduling; approximation algorithms; data centers; data processing; map phases; performance improvement; processing joint scheduling; reduce phases; tasks scheduling; Approximation algorithms; Approximation methods; Copper; Linear programming; Processor scheduling; Program processors; TV

129. Secret communication in large wireless networks without eavesdropper location information.

【Paper Link】 【Pages】:1152-1160

【Authors】: Cagatay Capar ; Dennis Goeckel ; Benyuan Liu ; Don Towsley

【Abstract】: We present achievable scaling results on the per-node secure throughput that can be realized in a large random wireless network of n legitimate nodes in the presence of m eavesdroppers of unknown location. We consider both one-dimensional and two-dimensional networks. In the one-dimensional case, we show that a per-node secure throughput of order 1/n is achievable if the number of eavesdroppers satisfies m = o(n/log n). We obtain similar results for the two-dimensional case, where a secure throughput of order 1/(√n log n) is achievable under the same condition. The number of eavesdroppers that can be tolerated is significantly higher than in previous works addressing the case of unknown eavesdropper locations. The key technique introduced in our construction to handle unknown eavesdropper locations forces adversaries to intercept a number of packets to be able to decode a single message. The whole network is divided into regions, and a certain subset of packets is protected from adversaries located in each region. In the one-dimensional case, our construction makes use of artificial noise generation by legitimate nodes to degrade the signal quality at the potential locations of eavesdroppers. In the two-dimensional case, the availability of many paths to reach a destination is utilized to handle collaborating eavesdroppers of unknown location.

【Keywords】: radio networks; telecommunication security; artificial noise generation; eavesdropper location information; one-dimensional network; per-node secure throughput; secret communication; two-dimensional network; wireless network; Color; Interference; Jamming; Relays; Signal to noise ratio; Throughput

130. Detection and prevention of SIP flooding attacks in voice over IP networks.

【Paper Link】 【Pages】:1161-1169

【Authors】: Jin Tang ; Yu Cheng ; Yong Hao

【Abstract】: As voice over IP (VoIP) increasingly gains popularity, traffic anomalies such as SIP flooding attacks are also emerging and becoming a major threat to the technology. Thus, detecting and preventing such anomalies is critical to ensure an effective VoIP system. The existing flooding detection schemes are inefficient in detecting low-rate flooding from dynamic background traffic, or may even totally fail when flooding is launched in a multi-attribute manner by simultaneously manipulating different types of SIP messages. In this paper, we develop an online scheme to detect and subsequently prevent the flooding attacks, by integrating a novel three-dimensional sketch design with the Hellinger distance (HD) detection technique. The sketch data structure summarizes the incoming SIP messages into a compact, constant-size data set based on which a separate probability distribution can be established for each SIP attribute. The HD monitors the evolution of the probability distributions and detects flooding attacks when abnormal variations are observed. The three-dimensional design equips our scheme with the advantages of high detection accuracy even for low-rate flooding, robust performance under multi-attribute flooding, and the capability of selectively discarding the offending SIP messages to prevent the attacks. Moreover, we develop an estimation freeze mechanism to protect the detection threshold from being polluted by attacks. Not only do we theoretically analyze the performance of the proposed detection and prevention techniques, but we also resort to extensive simulations to thoroughly examine the performance.

【Keywords】: Internet telephony; data structures; signalling protocols; statistical distributions; telecommunication security; telecommunication traffic; 3D sketch design; Hellinger distance detection technique; SIP flooding attack detection; SIP flooding attack prevention; SIP messages; VoIP system; constant-size data set; detection threshold protection; dynamic background traffic; estimation freeze mechanism; probability distribution evolution; session initiation protocol; sketch data structure; voice over IP networks; Estimation; High definition video; Monitoring; Probability distribution; Protocols; Servers; Training
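As a concrete illustration of the detection idea above, the Hellinger distance between two discrete distributions is directly computable. The sketch below uses hypothetical message-type distributions and a hypothetical threshold (none are taken from the paper) to show how a skewed SIP attribute distribution raises the distance well above normal fluctuations.

```python
import math

def hellinger_distance(p, q):
    """Hellinger distance between two discrete distributions p and q
    (same support, each summing to 1); ranges from 0 (identical) to 1."""
    return math.sqrt(0.5 * sum((math.sqrt(pi) - math.sqrt(qi)) ** 2
                               for pi, qi in zip(p, q)))

# Hypothetical training-period distribution over SIP message types
# (e.g. INVITE, 200 OK, ACK, BYE), plus a normal and a flooding sample.
baseline = [0.25, 0.25, 0.25, 0.25]
normal   = [0.24, 0.26, 0.25, 0.25]   # small background fluctuation
flooding = [0.70, 0.10, 0.10, 0.10]   # an INVITE flood skews the distribution

THRESHOLD = 0.1  # illustrative detection threshold, tuned during training
assert hellinger_distance(baseline, normal) < THRESHOLD
assert hellinger_distance(baseline, flooding) > THRESHOLD
```

In the paper's scheme the distributions come from the three-dimensional sketch rather than raw counters, which is what keeps the state constant-size.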

131. Secure top-k query processing via untrusted location-based service providers.

【Paper Link】 【Pages】:1170-1178

【Authors】: Rui Zhang ; Yanchao Zhang ; Chi Zhang

【Abstract】: This paper considers a novel distributed system for collaborative location-based information generation and sharing, which has become increasingly popular due to the explosive growth of Internet-capable and location-aware mobile devices. The system consists of a data collector, data contributors, location-based service providers (LBSPs), and system users. The data collector gathers reviews about points-of-interest (POIs) from data contributors, while LBSPs purchase POI data sets from the data collector and allow users to perform location-based top-k queries, which ask for the POIs in a certain region with the highest k ratings for a POI attribute of interest. In practice, LBSPs are untrusted and may return fake query results for various bad motives, e.g., in favor of POIs willing to pay. This paper presents two novel schemes for users to detect fake top-k query results as an effort to foster the practical deployment and use of the proposed system. The efficacy and efficiency of our schemes are thoroughly analyzed and evaluated.

【Keywords】: Internet; groupware; mobile computing; query processing; security of data; trusted computing; Internet capable devices; LBSP; POI data sets; collaborative location-based information generation; collaborative location-based information sharing; data collector; data contributors; distributed system; location-aware mobile devices; point of interest; secure top-k query processing; untrusted location-based service provider; Lead

132. Physical layer security from inter-session interference in large wireless networks.

【Paper Link】 【Pages】:1179-1187

【Authors】: Azadeh Sheikholeslami ; Dennis Goeckel ; Hossein Pishro-Nik ; Don Towsley

【Abstract】: Physical layer secrecy in wireless networks in the presence of eavesdroppers of unknown location is considered. In contrast to prior schemes, which have expended energy in the form of cooperative jamming to enable secrecy, we develop schemes where multiple transmitters send their signals in a cooperative fashion to confuse the eavesdroppers. Hence, power is not expended on “artificial noise”; rather, the signal of a given transmitter is protected by the aggregate interference produced by the other transmitters. We introduce a two-hop strategy for the case of equal path-loss between all pairs of nodes, and then consider its embedding within a multi-hop approach for the general case of an extended network. In each case, we derive an achievable number of eavesdroppers that can be present in the region while secure communication between all sources and intended destinations is ensured.

【Keywords】: jamming; radio networks; telecommunication security; aggregate interference; cooperative jamming; eavesdropper; equal path loss; intersession interference; large wireless networks; physical layer security; two hop strategy; Interference; Protocols; Receivers; Relays; Signal to noise ratio; Transmitters; Wireless networks

133. A novel multi-tariff charging method for next generation Multicast and Broadcast Service.

【Paper Link】 【Pages】:1188-1196

【Authors】: Sok-Ian Sou ; Phone Lin ; Ssu-Shih Chen ; Jeu-Yih Jeng

【Abstract】: Different from unicast data delivery through dedicated channels, next generation Multicast and Broadcast Service (MBS) delivers data with higher bandwidth efficiency through multicasting. When a group of online users receive the same MBS content, the average MBS network cost per user can be significantly reduced. However, traditional charging methods are unable to account for this variation in network resource cost. To address this shortfall, this paper develops an innovative concept in MBS charging to realize the goal that “the more users join an MBS session, the more everyone saves.” Specifically, we propose a multi-tariff charging method for MBS with a competitive advantage that attracts more users to subscribe to MBS services. Analysis and numerical results demonstrate that operators can alleviate customer concerns about user credit consumption with dynamic tariffs while remaining profitable in MBS. These features of our multi-tariff charging method can create a “win-win” situation for both operators and subscribers.

【Keywords】: broadcast communication; multicast communication; next generation networks; tariffs; bandwidth efficiency; dedicated channels; dynamic tariff; multitariff charging method; network resource; next generation multicast and broadcast service; unicast data delivery; user credit consumption; Artificial neural networks; Electronic mail; Integrated circuits; Logic gates; Mathematical model; Next generation networking; Random variables; MBS; network cost; online charging; revenue; signaling
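The abstract does not give the actual tariff function, but the “the more users join, the more everyone saves” principle can be illustrated with a hypothetical per-user charge that shares a fixed multicast delivery cost (plus an operator margin) across all users in a session; every number and name here is illustrative, not from the paper.

```python
def per_user_tariff(n_users, fixed_cost=90.0, margin=1.2, floor=2.0):
    """Hypothetical multi-tariff rule: the fixed multicast delivery cost,
    scaled by an operator margin, is shared by all n users in the MBS
    session, but never drops below a floor that keeps the operator
    profitable."""
    return max(margin * fixed_cost / n_users, floor)

charges = [per_user_tariff(n) for n in (1, 10, 100)]
# The more users join, the less each one pays.
assert charges[0] > charges[1] > charges[2]
```

A real online charging system would of course recompute the tariff as users join and leave the session, which is where the dynamic-tariff credit concerns addressed in the paper arise.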

134. GENESIS: An agent-based model of interdomain network formation, traffic flow and economics.

【Paper Link】 【Pages】:1197-1205

【Authors】: Aemen Lodhi ; Amogh Dhamdhere ; Constantine Dovrolis

【Abstract】: We propose an agent-based network formation model for the Internet at the Autonomous System (AS) level. The proposed model, called GENESIS, is based on realistic provider and peering strategies, with ASes acting in a myopic and decentralized manner to optimize a cost-related fitness function. GENESIS captures key factors that affect the network formation dynamics: highly skewed traffic matrix, policy-based routing, geographic co-location constraints, and the costs of transit/peering agreements. As opposed to analytical game-theoretic models, which focus on proving the existence of equilibria, GENESIS is a computational model that simulates the network formation process and allows us to actually compute distinct equilibria (i.e., networks) and to also examine the behavior of sample paths that do not converge. We find that such oscillatory sample paths occur in about 10% of the runs, and they always involve tier-1 ASes, resembling the tier-1 peering disputes often seen in practice. GENESIS results in many distinct equilibria that are highly sensitive to initial conditions and the order in which ASes (agents) act. This implies that we cannot predict the properties of an individual AS in the Internet. However, certain properties of the global network or of certain classes of ASes are predictable. We also examine whether the underlying game is zero-sum, and identify three sufficient conditions for that property. Finally, we apply GENESIS in a specific “what-if” question, asking how the openness towards peering affects the resulting network in terms of topology, traffic flow and economics. Interestingly, we find that the peering openness that maximizes the fitness of different network classes (tier-1, tier-2 and tier-3 providers) closely matches that seen in real-world peering policies.

【Keywords】: Internet; game theory; multi-agent systems; optimisation; telecommunication network routing; telecommunication traffic; ASes acting; GENESIS; Internet; agent-based interdomain network formation model; analytical game-theoretic models; autonomous system level; cost-related fitness function; geographic colocation constraints; global network; network formation dynamics; peering strategies; policy-based routing; real-world peering policies; skewed traffic matrix; sufficient conditions; tier-1 peering; traffic flow; Computational modeling; Economics; Internet; Network topology; Oscillators; Peer to peer computing; Topology

135. Multi-resource allocation: Fairness-efficiency tradeoffs in a unifying framework.

【Paper Link】 【Pages】:1206-1214

【Authors】: Carlee Joe-Wong ; Soumya Sen ; Tian Lan ; Mung Chiang

【Abstract】: Quantifying the notion of fairness is under-explored when users request different ratios of multiple distinct resource types. A typical example is datacenters processing jobs with heterogeneous resource requirements on CPU, memory, etc. A generalization of max-min fairness to multiple resources was recently proposed in [1], but may suffer from significant loss of efficiency. This paper develops a unifying framework addressing this fairness-efficiency tradeoff with multiple resource types. We develop two families of fairness functions which provide different tradeoffs, characterize the effect of user requests' heterogeneity, and prove conditions under which these fairness measures satisfy the Pareto efficiency, sharing incentive, and envy-free properties. Intuitions behind the analysis are explained in two visualizations of multi-resource allocation.

【Keywords】: Pareto analysis; computer centres; data visualisation; multiprocessing systems; resource allocation; Pareto efficiency; datacenters processing jobs; envy-free properties; fairness functions; fairness-efficiency tradeoffs; heterogeneous resource requirements; incentive sharing; max-min fairness; multiple distinct resource types; multiresource allocation; user requests heterogeneity; Bandwidth; Economics; Measurement; Memory management; Resource management; Vectors; Visualization
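The multi-resource generalization of max-min fairness referenced as [1] is, to the best of our reading, Dominant Resource Fairness (DRF). Below is a minimal progressive-filling sketch of that baseline (not of the paper's unifying framework), using the classic two-user example of 9 CPUs and 18 GB of memory; the demand vectors are illustrative.

```python
def drf_allocate(capacities, demands, rounds=10000):
    """Dominant Resource Fairness by progressive filling: repeatedly grant
    one task to the user with the smallest dominant share, until some
    resource would be exhausted. Returns tasks granted per user."""
    n, m = len(demands), len(capacities)
    usage = [0.0] * m
    tasks = [0] * n
    # Dominant share of user i after t tasks = t * max_r demands[i][r] / capacities[r]
    dom = [max(d[r] / capacities[r] for r in range(m)) for d in demands]
    for _ in range(rounds):
        i = min(range(n), key=lambda u: tasks[u] * dom[u])
        if any(usage[r] + demands[i][r] > capacities[r] for r in range(m)):
            break  # next grant would overflow a resource
        for r in range(m):
            usage[r] += demands[i][r]
        tasks[i] += 1
    return tasks

# 9 CPUs, 18 GB; user A needs <1 CPU, 4 GB> per task, user B <3 CPU, 1 GB>.
print(drf_allocate([9, 18], [[1, 4], [3, 1]]))  # -> [3, 2]
```

Both users end up with a dominant share of 2/3, which is exactly the equalization the paper's fairness-efficiency framework generalizes and trades off against Pareto efficiency.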

136. Spectrum leasing to femto service provider with hybrid access.

【Paper Link】 【Pages】:1215-1223

【Authors】: Youwen Yi ; Jin Zhang ; Qian Zhang ; Tao Jiang

【Abstract】: The concept of the femtocell, which operates in licensed spectrum to provide home coverage, has attracted interest in the wireless industry due to its high spatial reuse, and extensive deployments of femtocells are expected in the future. In this paper, we consider the scenario in which a femtocell service provider (FSP) expects to rent spectrum from the coexisting macrocell service provider (MSP) to serve its end users. In addition to the spectrum leasing payment, the FSP may allow hybrid access by macrocell users to improve the utilities of itself and the MSP, which are defined as the sum of data traffic and payment/revenue. We propose a spectrum leasing framework that takes hybrid access into consideration. The whole procedure is modeled as a three-stage Stackelberg game, where the MSP and FSP determine the spectrum leasing ratio, spectrum leasing price and open access ratio sequentially to maximize their utilities, and the existence of the Nash Equilibrium of the sequential game is analyzed. We characterize the equilibrium, in terms of access price, spectrum acquisition of the FSP, the open access ratio, and price of anarchy via simulation. Numerical results show that both the MSP and FSP can benefit from spectrum leasing, and that hybrid access to the femtocell can further improve their utilities, which provides sufficient incentive for their cooperation.

【Keywords】: femtocellular radio; game theory; telecommunication industry; telecommunication traffic; FSP; MSP; Nash Equilibrium; data traffic; femtocell service provider; macrocell service provider; open access ratio; sequential game analysis; spectrum acquisition; spectrum leasing payment; spectrum leasing price; spectrum leasing ratio; three-stage Stackelberg game; wireless industry; Games; Macrocell networks; Noise measurement; Radio frequency; Throughput; Ultrafast electronics; Wireless communication

137. Asymptotically optimal downlink scheduling over Markovian fading channels.

【Paper Link】 【Pages】:1224-1232

【Authors】: Wenzhuo Ouyang ; Atilla Eryilmaz ; Ness B. Shroff

【Abstract】: We consider the scheduling problem in downlink wireless networks with heterogeneous, Markov-modulated, ON/OFF channels. It is well-known that the performance of scheduling over fading channels heavily depends on the accuracy of the available Channel State Information (CSI), which is costly to acquire. Thus, we consider the CSI acquisition via a practical ARQ-based feedback mechanism whereby channel states are revealed at the end of only scheduled users' transmissions. In the assumed presence of temporally-correlated channel evolutions, the desired scheduler must optimally balance the exploitation-exploration trade-off, whereby it schedules transmissions both to exploit those channels with up-to-date CSI and to explore the current state of those with outdated CSI. In earlier works, Whittle's Index Policy had been suggested as a low-complexity and high-performance solution to this problem. However, analyzing its performance in the typical scenario of statistically heterogeneous channel state processes has remained elusive and challenging, mainly because of the highly-coupled and complex dynamics it possesses. In this work, we overcome these difficulties to rigorously establish the asymptotic optimality properties of Whittle's Index Policy in the limiting regime of many users. More specifically: (1) we prove the local optimality of Whittle's Index Policy, provided that the initial state of the system is within a certain neighborhood of a carefully selected state; (2) we then establish the global optimality of Whittle's Index Policy under a recurrence assumption that is verified numerically for the problem at hand. These results establish, for the first time to the best of our knowledge, that Whittle's Index Policy possesses analytically provable optimality characteristics for scheduling over heterogeneous and temporally-correlated channels.

【Keywords】: Markov processes; automatic repeat request; fading channels; feedback; modulation; multiuser channels; radio networks; scheduling; ARQ based feedback mechanism; Markov modulated channel; Markovian fading channels; ON/OFF channel; Whittle index policy; asymptotic optimality; asymptotically optimal downlink scheduling; channel state information; channel state process; downlink wireless networks; fading channel scheduling; heterogeneous correlated channel; temporally correlated channel; Downlink; Indexes; Markov processes; Optimal scheduling; Throughput; Upper bound; Zinc

138. HOSA: Holistic scheduling and analysis for scalable fault-tolerant FlexRay design.

【Paper Link】 【Pages】:1233-1241

【Authors】: Yu Hua ; Xue Liu ; Wenbo He

【Abstract】: FlexRay is a new industry standard for next-generation communication in automotives. Though there has been some recent research on the performance analysis of FlexRay, two important aspects of the FlexRay design have been overlooked. The first is a holistic integrated scheduling scheme that can handle both static and dynamic segments in a FlexRay network. The second is cost-effective and scalable fault tolerance. In order to address these aspects, we propose a novel holistic scheduling scheme, called HOSA, which can provide scalable fault tolerance by using flexible and easy-to-use dual-channel communication in FlexRay. HOSA is built upon a novel slot pilfering technique to schedule and optimize the available slots in both static and dynamic segments. Moreover, in order to achieve efficient implementation, we propose approximate computation, which can efficiently support cost-effective and holistic scheduling by judiciously balancing the tradeoff between computation complexity and available pilfered slots. HOSA hence offers two salient features, i.e., providing fault tolerance and improving bandwidth utilization. Extensive experiments based on synthetic test cases and real-world case studies demonstrate the efficiency and efficacy of HOSA.

【Keywords】: automotive electronics; computational complexity; fault tolerance; next generation networks; protocols; scheduling; HOSA; automotives; bandwidth utilization; computation complexity; dual channel communication; dynamic segments; holistic integrated scheduling; holistic scheduling and analysis; next generation communication; scalable fault tolerant FlexRay design; static segments

139. On managing quality of experience of multiple video streams in wireless networks.

【Paper Link】 【Pages】:1242-1250

【Authors】: Partha Dutta ; Anand Seetharam ; Vijay Arya ; Malolan Chetlur ; Shivkumar Kalyanaraman ; Jim Kurose

【Abstract】: Managing the Quality-of-Experience (QoE) of video streaming for wireless clients is becoming increasingly important due to the rapid growth of video traffic on wireless networks. The inherent variability of the wireless channel as well as the Variable Bit Rate (VBR) of the compressed video streams make QoE management a challenging problem. Prior work has studied this problem in the context of transmitting a single video stream. In this paper, we investigate multiplexing schemes to transmit multiple video streams from a base station to mobile clients, using the number of playout stalls as a performance metric. In this context, we present an epoch-by-epoch framework to fairly allocate wireless transmission slots to streaming videos. In each epoch our scheme essentially reduces the vulnerability to stalling by allocating slots to videos in a way that maximizes the minimum 'playout lead' across all videos. Next, we show that the problem of allocating slots fairly is NP-complete even for a constant number of videos. We then present a fast lead-aware greedy algorithm for the problem. Our choice of greedy algorithm is motivated by the fact that this algorithm is optimal when the channel quality of a user remains unchanged within an epoch (but different users may experience different channel quality). Moreover, our experimental results based on public MPEG-4 video traces and wireless channel traces that we collected from a WiMAX test-bed show that the lead-aware greedy approach achieves a fair distribution of stalls across the clients when compared to other algorithms, while still maintaining a similar or lower average number of stalls per client.

【Keywords】: WiMax; computational complexity; data compression; mobile radio; multiplexing; video coding; video streaming; NP-complete problem; QoE management; WiMAX test-bed; base station; channel quality; epoch-by-epoch framework; fast-lead-aware greedy algorithm; mobile clients; multiplexing schemes; public MPEG-4 video traces; quality-of-experience; variable bit rate; video stream compression; video stream transmission; video traffic; wireless channel traces; wireless networks; wireless transmission slot allocation; Greedy algorithms; Lead; Multiplexing; Resource management; Servers; Streaming media; Wireless communication
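A minimal sketch of the lead-aware greedy idea, under the simplifying assumption stated in the abstract's optimality condition: channel quality, and hence the playout lead gained per slot, is constant within an epoch. The leads, gains and slot count below are illustrative, not from the paper.

```python
def allocate_slots(leads, gain_per_slot, slots):
    """Lead-aware greedy sketch: in each epoch, give the next transmission
    slot to the video with the smallest current playout lead. With constant
    per-slot gains this maximizes the minimum lead across videos."""
    leads = list(leads)       # seconds of buffered playout per video
    schedule = []             # which video receives each slot
    for _ in range(slots):
        i = min(range(len(leads)), key=lambda v: leads[v])
        leads[i] += gain_per_slot[i]  # playout seconds gained per slot for video i
        schedule.append(i)
    return schedule, leads

# Three videos with leads of 2, 8 and 5 seconds; 4 slots in this epoch.
schedule, final = allocate_slots([2.0, 8.0, 5.0], [1.0, 1.0, 1.0], 4)
assert schedule[0] == 0   # the most stall-vulnerable video is served first
```

When per-slot gains differ across videos (heterogeneous but epoch-constant channels), the same smallest-lead-first rule is what the paper's greedy algorithm builds on.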

140. Stability of the Max-Weight protocol in adversarial wireless networks.

【Paper Link】 【Pages】:1251-1259

【Authors】: Sungsu Lim ; Kyomin Jung ; Matthew Andrews

【Abstract】: In this paper we consider the MAX-WEIGHT protocol for routing and scheduling in wireless networks under an adversarial model. This protocol has received a significant amount of attention dating back to the papers of Tassiulas and Ephremides. In particular, this protocol is known to be throughput-optimal whenever the traffic patterns and propagation conditions are governed by a stationary stochastic process. However, the standard proof of throughput optimality (which is based on the negative drift of a quadratic potential function) does not hold when the traffic patterns and the edge capacity changes over time are governed by an arbitrary adversarial process. Such an environment appears frequently in many practical wireless scenarios when the assumption that channel conditions are governed by a stationary stochastic process does not readily apply. In this paper we prove that even in the above adversarial setting, the MAX-WEIGHT protocol keeps the queues in the network stable (i.e. keeps the queue sizes bounded) whenever this is feasible by some routing and scheduling algorithm. However, the proof is somewhat more complex than the negative potential drift argument that applied in the stationary case. Our proof holds for any arbitrary interference relationships among edges. We also prove the stability of ε-approximate MAX-WEIGHT under the adversarial model. We conclude the paper with a discussion of queue sizes in the adversarial model as well as a set of simulation results.

【Keywords】: radio networks; routing protocols; scheduling; stochastic processes; adversarial wireless networks; ephremides; max-weight protocol stability; negative drift; quadratic potential function; routing algorithm; scheduling algorithm; stationary stochastic process; tassiulas; Interference; Protocols; Routing; Scheduling algorithms; Stability analysis; Stochastic processes; Vectors
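The MAX-WEIGHT rule analyzed above can be sketched for a single slot: among all sets of mutually non-interfering links, activate the set maximizing the sum of queue-weighted rates. The brute-force enumeration below is purely illustrative (practical schedulers avoid exhaustive search); the queues, rates and conflict pairs are made up.

```python
from itertools import combinations

def max_weight_schedule(queues, rates, conflicts):
    """One-slot MAX-WEIGHT sketch: among all link sets whose members are
    pairwise non-interfering, return the set maximizing sum(queue * rate).
    `conflicts` is a set of interfering link-index pairs."""
    links = range(len(queues))
    best, best_w = (), 0.0
    for k in range(1, len(queues) + 1):
        for s in combinations(links, k):
            if any((a, b) in conflicts or (b, a) in conflicts
                   for a, b in combinations(s, 2)):
                continue  # this set violates an interference constraint
            w = sum(queues[l] * rates[l] for l in s)
            if w > best_w:
                best, best_w = s, w
    return set(best)

# Three links with queue backlogs 5, 4, 2; link 0 interferes with link 1.
assert max_weight_schedule([5, 4, 2], [1, 1, 1], {(0, 1)}) == {0, 2}
```

The paper's point is precisely that this rule keeps queues bounded even when `queues` and `rates` evolve adversarially rather than stochastically.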

141. Connectivity of large-scale Cognitive Radio Ad Hoc Networks.

【Paper Link】 【Pages】:1260-1268

【Authors】: Dianjie Lu ; Xiaoxia Huang ; Pan Li ; Jianping Fan

【Abstract】: Connectivity of large-scale wireless networks has received considerable attention in the past several years. Different from traditional wireless networks, in Cognitive Radio Ad-hoc Networks (CRAHNs), primary users have spectrum access priority on the licensed bands over secondary users. Therefore, the connectivity of the secondary network is affected not only by the density and transmission power of secondary users, but also by the activities of primary users. In addition, the number of licensed bands also has an impact on the connectivity of CRAHNs. To capture the dynamic characteristics of opportunistic spectrum access, we introduce the Cognitive Radio Graph Model (CRGM), which takes into account the impact of the number of channels and the activities of primary users. Furthermore, we combine the CRGM with the continuum percolation model to study the connectivity of the secondary network. We prove that secondary users can form a percolated network when the density of primary users is below a critical density. Then, the upper bound on the critical density of the primary users in percolated CRAHNs is derived. Simulation results show that both the number of channels and the activities of primary users greatly impact the connectivity of CRAHNs.

【Keywords】: ad hoc networks; cognitive radio; graph theory; CRGM; cognitive radio graph model; continuum percolation model; large-scale CRAHN; large-scale cognitive radio ad hoc network; large-scale wireless network; opportunistic spectrum access priority; primary user; secondary user; Ad hoc networks; Availability; Cognitive radio; Educational Activities Board; Lattices; Radio link; Wireless networks

142. Wireless MAC processors: Programming MAC protocols on commodity Hardware.

【Paper Link】 【Pages】:1269-1277

【Authors】: Ilenia Tinnirello ; Giuseppe Bianchi ; Pierluigi Gallo ; Domenico Garlisi ; Francesco Giuliano ; Francesco Gringoli

【Abstract】: Programmable wireless platforms aim at responding to the quest for wireless access flexibility and adaptability. This paper introduces the notion of wireless MAC processors. Instead of implementing a specific MAC protocol stack, wireless MAC processors support a set of Medium Access Control “commands” which can be composed (programmed) at run time through software-defined state machines, thus providing the desired MAC protocol operation. We clearly distinguish ourselves from related work in this area: unlike other works, which rely on dedicated DSPs or programmable hardware platforms, we experimentally prove the feasibility of the wireless MAC processor concept over ultra-cheap commodity WLAN hardware cards. Specifically, we reflash the firmware of the commercial Broadcom AirForce54G off-the-shelf chipset, replacing its 802.11 WLAN MAC protocol implementation with our proposed extended state machine execution engine. We prove the flexibility of the proposed approach through three use-case implementation examples.

【Keywords】: access protocols; digital signal processing chips; finite state machines; firmware; network interfaces; radio access networks; wireless LAN; 802.11 WLAN MAC protocol implementation; DSPs; MAC protocol programming; MAC protocol stack; commercial Broadcom AirForce54G off-the-shelf chipset; commodity hardware; extended state machine execution engine; firmware; medium access control commands; programmable wireless platforms; run-time composed; software-defined state machines; ultracheap commodity WLAN hardware cards; wireless MAC processors; wireless access adaptability; wireless access flexibility; Engines; Hardware; Media Access Protocol; Program processors; Registers; Wireless LAN; Wireless communication; WLAN 802.11; cognitive radio; programmable MAC; reconfigurability

143. Understanding the tempo-spatial limits of information dissemination in multi-channel Cognitive Radio Networks.

【Paper Link】 【Pages】:1278-1286

【Authors】: Lei Sun ; Wenye Wang

【Abstract】: Cognitive Radio Networks (CRNs) have emerged as promising network components for exploiting spectrum opportunistically, so that information can be delivered in circumstances where it otherwise could not be. Challenging yet open questions are how fast and how far a packet can be delivered in such networks, in the temporal and spatial domains, respectively. The answers to these questions offer a straightforward interpretation of the potential of CRNs for time-sensitive applications. To tackle these questions, we define two metrics, dissemination radius ∥ℒ(t)∥ and propagation speed S(d). The former is the maximum Euclidean distance that a packet can reach in time t, and the latter is the speed at which a packet travels between a source and destination at Euclidean distance d apart, which can be used to measure the transmission delay. Further, we determine the sufficient and necessary conditions under which there exist spatial and temporal limits of information dissemination in CRNs. We find that when information cannot be disseminated to the entire network, the limiting dissemination radius is statistically dominated by an exponential distribution, while the limiting information propagation speed approaches zero. Otherwise, the dissemination radius approaches infinity and the propagation speed S(d) is no lower than some constant k for large d. The results are validated through simulations.

【Keywords】: cognitive radio; delays; radiowave propagation; wireless channels; exponential distribution; information dissemination; limiting dissemination radius; limiting information propagation speed; maximum Euclidean distance; multichannel cognitive radio networks; tempospatial limits; time sensitive applications; transmission delay; Nonhomogeneous media

144. On latency distribution and scaling: from finite to large Cognitive Radio Networks under general mobility.

【Paper Link】 【Pages】:1287-1295

【Authors】: Lei Sun ; Wenye Wang

【Abstract】: Cognitive Radio Networks (CRNs), as a phenomenal technique to improve spectrum efficiency for opportunistic communications, have become an integral component of the future communication regime. In this paper, we study the end-to-end latency in CRNs because many CRN applications, such as military networks and emergency networks, are either time-sensitive or dependent on delay performance. In particular, we consider a general mobility framework that captures most characteristics of the existing models and accounts for spatial heterogeneity resulting from the scenario that some locations are more likely to be visited by mobile nodes (these can be home in the case of people, or the garage in the case of vehicles). By assuming that secondary users are mobile under this general framework, we find that there exists a cutoff point on the mobility radius, which indicates how far a mobile node can reach in the spatial domain, below which the latency has a heavy-tailed distribution and above which the tail distribution is bounded by some Gamma (light-tailed) distribution. A heavy tail of the latency implies a significant probability that it takes a long time to disseminate a message from the source to the destination, and thus a light-tailed latency is crucial for time-critical applications. Moreover, as the network grows large, we notice that the latency is asymptotically scalable (linear) with the dissemination distance (e.g., the number of hops or Euclidean distance). Another interesting observation is that although the density of primary users adversely impacts the expected latency, it has no influence on the dichotomy of the tail distribution of the latency in finite networks or the linearity of latency in large networks. Our results encourage CRN deployment for real-time and large-scale applications, when the mobility radius of secondary users is large enough.

【Keywords】: cognitive radio; gamma distribution; mobile radio; probability; radio networks; CRN; Euclidean distance; Gamma distribution; cognitive radio network; delay performance; emergency network; general mobility framework; heavy-tailed distribution; light-tailed distribution; military network; mobile node; mobility radius; secondary user; spatial heterogeneity; spectrum efficiency; Ad hoc networks; Couplings; Interference; Mobile communication; Nonhomogeneous media; Random variables; Wireless communication

145. Physarum optimization: A biology-inspired algorithm for minimal exposure path problem in wireless sensor networks.

【Paper Link】 【Pages】:1296-1304

【Authors】: Liang Liu ; Yuning Song ; Huadong Ma ; Xi Zhang

【Abstract】: Insights from biological processes can help in designing new optimization techniques for long-standing computational problems. This paper exploits a cellular computing model of the slime mold Physarum polycephalum to solve the minimal exposure path problem, a fundamental problem corresponding to worst-case coverage in wireless sensor networks. We first formulate the minimal exposure path problem, and then convert it into the shortest path problem by discretizing the monitoring field into a large-scale weighted grid. Inspired by the path-finding capability of Physarum, we develop a new optimization algorithm, named physarum optimization, for solving the shortest path problem. The proposed algorithm has low complexity and high parallelism. Moreover, the core mechanism of physarum optimization is also helpful for designing new graph algorithms and for improving routing protocols and topology control in self-organized networks.
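The reduction described above — discretize the monitoring field into a weighted grid, then solve a shortest path problem — can be sketched independently of the physarum dynamics. The following is a minimal illustration using plain Dijkstra (not the paper's physarum optimization); the cell weights standing in for exposure costs are hypothetical:

```python
import heapq

def grid_shortest_path(weights, src, dst):
    """Dijkstra on a 4-connected weighted grid.

    weights[r][c] is the (hypothetical) exposure cost of entering cell
    (r, c); the minimal exposure path then corresponds to the cheapest
    src -> dst walk over the discretized field.
    """
    rows, cols = len(weights), len(weights[0])
    dist = {src: weights[src[0]][src[1]]}
    pq = [(dist[src], src)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == dst:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + weights[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")
```

A finer grid gives a better approximation of the continuous minimal exposure path at the cost of a larger graph, which is exactly the scale problem the physarum heuristic targets.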

【Keywords】: graph theory; minimisation; routing protocols; telecommunication network topology; wireless sensor networks; biological process; biology-inspired algorithm; cellular computing model; graph algorithm; large-scale weighted grid; minimal exposure path problem; monitoring field discretization; optimization algorithm; path-finding capability; physarum optimization; physarum polycephalum; routing protocol; self-organized network; shortest path problem; slime mold; topology control; wireless sensor networks; worst-case coverage; Computational modeling; Electron tubes; Equations; Mathematical model; Optimization; Organisms; Physarum optimization; biology-inspired computing; minimal exposure path; wireless sensor networks

146. A distributed optimal framework for mobile data gathering with concurrent data uploading in wireless sensor networks.

【Paper Link】 【Pages】:1305-1313

【Authors】: Songtao Guo ; Yuanyuan Yang

【Abstract】: In this paper, we consider mobile data gathering in wireless sensor networks (WSNs) using a mobile collector with multiple antennas. Taking into account the elastic nature of wireless link capacity and the power control of each sensor, we first propose a data gathering cost minimization (DaGCM) framework with concurrent data uploading, constrained by flow conservation, energy consumption, link capacity, compatibility among sensors, and a bound on the total sojourn time of the mobile collector at all anchor points. One of the main features of this framework is that it allows concurrent data uploading from sensors to the mobile collector, which sharply shortens data gathering latency and significantly reduces energy consumption thanks to the use of multiple antennas and the space-division multiple access technique. We then relax the DaGCM problem with Lagrangian dualization and solve it with a subgradient iteration algorithm. Furthermore, we present a distributed algorithm composed of cross-layer data control, routing, power control, and compatibility determination subalgorithms with explicit message passing. We also give the subalgorithm for finding the optimal sojourn time of the mobile collector at different anchor points. Finally, we provide numerical results showing the convergence of the proposed DaGCM algorithm and its advantages, in terms of data gathering latency and energy consumption, over an algorithm without concurrent data uploading and power control.
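The Lagrangian-dualization-plus-subgradient machinery mentioned above can be illustrated on a toy problem. The sketch below is not the DaGCM algorithm; it applies projected subgradient ascent with a diminishing step to the dual of the stand-in problem min x² s.t. x ≥ 1, whose optimum is x* = 1 with multiplier λ* = 2:

```python
def dual_subgradient(steps=500):
    """Projected subgradient ascent on the Lagrangian dual of
    min x^2 s.t. x >= 1 (a toy stand-in for the DaGCM relaxation).

    For a multiplier lam, the Lagrangian x^2 + lam*(1 - x) is minimized
    at x* = lam / 2, and the constraint slack 1 - x* is a subgradient
    of the dual function at lam.
    """
    lam = 0.0
    for k in range(1, steps + 1):
        x_star = lam / 2.0                  # inner minimization
        slack = 1.0 - x_star                # subgradient of the dual
        lam = max(0.0, lam + slack / k)     # projected, diminishing step
    return lam, lam / 2.0                   # (multiplier, recovered primal)
```

The same ascend-on-the-dual / recover-the-primal pattern underlies the paper's distributed subalgorithms, where the multipliers double as coordination messages between layers.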

【Keywords】: antenna arrays; channel capacity; distributed algorithms; iterative methods; mobile radio; power control; radio links; space division multiple access; telecommunication control; wireless sensor networks; Lagrangian dualization; WSN; antennas; compatibility determination subalgorithms; concurrent data uploading; cross-layer data control; data gathering cost minimization framework; distributed algorithm; distributed optimal framework; energy consumption; explicit message passing; flow conservation; mobile collector; mobile data gathering; power control; space-division multiple access technique; subgradient iteration algorithm; wireless link capacity; wireless sensor networks; Distributed databases; Energy consumption; Mobile communication; Optimization; Power control; Routing; Wireless sensor networks; Mobile data gathering; convex optimization; data gathering cost; distributed algorithm; wireless sensor networks

147. Distributed critical location coverage in wireless sensor networks with lifetime constraint.

【Paper Link】 【Pages】:1314-1322

【Authors】: Changlei Liu ; Guohong Cao

【Abstract】: In many surveillance scenarios, there are known critical locations where the events of concern are expected to occur. A common goal in such applications is to use sensors to monitor these critical locations with sufficient quality of surveillance within a designated period. However, with limited sensing resources, the coverage and lifetime requirements may not be satisfiable at the same time, so a sensor sometimes needs to reduce its duty cycle in order to meet a stringent lifetime constraint. In this paper, we model the critical location coverage problem using a point coverage model, with the objective of scheduling sensors to maximize the event detection probability while meeting the network lifetime requirement. We show that this problem is NP-hard and propose a distributed algorithm with a provable approximation ratio of 0.5. Extensive simulations show that the proposed distributed algorithm outperforms extensions of several state-of-the-art schemes by a significant margin while preserving the network lifetime requirement.
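As a rough illustration of greedy point coverage (the paper's distributed 0.5-approximation is more involved and additionally handles duty cycles and the lifetime constraint), a centralized greedy selection by marginal coverage gain might look like this; the sensor-to-location map and budget are hypothetical:

```python
def greedy_point_coverage(sensors, budget):
    """Hedged sketch of greedy point coverage: repeatedly activate the
    sensor covering the most not-yet-covered critical locations, until
    `budget` sensors are active or no sensor adds coverage.

    sensors: dict mapping sensor id -> set of critical locations it covers.
    """
    covered, active = set(), []
    for _ in range(budget):
        best = max(sensors, key=lambda s: len(sensors[s] - covered))
        if not (sensors[best] - covered):
            break  # no remaining sensor improves coverage
        active.append(best)
        covered |= sensors[best]
    return active, covered
```

Greedy selection of this kind is the classic baseline for NP-hard coverage problems; the paper's contribution is making such a guarantee hold in a distributed, lifetime-constrained setting.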

【Keywords】: optimisation; sensor placement; surveillance; wireless sensor networks; NP-hard problem; critical location monitoring; distributed algorithm; distributed critical location coverage; event detection probability; lifetime constraint; network lifetime requirement; point coverage model; quality of surveillance; sensor scheduling; wireless sensor network; Distributed algorithms

148. L2: Lazy forwarding in low duty cycle wireless sensor networks.

【Paper Link】 【Pages】:1323-1331

【Authors】: Zhichao Cao ; Yuan He ; Yunhao Liu

【Abstract】: Energy-constrained wireless sensor networks are duty-cycled, relying on multi-hop forwarding to collect data packets. A forwarding scheme generally involves three design elements: media access, link estimation, and routing strategy. Most existing studies, however, focus on only a subset of the three. Disregarding the low-duty-cycle nature of media access often leads to overestimation of link quality. Neglecting the bursty nature of loss over wireless links inevitably consumes much more energy than necessary and underutilizes wireless channels. The routing strategy, if not well tailored to the above factors, results in poor packet delivery performance. In this paper, we propose L2, a practical design for data forwarding in low-duty-cycle wireless sensor networks. L2 addresses link burstiness using a multivariate Bernoulli link model. Further incorporating synchronized rendezvous, L2 enables sensor nodes to work in a lazy mode, keeping their radios off as long as they can and allocating their precious energy to only a limited number of promising transmissions. We implement L2 on real sensor network testbeds. The results show that L2 outperforms state-of-the-art approaches in terms of energy efficiency and network yield.

【Keywords】: radio links; telecommunication network routing; wireless channels; wireless sensor networks; L2 lazy forwarding scheme; bursty loss characteristic; data forwarding; data packet collection; duty-cycled relaying; energy efficiency; energy-constrained wireless sensor networks; link estimation; low duty cycle wireless sensor networks; media access; multihop forwarding scheme; multivariate Bernoulli link model; routing strategy; sensor nodes; wireless channels; wireless links; Delay; Estimation; Receivers; Routing; Schedules; Synchronization; Wireless sensor networks

149. A server's perspective of Internet streaming delivery to mobile devices.

【Paper Link】 【Pages】:1332-1340

【Authors】: Yao Liu ; Fei Li ; Lei Guo ; Bo Shen ; Songqing Chen

【Abstract】: Receiving Internet streaming services on various mobile devices is becoming more and more popular. To understand and better support Internet streaming delivery to mobile devices, a number of studies have been conducted. However, existing studies have mainly focused on client-side resource consumption and streaming quality. So far, little is known about the server side, which is the key to providing successful mobile streaming services. In this work, we set out to investigate Internet mobile streaming service at the server side. For this purpose, we collected a one-month server log (with 212 TB of delivered video traffic) from a top Internet mobile streaming service provider serving mobile users worldwide. Through trace analysis, we find that (1) a major challenge for providing Internet mobile streaming services is rooted in mobile device hardware and software heterogeneity: in this workload, we find over 2800 different hardware models with about 100 different screen resolutions, running 14 different mobile OSes, 3 audio codecs, and 4 video codecs. (2) To deal with the device heterogeneity, transcoding is used at runtime to customize each video into the appropriate version for different devices; a video clip could be transcoded into more than 40 different versions in order to serve requests from different devices. (3) Compared to videos in traditional Internet streaming, mobile streaming videos are typically of much smaller size (a median of 1.68 MBytes) and shorter duration (a median of 2.7 minutes). Furthermore, daily mobile user accesses are more skewed, following a Zipf-like distribution, but users' interests also shift quickly. Considering the huge demand for CPU cycles in online transcoding, we further examine server-side caching in order to reduce that demand. We show that a policy that considers the different versions of a video altogether outperforms other intuitive ones when the cache size is limited.
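The Zipf-like access skew noted in finding (3) is easy to make concrete. A small helper for the rank-probability law (the exponent s here is an assumption for illustration; the paper does not report a fitted value) could be:

```python
def zipf_pmf(n, s=1.0):
    """Zipf-like popularity over n items: p(rank k) proportional to 1/k^s.

    Under such skew, a small number of top-ranked videos attract most
    requests, which is why caching hot videos (and, per the paper, all
    transcoded versions of a hot video together) pays off.
    """
    norm = sum(1.0 / k ** s for k in range(1, n + 1))
    return [1.0 / (k ** s * norm) for k in range(1, n + 1)]
```

For example, with n = 3 and s = 1 the top-ranked item alone receives 6/11 of all accesses, illustrating how a limited cache can still absorb a large share of transcoding load.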

【Keywords】: Internet; audio coding; audio streaming; cache storage; client-server systems; mobile computing; operating systems (computers); transcoding; video codecs; video coding; video streaming; Internet mobile streaming service; Internet streaming delivery; Zipf-like distribution; audio codecs; client side resource consumption; mobile OS; mobile device hardware; online transcoding; server side; server-side caching; software heterogeneity; streaming quality; trace analysis; video clip transcoding; video codecs; video traffic; Internet; Mobile communication; Mobile handsets; Servers; Streaming media; Video codecs

150. Characterizing geospatial dynamics of application usage in a 3G cellular data network.

【Paper Link】 【Pages】:1341-1349

【Authors】: Muhammad Zubair Shafiq ; Lusheng Ji ; Alex X. Liu ; Jeffrey Pang ; Jia Wang

【Abstract】: Recent studies on cellular network measurement have provided evidence that significant geospatial correlations, in terms of traffic volume and application access, exist in cellular network usage. Such geospatial correlation patterns offer cellular network operators local optimization opportunities for handling the explosive growth in traffic volume observed in recent years. To the best of our knowledge, this paper provides the first fine-grained characterization of the geospatial dynamics of application usage in a 3G cellular data network. Our analysis is based on two simultaneously collected traces from the radio access network (containing location records) and the core network (containing traffic records) of a tier-1 cellular network in the United States. To better understand the application usage in our data, we first cluster cell locations based on their application distributions and then study the geospatial dynamics of application usage across different geographical regions. The results of our measurement study present cellular network operators with fine-grained insights that can be leveraged to tune network parameter settings.

【Keywords】: 3G mobile communication; cellular radio; radio access networks; 3G cellular data network; United States; cellular network measurement; cellular network operators; core network; first fine-grained characterization; geospatial correlations; geospatial dynamics; location records; radio access network; traffic records; Computer architecture; Electronic mail; Geospatial analysis; IP networks; Indexes; Land mobile radio cellular systems; Optimization

151. Threshold compression for 3G scalable monitoring.

【Paper Link】 【Pages】:1350-1358

【Authors】: Suk-Bok Lee ; Dan Pei ; MohammadTaghi Hajiaghayi ; Ioannis Pefkianakis ; Songwu Lu ; He Yan ; Zihui Ge ; Jennifer Yates ; Mario Kosseifi

【Abstract】: We study the problem of scalable monitoring of operational 3G wireless networks. Threshold-based performance monitoring in large 3G networks is very challenging due to two main factors: large network scale and dynamics in both the time and spatial domains. A fine-grained threshold setting (e.g., per-location hourly) incurs prohibitively high management complexity, while a single static threshold fails to capture the network dynamics, resulting in unacceptably poor alarm quality (up to 70% false/miss alarm rates). In this paper, we propose a scalable monitoring solution, called threshold compression, that characterizes the location- and time-specific threshold trend of each individual network element (NE) with a minimal number of threshold settings. The main insight is to identify groups of NEs with similar threshold behaviors across the location and time dimensions, forming spatial-temporal clusters that reduce the number of thresholds while maintaining acceptable alarm accuracy in a large-scale 3G network. Our evaluations, based on operational experience with a commercial 3G network, demonstrate the effectiveness of the proposed solution: we are able to reduce the threshold settings by up to 90% with fewer than 10% false/miss alarms.

【Keywords】: 3G mobile communication; mobility management (mobile radio); monitoring; telecommunication network reliability; 3G wireless network; NE; fine-grained threshold setting; location-specific threshold trend; management complexity; network element; scalable monitoring; single static threshold; spatial-temporal cluster; threshold compression; threshold-based performance monitoring; time-specific threshold trend; Accuracy; Clustering algorithms; Complexity theory; Compression algorithms; Downlink; Monitoring; Throughput

152. On the optimal mobile association in heterogeneous wireless relay networks.

【Paper Link】 【Pages】:1359-1367

【Authors】: Qian Clara Li ; Rose Qingyang Hu ; Geng Wu ; Yi Qian

【Abstract】: This paper considers a cellular network with multiple low-power relay nodes (RNs) deployed in each cell to help the downlink transmission between the base stations (BSs) and the user equipments (UEs). The difference in transmission power between the BSs and the RNs makes this network heterogeneous. With conventional mobile association schemes, traffic load may concentrate on the BSs due to their high transmit power, leading to a highly unbalanced traffic load distribution and inefficient utilization of the RNs. To improve the capacity and the overall spectrum efficiency of the network, it is beneficial to let the RNs share a larger portion of the traffic burden. Toward this end, in associating UEs with RNs, the spectrum consumption on both the UE-to-RN and RN-to-BS links should be cautiously considered, and the interference from the high-power BSs to the low-power RNs needs to be carefully mitigated. A systematic mobile association scheme that considers load balancing and overall spectrum efficiency is essential to ensure fair and efficient usage of network resources. In this paper, we propose a load-balancing-based mobile association framework under both full frequency reuse and partial frequency reuse and find pseudo-optimal solutions using the gradient descent method. Compared with other mobile association schemes, the proposed framework achieves a significant improvement in overall network capacity and RN utilization. To enable dynamic and timely mobile association for incoming UEs, we also develop an online mobile association algorithm based on the gradient descent method. Simulation results show that the proposed online algorithm achieves a good tradeoff between association delay and network capacity.

【Keywords】: cellular radio; frequency allocation; gradient methods; base stations; cellular network; downlink transmission; full frequency reuse; gradient descent method; heterogeneous wireless relay networks; load balancing based mobile association framework; multiple low power relay node; network capacity; optimal mobile association; partial frequency reuse; pseudooptimal solution; spectrum efficiency; transmission power difference; user equipments; Interference; Load management; Mobile communication; Mobile computing; Relays; Resource management; Signal to noise ratio

153. Social feature-based multi-path routing in delay tolerant networks.

【Paper Link】 【Pages】:1368-1376

【Authors】: Jie Wu ; Yunsheng Wang

【Abstract】: Most routing protocols for delay tolerant networks rely on sufficient state information, including trajectory and contact information, to ensure routing efficiency. However, state information tends to be dynamic and hard to obtain without a global and/or long-term collection process. In this paper, we instead use the internal social features of each node in the network to perform the routing process. This approach is motivated by several social contact networks, such as the Infocom 2006 trace, where people contact each other more frequently if they have more social features in common. Our approach includes two unique processes: social feature extraction and multi-path routing. In social feature extraction, we use entropy to extract the m most informative social features to create a feature space (F-space): (F1, F2, ..., Fm), where Fi corresponds to a feature. Routing then becomes a hypercube-based feature matching process in which feature differences are resolved step by step. We offer two special multi-path routing schemes: node-disjoint-based routing and delegation-based routing. Extensive simulations on both real and synthetic traces are conducted in comparison with several existing approaches, including spray-and-wait routing and spray-and-focus routing.
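The entropy-driven F-space construction and the hypercube distance it induces can be sketched as follows; the social profiles used here are hypothetical, and the routing schemes themselves are not reproduced:

```python
from collections import Counter
from math import log2

def entropy(values):
    """Shannon entropy of one social feature's empirical distribution."""
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in Counter(values).values())

def top_m_features(profiles, m):
    """Pick the m most informative (highest-entropy) features to form
    the F-space, as in the paper's social feature extraction step."""
    features = profiles[0].keys()
    ranked = sorted(features,
                    key=lambda f: entropy([p[f] for p in profiles]),
                    reverse=True)
    return ranked[:m]

def feature_distance(a, b, fspace):
    """Number of unresolved feature differences between two nodes --
    the Hamming-style hop metric of hypercube-based feature matching."""
    return sum(1 for f in fspace if a[f] != b[f])
```

A forwarding step then hands the packet to a neighbor whose feature distance to the destination is one less than the current node's, resolving one feature difference per step.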

【Keywords】: delay tolerant networks; entropy; feature extraction; routing protocols; Infocom 2006 trace; delay tolerant networks; delegation-based routing; entropy; hypercube-based feature matching process; internal social features; multi-path routing schemes; node-disjoint-based routing; routing protocols; social contact networks; social feature extraction; social feature-based multi-path routing; spray-and-focus routing; spray-and-wait routing; state information; step-by-step feature difference resolving process; Delay; Entropy; Feature extraction; Hypercubes; Mobile communication; Routing; Closeness; delay tolerant networks; entropy; hypercubes; multi-path routing; social features

154. Traps and pitfalls of using contact traces in performance studies of opportunistic networks.

【Paper Link】 【Pages】:1377-1385

【Authors】: Nikodin Ristanovic ; George Theodorakopoulos ; Jean-Yves Le Boudec

【Abstract】: Contact-based simulations are a very popular tool for the analysis of opportunistic networks. They are used to evaluate networking metrics, to quantify the effects of infrastructure, and to design forwarding strategies. However, little evidence exists that the results of such simulations accurately describe the performance of opportunistic networks, as they commonly ignore important factors (like limited transmission bandwidth) or rely on assumptions such as infinite user cache sizes. To evaluate this issue, we design a testbed with a real application and real users; we collect application data in addition to the contact traces and compare the measured performance to the results of contact-based simulations. We find that contact-based simulations significantly overestimate the delivery ratio, while the delay they capture tends to be 2-3 times lower than the experimentally obtained delay. We show that assuming infinite cache sizes leads to misinterpretation of the effects of a backbone on an opportunistic network. Finally, we show that contact traces can be used to analytically estimate delivery ratios and the impact of a backbone, through the dependency between a user's centrality measure and her delivery ratio.

【Keywords】: computer network performance evaluation; telecommunication computing; contact based simulation; contact trace; forwarding strategy design; infinite user cache size; opportunistic network analysis; opportunistic network backbone; opportunistic networks; transmission bandwidth; Delay; Internet; Logic gates; Performance evaluation; Servers; Twitter

155. Efficient multicasting for delay tolerant networks using graph indexing.

【Paper Link】 【Pages】:1386-1394

【Authors】: Misael Mongiovì ; Ambuj K. Singh ; Xifeng Yan ; Bo Zong ; Konstantinos Psounis

【Abstract】: In Delay Tolerant Networks (DTNs), end-to-end connectivity between nodes does not always occur due to limited radio coverage, node mobility and other factors. Remote communication may assist in guaranteeing delivery. However, it has a considerable cost, and consequently, minimizing it is an important task. For multicast routing, the problem is NP-hard, and naive approaches are infeasible on large problem instances. In this paper we define the problem of minimizing the remote communication cost for multicast in DTNs. Our formulation handles the realistic scenario in which a data source is continuously updated and nodes need to receive recent versions of data. We analyze the problem in the case of scheduled trajectories and known traffic demands, and propose a solution based on a novel graph indexing system. We also present an adaptive extension that can work with limited knowledge of node mobility. Our method reduces the search space significantly and finds an optimal solution in reasonable time. Extensive experimental analysis on large real and synthetic datasets shows that the proposed method completes in less than 10 seconds on datasets with millions of encounters, with an improvement of up to 100 times compared to a naive approach.

【Keywords】: delay tolerant networks; graph theory; multicast communication; optimisation; telecommunication network routing; telecommunication traffic; NP-hard problem; delay tolerant networks; end-to-end connectivity; graph indexing; multicast routing; multicasting; node mobility; radio coverage; remote communication cost; traffic demands; Data models; Delay; Indexing; Routing; Servers; Trajectory

156. PReFilter: An efficient privacy-preserving Relay Filtering scheme for delay tolerant networks.

【Paper Link】 【Pages】:1395-1403

【Authors】: Rongxing Lu ; Xiaodong Lin ; Tom H. Luan ; Xiaohui Liang ; Xu Li ; Le Chen ; Xuemin Shen

【Abstract】: Without a direct path, information delivery in sparse delay tolerant networks (DTNs) typically relies on intermittent relays, making transmission not only unreliable but also time-consuming. To make matters worse, source nodes may transmit encrypted “junk” information, similar to the spam emails in current mail systems, to the destinations; without effective control, the delivery of encrypted junk information would significantly consume the precious resources of a DTN and accordingly throttle its efficiency. To address this challenging issue, we propose PReFilter, an efficient privacy-preserving relay filtering scheme that blocks the relay of encrypted junk information early in DTNs. In PReFilter, each node maintains a specific filtering policy based on its interests and distributes this policy in advance to a group of “friends” in the network. By applying the filtering policy, the friends can filter out junk packets heading to the node during the relay. Since the keywords in the filtering policy may disclose the node's interests/preferences to some extent, harming the privacy of nodes, a privacy-preserving filtering policy distribution technique is introduced, which keeps the sensitive keywords in the filtering policy secret. Through detailed security analysis, we demonstrate that PReFilter can prevent strong privacy-curious adversaries from learning the filtering keywords and discourage a weakly privacy-curious friend from guessing the filtering keywords from the filtering policy. In addition, with extensive simulations, we show that PReFilter is not only effective in filtering junk packets but also significantly improves network performance by dramatically reducing the delivery cost incurred by junk packets.

【Keywords】: cryptography; delay tolerant networks; filtering theory; relays; telecommunication security; PReFilter scheme; delay tolerant network; encrypted junk information; information delivery; network efficiency; privacy-preserving relay filtering scheme; Cryptography; Delay; Filtering; Neodymium; Nickel; Relays; Delay tolerant networks; Privacy-preserving; Relay filtering

157. Sliding Mode Congestion Control for data center Ethernet networks.

【Paper Link】 【Pages】:1404-1412

【Authors】: Wanchun Jiang ; Fengyuan Ren ; Ran Shu ; Chuang Lin

【Abstract】: Recently, Ethernet has been enhanced to serve as the unified switch fabric of data centers, called Data Center Ethernet. End-to-end congestion management is one of the indispensable enhancements, and Quantized Congestion Notification (QCN) has been ratified as the standard. Our experiments show that QCN suffers from oscillation of the queue at the bottleneck link. With changes in system parameters and network configurations, the oscillation may become so severe that the queue is emptied frequently; as a result, the utilization of the bottleneck link degrades. Theoretical analysis shows that QCN approaches the equilibrium point mainly through sliding mode motion, but whether QCN enters the sliding mode motion also depends on both system parameters and network configurations. Hence, we present the Sliding Mode Congestion Control (SMCC) scheme, which can drive the system into the sliding mode motion under any conditions. SMCC benefits from the fact that sliding mode motion is insensitive to system parameters and external disturbances. Moreover, SMCC is simple, stable, and has a short response time. QCN can easily be replaced by SMCC, since both follow the framework developed by the IEEE 802.1Qau working group. Experiments on the NetFPGA platform show that SMCC is superior to QCN, especially when the traffic pattern and the network state are variable.
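Sliding mode control in general can be illustrated with a toy rate controller that pushes the bottleneck queue toward a reference by moving against the sign of a sliding surface. This is only a caricature, not the paper's SMCC (whose surface and gains are tied to QCN's feedback framework); all parameters below are made up:

```python
def smc_rate_controller(q_samples, q_ref=20.0, tau=0.5, gain=1.0, r0=10.0):
    """Illustrative sliding-mode-style rate control: for each queue
    sample, form the sliding surface s = e + tau * de (error plus its
    trend) and step the send rate against sign(s). The sign-driven
    update is what makes the motion insensitive to plant parameters.
    """
    r, prev_e = r0, None
    rates = []
    for q in q_samples:
        e = q - q_ref                       # queue error
        de = 0.0 if prev_e is None else e - prev_e
        s = e + tau * de                    # sliding surface
        step = 1 if s > 0 else -1 if s < 0 else 0
        r = max(0.0, r - gain * step)       # move against sign(s)
        rates.append(r)
        prev_e = e
    return rates
```

Once the state reaches the surface s = 0 it chatters along it toward the equilibrium queue, regardless of the exact link parameters, which is the robustness property SMCC exploits.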

【Keywords】: computer centres; local area networks; queueing theory; telecommunication congestion control; telecommunication network management; telecommunication traffic; variable structure systems; IEEE 802.1 Qau work group; NetFPGA platform; QCN; SMCC scheme; data center Ethernet network; end-to-end congestion management; network configuration; network state; quantized congestion notification; queue oscillation; sliding mode congestion control; sliding mode motion; system parameter; traffic pattern; unified switch fabric; Artificial intelligence; Switches; Data Center Ethernet; Quantized Congestion Notification; Sliding Mode Motion

158. Exploring server redundancy in nonblocking multicast data center networks.

【Paper Link】 【Pages】:1413-1421

【Authors】: Zhiyang Guo ; Zhemin Zhang ; Yuanyuan Yang

【Abstract】: Clos networks and their variations, such as folded-Clos networks (fat-trees), have been widely adopted as network topologies in data center networks. Since multicast is an essential communication pattern in many cloud services, nonblocking multicast communication can ensure the high performance of such services. However, nonblocking multicast Clos networks are costly due to the large number of middle-stage switches required. On the other hand, server redundancy is ubiquitous in today's data centers to provide high availability of services. In this paper, we exploit server redundancy in data centers to reduce the cost of nonblocking multicast Clos data center networks (DCNs). First, we show that the sufficient nonblocking condition on the number of middle-stage switches for multicast Clos DCNs can be significantly relaxed when the data center is 2-redundant, i.e., each server in the data center has exactly one redundant backup. We then investigate the more general case where the data center is k-redundant (k > 2), and show that a higher redundancy level further reduces the cost of nonblocking multicast Clos DCNs. We also extend the result to practical data centers where servers may have different numbers of redundant backups depending on the availability requirements of the services provided. Finally, we provide a multicast routing algorithm with linear time complexity to configure multicast connections in Clos DCNs.

【Keywords】: cloud computing; computer centres; multicast communication; multistage interconnection networks; telecommunication network topology; Clos networks; cloud services; network topologies; nonblocking multicast data center networks; server redundancy; Computer crashes; Servers; Switches; Clos networks; Data center networks; fat-trees; multicast; network cost; nonblocking; redundancy

159. Experimental performance comparison of Byzantine Fault-Tolerant protocols for data centers.

【Paper Link】 【Pages】:1422-1430

【Authors】: Guanfeng Liang ; Benjamin Sommer ; Nitin H. Vaidya

【Abstract】: In this paper, we compare the performance of several Byzantine agreement algorithms, including NCBA, a network-coding-based algorithm. Unlike existing practical BFT protocols such as PBFT by Castro and Liskov [1], which use collision-resistant hash functions to reduce the traffic load of BFT, NCBA uses a computationally efficient error-detecting network coding scheme. Since NCBA does not rely on any hash function, it is always correct, rather than correct only with high probability as PBFT is. Through extensive experiments, we verified that NCBA performs at least as well as Digest, without relying on any cryptographic assumption about the hardness of breaking the hash function. To the best of our knowledge, this is the first implementation of BFT with network coding.

【Keywords】: computer centres; computer network security; cryptographic protocols; error detection codes; fault tolerant computing; BFT protocol; Byzantine agreement algorithm; Byzantine fault-tolerant protocol; Digest; NCBA; PBFT; collision-resistant hash function; cryptographic assumption; data centers; error-detection network coding scheme; experimental performance comparison; network coding based algorithm; traffic load reduction; Fault diagnosis; Fault tolerance; Fault tolerant systems; Network coding; Peer to peer computing; Protocols; Servers

160. Data centers power reduction: A two time scale approach for delay tolerant workloads.

Paper Link】 【Pages】:1431-1439

【Authors】: Yuan Yao ; Longbo Huang ; Abhishek B. Sharma ; Leana Golubchik ; Michael J. Neely

【Abstract】: In this work we focus on a stochastic optimization based approach to make distributed routing and server management decisions in the context of large-scale, geographically distributed data centers, which offers significant potential for exploring power cost reductions. Our approach considers such decisions at different time scales and offers provable power cost and delay characteristics. The utility of our approach and its robustness are also illustrated through simulation-based experiments under delay tolerant workloads.

【Keywords】: computer centres; power aware computing; stochastic programming; data center power reduction; delay tolerant workload; distributed routing decision; power cost reduction; server management decision; stochastic optimization based approach; time scale approach; Delay; Distributed databases; Optimization; Robustness; Routing; Servers; Vectors

161. AP association in 802.11n WLANs with heterogeneous clients.

Paper Link】 【Pages】:1440-1448

【Authors】: Dawei Gong ; Yuanyuan Yang

【Abstract】: As the latest amendment of the IEEE 802.11 standard, 802.11n allows a maximum raw data rate as high as 300Mbps, making it a desirable candidate for wireless local area network (WLAN) deployment. In typical deployments, the coverage areas of nearby access points (APs) usually overlap with one another to provide satisfactory coverage and seamless mobility support. Clients tend to associate (connect) to the AP with the strongest signal strength, which might lead to poor client throughput and overloaded APs. Although a number of AP association schemes have been proposed for IEEE 802.11 WLANs in previous studies, none of them has considered the frame aggregation feature of 802.11n. Moreover, the impact of legacy 802.11a/b/g clients in 802.11n WLANs has not been considered in AP association. To fill this gap, in this paper we explore AP association for 802.11n WLANs with heterogeneous clients (802.11a/b/g/n). We first formulate it as an optimization problem based on a bi-dimensional Markov model, aiming at providing clients with bandwidth proportional to their highest physical data rates, and then propose two heuristic AP association algorithms that can efficiently make online decisions on AP association. We have also conducted extensive simulations and experiments to validate the proposed algorithms. Our simulation results show that under a hotspot client distribution, the proposed algorithms can boost the throughput of 802.11n clients and the overall throughput by 106% and 89%, respectively, compared to other AP association schemes. Experiments also confirm the effectiveness of the algorithms in enhancing aggregate throughput, maintaining proportional fairness among clients and balancing load among APs.

【Keywords】: Markov processes; optimisation; resource allocation; wireless LAN; 802.11a-b-g clients; 802.11n WLAN; AP association scheme; IEEE 802.11 standard; access points; bidimensional Markov model; frame aggregation feature; heterogeneous clients; heuristic AP association algorithms; hotspot client distribution; load balancing; optimization problem; wireless local area network deployment; Throughput; AP Association; Frame Aggregation; Heterogeneous Clients; IEEE 802.11n Standard; Wireless Local Area Networks (WLANs)
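
The rate-anomaly effect that motivates the paper's proportional-bandwidth objective can be illustrated with a toy greedy association heuristic (a baseline sketch, not the paper's Markov-model-based algorithms; all AP names and PHY rates below are hypothetical):

```python
def share_after_join(member_rates, new_rate):
    """Per-client throughput (Mbps) if a client at PHY rate new_rate joins an AP
    whose members transmit at member_rates: with frame-level fairness every
    client gets 1 / sum(1/r_j), the classic 802.11 rate-anomaly share."""
    return 1.0 / (sum(1.0 / r for r in member_rates) + 1.0 / new_rate)

def greedy_associate(clients):
    """clients: {name: {ap: phy_rate}} -> {name: chosen_ap}.
    Each client (in name order) joins the AP maximizing its post-join share."""
    members = {}   # ap -> list of member PHY rates
    choice = {}
    for name, rates in sorted(clients.items()):
        best = max(rates, key=lambda ap: share_after_join(members.get(ap, []), rates[ap]))
        members.setdefault(best, []).append(rates[best])
        choice[name] = best
    return choice
```

With two identical clients and two identical APs, the heuristic spreads them across APs rather than letting both follow the strongest signal to one AP.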

162. HJam: Attachment transmission in WLANs.

Paper Link】 【Pages】:1449-1457

【Authors】: Kaishun Wu ; Haochao Li ; Lu Wang ; Youwen Yi ; Yunhuai Liu ; Qian Zhang ; Lionel M. Ni

【Abstract】: Effective coordination can dramatically reduce radio interference and avoid packet collisions in multi-station wireless local area networks (WLANs). Coordination itself, however, consumes communication resources and thus competes with data transmission for the limited wireless radio resources. In traditional approaches, control frames and data packets are transmitted in an alternate manner, which brings a great deal of coordination overhead. In this paper we propose a new communication model in which control frames can be “attached” to the data transmission. Thus, control messages and data traffic can be transmitted simultaneously, and consequently channel utilization can be improved significantly. We implement the idea in a system for OFDM-based WLANs called hJam, which fully exploits the physical layer features of the OFDM modulation method and allows one data packet and a number of control messages to be transmitted together. hJam is implemented on a GNU Radio testbed consisting of eight USRP2 nodes. We also conduct comprehensive simulations, and the experimental results show that hJam can improve WLAN efficiency by up to 72% compared with the existing 802.11 family of protocols.

【Keywords】: OFDM modulation; wireless LAN; OFDM modulation; OFDM-based WLAN; attachment transmission; communication model; control messages; coordination overhead; data traffic; data transmission; hJam; multistation wireless local area networks; packet collisions; physical layer; radio interference; wireless radio resources; Bit error rate; Data communication; Detectors; Interference cancellation; Jamming; Noise; OFDM

163. SAP: Smart Access Point with seamless load balancing multiple interfaces.

Paper Link】 【Pages】:1458-1466

【Authors】: Xi Chen ; Yue Zhao ; Brian Peck ; Daji Qiao

【Abstract】: Providing adequate Wi-Fi services to meet user demand in densely populated environments has been a fundamental challenge for Wi-Fi networks. In this paper, we explore the emerging OAMI (One-AP-Multiple-Interface) architecture and propose a unique solution called SAP (Smart Access Point). SAP takes full advantage of the OAMI architecture to provide seamless handoff experience to users, while smartly balancing the network load across multiple interfaces based on users' time-varying traffic load conditions. Moreover, we define a Traffic Fulfillment (TF) performance metric to quantify the user experience and aid in association scheduling. SAP is an AP-only solution that requires trivial network modifications and is backwards compatible with legacy 802.11 stations. We have implemented SAP in the MadWifi device driver and demonstrated its effectiveness via experiments.

【Keywords】: device drivers; mobility management (mobile radio); resource allocation; telecommunication traffic; user interfaces; wireless LAN; AP-only solution; MadWifi device driver; OAMI architecture; One-AP-multiple interface; SAP; Wi-Fi service network; association scheduling; densely populated environment; legacy 802.11 station; seamless handoff experience; seamless network load balancing multiple interface; smart access point; traffic fulfillment performance metric; trivial network modification; user experience; user time-varying traffic load condition

164. ADAM: An adaptive beamforming system for multicasting in wireless LANs.

Paper Link】 【Pages】:1467-1475

【Authors】: Ehsan Aryafar ; Mohammad Ali Khojastepour ; Karthikeyan Sundaresan ; Sampath Rangarajan ; Edward W. Knightly

【Abstract】: We present the design and implementation of ADAM, the first adaptive beamforming based multicast system and experimental framework for indoor wireless environments. ADAM addresses the joint problem of adaptive beamformer design at the PHY layer and client scheduling at the MAC layer by proposing efficient algorithms that are amenable to practical implementation. ADAM is implemented on an FPGA platform and its performance is compared against that of omni-directional and switched beamforming based multicast. Our experimental results reveal that (i) switched multicast beamforming has limited gains in indoor multi-path environments; its deficiencies can be effectively overcome by ADAM to yield an average three-fold gain; (ii) the higher the dynamic range of the discrete transmission rates employed by the MAC hardware, the higher the gains in ADAM's performance, yielding up to nine-fold improvement over omni-directional multicast with the 802.11 rate table; and (iii) ADAM's performance is susceptible to channel variations due to user mobility and infrequent channel information feedback. However, we show that training ADAM's SNR-rate mapping to incorporate feedback rate and coherence time significantly increases its robustness to channel dynamics.

【Keywords】: array signal processing; field programmable gate arrays; indoor environment; multicast communication; scheduling; wireless LAN; ADAM; FPGA platform; MAC layer; PHY layer; SNR-rate mapping; adaptive beamforming based multicast system; channel dynamic; channel variation; client scheduling; indoor multipath environment; indoor wireless environment; infrequent channel information feedback; omnidirectional beamforming based multicast; switched beamforming based multicast; wireless LAN; Algorithm design and analysis; Array signal processing; Multicast communication; Partitioning algorithms; Signal to noise ratio; Switches; Transmitting antennas

165. Capacity and delay analysis for social-proximity urban vehicular networks.

Paper Link】 【Pages】:1476-1484

【Authors】: Ning Lu ; Tom H. Luan ; Miao Wang ; Xuemin Shen ; Fan Bai

【Abstract】: In this paper, the asymptotic capacity and delay performance of social-proximity urban vehicular networks with inhomogeneous vehicle density are analyzed. Specifically, we investigate the case of N vehicles in a grid-like street layout where the number of road segments increases linearly with the population of vehicles. Each vehicle moves in a localized mobility region centered at a fixed social spot and communicates with a destination vehicle in the same mobility region via a unicast flow. With a variant of the two-hop relay scheme applied, we show that social-proximity urban networks are scalable: a constant average per-vehicle throughput can be achieved with high probability. Furthermore, although the throughput and delay of a unicast flow may degrade in a high-density area, almost constant per-vehicle throughput Ω(1/log N) and almost constant delay O(log² N) (up to the polylogarithmic factor) are still achievable with high probability. By mathematically identifying the key factors affecting performance, our results should provide insight into the design and deployment of future vehicular networks.

【Keywords】: road safety; vehicular ad hoc networks; almost constant delay; almost constant pervehicle throughput; capacity analysis; delay analysis; fixed social spot; grid-like street layout; inhomogeneous vehicle density; localized mobility region; road segments; social proximity urban vehicular networks; unicast flow; vehicular network deployment; Delay; Relays; Roads; Steady-state; Throughput; Unicast; Vehicles

166. Infrastructure-assisted routing in vehicular networks.

Paper Link】 【Pages】:1485-1493

【Authors】: Yuchen Wu ; Yanmin Zhu ; Bo Li

【Abstract】: Deploying roadside access points (APs) as infrastructure can improve data delivery. Our empirical results from real-trace-driven simulations show that deploying APs produces up to a 5× gain in delivery ratio and reduces delivery delay by as much as 35% with simple routing. However, we also find that buffer resources at the APs become a critical factor, and poor buffer allocation leads to only marginal performance gains for inter-vehicle routing. Motivated by this important observation, we investigate optimal infrastructure-assisted routing for inter-vehicle data delivery. It remains a challenging issue for two major reasons. First, the addition of APs dramatically changes delivery opportunities between vehicles, which has not been well understood by existing work. Second, packet forwarding and buffer allocation are inter-dependent and should be addressed together. To tackle these challenges, we first characterize packet delivery probability as a function of predicted vehicle trajectories and AP locations. Then, we formulate the joint problem of packet forwarding and buffer allocation as an optimization problem and show that it is a knapsack problem. We design a global algorithm to solve this optimization problem. For more realistic settings, we propose a distributed algorithm for packet forwarding and buffer allocation in which each vehicle and the APs make decisions locally. Through trace-driven simulations, we demonstrate that the distributed algorithm steadily outperforms alternative approaches under a wide range of network configurations.

【Keywords】: decision making; delays; distributed algorithms; knapsack problems; mobile radio; optimisation; probability; road vehicles; telecommunication network routing; buffer allocation; decision making; delivery delay reduction; distributed algorithm; intervehicle routing data delivery improvement; knapsack problem; optimal infrastructure-assisted routing; optimization problem; packet delivery probability; packet forwarding; real trace driven simulation; roadside AP deployment; roadside access point deployment; vehicle trajectory prediction; vehicular network; Delay; Performance gain; Relays; Resource management; Routing; Trajectory; Vehicles; Buffer Allocation; Infrastructure Assisted; Packet forwarding; Vehicular Network
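
The knapsack structure identified above can be illustrated with a textbook 0/1 knapsack dynamic program, where each packet is scored by a hypothetical delivery-probability gain (value) and buffer footprint (size); this is a generic sketch, not the paper's global or distributed algorithm:

```python
def knapsack(items, capacity):
    """0/1 knapsack over items = [(value, size)]; returns (best_value, chosen).
    Here 'value' stands in for a packet's delivery-probability gain and
    'size' for its buffer footprint (illustrative, not the paper's model)."""
    best = [0.0] * (capacity + 1)                # best value at each capacity
    keep = [[False] * (capacity + 1) for _ in items]
    for i, (value, size) in enumerate(items):
        for c in range(capacity, size - 1, -1):  # iterate downward: 0/1, not unbounded
            if best[c - size] + value > best[c]:
                best[c] = best[c - size] + value
                keep[i][c] = True
    chosen, c = [], capacity                     # backtrack the chosen packets
    for i in range(len(items) - 1, -1, -1):
        if keep[i][c]:
            chosen.append(i)
            c -= items[i][1]
    return best[capacity], sorted(chosen)
```

For example, with a 2-slot buffer, two small packets of combined gain 1.1 beat one large packet of gain 0.9.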

167. RISA: Distributed Road Information Sharing Architecture.

Paper Link】 【Pages】:1494-1502

【Authors】: Joon Ahn ; Yi Wang ; Bo Yu ; Fan Bai ; Bhaskar Krishnamachari

【Abstract】: With the advent of the new IEEE 802.11p DSRC/WAVE radios, Vehicle-to-Vehicle (V2V) communication is poised for a dramatic leap. A canonical application for these future vehicular networks is the detection and notification of anomalous road events (e.g., potholes, bumps, icy road patches, etc.). We present the Road Information Sharing Architecture (RISA), the first distributed approach to road condition detection and dissemination for vehicular networks. RISA provides for the in-network aggregation and dissemination of event information detected by multiple vehicles in a timely manner for improved information reliability and bandwidth efficiency. RISA uses a novel Time-Decay Sequential Hypothesis Testing (TD-SHT) approach in which event information from multiple sources is combined with time-varying beliefs. We describe our implementation of RISA, which has been deployed and tested on a fleet of vehicles on-site at the GM Warren Technical Center in Michigan. We further provide a comprehensive evaluation of the aggregation mechanism using emulation of the RISA code on real vehicular mobility traces.

【Keywords】: mobile radio; telecommunication network reliability; telecommunication standards; wireless LAN; DSRC radio; IEEE 802.11p; RISA; WAVE radio; anomalous road events; bandwidth efficiency; distributed road information sharing architecture; in network aggregation; information reliability; road condition detection; time decay sequential hypothesis testing; time varying beliefs; vehicle-to-vehicle communications; vehicular mobility traces; wireless access in vehicular environment; Emulation; Global Positioning System; Roads; Sensors; Software systems; Testing; Vehicles
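
The TD-SHT idea, sequential hypothesis testing in which older reports carry less weight, can be sketched as an exponentially decayed log-likelihood-ratio accumulator; the detection/false-alarm probabilities, decay factor, and threshold below are illustrative assumptions, not RISA's parameters:

```python
import math

def td_sht(reports, p_tp=0.8, p_fp=0.1, decay=0.9, threshold=2.0):
    """Time-Decay Sequential Hypothesis Testing sketch. reports is a list of
    (age_in_slots, detected) pairs from different vehicles; each contributes a
    log-likelihood ratio for 'event present' vs 'absent', discounted by age.
    Declares the event once the accumulated score crosses the threshold."""
    score = 0.0
    for age, detected in reports:
        if detected:
            llr = math.log(p_tp / p_fp)              # supports 'event present'
        else:
            llr = math.log((1 - p_tp) / (1 - p_fp))  # supports 'event absent'
        score += (decay ** age) * llr
        if score >= threshold:
            return True, score
    return False, score
```

Fresh confirming reports push the score over the threshold quickly, while stale or negative reports contribute little or pull it down.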

168. A measurement-based study of beaconing performance in IEEE 802.11p vehicular networks.

Paper Link】 【Pages】:1503-1511

【Authors】: Francesca Martelli ; M. Elena Renda ; Giovanni Resta ; Paolo Santi

【Abstract】: Active safety applications for vehicular networks aim at improving safety conditions on the road by raising the level of “situation awareness” onboard vehicles. Situation awareness is achieved through the exchange of beacons reporting positional and kinematic data. Two important performance parameters influence the level of situation awareness available to an active safety application: the beacon (packet) delivery rate (PDR) and the packet inter-reception (PIR) time. While measurement-based evaluations of the former metric have recently appeared in the literature, the latter metric has not been studied so far. In this paper, for the first time, we estimate the PIR time and its correlation with PDR and other environmental parameters through an extensive measurement campaign based on IEEE 802.11p technology. Our study discloses several interesting insights on the PIR times that can be expected in real-world scenarios, which should be carefully considered by active safety application designers. A major insight is that the packet inter-reception time distribution is a power law and that long situation-awareness black-outs are likely to occur in batches, implying that situation awareness can be severely impaired even when the average beacon delivery rate is relatively high. Furthermore, our analysis shows that PIR and PDR are only loosely (negatively) correlated, and that the PIR time is almost independent of speed and distance between vehicles. A third major contribution of this paper is promoting the Gilbert-Elliot model, previously proposed to model bit error bursts in packet-switched networks, as a very accurate model of the beacon reception behavior observed in real-world data.

【Keywords】: automated highways; mobile radio; packet switching; radio access networks; road safety; Gilbert-Elliot model; IEEE 802.11p vehicular networks; active safety applications; beacon delivery rate; beaconing performance; bit error bursts; kinematic data; measurement-based evaluations; measurement-based study; packet delivery rate; packet interreception time distribution; packet-switched networks; positional data; safety conditions; situation awareness black-outs; situation awareness onboard vehicles; Analytical models; Correlation; Global Positioning System; Safety; Time measurement; Vehicles
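
The bursty loss behavior that the Gilbert-Elliot model captures is easy to reproduce: simulate a two-state channel, then post-process the reception trace into PIR samples. All transition and loss probabilities below are illustrative, not fitted to the paper's measurements:

```python
import random

def gilbert_elliott(n, p_gb=0.05, p_bg=0.3, loss_good=0.01, loss_bad=0.9, seed=1):
    """Simulate n beacon slots over a two-state channel: in the good state
    beacons are rarely lost, in the bad state they are mostly lost.
    Returns a list of booleans (beacon received or not)."""
    rng = random.Random(seed)
    good, received = True, []
    for _ in range(n):
        loss = loss_good if good else loss_bad
        received.append(rng.random() >= loss)
        good = (rng.random() >= p_gb) if good else (rng.random() < p_bg)
    return received

def pir_times(received):
    """Packet inter-reception times: gaps (in beacon slots) between successes."""
    idx = [i for i, r in enumerate(received) if r]
    return [b - a for a, b in zip(idx, idx[1:])]
```

Because losses cluster in the bad state, the resulting PIR samples contain occasional long gaps even when the overall delivery rate is high.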

169. Cache capacity allocation for BitTorrent-like systems to minimize inter-ISP traffic.

Paper Link】 【Pages】:1512-1520

【Authors】: Valentino Pacifici ; Frank Lehrieder ; György Dán

【Abstract】: Many Internet service providers (ISPs) have deployed peer-to-peer (P2P) caches in their networks in order to decrease costly inter-ISP traffic. A P2P cache stores parts of the most popular contents locally and, if possible, serves the requests of local peers to decrease the inter-ISP traffic. Traditionally, P2P cache resource management focuses on managing the storage resource of the cache so as to maximize the inter-ISP traffic savings. In this paper we show that when many overlays compete for the upload bandwidth of a P2P cache, the cache's upload bandwidth should be actively allocated among the overlays in order to maximize the inter-ISP traffic savings. We formulate the problem of P2P cache bandwidth allocation as a Markov decision process, and describe two approximations to the optimal cache bandwidth allocation policy. Based on the insights obtained from the approximate policies we propose SRP, a priority-based allocation policy for BitTorrent-like P2P systems. We use extensive simulations to evaluate the performance of the proposed policies, and show that cache bandwidth allocation can improve the inter-ISP traffic savings by 30 to 60 percent. We validate the results via BitTorrent experiments on PlanetLab.

【Keywords】: Internet; bandwidth allocation; cache storage; peer-to-peer computing; telecommunication traffic; BitTorrent-like systems; Internet service providers; P2P caches; bandwidth allocation policy; cache capacity allocation; inter-ISP traffic; peer-to-peer caches; Approximation methods; Bandwidth; Channel allocation; Markov processes; Peer to peer computing; Resource management; Steady-state
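
A minimal sketch of why active bandwidth allocation helps: greedily grant units of the cache's upload bandwidth to the overlay with the largest remaining per-unit inter-ISP saving. This marginal-value heuristic stands in for, and is much simpler than, the paper's MDP-derived SRP policy; the overlay names, savings, and demands are hypothetical:

```python
def allocate_bandwidth(overlays, total_units):
    """Greedy split of a cache's upload bandwidth: grant one unit at a time to
    the overlay with the largest per-unit inter-ISP saving that still has
    unmet demand. overlays: {name: (saving_per_unit, demand_units)}."""
    alloc = {name: 0 for name in overlays}
    for _ in range(total_units):
        candidates = [(saving, name) for name, (saving, demand) in overlays.items()
                      if alloc[name] < demand]
        if not candidates:
            break                      # all demand satisfied
        _, pick = max(candidates)
        alloc[pick] += 1
    return alloc
```

The high-saving overlay is served up to its demand before the remaining bandwidth spills over to lower-saving overlays.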

170. Network optimization for DHT-based applications.

Paper Link】 【Pages】:1521-1529

【Authors】: Yi Sun ; Yang Richard Yang ; Xiaobing Zhang ; Yang Guo ; Jun Li ; Kavé Salamatian

【Abstract】: P2P platforms have been criticized because of the heavy strain that some P2P services can inflict on the costly inter-domain links of network operators. It is therefore necessary to develop network optimization schemes for controlling the load generated by P2P platforms on an operator network. Previous work on network optimization has focused mostly on centralized tracker-based systems. In recent years, however, many DHT-based P2P networks have been widely deployed due to their scalability and fault tolerance, and these networks have even been considered as platforms for commercial services. Network optimization for DHT-based P2P applications therefore has potentially large practical impact. In this paper, we present THash, a simple scheme that implements effective distributed network optimization for DHT systems. THash is based on standard DHT put/get semantics and utilizes a triple hash method to guide DHT clients toward sharing resources with peers in appropriate domains. We have implemented THash in a major P2P application (PPLive), using the standard ALTO/P4P protocol as the network information source. We conducted realistic experiments over the network and observed that, compared with the native DHT, THash generated only 45.5% and 35.7% of the inter-PID and inter-AS traffic, respectively, while shortening the average downloading time by 13.8% to 22.1%.

【Keywords】: computer network management; peer-to-peer computing; DHT put-get semantics; DHT-based application; P2P platform; P2P service; THash scheme; distributed hash tables; distributed network optimization; fault tolerance; inter-domain link; interAS traffic; interPID; network information source; peer-to-peer platform; triple hash method; Indium phosphide; Optimization; Peer to peer computing; Publishing; Semantics; Servers; Vectors
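
A toy rendition of hash-guided, domain-aware peer selection in the spirit of (but not identical to) THash's triple hashing: hash the resource ID together with each domain name to pick a preferred domain, and always try the local domain first. The domain names and helper functions are hypothetical:

```python
import hashlib

def domain_hash(resource_id, domain):
    """Deterministic hash of a (resource, domain) pair."""
    return int(hashlib.sha1((resource_id + '/' + domain).encode()).hexdigest(), 16)

def pick_domain(resource_id, domains):
    """Hash-preferred domain for a resource, so all clients agree on where
    extra in-domain copies should concentrate."""
    return min(domains, key=lambda d: domain_hash(resource_id, d))

def lookup_order(resource_id, my_domain, domains):
    """Resolve in the local domain first, then the hash-preferred domain,
    then the remaining domains."""
    pref = pick_domain(resource_id, domains)
    rest = sorted(d for d in domains if d not in (my_domain, pref))
    return [my_domain] + ([pref] if pref != my_domain else []) + rest
```

Because every client computes the same preferred domain, in-domain copies accumulate deterministically and cross-domain lookups become the fallback rather than the default.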

171. A unifying model and analysis of P2P VoD replication and scheduling.

Paper Link】 【Pages】:1530-1538

【Authors】: Yipeng Zhou ; Tom Z. J. Fu ; Dah Ming Chiu

【Abstract】: We consider a P2P-assisted Video-on-Demand (VoD) system where each peer can store a relatively small number of movies to offload the server when these movies are requested. User requests are stochastic, following some movie popularity distribution. The problem is how to replicate (or place) content at peer storage to minimize the server load. Several variations of this replication problem have been studied recently, with somewhat different conclusions. In this paper, we first point out that the main difference between these studies is in how they model the scheduling of peers to serve user requests, and show that these different scheduling assumptions lead to different “optimal” replication strategies. We then propose a unifying request scheduling model, parameterized by the maximum number of peers that can be used to serve a single request. This scheduling is called Fair Sharing with Bounded Out-Degree (FSBD). Based on this unifying model, we can compare the different replication strategies for different out-degree bounds and see how and why different replication strategies are favored depending on the out-degree. We also propose a new simple, adaptive, and essentially distributed replication algorithm, and show that this algorithm is able to adapt itself to work well for different out-degree bounds.

【Keywords】: peer-to-peer computing; scheduling; stochastic processes; video on demand; FSBD; P2P VoD replication; P2P VoD scheduling; P2P-assisted video-on-demand system; distributed replication algorithm; fair sharing with bounded out-degree; movie popularity distribution; optimal replication strategies; peers scheduling; server load; unifying model; user requests; Algorithm design and analysis; Analytical models; Bandwidth; Load modeling; Motion pictures; Optimization; Servers

172. Stochastic analysis of self-sustainability in peer-assisted VoD systems.

Paper Link】 【Pages】:1539-1547

【Authors】: Delia Ciullo ; Valentina Martina ; Michele Garetto ; Emilio Leonardi ; Giovanni Luca Torrisi

【Abstract】: We consider a peer-assisted Video-on-Demand system, in which video distribution is supported both by peers caching the whole video and by peers concurrently downloading it. We propose a stochastic fluid framework that allows us to characterize the additional bandwidth requested from the servers to satisfy all users watching a given video. We obtain analytical upper bounds on the server bandwidth needed in the case in which users download the video content sequentially. We also present a methodology for obtaining exact solutions for special cases of the peer upload bandwidth distribution. Our bounds allow us to tightly characterize the performance of peer-assisted VoD systems as the number of users increases, for both sequential and non-sequential delivery schemes. In particular, we rigorously prove that the simple sequential scheme is asymptotically optimal in both the bandwidth-surplus and the bandwidth-deficit modes, and that peer-assisted systems become totally self-sustaining in the surplus mode as the number of users grows large.

【Keywords】: peer-to-peer computing; stochastic processes; video on demand; bandwidth deficit mode; bandwidth surplus; nonsequential delivery schemes; peer upload bandwidth distribution; peer-assisted VoD systems; peer-assisted video-on-demand system; selfsustainability; server bandwidth; stochastic analysis; stochastic fluid framework; video content; video distribution; Aggregates; Bandwidth; Neodymium; Random variables; Servers; Streaming media; Upper bound

173. Approximately optimal adaptive learning in opportunistic spectrum access.

Paper Link】 【Pages】:1548-1556

【Authors】: Cem Tekin ; Mingyan Liu

【Abstract】: In this paper we develop an adaptive learning algorithm which is approximately optimal for an opportunistic spectrum access (OSA) problem with polynomial complexity. In this OSA problem each channel is modeled as a two state discrete time Markov chain with a bad state which yields no reward and a good state which yields reward. This is known as the Gilbert-Elliot channel model and represents variations in the channel condition due to fading, primary user activity, etc. There is a user who can transmit on one channel at a time, and whose goal is to maximize its throughput. Without knowing the transition probabilities and only observing the state of the channel currently selected, the user faces a partially observed Markov decision problem (POMDP) with unknown transition structure. In general, learning the optimal policy in this setting is intractable. We propose a computationally efficient learning algorithm which is approximately optimal for the infinite horizon average reward criterion.

【Keywords】: Markov processes; channel allocation; fading channels; learning (artificial intelligence); telecommunication computing; Gilbert-Elliot channel model; adaptive learning algorithm; approximately optimal adaptive learning; discrete time Markov chain; fading channel; opportunistic spectrum access; partially observed Markov decision problem; polynomial complexity; primary user activity; Approximation algorithms; Complexity theory; Indexes; Markov processes; Optimized production technology; Polynomials; Probability; Approximate optimality; online learning; opportunistic spectrum access; restless bandits
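
The Gilbert-Elliot OSA setting is easy to prototype: simulate k two-state channels and run a naive epsilon-greedy selector as a comparison baseline. This is explicitly not the paper's approximately optimal learning algorithm, just a reference point one might compare it against; all transition probabilities are made up:

```python
import random

def simulate_osa(p_stay_good, p_stay_bad, steps=5000, eps=0.1, seed=7):
    """Epsilon-greedy channel selection over Gilbert-Elliot channels: pick the
    channel with the best empirical success rate, exploring with probability eps.
    Returns the achieved throughput (fraction of slots with reward)."""
    rng = random.Random(seed)
    k = len(p_stay_good)
    state = [rng.random() < 0.5 for _ in range(k)]   # True = good state
    pulls, wins, total = [1] * k, [0] * k, 0
    for _ in range(steps):
        if rng.random() < eps:
            c = rng.randrange(k)                     # explore
        else:
            c = max(range(k), key=lambda i: wins[i] / pulls[i])
        reward = 1 if state[c] else 0                # good state yields reward
        pulls[c] += 1; wins[c] += reward; total += reward
        for i in range(k):                           # every channel evolves,
            stay = p_stay_good[i] if state[i] else p_stay_bad[i]
            if rng.random() >= stay:                 # observed or not
                state[i] = not state[i]
    return total / steps
```

Note the partial observability: only the selected channel's state is revealed as a reward, which is exactly what makes the underlying POMDP hard and motivates the paper's approximation.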

174. Spectrum clouds: A session based spectrum trading system for multi-hop cognitive radio networks.

Paper Link】 【Pages】:1557-1565

【Authors】: Miao Pan ; Pan Li ; Yang Song ; Yuguang Fang ; Phone Lin

【Abstract】: Spectrum trading creates more access opportunities for secondary users (SUs) and economically benefits the primary users (PUs). However, it is challenging to implement spectrum trading in multi-hop cognitive radio networks (CRNs) due to harsh cognitive radio (CR) requirements on SUs' devices and the complex conflict and competition relationships among different CR sessions. Unlike the per-user based spectrum trading designs in previous studies, in this paper we propose a novel session based spectrum trading system, spectrum clouds, for multi-hop CRNs. In spectrum clouds, we introduce a new service provider, called the secondary service provider (SSP), to harvest the available spectrum bands and facilitate access for SUs without CR capability. The SSP also conducts spectrum trading among CR sessions with respect to their conflicts and competitions. Leveraging a 3-dimensional (3-D) conflict graph, we mathematically describe the conflicts and competitions among the candidate sessions for spectrum trading. Given the rate requirements and bidding values of candidate trading sessions, we formulate optimal spectrum trading as the SSP's revenue maximization problem under multiple cross-layer constraints in multi-hop CRNs. In view of the NP-hardness of the problem, we have also developed heuristic algorithms to pursue feasible solutions. Through extensive simulations, we show that the solutions found by the proposed algorithms are close to the optimal one.

【Keywords】: cognitive radio; computational complexity; graph theory; optimisation; radio networks; telecommunication industry; 3-dimensional conflict graph; 3D conflict graph; NP-hardness problem; PU; SSP; SU; heuristic algorithm; multihop CRN; multihop cognitive radio network; multiple cross-layer constraint; per-user based spectrum trading design; primary user; revenue maximization problem; secondary service provider; secondary user; session based spectrum trading system; spectrum band harvesting; spectrum cloud; Bandwidth; Cognitive radio; Heuristic algorithms; Interference; Optimization; Receivers; Routing

175. Uplink soft frequency reuse for self-coexistence of cognitive radio networks operating in white-space spectrum.

Paper Link】 【Pages】:1566-1574

【Authors】: Bo Gao ; Jung-Min Park ; Yaling Yang

【Abstract】: Recent advances in cognitive radio (CR) technology have brought about a number of wireless standards that support opportunistic access to available white-space spectrum. Addressing the self-coexistence of CR networks in such an environment is very challenging, especially when coexisting networks operate in the same swath of spectrum with little or no direct coordination. In this paper, we study the problem of co-channel self-coexistence of uncoordinated CR networks that employ orthogonal frequency division multiple access (OFDMA) in the uplink. We frame the self-coexistence problem as a non-cooperative game, and propose an uplink soft frequency reuse (USFR) technique to enable globally power-efficient and locally fair sharing of white-space spectrum. In each network, uplink resource allocation is decoupled into two subproblems: subchannel allocation (SCA) and transmit power control (TPC). We provide a unique optimal solution to the TPC subproblem, and present a low-complexity heuristic for the SCA subproblem. Furthermore, we frame the TPC and SCA games, and integrate them as a heuristic algorithm that achieves the Nash equilibrium in a fully distributed manner. Our simulation results show that the proposed USFR technique significantly improves self-coexistence in several aspects, including spectrum utilization, power consumption, and intra-cell fairness.

【Keywords】: OFDM modulation; cellular radio; cognitive radio; frequency allocation; frequency division multiple access; game theory; Nash equilibrium; OFDMA; cochannel self-coexistence; including spectrum utilization; intracell fairness; locally fair sharing; low complexity heuristic; noncooperative game; orthogonal frequency division multiple access; subchannel allocation; transmit power control; uncoordinated cognitive radio networks; uplink resource allocation; uplink soft frequency reuse; white space spectrum; Games; Interference; Manganese; Nash equilibrium; OFDM; Power demand; Resource management

176. Maximizing system throughput by cooperative sensing in Cognitive Radio Networks.

Paper Link】 【Pages】:1575-1583

【Authors】: Shuang Li ; Zizhan Zheng ; Eylem Ekici ; Ness B. Shroff

【Abstract】: Cognitive Radio Networks allow unlicensed users to opportunistically access the licensed spectrum without causing disruptive interference to the primary users (PUs). One of the main challenges in CRNs is the ability to detect PU transmissions. Recent works have suggested the use of secondary user (SU) cooperation over individual sensing to improve sensing accuracy. In this paper, we consider a CRN consisting of a single PU and multiple SUs to study the problem of maximizing the total expected system throughput. We propose a Bayesian decision rule based algorithm to solve the problem optimally with a constant time complexity. To prioritize PU transmissions, we re-formulate the throughput maximization problem by adding a constraint on the PU throughput. The constrained optimization problem is shown to be strongly NP-hard and solved via a greedy algorithm with pseudo-polynomial time complexity that achieves strictly greater than 1/2 of the optimal solution. We also investigate the case for which a constraint is put on the sensing time overhead, which limits the number of SUs that can participate in cooperative sensing. We reveal that the system throughput is monotonic over the number of SUs chosen for sensing. We illustrate the efficacy of the performance of our algorithms via a numerical investigation.

【Keywords】: Bayes methods; cognitive radio; cooperative communication; greedy algorithms; optimisation; Bayesian decision rule based algorithm; NP-hard problem; cognitive radio networks; constant time complexity; cooperative sensing; disruptive interference; greedy algorithm; licensed spectrum; primary users; pseudo-polynomial time complexity; secondary user cooperation; sensing accuracy; sensing time overhead; system throughput; throughput maximization problem; Accuracy; Algorithm design and analysis; Approximation algorithms; Bayesian methods; Interference; Sensors; Throughput

177. Constant-approximation for target coverage problem in wireless sensor networks.

【Paper Link】 【Pages】:1584-1592

【Authors】: Ling Ding ; Weili Wu ; James Willson ; Lidong Wu ; Zaixin Lu ; Wonjun Lee

【Abstract】: When a large number of sensors are randomly deployed in a field, how can we construct a sleep/activate schedule for the sensors to maximize the lifetime of target coverage in the field? This well-known problem, called the Maximum Lifetime Coverage Problem (MLCP), has been studied extensively in the literature. It is a long-standing open problem whether MLCP admits a polynomial-time constant-approximation. The best-known approximation algorithm, given by Berman et al. [1], has performance ratio 1 + ln n, where n is the number of sensors in the network. In their work, MLCP is reduced to the Minimum Weight Sensor Coverage Problem (MWSCP), which is to find the minimum total weight of sensors to cover a given area or a given set of targets with a given set of weighted sensors. In this paper, we present a polynomial-time (4 + ε)-approximation algorithm for MWSCP and hence obtain a polynomial-time (4 + ξ)-approximation algorithm for MLCP, where ε > 0, ξ > 0.

【Keywords】: polynomial approximation; scheduling; wireless sensor networks; MLCP; MWSCP; maximum lifetime coverage problem; minimum weight sensor coverage problem; polynomial-time constant-approximation algorithm; sleep-activate scheduling; target coverage lifetime maximization; wireless sensor network; Algorithm design and analysis; Approximation algorithms; Approximation methods; Batteries; Sensors; Strips; Wireless sensor networks

178. DEAR: Delay-bounded Energy-constrained Adaptive Routing in wireless sensor networks.

【Paper Link】 【Pages】:1593-1601

【Authors】: Shi Bai ; Weiyi Zhang ; Guoliang Xue ; Jian Tang ; Chonggang Wang

【Abstract】: Reliability and energy efficiency are critical issues in wireless sensor networks. In this work, we study the Delay-bounded Energy-constrained Adaptive Routing (DEAR) problem with reliability, differential delay, and transmission energy consumption constraints in wireless sensor networks. We aim to route connections in a manner such that a link failure does not shut down the entire stream but allows a continuing flow for a significant portion of the traffic along multiple paths. The flexibility enabled by such a multi-path routing scheme comes at the cost of differential delay among the different paths, which requires increased memory in the base station to buffer the traffic until the data arrives on all the paths. Therefore, the differential delay between the multiple paths should be bounded to reduce additional hardware cost in the base station. Moreover, the energy consumption constraint should also be satisfied when transmitting packets along multiple paths. We present a pseudo-polynomial time solution for a special case of DEAR in which edge delays are represented as integers. Next, a (1+α)-approximation algorithm is proposed to solve the optimization version of the DEAR problem. An efficient heuristic is also provided. We present numerical results confirming the advantage of our schemes as the first solution for the DEAR problem.

【Keywords】: polynomial approximation; telecommunication network reliability; telecommunication network routing; wireless sensor networks; DEAR algorithm; approximation algorithm; base station; delay-bounded energy-constrained adaptive routing; differential delay constraint; edge delays; link failure; multipath routing scheme; pseudopolynomial time solution; reliability constraint; transmission energy consumption constraint; wireless sensor networks; Educational institutions; Reliability; Wireless sensor networks; differential delay; multi-path routing; polynomial time approximation; restricted maximum flow; wireless sensor network

179. A robust boundary detection algorithm based on connectivity only for 3D wireless sensor networks.

【Paper Link】 【Pages】:1602-1610

【Authors】: Hongyu Zhou ; Hongyi Wu ; Miao Jin

【Abstract】: In this work we develop a distributed boundary detection algorithm, dubbed Coconut, for 3D wireless sensor networks. It first constructs a tetrahedral structure to delineate the approximate geometry of the 3D sensor network, producing a set of “sealed” triangular boundary surfaces for separating non-boundary nodes and boundary node candidates. The former are hollowed out immediately while the latter are further refined to yield the final boundary nodes and fine-grained boundary surfaces. The proposed Coconut algorithm offers several salient features. First, it requires connectivity information only, with no need for localization or distance measurement. Second, it does not rely on particular communication models, but only assumes a constant maximum transmission range, which is generally known in practical wireless sensor networks. Third, it is robust to sensor distribution, effectively identifying boundaries in both uniformly and non-uniformly distributed sensor networks. We prove the correctness of the algorithm and quantitatively demonstrate its effectiveness via simulations under various network models.

【Keywords】: geometry; sensor placement; wireless sensor networks; 3D wireless sensor networks; Coconut algorithm; boundary node candidates; connectivity information; distributed boundary detection algorithm; fine grained boundary surfaces; nonboundary nodes; robust boundary detection algorithm; sensor distribution; tetrahedral structure; transmission range; triangular boundary surfaces; Detection algorithms; Geometry; Robustness; Rough surfaces; Surface roughness; Three dimensional displays; Wireless sensor networks

180. CitySee: Urban CO2 monitoring with sensors.

【Paper Link】 【Pages】:1611-1619

【Authors】: XuFei Mao ; Xin Miao ; Yuan He ; Xiang-Yang Li ; Yunhao Liu

【Abstract】: Motivated by the need for precise carbon emission measurement and real-time surveillance for CO2 management in cities, we present CitySee, a real-time CO2-monitoring system using sensor networks for an urban area (around 100 square kilometers). In order to conduct environment monitoring in a real-time and long-term manner, CitySee has to address several challenges, including sensor deployment, data collection, data processing, and network management. In this discussion, we mainly focus on the sensor deployment problem, so that requirements such as connectivity, coverage, and data representability are satisfied; we also briefly go through the solutions for the remaining challenges. In CitySee, the sensor deployment problem can be abstracted as a relay node placement problem under a hole constraint. By carefully taking all constraints and real deployment situations into account, we propose efficient and effective approaches and prove that our scheme uses at most twice the minimum number of additional relay nodes. We evaluate the performance of our approach through extensive simulations resembling realistic deployment; the results show that our approach outperforms previous strategies. We successfully apply this design in CitySee, a large-scale wireless sensor network consisting of 1096 relay nodes and 100 sensor nodes in Wuxi City, China.

【Keywords】: air pollution measurement; atmospheric composition; atmospheric techniques; carbon compounds; chemical sensors; environmental monitoring (geophysics); wireless sensor networks; CO2; China; CitySee monitoring system; Wuxi City; carbon emission measurement; chemical sensors; data collection; data processing; network management; real-time surveillance; relay node placement problem; sensor deployment; urban carbon dioxide monitoring; wireless sensor network; Carbon dioxide; Joining processes; Relays; Sensors; Steiner trees; Wireless communication; Wireless sensor networks; CO2 monitoring; relay nodes placement; wireless sensor networks

181. Unreeling netflix: Understanding and improving multi-CDN movie delivery.

【Paper Link】 【Pages】:1620-1628

【Authors】: Vijay Kumar Adhikari ; Yang Guo ; Fang Hao ; Matteo Varvello ; Volker Hilt ; Moritz Steiner ; Zhi-Li Zhang

【Abstract】: Netflix is the leading provider of on-demand Internet video streaming in the US and Canada, accounting for 29.7% of the peak downstream traffic in the US. Understanding the Netflix architecture and its performance can shed light on how to best optimize its design as well as on the design of similar on-demand streaming services. In this paper, we perform a measurement study of Netflix to uncover its architecture and service strategy. We find that Netflix employs a blend of data centers and Content Delivery Networks (CDNs) for content distribution. We also perform active measurements of the three CDNs employed by Netflix to quantify the video delivery bandwidth available to users across the US. Finally, as improvements to Netflix's current CDN assignment strategy, we propose a measurement-based adaptive CDN selection strategy and a multiple-CDN-based video delivery strategy, and demonstrate their potential to significantly increase users' average bandwidth.

【Keywords】: Internet; bandwidth allocation; computer centres; video on demand; video streaming; CDN assignment strategy; Canada; Netflix architecture; US; content delivery networks; content distribution; data centers; measurement-based adaptive CDN selection strategy; multiCDN movie delivery bandwidth; on-demand Internet video streaming; on-demand streaming services; peak downstream traffic; service strategy; user average bandwidth; video delivery bandwidth; video delivery strategy; Bandwidth; Bit rate; Browsers; Computers; Motion pictures; Servers; Streaming media

182. Robust multi-source network tomography using selective probes.

【Paper Link】 【Pages】:1629-1637

【Authors】: Akshay Krishnamurthy ; Aarti Singh

【Abstract】: Knowledge of a network's topology and internal characteristics such as delay times or losses is crucial to maintain seamless operation of network services. Network tomography is a useful approach to infer such knowledge from end-to-end measurements between nodes at the periphery of the network, as it does not require cooperation of routers and other internal nodes. Most current tomography algorithms are single-source methods, which use multicast probes or synchronized unicast packet trains to measure covariances between destinations from a single vantage point and recover a tree topology from these measurements. Multi-source tomography, on the other hand, uses pairwise hop counts or latencies and consequently overcomes the difficulties associated with obtaining measurements for single-source methods. However, topology recovery is complicated by the fact that the paths along which measurements are taken do not form a tree in the network. Motivated by recent work suggesting that these measurements can be well-approximated by tree metrics, we present two algorithms that use selective pairwise distance measurements between peripheral nodes to construct a tree whose end-to-end distances approximate those in the network. Our first algorithm accommodates measurements perturbed by additive noise, while our second considers a novel noise model that captures missing measurements and the network's deviations from a tree topology. Both algorithms provably use O(p polylog p) pairwise measurements to construct a tree approximation on p end hosts. We present extensive simulated and real-world experiments to evaluate both of our algorithms.

【Keywords】: covariance analysis; multicast communication; telecommunication network topology; additive noise; delay times; distance measurements; end-to-end measurements; internal characteristics; multicast probes; network services; network topology; pairwise hop counts; robust multi-source network tomography; routers; single-source methods; tree topology; unicast packet trains; Approximation algorithms; Complexity theory; Noise; Noise measurement; Probes; Tomography

183. The Bloom paradox: When not to use a Bloom filter?

【Paper Link】 【Pages】:1638-1646

【Authors】: Ori Rottenstreich ; Isaac Keslassy

【Abstract】: In this paper, we uncover the Bloom paradox in Bloom filters: sometimes, it is better to disregard the query results of Bloom filters, and in fact not to even query them, thus making them useless. We first analyze conditions under which the Bloom paradox occurs in a Bloom filter, and demonstrate that it depends on the a priori probability that a given element belongs to the represented set. We show that the Bloom paradox also applies to Counting Bloom Filters (CBFs), and depends on the product of the hashed counters of each element. In addition, both for Bloom filters and CBFs, we suggest improved architectures that deal with the Bloom paradox. We also provide fundamental memory lower bounds required to support element queries with limited false-positive and false-negative rates. Last, using simulations, we verify our theoretical results, and show that our improved schemes can lead to a significant improvement in the performance of Bloom filters and CBFs.
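The dependence on the a priori membership probability can be made concrete with a short sketch (an illustration, not the paper's analysis): by Bayes' rule, a positive answer from a Bloom filter with false-positive rate f is trustworthy only when the prior p is not much smaller than f.

```python
def posterior_membership(prior, false_positive_rate):
    """P(element in set | filter answers 'yes'), assuming no false negatives.

    A standard Bloom filter never reports a member as absent, so a 'yes'
    is either a true member (probability `prior`) or a false positive.
    """
    p_yes = prior + (1.0 - prior) * false_positive_rate
    return prior / p_yes
```

With prior = 1e-6 and a 1% false-positive rate, the posterior is about 1e-4: almost every positive is a false positive, so querying the filter and acting on its answer can be worse than not querying it at all. This is the paradox in miniature.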

【Keywords】: data structures; probability; Bloom filter; Bloom paradox; a priori probability; counting Bloom filter; element query; false-negative rate; false-positive rate; memory lower bound; Approximation methods; Data structures; Memory management; Probabilistic logic; Probability; Radiation detectors; Servers

184. Tracking millions of flows in high speed networks for application identification.

【Paper Link】 【Pages】:1647-1655

【Authors】: Tian Pan ; Xiaoyu Guo ; Chenhui Zhang ; Junchen Jiang ; Hao Wu ; Bin Liu

【Abstract】: Today's Internet applications exhibit increasing diversity, while Internet routers remain oblivious to this trend. To improve end-to-end application QoS, one solution is to embed application information explicitly in packet headers, but this requires global changes. Another, local, solution is router-assisted traffic differentiation, which requires both packet identification and flow tracking inside the router. While most existing studies focus on the former, fewer efforts address the latter. Because a large flow table is involved, tracking millions of concurrent flows in a cost-effective manner on a router's line card raises a significant space-time challenge. To address this, we design an on-chip/off-chip flow tracking system that accommodates millions of flows and achieves throughput of tens of gigabits per second. By exploiting the temporal locality and heavy-tailedness of Layer-4 traffic, we design the Adaptive Least Frequently Evicted (ALFE) replacement policy to catch elephant flows and thereby maintain a high cache hit rate. To alleviate the performance penalty of cache misses, we organize the flow table in a fixed-allocated manner to fully utilize modern DRAM's burst feature. We have implemented a research prototype using an FPGA for performance evaluation. The experimental results show that our system can reach an 80% hit rate with a small cache of 16K entries while achieving 70 Mpps throughput, enabling backbone line-rate processing. Further, our system saves more than 40% in power, and is fast and accurate with only 3% FPGA resource usage.

【Keywords】: DRAM chips; Internet; cache storage; field programmable gate arrays; quality of service; telecommunication traffic; ALFE replacement policy; FPGA; Internet routers; QoS; adaptive least frequently evicted replacement policy; application identification; high speed networks; layer-4 traffic; router-assisted traffic differentiation; small-sized cache; Field programmable gate arrays

185. Energy efficient delivery of immersive video centric services.

【Paper Link】 【Pages】:1656-1664

【Authors】: Jaime Llorca ; Kyle Guan ; Gary Atkinson ; Daniel C. Kilper

【Abstract】: We examine the basic energy tradeoffs between video transport and video processing for services such as multi-view video (MVV) streaming, where multiple media streams are combined and processed to create an immersive and personalized user experience. We analyze and compare the energy efficiency of different architectural options for the location of video processing functions and illustrate how the architecture of choice is influenced by the network topology, the users' view preferences, and the relative transport-processing energy efficiency. We provide an integer linear programming formulation for the energy efficient functional resource allocation problem, which we show can be solved as a linear program, and an easily implementable algorithm that generates optimal solutions in polynomial time.

【Keywords】: integer programming; linear programming; telecommunication network topology; video streaming; MVV streaming; energy-efficient delivery; energy-efficient functional resource allocation problem; immersive video centric services; integer linear programming formulation; media streams; multiview video streaming; network topology; personalized user experience; polynomial time; relative transport-processing energy efficiency; user view preferences; video processing functions; video transport; Computational complexity; Media; Network topology; Power demand; Streaming media; Topology; Wavelength division multiplexing; energy efficiency; immersive video service; integral polyhedron; multi-view video (MVV); polynomial time algorithm; totally unimodular

186. Energy-efficient spectrum sharing and power allocation in cognitive radio femtocell networks.

【Paper Link】 【Pages】:1665-1673

【Authors】: Renchao Xie ; F. Richard Yu ; Hong Ji

【Abstract】: Both cognitive radio and femtocells have been considered promising techniques in wireless networks. However, most previous works focus on spectrum sharing and interference avoidance, and the energy efficiency aspect is largely ignored. In this paper, we study the energy efficiency aspect of spectrum sharing and power allocation in heterogeneous cognitive radio networks with femtocells. To fully exploit the cognitive capability, we consider a wireless network architecture in which both the macrocell and the femtocell have cognitive capability. We formulate the energy-efficient resource allocation problem in heterogeneous cognitive radio networks with femtocells as a Stackelberg game. A gradient-based iteration algorithm is proposed to obtain the Stackelberg equilibrium solution to the energy-efficient resource allocation problem. Simulation results demonstrate that the Stackelberg equilibrium is obtained by the proposed iteration algorithm and that energy efficiency can be improved significantly in the proposed scheme.

【Keywords】: cognitive radio; femtocellular radio; game theory; interference suppression; iterative methods; resource allocation; Stackelberg equilibrium solution; Stackelberg game; cognitive capability; cognitive radio femtocell networks; energy efficient resource allocation; energy efficient spectrum sharing; gradient based iteration; interference avoidance; power allocation; wireless networks; Base stations; Cognitive radio; Games; Macrocell networks; Nash equilibrium; Resource management; Wireless networks

187. Towards optimal energy store-carry-and-deliver for PHEVs via V2G system.

【Paper Link】 【Pages】:1674-1682

【Authors】: Hao Liang ; Bong Jun Choi ; Weihua Zhuang ; Xuemin Shen

【Abstract】: As an important component of smart grid, the vehicle-to-grid (V2G) system is recently introduced to enable bidirectional energy delivery between the power grid and plug-in electric vehicles. Communication technology is incorporated to facilitate the energy delivery by providing electricity pricing and energy demand information. However, different from the stationary energy storage systems, the energy store-carry-and-deliver mechanism for a V2G system poses new challenges for performance optimization, such as bi-directional energy flow and non-stationary energy demand. How to utilize the statistical information provided by the communication system to achieve efficient energy delivery is critical for a V2G system and is still an open issue. In this paper, we address a specific problem in this new research area, i.e., daily energy cost minimization of vehicle owners under time-of-use (TOU) electricity pricing. We investigate a plug-in hybrid electric vehicle (PHEV) with a realistic battery model, which is general for both battery electric cars and plug-in hybrids. A dynamic programming formulation is established by considering the bidirectional energy flow, non-stationary energy demand, battery characteristics, and TOU electricity price. We prove the optimality of a state-dependent double-threshold (or (S, S')) policy based on the stochastic inventory theory. A modified backward iteration algorithm is devised for practical applications, where an exponentially weighted moving average (EWMA) algorithm is used to estimate the statistics of PHEV mobility and energy demand. The performance of the proposed scheme is demonstrated by simulations based on survey and real data collected from Canadian households. Numerical results indicate that our proposed scheme performs closely to a scheme with a priori knowledge of the PHEV mobility and energy demand information. 
Compared with the existing approaches, the proposed scheme can achieve energy cost reduction, which increases with the battery capacity.
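The exponentially weighted moving average used to track PHEV mobility and demand statistics is a standard recursion; below is a minimal sketch, with a hypothetical smoothing factor rather than any value calibrated in the paper:

```python
def ewma_update(estimate, observation, alpha=0.2):
    """One EWMA step: blend the new observation into the running estimate.

    alpha close to 1 tracks recent demand aggressively; alpha close to 0
    smooths heavily. The default 0.2 here is purely illustrative.
    """
    return alpha * observation + (1.0 - alpha) * estimate
```

Repeated over each day's observed energy demand, this recursion yields the demand statistics that feed the backward iteration algorithm.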

【Keywords】: battery powered vehicles; cost reduction; dynamic programming; energy storage; hybrid electric vehicles; iterative methods; minimisation; moving average processes; power markets; pricing; smart power grids; Canadian household; EWMA algorithm; PHEV; TOU electricity pricing; V2G system; battery electric car; bidirectional energy delivery; bidirectional energy flow; daily energy cost minimization; data collection; dynamic programming formulation; energy cost reduction; exponentially weighted moving average algorithm; modified backward iteration algorithm; nonstationary energy demand information; optimal energy store-carry-and-deliver mechanism; plug-in hybrid electric vehicle; smart power grid; stochastic inventory theory; time-of-use electricity pricing; vehicle-to-grid system; Batteries; Electricity; Energy states; Markov processes; Minimization; Pricing; Vehicles

188. On exploiting flow allocation with rate adaptation for green networking.

【Paper Link】 【Pages】:1683-1691

【Authors】: Jian Tang ; Brendan Mumey ; Yun Xing ; Andy Johnson

【Abstract】: Network power consumption can be reduced considerably by adapting link data rates to their offered traffic loads. In this paper, we explore how to leverage rate adaptation for green networking by studying the following flow allocation problem in wired networks: given a set of candidate paths for each end-to-end communication session, determine how to allocate flow (data traffic) along these paths such that power consumption is minimized, subject to the constraint that the traffic demand of each session is satisfied. Following recent measurement studies, we consider a discrete step-increasing function for link power consumption. We address both the single and multiple communication session cases and formulate them as two optimization problems, namely, the Single-session Flow allocation with Rate Adaptation Problem (SF-RAP) and the Multi-session Flow Allocation with Rate Adaptation Problem (MF-RAP). We first show that both problems are NP-hard and present a Mixed Integer Linear Programming (MILP) formulation for the MF-RAP to provide optimal solutions. Then we present a 2-approximation algorithm for the SF-RAP, and a general flow allocation framework as well as an LP-based heuristic algorithm for the MF-RAP. Simulation results show that the algorithm proposed for the SF-RAP consistently outperforms a shortest-path-based baseline solution, and the algorithms proposed for the MF-RAP provide close-to-optimal solutions.
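The discrete step-increasing power model can be sketched as a simple lookup: each link operates at the smallest rate step that covers its allocated flow and draws that step's power. The step values below are hypothetical, not the measurements the paper relies on:

```python
import bisect

def link_power(rate, rate_steps, power_steps):
    """Power drawn by a link carrying `rate` under a step-increasing model.

    rate_steps is a sorted list of supported link rates; power_steps[i] is
    the power consumed when operating at rate_steps[i]. The link runs at
    the smallest step covering the offered rate (rate must not exceed the
    largest step).
    """
    i = bisect.bisect_left(rate_steps, rate)
    return power_steps[i]
```

Flow allocation then tries to pack traffic so that as many links as possible stay on low steps, which is what makes the problem combinatorial (and NP-hard).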

【Keywords】: integer programming; linear programming; local area networks; resource allocation; telecommunication traffic; 2-approximation algorithm; Ethernet; LP-based heuristic algorithm; MF-RAP; NP-hard problems; SF-RAP; data traffic; end-to-end communication session; green networking; link data rates; link power consumption; mixed integer linear programming; multisession flow allocation with rate adaptation problem; network power consumption; single session flow allocation with rate adaptation problem; traffic demand; wired networks; Algorithm design and analysis; Approximation algorithms; Approximation methods; Heuristic algorithms; Power demand; Resource management; Routing; Green networking; flow allocation; power efficiency; rate adaptation

189. Sampling directed graphs with random walks.

【Paper Link】 【Pages】:1692-1700

【Authors】: Bruno F. Ribeiro ; Pinghui Wang ; Fabricio Murai ; Don Towsley

【Abstract】: Despite recent efforts to characterize complex networks such as citation graphs or online social networks (OSNs), little attention has been given to developing tools that can characterize directed graphs in the wild, where no pre-processed data is available. The presence of hidden incoming edges but observable outgoing edges poses a challenge to characterizing large directed graphs through crawling, as existing sampling methods cannot cope with hidden incoming links. The driving principle behind our random walk (RW) sampling method is to construct, in real time, an undirected graph from the directed graph such that the random walk on the directed graph is consistent with one on the undirected graph. We then use the RW on the undirected graph to estimate the outdegree distribution. Our algorithm accurately estimates the outdegree distributions of a variety of real-world graphs. We also study the hardness of indegree distribution estimation when indegrees are latent (i.e., incoming links are only observed as outgoing edges). We observe that, in the same scenarios, indegree distribution estimates are highly inaccurate unless the directed graph is highly symmetrical.
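The key correction behind such RW estimators fits in a few lines. On an undirected graph the walk's stationary distribution is proportional to node degree, so reweighting each visited sample by 1/degree yields an unbiased degree-distribution estimate. This sketch only illustrates that reweighting on a given undirected graph; it does not implement the paper's real-time construction of an undirected graph from a directed one:

```python
import random
from collections import defaultdict

def rw_degree_distribution(adj, start, steps, seed=0):
    """Estimate the degree distribution of an undirected graph by random walk.

    adj maps each node to its neighbour list. Each visit to node v is
    weighted by 1/deg(v) to cancel the walk's degree-proportional bias.
    """
    rng = random.Random(seed)
    weighted_counts = defaultdict(float)
    total = 0.0
    v = start
    for _ in range(steps):
        deg = len(adj[v])
        weighted_counts[deg] += 1.0 / deg
        total += 1.0 / deg
        v = rng.choice(adj[v])
    return {d: c / total for d, c in weighted_counts.items()}
```

On a star with one degree-3 hub and three degree-1 leaves, the unweighted walk spends half its time at the hub, but the reweighted estimate recovers the true degree fractions 0.75 and 0.25.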

【Keywords】: complex networks; directed graphs; estimation theory; random processes; citation graph; complex network; directed graph sampling; indegree distribution estimation; online social network; outdegree distribution estimation; random walk sampling method; undirected graph; Google; World Wide Web

190. Incentive mechanisms for smartphone collaboration in data acquisition and distributed computing.

【Paper Link】 【Pages】:1701-1709

【Authors】: Lingjie Duan ; Takeshi Kubo ; Kohei Sugiyama ; Jianwei Huang ; Teruyuki Hasegawa ; Jean C. Walrand

【Abstract】: This paper analyzes and compares different incentive mechanisms for a client to motivate the collaboration of smartphone users on both data acquisition and distributed computing applications. Data acquisition from a large number of users is essential to build a rich database and support emerging location-based services. We propose a reward-based collaboration mechanism, where the client announces a total reward to be shared among collaborators, and the collaboration is successful if there are enough users willing to collaborate. We show that if the client knows the users' collaboration costs, then he can choose to involve only the users with the lowest costs by offering a small total reward. However, if the client does not know the users' private cost information, then he needs to offer a larger total reward to attract enough collaborators. Users benefit from knowing their costs before the data acquisition. Distributed computing aims to solve computationally intensive problems in a distributed and inexpensive fashion. We study how the client can design an optimal contract by specifying different task-reward combinations for different user types. Under complete information, we show that the client will involve a user type as long as the client's preference for that type outweighs the corresponding cost; all collaborators achieve a zero payoff in this case. But if the client does not know the users' private cost information, he will conservatively target a smaller group of efficient users with small costs. He has to give most of the benefits to the collaborators, and a collaborator's payoff increases in his computing efficiency.
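The complete-information result has a simple shape that can be sketched directly: the client involves a user type exactly when his value for that type covers its cost, and pays each collaborator exactly its cost, leaving zero payoff. The value and cost lists below are hypothetical, not data from the paper:

```python
def full_information_contract(values, costs):
    """Complete-information contract sketch.

    values[i]: the client's benefit from involving user type i.
    costs[i]:  type i's collaboration cost.
    Returns {type index: reward}: each involved type is paid exactly its
    cost, so every collaborator's payoff is zero.
    """
    return {i: c for i, (v, c) in enumerate(zip(values, costs)) if v >= c}
```

Under private cost information this extraction is no longer possible, which is why the paper's screening contract leaves information rents to efficient users.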

【Keywords】: Global Positioning System; data acquisition; distributed processing; human factors; incentive schemes; mobile computing; smart phones; collaboration cost; data acquisition; distributed computing; incentive mechanism; location-based service; reward-based collaboration mechanism; smartphone collaboration; user motivation; zero payoff; Collaboration; Computational modeling; Contracts; Data acquisition; Databases; Distributed computing; Games

191. Cosine-neighbourhood-refinement: Towards a robust network formation mechanism.

【Paper Link】 【Pages】:1710-1718

【Authors】: Felix Ming Fai Wong ; Peter Marbach

【Abstract】: In this paper we consider the classical network formation problem where nodes want to connect to other nodes that have similar “interests”. This problem is of fundamental importance in the formation of peer-to-peer networks and online social networks. For this problem, we study whether there exists an algorithm that is robust with respect to the underlying interest graph that models the similarity of nodes in the network. By robust, we mean that the algorithm is simple and achieves high performance for a variety of interest graph models. The concrete interest graph models that we consider are the widely used planted partition and latent space models. We propose a network formation mechanism based on a cosine-neighbourhood refinement step and formally show that it performs well for the planted partition model. In addition, we show that this mechanism also performs well under a latent space model on a one-dimensional sphere. To the best of our knowledge, this is the first time that a network formation mechanism has been shown to be robust and to perform well for both the planted partition and latent space models. The proposed algorithm is simple and can be implemented in a distributed or centralized manner.
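The refinement step's central quantity is the cosine similarity between node neighbourhoods, which can be sketched as follows (an illustration of the similarity measure only, not the full formation mechanism):

```python
import math

def cosine_neighbourhood_similarity(adj, u, v):
    """Cosine similarity of the neighbour sets of u and v.

    Treating each neighbourhood as a 0/1 indicator vector, this equals
    |N(u) ∩ N(v)| / sqrt(|N(u)| * |N(v)|): high when u and v connect to
    largely the same nodes, i.e. likely share interests.
    """
    nu, nv = set(adj[u]), set(adj[v])
    if not nu or not nv:
        return 0.0
    return len(nu & nv) / math.sqrt(len(nu) * len(nv))
```

A refinement pass would then keep (or add) an edge between u and v only when this similarity exceeds a threshold.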

【Keywords】: graph theory; peer-to-peer computing; social networking (online); classical network formation problem; cosine-neighbourhood-refinement step; interest graph models; latent space model; node similarity; one-dimensional sphere; online social networks; peer-to-peer networks; planted partition model; robust network formation mechanism; Algorithm design and analysis; Clustering algorithms; Motion pictures; Partitioning algorithms; Peer to peer computing; Prediction algorithms; Robustness

192. Proactive seeding for information cascades in cellular networks.

【Paper Link】 【Pages】:1719-1727

【Authors】: Francesco Malandrino ; Maciej Kurant ; Athina Markopoulou ; Cédric Westphal ; Ulas C. Kozat

【Abstract】: Online social networks (OSNs) play an increasingly important role today in informing users about content. At the same time, mobile devices provide ubiquitous access to this content through the cellular infrastructure. In this paper, we exploit the fact that interest in content spreads over OSNs, which makes it, to a certain extent, predictable. We propose Proactive Seeding, a technique for minimizing the peak load of cellular networks by proactively pushing (“seeding”) content to selected users before they actually request it. We develop a family of algorithms that take as input information primarily about (i) cascades on the OSN and possibly about (ii) the background traffic load in the cellular network and (iii) the local connectivity among mobiles; the algorithms then select which nodes to seed and when. We prove that Proactive Seeding is optimal when the prediction of information cascades is perfect. In realistic simulations, driven by traces from Twitter and cellular networks, we find that Proactive Seeding reduces the peak cellular load by 20%-50%. Finally, we combine Proactive Seeding with techniques that exploit local mobile-to-mobile connections to further reduce the peak load.

【Keywords】: cellular radio; mobile computing; mobile handsets; social networking (online); OSN; Twitter; background traffic load; cellular infrastructure; cellular networks; information cascades; local connectivity; mobile devices; mobile-to-mobile connections; online social networks; proactive seeding; Delay; History; Land mobile radio cellular systems; Mobile handsets; Prediction algorithms; Schedules; Social network services

193. Firewall fingerprinting.

【Paper Link】 【Pages】:1728-1736

【Authors】: Amir R. Khakpour ; Joshua W. Hulst ; Zihui Ge ; Alex X. Liu ; Dan Pei ; Jia Wang

【Abstract】: Firewalls are critical security devices that handle all traffic in and out of a network. Like other software and hardware network devices, firewalls have vulnerabilities that can be exploited by motivated attackers. However, because firewalls are usually placed in the network so that they are transparent to end users, it is very hard to identify them and exploit their corresponding vulnerabilities. In this paper, we study firewall fingerprinting, in which one can use firewall decisions on TCP packets with unusual flags, together with machine learning techniques, to infer the firewall implementation.

【Keywords】: authorisation; computer crime; computer network security; fingerprint identification; learning (artificial intelligence); transport protocols; TCP packets; attacker motivation; firewall decisions; firewall fingerprinting; hardware network devices; machine learning techniques; security devices; software network devices; Hardware; IP networks; Indexes; Probes; Sensitivity; Software; Time measurement

194. Hardware-accelerated regular expression matching at multiple tens of Gb/s.

【Paper Link】 【Pages】:1737-1745

【Authors】: Jan van Lunteren ; Alexis Guanella

【Abstract】: Hardware acceleration of regular expression matching is key to meeting the throughput requirements of state-of-the-art network intrusion detection systems (NIDSs) dictated by fast-growing link speeds. This paper presents extensions to a programmable state machine, called B-FSM, which was initially optimized for string matching. These extensions enable direct support in hardware for essential regular expression features, such as character classes and case insensitivity. Moreover, they also allow the exploitation of regular expression properties that show up at the data structure level as common transitions shared between multiple states, resulting in storage reductions of up to 95% for the five NIDS pattern sets analyzed. Additional instruction support based on a flexible integration within the B-FSM data structure increases the processing capabilities and enables scaling to larger pattern collections. The new IBM Power Edge of Network™ processor employs the B-FSM technology to provide scanning capabilities at typical rates of 20-40 Gb/s.

【Keywords】: data structures; finite state machines; instruction sets; parallel processing; security of data; string matching; B-FSM data structure; IBM Power Edge of Network processor; NIDS pattern set; PowerEn processor; case insensitivity; character class; flexible integration; hardware-accelerated regular expression matching; instruction support; link speed; network intrusion detection system; processing capability; programmable state machine; regular expression features; scanning capability; storage reduction; string matching; throughput requirement; Data structures; Doped fiber amplifiers; Engines; Hardware; Optimization; Registers; Vectors

195. FlowSifter: A counting automata approach to layer 7 field extraction for deep flow inspection.

【Paper Link】 【Pages】:1746-1754

【Authors】: Chad R. Meiners ; Eric Norige ; Alex X. Liu ; Eric Torng

【Abstract】: In this paper, we introduce FlowSifter, a systematic framework for online application protocol field extraction. FlowSifter introduces a new grammar model, Counting Regular Grammars (CRG), and a corresponding automata model, Counting Automata (CA). The CRG and CA models add counters with update functions and transition guards to regular grammars and finite state automata. These additions give CRGs and CAs the ability to parse and extract fields from context-sensitive application protocols. They also facilitate fast and stackless approximate parsing of recursive structures. These new grammar models enable FlowSifter to generate optimized Layer 7 field extractors from simple extraction specifications. In our experiments, we compare FlowSifter against both BinPAC and UltraPAC, the freely available state-of-the-art field extractors. Our experiments show that, compared to UltraPAC parsers, FlowSifter extractors run 84% faster and use 12% of the memory.
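To illustrate what a counting automaton adds over a plain finite state automaton, here is a toy extractor for a length-prefixed field: a counter with an update function (accumulating the declared length) and a transition guard (counting down while copying the body). The `LEN:` wire format is invented for illustration; this is not FlowSifter's grammar language.

```python
def extract_length_prefixed(data: bytes) -> bytes:
    """Toy counting-automaton sketch: two states plus a counter, extracting
    a field of the form b"LEN:<n> <n bytes>"."""
    state, counter, num, field = "LEN", 0, 0, bytearray()
    i = data.find(b":") + 1
    while i < len(data):
        c = data[i]
        if state == "LEN":
            if 48 <= c <= 57:            # digit: update the counter value
                num = num * 10 + (c - 48)
            else:                        # separator: arm the counter
                state, counter = "BODY", num
        else:                            # BODY: guarded on counter > 0
            if counter == 0:
                break
            field.append(c)
            counter -= 1
        i += 1
    return bytes(field)

print(extract_length_prefixed(b"LEN:5 helloworld"))  # extracts b"hello"
```

A fixed-state automaton cannot track the unbounded length value; the counter is what makes the context-sensitive field boundary expressible.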

【Keywords】: context-free grammars; finite state machines; protocols; FlowSifter; context sensitive application protocol; counting automata approach; counting regular grammar; deep flow inspection; extraction specification; finite state automata; grammar model; layer 7 field extraction; online application protocol field extraction; recursive structure; stackless approximate parsing; transition guard; update function; Approximation methods; Automata; Data mining; Grammar; Production; Protocols; Radiation detectors

196. Robust feature selection and robust PCA for internet traffic anomaly detection.

【Paper Link】 【Pages】:1755-1763

【Authors】: Cláudia Pascoal ; Maria Rosário de Oliveira ; Rui Valadas ; Peter Filzmoser ; Paulo Salvador ; António Pacheco

【Abstract】: Robust statistics is a branch of statistics that includes statistical methods capable of dealing adequately with the presence of outliers. In this paper, we propose an anomaly detection method that combines a feature selection algorithm and an outlier detection method, both of which make extensive use of robust statistics. Feature selection is based on a mutual information metric for which we have developed a robust estimator; it also includes a novel and automatic procedure for determining the number of relevant features. Outlier detection is based on robust Principal Component Analysis (PCA) which, in contrast to classical PCA, is not sensitive to outliers and precludes the need for training on a reliably labeled dataset, a strong advantage from the operational point of view. To evaluate our method we designed a network scenario capable of producing a perfect ground truth under real (but controlled) traffic conditions. Results show the significant improvements of our method over the corresponding classical ones. Moreover, despite being a largely overlooked issue in the context of anomaly detection, feature selection is found to be an important preprocessing step, allowing adaptation to different network conditions and inducing significant performance gains.

【Keywords】: Internet; principal component analysis; telecommunication traffic; Internet traffic anomaly detection; ground-truth; network scenario; outlier detection method; principal component analysis; robust PCA; robust feature selection; robust statistics; Estimation; Feature extraction; Gaussian distribution; Measurement; Principal component analysis; Robustness; Vectors

197. Achievable transmission capacity of cognitive mesh networks with different media access control.

【Paper Link】 【Pages】:1764-1772

【Authors】: Tao Jing ; Xiuying Chen ; Yan Huo ; Xiuzhen Cheng

【Abstract】: Spectrum sharing is an emerging mechanism to resolve the conflict between spectrum scarcity and the growing demand for wireless broadband access. In this paper we investigate the achievable transmission capacity of a wireless backhaul mesh network that shares the spectrum of the underutilized cellular uplink over the underlay spectrum sharing model with several commonly adopted medium access control protocols: slotted ALOHA, CSMA/CA, and TDMA. By employing stochastic geometry, we derive the probabilities for a packet to be successfully transmitted in the primary cellular uplink and the secondary mesh networks. The achievable transmission capacity of the secondary network with outage probability constraints from both the primary and the secondary systems is obtained according to Shannon's theory. The capacity region and the achievable capacity when the outage probabilities equal their corresponding threshold values are analyzed numerically, and the results illustrate the effect of adjusting the mesh network parameters on the achievable transmission capacity under different MAC protocols.

【Keywords】: access protocols; broadband networks; cellular radio; cognitive radio; probability; radio access networks; wireless mesh networks; CSMA/CA; Shannon's theory; TDMA; capacity region; carrier sense multiple access; cognitive mesh networks; media access control; medium access control protocols; outage probability constraints; primary cellular uplink; secondary mesh networks; slotted-ALOHA; spectrum scarcity; spectrum sharing; stochastic geometry; time division multiple access; transmission capacity; wireless backhaul mesh network; wireless broadband access; Capacity planning; Interference; Media Access Protocol; Mesh networks; Receivers; Signal to noise ratio; Transmitters; Achievable transmission capacity; media access control; outage probability constraint; primary cellular network; secondary cognitive mesh networks

198. On exploiting degrees-of-freedom in whitespaces.

【Paper Link】 【Pages】:1773-1781

【Authors】: Harish Ganapathy ; Mukundan Madhavan ; Malolan Chetlur ; Shivkumar Kalyanaraman

【Abstract】: TV Whitespaces, recently opened up by the FCC for unlicensed use by wireless devices, are seen as a potential cellular offload solution, especially in dense metros. However, under the new database-driven guidelines, there are typically very few whitespace bands available in such dense metros to a high-powered fixed device, which plays the role of a cellular base station in whitespaces. To address the lack of degrees-of-freedom (DoF) with this traditional architecture of one high-powered serving device, we propose a novel base station design that co-locates and networks together many low-powered devices to act as a multiple-antenna array. Lower-powered whitespace devices have access to more spectral DoF, a property that is unique to whitespaces. In the first part of the paper, we solve an array design problem where we estimate the size of the array required to meet long-term (worst-case) throughput targets. Using extensive simulations, we show that by effectively exploiting both spatial and spectral DoF, the array design outperforms the traditional design in most network conditions. Specifically, the proposed design can support throughputs of the order of a WiMAX cell running applications such as high-definition television. In the second part of the paper, we turn our attention to the operational aspects of such a design. Recognizing that the proposed array can potentially contain hundreds of elements, we propose a dynamic ON-OFF power control algorithm that operates in conjunction with the MaxWeight data scheduling algorithm and responds to the current network state - queues and channels - of the system, thus making the system power-efficient.

【Keywords】: WiMax; antenna arrays; cellular radio; high definition television; power control; television antennas; MaxWeight data scheduling algorithm; TV whitespace band; WiMAX cell; array size estimation; base station design; cellular base station; database-driven guideline; dynamic ON-OFF power control algorithm; high-definition television; high-powered fixed device; lower-powered whitespace device; multiple-antenna array; one high-powered serving device; potential cellular offload solution; spectral DoF exploitation; spectral degrees-of-freedom exploitation; wireless device; Arrays; Base stations; FCC; Interference; TV; Throughput; Wireless communication

199. Spectrum sensing based on three-state model to accomplish all-level fairness for co-existing multiple cognitive radio networks.

【Paper Link】 【Pages】:1782-1790

【Authors】: Yanxiao Zhao ; Min Song ; Chunsheng Xin ; Manish Wadhwa

【Abstract】: Spectrum sensing plays a critical role in cognitive radio networks (CRNs). The majority of spectrum sensing algorithms aim to detect the existence of a signal on a channel, i.e., they classify a channel into either busy or idle state, referred to as a two-state sensing model in this paper. While this model works properly when there is only one CRN accessing a channel, it significantly limits the potential and fairness of spectrum access when there are multiple co-existing CRNs. This is because if the secondary users (SUs) from one CRN are accessing a channel, SUs from other CRNs would detect the channel as busy and hence be starved. In this paper, we propose a three-state sensing model that classifies a channel into three states: idle, occupied by a primary user, or occupied by a secondary user. This model effectively addresses the fairness concern of the two-state sensing model, and resolves the starvation problem of multiple co-existing CRNs. To accurately detect each of the three states, we develop a two-stage detection procedure. In the first stage, energy detection is employed to identify whether a channel is idle or occupied. If the channel is occupied, the received signal is further analyzed at the second stage to determine whether the signal originates from a primary user or an SU. For the second stage, we design a statistical model and use it for distance estimation. For detection performance, false alarm and miss detection probabilities are theoretically analyzed. Furthermore, we thoroughly analyze the performance of throughput and fairness for the three-state sensing model compared with the two-state sensing model. In terms of fairness, we define a novel performance metric called all-level fairness for all (ALFA) to characterize fairness among CRNs. Extensive simulations are carried out under various scenarios to evaluate the three-state sensing model and verify the aforementioned theoretical analysis.

【Keywords】: cognitive radio; probability; signal detection; ALFA; all level fairness for all; channel detection; distance estimation; energy detection; false alarm probability; idle state; miss detection probability; multiple cognitive radio networks; performance metric; primary user; secondary user; signal detection; spectrum access; spectrum sensing; statistical model; three state sensing model; two stage detection procedure; two state sensing model; Accuracy; Analytical models; Cognitive radio; Estimation; Measurement; Noise; Sensors

200. Delay optimal multichannel opportunistic access.

【Paper Link】 【Pages】:1791-1799

【Authors】: Shiyao Chen ; Lang Tong ; Qing Zhao

【Abstract】: The problem of minimizing the queueing delay of opportunistic access to multiple continuous-time Markov channels is considered. A new access policy based on myopic sensing and adaptive transmission (MS-AT) is proposed. Under the framework of a risk-sensitive constrained Markov decision process with effective bandwidth as a measure of queueing delay, it is shown that MS-AT simultaneously achieves throughput and delay optimality. It is shown further that both the effective bandwidth and the throughput of MS-AT are two-segment piecewise-linear functions of the collision constraint (maximum allowable conditional collision probability), with the effective bandwidth and throughput coinciding in the regime of tight collision constraints. Analytical and simulation comparisons are conducted with the myopic sensing and memoryless transmission (MS-MT) policy, which is throughput optimal but delay suboptimal in the regime of tight collision constraints.

【Keywords】: Markov processes; multi-access systems; queueing theory; telecommunication channels; MS-AT; MS-MT policy; adaptive transmission; collision constraints; delay optimal multichannel opportunistic access; memoryless transmission policy; multiple continuous time Markov channels; myopic sensing policy; queueing delay; risk sensitive constrained Markov decision process; two-segment piecewise linear functions; Bandwidth; Delay; History; Interference constraints; Markov processes; Sensors; Throughput; Delay optimal medium access; constrained risk sensitive Markov decision process; effective bandwidth; opportunistic access

201. A Markov chain model for coarse timescale channel variation in an 802.16e wireless network.

【Paper Link】 【Pages】:1800-1807

【Authors】: Anand Seetharam ; Jim Kurose ; Dennis Goeckel ; Gautam D. Bhanage

【Abstract】: A wide range of wireless channel models have been developed to model variations in received signal strength. In contrast to prior work, which has focused primarily on channel modeling on a short, per-packet timescale (milliseconds), we develop and validate a finite-state Markov chain model that captures variations due to shadowing, which occur at coarser time scales. The Markov chain is constructed by partitioning the entire range of shadowing into a finite number of intervals. We determine the Markov chain transition matrix in two ways: (i) via an abstract modeling approach in which shadowing effects are modeled as a log-normally distributed random variable affecting the received power, and the transition probabilities are derived as functions of the variance and autocorrelation function of shadowing; (ii) via an empirical approach, in which the transition matrix is calculated by directly measuring the changes in signal strengths collected in an 802.16e (WiMAX) network. We validate the abstract model by comparing its steady state and transient performance predictions with those computed using the empirically derived transition matrix and those observed in the actual traces themselves.
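The empirical flavor of such a model can be sketched by estimating a transition matrix from a simulated shadowing trace: log-normal shadowing is Gaussian in dB, modelled here as an AR(1) process, with states given by partitioning the dB range. The parameter values (`sigma`, `rho`, interval edges) are illustrative choices, not values from the paper.

```python
import random
from math import sqrt

def shadowing_transition_matrix(sigma=6.0, rho=0.8, edges=(-3.0, 3.0),
                                steps=200_000, seed=1):
    """Estimate a finite-state transition matrix for shadowing (in dB),
    modelled as a Gaussian AR(1) process with stationary std `sigma` and
    one-step autocorrelation `rho`; states partition the dB axis at `edges`."""
    random.seed(seed)
    k = len(edges) + 1
    counts = [[0] * k for _ in range(k)]
    def state(x):
        for i, e in enumerate(edges):
            if x < e:
                return i
        return k - 1
    x = random.gauss(0.0, sigma)
    for _ in range(steps):
        nxt = rho * x + random.gauss(0.0, sigma * sqrt(1 - rho * rho))
        counts[state(x)][state(nxt)] += 1
        x = nxt
    return [[c / max(1, sum(row)) for c in row] for row in counts]

P = shadowing_transition_matrix()
for row in P:                       # rows sum to 1; mass stays near the diagonal
    print(" ".join(f"{p:.2f}" for p in row))
```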

【Keywords】: IEEE standards; Markov processes; WiMax; broadband networks; radio access networks; telecommunication standards; 802.16e wireless network; Markov chain transition matrix; WiMAX; abstract modeling; coarse timescale channel variation; finite-state Markov chain model; received signal strength; shadowing effects; wireless channel models; Analytical models; Correlation; Markov processes; Predictive models; Shadow mapping; WiMAX

202. Stochastic geometry based medium access games.

【Paper Link】 【Pages】:1808-1816

【Authors】: Manjesh Kumar Hanawal ; Eitan Altman ; François Baccelli

【Abstract】: This paper studies the performance of Mobile Ad hoc Networks (MANETs) when the nodes, that form a Poisson point process, selfishly choose their Medium Access Probability (MAP). We consider goodput and delay as the performance metric that each node is interested in optimizing taking into account the transmission energy costs. We introduce a pricing scheme based on the transmission energy requirements and compute the symmetric Nash equilibria of the game in closed form. It is shown that by appropriately pricing the nodes, the selfish behavior of the nodes can be used to achieve the social optimum at equilibrium. The price of anarchy is then analyzed for these games. For the game with delay based utility, we bound the price of anarchy and study the effect of the price factor. For the game with goodput based utility, it is shown that price of anarchy is infinite at the price factor that achieves the global optima.

【Keywords】: game theory; mobile ad hoc networks; probability; stochastic processes; MANET; MAP; medium access games; medium access probability; mobile ad hoc networks; price factor; pricing scheme; stochastic geometry; symmetric Nash equilibria; transmission energy costs; transmission energy requirements; Ad hoc networks; Delay; Games; Mobile computing; Pricing; Receivers; Transmitters; Game Theory; Medium Access Control; Mobile Ad hoc Networks (MANETs); Pricing; Stochastic Geometry

203. A geometrical probability approach to location-critical network performance metrics.

【Paper Link】 【Pages】:1817-1825

【Authors】: Yanyan Zhuang ; Jianping Pan

【Abstract】: Node locations and distances are of profound importance for the operation of any communication network. With the fundamental inter-node distance captured in a random network, one can build probabilistic models for characterizing network performance metrics such as k-th nearest neighbor and traveling distances, as well as transmission power and path loss in wireless networks. For the first time in the literature, a unified approach is developed to obtain the closed-form distributions of inter-node distances associated with hexagons. The approach can be specialized to elementary geometries such as squares and rectangles. Through the formulation of a quadratic product, the proposed approach can characterize general statistical distances even when node coordinates are interdependent. Hence, our approach applies to both elementary and complex geometric topologies, and the corresponding probabilistic distance models go beyond approximations and Monte Carlo simulations. Analytical models based on hexagon distributions are applied to the analysis of the nearest neighbor distribution in a sparse network for improving energy efficiency, and the farthest neighbor distribution in a dense network for minimizing routing overhead. Both the models and simulations demonstrate the high accuracy and promising potential of this approach, whereas the current best approximations are not applicable in many scenarios. This geometrical probability approach thus provides accurate information essential to successful network protocol and system design.
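As a sanity check on the kind of distance statistic such closed forms capture, a Monte Carlo estimate of the mean inter-node distance can be compared against a known closed-form constant. The unit square is used here because its mean-distance constant is classical; the hexagon results of the paper are not reproduced.

```python
import random
from math import sqrt, log

def mean_distance_unit_square(samples=200_000, seed=7):
    """Monte Carlo estimate of the mean distance between two points drawn
    uniformly at random in the unit square."""
    random.seed(seed)
    total = 0.0
    for _ in range(samples):
        total += sqrt((random.random() - random.random()) ** 2 +
                      (random.random() - random.random()) ** 2)
    return total / samples

# Known closed form for the unit square: (2 + sqrt(2) + 5*asinh(1)) / 15,
# where asinh(1) = ln(1 + sqrt(2)); roughly 0.5214.
exact = (2 + sqrt(2) + 5 * log(1 + sqrt(2))) / 15
print(mean_distance_unit_square(), exact)
```

The closed form removes both the sampling noise and the cost of simulation, which is the practical appeal of the distributions derived in the paper.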

【Keywords】: Monte Carlo methods; probability; radio networks; telecommunication network routing; telecommunication network topology; Monte Carlo simulations; energy efficiency; farthest neighbor distribution; general statistical distances; geometric topologies; geometrical probability; hexagon distributions; internode distance; k-th nearest neighbor; location critical network performance metrics; nearest neighbor distribution; network protocol design; node coordinates; node locations; path loss; probabilistic distance models; random network; routing overhead; system design; traveling distances; wireless networks; Analytical models; Geometry; Interference; Measurement; Probabilistic logic; Random variables; Shape; Probabilistic distance distributions; geometric models; hexagons; rhombuses

204. Probabilistic analysis of buffer starvation in Markovian queues.

【Paper Link】 【Pages】:1826-1834

【Authors】: Yuedong Xu ; Eitan Altman ; Rachid El Azouzi ; Majed Haddad ; Salah-Eddine Elayoubi ; Tania Jiménez

【Abstract】: Our purpose in this paper is to obtain the exact distribution of the number of buffer starvations within a sequence of N consecutive packet arrivals. The buffer is modeled as an M/M/1 queue. When the buffer is empty, service restarts after a certain number of packets are prefetched. With this goal, we propose two approaches, one based on the Ballot theorem and the other on recursive equations. The Ballot theorem approach gives an explicit solution, but at the cost of high computational complexity in certain circumstances. The recursive approach, though not offering an explicit result, needs fewer computations. We further propose a fluid analysis of the starvation probability at the file level, given the distribution of file size and the traffic intensity. The starvation probabilities of this paper have many potential applications. We apply them to optimize the quality of experience (QoE) of a media streaming service, by exploiting the tradeoff between start-up delay and starvation.
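A simulation companion to this analysis can count starvation events directly: an M/M/1 buffer whose service restarts only after a prefetching threshold accumulates. The parameter values below are illustrative, not from the paper, and the simulation is a cross-check, not a substitute for the exact distributions.

```python
import random

def starvations(lam=0.9, mu=1.0, prefetch=5, n_arrivals=10_000, seed=3):
    """Count buffer starvations within n_arrivals packet arrivals for an
    M/M/1 buffer (arrival rate lam, service rate mu) that restarts service
    only after `prefetch` packets have accumulated."""
    random.seed(seed)
    now, queue, serving, starv, arrivals = 0.0, 0, False, 0, 0
    t_arr = random.expovariate(lam)
    t_dep = float("inf")
    while arrivals < n_arrivals:
        if t_arr <= t_dep:                 # next event: packet arrival
            now = t_arr
            arrivals += 1
            queue += 1
            t_arr = now + random.expovariate(lam)
            if not serving and queue >= prefetch:
                serving = True             # prefetching done: service restarts
                t_dep = now + random.expovariate(mu)
        else:                              # next event: packet departure
            now = t_dep
            queue -= 1
            if queue == 0:                 # the buffer ran dry: starvation
                starv += 1
                serving = False
                t_dep = float("inf")
            else:
                t_dep = now + random.expovariate(mu)
    return starv

print(starvations())
```

Raising `prefetch` trades a longer start-up delay for fewer starvations, which is exactly the QoE tradeoff the paper optimizes.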

【Keywords】: Markov processes; media streaming; queueing theory; recursive estimation; Ballot theorem approach; M/M/1 queue; Markovian queues; QoE; buffer starvation probabilistic analysis; complexity order; file level; file size distribution; fluid analysis; media streaming service; packet arrivals; quality-of-experience; recursive equations; start-up delay; traffic intensity; Complexity theory; Delay; Equations; Media; Prefetching; Queueing analysis; Streaming media

205. Geometric algorithms for target localization and tracking under location uncertainties in wireless sensor networks.

【Paper Link】 【Pages】:1835-1843

【Authors】: Khuong Vu ; Rong Zheng

【Abstract】: Since the advent of wireless sensor networks, target localization and tracking have received much attention in a wide range of applications, including battlefield surveillance, wildlife monitoring, and border security. However, little work has been done that addresses the realistic considerations of uncertainties in sensor locations and evaluates their impact on the accuracy of target localization and tracking. In this paper, we carry out a rigorous study of these problems using a computational geometry approach. We introduce the geometric structures of order-k max and min Voronoi Diagrams (VDs) and propose an algorithm to construct these diagrams. Based on order-k max and min VDs, efficient algorithms are developed to evaluate the likelihood of noisy sensor readings and kNN queries, which serve as building blocks in target localization and tracking under sensor location uncertainties.

【Keywords】: geometry; target tracking; wireless sensor networks; VD; Voronoi diagrams; battle field surveillance; border security; computational geometry approach; geometric algorithms; location uncertainties; noisy sensor reading likelihood; order-k max; sensor locations; target localization; target tracking; wildlife monitoring; wireless sensor networks; Complexity theory; Measurement uncertainty; Sensors; Target tracking; Trajectory; Uncertainty

206. LBA: Lifetime balanced data aggregation in low duty cycle sensor networks.

【Paper Link】 【Pages】:1844-1852

【Authors】: Zi Li ; Yang Peng ; Daji Qiao ; Wensheng Zhang

【Abstract】: This paper proposes LBA, a lifetime balanced data aggregation scheme for asynchronous, duty-cycled sensor networks under an application-specific requirement on the end-to-end data delivery delay bound. In contrast to existing aggregation schemes that focus on reducing the energy consumption and extending the operational lifetime of each individual node, LBA has a unique design goal: to balance nodal lifetimes and thus prolong the network lifetime more effectively. To achieve this goal in a distributed manner, LBA adaptively adjusts the aggregation holding time between neighboring nodes to balance their nodal lifetimes; as such balancing takes place in all neighborhoods, nodes in the entire network gradually adjust their nodal lifetimes towards a globally balanced status. Experimental studies on a sensor network testbed show that LBA achieves this design goal, yields longer network lifetime than other non-adaptive and nodal-lifetime-unaware data aggregation schemes, and approaches the theoretical upper-bound performance, especially when nodes have highly different nodal lifetimes.

【Keywords】: access protocols; delays; wireless sensor networks; LBA; aggregation holding time; asynchronous sensor networks; end-to-end data delivery delay bound; energy consumption; lifetime balanced data aggregation; low duty cycle sensor networks; neighboring nodes; network lifetime; nodal lifetime; operational lifetime; sensor network testbed shows; Delay; Energy consumption; Media Access Protocol; Monitoring; Peer to peer computing; Receivers

207. Approximate convex decomposition based localization in wireless sensor networks.

【Paper Link】 【Pages】:1853-1861

【Authors】: Wenping Liu ; Dan Wang ; Hongbo Jiang ; Wenyu Liu ; Chonggang Wang

【Abstract】: Accurate localization in wireless sensor networks is the foundation for many applications, such as geographic routing and position-aware data processing. An important research direction for localization is to develop schemes using connectivity information only. These schemes primarily apply hop counts to distance estimation. Not surprisingly, they work well only when the network topology has a convex shape. In this paper, we develop a new Localization protocol based on Approximate Convex Decomposition (ACDL). It can calculate the node virtual locations for a large-scale sensor network with arbitrary shape. The basic idea is to decompose the network into convex subregions. This is not straightforward, however. We first examine the factors that influence the localization accuracy when the network is concave, such as the sharpness of the concave angle and the depth of the concave valley. We show that after decomposition, the depth of the concave valley becomes irrelevant. We thus define concavity according to the angle at a concave point, which can reflect the localization error. We then propose the ACDL protocol for network localization. It consists of four main steps. First, convex and concave nodes are recognized and network boundaries are segmented. As the sensor network is discrete, we show that it is acceptable to approximately identify the concave nodes to control the localization error. Second, an approximate convex decomposition is conducted. Our convex decomposition requires only local information, and we show that it has low message overhead. Third, for each convex subregion of the network, an improved Multi-Dimensional Scaling (MDS) algorithm is proposed to compute a relative location map. Fourth, a fast and low-complexity merging algorithm is developed to construct the global location map.
Our simulations on several representative networks demonstrate that ACDL has a localization error that is 60%-90% smaller than that of the typical MDS-MAP algorithm and 20%-30% smaller than that of CATL, a recent state-of-the-art localization algorithm.

【Keywords】: convex programming; protocols; sensor placement; wireless sensor networks; approximate convex decomposition based localization; localization accuracy; localization protocol; multidimensional scaling algorithm; node virtual location; relative location map; wireless sensor networks; Accuracy; Approximation algorithms; Euclidean distance; Matrix decomposition; Protocols; Shape; Wireless sensor networks

208. Share risk and energy: Sampling and communication strategies for multi-camera wireless monitoring networks.

【Paper Link】 【Pages】:1862-1870

【Authors】: Zichong Chen ; Guillermo Barrenetxea ; Martin Vetterli

【Abstract】: In the context of environmental monitoring, outdoor wireless cameras are vulnerable to natural hazards. To benefit from the inexpensive imaging sensors, we introduce a multi-camera monitoring system to share the physical risk. With multiple cameras focusing at a common scenery of interest, we propose an interleaved sampling strategy to minimize per-camera consumption by distributing sampling tasks among cameras. To overcome the uncertainties in the sensor network, we propose a robust adaptive synchronization scheme to build optimal sampling configuration by exploiting the broadcast nature of wireless communication. The theory as well as simulation results verify the fast convergence and robustness of the algorithm. Under the interleaved sampling configuration, we propose three video coding methods to compress correlated video streams from disjoint cameras, namely, distributed/independent/joint coding schemes. The energy profiling on a two-camera system shows that independent and joint coding perform substantially better. The comparison between two-camera and single-camera system shows 30%-50% per-camera consumption reduction. On top of these, we point out that MIMO technology can be potentially utilized to push the communication consumption even lower.

【Keywords】: MIMO communication; cameras; image sensors; radiotelemetry; sampling methods; video coding; video streaming; wireless sensor networks; MIMO technology; communication strategies; correlated video stream compression; distributed-independent-joint coding schemes; environmental monitoring; inexpensive imaging sensors; interleaved sampling strategy; multicamera wireless monitoring networks; optimal sampling configuration; per-camera consumption minimization; per-camera consumption reduction; robust adaptive synchronization scheme; sensor network; share risk; single-camera system; two-camera system; video coding methods; wireless communication; Cameras; Encoding; Monitoring; Robustness; Synchronization; Wireless communication; Wireless sensor networks

209. Sparse recovery with graph constraints: Fundamental limits and measurement construction.

【Paper Link】 【Pages】:1871-1879

【Authors】: Meng Wang ; Weiyu Xu ; Enrique Mallada ; Ao Tang

【Abstract】: This paper addresses the problem of sparse recovery with graph constraints, in the sense that we can take additive measurements over nodes only if they induce a connected subgraph. We provide explicit measurement constructions for several special graphs. A general measurement construction algorithm is also proposed and evaluated. For any given graph G with n nodes, we derive order-optimal upper bounds on the minimum number of measurements needed to recover any k-sparse vector over G, denoted M^G_{k,n}. Our study suggests that M^G_{k,n} may serve as a graph connectivity metric.

【Keywords】: graph theory; telecommunication network management; additive measurements; general measurement construction algorithm; graph constraints; k-sparse vector; sparse recovery; Aggregates; Bismuth; Compressed sensing; Monitoring; Sparse matrices; Testing; Vectors

210. The Variable-Increment Counting Bloom Filter.

【Paper Link】 【Pages】:1880-1888

【Authors】: Ori Rottenstreich ; Yossi Kanizo ; Isaac Keslassy

【Abstract】: Counting Bloom Filters (CBFs) are widely used in networking device algorithms. They implement fast set representations to support membership queries with limited error and, unlike Bloom Filters, support element deletions. However, they consume significant amounts of memory. In this paper we introduce a new general method based on variable increments to improve the efficiency of CBFs and their variants. Unlike in CBFs, at each element insertion the hashed counters are incremented by a hashed variable increment instead of a unit increment. Then, to query an element, the exact value of a counter is considered, not just whether it is positive. We present two simple schemes based on this method. We demonstrate that this method can always achieve a lower false positive rate and a lower overflow probability bound than CBF in practical systems. We also show how it can be easily implemented in hardware, with limited added complexity and memory overhead. We further explain how this method can extend many variants of CBF that have been published in the literature. Finally, using simulations, we show how it can improve the false positive rate of CBFs by up to an order of magnitude given the same amount of memory.

【Keywords】: counting circuits; digital filters; file organisation; element deletions; element insertion; fast set representations; hashed counters; hashed variable; membership queries; memory overhead; networking device algorithms; overflow probability bound; variable-increment counting bloom filter; Arrays; Complexity theory; Encoding; Filtering algorithms; Hardware; Memory management; Radiation detectors
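
The variable-increment mechanism described in the abstract above can be sketched in a few lines. The following is an illustrative toy only: the class name, the increment set, and the simplified consistency check in `query` are our own assumptions, not the paper's actual schemes, which use carefully chosen increment sets and tighter query rules.

```python
class VICountingBloomFilter:
    """Toy sketch of a variable-increment counting Bloom filter: each
    insertion adds a hashed variable increment (from a fixed set D) to k
    counters instead of adding 1, so a query can also check that the
    counter value is consistent with the element's own increment."""

    def __init__(self, m=1024, k=3, increments=(3, 5, 7, 11), seed=0):
        self.m = m                    # number of counters
        self.k = k                    # hash functions per element
        self.D = increments           # allowed variable increments
        self.seed = seed
        self.counters = [0] * m

    def _slots(self, item):
        # Derive k (position, increment) pairs from independent hashes.
        for i in range(self.k):
            h = hash((self.seed, i, item))
            yield h % self.m, self.D[(h // self.m) % len(self.D)]

    def add(self, item):
        for pos, inc in self._slots(item):
            self.counters[pos] += inc

    def remove(self, item):
        for pos, inc in self._slots(item):
            self.counters[pos] -= inc

    def query(self, item):
        for pos, inc in self._slots(item):
            c = self.counters[pos]
            if c < inc:
                return False          # too small to contain our increment
            if c < 2 * min(self.D) and c != inc:
                return False          # exactly one insertion hit pos, and it was not ours
        return True
```

When a counter has absorbed only one insertion, its value must equal the querying element's own increment exactly; rejecting mismatches is where the false-positive improvement over a unit-increment CBF comes from.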

211. Estimators also need shared values to grow together.

【Paper Link】 【Pages】:1889-1897

【Authors】: Erez Tsidon ; Iddo Hanniel ; Isaac Keslassy

【Abstract】: Network management applications require large numbers of counters in order to collect traffic characteristics for each network flow. However, these counters often barely fit into on-chip SRAM memories. Past papers have proposed using counter estimators instead, thus trading off counter precision for a lower number of bits. But these estimators do not achieve optimal estimation error, and cannot always scale to arbitrary counter values. In this paper, we introduce the CEDAR algorithm for decoupling the counter estimators from their estimation values, which are quantized into estimation levels and shared among many estimators. These decoupled and shared estimation values enable us to easily adjust them without needing to go through all the counters. We demonstrate how our CEDAR scheme achieves the min-max relative error, i.e., it can guarantee the best possible relative error over the entire counter scale. We also explain how to use dynamic adaptive estimation values in order to support counter up-scaling and adjust the estimation error depending on the current maximal counter. Finally, we implement CEDAR on an FPGA, explain how it can run at line rate, and analyze its performance and size requirements.

【Keywords】: SRAM chips; counting circuits; field programmable gate arrays; CEDAR algorithm; FPGA; counter estimation decoupling for approximate rates; counter estimator; counter up-scaling; estimation value; field programmable gate array; min-max relative error; network flow traffic characteristics; network management application; on-chip SRAM memories; optimal estimation error; static random access memory; Arrays; Equations; Estimation error; Radiation detectors; Random access memory; System-on-a-chip
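
The shared-level idea can be illustrated with a minimal sketch. The level array, names, and the unbiased up-step rule below are a simplification of our own, not CEDAR's exact implementation; the paper's contribution concerns choosing and adapting the shared values optimally.

```python
import random

class CedarEstimator:
    """Toy sketch of a shared-value counter estimator: every counter stores
    only a small index into one shared array A of estimation values, and an
    increment moves a counter up one level with probability
    1 / (A[i+1] - A[i]), which keeps the expected estimate unbiased."""

    def __init__(self, levels):
        self.A = list(levels)         # shared estimation values (sorted)

    def increment(self, idx, rng=random):
        if idx + 1 >= len(self.A):
            return idx                # saturated at the top level
        step = self.A[idx + 1] - self.A[idx]
        return idx + 1 if rng.random() < 1.0 / step else idx

    def estimate(self, idx):
        return self.A[idx]
```

With unit-spaced levels this degenerates to an exact counter; with widely spaced levels each counter stores only a few bits (a level index) at the cost of a bounded relative error, and because the level array is shared, all counters can be re-scaled by adjusting it once.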

212. Hierarchical hybrid search structure for high performance packet classification.

【Paper Link】 【Pages】:1898-1906

【Authors】: Oguzhan Erdem ; Hoang Le ; Viktor K. Prasanna

【Abstract】: Hierarchical search structures for packet classification offer good memory performance and support quick rule updates when implemented on multi-core network processors. However, pipelined hardware implementation of these algorithms has two disadvantages: (1) backtracking, which requires stalling the pipeline, and (2) inefficient memory usage due to variation in the size of the trie nodes. We propose a clustering algorithm that can partition a given rule database into a fixed number of clusters to eliminate backtracking in the state-of-the-art hierarchical search structures. Furthermore, we develop a novel ternary trie data structure (T). In the T structure, the size of the trie nodes is fixed by utilizing the ∈-branch property, which overcomes the memory inefficiency problems in the pipelined hardware implementation of hierarchical search structures. We design a two-stage hierarchical search structure consisting of binary search trees in Stage 1 and T structures in Stage 2. Our approach demonstrates a substantial reduction in memory footprint compared with the state-of-the-art. For all publicly available databases, the achieved memory efficiency is between 10.37 and 22.81 bytes of memory per rule, whereas state-of-the-art designs require over 23 bytes per rule even in the best case. We also propose an SRAM-based linear pipelined architecture for packet classification that achieves high throughput. Using a state-of-the-art FPGA, the proposed design can sustain a throughput of 418 million packets per second, or 134 Gbps (for the minimum packet size of 40 bytes). Additionally, our design maintains packet input order and supports in-place non-blocking rule updates.

【Keywords】: SRAM chips; field programmable gate arrays; microprocessor chips; multiprocessing systems; pipeline processing; tree data structures; ∈-branch property; FPGA; SRAM; backtracking; binary search trees; clustering algorithm; hierarchical hybrid search structure; high performance packet classification; linear pipelined architecture; memory footprint; memory inefficiency problem; memory usage; multicore network processor; pipelined hardware; rule database; ternary trie data structure; Clustering algorithms; Data structures; Hardware; IP networks; Partitioning algorithms; Silicon; Throughput

213. Energy efficient broadcast in multiradio multichannel wireless networks.

【Paper Link】 【Pages】:1907-1915

【Authors】: Changcun Ma ; Deying Li ; Hongwei Du ; Huan Ma ; Yuexuan Wang ; Wonjun Lee

【Abstract】: Broadcast is a fundamental operation in computer and communication networks. We study broadcast in multiradio multichannel multi-hop wireless networks. Suppose that, through configuration, each node is already assigned a transmission power level and a set of radio channels for receiving and forwarding data. Our problem is to select a forwarding scheme for broadcast from a given source node that minimizes total energy consumption; this is a known NP-hard minimization problem. In this paper, we construct a polynomial-time (1.35 + ϵ)(1+ln(n-1))-approximation algorithm, where n is the number of nodes in the given network and ϵ is any positive constant. We also show that there is no polynomial-time (ρ ln n)-approximation for 0 < ρ < 1 unless NP ⊆ DTIME(n^{O(log log n)}).

【Keywords】: computational complexity; minimisation; polynomial approximation; radio networks; wireless channels; NP-hard minimization problem; computer-communication networks; energy consumption minimization; energy-efficient broadcast; multiradio multichannel multihop wireless networks; polynomial-time approximation algorithm; radio channels; source node; transmission power level; Approximation algorithms; Approximation methods; Bismuth; Energy consumption; Law; Optimized production technology

214. Energy-efficient collaborative sensing with mobile phones.

【Paper Link】 【Pages】:1916-1924

【Authors】: Xiang Sheng ; Jian Tang ; Weiyi Zhang

【Abstract】: Mobile phones with a rich set of embedded sensors enable sensing applications in various domains. In this paper, we propose to leverage cloud-assisted collaborative sensing to reduce sensing energy consumption for mobile phone sensing applications. We formally define a minimum energy sensing scheduling problem and present a polynomial-time algorithm to obtain optimal solutions, which can be used to show the energy savings that can potentially be achieved by collaborative sensing in mobile phone sensing applications, and can also serve as a benchmark for performance evaluation. We also address individual energy consumption and fairness by presenting an algorithm to find fair energy-efficient sensing schedules. Under realistic assumptions, we present two practical and effective heuristic algorithms to find energy-efficient sensing schedules. Simulation results based on real energy consumption data (measured by the Monsoon power monitor) and location data (collected from Google Maps) show that collaborative sensing significantly reduces energy consumption compared to a traditional approach without collaboration, and that the proposed heuristic algorithms perform well in terms of both total energy consumption and fairness.

【Keywords】: cloud computing; embedded systems; energy consumption; groupware; mobile computing; mobile handsets; scheduling; sensors; cloud-assisted collaborative sensing; consumption data; embedded sensors; energy consumption; energy-efficient collaborative sensing; location data; minimum energy sensing scheduling problem; mobile phones; polynomial-time algorithm; sensing applications; sensing energy consumption; Collaboration; Energy consumption; Mobile communication; Mobile handsets; Roads; Schedules; Sensors; Mobile phone sensing; collaborative sensing; energy-efficiency; opportunistic sensing; scheduling
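
The fairness aspect above can be illustrated with a small greedy heuristic. This is our own illustrative stand-in, not the paper's optimal polynomial-time algorithm or its actual heuristics; the function name and cost model are assumptions.

```python
def fair_schedule(costs, slots):
    """Greedy fairness heuristic: for each sensing slot, assign the phone
    whose cumulative energy after taking the slot would be lowest.
    costs[p][t] is the energy phone p spends to sense in slot t."""
    total = [0.0] * len(costs)        # cumulative energy per phone
    plan = []                         # phone assigned to each slot
    for t in range(slots):
        p = min(range(len(costs)), key=lambda q: total[q] + costs[q][t])
        total[p] += costs[p][t]
        plan.append(p)
    return plan, total
```

With identical per-slot costs the heuristic alternates phones, splitting the sensing energy evenly; heterogeneous costs make it trade total energy against balance, which is exactly the tension between the minimum-energy and fair schedules discussed in the abstract.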

215. Reducing power of traffic manager in routers via dynamic on/off-chip scheduling.

【Paper Link】 【Pages】:1925-1933

【Authors】: Jindou Fan ; Chengchen Hu ; Keqiang He ; Junchen Jiang ; Bin Liu

【Abstract】: Green networking in the Internet is becoming increasingly important. In a high-performance router, a dominant power consumer on the Internet, half of the total power usage goes into the line-cards, and the traffic managers inside them consume most of that. In this paper, we propose an energy-efficient design of the traffic manager architecture for packet buffering and storage. Unlike traditional routers, where packets are always kept in off-chip memory, we propose a dynamic on-chip and off-chip scheduling mechanism, called Dynamic Packet Manager (DPM), to reduce both the peak and average power consumption caused by the traffic manager. DPM buffers packets in a small on-chip memory during light-traffic periods, and activates the off-chip memory when the on-chip memory is about to overflow. In this design, when the traffic is light, the off-chip memory is put into a power-saving state by clock gating, so that the average power consumption is reduced. With an on-chip flow-based and off-chip class-based design, DPM can save one off-chip memory otherwise used for per-flow index information storage, thereby further reducing the peak power usage. We present a theoretical analysis guiding the implementation of the DPM mechanism. Experiments on three prototypes implemented on different hardware show that the peak and average power consumption can be reduced by 27.9% and 37.5%, respectively, along with a lower on-chip memory cost. Besides, the traffic manager with DPM achieves a lower average packet scheduling delay than one without DPM.

【Keywords】: Internet; memory architecture; microprocessor chips; telecommunication network routing; telecommunication traffic; DPM buffer; Internet; average packet scheduling delay; average power consumption; clock gating; dynamic on/off-chip scheduling; dynamic packet manager; energy-efficient design; green networking; high-performance router; light-traffic period; line-card; off-chip class-based design; off-chip memory; on-chip flow; on-chip memory cost; packet buffering; peak power consumption; per-flow index information storage; power reduction; power saving state; small on-chip memory; total power usage; traffic manager architecture; Switches; System-on-a-chip
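
The on/off-chip scheduling policy can be caricatured as a small state machine. The class, thresholds, and queue discipline below are illustrative assumptions of ours, not the paper's actual hardware design.

```python
class DynamicPacketManager:
    """Sketch of dynamic on/off-chip scheduling: packets stay in a small
    on-chip queue while traffic is light; the off-chip memory is woken
    only when the on-chip queue nears overflow, and is clock-gated back
    into a power-saving state once the backlog drains."""

    def __init__(self, onchip_capacity=8, wake_threshold=0.75):
        self.onchip = []
        self.offchip = []
        self.high = int(onchip_capacity * wake_threshold)
        self.offchip_active = False   # clock-gated (power saving) when False

    def enqueue(self, pkt):
        if not self.offchip_active and len(self.onchip) < self.high:
            self.onchip.append(pkt)       # light traffic: stay on-chip
        else:
            self.offchip_active = True    # wake the off-chip memory
            self.offchip.append(pkt)

    def dequeue(self):
        if self.onchip:
            pkt = self.onchip.pop(0)      # older packets are always on-chip
        elif self.offchip:
            pkt = self.offchip.pop(0)
        else:
            return None
        if not self.offchip and self.offchip_active:
            self.offchip_active = False   # backlog drained: gate the clock
        return pkt
```

Because new packets are diverted off-chip only after the on-chip queue fills, draining on-chip first preserves packet order, mirroring the abstract's claim that average power drops while scheduling delay improves for light traffic.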

216. Cherish every joule: Maximizing throughput with an eye on network-wide energy consumption.

【Paper Link】 【Pages】:1934-1941

【Authors】: Canming Jiang ; Yi Shi ; Y. Thomas Hou ; Wenjing Lou

【Abstract】: Conserving network-wide energy consumption is becoming an increasingly important concern for network operators. In this work, we study the network-wide energy conservation problem, which we hope will offer insights to both network operators and users. In the first part of this work, we study how to maximize throughput under a network-wide energy constraint. We formulate this problem as a mixed-integer nonlinear program (MINLP). We propose a novel piecewise linear approximation to transform the nonlinear constraints into linear constraints. We prove that the solution developed under this approach is near-optimal with a guaranteed performance bound. In the second part, we generalize the problem in the first part by exploring throughput and network-wide energy optimization via a multi-criteria optimization framework. We show that the weakly Pareto-optimal points in the solution can characterize an optimal throughput-energy curve. We offer some interesting properties of the optimal throughput-energy curve which are useful to both network operators and end users.

【Keywords】: Pareto optimisation; approximation theory; integer programming; nonlinear programming; piecewise linear techniques; radio networks; MINLP; mixed-integer nonlinear program; multicriteria optimization framework; network-wide energy conservation problem; network-wide energy constraint; network-wide energy consumption; network-wide energy optimization; optimal throughput-energy curve; piecewise linear approximation; throughput maximization; weakly Pareto-optimal point; wireless network; IEC; Optimized production technology
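
The piecewise-linear approximation trick used to turn nonlinear constraints into linear ones can be illustrated on a concave function, where chords between sample points give a linear under-approximation. This is a generic sketch under our own assumptions, not the paper's exact construction or its performance bound.

```python
import math

def pwl_chords(f, lo, hi, pieces):
    """Sample f at evenly spaced breakpoints and return the chord segments
    (x0, x1, y0, y1); for a concave f the chords under-approximate f, so
    each nonlinear constraint can be replaced by a set of linear ones."""
    xs = [lo + (hi - lo) * i / pieces for i in range(pieces + 1)]
    ys = [f(x) for x in xs]
    return list(zip(xs, xs[1:], ys, ys[1:]))

def pwl_value(segs, x):
    """Evaluate the piecewise-linear approximation at x."""
    for x0, x1, y0, y1 in segs:
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("x outside approximation range")
```

For example, with f(x) = log2(1 + x) (a capacity-style concave function) on [0, 15], the approximation is exact at the breakpoints and never exceeds f in between; increasing `pieces` tightens the gap, which is the lever behind a guaranteed performance bound.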

217. Socialize spontaneously with mobile applications.

【Paper Link】 【Pages】:1942-1950

【Authors】: Zimu Liu ; Yuan Feng ; Baochun Li

【Abstract】: With the proliferation of mobile devices in both smartphone and tablet form factors, it is intuitive and natural for users to socially interact with their collaborators or competitors in multi-party conferencing, productivity, or gaming applications. In this paper, we make the case that such social interactions should be much more spontaneous for users of these applications. We design and implement a new system framework, Reflex, to provide the system support needed to achieve spontaneous social interaction with other users of the same mobile application, be they in the same living room or around the world. Reflex features a simple and intuitive application programming interface (API), and uses cloud computing services from Google App Engine to offer the scalability and performance required to support spontaneous social networking at a large scale. Reflex is able to transparently switch to local interactions over the Bluetooth or Wi-Fi interfaces available on mobile devices whenever possible. To evaluate Reflex on the iOS platform, we developed a real-world music composition application, called MusicScore, from scratch on the iPad, which takes advantage of Reflex to let music composers collaborate in real time.

【Keywords】: Bluetooth; application program interfaces; cloud computing; groupware; interactive programming; mobile computing; music; smart phones; social sciences computing; wireless LAN; API; Bluetooth; Google App Engine; MusicScore; Reflex; Wi-Fi interfaces; application programming interface; cloud computing; collaborators; gaming applications; iPad; mobile applications; mobile devices; multi-party conferencing; music composition application; productivity; smartphone; social interactions; tablet; Bluetooth; Cloud computing; IEEE 802.11 Standards; Mobile communication; Mobile handsets; Servers; Social network services

218. SybilDefender: Defend against sybil attacks in large social networks.

【Paper Link】 【Pages】:1951-1959

【Authors】: Wei Wei ; Fengyuan Xu ; Chiu Chiang Tan ; Qun Li

【Abstract】: Distributed systems without trusted identities are particularly vulnerable to sybil attacks, where an adversary creates multiple bogus identities to compromise the running of the system. This paper presents SybilDefender, a sybil defense mechanism that leverages the network topologies to defend against sybil attacks in social networks. Based on performing a limited number of random walks within the social graphs, SybilDefender is efficient and scalable to large social networks. Our experiments on two 3,000,000 node real-world social topologies show that SybilDefender outperforms the state of the art by one to two orders of magnitude in both accuracy and running time. SybilDefender can effectively identify the sybil nodes and detect the sybil community around a sybil node, even when the number of sybil nodes introduced by each attack edge is close to the theoretically detectable lower bound. Besides, we propose two approaches to limiting the number of attack edges in online social networks. The survey results of our Facebook application show that the assumption made by previous work that all the relationships in social networks are trusted does not apply to online social networks, and it is feasible to limit the number of attack edges in online social networks by relationship rating.

【Keywords】: graph theory; security of data; social networking (online); Facebook application; SybilDefender; bogus identities; distributed systems; lower bound; network topologies; online social networks; relationship rating; social graphs; social topologies; sybil attacks; sybil community detection; sybil defense mechanism; sybil nodes; Approximation methods; Communities; Detection algorithms; Facebook; Image edge detection; Vectors
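
The intuition behind the random-walk approach above is that walks started inside a sybil region, which connects to the honest region through only a few attack edges, tend to stay trapped. The following coverage heuristic is our own illustrative caricature of that intuition, not SybilDefender's actual detection algorithms.

```python
import random

def walk_coverage(graph, start, walks=30, length=8, rng=None):
    """Count distinct nodes visited by a few short random walks from
    `start`. A walk from a sybil node rarely crosses the scarce attack
    edges, so it covers far fewer nodes than a walk from an honest node."""
    rng = rng or random.Random(7)     # fixed seed for reproducibility
    visited = {start}
    for _ in range(walks):
        node = start
        for _ in range(length):
            nbrs = graph[node]
            if not nbrs:
                break
            node = rng.choice(nbrs)
            visited.add(node)
    return len(visited)
```

Comparing the coverage of a suspect node against a baseline measured from known honest nodes gives a simple accept/flag rule; the paper's schemes refine this idea to work at million-node scale with provable thresholds.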

219. Preference-aware content dissemination in opportunistic mobile social networks.

【Paper Link】 【Pages】:1960-1968

【Authors】: Kate Ching-Ju Lin ; Chun-Wei Chen ; Cheng-Fu Chou

【Abstract】: As mobile devices have become more ubiquitous, mobile users increasingly expect to utilize proximity-based connectivity, e.g., WiFi and Bluetooth, to opportunistically share multimedia content based on their personal preferences. However, many previous studies investigate content dissemination protocols that distribute a single object to as many users in an opportunistic mobile social network as possible, without considering user preference. In this paper, we propose PrefCast, a preference-aware content dissemination protocol that aims to maximally satisfy user preferences for content objects. Due to the non-persistent connectivity between users in a mobile social network, when a user meets neighboring users for a limited contact duration, it needs to efficiently disseminate a suitable set of objects that can bring possible future contacts a high utility (the quantitative metric of preference satisfaction). We formulate such a problem as a maximum-utility forwarding model, and propose an algorithm that enables each user to predict how much utility it can contribute to future contacts and solve its optimal forwarding schedule in a distributed manner. Our trace-based evaluation shows that PrefCast produces 18.5% and 25.2% higher average utility than protocols that consider only contact frequency or only the preferences of local contacts, respectively.

【Keywords】: mobile computing; multimedia systems; protocols; social networking (online); PrefCast; content objects; maximum-utility forwarding model; mobile devices; mobile users; multimedia content; opportunistic mobile social networks; optimal forwarding schedule; preference-aware content dissemination protocols; proximity-based connectivity; trace-based evaluation; user preference; Broadcasting; Mobile communication; Mobile computing; Protocols; Schedules; Social network services; Unicast

220. Fine-grained private matching for proximity-based mobile social networking.

【Paper Link】 【Pages】:1969-1977

【Authors】: Rui Zhang ; Yanchao Zhang ; Jinyuan Sun ; Guanhua Yan

【Abstract】: Proximity-based mobile social networking (PMSN) refers to the social interaction among physically proximate mobile users directly through the Bluetooth/WiFi interfaces on their smartphones or other mobile devices. It is becoming increasingly popular due to the recent explosive growth in smartphone users. Profile matching, in which two users compare their personal profiles, is often the first step towards effective PMSN. It, however, conflicts with users' growing privacy concerns about disclosing their personal profiles to complete strangers before deciding to interact with them. This paper tackles this open challenge by designing a suite of novel fine-grained private matching protocols. Our protocols enable two users to perform profile matching without disclosing any information about their profiles beyond the comparison result. In contrast to existing coarse-grained private matching schemes for PMSN, our protocols allow finer differentiation between PMSN users and can support a wide range of matching metrics at different privacy levels. The security and communication/computation overhead of our protocols are thoroughly analyzed and evaluated via detailed simulations.

【Keywords】: Bluetooth; computer network security; mobile computing; pattern matching; protocols; smart phones; social networking (online); wireless LAN; Bluetooth-WiFi interface; PMSN; coarse-grained private matching scheme; fine grained private matching protocol; mobile device; mobile user; personal profile matching; proximity-based mobile social networking; security; smart phone; social interaction; Cryptography; Measurement; Privacy; Protocols; Smart phones; Social network services; Vectors

221. Memory-efficient pattern matching architectures using perfect hashing on graphic processing units.

【Paper Link】 【Pages】:1978-1986

【Authors】: Cheng-Hung Lin ; Chen-Hsiung Liu ; Shih-Chieh Chang ; Wing-Kai Hon

【Abstract】: Memory architectures have been widely adopted in network intrusion detection systems for inspecting malicious packets due to their flexibility and scalability. Memory architectures match input streams against thousands of attack patterns by traversing the corresponding state transition table stored in commodity memories. With the increasing number of attack patterns, reducing the memory requirement has become critical for memory architectures. In this paper, we propose a novel memory architecture that uses perfect hashing to condense state transition tables without hash collisions. The proposed memory architecture achieves up to 99.5% improvement in memory reduction compared to the traditional two-dimensional memory architecture. We have implemented our memory architectures on graphic processing units and tested them using attack patterns from Snort V2.8 and input packets from DEFCON. The experimental results show that the proposed memory architectures outperform state-of-the-art memory architectures in both performance and memory efficiency.

【Keywords】: computer network security; cryptography; graphics processing units; memory architecture; DEFCON; Snort V2.8; attack patterns; graphic processing units; input streams; malicious packet inspection; memory-efficient pattern matching architectures; network intrusion detection system; perfect hashing; state transition table; state transition tables; Automata; Educational institutions; Indexes; Memory management; deterministic finite automaton; pattern matching; perfect hashing
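
The collision-free lookup that perfect hashing provides can be sketched with a simple hash-and-displace construction. This generic scheme is our own illustration (names and the displacement search are assumptions), not the paper's GPU-oriented construction; in the worst case the displacement search may need many trials.

```python
def _h(x, d, m):
    # Family of hash functions indexed by a displacement value d.
    return hash((d, x)) % m

def build_perfect_hash(keys):
    """Hash-and-displace construction of a minimal perfect hash table:
    bucket the keys by a first-level hash, then, largest buckets first,
    search for a per-bucket displacement d that maps the whole bucket
    into currently free slots without collision."""
    m = len(keys)
    buckets = [[] for _ in range(m)]
    for k in keys:
        buckets[_h(k, 0, m)].append(k)
    disp = [0] * m
    table = [None] * m
    for bi in sorted(range(m), key=lambda i: -len(buckets[i])):
        bucket = buckets[bi]
        if not bucket:
            continue
        d = 1
        while True:
            slots = [_h(k, d, m) for k in bucket]
            if len(set(slots)) == len(bucket) and all(table[s] is None for s in slots):
                break
            d += 1
        disp[bi] = d
        for k, s in zip(bucket, slots):
            table[s] = k
    return disp, table

def contains(disp, table, key):
    """Membership query: exactly two hash evaluations, no collision chains."""
    m = len(table)
    return table[_h(key, disp[_h(key, 0, m)], m)] == key
```

Because every lookup touches exactly one table slot with no collision resolution, the state transition table can be stored densely, which is the source of the memory reduction the abstract reports.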

222. Decompression-free inspection: DPI for shared dictionary compression over HTTP.

【Paper Link】 【Pages】:1987-1995

【Authors】: Anat Bremler-Barr ; Shimrit Tzur-David ; David Hay ; Yaron Koral

【Abstract】: Deep Packet Inspection (DPI) is the most time- and resource-consuming procedure in contemporary security tools such as Network Intrusion Detection/Prevention Systems (NIDS/IPS), Web Application Firewalls (WAF), or Content Filtering Proxies. DPI consists of inspecting both the packet header and payload and alerting when signatures of malicious software appear in the traffic. These signatures are identified through pattern matching algorithms. The portion of compressed traffic in overall Internet traffic is constantly increasing. This paper focuses on traffic compressed using a shared dictionary. Unlike traditional compression algorithms, this compression method takes advantage of inter-response redundancy (i.e., almost the same data being sent over and over again), as in today's dynamic data. Shared Dictionary Compression over HTTP (SDCH), introduced by Google in 2008, is the first algorithm of this type. SDCH works well with other compression algorithms (such as Gzip), making it even more appealing. Performing DPI on any compressed traffic is considered hard; therefore, today's security tools either do not inspect compressed data, alter HTTP headers to avoid compression, or decompress the traffic before inspecting it. We present a novel pattern matching algorithm that inspects SDCH-compressed traffic without decompressing it first. Our algorithm relies on offline inspection of the shared dictionary, which is common to all compressed traffic, and marking auxiliary information on it to speed up the online DPI inspection. We show that our algorithm works at nearly the rate of the compressed traffic, implying a speed gain of SDCH's compression ratio (which is around 40%). We also discuss how to deal with SDCH compression over Gzip compression, and show how to perform regular expression matching with about the same speed gain.

【Keywords】: Internet; computer network security; data compression; pattern matching; transport protocols; Gzip compression; HTTP; Internet traffic; SDCH-compressed traffic inspection; Web application firewall; compression algorithm; content filtering proxy; decompression-free inspection; deep packet inspection; hypertext transfer protocol; inter-response redundancy; malicious software; network intrusion detection system; network intrusion prevention system; packet header; pattern matching algorithm; payload; regular expression matching; security tool; shared dictionary compression; Automata; Dictionaries; Doped fiber amplifiers; Google; Pattern matching; Security; Servers

223. L2P2: Location-aware location privacy protection for location-based services.

【Paper Link】 【Pages】:1996-2004

【Authors】: Yu Wang ; Dingbang Xu ; Xiao He ; Chao Zhang ; Fan Li ; Bin Xu

【Abstract】: Location privacy has been a serious concern for mobile users who use location-based services provided by the third-party provider via mobile networks. Recently, there have been tremendous efforts on developing new anonymity or obfuscation techniques to protect location privacy of mobile users. Though effective in certain scenarios, these existing techniques usually assume that a user has a constant privacy requirement along spatial and/or temporal dimensions, which may not be true in real-life scenarios. In this paper, we introduce a new location privacy problem: Location-aware Location Privacy Protection (L2P2) problem, where users can define dynamic and diverse privacy requirements for different locations. The goal of the L2P2 problem is to find the smallest cloaking area for each location request so that diverse privacy requirements over spatial and/or temporal dimensions are satisfied for each user. In this paper, we formalize two versions of the L2P2 problem, and propose several efficient heuristics to provide such location-aware location privacy protection for mobile users. Through multiple simulations on a large data set of trajectories for one thousand mobile users, we confirm the effectiveness and efficiency of the proposed L2P2 algorithms.

【Keywords】: data privacy; mobile computing; L2P2 problem; anonymity techniques; location-aware location privacy protection problem; location-based services; mobile networks; mobile users; obfuscation techniques; third-party provider; Data privacy; Educational institutions; Entropy; Measurement; Mobile communication; Privacy; Servers; Location privacy; cloaking; k-anonymity; location based service; mobile networks
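
The core primitive of finding the smallest cloaking region that still covers enough users can be sketched directly. The square-region choice, names, and parameters below are our own illustrative assumptions, not the paper's L2P2 heuristics, which additionally let the anonymity level k vary per location and time.

```python
def smallest_cloak(users, center, k):
    """Return the smallest square cloaking region, centered on the
    requested location, that covers at least k users (k-anonymity)."""
    # Chebyshev distance of each user from the requested location:
    # the half-width of the smallest centered square containing them.
    dists = sorted(max(abs(x - center[0]), abs(y - center[1])) for x, y in users)
    if k > len(dists):
        raise ValueError("not enough users to satisfy the anonymity level k")
    r = dists[k - 1]            # half-width covering the k nearest users
    return r, (2 * r) ** 2      # (half-width, cloak area)
```

A location-aware variant simply looks up a per-location k before calling this routine, so sensitive places get larger cloaks; minimizing the area, as here, preserves service quality for the location-based query.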

224. Traffic anomaly detection based on the IP size distribution.

【Paper Link】 【Pages】:2005-2013

【Authors】: Fabio Soldo ; Ahmed Metwally

【Abstract】: In this paper we present a data-driven framework for detecting machine-generated traffic based on the IP size, i.e., the number of users sharing the same source IP. Our main observation is that diverse machine-generated traffic attacks share a common characteristic: they induce an anomalous deviation from the expected IP size distribution. We develop a principled framework that automatically detects and classifies these deviations using statistical tests and ensemble learning. We evaluate our approach on a massive dataset collected at Google for 90 consecutive days. We argue that our approach combines desirable characteristics: it can accurately detect fraudulent machine-generated traffic; it is based on a fundamental characteristic of these attacks and is thus robust (e.g., to DHCP re-assignment) and hard to evade; it has low complexity and is easy to parallelize, making it suitable for large-scale detection; and finally, it does not entail profiling users, but leverages only aggregate statistics of network traffic.

【Keywords】: IP networks; learning (artificial intelligence); statistical testing; telecommunication security; telecommunication traffic; DHCP reassignment; Google; IP size distribution; anomalous deviation; data-driven framework; ensemble learning; fraudulent machine-generated traffic attack detection; network traffic; statistical tests; traffic anomaly detection; Advertising; Aggregates; Bismuth; Electronic mail; Google; Histograms; IP networks
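
The core detection idea, flagging deviations of the observed IP-size histogram from an expected baseline, can be sketched with a Pearson-style statistic. Function names, the baseline distribution, and the threshold are our own illustrative choices; the paper calibrates its statistical tests and combines them with ensemble learning.

```python
from collections import Counter

def ip_size_deviation(observed_sizes, expected_dist):
    """Chi-square-style statistic comparing the observed IP-size histogram
    (number of users behind each source IP) with an expected baseline
    distribution; machine-generated traffic inflates this statistic."""
    n = len(observed_sizes)
    counts = Counter(observed_sizes)
    stat = 0.0
    for size, p in expected_dist.items():
        exp = n * p
        if exp > 0:
            stat += (counts.get(size, 0) - exp) ** 2 / exp
    return stat

def is_anomalous(observed_sizes, expected_dist, threshold):
    # Flag traffic whose IP-size distribution deviates too far from baseline.
    return ip_size_deviation(observed_sizes, expected_dist) > threshold
```

Because the statistic only needs aggregate per-IP counts, this style of check avoids profiling individual users and parallelizes trivially across traffic slices, matching the scalability claims in the abstract.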

225. Cooperative relay with interference alignment for video over cognitive radio networks.

【Paper Link】 【Pages】:2014-2022

【Authors】: Donglin Hu ; Shiwen Mao

【Abstract】: Due to the drastic increase in wireless video traffic, the capacity of existing and future wireless networks will be greatly stressed, while interference will become the dominant capacity-limiting factor. In this paper, we investigate cooperative relay in CR networks using video as a reference application. We incorporate interference alignment to allow transmitters to collaboratively send encoded signals to all CR users, such that undesired signals are canceled and the desired signal can be decoded at each CR user. We present a stochastic programming formulation, as well as a reformulation that greatly reduces computational complexity. In the cases of a single licensed channel and multiple licensed channels with channel bonding, we develop an optimal distributed algorithm with proven convergence and convergence speed. In the case of multiple channels without channel bonding, we develop a greedy algorithm with a proven performance bound. The algorithms are evaluated with simulations and are shown to achieve considerable gains over two heuristic schemes that do not consider interference alignment.

【Keywords】: cognitive radio; cooperative communication; greedy algorithms; stochastic processes; telecommunication channels; telecommunication traffic; video communication; channel bonding; computational complexity; cooperative relay; greedy algorithm; interference alignment; multiple licensed channels; single licensed channel; stochastic programming formulation; video over cognitive radio networks; wireless video traffic; Bonding; Interference; Relays; Sensors; Streaming media; Transmitters; Vectors

226. Spectrum management and power allocation in MIMO cognitive networks.

【Paper Link】 【Pages】:2023-2031

【Authors】: Diep N. Nguyen ; Marwan Krunz

【Abstract】: We consider the problem of maximizing the throughput of a multi-input multi-output (MIMO) cognitive radio (CR) network. CR users are assumed to share the available spectrum without disturbing primary radio (PR) transmissions. With spatial multiplexing performed over each frequency band, a multi-antenna CR node controls its antenna radiation patterns and allocates power for each data stream by appropriately adjusting its precoding matrix. Our objective is to design a set of precoding matrices (one for each band) at each CR node so that power and spectrum are optimally allocated for that node (in terms of throughput) and its interference is steered away from other CR and PR transmissions. In other words, the problems of power, spectrum and interference management are jointly investigated. We formulate a multi-carrier MIMO network throughput optimization problem subject to frequency-dependent power constraints. The problem is non-convex, with the number of variables growing quadratically with the number of antenna elements. Such a problem is difficult to solve, even in a centralized manner. To tackle it, we translate it into a noncooperative game and derive an optimal pricing policy for each node, which adapts to the node's neighboring conditions and drives the game to a Nash-Equilibrium (NE). The network throughput under this NE is at least equal to that of a locally optimal solution of the non-convex centralized problem. To find the set of precoding matrices at each node (the best response), a low-complexity distributed algorithm is developed by exploiting the strong duality of the per-user convex optimization problem. The number of variables in the distributed algorithm is independent of the number of antenna elements. A centralized (cooperative) algorithm is also developed, serving as a performance benchmark. Simulations show that the network throughput under the distributed algorithm converges rapidly to that of the centralized one. 
The fast convergence of the game facilitates MAC design, which we briefly discuss in the paper. The application of our results is not limited to CR systems, but extends to multi-carrier (e.g., OFDM) MIMO systems.

【Keywords】: MIMO communication; access protocols; antenna arrays; antenna radiation patterns; cognitive radio; concave programming; convex programming; distributed algorithms; game theory; matrix algebra; radiofrequency interference; telecommunication network management; CR node; CR transmissions; CR users; MIMO cognitive networks; NE; Nash equilibrium; PR transmissions; antenna elements; antenna radiation patterns; centralized algorithm; data stream; frequency-dependent power constraints; game facilitates MAC design; interference management; low-complexity distributed algorithm; multiantenna CR node controls; multicarrier MIMO network; multiinput multioutput cognitive radio network; nonconvex centralized problem; noncooperative game; optimal pricing policy; per-user convex optimization problem; power allocation; precoding matrix; primary radio transmissions; spectrum management; Games; Interference; MIMO; Optimization; Pricing; Resource management; Throughput; MAC protocol; MIMO; Noncooperative game; beamforming; cognitive radio; frequency management; power allocation; pricing

227. Robust topology control in multi-hop cognitive radio networks.

【Paper Link】 【Pages】:2032-2040

【Authors】: Jing Zhao ; Guohong Cao

【Abstract】: The opening of under-utilized spectrum creates the opportunity of substantial performance improvement through cognitive radio techniques. However, the real network performance may be limited since unlicensed users must vacate and switch to other available spectrum if the current spectrum is reclaimed by the licensed (primary) users. During the spectrum switching time, network partitions may occur since multiple links may be affected if they all operate on the channel reclaimed by the primary users. In this paper, we address this problem through robust topology control, where channels are assigned to minimize channel interference while maintaining network connectivity when primary users appear. To solve this NP-hard problem, we propose both centralized and distributed algorithms. Simulation results show that our solutions outperform existing interference-aware approaches substantially when primary users appear and achieve similar performance at other times.

【Keywords】: cognitive radio; computational complexity; interference suppression; optimisation; radiofrequency interference; robust control; telecommunication control; telecommunication network topology; NP-hard problem; centralized algorithm; channel interference minimization; distributed algorithm; interference-aware approaches; licensed users; multihop cognitive radio networks; network connectivity; network partitions; primary users; robust topology control; spectrum switching time; under-utilized spectrum; unlicensed users; Algorithm design and analysis; Cognitive radio; Interference; Network topology; Robustness; Switches; Topology

228. Spectrum trading with insurance in cognitive radio networks.

【Paper Link】 【Pages】:2041-2049

【Authors】: Haiming Jin ; Gaofei Sun ; Xinbing Wang ; Qian Zhang

【Abstract】: Market based spectrum trading has been extensively studied as a means of realizing efficient spectrum utilization in cognitive radio networks (CRNs). In this paper, we utilize the concept of insurance in spectrum trading to improve spectrum efficiency in CRNs. We show that by additionally purchasing a specifically designed insurance contract from a primary user (PU), a secondary user (SU) can improve its utility, since it is insured against the potential accident, i.e., transmission failure incurred by excessively low SINR. Insurance therefore gives SUs more incentive to purchase PUs' channels, and spectrum utilization in CRNs can be improved. We model the original spectrum market, comprising multiple PUs and multiple SUs, as a hybrid market consisting of a spectrum market and an insurance market. In this hybrid market, PUs serve as spectrum sellers as well as insurers, and SUs act as spectrum buyers as well as insureds. We further model the hybrid market game as a four-stage Bayesian game between PUs and SUs, and characterize the second-best Pareto optimal (SBPO) market allocations and the players' perfect Bayesian equilibrium (PBE) strategies. Furthermore, through extensive simulation, we demonstrate that at the PBE, high-risk and low-risk SUs experience utility improvements of approximately 23.5% and 4.6%, respectively.

【Keywords】: Bayes methods; cognitive radio; game theory; insurance; radio spectrum management; Bayesian equilibrium strategy; SBPO; cognitive radio networks; four stage Bayesian game; hybrid market; insurance contract; insurance market; market allocations; market based spectrum trading; primary user; second best Pareto optimal; secondary user; spectrum buyers; spectrum efficiency; spectrum market; spectrum sellers; spectrum utilization; transmission failure; Bayesian methods; Contracts; Games; Insurance; Interference; Resource management; Transmitters

229. A conservation-law-based modular fluid-flow model for network congestion modeling.

【Paper Link】 【Pages】:2050-2058

【Authors】: Corentin Briat ; Emre A. Yavuz ; Gunnar Karlsson

【Abstract】: A modular fluid-flow model for network congestion analysis and control is proposed. The model is derived from an information conservation law stating that the information is either in transit, lost or received. Mathematical models of network elements such as queues, users, and transmission channels, and network description variables, including sending/acknowledgement rates and delays, are inferred from this law and obtained by applying this principle locally. The modularity of the devised model makes it sufficiently generic to describe any network topology, and appealing for building simulators. Previous models in the literature are often not capable of capturing the transient behavior of the network precisely, making the resulting analysis inaccurate in practice. Those models can be recovered from exact reduction or approximation of this new model. An important aspect of this particular modeling approach is the introduction of new tight building blocks that implement mechanisms ignored by the existing ones, notably at the queue and user levels. Comparisons with packet-level simulations corroborate the proposed model.

【Keywords】: mathematical analysis; telecommunication congestion control; telecommunication network topology; building simulator; delay; information conservation-law-based modular fluid-flow model; mathematical model; network congestion control; network congestion modeling; network description variable; network topology; network transient behavior; new tight building block introduction; packet-level simulation; sending-acknowledgement rate; transmission channel; Clocks; Computational modeling; Congestion control modeling; Fluid-flow models; Queueing model; Self-clocking
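
The conservation-law idea above (information is either in transit, lost, or received) can be illustrated with a minimal discrete-time fluid queue. This is a simplified sketch, not the paper's actual model; the rates, buffer size, and time step below are arbitrary illustrative choices:

```python
# Hedged sketch of an information conservation law: every unit of fluid
# injected by the source is, at any time, either still in transit (queued),
# already received, or lost to buffer overflow.
def simulate(arrival_rate, service_rate, buffer_size, steps, dt=0.01):
    q = 0.0          # fluid in transit (queued at the bottleneck)
    received = 0.0   # fluid delivered by the server
    lost = 0.0       # fluid dropped on buffer overflow
    sent = 0.0       # total fluid injected by the source
    for _ in range(steps):
        inflow = arrival_rate * dt
        sent += inflow
        served = min(q + inflow, service_rate * dt)
        q = q + inflow - served
        received += served
        if q > buffer_size:          # overflow: excess fluid is lost
            lost += q - buffer_size
            q = buffer_size
    return sent, q, received, lost

# Overloaded bottleneck: arrivals at rate 2.0 against service rate 1.0.
sent, q, received, lost = simulate(2.0, 1.0, 5.0, 10_000)
assert abs(sent - (q + received + lost)) < 1e-6  # conservation law holds
```

Each update moves fluid between the three states without creating or destroying any, which is exactly what makes such building blocks composable across a topology.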

230. Impact of jitter-based techniques on flooding over wireless ad hoc networks: Model and analysis.

【Paper Link】 【Pages】:2059-2067

【Authors】: Juan Antonio Cordero ; Philippe Jacquet ; Emmanuel Baccelli

【Abstract】: Jitter is used in wireless ad hoc networks to reduce the number of packet collisions and the number of transmissions. This is done by scheduling a random back-off for each packet to be transmitted and by piggybacking multiple packets in a single transmission. This technique has been standardized by the IETF in RFC 5148. This paper investigates the impact of the standardized jitter mechanism on network-wide packet dissemination - i.e. flooding, an important component of many protocols used today. A novel analytical model is introduced, capturing standard jitter traits. From this model, an accurate characterization of the effects of jittering on flooding performance is derived, including the additional delay for flooded packets on each traversed network interface, the reduction of the number of transmissions over each network interface, and the increased length of transmissions, depending on jitter parameters. This paper also presents an analysis of the use of jitter in practice, over an 802.11 wireless link layer based on CSMA. The analytical results are then validated via statistical discrete-event simulations. The paper thus provides a comprehensive overview of the impact of jittering in wireless ad hoc networks.

【Keywords】: IEEE standards; ad hoc networks; carrier sense multiple access; jitter; packet radio networks; protocols; radio networks; scheduling; wireless LAN; 802.11 wireless link layer; CSMA; IETF; RFC 5148; analytical model; jitter-based techniques; network-wide packet dissemination; packet collisions; protocols; scheduling; wireless ad hoc networks; Ad hoc networks; Analytical models; Context; Delay; Jitter; Random variables; Wireless communication

231. Effect of access probabilities on the delay performance of Q-CSMA algorithms.

【Paper Link】 【Pages】:2068-2076

【Authors】: Javad Ghaderi ; R. Srikant

【Abstract】: It has been recently shown that queue-based CSMA algorithms can be throughput optimal. In these algorithms, each link of the wireless network has two parameters: a transmission probability and an access probability. The transmission probability of each link is chosen as an appropriate function of its queue length; the access probabilities, however, are simply regarded as some random numbers, since they do not play any role in establishing network stability. In this paper, we show that the access probabilities control the mixing time of the CSMA Markov chain and, as a result, affect the delay performance of the CSMA. In particular, we derive formulas that relate the mixing time to the access probabilities and use these to develop the following guideline for choosing access probabilities: each link i should choose its access probability equal to 1/(di + 1), where di is the number of links which interfere with link i. Simulation results show that this choice of access probabilities results in good delay performance.

【Keywords】: Markov processes; carrier sense multiple access; probability; queueing theory; radio networks; CSMA Markov chain; Q-CSMA algorithms; access probabilities; delay performance; network stability; queue-based CSMA algorithms; transmission probability; wireless network; Delay; Eigenvalues and eigenfunctions; Heuristic algorithms; Markov processes; Multiaccess communication; Schedules; Throughput
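
The access-probability guideline stated in the abstract can be sketched directly. The conflict-graph representation below is an illustrative assumption, not the paper's implementation:

```python
# Sketch of the guideline "link i uses access probability 1/(d_i + 1)",
# where d_i is the number of links that interfere with link i, i.e. its
# degree in the conflict graph.
def access_probabilities(conflict_graph):
    """conflict_graph: dict mapping link id -> set of interfering link ids."""
    return {link: 1.0 / (len(neighbors) + 1)
            for link, neighbors in conflict_graph.items()}

# Toy conflict graph: three links in a path, adjacent links interfere.
g = {1: {2}, 2: {1, 3}, 3: {2}}
probs = access_probabilities(g)
# link 2 interferes with two links, so its access probability is 1/3;
# links 1 and 3 each interfere with one link, so theirs is 1/2
```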

232. Stochastic analysis of horizontal IP scanning.

【Paper Link】 【Pages】:2077-2085

【Authors】: Derek Leonard ; Zhongmei Yao ; Xiaoming Wang ; Dmitri Loguinov

【Abstract】: Intrusion Detection Systems (IDS) have become ubiquitous in the defense against virus outbreaks, malicious exploits of OS vulnerabilities, and botnet proliferation. As attackers frequently rely on host scanning for reconnaissance leading to penetration, IDS is often tasked with detecting scans and preventing them. However, it is currently unknown how likely an IDS is to detect a given Internet-wide scan pattern and whether there exist sufficiently fast scan techniques that can remain virtually undetectable at large scale. To address these questions, we propose a simple analytical model for the window-expiration rules of popular IDS tools (i.e., Snort and Bro) and utilize a variation of the Chen-Stein theorem to derive the probability that they detect some of the commonly used scan permutations. Using this analysis, we also prove the existence of stealth-optimal scan patterns, examine their performance, and contrast it with that of well-known techniques.

【Keywords】: IP networks; Internet; computer network security; computer viruses; operating systems (computers); probability; stochastic processes; ubiquitous computing; Bro; Chen-Stein theorem; IDS tools; Internet-wide scan pattern; OS vulnerability; Snort; botnet proliferation; fast scan techniques; horizontal IP scanning; host scanning; intrusion detection systems; malicious exploits; probability; scan detection; scan permutations; scan prevention; stealth-optimal scan patterns; stochastic analysis; ubiquitous; virtually undetectable; virus outbreaks; window-expiration rules; Accuracy; Analytical models; Delay; Grippers; IP networks; Internet; Probes
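
A window-expiration rule of the kind modeled here can be sketched as follows. This is a simplified illustration in the spirit of Snort/Bro scan detection, with hypothetical `threshold` and `window` parameters; it is not either tool's actual rule set:

```python
from collections import deque

# Simplified window-expiration scan detector (illustrative only): flag a
# source once it has probed at least `threshold` distinct destinations
# within a sliding window of `window` seconds; older probes expire.
class ScanDetector:
    def __init__(self, threshold=5, window=60.0):
        self.threshold = threshold
        self.window = window
        self.events = {}  # source -> deque of (timestamp, destination)

    def observe(self, t, src, dst):
        q = self.events.setdefault(src, deque())
        q.append((t, dst))
        while q and q[0][0] < t - self.window:  # expire old probes
            q.popleft()
        return len({d for _, d in q}) >= self.threshold

det = ScanDetector(threshold=3, window=10.0)
hits = [det.observe(t, "attacker", f"10.0.0.{t}") for t in range(5)]
# the third distinct destination inside the window triggers detection:
# hits == [False, False, True, True, True]
```

A stealthy scanner, conversely, would pace its probes so that earlier ones expire from the window before the threshold is ever reached.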

233. CONSEL: Connectivity-based segmentation in large-scale 2D/3D sensor networks.

【Paper Link】 【Pages】:2086-2094

【Authors】: Hongbo Jiang ; Tianlong Yu ; Chen Tian ; Guang Tan ; Chonggang Wang

【Abstract】: A cardinal prerequisite for the system design of a sensor network is to understand the geometric environment where sensor nodes are deployed. The global topology of a large-scale sensor network is often complex and irregular, possibly containing obstacles/holes. A convex network partition, so-called segmentation, divides a network into convex regions, such that traditional algorithms designed for a simple geometric region can be applied. Existing segmentation algorithms depend heavily on concave node detection on the boundary or sink extraction on the medial axis, and their performance is thus quite sensitive to boundary noise. Worse, since they exploit the network's 2D geometric properties, either explicitly or implicitly, there has so far been no general 3D segmentation solution. In this paper, we bring a new view to segmentation from a Morse function perspective, bridging the convex regions and the Reeb graph of a network. Accordingly, we propose a novel distributed and scalable algorithm, named CONSEL, for CONnectivity-based SEgmentation in Large-scale 2D/3D sensor networks. Specifically, several boundary nodes first perform flooding to construct the Reeb graph. The ordinary nodes then compute mutex pairs locally, thereby generating the coarse segmentation. Neighboring regions that do not form a mutex pair are then merged. Finally, by ignoring mutex pairs that lead to only small concavity, we provide the constraints for approximately convex decomposition. CONSEL is more desirable than previous approaches: (1) it works for both 2D and 3D sensor networks; (2) it relies only on network connectivity information; (3) it guarantees a bound on the regions' deviation from convexity. Extensive simulations show that CONSEL works well in the presence of holes and shape variation, always yielding appropriate segmentation results.

【Keywords】: graph theory; sensor placement; telecommunication network topology; wireless sensor networks; 2D geometric property; 3D sensor networks; CONSEL; Morse function perspective; Reeb graph; boundary extraction; boundary noise; coarse segmentation; concave node detection; connectivity based segmentation; convex decomposition; convex network partition; convex regions; geometric environment; global topology; holes variation; large scale 2D sensor networks; mutex pairs; network connectivity information; sensor deployment; sensor nodes; shape variation; sink extraction; Algorithm design and analysis; Network topology; Partitioning algorithms; Routing; Shape; Three dimensional displays; Topology

234. On the topology of wireless sensor networks.

【Paper Link】 【Pages】:2095-2103

【Authors】: Sen Yang ; Xinbing Wang ; Luoyi Fu

【Abstract】: In this paper, we explore methods to generate optimal network topologies for wireless sensor networks (WSNs) with and without obstacles. Specifically, we investigate a dense network with n sensor nodes and m = n^b (0 < b < 1) helping nodes, and evaluate the impact of topology on its throughput capacity. For networks without obstacles, we find that uniformly distributed sensor nodes and regularly distributed helping nodes have some advantages in improving the throughput capacity. We also explore properties of networks composed of some isomorphic sub-networks. For networks with obstacles, we assume there are M = Θ(n^v) (0 < v ≤ 1) arbitrarily or randomly distributed obstacles, which block the cells they are located in, i.e., sensor nodes cannot be placed in these cells and nodes' communication cannot cross them directly. We find that the overall throughput capacity is bounded by the transmission burden in areas around these blocked cells, and we introduce a novel algorithm of complexity O(M) to generate optimal sensor node topologies for any given obstacle distribution.

【Keywords】: channel capacity; telecommunication network topology; wireless sensor networks; distributed helping nodes; distributed sensor nodes; isomorphic sub-networks; optimal network topologies; throughput capacity; wireless sensor networks topology; Bandwidth; Complexity theory; Network topology; Routing; Throughput; Topology; Wireless sensor networks

235. Exploiting constructive interference for scalable flooding in wireless networks.

【Paper Link】 【Pages】:2104-2112

【Authors】: Yin Wang ; Yuan He ; XuFei Mao ; Yunhao Liu ; Zhiyu Huang ; Xiang-Yang Li

【Abstract】: Exploiting constructive interference in wireless networks is an emerging trend, because it allows multiple senders to transmit an identical packet simultaneously. Constructive interference based flooding can realize millisecond network flooding latency and sub-microsecond time synchronization accuracy, requires no network state information, and adapts to topology changes. However, constructive interference has a precondition to function, namely, the maximum temporal displacement Δ of concurrent packet transmissions should be less than a given hardware-constrained threshold. We disclose that constructive interference based flooding suffers from a scalability problem: the packet reception performance of intermediate nodes degrades significantly as the density or the size of the network increases. We theoretically show that constructive interference based flooding has a packet reception ratio (PRR) lower bound (95.4%) in the grid topology. For a general topology, we propose the spine constructive interference based flooding (SCIF) protocol. With little overhead, SCIF floods the entire network much more reliably than Glossy [1] in high-density or large-scale networks. Extensive simulations illustrate that the PRR of SCIF remains stable above 96% as the network size grows from 400 to 4000, while the PRR of Glossy is only 26% when the size of the network is 4000. We also propose to use waveform analysis to explain the root cause of constructive interference, which is mainly examined in simulations and experiments. We further derive the closed-form PRR formula and define the interference gain factor (IGF) to quantitatively measure constructive interference.

【Keywords】: radio networks; radiofrequency interference; telecommunication network reliability; telecommunication network topology; IGF; PRR lower bound; SCIF protocol; closed-form PRR formula; concurrent packet transmissions; constructive interference; constructive interference based flooding; grid topology; identical packet; interference gain factor; intermediate nodes; large-scale networks; millisecond network flooding latency; packet ratio reception lower bound; spine constructive interference based flooding protocol; submicrosecond time synchronization accuracy; temporal displacement; wireless networks

236. Distributed data collection and its capacity in asynchronous wireless sensor networks.

【Paper Link】 【Pages】:2113-2121

【Authors】: Shouling Ji ; Zhipeng Cai

【Abstract】: Most of the existing works studying the data collection capacity issue share an ideal assumption that the network time is slotted and the entire network is strictly synchronized, explicitly or implicitly. Such an assumption mainly suits centralized synchronous WSNs. However, WSNs are more likely to be distributed asynchronous systems. Thus, in this paper, we investigate the achievable data collection capacity of realistic distributed asynchronous WSNs. To the best of our knowledge, this is the first work to study the data collection capacity issue for distributed asynchronous WSNs. Our main contributions are threefold. First, to avoid data transmission collisions/interference, we derive an R0-Proper Carrier-sensing Range (R0-PCR) under the generalized physical interference model for the nodes in a data collection WSN, where R0 is the satisfied threshold data receiving rate. Taking R0-PCR as its carrier-sensing range, any node can initiate a data transmission with a guaranteed data receiving rate. Second, based on the obtained R0-PCR, we propose a Distributed Data Collection (DDC) algorithm with fairness consideration for asynchronous WSNs. Theoretical analysis of DDC surprisingly shows that its asymptotic achievable network capacity is ℂ = Ω(1/(26βκ+1)·W), where βκ+1 is a constant that depends on R0 and W is the bandwidth of a wireless communication channel; this capacity is order optimal and independent of the network size, so DDC is scalable. Finally, we conduct extensive simulations to validate the performance of DDC. Simulation results demonstrate that DDC can achieve data collection capacity comparable to that of the most recently published centralized and synchronized data collection algorithm.

【Keywords】: radiofrequency interference; wireless channels; wireless sensor networks; DDC algorithm; R0-proper carrier-sensing range; asynchronous wireless sensor networks; centralized synchronous WSN; data transmission collision-interference; distributed asynchronous WSN; distributed data collection; generalized physical interference model; network capacity; threshold data receiving rate; wireless communication channel; Data communication; Data models; Distributed databases; Interference; Unicast; Wireless networks; Wireless sensor networks

237. iBGP deceptions: More sessions, fewer routes.

【Paper Link】 【Pages】:2122-2130

【Authors】: Stefano Vissicchio ; Luca Cittadini ; Laurent Vanbever ; Olivier Bonaventure

【Abstract】: Internal BGP (iBGP) is used to distribute interdomain routes within a single ISP. The interaction between iBGP and the underlying IGP can lead to routing and forwarding anomalies. For this reason, several research contributions aimed at defining sufficient conditions to guarantee anomaly-free configurations and providing design guidelines for network operators. In this paper, we show several anomalies caused by defective dissemination of routes in iBGP. We define the dissemination correctness property, which models the ability of routers to learn at least one route to each destination. By distinguishing between dissemination correctness and existing correctness properties, we show counterexamples that invalidate some results in the literature. Further, we prove that deciding whether an iBGP configuration is dissemination correct is computationally intractable. Even worse, determining whether the addition of a single iBGP session can adversely affect dissemination correctness of an iBGP configuration is also computationally intractable. Finally, we provide sufficient conditions that ensure dissemination correctness, and we leverage them to both formulate design guidelines and revisit prior results.

【Keywords】: Internet; internetworking; routing protocols; telecommunication network topology; telecommunication signalling; telecommunication traffic; Border Gateway Protocol; ISP; Internet Service Provider; anomaly-free configuration; defective route dissemination; design guidelines; dissemination correctness property; forwarding anomaly; forwarding correctness; forwarding loops; iBGP deception; iBGP session; interdomain route distribution; internal BGP; network operation; router learning ability; routing anomaly; signalling; traffic blackholes; Guidelines; Measurement; Network topology; Oscillators; Routing; Routing protocols; Topology

238. Loop mitigation in bloom filter based multicast: A destination-oriented approach.

【Paper Link】 【Pages】:2131-2139

【Authors】: Xiaohua Tian ; Yu Cheng

【Abstract】: Recently, several Bloom filter based multicast schemes have been proposed, in which multicast routing information is carried in an in-packet Bloom filter. Since routers need not maintain forwarding state on a per-group basis, Bloom filter based multicast protocols have desirable scalability. However, a critical issue is that these schemes may incur forwarding loops due to the false positives inherent in the Bloom filter. Existing solutions can only conditionally mitigate the probability of a forwarding loop, instead of fully preventing such events, which (once they occur) cause severe damage to the network. In this paper, we resolve this issue in the context of a destination-oriented multicast (DOM) scheme, a Bloom filter based multicast protocol carrying destination IP addresses in the in-packet Bloom filter. Through a theoretical analysis of the loop issue in the DOM context, we reveal that the DOM design natively supports automatic elimination of permanent forwarding loops in all cases except a subtle one termed conservation of bits. Based on this conclusion, we derive a probability upper bound on loop occurrence in DOM. Furthermore, we propose an accurate tree branch pruning scheme, which equips DOM with the capability to completely and efficiently remove false-positive forwarding loops. We present simulation results over a practical topology to demonstrate the performance of the loop-mitigating DOM, in comparison with a representative Bloom filter based multicast scheme, FRM, and traditional IP multicast.

【Keywords】: IP networks; filtering theory; multicast protocols; probability; routing protocols; telecommunication network reliability; telecommunication network topology; DOM scheme; FRM; IP address; IP multicast; destination-oriented multicast scheme; false-positive forwarding loop; in-packet Bloom filter; loop mitigation; multicast protocol; multicast routing information scheme; probability mitigation; probability upper bound; tree branch pruning scheme; Context; Logic gates
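
The false-positive mechanism behind such forwarding loops can be illustrated with a minimal in-packet Bloom filter. The filter size, hash construction, and link-id encoding below are illustrative assumptions, not part of the DOM protocol:

```python
import hashlib

# Minimal in-packet Bloom filter sketch: the packet carries a bit array
# encoding the set of link ids on the multicast tree; a router forwards
# on every link whose bits are all set.  A false positive matches a link
# that was never inserted -- the root cause of forwarding loops.
M, K = 64, 3  # filter size in bits, number of hash positions (arbitrary)

def bits(link_id):
    h = hashlib.sha256(link_id.encode()).digest()
    return [h[i] % M for i in range(K)]

def insert(filt, link_id):
    for b in bits(link_id):
        filt |= 1 << b
    return filt

def matches(filt, link_id):
    return all(filt >> b & 1 for b in bits(link_id))

filt = 0
for link in ["A->B", "A->C"]:          # links on the multicast tree
    filt = insert(filt, link)
assert matches(filt, "A->B") and matches(filt, "A->C")  # true positives
# matches(filt, "A->D") may be True or False: a false positive here would
# forward the packet onto a link outside the tree.
```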

239. Reputation-based incentive protocols in crowdsourcing applications.

【Paper Link】 【Pages】:2140-2148

【Authors】: Yu Zhang ; Mihaela van der Schaar

【Abstract】: Crowdsourcing websites (e.g. Yahoo! Answers, Amazon Mechanical Turk, etc.) have emerged in recent years, allowing requesters from all around the world to post tasks and seek help from an equally global pool of workers. However, intrinsic incentive problems reside in crowdsourcing applications, as workers and requesters are selfish and aim to strategically maximize their own benefit. In this paper, we propose to provide incentives for workers to exert effort using a novel game-theoretic model based on repeated games. As there is always a gap in social welfare between the non-cooperative equilibria emerging when workers pursue their self-interests and the desirable Pareto efficient outcome, we propose a novel class of incentive protocols based on social norms that integrates reputation mechanisms into the pricing schemes currently implemented on crowdsourcing websites, in order to improve the performance of the non-cooperative equilibria emerging in such applications. We first formulate the exchanges on a crowdsourcing website as a two-sided market where requesters and workers are matched and play gift-giving games repeatedly. Subsequently, we study the protocol designer's problem of finding an optimal and sustainable (equilibrium) protocol which achieves the highest social welfare for that website. We prove that the proposed incentive protocol can make the website operate close to Pareto efficiency. Moreover, we also examine an alternative scenario, where the protocol designer aims at maximizing the revenue of the website, and evaluate the performance of the optimal protocol.

【Keywords】: Web sites; game theory; incentive schemes; outsourcing; protocols; strategic planning; Pareto efficiency; crowdsourcing Web sites; crowdsourcing applications; gift-giving games; intrinsic incentive problems; noncooperative equilibria; protocol designer problem; repeated games-based game-theoretic model; reputation mechanisms; reputation-based incentive protocols; social norms-based incentive protocols; social welfare; sustainable protocol; two-sided market; Analytical models; Educational institutions; Electrical engineering; Games; Pricing; Protocols; USA Councils

240. On frame-based scheduling for directional mmWave WPANs.

【Paper Link】 【Pages】:2149-2157

【Authors】: In Keun Son ; Shiwen Mao ; Michelle X. Gong ; Yihan Li

【Abstract】: Millimeter wave (mmWave) communications in the 60 GHz band can provide multi-gigabit rates for emerging bandwidth-intensive applications, and has thus gained considerable interest recently. In this paper, we investigate the problem of efficient scheduling in mmWave wireless personal area networks (WPAN). We develop a frame-based scheduling directional MAC protocol, termed FDMAC, to achieve the goal of leveraging collision-free concurrent transmissions to fully exploit spatial reuse in mmWave WPANs. The high efficiency of FDMAC is achieved by amortizing the scheduling overhead over multiple concurrent, back-to-back transmissions in a row. The core of FDMAC is a graph coloring-based scheduling algorithm, termed greedy coloring (GC) algorithm, that can compute near-optimal schedules with respect to the total transmission time with low complexity. The proposed FDMAC is analyzed and evaluated under various traffic models and patterns. Its superior performance is validated with extensive simulations.

【Keywords】: access protocols; frequency allocation; graph colouring; greedy algorithms; personal area networks; scheduling; FDMAC protocol; bandwidth intensive application; collision-free concurrent transmissions; directional MAC protocol; directional millimeter wave WPAN; frame based scheduling; graph coloring based scheduling algorithm; greedy coloring algorithm; millimeter wave communications; near optimal schedule; spatial reuse; wireless personal area network; Lead
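The greedy-coloring idea behind FDMAC can be illustrated with a minimal sketch. This is a hypothetical simplification of generic greedy graph coloring: each "color" is one concurrent-transmission slot in the frame, and conflicting links (sharing a node or interfering) must get different slots. The paper's GC algorithm additionally weighs traffic demands and directional-antenna constraints, which are not modeled here.

```python
def greedy_color(links, conflicts):
    """Assign each link the smallest slot not used by a conflicting link.

    links: iterable of hashable link ids, processed in the given order.
    conflicts: dict mapping a link to the set of links it conflicts with.
    Returns: dict link -> slot index (0-based).
    """
    slot = {}
    for link in links:
        used = {slot[c] for c in conflicts.get(link, ()) if c in slot}
        s = 0
        while s in used:  # smallest free color
            s += 1
        slot[link] = s
    return slot

# Three links: A conflicts with both B and C, but B and C are disjoint,
# so B and C can share a slot (spatial reuse).
schedule = greedy_color(
    ["A", "B", "C"],
    {"A": {"B", "C"}, "B": {"A"}, "C": {"A"}},
)
```

Here A gets slot 0 while B and C share slot 1, which is exactly the concurrent-transmission reuse the protocol exploits.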

241. A time-efficient information collection protocol for large-scale RFID systems.

【Paper Link】 【Pages】:2158-2166

【Authors】: Hao Yue ; Chi Zhang ; Miao Pan ; Yuguang Fang ; Shigang Chen

【Abstract】: Sensor-enabled RFID technology has generated a lot of interest from industry lately. Integrated with miniaturized sensors, RFID tags can provide not only IDs but also valuable real-time information about the state of the corresponding objects or the surrounding environment, which is beneficial to many practical applications, such as warehouse management and inventory control. In this paper, we study the problem of how to design efficient protocols to collect such sensor information from numerous tags in a large-scale RFID system with a number of readers deployed. Different from information collection in a small RFID system covered by only one reader, in the multi-reader scenario each reader has to first find out which tags are located in its interrogation region in order to read information from them. We start with two categories of warm-up solutions that are directly extended from existing information collection protocols for single-reader RFID systems, and show that none of them works well for the multi-reader information collection problem due to their inefficiency in identifying the interrogated tags. We then propose a novel solution, called the Bloom filter based Information Collection protocol (BIC). In BIC, interrogated tag identification is efficiently achieved with a distributively constructed Bloom filter, which significantly reduces the communication overhead and thus the protocol execution time. Extensive simulations show that BIC performs better than all the warm-up solutions and that its execution time is within three times the lower bound.

【Keywords】: protocols; radiofrequency identification; telecommunication control; telecommunication network management; RFID tags; bloom filter based information collection protocol; interrogated tag identification; inventory control; large-scale RFID systems; miniaturized sensors; sensor-enabled RFID technology; time-efficient information collection protocol; warehouse management; Arrays; Microwave integrated circuits; Protocols; RFID tags; Synchronization
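The core primitive, a compactly encoded membership test that lets a reader identify its interrogated tags with little communication, can be sketched with a minimal Bloom filter. This is illustrative only: BIC's distributed construction, parameter choices, and tag-side protocol are not reproduced, and the `m`/`k` values below are arbitrary.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter over an m-bit array with k hash functions.

    Membership queries may yield false positives but never false
    negatives, which is why a small filter can stand in for a long
    list of tag IDs.
    """
    def __init__(self, m, k):
        self.m, self.k, self.bits = m, k, 0
    def _positions(self, item):
        # Derive k positions by salting a single hash function.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos
    def __contains__(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

bf = BloomFilter(m=1024, k=3)
for tag_id in ["tag-17", "tag-42", "tag-99"]:
    bf.add(tag_id)
```

With 3 tags and k = 3, at most 9 of the 1024 bits are set, so the encoded filter is far smaller than transmitting the ID list itself.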

242. Novel constructions of complex orthogonal designs for space-time block codes.

【Paper Link】 【Pages】:2167-2173

【Authors】: Yuan Li ; Chen Yuan ; Haibin Kan

【Abstract】: Complex orthogonal designs (CODs) are used to construct space-time block codes in wireless transmission. A COD Oz with parameter [p, n, k] is a p × n matrix whose nonzero entries are ±zi or ±zi*, i = 1, 2, ..., k, such that Oz^H Oz = (|z1|^2 + |z2|^2 + ... + |zk|^2) I_n. In practice, n is the number of antennas, k/p the code rate, and p the decoding delay. One fundamental problem is to construct CODs that maximize k/p and minimize p when n is given. Recently, this problem was completely solved by Liang and by Adams et al.: when n = 2m or 2m - 1, the maximal possible rate is (m + 1)/(2m) and the minimum delay is C(2m, m-1) (with the only exception n ≡ 2 (mod 4), where it is 2·C(2m, m-1)). However, as the number of antennas increases, the minimum delay grows fast and negates the otherwise fast decoding. For example, when n = 14 the minimal delay for a code with maximal rate is 6006! It is therefore very important to study whether it is possible, by lowering the rate slightly, to shorten the decoding delay considerably. In this paper, we demonstrate this possibility by constructing a series of CODs with parameter [p, n, k] = [C(n, w-1) + C(n, w+1), n, C(n, w)], where 0 ≤ w ≤ n. Moreover, all optimal CODs, which achieve the maximal rate and minimal delay, are contained in our explicit-form constructions. This is the first explicit-form construction, whereas previous constructions are recursive or algorithmic.

【Keywords】: decoding; delays; orthogonal codes; space-time block codes; complex orthogonal designs; decoding delay; explicit form constructions; space time block codes; wireless transmission; Antennas; Block codes; Delay; Maximum likelihood decoding; Transceivers; Vectors
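The rate and delay figures quoted in the abstract (e.g., delay 6006 at n = 14, where n ≡ 2 (mod 4)) can be checked numerically. A small sketch using Python's `math.comb`, with helper names of our own choosing:

```python
from math import comb

def max_rate(n):
    """Maximal COD rate for n antennas (n = 2m or 2m - 1): (m + 1)/(2m)."""
    m = (n + 1) // 2
    return (m + 1) / (2 * m)

def min_delay(n):
    """Minimum decoding delay at maximal rate: C(2m, m - 1),
    doubled in the exceptional case n ≡ 2 (mod 4)."""
    m = (n + 1) // 2
    d = comb(2 * m, m - 1)
    return 2 * d if n % 4 == 2 else d

# n = 14 antennas (m = 7): rate 8/14 = 4/7, but delay 2 * C(14, 6) = 6006.
```

This confirms the paper's motivating example: the maximal-rate delay at n = 14 is indeed 2 · 3003 = 6006.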

243. Privacy-preserving RFID authentication based on cryptographical encoding.

【Paper Link】 【Pages】:2174-2182

【Authors】: Tao Li ; Wen Luo ; Zhen Mo ; Shigang Chen

【Abstract】: Radio Frequency IDentification (RFID) technology has been adopted in many applications, such as inventory control, object tracking, theft prevention, and supply chain management. Privacy-preserving authentication in RFID systems is a very important problem. Existing protocols employ tree structures to achieve fast authentication. We observe that these protocols require a tag to transmit a large amount of data in each authentication, which incurs significant bandwidth and energy overhead. Current protocols also impose heavy computational demand on the RFID reader. To address these issues, we design two privacy-preserving protocols based on a new technique called cryptographical encoding, which significantly reduces both the authentication data transmitted by each tag and the computation overhead incurred at the reader. Our analysis shows that the new protocols are able to reduce authentication data by more than an order of magnitude and computational demand by about an order of magnitude, compared with the best existing protocol.

【Keywords】: cryptographic protocols; encoding; radiofrequency identification; RFID reader; authentication data reduction; cryptographical encoding; privacy-preserving RFID authentication; privacy-preserving protocols; radiofrequency identification technology; Authentication; Cryptography; Encoding; Indexes; Materials; Protocols; Radiofrequency identification

244. Fault-tolerant RFID reader localization based on passive RFID tags.

【Paper Link】 【Pages】:2183-2191

【Authors】: Weiping Zhu ; Jiannong Cao ; Yi Xu ; Lei Yang ; Junjun Kong

【Abstract】: With the growing use of RFID-based devices, RFID reader localization has attracted increasing attention recently. In this technology, an object carrying an RFID reader is located by communicating with passive RFID tags deployed in the environment. One important problem in RFID reader localization is that frequently occurring RFID faults affect localization accuracy. Specifically, a complex localization environment (which may include metal, water, obstacles, etc.) makes some tags fail to communicate with the reader, causing the localization result to deviate from the real location. Existing approaches can tolerate faults that occur in individual tags and last for a short time, but suffer serious localization error if the faults span a large region and persist for a long time. Moreover, existing approaches do not provide a quality measurement of a localization result. In this paper, we propose an effective fault-tolerant RFID reader localization approach suitable for the above-mentioned situations, and illustrate how to measure the quality of a localization result. We have conducted extensive simulations and implemented an RFID-based localization system. In both cases, our solution outperforms existing approaches in localization accuracy and can provide additional quality information.

【Keywords】: fault tolerance; radiofrequency identification; RFID faults; fault tolerant RFID reader localization; localization accuracy; localization error; passive RFID tags; radiofrequency identification; Accuracy; Indexes; Metals; Passive RFID tags; Programming; Redundancy

245. On quantification of anchor placement.

【Paper Link】 【Pages】:2192-2200

【Authors】: Yibei Ling ; Scott Alexander ; Richard Lau

【Abstract】: This paper attempts to answer a question: for a given traversal area, how can the geometric impact of anchor placement on localization performance be quantified? We present a theoretical framework for quantifying the impact of anchor placement. An experimental study, as well as a field test using UWB ranging technology, is presented. These experimental results validate the theoretical analysis. As a byproduct, we propose a two-phase localization method (TPLM) and show that TPLM outperforms the least-square method in localization accuracy by a huge margin. TPLM performs much faster than the gradient descent method and slightly better than it in localization accuracy. Our field test suggests that TPLM is more robust against noise than the least-square and gradient descent methods.

【Keywords】: gradient methods; least squares approximations; ultra wideband technology; UWB ranging technology; anchor placement impact; geometric impact; gradient descent method; least-square method; traversal area; two-phase localization method; Accuracy; Distance measurement; Global Positioning System; Manganese; Noise; Noise measurement; Trajectory
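As context for the least-squares baseline that TPLM is compared against, here is a minimal linearized least-squares trilateration sketch. This is an assumption-laden illustration of the standard technique, not the paper's TPLM: subtracting the first range equation from the others yields a linear system in the unknown position, solved here via the normal equations.

```python
def trilaterate(anchors, dists):
    """Linearized least-squares 2-D position fix from >= 3 anchors.

    anchors: list of (x, y) anchor coordinates.
    dists: measured ranges to each anchor (same order).
    """
    (x1, y1), d1 = anchors[0], dists[0]
    rows = []
    # Subtract the first circle equation to linearize:
    # 2(xi-x1)x + 2(yi-y1)y = d1^2 - di^2 + xi^2 - x1^2 + yi^2 - y1^2
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        rows.append((2.0 * (xi - x1), 2.0 * (yi - y1),
                     d1 * d1 - di * di + xi * xi - x1 * x1 + yi * yi - y1 * y1))
    # Solve the 2x2 normal equations A^T A v = A^T r.
    a = b = c = p = q = 0.0
    for ax, ay, r in rows:
        a += ax * ax; b += ax * ay; c += ay * ay
        p += ax * r;  q += ay * r
    det = a * c - b * b
    return ((c * p - b * q) / det, (a * q - b * p) / det)

# Noise-free check: target at (1, 2) with three corner anchors.
est = trilaterate([(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)],
                  [5.0 ** 0.5, 13.0 ** 0.5, 5.0 ** 0.5])
```

With noise-free ranges the linearized solve recovers the exact position; the paper's point is how this baseline degrades under noise and anchor geometry.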

246. On distinguishing the multiple radio paths in RSS-based ranging.

【Paper Link】 【Pages】:2201-2209

【Authors】: Dian Zhang ; Yunhuai Liu ; Xiaonan Guo ; Min Gao ; Lionel M. Ni

【Abstract】: Among the various ranging techniques, Radio Signal Strength (RSS) based approaches attract intensive research interest because of their low cost and wide applicability. RSS-based ranging, however, is susceptible to the multipath phenomenon, in which radio signals reach the destination through multiple propagation paths. To address this issue, previous works profile the environment and consult this profile at run time. In practical dynamic environments, however, the profile changes frequently and painful retraining is needed. Rather than profiling the environment in such static ways, in this paper we try to accommodate the environmental dynamics automatically in real time. The key observation is that for a given pair of nodes, the RSS values on different spectrum channels differ, and this difference carries valuable phase information about the radio signals. By analyzing these RSS values, we are able to identify the amplitude of signals solely from the Line-of-Sight (LOS) path. This LOS amplitude is a simple function of the path length (the physical distance). We find that the analysis is a typical nonlinear curve-fitting problem for which no general solution algorithm exists. We prove that this formulation is ill-conditioned and admits no stable, trustworthy solution. To deal with this issue, we further explore practical considerations of the problem and reformulate it into a much better-conditioned form. We solve the problem by numerical iteration and implement these ideas in a real-time indoor tracking system called MuD. MuD employs only three TelosB nodes as anchors. The experimental results show that in a dynamic environment where five people move around, the average localization error is 1 meter. Compared with traditional RSS-based approaches in dynamic environments, the accuracy improves by up to 10 times.

【Keywords】: curve fitting; indoor communication; radio tracking; signal processing; MuD; RSS based ranging; TelosB nodes; line-of-sight path; localization error; multiple propagation path; multiple radio path; nonlinear curvature fitting problem; numerical iteration; radio signal strength based ranging; ranging technique; real-time indoor tracking system; Distance measurement; Hardware; Mathematical model; Radio propagation; Radio transmitters; Receivers; Training

247. FILA: Fine-grained indoor localization.

【Paper Link】 【Pages】:2210-2218

【Authors】: Kaishun Wu ; Jiang Xiao ; Youwen Yi ; Min Gao ; Lionel M. Ni

【Abstract】: Indoor positioning systems have received increasing attention for supporting location-based services in indoor environments. WiFi-based indoor localization has been attractive due to its open access and low cost. However, distance estimation based on the received signal strength indicator (RSSI) is easily affected by temporal and spatial variance due to the multipath effect, which contributes most of the estimation error in current systems. Eliminating this effect so as to enhance indoor localization performance is a big challenge. In this work, we analyze this effect across the physical layer and account for the undesirable RSSI readings being reported. We explore the frequency diversity of the subcarriers in OFDM systems and propose a novel approach called FILA, which leverages the channel state information (CSI) to alleviate the multipath effect at the receiver. We implement the FILA system on commercial 802.11 NICs and evaluate its performance in different typical indoor scenarios. The experimental results show that the accuracy and latency of distance calculation can be significantly enhanced by using CSI. Moreover, FILA significantly improves localization accuracy compared with the corresponding RSSI approach.

【Keywords】: OFDM modulation; indoor radio; radionavigation; wireless LAN; CSI; FILA system; OFDM systems; RSSI readings; Wi-Fi-based indoor localization; channel state information; commercial 802.11 NIC; distance estimation; fine-grained indoor localization; frequency diversity; indoor positioning systems; location-based services; multipath effect; physical layer; received signal strength indicator; spatial variance; temporal variance; Accuracy; Bandwidth; Baseband; Fading; OFDM; Receivers; Training

248. HAWK: An unmanned mini helicopter-based aerial wireless kit for localization.

【Paper Link】 【Pages】:2219-2227

【Authors】: Zhongli Liu ; Yinjie Chen ; Benyuan Liu ; Chengyu Cao ; Xinwen Fu

【Abstract】: This paper presents a fully functional and highly portable mini Unmanned Aerial Vehicle (UAV) system, HAWK, for conducting aerial localization. HAWK is a programmable mini helicopter, a Draganflyer X6, armed with a wireless sniffer, a Nokia N900. We developed custom PI-control laws to implement a robust waypoint algorithm for the mini helicopter to fly a planned route. A Moore space-filling curve is designed as a flight route for HAWK to survey a specific area. A set of theorems is derived to calculate the minimum Moore curve level for sensing all targets in the area with minimum flight distance. With such a flight strategy, we can confine the location of a target of interest to a small hot area. We can recursively apply the Moore-curve-based flight route to the hot area for fine-grained localization of a target of interest. Therefore, HAWK does not rely on a positioning infrastructure for localization. We have conducted extensive experiments to validate the feasibility of HAWK and our theory. A demo of HAWK in autonomous flight is available at http://www.youtube.com/watch?v=ju86xnHbEq0.

【Keywords】: PI control; aircraft control; autonomous aerial vehicles; control engineering computing; helicopters; mobile handsets; path planning; position control; robust control; Draganflyer X6; HAWK; Moore curve based flight route; Moore space filling curve; Nokia N900; PI-control laws; aerial localization; aerial wireless kit; fine-grained localization; flight strategy; minimum flight distance; portable mini unmanned aerial vehicle system; positioning infrastructure; programmable mini helicopter; robust waypoint algorithm; unmanned mini helicopter; wireless sniffer; Helicopters; Mobile handsets; Roads; Software; Surveillance; Wireless communication; Wireless sensor networks
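A Moore curve of order n visits every cell of a 2^n × 2^n grid and ends adjacent to its start, which is what lets a sweep of the area close into a loop. A sketch generating the waypoint sequence from the standard Moore-curve L-system (illustrative; the paper's waypoint controller and level-selection theorems are not shown):

```python
def moore_curve(order):
    """Grid points visited by an order-n Moore curve.

    Generated from the standard L-system: axiom LFL+F+LFL,
    L -> -RF+LFL+FR-, R -> +LF-RFR-FL+.
    F moves forward one cell; + / - turn left / right.
    """
    s = "LFL+F+LFL"
    rules = {"L": "-RF+LFL+FR-", "R": "+LF-RFR-FL+"}
    for _ in range(order - 1):
        s = "".join(rules.get(ch, ch) for ch in s)
    x, y, dx, dy = 0, 0, 0, 1
    pts = [(x, y)]
    for ch in s:
        if ch == "F":
            x, y = x + dx, y + dy
            pts.append((x, y))
        elif ch == "+":
            dx, dy = -dy, dx   # turn left
        elif ch == "-":
            dx, dy = dy, -dx   # turn right
    return pts

# An order-2 curve covers all 16 cells of a 4 x 4 grid and ends
# one step away from its starting cell (the loop-closing edge).
waypoints = moore_curve(2)
```

Raising the order quadruples the number of cells covered, which is the trade-off behind choosing the minimum curve level that still senses all targets.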

249. Efficient anonymous message submission.

【Paper Link】 【Pages】:2228-2236

【Authors】: Xinxin Zhao ; Lingjun Li ; Guoliang Xue ; Gabriel Silva

【Abstract】: In online surveys, many people are not willing to provide true answers due to privacy concerns. Thus, anonymity is important for online message collection. Existing solutions let each member blindly shuffle the submitted messages by using an IND-CCA2 secure cryptosystem. In the end, all messages are randomly shuffled and no one knows the message order. However, the heavy computational overhead and linear number of communication rounds make such solutions useful only for small groups. In this paper, we propose an efficient anonymous message submission protocol aimed at a practical group size. Our protocol is based on a simplified secret sharing scheme and a symmetric key cryptosystem. We propose a novel method to let all members secretly aggregate their messages into a message vector such that a member knows nothing about other members' message positions. We provide a theoretical proof showing that our protocol is anonymous under malicious attacks. We then conduct a thorough analysis of our protocol, showing that it is computationally more efficient than existing solutions and requires only a constant number of communication rounds with high probability.

【Keywords】: computational complexity; cryptographic protocols; probability; IND-CCA2 secure cryptosystem; anonymous message submission protocol; computational overhead; linear communication rounds; malicious attacks; message order; message positions; message vector; online message collection; online surveys; privacy concerns; probability; simplified secret sharing scheme; submitted messages; symmetric key cryptosystem; theoretical proof; thorough analysis; Encryption; Games; Polynomials; Protocols; Vectors
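The flavor of the scheme (messages additively secret-shared into a vector at secret slots, so that the aggregate reveals a shuffled list of messages but no slot assignments) can be sketched as a toy model. This is an assumption-laden illustration: the paper's actual protocol, its cryptographic hardening, and its communication pattern differ, and the modulus and helper names below are our own.

```python
import random

P = 2**31 - 1  # a public prime modulus (illustrative choice)

def share_vector(vec, n, rng):
    """Split vec into n additive shares mod P that sum back to vec."""
    shares = [[rng.randrange(P) for _ in vec] for _ in range(n - 1)]
    last = [(v - sum(col)) % P for v, col in zip(vec, zip(*shares))]
    return shares + [last]

def anonymous_submit(messages, rng):
    """Toy anonymous aggregation: each member places its message at a
    secret slot of a sparse vector, additively shares the vector, and
    all shares are summed; the total is the shuffled message list."""
    n = len(messages)
    slots = list(range(n))
    rng.shuffle(slots)                 # each member's slot stays private
    totals = [0] * n
    for member, msg in enumerate(messages):
        vec = [0] * n
        vec[slots[member]] = msg       # message at this member's secret slot
        for share in share_vector(vec, n, rng):
            totals = [(t + s) % P for t, s in zip(totals, share)]
    return totals

out = anonymous_submit([11, 22, 33, 44], random.Random(7))
```

Any individual share is uniformly random, so observing shares (short of all of them for one member) reveals nothing about who submitted which slot, while the summed vector still contains every message.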

250. Effective ad targeting with concealed profiles.

【Paper Link】 【Pages】:2237-2245

【Authors】: Murali S. Kodialam ; T. V. Lakshman ; Sarit Mukherjee

【Abstract】: In an ad targeting system, an advertiser specifies the profiles of the users to whom it is interested in showing an ad. The underlying ad distribution system would like to use profiles of users, if available, to match advertisers to users in an optimal manner. Availability of the needed profile information very much depends on whether users opt-in to have their profile information revealed. When some set of users opt-out of having their profile information revealed, possibly for privacy reasons, an ad distribution system needs methods to match advertisers to the right users despite the system itself not having full knowledge of the users' profiles. In this paper, we propose solutions to this problem thereby expanding the universe of users to whom ad targeting becomes feasible. Ads can be targeted to opt-in users, whose profiles are therefore known to the ad targeting system, using now known approaches. Our solution enables targeting of ads to users who have chosen to not opt-in to reveal their profiles. Such users keep their true interest profiles to themselves (locally on their equipment). Ads to be displayed are selected locally and ad scheduling is done using a guaranteed approximation online algorithm that uses only statistically falsified profile information and not the true profiles. Despite the use of statistically falsified information, accurate targeting can be done. We show both analytically and experimentally that the performance of the ad scheduler is quite close to optimal.

【Keywords】: advertising data processing; approximation theory; pattern matching; scheduling; statistical analysis; ad distribution system; ad scheduling; ad targeting system; advertiser matching; concealed profile; guaranteed approximation online algorithm; profile information; statistically falsified information; user profile specification; Approximation algorithms; Approximation methods; Availability; Distribution functions; Estimation; Privacy; Vectors

251. Data perturbation with state-dependent noise for participatory sensing.

【Paper Link】 【Pages】:2246-2254

【Authors】: Fan Zhang ; Li He ; Wenbo He ; Xue Liu

【Abstract】: The emerging participatory sensing applications rely on individuals' inputs, which may be highly correlated with individuals' sensitive information or personal data. Hence, privacy protection is crucial to encourage individual participation so that participatory sensing applications can generate trustworthy and high-quality data. A widely used technique for privacy preservation is data perturbation, which adds noise to original data at the client side to protect individuals' privacy and allows the information server to reconstruct the statistics of the original data. In this paper, we find a serious vulnerability of existing data perturbation algorithms that an adversary may exploit to restore other users' private information (e.g., the mean, variance, and distribution of the original data) from the perturbed data, because all participants share the same noise distribution. To overcome this vulnerability, we propose privacy-enhanced state-dependent perturbation (PESP), which assigns different noise distributions to individuals and varies the noise according to the state of the real data. PESP is not only able to reconstruct community statistics, but also provides better privacy protection for individuals even when adversaries acquire the perturbed data. We evaluate PESP through two participatory sensing applications: one analyzes the speed variations of vehicles on the road and the other computes weight statistics for a particular diet. The results demonstrate the efficiency of PESP in privacy preservation and in reconstructing community statistics.

【Keywords】: data privacy; statistical analysis; statistical distributions; community statistics reconstruction; data distribution; data perturbation; high quality data; mean; noise distribution; participatory sensing application; particular diet; personal data; privacy enhanced state-dependent perturbation; privacy preservation; privacy protection; sensitive information; state-dependent noise; trustworthy data; user private information; variance; vehicle speed variation; weight statistics; Communities; Data models; Data privacy; Estimation; Noise; Sensors; Servers; Data Perturbation; Participatory Sensing; Privacy

252. SmartAnalyzer: A noninvasive security threat analyzer for AMI smart grid.

【Paper Link】 【Pages】:2255-2263

【Authors】: Mohammad Ashiqur Rahman ; Padmalochan Bera ; Ehab Al-Shaer

【Abstract】: The Advanced Metering Infrastructure (AMI) is the core component of the smart grid and exhibits highly complex network configurations comprising heterogeneous cyber-physical components. These components are interconnected through different communication media, protocols, and secure tunnels, and they are operated using different data delivery modes and security policies. The inherent complexity and heterogeneity in AMI significantly increase the potential for security threats due to misconfiguration or absence of defense, which may cause devastating damage to AMI. Therefore, there is a need to create a formal model that can represent the global behavior of an AMI configuration in order to verify potential threats. In this paper, we present SmartAnalyzer, a formal security analysis tool, which offers manifold contributions: (i) formal modeling of the AMI configuration, including device configurations, topology, communication properties, interactions between the devices, data flows, and security properties; (ii) formal modeling of AMI invariants and user-driven constraints based on the interdependencies between AMI device configurations, security properties, and security control guidelines; (iii) verification of the AMI configuration's compliance with security constraints using a Satisfiability Modulo Theories (SMT) solver; (iv) generation of a comprehensive security threat report with a possible remediation plan based on the verification results. The accuracy, scalability, and usability of the tool are evaluated on a real smart grid environment and synthetic test networks.

【Keywords】: computer network security; power system analysis computing; power system measurement; smart power grids; telecommunication security; AMI device configurations; AMI smart grid; SmartAnalyzer; advanced metering infrastructure; data delivery modes; data flows; formal modeling; heterogeneous cyberphysical components; network configurations; noninvasive security threat analyzer; satisfiability modulo theory solver; secure tunnels; security control guidelines; security policies; synthetic test networks; user driven constraints; Analytical models; Authentication; Network topology; Protocols; Schedules; Smart grids

253. Resource allocation for heterogeneous multiuser OFDM-based cognitive radio networks with imperfect spectrum sensing.

【Paper Link】 【Pages】:2264-2272

【Authors】: Shaowei Wang ; Zhi-Hua Zhou ; Mengyao Ge ; Chonggang Wang

【Abstract】: In this paper we study resource allocation in OFDM-based cognitive radio (CR) networks, under consideration of many practical limitations such as imperfect spectrum sensing, limited transmission power, and different traffic demands of secondary users. We formulate this general problem as a mixed integer programming task. Considering that this optimization task is computationally intractable, we propose to address it in two steps. In the first step, we perform subchannel allocation to roughly satisfy heterogeneous users' rate requirements and remove the integer constraints of the optimization problem. In the second step, we perform power allocation among the subchannels. By exploiting the problem structure to speed up the Newton step, we propose a barrier-based method which is able to achieve the optimal power distribution with a complexity of O(N), where N is the number of active OFDM subchannels, significantly better than the O(N^3) complexity of standard techniques. Moreover, we propose a method which is able to approximate the optimal solution with constant complexity. Numerical results validate that our proposal exploits the overall capacity of CR systems well under different traffic demands of users.

【Keywords】: OFDM modulation; channel allocation; cognitive radio; integer programming; resource allocation; Newton step; barrier-based method; heterogeneous multiuser OFDM-based cognitive radio networks; imperfect spectrum sensing; mixed integer programming task; optimal power distribution; optimization task; power allocation; resource allocation; secondary users; subchannel allocation; traffic demands; transmission power; Complexity theory; Interference; OFDM; Power distribution; Resource management; Sensors; Signal to noise ratio
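The structure that makes a fast solution possible is the classic water-filling form of the optimal power allocation over parallel subchannels. A minimal sketch of plain water-filling is given below as an illustrative baseline; it does not model the paper's barrier method, sensing-error terms, or per-user rate constraints.

```python
def water_filling(gains, total_power):
    """Classic water-filling over N subchannels.

    Maximizes sum_i log(1 + g_i * p_i) subject to sum_i p_i = P, p_i >= 0.
    gains: per-subchannel SNR gains g_i.  Returns the allocation p.
    """
    inv = sorted(1.0 / g for g in gains)   # sorted inverse gains
    # Find the water level mu with sum(max(0, mu - 1/g_i)) == total_power:
    # try keeping the k best channels "above water", largest k first.
    for k in range(len(inv), 0, -1):
        mu = (total_power + sum(inv[:k])) / k
        if mu > inv[k - 1]:                # all k channels get power
            break
    return [max(0.0, mu - 1.0 / g) for g in gains]

# Three subchannels: the weakest one (gain 0.1) ends up below the
# water level and receives no power.
p = water_filling([4.0, 1.0, 0.1], 1.0)
```

The sort dominates the cost here; with the level known, the allocation itself is a single O(N) pass, which is the kind of structure an O(N)-per-iteration Newton/barrier scheme exploits.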

254. A distributed broadcast protocol in multi-hop cognitive radio ad hoc networks without a common control channel.

【Paper Link】 【Pages】:2273-2281

【Authors】: Yi Song ; Jiang Xie

【Abstract】: Broadcast is an important operation in wireless ad hoc networks, where control information is usually propagated as broadcasts for the realization of most networking protocols. In traditional ad hoc networks, since the spectrum availability is uniform, broadcasts are delivered via a common channel which can be heard by all users in a network. However, in cognitive radio (CR) ad hoc networks, different unlicensed users may acquire different available channel sets. This non-uniform spectrum availability imposes special design challenges for broadcasting in CR ad hoc networks. In this paper, a fully-distributed broadcast protocol in multi-hop CR ad hoc networks without a common control channel is proposed. In our design, we consider practical scenarios in which each unlicensed user is not assumed to be aware of the global network topology, the spectrum availability information of other users, or time synchronization information. By intelligently downsizing the original available channel set and designing the broadcasting sequences and scheduling schemes, our proposed broadcast protocol can provide a very high successful broadcast ratio while achieving the shortest average broadcast delay. It can also eliminate broadcast collisions. To the best of our knowledge, this is the first work that addresses the broadcasting challenges specific to multi-hop CR ad hoc networks under practical scenarios.

【Keywords】: ad hoc networks; broadcast communication; cognitive radio; delays; protocols; telecommunication network topology; broadcast collisions; broadcast delay; broadcast ratio; channel sets; common control channel; control information; distributed broadcast protocol; global network topology; multihop cognitive radio; nonuniform spectrum availability; spectrum availability information; time synchronization information; unlicensed user; wireless ad hoc networks; Ad hoc networks; Availability; Broadcasting; Collision avoidance; Delay; Protocols; Receivers

255. Combinatorial auction with time-frequency flexibility in cognitive radio networks.

【Paper Link】 【Pages】:2282-2290

【Authors】: Mo Dong ; Gaofei Sun ; Xinbing Wang ; Qian Zhang

【Abstract】: In this paper, we tackle the spectrum allocation problem in cognitive radio (CR) networks with time-frequency flexibility using combinatorial auction. Different from all previous works using auction mechanisms, we model the spectrum opportunity in a time-frequency division manner. This model caters to much more flexible requirements from secondary users (SUs) and has very clear application meaning. The additional flexibility also brings theoretical and computational difficulties. We model the spectrum allocation as a combinatorial auction and show that under the time-frequency flexible model, maximizing social welfare is NP-hard and the upper bound of the worst-case approximation ratio is √m, where m is the number of time-frequency slots. Therefore, we design an auction mechanism with a near-optimal winner determination algorithm whose worst-case approximation ratio reaches the upper bound √m. We further devise a truthful payment scheme under the approximate winner determination algorithm to guarantee that all bids submitted by SUs reflect their true valuation of the spectrum. To reach optimality, we simplify the general model so that only frequency flexibility is allowed, which is still useful, and propose a truthful, optimal, and computationally efficient auction mechanism under this modified model. Extensive simulation results show that all the proposed algorithms generate high social welfare as well as a high spectrum utilization ratio. Moreover, the actual approximation ratio of the near-optimal algorithm is much better than the worst-case bound.

【Keywords】: approximation theory; cognitive radio; combinatorial mathematics; radio spectrum management; cognitive radio networks; combinatorial auction; secondary users; spectrum allocation; time-frequency division; time-frequency flexibility; winner determination algorithm; worst-case approximation ratio; Algorithm design and analysis; Approximation algorithms; Approximation methods; Computational modeling; Cost accounting; Optimized production technology; Time frequency analysis
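The √m worst-case ratio matches the well-known greedy winner-determination rule for single-minded combinatorial auctions, which ranks bids by value/√(bundle size) and accepts any bid whose slots are still free. The sketch below illustrates that standard rule, not the paper's exact algorithm or payment scheme.

```python
from math import sqrt

def greedy_winners(bids):
    """Greedy winner determination for single-minded bidders.

    bids: list of (bidder, set_of_slots, value).
    Rank by value / sqrt(|bundle|); accept a bid if its whole
    slot bundle is still unallocated.  For m slots this rule is
    a classic sqrt(m)-approximation to the optimal social welfare.
    """
    order = sorted(bids, key=lambda b: b[2] / sqrt(len(b[1])), reverse=True)
    taken, winners = set(), []
    for bidder, bundle, value in order:
        if taken.isdisjoint(bundle):
            taken |= bundle
            winners.append(bidder)
    return winners

# u2's bundle overlaps both accepted winners, so it loses even
# though its standalone value is positive.
winners = greedy_winners([
    ("u1", {0, 1}, 10),
    ("u2", {1, 2}, 6),
    ("u3", {2, 3}, 7),
])
```

Pairing such a monotone allocation rule with critical-value payments is the usual route to truthfulness, which mirrors the paper's structure of an approximate allocation plus a truthful payment scheme.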

256. Almost optimal accessing of nonstochastic channels in cognitive radio networks.

【Paper Link】 【Pages】:2291-2299

【Authors】: Xiang-Yang Li ; Panlong Yang ; Yubo Yan ; Lizhao You ; Shaojie Tang ; Qiuyuan Huang

【Abstract】: We propose joint channel sensing, probing, and accessing schemes for secondary users in cognitive radio networks. Our method has time and space complexity O(N·k) for a network with N channels and k secondary users, whereas applying classic methods requires exponential time complexity. We prove that, even when channel states are selected by an adversary (and are thus non-stochastic), our method incurs a total regret uniformly upper bounded by Θ(√(TN log N)), w.h.p., when communication lasts for T timeslots. Our protocol can be implemented in a distributed manner thanks to the nonstochastic channel assumption. Our experiments show that our schemes achieve almost optimal throughput compared with an optimal static strategy, and perform significantly better than previous methods in many settings.

【Keywords】: cognitive radio; protocols; wireless channels; cognitive radio network; exponential time complexity; joint channel sensing scheme; nonstochastic channel assumption; nonstochastic channel optimal accessing scheme; optimal static strategy; protocol; secondary user; space complexity; Channel estimation; Cognitive radio; Complexity theory; Probes; Protocols; Sensors; Throughput
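The paper's joint sensing/probing/accessing scheme is more elaborate, but the flavor of regret bounds against adversarially chosen (nonstochastic) channel states can be illustrated with the classic Exp3 algorithm, which attains O(√(TN log N)) regret over N arms. This sketch is a textbook Exp3, not the paper's protocol.

```python
import math
import random

def exp3(n_arms, rewards, gamma, rng):
    """Exp3 for adversarial (nonstochastic) bandits.

    rewards[t][i] in [0, 1] is the hidden reward of arm i at round t;
    only the chosen arm's reward is observed each round.
    Returns the total reward collected.
    """
    w = [1.0] * n_arms
    total = 0.0
    for row in rewards:
        s = sum(w)
        probs = [(1 - gamma) * wi / s + gamma / n_arms for wi in w]
        # Sample an arm according to probs.
        r, acc, arm = rng.random(), 0.0, n_arms - 1
        for i, pr in enumerate(probs):
            acc += pr
            if r < acc:
                arm = i
                break
        x = row[arm]
        total += x
        # Importance-weighted exponential update for the played arm.
        w[arm] *= math.exp(gamma * x / (probs[arm] * n_arms))
    return total

# Adversary-fixed channel states where channel 0 is always good:
# Exp3 concentrates on it and collects far more than uniform play.
rewards = [[1.0, 0.0, 0.0] for _ in range(2000)]
total = exp3(3, rewards, gamma=0.1, rng=random.Random(1))
```

Uniform random channel selection would earn about 2000/3 ≈ 667 here; Exp3's weights quickly concentrate on the good channel, so its total is far higher while still keeping a γ/N exploration floor on every channel.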

257. Backpressure with Adaptive Redundancy (BWAR).

【Paper Link】 【Pages】:2300-2308

【Authors】: Majed Alresaini ; Maheswaran Sathiamoorthy ; Bhaskar Krishnamachari ; Michael J. Neely

【Abstract】: Backpressure scheduling and routing, in which packets are preferentially transmitted over links with high queue differentials, offers the promise of throughput-optimal operation for a wide range of communication networks. However, when the traffic load is low, due to the corresponding low queue occupancy, backpressure scheduling/routing experiences long delays. This is of particular concern in intermittent, encounter-based mobile networks, which are already delay-limited due to sparse and highly dynamic network connectivity. While state-of-the-art mechanisms for such networks use redundant transmissions to improve delay, they do not work well when the traffic load is high. In this paper we propose a novel hybrid approach, backpressure with adaptive redundancy (BWAR), which provides the best of both worlds. This approach is highly robust and distributed and does not require any prior knowledge of network load conditions. We evaluate BWAR through both mathematical analysis and simulations based on a cell-partitioned model. We prove theoretically that BWAR does not perform worse than traditional backpressure in terms of the maximum throughput, while yielding a better delay bound. The simulations confirm that BWAR outperforms traditional backpressure at low load, while outperforming a state-of-the-art encounter-routing scheme (Spray and Wait) at high load.

【Keywords】: cellular radio; redundancy; routing protocols; telecommunication network reliability; adaptive redundancy; backpressure routing; backpressure scheduling; cell partitioned model; high queue differential; intermittent encounter based mobile network; redundant transmissions; throughput optimal operation; Delay; Protocols; Redundancy; Routing; Scheduling; Throughput; Vectors
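
The plain backpressure rule that BWAR extends is simple to state: transmit over the link with the largest positive queue differential. The sketch below shows only that baseline, returning None in the low-occupancy regime where BWAR would instead inject redundant copies; the topology and backlogs are invented.

```python
def backpressure_link(queues, links):
    """queues: dict node -> backlog; links: list of (u, v) pairs.
    Pick the link with the largest positive queue differential
    Q[u] - Q[v]. Return None when no differential is positive --
    the low-load regime where backpressure stalls and BWAR would
    add adaptive redundancy instead."""
    best, best_w = None, 0
    for u, v in links:
        w = queues[u] - queues[v]
        if w > best_w:
            best, best_w = (u, v), w
    return best

queues = {"a": 5, "b": 2, "c": 0}
links = [("a", "b"), ("b", "c"), ("c", "a")]
# Differentials: a->b is 3, b->c is 2, c->a is -5, so a->b is chosen.
print(backpressure_link(queues, links))
```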

258. Closed-form throughput expressions for CSMA networks with collisions and hidden terminals.

【Paper Link】 【Pages】:2309-2317

【Authors】: Bruno Nardelli ; Edward W. Knightly

【Abstract】: We present a novel modeling approach to derive closed-form throughput expressions for CSMA networks with hidden terminals. The key modeling principle is to break the interdependence of events in a wireless network using conditional expressions that each capture the effect of a specific factor, yet preserve the required dependences when combined together. Different from existing models that use numerical aggregation techniques, our approach is the first to jointly characterize the three main critical factors affecting flow throughput (referred to as hidden terminals, information asymmetry and flow-in-the-middle) within a single analytical expression. We have developed a symbolic implementation of the model, which we use for validation against realistic simulations and experiments with real wireless hardware, observing high model accuracy in the evaluated scenarios. The derived closed-form expressions enable new analytical studies of capacity and protocol performance that would not be possible with prior models. We illustrate this through an application of network utility maximization in complex networks with collisions, hidden terminals, asymmetric interference and flow-in-the-middle instances. Although such problematic scenarios make utility maximization challenging, the model-based optimization yields substantial fairness gains and an average per-flow throughput gain of more than 500% with respect to 802.11 in the evaluated networks.

【Keywords】: carrier sense multiple access; optimisation; radio networks; CSMA networks; asymmetric interference; closed-form throughput expressions; fairness gains; flow throughput; flow-in-the-middle; hidden terminals; information asymmetry; model-based optimization; network utility maximization; numerical aggregation techniques; per-flow throughput gain; protocol performance; symbolic implementation; wireless hardware; wireless network; Analytical models; Markov processes; Multiaccess communication; Numerical models; Throughput; Transmitters; Wireless communication

259. Max-weight scheduling in networks with heavy-tailed traffic.

【Paper Link】 【Pages】:2318-2326

【Authors】: Mihalis G. Markakis ; Eytan Modiano ; John N. Tsitsiklis

【Abstract】: We consider the problem of packet scheduling in a single-hop network with a mix of heavy-tailed and light-tailed traffic, and analyze the impact of heavy-tailed traffic on the performance of Max-Weight scheduling. As a performance metric we use the delay stability of traffic flows: a traffic flow is delay stable if its expected steady-state delay is finite, and delay unstable otherwise. First, we show that a heavy-tailed traffic flow is delay unstable under any scheduling policy. Then, we focus on the celebrated Max-Weight scheduling policy, and show that a light-tailed flow that conflicts with a heavy-tailed flow is also delay unstable. This is true irrespective of the rate or the tail distribution of the light-tailed flow, or other scheduling constraints in the network. Surprisingly, we show that a light-tailed flow can be delay unstable, even when it does not conflict with heavy-tailed traffic. Furthermore, delay stability in this case may depend on the rate of the light-tailed flow. Finally, we turn our attention to the class of Max-Weight-α scheduling policies; we show that if the α-parameters are chosen suitably, then the sum of the α-moments of the steady-state queue lengths is finite. We provide an explicit upper bound for the latter quantity, from which we derive results related to the delay stability of traffic flows, and the scaling of moments of steady-state queue lengths with traffic intensity.

【Keywords】: delays; queueing theory; radio networks; scheduling; telecommunication traffic; delay stability; heavy tailed traffic; light tailed traffic; max weight scheduling; packet scheduling; performance metric; scheduling constraints; single hop network; steady state delay; steady state queue lengths; tail distribution; traffic flows; traffic intensity; Delay; Limiting; Queueing analysis; Schedules; Scheduling; Steady-state; Vectors
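
A Max-Weight-α policy serves the queue maximizing Q_i^{α_i}; choosing a smaller α for the heavy-tailed flow de-emphasizes its occasionally enormous backlog, which is the intuition behind the delay-stability results above. A toy illustration with invented backlogs and exponents, not the paper's analysis:

```python
def max_weight_alpha(queues, alphas):
    """Serve the queue maximizing Q_i ** alpha_i. With equal alphas
    this is plain Max-Weight; a smaller alpha on the heavy-tailed
    flow keeps its rare huge bursts from starving light-tailed
    traffic."""
    return max(range(len(queues)), key=lambda i: queues[i] ** alphas[i])

# Flow 0 (heavy-tailed) is in a burst of 1000 packets; flow 1
# (light-tailed) holds 40. With alpha = (0.5, 1.0) flow 1 still wins
# service (1000**0.5 ~ 31.6 < 40), unlike under plain Max-Weight.
print(max_weight_alpha([1000, 40], [0.5, 1.0]))
print(max_weight_alpha([1000, 40], [1.0, 1.0]))
```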

260. Learning to route queries in unstructured P2P networks: Achieving throughput optimality subject to query resolution constraints.

【Paper Link】 【Pages】:2327-2335

【Authors】: Virag Shah ; Gustavo de Veciana ; George Kesidis

【Abstract】: Finding a document or resource in an unstructured peer-to-peer network can be an exceedingly difficult problem. In this paper we propose a dynamic query routing approach that accounts for arbitrary overlay topologies, nodes with heterogeneous processing capacity, and heterogeneous class-based likelihoods of query resolution at nodes, reflecting the query loads and the manner in which files/resources are distributed across the network. Finite processing capacity at nodes, e.g., reflecting their degree of altruism, can indeed limit the stabilizable load into the system. Our approach is shown to be throughput optimal subject to a grade-of-service constraint, i.e., it stabilizes the query load subject to a guarantee that queries' routes meet pre-specified class-based bounds on their associated a priori probability of query resolution. Numerical and simulation results show significant improvement in capacity region and performance benefits, in terms of mean delay, over random-walk-based searches. Additional aspects associated with reducing complexity, learning, and adaptation to class-based query resolution probabilities and traffic loads are studied.

【Keywords】: overlay networks; peer-to-peer computing; probability; query processing; telecommunication network routing; telecommunication traffic; altruism degree; arbitrary overlay topology; class-based bounds; class-based query resolution probabilities; complexity reduction; document detection; dynamic query routing; finite node processing capacity; heterogeneous processing capacity; heterogenous class-based likelihoods; learning; mean delay; node query resolution; peer-to-peer systems; query load stabilization; query resolution constraints; resource detection; service constraint; throughput optimality; traffic loads; unstructured P2P networks; unstructured peer-to-peer network; Complexity theory; Delay; History; Network topology; Peer to peer computing; Throughput; Vectors

261. On the admission of dependent flows in powerful sensor networks.

【Paper Link】 【Pages】:2336-2344

【Authors】: Reuven Cohen ; Ilia Nudelman ; Gleb Polevoy

【Abstract】: In this paper we define and study a new problem, referred to as the Dependent Unsplittable Flow Problem (D-UFP). We present and discuss this problem in the context of large-scale powerful (radar/camera) sensor networks, but we believe it has important applications to the admission of large flows in other networks as well. In order to optimize the selection of flows transmitted to the gateway, D-UFP takes into account possible dependencies between flows. We show that D-UFP is more difficult than NP-hard problems for which no good approximation is known. Then, we address two special cases of this problem: the case where all the sensors have a shared channel and the case where the sensors form a mesh and route to the gateway over a spanning tree.

【Keywords】: approximation theory; computational complexity; telecommunication network routing; wireless sensor networks; D-UFP; NP-hard problems; approximation; dependent flow admission; dependent unsplittable flow problem; gateway; large-scale powerful sensor networks; spanning tree; wireless sensor networks; Approximation methods; Bandwidth; Logic gates; Radar; Routing; Vectors; Wireless sensor networks

262. Optimal surface deployment problem in wireless sensor networks.

【Paper Link】 【Pages】:2345-2353

【Authors】: Miao Jin ; Guodong Rong ; Hongyi Wu ; Liang Shuai ; Xiaohu Guo

【Abstract】: Sensor deployment is a fundamental issue in a wireless sensor network, which often dictates the overall network performance. Previous studies on sensor deployment mainly focused on sensor networks on 2D plane or in 3D volume. In this paper, we tackle the problem of optimal sensor deployment on 3D surfaces, aiming to achieve the highest overall sensing quality. In general, the reading of a sensor node exhibits unreliability, which often depends on the distance between the sensor and the target to be sensed, as observed in a wide range of applications. Therefore, with a given set of sensors, a sensor network offers different accuracy in data acquisition when the sensors are deployed in different ways in the Field of Interest (FoI). We formulate this optimal surface deployment problem in terms of sensing quality by introducing a general function to measure the unreliability of monitored data in the entire sensor network. We present its optimal solution and propose a series of algorithms for practical implementation. Extensive simulations are conducted on various 3D mountain surface models to demonstrate the effectiveness of the proposed algorithms.

【Keywords】: data acquisition; sensor placement; wireless sensor networks; 2D plane sensor network; 3D mountain surface model; 3D volume sensor network; FoI; data acquisition; data monitoring unreliability; field of interest; optimal 3D surface deployment problem; optimal sensor deployment; target sensing; wireless sensor network; Measurement; Partitioning algorithms; Sensors; Shape; Surface topography; Three dimensional displays; Wireless sensor networks

263. Optimal range assignment in solar powered active wireless sensor networks.

【Paper Link】 【Pages】:2354-2362

【Authors】: Benjamin Gaudette ; Vinay Hanumaiah ; Sarma B. K. Vrudhula ; Marwan Krunz

【Abstract】: Energy harvesting in a sensor network is essential in situations where it is either difficult or not cost effective to access the network's nodes to replace the batteries. In this paper, we investigate the problems involved in controlling an active wireless sensor network that is powered both by rechargeable batteries and solar energy. The objective of this control is to maximize the network's quality of coverage (QoC), defined as the minimum number of targets that must be covered over a 24-hour period. Assuming a time-varying solar profile, the problem is to optimally control the sensing range of each sensor so as to maximize the QoC. Implicit in the solution is the dynamic allocation of solar energy during the day to sensing tasks and to recharging the battery so that minimum coverage is guaranteed even during the night, when only the batteries can supply energy to the sensors. The problem turns out to be a nonlinear optimal control problem of high complexity. Exploiting the specific structure of the problem, we present a method to solve it as a series of quasiconvex (unimodal) optimization problems. The runtime of the proposed solution is 60× lower than that of a naive method based on dynamic programming, while its worst-case error is less than 8%. Unlike the dynamic programming method, the proposed method is scalable to large networks consisting of hundreds of sensors and targets. This paper also offers several insights into the design of energy-harvesting networks, which result in minimum network setup cost through the determination of the optimal configuration of the number of sensors and the sampling time.

【Keywords】: energy harvesting; secondary cells; solar energy conversion; solar power stations; telecommunication control; wireless sensor networks; active wireless sensor network control; dynamic programming; energy harvesting; energy-harvesting networks; optimal range assignment; quality of coverage; rechargeable batteries; solar energy; solar powered active wireless sensor networks; worst-case error; Batteries; Energy harvesting; Mathematical model; Optimal control; Optimization; Sensors; Wireless sensor networks
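
A quasiconvex (unimodal) one-dimensional subproblem of the kind the abstract reduces to can be solved by ternary search. The sketch below maximizes an invented, purely hypothetical coverage-vs-energy tradeoff; it illustrates the technique, not the paper's actual objective.

```python
def ternary_max(f, lo, hi, iters=100):
    """Maximize a unimodal f on [lo, hi] by ternary search: each
    iteration discards the third of the interval that cannot contain
    the maximum, shrinking the bracket geometrically."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

# Hypothetical QoC tradeoff: coverage grows with sensing range r,
# battery drain penalizes large r; unimodal with its peak at r = 5.
qoc = lambda r: r - 0.1 * r ** 2
print(round(ternary_max(qoc, 0.0, 10.0), 3))
```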

264. DEOS: Dynamic energy-oriented scheduling for sustainable wireless sensor networks.

【Paper Link】 【Pages】:2363-2371

【Authors】: Ting Zhu ; Abedelaziz Mohaisen ; Yi Ping ; Don Towsley

【Abstract】: Energy is the most precious resource in wireless sensor networks. To ensure sustainable operations, wireless sensor systems need to harvest energy from environments. The time-varying environmental energy results in the dynamic change of the system's available energy. Therefore, how to dynamically schedule tasks to match the time-varying energy is a challenging problem. In contrast to traditional computing-oriented scheduling methods that focus on reducing computational energy consumption and meeting the tasks' deadlines, we present DEOS, a dynamic energy-oriented scheduling method, which treats energy as a first-class schedulable resource and dynamically schedules tasks based on the tasks' energy consumption and the system's real-time available energy. We extensively evaluate our system in indoor and outdoor settings. Results indicate that DEOS is extremely lightweight (e.g., energy consumption overhead in the worst case is only 0.039%) and effectively schedules tasks to utilize the dynamically available energy.

【Keywords】: energy harvesting; wireless sensor networks; DEOS; computational energy consumption reduction; computing-oriented scheduling methods; dynamic energy-oriented scheduling method; energy harvesting; environmental energy; first-class schedulable resource; indoor settings; outdoor settings; sustainable wireless sensor networks; time-varying energy; Admission control; Ash; Dynamic scheduling; Energy consumption; Nickel; Schedules; Sensors

265. Obfuscation of sensitive data in network flows.

【Paper Link】 【Pages】:2372-2380

【Authors】: Daniele Riboni ; Antonio Villani ; Domenico Vitali ; Claudio Bettini ; Luigi V. Mancini

【Abstract】: In the last decade, the release of network flows has gained significant popularity among researchers and networking communities. Indeed, network flows are a fundamental tool for modeling the network behavior, identifying security attacks, and validating research results. Unfortunately, due to the sensitive nature of network flows, security and privacy concerns discourage the publication of such datasets. On the one hand, existing techniques proposed to sanitize network flows do not provide any formal guarantees. On the other hand, microdata anonymization techniques are not directly applicable to network flows. In this paper, we propose a novel obfuscation technique for network flows that provides formal guarantees under realistic assumptions about the adversary's knowledge. Our work is supported by extensive experiments with a large set of real network flows collected at an important Italian Tier II Autonomous System, hosting sensitive government and corporate sites. Experimental results show that our obfuscation technique preserves the utility of network flows for network traffic analysis.

【Keywords】: Internet; security of data; telecommunication traffic; Internet; Italian Tier II autonomous system; network behavior; network flows; network traffic; obfuscation; security attacks; sensitive data; Encryption; Fingerprint recognition; IP networks; Knowledge engineering; Vectors

266. Extensive analysis and large-scale empirical evaluation of Tor bridge discovery.

【Paper Link】 【Pages】:2381-2389

【Authors】: Zhen Ling ; Junzhou Luo ; Wei Yu ; Ming Yang ; Xinwen Fu

【Abstract】: Tor is a well-known low-latency anonymous communication system that is able to bypass Internet censorship. However, publicly announced Tor routers are being blocked by various parties. To counter the censorship blocking, Tor introduced nonpublic bridges as the first-hop relay into its core network. In this paper, we analyzed the effectiveness of two categories of bridge-discovery approaches: (i) enumerating bridges from bridge HTTPS and email servers, and (ii) inferring bridges by malicious Tor middle routers. Large-scale experiments were conducted and validated our theoretical findings. We discovered 2365 Tor bridges through the two enumeration approaches and 2369 bridges by only one Tor middle router in 14 days. Our study shows that the bridge discovery based on malicious middle routers is simple, efficient and effective to discover bridges with little overhead. We also discussed the mechanisms to counter the malicious bridge discovery.

【Keywords】: Internet; computer network security; electronic mail; telecommunication network routing; transport protocols; Internet censorship; Tor bridge discovery; Tor middle router; anonymous communication system; bridge HTTP; censorship blocking; email server; malicious bridge discovery; malicious middle router; Bridges; Decision support systems; Electronic mail; Servers; Anonymous Communication; Bridge Discovery; Cloud Computing; Tor

267. A novel network delay based side-channel attack: Modeling and defense.

【Paper Link】 【Pages】:2390-2398

【Authors】: Zhen Ling ; Junzhou Luo ; Yang Zhang ; Ming Yang ; Xinwen Fu ; Wei Yu

【Abstract】: Information leakage via side channels has become a primary security threat to encrypted web traffic. Existing side channel attacks and corresponding countermeasures focus primarily on packet length, packet timing, web object size and web flow size. However, we found that encrypted web traffic can also leak information via the network delay between a user and the web sites that she visits. Motivated by this observation, we investigate a novel network-delay-based side-channel attack to infer the web sites visited by a user. The adversary can utilize pattern recognition techniques to differentiate web sites by measuring the sample mean and sample variance of the round-trip time (RTT) between a victim user and the web sites. We theoretically analyze the damage caused by such an adversary and derive closed-form formulae for the detection rate, i.e., the probability that the adversary correctly recognizes a web site. To defeat this side-channel attack, we propose several countermeasures. The basic idea is to shape traffic from different web sites so that they have similar RTT statistics. We propose strategies based on k-means clustering and K-anonymity to ensure that traffic shaping will not cause excessive delay while providing a predictable degree of anonymity. We conduct extensive experiments and our empirical results match our theory very well.

【Keywords】: Web sites; cryptography; pattern clustering; probability; sampling methods; telecommunication security; telecommunication traffic; K-anonymity; RTT statistics; Web flow size; Web object size; Web site; defense; detection rate; encrypted Web traffic; information leakage; k-means clustering; network delay based side-channel attack; packet length; packet timing; pattern recognition; probability; round-trip time; sample mean; sample variance; security threat; traffic shaping; Cryptography; Delay; Equations; Feature extraction; Mathematical model; Web sites; Countermeasures; Information Leak; Network Delay; Side Channel
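
The k-means-based countermeasure groups web sites by their RTT statistics so that sites within a cluster can be shaped toward a common delay profile. Below is a tiny one-dimensional k-means over invented per-site mean RTTs, as an illustration of the clustering step only:

```python
def kmeans_1d(values, k, iters=20):
    """Tiny 1-D k-means: cluster per-site mean RTTs so that sites in
    one cluster can be shaped toward a common delay target. Centers
    are seeded from the sorted values; empty clusters keep their
    previous center."""
    centers = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
            groups[i].append(v)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers, groups

# Invented per-site mean RTTs in ms: three nearby sites and two
# distant ones separate cleanly into two shaping clusters.
rtts = [10.0, 11.0, 12.0, 50.0, 52.0]
centers, groups = kmeans_1d(rtts, k=2)
print(groups)
```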

268. Efficient algorithms for K-anonymous location privacy in participatory sensing.

【Paper Link】 【Pages】:2399-2407

【Authors】: Khuong Vu ; Rong Zheng ; Jie Gao

【Abstract】: Location privacy is an important concern in participatory sensing applications, where users can both contribute valuable information (data reporting) and retrieve (location-dependent) information about their surroundings (querying). K-anonymity is an important privacy measure to prevent the disclosure of personal data. In this paper, we propose a mechanism based on locality-sensitive hashing (LSH) to partition user locations into groups each containing at least K users (called spatial cloaks). The mechanism is shown to preserve both locality and K-anonymity. We then devise an efficient algorithm to answer kNN queries for any point in the spatial cloaks of arbitrary polygonal shape. An extensive simulation study shows that both algorithms have superior performance with moderate computation complexity.

【Keywords】: data privacy; learning (artificial intelligence); pattern classification; query processing; K-anonymous location privacy; computation complexity; data reporting; k-nearest neighbor query; kNN query; locality-sensitive hashing; location-dependent information query; participatory sensing; spatial cloak; user location partition; Algorithm design and analysis; Data privacy; Sensors
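
The core of K-anonymous cloaking is grouping nearby users into cells of at least K members. The sketch below substitutes a coarse grid cell for the paper's locality-sensitive hash and merges undersized buckets with a neighbor; the coordinates and the merge rule are invented and purely illustrative.

```python
def spatial_cloaks(locations, K, cell=1.0):
    """Group users into cloaks of >= K members. Users are first
    bucketed by a locality-preserving key (here a coarse grid cell,
    standing in for the paper's LSH); adjacent undersized buckets
    are merged until every cloak reaches K."""
    buckets = {}
    for uid, (x, y) in locations.items():
        key = (int(x // cell), int(y // cell))
        buckets.setdefault(key, []).append(uid)
    cloaks, pending = [], []
    for key in sorted(buckets):
        pending.extend(buckets[key])
        if len(pending) >= K:
            cloaks.append(pending)
            pending = []
    if pending:                      # leftovers join the last cloak
        if cloaks:
            cloaks[-1].extend(pending)
        else:
            cloaks.append(pending)
    return cloaks

locs = {1: (0.1, 0.2), 2: (0.3, 0.9), 3: (1.4, 0.2),
        4: (2.2, 2.1), 5: (2.4, 2.3)}
print(spatial_cloaks(locs, K=2))
```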

269. An Opportunistic Resource Sharing and Topology-Aware mapping framework for virtual networks.

【Paper Link】 【Pages】:2408-2416

【Authors】: Sheng Zhang ; Zhuzhong Qian ; Jie Wu ; Sanglu Lu

【Abstract】: Network virtualization provides a promising way to overcome Internet ossification. A major challenge is virtual network mapping, i.e., how to embed multiple virtual network requests with resource constraints into a substrate network, such that physical resources are utilized in an efficient and effective manner. Since this problem is known to be NP-complete, a variety of heuristic algorithms have been proposed. In this paper, we re-examine this problem and propose a virtual network mapping framework, ORSTA, which is based on Opportunistic Resource Sharing and Topology-Aware node ranking. Opportunistic resource sharing is taken into consideration at the entire network level for the first time, and we develop an online approximation algorithm, FFA, for solving the corresponding time slot assignment problem. To measure the topological importance of a substrate node, a node ranking method, MCRank, based on Markov chains is presented. We also devise a simple and practical method to estimate the residual resource of a substrate node/link. Extensive simulation experiments demonstrate that the proposed framework enables the substrate network to achieve efficient physical resource utilization and to accept many more virtual network requests over time.

【Keywords】: Internet; Markov processes; approximation theory; optimisation; resource allocation; virtual private networks; FFA; Internet; MCRank; Markov chain; NP-complete problem; ORSTA; heuristic algorithm; network virtualization; node ranking method; online approximation algorithm; opportunistic resource sharing; residual resource estimation; resource constraint; resource utilization; substrate node; time slot assignment problem; topology aware node ranking; topology-aware mapping framework; virtual network mapping; Algorithm design and analysis; Approximation algorithms; Approximation methods; Bandwidth; Indium phosphide; Resource management; Substrates; bin packing; markov chain; opportunistic resource sharing; topology-aware; virtual network mapping
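
The keywords mention bin packing, and the time slot assignment solved by the online approximation algorithm FFA is naturally viewed that way. As a baseline illustration only (not necessarily the paper's FFA), here is textbook first-fit packing of resource demands into unit-capacity time slots:

```python
def first_fit(items, capacity):
    """First-fit online bin packing: place each arriving demand in
    the first time slot (bin) with enough residual capacity, opening
    a new slot when none fits. A classic online approximation for
    slot assignment; the demands here are invented."""
    bins = []
    for it in items:
        for b in bins:
            if sum(b) + it <= capacity:
                b.append(it)
                break
        else:
            bins.append([it])
    return bins

# Five fractional slot demands packed into unit-capacity slots.
print(first_fit([0.5, 0.7, 0.3, 0.4, 0.2], 1.0))
```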

270. Flow-aware traffic control for a content-centric network.

【Paper Link】 【Pages】:2417-2425

【Authors】: Sara Oueslati ; James Roberts ; Nada Sbihi

【Abstract】: The content-centric networking (CCN) paradigm proposed by PARC holds considerable promise as the architecture for the future Internet but remains incomplete. In this paper we argue that it is necessary to supplement CCN with mechanisms enabling controlled sharing of network bandwidth by concurrent flows. Traffic control is necessary to ensure low latency for conversational and streaming flows and to realize satisfactory bandwidth sharing between elastic flows. These objectives can be realized using "network neutral" traffic controls based on per-flow fair bandwidth sharing. The paper describes the necessary CCN mechanisms and discusses the strategies to be implemented by end systems and routers. A simulation-based performance evaluation demonstrates the effectiveness of the proposed mechanisms and the impact of possible strategies. We describe our experience in implementing some of the mechanisms using the CCNx prototype software.

【Keywords】: Internet; telecommunication congestion control; content-centric network; elastic flow; flow-aware traffic control; future Internet; network bandwidth sharing; network neutral traffic control; per-flow fair bandwidth sharing; simulation-based performance evaluation; Bandwidth; Delay; Face; IP networks; Internet; Proposals; Radiation detectors
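
Per-flow fair bandwidth sharing, the basis of the "network neutral" traffic controls advocated above, converges to the max-min fair allocation. A progressive-filling sketch with invented flow names and demands, purely illustrative of the sharing objective rather than of any CCN mechanism:

```python
def fair_shares(capacity, demands):
    """Max-min fair allocation of link capacity among flows.
    Progressive filling: flows whose demand fits under the current
    equal share are satisfied in full, and the leftover capacity is
    re-split among the remaining (bottlenecked) flows."""
    alloc = {}
    remaining, flows = capacity, dict(demands)
    while flows:
        share = remaining / len(flows)
        sat = {f: d for f, d in flows.items() if d <= share}
        if not sat:                 # all remaining flows are capped
            for f in flows:
                alloc[f] = share
            break
        for f, d in sat.items():
            alloc[f] = d
            remaining -= d
            del flows[f]
    return alloc

# A small flow, a medium flow, and a greedy bulk flow share 10 units.
print(fair_shares(10.0, {"voip": 1.0, "video": 4.0, "bulk": 100.0}))
```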

271. Enhancing cache robustness for content-centric networking.

【Paper Link】 【Pages】:2426-2434

【Authors】: Mengjun Xie ; Indra Widjaja ; Haining Wang

【Abstract】: With the advent of content-centric networking (CCN) where contents can be cached on each CCN router, cache robustness will soon emerge as a serious concern for CCN deployment. Previous studies on cache pollution attacks only focus on a single cache server. The question of how caching will behave over a general caching network such as CCN under cache pollution attacks has never been answered. In this paper, we propose a novel scheme called CacheShield for enhancing cache robustness. CacheShield is simple, easy-to-deploy, and applicable to any popular cache replacement policy. CacheShield can effectively improve cache performance under normal circumstances, and more importantly, shield CCN routers from cache pollution attacks. Extensive simulations including trace-driven simulations demonstrate that CacheShield is effective for both CCN and today's cache servers. We also study the impact of cache pollution attacks on CCN and reveal several new observations on how different attack scenarios can affect cache hit ratios unexpectedly.

【Keywords】: cache storage; computer network security; file servers; telecommunication network routing; CCN router; CacheShield; cache hit ratios; cache pollution attacks; cache replacement policy; cache robustness; cache server; content-centric networking; trace-driven simulations; Electronic mail; IP networks; Internet; Pollution; Robustness; Routing protocols; Servers

272. A hybrid IP lookup architecture with fast updates.

【Paper Link】 【Pages】:2435-2443

【Authors】: Layong Luo ; Gaogang Xie ; Yingke Xie ; Laurent Mathy ; Kavé Salamatian

【Abstract】: As network link rates are being pushed beyond 40 Gbps, IP lookup in high-speed routers is moving to hardware. The TCAM (Ternary Content Addressable Memory)-based IP lookup engine and the SRAM (Static Random Access Memory)-based IP lookup pipeline are the two most common ways to achieve high throughput. However, route updates in both engines degrade lookup performance and may lead to packet drops. Moreover, there is a growing interest in virtual IP routers where more frequent updates happen. Finding solutions that achieve both fast lookup and low update overhead becomes critical. In this paper, we propose a hybrid IP lookup architecture to address this challenge. The architecture is based on an efficient trie partitioning scheme that divides the Forwarding Information Base (FIB) into two prefix sets: a large disjoint leaf prefix set mapped into an external TCAM-based lookup engine and a small overlapping prefix set mapped into an on-chip SRAM-based lookup pipeline. Critical optimizations are developed on both IP lookup engines to reduce the update overhead. We show how to extend the proposed hybrid architecture to support virtual routers. Our implementation shows a throughput of 250 million lookups per second (MLPS). The update overhead is significantly lower than that of previous work and the utilization ratio of most external TCAMs is up to 100%.

【Keywords】: IP networks; content-addressable storage; random-access storage; critical optimization; FIB; IP lookup engine; IP lookup pipeline; TCAM; disjoint leaf prefix set; forwarding information base; high-speed router; hybrid IP lookup architecture; network link rates; on-chip SRAM-based lookup pipeline; overlapping prefix set; route update; static random access memory; ternary content addressable memory; trie partitioning scheme; virtual IP router; Engines; Field programmable gate arrays; Hardware; IP networks; Pipelines; Random access memory; System-on-a-chip
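
The trie partitioning scheme splits the FIB into a disjoint leaf prefix set (for the TCAM engine) and a small overlapping prefix set (for the SRAM pipeline). Below is a minimal sketch of that split on binary prefix strings with an invented toy FIB; the paper's actual partitioning operates on a trie and is more refined.

```python
def partition_fib(prefixes):
    """Split a FIB (binary prefix strings) into the leaf set --
    prefixes no other FIB entry extends, hence mutually disjoint
    and suitable for the TCAM engine -- and the overlapping set of
    prefixes some longer entry extends, left to the SRAM pipeline."""
    pset = set(prefixes)
    leaves, internal = [], []
    for p in pset:
        if any(q != p and q.startswith(p) for q in pset):
            internal.append(p)
        else:
            leaves.append(p)
    return sorted(leaves), sorted(internal)

# "0" and "00" are extended by "001", so they are overlapping
# (internal); "001", "10" and "111" are disjoint leaves.
fib = ["0", "00", "001", "10", "111"]
print(partition_fib(fib))
```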

273. Collaborative hierarchical caching with dynamic request routing for massive content distribution.

【Paper Link】 【Pages】:2444-2452

【Authors】: Jie Dai ; Zhan Hu ; Bo Li ; Jiangchuan Liu ; Baochun Li

【Abstract】: Massive content delivery in metropolitan networks has recently gained much attention with the successful deployment of commercial systems and an increasing user popularity. With an enormous volume of content available in the network, as well as the growing size of content owing to the popularity of high-definition video, the exploration of capacity in the caching network becomes a critical issue in providing guaranteed service. Yet, collaboration strategies among cache servers in emerging scenarios, such as IPTV services, are still not well understood. In this paper, we propose an efficient collaborative caching mechanism based on the topology derived from a real-world IPTV system, with a particular focus on exploring the capacity of the existing system infrastructure. We observe that collaboration among servers is largely affected by the topology characteristics and heterogeneous capacities of the network. Meanwhile, dynamic request routing within the caching network is strongly coupled with content placement decisions when designing the mechanism. Our proposed mechanism is implemented in a distributed manner, and is amenable to practical deployment. Our simulation results demonstrate the effectiveness of our proposed mechanism, as compared to conventional cache cooperation with static routing schemes.

【Keywords】: IPTV; cache storage; groupware; metropolitan area networks; telecommunication network routing; Internet protocol television; cache server; caching network; collaboration strategy; collaborative hierarchical caching; dynamic request routing; high-definition video; massive content delivery; massive content distribution; metropolitan network; real-world IPTV system; static routing scheme; Bandwidth; Collaboration; IPTV; Network topology; Routing; Servers; Topology

274. How good is bargained routing?

【Paper Link】 【Pages】:2453-2461

【Authors】: Gideon Blocq ; Ariel Orda

【Abstract】: Game theoretic models have been widely employed in many networking contexts. Research to date has mainly focused on non-cooperative networking games, where the selfish agents cannot reach a binding agreement on the way they would share the network infrastructure and the operating points are the Nash equilibria. These are typically inefficient, as manifested by large values of the Price of Anarchy (PoA). Many approaches have been proposed for mitigating this problem, however under the standing assumption of a non-cooperative game. In a growing number of networking scenarios it is possible for the selfish agents to communicate and reach an agreement, i.e., play a cooperative game. Therefore, the degradation of performance should be considered at an operating point that is a cooperative game solution. Accordingly, our goal is to lay foundations for the application of cooperative game theory to fundamental problems in networking. We explain our choice of the Nash Bargaining Scheme (NBS) as the solution concept, and we introduce the Price of Selfishness (PoS), which considers the degradation of performance at an NBS. We focus on the fundamental load balancing game of routing over parallel links. First, we study the classical scenario of agents that consider the same performance objectives. While the PoA here can be very large, we establish that, under plausible assumptions, the PoS attains its minimum value, i.e., through bargaining, the selfish agents reach social optimality. We then extend our study to consider the “heterogeneous” case, where agents may consider vastly different performance objectives. We demonstrate that the PoS and PoA can be unbounded, yet we explain why both measures may now be unsuitable. Accordingly, we introduce the Price of Heterogeneity (PoH), as a proper extension of the PoA. 
We establish an upper bound on the PoH for a general class of heterogeneous performance objectives, and indicate that it provides incentives for bargaining also in this general case. We discuss network design guidelines that follow from our findings.
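
The Price of Anarchy discussed above can be made concrete with the classic Pigou example of routing unit demand over two parallel links (a toy illustration only, not the model analyzed in the paper):

```python
# Toy illustration (not the paper's model): Price of Anarchy for
# selfish routing of unit demand over two parallel links, the classic
# Pigou example with latencies l1(x) = x and l2(x) = 1.

def total_cost(x1):
    """Total latency when a fraction x1 of the traffic uses link 1."""
    x2 = 1.0 - x1
    return x1 * x1 + x2 * 1.0  # x * l1(x) + (1 - x) * l2(1 - x)

# Nash equilibrium: every user picks the cheaper link, so all traffic
# ends up on link 1 (l1(1) = 1 = l2), giving total cost 1.
nash_cost = total_cost(1.0)

# Social optimum: minimize total cost over the split (grid search).
opt_cost = min(total_cost(i / 10000) for i in range(10001))

poa = nash_cost / opt_cost
```

Here selfish (Nash) routing sends all traffic over the congestible link, while the social optimum splits it evenly (cost 0.75), giving a PoA of 4/3.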

【Keywords】: game theory; resource allocation; telecommunication network routing; NBS; Nash bargaining scheme; Nash equilibria; PoA; PoH; PoS; cooperative game theory solution; load balancing game; network routing; noncooperative networking game; price of anarchy; price of heterogeneity; price of selfishness; selfish agent; Cost function; Delay; Games; NIST; Nash equilibrium; Routing; Vectors

275. Analysis of TDMA crossbar real-time switch design for AFDX networks.

Paper Link】 【Pages】:2462-2470

【Authors】: Lei Rao ; Qixin Wang ; Xue Liu ; Yufei Wang

【Abstract】: The rapid scaling up of modern avionics is forcing its communication infrastructure to evolve from shared media toward multi-hop switched real-time networks. This prompted the proposal of the avionics full-duplex switched Ethernet (AFDX) standard. Since its publication, AFDX has been well received, and is deployed or to be deployed in state-of-the-art aircraft such as the Airbus A380/A400M/A350, the Boeing 787, and the Bombardier CSeries. On the other hand, the AFDX standard only specifies the behavior that an underlying switch must follow, but leaves the architecture design open. This creates an open market for switch vendors. Among the candidate designs for this market, the TDMA crossbar real-time switch architecture stands out, as it complies with and even simplifies many mainstream switch architectures, and hence lays a smooth evolution path toward AFDX. In this paper, we focus on analyzing this switch design for AFDX networks. We first prove that the TDMA crossbar real-time switch architecture complies with the AFDX specifications, and derive closed-form formulae for the corresponding AFDX networks' traffic characteristics and end-to-end real-time delay bound. Then we prove that the resource planning problem in the corresponding AFDX networks is NP-hard. To address this NP-hard challenge, we re-model the problem and, based upon the re-modeling, propose an approximation algorithm.

【Keywords】: aircraft communication; approximation theory; avionics; local area networks; telecommunication standards; telecommunication switching; telecommunication traffic; time division multiple access; AFDX networks; Airbus A380/A400M/A350; Boeing 787; Bombardier CSeries; TDMA crossbar real-time switch design; approximation algorithm; avionics full-duplex switched Ethernet standard; communication infrastructure; multi-hop switched real-time networks; shared medium; state-of-the-art aircrafts; Aerospace electronics; Computer architecture; Real time systems; Schedules; Switches; Time division multiple access

276. Transparent acceleration of software packet forwarding using netmap.

Paper Link】 【Pages】:2471-2479

【Authors】: Luigi Rizzo ; Marta Carbone ; Gaetano Catalli

【Abstract】: Software packet forwarding has been used for a long time in general purpose operating systems. While interesting for prototyping or on slow links, it is not considered a viable solution at very high packet rates, where various sources of overhead (particularly, the packet I/O mechanisms) get in the way of achieving good performance. Having recently developed a novel framework (called netmap) for packet I/O on general purpose operating systems, we have investigated how our work can improve the performance of software packet processing. The problem is of interest because software switches/routers are widely used, and they are becoming inadequate with the increasing use of 1..10 Gbit/s links. The two case studies (OpenvSwitch and Click userspace) that we report in this paper give very interesting answers and insights. First of all, accelerating the I/O layer has the potential for huge benefits: we improved the performance of OpenvSwitch from 780 Kpps to almost 3 Mpps, and that of Click userspace from 490 Kpps to 3.95 Mpps, by simply replacing the I/O library (libpcap) with our accelerated version. On the other hand, reaching these speedups was not purely mechanical. The original versions of the two systems had other limitations, partly hidden by the slow packet I/O library, which prevented or limited the exploitation of these speed gains. In the paper we make the following contributions: i) present an accelerated version of libpcap which gives significant speedups for many existing packet processing applications; ii) show how we modified two representative applications (in particular, Click userspace), achieving huge performance improvements; iii) prove that existing software packet processing systems can be made adequate for high speed links, provided we are careful in removing other bottlenecks not related to packet I/O.

【Keywords】: computer networks; input-output programs; software libraries; software prototyping; telecommunication computing; telecommunication network routing; Click userspace; I/O layer; OpenvSwitch userspace; general purpose operating systems; high packet rates; high speed links; huge benefits; huge performance improvements; libpcap; netmap; packet I/O mechanisms; packet processing applications; prototyping; representative applications; slow packet I/O library; software packet forwarding; software packet processing systems; software routers; software switches; transparent acceleration; Instruction sets; Kernel; Libraries; Linux; Switches

277. Efficient processing of location-cloaked queries.

Paper Link】 【Pages】:2480-2488

【Authors】: Patricio Galdames ; Ying Cai

【Abstract】: When requesting location-based services, users can associate their queries with a purposely blurred location, such as a circular or rectangular geographic region, instead of their exact position. This strategy enables privacy protection, but complicates query processing. Since the server does not know a user's exact position, it has to retrieve query results for each position inside the user's cloaking region. While the server workload dramatically increases, a client downloading all query results will waste its battery power, because most of the data may be irrelevant to its query interest. This paper considers the problem of efficient processing of location-cloaked queries (LCQs). Our key observation is that queries may overlap in their cloaking regions and thus share some query results. In light of this, we propose to process queries as a batch instead of one by one independently. The technical contributions of this paper are threefold. 1) We propose to decompose queries into subqueries based on their regions of interest. Since subqueries with a common region need to be processed only once, the server workload is minimized. 2) We propose a novel scheduling technique that addresses the dilemma between minimizing server latency and ensuring good fairness in query processing. 3) We present a personalized air indexing technique by which a client can filter out and download only the needed query results, thus avoiding the waste of energy in downloading irrelevant data.
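
The batching idea above can be sketched with a toy grid model (the representation and `evaluate` callback are assumptions for illustration; the paper's decomposition and scheduling are more elaborate): decompose each cloaking region into unit cells and evaluate every distinct cell once, so overlapping queries share results.

```python
# A minimal sketch (assumed representation): rectangular cloaking
# regions are decomposed into unit grid cells; each shared cell is
# evaluated once on the server, and overlapping queries reuse results
# instead of being processed independently.

def cells(rect):
    """Unit grid cells covered by rect = (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = rect
    return {(x, y) for x in range(x1, x2) for y in range(y1, y2)}

def batch_process(queries, evaluate):
    """Evaluate each distinct cell once; map query id -> merged results."""
    cache = {}
    results = {}
    for qid, rect in queries.items():
        out = []
        for cell in cells(rect):
            if cell not in cache:
                cache[cell] = evaluate(cell)  # server-side work, done once
            out.extend(cache[cell])
        results[qid] = out
    return results
```

Two queries whose regions overlap in k cells save k server-side evaluations compared with processing them independently.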

【Keywords】: data privacy; mobile computing; query processing; LCQ; circular geographic region; location-based services; location-cloaked queries; privacy protection; query processing; rectangular geographic region; Batteries; Indexing; Measurement; Query processing; Schedules; Servers; Location cloaking; air indexing; query processing; scheduling

278. A distributed Newton's method for joint multi-hop routing and flow control: Theory and algorithm.

Paper Link】 【Pages】:2489-2497

【Authors】: Jia Liu ; Hanif D. Sherali

【Abstract】: The fast growing scale and heterogeneity of current communication networks necessitate the design of distributed cross-layer optimization algorithms. So far, the standard approach of distributed cross-layer design is based on dual decomposition and the subgradient algorithm, which is a first-order method that has a slow convergence rate. In this paper, we focus on solving a joint multi-path routing and flow control (MRFC) problem by designing a new distributed Newton's method, which is a second-order method and enjoys a quadratic rate of convergence. The major challenges in developing a distributed Newton's method lie in decentralizing the computation of the Hessian matrix and its inverse for both the primal Newton direction and dual variable updates. By appropriately reformulating, rearranging, and exploiting the special problem structures, we show that it is possible to decompose such computations into source nodes and links in the network, thus eliminating the need for global information. Furthermore, we derive closed-form expressions for both the primal Newton direction and dual variable updates, thus significantly reducing the computational complexity. The most attractive feature of our proposed distributed Newton's method is that it requires almost the same scale of information exchange as in first-order methods, while achieving a quadratic rate of convergence as in centralized Newton methods. We provide extensive numerical results to demonstrate the efficacy of our proposed algorithm. Our work contributes to the advanced paradigm shift in cross-layer network design that is evolving from first-order to second-order methods.
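
The first-order versus second-order contrast that motivates the paper can be seen on a toy one-dimensional convex problem (illustrative only; this is not the MRFC formulation or the distributed algorithm itself):

```python
# Illustrative contrast (not the paper's algorithm): first-order
# (gradient) vs second-order (Newton) iterations on the smooth convex
# function f(x) = x*log(x) - x, minimized at x = 1. Newton's method
# exhibits the quadratic convergence rate cited in the abstract.
import math

def grad(x):
    return math.log(x)      # f'(x)

def hess(x):
    return 1.0 / x          # f''(x)

def gradient_steps(x0, n, lr=0.5):
    x = x0
    for _ in range(n):
        x -= lr * grad(x)   # first-order update
    return x

def newton_steps(x0, n):
    x = x0
    for _ in range(n):
        x -= grad(x) / hess(x)  # Newton direction -f'(x)/f''(x)
    return x
```

Starting from x = 2, Newton's iteration reaches machine precision within about six steps, while the gradient iterates are still closing the gap linearly.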

【Keywords】: Hessian matrices; Newton method; computational complexity; gradient methods; telecommunication congestion control; telecommunication network routing; Hessian matrix; MRFC; centralized Newton methods; closed-form expressions; computational complexity; convergence quadratic rate; cross-layer network design; current communication networks; distributed Newton method; distributed cross-layer optimization algorithms; dual decomposition algorithm; first-order methods; flow control; joint multihop routing; primal Newton direction; second-order methods; source nodes; subgradient algorithm; Algorithm design and analysis; Convergence; Newton method; Optimization; Routing; Symmetric matrices; Vectors

279. Exact regenerating codes for Byzantine fault tolerance in distributed storage.

Paper Link】 【Pages】:2498-2506

【Authors】: Yunghsiang S. Han ; Rong Zheng ; Wai Ho Mow

【Abstract】: Due to the use of commodity software and hardware, crash-stop and Byzantine failures are likely to be more prevalent in today's large-scale distributed storage systems. Regenerating codes have been shown in the literature to be a more efficient way to disperse information across multiple nodes and recover from crash-stop failures. In this paper, we present the design of regenerating codes in conjunction with integrity checks that allow exact regeneration of failed nodes and data reconstruction in the presence of Byzantine failures. A progressive decoding mechanism is incorporated in both procedures to leverage the computation performed thus far. The fault tolerance and security properties of the schemes are also analyzed.

【Keywords】: data integrity; decoding; distributed memory systems; error detection codes; failure analysis; fault tolerance; security of data; Byzantine failures; Byzantine fault tolerance; commodity hardware; commodity software; crash-stop failures; data reconstruction; disperse information; exact regenerating codes; integrity check; large-scale distributed storage systems; progressive decoding mechanism; security properties; Bandwidth; Decoding; Encoding; Generators; Maintenance engineering; Polynomials; Redundancy; Byzantine failures; Error-detection code; Network storage; Reed-Solomon code; Regenerating code

280. NSDMiner: Automated discovery of Network Service Dependencies.

Paper Link】 【Pages】:2507-2515

【Authors】: Arun Natarajan ; Peng Ning ; Yao Liu ; Sushil Jajodia ; Steve E. Hutchinson

【Abstract】: Enterprise networks today host a wide variety of network services, which often depend on each other to provide and support network-based services and applications. Understanding such dependencies is essential for maintaining the well-being of an enterprise network and its applications, particularly in the presence of network attacks and failures. In a typical enterprise network, which is complex and dynamic in configuration, it is non-trivial to identify all these services and their dependencies. Several techniques have been developed to learn such dependencies automatically. However, they are either too complex to fine tune or cluttered with false positives and/or false negatives. In this paper, we propose a suite of novel techniques and develop a new tool named NSDMiner (which stands for Mining for Network Service Dependencies) to automatically discover the dependencies between network services from passively collected network traffic. NSDMiner is non-intrusive; it does not require any modification of existing software, or injection of network packets. More importantly, NSDMiner achieves higher accuracy than previous network-based approaches. Our experimental evaluation, which uses network traffic collected from our campus network, shows that NSDMiner outperforms the two best existing solutions significantly.
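
The flow-nesting intuition behind passive dependency discovery can be sketched as follows (a simplified, assumed heuristic for illustration, not NSDMiner's actual algorithm or data model): if a server opens an outbound flow while it is handling an inbound one, that nesting is counted as evidence of a dependency.

```python
# Simplified sketch of flow-nesting dependency inference (assumed
# model, not the tool's implementation): a flow is (src, dst_service,
# t_start, t_end). An outbound flow from a server, nested in time
# inside an inbound flow to that server, suggests a dependency.

def infer_dependencies(flows):
    """Return {(service, depends_on): evidence count}."""
    deps = {}
    for a in flows:                 # candidate inbound flow to a[1]
        for b in flows:             # candidate nested outbound flow
            if (b[0] == a[1]                        # from the same server
                    and a[2] <= b[2] and b[3] <= a[3]  # nested in time
                    and a[1] != b[1]):
                key = (a[1], b[1])
                deps[key] = deps.get(key, 0) + 1
    return deps
```

Aggregating such evidence over a large passive trace, and thresholding out coincidental nestings, is where the real tool's accuracy comes from.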

【Keywords】: Internet; business data processing; data mining; telecommunication traffic; NSDMiner; automated discovery; campus network; enterprise network; mining for network service dependencies; network attack; network failure; passively collected network traffic; Databases; Electronic mail; Monitoring; Protocols; Web servers

281. Distributed measurement-aware routing: Striking a balance between measurement and traffic engineering.

Paper Link】 【Pages】:2516-2520

【Authors】: Chia-Wei Chang ; Han Liu ; Guanyao Huang ; Bill Lin ; Chen-Nee Chuah

【Abstract】: Network-wide traffic measurement is important for various network management tasks, ranging from traffic accounting, traffic engineering, and network troubleshooting to security. Existing techniques for traffic measurement tend to be sub-optimal due to poor choice of monitor deployment location or due to constantly evolving monitoring objectives and traffic characteristics. It is not feasible to dynamically reconfigure/redeploy monitoring infrastructure to satisfy such evolving measurement requirements. In this paper, we present a distributed measurement-aware traffic engineering protocol based on a game-theoretic re-routing policy that attempts to optimally utilize existing monitor locations for maximizing the traffic measurement gain while ensuring that the traffic load distribution across the network satisfies some traffic engineering constraint. We introduce a novel cost function on each link that reflects both the measurement gain and the traffic engineering (TE) constraint. Individual routers compete with each other (in a game) to minimize their own costs for the downstream paths, i.e., each router dynamically gathers its cost information for upstream routers and uses it to locally decide how to adjust traffic split ratios for each destination to the next-hop routers among these multiple equal-cost paths. Our routing policy guarantees not only a provable Nash equilibrium, but also quick convergence, without significant oscillations, to an equilibrium state in which the measurement gain of the network is close to the best-case performance bounds. We evaluate the protocol via simulations using real traces/topologies (Abilene, AS6461 and GEANT). The simulation results show fast convergence (as expected from the theoretical results), improved measurement gains (e.g., 12% higher) and much lower TE-violations (e.g., up to 100X smaller) compared to a static, centralized measurement-aware routing framework in dynamic traffic scenarios.

【Keywords】: game theory; routing protocols; telecommunication security; telecommunication traffic; AS6461; Abilene; GEANT; Nash equilibrium; cost function; distributed measurement aware routing; distributed measurement aware traffic engineering protocol; equilibrium state; game theoretic rerouting policy; monitor deployment location; network management tasks; network security; network troubleshooting; network wide traffic measurement; next hop routers; traffic accounting; traffic engineering constraint; traffic load distribution; traffic measurement gain; traffic split ratio; Convergence; Cost function; Gain measurement; Monitoring; Nash equilibrium; Oscillators; Routing

282. Vivisecting YouTube: An active measurement study.

Paper Link】 【Pages】:2521-2525

【Authors】: Vijay Kumar Adhikari ; Sourabh Jain ; Yingying Chen ; Zhi-Li Zhang

【Abstract】: We deduce key design features behind the YouTube video delivery system by building a distributed active measurement infrastructure, and collecting and analyzing a large volume of video playback logs, DNS mappings and latency data. We find that the design of YouTube video delivery system consists of three major components: a “flat” video id space, multiple DNS namespaces reflecting a multi-layered logical organization of video servers, and a 3-tier physical cache hierarchy. We also uncover that YouTube employs a set of sophisticated mechanisms to handle video delivery dynamics such as cache misses and load sharing among its distributed cache locations and data centers.

【Keywords】: cache storage; file servers; social networking (online); video retrieval; 3-tier physical cache hierarchy; DNS mapping; DNS namespace; YouTube; data center; distributed active measurement infrastructure; distributed cache location; flat video id space; latency data; load sharing; multilayered logical organization; video delivery system; video playback log analysis; video playback log collection; video server; Delay; Extraterrestrial measurements; Google; IP networks; Servers; Unicast; YouTube

283. Origin-destination flow measurement in high-speed networks.

Paper Link】 【Pages】:2526-2530

【Authors】: Tao Li ; Shigang Chen ; Yan Qiao

【Abstract】: An origin-destination (OD) flow between two routers is the set of packets that pass both routers in a network. Measuring the sizes of OD flows is important to many network management applications such as capacity planning, traffic engineering, anomaly detection, and network reliability analysis. Measurement efficiency and accuracy are two main technical challenges. In terms of efficiency, we want to minimize per-packet processing overhead to accommodate future routers that have extremely high packet rates. In terms of accuracy, we want to generate precise measurement results with small bias and standard deviation. To meet these challenges, we design a new measurement method that employs a compact data structure for packet information storage and uses a novel statistical inference approach for OD-flow size estimation. We perform simulations to demonstrate the effectiveness of our method.
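
One way to make the combination of a compact per-packet data structure and statistical inference concrete (an assumed design in that spirit, not the paper's actual data structure or estimator): each router hashes packets into a small bitmap, and the OD flow size is estimated by inclusion-exclusion over linear-counting estimates.

```python
# A minimal sketch (assumed design, not the paper's exact method):
# each router hashes every packet it forwards into its own bitmap;
# the OD flow between routers A and B -- packets seen at both -- is
# estimated by inclusion-exclusion over linear-counting estimates.
import hashlib
import math

M = 1 << 14  # bitmap size in bits per router

def h(pkt):
    return int(hashlib.sha1(pkt.encode()).hexdigest(), 16) % M

def record(bitmap, pkt):
    """Per-packet work at the router: set a single bit."""
    bitmap[h(pkt)] = 1

def linear_count(bitmap):
    """Estimate distinct packets from the fraction of zero bits."""
    zeros = bitmap.count(0)
    return M * math.log(M / zeros)

def od_flow_estimate(bm_a, bm_b):
    union = [a | b for a, b in zip(bm_a, bm_b)]
    # |A ∩ B| = |A| + |B| - |A ∪ B|
    return linear_count(bm_a) + linear_count(bm_b) - linear_count(union)
```

With 2^14 bits per router, estimates for OD flows of a few thousand packets typically land within a few percent of the true size, at the cost of one hash and one bit-set per packet.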

【Keywords】: flow measurement; size measurement; telecommunication network management; telecommunication network routing; OD flow measurement; OD-flow size estimation; anomaly detection; capacity planning; compact data structure; high-speed network; network management application; network reliability analysis; network routing; origin-destination flow measurement; packet information storage; per-packet processing overhead minimization; size measurement; statistical inference approach; traffic engineering; Accuracy; Arrays; Estimation; Memory management; Random access memory; Size measurement

284. China's Internet: Topology mapping and geolocating.

Paper Link】 【Pages】:2531-2535

【Authors】: Ye Tian ; Ratan Dey ; Yong Liu ; Keith W. Ross

【Abstract】: We perform a large-scale topology mapping and geolocation study for China's Internet. To overcome the limited number of Chinese PlanetLab nodes and looking glass servers, we leverage several unique features in China's Internet, including the hierarchical structure of the major ISPs and the abundance of IDCs. Using only 15 vantage points, we design a traceroute scheme that finds significantly more interfaces and links than iPlane with significantly fewer traceroute probes. We then consider the problem of geolocating router interfaces and end hosts in China. We develop a heuristic for clustering the interface topology of a hierarchical ISP, and then apply the heuristic to the major Chinese ISPs. We show that the clustering heuristic can geolocate router interfaces with significantly more detail and accuracy than can the existing geoIP databases in isolation, and the resulting clusters expose the major ISPs' provincial structure. Finally, using the clustering heuristic, we propose a methodology for improving commercial geoIP databases.

【Keywords】: Internet; network servers; telecommunication network routing; telecommunication network topology; China; Chinese PlanetLab node; Internet; geoIP database; geolocating router interface; glass server; hierarchical ISP; interface topology clustering; topology mapping; traceroute scheme; Geology; Telecommunications

285. Characterizing end-host application performance across multiple networking environments.

Paper Link】 【Pages】:2536-2540

【Authors】: Diana Joumblatt ; Oana Goga ; Renata Teixeira ; Jaideep Chandrashekar ; Nina Taft

【Abstract】: Users today connect to the Internet everywhere - from home, work, airports, friend's homes, and more. This paper characterizes how the performance of networked applications varies across networking environments. Using data from a few dozen end-hosts, we compare the distributions of RTTs and download rates across pairs of environments. We illustrate that for most users the performance difference is statistically significant. We contrast the influence of the application mix and environmental factors on these performance differences.

【Keywords】: Internet; computer network performance evaluation; Internet; RTT; download rates; end-host application performance characterization; environmental factors; multiple networking environments; networked application performance; networking environments; Europe; Extraterrestrial measurements; Postal services

286. SubFlow: Towards practical flow-level traffic classification.

Paper Link】 【Pages】:2541-2545

【Authors】: Guowu Xie ; Marios Iliofotou ; Ram Keralapura ; Michalis Faloutsos ; Antonio Nucci

【Abstract】: Many research efforts propose the use of flow-level features (e.g., packet sizes and inter-arrival times) and machine learning algorithms to solve the traffic classification problem. However, these statistical methods have not made the anticipated impact in the real world. We attribute this to two main reasons: (a) training the classifiers and bootstrapping the system is cumbersome, (b) the resulting classifiers have limited ability to adapt gracefully as the traffic behavior changes. In this paper, we propose an approach that is easy to bootstrap and deploy, as well as robust to changes in the traffic, such as the emergence of new applications. The key novelty of our classifier is that it learns to identify the traffic of each application in isolation, instead of trying to distinguish one application from another. This is a very challenging task that hides many caveats and subtleties. To make this possible, we adapt and use subspace clustering, a powerful technique that has not been used before in this context. Subspace clustering allows the profiling of applications to be more precise by automatically eliminating irrelevant features. We show that our approach exhibits very high accuracy in classifying each application on five traces from different ISPs captured between 2005 and 2011. This new way of looking at application classification could generate powerful and practical solutions in the space of traffic monitoring and network management.

【Keywords】: Internet; computer bootstrapping; learning (artificial intelligence); statistical analysis; telecommunication network management; telecommunication traffic; ISP; SubFlow; bootstrapping; machine learning; network management; practical flow-level traffic classification; statistical methods; traffic behavior; traffic monitoring; training; Accuracy; Clustering algorithms; Internet; Protocols; Silicon; Training; Vectors

287. TECC: Towards collaborative in-network caching guided by traffic engineering.

Paper Link】 【Pages】:2546-2550

【Authors】: Haiyong Xie ; Guangyu Shi ; Pengwei Wang

【Abstract】: There has been an increasingly popular trend that content caching becomes an inherent underlay capability in network routers. This poses new challenges, and we believe that if not well provisioned, such capability may even degrade network performance. Among the challenges, we are interested in how the in-network caching capability should be best provisioned to improve the overall network performance. To address this challenge, we propose a collaborative caching scheme guided by traffic engineering (TECC) for the emerging content-centric networks. In particular, we decouple collaborative in-network caching and traffic engineering via the primal-dual decomposition. Our evaluations, although preliminary, suggest that the improvement is as much as 80% compared to collaborative caching without guidance of traffic engineering.

【Keywords】: cache storage; groupware; telecommunication network routing; collaborative caching scheme; collaborative in-network caching; content-centric network; in-network caching capability; network performance; network router; primal-dual decomposition; traffic engineering; Collaboration; Computational modeling; Internet; Load modeling; Network topology; Routing; Topology

288. Harnessing Internet topological stability in Thorup-Zwick compact routing.

Paper Link】 【Pages】:2551-2555

【Authors】: Stephen D. Strowes ; Colin Perkins

【Abstract】: Thorup-Zwick (TZ) compact routing guarantees sublinear state growth with the size of the network by routing via landmarks and incurring some path stretch. It uses a pseudo-random landmark selection designed for static graphs, and unsuitable for Internet routing. We propose a landmark selection algorithm for the Internet AS graph that uses k-shell decomposition to choose landmarks. Using snapshots of the AS graph from 1997-2010, we demonstrate that the ASes in the k_max-shell are highly stable over time, and form a sufficient landmark set for TZ routing in the overwhelming majority of cases (in the remainder, adding the next k-shell suffices). We evaluate path stretch and forwarding table sizes, and show that these landmark sets retain low average path stretch with tiny forwarding tables, but are better suited to the dynamic nature of the AS graph than the original TZ landmark selection algorithm.
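
A minimal sketch of the k-shell-based landmark selection (graph representation and interface are assumptions; the paper applies this to AS-graph snapshots): compute core numbers by iteratively peeling minimum-degree nodes, then take the innermost shell as the landmark set.

```python
# Sketch of k-shell decomposition by iterative peeling (assumed
# interface): repeatedly remove the node of minimum remaining degree;
# its core number is the largest minimum degree seen so far. The
# k_max-shell is then used as the Thorup-Zwick landmark set.

def core_numbers(adj):
    """adj: {node: set(neighbors)}, undirected. Returns {node: core}."""
    alive = {v: set(ns) for v, ns in adj.items()}
    degree = {v: len(ns) for v, ns in adj.items()}
    order = list(adj)
    core = {}
    k = 0
    while order:
        order.sort(key=lambda v: degree[v])
        v = order.pop(0)            # peel the minimum-degree node
        k = max(k, degree[v])
        core[v] = k
        for u in alive[v]:
            alive[u].discard(v)
            degree[u] -= 1
        del alive[v]
    return core

def kmax_shell_landmarks(adj):
    core = core_numbers(adj)
    kmax = max(core.values())
    return {v for v, c in core.items() if c == kmax}
```

On a graph with a dense core and sparse periphery (like the AS graph), the peeling strips the low-degree fringe first and leaves the densely interconnected core as the landmark set.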

【Keywords】: Internet; graph theory; telecommunication network routing; telecommunication network topology; Internet AS graph; Internet routing; Internet topological stability; TZ landmark selection algorithm; Thorup-Zwick compact routing; forwarding tables; pseudo-random landmark selection; static graphs; sublinear state growth; Additives; Clustering algorithms; Educational institutions; Heuristic algorithms; Internet; Peer to peer computing; Routing

289. A fresh look at inter-domain route aggregation.

Paper Link】 【Pages】:2556-2560

【Authors】: João L. Sobrinho ; Franck Le

【Abstract】: We present three route aggregation strategies to scale the Internet's inter-domain routing system. These strategies result from a keen understanding of how the customer-provider and peer-peer routing policies propagate routes belonging to long prefixes, in relation to how they propagate routes belonging to shorter prefixes that cover the long ones. The first strategy, Coordinated Route Suppression, requires coordination between the Autonomous Systems (ASs) of the Internet, and we present a protocol to perform such coordination. The second strategy, No Import Provider Routes, does not require any coordination between the ASs, but benefits only some of them. The third strategy, Implicit Long Routes, does not rely on any coordination between the ASs either, and is the most efficient strategy. However, it presupposes modifications to the way routers build their forwarding tables. We evaluate the three route aggregation strategies over a publicly available description of the Internet topology and on synthetically generated Internet-like topologies. The results are very promising, with savings in the amount of state information required to sustain inter-domain routing close to the optimum possible.

【Keywords】: Internet; peer-to-peer computing; protocols; telecommunication network routing; telecommunication network topology; Internet inter-domain routing system; Internet topology; autonomous system; coordinated route suppression strategy; coordination protocol; customer-provider routing policy; forwarding table; implicit long routes strategy; inter-domain route aggregation strategy; no import provider routes strategy; peer-peer routing policy; Internet; Network topology; Peer to peer computing; Routing; Routing protocols; Topology

290. Block permutations in Boolean Space to minimize TCAM for packet classification.

Paper Link】 【Pages】:2561-2565

【Authors】: Rihua Wei ; Yang Xu ; H. Jonathan Chao

【Abstract】: Packet classification is one of the major challenges in designing high-speed routers and firewalls as it involves sophisticated multi-dimensional searching. Ternary Content Addressable Memory (TCAM) has been widely used to implement packet classification thanks to its parallel search capability and constant processing speed. However, TCAM-based packet classification has the well-known range expansion problem, resulting in a huge waste of TCAM entries. In this paper, we propose a novel technique called Block Permutation (BP) to compress the packet classification rules stored in TCAMs. The compression is achieved by performing block-based permutations on the rules represented in Boolean Space. We develop an efficient heuristic approach to find the permutations for compression and design its hardware implementation. Experiments on ClassBench classifiers and ISP classifiers show that the proposed BP technique can reduce TCAM entries by 53.99% on average.
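
The range expansion problem that motivates the compression can be shown directly (this illustrates the standard prefix expansion of a range rule, not the paper's Block Permutation technique): a single range on a W-bit field can require up to 2W - 2 ternary prefixes.

```python
# Illustration of the TCAM range-expansion problem: covering an
# integer range [lo, hi] on a `bits`-wide field with ternary prefix
# strings, by recursively splitting on the top bit.

def range_to_prefixes(lo, hi, bits):
    """Cover [lo, hi] with ternary prefixes ('*' = don't-care bit)."""
    if lo > hi:
        return []
    if lo == 0 and hi == (1 << bits) - 1:
        return ['*' * bits]         # the whole subtree: one entry
    mid = 1 << (bits - 1)
    if hi < mid:                    # entirely in the 0-subtree
        return ['0' + p for p in range_to_prefixes(lo, hi, bits - 1)]
    if lo >= mid:                   # entirely in the 1-subtree
        return ['1' + p
                for p in range_to_prefixes(lo - mid, hi - mid, bits - 1)]
    # straddles the midpoint: split into both subtrees
    return (['0' + p for p in range_to_prefixes(lo, mid - 1, bits - 1)] +
            ['1' + p for p in range_to_prefixes(0, hi - mid, bits - 1)])
```

For example, the range [1, 6] on a 3-bit field already needs four TCAM entries (001, 01*, 10*, 110); on a 16-bit port field a single range rule can expand to 30 entries, which is the waste that compression schemes target.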

【Keywords】: Boolean algebra; authorisation; Boolean space; ClassBench classifiers; ISP classifiers; TCAM-based packet classification rules; block permutations; constant processing speed; firewalls; high speed routers; multidimensional searching; parallel search capability; ternary content addressable memory; Chaos; Fires; Hardware; Irrigation; Logic gates; Pipelines; World Wide Web; Classifier Minimization; Logic Optimization; Packet Classification; Range Expansion; TCAM

291. Workload factoring with the cloud: A game-theoretic perspective.

Paper Link】 【Pages】:2566-2570

【Authors】: Amir Nahir ; Ariel Orda ; Danny Raz

【Abstract】: Cloud computing is an emerging paradigm in which tasks are assigned to a combination (“cloud”) of servers and devices, accessed over a network. Typically, the cloud constitutes an additional means of computation and a user can perform workload factoring, i.e., split its load between the cloud and its other resources. Based on empirical data, we demonstrate that there is an intrinsic relation between the “benefit” that a user perceives from the cloud and the usage pattern followed by other users. This gives rise to a non-cooperative game, which we model and investigate. We show that the considered game admits a Nash equilibrium. Moreover, we show that this equilibrium is unique. We investigate the “price of anarchy” of the game and show that, while in some cases of interest the Nash equilibrium coincides with a social optimum, in other cases the gap can be arbitrarily large. We show that, somewhat counter-intuitively, exercising admission control to the cloud may deteriorate its performance. Furthermore, we demonstrate that certain (heavy) users may “scare off” other, potentially large, communities of users. Accordingly, we propose a resource allocation scheme that addresses this problem and opens the cloud to a wide range of user types.

【Keywords】: cloud computing; game theory; network servers; resource allocation; Nash equilibrium; admission control; cloud computing; game theory; load splitting; noncooperative game; price of anarchy; resource allocation; usage pattern; workload factoring; Cloud computing; Delay; Games; Nash equilibrium; Outsourcing; Servers; Time factors

292. Cost-minimizing dynamic migration of content distribution services into hybrid clouds.

Paper Link】 【Pages】:2571-2575

【Authors】: Xuanjia Qiu ; Hongxing Li ; Chuan Wu ; Zongpeng Li ; Francis C. M. Lau

【Abstract】: The recent advent of cloud computing technologies has enabled agile and scalable resource access for a variety of applications. Content distribution services are a major category of popular Internet applications. A growing number of content providers are contemplating a switch to cloud-based services, for better scalability and lower cost. Two key tasks are involved for such a move: to migrate their contents to cloud storage, and to distribute their web service load to cloud-based web services. The main challenge is to make the best use of the cloud as well as their existing on-premise server infrastructure, to serve volatile content requests with service response time guarantee at all times, while incurring the minimum operational cost. Employing Lyapunov optimization techniques, we present an optimization framework for dynamic, cost-minimizing migration of content distribution services into a hybrid cloud infrastructure that spans geographically distributed data centers. A dynamic control algorithm is designed, which optimally places contents and dispatches requests in different data centers to minimize overall operational cost over time, subject to service response time constraints. Rigorous analysis shows that the algorithm nicely bounds the response times within the preset QoS target in cases of arbitrary request arrival patterns, and guarantees that the overall cost is within a small constant gap from the optimum achieved by a T-slot lookahead mechanism with known information into the future.

【Keywords】: Web services; cloud computing; Internet application; Lyapunov optimization technique; T-slot lookahead mechanism; cloud computing technology; cloud storage; cloud-based Web services; content distribution services; cost-minimizing dynamic migration; dynamic control algorithm; hybrid cloud infrastructure; on-premise server infrastructure; rigorous analysis; service response time constraints; Algorithm design and analysis; Cloud computing; Delay; Heuristic algorithms; Optimization; Servers; Time factors
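
For readers unfamiliar with Lyapunov optimization, the drift-plus-penalty pattern behind such algorithms can be sketched on a toy single-queue model. The arrival process, capacities, and costs below are invented for illustration; the paper's actual algorithm operates over geo-distributed data centers with response-time constraints:

```python
# Drift-plus-penalty on one backlog queue Q: each slot, a resource is used
# only when the backlog pressure Q outweighs V times its unit cost, trading
# operational cost against delay via the knob V.
import random

random.seed(1)
V = 10.0                          # larger V -> lower cost, larger backlog
LOCAL_CAP, CLOUD_CAP = 3, 5       # requests servable per slot
LOCAL_COST, CLOUD_COST = 1.0, 2.0

Q, total_cost, backlogs = 0.0, 0.0, []
for t in range(2000):
    Q += random.randint(0, 5)     # request arrivals this slot
    s_local = min(LOCAL_CAP, Q) if Q > V * LOCAL_COST else 0
    s_cloud = min(CLOUD_CAP, Q - s_local) if Q - s_local > V * CLOUD_COST else 0
    total_cost += s_local * LOCAL_COST + s_cloud * CLOUD_COST
    Q -= s_local + s_cloud
    backlogs.append(Q)
avg_backlog = sum(backlogs) / len(backlogs)
```

Raising V lowers the time-average cost at the price of a larger (but still bounded) backlog, mirroring the cost-delay tradeoff the paper's analysis quantifies.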

293. Towards temporal access control in cloud computing.

Paper Link】 【Pages】:2576-2580

【Authors】: Yan Zhu ; Hongxin Hu ; Gail-Joon Ahn ; Dijiang Huang ; Shan-Biao Wang

【Abstract】: Access control is one of the most important security mechanisms in cloud computing. Attribute-based access control provides a flexible approach that allows data owners to integrate data access policies within the encrypted data. However, little work has been done to explore temporal attributes in specifying and enforcing the data owner's policy and the data user's privileges in cloud-based environments. In this paper, we present an efficient temporal access control encryption scheme for cloud services with the help of cryptographic integer comparisons and a proxy-based re-encryption mechanism on the current time. We also provide a dual comparative expression of integer ranges to extend the power of attribute expression for implementing various temporal constraints. We prove the security strength of the proposed scheme, and our experimental results not only validate the effectiveness of our scheme, but also show that the proposed integer comparison scheme performs significantly better than the previous bitwise comparison scheme.

【Keywords】: authorisation; cloud computing; cryptography; attribute expression; attribute-based access control; cloud computing; cloud services; cryptographic integer comparisons; data access policy integration; encrypted data; integer comparison scheme; proxy-based reencryption mechanism; security strength; temporal access control encryption scheme; Access control; Cloud computing; Computational modeling; Encryption; Servers; Cloud Computing; Cryptography; Integer Comparison; Re-Encryption; Temporal Access Control

294. Efficient information retrieval for ranked queries in cost-effective cloud environments.

Paper Link】 【Pages】:2581-2585

【Authors】: Qin Liu ; Chiu Chiang Tan ; Jie Wu ; Guojun Wang

【Abstract】: Cloud computing as an emerging technology trend is expected to reshape the advances in information technology. In this paper, we address two fundamental issues in a cloud environment: privacy and efficiency. We first review a private keyword-based file retrieval scheme proposed by Ostrovsky et al. Then, based on an aggregation and distribution layer (ADL), we present a scheme, termed efficient information retrieval for ranked query (EIRQ), to further reduce querying costs incurred in the cloud. Queries are classified into multiple ranks, where a higher-ranked query can retrieve a higher percentage of matched files. Extensive evaluations have been conducted on an analytical model to examine the effectiveness of our scheme.

【Keywords】: cloud computing; cost reduction; data privacy; query processing; security of data; ADL; EIRQ; aggregation-and-distribution layer; cloud computing; cost-effective cloud environments; efficient information retrieval; information technology; private keyword-based file retrieval scheme; querying costs reduction; ranked queries; Dictionaries; Encryption; Privacy; Public key; Cloud computing; efficiency; privacy
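
The rank-to-percentage idea is simple to illustrate in plaintext. This sketch ignores encryption and the ADL entirely; the file set and rank fractions are invented:

```python
# Rank-based partial retrieval: a higher-ranked query is allowed a larger
# share of the files matching its keyword.

FILES = {f"file{i}": {"cloud"} if i % 2 == 0 else {"net"} for i in range(10)}
RANK_FRACTION = {1: 0.2, 2: 0.5, 3: 1.0}  # hypothetical rank -> share of matches

def eirq_retrieve(keyword, rank):
    """Return the fraction of matching files permitted by the query's rank."""
    matches = sorted(name for name, kws in FILES.items() if keyword in kws)
    k = max(1, int(len(matches) * RANK_FRACTION[rank]))
    return matches[:k]
```

A user who only needs a sample of the matches issues a low-rank query and pays proportionally less, which is the cost lever the scheme exploits.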

295. Performance analysis of Coupling Scheduler for MapReduce/Hadoop.

Paper Link】 【Pages】:2586-2590

【Authors】: Jian Tan ; Xiaoqiao Meng ; Li Zhang

【Abstract】: For MapReduce/Hadoop, the map and reduce phases exhibit fundamentally different characteristics. Additionally, these two phases admit a complicated and tight dependency on each other, causing the repeatedly observed starvation problem with the widely used Fair Scheduler. To mitigate this problem, we design Coupling Scheduler, which, among other new features, jointly schedules map and reduce tasks by coupling their progresses, unlike existing schedulers that treat them separately. This design is based on the intuition that allocating excess resources to reduce tasks without balancing against the map task progress of the same job is likely to result in resource underutilization, since a job is deemed done only when both phases complete. In order to analytically understand the performance of this design, we propose a model that captures the fundamental scheduling characteristics of MapReduce. Specifically, the map phase is modeled by a processor sharing queue, and the reduce phase by a “sticky processor sharing” queue. Along with the important dependence between these two types of tasks, we show that, for a class of jobs with regularly varying map service times, the job processing time distribution under Coupling Scheduler can be one order better than under Fair Scheduler. These theoretical results are validated through simulations, and the improved performance is further illustrated through real experiments on our testbed.

【Keywords】: parallel processing; performance evaluation; processor scheduling; public domain software; queueing theory; resource allocation; Hadoop; MapReduce; coupling scheduler; excess resource allocation; fundamental scheduling characteristics; job processing time distribution; map phase; map scheduling characteristics; map service times; performance analysis; resource underutilization; starvation problem; sticky processor sharing queue; task reduction; Bismuth; Couplings; Delay; Indexes; Processor scheduling; Upper bound
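
The coupling intuition can be caricatured in a few lines: grant each job reduce slots in proportion to its map progress, so reduce tasks cannot hoard resources while maps lag. This toy allocator only sketches that intuition and is not the actual Coupling Scheduler:

```python
# Proportional coupling of reduce-slot shares to map progress.
# (Toy allocator with invented inputs; illustrative only.)

def couple_reduce_slots(jobs, total_reduce_slots):
    """jobs: {job_name: fraction of map tasks finished}. Returns slot shares."""
    z = sum(jobs.values())
    if z == 0:
        return {j: 0.0 for j in jobs}  # no map progress -> hold back reducers
    return {j: total_reduce_slots * frac / z for j, frac in jobs.items()}

shares = couple_reduce_slots({"j1": 0.5, "j2": 1.0}, 6)
```

A job whose maps have barely started receives almost no reduce slots, avoiding the idle-reducer underutilization the abstract describes.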

296. On the construction of data aggregation tree with minimum energy cost in wireless sensor networks: NP-completeness and approximation algorithms.

Paper Link】 【Pages】:2591-2595

【Authors】: Tung-Wei Kuo ; Ming-Jer Tsai

【Abstract】: In many applications, it is a basic operation for the sink to periodically collect reports from all sensors. Since the data gathering process usually proceeds for many rounds, it is important to collect these data efficiently, that is, to reduce the energy cost of data transmission. Under such applications, a tree is usually adopted as the routing structure to save the computation costs for maintaining the routing tables of sensors. In this paper, we work on the problem of constructing a data aggregation tree that minimizes the total energy cost of data transmission in a wireless sensor network. In addition, we also address such a problem in the wireless sensor network where relay nodes exist. We show these two problems are NP-complete, and propose O(1)-approximation algorithms for each of them. Simulations show that the proposed algorithms each have good performance in terms of the energy cost.

【Keywords】: approximation theory; computational complexity; data analysis; trees (mathematics); wireless sensor networks; NP-completeness; O(1)-approximation algorithms; data aggregation tree; data gathering process; data transmission; minimum energy cost; wireless sensor networks; Approximation algorithms; Approximation methods; Relays; Routing; Sensors; Steiner trees; Wireless sensor networks
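
As a point of reference, a common baseline for energy-aware collection is a shortest-path tree toward the sink over per-link energy costs; this is not the paper's O(1)-approximation algorithm, and the small graph below is invented:

```python
# Dijkstra from the sink builds a routing tree in which every sensor
# reports along its minimum-energy path (baseline, not the paper's method).
import heapq

def shortest_path_tree(graph, sink):
    """Returns each node's parent on its cheapest path to the sink."""
    dist = {sink: 0.0}
    parent = {sink: None}
    pq = [(0.0, sink)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(pq, (nd, v))
    return parent, dist

# Edge weights model per-packet transmission energy (symmetric links).
graph = {
    "sink": {"a": 1.0, "b": 4.0},
    "a": {"sink": 1.0, "b": 1.0, "c": 5.0},
    "b": {"sink": 4.0, "a": 1.0, "c": 1.0},
    "c": {"a": 5.0, "b": 1.0},
}
parent, dist = shortest_path_tree(graph, "sink")
```

With aggregation, minimizing total energy is harder than shortest paths alone, which is why the problem the paper studies is NP-complete.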

297. Tracking and identifying burglar using collaborative sensor-camera networks.

Paper Link】 【Pages】:2596-2600

【Authors】: Haitao Zhang ; Shaojie Tang ; Xiang-Yang Li ; Huadong Ma

【Abstract】: This work presents BurTrap, a networking system which integrates wireless modules (such as TelosB nodes) with networked surveillance cameras to automatically, accurately, and promptly track and identify a burglar who has stolen property. First, we design an energy-efficient wakeup scheduling protocol that guarantees successful target tracking while reducing the communication energy consumption of the portable wireless module. Then, we identify the burglar among all the objects appearing in the captured video by performing trajectory fitting between the estimated geometric trajectory and the estimated local visual trajectory. Through extensive experiments, we show that BurTrap can pinpoint the burglar with extremely high accuracy.

【Keywords】: cameras; energy consumption; power aware computing; scheduling; target tracking; video surveillance; wireless sensor networks; BurTrap; burglar identification; burglar tracking; collaborative sensor-camera networks; communication energy consumption; energy-efficient wakeup scheduling protocol; geometric trajectory; local visual trajectory estimation; networked surveillance cameras; networking system; portable wireless module; target tracking; trajectory fitting; Cameras; Sensors; Surveillance; Trajectory; Visualization; Wireless communication; Wireless sensor networks

298. Optimal density estimation for exposure-path prevention in wireless sensor networks using percolation theory.

Paper Link】 【Pages】:2601-2605

【Authors】: Liang Liu ; Xi Zhang ; Huadong Ma

【Abstract】: Most existing works on sensor coverage concentrate on full coverage models, which ensure that all points in the deployment region are covered, at the expense of high complexity and cost. In contrast, exposure-path prevention does not require full-coverage sensor deployment; it needs only partial coverage, because exposure paths are prevented as long as no moving objects or phenomena can go through a deployment region without being detected. Toward this end, we focus on partial coverage by applying percolation theory to solve the exposure path problem for wireless sensor networks. Specifically, we propose a bond-percolation based scheme by mapping the exposure path problem into a bond percolation model. Using this model, we derive analytical expressions of the critical densities for wireless sensor networks under random sensor deployment.

【Keywords】: estimation theory; percolation; sensor placement; wireless sensor networks; bond percolation model theory; exposure-path prevention mapping; full coverage sensor deployment; optimal density estimation; partial coverage; wireless sensor network; Approximation methods; Educational institutions; Lattices; Simulation; Upper bound; Wireless networks; Wireless sensor networks; Coverage; exposure path; percolation theory; percolation threshold; wireless sensor networks
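
The percolation mapping can be illustrated numerically: on a finite square lattice, a left-to-right crossing (the analogue of an exposure path being blocked or not) appears sharply once the bond probability passes the critical value, which is 1/2 for the infinite 2-D square lattice. A Monte Carlo sketch with invented parameters:

```python
# Bond percolation on an n x n square lattice: open each bond with
# probability p and test for a left-right crossing via union-find.
import random

def crosses(n, p, rng):
    parent = list(range(n * n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    for r in range(n):
        for c in range(n):
            if c + 1 < n and rng.random() < p:   # horizontal bond
                union(r * n + c, r * n + c + 1)
            if r + 1 < n and rng.random() < p:   # vertical bond
                union(r * n + c, (r + 1) * n + c)
    left = {find(r * n) for r in range(n)}
    return any(find(r * n + n - 1) in left for r in range(n))

def crossing_prob(n, p, trials=200, seed=7):
    rng = random.Random(seed)
    return sum(crosses(n, p, rng) for _ in range(trials)) / trials
```

Sweeping p across 0.5 reproduces the threshold behavior that underlies the paper's critical-density expressions.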

299. Thermal Inertia: Towards an energy conservation room management system.

Paper Link】 【Pages】:2606-2610

【Authors】: Dawei Pan ; Yi Yuan ; Dan Wang ; XiaoHua Xu ; Yu Peng ; Xiyuan Peng ; Peng-Jun Wan

【Abstract】: We are in an age where people are paying increasing attention to energy conservation around the world. The heating and air-conditioning systems of buildings account for one of the largest chunks of energy expenses. In this paper, we make a key observation that after a meeting or a class ends in a room, the indoor temperature does not immediately rise to the outdoor temperature. We call this phenomenon Thermal Inertia. Thus, if we arrange subsequent meetings in the same room, rather than in a room that has not been used for some time, we can take advantage of such un-dissipated cool or heated air and conserve energy. We develop a green room management system with three main components. First, it has a wireless sensor network that collects the indoor and outdoor temperatures and the electricity expenses of the air-conditioning devices. Second, we build an energy-temperature correlation model for the energy expenses and the corresponding room temperature. Third, we develop room scheduling algorithms. Our system is validated with a real deployment of a sensor network for data collection and thermodynamics model calibration. We conduct a comprehensive evaluation with synthetic room and meeting configurations, and observe a 30% energy saving compared with the current schedules.

【Keywords】: air conditioning; building management systems; computerised instrumentation; energy conservation; environmental factors; scheduling; thermodynamics; wireless sensor networks; TinyOS; air-conditioning devices; air-conditioning systems; buildings; electricity expenses; energy conservation room management system; energy expenses; energy saving; energy-temperature correlation model; green room management system; heated air; heating systems; indoor temperature; outdoor temperature; room scheduling algorithms; room temperature; thermal inertia; thermodynamics model calibration; un-dissipated cooling; wireless sensor network; Atmospheric modeling; Computational modeling; Computers; Correlation; Electricity; Temperature sensors; Wireless communication
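
The scheduling idea reduces to a greedy rule: assign the next meeting to the room whose re-cooling energy, as a function of how long it has sat idle, is smallest. The linear, saturating cost model below is invented and merely stands in for the paper's calibrated energy-temperature correlation model:

```python
# Thermal-inertia-aware room assignment: a recently vacated room is
# cheaper to re-cool than a long-idle (fully warmed-up) one.

def cooling_cost(idle_gap_minutes, rate=0.5, cap=30.0):
    """Energy to re-cool a room grows with idle time, saturating at `cap`."""
    return min(cap, rate * idle_gap_minutes)

def assign(meetings, rooms):
    """meetings: time-sorted list of (start, end). Returns (plan, total cost)."""
    last_used = {r: None for r in rooms}
    total, plan = 0.0, []
    for start, end in meetings:
        free = [r for r in rooms if last_used[r] is None or last_used[r] <= start]
        def cost(r):
            if last_used[r] is None:
                return cooling_cost(10 ** 6)  # never-used room: pay the full cap
            return cooling_cost(start - last_used[r])
        room = min(free, key=cost)
        total += cost(room)
        last_used[room] = end
        plan.append((start, end, room))
    return plan, total

plan, total = assign([(0, 60), (70, 120), (300, 360)], ["A", "B"])
```

In the example, the meeting starting ten minutes after the first one reuses the still-cool room, which is exactly the saving Thermal Inertia enables.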

300. A POMDP framework for heterogeneous sensor selection in wireless body area networks.

Paper Link】 【Pages】:2611-2615

【Authors】: Daphney-Stavroula Zois ; Marco Levorato ; Urbashi Mitra

【Abstract】: Wireless body area networks (WBANs) are emerging as a powerful tool for health management, emergency response, military personnel wellness, as well as sports and entertainment. In contrast to traditional sensor networks for, say, environmental sensing, WBANs are often characterized by a modest number of heterogeneous sensors wirelessly coupled to a fusion center such as a mobile phone. Based on an actual implementation of a prototype WBAN, energy efficiency at the fusion center has proven to be one of the critical roadblocks to long-term deployment of WBANs. To this end, a novel formulation based on stochastic control tools is devised to model the sensor selection process. Sensors are heterogeneous both in their discrimination capabilities and in their energy cost, further challenging sensor selection. The goal is to maximize the WBAN's lifetime while optimizing the performance of a physical state detection application. From this formulation, an optimal dynamic programming algorithm is derived. However, due to the prohibitive complexity of the optimal method, a low-cost approximation scheme, T3S, is designed. The low-complexity design is based on several key properties of the cost functional. The proposed T3S scheme is evaluated on real-world data collected from an implemented WBAN and observed to offer near-optimal performance with significantly lower complexity.

【Keywords】: approximation theory; body area networks; dynamic programming; telecommunication network reliability; wireless sensor networks; POMDP framework; T3S scheme design; WBAN; data collection; emergency response; energy efficiency; environmental sensing; fusion center; health management; heterogeneous sensor selection; low-cost approximation scheme; military personnel wellness; mobile phone; optimal dynamic programming algorithm; physical state detection application; prohibitive complexity; sensor selection process; stochastic control tool; wireless body area network; Accuracy; Aerospace electronics; Complexity theory; Feature extraction; Markov processes; Vectors; Wireless sensor networks
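
At the heart of any POMDP controller is a belief-state update at the fusion center. The two-state filter below illustrates that step with invented transition and observation probabilities; the paper's model, its sensor-selection decisions, and the T3S approximation are considerably richer:

```python
# Discrete Bayes (predict-then-correct) belief update over a hidden
# physical state, driven by a hypothetical accelerometer observation model.

STATES = ("rest", "active")
TRANS = {"rest": {"rest": 0.9, "active": 0.1},
         "active": {"rest": 0.2, "active": 0.8}}
OBS = {"rest": {"low": 0.8, "high": 0.2},      # P(reading | state), invented
       "active": {"low": 0.3, "high": 0.7}}

def belief_update(belief, obs):
    """One step of the discrete Bayes filter."""
    predicted = {s: sum(belief[p] * TRANS[p][s] for p in STATES) for s in STATES}
    unnorm = {s: predicted[s] * OBS[s][obs] for s in STATES}
    z = sum(unnorm.values())
    return {s: unnorm[s] / z for s in STATES}

b = {"rest": 0.5, "active": 0.5}
for o in ["high", "high", "high"]:
    b = belief_update(b, o)
```

The POMDP chooses which sensor to sample so that this belief sharpens quickly per unit of energy spent, which is the detection-versus-lifetime tradeoff the paper optimizes.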

301. MobiShare: Flexible privacy-preserving location sharing in mobile online social networks.

Paper Link】 【Pages】:2616-2620

【Authors】: Wei Wei ; Fengyuan Xu ; Qun Li

【Abstract】: Location sharing is a fundamental component of mobile online social networks (mOSNs), which also raises significant privacy concerns. The mOSNs collect a large amount of location information over time, and the users' location privacy is compromised if their location information is abused by adversaries controlling the mOSNs. In this paper, we present MobiShare, a system that provides flexible privacy-preserving location sharing in mOSNs. MobiShare is flexible to support a variety of location-based applications, in that it enables location sharing between both trusted social relations and untrusted strangers, and it supports range query and user-defined access control. In MobiShare, neither the social network server nor the location server has a complete knowledge of the users' identities and locations. The users' location privacy is protected even if either of the entities colludes with malicious users.

【Keywords】: authorisation; computer network security; data privacy; mobile computing; network servers; public key cryptography; social networking (online); trusted computing; MobiShare; flexible privacy preserving location sharing; location information; location-based applications; mOSN; malicious users; mobile online social networks; range query; trusted social relations; untrusted strangers; user defined access control; user location privacy; Cryptography; Databases; Mobile communication; Poles and towers; Privacy; Servers; Social network services

302. Trusted collaborative spectrum sensing for mobile cognitive radio networks.

Paper Link】 【Pages】:2621-2625

【Authors】: Shraboni Jana ; Kai Zeng ; Prasant Mohapatra

【Abstract】: Collaborative spectrum sensing is a key technology in cognitive radio networks (CRNs). It is inaccurate if spectrum sensing nodes are malicious. Although mobility is an inherent property of wireless networks, there has been no prior work studying the detection of malicious users for collaborative spectrum sensing in mobile CRNs. Existing solutions based on user trust for secure collaborative spectrum sensing cannot be applied to mobile scenarios, since they do not consider the location diversity of the network and thus over-penalize honest users who are at locations with severe pathloss. In this paper, we propose to use two trust parameters, Location Reliability and Malicious Intention (LRMI), to improve malicious and primary user detection in mobile CRNs under attack. Location Reliability reflects the pathloss characteristics of the wireless channel, while Malicious Intention captures the true intention of secondary users. Simulations of our proposed detection mechanism, LRMI, show that mobility helps train location reliability and detect malicious users. We observe a threefold improvement in the malicious user detection rate and a 20% improvement in the primary user detection rate, at a false alarm rate of 5%.

【Keywords】: cognitive radio; diversity reception; mobile radio; signal detection; telecommunication security; LRMI; location diversity; location reliability and malicious intention; malicious user detection rate; mobile cognitive radio networks; primary user detection; secure collaborative spectrum sensing; spectrum sensing nodes; trusted collaborative spectrum sensing; wireless networks; Cognitive radio; Collaboration; Mobile communication; Reliability; Sensors; Wireless sensor networks

303. Distributed online channel assignment toward optimal monitoring in multi-channel wireless networks.

Paper Link】 【Pages】:2626-2630

【Authors】: Dong-Hoon Shin ; Saurabh Bagchi ; Chih-Chun Wang

【Abstract】: This paper studies an optimal channel assignment problem for passive monitoring in multi-channel wireless networks, where a set of sniffers capture and analyze the network traffic to monitor the network. The objective of this problem is to maximize the total amount of traffic captured by sniffers by judiciously assigning the radios of sniffers to a set of channels. This problem is NP-hard, with the computational complexity growing exponentially with the number of sniffers. We develop distributed online solutions to this problem for large-scale and dynamic networks. Prior works have attained a constant-factor approximation of the maximum monitoring coverage in a centralized setting. Our algorithm preserves the same ratio while providing a distributed solution that is amenable to online implementation. Also, our algorithm is cost-effective, in terms of communication and computational overheads, due to the use of only local communication and the adaptation to incremental network changes. We present two operational modes of our algorithm for two types of networks that have different rates of network changes. One is a proactive mode for fast-varying networks, while the other is a reactive mode for slowly varying networks. Simulation results demonstrate the effectiveness of the two modes of our algorithm.

【Keywords】: channel allocation; computational complexity; monitoring; optimisation; radio networks; telecommunication traffic; wireless channels; NP-hard problem; computational complexity; distributed online optimal channel assignment problem; maximum monitoring coverage; multichannel wireless network; network traffic analysis; optimal monitoring; passive monitoring; radio sniffer; Ad hoc networks; Approximation algorithms; Distributed algorithms; Heuristic algorithms; Monitoring; Wireless networks
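
The centralized version of this problem is classic weighted max-coverage, where greedy selection attains a (1 - 1/e) factor; the paper's contribution is matching that ratio with a distributed online algorithm. The centralized greedy, on an invented topology, looks like this:

```python
# Greedy sniffer-channel assignment: repeatedly pick the (sniffer, channel)
# pair capturing the most not-yet-covered traffic. Topology is invented.

USERS = {"u1": ("c1", 5.0), "u2": ("c1", 2.0),   # user -> (channel, traffic)
         "u3": ("c2", 4.0), "u4": ("c2", 1.0)}
RANGE = {"s1": {"u1", "u3"}, "s2": {"u2", "u3", "u4"}}  # sniffer -> users in range
CHANNELS = ("c1", "c2")

def greedy_assign():
    covered, result = set(), {}
    for _ in RANGE:
        best = None
        for s in RANGE:
            if s in result:
                continue
            for ch in CHANNELS:
                gain = sum(USERS[u][1] for u in RANGE[s]
                           if u not in covered and USERS[u][0] == ch)
                if best is None or gain > best[0]:
                    best = (gain, s, ch)
        gain, s, ch = best
        result[s] = ch
        covered |= {u for u in RANGE[s] if USERS[u][0] == ch}
    return result, sum(USERS[u][1] for u in covered)
```

Marginal coverage gain is submodular, which is what makes the greedy ratio (and the paper's distributed counterpart of it) possible.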

304. Open WiFi networks: Lethal weapons for botnets?

Paper Link】 【Pages】:2631-2635

【Authors】: Matthew Knysz ; Xin Hu ; Yuanyuan Zeng ; Kang G. Shin

【Abstract】: This paper assesses the potential for highly mobile botnets to communicate and perform nefarious actions using only open WiFi networks, which we term mobile WiFi botnets. We design and evaluate a proof-of-concept mobile WiFi botnet using real-world mobility traces and actual open WiFi network locations for the urban environment of San Francisco. Our extensive simulation results demonstrate that mobile WiFi botnets can support rapid command propagation, with commands typically reaching over 75% of the botnet only 2 hours after injection, and sometimes within as little as 30 minutes. Moreover, those bots able to receive commands usually have ≈40-50% probability of being able to do so within a minute of the command being issued. Our evaluation results also indicate that even a small mobile WiFi botnet of only 536 bots can launch an effective DDoS attack against poorly protected systems. Furthermore, mobile WiFi botnet traffic is sufficiently distributed across multiple open WiFi networks, with no single network being over-utilized at any given moment, to make detection difficult.

【Keywords】: computer network security; mobile computing; telecommunication traffic; wireless LAN; actual open WiFi network locations; effective DDoS attack; lethal weapons; mobile WiFi botnet traffic; proof-of-concept mobile WiFi botnet; rapid command propagation; real-world mobility traces; Computer crime; IEEE 802.11 Standards; Mobile communication; Mobile computing; Mobile handsets; Protocols; Servers

305. Enhanced wireless channel authentication using time-synched link signature.

Paper Link】 【Pages】:2636-2640

【Authors】: Yao Liu ; Peng Ning

【Abstract】: Wireless link signature is a physical layer authentication mechanism that uses the unique wireless channel characteristics between a transmitter and a receiver to provide authentication of wireless channels. We identify a vulnerability of existing link signature schemes by introducing a new attack, called the mimicry attack. To defend against the mimicry attack, we propose a novel construction for wireless link signatures, called the time-synched link signature, which integrates cryptographic protection and a time factor into traditional wireless link signatures. We also evaluate the mimicry attack and the time-synched link signature scheme on the USRP2 platform running GNU Radio. The experimental results demonstrate the effectiveness of the time-synched link signature.

【Keywords】: cryptography; digital signatures; radio receivers; radio transmitters; wireless channels; GNURadio; USRP2 platform; cryptographic protection; enhanced wireless channel authentication; link signature schemes; mimicry attack; physical layer authentication mechanism; time-synched link signature; wireless channel characteristics; wireless link signature; wireless link signatures; Communication system security; Microwave integrated circuits; Physical layer; Receivers; Training; Transmitters; Wireless communication

306. The sharing at roadside: Vehicular content distribution using parked vehicles.

Paper Link】 【Pages】:2641-2645

【Authors】: Nianbo Liu ; Ming Liu ; Guihai Chen ; Jiannong Cao

【Abstract】: In Vehicular Ad Hoc Networks (VANETs), content distribution relies directly on the fleeting and dynamic contacts between moving vehicles, which often leads to prolonged downloading delay and a poor user experience. Deploying WiFi-based Access Points (APs) could relieve this problem, but it often requires a large investment, especially at the city scale. In this paper, we propose ParkCast, which requires no such investment but instead leverages roadside parking to distribute contents in urban VANETs. Equipped with a wireless device and a rechargeable battery, parked vehicles can communicate with any vehicles driving past them. Owing to the extensive parking in cities, the available resources and contact opportunities for sharing are greatly increased. On each road, roadside parked vehicles are grouped into a line cluster as far as possible, which is locally coordinated for node selection and data transmission. This collaborative design paradigm exploits the sequential contacts between moving vehicles and parked ones, implements sequential file transfer, reduces unnecessary messages and collisions, and thus greatly expedites content distribution. We investigate ParkCast through theoretical analysis, a realistic survey, and simulation. The results show that our scheme achieves high performance in distributing contents of different sizes, especially under sparse traffic conditions.

【Keywords】: vehicular ad hoc networks; wireless LAN; ParkCast; WiFi based access points; collaborative design; data transmission; dynamic contacts; node selection; parked vehicles; rechargable battery; sequential file transfer; vehicular ad hoc networks; vehicular content distribution; wireless device; Cities and towns; Collaboration; Delay; Magnetic heads; Network coding; Roads; Vehicles; ParkCast; VANET; content distribution; line cluster

307. Distributed storage codes reduce latency in vehicular networks.

Paper Link】 【Pages】:2646-2650

【Authors】: Maheswaran Sathiamoorthy ; Alexandros G. Dimakis ; Bhaskar Krishnamachari ; Fan Bai

【Abstract】: We investigate the benefits of distributed storage using erasure codes for file sharing in vehicular networks through realistic trace-based simulations. We find that coding offers substantial benefits over simple replication when the file sizes are large compared to the average download bandwidth available per encounter. Our simulations, based on a large real vehicle trace from Beijing combined with a realistic radio link quality model for a IEEE 802.11p dedicated short range communication (DSRC) radio, demonstrate that coding provides significant cost reduction in vehicular networks.

【Keywords】: cost reduction; mobile radio; network coding; peer-to-peer computing; IEEE 802.11p DSRC radio; IEEE 802.11p dedicated short range communication radio; average download bandwidth; cost reduction; distributed storage code reduce latency; erasure code; file sharing; real vehicle tracing; realistic radio link quality; realistic radio link quality model; realistic trace-based simulation; vehicular network; Ad hoc networks; Bandwidth; Delay; Encoding; Redundancy; Vehicles
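
The core advantage of coding can be reproduced with a toy simulation: with an (n, k) MDS erasure code, any k distinct coded blocks recover the file, whereas replication forces the downloader to collect each of the k specific original blocks, a coupon-collector effect. The parameters below are invented, and this ignores the paper's trace-driven mobility and radio-link models:

```python
# Encounters needed to finish a download: (n, k) MDS coding vs. storing
# n/k replicas of each of the k original blocks across n storage slots.
import random

def encounters_to_finish(k, n, coded, rng):
    """Each encounter yields one block drawn uniformly from the n stored ones."""
    got, count = set(), 0
    while len(got) < k:
        count += 1
        block = rng.randrange(n)
        # Replication: the n slots hold n/k copies of each original block.
        got.add(block if coded else block % k)
    return count

def mean_encounters(k, n, coded, trials=2000, seed=3):
    rng = random.Random(seed)
    return sum(encounters_to_finish(k, n, coded, rng) for _ in range(trials)) / trials

coded_mean = mean_encounters(10, 30, coded=True)
plain_mean = mean_encounters(10, 30, coded=False)
```

The gap widens as k grows, matching the finding that coding pays off most when files are large relative to the bandwidth available per encounter.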

308. Trace-based performance analysis of opportunistic forwarding under imperfect node cooperation.

Paper Link】 【Pages】:2651-2655

【Authors】: Merkourios Karaliopoulos ; Christian Rohner

【Abstract】: The paper proposes an innovative method for the performance analysis of opportunistic forwarding protocols over files logging mobile node encounters (contact traces). The method is modular and evolves in three steps. It first carries out contact filtering to isolate contacts that constitute message forwarding opportunities for given message coordinates and forwarding rules. It then draws on graph expansion techniques to capture these forwarding contacts into sparse space-time graph constructs. Finally, it runs standard shortest-path algorithms over these constructs and derives typical performance metrics such as message delivery delay and path hop count. The method is flexible in that it can easily assess protocol operation under various expressions of imperfect node cooperation. We describe it in detail, analyze its complexity, and evaluate it against discrete-event simulations for three representative randomized forwarding schemes. The match with the simulation results is excellent and is obtained with run times up to three orders of magnitude smaller than the duration of the simulations, rendering our method a valuable tool for the performance analysis of opportunistic forwarding schemes.

【Keywords】: filtering theory; graph theory; message passing; mobile radio; protocols; contact filtering; files logging mobile node encounter; forwarding contacts; forwarding rules; graph expansion techniques; imperfect node cooperation; message coordinates; message forwarding opportunities; opportunistic forwarding protocol; performance metrics; shortest path algorithm; sparse space-time graph; three representative randomized forwarding schemes; trace-based performance analysis; Analytical models; Communities; Delay; Performance analysis; Protocols; Routing
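
The space-time graph idea has a compact algorithmic core: over a time-sorted contact trace, earliest delivery is an earliest-arrival path computable in a single scan. The sketch below assumes unrestricted epidemic forwarding with instantaneous transfers; the paper's contact filtering step restricts which contacts qualify under a given protocol and cooperation model:

```python
# Earliest-arrival delivery over a contact trace: a node relays the
# message at a contact if it already holds it by that time.

def earliest_delivery(contacts, src, dst, t0):
    """contacts: list of (time, a, b), sorted by time.
    Returns the earliest time dst can hold a message created at src at t0."""
    arrival = {src: t0}
    for t, a, b in contacts:
        if t < t0:
            continue
        for x, y in ((a, b), (b, a)):  # contacts are bidirectional
            if arrival.get(x, float("inf")) <= t and t < arrival.get(y, float("inf")):
                arrival[y] = t
    return arrival.get(dst, float("inf"))

trace = [(1, "A", "B"), (2, "B", "C"), (3, "A", "C"), (5, "C", "D")]
```

Because contacts are processed in time order, one pass suffices; the paper's sparse space-time construction generalizes this to arbitrary forwarding rules and metrics such as hop count.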

309. A mixed queueing network model of mobility in a campus wireless network.

Paper Link】 【Pages】:2656-2660

【Authors】: Yung-Chih Chen ; Jim Kurose ; Don Towsley

【Abstract】: Although wireless networks have become ubiquitous, surprisingly few models of user-level mobility have been developed and validated against traces of measured user behavior. In this paper, we develop and validate a simple mixed queueing network model of user mobility among access points in a campus network. We identify two classes of users, an open and a closed class, corresponding to mobile users that visit the network for a short time before departure, and users that are always resident in the network during the observation period. Using CRAWDAD traces of user-access-point affiliation over time, we compare model-predicted performance with the performance actually observed in the traces, and find that such a mixed queueing model can indeed be used to accurately predict a number of performance measures of interest.

【Keywords】: mobility management (mobile radio); queueing theory; CRAWDAD traces; campus wireless network; closed-class user; mixed queueing network model; mobile users; model-predicted performance; open-class user; user-access-point affiliation; user-level mobility; Analytical models; Data models; Exponential distribution; Mobile communication; Predictive models; Servers; Wireless networks

310. POVA: Traffic light sensing with probe vehicles.

Paper Link】 【Pages】:2661-2665

【Authors】: Xuemei Liu ; Yanmin Zhu ; Minglu Li ; Qian Zhang

【Abstract】: We develop a system called POVA for traffic light sensing in large-scale urban areas, where traffic light sensing aims to detect the status of traffic lights, which is valuable for many applications such as traffic management, traffic light optimization and real-time vehicle navigation. The system employs pervasive probe vehicles that simply report real-time states of position and speed from time to time. The important observation motivating the design of POVA is that a traffic light has a considerable impact on the mobility of vehicles on the road attached to it. However, the system design faces three unique challenges, i.e., discrete probe reports, uneven distribution of reports over time and space, and variable intervals of light states. To tackle these challenges, we develop a new technique that makes the best use of limited probe reports as well as statistical features of light states. It first estimates the state of a traffic light at the time instant of a report by applying maximum a posteriori (MAP) estimation. Then, we formulate the state estimation of a light at any time as a joint optimization problem that is solved by an efficient heuristic algorithm. Trace-driven experimentation and a field study show that the estimation error rate is as low as 21% even when the number of available reports is merely one per minute.
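
The MAP step described above can be illustrated with a toy single-report example. All priors, likelihoods, and the speed threshold below are invented for illustration; the paper's actual model also exploits light-cycle statistics and report timing.

```python
# Toy MAP estimate of a traffic light's state (red vs. green) from one
# probe-vehicle speed report. All probabilities here are made up.

def map_light_state(speed_kmh, prior_red=0.5):
    """Return (MAP state, posterior) given a probe vehicle's reported speed."""
    # Assumed likelihoods: a stopped/slow vehicle is far more likely near a red.
    if speed_kmh < 5.0:
        lik = {"red": 0.9, "green": 0.2}
    else:
        lik = {"red": 0.1, "green": 0.8}
    prior = {"red": prior_red, "green": 1.0 - prior_red}
    post = {s: lik[s] * prior[s] for s in ("red", "green")}
    z = sum(post.values())
    post = {s: p / z for s, p in post.items()}
    return max(post, key=post.get), post

state, post = map_light_state(2.0)  # a nearly stopped probe vehicle
```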

【Keywords】: maximum likelihood estimation; optimisation; road traffic; state estimation; statistical analysis; POVA; heuristic algorithm; joint optimization problem; large-scale urban areas; maximum a posterior estimation; pervasive probe vehicles; real-time position state estimation; real-time speed state estimation; real-time vehicle navigation; statistical features; trace-driven experimentation; traffic light optimization; traffic light sensing; traffic light status detection; traffic management; Algorithm design and analysis; Heuristic algorithms; Optimization; Sensors

311. An analytical approach towards cooperative relay scheduling under partial state information.

Paper Link】 【Pages】:2666-2670

【Authors】: Huijiang Li ; Neeraj Jaggi ; Biplab Sikdar

【Abstract】: Energy harvesting and cooperative communication are promising solutions to overcome the power limitations of Wireless Sensor Networks (WSNs) comprising battery-powered nodes. In order to maximize the efficiency of such systems, measured in terms of packet delivery ratio achieved over time, efficient scheduling algorithms need to be designed. In particular, relay usage scheduling is critical for addressing the trade-off between energy consumption and efficiency in the network. However, the stochastic nature of the recharge and traffic generation processes at the sensor nodes, along with partial state information availability about neighboring nodes, makes the transmission and relay scheduling problem quite challenging. To address this problem, we model the system using a stochastic framework, and formulate the scheduling problem at the source sensor node, when only partial state information about the relay is available at the source, as a Partially Observable Markov Decision Process (POMDP). We characterize an approximate solution to the optimality equations, which provides us with useful insights into the system dynamics. We observe that the structure of the optimal policy is quite sensitive to system parameters, which makes it unsuitable for practical deployment. Therefore, we design a simple and practical threshold-based relay scheduling policy, and show using simulations that it achieves close to optimal performance.
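
A threshold relay policy of the kind the abstract describes can be sketched with a tiny Monte Carlo simulation. This is not the paper's POMDP solution; the harvest, success, and capacity parameters are all invented.

```python
import random

# Minimal sketch of a threshold relay policy: cooperate (use the
# energy-harvesting relay) only when relay energy meets a threshold;
# otherwise transmit directly. All parameters are illustrative.

def simulate(threshold, horizon=10000, capacity=10, p_harvest=0.4,
             p_direct=0.5, p_coop=0.9, seed=1):
    rng = random.Random(seed)
    energy = capacity // 2
    delivered = 0
    for _ in range(horizon):
        energy = min(capacity, energy + (rng.random() < p_harvest))
        if energy >= threshold:   # cooperative relaying: better odds, costs energy
            energy -= 1
            ok = rng.random() < p_coop
        else:                     # direct transmission only
            ok = rng.random() < p_direct
        delivered += ok
    return delivered / horizon

ratios = {t: simulate(t) for t in (1, 5, 9)}  # delivery ratio per threshold
```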

【Keywords】: Markov processes; cooperative communication; diversity reception; energy conservation; energy consumption; energy harvesting; telecommunication traffic; wireless sensor networks; POMDP; battery powered node; cooperative communication; energy consumption; energy efficiency; energy harvesting; neighboring node; packet delivery ratio; partial state information; partially observable Markov decision process; relay usage scheduling; source sensor node; stochastic process; traffic generation process; transmission scheduling problem; wireless sensor network; Batteries; Energy consumption; Energy harvesting; Equations; Mathematical model; Relays; Wireless sensor networks; Cooperative communications; Energy harvesting; Relay scheduling

312. Semi-structured and unstructured data aggregation scheduling in wireless sensor networks.

Paper Link】 【Pages】:2671-2675

【Authors】: Miloud Bagaa ; Abdelouahid Derhab ; Noureddine Lasla ; Abdelraouf Ouadjaout ; Nadjib Badache

【Abstract】: This paper focuses on the data aggregation scheduling problem in wireless sensor networks (WSNs), to minimize time latency. Prior works on this problem have adopted a structured approach, in which a tree-based structure is used as an input for the scheduling algorithm. As the scheduling performance mainly depends on the supplied aggregation tree, such an approach cannot guarantee optimal performance. To address this problem, we propose approaches based on Semi-structured Topology (DAS-ST) and Unstructured Topology (DAS-UT). The approaches are based on two key design features, which are: (1) simultaneous execution of aggregation tree construction and scheduling, and (2) parent selection criteria that maximize the choices of parents for each node and maximize time slot reuse. We prove that the latency of DAS-ST is upper-bounded by (⌈2π/arccos(1/(1+ϵ))⌉+4)R+Δ-4, where R is the network radius, Δ is the maximum node degree, and 0.05 < ϵ ≤ 1. Simulation results show that DAS-UT outperforms DAS-ST and four competitive state-of-the-art aggregation scheduling algorithms in terms of latency and network lifetime.
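
The stated latency bound is easy to evaluate numerically; a small sketch, with arbitrarily chosen values of R, Δ and ϵ:

```python
import math

# Evaluate the DAS-ST latency upper bound
#   (ceil(2*pi / arccos(1/(1+eps))) + 4) * R + Delta - 4
# for sample values of network radius R, max node degree Delta, and eps.

def das_st_bound(R, Delta, eps):
    assert 0.05 < eps <= 1
    return (math.ceil(2 * math.pi / math.acos(1 / (1 + eps))) + 4) * R + Delta - 4

bound = das_st_bound(R=10, Delta=8, eps=0.5)
```

Smaller ϵ tightens the angular step and therefore increases the bound.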

【Keywords】: scheduling; trees (mathematics); wireless sensor networks; aggregation tree construction; data aggregation scheduling; network lifetime; parent selection criteria; semistructured topology; supplied aggregation tree; time latency; time slot reuse; tree based structure; unstructured topology; wireless sensor networks; Ad hoc networks; Network topology; Scheduling; Scheduling algorithms; Topology; Wireless communication; Wireless sensor networks; data aggregation; scheduling; wireless sensor network

313. Morello: A quality-of-monitoring oriented sensing scheduling protocol in Sensor Networks.

Paper Link】 【Pages】:2676-2680

【Authors】: Shaojie Tang ; Lei Yang

【Abstract】: Wireless Sensor Networks (WSN) are often densely deployed in the region of interest in order to continuously monitor a physical phenomenon. Due to the high deployment density and the nature of the physical phenomenon, nearby sensor readings are often highly correlated in both the space domain and the time domain. These spatial and temporal correlations bring significant potential advantages as well as challenges for developing efficient sensing scheduling protocols for WSN. In this paper, a theoretical framework is developed to model the Quality of Monitoring (QoM) by exploiting both spatial and temporal correlations. The objective of this work is to enable the development of efficient sensing scheduling protocols which exploit these advantageous intrinsic features of the WSN paradigm. Specifically, we propose two sensing scheduling schemes in order to maximize the overall QoM subject to resource constraints (e.g., under a fixed duty cycle). Extensive experiments validate our theoretical results.

【Keywords】: protocols; quality of service; scheduling; wireless sensor networks; Morello; physical phenomenon; quality of monitoring oriented sensing scheduling protocol; resource constraints; sensor readings; space domain; time domain; wireless sensor networks; Correlation; Monitoring; Protocols; Robot sensing systems; Schedules; Wireless sensor networks; sensing task scheduling; shared sensor network

314. Scalable routing in 3D high genus sensor networks using graph embedding.

Paper Link】 【Pages】:2681-2685

【Authors】: Xiaokang Yu ; Xiaotian Yin ; Wei Han ; Jie Gao ; Xianfeng Gu

【Abstract】: We study scalable routing for a sensor network deployed in complicated 3D settings such as underground tunnels in a gas or water system. The nodes are in general 3D space but they are very sparsely located and the network has complex topology. We propose a routing scheme by first embedding the network on a surface with possibly non-zero genus. Then we compute a canonical hyperbolic metric of the embedded surface, and use geodesics to decompose the network into canonical components called 'pairs of pants', whose topology is simpler (with genus zero). The adjacency of the pants components is extracted as a high-level routing map and stored at every node. With the hyperbolic metric one can use greedy routing to navigate within and across pants. Altogether this leads to a two-level routing scheme by first finding a sequence of pants and then realizing the route with greedy steps. We show by simulation that the number of pants is closely related to the 'true genus' of the network and that the routing scheme is efficient and scalable.

【Keywords】: graph theory; telecommunication network routing; telecommunication network topology; wireless sensor networks; 3D high genus sensor networks; canonical hyperbolic metric; complex topology; gas system; general 3D space; graph embedding; greedy routing; high level routing map; scalable routing; underground tunnels; water system; Clocks; Face; Geology; Wireless communication; Wireless sensor networks

315. Fusion of state estimates over long-haul sensor networks under random delay and loss.

Paper Link】 【Pages】:2686-2690

【Authors】: Qiang Liu ; Xin Wang ; Nageswara S. V. Rao

【Abstract】: Long-haul sensor networks are deployed in a wide range of applications from national security to environmental monitoring. We consider target tracking over a long-haul sensor network, wherein state and covariance estimates are sent from sensors to a fusion center that generates a fused state. Fusion serves as a viable means to improve the estimation performance to meet the system requirements on accuracy and delay. Communication over long-haul links, such as submarine fibers and satellite links, is subject to long latencies and high loss rates that lead to many lost or out-of-order messages and may significantly degrade the fusion performance. We propose an online selective fuser to combine the received state estimates based on the estimated information contribution from the pending data. By concurrently using prediction and retrodiction, the fuser opportunistically makes timely decisions to achieve a balance between accuracy and timeliness of the fused estimate. Simulation results show that our method effectively maintains high levels of fusion performance under various communication delay and loss conditions.
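
The basic operation such a fusion center performs (combining state estimates weighted by their covariances) can be sketched for the scalar case. This is only the textbook information-form fusion rule; the paper's contribution is deciding *when* to fuse under delay and loss, which is not modeled here.

```python
# Illustrative covariance-weighted (information-form) fusion of scalar
# state estimates (x_i, P_i):
#   P_f = (sum_i 1/P_i)^-1,   x_f = P_f * sum_i (x_i / P_i)

def fuse(estimates):
    """Fuse a list of (x, P) pairs; returns (x_f, P_f)."""
    info = sum(1.0 / P for _, P in estimates)
    P_f = 1.0 / info
    x_f = P_f * sum(x / P for x, P in estimates)
    return x_f, P_f

# The low-variance sensor dominates the fused estimate:
x_f, P_f = fuse([(10.0, 4.0), (12.0, 1.0)])
```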

【Keywords】: covariance analysis; target tracking; wireless sensor networks; communication delay; covariance estimation; environmental monitoring; fusion center; information contribution estimation; long-haul links; long-haul sensor networks; loss conditions; national security; online selective fuser; prediction; random delay; received state estimation; retrodiction; satellite links; submarine fibers; system requirement; target tracking; Accuracy; Delay; Estimation error; Noise; Target tracking; State estimation; delay and loss; long-haul sensor networks; online selective fusion; prediction and retrodiction

Paper Link】 【Pages】:2691-2695

【Authors】: Yaqin Zhou ; Xiang-Yang Li ; Min Liu ; Zhongcheng Li ; Shaojie Tang ; XuFei Mao ; Qiuyuan Huang

【Abstract】: We study distributed link scheduling for throughput maximization in wireless networks. The majority of results on link scheduling assume binary interference models for simplicity. While the physical interference model reflects physical reality more precisely, the problem becomes notoriously harder under it. There have been only a few existing results on centralized link scheduling under the physical interference model, even though distributed scheduling is more practical. In this paper, by leveraging the partition and shifting strategies and the pick-and-compare scheme, we present the first distributed link scheduling algorithm that can achieve a constant fraction of the optimal capacity region subject to physical interference constraints in the linear power setting for multihop wireless networks.

【Keywords】: radio networks; radiofrequency interference; scheduling; centralized link scheduling; distributed link scheduling; linear power setting; multihop wireless networks; optimal capacity region; physical interference model; pick-and-compare scheme; throughput maximization; Approximation methods; Interference constraints; Scheduling algorithms; Throughput; Wireless networks

317. Connection-level scheduling in wireless networks using only MAC-layer information.

Paper Link】 【Pages】:2696-2700

【Authors】: Javad Ghaderi ; Tianxiong Ji ; R. Srikant

【Abstract】: The paper studies throughput-optimal scheduling in wireless networks when there are file arrivals and departures. In the case of single-hop traffic, the well-studied Max Weight algorithm provides soft priorities to links with larger queue lengths. If packets arrive in bursts during file arrival instants, then large variances in file sizes would imply that some links will have very large queue lengths while others will have small queue lengths. Thus, links with small queue lengths may be starved for long periods of time. An alternative is to use only MAC-layer queue lengths in making scheduling decisions; in fact, typically only this information is available since scheduling is performed at the MAC layer. Therefore the questions we ask in this paper are the following: (i) is scheduling using only MAC-layer queue length information throughput-optimal? and (ii) does it improve delay performance compared to the case where scheduling is performed using the total number of packets waiting at a link? We affirmatively answer both questions in the paper (the first theoretically and the second using simulations), making minimal assumptions on the transport-layer window control mechanism.
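
The Max Weight algorithm the abstract refers to can be sketched on a toy conflict graph: among all conflict-free sets of links, activate the one with the largest total queue length (here with unit rates, and with queue lengths standing in for the MAC-layer queues the paper advocates). The link names and conflicts below are invented.

```python
from itertools import combinations

# Brute-force MaxWeight scheduler over a toy conflict graph.
# queues: {link: queue_length}; conflicts: set of frozenset({l1, l2}) pairs
# that cannot be scheduled together.

def max_weight_schedule(queues, conflicts):
    links = list(queues)
    best, best_w = frozenset(), 0
    for r in range(1, len(links) + 1):
        for subset in combinations(links, r):
            # skip subsets containing a conflicting pair
            if any(frozenset(p) in conflicts for p in combinations(subset, 2)):
                continue
            w = sum(queues[l] for l in subset)
            if w > best_w:
                best, best_w = frozenset(subset), w
    return best, best_w

sched, w = max_weight_schedule({"a": 5, "b": 3, "c": 4},
                               {frozenset({"a", "b"})})
```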

【Keywords】: access protocols; delays; queueing theory; radio networks; scheduling; telecommunication traffic; MAC layer information; MAC layer queue lengths; connection-level scheduling; delay performance; max weight algorithm; single hop traffic; throughput optimal scheduling; transport layer window control mechanism; wireless networks; Algorithm design and analysis; Delay; Multiaccess communication; Schedules; Scheduling algorithms; Throughput; Wireless networks

318. BOR/AC: Bandwidth-aware opportunistic routing with admission control in wireless mesh networks.

Paper Link】 【Pages】:2701-2705

【Authors】: Peng Zhao ; Xinyu Yang ; Jiayin Wang ; Benyuan Liu ; Jie Wang

【Abstract】: Opportunistic routing (OR) is a viable approach for improving performance of wireless communications. Previous studies on OR have focused on cost minimization, performance of multiple rates, congestion control, and other issues. Bandwidth assurance over OR, however, has not been adequately investigated. To bridge this gap, we present a bandwidth-aware opportunistic routing (BOR) with admission control (AC) protocol named BOR/AC. In particular, by analyzing the expected available bandwidth (EAB) and the expected transmission cost (ETC) in OR, we first devise a new metric called BCR (bandwidth-cost ratio) to determine the priority of relays in the forwarding candidates set. Admission control is then applied to admit or reject traffic flows based on estimated expected available bandwidth. Extensive simulation results show that BOR/AC consistently achieves much better performance than existing opportunistic routing protocols.
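
The BCR metric can be illustrated by ranking forwarding candidates; the relay names, bandwidths, and costs below are invented, and this sketch omits the EAB/ETC estimation the paper actually performs.

```python
# Toy ranking of forwarding candidates by bandwidth-cost ratio (BCR):
# BCR(relay) = expected available bandwidth / expected transmission cost.

def rank_by_bcr(candidates):
    """candidates: {relay: (eab_mbps, etc_cost)} -> relays sorted by BCR, best first."""
    return sorted(candidates,
                  key=lambda r: candidates[r][0] / candidates[r][1],
                  reverse=True)

order = rank_by_bcr({"r1": (6.0, 3.0), "r2": (8.0, 2.0), "r3": (5.0, 5.0)})
```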

【Keywords】: cost reduction; routing protocols; telecommunication congestion control; wireless mesh networks; BCR; BOR-AC protocol; bandwidth assurance; bandwidth-aware opportunistic routing-with-admission control protocol; bandwidth-cost ratio; congestion control; cost minimization; expected available bandwidth; expected transmission cost; multiple-rate performance; relay priority; traffic flows; wireless mesh networks; Bandwidth

319. PLASMA: A new routing paradigm for wireless multihop networks.

Paper Link】 【Pages】:2706-2710

【Authors】: Rafael P. Laufer ; Pedro B. Velloso ; Luiz Filipe M. Vieira ; Leonard Kleinrock

【Abstract】: In this paper we present a new routing paradigm for wireless multihop networks. In plasma routing, each packet is delivered over the best available path to one of the gateways. The choice of the path and gateway for each packet is not made beforehand by the source node, but rather on-the-fly by the mesh routers as the packet traverses the network. We propose a distributed routing algorithm to jointly optimize the transmission rate and the set of gateways each node should use. A load balancing technique is also proposed to disperse the network traffic among multiple gateways. We validate our proposal with simulations and show that plasma routing outperforms the state-of-the-art multirate anypath routing paradigm, with a 98% throughput gain and a 2.2x delay decrease. Finally, we also show that the load can be evenly distributed among gateways with a similar routing cost, resulting in a further 63% throughput gain.

【Keywords】: resource allocation; telecommunication network routing; telecommunication traffic; wireless mesh networks; distributed routing algorithm; load balancing technique; mesh routers; multirate anypath routing paradigm; network traffic; plasma routing paradigm; source node; throughput gain; wireless mesh networks; wireless multihop networks; Delay; Load management; Logic gates; Plasmas; Routing; Throughput; Wireless communication

320. Context-aware sensor data dissemination for mobile users in remote areas.

Paper Link】 【Pages】:2711-2715

【Authors】: Edith C. H. Ngai ; Mani B. Srivastava ; Jiangchuan Liu

【Abstract】: Many mobile sensing applications involve users reporting and accessing sensing data through the Internet. However, WiFi and 3G connectivity is not always available in remote areas. Existing data dissemination schemes for opportunistic networks are not sufficient for sensing applications, as the sensing context has not been explored. In this work, we present a novel context-aware sensing data dissemination framework for mobile users in a remote sensing field. It maximizes information utility by considering such sensing context as sensing type, locality, time-to-live, mobility and user interests. Different from existing works, the mobile users not only collect sensing data, but also upload data to sensors for information sharing. We develop a context-aware deployment algorithm and a hybrid data exchange mechanism for generic sensors and mobile users. We evaluate our solution by both analysis and simulations, and show that it can provide high information utility for mobile users at low communication overhead.

【Keywords】: Internet; mobile radio; wireless LAN; wireless sensor networks; 3G connectivity; Internet; WiFi; communication overhead; context aware deployment algorithm; context aware sensor data dissemination; generic sensors; hybrid data exchange mechanism; information sharing; mobile sensing; mobile users; opportunistic networks; remote areas; Ad hoc networks; Context; Delay; Mobile communication; Mobile computing; Mobile handsets; Sensors

321. Energy-optimal mobile application execution: Taming resource-poor mobile devices with cloud clones.

Paper Link】 【Pages】:2716-2720

【Authors】: Yonggang Wen ; Weiwen Zhang ; Haiyun Luo

【Abstract】: In this paper, we propose to leverage cloud computing to tame resource-poor mobile devices. Specifically, mobile applications can be executed on the mobile device (known as mobile execution) or offloaded to the cloud clone for execution (known as cloud execution), with the objective of conserving energy for the mobile device. The energy-optimal execution policy is obtained by solving two constrained optimization problems, i.e., how to optimally configure the clock frequency to complete the CPU cycles for mobile execution, and how to optimally schedule the data transmission for cloud execution, in order to minimize energy within a given time delay. Closed-form solutions are obtained for both cases and applied to determine the condition under which local execution or remote execution is more energy-efficient for the mobile device. Moreover, numerical results illustrate that a significant amount of energy (e.g., up to 13 times for a typical mobile application profile) can be saved by optimally offloading the mobile application to the cloud clone.
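
A minimal sketch of the local-vs-cloud comparison, under the common dynamic-voltage-scaling assumption that energy per cycle grows as f², so running C cycles at the constant frequency f = C/T over deadline T costs E ≈ κ·C³/T², while cloud execution mainly pays to ship D bits at rate R and transmit power Ptx. The constants κ and Ptx below are invented, not the paper's values.

```python
# Compare local (DVS-scaled) execution energy against cloud-offload
# transmission energy and pick the cheaper mode. Constants are illustrative.

def best_mode(C, T, D, R, kappa=1e-28, Ptx=1.0):
    """C: CPU cycles, T: deadline (s), D: data to ship (bits), R: rate (bit/s)."""
    e_local = kappa * C**3 / T**2   # energy-per-cycle ~ f^2, f = C/T
    e_cloud = Ptx * D / R           # transmission time D/R at power Ptx
    return ("cloud", e_cloud) if e_cloud < e_local else ("local", e_local)

# A compute-heavy task with little data to transfer favors the cloud clone:
mode, energy = best_mode(C=2e9, T=1.0, D=1e5, R=1e6)
```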

【Keywords】: cloud computing; mobile computing; clock frequency; cloud clones; complete CPU cycles; constrained optimization problems; energy-optimal mobile application execution; resource-poor mobile devices; time delay; Cloning; Mobile communication; Transmission line matrix methods; application offloading; cloud computing; dynamic voltage scaling; mobile applications

322. Dynamic lookahead mechanism for conserving power in multi-player mobile games.

Paper Link】 【Pages】:2721-2725

【Authors】: Karthik Thirugnanam ; Bhojan Anand ; Jeena Sebastian ; Pravein G. Kannan ; Akkihebbal L. Ananda ; Rajesh Krishna Balan ; Mun Choon Chan

【Abstract】: As the current generation of mobile smartphones becomes more powerful, they are being used to perform more resource-intensive tasks, making battery lifetime a major bottleneck. In this paper, we present a technique called dynamic AoV lookahead for reducing wireless interface power consumption by up to 50% while playing a popular, yet resource-intensive, mobile multiplayer game.

【Keywords】: computer games; network interfaces; power aware computing; resource allocation; smart phones; battery lifetime; dynamic AoV lookahead; dynamic lookahead mechanism; multiplayer mobile games; power conservation; resource intensive tasks; smart phones; wireless interface power consumption reduction; Accuracy; Games; Heuristic algorithms; IEEE 802.11 Standards; Network interfaces; Power demand; Servers

Paper Link】 【Pages】:2726-2730

【Authors】: Jialin He ; Hui Liu ; Pengfei Cui ; Jonathan Landon ; Onur Altintas ; Rama Vuyyuru ; Dinesh Rajan ; Joseph David Camp

【Abstract】: Context awareness has received increasing attention with the proliferation of various types of sensors on mobile devices. However, while wireless performance is known to be highly correlated with environmental settings, mobile devices have yet to fully exploit the awareness of context to improve wireless performance. In this paper, we leverage available context information to improve link-level adaptation via decision-tree classifiers and extensively evaluate its performance over emulated channels as well as with in-field trials. We first propose a classification method based on decision trees to select the optimal transmission parameters such as modulation, coding rate and packet size. We then quantify the throughput improvement using the proposed scheme and show that in some scenarios the throughput increases by over 100% compared to traditional SNR-based rate adaptation protocols. Second, we analyze the amount of training needed to assess the classification scheme. Third, we validate the classification-based method by implementing it on two different test platforms for extensive experimentation. We reveal the importance of the various contextual attributes used and identify channel type as a key parameter that affects classification performance. Finally, we study and quantify the use of context information across multiple different frequency bands and demonstrate the significant throughput gains that can be obtained.

【Keywords】: decision trees; mobile handsets; pattern classification; protocols; radio links; wireless channels; SNR-based rate adaptation protocol; coding rate; context awareness; decision-tree classification; emulated channels; link-level adaptation; mobile devices; modulation; optimal transmission parameter; packet size; throughput improvement; Context; Decision trees; Signal to noise ratio; Throughput; Training; Wireless communication; Wireless sensor networks

324. Real-time status: How often should one update?

Paper Link】 【Pages】:2731-2735

【Authors】: Sanjit Krishnan Kaul ; Roy D. Yates ; Marco Gruteser

【Abstract】: Increasingly ubiquitous communication networks and connectivity via portable devices have engendered a host of applications in which sources, for example people and environmental sensors, send updates of their status to interested recipients. These applications desire status updates at the recipients to be as timely as possible; however, this is typically constrained by limited network resources. In this paper, we employ a time-average age metric for the performance evaluation of status update systems. We derive general methods for calculating the age metric that can be applied to a broad class of service systems. We apply these methods to queue-theoretic system abstractions consisting of a source, a service facility and monitors, with the model of the service facility (physical constraints) a given. The queue discipline of first-come-first-served (FCFS) is explored. We show the existence of an optimal rate at which a source must generate its information to keep its status as timely as possible at all its monitors. This rate differs from those that maximize utilization (throughput) or minimize status packet delivery delay. While our abstractions are simpler than their real-world counterparts, the insights obtained, we believe, are a useful starting point in understanding and designing systems that support real time status updates.
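
The existence of an optimal update rate can be illustrated numerically using the FCFS M/M/1 time-average age expression Δ(ρ) = (1/μ)(1 + 1/ρ + ρ²/(1−ρ)) for offered load ρ = λ/μ (stated here from the standard age-of-information analysis of this queue); the minimizer sits strictly between the delay-minimizing (ρ → 0) and throughput-maximizing (ρ → 1) regimes.

```python
# Time-average age of an FCFS M/M/1 status-update system and a crude
# grid search for the age-minimizing offered load rho.

def average_age(rho, mu=1.0):
    return (1.0 / mu) * (1.0 + 1.0 / rho + rho**2 / (1.0 - rho))

# search the open interval (0, 1)
rho_star = min((i / 10000 for i in range(1, 10000)), key=average_age)
```

The optimum lands near ρ ≈ 0.53: the source should deliberately keep the queue lightly loaded, but not empty.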

【Keywords】: mobile computing; mobile handsets; queueing theory; abstraction; first come first served; network connectivity; performance evaluation; portable device; queuing theory; service facility model; service system; status packet delivery delay minimisation; status update system; time average age metric; ubiquitous communication network; utilisation maximisation; Monitoring; Real time systems; Servers; Silicon; Temperature sensors; Vehicles

325. Geographic max-flow and min-cut under a circular disk failure model.

Paper Link】 【Pages】:2736-2740

【Authors】: Sebastian Neumayer ; Alon Efrat ; Eytan Modiano

【Abstract】: Failures in fiber-optic networks may be caused by natural disasters, such as floods or earthquakes, as well as other events, such as an Electromagnetic Pulse (EMP) attack. These events occur in specific geographical locations; the geography of the network therefore determines the effect of failure events on the network's connectivity and capacity. In this paper we consider a generalization of the min-cut and max-flow problems under a geographic failure model. Specifically, we consider the problem of finding the minimum number of failures, modeled as circular disks, to disconnect a pair of nodes, and the maximum number of failure-disjoint paths between pairs of nodes. This model applies to the scenario where an adversary is attacking the network multiple times with the intention to reduce its connectivity. We present a polynomial time algorithm to solve the geographic min-cut problem and develop an ILP formulation, an exact algorithm, and a heuristic algorithm for the geographic max-flow problem.

【Keywords】: electromagnetic pulse; integer programming; linear programming; optical fibre networks; telecommunication network reliability; EMP attack; circular disk failure model; electromagnetic pulse attack; fiber-optic networks; geographic failure model; geographic max-flow problem; geographic min-cut problem; min-cut and max-flow problems; natural disasters; network geography; polynomial time algorithm; Cities and towns; Color; EMP radiation effects; Grippers; Heuristic algorithms; Polynomials; Shape

326. A fast sketch for aggregate queries over high-speed network traffic.

Paper Link】 【Pages】:2741-2745

【Authors】: Yang Liu ; Wenji Chen ; Yong Guan

【Abstract】: There have been security problems and network failures that are hard to resolve, for example, botnets, polymorphic worms/viruses, DDoS, etc. To address them, we need to monitor the traffic dynamics and have a network-wide view of them, and more importantly, be able to detect attacks and failures in a timely manner. Given the rapid increase in traffic volume, it is often infeasible to monitor every individual flow in the backbone network due to space and time constraints. Instead, we are often required to aggregate packets into a small number of flows and develop detection methods over the aggregated flows, namely aggregate queries. Although flow aggregation enables ISPs to detect network problems in a timely manner, it cannot preserve certain critical information in network traffic, e.g., IP addresses, port numbers, etc. Due to such missing information, it becomes very difficult (or often infeasible) for ISPs to identify the sources of network attacks or the causes of traffic anomalies, which is important for resolving network problems effectively. In this paper, we propose an efficient data structure, namely the fast sketch, which can both aggregate packets into a small number of flows and enable ISPs to identify the anomalous keys (IP addresses, port numbers, etc.) with small space and time. With it, the number of aggregated flows can achieve the lower bound of heavy-change detection, i.e., Ω(k log(n/k)), where n is the range of flow keys and k is an upper bound on the number of anomalous keys. In addition, our sketch combines combinatorial group testing and the quotient technique to identify anomalous keys, which guarantees a sub-linear running time. We expect our work will improve the practice of real-time traffic monitoring in high-speed networked systems.
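
The flow-aggregation idea can be illustrated with a count-min sketch, a simpler cousin of the authors' fast sketch (which additionally encodes keys so that anomalous flows can be identified, not just counted). The flow keys below are invented.

```python
import hashlib

# Minimal count-min sketch for aggregating per-flow packet counts.
# Estimates never underestimate; they overestimate only on hash collisions.

class CountMin:
    def __init__(self, depth=4, width=64):
        self.depth, self.width = depth, width
        self.rows = [[0] * width for _ in range(depth)]

    def _index(self, key, row):
        h = hashlib.blake2b(f"{row}:{key}".encode(), digest_size=8).digest()
        return int.from_bytes(h, "big") % self.width

    def add(self, key, count=1):
        for r in range(self.depth):
            self.rows[r][self._index(key, r)] += count

    def estimate(self, key):
        return min(self.rows[r][self._index(key, r)] for r in range(self.depth))

cm = CountMin()
for _ in range(100):
    cm.add("10.0.0.1:443")
cm.add("10.0.0.2:80", 7)
```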

【Keywords】: IP networks; Internet; combinatorial mathematics; computer viruses; data structures; group theory; query processing; telecommunication security; telecommunication traffic; IP addresses; ISP; Internet service providers; anomalous keys identification; attack detection; backbone network; botnets; combinatorial group testing; data structure; failure detection; fast sketch; high-speed network traffic; high-speed networked system; network failures; packet aggregation; polymorphic virus; polymorphic worm; port numbers; query aggregation; quotient technique; real-time traffic monitoring; security problems; space constraints; sublinear running time; time constraints; traffic anomalies; traffic dynamic monitoring; Aggregates; IP networks; Internet; Monitoring; Protocols; Radiation detectors; Testing

327. Using host profiling to refine statistical application identification.

Paper Link】 【Pages】:2746-2750

【Authors】: Mohamad Jaber ; Roberto G. Cascella ; Chadi Barakat

【Abstract】: The identification of Internet traffic applications is very important for ISPs and network administrators to protect their resources from unwanted traffic and prioritize major applications. Statistical methods are preferred to port-based ones, since they do not rely on the port number, which can change dynamically, and to deep packet inspection, since they also work for encrypted traffic. These methods combine the statistical analysis of application packet flow parameters, such as packet size and inter-packet time, with machine learning techniques. Other successful approaches rely on the way the hosts communicate and their traffic patterns to identify applications. In this paper, we propose a new online method for traffic classification that combines the statistical and host-based approaches in order to construct a robust and precise method for early Internet traffic identification. Without loss of generality we use the packet size as the main feature for the classification and we benefit from the traffic profile of the host (i.e., which application and how much) to refine the classification and decide in favor of this or that application. The host profile is then updated online based on the result of the classification of previous flows originated by or addressed to the same host. We evaluate our method on real traces using several applications. The results show that leveraging the traffic pattern of the host improves the performance of statistical methods. They also prove the capacity of our solution to derive profiles for the traffic of Internet hosts and to identify the services they provide.
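
The combination of a per-host prior with packet-size statistics can be sketched as a naive-Bayes decision. The application names, packet sizes, and all probabilities below are invented for illustration; the paper's method also updates the host profile online.

```python
# Toy classifier: pick the application maximizing
#   prior(app | host) * prod_s P(packet size s | app)
# over the first few packet sizes of a flow.

def classify(sizes, host_prior, likelihood):
    scores = {}
    for app, prior in host_prior.items():
        score = prior
        for s in sizes:
            score *= likelihood[app].get(s, 1e-6)  # smoothing for unseen sizes
        scores[app] = score
    return max(scores, key=scores.get)

likelihood = {
    "web":  {64: 0.1, 512: 0.3, 1500: 0.6},
    "voip": {64: 0.7, 512: 0.2, 1500: 0.1},
}
# The host's history says it mostly runs VoIP, tilting a borderline flow:
app = classify([64, 512], {"web": 0.2, "voip": 0.8}, likelihood)
```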

【Keywords】: Internet; learning (artificial intelligence); statistical analysis; telecommunication traffic; ISP; Internet traffic applications; application packet flow parameters; deep packet inspection; encrypted traffic; host profiling; inter-packet time; machine learning techniques; network administrators; online method; packet size; statistical application identification; traffic classification; traffic profile; IP networks; Internet; Labeling; Monitoring; Probability; Statistical analysis; Training

328. Cuckoo sampling: Robust collection of flow aggregates under a fixed memory budget.

【Paper Link】 【Pages】:2751-2755

【Authors】: Josep Sanjuàs-Cuxart ; Pere Barlet-Ros ; Nick G. Duffield ; Ramana Rao Kompella

【Abstract】: Collecting per-flow aggregates on high-speed links is challenging and usually requires traffic sampling to handle peak rates and extreme traffic mixes. Static selection of sampling rates is problematic, since worst-case resource usage is orders of magnitude higher than the average. To address this issue, adaptive schemes that periodically adjust packet sampling rates to network conditions have been proposed in recent years. However, such proposals rely on complex algorithms and on data structures that are costly to maintain. As a consequence, adaptive sampling is still not widely implemented in routers.

【Keywords】: data structures; packet switching; sampling methods; telecommunication congestion control; telecommunication traffic; CPU cost; Cuckoo sampling; adaptive sampling; complex algorithms; data structure; epoch measurement; fixed memory budget; flow aggregates; flow sampling based measurement scheme; high-speed link; packet sampling rates; per-packet operation; robust collection; routers; synthetic network traffic; traffic sampling; Aggregates; Arrays; Hardware; Memory management; Monitoring; Reservoirs

329. Argus: End-to-end service anomaly detection and localization from an ISP's point of view.

【Paper Link】 【Pages】:2756-2760

【Authors】: He Yan ; Ashley Flavel ; Zihui Ge ; Alexandre Gerber ; Daniel Massey ; Christos Papadopoulos ; Hiren Shah ; Jennifer Yates

【Abstract】: Recent trends in the networked services industry (e.g., CDN, VPN, VoIP, IPTV) see Internet Service Providers (ISPs) leveraging their existing network connectivity to provide an end-to-end solution. Consequently, new opportunities are available to monitor and improve the end-to-end service quality by leveraging the information from inside the network. We propose a new approach to detect and localize end-to-end service quality issues in such ISP-managed networked services by utilizing traffic data passively monitored at the ISP side, the ISP network topology, routing tables and geographic information. This paper presents the design of a generic service quality monitoring system “Argus”. Argus has been successfully deployed in a tier-1 ISP to monitor millions of users of its CDN service and assist operators to detect and localize end-to-end service quality issues. This operational experience demonstrates that Argus is effective in accurate, quick detection and localization of important service quality issues.

【Keywords】: Internet; computer network performance evaluation; computer network security; computerised monitoring; telecommunication network routing; telecommunication network topology; Argus; CDN service; ISP network topology; ISP-managed networked services; Internet Service Providers; end-to-end service anomaly detection; end-to-end service anomaly localization; end-to-end service quality; geographic information; network connectivity; networked services industry; routing tables; service quality monitoring system; tier-1 ISP; traffic data; Internet; Measurement; Monitoring; Robustness; Routing; Smoothing methods; Time series analysis

330. A reverse auction framework for access permission transaction to promote hybrid access in femtocell network.

【Paper Link】 【Pages】:2761-2765

【Authors】: Yanjiao Chen ; Jin Zhang ; Qian Zhang ; Juncheng Jia

【Abstract】: Femtocells are a new class of low-power, low-cost base stations (BSs) that can provide better coverage and improved voice/data Quality of Service (QoS). Hybrid access in a two-tier macro-femto network is regarded as the most promising access control mechanism for enhancing overall network performance. However, the implementation of hybrid access is hindered by the lack of a market that can motivate ACcess Permission (ACP) trading between Wireless Service Providers (WSPs) and private femtocell owners. In this paper, we propose a reverse auction framework for fair and efficient ACP transactions. Unlike the strict outcomes (where a bidder's demand must be fully satisfied) assumed in most existing work on auction design, the proposed auction model allows range outcomes, in which the WSP accepts partial demand fulfillment and femtocell owners make best-effort sales. We first propose a Vickrey-Clarke-Groves (VCG) based mechanism to maximize social welfare. As the VCG mechanism is too time-consuming, we further propose an alternative truthful mechanism (referred to as the suboptimal mechanism) with acceptable polynomial computational complexity. Simulation results show that the suboptimal mechanism generates almost the same social welfare and WSP cost as the VCG mechanism.

【Keywords】: commerce; computational complexity; femtocellular radio; quality of service; Vickrey-Clarke-Groves based mechanism; access control mechanism; access permission transaction; femtocell network; hybrid access; polynomial computational complexity; reverse auction framework; two-tier macro-femto network; voice/data quality of service; wireless service providers; Access control; Computational complexity; Computational modeling; Cost accounting; Resource management; Simulation; Wireless communication

331. Pricing strategies for user-provided connectivity services.

【Paper Link】 【Pages】:2766-2770

【Authors】: Mohammad Hadi Afrasiabi ; Roch Guérin

【Abstract】: User-provided connectivity (UPC) services offer a possible alternative, or complement, to existing infrastructure-based connectivity. A user allows other users to occasionally connect through its “home base” in exchange for reciprocation, or possibly compensation. This service model exhibits strong positive and negative externalities. A large user base makes the service more attractive, as it offers more connectivity options to roaming users, but it also implies a greater volume of (roaming) traffic passing through a user's home base, which can increase congestion. These interactions make it difficult to predict the eventual success of such a service offering, and in particular how to effectively price it. This paper investigates a two-price policy where the first price is an introductory price that expires once service adoption reaches a certain level. The paper uses a simplified analytical model to investigate pricing strategies under this policy, and their sensitivity to changes in system parameters. The insight and practical guidelines this yields are validated numerically under more realistic conditions.

【Keywords】: Internet; home computing; pricing; FON; UPC services; community-based Internet access model; compensation; infrastructure-based connectivity; pricing strategies; reciprocation; roaming traffic; roaming users; service model; service offering; user home base; user-provided connectivity services; Guidelines; Internet; Monopoly; Numerical models; Pricing; Robustness; Switches

332. Semi-dynamic Hawk and Dove game, applied to power control.

【Paper Link】 【Pages】:2771-2775

【Authors】: Eitan Altman ; Dieter Fiems ; Majed Haddad ; Julien Gaillard

【Abstract】: In this paper, we study a power control game over a collision channel. Each player has an energy state. Choosing a higher transmission power increases the chance of a successful transmission (in the presence of interference from others) at the cost of a larger decrease in the battery's energy state. We study this dynamic game restricting attention to simple non-dynamic strategies, in which a player chooses a power level and maintains it for the lifetime of the battery. We identify a surprising paradox in our Hawk-Dove game, which we term the initial energy paradox.

【Keywords】: game theory; mobile radio; power control; telecommunication congestion control; battery energy state; battery lifetime; collision channel; dynamic game; nondynamic strategies; power control game; semidynamic Hawk-Dove game; transmission power; Batteries; Electric breakdown; Energy states; Games; Mobile communication; Nash equilibrium; Neodymium

333. Design and analysis of a choking strategy for coalitions in data swarming systems.

【Paper Link】 【Pages】:2776-2780

【Authors】: Honggang Zhang ; Sudarshan Vasudevan

【Abstract】: We design and analyze a mechanism for forming coalitions of peers in a data swarming system where peers have heterogeneous upload capacities. A coalition is a set of peers that explicitly cooperate with other peers inside the coalition via choking, data replication, and capacity allocation strategies. Further, each peer interacts with other peers outside its coalition via potentially distinct choking, data replication, and capacity allocation strategies. Following on our preliminary work in [14] that demonstrated significant performance benefits of coalitions, we present here a comprehensive analysis of the choking strategy for coalitions. We first develop an analytical model to understand a simple random choking strategy as a within-coalition strategy and show that it accurately predicts a coalition's performance. Our model shows that the random choking strategy can help a coalition achieve near-optimal performance by optimally choosing the re-choking interval lengths and the number of unchoke slots. Further, our analytical model can be easily adapted to model a BitTorrent-like swarm. Using extensive simulations, we demonstrate improvements in the performance of a swarming system due to coalition formation.

【Keywords】: peer-to-peer computing; BitTorrent-like swarm; capacity allocation strategies; coalition formation; data replication; data swarming systems; heterogeneous upload capacities; near-optimal performance; peer coalitions; random choking strategy; re-choking interval lengths; unchoke slots; Adaptation models; Analytical models; Mathematical model; Nickel; Numerical models; Resource management; Steady-state

334. A Bayesian based incentive-compatible routing mechanism for Dynamic Spectrum Access networks.

【Paper Link】 【Pages】:2781-2785

【Authors】: Swastik Brahma ; Mainak Chatterjee

【Abstract】: In this paper, we address the problem of incentive based routing in Dynamic Spectrum Access (DSA) networks, where each secondary node incurs a cost for routing traffic from a flow and also has a certain capacity that it can provide to the flow. We propose a path auction based routing mechanism in which each secondary node announces its privately known cost and capacity, based on which a route is chosen and payments are made to the nodes. We design the route selection mechanism and the pricing function in a manner that can induce nodes to reveal their cost and capacity honestly, while minimizing the payment that needs to be given to the nodes that relay traffic, thereby making our path auction optimal. In our proposed mechanism, the optimal route over which traffic should be routed and the payment that each node should receive can be computed in polynomial time. Simulation results suggest that our mechanism can ensure truthful reporting of both cost and capacity while making a payment less than that required for VCG based least cost path routing.

【Keywords】: incentive schemes; radio networks; telecommunication network routing; Bayesian-based incentive-compatible routing mechanism; DSA networks; VCG-based least-cost path routing; dynamic spectrum access networks; path auction-based routing mechanism; polynomial time; pricing function; relay traffic; route selection mechanism; routing traffic; Bayesian methods; Joints; Polynomials; Pricing; Random variables; Routing; Vectors

335. Perception-based playout scheduling for high-quality real-time interactive multimedia.

【Paper Link】 【Pages】:2786-2790

【Authors】: Zixia Huang ; Klara Nahrstedt

【Abstract】: Existing media playout scheduling (MPS) schemes usually focus on selecting and scheduling packets according to optimized Internet media metrics, which are only partially relevant to subjective human perception in an interactive system. The MPS design challenges are two-fold. First, human preferences are concurrently dominated by multiple quality attributes of the streaming media, whose perceptual tradeoffs have not been well understood and hence have not been used as an integral part of an efficient MPS design. Second, people's perception can be impacted by the flicker effect caused by Internet dynamics and the resulting MPS adaptations. In this paper, we propose a new and adaptive perception-based MPS scheme to deliver high-quality real-time interactive multimedia. We first investigate the perceptual tradeoffs among the multi-modal bundle streaming qualities in a real Internet environment. We then present our MPS design, which finds the bundle quality tradeoffs while minimizing flicker degradations. Evaluation results show the performance of our MPS scheme.

【Keywords】: Internet; adaptive scheduling; interactive systems; media streaming; Internet dynamics; Internet media metrics; MPS adaptations; MPS design; adaptive perception-based playout scheduling; flicker degradation minimization; flicker effect; high-quality real-time interactive multimedia; human preferences; media playout scheduling scheme; multimodal bundle streaming qualities; packet scheduling; packet selection; people perceptions; perceptual tradeoffs; quality attributes; streaming media; subjective human perception; Delay; Humans; Internet; Media; Receivers; Streaming media

336. Coding and replication co-design for interactive multiview video streaming.

【Paper Link】 【Pages】:2791-2795

【Authors】: Huan Huang ; Bo Zhang ; S.-H. Gary Chan ; Gene Cheung ; Pascal Frossard

【Abstract】: Multiview video refers to the simultaneous capturing of multiple video views with an array of closely spaced cameras. In an interactive multiview video streaming (IMVS) system, a client can play back the content in time in a single view, and may observe a scene of interest by switching to different viewpoints. Users independently choose their own view navigation paths through the high-dimensional multiview data. Distributed servers are deployed to collaboratively replicate video content in order to support user scalability. Such a system typically presents challenges in both coding and content replication. In coding, the multiview video must be encoded in order to support efficient view-switching and distributed replication. In content replication, it is important to decide which data blocks to store at each server to facilitate view-switches at any time. In this paper, we co-design a coding structure and a distributed content replication strategy. First, we propose a coding structure based on redundant P-frames and distributed source coding (DSC) frames to achieve efficiency in coding, view switches and content replication. We then propose a heuristic-based distributed and cooperative replication strategy to take advantage of the correlation between the multiple views for resource-effective content delivery. Simulation results show that our coding and replication co-design is cost-effective in supporting IMVS services.

【Keywords】: cameras; interactive video; video coding; video streaming; DSC frames; IMVS system; closely-spaced camera array; coding-distributed content replication co-design; distributed servers; distributed source coding frames; heuristic-based distributed-cooperative replication strategy; high-dimensional multiview data; interactive multiview video streaming system; multiple-video view capturing; redundant P-frames; resource-effective content delivery; user scalability; video content replication; view navigation paths; view-switching; Bandwidth; Delay; Encoding; Navigation; Servers; Streaming media; Switches

337. Rate allocation for layered multicast streaming with inter-layer network coding.

【Paper Link】 【Pages】:2796-2800

【Authors】: Joerg Widmer ; Andrea Capalbo ; Antonio Fernández Anta ; Albert Banchs

【Abstract】: Multi-layer video streaming makes it possible to provide different video qualities to a group of multicast receivers with heterogeneous receive rates. The number of layers received determines the quality of the decoded video stream. For such layered multicast streaming, network coding provides higher capacity than multicast routing. Network coding can be performed within a layer (intra-layer) or across layers (inter-layer), and in general inter-layer coding outperforms intra-layer coding. An optimal solution to a network-coded layered multicast problem may require decoding of the network code at interior nodes to extract information to be forwarded. However, decoding consumes resources and introduces delay, which is particularly undesirable at the interior nodes (the routers) of the network. In this paper, we therefore focus on the inter-layer network coding problem without decoding at interior nodes. We propose a heuristic algorithm for rate allocation and code assignment based on the Edmonds-Karp maximum flow algorithm, and our simulations show that it can even outperform other heuristics that do require decoding at interior nodes.
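The Edmonds-Karp primitive that the heuristic builds on can be sketched as follows; this is the generic BFS augmenting-path maximum-flow algorithm, not the paper's rate allocation or code assignment procedure itself:

```python
from collections import deque

def edmonds_karp(cap, s, t):
    """Classic Edmonds-Karp max flow: repeatedly find a shortest (BFS)
    augmenting path in the residual graph and push the bottleneck flow.
    cap[u][v] is the capacity of edge (u, v); returns the max flow value."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path with positive residual capacity.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:       # no augmenting path left: flow is maximal
            return total
        # Find the bottleneck residual capacity along the path.
        push, v = float('inf'), t
        while v != s:
            u = parent[v]
            push = min(push, cap[u][v] - flow[u][v])
            v = u
        # Augment along the path (reverse edges get negative flow).
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += push
            flow[v][u] -= push
            v = u
        total += push
```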

【Keywords】: multicast communication; network coding; telecommunication network routing; video coding; video streaming; Edmonds-Karp maximum flow algorithm; code assignment; decoded video stream quality; heterogeneous receive rates; heuristic algorithm; interlayer network coding; intralayer network coding; layered multicast streaming; multicast receivers; multicast routing; multilayer video streaming; network code decoding; network routers; rate allocation; video qualities; Algorithm design and analysis; Decoding; Encoding; Heuristic algorithms; Network coding; Receivers; Streaming media

338. Simple regenerating codes: Network coding for cloud storage.

【Paper Link】 【Pages】:2801-2805

【Authors】: Dimitris S. Papailiopoulos ; Jianqiang Luo ; Alexandros G. Dimakis ; Cheng Huang ; Jin Li

【Abstract】: Network codes designed specifically for distributed storage systems have the potential to provide dramatically higher storage efficiency for the same availability. One main challenge in the design of such codes is the exact repair problem: if a node storing encoded information fails, in order to maintain the same level of reliability we need to create encoded information at a new node. One of the main open problems in this emerging area has been the design of simple coding schemes that allow exact and low cost repair of failed nodes and have high data rates. In particular, all prior known explicit constructions have data rates bounded by 1/2. In this paper we introduce the first family of distributed storage codes that have simple look-up repair and can achieve rates up to 2/3. Our constructions are very simple to implement and perform exact repair by simple XORing of packets. We experimentally evaluate the proposed codes in a realistic cloud storage simulator and show significant benefits in both performance and reliability compared to replication and standard Reed-Solomon codes.
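The flavor of "exact repair by simple XORing of packets" can be illustrated with a toy single-parity code; this is only a minimal sketch of XOR-based repair, not the paper's actual rate-2/3 construction:

```python
# Toy illustration of XOR-based exact repair: k data blocks plus one XOR
# parity block. Any single failed block is rebuilt by XORing the k
# surviving blocks, because the XOR of all k+1 stored blocks is zero.
def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data_blocks):
    parity = data_blocks[0]
    for blk in data_blocks[1:]:
        parity = xor_blocks(parity, blk)
    return data_blocks + [parity]

def repair(stored, failed_index):
    survivors = [blk for i, blk in enumerate(stored) if i != failed_index]
    rebuilt = survivors[0]
    for blk in survivors[1:]:
        rebuilt = xor_blocks(rebuilt, blk)
    return rebuilt

blocks = [b"abcd", b"efgh", b"ijkl"]
stored = encode(blocks)
assert repair(stored, 1) == b"efgh"   # lost data block rebuilt exactly
```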

【Keywords】: Reed-Solomon codes; cloud computing; network coding; reliability; storage area networks; storage management; cloud storage; cloud storage simulator; distributed storage codes; distributed storage systems; information encoding; information reliability; network codes; network coding; packet XORing; regenerating codes; replication codes; standard Reed-Solomon codes; storage efficiency; Availability; Cloud computing; Encoding; Maintenance engineering; Peer to peer computing; Servers

339. On region-based fault tolerant design of distributed file storage in networks.

【Paper Link】 【Pages】:2806-2810

【Authors】: Sujogya Banerjee ; Shahrzad Shirazipourazad ; Arunabha Sen

【Abstract】: Distributed storage of data files across different nodes of a network enhances data reliability by offering protection against node failure. In an (N, K), N ≥ K, file distribution scheme, a file F of size |F| is encoded into N segments of size |F|/K in such a way that the entire file can be reconstructed by accessing any K segments. For the reconstruction scheme to work, it is essential that K segments of the file are stored on nodes that are connected in the network. In case of node failures, however, the network may become disconnected (i.e., split into several connected components). We focus on node failures that are spatially correlated, or region-based. Such failures are often encountered in disaster situations or natural calamities, where only the nodes in the disaster zone are affected. The goal of this research is to devise a file segment distribution scheme so that, even if the network becomes disconnected due to any region fault, at least the largest connected component will have at least K distinct file segments with which to reconstruct the entire file. The distribution scheme also ensures that the total storage requirement is minimized. We provide an optimal solution through Integer Linear Programming and an approximation algorithm with a guaranteed performance bound of O(ln n) for arbitrary networks. The performance of the approximation algorithm is evaluated by simulation on two real networks.

【Keywords】: approximation theory; computational complexity; distributed processing; integer programming; linear programming; software fault tolerance; storage management; approximation algorithm; data reliability enhancement; distributed data file storage; distribution scheme; file distribution scheme; file reconstruction scheme; file segment distribution scheme; integer linear programming; networks; node failure protection; region-based fault tolerant design; total storage requirement minimization; Approximation algorithms; Approximation methods; Color; Encoding; Image color analysis; Layout; Robustness

340. Energy and latency analysis for in-network computation with compressive sensing in wireless sensor networks.

【Paper Link】 【Pages】:2811-2815

【Authors】: Haifeng Zheng ; Shilin Xiao ; Xinbing Wang ; Xiaohua Tian

【Abstract】: In this paper, we study data gathering with compressive sensing from the perspective of in-network computation in random networks, in which n nodes are uniformly and independently deployed in a unit square area. We formulate data gathering as the computation of a multiround random linear function, and we study the performance of in-network computation with compressive sensing, in terms of energy consumption and latency, in both centralized and distributed fashions. For the centralized approach, we propose a tree-based protocol for computing the multiround random linear function. The complexity analysis shows that, compared with the traditional approach to data gathering, the proposed protocol saves energy and reduces latency by a factor of Θ(√(n/log n)). For the distributed approach, we propose a gossip-based scheme and analyze its energy and latency theoretically. We show that our approach needs fewer transmissions than the scheme using randomized gossip.
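One round of the random linear function can be sketched as below; the ±1 sensing coefficients and the shared-seed scheme are illustrative assumptions, not the paper's protocol:

```python
import random

# Sketch of one compressive-sensing measurement round: every node i
# multiplies its reading x[i] by a pseudo-random coefficient and the
# products are summed (in the network, this sum is aggregated toward the
# sink, e.g., up a routing tree). Seeding each round with a shared value
# lets the sink regenerate the sensing coefficients for reconstruction.
def measurement(x, round_seed):
    rng = random.Random(round_seed)
    return sum(rng.choice((-1, 1)) * xi for xi in x)

def gather(x, m):
    # m rounds -> m measurements y = A x, from which a sparse x can later
    # be reconstructed by a compressive-sensing decoder at the sink.
    return [measurement(x, r) for r in range(m)]
```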

【Keywords】: compressed sensing; data compression; protocols; wireless sensor networks; centralized approach; compressive sensing; data gathering; distributed approach; energy analysis; energy consumption; gossip-based approach; in-network computation; latency analysis; multiround random linear function; random networks; randomized gossip; tree-based protocol; wireless sensor networks; Algorithm design and analysis; Complexity theory; Compressed sensing; Energy consumption; Processor scheduling; Protocols; Wireless sensor networks

341. SpeedBalance: Speed-scaling-aware optimal load balancing for green cellular networks.

【Paper Link】 【Pages】:2816-2820

【Authors】: Kyuho Son ; Bhaskar Krishnamachari

【Abstract】: This paper considers a component-level deceleration technique in BS operation, called speed-scaling, that is more conservative than entirely shutting down BSs, yet can conserve dynamic power effectively during periods of low load while ensuring full coverage at all times. By formulating a total cost minimization that allows for a flexible tradeoff between delay and energy, we first study how to adaptively vary the processing speed based on incoming load. We then investigate how this speed-scaling affects the design of network protocol, specifically, with respect to user association. Based on our investigation, we propose and analyze a distributed algorithm, called SpeedBalance, that can yield significant energy savings.
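The delay-energy tradeoff behind speed-scaling can be sketched as a one-line cost minimization; the cubic power model and the M/M/1-style delay term below are illustrative assumptions, not the paper's exact formulation:

```python
# Adaptive speed choice: pick a processing speed s > load minimizing
# power(s) + beta / (s - load), where 1/(s - load) is an M/M/1-style
# mean delay and beta weights delay against energy. The cubic power
# model used in the example is a common assumption, not the paper's.
def best_speed(load, speeds, beta, power):
    feasible = [s for s in speeds if s > load]
    return min(feasible, key=lambda s: power(s) + beta / (s - load))

speeds = [1.0, 2.0, 3.0, 4.0]
# Under low load a slow (low-power) speed wins; under high load the
# delay term forces a faster speed.
s_low = best_speed(0.5, speeds, beta=1.0, power=lambda s: s ** 3)
s_high = best_speed(2.5, speeds, beta=1.0, power=lambda s: s ** 3)
```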

【Keywords】: cellular radio; distributed algorithms; energy conservation; environmental factors; protocols; BS operation; SpeedBalance algorithm; component-level deceleration technique; delay-energy tradeoff; distributed algorithm; dynamic power conservation; energy savings; green cellular networks; network protocol design; speed-scaling-aware optimal load balancing; total cost minimization; user association; Delay; Energy consumption; Green products; Load management; Mobile communication; Power demand; Program processors

342. Realizing the full potential of PSM using proxying.

【Paper Link】 【Pages】:2821-2825

【Authors】: Ning Ding ; Abhinav Pathak ; Dimitrios Koutsonikolas ; Clayton Shepard ; Y. Charlie Hu ; Lin Zhong

【Abstract】: The WiFi radio in smartphones consumes a significant portion of energy when active. To reduce this energy consumption, the Power Saving Mode (PSM) was standardized in IEEE 802.11, and two major implementations, Static PSM and Dynamic PSM, have been widely used in mobile devices. Unfortunately, both PSMs have inherent drawbacks: Static PSM is energy efficient but imposes considerable extra delay on data transfers; Dynamic PSM incurs little extra delay but misses energy saving opportunities. In this paper, we first analyze a one-week trace from 10 users and show that more than 80% of all traffic consists of Web 2.0 flows, which have very small sizes and short durations. Targeting these short but dominant flows, we propose a system called Percy to achieve the best of both worlds (Static and Dynamic PSM), i.e., to maximize the energy saving while minimizing the delay in flow completion time. Percy works by deploying a web proxy at the AP and suitably configuring the PSM parameters, and is designed to work with unchanged clients running Dynamic PSM, and unchanged APs and Internet servers. We evaluate our system via trace-driven testbed experiments. Our results show that Percy saves 40-70% energy compared to the Dynamic PSM configurations of Nokia, iPhone and Android, while imposing extra delay so low that it can hardly be perceived by users.

【Keywords】: Internet; smart phones; wireless LAN; AP; Android; IEEE 802.11; Internet servers; Nokia; Percy system; Web 2.0 flows; Web proxy; WiFi radio; data transfers; delay minimization; dynamic PSM; energy consumption reduction; energy saving maximization; energy saving opportunities; flow completion time; iPhone; mobile devices; power saving mode; proxying; smartphones; static PSM; trace-driven testbed experiments; Computer aided manufacturing; Delay; Energy consumption; IEEE 802.11 Standards; Internet; Servers; Smart phones

343. Minimum energy coding for wireless nanosensor networks.

【Paper Link】 【Pages】:2826-2830

【Authors】: Murat Kocaoglu ; Özgür B. Akan

【Abstract】: Wireless nanosensor networks (WNSNs), collections of nanosensors equipped with communication units, can be used for sensing and data collection with extremely high resolution and low power consumption in various applications. In order to realize WNSNs, it is essential to develop energy-efficient communication techniques, since nanonodes are severely energy-constrained. In this paper, a novel minimum energy coding scheme (MEC) is proposed to achieve energy efficiency in WNSNs. Unlike existing minimum energy codes, MEC maintains the desired Hamming distance while minimizing energy, in order to provide reliability. It is analytically shown that, with MEC, codewords can be decoded perfectly for large code distances if the source set cardinality M is less than the inverse of the symbol error probability, 1/p_s. Performance analysis shows that MEC outperforms popular codes such as Hamming, Reed-Solomon and Golay in terms of average energy per codeword.
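The core idea of minimizing energy subject to a Hamming-distance constraint can be shown with a brute-force greedy sketch; this assumes an on-off-keying-style energy model (fewer 1s means less transmit energy) and is only an illustration, not the paper's MEC construction:

```python
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def min_energy_code(n, d, M):
    """Greedy sketch: scan all n-bit words in order of increasing Hamming
    weight (low weight = low transmit energy under on-off keying) and keep
    each word that stays at pairwise distance >= d from the code so far.
    Returns M codewords, or None if the (n, d, M) combination is infeasible."""
    words = sorted(product((0, 1), repeat=n), key=sum)
    code = []
    for w in words:
        if all(hamming(w, c) >= d for c in code):
            code.append(w)
            if len(code) == M:
                return code
    return None

# Encode M = 4 source symbols with length n = 5 and distance d = 2:
code = min_energy_code(5, 2, 4)
```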

【Keywords】: Hamming codes; decoding; error statistics; nanosensors; telecommunication network reliability; wireless sensor networks; Golay codes; Hamming codes; Hamming distance; Reed-Solomon codes; WNSN; code distance; codeword decoding; data collection; energy-efficient communication techniques; minimum-energy coding scheme; nanonodes; network reliability; symbol error probability; wireless nanosensor networks; Decoding; Encoding; Error probability; Hamming distance; Modulation; Reed-Solomon codes; Reliability

344. Selection of a rate adaptation scheme for network hardware.

【Paper Link】 【Pages】:2831-2835

【Authors】: Andrea Francini

【Abstract】: Rate adaptation is a family of technologies driven by the expectation that large energy savings can be achieved in packet networks by dynamically adjusting the capacity of network components to the load that they are required to sustain. In this paper we focus on packet-timescale rate adaptation (PTRA) techniques, which apply to individual traffic processing chips in the circuit packs of network systems. We look at the available options for PTRA implementation and compare their performance in realistic multi-device configurations. We find that in linear multi-device topologies the sleep-state-exploitation (SSE) scheme, which only adds a sleep state to the ordinary full-capacity state, offers the best compromise between energy savings and the unavoidable packet delay degradation of PTRA.

【Keywords】: transport protocols; PTRA technique; SSE scheme; TCP connections; energy savings; full-capacity state; linear multidevice topologies; multidevice configurations; network component capacity; network hardware; network system circuit packs; packet delay degradation; packet networks; packet-timescale rate adaptation technique; sleep-state-exploitation scheme; traffic processing chips; Delay; Energy consumption; Hardware; Oscillators; Power demand; Traffic control; Upper bound; energy efficiency; rate adaptation; sleep mode

345. Estimating age privacy leakage in online social networks.

【Paper Link】 【Pages】:2836-2840

【Authors】: Ratan Dey ; Cong Tang ; Keith W. Ross ; Nitesh Saxena

【Abstract】: We perform a large-scale study to quantify just how severe the privacy leakage problem is in Facebook. As a case study, we focus on estimating birth year, which is a fundamental human attribute and, for many people, a private one. Specifically, we attempt to estimate the birth year of over 1 million Facebook users in New York City. We examine the accuracy of estimation procedures for several classes of users: (i) highly private users, who do not make their friend lists public; (ii) users who hide their birth years but make their friend lists public. To estimate Facebook users' ages, we exploit the underlying social network structure to design an iterative algorithm, which derives age estimates based on friends' ages, friends of friends' ages, and so on. We find that for most users, including highly private users who hide their friend lists, it is possible to estimate ages with an error of only a few years. We also make a specific suggestion to Facebook which, if implemented, would greatly reduce privacy leakages in its service.
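The friend-based iterative estimator described above might look like the following sketch; uniform averaging over neighbors and a fixed round count are assumptions here, and the paper's actual estimator is more involved:

```python
# Iteratively estimate hidden birth years from friends' (estimated) ages:
# users with public birth years seed the estimates, and each hidden
# user's value is repeatedly recomputed as the mean of its neighbors'
# current estimates, propagating information to friends of friends, etc.
def estimate_birth_years(friends, public, rounds=20):
    est = dict(public)
    for _ in range(rounds):
        for user, nbrs in friends.items():
            if user in public:
                continue  # known values are never overwritten
            vals = [est[v] for v in nbrs if v in est]
            if vals:
                est[user] = sum(vals) / len(vals)
    return est

friends = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
public = {"b": 1980, "c": 1984}
est = estimate_birth_years(friends, public)   # "a" is the hidden user
```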

【Keywords】: age issues; data privacy; iterative methods; social networking (online); Facebook users; New York city; age privacy leakage estimation; birth year estimation; highly private users; iterative algorithm; online social networks; social network structure; user birth year hiding; Accuracy; Educational institutions; Estimation; Facebook; Iterative methods; Privacy

346. Pssst, over here: Communicating without fixed infrastructure.

Paper Link】 【Pages】:2841-2845

【Authors】: Tom Callahan ; Mark Allman ; Michael Rabinovich

【Abstract】: This paper discusses a way to communicate without relying on fixed infrastructure at some central hub. This can be useful for bootstrapping loosely connected peer-to-peer systems, as well as for circumventing egregious policy-based blocking (e.g., for censorship purposes). Our techniques leverage the caching and aging properties of DNS records to create a covert channel of sorts that can be used to store ephemeral information. The only requirement imposed on the actors wishing to publish and/or retrieve this information is that they share a secret that only manifests outside the system and is never directly encoded within the network itself. We conduct several experiments that illustrate the efficacy of our techniques to exchange an IP address that is presumed to be a rendezvous point for future communication.

【Keywords】: IP networks; Internet; cache storage; computer network security; peer-to-peer computing; DNS records; IP address; aging property; bootstrapping; caching property; central hub; covert channel; ephemeral information; loosely-connected peer-to-peer systems; policy-based blocking; Encoding; Internet; Peer to peer computing; Probes; Robustness; Servers; Synchronization

347. EFFORT: Efficient and effective bot malware detection.

Paper Link】 【Pages】:2846-2850

【Authors】: Seungwon Shin ; Zhaoyan Xu ; Guofei Gu

【Abstract】: Many bot detection approaches have been proposed at the host or network level, and each has clear advantages and disadvantages. In this paper, we propose EFFORT, a new host-network cooperated detection framework that attempts to overcome the shortcomings of both approaches while keeping their advantages, i.e., effectiveness and efficiency. Based on intrinsic characteristics of bots, we propose a multi-module approach to correlate information from different host- and network-level aspects and design a multi-layered architecture to efficiently coordinate modules to perform heavy monitoring only when necessary. We have implemented our proposed system and evaluated it on real-world benign and malicious programs running on several diverse real-life office and home machines for several days. The final results show that our system can detect all 15 real-world bots (e.g., Waledac, Storm) with low false positives (0.68%) and with minimal overhead. We believe EFFORT raises a higher bar, and this host-network cooperated design represents a timely effort and a right direction in the malware battle.

【Keywords】: invasive software; EFFORT; bots intrinsic characteristics; efficient and effective bot malware detection; host-and network-level aspects; host-network cooperated detection framework; malicious programs; malware battle; module coordination; multilayered architecture design; multimodule approach; real-world benign programs; Correlation; Engines; Feature extraction; Malware; Monitoring; Servers; Support vector machines

348. Can we beat legitimate cyber behavior mimicking attacks from botnets?

Paper Link】 【Pages】:2851-2855

【Authors】: Shui Yu ; Song Guo ; Ivan Stojmenovic

【Abstract】: Botnets are the engine for malicious activities in cyber space. In order to sustain their botnets and disguise their illegal actions, botnet owners go to great lengths to mimic legitimate cyber behavior and fly under the radar, e.g., flash crowd mimicking attacks on popular websites. Whether mimicking attacks can be beaten remains an open and challenging problem. We use web browsing on popular websites as an example to explore the issue. In our previous work, we discovered that it is almost impossible to detect mimicking attacks from statistics if the number of active bots in a botnet is sufficient (no less than the number of active legitimate users). In this paper, we point out that it is usually hard for botnet owners to field a sufficient number of active bots in practice. Therefore, we can discriminate mimicking attacks when the sufficient-number condition is not met. We prove our claim theoretically and confirm it with simulations. Our findings can also be applied to a large number of other detection-related cases.

【Keywords】: Web sites; information retrieval; online front-ends; security of data; Web browsing; Web sites; active bots; botnets; cyber space; flash crowd mimicking attacks; illegal actions; legitimate cyber behavior; malicious activities; Ash; Browsers; Computer crime; Detection algorithms; Gaussian distribution; Internet; Web pages; botnet; detection; flash crowd attack; mimicking attack

349. Coordination in network security games.

Paper Link】 【Pages】:2856-2860

【Authors】: Marc Lelarge

【Abstract】: Malicious software, or malware for short, has become a major security threat. While originating in criminal behavior, its impact is also influenced by the decisions of legitimate end users. Getting agents in the Internet, and in networks in general, to invest in and deploy security features and protocols is a challenge, in particular because of economic reasons arising from the presence of network externalities. An unexplored direction of this challenge consists in understanding how to align the incentives of the agents of a large network towards better security. This paper addresses this new line of research. We start with an economic model for a single agent, which determines the optimal amount to invest in protection. The model takes into account the vulnerability of the agent to a security breach and the potential loss if a security breach occurs. We derive conditions on the quality of the protection to ensure that the optimal amount spent on security is an increasing function of the agent's vulnerability and potential loss. We also show that for a large class of risks, only a small fraction of the expected loss should be invested. Building on these results, we study a network of interconnected agents subject to epidemic risks. We derive conditions to ensure that the incentives of all agents are aligned towards better security. When agents are strategic, we show that security investments are always socially inefficient due to the network externalities. Moreover, if our conditions are not satisfied, incentives can be aligned towards lower security, leading to an equilibrium with a very high price of anarchy.
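The single-agent model balances investment against breach probability and loss. A worked sketch under an assumed exponential breach probability p(x) = v·exp(-x/c) (our illustrative functional form, not the paper's exact model): minimizing J(x) = p(x)·L + x gives the closed-form optimum below.

```python
import math

# Illustrative single-agent security-investment model. ASSUMPTION: the
# breach probability is p(x) = v * exp(-x/c), where x is the amount
# invested and c measures protection quality; the paper's model is more
# general. The agent minimizes expected cost J(x) = p(x) * L + x.

def optimal_investment(v, L, c):
    """v: vulnerability in (0,1]; L: potential loss; c: protection quality."""
    # dJ/dx = -(v*L/c) * exp(-x/c) + 1 = 0  =>  x* = c * ln(v*L/c),
    # clipped at zero when even the first dollar is not worth spending.
    return max(0.0, c * math.log(v * L / c))

# The optimum increases with vulnerability and with potential loss, and
# stays a small fraction of the expected loss v*L for these values.
x1 = optimal_investment(0.2, 10000, 500)  # expected loss 2000
x2 = optimal_investment(0.4, 10000, 500)  # expected loss 4000
print(round(x1, 1), round(x2, 1))
```

In this toy instance the optimal spend is well under half the expected loss, consistent with the paper's qualitative claim that only a small fraction of the expected loss should be invested.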

【Keywords】: Internet; computer network security; game theory; invasive software; software agents; Internet; criminal behavior; epidemic risks; interconnected agents network; malicious softwares; malwares; network externalities; network security games; security features; security investments; security protocols; security threat; Computational modeling; Computers; Economics; Games; Internet; Investments; Security

350. Improving consolidation of virtual machines with risk-aware bandwidth oversubscription in compute clouds.

Paper Link】 【Pages】:2861-2865

【Authors】: David Breitgand ; Amir Epstein

【Abstract】: Current trends in virtualization, green computing, and cloud computing require ever increasing efficiency in consolidating virtual machines without degrading quality of service. In this work, we consider consolidating virtual machines on the minimum number of physical containers (e.g., hosts or racks) in a cloud where the physical network (e.g., network interface or top of the rack switch link) may become a bottleneck. Since virtual machines do not simultaneously use the maximum of their nominal bandwidth, the capacity of the physical container can be multiplexed. We assume that each virtual machine has a probabilistic guarantee on realizing its bandwidth requirements, as derived from its Service Level Agreement with the cloud provider. Therefore, the problem of consolidating virtual machines on the minimum number of physical containers, while preserving these bandwidth allocation guarantees, can be modeled as a Stochastic Bin Packing (SBP) problem, where each virtual machine's bandwidth demand is treated as a random variable. We consider both offline and online versions of SBP. Under the assumption that the virtual machines' bandwidth consumption obeys a normal distribution, we show a 2-approximation algorithm for the offline version and improve the previously reported results by presenting a (2+ε)-competitive algorithm for the online version. We also observe that a dual polynomial-time approximation scheme (PTAS) for SBP can be obtained via reduction to the two-dimensional vector bin packing problem. Finally, we perform a thorough performance evaluation study using both synthetic and real data to evaluate the behavior of our proposed algorithms, showing their practical applicability.
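With normally distributed demands, a bin's aggregate load is itself normal, so a probabilistic capacity check reduces to a mean-plus-quantile test. A simplified first-fit sketch of this idea (not the paper's 2-approximation; the z value and item sizes are illustrative):

```python
import math

# First-fit heuristic for stochastic bin packing with independent
# normally distributed demands. A bin stays feasible while
#   sum(mu) + z * sqrt(sum(var)) <= capacity,
# which bounds the overflow probability by the standard normal tail
# at z (z = 1.645 corresponds to roughly a 5% overflow risk).

def stochastic_first_fit(items, capacity, z=1.645):
    bins = []        # each bin: [sum of means, sum of variances]
    assignment = []  # bin index chosen for each item
    for mu, var in items:
        for i, (smu, svar) in enumerate(bins):
            if smu + mu + z * math.sqrt(svar + var) <= capacity:
                bins[i] = [smu + mu, svar + var]
                assignment.append(i)
                break
        else:
            bins.append([mu, var])       # open a new bin
            assignment.append(len(bins) - 1)
    return bins, assignment

# Four VMs with (mean, variance) bandwidth demand on 100-unit links.
items = [(30, 25), (30, 25), (30, 25), (30, 25)]
bins, assignment = stochastic_first_fit(items, capacity=100)
print(len(bins), assignment)  # 2 [0, 0, 1, 1]
```

Because variances add while standard deviations do not, packing by effective size rather than by per-item peaks captures the statistical multiplexing gain the abstract refers to.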

【Keywords】: approximation theory; bin packing; cloud computing; computational complexity; environmental factors; normal distribution; risk management; software performance evaluation; virtual machines; virtualisation; 2-approximation algorithm; 2D vector bin packing problem; PTAS; SBP problem; bandwidth allocation guarantees; bandwidth consumption; bandwidth requirements; bottleneck; cloud computing; dual polynomial-time approximation scheme; green computing; normal distribution; performance evaluation; physical containers; probabilistic guarantee; random variable; risk-aware bandwidth oversubscription; service level agreement; stochastic bin packing problem; virtual machines; virtualization; Approximation algorithms; Approximation methods; Bandwidth; Gaussian distribution; Random variables; Vectors; Virtual machining

351. RED-BL: Energy solution for loading data centers.

Paper Link】 【Pages】:2866-2870

【Authors】: Muhammad Saqib Ilyas ; Saqib Raza ; Chao-Chih Chen ; Zartash Afzal Uzmi ; Chen-Nee Chuah

【Abstract】: Cloud infrastructure providers and data center operators spend a major portion of their operations budget on the electric bills. We present RED-BL (Relocate Energy Demand to Better Locations), a framework for determining an optimal mapping of workload to an existing set of data centers while considering the cost of workload relocation. Within each workload mapping interval, the RED-BL solution exploits the geographic diversity in electricity price markets. The temporal diversity in those markets is simultaneously exploited by considering a planning window comprising several mapping intervals. Using workload traces from live Internet applications and electricity prices from the US markets, RED-BL can reduce the electric bill by as much as 81% from the case when the workload is equally distributed. Compared to a single data center deployment, an average reduction of 27% in the electric bill can be achieved when RED-BL uses 10 or more data centers, a common case for most operators. When compared to existing workload relocation solutions, RED-BL achieves a further reduction of 13.63%, on average. While modest, this reduction can save millions of dollars for the operators. The cost of this saving is an inexpensive computation at the start of each planning window.

【Keywords】: cloud computing; computer centres; RED-BL; cloud infrastructure providers; data center operators; electricity prices; live Internet applications; planning window; relocate energy demand to better locations; temporal diversity; workload relocation; Electricity; Greedy algorithms; Internet; Optimization; Planning; Power demand; Servers

352. To migrate or to wait: Bandwidth-latency tradeoff in opportunistic scheduling of parallel tasks.

Paper Link】 【Pages】:2871-2875

【Authors】: Ting He ; Shiyao Chen ; Hyoil Kim ; Lang Tong ; Kang-Won Lee

【Abstract】: We consider the problem of scheduling low-priority tasks onto resources already assigned to high-priority tasks. Due to burstiness of the high-priority workloads, the resources can be temporarily underutilized and made available to the low-priority tasks. The increased level of utilization comes at a cost to the low-priority tasks due to intermittent resource availability. Focusing on two major costs, bandwidth cost associated with migrating tasks and latency cost associated with suspending tasks, we aim at developing online scheduling policies achieving the optimal bandwidth-latency tradeoff for parallel low-priority tasks with synchronization requirements. Under Markovian resource availability models, we formulate the problem as a Markov Decision Process (MDP) whose solution gives the optimal scheduling policy. Furthermore, we discover structures of the problem in the special case of homogeneous availability patterns that enable a simple threshold-based policy that is provably optimal. We validate the efficacy of the proposed policies by trace-driven simulations.
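The provably optimal policy for homogeneous availability patterns is threshold-based. A minimal sketch of such a rule (the cost model and threshold form here are our illustrative assumptions, not the paper's derived policy): when the resource is reclaimed by a high-priority task, migrate only if the expected suspension cost exceeds the one-time migration cost.

```python
# Threshold rule for the migrate-or-wait decision. ASSUMPTIONS: a
# linear latency cost accrued while the task is suspended, and a flat
# bandwidth cost per migration; the paper derives the optimal
# threshold from an MDP over Markovian availability.

def should_migrate(expected_off_time, latency_cost_rate, migration_cost):
    """Migrate iff waiting is expected to cost more than moving."""
    return latency_cost_rate * expected_off_time > migration_cost

# A high-priority burst expected to last 8s, suspension costing
# 2 units/s, migration costing a flat 10 units: migrate.
print(should_migrate(8.0, 2.0, 10.0))  # True
# A short 3s burst does not justify paying the migration cost.
print(should_migrate(3.0, 2.0, 10.0))  # False
```

The bandwidth-latency tradeoff in the title is exactly this comparison: raising the migration cost pushes the threshold up and makes waiting (latency) the preferred option more often.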

【Keywords】: Markov processes; cloud computing; parallel processing; resource allocation; scheduling; Markov decision process; Markovian resource availability models; bandwidth cost; bandwidth-latency tradeoff; cloud-based service model; intermittent resource availability; latency cost; migrating tasks; online scheduling policies; opportunistic scheduling; optimal scheduling policy; parallel low-priority tasks; service level agreement; suspending tasks; synchronization requirements; threshold-based policy; trace-driven simulations; Integrated circuits

353. Joint VM placement and routing for data center traffic engineering.

Paper Link】 【Pages】:2876-2880

【Authors】: Joe Wenjie Jiang ; Tian Lan ; Sangtae Ha ; Minghua Chen ; Mung Chiang

【Abstract】: Today's data centers need efficient traffic management to improve resource utilization in their networks. In this work, we study a joint tenant (e.g., server or virtual machine) placement and routing problem to minimize traffic costs. These two complementary degrees of freedom, placement and routing, are mutually dependent; however, they are often optimized separately in today's data centers. Leveraging and expanding the technique of Markov approximation, we propose an efficient online algorithm in a dynamic environment under changing traffic loads. The algorithm requires a very small number of virtual machine migrations and is easy to implement in practice. Performance evaluation that employs real data center traffic traces under a spectrum of elephant and mice flows demonstrates a consistent and significant improvement over the benchmark achieved by common heuristics.
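Markov approximation turns the combinatorial placement problem into a random walk over configurations whose stationary distribution concentrates on low-cost states. A toy Metropolis-style sketch of that idea (our simplification; the cost function, move set, and parameters are illustrative, and we omit routing and capacity constraints):

```python
import math
import random

# Toy Markov-approximation search over VM placements: propose
# single-VM migrations and accept cost-increasing moves only with
# probability exp(-beta * delta), so the chain settles near placements
# that minimize inter-host traffic cost.

def traffic_cost(placement, traffic):
    """Total rate of VM pairs whose traffic must cross between hosts."""
    return sum(rate for (a, b), rate in traffic.items()
               if placement[a] != placement[b])

def markov_placement(vms, hosts, traffic, beta=2.0, steps=2000, seed=1):
    rng = random.Random(seed)
    placement = {v: rng.choice(hosts) for v in vms}
    cost = traffic_cost(placement, traffic)
    for _ in range(steps):
        v, h = rng.choice(vms), rng.choice(hosts)
        if placement[v] == h:
            continue
        old = placement[v]
        placement[v] = h  # propose migrating one VM
        new_cost = traffic_cost(placement, traffic)
        if rng.random() < min(1.0, math.exp(-beta * (new_cost - cost))):
            cost = new_cost       # accept the migration
        else:
            placement[v] = old    # reject and roll back
    return placement, cost

vms, hosts = ["v1", "v2", "v3", "v4"], ["h1", "h2"]
traffic = {("v1", "v2"): 5, ("v3", "v4"): 5, ("v2", "v3"): 1}
placement, cost = markov_placement(vms, hosts, traffic)
print(cost)  # near-minimal: the chatty pairs end up co-located
```

Each accepted move is a single VM migration, which matches the abstract's point that the algorithm needs very few migrations to track a changing traffic matrix.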

【Keywords】: Markov processes; computer centres; computer networks; telecommunication network routing; telecommunication traffic; virtual machines; Markov approximation; changing traffic loads; data center traffic engineering; dynamic environment; efficient online algorithm; elephant-mice flow spectrum; joint VM placement-routing problem; resource utilization; traffic cost minimization; traffic management; virtual machine migrations; Approximation algorithms; Approximation methods; Heuristic algorithms; Joints; Markov processes; Optimization; Routing

354. Geographic trough filling for internet datacenters.

Paper Link】 【Pages】:2881-2885

【Authors】: Dan Xu ; Xin Liu

【Abstract】: To reduce datacenter energy consumption and cost, current practice has considered demand-proportional resource provisioning schemes, where servers are turned on/off according to the load of requests. Most existing work considers instantaneous (Internet) requests only, which are explicitly or implicitly assumed to be delay-sensitive. On the other hand, in datacenters there exists a vast number of delay-tolerant jobs, such as background/maintenance jobs. In this paper, we explicitly differentiate delay-sensitive jobs and delay-tolerant jobs. We focus on the problem of using delay-tolerant jobs to fill the extra capacity of datacenters, referred to as trough/valley filling. Giving a higher priority to delay-sensitive jobs, our scheme complements most existing demand-proportional resource provisioning schemes. Our goal is to design an intelligent trough filling mechanism that is energy efficient and also achieves good delay performance. Specifically, we propose a joint dynamic speed scaling and traffic shifting scheme. The scheme does not need statistical information of the system, which is usually difficult to obtain. In the proposed scheme, energy cost saving comes from dynamic speed scaling, statistical multiplexing, electricity price diversity, and service efficiency diversity. In addition, good delay performance is achieved via load shifting and capacity allocation based on queue conditions. We show the efficiency of the proposed scheme by both analysis and simulation.

【Keywords】: Internet; computer centres; delays; energy conservation; energy consumption; power aware computing; queueing theory; resource allocation; telecommunication traffic; Internet data centers; capacity allocation; data center energy consumption reduction; delay performance; delay-sensitive jobs; delay-tolerant jobs; demand-proportional resource provisioning schemes; dynamic speed scaling; electricity price diversity; energy cost saving; energy efficiency; geographic trough filling; instantaneous requests; intelligent trough filling mechanism design; joint dynamic speed scaling; load shifting; queue conditions; service efficiency diversity; statistical information; statistical multiplexing; traffic shifting scheme; valley filling; Bandwidth; Delay; Electricity; Internet; Load modeling; Resource management; Servers

355. SocialTube: P2P-assisted video sharing in online social networks.

Paper Link】 【Pages】:2886-2890

【Authors】: Ze Li ; Haiying Shen ; Hailang Wang ; Guoxin Liu ; Jin Li

【Abstract】: Video sharing has been an increasingly popular application in online social networks (OSNs). However, its sustainable development is severely hindered by the intrinsic limit of the client/server architecture deployed in current OSN video systems, which is not only costly in terms of server bandwidth and storage but also not scalable. The peer-assisted Video-on-Demand (VOD) technique, in which participating peers assist the server in delivering video content, has been proposed recently. Unfortunately, videos can only be disseminated through friends in OSNs. Therefore, current VOD works that explore clustering nodes with similar interests or close location for high performance are suboptimal, if not entirely inapplicable, in OSNs. Based on our long-term real-world measurement of over 1,000,000 users and 2,500 videos in Facebook, we propose SocialTube, a novel peer-assisted video sharing system that explores social relationship, interest similarity, and physical location between peers in OSNs. Specifically, SocialTube incorporates three algorithms: a social network (SN)-based P2P overlay construction algorithm, an SN-based chunk prefetch algorithm, and a buffer management algorithm. The trace-driven simulation results show that SocialTube can improve the quality of user experience and system scalability over current P2P VOD techniques.

【Keywords】: client-server systems; digital simulation; pattern clustering; peer-to-peer computing; social networking (online); sustainable development; video on demand; OSN video systems; P2P VOD techniques; P2P-assisted video sharing; SN-based chunk prefetch algorithm; SocialTube; buffer management algorithm; client-server architecture; interest similarity; node clustering; online social networks; peer physical location; peer-assisted video-on-demand technique; quality of user experience; social network-based P2P overlay construction algorithm; social relationship exploration; sustainable development; system scalability; trace driven based simulation; video content delivery; Accuracy; Facebook; Peer to peer computing; Prefetching; Servers; Streaming media

356. Accelerating peer-to-peer file sharing with social relations: Potentials and challenges.

Paper Link】 【Pages】:2891-2895

【Authors】: Haiyang Wang ; Feng Wang ; Jiangchuan Liu

【Abstract】: Peer-to-peer file sharing systems, most notably BitTorrent (BT), have achieved tremendous success among Internet users. Recent studies suggest that long-term relationships among BT peers could be explored for peer cooperation, so as to achieve better sharing efficiency. However, whether such long-term relationships exist remains unknown. From an 80-day trace of 100,000 real-world swarms, we find that fewer than 5% of peers can meet each other again throughout the whole period, which largely invalidates the fundamental assumption of these peer cooperation protocols. Yet the recent emergence of online social network applications sheds new light on this problem. In particular, a number of BT swarms are now triggered by Twitter, reflecting a new trend for initializing sharing among communities. In this paper, we for the first time examine the challenges and potentials of accelerating peer-to-peer file sharing with Twitter social networks. We show that the peers in such swarms have stronger temporal locality, thus offering great opportunity for improving their degree of sharing. We further demonstrate a practical cooperation protocol that utilizes the social relations. Our PlanetLab experiments indicate that the incorporation of social relations remarkably accelerates the downloading time.

【Keywords】: Internet; peer-to-peer computing; social networking (online); BT swarms; Bit-Torrent; Internet users; PlanetLab experiments; Twitter social networks; long-term relationships; online social network applications; peer cooperation protocols; peer-to-peer file sharing acceleration; peer-to-peer file sharing systems; social relations; Acceleration; Communities; Internet; Peer to peer computing; Protocols; Twitter

357. Mechanism design for finding experts using locally constructed social referral web.

Paper Link】 【Pages】:2896-2900

【Authors】: Lan Zhang ; Xiang-Yang Li ; Yunhao Liu ; Qiuyuan Huang ; Shaojie Tang

【Abstract】: In this work, we address the problem of finding experts using chains of social referrals and profile matching with only local information in online social networks. By assuming that users are selfish, rational, and have privately known cost of participating in the referrals, we design a novel truthful efficient mechanism in which an expert-finding query will be relayed by intermediate users. When receiving a referral request, a participant will locally choose among her neighbors some user to relay the request. In our mechanism, several closely coupled methods are carefully designed to improve the search performance, including, profile matching, social acquaintance prediction, score function for locally choosing relay neighbors, and budget estimation. We conduct extensive experiments on several datasets of online social networks. The extensive study of our mechanism shows that the success rate of our mechanism is about 90% in finding closely matched experts using only local search and limited budget, which significantly improves the previously best rate 20%. The overall cost of finding an expert by our truthful mechanism is about 20% of the untruthful method and only about 2% of the method that always selects high-degree neighbors. The median length of social referral chains is 6 using our localized search decision, which surprisingly matches the well-known small-world phenomenon of global social structures.
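The referral mechanism makes each hop a local decision: score your neighbors against the target profile and relay the query to the best one, within a budget. A minimal sketch of that local search (the Jaccard score function, field names, and thresholds are our illustrative assumptions; the paper's mechanism additionally handles payments and truthfulness):

```python
# Greedy local referral search: each hop forwards the query to the
# unvisited neighbour whose profile best matches the target, stopping
# when the budget runs out or a close-enough expert is reached.

def profile_match(profile, target):
    """Jaccard similarity between two keyword sets."""
    inter = len(profile & target)
    union = len(profile | target)
    return inter / union if union else 0.0

def find_expert(graph, profiles, start, target, budget=6, threshold=0.99):
    chain, current = [start], start
    while len(chain) <= budget:
        if profile_match(profiles[current], target) >= threshold:
            return chain  # expert found: return the referral chain
        candidates = [n for n in graph[current] if n not in chain]
        if not candidates:
            return None   # dead end within local knowledge
        current = max(candidates,
                      key=lambda n: profile_match(profiles[n], target))
        chain.append(current)
    return None           # budget exhausted

graph = {"alice": ["bob"], "bob": ["alice", "carol"], "carol": ["bob"]}
profiles = {"alice": {"ml"}, "bob": {"ml", "networks"},
            "carol": {"networks", "sdn"}}
print(find_expert(graph, profiles, "alice", {"networks", "sdn"}))
```

The chain length returned by such local decisions is what the abstract measures: a median of 6 hops, echoing the small-world phenomenon.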

【Keywords】: social networking (online); expert-finding query; global social structures; localized search decision; locally constructed social referral Web; online social networks; profile matching; social referral chains; Databases; Facebook; Probability; Relays; Search problems; Vectors; Localized Searching; Mechanism Design; Referral Web; Small World; Strategyproof

358. Guiding internet-scale video service deployment using microblog-based prediction.

Paper Link】 【Pages】:2901-2905

【Authors】: Zhi Wang ; Lifeng Sun ; Chuan Wu ; Shiqiang Yang

【Abstract】: Online microblogging has been very popular in today's Internet, where users exchange short messages and follow various contents shared by people that they are interested in. Among the variety of exchanges, video links are a representative type on a microblogging site. More and more viewers of an Internet video service are coming from microblog recommendations. It is an intriguing research direction to explore the connections between the patterns of microblog exchanges and the popularity of videos, in order to potentially use the propagation patterns of microblogs to guide proactive service deployment of a video sharing system. Based on extensive traces from Youku and Tencent Weibo, a popular video sharing site and a favored microblogging system in China, we explore how patterns of video link propagation in the microblogging system are correlated with video popularity on the video sharing site, at different times and in different geographic regions. Using influential factors summarized from the measurement studies, we further design neural network-based learning frameworks to predict the number of potential viewers of different videos and the geographic distribution of viewers. Experiments show that our neural network-based frameworks achieve better prediction accuracy, as compared to a classical approach that relies on historical numbers of views. We also briefly discuss how proactive video service deployment can be effectively enabled by our prediction frameworks.

【Keywords】: Internet; learning (artificial intelligence); neural nets; recommender systems; social networking (online); China; Internet-scale video service deployment; Tencent Weibo; Youku; microblog exchange pattern; microblog propagation patterns; microblog recommendations; microblog-based prediction; neural network-based learning frameworks; online microblogging; proactive video service deployment; video link propagation pattern; video popularity; video sharing site; video sharing system; Biological neural networks; Correlation; Feature extraction; Neurons; Predictive models; Time measurement; Twitter

359. Near-optimal random walk sampling in distributed networks.

Paper Link】 【Pages】:2906-2910

【Authors】: Atish Das Sarma ; Anisur Rahaman Molla ; Gopal Pandurangan

【Abstract】: Performing random walks in networks is a fundamental primitive that has found numerous applications in communication networks such as token management, load balancing, network topology discovery and construction, search, and peer-to-peer membership management. While several such algorithms are ubiquitous, and use numerous random walk samples, the walks themselves have always been performed naively. In this paper, we focus on the problem of performing random walk sampling efficiently in a distributed network. Given bandwidth constraints, the goal is to minimize the number of rounds and messages required to obtain several random walk samples in a continuous online fashion. We present the first round and message optimal distributed algorithms that present a significant improvement on all previous approaches. The theoretical analysis and comprehensive experimental evaluation of our algorithms show that they perform very well in different types of networks of differing topologies. In particular, our results show how several random walks can be performed continuously (when source nodes are provided only at runtime, i.e., online), such that each walk of length ℓ can be performed exactly in just Õ(√ℓD) rounds (where D is the diameter of the network), and O(ℓ) messages. This significantly improves upon both the naive technique, which requires O(ℓ) rounds and O(ℓ) messages, and the sophisticated algorithm of [13], which has the same round complexity as this paper but requires Ω(m√ℓ) messages (where m is the number of edges in the network). Our theoretical results are corroborated through extensive experiments on various topological data sets. Our algorithms are fully decentralized, lightweight, and easily implementable, and can serve as building blocks in the design of topologically-aware networks.
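The primitive being optimized is simple to state: from a source node, take ℓ uniform neighbor steps and return the endpoint as a sample. A naive centralized simulation of that primitive (the graph and parameters are illustrative; the paper's contribution is performing the same walk distributedly in ~Õ(√ℓD) rounds by stitching pre-computed short walks, which this sketch does not implement):

```python
import random

# Naive random walk sampling: each of the l steps costs one round in a
# distributed network, so a walk of length l takes O(l) rounds. The
# paper reduces this by stitching short walks; here we only simulate
# the walk itself to obtain endpoint samples.

def random_walk(adj, source, length, rng):
    node = source
    for _ in range(length):
        node = rng.choice(adj[node])  # one message/round per step
    return node

# 7-node ring (non-bipartite, so the walk mixes); repeated walks from
# the same source approximate the stationary distribution, which is
# uniform here since every node has degree 2.
adj = {i: [(i - 1) % 7, (i + 1) % 7] for i in range(7)}
rng = random.Random(42)
samples = [random_walk(adj, 0, 50, rng) for _ in range(200)]
print(len(set(samples)))  # samples spread across the ring
```

Applications like peer sampling call this primitive continuously, which is why cutting the per-walk round cost from O(ℓ) to Õ(√ℓD) matters.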

【Keywords】: computational complexity; distributed algorithms; graph theory; telecommunication network topology; bandwidth constraints; building blocks; communication networks; comprehensive experimental evaluation; distributed networks; load balancing; near-optimal random walk sampling; network topologies; network topology discovery; node graph; peer-to-peer membership management; round complexity; round-message optimal distributed algorithms; sophisticated algorithm; source nodes; theoretical analysis; token management; topological data sets; topologically-aware networks; Algorithm design and analysis; Complexity theory; Distributed algorithms; Electronic mail; Graph theory; Network topology; Peer to peer computing

360. Maximizing throughput when achieving time fairness in multi-rate wireless LANs.

Paper Link】 【Pages】:2911-2915

【Authors】: Yuan Le ; Liran Ma ; Wei Cheng ; Xiuzhen Cheng ; Biao Chen

【Abstract】: This paper focuses on designing a distributed medium access control algorithm that aims at achieving time fairness among contending stations and throughput maximization in an 802.11 wireless LAN. The core idea of our proposed algorithm is that each station needs to select an appropriate contention window size so as to fairly share the channel occupancy time and maximize the throughput under the time fairness constraint. The derivation of the proper contention window size is presented rigorously. We evaluate the performance of our proposed algorithm through an extensive simulation study, and the evaluation results demonstrate that our proposed algorithm leads to nearly perfect time fairness, high throughput, and low collision overhead.
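The intuition behind per-station contention window selection can be sketched numerically (the proportional scaling rule below is our illustrative assumption, not the paper's rigorous derivation): a station whose frames occupy the channel longer should contend proportionally less often, so that per-station airtime equalizes instead of per-frame throughput.

```python
# Time-fairness sketch for multi-rate 802.11. ASSUMPTION: access
# frequency scales as 1/CW, so airtime share ~ frame_duration/CW;
# choosing CW proportional to frame duration equalizes airtime.

def contention_windows(frame_bits, rates_mbps, base_cw=32):
    # Channel occupancy per frame (microseconds, ignoring overheads).
    durations = [frame_bits / r for r in rates_mbps]
    t_min = min(durations)
    # Scale each station's CW so duration/CW is constant.
    return [round(base_cw * d / t_min) for d in durations]

# 1500-byte frames at 54, 11 and 1 Mb/s: the 1 Mb/s station gets a
# much larger CW, instead of dragging everyone down to its rate (the
# well-known 802.11 performance anomaly under per-frame fairness).
print(contention_windows(12000, [54, 11, 1]))  # [32, 157, 1728]
```

This is why time fairness raises aggregate throughput in multi-rate WLANs: fast stations are no longer throttled to the slowest station's effective rate.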

【Keywords】: access protocols; wireless LAN; 802.11 wireless LAN; channel occupancy time; collision overhead; contention window size; distributed medium access control algorithm; multirate wireless LAN; throughput maximization; time fairness; Aggregates; Algorithm design and analysis; Bit rate; IEEE 802.11 Standards; Resource management; Throughput; Wireless LAN

361. Fast and accurate packet delivery estimation based on DSSS chip errors.

Paper Link】 【Pages】:2916-2920

【Authors】: Pirmin Heinzer ; Vincent Lenders ; Franck Legendre

【Abstract】: Fast and accurate link quality estimation is an important feature for wireless protocols such as routing, rate switching or handover. Existing signal-strength-based estimators tend to be fast but inaccurate, while packet-statistic-based approaches are more accurate but require longer estimation times. We propose a new link quality estimation approach based on chip errors in symbols for direct sequence spread spectrum transceivers. The new link quality estimator is evaluated experimentally with software defined radios on IEEE 802.15.4 for different link conditions, including multi-path and mobile scenarios. We show that our chip-error-based link quality estimator performs more accurately than received-signal-strength-based estimators and much faster than the packet-statistic-based estimators with comparable accuracy. With our approach, only a single packet, or even a fraction of a packet (e.g., only a few symbols), is necessary to obtain similar performance as state-of-the-art approaches that require at least 10 packets.
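The reason a few symbols suffice is that each 802.15.4 symbol exposes 32 chip observations, so the chip error rate is estimated quickly and maps to a delivery probability. A simplified model of that mapping (the hard decoding threshold t is our illustrative assumption; the paper works with the real symbol codebook and measured chip errors):

```python
from math import comb

# Simplified chip-error model for 802.15.4 DSSS: a 32-chip symbol is
# ASSUMED decodable while at most t chips are corrupted, and packet
# delivery follows from the per-symbol success probability.

def symbol_success(p_chip, chips=32, t=12):
    """P(at most t of 32 i.i.d. chips are flipped) - binomial CDF."""
    return sum(comb(chips, k) * p_chip**k * (1 - p_chip)**(chips - k)
               for k in range(t + 1))

def packet_delivery_ratio(p_chip, payload_bytes, bits_per_symbol=4):
    symbols = payload_bytes * 8 // bits_per_symbol
    return symbol_success(p_chip) ** symbols

# Even a coarse chip-error estimate separates good links from bad
# ones: 5% chip errors barely matter, 30% kills a 50-byte packet.
print(round(packet_delivery_ratio(0.05, 50), 3))
print(round(packet_delivery_ratio(0.30, 50), 3))
```

The steep transition of the binomial CDF is what makes the estimator fast: small changes in observed chip error rate already discriminate between near-perfect and near-dead links.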

【Keywords】: code division multiple access; personal area networks; protocols; software radio; spread spectrum communication; DSSS chip errors; IEEE 802.15.4; direct sequence spread spectrum transceivers; handover; link quality estimation; mobile scenario; multipath scenario; packet delivery estimation; packet statistic-based approaches; rate switching; received signal strength-based estimators; routing; software defined radios; wireless protocols; Estimation; IEEE 802.15 Standards; Semiconductor device measurement; Signal to noise ratio; Wireless communication; Wireless sensor networks

362. Is diversity gain worth the pain: A delay comparison between opportunistic multi-channel MAC and single-channel MAC.

Paper Link】 【Pages】:2921-2925

【Authors】: Yang Liu ; Mingyan Liu ; Jing Deng

【Abstract】: In this paper we analyze the delay performance of an opportunistic multi-channel medium access control scheme and compare it to that of the corresponding single-channel MAC scheme. In the opportunistic multi-channel MAC scheme, we assume that the sender/receiver pair is able to evaluate the channel quality after a certain amount of channel sensing delay and to choose the best channel for data communication. We consider three settings: (1) an ideal scenario where no control channel is needed and no sensing delay is incurred, (2) a more realistic scheme where users compete for access on a control channel using random access, and (3) a scheme similar to (2) but with a Time Division Multiplex (TDM) based access scheme on the control channel. Our analysis shows that, in terms of delay performance, the random access overhead on the control channel almost always wipes out the channel diversity gain, which is the main motivation behind an opportunistic multi-channel MAC. Using a TDM-based access scheme on the control channel can help remove this bottleneck, but only when channel sensing can be done sufficiently fast.

【Keywords】: access protocols; diversity reception; time division multiplexing; TDM-based access scheme; channel diversity gain; channel quality evaluation; channel sensing delay; control channel; data communication; delay comparison; medium access control scheme; opportunistic multichannel MAC scheme; random access; random access overhead; sender-receiver pair; single-channel MAC scheme; time division multiplex

363. WiBee: Building WiFi radio map with ZigBee sensor networks.

Paper Link】 【Pages】:2926-2930

【Authors】: Wenxian Li ; Yanmin Zhu ; Tian He

【Abstract】: Exploiting the increasing ubiquitous deployment of sensor networks, the paper presents a system called WiBee that utilizes ZigBee sensors to build real-time WiFi radio maps. The design of WiBee is motivated by the observation that a ZigBee radio can sense WiFi frame transmissions although it cannot decode WiFi frames. A sensor passively listens on the wireless channel and estimates the RSS of a WiFi AP at its location. The design of WiBee faces three unique challenges. First, multiple APs may transmit frames concurrently and frame collisions may happen. Second, because of severe resource constraints, a sensor cannot sample the channel at arbitrarily high frequency and hence some frame transmissions may not be sampled. Third, sensor nodes are usually not time synchronized and the on-board clock is inaccurate. To address these challenges, we propose a novel gateway-assisted approach to estimating WiFi RSS values at ZigBee sensors. A light-weight algorithm is designed for identifying the RSS values corresponding to a given AP. It searches the sequence of ZigBee RSS samples for an AP signature sequence. An optimization technique is proposed to address issues of clock drift and time asynchronization. Our extensive experiments on a testbed show that WiBee can achieve low estimation error, short delay and small computation overhead.
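The AP-signature search at the heart of WiBee can be pictured as a sliding-window match over the ZigBee RSS sample sequence. The sketch below is purely illustrative: the function name, tolerance parameter and RSS values are assumptions for illustration, not the paper's actual algorithm.

```python
def find_signature(samples, signature, tol):
    """Sliding-window search for a short RSS pattern inside a sample trace.

    Returns the first index where every element of `signature` matches the
    corresponding sample within `tol` dB, or -1 if there is no match.
    """
    k = len(signature)
    for i in range(len(samples) - k + 1):
        if all(abs(samples[i + j] - signature[j]) <= tol for j in range(k)):
            return i
    return -1
```

For example, a three-sample signature hidden in a noisy trace is located by its starting index; the real system additionally has to compensate for clock drift and missed samples, which this toy search ignores.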

【Keywords】: Zigbee; internetworking; synchronisation; wireless LAN; wireless channels; wireless sensor networks; AP signature sequence; RSS estimation; WiBee; WiFi AP; WiFi frame transmissions; ZigBee radio; ZigBee sensor networks; clock drift; frame collisions; gateway-assisted approach; light-weight algorithm; on-board clock; optimization technique; real-time WiFi radio map; sensor network ubiquitous deployment; sensor nodes; time asynchronization; wireless channel; Delay; Error analysis; IEEE 802.11 Standards; Interference; Logic gates; Synchronization; Zigbee; RSS; Radio Map; Sensor Network; Signal Strength; WiFi; ZigBee

364. VoIPiggy: Implementation and evaluation of a mechanism to boost voice capacity in 802.11 WLANs.

Paper Link】 【Pages】:2931-2935

【Authors】: Pablo Salvador ; Francesco Gringoli ; Vincenzo Mancuso ; Pablo Serrano ; Andrea Mannocci ; Albert Banchs

【Abstract】: Supporting voice traffic in existing WLANs is extremely inefficient, given the large overheads of the protocol operation and the need to prioritize this traffic over, e.g., bulky transfers. In this paper we propose a simple scheme to improve the efficiency of WLANs when voice traffic is present. The mechanism is based on piggybacking voice frames over the acknowledgments, which reduces both frame overheads and time spent in contentions. We evaluate its performance in a large-scale testbed consisting of 33 commercial off-the-shelf devices. The experimental results show dramatic performance improvements in both voice-only and mixed voice-and-data scenarios.

【Keywords】: Internet telephony; wireless LAN; 802.11 WLAN; VoIPiggy; frame overheads; mixed voice-and-data scenario; piggybacking voice frames; protocol operation; voice capacity; voice traffic; voice-only scenario; IEEE 802.11 Standards; Kernel; Linux; Performance evaluation; Throughput; Wireless LAN; Wireless communication

365. MIMO downlink scheduling in LTE systems.

Paper Link】 【Pages】:2936-2940

【Authors】: Honghai Zhang ; Narayan Prasad ; Sampath Rangarajan

【Abstract】: Scheduling plays a vital role in LTE downlink systems with Multiple Input and Multiple Output (MIMO) antennas. We consider the MIMO downlink scheduling problem at the base station (BS) in LTE networks under several practical constraints mandated by the 3GPP standards. We define a new construct called the transmission mode, which denotes a particular choice of MIMO operational mode, precoding matrix, transmission rank, and the modulation and coding schemes (MCSs) of up to two codewords, and show that LTE systems require each scheduled user to be served using only one transmission mode in every subframe. We prove that the resulting scheduling problems are NP-hard under both backlogged and finite queue traffic models, and then develop a unified low-complexity greedy algorithm that yields solutions guaranteed to be within 1/2 of the respective optima. Extensive performance evaluation in realistic settings reveals near-optimal performance of our proposed algorithm and that it significantly outperforms the state of the art, especially under the more practical, finite queue model.

【Keywords】: 3G mobile communication; Long Term Evolution; MIMO communication; antenna arrays; computational complexity; matrix algebra; modulation coding; precoding; queueing theory; scheduling; 3GPP standards; LTE downlink systems; MCS; MIMO antennas; MIMO downlink scheduling; MIMO operational mode; NP-hard problems; backlogged traffic model; base station; finite queue traffic model; modulation-coding schemes; multiple-input multiple-output antennas; near-optimal performance; precoding matrix; transmission mode; transmission rank; unified low-complexity greedy algorithm; user scheduling; Algorithm design and analysis; Approximation algorithms; Downlink; MIMO; Resource management; Scheduling algorithms; Throughput

366. MIMO wireless networks with directional antennas in indoor environments.

Paper Link】 【Pages】:2941-2945

【Authors】: Tae Hyun Kim ; Theodoros Salonidis ; Henrik Lundgren

【Abstract】: We perform an experimental characterization of an indoor MIMO system with directional antennas (our prototype multi-sector antennas). The study reveals that, even without antenna directivity gain, the directionality of signals changes the MIMO channel structure and provides a way to improve MIMO throughput performance. It also shows that, to obtain this improvement, it is sufficient to consider only a small subset of all possible antenna direction combinations. Finally, our study shows that it is possible to achieve both throughput gains and interference reduction, thus increasing network spatial reuse.

【Keywords】: MIMO communication; antenna arrays; directive antennas; indoor radio; interference suppression; radiofrequency interference; MIMO channel structure; MIMO throughput performance; antenna direction combinations; directional antennas; indoor MIMO wireless networks; interference reduction; network spatial reuse; signal directionality; throughput gains; Directional antennas; Gain; Interference; MIMO; Signal to noise ratio; Throughput

367. Self-organization in wireless networks: A flow-level perspective.

Paper Link】 【Pages】:2946-2950

【Authors】: Richard Combes ; Zwi Altman ; Eitan Altman

【Abstract】: This paper introduces self-optimization for wireless networks taking into account flow-level dynamics. Users arrive and leave the network according to a traffic model. Elastic traffic is considered here. The developed solutions self-optimize the network stability region using user feedback (measurements). The use case considered is cell size optimization. An algorithm is given, and its convergence is proven using stochastic approximation techniques. Convergence points are characterized, allowing performance gains to be evaluated rigorously. Performance gains are evaluated numerically, showing an important increase of the network capacity.
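The stochastic approximation machinery behind convergence proofs of this kind is the classic Robbins-Monro iteration with diminishing step sizes. The sketch below is a generic illustration of that technique, not the paper's cell-size update; the function name and the deterministic test gradient are assumptions.

```python
def robbins_monro(noisy_grad, x0, steps, a=1.0):
    """Robbins-Monro stochastic approximation: x <- x - (a/n) * g(x).

    noisy_grad(x) returns a (possibly noisy) estimate of the gradient of
    the objective at x; the diminishing a/n step sizes satisfy the usual
    conditions (their sum diverges, the sum of their squares converges).
    """
    x = x0
    for n in range(1, steps + 1):
        x -= (a / n) * noisy_grad(x)
    return x
```

With a noise-free gradient such as g(x) = x - 2, the iteration settles at the root x = 2; with user-feedback measurements in place of g, the same scheme drives network parameters toward a stability-optimal point.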

【Keywords】: radio networks; stochastic processes; telecommunication traffic; cell size optimization; elastic traffic; flow-level dynamics; network capacity; network stability region; self-optimization; self-organizing networks; stochastic approximation technique; traffic model; user feedback; wireless networks; Approximation algorithms; Heuristic algorithms; Optimization; Performance gain; Stability analysis; Stochastic processes; Wireless networks; Cellular Networks; Load Balancing; Queuing Theory; SON; Self Optimization; Self Organizing Networks; Stability; Stochastic Approximation

368. Algorithm design for femtocell base station placement in commercial building environments.

Paper Link】 【Pages】:2951-2955

【Authors】: Jia Liu ; Qian Chen ; Hanif D. Sherali

【Abstract】: Although femtocell deployments in residential buildings have been increasingly prevalent, femtocell deployment in commercial building environments remains in its infancy. One of the main challenges lies in the femtocell base stations (FBS) placement problem, which is complicated by the buildings' size, layout, structure, and floor/wall separations. In this paper, we investigate a joint FBS placement and power control optimization problem in commercial buildings with the aim of prolonging mobile handsets' battery lives. We first construct a mathematical model that takes into account the unique floor attenuation factor (FAF) and FBS installation restrictions in building environments. Based on this model, we propose a novel two-step reformulation approach to convert the original mixed-integer nonconvex problem (MINCP) into a mixed-integer linear program (MILP), which enables the design of efficient global optimization algorithms. We then devise a global optimization algorithm by utilizing the MILP in a branch-and-bound framework. This approach guarantees finding a globally optimal solution. We conduct extensive numerical studies to demonstrate the efficacy of the proposed algorithm. Our mathematical reformulation techniques and optimization algorithm offer useful theoretical insights and valuable tools for future commercial building femtocell deployments.
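The floor attenuation factor (FAF) typically enters such models as an additive per-floor penalty on top of log-distance path loss. A minimal sketch of that standard model; all constants (reference loss, exponent, 15 dB per floor) and the function name are assumed for illustration only, not taken from the paper.

```python
import math

def indoor_path_loss_db(distance_m, n_floors, pl_exponent=3.0,
                        faf_db=15.0, pl0_db=40.0):
    """Log-distance path loss plus a per-floor attenuation factor (FAF).

    pl0_db is the loss at the 1 m reference distance, pl_exponent the
    path-loss exponent, and faf_db the extra loss per floor crossed.
    """
    return (pl0_db
            + 10.0 * pl_exponent * math.log10(distance_m)
            + n_floors * faf_db)
```

At 10 m through one floor this gives 40 + 30 + 15 = 85 dB, illustrating why floor separations dominate the placement problem.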

【Keywords】: femtocellular radio; integer programming; linear programming; mobile handsets; power control; telecommunication control; telecommunication network reliability; telecommunication power supplies; tree searching; FAF; FBS installation restrictions; FBS placement; MILP; MINCP; algorithm design; branch-and-bound framework; building layout; building size; building structure; commercial building environments; femtocell base station placement; femtocell deployments; floor attenuation factor; floor-wall separations; global optimization algorithms; mathematical model; mixed-integer linear program; mixed-integer nonconvex problem; mobile handset battery lives; power control optimization; residential buildings; two-step reformulation approach; Algorithm design and analysis; Buildings; Femtocell networks; Joints; Mathematical model; Optimization; Wireless communication

369. Experimental characterization of interference in OFDMA femtocell networks.

Paper Link】 【Pages】:2956-2960

【Authors】: Mustafa Y. Arslan ; Jongwon Yoon ; Karthikeyan Sundaresan ; Srikanth V. Krishnamurthy ; Suman Banerjee

【Abstract】: The increase in mobile data usage is pushing broadband operators towards deploying smaller cells (femtocells) and sophisticated access technologies such as OFDMA. The expected high density of deployment and uncoordinated operation of femtocells, however, make interference management both critical and extremely challenging. Femtocells have to use the same access technology as traditional macrocells. Given this, understanding the impact of the system design choices (originally tailored to well-planned macrocells) on interference management forms an essential first step towards designing efficient solutions for next-generation femtocells. This in turn is the focus of our work. With extensive measurements from our WiMAX OFDMA femtocell testbed, we characterize the impact of various system design choices on interference. Based on the insights from our measurements, we discuss several implications on how to efficiently operate a femtocell network.

【Keywords】: OFDM modulation; WiMax; femtocellular radio; frequency division multiple access; mobility management (mobile radio); radiofrequency interference; OFDMA femtocell networks; WiMAX OFDMA femtocell testbed; access technology; broadband operators; experimental characterization; interference management; macrocells; mobile data usage; next-generation femtocells; system design; Interference; Macrocell networks; OFDM; Payloads; Synchronization; Throughput; WiMAX

370. Analysis of backward congestion notification with delay for enhanced ethernet networks.

Paper Link】 【Pages】:2961-2965

【Authors】: Wanchun Jiang ; Fengyuan Ren ; Chuang Lin ; Ivan Stojmenovic

【Abstract】: Recently, companies and standards organizations have been enhancing Ethernet as the unified switch fabric for all of the TCP/IP traffic, storage traffic and interprocess communication (IPC) traffic in Data Center Networks (DCNs). Backward Congestion Notification (BCN) is the basic mechanism for the end-to-end congestion management enhancement. To fulfill the special requirements of the unified switch fabric, namely losslessness and extremely low latency, BCN should hold the queue length tightly around a target point. Thus, the stability of the control loop and the buffer size are critical to BCN. Currently, the impact of delay on the performance of BCN has not been identified. When the link capacity increases to 40 Gbps or 100 Gbps in the near future, the number of in-flight packets will become of the same order as the shallow buffer size of switches, and the impact of delay on the performance of BCN will become significant. In this paper, we analyze BCN, paying special attention to delay. First, we model the BCN system with a set of segmented delayed differential equations. Then, a sufficient condition for the uniformly asymptotic stability of the BCN system is deduced. Subsequently, the bounds on buffer occupancy under this sufficient condition are estimated, which provides guidelines on setting the buffer size. Finally, numerical analysis and experiments on the NetFPGA platform verify the theoretical analysis.

【Keywords】: computer centres; computer network management; differential equations; field programmable gate arrays; local area networks; telecommunication traffic; transport protocols; BCN system; DCN; IPC traffic; NetFPGA platform; TCP-IP traffic; backward congestion notification; buffer occupancy; buffer size; control loop stability; data center networks; delay; end-to-end congestion management enhancement; enhanced Ethernet networks; interprocess communication traffic; link capacity; numerical analysis; on-the-fly packets; queue length; segmented-delayed differential equations; standard organizations; storage traffic; switch buffer size; theoretical analysis; unified switch fabric; uniformly-asymptotic stability; Asymptotic stability; Delay; Differential equations; Equations; Mathematical model; Stability analysis; Switches; Backward Congestion Notification; Data Center Ethernet; Stability and Delay

371. Anchored desynchronization.

Paper Link】 【Pages】:2966-2970

【Authors】: Ching-Ming Lien ; Shu-Hao Chang ; Cheng-Shang Chang ; Duan-Shin Lee

【Abstract】: Distributed algorithms based on pulse-coupled oscillators have recently been proposed in [4], [14] for achieving desynchronization of a system of identical nodes. Though these algorithms are shown to work properly in various computer simulations, they still lack rigorous theoretical proofs of both convergence and the rates of convergence. On the other hand, nodes are unlikely to be identical in many practical applications. In particular, there might be a node that needs to interact with the “outside” world and thus may not have the freedom to adjust its local clock. Motivated by this, in this paper we consider the desynchronization problem in a system where there exists an anchored node that never adjusts the phase of its oscillator. For such a system, we propose a generic anchored desynchronization algorithm that achieves ε-desynchrony (defined in [4]) in O(n² ln(n/ε)) rounds of firings. We also prove that our algorithm converges even under the generalized processor sharing (GPS) scheme, where every node is assigned a weight and the amount of resource received by a node is proportional to its weight. In comparison with the original algorithm in [4], we show that its rate of convergence is not always better than ours, and is only better in the asymptotic regime.
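A simplified version of the anchored idea can be simulated directly: one fixed node never moves its phase, while every other node steps toward the midpoint of its two phase neighbours on the unit circle. This is a synchronous-update sketch of the general principle, not the firing-based algorithm analyzed in the paper; the function name and step size alpha are assumptions.

```python
def desync_round(phases, alpha=0.5):
    """One synchronous round of a simplified anchored desynchronization.

    phases: list of oscillator phases in [0, 1); phases[0] is the anchor
    and never moves. Every other node moves a fraction alpha toward the
    midpoint of its two phase neighbours on the unit circle.
    """
    n = len(phases)
    order = sorted(range(n), key=lambda i: phases[i])
    new = list(phases)
    for idx, i in enumerate(order):
        if i == 0:  # the anchored node keeps its phase
            continue
        prev_p = phases[order[(idx - 1) % n]]
        next_p = phases[order[(idx + 1) % n]]
        # unwrap the neighbours around the circle relative to node i
        if prev_p > phases[i]:
            prev_p -= 1.0
        if next_p < phases[i]:
            next_p += 1.0
        mid = (prev_p + next_p) / 2.0
        new[i] = ((1 - alpha) * phases[i] + alpha * mid) % 1.0
    return new
```

Iterating desync_round from an arbitrary start drives the phases toward even spacing (gaps of 1/n) while the anchor stays put, which is the ε-desynchrony target of the abstract.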

【Keywords】: distributed algorithms; oscillators; synchronisation; time division multiple access; anchored node; computer simulations; desynchronization problem; distributed algorithms; generalized processor sharing scheme; generic anchored desynchronization algorithm; pulse-coupled oscillators; Approximation algorithms; Convergence; Eigenvalues and eigenfunctions; Global Positioning System; Heuristic algorithms; Oscillators; Vectors

372. Proactive failure detection for WDM carrying IP.

Paper Link】 【Pages】:2971-2975

【Authors】: Jelena Pesic ; Julien Meuric ; Esther Le Rouzic ; Laurent Dupont ; Michel Morvan

【Abstract】: One of the important objectives of every telecommunication operator, when designing an optical network, is to provide a satisfactory quality of service in the most cost-effective way to the final customer. As every network is frequently exposed to many different threats causing network disruptions, network recovery is the prime capability needed to achieve this objective. After a brief introduction to existing reactive recovery methods, we present a new proactive detection method, explain how it can be applied to some recovery schemes, and discuss its benefits.

【Keywords】: IP networks; optical fibre networks; telecommunication network reliability; wavelength division multiplexing; WDM-carrying IP; network disruptions; network recovery; optical network design; proactive failure detection; quality of service; reactive recovery methods; telecommunication operator; Fault detection; IP networks; Optical fiber cables; Optical fiber networks; Optical fiber polarization; Wavelength division multiplexing; WDM; multi-layer; network recovery; optical network; proactive failure detection; protection; restoration

373. Delay and rate-optimal control in a multi-class priority queue with adjustable service rates.

Paper Link】 【Pages】:2976-2980

【Authors】: Chih-Ping Li ; Michael J. Neely

【Abstract】: We study two convex optimization problems in a multi-class M/G/1 queue with adjustable service rates: minimizing convex functions of the average delay vector, and minimizing average service cost, both subject to per-class delay constraints. Using virtual queue techniques, we solve the two problems with variants of dynamic cμ rules. These algorithms adaptively choose a strict priority policy, in response to past observed delays in all job classes, in every busy period. Our policies require limited or no statistics of the queue. Their optimal performance is proved by Lyapunov drift analysis and validated through simulations.
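For intuition, the classic static cμ rule, of which the paper's algorithms are adaptive variants, simply serves the non-empty class with the largest cost-rate product. A minimal sketch (function and parameter names assumed):

```python
def cmu_priority(queues, costs, rates):
    """Classic static c-mu rule: among non-empty queues, serve the class
    with the largest product of holding cost c_i and service rate mu_i.

    queues: per-class backlog counts. Returns the class index to serve,
    or None if every queue is empty.
    """
    candidates = [i for i, q in enumerate(queues) if q > 0]
    if not candidates:
        return None
    return max(candidates, key=lambda i: costs[i] * rates[i])
```

The paper's dynamic variants differ in that the effective priorities are re-chosen each busy period based on past observed delays, tracked through virtual queues.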

【Keywords】: Lyapunov methods; convex programming; minimisation; optimal control; queueing theory; telecommunication control; Lyapunov drift analysis; adjustable service rates; average service cost; convex optimization problems; multiclass M/G/1 queue; multiclass priority queue; per-class delay constraints; rate-optimal control; virtual queue techniques; Adaptation models; Computers; Convex functions; Delay; Optimization; Tin; Vectors

374. Weighted fair queuing with differential dropping.

Paper Link】 【Pages】:2981-2985

【Authors】: Feng Lu ; Geoffrey M. Voelker ; Alex C. Snoeren

【Abstract】: Weighted fair queuing (WFQ) allows Internet operators to define traffic classes and then assign different bandwidth proportions to these classes. Unfortunately, the complexity of efficiently allocating the buffer space to each traffic class turns out to be overwhelming, leading most operators to vastly overprovision buffering, resulting in a large resource footprint. A single buffer for all traffic classes would be preferred due to its simplicity and ease of management. Our work is inspired by the approximate differential dropping scheme but differs substantially in the flow identification and packet dropping strategies. Augmented with our novel differential dropping scheme, a shared buffer WFQ performs as well or better than the original WFQ implementation under varied traffic loads with a vastly reduced resource footprint.
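WFQ itself can be sketched by tagging each packet with a per-flow virtual finish time and serving packets in tag order. The toy below ignores the advance of virtual time with real time (i.e., it assumes all flows are backlogged from time 0), so it is an illustration of the tagging idea rather than a full implementation; names are assumed.

```python
def wfq_order(packets, weights):
    """Service order under an idealized WFQ with all flows backlogged.

    packets: list of (flow_id, size) tuples in arrival order.
    weights: dict flow_id -> weight. Each packet gets the virtual finish
    time of its flow (previous finish + size/weight); packets are then
    served in increasing finish-time order, ties broken by arrival order.
    """
    finish = {}
    tagged = []
    for seq, (flow, size) in enumerate(packets):
        finish[flow] = finish.get(flow, 0.0) + size / weights[flow]
        tagged.append((finish[flow], seq, flow))
    return [flow for _, _, flow in sorted(tagged)]
```

With equal packet sizes and a 2:1 weight ratio, the heavier flow's first packet is served first, after which service alternates, matching the bandwidth proportions WFQ is meant to enforce.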

【Keywords】: Internet; queueing theory; Internet operators; approximate differential dropping scheme; bandwidth proportions; buffer space allocation; flow identification; overprovision buffering; packet dropping strategy; resource footprint; shared buffer WFQ; traffic class; weighted fair queuing; Bandwidth; Complexity theory; Delay; Estimation; Indexes; Resource management; Scheduling algorithms

375. Redundancy management for P2P backup.

Paper Link】 【Pages】:2986-2990

【Authors】: László Toka ; Pasquale Cataldi ; Matteo Dell'Amico ; Pietro Michiardi

【Abstract】: We propose a redundancy management mechanism for peer-to-peer backup applications. Since, in a backup system, data is read over the network only during restore processes caused by data loss, redundancy management targets data durability rather than attempting to make each piece of information available at any time. Each peer determines, in an on-line manner, an amount of redundancy sufficient to counter the effects of peer deaths, while preserving acceptable data restore times. Our experiments, based on trace-driven simulations, indicate that our mechanism can reduce the redundancy by a factor between two and three with respect to redundancy policies aiming for data availability. These results imply a corresponding increase in storage capacity and decrease in time to complete backups, at the expense of longer times required to restore data. We believe this is a very reasonable price to pay, given the nature of the application.
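Durability under k-of-n erasure-coded redundancy with independent peer deaths reduces to a binomial tail. A small illustrative helper; the model and names are a generic textbook formulation, not the paper's specific on-line mechanism.

```python
from math import comb

def durability(n, k, p_death):
    """Probability that at least k of n stored fragments survive when each
    hosting peer dies independently with probability p_death (binomial tail).
    """
    q = 1.0 - p_death
    return sum(comb(n, i) * q ** i * p_death ** (n - i)
               for i in range(k, n + 1))
```

For instance, with three replicas (k = 1, n = 3) and a 50% peer-death probability, the data survives with probability 0.875; an availability-oriented policy would demand far more redundancy for the same fragments.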

【Keywords】: back-up procedures; peer-to-peer computing; redundancy; P2P backup; data availability; peer-to-peer backup applications; redundancy management mechanism; storage capacity; trace-driven simulations; Availability; Bandwidth; Computer crashes; Internet; Maintenance engineering; Peer to peer computing; Redundancy

376. On superposition of heterogeneous edge processes in dynamic random graphs.

Paper Link】 【Pages】:2991-2995

【Authors】: Zhongmei Yao ; Daren B. H. Cline ; Dmitri Loguinov

【Abstract】: This paper builds a generic modeling framework for analyzing the edge-creation process in dynamic random graphs in which nodes continuously alternate between active and inactive states, which represent churn behavior of modern distributed systems. We prove that despite heterogeneity of node lifetimes, different initial out-degree, non-Poisson arrival/failure dynamics, and complex spatial and temporal dependency among creation of both initial and replacement edges, a superposition of edge-arrival processes to a live node under uniform selection converges to a Poisson process when system size becomes sufficiently large. Due to the convoluted dependency and non-renewal nature of various point processes, this result significantly advances classical Poisson convergence analysis and offers a simple analytical platform for future modeling of networks under churn in a wide range of degree-regular and -irregular graphs with arbitrary node lifetime distributions.

【Keywords】: graph theory; stochastic processes; telecommunication network reliability; Poisson process; arbitrary node lifetime distributions; classical Poisson convergence analysis; convoluted dependency; degree-irregular graph; degree-regular graph; distributed systems; dynamic random graphs; edge-arrival process superposition; edge-creation process; generic modeling framework; heterogeneous edge process superposition; node lifetime heterogeneity; nonPoisson arrival-failure dynamics; nonrenewal nature; Ad hoc networks; Aggregates; Analytical models; Electronic mail; Peer to peer computing; Resilience; Routing

377. Reverse-engineering BitTorrent: A Markov approximation perspective.

Paper Link】 【Pages】:2996-3000

【Authors】: Ziyu Shao ; Hao Zhang ; Minghua Chen ; Kannan Ramchandran

【Abstract】: In this paper we examine the BitTorrent protocol from a Markov approximation perspective. We show that the rarest-first and choking algorithms in the BitTorrent protocol, together with the underlying rate control algorithm, implicitly solve a cooperative combinatorial network utility maximization problem in a distributed manner. This understanding allows us to assess properties of BitTorrent from a fresh perspective, including performance optimality, convergence and the impact of design parameters. Our numerical evaluations validate the analytical results.
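The rarest-first piece selection mentioned above can be sketched in a few lines: count how many neighbours hold each piece and fetch the missing piece with the smallest count. An illustrative sketch (names assumed; ties broken by piece id for determinism):

```python
def rarest_first(have, peers_have):
    """Rarest-first piece selection: among pieces we are missing, pick the
    one held by the fewest neighbours.

    have: set of piece ids we already own.
    peers_have: list of sets, one set of piece ids per neighbour.
    Returns a piece id, or None if nothing useful is available.
    """
    counts = {}
    for pieces in peers_have:
        for p in pieces:
            counts[p] = counts.get(p, 0) + 1
    missing = [p for p in counts if p not in have]
    if not missing:
        return None
    return min(missing, key=lambda p: (counts[p], p))
```

Preferring rare pieces keeps piece availability balanced across the swarm, which is one ingredient of the implicit utility maximization the abstract describes.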

【Keywords】: Markov processes; combinatorial mathematics; optimisation; peer-to-peer computing; protocols; reverse engineering; BitTorrent protocol; Markov approximation; choking algorithm; cooperative combinatorial network utility maximization problem; design parameters; rate control algorithm; reverse-engineering BitTorrent; Algorithm design and analysis; Approximation methods; Convergence; Markov processes; Optimization; Peer to peer computing; Protocols

378. Performance analysis of non-stationary peer-assisted VoD systems.

Paper Link】 【Pages】:3001-3005

【Authors】: Delia Ciullo ; Valentina Martina ; Michele Garetto ; Emilio Leonardi ; Giovanni Luca Torrisi

【Abstract】: We analyze a peer-assisted Video-on-Demand system in which users contribute their upload bandwidth to the redistribution of a video that they are downloading or that they have cached locally. Our target is to characterize the additional bandwidth that servers must supply to immediately satisfy all requests to watch a given video. We develop an approximate fluid model to compute the required server bandwidth in the sequential delivery case. Our approach is able to capture several stochastic effects related to peer churn, upload bandwidth heterogeneity, and non-stationary traffic conditions, which have not been documented or analyzed before. We provide an analytical methodology to design efficient peer-assisted VoD systems and optimal resource allocation strategies.

【Keywords】: stochastic processes; video on demand; approximate fluid model; nonstationary peer-assisted VoD systems; nonstationary traffic conditions; optimal resource allocation strategies; peer churn; peer-assisted video-on-demand system; performance analysis; sequential delivery case; server bandwidth; stochastic effects; upload bandwidth heterogeneity; video redistribution; Approximation methods; Bandwidth; Random variables; Servers; Stochastic processes; Streaming media; TV

379. Co-evolution of content popularity and delivery in mobile P2P networks.

Paper Link】 【Pages】:3006-3010

【Authors】: Srinivasan Venkatramanan ; Anurag Kumar

【Abstract】: Mobile P2P technology provides a scalable approach for content delivery to a large number of users on their mobile devices. In this work, we study the dissemination of a single item of content (e.g., an item of news, a song or a video clip) among a population of mobile nodes. Each node in the population is either a destination (interested in the content) or a potential relay (not yet interested in the content). There is an interest evolution process by which nodes not yet interested in the content (i.e., relays) can become interested (i.e., become destinations) on learning about the popularity of the content (i.e., the number of already interested nodes). In our work, the interest in the content evolves under the linear threshold model. The content is copied between nodes when they make random contact. For this we employ a controlled epidemic spread model. We model the joint evolution of the copying process and the interest evolution process, and derive joint fluid limit ordinary differential equations. We then study the selection of parameters under the content provider's control, for the optimization of various objective functions that aim at maximizing content popularity and efficient content delivery.
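The building block of such fluid-limit ODEs is the SI epidemic equation x' = beta·x·(1-x) for the fraction of nodes holding the content. A forward-Euler sketch of that building block; the paper's joint ODE system couples copying with interest evolution, and the constants here are illustrative.

```python
def si_fluid(beta, x0, t_end, dt=0.001):
    """Forward-Euler integration of the SI fluid limit x' = beta*x*(1-x).

    x is the fraction of nodes that already hold the content, beta the
    pairwise contact rate. Returns the approximate x(t_end).
    """
    x = x0
    steps = int(t_end / dt)
    for _ in range(steps):
        x += dt * beta * x * (1.0 - x)
    return x
```

The numerical solution tracks the logistic closed form x(t) = x0·e^(beta·t) / (1 - x0 + x0·e^(beta·t)), showing the familiar S-shaped spread that the controlled epidemic model generalizes.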

【Keywords】: differential equations; mobile radio; peer-to-peer computing; content delivery; content dissemination; content popularity; content provider control; destination node; epidemic spread model; interest evolution process; joint fluid limit ordinary-differential equations; linear threshold model; mobile P2P networks; mobile devices; mobile nodes; objective function optimization; potential relay node; Peer to peer computing

380. Compressive broadcast in MIMO systems with receive antenna heterogeneity.

Paper Link】 【Pages】:3011-3015

【Authors】: Xiao Lin Liu ; Chong Luo ; Wenjun Hu ; Feng Wu

【Abstract】: The key challenge in a broadcast system is receiver heterogeneity, where the weakest receiver typically constrains the entire system performance. Traditionally, this arises from heterogeneous channel SNRs at different receivers. In Multiple-Input Multiple-Output (MIMO) systems, a further heterogeneity is caused by different antenna numbers across receivers. We propose a compressive broadcast framework to address both types of heterogeneity. By layering compressive sensing (CS) over MIMO transmissions, our framework ensures a received source quality commensurate with the channel SNR and the MIMO channel dimension. Compared with a conventional framework, our framework achieves a smoother rate increase with channel SNR and much higher performance for multi-antenna receivers.

【Keywords】: MIMO communication; antenna arrays; compressed sensing; data compression; receiving antennas; wireless channels; MIMO channel dimension; MIMO systems; MIMO transmissions; antenna numbers; compressive broadcast framework; compressive sensing; heterogeneous channel SNR; multiantenna receivers; multiple-input multiple-output systems; receive antenna heterogeneity; Antenna measurements; MIMO; Noise measurement; Receiving antennas; Signal to noise ratio; Transmitting antennas

381. Distributed power control and coding-modulation adaptation in wireless networks using annealed Gibbs sampling.

Paper Link】 【Pages】:3016-3020

【Authors】: Shan Zhou ; Xinzhou Wu ; Lei Ying

【Abstract】: In wireless networks, the transmission rate of a link is determined by received signal strength, interference from simultaneous transmissions, and the available coding-modulation schemes. Rate allocation is a key problem in wireless network design, but a very challenging one because: (i) wireless interference is global, i.e., a transmission interferes with all other simultaneous transmissions, and (ii) the rate-power relation is non-convex and non-continuous, where the discontinuity is due to the limited number of coding-modulation choices in practical systems. In this paper, we consider a realistic Signal-to-Interference-and-Noise-Ratio (SINR) based interference model, and assume a continuous power space and finite rate options (coding-modulation choices). We propose a distributed power control and coding-modulation adaptation algorithm using annealed Gibbs sampling, which achieves throughput optimality in an arbitrary network topology.
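One Gibbs-sampler step draws an action with probability proportional to exp(utility/T), and lowering the temperature T (annealing) concentrates the draw on the best action. A minimal sketch of that generic step, not the paper's exact power/modulation update; names are assumed.

```python
import math
import random

def gibbs_update(utilities, temperature, rng=random):
    """Draw an index i with probability proportional to exp(utilities[i]/T).

    Subtracting the max utility before exponentiating keeps the weights
    numerically stable. As temperature -> 0 the draw concentrates on the
    best action, which is the idea behind annealing.
    """
    m = max(utilities)
    weights = [math.exp((u - m) / temperature) for u in utilities]
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1
```

At high temperature the sampler explores all power/modulation choices; as the temperature is annealed toward zero it locks onto the throughput-maximizing configuration.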

【Keywords】: distributed control; modulation coding; power control; radio networks; radiofrequency interference; telecommunication control; telecommunication network topology; SINR-based interference model; annealed Gibbs sampling; arbitrary network topology; coding-modulation adaptation; continuous power space; distributed power control; finite rate options; link transmission rate; rate allocation; rate-power relation; received signal strength; signal-to-interference-and-noise-ratio; throughput optimality; wireless interference; wireless network design; Interference; Modulation; Power control; Signal to noise ratio; Throughput; Transmitters; Wireless networks

382. Impact of channel state information on the stability of cognitive shared channels.

Paper Link】 【Pages】:3021-3025

【Authors】: Sastry Kompella ; Gam D. Nguyen ; Jeffrey E. Wieselthier ; Anthony Ephremides

【Abstract】: In this paper, we consider the problem of calculating the stability region of a two-user cognitive shared channel where the secondary (lower priority) user, whose channel is modeled as a two-state Gilbert-Elliott channel, utilizes the channel state information to adapt its transmission probabilities accordingly. The analysis also takes into account the compound effects of multipacket reception at the receiver, as well as the cooperative relaying capability of the secondary node, on the stability region of the cognitive network. Results clearly illustrate that knowledge of the secondary channel state benefits not only the secondary user, but also the primary user.

【Keywords】: cognitive radio; probability; wireless channels; channel state information; cognitive network; cooperative relaying capability; multipacket reception; primary user; secondary node; secondary user; transmission probabilities; two-state Gilbert-Elliott channel; two-user cognitive shared channel stability; Channel models; Channel state information; Numerical stability; Queueing analysis; Stability analysis; Steady-state; Throughput

383. Distributed channel probing for efficient transmission scheduling over wireless fading channels.

Paper Link】 【Pages】:3026-3030

【Authors】: Bin Li ; Atilla Eryilmaz

【Abstract】: It is energy-consuming and operationally cumbersome for all users to continuously estimate the channel quality before each data transmission decision in opportunistic scheduling over wireless fading channels. This observation motivates us to understand whether and how opportunistic gains can still be achieved with significant reductions in channel probing requirements and without centralized coordination amongst the competing users. In this work, we first provide an optimal centralized probing and transmission algorithm under the probing constraints. Noting the difficulties in the implementation of the centralized solution, we develop a novel Sequential Greedy Probing (SGP) algorithm by using the maximum-minimums identity, which is naturally well-suited for physical implementation and distributed operation. We show that the SGP algorithm is optimal in the important scenario of symmetric and independent ON-OFF fading channels. Then, we study a variant of the SGP algorithm in general fading channels to obtain its efficiency ratio as an explicit function of the channel statistics and rates, and note its tightness in the symmetric and independent ON-OFF fading scenario. We further expand on the distributed implementation of these greedy solutions by using the Fast-CSMA technique.

【Keywords】: carrier sense multiple access; channel estimation; fading channels; scheduling; SGP algorithm; channel quality estimation; channel rates; channel statistics; data transmission decision; distributed channel probing; efficient-transmission scheduling; fast-CSMA technique; independent on-off fading channels; maximum-minimum identity; opportunistic scheduling; optimal centralized probing; sequential greedy probing algorithm; symmetric fading channels; transmission algorithm; wireless fading channels; Algorithm design and analysis; Fading; Joints; Probes; Processor scheduling; Schedules; Wireless communication
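The greedy ordering idea can be illustrated with a toy sketch (an assumed ON-OFF channel model, not the paper's SGP algorithm itself): when channels are probed in decreasing order of rate, the first channel found ON already offers the highest achievable rate among the unprobed ones, so probing can stop immediately:

```python
def sequential_greedy_probe(channels, state):
    """Greedy probing over ON-OFF channels: probe in decreasing order of
    rate; the first channel found ON is optimal among all unprobed ones,
    because every remaining channel has a lower rate by construction.
    `channels` maps name -> rate; `state` maps name -> True if ON.
    Returns (chosen channel or None, number of probes spent)."""
    probes = 0
    for name in sorted(channels, key=channels.get, reverse=True):
        probes += 1           # one probing slot consumed
        if state[name]:       # channel is ON: no better rate remains
            return name, probes
    return None, probes       # every channel was OFF
```

The stopping rule is what saves probing cost relative to sensing every channel before each transmission decision.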

384. Ad hoc wireless networks meet the infrastructure: Mobility, capacity and delay.

Paper Link】 【Pages】:3031-3035

【Authors】: Devu Manikantan Shila ; Yu Cheng

【Abstract】: In our previous work [9], we investigated the capacity and delay of a static hybrid wireless network, consisting of n static wireless nodes overlaid with a cellular architecture of m base stations. By employing a more practical and simple routing policy, we proved that each wireless node can achieve a throughput that scales sublinearly or linearly with m. This was a significant result, as opposed to prior works on hybrid wireless networks which claimed that if m grows slower than some threshold, the benefit of augmenting those base stations to the pure ad hoc network is insignificant. Although our approach can render improved benefits in terms of capacity and delay compared to prior efforts, the analysis shows that a large deployment cost is required to achieve a Θ(1) capacity. Existing research efforts also indicate that for pure mobile ad hoc networks, a capacity of Θ(1) can be achieved by exploiting the mobility of the nodes, at the expense of very high end-to-end delay. This larger delay, nevertheless, stems from the assumption of global mobility, where nodes move around the entire network. In this paper, by leveraging a more practical and restricted mobility model, we investigate the capacity and delay of our hybrid wireless network design with n mobile nodes and m base stations, termed a mobile hybrid wireless network. Interestingly, our results show that each node can achieve a capacity of Θ(1), while keeping the average end-to-end delay smaller by a factor of m than in pure mobile ad hoc networks.

【Keywords】: mobile ad hoc networks; mobility management (mobile radio); telecommunication network routing; ad hoc wireless networks; base stations; cellular architecture; deployment cost; end-to-end delay; global mobility; mobile ad hoc networks; mobile hybrid wireless network; mobile nodes; node mobility; restricted mobility model; routing policy; static hybrid wireless network capacity; static wireless nodes; Ad hoc networks; Base stations; Delay; Mobile communication; Throughput; Wireless networks

385. Link correlation aware opportunistic routing.

Paper Link】 【Pages】:3036-3040

【Authors】: Anas Basalamah ; Song Min Kim ; Shuo Guo ; Tian He ; Yoshito Tobe

【Abstract】: By exploiting the reception diversity of wireless network links, researchers have shown that opportunistic routing can improve network performance significantly over traditional routing schemes. However, recent empirical studies indicate that we are too optimistic, i.e., the diversity gain can be overestimated if we continue to assume that packet receptions of wireless links are independent events. For the first time, this paper formally analyzes the opportunistic routing gain under the presence of link correlation, considering the loss of DATA and ACK packets. Based on the model, we introduce a new link-correlation-aware opportunistic routing scheme, which improves performance by exploiting diverse uncorrelated forwarding links. Our design is evaluated using simulation, where we show that (i) link correlation leads to less diversity gain, and (ii) with our link-correlation-aware design, improvement can be gained. We also provide a unique model to generate strings of randomly correlated receptions.

【Keywords】: diversity reception; radio links; radio networks; telecommunication network routing; ACK packet; data packet; diverse-uncorrelated forwarding links; diversity gain; link correlation; link correlation-aware opportunistic routing scheme; link-correlation-aware design; network performance improvement; opportunistic routing gain; packet receptions; reception diversity; wireless network links; Correlation; Measurement; Protocols; Routing; Wireless networks; Wireless sensor networks

386. Revisiting delay-capacity tradeoffs for mobile networks: The delay is overestimated.

Paper Link】 【Pages】:3041-3045

【Authors】: Yoora Kim ; Kyunghan Lee ; Ness B. Shroff ; Injong Rhee

【Abstract】: In the literature, one of the key assumptions in characterizing the scaling laws for wireless mobile networks is that nodes do not communicate while being mobile. In other words, contact opportunities are not considered during the mobility process itself. However, we find that this assumption leads to an inflated estimate of the delay, even in an order sense. To address this issue, a new framework that allows nodes to communicate while being mobile is proposed in this paper. Under this framework, it is shown that the delays needed to obtain various levels of throughput for the i.i.d. mobility model are overestimated, and a new, tighter delay-capacity tradeoff is suggested. The framework is also used to analytically derive the delay-capacity tradeoff of the Lévy flight model for various levels of throughput, where a Lévy flight is a random walk with a power-law flight distribution with an exponent α ∈ (0, 2]. It is known as a mobility model which closely captures human movement patterns. The tradeoffs from the proposed framework between the delay (D̅) and per-node throughput (λ) indicate that D̅ = O(√(max(1, nλ^3))) holds for i.i.d. mobility and D̅ = O(√(min(n^(1+α)λ, n^2))) holds for Lévy flight.

【Keywords】: mobility management (mobile radio); Levy flight model; delay-capacity tradeoffs; delay-inflated estimation; human movement patterns; iid mobility model; mobility process; per-node throughput; power-law flight distribution; wireless mobile networks; Ad hoc networks; Delay; Mobile communication; Mobile computing; Random variables; Throughput; Wireless networks
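For readers unfamiliar with the mobility model, a Lévy flight with exponent α can be simulated by drawing flight lengths from a power-law tail via inverse-transform sampling; the minimum flight length x_min and the uniformly random 2-D heading are assumptions of this sketch, not details from the paper:

```python
import math
import random

def levy_flight(steps, alpha, x_min=1.0, seed=7):
    """Simulate a 2-D Levy flight: each flight length is drawn from a
    Pareto (power-law) tail P(X > x) = (x / x_min)^(-alpha) via
    inverse-transform sampling, with a uniformly random direction.
    alpha in (0, 2] matches the exponent range in the abstract."""
    rng = random.Random(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(steps):
        u = 1.0 - rng.random()                 # uniform on (0, 1]
        length = x_min * u ** (-1.0 / alpha)   # power-law flight length
        theta = rng.uniform(0.0, 2.0 * math.pi)  # random heading
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        path.append((x, y))
    return path
```

The heavy tail (occasional very long flights mixed with many short ones) is exactly the feature that distinguishes Lévy flights from Brownian-style mobility in delay-capacity analyses.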

387. Lower bound of weighted fairness guaranteed congestion control protocol for WSNs.

Paper Link】 【Pages】:3046-3050

【Authors】: Guohua Li ; Jianzhong Li ; Bo Yu

【Abstract】: In wireless sensor networks, congestion not only leads to packet loss, but also increases delays and lowers network throughput, with a lot of energy wasted on retransmissions. Therefore, an effective solution is needed to mitigate congestion, increase energy efficiency and prolong the network lifetime. Traditional solutions work in an open-loop fashion. In this paper, we propose a novel decentralized, weighted fairness guaranteed congestion control protocol (WFCC). WFCC introduces a node weight to reflect the importance of each node, and uses the ratio of packet inter-arrival time to packet service time as its congestion metric. Based on the node weight and congestion metric, WFCC divides the time axis into period sequences, and uses a closed-loop control method to mitigate congestion by periodically adjusting the incoming data rate of each node. Importantly, WFCC introduces a weighted fairness metric and gives its lower bound for the first time, namely 1 - (10c/9)^2, where 0 < c < 0.2 is a constant. Simulation results show that the weighted fairness of WFCC achieves 95% on average, which is much better than the existing rate-based congestion control protocol. Moreover, WFCC achieves 50% and 19% gains in network throughput and weighted fairness on average, compared with PCCP, the state-of-the-art rate-control-based congestion control protocol.

【Keywords】: closed loop systems; protocols; telecommunication congestion control; telecommunication network reliability; wireless sensor networks; PCCP; WSN; closed-loop control method; congestion mitigation; decentralized WFCC protocol; energy efficiency; energy wastage; network lifetime; network throughput; node weight; packet interarrival time; packet loss; packet service time; rate-based congestion control protocol; state-of-art rate control-based congestion control protocol; weighted fairness guaranteed congestion control protocol; wireless sensor networks; Measurement; Protocols; Reliability; Routing; Throughput; Wireless communication; Wireless sensor networks
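The two quantities the abstract defines — the congestion metric (ratio of packet inter-arrival time to packet service time) and the fairness lower bound 1 - (10c/9)^2 — can be written down directly; the function names here are hypothetical:

```python
def congestion_metric(interarrival, service):
    """WFCC-style congestion indicator from the abstract: the ratio of
    mean packet inter-arrival time to mean packet service time. A ratio
    below 1 means packets arrive faster than they can be served."""
    return (sum(interarrival) / len(interarrival)) / (sum(service) / len(service))

def fairness_lower_bound(c):
    """Lower bound on weighted fairness stated in the abstract:
    1 - (10c/9)^2 for a constant 0 < c < 0.2."""
    if not 0.0 < c < 0.2:
        raise ValueError("c must lie in (0, 0.2)")
    return 1.0 - (10.0 * c / 9.0) ** 2
```

Note that over the stated range of c the bound stays above 1 - (2/9)^2 ≈ 0.95, consistent with the 95% average fairness reported in the simulations.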

388. Impact of secrecy on capacity in large-scale wireless networks.

Paper Link】 【Pages】:3051-3055

【Authors】: Jinbei Zhang ; Luoyi Fu ; Xinbing Wang

【Abstract】: Since the wireless channel is vulnerable to eavesdroppers, secrecy during message delivery is a major concern in many applications such as commercial, governmental and military networks. This paper investigates information-theoretic secrecy in large-scale networks and studies how capacity is affected by the secrecy constraint, where the locations and channel state information (CSI) of eavesdroppers are both unknown. We consider two scenarios: 1) the non-colluding case, where eavesdroppers can only decode messages individually; and 2) the colluding case, where eavesdroppers can collude to decode a message. For the non-colluding case, we show that the network secrecy capacity is not affected in order sense by the presence of eavesdroppers. For the colluding case, a per-node secrecy capacity of Θ(1/√n) can be achieved when the eavesdropper density ψe(n) is O(n^β), for any constant β > 0, and it decreases monotonically as the density of eavesdroppers increases. The upper bounds on network secrecy capacity are derived for both cases and shown to be achievable by our scheme when ψe(n) = O(n^β) or ψe(n) = Ω(log^((α-2)/α) n), where α is the path loss gain. We show that there is a clear tradeoff between the security constraints and the achievable capacity.

【Keywords】: decoding; radio networks; telecommunication security; wireless channels; channel state information; commercial network; eavesdropper density; eavesdropper locations; governmental network; information-theoretic secrecy; large-scale wireless networks; message decoding; message delivery; military network; network secrecy capacity; path loss gain; per-node secrecy capacity; secrecy constraint; wireless channel

389. Locating malicious nodes for data aggregation in wireless networks.

Paper Link】 【Pages】:3056-3060

【Authors】: XiaoHua Xu ; Qian Wang ; Jiannong Cao ; Peng-Jun Wan ; Kui Ren ; Yuanfang Chen

【Abstract】: Data aggregation, as a primitive communication task in wireless networks, can reduce the communication complexity. However, in-network aggregation usually brings an unavoidable security defect: some malicious nodes may control a large percentage of the whole network's data and compel the network to misbehave in an arbitrary manner. Thus, locating the malicious nodes to prevent them from causing further damage is a practical challenge for data aggregation schemes. Based on grouping and localization techniques, we propose a novel integrated protocol to locate malicious nodes. The proposed protocol does not rely on any special hardware and requires only incomplete information about the network from the security schemes. We also conduct a simulation study to evaluate the proposed protocol.

【Keywords】: radio networks; security of data; telecommunication security; communication complexity; data aggregation; in-network aggregation; malicious nodes; primitive communication task; security scheme; unavoidable security defect; whole network data; wireless networks; Aggregates; Communication system security; Protocols; Security; Wireless networks; Wireless sensor networks

390. Phantom: Physical layer cooperation for location privacy protection.

Paper Link】 【Pages】:3061-3065

【Authors】: Sangho Oh ; Tam Vu ; Marco Gruteser ; Suman Banerjee

【Abstract】: Localization techniques that allow inferring the location of wireless devices directly from received signals have exposed mobile users to new threats. Adversaries can easily collect the required information (such as signal strength) from target users; however, techniques securing location information at the physical layer of wireless communication systems have not received much attention. In this paper, we propose Phantom, a novel approach that allows mobile devices to thwart an unauthorized adversary's location tracking by creating forged locations. In particular, Phantom leverages cooperation among multiple mobile devices in close vicinity and utilizes synchronized transmissions among those nodes to obfuscate the localization efforts of adversary systems. Through an implementation on software-defined radios (GNU Radio) and extensive simulation with real location traces, we show that Phantom can improve location privacy.

【Keywords】: data privacy; mobile radio; radio tracking; software radio; telecommunication security; GNU radio; Phantom; forged location; localization technique; location information; location privacy protection; location tracking; mobile device; mobile user; physical layer cooperation; received signal; signal strength; software-defined radio; threat; unauthorized adversary; wireless communication system; wireless device; IEEE 802.11g Standard; OFDM; Phantoms; Privacy; Radio transmitters; Synchronization; Wireless communication

391. Hiding traffic with camouflage: Minimizing message delay in the smart grid under jamming.

Paper Link】 【Pages】:3066-3070

【Authors】: Zhuo Lu ; Wenye Wang ; Cliff Wang

【Abstract】: The smart grid is an emerging cyber-physical system that integrates power infrastructures with information technologies. In the smart grid, wireless networks have been proposed for efficient communications. However, the jamming attack that broadcasts radio interference is a primary security threat to the deployment of wireless networks. Hence, spread spectrum systems with jamming resilience must be adapted to the smart grid to secure wireless communications. There have been extensive works on designing spread spectrum schemes to achieve feasible communication under jamming attacks. Nevertheless, an open question in the smart grid is how to minimize message delay for timely communication in power applications. In this paper, we address this problem in a wireless network with spread spectrum systems for the smart grid. By defining a generic jamming process that characterizes a wide range of existing jamming models, we show that the worst-case message delay is a U-shaped function of network traffic load. This indicates that, interestingly, increasing a fair amount of redundant traffic, called camouflage, can improve the worst-case delay performance. We demonstrate via experiments that transmitting camouflage traffic can decrease the probability that a message is not delivered on time by an order of magnitude for smart grid applications.

【Keywords】: jamming; radio networks; spread spectrum communication; telecommunication security; U-shaped function; camouflage traffic; cyber-physical system; generic jamming process; information technology; jamming attack; jamming model; jamming resilience; message delay minimization; network traffic load; power infrastructures; radio interference; redundant traffic; secure wireless communication; smart grid application; spread spectrum systems; traffic hiding; wireless networks; worst-case delay performance; worst-case message delay; Delay; Jamming; Smart grids; Time factors; Wireless networks; Wireless sensor networks

392. Providing hop-by-hop authentication and source privacy in wireless sensor networks.

Paper Link】 【Pages】:3071-3075

【Authors】: Yun Li ; Jian Li ; Jian Ren ; Jie Wu

【Abstract】: Message authentication is one of the most effective ways to thwart unauthorized and corrupted traffic from being forwarded in wireless sensor networks (WSNs). To provide this service, a polynomial-based scheme was recently introduced. However, this scheme and its extensions all have the weakness of a built-in threshold determined by the degree of the polynomial: when the number of messages transmitted is larger than this threshold, the adversary can fully recover the polynomial. In this paper, we propose a scalable authentication scheme based on elliptic curve cryptography (ECC). While enabling intermediate node authentication, our proposed scheme allows any node to transmit an unlimited number of messages without suffering the threshold problem. In addition, our scheme can also provide message source privacy. Both theoretical analysis and simulation results demonstrate that our proposed scheme is more efficient than the polynomial-based approach in terms of communication and computational overhead under comparable security levels while providing message source privacy.

【Keywords】: data privacy; message authentication; polynomials; public key cryptography; telecommunication security; telecommunication traffic; wireless sensor networks; ECC; WSN; built-in threshold; communication overhead; computational overhead; elliptic curve cryptography; hop-by-hop authentication; intermediate node authentication; message authentication; message source privacy; polynomial-based scheme; scalable authentication scheme; security level; wireless sensor network; Authentication; Message authentication; Polynomials; Privacy; Public key; Hop-by-hop authentication; public-key cryptosystem; source privacy; symmetric-key cryptosystem
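The scheme's building block, elliptic curve arithmetic, can be illustrated on a textbook-sized curve (the paper's actual curve and protocol are not specified here, and this toy field is far too small for real security):

```python
# Toy elliptic-curve arithmetic over a small prime field, illustrating the
# group operation that ECC-based schemes build on. The curve
# y^2 = x^3 + 2x + 2 over F_17 is a common textbook example whose
# generator (5, 1) has prime order 19.
P, A, B = 17, 2, 2
INF = None  # point at infinity (group identity)

def ec_add(p1, p2):
    """Add two points on the curve using the chord-and-tangent rule."""
    if p1 is INF:
        return p2
    if p2 is INF:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return INF                      # p2 is the inverse of p1
    if p1 == p2:                        # tangent slope for doubling
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:                               # chord slope
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, point):
    """Scalar multiplication by repeated doubling (double-and-add)."""
    result, addend = INF, point
    while k:
        if k & 1:
            result = ec_add(result, addend)
        addend = ec_add(addend, addend)
        k >>= 1
    return result
```

The hardness of recovering k from k·G on a cryptographically sized curve (the elliptic curve discrete logarithm problem) is what removes the polynomial-degree threshold that limits the earlier polynomial-based schemes.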

393. TAHES: Truthful double Auction for Heterogeneous Spectrums.

Paper Link】 【Pages】:3076-3080

【Authors】: Xiaojun Feng ; Yanjiao Chen ; Jin Zhang ; Qian Zhang ; Bo Li

【Abstract】: Auctions are widely applied in wireless communication for spectrum allocation. Most prior works have assumed that spectrums are identical. In reality, however, spectrums provided by different owners have distinctive characteristics in both the spatial and frequency domains. Spectrum availability also varies across geo-locations. Furthermore, frequency diversity may cause non-identical conflicts among spectrum buyers, since different frequencies have distinct communication ranges. Under such a realistic scenario, existing spectrum auction schemes cannot provide truthfulness or efficiency. In this paper, we propose a Truthful double Auction for HEterogeneous Spectrum, called TAHES. TAHES allows buyers to explicitly express their personalized preferences for heterogeneous spectrums and also addresses the problem of interference graph variation. We prove that TAHES has nice economic properties, including truthfulness, individual rationality and budget balance.

【Keywords】: radio spectrum management; TAHES; frequency diversity; heterogeneous spectrums; interference graph variation; spectrum allocation; spectrum auction scheme; spectrum availability; truthful double auction; wireless communication; Availability; Cost accounting; Economics; Interference; Resource management; White spaces; Wireless communication
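Truthful double auctions like TAHES build on classical mechanism design; a McAfee-style trade-reduction mechanism (a standard textbook building block, not TAHES itself, which additionally handles heterogeneous spectra and interference graphs) can be sketched as:

```python
def trade_reduction_auction(bids, asks):
    """McAfee-style trade-reduction double auction sketch. Buyers' bids
    are sorted descending and sellers' asks ascending; with k the number
    of profitable pairs, only the first k-1 pairs trade, buyers paying
    the k-th highest bid and sellers receiving the k-th lowest ask, so no
    winner's payment depends on its own report (truthfulness) and the
    buyer price never falls below the seller price (budget balance).
    Returns (number of trades, buyer price, seller price)."""
    b = sorted(bids, reverse=True)
    a = sorted(asks)
    k = 0
    while k < min(len(b), len(a)) and b[k] >= a[k]:
        k += 1                          # count profitable buyer-seller pairs
    if k < 2:
        return 0, None, None            # fewer than two pairs: no trade
    return k - 1, b[k - 1], a[k - 1]    # trades, buyer price, seller price
```

Sacrificing the k-th (least profitable) trade is the classic price paid for combining truthfulness with budget balance, which exact-efficiency mechanisms such as VCG cannot do.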

394. Almost optimal dynamically-ordered multi-channel accessing for cognitive networks.

Paper Link】 【Pages】:3081-3085

【Authors】: Bowen Li ; Panlong Yang ; Xiang-Yang Li ; Shaojie Tang ; Yunhao Liu ; Qihui Wu

【Abstract】: For cognitive wireless networks, one challenge is that the status of the channels' availability and quality is difficult to predict and quantify. Numerous learning-based online channel sensing and accessing strategies have been proposed to address this challenge. In this work, we propose a novel channel sensing and accessing strategy that carefully balances channel statistics exploration and multichannel diversity exploitation. Unlike traditional MAB-based approaches, in our scheme a secondary cognitive radio user sequentially senses the status of multiple channels in a carefully designed order. We formulate the online sequential channel sensing and accessing problem as a sequencing multi-armed bandit problem, and propose a novel policy whose regret is of optimal logarithmic rate in time and polynomial in the number of channels. We conducted extensive simulations to compare the performance of our method with the traditional MAB-based approach. Our simulation results show that our scheme improves throughput by more than 30% and speeds up the learning process by more than 100%.

【Keywords】: cognitive radio; diversity reception; statistical analysis; MAB-based approaches; almost-optimal dynamically-ordered multichannel accessing; channel availability; channel statistic exploration; cognitive wireless networks; learning-based online channel sensing strategy; multichannel diversity exploitation; online sequential channel sensing-accessing problem; optimal logarithmic rate; secondary cognitive radio user; sequencing multiarmed bandit problem; Acceleration; Analytical models; Indexes; Polynomials; Sensors; TV; Throughput
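A standard way to balance statistics exploration against exploitation when ordering channels is a UCB-style index; the following sketch (an assumption of this summary, not the paper's exact sequencing policy) ranks channels by empirical availability plus an exploration bonus:

```python
import math

def ucb_channel_order(successes, trials, t):
    """Rank channels for sequential sensing by a UCB index: empirical
    availability plus an exploration bonus that shrinks as a channel is
    probed more often. `successes`/`trials` are per-channel counts and
    `t` is the current time slot. Returns channel indices, best first."""
    def index(i):
        if trials[i] == 0:
            return float("inf")          # always try unprobed channels first
        mean = successes[i] / trials[i]
        bonus = math.sqrt(2.0 * math.log(t) / trials[i])
        return mean + bonus
    return sorted(range(len(trials)), key=index, reverse=True)
```

The logarithmic bonus is what yields logarithmic-in-time regret for classical UCB policies; the sequencing-bandit formulation in the paper extends this idea from picking one arm to picking a sensing order.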

395. Efficient online learning for opportunistic spectrum access.

Paper Link】 【Pages】:3086-3090

【Authors】: Wenhan Dai ; Yi Gai ; Bhaskar Krishnamachari

【Abstract】: The problem of opportunistic spectrum access in cognitive radio networks has been recently formulated as a non-Bayesian restless multi-armed bandit problem. In this problem, there are N arms (corresponding to channels) and one player (corresponding to a secondary user). The state of each arm evolves as a finite-state Markov chain with unknown parameters. At each time slot, the player can select K < N arms to play and receives state-dependent rewards (corresponding to the throughput obtained given the activity of primary users). The objective is to maximize the expected total rewards (i.e., total throughput) obtained over multiple plays. The performance of an algorithm for such a multi-armed bandit problem is measured in terms of regret, defined as the difference in expected reward compared to a model-aware genie who always plays the best K arms. In this paper, we propose a new continuous exploration and exploitation (CEE) algorithm for this problem. When no information is available about the dynamics of the arms, CEE is the first algorithm to guarantee near-logarithmic regret uniformly over time. When some bounds corresponding to the stationary state distributions and the state-dependent rewards are known, we show that CEE can be easily modified to achieve logarithmic regret over time. In contrast, prior algorithms require additional information concerning bounds on the second eigenvalues of the transition matrices in order to guarantee logarithmic regret. Finally, we show through numerical simulations that CEE is more efficient than prior algorithms.

【Keywords】: Markov processes; cognitive radio; CEE algorithm; cognitive radio networks; continuous exploration-exploitation algorithm; efficient-online learning; finite-state Markov chain; model-aware genie; near-logarithmic regret; nonBayesian restless multiarmed bandit problem; numerical simulations; opportunistic spectrum access; primary users; state-dependent rewards; stationary state distributions; transition matrix eigenvalues; Algorithm design and analysis; Bayesian methods; Bismuth; Heuristic algorithms; Indexes; Markov processes; Numerical stability

396. Robust threshold design for cooperative sensing in cognitive radio networks.

Paper Link】 【Pages】:3091-3095

【Authors】: Shimin Gong ; Ping Wang ; Jianwei Huang

【Abstract】: The successful coexistence of cognitive radio systems and licensed systems requires the secondary users to have the capability of sensing and keeping track of primary transmissions. While existing spectrum sensing methods usually assume known distributions of the primary signals, such an assumption is often not true in practice. As a result, applying existing sensing methods directly will often lead to unreliable detection performance in practical networks. In this paper, we try to improve the sensing performance under the distribution uncertainty of primary signals. We formulate the optimal sensing design as a robust optimization problem, and propose an iterative algorithm to determine the optimal decision threshold for each user. Extensive simulations demonstrate the effectiveness of our proposed algorithm.

【Keywords】: cognitive radio; optimisation; radio networks; radio spectrum management; cognitive radio networks; cognitive radio systems; cooperative sensing; distribution uncertainty; iterative algorithm; licensed systems; optimal decision threshold; optimal sensing design; primary signals; robust optimization problem; robust threshold design; secondary users; sensing performance; spectrum sensing; unreliable detection performance; Cognitive radio; Noise; Optimization; Robustness; Sensors; Uncertainty; Vectors; Cognitive radio; distribution uncertainty; robust optimization; spectrum sensing
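One simple instance of robust threshold design (an assumed Gaussian energy-detector model with bounded noise-power uncertainty; the paper's iterative algorithm is more general) sets the decision threshold at the worst-case corner of the uncertainty set:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def robust_threshold(n, sigma2_max, target_pfa, tol=1e-9):
    """Robust threshold for an energy detector over n samples: under a
    bounded-uncertainty noise model, the false-alarm rate is worst at the
    largest admissible noise power sigma2_max, so we pick (via bisection)
    the threshold that meets target_pfa at that corner. Uses the standard
    Gaussian approximation Pfa = Q((tau/sigma2 - n) / sqrt(2n))."""
    def pfa(tau):
        return q_func((tau / sigma2_max - n) / math.sqrt(2.0 * n))
    lo, hi = 0.0, 10.0 * n * sigma2_max   # Pfa is decreasing in tau
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if pfa(mid) > target_pfa:
            lo = mid                      # too many false alarms: raise tau
        else:
            hi = mid
    return hi
```

Designing for the worst case in the uncertainty set is the essence of the robust formulation: any admissible noise distribution then meets the false-alarm target, at the cost of some detection probability.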

397. Cooperative cognitive radio networking using quadrature signaling.

Paper Link】 【Pages】:3096-3100

【Authors】: Bin Cao ; Lin X. Cai ; Hao Liang ; Jon W. Mark ; Qinyu Zhang ; H. Vincent Poor ; Weihua Zhuang

【Abstract】: A quadrature signaling based two-phase cooperation framework for cooperative cognitive radio networking is proposed. By leveraging the degrees of freedom provided by orthogonal modulation, secondary users are able to relay the traffic of primary users and transmit their own in the same time slot without interference. To evaluate the cooperation performance of the proposed framework, a weighted sum throughput maximization problem is formulated, and closed-form solutions of the optimal power setting/allocation are obtained in the amplify-and-forward and decode-and-forward relaying modes. Simulation results validate the efficiency of the proposed framework.

【Keywords】: amplify and forward communication; cognitive radio; cooperative communication; decode and forward communication; signal processing; allocation; amplify and forward relaying mode; closed form solution; cooperative cognitive radio networking; decode and forward relaying mode; interference; optimal power setting; orthogonal modulation; quadrature signaling; time slot; two-phase cooperation framework; weighted sum throughput maximization problem; Binary phase shift keying; Cognitive radio; Relays; Scattering; Sensors; Signal to noise ratio; Throughput