INFOCOM 2010. 29th IEEE International Conference on Computer Communications, Joint Conference of the IEEE Computer and Communications Societies, 15-19 March 2010, San Diego, CA, USA. IEEE 【DBLP Link】
【Paper Link】 【Pages】:1-5
【Authors】: Yong Yang ; Lu Su ; Yan Gao ; Tarek F. Abdelzaher
【Abstract】: Solar-powered sensor nodes have an incentive to spend extra energy, especially when the battery is fully charged, because this energy surplus would otherwise be wasted. In this paper, we consider the problem of utilizing such energy surplus to adaptively adjust the redundancy level of erasure codes used in communication, so that delivery reliability is improved while the network lifetime is still conserved. We formulate the problem as maximizing the end-to-end packet delivery probability under energy constraints. This formulated problem is hard to solve because of the combinatorics involved and the special curvature of its objective function. By exploiting its inherent properties, we propose an effective solution called SolarCode, which has a constant approximation ratio. We evaluate SolarCode in the context of our solar-powered sensor network testbed. Experiments show that SolarCode is successful in utilizing energy surplus and leads to higher data delivery reliability.
【Keywords】: codes; telecommunication network reliability; wireless sensor networks; SolarCode; data delivery reliability; end-to-end packet delivery; energy surplus; erasure codes; solar-powered sensor nodes; solar-powered wireless sensor networks; Batteries; Communications Society; Computer network reliability; Computer science; Network topology; Peer to peer computing; Redundancy; Relays; Telecommunication network reliability; Wireless sensor networks
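The trade-off SolarCode exploits, spending surplus energy on extra coded fragments to raise delivery reliability, can be illustrated with the delivery probability of (n, k) erasure coding. The independent per-fragment loss model below is an assumption for illustration, not the paper's exact hop-by-hop formulation.

```python
from math import comb

def delivery_probability(n, k, p):
    """Probability that at least k of n erasure-coded fragments arrive,
    assuming independent per-fragment delivery probability p (an
    illustrative assumption, not the paper's exact channel model)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Spending energy surplus on extra redundancy (larger n, same k)
# raises end-to-end reliability:
base = delivery_probability(10, 8, 0.9)     # n=10 fragments, any 8 suffice
boosted = delivery_probability(14, 8, 0.9)  # surplus buys 4 extra fragments
```

With p = 0.9, the base setting already delivers with probability about 0.93; the four extra fragments push reliability higher at the cost of the otherwise-wasted energy.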
【Paper Link】 【Pages】:6-10
【Authors】: Michael Segal ; Hanan Shpungin
【Abstract】: This paper studies the problem of topology control in random wireless ad-hoc networks through power assignment for n nodes uniformly distributed in a unit square. In particular, we are interested in asymmetric power assignments so that the induced communication graph has good distance and energy stretch simultaneously, with additional optimization objectives: minimizing the total energy consumption, interference level, and hop-diameter, while maximizing the network lifetime. We present several power assignments with varying construction time complexity. The probability of our results converges to one as the number of network nodes, n, increases. To the best of our knowledge, these are the first results for spanner construction in wireless ad-hoc networks with provable bounds for both the energy and distance metrics simultaneously.
【Keywords】: ad hoc networks; communication complexity; graph theory; optimisation; asymmetric power assignments; distance metrics; energy metrics; hop diameter; improved multicriteria spanner; induced communication graph; interference level; network lifetime; optimization objectives; random wireless ad hoc networks; time complexity; topology control; total energy consumption; Ad hoc networks; Batteries; Communications Society; Energy consumption; Energy efficiency; Interference; Network topology; Peer to peer computing; Relays; Transceivers
【Paper Link】 【Pages】:11-15
【Authors】: Yan Gao ; Zheng Zeng ; P. R. Kumar
【Abstract】: In wireless networks, selecting the transmit power that maximizes throughput is a challenging problem. On one hand, transmissions at a high power level could increase interference to others; on the other hand, transmissions at a low power level are prone to interference from others. Prior works treat this problem as a search for a fixed optimal power setting that maximizes communication spatial reuse. In this paper, we pursue a novel approach that combines power selection with a random medium access mechanism. For each transmission, a node randomly selects a transmit power from all available power levels to access the medium. In this way, all combinations of network power settings could be selected with some probability. Using a recently developed Markov chain model, we derive a distributed scheme that determines the access probabilities of each power setting, according to the arrival rate of traffic and the service rate achieved by the scheme. We show that this scheme always converges to the optimal solution. Moreover, we also show that the random scheme can attain the maximal throughput region that can be obtained by any time-sharing between power settings, which is consequently larger than the region any fixed power setting can achieve.
【Keywords】: Markov processes; power control; wireless channels; Markov chain model; communication spatial reuse; maximal throughput; optimal power setting; optimal solution; power levels; power selection; random medium access; wireless networks; Access protocols; Communications Society; Contracts; Interference; Power control; Power system modeling; Telecommunication traffic; Throughput; Traffic control; Wireless networks
【Paper Link】 【Pages】:16-20
【Authors】: Wenjie Zeng ; Anish Arora ; Ness B. Shroff
【Abstract】: The energy efficiency of the widely used convergecast pattern depends substantially on the choice of medium access control (MAC) and routing protocol. In this paper, we formalize the maximization of convergecast energy efficiency with respect to its MAC and routing as a resource constrained optimization problem. We then analytically show that this maximization problem is linear in the context of two prototypical MACs - a locally synchronized wakeup (as in S-MAC) and a locally staggered wakeup MAC (as in O-MAC) - assuming low, uniform traffic that is delivered reliably and without interference. With this insight, we present a centralized algorithm, MeeCast, that solves the optimization problem utilizing linear programming techniques. We also design a distributed version of MeeCast, for the case where the traffic is ultra-low, and prove that it achieves optimality as well as fast convergence time. Notably, this version is self-stabilizing, so it autonomically handles changes in traffic load, network topology, loss of coordination and state corruption. In comparison with Dozer, a state-of-the-art convergecast protocol, MeeCast achieves better energy efficiency and application lifetime in the context of S-MAC and identical energy efficiency but better application lifetime in the context of O-MAC.
【Keywords】: access protocols; optimisation; routing protocols; wireless sensor networks; MeeCast; centralized algorithm; convergecast pattern; convergecast protocol; energy efficiency; fast convergence time; joint duty cycle; linear maximization problem; medium access control protocol; resource constrained optimization problem; route optimization; routing protocol; Constraint optimization; Convergence; Energy efficiency; Interference constraints; Linear programming; Media Access Protocol; Network topology; Prototypes; Routing protocols; Telecommunication traffic
【Paper Link】 【Pages】:21-25
【Authors】: Matthew Andrews ; Antonio Fernández ; Lisa Zhang ; Wenbo Zhao
【Abstract】: Energy conservation is drawing increasing attention in data networking. One school of thought believes that a dominant amount of energy saving comes from turning off network elements. The difficulty is that transitioning between the active and sleeping modes consumes considerable energy and time. This results in an obvious trade-off between saving energy and provisioning performance guarantees such as end-to-end delays. We study the following routing and scheduling problem in a network in which each network element either operates in the full-rate active mode or the zero-rate sleeping mode. For a given network and traffic matrix, routing determines the path along which each traffic stream travels. For frame-based periodic scheduling, a schedule determines the active period per element within each frame and prioritizes packets within each active period. For a line topology, we present a schedule with close-to-minimum delay for a minimum active period per element. For an arbitrary topology, we partition the network into a collection of lines and utilize the near-optimal schedule along each line. Additional delay is incurred only when a path switches from one line to another. By minimizing the number of switchings via routing, we show a logarithmic approximation for both energy consumption and end-to-end delays. If routing is given as input, we present two schedules: one has an active period proportional to the traffic load per network element, and the other proportional to the maximum load over all elements. The end-to-end delay of the latter is much improved compared to the delay for the former. This demonstrates the trade-off between energy and delay.
【Keywords】: approximation theory; delays; energy conservation; power consumption; scheduling; telecommunication network routing; telecommunication traffic; delay minimization; end-to-end delays; energy conservation; energy consumption; energy minimization; frame-based periodic scheduling; full-rate active mode; line topology; logarithmic approximation; network routing; network traffic matrix; powerdown model; zero-rate sleeping mode; Added delay; Delay lines; Educational institutions; Energy conservation; Energy consumption; Network topology; Routing; Switches; Telecommunication traffic; Turning
【Paper Link】 【Pages】:31-35
【Authors】: Hong Xu ; Jin Jin ; Baochun Li
【Abstract】: Dynamic spectrum trading amongst small cognitive users is fundamentally different along two axes: temporal variation and spatial variation of user demand and channel condition. We advocate that a spectrum secondary market, analogous to the stock market, is to be established for users to dynamically trade among themselves their channel holdings obtained in the primary market from legacy owners. We design a market mechanism based on dynamic double auctions, creating a marketplace in the air to match bandwidth demand with supply. In the analysis we prove important economic properties of the mechanism, notably its truthfulness and asymptotic efficiency in maximizing spectrum utilization. Complementary simulation studies corroborate that spectrum utilization and user performance can be improved by establishing the spectrum secondary market.
【Keywords】: bandwidth allocation; cognitive radio; channel condition; cognitive users; dynamic double auctions; dynamic spectrum trading; economic properties; spatial variation; spectrum secondary market; spectrum utilization; stock market; temporal variation; user demand; Bandwidth; Communications Society; Fading; Frequency measurement; Interference; Marketing and sales; Mechanical factors; Resource management; Stock markets; Time measurement
【Paper Link】 【Pages】:36-40
【Authors】: Praveen Kumar Muthuswamy ; Koushik Kar ; Sambit Sahu ; Prashant Pradhan ; Saswati Sarkar
【Abstract】: We provide a formal model for the Change Management process for Enterprise IT systems, and develop change scheduling algorithms that seek to attain the "change capacity" of the system. The change management process handles critical updates in the system that often use overlapping sets of servers, resulting in scheduling conflicts between the corresponding change classes. Furthermore, applications are typically associated with certain permissible downtime windows, which impose constraints on the timing of the change executions. Scheduling of changes for such systems represents a complex dynamic optimization question. In a limiting fluid regime, where changes are assumed nonatomic, we develop a scheduling policy that provably attains the change capacity of the system. We then propose and evaluate an atomic approximation of the optimal fluid scheduling policy, which is well suited for application to a real change management system. Simulation results demonstrate that the expected change execution delay and the capacity attained by the approximate policy are close to the best attainable values, once unavoidable capacity losses due to fragmentation effects are taken into account, and are significantly better than those of a randomized scheduling policy.
【Keywords】: capacity management (computers); dynamic programming; information technology; management information systems; management of change; scheduling; atomic approximation; capacity optimal scheduling; change capacity systems; change management process; change scheduling algorithms; dynamic optimization; enterprise IT systems; optimal fluid scheduling policy; process modeling; randomized scheduling policy; Communications Society; Constraint optimization; Databases; Delay effects; Dynamic scheduling; Fluid dynamics; Kernel; Scheduling algorithm; Timing; USA Councils
【Paper Link】 【Pages】:41-45
【Authors】: Jocelyne Elias ; Fabio Martignon ; Konstantin Avrachenkov ; Giovanni Neglia
【Abstract】: In many scenarios network design is not enforced by a central authority, but arises from the interactions of several self-interested agents. This is the case of the Internet, where connectivity is due to Autonomous Systems' choices, but also of overlay networks, where each user client can decide the set of connections to establish. Recent works have used game theory, and in particular the concept of Nash Equilibrium, to characterize stable networks created by a set of selfish agents. The majority of these works assume that users are completely non-cooperative, leading, in most cases, to inefficient equilibria. To improve efficiency, in this paper we propose two novel socially-aware network design games. In the first game we incorporate a socially-aware component in the users' utility functions, while in the second game we additionally use a Stackelberg (leader-follower) approach, where a leader (e.g., the network administrator) architects the desired network by buying an appropriate subset of the network's links, in this way driving the users to overall efficient Nash equilibria. We provide bounds on the Price of Anarchy and other efficiency measures, and study the performance of the proposed schemes in several network scenarios, including realistic topologies where players build an overlay on top of real Internet Service Provider networks. Numerical results demonstrate that (1) introducing some incentives to make users more socially-aware is an effective solution to achieve stable and efficient networks in a distributed way, and (2) the proposed Stackelberg approach achieves dramatic performance improvements, almost always designing the socially optimal network.
【Keywords】: Internet; game theory; network topology; social sciences computing; Internet service provider networks; Stackelberg approach; autonomous systems; game theory; nash equilibrium; overlay networks; price of anarchy; socially aware component; socially aware network design games; socially optimal network; user utility functions; Communications Society; Cost function; Degradation; Game theory; IP networks; Nash equilibrium; Network topology; Stability; System performance; Web and internet services
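The Price of Anarchy bounded in the abstract above is the ratio of the worst equilibrium cost to the socially optimal cost. The textbook Pigou network below is a standard illustration of that metric, not the paper's own game.

```python
def pigou_poa(n=1000):
    """Price of Anarchy in the textbook Pigou network (a standard
    illustration of the efficiency metric, not this paper's game):
    unit traffic chooses between link A (constant cost 1) and link B
    (cost equal to the fraction of traffic using it)."""
    nash_cost = 1.0  # at equilibrium all traffic takes link B (its cost 1 <= 1)
    # social optimum: route fraction f on B, total cost f^2 + (1 - f);
    # minimized here over a grid of candidate splits (optimum at f = 1/2)
    opt_cost = min((i / n) ** 2 + (1 - i / n) for i in range(n + 1))
    return nash_cost / opt_cost  # = 4/3 for this instance
```

Selfish routing loses a third of the achievable efficiency here; mechanisms like the paper's socially-aware utilities and Stackelberg leader aim to close exactly this kind of gap.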
【Paper Link】 【Pages】:46-50
【Authors】: Zhi-Li Zhang ; Papak Nabipay ; Andrew M. Odlyzko ; Roch Guérin
【Abstract】: This paper presents a new economic approach for studying competition and innovation in a complex and highly interactive system of network providers, users, and suppliers of digital goods and services (i.e., service providers). It employs Cournot and Bertrand games to model competition among service providers and network providers, respectively, and develops a novel unified model to capture the interaction and competition among these players in a "service-oriented" Internet. Incentives for service and network innovation are studied in this model.
【Keywords】: Internet; game theory; interactive systems; Bertrand games; Cournot games; competition; complex systems; innovation; interactions; interactive systems; service providers; service-oriented Internet; Cloud computing; Communications Society; IP networks; Industrial economics; Industrial relations; Interactive systems; Regulators; Technological innovation; Videos; Web and internet services
【Paper Link】 【Pages】:51-55
【Authors】: Yu Cheng ; Hongkun Li ; Peng-Jun Wan ; Xinbing Wang
【Abstract】: This paper studies the maximum throughput that can be supported by a given wireless mesh backhaul network over a practical CSMA/CA medium access control (MAC) protocol. We resort to the multi-commodity flow (MCF) formulation, augmented with the conflict-graph constraints, to jointly compute the maximum throughput and the associated optimal network dimensioning, while using a novel approach to take into account the collision overhead in the distributed CSMA/CA MAC. Such overhead has been ignored by the existing MCF-based capacity studies, which assume impractical centralized scheduling and result in aggressive network dimensioning, unachievable over the CSMA/CA MAC. We develop a generic method to integrate the CSMA/CA MAC analysis with the MCF formulation for optimal network capacity analysis, and derive both an upper bound and a lower bound on the network throughput over a practical CSMA/CA protocol. To the best of our knowledge, this paper is the first rigorous theoretical study of the achievable capacity over a multi-hop CSMA/CA based wireless network.
【Keywords】: carrier sense multiple access; graph theory; scheduling; wireless mesh networks; associated optimal network dimensioning; centralized scheduling; conflict-graph constraints; distributed CSMA/CA MAC protocol; lower bound; medium access control protocol; multicommodity flow formulation; multihop CSMA/CA based wireless network; network throughput; optimal network capacity analysis; upper bound; wireless mesh backhaul network; Access protocols; Computer networks; Distributed computing; Media Access Protocol; Multiaccess communication; Processor scheduling; Spread spectrum communication; Throughput; Upper bound; Wireless application protocol
【Paper Link】 【Pages】:56-60
【Authors】: Matthew Mah ; Neha Gupta ; Ashok K. Agrawala
【Abstract】: The ability to measure location using time of flight in 802.11 networks is impeded by the standard one-microsecond clock resolution, imprecise synchronization of the 802.11 protocol, and the inaccuracy of available clock oscillators. We demonstrate a technique for off-the-shelf 802.11 hardware that enables accurate determination of the location of transmitting 802.11 devices using time difference of arrival (TDOA). The technique refines the PinPoint clock model for 802.11 wireless cards, enabling accurate translation of times from one frame of reference to another for free-running clocks and locating transmitting nodes to within 3 m. This technique can locate nodes regardless of their participation in the location system and may be applied to other wireless communication protocols where send and receive timestamps are available.
【Keywords】: protocols; time-of-arrival estimation; wireless LAN; 802.11 protocol; clock oscillators; clock resolution; pinpoint clock model; pinpoint time difference of arrival; unsynchronized 802.11 wireless cards; wireless communication protocols; Clocks; Hardware; Impedance; Measurement standards; Oscillators; Protocols; Refining; Synchronization; Time difference of arrival; Time measurement
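The TDOA principle underlying the abstract can be shown in its simplest one-dimensional form. This is a geometry sketch only, assuming ideal synchronized receivers; it does not model the paper's PinPoint clock translation for free-running 802.11 clocks.

```python
C = 299_792_458.0  # speed of light, m/s

def tdoa_position_1d(baseline, dt):
    """One-dimensional TDOA: locate a transmitter on the segment
    between receivers at x=0 and x=baseline (meters) from the
    arrival-time difference dt = t0 - t1 (seconds).  With
    d0 + d1 = baseline and d0 - d1 = C * dt, the distance from
    receiver 0 is d0 = (baseline + C * dt) / 2."""
    return (baseline + C * dt) / 2

# A transmitter 30 m from receiver 0 on a 100 m baseline:
# d0 - d1 = 30 - 70 = -40 m, so dt = -40 / C seconds.
x = tdoa_position_1d(100.0, -40.0 / C)
```

In two or three dimensions each receiver pair constrains the transmitter to a hyperbola rather than a point, which is why practical systems intersect several such constraints; at one microsecond of clock resolution, C * dt is already about 300 m, motivating the sub-microsecond clock modeling the paper performs.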
【Paper Link】 【Pages】:61-65
【Authors】: Amr Rizk ; Markus Fidler
【Abstract】: Fractional Brownian motion (fBm) has emerged as a useful model for self-similar and long-range dependent Internet traffic. Asymptotic and approximate performance measures are known from large-deviations theory for single queuing systems with fBm traffic. In this paper we prove a rigorous sample path envelope for fBm that complements previous results. We find that both approaches agree in their outcome that overflow probabilities for fBm traffic have a Weibull tail. We show numerical results on the impact of the variability and the correlation of fBm traffic on the queuing performance.
【Keywords】: Brownian motion; Internet; queueing theory; traffic engineering computing; Internet traffic; fBm; fBm traffic; fractional Brownian motion; long memory FBM traffic; queuing systems; sample path bounds; Aggregates; Communications Society; Communications technology; Internet; Queueing analysis; Random processes; Stochastic processes; Tail; Telecommunication traffic; Traffic control
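The Weibull-tail conclusion can be checked numerically from the standard large-deviations approximation for an fBm queue. The parameter values below are illustrative assumptions, not the paper's; the sketch only demonstrates the b^(2-2H) scaling of the decay exponent.

```python
def overflow_exponent(b, lam=0.5, cap=1.0, sigma2=1.0, H=0.8):
    """Decay exponent -log P(Q > b) from the standard large-deviations
    approximation for a queue of capacity `cap` fed by fBm traffic with
    mean rate `lam`, variance sigma2 * t^(2H), and Hurst parameter H:
        inf over t > 0 of (b + (cap - lam) * t)^2 / (2 * sigma2 * t^(2H)),
    evaluated here by a simple grid search (illustrative parameters)."""
    c = cap - lam
    return min((b + c * t) ** 2 / (2 * sigma2 * t ** (2 * H))
               for t in (0.01 * i for i in range(1, 100000)))

# The exponent scales as b^(2-2H): doubling b multiplies it by about
# 2^(2-2H), the signature of a Weibull (stretched-exponential) tail,
# in contrast to the linear-in-b exponent of a short-range model.
ratio = overflow_exponent(20.0) / overflow_exponent(10.0)
```

Substituting t = b*s in the infimum shows the scaling exactly: the exponent factors as b^(2-2H) times a function of s alone, so the grid-search ratio should sit very close to 2^0.4 for H = 0.8.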
【Paper Link】 【Pages】:66-70
【Authors】: Hongseok Kim ; Gustavo de Veciana ; Xiangying Yang
【Abstract】: In this paper we develop a framework for user association in infrastructure-based wireless networks, specifically focused on flow-level cell load balancing under spatially inhomogeneous traffic distributions. Our work encompasses several different user association policies: rate-optimal, throughput-optimal, delay-optimal, and load-equalizing, which we collectively denote α-optimal user association. We prove that the optimal load vector ρ* that minimizes a generalized system performance function is the fixed point of a certain mapping. Based on this mapping we propose and analyze an iterative distributed user association policy that adapts to spatial traffic loads and converges to a globally optimal allocation.
【Keywords】: resource allocation; telecommunication traffic; wireless channels; α-optimal user association; cell load balancing; delay-optimal; inhomogeneous traffic distributions; iterative distributed user association policy; rate-optimal; spatial traffic loads; throughput-optimal; wireless networks; Centralized control; Communications Society; Frequency; Interference; Load management; Standardization; Telecommunication traffic; Traffic control; WiMAX; Wireless networks
【Paper Link】 【Pages】:71-75
【Authors】: Vartika Bhandari ; Nitin H. Vaidya
【Abstract】: Significant research effort has been directed towards the design and performance analysis of imperfect scheduling policies for wireless networks. These imperfect schedulers are of interest despite being sub-optimal, as they allow for more tractable implementation at the expense of some loss in performance. However much of this prior work takes a uniform scaling approach to analyzing scheduling performance, whereby the performance of a scheduling policy is characterized in terms of a single scalar quantity, the efficiency-ratio. While suitable for characterizing worst-case performance, this approach limits one's ability to understand the different extents of performance degradation that may be experienced by different links in a network. Such an understanding is very valuable when average performance is of greater interest than the worst-case, or when certain links are more important than others. Furthermore, once one approaches scheduler design with non-uniform performance guarantees in mind, one finds that simple modifications to well-known scheduling algorithms can yield substantially improved non-uniform scaling results compared to the original algorithms. In this paper, we make a comprehensive case for adopting such an approach by presenting non-uniform scaling results for a set of algorithms that are variants of well-known algorithms from the class of maximal schedulers.
【Keywords】: radio networks; scheduling; nonuniform scheduling; uniform scaling; wireless network; Algorithm design and analysis; Communications Society; Degradation; Media Access Protocol; Performance analysis; Performance loss; Scheduling algorithm; Stability; Throughput; Wireless networks
【Paper Link】 【Pages】:76-80
【Authors】: Casey T. Deccio ; Jeff Sedayao ; Krishna Kant ; Prasant Mohapatra
【Abstract】: The domain name system (DNS) is critical to Internet functionality. The availability of a domain name refers to its ability to be resolved correctly. We develop a model for server dependencies that is used as a basis for measuring availability. We introduce the minimum number of servers queried (MSQ) and redundancy as availability metrics and show how common DNS misconfigurations impact the availability of domain names. We apply the availability model to domain names from production DNS and observe that 6.7% of names exhibit sub-optimal MSQ, and 14% experience false redundancy. The MSQ and redundancy values can be optimized by proper maintenance of delegation records for zones.
【Keywords】: Internet; network servers; optimisation; redundancy; DNS; DNS misconfigurations; Internet functionality; MSQ; availability; domain name system; optimization; redundancy; Availability; Communications Society; Cryptography; Domain Name System; Internet; Laboratories; Production; Robustness; Security; Web server
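The "false redundancy" the abstract measures can be illustrated with a toy calculation. The helper input below (a map from nameserver to failure domain) is a hypothetical simplification for illustration, not the paper's exact server-dependency model.

```python
def effective_redundancy(nameservers, failure_domain):
    """Number of independent failure domains behind a zone's NS set.
    `failure_domain` (a hypothetical helper input, not the paper's
    model) maps each server name to the network/host it depends on."""
    return len({failure_domain[ns] for ns in nameservers})

# Three NS records, but two resolve into the same network: apparent
# redundancy is 3, effective redundancy only 2 ("false redundancy").
domains = {"ns1.example": "net-A", "ns2.example": "net-A", "ns3.example": "net-B"}
r = effective_redundancy(["ns1.example", "ns2.example", "ns3.example"], domains)
```

The paper's point is that such shared dependencies (and stale delegation records) make the advertised NS set overstate the real availability, which proper zone maintenance can correct.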
【Paper Link】 【Pages】:81-85
【Authors】: Hassan Gobjuka
【Abstract】: In this paper we investigate the problem of finding the physical-layer topology of large, heterogeneous networks that comprise multiple VLANs and may include uncooperative network nodes. We prove that finding a layer-2 network topology for a given incomplete input is NP-hard even when the network comprises only two VLANs and contains a single loop, and that deciding whether a given input defines a unique VLAN topology is co-NP-hard. We design several heuristic algorithms to find the VLAN topology. Our first algorithm is designed for geographically widespread networks that may contain uncooperative devices; for such networks, the algorithm discovers the topology of each VLAN and then merges them to infer the network topology in O(n^3) time, where n is the number of internal network nodes. Our second algorithm is designed for smaller, active networks in which each device provides access to its MIB and few AFT entries are missing; for such networks, the algorithm finds the unique VLAN topology in O(n^3) time. We have implemented both algorithms and conducted extensive experiments on multiple networks. Our experiments demonstrate that our approach is practical and discovers the accurate VLAN topology of large, heterogeneous networks whose input may not be complete. To the best of our knowledge, this is the first paper investigating topology discovery for VLANs.
【Keywords】: computational complexity; local area networks; telecommunication network topology; virtual private networks; AFT entries; NP-hard problem; VLAN; active networks; geographically wide-spread networks; heterogeneous networks; heuristic algorithms; layer-2 network topology; physical layer topology discovery; virtual local area networks; Algorithm design and analysis; Communications Society; Ethernet networks; Heuristic algorithms; Local area networks; NP-hard problem; Network topology; Peer to peer computing; Physical layer; Switches
【Paper Link】 【Pages】:86-90
【Authors】: Yi-Hua Edward Yang ; Hoang Le ; Viktor K. Prasanna
【Abstract】: Dictionary-Based String Matching (DBSM) is used in network Deep Packet Inspection (DPI) applications such as virus scanning and network intrusion detection. We propose the Pipelined Affix Search with Tail Acceleration (PASTA) architecture for solving DBSM with guaranteed worst-case performance. Our PASTA architecture is composed of a Pipelined Affix Search Relay (PASR) followed by a Tail Acceleration Finite Automaton (TAFA). PASR consists of one or more pipelined Binary Search Tree (pBST) modules arranged in a linear array. TAFA is constructed with the Aho-Corasick goto and failure functions in a compact multi-path and multi-stride tree structure. Both PASR and TAFA achieve good memory efficiency of 1.2 and 2 B/ch (bytes per character), respectively, and are pipelined to achieve a high clock rate of 200 MHz on FPGAs. Because PASTA does not depend on the effectiveness of any hash function or the properties of the input stream, its performance is guaranteed in the worst case. Our prototype implementation of PASTA on an FPGA with 10 Mb of on-chip block RAM achieves 3.2 Gbps matching throughput against a dictionary of over 700 K characters. This level of performance surpasses the requirements of next-generation security gateways for deep packet inspection.
【Keywords】: computer viruses; dictionaries; field programmable gate arrays; finite automata; string matching; tree searching; Aho-Corasick goto function; FPGA; RAM; dictionary-based string matching; failure function; linear array; multipath tree structure; multistride tree structure; network deep packet inspection; network intrusion detection; next-generation security gateways; pipelined affix search relay; pipelined binary search tree modules; tail acceleration architecture; tail acceleration finite automaton; virus scanning; Acceleration; Automata; Binary search trees; Clocks; Field programmable gate arrays; Inspection; Intrusion detection; Relays; Tail; Tree data structures
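TAFA is built from the classic Aho-Corasick goto and failure functions. A minimal software sketch of that automaton follows; it shows the two functions the abstract names, not the paper's pipelined multi-stride hardware layout.

```python
from collections import deque

def aho_corasick(patterns):
    """Build the Aho-Corasick automaton: per-state goto transitions,
    failure links, and the set of patterns matched at each state."""
    goto, out = [{}], [set()]
    for pat in patterns:                      # goto function: a trie
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    fail = [0] * len(goto)                    # failure function: BFS order
    q = deque(goto[0].values())
    while q:
        s = q.popleft()
        for ch, t in goto[s].items():
            q.append(t)
            f = fail[s]
            while f and ch not in goto[f]:    # follow failure links
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]            # inherit matches via failure
    return goto, fail, out

def search(text, patterns):
    """Scan text once, reporting (start_index, pattern) for every match."""
    goto, fail, out = aho_corasick(patterns)
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        hits += [(i - len(p) + 1, p) for p in out[s]]
    return hits
```

Because every input character advances the automaton by at most one goto step plus amortized failure steps, matching time is independent of the dictionary contents, which is the worst-case guarantee PASTA preserves in hardware.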
【Paper Link】 【Pages】:91-95
【Authors】: Hongyi Yao ; Sidharth Jaggi ; Minghua Chen
【Abstract】: Network Tomography (or network monitoring) uses end-to-end measurements to characterize the network, such as estimating the network topology and localizing random or adversarial glitches. Under the setting that all nodes in the network perform random linear network coding, this work provides a comprehensive study of passive network tomography in the presence of network failures, in particular adversarial/random errors and adversarial/random erasures. Our results are categorized into two classes: 1. Topology Estimation. In the presence of both adversarial/random failures, we prove it is both necessary and sufficient for all nodes in the network to share common randomness, i.e., the receiver knows the random code-books of other nodes. Without such common randomness, we prove that in the presence of adversarial or random failures it is either theoretically impossible or computationally intractable to estimate topology accurately. With common randomness, we present the first set of algorithms for characterizing topology exactly. Our algorithms for topology estimation in the presence of random errors/erasures have polynomial-time complexity. 2. Failure Localization. Given the topology, we present the first polynomial time algorithms to localize random errors and adversarial erasures. For the problem of locating adversarial errors, we prove that it is intractable.
【Keywords】: computational complexity; maintenance engineering; monitoring; network coding; telecommunication network topology; adversarial erasures; adversarial errors; end-to-end measurement; failure localization; network coding tomography; network failures; network monitoring; passive network tomography; polynomial-time complexity; random erasures; random errors; random linear network coding; topology estimation; Communications Society; Condition monitoring; Error correction codes; Network coding; Network topology; Peer to peer computing; Polynomials; Probes; Routing; Tomography
【Paper Link】 【Pages】:96-100
【Authors】: Hemant Sengar ; Zhen Ren ; Haining Wang ; Duminda Wijesekera ; Sushil Jajodia
【Abstract】: Peer-to-peer (P2P) VoIP calls such as those provided by Skype have become popular due to their quality of service, lack of cost, security, and convenience. Skype is a distributed P2P network with no centralized call servers. Calls traverse a myriad of possible paths before reaching the destination, and each packet is encrypted with 256-bit AES encryption. In this paper, we are particularly interested in tracing out, from this entangled web of peer nodes, who has called a target subscriber or whom the target subscriber is calling. To this end, we present a transparent packet marking scheme that determines not only the origination and destination of a call but also the path taken through the various hosts of the P2P network.
【Keywords】: Internet telephony; peer-to-peer computing; quality of service; AES encryption; Internet; Skype VoIP calls; centralized call servers; distributed P2P network; peer nodes; peer-to-peer VoIP calls; quality-of-service; target subscriber; transparent packet marking scheme; word length 256 bit; Costs; Cryptography; Delay effects; Internet telephony; Network servers; Peer to peer computing; Protocols; Surveillance; Target tracking; Timing
【Paper Link】 【Pages】:101-105
【Authors】: Hongzi Zhu ; Luoyi Fu ; Guangtao Xue ; Yanmin Zhu ; Minglu Li ; Lionel M. Ni
【Abstract】: Inter-contact time between moving vehicles is one of the key metrics in vehicular ad hoc networks (VANETs) and is central to forwarding algorithms and the end-to-end delay. Due to prohibitive costs, little experimental work has studied inter-contact time in urban vehicular environments. In this paper, we carry out an extensive experiment involving thousands of operational taxis in the city of Shanghai. Studying the taxi trace data for the frequency and duration of transfer opportunities between taxis, we observe that the tail distribution of the inter-contact time, that is, the time gap separating two contacts of the same pair of taxis, exhibits a light tail, such as that of an exponential distribution, over a large range of timescales. This observation is in sharp contrast to recent empirical studies based on human mobility, in which the distribution of the inter-contact time obeys a power law. By performing a least-squares fit, we establish an exponential model that accurately depicts the tail behavior of the inter-contact time in VANETs. Our results thus provide fundamental guidelines for the design of new vehicular mobility models in urban scenarios, new data forwarding protocols, and their performance analysis.
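The least-squares fitting step described in the abstract can be illustrated with a small sketch. Synthetic exponential samples stand in for the taxi traces, and the rate parameter and timescale grid are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Synthetic stand-in for inter-contact times (seconds); the real study uses
# taxi traces. The true mean of 3600 s is an arbitrary illustrative choice.
rng = np.random.default_rng(0)
samples = rng.exponential(scale=3600.0, size=10_000)

# Empirical CCDF P(T > t) on a grid of timescales.
t = np.linspace(600.0, 4 * 3600.0, 50)
ccdf = np.array([(samples > x).mean() for x in t])

# An exponential tail P(T > t) = exp(-lam * t) is linear in log space,
# so fit log(ccdf) ~ -lam * t by ordinary least squares.
slope, intercept = np.polyfit(t, np.log(ccdf), 1)
lam = -slope
print(f"fitted rate lam = {lam:.2e} 1/s (true value 1/3600 ~ 2.78e-04)")
```

A power-law tail, by contrast, would appear linear in log-log (not semi-log) coordinates, which is how the two hypotheses in the abstract can be told apart.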
【Keywords】: ad hoc networks; exponential distribution; least squares approximations; mobile radio; transport protocols; Shanghai city; VANET; data forwarding protocols; end-to-end delay; exponential distribution; exponential inter contact time; forwarding algorithms; human mobility; least squares fit; moving vehicles; tail distribution; taxi trace data; urban scenarios; urban vehicular environments; vehicular ad hoc networks; vehicular mobility models; Ad hoc networks; Cities and towns; Costs; Delay effects; Exponential distribution; Frequency; Humans; Least squares methods; Probability distribution; Vehicles
【Paper Link】 【Pages】:106-110
【Authors】: Eitan Altman ; Amar Prakash Azad ; Tamer Basar ; Francesco De Pellegrini
【Abstract】: Much research has been devoted to maximizing the lifetime of mobile ad hoc networks. Lifetime has often been defined as the time elapsed until the first node runs out of battery power. In the context of static networks, this could lead to disconnectivity. In contrast, Delay Tolerant Networks (DTNs) leverage the mobility of relay nodes to compensate for the lack of permanent connectivity, and thus enable communication even after some nodes deplete their stored energy. One can thus consider the lifetimes of nodes as additional parameters that can be controlled to optimize the performance of a DTN. In this paper, we consider two ways in which the energy state of a mobile can be controlled. Both listening and transmission require energy, and each has a different type of effect on the network performance. We therefore consider a joint optimization problem consisting of: i) activation, which determines when a mobile turns on in order to receive packets, and ii) transmission control, which regulates the beaconing. The optimal solutions are shown to be of threshold type. The findings are validated through extensive simulations.
【Keywords】: ad hoc networks; mobile radio; optimisation; delay tolerant networks; joint optimization problem; mobile ad-hoc networks; network performance; optimal activation; permanent connectivity; receive packets; relay node mobility; transmission control; Ad hoc networks; Batteries; Communication system control; Communications Society; Disruption tolerant networking; Frame relay; Optimal control; Peer to peer computing; Routing protocols; USA Councils
【Paper Link】 【Pages】:111-115
【Authors】: Abderrahmen Mtibaa ; Martin May ; Christophe Diot ; Mostafa H. Ammar
【Abstract】: In opportunistic networks, end-to-end paths between two communicating nodes are rarely available. In such situations, the nodes might still copy and forward messages to nodes that are more likely to meet the destination. The question is which forwarding algorithm offers the best trade-off between cost (number of message replicas) and rate of successful message delivery. We address this challenge by developing the PeopleRank approach, in which nodes are ranked using tunable, weighted social information. Similar to the PageRank idea, PeopleRank gives higher weight to nodes that are socially connected to important other nodes of the network. We develop centralized and distributed variants for the computation of PeopleRank. We present an evaluation using real mobility traces of nodes and their social interactions to show that PeopleRank manages to deliver messages with a near-optimal success rate (close to Epidemic Routing) while reducing the number of message retransmissions by 50% compared to Epidemic Routing.
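A minimal centralized sketch of a PageRank-style ranking over a social graph; the damping value, update rule, and toy graph here are illustrative assumptions, not the paper's exact PeopleRank formulation:

```python
# PageRank-style iteration: a node's rank grows with the ranks of the
# nodes it is socially connected to (centralized variant sketch).
def people_rank(friends, d=0.85, iters=50):
    """friends: dict mapping each node to its socially connected neighbors."""
    ranks = {u: 1.0 for u in friends}
    for _ in range(iters):
        ranks = {
            u: (1 - d) + d * sum(ranks[v] / len(friends[v]) for v in friends[u])
            for u in friends
        }
    return ranks

# Toy graph: node "a" is socially connected to everyone else,
# so it ends up with the highest rank.
g = {"a": ["b", "c", "d"], "b": ["a", "c"], "c": ["a", "b"], "d": ["a"]}
ranks = people_rank(g)
print(max(ranks, key=ranks.get))
```

Roughly, a forwarding rule would then hand a message replica to an encountered node only if that node ranks at least as high (or is the destination); the paper's distributed variant estimates these ranks through pairwise exchanges rather than a global computation.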
【Keywords】: mobility management (mobile radio); social aspects of automation; telecommunication computing; telecommunication network routing; PageRank; PeopleRank; end-to-end paths; epidemic routing; message delivery; mobility traces; opportunistic networks; social opportunistic forwarding; Ad hoc networks; Communications Society; Costs; Disaster management; Distributed computing; Mobile communication; Network topology; Peer to peer computing; Routing; Social network services
【Paper Link】 【Pages】:116-120
【Authors】: Bojin Liu ; Behrooz Khorashadi ; Dipak Ghosal ; Chen-Nee Chuah ; H. Michael Zhang
【Abstract】: Wireless-networking-enabled vehicles can form vehicular ad hoc mesh networks (VMeshs). Using cooperative communication among VMeshs, local transient information can be "retained" within a given geographic region for a certain period of time, without any infrastructure help. In this paper, we study this "storage capability" of VMeshs. We analyze the scenarios of highway traffic (both one-way and two-way highway free-flow traffic) and vehicular traffic in a city environment. For highway traffic, we study different properties of the "VMesh storage" using a simulation tool that accurately models freeway vehicular mobility. For city traffic, we first perform simulations based on real traffic traces of San Francisco Yellow Cabs. We then compare the results with a scenario using a general Random Waypoint (RWP) mobility model. Our results show that the transmission range has a high impact on the storage lifetime for one-way highway traffic, while the size of the region in which the information should be stored has a high impact for two-way highway traffic. For city-wide traffic, the storage lifetime generated using the San Francisco Yellow Cab trace is shorter than that obtained using the RWP mobility model. This is due to the regular movement of the cabs, as compared to the random vehicle movement in the RWP mobility model.
【Keywords】: ad hoc networks; mobile communication; mobility management (mobile radio); telecommunication traffic; VANET; local information storage capability; random way point; traffic mobility; vehicular ad hoc mesh networks; Cities and towns; Communications Society; Mesh networks; Network topology; Road transportation; Road vehicles; Routing; Telecommunication traffic; Traffic control; Wireless mesh networks
【Paper Link】 【Pages】:121-125
【Authors】: Eitan Altman ; Francesco De Pellegrini ; Lucile Sassatelli
【Abstract】: We study replication mechanisms that include Reed-Solomon-type codes as well as network coding in order to improve the probability of successful delivery within a given time limit. We propose an analytical approach to compute these probabilities, and study the effect of coding on the performance of the network while optimizing the parameters that govern routing.
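The benefit of erasure-coded redundancy on delivery probability can be sketched with an idealized independent-loss model; the parameters and the independence assumption below are illustrative, not the paper's analytical approach:

```python
from math import comb

def delivery_probability(n, k, p):
    """P(at least k of n coded packets arrive) when each packet independently
    reaches the destination with probability p before the deadline.
    With a Reed-Solomon-type code, any k of the n packets suffice to decode."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A message split into k = 4 source packets, each delivered w.p. 0.6:
p_uncoded = delivery_probability(4, 4, 0.6)   # all 4 originals must make it
p_coded = delivery_probability(8, 4, 0.6)     # rate-1/2 code: any 4 of 8
print(p_uncoded, p_coded)
```

Sending twice as many coded packets turns "all must arrive" into "any half must arrive", which is why redundancy raises the within-deadline success probability at the cost of extra transmissions.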
【Keywords】: Reed-Solomon codes; delays; mobile communication; network coding; telecommunication network routing; Reed-Solomon type codes; coding dynamic control; delay tolerant networks; network coding; network performance; replication mechanisms; routing; successful delivery; time limit; Communication system control; Communications Society; Delay effects; Disruption tolerant networking; Frame relay; Mobile communication; Network coding; Optimal scheduling; Peer to peer computing; Reed-Solomon codes
【Paper Link】 【Pages】:126-130
【Authors】: Giovanni Resta ; Paolo Santi
【Abstract】: In this paper, we investigate the fundamental properties of broadcasting in mobile wireless networks. In particular, we characterize broadcast capacity and latency of a mobile network, subject to the condition that the stationary node spatial distribution generated by the mobility model is uniform. We first study the intrinsic properties of broadcasting, and present a broadcasting scheme that simultaneously achieves asymptotically optimal broadcast capacity and latency, subject to a weak upper bound on the maximum node velocity. We then investigate the broadcasting problem when the burden related to selecting relay nodes is taken into account, and present a combined distributed leader election and broadcasting scheme achieving a broadcast capacity and latency which is within a poly-logarithmic factor from optimal.
【Keywords】: broadcasting; mobile communication; broadcast capacity; broadcasting fundamental limits; broadcasting scheme; distributed leader election; latency; maximum node velocity; mobility model; polylogarithmic factor; relay nodes; weak upper bound; wireless mobile networks; Broadcasting; Delay; Interference; Mobile communication; Peer to peer computing; Relays; Spread spectrum communication; Unicast; Upper bound; Wireless networks
【Paper Link】 【Pages】:131-135
【Authors】: Shan Zhou ; Lei Ying
【Abstract】: This paper studies the delay-constrained multicast capacity of large-scale mobile ad hoc networks (MANETs). We consider a MANET that consists of ns multicast sessions. Each multicast session has one source and p destinations. Each source sends identical information to the p destinations in its multicast session, and the information is required to be delivered to all p destinations within D time-slots. Assuming the wireless mobiles move according to a two-dimensional i.i.d. mobility model, we first prove that the capacity per multicast session is O(min{1, (log p)(log(ns p)) √(D/ns)}). We then propose a joint coding/scheduling algorithm achieving a throughput of Θ(min{1, √(D/ns)}). Our simulation results suggest that the same scaling law also holds under random walk and random waypoint models.
【Keywords】: ad hoc networks; communication complexity; encoding; mobile radio; multicast communication; scheduling; D time-slots; coding algorithm; delay constrained multicast capacity; large scale mobile ad hoc networks; multicast sessions; random walk model; random waypoint model; scheduling algorithm; wireless mobiles; Ad hoc networks; Broadcasting; Delay; Disruption tolerant networking; Large-scale systems; Mobile ad hoc networks; Scheduling algorithm; Throughput; Unicast; Videoconference
【Paper Link】 【Pages】:136-140
【Authors】: Xiaohang Li ; Chih-Chun Wang ; Xiaojun Lin
【Abstract】: Multimedia streaming applications have stringent QoS requirements. Typically, each packet is associated with a packet delivery deadline. This work models and considers real-time streaming broadcast of stored video over the downlink of a single cell. The broadcast capacity of the system subject to deadline constraints is derived for both uncoded and coded wireless broadcast schemes. By analytically quantifying the optimal capacity when the file size is sufficiently large, it is shown that, even under the deadline requirements, network coding is asymptotically throughput-optimal and can strictly outperform the best non-coding policy. A simple network coding policy is also proposed that achieves the asymptotic capacity while maintaining finite transmission delay (queueing + decoding delay). A new temporal-queue-length-based Lyapunov function is used to prove the optimality of this policy. Simulations show that the simple coding policy outperforms the best non-coding policies even when broadcasting files of small sizes.
【Keywords】: Lyapunov methods; broadcasting; delays; network coding; quality of service; queueing theory; radio networks; QoS requirements; broadcast capacity; coded wireless broadcast schemes; delay analysis; hard deadline constraints; multimedia streaming applications; network coding policy; network throughput; optimal capacity; packet delivery deadline; temporal-queue-length-based Lyapunov function; uncoded wireless broadcast schemes; Broadcasting; Communications Society; Decoding; Delay; Downlink; Multimedia communication; Network coding; Streaming media; Throughput; Unicast
【Paper Link】 【Pages】:141-145
【Authors】: Jubin Jose ; Ahmed Abdel-Hadi ; Piyush Gupta ; Sriram Vishwanath
【Abstract】: Analogous to the beneficial impact that mobility has on the throughput of unicast networks, this paper establishes that mobility can provide a similar gain in the order-wise growth rate of the throughput for multicast networks. This paper considers an all-mobile multicast network, and characterizes its multicast capacity scaling. The scaling result shows that the growth rate of the throughput in the all-mobile multicast network is order-wise higher than in the all-static multicast network. Further, the paper considers a static-mobile hybrid multicast network, and establishes that, if there is a sufficient number of mobile nodes (order-wise smaller than the total number of nodes) in the network, then the mobile nodes can enhance the order behavior of the multicast throughput.
【Keywords】: mobile radio; radio networks; all-mobile multicast network; mobile nodes; multicast capacity; static-mobile hybrid multicast network; unicast network throughput; wireless networks; Communications Society; Electromagnetic modeling; Large-scale systems; Multicast protocols; Peer to peer computing; Physics; Telecommunication traffic; Throughput; Unicast; Wireless networks
【Paper Link】 【Pages】:146-150
【Authors】: Lakshmi Anantharamu ; Bogdan S. Chlebus ; Dariusz R. Kowalski ; Mariusz A. Rokicki
【Abstract】: We study broadcasting on multiple access channels by deterministic distributed protocols. Data arrivals are governed by an adversary. The power of the adversary is constrained by the average rate of data injection and a bound on the number of different packets that can be injected in one round. The injection rate is at most 1, which forbids the adversary from overloading the channel. We consider a number of deterministic protocols. For each of them we give an upper bound on the worst-case packet latency, as a function of the constraints imposed on the adversary. We present simulation results comparing the packet latency of the deterministic protocols and of backoff-type randomized protocols. The experiments are carried out in a simulation environment that captures the burstiness of data injection and the resulting traffic through an admissibility condition defined by the fraction of active stations and the rate at which stations switch between active and passive status.
【Keywords】: access protocols; broadcasting; routing protocols; data injection; deterministic broadcast; deterministic distributed protocols; multiple access channels; Access protocols; Broadcasting; Communications Society; Delay; Ethernet networks; Routing protocols; Stability; Throughput; Traffic control; Upper bound
【Paper Link】 【Pages】:151-155
【Authors】: Apoorva Jindal ; Konstantinos Psounis
【Abstract】: This paper formally establishes that random access scheduling schemes, and more specifically CSMA-CA, yield exceptionally good performance in the context of wireless multihop networks. While it is believed that CSMA-CA performs significantly worse than optimal, this belief is usually based on experiments that use rate allocation mechanisms which grossly underutilize the available capacity that random access provides. To establish our thesis, we compare the max-min rate allocations achieved by CSMA-CA and by an optimal scheduler in multi-hop topologies and find that: (i) CSMA-CA is never worse than 16% of the optimal when ignoring physical-layer constraints, and (ii) in any realistic topology with geometric constraints due to the physical layer, CSMA-CA is never worse than 30% of the optimal. Considering that maximal scheduling achieves much lower bounds than the above, and that greedy maximal scheduling, one of the best known distributed approximations of an optimal scheduler, achieves similar worst-case bounds, CSMA-CA is surprisingly efficient.
【Keywords】: carrier sense multiple access; minimax techniques; radio networks; resource allocation; scheduling; telecommunication network topology; CSMA-CA; carrier sense multiple access-collision avoidance; greedy maximal scheduling; max-min rate allocation; multihop topology; random access scheduling; wireless multihop networks; Approximation algorithms; Communications Society; Interference constraints; Network topology; Optimal scheduling; Physical layer; Processor scheduling; Scheduling algorithm; Spread spectrum communication; Throughput
【Paper Link】 【Pages】:156-160
【Authors】: Daniel Wu ; Prasant Mohapatra
【Abstract】: Multi-radio nodes in wireless mesh networks introduce extra complexity in utilizing channel resources. Depending on the configuration of the radios, bad mappings of radios to wireless frequencies may result in sub-optimal network topologies. Static channel assignments in wireless mesh networks have been studied in theory and through simulation, but very little work has been done through experiments. This paper focuses on evaluating static channel assignments on a live wireless mesh network. We chose three popular types of static channel assignment algorithms for implementation and comparison: breadth-first search (BFS), priority-based selection (PBS), and integer linear programming. We find that no single channel assignment algorithm does well overall. The BFS algorithm can create the shortest paths to the gateway and also generate balanced channel-usage topologies. The PBS algorithm can use all the best links in the network but has poor performance from each radio to the gateway. Overall, we find the channel assignments given by the algorithms to be suboptimal when applied to a live mesh network, because temporal variations in the link quality metrics are not taken into account. Looking at the interflow and intraflow performance of these channel assignment algorithms in a live mesh network, we conclude that routing protocols must be modified to take advantage of the underlying channel assignment algorithms.
【Keywords】: channel allocation; integer programming; linear programming; routing protocols; telecommunication network topology; tree searching; wireless mesh networks; breadth-first search; channel usage topology; integer linear programming; link quality metrics; multiradio nodes; priority-based selection; routing protocols; static channel assignments; sub-optimal network topology; temporal variations; wireless mesh network; Communications Society; Computer science; Frequency; Integer linear programming; Mesh networks; Network topology; Peer to peer computing; Routing protocols; Wireless mesh networks; Wires
【Paper Link】 【Pages】:161-165
【Authors】: János Tapolcai ; Lajos Rónyai ; Pin-Han Ho
【Abstract】: Achieving fast, precise, and scalable fault localization has long been a highly desired feature in all-optical mesh networks. The monitoring tree (m-tree) is an interesting method that has been introduced as the most general monitoring structure for achieving unambiguous failure localization (UFL). Ideally, with J m-trees one can monitor up to 2^J − 1 links when a single failure has to be located. Such logarithmic behavior has also been observed in numerous case studies of real-life network topologies. It is expected that the m-tree framework will lead to a highly scalable link-failure monitoring mechanism not only for all-optical mesh networks, but for any possible future information system with mesh topologies, such as touch panels, quantum computing, and VLSI. It is an important task to investigate the extent to which such optimal logarithmic behavior holds, in particular in practically relevant network topologies. As an endeavor toward this goal, the paper investigates the problem by identifying essentially tight logarithmic bounds for two-dimensional lattice networks. Experiments are conducted to show the feasibility and performance of the proposed constructions.
【Keywords】: VLSI; fault location; optical fibre networks; quantum computing; telecommunication network topology; 2D lattice networks; VLSI; all-optical mesh networks; failure monitoring; m-tree; monitoring tree; optimal solutions; quantum computing; real life network topology; single fault localization; touch panels; unambiguous failure localization; Computer networks; Condition monitoring; High speed optical techniques; Lattices; Mesh networks; Network topology; Optical receivers; Optical transmitters; Peer to peer computing; Quantum computing
【Paper Link】 【Pages】:166-170
【Authors】: Ahmed Khattab
【Abstract】: In this paper, we demonstrate that multiple concurrent, asynchronous, and uncoordinated Single-Input Multiple-Output (SIMO) transmissions can successfully take place even though the respective receivers do not explicitly null out interfering signals. Thus motivated, we propose simple modifications to the widely deployed IEEE 802.11 MAC to enable multiple non-spatially-isolated SIMO sender-receiver pairs to share the medium. Namely, we propose to increase the physical carrier sensing threshold, disable virtual carrier sensing, and enable message-in-message packet detection. We use experiments to show that, while increasing the peak transmission rate, spatial multiplexing schemes such as those employed by IEEE 802.11n are highly non-robust to asynchronous and uncoordinated interferers. In contrast, we show that the proposed multi-flow SIMO MAC scheme alleviates the severe unfairness resulting from uncoordinated transmissions in 802.11 multi-hop networks.
【Keywords】: access protocols; wireless LAN; IEEE 802.11 MAC; MAC protocols; SIMO random access; SIMO sender-receiver; message packet detection; multihop wireless networks; multiple concurrent asynchronous transmissions; peak transmission; random access medium access control protocols; uncoordinated single-input multiple- output transmissions; Communications Society; Interference; MIMO; Media Access Protocol; Physical layer; Robustness; Spread spectrum communication; Throughput; Transmitting antennas; Wireless networks
【Paper Link】 【Pages】:171-175
【Authors】: Yunfeng Lin ; Ben Liang ; Baochun Li
【Abstract】: Opportunistic routing significantly increases unicast throughput in wireless mesh networks by effectively utilizing the wireless broadcast medium. With network coding, opportunistic routing can be implemented in a simple and practical way without resorting to a complicated scheduling protocol. Traditionally, due to constraints of computational complexity, a protocol utilizing network coding needs to partition the data into multiple segments and encode only packets in the same segment. However, it is extremely challenging to decide the optimal time to move on to the transmission of the next segment, and existing designs all resort to different heuristics that may harm network throughput. To address this problem, we propose SlideOR, a new protocol that encodes source packets in overlapping sliding windows, such that coded packets from one window position may be useful for decoding the source packets of another window position. Through extensive simulations, we show that SlideOR outperforms existing solutions and admits a much simpler implementation than solutions with complicated scheduling among multiple segments.
【Keywords】: network coding; routing protocols; wireless mesh networks; SlideOR; computational complexity; network throughput; online opportunistic network coding; opportunistic routing; overlapping sliding windows; scheduling protocol; source packets; wireless broadcast medium; wireless mesh networks; Bandwidth; Broadcasting; Computational complexity; Decoding; Network coding; Routing protocols; Scheduling algorithm; Throughput; Unicast; Wireless mesh networks
【Paper Link】 【Pages】:176-180
【Authors】: Shinuk Woo ; Hwangnam Kim
【Abstract】: Recently, it has become accepted in the community that link reliability is strongly related to RSSI (or SINR) and that external interference makes it unpredictable, but this unpredictability has not yet been fully explained. In order to examine the causes of the unpredictable link state, we first configured an empirical testbed, performed a measurement study, and observed that link reliability actually depends on the intra-frame SINR distribution. We also discovered that an RSSI (or SINR) value is not always a good indicator for estimating the link state. Based on these results, we propose a modeling framework for estimating the link state in the presence of wireless interference. We envision that the framework can be used for developing link-aware protocols that achieve optimal performance in hostile wireless environments.
【Keywords】: interference (signal); protocols; radio networks; telecommunication network reliability; RSSI; SINR; empirical study; interference modeling; link reliability; link-aware protocols; wireless interference; wireless networks; AWGN; Fading; Interference; Performance evaluation; Physical layer; Signal to noise ratio; State estimation; Testing; Wireless application protocol; Wireless networks
【Paper Link】 【Pages】:181-185
【Authors】: Ziyu Shao ; Minghua Chen ; Amir Salman Avestimehr ; Shuo-Yen Robert Li
【Abstract】: Existing work on cross-layer optimization for wireless networks adopts simple physical-layer models, i.e., treating interference as noise. In this paper, we adopt a previously proposed deterministic channel model, a simple abstraction of the physical layer that effectively captures the effects of channel strength, broadcast, and superposition in wireless channels. Within the Network Utility Maximization (NUM) framework, we study cross-layer optimization for wireless networks based on this deterministic channel model. First, we extend the widely applied conflict graph model to capture the flow interactions over deterministic channels and characterize the feasible rate region. Then we study distributed algorithms for general wireless multi-hop networks. The convergence of the algorithms is proved by the Lyapunov stability theorem and the stochastic approximation method. Further, we show convergence to a bounded neighborhood of the optimal solutions with probability one under constant step sizes and constant update intervals. Our numerical evaluation validates the analytical results.
【Keywords】: Lyapunov methods; approximation theory; optimisation; wireless mesh networks; Lyapunov stability theorem; cross layer optimization; deterministic channel models; network utility maximization; stochastic approximation method; wireless multihop networks; Approximation algorithms; Broadcasting; Distributed algorithms; Interference; Lyapunov method; Physical layer; Spread spectrum communication; Stochastic processes; Utility programs; Wireless networks
【Paper Link】 【Pages】:186-190
【Authors】: Moran Feldman ; Joseph Naor
【Abstract】: The delivery of latency-sensitive packets is a crucial issue in real-time applications of communication networks. Such packets often have a firm deadline, and a packet becomes useless if it arrives after its deadline. The deadline, however, applies only to the packet's journey through the entire network; individual routers along the packet's route face a more flexible deadline. We consider policies for admitting latency-sensitive packets at a router. Each packet is tagged with a value, and a packet waiting at a router loses value over time as its probability of arriving at its destination decreases. The router is modeled as a non-preemptive queue, and its objective is to maximize the total value of the forwarded packets. When a router receives a packet, it must either accept it (and possibly delay future packets) or reject it immediately. The best policy depends on the set of values that a packet can take. We consider three natural settings: an unrestricted model, a real-valued model, where any value above 1 is allowed, and an integral-valued model. We obtain the following results. For the unrestricted model, we prove that no algorithm has a constant competitive ratio. The real-valued model admits a randomized 4-competitive algorithm and a matching lower bound; for this model we also give a deterministic lower bound of φ³ ≈ 4.236, almost matching the previously known 4.24-competitive algorithm. For the integral-valued model, we show a deterministic 4-competitive algorithm, and prove that this is tight even for randomized algorithms.
【Keywords】: competitive algorithms; deterministic algorithms; queueing theory; telecommunication network management; telecommunication network routing; telecommunication traffic; communication networks; constant competitive ratio algorithm; deterministic lower bound; integral-valued model; latency sensitive packets; nonpreemptive buffer management; nonpreemptive queue; randomized 4-competitive algorithm; router loses value; Algorithm design and analysis; Communication networks; Communications Society; Delay; Motion pictures; Performance analysis; Throughput
【Paper Link】 【Pages】:191-195
【Authors】: Luigi Alfredo Grieco ; Chadi Barakat
【Abstract】: In network measurement systems, packet sampling techniques are usually adopted to reduce the overall amount of data to collect and process. Being based on a subset of packets, they introduce estimation errors that have to be properly counteracted by fine-tuning the sampling strategy and by sophisticated inversion methods. This problem has been deeply investigated in the literature, with particular attention to the statistical properties of packet sampling and the recovery of the original network measurements. Herein, we propose a novel approach, based on spectral analysis in the frequency domain, to predict the energy of the sampling error in real-time traffic volume estimation. We start by demonstrating that errors due to packet sampling can be modeled as an aliasing effect in the frequency domain. We then exploit this theoretical finding to derive closed-form expressions for the Signal-to-Noise Ratio (SNR), able to predict the distortion of traffic volume estimates over time. The accuracy of the proposed SNR metric is validated by means of real packet traces.
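The inversion error that such an SNR metric quantifies can be illustrated with a toy 1-in-N sampling experiment; the synthetic Poisson traffic and brute-force empirical SNR below are a simplification, not the paper's closed-form frequency-domain expressions:

```python
import numpy as np

# Toy 1-in-N packet sampling on synthetic per-interval traffic counts.
rng = np.random.default_rng(1)
N = 100                                       # sampling rate 1/N
true_counts = rng.poisson(5000, size=256)     # true packets per interval

sampled = rng.binomial(true_counts, 1.0 / N)  # random 1-in-N sampling
estimate = sampled * N                        # standard inversion (scale up)

# Empirical signal-to-noise ratio of the inverted volume estimate.
err = estimate - true_counts
snr_db = 10 * np.log10(np.mean(true_counts.astype(float) ** 2) / np.mean(err ** 2))
print(f"empirical SNR of the inverted volume estimate: {snr_db:.1f} dB")
```

A frequency-domain analysis of the same `err` sequence (e.g. via an FFT) is where the aliasing view described in the abstract comes in.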
【Keywords】: estimation theory; frequency-domain analysis; signal sampling; spectral analysis; statistical analysis; SNR metric; aliasing effect; closed-form expressions; estimation accuracy; estimation errors; fine tuning; frequency domain model; network measurement systems; network measurements; packet sampling techniques; real packet traces; real time traffic volume estimation; sampling error; sampling strategy; signal-to-noise ratio; spectral analysis; statistical property; traffic volume estimates; Accuracy; Estimation error; Frequency domain analysis; Frequency estimation; Particle measurements; Predictive models; Sampling methods; Spectral analysis; Telecommunication traffic; Traffic control
【Paper Link】 【Pages】:196-200
【Authors】: Stilian Stoev ; George Michailidis ; Joel Vaughan
【Abstract】: We develop a probabilistic framework for global modeling of the traffic over a computer network. The model integrates existing single-link (or single-flow) traffic models with the routing over the network to capture the global traffic behavior. It arises from a limit approximation of the traffic fluctuations as the time-scale and the number of users sharing the network grow. The resulting probability model comprises Gaussian and/or stable, infinite-variance components. They can be succinctly described and handled by certain 'space-time' random fields. The model is validated against real data and applied to predict traffic fluctuations over unobserved links from a limited set of observed links.
【Keywords】: Gaussian processes; computer networks; probability; telecommunication network routing; telecommunication traffic; Gaussian process; backbone network traffic; computer network; global modeling; infinite variance component; probabilistic framework; single-link traffic model; space-time random field; traffic fluctuation; Computer network management; Computer networks; Fluctuations; Predictive models; Protocols; Routing; Spine; Statistics; Telecommunication traffic; Traffic control
【Paper Link】 【Pages】:201-205
【Authors】: D. K. Lee ; Keon Jang ; Changhyun Lee ; Gianluca Iannaccone ; Sue B. Moon
【Abstract】: Many measurement systems have been proposed in recent years to shed light on the internal performance of the Internet. Their common goal is to allow distributed applications to improve end-user experience. A common hurdle they face is the need to deploy yet another measurement infrastructure. In this work, we demonstrate that, without any new measurement infrastructure or active probing, we can obtain composite performance estimates from AS-by-AS segments that are as good as (or even better than) those from existing estimation methodologies that use on-demand, customized active probing. The main contribution of this paper is an estimation algorithm that breaks down measurement data into segments, identifies relevant segments efficiently, and, by carefully stitching segments together, produces delay and path estimates between any two end points. Fittingly, we call our algorithm path stitching. Our results show remarkably good accuracy: delay error is below 20 ms on 80% of end-to-end paths.
【Keywords】: Internet; measurement systems; Internet-wide path estimation; customized active probing; delay estimation; distributed applications; measurement infrastructure; path stitching; Communications Society; Costs; Delay estimation; IP networks; Instruments; Loss measurement; Moon; Peer to peer computing; Size measurement; Web and internet services
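The core stitching idea, summing per-segment delay estimates along an AS-level path, can be sketched in a few lines. This is a minimal illustration, not the authors' full algorithm (which also identifies relevant segments and handles path estimation); the function name, AS numbers, and data layout are assumptions.

```python
def stitch_delay(as_path, segment_delays):
    """Sum measured delays over consecutive AS-level segments.

    as_path        -- ordered list of AS numbers, e.g. [7018, 3356, 2914]
    segment_delays -- dict mapping (src_as, dst_as) -> measured delay in ms
    Returns the stitched end-to-end delay, or None if any segment lacks
    a measurement."""
    total = 0.0
    for src, dst in zip(as_path, as_path[1:]):
        d = segment_delays.get((src, dst))
        if d is None:
            return None  # no existing measurement covers this segment
        total += d
    return total

delays = {(7018, 3356): 12.0, (3356, 2914): 8.5}
print(stitch_delay([7018, 3356, 2914], delays))  # 20.5
```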
【Paper Link】 【Pages】:206-210
【Authors】: Aiyou Chen ; Yu Jin ; Jin Cao
【Abstract】: We propose the tracking of long duration flows as a new network measurement primitive. Long-duration flows are long-lived in time but do not necessarily have high traffic volumes. We propose an efficient data streaming algorithm to effectively track long duration flows. Our basic technique is to maintain only two Bloom filters at any given time. In each time duration, only old flows that appear in the current time duration get copied to the current Bloom filter. Our basic algorithm is further enhanced by sampling. Using real network traces, we show that our tracking algorithm is very accurate, with low false positive and false negative probabilities. Using multi-faceted analysis, we show that more than 50% of hosts participating in long duration flows (duration no less than 30 minutes) are blacklisted by various public sources.
【Keywords】: telecommunication network management; telecommunication traffic; bloom filters; data streaming algorithm; false negative probabilities; false positive probabilities; high traffic volumes; long duration flows; multifaceted analysis; network measurement; network traffic; real network traces; time duration; tracking algorithm; Algorithm design and analysis; Communications Society; Computer science; Data security; Entropy; Filters; Monitoring; Sampling methods; Storms; Telecommunication traffic
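The two-filter rotation can be sketched as follows. This is one plausible reading of the abstract, not the authors' exact algorithm: the previous window's filter records flows already seen, and a flow observed in the current window that also hits the previous filter has spanned at least two windows. The class and method names are hypothetical.

```python
import hashlib

class Bloom:
    """Tiny Bloom filter: k hash positions per key in an m-bit array."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0
    def _positions(self, key):
        digest = hashlib.sha256(key.encode()).digest()
        return [int.from_bytes(digest[4 * i:4 * i + 4], 'big') % self.m
                for i in range(self.k)]
    def add(self, key):
        for pos in self._positions(key):
            self.bits |= 1 << pos
    def __contains__(self, key):
        return all(self.bits >> pos & 1 for pos in self._positions(key))

class LongFlowTracker:
    """Rotate two Bloom filters: 'prev' holds flows seen in the last
    window; a flow observed now that is also in 'prev' has persisted
    across windows and is flagged as a long-duration candidate."""
    def __init__(self):
        self.prev, self.cur = Bloom(), Bloom()
    def observe(self, flow):
        self.cur.add(flow)
        return flow in self.prev  # True -> candidate long-duration flow
    def advance(self):
        """Close the current time window."""
        self.prev, self.cur = self.cur, Bloom()
```

Constant memory per window is the point: only the two bit arrays persist, regardless of how many flows appear.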
【Paper Link】 【Pages】:211-215
【Authors】: Jin Cao ; Li (Erran) Li ; Aiyou Chen ; Tian Bu
【Abstract】: Quantiles are very useful in characterizing the data distribution of an evolving dataset in the process of data mining or network monitoring. The method of Stochastic Approximation (SA) tracks quantiles online by incrementally deriving and updating local approximations of the underlying distribution function at the quantiles of interest. In this paper, we propose a generalization of the SA method for quantile estimation that allows not only data insertions, but also dynamic data operations such as deletions and updates.
【Keywords】: data communication; data mining; stochastic processes; telecommunication traffic; data mining; distribution function; dynamic operations; network data streams; network monitoring; stochastic approximation; Approximation algorithms; Communications Society; Data mining; Distribution functions; High-speed networks; Linear approximation; Memory; Monitoring; Stochastic processes; Telecommunication traffic
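The classical insertions-only SA quantile tracker that the paper generalizes can be written in a few lines; it keeps a single running estimate and nudges it toward the target quantile with a decreasing step size. This sketch shows the baseline method, not the paper's extension to deletions and updates, and the names are illustrative.

```python
import random

def sa_quantile(stream, p, q0=0.0, a=1.0):
    """Robbins-Monro stochastic-approximation tracker for the
    p-quantile of a stream (insertions only, constant memory):
        q <- q + (a/n) * (p - 1{x <= q})
    The indicator estimates F(q); the update drives F(q) toward p."""
    q = q0
    for n, x in enumerate(stream, start=1):
        q += (a / n) * (p - (1.0 if x <= q else 0.0))
    return q

rng = random.Random(7)
data = [rng.random() for _ in range(100000)]
median_est = sa_quantile(data, 0.5, q0=0.5)  # close to 0.5 for U(0,1)
```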
【Paper Link】 【Pages】:216-220
【Authors】: Ravish Khosla ; Sonia Fahmy ; Y. Charlie Hu ; Jennifer Neville
【Abstract】: The Border Gateway Protocol (BGP) maintains inter-domain routing information by announcing and withdrawing IP prefixes, possibly resulting in temporary prefix unreachability. Prefix availability observed from different vantage points in the Internet can be lower than the standards promised by Service Level Agreements (SLAs). In this paper, we develop a framework for predicting long-term prefix availability, given short-duration prefix information from publicly available BGP routing databases. We compare three prediction models, and find that bagged decision trees perform best when predicting over long future durations, whereas a simple model works well for short prediction durations. We show that mean time to failure and mean time to recovery outperform past availability as predictors of availability over long durations. We also find that predictability is higher in the year 2009 than four years earlier. Our models allow ISPs to adjust BGP routing policies if predicted availability is low, and are useful for cloud computing systems, P2P, and VoIP applications.
【Keywords】: Internet; decision trees; internetworking; prediction theory; routing protocols; BGP routing databases; IP prefixes; Internet; P2P applications; VoIP applications; bagged decision trees; border gateway protocol; cloud computing; inter domain routing information; long term prefix availability; prefix availability prediction; service level agreements; short duration prefix information; short prediction durations; Availability; Cloud computing; Communications Society; Databases; Decision trees; IEEE news; Internet telephony; Predictive models; Routing protocols; Web and internet services
【Paper Link】 【Pages】:221-225
【Authors】: Brian Gallagher ; Marios Iliofotou ; Tina Eliassi-Rad ; Michalis Faloutsos
【Abstract】: We address the following questions. Is there link homophily in the application layer traffic? If so, can it be used to accurately classify traffic in network trace data without relying on payloads or properties at the flow level? Our research shows that the answers to both of these questions are affirmative in real network trace data. Specifically, we define link homophily to be the tendency for flows with common IP hosts to have the same application (P2P, Web, etc.) compared to randomly selected flows. The presence of link homophily in trace data provides us with statistical dependencies between flows that share common IP hosts. We utilize these dependencies to classify application layer traffic without relying on payloads or properties at the flow level. In particular, we introduce a new statistical relational learning algorithm, called Neighboring Link Classifier with Relaxation Labeling (NLC+RL). Our algorithm has no training phase and does not require features to be constructed. All that it needs to start the classification process is traffic information on a small portion of the initial flows, which we refer to as seeds. In all our traces, NLC+RL achieves above 90% accuracy with less than 5% seed size; it is robust to errors in the seeds and various seed-selection biases; and it is able to accurately classify challenging traffic such as P2P with over 90% Precision and Recall.
【Keywords】: IP networks; classification; telecommunication links; telecommunication traffic; IP hosts; application layer; link homophily; neighboring link classifier; relaxation labeling; statistical relational learning algorithm; trace data; traffic classification; traffic information; Communications Society; Error analysis; Labeling; Laboratories; Payloads; Robustness; Spine; Telecommunication traffic
【Paper Link】 【Pages】:226-230
【Authors】: Xiaoqing Zhu ; Rong Pan ; Nandita Dukkipati ; Vijay Subramanian ; Flavio Bonomi
【Abstract】: This paper presents a novel scheme, Layered Internet Video Engineering (LIVE), in which network nodes feed back virtual congestion levels to video senders to assist both media-aware bandwidth sharing and transient loss protection. The video senders respond to such feedback by adapting the rates of encoded H.264/SVC streams based on their respective video rate-distortion (R-D) characteristics. The same feedback is employed to calculate the amount of forward error correction (FEC) protection for combating transient losses. Simulation studies show that LIVE can minimize the total distortion of all participating video streams and hence maximize their overall quality. At steady state, video streams experience no queuing delays or packet losses. In the face of transient congestion, the network-assisted adaptive FEC effectively protects video packets from losses while keeping overhead to a minimum. Our theoretical analysis further guarantees system stability for an arbitrary number of streams with arbitrary round-trip delays below a prescribed limit. Finally, we show that LIVE streams can coexist with TCP flows within the existing explicit congestion notification (ECN) framework.
【Keywords】: Internet; data compression; forward error correction; transport protocols; video coding; video streaming; H.264/SVC streams; TCP flows; explicit congestion notification framework; feedback virtual congestion levels; forward error correction protection; layered Internet video engineering; media-aware bandwidth sharing; network nodes; network-assisted bandwidth sharing; packet losses; queuing delays; scalable video streaming; transient loss protection; video rate-distortion characteristics; Bandwidth; Delay; Feedback; Forward error correction; IP networks; Protection; Rate-distortion; Static VAr compensators; Streaming media; Video sharing
【Paper Link】 【Pages】:231-235
【Authors】: Lin Cai ; Yuanqian Luo ; Siyuan Xiang ; Jianping Pan
【Abstract】: In conventional wireless systems with layered architectures, the physical layer treats all data streams from upper layers equally and applies the same modulation and coding schemes. Newer systems such as Digital Video Broadcast have started to introduce hierarchical modulation schemes with SuperPosition preCoding (SPC) to support data streams of different priorities. However, SPC requires specialized hardware and has complexity beyond the reach of most existing handheld devices. We thus propose scalable modulation (s-mod), which reuses current mainstream modulation schemes with software-based bit-remapping. In this paper, we study how to optimize the configuration of the PHY-layer s-mod and coding schemes to maximize the utility of videos with Scalable Video Coding (SVC). Simulation results demonstrate significant performance gains from s-mod and the cross-layer optimization, indicating that s-mod and SVC are a good combination for wireless video multicast.
【Keywords】: digital video broadcasting; optimisation; precoding; video coding; cross-layer optimization; data streams; digital video broadcast; layered architectures; scalable modulation; scalable video coding; scalable wireless videocast; software-based bit-remapping; superposition precoding; wireless systems; Digital modulation; Digital video broadcasting; Handheld computers; Hardware; Modulation coding; Performance gain; Physical layer; Static VAr compensators; Streaming media; Video coding
【Paper Link】 【Pages】:236-240
【Authors】: Wesley W. Terpstra ; Christof Leng ; Max Lehn ; Alejandro P. Buchmann
【Abstract】: This paper presents a novel transport protocol, CUSP, specifically designed with complex and dynamic network applications in mind. Peer-to-peer applications benefit in particular, as their requirements are met by neither UDP nor TCP. While other modern transports like SCTP or SST have also tried to combine the advantages of TCP and UDP, CUSP overcomes their technical and conceptual shortcomings. CUSP makes it possible to directly express application logic in the message flow. Modern applications need a mixture of request-response, request-multiple-response, publish-subscribe, and message-passing. All of these operations can be conveniently implemented using CUSP's unidirectional streams. We separate low-level packet management from streams into reusable channels. A channel connects two applications providing negotiation, congestion control, and cryptography. Developers operate on the stream level, sending messages as reliable and ordered byte-streams. Although they may share a common channel, a stall or loss in one stream does not block the others.
【Keywords】: cryptography; telecommunication congestion control; transport protocols; CUSP; channel-based unidirectional stream protocol; congestion control; cryptography; low-level packet management; message flow; message passing; request-multiple-response; transport protocol; Communications Society; Cryptographic protocols; Cryptography; Delay; Distributed databases; Logic; Peer to peer computing; Privacy; Publish-subscribe; Transport protocols
【Paper Link】 【Pages】:241-245
【Authors】: Gabriel Scalosub ; Peter Marbach ; Jörg Liebeherr
【Abstract】: In many applications the traffic traversing the network has inter-packet dependencies due to application-level encoding schemes. For some applications, e.g., multimedia streaming, dropping a single packet may render useless the delivery of a whole sequence. In such environments, the algorithm used to decide which packet to drop in case of buffer overflows must be carefully designed, to avoid goodput degradation. We present a model that captures such inter-packet dependencies, and design algorithms for performing packet discards. Traffic consists of an aggregation of multiple streams, each of which consists of a sequence of inter-dependent packets. We provide two guidelines for designing buffer management algorithms for this problem, and demonstrate the effectiveness of these criteria. We devise an algorithm according to these guidelines and evaluate its performance analytically, using competitive analysis. We also present a simulation study that shows that the performance of our algorithm is within a small fraction of the performance of the best offline algorithm.
【Keywords】: forward error correction; packet switching; telecommunication network management; telecommunication traffic; aggregated streaming data; application-level encoding schemes; buffer management; packet dependencies; Algorithm design and analysis; Application software; Buffer overflow; Decoding; Delay; Encoding; Guidelines; Streaming media; Telecommunication traffic; Traffic control
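The intuition behind dependency-aware discard, that dropping a packet from an already-damaged sequence costs no additional goodput, can be sketched as a simple buffer-admission policy. This is an illustrative policy under assumed names, not the paper's algorithm or its two design guidelines.

```python
from collections import deque

def admit(buffer, capacity, pkt, damaged):
    """Illustrative drop policy for inter-dependent packets: on
    overflow, prefer to evict a buffered packet from a stream whose
    current sequence is already damaged, because its remaining packets
    cannot be decoded anyway.

    buffer  -- deque of (stream_id, seq_no) tuples
    damaged -- set of stream_ids whose current sequence is broken
    Returns the dropped packet, or None if nothing was dropped."""
    if len(buffer) < capacity:
        buffer.append(pkt)
        return None
    for victim in buffer:
        if victim[0] in damaged:
            buffer.remove(victim)   # evict a packet that is already useless
            buffer.append(pkt)
            return victim
    damaged.add(pkt[0])             # no useless packet to evict: drop the arrival
    return pkt
```

Concentrating losses on streams that are already damaged is exactly what maximizes goodput when a single loss invalidates a whole sequence.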
【Paper Link】 【Pages】:246-250
【Authors】: Vijay Gopalakrishnan ; Rittwik Jana ; Ralph Knag ; K. K. Ramakrishnan ; Deborah F. Swayne ; Vinay A. Vaishampayan
【Abstract】: We investigate the user viewing activity for broadcast TV, pre-recorded content using Digital Video Recording (DVR), and video on demand (VoD) in an IP-based content distribution environment. Advanced stream control functions (play, pause, skip, rewind, etc.) provide users with a high level of interactivity, but place demands on the distribution infrastructure (servers, network, home network) that can be difficult to manage at large scale. To support system design as well as network capacity planning, it is necessary to have a good model of user interaction. Using traces from a well-provisioned operational environment with a large user population, we first characterize interactivity for broadcast TV, DVR and VoD. We then develop parametric models of individual users' stream control operations for VoD. Our analysis shows that interactive behavior is adequately characterized by two semi-Markov models, one for weekdays and another for weekends. We propose a parametric model for the underlying sojourn time distributions and show that it results in a superior fit compared to well-known distributions (generalized Pareto and Weibull). To validate that our models faithfully capture user behavior, we compare the workload that a VoD server experiences in response to actual traces and to synthetic data generated from our proposed models.
【Keywords】: IPTV; interactive television; video on demand; video recording; video streaming; IP-based content distribution environment; broadcast TV; digital video recording; interactive behavior; large-scale operational IPTV environment; pre-recorded content; stream control functions; user viewing activity; video on demand; Capacity planning; Digital video broadcasting; IPTV; Large-scale systems; Network servers; Parametric statistics; Streaming media; TV broadcasting; Video on demand; Video recording
【Paper Link】 【Pages】:251-255
【Authors】: Lars Kulseng ; Zhen Yu ; Yawen Wei ; Yong Guan
【Abstract】: The promise of RFID technology is evident, given the low cost and high convenience value of RFID tags. However, low-cost RFID tags pose new challenges to security and privacy. Some solutions utilize expensive cryptographic primitives such as hash or encryption functions, and some lightweight approaches have been reported to be broken. In this paper, we propose a lightweight solution to mutual authentication for RFID systems in which only authenticated readers and tags can successfully communicate with each other. We then adapt our mutual authentication scheme to secure the ownership transfer of RFID tags. Our mutual authentication and ownership transfer protocols are realized using minimalistic cryptography such as Physically Unclonable Functions (PUF) and Linear Feedback Shift Registers (LFSR). PUFs and LFSRs are very efficient in hardware and particularly suitable for low-cost RFID tags. Compared to existing solutions built on top of hash functions that require 8000 to 10000 gates, our schemes demand only 784 gates for 64-bit variables and can be easily accommodated by the cheapest RFID tags, which have only 2000 gates available for security functions.
【Keywords】: cryptographic protocols; radiofrequency identification; RFID systems; cryptographic primitives; encryption functions; hash functions; lightweight mutual authentication; linear feedback shift registers; ownership transfer protocols; physically unclonable functions; security functions; Authentication; Costs; Cryptographic protocols; Cryptography; Hardware; Linear feedback shift registers; Privacy; RFID tags; Radiofrequency identification; Security
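An LFSR costs roughly one flip-flop per bit plus a few XOR gates, which is why it fits a low-cost tag's gate budget; a PUF's response is device-specific physics and cannot be reproduced in software. The sketch below is a generic software model of a Fibonacci LFSR, not the paper's protocol, using the commonly cited tap set [16, 14, 13, 11] that yields a maximal-length 16-bit register.

```python
def lfsr_stream(state, taps, width, nbits):
    """Fibonacci LFSR: output the low bit, XOR the tapped bits to form
    the feedback, shift right, and insert the feedback at the MSB.
    With taps [16, 14, 13, 11] and width 16 the state sequence has
    maximal period 2**16 - 1 = 65535."""
    out = []
    for _ in range(nbits):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> (width - t)) & 1
        state = (state >> 1) | (fb << (width - 1))
    return out, state

bits, _ = lfsr_stream(0xACE1, [16, 14, 13, 11], 16, 32)
```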
【Paper Link】 【Pages】:256-260
【Authors】: Thanh Dang ; Nirupama Bulusu ; Wu-chi Feng ; Sergey Frolov
【Abstract】: Current feature tracking frameworks in sensor networks exploit either mobility, where mobile sensors provide micro-scale information about a small sensing area, or numerical models, which provide macro-scale information about the environment, but not both. With the continual development of sensor networks, mobility is becoming an important feature to integrate into next-generation sensing systems. In addition, recent advances in environmental modeling allow us to better understand the basic behavior of the environment. To further improve existing sensing systems, we need a new framework that takes advantage of existing fixed sensor networks, mobile sensors, and numerical models. We develop CoTrack, a Collaborative Tracking framework that allows mobile sensors to cooperate with fixed sensors and numerical models to accurately track dynamic features in an environment. The key innovation in CoTrack is the incorporation of numerical models at different scales, together with sensor measurements, to guide mobile sensors during tracking. The framework includes three components: a macro model for large-scale estimation, a micro model for local estimation of specific features based on sensor measurements, and an adaptive sampling scheme that guides mobile sensors to accurately track dynamic features. We apply our framework to track salinity intrusion in the Columbia River estuary in Oregon, United States. Our framework is fast and reduces tracking error by more than half compared to existing data assimilation and state-of-the-art numerical models.
【Keywords】: sensors; tracking; wireless sensor networks; CoTrack; adaptive sampling scheme; collaborative tracking framework; data assimilation; dynamic features tracking; feature tracking frameworks; fixed sensor networks; mobile sensors; numerical models; salinity intrusion; sensor measurements; static sensors; Collaboration; Data assimilation; Large-scale systems; Next generation networking; Numerical models; Rivers; Sampling methods; Sensor phenomena and characterization; Sensor systems; Technological innovation
【Paper Link】 【Pages】:261-265
【Authors】: Junxing Zhang ; Sneha Kumar Kasera ; Neal Patwari
【Abstract】: We propose an approach where wireless devices, interested in establishing a secret key, sample the channel impulse response (CIR) space in a physical area to collect and combine uncorrelated CIR measurements to generate the secret key. We study the impact of mobility patterns in obtaining uncorrelated measurements. Using extensive measurements in both indoor and outdoor settings, we find that (i) when the movement step size is larger than one foot, the measured CIRs are mostly uncorrelated, and (ii) more diffusion in the mobility results in less correlation in the measured CIRs. We develop efficient mechanisms to encode CIRs and reconcile the differences in the bits extracted between the two devices. Our results show that our scheme generates very high-entropy secret bits at a high bit rate. The secret bits generated using our approach also pass the 8 randomness tests of the NIST test suite.
【Keywords】: correlation methods; mobile communication; telecommunication links; NIST; channel impulse response; mobility assisted secret key generation; mobility patterns; randomness tests; uncorrelated CIR measurements; wireless devices; wireless link signatures; Area measurement; Bit rate; Communications Society; Entropy; Extraterrestrial measurements; Foot; NIST; Physical layer; Size measurement; Testing
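Turning channel measurements into bits typically involves a quantizer with a guard band, so that borderline readings, which the two devices are most likely to see differently, are simply discarded. The sketch below is a generic single-threshold quantizer under assumed names, not the paper's encoding or reconciliation mechanism.

```python
import statistics

def quantize_bits(samples, guard=0.1):
    """Guard-band quantizer for channel measurements: readings above
    median + guard*std map to 1, readings below median - guard*std map
    to 0, and readings inside the band are discarded, reducing the
    chance the two devices disagree on borderline measurements."""
    med = statistics.median(samples)
    sd = statistics.pstdev(samples)
    hi, lo = med + guard * sd, med - guard * sd
    bits, kept = [], []
    for i, s in enumerate(samples):
        if s > hi:
            bits.append(1)
            kept.append(i)
        elif s < lo:
            bits.append(0)
            kept.append(i)
    return bits, kept  # the 'kept' indices are exchanged so both sides align
```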
【Paper Link】 【Pages】:266-270
【Authors】: Natalia Castro Fernandes ; Marcelo D. D. Moreira ; Otto Carlos Muniz Bandeira Duarte
【Abstract】: This paper introduces a self-organized mechanism to control user access in ad hoc networks without requiring any infrastructure or a central administration entity. The proposed mechanism authenticates and monitors nodes with the so-called controller sets, which are resistant to the dynamic network membership. The analysis shows that the proposed scheme is robust even to collusion attacks and provides availability up to 90% better than proposals based on threshold cryptography. The performance improvement arises mostly from the controller sets autonomy to recover after network partitions.
【Keywords】: ad hoc networks; cryptography; telecommunication security; ad hoc networks; central administration entity; collusion attacks; controller sets; dynamic network membership; malicious access; network partitions; self-organized mechanism; threshold cryptography; user access; Access control; Ad hoc networks; Authentication; Availability; Centralized control; Monitoring; Peer to peer computing; Proposals; Public key; Public key cryptography
【Paper Link】 【Pages】:271-275
【Authors】: Jian Ni ; Bo (Rambo) Tan ; R. Srikant
【Abstract】: Recently, it has been shown that CSMA-type random access algorithms can achieve the maximum possible throughput in ad hoc wireless networks. However, these algorithms assume an idealized continuous-time CSMA protocol where collisions can never occur. In addition, simulation results indicate that the delay performance of these algorithms can be quite bad. On the other hand, although some simple heuristics (such as distributed approximations of greedy maximal scheduling) can yield much better delay performance for a large set of arrival rates, they may only achieve a fraction of the capacity region in general. In this paper, we propose a discrete-time version of the CSMA algorithm. Central to our results is a discrete-time distributed randomized algorithm which is based on a generalization of the so-called Glauber dynamics from statistical physics, where multiple links are allowed to update their states in a single time slot. The algorithm generates collision-free transmission schedules while explicitly taking collisions into account during the control phase of the protocol, thus relaxing the perfect CSMA assumption. More importantly, the algorithm allows us to incorporate delay-reduction mechanisms which lead to very good delay performance while retaining the throughput-optimality property.
【Keywords】: ad hoc networks; carrier sense multiple access; queueing theory; statistical analysis; wireless channels; CSMA-type random access algorithm; Glauber dynamics; Q-CSMA; ad hoc wireless network; collision-free transmission; continuous-time CSMA protocol; delay-reduction mechanism; discrete-time distributed randomized algorithm; queue-length based CSMA-CA algorithm; statistical physics; throughput-optimality property; Access protocols; Communications Society; Delay; Media Access Protocol; Multiaccess communication; Physics; Scheduling algorithm; Throughput; USA Councils; Wireless networks
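The Glauber-dynamics core can be illustrated on a conflict graph of links. The sketch below is the classical single-site version, where one randomly chosen link updates its state per slot; the paper's Q-CSMA generalizes this to let multiple links update in one slot. All names are illustrative.

```python
import random

def glauber_step(active, conflicts, fugacity, rng):
    """One Glauber-dynamics update on the conflict graph: pick a link
    uniformly at random; if none of its conflicting links is active,
    activate it with probability fugacity/(1+fugacity), otherwise
    deactivate it.  The schedule remains a valid independent set
    (i.e., collision-free) after every step."""
    link = rng.choice(sorted(conflicts))
    if any(n in active for n in conflicts[link]):
        active.discard(link)  # a conflicting link transmits: stay silent
    elif rng.random() < fugacity / (1 + fugacity):
        active.add(link)
    else:
        active.discard(link)
    return active
```

In the stationary distribution each independent set S has weight proportional to fugacity^|S|, so raising the fugacity pushes the chain toward larger (higher-throughput) schedules.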
【Paper Link】 【Pages】:276-280
【Authors】: Miao Wang ; Lisong Xu ; Byrav Ramamurthy
【Abstract】: Most of the commercial P2P video streaming deployments support hundreds of channels and are referred to as multichannel systems. Measurement studies show that bandwidth resources of different channels are highly unbalanced and thus recent research studies have proposed various protocols to improve the streaming qualities for all channels by enabling cross-channel cooperation among multiple channels. However, there is no general framework for comparing existing and potential designs for multi-channel P2P systems. The goal of this paper is to establish tractable models for answering the fundamental question in multi-channel system designs: Under what circumstances, should a particular design be used to achieve the desired streaming quality with the lowest implementation complexity? To achieve this goal, we first classify existing and potential designs into three categories, namely Naive Bandwidth allocation Approach (NBA), Passive Channel-aware bandwidth allocation Approach (PCA) and Active Channel-aware bandwidth allocation Approach (ACA). Then, we define the bandwidth satisfaction ratio as a performance metric to develop linear programming models for the three designs. The proposed models are independent of implementations and can be efficiently solved due to the linear property, which provides a way of numerically exploring the design space of multi-channel systems and developing closed-form solutions for special systems.
【Keywords】: bandwidth allocation; linear programming; media streaming; peer-to-peer computing; P2P video streaming deployments; active channel-aware bandwidth allocation approach; bandwidth resources; bandwidth satisfaction ratio; cross-channel cooperation; implementation complexity; linear programming models; linear property; multichannel P2P streaming systems; naive bandwidth allocation approach; passive channel-aware bandwidth allocation approach; streaming qualities; tractable models; Bandwidth; Channel allocation; Communications Society; Computer science; Linear programming; Resource management; Space exploration; Streaming media; USA Councils; Watches
【Paper Link】 【Pages】:281-285
【Authors】: Zhenjiang Li ; Danny H. K. Tsang ; Wang-Chien Lee
【Abstract】: The P2P pull-push hybrid architecture has achieved great success in delivering live video traffic over the Internet. However, a formal study on the sub-stream scheduling problem, a key design issue in hybrid systems, is still lacking. In this paper, we propose a max-flow model for mathematical analysis of this problem. We find that the sub-stream scheduling schemes used in existing hybrid systems, including CoolStreaming+, GridMedia and LStreaming, individually solve one special case of the proposed max-flow model. Moreover, this model can also serve as a benchmark to assess the performance of these existing sub-stream scheduling schemes. Further, we propose a weighted max-flow scheme to address the issue of peer heterogeneity in scheduling sub-streams. Finally, we point out the benefits of combining the hybrid streaming architecture and layered coding, and we also investigate how to schedule sub-streams in hybrid layered streaming systems.
【Keywords】: Internet; media streaming; peer-to-peer computing; telecommunication traffic; CoolStreaming+; GridMedia; Internet; LStreaming; P2P hybrid live streaming systems; hybrid layered streaming systems; mathematical analysis; substream scheduling schemes; video traffic; Communications Society; Internet; Mathematical analysis; Mathematical model; Performance analysis; Robustness; Scheduling; Streaming media; Throughput; Traffic control
【Paper Link】 【Pages】:286-290
【Authors】: Yi Hu ; Min Feng ; Laxmi N. Bhuyan
【Abstract】: A fundamental challenge of managing mutable data replication in a Peer-to-Peer (P2P) system is how to efficiently maintain consistency under various sharing patterns with heterogeneous resource capabilities. This paper presents a framework for balanced consistency maintenance (BCoM) in structured P2P systems. Replica nodes of each object are organized into a tree for disseminating updates, and a sliding window update protocol is developed to bound the consistency. The effect of window size under dynamic network conditions, workload updates, and resource limits is analyzed through a queueing model. This enables us to balance availability, performance, and consistency strictness for various application requirements. On top of the dissemination tree, two enhancements are proposed: a fast recovery scheme to strengthen robustness against node and link failures, and a node migration policy to remove and prevent bottlenecks for better system performance. Simulations are conducted using P2PSim to evaluate BCoM in comparison to SCOPE. The experimental results demonstrate that BCoM significantly improves the availability of SCOPE, lowering the discard rate from almost 100% to 5% with a slight increase in latency.
【Keywords】: peer-to-peer computing; protocols; queueing theory; trees (mathematics); balanced consistency maintenance protocol; consistency strictness; data replication; dissemination tree; link failures; node migration policy; peer-to-peer systems; queueing model; replica nodes; resource limits; sliding window update protocol; window size; workload updates; Availability; Communications Society; Computer science; Data engineering; Delay; Frequency synchronization; Maintenance engineering; Peer to peer computing; Protocols; Resource management
【Paper Link】 【Pages】:291-295
【Authors】: David R. Choffnes ; Mario A. Sánchez ; Fabian E. Bustamante
【Abstract】: Network positioning systems provide an important service to large-scale P2P systems, potentially enabling clients to achieve higher performance, reduce cross-ISP traffic and improve the robustness of the system to failures. Because traces representative of this environment are generally unavailable, and there is no platform suited for experimentation at the appropriate scale, network positioning systems have been commonly implemented and evaluated in simulation and on research testbeds. The performance of network positioning remains an open question for large deployments at the edges of the network. This paper evaluates how four key classes of network positioning systems fare when deployed at scale and measured in P2P systems where they are used. Using 2 billion network measurements gathered from more than 43,000 IP addresses probing over 8 million other IPs worldwide, we show that network positioning exhibits noticeably worse performance than previously reported in studies conducted on research testbeds. To explain this result, we identify several key properties of this environment that call into question fundamental assumptions driving network positioning research.
【Keywords】: peer-to-peer computing; telecommunication traffic; cross-ISP traffic; large-scale P2P systems; network positioning; Communications Society; Coordinate measuring machines; Delay; Economic indicators; Large-scale systems; Position measurement; Robustness; System testing; Telecommunication traffic; Traffic control
【Paper Link】 【Pages】:296-300
【Authors】: Qiyan Wang ; Long Vu ; Klara Nahrstedt ; Himanshu Khurana
【Abstract】: Network coding has been shown to be capable of greatly improving quality of service in P2P live streaming systems (e.g., IPTV). However, network coding is vulnerable to pollution attacks, in which malicious nodes inject bogus data blocks into the network; these blocks are combined with legitimate blocks at downstream nodes, making the original blocks undecodable and substantially degrading network performance. In this paper, we propose a novel approach to limiting pollution attacks by rapidly identifying malicious nodes. Our scheme fully satisfies the requirements of live streaming systems and achieves much higher efficiency than previous schemes. Each node in our scheme only needs to perform several hash computations per incoming block, incurring very small computational latency. The space overhead added to each block is only 20 bytes. The verification information given to each node is independent of the streaming content and thus does not need to be redistributed. Simulation results based on real PPLive channel overlays show that the process of identifying malicious nodes takes only a few seconds, even in the presence of a large number of malicious nodes.
【Keywords】: identification; network coding; peer-to-peer computing; quality of service; video streaming; MIS; P2P live streaming systems; PPLive channel; computational latency; data blocks; hash computations; information verification; malicious nodes identification; malicious nodes identification scheme; network performance; network-coding-based peer-to-peer streaming; quality of service; Decoding; Degradation; Delay; Error correction; Network coding; Peer to peer computing; Protocols; Quality of service; Streaming media; Water pollution
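The flavor of the per-block verification can be sketched as below. This is a deliberate simplification: the paper's scheme verifies *combined* coded blocks with streaming-independent information, whereas this sketch only shows a plain 20-byte hash tag check; function names are illustrative.

```python
import hashlib

# Illustrative only: a 20-byte-per-block tag lets a node cheaply flag a
# polluted block before combining it further downstream.

def tag(block: bytes) -> bytes:
    return hashlib.sha1(block).digest()   # 20-byte overhead per block

def verify(block: bytes, t: bytes) -> bool:
    return hashlib.sha1(block).digest() == t

good = b"legitimate payload"
t = tag(good)
assert len(t) == 20                      # matches the stated space overhead
assert verify(good, t)
assert not verify(b"bogus payload", t)   # polluted block is rejected
```

In the real scheme the check must survive linear combination of blocks, which a plain hash does not; that is the technical gap the paper's construction closes.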
【Paper Link】 【Pages】:301-305
【Authors】: Bakr Sarakbi ; Stéphane Maag
【Abstract】: The Internet is a composition of ASes (Autonomous Systems), and BGP (Border Gateway Protocol) is the routing protocol responsible for exchanging routes between these ASes. It operates in two modes: External BGP (eBGP) and Internal BGP (iBGP). eBGP exchanges routing information between ASes, while iBGP propagates that information within the AS. The BGP Full Mesh Solution (FMS) requires that all ASBRs (Autonomous System Border Routers) be fully meshed and that each internal node have an iBGP session with all of them, because an iBGP node cannot reflect routes. BGP route reflection has been widely employed as an alternative to the full mesh to reduce the number of iBGP sessions and, in turn, increase scalability inside the AS. Under particular configurations, however, it introduces persistent route oscillation, forwarding loops, and non-optimal egress nodes. Skeleton is an alternative to route reflection that overcomes these routing anomalies. Skeleton is a subgraph of the physical graph with the same set of nodes, whose edges are the iBGP sessions between the nodes. All Skeleton nodes can reflect routes. Skeleton eliminates the use of clusters and establishes iBGP sessions only between single-hop neighbors. We prove that it satisfies the sufficient correctness conditions and is robust against MED-induced oscillations. We evaluate it on five real-world topologies and find that the number of iBGP sessions grows linearly with the number of ASBRs, whereas in FMS this relationship is quadratic.
【Keywords】: Internet; computer networks; routing protocols; BGP skeleton; Internet; autonomous systems; border gateway protocol; full mesh solution; iBGP route reflection; routing protocol; Communications Society; Flexible manufacturing systems; Internet; Peer to peer computing; Reflection; Routing protocols; Scalability; Skeleton; Telecommunications; Topology
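The quadratic-versus-linear session scaling can be made concrete with back-of-envelope counting. The counting rules below are assumptions for illustration (real topologies and the paper's exact accounting differ):

```python
# Rough iBGP session counts: full mesh vs. a skeleton of physical links.

def full_mesh_sessions(n_border, n_internal):
    # All ASBRs fully meshed, plus every internal node peers with each ASBR:
    # quadratic in the number of border routers.
    return n_border * (n_border - 1) // 2 + n_internal * n_border

def skeleton_sessions(physical_edges):
    # Skeleton: one iBGP session per single-hop physical neighbor pair,
    # so sessions grow with the (typically sparse) physical graph.
    return len(physical_edges)

# 10 border routers, 40 internal nodes, a sparse physical graph of 60 links:
assert full_mesh_sessions(10, 40) == 445
edges = [(f"r{i}", f"r{i+1}") for i in range(60)]
assert skeleton_sessions(edges) == 60
```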
【Paper Link】 【Pages】:306-310
【Authors】: Jung Ryu ; Lei Ying ; Sanjay Shakkottai
【Abstract】: We study a mobile wireless network where groups or clusters of nodes are intermittently connected via mobile "carriers" (the carriers provide connectivity over time among different clusters of nodes). Over such networks (an instantiation of a delay tolerant network), it is well-known that traditional routing algorithms perform very poorly. In this paper, we propose a two-level Back-Pressure with Source-Routing algorithm (BP+SR) for such networks. The proposed BP+SR algorithm separates routing and scheduling within clusters (fast time-scale) from the communications that occur across clusters (slow time-scale), without loss in network throughput (i.e., BP+SR is throughput-optimal). More importantly, for a source and destination node that lie in different clusters, the traditional back-pressure algorithm results in large queue lengths at each node along its path. This is because the queue dynamics are driven by the slowest time-scale (i.e., that of the carrier nodes) along the path between the source and destination, which results in very large end-to-end delays. On the other hand, we show that the two-level BP+SR algorithm maintains large queues at only a very few nodes, and thus results in order-wise smaller end-to-end delays. We provide analytical as well as simulation results to confirm our claims.
【Keywords】: delays; mobile radio; queueing theory; telecommunication network routing; back-pressure routing; delay tolerant network; intermittently connected networks; mobile carriers; mobile wireless network; network throughput; queue dynamics; queue lengths; source-routing algorithm; Clustering algorithms; Communications Society; Delay; Disruption tolerant networking; Peer to peer computing; Routing; Scheduling algorithm; Telecommunication traffic; Traffic control; Wireless networks
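The core back-pressure step that BP+SR builds on can be sketched as follows: at each slot, schedule the (link, flow) pair with the largest positive queue differential. This is the textbook rule, not the paper's two-level variant; names are illustrative.

```python
# One back-pressure scheduling decision: pick the (link, flow) whose
# per-flow backlog differential Q_u[f] - Q_v[f] is largest and positive.

def backpressure_choice(queues, links):
    """queues: {node: {flow: backlog}}; links: iterable of (u, v) pairs.
    Returns the (u, v, flow) maximizing the differential, or None."""
    best, best_diff = None, 0
    for u, v in links:
        for f, qu in queues[u].items():
            diff = qu - queues[v].get(f, 0)
            if diff > best_diff:
                best, best_diff = (u, v, f), diff
    return best

queues = {"a": {"f1": 5, "f2": 1}, "b": {"f1": 2, "f2": 4}}
assert backpressure_choice(queues, [("a", "b")]) == ("a", "b", "f1")
```

Because differentials must build up hop by hop, backlog accumulates along slow carrier paths; BP+SR's source-routing level avoids exactly this.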
【Paper Link】 【Pages】:311-315
【Authors】: Jianping Wang ; Kui Wu ; Shiliang Li ; Chunming Qiao
【Abstract】: In an integrated fiber and wireless access (FiWi) network, multi-path forwarding may be applied in the wireless subnetwork to improve throughput. Due to the delay difference along multiple paths, reordered packets of a flow may arrive at the Optical Line Terminal (OLT), where they wait to be dispatched to the Internet; this reordering may deteriorate TCP performance. As all traffic in a FiWi network is sent out through the OLT, the OLT serves as a convergence node, which naturally makes it possible to resequence packets at the OLT before they are sent to the Internet. The fundamental difference between resequencing at the end systems and resequencing at an intermediate node (e.g., the OLT) is that only a very tight resequencing delay can be tolerated in the latter. Thus, resequencing at intermediate nodes must be fast. In this paper, we propose an integrated flow assignment and resequencing approach that jointly determines the probability of sending packets along each path from the source and requires virtually zero resequencing delay at the OLT, reducing the out-of-order probability when packets are injected into the Internet from the access network. Simulation results validate our analysis and the effectiveness of the proposed integrated flow assignment and resequencing approach.
【Keywords】: Internet; radio access networks; telecommunication network routing; FiWi network; Internet; integrated fiber wireless networks; multipath forwarding; multipath routing; optical line terminal; out-of-order probability; performance modeling; resequencing delay; wireless access network; Convergence; Delay; Dispatching; IP networks; Image motion analysis; Internet; Performance analysis; Routing; Telecommunication traffic; Throughput
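The resequencing mechanism itself is simple to sketch: release packets in sequence order, buffering out-of-order arrivals. The paper's contribution is keeping the time spent in such a buffer at the OLT near zero via flow assignment; this sketch (illustrative names) only shows the mechanism.

```python
import heapq

# An intermediate-node resequencer: hold out-of-order packets, release
# in-order runs as soon as the next expected sequence number arrives.

class Resequencer:
    def __init__(self):
        self.next_seq = 0
        self.heap = []        # min-heap of buffered sequence numbers

    def push(self, seq):
        """Accept packet `seq`; return the packets released in order."""
        heapq.heappush(self.heap, seq)
        out = []
        while self.heap and self.heap[0] == self.next_seq:
            out.append(heapq.heappop(self.heap))
            self.next_seq += 1
        return out

r = Resequencer()
assert r.push(1) == []          # held: 0 not yet seen
assert r.push(0) == [0, 1]      # in-order run released
assert r.push(2) == [2]
```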
【Paper Link】 【Pages】:316-320
【Authors】: Vijay Kamble ; Eitan Altman ; Rachid El Azouzi ; Vinod Sharma
【Abstract】: Most theoretical research on routing games in telecommunication networks has so far dealt with reciprocal congestion effects between routed entities. Yet in networks that support differentiation between flows, the congestion experienced by a packet depends on its priority level. Another differentiation is made by compressing the packets in the low priority flow while leaving the high priority flow intact. In this paper we study such kind of routing scenarios for the case of non-atomic users and we establish conditions for the existence and uniqueness of equilibrium.
【Keywords】: game theory; telecommunication network routing; hierarchical routing games; nonatomic users; priority flow; reciprocal congestion effects; telecommunication networks; Ad hoc networks; Communications Society; Delay; Engineering management; Game theory; Industrial engineering; Routing; Telecommunication congestion control; Telecommunication network management; Telecommunication traffic
【Paper Link】 【Pages】:321-325
【Authors】: Chengchen Hu ; Kai Chen ; Yan Chen ; Bin Liu
【Abstract】: As the Internet becomes a critical infrastructure component of our global information-based society, any interruption to its availability can have significant economic and societal impacts. Although much research has tried to improve resilience through BGP policy-compliant paths, it has been demonstrated that the Internet is still highly vulnerable when major failures happen. In this paper, we aim to overcome the inherent constraint of existing BGP-compliant recovery schemes and propose to seek additional potential routing diversity by relaxing BGP peering links and by using Internet eXchange Points (IXPs). The focus of this paper is to evaluate the potential of these two schemes, rather than their implementations. By collecting the most complete and up-to-date AS link map, with 31K nodes and 142K links, we demonstrate that the proposed potential routing diversity can recover 40% to 80% of the disconnected paths on average beyond BGP-compliant paths. This work suggests a promising avenue for addressing Internet failures.
【Keywords】: Internet; failure analysis; routing protocols; AS link map; BGP peering links; BGP policy-compliant recovery schemes; Internet exchange points; Internet failure recovery; evaluating potential routing diversity; information-based society; Communications Society; Cultural differences; Delay; Earth; IP networks; Internet; Peer to peer computing; Resilience; Routing; Terrorism
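The style of evaluation described above can be illustrated on a toy graph: count the (source, destination) pairs disconnected after a failure, add the candidate extra links (relaxed peerings or IXP links), and measure the recovered fraction. The graph and links below are made up for the sketch.

```python
from collections import deque

def reachable(adj, src, dst):
    """Breadth-first reachability in an undirected adjacency map."""
    seen, q = {src}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return True
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return False

def recovered_fraction(adj, extra_links, pairs):
    """Fraction of disconnected pairs reconnected by `extra_links`."""
    broken = [(s, d) for s, d in pairs if not reachable(adj, s, d)]
    if not broken:
        return 0.0
    aug = {u: set(vs) for u, vs in adj.items()}
    for u, v in extra_links:
        aug.setdefault(u, set()).add(v)
        aug.setdefault(v, set()).add(u)
    return sum(reachable(aug, s, d) for s, d in broken) / len(broken)

adj = {1: {2}, 2: {1}, 3: {4}, 4: {3}}       # two disconnected islands
assert recovered_fraction(adj, [(2, 3)], [(1, 4)]) == 1.0
```

The paper runs this kind of analysis at the scale of the full AS-level map, with BGP policy constraints that this sketch omits.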
【Paper Link】 【Pages】:326-330
【Authors】: Andrew J. Kalafut ; Craig A. Shue ; Minaxi Gupta
【Abstract】: While many attacks are distributed across botnets, investigators and network operators have recently targeted malicious networks through high profile autonomous system (AS) de-peerings and network shut-downs. In this paper, we explore whether some ASes indeed are safe havens for malicious activity. We look for ISPs and ASes that exhibit disproportionately high malicious behavior using 12 popular blacklists. We find that some ASes have over 80% of their routable IP address space blacklisted and others account for large fractions of blacklisted IPs. Overall, we conclude that examining malicious activity at the AS granularity can unearth networks with lax security or those that harbor cybercrime.
【Keywords】: Internet; security of data; telecommunication network routing; ASes; ISP; abnormally malicious autonomous systems; de-peerings; harbor cybercrime; high profile autonomous system; malicious hubs; network shut-downs; routable IP address; security; targeted malicious networks; Communications Society; Computer crime; Computer networks; Computer security; Distributed computing; Informatics; Internet; Peer to peer computing; Telecommunication traffic; US Government
【Paper Link】 【Pages】:331-335
【Authors】: Zhibin Zhou ; Dijiang Huang
【Abstract】: Many IP multicast-based applications, such as multimedia conferencing, multiplayer games, require controlling the group memberships of senders and receivers. One common solution is to encrypt the data with a session key shared with all authorized senders/receivers. To efficiently update the session key in the event of member removal, many rooted-tree based group key distribution schemes have been proposed. However, most of the existing rooted-tree based schemes are not optimal. In other words, given the O(log N) storage overhead, the communication overhead is not minimized. On the other hand, although the Flat Table scheme achieves optimality, it is rather dismissed due to its vulnerability to collusion attacks. In this paper, we propose a key distribution scheme - EGK - that attains the same optimality as Flat Table without collusion vulnerability. Additionally, EGK provides constant message size and requires O(log N) storage overhead at the group controller, which makes EGK suitable for applications containing a large number of multicasting group members. Moreover, adding members in EGK requires just one multicasting message. EGK is the first work with such features and outperforms all existing schemes.
【Keywords】: IP networks; communication complexity; cryptography; multicast communication; telecommunication security; IP multicast-based applications; flat table scheme; optimal key distribution scheme; rooted-tree based group key distribution schemes; secure multicast group communication; session key; Communication system control; Communications Society; Cryptography; Data privacy; Data security; Intrusion detection; Multicast communication; Optimal control; Random number generation; Size control
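For context, the rekeying cost of the rooted-tree baseline that EGK improves on can be estimated as follows. This is a rough LKH-style count under simplifying assumptions (balanced binary tree, one rekey message per sibling subtree); EGK's claimed constant message size is not modeled here.

```python
import math

# Rooted-tree (LKH-style) rekeying cost sketch: removing one member
# forces every key on its leaf-to-root path to change, and each changed
# key is re-encrypted for roughly 2 sibling subtrees.

def lkh_rekey_messages(n_members):
    depth = math.ceil(math.log2(n_members))   # path length in a balanced tree
    return 2 * depth                          # ~2 messages per path key

assert lkh_rekey_messages(1024) == 20   # O(log N) messages per removal
```

EGK's point is to keep the O(log N) storage at the controller while cutting the per-removal communication to a constant-size message.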
【Paper Link】 【Pages】:336-340
【Authors】: Daniele Quercia ; Stephen Hailes
【Abstract】: Collaborative applications for co-located mobile users can be severely disrupted by a sybil attack to the point of being unusable. Existing decentralized defences have largely been designed for peer-to-peer networks but not for mobile networks. That is why we propose a new decentralized defence for portable devices and call it MobID. The idea is that a device manages two small networks in which it stores information about the devices it meets: its network of friends contains honest devices, and its network of foes contains suspicious devices. By reasoning on these two networks, the device is then able to determine whether an unknown individual is carrying out a sybil attack or not. We evaluate the extent to which MobID reduces the number of interactions with sybil attackers and consequently enables collaborative applications. We do so using real mobility and social network data. We also assess computational and communication costs of MobID on mobile phones.
【Keywords】: computer network security; groupware; mobile handsets; radio networks; MobID; collaborative applications; mobile decentralized defence; mobile user; portable devices; real mobility; social network data; sybil attacks; Application software; Collaboration; Collaborative software; Communications Society; Computational efficiency; Educational institutions; Mobile communication; Mobile handsets; Peer to peer computing; Social network services
【Paper Link】 【Pages】:341-345
【Authors】: Bin Zhang ; Ehab Al-Shaer
【Abstract】: The objective of this work is to create a usable security architecture that minimizes network risk while considering usability and budget. We propose and formulate a novel framework for the automatic creation of network security architecture, including configuration rules and device placements, in order to minimize risk while satisfying the business requirements, service usability and budget constraints. Our framework also automates the creation of external and internal Demilitarized Zones (DMZ) to improve security by increasing isolation. We formalize this as an optimization problem and show that it is NP-hard. We then provide heuristic approximation algorithms. The implemented system, called SecBuilder, was evaluated under different network sizes, topologies and security requirements. Our evaluation study shows that the results obtained by SecBuilder are close to the theoretical lower bound and that its performance is scalable with the network size.
【Keywords】: computer network security; heuristic programming; optimisation; DMZ; NP-hard problems; budget constraints; network security architecture; optimization problem; security configuration; towards automatic creation; Approximation algorithms; Communications Society; Computer architecture; Computer security; Costs; Heuristic algorithms; Information security; Network topology; USA Councils; Usability
【Paper Link】 【Pages】:346-350
【Authors】: Pawan Prakash ; Manish Kumar ; Ramana Rao Kompella ; Minaxi Gupta
【Abstract】: Phishing has been an easy and effective way for trickery and deception on the Internet. While solutions such as URL blacklisting have been effective to some degree, their reliance on exact match with the blacklisted entries makes it easy for attackers to evade. We start with the observation that attackers often employ simple modifications (e.g., changing the top-level domain) to URLs. Our system, PhishNet, exploits this observation using two components. In the first component, we propose five heuristics to enumerate simple combinations of known phishing sites to discover new phishing URLs. The second component consists of an approximate matching algorithm that dissects a URL into multiple components that are matched individually against entries in the blacklist. In our evaluation with real-time blacklist feeds, we discovered around 18,000 new phishing URLs from a set of 6,000 new blacklist entries. We also show that our approximate matching algorithm leads to very few false positives (3%) and negatives (5%).
【Keywords】: Internet; computer crime; unsolicited e-mail; Internet; URL; approximate matching algorithm; blacklisting; phishing attack detection; Communications Society; Credit cards; Electronic commerce; Feeds; Humans; Information security; Internet; Law; Resilience; Uniform resource locators
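Both components can be sketched in miniature. The specific TLD list and the matching rule below are assumptions for illustration, not PhishNet's exact heuristics:

```python
from urllib.parse import urlsplit

def tld_variants(url, tlds=("com", "net", "org", "info")):
    """Flavor of heuristic 1: enumerate top-level-domain swaps of a
    known phishing URL to predict unreported variants."""
    scheme, host, path, _, _ = urlsplit(url)
    base = host.rsplit(".", 1)[0]
    return {f"{scheme}://{base}.{t}{path}" for t in tlds}

def component_match(url, blacklist):
    """Flavor of the approximate matcher: flag a URL whose hostname OR
    path matches a blacklisted entry, instead of requiring exact match."""
    h, p = urlsplit(url)[1], urlsplit(url)[2]
    return any(h == urlsplit(b)[1] or (p and p == urlsplit(b)[2])
               for b in blacklist)

variants = tld_variants("http://evil.com/login")
assert "http://evil.org/login" in variants        # predicted variant
bl = ["http://evil.com/login"]
assert component_match("http://evil.com/other", bl)   # same host flagged
assert not component_match("http://benign.org/home", bl)
```

The real system scores component matches to keep the reported 3% false-positive rate; this sketch uses exact per-component equality for brevity.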
【Paper Link】 【Pages】:351-355
【Authors】: Chia-Hung Lin ; Jian-Jhih Kuo ; Ming-Jer Tsai
【Abstract】: Because the global positioning system (GPS) consumes a large amount of power and does not work indoors, many GPS-free information brokerage schemes are proposed for wireless sensor networks. Each of them, however, either cannot guarantee successful data retrieval or demands a great deal of message overhead to replicate the data. In this paper, we propose a GPS-free information brokerage scheme, RDRIB, in which the double-ruling technique is used to replicate and retrieve the data. RDRIB guarantees successful data retrieval, and, in addition, simulations show that RDRIB has good performance in terms of the replication message overhead and the construction message overhead.
【Keywords】: Global Positioning System; wireless sensor networks; Global Positioning System; construction message overhead; data retrieval; reliable GPS-free double-ruling technique; replication message overhead; wireless sensor networks; Communications Society; Global Positioning System; High definition video; Information retrieval; Network topology; Peer to peer computing; Power system reliability; Telecommunication network reliability; Tiles; Wireless sensor networks
【Paper Link】 【Pages】:356-360
【Authors】: Siyuan Chen ; Shaojie Tang ; Minsu Huang ; Yu Wang
【Abstract】: How to efficiently collect sensing data from all sensor nodes is critical to the performance of wireless sensor networks. In this paper, we aim to understand the theoretical limitations of data collection in terms of possible and achievable maximum capacity. Previously, the study of data collection capacity has concentrated only on large-scale random networks. However, in most practical sensor applications, the sensor network is not deployed uniformly and the number of sensors may not be as large as theory assumes. Therefore, it is necessary to study the capacity of data collection in an arbitrary network. In this paper, we derive upper and constructive lower bounds for data collection capacity in arbitrary networks. The proposed data collection method leads to order-optimal performance for any arbitrary sensor network. We also examine the design of data collection under a general graph model and discuss performance implications.
【Keywords】: wireless sensor networks; arbitrary networks; arbitrary wireless sensor networks; data collection capacity; large-scale random networks; Capacitive sensors; Communications Society; Computer science; Interference; Peer to peer computing; Physical layer; Protocols; Sensor phenomena and characterization; USA Councils; Wireless sensor networks
【Paper Link】 【Pages】:361-365
【Authors】: Haiyan Cai ; Xiaohua Jia ; Mo Sha
【Abstract】: Assume sensor deployment follows the Poisson distribution. For a given partial connectivity requirement ρ, 0.5 < ρ < 1, we prove, for a hexagon model, that there exists a critical sensor density λ0, around which the probability that at least 100ρ% of sensors are connected in the network increases sharply from ε to 1-ε within a short interval of sensor density λ. The location of λ0 is at the sensor density where the above probability is about 1/2. We also extend the results to the disk model. Simulations are conducted to confirm the theoretical results.
【Keywords】: Poisson distribution; probability; wireless sensor networks; Poisson distribution; critical sensor density; disk model; hexagon model; large area wireless sensor networks; partial connectivity requirement; probability; sensor deployment; Analytical models; Communications Society; Context modeling; Convergence; H infinity control; Peer to peer computing; Relays; Throughput; Wireless sensor networks
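The quantity being studied, the fraction of sensors in the largest connected component as deployment density varies, can be estimated by Monte Carlo simulation. The sketch below uses uniform points in the unit square and a disk communication radius; parameters are illustrative and this is not the paper's hexagon-model analysis.

```python
import math
import random

def connected_fraction(n, r, seed=1):
    """Fraction of n uniform points in the unit square that lie in the
    largest component of the r-disk communication graph (union-find)."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= r:
                parent[find(i)] = find(j)

    sizes = {}
    for i in range(n):
        root = find(i)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values()) / n

# With the same point set, a larger radius can only merge components:
assert connected_fraction(200, 0.15) >= connected_fraction(200, 0.05)
```

Sweeping the density (here, n or r) and plotting this fraction exhibits the sharp transition around the critical density that the paper proves.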
【Paper Link】 【Pages】:366-370
【Authors】: Yaxiong Zhao ; Jie Wu ; Feng Li ; Sanglu Lu
【Abstract】: Wireless sensor network (WSN) applications require redundant sensors to guarantee fault tolerance. However, the same degree of redundancy is not necessary for multi-hop communication. In this paper, we present a new scheduling method called virtual backbone scheduling (VBS). VBS employs heterogeneous scheduling, where backbone nodes work with duty-cycling to preserve network connectivity, and non-backbone nodes turn off radios to save energy. We formulate a maximum lifetime backbone scheduling (MLBS) problem to maximize the network lifetime using this scheduling model. Because the MLBS problem is NP-hard, two approximation solutions based on the schedule transition graph (STG) and virtual scheduling graph (VSG) are proposed. We also present an iterative local replacement (ILR) scheme as a distributed implementation of VBS. The path stretch problem is analyzed in order to explore the impact of VBS on the network structure. We show, through simulations, that VBS significantly prolongs the network lifetime under extensive conditions.
【Keywords】: fault tolerance; optimisation; wireless sensor networks; NP-hard; VBS; fault tolerance; heterogeneous scheduling; iterative local replacement; maximum lifetime backbone scheduling; maximum lifetime sleep scheduling; network lifetime; schedule transition graph; virtual backbones; virtual scheduling graph; wireless sensor networks; Batteries; Communications Society; Energy consumption; Fault tolerance; Intelligent networks; Peer to peer computing; Redundancy; Sleep; Spine; Wireless sensor networks
【Paper Link】 【Pages】:371-375
【Authors】: Robert Zhong Zhou ; James Peng Zheng ; Jun-Hong Cui ; Zaihan Jiang
【Abstract】: In this paper, we investigate the multi-channel MAC problem in underwater acoustic sensor networks. To reduce hardware cost, only one acoustic transceiver is often preferred on every node. In a single-transceiver multi-channel long-delay underwater network, new hidden terminal problems, namely multi-channel hidden terminal and long-delay hidden terminal (together with the traditional multi-hop hidden terminal problem, we refer to them as "triple hidden terminal problems"), are identified and studied in this paper. Based on our findings, we propose a new MAC protocol, called CUMAC, for long delay multi-channel underwater sensor networks. CUMAC utilizes the cooperation of neighboring nodes for collision detection, and a simple tone device is designed for distributed collision notification, providing better system efficiency while keeping overall cost low. Analytical and simulation results show that CUMAC can greatly improve the system throughput and energy efficiency via effectively solving the complicated triple hidden terminal problems.
【Keywords】: access protocols; underwater acoustic communication; wireless sensor networks; CUMAC; MAC protocol; long-delay hidden terminal; long-delay underwater acoustic sensor networks; multichannel MAC problem; triple hidden terminal problems; Acoustic sensors; Analytical models; Costs; Energy efficiency; Hardware; Media Access Protocol; Spread spectrum communication; Throughput; Transceivers; Underwater acoustics
【Paper Link】 【Pages】:376-380
【Authors】: Zohar Naor ; Sajal K. Das
【Abstract】: A scalable framework for mobile real-time group communication services is developed in this paper. Examples of possible applications of this framework are mobile social networks, mobile conference calls, mobile instant messaging services, and mobile multi-player on-line games. A key requirement for enabling a real-time group communication service is the tight constraint imposed on the call delivery delay. Since establishing such a communication service for a group of independent mobile users under a tight delay constraint is NP-hard, a two-tier architecture is proposed that can meet the delay constraint imposed by the real-time service requirement for many independent mobile clients in a scalable manner. The time and memory complexity associated with the group services provided by the proposed framework are O(N) for each service, where N is the number of nodes being served, while a distributed scheme requires O(N²) for both time and memory complexity.
【Keywords】: communication complexity; mobile communication; NP-hard; call delivery delay; distributed scheme; memory complexity; mobile conference call; mobile instant messaging service; mobile multiplayer online games; mobile real-time group communication service; mobile social network; time complexity; two-tier architecture; Call conference; Communications Society; Computer science; Cost function; Delay; Mathematics; Message service; Mobile communication; Mobile computing; Social network services
【Paper Link】 【Pages】:381-385
【Authors】: Alexander W. Min ; Xinyu Zhang ; Kang G. Shin
【Abstract】: In cognitive radio networks (CRNs), detecting small-scale primary devices, such as wireless microphones (WMs), is a challenging but very important problem that has not yet been addressed well. We identify the data-fusion range as a key factor that enables effective cooperative sensing for detection of small-scale primary devices. In particular, we derive a closed-form expression for the optimal data-fusion range that minimizes the average detection delay. We also observe that the sensing performance is sensitive to the accuracy in estimating the primary's location and transmit-power. Based on these observations, we propose an efficient sensing framework, called DeLOC, that iteratively performs location/transmit-power estimation and dynamic sensor selection for cooperative sensing. Our extensive simulation results in a realistic CRN environment show that DeLOC achieves near-optimal detection performance, while meeting the detection requirements specified in the IEEE 802.22 standard draft.
【Keywords】: cognitive radio; sensor fusion; signal detection; DeLOC; IEEE 802.22 standard; cognitive radio networks; cooperative sensing; dynamic sensor selection; location/transmit power estimation; optimal data fusion; sensing framework; small scale primary detection; spatio temporal fusion; Cognitive radio; Communications Society; Computer networks; Delay estimation; Laboratories; Peer to peer computing; Protocols; Signal detection; TV; Wireless sensor networks
【Paper Link】 【Pages】:386-390
【Authors】: Liang Cai ; Sridhar Machiraju ; Hao Chen
【Abstract】: Existing handover schemes in wireless LANs, 3G/4G networks, and femtocells rely upon protocols involving centralized authentication servers and one or more access points. These protocols are invariably complex and use extensive signaling on the wireless backhaul, since they aim to be efficient (minimal handover latency) without sacrificing robustness. However, the mobile user has little involvement, especially in the so-called context transfer stage; this stage involves the transfer of necessary state to the new access point as well as the enforcement of security goals such as user authentication and single point of access. We propose the incorporation of user capabilities, network-asserted proofs of user identity and access control, as a general mechanism to simplify the context transfer stage. To this end, we have designed CapAuth, a capability-based scheme that has reduced complexity, low overhead, and a high level of fault tolerance, and is general enough to implement a range of security policies.
【Keywords】: 3G mobile communication; 4G mobile communication; authorisation; fault tolerance; protocols; wireless LAN; 3G/4G networks; CapAuth; access control; capability-based handover scheme; centralized authentication servers; fault tolerance; femtocells; network-asserted proofs; protocols; security policies; wireless LAN; Access control; Access protocols; Authentication; Delay; Fault tolerance; Femtocells; Network servers; Robustness; Wireless LAN; Wireless application protocol
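The idea of a network-asserted capability can be sketched as below. This is an assumption-laden illustration, not CapAuth's actual construction: the key setup, token fields, and function names are invented to show how a new access point could verify a user without a round trip to the authentication server.

```python
import hashlib
import hmac

# Illustrative capability check: the network issues a keyed tag over the
# user's identity and rights; any access point holding the (assumed)
# shared key can verify it locally during handover.

NETWORK_KEY = b"shared-by-access-points"   # provisioned out of band (assumed)

def issue_capability(user_id: str, rights: str) -> bytes:
    msg = f"{user_id}|{rights}".encode()
    return hmac.new(NETWORK_KEY, msg, hashlib.sha256).digest()

def verify_capability(user_id: str, rights: str, cap: bytes) -> bool:
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(issue_capability(user_id, rights), cap)

cap = issue_capability("alice", "single-point-of-access")
assert verify_capability("alice", "single-point-of-access", cap)
assert not verify_capability("mallory", "single-point-of-access", cap)
```

A real scheme also needs revocation and replay protection, which is part of why CapAuth's design is more involved than this sketch.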
【Paper Link】 【Pages】:391-395
【Authors】: Chengdi Lai ; Ka-Cheong Leung ; Victor O. K. Li
【Abstract】: In wireless networks, TCP performs unsatisfactorily since packet reordering and random losses may be falsely interpreted as congestive losses. This causes TCP to trigger fast retransmission and fast recovery spuriously, leading to under-utilization of available network resources. In this paper, we propose a novel TCP variant, known as TCP for non-congestive loss (TCP-NCL), to adapt TCP to wireless networks by using more reliable signals of packet loss and network overload for activating packet retransmission and congestion response, separately. TCP-NCL can thus serve as a unified solution for effective congestion control, sequencing control, and loss recovery. Different from the existing unified solutions, the modifications involved in the proposed variant are limited to sender-side TCP only, thereby facilitating possible future wide deployment. The two signals employed are the expirations of two serialized timers. A smart TCP sender model has been developed for optimizing the timer expiration periods. Our simulation studies reveal that TCP-NCL is robust against packet reordering as well as random packet loss while maintaining responsiveness against situations with purely congestive loss.
【Keywords】: packet radio networks; telecommunication congestion control; transport protocols; wireless channels; congestion control; congestion response; congestive losses; network overload; network resources; packet reordering; packet retransmission; random losses; sequencing control; serialized-timer approach; wireless TCP; wireless networks; Algorithm design and analysis; Communications Society; Delay; Electronic mail; Maintenance; Out of order; Robustness; Signal processing; Transport protocols; Wireless networks
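The serialized-timer idea can be sketched as a simple decision function. The timer values and the exact interpretation below are assumptions for illustration; TCP-NCL's sender model optimizes the expiration periods rather than fixing them.

```python
# Sketch of two serialized timers decoupling retransmission from
# congestion response: a reordered or randomly lost packet triggers only
# a retransmission, and congestion control backs off only if the second
# timer, started after the first expires, also expires unacknowledged.

def ncl_action(elapsed, rd_timeout=0.2, cd_timeout=0.6):
    """rd_timeout: retransmission timer; cd_timeout: congestion timer
    (illustrative values, in seconds)."""
    if elapsed < rd_timeout:
        return "wait"
    if elapsed < rd_timeout + cd_timeout:
        return "retransmit"                      # non-congestive loss assumed
    return "retransmit+congestion-response"      # persistent loss: back off

assert ncl_action(0.1) == "wait"
assert ncl_action(0.3) == "retransmit"
assert ncl_action(1.0) == "retransmit+congestion-response"
```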
【Paper Link】 【Pages】:396-400
【Authors】: Di Wang ; Alhussein A. Abouzeid
【Abstract】: In this paper, an information-theoretic framework is developed for characterizing the minimum cost, in bits per second, of tracking the motion state information, such as locations and velocities, of nodes in dynamic networks. The minimum-cost motion-tracking problem is formulated as a rate-distortion problem, where the minimum cost is the minimum rate of information required to identify the network state at a sequence of tracking time instants within a certain distortion bound. The formulation is general in that it can be applied to a variety of mobility models, distortion criteria, and stochastic sequences of tracking time instants. Under the Gauss-Markov mobility model, lower bounds on the information rate of tracking the motion state information of nodes in dynamic networks are derived, where the motion state of a node is 1) the node's locations only, or 2) both its locations and velocities. The results are then used to analyze the protocol overhead of geographic routing protocols in mobile ad hoc networks. The minimum overhead incurred by maintaining the geographic information of the nodes is characterized in terms of node mobility, packet arrival process and distortion bounds. This leads to precise mathematical description of the observation that, given certain state-distortion allowance, protocols aimed at tracking motion state information (such as geographic routing protocols) may not scale beyond a certain level of node mobility.
【Keywords】: Gaussian processes; Markov processes; ad hoc networks; mobile radio; rate distortion theory; routing protocols; Gauss-Markov mobility model; distortion criteria; dynamic networks; geographic routing protocols; minimum cost motion-tracking problem; mobile ad hoc networks; mobility models; rate-distortion problem; stochastic sequences; Bit rate; Costs; Distortion measurement; Gaussian processes; Information rates; Motion analysis; Peer to peer computing; Rate-distortion; Routing protocols; Tracking
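The Gauss-Markov mobility model referenced in the abstract above is easy to simulate. The sketch below is an illustrative 1-D version (parameter values are assumptions, not taken from the paper): velocity follows an AR(1) process whose memory is controlled by alpha, and location is the integrated velocity.

```python
import math
import random

def gauss_markov_velocities(steps, alpha=0.75, mean_v=1.0, sigma=0.5, seed=0):
    """Simulate 1-D Gauss-Markov velocities:
    v_t = alpha*v_{t-1} + (1-alpha)*mean_v + sqrt(1-alpha^2)*w_t,
    with w_t ~ N(0, sigma^2). All parameter values are illustrative."""
    rng = random.Random(seed)
    v = mean_v
    out = []
    for _ in range(steps):
        v = alpha * v + (1 - alpha) * mean_v + math.sqrt(1 - alpha ** 2) * rng.gauss(0, sigma)
        out.append(v)
    return out

velocities = gauss_markov_velocities(1000)
positions = [0.0]
for v in velocities:
    positions.append(positions[-1] + v)  # location = integrated velocity
```

With alpha close to 1 the motion is highly predictable (low information rate to track); with alpha close to 0 it approaches a memoryless random walk, which is the regime where tracking overhead grows.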
【Paper Link】 【Pages】:401-405
【Authors】: Salah-Eddine Elayoubi ; Eitan Altman ; Majed Haddad ; Zwi Altman
【Abstract】: The area of networking games has had a growing impact on wireless networks. This reflects the recognition of the important scaling advantages that service providers can obtain by increasing the autonomy of mobiles in decision making. This may, however, result in inefficiencies that are inherent to equilibria in non-cooperative games. Out of concern for efficiency, centralized protocols keep being considered and compared to decentralized ones. From the point of view of the network architecture, this implies the co-existence of network-centric and terminal-centric radio resource management schemes. Instead of taking sides in the debate among the supporters of each solution, we propose in this paper hybrid schemes in which the wireless users are assisted in their decisions by the network, which broadcasts aggregated load information. We derive the utilities related to the Quality of Service (QoS) perceived by the users and develop a Bayesian framework to obtain the equilibria. Numerical results illustrate the advantages of using our hybrid game framework in an association problem in a network composed of HSDPA and 3G LTE systems.
【Keywords】: Bayes methods; decision making; protocols; radio networks; 3G LTE systems; Bayesian framework; HSDPA; association problem; centralized protocols; decision making; heterogeneous networks; hybrid decision approach; hybrid game framework; network architecture; network centric radio resource management; networking games; noncooperative games; quality of service; terminal centric radio resource management; wireless networks; Base stations; Bayesian methods; Communications Society; Costs; Decision making; Game theory; Multiaccess communication; Quality of service; Resource management; Wireless networks
【Paper Link】 【Pages】:406-410
【Authors】: Alexandre Proutière ; Yung Yi ; Tian Lan ; Mung Chiang
【Abstract】: We consider a widely applicable model of resource allocation where two sequences of events are coupled: on a continuous time axis (t), network dynamics evolve over time. On a discrete time axis [t], certain control laws update resource allocation variables according to some proposed algorithm. The algorithmic updates, together with exogenous events out of the algorithm's control, change the network dynamics, which in turn changes the trajectory of the algorithm, thus forming a loop that couples the two sequences of events. In between the algorithmic updates at [t-1] and [t], the network dynamics continue to evolve randomly as influenced by the previous variable settings at time [t-1]. The standard way used to avoid the subsequent analytic difficulty is to assume the separation of timescales, which in turn unrealistically requires either slow network dynamics or high complexity algorithms. In this paper, we develop an approach that does not require separation of timescales. It is based on the use of stochastic approximation algorithms with continuous-time controlled Markov noise. We prove convergence of these algorithms without assuming timescale separation. This approach is applied to develop simple algorithms that solve the problem of utility-optimal random access in multi-channel, multi-radio wireless networks.
【Keywords】: Markov processes; channel allocation; multi-access systems; radio networks; continuous time controlled Markov noise; exogenous events; multiradio wireless networks; network dynamics; resource allocation; stochastic approximation algorithms; timescale separation; utility optimal random access; Algorithm design and analysis; Approximation algorithms; Communications Society; Convergence; Fading; Iterative algorithms; Resource management; Signal to noise ratio; Stochastic resonance; Wireless networks
【Paper Link】 【Pages】:411-415
【Authors】: Andrés Ferragut ; Fernando Paganini
【Abstract】: Methods for network resource allocation have mainly focused on establishing fairness among the rates of individual flows. However, since multiple TCP connections in one or many paths can serve a common user, we advocate in this paper a user-centric notion of fairness, which we formulate in the Network Utility Maximization (NUM) framework. In particular, we develop control laws for the number of connections identified with a certain user, which can include single-path, multipath or more general aggregates of flows, and prove convergence to the optimal resource allocation. This theory applies directly to the case of cooperative users. In the case where connections are generated exogenously by possibly non-cooperative users, we develop admission control policies that ensure both network stability and user-centric fairness.
【Keywords】: convergence; optimisation; resource allocation; telecommunication control; transport protocols; admission control policies; connection-level control; convergence; cooperative users; multiple TCP connections; network resource allocation; network stability; network utility maximization; optimal resource allocation; user-centric network fairness; Admission control; Aggregates; Communication system control; Communications Society; Convergence; Optimal control; Protocols; Resource management; Stability; Utility programs
【Paper Link】 【Pages】:416-420
【Authors】: Hammad Iqbal ; Taieb Znati
【Abstract】: We investigate the design of a clean-slate control and management plane for data networks using the abstraction of the 4D architecture, utilizing and extending 4D's concept of a logically centralized Decision plane that is responsible for managing network-wide resources. In this paper, a dynamically adaptable algorithm for assigning Data plane devices to a physically distributed Decision plane is presented, which enables a network to operate with minimal configuration and human intervention, while providing optimal convergence and robustness against failures. Our work is especially relevant in the context of ISPs and large geographically dispersed enterprise networks, where robust and scalable network-wide control of a large number of heterogeneous devices is desired.
【Keywords】: failure analysis; fault tolerance; telecommunication network management; telecommunication network reliability; 4D architecture; ISP; clean-slate network management; data networks; distributed Decision plane; dynamically adaptable algorithm; fault-tolerance; heterogeneous devices; logical centralization; network-wide control; overcoming failures; Centralized control; Collaborative work; Communications Society; Fault tolerance; Heuristic algorithms; Network topology; Resource management; Robust control; Robustness; USA Councils
【Paper Link】 【Pages】:421-425
【Authors】: Mingui Zhang ; Bin Liu ; Beichuan Zhang
【Abstract】: False routing announcements are a serious security problem, which can lead to widespread service disruptions in the Internet. A number of detection systems have been proposed and implemented recently; however, it takes time to detect attacks, notify operators, and stop false announcements. Detection systems should therefore be complemented by a mitigation scheme that can protect data delivery before the attack is resolved. We propose such a mitigation scheme, QBGP, which decouples the propagation of a path from the adoption of a path for data forwarding. QBGP does not use suspicious paths to forward data traffic, but still propagates them in the routing system to facilitate attack detection. It can protect data delivery from routing announcements of false sub-prefixes, false origins, false nodes and false links. QBGP incurs overhead only when there are suspicious paths, which occur infrequently in real BGP traces. Results from large-scale simulations and BGP trace analysis show that QBGP is light-weight yet effective, and that it converges faster and incurs less overhead than Pretty Good BGP.
【Keywords】: security of data; telecommunication traffic; Internet; QBGP; attack detection; data traffic; decoupling path propagation; false links; false nodes; false origins; false subprefixes; operator notification; routing announcements; safeguarding data delivery; security problem; Communication system security; Communications Society; Computer science; IEEE news; IP networks; Protection; Routing; Telecommunication traffic; Web and internet services; YouTube
【Paper Link】 【Pages】:426-430
【Authors】: Rade Stanojevic ; Robert Shorten
【Abstract】: In recent years we have witnessed great interest in large distributed computing platforms, also known as clouds. While these systems offer enormous computing power, they are major energy consumers. In existing data centers, CPUs are responsible for approximately half of the energy consumed by the servers. A promising technique for reducing CPU energy consumption is dynamic speed scaling, in which the speed at which the processor runs is adjusted based on demand and performance constraints. In this paper we look at the problem of allocating the demand in a network of processors (each capable of performing dynamic speed scaling) to minimize the global energy consumption/cost subject to a performance constraint. The nonlinear dependence between energy consumption and performance, as well as the high variability in energy prices, results in a nontrivial resource allocation. The problem can be abstracted as a fully distributed convex optimization with a linear constraint. On the theoretical side, we propose two low-overhead fully decentralized algorithms for solving the problem of interest and provide closed-form conditions that ensure stability of the algorithms. We then evaluate the efficacy of the optimal solution using simulations driven by real-world energy prices. Our findings indicate a possible cost reduction of 10-40% compared to power-oblivious 1/N load balancing, for a wide range of load factors.
【Keywords】: energy consumption; energy management systems; microprocessor chips; optimisation; resource allocation; CPU energy consumption; cost reduction; distributed computing platforms; distributed convex optimization; distributed dynamic speed scaling; global energy consumption; load balancing; microprocessor; nontrivial resource allocation; real-world energy prices; Clouds; Communications Society; Control systems; Costs; Energy consumption; Marine vehicles; Network servers; Performance gain; Robustness; Scalability
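The allocation problem described in the abstract above — minimize total energy cost across speed-scaled processors subject to a performance constraint — has a simple closed form once a polynomial power model is assumed. The sketch below is a centralized illustration of that optimum (the paper's contribution is the decentralized algorithms, which this does not reproduce); the cubic power model and the function names are assumptions.

```python
def optimal_split(demand, prices, exponent=3.0):
    """Minimize sum_i prices[i] * x_i**exponent  s.t.  sum_i x_i = demand, x_i >= 0.
    Equalizing marginal costs (prices[i] * k * x_i**(k-1) = lambda for all i)
    gives x_i proportional to prices[i]**(-1/(exponent-1)).
    The cubic (exponent=3) power model is an illustrative assumption."""
    weights = [p ** (-1.0 / (exponent - 1.0)) for p in prices]
    total = sum(weights)
    return [demand * w / total for w in weights]

def energy_cost(loads, prices, exponent=3.0):
    """Total energy cost of a given load split under the same power model."""
    return sum(p * x ** exponent for p, x in zip(prices, loads))

loads = optimal_split(10.0, [1.0, 8.0])  # the cheaper server takes the larger share
```

Comparing `energy_cost(loads, ...)` with a uniform 1/N split shows the kind of gap the paper's 10-40% savings figure refers to.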
【Paper Link】 【Pages】:431-435
【Authors】: Yunghsiang S. Han ; Oluwasoji Omiwade ; Rong Zheng
【Abstract】: We propose a storage-optimal and computation-efficient primitive to spread information from a single data source to a set of storage nodes, allowing recovery from both crash-stop and Byzantine failures. A progressive data retrieval scheme is employed, which retrieves a minimal amount of data from live storage nodes. The scheme adapts the cost of successful data retrieval to the degree of errors in the system. Implementation and evaluation studies demonstrate performance comparable to that of a genie-aided decoding process.
【Keywords】: decoding; error detection codes; Byzantine failures; data retrieval scheme; error-detection code; genie-aided decoding process; live storage nodes; progressive decoding; single data source; survivable distributed storage; Communications Society; Computer crashes; Computer science; Data engineering; Decoding; Distributed computing; Error correction codes; Information retrieval; Peer to peer computing; Polynomials
【Paper Link】 【Pages】:436-440
【Authors】: Urtzi Ayesta ; Olivier Brun ; Balakrishna J. Prabhu
【Abstract】: We investigate the price of anarchy of a load balancing game with K dispatchers. The service rates and holding costs are assumed to depend on the server, and the service discipline is assumed to be processor-sharing at each server. The performance criterion is taken to be the weighted mean number of jobs in the system, or equivalently, the weighted mean sojourn time in the system. For this game, we first show that, for a fixed amount of total incoming traffic, the worst-case Nash equilibrium occurs when each player routes exactly the same amount of traffic, i.e., when the game is symmetric. For this symmetric game, we provide the expression for the loads on the servers at the Nash equilibrium. Using this result we then show that, for a system with two or more servers, the price of anarchy, which is the worst-case ratio of the global cost of the Nash equilibrium to the global cost of the centralized setting, is lower bounded by K/(2√K-1) and upper bounded by √K, independently of the number of servers.
【Keywords】: game theory; queueing theory; telecommunication traffic; Nash equilibrium; anarchy price; holding costs; noncooperative load balancing game; performance criterion; processor-sharing; servers; service rates; symmetric game; total incoming traffic; Communications Society; Computer architecture; Costs; Load management; Nash equilibrium; Network servers; Performance loss; Routing; Scalability; Web server
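The price-of-anarchy bounds quoted in the abstract above are straightforward to evaluate numerically; the helper below simply computes the stated lower bound K/(2√K-1) and upper bound √K for K dispatchers.

```python
import math

def poa_bounds(K):
    """Worst-case price-of-anarchy bounds quoted in the abstract for K
    dispatchers: K/(2*sqrt(K) - 1) <= PoA <= sqrt(K),
    independent of the number of servers (requires K >= 1)."""
    lower = K / (2 * math.sqrt(K) - 1)
    upper = math.sqrt(K)
    return lower, upper
```

At K = 1 both bounds collapse to 1 (a single dispatcher is the centralized setting), and both grow like Θ(√K), so the gap between them stays within a constant factor of 2.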
【Paper Link】 【Pages】:441-445
【Authors】: Jin Li ; Qian Wang ; Cong Wang ; Ning Cao ; Kui Ren ; Wenjing Lou
【Abstract】: As Cloud Computing becomes prevalent, more and more sensitive information is being centralized in the cloud. For the protection of data privacy, sensitive data usually have to be encrypted before outsourcing, which makes effective data utilization a very challenging task. Although traditional searchable encryption schemes allow a user to securely search over encrypted data through keywords and selectively retrieve files of interest, these techniques support only exact keyword search. That is, there is no tolerance of minor typos and format inconsistencies, which are typical of user searching behavior and happen very frequently. This significant drawback makes existing techniques unsuitable in Cloud Computing, as it greatly affects system usability, rendering user searching experiences very frustrating and system efficacy very low. In this paper, for the first time we formalize and solve the problem of effective fuzzy keyword search over encrypted cloud data while maintaining keyword privacy. Fuzzy keyword search greatly enhances system usability by returning the matching files when users' searching inputs exactly match the predefined keywords, or the closest possible matching files based on keyword similarity semantics when exact match fails. In our solution, we exploit edit distance to quantify keyword similarity and develop an advanced technique for constructing fuzzy keyword sets, which greatly reduces the storage and representation overheads. Through rigorous security analysis, we show that our proposed solution is secure and privacy-preserving, while correctly realizing the goal of fuzzy keyword search.
【Keywords】: Internet; cryptography; data privacy; fuzzy set theory; information retrieval; cloud computing; data privacy; data protection; encrypted data; files matching; fuzzy keyword search; user searching behavior; Cloud computing; Cryptography; Data privacy; Fuzzy systems; Impedance matching; Information retrieval; Keyword search; Outsourcing; Protection; Usability
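A wildcard-based fuzzy keyword set of the kind the abstract above alludes to can be sketched as follows for edit distance 1. This is a simplified illustration of the idea (one '*' per variant), not the paper's exact construction, and it operates on plaintext keywords only — the encryption layer is omitted.

```python
def fuzzy_set_ed1(word):
    """Wildcard-based fuzzy keyword set for edit distance 1 (simplified sketch):
    one '*' either replaces a character or is inserted in a gap, so the set has
    exactly 2*len(word) + 2 variants instead of enumerating every concrete
    misspelling over the whole alphabet."""
    variants = {word}
    for i in range(len(word)):
        variants.add(word[:i] + '*' + word[i + 1:])  # substitution/deletion at i
    for i in range(len(word) + 1):
        variants.add(word[:i] + '*' + word[i:])      # insertion between i-1 and i
    return variants

def fuzzy_match(query, keyword):
    """Two words within edit distance 1 always share a wildcard variant;
    a shared variant is the match signal (rare false positives are possible)."""
    return bool(fuzzy_set_ed1(query) & fuzzy_set_ed1(keyword))
```

The storage saving is the point: for a length-L keyword, 2L+2 wildcard entries stand in for the O(L·|alphabet|) concrete strings within edit distance 1.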
【Paper Link】 【Pages】:446-450
【Authors】: Haiquan (Chuck) Zhao ; Cathy H. Xia ; Zhen Liu ; Donald F. Towsley
【Abstract】: Many emerging information processing applications require applying various fork and join type operations such as correlation, aggregation, and encoding/decoding to data streams in real-time. Each operation will require one or more simultaneous input data streams and produce one or more output streams, where the processing may shrink or expand the data rates upon completion. Multiple tasks can be co-located on the same server and compete for limited resources. Effective in-network processing and resource management in a distributed heterogeneous environment is critical to achieving better scalability and provision of quality of service. In this paper, we study the distributed resource allocation problem for a synchronous fork and join processing network, with the goal of achieving the maximum total utility of output streams. Using primal and dual based optimization techniques, we propose several decentralized iterative algorithms to solve the problem, and design protocols that implement these algorithms. These algorithms have different strengths in practical implementation and can be tailored to take full advantage of the computing capabilities of individual servers. We show that our algorithms guarantee optimality and demonstrate through simulation that they can adapt quickly to dynamically changing environments.
【Keywords】: distributed processing; iterative methods; optimisation; quality of service; resource allocation; decentralized iterative algorithm; distributed heterogeneous environment; distributed resource allocation; dual based optimization; in-network processing; primal based optimization; quality of service; resource management; synchronous fork-and-join processing network; Algorithm design and analysis; Decoding; Design optimization; Encoding; Information processing; Iterative algorithms; Network servers; Quality of service; Resource management; Scalability
【Paper Link】 【Pages】:451-455
【Authors】: Shuang Yang ; Xin Wang ; Baochun Li
【Abstract】: The use of network coding has been shown to improve throughput in input-queued multicast switches, but not without costs of computational complexity and delays. In this paper, we investigate the design of efficient online network coding algorithms in a switch with multicast traffic. We present Haste, an online opportunistic coding algorithm designed to streamline the computation when network coding is involved in a network switch with multicast traffic. Haste enjoys the advantage of incurring no decoding delays, which reduces packet delays compared with existing network coding algorithms on switches. We have conducted extensive simulations to show the efficiency of Haste, and implemented an emulation framework to emulate input-queued switches using asynchronous network sockets. Our emulation framework is able to process actual UDP traffic using Haste with online network coding, and to show convincing evidence that Haste is suitable for practical use, and is beneficial in multicast switches.
【Keywords】: multicast communication; network coding; queueing theory; Haste; asynchronous network sockets; input-queued switches; multicast switch; multicast traffic; online opportunistic coding algorithm; practical online network coding; Algorithm design and analysis; Computational complexity; Computational efficiency; Emulation; Multicast algorithms; Network coding; Switches; Telecommunication traffic; Throughput; Traffic control
【Paper Link】 【Pages】:456-460
【Authors】: Ori Rottenstreich ; Isaac Keslassy
【Abstract】: Designers of TCAMs (Ternary CAMs) for packet classification deal with unpredictable sets of rules, resulting in highly variable rule expansions, and rely on heuristic encoding algorithms with no reasonable expansion guarantees. In this paper, given several types of rules, we provide new upper bounds on the TCAM worst-case rule expansions. In particular, we prove that a W-bit range can be encoded using W TCAM entries, improving upon the previously-known bound of 2W-5. We also propose a modified TCAM architecture that uses additional logic to significantly reduce the rule expansions, both in the worst case and in experiments with real-life classification databases.
【Keywords】: content-addressable storage; pattern classification; Ternary CAM; heuristic encoding algorithms; packet classification; real life classification databases; rule expansions; worst case TCAM rule expansion; Cams; Communications Society; Databases; Encoding; Energy consumption; Filtering; Heuristic algorithms; Logic; Routing; Upper bound
【Paper Link】 【Pages】:461-465
【Authors】: Rami Cohen ; Dan Raz
【Abstract】: In recent years, hardware-based packet classification has become an essential component in many networking devices. Ternary Content-Addressable Memories (TCAMs) are one of the most popular solutions in this domain, allowing the packet header to be compared in parallel against a large set of rules and the first match to be retrieved. However, using TCAM to match a range of values is much more problematic and dramatically reduces the cost effectiveness of the solution. In this paper we study ways to use simple built-in TCAM mechanisms in order to increase the efficiency of range coverage. While current techniques have a worst-case expansion ratio of 2W-4, we present an efficient algorithm that encodes any range with at most W TCAM entries (where W is the number of bits), without using additional processing, extra bits, or any external encoding. The same paradigm can be applied to multiple range rules as well, resulting in significant improvement over currently known techniques. Moreover, our simulation results indicate that these techniques can be used to reduce the actual TCAM size of hardware networking devices under realistic scenarios.
【Keywords】: content-addressable storage; pattern classification; telecommunication traffic; TCAM; networking devices; packet classification; packet header; range classification; ternary content-addressable memories; Communications Society; Computer science; Content based retrieval; Costs; Encoding; Hardware; Packet switching; Random access memory; Routing; Switches
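For context, the baseline that both of these TCAM papers improve on is classic prefix expansion, which can need up to 2W-2 ternary entries for a W-bit range. The sketch below is that textbook baseline, not either paper's W-entry algorithm; the helper names are illustrative.

```python
def range_to_prefixes(lo, hi, width):
    """Classic prefix expansion of the integer range [lo, hi] over `width` bits
    into ternary prefixes ('*' = don't care). Worst case needs 2*width - 2
    entries, e.g. the range [1, 2**width - 2]."""
    res = []

    def split(p_lo, p_hi):
        if p_lo > hi or p_hi < lo:
            return
        if lo <= p_lo and p_hi <= hi:
            # aligned dyadic block entirely inside the range -> one prefix entry
            stars = (p_hi - p_lo + 1).bit_length() - 1
            bits = format(p_lo >> stars, '0{}b'.format(width - stars)) if stars < width else ''
            res.append(bits + '*' * stars)
            return
        mid = (p_lo + p_hi) // 2
        split(p_lo, mid)
        split(mid + 1, p_hi)

    split(0, (1 << width) - 1)
    return res

def matches(entry, value, width):
    """Check a ternary entry against a concrete header value."""
    bits = format(value, '0{}b'.format(width))
    return all(e == '*' or e == b for e, b in zip(entry, bits))
```

For example, the range [1, 14] over 4 bits expands to the worst-case 2·4-2 = 6 entries, which is exactly the blow-up the W-entry encodings avoid.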
【Paper Link】 【Pages】:466-470
【Authors】: Kaishun Wu ; Haoyu Tan ; Hoilun Ngan ; Lionel M. Ni
【Abstract】: The IEEE 802.15.4 standard specifies physical layer (PHY) and medium access control (MAC) sublayer protocols for low-rate, low-power communication applications. In this protocol, every 4-bit symbol is encoded into a sequence of 32 chips that are actually transmitted over the air. The 32 chips as a whole are also called a pseudo-noise code (PN-code). Due to complex channel conditions such as attenuation and interference, the transmitted PN-code will often be received with some of its chips corrupted. In this paper, we conduct a systematic analysis of these errors at the chip level. We find that there are notable error patterns corresponding to different cases. Recognizing these patterns enables us to identify the channel condition in great detail. We believe that understanding what happened to the transmission in our setup can potentially benefit channel coding, routing, and error correction protocol design.
【Keywords】: access protocols; channel coding; error correction codes; personal area networks; pseudonoise codes; routing protocols; IEEE 802.15.4 standard; channel coding; chip error pattern analysis; error correction protocol design; low-power communication; low-rate communication; medium access control sublayer protocols; physical layer; pseudo-noise code; routing protocol; symbol encoding; Access protocols; Attenuation; Channel coding; Communication standards; Error analysis; Interference; Media Access Protocol; Pattern analysis; Pattern recognition; Physical layer
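Chip-level decoding of the kind analyzed above amounts to nearest-codeword search in Hamming distance over the 32-chip sequences. The sketch below uses an illustrative repetition codebook (each 4-bit symbol's nibble repeated 8 times), not the actual IEEE 802.15.4 chip sequences; it shows how a few corrupted chips can still decode to the right symbol.

```python
def hamming(a, b):
    """Number of chip positions in which two sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def nearest_symbol(received, codebook):
    """Map a received 32-chip sequence to the symbol whose code is closest in
    Hamming distance -- the chip-level view the abstract analyzes."""
    return min(codebook, key=lambda sym: hamming(received, codebook[sym]))

# Illustrative codebook: repeat each symbol's 4-bit pattern 8 times, so any two
# codes differ in at least 8 chips and up to 3 chip errors always decode correctly.
codebook = {sym: [(sym >> (i % 4)) & 1 for i in range(32)] for sym in range(16)}

# corrupt three chips of symbol 5's code and decode
tx = list(codebook[5])
for pos in (3, 17, 30):
    tx[pos] ^= 1
```

Logging which chip positions flip, rather than only whether `nearest_symbol` succeeds, is the kind of per-chip observation the paper builds its error-pattern analysis on.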
【Paper Link】 【Pages】:471-475
【Authors】: Novella Bartolini ; Tiziana Calamoneri ; Thomas F. La Porta ; Simone Silvestri
【Abstract】: In this paper we propose GREASE, a distributed algorithm to deploy mobile sensors in an unknown environment with obstacles and field asperities that may cause sensing anisotropies and non-uniform device capabilities. These aspects are not taken into account by traditional approaches to the problem of mobile sensor self-deployment. GREASE works by realizing a grid-shaped deployment throughout the Area of Interest (AoI) and adaptively refining the grid to find new sensor positions that cover the target area more precisely in the zones where devices experience reduced movement, sensing and communication capabilities. We give bounds on the number of sensors necessary to cover an AoI with obstacles and noisy zones. Simulations show that GREASE provides a fast deployment with precise movements and no oscillations, with moderate energy consumption.
【Keywords】: distributed algorithms; sensor placement; wireless sensor networks; GREASE works; area of interest; distributed algorithm; grid-shaped deployment; mobile sensor self-deployment; unknown fields; Anisotropic magnetoresistance; Communications Society; Distributed algorithms; Mobile communication; Noise reduction; Partitioning algorithms; Sensor phenomena and characterization; Solid modeling; Tiles; USA Councils
【Paper Link】 【Pages】:476-480
【Authors】: Kai Zeng ; Zhenyu Yang ; Wenjing Lou
【Abstract】: Two major factors that limit the throughput in multi-hop wireless networks are the unreliability of wireless transmissions and co-channel interference. One promising technique that combats lossy wireless transmissions is opportunistic routing (OR). OR involves multiple forwarding candidates to relay packets by taking advantage of the broadcast nature and spatial diversity of the wireless medium. Furthermore, recent advances in multi-radio multi-channel transmission technology allow more concurrent transmissions in the network and show the potential to substantially improve system capacity. However, the performance of OR in multi-radio multi-channel systems is still unknown, and the methodology used to study the performance of traditional routing (TR) cannot be directly applied to OR. In this paper, we present our research on computing an end-to-end throughput bound of OR in multi-radio multi-channel systems. We formulate the capacity of OR as a linear programming (LP) problem which jointly solves the radio-channel assignment and transmission scheduling. Leveraging our analytical model, we gain the following insights into OR: 1) OR can achieve better performance than TR under different radio/channel configurations; however, in particular scenarios, TR is preferable to OR; 2) OR can achieve comparable or even better performance than TR by using less radio resource; 3) for OR, the throughput gained from increasing the number of potential forwarding candidates becomes marginal.
【Keywords】: cochannel interference; linear programming; radio networks; telecommunication network routing; broadcast nature; cochannel interference; linear programming problem; multiradio multichannel multihop wireless networks; opportunistic routing; radiochannel assignment; spatial diversity; transmission scheduling; wireless medium; wireless transmissions; Broadcasting; Interchannel interference; Linear programming; Processor scheduling; Propagation losses; Relays; Routing; Spread spectrum communication; Throughput; Wireless networks
【Paper Link】 【Pages】:481-485
【Authors】: Swaminathan Sankararaman ; Alon Efrat ; Srinivasan Ramasubramanian ; Pankaj K. Agarwal
【Abstract】: Multi-channel wireless networks are increasingly being employed as infrastructure networks, e.g. in metro areas. Nodes in these networks frequently employ directional antennas to improve spatial throughput. In such networks, given a source and destination, it is of interest to compute an optimal path and a channel assignment on every link of the path such that the path bandwidth is the same as the link bandwidth, and such that the path satisfies the constraint that no two consecutive links on the path are assigned the same channel, referred to as the "Channel Discontinuity Constraint" (CDC). CDC-paths are also quite useful for TDMA systems, where consecutive links along a path are preferably assigned different time slots. This paper contains several contributions. We first present an O(N^2) distributed algorithm for discovering the shortest CDC-path between a given source and destination. For use in wireless networks, we explain how spatial properties can be used to dramatically expedite the algorithm. This improves the running time of the O(N^3) centralized algorithm of Ahuja et al. for finding the minimum-weight CDC-path. Our second result is a generalized t-spanner for CDC-paths: for any ε>0, we show how to construct a sub-network containing only O(N/ε) edges such that the length of the shortest CDC-path between arbitrary sources and destinations increases by a factor of at most 1/(1-2 sin(ε/2))^2. This scheme can be implemented in a distributed manner with a message complexity of O(n log n), and it is highly dynamic, so additions and deletions of nodes are easily handled. An important conclusion concerns the case in which directional antennas are used: it is then enough to consider only the two closest nodes in each cone.
【Keywords】: computational complexity; directive antennas; distributed algorithms; radio networks; telecommunication network routing; time division multiple access; TDMA system; centralized algorithm; channel assignment; channel discontinuity constraint; channel-discontinuity-constraint routing; directional antennas; distributed algorithm; infrastructure networks; link bandwidth; message complexity; multichannel wireless networks; optimal path; path bandwidth; shortest CDC-path; Bandwidth; Communications Society; Computer networks; Directional antennas; Interference constraints; Peer to peer computing; Routing; Throughput; Time division multiple access; Wireless networks
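A simple centralized way to compute shortest CDC-paths is Dijkstra over (node, last-channel) states, so that each relaxation can forbid reusing the previous hop's channel. The sketch below illustrates the constraint itself; it is not the paper's O(N^2) distributed algorithm or its spanner construction, and the edge-list format is an assumption.

```python
import heapq
import itertools

def shortest_cdc_path(edges, src, dst):
    """Shortest path under the Channel Discontinuity Constraint: consecutive
    links may not use the same channel. Runs Dijkstra over (node, last_channel)
    states. edges: iterable of (u, v, channel, weight), treated as undirected.
    Returns the minimum cost, or None if no CDC-path exists."""
    adj = {}
    for u, v, ch, w in edges:
        adj.setdefault(u, []).append((v, ch, w))
        adj.setdefault(v, []).append((u, ch, w))
    tie = itertools.count()  # tiebreaker so the heap never compares nodes/channels
    heap = [(0, next(tie), src, None)]
    best = {}
    while heap:
        cost, _, node, last = heapq.heappop(heap)
        if node == dst:
            return cost
        if best.get((node, last), float('inf')) <= cost:
            continue
        best[(node, last)] = cost
        for nxt, ch, w in adj.get(node, []):
            if ch != last:  # CDC: every consecutive hop must switch channels
                heapq.heappush(heap, (cost + w, next(tie), nxt, ch))
    return None

# a->b only on channel 1; b->c on channel 1 (forbidden after a->b) or channel 2
edges = [('a', 'b', 1, 1), ('b', 'c', 1, 1), ('b', 'c', 2, 5)]
```

Note how the cheap b->c link on channel 1 is unusable after arriving at b on channel 1, so the CDC-shortest a->c path costs 6 rather than 2.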
【Paper Link】 【Pages】:486-490
【Authors】: Ilias Leontiadis ; Paolo Costa ; Cecilia Mascolo
【Abstract】: Nowadays, the navigation systems available on cars are becoming more and more sophisticated. They greatly improve the experience of drivers and passengers by enabling them to receive map and traffic updates, news feeds, advertisements, media files, etc. Unfortunately, the bandwidth available to each vehicle with current technology is severely limited. There have been many reports on the inability of 3G networks to cope with large file downloads, especially in dense and mobile settings. A possible alternative is provided by WiFi access points (APs) that are being installed in several countries along the main routes and in popular areas. Although this approach significantly increases the available bandwidth, it still does not provide a fully satisfactory solution due to the limited transmission range (usually a few hundred meters). In this paper we present a novel routing protocol, based on opportunistic vehicle-to-vehicle communication, to enable efficient multi-hop routing between mobile vehicles and APs. Unlike prior work, this protocol fully supports two-way communication, i.e., the traditional vehicle-to-AP direction as well as the more challenging AP-to-vehicle direction. We leverage the information offered by the navigation system, in terms of final destination and path, to (i) route packets to the closest AP and (ii) route replies back to the moving vehicle efficiently.
【Keywords】: 3G mobile communication; mobile radio; routing protocols; 3G networks; WiFi access points; access point connectivity; mobile vehicles; multihop routing capabilities; navigation systems; opportunistic routing; routing protocol; vehicular networks; Bandwidth; Communications Society; Costs; Internet; Mobile communication; Navigation; Road vehicles; Routing; Telecommunication traffic; Vehicle safety
【Paper Link】 【Pages】:491-495
【Authors】: Partha Dutta ; Vivek Mhatre ; Debmalya Panigrahi ; Rajeev Rastogi
【Abstract】: Long-distance multi-hop wireless networks have been used in recent years to provide connectivity to rural areas. The salient features of such networks include TDMA channel access, nodes with multiple radios, and point-to-point long-distance wireless links established using high-gain directional antennas mounted on high towers. It has been demonstrated previously that in such network architectures, nodes can transmit concurrently on multiple radios, as well as receive concurrently on multiple radios. However, concurrent transmission on one radio, and reception on another radio causes interference. Under this scheduling constraint, given a set of source-destination demand rates, we consider the problem of satisfying the maximum fraction of each demand (also called the maximum concurrent flow problem). We give a novel joint routing and scheduling scheme for this problem, based on linear programming and graph coloring. We analyze our algorithm theoretically and prove that at least 50% of a satisfiable set of demands is satisfied by our algorithm for most practical networks (with maximum node degree at most 5).
【Keywords】: directive antennas; graph colouring; linear programming; radiofrequency interference; scheduling; telecommunication network routing; wireless mesh networks; TDMA channel access; directional antennas; graph coloring; linear programming; maximum concurrent flow problem; multihop wireless networks; multiple radios; point-to-point long distance wireless links; reception interference; routing; scheduling constraint; Algorithm design and analysis; Directional antennas; Interference; Mesh networks; Peer to peer computing; Poles and towers; Routing; Scheduling algorithm; Spread spectrum communication; Wireless networks
【Paper Link】 【Pages】:496-500
【Authors】: Mohammad Naghshvar ; Tara Javidi
【Abstract】: This paper considers the problem of routing packets across a multi-hop network consisting of multiple traffic sources and wireless links with stochastic reliability, while ensuring bounded expected delay. Each packet transmission can be overheard by a random subset of receiver nodes, among which the next relay is selected opportunistically. The main challenge in the design of minimum-delay routing policies is balancing the trade-off between routing packets along the shortest paths to the destination and distributing traffic across the network. Opportunistic variants of shortest path routing may, under heavy traffic, result in severe congestion and unbounded delay, while opportunistic variants of backpressure, which ensure a bounded expected delay, are known to exhibit poor delay performance at low to medium traffic loads. Combining important aspects of shortest path routing with those of backpressure routing, this paper provides an opportunistic routing policy with congestion diversity (ORCD). ORCD uses a measure of draining time to opportunistically identify and route packets along paths with low expected overall congestion. ORCD was previously proved to ensure a bounded expected delay for all networks and under any admissible traffic (without any knowledge of traffic statistics). This paper proposes practical implementations and discusses the criticality of various aspects of the algorithm. Furthermore, the expected delay encountered by packets in the network under ORCD is compared against known existing routing policies via simulations, where substantial improvements are observed.
【Keywords】: delays; graph theory; radiocommunication; stochastic processes; telecommunication network reliability; telecommunication network routing; telecommunication traffic; backpressure routing; congestion diversity; minimum-delay routing policies; opportunistic routing; routing packets; shortest path routing; stochastic reliability; wireless links; wireless multi-hop networks; Delay; Diversity reception; Relays; Routing; Spread spectrum communication; Statistics; Stochastic processes; Telecommunication traffic; Time measurement; Traffic control
【Paper Link】 【Pages】:501-505
【Authors】: Miao Zhao ; Yuanyuan Yang
【Abstract】: Recent advances have shown a great potential of anchor-based mobile data gathering in wireless sensor networks. In such a scheme, during each periodic data gathering tour, the mobile collector stays at each anchor point for a period of sojourn time and collects data from nearby sensors via multi-hop communications. We provide an optimization-based distributed algorithm for such data gathering in this paper. We adopt a suitably defined network utility function to characterize the data gathering performance, and formalize the problem as a network utility maximization problem under the constraint of guaranteed network lifetime. To solve the problem efficiently, we decompose it into two sets of subproblems and solve them in a distributed manner, which facilitates scalable implementation. Finally, we provide numerical results to demonstrate the convergence of the proposed distributed algorithm.
【Keywords】: distributed algorithms; mobile radio; optimisation; wireless sensor networks; mobile collector; mobile data gathering; multihop communication; network utility maximization problem; optimization based distributed algorithm; periodic data gathering tour; wireless sensor networks; Batteries; Convergence of numerical methods; Distributed algorithms; Mobile communication; Mobile computing; Relays; Routing; Sensor phenomena and characterization; Utility programs; Wireless sensor networks
【Paper Link】 【Pages】:506-510
【Authors】: Junbin Liang ; Jianxin Wang ; Jiannong Cao ; Jianer Chen ; Mingming Lu
【Abstract】: Data gathering is a broad research area in wireless sensor networks. The basic operation in sensor networks is the systematic gathering and transmission of sensed data to a sink for further processing. The lifetime of the network is defined as the time until the first node depletes its energy. A key challenge in data gathering without aggregation is to balance the energy consumption among nodes so as to maximize the network lifetime. We formalize this challenge as constructing a min-max-weight spanning tree, in which the bottleneck nodes have the least number of descendants relative to their energy. However, the problem is NP-complete. An O(log n / log log n)-approximation algorithm, MITT, is proposed to solve the problem without location information. Simulation results show that MITT achieves longer network lifetime than existing algorithms.
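The objective being minimized can be evaluated directly on a candidate tree. The sketch below is a hypothetical helper, not the paper's MITT algorithm: it scores a gathering tree by its bottleneck load, taking each node's load as (descendants + 1) divided by its residual energy, a proxy for how fast the node drains when it relays every descendant's packet.

```python
def bottleneck_load(parent, energy):
    """Score a gathering tree given as a parent map (root maps to None)
    and per-node residual energy.  Each non-root node's load is
    (number of descendants + 1) / energy; the tree sought by a
    min-max-weight construction is the one minimizing the maximum
    such load."""
    desc = {v: 0 for v in parent}
    # count descendants by walking each node up to the root
    for v in parent:
        u = parent[v]
        while u is not None:
            desc[u] += 1
            u = parent[u]
    return max((desc[v] + 1) / energy[v]
               for v in parent if parent[v] is not None)
```

A chain 0 ← 1 ← 2 with unit energies scores 2.0 (node 1 relays for node 2), while the star 0 ← {1, 2} scores 1.0, so the star is the better gathering tree here.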
【Keywords】: computational complexity; energy consumption; minimax techniques; trees (mathematics); wireless sensor networks; NP-complete; data gathering; energy consumption; maximum lifetime tree; min-max-weight spanning tree; network lifetime; systematic gathering; systematic transmission; wireless sensor networks; Communications Society; Computer networks; Computer science; Data engineering; Energy consumption; Information science; Peer to peer computing; Protocols; USA Councils; Wireless sensor networks
【Paper Link】 【Pages】:511-515
【Authors】: Dengpan Zhou ; Jie Gao
【Abstract】: We study the problem of maintaining group communication between m mobile agents, tracked and helped by n static networked sensors. We develop algorithms to maintain an O(lg n)-approximation to the minimum Steiner tree of the mobile agents such that the maintenance message cost is on average O(lg n) per hop an agent moves. The key idea is to extract a 'hierarchical well-separated tree (HST)' on the sensor nodes such that the tree distance approximates the sensor network hop distance by a factor of O(lg n). We then prove that maintaining the subtree of the mobile agents on the HST uses logarithmic messages per hop of movement. With the HST we can also maintain an O(lg n)-approximate k-center for the mobile agents with the same message cost. Both the minimum Steiner tree and the k-center problems are NP-hard, and our algorithms are the first efficient algorithms for maintaining approximate solutions in a distributed setting.
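For the k-center part, the classic farthest-point heuristic of Gonzalez gives a flavor of what a static approximation looks like. This generic sketch is not the paper's HST-based method; the paper's contribution is maintaining such approximations cheaply as the agents move, which a static heuristic does not attempt.

```python
import math

def k_centers(points, k):
    """Farthest-point heuristic for k-center: start from an arbitrary
    point and repeatedly add the point farthest from the centers chosen
    so far.  A classic 2-approximation for static inputs."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    centers = [points[0]]
    while len(centers) < k:
        far = max(points, key=lambda p: min(dist(p, c) for c in centers))
        centers.append(far)
    return centers
```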
【Keywords】: computational complexity; mobile agents; telecommunication computing; trees (mathematics); wireless sensor networks; NP-hard algorithms; hierarchical well-separated tree; k-center; logarithmic messages per hop movement; minimum Steiner tree; mobile agents; sensor network; sensor nodes; static networked sensors; Clustering algorithms; Communications Society; Computer science; Costs; Data structures; Kinetic theory; Mobile agents; Mobile communication; Peer to peer computing; Space exploration
【Paper Link】 【Pages】:516-524
【Authors】: Amir Epstein ; Dean H. Lorenz ; Ezra Silvera ; Inbar Shapira
【Abstract】: Cloud Computing in general, and Virtualized Infrastructure Provisioning in particular, are significant trends with the potential to increase agility and lower IT costs. An emerging cloud service is a virtual server shop that allows cloud customers to order virtual appliances to be delivered virtually on the cloud. As in physical shops, customers want to customize the ordered products, e.g., have them pre-installed with their desired applications and pre-configured. Global cloud providers need to create customized virtual-server disk images and deliver them on time to meet customer reservations and service levels. This framework creates a new flavor of content distribution over the web, where large virtual server images need to be delivered to the target compute farms (either on the global cloud or on customer private clouds). In order to reduce provisioning time and meet reservation deadlines, one approach is to stage images on storage near the customer. This introduces an optimization problem of finding an optimal staging schedule, according to network bandwidth, pending reservations schedule, and customer value. This problem has some similarities to cache pre-filling and production-line scheduling. It combines scheduling, bandwidth considerations, and storage capacity constraints. In this paper we study the fundamental properties of this approach and formalize several flavors of the related optimization problem. We prove useful properties of the problem and then use those properties to provide exact efficient algorithms to solve it. We also derive efficient approximate solutions with proven error bounds.
【Keywords】: Internet; optimisation; scheduling; virtual reality; cache pre-filling; cloud computing; customer value; global infrastructure cloud service; network bandwidth; optimal staging schedule; optimization problem; pending reservations schedule; production-line scheduling; virtual appliance content distribution; virtual server shop; virtualized infrastructure provisioning; Bandwidth; Cloud computing; Costs; Distributed computing; Home appliances; Image storage; Processor scheduling; Runtime; Testing; Virtual machining
【Paper Link】 【Pages】:525-533
【Authors】: Cong Wang ; Qian Wang ; Kui Ren ; Wenjing Lou
【Abstract】: Cloud Computing is the long-dreamed vision of computing as a utility, where users can remotely store their data in the cloud so as to enjoy on-demand high-quality applications and services from a shared pool of configurable computing resources. By data outsourcing, users can be relieved from the burden of local data storage and maintenance. However, the fact that users no longer have physical possession of their possibly large volumes of outsourced data makes data integrity protection in Cloud Computing a very challenging and potentially formidable task, especially for users with constrained computing resources and capabilities. Thus, enabling public auditability for cloud data storage security is of critical importance so that users can resort to an external audit party to check the integrity of outsourced data when needed. To securely introduce an effective third party auditor (TPA), the following two fundamental requirements must be met: 1) the TPA should be able to efficiently audit the cloud data storage without demanding a local copy of the data, and introduce no additional on-line burden to the cloud user; 2) the third party auditing process should bring in no new vulnerabilities towards user data privacy. In this paper, we utilize and uniquely combine the public key based homomorphic authenticator with random masking to achieve a privacy-preserving public cloud data auditing system, which meets all the above requirements. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signatures to extend our main result into a multi-user setting, where the TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis shows the proposed schemes are provably secure and highly efficient.
【Keywords】: Internet; auditing; data integrity; public key cryptography; security of data; bilinear aggregate signature; cloud computing; configurable computing resources; data integrity; data outsourcing; data storage security; on-demand high quality applications; privacy preserving public auditing; public key based homomorphic authenticator; random masking; third party auditor; Aggregates; Cloud computing; Computer vision; Data privacy; Data security; Memory; Outsourcing; Physics computing; Protection; Public key
【Paper Link】 【Pages】:534-542
【Authors】: Shucheng Yu ; Cong Wang ; Kui Ren ; Wenjing Lou
【Abstract】: Cloud computing is an emerging computing paradigm in which resources of the computing infrastructure are provided as services over the Internet. As promising as it is, this paradigm also brings forth many new challenges for data security and access control when users outsource sensitive data for sharing on cloud servers, which are not within the same trusted domain as data owners. To keep sensitive user data confidential against untrusted servers, existing solutions usually apply cryptographic methods by disclosing data decryption keys only to authorized users. However, in doing so, these solutions inevitably introduce a heavy computation overhead on the data owner for key distribution and data management when fine-grained data access control is desired, and thus do not scale well. The problem of simultaneously achieving fine-grainedness, scalability, and data confidentiality of access control actually still remains unresolved. This paper addresses this challenging open issue by, on one hand, defining and enforcing access policies based on data attributes, and, on the other hand, allowing the data owner to delegate most of the computation tasks involved in fine-grained data access control to untrusted cloud servers without disclosing the underlying data contents. We achieve this goal by exploiting and uniquely combining techniques of attribute-based encryption (ABE), proxy re-encryption, and lazy re-encryption. Our proposed scheme also has salient properties of user access privilege confidentiality and user secret key accountability. Extensive analysis shows that our proposed scheme is highly efficient and provably secure under existing security models.
【Keywords】: Internet; authorisation; cryptography; Internet; attribute-based encryption techniques; cloud computing; cloud servers; cryptographic methods; data confidentiality; data decryption keys; data management; data security; fine-grained data access control; lazy reencryption; proxy reencryption; security models; Access control; Business; Cloud computing; Cryptography; Data security; Medical services; Memory; Service oriented architecture; Web and internet services; Web server
【Paper Link】 【Pages】:543-551
【Authors】: Hyoil Kim ; Kang G. Shin
【Abstract】: In IEEE 802.22 Wireless Regional Area Networks (WRANs), each Base Station (BS) solves a complex resource allocation problem of simultaneously determining the channel to reuse, power for adaptive coverage, and Consumer Premise Equipments (CPEs) to associate with, while maximizing the total downstream capacity of CPEs. Although joint power and channel allocation is a classical problem, resource allocation in WRANs faces two unique challenges that have not yet been addressed: (1) the presence of small-scale incumbents such as wireless microphones (WMs), and (2) asymmetric interference patterns between BSs using omnidirectional antennas and CPEs using directional antennas. In this paper, we capture this asymmetry in upstream/downstream communications to propose an accurate and realistic WRAN-WM coexistence model that increases spatial reuse of TV spectrum while protecting small-scale incumbents. Based on the proposed model, we formulate the resource allocation problem as a mixed-integer nonlinear program (MINLP), which is NP-hard. To solve the problem in real time, we propose a suboptimal algorithm based on the Genetic Algorithm (GA), and extend the basic GA to a fully-distributed variant (dGA) that distributes computational cost over the network and achieves scalability via local cooperation between neighboring BSs. Using extensive simulation, the proposed dGA is shown to achieve 99.4-99.8% of the optimal solution, while reducing the computational cost significantly.
【Keywords】: channel allocation; distributed algorithms; genetic algorithms; integer programming; nonlinear programming; radio spectrum management; IEEE 802.22 WRAN; NP hard problem; TV spectrum; adaptive coverage; asymmetry aware real time distributed joint resource allocation; base station; channel reuse; consumer premise equipments; genetic algorithm; mixed integer nonlinear programming; spatial reuse; suboptimal algorithm; wireless regional area networks; Base stations; Channel allocation; Computational efficiency; Directional antennas; Directive antennas; Dissolved gas analysis; Interference; Microphones; Resource management; TV
【Paper Link】 【Pages】:552-560
【Authors】: Jin Jin ; Baochun Li
【Abstract】: WiMAX with femto cells is a cost-effective next generation broadband wireless communication system. Cognitive Radio (CR) has recently emerged as a promising technology to improve spectrum utilization by allowing dynamic spectrum access. There are large potential benefits in applying the CR technique to WiMAX with femto cells, which are barely explored in the literature. In this paper, we propose a novel cognitive WiMAX architecture with femto cells, where the base station and users are equipped with CRs and intelligently adjust power, channel, and other resources to accommodate the entire network ecosystem. In this new design, we develop an optimization framework for location-aware cooperative resource management, by jointly employing multi-hop cooperative communication, power control, channel assignment, primary user protection, buffer management, and fairness, and incorporating user, channel, and cooperative diversities. To achieve optimality, it is designed based on stochastic Lyapunov optimization, aiming to take advantage of the radio flexibility and fully utilize the spectrum. Evaluated by rigorous analysis and extensive simulations, our resource management protocol is near-optimal with closed-form bounds, with which cognitive WiMAX achieves substantial performance improvement.
【Keywords】: WiMax; channel allocation; cognitive radio; optimisation; resource allocation; spread spectrum communication; telecommunication network management; base station; buffer management; channel assignment; cognitive WiMAX architecture; cognitive radio; cost-effective next generation broadband wireless communication system; dynamic spectrum access; fairness; femto cell; location-aware cooperative resource management; multihop cooperative communication; network ecosystem; optimization framework; power control; primary user protection; radio flexibility; resource management protocol; spectrum utilization; stochastic Lyapunov optimization; Base stations; Broadband communication; Chromium; Cognitive radio; Design optimization; Ecosystems; Intelligent networks; Resource management; WiMAX; Wireless communication
【Paper Link】 【Pages】:561-569
【Authors】: Hong Xu ; Baochun Li
【Abstract】: Recently, a cooperative paradigm for single-channel cognitive radio networks has been advocated, where primary users can leverage secondary users to relay their traffic. However, it is not clear how such cooperation can be exploited effectively in multi-channel networks. Conventional cooperation entails that data on one channel has to be relayed on exactly the same channel, which is inefficient in multi-channel networks with channel and user diversity. Moreover, the selfishness of users complicates the critical resource allocation problem, as both parties aim to maximize their own utility. This work represents the first attempt to address these challenges. We propose FLEC, a novel design of flexible channel cooperation. It allows secondary users to freely optimize the use of channels for transmitting primary data along with their own data, in order to maximize performance. Further, we formulate a unifying optimization framework based on Nash Bargaining Solutions to fairly and efficiently address resource allocation between primary and secondary networks, in both decentralized and centralized settings. We present an optimal distributed algorithm and sub-optimal centralized heuristics, and verify their effectiveness via realistic simulations.
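The Nash Bargaining Solution underlying such a framework can be illustrated on a toy two-user split of a single resource. The `nash_bargaining` helper below is a hypothetical numerical sketch, not the paper's decentralized algorithm: it grid-searches the share maximizing the product of the two users' utility gains over their disagreement payoffs.

```python
def nash_bargaining(u1, u2, d1, d2, steps=1000):
    """Numerically locate the Nash Bargaining Solution for two users
    splitting one unit of a resource: the share x in [0, 1] maximizing
    (u1(x) - d1) * (u2(1 - x) - d2), where d1, d2 are the disagreement
    payoffs and u1, u2 are caller-supplied (ideally concave) utilities."""
    best_x, best_v = 0.0, float("-inf")
    for i in range(steps + 1):
        x = i / steps
        g1, g2 = u1(x) - d1, u2(1 - x) - d2
        if g1 >= 0 and g2 >= 0 and g1 * g2 > best_v:
            best_x, best_v = x, g1 * g2
    return best_x
```

With symmetric linear utilities and zero disagreement payoffs, the bargaining outcome is the even split x = 0.5, as expected from the symmetry axiom of the NBS.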
【Keywords】: cognitive radio; distributed algorithms; diversity reception; frequency division multiple access; resource allocation; telecommunication traffic; FLEC; Nash bargaining solutions; OFDMA; cognitive radio networks; decentralized settings; flexible channel cooperation; multichannel networks; optimal distributed algorithm; orthogonal frequency division multiple access; resource allocation; user diversity; Cognitive radio; Communications Society; Costs; Distributed algorithms; Land mobile radio cellular systems; Receivers; Relays; Resource management; Telecommunication traffic; Throughput
【Paper Link】 【Pages】:570-578
【Authors】: Mahdi Lotfinezhad ; Ben Liang ; Elvino S. Sousa
【Abstract】: In this paper, we consider the problem of optimal control for throughput utility maximization in cognitive radio networks with dynamic user arrivals and departures. The cognitive radio network considered in this paper consists of a number of heterogeneous sub-networks. These sub-networks may be power-constrained and are required to operate in such a way that the average total interference received on primary channels is kept below given thresholds. We develop a control policy that performs joint admission control and resource scheduling. Through Lyapunov optimization techniques, we show that the proposed policy achieves a utility performance within O(ε) of optimality for any positive ε. We further show that this arbitrarily close optimality comes at the price of a delay that is O(1/ε) in admitting users. We also propose constant factor approximations of the policy for distributed implementation.
【Keywords】: Lyapunov methods; cognitive radio; interference; optimal control; resource allocation; scheduling; telecommunication congestion control; Lyapunov optimization technique; average total interference; constant factor approximation; constrained cognitive radio networks; distributed implementation; dynamic population size; dynamic user arrivals; dynamic user departures; heterogeneous sub-networks; joint admission control; optimal control; primary channels; resource scheduling; throughput utility maximization; Admission control; Aggregates; Cognitive radio; Collaborative work; Communications Society; Interference constraints; Optimal control; Scheduling; Throughput; Tin
【Paper Link】 【Pages】:579-587
【Authors】: Mohammad A. Saleh ; Ahmed E. Kamal
【Abstract】: A large number of network applications today allow several users to interact together using the many-to-many service mode. In many-to-many communication, also referred to as group communication, a session consists of a group of users (we refer to them as members), where each member transmits its traffic to all other members in the same group. In this paper, we address the problem of grooming sub-wavelength many-to-many traffic (e.g., OC-3) into high-bandwidth wavelength channels (e.g., OC-192) in WDM mesh networks. The cost of a WDM network is dominated by the cost of higher layer electronic ports (i.e., transceivers). A transceiver is needed for each initiation and termination of a lightpath. Therefore, our objective is to minimize the total number of lightpaths established. Unfortunately, the grooming problem has been shown to be NP-hard even with unicast traffic. For a number of special cases where the many-to-many traffic grooming problem is tractable, we efficiently derive the optimal solution, while in the general case, we introduce two novel approximation algorithms. We also consider the routing and wavelength assignment problem with the objective of minimizing the number of wavelengths used. Through extensive experiments, we show that the two algorithms use a number of lightpaths that is very close to a derived lower bound. We also compare the two algorithms on several cost metrics discussed in the paper, including the number of lightpaths and the number of wavelengths used.
【Keywords】: computational complexity; telecommunication network routing; telecommunication traffic; transceivers; wavelength assignment; wavelength division multiplexing; NP-hard; WDM mesh networks; many-to-many communication; many-to-many service mode; many-to-many traffic grooming; routing problem; transceiver; unicast traffic; wavelength assignment problem; wavelength division multiplexing; Approximation algorithms; Costs; Mesh networks; Telecommunication traffic; Transceivers; Unicast; WDM networks; Wavelength assignment; Wavelength division multiplexing; Wavelength routing
【Paper Link】 【Pages】:588-595
【Authors】: Ming Xia ; Massimo Tornatore ; Charles U. Martel ; Biswanath Mukherjee
【Abstract】: A Service Level Agreement (SLA) typically specifies the availability a Service Provider (SP) promises to a customer. In an Optical Transport Network, finding a lightpath for a connection is commonly based on whether the availability of a lightpath complies with the connection's SLA-requested availability. Because of the stochastic nature of network failures, the actual availability of a lightpath over a specific time period is subject to uncertainty, and the SLA is usually at risk. We consider this network uncertainty, and study routing to minimize the probability of SLA violation. First, we use a single-link model to study SLA Violation Risk (i.e., the probability of SLA violation) under different settings. We show that SLA Violation Risk may vary by path and is affected by other factors (e.g., failure rate, connection holding time, etc.), and hence cannot be described simply by path availability. We then formulate the problem of risk-aware routing in mesh networks, in which routing decisions are dictated by SLA Violation Risk. In particular, we focus on devising a scheme capable of computing lightpath(s) that are likely to successfully accommodate a connection's SLA-requested availability. A novel technique is applied to convert links with heterogeneous failure profiles to reference links which capture the main risk features in a relative manner. Based on the "reference link" concept, we present a polynomial Risk-Aware Routing scheme using only limited failure information. In addition, we extend our Risk-Aware Routing scheme to incorporate shared path protection (SPP) when protection is needed. We evaluate the performance and demonstrate the effectiveness of our schemes in terms of SLA violation ratio and, more generally, contrast them with generic availability-aware approaches.
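The core observation, that steady-state availability does not capture SLA Violation Risk over a finite contract period, can be reproduced with a small Monte Carlo experiment. The single-link model below is only an illustration under assumed exponential up- and down-times, not the paper's analytical model.

```python
import random

def sla_violation_risk(mtbf, mttr, period, sla, runs=2000, seed=1):
    """Monte Carlo estimate of the probability that a single link's
    availability over a finite period falls below the SLA target.
    Failures and repairs form an alternating renewal process with
    exponential up-times (mean mtbf) and down-times (mean mttr).
    Even when the steady-state availability mtbf / (mtbf + mttr)
    exceeds the SLA, a short contract period can still violate it."""
    rng = random.Random(seed)
    violations = 0
    for _ in range(runs):
        t, down = 0.0, 0.0
        while t < period:
            t += rng.expovariate(1.0 / mtbf)   # up-time
            if t >= period:
                break
            rep = rng.expovariate(1.0 / mttr)  # down-time
            down += min(rep, period - t)       # count only within period
            t += rep
        if (period - down) / period < sla:
            violations += 1
    return violations / runs
```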
【Keywords】: optical fibre networks; risk management; telecommunication network routing; generic availability-aware approaches; optical transport networks; risk-aware routing; service level agreement; service provider; shared path protection; single-link model; violation risk; Availability; Communications Society; Computer science; Mesh networks; Optical fiber networks; Protection; Routing; Stochastic processes; USA Councils; Uncertainty
【Paper Link】 【Pages】:596-604
【Authors】: Onur Turkcu ; Suresh Subramaniam
【Abstract】: Waveband switching saves port costs in optical crossconnects by grouping together a set of consecutive wavelengths and switching them as a single waveband. Previous work has focused on either uniform band sizes or non-uniform band sizes considering a single node. In this paper, we show that such solutions are inadequate when considering the entire network, and present a novel framework for optimizing the number of wavebands in a ring network for deterministic traffic. We then consider a specific type of traffic, namely, all-to-all traffic and present bounds and heuristic solutions for the problem. Our results show that the number of ports can be reduced by a large amount using waveband switching compared to wavelength switching. We also numerically evaluate the performance of our waveband design algorithms under dynamic stochastic traffic.
【Keywords】: optical communication; stochastic processes; telecommunication switching; telecommunication traffic; consecutive wavelengths; deterministic traffic; dynamic stochastic traffic; optical crossconnects; optical ring networks; optimal waveband switching; Communication switching; Communications Society; Cost function; Network topology; Optical fiber networks; Optical switches; Peer to peer computing; Telecommunication switching; Telecommunication traffic; Wavelength division multiplexing
【Paper Link】 【Pages】:605-613
【Authors】: Haojin Zhu ; Xiaodong Lin ; Rongxing Lu ; Xuemin Shen ; Dongsheng Xing ; Zhenfu Cao
【Abstract】: Bundle authentication is a critical security service in Delay Tolerant Networks (DTNs) that ensures the authenticity and integrity of bundles during multi-hop transmissions. Public key signatures, which have been suggested in the existing bundle security protocol specification, achieve bundle authentication at the cost of increased computational and transmission overhead and higher energy consumption, which is not desirable for energy-constrained DTNs. On the other hand, the unique "store-carry-and-forward" transmission characteristic of DTNs implies that bundles from distinct or common senders can be buffered opportunistically at some common intermediate nodes. This "buffering" characteristic distinguishes DTNs from traditional wireless networks, in which an intermediate cache is not supported. To exploit this buffering characteristic, we propose an Opportunistic Batch Bundle Authentication Scheme (OBBA) to achieve efficient bundle authentication. The proposed scheme adopts batch verification techniques, allowing the computational overhead to be bounded by the number of opportunistic contacts instead of the number of messages. Furthermore, we introduce the novel concept of a fragment authentication tree to minimize communication cost by choosing an optimal tree height. Finally, we implement OBBA in a specific DTN scenario: packet-switched networks on campus. Simulation results in terms of computation time, transmission overhead, and power consumption demonstrate the efficiency and effectiveness of the proposed scheme.
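A fragment authentication tree in this spirit can be sketched as a standard Merkle hash tree over bundle fragments. This generic construction is an assumption for illustration; the paper's exact tree and its optimal-height analysis differ. Signing one root authenticates every fragment, and each fragment ships with a logarithmic-size sibling path.

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(fragments):
    """Hash-tree root over bundle fragments (power-of-two count assumed
    for brevity); signing this single root authenticates all fragments."""
    level = [h(f) for f in fragments]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def auth_path(fragments, index):
    """Sibling hashes from the leaf up to the root -- the per-fragment
    proof shipped along with the fragment."""
    level, path = [h(f) for f in fragments], []
    while len(level) > 1:
        path.append(level[index ^ 1])  # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(fragment, index, path, root):
    """Recompute the root from one fragment and its sibling path."""
    node = h(fragment)
    for sib in path:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root
```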
【Keywords】: digital signatures; message authentication; packet radio networks; packet switching; protocols; public key cryptography; telecommunication computing; telecommunication security; batch verification techniques; bundle security protocol specification; delay tolerant networks; energy constrained DTN; high energy consumption; multihop transmissions; opportunistic batch bundle authentication scheme; packet-switched networks; power consumption; public key signatures; store-carry-and-forward transmission characteristic; transmission overhead; wireless networks; Authentication; Buffer storage; Computational modeling; Cost function; Disruption tolerant networking; Energy consumption; Protocols; Public key; Spread spectrum communication; Wireless networks
【Paper Link】 【Pages】:614-622
【Authors】: Liang Hu ; Jean-Yves Le Boudec ; Milan Vojnovic
【Abstract】: Collaborative ad-hoc dissemination of information has been proposed as an efficient means to disseminate information among devices in a wireless ad-hoc network. Devices help in forwarding the information channels to the entire network, by disseminating the channels they subscribe to, plus others. We consider the case where devices have a limited amount of storage that they are willing to devote to the public good, and thus have to decide which channels they are willing to help disseminate. We are interested in finding channel selection strategies which optimize the dissemination time across the channels. We first consider a simple model under the random mixing assumption; we show that channel dissemination time can be characterized in terms of the number of nodes that forward this channel. Then we show that maximizing a social welfare is equivalent to an assignment problem, whose solution can be obtained by a centralized greedy algorithm. We show empirical evidence, based on Zune data, that there is a substantial difference between the utility of the optimal assignment and heuristics that were used in the past. We also show that the optimal assignment can be approximated in a distributed way by a Metropolis-Hastings sampling algorithm. We also give a variant that accounts for battery level. This leads to a practical channel selection and re-selection algorithm that can be implemented without any central control.
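The distributed approximation via Metropolis-Hastings sampling can be sketched as follows. The `utility` function and the one-channel swap proposal are hypothetical stand-ins for the paper's social-welfare objective and its actual chain; the sketch samples cache-size subsets of channels with probability concentrated on high-utility assignments.

```python
import math
import random

def mh_channel_selection(channels, utility, cache_size, iters=5000,
                         temp=0.5, seed=0):
    """Metropolis-Hastings sketch: explore cache_size-subsets of
    channels, targeting a distribution proportional to
    exp(utility / temp), by proposing to swap one cached channel for
    one outside the cache.  Returns the best subset visited.
    `utility` maps a frozenset of channels to a welfare score."""
    rng = random.Random(seed)
    state = set(rng.sample(channels, cache_size))
    best, best_u = set(state), utility(frozenset(state))
    for _ in range(iters):
        out = rng.choice([c for c in channels if c not in state])
        inn = rng.choice(list(state))
        prop = (state - {inn}) | {out}
        du = utility(frozenset(prop)) - utility(frozenset(state))
        # accept uphill moves always, downhill moves with MH probability
        if du >= 0 or rng.random() < math.exp(du / temp):
            state = prop
            u = utility(frozenset(state))
            if u > best_u:
                best, best_u = set(state), u
    return best
```

A device would run such a chain locally, re-selecting channels as utilities change, without any central control.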
【Keywords】: ad hoc networks; greedy algorithms; information dissemination; signal sampling; wireless channels; Metropolis-Hastings sampling algorithm; Zune data; assignment problem; battery level; centralized greedy algorithm; channel selection; collaborative ad hoc dissemination; information channels; information dissemination; optimal channel choice; random mixing assumption; wireless ad hoc network; Bandwidth; Communications Society; Costs; Digital audio broadcasting; Energy consumption; Energy storage; Feeds; Greedy algorithms; Internet; Online Communities/Technical Collaboration
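The distributed Metropolis-Hastings approximation mentioned in the abstract can be illustrated with a generic sketch. The swap proposal, the `utility` callback, and the `beta` temperature below are illustrative stand-ins, not the paper's actual welfare function or message protocol:

```python
import math
import random

def metropolis_channel_selection(num_channels, cache_size, utility,
                                 steps=500, beta=3.0):
    """Toy Metropolis-Hastings sampler over a node's cached channel subset.

    utility(subset) -> float is a stand-in for the social-welfare term a
    node could evaluate locally; higher is better.  beta controls how
    greedily the chain concentrates on high-utility subsets.
    """
    current = set(random.sample(range(num_channels), cache_size))
    for _ in range(steps):
        # Propose swapping one cached channel for one not currently cached.
        out_ch = random.choice(sorted(current))
        in_ch = random.choice([c for c in range(num_channels)
                               if c not in current])
        proposal = (current - {out_ch}) | {in_ch}
        # Accept with the Metropolis ratio exp(beta * utility gain);
        # downhill moves are sometimes accepted, so the chain can escape
        # local optima without any central coordinator.
        accept_p = min(1.0, math.exp(beta * (utility(proposal)
                                             - utility(current))))
        if random.random() < accept_p:
            current = proposal
    return current
```

Each node only needs to evaluate the utility change of its own swap, which is what makes a decentralized implementation plausible.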
【Paper Link】 【Pages】:623-631
【Authors】: Philippe Jacquet ; Bernard Mans ; Georgios Rodolakis
【Abstract】: We investigate the fundamental capacity limits of space-time journeys of information in mobile and Delay Tolerant Networks (DTNs), where information is either transmitted or carried by mobile nodes, using store-carry-forward routing. We define the capacity of a journey (i.e., a path in space and time, from a source to a destination) as the maximum amount of data that can be transferred from the source to the destination in the given journey. Combining a stochastic model (conveying all possible journeys) and an analysis of the durations of the nodes' encounters, we study the properties of journeys that maximize the space-time information propagation capacity, in bit-meters per second. More specifically, we provide theoretical lower and upper bounds on the information propagation speed, as a function of the journey capacity. In the particular case of random way-point-like models (i.e., when nodes move for a distance of the order of the network domain size before changing direction), we show that, for relatively large journey capacities, the information propagation speed is of the same order as the mobile node speed. This implies that, surprisingly, in sparse but large-scale mobile DTNs, the space-time information propagation capacity in bit-meters per second remains proportional to the mobile node speed and to the size of the transported data bundles, when the bundles are relatively large. We also verify that all our analytical bounds are accurate in several simulation scenarios.
【Keywords】: mobile radio; telecommunication network routing; delay tolerant networks; mobile networks; mobile node speed; space-time capacity limits; space-time information propagation capacity; store-carry-forward routing; Ad hoc networks; Communications Society; Delay effects; Disruption tolerant networking; Information analysis; Large-scale systems; Peer to peer computing; Routing; Stochastic processes; Upper bound
【Paper Link】 【Pages】:632-640
【Authors】: Rongxing Lu ; Xiaodong Lin ; Xuemin Shen
【Abstract】: In this paper, we propose a social-based privacy-preserving packet forwarding protocol, called SPRING, for vehicular delay tolerant networks (DTNs). With SPRING, Roadside Units (RSUs) deployed along the roadside can assist in packet forwarding to achieve highly reliable transmissions. Specifically, we first heuristically define how to evaluate each traffic intersection's social degree in a vehicular DTN. Based on the social degree information, we then strategically place RSUs at some high-social intersections. As a result, these RSUs can provide tremendous assistance in temporarily storing packets and helping packet forwarding to achieve a high delivery ratio. Performance evaluations via extensive simulations demonstrate SPRING's efficiency. In addition, detailed security analyses show that SPRING can achieve conditional privacy preservation and resist most attacks existing in vehicular DTNs.
【Keywords】: mobile radio; protocols; road vehicles; SPRING; roadside units; social degree information; social-based privacy preserving packet forwarding protocol; vehicular delay tolerant networks; Ad hoc networks; Communication system traffic control; Disruption tolerant networking; Mobile communication; Privacy; Protocols; Springs; Telecommunication network reliability; Telecommunication traffic; Vehicles
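The placement strategy can be sketched in a few lines; the definition of social degree below (number of distinct vehicles observed at an intersection) is a toy stand-in, since the paper's heuristic definition is not given in the abstract:

```python
def social_degree(intersection, traces):
    """Toy social degree: how many distinct vehicles pass the intersection.
    `traces` maps vehicle id -> set of intersections it visited."""
    return len({veh for veh, stops in traces.items() if intersection in stops})

def place_rsus(intersections, traces, budget):
    """Greedily place the RSU budget at the highest-social-degree
    intersections, as the abstract's strategy suggests."""
    ranked = sorted(intersections,
                    key=lambda x: social_degree(x, traces),
                    reverse=True)
    return ranked[:budget]
```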
【Paper Link】 【Pages】:641-649
【Authors】: Andrew Markham ; Agathoniki Trigoni
【Abstract】: The operation of a sensor network is determined by a large number of parameters, such as the radio duty cycle, the frequency of neighbor discovery beacons, and the rate of sampling sensors. Writing adaptive algorithms to tune these parameters in dynamic network conditions is a challenging task that requires expert knowledge, and many design-test-rewrite cycles. This paper proposes a novel nature-inspired paradigm, termed discrete Gene Regulatory Network (dGRN), for configuring sensor networks. The idea is that nodes should regulate their parameters based on their local state and state communicated from neighbor nodes, in a manner similar to the way cells regulate their behavior based on local levels of protein concentrations and on proteins diffused from neighbor cells. The proposed dGRN paradigm has two major strengths: 1) it is general-purpose, and can be applied to a variety of parameter tuning problems; and 2) it generates parameter tuning code automatically, removing the need for a human expert. We demonstrate the feasibility of the dGRN approach in a scenario where nodes must tune their sampling rates to track a moving target with a certain accuracy. The automatically generated code exhibits properties similar to the ones that one would expect from expert-designed code, such as aggressive sampling when the target moves fast and the sensing range is low, and relaxed sampling otherwise. Moreover, the automatically generated code causes nodes to communicate with each other to coordinate their tuning tasks, as one would expect from expert-designed code. The resulting dGRN code is evaluated both in a simulation environment, and in a real environment with eight T-Mote Sky nodes tracking a light-emitting target.
【Keywords】: tuning; wireless sensor networks; configuring sensor networks; design-test-rewrite cycles; discrete gene regulatory networks; expert-designed code; nature-inspired paradigm; neighbor discovery beacons; parameter tuning code; wireless sensor network; Communications Society; Computer networks; Electronic mail; Frequency; Laboratories; Peer to peer computing; Proteins; Sampling methods; Target tracking; Writing
【Paper Link】 【Pages】:650-658
【Authors】: Bo Li ; Dan Wang ; Feng Wang ; Yi Qing Ni
【Abstract】: There has been substantial recent work on applying wireless sensor networks to structural health monitoring (SHM). These studies usually focus on the computer science aspects, with considerations such as energy consumption and network connectivity. It is commonly believed that, for today's resource-limited wireless sensors, system design could be more efficient if the application requirements were incorporated. Nevertheless, we often find that, rather than such integration, assumptions have to be made for lack of civil engineering knowledge; for example, to evaluate routing algorithms, sensor placement is assumed to be random or on grids/trees. These assumptions may not be meaningful for the actual application demands, and they make the computer science community's great efforts on developing efficient methods from the sensor network aspect less useful. In this paper, we study the very first problem of SHM systems, sensor placement, with a focus on the civil engineering requirements. We first study the current general framework of structural health monitoring. We then redevelop the framework to include a new sensor placement module. This module implements the most widely accepted sensor placement scheme from civil engineering, focusing on its usefulness to computer science: it provides interfaces that rank the placement quality of candidate locations in a step-by-step manner. We then optimize system performance by considering network connectivity and data routing issues, with the objective of energy efficiency. We evaluate our scheme using data from the structural health monitoring system on the Ting Kau Bridge, Hong Kong. We show that a uniform and a state-of-the-art placement are not very meaningful in placement quality. Our scheme achieves almost the same sensor placement quality as the civil engineering scheme, with a five-fold improvement in system lifetime.
We conduct an experiment on the in-built Guangzhou New TV Tower, China; the results validate the effectiveness of our scheme.
【Keywords】: condition monitoring; structural engineering; telecommunication network routing; wireless sensor networks; China; Hong Kong; SHM systems; Ting Kau Bridge; application demands; application requirements; civil engineering; civil requirements; computer science aspect; data routing; energy consumption; energy efficiency; high quality sensor placement; in-built Guangzhou New TV Tower; network connectivity; resource limited wireless sensors; routing algorithms; sensor placement module; structural health monitoring; system design; wireless sensor networks; Application software; Civil engineering; Computer science; Computerized monitoring; Energy consumption; Routing; Sensor systems; Sensor systems and applications; System performance; Wireless sensor networks
【Paper Link】 【Pages】:659-667
【Authors】: Anat Bremler-Barr ; David Hay ; Yaron Koral
【Abstract】: Pattern matching algorithms lie at the core of all contemporary Intrusion Detection Systems (IDS), making it essential to increase their speed and reduce their memory requirements. This paper focuses on the most popular class of pattern-matching algorithms, the Aho-Corasick-like algorithms, which are based on constructing and traversing a Deterministic Finite Automaton (DFA) representing the patterns. While this approach ensures deterministic time guarantees, modern IDSs need to deal with hundreds of patterns, and thus must store very large DFAs which usually do not fit in fast memory. This results in a major bottleneck on the throughput of the IDS, as well as on its power consumption and cost. We propose a novel method to compress DFAs by observing that the names of the states are meaningless. While regular DFAs store each transition between two states separately, we use this degree of freedom and encode states in such a way that all transitions to a specific state can be represented by a single prefix that defines a set of current states. Our technique applies to a large class of automata, which can be categorized by simple properties. The problem of pattern matching is thereby reduced to the well-studied problem of Longest Prefix Matching (LPM), which can be solved either in TCAM, in commercially available IP-lookup chips, or in software. Specifically, we show that with a TCAM our scheme can reach a throughput of 10 Gbps with low power consumption.
【Keywords】: data compression; deterministic automata; finite state machines; pattern matching; security of data; Aho-Corasick-like algorithms; IP-lookup chips; TCAM; bit rate 10 Gbit/s; compactDFA; deterministic finite automaton; generic state machine compression; intrusion detection systems; longest prefix matching; low power consumption; power consumption; scalable pattern matching algorithms; Automata; Communications Society; Computer science; Doped fiber amplifiers; Energy consumption; Hardware; Intrusion detection; Pattern matching; Throughput; USA Councils
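The LPM primitive that the compression reduces to is easy to show in software. The rule table below (prefixes of state codes mapped to next-state names) is invented for illustration; the exact key layout used by the paper's encoding may differ, and a TCAM would perform this match in a single lookup:

```python
def longest_prefix_match(rules, key):
    """rules: list of (prefix, next_state) pairs, TCAM-style.
    Returns the next_state whose prefix is the longest one matching `key`.
    The empty prefix acts as a default (catch-all) rule."""
    best, best_len = None, -1
    for prefix, nxt in rules:
        if key.startswith(prefix) and len(prefix) > best_len:
            best, best_len = nxt, len(prefix)
    return best
```

In the compressed-DFA setting, one DFA transition per input byte becomes one such lookup over a key derived from the current state's code and the input symbol.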
【Paper Link】 【Pages】:668-676
【Authors】: Yunye Jin ; Mehul Motani ; Wee-Seng Soh ; Juanjuan Zhang
【Abstract】: Accurate indoor pedestrian tracking has wide applications in the healthcare, retail, and entertainment industries. However, existing approaches to indoor tracking have various limitations. For example, location-fingerprinting approaches are labor-intensive and vulnerable to environmental changes. Trilateration approaches require at least three Line-of-Sight (LoS) beacons to cover any point in the service area, which results in heavy infrastructure cost. Dead Reckoning (DR) approaches rely on knowledge of the initial user location and suffer from tracking error accumulation. Despite this, we adopt DR for location tracking because of the recent emergence of affordable hand-held devices equipped with low cost DR-enabling sensors. In this paper, we propose an indoor pedestrian tracking system which comprises a DR sub-system implemented on a mobile phone, and a ranging sub-system with a sparse infrastructure. A probabilistic fusion scheme is applied to bound the accumulated tracking error of DR when new range measurements are available from sparsely deployed beacons. Experimental results show that the proposed system is able to track users much better than DR alone, with reductions in average error by up to 71.9%. The system is robust and works well even when the initial user location is not available and range updates are intermittent. This highlights the potential of using sparse but reasonably accurate partial information to limit location tracking errors.
【Keywords】: indoor radio; mobile handsets; sensor fusion; target tracking; SparseTrack; dead reckoning; indoor pedestrian tracking; initial user location; line-of-sight beacons; location tracking errors; location-fingerprinting; mobile phone; probabilistic fusion; sparse infrastructure support; tracking error accumulation; trilateration; Accelerometers; Communications Society; Costs; Dead reckoning; Fingerprint recognition; Global Positioning System; Magnetic sensors; Magnetometers; Medical services; Sensor systems
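The DR-plus-ranging idea can be sketched as two update rules: a dead-reckoning step advance, and a correction that pulls the estimate toward the measured range when a beacon is heard. The scalar `gain` below is a crude stand-in for the weight the paper's probabilistic fusion scheme would derive:

```python
import math

def dr_step(pos, heading, step_len):
    """Dead-reckoning update: advance by one detected step along `heading`
    (radians).  Errors in step length and heading accumulate over time."""
    return (pos[0] + step_len * math.cos(heading),
            pos[1] + step_len * math.sin(heading))

def range_correct(pos, beacon, measured_range, gain=0.5):
    """Pull the DR estimate toward the circle of the measured range around
    the beacon, bounding the accumulated DR error.  `gain` in (0, 1] is an
    illustrative fusion weight, not the paper's derived one."""
    dx, dy = pos[0] - beacon[0], pos[1] - beacon[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return pos
    err = measured_range - dist  # positive: estimate is too close to beacon
    return (pos[0] + gain * err * dx / dist,
            pos[1] + gain * err * dy / dist)
```

Because the correction only needs occasional range measurements, a sparse beacon deployment suffices, which matches the abstract's infrastructure argument.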
【Paper Link】 【Pages】:677-685
【Authors】: Onur Güngör ; Jian Tan ; Can Emre Koksal ; Hesham El Gamal ; Ness B. Shroff
【Abstract】: In recent years, the famous wiretap channel has been revisited by many researchers, and information theoretic secrecy has become an active area of research in this setting. In this paper, we design a wireless communication system that achieves constant bit rate data transmission over a block fading channel, securely from an eavesdropper that listens to the transmitter over another independent block fading channel. It is well known that the method of sending secure information using the binning techniques inspired by the wiretap channel fails to secure the information at times when the eavesdropper channel has favorable conditions over the main channel. This phenomenon is called secrecy outage. In our system, however, we exploit the times at which the main channel is favorable over the eavesdropper channel to transmit some random secret key bits along with the data bits. These key bits are stored in a separate key queue at the transmitter as well as the receiver, and are utilized to secure data bits whenever the channel conditions favor the eavesdropper. We show that our system achieves high performance at any given desired outage probability by jointly controlling the key queue and the transmit power. We show that the optimal power control involves time sharing between secure waterfilling and channel inversion strategies, and that the key queue operates in the heavy traffic regime to achieve the maximum delay-limited rate possible under a small outage constraint. This work can be viewed as a first step in providing a framework that combines both information theory and queueing analysis for the study of information theoretic security.
【Keywords】: data communication; fading channels; information theory; probability; queueing theory; radio networks; telecommunication security; block fading channel; channel inversion strategies; data transmission; delay limited secure communication; information theoretic secrecy; joint power; optimal power control; outage probability; secret key queue management; secure waterfilling; wireless communication system; wiretap channel; Bit rate; Control systems; Data communication; Delay; Energy management; Fading; Power control; Power system management; Transmitters; Wireless communication
【Paper Link】 【Pages】:686-694
【Authors】: Javad Ghaderi ; Rayadurgam Srikant
【Abstract】: The problem of anonymous networking when an eavesdropper observes packet timings in a communication network is considered. The goal is to hide the identities of source-destination nodes, and paths of information flow in the network. One way to achieve such anonymity is to use mixers. Mixers are nodes that receive packets from multiple sources and change the timing of packets, by mixing packets at the output links, to prevent the eavesdropper from finding sources of outgoing packets. In this paper, we consider two simple but fundamental scenarios: a double input-single output mixer and a double input-double output mixer. For the first case, we use the information-theoretic definition of anonymity, based on average entropy per packet, and find an optimal mixing strategy under a strict latency constraint. For the second case, perfect anonymity is considered, and a maximal throughput strategy with perfect anonymity is found that minimizes the average delay.
【Keywords】: mixers (circuits); telecommunication networks; telecommunication security; anonymous networking; communication network; double input-double output mixer; double input-single output mixer; source-destination nodes; Communication networks; Communications Society; Cryptography; Delay; Information theory; Peer to peer computing; Performance analysis; Protection; Telecommunication traffic; Timing
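The entropy-based anonymity measure can be made concrete for the two-input case. The sketch below computes the per-packet entropy of the eavesdropper's source inference; treating that inference as a single probability `p_from_a` is a simplification of the paper's average-entropy-per-packet formulation:

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def per_packet_anonymity(p_from_a):
    """For a 2-input mixer: if the eavesdropper's best inference is that an
    output packet came from source A with probability p_from_a, the anonymity
    that packet enjoys is the entropy of that distribution.  It is maximal
    (1 bit) at p = 0.5, i.e., when mixing makes the sources indistinguishable,
    and zero when the source is certain."""
    return entropy_bits([p_from_a, 1.0 - p_from_a])
```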
【Paper Link】 【Pages】:695-703
【Authors】: Yao Liu ; Peng Ning ; Huaiyu Dai ; An Liu
【Abstract】: Jamming resistance is crucial for applications where reliable wireless communication is required. Spread spectrum techniques such as Frequency Hopping Spread Spectrum (FHSS) and Direct Sequence Spread Spectrum (DSSS) have been used as countermeasures against jamming attacks. Traditional anti-jamming techniques require that senders and receivers share a secret key in order to communicate with each other. However, such a requirement prevents these techniques from being effective for anti-jamming broadcast communication, where a jammer may learn the shared key from a compromised or malicious receiver and disrupt the reception at normal receivers. In this paper, we propose a Randomized Differential DSSS (RD-DSSS) scheme to achieve anti-jamming broadcast communication without shared keys. RD-DSSS encodes each bit of data using the correlation of unpredictable spreading codes. Specifically, bit "0" is encoded using two different spreading codes, which have low correlation with each other, while bit "1" is encoded using two identical spreading codes, which have high correlation. To defeat reactive jamming attacks, RD-DSSS uses multiple spreading code sequences to spread each message and rearranges the spread output before transmitting it. Our theoretical analysis and simulation results show that RD-DSSS can effectively defeat jamming attacks for anti-jamming broadcast communication without shared keys.
【Keywords】: codes; frequency hop communication; jamming; spread spectrum communication; anti-jamming broadcast communication; direct sequence spread spectrum; frequency hopping spread spectrum; jamming resistance; randomized differential DSSS; spreading codes; Analytical models; Broadcasting; Communications Society; Decoding; Jamming; Radio frequency; Signal processing; Spread spectrum communication; Timing; Wireless communication
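The RD-DSSS encoding rule in the abstract (bit "1" as two identical spreading codes, bit "0" as two independent low-correlation codes, decoded by correlating the halves) can be sketched directly. Code length, threshold, and the ±1 chip alphabet below are illustrative choices, and no channel noise is modeled:

```python
import random

def rand_code(n):
    """A random ±1 spreading code of length n."""
    return [random.choice((-1, 1)) for _ in range(n)]

def correlate(a, b):
    """Normalized correlation of two equal-length chip sequences."""
    return sum(x * y for x, y in zip(a, b)) / len(a)

def rddsss_encode(bit, n=64):
    """Bit 1 -> two identical random codes (high correlation);
    bit 0 -> two independently drawn codes (near-zero correlation).
    The codes are unpredictable, so no shared key is needed."""
    c1 = rand_code(n)
    c2 = c1 if bit == 1 else rand_code(n)
    return c1 + c2

def rddsss_decode(chips, threshold=0.5):
    """Recover the bit by correlating the two halves of the transmission."""
    n = len(chips) // 2
    return 1 if correlate(chips[:n], chips[n:]) > threshold else 0
```

Because the receiver only correlates the two halves against each other, a jammer cannot predict the codes in advance, which is the property the scheme relies on.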
【Paper Link】 【Pages】:704-712
【Authors】: Li Lu ; Yunhao Liu ; Xiang-Yang Li
【Abstract】: Privacy-Preserving Authentication (PPA) is crucial for Radio Frequency Identification (RFID)-enabled applications. Without appropriate formal privacy models, it is difficult for existing PPA schemes to explicitly prove their privacy. Even worse, RFID systems cannot discover potential security flaws that are vulnerable to new attacking patterns. Recently, researchers proposed a formal model, termed Strong Privacy, which strictly requires that tags randomly generate their output. Adopting the Strong Privacy model, PPA schemes have to employ brute-force search in tag authentication, which incurs unacceptable overhead and delay in large-scale RFID systems. Instead of adopting Strong Privacy, most PPA schemes improve authentication efficiency at the cost of privacy degradation. Due to the lack of proper formal models, it cannot be theoretically proven that the degraded PPA schemes achieve acceptable privacy in practical RFID systems. To address these issues, we propose a weak privacy model, Refresh, for designing PPA schemes with high efficiency as well as acceptable privacy. Based on Refresh, we show that many well-known PPA schemes do not provide satisfactory privacy protection, even though they achieve relatively high authentication efficiency. We further propose a Light-weight privAcy-preServing authenTication scheme, LAST, which simultaneously guarantees privacy under the Refresh model and achieves O(1) authentication efficiency.
【Keywords】: cryptography; radiofrequency identification; RFID systems; privacy protection; privacy-preserving authentication; radio frequency identification; weak privacy model; Authentication; Costs; Degradation; Delay; Large-scale systems; Privacy; Protection; Radio frequency; Radiofrequency identification; Security
【Paper Link】 【Pages】:713-721
【Authors】: Daniela Brauckhoff ; Kavé Salamatian ; Martin May
【Abstract】: Anomaly detection methods typically operate on preprocessed traffic traces. Firstly, most traffic capturing devices today employ random packet sampling, where each packet is selected with a certain probability, to cope with increasing link speeds. Secondly, temporal aggregation, where all packets in a measurement interval are represented by their temporal mean, is applied to transform the traffic trace to the observation timescale of interest for anomaly detection. These preprocessing steps affect the temporal correlation structure of traffic that is used by anomaly detection methods such as Kalman filtering or PCA, and thus have an impact on anomaly detection performance. Prior work has analyzed how packet sampling degrades the accuracy of anomaly detection methods; however, neither theoretical explanations nor solutions to the sampling problem have been provided. This paper makes the following key contributions: (i) It provides a thorough analysis and quantification of how random packet sampling and temporal aggregation modify the signal properties by introducing noise, distortion and aliasing. (ii) We show that aliasing introduced by the aggregation step has the largest impact on the correlation structure. (iii) We further propose to replace the aggregation step with a specifically designed low-pass filter that reduces the aliasing effect. (iv) Finally, we show that with our solution applied, the performance of anomaly detection systems can be considerably improved in the presence of packet sampling.
【Keywords】: acoustic distortion; signal detection; signal sampling; anomaly detection; packet sampling; signal aliasing; signal distortion; signal modification; signal processing; Degradation; Filtering; Kalman filters; Noise reduction; Performance analysis; Principal component analysis; Sampling methods; Signal analysis; Signal processing; Signal sampling
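The aliasing effect and the anti-aliasing fix can be shown on a toy signal. A tone at the original Nyquist frequency survives naive downsampling as a spurious DC component, while averaging each window first suppresses it; the moving average below is the crudest anti-aliasing filter, not the paper's specifically designed one:

```python
def decimate(signal, k):
    """Naive downsampling: keep every k-th sample.  High-frequency content
    folds (aliases) into the retained low-frequency band."""
    return signal[::k]

def lowpass_decimate(signal, k):
    """Average each k-sample window before downsampling, attenuating the
    frequencies that would otherwise alias."""
    return [sum(signal[i:i + k]) / k
            for i in range(0, len(signal) - k + 1, k)]
```

With `signal[n] = (-1)**n` (a tone at the Nyquist rate) and `k = 4`, naive decimation yields a constant +1 signal, a pure alias, while the filtered version yields zeros.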
【Paper Link】 【Pages】:722-730
【Authors】: Fernando Silveira ; Christophe Diot
【Abstract】: Traffic anomaly detection has received a lot of attention over recent years, but understanding the nature of these anomalies and identifying the flows involved is still a manual task, in most cases. We introduce Unsupervised Root Cause Analysis (URCA) which isolates anomalous traffic and classifies alarms with minimal manual assistance and high accuracy. URCA proceeds by successive reduction of the anomalous space, eliminating normal traffic based on feedback from the anomaly detection method. Classification is done by clustering a new anomaly with previously labeled events. We validate URCA using manually analyzed real anomalies as well as synthetic anomaly injection. Our validation shows that URCA can accurately diagnose a large range of anomaly types, including network scans, DDoS attacks, and major routing changes.
【Keywords】: telecommunication congestion control; telecommunication security; anomalous space; anomalous traffic isolation; classification; root causes; traffic anomaly detection; unsupervised root cause analysis; Classification algorithms; Classification tree analysis; Clustering algorithms; Communications Society; Computer crime; Detectors; Feedback; Network-on-a-chip; Routing; Telecommunication traffic
【Paper Link】 【Pages】:731-739
【Authors】: Kai Zheng ; Xin Zhang ; Zhiping Cai ; Zhijun Wang ; Baohua Yang
【Abstract】: In this paper, we identify the unique challenges in deploying parallelism on TCAM-based pattern matching for Network Intrusion Detection Systems (NIDSes). We resolve two critical issues when designing scalable parallelism specifically for pattern matching modules: 1) how to enable fine-grained parallelism in pursuit of effective load balancing and desirable speedup simultaneously; and 2) how to reconcile the tension between parallel processing speedup and prohibitive TCAM power consumption. To this end, we first propose the novel concept of Negative Pattern Matching to partition flows, by which the number of TCAM lookups can be significantly reduced, and the resulting (fine-grained) flow segments can be inspected in parallel without incurring false negatives. Then we propose the notion of Exclusive Pattern Matching to divide the entire pattern set into multiple subsets which can later be matched against selectively and independently without affecting the correctness. We show that Exclusive Pattern Matching enables the adoption of smaller and faster TCAM blocks and improves both the pattern matching speed and scalability. Finally, our theoretical and experimental results validate that the above two concepts are inherently complementary, enabling our integrated scheme to provide performance gain in any scenario (with either clean or dirty traffic).
【Keywords】: content-addressable storage; parallel processing; pattern matching; security of data; NIDS; TCAM lookups; TCAM-based pattern matching; exclusive pattern matching; load balancing; negative pattern matching; network intrusion detection systems; parallel processing speedup; ternary content addressable memories; Communications Society; Computer networks; Concurrent computing; Energy consumption; Hardware; Intrusion detection; Parallel processing; Pattern matching; Scalability; USA Councils
【Paper Link】 【Pages】:740-748
【Authors】: Kedar S. Namjoshi ; Girija J. Narlikar
【Abstract】: The rule language of an Intrusion Detection System (IDS) plays a critical role in its effectiveness. A rule language must be expressive, in order to describe attack patterns as precisely as possible. It must also allow for a matching algorithm with predictable and low complexity, in order to ensure robustness against denial-of-service attacks. Unfortunately, these requirements often conflict. We show, for instance, that a single rule, when coupled with a backtracking matching algorithm, can bring the processing rate down to nearly one packet per second. Performance vulnerabilities of this type are known for patterns described using regular expressions, and can be avoided by using a deterministic matching algorithm. Increasingly, however, rules are being written using the more powerful regex syntax, which includes non-regular features such as back-references. The matching algorithm for general regexes is based on backtracking, and is thus vulnerable to such attacks. The main contribution of this paper is a deterministic algorithm for the full regex syntax, which builds upon the deterministic algorithm for regular expressions. We provide a (rough) complexity bound on the worst-case performance, and show that this bound can be tightened through compile-time analysis of the regex structure. These bounds can be used as an admissibility check, to isolate expressions that require further analysis. Finally, we present an implementation of these algorithms in the context of the Snort IDS, and experimental results on several packet traces which show substantial improvement over the backtracking algorithm.
【Keywords】: backtracking; computational complexity; deterministic algorithms; pattern matching; security of data; Snort IDS; backtracking matching algorithm; compile-time analysis; denial-of-service attacks; deterministic matching algorithm; fast pattern matching; intrusion detection systems; processing rate; regex syntax; robust pattern matching; robustness; rule language; Automata; Communications Society; Computer crime; Data security; Intrusion detection; Pattern matching; Performance analysis; Prediction algorithms; Protection; Robustness
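The deterministic baseline for the regular subset that the paper builds upon is the classic set-of-states (Thompson-style) simulation. The sketch below handles only literals, `.`, and `*` (full-match semantics), far less than the full regex syntax the paper targets, but it illustrates why no input can trigger exponential backtracking: each input character advances a state *set* in O(pattern length):

```python
def match_tokens(pattern, text):
    """Backtracking-free full matcher for a toy regular subset:
    literals, '.', and '*'.  Worst case O(len(text) * len(pattern))."""
    # Tokenize into (char, starred) pairs, e.g. "a*b" -> [('a',T),('b',F)].
    tokens, i = [], 0
    while i < len(pattern):
        starred = i + 1 < len(pattern) and pattern[i + 1] == '*'
        tokens.append((pattern[i], starred))
        i += 2 if starred else 1

    def closure(states):
        # A starred token matches zero times, so its successor is reachable.
        out = set()
        for s in states:
            out.add(s)
            while s < len(tokens) and tokens[s][1]:
                s += 1
                out.add(s)
        return out

    states = closure({0})
    for ch in text:
        nxt = set()
        for s in states:
            if s < len(tokens) and tokens[s][0] in (ch, '.'):
                if tokens[s][1]:
                    nxt.add(s)      # stay on the starred token...
                    nxt.add(s + 1)  # ...or move past it
                else:
                    nxt.add(s + 1)
        states = closure(nxt)
    return len(tokens) in states  # accepted iff we consumed all tokens
```

A backtracking engine explores these alternatives one at a time and can revisit them exponentially often; the set simulation explores them all at once.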
【Paper Link】 【Pages】:749-757
【Authors】: M. H. R. Khouzani ; Saswati Sarkar ; Eitan Altman
【Abstract】: Malware attacks constitute a serious security risk that threatens to slow down the large-scale proliferation of wireless applications. As a first step towards thwarting this security threat, we seek to quantify the maximum damage inflicted on the system owing to such outbreaks and identify the most vicious attacks. We represent the propagation of malware in a battery-constrained mobile wireless network by an epidemic model in which the worm can dynamically control the rate at which it kills the infected node and also the transmission range and/or the media scanning rate. At each moment of time, the worm at each node faces the following trade-offs: (i) using larger transmission range and media scanning rate to accelerate its spread at the cost of exhausting the battery and thereby reducing the overall infection propagation rate in the long run or (ii) killing the node to inflict a large cost on the network, however at the expense of losing the chance of infecting more susceptible nodes at later times. We mathematically formulate the decision problems and utilize Pontryagin Maximum Principle from optimal control theory to quantify the damage that the malware can inflict on the network by deploying optimum decision rules. Next, we establish structural properties of the optimal strategy of the attacker over time. Specifically, we prove that it is optimal for the attacker to defer killing of the infective nodes in the propagation phase until reaching a certain time and then start the slaughter with maximum effort. We also show that in the optimal attack policy, the battery resources are used according to a decreasing function of time, i.e., mostly during the initial phase of the outbreak. Finally, our numerical investigations reveal a framework for identifying intelligent defense strategies that can limit the damage by appropriately selecting network parameters.
【Keywords】: invasive software; mobile radio; optimal control; radio networks; telecommunication security; Pontryagin maximum principle; battery-constrained mobile wireless network; decision problem; epidemic model; infection propagation rate; maximum damage malware attack; media scanning rate; optimal attack policy; optimal control theory; optimum decision rules; security risk; structural property; transmission range; wireless applications; Batteries; Communication system security; Computer worms; Costs; Intelligent networks; Investments; Peer to peer computing; Relays; Telecommunication traffic; Wireless networks
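The intuition behind the deferred-killing structure can be reproduced with a toy mean-field epidemic. This is a caricature (fixed rates, no battery dynamics, Euler integration), not the paper's model: the worm spreads first and only switches the kill rate on at `t_switch`, and deferring the switch lets it infect, and ultimately kill, more nodes:

```python
def simulate_outbreak(t_switch, beta=0.4, kill_rate=0.3,
                      dt=0.01, horizon=40.0):
    """Toy SI epidemic with a bang-bang kill control.  S, I, D are the
    fractions of susceptible, infected, and dead nodes; the worm kills
    infected nodes at kill_rate only after t_switch.  Returns the total
    fraction of nodes killed by the horizon (the attacker's 'damage')."""
    S, I, D = 0.99, 0.01, 0.0
    t = 0.0
    while t < horizon:
        u = kill_rate if t >= t_switch else 0.0
        new_inf = beta * S * I * dt   # mass-action infection
        new_dead = u * I * dt         # killing removes infected nodes
        S -= new_inf
        I += new_inf - new_dead
        D += new_dead
        t += dt
    return D
```

Killing from the start (`t_switch = 0`) removes the worm's own spreaders early and caps the damage, which is exactly the trade-off the optimal policy resolves by deferring.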
【Paper Link】 【Pages】:758-766
【Authors】: Jing Shi ; Rui Zhang ; Yunzhong Liu ; Yanchao Zhang
【Abstract】: People-centric urban sensing is a new paradigm gaining popularity. A main obstacle to its widespread deployment and adoption is the privacy concern of participating individuals. To tackle this open challenge, this paper presents the design and evaluation of PriSense, a novel solution to privacy-preserving data aggregation in people-centric urban sensing systems. PriSense is based on the concept of data slicing and mixing and can support a wide range of statistical additive and non-additive aggregation functions such as Sum, Average, Variance, Count, Max/Min, Median, Histogram, and Percentile with accurate aggregation results. PriSense can support strong user privacy against a tunable threshold number of colluding users and aggregation servers. The efficacy and efficiency of PriSense are confirmed by thorough analytical and simulation results.
【Keywords】: data handling; data privacy; mobile computing; statistics; PriSense; data aggregation; data mixing; data slicing; non-additive aggregation function; people-centric urban sensing; privacy preservation; statistical additive function; Aggregates; Communications Society; Computer architecture; Data privacy; Distributed computing; Histograms; Personal digital assistants; Sensor systems; Space technology; Statistical distributions
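The slicing-and-mixing idea can be sketched for the additive Sum case. This is a minimal illustration of the concept only, not the PriSense protocol; the readings and slice count are made up.

```python
import random

def slice_value(value, n_slices):
    """Split a private reading into random additive slices that sum to it."""
    slices = [random.randint(-1000, 1000) for _ in range(n_slices - 1)]
    slices.append(value - sum(slices))
    return slices

values = [12, 7, 30, 5]                # private sensor readings (invented)
all_slices = []
for v in values:
    all_slices.extend(slice_value(v, n_slices=3))
random.shuffle(all_slices)             # mixing: slices cannot be linked to users
assert sum(all_slices) == sum(values)  # the aggregate Sum is preserved
```

Each individual slice is uniformly random, so no single slice reveals a user's reading, yet the aggregate is exact.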
【Paper Link】 【Pages】:767-775
【Authors】: Jie Yang ; Yong Ge ; Hui Xiong ; Yingying Chen ; Hongbo Liu
【Abstract】: Recent years have witnessed increasing interest in passive intrusion detection for wireless environments, e.g., asset protection in industrial facilities and emergency rescue of trapped people. Most previous studies have focused primarily on exploiting a single intrusion indicator, such as moving variance, for capturing one intrusion pattern at a time. In the real world, however, there are many intrusion patterns that may be detectable only by combining different intrusion indicators and performing detection jointly. To this end, we propose a joint intrusion learning approach, which combines the detection power of several complementary intrusion indicators and detects different intrusion patterns at the same time. We develop the GREEK algorithm, which utilizes grid-based clustering over K-neighborhoods to effectively diagnose the presence of intrusions. Further, we show that the performance of intrusion detection can be enhanced by utilizing the collaborative detection efforts of multiple transmitter-receiver pairs. To validate the effectiveness of the joint intrusion learning method, we conducted experiments in a real office environment using an IEEE 802.15.4 (ZigBee) network. Our experimental results provide strong evidence of the effectiveness of our joint learning approach in performing passive intrusion detection with a minimized false positive rate.
【Keywords】: learning (artificial intelligence); pattern clustering; personal area networks; security of data; telecommunication computing; telecommunication security; ubiquitous computing; IEEE 802.15.4 network; Zigbee network; greek algorithm; grid-based clustering; joint intrusion learning approach; k-neighborhood clustering; multiple transmitter-receiver pairs; passive intrusion detection; pervasive wireless environments; single intrusion indicator; Clustering algorithms; Collaboration; Event detection; Fires; Industrial plants; Infrared detectors; Intrusion detection; Object detection; Protection; Wireless sensor networks
【Paper Link】 【Pages】:776-784
【Authors】: Mohamed Elsalih Mahmoud ; Xuemin Shen
【Abstract】: In multi-hop wireless networks, the mobile nodes usually act as routers to relay packets generated by other nodes. However, selfish nodes do not cooperate but make use of the honest ones to relay their packets, which has a negative effect on the fairness, security, and performance of the network. In this paper, we propose a novel incentive mechanism to stimulate cooperation in multi-hop wireless networks. Fairness can be achieved by using credits to reward the cooperative nodes. The overhead can be significantly reduced by using a cheating detection system (CDS) to secure the payment. Extensive security analysis demonstrates that the CDS can identify the cheating nodes effectively under different cheating strategies. Simulation results show that the overhead of the proposed incentive mechanism is significantly lower than that of existing mechanisms.
【Keywords】: mobile radio; radio networks; telecommunication security; cheating detection system; mobile nodes; multihop wireless networks; relay packets; security analysis; Ad hoc networks; Communications Society; Degradation; Peer to peer computing; Relays; Spread spectrum communication; Statistical analysis; Telecommunication traffic; Wireless mesh networks; Wireless networks
【Paper Link】 【Pages】:785-793
【Authors】: Lingjie Duan ; Jianwei Huang ; Biying Shou
【Abstract】: This paper presents the first analytical study of the optimal investment and pricing decisions of a cognitive mobile virtual network operator (C-MVNO) under spectrum supply uncertainty. Compared with a traditional MVNO who only obtains spectrum through long-term leasing contracts, a C-MVNO can acquire short-term spectrum both by sensing the empty "spectrum holes" of licensed bands and by dynamically leasing from the spectrum owner. As a result, a C-MVNO can make flexible investment and pricing decisions to match the current demands of the secondary unlicensed users. Spectrum sensing is typically cheaper than dynamic spectrum leasing, but the obtained useful spectrum amount is random due to primary licensed users' stochastic traffic. The C-MVNO needs to determine the optimal amounts of sensing and leasing spectrum, considering the trade-offs between cost and uncertainty. The C-MVNO also needs to determine the optimal retail price to sell the spectrum to the secondary unlicensed users, taking into account the wireless heterogeneity of users such as different maximum transmission power levels and channel gains. We model and analyze these decisions and the interactions between the C-MVNO and secondary users as a multi-stage Stackelberg game. We show several interesting properties of the network equilibrium, such as threshold structures of the optimal investment and pricing decisions, independence between the optimal price and users' wireless characteristics, and fair and predictable spectrum allocations to the users. Compared with the traditional MVNO, spectrum sensing can significantly improve the C-MVNO's expected profit and users' payoffs.
【Keywords】: cognitive radio; game theory; mobile radio; telecommunication traffic; C-MVNO; channel gain; cognitive mobile virtual network operator; maximum transmission power level; multistage Stackelberg game; network equilibrium; optimal investment; pricing; spectrum supply uncertainty; stochastic traffic; threshold structure; Bandwidth; Cognitive radio; Costs; Investments; Licenses; Pricing; Space technology; Stochastic processes; Uncertainty; Wireless sensor networks
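The cost-versus-uncertainty trade-off between sensing and leasing can be made concrete with a toy expected-profit calculation. All prices, costs, probabilities, and the linear model below are invented for illustration; the paper's analysis is a multi-stage Stackelberg game, not this formula.

```python
def expected_profit(sensed, leased, p_available=0.6, c_sense=1.0,
                    c_lease=3.0, retail_price=5.0, demand=8.0):
    """Toy model: sensed bandwidth is useful only with probability
    p_available; leased bandwidth is guaranteed but costlier."""
    expected_supply = p_available * sensed + leased
    revenue = retail_price * min(expected_supply, demand)
    cost = c_sense * sensed + c_lease * leased
    return revenue - cost

# Cheap-but-uncertain sensing can beat guaranteed-but-expensive leasing.
profit_sensing = expected_profit(sensed=10, leased=0)  # 5*6 - 10 = 20
profit_leasing = expected_profit(sensed=0, leased=6)   # 5*6 - 18 = 12
```

With these made-up numbers both strategies deliver the same expected supply, yet sensing yields the higher profit, echoing the abstract's conclusion that sensing can improve the C-MVNO's expected profit.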
【Paper Link】 【Pages】:794-802
【Authors】: Lin Chen ; Stefano Iellamo ; Marceau Coupechoux ; Philippe Godlewski
【Abstract】: Extensive research in recent years has shown the benefits of cognitive radio technologies to improve the flexibility and efficiency of spectrum utilization. This new communication paradigm, however, requires a well-designed spectrum allocation mechanism. In this paper, we propose an auction framework for cognitive radio networks to allow unlicensed secondary users (SUs) to share the available spectrum of licensed primary users (PUs) fairly and efficiently, subject to the interference temperature constraint at each PU. To study the competition among SUs, we formulate a non-cooperative multiple-PU multiple-SU auction game and study the structure of the resulting equilibrium by solving a non-continuous two-dimensional optimization problem. A distributed algorithm is developed in which each SU updates its strategy based on local information to converge to the equilibrium. We then extend the proposed auction framework to the more challenging scenario with free spectrum bands. We develop an algorithm based on no-regret learning to reach a correlated equilibrium of the auction game. The proposed algorithm, which can be implemented distributedly based on local observation, is especially suited to decentralized adaptive learning environments such as cognitive radio networks. Finally, through numerical experiments, we demonstrate the effectiveness of the proposed auction framework in achieving high efficiency and fairness in spectrum allocation.
【Keywords】: cognitive radio; distributed algorithms; frequency allocation; game theory; radiofrequency interference; auction framework; cognitive radio networks; decentralized adaptive learning environments; distributed algorithm; interference temperature constraint; licensed primary users; noncontinuous two-dimensional optimization problem; noncooperative multiple-PU multiple-SU auction game; spectrum allocation mechanism; unlicensed secondary users; Cognitive radio; Command and control systems; Communications Society; Computer science; Constraint optimization; Distributed algorithms; Interference constraints; Resource management; Telecommunications; Temperature
【Paper Link】 【Pages】:803-811
【Authors】: Animashree Anandkumar ; Nithin Michael ; Ao Tang
【Abstract】: The problem of cooperative allocation among multiple secondary users to maximize cognitive system throughput is considered. The channel availability statistics are initially unknown to the secondary users and are learnt via sensing samples. Two distributed learning and allocation schemes which maximize the cognitive system throughput or equivalently minimize the total regret in distributed learning and allocation are proposed. The first scheme assumes minimal prior information in terms of pre-allocated ranks for secondary users while the second scheme is fully distributed and assumes no such prior information. The two schemes have sum regret which is provably logarithmic in the number of sensing time slots. A lower bound is derived for any learning scheme which is asymptotically logarithmic in the number of slots. Hence, our schemes achieve asymptotic order optimality in terms of regret in distributed learning and allocation.
【Keywords】: cognitive radio; distributed algorithms; radio spectrum management; statistical distributions; asymptotic order optimality; asymptotically logarithmic; channel availability statistics; cooperative allocation; distributed allocation schemes; distributed learning schemes; maximize cognitive system; multiple users; spectrum access; time slots; Availability; Cognitive radio; Communications Society; Distributed algorithms; Signal processing algorithms; Statistical distributions; Statistics; Throughput; USA Councils; Upper bound
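The learning component of such schemes can be illustrated with the classical single-user UCB1 index rule, which likewise achieves regret logarithmic in the number of sensing slots. This is a textbook algorithm shown for intuition, not the distributed multi-user schemes proposed in the paper; the channel availability probabilities are made up.

```python
import math
import random

def ucb1_channel_selection(channel_probs, horizon, seed=0):
    """Classical UCB1: in each slot, sense the channel with the highest
    index mean + sqrt(2 ln t / n); availability is Bernoulli per slot."""
    rng = random.Random(seed)
    n = [0] * len(channel_probs)       # times each channel was sensed
    mean = [0.0] * len(channel_probs)  # empirical availability
    total_reward = 0
    for t in range(1, horizon + 1):
        if t <= len(channel_probs):
            k = t - 1                  # sense every channel once first
        else:
            k = max(range(len(channel_probs)),
                    key=lambda i: mean[i] + math.sqrt(2 * math.log(t) / n[i]))
        reward = 1 if rng.random() < channel_probs[k] else 0
        n[k] += 1
        mean[k] += (reward - mean[k]) / n[k]
        total_reward += reward
    return total_reward, n

reward, counts = ucb1_channel_selection([0.2, 0.5, 0.9], horizon=2000)
```

After 2000 slots the rule concentrates almost all sensing on the best channel while the number of suboptimal probes grows only logarithmically.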
【Paper Link】 【Pages】:812-820
【Authors】: Prasanna Chaporkar ; Alexandre Proutière ; Himanshu Asnani
【Abstract】: Consider a wireless system where a transmitter may send data to a set of receivers, or on various channels, experiencing random time-varying fading. The transmitter can send data to a single receiver or on a single channel at a time and may adapt its transmission power to the radio conditions of the chosen receiver/channel. Its objective is to implement a strategy defining at each time how to select the receiver/channel and transmission power, so as to maximize its throughput, i.e., its average sending rate, under an average power constraint. The optimization problem is easy when the fading conditions of all the receivers/channels are known. In many situations however, the instantaneous fading conditions are not known a priori; instead they have to be acquired, i.e., receivers/channels have to be probed, which consumes resources (time, spectrum, energy) in proportion to the number of probed receivers/channels. Hence, the transmitter may choose not to acquire the radio conditions of all the receivers/channels so as to spare resources for actual transmissions. In this paper, we aim at characterizing a joint probing, receiver/channel selection and power control strategy maximizing throughput. We provide an adaptive algorithm converging to the throughput-optimal strategy. This algorithm may be used in a wide class of wireless systems with limited information, such as broadcast systems without a priori knowledge of the instantaneous Channel-State Information (CSI). It can also be used to solve dynamic spectrum access problems such as those arising in cognitive radio systems, where secondary users can access large parts of the spectrum, but have to discover which portions of the spectrum offer more favorable radio conditions or less interference from primary users.
【Keywords】: cognitive radio; diversity reception; interference (signal); optimisation; power control; radio receivers; radio transmitters; wireless channels; average power constraint; broadcast systems; channel-state information; cognitive radio; dynamic spectrum access; interference; multichannel diversity; optimization problem; power control; radio receivers; radio transmitter; random time-varying fading; spare resources; wireless systems; Adaptive algorithm; Cognitive radio; Fading; Interference; Power control; Radio broadcasting; Radio transmitters; Receivers; Throughput; Time varying systems
【Paper Link】 【Pages】:821-829
【Authors】: Masanori Bando ; H. Jonathan Chao
【Abstract】: It is becoming apparent that the next generation IP route lookup architecture needs to achieve speeds of 100 Gbps and beyond while supporting both IPv4 and IPv6 with fast real-time updates to accommodate ever-growing routing tables. Some of the proposed multibit-trie based schemes, such as Tree Bitmap, have been used in today's high-end routers. However, their large data structures often require multiple external memory accesses for each route lookup. A pipelining technique is widely used to achieve high-speed lookup at the cost of using many external memory chips. Pipelining also often leads to poor memory load-balancing. In this paper, we propose a new IP route lookup architecture called FlashTrie that overcomes the shortcomings of the multibit-trie based approach. We use a hash-based membership query to limit off-chip memory accesses per lookup to one and to balance memory utilization among the memory modules. We also develop a new data structure called Prefix-Compressed Trie that reduces the size of a bitmap by more than 80%. Our simulation and implementation results show that FlashTrie can achieve 160 Gbps worst-case throughput while simultaneously supporting 2M prefixes for IPv4 and 279K prefixes for IPv6 using one FPGA chip and four DDR3 SDRAM chips. FlashTrie also supports incremental real-time updates.
【Keywords】: IP networks; field programmable gate arrays; telecommunication network routing; DDR3 SDRAM chips; FPGA chip; FlashTrie; IP route lookup architecture; IPv4; IPv6; Prefix-Compressed Trie; data structure; external memory chips; hash-based membership query; hash-based prefix-compressed trie; memory load-balancing; multibit-trie based schemes; multiple external memory; pipelining technique; real-time updates; routing tables; tree bitmap; Chaotic communication; Communications Society; Computer architecture; Costs; Data structures; Field programmable gate arrays; Pipeline processing; Routing; Telecommunication traffic; Throughput
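For intuition on hash-based route lookup, a minimal longest-prefix match can be built with one hash table per prefix length, probed from longest to shortest. This is a generic textbook technique, not FlashTrie itself, which uses a membership query to cap off-chip accesses at one rather than probing every length; the tiny FIB below is invented.

```python
# Toy longest-prefix match: one hash table per prefix length, probed from
# the longest length down. Prefixes are given as bit strings.

def build_tables(fib):
    """fib: dict mapping bit-string prefixes to next hops."""
    tables = {}
    for prefix, next_hop in fib.items():
        tables.setdefault(len(prefix), {})[prefix] = next_hop
    return tables

def lookup(tables, addr_bits):
    for length in sorted(tables, reverse=True):
        next_hop = tables[length].get(addr_bits[:length])
        if next_hop is not None:
            return next_hop
    return None  # no matching route

fib = {"10": "A", "1011": "B", "0": "C"}
tables = build_tables(fib)
# lookup(tables, "10110000") matches "1011" -> "B"
# lookup(tables, "10010000") falls back to "10" -> "A"
```

Each probe is one hash lookup; schemes like FlashTrie aim to avoid paying one probe per length in the worst case.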
【Paper Link】 【Pages】:830-838
【Authors】: Christos Kozanitis ; John Huber ; Sushil Singh ; George Varghese
【Abstract】: More fundamental than IP lookups and packet classification in routers is the extraction of fields such as IP Dest and TCP Ports that determine packet forwarding. While parsing of packet fields used to be easy, new shim layers (e.g., MPLS, 802.1Q, MAC-in-MAC) of possibly variable length have greatly increased the worst-case path in the parse tree. The problem is exacerbated by the need to accommodate new packet headers and to extract other higher layer fields. Programmable routers for projects such as GENI will need such flexible parsers. In this paper, we describe the design and implementation of the Kangaroo system, a flexible packet parser that can run at 40 Gbps even for worst-case packet headers. Because conventional solutions that traverse the parse tree one protocol at a time are too slow, Kangaroo uses lookahead to parse several protocol headers in one step using a new architecture in which a CAM directs the next set of bytes to be extracted. The challenge is to keep the number of CAM entries from growing exponentially with the amount of lookahead. We deal with this challenge using a non-uniform traversal of the parse tree, and an offline dynamic programming algorithm that calculates the optimal walk. Our experiments on a NetFPGA prototype show a speedup of 2 compared to an architecture with a lookahead of 1. The architecture can be implemented as a parsing block in a standard 400 MHz ASIC at 40 Gbps using less than 1% of chip area.
【Keywords】: IP networks; application specific integrated circuits; content-addressable storage; dynamic programming; field programmable gate arrays; table lookup; telecommunication network routing; trees (mathematics); IP dest; IP lookups; Kangaroo system; NetFPGA; TCP ports; application specific integrated circuits; bit rate 40 Gbit/s; content-addressable memory; frequency 400 MHz; offline dynamic programming; packet classification; packet forwarding; packet headers; parse tree; programmable routers; protocol headers; wire-speed parsing; worst-case path; CADCAM; Communications Society; Computer aided manufacturing; Dynamic programming; Ethernet networks; Multiprotocol label switching; Packet switching; Protocols; Switches; TCPIP
【Paper Link】 【Pages】:839-847
【Authors】: Cheng-Shang Chang ; Jay Cheng ; Tien-Ke Huang ; Xuan-Chao Huang ; Duan-Shin Lee
【Abstract】: Motivated by the design of high speed switching fabrics, in this paper we propose a bit-stuffing algorithm for generating forbidden transition codes to mitigate the crosstalk effect between adjacent wires in long on-chip buses. We first model a bus with forbidden transition constraints as a forbidden transition channel, and derive the Shannon capacity of such a channel. Then we perform a worst case analysis and a probabilistic analysis for the bit-stuffing algorithm. We show by both theoretic analysis and simulations that the coding rate of the bit stuffing encoding scheme for independent and identically distributed (i.i.d.) Bernoulli input traffic is quite close to the Shannon capacity, and hence is much better than those of the existing forbidden transition codes in the literature, including the Fibonacci representation.
【Keywords】: channel capacity; channel coding; crosstalk; probability; telecommunication switching; telecommunication traffic; Fibonacci representation; Shannon capacity; bit stuffing encoding scheme; coding rate; crosstalk avoidance; forbidden transition channel; forbidden transition code generation; forbidden transition constraints; high speed switching fabrics; identically distributed Bernoulli input traffic; independent distributed Bernoulli input traffic; on-chip buses; probabilistic analysis; worst case analysis; Algorithm design and analysis; Communication switching; Communications Society; Crosstalk; Data communication; Decoding; Encoding; Performance analysis; Switches; Wires
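The forbidden-transition constraint itself is easy to state in code: the worst-case crosstalk occurs when two adjacent wires switch in opposite directions in the same cycle. A small checker for that constraint (illustrative only; the paper's contribution is the bit-stuffing encoder and its analysis, not this check):

```python
def has_forbidden_transition(prev_word, cur_word):
    """True if any pair of adjacent wires switches in opposite directions
    (a 0->1 flip next to a 1->0 flip), the worst-case crosstalk pattern."""
    for i in range(len(prev_word) - 1):
        d0 = cur_word[i] - prev_word[i]          # -1, 0, or +1
        d1 = cur_word[i + 1] - prev_word[i + 1]
        if d0 * d1 == -1:                        # opposite simultaneous flips
            return True
    return False

# 011 -> 101: wires 0 and 1 flip in opposite directions -> forbidden
assert has_forbidden_transition([0, 1, 1], [1, 0, 1])
# 011 -> 111: only wire 0 flips -> allowed
assert not has_forbidden_transition([0, 1, 1], [1, 1, 1])
```

A forbidden transition code guarantees this predicate is never true between consecutive codewords on the bus.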
【Paper Link】 【Pages】:848-856
【Authors】: Xin Zhao ; Yaoqing Liu ; Lan Wang ; Beichuan Zhang
【Abstract】: The rapid growth of global routing tables has raised concerns among many Internet Service Providers. The most immediate concern regarding routing scalability is the size of the Forwarding Information Base (FIB), which seems to be growing at a faster pace than router hardware can support. This paper focuses on one potential solution to this problem - FIB aggregation, i.e., aggregating FIB entries without affecting the forwarding paths taken by data traffic. Compared with alternative solutions to the routing scalability problem, FIB aggregation is particularly appealing because it is a purely local software optimization limited within a router, requiring no changes to routing protocols or router hardware. To understand the feasibility of using FIB aggregation to extend router lifetime, we present several FIB aggregation algorithms and evaluate their performance using routing tables and updates from tens of networks. We find that FIB aggregation can reduce the FIB table size by as much as 70% with small computational overhead. We also show that the computational overhead can be controlled through various mechanisms.
【Keywords】: Internet; routing protocols; Internet service providers; aggregatability; forwarding information base; global routing tables; router forwarding tables; Algorithm design and analysis; Communications Society; Computer science; Costs; Energy consumption; Hardware; Routing protocols; Scalability; Telecommunication traffic; Web and internet services
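The core idea of FIB aggregation can be sketched on binary-string prefixes: repeatedly merge sibling prefixes that share a next hop into their common parent. This is a deliberately simplified rule for illustration, not one of the paper's algorithms, which handle more cases (e.g., prefixes covered by shorter ones).

```python
def aggregate_fib(fib):
    """Merge sibling prefixes with the same next hop into their parent,
    provided the parent carries no route of its own, until fixpoint.
    Forwarding behavior is unchanged under this (simplified) condition."""
    fib = dict(fib)
    changed = True
    while changed:
        changed = False
        for prefix in sorted(fib, key=len, reverse=True):
            if not prefix or prefix not in fib:
                continue
            sibling = prefix[:-1] + ("1" if prefix[-1] == "0" else "0")
            parent = prefix[:-1]
            if fib.get(sibling) == fib[prefix] and parent not in fib:
                next_hop = fib.pop(prefix)
                fib.pop(sibling)
                fib[parent] = next_hop
                changed = True
    return fib

aggregated = aggregate_fib({"00": "A", "01": "A", "10": "B", "11": "B"})
# four entries collapse to two: {"0": "A", "1": "B"}
```

Real FIB aggregation algorithms trade a little extra computation for exactly this kind of entry-count reduction while keeping data-plane forwarding identical.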
【Paper Link】 【Pages】:857-865
【Authors】: Qinghua Li ; Sencun Zhu ; Guohong Cao
【Abstract】: Existing routing algorithms for Delay Tolerant Networks (DTNs) assume that nodes are willing to forward packets for others. In the real world, however, most people are socially selfish; i.e., they are willing to forward packets for nodes with whom they have social ties but not for others, and such willingness varies with the strength of the social tie. Following the philosophy of designing for the user, we propose a Social Selfishness Aware Routing (SSAR) algorithm to allow user selfishness and provide better routing performance in an efficient way. To select a forwarding node, SSAR considers both users' willingness to forward and their contact opportunity, resulting in a better forwarding strategy than purely contact-based approaches. Moreover, SSAR formulates the data forwarding process as a Multiple Knapsack Problem with Assignment Restrictions (MKPAR) to satisfy user demands for selfishness and performance. Trace-driven simulations show that SSAR allows users to maintain selfishness and achieves better routing performance with low transmission cost.
【Keywords】: telecommunication network routing; forward packets; multiple knapsack problem with assignment restrictions; routing algorithms; social selfishness aware routing; socially selfish delay tolerant networks; Algorithm design and analysis; Communications Society; Computer science; Costs; Disruption tolerant networking; Packet switching; Peer to peer computing; Relays; Routing; Wireless networks
【Paper Link】 【Pages】:866-874
【Authors】: Theus Hossmann ; Thrasyvoulos Spyropoulos ; Franck Legendre
【Abstract】: Delay Tolerant Networks (DTN) are networks of self-organizing wireless nodes, where end-to-end connectivity is intermittent. In these networks, forwarding decisions are generally made using locally collected knowledge about node behavior (e.g., past contacts between nodes) to predict future contact opportunities. The use of complex network analysis has been recently suggested to perform this prediction task and improve the performance of DTN routing. Contacts seen in the past are aggregated to a social graph, and a variety of metrics (e.g., centrality and similarity) or algorithms (e.g., community detection) have been proposed to assess the utility of a node to deliver a content or bring it closer to the destination. In this paper, we argue that it is not so much the choice or sophistication of social metrics and algorithms that bears the most weight on performance, but rather the mapping from the mobility process generating contacts to the aggregated social graph. We first study two well-known DTN routing algorithms, SimBet and BubbleRap, that rely on such complex network analysis, and show that their performance heavily depends on how the mapping (contact aggregation) is performed. What is more, for a range of synthetic mobility models and real traces, we show that improved performance (up to a factor of 4 in terms of delivery ratio) is consistently achieved only for a relatively narrow range of aggregation levels, where the aggregated graph most closely reflects the underlying mobility structure. To this end, we propose an online algorithm that uses concepts from unsupervised learning and spectral graph theory to infer this 'correct' graph structure; this algorithm allows each node to locally identify and adjust to the optimal operating point, and achieves good performance in all scenarios considered.
【Keywords】: graph theory; mobility management (mobile radio); routing protocols; BubbleRap; DTN routing; SimBet; aggregated social graph; delay tolerant networks; mobility process; optimal mapping; spectral graph theory; Algorithm design and analysis; Communications Society; Complex networks; Computer networks; Disruption tolerant networking; Laboratories; Peer to peer computing; Performance analysis; Relays; Routing protocols
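The contact-aggregation step discussed above can be sketched directly: a time window turns a trace of contact events into a weighted social graph, and the window length is the aggregation level. The node names and timestamps below are invented; real schemes may also use sliding or exponentially weighted windows.

```python
from collections import defaultdict

def aggregate_contacts(contacts, window):
    """Aggregate timestamped contact events (t, u, v) that fall within the
    last `window` seconds into a weighted social graph."""
    t_now = max(t for t, _, _ in contacts)
    graph = defaultdict(int)
    for t, u, v in contacts:
        if t_now - t <= window:
            graph[frozenset((u, v))] += 1  # edge weight = number of contacts
    return dict(graph)

contacts = [(0, "a", "b"), (50, "a", "b"), (90, "b", "c"), (100, "a", "c")]
g_all = aggregate_contacts(contacts, window=1000)  # aggregates everything
g_recent = aggregate_contacts(contacts, window=20) # keeps only recent contacts
```

Too large a window blurs mobility dynamics into a dense static graph; too small a window forgets stable social ties, which is why an intermediate aggregation level works best.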
【Paper Link】 【Pages】:875-883
【Authors】: Bin Bin Chen ; Mun Choon Chan
【Abstract】: When Disruption Tolerant Networks (DTN) are used in commercial environments, an incentive mechanism should be employed to encourage cooperation among selfish mobile users. Key challenges in the design of an incentive scheme for DTN are that disconnections among nodes are the norm rather than the exception and that the network topology is time varying. Thus, it is difficult to detect selfish actions that can be launched by mobile users or to pre-determine the routing path to be used. In this paper, we propose MobiCent, a credit-based incentive system for DTN. While MobiCent allows the underlying routing protocol to discover the most efficient paths, it is also incentive compatible. Therefore, using MobiCent, rational nodes will not purposely waste transfer opportunities or cheat by creating non-existing contacts to increase their rewards. MobiCent also provides different payment mechanisms to cater to clients that want to minimize either payment or data delivery delay.
【Keywords】: credit transactions; mobile computing; routing protocols; telecommunication network topology; MobiCent; credit-based incentive system; data delivery delay; disruption tolerant network; network topology; routing protocol; time varying; transfer opportunity; Algorithm design and analysis; Communications Society; Computer science; Delay; Disruption tolerant networking; Incentive schemes; Mobile computing; Peer to peer computing; Routing protocols; Social network services
【Paper Link】 【Pages】:884-892
【Authors】: Daniel J. Klein ; João Pedro Hespanha ; Upamanyu Madhow
【Abstract】: We propose and investigate a deterministic traveling wave model for the progress of epidemic routing in disconnected mobile ad hoc networks. In epidemic routing, broadcast or unicast is achieved by exploiting mobility: message-carrying nodes "infect" non-message-carrying nodes when they come within communication range of them. Early probabilistic analyses of epidemic routing follow a "well-mixed" model which ignores the spatial distribution of the infected nodes, and hence do not provide good performance estimates unless the node density is very low. More recent work has pointed out that the infection exhibits wave-like characteristics, but does not provide a detailed model of the wave propagation. In this paper, we model message propagation using a reaction-diffusion partial differential equation that has a traveling wave solution, and show that the performance predictions made by the model closely match simulations in regimes where the well-mixed model breaks down. In particular, we show that well-mixed models are generally overly optimistic in regard to the scaling of the message delivery delay with problem parameters such as communication range, node density, and total area. In contrast to prior work, our model provides insight into the spatial distribution of the "infection," and reveals that the performance is sensitive to the geometry of the deployment region, not just its area.
【Keywords】: ad hoc networks; mobile communication; partial differential equations; telecommunication network routing; wave propagation; communication range; deterministic traveling wave model; disconnected mobile ad hoc network; epidemic routing; message propagation; non message carrying node; reaction-diffusion model; reaction-diffusion partial differential equation; sparsely connected MANET; spatial distribution; traveling wave solution; well mixed model; Broadcasting; Delay; Mobile ad hoc networks; Mobile communication; Partial differential equations; Performance analysis; Predictive models; Routing; Solid modeling; Unicast
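A discretized 1-D reaction-diffusion equation of Fisher-KPP type already exhibits the traveling wave behavior described above. The equation and all parameters here are a generic stand-in chosen for illustration, not the paper's model.

```python
def simulate_infection_wave(n_cells=200, steps=4000, D=0.2, beta=1.0, dt=0.01):
    """Explicit finite-difference sketch of di/dt = D*d2i/dx2 + beta*i*(1-i),
    where i[x] is the infected fraction at position x (unit cell spacing)."""
    i = [0.0] * n_cells
    i[0] = 1.0                          # outbreak starts at the left edge
    for _ in range(steps):
        new = i[:]
        for x in range(1, n_cells - 1):
            diffusion = D * (i[x - 1] - 2 * i[x] + i[x + 1])
            reaction = beta * i[x] * (1 - i[x])
            new[x] = i[x] + dt * (diffusion + reaction)
        new[-1] = new[-2]               # reflecting right boundary
        i = new
    return i

def front_position(infected, level=0.5):
    """Leftmost position where the infected fraction drops below `level`."""
    return next((x for x, v in enumerate(infected) if v < level), len(infected))

# The infection front advances at a roughly constant speed: a traveling wave.
front_early = front_position(simulate_infection_wave(steps=2000))
front_late = front_position(simulate_infection_wave(steps=4000))
```

Behind the front nearly all nodes carry the message, ahead of it almost none do, which is exactly the spatial structure the well-mixed model discards.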
【Paper Link】 【Pages】:893-901
【Authors】: Changhee Joo ; Jin-Ghoo Choi ; Ness B. Shroff
【Abstract】: In-network aggregation has become a promising technique for improving the energy efficiency of wireless sensor networks. Aggregating data at various nodes in the network results in a reduction in the amount of bits transmitted over the network, and hence, saves energy. In this paper, we focus on another important aspect of aggregation, i.e., delay performance. In conjunction with link scheduling, in-network aggregation can reduce the delay by lessening the demands for wireless resources and thus expediting data transmissions. We formulate the problem that minimizes the sum delay of sensed data, and analyze the performance of optimal scheduling with in-network aggregation in tree networks under the node-exclusive interference model. We provide a system-wide lower bound on the delay and use it as a benchmark for evaluating different scheduling policies. We numerically evaluate the performance of myopic and non-myopic scheduling policies, where the myopic policy considers only the current system state when making a scheduling decision, while the non-myopic policy also simulates future system states. We show that one-step non-myopic policies can substantially improve the delay performance. In particular, the proposed non-myopic greedy scheduling achieves a good tradeoff between performance and implementability.
【Keywords】: optimisation; telecommunication network topology; wireless sensor networks; data aggregation; delay performance; in-network aggregation; nonmyopic greedy scheduling; wireless sensor networks; Communications Society; Data analysis; Data communication; Delay; Energy efficiency; Network topology; Optimal scheduling; Peer to peer computing; Performance analysis; Wireless sensor networks
【Paper Link】 【Pages】:902-910
【Authors】: Ren-Shiou Liu ; Prasun Sinha ; Can Emre Koksal
【Abstract】: Energy harvesting sensor platforms have opened up a new dimension to the design of network protocols. In order to sustain the network operation, the energy consumption rate cannot be higher than the energy harvesting rate; otherwise, sensor nodes will eventually deplete their batteries. In contrast to traditional network resource allocation problems where the resources are static, the time-varying recharging rate presents a new challenge. In this paper, we first explore the performance of an efficient dual decomposition and subgradient method based algorithm, called QuickFix, for computing the data sampling rate and routes. However, fluctuations in recharging can happen at a faster time-scale than the convergence time of the traditional approach. This leads to battery outage and overflow scenarios, both of which are undesirable due to missed samples and lost energy harvesting opportunities, respectively. To address such dynamics, a local algorithm, called SnapIt, is designed to adapt the sampling rate with the objective of maintaining the battery at a target level. Our evaluations using the TOSSIM simulator show that QuickFix and SnapIt working in tandem can track the instantaneous optimum network utility while maintaining the battery at a target level. When compared with IFRC, a backpressure-based approach, our solution improves the total data rate by 42% on average while significantly improving the network utility.
【Keywords】: energy harvesting; protocols; resource allocation; wireless sensor networks; IFRC; QuickFix; backpressure-based approach; energy harvesting sensor platforms; joint energy management; network protocols; rechargeable sensor networks; resource allocation; Batteries; Convergence; Energy consumption; Energy management; Fluctuations; Heuristic algorithms; Protocols; Resource management; Sampling methods; Utility programs
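A battery-target adaptation rule in the spirit of SnapIt can be sketched as a simple proportional controller. This is illustrative only, not the paper's algorithm: the gain, sampling cost, capacity, and harvest profile below are all invented.

```python
def adapt_sampling_rate(harvest, battery=50.0, target=50.0, base_rate=10.0,
                        gain=0.2, cost_per_sample=0.1, steps=200):
    """Toy local rule: sample faster when the battery is above target and
    slower when below, so the level hovers near the target despite
    time-varying recharge. Returns the battery level trace."""
    history = []
    for t in range(steps):
        rate = max(0.0, base_rate + gain * (battery - target))
        battery += harvest(t) - cost_per_sample * rate
        battery = max(0.0, min(battery, 100.0))  # outage / capacity clamps
        history.append(battery)
    return history

# Recharge alternates between sunny and cloudy 20-slot periods.
hist = adapt_sampling_rate(harvest=lambda t: 1.5 if (t // 20) % 2 == 0 else 0.5)
```

Even under the alternating harvest, the battery level oscillates in a narrow band around the target instead of draining to outage or saturating at capacity.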
【Paper Link】 【Pages】:911-919
【Authors】: Yafeng Wu ; Gang Zhou ; John A. Stankovic
【Abstract】: Packet collisions cause packet loss and waste resources in wireless networks. The problem becomes even worse in dense WSNs, due to burst traffic and congestion around sinks. In this paper, we propose a novel protocol to recover collided packets. Our experiments on a testbed reveal that collisions between long packets and short packets cause a partial error pattern on collided packets, which can be used for efficient recovery. We give a theoretical analysis demonstrating that combining such collision recovery with CSMA protocols achieves a significant performance improvement. Then, we design ACR, an Active Collision Recovery protocol, which actively converts most potential collisions into LS-collisions, and then applies a lightweight FEC scheme to recover collided packets with such partial error patterns. We implement ACR on a Tmote testbed, and compare its performance with other packet recovery schemes. Results show that ACR significantly reduces the number of retransmissions, and achieves around a 25% improvement in transmission efficiency over other schemes.
【Keywords】: carrier sense multiple access; telecommunication congestion control; telecommunication traffic; wireless sensor networks; ACR; CSMA protocols; active collision recovery; burst-traffic; congestion around sinks; dense wireless sensor networks; packet collision; packet loss; wastes resources; Automatic repeat request; Communications Society; Computer science; Delay; Forward error correction; Multiaccess communication; Protocols; Road accidents; Testing; Wireless sensor networks
【Paper Link】 【Pages】:920-928
【Authors】: Kolar Purushothama Naveen ; Anurag Kumar
【Abstract】: We consider a wireless sensor network whose main function is to detect certain infrequent alarm events, and to forward alarm packets to a base station using geographical forwarding. The nodes know their locations, and they sleep-wake cycle, waking up periodically but not synchronously. In this situation, when a node has a packet to forward to the sink, there is a trade-off between how long the node waits for a suitable neighbor to wake up and the progress the packet makes towards the sink once it is forwarded to that neighbor. Hence, in choosing a relay node, we consider the problem of minimizing delay subject to a constraint on the progress. By constraint relaxation, we formulate this next-hop relay selection problem as a Markov decision process (MDP). The exact optimal solution, BF (Best Forward), can be found, but is computationally intensive. Next, we consider a simplified model in which the times between the wake-up instants of successive candidate relay nodes are assumed to be i.i.d. and exponentially distributed. The optimal policy for this model, SF (Simplified Forward), is a simple one-step-look-ahead rule. Simulations show that SF is very close in performance to BF, even for a reasonably small node density. We then study the end-to-end performance of SF in comparison with two extremal policies, Max Forward (MF) and First Forward (FF), and an end-to-end delay minimizing policy proposed by Kim et al. We find that, with an appropriate choice of the one-hop average progress constraint, SF can be tuned to provide a favorable trade-off between end-to-end packet delay and the number of hops in the forwarding path.
【Keywords】: Markov processes; telecommunication network routing; wireless sensor networks; Markov decision process; alarm packet; base station; best forward solution; exact optimal solution; infrequent alarm event detection; next hop relay selection problem; optimal policy; relay node; simplified forward solution; sleep wake cycling nodes; tunable locally optimal geographical forwarding; wireless sensor networks; Base stations; Communications Society; Delay; Event detection; Intrusion detection; Mathematical model; Peer to peer computing; Relays; Routing; Wireless sensor networks
【Paper Link】 【Pages】:929-937
【Authors】: Chi-Kin Chau ; Qian Wang ; Dah-Ming Chiu
【Abstract】: Paris Metro Pricing (PMP) is a simple multi-class flat-rate pricing scheme already practiced by transport systems, specifically by the Paris Metro at one time. The name is coined after Andrew Odlyzko proposed it for the Internet as a simple way to provide differentiated services. Subsequently, there were several analytical studies of this promising idea. The central issue of these studies is whether PMP is viable, namely, whether it will produce more profit for the service provider, or whether it will achieve more social welfare. The previous studies considered similar models, but arrived at different conclusions. In this paper, we point out that the key is how the users react to the congestion externality of the underlying system. We derive sufficient conditions of congestion functions that can guarantee the viability of PMP, and provide the relevant physical meanings of these conditions.
【Keywords】: DiffServ networks; Internet; pricing; Internet; Paris metro pricing; communication networks; differentiated services; multiclass flat-rate pricing scheme; service networks; service provider; Cloud computing; Communications Society; Educational institutions; IP networks; Optimized production technology; Portable media players; Pricing; Resource management; Sufficient conditions; Web and internet services
【Paper Link】 【Pages】:938-946
【Authors】: Prashanth Hande ; Mung Chiang ; A. Robert Calderbank ; Junshan Zhang
【Abstract】: This paper investigates pricing of Internet connectivity services in the context of a monopoly ISP selling broadband access to consumers. We first study the optimal combination of flat-rate and usage-based access price components for maximization of ISP revenue, subject to a capacity constraint on the data-rate demand. Next, we consider time-varying consumer utilities for broadband data rates that can result in uneven demand for data-rate over time. Practical considerations limit the viability of altering prices over time to smooth out the demanded data-rate. Despite such constraints on pricing, our analysis reveals that the ISP can retain the revenue by setting a low usage fee and dropping packets of consumer demanded data that exceed capacity. Regulatory attention on ISP congestion management discourages such "technical" practices and promotes economics based approaches. We characterize the loss in ISP revenue from an economics based approach. Regulatory requirements further impose limitations on price discrimination across consumers, and we derive the revenue loss to the ISP from such restrictions. We then develop partial recovery of revenue loss through non-linear pricing that does not explicitly discriminate across consumers. While determination of the access price is ultimately based on additional considerations beyond the scope of this paper, the analysis here can serve as a benchmark to structure access prices in broadband access networks.
【Keywords】: Internet; broadband networks; computer network management; economics; pricing; subscriber loops; ISP congestion management; ISP revenue maximization; Internet connectivity service pricing; broadband access networks; broadband data rates; capacity constraint; data-rate demand; economics based approach; flat-rate; nonlinear pricing; regulatory requirements; revenue loss recovery; time-varying consumer utility; usage fee; usage-based access price component; Aggregates; Communications Society; Context-aware services; Costs; Engineering management; IP networks; Monopoly; Pricing; USA Councils; Web and internet services
【Paper Link】 【Pages】:947-955
【Authors】: Huseyin Mutlu ; Murat Alanyali ; David Starobinski
【Abstract】: We consider a wireless provider who caters to two classes of customers, namely primary and secondary users. Primary users have long term contracts while secondary users are admitted and priced according to current availability of excess spectrum. Secondary users accept an advertised price with a certain probability defined by an underlying demand function. We analyze the problem of maximizing profit gained by admission of secondary users. Previous studies in the field usually assume that the demand function is known and that the call length distribution is also known and exponentially distributed. In this paper, we analyze more realistic settings where both of these quantities are unknown. Our main contribution is to derive near-optimal pricing strategies under such settings. We focus on occupancy-based pricing policies, which depend only on the total number of ongoing calls in the system. We first show that such policies are insensitive to call length distribution except through the mean. Next, we introduce a new on-line, occupancy-based pricing algorithm, called Measurement-based Threshold Pricing (MTP) that operates by measuring the reaction of secondary users to a specific price and does not require the demand function to be known. MTP optimizes a profit function that depends on price only. We prove that while the profit function can be multimodal, MTP converges to one of the local optima as fast as if the function were unimodal. Lastly, we provide numerical studies demonstrating the near-optimal performance of occupancy-based policies for diverse sets of call length distributions and demand functions and the quick convergence of MTP to near-optimal on-line profit.
【Keywords】: pricing; radio spectrum management; call length distribution; measurement-based threshold pricing; near-optimal pricing strategy; occupancy-based pricing policy algorithm; online pricing; profit function; secondary spectrum access; wireless provider; Bridges; Communications Society; Contracts; Convergence of numerical methods; Exponential distribution; Fasteners; Licenses; Pricing; Quality of service; Radio spectrum management
【Paper Link】 【Pages】:956-964
【Authors】: Jaeok Park ; Mihaela van der Schaar
【Abstract】: Peer-to-peer (P2P) networks offer a cost effective and easily deployable framework for sharing user-generated content. However, intrinsic incentive problems reside in P2P networks as the transfer of content incurs costs both to uploaders and to downloaders while the benefit accrues only to downloaders. We investigate the issues of incentives in content production and sharing over P2P networks using a game theoretic model. Peers do not share produced content at all at non-cooperative equilibria whereas Pareto efficiency requires peers to fully share produced content. There is also a divergence in the total amount of produced content between non-cooperative equilibria and Pareto efficiency. By imposing full sharing, we decompose the inefficiency of non-cooperative equilibria into two parts, inefficiency due to no sharing and inefficiency due to underproduction. As a method to remedy the incentive problems in P2P networks, two classes of pricing schemes, MP pricing schemes and linear pricing schemes, are proposed. We show that the proposed pricing schemes can achieve Pareto efficiency as non-cooperative equilibria. We also examine a linear pricing scheme that maximizes the revenue of the network manager.
【Keywords】: computer network management; game theory; incentive schemes; peer-to-peer computing; pricing; Pareto efficiency; content production; game theoretic model; incentive schemes; linear pricing schemes; non-cooperative equilibria; peer-to-peer networks; Communications Society; Costs; Game theory; IP networks; Peer to peer computing; Pricing; Production; Scalability; USA Councils; User-generated content
【Paper Link】 【Pages】:965-973
【Authors】: Jayakrishnan Nair ; Martin Andreasson ; Lachlan L. H. Andrew ; Steven H. Low ; John Doyle
【Abstract】: It has been recently discovered that heavy-tailed file completion time can result from protocol interaction even when file sizes are light-tailed. A key to this phenomenon is the RESTART feature where if a file transfer is interrupted before it is completed, the transfer needs to restart from the beginning. In this paper, we show that independent or bounded fragmentation produces light-tailed file completion time as long as the file size is light-tailed, i.e., in this case, heavy-tailed file completion time can only originate from heavy-tailed file sizes. If the file size is heavy-tailed, then the file completion time is clearly heavy-tailed. For this case, we show that when the file size distribution is regularly varying, then under independent or bounded fragmentation, the completion time tail distribution function is asymptotically upper bounded by that of the original file size stretched by a constant factor. We then prove that if the failure distribution has non-decreasing failure rate, the expected completion time is minimized by dividing the file into equal sized fragments; this optimal fragment size is unique but depends on the file size. We also present a simple blind fragmentation policy where the fragment sizes are constant and independent of the file size and prove that it is asymptotically optimal. Finally, we bound the error in expected completion time due to error in modeling of the failure process.
【Keywords】: file organisation; multipath channels; peer-to-peer computing; statistical distributions; asymptotic optimal fragments; blind fragmentation policy; bounded fragmentation; file fragmentation; file size distribution; heavy-tailed file completion time; independent fragmentation; light-tailed file completion; tail distribution function; unreliable channel; Communications Society; Distribution functions; Internet; Probability distribution; Protocols; Random variables; Robustness; Routing; Tail; USA Councils
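The optimal-fragmentation result in the abstract above admits a small numerical sketch. Assuming exponentially distributed failures of rate lam and a fixed per-fragment overhead (the overhead is an assumption added here so that the optimum is interior; the paper's model is more general), the expected time to complete a fragment needing time t under RESTART is the standard memoryless-failure expression (e^(lam*t) - 1)/lam:

```python
import math

def expected_completion(file_size, lam, n, overhead=5.0):
    # Expected completion time of a file split into n equal fragments,
    # under RESTART with exponential failures of rate lam and an assumed
    # fixed per-fragment overhead. A fragment needing time t completes
    # only if no failure occurs within t, giving (e^{lam*t} - 1)/lam
    # per fragment in expectation.
    t = file_size / n + overhead
    return n * (math.exp(lam * t) - 1.0) / lam

def best_fragment_count(file_size, lam, n_max=1000, overhead=5.0):
    # Brute-force search over the number of equal-sized fragments; the
    # paper proves equal sizing is optimal when the failure distribution
    # has non-decreasing failure rate.
    return min(range(1, n_max + 1),
               key=lambda n: expected_completion(file_size, lam, n, overhead))

n_star = best_fragment_count(100.0, 0.1)  # interior optimum: neither 1 nor n_max
```

Too few fragments make a single failure expensive (the whole fragment restarts); too many pay the per-fragment overhead repeatedly, so the expected completion time is minimized at an interior fragment count.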
【Paper Link】 【Pages】:974-982
【Authors】: Boxuan Gu ; Xiaole Bai ; Zhimin Yang ; Adam C. Champion ; Dong Xuan
【Abstract】: Malicious shellcodes are segments of binary code disguised as normal input data. Such shellcodes can be injected into a target process's virtual memory. They overwrite the process's return addresses and hijack control flow. Detecting and filtering out such shellcodes is vital to prevent damage. In this paper, we propose a new malicious shellcode detection methodology in which we take snapshots of the process's virtual memory before input data are consumed, and feed the snapshots to a malicious shellcode detector. These snapshots are used to instantiate a runtime environment that emulates the target process's input data consumption to monitor shellcodes' behaviors. The snapshots can also be used to examine the system calls that shellcodes invoke, their parameters, and the process's execution flow. We implement a prototype system in Debian Linux with kernel version 2.6.26. Our extensive experiments with real traces and thousands of malicious shellcodes illustrate our system's performance, with low overhead, few false negatives, and few false positives.
【Keywords】: binary codes; security of data; virtual storage; Debian Linux; binary code; hijack control flow; input data; kernel version 2.6.26; malicious shellcode detection; process execution flow; process return addresses; prototype system; shellcode behavior monitoring; target process; virtual memory snapshots; Binary codes; Communications Society; Computer science; Data engineering; Detectors; Feeds; Filtering; Pattern analysis; Runtime environment; USA Councils
【Paper Link】 【Pages】:983-991
【Authors】: Matteo Dell'Amico ; Pietro Michiardi ; Yves Roudier
【Abstract】: It is a well known fact that user-chosen passwords are somewhat predictable: by using tools such as dictionaries or probabilistic models, attackers and password recovery tools can drastically reduce the number of attempts needed to guess a password. Quite surprisingly, however, existing literature does not provide a satisfying answer to the following question: given a number of guesses, what is the probability that a state-of-the-art attacker will be able to break a password? To answer this question, we compare and evaluate the effectiveness of currently known attacks using various datasets of known passwords. We find that a "diminishing returns" principle applies: in the absence of an enforced password strength policy, weak passwords are common; on the other hand, as the attack goes on, the probability that a guess will succeed decreases by orders of magnitude. Even extremely powerful attackers will not be able to guess a substantial percentage of the passwords. The results of this work will help in evaluating the security of authentication means based on user-chosen passwords, and our methodology for estimating password strength can be used as a basis for creating more effective proactive password checkers for users and security auditing tools for administrators.
【Keywords】: estimation theory; probability; security of data; attackers; authentication; datasets; dictionaries; empirical analysis; password recovery; password strength estimation; probabilistic models; security auditing; Authentication; Best practices; Communications Society; Costs; Dictionaries; Internet; Length measurement; Predictive models; Resilience; Security
【Paper Link】 【Pages】:992-1000
【Authors】: Mengjun Xie ; Haining Wang
【Abstract】: This paper presents CARE, an autonomous email reputation system based on inter-domain collaboration. Within the framework of CARE, each domain independently builds its reputation database based on both the local email history and the information exchanged with other collaborating domains. CARE examines the trustworthiness of the email histories obtained from collaborators by correlating them with the local email history. To validate the efficacy of CARE, we have analyzed real email logs, conducted a DNS-based estimation experiment, and performed a series of simulations. Our experimental results show that CARE can effectively improve the reliability and performance of email systems.
【Keywords】: Internet; electronic data interchange; unsolicited e-mail; CARE; DNS-based estimation; collaboration-based autonomous reputation system; email logs; email services; information exchanged; inter-domain collaboration; local email history; reputation database; trustworthiness; Collaboration; Collaborative work; Communications Society; Databases; Educational institutions; History; Information filtering; Information filters; Large-scale systems; Web server
【Paper Link】 【Pages】:1001-1009
【Authors】: Lei Xie ; Bo Sheng ; Chiu Chiang Tan ; Hao Han ; Qun Li ; Daoxu Chen
【Abstract】: In this paper we consider how to efficiently identify tags on a moving conveyor. Considering realistic conditions such as path loss and multipath effects, we first propose a probabilistic model for RFID tag identification. Based on this model, we propose efficient solutions to identify moving RFID tags, exploiting the fixed-path mobility on the conveyor. A dynamic programming based solution and an adaptive solution are proposed to select optimized frame sizes during the query cycles. Simulation results indicate that by leveraging the probabilistic model our solutions achieve much better performance than using parameters suited to ideal propagation conditions.
【Keywords】: dynamic programming; probability; radiofrequency identification; RFID tag identification; adaptive solution; dynamic program; efficient tag identification; fixed-path mobility; mobile RFID systems; moving conveyor; multipath effect; optimized frame sizes; path loss; probabilistic model; Belts; Communications Society; Computer science; Intrusion detection; Laboratories; Media Access Protocol; Mobile computing; RFID tags; Radiofrequency identification; Supply chains
【Paper Link】 【Pages】:1010-1018
【Authors】: Bo Sheng ; Qun Li ; Weizhen Mao
【Abstract】: RFID is an emerging technology with many potential applications such as inventory management for supply chains. In practice, these applications often need a series of continuous scanning operations to accomplish a task. For example, if one wants to scan all the products with RFID tags in a large warehouse, given the limited reading range of an RFID reader, multiple scanning operations have to be launched at different locations to cover the whole warehouse. Usually, this series of scanning operations is not completely independent, as some RFID tags can be read by multiple processes. Simply scanning all the tags in the reading range during each process is inefficient because it collects a lot of redundant data and takes a long time. In this paper, we develop efficient schemes for continuous scanning operations defined in both the spatial and temporal domains. Our basic idea is to fully utilize the information gathered in previous scanning operations to reduce the scanning time of succeeding ones. We illustrate in the evaluation that our algorithms dramatically reduce the total scanning time when compared with other solutions.
【Keywords】: radiofrequency identification; warehousing; RFID systems; continuous scanning; multiple scanning; warehouse; Communications Society; Educational institutions; Intrusion detection; Inventory control; Inventory management; Protocols; RFID tags; Radiofrequency identification; Supply chain management; Supply chains
【Paper Link】 【Pages】:1019-1027
【Authors】: Tao Li ; Samuel Wu ; Shigang Chen ; Mark C. K. Yang
【Abstract】: RFID has been gaining popularity for inventory control, object tracking, and supply chain management in warehouses, retail stores, hospitals, etc. Periodically and automatically estimating the number of RFID tags deployed in a large area has many important applications in inventory management and theft detection. The prior work focuses on designing time-efficient algorithms that can estimate tens of thousands of tags in seconds. We observe that, for an RFID reader to access tags in a large area, active tags are likely to be used. These tags are battery-powered and use their own energy for information transmission. However, recharging batteries for tens of thousands of tags is laborious. Unlike the prior work, this paper studies the RFID estimation problem from the energy angle. Our goal is to reduce the amount of energy that is consumed by the tags during the estimation procedure. We design several energy-efficient probabilistic algorithms that iteratively refine a control parameter to optimize the information carried in the transmissions from the tags, such that both the number and the size of the transmissions are minimized.
【Keywords】: maximum likelihood estimation; radiofrequency identification; stock control; supply chain management; RFID estimation problem; information transmission; inventory control; inventory management; maximum likelihood estimation; object tracking; radiofrequency identification; supply chain management; theft detection; Algorithm design and analysis; Batteries; Energy efficiency; Hospitals; Inventory control; Inventory management; Iterative algorithms; RFID tags; Radiofrequency identification; Supply chain management
【Paper Link】 【Pages】:1028-1036
【Authors】: Hao Han ; Bo Sheng ; Chiu Chiang Tan ; Qun Li ; Weizhen Mao ; Sanglu Lu
【Abstract】: Radio Frequency IDentification (RFID) technology has attracted much attention due to its variety of applications, e.g., inventory control and object tracking. One important problem in RFID systems is how to quickly estimate the number of distinct tags without reading each tag individually. This problem plays a crucial role in many real-time monitoring and privacy-preserving applications. In this paper, we present an efficient and anonymous scheme for tag population estimation. This scheme leverages the position of the first reply from a group of tags in a frame. Results from mathematical analysis and extensive simulation demonstrate that our scheme outperforms other protocols proposed in the previous work.
【Keywords】: data privacy; mathematical analysis; radiofrequency identification; telecommunication security; RFID tags counting; anonymous scheme; mathematical analysis; privacy-preserving application; radio frequency identification technology; real-time monitoring application; tag population estimation; Analytical models; Communications Society; Educational institutions; Intrusion detection; Inventory control; Protocols; RFID tags; Radiofrequency identification; Remote monitoring; USA Councils
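The signal this scheme leverages, that the position of the first reply in a frame shrinks as the tag population grows, can be sketched with a small Monte-Carlo simulation of framed slotted ALOHA (an illustrative model only; the paper's estimator and its analysis are more involved):

```python
import random

def mean_first_reply_slot(n_tags, frame_size, trials=20000, seed=1):
    # Each of n_tags picks a reply slot uniformly at random in a frame;
    # return the Monte-Carlo mean index of the earliest occupied slot.
    # For uniform choices this is roughly frame_size / (n_tags + 1),
    # so the first-reply position carries information about n_tags.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += min(rng.randrange(frame_size) for _ in range(n_tags))
    return total / trials
```

A reader observing only where the first reply lands can thus infer the population size without singulating individual tags, which is what makes the scheme anonymous.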
【Paper Link】 【Pages】:1037-1045
【Authors】: Tao Shu ; Marwan Krunz
【Abstract】: We study the problem of finding the least-priced path (LPP) between a source and a destination in opportunistic spectrum access (OSA) networks. This problem is motivated by economic considerations, whereby spectrum opportunities are sold/leased to secondary radios (SRs). This incurs a communication cost, e.g., for traffic relaying. As the beneficiary of these services, the end user must compensate the service-providing SRs for their spectrum cost. To give an incentive (i.e., profit) for SRs to report their true cost, typically the payment to a SR should be higher than the actual cost. However, from an end user's perspective, unnecessary overpayment should be avoided. So we are interested in the optimal route selection and payment determination mechanism that minimizes the price tag of the selected route and at the same time guarantees truthful cost reports from SRs. This setup is in contrast to the conventional truthful least-cost path (LCP) problem, where the interest is to find the minimum-cost route. The LPP problem is investigated with and without capacity constraints at individual SRs. For both cases, our algorithmic solutions can be executed in polynomial time. The effectiveness of our algorithms in terms of price saving is verified through extensive simulations.
【Keywords】: radio access networks; telecommunication network routing; least-cost path problem; opportunistic spectrum access networks; secondary radios; spectrum opportunities; truthful least-priced-path routing; Broadcasting; Communications Society; Cost function; Optimized production technology; Polynomials; Relays; Routing; Strontium; Telecommunication traffic; Wireless communication
【Paper Link】 【Pages】:1046-1054
【Authors】: Amine Laourine ; Shiyao Chen ; Lang Tong
【Abstract】: The queueing performance of a (secondary) cognitive user is investigated for a hierarchical network where there are N independent and identical primary users. Each primary user employs a slotted transmission protocol, and its channel usage forms a two-state (busy, idle) discrete-time Markov chain. The cognitive user employs the optimal policy to select which channel to sense (and use if found idle) at each slot. In the framework of effective bandwidths, the stationary queue tail distribution of the cognitive user is estimated using a large deviation approach for which closed-form expressions are obtained when N = 2. Upper and lower bounds are obtained for the general N primary user network. For positively correlated primary transmissions, the bounds are shown to be asymptotically tight. Monte Carlo simulations using importance sampling techniques are used to validate the obtained large deviation estimates.
【Keywords】: Markov processes; Monte Carlo methods; cognitive radio; queueing theory; Monte Carlo simulations; discrete-time Markov chain; multichannel cognitive spectrum access; positively correlated primary transmissions; queuing analysis; slotted transmission protocol; Access protocols; Bandwidth; Closed-form solution; Cognitive radio; Communications Society; Computer networks; Probability distribution; Quality of service; Queueing analysis; Traffic control
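The primary-channel model in this entry is simple enough to state concretely: for a two-state (busy/idle) discrete-time Markov chain, the long-run fraction of idle slots follows directly from the transition probabilities. This is a generic sketch; the names p_bi and p_ib are naming assumptions, not the paper's notation:

```python
def stationary_idle_prob(p_bi, p_ib):
    # Two-state chain: p_bi = P(busy -> idle), p_ib = P(idle -> busy).
    # Solving pi = pi * P with pi_busy + pi_idle = 1 gives:
    #   pi_idle = p_bi / (p_bi + p_ib)
    return p_bi / (p_bi + p_ib)

# Positively correlated occupancy (p_bi + p_ib < 1) keeps a channel in
# its current state longer; this is the regime in which the paper's
# upper and lower bounds are shown to be asymptotically tight.
idle_fraction = stationary_idle_prob(0.2, 0.3)  # -> 0.4
```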
【Paper Link】 【Pages】:1055-1063
【Authors】: Shanshan Wang ; Junshan Zhang ; Lang Tong
【Abstract】: We consider a cognitive radio network where multiple secondary users (SUs) contend for spectrum usage, using random access, over available primary user (PU) channels. Our focus is on SUs' queueing delay performance, for which a systematic understanding is lacking. We take a fluid queue approximation approach to study the steady-state delay performance of SUs, for cases with a single PU channel and multiple PU channels. Using stochastic fluid models, we represent the queue dynamics as Poisson driven stochastic differential equations, and characterize the moments of the SUs' queue lengths accordingly. Since in practical systems, a secondary user would have no knowledge of other users' activities, its contention probability has to be set based on local information. With this observation, we develop adaptive algorithms to find the optimal contention probability that minimizes the mean queue lengths. Moreover, we study the impact of multiple channels and multiple interfaces, on SUs' delay performance. As expected, the use of multiple channels and/or multiple interfaces leads to significant delay reduction.
【Keywords】: Poisson equation; cognitive radio; differential equations; stochastic processes; telecommunication channels; Fluid Queue View; Poisson driven stochastic differential equations; cognitive radio networks; delay analysis; practical systems; primary user; random access; secondary users; stochastic fluid models; Cognitive radio; Delay; Differential equations; Fluid dynamics; Frequency; Performance analysis; Queueing analysis; Steady-state; Stochastic processes; Traffic control
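The local contention-probability tuning described above can be illustrated with the simplest slotted random-access model (an illustration only; the paper derives the optimum from a stochastic fluid model of the queues, not from this per-slot throughput expression):

```python
def success_prob(n, p):
    # Probability that exactly one of n contending secondary users
    # transmits in a slot, when each transmits independently w.p. p.
    return n * p * (1.0 - p) ** (n - 1)

def best_contention_prob(n, grid=10000):
    # Grid search over p; analytically the maximizer is p = 1/n,
    # so contending too aggressively degrades per-slot throughput.
    return max((k / grid for k in range(1, grid)),
               key=lambda p: success_prob(n, p))
```

For n = 10 the search returns p = 0.1, matching the analytic optimum 1/n. Since a secondary user does not know n or the other users' activities, an adaptive scheme like the paper's must converge to the right contention probability from local information alone.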
【Paper Link】 【Pages】:1064-1072
【Authors】: Xinyu Zhang ; Kang G. Shin
【Abstract】: Cooperative relay is a communication paradigm that aims to realize the capacity of multi-antenna arrays in a distributed manner. However, the symbol-level synchronization requirement among distributed relays limits its use in practice. We propose to circumvent this barrier with a cross-layer protocol called Distributed Asynchronous Cooperation (DAC). With DAC, multiple relays can schedule concurrent transmissions with packet-level (hence coarse) synchronization. The receiver then extracts multiple versions of each relayed packet via a collision-resolution algorithm, thus realizing the diversity gain of cooperative communication. We demonstrate the feasibility of DAC by prototyping and testing it on the GNURadio/USRP software radio platform. To explore its relevance at the network level, we introduce a DAC-based MAC, and a generic approach to integrate the DAC MAC/PHY layer into a typical routing algorithm. Considering the use of DAC for multiple network flows, we analyze the fundamental tradeoff between the improvement in diversity gain and the reduction in multiplexing opportunities. DAC is shown to improve the throughput and delay performance of lossy networks with medium-level link quality. Our analytical results are also confirmed by network-level simulation in ns-2.
【Keywords】: access protocols; antenna arrays; diversity reception; radio receivers; software radio; telecommunication network routing; wireless channels; DAC MAC/PHY layer; GNURadio/USRP software radio platform; MAC protocol; communication paradigm; cooperative communication; cooperative relay; cross-layer protocol; distributed asynchronous cooperation; diversity gain; medium-level link quality; multiantenna arrays; ns-2 network-level simulation; packet-level synchronization; radio receiver; routing algorithm; symbol-level synchronization; wireless relay networks; Diversity methods; Physical layer; Protocols; Receivers; Relays; Routing; Software prototyping; Software radio; Software testing; Throughput
【Paper Link】 【Pages】:1073-1081
【Authors】: Elias Kehdi ; Baochun Li
【Abstract】: Recent studies show that network coding improves multicast session throughput. In this paper, we demonstrate how random linear network coding can be incorporated to provide network diagnosis for peer-to-peer systems. We present a new trace collection protocol that allows operators to diagnose peer-to-peer networks. It is essential to monitor large-scale peer-to-peer applications by collecting measurements, referred to as snapshots, from the peers. However, existing solutions are not scalable and fail to collect measurements from peers that departed before the time of collection. We use progressive random linear network coding to disseminate the snapshots in the network, from which the server pulls data in a delayed fashion. We leverage the power of progressive encoding to increase block diversity and tolerate extreme block losses by introducing redundancy in the network. Peers cooperate by allocating cache capacity for other peers. Snapshots of departed peers can thus be retrieved from the network. We show how our protocol controls the redundancy introduced through progressive encoding, and thus scales to a large number of peers and tolerates a high level of peer dynamics.
【Keywords】: linear codes; multicast communication; network coding; peer-to-peer computing; protocols; random codes; multicast session throughput; network redundancy; peer-to-peer network diagnosis; progressive random linear network coding; trace collection protocol; Condition monitoring; Encoding; Large-scale systems; Network coding; Network servers; Peer to peer computing; Protocols; Redundancy; Throughput; Time measurement
【Paper Link】 【Pages】:1082-1090
【Authors】: John R. Douceur ; James W. Mickens ; Thomas Moscibroda ; Debmalya Panigrahi
【Abstract】: In this paper, we study the theory of collaborative upload bandwidth measurement in peer-to-peer environments. A host can use a bandwidth estimation probe to determine the bandwidth between itself and any other host in the system. The problem is that the result of such a measurement is not necessarily the sender's upload bandwidth, since the most bandwidth-restricted link on the path could be the receiver's download link rather than the sender's upload link. In this paper, we formally define the bandwidth determination problem and devise efficient distributed algorithms. We consider two models, the free-departure and no-departure models, depending on whether hosts keep participating in the algorithm even after their bandwidth has been determined. We present lower bounds on the time complexity of any collaborative bandwidth measurement algorithm in both models. We then show how, for realistic bandwidth distributions, the lower bounds can be overcome. Specifically, we present O(1)- and O(log log n)-time algorithms for the two models. We corroborate these theoretical findings with practical measurements of an implementation on PlanetLab.
【Keywords】: bandwidth allocation; peer-to-peer computing; PlanetLab; bandwidth estimation probe; bandwidth restricted link; collaborative upload bandwidth measurement; download bandwidth; free-departure model; no-departure model; peer-to-peer systems; time-complexity; upload speeds; Bandwidth; Collaboration; Communications Society; Computer science; Distributed algorithms; Extraterrestrial measurements; IP networks; Peer to peer computing; Probes; Velocity measurement
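The central observation can be made concrete in a toy model: a probe from sender s to receiver r observes min(up[s], down[r]), so a host's best observed sending rate is a lower bound on its upload, and it is exact once the host meets a receiver whose download capacity exceeds it. This is an illustrative sketch, not the paper's O(1) or O(log log n) algorithms.

```python
def probe(up, down, s, r):
    """A bandwidth probe from sender s to receiver r observes only the
    path bottleneck: min(sender upload, receiver download)."""
    return min(up[s], down[r])

def estimate_uploads(up, down):
    """Keep, per host, the largest rate it ever achieved as a sender.
    Since min(up[s], down[r]) <= up[s], the estimate is a lower bound,
    exact once s probes a receiver with down[r] >= up[s]."""
    n = len(up)
    est = [0] * n
    for s in range(n):
        for r in range(n):
            if s != r:
                est[s] = max(est[s], probe(up, down, s, r))
    return est
```

When every download capacity is below a sender's upload, that upload stays underdetermined (third test case below); scheduling probes so such ambiguities resolve quickly is exactly the problem the paper studies.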
【Paper Link】 【Pages】:1091-1099
【Authors】: Zhichun Li ; Anup Goyal ; Yan Chen ; Aleksandar Kuzmanovic
【Abstract】: Misconfigured P2P traffic caused by bugs in volunteer-developed P2P software or by attackers is prevalent. It affects both end users and ISPs. In this paper, we discover and study address-misconfigured P2P traffic, a major class of such misconfiguration. P2P address misconfiguration is a phenomenon in which a large number of peers send P2P file downloading requests to a "random" target on the Internet. Measuring three Honeynet datasets spanning four years and five different /8 networks, we find that address-misconfigured P2P traffic on average contributes 38.9% of Internet background radiation, increasing by more than 100% every year. In this paper, we design P2PScope, a measurement tool, to detect and diagnose such unwanted traffic. We find that, in all the P2P systems studied, address misconfiguration is caused by resource mapping contamination, i.e., the sources returned for a given file ID through P2P indexing are not valid. Different P2P systems have different reasons for such contamination. For eMule, we find that the root cause is mainly a network byte-ordering problem in the eMule Source Exchange protocol. For BitTorrent, one reason is that anti-P2P companies actively inject bogus peers into the P2P system; another is that the KTorrent implementation has a byte-order problem. We also design approaches to detect anti-P2P peers without false positives.
【Keywords】: computer network management; peer-to-peer computing; program debugging; security of data; telecommunication traffic; BitTorrent misconfiguration; Honeynet dataset; Internet; P2P file downloading; P2P indexing; P2PScope; address misconfigured P2P traffic; bogus peers; bugs; eMule source exchange protocol; measurement tool; network byte ordering problem; random target; unwanted traffic; volunteer developed P2P software; Communications Society; Computer bugs; Computer crime; Contamination; IP networks; Indexing; Internet; Pollution measurement; Telecommunication traffic; USA Councils
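The byte-ordering bug class behind such misconfiguration is easy to reproduce: packing an IPv4 address in host (little-endian) order instead of network order makes the advertised peer address come out byte-reversed, so download requests land on an unrelated host. This is a sketch of the bug class, not eMule's or KTorrent's actual wire format.

```python
import socket
import struct

def ip_to_wire_buggy(ip):
    """The bug class: serialize the address in host (little-endian)
    order instead of network byte order."""
    host_int = struct.unpack('>I', socket.inet_aton(ip))[0]
    return struct.pack('<I', host_int)

def ip_to_wire_correct(ip):
    return socket.inet_aton(ip)  # already in network byte order

def wire_to_ip(wire):
    """A conforming receiver interprets the four bytes in network order."""
    return socket.inet_ntoa(wire)
```

Here `wire_to_ip(ip_to_wire_buggy('11.22.33.44'))` yields `'44.33.22.11'`, an address in a different /8 entirely, which is how misdirected file requests end up as Internet background radiation.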
【Paper Link】 【Pages】:1100-1108
【Authors】: Elisha J. Rosensweig ; James F. Kurose ; Donald F. Towsley
【Abstract】: Many systems employ caches to improve performance. While isolated caches have been studied in depth, multi-cache systems are not well understood, especially in networks with arbitrary topologies. In order to gain insight into and manage these systems, a low-complexity algorithm for approximating their behavior is required. We propose a new algorithm, termed a-Net, that approximates the behavior of multi-cache networks by leveraging existing approximation algorithms for isolated LRU caches. We demonstrate the utility of a-Net using both per-cache and network-wide performance measures. We also perform factor analysis of the approximation error to identify system parameters that determine the precision of a-Net.
【Keywords】: approximation theory; cache storage; communication complexity; parameter estimation; LRU caches; a-NET; approximate models; approximation algorithms; approximation error; general multicache networks; low-complexity algorithm; multicache systems; system parameter identification; Approximation algorithms; Approximation error; Communications Society; Computer networks; Computer science; File systems; IP networks; Network topology; Performance analysis; Proposals
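A standard isolated-LRU approximation of the kind a-Net can build on is the Che approximation (chosen here for illustration; the paper specifies the exact single-cache model it uses): under independent requests, item i is a hit with probability 1 - exp(-rate_i * t_C), where the characteristic time t_C makes the expected occupancy equal the cache size.

```python
import math

def che_hit_rates(rates, cache_size):
    """Che approximation for one LRU cache: solve
    sum_i (1 - exp(-rate_i * tc)) = cache_size for tc by bisection,
    then report per-item hit probabilities."""
    assert 0 < cache_size < len(rates)
    def occupancy(tc):
        return sum(1 - math.exp(-r * tc) for r in rates)
    lo, hi = 0.0, 1.0
    while occupancy(hi) < cache_size:  # bracket the root
        hi *= 2
    for _ in range(80):  # bisection on the characteristic time
        mid = (lo + hi) / 2
        if occupancy(mid) < cache_size:
            lo = mid
        else:
            hi = mid
    tc = (lo + hi) / 2
    return [1 - math.exp(-r * tc) for r in rates]
```

a-Net's contribution is wiring such single-cache models together across an arbitrary topology, feeding each cache's miss stream into the caches behind it.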
【Paper Link】 【Pages】:1109-1117
【Authors】: Zizhan Zheng ; Zhixue Lu ; Prasun Sinha ; Santosh Kumar
【Abstract】: With the increasing popularity of media-enabled handhelds, the need for high-data-rate services for mobile users is evident. Large-scale wireless LANs (WLANs) can provide such a service, but they are expensive to deploy and maintain. Open WLAN access points (APs), on the other hand, need no new deployments, but can offer only opportunistic service with no guarantees on short-term throughput. In contrast, a carefully planned sparse deployment of roadside WiFi provides an economically scalable infrastructure with quality-of-service assurance to mobile users. In this paper, we study deployment techniques for roadside WiFi. In particular, we present a new metric, called contact opportunity, as a characterization of a roadside WiFi network. Informally, the contact opportunity of a given deployment measures the fraction of distance or time for which a mobile user is in contact with some AP when moving through a certain path. This metric is closely related to the quality of data service that a mobile user experiences while driving through the system. We then present an efficient deployment method that maximizes the worst-case contact opportunity under a budget constraint. We further show how to extend this concept and the deployment techniques to a more intuitive metric, the average throughput, by taking various dynamic elements into account. Simulations over a real road network and experimental results show that our approach achieves more than 200% higher minimum contact opportunity, 30%-100% higher average contact opportunity, and a significantly improved distribution of average throughput compared with two commonly used baseline algorithms.
【Keywords】: Internet; mobile communication; notebook computers; wireless LAN; WiFi; contact opportunity; data-rate services; handheld computer; mobile user; sparse deployment; vehicular Internet access; Communications Society; Costs; Large-scale systems; Quality of service; Throughput; Time measurement; Vehicle dynamics; Web and internet services; WiMAX; Wireless LAN
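On a single road modeled as the segment [0, L], the distance-based contact opportunity reduces to an interval-union computation (a simplified 1-D reading of the metric; the paper works on real road networks):

```python
def contact_opportunity(path_len, ap_positions, radius):
    """Fraction of the road segment [0, path_len] that lies within
    'radius' of at least one AP: merge the coverage intervals and
    divide the covered length by the path length."""
    intervals = sorted((max(0.0, x - radius), min(path_len, x + radius))
                       for x in ap_positions)
    covered, right = 0.0, 0.0
    for lo, hi in intervals:
        if hi <= right:
            continue  # interval already fully covered
        covered += hi - max(lo, right)
        right = hi
    return covered / path_len
```

Deployment optimization then maximizes, over candidate AP placements within the budget, the minimum of this quantity across users' paths.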
【Paper Link】 【Pages】:1118-1126
【Authors】: Nathanael Thompson ; Samuel C. Nelson ; Mehedi Bakht ; Tarek F. Abdelzaher ; Robin Kravets
【Abstract】: The widespread availability of mobile wireless devices offers growing opportunities for the formation of temporary networks with only intermittent connectivity. These intermittently-connected networks (ICNs) typically lack stable end-to-end paths. In order to improve the delivery rates of the networks, new store-carry-and-forward protocols have been proposed which often use message replication as a forwarding mechanism. Message replication is effective at improving delivery, but given the limited resources of ICN nodes, such as buffer space, bandwidth and energy, as well as the highly dynamic nature of these networks, replication can easily overwhelm node resources. In this work we propose a novel node-based replication management algorithm which addresses buffer congestion by dynamically limiting the replication a node performs during each encounter. The insight for our algorithm comes from a stochastic model of message delivery in ICNs with constrained buffer space. We show through simulation that our algorithm is effective, nearly tripling delivery rates in some scenarios, and imposes little overhead.
【Keywords】: electronic messaging; radio networks; telecommunication congestion control; congestion control; delivery rates; end-to-end paths; intermittently-connected networks; message replication; replication management algorithm; retiring replicants; Bandwidth; Communication system control; Communications Society; Computer science; Disruption tolerant networking; Mobile communication; Mobile computing; Peer to peer computing; Protocols; Stochastic processes
【Paper Link】 【Pages】:1127-1135
【Authors】: Qing Yu ; Jiming Chen ; Yanfei Fan ; Xuemin Shen ; Youxian Sun
【Abstract】: In this paper, we formulate multi-channel assignment in Wireless Sensor Networks (WSNs) as an optimization problem and show it is NP-hard. We then propose a distributed Game Based Channel Assignment algorithm (GBCA) to solve the problem. GBCA takes into account both the network topology information and transmission routing information. We prove that there exists at least one Nash Equilibrium in the channel assignment game. Furthermore, we analyze the sub-optimality of Nash Equilibrium and the convergence of the Best Response in the game. Simulation results are given to demonstrate that GBCA can reduce interference significantly and achieve satisfactory network performance in terms of delivery ratio, throughput, channel access delay and energy consumption.
【Keywords】: game theory; optimisation; telecommunication network routing; telecommunication network topology; wireless channels; wireless sensor networks; NP-hard; Nash equilibrium; channel access delay; channel assignment game; delivery ratio; distributed game based channel assignment algorithm; energy consumption; game theoretic approach; multichannel assignment; network topology information; optimization problem; transmission routing information; wireless sensor networks; Autonomous agents; Game theory; Interference; Nash equilibrium; Network topology; Protocols; Radio transceivers; Routing; Sensor phenomena and characterization; Wireless sensor networks
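The best-response dynamics the paper analyzes can be sketched on a conflict graph: each node in turn moves to the channel least used by its interfering neighbors, and because the number of conflicting edges acts as a potential function, the process terminates at a Nash equilibrium. This is a minimal sketch; GBCA additionally weighs topology and routing information.

```python
def best_response_channels(neighbors, n_channels, max_rounds=100):
    """Round-robin best response on a conflict graph. 'neighbors' maps
    each node to the list of nodes it interferes with."""
    chan = {v: 0 for v in neighbors}
    for _ in range(max_rounds):
        changed = False
        for v in neighbors:
            cost = [sum(1 for u in neighbors[v] if chan[u] == c)
                    for c in range(n_channels)]
            best = min(range(n_channels), key=cost.__getitem__)
            if cost[best] < cost[chan[v]]:  # strictly improving moves only
                chan[v] = best
                changed = True
        if not changed:
            break  # Nash equilibrium: no node can unilaterally improve
    return chan
```

Allowing only strictly improving moves is what makes the potential argument work: each move decreases the number of conflicting edges, so the dynamics cannot cycle.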
【Paper Link】 【Pages】:1136-1144
【Authors】: Kyunghan Lee ; Yung Yi ; Jaeseong Jeong ; Hyungsuk Won ; Injong Rhee ; Song Chong
【Abstract】: To the best of our knowledge, this is the first paper to consider joint optimization of link scheduling, routing and replication for disruption-tolerant networks (DTNs). The optimization problems for resource allocation in DTNs are typically solved using dynamic programming, which requires knowledge of future events such as meeting schedules and durations. This paper defines a new notion of optimality for DTNs, called snapshot optimality, where nodes are not clairvoyant, i.e., cannot look ahead into future events, and thus decisions are made using only contemporarily available knowledge. Unfortunately, the optimal solution for snapshot optimality still requires solving an NP-hard maximum weight independent set problem and global knowledge of who currently owns a copy and what their delivery probabilities are. This paper presents a new efficient approximation algorithm, called Distributed Max-Contribution (DMC), that performs greedy scheduling, routing and replication based only on locally and contemporarily available information. In a simulation study based on real GPS traces tracking over 4,000 taxis for about 30 days in a large city, DMC outperforms existing heuristically engineered resource allocation algorithms for DTNs.
【Keywords】: Global Positioning System; communication complexity; dynamic programming; mobile radio; scheduling; telecommunication network routing; GPS traces tracking; NP-hard problem; delay tolerant networks; disruption-tolerant networks; distributed max-contribution; dynamic programming; greedy scheduling; link scheduling; mobile wireless networks; optimal resource allocation; replication; routing; Approximation algorithms; Cities and towns; Disruption tolerant networking; Dynamic programming; Dynamic scheduling; Global Positioning System; NP-hard problem; Resource management; Routing; Scheduling algorithm
【Paper Link】 【Pages】:1145-1153
【Authors】: Lei Rao ; Xue Liu ; Le Xie ; Wenyu Liu
【Abstract】: The study of Cyber-Physical Systems (CPS) has been an active area of research, and the Internet Data Center (IDC) is an important emerging cyber-physical system. As the demand for Internet services has drastically increased in recent years, the power used by IDCs has been skyrocketing. While most existing research focuses on reducing the power consumption of IDCs, the power management problem of minimizing the total electricity cost has been overlooked. This is an important problem faced by service providers, especially in the current multi-electricity market, where the price of electricity may exhibit time and location diversity. Further, for these service providers, guaranteeing quality of service (i.e., service-level objectives, SLOs) such as service delay guarantees to the end users is of paramount importance. This paper studies the problem of minimizing the total electricity cost in a multi-electricity-market environment while guaranteeing quality of service, exploiting the location and time diversity of electricity prices. We model the problem as a constrained mixed-integer program and propose an efficient solution method. Extensive evaluations based on real-life electricity price data for multiple IDC locations illustrate the efficiency and efficacy of our approach.
【Keywords】: Internet; computer centres; integer programming; power aware computing; power markets; quality of service; constrained mixed-integer programming; cyber-physical system; distributed Internet Data Centers; electricity cost; electricity price; location diversity; multielectricity market environment; power consumptions; power management problem; quality of service; service providers; time diversity; Cloud computing; Cost function; Delay; Disaster management; Electricity supply industry; Energy management; Internet; Power system management; Quality of service; Switches
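A toy version of the optimization makes the trade-off concrete: split a request load across data centers with different electricity prices, subject to an M/M/1-style delay bound at each site. This is an illustrative brute force with hypothetical numbers, not the paper's mixed-integer formulation or solution method.

```python
import itertools

def min_cost_dispatch(total_load, capacities, prices, service_rates, max_delay):
    """Exhaustively search integer load splits; keep the cheapest one
    whose per-site M/M/1 delay 1/(mu - load) stays within max_delay."""
    best = None
    for split in itertools.product(*(range(c + 1) for c in capacities)):
        if sum(split) != total_load:
            continue
        feasible = all(load < mu and 1.0 / (mu - load) <= max_delay
                       for load, mu in zip(split, service_rates))
        if not feasible:
            continue
        cost = sum(p * load for p, load in zip(prices, split))
        if best is None or cost < best[0]:
            best = (cost, split)
    return best
```

With prices (3, 1), the cheap site takes everything until its delay bound binds, after which load spills to the expensive site; that interplay between price diversity and SLO guarantees is what the paper's formulation captures at scale.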
【Paper Link】 【Pages】:1154-1162
【Authors】: Xiaoqiao Meng ; Vasileios Pappas ; Li Zhang
【Abstract】: The scalability of modern data centers has become a practical concern and has attracted significant attention in recent years. In contrast to existing solutions that require changes in the network architecture and the routing protocols, this paper proposes using traffic-aware virtual machine (VM) placement to improve the network scalability. By optimizing the placement of VMs on host machines, traffic patterns among VMs can be better aligned with the communication distance between them, e.g. VMs with large mutual bandwidth usage are assigned to host machines in close proximity. We formulate the VM placement as an optimization problem and prove its hardness. We design a two-tier approximate algorithm that efficiently solves the VM placement problem for very large problem sizes. Given the significant difference in the traffic patterns seen in current data centers and the structural differences of the recently proposed data center architectures, we further conduct a comparative analysis on the impact of the traffic patterns and the network architectures on the potential performance gain of traffic-aware VM placement. We use traffic traces collected from production data centers to evaluate our proposed VM placement algorithm, and we show a significant performance improvement compared to existing general methods that do not take advantage of traffic patterns and data center network characteristics.
【Keywords】: computer centres; routing protocols; virtual machines; data center network architecture; data center network characteristics; network scalability; routing protocols; traffic patterns; traffic-aware virtual machine placement; two-tier approximate algorithm; Algorithm design and analysis; Bandwidth; Pattern analysis; Performance analysis; Routing protocols; Scalability; Telecommunication traffic; Virtual machining; Virtual manufacturing; Voice mail
【Paper Link】 【Pages】:1163-1171
【Authors】: Guohui Wang ; T. S. Eugene Ng
【Abstract】: Cloud computing services allow users to lease computing resources from large scale data centers operated by service providers. Using cloud services, users can deploy a wide variety of applications dynamically and on-demand. Most cloud service providers use machine virtualization to provide flexible and cost-effective resource sharing. However, few studies have investigated the impact of machine virtualization in the cloud on networking performance. In this paper, we present a measurement study to characterize the impact of virtualization on the networking performance of the Amazon Elastic Cloud Computing (EC2) data center. We measure the processor sharing, packet delay, TCP/UDP throughput and packet loss among Amazon EC2 virtual machines. Our results show that even though the data center network is lightly utilized, virtualization can still cause significant throughput instability and abnormal delay variations. We discuss the implications of our findings on several classes of applications.
【Keywords】: Internet; computer centres; performance evaluation; virtual machines; Amazon elastic cloud computing data center; TCP/UDP throughput; cloud computing services; cloud service providers; computing resources; data center network; machine virtualization; network performance; networking performance; packet delay; packet loss; processor sharing; virtual machines; Cloud computing; Communications Society; Computer networks; Large-scale systems; Loss measurement; Propagation delay; Resource management; Resource virtualization; Throughput; Virtual machining
【Paper Link】 【Pages】:1172-1180
【Authors】: Yun Wang ; Kai Li ; Jie Wu
【Abstract】: Distance estimation is fundamental to many functionalities of wireless sensor networks and has been studied intensively in recent years. A critical challenge in distance estimation is handling anisotropy in sensor networks. Compared with isotropic networks, anisotropic networks are more intractable in that their properties vary with the direction of measurement. Anisotropy results from various factors, such as geographic shape, irregular radio patterns, node density, and the impact of obstacles. In this paper, we study the problem of measuring the irregularity of sensor networks and evaluating its impact on distance estimation. In particular, we establish a new metric to measure irregularity along a path in sensor networks, and identify turning nodes where a considered path is inflected. Furthermore, we develop an approach to construct a virtual ruler for distance estimation between any pair of sensor nodes. The construction of the virtual ruler is carried out using distance measurements among beacon nodes. However, it does not require beacon nodes to be deployed uniformly throughout the network. Compared with existing methods, our approach neither assumes global knowledge of boundary recognition nor relies on a uniform distribution of beacon nodes. Therefore, it is robust and applicable in practical environments. Simulation results show that our approach outperforms previous methods such as DVDistance and PDM.
【Keywords】: distance measurement; telecommunication computing; virtual reality; wireless sensor networks; anisotropic sensor networks; beacon nodes; boundary recognition; distance estimation; distance measurements; node density; virtual ruler; wireless sensor networks; Anisotropic magnetoresistance; Communications Society; Computer networks; Computer science; Measurement errors; Peer to peer computing; Pollution measurement; Sensor phenomena and characterization; Shape; Wireless sensor networks
【Paper Link】 【Pages】:1181-1189
【Authors】: Siyao Cheng ; Jianzhong Li ; Qianqian Ren ; Lei Yu
【Abstract】: Aggregations of sensed data are very important for users to obtain summary information about a monitored area in wireless sensor network (WSN) applications. Since approximate aggregation results are sufficient for users to perform analysis and make decisions, many approximate aggregation algorithms have been proposed for WSNs. However, most of these algorithms have fixed error bounds and cannot meet arbitrary precision requirements, while the uniform-sampling-based algorithm, which can achieve arbitrary precision, is suitable only for static networks. Considering the dynamic nature of WSNs, in this paper we propose an approximate aggregation algorithm based on Bernoulli sampling that satisfies arbitrary precision requirements. In addition, two adaptive algorithms are proposed: one adapts the sample as the precision requirement varies, and the other adapts the sample as the sensed data vary. Theoretical analysis and experimental results show that the proposed algorithms achieve high performance in terms of accuracy and energy consumption.
【Keywords】: sampling methods; wireless sensor networks; (ε, δ)-approximate aggregation; Bernoulli sampling; large-scale sensor networks; uniform sampling based algorithm; wireless sensor networks; Algorithm design and analysis; Clustering algorithms; Communications Society; Energy consumption; Inference algorithms; Large-scale systems; Monitoring; Performance analysis; Sampling methods; Wireless sensor networks
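The core estimator is standard: if every node reports its reading independently with probability p, dividing the sample sum by p gives an unbiased (Horvitz-Thompson) estimate of the true sum, and p can be raised or lowered as the precision requirement changes. A minimal sketch follows; the (ε, δ) analysis that chooses p, and the adaptive maintenance of the sample, are the paper's contribution.

```python
def bernoulli_sample(values, p, rng):
    """Each node reports its reading independently with probability p."""
    return [v for v in values if rng.random() < p]

def estimate_sum(sample, p):
    """Horvitz-Thompson estimator: each value was included with
    probability p, so sum(sample) / p is unbiased for the true sum."""
    return sum(sample) / p

def estimate_avg(sample, p, n):
    """Average over the (known) population size n."""
    return estimate_sum(sample, p) / n
```

Raising p shrinks the estimator's variance at the cost of more transmissions, which is exactly the precision/energy trade-off the adaptive algorithms manage.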
【Paper Link】 【Pages】:1190-1198
【Authors】: Zainul Charbiwala ; Supriyo Chakraborty ; Sadaf Zahedi ; Younghun Kim ; Ting He ; Chatschik Bisdikian ; Mani B. Srivastava
【Abstract】: Data loss in wireless sensing applications is inevitable, and while there have been many attempts at coping with this issue, recent developments in the area of Compressive Sensing (CS) provide a new and attractive perspective. Since many physical signals of interest are known to be sparse or compressible, employing CS not only compresses the data and reduces the effective transmission rate, but also improves the robustness of the system to channel erasures. This is possible because reconstruction algorithms for compressively sampled signals are not hampered by the stochastic nature of wireless link disturbances, which has traditionally plagued attempts at proactively handling the effects of these errors. In this paper, we propose that if CS is employed for source compression, then CS can further be exploited as an application-layer erasure coding strategy for recovering missing data. We show that CS erasure encoding (CSEC) with random sampling is efficient for handling missing data in erasure channels, paralleling the performance of BCH codes, with the added benefit of graceful degradation of the reconstruction error even when the amount of missing data far exceeds the designed redundancy. Further, since CSEC is equivalent to nominal oversampling in the incoherent measurement basis, it is computationally cheaper than conventional erasure coding. We support our proposal through extensive performance studies.
【Keywords】: channel coding; data communication; data compression; radio links; wireless sensor networks; compressive sensing; erasure channel coding; reconstruction algorithms; robust data transmission; source compression; wireless link disturbances; wireless sensor networks; Costs; Data communication; Decoding; Image coding; Propagation losses; Reconstruction algorithms; Robustness; Sampling methods; Stochastic processes; Wireless sensor networks
【Paper Link】 【Pages】:1199-1207
【Authors】: Rui Zhang ; Jing Shi ; Yunzhong Liu ; Yanchao Zhang
【Abstract】: Most large-scale sensor networks are expected to follow a two-tier architecture, with resource-poor sensor nodes at the lower tier and resource-rich master nodes at the upper tier. Master nodes collect data from sensor nodes and then answer queries from the network owner on their behalf. In hostile environments, master nodes may be compromised by the adversary and then instructed to return fake and/or incomplete data in response to data queries. Such application-level attacks are more harmful and difficult to detect than blind DoS attacks on network communications, especially when the query results are the basis for critical decisions such as military actions. This paper presents three schemes whereby the network owner can verify the authenticity and completeness of fine-grained top-k query results in tiered sensor networks, which is the first work of its kind. The proposed schemes are built upon symmetric cryptographic primitives and force compromised master nodes to return both authentic and complete top-k query results to avoid being caught. Detailed theoretical and quantitative results confirm the high efficacy and efficiency of the proposed schemes.
【Keywords】: message authentication; query processing; telecommunication security; wireless sensor networks; application-level attacks; blind denial-of-service attacks; fine-grained top-k queries; resource-poor sensor nodes; resource-rich master nodes; symmetric cryptographic primitives; tiered sensor networks; Business; Communications Society; Computer architecture; Computer crime; Large-scale systems; Memory; Peer to peer computing; Query processing; Sensor phenomena and characterization; Wireless sensor networks
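One generic way to get both properties from symmetric primitives, sketched here as an illustration of the chaining idea rather than the paper's three schemes, is to have each sensor MAC its sorted readings pairwise: every item's tag also covers the next smaller score, so a compromised master can neither forge an item (MAC check fails) nor silently omit one (the chain breaks).

```python
import hashlib
import hmac

def _tag(key, epoch, cur, nxt):
    return hmac.new(key, f"{epoch}|{cur}|{nxt}".encode(), hashlib.sha256).digest()

def sign_readings(key, epoch, readings):
    """Sensor side: sort descending and chain each score to its
    successor (a sentinel closes the chain)."""
    scores = sorted(readings, reverse=True) + [None]
    return [(cur, nxt, _tag(key, epoch, cur, nxt))
            for cur, nxt in zip(scores, scores[1:])]

def verify_topk(key, epoch, items, k):
    """Owner side: check every MAC (authenticity) and that successive
    items link up (completeness)."""
    if len(items) != k:
        return False
    for i, (cur, nxt, tag) in enumerate(items):
        if not hmac.compare_digest(tag, _tag(key, epoch, cur, nxt)):
            return False
        if i + 1 < k and items[i + 1][0] != nxt:
            return False  # a reading between cur and nxt was dropped
    return True
```

The master simply forwards the first k signed entries; dropping the second-ranked reading breaks the link from the first, and altering a score invalidates its MAC.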
【Paper Link】 【Pages】:1208-1216
【Authors】: Ahmed Elmokashfi ; Amund Kvalbein ; Constantine Dovrolis
【Abstract】: The scalability limitations of BGP have been a major concern in the networking community lately. An important issue in this respect is the rate of routing updates (churn) that BGP routers must process. This paper presents an analysis of the evolution of churn in four networks in the backbone of the Internet over the last six years, using update traces from the Routeviews project. The churn rate varies widely over time and between networks, and cannot be understood through "black-box" statistical analysis. Instead, we take a different approach, focusing on the underlying reasons for BGP churn evolution. Through our analysis, we are able to identify and isolate the main reasons behind many of the anomalies in the churn time series. We find that duplicate announcements are a major churn contributor, responsible for most large spikes in the churn time series. Other intense periods of churn are caused by misconfigurations or other special events in or close to the monitored AS, and hence limiting these is an important means of limiting churn. We then analyze the remaining "baseline" churn, and find that it is increasing at a rate much slower than the increase in routing table size.
【Keywords】: Internet; routing protocols; statistical analysis; time series; BGP; Internet; churn evolution; statistical analysis; time series; Communications Society; IEEE news; IP networks; Internet; Laboratories; Routing; Scalability; Spine; Statistical analysis; Time series analysis
【Paper Link】 【Pages】:1217-1225
【Authors】: Gábor Rétvári ; Gábor Németh
【Abstract】: Until recent years, it was more or less undisputed common sense that an accurate view of traffic demands is indispensable for optimizing the flow of traffic through a network. Lately, this premise has been questioned sharply: it was shown that setting just a single routing, so-called demand-oblivious routing, is sufficient to accommodate any admissible traffic matrix in the network with moderate link overload, so no prior information on demands is absolutely necessary for efficient traffic engineering. Demand-oblivious routing lends itself to distributed implementations, so it scales well. In this paper, we generalize demand-oblivious routing in a new way: we show that, in contrast to the distributed case, centralized demand-oblivious routing can eliminate link overload completely. What is more, our centralized scheme allows for optimizing the routes with respect to an arbitrary linear or quadratic objective function. We realize, however, that a centralized scheme can become prohibitively complex; therefore, we propose a hybrid distributed-centralized algorithm which, according to our simulations, strikes a good balance between efficiency, scalability and complexity.
【Keywords】: telecommunication network routing; centralized demand oblivious routing; distributed demand oblivious routing; network traffic; traffic engineering; Art; Communications Society; Geometry; Informatics; Internet; Monitoring; Routing; Scalability; Telecommunication traffic; Traffic control
【Paper Link】 【Pages】:1226-1234
【Authors】: Yuval Shavitt ; Yaron Singer
【Abstract】: When forwarding packets in the Internet, Autonomous Systems (ASes) frequently choose the shortest path in their network to the next-hop AS in the BGP path, a strategy known as hot-potato routing. As a result, paths in the Internet are suboptimal from a global perspective. For peering ASes who exchange traffic without payments, path trading, i.e., complementary deviations from hot-potato routing, appears to be a desirable way to deal with these inefficiencies. In recent years, path trading approaches have been suggested as a means for interdomain traffic engineering between neighboring ASes, as well as between multiple ASes to achieve global efficiency. Surprisingly, little is known about the computational complexity of finding path trading solutions, or the conditions which guarantee the optimality or even approximability of a path trading protocol. In this paper we explore the computational feasibility of computing path trading solutions between peering ASes. We first show that finding a path trading solution between a pair of ASes is NP-complete, and that path trading solutions are even NP-hard to approximate. We continue to explore the feasibility of implementing policies between multiple ASes and show that, even if the bilateral path trading problem is tractable for every AS pair in the set of trading ASes, path trading between multiple ASes is NP-hard, and NP-hard to approximate as well. Despite the above negative results, we show a pseudo-polynomial algorithm to compute path trading solutions. Thus, if the range of the instances is bounded, one can compute solutions efficiently for peering ASes. We evaluate the path trading algorithm on pairs of ASes using real network topologies. Specifically, we use real PoP-level maps of ASes in the Internet to show that path trading can substantially mitigate the inefficiencies associated with hot-potato routing.
【Keywords】: Internet; computational complexity; optimisation; routing protocols; telecommunication network topology; NP-complete problems; NP-hard problems; PoP-level maps; autonomous systems; computational complexity; hot-potato routing; interdomain traffic engineering; internet; next-hop AS; path trading algorithm; pseudopolynomial algorithm; real network topologies; routing protocols; Communications Society; Computational complexity; Computer science; Forward contracts; IP networks; Internet; Network topology; Peer to peer computing; Routing protocols; Telecommunication traffic
【Paper Link】 【Pages】:1235-1243
【Authors】: Kin Wah Kwong ; Lixin Gao ; Roch Guérin ; Zhi-Li Zhang
【Abstract】: With network components increasingly reliable, routing is playing an ever greater role in determining network reliability. This has spurred much activity in improving routing stability and reaction to failures, and rekindled interest in centralized routing solutions, at least within a single routing domain. Centralizing decisions eliminates uncertainty and many inconsistencies, and offers added flexibility in computing routes that meet different criteria. However, it also introduces new challenges; especially in reacting to failures where centralization can increase latency. This paper leverages the flexibility afforded by centralized routing to address these challenges. Specifically, we explore when and how standby backup forwarding options can be activated, while waiting for an update from the centralized server after the failure of an individual component (link or node). We provide analytical insight into the feasibility of such backups as a function of network structure, and quantify their computational complexity. We also develop an efficient heuristic reconciling protectability and performance, and demonstrate its effectiveness in a broad range of scenarios. The results should facilitate deployments of centralized routing solutions.
【Keywords】: IP networks; computational complexity; telecommunication network reliability; telecommunication network routing; IP networks; centralized routing; centralized server; computational complexity; network components; network reliability; network structure; protection routing; routing stability; single routing domain; Costs; Delay; Distributed computing; IP networks; Network servers; Protection; Routing; Scalability; Switches; Tree graphs
【Paper Link】 【Pages】:1244-1252
【Authors】: Yu Gu ; Guofei Jiang ; Vishal K. Singh ; Yueping Zhang
【Abstract】: Network tomography has been proposed to ascertain internal network performance from end-to-end measurements. In this work, we present priority probing, an optimal probing scheme for unicast network delay tomography that is proven to provide the most accurate estimation. We first demonstrate that the Fisher information matrix in unicast network delay tomography can be decomposed into an additive form where each term can be obtained numerically. This establishes the space over which we can design the optimal probing scheme. Then, we formulate the optimal probing problem as a semi-definite programming (SDP) problem. High computational complexity confines the SDP solution to small-scale scenarios. In response, we propose a greedy algorithm that approximates the optimal solution. Evaluations through simulation demonstrate that priority probing effectively increases estimation accuracy with a fixed number of probes.
【Keywords】: computational complexity; delays; greedy algorithms; matrix algebra; optimisation; telecommunication network topology; tomography; fisher information matrix; greedy algorithm; high computation complexity; network topology; optimal probing scheme; semidefinite programming problem; unicast network delay tomography; Computational modeling; Covariance matrix; Delay estimation; Greedy algorithms; Matrix decomposition; Maximum likelihood estimation; Performance evaluation; Telecommunication traffic; Tomography; Unicast
【Paper Link】 【Pages】:1253-1261
【Authors】: Peter Lieven ; Björn Scheuermann
【Abstract】: On today's high-speed backbone network links, measuring per-flow traffic information has become very challenging. Maintaining exact per-flow packet counters on OC-192 or OC-768 links is not practically feasible due to computational and cost constraints. Packet sampling as implemented in today's routers results in large approximation errors. Here, we present Probabilistic Multiplicity Counting (PMC), a novel data structure that is capable of accounting for traffic per flow probabilistically. The PMC algorithm is very simple and highly parallelizable, and therefore allows for efficient implementations in software and hardware. At the same time, it provides very accurate traffic statistics. We evaluate PMC with both artificial and real-world traffic data, demonstrating that it outperforms other approaches.
【Keywords】: probability; radio links; telecommunication network routing; telecommunication traffic; OC-768 links; approximation errors; backbone network links; data structure; high-speed per-flow traffic measurement; packet counters; packet sampling; probabilistic multiplicity counting; traffic statistics; Approximation error; Computational efficiency; Counting circuits; Data structures; Hardware; Sampling methods; Software algorithms; Spine; Statistics; Telecommunication traffic
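The abstract does not reproduce PMC's data structure, so the following is a minimal Flajolet-Martin-style bitmap sketch, a classic probabilistic counting technique in the same family; the class name, hash choice, and constants are illustrative assumptions, not the paper's design.

```python
import hashlib

class FMSketch:
    """Toy Flajolet-Martin-style probabilistic counter (illustrative only:
    PMC proper uses a different, highly parallelizable per-flow structure)."""
    PHI = 0.77351  # standard Flajolet-Martin correction constant

    def __init__(self, width=32):
        self.bitmap = 0
        self.width = width

    def _rho(self, item):
        # index of the least-significant 1-bit of the item's hash
        h = int.from_bytes(hashlib.sha1(item.encode()).digest()[:8], "big")
        r = 0
        while r < self.width and not (h >> r) & 1:
            r += 1
        return r

    def add(self, item):
        # record that a hash with _rho(item) trailing zeros was observed
        self.bitmap |= 1 << self._rho(item)

    def estimate(self):
        # R = index of the lowest zero bit; cardinality is roughly 2^R / PHI
        r = 0
        while (self.bitmap >> r) & 1:
            r += 1
        return (1 << r) / self.PHI
```

A single sketch of this kind has high variance; practical schemes average many such sketches, and the probabilistic flavor is what makes per-flow accounting feasible where exact counters are not.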
【Paper Link】 【Pages】:1262-1270
【Authors】: Denisa Ghita ; Hung Xuan Nguyen ; Maciej Kurant ; Katerina J. Argyraki ; Patrick Thiran
【Abstract】: We present Netscope, a tomographic technique that infers the loss rates of network links from unicast end-to-end measurements. Netscope uses a novel combination of first- and second-order moments of end-to-end measurements to identify and characterize the links that cannot be (accurately) characterized through existing practical tomographic techniques. Using both analytical and experimental tools, we show that Netscope enables scalable, accurate link-loss inference: in a simulation scenario involving 4000 links, 20% of them lossy, Netscope correctly identifies 94% of the lossy links with a false positive rate of 16%, a significant improvement over the existing alternatives. Netscope is robust in the sense that it requires no parameter tuning; moreover, its advantage over the alternatives widens as the number of lossy links increases. We also validate Netscope's performance on an "Internet tomographer" that we deployed on an overlay of 400 PlanetLab nodes.
【Keywords】: Internet; Internet tomographer; Netscope; PlanetLab nodes; link-loss inference; network links; network loss tomography technique; unicast end-to-end measurements; Analytical models; Delay estimation; Extraterrestrial measurements; Inference algorithms; Internet; Loss measurement; Robustness; Routing; Tomography; Vectors
【Paper Link】 【Pages】:1271-1279
【Authors】: Saqib Raza ; Guanyao Huang ; Chen-Nee Chuah ; Srini Seetharaman ; Jatinder Pal Singh
【Abstract】: Monitoring transit traffic at one or more points in a network is of interest to network operators for reasons of traffic accounting, debugging or troubleshooting, forensics, and traffic engineering. Previous research in the area has focused on deriving a placement of monitors across the network with the goal of maximizing the monitoring utility of the network operator for a given traffic routing. However, both traffic characteristics and measurement objectives can dynamically change over time, rendering a previously optimal placement of monitors suboptimal. It is not feasible to dynamically redeploy/reconfigure measurement infrastructure to cater to such evolving measurement requirements. We address this problem by strategically routing traffic sub-populations over fixed monitors. We refer to this approach as MeasuRouting. The main challenge for MeasuRouting is to work within the constraints of existing intra-domain traffic engineering operations that are geared for efficiently utilizing bandwidth resources, or meeting Quality of Service (QoS) constraints, or both. A fundamental feature of intra-domain routing, which makes MeasuRouting feasible, is that intra-domain routing is often specified for aggregate flows. MeasuRouting can therefore differentially route components of an aggregate flow while ensuring that the aggregate placement is compliant with the original traffic engineering objectives. In this paper we present a theoretical framework for MeasuRouting. Furthermore, as proofs of concept, we present synthetic and practical monitoring applications to showcase the utility enhancement achieved with MeasuRouting.
【Keywords】: quality of service; telecommunication network routing; telecommunication traffic; MeasuRouting; bandwidth resources; intra-domain routing; intra-domain traffic engineering; quality of service; redeploy/reconfigure measurement infrastructure; routing assisted traffic monitoring; traffic accounting; traffic sub-populations; Aggregates; Bandwidth; Debugging; Fluid flow measurement; Forensics; Monitoring; Quality of service; Routing; Telecommunication traffic; Time measurement
【Paper Link】 【Pages】:1280-1288
【Authors】: Senhua Huang ; Xin Liu ; Zhi Ding
【Abstract】: We venture beyond the "listen-before-talk" strategy that is common in many traditional cognitive radio access schemes. We exploit the bi-directional nature of most primary communication systems. By intelligently choosing their transmission parameters based on the observation of primary user (PU) communications, secondary users (SUs) in a cognitive network can achieve higher spectrum usage while limiting their interference to the PU. Specifically, we propose that the SUs listen to the PU's feedback channel to assess their interference on the primary receiver (PU-Rx), and adjust radio power accordingly to satisfy the PU's interference constraint. We investigate both centralized and distributed power control algorithms without active PU cooperation. We show that the PU feedback information inherent in many two-way primary systems can be used as important coordination signal among multiple SUs to distributively achieve a joint performance guarantee on the primary receiver's quality of service.
【Keywords】: cognitive radio; distributed control; feedback; power control; radio links; telecommunication control; bi-directional nature; cognitive user access; distributed power control algorithms; higher spectrum usage; listen-before-talk strategy; primary link control feedback; primary user communications; secondary users communications; Bidirectional control; Cognitive radio; Communication system control; Feedback; Intelligent networks; Intelligent systems; Interference constraints; Power control; Quality of service; Receivers
【Paper Link】 【Pages】:1289-1297
【Authors】: Jin Jin ; Hong Xu ; Baochun Li
【Abstract】: Cognitive Radio Networks (CRNs) have recently emerged as a promising technology to improve spectrum utilization by allowing secondary users to dynamically access idle primary channels. As progress is made and computationally powerful wireless devices proliferate, there is a compelling need to enable multicast services for secondary users. Thus, it is crucial to design an efficient multicast scheduling protocol in CRNs. However, state-of-the-art multicast scheduling protocols are not well designed for CRNs. First, due to primary channel dynamics and user mobility, there may not exist commonly available channels for secondary users, which inevitably makes the multicast scheduling infeasible. Second, the potential benefits provided by user and channel diversities are overlooked, which leads to under-utilization of the scarce wireless bandwidth. In this paper, we present an optimization framework for multicast scheduling in CRNs, by fully embracing its characteristics. In this framework, the base station multicasts data to a subset of secondary users first by carefully tuning the power. Concurrently, secondary users opportunistically perform cooperative transmissions using locally idle primary channels, in order to mitigate multicast loss and delay effects. Network coding is adopted during the transmissions to reduce overhead and perform error control and recovery. We jointly consider important design factors in our scheduling protocols, including power control, relay assignment, buffer management, dynamic spectrum access, primary user protection, and fairness. We also incorporate user, channel, and cooperative diversities. Two forms of multicast scheduling protocols in CRNs are proposed accordingly: (i) a greedy protocol based on centralized optimization; (ii) an online protocol based on stochastic optimization in both centralized and decentralized manners.
With rigorous analysis based on Lyapunov optimization, we provide closed-form bounds to characterize the performance of our protocols, in terms of the interference to primary users and throughput utility of secondary users. With extensive simulations, we show that our proposed protocols can significantly improve the multicast performance in CRNs.
【Keywords】: channel coding; cognitive radio; diversity reception; error correction; greedy algorithms; multicast protocols; network coding; power control; radio spectrum management; stochastic processes; wireless channels; base station multicast data; buffer management; channel diversities; cognitive radio networks; cooperation coding; cooperative transmissions; dynamic spectrum access; error control; error recovery; greedy protocol; multicast loss; multicast scheduling protocol; multicast services; network coding; online protocol; power control; primary channel dynamics; primary user protection; relay assignment; scarce wireless bandwidth; spectrum utilization; stochastic optimization; user mobility; wireless devices; Access protocols; Bandwidth; Base stations; Cognitive radio; Delay effects; Dynamic scheduling; Multicast protocols; Network coding; Processor scheduling; Propagation losses
【Paper Link】 【Pages】:1298-1306
【Authors】: Yanyan Yang ; Yunhuai Liu ; Qian Zhang ; Lionel M. Ni
【Abstract】: Spectrum sensing is one of the key enabling technologies in Cognitive Radio Networks (CRNs). In CRNs, secondary users (SUs) are allowed to exploit the spectrum opportunities by sensing and accessing the spectrum, which exhibits many critical limitations in practical environments. In this paper, we propose a new sensing service model that uses dedicated wireless spectrum sensor networks (WSSN) for spectrum sensing. The major challenge in WSSN is the design of data fusion, for which the traditional fusion scheme will produce a large number of errors. We formulate the problem as a boundary detection problem with notable unknown erroneous inputs. To solve the problem, we propose a novel cooperative boundary detection scheme that intelligently incorporates the cooperative spectrum sensing concept and the recent advances in support vector machines (SVM). Cooperative boundary detection consists of two major components, a declaration calibration algorithm and a boundary derivation algorithm. We prove that cooperative spectrum sensing can asymptotically approach the optimal solution. A prototype system as well as simulation experiments show that, compared with the traditional approaches, cooperative boundary detection can reduce the errors by up to 95%, with an average reduction of about 85%.
【Keywords】: cognitive radio; sensor fusion; signal detection; support vector machines; wireless sensor networks; boundary derivation algorithm; cognitive radio networks; cooperative boundary detection; data fusion; declaration calibration algorithm; dedicated wireless spectrum sensor networks; sensing service model; spectrum sensing; support vector machine; Calibration; Cognitive radio; Communications Society; Hardware; Machine intelligence; Radio transmitters; Solids; Support vector machines; Virtual prototyping; Wireless sensor networks
【Paper Link】 【Pages】:1307-1315
【Authors】: Jin Zhang ; Juncheng Jia ; Qian Zhang ; Eric M. K. Lo
【Abstract】: Cooperative communication is a promising technique for future wireless networks, which significantly improves link capacity and reliability by leveraging the broadcast nature of the wireless medium and exploiting cooperative diversity. However, most existing works investigate its performance theoretically or by simulation. It has been widely accepted that simulations often fail to faithfully capture many real-world radio signal propagation effects, which can be overcome by developing physical wireless network testbeds. In this work, we build a cooperative testbed based on GNU Radio and the Universal Software Radio Peripheral (USRP) platform, a promising open-source software-defined radio system. Both single-relay cooperation and multi-relay cooperation can be supported in our testbed. Some key techniques are provided to solve the main challenges during the testbed development: e.g., maximum ratio combining in single-relay transmission and synchronized transmission among multiple relays. Extensive experiments are carried out in the testbed to evaluate the performance of various cooperative communication schemes. The results show that cooperative transmission achieves significant performance enhancement in terms of link reliability and end-to-end throughput.
【Keywords】: software radio; telecommunication network reliability; GNU radio; cooperative communication; cooperative transmission; end-to-end throughput; link reliability; software defined radio testbed; universal software radio peripheral; Analytical models; Diversity reception; Field programmable gate arrays; MIMO; Physical layer; Relays; Software radio; Software testing; Throughput; Wireless networks
【Paper Link】 【Pages】:1316-1324
【Authors】: Anne Bouillard ; Laurent Jouhet ; Eric Thierry
【Abstract】: Network Calculus theory aims at evaluating worst-case performances in communication networks. It provides methods to analyze models where the traffic and the services are constrained by some minimum and/or maximum envelopes (service/arrival curves). While new applications come forward, a challenging and inescapable issue remains open: achieving tight analyzes of networks with aggregate multiplexing. The theory offers efficient methods to bound maximum end-to-end delays or local backlogs. However as shown recently, those bounds can be arbitrarily far from the exact worst-case values, even in seemingly simple feed-forward networks (two flows and two servers), under blind multiplexing (i.e. no information about the scheduling policies, except FIFO per flow). For now, only a network with three flows and three servers, as well as a tandem network called sink tree, have been analyzed tightly. We describe the first algorithm which computes the maximum end-to-end delay for a given flow, as well as the maximum backlog at a server, for any feed-forward network under blind multiplexing, with concave arrival curves and convex service curves. Its computational complexity may look expensive (possibly super-exponential), but we show that the problem is intrinsically difficult (NP-hard). Fortunately we show that in some cases, like tandem networks with cross-traffic interfering along intervals of servers, the complexity becomes polynomial. We also compare ourselves to the previous approaches and discuss the problems left open.
【Keywords】: calculus of communicating systems; computational complexity; feedforward; multiplexing; telecommunication networks; blind multiplexing; communication networks; computational complexity; concave arrival curves; convex service curves; cross-traffic interfering; end-to-end delay; feedforward networks; maximum server backlog; network Calculus theory; sink tree; tandem network; tight performance bounds; worst-case analysis; Aggregates; Calculus; Communication networks; Delay; Feedforward systems; Network servers; Performance analysis; Performance evaluation; Telecommunication traffic; Traffic control
【Paper Link】 【Pages】:1325-1333
【Authors】: Jörg Liebeherr ; Almut Burchard ; Florin Ciucu
【Abstract】: Traffic with self-similar and heavy-tailed characteristics has been widely reported in networks, yet only a few analytical results are available for predicting the delay performance of such networks. We address a particularly difficult type of heavy-tailed traffic where only the first moment can be computed, and present the first non-asymptotic end-to-end delay bounds for such traffic. The derived performance bounds are non-asymptotic in that they do not assume a steady state, large buffer, or many sources regime. Our analysis considers a multi-hop path of fixed-capacity links with heavy-tailed self-similar cross traffic at each node. A key contribution of the analysis is a probabilistic sample-path bound for heavy-tailed arrival and service processes, which is based on a scale-free sampling method. We explore how delays scale as a function of the length of the path, and compare them with lower bounds. A comparison with simulations illustrates pitfalls when simulating self-similar heavy-tailed traffic, providing further evidence for the need of analytical bounds.
【Keywords】: probability; telecommunication traffic; delay performance; fixed-capacity links; heavy-tailed arrival; heavy-tailed characteristics; heavy-tailed self-similar cross traffic; heavy-tailed traffic; many sources regime; multihop path; nonasymptotic end-to-end delay bounds; probabilistic sample-path bound; scale-free sampling method; self-similar characteristics; service process; steady state large buffer; Aggregates; Analytical models; Calculus; Delay; Performance analysis; Probability distribution; Steady-state; Tail; Telecommunication traffic; Traffic control
【Paper Link】 【Pages】:1334-1342
【Authors】: Jian Tan ; Ness B. Shroff
【Abstract】: Retransmissions serve as the basic building block that communication protocols use to achieve reliable data transfer. Until recently, the number of retransmissions was thought to follow a light-tailed (in particular, a geometric) distribution. However, recent work seems to suggest that when the distribution of the packet sizes has infinite support, retransmission-based protocols may result in heavy-tailed delays and even possibly zero throughput. While this result is true even when the distribution of packet sizes is light-tailed, it requires the assumption that the packet sizes have infinite support. However, in reality, packet sizes are often bounded by the Maximum Transmission Unit (MTU), and thus the aforementioned result merits a deeper investigation. To that end, in this paper, we allow the distribution of the packet size L to have finite support. This packet is sent over an on-off channel {(Ai, Ui)} with alternating available Ai and unavailable Ui periods. If L > Ai, the transmission fails and we wait for the next period Ai+1 to retransmit the packet. The transmission duration is thus measured from the first attempt to the point when an available period larger than L occurs. Under mild conditions, we show that the transmission duration distribution exhibits a transition from a power law main body to an exponential tail, with Weibull type distributions between the two. The time scale to observe the power law main body is roughly equal to the average transmission duration of the longest packet. Both the power law main body and the exponential tail could dominate the overall performance. For example, the power law main body, if significant, may cause the channel throughput to be very close to zero. On the other hand, the exponential tail, if more evident, may imply that the system operates in a benign environment.
These theoretical findings provide an understanding of why some empirical measurements suggest heavy tails and others light tails (e.g., in wireless networks). We use these results to further highlight the engineering implications of distributions with power law main bodies and light tails by analyzing two cases: (1) the throughput of on-off channels with retransmissions, where we show that even when packet sizes have small means and bounded support, the variability in their sizes can greatly impact system performance; (2) the distribution of the number of jobs in an M/M/∞ queue with server failures, where we show that retransmissions can cause long-range dependence and quantify the impact of the maximum job sizes on the long-range dependence.
【Keywords】: Weibull distribution; protocols; queueing theory; M/M/∞ queue; Weibull type distributions; channel throughput; communication protocols; data transfer; exponential tail; light tailed distribution; maximum transmission unit; packet distribution; packet size; power law main body; retransmission durations; Delay; Performance analysis; Power engineering and energy; Power measurement; Probability distribution; Protocols; System performance; Tail; Throughput; Wireless networks
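The on-off channel model in this abstract is easy to simulate. The sketch below measures the transmission duration of a packet of size L by retrying until an available period exceeds L; the exponential available periods, the fixed unavailable period, and the parameter values are our modeling assumptions for illustration, not taken from the paper.

```python
import random

def transmission_duration(L, rate=1.0, off=0.5, rng=None):
    """Time from the first attempt until the packet fits in an available
    period: each failed attempt wastes the available period A_i plus the
    following unavailable period, as in the model the abstract describes."""
    rng = rng or random.Random()
    t = 0.0
    while True:
        a = rng.expovariate(rate)  # available period A_i
        if a > L:
            return t + L           # success: packet sent within this period
        t += a + off               # failure: wait through U_i and retry
```

Sampling this for L near the maximum packet size versus small L exposes the effect the paper analyzes: the longest packets dominate the main body of the duration distribution.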
【Paper Link】 【Pages】:1343-1351
【Authors】: Tian Lan ; David Kao ; Mung Chiang ; Ashutosh Sabharwal
【Abstract】: We present five axioms for fairness measures in resource allocation. A family of fairness measures satisfying the axioms is constructed. Special cases of this family include α-fairness, Jain's index, and entropy. Properties of fairness measures satisfying the axioms are proven, including Schur-concavity. Among the engineering implications are a generalized Jain's index that tunes the resolution of the fairness measure, a new understanding of α-fair utility functions, and an interpretation of "larger α is more fair". We also construct an alternative set of axioms to capture system efficiency and feasibility constraints.
【Keywords】: entropy; resource allocation; Jain's index; Schur-concavity; axiomatic theory; entropy; fairness measures; network resource allocation; α-fair utility functions; Communications Society; Electric variables measurement; Entropy; Particle measurements; Power generation; Power measurement; Resource management; Throughput; USA Councils
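As a concrete anchor for two of the special cases named in this abstract, here is a small sketch of Jain's index and the α-fair utility family, using the standard textbook definitions (not code from the paper):

```python
import math

def jains_index(x):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2).
    Equals 1 iff all allocations are equal, and 1/n when one user gets all."""
    n = len(x)
    return sum(x) ** 2 / (n * sum(v * v for v in x))

def alpha_fair_utility(x, alpha):
    """Total alpha-fair utility of an allocation x (all entries > 0):
    alpha=0 is total throughput, alpha=1 is proportional fairness (log),
    and "larger alpha is more fair" in the sense the paper interprets."""
    if alpha == 1:
        return sum(math.log(v) for v in x)
    return sum(v ** (1 - alpha) / (1 - alpha) for v in x)
```

For example, jains_index([1, 1, 1, 1]) is 1.0 (perfectly fair), while jains_index([1, 0, 0, 0]) is 0.25, i.e., 1/n.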
【Paper Link】 【Pages】:1352-1360
【Authors】: Xinyu Xing ; Shivakant Mishra ; Xue Liu
【Abstract】: With 802.11 WiFi networks becoming popular in homes, it is common for an end-user to have access to multiple WiFi access points (APs) from residents next door. In general, wireless networks have much higher bandwidth than residential Internet (DSL or Cable) connections. This provides an incentive for an end-user to simultaneously harness bandwidth from multiple APs. This paper introduces ARBOR, an 802.11 driver that aggregates broadband connections in a neighborhood and maximizes Internet access bandwidth in a secure manner. ARBOR has four important characteristics. First, ARBOR can sustain a much longer switching cycle without losing packets queued at different APs. Second, it can schedule traffic loads and (in)directly aggregate AP backhaul bandwidths. Third, ARBOR implements a light-weight authentication mechanism that provides a sufficient level of security while ensuring fast switching times. Finally, ARBOR is transparent to the upper layers of the network stack. A prototype of ARBOR has been implemented and extensively evaluated. Experimental results show that ARBOR provides significantly better throughput gains and lower Internet access delays.
【Keywords】: Internet; telecommunication traffic; wireless LAN; 802.11 WiFi networks; AP backhaul bandwidths; ARBOR; Internet access bandwidth; Internet access delays; multiple WiFi access points; residential Internet; throughput gains; traffic loads; Aggregates; Authentication; Bandwidth; DSL; IP networks; Internet; Packet switching; Prototypes; Telecommunication traffic; Wireless networks
【Paper Link】 【Pages】:1361-1369
【Authors】: Fengyuan Xu ; Chiu Chiang Tan ; Qun Li ; Guanhua Yan ; Jie Wu
【Abstract】: In a Wireless Local Area Network (WLAN), a client's Access Point (AP) selection heavily influences both its own performance and that of others. Through theoretical analysis, we reveal that previously proposed association protocols are not effective in maximizing the minimal throughput among all clients. Accordingly, we propose an online AP association strategy that not only achieves a minimal throughput (among all clients) that is provably close to the optimum, but also works effectively in practice with a reasonable computational overhead. The association protocol applying this strategy is implemented on commercial hardware and is compatible with legacy APs without any modification. We demonstrate its feasibility and performance through real experiments.
【Keywords】: protocols; wireless LAN; access point association protocol; minimal throughput; online AP association strategy; wireless local area network; Access protocols; Algorithm design and analysis; Communications Society; Educational institutions; Hardware; Switches; Throughput; Wireless LAN; Wireless application protocol; Wireless networks
【Paper Link】 【Pages】:1370-1378
【Authors】: Ming Zhang ; Shigang Chen ; Ying Jian
【Abstract】: Wireless LANs have been densely deployed in many urban areas. Contention among nearby WLANs is location-sensitive, which makes some hosts much more capable than others to obtain the channel for their transmissions. Another reality is that wireless hosts use different transmission rates to communicate with the access points due to attenuation of their signals. We show that location-sensitive contention aggravates the throughput anomaly caused by different transmission rates. It can cause throughput degradation and host starvation. This paper studies the intriguing interaction between location-sensitive contention and time fairness across contending WLANs. Achieving time fairness across multiple WLANs is a very difficult problem because the hosts may perceive very different channel conditions and they may not be able to communicate and coordinate their operations due to the disparity between the interference range and the transmission range. In this paper, we design a MAC-layer time fairness solution based on two novel techniques: channel occupancy adaptation, which applies AIMD on the channel occupancy of each flow, and queue spreading, which ensures that all hosts and only those hosts in a saturated channel detect congestion and reduce their channel occupancies in response. We show that these two techniques together approximate the generic adaptation algorithm for proportional fairness.
【Keywords】: access protocols; multi-access systems; wireless LAN; MAC-layer; generic adaptation algorithm; location sensitive; multiple wireless LAN; proportional fairness; time fairness; Access protocols; Attenuation; Communications Society; Degradation; Information science; Interference; Throughput; USA Councils; Urban areas; Wireless LAN
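The channel occupancy adaptation this abstract describes applies AIMD to each flow's share of airtime. The toy sketch below (step sizes and the congestion signal are illustrative assumptions, not the paper's parameters) shows the classic AIMD property such schemes rely on: flows reacting to a shared congestion signal converge to equal occupancy.

```python
def aimd_step(occupancy, congested, add=0.01, mult=0.5, cap=1.0):
    """One AIMD update of a flow's channel occupancy (fraction of airtime)."""
    if congested:
        return occupancy * mult            # multiplicative decrease
    return min(cap, occupancy + add)       # additive increase

# Two flows with unequal starting occupancies sharing a saturated channel.
x, y = 0.8, 0.2
for _ in range(10_000):
    congested = x + y > 1.0                # both detect the same saturation
    x = aimd_step(x, congested)
    y = aimd_step(y, congested)
```

After many rounds the gap between x and y shrinks toward zero: additive increase preserves the difference while each multiplicative decrease halves it, which is the proportional-fairness intuition the paper builds on.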
【Paper Link】 【Pages】:1379-1387
【Authors】: Zhenghao Zhang ; Steven Bronson ; Jin Xie ; Hu Wei
【Abstract】: In this paper, we study the One-Sender-Multiple-Receiver (OSMR) transmission technique, which allows one sender to send to multiple receivers simultaneously by utilizing multiple antennas at the sender. We implemented a prototype OSMR transmitter/receiver with GNU Software Defined Radio, and conducted experiments in a university building to study the physical layer characteristics of OSMR. Our results are positive and show that wireless channels allow OSMR for a significant percentage of the time. Motivated by our physical layer study, we propose extensions to the 802.11 MAC protocol to support OSMR transmission, which is backward compatible with existing 802.11 devices. We also note that the AP needs a packet scheduling algorithm to efficiently exploit OSMR. We show that the scheduling problem without considering the packet transmission overhead can be formalized as a Linear Programming problem, but the scheduling problem considering the overhead is NP-hard. We then propose a greedy algorithm to schedule OSMR transmissions. We tested the proposed protocol and algorithm with simulations driven by traffic traces collected from wireless LANs and channel state traces collected from our experiments, and the results show that OSMR significantly improves the downlink performance.
【Keywords】: access protocols; computational complexity; greedy algorithms; linear programming; radio receivers; scheduling; software radio; telecommunication traffic; wireless LAN; wireless channels; 802.11 MAC protocol; GNU software defined radio; NP-hard problem; OSMR transmitter-receiver; channel state traces; greedy algorithm; linear programming problem; multiple antennas; one-sender-multiple-receiver transmission technique; packet scheduling algorithm; packet transmission overhead; physical layer; wireless LAN; wireless channels; Media Access Protocol; Physical layer; Radio transmitters; Receivers; Receiving antennas; Scheduling algorithm; Software prototyping; Software radio; Transmitting antennas; Wireless LAN
【Paper Link】 【Pages】:1388-1396
【Authors】: Peng-Jun Wan ; Lixin Wang ; Ai Huang ; Minming Li ; F. Frances Yao
【Abstract】: The capacity region of a multihop wireless network is involved in many capacity optimization problems. However, the membership of the capacity region is NP-complete in general, and hence the direct application of the capacity region is quite limited. As a compromise, we often substitute the capacity region with a polynomial approximate capacity subregion. In this paper, we construct polynomial μ-approximate capacity subregions of multihop wireless networks under either the 802.11 interference model or the protocol interference model, in which all nodes have uniform communication radii normalized to one and uniform interference radii ρ ≥ 1. The approximation factor μ decreases with ρ in general and is smaller than the best-known ones in the literature. For example, μ = 3 when ρ ≥ 2.2907 under the 802.11 interference model or when ρ ≥ 4.2462 under the protocol interference model. Our construction exploits a property of wireless interference, discovered in this paper, called strip-wise transitivity of independence, and utilizes the independence polytopes of cocomparability graphs in a spatial divide-and-conquer manner. We also apply these polynomial μ-approximate capacity subregions to compute μ-approximate solutions for maximum (concurrent) multiflows.
【Keywords】: IEEE standards; optimisation; radio networks; telecommunication standards; 802.11 interference model; NP-complete; approximation factor; capacity optimization problems; cocomparability graphs; polynomial approximate capacity subregion; protocol interference model; spatial-divide-conquer; uniform communication radii; uniform multihop wireless networks; Algorithm design and analysis; Application software; Communications Society; Computer science; Interference; Polynomials; Processor scheduling; Scheduling algorithm; Spread spectrum communication; Wireless networks
【Paper Link】 【Pages】:1397-1405
【Authors】: Michael Dinitz
【Abstract】: In this paper we consider the problem of maximizing wireless network capacity (a.k.a. one-shot scheduling) in both the protocol and physical models. We give the first distributed algorithms with provable guarantees in the physical model, and show how they can be generalized to more complicated metrics and settings in which the physical assumptions are slightly violated. We also give the first algorithms in the protocol model that do not assume transmitters can coordinate with their neighbors in the interference graph, so every transmitter chooses whether to broadcast based purely on local events. Our techniques draw heavily from algorithmic game theory and machine learning theory, even though our goal is a distributed algorithm. Indeed, our main results allow every transmitter to run any algorithm it wants, so long as its algorithm has a learning-theoretic property known as no-regret in a game-theoretic setting.
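The guarantee above only requires that each transmitter run some no-regret learning algorithm. A minimal sketch of one standard such algorithm, exponential weights (Hedge), follows; the two abstract actions (e.g. "transmit"/"stay silent"), the constant reward functions, and the step size are illustrative assumptions, not details from the paper:

```python
import math
import random

def hedge(reward_fns, n_rounds, eta=0.1, seed=0):
    """Exponential-weights (Hedge): a classic no-regret algorithm.

    reward_fns: one function per action, mapping round t to a reward in
    [0, 1]. Returns (total reward earned, best fixed action's total
    reward); "no-regret" means the gap between the two grows only
    sublinearly in the number of rounds.
    """
    rng = random.Random(seed)
    weights = [1.0] * len(reward_fns)
    earned = 0.0
    action_totals = [0.0] * len(reward_fns)
    for t in range(n_rounds):
        total = sum(weights)
        probs = [w / total for w in weights]
        a = rng.choices(range(len(weights)), probs)[0]  # sample an action
        rewards = [f(t) for f in reward_fns]
        earned += rewards[a]
        for i, r in enumerate(rewards):
            action_totals[i] += r
            weights[i] *= math.exp(eta * r)  # full-information update
    return earned, max(action_totals)
```

With a consistently better action (say, reward 0.9 for transmitting versus 0.1 for staying silent), the weights concentrate on it within a few dozen rounds, so the earned reward tracks the best fixed action closely.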
【Keywords】: channel capacity; radio networks; radio transmitters; scheduling; algorithmic game theory; distributed algorithms; interference graph; machine learning theory; one-shot scheduling; protocol model; transmitters; wireless network capacity; Broadcasting; Communications Society; Distributed algorithms; Game theory; Interference; Machine learning algorithms; Transmitters; USA Councils; Wireless application protocol; Wireless networks
【Paper Link】 【Pages】:1406-1414
【Authors】: Pan Li ; Yuguang Fang ; Jie Li
【Abstract】: Throughput capacity in wireless ad hoc networks has been studied extensively under many different mobility models, such as the i.i.d. mobility model, the Brownian mobility model, the random walk model, and so on. Most of these works assume global mobility, i.e., each node moves around the whole network, and the results show that a constant per-node throughput can be achieved at the cost of very high expected average end-to-end delay. This leaves a large gap: either low throughput and low delay in static networks, or high throughput and high delay in mobile networks. In this paper, employing a more practical restricted random mobility model, we try to fill in this gap. Specifically, we assume a network of unit area with n nodes is evenly divided into n^{2α} cells, each with an area of n^{-2α}, where 0 ≤ α ≤ 1/2; each cell is further evenly divided into squares with an area of n^{-2β}, where 0 ≤ α ≤ β ≤ 1/2. All nodes can only move inside the cell in which they are initially distributed, and at the beginning of each time slot, every node moves from its current square to a uniformly chosen point in a uniformly chosen adjacent square. Proposing a new multi-hop relay scheme, we present an upper bound and a lower bound on per-node throughput capacity and expected average end-to-end delay, respectively. We finally show explicit, smooth trade-offs between throughput and delay obtained by controlling nodes' mobility.
【Keywords】: ad hoc networks; mobile radio; Brownian mobility model; average end-to-end delay; global mobility; mobile networks; multihop relay scheme; random walk model; static networks; throughput capacity; wireless ad hoc networks; Ad hoc networks; Cities and towns; Communications Society; Costs; Delay; Mobile ad hoc networks; Peer to peer computing; Relays; Throughput; Wireless networks
【Paper Link】 【Pages】:1415-1423
【Authors】: Mingyue Ji ; Zheng Wang ; Hamid R. Sadjadpour ; Jose Joaquin Garcia-Luna-Aceves
【Abstract】: We study the scaling laws for wireless ad hoc networks in which the distribution of n nodes in the network is homogeneous but the traffic they carry is heterogeneous. More specifically, we consider the case in which a given node is the data-gathering sink for k sources sending different information to it, while the rest of the s = n - k nodes participate in unicast sessions with random destinations chosen uniformly. We present a separation theorem for heterogeneous traffic showing that the optimum order throughput capacity can be attained in a wireless network in which traffic classes are distributed uniformly by endowing each node with multiple radios, each operating in a separate orthogonal channel, and by allocating a radio per node to each traffic class. Based on this theorem, we show how this order capacity can be attained for the unicast and data-gathering traffic classes by extending cooperative communication schemes that have been proposed previously.
【Keywords】: ad hoc networks; radio networks; telecommunication traffic; cooperation; heterogeneous traffic; wireless ad hoc networks; Ad hoc networks; Bandwidth; Peer to peer computing; Telecommunication traffic; Throughput; Traffic control; USA Councils; Unicast; Upper bound; Wireless networks
【Paper Link】 【Pages】:1424-1432
【Authors】: Yan Cai ; Bo Jiang ; Tilman Wolf ; Weibo Gong
【Abstract】: For optical packet-switching routers to be widely deployed in the Internet, the packet buffers on routers must be very small. Such small-buffer networks rely on traffic with low levels of burstiness to avoid buffer overflows and packet losses. We present a pacing system that proactively shapes traffic in the edge network to reduce burstiness. Our queue-length based pacing adapts the pacing rate on a single queue and paces all traffic passing through the point where it is deployed. In this work, we show through analysis and simulation that this pacing approach introduces a bounded delay and that it effectively reduces traffic burstiness. We also show that it can achieve higher throughput than end-system based pacing.
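The queue-length based pacing idea can be sketched as a rate controller whose drain rate scales with the backlog, capped at link capacity. The linear rate rule and the threshold parameter below are illustrative assumptions, not the paper's exact controller:

```python
def qlbp_departures(arrivals, capacity, q_thresh):
    """Queue-length based pacing (illustrative sketch): drain the queue
    at a rate proportional to the backlog (up to full link capacity once
    the backlog reaches q_thresh), so short bursts are smoothed while
    the added queuing delay stays bounded.

    arrivals: packets arriving in each time slot.
    Returns the per-slot departures.
    """
    q = 0.0
    out = []
    for a in arrivals:
        q += a
        # pacing rate grows linearly with backlog below the threshold
        rate = capacity * min(1.0, q / q_thresh) if q_thresh > 0 else capacity
        sent = min(q, rate)
        q -= sent
        out.append(sent)
    return out
```

For example, a burst of 10 packets with `capacity=10` and `q_thresh=20` drains geometrically (5, 2.5, 1.25, ...), halving the peak output rate at the cost of a small, bounded residual queue.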
【Keywords】: Internet; optical fibre networks; packet switching; queueing theory; switching networks; telecommunication switching; telecommunication traffic; Internet; adaptive pacing; buffer overflows; end-system based pacing; online pacing scheme; optical packet-switching routers; paces traffic burstiness reduction; packet losses; queue length; small buffer edge network; Analytical models; Buffer overflow; Delay effects; Internet; Optical buffering; Optical losses; Optical packet switching; Shape; Telecommunication traffic; Traffic control
【Paper Link】 【Pages】:1433-1441
【Authors】: Arun Vishwanath ; Vijay Sivaraman ; Marina Thottan ; Constantine Dovrolis
【Abstract】: Internet traffic is expected to grow phenomenally over the next five to ten years, and to cope with such large traffic volumes, core networks are expected to scale to capacities of terabits-per-second and beyond. Increasing the role of optics for switching and transmission inside the core network seems to be the most promising way forward to accomplish this capacity scaling. Unfortunately, unlike electronic memory, it remains a formidable challenge to build integrated all-optical buffers that can hold even a few packets. In the context of envisioning a bufferless (or near-zero buffer) core network, our contributions are threefold. First, we propose a novel edge-to-edge, packet-level forward error correction (FEC) framework as a means of combating high core losses, and investigate via analysis and simulation the appropriate FEC strength for a single core link. Second, we consider a realistic multi-hop network and develop an optimisation framework that adjusts the FEC strength on a per-flow basis to ensure fairness between single- and multi-hop flows. Third, we study the efficacy of FEC for various system parameters such as relative mixes of short-lived and long-lived TCP flows, and average offered link loads. Our study is the first to show that packet-level FEC, when tuned properly, can be very effective in mitigating high core losses, thus opening the doors to a bufferless core network in the future.
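The flavor of packet-level FEC can be shown with the simplest possible code: one XOR parity packet per group, which recovers any single loss. This is an illustrative single-parity sketch; the paper's framework tunes the FEC strength (the parity-to-data ratio) per flow rather than fixing it at one parity packet:

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def fec_encode(packets):
    """Append one XOR parity packet to a group of equal-length data
    packets; any single loss in the group is then recoverable."""
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return packets + [parity]

def fec_recover(received, lost_index):
    """Rebuild the packet at lost_index by XOR-ing all surviving
    packets (including the parity packet)."""
    rec = None
    for i, p in enumerate(received):
        if i == lost_index or p is None:
            continue
        rec = p if rec is None else xor_bytes(rec, p)
    return rec
```

Stronger codes (e.g. Reed-Solomon) generalize this to recover k losses from k parity packets, which is what an adjustable "FEC strength" refers to.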
【Keywords】: Internet; forward error correction; routing protocols; telecommunication traffic; Internet traffic; bufferless core network; edge-to-edge packet-level FEC; multi-hop network; packet-level forward error correction; Analytical models; Context modeling; Core loss; Forward error correction; IP networks; Optical buffering; Optical fiber networks; Spread spectrum communication; Telecommunication traffic
【Paper Link】 【Pages】:1442-1450
【Authors】: Haoyu Song ; Murali S. Kodialam ; Fang Hao ; T. V. Lakshman
【Abstract】: Many popular algorithms for fast packet forwarding and filtering rely on the tree data structure. Examples include trie-based IP lookup and packet classification algorithms. With the recent interest in network virtualization, the ability to run multiple virtual router instances on a common physical router platform is essential. An important scaling issue is the number of virtual router instances that can run on the platform. One limiting factor is the amount of high-speed memory and caches available for storing the packet forwarding and filtering data structures. An ideal goal is to achieve good scaling while maintaining total isolation amongst the virtual routers. However, total isolation requires maintaining separate data structures in high-speed memory for each virtual router. In this paper, we study the case where some sharing of the forwarding and filtering data structures is permissible and develop algorithms for combining tries used for IP lookup and packet classification. Specifically, we develop a mechanism called trie-braiding that allows us to combine tries from the data structures of different virtual routers into just one compact trie. Two optimal braiding algorithms are presented and their effectiveness is demonstrated using real-world data sets.
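The braiding idea, that every node of one trie may swap its 0/1 children before being merged with the corresponding node of another trie, can be sketched as a small recursion over node pairs. This is a simplified, unmemoized sketch of the mechanism, not the paper's optimal algorithms:

```python
def build(prefixes):
    """Build a binary trie (nested dicts keyed by '0'/'1') from a list
    of bit-string prefixes."""
    root = {}
    for p in prefixes:
        node = root
        for bit in p:
            node = node.setdefault(bit, {})
    return root

def size(t):
    """Node count of a trie (None counts as an empty subtree)."""
    return 0 if t is None else 1 + sum(size(c) for c in t.values())

def braided_size(a, b):
    """Minimum node count of the merged trie when every node may
    independently swap its 0/1 children (the 'braiding bit')."""
    if a is None or b is None:
        return size(a) + size(b)
    straight = (braided_size(a.get('0'), b.get('0'))
                + braided_size(a.get('1'), b.get('1')))
    swapped = (braided_size(a.get('0'), b.get('1'))
               + braided_size(a.get('1'), b.get('0')))
    return 1 + min(straight, swapped)
```

For two mirror-image tries, e.g. prefixes {00, 01} versus {10, 11}, a single swap at the root lets the tries overlap entirely: 4 nodes braided instead of 8 stored separately.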
【Keywords】: IP networks; telecommunication network routing; tree data structures; trees (mathematics); virtual reality; building scalable virtual routers; high-speed memory; packet classification algorithms; real world data sets; tree data structure; trie-based IP lookup; Buildings; Classification algorithms; Communications Society; Data structures; Filtering algorithms; Platform virtualization; Routing protocols; Scalability; Tree data structures; Virtual private networks
【Paper Link】 【Pages】:1451-1459
【Authors】: Xin Sun ; Yu-Wei Eric Sung ; Sunil Krothapalli ; Sanjay G. Rao
【Abstract】: Enterprise networks are large and complex, and their designs must be frequently altered to adapt to changing organizational needs. The process of redesigning and reconfiguring enterprise networks is ad-hoc and error-prone, and configuration errors could cause serious issues such as network outages. In this paper, we take a step towards systematic evolution of network designs in the context of virtual local area networks (VLANs). We focus on VLANs given their importance and prevalence, the frequent need to change VLAN designs, and the time-consuming and error-prone process of making changes. We present algorithms for common design tasks encountered in evolving VLANs such as deciding which VLAN a new host must be assigned to. Our algorithms trade off multiple criteria such as broadcast traffic costs, and costs associated with maintaining spanning trees for each VLAN in the network, while honoring correctness and feasibility constraints on the design. Our algorithms also enable automatic detection of network-wide dependencies which must be factored when reconfiguring VLANs. We evaluate our algorithms on longitudinal snapshots of configuration files of a large-scale operational campus network obtained over a two year period. Our results show that our algorithms can produce significantly better designs than current practice, while avoiding errors and minimizing human work. Our unique data-sets also enable us to characterize VLAN related change activity in real networks, an important contribution in its own right.
【Keywords】: local area networks; virtual reality; VLAN designs; enterprise networks; error-prone process; large-scale operational campus network; systematic evolution; virtual local area networks; Algorithm design and analysis; Broadcasting; Change detection algorithms; Communications Society; Costs; Humans; Large-scale systems; Local area networks; Sun; Telecommunication traffic
【Paper Link】 【Pages】:1460-1468
【Authors】: Chao Zhang ; Prithula Dhungel ; Di Wu ; Zhengye Liu ; Keith W. Ross
【Abstract】: A private BitTorrent site (also known as a "BitTorrent darknet") is a collection of torrents that can only be accessed by members of the darknet community. Private BitTorrent sites also have incentive policies which encourage users to continue to seed files after completing their downloads. Although there are at least 800 independent BitTorrent darknets in the Internet, they have received little attention in the research community to date. We examine BitTorrent darknets from macroscopic, medium-scopic, and microscopic perspectives. For the macroscopic analysis, we consider 800+ private sites to obtain a broad picture of the darknet landscape, and obtain a rough estimate of the total number of files, accounts, and simultaneous peers within the entire darknet landscape. Although each private site is relatively small, we find the aggregate size of the darknet landscape to be surprisingly large. For the medium-scopic analysis, we investigate content overlap between four private sites and the public BitTorrent ecosystem. For the microscopic analysis, we explore one private site in depth and examine its user behavior. We observe that the seed-to-leecher and upload-to-download ratios are much higher than in the public ecosystem. Combined, the macroscopic, medium-scopic, and microscopic analyses provide a vivid picture of the darknet landscape and insight into how it differs from the public BitTorrent ecosystem.
【Keywords】: Internet; Web sites; BitTorrent darknet landscape; Internet; medium-scopic analysis; microscopic analysis; private BitTorrent sites; public BitTorrent ecosystem; Aggregates; Chaotic communication; Communications Society; Ecosystems; Internet; Microscopy; Performance analysis; Performance evaluation; Regression analysis; Sun
【Paper Link】 【Pages】:1469-1477
【Authors】: Gabriel Maciá-Fernández ; Yong Wang ; Rafael Rodríguez-Gómez ; Aleksandar Kuzmanovic
【Abstract】: Online advertising is a rapidly growing industry currently dominated by the search engine 'giant' Google. In an attempt to tap into this huge market, Internet Service Providers (ISPs) started deploying deep packet inspection techniques to track and collect user browsing behavior. However, such techniques violate wiretap laws that explicitly prevent intercepting the contents of communication without gaining consent from consumers. In this paper, we show that it is possible for ISPs to extract user browsing patterns without inspecting contents of communication. Our contributions are threefold. First, we develop a methodology and implement a system that is capable of extracting web browsing features from stored non-content based records of online communication, which could be legally shared. When such browsing features are correlated with information collected by independently crawling the Web, it becomes possible to recover the actual web pages accessed by clients. Second, we systematically evaluate our system on the Internet and demonstrate that it can successfully recover user browsing patterns with high accuracy. Finally, our findings call for a comprehensive legislative reform that would not only enable fair competition in the online advertising business, but more importantly, protect the consumer rights in a more effective way.
【Keywords】: Internet; Web sites; advertising; search engines; Google; ISP-enabled behavioral ad targeting; Internet service providers; Web browsing features; Web pages; deep packet inspection; online advertising; online communication; user browsing behavior; user browsing patterns; wiretap laws; Advertising; Data mining; Feature extraction; Inspection; Law; Protection; Search engines; Target tracking; Web and internet services; Web pages
【Paper Link】 【Pages】:1478-1486
【Authors】: Sem C. Borst ; Varun Gupta ; Anwar Walid
【Abstract】: The delivery of video content is expected to gain huge momentum, fueled by the popularity of user-generated clips, growth of VoD libraries, and wide-spread deployment of IPTV services with features such as CatchUp/PauseLive TV and NPVR capabilities. The 'time-shifted' nature of these personalized applications defies the broadcast paradigm underlying conventional TV networks, and increases the overall bandwidth demands by orders of magnitude. Caching strategies provide an effective mechanism for mitigating these massive bandwidth requirements by replicating the most popular content closer to the network edge, rather than storing it in a central site. The reduction in traffic load lessens the required transport capacity and capital expense, and alleviates performance bottlenecks. In this paper, we develop lightweight cooperative cache management algorithms aimed at maximizing the traffic volume served from cache and minimizing the bandwidth cost. As a canonical scenario, we focus on a cluster of distributed caches, connected either directly or via a parent node, and formulate the content placement problem as a linear program in order to benchmark the globally optimal performance. Under certain symmetry assumptions, the optimal solution of the linear program is shown to have a rather simple structure. Besides being interesting in its own right, the optimal structure offers valuable guidance for the design of low-complexity cache management and replacement algorithms. We establish that the performance of the proposed algorithms is guaranteed to be within a constant factor of the globally optimal performance, with far more benign worst-case ratios than in prior work, even in asymmetric scenarios. Numerical experiments for typical popularity distributions reveal that the actual performance is far better than the worst-case conditions indicate.
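A "simple structure" of this kind suggests a two-tier placement rule: replicate the hottest items in every leaf cache, and spread the next tier across the cluster without duplication so it can be fetched from a sibling or the parent. The sketch below assumes symmetric caches, and the split fraction is an illustrative parameter, not a value from the paper:

```python
def place(popularity, n_caches, cache_size, replicate_frac):
    """Two-tier content placement sketch: a fraction of each cache holds
    copies of the hottest items (served locally everywhere); the
    remaining slots collectively store distinct items exactly once,
    shared over the intra-cluster links.

    popularity: item ids sorted hottest-first.
    Returns (replicated item set, list of per-cache exclusive sets)."""
    n_rep = int(cache_size * replicate_frac)
    replicated = set(popularity[:n_rep])
    rest = popularity[n_rep:]
    per_cache = cache_size - n_rep
    exclusive = [set(rest[i * per_cache:(i + 1) * per_cache])
                 for i in range(n_caches)]
    return replicated, exclusive
```

With 4 caches of 10 slots and a 50/50 split, the cluster stores 5 replicated items plus 20 distinct shared items, i.e. 25 distinct items instead of the 10 a fully replicated design would hold.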
【Keywords】: cache storage; distributed algorithms; distributed memory systems; caching strategies; content distribution networks; cooperative cache management algorithms; distributed caches; distributed caching algorithms; linear program; low complexity cache management; massive bandwidth requirements; replacement algorithms; video content delivery; Algorithm design and analysis; Bandwidth; Clustering algorithms; Cooperative caching; Costs; IPTV; Libraries; Multimedia communication; TV broadcasting; Telecommunication traffic
【Paper Link】 【Pages】:1487-1495
【Authors】: Kanat Tangwongsan ; Himabindu Pucha ; David G. Andersen ; Michael Kaminsky
【Abstract】: Many modern systems exploit data redundancy to improve efficiency. These systems split data into chunks, generate identifiers for each of them, and compare the identifiers among other data items to identify duplicate chunks. As a result, chunk size becomes a critical parameter for the efficiency of these systems: it trades potentially improved similarity detection (smaller chunks) against the increased overhead of representing more chunks. Unfortunately, the similarity between files increases unpredictably with smaller chunk sizes, even for data of the same type. Existing systems often pick one chunk size that is "good enough" for many cases because they lack efficient techniques to determine the benefits at other chunk sizes. This paper addresses this deficiency via two contributions: (1) we present multi-resolution (MR) handprinting, an application-independent technique that efficiently estimates similarity between data items at different chunk sizes using a compact, multi-size representation of the data; (2) we then evaluate the application of MR handprints to workloads from peer-to-peer, file transfer, and storage systems, demonstrating that the chunk size selection enabled by MR handprints can lead to real improvements over using a fixed chunk size in these systems.
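The underlying machinery, content-defined chunking plus set similarity over chunk identifiers at a chosen average chunk size, can be sketched as follows. The divisibility-based boundary rule and 4-byte window are toy assumptions, not the paper's handprint construction:

```python
import hashlib

def chunks(data, avg_size, win=4):
    """Content-defined chunking: cut wherever the integer value of the
    last `win` bytes is divisible by `avg_size`. Boundaries depend only
    on local content, so they survive insertions elsewhere in the data;
    avg_size controls the expected chunk length."""
    out, start = [], 0
    for i in range(win - 1, len(data)):
        if int.from_bytes(data[i - win + 1:i + 1], 'big') % avg_size == 0:
            out.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        out.append(data[start:])
    return out

def similarity(a, b, avg_size):
    """Jaccard similarity between the chunk-identifier sets of a and b.
    Evaluating this at several avg_size values gives the kind of
    multi-resolution view of similarity the paper targets."""
    ids_a = {hashlib.sha1(c).digest() for c in chunks(a, avg_size)}
    ids_b = {hashlib.sha1(c).digest() for c in chunks(b, avg_size)}
    return len(ids_a & ids_b) / max(1, len(ids_a | ids_b))
```

Because boundaries are content-defined, inserting a few bytes perturbs only the chunks overlapping the edit; chunks downstream realign and the similarity stays high.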
【Keywords】: data analysis; data structures; chunk size; data redundancy; file transfer; multiresolution handprinting; similarity detection; similarity estimation; storage systems; Communications Society; Motion pictures; Navigation; Peer to peer computing; Protocols; Redundancy; Springs; Streaming media; Web pages; Wide area networks
【Paper Link】 【Pages】:1496-1504
【Authors】: Dan-Cristian Tomozei ; Laurent Massoulié
【Abstract】: In this paper we address the issue of network cost efficiency for live streaming peer-to-peer systems. We formalize this as an optimization problem, which features a generic cost function. The latter is appropriate to capture not only ISP-specific link weights, but also non-linear, congestion-dependent costs. Our main contribution is the introduction of the Implicit-Primal-Dual scheme for flow control in live streaming peer-to-peer systems. It is fully distributed in that it relies only on local state variable exchanges. Moreover, we show that at a fluid scale, combined with random linear network coding, it admits the cost optimal operating point as a fixed point. We also prove asymptotic boundedness of fluid trajectories for particular cost functions. We finally show via experiments that these optimality properties are resilient to operational constraints such as finite generation size and finite field size.
【Keywords】: linear codes; media streaming; network coding; optimisation; peer-to-peer computing; telecommunication congestion control; cost-efficient peer-to-peer streaming; flow control; fluid trajectories; generic cost function; implicit-primal-dual scheme; live streaming peer-to-peer systems; optimization problem; random linear network coding; Communication system control; Communications Society; Control systems; Cost function; Fluid flow control; Galois fields; Network coding; Peer to peer computing; Telecommunication traffic; Testing
【Paper Link】 【Pages】:1505-1513
【Authors】: Daniel Sadoc Menasché ; Laurent Massoulié ; Donald F. Towsley
【Abstract】: This work investigates reciprocity in peer-to-peer systems. The scenario is one where users arrive in the network with a set of contents and content demands. Peers exchange contents to satisfy their demands, following either a direct reciprocity principle (I help you and you help me) or an indirect reciprocity principle (I help you and someone helps me). First, we prove that any indirect reciprocity schedule of exchanges, in the absence of relays, can be replaced by a direct reciprocity schedule, provided that users (1) are willing to download undemanded content for bartering purposes and (2) use up to twice the bandwidth they would use under indirect reciprocity. Motivated by the fact that, in the absence of relays, the loss of efficiency due to direct reciprocity is at most a factor of two, we study various distributed direct reciprocity schemes through simulations, some of them involving a broker to facilitate exchanges.
【Keywords】: peer-to-peer computing; barter; indirect reciprocity principle; peer-to-peer systems; Bandwidth; Communications Society; Computer crime; Costs; DVD; Peer to peer computing; Relays; Robustness
【Paper Link】 【Pages】:1514-1522
【Authors】: Bivas Mitra ; Abhishek Kumar Dubey ; Sujoy Ghose ; Niloy Ganguly
【Abstract】: In this paper, we develop an analytical framework which explains the emergence of superpeer networks from the execution of commercial peer-to-peer bootstrapping protocols by incoming nodes. Bootstrapping protocols exploit physical properties of the online peers, such as resource content, processing power, storage space, and connectivity, and also take into account the finite bandwidth of each online peer. With the help of rate equations, we show that execution of these protocols results in the emergence of superpeer nodes in the network, and we derive the exact degree distribution. We validate the framework through extensive simulation. The analysis of the results shows that the number of superpeers produced in the network depends on the protocol as well as the properties of the joining nodes. Interestingly, our analysis reveals that increasing the amount of resources or the number of resourceful nodes does not always increase the fraction of superpeer nodes in the network. The impact of frequent peer departures on the topology of the emerging network is also evaluated. As an application study, we show that our framework can explain the topological configuration of commercial Gnutella networks: the developed model almost perfectly matches the degree distribution of the Gnutella network.
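A toy version of these bootstrapping dynamics can be simulated directly: each arriving node opens links to online peers chosen with probability proportional to their remaining capacity, and saturated peers refuse further links. The capacity distribution and attachment rule below are illustrative assumptions, not a specific commercial protocol:

```python
import random

def bootstrap(n_nodes, degree, bandwidth, seed=1):
    """Toy bootstrapping dynamics: each joining node attempts `degree`
    links to existing peers picked proportionally to their remaining
    capacity; a peer whose degree reaches its bandwidth refuses further
    links. High-bandwidth peers accumulate links (superpeers).

    bandwidth: function rng -> capacity of a newly joining node."""
    rng = random.Random(seed)
    caps, degs = [], []
    for _ in range(n_nodes):
        cap = bandwidth(rng)
        open_slots = [i for i in range(len(degs)) if degs[i] < caps[i]]
        targets = set()
        for _ in range(min(degree, len(open_slots))):
            weights = [caps[i] - degs[i] for i in open_slots]
            i = rng.choices(open_slots, weights)[0]
            if i not in targets:
                targets.add(i)
                degs[i] += 1
        caps.append(cap)
        degs.append(len(targets))  # the newcomer's own out-links
    return caps, degs
```

Running this with a small fraction of high-bandwidth joiners shows the bimodal outcome the paper analyzes: low-bandwidth peers saturate at their cap while high-bandwidth peers accumulate far higher degrees.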
【Keywords】: computer bootstrapping; peer-to-peer computing; protocols; Gnutella networks; analytical framework; peer-to-peer bootstrapping protocols; physical properties; storage space; superpeer networks; Bandwidth; Communications Society; Computer science; Equations; Network topology; Peer to peer computing; Power engineering and energy; Protocols; Quality of service; Space technology
【Paper Link】 【Pages】:1523-1531
【Authors】: Shansi Ren ; Enhua Tan ; Tian Luo ; Songqing Chen ; Lei Guo ; Xiaodong Zhang
【Abstract】: BitTorrent (BT) has carried a significant and continuously increasing portion of Internet traffic. While several designs have recently been proposed and implemented to improve resource utilization by bridging the application layer (overlay) and the network layer (underlay), these designs are largely dependent on Internet infrastructures, such as ISPs and CDNs. In addition, they demand large-scale deployment to work effectively. Consequently, they require efforts far beyond an individual user's means, which limits their wide adoption in the Internet. In this paper, aiming to build an infrastructure-independent, user-level facility, we present our design, implementation, and evaluation of a topology-aware BT system, called TopBT, which significantly improves overall Internet resource utilization without degrading user downloading performance. The unique feature of TopBT is that a client actively discovers its network proximity to connected peers and uses both proximity and transmission rate to maintain fast downloading while reducing the transmission distance of BT traffic, and thus the Internet traffic. As a result, a TopBT client neither requires feeds from major Internet infrastructures, such as ISPs or CDNs, nor requires large-scale deployment of other TopBT clients on the Internet to work effectively. We have implemented TopBT based on a widely used open-source BT client code base and made the software publicly available. By deploying TopBT and other BitTorrent clients on hundreds of Internet hosts, we show that on average TopBT reduces download traffic by about 25% while achieving about 15% faster download speeds compared to several prevalent BT clients. TopBT has been widely used in the Internet by many users all over the world.
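A peer-selection rule that combines proximity and rate can be sketched as a weighted score over normalized metrics. The weighting and min-max normalization below are illustrative assumptions, not TopBT's actual selection formula:

```python
def select_peers(candidates, k, alpha=0.5):
    """Rank peers by a combined score favoring high observed download
    rate and low network distance (e.g. RTT or AS-path hops); alpha
    trades off the two. Both metrics are min-max normalized first.

    candidates: list of (peer_id, rate, distance) tuples.
    Returns the ids of the k best-scoring peers."""
    rates = [r for _, r, _ in candidates]
    dists = [d for _, _, d in candidates]

    def norm(x, lo, hi):
        return 0.5 if hi == lo else (x - lo) / (hi - lo)

    scored = [
        (alpha * norm(r, min(rates), max(rates))
         + (1 - alpha) * (1 - norm(d, min(dists), max(dists))), pid)
        for pid, r, d in candidates]
    return [pid for _, pid in sorted(scored, reverse=True)[:k]]
```

A nearby fast peer outranks an equally fast but distant one, which is exactly the bias that shortens the transmission distance of the traffic without hurting download speed.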
【Keywords】: Internet; peer-to-peer computing; telecommunication network topology; telecommunication traffic; BitTorrent client; CDN; ISP; Internet resource utilization; Internet traffic; TopBT; application layer; content delivery network; network layer; network proximity; topology-aware BT system; transmitting distance; Bandwidth; Degradation; Feeds; IP networks; Internet; Large-scale systems; Open source software; Peer to peer computing; Resource management; Telecommunication traffic
【Paper Link】 【Pages】:1532-1540
【Authors】: Brian Eriksson ; Gautam Dasarathy ; Paul Barford ; Robert D. Nowak
【Abstract】: Accurate and timely identification of the router-level topology of the Internet is one of the major unresolved problems in Internet research. Topology recovery via tomographic inference is potentially an attractive complement to standard methods that use TTL-limited probes. In this paper, we describe new techniques that aim toward the practical use of tomographic inference for accurate router-level topology measurement. Specifically, prior tomographic techniques have required an infeasible number of probes for accurate, large scale topology recovery. We introduce a Depth-First Search (DFS) Ordering algorithm that clusters end host probe targets based on shared infrastructure, and enables the logical tree topology of the network to be recovered accurately and efficiently. We evaluate the capabilities of our DFS Ordering topology recovery algorithm in simulation and find that our method uses 94% fewer probes than exhaustive methods and 50% fewer than the current state-of-the-art. We also present results from a case study in the live Internet where we show that DFS Ordering can recover the logical router-level topology more accurately and with fewer probes than prior techniques.
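Clustering targets by shared infrastructure reduces, in the ideal case, to grouping them by shared path length from the source, which is the quantity tomographic probing estimates. A sketch of logical-tree recovery under the assumption of exact shared-path measurements (a simplification of the probe-driven DFS ordering):

```python
def recover_tree(leaves, shared):
    """Rebuild the logical tree from pairwise shared-path lengths:
    repeatedly merge the two clusters whose members share the deepest
    common path from the source. `shared[x][y]` is the shared path
    length of the routes to targets x and y.
    Returns a nested-tuple tree over the leaves."""
    clusters = [(leaf, {leaf}) for leaf in leaves]  # (subtree, leaf set)

    def s(ls1, ls2):
        return min(shared[x][y] for x in ls1 for y in ls2)

    while len(clusters) > 1:
        pairs = [(i, j) for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        i, j = max(pairs, key=lambda p: s(clusters[p[0]][1], clusters[p[1]][1]))
        node = ((clusters[i][0], clusters[j][0]),
                clusters[i][1] | clusters[j][1])
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [node]
    return clusters[0][0]
```

Given four targets where a,b share a 3-hop prefix, c,d share a 2-hop prefix, and the groups share only 1 hop, the recursion recovers the two-branch logical tree.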
【Keywords】: Internet; telecommunication network routing; telecommunication network topology; tomography; tree searching; Internet topology discovery; depth-first search ordering algorithm; logical tree topology; network tomography; router-level topology; tomographic inference; topology recovery; Clustering algorithms; Coordinate measuring machines; Delay; IP networks; Internet; Network topology; Probes; Time measurement; Tomography; Unicast
【Paper Link】 【Pages】:1541-1549
【Authors】: Hao Wang ; Haiquan (Chuck) Zhao ; Bill Lin ; Jun (Jim) Xu
【Abstract】: Many network processing applications require wire-speed access to large data structures or a large amount of flow-level data, but the capacity of SRAMs is woefully inadequate in many cases. In this paper, we analyze a robust pipelined memory architecture that can emulate an ideal SRAM by guaranteeing with very high probability that the output sequence produced by the pipelined memory architecture is the same as the one produced by an ideal SRAM under the same sequence of memory read and write operations, except time-shifted by a fixed pipeline delay of Δ. The design is based on the interleaving of DRAM banks together with the use of a reservation table that serves in part as a data cache. In contrast to prior interleaved memory solutions, our design is robust even under adversarial memory access patterns, which we demonstrate through a rigorous worst-case theoretical analysis using a combination of convex ordering and large deviation theory.
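The two ingredients, bank interleaving and a reservation table that shields reads from in-flight writes, can be sketched as a toy cycle-level model. The bank count, hashing by address modulo, and one-commit-per-bank-per-cycle timing are illustrative simplifications, not the paper's architecture:

```python
from collections import deque

class PipelinedMemory:
    """Toy model: writes are queued to one of B DRAM banks (address mod
    B) and committed one per bank per cycle; a reservation table holds
    uncommitted writes so every read observes the latest value, and
    every read completes exactly `delay` cycles after it is issued."""

    def __init__(self, banks=8, delay=16):
        self.delay = delay
        self.store = {}    # committed DRAM contents
        self.pending = {}  # reservation table: addr -> in-flight value
        self.queues = [deque() for _ in range(banks)]
        self.replies = deque()  # (due_cycle, addr, value)
        self.cycle = 0

    def write(self, addr, value):
        self.pending[addr] = value
        self.queues[addr % len(self.queues)].append(addr)

    def read(self, addr):
        # reservation table first, so reads never see stale DRAM data
        value = self.pending.get(addr, self.store.get(addr))
        self.replies.append((self.cycle + self.delay, addr, value))

    def tick(self):
        """Advance one cycle: each bank commits one queued write, and
        reads whose fixed delay has elapsed are returned."""
        self.cycle += 1
        for q in self.queues:
            if q:
                a = q.popleft()
                if a in self.pending:
                    self.store[a] = self.pending.pop(a)
        out = []
        while self.replies and self.replies[0][0] <= self.cycle:
            out.append(self.replies.popleft()[1:])
        return out
```

A read issued immediately after a write still returns the new value (served from the reservation table) exactly `delay` cycles later, which is the ideal-SRAM-shifted-by-Δ behavior the paper formalizes.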
【Keywords】: DRAM chips; SRAM chips; memory architecture; network synthesis; pipeline processing; telecommunication network routing; DRAM banks; SRAM banks; convex ordering; data structures; deviation theory; memory access patterns; network processing; robust pipelined memory architecture; robust pipelined memory system; wire-speed access; Buffer storage; Data structures; Delay; Information analysis; Information security; Inspection; Intrusion detection; Memory architecture; Random access memory; Robustness
【Paper Link】 【Pages】:1550-1558
【Authors】: Murali S. Kodialam ; T. V. Lakshman ; Sarit Mukherjee ; Limin Wang
【Abstract】: Behavioral targeting of content to users is a huge and lucrative business, valued as a $20 billion industry that is growing rapidly. So far, dominant players in this field like Google and Yahoo examine the user requests coming to their servers and place appropriate ads based on the user's search keywords. Triple-play service providers have access to all the traffic generated by their users and can build more comprehensive user profiles based on TV, broadband, and mobile usage. Using such multi-source profile information, they can generate new revenue streams by smart targeting of ads to their users over multiple screens (computer, TV, and mobile handset). This paper proposes methods to place targeted ads on TV based on users' interests. It proposes an ad auction model that can leverage multi-source profiles and handle dynamic profile-based targeting in the style of Google's AdWords, in contrast to the static demography-based targeting of legacy TV. We then propose a 0.502-competitive revenue-maximizing scheduling algorithm that chooses a set of ads in each time slot and assigns users to one of these selected ads.
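The slot-scheduling task, pick a limited set of ads per slot and assign each user to one of them, can be sketched with a simple greedy rule. This is an illustrative greedy, not the paper's 0.502-competitive algorithm, and the bid-times-interest valuation is an assumed model:

```python
def schedule_slot(users, ads, k):
    """Greedy ad scheduling for one time slot: value of showing `ad` to
    `user` is bid * interest; pick the k ads with the highest aggregate
    value, then assign each user to the best picked ad.

    users: {user: {ad: interest score}}; ads: {ad: bid}.
    Returns (chosen ads, {user: assigned ad}, total revenue)."""
    def val(ad, user):
        return ads[ad] * users[user].get(ad, 0.0)

    totals = {ad: sum(val(ad, u) for u in users) for ad in ads}
    chosen = sorted(ads, key=totals.get, reverse=True)[:k]
    assign = {u: max(chosen, key=lambda a: val(a, u)) for u in users}
    revenue = sum(val(assign[u], u) for u in users)
    return chosen, assign, revenue
```

The competitive-ratio machinery in the paper exists precisely because this kind of greedy can be fooled by adversarial arrival sequences; the sketch only illustrates the per-slot decision structure.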
【Keywords】: IPTV; advertising; scheduling; IPTV; auction model; dynamic profile-based targeting; online scheduling; scheduling algorithm; triple play service providers; user search keywords; Communications Society; Displays; IPTV; Job shop scheduling; Mobile handsets; Paper technology; Scheduling algorithm; TV; Web pages; Web search
【Paper Link】 【Pages】:1559-1567
【Authors】: Giovanna Carofiglio ; Luca Muscariello
【Abstract】: Internet performance is tightly related to the properties of the TCP and UDP protocols, jointly responsible for the delivery of the great majority of Internet traffic. It is well understood how these protocols behave under FIFO queuing and what the network congestion effects are. However, no comprehensive analysis is available when flow-aware mechanisms such as per-flow scheduling and dropping policies are deployed. Previous simulation and experimental results leave a number of unanswered questions. In this paper, we tackle this issue by modeling, via a set of fluid non-linear ODEs, the instantaneous throughput and the buffer occupancy of N long-lived TCP sources under three per-flow scheduling disciplines (Fair Queuing, Longest Queue First, Shortest Queue First) and with longest-queue-drop buffer management. We study the system evolution and analytically characterize the stationary regime: closed-form expressions are derived for the stationary throughput/sending rate and buffer occupancy, which give a thorough understanding of short/long-term fairness for TCP traffic. Similarly, we provide a characterization of the loss rate experienced by UDP flows in the presence of TCP traffic. As a result, the analysis allows us to quantify the benefits and drawbacks of deploying flow-aware scheduling mechanisms in different networking contexts. The model accuracy is confirmed by a set of ns2 simulations and by the evaluation of the three scheduling disciplines in a real implementation in the Linux kernel.
【Keywords】: Internet; scheduling; telecommunication traffic; transport protocols; FIFO queuing; Internet traffic; Linux; TCP; UDP protocols; closed-form expressions; fair queuing; flow-aware scheduling mechanisms; longest queue drop buffer management; longest queue first; network congestion effects; per-flow scheduling; shortest queue first; Communications Society; IP networks; Internet; Multiaccess communication; Processor scheduling; Protocols; Switches; Telecommunication traffic; Throughput; Traffic control
【Paper Link】 【Pages】:1568-1576
【Authors】: Sheng Xiao ; Weibo Gong ; Donald F. Towsley
【Abstract】: This paper introduces a set of low-complexity algorithms that, when coupled with link layer retransmission mechanisms, strengthen wireless communication security. Our basic idea is to generate a series of secrets from inevitable transmission errors and other random factors in wireless communications. Because these secrets are constantly extracted from the communication process in real time, we call them dynamic secrets. Dynamic secrets have interesting security properties. They offer a complementary mechanism to existing security protocols. Even if the adversary exploits a vulnerability and steals the underlying system secret, security can be automatically replenished. In many scenarios, it is also possible to bootstrap a secure communication with the dynamic secrets.
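The core idea, distilling shared randomness from frames both endpoints have observed, can be illustrated as a hash chain. The Python sketch below is a toy stand-in, not the paper's algorithm: the SHA-256 chaining construction and the frame contents are illustrative assumptions. It shows how two endpoints that witness the same delivered frames stay synchronized, while an eavesdropper who misses even one frame falls off the chain.

```python
import hashlib

def evolve_secret(secret: bytes, frame: bytes) -> bytes:
    # Fold each successfully delivered frame into the running secret;
    # an eavesdropper who misses any one frame loses the whole chain.
    return hashlib.sha256(secret + frame).digest()

# Both endpoints observe the same delivered frames (including retransmissions),
# so their dynamic secrets evolve in lockstep.
alice = bob = b"initial-system-secret"
for frame in (b"frame-1", b"frame-2-retx", b"frame-3"):
    alice = evolve_secret(alice, frame)
    bob = evolve_secret(bob, frame)

assert alice == bob  # endpoints remain synchronized

# An eavesdropper that missed frame-2 cannot reconstruct the current secret:
eve = evolve_secret(evolve_secret(b"initial-system-secret", b"frame-1"), b"frame-3")
assert eve != alice
```

Even if the initial system secret leaks later, the attacker must also have captured every frame since, which is what the abstract means by security being "automatically replenished".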
【Keywords】: communication complexity; private key cryptography; public key cryptography; radiocommunication; telecommunication security; dynamic secrets; link layer retransmission mechanisms; low-complexity algorithms; private key; public key infrastructure; security protocols; wireless communication security; Communication system security; Communications Society; Computer errors; Computer science; Computer security; Couplings; Mobile communication; Protocols; Public key; Wireless communication
【Paper Link】 【Pages】:1577-1585
【Authors】: Julien Freudiger ; Mohammad Hossein Manshaei ; Jean-Yves Le Boudec ; Jean-Pierre Hubaux
【Abstract】: In many envisioned mobile ad hoc networks, nodes are expected to periodically beacon to advertise their presence. In this way, they can receive messages addressed to them or participate in routing operations. Yet, these beacons leak information about the nodes and thus hamper their privacy. A classic remedy consists of each node making use of (certified) pseudonyms and changing its pseudonym in specific locations called mix zones. Of course, privacy is then higher if the pseudonyms are short-lived (i.e., nodes have a short distance-to-confusion), but pseudonyms can be costly, as they are usually obtained from an external authority. In this paper, we provide a detailed analytical evaluation of the age of pseudonyms based on differential equations. We corroborate this model by a set of simulations. This paper thus provides a detailed quantitative framework for selecting the parameters of a pseudonym-based privacy system in peer-to-peer wireless networks.
【Keywords】: ad hoc networks; differential equations; mobile radio; differential equations; information leakage; mobile ad hoc networks; peer-to-peer wireless networks; pseudonym-based privacy system; routing operations; Communications Society; Computer applications; Computer networks; Laboratories; Mobile ad hoc networks; Peer to peer computing; Privacy; Routing; Wireless communication; Wireless networks
【Paper Link】 【Pages】:1586-1594
【Authors】: Santhanakrishnan Anand ; Shamik Sengupta ; Rajarathnam Chandramouli
【Abstract】: We discuss malicious interference based denial of service (DoS) attacks in multi-band covert timing networks using an adversarial game theoretic approach. A covert timing network operating on a set of multiple spectrum bands is considered. Each band has an associated utility which represents the critical nature of the covert data transmitted in the band. A malicious attacker wishes to cause a DoS attack by sensing and creating malicious interference on some or all of the bands. The covert timing network deploys camouflaging resources to appropriately defend the spectrum bands. A two-tier game theoretic approach is proposed to model this scenario. The first tier of the game is the sensing game, in which the covert timing network determines the amount of camouflaging resources to be deployed in each band and the malicious attacker determines the optimal sensing resources to be deployed in each band. In the second tier of the game, the malicious attacker determines the optimal transmit powers on each spectral band it chooses to attack. We prove the existence of Nash equilibria for the games. We compare the performance of our proposed game theoretic mechanism with that of other well-known heuristic mechanisms and demonstrate the effectiveness of the proposed approach.
【Keywords】: computer crime; game theory; radio networks; telecommunication security; timing; Nash equilibriums; attack-defense game theoretic analysis; denial of service; heuristic mechanisms; malicious attacker; multiband wireless covert timing networks; multiple spectrum bands; optimal transmit powers; Computer crime; Delay effects; Game theory; Interference; Jamming; Nash equilibrium; Protocols; Radio transmitters; Switches; Timing
【Paper Link】 【Pages】:1595-1603
【Authors】: Kai Xing ; Xiuzhen Cheng
【Abstract】: A common vulnerability of wireless networks, in particular the mobile ad hoc network (MANET), is their susceptibility to node compromise/physical capture attacks, since the wireless devices are often not protected by tamper-resistant hardware due to small form factors and low cost, and can be easily stolen/lost or temporarily controlled by unauthorized entities due to their harsh working environments. A serious consequence of the device capture attack is the node replication attack, in which adversaries deploy a large number of replicas of the compromised/captured nodes throughout the network. Replicated nodes have all legitimate security credentials and therefore can launch various insider attacks or even take over the network easily. They are indeed "attack multipliers" and therefore are extremely destructive to the network. Detecting replication attacks is a nontrivial problem in MANETs due to the challenges resulting from node mobility, cloned/compromised node collusion, and the large number and wide spread of the replicas. Existing approaches either fail in mobile environments, due to the limitations caused by local views or their dependence on invariant claims such as location and neighbor list, or are constrained by the number, distribution, and colluding activities of the replicas. In this paper, we propose two replication detection schemes (TDD and SDD) to tackle all these challenges from both the time domain and the space domain. Our theoretical analysis indicates that TDD and SDD provide high detection accuracy and excellent resilience against smart and colluding replicas, have no restriction on the number and distribution of replicas, and incur low communication/computation overhead. To the best of our knowledge, TDD and SDD are the only approaches that support mobile networks while placing no restrictions on the number and distribution of the cloned frauds and on whether the replicas collude or not.
【Keywords】: ad hoc networks; mobile radio; radio networks; telecommunication security; attack multipliers; device capture attack; mobile ad hoc networks; replica attacks; time domain to space domain; Base stations; Communications Society; Computer networks; Hardware; Mobile ad hoc networks; Peer to peer computing; Physics computing; Protection; Space technology; Wireless networks
【Paper Link】 【Pages】:1604-1612
【Authors】: Xi Chen ; Daji Qiao
【Abstract】: How to reduce the handoff delay and how to make appropriate handoff decisions are two fundamental challenges in designing an effective handoff scheme for 802.11 networks that provides seamless and satisfactory data roaming services to mobile users. In this paper, we propose a unique fast handoff scheme called HaND (Handoff with Null Dwell time). HaND adopts a novel zero-channel-dwell-time architecture which leverages the communication backbone between APs to relay information about wireless channels, and allows the AP (rather than the station) to make appropriate handoff decisions aimed at providing fair service satisfaction to all stations. HaND is a software-only solution and is compatible with the 802.11 standard, without modifying the 802.11 protocol or introducing new wireless frames. We have implemented it in the Madwifi device driver and demonstrated its effectiveness via experiments.
【Keywords】: mobile computing; telecommunication standards; wireless LAN; HaND scheme; IEEE 802.11 Networks; Madwifi device driver; data roaming; fast handoff; handoff with null dwell time; mobile user; software only solution; wireless LAN; wireless channel; zero channel dwell time architecture; Communications Society; Delay effects; Probes; Relays; Roaming; Smart phones; Software standards; Spine; Switches; Wireless application protocol
【Paper Link】 【Pages】:1613-1621
【Authors】: An (Jack) Chan ; Kai Zeng ; Prasant Mohapatra ; Sung-Ju Lee ; Sujata Banerjee
【Abstract】: Peak Signal-to-Noise Ratio (PSNR) is the simplest and most widely used video quality evaluation methodology. However, traditional PSNR calculations do not take packet loss into account. This shortcoming, which is amplified in wireless networks, contributes to inaccuracy in evaluating video streaming quality in wireless communications. Such inaccuracy in PSNR calculations adversely affects the development of video communications in wireless networks. This paper proposes a novel video quality evaluation methodology. Because it not only considers the PSNR of a video but also incorporates modifications to handle packet loss, we name this evaluation method MPSNR. MPSNR rectifies the inaccuracies in traditional PSNR computation and helps us approximate subjective video quality, the Mean Opinion Score (MOS), more accurately. Using PSNR values calculated from MPSNR and simple network measurements, we apply linear regression techniques to derive two specific objective video quality metrics, PSNR-based Objective MOS (POMOS) and Rates-based Objective MOS (ROMOS). Through extensive experiments and human subjective tests, we show that the two metrics demonstrate high correlation with MOS. POMOS takes the averaged PSNR value of a video calculated from MPSNR as its only input. Despite its simplicity, it has a Pearson correlation of 0.8664 with the MOS. By adding a few other simple network measurements, such as the proportion of distorted frames in a video, ROMOS achieves an even higher Pearson correlation (0.9350) with the MOS. Compared with the metric from traditional PSNR calculations, our metrics evaluate video streaming quality in wireless networks with much higher accuracy while retaining the simplicity of the PSNR calculation.
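For reference, the classical per-frame PSNR that MPSNR builds on is 10·log10(MAX²/MSE). A minimal Python sketch of that baseline computation (the pixel values are toy data, and this is not the authors' MPSNR code, whose contribution lies in deciding how lost frames enter the calculation):

```python
import math

def psnr(ref, dist, max_val=255):
    # Classical per-frame PSNR: 10 * log10(MAX^2 / MSE) over co-located pixels.
    mse = sum((r - d) ** 2 for r, d in zip(ref, dist)) / len(ref)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10(max_val ** 2 / mse)

ref  = [52, 55, 61, 66, 70, 61, 64, 73]
dist = [52, 54, 61, 66, 70, 60, 64, 73]  # two pixels off by one
print(round(psnr(ref, dist), 2))         # -> 54.15
```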
【Keywords】: video communication; video streaming; wireless LAN; PSNR-based objective MOS; Pearson correlation; linear regression techniques; lossy IEEE 802.11 wireless networks; mean opinion score; network measurements; objective video quality metrics; packet loss; peak signal-to-noise ratio; rates-based objective MOS; video communications; video streaming quality evaluation method; wireless communications; Communications Society; Computer science; Electronic mail; Humans; Lifting equipment; Multimedia communication; PSNR; Streaming media; Wireless mesh networks; Wireless networks
【Paper Link】 【Pages】:1622-1630
【Authors】: Joseph Camp ; Ehsan Aryafar ; Edward W. Knightly
【Abstract】: Contending flows in multi-hop 802.11 wireless networks compete with two fundamental asymmetries: (i) channel asymmetry, in which one flow has a stronger signal, potentially yielding physical layer capture, and (ii) topological asymmetry, in which one flow has increased channel state information, potentially yielding an advantage in winning access to the channel. Prior work has considered these asymmetries independently with a highly simplified view of the other. However, in this work, we perform thousands of measurements on coupled flows in urban environments and build a simple, yet accurate model that jointly considers information and channel asymmetries. We show that if these two asymmetries are not considered jointly, throughput predictions of even two coupled flows are vastly distorted from reality when traffic characteristics are only slightly altered (e.g., changes to modulation rate, packet size, or access mechanism). These performance modes are sensitive not only to small changes in system properties, but also small-scale link fluctuations that are common in an urban mesh network. We analyze all possible capture relationships for two-flow sub-topologies and show that capture of the reverse traffic can allow a previously starving flow to compete fairly. Finally, we show how to extend and apply the model in domains such as modulation rate adaptation and understanding the interaction of control and data traffic.
【Keywords】: telecommunication network topology; telecommunication standards; telecommunication traffic; wireless LAN; wireless mesh networks; channel asymmetry; channel state information; contending flows; coupled flows; modulation rate adaptation; multihop 802.11 wireless networks; reverse traffic; small-scale link fluctuations; topological asymmetry; urban channels; urban mesh network; Channel state information; Distortion measurement; Fluctuations; Performance evaluation; Physical layer; Spread spectrum communication; Telecommunication traffic; Throughput; Traffic control; Wireless networks
【Paper Link】 【Pages】:1631-1639
【Authors】: Chen Feng ; Wain Sy Anthea Au ; Shahrokh Valaee ; Zhenhui Tan
【Abstract】: The sparse nature of the location finding problem makes the theory of compressive sensing desirable for indoor positioning in Wireless Local Area Networks (WLANs). In this paper, we address the received signal strength (RSS)-based localization problem in WLANs using the theory of compressive sensing (CS), which offers accurate recovery of sparse signals from a small number of measurements by solving an ℓ1-minimization problem. A pre-processing procedure of orthogonalization is used to induce the incoherence needed in the CS theory. In order to mitigate the effects of RSS variations due to channel impediments, the proposed positioning system consists of two steps: coarse localization by exploiting affinity propagation, and fine localization by the CS theory. In the fine localization stage, the access point selection problem is studied to further increase the accuracy. We implement the positioning system on a WiFi-integrated mobile device (HP iPAQ hx4700 with Windows Mobile 2003 Pocket PC) to evaluate the performance. Experimental results indicate that the proposed system leads to substantial improvements in localization accuracy and complexity over the widely used traditional fingerprinting methods.
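The recovery step can be illustrated with a toy stand-in: instead of the full ℓ1-minimization the paper solves, the sketch below uses a single matching-pursuit iteration to locate the one nonzero entry of a 1-sparse location indicator. The measurement matrix and the assumption of exactly one nonzero entry are illustrative, not taken from the paper.

```python
def recover_1_sparse(A, b):
    # One matching-pursuit step: pick the column of A most correlated with b,
    # then fit its coefficient by least squares. Exact for 1-sparse signals.
    m, n = len(A), len(A[0])
    def col(j):
        return [A[i][j] for i in range(m)]
    def score(j):
        c = col(j)
        norm = sum(x * x for x in c) ** 0.5
        return abs(sum(x * y for x, y in zip(c, b))) / norm
    j = max(range(n), key=score)
    c = col(j)
    coeff = sum(x * y for x, y in zip(c, b)) / sum(x * x for x in c)
    return j, coeff

# b is 3 times column 2 of A: the "user" sits at reference point 2.
A = [[1, 0, 2, 1],
     [0, 1, 1, 2],
     [2, 1, 0, 1]]
b = [6, 3, 0]
print(recover_1_sparse(A, b))  # -> (2, 3.0)
```

In the paper's setting the columns of A would correspond to candidate reference points and b to the (orthogonalized) RSS measurement vector; CS theory then guarantees recovery from far fewer measurements than reference points.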
【Keywords】: fingerprint identification; minimisation; position control; wireless LAN; HP iPAQ hx4700; RSS; WLAN access points; WiFi-integrated mobile device; Windows Mobile 2003 Pocket PC; affinity propagation; coarse localization; compressive sensing based positioning; fingerprinting methods; location finding problem; minimization problem; orthogonalization; pre-processing procedure; received signal strength; wireless local area networks; Communications Society; Electrical safety; Fingerprint recognition; Hardware; Impedance; Laboratories; Rails; Railway safety; Traffic control; Wireless LAN
【Paper Link】 【Pages】:1640-1648
【Authors】: Fabio Soldo ; Anh Le ; Athina Markopoulou
【Abstract】: A widely used defense practice against malicious traffic on the Internet is blacklists: lists of prolific attack sources are compiled and shared. The goal of blacklists is to predict and block future attack sources. Existing blacklisting techniques have focused on the most prolific attack sources and, more recently, on collaborative blacklisting. In this paper, we formulate the problem of forecasting attack sources (also referred to as "predictive blacklisting") based on shared attack logs as an implicit recommendation system. We compare the performance of existing approaches against the upper bound for prediction and demonstrate that there is much room for improvement. Inspired by the recent Netflix competition, we propose a multi-level collaborative filtering model that is adjusted and tuned specifically for the attack forecasting problem. Our model captures and combines various factors, namely attacker-victim history (using time series) and attacker and/or victim interactions (using neighborhood models). We evaluate our combined method on one month of logs from Dshield.org and demonstrate that it significantly improves the prediction rate over state-of-the-art methods as well as the robustness against poisoning attacks.
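The time-series half of such a model can be illustrated with an exponentially weighted moving average over each attacker-victim pair's daily hit counts. The EWMA form, the weight alpha, and the toy log are illustrative assumptions; the paper combines its time-series component with neighborhood models.

```python
def predict_scores(history, alpha=0.5):
    # EWMA over per-pair daily hit counts (oldest -> newest):
    # recent activity dominates tomorrow's forecast.
    scores = {}
    for pair, daily in history.items():
        s = 0.0
        for hits in daily:
            s = alpha * hits + (1 - alpha) * s
        scores[pair] = s
    return scores

history = {
    ("attacker1", "victim1"): [0, 0, 5],  # active yesterday
    ("attacker2", "victim1"): [5, 0, 0],  # active three days ago
}
scores = predict_scores(history)
# Equal total activity, but the recently active source ranks higher.
assert scores[("attacker1", "victim1")] > scores[("attacker2", "victim1")]
```

A predictive blacklist for victim1 would then be the top-k attackers by score; the neighborhood component would additionally boost attackers seen at victims with correlated logs.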
【Keywords】: Internet; recommender systems; security of data; time series; Internet; Netflix competition; attacker-victims interactions; implicit recommendation system; malicious traffic; multilevel prediction model; neighborhood models; predictive collaborative blacklisting; prolific attack sources; time series; Collaboration; Communications Society; History; Internet; Intrusion detection; Predictive models; Robustness; Security; Telecommunication traffic; Traffic control
【Paper Link】 【Pages】:1649-1657
【Authors】: Partha Kanuparthy ; Constantine Dovrolis
【Abstract】: We propose an active probing method, called Differential Probing or DiffProbe, to detect whether an access ISP is deploying forwarding mechanisms such as priority scheduling, variations of WFQ, or WRED to discriminate against some of its customer flows. DiffProbe aims to detect if the ISP is doing one or both of delay discrimination and loss discrimination. The basic idea in DiffProbe is to compare the delays and packet losses experienced by two flows: an Application flow A and a Probing flow P. The paper describes the statistical methods that DiffProbe uses, a novel method for distinguishing between Strict Priority and WFQ-variant packet scheduling, simulation and emulation experiments, and a few real-world tests at major access ISPs.
【Keywords】: Internet; scheduling; statistical analysis; DiffProbe; ISP service discrimination; WFQ-variant packet scheduling; WRED; active probing method; delay discrimination; differential probing; forwarding mechanisms; loss discrimination; packet losses; priority scheduling; statistical methods; Communications Society; Computer science; Delay estimation; Emulation; Processor scheduling; Scheduling algorithm; Statistical analysis; Testing; Tomography; Traffic control
【Paper Link】 【Pages】:1658-1666
【Authors】: Sebastian Neumayer ; Eytan Modiano
【Abstract】: Fiber-optic networks are vulnerable to natural disasters, such as tornadoes or earthquakes, as well as to physical failures, such as an anchor cutting underwater fiber cables. Such real-world events occur in specific geographical locations and disrupt specific parts of the network. Therefore, the geography of the network determines the effect of physical events on the network's connectivity and capacity. In this paper, we develop tools to analyze network failures after a 'random' geographic disaster. The random location of the disaster allows us to model situations where the physical failures are not targeted attacks. In particular, we consider disasters that take the form of a 'random' line in a plane. Using results from geometric probability, we are able to calculate some network performance metrics under such a disaster in polynomial time. In particular, we can evaluate average two-terminal reliability in polynomial time under 'random' line-cuts. This is in contrast to the case of independent link failures, for which there exists no known polynomial-time algorithm to calculate this reliability metric. We also present some numerical results to show the significance of geometry on the survivability of the network and discuss network design in the context of random line-cuts. Our novel approach provides a promising new direction for modeling and designing networks to lessen the effects of geographical disasters or attacks.
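While the paper computes such metrics exactly via geometric probability, the setting is easy to illustrate with a Monte Carlo sketch: sample random lines through the unit square, remove every edge whose endpoints fall on opposite sides of the line, and check two-terminal connectivity. The four-node ring topology and the line-sampling scheme below are illustrative assumptions, not the paper's model.

```python
import math
import random

def two_terminal_ok(nodes, edges, line, s, t):
    # Union-find over the edges that survive the line cut a*x + b*y = c.
    a, b, c = line
    side = lambda p: a * p[0] + b * p[1] - c
    parent = list(range(len(nodes)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        if side(nodes[u]) * side(nodes[v]) > 0:  # endpoints not separated
            parent[find(u)] = find(v)
    return find(s) == find(t)

nodes = [(0.1, 0.1), (0.9, 0.1), (0.9, 0.9), (0.1, 0.9)]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a ring: two disjoint 0-2 paths
random.seed(7)
TRIALS = 5000
ok = 0
for _ in range(TRIALS):
    theta = random.uniform(0.0, math.pi)   # line direction
    a, b = math.cos(theta), math.sin(theta)
    c = random.uniform(-1.0, 2.0)          # offset sweeping past the square
    ok += two_terminal_ok(nodes, edges, (a, b, c), 0, 2)
print(ok / TRIALS)  # estimated two-terminal reliability of the 0-2 pair
```

The paper's point is that, unlike this sampling stand-in or the independent-failure case, the line-cut model admits exact polynomial-time evaluation.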
【Keywords】: combinatorial mathematics; disasters; optical fibre networks; statistical distributions; telecommunication network reliability; attacks; fiber-optic networks; geographical disasters; geographically correlated failures; network failures; network reliability; Earthquakes; Failure analysis; Geography; Optical fiber cables; Optical fiber networks; Polynomials; Probability; Telecommunication network reliability; Tornadoes; Underwater cables
【Paper Link】 【Pages】:1667-1675
【Authors】: Kayi Lee ; Hyang-Won Lee ; Eytan Modiano
【Abstract】: We consider network reliability in layered networks where the lower layer experiences random link failures. In layered networks, each failure at the lower layer may lead to multiple failures at the upper layer. We generalize the classical polynomial expression for network reliability to the multi-layer setting. Using random sampling techniques, we develop polynomial time approximation algorithms for the failure polynomial. Our approach gives an approximate expression for reliability as a function of the link failure probability, eliminating the need to resample for different values of the failure probability. Furthermore, it gives insight on how the routings of the logical topology on the physical topology impact network reliability. We show that maximizing the min cut of the (layered) network maximizes reliability in the low failure probability regime. Based on this observation, we develop algorithms for routing the logical topology to maximize reliability.
【Keywords】: polynomials; telecommunication links; telecommunication network reliability; telecommunication network routing; telecommunication network topology; failure polynomial; layered networks reliability; multi-layer setting; polynomial expression; polynomial time approximation algorithms; random link failures; random sampling techniques; Algorithm design and analysis; Approximation algorithms; Circuit topology; Communications Society; Network topology; Optical fiber communication; Polynomials; Routing; Sampling methods; Telecommunication network reliability
【Paper Link】 【Pages】:1676-1684
【Authors】: Uichin Lee ; Paul Wang ; Youngtae Noh ; Luiz Filipe M. Vieira ; Mario Gerla ; Jun-Hong Cui
【Abstract】: A SEA Swarm (Sensor Equipped Aquatic Swarm) is a sensor "cloud" that drifts with water currents and enables 4D (space and time) monitoring of local underwater events such as contaminants, marine life and intruders. The swarm is escorted at the surface by drifting sonobuoys that collect the data from underwater sensors via acoustic modems and report it in real-time via radio to a monitoring center. The goal of this study is to design an efficient anycast routing algorithm for reliable underwater sensor event reporting to any one of the surface sonobuoys. Major challenges are the ocean current and the limited resources (bandwidth and energy). In this paper, we address these challenges and propose HydroCast, a hydraulic pressure based anycast routing protocol that exploits the measured pressure levels to route data to surface buoys. The paper makes the following contributions: a novel opportunistic routing mechanism to select the subset of forwarders that maximizes greedy progress yet limiting co-channel interference; and an efficient underwater "dead end" recovery method that outperforms recently proposed approaches. The proposed routing protocols are validated via extensive simulations.
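The greedy step of pressure routing can be sketched as a simple comparison: relay toward a neighbor with strictly lower measured pressure (i.e., closer to the surface), and declare a "dead end" when none exists. This single-next-hop sketch is an illustrative simplification of the paper's opportunistic subset-of-forwarders scheme; the pressure readings are made-up values.

```python
def next_hop(my_pressure, neighbor_pressure):
    # Greedy hydraulic-pressure forwarding: pick the shallowest neighbor,
    # i.e., the one whose pressure reading is lowest.
    shallower = {n: p for n, p in neighbor_pressure.items() if p < my_pressure}
    if not shallower:
        return None  # "dead end": the recovery method would take over here
    return min(shallower, key=shallower.get)

# A node reading 42 (pressure units) choosing among three acoustic neighbors:
print(next_hop(42.0, {"n1": 40.5, "n2": 17.0, "n3": 55.0}))  # -> n2
```

Pressure serves as a depth proxy that needs no localization; the None branch is where HydroCast's dead-end recovery would route around a local pressure minimum.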
【Keywords】: cochannel interference; marine communication; pressure measurement; routing protocols; wireless sensor networks; 4D space monitoring; 4D time monitoring; HydroCast; acoustic modems; anycast routing protocol; co-channel interference; contaminants; hydraulic pressure; intruders; marine life; opportunistic routing mechanism; pressure level measurement; pressure routing; radio; sensor cloud; sensor equipped aquatic swarm; sonobuoys drifting; underwater dead end recovery method; underwater sensor networks; Acoustic sensors; Algorithm design and analysis; Clouds; Modems; Monitoring; Oceans; Routing protocols; Sea surface; Surface contamination; Underwater acoustics
【Paper Link】 【Pages】:1685-1693
【Authors】: Shouwen Lai ; Binoy Ravindran
【Abstract】: We revisit the shortest path problem in asynchronous duty-cycled wireless sensor networks, which exhibit time-dependent features. We model the time-varying link cost and distance from each node to the sink as periodic functions. We show that the time-cost function satisfies the FIFO property, which makes the time-dependent shortest path problem solvable in polynomial time. Using the β-synchronizer, we propose a fast distributed algorithm to build all-to-one shortest paths with polynomial message complexity and time complexity. The algorithm determines the shortest paths for all discrete times with a single execution, in contrast with the multiple executions needed by previous solutions. We further propose an efficient distributed algorithm for time-dependent shortest path maintenance. The proposed algorithm is loop-free with low message complexity and low space complexity of O(maxdeg), where maxdeg is the maximum degree over all nodes. The performance of our solution is evaluated under diverse network configurations. The results suggest that our algorithm is more efficient than previous solutions in terms of message complexity and space complexity.
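The FIFO property is what licenses a label-setting (Dijkstra-style) computation: a packet that departs later can never arrive earlier, so settled arrival labels stay optimal. A minimal centralized Python sketch with time-varying, FIFO-compliant link costs; the wake-up schedule and the three-node topology are illustrative assumptions, and the paper's contribution is the distributed version of this idea.

```python
import heapq

def td_dijkstra(adj, src, t0):
    """Earliest-arrival times from src departing at time t0.

    adj[u] is a list of (v, cost_fn), where cost_fn(t) is the traversal
    delay when departing u at time t. Under the FIFO property
    (t1 <= t2 implies t1 + cost_fn(t1) <= t2 + cost_fn(t2)),
    label-setting Dijkstra remains optimal.
    """
    arrival = {src: t0}
    pq = [(t0, src)]
    while pq:
        t, u = heapq.heappop(pq)
        if t > arrival.get(u, float("inf")):
            continue  # stale entry
        for v, cost_fn in adj.get(u, []):
            ta = t + cost_fn(t)
            if ta < arrival.get(v, float("inf")):
                arrival[v] = ta
                heapq.heappush(pq, (ta, v))
    return arrival

# Duty-cycled flavor: link B->C only wakes at multiples of 5, so its cost
# includes waiting for the next active slot plus 1 unit to transmit (FIFO).
wake = lambda t: (5 - t % 5) % 5 + 1
adj = {
    "A": [("B", lambda t: 2), ("C", lambda t: 9)],
    "B": [("C", wake)],
}
print(td_dijkstra(adj, "A", 0))  # -> {'A': 0, 'B': 2, 'C': 6}
```

Departing A at time 0, the two-hop route arrives at C at time 6 (2 to reach B, then wait until slot 5 plus 1 to send), beating the direct link's cost of 9; without FIFO, such settled labels could be invalidated by later departures.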
【Keywords】: computational complexity; distributed algorithms; synchronisation; wireless sensor networks; asynchronous duty-cycled wireless sensor networks; distributed time-dependent shortest path problem; fast distributed algorithm; low space complexity; polynomial message complexity; polynomial time; time complexity; time-cost function; time-varying link cost; β-synchronizer; Communications Society; Cost function; Delay; Distributed algorithms; Peer to peer computing; Polynomials; Routing protocols; Shortest path problem; Telecommunication traffic; Wireless sensor networks
【Paper Link】 【Pages】:1694-1702
【Authors】: Wei Zeng ; Rik Sarkar ; Feng Luo ; Xianfeng Gu ; Jie Gao
【Abstract】: We study how to characterize the families of paths between any two nodes s, t in a sensor network with holes. Two paths that can be deformed into one another through local changes are called homotopy equivalent. Two paths that pass around holes in different ways have different homotopy types. With a distributed algorithm, we compute an embedding of the network in hyperbolic space by using Ricci flow, such that paths of different homotopy types are mapped naturally to paths connecting s with different images of t. Greedy routing to a particular image is guaranteed to succeed in finding a path with the given homotopy type. This leads to simple greedy routing algorithms that are resilient to both local link dynamics and large-scale jamming attacks and improve load balancing over previous greedy routing algorithms.
【Keywords】: greedy algorithms; hyperbolic equations; telecommunication network routing; wireless sensor networks; Greedy routing; homotopy types; hyperbolic embedding; large scale jamming attacks; local link dynamics; resilient routing; sensor networks; universal covering space; Computer networks; Distributed algorithms; Distributed computing; Embedded computing; Jamming; Joining processes; Large-scale systems; Load management; Routing; Sensor phenomena and characterization
【Paper Link】 【Pages】:1703-1711
【Authors】: Hwee-Xian Tan ; Mun Choon Chan ; Wendong Xiao ; Peng Yong Kong ; Chen-Khong Tham
【Abstract】: Upon the occurrence of a phenomenon of interest in a wireless sensor network, multiple sensors may be activated, leading to data implosion and redundancy. Data aggregation and/or fusion techniques exploit spatio-temporal correlation among sensory data to reduce traffic load and mitigate congestion. However, this is often at the expense of loss in Information Quality (IQ) of data that is collected at the fusion center. In this work, we address the problem of finding the least-cost routing tree that satisfies a given IQ constraint. We note that the optimal least-cost routing solution is a variation of the classical NP-hard Steiner tree problem in graphs, which incurs high overheads as it requires knowledge of the entire network topology and individual IQ contributions of each activated sensor node. We tackle these issues by proposing: (i) a topology-aware histogram-based aggregation structure that encapsulates the cost of including the IQ contribution of each activated node in a compact and efficient way; and (ii) a greedy heuristic to approximate and prune a least-cost aggregation routing path. We show that the performance of our IQ-aware routing protocol is: (i) bounded by a distance-based aggregation tree that collects data from all the activated nodes; and (ii) comparable to another IQ-aware routing protocol that uses an exhaustive brute-force search to approximate and prune the least-cost aggregation tree.
【Keywords】: communication complexity; routing protocols; sensor fusion; telecommunication network topology; telecommunication traffic; trees (mathematics); wireless sensor networks; NP-hard Steiner tree problem; congestion mitigation; data aggregation technique; data fusion technique; data implosion; data redundancy; distance-based aggregation tree; event-driven sensor networks; graphs; greedy heuristic; information quality aware routing protocol; least-cost aggregation routing path; least-cost routing tree; network topology; optimal least-cost routing solution; spatio-temporal correlation; topology-aware histogram-based aggregation structure; traffic load reduction; wireless sensor network; Computer networks; Monitoring; Network topology; Peer to peer computing; Routing protocols; Sensor fusion; Sensor phenomena and characterization; Telecommunication traffic; Tree graphs; Wireless sensor networks
【Paper Link】 【Pages】:1712-1719
【Authors】: Sreenath Ramanath ; Mérouane Debbah ; Eitan Altman ; Vinod Kumar
【Abstract】: In this paper, we study precoded MIMO based small cell networks. We derive the theoretical sum-rate capacity when multi-antenna base stations transmit precoded information to their multiple single-antenna users in the presence of inter-cell interference from neighboring cells. In an interference-limited scenario, increasing the number of antennas at the base stations does not necessarily yield a linear increase in capacity. We assess exactly the effect of multi-cell interference on the capacity gain for a given interference level. We use recent tools from random matrix theory to obtain the ergodic sum-rate capacity as the number of antennas at the base station and the number of users grow large. Simulations confirm the theoretical claims and also indicate that in most scenarios the asymptotic derivations applied to a finite number of users give good approximations of the actual ergodic sum-rate capacity.
【Keywords】: MIMO communication; cellular radio; precoding; transmitting antennas; asymptotic analysis; ergodic sum-rate capacity; intercell interference; multi-antenna base station; precoded MIMO based small cell network; precoded small cell networks; Base stations; Broadcasting; Communications Society; Downlink; Interference; MIMO; Radio transmitters; Receiving antennas; Throughput; Transmitting antennas
【Paper Link】 【Pages】:1720-1728
【Authors】: Vincenzo Mancuso ; Omer Gurewitz ; Ahmed Khattab ; Edward W. Knightly
【Abstract】: IEEE 802.11-based mesh networks can yield a throughput distribution among nodes that is spatially biased, with traffic originating from nodes that directly communicate with the gateway obtaining higher throughput than all other upstream traffic. In particular, if single-hop nodes fully utilize the gateway's resources, all other nodes communicating with the same gateway will attain very little (if any) throughput. In this paper, we show that it is sufficient to rate limit the single-hop nodes in order to give transmission opportunities to all other nodes. Based on this observation, we develop a new rate limiting scheme for 802.11 mesh networks, which counters the spatial bias effect and does not require, in principle, any control overhead. Our rate control mechanism is based on three key techniques. First, we exploit the system's inherent priority nature and control the throughput of the spatially disadvantaged nodes by only controlling the transmission rate of the spatially advantaged nodes. Namely, the single-hop nodes collectively behave as a proxy controller for multi-hop nodes in order to achieve the desired bandwidth distribution. Second, we devise a rate limiting scheme that enforces a utilization threshold for advantaged single-hop traffic and guarantees a small portion of the gateway resources for the disadvantaged multi-hop traffic. We infer demand for multi-hop flow bandwidth whenever gateway resource usage exceeds this threshold, and subsequently reduce the rates of the spatially advantaged single-hop nodes. Third, since the more bandwidth the spatially disadvantaged nodes attain, the easier they can signal their demands, we allow the bandwidth unavailable for the advantaged nodes to be elastic, i.e., the more the disadvantaged flows use the gateway resources, the higher the utilization threshold is. We develop an analytical model to study a system characterized by such priority, dynamic utilization thresholds, and control by proxy.
Moreover, we use simulations to evaluate the proposed elastic rate limiting technique.
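The elastic-threshold idea can be sketched as follows (parameter names, the linear elasticity rule, and the numbers are illustrative assumptions, not the paper's model): single-hop traffic is capped at a utilization threshold that rises as the disadvantaged multi-hop flows consume more gateway capacity.

```python
def elastic_threshold(base, multihop_share, elasticity=0.5):
    """Utilization cap for advantaged single-hop traffic: the more gateway
    capacity the disadvantaged multi-hop flows use, the higher the cap.
    A toy linear rule; the paper's actual scheme may differ."""
    return min(1.0, base + elasticity * multihop_share)

def single_hop_rate(gateway_capacity, single_demand, multihop_share, base=0.8):
    # single-hop nodes are collectively rate-limited to the elastic cap
    cap = elastic_threshold(base, multihop_share) * gateway_capacity
    return min(single_demand, cap)

# no multi-hop traffic observed: single-hop nodes capped at 80% of capacity
r0 = single_hop_rate(10.0, 12.0, multihop_share=0.0)
# multi-hop flows using 20% of capacity: the cap relaxes toward 90%
r1 = single_hop_rate(10.0, 12.0, multihop_share=0.2)
```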
【Keywords】: internetworking; telecommunication standards; telecommunication traffic; wireless LAN; wireless mesh networks; IEEE 802.11; bandwidth distribution; control overhead; dynamic utilization thresholds; elastic rate limiting; gateway resources; inherent priority nature; multihop flow bandwidth; proxy controller; single-hop nodes; spatial bias effect; throughput distribution; upstream traffic; wireless mesh networks; Aggregates; Bandwidth; Communication system traffic control; Control systems; Mesh networks; Peer to peer computing; Telecommunication traffic; Throughput; Traffic control; Wireless mesh networks
【Paper Link】 【Pages】:1729-1737
【Authors】: Alonso Silva ; Eitan Altman ; Mérouane Debbah ; Giuseppa Alfano
【Abstract】: In this paper we study the optimal placement and optimal number of active relay nodes through the traffic density in mobile sensor ad hoc networks. We consider a setting in which a set of mobile sensor sources is creating data and a set of mobile sensor destinations is receiving that data through multihop wireless paths. We make the assumption that the network is massively dense, i.e., there are so many sources, destinations, and relay nodes that it is best to describe the network in terms of macroscopic parameters rather than microscopic parameters. A simple one-dimensional scenario is used to introduce the problem. We then solve the two-dimensional scenario both when the mobility of the nodes is deterministic and when it follows the Brownian mobility model.
【Keywords】: ad hoc networks; mobile radio; telecommunication traffic; active relay nodes; brownian mobility model; magnetworks; mobile ad hoc networks; mobile sensor sources; one-dimensional scenario; traffic density; Ad hoc networks; Communications Society; Magnetic sensors; Mobile ad hoc networks; Peer to peer computing; Relays; Roads; Routing; Telecommunication traffic; Wireless sensor networks
【Paper Link】 【Pages】:1738-1746
【Authors】: Delia Ciullo ; Valentina Martina ; Michele Garetto ; Emilio Leonardi
【Abstract】: We extend the analysis of the scaling laws of wireless ad hoc networks to the case of correlated nodes movements, which are commonly found in real mobility processes. We consider a simple version of the Reference Point Group Mobility model, in which nodes belonging to the same group are constrained to lie in a disc area, whose center moves uniformly across the network according to the i.i.d. model. We assume fast mobility conditions, and take as primary goal the maximization of per-node throughput. We discover that correlated node movements have huge impact on asymptotic throughput and delay, and can sometimes lead to better performance than the one achievable under independent nodes movements.
【Keywords】: ad hoc networks; mobile communication; mobility management (mobile radio); correlated mobility; delay-throughput performance; mobile ad hoc networks; mobility process; wireless ad hoc networks; Ad hoc networks; Animals; Communications Society; Delay effects; Mobile ad hoc networks; Peer to peer computing; Performance analysis; Routing protocols; Scheduling; Throughput
【Paper Link】 【Pages】:1747-1755
【Authors】: Xinyu Zhang ; Kang G. Shin
【Abstract】: Traditional wireless broadcast protocols rely heavily on the 802.11-based CSMA/CA model, which avoids interference and collision by conservatively scheduling transmissions. While CSMA/CA is amenable to multiple concurrent unicasts, it tends to degrade broadcast performance, especially when there are a large number of nodes and links are lossy. In this paper, we propose a new, drastically different protocol called Chorus that improves the efficiency and scalability of broadcast service with a MAC layer that allows packet collisions. Chorus is built upon the observation that packets carrying the same data can be effectively detected and decoded, even when they overlap in time and have comparable signal strength. It performs collision resolution using symbol-level iterative decoding, and then combines the resolved symbols to reconstruct the packet. This collision-tolerant mechanism significantly improves the transmission diversity and spatial reuse in wireless broadcast, providing an asymptotic broadcast delay that is proportional to the network radius. This advantage is exploited further by Chorus's MAC-layer cognitive sensing and scheduling scheme. We evaluate Chorus with symbol-level simulation, and validate its network-level performance via ns-2, in comparison with a typical CSMA/CA broadcast protocol.
【Keywords】: access protocols; broadcasting; interference; iterative decoding; scheduling; wireless LAN; 802.11-based CSMA/CA model; CSMA/CA broadcast protocol; Chorus; MAC layer; MAC-layer cognitive sensing; collision resolution; interference; multiple concurrent unicasts; scheduling transmissions; symbol-level iterative decoding; wireless broadcast protocols; Broadcasting; Degradation; Interference; Iterative decoding; Multiaccess communication; Performance loss; Signal resolution; Spatial resolution; Unicast; Wireless application protocol
【Paper Link】 【Pages】:1756-1764
【Authors】: Seyed A. Hejazi ; Ben Liang
【Abstract】: Despite much research on the throughput of relaying networks under idealized interference models, many practical wireless networks rely on physical-layer protocols that preclude the concurrent reception of multiple transmissions. In this work, we develop analytical frameworks for the uplink of a multi-source single-channel relay-aided wireless system where transmissions are scheduled to avoid collisions. We study amplify-and-forward and decode-and-forward strategies, in both time-sharing and network-coded variants, and provide mathematical models to investigate their achievable rate regions. Both general and optimal power allocations are considered. We also find the cut-set outer bounds for the rate regions. Moreover, we present a comparison of these methods with the simple time-sharing scheme. Our numerical results reveal that optimizing power allocation favors the time-sharing scheme significantly more than it does the relaying schemes, so that time sharing under some circumstances can provide higher maximum sum rates, even if the links to the relay have strong channel gains. The proposed analysis provides a means to quantitatively evaluate the efficacy of relaying under the collision model, leading to pragmatic design guidelines.
【Keywords】: access protocols; channel allocation; wireless channels; amplify-and-forward strategy; collision model; decode-and-forward strategy; multiple access relay channel; network-coded variants; optimal power allocation; physical layer protocols; throughput analysis; time sharing scheme; wireless networks; Access protocols; Decoding; Frame relay; Interference; Mathematical model; Power system relaying; Throughput; Time sharing computer systems; Wireless application protocol; Wireless networks
【Paper Link】 【Pages】:1765-1773
【Authors】: Sumit Singh ; Raghuraman Mudumbai ; Upamanyu Madhow
【Abstract】: Multi-gigabit outdoor mesh networks operating in the unlicensed 60 GHz "millimeter (mm) wave" band offer the possibility of a quickly deployable broadband extension of the Internet. We consider mesh nodes with electronically steerable antenna arrays, with both the transmitter and receiver synthesizing narrow beams that compensate for the higher path loss at mm-wave frequencies, achieving ranges on the order of 100 meters using the relatively low transmit powers attainable with low-cost silicon implementations. Such highly directional networking differs from WiFi networks at lower carrier frequencies in two ways that have a crucial impact on protocol design: (1) directionality drastically reduces spatial interference, so that pseudowired link abstractions form an excellent basis for protocol design; (2) directionality induces deafness, which makes medium access control (MAC) based on carrier sensing infeasible. Interference analysis in our prior work shows that, in such a setting, coordination between transmitters and receivers, rather than interference management, becomes the key MAC performance bottleneck. However, the question of whether such coordination can be achieved in a distributed fashion while achieving high medium utilization was left open. In this paper, we answer this question in the affirmative, presenting a distributed MAC protocol that employs memory to achieve approximate time division multiplexed (TDM) schedules without explicit coordination or resource allocation. The efficacy of the protocol is demonstrated via packet level simulations, while a Markov chain fixed-point analysis provides insight into the effect of parameter choices.
【Keywords】: Markov processes; access protocols; antenna arrays; interference suppression; time division multiplexing; wireless mesh networks; Markov chain fixed-point analysis; WiFi networks; broadband extension; interference analysis; medium access; medium access control; multigigabit outdoor mesh networks; spatial interference reduction; steerable antenna arrays; time division multiplexed schedules; Access protocols; Deafness; IP networks; Interference; Media Access Protocol; Mesh networks; Receiving antennas; Time division multiplexing; Transmitters; Transmitting antennas
【Paper Link】 【Pages】:1774-1782
【Authors】: Erran L. Li ; Richard Alimi ; Dawei Shen ; Harish Viswanathan ; Yang Richard Yang
【Abstract】: Physical layer techniques have come a long way and can achieve very close to Shannon capacity for point-to-point links. It is apparent that, to further improve network capacity significantly, we have to resort to concurrent transmissions. Many concurrent transmission techniques (e.g., zero forcing, interference alignment and distributed MIMO) have been proposed in which multiple senders jointly encode signals to multiple receivers so that interference is aligned and each receiver is able to decode its desired information. In this paper, we investigate the constraints and challenges of using interference alignment. Our main contribution is conducting the first systematic investigation on the key issue of identifying opportunities for interference alignment. We identify diverse, novel scenarios for using interference alignment. We show that identifying opportunities for interference alignment in the general case is computationally challenging. However, we also present a promising, distributed algorithm for identifying a wide range of opportunities for interference alignment using a unifying framework based on the degree of freedom. Our second contribution is evaluating key practical implementation issues.
【Keywords】: interference suppression; optimisation; radiofrequency interference; wireless mesh networks; NP-hard problem; interference alignment; interference cancellation; multiple concurrent transmission techniques; single point-to-point transmissions; wireless multihop mesh networks; Channel estimation; Communications Society; Decoding; Filtering theory; Interference cancellation; Network topology; OFDM; Receivers; Throughput; Wireless networks
【Paper Link】 【Pages】:1783-1791
【Authors】: Minghua Chen ; Soung Chang Liew ; Ziyu Shao ; Caihong Kai
【Abstract】: Many important network design problems can be formulated as combinatorial optimization problems. A large number of such problems, however, cannot readily be tackled by distributed algorithms. The Markov approximation framework studied in this paper is a general technique for synthesizing distributed algorithms. We show that when using the log-sum-exp function to approximate the optimal value of any combinatorial problem, we end up with a solution that can be interpreted as the stationary probability distribution of a class of time-reversible Markov chains. Certain carefully designed Markov chains among this class yield distributed algorithms that solve the log-sum-exp approximated combinatorial network optimization problem. Through three case studies, we illustrate that the Markov approximation technique not only provides a fresh perspective on existing distributed solutions, but also helps us generate new distributed algorithms in various domains with provable performance. We believe the Markov approximation framework will find applications in many network optimization problems, and this paper serves as a call for participation.
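The log-sum-exp approximation at the heart of this framework can be illustrated directly: for a parameter β > 0, (1/β)·log Σᵢ exp(β·xᵢ) is a smooth upper bound on max(xᵢ) whose gap is at most log(N)/β. A minimal sketch (the specific values are arbitrary):

```python
import math

def log_sum_exp(values, beta):
    """(1/beta) * log(sum_i exp(beta * x_i)): a smooth approximation of
    max(values).  It always upper-bounds the max, with a gap of at most
    log(len(values)) / beta."""
    m = max(values)  # subtract the max first for numerical stability
    return m + math.log(sum(math.exp(beta * (v - m)) for v in values)) / beta

xs = [1.0, 2.0, 3.5]
approx = log_sum_exp(xs, beta=50.0)
# approx lies within log(3)/50 ~ 0.022 of the true maximum 3.5
```

In the framework, the corresponding Gibbs distribution pᵢ ∝ exp(β·xᵢ) is exactly the stationary distribution the designed Markov chains realize; larger β trades slower mixing for a tighter approximation.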
【Keywords】: Markov processes; approximation theory; distributed algorithms; optimisation; radio networks; Markov approximation framework; combinatorial network optimization; distributed algorithms; log-sum-exp function; network design problems; stationary probability distribution; Algorithm design and analysis; Communications Society; Design engineering; Design optimization; Distributed algorithms; Entropy; Multiaccess communication; Network synthesis; Probability distribution; Throughput
【Paper Link】 【Pages】:1792-1800
【Authors】: Moez Draief ; Milan Vojnovic
【Abstract】: We consider the convergence time for solving the binary interval consensus problem using a distributed algorithm proposed by Bénézit et al. (2009) for computing the quantized average value. In the binary consensus problem, each node initially holds one of two states and the goal for each node is to correctly decide which one of the two states was initially held by a majority of nodes. We derive an upper bound on the expected convergence time that holds for arbitrary connected graphs, which is based on the location of eigenvalues of some contact rate matrices. We instantiate our bound for particular networks of interest, including complete graphs, star-shaped networks, and Erdős-Rényi random graphs, and in the former two cases compare with alternative computations. We find that for all these examples our bound is of exact order with respect to the number of nodes. We pinpoint the fact that the expected convergence time critically depends on the voting margin, defined as the difference between the fraction of the nodes that initially held the majority and the minority states, respectively. We derive an exact relation between the expected convergence time and the voting margin for some of these graphs, which reveals how the expected convergence time tends to infinity as the voting margin approaches zero. Our results provide insights on how the expected convergence time depends on the network topology, which can be used for performance evaluation and network design. The results are of interest in the context of peer-to-peer systems; in particular, for sensor networks and distributed databases.
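A small simulation conveys the mechanism. The update rules below are our reading of the interval consensus scheme (strong states '0'/'1', weak states 'e0'/'e1'; strong opposites annihilate into weak states, and a strong state flips a weak opposite), so treat them as an illustrative sketch rather than a faithful reproduction of Bénézit et al.:

```python
import random

def interval_consensus(states, rng, max_steps=100_000):
    """Simulate binary interval consensus on a complete graph with random
    pairwise contacts.  Pairwise rules (our reading of the scheme):
      (0, 1)  -> (e0, e1)   # strong opposites annihilate into weak states
      (0, e1) -> (0, e0)    # a strong 0 flips a weak 1
      (1, e0) -> (1, e1)    # a strong 1 flips a weak 0
    With an initial 0-majority, every node ends in {'0', 'e0'}."""
    s = list(states)
    n = len(s)
    for _ in range(max_steps):
        i, j = rng.sample(range(n), 2)
        a, b = s[i], s[j]
        if {a, b} == {"0", "1"}:
            s[i], s[j] = "e0", "e1"
        elif a == "0" and b == "e1":
            s[j] = "e0"
        elif b == "0" and a == "e1":
            s[i] = "e0"
        elif a == "1" and b == "e0":
            s[j] = "e1"
        elif b == "1" and a == "e0":
            s[i] = "e1"
        if "1" not in s and "e1" not in s:
            break  # converged: the 0-majority has won
    return s

rng = random.Random(1)
final = interval_consensus(["0"] * 6 + ["1"] * 4, rng)
```

Note the invariant that drives the analysis: annihilations preserve the difference between the counts of strong 0s and strong 1s (the voting margin), so with 6 vs. 4 initial votes exactly two strong 0s survive, and the smaller that margin, the longer annihilation takes.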
【Keywords】: convergence; distributed algorithms; distributed databases; eigenvalues and eigenfunctions; graph theory; matrix algebra; network topology; peer-to-peer computing; Erdős-Rényi random graphs; arbitrary connected graphs; binary interval consensus problem; contact rate matrices; convergence speed; distributed algorithm; distributed databases; eigenvalues; peer-to-peer systems; sensor networks; star shaped networks; Computer networks; Convergence; Distributed algorithms; Distributed computing; Eigenvalues and eigenfunctions; H infinity control; Network topology; Peer to peer computing; Upper bound; Voting
【Paper Link】 【Pages】:1801-1809
【Authors】: Chenhui Hu ; Xinbing Wang ; Ding Nie ; Jun Zhao
【Abstract】: A new class of scheduling policies for multicast traffic is proposed in this paper. By utilizing hierarchical cooperative MIMO transmission, our new policies can obtain an aggregate throughput of Ω((n/k)^(1−ε)) for any ε > 0. This achieves a gain of nearly √(n/k) compared with the non-cooperative scheme in [19]. Between the two cooperative strategies in our paper, the converge-based one is superior to the other on delay, while the throughput and energy consumption performances are nearly the same. Moreover, by scheduling the traffic in a converge multicast manner instead of the simple multicast, we can dramatically reduce the delay by a factor of nearly (n/k)^(h/2), where h > 1 is the number of hierarchical layers. Our optimal cooperative strategy achieves an approximate delay-throughput tradeoff D(n,k)/T(n,k) = Θ(k) when h → ∞. This tradeoff ratio is identical to that of the non-cooperative scheme, while the throughput performance is greatly improved. Besides, for certain k and h, the tradeoff ratio is even better than that of unicast.
【Keywords】: MIMO communication; multicast communication; scheduling; telecommunication traffic; MIMO transmission; cooperative strategy; energy consumption performances; hierarchical cooperation; multicast scaling laws; multicast traffic; scheduling policies; Aggregates; Communications Society; Delay; MIMO; Mobile ad hoc networks; Multimedia communication; Performance gain; Telecommunication traffic; Throughput; Unicast
【Paper Link】 【Pages】:1810-1818
【Authors】: MinJi Kim ; Daniel Enrique Lucani ; Xiaomeng Shi ; Fang Zhao ; Muriel Médard
【Abstract】: Multi-resolution codes enable multicast at different rates to different receivers, a setup that is often desirable for graphics or video streaming. We propose a simple, distributed, two-stage message passing algorithm to generate network codes for single-source multicast of multi-resolution codes. The goal of this pushback algorithm is to maximize the total rate achieved by all receivers, while guaranteeing decodability of the base layer at each receiver. By conducting pushback and code assignment stages, this algorithm takes advantage of inter-layer as well as intra-layer coding. Numerical simulations show that in terms of total rate achieved, the pushback algorithm outperforms routing and intra-layer coding schemes, even with field sizes as small as 2^10 (10 bits). In addition, the performance gap widens as the number of receivers and the number of nodes in the network increase. We also observe that naive inter-layer coding schemes may perform worse than intra-layer schemes under certain network conditions.
【Keywords】: message passing; multicast communication; network coding; numerical analysis; telecommunication computing; code assignment stages; inter-layer coding; intra-layer coding; multiresolution codes; multiresolution multicast; network coding; pushback algorithm; two-stage message passing algorithm; Communications Society; Decoding; Encoding; Multicast algorithms; Network coding; Positron emission tomography; Routing; Streaming media; Subcontracting; USA Councils
【Paper Link】 【Pages】:1819-1827
【Authors】: Zhuo Lu ; Wenye Wang ; Cliff Wang
【Abstract】: Backoff misbehavior, in which a wireless node deliberately manipulates its backoff time, can induce significant network problems, such as severe unfairness and denial-of-service. Although great progress has been made towards the design of countermeasures to backoff misbehavior, little attention has been focused on quantifying the gain of backoff misbehaviors. In this paper, we define and study two general classes of backoff misbehavior to assess the gain that misbehaving nodes can obtain. The first class, called continuous misbehavior, keeps manipulating the backoff time unless it is disabled by countermeasures. The second class is referred to as intermittent misbehavior, which tends to evade the detection by countermeasures by performing misbehavior sporadically. Our approach is to introduce a new performance metric, namely order gain, which is to characterize the performance benefits of misbehaving nodes in comparison to legitimate nodes. Through analytical studies, simulations, and experiments, we demonstrate the impact of a wide range of backoff misbehaviors on network performance with respect to the number of users in CSMA/CA-based wireless networks.
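A toy Monte-Carlo model makes the gain from backoff manipulation concrete. The slotted contention below (each node draws a uniform backoff; the unique minimum wins the slot) is a deliberate simplification of CSMA/CA, and all parameters are illustrative:

```python
import random

def contention_share(n_legit, cw_legit, cw_cheat, rounds, rng):
    """Estimate the fraction of successful channel accesses won by a single
    misbehaving node that draws its backoff from a smaller contention
    window (toy slotted model of CSMA/CA backoff misbehavior)."""
    wins_cheat = wins_total = 0
    for _ in range(rounds):
        draws = [(rng.randrange(cw_legit), "legit") for _ in range(n_legit)]
        draws.append((rng.randrange(cw_cheat), "cheat"))
        draws.sort()
        # the smallest backoff wins the slot only if it is unique
        if draws[0][0] != draws[1][0]:
            wins_total += 1
            if draws[0][1] == "cheat":
                wins_cheat += 1
    return wins_cheat / wins_total

rng = random.Random(7)
share = contention_share(n_legit=9, cw_legit=32, cw_cheat=4,
                         rounds=20_000, rng=rng)
# a fair node among 10 would win ~1/10 of the slots; the cheater wins
# a disproportionately large share
```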
【Keywords】: carrier sense multiple access; collision avoidance; radio networks; CSMA/CA-based wireless networks; backoff misbehaving nodes; carrier-sense multiple-access; collision avoidance; continuous misbehavior; intermittent misbehavior; multiple-access protocol; Analytical models; Collision avoidance; Communications Society; Detectors; Measurement; Multiaccess communication; Peer to peer computing; Performance analysis; Performance gain; Wireless networks
【Paper Link】 【Pages】:1828-1836
【Authors】: Tingting Chen ; Sheng Zhong
【Abstract】: Wireless mesh networks have been widely deployed to provide broadband network access, and their performance can be significantly improved by using a new technology called network coding. In a wireless mesh network using network coding, selfish nodes may deviate from the protocol when they are supposed to forward packets. This fundamental problem of packet forwarding incentives is closely related to the incentive compatible routing problem in wireless mesh networks using network coding, and to the incentive compatible packet forwarding problem in conventional wireless networks, but different from both of them. In this paper, we propose INPAC, the first incentive scheme for this fundamental problem, which uses a combination of game theoretic and cryptographic techniques to solve it. We formally prove that, if INPAC is used, then following the protocol faithfully is a subgame perfect equilibrium. To make INPAC more practical, we also provide an extension that achieves two improvements: (a) an online authority is no longer needed; (b) the computation and communication overheads are reduced. We have implemented and evaluated INPAC on the Orbit Lab testbed. Our evaluation results verify the incentive compatibility of INPAC and demonstrate that it is efficient.
【Keywords】: cryptography; game theory; network coding; telecommunication network routing; wireless mesh networks; INPAC; broadband network access; cryptographic technique; enforceable incentive scheme; game theory; incentive compatible packet forwarding problem; incentive compatible routing problem; network coding; wireless mesh network; Access protocols; Broadband communication; Cryptography; Game theory; Incentive schemes; Network coding; Routing; Wireless application protocol; Wireless mesh networks; Wireless networks
【Paper Link】 【Pages】:1837-1845
【Authors】: Kai Zeng ; Daniel Wu ; An (Jack) Chan ; Prasant Mohapatra
【Abstract】: Generating a secret key between two parties by extracting the shared randomness in the wireless fading channel is an emerging area of research. Previous works focus mainly on single-antenna systems. Multiple-antenna devices have the potential to provide more randomness for key generation than single-antenna ones. However, the performance of key generation using multiple-antenna devices in a real environment remains unknown. Different from the previous theoretical work on multiple-antenna key generation, we propose and implement a shared secret key generation protocol, Multiple-Antenna KEy generator (MAKE) using off-the-shelf 802.11n multiple-antenna devices. We also conduct extensive experiments and analysis in real indoor and outdoor mobile environments. Using the shared randomness extracted from measured Received Signal Strength Indicator (RSSI) to generate keys, our experimental results show that using laptops with three antennas, MAKE can increase the bit generation rate by more than four times over single-antenna systems. Our experiments validate the effectiveness of using multi-level quantization when there is enough mutual information in the channel. Our results also show the trade-off between bit generation rate and bit agreement ratio when using multi-level quantization. We further find that even if an eavesdropper has multiple antennas, she cannot gain much more information about the legitimate channel.
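Multi-level quantization, the technique whose trade-off the paper measures, can be sketched as follows. The equal-frequency thresholds, Gray coding, and the sample values are illustrative assumptions; this is not the MAKE protocol itself (which also includes reconciliation and privacy amplification):

```python
def multilevel_quantize(rssi, bits_per_sample=2):
    """Quantize RSSI samples into Gray-coded 2-bit symbols using
    equal-frequency (quantile) thresholds -- a common building block of
    RSSI-based key generation.  Illustrative sketch only."""
    levels = 2 ** bits_per_sample
    ordered = sorted(rssi)
    # thresholds at the 1/levels, 2/levels, ... empirical quantiles
    thresholds = [ordered[len(ordered) * q // levels] for q in range(1, levels)]
    # Gray code: adjacent levels differ in one bit, limiting the bit
    # disagreement caused by small RSSI measurement differences
    gray = ["00", "01", "11", "10"]
    out = []
    for sample in rssi:
        level = sum(sample >= t for t in thresholds)
        out.append(gray[level])
    return "".join(out)

# hypothetical RSSI trace (dBm); each sample yields 2 key bits
key_bits = multilevel_quantize([-70, -55, -80, -62, -50, -75, -58, -66])
```

The trade-off the paper reports falls out of this picture: more levels extract more bits per sample, but narrower quantization bins make the two parties' noisy measurements disagree more often.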
【Keywords】: antenna arrays; cryptography; diversity reception; fading channels; telecommunication security; multilevel quantization; multiple-antenna diversity; multiple-antenna key generation; received signal strength indicator; shared secret key generation; wireless fading channel; wireless networks; Antenna measurements; Data mining; Fading; Mutual information; Portable computers; Protocols; Quantization; Receiving antennas; Signal generators; Wireless networks
【Paper Link】 【Pages】:1846-1854
【Authors】: Yang Ji ; Seung-Woo Seo
【Abstract】: In group communications, an efficient rekeying scheme plays a key role in providing access control when a membership change happens. For reducing the communication cost in the rekeying operation, one proposed model is to rekey upon individual membership change. It is theoretically proved that given the forward secrecy requirement, the optimal amortized communication cost is at least O(log n) (n is the group size) for an Individual Rekeying (IR). Another model is to rekey upon a batch of multiple membership changes: Batch Rekeying (BR), which largely reduces the rekeying communication cost, and relieves implementation difficulties in the IR model (e.g., extremely intensive rekey messages and key arriving disorders in large-size and highly dynamic groups). Unlike IR, however, the communication lower bound in BR is not yet explicitly stated. This paper first extends the communication lower bound for IR to the BR model. Specifically, we prove that given the batch level forward secrecy, the communication costs for updating the whole group subset by subset in a sequence of b batch rekeyings are at least O(b · (log2 b - 1)) + O(n). This bound, as a superset, inclusively explains the IR bound as a special case of b = n. Second, for achieving the found bound, we provide a departing-time related key topology that works optimally under the bound. Third, to further implement the proposed optimal topology, we propose two novel BR protocols, one with support of forward secrecies and the other with support of two-way secrecies. Through extensive analyses and simulations, the proposed protocols are shown to achieve notable upgrades in major performance metrics: 60% ~ 70% reduction in communication overheads, 50% ~ 60% reduction in key storage overheads, and elimination of key tree unbalance.
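For intuition on where the O(log n) individual-rekeying cost comes from, consider a balanced binary key tree (LKH-style). The counting below is a toy model with illustrative constants, not the paper's proposed topology:

```python
import math

def individual_rekey_encryptions(n):
    """Encryptions needed in a balanced binary key tree when one member
    leaves: every key on the leaf-to-root path must be replaced, i.e.
    log2(n) keys, and each new key is encrypted separately for the two
    child subtrees -- about 2 * log2(n) encryptions (toy model)."""
    depth = int(math.log2(n))
    return 2 * depth

# one leave in a 1024-member group: ~2 * 10 encryptions; n individual
# rekeyings thus cost O(n log n), the b = n special case of a batch bound
cost_one = individual_rekey_encryptions(1024)
```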
【Keywords】: Internet; access control; multicast protocols; telecommunication network topology; BR protocols; Internet multicast protocols; access control; group communications; group rekeying; individual rekeying; intensive rekey messages; optimal amortized communication; optimal topology; Access control; Access protocols; Analytical models; Centralized control; Communications Society; Cost function; Cryptography; Measurement; Performance analysis; Topology
【Paper Link】 【Pages】:1855-1863
【Authors】: Maryam Daneshi ; Jianping Pan ; Sudhakar Ganti
【Abstract】: With the proliferation of wireless technologies and the convenience they offer, transporting Quality-of-Service (QoS) demanding traffic such as compressed video over wireless links becomes a trend and a challenging issue. Among many factors, Media Access Control (MAC) protocols play an important role in the network stack to ensure the QoS provisioning for multimedia applications and the efficient utilization of wireless channels. Various contention-based or contention-free MAC protocols have been proposed to solve these problems. In this paper, we model, analyze with an existing framework, and evaluate two reservation algorithms, subframe-fit and isozone-fit, proposed for distributed reservation protocols exampled by WiMedia UWB MAC. The models have been validated by extensive simulations using ns-2 and an MPEG-4 traffic generator. We further improve the system performance by introducing cross-isozone allocation and on-demand compaction to isozone-fit, and discuss how to leverage both contention-based and contention-free MAC protocols.
【Keywords】: access protocols; quality of service; telecommunication traffic; ultra wideband communication; video coding; wireless channels; MPEG-4 traffic generator; WiMedia UWB MAC; compressed video; cross-isozone allocation; demanding traffic; distributed reservation protocols; isozone-fit reservation; media access control protocols; network stack; ns-2 simulation; on-demand compaction; quality-of-service; subframe-fit reservation; wireless channels; wireless links; Access protocols; Algorithm design and analysis; Communication system traffic control; MPEG 4 Standard; Media Access Protocol; Quality of service; System performance; Traffic control; Video compression; Wireless application protocol
【Paper Link】 【Pages】:1864-1872
【Authors】: Ron Banner ; Ariel Orda
【Abstract】: There are two basic approaches to allocate protection resources for fast restoration. The first allocates resources upon the arrival of each connection request; yet, it incurs significant set-up time and is often capacity-inefficient. The second approach allocates protection resources during the network configuration phase; therefore, it needs to accommodate any possible arrival pattern of connection requests, hence potentially calling for a substantial over-provisioning of resources. However, in this study we establish the feasibility of this approach. Specifically, we consider a scheme that, during the network configuration phase, constructs an (additional) low-capacity backup network. Upon a failure, traffic is rerouted through a bypass in the backup network. We establish that, with proper design, backup networks induce feasible capacity overhead. We further impose several design requirements (e.g., hop-count limits) on backup networks and their induced bypasses, and prove that, commonly, they also incur minor overhead. Motivated by these findings, we design efficient algorithms for the construction of backup networks.
【Keywords】: telecommunication network reliability; telecommunication network routing; telecommunication traffic; arrival pattern; connection request; design requirement; fast restoration; low-capacity backup network; network configuration phase; network failure; network traffic; protection resource allocation; rerouting; Algorithm design and analysis; Communications Society; Peer to peer computing; Polynomials; Protection; Resource management; Routing; Telecommunication traffic; Topology
【Paper Link】 【Pages】:1873-1881
【Authors】: John R. Lange ; J. Scott Miller ; Peter A. Dinda
【Abstract】: We consider optimizing the control of the wide-area link of a home router based on the needs of individual users instead of assuming a canonical user. A careful user study clearly demonstrates that measured end-user satisfaction with a given set of home network conditions is highly variable - user perception and opinion of acceptable network performance is very different from user to user. To exploit this fact we design, implement, and evaluate a prototype system, EmNet, that incorporates direct user feedback from a simple user interface layered over existing web content. This feedback is used to dynamically configure a weighted fair queuing (WFQ) scheduler on the wide-area link. We evaluate EmNet in terms of the measured satisfaction of end-users, and in terms of the bandwidth required. We compare EmNet with an uncontrolled link (the common case today), as well as with statically configured WFQ scheduling. On average, EmNet is able to increase overall user satisfaction by 20% over the uncontrolled network and by 12% over static WFQ. EmNet does so by only increasing the average application bandwidth by 6% over the static WFQ scheduler.
【Keywords】: Internet; home automation; local area networks; queueing theory; telecommunication network routing; user interfaces; EmNet; Web content; empathic home networks; end-user satisfaction; home router; network performance; user feedback; user interface; weighted fair queuing scheduler; wide-area link; Bandwidth; Communications Society; Control systems; Energy management; Feedback; Home automation; IP networks; Prototypes; Traffic control; User interfaces
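The EmNet abstract above dynamically configures a weighted fair queuing (WFQ) scheduler from user feedback. As background, a minimal virtual-finish-time WFQ sketch (flow names and weights are hypothetical, not from the paper) shows how per-flow weights translate into link shares:

```python
import heapq

class WFQ:
    """Minimal weighted fair queuing: each packet is stamped with a
    virtual finish time F = max(V, last_F[flow]) + size / weight
    and packets are dequeued in increasing F order."""
    def __init__(self, weights):
        self.weights = weights                      # flow_id -> weight
        self.last_finish = {f: 0.0 for f in weights}
        self.virtual_time = 0.0
        self.heap = []
        self.seq = 0                                # tie-breaker for equal stamps

    def enqueue(self, flow, size):
        start = max(self.virtual_time, self.last_finish[flow])
        finish = start + size / self.weights[flow]
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (finish, self.seq, flow, size))
        self.seq += 1

    def dequeue(self):
        finish, _, flow, size = heapq.heappop(self.heap)
        self.virtual_time = finish                  # simplified virtual clock
        return flow, size

# A flow with twice the weight drains roughly twice as fast:
q = WFQ({"video": 2.0, "bulk": 1.0})
for _ in range(3):
    q.enqueue("video", 1.0)
    q.enqueue("bulk", 1.0)
order = [q.dequeue()[0] for _ in range(6)]
# "video" packets (weight 2.0) dominate the front of the schedule
```

EmNet's contribution is choosing the weights from direct user satisfaction feedback; the scheduling mechanics themselves follow the standard virtual-time construction sketched here.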
【Paper Link】 【Pages】:1882-1890
【Authors】: Yi Xu ; Wenye Wang
【Abstract】: Correlated failures pose a great challenge for the normal functioning of large wireless networks, because an initial local failure may trigger a global sequence of related failures. Given their potentially devastating impact, we characterize the spread of correlated failures in this paper, which lays the foundation for evaluating and improving the failure resilience of existing wireless networks. We model the failure contagiousness as two generic functions: the failure impact radius distribution function fr(x) and the failure connection function g(x). By using the percolation theory, we determine the respective characteristic regimes of fr(x) and g(x) in which correlated failures will and will not percolate in the network. As our model represents various failure scenarios, the results are generally applicable in understanding the spread of a wide range of correlated failures.
【Keywords】: failure analysis; percolation; radio networks; telecommunication network reliability; correlated failures; failure connection function; failure contagiousness; failure impact radius distribution function; failure resilience; global sequence; initial local failure; large wireless networks; percolation theory; Communication system traffic control; Communications Society; Distribution functions; Failure analysis; Path planning; Peer to peer computing; Resilience; Telecommunication traffic; USA Councils; Wireless networks
【Paper Link】 【Pages】:1891-1899
【Authors】: Bin Tong ; Zi Li ; Guiling Wang ; Wensheng Zhang
【Abstract】: To address the energy constraint problem in sensor networks, a node reclamation and replacement strategy has been proposed for networks accessible to human beings and robots. The major challenge in realizing the strategy is how to minimize the system maintenance cost, especially the frequency of replacing sensor nodes with a limited number of backup nodes. New duty cycle scheduling schemes are required in order to address this challenge. Tong et al. have proposed a staircase-based scheme to address the problem, based on the ideal assumptions that sensor nodes are free of failure and have a regular energy consumption rate. Since sensor nodes are often deployed in outdoor unattended environments, node failures are inevitable, and energy consumption rates of sensor nodes are irregular due to manufacturing or environmental reasons. Hence, this paper proposes several new schemes to achieve reliable scheduling for node reclamation and replacement. Extensive simulations have been conducted to verify that the proposed schemes are effective and efficient.
【Keywords】: energy consumption; scheduling; telecommunication network management; wireless sensor networks; energy consumption; human beings; long-lived replaceable sensor networks; node reclamation; reliable scheduling schemes; robots; sensor nodes; staircase-based scheme; system maintenance cost; Computer science; Costs; Energy conservation; Energy consumption; Frequency; Hardware; Humans; Job shop scheduling; Peer to peer computing; Robot sensing systems
【Paper Link】 【Pages】:1900-1908
【Authors】: Saikat Guha ; Chi-Kin Chau ; Prithwish Basu
【Abstract】: While scheduling the nodes in a wireless network to sleep periodically can save energy, it also incurs higher latency and lower throughput. We consider the problem of designing optimal sleep schedules in wireless networks, and show that finding sleep schedules that can minimize the latency over a given subset of source-destination pairs is NP-hard. We also derive a latency lower bound given by d + O(1/p) for any sleep schedule with a required active rate (i.e., the fraction of active slots of each node) p, and the shortest path length d. We offer a novel solution to optimal sleep scheduling using green-wave sleep scheduling (GWSS), inspired by coordinated traffic lights, which is shown to meet our latency lower bound (hence is latency-optimal) for topologies such as the line, grid, ring, torus and tree networks, under light traffic. For high traffic loads, we propose non-interfering GWSS, which can achieve the maximum throughput scaling law given by T(n,p) = Θ(p/√n) bits/sec on a grid network of size n, with a latency scaling law D(n,p) = O(√n) + O(1/p). Finally, we extend GWSS to a random network with n Poisson-distributed nodes, for which we show an achievable throughput scaling law of T(n,p) = Θ(p/√(n log n)) bits/sec and a corresponding latency scaling law D(n,p) = O(√(n/log n)) + O(1/p); hence meeting the well-known Gupta-Kumar achievable throughput rate Θ(1/√(n log n)) when p → 1.
【Keywords】: radio networks; scheduling; Gupta-Kumar achievable throughput rate; Poisson-distributed nodes; capacity-efficient sleep scheduling; coordinated traffic lights; green wave; green-wave sleep scheduling; wireless networks; Delay; Energy consumption; Network topology; Peer to peer computing; Processor scheduling; Sleep; Telecommunication traffic; Throughput; USA Councils; Wireless networks
【Paper Link】 【Pages】:1909-1917
【Authors】: Praveen Jayachandran ; Matthew Andrews
【Abstract】: We study the end-to-end delay bounds that can be achieved in wireless networks using packet deadlines. We assume a set of flows in the network, for which flow i has burst parameter σi, injection rate ρi, and path length Ki. It was already known that, in wireline networks, the Coordinated-Earliest-Deadline-First (CEDF) protocol can achieve an end-to-end delay of approximately (σi/ρi) + Ki, whereas other schedulers, such as Weighted Fair Queuing, have end-to-end delay bounds of the form (σi + Ki)/ρi. For the case of wireless networks of arbitrary topology, the focus has typically been more on throughput optimality than on minimizing delay. In this paper, we study the delay bounds that can be achieved by combining wireless link scheduling algorithms with a CEDF packet scheduler. We first present a centralized scheduler that has an end-to-end delay of approximately O(σi/ρi + Σ_{ℓ∈pi} N/rℓ), where rℓ is the total rate of flows through link ℓ, N is the number of links in the network, and pi is the path followed by packets of flow i. We then show how to convert this into a distributed scheduler. We also study the extent to which results on the schedulability of packet deadlines can be carried over from the wireline to the wireless context. Lastly, we examine ways in which the theoretical schedulers considered in this paper can be transferred to a more practical random-access based setting. This work was supported by NSF contract CCF-0728980 and was performed while the first author was visiting Bell Labs in Summer, 2009.
【Keywords】: packet switching; queueing theory; wireless mesh networks; coordinated edf schedule; coordinated-earliest-deadline-first protocol; distributed scheduler; end-to-end delay; weighted fair queuing; wireless link scheduling algorithms; wireless networks; Communications Society; Contracts; Delay; Global Positioning System; Network topology; Processor scheduling; Protocols; Scheduling algorithm; Throughput; Wireless networks
【Paper Link】 【Pages】:1918-1926
【Authors】: Zhe Yang ; Lin Cai ; Wu-Sheng Lu
【Abstract】: Optimal scheduling for concurrent transmissions in rate-nonadaptive wireless networks is NP-hard. Optimal scheduling in rate-adaptive wireless networks is even more difficult, because, due to mutual interference, each flow's throughput in a time slot is unknown before the scheduling decision of that slot is finalized. The capacity bound derived for rate-nonadaptive networks is no longer applicable either. In this paper, we first formulate the optimal scheduling problems with and without minimum per-flow throughput constraints. Given the hardness of the problems and the fact that the scheduling decisions should be made within a few milliseconds, we propose two simple yet effective searching algorithms which can quickly move towards better scheduling decisions. Thus, the proposed scheduling algorithms can achieve high network throughput and maintain long-term fairness among competing flows with low computational complexity. For the constrained optimization problem involved, we consider its dual problem and apply Lagrangian relaxation. We then incorporate a dual update procedure in the proposed searching algorithm to ensure that the searching results satisfy the constraints. Extensive simulations are conducted to demonstrate the effectiveness and efficiency of the proposed scheduling algorithms which are found to achieve throughputs close to the exhaustive searching results with much lower computational complexity.
【Keywords】: adaptive scheduling; computational complexity; data communication; optimisation; radio networks; radiofrequency interference; Lagrangian relaxation; NP-hard; computational complexity; concurrent transmissions; constrained optimization problem; decision scheduling; long-term fairness; mutual interference; network throughput; optimal scheduling; practical scheduling algorithms; rate adaptive wireless networks; Computational complexity; Computational modeling; Constraint optimization; Interference constraints; Lagrangian functions; Optimal scheduling; Processor scheduling; Scheduling algorithm; Throughput; Wireless networks
【Paper Link】 【Pages】:1927-1935
【Authors】: Amir Hamed Mohsenian Rad ; Jianwei Huang ; Vincent W. S. Wong ; Robert Schober
【Abstract】: Most of the previous work on network coding has assumed that the users are not selfish and always follow the designed coding schemes. However, recent results have shown that selfish users do not have the incentive to participate in inter-session network coding in a static non-cooperative game setting. As a result, the worst-case network efficiency (i.e., the price-of-anarchy) can be as low as 22%. In this paper, we show that if the same game is played repeatedly, then the price-of-anarchy can be significantly improved to 48%. We propose a grim-trigger strategy that encourages users to cooperate and participate in the inter-session network coding. A key challenge is to determine a common cooperative coding rate that the users should mutually agree on. We propose to resolve the conflict of interest among the users through a bargaining process. We derive a tight upper bound for the price-of-anarchy which is valid for any bargaining scheme. Moreover, we propose a simple and efficient min-max bargaining solution that can achieve this upper bound. Our results represent one of the first steps towards designing practical inter-session network coding schemes that can achieve reasonable performance for selfish users.
【Keywords】: computer games; network coding; bargaining process; grim trigger strategy; intersession network coding games; network efficiency; price-of-anarchy; Communications Society; Decoding; Design engineering; Electronic mail; Encoding; Nash equilibrium; Network coding; Unicast; Upper bound; Wireless networks
【Paper Link】 【Pages】:1936-1944
【Authors】: Vinith Reddy ; Srinivas Shakkottai ; Alexander Sprintson ; Natarajan Gautam
【Abstract】: We consider wireless networks in which multiple paths are available between each source and destination. We allow each source to split traffic among all of its available paths, and ask the question: how do we attain the lowest possible number of transmissions per unit time to support a given traffic matrix? Traffic bound in opposite directions over two wireless hops can utilize the "reverse carpooling" advantage of network coding in order to decrease the number of transmissions used. We call such coded hops "hyper-links". With the reverse carpooling technique, longer paths might be cheaper than shorter ones. However, there is a prisoner's dilemma type situation among sources - the network coding advantage is realized only if there is traffic in both directions of a shared path. We develop a two-level distributed control scheme that decouples user choices from each other by declaring a hyper-link capacity, allowing sources to split their traffic selfishly in a distributed fashion, and then changing the hyper-link capacity based on user actions. We show that such a controller is stable, and verify our analytical insights by simulation.
【Keywords】: game theory; network coding; radio links; radio networks; telecommunication traffic; hyper-link capacity; multipath wireless network coding; network traffic; network transmission; population game perspective; prisoners dilemma; reverse carpooling; traffic bound; traffic matrix; two-level distributed control; wireless hops; Broadcasting; Communication system traffic control; Decoding; Network coding; Relays; Spread spectrum communication; Telecommunication traffic; Throughput; Wireless networks; Wireless sensor networks
【Paper Link】 【Pages】:1945-1953
【Authors】: Fangwen Fu ; Ulas C. Kozat
【Abstract】: We propose a virtualization framework to separate the network operator (NO) who focuses on wireless resource management and service providers (SP) who target distinct objectives with different constraints. Within the proposed framework, we model the interactions among SPs and NO as a stochastic game, each stage of which is played by SPs (on behalf of the end users) and is regulated by the NO through the Vickrey-Clarke-Groves (VCG) mechanism. Due to the strong coupling between the future decisions of SPs and lack of global information at each SP, the stochastic game is notoriously hard. Instead, we introduce conjectural prices to represent the future congestion levels the end users potentially will experience, via which the future interactions between SPs are decoupled. Then, the policy to play the dynamic rate allocation game becomes selecting the conjectural prices and announcing a strategic value function (i.e., the preference on the rate) at each time. We prove that there exists one Nash equilibrium in the conjectural prices and, given the conjectural prices, the SPs have to truthfully reveal their own value function. We further prove that this Nash equilibrium results in efficient rate allocation in our virtualized wireless network. In other words, there are enough incentives for NO to advertise such a conjectural price and SPs to follow this advice.
【Keywords】: game theory; resource allocation; telecommunication network management; wireless sensor networks; Vickrey-Clarke-Groves mechanism; dynamic rate allocation game; network operator; sequential auction game; service providers; stochastic game; wireless network virtualization; wireless resource management; Communications Society; Game theory; Nash equilibrium; Radio spectrum management; Resource management; Resource virtualization; Stochastic processes; Technological innovation; USA Councils; Wireless networks
【Paper Link】 【Pages】:1954-1962
【Authors】: Ozan Candogan ; Ishai Menache ; Asuman E. Ozdaglar ; Pablo A. Parrilo
【Abstract】: We study power control in a multi-cell CDMA wireless system whereby self-interested users share a common spectrum and interfere with each other. Our objective is to design a power control scheme that achieves a (near) optimal power allocation with respect to any predetermined network objective (such as the maximization of sum-rate, or some fairness criterion). To obtain this, we introduce the potential-game approach that relies on approximating the underlying noncooperative game with a "close" potential game, for which prices that induce an optimal power allocation can be derived. We use the proximity of the original game with the approximate game to establish through Lyapunov-based analysis that natural user-update schemes (applied to the original game) converge within a neighborhood of the desired operating point, thereby inducing near-optimal performance in a dynamical sense. Additionally, we demonstrate through simulations that the actual performance can in practice be very close to optimal, even when the approximation is inaccurate. As a concrete example, we focus on the sum-rate objective, and evaluate our approach both theoretically and empirically.
【Keywords】: Lyapunov methods; cellular radio; code division multiple access; control system analysis; game theory; power control; telecommunication control; Lyapunov based analysis; close potential game; common spectrum; multicell CDMA wireless system; natural user update scheme; near optimal power control; noncooperative game; optimal power allocation; potential game approach; self interested user; wireless network; Concrete; Interference; Multiaccess communication; Nash equilibrium; Power control; Pricing; Quality of service; Throughput; Transmitters; Wireless networks
【Paper Link】 【Pages】:1963-1971
【Authors】: Hui Zang ; François Baccelli ; Jean Bolot
【Abstract】: In this paper, we present a general technique based on Bayesian inference to locate mobiles in cellular networks. We study the problem of localizing users in a cellular network for calls with information regarding only one base station, so that triangulation or trilateration cannot be performed. In our call data records, this happens more than 50% of the time. We show how to localize mobiles based on our knowledge of the network layout and how to incorporate additional information such as round-trip-time and signal to noise and interference ratio (SINR) measurements. We study important parameters used in this Bayesian method through mining call data records and matching GPS records, and obtain their distribution or typical values. We validate our localization technique in a commercial network with a few thousand emergency calls. The results show that the Bayesian method can reduce the localization error by 20% compared to a blind approach, and that the accuracy of localization can be further improved by refining the a priori user distribution in the Bayesian technique.
【Keywords】: Bayes methods; cellular radio; radiofrequency interference; Bayesian method; GPS records; a priori user distribution; cellular networks; inference; localization technique; triangulation; trilateration; Base stations; Bayesian methods; Global Positioning System; Humans; Interference; Land mobile radio cellular systems; Mobile handsets; Monitoring; Signal to noise ratio; USA Councils
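The single-base-station localization described above reduces to a Bayes-rule update over candidate locations: a posterior proportional to the prior user distribution times the likelihood of the measurements. A toy sketch (candidate cells, distances, and the Gaussian RTT error scale are all hypothetical, not values from the paper) illustrates the mechanics:

```python
import math

def bayes_localize(prior, likelihood):
    """Posterior over candidate cells: P(cell|obs) ∝ P(obs|cell) * P(cell)."""
    post = [p * l for p, l in zip(prior, likelihood)]
    z = sum(post)
    return [p / z for p in post]

def rtt_likelihood(dist_m, rtt_dist_m, sigma_m=300.0):
    """Gaussian likelihood of a candidate location given the distance
    implied by a round-trip-time measurement (sigma is a hypothetical
    ranging-error scale)."""
    return math.exp(-0.5 * ((dist_m - rtt_dist_m) / sigma_m) ** 2)

cells = ["A", "B", "C"]              # candidate locations in the serving sector
prior = [0.5, 0.3, 0.2]              # a-priori user distribution
dists = [200.0, 600.0, 1500.0]       # candidate-to-base-station distances (m)
lik = [rtt_likelihood(d, rtt_dist_m=550.0) for d in dists]
posterior = bayes_localize(prior, lik)
best = cells[posterior.index(max(posterior))]
```

Here the RTT measurement (implying ~550 m) overrides the prior, which favored cell A; the paper's contribution lies in estimating these priors and likelihood parameters from real call data and GPS records.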
【Paper Link】 【Pages】:1972-1980
【Authors】: Lirong Jian ; Zheng Yang ; Yunhao Liu
【Abstract】: Knowing accurate positions of nodes in wireless ad-hoc and sensor networks is essential for a wide range of pervasive and mobile applications. However, errors are inevitable in distance measurements, and we observe that a small number of outliers can degrade localization accuracy drastically. To deal with noisy and outlier ranging results, the triangle inequality is often employed in existing approaches. Our study shows that the triangle inequality has many limitations which make it far from accurate and reliable. In this study, we formally define the outlier detection problem for network localization and build a theoretical foundation to identify outliers based on graph embeddability and rigidity theory. Our analysis shows that the redundancy of distance measurements plays an important role. We then design an outlier detection algorithm based on bilateration generic cycles, and examine its effectiveness and efficiency through a network prototype implementation of MicaZ motes as well as extensive simulations. The results show that our design significantly improves the localization accuracy by wisely rejecting outliers.
【Keywords】: ad hoc networks; graph theory; wireless sensor networks; MicaZ motes; bilateration generic cycles; graph embeddability theory; graph rigidity theory; localization accuracy; network localization; outlier detection; outlier distance measurements; triangle inequality; wireless ad-hoc networks; wireless sensor networks; Calibration; Communications Society; Distance measurement; Hardware; Mobile computing; Peer to peer computing; Receiving antennas; Space technology; Transmitters; Wireless sensor networks
【Paper Link】 【Pages】:1981-1989
【Authors】: A. K. M. Mahtab Hossain ; Wee-Seng Soh
【Abstract】: In this paper, we analyze the Cramer-Rao Lower Bound (CRLB) of localization using Signal Strength Difference (SSD) as location fingerprint. This analysis has a dual purpose. Firstly, the properties of the bound on localization error may help to design efficient localization algorithm. For example, utilizing one of the properties, we propose a way to define weights for a weighted K-Nearest Neighbor (K-NN) scheme which is shown to perform better than the K-NN algorithm. Secondly, it provides suggestions for a positioning system design by revealing error trends associated with the system deployment. In both cases, detailed analysis as well as experimental results are presented in order to support our claims.
【Keywords】: Global Positioning System; estimation theory; fingerprint identification; indoor radio; Cramer-Rao lower bound analysis; K-NN scheme; error trend; indoor localization; localization algorithm; localization error; location fingerprint; positioning system design; signal strength difference; weighted K-nearest neighbor; Algorithm design and analysis; Bluetooth; Communications Society; Computer errors; Fingerprint recognition; Phase estimation; Signal analysis; Signal design; System analysis and design; Training data
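The abstract above weights a K-Nearest Neighbor scheme over Signal Strength Difference (SSD) fingerprints. A minimal sketch of that pipeline (the radio map, RSS values, and inverse-distance weighting here are illustrative assumptions, not the paper's CRLB-derived weights) shows the two steps: convert raw RSS to SSD, then average the k closest reference positions:

```python
def ssd(rss):
    """Signal-strength differences w.r.t. the first AP; differencing
    cancels device-dependent offsets in the raw RSS readings."""
    return [r - rss[0] for r in rss[1:]]

def wknn_locate(fingerprints, observed, k=2):
    """Weighted K-NN over an SSD radio map: average the k closest
    reference positions, weighting each by inverse SSD distance."""
    obs = ssd(observed)
    scored = []
    for pos, rss in fingerprints:
        ref = ssd(rss)
        d = sum((a - b) ** 2 for a, b in zip(obs, ref)) ** 0.5
        scored.append((d, pos))
    scored.sort(key=lambda t: t[0])
    top = scored[:k]
    weights = [1.0 / (d + 1e-9) for d, _ in top]
    wsum = sum(weights)
    x = sum(w * p[0] for w, (_, p) in zip(weights, top)) / wsum
    y = sum(w * p[1] for w, (_, p) in zip(weights, top)) / wsum
    return (x, y)

# Hypothetical radio map: (reference position, RSS from 3 APs in dBm)
radio_map = [((0.0, 0.0), [-40, -60, -70]),
             ((5.0, 0.0), [-55, -50, -65]),
             ((0.0, 5.0), [-50, -65, -55])]
est = wknn_locate(radio_map, observed=[-42, -61, -69], k=2)
# The observation closely matches the (0, 0) fingerprint, so the
# estimate lands near the origin, pulled slightly toward (0, 5).
```

The paper's point is that the CRLB analysis suggests how such weights should be chosen; the plain inverse-distance weighting above is just the baseline it improves on.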
【Paper Link】 【Pages】:1990-1998
【Authors】: Sushant Sharma ; Yi Shi ; Jia Liu ; Y. Thomas Hou ; Sastry Kompella
【Abstract】: Network coding (NC) is a promising approach to reduce time-slot overhead for cooperative communications (CC) in a multi-session environment. Most of the existing works take advantage of the benefits of NC in CC but do not fully recognize its potential adverse effect. In this paper, we show that employing NC may not always benefit CC. We substantiate this important finding in the context of analog network coding (ANC) and amplify-and-forward (AF) CC. This paper, for the first time, introduces an important concept of network coding noise (NC noise). Specifically, we analyze the signal aggregation at a relay node and signal extraction at a destination node. We then use the analysis to derive a closed-form expression for NC noise at each destination node in a multi-session environment. We show that NC noise can diminish the advantage of NC in CC. Our results formalize an important concept on using NC in CC.
【Keywords】: network coding; radiocommunication; amplify-and-forward; analog network coding; cooperative communications; destination node; multisession environment; network coding noise; relay node; signal aggregation; signal extraction; Broadcasting; Communications Society; Computer science; Frame relay; Information technology; Network coding; Peer to peer computing; Transceivers; USA Councils; Working environment noise
【Paper Link】 【Pages】:1999-2007
【Authors】: Qijia Liu ; Wei Zhang ; Xiaoli Ma
【Abstract】: Cooperative networks allow the nodes relaying each other's messages to enhance the transmission reliability over wireless fading channels by achieving cooperative diversity. Among the various relaying protocols, the amplify-and-forward (AF) strategy is well studied for its simplicity. However, to collect the cooperative diversity, there are two main issues that the AF protocol is facing. One is that the channel state information (CSI) of the source-to-relay link (i.e., two-hop CSI) is needed at the destination. The other concern is that the power scaling factor (PSF) and output signals at the relay are unbounded. These two issues make AF less practical in resource constrained networks, e.g., blind and peak power constrained relays. In this paper, we reveal the necessary and sufficient conditions on designing the PSF for the maximum ratio combining (MRC) receiver at the destination with two-hop (TH) CSI to achieve full cooperative diversity. Furthermore, we also provide the necessary conditions on the PSF design so that MRC with only one-hop (OH) CSI still collects full cooperative diversity. These designs make AF strategies more general and practical. The theoretical analysis is corroborated by numerical simulations.
【Keywords】: fading channels; protocols; radio receivers; AF protocol; amplify-and-forward design; channel state information; cooperative diversity; cooperative networks; maximum ratio combining receiver; output signals; power scaling factor; relaying protocols; source-to-relay link; transmission reliability; two-hop CSI; wireless fading channels; Channel state information; Decoding; Diversity reception; Fading; Power system relaying; Protocols; Relays; Sufficient conditions; Telecommunication network reliability; Wireless communication
【Paper Link】 【Pages】:2008-2015
【Authors】: Chengzhi Li ; Huaiyu Dai
【Abstract】: In this paper we study distributed function computation in a noisy multi-hop wireless network, in which n nodes are uniformly and independently distributed in a unit square. We adopt the adversarial noise model, for which independent binary symmetric channels are assumed for any point-to-point transmissions, with (not necessarily identical) crossover probabilities bounded above by some constant ε. Each node holds an m-bit integer per instance and the computation is started after each node collects N readings. The goal is to compute a global function with a certain fault tolerance, in this distributed setting; we mainly deal with divisible functions, which essentially cover the main body of interest for wireless applications. We focus on protocol designs that are efficient in terms of communication complexity. We first devise a general protocol for evaluating any divisible functions, addressing both one-shot (N = O(1)) and block computation, and both constant and large m scenarios; its bottleneck in different scenarios is also analyzed. Based on this analysis, we then endeavor to improve the design for two special cases: the identity function, and some restricted type-threshold functions, both focusing on the constant m and N scenario.
【Keywords】: communication complexity; distributed processing; fault tolerance; probability; protocols; telecommunication computing; wireless channels; communication complexity; crossover probabilities; distributed function computation; fault tolerance; identity function; in-network computing; independent binary symmetric channels; multihop wireless network; noisy wireless channels; point-to-point transmissions; protocol designs; restricted type-threshold functions; wireless applications; Broadcasting; Complexity theory; Computer networks; Distributed computing; Fault tolerance; History; Information processing; Peer to peer computing; Protocols; Wireless networks
【Paper Link】 【Pages】:2016-2024
【Authors】: Sushant Sharma ; Yi Shi ; Y. Thomas Hou ; Hanif D. Sherali ; Sastry Kompella
【Abstract】: It has been shown that cooperative communications (CC) have the potential to significantly increase the capacity of wireless networks. However, most of the existing results are limited to single-hop wireless networks. To illustrate the benefits of CC in multi-hop wireless networks, we solve a joint optimization problem of relay node assignment and flow routing for concurrent sessions. We study this problem via mathematical modeling and solve it using a solution procedure based on the branch-and-cut framework. We design several novel components to speed-up the computation time of branch-and-cut. Via numerical results, we show the significant rate gains that can be achieved by incorporating CC in multi-hop networks.
【Keywords】: optimisation; radio networks; telecommunication network routing; branch-and-cut framework; cooperative communication; joint flow routing; joint optimization problem; multihop wireless network; relay node assignment; Communication industry; Communications Society; Computer science; MIMO; Peer to peer computing; Relays; Routing; Spread spectrum communication; USA Councils; Wireless networks
【Paper Link】 【Pages】:2025-2033
【Authors】: Shahab Oveis Gharan ; Shervan Fashandi ; Amir K. Khandani
【Abstract】: This paper addresses a fundamental trade-off between rate and the diversity gain of an end-to-end connection in an erasure network. The erasure network is modeled by a directed graph whose links are orthogonal erasure channels. Furthermore, the erasure network is assumed to be non-ergodic, meaning that the erasure status of the links are assumed to be fixed during each block of transmission and change independently from block to block. The erasure status of the links is assumed to be known only by the destination node. First, we study the homogeneous erasure networks in which the links have the same erasure probability and capacity. We derive the optimum trade-off between diversity gain and the end-to-end rate and prove that a variant of the conventional routing strategy combined with an appropriate forward error correction at the end-nodes achieves the optimum diversity-rate trade-off. Next, we consider the general erasure networks in which different links may have different values of erasure probability and capacity. We prove that there exist general erasure networks for which any conventional routing strategy fails to achieve the optimum diversity-rate trade-off. However, for any general erasure graph, we show that there exists a linear network coding strategy which achieves the optimum diversity-rate trade-off. Unlike the previous works which suggest the potential benefit of linear network coding in the error-free multicast scenario (in terms of the achievable rate), our result introduces the benefit of linear network coding in the erasure single-source single-destination scenario (in terms of the diversity gain). Finally, we study the diversity-rate trade-off through simulations. The erasure graphs are constructed according to the Barabasi-Albert random model which is known to capture the scale-free property of the practical packet switched networks like the Internet. 
The error probability is depicted for different network strategies and different rate values. The depicted results confirm the trade-off between the rate and the diversity gain for each network strategy. Moreover, the diversity gain is plotted versus the rate for the different conventional routing and linear network coding strategies. It is observed that linear network coding outperforms all conventional routing strategies in terms of the diversity gain.
【Keywords】: diversity reception; network coding; radio networks; Barabasi-Albert random model; directed graph; diversity gain; diversity-rate trade-off; erasure probability; homogeneous erasure networks; linear network coding; orthogonal erasure channels; routing strategies; Capacity planning; Communications Society; Diversity methods; Error probability; Forward error correction; IP networks; Network coding; Packet switching; Peer to peer computing; Routing
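The diversity mechanism described above can be illustrated with a toy Monte Carlo model: a block is split across several disjoint erasure links with fixed per-block erasure status, and end-node FEC recovers the block as long as not too many links are erased. All parameters here are hypothetical, and the model is far simpler than the paper's general network setting.

```python
import random

def outage_probability(num_paths, erasures_tolerated, p_erase,
                       trials=20000, seed=1):
    """Monte Carlo estimate of end-to-end outage probability when a block
    is spread over `num_paths` disjoint erasure links and end-node FEC can
    recover from up to `erasures_tolerated` erased links (toy model, not
    the paper's construction)."""
    random.seed(seed)
    outages = 0
    for _ in range(trials):
        erased = sum(1 for _ in range(num_paths) if random.random() < p_erase)
        if erased > erasures_tolerated:
            outages += 1
    return outages / trials

# Spending rate on redundancy (tolerating more erased links) trades
# end-to-end rate for diversity: far fewer outage events.
single = outage_probability(num_paths=1, erasures_tolerated=0, p_erase=0.2)
coded = outage_probability(num_paths=4, erasures_tolerated=2, p_erase=0.2)
```

Here `coded` corresponds to a rate-1/2-style spreading over four links and comes out roughly an order of magnitude below `single`, echoing the rate-versus-diversity trade-off the abstract describes.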
【Paper Link】 【Pages】:2034-2042
【Authors】: Long Bao Le ; Eytan Modiano ; Ness B. Shroff
【Abstract】: This paper considers network control for wireless networks with finite buffers. We investigate the performance of joint flow control, routing, and scheduling algorithms which achieve high network utility and deterministically bounded backlogs inside the network. Our algorithms guarantee that buffers inside the network never overflow. We study the tradeoff between buffer size and network utility and show that if internal buffers have size (N - 1)/ε then a high fraction of the maximum utility can be achieved, where ε captures the loss in utility and N is the number of network nodes. The underlying scheduling/routing component of the considered control algorithms requires ingress queue length information (IQI) at all network nodes. However, we show that these algorithms can achieve the same utility performance with delayed ingress queue length information. Numerical results reveal that the considered algorithms achieve nearly optimal network utility with a significant reduction in queue backlog compared to the existing algorithm in the literature. Finally, we discuss the extension of the algorithms to wireless networks with time-varying links.
【Keywords】: buffer circuits; flow control; optimal control; scheduling; telecommunication control; telecommunication network routing; wireless channels; delayed ingress queue length information; finite buffers; joint flow control; network control; optimal control; optimal network utility; queue backlog; routing algorithms; scheduling algorithms; time-varying links; wireless networks; Buffer overflow; Communication system control; Delay; Optimal control; Peer to peer computing; Routing; Scheduling algorithm; Throughput; Utility programs; Wireless networks
【Paper Link】 【Pages】:2043-2051
【Authors】: Andre Berger ; James Gross ; Tobias Harks
【Abstract】: In communication networks, resource assignment problems appear in several different settings. These problems are often modeled by a maximum weight matching problem in bipartite graphs and efficient matching algorithms are well known. In several applications, the corresponding matching problem has to be solved many times in a row as the underlying system operates in a time-slotted fashion and the edge weights change over time. However, changing the assignments can come with a certain cost for reconfiguration that depends on the number of changed edges between subsequent assignments. In order to control the cost of reconfiguration, we propose the k-constrained bipartite matching problem for bipartite graphs, which seeks an optimal matching that realizes at most k changes from a previous matching. We provide fast approximation algorithms with provable guarantees for this problem. Furthermore, to cope with the sequential nature of assignment problems, we introduce an online variant of the k-constrained matching problem and derive online algorithms that are based on our approximation algorithms for the k-constrained bipartite matching problem. Finally, we establish the applicability of our model and our algorithms in the context of OFDMA wireless networks finding a significant performance improvement for the proposed algorithms.
【Keywords】: frequency division multiple access; radio networks; OFDMA wireless networks; approximation algorithms; bipartite graphs; communication networks; k-constrained bipartite matching; maximum weight matching problem; resource assignment problems; Approximation algorithms; Bipartite graph; Communication networks; Communications Society; Cost function; Electronic mail; Mathematics; Optimal control; Optimal matching; Wireless networks
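The k-constrained matching model can be made concrete with a brute-force sketch for tiny instances (illustrative only: the paper's contribution is fast approximation algorithms with provable guarantees, not enumeration, and the function and instance below are assumptions of this sketch):

```python
from itertools import permutations

def best_matching_within_k(weights, prev, k):
    """Exhaustively find a maximum-weight perfect matching on an n x n
    bipartite graph that differs from the previous matching `prev`
    (prev[i] = column currently matched to row i) in at most k edges.
    Brute force over all permutations, so only usable for tiny n."""
    n = len(weights)
    best, best_w = None, float("-inf")
    for perm in permutations(range(n)):
        changes = sum(1 for i in range(n) if perm[i] != prev[i])
        if changes > k:
            continue  # reconfiguration budget exceeded
        w = sum(weights[i][perm[i]] for i in range(n))
        if w > best_w:
            best, best_w = perm, w
    return best, best_w

weights = [[5, 1, 1],
           [1, 5, 1],
           [1, 1, 9]]
prev = (1, 0, 2)  # the assignment from the previous time slot
unchanged = best_matching_within_k(weights, prev, k=0)  # must keep prev
relaxed = best_matching_within_k(weights, prev, k=3)    # may reassign
```

With `k=0` the scheduler is stuck with the stale assignment (weight 11); allowing up to three edge changes recovers the true optimum (weight 19), which is exactly the cost-of-reconfiguration trade-off the abstract formalizes.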
【Paper Link】 【Pages】:2052-2060
【Authors】: Bozidar Radunovic ; Prasanna Chaporkar ; Alexandre Proutière
【Abstract】: In Wireless LANs, users may adapt their transmission rates depending on the radio conditions of their links so as to maximize their throughput. Recently, there has been a significant research effort in developing distributed rate adaptation schemes. Unlike previous works that mainly focus on channel tracking, this paper characterizes the optimal reaction of a rate adaptation protocol to the contention information received from the MAC. We formulate this problem analytically. We study both competitive and cooperative user behaviors. In the case of competition, users selfishly adapt their rates so as to maximize their own throughput, whereas in the case of cooperation they adapt their rates so as to maximize the overall system throughput. We show that the Nash Equilibrium reached in the case of competition is inefficient (i.e. the price of anarchy goes to infinity as the number of users increases), and provide insightful properties of the socially optimal rate adaptation schemes. We find that recently proposed collision-aware rate adaptation algorithms decrease the price of anarchy. We also propose a novel collision-aware rate adaptation algorithm that further reduces the price of anarchy.
【Keywords】: access protocols; game theory; wireless LAN; MAC protocol; Nash equilibrium; collision aware rate adaptation algorithm; contention information; optimal rate adaptation; price of anarchy; rate adaptation game; rate adaptation protocol; wireless LAN; Communications Society; H infinity control; Hardware; Media Access Protocol; Modulation coding; Nash equilibrium; Propagation losses; Throughput; Time factors; Wireless LAN
【Paper Link】 【Pages】:2061-2069
【Authors】: Yan Yang ; Alix L. H. Chow ; Leana Golubchik ; Danielle Bragg
【Abstract】: In recent years a number of research efforts have focused on effective use of P2P-based systems in providing large scale video streaming services. In particular, live streaming and Video-on-Demand (VoD) systems have attracted much interest. While previous efforts mainly focused on the common challenges faced by both types of applications, there are still a number of fundamental open questions in designing P2P-based VoD systems, which is the focus of our effort. Specifically, in this paper, we consider a BitTorrent-like VoD system and focus on the following questions: (1) how the lack of load balance, which typically exists in a P2P-based VoD system, affects the performance and what steps can be taken to remedy that, and (2) whether a FCFS approach to serving requests at a peer is sufficient or whether a Deadline-Aware Scheduling (DAS) approach can lead to performance improvements. Given the deadline considerations that exist in VoD systems, we also investigate approaches to avoiding unnecessary queueing time. For each of these questions, we first illustrate deficiencies of current approaches in adequately meeting streaming quality of service requirements. Motivated by this, we propose several practical schemes aimed at addressing these questions. To illustrate the benefits of our approach, we present an extensive simulation-based performance study.
【Keywords】: quality of service; scheduling; video on demand; video streaming; BitTorrent-like VoD systems; P2P-based systems; QoS; deadline aware scheduling; live streaming; peer-to-peer systems; quality of service; queueing time; video-on-demand systems; Bandwidth; Communications Society; Internet; Large-scale systems; Motion pictures; Peer to peer computing; Quality of service; Scalability; Streaming media; Thin film transistors
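The FCFS-versus-deadline-aware question can be illustrated with a toy single-server simulation. The function, the unit service time, and the request set below are assumptions of this sketch, not the paper's DAS scheme:

```python
import heapq

def deadline_misses(requests, policy):
    """Serve unit-time chunk requests on one uplink and count deadline
    misses under FCFS or deadline-aware (EDF-style) service.
    `requests` is a list of (arrival_time, deadline) pairs."""
    queue = sorted(requests, key=lambda r: r[0])  # stable sort by arrival
    time = missed = i = 0
    pending = []
    while i < len(queue) or pending:
        if not pending and queue[i][0] > time:
            time = queue[i][0]  # server idles until the next arrival
        while i < len(queue) and queue[i][0] <= time:
            arrival, deadline = queue[i]
            # FCFS orders by arrival, EDF by deadline; the index breaks ties
            key = (arrival, i) if policy == "fcfs" else (deadline, i)
            heapq.heappush(pending, (key, deadline))
            i += 1
        _, deadline = heapq.heappop(pending)
        time += 1  # one time unit to serve a chunk
        if time > deadline:
            missed += 1
    return missed

# A loose-deadline request arriving first starves tight-deadline peers
# under FCFS, while deadline-aware service meets every deadline.
requests = [(0, 5), (0, 1), (0, 2), (0, 3)]
fcfs_misses = deadline_misses(requests, "fcfs")
edf_misses = deadline_misses(requests, "edf")
```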
【Paper Link】 【Pages】:2070-2078
【Authors】: Zimu Liu ; Chuan Wu ; Baochun Li ; Shuqiao Zhao
【Abstract】: Since the inception of network coding in information theory, we have witnessed a sharp increase of research interest in its applications in communications and networking, where the focus has been on more practical aspects. However, thus far, network coding has not been deployed in real-world commercial systems in operation at a large scale, and in a production setting. In this paper, we present the objectives, rationale, and design in the first production deployment of random network coding, where it has been used in the past year as the cornerstone of a large-scale production on-demand streaming system, operated by UUSee Inc., delivering thousands of on-demand video channels to millions of unique visitors each month. To achieve a thorough understanding of the performance of network coding, we have collected 200 Gigabytes worth of real-world traces throughout the 17-day Summer Olympic Games in August 2008, and present our lessons learned after an in-depth trace-driven analysis.
【Keywords】: information theory; network coding; video streaming; Summer Olympic Games; UUSee; information theory; large-scale operational on-demand streaming; large-scale production; on-demand video channels; random network coding; real-world traces; Bandwidth; Information theory; Large-scale systems; Network coding; Network servers; Peer to peer computing; Performance analysis; Production systems; Protocols; Streaming media
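Random linear network coding, the cornerstone of the system above, can be sketched over GF(2) with XOR combinations and Gaussian-elimination decoding. This is a minimal illustration of the idea, not UUSee's production design (which operates on large blocks, typically over larger fields):

```python
import random

def rlnc_encode(packets, n, seed=7):
    """Encode k source packets (small ints) into n coded packets over
    GF(2): each coded packet is a random XOR combination of the sources,
    carried together with its coefficient vector."""
    random.seed(seed)
    k = len(packets)
    coded = []
    for _ in range(n):
        coeffs = [random.randint(0, 1) for _ in range(k)]
        if not any(coeffs):
            coeffs[random.randrange(k)] = 1  # skip the useless all-zero vector
        payload = 0
        for c, p in zip(coeffs, packets):
            if c:
                payload ^= p
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, k):
    """Gauss-Jordan elimination over GF(2): returns the k source packets
    if the received coefficient vectors have full rank, else None."""
    rows = [[list(c), p] for c, p in coded]
    pivot_rows = []
    for col in range(k):
        pivot = next((r for r in rows if r[0][col] == 1), None)
        if pivot is None:
            return None  # rank deficient: need more coded packets
        rows.remove(pivot)
        for r in rows + pivot_rows:
            if r[0][col] == 1:  # cancel this column everywhere else
                r[0] = [a ^ b for a, b in zip(r[0], pivot[0])]
                r[1] ^= pivot[1]
        pivot_rows.append(pivot)
    out = [None] * k
    for r in pivot_rows:
        out[r[0].index(1)] = r[1]
    return out

source = [0x05, 0x17, 0x29]
# Hand-built coded packets with full-rank coefficient vectors:
received = [([1, 0, 0], 0x05),
            ([1, 1, 0], 0x05 ^ 0x17),
            ([0, 1, 1], 0x17 ^ 0x29)]
decoded = rlnc_decode(received, k=3)
# Round trip through the random encoder; with random GF(2) vectors,
# slightly more than k packets may be needed before the rank reaches k:
recovered = rlnc_decode(rlnc_encode(source, n=12), k=3)
```

The key property exploited by such systems is that any full-rank subset of coded packets suffices for decoding, regardless of which peers the packets came from.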
【Paper Link】 【Pages】:2079-2087
【Authors】: Jianping Wang ; Chunming Qiao ; Yan Li ; Kejie Lu
【Abstract】: Minimizing the worst-case playback delay (WPD) in VoD services is both critical and challenging. Given a fixed amount of bandwidth for broadcasting and patching, there is no prior work on determining the minimum WPD, let alone guaranteeing it. In this work, we propose novel schemes that leverage the unique properties of a TDM-based Passive Optical Network (PON) by performing rebroadcasting and patching at its Optical Network Units (ONUs). For a given bandwidth available for VoD services in the PON, we derive the minimum WPD, and also design an optimal patch scheduling algorithm as well as an ONU rebroadcast and patching channel assignment to guarantee this minimum WPD. Numerical results confirm the superiority of the proposed schemes over existing ones in terms of both worst-case and average performance.
【Keywords】: channel allocation; delays; optical fibre networks; scheduling; time division multiplexing; video on demand; ONU patching; ONU rebroadcast; channel assignment; optical network unit; optimal patch scheduling; passive optical networks; time division multiplexing; video on demand; worst-case playback delay; Bandwidth; Broadcasting; Delay; Internet; Multimedia communication; Optical network units; Passive optical networks; Streaming media; Video on demand; Web server
【Paper Link】 【Pages】:2088-2096
【Authors】: Anh Tuan Nguyen ; Baochun Li ; Frank Eliassen
【Abstract】: Layered streaming can be used to adapt to the available download capacity of an end-user, and such adaptation is very much required in real-world HTTP media streaming. Multiple-layer codecs have become more refined: SVC (the scalable extension of the H.264/AVC standard) has been standardized with a bit rate overhead of around 10% and indistinguishable visual quality compared to the state-of-the-art single-layer codec. Peer-to-peer streaming systems have also become a reality. The important question is how such layered coding can be used in real-world peer-to-peer streaming systems. This paper explores the feasibility of using network coding to make layered peer-to-peer streaming much more realistic, by combining network coding and SVC in a fine-granularity manner. We present Chameleon, our new peer-to-peer streaming algorithm designed to incorporate network coding seamlessly with SVC. Key components with different design options of Chameleon are presented and experimentally evaluated, with the objective of investigating the benefits of network coding in combination with SVC. We carry out extensive experiments on real stream data to (i) evaluate the performance of Chameleon in terms of playback skips and delivered video quality, and (ii) gain insight into its behavior. Our results demonstrate the feasibility of the approach and bring us one step closer to real adaptive peer-to-peer streaming.
【Keywords】: media streaming; network coding; peer-to-peer computing; transport protocols; video coding; Chameleon; H.264/AVC; HTTP media streaming; SVC; adaptive peer-to-peer streaming; multiple layer codec; network coding; video quality; visual quality; Automatic voltage control; Bandwidth; Codecs; Fluctuations; Network coding; Peer to peer computing; Protocols; Scalability; Static VAr compensators; Streaming media
【Paper Link】 【Pages】:2097-2105
【Authors】: Changlei Liu ; Guohong Cao
【Abstract】: Self-monitoring of sensor statuses such as liveness, node density and residue energy is critical for maintaining the normal operation of the sensor network. When building the monitoring architecture, most existing work focuses on minimizing the number of monitoring nodes. However, with fewer monitoring points, the false alarm rate may increase as a consequence. In this paper, we study the fundamental tradeoff between the number of monitoring nodes and the false alarm rate in wireless sensor networks. Specifically, we propose fully distributed monitoring algorithms to build up a poller-pollee based architecture with the objective of minimizing the number of overall pollers while bounding the false alarm rate. Based on the established monitoring architecture, we further explore the hop-by-hop aggregation opportunity along the multihop path from the pollee to the poller, with the objective of minimizing the monitoring overhead. We show that the optimal aggregation path problem is NP-hard and propose an opportunistic greedy algorithm, which achieves an approximation ratio of 5/4. As far as we know, this is the first proved constant approximation ratio applied to aggregation path selection schemes over wireless sensor networks.
【Keywords】: greedy algorithms; monitoring; optimisation; wireless sensor networks; NP-hard algorithm; distributed aggregation; distributed monitoring algorithms; false alarm rate; greedy; hop-by-hop aggregation; multihop path; poller-pollee based architecture; residue energy; wireless sensor networks; Buildings; Communications Society; Computer science; Computerized monitoring; Costs; Energy efficiency; Maintenance engineering; Peer to peer computing; Power engineering and energy; Wireless sensor networks
【Paper Link】 【Pages】:2106-2114
【Authors】: Pu Wang ; Rui Dai ; Ian F. Akyildiz
【Abstract】: Data redundancy caused by correlation has motivated the application of collaborative multimedia in-network processing for data filtering and compression in wireless multimedia sensor networks (WMSNs). This paper proposes an information theoretic data compression framework with the objective of maximizing the overall compression of the visual information gathered in a WMSN. To achieve this, an entropy-based divergence measure (EDM) scheme is proposed to predict the compression efficiency of performing joint coding on the images collected by spatially correlated cameras. The novelty of EDM lies in its independence from specific image types and coding algorithms, thereby providing a generic mechanism for prior evaluation of compression under different coding solutions. Utilizing the predicted results from EDM, a distributed multi-cluster coding protocol (DMCP) is proposed to construct a compression-oriented coding hierarchy. The DMCP aims to partition the entire network into a set of coding clusters such that the global coding gain is maximized. Moreover, in order to enhance decoding reliability at the data sink, the DMCP also guarantees that each sensor camera is covered by at least two different coding clusters. Experiments on H.264 standards show that the proposed EDM can effectively predict the joint coding efficiency from multiple sources. Further simulations demonstrate that the proposed compression framework can reduce the total coding rate by 10%-23% compared with the individual coding scheme, i.e., each camera sensor compressing its own image independently.
【Keywords】: multimedia communication; protocols; source coding; wireless sensor networks; clustered source coding; collaborative data compression; data redundancy; distributed multi-cluster coding protocol; entropy-based divergence measure; wireless multimedia sensor networks; Cameras; Clustering algorithms; Collaboration; Data compression; Filtering; Image coding; Performance evaluation; Redundancy; Source coding; Wireless sensor networks
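An entropy-based divergence between per-camera histograms can be sketched with the standard Jensen-Shannon divergence as a stand-in (the paper's EDM is its own construction; here low divergence merely suggests higher redundancy and thus more to gain from joint coding, and the toy "images" are symbol strings):

```python
import math
from collections import Counter

def entropy(hist):
    """Shannon entropy in bits of a histogram (dict symbol -> count)."""
    total = sum(hist.values())
    return -sum((c / total) * math.log2(c / total) for c in hist.values())

def js_divergence(a, b):
    """Jensen-Shannon divergence (in bits, between 0 and 1) of two
    histograms. Symmetric and finite even for disjoint supports."""
    ta, tb = sum(a.values()), sum(b.values())
    keys = set(a) | set(b)
    m = {k: a.get(k, 0) / ta / 2 + b.get(k, 0) / tb / 2 for k in keys}
    def kl(p, t):  # KL divergence of p (counts, total t) against the mixture m
        return sum((c / t) * math.log2((c / t) / m[k]) for k, c in p.items() if c)
    return kl(a, ta) / 2 + kl(b, tb) / 2

img1 = Counter("aaabbbccc")    # toy stand-in for an image histogram
img2 = Counter("aaabbbccd")    # nearly identical distribution
img3 = Counter("xxyyzzwwvv")   # unrelated content
h = entropy(img1)
similar = js_divergence(img1, img2)
different = js_divergence(img1, img3)
```

A clustering protocol in the spirit of DMCP would group cameras whose pairwise divergence is low, since those are the pairs for which joint coding is predicted to pay off.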
【Paper Link】 【Pages】:2115-2123
【Authors】: Jian Li ; Amol Deshpande ; Samir Khuller
【Abstract】: We address the problem of efficiently gathering correlated data from a wireless sensor network, with the aim of designing algorithms with provable optimality guarantees, and understanding how close we can get to the known theoretical lower bounds. Our proposed approach is based on finding an optimal or a near-optimal compression tree for a given sensor network: a compression tree is a directed tree over the sensor network nodes such that the value of a node is compressed using the value of its parent. We focus on broadcast communication model in this paper, but our results are more generally applicable to a unicast communication model as well. We draw connections between the data collection problem and a previously studied graph concept called weakly connected dominating sets, and we use this to develop novel approximation algorithms for the problem. We present comparative results on several synthetic and real-world datasets showing that our algorithms construct near-optimal compression trees that yield a significant reduction in the data collection cost.
【Keywords】: data communication; data compression; trees (mathematics); wireless sensor networks; approximation algorithms; computing compression trees; data collection; designing algorithms; graph concept; weakly connected dominating sets; wireless sensor networks; Base stations; Communications Society; Computer networks; Computer science; Costs; Educational institutions; Entropy; Monitoring; Protocols; Wireless sensor networks
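The weakly connected dominating set concept the authors draw on can be illustrated with a simple greedy heuristic for a connected graph. This is an assumption-laden sketch of the graph notion itself, not the paper's approximation algorithm:

```python
def greedy_wcds(adj):
    """Greedy weakly connected dominating set for a connected graph
    `adj` (dict node -> neighbor list): grow the set from the
    highest-degree node, each step adding the already-dominated node that
    newly dominates the most uncovered nodes. Restricting candidates to
    dominated nodes keeps the weakly induced subgraph connected."""
    nodes = set(adj)
    start = max(sorted(nodes), key=lambda v: len(adj[v]))
    chosen = {start}
    covered = {start} | set(adj[start])
    while covered != nodes:
        candidates = covered - chosen
        best = max(sorted(candidates),
                   key=lambda v: len(set(adj[v]) - covered))
        chosen.add(best)
        covered |= set(adj[best])
    return chosen

# Path graph a-b-c-d-e: the interior nodes form a WCDS.
adj = {
    "a": ["b"], "b": ["a", "c"], "c": ["b", "d"],
    "d": ["c", "e"], "e": ["d"],
}
wcds = greedy_wcds(adj)
```

In the paper's setting, nodes of such a set anchor the compression tree: every sensor is adjacent to the set, so each value can be compressed against a nearby parent.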
【Paper Link】 【Pages】:2124-2132
【Authors】: Rui Tan ; Guoliang Xing ; Xue Liu ; Jianguo Yao ; Zhaohui Yuan
【Abstract】: Wireless sensor networks (WSNs) are typically composed of low-cost sensors that are deeply integrated with physical environments. As a result, the sensing performance of a WSN is inevitably undermined by various physical uncertainties, which include stochastic sensor noises, unpredictable environment changes and dynamics of the monitored phenomenon. Traditional solutions (e.g., sensor calibration and collaborative signal processing) work in an open-loop fashion and hence fail to adapt to these uncertainties after system deployment. In this paper, we propose an adaptive system-level calibration approach for a class of sensor networks that employ data fusion to improve system sensing performance. Our approach features a feedback control loop that exploits sensor heterogeneity to deal with the aforementioned uncertainties in calibrating system performance. In contrast to existing heuristic based solutions, our control-theoretical calibration algorithm can ensure provable system stability and convergence. We also systematically analyze the impacts of communication reliability and delay, and propose an optimal routing algorithm that minimizes the impact of packet loss on system stability. Our approach is evaluated both by experiments on a testbed of Tmotes and by extensive simulations based on data traces gathered from a real vehicle detection experiment. The results demonstrate that our calibration algorithm enables a network to maintain the optimal detection performance in the presence of various system and environmental dynamics.
【Keywords】: feedback; sensor fusion; telecommunication network reliability; telecommunication network routing; wireless sensor networks; Tmotes; adaptive calibration; adaptive system-level calibration; communication delay; communication reliability; control-theoretical calibration algorithm; data fusion; environmental dynamics; feedback control loop; fusion-based wireless sensor network; low-cost sensor; optimal routing; packet loss; physical environment; physical uncertainty; provable system stability; sensor heterogeneity; stochastic sensor noise; system dynamics; system sensing performance; unpredictable environment change; Calibration; Condition monitoring; Sensor phenomena and characterization; Sensor systems; Signal processing algorithms; Stochastic resonance; Uncertainty; Vehicle dynamics; Wireless sensor networks; Working environment noise
【Paper Link】 【Pages】:2133-2141
【Authors】: Yao Hua ; Qian Zhang ; Zhisheng Niu
【Abstract】: Cooperative relay networks combined with Orthogonal Frequency Division Multiplexing Access (OFDMA) technology have been widely recognized as a promising candidate for future cellular infrastructure due to the performance enhancement by flexible resource allocation schemes. The majority of the existing schemes aim to optimize single cell performance gain. However, the higher frequency reuse factor and smaller cell size requirement lead to a severe inter-cell interference problem. Therefore, the multi-cell resource allocation of subcarrier, time scheduling and power should be jointly considered to alleviate the severe inter-cell interference problem. In this paper, the joint resource allocation problem is formulated. Considering the high complexity of the optimal solution, a two-stage resource allocation scheme is proposed. In the first stage, all of the users in each cell are selected sequentially and the joint subcarrier allocation and scheduling is conducted for the selected users without considering the interference. In the second stage, the optimal power control is performed by the geometric programming method. Simulation results show that the proposed interference-aware resource allocation scheme improves the system capacity compared with existing schemes. In particular, edge users benefit the most.
【Keywords】: OFDM modulation; geometric programming; interference (signal); optimal control; radio networks; resource allocation; cell size requirement; future cellular infrastructure; geometric programming method; higher frequency reuse factor; inter-cell interference; multicell OFDMA-based relay networks; multicell resource allocation; optimal power control; orthogonal frequency division multiplexing access; single cell performance gain; subcarrier allocation; two-stage resource allocation scheme; Communications Society; Information science; Interference; Laboratories; Performance gain; Power control; Power system relaying; Relays; Resource management; Throughput
【Paper Link】 【Pages】:2142-2150
【Authors】: Joshua Robinson ; Mohit Singh ; Ram Swaminathan ; Edward W. Knightly
【Abstract】: Wireless mesh networks are popular as a cost- effective means to provide broadband connectivity to large user populations. A mesh network placement provides coverage, such that each target client location has a link to a deployed mesh node, and connectivity, such that each mesh node wirelessly connects directly to a gateway or via intermediate mesh nodes. Prior work on placement assumes wireless propagation to be uniform in all directions, i.e., an unrealistic assumption of circular communication regions. In this paper, we present approximation algorithms to solve the NP-hard mesh node placement problem for non-uniform propagation settings. The first key challenge is incorporating non-uniform propagation, which we address by formulating the problem input as a connectivity graph consisting of discrete target coverage locations and potential mesh node locations. This graph incorporates non-uniform propagation by specifying the estimated signal quality per link. Secondly, our algorithms are the first to minimize the number of deployed mesh nodes with constant-factor approximation ratio in the non-uniform propagation setting. To achieve this, we formulate the Degree-Constrained Terminal Steiner tree problem and present approximation algorithms which leverage prior results on the Steiner tree problem. Third, it is impractical to measure all possible potential mesh links, and therefore deployment planning must rely on estimations. To address this challenge, we extend our algorithm to iteratively measure the links in the solution Steiner tree, refining the graph input on a per-link basis in order to ensure the deployed network is not disconnected. Finally, we use propagation measurements at 35,000 locations in the deployed GoogleWiFi network to investigate placement in a realistic, non-uniform propagation environment. 
Under this measured propagation setting, our algorithms result in up to 80% fewer mesh nodes than current algorithms and only require an average of 3 measurements per deployed mesh node to ensure backhaul connectivity.
【Keywords】: optimisation; trees (mathematics); wireless mesh networks; GoogleWiFi network; NP-hard problems; broadband connectivity; connectivity graph; degree-constrained terminal Steiner tree problem; mesh node deployment; nonuniform propagation; target client location; wireless mesh networks; Approximation algorithms; Communications Society; Current measurement; Estimation error; IP networks; Iterative algorithms; Mesh networks; Peer to peer computing; Telecommunication traffic; Wireless mesh networks
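A baseline for the coverage-plus-connectivity placement above is the textbook 2-approximation for Steiner trees via the metric closure. The paper's degree-constrained terminal variant is more involved; this sketch uses hop counts as distances and a hypothetical toy topology:

```python
from collections import deque
from itertools import combinations

def bfs_prev(adj, src):
    """BFS predecessor map from src, for shortest hop-count paths."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return prev

def steiner_tree_nodes(adj, terminals):
    """Textbook 2-approximation for the Steiner tree: take the MST of the
    metric closure over the terminals, then expand its edges back into
    shortest paths. Returns the node set of the resulting tree."""
    prevs = {t: bfs_prev(adj, t) for t in terminals}
    def path(s, t):  # shortest path from s to t, as a list of nodes
        node, out = t, []
        while node is not None:
            out.append(node)
            node = prevs[s][node]
        return out
    edges = sorted((len(path(s, t)) - 1, s, t)
                   for s, t in combinations(terminals, 2))
    parent = {t: t for t in terminals}  # Kruskal's MST via union-find
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree_nodes = set(terminals)
    for _, s, t in edges:
        rs, rt = find(s), find(t)
        if rs != rt:
            parent[rs] = rt
            tree_nodes.update(path(s, t))
    return tree_nodes

# Toy topology: gateway 'g' must reach client-facing terminals 't1', 't2'.
adj = {
    "g": ["a"], "a": ["g", "b", "c"], "b": ["a", "t1"],
    "c": ["a", "t2"], "t1": ["b"], "t2": ["c"],
}
tree = steiner_tree_nodes(adj, ["g", "t1", "t2"])
```

The intermediate nodes pulled into the tree ('a', 'b', 'c' here) play the role of relay mesh nodes that connect coverage locations back to the gateway.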
【Paper Link】 【Pages】:2151-2159
【Authors】: Eugene Chai ; Kang G. Shin
【Abstract】: In traditional wireless networks, nodes use only a single channel per radio interface, thus limiting the overall channel diversity of the network. This restriction is due to the inherent limitations of commercially-available RF devices. With the advent of high bandwidth software-defined radios (SDRs), we now have the option of assigning multiple contiguous, independent channels to a single wireless interface. This new-found opportunity raises an important question: how do we assign contiguous channels to nodes in order to maximize overall network throughput? This question lies at the often-ignored intersection of single-radio-multi-channel and multi-radio-multi-channel assignment schemes. In this paper, we develop a protocol that assigns contiguous channels with the goal of evenly spreading the load across the multiple channels. Neighboring nodes greedily adjust their channel ranges according to channel conditions to achieve an overall pattern of partially-overlapping bandwidths that maximizes the network throughput. The end-result is a network that can dynamically adapt its bandwidth usage to the network load and the conditions of the different channels. The proposed protocol is evaluated with a prototype built upon the USRP as well as with detailed simulation.
【Keywords】: access protocols; channel allocation; software radio; wireless mesh networks; M-Polar; SDR wireless mesh networks; USRP; channel allocation; channel diversity; commercially-available RF devices; contiguous channels; contiguous channels protocol; multi-radio-multi-channel assignment schemes; neighboring nodes; single channel per radio interface; software-defined radios; throughput maximization; Bandwidth; Channel allocation; Communication switching; Communications Society; Interference; Mesh networks; Peer to peer computing; Protocols; Switches; Throughput
【Paper Link】 【Pages】:2160-2168
【Authors】: Claudio Cicconetti ; Luciano Lenzini ; Andrea Lodi ; Silvano Martello ; Enzo Mingozzi ; Michele Monaci
【Abstract】: The IEEE 802.16 standard uses Orthogonal Frequency Division Multiple Access (OFDMA) for mobility support. Therefore, the medium access control frame extends in two dimensions, i.e., time and frequency. At the beginning of each frame, i.e., every 5 ms, the base station is responsible both for scheduling packets, based on the negotiated quality of service requirements, and for allocating them into the frame, according to the restrictions imposed by 802.16 OFDMA. To break down the complexity, a split approach has been proposed in the literature, where the two tasks are solved in separate and subsequent stages. In this paper we focus on the allocation task alone, which is addressed in its full complexity, i.e., by considering that data within the frame must be allocated as bursts with rectangular shape, each consisting of a set of indivisible sub-bursts, and that a variable portion of the frame is reserved for in-band signaling. After proving that the resulting allocation problem is NP-hard, we develop an efficient heuristic algorithm, called Recursive Tiles and Stripes (RTS), to solve it. RTS, in addition to handling a more general problem, is shown to perform better than state-of-the-art solutions via numerical analysis with realistic system parametrization.
【Keywords】: OFDM modulation; WiMax; communication complexity; frequency division multiple access; numerical analysis; quality of service; IEEE 802.16 standard; NP-hard; OFDMA; allocation task; base station; heuristic algorithm; in-band signaling; medium access control frame; mobility support; numerical analysis; orthogonal frequency division multiple access; packet scheduling; quality of service requirement; recursive stripes; recursive tiles; split approach; system parametrization; two-dimensional data allocation; Frequency conversion; Heuristic algorithms; Media Access Protocol; Numerical analysis; Physical layer; Quality of service; Resource management; Scheduling; Shape; Tiles
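The flavor of stripe-based burst allocation can be conveyed with a classic shelf-packing heuristic on a time-frequency frame. This is a deliberately simple baseline with made-up dimensions; RTS additionally handles indivisible sub-bursts and the reserved signaling region:

```python
def shelf_pack(frame_w, frame_h, bursts):
    """Pack rectangular bursts (w, h) into a frame_w x frame_h frame
    using a shelf ("stripe") heuristic: fill left-to-right stripes whose
    height is set by the tallest burst placed in each. Returns a list of
    (x, y, w, h) placements, or None if a burst does not fit."""
    placements, x, y, shelf_h = [], 0, 0, 0
    for w, h in sorted(bursts, key=lambda b: -b[1]):  # tallest first
        if w > frame_w or h > frame_h:
            return None  # burst can never fit in this frame
        if x + w > frame_w:  # current stripe full: open a new one above
            y += shelf_h
            x, shelf_h = 0, 0
        if y + h > frame_h:
            return None  # frame capacity exhausted
        placements.append((x, y, w, h))
        x += w
        shelf_h = max(shelf_h, h)
    return placements

placed = shelf_pack(10, 8, [(4, 3), (3, 5), (5, 2), (2, 2)])
```

Sorting tallest-first keeps each stripe tightly filled; the wasted area above shorter bursts in a stripe is precisely the inefficiency that more elaborate tile-and-stripe schemes target.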
【Paper Link】 【Pages】:2169-2177
【Authors】: Lu Cheng ; Xuesong Qiu ; Luoming Meng ; Yan Qiao ; Raouf Boutaba
【Abstract】: Active probing is an effective tool for monitoring networks. By measuring probing responses, we can perform fault diagnosis actively and efficiently without instrumentation on managed entities. In order to reduce the traffic generated by probing messages and the measurement infrastructure costs, an optimal set of probes is desirable. However, the computational complexity of obtaining such an optimal set is very high. Existing works assume single-fault scenarios, apply only to small networks, or use simplistic methods that are vulnerable to noise. In this paper, by exploiting the conditional independence property of Bayesian networks, we prove a theorem on the information provided by a set of probes. Based on this theorem and the structural properties of Bayesian networks, we propose two approaches which can effectively reduce the computation time. A highly efficient adaptive probing algorithm is then presented. Compared with previous techniques, experiments have shown that our approach is more efficient in selecting an optimal set of probes without degrading diagnosis quality in large scale and noisy networks.
【Keywords】: Bayes methods; computational complexity; computer networks; fault diagnosis; Bayesian networks; active probing; computational complexity; fault diagnosis; large scale networks; monitoring networks; noisy networks; Active noise reduction; Bayesian methods; Cost function; Fault diagnosis; Instruments; Large-scale systems; Monitoring; Performance evaluation; Probes; Telecommunication traffic
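Probe selection for diagnosis can be illustrated with a greedy sketch under a single-fault, noise-free simplification: pick the probe whose pass/fail outcome best splits the remaining fault hypotheses. The paper's Bayesian, noise-tolerant algorithm is considerably more general; the components and probe paths below are hypothetical:

```python
def refine(partition, probe):
    """Split each group of candidate faulty components by whether the
    probe traverses the component (probe would fail) or not (it would
    succeed)."""
    out = []
    for group in partition:
        hit, miss = group & probe, group - probe
        out += [g for g in (hit, miss) if g]
    return out

def greedy_probe_selection(components, probes, budget):
    """Greedily pick up to `budget` probes (tuples of traversed
    components) that split the single-fault hypothesis groups into the
    most parts; stop early once no probe adds diagnostic information."""
    partition = [set(components)]
    chosen = []
    for _ in range(budget):
        best = max(probes, key=lambda p: len(refine(partition, set(p))))
        if len(refine(partition, set(best))) == len(partition):
            break  # no remaining probe distinguishes anything new
        chosen.append(best)
        partition = refine(partition, set(best))
    return chosen, partition

components = ["n1", "n2", "n3", "n4"]
probes = [("n1", "n2"), ("n2", "n3"), ("n1", "n3", "n4")]
chosen, groups = greedy_probe_selection(components, probes, budget=3)
```

Here two of the three probes already isolate every single-fault hypothesis, so the third probe is never sent, which is the traffic-reduction effect optimal probe selection is after.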
【Paper Link】 【Pages】:2178-2186
【Authors】: Weiyi Zhang ; Jian Tang ; Chonggang Wang ; Shanaka de Soysa
【Abstract】: Robustness and reliability are critical issues in network management. To provide resiliency, a popular protection scheme against network failures is the simultaneous routing along multiple disjoint paths. Most previous protection and restoration schemes were designed for all-or-nothing protection and are thus an overkill for data traffic. In this work, we study the Reliable Adaptive Multipath Provisioning (RAMP) problem with reliability and differential delay constraints. We aim to route the connections in a manner such that link failure does not shut down the entire stream but allows a continuing flow for a significant portion of the traffic along multiple (not necessarily disjoint) paths, allowing the whole network to carry sufficient traffic even when link/node failure occurs. The flexibility enabled by a multipath scheme has the tradeoff of differential delay among the diversely routed paths. This requires increased memory in the destination node in order to buffer the traffic until the data arrives on all the paths. Increased buffer size will raise the network element cost and could cause buffer overflow and data corruption. Therefore, differential delay between the multiple paths should be bounded by containing the delay of a path in a range. We first prove that RAMP is an NP-hard problem. Then we present a pseudo-polynomial time solution for a special case of RAMP in which edge delays are represented as integers. Next, a (1 + ε)-approximation algorithm is proposed to solve the optimization version of the RAMP problem. An efficient heuristic is also provided for the RAMP problem. We also present numerical results confirming the advantage of our schemes as the first solution for the RAMP problem.
【Keywords】: computational complexity; polynomial approximation; quality of service; telecommunication network management; telecommunication network reliability; telecommunication network routing; telecommunication traffic; (1 + ε)-approximation algorithm; NP-hard problem; RAMP problem; bandwidth constraints; buffer overflow; data corruption; data traffic; destination node; differential delay constraints; edge delays; efficient heuristic method; link-node failure; multiple disjoint paths; network element cost; network management; network reliability; quality of service; reliable adaptive multipath provisioning; Application software; Bandwidth; Computer science; Delay; National electric code; Neodymium; Protection; Quality of service; Routing; Telecommunication traffic
【Paper Link】 【Pages】:2187-2195
【Authors】: Giuseppe Bianchi ; Elisa Boschi ; Simone Teofili ; Brian Trammell
【Abstract】: We present an efficient network measurement primitive that measures the rate of variations, or unique values for a given characteristic of a traffic flow. The primitive is widely applicable to a variety of data reduction and pre-analysis tasks at the measurement interface, and we show it to be particularly useful for building data-reducing preanalysis stages for scan detection within a multistage network analysis architecture. The presented approach is based upon data structures derived from Bloom filters, and as such yields high performance with probabilistic accuracy and controllable worst-case time and memory complexity. This predictability makes it suitable for hardware implementation in dedicated network measurement devices. One key innovation of the present work is that it is self-tuning, adapting to the characteristics of the measured traffic.
【Keywords】: data reduction; data structures; probability; traffic engineering computing; Bloom filters; data structures; measurement data reduction; memory complexity; multistage network analysis; network measurement devices; probabilistic accuracy; traffic measurement; variation rate metering; Communication system traffic control; Communications Society; Condition monitoring; Data structures; Detectors; Europe; Filters; Fluid flow measurement; Hardware; Scalability
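As a hedged illustration of the primitive this abstract describes — a Bloom-filter-derived structure that counts how many distinct values of a flow characteristic have appeared — the sketch below records items in a bit array and increments a counter only when an item is (probably) new. The class name, array size, and hash construction are illustrative assumptions, not the authors' actual design.

```python
import hashlib

class UniqueRateEstimator:
    """Bloom-filter sketch for approximately counting distinct values
    (e.g., distinct destination ports contacted by one source).
    Sizes and hash choice are illustrative, not the paper's design."""

    def __init__(self, m_bits: int = 1 << 16, k: int = 4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)
        self.unique = 0  # approximate number of distinct values observed

    def _positions(self, item: bytes):
        # k bit positions from independently salted hashes.
        for i in range(self.k):
            h = hashlib.blake2b(item, digest_size=8, salt=bytes([i])).digest()
            yield int.from_bytes(h, "big") % self.m

    def observe(self, item: bytes) -> bool:
        """Record item; return True iff it was (probably) not seen before."""
        pos = list(self._positions(item))
        new = any((self.bits[p >> 3] >> (p & 7)) & 1 == 0 for p in pos)
        for p in pos:
            self.bits[p >> 3] |= 1 << (p & 7)
        if new:
            self.unique += 1
        return new
```

A scan detector built on such a stage would flag sources whose `unique` counter grows unusually fast; accuracy is probabilistic (false positives only), with worst-case time and memory fixed by `m_bits` and `k`, matching the predictability the abstract emphasizes.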
【Paper Link】 【Pages】:2196-2204
【Authors】: Myungjin Lee ; Nick G. Duffield ; Ramana Rao Kompella
【Abstract】: The inherent support in routers (SNMP counters or NetFlow) is not sufficient to diagnose performance problems in IP networks, especially flow-specific problems for which the aggregate behavior within a router appears normal. To address this problem, we propose a Consistent NetFlow (CNF) architecture for measuring per-flow performance within routers. CNF utilizes the NetFlow architecture, which already reports the first and last timestamps per flow, and hash-based sampling to ensure that two routers record the same flows. We devise a novel Multiflow estimator that approximates the intermediate delay samples from other background flows to improve the per-flow latency estimates significantly compared to the naive estimator that uses only actual flow samples. In our experiments using real backbone traces and realistic delay models, we show that the Multiflow estimator is accurate, with a median relative error of less than 20% for flows of more than 100 packets. We also show that a prior approach based on trajectory sampling performs about 2-3× worse.
【Keywords】: IP networks; computer network management; cryptography; protocols; telecommunication network routing; IP networks; SNMP counters; backbone traces; consistent NetFlow architecture; delay models; flow-specific problems; hash-based sampling; multiflow estimator; opportunistic flow-level latency estimation; per-flow latency estimation; per-flow performance measurements; routers; timestamps per-flow; trajectory sampling; Aggregates; Communications Society; Counting circuits; Delay estimation; IP networks; Inference algorithms; Probes; Protocols; Sampling methods; Statistics
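The hash-based sampling mentioned in this abstract can be sketched as follows: each router hashes the same invariant flow key against a shared threshold, so every router independently keeps exactly the same set of flows, and their per-flow records can later be paired up for latency estimation. The function name, salt, and hash choice below are assumptions for illustration, not CNF's actual specification.

```python
import hashlib

def keep_flow(five_tuple: tuple, fraction: float, salt: bytes = b"net-wide") -> bool:
    """Consistent hash-based sampling sketch: the keep/drop decision
    depends only on the invariant flow key, so all routers agree on
    which flows to record. (Illustrative, not CNF's actual format.)"""
    key = salt + "|".join(map(str, five_tuple)).encode()
    h = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return h / 2.0**64 < fraction  # hash mapped uniformly into [0, 1)
```

Because the decision is a pure function of the flow key, two routers running this code sample identical flows without any coordination; the per-flow latency is then the difference between the timestamps each router recorded for that flow.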
【Paper Link】 【Pages】:2205-2212
【Authors】: Peng-Jun Wan ; Zhu Wang ; Hongwei Du ; Scott C.-H. Huang ; Zhiyuan Wan
【Abstract】: Beaconing is a primitive communication task in which every node locally broadcasts a packet to all its neighbors within a fixed distance. Assume that all communications proceed in synchronous time-slots and each node can transmit at most one fixed-size packet per time-slot. The minimum-latency beaconing schedule (MLBS) problem in multihop wireless networks seeks a shortest schedule for beaconing subject to the interference constraint. MLBS has been studied intensively since the mid-1980s, but all prior work assumes the protocol interference model with uniform interference radii. In this paper, we first present a constant-approximation algorithm for MLBS under the protocol interference model with arbitrary interference radii. We then develop a constant-approximation algorithm for MLBS under the physical interference model. Both approximation algorithms admit efficient greedy first-fit implementations.
【Keywords】: access protocols; directed graphs; graph colouring; telecommunication network topology; wireless mesh networks; constant-approximation algorithm; first-fit scheduling; minimum-latency beaconing schedule; multihop wireless networks; Approximation algorithms; Broadcasting; Computer science; Interference constraints; Peer to peer computing; Processor scheduling; Protocols; Scheduling algorithm; Spread spectrum communication; Wireless networks
【Paper Link】 【Pages】:2213-2221
【Authors】: Berk Birand ; Maria Chudnovsky ; Bernard Ries ; Paul D. Seymour ; Gil Zussman ; Yori Zwols
【Abstract】: Efficient operation of wireless networks and switches requires using simple (and in some cases distributed) scheduling algorithms. In general, simple greedy algorithms (known as Greedy Maximal Scheduling - GMS) are guaranteed to achieve only a fraction of the maximum possible throughput (e.g., 50% throughput in switches). However, it was recently shown that in networks in which the Local Pooling conditions are satisfied, GMS achieves 100% throughput. Moreover, in networks in which the σ-Local Pooling conditions hold, GMS achieves σ% throughput. In this paper, we focus on identifying the specific network topologies that satisfy these conditions. In particular, we provide the first characterization of all the network graphs in which Local Pooling holds under primary interference constraints (in these networks GMS achieves 100% throughput). This leads to a linear time algorithm for identifying Local Pooling-satisfying graphs. Moreover, by using similar graph theoretical methods, we show that in all bipartite graphs (i.e., input-queued switches) of size up to 7 × n, GMS is guaranteed to achieve 66% throughput, thereby improving upon the previously known 50% lower bound. Finally, we study the performance of GMS in interference graphs and show that in certain specific topologies its performance could be very bad. Overall, the paper demonstrates that using graph theoretical techniques can significantly contribute to our understanding of greedy scheduling algorithms.
【Keywords】: graph theory; greedy algorithms; radio networks; radiofrequency interference; telecommunication network topology; bipartite graphs; graph theoretical techniques; greedy maximal scheduling algorithm; interference constraints; interference graphs; linear time algorithm; local pooling; network graphs; wireless networks; wireless switches; Bipartite graph; Graph theory; Greedy algorithms; Interference constraints; Network topology; Performance analysis; Scheduling algorithm; Switches; Throughput; Wireless networks
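The GMS policy analyzed in this abstract is simple to state: repeatedly activate the link with the heaviest queue, then discard every link that interferes with it. A minimal sketch on an explicit conflict graph, with an illustrative data layout (a queue-length dict and a conflict-adjacency dict) that is an assumption of this example:

```python
def greedy_maximal_schedule(queues: dict, conflicts: dict) -> set:
    """Greedy Maximal Scheduling (GMS) sketch: pick the link with the
    longest queue, remove it and all conflicting links, repeat until
    no backlogged link remains. The result is a maximal independent
    set of the conflict graph, chosen greedily by queue length."""
    remaining = {link for link, q in queues.items() if q > 0}
    schedule = set()
    while remaining:
        # Heaviest queue first; ties broken by link name for determinism.
        link = max(remaining, key=lambda l: (queues[l], l))
        schedule.add(link)
        remaining.discard(link)
        remaining -= set(conflicts.get(link, ()))
    return schedule
```

On a three-link path where adjacent links conflict (primary interference), the schedule depends on which link is heaviest: a heavy middle link is served alone, while heavy outer links are served together.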
【Paper Link】 【Pages】:2222-2230
【Authors】: Shreeshankar Bodas ; Sanjay Shakkottai ; Lei Ying ; R. Srikant
【Abstract】: This paper considers the problem of designing scheduling algorithms for multi-channel (e.g., OFDM) wireless downlink networks with n users/OFDM sub-channels. For this system, while the classical MaxWeight algorithm is known to be throughput-optimal, its buffer-overflow performance is very poor (formally, we show it has zero rate function in our setting). To address this, we propose a class of algorithms called iHLQF (iterated Heaviest matching with Longest Queues First) that is shown to be throughput-optimal for a general class of arrival/channel processes, and also rate-function optimal (i.e., exponentially small buffer overflow probability) for certain arrival/channel processes. iHLQF, however, has higher complexity than MaxWeight (O(n^4) vs. O(n^2), respectively). To overcome this issue, we propose a new algorithm called SSG (Server-Side Greedy). We show that SSG is throughput-optimal, results in much better per-user buffer overflow performance than the MaxWeight algorithm (positive rate function for certain arrival/channel processes), and has a computational complexity of O(n^2), comparable to the MaxWeight algorithm. Thus, it provides a nice trade-off between buffer-overflow performance and computational complexity. These results are validated by both analysis and simulations.
【Keywords】: OFDM modulation; computational complexity; radio networks; scheduling; telecommunication links; MaxWeight algorithm; OFDM; arrival/channel processes; computational complexity; low complexity scheduling algorithms; multichannel downlink wireless networks; server-side Greedy; Algorithm design and analysis; Analytical models; Buffer overflow; Computational complexity; Computational modeling; Downlink; OFDM; Scheduling algorithm; Throughput; Wireless networks
【Paper Link】 【Pages】:2231-2239
【Authors】: Juan José Jaramillo ; R. Srikant
【Abstract】: This paper studies the problem of congestion control and scheduling in ad hoc wireless networks that have to support a mixture of best-effort and real-time traffic. Optimization and stochastic network theory have been successful in designing architectures for fair resource allocation to meet long-term throughput demands. However, to the best of our knowledge, strict packet delay deadlines were not considered in this framework previously. In this paper, we propose a model for incorporating the quality of service (QoS) requirements of packets with deadlines in the optimization framework. The solution to the problem results in a joint congestion control and scheduling algorithm which fairly allocates resources to meet the fairness objectives of both elastic and inelastic flows, and per-packet delay requirements of inelastic flows.
【Keywords】: ad hoc networks; optimisation; quality of service; resource allocation; scheduling; telecommunication congestion control; ad hoc network; congestion control; optimization; quality of service; resource allocation; scheduling algorithm; Ad hoc networks; Communication system traffic control; Delay; Design optimization; Optimal scheduling; Quality of service; Resource management; Telecommunication traffic; Traffic control; Wireless networks
【Paper Link】 【Pages】:2240-2248
【Authors】: Jin Wang ; Jianping Wang ; Kejie Lu ; Bin Xiao ; Naijie Gu
【Abstract】: Linear network coding is a promising technology that can maximize the throughput capacity of a communication network. Despite this salient feature, many challenges remain to be addressed, and security is clearly one of the most important. In this paper, we address the design of secure linear network coding. Specifically, we investigate network coding designs that both satisfy the weakly secure requirements and maximize the transmission data rate of multiple unicast streams between the same source-destination pair, a problem that has not been addressed in the literature. In our study, we first prove that the secure unicast routing problem is equivalent to a constrained link-disjoint path problem. We then develop an efficient algorithm that finds the optimal unicast topology in polynomial time. Based on this topology, we design a deterministic linear network code that is weakly secure and can be constructed at the source node. Finally, we investigate the potential of random linear codes for weakly secure unicast and prove a lower bound on the probability that a random linear code is weakly secure.
【Keywords】: linear codes; network coding; random codes; telecommunication network routing; telecommunication security; constrained link-disjoint path problem; deterministic linear network coding; random linear code; secure unicast routing; source node; throughput capacity; transmission data rate; unicast streams; Communication networks; Communications Society; Computer science; Data security; Linear code; Network coding; Network topology; Routing; Throughput; Unicast
【Paper Link】 【Pages】:2249-2257
【Authors】: Peng Zhang ; Yixin Jiang ; Chuang Lin ; Yanfei Fan ; Xuemin Shen
【Abstract】: Though it provides intrinsic secrecy, network coding is still vulnerable to eavesdropping attacks, by which an adversary may compromise the confidentiality of message content. Existing studies mainly deal with eavesdroppers that can intercept a limited number of packets. However, real scenarios often involve more capable adversaries, e.g., global eavesdroppers, which can defeat these techniques. In this paper, we propose P-Coding, a novel security scheme against eavesdropping attacks in network coding. With lightweight permutation encryption performed on each message and its coding vector, P-Coding can efficiently thwart global eavesdroppers in a transparent way. Moreover, P-Coding features scalability and robustness, which enable it to be integrated into practical network-coded systems. Security analysis and simulation results demonstrate the efficacy and efficiency of the P-Coding scheme.
【Keywords】: cryptography; encoding; P-coding; eavesdropping attacks; intrinsic secrecy; robustness; scalability; secure network coding; Communications Society; Computational modeling; Computer science; Cryptography; Information security; Network coding; Peer to peer computing; Robustness; Scalability; Throughput
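The permutation-encryption idea in this abstract can be sketched with a keyed shuffle of symbol positions. The key point that makes it compatible with network coding is that a fixed permutation commutes with linear mixing: XOR-ing two permuted packets equals permuting their XOR, so intermediate nodes can keep coding without the key. The key schedule below (seeding a PRNG) is an illustrative assumption and not P-Coding's actual construction.

```python
import random

def keyed_permutation(key: int, n: int) -> list:
    """Derive a shared pseudorandom permutation of the n symbol positions.
    (Sketch only; P-Coding's real key schedule and PRF differ.)"""
    rng = random.Random(key)
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def permute(symbols: list, perm: list) -> list:
    """Encrypt by reordering: output position i takes input position perm[i]."""
    return [symbols[p] for p in perm]

def unpermute(symbols: list, perm: list) -> list:
    """Decrypt by inverting the reordering."""
    out = [0] * len(symbols)
    for i, p in enumerate(perm):
        out[p] = symbols[i]
    return out
```

Because only positions move, the cipher costs a single pass per packet, which is where the "lightweight" claim comes from; security rests on the eavesdropper not knowing the permutation.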
【Paper Link】 【Pages】:2258-2266
【Authors】: Yaping Li ; Hongyi Yao ; Minghua Chen ; Sidharth Jaggi ; Alon Rosen
【Abstract】: By allowing routers to randomly mix the information content in packets before forwarding them, network coding can maximize network throughput in a distributed manner with low complexity. However, such mixing also renders the transmission vulnerable to pollution attacks, where a malicious node injects corrupted packets into the information flow. In a worst case scenario, a single corrupted packet can end up corrupting all the information reaching a destination. In this paper, we propose RIPPLE, a symmetric key based in-network scheme for network coding authentication. RIPPLE allows a node to efficiently detect corrupted packets and encode only the authenticated ones. Despite using symmetric key based homomorphic Message Authentication Code (MAC) algorithms, RIPPLE achieves asymmetry by delayed disclosure of the MAC keys. Our work is the first symmetric key based solution to allow arbitrary collusion among adversaries. It is also the first to consider tag pollution attacks, where a single corrupted MAC tag can cause numerous packets to fail authentication farther down the stream, effectively emulating a successful pollution attack.
【Keywords】: message authentication; network coding; telecommunication network routing; telecommunication security; RIPPLE authentication; corrupted MAC tag; corrupted packet detection; homomorphic message authentication code; information content; information flow; malicious node; network coding authentication; network throughput; network transmission; packet forwarding; pollution attack; router; single corrupted packet; symmetric key based in-network scheme; Computer interfaces; Computer networks; Delay; Message authentication; Network coding; Peer to peer computing; Public key; Public key cryptography; Throughput; Water pollution
【Paper Link】 【Pages】:2267-2275
【Authors】: Guanfeng Liang ; Rachit Agarwal ; Nitin H. Vaidya
【Abstract】: We consider the problem of misbehavior detection in wireless networks. A commonly adopted approach is to exploit the broadcast nature of the wireless medium, where nodes monitor their downstream neighbors locally using overheard messages. We call such nodes the Watchdogs. We propose a lightweight misbehavior detection scheme which integrates the idea of watchdogs and error detection coding. We show that even if the watchdog can only observe a fraction of packets, by choosing the error detection code properly, an attacker can be detected with high probability while achieving throughput arbitrarily close to optimal. Such properties reduce the incentive for the attacker to attack. We then consider the problem of locating the misbehaving node and propose a simple protocol, which locates the misbehaving node with high probability. The protocol requires exactly two watchdogs per unreliable relay node.
【Keywords】: error detection codes; network coding; probability; protocols; radio networks; error detection coding; misbehavior detection; overheard messages; probability; protocol; watchdogs; wireless networks
【Paper Link】 【Pages】:2276-2284
【Authors】: Roman Chertov ; Daniel M. Havey ; Kevin C. Almeroth
【Abstract】: Satellite systems are ideal for distributing the same content to a large number of users, as well as providing broadband connectivity in remote areas or backup in case of terrestrial network failures. Unlike terrestrial networks, satellite networks face a unique set of challenges, such as signal fading and interference from multiple transmitters combined with long propagation delays. A well known challenge is that these unique characteristics can have an adverse impact on various protocols, making it necessary to study protocol behavior in satellite networks. The challenge of our research, and the focus of this paper, is to develop an architecture for a high-fidelity and scalable emulation testbed tailored for mobile satellite communications research. The testbed is designed to provide multi-beam, multi-satellite, TDMA, and mobility functionality. Our validation studies demonstrate that the testbed is capable of achieving delay, loss, and jitter that can be associated with a mobile satellite link.
【Keywords】: digital simulation; mobile satellite communication; telecommunication computing; time division multiple access; MSET; TDMA; broadband connectivity; high fidelity emulation testbed architecture; mobile satellite communication; mobile satellite link; mobility functionality; mobility satellite emulation testbed; multibeam functionality; multisatellite functionality; scalable emulation testbed; Emulation; Fading; Interference; Jitter; Propagation delay; Protocols; Satellite communication; Testing; Time division multiple access; Transmitters
【Paper Link】 【Pages】:2285-2293
【Authors】: Xi Deng ; Yuanyuan Yang ; Sangjin Hong
【Abstract】: In this paper, we present the design and implementation of a general, flexible hardware-aware network platform that takes hardware processing behavior into consideration to accurately evaluate network performance. The platform adopts a network-hardware co-simulation in which the NS-2 network simulator supervises the network-wide traffic flow and the SystemC hardware simulator simulates the underlying hardware processing in network nodes. In addition, as a case study, we implemented wireless all-to-all broadcasting with network coding on the platform to examine hardware processing behavior during algorithm execution and to evaluate the overall performance of the algorithm. Our experimental results demonstrate that hardware processing has a significant impact on algorithm performance and hence should be taken into consideration in algorithm design. We expect that this hardware-aware platform will become a very useful tool for more accurate network simulations and optimal designs of processing-intensive applications.
【Keywords】: network coding; NS-2 network simulator; SystemC hardware simulator; general flexible hardware-aware network platform; hardware processing behavior; hardware-aware network experiments; hardware-aware platform; network nodes; network performance; network-hardware co-simulation; network-wide traffic flow; wireless all-to-all broadcasting; wireless network coding; Algorithm design and analysis; Computational modeling; Computer networks; Computer simulation; Delay; Emulation; Hardware; Telecommunication traffic; Traffic control; Wireless networks
【Paper Link】 【Pages】:2294-2302
【Authors】: Frédéric Morlot ; Salah-Eddine Elayoubi ; François Baccelli
【Abstract】: In this paper, we analyze phenomena related to user clumps and hot spots occurring in mobile networks during large urban mass gatherings. Our analysis is based on observations made on mobility traces of GSM users in several large cities. Classical mobility models, such as the random waypoint, do not allow one to represent the observed dynamics of clumps in a proper manner. This motivates the introduction and the mathematical analysis of a new interaction-based mobility model, which is the main contribution of the present paper. This model is shown to describe the dynamics of clumps and, in particular, to predict key phenomena such as the building of hot spots and the scattering between hot spots, which play a key role in the engineering of wireless networks. We show how to obtain the main parameters of this model from simple communication activity measurements, and we illustrate this calibration process on real cases.
【Keywords】: cellular radio; mathematical analysis; GSM users; classical mobility models; dynamic hot spot analysis; interaction-based mobility model; mathematical analysis; mobile networks; random waypoint; wireless networks; Analytical models; Cities and towns; Communications Society; Filling; GSM; Mathematical analysis; Mathematical model; Predictive models; Quality of service; Scattering
【Paper Link】 【Pages】:2303-2311
【Authors】: Roberto Di Pietro ; Gabriele Oligeri ; Claudio Soriente ; Gene Tsudik
【Abstract】: Wireless Sensor Networks (WSNs) are susceptible to a wide range of attacks due to their distributed nature, limited sensor resources and lack of tamper-resistance. Once a sensor is corrupted, the adversary learns all secrets and (even if the sensor is later released) it is very difficult for the sensor to regain security, i.e., to obtain intrusion-resilience. Existing solutions rely on the presence of an on-line trusted third party, such as a sink, or on the availability of secure hardware on sensors. Neither assumption is realistic in large-scale Unattended WSNs (UWSNs), characterized by long periods of disconnected operation and periodic visits by the sink. In such settings, a mobile adversary can gradually corrupt the entire network during the intervals between sink visits. As shown in some recent work, intrusion-resilience in UWSNs can be attained (to a degree) via cooperative self-healing techniques. In this paper, we focus on intrusion-resilience in Mobile Unattended Wireless Sensor Networks (μUWSNs) where sensors move according to some mobility model. We argue that sensor mobility motivates a specific type of adversary and defending against it requires new security techniques. Concretely, we propose a cooperative protocol that - by leveraging sensor mobility - allows compromised sensors to recover secure state after compromise. This is obtained with very low overhead and in a fully distributed fashion. We provide a thorough analysis of the proposed protocol and support it by extensive simulation results.
【Keywords】: mobile radio; protocols; security of data; telecommunication computing; telecommunication security; wireless sensor networks; cooperative protocol; cooperative self-healing techniques; intrusion resilience; mobile unattended WSN; online trusted third party; security techniques; sensor mobility model; wireless sensor networks; Communications Society; Computer science; Cryptographic protocols; Hardware; Mobile computing; Security; Sensor phenomena and characterization; Tin; USA Councils; Wireless sensor networks
【Paper Link】 【Pages】:2312-2320
【Authors】: Zhisu Zhu ; Anthony Man-Cho So ; Yinyu Ye
【Abstract】: A fundamental problem in wireless ad-hoc and sensor networks is that of determining the positions of nodes. Often, such a problem is complicated by the presence of nodes whose positions cannot be uniquely determined. Most existing work uses the notion of global rigidity from rigidity theory to address the non-uniqueness issue. However, such a notion is not entirely satisfactory, as it has been shown that even if a network localization instance is known to be globally rigid, the problem of determining the node positions is still intractable in general. In this paper, we propose to use the notion of universal rigidity to bridge this disconnect. Although the notion of universal rigidity is more restrictive than that of global rigidity, it captures a large class of networks and is much more relevant to the efficient solvability of the network localization problem. Specifically, we show that both the problem of deciding whether a given network localization instance is universally rigid and the problem of determining the node positions of a universally rigid instance can be solved efficiently using semidefinite programming (SDP). Then, we give various constructions of universally rigid instances. In particular, we show that trilateration graphs are generically universally rigid, thus demonstrating not only the richness of the class of universally rigid instances, but also the fact that trilateration graphs possess much stronger geometric properties than previously known. Finally, we apply our results to design a novel edge sparsification heuristic that can reduce the size of the input network while provably preserving its original localization properties. One of the applications of such a heuristic is to speed up existing convex optimization-based localization algorithms. Simulation results show that our speedup approach compares very favorably with existing ones, both in terms of accuracy and computation time.
【Keywords】: ad hoc networks; convex programming; sensor placement; wireless sensor networks; convex optimization; edge sparsification heuristic; semidefinite programming; trilateration graph; universal rigidity; wireless ad hoc network; wireless network localization; wireless sensor network; Bridges; Communications Society; Computational modeling; Distance measurement; Global Positioning System; Optimized production technology; Peer to peer computing; Target tracking; Wireless networks; Wireless sensor networks
【Paper Link】 【Pages】:2321-2329
【Authors】: Ionut Constandache ; Romit Roy Choudhury ; Injong Rhee
【Abstract】: This paper identifies the possibility of using electronic compasses and accelerometers in mobile phones, as a simple and scalable method of localization without war-driving. The idea is not fundamentally different from ship or air navigation systems, known for centuries. Nonetheless, directly applying the idea to human-scale environments is non-trivial. Noisy phone sensors and complicated human movements present practical research challenges. We cope with these challenges by recording a person's walking patterns, and matching it against possible path signatures generated from a local electronic map. Electronic maps enable greater coverage, while eliminating the reliance on WiFi infrastructure and expensive war-driving. Measurements on Nokia phones and evaluation with real users confirm the anticipated benefits. Results show a location accuracy of less than 11m in regions where today's localization services are unsatisfactory or unavailable.
【Keywords】: accelerometers; cartography; mobile handsets; mobility management (mobile radio); pattern matching; Nokia phones; accelerometers; electronic compasses; human-scale environments; local electronic map; mobile phone localization; noisy phone sensors; person walking pattern matching; person walking pattern recording; Accelerometers; Batteries; Communications Society; Energy efficiency; Explosions; GSM; Global Positioning System; Legged locomotion; Mobile handsets; USA Councils
【Paper Link】 【Pages】:2330-2338
【Authors】: Deokwoo Jung ; Thiago Teixeira ; Andreas Savvides
【Abstract】: This work describes a new approach for localizing people by cooperative sensor fusion of lightweight camera and wearable accelerometer measurements. We present an algorithm to identify people moving around as they are detected by cameras deployed in the infrastructure. The algorithm uses a correlation metric to develop an ID matching algorithm that can associate people in the scene with the global IDs emitted from wireless accelerometer sensor nodes worn on their belts. We first conduct a set of preliminary experiments to verify that the quantities of interest are easily measurable with off-the-shelf components. We then validate our metric and the performance of the proposed ID matching algorithm using simulations on testbed data that also include a crowded scenario.
【Keywords】: accelerometers; correlation methods; image matching; image sensors; sensor fusion; ID matching algorithm; cooperative localization; cooperative sensor fusion; correlation metric; lightweight camera; wearable sensors; wireless accelerometer sensor node; Accelerometers; Belts; Cameras; Communications Society; Electric variables measurement; Intrusion detection; Sensor fusion; Smart phones; Wearable sensors; Wireless sensor networks
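The correlation-metric matching this abstract describes can be illustrated with a toy version: treat each camera track and each accelerometer beacon as a motion-energy time series, and assign each track the ID whose series correlates best with it. The greedy per-track assignment and the data layout below are illustrative assumptions, not the authors' exact algorithm.

```python
def pearson(x, y):
    """Plain Pearson correlation between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def match_ids(tracks: dict, beacons: dict) -> dict:
    """Assign each camera track the accelerometer ID whose motion signal
    correlates best with it (greedy sketch of correlation-metric matching)."""
    return {t: max(beacons, key=lambda b: pearson(sig, beacons[b]))
            for t, sig in tracks.items()}
```

Pearson correlation is scale-invariant, so a camera's pixel-motion magnitudes and an accelerometer's raw readings can be compared directly even though their units differ.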
【Paper Link】 【Pages】:2339-2347
【Authors】: Zheng Yang ; Yunhao Liu
【Abstract】: Location awareness is highly critical for wireless ad-hoc and sensor networks. Many efforts have been made to solve the problem of whether or not a network can be localized. Nevertheless, based on data collected from a working sensor network, we observe that a network is NOT always entirely localizable. Theoretical analyses also suggest that, in most cases, it is unlikely that all nodes in a network are localizable, although a (large) portion of the nodes can be uniquely located. Existing studies merely examine whether or not a network is localizable as a whole, yet two fundamental questions remain unaddressed: First, given a network configuration, is a specific node localizable? Second, how many nodes in a network can be located, and which ones are they? In this study, we analyze the limitations of previous works and propose a novel concept of node localizability. By deriving the necessary and sufficient conditions for node localizability, for the first time, it is possible to analyze how many nodes one can expect to locate in sparsely or moderately connected networks. To validate this design, we implement our solution on a real-world system, and the experimental results show that node localizability provides useful guidelines for network deployment and other location-based services.
【Keywords】: ad hoc networks; mobile computing; wireless sensor networks; location awareness; location based services; necessary condition; network configuration; network deployment; node localizability; sufficient condition; wireless ad hoc networks; wireless sensor networks; Ad hoc networks; Communications Society; Guidelines; Mobile radio mobility management; Network topology; Peer to peer computing; Sea measurements; Sufficient conditions; Testing; Wireless sensor networks
【Paper Link】 【Pages】:2348-2356
【Authors】: Jia Liu ; Yi Shi ; Y. Thomas Hou
【Abstract】: MIMO-based communications have great potential to improve network capacity for multi-hop wireless networks. Although there has been significant progress on MIMO at the physical layer or for single-hop communication, advances in the theory of MIMO for multi-hop wireless networks remain limited. This stagnation is mainly due to the lack of an accurate and, more importantly, analytically tractable model that can be used by networking researchers. In this paper, we propose such a model to enable the networking community to carry out cross-layer research for multi-hop MIMO networks. In particular, at the physical layer, we develop a simple model for MIMO channel capacity computation that captures the essence of spatial multiplexing and transmit power limits without involving complex matrix operations or the water-filling algorithm. We show that the approximation gap in this model is negligible. At the link layer, we devise a space-time scheduling scheme called OBIC that significantly advances the existing zero-forcing beamforming (ZFBF) to handle interference in a multi-hop network setting. The proposed OBIC scheme employs simple algebraic computation on matrix dimensions to simplify ZFBF in a multi-hop network. As a result, we can characterize link layer scheduling behavior without entangling with beamforming details. Finally, we apply both the new physical and link layer models in cross-layer performance optimization for a multi-hop MIMO network.
【Keywords】: MIMO communication; channel capacity; space division multiplexing; wireless mesh networks; channel capacity; cross-layer model; multi-hop MIMO networks; multi-hop wireless networks; network capacity; space-time scheduling scheme; spatial multiplexing; transmit power limit; zero-forcing beamforming; Array signal processing; Channel capacity; Computer networks; Interference; MIMO; Physical layer; Physics computing; Processor scheduling; Spread spectrum communication; Wireless networks
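The abstract above claims that avoiding the water-filling algorithm leaves only a negligible capacity gap. As an illustration only (hypothetical eigen-channel gains; not the paper's actual model), the following sketch compares a standard water-filling power allocation with a plain equal-power split over parallel subchannels:

```python
import math

def waterfill(gains, P):
    """Water-filling power allocation over parallel subchannels.
    gains[i] is the gain-to-noise ratio of subchannel i; returns capacity
    in bit/s/Hz for total power budget P."""
    order = sorted(gains, reverse=True)
    n = len(order)
    for k in range(n, 0, -1):
        # Water level mu for the k strongest channels: sum(mu - 1/g) = P.
        mu = (P + sum(1.0 / g for g in order[:k])) / k
        if mu >= 1.0 / order[k - 1]:  # weakest active channel gets power >= 0
            powers = [max(mu - 1.0 / g, 0.0) for g in order]
            return sum(math.log2(1 + g * p) for g, p in zip(order, powers))
    return 0.0

def equal_power(gains, P):
    """Capacity when the power budget is simply split equally."""
    p = P / len(gains)
    return sum(math.log2(1 + g * p) for g in gains)

gains = [2.0, 1.0, 0.5]   # hypothetical eigen-channel gain-to-noise ratios
P = 10.0
c_wf = waterfill(gains, P)
c_eq = equal_power(gains, P)
```

For these gains the two allocations differ by well under 0.1 bit/s/Hz, consistent with the "negligible gap" observation at moderate SNR.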
【Paper Link】 【Pages】:2357-2365
【Authors】: Ece Gelal ; Konstantinos Pelechrinis ; Tae-Suk Kim ; Ioannis Broustis ; Srikanth V. Krishnamurthy ; Bhaskar Rao
【Abstract】: In Multi-User MIMO networks, receivers decode multiple concurrent signals using Successive Interference Cancellation (SIC). With SIC, a weak target signal can be deciphered in the presence of stronger interfering signals. However, this is only feasible if each strong interfering signal satisfies a signal-to-interference-plus-noise ratio (SINR) requirement. This necessitates the appropriate selection of a subset of links that can be concurrently active in each receiver's neighborhood; in other words, a sub-topology consisting of links that can be simultaneously active in the network is to be formed. If the selected sub-topologies are of small size, the delay between the transmission opportunities on a link increases. Thus, care should be taken to form a limited number of sub-topologies. We find that the problem of constructing the minimum number of sub-topologies such that SIC decoding is successful with a desired probability threshold is NP-hard. Given this, we propose MUSIC, a framework that greedily forms and activates sub-topologies, in a way that favors successful SIC decoding with high probability. MUSIC also ensures that the number of selected sub-topologies is kept small. We provide both a centralized and a distributed version of our framework. We prove that our centralized version approximates the optimal solution for the considered problem. We also perform extensive simulations to demonstrate that (i) MUSIC forms a small number of sub-topologies that enable efficient SIC operations; the number of sub-topologies formed is at most 17% larger than the optimum number of topologies, discovered through exhaustive search (in small networks); (ii) MUSIC outperforms approaches that simply consider the number of antennas as a measure for determining the links that can be simultaneously active. Specifically, MUSIC provides throughput improvements of up to 4 times compared to such an approach, in various topological settings. The improvements are directly attributable to a significantly higher probability of correct SIC-based decoding with MUSIC.
【Keywords】: MIMO communication; interference suppression; probability; radio receivers; telecommunication network topology; MUSIC; NP-hard problem; SIC decoding; effective interference cancellation; multiuser MIMO networks; probability threshold; radio receiver; signal-to-noise-plus-interference ratio; sub-topologies; topology control; Antenna measurements; Decoding; Delay; Interference cancellation; MIMO; Multiple signal classification; Network topology; Performance evaluation; Signal to noise ratio; Silicon carbide
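For intuition on the SINR condition that drives sub-topology formation, here is a toy feasibility check (an illustrative sketch, not the MUSIC algorithm): decode the strongest remaining signal first, cancel it, and repeat, requiring each stage to clear a threshold.

```python
def sic_feasible(powers, noise, beta):
    """Toy SIC check: the strongest remaining signal must meet SINR
    threshold beta against noise plus all weaker, not-yet-cancelled
    signals; it is then subtracted before the next decoding stage."""
    remaining = sorted(powers, reverse=True)
    while remaining:
        strongest = remaining.pop(0)
        if strongest / (noise + sum(remaining)) < beta:
            return False
    return True

ok = sic_feasible([9.0, 2.0], noise=1.0, beta=1.5)   # 9/(1+2) and 2/1 both pass
bad = sic_feasible([4.0, 3.0], noise=1.0, beta=1.5)  # 4/(1+3) = 1.0 fails
```

The failing example shows why link selection matters: two comparable-power signals cannot both be decoded at one receiver, so their links belong in different sub-topologies.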
【Paper Link】 【Pages】:2366-2374
【Authors】: Wei Dong ; Xue Liu ; Chun Chen ; Yuan He ; Gong Chen ; Yunhao Liu ; Jiajun Bu
【Abstract】: Previous packet length optimizations for sensor networks often employ a fixed optimal length scheme; in this study, we instead present DPLC, a Dynamic Packet Length Control scheme. To make DPLC more efficient in terms of channel utilization, we incorporate a lightweight and accurate link estimation method that captures both physical channel conditions and interference. We further provide two easy-to-use services, i.e., small message aggregation and large message fragmentation, to facilitate upper-layer application programming. The implementation of DPLC based on TinyOS 2.1 is lightweight with respect to computation, memory, and header overhead. Our experiments using a real indoor testbed running CTP show that DPLC results in a 13% reduction in transmission overhead and a 41.8% reduction in energy consumption compared with the original protocol, and a 21% reduction in transmission overhead and a 15.1% reduction in energy consumption compared with simple aggregation schemes.
【Keywords】: channel estimation; interference (signal); sensor fusion; wireless sensor networks; DPLC; TinyOS 2.1; channel utilization; dynamic packet length control; interference; large message fragmentation; link estimation method; message aggregation; packet length optimization; physical channel condition; upper layer application programming; wireless sensor networks; Communication system control; Communications Society; Energy consumption; Forward error correction; Helium; Interference; Protocols; Sensor phenomena and characterization; Sensor systems; Wireless sensor networks
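The trade-off a dynamic length controller navigates, longer frames amortize header overhead but are likelier to be corrupted, can be sketched with a textbook efficiency model (hypothetical header size and bit error rates; this is not DPLC's actual estimator):

```python
def efficiency(L, header, ber, bits_per_byte=8):
    """Fraction of channel time delivering useful payload when a frame of
    L payload bytes plus a fixed header is lost on any single bit error."""
    total_bits = (L + header) * bits_per_byte
    p_success = (1.0 - ber) ** total_bits
    return (L / (L + header)) * p_success

def best_length(header, ber, max_len=128):
    """Payload length (in bytes) maximizing the efficiency model."""
    return max(range(1, max_len + 1), key=lambda L: efficiency(L, header, ber))

good_channel = best_length(header=8, ber=1e-4)   # clean link: long frames win
bad_channel = best_length(header=8, ber=1e-3)    # noisy link: shorter frames
```

With the noisier channel the best payload drops from roughly 96 to 28 bytes, which is exactly the kind of adaptation a dynamic packet length controller performs as link quality changes.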
【Paper Link】 【Pages】:2375-2383
【Authors】: Ramin Khalili ; Dennis Goeckel ; Donald F. Towsley ; Ananthram Swami
【Abstract】: Neighbor discovery is essential for the self-organization of a wireless network, where almost all routing and medium access protocols need knowledge of one-hop neighbors. In this paper we study the problem of neighbor discovery in a static and synchronous network, where time is divided into slots, each of duration equal to the time required to transmit a hello message and, potentially, some sort of feedback message. Our main contributions lie in detailing the physical layer mechanism by which nodes in receive mode detect the channel status, describing algorithms at higher layers that exploit such knowledge, and characterizing the significant gain obtained. In particular, we describe one possible physical layer architecture that allows receivers to detect collisions, and then introduce a feedback mechanism that makes the collision information available to the transmitters. This allows nodes to stop transmitting packets as soon as they learn about the successful reception of their discovery messages by the other nodes in the network. Hence, the number of nodes that need to transmit packets decreases over time. These nodes transmit with a probability that is inversely proportional to the number of active nodes in their neighborhood, which is estimated using the collision information available at the nodes. We show through analysis and simulations that our algorithm allows nodes to discover their neighbors in a significantly smaller amount of time compared to the case where reception status feedback is not available to the transmitters.
【Keywords】: access protocols; feedback; radio networks; radio transmitters; feedback mechanism; medium access protocols; neighbor discovery; physical layer mechanism; reception status feedback; routing protocols; transmitters; wireless network; Access protocols; Communications Society; Feedback; Peer to peer computing; Physical layer; Routing protocols; Transmitters; US Government; Wireless networks; Wireless sensor networks
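The transmission rule described above is simple to simulate: each still-undiscovered node transmits with probability inversely proportional to the number of active nodes, and feedback lets a lone (successful) transmitter go silent. A toy slotted simulation follows; for simplicity it uses the exact active count, whereas the paper estimates it from collision feedback:

```python
import random

def discover(n, rng, max_slots=100000):
    """Slotted neighbor discovery with success feedback: each active node
    transmits with probability 1/active; a slot with exactly one
    transmitter is a successful discovery, and that node then goes
    silent.  Returns the number of slots until all n nodes are heard."""
    active = n
    slots = 0
    while active > 0 and slots < max_slots:
        slots += 1
        tx = sum(1 for _ in range(active) if rng.random() < 1.0 / active)
        if tx == 1:
            active -= 1   # feedback tells the lone transmitter it succeeded
    return slots

slots = discover(50, random.Random(7))
```

With 50 nodes the run completes in a few hundred slots (roughly e slots per discovery); without the go-silent feedback, already-discovered nodes would keep contending and discovery would take considerably longer.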
【Paper Link】 【Pages】:2384-2392
【Authors】: Amir Hamed Mohsenian Rad ; Vincent W. S. Wong ; Robert Schober
【Abstract】: Random access protocols, such as Aloha, are commonly modeled in wireless ad-hoc networks by using the protocol model. However, it is well known that the protocol model is not accurate and, in particular, it cannot account for aggregate interference from multiple interference sources. In this paper, we use the more accurate physical model, which is based on the signal-to-interference-plus-noise ratio (SINR), to study optimization-based design in wireless random access systems, where the optimization variables are the transmission probabilities of the users. We focus on throughput maximization, fair resource allocation, and network utility maximization, and show that they entail non-convex optimization problems if the physical model is adopted. We propose two schemes to solve these problems. The first design is centralized and leads to the global optimal solution using a sum-of-squares technique. However, due to its complexity, this approach is only applicable to small-scale networks. The second design is distributed and leads to a close-to-optimal solution using the coordinate ascent method. This approach is applicable to medium-size and large-scale networks. Based on various simulations, we show that it is highly preferable to use the physical model for optimization-based random access design. In this regard, even a sub-optimal design based on the physical model can achieve significantly better performance than an optimal design based on the inaccurate protocol model.
【Keywords】: access protocols; ad hoc networks; optimisation; radio access networks; radiofrequency interference; resource allocation; Aloha; coordinate ascent method; fair resource allocation; large-scale networks; medium-size networks; multiple interference sources; network utility maximization; nonconvex optimization problems; optimal SINR-based random access protocol model; optimization-based random access design; physical model; signal-to-interference-plus-noise-ratio; sum-of-squares technique; throughput maximization; wireless ad-hoc networks; wireless random access systems; Access protocols; Ad hoc networks; Aggregates; Design optimization; Interference; Resource management; Signal design; Signal to noise ratio; Throughput; Wireless application protocol
【Paper Link】 【Pages】:2393-2401
【Authors】: Aman Jain ; Sanjeev R. Kulkarni ; Sergio Verdú
【Abstract】: We study the minimum energy per bit required for communicating a message to all the destination nodes in a wireless network. The physical layer is modeled as an additive white Gaussian noise channel affected by circularly symmetric fading. The fading coefficients are known at neither the transmitters nor the receivers. We provide an information-theoretic lower bound on the energy requirement of multicasting in arbitrary wireless networks as the solution of a linear program. We study the broadcast performance of decode-and-forward operating in the non-coherent wideband scenario, and compare it with the lower bounds. For arbitrary networks with k nodes, the energy requirement of decode-and-forward is within a factor of (k-1) of the lower bound regardless of the magnitude of channel gains. We also show that decode-and-forward achieves the minimum energy per bit in networks that can be represented as directed acyclic graphs, thus establishing the exact minimum energy per bit for this class of networks. We also study regular networks where the area is divided into cells, each cell containing at least k and at most k̄ nodes placed arbitrarily within the cell. A path loss model (with path loss exponent α > 2) dictates the channel gains between the nodes. It is shown that the ratio between the upper bound using decode-and-forward based flooding and the lower bound is at most a constant times (k̄^(α+2)/k).
【Keywords】: AWGN channels; broadband networks; channel coding; decoding; directed graphs; fading channels; linear programming; multicast communication; additive white Gaussian noise channel; arbitrary wireless networks; circularly symmetric fading; decode-and-forward; destination nodes; directed acyclic graphs; information-theoretic lower bound; linear program; minimum energy per bit; path loss model; wideband wireless multicasting; Additive white noise; Broadcasting; Decoding; Fading; Floods; Physical layer; Transmitters; Upper bound; Wideband; Wireless networks
【Paper Link】 【Pages】:2402-2410
【Authors】: Niv Buchbinder ; Liane Lewin-Eytan ; Ishai Menache ; Joseph Naor ; Ariel Orda
【Abstract】: We consider the power control problem in a time-slotted wireless channel, shared by a finite number of mobiles that transmit to a common base station. The channel between each mobile and the base station is time varying, and the system objective is to maximize the overall data throughput. It is assumed that each transmitter has a limited power budget, to be sequentially divided during the lifetime of the battery. We deviate from the classic work in this area, by considering a realistic scenario where the channel quality of each mobile changes arbitrarily from one transmission to the other. Assuming first that each mobile is aware of the channel quality of all other mobiles, we propose an online power-allocation algorithm, and prove its optimality under mild assumptions. We then indicate how to implement the algorithm when only local state information is available, requiring minimal communication overhead. Notably, the competitive ratio of our algorithm (nearly) matches the one we previously obtained for the (much simpler) single-transmitter case [BLMNO09], albeit requiring significantly different algorithmic solutions.
【Keywords】: telecommunication channels; telecommunication power supplies; arbitrary varying channels; base station; channel quality; data throughput; dynamic power allocation; multi-user case; online power-allocation algorithm; power control problem; time-slotted wireless channel; Base stations; Batteries; Communications Society; Computer science; Laboratories; Power control; Power system modeling; Throughput; Time varying systems; Transmitters
【Paper Link】 【Pages】:2411-2416
【Authors】: A. Karim Abu-Affash ; Rom Aschner ; Paz Carmi ; Matthew J. Katz
【Abstract】: A power assignment is an assignment of transmission power to each of the nodes of a wireless network, so that the induced communication graph has some desired properties. The cost of a power assignment is the sum of the powers. The energy of a transmission path from node u to node v is the sum of the squares of the distances between adjacent nodes along the path. For a constant t > 1, an energy t-spanner is a graph G', such that for any two nodes u and v, there exists a path from u to v in G' whose energy is at most t times the energy of a minimum-energy path from u to v in the complete Euclidean graph. In this paper, we study the problem of finding a power assignment such that (i) its induced communication graph is a 'good' energy spanner, and (ii) its cost is 'low'. We show that for any constant t > 1, one can find a power assignment such that its induced communication graph is an energy t-spanner, and its cost is bounded by some constant times the cost of an optimal power assignment (where the sole requirement is strong connectivity of the induced communication graph). This is a very significant improvement over the best current result due to Shpungin and Segal, presented in last year's conference.
【Keywords】: ad hoc networks; graph theory; Euclidean graph; communication graph; minimum power energy spanners; minimum-energy path; optimal power assignment; wireless ad hoc networks; Attenuation; Communication standards; Communications Society; Computer science; Cost function; Euclidean distance; Mobile ad hoc networks; Peer to peer computing; Read only memory; Wireless networks
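The definitions in this abstract are easy to make concrete: the energy of a hop is the squared Euclidean distance, and the stretch of a candidate communication graph is checked against minimum-energy paths in the complete graph. A small sketch (illustrative points and edge set, not the paper's construction):

```python
import heapq
import math

def energy_cost(p, q):
    """Energy of one hop: squared Euclidean distance between p and q."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def min_energy(points, adj, src):
    """Dijkstra over the given adjacency lists with squared-distance weights."""
    dist = {i: math.inf for i in range(len(points))}
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v in adj[u]:
            nd = d + energy_cost(points[u], points[v])
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def energy_stretch(points, adj):
    """Worst-case ratio of subgraph min-energy to complete-graph min-energy."""
    n = len(points)
    complete = {i: [j for j in range(n) if j != i] for i in range(n)}
    worst = 1.0
    for s in range(n):
        d_sub = min_energy(points, adj, s)
        d_full = min_energy(points, complete, s)
        for t in range(n):
            if t != s:
                worst = max(worst, d_sub[t] / d_full[t])
    return worst

pts = [(0, 0), (1, 0), (2, 0), (1, 1)]
tree = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}   # a spanning tree on pts
t = energy_stretch(pts, tree)                    # 1.0 for this layout
```

Here the sparse tree already achieves stretch 1 because short hops dominate under squared-distance energy; in general a power assignment must be checked against all pairs, exactly as `energy_stretch` does.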
【Paper Link】 【Pages】:2417-2425
【Authors】: Holger Boche ; Siddharth Naik ; Tansu Alpcan
【Abstract】: This paper investigates the properties of social choice functions that represent resource allocation strategies in interference coupled wireless systems. The allocated resources can be physical layer parameters such as power vectors or antenna weights. Strategy proofness and efficiency of social choice functions are used to capture the respective properties of resource allocation strategy outcomes being non-manipulable and Pareto optimal. In addition, this paper introduces and investigates the concepts of (strict) intuitive fairness and non-participation in interference coupled systems. The analysis indicates certain inherent limitations when designing strategy-proof and efficient resource allocation strategies if intuitive fairness and non-participation are imposed. These restrictions are investigated in an analytical social choice function framework for interference coupled wireless systems. Among other results, it is shown that a strategy-proof and efficient resource allocation strategy for interference coupled wireless systems cannot simultaneously satisfy continuity and the frequently encountered property of non-participation.
【Keywords】: Pareto analysis; radiocommunication; resource allocation; Pareto optimal resource allocation strategies; antenna weights; interference coupled wireless systems; intuitive fairness; nonmanipulable resource allocation strategies; nonparticipation; power vectors; strategy proof; Algorithm design and analysis; Communications Society; Control systems; Design engineering; Game theory; Interference; Laboratories; Physical layer; Power generation economics; Resource management
【Paper Link】 【Pages】:2426-2434
【Authors】: I-Hong Hou ; P. R. Kumar
【Abstract】: This paper studies the problem of utility maximization for clients with delay based QoS requirements in wireless networks. We adopt a model used in a previous work that characterizes the QoS requirements of clients by their delay constraints, channel reliabilities, and timely throughput requirements. In this work, we assume that the utility of a client is a function of the timely throughput it obtains. We treat the timely throughput for a client as a tunable parameter by the access point (AP), instead of a given value as in the previous work. We then study how the AP should assign timely throughputs to clients so that the total utility of all clients is maximized. We apply the techniques introduced in two previous papers to decompose the utility maximization problem into two simpler problems, a CLIENT problem and an ACCESS-POINT problem. We show that this decomposition actually describes a bidding game, where clients bid for the service time from the AP. We prove that although all clients behave selfishly in this game, the resulting equilibrium point of the game maximizes the total utility. In addition, we also establish an efficient scheduling policy for the AP to reach the optimal point of the ACCESS-POINT problem. We prove that the policy not only approaches the optimal point but also achieves some forms of fairness among clients. Finally, simulation results show that our proposed policy does achieve higher utility than all other compared policies.
【Keywords】: delays; quality of service; QoS; access point problem; client problem; delay constraints; throughput; utility maximization; wireless network; Communications Society; Constraint optimization; Contracts; Delay; Probes; Scheduling algorithm; Sufficient conditions; Throughput; USA Councils; Wireless networks
【Paper Link】 【Pages】:2435-2443
【Authors】: Matthew Andrews ; Antonio Fernández ; Lisa Zhang ; Wenbo Zhao
【Abstract】: We study network optimization that considers energy minimization as an objective. Studies have shown that mechanisms such as speed scaling can significantly reduce the power consumption of telecommunication networks by matching the consumption of each network element to the amount of processing required for its carried traffic. Most existing research on speed scaling focuses on a single network element in isolation. We aim for a network-wide optimization. Specifically, we study a routing problem with the objective of provisioning guaranteed speed/bandwidth for a given demand matrix while minimizing energy consumption. Optimizing the routes critically relies on the characteristic of the energy curve f(s), which describes how energy is consumed as a function of the processing speed s. If f is superadditive, we show that there is no bounded approximation in general for integral routing, i.e., when each traffic demand follows a single path. This contrasts with the well-known logarithmic approximation for subadditive functions. However, for common energy curves such as polynomials f(s) = μs^α, we are able to show a constant approximation via a simple scheme of randomized rounding. The scenario is quite different when a non-zero startup cost σ appears in the energy curve, e.g., f(s) = 0 if s = 0 and f(s) = σ + μs^α if s > 0. In this case a constant approximation is no longer feasible; in fact, for any α > 1, we show an Ω(log^α N) hardness result under a common complexity assumption, where N is the size of the network. On the positive side, we present O((σ/μ)^(1/α)) and O(K) approximations, where K is the number of demands.
【Keywords】: minimisation; telecommunication network routing; telecommunication traffic; energy minimization; integral routing; network optimization; speed scaling model; telecommunication networks; telecommunication traffic; Bandwidth; Communications Society; Computer networks; Energy consumption; Energy efficiency; Integral equations; Polynomials; Routing; Switches; Telecommunication traffic
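The curvature issue in this abstract can be seen numerically: a polynomial energy curve with exponent greater than one rewards splitting traffic across elements, while a startup cost rewards consolidation. A toy illustration with hypothetical parameters (μ = 1, α = 2, σ = 5):

```python
def energy(speed, mu=1.0, alpha=2.0, sigma=0.0):
    """Energy curve: f(0) = 0 and f(s) = sigma + mu * s**alpha for s > 0."""
    return 0.0 if speed == 0 else sigma + mu * speed ** alpha

# Pure polynomial curve (sigma = 0): splitting 2 units of traffic over
# two parallel links beats consolidating them on one link.
split = energy(1) + energy(1)                            # 1 + 1 = 2
merged = energy(2) + energy(0)                           # 4 + 0 = 4

# A startup cost of sigma = 5 flips the incentive: paying the fixed
# charge once makes consolidation cheaper than splitting.
split_s = energy(1, sigma=5.0) + energy(1, sigma=5.0)    # 6 + 6 = 12
merged_s = energy(2, sigma=5.0) + energy(0, sigma=5.0)   # 9 + 0 = 9
```

This tension between spreading (convex cost) and consolidating (fixed charge) is exactly why the startup-cost variant resists the constant approximation that randomized rounding gives for the pure polynomial case.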
【Paper Link】 【Pages】:2444-2452
【Authors】: Pan Li ; Yuguang Fang
【Abstract】: Although capacity has been extensively studied in wireless networks, most of the results are for homogeneous wireless networks where all nodes are assumed identical. In this paper, we investigate the capacity of heterogeneous wireless networks with general network settings. Specifically, we consider a dense network with n normal nodes and m = n^b (0 < b < 1) more powerful helping nodes in a rectangular area with width b(n) and length 1/b(n), where b(n) = n^w and -1/2 < w ≤ 0. We assume there are n flows in the network. All the n normal nodes are sources while only randomly chosen n^d (0 < d < 1) normal nodes are destinations. We further assume the n normal nodes are uniformly and independently distributed, while the m helping nodes are either regularly placed or uniformly and independently distributed, resulting in two different kinds of networks called Regular Heterogeneous Wireless Networks and Random Heterogeneous Wireless Networks, respectively. In this paper, we attempt to find out what a heterogeneous wireless network with general network settings can achieve by deriving a lower bound on the capacity. We also explore the conditions under which heterogeneous wireless networks can provide throughput higher than traditional homogeneous wireless networks.
【Keywords】: channel capacity; wireless channels; channel capacity; dense network; general network settings; homogeneous wireless networks; normal nodes; random heterogeneous wireless networks; regular heterogeneous wireless networks; Ad hoc networks; Base stations; Intserv networks; Laboratories; Optical fiber networks; Peer to peer computing; Routing protocols; Throughput; Wireless networks; Wireless personal area networks
【Paper Link】 【Pages】:2453-2461
【Authors】: Yanyan Zhuang ; Jianping Pan ; Lin Cai
【Abstract】: Minimizing energy consumption in wireless sensor networks has been a challenging issue, and grid-based clustering and routing schemes have attracted a lot of attention due to their simplicity and feasibility. Thus, how to determine the optimal grid size in order to minimize energy consumption and prolong network lifetime becomes an important problem during the network planning and dimensioning phase. So far, most existing work uses the average distances within a grid and between neighbor grids to calculate the average energy consumption, which we found largely underestimates the real value. In this paper, we propose, analyze and evaluate energy consumption models for wireless sensor networks based on probabilistic distance distributions. These models have been validated by numerical and simulation results, which show that they can be used to optimize grid size and minimize energy consumption accurately. We also use these models to study variable-size grids, which can further improve energy efficiency by balancing the relayed traffic in wireless sensor networks.
【Keywords】: energy consumption; probability; telecommunication network planning; wireless sensor networks; energy consumption minimization; grid-based clustering; network lifetime; network planning; probabilistic distance models; routing schemes; wireless sensor networks; Communications Society; Computational modeling; Computer aided manufacturing; Energy consumption; Energy efficiency; Grid computing; Numerical simulation; Relays; Routing; Wireless sensor networks
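The underestimate the authors point to follows from Jensen's inequality: with transmission energy growing like d^α (α ≥ 2), plugging in the average distance gives (E[d])^α ≤ E[d^α]. A Monte Carlo sketch for two uniform points in a unit grid cell, using α = 2 where E[d²] is exactly 1/3:

```python
import math
import random

def mc_distance_moments(n, rng):
    """Monte Carlo estimates of E[d] and E[d^2] for the distance d between
    two independent uniform points in a unit square."""
    s1 = s2 = 0.0
    for _ in range(n):
        dx = rng.random() - rng.random()
        dy = rng.random() - rng.random()
        d2 = dx * dx + dy * dy
        s1 += math.sqrt(d2)
        s2 += d2
    return s1 / n, s2 / n

mean_d, mean_d2 = mc_distance_moments(200000, random.Random(1))
# Energy computed from the average distance vs. the true average energy:
underestimate_ratio = mean_d ** 2 / mean_d2    # < 1 by Jensen's inequality
```

Here E[d] ≈ 0.52, so (E[d])² ≈ 0.27 versus E[d²] = 1/3: the average-distance shortcut understates the α = 2 energy by nearly 20%, and the gap widens for larger path loss exponents, which is why distance-distribution models matter for grid sizing.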
【Paper Link】 【Pages】:2462-2470
【Authors】: Guanqun Yang ; Daji Qiao
【Abstract】: Deploying wireless sensor networks to provide guaranteed barrier coverage is critical for many sensor network applications such as intrusion detection and border surveillance. To reduce the number of sensors needed to provide guaranteed barrier coverage, we propose multi-round sensor deployment, which splits sensor deployment into multiple rounds and can better deal with placement errors that often accompany sensor deployment. We conduct a comprehensive analytical study of multi-round sensor deployment and identify the tradeoff between the number of sensors deployed in each round and the barrier coverage performance. Both numerical and simulation studies show that, by simply splitting sensor deployment into two rounds, guaranteed barrier coverage can be achieved with significantly fewer sensors compared to single-round sensor deployment. Moreover, we propose two practical solutions for multi-round sensor deployment when the distribution of a sensor's residence point is not fully known. The effectiveness of the proposed multi-round sensor deployment strategies is demonstrated by numerical and simulation results.
【Keywords】: numerical analysis; security of data; video surveillance; wireless sensor networks; border surveillance; guaranteed barrier coverage; intrusion detection; multiround sensor deployment; numerical simulation; single-round sensor deployment; Aircraft; Communications Society; Costs; Intrusion detection; Minimax techniques; Numerical simulation; Performance analysis; Sensor fusion; Surveillance; Wireless sensor networks
【Paper Link】 【Pages】:2471-2479
【Authors】: Xing Xu ; Ji Luo ; Qian Zhang
【Abstract】: We are interested in event collection in a 2D region where sensors are deployed to detect and collect events of interest. Using traditional multi-hop routing in wireless sensor networks to report events to a sink node or base station results in severely imbalanced energy consumption among the static sensors. In addition, full connectivity among all the static sensors may not be possible in some cases, since the sensors are generally randomly deployed in the target region. In this paper, we exploit a mobile sensor as the sink node to assist event collection by controlling the movement of the mobile sink to collect static sensor readings. A key observation of our work is that an event has spatial-temporal correlation. Specifically, the same event can be detected by multiple nearby sensors within a period of time. Thus, it is more energy-efficient if the mobile sink can selectively communicate with only a portion of the static sensors, while still collecting all the events of interest. In this paper, we discuss the event collection problem by leveraging the mobility of the sink node and the spatial-temporal correlation of events, with the goal of maximizing the network lifetime under a guaranteed event collection rate. We first model the problem as a sensor selection problem and show that it can be solved in polynomial time if global knowledge of events is available and there are no velocity constraints on the mobile sink. We also analyze the design of a feasible movement route for the mobile sink to minimize its velocity requirements in a practical system. An online scheme is then proposed to relax the assumption of global knowledge of events, and we prove that the expected event collection rate can be guaranteed in theory. Through comprehensive simulation on real trace data, we demonstrate that the network lifetime can be significantly extended compared to other schemes.
【Keywords】: communication complexity; energy consumption; mobile radio; telecommunication network routing; wireless sensor networks; base station; delay tolerant event collection; energy consumption; mobile sensor; mobile sink; multihop routing; network lifetime; polynomial time; sensor selection problem; spatial-temporal correlation; static sensors; wireless sensor networks; Base stations; Disruption tolerant networking; Energy consumption; Event detection; Intelligent sensors; Monitoring; Peer to peer computing; Routing; Spread spectrum communication; Wireless sensor networks
【Paper Link】 【Pages】:2480-2488
【Authors】: Xiaole Bai ; Ziqiu Yun ; Dong Xuan ; Weijia Jia ; Wei Zhao
【Abstract】: In this paper, we study the optimal deployment pattern problem in wireless sensor networks (WSNs). We propose a new set of patterns, particularly for the case when the sensors' communication range (r_c) is relatively small compared with their sensing range (r_s), and prove their optimality among regular patterns. In this study, we discover a surprising and interesting phenomenon: pattern mutation. This phenomenon contradicts the conjecture presented in a previous work that there exists a universal elemental pattern among optimal pattern evolution and that pattern evolution is continuous. For example, we find that mutation happens among the patterns for full-coverage and 3-connectivity when r_c/r_s = 1.0459, among the patterns for full-coverage and 4-connectivity when r_c/r_s = 1.3903, and among the patterns for full-coverage and 5-connectivity when r_c/r_s = 1.0406. To the best of our knowledge, this is the first time that mutation in pattern evolution has been discovered. Also, our work further completes the exploration of optimal patterns in WSNs.
【Keywords】: wireless sensor networks; WSN; pattern evolution; pattern mutation; wireless sensor network; Acoustic sensors; Acoustic signal detection; Communications Society; Frequency; Genetic mutations; Lattices; Sensor phenomena and characterization; Vehicle detection; Vehicles; Wireless sensor networks
【Paper Link】 【Pages】:2489-2497
【Authors】: Zhengye Liu ; Hao Hu ; Yong Liu ; Keith W. Ross ; Yao Wang ; Markus Mobius
【Abstract】: The success of future P2P applications ultimately depends on whether users will contribute their bandwidth, CPU and storage resources to a larger community. In this paper, we propose a new incentive paradigm, Networked Asynchronous Bilateral Trading (NABT), which can be applied to a broad range of P2P applications. In NABT, peers belong to an underlying social network, and each pair of friends keeps track of a credit balance between them. When user Alice provides a service (a file, storage space, computation and so on) to her friend Bob, she charges Bob credits. Thus, in NABT, there is no global currency; instead, there are only credit balances maintained between pairs of friends. NABT allows peers to supply each other asynchronously and further allows peers to trade with remote peers through intermediaries. We theoretically show that NABT is perfectly efficient with balanced demands and supports "networked tit-for-tat". The efficiency of NABT with unbalanced demands is determined by the min-cut of credit limits of the underlying social network. Using simulations driven by MySpace traces, we demonstrate that a simple two-hop NABT design can have high trading efficiency, provide service differentiation, exploit trading intermediaries, and discourage free-riders.
【Keywords】: commerce; peer-to-peer computing; social networking (online); CPU; MySpace trace; P2P trading; balanced demand; networked asynchronous bilateral trading; networked tit-for-tat; social network
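The pairwise-credit idea above is mechanically simple: there is no global currency, only a balance per friend edge, and remote trades ride through intermediaries. A minimal sketch (illustrative only, not the NABT protocol; the names and the `credit_limit` parameter are hypothetical):

```python
from collections import defaultdict

class Ledger:
    """Pairwise credit balances between friends; bal[(a, b)] > 0 means
    b owes a that many credits.  No global currency exists."""
    def __init__(self, credit_limit):
        self.limit = credit_limit
        self.bal = defaultdict(int)

    def pay(self, payer, payee, amount):
        """Transfer `amount` credits from payer to payee along their edge,
        respecting the credit limit the payee extends to the payer."""
        if self.bal[(payee, payer)] + amount > self.limit:
            raise ValueError("credit limit exceeded")
        self.bal[(payee, payer)] += amount
        self.bal[(payer, payee)] -= amount

    def trade_via(self, buyer, intermediary, seller, amount):
        """Two-hop trade: the buyer pays a mutual friend, who pays the
        seller, so non-friends can still trade."""
        self.pay(buyer, intermediary, amount)
        self.pay(intermediary, seller, amount)

led = Ledger(credit_limit=10)
led.trade_via("alice", "bob", "carol", 3)   # alice buys from carol via bob
```

After the trade, bob holds 3 credits against alice and owes 3 to carol, matching the paper's observation that trading capacity is bounded by the min-cut of credit limits along friend paths.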
【Paper Link】 【Pages】:2498-2506
【Authors】: Minas Gjoka ; Maciej Kurant ; Carter T. Butts ; Athina Markopoulou
【Abstract】: With more than 250 million active users, Facebook (FB) is currently one of the most important online social networks. Our goal in this paper is to obtain a representative (unbiased) sample of Facebook users by crawling its social graph. In this quest, we consider and implement several candidate techniques. Two approaches that are found to perform well are the Metropolis-Hastings random walk (MHRW) and a re-weighted random walk (RWRW). Both have pros and cons, which we demonstrate through a comparison to each other as well as to the "ground truth" (UNI - obtained through true uniform sampling of FB userIDs). In contrast, the traditional Breadth-First-Search (BFS) and Random Walk (RW) perform quite poorly, producing substantially biased results. In addition to offline performance assessment, we introduce online formal convergence diagnostics to assess sample quality during the data collection process. We show how these can be used to effectively determine when a random walk sample is of adequate size and quality for subsequent use (i.e., when it is safe to cease sampling). Using these methods, we collect the first, to the best of our knowledge, unbiased sample of Facebook. Finally, we use one of our representative datasets, collected through MHRW, to characterize several key properties of Facebook.
【Keywords】: convergence; data mining; search problems; social networking (online); BFS; Facebook; Metropolis-Hastings random walk; breadth-first-search; crawling; data collection; ground truth; online formal convergence diagnostics; online social network; re-weighted random walk; Character generation; Communications Society; Convergence; Facebook; Internet; Legged locomotion; Sampling methods; Social network services; Sociology; Testing
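As a concrete illustration of the walk underlying this paper's best-performing sampler, the sketch below implements a generic Metropolis-Hastings random walk on an undirected graph; the function name and dict-of-neighbors graph representation are our own, not the authors' crawler.

```python
import random

def mhrw_sample(graph, start, steps):
    """Metropolis-Hastings random walk over an undirected graph
    (dict: node -> list of neighbors). The acceptance probability
    min(1, deg(v)/deg(u)) makes the stationary distribution uniform
    over nodes, removing the degree bias of a plain random walk."""
    v = start
    samples = []
    for _ in range(steps):
        u = random.choice(graph[v])        # propose a uniform neighbor
        if random.random() <= len(graph[v]) / len(graph[u]):
            v = u                          # accept the move
        samples.append(v)                  # on rejection, stay at v
    return samples
```

On a star graph, a plain random walk visits the hub every other step, whereas MHRW visits the hub and each leaf about equally often.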
【Paper Link】 【Pages】:2507-2515
【Authors】: Stratis Ioannidis ; Laurent Massoulié
【Abstract】: We propose a distributed mechanism for finding websurfing strategies that is inspired by the StumbleUpon recommendation engine. Each day, a websurfer visits a sequence of websites recommended by our mechanism, and selects one that matches her daily interests. We formally show that even with this minimal feedback from the surfer (the selected website), our mechanism finds a websurfing strategy that matches the surfer's interests optimally. The surfer does not need to know, or declare, what her daily interests are before she is presented with content she likes. Moreover, our mechanism is content-agnostic: it is oblivious to the nature of the content the surfer selects. In addition, we study how the performance of this mechanism can be improved if surfers with similar interests share their feedback. Such surfers can be found indirectly, e.g., if they are all registered as friends in a social networking application. Our analysis characterizes the improvement in the mechanism's accuracy, based on the size of the group and the degree of similarity between the surfers' interests. In particular, we show that sharing feedback can significantly accelerate the convergence of our mechanism. Our results are derived analytically using stochastic approximation techniques, but are also validated through a numerical study.
【Keywords】: Internet; optimisation; recommender systems; Web searching; blogosphere surfing; content agnostic mechanism; optimal personalized strategies; recommendation engine; social networking application; Acceleration; Blogs; Communications Society; Convergence; Feedback; Internet; Search engines; Social network services; Stochastic processes
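The feedback loop described above can be caricatured in a few lines: each day the mechanism updates its site-selection probabilities using only the index of the website the surfer picked. This toy multiplicative update is our own illustration, not the paper's stochastic-approximation scheme.

```python
def update_strategy(probs, selected, step=0.05):
    """One feedback step for a websurfing strategy: nudge the
    probability of the site the surfer actually selected upward,
    then renormalize so the strategy remains a distribution."""
    probs = list(probs)
    probs[selected] += step
    total = sum(probs)
    return [p / total for p in probs]
```

Under this toy rule, a site that is selected repeatedly has its probability driven toward 1 geometrically, mirroring the convergence behavior the paper establishes for its mechanism.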
【Paper Link】 【Pages】:2516-2524
【Authors】: Jinyuan Sun ; Xiaoyan Zhu ; Yuguang Fang
【Abstract】: Online social networks (OSNs) are attractive applications that enable a group of users to share data and stay connected. Facebook, Myspace, and Twitter are among the most popular OSN applications, in which personal information is shared among group contacts. Due to the private nature of the shared information, data privacy is an indispensable security requirement in OSN applications. In this paper, we propose a privacy-preserving scheme for data sharing in OSNs, with efficient revocation that removes a contact's access to the private data once the contact is removed from the social group. In addition, the proposed scheme offers advanced features such as efficient search over encrypted data files and dynamic changes to group membership. With slight modification, we extend the application of the proposed scheme to anonymous online social networks with different security and functional requirements. The proposed scheme is demonstrated to be secure, effective, and efficient.
【Keywords】: cryptography; data privacy; social networking (online); Facebook; Myspace; Twitter; data privacy; data sharing; encrypted data files; indispensable security requirement; online social networks; privacy-preserving scheme
【Paper Link】 【Pages】:2525-2533
【Authors】: Di Niu ; Baochun Li
【Abstract】: There exists a certain level of ambiguity regarding whether network coding can further improve download performance in P2P content distribution systems, as compared to commonly applied heuristics such as rarest first protocols. In this paper, we revisit the problem of broadcasting multiple data blocks from a single source in an overlay network using gossip-like protocols. Our new finding reveals that the marginal benefit of network coding critically depends on the dynamics of network topologies. We show that although network coding is optimal as a block selection mechanism, simple non-coding protocols are close to optimal in complete and random graphs, leading to marginal benefits of network coding. However, network coding demonstrates salient benefits in clustered and time-varying topologies, which are common in real-world systems with ISP-locality mechanisms implemented. Through both theoretical analysis and simulation results, we unveil the underlying reasons behind discrepancies in the power of network coding under different scenarios.
【Keywords】: graph theory; network coding; peer-to-peer computing; protocols; telecommunication network topology; ISP-locality mechanisms; P2P content distribution systems; block selection mechanism; clustered topology; decentralized broadcast; gossip-like protocols; multiple data block broadcasting; network coding; network topology; overlay network; random graphs; time-varying topology; Bandwidth; Broadcasting; Communication system traffic control; Communications Society; Delay effects; Network coding; Network topology; Peer to peer computing; Protocols; Time varying systems
【Paper Link】 【Pages】:2534-2542
【Authors】: Wei Pu ; Hao Cui ; Chong Luo ; Feng Wu ; Chang Wen Chen
【Abstract】: This research considers a network-coded broadcast system with multi-rate transmission and dual queue-stability constraints. Existing network-coded broadcast systems consider single-rate transmission without receiver queue constraints. First, we illustrate that broadcast without network coding cannot achieve maximum throughput over wireless fading channels. However, network-coded broadcast imposes new constraints on the receivers to maintain stable queues, while the fading-channel characteristics suggest that the broadcast should operate at multiple rates to achieve higher throughput. In this research, we propose a joint scheduling and network coding (JSNC) strategy for such a network-coded broadcast system to achieve maximum throughput under the queue-stability constraint. In a single-cell broadcast network with exogenous packet arrivals at the base station, we prove that JSNC can stabilize the system as long as the rate of the exogenous arrival flow is within the capacity region. JSNC provides sufficient control parameters for trading off between the sender's buffer and the receivers' buffers. JSNC can be viewed as a generalization of the classical back-pressure scheduling rule to coded information flows, or alternatively as an extension of network coding theory to queueing systems.
【Keywords】: fading channels; network coding; queueing theory; dual queue stability constraints; joint scheduling and network coding strategy; multi-rate transmission; network coded broadcast system; stable maximum throughput broadcast; wireless fading channels; Base stations; Broadcasting; Decoding; Fading; Network coding; Optimal scheduling; Queueing analysis; Stability; Throughput; Wireless networks
【Paper Link】 【Pages】:2543-2551
【Authors】: Weiyao Xiao ; Sachin Agarwal ; David Starobinski ; Ari Trachtenberg
【Abstract】: We examine the problem of minimizing feedbacks in reliable wireless broadcasting, by pairing rateless coding with extreme value theory. Our key observation is that, in a broadcast environment, this problem resolves into estimating the maximum number of packets dropped among many receivers rather than for each individual receiver. With rateless codes, this estimation relates to the number of redundant transmissions needed at the source in order for all receivers to correctly decode a message with high probability. We develop and analyze two new data dissemination protocols, called Random Sampling (RS) and Full Sampling with Limited Feedback (FSLF), based on the moment and maximum likelihood estimators in extreme value theory. Both protocols rely on a single-round learning phase, requiring the transmission of a few feedback packets from a small subset of receivers. With fixed overhead, we show that FSLF has the desirable property of becoming more accurate as the receiver population gets larger. Our protocols are channel agnostic, in that they do not require a-priori knowledge of (i.i.d.) packet loss probabilities, which may vary among receivers. We provide simulations and an improved full-scale implementation of the Rateless Deluge over-the-air programming protocol on sensor motes as a demonstration of the practical benefits of our protocols, which translate into about 30% savings in latency and energy consumption.
【Keywords】: information dissemination; maximum likelihood estimation; probability; protocols; radio broadcasting; wireless channels; Rateless Deluge over- the-air programming protocol; channel agnostic; data dissemination; energy consumption; extreme value theory; feedback packets; full-scale implementation; maximum likelihood estimators; near-zero feedback; packet loss probabilities; random sampling; rateless coding; redundant transmissions; reliable wireless broadcasting; sensor motes; single-round learning phase; Broadcasting; Data analysis; Delay; Energy consumption; Feedback; Maximum likelihood decoding; Maximum likelihood estimation; Protocols; Reliability theory; Sampling methods
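For intuition about the quantity being estimated, the sketch below sizes the redundancy of a rateless broadcast with a conservative union bound over receivers, assuming a known i.i.d. loss probability p_hat. It is a simple stand-in for, not an implementation of, the paper's extreme-value estimators, and all names are ours.

```python
import math

def redundancy_for_reliable_broadcast(k, p_hat, n_receivers, eps=0.01):
    """Choose the smallest r so that, under i.i.d. packet loss
    probability p_hat < 1, each of n_receivers decodes a k-block
    rateless message with probability at least 1 - eps overall
    (union bound over receivers)."""
    per_rx = eps / n_receivers       # allowed failure prob per receiver
    r = 0
    while True:
        n = k + r
        # P(fewer than k of the n packets arrive) at one receiver
        fail = sum(math.comb(n, i) * (1 - p_hat) ** i * p_hat ** (n - i)
                   for i in range(k))
        if fail <= per_rx:
            return r
        r += 1
```

The paper's point is that p_hat need not be known a priori: the extreme-value estimators learn the tail of the worst-case loss directly from a small feedback sample.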
【Paper Link】 【Pages】:2552-2560
【Authors】: Nikolaos Fountoulakis ; Anna Huber ; Konstantinos Panagiotou
【Abstract】: Broadcasting algorithms are of fundamental importance for distributed systems engineering. In this paper we revisit the classical and well-studied push protocol for message broadcasting, and we investigate a faulty version of it. Assuming that initially only one node has some piece of information, at each stage every informed node chooses randomly and independently one of its neighbors and passes the message to it with some probability q; that is, it fails to do so with probability 1-q. The performance of the push protocol on a fully connected network, where each node is joined by a link to every other node, is very well understood for q = 1. In particular, Frieze and Grimmett proved that with probability 1-o(1) the push protocol completes the broadcasting of the message within (1 ± ε)(log_2 n + ln n) stages, where n is the number of nodes in the network. However, there are no tight bounds for the broadcast time on networks that are significantly sparser than the complete graph. In this work we consider random networks on n nodes, where every edge is present with probability p, independently of every other edge. We show that if p ≥ α(n) ln n/n, where α(n) is any function that tends to infinity as n grows, then the push protocol with faulty transmissions broadcasts the message within (1 ± ε)(log_{1+q} n + (1/q) ln n) stages with probability 1-o(1). In other words, in almost every network of density d with d ≥ α(n) ln n, the push protocol broadcasts a message as fast as in a fully connected network, and the speed is affected only by the success probability q. This is quite surprising in the sense that the time needed remains essentially unaffected by the fact that most of the links are missing. Our results are accompanied by an experimental evaluation.
【Keywords】: broadcasting; message passing; protocols; systems engineering; telecommunication network reliability; broadcasting algorithms; distributed systems engineering; message broadcasting; push protocol; random networks; reliable broadcasting; Broadcasting; Communications Society; Context modeling; H infinity control; Informatics; Peer to peer computing; Protocols; Reliability engineering; Systems engineering and theory; Telecommunication network reliability
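The faulty push protocol on the complete graph is straightforward to simulate; the sketch below counts broadcast rounds for given n and q (function and variable names are ours). The bound above suggests roughly log_{1+q} n + (1/q) ln n rounds.

```python
import random

def faulty_push_rounds(n, q, rng=None):
    """Simulate the faulty push protocol on the complete graph K_n:
    in each round, every informed node picks a uniformly random
    other node and passes the message with probability q (failing
    with probability 1-q). Returns the number of rounds until all
    n nodes are informed."""
    rng = rng or random.Random()
    informed = {0}
    rounds = 0
    while len(informed) < n:
        rounds += 1
        newly = set()
        for v in informed:
            if rng.random() < q:
                u = rng.randrange(n - 1)
                newly.add(u if u < v else u + 1)  # uniform over nodes != v
        informed |= newly
    return rounds
```

Since the informed set can at most double per round, any run needs at least ceil(log2 n) rounds, which the simulation respects.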
【Paper Link】 【Pages】:2561-2569
【Authors】: N. Prasanth Anthapadmanabhan ; Armand M. Makowski
【Abstract】: We consider an extension to the disk model in one dimension where communication links established between nodes may fail. With the help of the method of first and second moments, we investigate the zero-one laws for the property that there are no isolated nodes in the underlying random graph. Two specific situations are discussed: For the unit circle we prove a full zero-one law and determine its critical scaling. For the unit interval we derive a zero-law and a one-law which capture deviations from different critical scalings; a completely symmetric zero-one law is established under an additional condition. Analysis and simulations both indicate the possible presence of a gap between the one-law critical scalings for the unit interval and the unit circle. This discrepancy is quite surprising given that the zero-one laws for the absence of isolated nodes are identical in the geometric random graphs on the unit interval and on the unit circle. Connections to recent results by Yi et al. are discussed.
【Keywords】: ad hoc networks; graph theory; radio links; telecommunication network reliability; first moment method; geometric random graph; isolated nodes; second moment method; unit interval; unreliable links; wireless ad hoc networks; zero law; zero-one law; Ad hoc networks; Analytical models; Communications Society; Context; Educational institutions; Moment methods; Peer to peer computing; Random variables; Wireless communication
【Paper Link】 【Pages】:2570-2578
【Authors】: Bartlomiej Blaszczyszyn ; Paul Mühlethaler
【Abstract】: In this paper we propose two analytically tractable stochastic models of non-slotted Aloha for Mobile Ad-hoc NETworks (MANETs): one model assumes a static pattern of nodes while the other assumes that the pattern of nodes varies over time. Both models feature transmitters randomly located in the Euclidean plane, according to a Poisson point process with the receivers randomly located at a fixed distance from the emitters. We concentrate on the so-called outage scenario, where a successful transmission requires a Signal-to-Interference-and-Noise Ratio (SINR) larger than a given threshold. With Rayleigh fading and the SINR averaged over the duration of the packet transmission, both models lead to closed form expressions for the probability of successful transmission. We show an excellent matching of these results with simulations. Using our models we compare the performances of non-slotted Aloha to previously studied slotted Aloha. We observe that when the path loss is not very strong both models, when appropriately optimized, exhibit similar performance. For stronger path loss non-slotted Aloha performs worse than slotted Aloha, however when the path loss exponent is equal to 4 its density of successfully received packets is still 75% of that in the slotted scheme. This is still much more than the 50% predicted by the well-known analysis where simultaneous transmissions are never successful. Moreover, in any path loss scenario, both schemes exhibit the same energy efficiency.
【Keywords】: Rayleigh channels; access protocols; ad hoc networks; interference; mobile radio; radio receivers; radio transmitters; stochastic processes; Euclidean plane; Poisson point process; Rayleigh fading; mobile ad-hoc networks; nonslotted Aloha; outage scenario; packet transmission; receivers; signal-to-interference-and-noise ratio; stochastic analysis; transmission probability; transmitters; wireless ad-hoc networks; Ad hoc networks; Analytical models; Constraint optimization; Interference constraints; Mobile ad hoc networks; Peer to peer computing; Rayleigh channels; Signal to noise ratio; Stochastic processes; Transmitters
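For reference, the slotted-Aloha baseline the authors compare against has a well-known closed form in the Rayleigh-fading Poisson-interference setting (interference-limited, noise neglected); the helper below evaluates it. The function name is ours.

```python
import math

def aloha_success_probability(lam, r, theta, beta):
    """Probability that a Rayleigh-fading link of length r achieves
    SIR > theta in a Poisson field of slotted-Aloha interferers with
    spatial density lam, path-loss exponent beta > 2 (noise neglected).
    Standard stochastic-geometry result:
        p = exp(-lam * pi * r^2 * theta^(2/beta) * C(beta)),
    with C(beta) = 2*pi / (beta * sin(2*pi/beta))."""
    c = 2 * math.pi / (beta * math.sin(2 * math.pi / beta))
    return math.exp(-lam * math.pi * r * r * theta ** (2 / beta) * c)
```

For beta = 4, C(beta) reduces to pi/2, giving the familiar exp(-lam * pi^2 * r^2 * sqrt(theta) / 2); the paper's contribution is the analogous closed forms for the non-slotted case.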
【Paper Link】 【Pages】:2579-2587
【Authors】: Cem Boyaci ; Bo Li ; Ye Xia
【Abstract】: The paper studies the complexity of the wireless scheduling problem under interference constraints. We first relate the definition of the capacity region to the weighted fractional coloring problem. Then, the scheduling-for-stability problem under deterministic arrivals is studied in light of this relationship. We emphasize the requirement that the scheduling algorithm use a tractable amount of processing and storage resources. Two classes of algorithms are defined and a complexity result is derived for the intersection of the two classes. We also exhibit an algorithm that can achieve the storage requirement by relaxing the processing requirement. The results are used to examine interesting sections of the capacity region. Finally, we relate the new interpretation and theory about the capacity region to the notion of set σ-local pooling.
【Keywords】: graph colouring; scheduling; set theory; interference constraints; interference graph; scheduling-for-stability problem; set σ-local pooling; storage resources; weighted fractional coloring problem; wireless scheduling problem; Access protocols; Communications Society; Costs; Information science; Interference constraints; Processor scheduling; Radio transceivers; Scheduling algorithm; Symmetric matrices; Wireless communication
【Paper Link】 【Pages】:2588-2596
【Authors】: Liqun Fu ; Soung Chang Liew ; Jianwei Huang
【Abstract】: This paper proposes and investigates the concept of a safe carrier-sensing range that guarantees interference-safe (also termed hidden-node-free) transmissions in CSMA networks under the cumulative interference model. Compared with the safe carrier-sensing range under the commonly assumed but less realistic pairwise interference model, we show that the safe carrier-sensing range required under the cumulative interference model is larger by a constant multiplicative factor. For example, the factor is 1.4 if the SINR requirement is 10 dB and the path-loss exponent is 4. We further show that the concept of a safe carrier-sensing range, although amenable to elegant analytical results, is inherently not compatible with the conventional power-threshold carrier-sensing mechanism (e.g., that used in IEEE 802.11). Specifically, the absolute power sensed by a node in the conventional mechanism does not contain enough information for it to derive its distances from other concurrent transmitter nodes. We show that, fortunately, a carrier-sensing mechanism called Incremental-Power Carrier-Sensing (IPCS) can realize the safe carrier-sensing range concept in a simple way. Instead of monitoring the absolute detected power, the IPCS mechanism monitors every increment in the detected power. This means that IPCS can separate the detected power of every concurrent transmitter, and map the power profile to the required distance information. Our extensive simulation results indicate that IPCS can boost spatial reuse and network throughput by more than 60% relative to the conventional carrier-sensing mechanism. Last but not least, IPCS not only allows us to implement our safe carrier-sensing range; it also ties up a loose end in many prior theoretical works that implicitly assume the use of a carrier-sensing range (safe or otherwise) without an explicit design to realize it.
【Keywords】: carrier sense multiple access; interference (signal); CSMA network; constant multiplicative factor; cumulative interference; hidden-node-free transmission; incremental-power carrier-sensing; interference-safe transmission; safe carrier-sensing range; Communications Society; Interference constraints; Monitoring; Multiaccess communication; Multiple access interference; Peer to peer computing; Protocols; Signal to noise ratio; Throughput; Transmitters
【Paper Link】 【Pages】:2597-2605
【Authors】: Hongkun Yang ; Fengyuan Ren ; Chuang Lin ; Jiao Zhang
【Abstract】: In this paper, we investigate the frequency-domain packet scheduling (FDPS) problem for the 3GPP LTE Uplink (UL). Instead of studying a specific scheduling policy, we provide a unified approach to this issue. First we formalize a general LTE UL FDPS problem which accommodates various scheduling policies. Then we prove that the problem is MAX SNP-hard, which implies that approximation algorithms with constant approximation ratios are the best that we can hope for. We therefore design two approximation algorithms, both of which run in polynomial time, and analyze their approximation ratios. The first algorithm is easy to follow, since it is based on a simple greedy method. The second is based on the local-ratio technique and approximately solves the LTE UL FDPS problem with an approximation ratio of 2.
【Keywords】: 3G mobile communication; computational complexity; multi-access systems; packet radio networks; polynomial approximation; telecommunication standards; 3GPP LTE uplink; MAX SNP hard problem; approximation algorithm; frequency domain packet scheduling; polynomial runtime; Algorithm design and analysis; Approximation algorithms; Bandwidth; Costs; Delay; Frequency domain analysis; Heuristic algorithms; OFDM; Peak to average power ratio; Scheduling algorithm
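The constraint that makes LTE-UL FDPS hard is that each user's resource blocks (RBs) must be contiguous in frequency (an SC-FDMA requirement). A left-to-right greedy pass illustrates it; this is a generic heuristic in the spirit of a simple greedy method, not the authors' algorithm, and all names are ours.

```python
def greedy_contiguous_fdps(metric):
    """Greedy frequency-domain scheduler under the LTE-UL contiguity
    constraint: RBs are scanned left to right, and RB i is given to
    the user with the largest metric[u][i] among users whose
    allocation would remain contiguous (users not yet scheduled, or
    the user holding RB i-1). `metric` is a list of per-user lists,
    one value per RB. Returns the assigned user index per RB."""
    n_users, n_rbs = len(metric), len(metric[0])
    assigned = []          # user index per RB
    started = set()        # users already holding some RB
    for i in range(n_rbs):
        prev = assigned[i - 1] if i else None
        candidates = [u for u in range(n_users)
                      if u == prev or u not in started]
        u_best = max(candidates, key=lambda u: metric[u][i])
        assigned.append(u_best)
        started.add(u_best)
    return assigned
```

Note the key consequence of contiguity: once a user loses the frequency front to another user, it can never be scheduled again in this pass, even if it has the best metric on a later RB.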
【Paper Link】 【Pages】:2606-2614
【Authors】: I-Hong Hou ; P. R. Kumar
【Abstract】: We develop a general approach for designing scheduling policies for real-time traffic over wireless channels. We extend prior work, which characterizes a real-time flow by its traffic pattern, delay bound, timely-throughput requirement, and channel reliability, to allow time-varying channels, clients with different deadlines, and the optional employment of rate adaptation. Thus, our model allows the treatment of more realistic fading channels, scenarios with mobile nodes, and more general transmission strategies. We derive a sufficient condition for a scheduling policy to be feasibility optimal, and thereby establish a class of feasibility optimal policies. We demonstrate the utility of the identified class by deriving a feasibility optimal policy for the scenario with rate adaptation, time-varying channels, and heterogeneous delay bounds. When rate adaptation is not available, we also derive a feasibility optimal policy for time-varying channels. For the scenario where rate adaptation is not available but clients have different delay bounds, we describe a heuristic. Simulation results are also presented which indicate the usefulness of the scheduling policies for more realistic and complex scenarios.
【Keywords】: fading channels; scheduling; telecommunication traffic; time-varying channels; wireless LAN; wireless channels; channel reliability; delay bound; fading wireless channels; feasibility optimal policy; rate adaptation; real-time traffic; scheduling policy; time-varying channels; traffic pattern; wireless LAN; Contracts; Delay; Fading; Streaming media; Sufficient conditions; Telecommunication traffic; Time-varying channels; Traffic control; USA Councils; Wireless networks
【Paper Link】 【Pages】:2615-2623
【Authors】: Veeraruna Kavitha ; Eitan Altman ; Rachid El Azouzi ; Rajesh Sundaresan
【Abstract】: We consider the problem of centrally controlled 'fair' scheduling of resources to one of the many mobile stations connected to a base station (BS). The BS is the only entity making decisions in this framework, based on truthful information from the mobiles about their radio channels. We study the well-known family of parametric α-fair scheduling problems from a game-theoretic perspective in which some of the mobiles may be noncooperative. We first show that if the BS is unaware of the noncooperative behavior of the mobiles, the noncooperative mobiles succeed in snatching the resources from the other, cooperative mobiles, resulting in unfair allocations. If the BS is aware of the noncooperative mobiles, a new game arises with the BS as an additional player. It can then do better by neglecting the signals from the noncooperative mobiles. The BS, however, succeeds in eliciting truthful signals from the mobiles only when it uses additional information (signal statistics). This new policy, together with the truthful signals from the mobiles, forms a Nash Equilibrium (NE) called a Truth Revealing Equilibrium. Finally, we propose new iterative algorithms to implement fair scheduling policies that robustify the otherwise non-robust (in the presence of noncooperation) α-fair scheduling algorithms.
【Keywords】: cellular radio; channel allocation; game theory; mobile communication; Nash equilibrium; base station; cellular systems; fair scheduling; game-theoretic perspective; mobile stations; noncooperative mobiles; radio channel; signal statistics; truth revealing equilibrium; Algorithm design and analysis; Base stations; Centralized control; Downlink; Iterative algorithms; Multiaccess communication; Resource management; Robustness; Scheduling algorithm; Wireless communication
【Paper Link】 【Pages】:2624-2632
【Authors】: Yang Song ; Chi Zhang ; Yuguang Fang ; Zhisheng Niu
【Abstract】: The MaxWeight algorithm, also known as the back-pressure algorithm, has received much attention as a viable solution for dynamic link scheduling in multi-hop wireless networks. The basic principle of the MaxWeight algorithm is to select a set of interference-free links with the maximum overall link weight in the network, where the link weight is determined by the queue difference between the transmitter and the receiver. While the throughput-optimality of the MaxWeight algorithm is well understood in the literature, the energy consumption it induces is less studied, although it is of great interest in energy-constrained wireless networks such as wireless sensor networks. In this paper, we propose an energy-conserving scheduling scheme, the minimum energy scheduling (MES) algorithm, for multi-hop wireless networks with stochastic traffic arrivals and time-varying channel conditions. We show that our algorithm is energy optimal in the sense that the proposed MES algorithm can achieve an energy consumption which is arbitrarily close to the global minimum solution. Moreover, the energy efficiency of the MES algorithm is achieved without losing throughput-optimality. In other words, the proposed MES algorithm is still throughput optimal, whereas the average consumed energy in the network is significantly reduced compared to the traditional MaxWeight algorithm. The theoretical results are substantiated via simulations.
【Keywords】: radio links; radio networks; scheduling; stochastic processes; telecommunication traffic; time-varying channels; MaxWeight algorithm; back-pressure algorithm; dynamic link scheduling; energy-conserving scheduling; interference-free links; minimum energy scheduling; multi-hop wireless networks; stochastic traffic arrivals; time-varying channels; Dynamic scheduling; Energy consumption; Interference; Scheduling algorithm; Spread spectrum communication; Stochastic processes; Time-varying channels; Transmitters; Wireless networks; Wireless sensor networks
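The MaxWeight rule itself is simple to state: activate the interference-free link set with the largest total weight. A brute-force sketch (ours, feasible only for toy topologies, since the search is exponential in the number of links) follows.

```python
from itertools import combinations

def maxweight_schedule(links, weight, conflicts):
    """Brute-force MaxWeight: among all interference-free subsets of
    links, return one maximizing the total weight (weight[l] is,
    e.g., the queue difference between a link's transmitter and
    receiver). `conflicts` is a set of frozenset pairs of links
    that cannot be active simultaneously."""
    best, best_w = [], 0.0
    for k in range(1, len(links) + 1):
        for subset in combinations(links, k):
            if any(frozenset(p) in conflicts
                   for p in combinations(subset, 2)):
                continue             # subset violates interference
            w = sum(weight[l] for l in subset)
            if w > best_w:
                best, best_w = list(subset), w
    return best
```

The MES algorithm discussed above keeps this throughput-optimal flavor while additionally biasing the choice toward low-energy schedules.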
【Paper Link】 【Pages】:2633-2641
【Authors】: Rui Zhang ; Yanchao Zhang
【Abstract】: Neighbor discovery is a fundamental requirement and needs to be done frequently in underwater acoustic networks (UANs) with floating node mobility. In hostile environments, neighbor discovery is vulnerable to the wormhole attack, by which the adversary uses secret wormhole links to make distant nodes falsely accept each other as neighbors. The wormhole attack may lead to many undesirable consequences and cannot be solved by cryptographic methods. Existing wormhole defenses for ground wireless networks cannot be directly applied to UANs, where most of their assumptions no longer hold. This paper presents a suite of novel protocols to enable wormhole-resilient secure neighbor discovery in UANs. Our protocols are based on the Direction of Arrival (DoA) estimation of acoustic signals, a basic functionality readily available in current UANs. The proposed protocols can thwart the wormhole attack with overwhelming probability, without the conventional hard requirements on secure and accurate time synchronization and localization. Detailed theoretical analysis and simulation results confirm the high performance of the proposed protocols.
【Keywords】: cryptographic protocols; direction-of-arrival estimation; telecommunication links; telecommunication security; underwater acoustic communication; DoA; UAN; acoustic signals; cryptographic methods; direction of arrival estimation; floating node mobility; protocols; secret wormhole links; underwater acoustic networks; wormhole attack; wormhole-resilient secure neighbor discovery; Cryptographic protocols; Cryptography; Direction of arrival estimation; Peer to peer computing; Remotely operated vehicles; Routing; Underwater acoustics; Underwater vehicles; Wireless networks; Wireless sensor networks
【Paper Link】 【Pages】:2642-2650
【Authors】: Fei Chen ; Alex X. Liu
【Abstract】: The architecture of two-tiered sensor networks, where storage nodes serve as an intermediate tier between sensors and a sink for storing data and processing queries, has been widely adopted because of the benefits of power and storage saving for sensors as well as the efficiency of query processing. However, the importance of storage nodes also makes them attractive to attackers. In this paper, we propose SafeQ, a protocol that prevents attackers from gaining information from both sensor collected data and sink issued queries. SafeQ also allows a sink to detect compromised storage nodes when they misbehave. To preserve privacy, SafeQ uses a novel technique to encode both data and queries such that a storage node can correctly process encoded queries over encoded data without knowing their values. To preserve integrity, we propose a new data structure called neighborhood chains that allows a sink to verify whether the result of a query contains exactly the data items that satisfy the query. In addition, we propose a solution to adapt SafeQ for event-driven sensor networks.
【Keywords】: data structures; protocols; query processing; telecommunication computing; telecommunication security; wireless sensor networks; SafeQ; data structure; neighborhood chains; query processing; two-tiered sensor networks; Art; Communications Society; Computer architecture; Computer science; Data privacy; Data structures; Peer to peer computing; Power engineering and energy; Protocols; Query processing
【Paper Link】 【Pages】:2651-2659
【Authors】: Ming Li ; Shucheng Yu ; Wenjing Lou ; Kui Ren
【Abstract】: Body Area Networks (BANs) are a key enabling technology in e-healthcare applications such as remote health monitoring. An important security issue during the bootstrap phase of a BAN is to securely associate a group of sensor nodes with a patient and generate the necessary secret keys to protect subsequent wireless communications. Due to the ad hoc nature of the BAN and the extreme resource constraints of sensor devices, providing secure, fast, efficient and user-friendly sensor association is a challenging task. In this paper, we propose a lightweight scheme for secure sensor association and key management in BANs. A group of sensor nodes, having no prior shared secrets before they meet, establish initial trust through group device pairing (GDP), an authenticated group key agreement protocol in which the legitimacy of each member node can be visually verified by a human. Various kinds of secret keys can be generated on demand after deployment. GDP supports batch deployment of sensor nodes to save setup time, does not rely on any additional hardware devices, and is mostly based on symmetric key cryptography, while allowing batch node addition and revocation. We implemented GDP on a sensor network testbed and evaluated its performance. Experimental results show that GDP indeed achieves the expected design goals.
【Keywords】: ad hoc networks; body area networks; cryptography; telecommunication network management; wireless sensor networks; ad hoc nature; batch node; body area networks; e-healthcare; group device pairing; key management; resource constraints; secure sensor association; sensor devices; sensor network testbed; sensor nodes; symmetric key cryptography; wireless communications; Body area networks; Body sensor networks; Economic indicators; Hardware; Humans; Patient monitoring; Protection; Protocols; Remote monitoring; Wireless communication
【Paper Link】 【Pages】:2660-2668
【Abstract】: Wireless sensor networks (WSNs) have the potential to be widely used in many areas for unattended event monitoring. Mainly due to the lack of a protected physical boundary, wireless communications are vulnerable to unauthorized interception and detection. Privacy is becoming one of the major issues that jeopardize the successful deployment of wireless sensor networks. While confidentiality of the message can be ensured through content encryption, it is much more difficult to adequately address source-location privacy. For WSNs, source-location privacy service is further complicated by the fact that sensor nodes are low-cost, low-power radio devices; computationally intensive cryptographic algorithms and large-scale broadcasting-based protocols are therefore not suitable for WSNs. In this paper, we propose source-location privacy schemes that route messages through randomly selected intermediate node(s) before they are transmitted to the SINK node. We first describe routing through a single randomly selected intermediate node away from the source node. Our analysis shows that this scheme can provide strong local source-location privacy. We also present routing through multiple randomly selected intermediate nodes, based on angle and quadrant, to further improve the global source-location privacy. While providing source-location privacy for WSNs, our simulation results also demonstrate that the proposed schemes are very efficient in energy consumption, and have very low transmission latency and high message delivery ratio. Our protocols can be used for many practical applications.
【Keywords】: authorisation; cryptographic protocols; data privacy; telecommunication network routing; telecommunication security; wireless sensor networks; SINK node; WSN; computationally intensive cryptographic algorithms; content encryption; dynamic routing; energy consumption; global source location privacy; large scale broadcasting-based protocols; low-cost radio devices; low-power radio devices; message confidentiality; message delivery ratio; multiple randomly selected intermediate nodes; protected physical boundary; sensor nodes; single randomly selected intermediate node; source-location privacy schemes; source-location privacy service; transmission latency; unattended event monitoring; unauthorized detection; unauthorized interception; wireless communications; wireless sensor networks; Cryptographic protocols; Cryptography; Large-scale systems; Monitoring; Privacy; Protection; Radio broadcasting; Routing; Wireless communication; Wireless sensor networks
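The abstract's first scheme, routing through a single randomly selected intermediate node before heading to the sink, can be sketched as below. This is an illustrative toy, not the paper's protocol: the function names (`pick_intermediate`, `two_phase_route`), the distance threshold `d_min`, and the flat node list are assumptions; a real WSN would forward hop by hop with a geographic routing protocol.

```python
import math
import random

def pick_intermediate(source, nodes, d_min, rng=random):
    """Choose a random intermediate node at least d_min away from the source,
    so the sink-side observer's backtracking ends far from the real source."""
    far_enough = [n for n in nodes
                  if n != source and math.dist(source, n) >= d_min]
    if not far_enough:
        raise ValueError("no node is at least d_min from the source")
    return rng.choice(far_enough)

def two_phase_route(source, sink, nodes, d_min, rng=random):
    """Phase 1: source -> random intermediate; phase 2: intermediate -> sink.
    Hop-level forwarding inside each phase is omitted in this sketch."""
    intermediate = pick_intermediate(source, nodes, d_min, rng)
    return [source, intermediate, sink]
```

Seeding the `rng` argument makes the selection reproducible for testing, while a deployed node would use a fresh random draw per message.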
【Paper Link】 【Pages】:2669-2677
【Authors】: Michael J. Neely
【Abstract】: It is well known that max-weight policies based on a queue backlog index can be used to stabilize stochastic networks, and that similar stability results hold if a delay index is used. Using Lyapunov Optimization, we extend this analysis to design a utility maximizing algorithm that uses explicit delay information from the head-of-line packet at each user. The resulting policy is shown to ensure deterministic worst-case delay guarantees, and to yield a throughput-utility that differs from the optimally fair value by an amount that is inversely proportional to the delay guarantee. Our results hold for a general class of 1-hop networks, including packet switches and multi-user wireless systems with time varying reliability.
【Keywords】: Lyapunov methods; optimisation; packet radio networks; queueing theory; 1-hop networks; Lyapunov optimization; delay-based network utility maximization; max-weight policies; queue backlog index; random packet arrivals; stochastic networks; time varying reliability; worst-case delay; Algorithm design and analysis; Delay; Design optimization; Information analysis; Packet switching; Stability; Stochastic processes; Switches; Time varying systems; Utility programs
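The delay-index idea above can be illustrated with a minimal head-of-line-delay max-weight rule: at each slot, serve the user whose head-of-line waiting time, scaled by its current achievable rate, is largest. This is a generic index-policy sketch under assumed data structures, not the paper's exact utility-maximizing algorithm or its worst-case-delay mechanism.

```python
from collections import deque

def hol_delay_max_weight(queues, rates, now):
    """Pick the user maximizing (head-of-line delay) x (service rate).

    queues: dict user -> deque of packet arrival times (FIFO order).
    rates:  dict user -> currently achievable service rate.
    Returns the chosen user, or None if all queues are empty.
    """
    best_user, best_weight = None, 0.0
    for user, q in queues.items():
        if not q:
            continue
        hol_delay = now - q[0]          # waiting time of the oldest packet
        weight = hol_delay * rates[user]
        if weight > best_weight:
            best_user, best_weight = user, weight
    return best_user
```

With equal rates the rule serves the user with the oldest waiting packet; with time-varying rates it trades delay against opportunistic channel quality.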
【Paper Link】 【Pages】:2678-2686
【Authors】: Thang N. Dinh ; Ying Xuan ; My T. Thai ; E. K. Park ; Taieb Znati
【Abstract】: Assessing network vulnerability before potential disruptive events such as natural disasters or malicious attacks is vital for network planning and risk management. It enables us to seek out and safeguard against the most destructive scenarios in which the overall network connectivity falls dramatically. Existing vulnerability assessments mainly focus on investigating the inhomogeneous properties of graph elements, for example, node degree; however, these measures and the corresponding heuristic solutions can provide neither an accurate evaluation over general network topologies nor performance guarantees for large-scale networks. To this end, in this paper, we investigate a measure called pairwise connectivity and formulate this vulnerability assessment problem as a new graph-theoretical optimization problem called β-disruptor, which aims to discover the set of critical nodes/edges whose removal results in the maximum decline of the global pairwise connectivity. Our results consist of the NP-completeness and inapproximability proof of this problem, an O(log n log log n) pseudo-approximation algorithm for detecting the set of critical nodes, and an O(log^1.5 n) pseudo-approximation algorithm for detecting the set of critical edges. In addition, we devise an efficient heuristic algorithm and validate the performance of our model and algorithms through extensive simulations.
【Keywords】: graph theory; optimisation; risk management; telecommunication network planning; telecommunication network topology; telecommunication security; NP-completeness; critical edges; critical nodes; disruptive events; general network topology; global pairwise connectivity; graph elements; graph-theoretical optimization problem; malicious attacks; natural disasters; network connectivity; network planning; network vulnerability; node degree; pseudo-approximation algorithm; risk management; β-disruptor; Cities and towns; Communications Society; Degradation; Electric breakdown; Heuristic algorithms; Large-scale systems; Network topology; Optimization methods; Peer to peer computing; Risk management
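The pairwise connectivity measure above has a direct computation: sum |S|·(|S|−1)/2 over the connected components S of the graph. The sketch below computes it and pairs it with a simple greedy critical-node heuristic (repeatedly delete the node whose removal hurts pairwise connectivity most). The greedy rule is a generic stand-in, not the paper's pseudo-approximation algorithm; the adjacency-dict representation is an assumption.

```python
def components(adj, removed=frozenset()):
    """Connected components of an undirected graph, skipping removed nodes."""
    seen, comps = set(removed), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            comp.add(u)
            stack.extend(v for v in adj[u] if v not in seen)
        comps.append(comp)
    return comps

def pairwise_connectivity(adj, removed=frozenset()):
    """Number of connected node pairs: sum over components of C(|S|, 2)."""
    return sum(len(c) * (len(c) - 1) // 2 for c in components(adj, removed))

def greedy_critical_nodes(adj, k):
    """Remove k nodes one at a time, each time taking the node whose removal
    yields the smallest remaining pairwise connectivity (greedy heuristic)."""
    removed = set()
    for _ in range(k):
        best = min((v for v in adj if v not in removed),
                   key=lambda v: pairwise_connectivity(adj, removed | {v}))
        removed.add(best)
    return removed
```

On a 5-node path, the measure is 10, and the greedy heuristic correctly identifies the center node as the most critical single node.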
【Paper Link】 【Pages】:2687-2695
【Authors】: Ching-Ming Lien ; Cheng-Shang Chang ; Jay Cheng ; Duan-Shin Lee ; Jou-Ting Liao
【Abstract】: Inspired by the recent development of optical queueing theory, in this paper we study a class of multistage interconnection networks (MINs), called twister networks. Unlike the usual recursive constructions of MINs (either by two-stage expansion or by three-stage expansion), twister networks are constructed directly by a concatenation of bipartite networks. Moreover, the biadjacency matrices of these bipartite networks are sums of subsets of the powers of the circular shift matrix. Though MINs have been studied extensively in the literature, we show that twister networks have several distinct properties, including routability and conditionally nonblocking properties. In particular, we show that a twister network satisfying (A1) in the paper is routable, and packets can be self-routed through the twister network by using the C-transform developed in optical queueing theory. Moreover, we define an N-modulo distance and use it to show that a twister network satisfying (A2) in the paper is conditionally nonblocking if the N-modulo distance between any two outputs is not greater than twice the N-modulo distance between the corresponding two inputs. Such a conditionally nonblocking property allows us to show that a twister network with N inputs/outputs can be used as a p × p rotator and a p × p symmetric TDM switch for any 2 ≤ p ≤ N. As such, one can use a twister network as the switch fabric for a two-stage load-balanced switch that is capable of providing incremental updates of the number of linecards.
【Keywords】: time division multiplexing; C-transform; N-modulo distance; biadjacency matrices; bipartite networks; circular shift matrix; load balanced switches; multistage interconnection networks; optical queueing theory; packets; routability; switch fabric; symmetric TDM switch; twister networks; Communications Society; Emulation; Fabrics; Multiprocessor interconnection networks; Optical fiber networks; Optical interconnections; Optical switches; Queueing analysis; Routing; Time division multiplexing
【Paper Link】 【Pages】:2696-2704
【Authors】: Hung Q. Ngo ; Atri Rudra ; Anh N. Le ; Thanh-Nhan Nguyen
【Abstract】: The main task in analyzing a switching network design (including circuit-, multirate-, and photonic-switching) is to determine the minimum number of some switching components so that the design is non-blocking in some sense (e.g., strict- or wide-sense). We show that, in many cases, this task can be accomplished with a simple two-step strategy: (1) formulate a linear program whose optimum value is a bound for the minimum number we are seeking, and (2) specify a solution to the dual program, whose objective value by weak duality immediately yields a sufficient condition for the design to be non-blocking. We illustrate this technique through a variety of examples, ranging from circuit to multirate to photonic switching, from unicast to f-cast and multicast, and from strict- to wide-sense non-blocking. The switching architectures in the examples are of Clos-type and Banyan-type, which are the two most popular architectural choices for designing non-blocking switching networks. To prove the result in the multirate Clos network case, we formulate a new problem called DYNAMIC WEIGHTED EDGE COLORING, which generalizes the DYNAMIC BIN PACKING problem. We then design an algorithm with competitive ratio 5.6355 for the problem. The algorithm is analyzed using the linear programming technique. We also show that no algorithm can have a competitive ratio better than 4 − O(log n / n) for this problem. New lower and upper bounds for multirate wide-sense non-blocking Clos networks follow, improving upon a couple of 10-year-old bounds on the same problem.
【Keywords】: graph colouring; linear programming; multistage interconnection networks; Clos-type switching architectures; banyan-type switching architectures; dual program; dynamic bin packing problem; dynamic weighted edge coloring; linear programming; nonblocking switching networks; switching components; weak duality; Algorithm design and analysis; Communication switching; Communications Society; Computer science; Design engineering; Engineering profession; Linear programming; Photonics; Switching circuits; Unicast
【Paper Link】 【Pages】:2705-2713
【Authors】: Miklós Reiter ; Richard Steinberg
【Abstract】: Congestion-dependent pricing is a form of traffic management that would ensure the efficient allocation of bandwidth between users and applications. As the unpredictability of congestion prices creates revenue uncertainty for network providers and cost uncertainty for users, it has been suggested that longer-term financial agreements such as forward contracts could be used to manage these risks. In a network managed by a single service provider, long-term forward contracts are beneficial for both the provider and the users. We investigate whether forward contracts would be adopted by multiple service providers in a future Internet with congestion-dependent pricing. We develop a novel game-theoretic model of a multi-provider communication network with two complementary segments. Service on the upstream segment is provided by a single Internet Service Provider (ISP) and priced dynamically to maximize profit, while several smaller ISPs sell connectivity on the downstream network segment, with the advance possibility of entering into forward contracts with their users for some or all of their capacity. We show that the equilibrium forward contracting levels are necessarily asymmetric, with one downstream provider entering into fewer forward contracts than the other competitors, thus ensuring a high subsequent downstream price level. In practice, network providers will choose the extent of forward contracting strategically based not only on their risk tolerance, but also on the market structure in the interprovider network and their peers' actions.
【Keywords】: Internet; contracts; game theory; risk management; telecommunication network management; telecommunication traffic; Internet service provider; bandwidth allocation; complementary segments; congestion-dependent pricing; cost uncertainty; downstream network segment; forward contracts; game theoretic model; multiprovider communication network; network providers; risk tolerance; traffic management; Bandwidth; Communication networks; Costs; Financial management; Forward contracts; Pricing; Risk management; Telecommunication traffic; Uncertainty; Web and internet services
【Paper Link】 【Pages】:2714-2720
【Authors】: Tricha Anjali ; Gruia Calinescu ; Alexander Fortin ; Sanjiv Kapoor ; Nandakiran Kirubanandan ; Sutep Tongngam
【Abstract】: In this paper we address the issue of designing multi-path routing algorithms. Multi-path routing has the potential of improving the throughput, but requires buffers at the destination. Our model assumes a network with capacitated edges and a delay function associated with the network links (edges). We consider the problem of establishing a specified throughput from source to destination in the network, given bounds on the buffer size available at the destination and a bound on the maximum delay that paths are allowed to have. A related problem we consider is to establish bounds on the delay variance (which we call jitter) amongst the paths chosen for the multi-path routing scheme. We show that the problems are NP-complete and present pseudo-polynomial algorithms based on linear programming. We also propose practical heuristics and present experimental results on an existing network topology. The results are promising.
【Keywords】: computational complexity; jitter; linear programming; telecommunication network routing; telecommunication network topology; NP-complete problems; delay function; delay variance; jitter; linear programming; maximum delay paths; multipath network flows; multipath routing algorithms; network links; network throughput; network topology; pseudo-polynomial algorithms; Buffer storage; Communications Society; Computer science; Delay; Jitter; Linear programming; Routing; Statistics; Telecommunication traffic; Throughput
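The pseudo-polynomial flavor of the abstract above can be illustrated with the classic dynamic program for a cheapest single path whose integer delay stays within a bound, indexed by (node, delay budget). This is a textbook restricted-shortest-path sketch, not the paper's LP-based multi-path algorithm, and the edge-tuple format is an assumption.

```python
def delay_bounded_path_cost(edges, source, dest, max_delay):
    """Cheapest source->dest path with total integer delay <= max_delay.

    edges: list of (u, v, cost, delay) directed edges, each delay >= 1.
    Returns the minimum cost, or None if no feasible path exists.
    Table: best[d][v] = cheapest source->v path with total delay <= d.
    """
    INF = float("inf")
    nodes = {source, dest}
    for u, v, _, _ in edges:
        nodes.update((u, v))
    best = [{v: INF for v in nodes} for _ in range(max_delay + 1)]
    for d in range(max_delay + 1):
        best[d][source] = 0.0          # reaching the source is free
    for d in range(1, max_delay + 1):
        for v in nodes:                # a budget-d path is at least as good
            best[d][v] = min(best[d][v], best[d - 1][v])
        for u, v, cost, delay in edges:
            if delay <= d and best[d - delay][u] + cost < best[d][v]:
                best[d][v] = best[d - delay][u] + cost
    result = best[max_delay][dest]
    return None if result == INF else result
```

Tightening the delay bound forces the more expensive direct edge, which is exactly the throughput/delay trade-off the multi-path formulation must balance across many paths at once.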
【Paper Link】 【Pages】:2721-2729
【Authors】: Javad Lavaei ; John C. Doyle ; Steven H. Low
【Abstract】: This paper is concerned with understanding the connection between the existing Internet congestion control algorithms and optimal control theory. The available resource allocation controllers are mainly devised to drive the state of the system to a desired equilibrium point and, therefore, they are oblivious to the transient behavior of the closed-loop system. To take into account the real-time performance of the system, rather than merely its steady-state performance, the congestion control problem should be solved by maximizing a proper utility functional as opposed to a utility function. For this reason, this work aims to investigate which utility functionals the existing congestion control algorithms maximize. In particular, it is shown that there exist meaningful utility functionals whose maximization leads to the celebrated primal, dual, and primal/dual algorithms. An implication of this result is that a real network problem may be solved by regarding it as an optimal control problem on which some practical constraints, such as a real-time link capacity constraint, are imposed.
【Keywords】: Internet; closed loop systems; optimal control; resource allocation; telecommunication congestion control; Internet congestion control algorithms; celebrated primal-dual algorithms; closed-loop system; optimal control theory; real-time link capacity constraint; resource allocation controllers; steady-state performance; utility functionals; Communication system control; Communications Society; Control systems; Internet; Java; Loss measurement; Optimal control; Real time systems; Resource management; Steady-state
【Paper Link】 【Pages】:2730-2738
【Authors】: Parimal Parag ; Srinivas Shakkottai ; Jean-François Chamberland
【Abstract】: The traditional formulation of the total value of information transfer is a multi-commodity flow problem. Here, each data source is seen as generating a commodity along a fixed route, and the objective is to maximize the total system throughput under some concept of fairness, subject to capacity constraints of the links used. This problem is well studied under the framework of network utility maximization and has led to several different distributed congestion control schemes. However, this idea of value does not capture the fact that flows might associate value, not just with throughput, but with link-quality metrics such as packet delay, jitter and so on. The traditional congestion control problem is redefined to include individual source preferences. It is assumed that degradation in link quality seen by a flow adds up on the links it traverses, and the total utility is maximized in such a way that the quality degradation seen by each source is bounded by a value that it declares. Decoupling source-dissatisfaction and link-degradation through an "effective capacity" variable, a distributed and provably optimal resource allocation algorithm is designed to maximize system utility subject to these quality constraints. The applicability of our controller in different situations is illustrated, and results are supported through numerical examples.
【Keywords】: Internet; quality of service; resource allocation; telecommunication congestion control; Internet; distributed congestion control; individual source preferences; information transfer; link-quality metrics; multicommodity flow problem; network utility maximization; packet delay; quality degradation function; service guarantees; source dissatisfaction; total system throughput; value-aware resource allocation; Communications Society; Degradation; Delay; Internet; Jitter; Protocols; Resource management; Throughput; USA Councils; Utility programs
【Paper Link】 【Pages】:2739-2747
【Authors】: Donghyun Kim ; Wei Wang ; Xianyue Li ; Zhao Zhang ; Weili Wu
【Abstract】: In this paper, we study the problem of constructing quality fault-tolerant Connected Dominating Sets (CDSs) in homogeneous wireless networks, which can be defined as the minimum k-Connected m-Dominating Set ((k,m)-CDS) problem in Unit Disk Graphs (UDGs). We found that every existing approximation algorithm for this problem is incomplete for k ≥ 3, in the sense that it does not generate a feasible solution in some UDGs. Based on these observations, we propose a new polynomial time approximation algorithm for computing (3,m)-CDSs. We also show that our algorithm is correct and that its approximation ratio is a constant.
【Keywords】: approximation theory; graph theory; radio networks; 3-connected m-dominating sets; constant factor approximation; homogeneous wireless networks; minimum k-connected m-dominating set; polynomial time approximation algorithm; quality fault-tolerant connected dominating sets; unit disk graphs; Approximation algorithms; Communications Society; Computer networks; Electronic mail; Mathematics; Radio frequency; Routing protocols; Spine; Wireless networks; Wireless sensor networks
【Paper Link】 【Pages】:2748-2756
【Authors】: Chi Yi ; Wenye Wang
【Abstract】: Many real systems are hybrid networks which include infrastructure nodes in multi-hop wireless networks, such as sinks in sensor networks and mesh routers in mesh networks. However, we have very little understanding of network connectivity in such networks. Therefore, in this paper, we consider hybrid networks, denoted by H(λ, β), with ad hoc nodes and base stations, and prove how base stations can improve the connectivity of ad hoc nodes in the subcritical phase, that is, when the ad hoc node density λ is lower than the critical density λ_c. We find that with the existence of a positive density of base stations, i.e., a base-station density β > 0, where base stations have the same transmission range as ad hoc nodes, the number of connected ad hoc nodes is Θ(n) with probability nearly 1, where n is the number of ad hoc nodes. However, the size of the connected ad hoc component scales linearly with β when β is lower than c_1(λ), with probability nearly 1, which demonstrates a tremendous benefit of using base stations to enhance the connectivity of ad hoc nodes. Further, we study a hybrid network architecture that makes a significant connectivity improvement with a base-station transmission range r_β larger than the ad hoc range r_λ. Therefore, our results provide a theoretical understanding of to what extent ad hoc nodes can benefit from base stations in multi-hop wireless networks.
【Keywords】: ad hoc networks; radio networks; ad hoc nodes; base stations; connectivity analysis; large-scale hybrid wireless networks; mesh networks; mesh routers; multi-hop wireless networks; sensor networks; Ad hoc networks; Base stations; Large-scale systems; Mesh networks; Peer to peer computing; Sensor systems; Spread spectrum communication; Wireless mesh networks; Wireless networks; Wireless sensor networks
【Paper Link】 【Pages】:2757-2765
【Authors】: In Keun Son ; Shiwen Mao
【Abstract】: Although having high potential for broadband wireless access, wireless mesh networks are known to suffer from throughput and fairness problems, and are thus hard to scale to large size. To this end, hierarchical architectures provide a solution to this scalability problem. In this paper, we address the problem of design and optimization of a tiered wireless access network. At the lower tier, mesh routers are clustered based on traffic demands and delay requirements. The cluster heads are equipped with wireless optical transceivers and form the upper-tier free space optical (FSO) network. We first present a plane sweeping and clustering (PSC) algorithm aiming to minimize the number of clusters. PSC sweeps the network area and captures cluster members under delay and traffic load constraints. We then present an algebraic connectivity-based formulation for FSO network topology optimization and develop a greedy edge-appending algorithm that iteratively inserts edges to maximize algebraic connectivity. The proposed algorithms are analyzed and evaluated via simulations, and are shown to be highly effective as compared to the performance bounds derived in this paper.
【Keywords】: broadband networks; greedy algorithms; optical links; optimisation; radio access networks; telecommunication network routing; telecommunication network topology; telecommunication traffic; wireless mesh networks; algebraic connectivity-based formulation; broadband wireless access; delay requirements; design; greedy edge-appending algorithm; mesh routers; network topology; optimization; tiered wireless access network; traffic demands; upper tier free space optical network; wireless mesh networks; wireless optical transceivers; Clustering algorithms; Delay; Design optimization; Iterative algorithms; Optical fiber networks; Scalability; Telecommunication traffic; Throughput; Wireless mesh networks; Wireless networks
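The sweep-and-capture idea in the abstract above can be sketched as a greedy left-to-right pass that packs routers into a cluster until a cluster-head load cap is reached. This is a deliberately simplified stand-in for the paper's PSC algorithm: it enforces only a traffic-load constraint, ignores the delay constraint, and its router/load representation is an assumption.

```python
def sweep_cluster(routers, loads, capacity):
    """Greedy plane sweep: group mesh routers (by increasing x-coordinate)
    into clusters whose aggregate traffic load stays within `capacity`.

    routers: list of (x, y) positions; loads: parallel list of demands.
    Returns a list of clusters, each a list of router indices.
    """
    order = sorted(range(len(routers)), key=lambda i: routers[i][0])
    clusters, current, current_load = [], [], 0.0
    for i in order:
        # Close the current cluster when adding this router would overload it.
        if current and current_load + loads[i] > capacity:
            clusters.append(current)
            current, current_load = [], 0.0
        current.append(i)
        current_load += loads[i]
    if current:
        clusters.append(current)
    return clusters
```

A router whose own load exceeds the capacity still gets a singleton cluster, since a member is never split across cluster heads.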
【Paper Link】 【Pages】:2766-2774
【Authors】: Weiyi Zhao ; Jiang Xie
【Abstract】: Handoff management plays an important role in wireless mesh networks (WMNs) in delivering Quality of Service to mobile users. Inter-gateway (across subnets) movement in WMNs usually requires handoff support from multiple layers and thus causes non-negligible delays and packet loss. Previous solutions for handoff management in infrastructure WMNs mainly focus on intra-gateway mobility (e.g., a single gateway is assumed in IEEE 802.11s WMNs) and concentrate on reducing the handoff delay so as to reduce packet loss. Furthermore, some handoff issues involved in inter-gateway mobility in WMNs (e.g., the network-layer handoff detection issue) have not been properly addressed. In this paper, we present a novel architectural design, namely Explicit multicast-based (Xcast-based) WMNs (XMesh), to facilitate inter-gateway handoff management. The proposed XMesh architecture enables parallel execution of handoffs from multiple layers, in conjunction with an Xcast-based caching mechanism which builds on top of mesh routing protocols to guarantee minimum packet loss during handoffs in WMNs. The required number and optimal placement of special mesh routers that form the XMesh architecture are modeled as a set covering problem, which is solved with a greedy algorithm. A comprehensive simulation study shows that the XMesh architecture enables fast handoffs and re-establishment of session communications in the inter-gateway mobility environment. With both the parallel handoff execution and the data caching mechanism, our architecture offers a seamless handoff for supporting real-time applications.
【Keywords】: greedy algorithms; multicast communication; quality of service; wireless mesh networks; Xcast-based caching architecture; data caching mechanism; explicit multicast based inter-gateway handoff management; greedy algorithm; infrastructure wireless mesh networks; inter-gateway handoffs; minimum packet loss; parallel handoff execution; quality of service; Delay; IP networks; Internet; Multicast protocols; Nonhomogeneous media; Quality of service; Routing; Spread spectrum communication; Switches; Wireless mesh networks
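The set covering formulation mentioned above is typically solved with the classic greedy rule: repeatedly pick the candidate that covers the most still-uncovered elements, which gives a ln n approximation. The sketch below shows that generic greedy set cover; the element/candidate encoding is illustrative, not the paper's actual placement model.

```python
def greedy_set_cover(universe, candidates):
    """Classic greedy set cover.

    universe:   set of elements to cover (e.g. mesh routers that need a
                nearby special XMesh router).
    candidates: dict candidate_id -> set of elements it would cover
                (e.g. placement sites and their coverage areas).
    Returns the chosen candidate ids in selection order.
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the candidate covering the most still-uncovered elements.
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        if not candidates[best] & uncovered:
            raise ValueError("universe cannot be covered by the candidates")
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen
```

The selection order also doubles as a priority ranking: if the deployment budget runs out, the earliest-chosen sites cover the most routers.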
【Paper Link】 【Pages】:2775-2783
【Authors】: Yanhua Li ; Zhi-Li Zhang
【Abstract】: In this paper we develop a unified theoretical framework for estimating various transmission costs of packet forwarding in wireless networks. Our framework can be applied to the three routing paradigms, best path routing, opportunistic routing, and stateless routing, to which nearly all existing routing protocols belong. We illustrate how packet forwarding under each paradigm can be modeled as random walks on directed graphs (digraphs). By generalizing the theory of random walks that has primarily been developed for undirected graphs to digraphs, we show how various transmission costs can be formulated in terms of hitting times and hitting costs of random walks on digraphs. As representative examples, we apply the theory to three specific routing protocols, one under each paradigm. Extensive simulations demonstrate that the proposed digraph based analytical model can achieve more accurate transmission cost estimation over existing methods.
【Keywords】: graph theory; routing protocols; best path routing; digraphs; hitting costs; hitting times; opportunistic routing; packet forwarding; random walks; routing paradigms; routing protocols; stateless routing; transmission costs; undirected graphs; wireless networks; wireless routing; Analytical models; Costs; Delay; Energy consumption; Estimation theory; Graph theory; Routing protocols; Stochastic processes; Throughput; Wireless networks
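The hitting-time quantity above satisfies the standard first-step equations h(u) = 1 + Σ_v P(u,v)·h(v) with h(target) = 0, which can be solved by fixed-point iteration. This sketch computes expected hitting times for a given transition matrix; it illustrates the quantity the framework is built on, not the paper's closed forms or its weighted hitting-cost variants, and the dict-of-dicts representation is an assumption.

```python
def hitting_times(P, target, iters=2000):
    """Expected number of steps for a random walk to first reach `target`.

    P: dict u -> dict v -> transition probability (a digraph walk).
    Iterates h(u) = 1 + sum_v P[u][v] * h(v) with h(target) fixed at 0;
    converges when the target is reachable from every state.
    """
    h = {u: 0.0 for u in P}
    h[target] = 0.0
    for _ in range(iters):
        new = {}
        for u in P:
            if u == target:
                new[u] = 0.0
            else:
                new[u] = 1.0 + sum(p * h[v] for v, p in P[u].items())
        h = new
    return h
```

On the 3-node path 0-1-2 with a symmetric walk at node 1, the known hitting times to node 2 are h(0) = 4 and h(1) = 3, which the iteration recovers.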
【Paper Link】 【Pages】:2784-2792
【Authors】: Shan Chu ; Xin Wang
【Abstract】: The multiple-input multiple-output (MIMO) technique is considered one of the most promising emerging wireless technologies that can significantly improve transmission capacity and reliability in wireless mesh networks. While MIMO has been widely studied for single-link transmission scenarios at the physical layer as well as from a MAC perspective, its impact on the network layer, especially its interaction with routing, has not drawn enough research attention. In this paper, we investigate the problem of routing in MIMO-based wireless mesh networks. We mathematically formulate the MIMO-enabled multi-source multi-destination multi-hop routing problem as a multi-commodity flow problem by identifying the specific opportunities and constraints brought by MIMO transmissions, in order to provide the fundamental basis for MIMO-aware routing design. We then use this formulation to develop a polynomial time approximation solution that maximizes the scaling factor for the concurrent flows in the network. Moreover, we also consider a more practical case where controllers are distributed, and propose a distributed algorithm to minimize the congestion on the network links based on a steepest descent framework, which is proved to provide a fixed approximation ratio. The performance of the algorithms is evaluated through simulations and demonstrated to outperform counterpart strategies that do not consider MIMO features.
【Keywords】: MIMO communication; distributed algorithms; polynomial approximation; telecommunication network routing; wireless mesh networks; MIMO aware routing; distributed algorithm; multi commodity flow problem; multi source multi destination multi hop routing problem; multiple input multiple output technique; polynomial time approximation solution; wireless mesh networks; Communication system traffic control; MIMO; Mesh networks; Peer to peer computing; Polynomials; Receiving antennas; Routing; Spread spectrum communication; Transmitting antennas; Wireless mesh networks
【Paper Link】 【Pages】:2793-2801
【Authors】: Stanislav Miskovic ; Edward W. Knightly
【Abstract】: In this paper, we consider routing in multi-hop wireless mesh networks. We analyze three standardized and commonly deployed routing mechanisms that we term "node-pair discovery" primitives. We show that use of these primitives inherently yields inferior route selection, irrespective of the protocol that implements them. This behavior originates due to overhead reduction actions that systematically yield insufficient distribution of routing information, effectively hiding available paths from nodes. To address this problem, we propose a set of "deter and rescue" routing primitives that enable nodes to discover their hidden paths by exploiting already available historic routing information. We use extensive measurements on a large operational wireless mesh network to show that with node-pair discovery primitives, inferior route selections occur regularly and cause long-term throughput degradations for network users. In contrast, the deter and rescue primitives largely identify and prevent selection of inferior paths. Moreover, even when inferior paths are selected, the new primitives reduce their duration by several orders of magnitude, often to sub-second time scales.
【Keywords】: telecommunication network routing; wireless mesh networks; inferior route selections; multi-hop; node-pair discovery primitives; routing primitives; wireless mesh networks; Communications Society; Costs; Degradation; Mesh networks; Peer to peer computing; Routing protocols; Spread spectrum communication; Telecommunication traffic; Throughput; Wireless mesh networks
【Paper Link】 【Pages】:2802-2810
【Authors】: Chansu Yu ; Tianning Shen ; Kang G. Shin ; Jeong-Yoon Lee ; Young-Joo Suh
【Abstract】: Wireless multihop communication is becoming more important due to the increasing popularity of wireless sensor networks, wireless mesh networks, and mobile social networks. They are distinguished from conventional multihop networks in terms of scale, traffic intensity and/or node density. Being readily available in most 802.11 radios, the multirate facility appears useful to address some of these issues and is particularly helpful in high-density scenarios where inter-node distance is short, demanding a prudent multirate adaptation algorithm. However, communication at high bit rates mandates a large number of hops for a given node pair and thus can easily be depreciated, as per-hop overhead at several layers of the network protocol stack is aggregated over the increased number of hops. This paper presents a novel multihop, multirate adaptation mechanism, called Multihop Transmission OPportunity (MTOP), that allows a frame to be forwarded a number of hops consecutively while reducing the MAC-layer overhead between hops. This seemingly collision-prone multihop forwarding is proven to be safe via analysis and a USRP/GNU Radio-based experiment. The idea of MTOP is in clear contrast to, but not mutually exclusive with, the conventional opportunistic transmission mechanism, referred to as TXOP, where a node transmits multiple frames back-to-back when it gets an opportunity. We conducted an extensive simulation study via ns-2, demonstrating the performance advantage of MTOP under a wide range of network scenarios.
【Keywords】: access protocols; radio networks; wireless LAN; 802.11 radios; GNU radio-based experiment; MAC-layer overhead; USRP radio-based experiment; mobile social networks; multihop transmission opportunity; multirate adaptation mechanism; network protocol; wireless mesh networks; wireless multihop networks; wireless sensor networks; Bit rate; Computer networks; Mobile communication; Peer to peer computing; Protocols; Social network services; Spread spectrum communication; Telecommunication traffic; Wireless mesh networks; Wireless sensor networks
【Paper Link】 【Pages】:2811-2819
【Authors】: Feng Li ; Yinying Yang ; Jie Wu
【Abstract】: Smartphones are envisioned to provide promising applications and services. At the same time, smartphones are also increasingly becoming the target of malware. Many emerging malware can utilize the proximity of devices to propagate in a distributed manner, thus remaining unobserved and making detection substantially more challenging. Different from existing malware coping schemes, which are either totally centralized or purely distributed, we propose a Community-based Proximity Malware Coping scheme, CPMC. CPMC utilizes the social community structure, which reflects a stable and controllable granularity of security, in smartphone-based mobile networks. The CPMC scheme integrates short-term coping components, which deal with individual malware, and long-term evaluation components, which offer vulnerability evaluation of individual nodes. A closeness-oriented delegation forwarding scheme combined with a community-level quarantine method is proposed as the short-term coping components. These components contain a proximity malware by quickly propagating the signature of a detected malware into all communities while avoiding unnecessary redundancy. The long-term components offer vulnerability evaluation of neighbors, based on the observed infection history, to help users make comprehensive communication decisions. Extensive real- and synthetic-trace driven simulation results are presented to evaluate the effectiveness of CPMC.
【Keywords】: invasive software; mobile computing; mobile radio; telecommunication security; CPMC; closeness-oriented delegation forwarding scheme; community level quarantine method; community-based proximity malware coping scheme; short-term coping component; smartphone-based mobile network; social community structure; Bluetooth; Communications Society; Computer worms; History; Intelligent networks; Mobile communication; Network topology; Peer to peer computing; Smart phones; Social network services
【Paper Link】 【Pages】:2820-2828
【Authors】: Fengjun Li ; Bo Luo ; Peng Liu ; Chao-Hsien Chu
【Abstract】: With rising concerns about user privacy over the Internet, anonymous communication systems that hide the identity of a participant from its partner or third parties are highly desired. Existing approaches either rely on a relatively small set of pre-selected relay servers to redirect messages, or use structured peer-to-peer systems to multicast messages among a set of relay groups. The pre-selection approaches provide good anonymity, but suffer from node failures and scalability problems. The peer-to-peer approaches are subject to node churn and high maintenance overhead, which are intrinsic problems of P2P systems. In this paper, we present CAT, a node-failure-resilient anonymous communication protocol. In this protocol, relay servers are randomly assigned to relay groups. The initiator of a connection selects a set of relay groups, instead of relay servers, to set up anonymous paths. A valid path consists of relay servers, one from each selected relay group. The initiator explores valid anonymous paths via a probing process. Since the relative positions of relay servers in the path are commutative, there exist multiple anonymous yet commutative paths, which form an anonymous tunnel. When a connection encounters a node failure, it quickly switches to the nearest backup path in the tunnel through "path hopping", without involving the initiator or renegotiating the keys. Hence, the protocol is resilient to node failures. We also show that the protocol provides good anonymity even when facing various types of active and passive attacks. Finally, the operating cost of CAT is analyzed and shown to be similar to that of other node-based anonymous communication protocols.
【Keywords】: Internet; data privacy; multicast communication; peer-to-peer computing; protocols; telecommunication security; CAT; Internet; P2P systems; commutative path hopping; identity hiding; multicast messages; node-failure-resilient anonymous communication protocol; peer-to-peer systems; pre-selected relay servers; user privacy; Communications Society; Cryptography; Internet; Peer to peer computing; Privacy; Probes; Protocols; Relays; Scalability; Timing
【Paper Link】 【Pages】:2829-2837
【Authors】: Wojciech Galuba ; Panos Papadimitratos ; Marcin Poturalski ; Karl Aberer ; Zoran Despotovic ; Wolfgang Kellerer
【Abstract】: Wireless ad hoc networks are inherently vulnerable, as any node can disrupt the communication of potentially any other node in the network. Many solutions to this problem have been proposed. In this paper, we take a fresh and comprehensive approach that addresses simultaneously three aspects: security, scalability and adaptability to changing network conditions. Our communication protocol, Castor, occupies a unique point in the design space: it does not use any control messages except simple packet acknowledgments, and each node makes routing decisions locally and independently without exchanging any routing state with other nodes. Its novel design makes Castor resilient to a wide range of attacks and allows the protocol to scale to large network sizes and to remain efficient under high mobility. We compare Castor against four representative protocols from the literature. Our protocol achieves up to two times higher packet delivery rates, particularly in large and highly volatile networks, while incurring no or only limited additional overhead. At the same time, Castor is able to survive more severe attacks and recovers from them faster.
【Keywords】: routing protocols; telecommunication network reliability; telecommunication security; wireless mesh networks; Castor; control messages; scalable secure routing; wireless ad hoc networks; Ad hoc networks; Communication system control; Communications Society; Data communication; Feedback; Mobile communication; Peer to peer computing; Protocols; Routing; Telecommunication network reliability
【Paper Link】 【Pages】:2838-2846
【Authors】: Chi Zhang ; Xiaoyan Zhu ; Yang Song ; Yuguang Fang
【Abstract】: Recently, trust-based routing has received much attention as an effective way to improve security of wireless ad hoc networks (WANETs). Although various trust metrics have been designed and incorporated into the routing metrics, as far as we know, none of the existing works have used mathematical tools such as routing algebra to analyze the compatibility of trust related routing metrics and routing protocols in WANETs. In this paper, we first identify unique features of trust metrics compared with QoS-based routing metrics. Then, we provide a systematic analysis of the relationship between trust metrics and trust-based routing protocols by identifying the basic algebraic properties that a trust metric must have in order to work correctly and optimally with different generalized distance-vector or link-state routing protocols in WANETs. Moreover, we extend our framework to model the interactions between different trust-based routing protocols. Finally, our results are applied to check the compatibility of the trust metrics proposed in previous literature and the popular routing protocols used in WANETs.
【Keywords】: ad hoc networks; quality of service; routing protocols; telecommunication security; QoS-based routing metrics; WANET; basic algebraic properties; generalized distance-vector; link-state routing protocols; mathematical tools; quality of service; routing algebra; trust metrics; trust-based routing protocols; wireless ad hoc networks; Algebra; Communication system security; Communications Society; Computer security; IP networks; Intserv networks; Laboratories; Mobile ad hoc networks; National security; Routing protocols
【Paper Link】 【Pages】:2847-2855
【Authors】: V. J. Venkataramanan ; Xiaojun Lin ; Lei Ying ; Sanjay Shakkottai
【Abstract】: While there has been much progress in designing backpressure-based stabilizing algorithms for multihop wireless networks, end-to-end performance (e.g., end-to-end buffer usage) results have not been as forthcoming. In this paper, we study the end-to-end buffer usage (sum of buffer utilization along a flow path) over a network with general topology and with fixed, loop-free routes using a large-deviations approach. We first derive bounds on the best performance that any scheduling algorithm can achieve. Based on the intuition from the bounds, we propose a class of (backpressure-like) scheduling algorithms called αβ-algorithms. We show that the parameters α and β can be chosen such that the system under the αβ-algorithm performs arbitrarily closely to the best possible scheduler (formally, the decay rate function for end-to-end buffer overflow is shown to be arbitrarily close to optimal in the large-buffer regime). We also develop variants which have the same asymptotic optimality property, and also provide good performance in the small-buffer regime. Our results are substantiated using both analysis and simulation.
【Keywords】: queueing theory; telecommunication network topology; wireless mesh networks; asymptotic optimality property; end-to-end buffer usage; multihop wireless networks; Algorithm design and analysis; Delay; Network topology; Scheduling algorithm; Spread spectrum communication; Telecommunication traffic; Throughput; Traffic control; Wireless mesh networks; Wireless networks
【Paper Link】 【Pages】:2856-2864
【Authors】: Arun Sridharan ; Can Emre Koksal ; Elif Uysal-Biyikoglu
【Abstract】: Information-theoretic Broadcast Channels (BC) and Multiple Access Channels (MAC) enable a single node to transmit data simultaneously to multiple nodes, and multiple nodes to transmit data simultaneously to a single node, respectively. In this paper, we address the problem of link scheduling in multihop wireless networks containing nodes with BC and MAC capabilities. We first propose an interference model that extends protocol interference models, originally designed for point-to-point channels, to include the possibility of BC and MAC. Due to the high complexity of optimal link schedulers, we introduce the Multiuser Greedy Maximum Weight algorithm for link scheduling in multihop wireless networks containing BCs and MACs. Given a network graph, we develop new local pooling conditions and show that the performance of our algorithm can be fully characterized using the associated parameter, the multiuser local pooling factor. We provide examples of some network graphs, on which we apply local pooling conditions and derive the multiuser local pooling factor. We prove optimality of our algorithm in tree networks and show that the exploitation of BCs and MACs improves the throughput performance considerably in multihop wireless networks.
【Keywords】: Gaussian channels; broadcast channels; greedy algorithms; multi-access systems; radio access networks; scheduling; telecommunication links; Gaussian multiple access channels; Wireless Networks; broadcast channels; greedy link scheduler; link scheduling; multihop wireless networks; multiuser greedy maximum weight algorithm; protocol interference models; Broadcasting; Information theory; Interference constraints; Optimal scheduling; Peer to peer computing; Scheduling algorithm; Spread spectrum communication; Throughput; Tree graphs; Wireless networks
【Paper Link】 【Pages】:2865-2873
【Authors】: Dajun Qian ; Dong Zheng ; Junshan Zhang ; Ness B. Shroff
【Abstract】: We study the problem of distributed scheduling in multi-hop MIMO networks. We first develop a "MIMO-pipe" model that provides the upper layers a set of rates and SINR requirements, which capture the rate-reliability tradeoff in MIMO communications. The main thrust of this study is then dedicated to developing CSMA-based MIMO-pipe scheduling under the SINR model. We choose the SINR model over the extensively studied matching- or protocol-based interference models because it more naturally captures the impact of interference in wireless networks. The coupling among the links caused by the interference makes the problem of devising distributed scheduling algorithms particularly challenging. To that end, we explore CSMA-based MIMO-pipe scheduling from two perspectives. First, we consider an idealized continuous-time CSMA network. We propose a dual-band approach in which control messages are exchanged instantaneously over a channel separate from the data channel, and show that CSMA-based scheduling can achieve throughput optimality under the SINR model. Next, we consider a discrete-time CSMA network. To tackle the challenge due to the coupling caused by interference, we propose a "conservative" scheduling algorithm in which more stringent SINR constraints are imposed based on the MIMO-pipe model. We show that this suboptimal distributed scheduling can achieve an efficiency ratio bounded from below.
【Keywords】: MIMO communication; carrier sense multiple access; interference (signal); protocols; scheduling; telecommunication network reliability; MIMO communications; MIMO-pipe; SINR model; continuous time CSMA network; data channel; distributed scheduling; multihop MIMO networks; protocol based interference; rate-reliability tradeoff; wireless networks; Dual band; Interference; MIMO; Multiaccess communication; Optimal control; Scheduling algorithm; Signal to noise ratio; Spread spectrum communication; Throughput; Wireless networks
【Paper Link】 【Pages】:2874-2882
【Authors】: Sheu-Sheu Tan ; Dong Zheng ; Junshan Zhang ; James R. Zeidler
【Abstract】: With the convergence of multimedia applications and wireless communications, there is an urgent need for developing new scheduling algorithms to support real-time traffic with stringent delay requirements. However, distributed scheduling under delay constraints is not well understood and remains an under-explored area. A main goal of this study is to take some steps in this direction and explore distributed opportunistic scheduling (DOS) with delay constraints. Consider a network with M links which contend for the channel using random access. Distributed scheduling in such a network requires joint channel probing and distributed scheduling. Using optimal stopping theory, we explore DOS for throughput maximization, under two different types of average delay constraints: 1) a network-wide constraint, where the average delay should be no greater than ā; or 2) individual user constraints, where the average delay per user should be no greater than a_m, m = 1, ..., M. Since the standard techniques for constrained optimal stopping problems are based on sample-path arguments and are not applicable here, we take a stochastic Lagrangian approach instead. We characterize the corresponding optimal scheduling policies accordingly, and show that they have a pure threshold structure, i.e., data transmission is scheduled if and only if the rate is above a threshold. Specifically, in the case with a network-wide delay constraint, somewhat surprisingly, there exists a sharp transition associated with a critical time constant, denoted by ρ. If ā is less than ρ, the optimal rate threshold depends on ā; otherwise it does not depend on ā at all, and the optimal policy is the same as that in the unconstrained case. In the case with individual user delay constraints, we cast the threshold selection problem across links as a non-cooperative game, and establish the existence of Nash equilibria. Again, we observe a sharp transition associated with critical time constants {ρ_m}, in the sense that when ρ_m ≤ a_m for all users, the Nash equilibrium becomes the same one as if there were no delay constraints.
【Keywords】: ad hoc networks; game theory; scheduling; stochastic processes; telecommunication traffic; Nash equilibrium; ad-hoc communications; critical time constant; delay constraints; distributed opportunistic scheduling; joint channel probing; network-wide constraint; optimal stopping theory; real-time traffic; stochastic Lagrangian approach; threshold selection; throughput maximization; user constraints; Constraint theory; Convergence; Delay; Lagrangian functions; Optimal scheduling; Scheduling algorithm; Stochastic processes; Telecommunication traffic; Throughput; Wireless communication
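The pure-threshold structure described in the abstract above lends itself to a quick numerical illustration. The sketch below is not taken from the paper: the uniform rate distribution, the probing and transmission times, and the function name are all assumptions chosen for illustration. It simulates a threshold stopping rule in which each channel probe costs a fixed amount of time, and transmission is scheduled only when the sampled rate exceeds the threshold.

```python
import random

def avg_throughput(threshold, probe_time=1.0, tx_time=5.0, trials=20000, seed=1):
    """Monte-Carlo throughput of a pure-threshold stopping rule: keep
    probing (each probe costs probe_time) until the sampled rate meets
    the threshold, then transmit for tx_time at that rate."""
    rng = random.Random(seed)
    total_bits = total_time = 0.0
    for _ in range(trials):
        t = 0.0
        while True:
            t += probe_time
            rate = rng.random()          # channel rate drawn uniformly in [0, 1)
            if rate >= threshold:
                total_bits += rate * tx_time
                total_time += t + tx_time
                break
    return total_bits / total_time

print(round(avg_throughput(0.0), 3))   # always transmit on the first probe
print(round(avg_throughput(0.6), 3))   # selective: skips poor channel states
```

With these toy parameters, the selective threshold yields higher long-run throughput than always transmitting on the first probe, mirroring the optimal-stopping intuition that waiting for a good channel state can pay for the extra probing time.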
【Paper Link】 【Pages】:2883-2891
【Authors】: Shihuan Liu ; Lei Ying ; R. Srikant
【Abstract】: We consider multiuser scheduling in wireless networks with channel variations and flow-level dynamics. Recently, it has been shown that the MaxWeight algorithm, which is throughput-optimal in networks with a fixed number of users, fails to achieve the maximum throughput in the presence of flow-level dynamics. In this paper, we propose a new algorithm, called workload-based scheduling with learning, which is provably throughput-optimal, requires no prior knowledge of channels and user demands, and performs significantly better than previously suggested algorithms.
【Keywords】: multiuser channels; radio networks; scheduling; MaxWeight algorithm; channel variations; flow-level dynamics; multiuser scheduling; throughput-optimal opportunistic scheduling; user demands; wireless networks; workload-based scheduling; Base stations; Communications Society; Dynamic scheduling; Resource management; Scheduling algorithm; Stability; Telecommunication traffic; Throughput; Wireless communication; Wireless networks
【Paper Link】 【Pages】:2892-2900
【Authors】: Jun Li ; Shuang Yang ; Xin Wang ; Baochun Li
【Abstract】: Distributed storage systems provide large-scale reliable data storage by storing a certain degree of redundancy in a decentralized fashion on a group of storage nodes. To recover from data losses due to the instability of these nodes, whenever a node leaves the system, additional redundancy should be regenerated to compensate for such losses. In this context, the general objective is to minimize the volume of actual network traffic caused by such regenerations. A class of codes, called regenerating codes, has been proposed to achieve an optimal trade-off curve between the amount of storage space required for storing redundancy and the network traffic during the regeneration. In this paper, we jointly consider the choices of regenerating codes and network topologies. We propose a new design, referred to as RCTREE, that combines the advantage of regenerating codes with a tree-structured regeneration topology. Our focus is the efficient utilization of network links, in addition to the reduction of the regeneration traffic. Through extensive analysis and quantitative evaluations, we show that RCTREE is able to achieve both fast and stable regeneration, even with departures of storage nodes during the regeneration.
【Keywords】: distributed processing; storage management; telecommunication network topology; telecommunication traffic; tree data structures; RCTREE; distributed storage systems; large-scale reliable data storage; network topologies; network traffic; regenerating codes; tree-structured data regeneration; Bandwidth; Communications Society; Computer science; Large-scale systems; Maintenance; Memory; Network topology; Peer to peer computing; Redundancy; Telecommunication traffic
【Paper Link】 【Pages】:2901-2909
【Authors】: Jilong Kuang ; Laxmi N. Bhuyan
【Abstract】: Current state-of-the-art task scheduling algorithms for network packet processing schedule the program into a parallel-pipeline topology on network processors to maximize the throughput. However, there has been no existing work targeting power budget for packet processing on off-the-shelf multicore architectures. As energy consumption, reliability and cooling cost for packet processing systems become increasingly important, it is necessary to integrate power-awareness into a scheduler to meet the power budget. In this paper, we propose a novel scheduling algorithm to optimize both throughput and latency given a power budget for network packet processing on multicore architectures. This algorithm addresses power-aware parallel-pipeline scheduling problem by applying per-core DVFS to optimally adjust frequency on each core. We implement our algorithm on an AMD machine with two Quad-Core Opteron 2350 processors and compare the results with existing algorithms given the same power budget. For six real packet processing applications, our algorithm improves throughput and reduces latency by an average of 64.6% and 25.2%, respectively.
【Keywords】: computer networks; data communication equipment; microprocessor chips; pipeline processing; power aware computing; processor scheduling; AMD machine; Quad Core Opteron 2350 processor; latency; multicore architecture; network packet processing; per core DVFS; power aware parallel pipeline scheduling; power budget; task scheduling algorithms; throughput optimization; Cooling; Costs; Delay; Energy consumption; Multicore processing; Network topology; Power system reliability; Processor scheduling; Scheduling algorithm; Throughput
【Paper Link】 【Pages】:2910-2918
【Authors】: Bo Zhang ; T. S. Eugene Ng
【Abstract】: Multiple packet filters serving different purposes (e.g., firewalling, QoS) and different virtual routers are often deployed on a single physical router. The HyperCuts decision tree is one efficient data structure for performing packet filter matching in software. Constructing a separate HyperCuts decision tree for each packet filter is not memory efficient. A natural alternative is to construct shared HyperCuts decision trees to more efficiently support multiple packet filters. However, we experimentally show that naively classifying packet filters into shared HyperCuts decision trees may significantly increase the memory consumption and the height of the trees. To help decide which subset of packet filters should share a HyperCuts decision tree, we first identify a number of important factors that collectively impact the efficiency of the resulting shared HyperCuts decision tree. Based on the identified factors, we then propose to use machine learning techniques to predict whether any pair of packet filters should share a tree. Given the pair-wise prediction matrix, a greedy heuristic algorithm is used to classify packet filters into a number of shared HyperCuts decision trees. Our experiments using both real packet filters and synthetic packet filters show that the shared HyperCuts decision trees consume considerably less memory.
【Keywords】: decision trees; learning (artificial intelligence); routing protocols; efficient shared decision trees; machine learning techniques; multiple packet filters; physical router; virtual routers; Classification tree analysis; Data structures; Decision trees; Filters; Load management; Quality of service; Random access memory; Routing; Switches; Virtual private networks
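As a rough illustration of the final step the abstract above describes — turning a pair-wise prediction matrix into groups of filters that share a tree — here is a minimal greedy sketch. The function name, the 0/1 matrix encoding, and the "agree with every current member" grouping rule are assumptions for illustration, not the paper's actual heuristic.

```python
def group_filters(num_filters, should_share):
    """Greedily partition filter indices into groups: each remaining filter
    joins a group only if the pairwise predictor says it should share a tree
    with every filter already in that group."""
    groups = []
    unassigned = list(range(num_filters))
    while unassigned:
        seed = unassigned.pop(0)          # first remaining filter seeds a new group
        group = [seed]
        for f in list(unassigned):
            if all(should_share[f][g] for g in group):
                group.append(f)
                unassigned.remove(f)
        groups.append(group)
    return groups

# Toy pair-wise prediction matrix for 4 filters: filters 0,1 pair well,
# as do filters 2,3, but the two pairs should not share a tree.
M = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
print(group_filters(4, M))  # → [[0, 1], [2, 3]]
```

A greedy pass like this is linear in the number of pairs inspected; the quality of the resulting shared trees rests entirely on the learned pairwise predictions feeding the matrix.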
【Paper Link】 【Pages】:2919-2927
【Authors】: Dimitrios Koutsonikolas ; Chih-Chun Wang ; Y. Charlie Hu
【Abstract】: The use of random linear network coding (NC) has significantly simplified the design of opportunistic routing (OR) protocols by removing the need of coordination among forwarding nodes for avoiding duplicate transmissions. However, NC-based OR protocols face a new challenge: How many coded packets should each forwarder transmit? To avoid the overhead of feedback exchange, most practical existing NC-based OR protocols compute offline the expected number of transmissions for each forwarder using heuristics based on periodic measurements of the average link loss rates and the ETX metric. Although attractive due to their minimal coordination overhead, these approaches may suffer significant performance degradation in dynamic wireless environments with continuously changing levels of channel gains, interference, and background traffic. In this paper, we propose CCACK, a new efficient NC-based OR protocol. CCACK exploits a novel Cumulative Coded ACKnowledgment scheme that allows nodes to acknowledge network coded traffic to their upstream nodes in a simple way, oblivious to loss rates, and with practically zero overhead. In addition, the cumulative coded acknowledgment scheme in CCACK enables an efficient credit-based, rate control algorithm. Our evaluation shows that, compared to MORE, a state-of-the-art NC-based OR protocol, CCACK improves both throughput and fairness, by up to 20× and 124%, respectively, with average improvements of 45% and 8.8%, respectively.
【Keywords】: linear codes; network coding; routing protocols; OR protocol; cumulative coded acknowledgments; efficient network coding; opportunistic routing; random linear network coding; rate control algorithm; routing protocols; Communication system traffic control; Degradation; Feedback; Interference; Loss measurement; Network coding; Performance gain; Propagation losses; Routing protocols; Telecommunication traffic
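To make the coded-acknowledgment idea above concrete, here is a minimal sketch over GF(2): a downstream node advertises a vector orthogonal to every coding vector it has received, and an upstream node uses that vector to test whether its own packets were heard. This is a deliberate simplification of CCACK's actual construction (which operates over a larger field and combines several hashed ACK vectors to control false positives); the specific vectors below are illustrative assumptions.

```python
def dot_gf2(u, v):
    """Inner product of two bit-vectors over GF(2)."""
    return sum(a & b for a, b in zip(u, v)) % 2

# Coding vectors of packets the downstream node has overheard.
received = [[1, 0, 1, 0],
            [0, 1, 1, 0]]

# ACK vector chosen orthogonal (mod 2) to everything received.
ack = [1, 1, 1, 0]
assert all(dot_gf2(ack, r) == 0 for r in received)

# Upstream test: any vector in the span of the heard packets is
# orthogonal to the ACK, so the upstream node can mark it as covered.
print(dot_gf2(ack, [1, 1, 0, 0]))  # sum of the two heard vectors → 0
# A vector outside that span can fail the test, flagging novel information.
print(dot_gf2(ack, [0, 0, 1, 0]))  # → 1
```

Note that a single ACK vector admits false positives (a novel vector can happen to be orthogonal to it), which is precisely why a practical scheme must aggregate evidence from multiple randomized ACK vectors.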
【Paper Link】 【Pages】:2928-2936
【Authors】: Jieun Yu ; Heejun Roh ; Wonjun Lee ; Sangheon Pack ; Ding-Zhu Du
【Abstract】: Cooperative Communication (CC) is a technology that allows multiple nodes to simultaneously transmit the same data. It can save power and extend transmission coverage. However, prior research work on topology control considers CC only in the aspect of energy saving, not that of coverage extension. We identify the challenges in the development of a centralized topology control scheme, named Cooperative Bridges, which reduces transmission power of nodes as well as increases network connectivity. We observe that CC can bridge (link) disconnected networks. We propose two algorithms that select the most energy efficient neighbor nodes, which assist a source to communicate with a destination node; an optimal method and a greedy heuristic. In addition we consider a distributed version of the proposed topology control scheme. Our findings are substantiated by an extensive simulation study, through which we show that the Cooperative Bridges scheme substantially increases the connectivity while consuming a similar amount of transmission power compared to other existing topology control schemes.
【Keywords】: ad hoc networks; distributed algorithms; greedy algorithms; telecommunication network topology; centralized topology control; cooperative bridge; cooperative communication; cooperative wireless ad hoc networks; greedy heuristic; optimal method; transmission power; Bridges; Centralized control; Communication system control; Communications Society; Computer science; Data engineering; Mobile ad hoc networks; Network topology; Peer to peer computing; Power engineering and energy
【Paper Link】 【Pages】:2937-2945
【Authors】: Tae-Suk Kim ; Serdar Vural ; Ioannis Broustis ; Dimitris Syrivelis ; Srikanth V. Krishnamurthy ; Thomas F. La Porta
【Abstract】: Network coding has been proposed as a technique that can potentially increase the transport capacity of a wireless network via processing and mixing of data packets at intermediate routers. However, most previous studies either assume a fixed transmission rate or do not consider the impact of using diverse rates on the network coding gain. Since in many cases, network coding implicitly relies on overhearing, the choice of the transmission rate has a big impact on the achievable gains. The use of higher rates works in favor of increasing the native throughput; however, it may in many cases work against effective overhearing. In other words, there is a tension between the achievable network coding gain and the inherent rate gain possible on a link. In this paper our goal is to drive the network towards achieving the best trade-off between these two contradictory effects. Towards this, we design a distributed framework that (a) facilitates the choice of the best rate on each link while considering the need for overhearing and (b) dictates the choice of which decoding recipient will acknowledge the reception of an encoded packet. We demonstrate that both of these features contribute significantly towards gains in throughput. We extensively simulate our framework in a variety of topological settings. We also fully implement it on real hardware and demonstrate its applicability and performance gains via proof-of-concept experiments on our wireless testbed. We show that our framework yields throughput gains of up to 390% as compared to what is achieved in a rate-unaware network coding framework.
【Keywords】: decoding; network coding; radio networks; telecommunication congestion control; data packets; decoding; intermediate routers; joint network coding; transmission rate control; wireless networks; Collaborative work; Communication system control; Communications Society; Decoding; Government; Lifting equipment; Network coding; Throughput; Transmitters; Wireless networks
【Paper Link】 【Pages】:2946-2954
【Authors】: Sofiane Hassayoun ; Patrick Maillé ; David Ros
【Abstract】: Network coding (NC) is a promising technique to improve throughput in wireless mesh networks. However, some previous studies have found that the actual performance improvements offered by NC may be much lower than the gains predicted by theory. This is especially so when the bulk of the traffic carried by the mesh network is composed of TCP flows. By means of both mathematical modeling and ns-2 simulations, we explore some issues due to random packet loss in coded mesh networks. Our results illustrate how the use of network coding may induce synchronization between TCP flows; they also suggest that, under particular conditions of random packet loss, the aggregate throughput may actually be lower when NC is used.
【Keywords】: network coding; random processes; transport protocols; wireless mesh networks; TCP; network coding; random packet loss; wireless mesh network; Aggregates; Mathematical model; Mesh networks; Network coding; Performance gain; Performance loss; Telecommunication traffic; Throughput; Traffic control; Wireless mesh networks
【Paper Link】 【Pages】:2955-2963
【Authors】: Raed T. Al-Zubi ; Marwan Krunz
【Abstract】: Ultra-wideband (UWB) communications has emerged as a promising technology for high data rate wireless personal area networks (WPANs). In this paper, we address a key issue that impacts the performance of multi-hop, multi-rate UWB-based WPANs, namely joint routing and rate selection. Arbitrary selection of routes (including direct links) and transmission rates along these routes results in unnecessarily long channel reservation time and high blocking rate for prospective reservations, and leads to low network throughput. To remedy this situation, we propose a novel overhearing-aware joint routing and rate selection (ORRS) scheme, which improves the network throughput by exploiting the dependence between the channel reservation time and the multi-rate capability of an UWB system. At the same time, ORRS takes advantage of packet overhearing, a typical characteristic of broadcast communications. For a given source-destination pair, ORRS aims at selecting a path and its transmission rates that achieve the minimum reservation time, leading to low blocking rate for prospective reservations and high network throughput. We show that achieving this goal while simultaneously exploiting packet overhearing and satisfying a target packet delivery probability over the selected route leads to an NP-hard problem. Accordingly, ORRS resorts to approximate solutions (proactive and reactive) to find a near-optimal result with reasonable computational/communication overhead. We further propose other variants that exploit packet overhearing in different ways to improve ORRS performance.
【Keywords】: communication complexity; personal area networks; probability; telecommunication network routing; ultra wideband communication; wireless channels; NP-hard problem; ORRS performance; UWB communication; blocking rate; broadcast communication; channel reservation time; communication overhead; computational overhead; direct link; high data rate wireless personal area network; multihop multirate UWB-based WPAN; network throughput; overhearing-aware joint routing; packet delivery probability; packet overhearing; rate selection; source-destination pair; transmission rate; ultra-wideband communication; Communications Society; Computer aided manufacturing; Desktop publishing; Proposals; Radar tracking; Routing; Spread spectrum communication; Throughput; Ultra wideband technology; Wireless personal area networks
【Paper Link】 【Pages】:2964-2972
【Authors】: Rolando Menchaca-Méndez ; J. J. Garcia-Luna-Aceves
【Abstract】: A new context-aware routing framework for multicast and unicast routing in mobile ad hoc networks is introduced. This framework, which is called CAROM (Context-Aware Routing over Ordered Meshes), uses regions of interest to identify connected components of the network that span sources and destinations of interest, restricting signaling to occur mostly within these regions. Context information is used to compute routing meshes composed of shortest paths located inside regions of interest. Experimental results based on extensive simulations show that CAROM attains data delivery and end-to-end delays similar to or better than those of traditional unicast and multicast routing schemes for MANETs (AODV, OLSR, ODMRP), and that CAROM incurs only a fraction of the signaling overhead of traditional routing schemes.
【Keywords】: ad hoc networks; mobile radio; telecommunication network routing; CAROM; MANET; context-aware routing over ordered meshes; data delivery; mobile ad hoc networks; multicast routing; scalable integrated routing; shortest path; signaling overhead; unicast routing; Communications Society; Context; Costs; Delay; Mobile ad hoc networks; Multicast protocols; Robustness; Routing protocols; Traffic control; Unicast
【Paper Link】 【Pages】:2973-2981
【Authors】: Fragkiskos Papadopoulos ; Dmitri V. Krioukov ; Marián Boguñá ; Amin Vahdat
【Abstract】: We show that complex (scale-free) network topologies naturally emerge from hyperbolic metric spaces. Hyperbolic geometry facilitates maximally efficient greedy forwarding in these networks. Greedy forwarding is topology-oblivious. Nevertheless, greedy packets find their destinations with 100% probability following almost optimal shortest paths. This remarkable efficiency persists even in highly dynamic networks. Our findings suggest that forwarding information through complex networks, such as the Internet, is possible without the overhead of existing routing protocols, and may also find practical applications in overlay networks for tasks such as application-level routing, information sharing, and data distribution.
【Keywords】: complex networks; geometry; routing protocols; telecommunication network topology; dynamic scale-free networks; greedy forwarding; hyperbolic geometry; hyperbolic metric spaces; network topologies; overlay networks; routing protocols; Cities and towns; Complex networks; Extraterrestrial measurements; IP networks; Information geometry; Network topology; Peer to peer computing; Routing; Social network services; Wireless sensor networks
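The greedy forwarding rule described in this abstract can be sketched in a few lines. The following is a minimal Python illustration under our own assumptions (not taken from the paper): nodes are given in native polar coordinates (r, theta) of the hyperbolic plane, distances come from the hyperbolic law of cosines, and the helper names are hypothetical.

```python
import math

def hyperbolic_distance(p1, p2):
    """Distance between two points in native polar coordinates (r, theta)
    of the hyperbolic plane, via the hyperbolic law of cosines:
    cosh d = cosh r1 cosh r2 - sinh r1 sinh r2 cos(dtheta)."""
    r1, t1 = p1
    r2, t2 = p2
    dtheta = math.pi - abs(math.pi - abs(t1 - t2))  # angle gap in [0, pi]
    arg = (math.cosh(r1) * math.cosh(r2)
           - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta))
    return math.acosh(max(arg, 1.0))  # clamp guards rounding just below 1

def greedy_forward(current, neighbors, destination):
    """Topology-oblivious greedy step: hand the packet to the neighbor
    closest to the destination in the hyperbolic metric; return None if
    no neighbor improves on the current node (a local minimum)."""
    best = min(neighbors, key=lambda n: hyperbolic_distance(n, destination))
    if hyperbolic_distance(best, destination) < hyperbolic_distance(current, destination):
        return best
    return None
```

For points with the same angular coordinate the metric reduces to |r1 - r2|, which makes the rule easy to check by hand; the paper's result is that on scale-free topologies embedded this way, repeating this step delivers packets along nearly shortest paths.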
【Paper Link】 【Pages】:2982-2990
【Authors】: François Baccelli ; Bartlomiej Blaszczyszyn
【Abstract】: We study a slotted version of the Aloha medium access (MAC) protocol in a Mobile Ad-hoc Network (MANET). Our model features transmitters randomly located in the Euclidean plane according to a Poisson point process, and a set of receivers representing the next hop from every transmitter. We concentrate on the so-called outage scenario, where a successful transmission requires a Signal-to-Interference-and-Noise Ratio (SINR) larger than some threshold. We analyze the local delays in such a network, namely the number of time slots required for nodes to transmit a packet to their prescribed next-hop receivers. The analysis depends very much on the receiver scenario and on the variability of the fading. In most cases, each node has a finite-mean geometric random delay and thus a positive next-hop throughput. However, the spatial (or large-population) averaging of these individual finite mean delays leads to infinite values in several practical cases, including the Rayleigh fading and positive thermal noise case. In some cases the spatial average exhibits an interesting phase transition phenomenon: it is finite when certain model parameters (receiver distance, thermal noise, Aloha medium access probability) are below a threshold and infinite above it. To the best of our knowledge, this phenomenon, which we propose to call the wireless contention phase transition, has not been discussed in the literature. We comment on the relationships between the above facts and the heavy tails found in the so-called "RESTART" algorithm. We argue that the spatial average of the mean local delays is infinite primarily because of the outage logic, where one transmits full packets at time slots when the receiver is covered at the required SINR and wastes all the other time slots. This results in the "RESTART" mechanism, which in turn explains why we have an infinite spatial average. Adaptive coding offers another nice way of breaking the outage/RESTART logic. We show examples where the average delays are finite in the adaptive coding case, whereas they are infinite in the outage case.
【Keywords】: access protocols; ad hoc networks; adaptive codes; mobile radio; ALOHA medium access protocol; MAC protocol; RESTART algorithm; adaptive coding; finite-mean geometric random delay; local delays; mobile ad hoc network; next-hop receivers; signal-to-interference-and-noise; wireless contention phase transition; Access protocols; Ad hoc networks; Adaptive coding; Delay; Logic; Media Access Protocol; Mobile ad hoc networks; Rayleigh channels; Signal to noise ratio; Transmitters
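The outage/RESTART effect sketched in this abstract can be illustrated with a toy model of our own making (not the paper's full Poisson model): interference is dropped, only Rayleigh fading and thermal noise are kept, so with unit transmit power the per-slot coverage probability is p = exp(-T * N * r^alpha) and the local delay is geometric with mean 1/p. All function names and parameter values below are hypothetical.

```python
import math
import random

def success_probability(r, noise, threshold, alpha=4.0):
    """Per-slot coverage probability under Rayleigh fading (exponentially
    distributed received power) and thermal noise only: p = exp(-T*N*r^alpha)."""
    return math.exp(-threshold * noise * r ** alpha)

def mean_local_delay(r, noise, threshold, alpha=4.0):
    """A geometric number of slots is needed until the first success,
    so the mean local delay for a hop of length r is 1/p slots."""
    return 1.0 / success_probability(r, noise, threshold, alpha)

def spatial_average(sample_sizes, noise=0.1, threshold=1.0, seed=1):
    """Empirical spatial average of per-node mean delays when the hop
    distance r is itself random (Rayleigh-type here, via a Weibull with
    shape 2). Because 1/p = exp(T*N*r^alpha) is heavy-tailed in r, the
    running average tends to keep growing with the sample size -- the
    mechanism behind the infinite spatial average in the abstract."""
    rng = random.Random(seed)
    out = []
    for n in sample_sizes:
        rs = [rng.weibullvariate(1.0, 2.0) for _ in range(n)]
        out.append(sum(mean_local_delay(r, noise, threshold) for r in rs) / n)
    return out
```

Each node's delay is well-behaved (finite mean 1/p), yet averaging 1/p over random hop distances can diverge, which is exactly the distinction between individual and spatial averages that the paper analyzes.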