Proceedings of the IEEE INFOCOM 2013, Turin, Italy, April 14-19, 2013. IEEE 【DBLP Link】
【Paper Link】 【Pages】:1-2
【Authors】: Xuan Nam Nguyen ; Damien Saucez ; Thierry Turletti
【Abstract】: Content-Centric Networking (CCN) is designed for efficient content dissemination and supports caching contents on the path from content providers to content consumers to improve user experience and reduce costs. However, this strategy is not optimal inside a domain. In this paper, we propose a solution to improve caching in CCN using a Software-Defined Networking approach.
【Keywords】: cache storage; cost reduction; protocols; OpenFlow; caching efficiency; content dissemination; content-centric networking; cost reduction; software-defined networking approach; Bandwidth; Computer architecture; IP networks; Internet; Optimization; Protocols; Routing; Content-Centric Networking; OpenFlow; Software-Defined Networking
【Paper Link】 【Pages】:3-4
【Authors】: Mathieu Michel ; Bruno Quoitin
【Abstract】: As part of a search for a better understanding of MAC/routing interactions in Wireless Sensor Networks, this paper presents an analysis of the behaviour of the AODV routing protocol when combined with a duty-cycle MAC protocol. In particular, the paper focuses on the time it takes for a route request to be answered in the presence of increased latency due to the MAC protocol sleep schedule. The paper quantifies how the response time varies with the duty-cycle ratio. Moreover, recommendations are provided to enhance this response time.
【Keywords】: Delays; Media Access Protocol; Routing; Routing protocols; Schedules; Time factors; Wireless sensor networks
【Paper Link】 【Pages】:5-6
【Authors】: João Marco C. Silva ; Paulo Carvalho ; Solange Rito Lima
【Abstract】: Traffic sampling is a crucial step towards scalable network measurements, enclosing manifold challenges. The wide variety of foreseeable sampling scenarios demands a modular view of sampling components and features, grounded on a consistent architecture. Articulating the measurement scope, the required information model and the adequate sampling strategy is a major design issue for achieving an encompassing and efficient sampling solution. This is the main focus of the present work, where a layered architecture, a taxonomy of existing sampling techniques distinguishing their inner characteristics, and a flexible framework able to combine these characteristics are introduced. In addition, a new multiadaptive technique based on linear prediction significantly reduces the measurement overhead, while assuring that traffic samples reflect the statistical behavior of the global traffic under analysis.
【Keywords】: Internet; telecommunication traffic; global traffic; information model; layered architecture; linear prediction; multiadaptive technique proposal; scalable network measurements; traffic sampling scope; Accuracy; Computer architecture; Current measurement; Loss measurement; Process control; Taxonomy
【Paper Link】 【Pages】:7-8
【Authors】: Antonio Villani ; Daniele Riboni ; Domenico Vitali ; Claudio Bettini ; Luigi V. Mancini
【Abstract】: Through this software the authors aim to promote the sharing of network logs within the research community. The (k, j)-obfuscation technique opens up several interesting future directions. In fact, many networking and security tasks can be re-thought based on obfuscated datasets, for instance, quality of service (QoS), traffic classification, anomaly detection and more.
【Keywords】: quality of service; telecommunication computing; telecommunication security; telecommunication traffic; (k, j)obfuscation; NetFlow obfuscation; Obsidian; QoS; anomaly detection; efficient framework; network log sharing; obfuscated datasets; quality of service; research community; scalable framework; security tasks; traffic classification; Electronic mail; IP networks; Indexes; Knowledge engineering; Random access memory; Security; Standards
【Paper Link】 【Pages】:9-10
【Authors】: Alessandro Cammarano ; Dora Spenza ; Chiara Petrioli
【Abstract】: Structural health monitoring is a vital tool to help engineers improve the safety of critical structures, avoiding the risks of catastrophic failures. Wireless sensor networks (WSNs) are a very promising technology for structural health monitoring, as they can provide a quality of monitoring similar to conventional (wired) SHM systems at lower cost. In addition, WSNs are both non-intrusive and non-disruptive and can be employed from the very early stages of construction. The main goal of this work is to investigate the feasibility of a WSN with energy-harvesting capabilities for structural health monitoring, specifically targeting underground tunnels.
【Keywords】: condition monitoring; energy harvesting; failure (mechanical); geotechnical engineering; railways; risk analysis; structural engineering; tunnels; wireless sensor networks; catastrophic failures; critical structure safety; energy harvesting WSN; risk analysis; structural health monitoring; underground train tunnels; wireless sensor networks; Ad hoc networks; Energy harvesting; Monitoring; Strain; Wireless communication; Wireless sensor networks; Wires
【Paper Link】 【Pages】:11-12
【Authors】: Anooq Muzaffar Sheikh ; Attilio Fiandrotti ; Enrico Magli
【Abstract】: Previous research has shown the benefits of random-push Network Coding (NC) for P2P video streaming. On the other hand, scalable video coding provides graceful quality adaptation to heterogeneous network conditions. Nevertheless, packet scheduling for scalable media streaming with P2P NC is still a largely unexplored problem. Our ongoing research aims at designing a packet scheduling scheme that maximizes the quality of the video with minimal coordination among peers. In this work, we provide a preliminary description of our scheduling scheme and preliminary performance measurements.
【Keywords】: network coding; peer-to-peer computing; scheduling; video coding; video streaming; P2P video streaming; distributed scheduling; packet scheduling scheme; random-push network coding; scalable media streaming; video coding; Bandwidth; Decoding; Network coding; Peer-to-peer computing; Scheduling algorithms; Streaming media; Video coding
【Paper Link】 【Pages】:13-14
【Authors】: Piergiuseppe Di Marco ; Carlo Fischione ; George Athanasiou ; Prodromos-Vasileios Mekikis
【Abstract】: In this paper, routing metrics for low power and lossy networks are designed and evaluated. The cross-layer interactions between routing and medium access control (MAC) are explored, by considering the specifications of IETF RPL over the IEEE 802.15.4 MAC. In particular, the experimental study of a reliability metric that extends the expected transmission count (ETX) to include the effects of the level of contention and the parameters at MAC layer is presented. Moreover, a novel metric that guarantees load balancing and increased network lifetime by fulfilling reliability constraints is introduced. The aforementioned metrics are compared to a routing approach based on backpressure mechanism.
【Keywords】: radio networks; telecommunication network reliability; telecommunication network routing; IEEE 802.15.4 MAC; IETF RPL; MAC layer; MAC-aware routing metrics; backpressure mechanism; contention level; cross-layer interactions; expected transmission count; load balancing; lossy networks; low power networks; medium access control; network lifetime; reliability constraints; reliability metric; IEEE 802.15 Standards; Measurement; Media Access Protocol; Power demand; Reliability; Routing
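The expected transmission count (ETX) metric that the reliability metric above extends has a standard definition not restated in the abstract: a link's ETX is the reciprocal of the product of its forward and reverse delivery ratios, and a route's ETX is the sum over its links. A minimal sketch, with illustrative function names:

```python
# Standard ETX computation (the baseline the paper's metric extends).
# Function names and example delivery ratios are illustrative.

def link_etx(d_f: float, d_r: float) -> float:
    """Expected transmissions on one link: 1 / (d_f * d_r), where d_f and
    d_r are the forward and reverse packet delivery ratios."""
    return 1.0 / (d_f * d_r)

def route_etx(links) -> float:
    """Route ETX is additive over the route's links."""
    return sum(link_etx(d_f, d_r) for d_f, d_r in links)

perfect = link_etx(1.0, 1.0)                      # a lossless link costs 1.0
lossy = route_etx([(0.9, 0.8), (0.5, 1.0)])       # two lossy hops cost more
```

The paper's extension folds MAC-layer contention and parameters into this per-link cost; the plain form above is only the starting point.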
【Paper Link】 【Pages】:15-16
【Authors】: Muhammad Saqib Ilyas ; Zartash Afzal Uzmi
【Abstract】: In this thesis, we formulate a generalized optimization problem to minimize the electricity cost of network operation while using techniques applicable to operational networks. We applied our optimization formulation to two different networks: geo-diverse data centers and cellular networks. In the case of cellular networks, using traffic traces from a large operational network, we observed that an operator can save up to 22% in electricity costs by using our proposed scheme. While the percentage seems modest, for an operator with 7000 cellular sites in an urban setting, this translates to savings of up to 35.36 MWh annually.
【Keywords】: cellular radio; computer centres; cost reduction; optimisation; telecommunication power supplies; cellular networks; cellular sites; electricity cost efficient workload mapping; electricity cost minimization; generalized optimization problem; geo-diverse data centers; network operation; traffic traces; Base stations; Computational modeling; Educational institutions; Electricity; Optimization; Power demand; Transceivers
【Paper Link】 【Pages】:17-18
【Authors】: Anh Dung Nguyen ; Patrick Sénac ; Michel Diaz
【Abstract】: Dynamic networks such as online social networks and Disruption Tolerant Networks (DTNs), given their spatial, temporal and size complexity, are exposed, even if partly wired, to node and link churn and failures, which can be modeled with dynamic graphs having time-varying edges and vertices. Recently, it has been shown that dynamic networks exhibit some regularity in their temporal contact patterns. The impact of this regularity on network performance has not been well studied and analyzed. One of the most interesting problems in research on dynamic networks is efficient navigation in such networks. Because there is still no well-developed theoretical background for understanding these problems deeply, research traditionally tends to propose heuristic solutions. In the context of DTNs, these solutions tend to answer specific questions about navigating a dynamic network (e.g., how to reduce the energy consumption of routing, how to maximize the delivery probability) while usually ignoring, and failing to leverage, the deep structural properties of the dynamic network. In this work, we aim to contribute to understanding the impact of this dynamic structure on information routing, and show how to exploit it for efficient navigation in such networks.
【Keywords】: graph theory; telecommunication network routing; delivery probability; disruption tolerant networks; dynamic graphs; dynamic network navigation; dynamic structure; energy consumption; information routing; online social networks; size complexity; spatial complexity; structural properties; temporal complexity; temporal contact patterns; time varying edges; time varying vertices; Delays; Heuristic algorithms; Navigation; Peer-to-peer computing; Routing; Social network services
【Paper Link】 【Pages】:19-20
【Authors】: Adisorn Lertsinsrubtavee ; Naceur Malouch ; Serge Fdida
【Abstract】: In this work, we propose a heuristic for dynamic spectrum sharing in cognitive radio networks. The concept of rate compensation is introduced so that cognitive radio users are able to achieve their rate requirement by performing spectrum handoffs adequately. Indeed, performing spectrum handoffs can increase the achieved rate by moving from unavailable channels to available ones. However, handoffs should also be limited to reduce handoff delays and access contention in the network, which can in turn impact the achieved rate.
【Keywords】: cognitive radio; radio spectrum management; wireless channels; access contention; cognitive radio networks; cognitive radio users; dynamic spectrum sharing; handoff delays; rate compensation; rate requirement; spectrum handoff; unavailable channels; Availability; Bandwidth; Cognitive radio; Delays; Dynamic scheduling; Resource management
【Paper Link】 【Pages】:21-22
【Authors】: Shahzad Ali ; Gianluca Rizzo ; Balaji Rengarajan ; Marco Ajmone Marsan
【Abstract】: Context-awareness is a peculiar characteristic of an ever-expanding set of applications that use a combination of restricted spatio-temporal locality and mobile communications to deliver a variety of services to the end user. It is expected that by 2014 more than 1.5 billion people will be using applications based on local search (search restricted on the basis of spatio-temporal locality), and that mobile location-based services will drive revenues of more than $15 billion worldwide. A common feature of such context-aware applications is that their communication requirements differ significantly from those of ordinary applications. For most of them, the scope of the generated content itself is local. This locally relevant content may be of little concern to the rest of the world; therefore, moving it off the user device into a well-accessible centralized location and/or making it available beyond its scope represents a clear waste of resources (connectivity, storage). Given these specific requirements, opportunistic communication can play a special role when coupled with context-awareness: it naturally incorporates context, as spatial proximity is closely associated with connectivity.
【Keywords】: mobile computing; communication requirements; context-aware applications; floating content; local search; mobile communications; mobile location based services; restricted spatio-temporal locality; spatial proximity; user device; Adaptation models; Analytical models; Computational modeling; Context-aware services; Open area test sites; Predictive models; Probability density function
【Paper Link】 【Pages】:23-24
【Authors】: Kamini Garg
【Abstract】: This paper presents an overview of our research proposal. We present the key challenge in deploying people-centric systems that rely on opportunistic data dissemination. We propose to address this challenge by developing a model to predict the minimum number of nodes required to disseminate information to all nodes in a given network. We conduct some simulations in OMNET++ and present our preliminary results for an opportunistic network with static data. We further plan to develop and validate our mathematical model using real world experiments.
【Keywords】: communication complexity; graph theory; mathematical analysis; mobile radio; NP hard problem; OMNET++; data dissemination bounds; dynamic communication graph; information dissemination; mathematical model; mobile network; network node; opportunistic data dissemination; opportunistic network; people-centric system; static data; Data models; Mathematical model; Mobile communication; Mobile computing; Sensors; Wireless communication; Wireless sensor networks
【Paper Link】 【Pages】:25-29
【Authors】: Hongxing Li ; Chuan Wu ; Zongpeng Li ; Francis C. M. Lau
【Abstract】: The emerging federated cloud paradigm advocates sharing of resources among cloud providers, to exploit temporal availability of resources and diversity of operational costs for job serving. While extensive studies exist on enabling interoperability across different cloud platforms, a fundamental question on cloud economics remains unanswered: When and how should a cloud trade VMs with others, such that its net profit is maximized over the long run? To answer this question, a number of important, correlated decisions, including job scheduling, server provisioning and resource pricing, need to be made dynamically, with long-term profit optimality as the goal. In this work, we design efficient algorithms for inter-cloud resource trading and scheduling in a federation of geo-distributed clouds. For VM trading among clouds, we apply a double auction-based mechanism that is strategy-proof, individually rational, and ex-post budget balanced. Coupled with the auction mechanism is an efficient, dynamic resource trading and scheduling algorithm, which carefully decides the true valuations of VMs in the auction, optimally schedules stochastic job arrivals with different SLAs onto the VMs, and judiciously turns servers on and off based on the current electricity prices. Through rigorous analysis, we show that each individual cloud, by carrying out our dynamic algorithm, can achieve a time-averaged profit arbitrarily close to the offline optimum.
【Keywords】: cloud computing; scheduling; virtual machines; SLA; VM trading; auction mechanism; auction-based mechanism; budget balance; cloud economics; cloud platforms; cloud providers; dynamic resource trading; federated cloud paradigm; geo-distributed clouds; inter-cloud resource trading; interoperability; job scheduling; job serving; long-term profit optimality; operational costs; profit-maximizing virtual machine trading; resource pricing; resource sharing; scheduling algorithm; selfish clouds; server provisioning; stochastic job arrivals; strategy proof; temporal availability; Cost accounting; Delays; Dynamic scheduling; Electricity; Heuristic algorithms; Schedules; Servers
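The double auction mentioned in the abstract can be illustrated in its plainest textbook form. The sketch below is not the paper's strategy-proof mechanism; it only shows the basic matching step any double auction performs: buy bids sorted in descending order are paired with sell asks sorted in ascending order for as long as the bid covers the ask. All names and prices are illustrative.

```python
# Minimal double-auction matching sketch (illustrative only, not the
# paper's strategy-proof, budget-balanced mechanism).

def match_double_auction(bids, asks):
    """Pair buy bids (sorted high-to-low) with sell asks (sorted
    low-to-high); a pair trades while the bid is at least the ask."""
    bids = sorted(bids, reverse=True)
    asks = sorted(asks)
    trades = []
    for bid, ask in zip(bids, asks):
        if bid >= ask:
            trades.append((bid, ask))
        else:
            break                  # all later pairs fail too, by sort order
    return trades

# Hypothetical per-VM-hour valuations from clouds in the federation:
trades = match_double_auction(bids=[10, 7, 3], asks=[2, 5, 9])
# (10, 2) and (7, 5) trade; (3, 9) does not clear
```

A real mechanism of the kind the abstract describes additionally sets trade prices so that truthful bidding is a dominant strategy, which this sketch deliberately omits.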
【Paper Link】 【Pages】:30-34
【Authors】: Orathai Sukwong ; Hyong S. Kim
【Abstract】: Virtualization allows us to consolidate multiple servers onto a single physical machine, saving infrastructure cost. Yet consolidation can lead to performance degradation, jeopardizing Service Level Agreements (SLAs). In this paper, we analyze and identify the factors contributing to this degradation: the wait time and the ready time. The wait time is the queuing time caused by other virtual machines (VMs). The ready time is the time the resource takes to become ready to serve, such as the seek time incurred in traditional storage. The ready time can substantially deteriorate the request response time. Unfortunately, existing schedulers can only manage the wait time, not the ready time. To control both quantities, we propose an adaptive disk scheduler called DPack. DPack schedules the VMs based on the likelihood of each VM failing its SLA, then adjusts the exclusive access time based on the VM resource access prediction. DPack considers workload changes and request arrivals to enhance robustness. We develop DPack based on the default disk scheduler in KVM and evaluate it against several existing disk schedulers available in KVM and Xen. The results show that DPack can improve the 99th percentile response time by up to 76%. In a highly consolidated environment, DPack can also satisfy all the SLAs, while the other schedulers cannot meet the SLAs for at least 50% of the VMs.
【Keywords】: cloud computing; contracts; resource allocation; scheduling; virtual machines; virtualisation; DPack; KVM; SLA; VM resource access prediction; Xen; adaptive disk scheduler; exclusive access time; highly consolidated cloud; multiple server consolidation; performance degradation; queuing time; ready time; request response time; service level agreement; virtual machines; virtualization; wait time; Degradation; Dynamic scheduling; Optimization; Servers; Throughput; Time factors; Virtual machining; Adaptive scheduling; disk scheduler; hypervisor; quality of service; virtual machine; web services
【Paper Link】 【Pages】:35-39
【Authors】: Yaxiong Zhao ; Jie Wu
【Abstract】: The buzzword big data refers to large-scale distributed applications that work on unprecedentedly large data sets. Google's MapReduce framework and Apache's Hadoop, its open-source implementation, are the de facto software systems for big-data applications. An observation regarding these applications is that they generate a large amount of intermediate data, and this abundant information is thrown away after the processing finishes. Motivated by this observation, we propose Dache, a data-aware cache framework for big-data applications. In Dache, tasks submit their intermediate results to the cache manager. A task, before initiating its execution, queries the cache manager for potentially matching processing results, which could accelerate its execution or even completely save it. A novel cache description scheme and a cache request-and-reply protocol are designed. We implement Dache by extending the relevant components of the Hadoop project. Testbed experiment results demonstrate that Dache significantly improves the completion time of MapReduce jobs and saves a significant amount of CPU execution time.
【Keywords】: cache storage; parallel programming; public domain software; query processing; Apache Hadoop project; CPU execution time; Dache; Google MapReduce framework; buzz-word big-data application; cache description scheme; cache manager; cache request-reply protocol design; data-aware cache framework; defacto software system; large-scale distributed applications; potential matched processing; query processing; Acceleration; Context; Distributed databases; Indexes; Pricing; Protocols; Sorting; Big-data; Hadoop; MapReduce; cache management; distributed file system
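The query-before-execute idea in the abstract can be sketched as follows. All names (`CacheManager`, `run_task`) and the dictionary-backed store are illustrative assumptions, not Dache's actual API or description scheme:

```python
# Hypothetical sketch of Dache's workflow: a task queries the cache
# manager for a matching intermediate result before computing it.

class CacheManager:
    def __init__(self):
        self._store = {}            # cache description -> intermediate result

    def query(self, description):
        """Return a cached intermediate result, or None on a miss."""
        return self._store.get(description)

    def submit(self, description, result):
        """A finished task publishes its intermediate result."""
        self._store[description] = result

def run_task(manager, description, compute):
    cached = manager.query(description)
    if cached is not None:          # cache hit: skip the computation entirely
        return cached
    result = compute()              # cache miss: execute, then publish
    manager.submit(description, result)
    return result

mgr = CacheManager()
first = run_task(mgr, ("wordcount", "input-split-1"), lambda: {"a": 3})
second = run_task(mgr, ("wordcount", "input-split-1"), lambda: {"never": 0})
# the second call reuses the cached result; its lambda never runs
```

The interesting design work the abstract alludes to lies in the cache description scheme, i.e., deciding when two task descriptions "match"; the exact-key dictionary here is the simplest possible stand-in.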
【Paper Link】 【Pages】:40-44
【Authors】: Chen Zhang ; Xue Liu
【Abstract】: Message queuing systems can be used to support a plethora of fundamental services in distributed systems. This paper presents HBaseMQ, the first distributed message queuing system based on bare-bones HBase. HBaseMQ directly inherits HBase's properties such as scalability and fault tolerance, enabling HBase users to rapidly instantiate a message queuing system with no extra program deployment or modification to HBase. As a result, HBaseMQ effectively enhances the data processing capability of an existing HBase installation. HBaseMQ supports reliable and total order message delivery with “at least once” or “at most once” message delivery guarantees with no limit on message size. Furthermore, HBaseMQ provides atomicity, consistency, isolation and durability on any operation over messages.
【Keywords】: cloud computing; fault tolerant computing; message passing; queueing theory; HBase installation; HBase properties; HBaseMQ; at least once message delivery guarantee; at most once message delivery guarantee; bare-bones HBase; data processing capability; distributed message queuing system; fault tolerance; scalability; total order message delivery; Fault tolerance; Fault tolerant systems; Libraries; Protocols; Receivers; Scalability; Clouds; Distributed message queuing system; HBase
【Paper Link】 【Pages】:45-49
【Authors】: Xu Cheng ; Haitao Li ; Jiangchuan Liu
【Abstract】: Social networking services (SNS) have drastically changed the information distribution landscape and people's daily life. With the growth of broadband access, video has become one of the most important types of objects spread among social networking service users, yet it presents more significant challenges than other types of objects, not only to SNS management but also to network traffic engineering. In this paper, we take an important step towards understanding the characteristics of video sharing propagation in SNS, based on real viewing event traces from a popular SNS in China. We further extend epidemic models to accommodate the diversity of the propagation, and our model effectively captures the propagation process of video sharing in SNS.
【Keywords】: social networking (online); China; SNS management; information distribution landscape; network traffic engineering; real viewing event; social networking service users; video sharing propagation; Market research; Peer-to-peer computing; Streaming media; Twitter; Vegetation; Watches
【Paper Link】 【Pages】:50-54
【Authors】: Haitao Li ; Haiyang Wang ; Jiangchuan Liu ; Ke Xu
【Abstract】: The deep penetration of Online Social Networks (OSNs) has made them major portals for video content sharing. It is known that a significant portion of the accesses to video sharing sites now comes from OSN users. Yet the unique features of video sharing over OSNs and their impact remain largely unknown. In this paper, we present a measurement study towards understanding the video requests from OSNs. We closely collaborated with a large-scale Facebook-like OSN to analyze its user access logs spanning over four months. Our measurement reveals a number of distinctive features in the popularity distribution of videos shared over the OSN. In particular, we observe that the OSN amplifies the skewness of video popularity so strongly that about 2% of the most popular videos account for 90% of total views; the video request distribution also exhibits a perfect power-law shape; and video popularity evolution shows more dynamics. All of these noticeably differ from conventional videos, such as YouTube videos. To further understand these characteristics, we model the video viewing and sharing behaviors in OSNs, leading to the development of a practical emulator. It reveals the gap between the sharing rate and the viewing rate, and generates user requests that well capture the video popularity distribution and dynamics observed in our empirical data.
【Keywords】: Internet; content management; social networking (online); video retrieval; video signal processing; Facebook-like OSN; OSN users; online social networks; power-law feature; user access logs; video content sharing; video popularity; video popularity distribution; video popularity dynamics; video popularity evolution; video request distribution; video sharing rate; video sharing sites; Atmospheric measurements; Correlation coefficient; Facebook; Silicon; Streaming media; YouTube
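The skewness statistic quoted in the abstract (2% of videos accounting for 90% of views) is simply the share of total views held by the most popular fraction of videos. A minimal sketch, with made-up view counts:

```python
# Compute the fraction of total views captured by the top `fraction`
# most popular videos (the statistic the abstract's "2% -> 90%" uses).

def top_share(views, fraction=0.02):
    """Share of all views held by the top `fraction` of videos."""
    views = sorted(views, reverse=True)
    k = max(1, int(len(views) * fraction))   # at least one video
    return sum(views[:k]) / sum(views)

# A heavily skewed toy distribution: one blockbuster, 99 long-tail videos.
views = [9000] + [1] * 99
share = top_share(views)        # top 2 of 100 videos -> (9000 + 1) / 9099
```

Under a uniform distribution the top 2% would hold exactly 2% of views, so values far above `fraction` quantify exactly the amplified skewness the measurement reports.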
【Paper Link】 【Pages】:55-59
【Authors】: Thang N. Dinh ; Nam P. Nguyen ; My T. Thai
【Abstract】: We introduce A3CS, an adaptive framework with approximation guarantees for quickly identifying community structure in dynamic networks via maximizing Modularity Q. Our framework explores the advantages of power-law distribution property, is scalable for very large networks, and more excitingly, possesses approximation factors to ensure the quality of its detected community structure. To the best of our knowledge, this is the first framework that achieves approximation guarantees for the NP-hard modularity maximization problem, especially on dynamic networks. To certify our approach, we conduct extensive experiments in comparison with other adaptive methods on both synthesized networks with known community structures and real-world traces including ArXiv e-print citation and Facebook social networks. Excellent empirical results not only confirm our theoretical results but also promise the practical applicability of A3CS in a wide range of dynamic networks.
【Keywords】: approximation theory; complex networks; computational complexity; optimisation; social networking (online); A3CS; ArXiv e-print citation; Facebook social networks; NP-hard modularity maximization problem; adaptive approximation algorithm; adaptive framework; community detection; community structure; dynamic scale-free networks; modularity Q; power-law distribution property; synthesized networks; Adaptive algorithms; Approximation algorithms; Approximation methods; Communities; Heuristic algorithms; Social network services; Time complexity; Adaptive approximation algorithm; Community structure; Modularity; Social networks
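The Modularity Q that A3CS maximizes is the standard Newman–Girvan quantity (the abstract does not restate it):

```latex
Q = \frac{1}{2m}\sum_{i,j}\left(A_{ij} - \frac{k_i k_j}{2m}\right)\delta(c_i, c_j)
```

where \(A\) is the adjacency matrix, \(k_i\) the degree of node \(i\), \(m\) the number of edges, and \(\delta(c_i, c_j) = 1\) exactly when nodes \(i\) and \(j\) are assigned to the same community. Maximizing \(Q\) exactly is the NP-hard problem for which the framework claims approximation guarantees.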
【Paper Link】 【Pages】:60-64
【Authors】: Joydeep Chandra ; Bivas Mitra ; Niloy Ganguly
【Abstract】: In superpeer based networks, the superpeers are discovered through the process of bootstrapping, whereby resourceful peers get upgraded to superpeers. However, bootstrapping is influenced by several factors like limitation on the maximum number of connections a peer can have due to bandwidth constraints, limitation on the availability of information of existing peers due to cache size constraints and also by the attachment policy of the newly arriving peers to the resourceful peers. In this paper, we derive closed form equations that model the effect of these factors on superpeer related topological properties of the networks. Based on the model, we observe that the cache parameters and the preferentiality parameters must be suitably tuned so as to increase the fraction of superpeers in the network. Finally, we perform an empirical analysis of social networks like Twitter and Facebook using our model to obtain and derive insights for suitably bootstrapping superpeer topology.
【Keywords】: cache storage; computer bootstrapping; peer-to-peer computing; social networking (online); telecommunication network topology; Facebook; Twitter; attachment policy; bandwidth constraints; bootstrapping process; bootstrapping superpeer topology; cache parameters; cache size constraints; empirical analysis; information availability; resourceful peers; social networks; superpeer based networks; superpeer related topological properties; Equations; Facebook; Mathematical model; Peer-to-peer computing; Protocols; Topology; Twitter; Bootstrapping protocols; Rate equation; Superpeer networks; Webcache
【Paper Link】 【Pages】:65-69
【Authors】: Hao Li ; Chengchen Hu
【Abstract】: Fine-grained traffic identification (FGTI) reveals the context/purpose of each packet that flows through the network nodes/links. Instead of only indicating the application/protocol that a packet is related to, FGTI further maps the packet to a meaningful user behavior or application context. In this paper, we propose Rule Organized Optimal Matching (ROOM) for fast and memory-efficient fine-grained traffic identification. ROOM splits the identification rules into several fields and elaborately organizes the matching order of the fields. We formulate and model the optimal rule organization problem of ROOM mathematically, show it to be NP-hard, and then propose an approximate algorithm that solves the problem with time complexity O(N²), where N is the number of fields in a rule. To perform evaluations, we implement ROOM and related work as real prototype systems, using real traces collected from the wired Internet and the mobile Internet as experiment input. The evaluations show very promising results: ROOM achieves a 1.6X to 104.7X throughput improvement in the real system with an acceptably small memory cost.
【Keywords】: Internet; mobile computing; optimisation; telecommunication traffic; FGTI; NP-hard problem; ROOM; fine-grained traffic identification; identification rules; mobile Internet; network nodes; rule organized optimal matching; wired Internet; Complexity theory; Internet; Memory management; Mobile communication; Organizations; Protocols; Throughput
【Paper Link】 【Pages】:70-74
【Authors】: Andrea Di Pietro ; Felipe Huici ; Nicola Bonelli ; Brian Trammell ; Petr Kastovsky ; Tristan Groleat ; Sandrine Vaton ; Maurizio Dusi
【Abstract】: As the growth of Internet traffic volume and diversity continues, passive monitoring and data analysis, crucial to the correct operation of networks and the systems that rely on them, has become an increasingly difficult task. We present the design and implementation of Blockmon, a flexible, high performance system for network monitoring and analysis. We present experimental results demonstrating Blockmon's performance, running simple analyses at 10Gb/s line rate on commodity hardware; and compare its performance with that of existing programmable measurement systems, showing significant improvement (as much as twice as fast) especially for small packet sizes. We further demonstrate Blockmon's applicability to measurement and data analysis by implementing and evaluating three sample applications: a flow meter, a TCP SYN flood detector, and a VoIP anomaly-detection system.
【Keywords】: Internet; computer network performance evaluation; computer network reliability; telecommunication traffic; transport protocols; Blockmon performance; Internet; TCP SYN flood detector; VoIP anomaly-detection system; bit rate 10 Gbit/s; commodity hardware; data analysis; flexible high performance system; flow meter; line rate; network analysis; network monitoring; network traffic measurement; packet size; passive monitoring; programmable measurement system; traffic diversity; traffic volume; Hardware; Logic gates; Message systems; Monitoring; Optimization; Radiation detectors; Resource management
【Paper Link】 【Pages】:75-79
【Authors】: Brian Eriksson ; Mark Crovella
【Abstract】: The ability to estimate the geographic position of a network host has a vast array of uses, and many measurement-based geolocation methods have been proposed. Unfortunately, comparing results across multiple studies is difficult. A key contributor to that difficulty is network geometry - the spatial arrangement of hosts and links. In this paper, we study the relationship between network geometry and geolocation accuracy. We define the notion of scaling dimension to characterize the geometry of a wide array of different networks. We show that the scaling dimension correlates with a number of aspects of geolocation accuracy. In networks with low scaling dimension, geolocation accuracy improves more rapidly with the addition of landmarks. Further, we show that the scaling dimension of operator networks varies considerably across different regions of the world. Our results point to the complexity of, and suggest standards for, the meaningful evaluation of geolocation algorithms.
【Keywords】: Global Positioning System; Internet; geometry; network theory (graphs); geolocation accuracy; measurement-based geolocation method; network geometry; network host geographic position estimation; operator network; scaling dimension notion; spatial host arrangement; spatial link arrangement; Accuracy; Delays; Extraterrestrial measurements; Geology; Geometry; Internet; Network topology
【Paper Link】 【Pages】:80-84
【Authors】: Kensuke Fukuda ; Shinta Sato ; Takeshi Mitamura
【Abstract】: The DNS Security Extensions (DNSSEC) are a new feature of DNS that provides an authentication mechanism and is now being deployed worldwide. However, we do not have enough knowledge about the deployment status of DNSSEC in the wild, due to the difficulty of identifying DNSSEC validators (validating caching resolvers). In this paper, a simple and robust method is proposed that estimates DNSSEC validators from DNS query data passively measured at the server side. The key idea of the estimation method relies on the pattern of the original query and the DNSSEC queries it triggers, expressed as the ratio of the number of DS queries to the total number of queries per host (DSR: DS ratio). To show the effectiveness of the proposed method, we analyze passive traffic traces measured for all the “.jp” servers and actively send DNSSEC validation requests to caching resolvers that appear in the traces to obtain ground truth data on DNSSEC validators. Our active measurements reveal that less than 50% of the potential DNSSEC validators were validating caching resolvers in the wild; the remainder were stub validators (e.g., browser plugins) behind non-validating caching resolvers. Thus, simple IP address-based counts overestimate the number of DNSSEC validators when investigating the deployment of DNSSEC at the organization level (e.g., ISPs). We then demonstrate the effectiveness of the DSR using the active and passive traffic traces. In summary, the ratio of validating caching resolvers in our dataset was estimated to be approximately 70% of the potential DNSSEC validators, and 15-20% of the ASes sending DNSSEC queries were overestimated as having validating caching resolvers. In particular, our results show that some ASes providing public DNS service had few validating caching resolvers even though they had a large number of hosts sending DNSSEC queries.
【Keywords】: Internet; query processing; security of data; DNS query data; DNS security extensions; DNSSEC validators; DS queries; authentication mechanism; nonvalidating caching resolvers; query patterns; simple IP address-based counts; IP networks; Internet; Organizations; Robustness; Security; Servers; Software
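The DSR statistic at the core of this method is straightforward to compute. A minimal sketch, assuming a passively collected query log of (source IP, query type) pairs; the function name and the 0.1 threshold are illustrative, not taken from the paper:

```python
from collections import defaultdict

def ds_ratios(query_log):
    """Compute the DS ratio (DSR) per source host from a query log.

    query_log: iterable of (source_ip, qtype) pairs, e.g. ("192.0.2.1", "DS").
    Returns {source_ip: ds_queries / total_queries}.
    """
    total = defaultdict(int)
    ds = defaultdict(int)
    for src, qtype in query_log:
        total[src] += 1
        if qtype == "DS":
            ds[src] += 1
    return {src: ds[src] / total[src] for src in total}

# Hosts whose DSR exceeds a (purely illustrative) threshold are flagged
# as potential DNSSEC validators.
log = [("a", "A"), ("a", "DS"), ("a", "DNSKEY"), ("b", "A"), ("b", "A")]
dsr = ds_ratios(log)
candidates = [h for h, r in dsr.items() if r > 0.1]
```

Host "a" issues one DS query out of three (DSR = 1/3) and is flagged; host "b" issues none and is not.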
【Paper Link】 【Pages】:85-89
【Authors】: Elisha J. Rosensweig ; Jim Kurose
【Abstract】: Over the past few years Content-Centric Networking, a networking architecture in which host-to-content communication protocols are introduced, has been gaining much attention. A central component of such an architecture is a large-scale interconnected caching system. To date, the way these Cache Networks operate and perform is still poorly understood. Following the work of Cruz on queueing networks, in this paper we develop a network calculus for bounding flows in LRU cache networks of arbitrary topology. We analyze the tightness of these bounds as a function of several system parameters. Also, we derive from it several analytical results regarding these systems: the uniformizing impact of LRU on the request stream, and the significance of cache and routing diversity on performance.
【Keywords】: protocols; queueing theory; telecommunication network routing; telecommunication network topology; LRU cache networks; bounding flows; content centric networking; host-to-content communication protocols; network calculus; queueing networks; request stream; routing diversity; Calculus; Computational modeling; Computer architecture; Delays; Network topology; Topology; Writing
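The "uniformizing impact of LRU on the request stream" can be observed even in a toy simulation (a plain cache simulation, not the paper's network calculus): the miss stream leaving an LRU cache is noticeably flatter than the Zipf-like request stream entering it, because popular objects are absorbed by the cache.

```python
import random
from collections import Counter, OrderedDict

def lru_miss_stream(requests, capacity):
    """Feed a request stream through an LRU cache and return the misses."""
    cache = OrderedDict()
    misses = []
    for item in requests:
        if item in cache:
            cache.move_to_end(item)        # hit: refresh recency
        else:
            misses.append(item)            # miss: forwarded downstream
            cache[item] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict least recently used
    return misses

random.seed(1)
catalog = list(range(100))
weights = [1.0 / (i + 1) for i in catalog]           # Zipf-like popularity
requests = random.choices(catalog, weights, k=20000)
misses = lru_miss_stream(requests, capacity=20)

# Traffic share of the single most popular object, before and after the
# cache: the miss stream is much flatter (more uniform) than the input.
top_req = Counter(requests).most_common(1)[0][1] / len(requests)
top_miss = Counter(misses).most_common(1)[0][1] / len(misses)
```

The hot head of the popularity distribution almost always hits, so the downstream (miss) stream is dominated by the near-uniform tail.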
【Paper Link】 【Pages】:90-94
【Authors】: Reaz Ahmed ; Md. Faizul Bari ; Shihabur Rahman Chowdhury ; Md. Golam Rabbani ; Raouf Boutaba ; Bertrand Mathieu
【Abstract】: One of the crucial building blocks for Information Centric Networking (ICN) is a name-based routing scheme that can route directly on content names instead of IP addresses. However, moving the address space from IP addresses to content names brings scalability issues to a whole new level, for two reasons. First, name aggregation is not as trivial a task as IP address aggregation in BGP routing. Second, the number of addressable contents in the Internet is several orders of magnitude higher than the number of IP addresses. At the current size of the Internet, name-based anycast routing is very challenging, especially when routing efficiency is of prime importance. We propose a novel name-based routing scheme (αRoute) for ICN that offers efficient bandwidth usage, guaranteed content lookup and scalable routing table size.
【Keywords】: Internet; telecommunication network routing; αRoute; BGP routing; IP address aggregation; Internet; address space; addressable contents; anycast routing; bandwidth usage; content names; guaranteed content lookup; information centric networks; name aggregation; name based routing scheme; routing efficiency; scalable routing table size; Indexing; Internet; Routing; Routing protocols; Silicon; Vegetation
【Paper Link】 【Pages】:95-99
【Authors】: Yi Wang ; Tian Pan ; Zhian Mi ; Huichen Dai ; Xiaoyu Guo ; Ting Zhang ; Bin Liu ; Qunfeng Dong
【Abstract】: In this paper we design, implement and evaluate NameFilter, a two-stage Bloom filter-based scheme for Named Data Networking name lookup, in which the first stage determines the length of a name prefix, and the second stage looks up the prefix in a narrowed group of Bloom filters based on the results from the first stage. Moreover, we optimize the hash value calculation of name strings, as well as the data structure that stores multiple Bloom filters, which significantly reduces the number of memory accesses compared with non-optimized Bloom filters. We conduct extensive experiments on a commodity server to test NameFilter's throughput, memory occupation, name updates, and scalability. Evaluation results on a name prefix table with 10M entries show that our proposed scheme achieves a lookup throughput of 37 million searches per second at a low memory cost of only 234.27 MB, a 12-fold speedup and 77% memory savings compared to the traditional character trie structure. The results also demonstrate that NameFilter can achieve 3M incremental updates per second and exhibits good scalability to large-scale prefix tables.
【Keywords】: data structures; information retrieval; NameFilter; character trie structure; data structure; fast name lookup; large-scale prefix tables; lookup throughput; memory cost; memory occupation; name prefix table; name update; named data networking name lookup; nonoptimized Bloom filters; two-stage Bloom filter-based scheme; IP networks; Instruction sets; Internet; Memory management; Ports (Computers); Scalability; Throughput
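The two-stage idea can be sketched as follows. This is a simplified illustration, not the paper's optimized data structure: stage 1 uses one small Bloom filter per name-component count to guess the longest matching prefix length, and stage 2 (here an exact table standing in for the narrowed group of Bloom filters) confirms the match.

```python
import hashlib

class Bloom:
    """A minimal Bloom filter (illustrative; not the paper's optimized layout)."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0
    def _positions(self, s):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{s}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, s):
        for pos in self._positions(s):
            self.bits |= 1 << pos
    def __contains__(self, s):
        return all(self.bits >> pos & 1 for pos in self._positions(s))

def build(prefixes):
    """Stage 1: one Bloom filter per component count. Stage 2: an exact
    table standing in for the narrowed group of filters."""
    stage1 = {}
    for p in prefixes:
        stage1.setdefault(len(p.split("/")), Bloom()).add(p)
    return stage1, set(prefixes)

def longest_prefix_match(stage1, table, name):
    parts = name.split("/")
    for L in range(len(parts), 0, -1):        # stage 1: find a probable length
        cand = "/".join(parts[:L])
        if L in stage1 and cand in stage1[L]:
            if cand in table:                 # stage 2: confirm the match
                return cand
    return None

stage1, table = build(["com", "com/example", "com/example/video"])
```

Probing lengths longest-first means a positive stage-1 answer immediately narrows the stage-2 lookup to a single prefix length.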
【Paper Link】 【Pages】:100-104
【Authors】: Sumanta Saha ; Andrey Lukyanenko ; Antti Ylä-Jääski
【Abstract】: Information-centric networking (ICN), one of the prominent Internet re-design architectures, relies on in-network caching for its fundamental operation. However, previous works argue that the performance of in-network caching is highly degraded by the current cache-along-default-path design, which causes popular objects to be cached redundantly in many places. Thus, it would be beneficial to have a distributed and uncoordinated design. Although cooperative caches could be an answer, previous research showed that they are generally infeasible due to excessive signaling burden, protocol complexity, and the need for fault tolerance. In this work we illustrate the ICN caching problem, and propose a novel architecture that overcomes the problem of uncooperative caches. Our design possesses the cooperation property intrinsically. We utilize controlled off-path caching to achieve an almost 9-fold increase in cache efficiency, and around a 20% increase in server load reduction compared to the classic on-path caching used in ICN proposals.
【Keywords】: Internet; cache storage; computer network reliability; telecommunication network routing; ICN caching problem; Internet redesign architectures; cache efficiency; cache-along-default-path design; classic on-path caching; controlled off-path caching; cooperation property; cooperative caching; distributed design; fault tolerance; fundamental operation; in-network caching; information-centric networks; protocol complexity; routing control; server load reduction; uncooperative cache; uncoordinated design; Indexes; Internet; Network topology; Routing; Servers; Sociology; Statistics
【Paper Link】 【Pages】:105-109
【Authors】: Francesco Malandrino ; Claudio Ettore Casetti ; Carla-Fabiana Chiasserini ; Marco Fiore ; Roberto Sadao Yokoyama ; Carlo Borgiattino
【Abstract】: Knowledge of the location of vehicles and tracking of the routes they follow are a requirement for a number of applications. However, public disclosure of the identity and position of drivers jeopardizes user privacy, and securing the tracking through asymmetric cryptography may have an exceedingly high computational cost. In this paper, we address all of the issues above by introducing A-VIP, a lightweight privacy-preserving framework for tracking of vehicles. A-VIP leverages anonymous position beacons from vehicles, and the cooperation of nearby cars collecting and reporting the beacons they hear. Such information allows an authority to verify the locations announced by vehicles, or to infer the actual ones if needed. We assess the effectiveness of A-VIP through testbed implementation results.
【Keywords】: cryptography; mobile radio; A-VIP; anonymous verification; asymmetric cryptography; cars; computational cost; lightweight privacy-preserving framework; positions inference; public disclosure; routes tracking; vehicles location; vehicular networks; Cryptography; Phantoms; Privacy; Protocols; Radiation detectors; Tiles; Vehicles
【Paper Link】 【Pages】:110-114
【Authors】: Christoph Sommer ; Stefan Joerer ; Michele Segata ; Ozan K. Tonguz ; Renato Lo Cigno ; Falko Dressler
【Abstract】: We study the effect of radio signal shadowing dynamics, caused by vehicles and by buildings, on the performance of beaconing protocols in Inter-Vehicular Communication (IVC). Recent research indicates that beaconing, i.e., one hop message broadcast, shows excellent characteristics and can outperform other communication approaches for both safety and efficiency applications, which require low latency and wide area information dissemination, respectively. We show how shadowing dynamics of moving obstacles hurt IVC, reducing the performance of beaconing protocols. At the same time, shadowing also limits the risk of overloading the wireless channel. To the best of our knowledge, this is the first study identifying the problems and resulting possibilities of such dynamic radio shadowing. We demonstrate how these challenges and opportunities can be taken into account and outline a novel approach to dynamic beaconing. It provides low-latency communication (i.e., very short beaconing intervals), while ensuring not to overload the wireless channel. The presented simulation results substantiate our theoretical considerations.
【Keywords】: mobile communication; protocols; wireless channels; beaconing intervals; beaconing protocols; dynamic beaconing; dynamic radio shadowing; inter-vehicular communication; low-latency communication; moving obstacles; one hop message broadcast; radio signal shadowing dynamics; wide area information dissemination; wireless channel; Buildings; Protocols; Shadow mapping; Telecommunication standards; Traffic control; Vehicle dynamics; Vehicles
【Paper Link】 【Pages】:115-119
【Authors】: Kai Xing ; Tianbo Gu ; Zhengang Zhao ; Lei Shi ; Yunhao Liu ; Pengfei Hu ; Yuepeng Wang ; Yi Liang ; Shuo Zhang ; Yang Wang ; Liusheng Huang
【Abstract】: Though ready-made DSRC/WiFi/3G/4G cellular systems exist for roadway communications, these systems share defects for roadway safety oriented applications, and the corresponding challenges have remained unsolved for years: WiFi cannot work well in vehicular networks due to the high probability of packet loss caused by burst communications, a common phenomenon in roadway networks; 3G/4G cannot adequately support real-time communications due to the nature of their designs; and DSRC lacks support for roadway safety oriented applications with hard realtime and reliability requirements [1]. To resolve the conflict between the capability limitations of existing systems and the ever-growing demands of roadway safety oriented communication applications, we propose a novel system design and implementation for realtime, reliable roadway communications, aiming to provide safety messages to users in a realtime and reliable manner. In our extensive experimental study, the latency is well controlled within the hard realtime requirement (100 ms) for roadway safety applications given by NHTSA [2], and the reliability is shown to improve by two orders of magnitude over existing experimental results [1]. Our experiments show that the proposed system can provide a guaranteed, highly reliable packet delivery ratio (PDR) of 99% within the hard realtime requirement of 100 ms under various scenarios, e.g., highways, city areas, rural areas, tunnels, and bridges. Our design can be widely applied to roadway communications, facilitates current research in both hardware and software design, and further provides an opportunity to consolidate existing work on a practical, easily configurable, low-cost roadway communication platform.
【Keywords】: 3G mobile communication; 4G mobile communication; cellular radio; mobile communication; road safety; wireless LAN; DSRC/WiFi/3G/4G cellular systems; NHTSA; bridges; burst communications; city areas; hardware design; highways; low-cost roadway communication; packet delivery ratio; packet loss; reliable realtime communications; roadway communications; roadway networks; roadway safety oriented vehicular communications; rural areas; software design; system design; tunnels; vehicular networks; Ad hoc networks; Reliability; Road transportation; Safety; Standards; Wireless communication
【Paper Link】 【Pages】:120-124
【Authors】: Evsen Yanmaz ; Robert Kuschnig ; Christian Bettstetter
【Abstract】: The increasing availability of autonomous small-size aerial vehicles leads to a variety of applications in aerial exploration and surveillance, transport, and other domains. Many of these applications rely on networks between aerial nodes, which exhibit high mobility dynamics, with vehicles moving in all directions in 3D space and assuming different orientations, leading to restrictions on network connectivity. In this paper, we propose a simple antenna extension to 802.11 devices to be used on aerial nodes. Path loss and small-scale fading characteristics of air-to-ground links are analyzed using signal strength samples obtained via real-world measurements at 5 GHz. Finally, network performance in terms of throughput and number of retransmissions is presented. Results show that a throughput of 12 Mbps can be achieved at distances on the order of 300 m.
【Keywords】: Nakagami channels; antenna radiation patterns; radio networks; remotely operated vehicles; 3D positioning; 3D space; 802.11 devices; 802.11 networks; aerial exploration; aerial nodes; air-ground communications; air-to-ground links; antenna extension; autonomous small-size aerial vehicles; frequency 5 GHz; high mobility dynamics; network connectivity; network performance; path loss; real-world measurements; signal strength samples; small-scale fading characteristics; surveillance; three-dimensional aerial mobility; Ad hoc networks; Antenna measurements; Dipole antennas; Fading; Throughput; Wireless communication; 3D networks; 802.11; Nakagami fading; UAVs; link modeling; quadrotors; vehicular communications
【Paper Link】 【Pages】:125-129
【Authors】: Sangki Yun ; Lili Qiu ; Apurv Bhartia
【Abstract】: Distributed multiple-input multiple-output (MIMO) promises a dramatic capacity increase. While significant theoretical work has been done on distributed MIMO at the physical layer, how to translate the physical layer innovation into tangible benefits to real networks remains open. In particular, realizing multi-point to multi-point MIMO involves the following challenges: (i) how to accurately synchronize multiple APs in phase and time in order to successfully deliver precoded signals to the clients, and (ii) how to develop a MAC protocol to effectively support multi-point to multi-point MIMO. In this paper, we develop a practical approach to address the above challenges. We implement multi-point to multi-point MIMO for both uplink and downlink to enable multiple APs to simultaneously communicate with multiple clients. We examine a number of important MAC design issues, such as how to access the medium, perform rate adaptation, support acknowledgments in unicast traffic, deal with losses/collisions, and schedule transmissions. We demonstrate its feasibility and effectiveness through a prototype implementation on USRP and SORA, two of the most well-known software defined radio platforms.
【Keywords】: MIMO communication; access protocols; precoding; software radio; wireless LAN; MAC deliver; MAC design issues; SORA; USRP; distributed MIMO; multipoint to multipoint MIMO; physical layer innovation; precoded signals; software defined radio platforms; unicast traffic; wireless LAN; Downlink; MIMO; Multiplexing; Synchronization; Throughput; Uplink; Wireless communication
【Paper Link】 【Pages】:130-134
【Authors】: Brendan Mumey ; Jian Tang ; Ivan R. Judson ; Richard S. Wolff
【Abstract】: Relay Stations (RSs) can be deployed in a wireless network to extend its coverage and improve its capacity. Smart (directional) antennas can enhance the functionalities of RSs by forming the beam only towards intended receiving Subscriber Stations (SSs). In this paper, we study a joint problem of selecting a beam width and direction for the smart antenna at each RS and determining the RS assignment for SSs in each scheduling period. The objective is to maximize a utility function that can lead to a stable and high-throughput system. We define this as the Beam Scheduling and Relay Assignment Problem (BS-RAP). We show that BS-RAP is NP-hard, present a Mixed Integer Linear Programming (MILP) formulation to provide optimal solutions and present two polynomial-time greedy algorithms, one of which is shown to have a constant factor approximation ratio.
【Keywords】: adaptive antenna arrays; computational complexity; directive antennas; greedy algorithms; optimisation; polynomial approximation; relay networks (telecommunication); scheduling; BS-RAP; MILP formulation; NP-hard; beam scheduling; directional antennas; mixed integer linear programming; polynomial-time greedy algorithms; receiving subscriber stations; relay assignment problem; relay stations; scheduling period; smart antennas; throughput system; wireless relay networks; Approximation algorithms; Directional antennas; Directive antennas; Relays; Scheduling; Wireless communication; Wireless relay networks; approximation algorithm; beam scheduling; relay assignment; smart antenna
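A greedy heuristic for this kind of joint beam selection and RS assignment can be sketched as set-cover-style selection. The utility here (number of SSs served) and all names are illustrative; this is not the paper's exact objective or its approximation-guaranteed algorithm.

```python
def greedy_beam_schedule(beams, ss_ids):
    """Greedily pick one beam per relay station (RS) to serve as many
    subscriber stations (SSs) as possible.

    beams: {(rs, beam_id): set of SSs covered by that beam choice}.
    Returns (chosen, unassigned): chosen beam per RS, and leftover SSs.
    """
    unassigned = set(ss_ids)
    chosen, used_rs = {}, set()
    while unassigned:
        # Best remaining (RS, beam) pair by marginal coverage.
        best = max(
            ((key, cov & unassigned) for key, cov in beams.items()
             if key[0] not in used_rs),
            key=lambda kc: len(kc[1]),
            default=None,
        )
        if best is None or not best[1]:
            break  # no RS left, or nothing more can be covered
        (rs, beam), covered = best
        chosen[rs] = (beam, covered)
        used_rs.add(rs)
        unassigned -= covered
    return chosen, unassigned

beams = {("r1", "b0"): {1, 2}, ("r1", "b1"): {3}, ("r2", "b0"): {2, 3, 4}}
chosen, left = greedy_beam_schedule(beams, {1, 2, 3, 4})
```

For the toy input, r2 first takes beam b0 (covering SSs 2, 3, 4), then r1 takes b0 to pick up the remaining SS 1.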
【Paper Link】 【Pages】:135-139
【Authors】: Peng-Jun Wan ; Lei Wang ; Chao Ma ; Zhu Wang ; Boliu Xu ; Minming Li
【Abstract】: Maximizing the wireless network capacity under the physical interference model is notoriously hard due to the nonlocality and the additive nature of wireless interference under this model. The problem has been extensively studied recently, with the achievable approximation bounds progressively improved from a linear factor to a logarithmic factor. It has been a major open problem whether there exists a constant-approximation algorithm for maximizing the wireless network capacity under the physical interference model. In this paper, we improve the status quo for the case of linear transmission power assignment, which is widely adopted due to its advantage of energy conservation. By exploring and exploiting the rich nature of wireless interference with the linear power assignment, we develop constant-approximation algorithms for maximizing the wireless network capacity with linear transmission power assignment under the physical interference model, in both the unidirectional mode and the bidirectional mode.
【Keywords】: approximation theory; radio networks; radiofrequency interference; achievable approximation bound; bidirectional mode; constant-approximation approximation algorithm; energy conservation; linear factor; linear power; linear transmission power assignment; logarithmic barrier; logarithmic factor; physical interference model; unidirectional mode; wireless interference; wireless network capacity maximization; Approximation algorithms; Approximation methods; Interference; Polynomials; Vectors; Wireless networks; Link scheduling; approximation algorithms; physical interference
【Paper Link】 【Pages】:140-144
【Authors】: Tsung-Han Lin ; H. T. Kung
【Abstract】: This paper presents MIMO/CON, a PHY/MAC cross-layer design for multiuser MIMO wireless networks that delivers throughput scalable to many users. MIMO/CON supports concurrent channel access from uncoordinated and loosely synchronized users. This new capability allows a multi-antenna MIMO access point (AP) to fully realize its MIMO capacity gain. MIMO/CON draws insight from compressive sensing to carry out concurrent channel estimation. In the MAC layer, MIMO/CON boosts channel utilization by exploiting normal MAC layer retransmissions to recover otherwise undecodable packets in a collision. MIMO/CON has been implemented and validated on a 4×4 MIMO testbed with software-defined radios. In software simulations, MIMO/CON achieves a 210% improvement in MAC throughput over existing staggered access protocols in a 5-antenna AP scenario.
【Keywords】: MIMO communication; access protocols; antenna arrays; channel estimation; multiuser channels; radio networks; software radio; 5-antenna AP scenario; MAC layer retransmission; MIMO capacity gain; MIMO-CON boosts channel utilization; PHY-MAC cross-layer design; compressive sensing; concurrent channel access; concurrent channel estimation; multiantenna MIMO access point; multiuser MIMO wireless networks; scalable multiuser MIMO networking; software-defined radio; synchronized users; Antennas; Channel estimation; Decoding; Delays; MIMO; Throughput; Vectors
【Paper Link】 【Pages】:145-149
【Authors】: Ju Wang ; Dingyi Fang ; Xiaojiang Chen ; Zhe Yang ; Tianzhang Xing ; Lin Cai
【Abstract】: Without relying on devices carried by the target, device-free localization (DFL) is attractive for many applications, such as wildlife monitoring. Many challenges remain for DFL of multiple targets without a dense deployment of sensor nodes. To fill this gap, we propose a multi-target localization method based on compressive sensing, named LCS. The key observation is that, given a pair of nodes, the received signal strength (RSS) differs when a target is at different locations. Taking advantage of compressive sensing's sparse recovery to exploit the sparse structure of the localization problem (i.e., the vector containing the number and location information of k targets is an ideal k-sparse signal), we present a scalable compressive sensing based multiple-target counting and localization method, LCS, and rigorously justify the validity of the problem formulation. The results from our realistic deployment in a 12m×12m open space are promising. For 12 people with 24 nodes, the worst localization error ratio and counting error ratio of LCS are no more than 8.3% and 33.3%, respectively.
【Keywords】: compressed sensing; signal sampling; target tracking; DFL; LCS; RSS; compressive sensing; device free localization; multitarget localization method; received signal strength; sensor networks; sensor nodes; sparse property; sparse recovery; wildlife monitoring; Accuracy; Compressed sensing; Gaussian distribution; Monitoring; Sensors; Vectors; Wildlife
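The k-sparse recovery step can be illustrated with a simple matching-pursuit loop over a dictionary of per-location RSS signatures. This is a generic stand-in for the compressive sensing solver used in LCS, with purely synthetic signatures:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(y, columns, k):
    """Greedy sparse recovery of a k-sparse location vector.

    y: measured RSS-change vector; columns: {location: signature vector}.
    Returns the k locations whose signatures best explain y.
    """
    residual = list(y)
    support = []
    for _ in range(k):
        # Pick the dictionary column most correlated with the residual...
        loc = max(columns, key=lambda c: abs(dot(residual, columns[c])))
        # ...then subtract its contribution and repeat.
        coef = dot(residual, columns[loc]) / dot(columns[loc], columns[loc])
        residual = [r - coef * a for r, a in zip(residual, columns[loc])]
        support.append(loc)
    return support

columns = {"A": [1, 0, 0, 0], "B": [0, 1, 0, 0], "C": [0, 0, 1, 1]}
y = [1, 0, 1, 1]  # synthetic measurement: targets at locations A and C
support = matching_pursuit(y, columns, k=2)
```

With well-separated signatures the greedy loop recovers the true support {A, C}; real deployments need a proper recovery algorithm and a measured dictionary.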
【Paper Link】 【Pages】:150-154
【Authors】: Prusayon Nintanavongsa ; M. Yousof Naderi ; Kaushik R. Chowdhury
【Abstract】: Wireless transfer of energy will help realize perennially operating sensors, where dedicated transmitters replenish the residual node battery level through directed radio frequency (RF) waves. However, as this radiative transfer is in-band, it directly impacts data communication in the network, requiring a fresh perspective on medium access control (MAC) protocol design for appropriately sharing the channel between these two critical functions. Through an experimental study, we first demonstrate how the placement, the chosen frequency, and the number of RF energy transmitters affect the sensor charging time. These studies are then used to design a MAC protocol called RFMAC that optimizes energy delivery to sensor nodes on request. To the best of our knowledge, this is the first distributed MAC protocol for RF energy harvesting sensors, and through a combination of experimentation and simulation studies, we demonstrate a 112% average network throughput improvement over the modified unslotted CSMA MAC protocol.
【Keywords】: access protocols; carrier sense multiple access; inductive power transmission; optimisation; radio transmitters; RF energy harvesting sensors; RF energy transmitters; RF-MAC; average network throughput improvement; data communication; dedicated transmitters; directed RF waves; directed radiofrequency waves; energy delivery; medium access control protocol design; modified unslotted CSMA MAC protocol; perennially operating sensors; radiative transfer; residual node battery level; sensor charging time; sensor nodes; wireless energy transfer; Energy exchange; Media Access Protocol; Multiaccess communication; Radio frequency; Sensors; Wireless sensor networks; 915 MHz; Medium Access Protocol; Optimization; RF harvesting; Sensor; Wireless power transfer
【Paper Link】 【Pages】:155-159
【Authors】: Shaojie Tang ; Jing Yuan
【Abstract】: Wireless Sensor Networks (WSNs) are widely adopted to monitor and collect data, such as temperature and humidity, from the physical environment. Sensor readings often exhibit strong spatio-temporal correlations: readings from nearby sensors tend to be similar, and readings from consecutive time slots are highly correlated. As in our previous works, we first introduce the concept of Quality of Monitoring (QoM), and further define a utility function to quantify the QoM under different sensing schedules. In particular, the utility function is a non-decreasing submodular function that captures the spatio-temporal correlations among sensor readings. The objective of this work is to develop a set of distributed sensing schedules that achieve the highest QoM subject to an energy constraint (e.g., a fixed working duty cycle). Extensive experiments validate our theoretical results. Note that most existing works on this topic focus on centralized sensing schedules, which are extremely difficult to implement in large-scale networked sensor systems.
【Keywords】: scheduling; wireless sensor networks; DAMson; QoM; distributed sensing scheduling; quality of monitoring; spacial-temporal correlations; wireless sensor networks; Algorithm design and analysis; Correlation; Games; Monitoring; Schedules; Sensors; Wireless sensor networks; Quality of Monitoring; duty cycling; sensing schedule; submodular
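Because the utility is monotone non-decreasing and submodular, the standard centralized greedy baseline (repeatedly add the sensing slot with the largest marginal gain until the energy budget is spent) achieves the classic (1 - 1/e) approximation guarantee. A sketch, with a simple coverage count standing in for the paper's QoM utility (all names are illustrative):

```python
def greedy_schedule(slots, coverage, budget):
    """Greedily pick up to `budget` sensing slots maximizing the utility
    f(S) = |union of readings covered by S|, a monotone submodular function.

    slots: candidate slot ids; coverage: {slot: set of covered readings}.
    """
    chosen, covered = [], set()
    for _ in range(budget):
        candidates = [s for s in slots if s not in chosen]
        if not candidates:
            break
        # Marginal gain of a slot = newly covered readings it adds.
        best = max(candidates, key=lambda s: len(coverage[s] - covered))
        if not coverage[best] - covered:
            break  # no slot adds anything new
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

coverage = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {1, 2}}
chosen, covered = greedy_schedule(["s1", "s2", "s3"], coverage, budget=2)
```

Note the greedy step prefers s2 (one new reading) over s3 (zero new readings) in the second round, exactly because gains are marginal, not absolute.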
【Paper Link】 【Pages】:160-164
【Authors】: Ankur Kamthe ; Miguel Á. Carreira-Perpiñán ; Alberto Cerpa
【Abstract】: High-quality wireless link models can enable better simulations and reduce the development time for new algorithms and protocols. However, the models underlying current simulators are either based on too simple assumptions, so they are unrealistic, or are based on sophisticated machine learning techniques that require extensive training data from the target link, so they are more realistic but impractical. We consider the practical scenario where data collection time is limited (e.g. a few minutes) and cannot afford to deploy a testbed infrastructure with cabling, power and storage. We propose techniques that can construct an accurate machine learning model of the short-term behavior of a target wireless link given only limited training data for the latter, by adapting a reference model that was trained with abundant data. The parameters of the target model are a constrained transformation of the parameters of the reference model, thus the actual number of free parameters is much smaller, and can be reliably estimated with much less data. While estimating the target model from scratch requires 1 to 5 hours of target link data, we show our adaptation technique only requires under 3 minutes of data, for all packet reception rate regimes. We also show that we can construct adapted models for target links in different environments, packet sizes, interference conditions and radio technology (802.15.4 or 802.11b).
【Keywords】: learning (artificial intelligence); radio links; radiofrequency interference; telecommunication computing; 802.11b; 802.15.4; adaptation technique; cabling; data collection time; data-driven model; development time reduction; high-quality wireless link model; interference condition; machine learning model; machine learning technique; packet reception rate; packet size; protocols; radio technology; reference model; short-term behavior; target link data; target model parameter; target wireless link; testbed infrastructure; training data; Adaptation models; Computational modeling; Data collection; Data models; Vectors; Wireless communication; Wireless sensor networks
【Paper Link】 【Pages】:165-169
【Authors】: Hongjuan Li ; Xiuzhen Cheng ; Keqiu Li ; Xiaoshuang Xing ; Tao Jing
【Abstract】: In this paper, we consider the problem of cooperative spectrum sensing scheduling (C3S) in a cognitive radio network with multiple primary channels. Departing from existing research, our work focuses on a scenario in which each secondary user (SU) has the freedom to decide whether or not to participate in cooperative spectrum sensing; if not, the SU becomes a free rider who can eavesdrop on the decision about the channel status made by others. Such a mechanism conserves energy at the risk of sacrificing spectrum sensing performance. To address this problem, we answer the following two questions: “which action (contributing to spectrum sensing or not) to take?” and “which channel to sense?” To answer the first question, we model our framework as an evolutionary game in which each SU makes its decision based on its utility history, and takes an action more frequently if it brings a relatively higher utility. We also develop an entropy-based coalition formation algorithm to answer the second question, where each SU always chooses the coalition (channel) that brings the most information regarding the status of the corresponding channel. All the SUs selecting the same channel to sense form a coalition. Our simulation study indicates that the proposed scheme can guarantee the detection probability at a low false alarm rate.
【Keywords】: cognitive radio; cooperative communication; evolutionary computation; game theory; probability; radio networks; radio spectrum management; wireless channels; C3S; M primary channel; N secondary user; SU; cognitive radio networks; detection probability; energy conservation; energy consumption; entropy based coalition formation algorithm; evolutionary game theory; false alarm rate; multiple primary channel; utility-based cooperative spectrum sensing scheduling; Cognitive radio; Energy consumption; Entropy; Games; History; Sensors; Uncertainty; Cognitive radio networks; coalition formation; cooperative spectrum sensing; evolutionary game; free rider
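The paper's two decision steps — action adaptation from utility history, and entropy-based coalition (channel) choice — can be sketched as follows (a hypothetical illustration; the replicator-style step size and the busy-probability beliefs are my assumptions, not the paper's exact formulation):

```python
import math

def update_strategy(p_sense, u_sense, u_avg, step=0.1):
    """Replicator-style update (assumed form): an SU senses more often
    when sensing has recently yielded above-average utility."""
    p = p_sense + step * p_sense * (u_sense - u_avg)
    return min(max(p, 0.0), 1.0)

def channel_entropy(p_busy):
    """Binary entropy of the belief that a channel is busy."""
    if p_busy in (0.0, 1.0):
        return 0.0
    return -(p_busy * math.log2(p_busy) + (1 - p_busy) * math.log2(1 - p_busy))

def pick_channel(beliefs):
    """Join the coalition sensing the most uncertain channel, i.e. the
    one whose observation brings the most information."""
    return max(range(len(beliefs)), key=lambda i: channel_entropy(beliefs[i]))
```

For example, `pick_channel([0.9, 0.5, 0.1])` selects the middle channel, whose busy/idle status is most uncertain and therefore most informative to sense.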
【Paper Link】 【Pages】:170-174
【Authors】: Clement Kam ; Sastry Kompella ; Gam D. Nguyen ; Jeffrey E. Wieselthier ; Anthony Ephremides
【Abstract】: In this work, we investigate the queue stability of a two-user cognitive radio system with multicast traffic. We study the impact of network-level cooperation, in which one of the nodes can relay the packets of the other user that are not received at the destinations. Under this approach, if a packet transmitted by the primary user is not successfully received by the destination set but is captured by the secondary source, then the secondary user assumes responsibility for completing the transmission of the packet; the primary user therefore releases the packet from its queue and can proceed to the next one. We demonstrate that the stability region of this cooperative approach is larger than that of the noncooperative approach, which translates into a benefit for both users of this multicast system. Our system model allows for the possibility of multipacket reception, and our numerical results identify the optimal transmission strategies for different levels of multipacket reception capability.
【Keywords】: cognitive radio; cooperative communication; multicast communication; queueing theory; telecommunication traffic; cognitive cooperative random access; multicast throughput stability analysis; multicast traffic; multipacket reception; network-level cooperation; noncooperative approach; optimal transmission strategies; primary user; queue stability; secondary source; two-user cognitive radio system; Markov processes; Numerical stability; Receivers; Relays; Stability analysis; Throughput; Unicast
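The enlarged stability region has a simple one-slot intuition, sketched below (the parameter names `p_dest` and `p_capture` are my stand-ins, not the paper's notation): cooperation raises the primary's effective departure probability, so a larger arrival rate can still be supported.

```python
def departure_prob(p_dest, p_capture, cooperative):
    """Per-slot probability that the primary's head-of-line packet leaves
    its queue. Without cooperation the packet must reach the destination
    set; with network-level cooperation, capture by the secondary source
    also releases it (the secondary finishes the delivery)."""
    if cooperative:
        return p_dest + (1.0 - p_dest) * p_capture
    return p_dest

def is_stable(arrival_rate, service_prob):
    """Loynes-style sufficient condition: the queue is stable when the
    arrival rate is strictly below the departure probability."""
    return arrival_rate < service_prob
```

With `p_dest = 0.5` and `p_capture = 0.6`, cooperation lifts the departure probability from 0.5 to 0.8, so an arrival rate of 0.7 that destabilizes the noncooperative queue is supported under cooperation.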
【Paper Link】 【Pages】:175-179
【Authors】: Tao Jing ; Shixiang Zhu ; Hongjuan Li ; Xiuzhen Cheng ; Yan Huo
【Abstract】: The benefits of cognitive radio networks have been well recognized with the dramatic development of wireless applications in recent years. While many existing works treat secondary transmissions as negative interference to the primary users (PUs), in this paper we regard secondary users (SUs) as potential cooperators for the primary users. In particular, we consider the problem of cooperative relay selection, in which the PUs actively select appropriate SUs as relay nodes to enhance their transmission performance. The most critical challenge in such a problem is how to select a relay efficiently: due to the potentially large number of secondary users, it is infeasible for a PU transmitter to first scan all the SUs and then pick the best one. Instead, the PU transmitter observes the SUs sequentially. After observing a SU, the PU must decide whether to terminate its observation and use the current SU as its relay, or to skip it and observe the next SU. We address this problem using optimal stopping theory and derive the optimal stopping rule. To evaluate the performance of our proposed scheme, we conduct an extensive simulation study. The results reveal the impact of different parameters on the system performance; these parameters can be adjusted to satisfy specific system requirements.
【Keywords】: cognitive radio; cooperative communication; radio transmitters; radiofrequency interference; PU transmitter; cognitive radio networks; cooperative relay selection; cooperative relay selection problem; optimal stopping theory; primary users; secondary transmissions; secondary users; transmission performance; wireless applications; Cognitive radio; Radio transmitters; Receivers; Relays; Signal to noise ratio; System performance; Cognitive radio networks; cooperative relay selection; optimal stopping theory
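The sequential observe-or-stop structure is that of a classic optimal stopping problem. A minimal sketch for i.i.d. Uniform(0,1) relay gains with a fixed per-observation cost (my simplifying assumptions; the paper derives its rule for its own channel model) is:

```python
import math

def stopping_threshold(cost):
    """For i.i.d. Uniform(0,1) rewards with per-observation cost `cost`,
    the optimal rule stops at the first value above V, where V solves
    E[(X - V)+] = cost, i.e. (1 - V)^2 / 2 = cost."""
    return 1.0 - math.sqrt(2.0 * cost)

def select_relay(gains, cost):
    """Observe SU relay gains sequentially; stop at the first gain that
    clears the threshold, or settle for the last SU if none does."""
    v = stopping_threshold(cost)
    for i, g in enumerate(gains):
        if g >= v:
            return i, g
    return len(gains) - 1, gains[-1]
```

Raising the observation cost lowers the threshold, so the PU stops earlier — the kind of parameter tradeoff the paper's simulations explore.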
【Paper Link】 【Pages】:180-184
【Authors】: Yanjiao Chen ; Jin Zhang ; Kaishun Wu ; Qian Zhang
【Abstract】: Spectrum bands are heterogeneous, especially with respect to their central frequency. According to signal propagation properties, low-frequency spectrum generally has lower path loss, and thus a longer transmission range, than high-frequency spectrum. Cellular operators with different targeted cell sizes will therefore have different preferences for spectrums of different frequencies. Furthermore, the transmission range also affects the interference relationships among transmitters: transmitters that can reuse the same high-frequency spectrum may interfere with each other when reusing low-frequency spectrum, so it is difficult to decide how to construct the interference graph to exploit spectrum reusability among transmitters. Auctions are considered an efficient way to allocate spectrum. However, most previous works only considered homogeneous spectrum auctions, failing to address spectrum heterogeneity. In this paper, we propose TAMES, a Truthful Auction Mechanism for hEterogeneous Spectrum allocation, which allows buyers to freely express their different preferences towards different spectrums. Frequency-specific interference graphs are constructed to determine buyer groups. The proposed heterogeneous spectrum auction is theoretically proved to be truthful and individually rational. The simulation results verify that the proposed auction mechanism outperforms auction mechanisms with homogeneous bids or a homogeneous interference graph, yielding higher buyer satisfaction, seller revenue, and spectrum utilization.
【Keywords】: cellular radio; graph theory; radio transmitters; radiofrequency interference; TAMES; buyer groups; cellular operators; central frequency; frequency-specific interference graphs; high-frequency spectrum; homogenous interference graph; homogenous spectrum auction; interference relationships; low-frequency spectrum; path loss; signal propagation properties; transmitters; truthful auction mechanism for heterogeneous spectrum allocation; Cost accounting; Economics; Educational institutions; Interference; Resource management; Simulation; Transmitters
【Paper Link】 【Pages】:185-189
【Authors】: Shaolei Ren ; Mihaela van der Schaar
【Abstract】: In this paper, we consider a wireless cloud computing system in which a profit-maximizing wireless service provider offers cloud computing services to its subscribers. In particular, we focus on batch services, which, due to their non-urgent nature, allow more scheduling flexibility than their interactive counterparts. Unlike existing research that studied demand-side management and energy cost-saving techniques separately (both of which are critical to profit maximization), we propose a provably-efficient Dynamic Scheduling and Pricing (Dyn-SP) algorithm which proactively adapts the service demand to workload scheduling in the data center and opportunistically exploits low electricity prices to process batch jobs for energy cost saving. Without the need to predict future information, as assumed by some prior works, Dyn-SP can be applied to an arbitrarily random environment in which the electricity price, available renewable energy supply, and wireless network capacities may evolve over time as arbitrary stochastic processes. It is proved that, compared to the optimal offline algorithm with future information, Dyn-SP produces a close-to-optimal long-term profit while bounding the job queue length in the data center. We also show, both analytically and numerically, that a desired tradeoff between profit and queueing delay can be obtained by appropriately tuning the control parameter. Finally, we perform a simulation study to demonstrate the effectiveness of Dyn-SP.
【Keywords】: cloud computing; computer centres; pricing; processor scheduling; profitability; queueing theory; radio networks; stochastic processes; telecommunication power management; Dyn-SP algorithm; arbitrarily random environment; arbitrary stochastic process; batch job processing; batch services; close-to-optimal long-term profit; data center; electricity prices; energy cost saving; job queue length; profit-maximizing wireless service provider; provably-efficient dynamic scheduling and pricing algorithm; queueing delay; renewable energy supply; service demand; wireless cloud computing system; wireless network capacities; workload scheduling; Cooling; Delays; Dynamic scheduling; Electricity; Heuristic algorithms; Pricing; Servers
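Lyapunov-style scheduling of this kind typically reduces to a per-slot drift-plus-penalty comparison. The following is a generic sketch (not the paper's actual Dyn-SP algorithm; the binary full-rate/idle choice is my simplification):

```python
def schedule_rate(queue_len, elec_price, V, max_rate):
    """Drift-plus-penalty decision: choose the batch-processing rate b
    minimizing V * price * b - queue_len * b over b in {0, max_rate},
    i.e. process at full rate only when the backlog outweighs the
    (V-scaled) electricity price."""
    return max_rate if queue_len >= V * elec_price else 0

def queue_step(queue_len, arrivals, served):
    """Standard queue update: Q(t+1) = max(Q(t) - b(t), 0) + a(t)."""
    return max(queue_len - served, 0) + arrivals
```

A larger control parameter `V` defers work to cheap-electricity periods (higher profit) at the cost of longer queues — the profit-delay tradeoff the abstract describes.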
【Paper Link】 【Pages】:190-194
【Authors】: Weiwen Zhang ; Yonggang Wen ; Dapeng Oliver Wu
【Abstract】: In this paper, we investigate the scheduling policy for collaborative execution in mobile cloud computing. A mobile application is represented by a sequence of fine-grained tasks forming a linear topology, and each of them is executed either on the mobile device or offloaded to the cloud for execution. The design objective is to minimize the energy consumed by the mobile device while meeting a time deadline. We formulate this minimum-energy task scheduling problem as a constrained shortest path problem on a directed acyclic graph, and adapt the canonical LARAC algorithm to solve it approximately. Numerical simulation suggests that a one-climb offloading policy, in which at most one migration from the mobile device to the cloud takes place during the collaborative task execution, is energy-efficient for the Markovian stochastic channel. Moreover, compared to standalone mobile execution and cloud execution, the optimal collaborative execution strategy can significantly reduce the energy consumed on the mobile device.
【Keywords】: Markov processes; cloud computing; directed graphs; energy conservation; energy consumption; groupware; mobile computing; scheduling; LARAC algorithm; Markovian stochastic channel; cloud execution; collaborative task execution; constrained shortest path problem; directed acyclic graph; energy consumption minimization; energy-efficient scheduling policy; linear topology; minimum-energy task scheduling problem; mobile application; mobile cloud computing; mobile device; mobile execution; one-climb offloading policy; optimal collaborative execution strategy; Cloud computing; Collaboration; Energy consumption; Mobile communication; Mobile handsets; Stochastic processes; Topology; collaborative execution; mobile cloud computing; scheduling policy
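The canonical LARAC algorithm referenced above solves the delay-constrained minimum-cost path problem by Lagrangian relaxation: repeatedly run an ordinary shortest-path search under the combined weight cost + λ·delay and adjust λ from the two current best paths. A self-contained sketch (the graph encoding and numeric tolerance are my choices):

```python
import heapq

def dijkstra(adj, s, t, weight):
    """adj: {u: [(v, cost, delay), ...]}; weight maps (cost, delay) -> scalar."""
    dist, prev, pq = {s: 0.0}, {}, [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, c, dl in adj[u]:
            nd = d + weight(c, dl)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [t]
    while path[-1] != s:
        path.append(prev[path[-1]])
    return path[::-1]

def path_sums(adj, path):
    c = d = 0.0
    for u, v in zip(path, path[1:]):
        for w, cc, dd in adj[u]:
            if w == v:
                c, d = c + cc, d + dd
                break
    return c, d

def larac(adj, s, t, deadline, iters=50):
    pc = dijkstra(adj, s, t, lambda c, d: c)        # min-cost path
    if path_sums(adj, pc)[1] <= deadline:
        return pc
    pd = dijkstra(adj, s, t, lambda c, d: d)        # min-delay path
    if path_sums(adj, pd)[1] > deadline:
        return None                                  # no feasible path
    for _ in range(iters):
        cc, dc = path_sums(adj, pc)
        cd, dd = path_sums(adj, pd)
        lam = (cc - cd) / (dd - dc)
        pr = dijkstra(adj, s, t, lambda c, d: c + lam * d)
        cr, dr = path_sums(adj, pr)
        if abs((cr + lam * dr) - (cc + lam * dc)) < 1e-12:
            return pd                                # aggregated costs met
        if dr <= deadline:
            pd = pr
        else:
            pc = pr
    return pd
```

On a small graph with a cheap-but-slow path and a fast-but-costly path, tightening the deadline flips the answer between the two, and an unreachable deadline returns `None`.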
【Paper Link】 【Pages】:195-199
【Authors】: Peng Shu ; Fangming Liu ; Hai Jin ; Min Chen ; Feng Wen ; Yupeng Qu ; Bo Li
【Abstract】: Mobile cloud computing, promising to extend the capabilities of resource-constrained mobile devices, is emerging as a new computing paradigm that has fostered a wide range of exciting applications. In this new paradigm, efficient data transmission between the cloud and mobile devices becomes essential. This transmission, however, is highly unreliable and unpredictable due to several uncontrollable factors, particularly the instability and intermittency of wireless connections, fluctuation of communication bandwidth, and user mobility. Consequently, it puts a heavy burden on the energy consumption of mobile devices. As confirmed by our experiments, significantly more energy is consumed during “bad” connectivity. Inspired by the feasibility of scheduling data transmissions for prefetching-friendly or delay-tolerant applications, in this paper we present eTime, a novel Energy-efficient data Transmission strategy between cloud and Mobile dEvices based on Lyapunov optimization. eTime aggressively and adaptively seizes moments of good connectivity to prefetch frequently used data, while deferring delay-tolerant data during bad connectivity. To cope with the randomness and unpredictability of wireless connectivity, eTime relies only on current status information to make a global energy-delay tradeoff decision. Our evaluations, from both trace-driven simulation and real-world implementation, show that eTime can be applied to various popular applications while achieving 20%-35% energy savings.
【Keywords】: cloud computing; delay tolerant networks; energy conservation; mobile computing; optimisation; radio networks; Lyapunov optimization; bad connectivity; communication bandwidth; computing paradigm; data transmissions; delay-tolerant applications; delay-tolerant data; eTime; energy consumption; energy-efficient data transmission strategy; energy-efficient transmission; frequently used data; global energy-delay tradeoff decision; mobile cloud computing; realworld implementation; resource-constrained mobile devices; status information; trace-driven simulation; user mobility; wireless connections; wireless connectivity; Bandwidth; Data communication; Energy consumption; IEEE 802.11 Standards; Mobile communication; Smart phones
【Paper Link】 【Pages】:200-204
【Authors】: Xiaofan He ; Huaiyu Dai ; Wenbo Shen ; Peng Ning
【Abstract】: A fundamental assumption of link signature based security mechanisms is that the wireless signals received at two locations separated by more than half a wavelength are essentially uncorrelated. However, it has been observed that in certain circumstances (e.g., with poor scattering and/or a strong line-of-sight (LOS) component), this assumption is invalid. In this paper, a Correlation ATtack (CAT) is proposed to demonstrate the potential vulnerability of link signature based security mechanisms in such circumstances. Based on statistical inference, CAT explicitly exploits spatial correlations to reconstruct the legitimate link signature from the observations of multiple adversary receivers deployed in the vicinity. Our findings are verified through theoretical analysis, well-known channel correlation models, and experiments on USRP platforms and GNURadio.
【Keywords】: correlation methods; radio links; radio receivers; telecommunication security; wireless channels; CAT; GNURadio; LOS component; USRP platforms; channel correlation; correlation attack; line-of-sight component; link signature; multiple adversary receivers; poor scattering; spatial correlations; statistical inference; vulnerability; wireless security; wireless signals; Channel estimation; Communication system security; Correlation; Receivers; Security; Wireless communication; Wireless sensor networks
【Paper Link】 【Pages】:205-209
【Authors】: Roberto Di Pietro ; Stefano Guarino
【Abstract】: In Mobile Unattended Wireless Sensor Networks (MUWSNs), nodes sense the environment and store the acquired data until the arrival of a trusted data sink. In this paper, we address the fundamental issue of quantifying the extent to which secret sharing schemes, combined with node mobility, can help assure data availability and confidentiality. We provide accurate analytical results relating the fraction of the network accessed by the sink and by the adversary to the amount of information each can successfully recover. Extensive simulations support our findings.
【Keywords】: mobility management (mobile radio); telecommunication security; wireless sensor networks; MUWSN; data availability; data confidentiality; mobile unattended wireless sensor network; node mobility; resource-constrained autonomous sensor node; secret sharing scheme; trusted data sink; Availability; Cryptography; Data models; Mobile communication; Mobile computing; Wireless sensor networks; UWSN security and privacy; metrics; mobility models
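The availability/confidentiality tradeoff of secret sharing can be made concrete with a standard (k, n) Shamir scheme over a prime field (the paper analyzes generic schemes; this particular construction and field choice are my illustration): any k shares reconstruct the datum, while fewer than k reveal nothing.

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is in GF(P)

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them suffice to recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):       # Horner evaluation of the polynomial
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

A sink reaching at least k of the n mobile nodes recovers the data; an adversary compromising fewer than k nodes learns nothing — exactly the quantities the paper's analysis ties to the fraction of the network each party accesses.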
【Paper Link】 【Pages】:210-214
【Authors】: Hyungbae Park ; Sejun Song ; Baek-Young Choi ; Chin-Tser Huang
【Abstract】: While many security schemes protect the content of messages in Distributed Sensing Systems (DSS), contextual information, such as communication patterns, is left vulnerable and can be utilized by attackers to identify critical information such as the locations of event sources and message sinks. Existing solutions for location anonymity are mostly designed to protect source or sink location anonymity individually, against limited eavesdroppers covering a small region at a time. However, they can be easily defeated by highly motivated global eavesdroppers that can monitor all communication events in the DSS. To grapple with these challenges, we propose a mechanism for Preserving Anonymity of Sources and Sinks against Global Eavesdroppers (PASSAGES). PASSAGES uses a small number of stealthy permeability tunnels, such as wormholes and message ferries, to scatter and hide communication patterns. Unlike prior schemes, PASSAGES effectively achieves a high anonymity level for both source and sink locations without incurring extra communication overhead. We quantify the location anonymity level and evaluate the effectiveness of PASSAGES via analysis as well as extensive simulations. We also evaluate the synergistic effect of combining PASSAGES with other traditional solutions.
【Keywords】: distributed sensors; telecommunication security; telecommunication traffic; telecommunication transmission lines; DSS; PASSAGES; communication overheads; communication patterns; contextual information; critical information; distributed sensing systems; event sources; global eavesdroppers; location anonymity; message ferries; message sinks; preserving anonymity; security schemes; sink location; source location; stealthy permeability tunnels; wormholes; Mobile communication; Permeability; Privacy; Sensors; Uncertainty; Wireless communication; Wireless sensor networks
【Paper Link】 【Pages】:215-219
【Authors】: Shaxun Chen ; Amit Pande ; Kai Zeng ; Prasant Mohapatra
【Abstract】: Video source identification is very important in validating video evidence, tracking down video piracy crimes and regulating individual video sources. With the prevalence of wireless communication, wireless video cameras continue to replace their wired counterparts in security/surveillance systems and tactical networks. However, wirelessly streamed videos usually suffer from blocking and blurring due to inevitable packet loss in wireless transmissions. The existing source identification methods experience significant performance degradation or even fail to work when identifying videos with blocking and blurring. In this paper, we propose a method which is effective and efficient in identifying such wirelessly streamed videos. In addition, we also propose to incorporate wireless channel signatures and selective frame processing into source identification, which significantly improve the identification speed.
【Keywords】: cameras; radio networks; telecommunication security; video surveillance; wireless channels; individual video sources; inevitable packet loss; lossy wireless networks; security-surveillance systems; selective frame processing; source identification; tactical networks; video evidence; video piracy crimes; video source identification; wireless channel signatures; wireless communication; wireless transmissions; wireless video cameras; wirelessly streamed videos; Cameras; Communication system security; Noise; Packet loss; Streaming media; Wireless communication; Wireless sensor networks
【Paper Link】 【Pages】:220-224
【Authors】: Boxun Zhang ; Gunnar Kreitz ; Marcus Isaksson ; Javier Ubillos ; Guido Urdaneta ; Johan A. Pouwelse ; Dick H. J. Epema
【Abstract】: Spotify is a peer-assisted music streaming service that has gained worldwide popularity in the past few years. Until now, little has been published about user behavior in such services. In this paper, we study the user behavior in Spotify by analyzing a massive dataset collected between 2010 and 2011. Firstly, we investigate the system dynamics including session arrival patterns, playback arrival patterns, and daily variation of session length. Secondly, we analyze individual user behavior on both multiple and single devices. Our analysis reveals the favorite times of day for Spotify users. We also show the correlations between both the length and the downtime of successive user sessions on single devices. In particular, we conduct the first analysis of the device-switching behavior of a massive user base.
【Keywords】: behavioural sciences computing; media streaming; music; peer-to-peer computing; Spotify; collected dataset analysis; device switching behavior; massive user base; peer assisted music streaming service; playback arrival pattern; session arrival pattern; successive user session; user behavior; Correlation; Mobile communication; Mobile computing; Mobile handsets; Music; Streaming media; Switches
【Paper Link】 【Pages】:225-229
【Authors】: John Mark Agosta ; Jaideep Chandrashekar ; Mark Crovella ; Nina Taft ; Daniel Ting
【Abstract】: We model a little-studied type of traffic, namely the network traffic generated by endhosts. We introduce a parsimonious model of the marginal distribution of connection arrivals, consisting of mixture models with both heavy- and light-tailed component distributions. Our methodology assumes that the underlying user data can be fitted to one of several models, and we apply a Bayesian model selection criterion to choose the preferred combination of components. Our experiments show that a simple Pareto-exponential mixture model is preferred over more complex alternatives for a wide range of users. This model has the desirable property of modeling the entire distribution, effectively clustering the traffic into heavy-tailed and non-heavy-tailed components. The method also quantifies the wide diversity observed in endhost traffic.
【Keywords】: Bayes methods; Pareto distribution; telecommunication networks; telecommunication traffic; Bayesian model selection criterion; connection arrivals; endhost network traffic; heavy-tailed component distributions; light-tailed component distributions; marginal distribution; parsimonious model; simple Pareto-exponential mixture; traffic clustering; wide diversity; Approximation methods; Bayes methods; Computational modeling; Data models; Educational institutions; Mathematical model; Maximum likelihood estimation
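The model-selection step can be illustrated with a simplified BIC comparison between a pure exponential and a pure Pareto fit (the paper uses richer Pareto-exponential mixtures and a Bayesian criterion; the MLE plug-in estimates and the two-model menu here are my simplifications):

```python
import math

def exp_loglik(xs):
    """Log-likelihood of an exponential fit with MLE rate lam = n / sum(x)."""
    lam = len(xs) / sum(xs)
    return len(xs) * math.log(lam) - lam * sum(xs)

def pareto_loglik(xs):
    """Log-likelihood of a Pareto fit with MLE scale xm = min(x) and
    MLE shape alpha = n / sum(log(x / xm))."""
    xm = min(xs)
    alpha = len(xs) / sum(math.log(x / xm) for x in xs)
    return (len(xs) * math.log(alpha) + len(xs) * alpha * math.log(xm)
            - (alpha + 1) * sum(math.log(x) for x in xs))

def bic(ll, k, n):
    """Bayesian Information Criterion; lower is better."""
    return k * math.log(n) - 2.0 * ll

def prefer_heavy_tail(xs):
    """True when the (2-parameter) Pareto fit beats the (1-parameter)
    exponential fit under BIC."""
    n = len(xs)
    return bic(pareto_loglik(xs), 2, n) < bic(exp_loglik(xs), 1, n)
```

On samples drawn from a Pareto law (e.g. via its inverse CDF), the criterion correctly prefers the heavy-tailed model despite its extra parameter.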
【Paper Link】 【Pages】:230-234
【Authors】: Ignacio Bermudez ; Stefano Traverso ; Marco Mellia ; Maurizio M. Munafò
【Abstract】: This paper presents a characterization of Amazon Web Services (AWS), the most prominent cloud provider, which offers computing, storage, and content delivery platforms. Leveraging passive measurements, we explore the EC2, S3, and CloudFront AWS services to unveil their infrastructure, the pervasiveness of the content they host, and their traffic allocation policies. Measurements reveal that most of the content residing on EC2 and S3 is served by one Amazon datacenter, located in Virginia, which appears to be the worst-performing one for Italian users. This causes traffic to take long and expensive paths through the network. Since AWS offers no automatic migration or load balancing among different locations, content is exposed to the risk of outages. The CloudFront CDN, by contrast, shows much better performance thanks to an effective cache selection policy that serves 98% of the traffic from the nearest available cache. CloudFront also exhibits dynamic load-balancing policies, in contrast to the static allocation of instances on EC2 and S3. The information presented in this paper will be useful for developers aiming to entrust AWS with their content, and for researchers willing to improve cloud design.
【Keywords】: Web services; cache storage; cloud computing; Amazon AWS case; CloudFront AWS services; CloudFront CDN; EC2; S3; Web services; automatic migration; cache selection policy; cloud-based services; load-balancing policy; passive measurement; static allocation; traffic allocation policy; Availability; IP networks; Measurement; Monitoring; Servers; Time factors; Web services
【Paper Link】 【Pages】:235-239
【Authors】: Diana Joumblatt ; Jaideep Chandrashekar ; Branislav Kveton ; Nina Taft ; Renata Teixeira
【Abstract】: We design predictors of user dissatisfaction with the performance of applications that use networking. Our approach combines user-level feedback with low-level machine and networking metrics. The main challenges of predicting user dissatisfaction, which arises when networking conditions adversely affect applications, come from the scarcity of user feedback and the fact that poor-performance episodes are rare. We develop a methodology to handle these challenges. Our method processes low-level data via quantization and feature selection steps. We combine this with user labels and employ supervised learning techniques to build predictors. Using data from 19 personal machines, we show how to build training sets and demonstrate that non-linear SVMs achieve higher true positive rates (around 0.9) than predictors based on linear models. Finally, we quantify the benefits of building per-application predictors compared to general predictors that use data from multiple applications simultaneously to anticipate user dissatisfaction.
【Keywords】: computer network performance evaluation; ergonomics; internetworking; learning (artificial intelligence); support vector machines; Internet application performance; data labelling; end-hosts; feature selection; low-level data processing; low-level machine metrics; low-level networking metrics; networking conditions; nonlinear SVM; personal machines; quantization; supervised learning techniques; training sets; true-positive rates; user dissatisfaction prediction; user-level feedback; Feature extraction; Measurement; Postal services; Quality of service; Training; Vectors; YouTube
【Paper Link】 【Pages】:240-244
【Authors】: Haowei Yuan ; Patrick Crowley
【Abstract】: Content distribution is a primary activity on the Internet. Name-centric network architectures support content distribution intrinsically. Named Data Networking (NDN), one recent such scheme, names packets rather than end-hosts, thereby enabling packets to be cached and redistributed by routers. Among alternative name-based systems, HTTP is the most significant by any measure. A majority of today's content distribution services leverage the widely deployed HTTP infrastructure, such as web servers and caching proxies. As a result, HTTP can be viewed as a practical, name-based content distribution solution. Of course, NDN and HTTP do not overlap entirely in their capabilities and design goals, but both support name-based content distribution. This paper presents an experimental performance evaluation of NDN-based and HTTP-based content distribution solutions. Our findings verify popular intuition, but also surprise in some ways. In wired networks with local-area transmission latencies, the HTTP-based solution dramatically outperforms NDN, with roughly 10× greater sustained throughput. In networks with lossy access links (such as wireless links with 10% drop rates) or with non-local transmission delays, the situation reverses thanks to NDN's faster link retransmission, an architectural advantage: NDN outperforms HTTP, with sustained throughput roughly 4× higher over a range of experimental scenarios.
【Keywords】: Internet; content management; hypermedia; multimedia communication; radio links; telecommunication network routing; transport protocols; HTTP infrastructure; HTTP-based content distribution solution; Internet; NDN; Web server; caching proxies; content distribution service; drop rate; end-host; link retransmission; local-area transmission latencies; lossy access link; name-based content distribution solution; name-based system; name-centric network architecture; named data networking; names packet; network throughput; nonlocal transmission delay; router; wired network; wireless link; Computer architecture; Delays; Throughput; Web servers
【Paper Link】 【Pages】:245-249
【Authors】: Jaime Llorca ; Antonia Maria Tulino ; Kyle Guan ; Jairo O. Esteban ; Matteo Varvello ; Nakjung Choi ; Daniel C. Kilper
【Abstract】: Consider a network of prosumers of media content in which users dynamically create and request content objects. The request process is governed by the objects' popularity and varies across network regions and over time. In order to meet user requests, content objects can be stored and transported over the network, characterized by the capacity and energy efficiency of the storage and transport resources. The energy efficient dynamic in-network caching problem aims at finding the evolution of the network configuration, in terms of the content objects being cached and transported over each network element at any given time, that meets user requests, satisfies network resource capacities and minimizes overall energy use. We provide 1) an information-centric optimization framework for the energy efficient dynamic in-network caching problem, 2) an offline solution, EE-OFD, based on an integer linear program (ILP) that obtains the maximum efficiency gains that can be achieved with global knowledge of user requests and network resources, and 3) an efficient fully distributed online solution, EEOND, that allows network nodes to make local caching decisions based on their current estimate of the global energy benefit. Our solutions take into account the network heterogeneity, in terms of capacity, energy efficiency and content popularity, and adapt to changing network conditions minimizing overall energy use.
【Keywords】: cache storage; computer network management; integer programming; linear programming; EE-OFD; EEOND; ILP; content object request; content object storage; content object transport; dynamic in-network caching; dynamical content object creation; energy efficiency; energy efficient content delivery; energy use minimization; fully distributed online solution; information-centric optimization framework; integer linear program; local caching decision; maximum efficiency gain; media content prosumers; network conditions; network configuration; network heterogeneity; network nodes; network resource capacity; object popularity; transport resource; user request; Content distribution networks; Dynamic scheduling; Energy consumption; Internet; Optimization; Routing; Vegetation; Energy efficiency; content centric networking; content delivery network; distributed optimization; in-network caching; integer linear programming
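The distributed online decision — each node caching based on its current estimate of the global energy benefit — can be caricatured as a per-node benefit test (a deliberately simplified sketch; the rate and energy parameter names are my invented stand-ins for the paper's network model):

```python
def should_cache(req_rate, obj_size, e_transport_per_bit, e_cache_per_bit_s):
    """Cache an object locally when the estimated transport energy saved
    per second (local request rate x object size x per-bit transport
    energy) exceeds the energy spent keeping the object cached."""
    saved_per_s = req_rate * obj_size * e_transport_per_bit
    cost_per_s = obj_size * e_cache_per_bit_s
    return saved_per_s > cost_per_s
```

Popular objects (high `req_rate`) justify caching close to requesters, while cold objects are cheaper to transport on demand — the popularity-dependent behavior the framework captures.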
【Paper Link】 【Pages】:250-254
【Authors】: Yao Liu ; Fei Li ; Lei Guo ; Bo Shen ; Songqing Chen
【Abstract】: The Internet has witnessed rapidly increasing streaming traffic to various mobile devices. In this paper, we find that for the popular iOS-based mobile devices, accessing popular Internet streaming services typically involves about 10%-70% unnecessary redundant traffic. Such a practice not only overutilizes and wastes resources on the server side and in the network (cellular or Internet), but also consumes additional battery power on users' mobile devices and can incur monetary cost. To alleviate this situation without changing the server side or iOS, we design and implement a CStreamer prototype that works transparently between existing iOS devices and media servers. We also build a CStreamer iOS app that lets end users access Internet streaming services via CStreamer. Experiments conducted with this prototype running on Amazon EC2 show that CStreamer can completely eliminate the redundant traffic without degrading users' QoS.
【Keywords】: Internet; mobile handsets; mobile radio; telecommunication services; video streaming; Amazon EC2; CStreamer iOS app; CStreamer prototype; Internet streaming services; cellular network; iOS based mobile device; media server; redundant Internet streaming traffic; server side; user QoS; user mobile device; video sharing Website; wastes resource; Internet; Media; Mobile communication; Mobile handsets; Servers; Streaming media; YouTube
【Paper Link】 【Pages】:255-259
【Authors】: Mohsen Sardari ; Ahmad Beirami ; Jun Zou ; Faramarz Fekri
【Abstract】: Recent studies have shown the existence of a considerable amount of packet-level redundancy in network flows. Since application-layer solutions cannot capture the packet-level redundancy, development of new content-aware approaches capable of redundancy elimination at the packet and sub-packet levels is necessary. These requirements motivate the redundancy elimination of packets from an information-theoretic point of view. For efficient compression of packets, a new framework called memory-assisted universal compression has been proposed. This framework is based on learning the statistics of the source generating the packets at some intermediate nodes and then leveraging these statistics to effectively compress a new packet. This paper investigates both theoretically and experimentally the memory-assisted compression of network packets. Clearly, a simple source cannot model the data traffic. Hence, for our analytic study we consider traffic from a complex source that consists of a mixture of simple information sources. We develop a practical code for memory-assisted compression and combine it with a proposed hierarchical clustering to better utilize the memory. Finally, we validate our results via simulation on real traffic traces. Memory-assisted compression combined with the hierarchical clustering method results in compression of packets close to the fundamental limit. As a result, we report a factor of two improvement over traditional end-to-end compression.
【Keywords】: computer networks; data compression; encoding; pattern clustering; application-layer solutions; content-aware network data compression; hierarchical clustering method; intermediate nodes; memory-assisted universal compression; network flows; network packet memory-assisted compression; packet-level redundancy; redundancy elimination; source statistics; subpacket levels; universal coding techniques; Classification algorithms; Clustering algorithms; Compounds; Entropy; Joints; Redundancy; Servers
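The memory-assisted idea above — learn the statistics of earlier packets at an intermediate node, then compress a new packet against that memory — can be approximated with zlib's preset-dictionary support. A hedged sketch, not the authors' code; the sample packets are invented for illustration:

```python
import zlib

def compress_with_memory(packet: bytes, memory: bytes) -> bytes:
    # "Memory" learned from earlier packets is supplied as a preset
    # dictionary, so patterns shared with past traffic need not be re-encoded.
    c = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS, zdict=memory)
    return c.compress(packet) + c.flush()

def decompress_with_memory(blob: bytes, memory: bytes) -> bytes:
    d = zlib.decompressobj(zlib.MAX_WBITS, zdict=memory)
    return d.decompress(blob)

# Earlier packets from the same source act as the shared memory.
memory = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\nUser-Agent: demo\r\n" * 4
packet = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\nUser-Agent: demo\r\nCookie: x=1\r\n"

plain = zlib.compress(packet, 9)
assisted = compress_with_memory(packet, memory)
assert decompress_with_memory(assisted, memory) == packet
assert len(assisted) < len(plain)  # memory shrinks the compressed packet
```

Both ends must hold the same memory, which is why the paper places it at intermediate nodes and clusters sources whose statistics match.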
【Paper Link】 【Pages】:260-264
【Authors】: Ahmed Osama Fathy Atya ; Ioannis Broustis ; Shailendra Singh ; Dimitris Syrivelis ; Srikanth V. Krishnamurthy ; Thomas F. La Porta
【Abstract】: Network coding has been shown to offer significant throughput benefits over store-and-forward routing in certain wireless network topologies. However, the application of network coding may not always improve the network performance. In this paper, we provide a comprehensive analytical study, which helps in assessing when network coding is preferable to a traditional store-and-forward approach. Interestingly, our study reveals that in many topological scenarios, network coding can in fact hurt the throughput performance; in such scenarios, applying the store-and-forward approach leads to higher network throughput. We validate our analytical findings via extensive testbed experiments, and we extract guidelines on when network coding should be applied instead of store-and-forward.
【Keywords】: network coding; telecommunication network topology; telecommunication switching; network coding; network throughput; store and forward approach; testbed experiments; wireless network topologies; Bit rate; Encoding; Lifting equipment; Network coding; Relays; Throughput; Topology; Measurements; Network Policy; Rate Adaptation; Simulation; Testbed; Wireless Network Coding
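The canonical case where coding beats store-and-forward is the two-way relay: forwarding both packets takes four transmissions, while broadcasting their XOR takes three. A minimal sketch of that decoding step (the packet contents are made up):

```python
def xor(a: bytes, b: bytes) -> bytes:
    # Bytewise XOR of two equal-length packets.
    return bytes(x ^ y for x, y in zip(a, b))

# Alice and Bob exchange packets via a relay. Store-and-forward needs
# 4 transmissions (2 in, 2 out); XOR coding needs 3: the relay
# broadcasts a XOR b, and each side decodes using its own packet.
alice_pkt = b"hello from alice"
bob_pkt   = b"greetings, bob!!"

coded = xor(alice_pkt, bob_pkt)          # the relay's single broadcast
assert xor(coded, alice_pkt) == bob_pkt  # Alice recovers Bob's packet
assert xor(coded, bob_pkt) == alice_pkt  # Bob recovers Alice's packet
```

The paper's point is that this saving is topology-dependent: when overhearing or rate asymmetry breaks the decoding precondition, the coded broadcast can cost more than it saves.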
【Paper Link】 【Pages】:265-269
【Authors】: Wentao Huang ; Tracey Ho ; Hongyi Yao ; Sidharth Jaggi
【Abstract】: This paper studies rateless network error correction codes for reliable multicast in the presence of adversarial errors. We present rateless coding schemes for two adversarial models, where the source sends more redundancy over time, until decoding succeeds. The first model assumes there is a secret channel between the source and the destination that the adversaries cannot overhear. The rate of the channel is negligible compared to the main network. In the second model the source and destination share random secrets independent of the input information. The amount of secret information required is negligible compared to the amount of information sent. Both schemes are capacity optimal, distributed, polynomial-time and end-to-end in that other than the source and destination nodes, other intermediate nodes carry out classical random linear network coding.
【Keywords】: decoding; error correction codes; multicast communication; network coding; telecommunication network reliability; byzantine adversaries; capacity optimal; decoding succeeds; error correction codes; rateless resilient network coding; reliable multicast; Decoding; Encoding; Equations; Error correction codes; Network coding; Redundancy; Vectors
【Paper Link】 【Pages】:270-274
【Authors】: Jin Wang ; Kejie Lu ; Jianping Wang ; Chunming Qiao
【Abstract】: To protect user privacy in wireless mesh networks (WMNs), it is important to address two major challenges, namely: flow untraceability and movement untraceability, which prevent malicious attackers from deducing the flow paths and the movement tracks of mobile devices. For these two privacy requirements, most existing approaches rely on encrypting the whole packet, appending random padding, and applying random delay for each message at every intermediate node, resulting in significant computational and communication overheads. Recently, linear network coding (LNC) has been introduced as an alternative but the global encoding vectors (GEVs) of coded messages have to be encrypted so as to conceal the relationships between the incoming and outgoing messages. In this paper, we aim to explore the potential of LNC to ensure the flow untraceability and movement untraceability. Specifically, we first determine the necessary and sufficient condition, with which the two privacy requirements can be achieved without encrypting either GEVs or message contents. We then design a deterministic untraceable LNC (ULNC) scheme to provide flow untraceability and movement untraceability when the sufficient and necessary condition is satisfied. Finally, we discuss the effectiveness of the proposed ULNC scheme against traffic analysis attacks in WMNs.
【Keywords】: linear codes; mobile handsets; network coding; telecommunication security; telecommunication traffic; wireless mesh networks; GEV; ULNC scheme; WMN; coded messages; communication overheads; computational overheads; flow paths; flow untraceability; incoming messages; intermediate node; linear network coding; malicious attackers; message contents; mobile device movement tracks; mobile device untraceability; movement untraceability; outgoing messages; privacy requirements; random delay; random padding; traffic analysis attacks; untraceable LNC scheme; user privacy protection; wireless mesh networks; Correlation; Cryptography; Mobile handsets; Network coding; Privacy; Vectors; Wireless communication
【Paper Link】 【Pages】:275-279
【Authors】: Shuai Wang ; Guang Tan ; Yunhuai Liu ; Hongbo Jiang ; Tian He
【Abstract】: Reducing transmission redundancy is key to the efficiency of wireless network broadcast. A standard technique to achieve this is to create a network backbone consisting of a subset of nodes that are responsible for data forwarding, while other nodes act as passive receivers. On top of this, network coding (NC) is often used to further reduce unnecessary transmissions. The main problem with this backbone+NC approach is that the backbone construction process is blind to the needs of NC and thus may produce a structure with little benefit to the NC algorithms. To address this problem, we propose a Coding Opportunity Aware Backbone (COAB) construction scheme, which seeks to maximally exploit coding opportunities when selecting backbone forwarders. We show that the better informed backbone construction process leads to significantly increased coding frequency, at the minimal cost of localized information exchange. The highlight of our work is COAB's broad applicability and effectiveness. We integrate COAB with ten state-of-the-art broadcast algorithms, specified in eight publications [1]-[8], and evaluate it with prototype implementations on 30 MICAz nodes. The experimental results show that our design outperforms the existing schemes substantially.
【Keywords】: network coding; radio broadcasting; radio networks; COAB broad applicability; COAB construction scheme; NC approach; coding opportunity aware backbone construction scheme; coding opportunity aware backbone metrics; data forwarding; localized information exchange; network backbone; network coding; passive receiver; transmission redundancy reduction; wireless network broadcast; Algorithm design and analysis; Clustering algorithms; Encoding; Measurement; Network coding; Receivers; Wireless networks
【Paper Link】 【Pages】:280-284
【Authors】: Sven Wiethölter ; Andreas Ruttor ; Uwe Bergemann ; Manfred Opper ; Adam Wolisz
【Abstract】: Data rate adaptation (RA) schemes are the key means by which WLAN adapters adjust their operation to the variable quality of wireless channels. The IEEE 802.11 standard does not specify any RA scheme, allowing vendors to compete on performance; thus numerous proprietary solutions coexist. While the RA schemes implemented in individual user terminals are unknown to the AP of a hotspot, it is well known that the way individual stations adapt their rates strongly influences the performance of the whole WLAN cell. Therefore, knowledge of the scheme applied by each station may be useful for radio resource management in complex networks (e.g., HetNets or dense WLAN deployments in enterprise networks). In this paper, we present a novel approach to estimate the features of the RA schemes implemented in individual stations and demonstrate its efficiency using both simulated WLAN configurations and measurements.
【Keywords】: data communication; telecommunication network management; wireless LAN; wireless channels; DARA; HetNet; IEEE 802.11 standard; WLAN adapters; WLAN cell; WLAN configurations; WLAN deployments; WLAN hotspots; data rate adaptation algorithms behavior; data rate adaptation schemes; proprietary solutions; radio resource management; wireless channels; Adaptation models; Data models; Estimation; IEEE 802.11 Standards; Interference; Training; Wireless LAN
【Paper Link】 【Pages】:285-289
【Authors】: Yong Xiao ; Jianwei Huang ; Chau Yuen ; Luiz A. DaSilva
【Abstract】: We propose a general framework to analyze incentives for user cooperation, and characterize the tradeoff between fairness and efficiency for cooperative networks. More specifically, we define the incentive region as a set of action profiles that provides cooperation benefits to all users and focus on the optimization of efficiency and fairness within this region. We introduce a linear resource allocation (LRA) scheme and show that most existing fairness measures can be converted to LRA with different linear coefficient vectors. We then propose the concept of strong price of fairness (SPoF) to study the network efficiency of the strong equilibrium. We show that both the SPoF and fairness measures are connected to the linear coefficient vector of LRA, which makes it possible to study the fairness and efficiency relationship. We then use the random access (RA) system as an example to show how to use the proposed framework to study a specific wireless network.
【Keywords】: cooperative communication; optimisation; resource allocation; wireless channels; LRA; RA system; SPoF; cooperative networks; distributed wireless networks; efficiency tradeoffs; fairness tradeoffs; incentive region; linear coefficient vectors; linear resource allocation; optimization; random access system; strong price of fairness; user cooperation; Computers; Educational institutions; Games; Optimization; Resource management; Vectors; Wireless networks
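The fairness measures this framework converts into linear resource allocation include standard ones such as Jain's index; a small worked example of that measure (the allocation vectors are invented), to make the fairness end of the tradeoff concrete:

```python
def jain_index(alloc):
    # Jain's fairness index: J = (sum x)^2 / (n * sum x^2).
    # Ranges from 1/n (one user gets everything) to 1 (perfectly fair).
    n = len(alloc)
    return sum(alloc) ** 2 / (n * sum(x * x for x in alloc))

assert jain_index([1.0, 1.0, 1.0]) == 1.0              # perfectly fair
assert abs(jain_index([3.0, 0.0, 0.0]) - 1 / 3) < 1e-12  # maximally unfair
assert 1 / 3 < jain_index([2.0, 1.0, 0.5]) < 1.0         # in between
```

The paper's strong-price-of-fairness analysis then asks how much total efficiency such a fairness target costs at a strong equilibrium.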
【Paper Link】 【Pages】:290-294
【Authors】: Hemant Kowshik ; Partha Dutta ; Malolan Chetlur ; Shivkumar Kalyanaraman
【Abstract】: In this paper, we study the problem of efficient video delivery over the cellular downlink. The key objective is to maximize the Quality of Experience (QoE) of the user, as measured by application-level metrics such as the buffering ratio and the low-bit-rate ratio. We present a two-tiered solution with a standard base-station scheduler that works on a per-packet basis and a Video Management System (VMS) that works at the granularity of thousands of video frames. The video management system uses knowledge of the video playout curves and future channel states to develop a scheduling policy that is feasibility optimal. The algorithms are simple and leverage recent results on real-time scheduling in wireless networks. We evaluate the performance of our algorithms using real video traces and a standard channel model. The VMS ensures that per-user QoE guarantees are maintained, unlike a standard proportional-fair (PF) scheduler that is oblivious to application-level QoE requirements.
【Keywords】: cellular radio; quality of experience; scheduling; video communication; wireless channels; PF scheduler; QoE requirements; VMS; application level metrics; bit rate ratio; cellular downlink; quality of experience; quantitative framework; scheduling policy; standard base-station scheduler; standard channel model; two-tiered solution; video delivery; video management system; video playout curves; wireless networks; Bit rate; Lyapunov methods; Mobile communication; Standards; Streaming media; Wireless communication
【Paper Link】 【Pages】:295-299
【Authors】: Qingjun Xiao ; Bin Xiao ; Shigang Chen
【Abstract】: Efficient estimation of tag population in RFID systems has many important applications. In this paper, we present a new problem called differential cardinality estimation, which tracks population changes in a dynamic RFID system where tags are frequently moved in and out. In particular, we want to provide quick estimates of (1) the number of new tags that have moved in and (2) the number of old tags that have moved out, between any two consecutive scans of the system. We show that traditional cardinality estimators cannot be applied here, and that tag identification protocols are too expensive if the estimation needs to be performed frequently to support real-time monitoring. This paper presents the first efficient solution to the problem of differential cardinality estimation. The solution is based on a novel differential estimation framework and is named the zero differential estimator. We show that this estimator can be configured to meet any pre-set accuracy requirement, with a probabilistic error bound that can be made arbitrarily small.
【Keywords】: error statistics; protocols; radiofrequency identification; consecutive scans; differential cardinality estimation; dynamic RFID systems; preset accuracy requirement; probabilistic error bound; real-time monitoring; tag identification protocols; zero differential estimator; Accuracy; Estimation error; Protocols; Radiofrequency identification; Sociology; Statistics
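The flavor of differential cardinality estimation can be illustrated with a plain linear-counting bitmap: sketch each scan, OR the bitmaps to sketch the union, and take differences of the estimates. This is a toy under those assumptions, not the paper's zero differential estimator, and the tag sets are invented:

```python
import hashlib
import math

M = 4096  # bitmap size (bits)

def sketch(tags):
    # Hash each tag ID to one bit position; no tag identification needed.
    bits = [0] * M
    for t in tags:
        h = int.from_bytes(hashlib.sha1(t.encode()).digest()[:8], "big")
        bits[h % M] = 1
    return bits

def estimate(bits):
    # Linear-counting estimate from the fraction of zero bits.
    z = bits.count(0)
    return -M * math.log(z / M)

old = {f"tag{i}" for i in range(300)}          # scan 1: tags 0..299
new = {f"tag{i}" for i in range(100, 450)}     # scan 2: 100 left, 150 arrived

s_old, s_new = sketch(old), sketch(new)
s_union = [a | b for a, b in zip(s_old, s_new)]  # OR merges the sketches

moved_out = estimate(s_union) - estimate(s_new)  # approx. 100
moved_in  = estimate(s_union) - estimate(s_old)  # approx. 150
assert abs(moved_out - 100) < 30 and abs(moved_in - 150) < 30
```

The paper's contribution is doing this with a configurable, provable error bound; the naive difference above inherits the error of both estimates.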
【Paper Link】 【Pages】:300-304
【Authors】: Zhaoyang Zhang ; Honggang Wang ; Xiaodong Lin ; Hua Fang ; Dong Xuan
【Abstract】: Accurate and real-time tracing of epidemic sources is critical for epidemic origin analyses and control when outbreaks of epidemic diseases occur. Such tracing requires the simultaneous availability of information about social interactions among people as well as their body vital signs. Existing epidemic control methods are limited due to their inability to collect the above two types of information at the same time. In this paper, for the first time, we propose integrating wireless body area networks (WBANs) for body vital signs collection with mobile phones for social interaction sensing to achieve the desired epidemic source tracing. In particular, we design a mobile phone capability driven hierarchical social interaction detection framework integrated with WBANs. With this framework, we further propose a set of epidemic source tracing and control algorithms including genetic algorithm based search and dominating set identification algorithms to effectively identify epidemic sources and inhibit epidemic spread. We have also conducted extensive simulations, analyses, and case studies based on real data sets, which demonstrate the accuracy and effectiveness of our proposed solutions.
【Keywords】: body area networks; diseases; epidemics; genetic algorithms; mobile handsets; network theory (graphs); search problems; telemedicine; WBAN; body vital sign collection; effective epidemic control; epidemic disease; epidemic origin analysis; genetic algorithm; mobile phone capability driven hierarchical social interaction detection framework; mobile social sensing; real data set; real-time tracing; social interaction sensing; source tracing; wireless body area networks; Accuracy; Diseases; Educational institutions; Mobile handsets; Real-time systems; Sensors; Social network services
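One of the building blocks named above is dominating-set identification on the social-interaction graph. A standard greedy heuristic for that subproblem (a sketch; the contact graph is invented, and this is not the paper's GA-based search):

```python
def greedy_dominating_set(adj):
    # Repeatedly pick the node that covers the most not-yet-dominated
    # nodes (itself plus its neighbors).
    undominated = set(adj)
    ds = []
    while undominated:
        best = max(adj, key=lambda v: len(({v} | adj[v]) & undominated))
        ds.append(best)
        undominated -= {best} | adj[best]
    return ds

# A small contact graph: node 0 interacts with 1, 2, 3; nodes 4 and 5
# only interact with each other.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}, 4: {5}, 5: {4}}
ds = greedy_dominating_set(adj)
assert ds[0] == 0                         # the hub is chosen first
assert set(ds) <= {0, 4, 5} and len(ds) == 2
```

Monitoring or quarantining a dominating set touches every node's neighborhood, which is why it serves here to inhibit epidemic spread.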
【Paper Link】 【Pages】:305-309
【Authors】: Su Xia ; Ning Ding ; Miao Jin ; Hongyi Wu ; Yang Yang
【Abstract】: The medial axis of a shape provides a compact abstraction of its global topology and a proximity of its geometry. The construction of medial axis in two-dimensional (2D) sensor networks has been discussed in the literature, in support of several applications including routing and navigation. In this work, we first reveal the challenges of constructing medial axis in a three-dimensional (3D) sensor network. With more complicated geometric features and complex topology shapes, previous methods proposed for 2D settings cannot be extended easily to 3D networks. Then we propose a distributed algorithm with linear time complexity and communication cost to build a well-structured medial axis of a 3D sensor network without knowing its global shape or global position information. Furthermore we apply the computed medial axis for safe navigation and distributed information storage and retrieval in 3D sensor networks. Simulations are carried out to demonstrate the efficiency of the proposed medial axis-based applications in various 3D sensor networks.
【Keywords】: computational complexity; distributed algorithms; radionavigation; telecommunication network routing; telecommunication network topology; wireless sensor networks; 2D sensor networks; 3D wireless sensor networks; communication cost; compact abstraction; complex topology shape; distributed algorithm; geometric feature; geometry proximity; global position information; global shape; global topology; linear time complexity; medial axis construction; navigation; routing; three-dimensional sensor network; two-dimensional sensor networks; well-structured medial axis; Computational modeling; Navigation; Noise; Routing; Shape; Surface treatment; Wireless sensor networks
【Paper Link】 【Pages】:310-314
【Authors】: Liwen Xu ; Xiao Qi ; Yuexuan Wang ; Thomas Moscibroda
【Abstract】: Data gathering is one of the core algorithmic and theoretic problems in wireless sensor networks. In this paper, we propose a novel approach - Compressed Sparse Functions - to efficiently gather data through the use of highly sophisticated Compressive Sensing techniques. The idea of CSF is to gather a compressed version of a satisfying function (containing all the data) under a suitable function base, and to finally recover the original data. We show through theoretical analysis that our scheme significantly outperforms state-of-the-art methods in terms of efficiency, while matching them in terms of accuracy. For example, in a binary tree-structured network of n nodes, our solution reduces the number of packets from the best-known O(kn log n) to O(k log² n), where k is a parameter depending on the correlation of the underlying sensor data. Finally, we provide simulations showing that our solution can save up to 80% of communication overhead in a 100-node network. Extensive simulations further show that our solution is robust, high-capacity and low-delay.
【Keywords】: compressed sensing; wireless sensor networks; binary tree-structured network; communication overhead; compressed sparse functions; core algorithmic; efficient data gathering; highly sophisticated compressive sensing techniques; satisfying function; sensor data; wireless sensor networks; Accuracy; Discrete cosine transforms; Mathematical model; Network topology; Power demand; Topology; Wireless sensor networks
【Paper Link】 【Pages】:315-319
【Authors】: Wei Dong ; Biyuan Mo ; Chao Huang ; Yunhao Liu ; Chun Chen
【Abstract】: We present a holistic reprogramming system called R3. R3 has two salient features. First, the binary differencing algorithm within R3 (R3diff) ensures an optimal result in terms of the delta size under a configurable cost measure. Second, the similarity preserving method within R3 (R3sim) optimizes the binary code format for achieving a large similarity with a small metadata overhead. Overall, R3 achieves the smallest delta size compared to other incremental approaches such as Rsync [11], RMTD [9], Zephyr/Hermes [17], [18], and R2 [2], e.g., 50%-99% reduction compared to Stream and about 20%-40% reduction compared to R2. R3's implementation on TelosB/TinyOS is lightweight and efficient. We release our code at http://code.google.com/p/r3-dongw.
【Keywords】: binary codes; embedded systems; meta data; optimising compilers; R2; R3diff; R3sim; RMTD; Rsync; Stream; TelosB; TinyOS; Zephyr-Hermes; binary code format; binary differencing algorithm; holistic reprogramming system; metadata overhead; networked embedded systems reprogramming; relocatable code optimization; Ash; Binary codes; Embedded systems; Indexes; Wireless communication; Wireless sensor networks
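The core of binary differencing as in R3diff is encoding the new image as COPY ranges from the old image plus literal ADD bytes, so only the delta is disseminated. A hedged sketch of that encoding using `difflib` (not the R3 algorithm itself; the firmware snippets are invented):

```python
import difflib

def make_delta(old: bytes, new: bytes):
    # Encode `new` as COPY ranges over `old` plus literal ADD data.
    ops = []
    sm = difflib.SequenceMatcher(None, old, new, autojunk=False)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2 - i1))
        elif tag in ("replace", "insert"):
            ops.append(("add", new[j1:j2]))
        # "delete" ranges of `old` are simply not copied.
    return ops

def apply_delta(old: bytes, ops) -> bytes:
    out = b""
    for op in ops:
        if op[0] == "copy":
            _, start, length = op
            out += old[start:start + length]
        else:
            out += op[1]
    return out

old = b"void blink(void){led_on();wait(100);led_off();wait(100);}"
new = b"void blink(void){led_on();wait(250);led_off();wait(250);}"
delta = make_delta(old, new)
assert apply_delta(old, delta) == new
literal = sum(len(op[1]) for op in delta if op[0] == "add")
assert literal < len(new)  # the delta carries far fewer literal bytes
```

R3's contribution is making this optimal for the delta size under a cost measure and keeping binary code similar across versions so the COPY ranges stay large.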
【Paper Link】 【Pages】:320-324
【Authors】: Esa Hyytiä ; Jörg Ott
【Abstract】: We study delay tolerant networking (DTN) and in particular, its capacity to store, carry and forward messages to their final destination(s). We approach this broad question in the framework of percolation theory. To this end, we assume an elementary mobility model, where nodes arrive to an infinite plane according to a Poisson point process, move a certain distance ℓ, and then depart. In this setting, we characterize the mean density of nodes required to support DTN style networking. Under the given assumptions, we show that DTN communication is feasible when the mean node degree ν is greater than 4 · ηc(γ), where parameter γ= ℓ/d is the ratio of the distance ℓ to the transmission range d, and ηc(γ) is the critical reduced number density of tilted cylinders in a directed continuum percolation model. By means of Monte Carlo simulations, we give numerical values for ηc(γ). The asymptotic behavior of ηc(γ) when γ tends to ∞ is also derived from a fluid flow analysis.
【Keywords】: Monte Carlo methods; delay tolerant networks; mobility management (mobile radio); numerical analysis; DTN communication; DTN style networking; Monte Carlo simulation; Poisson point process; directed continuum percolation model; elementary mobility model; fluid flow analysis; infinite plane; large delay tolerant networks; numerical value; percolation theory; space-time; tilted cylinders; Ad hoc networks; Delays; Educational institutions; Mobile nodes; Monte Carlo methods; Numerical models; DTN; capacity; criticality; mobility; percolation
【Paper Link】 【Pages】:325-329
【Authors】: Wei Bao ; Ben Liang
【Abstract】: The location of active users is an important factor in the performance analysis of mobile multicell networks, but it is difficult to quantify due to the wide variety of user mobility and session patterns. In particular, the channel holding times in each cell may be arbitrarily distributed and dependent on those in other cells. In this work, we study the stationary distribution of users by modeling the system as a multi-route queueing network with Poisson inputs. We consider arbitrary routing and arbitrary joint probability distributions for the channel holding times in each route. Using a decomposition-composition approach, we show that the user distribution (1) is insensitive to the user movement patterns, (2) is insensitive to general and dependent distributed channel holding times, (3) depends only on the average arrival rate and average channel holding time at each cell, and (4) is completely characterized by an open network with M/M/∞ queues. This result is validated by experiments with the Dartmouth user mobility traces.
【Keywords】: Poisson equation; mobility management (mobile radio); queueing theory; telecommunication network routing; Dartmouth user mobility traces; M-M-∞ queues; Poisson inputs; active user location; arbitrary joint probability distributions; arbitrary routing; channel holding times; decomposition-composition approach; general mobility; mobile multicell networks; multiroute queueing network; open network; session patterns; stationary distribution; user distribution insensitivity; Analytical models; Entropy; Joints; Mobile communication; Mobile computing; Servers; Vectors
【Paper Link】 【Pages】:330-334
【Authors】: Siddhant Agrawal ; Prasanna Chaporkar ; Rajan Udwani
【Abstract】: Supporting real-time applications is paramount to sustaining the growth of wireless networks. Real-time applications require strict delay guarantees, i.e., a packet delayed beyond a certain predefined value is dropped. Fortunately, depending on the codec used, real-time applications can sustain some loss gracefully. The aim of an admission control algorithm is to ensure that when a new flow is admitted, the packet loss of that flow and of the other existing flows due to deadline violations stays below the respective acceptable limits. The problem of admission control has been studied extensively for wireline networks. However, this analysis does not extend to the wireless case on account of fading. Here, we consider a wireless network with a TDMA-based MAC, and for this network obtain a scalable admission control algorithm.
【Keywords】: codecs; radio networks; telecommunication congestion control; time division multiple access; TDMA based MAC; admission control problem; call admission control; codec; deadline violation; delay guarantee; real-time applications; scalable admission control algorithm; wireless network; Admission control; Algorithm design and analysis; Delays; Real-time systems; Time division multiple access; Wireless networks
【Paper Link】 【Pages】:335-339
【Authors】: Juan José Jaramillo ; Lei Ying
【Abstract】: We consider the problem of distributed admission control without knowledge of the capacity region in single-hop wireless networks, for flows that require a pre-specified bandwidth from the network. We present an optimization framework that allows us to design a scheduler and resource allocator, and by properly choosing a suitable utility function in the resource allocator, we prove that existing flows can be served with a prespecified bandwidth, while the link requesting admission can determine the largest rate that it can get such that it does not interfere with the allocation to the existing flows.
【Keywords】: optimisation; radio networks; resource allocation; scheduling; telecommunication congestion control; capacity region; distributed admission control; optimization framework; prespecified bandwidth; resource allocator; single-hop wireless networks; Admission control; Bandwidth; Optimization; Resource management; Schedules; Wireless networks
【Paper Link】 【Pages】:340-344
【Authors】: Jianying Luo ; Lei Rao ; Xue Liu
【Abstract】: Cloud computing is supported by an infrastructure known as the Internet data center (IDC). As cloud computing thrives, the energy consumption and cost of IDCs are exploding. There is growing interest in energy cost minimization for IDCs in deregulated electricity markets. In this paper we study how to leverage both geographic and temporal variation of energy prices to minimize energy cost for distributed IDCs. To this end, we propose a novel spatio-temporal load balancing approach. Using real-life electricity price and workload traces, extensive evaluations demonstrate that the proposed spatio-temporal load balancing approach significantly reduces energy cost for distributed IDCs.
【Keywords】: cloud computing; computer centres; cost reduction; electricity supply industry deregulation; energy consumption; power aware computing; resource allocation; scheduling; spatiotemporal phenomena; telecommunication power management; Internet data center; cloud computing; data center energy cost minimization; deregulated electricity markets; distributed IDC; energy consumption; geographic energy price variation; real-life electricity price; spatio-temporal load balancing approach; spatio-temporal scheduling approach; temporal energy price variation; Delays; Electricity; Internet; Load management; Minimization; Portals; Servers
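Exploiting geographic and temporal price variation can be pictured with a toy per-hour greedy assignment: each hour, fill the cheaper site first and spill the rest over. A sketch under invented prices and capacities, not the paper's algorithm:

```python
# Hourly electricity prices ($/MWh) at two data-center sites and the
# total load (MW) to serve each hour; `cap` limits each site.
prices = {"east": [30, 55, 60], "west": [50, 40, 20]}  # 3 hours
load = [10, 10, 10]
cap = 8  # MW per site

def schedule(prices, load, cap):
    cost = 0.0
    plan = []
    for h, demand in enumerate(load):
        # Greedy: route load to the cheapest site this hour, up to capacity.
        sites = sorted(prices, key=lambda s: prices[s][h])
        alloc = {}
        for s in sites:
            take = min(cap, demand)
            alloc[s] = take
            cost += take * prices[s][h]
            demand -= take
        plan.append(alloc)
    return plan, cost

plan, cost = schedule(prices, load, cap)
assert plan[0]["east"] == 8 and plan[0]["west"] == 2  # hour 0: east is cheaper
assert plan[2]["west"] == 8 and plan[2]["east"] == 2  # hour 2: west is cheaper
```

The real problem additionally weighs delay and migration effects, which is why the paper needs a genuinely spatio-temporal formulation rather than this hour-by-hour greedy.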
【Paper Link】 【Pages】:345-349
【Authors】: Zeyu Zheng ; Minming Li ; Xun Xiao ; Jianping Wang
【Abstract】: Lack of proper maintenance is the root cause of anywhere from a third to a half of downtime events in a cloud data center. To help safeguard the uptime of data centers, regular preventive maintenance must be conducted. During the maintenance time, some accommodated virtual machines (VMs) may be re-provisioned to other available (backup) resources through migration, and some VMs may be terminated. One way to allow a data center to perform all necessary preventive maintenance activities without causing too much disruption to VMs is to design an appropriate maintenance schedule. In this paper, given the available resources in a data center and the required maintenance activities with their deadlines, we consider the joint VM resource provisioning and maintenance scheduling problem to maximize the revenue of the data center. We tackle the problem by first proposing a heuristic for resource provisioning under a given maintenance schedule. Using this heuristic as a building block, we then propose another heuristic algorithm to solve the joint resource provisioning and maintenance scheduling problem, and also derive its upper bound. Extensive simulations have shown that our proposed heuristic algorithms can effectively maximize the revenue of the data center.
【Keywords】: cloud computing; computer centres; preventive maintenance; scheduling; virtual machines; VM resource provisioning; cloud data center; coordinated resource provisioning; downtime event; heuristic algorithm; maintenance schedule; maintenance scheduling problem; preventive maintenance activity; resource through migration; virtual machine; Bismuth; Heuristic algorithms; Preventive maintenance; Schedules; Servers; Virtual machining
【Paper Link】 【Pages】:350-354
【Authors】: Zhiyang Guo ; Yuanyuan Yang
【Abstract】: Many data center networks (DCNs) adopt a multirooted tree structure called fat-tree, which has the potential to deliver large bisection bandwidth through rich path multiplicity. However, unbalanced traffic load distribution may prevent efficient utilization of such high degree of parallelism. Meanwhile, high bandwidth multicast communication is critical to many data center services and applications. Hence, in this paper we consider multicast traffic load balance problem in fat-tree DCNs from a novel angle, aiming to find the most cost-effective way to build a multicast fat-tree DCN with bounded link oversubscription ratio. First, we present a multi-rate network model to accurately describe the communication environment in a fat-tree DCN. Then, we derive the minimum number of core switches required to achieve bounded link oversubscription ratio under arbitrary multicast traffic. Finally, we provide a comprehensive comparison on the cost of different approaches to building such a multicast fat-tree DCN.
【Keywords】: computer centres; computer networks; multicast communication; telecommunication switching; trees (mathematics); bounded link oversubscription ratio; core switches; fat-tree DCN; high bandwidth multicast communication; large bisection bandwidth; multicast fat-tree data center networks; multicast traffic load balance problem; multirooted tree structure; rich path multiplicity; unbalanced traffic load distribution; Bandwidth; Downlink; Load modeling; Ports (Computers); Servers; Subscriptions; Uplink; Data center networks; bisection bandwidth; cost; fat-tree; hose traffic; load balancing; multicast; oversubscription
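For orientation, the standard unicast arithmetic of a k-ary fat-tree built from k-port switches relates host count, core-switch count, and the oversubscription ratio. A sketch of those textbook formulas only; the paper's contribution is the multicast-specific minimum core count, which this does not reproduce:

```python
import math

def fat_tree(k, oversub=1.0):
    # k-ary fat-tree from k-port switches: k pods, k/2 edge + k/2
    # aggregation switches per pod, and (k/2)^2 core switches for full
    # bisection; an oversubscription ratio r divides the core count by r.
    hosts = k ** 3 // 4
    cores_full = (k // 2) ** 2
    cores = math.ceil(cores_full / oversub)
    return hosts, cores

assert fat_tree(4) == (16, 4)              # the classic k=4 example
assert fat_tree(8) == (128, 16)
assert fat_tree(8, oversub=4) == (128, 4)  # 4:1 oversubscribed
```

Under multicast, a single coded upstream packet fans out on the downlinks, so the unicast bound above overestimates the cores needed — the gap the paper quantifies.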
【Paper Link】 【Pages】:355-359
【Authors】: Rami Cohen ; Liane Lewin-Eytan ; Joseph Naor ; Danny Raz
【Abstract】: The recent growing popularity of cloud-based solutions and the variety of new applications present new challenges for cloud management and resource utilization. In this paper we concentrate on the networking aspect and consider the placement problem of virtual machines (VMs) of applications with intense bandwidth requirements. Optimizing the available network bandwidth is far more complex than optimizing resources like memory or CPU, since every network link may be used by many physical hosts and thus by the VMs residing in these hosts. We focus on maximizing the benefit from the overall communication sent by the VMs to a single designated point in the data center (called the root). This is the typical case when considering a storage area network for applications with intense storage requirements. We formulate a bandwidth-constrained VM placement optimization problem that models this setting. This problem is NP-hard, and we present a polynomial-time constant approximation algorithm for its most general version, in which hosts are connected to the root by a general network graph. For more practical cases, in which the network topology is a tree and the revenue is a simple function of the allocated bandwidth, we present improved approximation algorithms that are more efficient in terms of running time. We evaluate the expected performance of our proposed algorithms through a simulation study over traces from a real production data center, providing strong indications of the superiority of our proposed solutions.
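The tree-topology setting can be illustrated with a deliberately simple greedy sketch (my own illustration under assumed inputs, not the authors' approximation algorithm): each accepted VM consumes its bandwidth on every link of a leaf-to-root path, and VMs are considered in decreasing revenue-per-bandwidth order.

```python
# Greedy sketch (an assumption, not the paper's algorithm): place each VM
# on some host whose entire path to the root still has residual capacity,
# considering VMs in decreasing revenue-per-bandwidth order.

def place_vms(paths, capacity, vms):
    """paths: host -> list of link ids on its path to the root.
    capacity: link id -> residual bandwidth.
    vms: list of (revenue, bandwidth) requests. Returns total revenue."""
    cap = dict(capacity)
    total = 0
    for rev, bw in sorted(vms, key=lambda v: v[0] / v[1], reverse=True):
        for host, links in paths.items():
            if all(cap[l] >= bw for l in links):
                for l in links:
                    cap[l] -= bw
                total += rev
                break
    return total

paths = {"h1": ["l1", "root"], "h2": ["l2", "root"]}
capacity = {"l1": 10, "l2": 10, "root": 15}
print(place_vms(paths, capacity, [(8, 10), (5, 10), (1, 10)]))
```

The shared root link is what makes the problem hard: here the second VM fits on host h2's access link but is blocked at the root.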
【Keywords】: approximation theory; cloud computing; computational complexity; computer centres; computer network performance evaluation; network theory (graphs); optimisation; resource allocation; storage area networks; telecommunication network topology; telecommunication traffic; trees (mathematics); virtual machines; NP-hard problem; application storage area network; bandwidth allocation; bandwidth-constrained VM placement optimization problem; cloud management; network bandwidth optimization; network graph; network topology; networking aspect; optimal virtual machine placement problem; performance evaluation; physical hosts; polynomial-time constant approximation algorithm; resource utilization; running time; traffic intense data centers; Approximation algorithms; Approximation methods; Bandwidth; Greedy algorithms; Optimized production technology; Resource management; Routing
【Paper Link】 【Pages】:360-364
【Authors】: Sookhyun Yang ; Jim Kurose ; Brian Neil Levine
【Abstract】: Thousands of cases each year of child exploitation on P2P file sharing networks lead from an IP address to a home. A first step upon execution of a search warrant is to determine whether the home's open Wi-Fi or its closed wired Ethernet was used for trafficking; in the latter case, a resident user is more likely to be the responsible party. We propose methods that use remotely measured traffic to disambiguate wired and wireless residential medium access. Our practical techniques work across the Internet by estimating the per-flow distribution of inter-arrival times for different home access network types. We observe that the inter-arrival time distribution is subject to several residential factors, including differences between OS network stacks and cable network mechanisms. We propose a model to explain the observed patterns of inter-arrival times, and we study the ability of supervised learning classifiers to differentiate between wired and wireless access based on these remote traffic measurements.
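The core idea, that the access medium leaves a signature in the inter-arrival time distribution, can be sketched with a toy threshold rule (the feature and threshold below are my own illustrative assumptions, not the paper's trained classifier): Wi-Fi MAC contention typically adds jitter that widens the distribution.

```python
# Toy sketch (feature and threshold are assumptions, not the paper's
# classifier): summarize a flow by its packet inter-arrival times and
# guess the access medium from the spread of that distribution.
import statistics

def interarrivals(timestamps):
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def classify_medium(timestamps, jitter_threshold=0.5):
    """'wireless' if the coefficient of variation of inter-arrival
    times exceeds an (assumed) threshold, else 'wired'."""
    gaps = interarrivals(timestamps)
    cv = statistics.stdev(gaps) / statistics.mean(gaps)
    return "wireless" if cv > jitter_threshold else "wired"

wired = [0.00, 0.10, 0.20, 0.30, 0.40]  # steady pacing
wifi = [0.00, 0.05, 0.40, 0.45, 0.90]   # bursty, contended
print(classify_medium(wired), classify_medium(wifi))
```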
【Keywords】: IP networks; Internet; computer network security; digital forensics; home networks; learning (artificial intelligence); pattern classification; peer-to-peer computing; radio access networks; telecommunication traffic; wireless LAN; IP address; Internet; OS network stack; P2P file sharing network; Wi-Fi; cable network mechanism; child exploitation; closed wired Ethernet; forensic setting; home access network; interarrival time distribution; remote traffic measurement; residential factor; search warrant; supervised learning classifier; trafficking; wired residential medium access; wireless residential medium access; Entropy; Forensics; Internet; Linux; Logic gates; Throughput; Wireless communication
【Paper Link】 【Pages】:365-369
【Authors】: Yi Guo ; Lei Yang ; Xuan Ding ; Jinsong Han ; Yunhao Liu
【Abstract】: Screen locking/unlocking is important for modern smart phones to avoid unintentional operations and to protect personal data. Once the phone is locked, the user must take a specific action or provide some secret information to unlock it. Existing approaches do not support smart phones well due to deficiencies in security, high cost, and poor usability. We collected 200 users' handshaking actions with their smart phones and made an appealing observation: a person's shaking pattern is largely unique, stable, and distinguishable. In this paper, we propose OpenSesame, which employs users' shaking patterns for locking/unlocking. The key feature of our system lies in using four fine-grained statistical features of handshaking to verify users. Moreover, we utilize a support vector machine (SVM) for accurate classification. Results from comprehensive experiments show that our technique is robust and compatible across different brands of smart phones, without the need for any specialized hardware.
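A stripped-down version of the verification pipeline (the two coarse features, the distance rule, and the tolerance below are illustrative assumptions; the paper uses four fine-grained statistical features plus an SVM) looks like this:

```python
# Toy verification sketch (features/threshold are assumptions, not
# OpenSesame's): reduce an accelerometer trace to coarse statistics and
# accept the shake if it is close enough to the enrolled template.
import math, statistics

def features(trace):
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in trace]
    return (statistics.mean(mags), statistics.stdev(mags))

def verify(template, trace, tolerance=1.0):
    return math.dist(template, features(trace)) < tolerance

owner = [(0, 0, 9.8), (3, 1, 9.0), (0, 2, 10.5), (2, 0, 9.5)]
template = features(owner)
print(verify(template, owner))                              # owner's own shake
print(verify(template, [(9, 9, 9), (0, 0, 1), (9, 0, 0)]))  # very different shake
```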
【Keywords】: authorisation; biometrics (access control); human computer interaction; mobile computing; pattern classification; smart phones; support vector machines; user interfaces; OpenSesame; SVM; accelerometer; authentication; handshaking actions; handshaking biometrics; high cost; mobile phones; personal stuff security; poor usability; screen locking; screen unlocking; security deficiency; shaking patterns; smart phone unlocking; support vector machine; unintentional operation avoidance; Accelerometers; Intelligent sensors; Magnetic sensors; Sensor phenomena and characterization; Shape; Smart phones; Accelerometer; Authentication; Privacy; Security; Smart Phone
【Paper Link】 【Pages】:370-374
【Authors】: Mohammad Abdel-Rahman ; Hanif Rahbari ; Marwan Krunz ; Philippe Nain
【Abstract】: The operation of a wireless network relies extensively on exchanging messages over a universally known channel, referred to as the control channel. The network performance can be severely degraded if a jammer launches a denial-of-service (DoS) attack on such a channel. In this paper, we design quorum-based frequency hopping (FH) algorithms that mitigate DoS attacks on the control channel of an asynchronous ad hoc network. Our algorithms can establish unicast as well as multicast communications under DoS attacks. They are fully distributed, do not incur any additional message exchange overhead, and can work in the absence of node synchronization. Furthermore, the multicast algorithms maintain the multicast group consistency. The efficiency of our algorithms is shown by analysis and simulations.
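The quorum idea that underlies such hopping schemes can be shown with the classic grid-quorum construction (a standard building block chosen here for illustration, not the authors' full unicast/multicast FH algorithms): in an n x n slot grid, a quorum is one full row plus one full column, and any two quorums intersect in at least two slots, so two unsynchronized nodes hopping on their quorum slots are guaranteed common slots in every frame.

```python
# Grid-quorum construction (illustrative building block, not the paper's
# algorithms): any row+column quorum intersects any other in >= 2 slots.

def grid_quorum(n, row, col):
    """Slots (as a set) of the quorum made of `row` and `col` in an
    n x n frame of n*n time slots."""
    return {row * n + c for c in range(n)} | {r * n + col for r in range(n)}

n = 5
q1 = grid_quorum(n, row=1, col=2)
q2 = grid_quorum(n, row=4, col=0)
print(sorted(q1 & q2))  # guaranteed overlap despite different rows/columns
```

Mapping each slot to a hopping channel then yields rendezvous opportunities without node synchronization, which is exactly the property a control-channel jammer cannot easily deny.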
【Keywords】: ad hoc networks; computer network security; frequency hop communication; jamming; multicast communication; protocols; telecommunication control; DoS attack mitigation; FH algorithms; asynchronous ad hoc network; control channel; denial-of-service attack; exchanging messages; fast rendezvous protocols; jammer; message exchange overhead; multicast algorithms; multicast communications; multicast group consistency; quorum-based frequency hopping algorithms; secure rendezvous protocols; wireless network; Algorithm design and analysis; Computer crime; High definition video; Jamming; Robustness; Synchronization; Unicast
【Paper Link】 【Pages】:375-379
【Authors】: Lingjun Li ; Xinxin Zhao ; Guoliang Xue
【Abstract】: Near field communication (NFC) systems provide a good location-limited channel so that many security systems can use it to force the participants to stay close to each other. Unfortunately, only a small number of smart devices in the market are equipped with NFC chips that are essential for NFC systems. The purpose of this paper is to provide the same feature, called near field authentication (NFA), without using NFC chips. We propose an easy-to-use system to achieve NFA by using human finger movement on the touch screens of two nearby smart devices. Our system does not need any prior secret information shared between two devices and generates the same high-entropy cryptographic key for both devices in a successful authentication. The efficiency of the system is demonstrated by our evaluation on a Motorola Droid smartphone.
【Keywords】: cryptography; smart phones; wireless channels; Motorola Droid smartphone; NFA; NFC chips; easy-to-use system; high-entropy cryptographic key; human finger movement; location-limited channel; near field authentication system; secret information; smart devices; Authentication; Cryptography; Data mining; Feature extraction; Performance evaluation; Protocols
【Paper Link】 【Pages】:380-384
【Authors】: János Tapolcai ; Pin-Han Ho ; Péter Babarczi ; Lajos Rónyai
【Abstract】: The paper investigates a novel monitoring trail (m-trail) scenario that can enable any shared protection scheme to achieve all-optical and ultra-fast failure restoration. Given a set of working (W-LPs) and protection (P-LPs) lightpaths, we first define the neighborhood of a node, which is the set of links whose failure states should be known to the node when restoring the corresponding W-LPs. A set of m-trails is routed such that each node can localize any failure in its neighborhood according to the ON-OFF status of the traversing m-trails. Bound analysis is performed on the minimum bandwidth required for the m-trails. Extensive simulation is conducted to verify the proposed scheme.
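The localization step can be illustrated with a hand-picked alarm code (the trails below are my own toy example, not routes produced by the paper's algorithm): if every link in a node's neighborhood is traversed by a distinct subset of m-trails, the ON-OFF alarm vector uniquely identifies the failed link.

```python
# Alarm-code sketch (hand-picked trails, not the paper's routing): a link
# is identified when the set of dark (OFF) trails equals exactly the set
# of trails that traverse it.

def localize(trails, alarm):
    """trails: trail name -> set of links it traverses.
    alarm: set of trails currently dark. Returns the failed link or None."""
    links = set().union(*trails.values())
    candidates = [l for l in links
                  if {t for t, ls in trails.items() if l in ls} == alarm]
    return candidates[0] if len(candidates) == 1 else None

trails = {"t1": {"e1", "e2"}, "t2": {"e2", "e3"}}
print(localize(trails, {"t1"}))         # only t1 dark  -> e1 failed
print(localize(trails, {"t1", "t2"}))   # both dark     -> e2 failed
```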
【Keywords】: failure analysis; optical fibre networks; telecommunication network reliability; telecommunication network routing; ON-OFF status; P-LP; W-LP; all optical failure restoration; bound analysis; failure states; m-trails; monitoring trail scenario; protection lightpaths; shared protection scheme; ultra fast failure restoration; working lightpaths; Cost function; High-speed optical techniques; Monitoring; Optical fiber networks; Optical sensors; Switches; Testing
【Paper Link】 【Pages】:385-389
【Authors】: Shahrzad Shirazipourazad ; Chenyang Zhou ; Zahra Derakhshandeh ; Arunabha Sen
【Abstract】: The orthogonal frequency division multiplexing (OFDM) technology provides an opportunity for efficient resource utilization in optical networks. It allows allocation of multiple sub-carriers to meet traffic demands of varying size. Utilizing OFDM technology, a spectrum-efficient and scalable optical transport network called SLICE was proposed recently. The SLICE architecture enables sub-wavelength and super-wavelength resource allocation and multiple-rate data traffic, which results in efficient use of spectrum. However, the benefit is accompanied by additional complexities in resource allocation. In the SLICE architecture, in order to minimize the utilized spectrum, one has to solve the routing and spectrum allocation (RSA) problem. In this paper, we focus our attention on RSA and (i) prove that RSA is NP-complete even when the optical network topology is as simple as a chain or a ring, (ii) provide approximation algorithms for RSA when the network topology is a binary tree or a ring, and (iii) provide a heuristic for networks with arbitrary topology and measure its effectiveness with extensive simulations. Simulation results demonstrate that our heuristic significantly outperforms several other heuristics proposed recently for RSA.
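The constraints that make RSA hard, contiguous slot blocks that must occupy the same position on every link of a route, are easy to see in a first-fit baseline (a common simple heuristic shown here for illustration; the inputs are assumptions, and this is not the paper's proposed heuristic):

```python
# First-fit RSA baseline (illustrative, not the paper's heuristic): each
# demand needs `size` contiguous spectrum slots, identical on every link
# of its route (the spectrum-continuity constraint).

def first_fit_rsa(num_slots, links, demands):
    """links: link id -> set of occupied slot indices (mutated in place).
    demands: list of (route, size), route = list of link ids.
    Returns the starting slot chosen for each demand (None if blocked)."""
    starts = []
    for route, size in demands:
        chosen = None
        for s in range(num_slots - size + 1):
            block = set(range(s, s + size))
            if all(block.isdisjoint(links[l]) for l in route):
                chosen = s
                for l in route:
                    links[l] |= block
                break
        starts.append(chosen)
    return starts

links = {"a": set(), "b": set(), "c": set()}
print(first_fit_rsa(8, links, [(["a", "b"], 3), (["b", "c"], 2), (["a"], 4)]))
```

Even on a chain, choosing starting slots to minimize the total spectrum used is NP-complete, which is result (i) above.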
【Keywords】: OFDM modulation; approximation theory; computational complexity; optical modulation; resource allocation; telecommunication network routing; telecommunication network topology; NP-complete; OFDM technology; RSA problem; SLICE architecture; approximation algorithms; binary tree; optical network topology; optical transport network; orthogonal frequency division multiplexing technology; routing and spectrum allocation problem; spectrum-sliced optical networks; subwavelength resource allocation; super-wavelength resource allocation; Approximation algorithms; Approximation methods; Network topology; Optical fiber networks; Resource management; Routing; Topology
【Paper Link】 【Pages】:390-394
【Authors】: Xiaomin Chen ; Admela Jukan ; Ashwin Gumaste
【Abstract】: In elastic optical networks, the spectrum consecutive and continuous constraints may cause the so-called spectrum fragmentation issue, degrading spectrum utilization, which is especially critical under dynamic traffic scenarios. In this paper, we propose a novel multipath de-fragmentation method which aggregates spectrum fragments instead of reconfiguring existing spectrum paths. We propose an optimization model based on Integer Linear Programming (ILP) and heuristic algorithms and discuss the practical feasibility of the proposed method. We show that multipath routing is an effective de-fragmentation method, as it improves spectral efficiency and reduces blocking under dynamic traffic conditions. We also show that the differential delay issue does not present an obstacle to the application of multipath de-fragmentation in elastic optical networks.
【Keywords】: linear programming; multipath channels; telecommunication network routing; telecommunication traffic; ILP; continuous constraints; differential delay; dynamic traffic scenarios; elastic optical path networks; heuristic algorithms; integer linear programming; multipath defragmentation; multipath routing; optimization model; spectral efficiency; spectrum consecutive constraints; spectrum fragments; spectrum utilization; Delays; Heuristic algorithms; Load modeling; Optical fiber networks; Optical fibers; Optimization; Routing
【Paper Link】 【Pages】:395-399
【Authors】: Zilong Ye ; Xiaojun Cao ; Xiujiao Gao ; Chunming Qiao
【Abstract】: Traffic grooming can effectively utilize the transmission capacity of WDM networks by properly multiplexing low-speed traffic flows onto high-capacity wavelength channels. In order to maximize wavelength resource utilization and cut down network costs (e.g., those of OEO conversion) for time-varying yet predictable traffic, we propose a novel predictive and incremental (PI) traffic grooming scheme, named PI-grooming. A conventional traffic grooming approach for fluctuating traffic is to run an algorithm that (re)assigns the traffic flows to as few wavelengths as possible based only on the current traffic demands of these flows. This, however, leads to a large amount of OEO traffic. The proposed PI-grooming considers the existing flow assignment, the current traffic demands, and the expected traffic demands in the near future. We show that, compared with the conventional approach, PI-grooming can effectively minimize the amount of OEO traffic while still using a very small number of wavelengths.
【Keywords】: optical fibre networks; telecommunication congestion control; telecommunication traffic; wavelength division multiplexing; OEO traffic; PI-grooming; WDM networks; fluctuated traffic; high-capacity wavelength channel; incremental traffic grooming; low-speed traffic flow; predictive traffic grooming; time varying traffic; wavelength division multiplexing; Accuracy; Bandwidth; Heuristic algorithms; Optical fiber networks; Prediction algorithms; WDM networks; Look-ahead; Predictive and incremental; Time-varying traffic; Traffic grooming; WDM networks
【Paper Link】 【Pages】:400-404
【Authors】: Anyu Wang ; Zhifang Zhang
【Abstract】: We give an explicit construction of exact cooperative regenerating codes at the MBCR (minimum bandwidth cooperative regeneration) point. Prior to this work, the only known explicit MBCR codes were given for the parameters n = d + r and d = k, while our construction applies to all possible values of n, k, d, r. The code has a brief expression in polynomial form, and data reconstruction is accomplished by bivariate polynomial interpolation. It is a scalar code and operates over a finite field of size q ≥ n. In addition, we establish several subspace properties for linear exact MBCR codes. Based on these properties we prove that linear exact MBCR codes cannot achieve repair-by-transfer.
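The reconstruction primitive can be illustrated in the univariate case (a simplification for brevity; the paper's codes interpolate a bivariate polynomial, and the field size and polynomial below are my own toy choices): Lagrange interpolation over GF(q) recovers a degree-(k-1) message polynomial from any k evaluations.

```python
# Univariate Lagrange interpolation over a prime field GF(q) (toy
# illustration of the reconstruction idea; the paper uses bivariate
# interpolation). Inverses use Fermat's little theorem: a^(q-2) mod q.

def lagrange_interpolate(points, q, x):
    """Evaluate at x the unique degree-(len(points)-1) polynomial over
    GF(q) passing through the given (xi, yi) points."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % q
                den = den * (xi - xj) % q
        total = (total + yi * num * pow(den, q - 2, q)) % q
    return total

q = 13
poly = lambda x: (3 + 5 * x + 2 * x * x) % q   # toy message polynomial
shares = [(1, poly(1)), (2, poly(2)), (4, poly(4))]
print([lagrange_interpolate(shares, q, x) for x in (0, 3)])
```

Any three evaluations of the degree-2 polynomial suffice; here they recover poly(0) = 3 and poly(3) = 10.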
【Keywords】: interpolation; linear codes; polynomials; storage management; bivariate polynomial interpolation; data reconstruction; distributed storage; exact cooperative regenerating code; linear exact MBCR code; minimum bandwidth cooperative regeneration point; minimum-repair-bandwidth; polynomial form; scalar code; subspace property; Bandwidth; Interpolation; Joining processes; Maintenance engineering; Network coding; Polynomials; Vectors
【Paper Link】 【Pages】:405-409
【Authors】: Linquan Zhang ; Chuan Wu ; Zongpeng Li ; Chuanxiong Guo ; Minghua Chen ; Francis C. M. Lau
【Abstract】: Cloud computing, rapidly emerging as a new computation paradigm, provides agile and scalable resource access in a utility-like fashion, especially for the processing of big data. An important open issue here is how to efficiently move the data, from different geographical locations over time, into a cloud for effective processing. The de facto approach of hard drive shipping is neither flexible nor secure. This work studies timely, cost-minimizing upload of massive, dynamically-generated, geo-dispersed data into the cloud, for processing using a MapReduce-like framework. Targeting a cloud encompassing disparate data centers, we model a cost-minimizing data migration problem, and propose two online algorithms for optimizing at any given time the choice of the data center for data aggregation and processing, as well as the routes for transmitting data there. The first is an online lazy migration (OLM) algorithm achieving a competitive ratio as low as 2.55 under typical system settings. The second is a randomized fixed horizon control (RFHC) algorithm achieving a competitive ratio of 1 + (1/(l+1))(κ/λ) with a lookahead window of l, where κ and λ are system parameters of similar magnitude.
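The "lazy" flavor of the first algorithm can be sketched with a ski-rental-style break-even rule (my own assumption in the spirit of OLM, not the paper's exact algorithm): stay at the current aggregation data center while accumulating its excess routing cost, and migrate only once that excess pays for the migration itself.

```python
# Lazy-migration sketch (break-even rule is an assumption, not OLM's
# exact condition): migrate the aggregation point only when accumulated
# excess routing cost reaches the migration cost.

def lazy_migrate(costs, migration_cost):
    """costs: per-slot list of dicts {data_center: routing cost}.
    Returns the data center chosen in each slot."""
    current = min(costs[0], key=costs[0].get)
    excess, choices = 0.0, []
    for slot in costs:
        best = min(slot, key=slot.get)
        excess += slot[current] - slot[best]
        if excess >= migration_cost:
            current, excess = best, 0.0
        choices.append(current)
    return choices

costs = [{"east": 1, "west": 5},
         {"east": 6, "west": 1},
         {"east": 6, "west": 1},
         {"east": 6, "west": 1}]
print(lazy_migrate(costs, migration_cost=8))
```

The laziness bounds how often migration cost is paid, which is what yields a constant competitive ratio against the offline optimum.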
【Keywords】: cloud computing; storage management; MapReducelike framework; OLM algorithm; RFHC algorithm; cloud computing; competitive ratio; cost-minimizing data migration problem; geographical location; hard drive shipping; online lazy migration; randomized fixed horizon control; Algorithm design and analysis; Cloud computing; Heuristic algorithms; Optimization; Prediction algorithms; Routing; Virtual private networks
【Paper Link】 【Pages】:410-414
【Authors】: Qian Hu ; Yang Wang ; Xiaojun Cao
【Abstract】: In this paper, we study the virtual network embedding (VNE) problem in the network virtualization context, which aims at mapping the virtual network requests of the service providers (SPs) to the substrate networks managed by the infrastructure providers (InPs). Given the NP-completeness of the VNE problem, prior approaches primarily rely on solving/relaxing link-based Integer Linear Programming (ILP) formulations, which leads to either prohibitive computational time or non-optimal solutions. In this paper, for the first time, we present a path-based model for the VNE problem, namely P-VNE. By analyzing the dual formulation of the P-VNE model, we propose a column generation process with which an optimal solution to the VNE problem can be found efficiently (when embedded into a branch-and-bound framework).
【Keywords】: Internet; computational complexity; integer programming; linear programming; tree searching; virtualisation; ILP formulations; InP; Internet; NP-completeness; P-VNE problem; SP; branch-and-bound framework; column generation approach; infrastructure provider; link-based integer linear programming formulation; network virtualization context; path-based model; service providers; virtual network embedding problem; virtual network request mapping; Bandwidth; Computational modeling; Indium phosphide; Internet; Polynomials; Substrates; Virtualization
【Paper Link】 【Pages】:415-419
【Authors】: Yvonne Anne Pignolet ; Stefan Schmid ; Gilles Trédan
【Abstract】: This paper demonstrates that virtual networks that are dynamically embedded on a given resource network may constitute a security threat, as properties of the infrastructure (typically a business secret) are disclosed. We initiate the study of this new problem and introduce the notion of request complexity, which captures the number of virtual network embedding requests needed to fully disclose the infrastructure topology. We derive lower bounds and present algorithms achieving an asymptotically optimal request complexity for the important class of tree and cactus graphs (complexity Θ(n)) as well as arbitrary graphs (complexity Θ(n²)).
【Keywords】: communication complexity; computer network security; embedded systems; graph theory; resource allocation; virtual private networks; ISP threat; adversarial VNet embeddings; asymptotically optimal request complexity; business secret; cactus graphs; infrastructure topology; request complexity; resource network; security threat; virtual network embedding requests; virtual networks; Complexity theory; Image edge detection; Joining processes; Network topology; Substrates; Topology; Virtualization
【Paper Link】 【Pages】:420-424
【Authors】: Xuan Bao ; Yin Lin ; Uichin Lee ; Ivica Rimac ; Romit Roy Choudhury
【Abstract】: The proliferation of pictures and videos on the Internet is imposing heavy demands on mobile data networks. Though emerging wireless technologies will provide more bandwidth, the increase in demand will easily consume the additional capacity. To alleviate this problem, we explore the possibility of serving user requests from other mobile devices located geographically close to the user. For instance, when Alice reaches areas with high device density, called Data Spots, the cellular operator learns Alice's content request and guides her device to nearby devices that have the requested content. Importantly, communication between the nearby devices can be mediated by servers, avoiding many of the known problems of pure ad hoc communication. This paper argues for this viability through systematic prototyping, measurements, and measurement-driven analysis.
【Keywords】: Internet; cellular radio; mobile ad hoc networks; mobile handsets; telecommunication traffic; Alice content request; Internet; ad hoc communication; data spotting; measurement-driven analysis; mobile data networks; mobile devices; naturally clustered mobile devices; offload cellular traffic; wireless technologies; Area measurement; IEEE 802.11 Standards; Mobile communication; Mobile handsets; Performance evaluation; Servers; Videos
【Paper Link】 【Pages】:425-429
【Authors】: Shan Zhou ; Jie Yang ; Dahai Xu ; Guangzhi Li ; Yu Jin ; Zihui Ge ; Mario Kosseifi ; Robert D. Doverspike ; Yingying Chen ; Lei Ying
【Abstract】: The rapid advancement of smartphones has instigated tremendous data applications for cell phones. Supporting simultaneous voice and data services in a cellular network is not only desirable but also becoming indispensable. However, if voice and data are serviced through the same antenna (as in the 3G UMTS network), a voice call with data sessions requires a better radio connection than a voice-only call. In this paper, we systematically study the coordination between voice and data transmissions in UMTS networks. From analyzing a large carrier's UMTS network recording data, we first identify the network measurements/features most relevant to a potential call drop, then propose a drop-call predictor based on AdaBoost. Moreover, we develop an intelligent call management strategy to voluntarily block data sessions when the voice call is predicted to drop. Our analysis, utilizing a real service provider's data sets, shows that our proposed scheme can not only predict call drops with very high accuracy but also achieve the highest user satisfaction compared to other existing call management strategies.
【Keywords】: 3G mobile communication; antennas; data communication; smart phones; voice communication; 3G UMTS network; AdaBoost; antenna; cell phones; cellular network; data services; data transmissions; drop-call predictor; intelligent call management; proactive call drop avoidance; smart phones; user satisfaction; voice services; voice transmissions; 3G mobile communication; Data communication; Feature extraction; Machine learning algorithms; Measurement; Member and Geographic Activities Board committees; Prediction algorithms
【Paper Link】 【Pages】:430-434
【Authors】: Stefano Paris ; Fabio Martignon ; Ilario Filippini ; Lin Chen
【Abstract】: The Radio Access Network (RAN) infrastructure represents the most critical part for capacity planning, which usually accounts for peak traffic conditions. A promising approach to increase the RAN capacity and simultaneously reduce its energy consumption is represented by the opportunistic utilization of third party Wi-Fi access devices. In order to foster the utilization of unexploited Internet connections, we propose a new and open market, where a mobile operator can lease the bandwidth made available by third parties (residential users or private companies) through their access points to increase the network capacity and save large amounts of energy. We formulate the offloading problem as a reverse auction considering the most general case of partial covering of the traffic to be offloaded. We discuss the conditions (i) to offload the maximum amount of data traffic according to the capacity of third party access devices, (ii) to foster the participation of access point owners (individual rationality), and (iii) to prevent market manipulation (incentive compatibility). Finally, we propose a greedy algorithm that solves the offloading problem in polynomial time, even for large-size network scenarios.
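The greedy winner-determination step can be sketched as follows (bids and the target are illustrative assumptions, and the payment rule needed for incentive compatibility is omitted): the operator buys capacity from third-party access points in increasing order of unit price until the offload target is covered or bids run out, allowing partial covering as in the paper's general case.

```python
# Greedy winner determination for a reverse auction (sketch; payment
# rule for truthfulness omitted): buy cheapest capacity first until the
# offload target is met, allowing partial covering.

def greedy_offload(bids, target):
    """bids: list of (ap, capacity, price_per_unit).
    Returns (winners as (ap, amount bought), total bought)."""
    winners, bought = [], 0
    for ap, cap, price in sorted(bids, key=lambda b: b[2]):
        if bought >= target:
            break
        take = min(cap, target - bought)
        winners.append((ap, take))
        bought += take
    return winners, bought

bids = [("ap1", 40, 0.5), ("ap2", 30, 0.2), ("ap3", 50, 0.8)]
print(greedy_offload(bids, target=60))
```

Sorting once and scanning gives the polynomial running time the abstract claims for the full mechanism, even for large-size network scenarios.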
【Keywords】: Internet; polynomials; radio access networks; telecommunication network planning; telecommunication traffic; wireless LAN; Internet connections; RAN capacity; bandwidth trading marketplace; capacity planning; data traffic; energy consumption; large-size network scenario; market manipulation; mobile data offloading; mobile operator; network capacity; peak traffic conditions; polynomial time; radio access network infrastructure; third party Wi-Fi access device; Bandwidth; Greedy algorithms; Mobile communication; Mobile computing; Radio access networks; Resource management; Wireless communication; Auction; Heterogeneous Mobile Networks; WiFi Offloading
【Paper Link】 【Pages】:435-439
【Authors】: Youngbin Im ; Carlee Joe-Wong ; Sangtae Ha ; Soumya Sen ; Ted Taekyoung Kwon ; Mung Chiang
【Abstract】: Mobile users face a tradeoff between cost, throughput, and delay in making their offloading decisions. To navigate this tradeoff, we propose AMUSE (Adaptive bandwidth Management through USer-Empowerment), a practical, cost-aware WiFi offloading system that takes into account a user's throughput-delay tradeoffs and cellular budget constraint. Based on predicted future usage and WiFi availability, AMUSE decides which applications to offload and to which times of the day. To practically enforce the assigned rate of each TCP application, we introduce a receiver-side TCP bandwidth control algorithm that adjusts the rate by controlling the TCP advertisement window from the user side. We implement AMUSE on Windows 7 tablets and evaluate its effectiveness with 3G and WiFi usage data obtained from a trial with 25 mobile users. Our results show that AMUSE improves user utility.
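The receiver-side control rests on a standard identity: TCP throughput is bounded by rwnd / RTT, so advertising a window of roughly target_rate × RTT caps the sender near the assigned rate. A back-of-envelope sketch (constants are illustrative; AMUSE's actual controller adapts the window dynamically):

```python
# Back-of-envelope receiver-side rate control (illustrative constants,
# not AMUSE's controller): since throughput <= rwnd / RTT, advertise a
# window of target_rate * RTT to cap the sender near the assigned rate.

def advertised_window(target_rate_bps, rtt_s, mss_bytes=1460):
    """Window to advertise, in bytes, rounded down to whole MSS segments."""
    window = target_rate_bps / 8 * rtt_s          # bytes in flight at target rate
    segments = max(1, int(window // mss_bytes))   # at least one segment
    return segments * mss_bytes

# Cap a flow at ~2 Mbit/s over a 100 ms RTT path:
print(advertised_window(2_000_000, 0.1))
```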
【Keywords】: 3G mobile communication; bandwidth allocation; cellular radio; delays; radio receivers; transport protocols; wireless LAN; 3G; AMUSE; TCP advertisement; TCP application; WiFi availability; WiFi offloading system; WiFi usage data; Windows 7 tablets; adaptive bandwidth management through user-empowerment; cellular budget constraint; cost-aware offloading; mobile users; receiver-side TCP bandwidth control algorithm; throughput-delay tradeoffs; user throughput-delay tradeoffs; Bandwidth; Delays; Educational institutions; IEEE 802.11 Standards; Mobile communication; Prediction algorithms; Throughput
【Paper Link】 【Pages】:440-444
【Authors】: Fei Yu ; Guangtao Xue ; Hongzi Zhu ; Zhenxian Hu ; Minglu Li ; Gong Zhang
【Abstract】: 3G technology has stimulated a wide variety of high-bandwidth applications on smartphones, such as video streaming and content-rich web browsing. Although having those applications mobile is quite appealing, high-data-rate transmission also poses a huge demand for power. It has been revealed that the tail effect in 3G radio operation results in significant energy drain on smartphones. The recent fast dormancy technique can be utilized to remove tails but, without care, can degrade user experience. In this paper, we propose a novel scheme, SmartCut, which effectively mitigates the tail effect of radio usage in 3G networks with little side effect on user experience. The core idea of SmartCut is to utilize the temporal correlation of packet arrivals to predict upcoming data, based on which unnecessary high-power-state tails of the radio are cut out leveraging the fast dormancy mechanism. Extensive trace-driven simulation results demonstrate the efficacy of the SmartCut design. SmartCut can save up to 56.57% of energy on average while having little side effect on user experience.
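The tail-cutting decision can be sketched with a plain moving-average predictor (my own stand-in assumption for SmartCut's temporal-correlation model): after each packet, if the predicted time to the next packet outlasts the high-power tail timer, request fast dormancy immediately instead of burning the whole tail.

```python
# Tail-cutting sketch (moving-average predictor is an assumption, not
# SmartCut's model): cut the radio early only if the predicted gap to
# the next packet exceeds the high-power tail timer.

def should_cut(recent_gaps, tail_timer_s, window=3):
    """Predict the next inter-arrival as the mean of the last `window`
    gaps; True means invoke fast dormancy now."""
    hist = recent_gaps[-window:]
    predicted = sum(hist) / len(hist)
    return predicted > tail_timer_s

print(should_cut([0.2, 0.3, 0.2], tail_timer_s=5.0))   # chatty flow: keep radio up
print(should_cut([8.0, 12.0, 9.0], tail_timer_s=5.0))  # long gap coming: cut
```

Cutting too eagerly is what degrades user experience: a mispredicted cut forces an expensive radio state promotion when the next packet arrives early.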
【Keywords】: 3G mobile communication; smart phones; 3G networks; 3G radio operation; 3G radio tail effect mitigation; SmartCut scheme; content-rich web browsing; data rate transmission; energy drain; extensive trace-driven simulation; fast dormancy mechanism; packet arrivals; radio high-power-state tails; smartphones; video streaming; Correlation; Data communication; Delays; Entropy; Smart phones; Streaming media; Switches
【Paper Link】 【Pages】:445-449
【Authors】: Rongxing Lu ; Xiaodong Lin ; Zhiguo Shi ; Bin Cao ; Xuemin (Sherman) Shen
【Abstract】: Opportunistic networks (OPPNETs) are characterized by intermittent connectivity among mobile nodes due to their unpredictable mobility. Although promising, they still face many security and privacy challenges. In this paper, we present an incentive and privacy-aware data dissemination (IPAD) scheme for OPPNETs, not only to explore how to protect a mobile node's identity privacy, location privacy, and social profile privacy, but also to provide a secure incentive for privacy-aware data dissemination. Through extensive incentive analysis, we show that only if a source provides a secure incentive strategy can a data packet be efficiently disseminated in OPPNETs.
【Keywords】: data privacy; incentive schemes; mobile ad hoc networks; telecommunication security; IPAD; MANET; OPPNET; data packet; identity privacy; incentive and privacy-aware data dissemination scheme; intermittent connectivity; location privacy; mobile ad hoc networks; mobile nodes; opportunistic networks; social profile privacy; IP networks; Mobile nodes; Nickel; Privacy; Relays; Security; Data dissemination; Incentive; Opportunistic network; Privacy-Aware
【Paper Link】 【Pages】:450-454
【Authors】: Yong Zhou ; Weihua Zhuang
【Abstract】: In this paper, we study the differences in applying cooperation to fully-connected and multi-hop wireless networks, and find that both the enlarged interference area and the link density play a pivotal role in making a beneficial cooperation decision in a multi-hop network. By characterizing the effects of the enlarged interference area and link density on the overall network performance, a beneficial cooperation opportunity can be identified. By employing a randomized scheduling scheme and deriving the interference-free probability of any two links, the expected numbers of concurrent direct and cooperative transmissions can be obtained, where the ratio of these two numbers is defined as the beneficial cooperation ratio. Such a ratio translates the reduced spatial reuse into a requirement on the cooperation gain and provides a guideline for enabling beneficial cooperation on a single-link basis. Finally, analytical and simulation results demonstrate that the beneficial cooperation criterion for a multi-hop network derived in this paper is more accurate than that in [1].
【Keywords】: ad hoc networks; cooperative communication; probability; radio links; radiofrequency interference; scheduling; beneficial cooperation opportunity; concurrent direct transmissions; cooperation ratio; cooperative transmissions; fully-connected wireless networks; interference area; interference-free probability; link density; multihop wireless ad hoc networks; randomized scheduling; reduced spatial reuse; Interference; Probability; Receivers; Relays; Silicon; Spread spectrum communication; Transmitters
【Paper Link】 【Pages】:455-459
【Authors】: Chenxi Qiu ; Lei Yu ; Haiying Shen ; Sohraab Soltani
【Abstract】: Cooperative broadcast, in which a packet receiver cooperatively combines weak signal power received from different senders to decode the original packet, has gained increasing attention. However, existing approaches were developed under the assumption that there is a single flow in the network; thus, they are not suitable for multi-flow broadcasting, in which broadcasts are initiated by different nodes and consist of more than one packet at any point in time. In this paper, we aim to achieve low-latency multi-flow broadcast in wireless multihop networks with fading channels. We formulate this problem as the Minimum Slotted Delay Cooperative Broadcast (MSDCB) problem, and prove that it is NP-complete and o(log N)-inapproximable. We then propose two heuristic algorithms, named PCBH-S and PCBH-M, to solve MSDCB. Our experimental results show that our algorithms outperform previous methods.
【Keywords】: cooperative communication; fading channels; optimisation; packet radio networks; radio receivers; MSDCB; NP-complete problem; PCBH-M; PCBH-S; fading channel; fading wireless network; heuristic algorithm; low-latency multiflow broadcast; minimum slotted delay cooperative broadcast; packet receiver; wireless multihop network; Broadcasting; Delays; Fading; Heuristic algorithms; Relays; Schedules; Wireless networks
【Paper Link】 【Pages】:460-464
【Authors】: Pedro E. Santacruz ; Vaneet Aggarwal ; Ashutosh Sabharwal
【Abstract】: In most wireless networks, nodes have only limited local information about the network state, which includes connectivity and channel state information. With limited local information, each node's knowledge of the network is mismatched with that of the others; therefore, nodes must make distributed decisions. In this paper, we pose the following question: if every node has network state information only about a small neighborhood, how and when should nodes choose to transmit? While scheduling answers this question for point-to-point physical layers designed for an interference-avoidance paradigm, we look for answers in cases where interference can be embraced by advanced code design, as suggested by results in network information theory. To make progress on this challenging problem, we propose a distributed algorithm that achieves rates higher than interference-avoidance based link scheduling, especially when each node knows more than one hop of network state information.
【Keywords】: interference suppression; network coding; radio networks; scheduling; wireless channels; advanced code design; channel state information; distributed subnetwork scheduling; interference-avoidance based link scheduling; limited local information; local views; network information theory; network state information; point-to-point physical layers; wireless networks; Color; Computer architecture; Interference; Knowledge engineering; Physical layer; Scheduling; Wireless networks
【Paper Link】 【Pages】:465-469
【Authors】: Yang Yang ; Miao Jin ; Yao Zhao ; Hongyi Wu
【Abstract】: In this research, we address the problem of in-network information processing, storage, and retrieval in three-dimensional (3D) sensor networks. We propose a geographic-location-free, double-ruling-based scheme for large-scale 3D sensor networks. The proposed approach does not require the 3D sensor network to have a regular cube shape or a uniform node distribution. Without knowledge of geographic location or a distance bound, a data query simply travels along a simple curve and is guaranteed to retrieve data of one or multiple types, aggregated over time and space, across the network. Simulations and comparisons show that the proposed approach achieves low cost and a balanced traffic load.
【Keywords】: distributed sensors; graph theory; information storage; query processing; balanced traffic load; cut graph-based information storage; data query; geographic location free double-ruling-based scheme; in-network information processing; information retrieval; large-scale 3D sensor networks; regular cube shape; three-dimensional sensor networks; uniform node distribution; Memory; Network topology; Routing; Shape; Tiles; Topology; Wireless sensor networks
【Paper Link】 【Pages】:470-474
【Authors】: Shibo He ; Xiaowen Gong ; Junshan Zhang ; Jiming Chen ; Youxian Sun
【Abstract】: This paper studies deterministic sensor deployment to ensure barrier coverage in wireless sensor networks. Most existing work has focused on line-based deployment, ignoring a wide spectrum of potential curve-based solutions. We, for the first time, extensively study sensor deployment under general settings. We first present a condition under which line-based deployment is suboptimal, pointing to the advantage of curve-based deployment. By constructing a contraction mapping, we identify the characteristics of an optimal deployment curve. We then design sensor deployment algorithms for the optimal deployment curve by introducing a new notion of distance-continuity. Our findings show that i) when the deployment curve is distance-continuous, the proposed algorithm is optimal in terms of the vulnerability of the resulting deployment, and ii) when the deployment curve is not distance-continuous, the ratio of the vulnerability achieved by the proposed algorithm to that of the optimal deployment is upper bounded by min(π, (||AB||/||AGB||)·(2n+√2-1)/(2n)), where ||AB||, ||AGB|| and n are constants. Extensive numerical results corroborate our analysis.
【Keywords】: wireless sensor networks; barrier coverage; curve-based deployment; line-based deployment; sensor deployment algorithms; wireless sensor networks; Ad hoc networks; Algorithm design and analysis; Approximation algorithms; Approximation methods; Educational institutions; Sun; Wireless sensor networks
【Paper Link】 【Pages】:475-479
【Authors】: Anais Vergne ; Laurent Decreusefond ; Philippe Martins
【Abstract】: In this paper, we aim at reducing power consumption in wireless sensor networks by turning off supernumerary sensors. Random simplicial complexes are tools from algebraic topology that provide an accurate and tractable representation of the topology of wireless sensor networks. Given a simplicial complex, we present an algorithm that reduces the number of its vertices while keeping its homology (i.e., connectivity and coverage) unchanged. We show that the algorithm reaches a Nash equilibrium; moreover, we find both lower and upper bounds for the number of vertices removed, the complexity of the algorithm, and the maximal order of the resulting complex for the coverage problem. We also give simulation results for classical cases, especially coverage complexes simulating wireless sensor networks.
【Keywords】: game theory; telecommunication network topology; wireless sensor networks; Nash equilibrium; algebraic topology; coverage problem; homology; power consumption; random simplicial complexes; reduction algorithm; supernumerary sensors; wireless sensor network topology; Abstracts; Face; Indexes; Network topology; Sensors; Topology; Wireless sensor networks
【Paper Link】 【Pages】:480-484
【Authors】: Yin Wang ; Yuan He ; Dapeng Cheng ; Yunhao Liu ; Xiang-Yang Li
【Abstract】: Constructive Interference (CI) proposed in the existing work (e.g., A-MAC [1], Glossy [2]) may degrade the packet reception performance in terms of Packet Reception Ratio (PRR) and Received Signal Strength Indication (RSSI). The packet reception performance of a set of nodes transmitting simultaneously might be no better than that of any single node transmitting individually. In this paper, we redefine CI and propose TriggerCast, a practical wireless architecture which ensures concurrent transmissions of an identical packet to interfere constructively rather than to interfere non-destructively. CI potentially allows orders of magnitude reductions in energy consumption and improvements in link quality. Moreover, we for the first time present a theoretical sufficient condition for generating CI with IEEE 802.15.4 radio: concurrent transmissions with an identical packet should be synchronized at chip level. Meanwhile, co-senders participating in concurrent transmissions should be carefully selected, and the starting instants for the concurrent transmissions should be aligned. Based on the sufficient condition, we propose practical techniques to effectively compensate propagation and radio processing delays. TriggerCast has 95th percentile synchronization errors of at most 250ns. Extensive experiments in practical testbeds reveal that TriggerCast significantly improves PRR (from 5% to 70% with 7 concurrent senders, from 50% to 98.3% with 6 senders) and RSSI (about 6dB with 5 senders).
【Keywords】: Zigbee; radiofrequency interference; radiowave propagation; wireless sensor networks; CI; IEEE 802.15.4 radio; PRR; RSSI; TriggerCast; WSN; chip level; concurrent transmissions; constructive interference; energy consumption; identical packet; link quality improvement; magnitude reductions; packet reception performance degradation; packet reception ratio; propagation compensation; radio processing delay compensation; received signal strength indication; wireless architecture; wireless constructive collisions; wireless sensor networks; Delays; IEEE 802.15 Standards; Receivers; Signal to noise ratio; Synchronization; Wireless sensor networks
【Paper Link】 【Pages】:485-489
【Authors】: Xiuyuan Zheng ; Jie Yang ; Yingying Chen ; Yu Gan
【Abstract】: Device-free passive localization enables locating targets (e.g., intruders or victims) that neither carry radio devices nor actively participate in the wireless localization process. This is possible because the wireless environment is affected when people move into the area, resulting in changes in the Received Signal Strength (RSS) of the wireless links. In this paper, we first show that localization performance degrades significantly when people move at varying speeds. This is because existing device-free passive localization systems implicitly assume that the target moves at a constant speed, which is not always true in practical scenarios. To cope with targets moving at varying speeds, we propose an adaptive speed-change detection framework comprising three components: speed-change detection, determination of the time-window size, and adaptive localization. Two speed-change detection schemes have been developed to capture changes in moving speed and adjust the time-window size adaptively to facilitate effective localization. We demonstrate that our framework is flexible enough to work with any device-free localization method based on signal strength. Results from real experiments confirm that our approach achieves over 30% improvement in both median and maximum localization error under dynamically changing target speeds.
【Keywords】: radio links; adaptive device-free passive localization coping; adaptive speed change detection framework; dynamic speeds; dynamic target speed; localization error; localization performance; radio devices; received signal strength; time-window size; wireless environments; wireless links; wireless localization process; Accuracy; Legged locomotion; Performance evaluation; Tomography; Wireless communication; Wireless sensor networks
【Paper Link】 【Pages】:490-494
【Authors】: Chenshu Wu ; Zheng Yang ; Yiyang Zhao ; Yunhao Liu
【Abstract】: The Global Positioning System (GPS) has enabled a number of geographical applications over many years. Many location-based services, however, still suffer from considerable GPS positioning errors (usually 1 m to 20 m in practice). In this study, we design and implement a high-accuracy global positioning solution based on GPS and human mobility captured by mobile phones. Our key observation is that smart-phone-enabled dead reckoning provides accurate but local coordinates of users' trajectories, while GPS provides global but inconsistent coordinates. Considering them simultaneously, we devise techniques to refine the global positioning results by fitting the global positions to the structure of the locally measured ones, so that the refined positioning results are more likely to approach the ground truth. We develop a prototype system, named GloCal, and conduct comprehensive experiments in both crowded urban and spacious suburban areas. The evaluation results show that GloCal achieves a 30% improvement in average error with respect to GPS.
【Keywords】: Global Positioning System; mobile radio; smart phones; GPS; GloCal; crowded urban area; global positioning accuracy; global positioning system; human mobility; local mobility; location based service; mobile phones; smart phone enabled dead reckoning; spacious suburban area; user trajectory; Accuracy; Dead reckoning; Global Positioning System; Legged locomotion; Mobile handsets; Sensors; Trajectory
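The core fitting idea in the GloCal abstract above — anchoring a locally consistent dead-reckoning trajectory to globally referenced but noisy GPS fixes — can be sketched as a 2D least-squares rigid alignment (closed-form Kabsch). This is an illustrative stand-in under stated assumptions, not GloCal's actual algorithm; the coordinates and function name are invented for the example:

```python
import math

def align(local, gps):
    """Rigidly fit (rotate + translate) a locally consistent trajectory to
    global fixes in the least-squares sense, using the 2D closed form:
    theta* = atan2(sum(a x b), sum(a . b)) over centered point pairs."""
    n = len(local)
    lcx = sum(p[0] for p in local) / n
    lcy = sum(p[1] for p in local) / n
    gcx = sum(p[0] for p in gps) / n
    gcy = sum(p[1] for p in gps) / n
    s_cos = s_sin = 0.0
    for (lx, ly), (gx, gy) in zip(local, gps):
        ax, ay = lx - lcx, ly - lcy      # centered local point
        bx, by = gx - gcx, gy - gcy      # centered global fix
        s_cos += ax * bx + ay * by       # dot products -> cos component
        s_sin += ax * by - ay * bx       # cross products -> sin component
    th = math.atan2(s_sin, s_cos)
    c, s = math.cos(th), math.sin(th)
    # rotate about the local centroid, then translate onto the global centroid
    return [(c * (lx - lcx) - s * (ly - lcy) + gcx,
             s * (lx - lcx) + c * (ly - lcy) + gcy) for lx, ly in local]

# A unit square measured locally, observed globally rotated 90 deg and shifted:
local = [(0, 0), (1, 0), (1, 1), (0, 1)]
gps = [(5, 5), (5, 6), (4, 6), (4, 5)]
print(align(local, gps))   # recovers the global positions exactly
```

With noisy (rather than exact) GPS fixes, the same closed form gives the least-squares rigid placement of the whole trajectory, which is the sense in which locally accurate dead reckoning can smooth out per-fix GPS error.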
【Paper Link】 【Pages】:495-499
【Authors】: Kaikai Liu ; Xinxin Liu ; Lulu Xie ; Xiaolin Li
【Abstract】: Since our daily activities are predominantly indoor, and smart phones have emerged as the most popular personal computing companions, major IT companies have recently invested aggressively in mobile indoor location services and positioning systems, e.g., on iOS or Android mobile devices. However, one major hurdle has not yet been conquered: smart-phone-based high-resolution indoor localization. In this paper, we propose a practical solution for accurate ranging and localization based on acoustic communication between anchor nodes equipped with speakers and the microphone on a smartphone. To identify different anchor nodes and enable time-of-arrival (TOA) ranging, we propose approaches for signal modulation, symbol detection and demodulation, synchronization, and ranging. Experimental results show that the communication bit-error rate and ranging accuracy are sufficient for our target applications. Preliminary localization results demonstrate that our algorithm achieves a high accuracy of 23 cm in offline mode, with promising potential for real-time smartphone-based indoor localization.
【Keywords】: acoustic signal detection; demodulation; error statistics; indoor radio; microphones; mobile radio; smart phones; time-of-arrival estimation; Android mobile device; IT companies; TOA ranging; acoustic communication; acoustic localization; communication bit-error-rate; high-resolution indoor localization; iOS; microphone; mobile indoor location service; personal computing companion; positioning system; signal modulation; smart phones; speakers; symbol demodulation; symbol detection; synchronization; time-of-arrival ranging; Accuracy; Acoustics; Bit error rate; Demodulation; Distance measurement; Microphones; Mobile handsets
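Once an anchor's signal has been detected, acoustic TOA ranging reduces to locating the known probe waveform in the recording and converting the sample offset into a distance at the speed of sound. The toy sketch below shows only this last arithmetic step with a brute-force cross-correlation; the paper's modulation, anchor identification and synchronization machinery (which removes the need for the unrealistic shared-clock assumption made here) is not modeled, and the probe values are invented:

```python
SPEED_OF_SOUND = 343.0   # m/s, approximate at room temperature

def estimate_distance(probe, recording, fs):
    """Slide the known probe over the recording, pick the offset with the
    highest correlation, and convert that delay into a distance.
    Assumes (unrealistically) that emission time = recording start."""
    n = len(probe)
    best_off, best_corr = 0, float("-inf")
    for off in range(len(recording) - n + 1):
        corr = sum(probe[i] * recording[off + i] for i in range(n))
        if corr > best_corr:
            best_off, best_corr = off, corr
    return best_off / fs * SPEED_OF_SOUND

# Synthetic check: a probe delayed by 441 samples at 44.1 kHz corresponds
# to 10 ms of flight, i.e. 3.43 m.
fs = 44100
probe = [1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0, -1.0]
recording = [0.0] * 441 + probe + [0.0] * 100
print(estimate_distance(probe, recording, fs))   # -> 3.43
```

Real systems correlate against a probe with a sharp autocorrelation peak (e.g., a chirp) and must estimate the emission instant, which is exactly what the synchronization component in the abstract addresses.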
【Paper Link】 【Pages】:500-504
【Authors】: Xinfeng Li ; Jin Teng ; Qiang Zhai ; Junda Zhu ; Dong Xuan ; Yuan F. Zheng ; Wei Zhao
【Abstract】: Human localization is an enabling technology for many mobile applications. As more and more people carry mobile phones, we can now localize a person by localizing his or her mobile phone. However, it is observed that the presence of human bodies introduces heavy interference to mobile phone signals. This has been one of the major causes of inaccurate wireless localization of humans. In this paper, we propose using video cameras to help estimate the human body's interference on a mobile device's signals. We combine human orientation detection with human/phone/AP relative position inference to better measure how a human blocks or reflects wireless signals. We have also developed a signal distortion compensation model. Based on these technologies, we have implemented a human localization system called EV-Human. Real-world experiments show that our EV-Human system can accurately and robustly localize humans.
【Keywords】: mobile computing; mobile handsets; radiofrequency interference; video cameras; EV-Human; body electronic interference; human blocks; human bodies; human body interference estimation; human localization; human orientation detection; human-phone-AP relative position inference estimation; mobile device signals; mobile phone signals; signal distortion compensation model; video cameras; visual estimation; wireless localization; wireless signals; Biological system modeling; Cameras; Interference; Mobile communication; Mobile handsets; Visualization; Wireless communication
【Paper Link】 【Pages】:505-509
【Authors】: Yu Zhao ; Yunhuai Liu ; Tian He ; Athanasios V. Vasilakos ; Chuanping Hu
【Abstract】: Radio Signal Strength (RSS) based ranging is attractive for its low cost and easy deployment. In real environments, its accuracy is severely affected by the multipath effect and external radio interference. Well-known fingerprint approaches can deal with these issues but introduce too much overhead in dynamic environments. In this paper, we address the issue along a completely different direction. We propose a new ranging framework called Fredi that exploits frequency diversity to overcome the multipath effect based solely on RSS measurements. We design a Discrete Fourier Transform based algorithm and prove that it yields the optimal solution under ideal conditions. We further revise the algorithm to be robust to measurement noise in practice. We implement Fredi on top of the USRP-2 platform and conduct extensive experiments in real indoor environments. Experimental results show superior performance compared with traditional methods.
【Keywords】: discrete Fourier transforms; diversity reception; indoor radio; radiofrequency interference; FREDI; RSS based ranging; USRP-2 platform; discrete Fourier transformation; external radio interference; frequency diversity; indoor environment; measurement noise; multipath effect; radio signal strength; Algorithm design and analysis; Discrete Fourier transforms; Distance measurement; Frequency diversity; Frequency measurement; Receivers; Robustness
【Paper Link】 【Pages】:510-514
【Authors】: Amir Nahir ; Ariel Orda ; Danny Raz
【Abstract】: Load balancing in large distributed server systems is a complex optimization problem of critical importance in cloud systems and data centers. Existing schedulers often incur a high overhead in communication when collecting the data required to make the scheduling decision, hence delaying the job request on its way to the executing server. We propose a novel scheme that incurs no communication overhead between the users and the servers upon job arrival, thus removing any scheduling overhead from the job's critical path. Our approach is based on creating several replicas of each job and sending each replica to a different server. Upon the arrival of a replica to the head of the queue at its server, the latter signals the servers holding replicas of that job, so as to remove them from their queues. We show, through analysis and simulations, that this scheme improves the expected queuing overhead over traditional schemes by a factor of 9 (or more) under various load conditions. In addition, we show that our scheme remains efficient even when the inter-server signal propagation delay is significant (relative to the job's execution time). We provide heuristic solutions to the performance degradation that occurs in such cases and show, by simulations, that they efficiently mitigate the detrimental effect of propagation delays. Finally, we demonstrate the efficiency of our proposed scheme in a real-world environment by implementing a load balancing system based on it, deploying the system on the Amazon Elastic Compute Cloud (EC2), and measuring its performance.
【Keywords】: cloud computing; computer centres; optimisation; queueing theory; resource allocation; Amazon Elastic Compute Cloud; EC2; cloud systems; communication overhead; complex optimization problem; data centers; distributed server systems; heuristic solutions; inter-server signal propagation delay; job critical path; network-aware load balancing; propagation delays; queuing overhead; real-world environment; scheduling decision; server queue; Analytical models; Delays; Load management; Load modeling; Propagation delay; Queueing analysis; Servers
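The replication mechanism described in the abstract above (send copies of each job to several servers; the copy that reaches a queue head first runs, the rest are cancelled) can be illustrated with a toy backlog model. This is a minimal sketch of why replication helps, not the authors' analysis or implementation; the job model and parameters are invented for illustration:

```python
import random

def simulate(num_servers, jobs, replicas, seed=0):
    """Toy backlog model of replication-based dispatch: each job sends
    `replicas` copies to distinct random servers; the copy that reaches
    the head of a queue first starts, and the others are cancelled, so
    the job effectively waits for the least-backlogged chosen server."""
    rng = random.Random(seed)
    backlog = [0.0] * num_servers           # remaining work queued per server
    waits = []
    for inter_arrival, service in jobs:
        # time passes between arrivals, draining every queue
        backlog = [max(0.0, b - inter_arrival) for b in backlog]
        targets = rng.sample(range(num_servers), replicas)
        best = min(targets, key=lambda s: backlog[s])
        waits.append(backlog[best])         # wait until this replica starts
        backlog[best] += service            # the other replicas are removed
    return sum(waits) / len(waits)

# Four unit jobs arriving at once on two servers: full replication balances
# perfectly (mean wait 0.5), while blind single dispatch can only do worse.
batch = [(0.0, 1.0)] * 4
print(simulate(2, batch, replicas=2), simulate(2, batch, replicas=1))
```

Note that the replicated job never queries server state in advance — the "least-backlogged" choice emerges from whichever replica is served first, which is exactly how the scheme avoids scheduling overhead on the job's critical path.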
【Paper Link】 【Pages】:515-519
【Authors】: Kai Wang ; Minghong Lin ; Florin Ciucu ; Adam Wierman ; Chuang Lin
【Abstract】: Energy consumption imposes a significant cost on data centers; yet much of that energy is used to maintain excess service capacity during periods of predictably low load. As a result, there has recently been interest in developing designs that allow the service capacity to be dynamically resized to match the current workload. However, there is still much debate about the value of such approaches in real settings. In this paper, we show that the value of dynamic resizing is highly dependent on the statistics of the workload process. In particular, both slow-timescale non-stationarities of the workload (e.g., the peak-to-mean ratio) and fast-timescale stochasticity (e.g., the burstiness of arrivals) play key roles. To illustrate the impact of these factors, we combine optimization-based modeling of the slow timescale with stochastic modeling of the fast timescale.
【Keywords】: computer centres; energy consumption; power aware computing; statistical analysis; stochastic programming; dynamic data center resizing; energy consumption; fast time-scale stochasticity; optimization-based modeling; slow time scale nonstationarity; workload impact characterization; workload process; Data models; Delays; Heuristic algorithms; Optimization; Servers; Stochastic processes; Switches
【Paper Link】 【Pages】:520-524
【Authors】: Yan Gao ; Zheng Zeng ; Xue Liu ; P. R. Kumar
【Abstract】: Internet-scale data centers (IDCs) have rapidly proliferated to such an extent that their energy consumption and GreenHouse Gas (GHG) emissions have become an important concern to society. As a result, many IDC operators have started using renewable energy, e.g., wind power, to power their data centers. Unfortunately, the utilization of wind energy has remained low due to the intermittent nature of wind. This paper makes the case that it is in fact possible for a distributed IDC system to exploit multiple uncorrelated wind energy sources to significantly reduce the effect of intermittency and nearly achieve “entirely green” cloud-scale services. This result is obtained from the analysis of real-world wind power traces from 69 wind farms. The idea is to leverage the front-end load dispatching server to send work to the location where wind power is available. We propose a wind-power-aware (WPA) policy that routes jobs based only on the current states of workloads and wind power availability in the data centers. We show that with the WPA policy more than 95% of energy consumption in IDCs can in fact be satisfied by wind power, and, secondly, that achieving this does not require delaying the processing of jobs due to wind availability. We also show that the locations where data centers are placed play an important role in achieving high wind power utilization. Our analysis shows that wind power utilization can generally lie in a range from 44% to 96%, depending on how the locations of wind farms are selected. We propose a method for location selection that uses the coefficient of variation instead of the correlation coefficient, and show that with this method the utilization can lie in the high end of the above range. Finally, we verify these results by simulations based on real-world traces of both workloads and wind power generation.
【Keywords】: computer centres; distributed power generation; power consumption; power generation dispatch; wind power; wind power plants; GHG emissions; Internet-scale data centers; WPA policy; coefficient of variation; distributed IDC system; energy consumption; entirely green cloud-scale services; front-end load dispatching server; greenhouse gas; intermittency; renewable energy; wind energy; wind farms; wind power utilization; wind-power-aware; Correlation; Green products; Power demand; Servers; Wind energy; Wind farms; Wind power generation
【Paper Link】 【Pages】:525-529
【Authors】: Tianrong Zhang ; Fan Wu ; Chunming Qiao
【Abstract】: Efficient wireless channel allocation is becoming an increasingly important topic in wireless networking. Dynamic channel allocation is believed to be an effective way to cope with the shortage of wireless channel resources. In this paper, we propose SPECIAL, a Strategy-Proof and EffiCIent multi-channel Auction mechanism for wireLess networks. SPECIAL guarantees the strategy-proofness of the channel auction, exploits the spatial reusability of wireless channels, and achieves high channel allocation efficiency.
【Keywords】: channel allocation; game theory; radio networks; SPECIAL; strategy proof and efficient multichannel auction mechanism; wireless channel allocation; wireless networks; Channel allocation; Cost accounting; Interference; Resource management; Vectors; Wireless networks
【Paper Link】 【Pages】:530-534
【Authors】: Yangming Zhao ; Sheng Wang ; Shizhong Xu ; Xiong Wang ; Xiujiao Gao ; Chunming Qiao
【Abstract】: In this paper, we study the tradeoff between two important traffic engineering objectives: load balance and energy efficiency. Although traditional, commonly used multi-objective optimization methods can yield a Pareto-efficient solution, they need to construct an aggregate objective function (AOF) or model one of the two objectives as a constraint in the optimization problem formulation. As a result, it is difficult to achieve a fair tradeoff between the two objectives. Accordingly, we introduce a Nash bargaining framework that treats the two objectives as two virtual players in a game-theoretic model, who negotiate how traffic should be routed in order to optimize both objectives. During the negotiation, each player announces a performance threat value to reduce its cost, so the model is regarded as a threat-value game. Our analysis shows that no agreement can be reached if each player sets its threat value selfishly. To avoid such a negotiation breakdown, we extend the threat-value game to a repeated process and design a mechanism that not only guarantees an agreement but also generates a fair solution. In addition, the insights from this work are also useful for achieving a fair tradeoff in other multi-objective optimization problems.
【Keywords】: Pareto optimisation; energy conservation; game theory; resource allocation; telecommunication traffic; AOF; Nash bargaining framework; Pareto efficient solution; aggregate objective function; energy efficiency; game theoretical perspective; load balance; multiobjective optimization method; threat value game; traffic engineering; traffic engineering objective; Energy consumption; Energy efficiency; Games; Load modeling; Optimization; Routing; Telecommunication traffic; Energy Efficiency; Load Balance; Multi-Objective Optimization; Nash Bargaining; Traffic Engineering
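The Nash bargaining solution underlying the framework above picks, among feasible (load-balance utility, energy-efficiency utility) pairs, the one maximizing the product of each player's gain over its threat (disagreement) value. The sketch below illustrates only this textbook solution concept on a made-up discrete frontier; the paper's repeated threat-value game and fairness mechanism are not reproduced here:

```python
def nash_bargaining(frontier, d1, d2):
    """Return the feasible utility pair (u1, u2) maximizing the Nash
    product (u1 - d1) * (u2 - d2) over the threat point (d1, d2)."""
    best, best_prod = None, float("-inf")
    for u1, u2 in frontier:
        if u1 >= d1 and u2 >= d2:          # individually rational points only
            prod = (u1 - d1) * (u2 - d2)
            if prod > best_prod:
                best, best_prod = (u1, u2), prod
    return best

# A made-up linear tradeoff frontier u1 + u2 = 1 between the two objectives:
frontier = [(i / 10, 1 - i / 10) for i in range(11)]
print(nash_bargaining(frontier, 0.0, 0.0))   # symmetric threats -> (0.5, 0.5)
print(nash_bargaining(frontier, 0.2, 0.0))   # player 1's higher threat shifts
                                             # the agreement toward (0.6, 0.4)
```

The second call shows why threat values matter in the paper's game: raising one's own threat value shifts the bargaining outcome in one's favor, which is precisely the selfish behavior whose breakdown the repeated mechanism is designed to prevent.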
【Paper Link】 【Pages】:535-539
【Authors】: Xin Luo ; Hamidou Tembine
【Abstract】: In this paper we consider a random access system where each user can be in one of two modes of operation (having a packet or not), and the set of users that have a packet has access to a shared medium. We propose an evolving coalitional game-theoretic framework to analyze the system outcomes. Unlike classical coalitional approaches, which assume that coalitional structures are fixed and formed at no cost, we explain how coalitions can be formed in a fully distributed manner using evolutionary dynamics and coalitional combined fully distributed payoff and strategy (CODIPAS) learning. We introduce the concept of an evolutionarily stable coalitional structure (ESCS), which, once formed, is resilient to small perturbations of strategies. We show that (i) the formation and stability of coalitions depend mainly on the cost of forming a coalition compared to the benefit of cooperation, and (ii) the grand coalition can be unstable, in which case a localized coalitional structure forms as an evolutionarily stable coalitional structure. When the core is empty, the coalitional CODIPAS scheme selects one of the stable sets. Finally, we discuss the convergence and complexity of the proposed coalitional CODIPAS learning in access control with different user activities.
【Keywords】: evolutionary computation; game theory; multi-access systems; radio networks; CODIPAS learning; ESCS; coalitional combined fully distributed payoff and strategy; coalitional game theory; evolutionarily stable coalitional structure; evolutionary coalitional games; evolutionary dynamics; localized coalitional structure; random access control; Access control; Convergence; Game theory; Games; Resource management; Stability analysis; Vectors
【Paper Link】 【Pages】:540-544
【Authors】: Chih-Yu Wang ; Yan Chen ; Hung-Yu Wei ; K. J. Ray Liu
【Abstract】: Heterogeneous multimedia content delivery over wireless networks is an important yet challenging issue. A promising solution is to combine multicasting and scalable video coding (SVC) via cross-layer design, which has been shown to be effective in the literature. Nevertheless, most existing works on SVC multicasting systems focus on static scenarios. In addition, the economic value of an SVC multicasting system has seldom been explored. In this work, we study a subscription-based SVC multicasting system with stochastic user arrivals and heterogeneous user preferences. A stochastic framework based on a Multi-dimensional Markov Decision Process (M-MDP) is proposed to study the negative network externality present in the proposed system. A game-theoretic analysis is conducted to understand the rational demands of heterogeneous users in the subscription-based economic model. We show that the optimal pricing strategy, which maximizes the expected revenue of the service provider, can be derived through dynamic iterative updating techniques. Moreover, the overall user valuation of the system is maximized under such an optimal pricing strategy. Finally, the solution's efficiency is evaluated through simulations.
【Keywords】: Markov processes; game theory; iterative methods; multimedia communication; radio networks; video coding; M-MDP; cross-layer design; dynamic iterative updating technique; game-theoretic analysis; heterogeneous multimedia content delivery; heterogeneous user preference; multidimensional Markov decision process; optimal pricing; stochastic scalable video coding multicasting system; stochastic user arrival; subscription economic model; subscription-based SVC multicasting system; wireless network; Games; Multicast communication; Pricing; Resource management; Static VAr compensators; Streaming media; Subscriptions
【Paper Link】 【Pages】:545-549
【Authors】: Yossi Kanizo ; David Hay ; Isaac Keslassy
【Abstract】: In software-defined networks (SDNs), the network controller first formulates abstract network-wide policies, and then implements them in the forwarding tables of network switches. However, fast SDN tables often cannot scale beyond a few hundred entries. This is because they typically include wildcards, and therefore are implemented using either expensive and power-hungry TCAMs, or complex and slow data structures. This paper presents the Palette distribution framework for decomposing large SDN tables into small ones and then distributing them across the network, while preserving the overall SDN policy semantics. Palette helps balance the sizes of the tables across the network, as well as reduce the total number of entries by sharing resources among different connections. It copes with two NP-hard optimization problems: Decomposing a large SDN table into equivalent subtables, and distributing the subtables such that each connection traverses each type of subtable at least once. To implement the Palette distribution framework, we introduce graph-theoretical formulations and algorithms, and show that they achieve close-to-optimal results in practice.
【Keywords】: data structures; software radio; Palette distribution framework; TCAM; data structures; distributing tables; network controller; network switches; network-wide policies; software-defined networks; Access control; Color; Computer languages; Control systems; Monitoring; Protocols; Semantics
【Paper Link】 【Pages】:550-554
【Authors】: Qing Li ; Mingwei Xu ; Meng Chen
【Abstract】: The Internet's global routing tables have been expanding at a dramatic and increasing rate. In this paper, we propose a strict partial order on next hops to construct and aggregate the Nexthop-Selectable FIB (NSFIB). We control the path stretch caused by NSFIB aggregation by setting an upper limit on the number of next hops. According to our simulations, our aggregation algorithms shrink the FIB to 5-15% of its original size, compared with the 20-60% achieved by single-next-hop FIB aggregation algorithms, while controlling the path stretch well.
【Keywords】: Internet; telecommunication network routing; Internet global routing table; NSFIB aggregation; NSFIB construction; aggregation algorithm; forwarding information base; next hop selectable FIB; path stretch; strict partial order; Aggregates; Complexity theory; Heuristic algorithms; Internet; Network topology; Routing; Topology
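The next-hop-selectable idea lends itself to a small illustration. The sketch below is a hypothetical, simplified removal rule, not the paper's actual algorithm: an entry can be dropped when its nearest surviving covering prefix forwards only to next hops the entry also accepts, so longest-prefix matching stays semantically correct.

```python
def aggregate(fib):
    """Aggregate a next-hop-selectable FIB.

    fib maps a prefix (bit-string, '' = default route) to the set of
    next hops acceptable for it. Simplified rule: drop an entry whose
    nearest surviving covering prefix uses only next hops the entry
    also accepts, so any packet it matched still forwards acceptably.
    """
    out = {}
    for p in sorted(fib):                       # prefixes sort before extensions
        cover = next((p[:i] for i in range(len(p) - 1, -1, -1)
                      if p[:i] in out), None)
        if cover is not None and out[cover] <= fib[p]:
            continue                            # parent's choices all acceptable
        out[p] = set(fib[p])
    return out

# Hypothetical 4-entry FIB: '0' and '1' collapse into the default route,
# while '00' must survive because the default's next hop 1 is not in its set.
fib = {'': {1}, '0': {1, 2}, '00': {2}, '1': {1, 3}}
agg = aggregate(fib)
```

Because each prefix carries a set of selectable next hops rather than a single one, more children become removable than under single-next-hop aggregation, which is the intuition behind the reported 5-15% versus 20-60% table sizes.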
【Paper Link】 【Pages】:555-559
【Authors】: Dario G. Garao ; Guido Maier ; Achille Pattavina
【Abstract】: Future switching and interconnection fabrics inside switching equipment, high-performance computers, and datacenters will require higher throughput and better energy efficiency. Optical technology provides many opportunities to improve both, compared to its electronic counterparts. This work defines a procedure to design the architecture of optical multistage switching networks. Modularity of the implementation is the primary concern, allowing a generic-size fabric to be constructed by simply cascading multiple stage-modules. In this paper we show in detail the application of the approach to a family of banyan networks. The designed architecture can be realized with various implementation technologies, such as integrated optics with micro-ring resonators, free-space optics with 2-D MEMS, and networks-on-chip.
【Keywords】: computer centres; energy conservation; multistage interconnection networks; optical computing; optical interconnections; optical switches; switching networks; 2D MEMS; banyan network; data center; energy efficiency; free-space optics; generic-size fabric; high-performance computer; interconnection fabrics; microring resonator; modular architecture; multiple stage-module; networks on chip; optical multistage switching networks; switching equipment; Optical device fabrication; Optical interconnections; Optical network units; Optical resonators; Optical switches
【Paper Link】 【Pages】:560-564
【Authors】: Eugene Chai ; Kang G. Shin ; Sung-Ju Lee ; Jeongkeun Lee ; Raúl H. Etkin
【Abstract】: The growing demand for real-time streaming video on portable devices has increased the importance of multimedia multicast in mobile wireless networks. A defining characteristic of such multicast networks is their heterogeneity in both the channel states and the MIMO capabilities of their clients. However, current wireless multicast schemes adapt poorly to such heterogeneity. We introduce Procrustes, a multimedia multicast scheme that is built upon a novel PHY-layer rateless code. Unlike bit-level rateless codes (such as Raptor [14] codes), Procrustes clients automatically adjust the PSNR of the received multicast video stream to match both the instantaneous channel state and the number of active receive antennas. We demonstrate the performance of Procrustes in a simulated environment.
【Keywords】: antenna arrays; mobile handsets; multicast communication; radio networks; receiving antennas; video streaming; wireless channels; MIMO capabilities; PHY-layer rateless code; PSNR; Procrustes; active receive antennas; bit-level rateless codes; instantaneous channel state; mobile wireless networks; multimedia multicast scheme; real-time video streaming; received multicast video stream; wireless multicast networks; Antennas; Bit rate; MIMO; OFDM; PSNR; Streaming media; Transmitters
【Paper Link】 【Pages】:565-569
【Authors】: Chien-Han Chai ; Yuan-Yao Shih ; Ai-Chun Pang
【Abstract】: With the explosive growth of mobile data traffic, femtocell technology is a promising solution to enhance mobile service quality and system capacity in cellular networks. However, one of the key problems for femtocell deployment is to find an access control scheme that both mobile operators and users are willing to adopt. Among all access control modes, the hybrid access mode is considered the most promising: it allows femtocells to provide preferential access to femtocell owners and subscribers while other public users can access femtocells with certain restrictions. Since femtocell owners are selfish, providing them with enough incentives to share their femtocell resources is challenging. In this paper, we construct an economic framework for the mobile operator and femtocell users via a game-theoretic analysis and introduce the concept of profit sharing to provide a positive cycle that sustains the femtocell service. In this framework, a femtocell game is formulated, where the femtocell owners determine the proportion of femtocell resources shared with public users while the operator maximizes its own benefit by determining the ratio of revenue distributed to femtocell owners. The existence of a Nash equilibrium of the game is analyzed. Extensive simulations show that the proposed framework maximizes the operator's profit while maintaining the users' service requirements.
【Keywords】: access control; femtocellular radio; game theory; radio spectrum management; telecommunication services; telecommunication traffic; Nash equilibrium; access control mode; cellular networks; cochannel hybrid access femtocell networks; femtocell owner; femtocell resources; game theoretical analysis; hybrid access mode; mobile data traffic; mobile operators; mobile service quality enhancement; profit sharing concept; revenue distribution ratio; spectrum-sharing rewarding framework; system capacity; Access control; Femtocell networks; Games; Interference; Macrocell networks; Signal to noise ratio; Femtocell; game theory; hybrid access mode; profit sharing
【Paper Link】 【Pages】:570-574
【Authors】: Richard Southwell ; Xu Chen ; Jianwei Huang
【Abstract】: Today's wireless networks are facing tremendous growth, and many applications have more demanding quality of service (QoS) requirements than ever before. However, only a finite amount of wireless resources (such as spectrum) can be used to satisfy these requirements. We present a general QoS satisfaction game framework for modeling distributed spectrum sharing under QoS requirements. Our study is motivated by the observation that finding globally optimal spectrum sharing solutions with QoS guarantees is NP-hard. We show that the QoS satisfaction game has the finite improvement property, and the users can self-organize into a pure Nash equilibrium in polynomial time. By bounding the price of anarchy, we demonstrate that the worst-case pure Nash equilibrium can be close to the globally optimal solution when users' QoS demands are not too diverse.
【Keywords】: game theory; optimisation; quality of service; radio networks; radio spectrum management; NP-hard problems; Nash equilibrium; distributed spectrum sharing; finite improvement property; general QoS satisfaction game; globally optimal spectrum sharing solutions; quality of service; wireless networks; Games; Integrated circuits; Interference; Nash equilibrium; Quality of service; Streaming media; Wireless communication
【Paper Link】 【Pages】:575-579
【Authors】: Shreeshankar Bodas ; Bilal Sadiq
【Abstract】: Uplink scheduling/resource allocation under the single-carrier FDMA constraint is investigated, taking into account the queuing dynamics at the transmitters. Under the single-carrier constraint, the problem of MaxWeight scheduling, as well as that of determining whether a given number of packets can be served from all the users, is shown to be NP-complete. Finally, a matching-based scheduling algorithm is presented that requires only a polynomial number of computations per timeslot and, in the case of a system with large bandwidth and user population, provably provides good delay (small-queue) performance even under the single-carrier constraint. In summary, the results in the first part of the paper support the recent push to remove SC-FDMA from the standards, whereas those in the second part present a way of working around the single-carrier constraint if it remains in the standards.
【Keywords】: computational complexity; frequency division multiple access; polynomials; queueing theory; radio networks; scheduling; MaxWeight scheduling; NP-complete; SCFDMA-based wireless uplink networks; low-delay scheduling; matching-based scheduling algorithm; polynomial-complexity; queuing dynamics; single-carrier FDMA constraint; transmitters; uplink scheduling-resource allocation; user population; Barium; OFDM; Radio spectrum management; Resource management; Servers; Uplink; Wireless communication; Batch-and-allocate; Uplink scheduling; single-carrier FDMA
【Paper Link】 【Pages】:580-584
【Authors】: Qingsi Wang ; Mingyan Liu
【Abstract】: We consider the optimal transmission power control of a single wireless node with stochastic energy harvesting and an infinite/saturated queue with the objective of maximizing a certain reward function, e.g., the total data rate. We develop simple control policies that achieve near optimal performance in the finite-horizon case with finite energy storage. The same policies are shown to be asymptotically optimal in the infinite horizon case for sufficiently large energy storage. Such policies are typically difficult to directly obtain using a Markov Decision Process (MDP) formulation or through a dynamic programming framework due to the computational complexity. We relate our results to those obtained in the unsaturated regime, and highlight a type of threshold-based policies that is universally optimal.
【Keywords】: Markov processes; computational complexity; power control; queueing theory; radio networks; stochastic processes; MDP formulation; Markov decision process formulation; computational complexity; control policies; finite energy storage; infinite horizon case; infinite-saturated queue; single wireless node; stochastic energy harvesting; threshold-based policies; transmission power control; Batteries; Energy harvesting; Markov processes; Optimization; Power control; Throughput; Wireless communication
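The appeal of simple policies under energy harvesting can be illustrated with a toy model (my own simplified setup, not the paper's formulation): a battery of finite capacity, i.i.d. harvested energy, and a concave log(1 + p) rate reward per slot. A fixed-rate policy that spends roughly the mean harvest rate avoids both overflow losses and deep depletion, and tends to beat greedily spending the whole battery each slot.

```python
import math
import random

def simulate(policy, horizon=1000, capacity=50.0, seed=0):
    """Toy finite-horizon energy-harvesting link.

    Battery b_t evolves as b_{t+1} = min(capacity, b_t - p_t + e_t) with
    i.i.d. harvest e_t ~ Uniform(0, 2) (mean 1); each slot earns a rate
    reward log(1 + p_t) for transmit power p_t <= b_t.
    """
    rng = random.Random(seed)
    b, total_rate = capacity / 2, 0.0
    for _ in range(horizon):
        e = rng.uniform(0.0, 2.0)      # harvested energy this slot
        p = min(b, policy(b))          # never spend more than is stored
        total_rate += math.log(1.0 + p)
        b = min(capacity, b - p + e)   # finite storage: overflow is lost
    return total_rate

greedy = lambda b: b                   # spend everything each slot
fixed = lambda b: 1.0                  # spend roughly the mean harvest rate
```

By concavity of the reward, spreading transmissions out (the fixed policy) yields a higher total rate than bursty greedy spending, which is the intuition behind near-optimal threshold-type policies for large energy storage.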
【Paper Link】 【Pages】:585-589
【Authors】: Maria Gorlatova ; Robert Margolies ; John Sarik ; Gerald Stanje ; Jianxun Zhu ; Baradwaj Vigraham ; Marcin Szczodrak ; Luca P. Carloni ; Peter R. Kinget ; Ioannis Kymissis ; Gil Zussman
【Abstract】: This paper focuses on a new type of wireless devices in the domain between RFIDs and sensor networks - Energy Harvesting Active Networked Tags (EnHANTs). Future EnHANTs will be small, flexible, and self-powered devices that can be attached to objects that are traditionally not networked (e.g., books, toys, clothing), thereby providing the infrastructure for novel tracking applications. We present the design considerations for the EnHANT prototypes, developed over the past 3 years. The prototypes harvest indoor light energy using custom organic solar cells, communicate and form multihop networks using ultralow-power Ultra-Wideband Impulse Radio (UWB-IR) transceivers, and adapt their communications and networking patterns to the energy harvesting and battery states. We also describe a small scale EnHANTs testbed that uniquely allows evaluating different algorithms with trace-based light energy inputs.
【Keywords】: energy harvesting; radiofrequency identification; solar cells; telecommunication power supplies; wireless sensor networks; EnHANT; RFID; battery state; energy harvesting active networked tags; indoor light energy; light energy input; multihop network; organic solar cell; tracking application; ultralow power impulse radio transceiver; ultrawideband impulse radio transceiver; wireless device; wireless sensor network; Energy harvesting; Multiaccess communication; Photovoltaic cells; Photovoltaic systems; Prototypes; Transceivers; Energy adaptive networking; cross-layer; energy harvesting; organic solar cells; ultra-low-power communications; ultrawideband (UWB)
【Paper Link】 【Pages】:590-594
【Authors】: Nicolò Michelusi ; Leonardo Badia ; Ruggero Carli ; Luca Corradini ; Michele Zorzi
【Abstract】: Harvesting-based wireless sensor devices are increasingly being deployed in today's sensor networks, due to their demonstrated advantages in terms of prolonged lifetime and autonomous operation. However, irreversible degradation mechanisms jeopardize battery lifetime, calling for intelligent management policies which minimize the impact of these phenomena while guaranteeing a minimum Quality of Service (QoS). This paper explores a mathematical characterization of harvesting-based battery-powered sensor devices, focusing on the impact of the battery discharge policy on the irreversible degradation of the storage capacity. A general framework based on Markov chains, which captures the battery degradation process, is proposed. Based on this model, it is shown that a degradation-aware policy significantly improves the lifetime of the sensor compared to "greedy" operation policies, while guaranteeing the minimum required QoS.
【Keywords】: Markov processes; quality of service; telecommunication network management; telecommunication network reliability; wireless sensor networks; Markov chains; QoS; autonomous operation; battery degradation impact; battery degradation process; battery discharge policy; battery lifetime; greedy operation policies; harvesting-based battery-powered sensor devices; harvesting-based wireless sensor devices; intelligent management policies; irreversible degradation mechanisms; optimal management policies; quality of service; wireless sensor networks; Batteries; Degradation; Energy harvesting; Markov processes; Quality of service; Wireless communication; Wireless sensor networks; Battery management; Energy harvesting; Lifetime estimation; Markov processes; Wireless sensor networks
【Paper Link】 【Pages】:595-599
【Authors】: Byung-Gook Kim ; Shaolei Ren ; Mihaela van der Schaar ; Jang-Won Lee
【Abstract】: Electric vehicles (EVs) will play an important role in the future smart grid because of their capability to store electrical energy in their batteries during off-peak hours and supply the stored energy to the power grid during peak hours. In this paper, we consider a power system with an aggregator and multiple customers with EVs, and propose a novel electricity load scheduling scheme which, unlike previous works, jointly considers the load scheduling for appliances and the energy trading using EVs. Specifically, we allow customers to determine how much energy to purchase from or sell to the aggregator while taking into consideration the load demands of their residential appliances and the associated electricity bill. Under the assumption of a collaborative system, where the customers agree to maximize the social welfare of the power system, we develop an optimal distributed load scheduling algorithm that maximizes the social welfare. Through numerical results, we show when the energy trading leads to an increase in social welfare in various usage scenarios.
【Keywords】: electric vehicles; load (electric); scheduling; EV; bidirectional energy trading; distributed load scheduling algorithm; electric vehicles; electrical energy; off-peak hours; power system; residential appliances; residential load scheduling; smart grid; social welfare; Batteries; Electricity; Energy consumption; Home appliances; Scheduling; Smart grids
【Paper Link】 【Pages】:600-604
【Authors】: Shang Shang ; Paul W. Cuff ; Pan Hui ; Sanjeev R. Kulkarni
【Abstract】: We analyze a class of distributed quantized consensus algorithms for arbitrary networks. In the initial setting, each node in the network has an integer value. Nodes exchange their current estimates of the mean value in the network, and then update their estimates by communicating with their neighbors over a limited-capacity channel in an asynchronous clock setting. Eventually, all nodes reach consensus with quantized precision. We start the analysis with a special case, a distributed binary voting algorithm, then proceed to the expected convergence time of the general quantized consensus algorithm proposed by Kashyap et al. We use the theory of electric networks, random walks, and couplings of Markov chains to derive an O(N^3 log N) upper bound for the expected convergence time on an arbitrary graph of size N, improving on the state-of-the-art bounds of O(N^4 log N) for binary consensus and O(N^5) for quantized consensus algorithms. Our result does not depend on the graph topology. Simulations are performed to validate the analysis.
【Keywords】: Markov processes; convergence of numerical methods; graph theory; network analysis; telecommunication network topology; Markov chains couplings; arbitrary networks; asynchronous clock setting; convergence time; distributed binary voting algorithm; distributed quantized consensus algorithms; electric networks theory; graph topology; limited capacity channel; network mean value; quantized precision; random walks; Algorithm design and analysis; Clocks; Convergence; Markov processes; Peer-to-peer computing; Simulation; Upper bound; Distributed quantized consensus; convergence time; gossip
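For intuition, the pairwise update behind such quantized consensus algorithms (in the spirit of Kashyap et al., with a randomized assignment playing the role of the swap that prevents the network from getting stuck) can be sketched as follows; the path graph, seed, and stopping rule are illustrative assumptions, not the paper's setup.

```python
import random

def quantized_gossip(values, edges, rng, max_steps=200000):
    """Asynchronous quantized consensus on a graph.

    At each tick one random edge wakes up, and its two endpoints replace
    their integer values by the ceiling and floor of their average,
    assigned in random order. The random assignment doubles as a swap,
    so configurations like a monotone staircase cannot deadlock. Stops
    once all values are within 1 of each other (quantized consensus).
    """
    v = list(values)
    for _ in range(max_steps):
        if max(v) - min(v) <= 1:
            break
        i, j = rng.choice(edges)
        lo, hi = (v[i] + v[j]) // 2, (v[i] + v[j] + 1) // 2
        v[i], v[j] = (hi, lo) if rng.random() < 0.5 else (lo, hi)
    return v

rng = random.Random(1)
path = [(i, i + 1) for i in range(9)]     # 10-node path graph
final = quantized_gossip([0] * 9 + [100], path, rng)
```

Note the two invariants the analysis relies on: each update preserves the integer sum, and the spread max - min never increases.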
【Paper Link】 【Pages】:605-609
【Authors】: Lan Zhang ; Xiang-Yang Li ; Yunhao Liu ; Taeho Jung
【Abstract】: The existing work on distributed secure multi-party computation, e.g., set operations, dot product, and ranking, focuses on privacy protection, while the verifiability of user inputs and outcomes is neglected. Most existing works assume that the involved parties will follow the protocol honestly. In practice, a malicious adversary can easily forge his/her input values to produce incorrect outcomes, or simply lie about the computation results to cheat other parties. In this work, we focus on the problem of verifiable privacy-preserving multi-party computation. We thoroughly analyze the attacks on existing privacy-preserving multi-party computation approaches and design a series of protocols for dot product, ranging, and ranking, which are proved to be privacy-preserving and verifiable. We implement our protocols on laptops and mobile phones. The results show that our verifiable private computation protocols are efficient in both computation and communication.
【Keywords】: computer network security; cryptographic protocols; data privacy; mobile handsets; distributed secure multiparty computation; dot product protocol; malicious adversary; privacy preserving multiparty computation; privacy protection; ranging protocol; ranking protocol; verifiable private computation protocol; verifiable private multiparty computation; Distance measurement; Encryption; Portable computers; Privacy; Protocols; Vectors; Dot Product; Multi-party Computation; Privacy; Ranging; Ranking; Verifiability
【Paper Link】 【Pages】:610-614
【Authors】: Zhen Xu ; Cong Wang ; Qian Wang ; Kui Ren ; Lingyu Wang
【Abstract】: Cloud computing provides a “pay-per-use” utility service which offers customers economical access to large amounts of computing resources with minimal management overhead. Despite the tremendous benefits, computation outsourcing also eliminates the customer's ultimate control over the data computation process, which makes securing cloud computation an imperative and challenging task, especially in the aspect of integrity verification. To address these challenges, in this paper we investigate integrity verification mechanisms for secure outsourced computation in cloud computing. In particular, we focus on outsourcing a widely applicable engineering optimization problem, convex optimization, and aim to devise efficient integrity verification mechanisms using application-specific techniques. Our security design does not require heavy cryptographic tools. Instead, we leverage the inherent structure of the optimization problems and the proof-carrying characteristics of the solving algorithms to achieve efficient integrity verification. The proposed design provides substantial computational savings on the customer side and introduces only marginal overhead on the cloud side. We further prove its correctness and soundness. Extensive experiments in a real cloud environment show that our mechanisms ensure strong integrity assurance with high efficiency on both the customer and cloud sides and are readily applicable in practice.
【Keywords】: cloud computing; cryptography; data integrity; optimisation; outsourcing; application-specific techniques; cloud computation securing; cloud environment; computation outsourcing; computing resources; convex optimization; cryptographic tools; data computation process; economical access; engineering optimization problem; integrity verification; integrity verification mechanisms; management overhead; pay-per-use utility service; proof-carrying cloud computation; security design; Algorithm design and analysis; Convex functions; Cryptography; Educational institutions; Optimization; Outsourcing; Cloud Computing; Computation Outsourcing; Convex Optimization; Security
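The "proof-carrying" flavor of such verification can be illustrated on a linear program, a special case of convex optimization. In this hypothetical sketch (standard LP duality, not the paper's actual mechanism), the cloud solves min c.x subject to A x <= b and returns a dual certificate y alongside the claimed optimum x; the customer then checks optimality using only matrix-vector products, far cheaper than solving.

```python
def verify_lp(c, A, b, x, y, tol=1e-6):
    """Verify a claimed optimum x of  min c.x  s.t.  A x <= b.

    The dual certificate y must satisfy: primal feasibility of x, dual
    feasibility (y >= 0), stationarity (A^T y + c = 0), and a zero
    duality gap. Together these certify optimality by LP duality.
    """
    m, n = len(A), len(c)
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
    if any(Ax[i] > b[i] + tol for i in range(m)):
        return False                         # x violates a constraint
    if any(yi < -tol for yi in y):
        return False                         # y is not a valid multiplier
    ATy = [sum(A[i][j] * y[i] for i in range(m)) for j in range(n)]
    if any(abs(ATy[j] + c[j]) > tol for j in range(n)):
        return False                         # stationarity fails
    gap = sum(c[j] * x[j] for j in range(n)) + sum(b[i] * y[i] for i in range(m))
    return abs(gap) <= tol                   # zero duality gap

# Example: max x1 + x2 on the unit box, written as  min -x1 - x2  s.t.  Ax <= b.
c = [-1.0, -1.0]
A = [[1, 0], [0, 1], [-1, 0], [0, -1]]
b = [1.0, 1.0, 0.0, 0.0]
```

A tampered result fails the check, e.g. an infeasible x is rejected at the first test, which is the sense in which the solution carries its own proof.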
【Paper Link】 【Pages】:615-619
【Authors】: Stefanie Roos ; Thorsten Strufe
【Abstract】: Routing in Darknets (membership-concealing overlays for pseudonymous communication, such as Freenet) is insufficiently analyzed, barely understood, and highly inefficient. At higher performance, these systems would be promising privacy-preserving solutions for social applications. This paper contributes a realistic analytical model and a novel routing algorithm with a provable polylog expected routing length. Using the model, we additionally prove that this bound cannot be achieved by Freenet's routing. Simulations confirm that our proposed algorithm outperforms Freenet for realistic network sizes.
【Keywords】: telecommunication network routing; Freenet routing; darknet routing; privacy preserving solutions; pseudonymous communication; routing algorithm; social applications; Analytical models; Network topology; Peer-to-peer computing; Privacy; Routing; Social network services; Topology
【Paper Link】 【Pages】:620-628
【Authors】: Yang Guo ; Alexander L. Stolyar ; Anwar Walid
【Abstract】: We consider a shadow routing based approach to the problem of real-time adaptive placement of virtual machines (VM) in large data centers (DC) within a network cloud. Such placement in particular has to respect vector packing constraints on the allocation of VMs to host physical machines (PM) within a DC, because each PM can potentially serve multiple VMs simultaneously. Shadow routing is attractive in that it allows a large variety of system objectives and/or constraints to be treated within a common framework (as long as the underlying optimization problem is convex). Perhaps an even more attractive feature is that the corresponding algorithm is very simple to implement, runs continuously, and adapts automatically to changes in the VM demand rates, system parameters, etc., without the need to re-solve the underlying optimization problem “from scratch”. In this paper we focus on the minmax-DC-load problem. Namely, we propose a combined VM-to-DC routing and VM-to-PM assignment algorithm, referred to as the Shadow scheme, which minimizes the maximum of appropriately defined DC utilizations. We prove that the Shadow scheme is asymptotically optimal (as one of its parameters goes to 0). Simulation confirms the good performance and high adaptivity of the algorithm. Favorable performance is also demonstrated in comparison with a baseline algorithm based on a VMware implementation [7], [8]. We also propose a simplified, “more distributed” version of the Shadow scheme, which performs almost as well in simulations.
【Keywords】: cloud computing; computer centres; convex programming; minimax techniques; resource allocation; virtual machines; virtualisation; DC utilizations; VM allocation; VM demand rate change; VM-to-DC routing algorithm; VM-to-PM assignment algorithm; VMware implementation; cloud computing; cloud service providers; data centers; minmax-DC-load problem; network cloud; optimization problem; physical machines; real-time adaptive placement problem; resource management; shadow-routing based dynamic algorithms; system parameter change; vector packing constraints; virtual machine placement; virtualization technology; Algorithm design and analysis; Computational modeling; Routing; Servers; Steady-state; Vectors; Virtual machining
【Paper Link】 【Pages】:629-637
【Authors】: Hiroki Yanagisawa ; Takayuki Osogami ; Rudy Raymond
【Abstract】: The difficulty in allocating virtual machines (VMs) on servers stems from the requirement that sufficient resources (such as CPU capacity and network bandwidth) must be available for each VM in the event of a failure or maintenance work, as well as for temporal fluctuations of resource demands, which often exhibit periodic patterns. We propose a mixed integer programming approach that considers the fluctuations of the resource demands for optimal and dependable allocation of VMs. At the heart of the approach are techniques for optimally partitioning the time horizon into intervals of variable lengths and for reliably estimating the resource demands in each interval. We show that our new approach successfully allocates VMs in the cloud computing environment of a financial company, where the dependability requirement is strict and various types of VMs exist.
【Keywords】: cloud computing; finance; integer programming; virtual machines; VM; cloud computing environment; dependability requirement; dependable virtual machine allocation; financial company; mixed integer programming approach; temporal fluctuations; Cloud computing; Fault tolerance; Fault tolerant systems; Maintenance engineering; Resource management; Servers; capacity planning; dynamic programming; fault tolerance; mixed integer programming; server consolidation
【Paper Link】 【Pages】:638-646
【Authors】: Li Erran Li ; Vahid Liaghat ; Hongze Zhao ; MohammadTaghi Hajiaghayi ; Dan Li ; Gordon T. Wilfong ; Yang Richard Yang ; Chuanxiong Guo
【Abstract】: The emergence of new capabilities such as virtualization and elastic (private or public) cloud computing infrastructures has made it possible to deploy multiple applications, on demand, on the same cloud infrastructure. A major challenge to achieve this possibility, however, is that modern applications are typically distributed, structured systems that include not only computational and storage entities, but also policy entities (e.g., load balancers, firewalls, intrusion prevention boxes). Deploying applications on a cloud infrastructure without the policy entities may introduce substantial policy violations and/or security holes. In this paper, we present PACE: the first systematic framework for Policy-Aware Application Cloud Embedding. We precisely define the policy-aware, cloud application embedding problem, study its complexity and introduce simple, efficient, online primal-dual algorithms to embed applications in cloud data centers. We conduct evaluations using data from a real, large campus network and a realistic data center topology to evaluate the feasibility and performance of PACE. We show that deployment in a cloud without considering in-network policies may lead to a large number of policy violations (e.g., using tree routing as a way to enforce in-network policies may observe up to 91% policy violations). We also show that our embedding algorithms are very efficient by comparing with a good online fractional embedding algorithm.
【Keywords】: cloud computing; computer centres; trees (mathematics); PACE; campus network; cloud data center; data center topology; elastic cloud computing infrastructure; firewall; in-network policies; intrusion prevention boxes; load balancer; online fractional embedding algorithm; online primal-dual algorithm; policy entities; policy violation; policy-aware application cloud embedding; private cloud computing infrastructure; public cloud computing infrastructure; security holes; tree routing; virtualization; Bandwidth; Middleboxes; Network topology; Routing; Security; Topology; Virtual machining
【Paper Link】 【Pages】:647-655
【Authors】: Mansoor Alicherry ; T. V. Lakshman
【Abstract】: Many cloud applications are data intensive requiring the processing of large data sets and the MapReduce/Hadoop architecture has become the de facto processing framework for these applications. Large data sets are stored in data nodes in the cloud which are typically SAN or NAS devices. Cloud applications process these data sets using a large number of application virtual machines (VMs), with the total completion time being an important performance metric. There are many factors that affect the total completion time of the processing task such as the load on the individual servers, the task scheduling mechanism, communication and data access bottlenecks, etc. One dominating factor that affects completion times for data intensive applications is the access latencies from processing nodes to data nodes. Ideally, one would like to keep all data access local to minimize access latency but this is often not possible due to the size of the data sets, capacity constraints in processing nodes which constrain VMs from being placed in their ideal location and so on. When it is not possible to keep all data access local, one would like to optimize the placement of VMs so that the impact of data access latencies on completion times is minimized. We address this problem of optimized VM placement - given the location of the data sets, we need to determine the locations for placing the VMs so as to minimize data access latencies while satisfying system constraints. We present optimal algorithms for determining the VM locations satisfying various constraints and with objectives that capture natural tradeoffs between minimizing latencies and incurring bandwidth costs. We also consider the problem of incorporating inter-VM latency constraints. In this case, the associated location problem is NP-hard with no effective approximation within a factor of 2 - ϵ for any ϵ > 0. 
We discuss an effective heuristic for this case and evaluate by simulation the impact of the various tradeoffs in the optimization objectives.
【Keywords】: cloud computing; computational complexity; virtual machines; MapReduce-Hadoop architecture; NAS devices; NP-hard problem; SAN devices; VM; bandwidth costs; capacity constraints; cloud applications; cloud systems; data access latency optimization; data intensive applications; data nodes; intelligent virtual machine placement; interVM latency constraints; large data set processing; latency minimization; optimal algorithms; processing nodes; Approximation algorithms; Approximation methods; Bandwidth; Measurement; Minimization; Optimization; Virtual machining
【Paper Link】 【Pages】:656-664
【Authors】: Zhaoquan Gu ; Qiang-Sheng Hua ; Yuexuan Wang ; Francis C. M. Lau
【Abstract】: Gathering information in a sensing field of interest is a fundamental task in wireless sensor networks. Current methods either use multihop forwarding to the sink via stationary nodes or use mobile sinks to traverse the sensing field. The multihop forwarding method intrinsically suffers from the energy hole problem, and the mobile-sink method incurs a large gathering latency due to the sinks' low velocity. In addition, all mobile-sink methods assume unlimited power supply and memory, which is unrealistic in practice. In this paper, we propose a new approach to information gathering through a Mobile Aerial Sensor Network (MASN). We adopt the Hive-Drone model [5], where a centralized station (Hive), responsible for serving and recharging Micro-Aerial Vehicle (MAV) sensor nodes (Drones), is strategically placed in the sensing field. We then face the challenges of controlling the mobility of each MAV and devising interference-free scheduling for wireless transmissions that can substantially reduce the latency. We present a family of algorithms with constant memory that reduce both gathering latency, the duration from dispatching the MAVs to the moment when all the sensed information is gathered at the central station, and information latency, the duration from when some information is sensed to when it is received by the station. We also consider how to extend the single Hive to multiple Hives for monitoring an arbitrarily large area. Extensive simulation results corroborate our theoretical analysis.
【Keywords】: mobile radio; scheduling; space vehicles; wireless sensor networks; MASN; MAV dispatching; MAV sensor nodes; centralized station; constant memory; energy hole problem; hive-drone model; information gathering latency reduction; interference-free scheduling; microaerial vehicle sensor nodes; mobile aerial sensor network; mobile sink method; mobility velocity; multihop forwarding; multihop forwarding method; stationary nodes; unlimited power supply; wireless sensor networks; wireless transmissions; Interference; Mobile communication; Mobile computing; Monitoring; Sensors; Wireless communication; Wireless sensor networks; Gathering Latency; Information Gathering; Information Latency; Micro-Aerial Vehicle; Sensor Networks
【Paper Link】 【Pages】:665-673
【Authors】: Seongwon Han ; Youngtae Noh ; Uichin Lee ; Mario Gerla
【Abstract】: Mobile underwater networking is a developing technology for monitoring and exploring the Earth's oceans. For effective underwater exploration, multimedia communications such as sonar images and low resolution videos are becoming increasingly important. Unlike terrestrial RF communication, underwater networks rely on acoustic waves as a means of communication. Unfortunately, acoustic waves incur long propagation delays that typically lead to low throughput, especially in protocols that require receiver feedback, such as multimedia stream delivery. On the positive side, the long propagation delay permits multiple packets to be “pipelined” concurrently in the underwater channel, improving the overall throughput and enabling applications that require sustained bandwidth. To enable session multiplexing and pipelining, we propose the Multi-session FAMA (M-FAMA) algorithm. M-FAMA leverages passively-acquired local information (i.e., neighboring nodes' propagation delay maps and expected transmission schedules) to launch multiple simultaneous sessions. M-FAMA's greedy behavior is controlled by a Bandwidth Balancing algorithm that guarantees max-min fairness across multiple contending sources. Extensive simulation results show that M-FAMA significantly outperforms existing MAC protocols in representative streaming applications.
【Keywords】: access protocols; acoustic receivers; acoustic streaming; acoustic waves; underwater acoustic communication; M-FAMA algorithm; RF communication; acoustic wave; bandwidth balancing algorithm; low resolution video; mobile underwater networking; multimedia communication; multiple contending source; multiple packet; multisession FAMA algorithm; multisession MAC protocol; pipelining; propagation delay; receiver feedback; reliable underwater acoustic stream; session multiplexing; sonar image; terrestrial RF communication; underwater channel; underwater exploration; Bandwidth; Delays; Propagation delay; Protocols; Receivers; Schedules; Throughput; AUV; CSMA; Concurrent Transmission; Medium Access Control; SEA Swarm; Underwater
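The max-min fairness guarantee mentioned in the abstract is a standard notion. As a generic illustration only (a progressive-filling sketch, not M-FAMA's actual Bandwidth Balancing algorithm), a max-min fair split of a shared channel capacity among contending sources could look like:

```python
def max_min_allocation(demands, capacity):
    """Progressive filling: in each round, split the remaining capacity
    equally among unsatisfied sources; sources whose demand fits their
    equal share are satisfied and leave the round."""
    alloc = {s: 0.0 for s in demands}
    remaining = dict(demands)          # demand still unmet per source
    cap = capacity
    while remaining and cap > 1e-12:
        share = cap / len(remaining)
        satisfied = [s for s, d in remaining.items() if d <= share]
        if not satisfied:
            # every remaining source wants more than the equal share:
            # give each exactly the share and stop
            for s in remaining:
                alloc[s] += share
            cap = 0.0
            break
        for s in satisfied:
            alloc[s] += remaining[s]
            cap -= remaining[s]
            del remaining[s]
    return alloc
```

For example, demands {1, 2, 8} on a capacity of 6 yield allocations {1, 2, 3}: the two small demands are fully served and the large one receives the leftover.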
【Paper Link】 【Pages】:674-682
【Authors】: Xin Dong ; Mehmet Can Vuran
【Abstract】: Wireless underground sensor networks (WUSNs) consist of sensors that are buried in and communicate through soil. The channel quality of WUSNs is strongly impacted by environmental parameters such as soil moisture. Thus, the communication range of the nodes and the network connectivity vary over time. To address the challenges in underground communication, above ground nodes are deployed to maintain connectivity. In this paper, the connectivity of WUSNs under varying environmental conditions is captured by modeling the cluster size distribution under sub-critical conditions and through a novel aboveground communication coverage model for underground clusters. The resulting connectivity model is utilized to analyze two communication schemes: transmit power control and environment-aware routing, which maintain connectivity while reducing energy consumption. It is shown that transmit power control can maintain network connectivity under all soil moisture values at the cost of energy consumption. Utilizing relays based on soil moisture levels can decrease this energy consumption. A composite of both approaches is also considered to analyze the tradeoff between connectivity and energy consumption.
【Keywords】: telecommunication network routing; underground communication; wireless sensor networks; WUSN; channel quality; cluster size distribution; energy consumption; environment aware connectivity; environment-aware routing; soil moisture; transmit power control; wireless underground sensor network; Analytical models; Approximation methods; Energy consumption; Lattices; Soil moisture; Topology
【Paper Link】 【Pages】:683-691
【Authors】: Yibo Zhu ; Zaihan Jiang ; Zheng Peng ; Michael Zuba ; Jun-Hong Cui ; Huifang Chen
【Abstract】: Recently, various Medium Access Control (MAC) protocols have been proposed for underwater acoustic networks. These protocols have significantly improved the performance of the MAC layer in theory. However, two critical characteristics found in commercial modem-based real systems, low transmission rates and long preambles, drastically degrade the performance of existing MAC protocols in the real world. A new, practical MAC design is needed. Toward a proper approach, this paper analyzes the impact of the two newly found modem characteristics on random access-based MAC and handshake-based MAC, which are the two major types of MAC protocols for underwater acoustic networks. We further develop nodal throughput and collision probability models for representative solutions of these two MAC protocol types. Based on the analysis, we believe time sharing-based MAC is very promising. Along this line, we propose a time sharing-based MAC and analyze its nodal throughput. Both analytical and simulation results show that the time sharing-based solution can achieve significantly better performance.
【Keywords】: access protocols; probability; underwater acoustic communication; MAC design; MAC protocol; collision probability model; commercial modem-based real system; handshake-based MAC; medium access control protocol; representative solution; time sharing-based MAC; transmission rates; underwater acoustic networks; Delays; Media Access Protocol; Modems; Propagation delay; Throughput; Underwater acoustics
【Paper Link】 【Pages】:692-700
【Authors】: Greg Kuperman ; Eytan Modiano
【Abstract】: We consider the problem of providing network protection that guarantees the maximum amount of time that flow can be interrupted after a failure. This is in contrast to schemes that offer no recovery time guarantees, such as IP rerouting, or the prevalent local recovery scheme of Fast ReRoute, which often over-provisions resources to meet recovery time constraints. To meet these recovery time guarantees, we provide a novel and flexible solution by partitioning the network into failure-independent “recovery domains”, where within each domain, the maximum amount of time to recover from a failure is guaranteed. We show the recovery domain problem to be NP-Hard, and develop an optimal solution in the form of an MILP for both the case when backup capacity can and cannot be shared. This provides protection with guaranteed recovery times using up to 45% fewer protection resources than local recovery. We demonstrate that the network-wide optimal recovery domain solution can be decomposed into a set of easier-to-solve subproblems. This allows for the development of flexible and efficient solutions, including an optimal algorithm using Lagrangian relaxation, which simulations show to converge rapidly to an optimal solution. Additionally, an algorithm is developed for when backup sharing is allowed. For dynamic arrivals, this algorithm performs better than the solution that tries to greedily optimize for each incoming demand.
【Keywords】: IP networks; computational complexity; failure analysis; integer programming; linear programming; telecommunication network routing; IP rerouting; Lagrangian relaxation; MILP; NP-Hard; backup sharing; dynamic arrivals; failure-independent recovery domains; fast reroute prevalent local recovery scheme; guaranteed recovery times; network protection; network-wide optimal recovery domain solution; over-provisions resources; protection resources; recovery domains; recovery time constraints; Delays; Heuristic algorithms; Multiprotocol label switching; Resource management; Routing; Switches; Time factors
【Paper Link】 【Pages】:701-709
【Authors】: Zhemin Zhang ; Zhiyang Guo ; Yuanyuan Yang
【Abstract】: Energy efficiency of optical packet switches (OPS) is the key to ensuring the profitability of backbone network providers. However, due to the lack of optical random access buffers, most optical packet switches rely on electronic buffers to resolve output contention, which requires power-hungry O/E/O conversion for all packets. The recently proposed optical cut-through (OpCut) switch holds great potential for achieving high energy efficiency, as it allows optical packets to cut through the switch in the optical domain whenever possible. The energy efficiency of the OpCut switch hinges on the cut-through ratio, which is the percentage of packets that cut through the switch optically. On the other hand, it is generally desirable to maintain packet order in a switch. To achieve in-order transmission, an optical packet needs to be converted to electronic form and buffered when an earlier packet from the same flow is still in the buffer, which may lead to a low cut-through ratio. Meanwhile, the Internet is designed to accommodate a certain degree of packet reordering, which is very common in practice due to path multiplicity. In this paper, we introduce a novel reorder metric, the reorder degree, to accurately describe the extent of packet reordering, and propose a flow management scheme to bound the reorder degree of transmitted flows. We then design an efficient packet scheduling algorithm that significantly increases the cut-through ratio of the OpCut switch while allowing a small degree of out-of-order transmission. Our extensive simulation results show that the cut-through ratio can be drastically increased with only a very small reorder degree.
【Keywords】: optical switches; packet switching; OpCut switch; backbone network provider; bounded reorder packet scheduling; cut-through ratio; efficient packet scheduling algorithm; flow management; optical cut-through switch; optical packet switch; optical random access buffer; out-of-order transmission; packet reordering; reorder degree; reorder metric; Indexes; Optical buffering; Optical packet switching; Optical switches; Ports (Computers); Scheduling algorithms; Cut-through ratio; Energy efficiency; O/E/O conversion; OpCut switch; Packet scheduling; Power consumption; Reorder bound; Reorder degree
【Paper Link】 【Pages】:710-718
【Authors】: Alexandre Fréchette ; F. Bruce Shepherd ; Marina K. Thottan ; Peter J. Winzer
【Abstract】: We study a class of robust network design problems motivated by the need to scale core networks to meet increasingly dynamic capacity demands. Past work has focused on designing the network to support all hose matrices (all matrices not exceeding marginal bounds at the nodes). This model may be too conservative if additional information on traffic patterns is available. Another extreme is the fixed demand model, where one designs the network to support peak point-to-point demands. We introduce a capped hose model to explore a broader range of traffic matrices which includes the above two as special cases. It is known that optimal designs for the hose model are always determined by single-hub routing, and for the fixed-demand model are based on shortest-path routing. We shed light on the wider space of capped hose matrices in order to see which traffic models are more shortest path-like as opposed to hub-like. To address the space in between, we use hierarchical multi-hub routing templates, a generalization of hub and tree routing. In particular, we show that by adding peak capacities into the hose model, the single-hub tree-routing template is no longer cost-effective. This initiates the study of a class of robust network design (RND) problems restricted to these templates. Our empirical analysis is based on a heuristic for this new hierarchical RND problem. We also propose that it is possible to define a routing indicator that accounts for the strengths of the marginals and peak demands and use this information to choose the appropriate routing template. We benchmark our approach against other well-known routing templates, using representative carrier networks and a variety of different capped hose traffic demands, parameterized by the relative importance of their marginals as opposed to their point-to-point peak demands. This study also reveals conditions under which multi-hub routing gives improvements over single-hub and shortest-path routings.
【Keywords】: telecommunication network routing; telecommunication traffic; capped hose matrices; capped hose traffic demands; dynamic capacity demands; empirical analysis; fixed demand model; fixed-demand model; hierarchical RND problem; multihub routing templates; peak point-to-point demands; robust network design; robust network design problems; routing indicator; routing template; shortest path routing; single-hub routing; single-hub tree-routing template; traffic matrices; traffic patterns; uncertain demand; Heuristic algorithms; Hoses; Measurement; Robustness; Routing; Vegetation; Virtual private networks
【Paper Link】 【Pages】:719-727
【Authors】: Tarun Bansal ; Bo Chen ; Prasun Sinha
【Abstract】: Cognitive radio devices opportunistically operate on whitespace channels, provided those channels are not in use by the primary users. This opportunistic reusing of channels requires secondary users to perform fast and efficient sensing to determine the unused channels. Although individual secondary clients may be unwilling to frequently sense all the channels, their density could be exploited for tasking the individual devices to collaboratively extract useful information on spectrum usage. It is critical to determine how the sensing tasks should be assigned to different secondary users. This is particularly challenging in practical networks due to the variability in the sensing accuracy of different users that may arise because of multipath effects on the signal and varying distances from the primary transmitters. Further, the presence of multiple primary users on the same channel makes it challenging to select the best users for sensing. Finally, to reduce the sensing overhead, it is beneficial to limit the number of channel sensing tasks that can be performed within a given time period. We propose a novel metric that captures the sensing accuracy of a given sensing assignment. Using our metric, we design an algorithm DISCERN for computing the sensing assignment that maximizes the sensing accuracy. Our algorithm is the first to take into account the limitations in practical networks. Our work is motivated by experimental measurements. Trace-driven simulations show that DISCERN increases the sensing accuracy by at least 30%. Theoretical analysis shows that the sensing assignment computed by DISCERN performs within 63% of the exponential-time optimal solution.
【Keywords】: cognitive radio; cooperative communication; multipath channels; radio transmitters; DISCERN; channel sensing tasks; cognitive radio devices; cooperative whitespace scanning; exponential-time optimal solution; multipath effects; practical environments; primary transmitters; secondary clients; sensing overhead; unused channels; whitespace channels; Accuracy; Base stations; Channel estimation; Correlation; Measurement; Scattering; Sensors
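The 63% figure quoted above is the familiar 1 - 1/e guarantee for greedy maximization of a monotone submodular objective under a budget. As a generic illustration in that spirit (the coverage-style accuracy objective and all names below are hypothetical, not DISCERN's actual metric or algorithm):

```python
def greedy_assignment(users, channels, accuracy, budget):
    """Repeatedly add the (user, channel) sensing task with the largest
    marginal gain in a coverage-style objective: for each channel, the
    probability that at least one assigned user senses it correctly.
    accuracy[(u, c)] is an assumed input: prob. user u senses c correctly."""
    assigned = set()

    def objective(S):
        total = 0.0
        for c in channels:
            miss = 1.0                  # prob. nobody assigned to c succeeds
            for (su, sc) in S:
                if sc == c:
                    miss *= 1.0 - accuracy[(su, c)]
            total += 1.0 - miss
        return total

    for _ in range(budget):
        base = objective(assigned)
        best, gain = None, 0.0
        for u in users:
            for c in channels:
                if (u, c) in assigned:
                    continue
                g = objective(assigned | {(u, c)}) - base
                if g > gain:
                    best, gain = (u, c), g
        if best is None:                # no task with positive marginal gain
            break
        assigned.add(best)
    return assigned
```

With two users, two channels, and a budget of two tasks, the greedy rule first grabs the single most accurate pair and then the pair adding the most new coverage.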
【Paper Link】 【Pages】:728-736
【Authors】: Lei Sun ; Wenye Wang
【Abstract】: It has been demonstrated that in wireless networks, Blackholes, which are typically generated by isolated node failures and augmented by failure correlations, can easily result in a devastating impact on network performance. Therefore, many solutions, such as routing protocols and restoration algorithms, are proposed to deal with Blackholes by identifying alternative paths to bypass these holes such that the effect of Blackholes can be mitigated. These advancements are based on an underlying premise that there exists at least one alternative path in the network. However, such a hypothesis remains an open question. In other words, we do not know whether the network is resilient to Blackholes or whether an alternative path exists. The answer to this question can complement our understanding of designing routing protocols, as well as topology evolution in the presence of random failures. In order to address this issue, we focus on the topology of Cognitive Radio Networks (CRNs) because of their phenomenal benefits in improving spectrum efficiency through opportunistic communications. Particularly, we first define two metrics, namely the failure occurrence probability p and the failure connection function g(·), to characterize node failures and their spreading properties, respectively. Then we prove that each Blackhole is exponentially bounded based on percolation theory. By mapping failure spreading using a branching process, we further derive an upper bound on the expected size of Blackholes. With the observations from our analysis, we are able to find a sufficient condition for a resilient CRN in the presence of Blackholes through analysis and simulations.
【Keywords】: cognitive radio; failure analysis; routing protocols; telecommunication network topology; Blackholes; CRN topology; branching process; failure connection function; failure correlations; failure occurrence probability; generic failures; isolated node failures; large-scale cognitive radio networks; network performance; opportunistic communications; random failures; restoration algorithms; routing protocols; spreading properties; wireless networks; Analytical models; Explosions; Interference; Lattices; Network topology; Topology; Wireless networks
【Paper Link】 【Pages】:737-745
【Authors】: Youngjune Gwon ; H. T. Kung
【Abstract】: We propose a spectrum analyzer that leverages many networked commodity sensor nodes, each of which samples its portion in a wideband spectrum. The sensors operate in parallel and transmit their measurements over a wireless network without performing any significant computations such as FFT. The measurements are forwarded to the backend of the system where spectrum analysis takes place. In particular, we propose a solution that compresses the raw measurements in a simple random linear projection and combines the compressed measurements from multiple sensors in-network. As a result, we achieve a substantial reduction in the network bandwidth requirement to operate the proposed system. We discover that the overall communication cost can be independent of the number of sensors and is affected only by the sparsity of the discretized spectrum under analysis. This principle forms the basis for the claim that our network-based spectrum analyzer can scale up the number of sensor nodes to process a very wide spectrum block potentially having a GHz bandwidth. We devise a novel recovery algorithm that systematically undoes the compressive encoding and in-network combining done to the raw measurements, incorporating the least squares and l1-minimization decoding used in compressive sensing, and demonstrate that the algorithm can effectively restore an accurate estimate of the original data suitable for fine-grained spectrum analysis. We present mathematical analysis and empirical evaluation of the system with software-defined radios.
【Keywords】: compressed sensing; decoding; least squares approximations; minimisation; radio networks; radio spectrum management; software radio; FFT; GHz bandwidth; compressed measurements; compressive encoding; compressive sensing; constant communication cost; discretized spectrum analysis; fine-grained spectrum analysis; l1-minimization decoding; least square analysis; mathematical analysis; networked commodity sensor nodes; random linear projection; raw measurements; recovery algorithm; scaling network-based spectrum analyzer; software-defined radio; wideband spectrum; wireless network; Atmospheric measurements; Bandwidth; Base stations; Compressed sensing; Decoding; Particle measurements; Time-domain analysis
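The compress-then-decode pipeline can be illustrated with a toy example: a random linear projection compresses a sparse spectrum vector, and a least-squares decode recovers it once the sparse support is known. This is only a sketch of the general technique with illustrative dimensions; the paper's full decoder also identifies the support itself via l1-minimization.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 64, 16                        # spectrum bins vs. compressed measurements
support = [5, 17, 42]                # occupied bins: the spectrum is sparse
x = np.zeros(n)
x[support] = [1.0, -2.0, 0.5]

A = rng.standard_normal((m, n))      # random linear projection (encoding matrix)
y = A @ x                            # compressed measurement a sensor transmits
# Measurements from several sensors can be combined in-network, since
# A1 @ x1 + A2 @ x2 is itself one linear projection of the stacked signal.

# Least-squares decode, assuming the support has been identified:
x_hat = np.zeros(n)
x_hat[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
```

Because only m = 16 numbers are sent instead of n = 64 samples, the bandwidth saving grows with the ratio of spectrum width to occupancy, matching the abstract's observation that cost depends on sparsity rather than on the number of sensors.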
【Paper Link】 【Pages】:746-754
【Authors】: I-Hsun Chuang ; Hsiao-Yun Wu ; Kuan-Rong Lee ; Yau-Hwang Kuo
【Abstract】: Cognitive Radio (CR) is an emerging technology developed to improve the utilization of licensed spectra, and a promising solution to alleviate the spectrum shortage problem. In cognitive radio networks (CRNs), if users want to establish communication links with neighbors, they need to rendezvous on an available channel. However, it is infeasible to employ a common control channel (CCC) in a licensed spectrum, because the CCC would be congested by primary users and would degrade the performance of CRNs through the single-point-of-failure problem. Therefore, channel hopping (CH) based methods without CCC support are usually adopted to establish rendezvous in CRNs. This kind of rendezvous is called blind rendezvous and is generally evaluated by the following criteria: 1) asynchronous rendezvous support, 2) guaranteed rendezvous, 3) asymmetric model support, 4) multi-user/multi-hop scenario support, and 5) short time-to-rendezvous (TTR). Most existing blind rendezvous methods fail to fully satisfy these criteria or have considerable TTR, which increases significantly with the number of available channels. In this paper, the alternate hop-and-wait CH method is proposed to solve these problems. Furthermore, each user has a unique alternating sequence of HOP and WAIT modes. Theoretical analysis confirms that the proposed method satisfies all evaluation criteria mentioned above and provides shorter TTR. According to simulation results, the proposed method performs up to two times better than the Jump-Stay (JS) algorithm, the most efficient blind rendezvous method to our knowledge, under the asymmetric model, which is critical for CRNs.
【Keywords】: cognitive radio; failure analysis; radio links; wireless channels; CCC; CRN; JS algorithm; TTR; asymmetric model support; asynchronous rendezvous support; blind rendezvous methods; channel hopping based methods; cognitive radio networks; common control channel; communication links; failure problem; guaranteed rendezvous; hop-and-wait CH method; hop-and-wait channel rendezvous method; jump-stay algorithm; licensed spectra; multiuser-multihop scenario support; short time-to-rendezvous; spectrum shortage problem; Algorithm design and analysis; Analytical models; Cognitive radio; Educational institutions; Simulation; Synchronization; Transforms; blind rendezvous; channel hopping; cognitive radio
【Paper Link】 【Pages】:755-763
【Authors】: Wei Dong ; Swati Rallapalli ; Rittwik Jana ; Lili Qiu ; K. K. Ramakrishnan ; Leo Razoumov ; Yin Zhang ; Tae Won Cho
【Abstract】: The explosive growth of cellular traffic and its highly dynamic nature often make it increasingly expensive for a cellular service provider to provision enough cellular resources to support the peak traffic demands. In this paper, we propose iDEAL, a novel auction-based incentive framework that allows a cellular service provider to leverage resources from third-party resource owners on demand by buying capacity whenever needed through reverse auctions. iDEAL has several distinctive features: (i) iDEAL explicitly accounts for the diverse spatial coverage of different resources and can effectively foster competition among third-party resource owners in different regions, resulting in significant savings to the cellular service provider. (ii) iDEAL provides revenue incentives for third-party resource owners to participate in the reverse auction and be truthful in the bidding process. (iii) iDEAL is provably efficient. (iv) iDEAL effectively guards against collusion. (v) iDEAL effectively copes with the dynamic nature of traffic demands. In addition, iDEAL has useful extensions that address important practical issues. Extensive evaluation based on real traces from a large US cellular service provider clearly demonstrates the effectiveness of our approach. We further demonstrate the feasibility of iDEAL using a prototype implementation.
【Keywords】: cellular radio; electronic commerce; mobile computing; telecommunication traffic; auction-based incentive framework; buying capacity; cellular resources; cellular service provider; cellular traffic growth; iDEAL; incentivized dynamic cellular offloading; peak traffic demands; third-party resource; traffic demands; Decision support systems
【Paper Link】 【Pages】:764-772
【Authors】: Han Cai ; Irem Koprulu ; Ness B. Shroff
【Abstract】: In this paper, we focus on mobile wireless networks comprising a powerful communication center and a multitude of mobile users. We investigate the propagation of deadline-based content in the wireless network characterized by heterogeneous (time-varying and user-dependent) wireless channel conditions, heterogeneous user mobility, and where communication could occur in a hybrid format (e.g., directly from the central controller or by exchange with other mobiles in a peer-to-peer manner). We show that exploiting double opportunities, i.e., both time-varying channel conditions and mobility, can result in substantial performance gains. We develop a class of double opportunistic multicast schedulers and prove their optimality in terms of both utility and fairness under heterogeneous channel conditions and user mobility. Extensive simulation results are provided to demonstrate that these algorithms can not only substantially boost the throughput of all users (e.g., by 50% to 150%), but also accommodate different fairness considerations among individual users and groups of users.
【Keywords】: mobility management (mobile radio); multicast communication; time-varying channels; communication center; deadline based content propagation; double opportunistic multicast schedulers; heterogeneous user mobility; heterogeneous wireless channel conditions; hybrid format; mobile users; mobile wireless networks; throughput; Aggregates; Mobile communication; Mobile computing; Throughput; Tin; Wireless networks
【Paper Link】 【Pages】:773-781
【Authors】: Bartlomiej Blaszczyszyn ; Mohamed Kadhem Karray ; Holger Paul Keeler
【Abstract】: An almost ubiquitous assumption made in the stochastic-analytic approach to the study of the quality of user service in cellular networks is the Poisson distribution of base stations, often complemented by some specific assumption regarding the distribution of the fading (e.g. Rayleigh). The former (Poisson) assumption is usually (vaguely) justified in the context of cellular networks by various irregularities in the real placement of base stations, which ideally should form a lattice (e.g. hexagonal) pattern. In the first part of this paper we provide a different and rigorous argument justifying the Poisson assumption under sufficiently strong lognormal shadowing observed in the network, in the evaluation of a natural class of typical-user service characteristics (including path-loss, interference, signal-to-interference ratio, spectral efficiency). Namely, we present a Poisson-convergence result for a broad range of stationary (including lattice) networks subject to log-normal shadowing of increasing variance. We show also for the Poisson model that the distribution of all these typical-user service characteristics does not depend on the particular form of the additional fading distribution. Our approach involves a mapping of the 2D network model to a 1D image of it “perceived” by the typical user. For this image we prove our Poisson convergence result and the invariance of the Poisson limit with respect to the distribution of the additional shadowing or fading. Moreover, in the second part of the paper we present some new results for the Poisson model allowing one to calculate the distribution function of the SINR in its whole domain. We use them to study and optimize the mean energy efficiency in cellular networks.
【Keywords】: cellular radio; stochastic processes; 1D image; 2D network model mapping; Poisson convergence; Poisson distribution; Poisson model; Poisson-convergence; SINR; base stations; fading distribution; lognormal shadowing; model lattice cellular networks; stationary networks; stochastic-analytic approach; typical-user service-characteristics; ubiquitous assumption; Base stations; Convergence; Fading; Interference; Propagation losses; Shadow mapping; Signal to noise ratio; Hexagonal; Poisson; Wireless cellular networks; convergence; fading; optimization; shadowing; spectral/energy efficiency
【Paper Link】 【Pages】:782-790
【Authors】: Wei Wang ; Jin Zhang ; Qian Zhang
【Abstract】: The vision of Self-Organizing Networks (SON) has been drawing considerable attention as a major axis for the development of future networks. As an essential functionality in SON, cell outage detection is developed to autonomously detect macrocells or femtocells that are inoperative and unable to provide service. Previous cell outage detection approaches have mainly focused on macrocells, while the outage issue in emerging femtocell networks has received less attention. However, due to the two-tier macro-femto network architecture and the small coverage nature of femtocells, it is challenging to enable outage detection functionality in femtocell networks. Based on the observation that spatial correlations among users can be extracted to cope with these challenges, this paper proposes a Cooperative femtocell Outage Detection (COD) architecture which consists of a trigger stage and a detection stage. In the trigger stage, we design a trigger mechanism that leverages correlation information extracted through collaborative filtering to efficiently trigger the detection procedure without inter-cell communications. In the detection stage, to improve the detection accuracy, we introduce a sequential cooperative detection rule to process the spatially and temporally correlated user statistics. In particular, the detection problem is formulated as a sequential hypothesis testing problem, and the analytical results on the detection performance are derived. Numerical studies for a variety of femtocell deployments and configurations demonstrate that COD outperforms the existing scheme in both communication overhead and detection accuracy.
【Keywords】: collaborative filtering; cooperative communication; correlation methods; femtocellular radio; COD; SON; collaborative filtering; cooperative cell outage detection; cooperative femtocell outage detection; correlation information; intercell communications; self-organizing femtocell networks; sequential hypothesis testing problem; spatial correlations; two-tier macro-femto network architecture; Computer architecture; Correlation; Femtocell networks; Handover; Macrocell networks; Microprocessors
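Sequential hypothesis testing of the kind used in the detection stage can be sketched with a classical sequential probability ratio test (SPRT) over per-user anomaly indicators. This is a generic SPRT with Wald's threshold approximations, not COD's correlation-aware rule; all parameters are illustrative.

```python
from math import log

def sprt(observations, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT. H0: cell healthy, anomaly prob. p0 per observation;
    H1: cell in outage, anomaly prob. p1 > p0. Observations are 0/1
    anomaly indicators reported over time. alpha/beta bound the
    false-alarm and miss probabilities."""
    upper = log((1 - beta) / alpha)    # cross above: accept H1 (outage)
    lower = log(beta / (1 - alpha))    # cross below: accept H0 (healthy)
    llr = 0.0                          # running log-likelihood ratio
    for i, x in enumerate(observations, 1):
        llr += log(p1 / p0) if x else log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "outage", i
        if llr <= lower:
            return "healthy", i
    return "undecided", len(observations)
```

The appeal of the sequential form is that it stops as soon as the evidence is decisive: with strongly separated hypotheses (e.g. p0 = 0.1, p1 = 0.9), two consistent observations already settle the test.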
【Paper Link】 【Pages】:791-799
【Authors】: Boxuan Gu ; Xinfeng Li ; Gang Li ; Adam C. Champion ; Zhezhe Chen ; Feng Qin ; Dong Xuan
【Abstract】: With smartphones' meteoric growth in recent years, leaking sensitive information from them has become an increasingly critical issue. Such sensitive information can originate from smartphones themselves (e.g., location information) or from many Internet sources (e.g., bank accounts, emails). While prior work has demonstrated information flow tracking's (IFT's) effectiveness at detecting information leakage from smartphones, it can only handle a limited number of sensitive information sources. This paper presents a novel IFT tagging strategy using differentiated and dynamic tagging. We partition information sources into differentiated classes and store them in fixed-length tags. We adjust tag structure based on time-varying received information sources. Our tagging strategy enables us to track at runtime numerous information sources in multiple classes and rapidly detect information leakage from any of these sources. We design and implement D2Taint, an IFT system using our tagging strategy on real-world smartphones. We experimentally evaluate D2Taint's effectiveness with 84 real-world applications downloaded from Google Play. D2Taint reports that over 80% of them leak data to third-party destinations; 14% leak highly sensitive data. Our experimental evaluation using a standard benchmark tool illustrates D2Taint's effectiveness at handling many information sources on smartphones with moderate runtime and space overhead.
【Keywords】: mobile computing; security of data; smart phones; tracking; D2Taint; Google Play; IFT tagging strategy; Internet source; differentiated information flow tracking; differentiated tagging; dynamic information flow tracking; dynamic tagging; information leakage detection; information source partitioning; sensitive information leaking; smartphone; tag structure; Androids; Runtime; Security; Sensitivity; Smart phones; Switches; Tagging
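The differentiated-class tagging idea in the abstract above can be illustrated with a toy sketch (this is not the actual D2Taint implementation; the class names and tag width are hypothetical): each source class maps to a bit in a fixed-length tag, tags propagate through data flows by bitwise OR, and a nonzero tag at a network sink signals a potential leak.

```python
# Illustrative sketch of class-based taint tags (hypothetical classes and
# tag width, not D2Taint's real encoding): each information-source class
# owns one bit of a fixed-length tag.

LOCATION, CONTACTS, NETWORK = 1 << 0, 1 << 1, 1 << 2  # hypothetical classes
TAG_WIDTH = 32  # fixed-length tag

def combine(tag_a, tag_b):
    """Data derived from two tagged values carries the union of their tags."""
    return (tag_a | tag_b) & ((1 << TAG_WIDTH) - 1)

def leaked_classes(tag):
    """At a network sink, report which source classes reach the sink."""
    names = {LOCATION: "location", CONTACTS: "contacts", NETWORK: "network"}
    return [name for bit, name in names.items() if tag & bit]

gps_reading = LOCATION
address_book = CONTACTS
payload_tag = combine(gps_reading, address_book)
print(leaked_classes(payload_tag))  # ['location', 'contacts']
```

A real system would also re-map classes to bits over time as new sources appear, which is the "dynamic" part of the strategy that this sketch omits.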
【Paper Link】 【Pages】:800-808
【Authors】: Jingchao Sun ; Rui Zhang ; Yanchao Zhang
【Abstract】: The explosive growth of mobile-connected and location-aware devices makes it possible to establish trust relationships in a new way, which we term spatiotemporal matching. In particular, a mobile user can easily maintain a spatiotemporal profile recording his continuous whereabouts in time, and the degree to which his spatiotemporal profile matches that of another user can be translated into the level of trust the two can place in each other. Since spatiotemporal profiles contain very sensitive personal information, privacy-preserving spatiotemporal matching is needed to ensure that as little information as possible about the spatiotemporal profile of either matching participant is disclosed beyond the matching result. We propose a cryptographic solution based on Private Set Intersection Cardinality and a more efficient non-cryptographic solution involving a novel use of the Bloom filter. We thoroughly analyze both solutions and compare their efficacy and efficiency via detailed simulation studies.
【Keywords】: data privacy; data structures; mobile computing; pattern matching; private key cryptography; Bloom filter; cryptographic solution; location-aware devices; mobile-connected devices; noncryptographic solution; privacy-preserving spatiotemporal matching; private set intersection cardinality; spatiotemporal profile matching; Accuracy; Estimation; Indexes; Mobile handsets; Privacy; Protocols; Spatiotemporal phenomena
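The Bloom-filter route mentioned in the abstract above can be sketched generically (parameters and profile encoding are illustrative, not the authors' protocol): each user inserts (cell, epoch) pairs from a spatiotemporal profile into a Bloom filter, the users exchange only the filters, and each side estimates the overlap by inclusion-exclusion over standard filter cardinality estimates.

```python
# Illustrative Bloom-filter intersection-cardinality estimate (toy parameters,
# not the paper's protocol): exchange filters, not profiles.
import hashlib
import math

M, K = 4096, 4  # filter size in bits, number of hash functions

def bloom(items):
    bits = [0] * M
    for item in items:
        for i in range(K):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            bits[int.from_bytes(h[:8], "big") % M] = 1
    return bits

def est_cardinality(bits):
    """Invert the expected fill rate of a Bloom filter with K hashes."""
    t = sum(bits)
    return -(M / K) * math.log(1 - t / M)

def est_intersection(bits_a, bits_b):
    """Inclusion-exclusion: |A and B| ~ n(A) + n(B) - n(A or B)."""
    union = [a | b for a, b in zip(bits_a, bits_b)]
    return est_cardinality(bits_a) + est_cardinality(bits_b) - est_cardinality(union)

alice = {(cell, epoch) for cell in range(10) for epoch in range(10)}      # 100 points
bob = {(cell, epoch) for cell in range(5, 15) for epoch in range(5, 15)}  # 100 points
# true overlap: cells 5..9 x epochs 5..9 = 25 points
print(est_intersection(bloom(alice), bloom(bob)))
```

The estimate is noisy but only the filters cross the wire, which is what makes this cheaper than the cryptographic Private Set Intersection Cardinality alternative.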
【Paper Link】 【Pages】:809-817
【Authors】: Shuaifu Dai ; Alok Tongaonkar ; Xiaoyin Wang ; Antonio Nucci ; Dawn Song
【Abstract】: Network operators need to have a clear visibility into the applications running in their network. This is critical for both security and network management. Recent years have seen an exponential growth in the number of smart phone apps, which has complicated this task. Traditional methods of traffic classification are no longer sufficient as the majority of this smart phone app traffic is carried over HTTP/HTTPS. Keeping up with the new applications that come up every day is very challenging and time-consuming. We present a novel technique for automatically generating network profiles for identifying Android apps in the HTTP traffic. A network profile consists of fingerprints, i.e., unique characteristics of network behavior, that can be used to identify an app. To profile an Android app, we run the app automatically in an emulator and collect the network traces. We have developed a novel UI fuzzing technique for running the app such that different execution paths are exercised, which is necessary to build a comprehensive network profile. We have also developed a light-weight technique for extracting fingerprints that is based on identifying invariants in the generated traces. We used our technique to generate network profiles for thousands of apps. Using our network profiles we were able to detect the presence of these apps in real-world network traffic logs from a cellular provider.
【Keywords】: cellular radio; feature extraction; fingerprint identification; fuzzy set theory; mobility management (mobile radio); smart phones; telecommunication security; telecommunication traffic; transport protocols; Android Apps; HTTP-HTTPS traffic; NetworkProfiler; UI fuzzing technique; automatic fingerprinting; cellular provider; emulator; fingerprint extraction; network behavior characteristics; network management; network operators; network security; network traces; network traffic classification method; network traffic logs; smart phone application traffic; Androids; Fingerprint recognition; Humanoid robots; Internet; Mobile communication; Servers; Smart phones
【Paper Link】 【Pages】:818-826
【Authors】: M. H. R. Khouzani ; Soumya Sen ; Ness B. Shroff
【Abstract】: Regulating the ISPs to adopt more security measures has been proposed as an effective method in mitigating the threats of attacks in the Internet. However, economic incentives of the ISPs and the network effects of security measures can lead to an under-investment in their adoption. We study the potential gains in a network's social utility when a regulator implements a monitoring and penalizing mechanism on the outbound threat activities of autonomous systems (ASes). We then show how freeriding can render regulations futile if the subset of ASes under the regulator's authority is smaller than a threshold. Finally, we show how the heterogeneity of the ASes affects the responses of the ISPs and discuss how the regulator can leverage such information to improve the overall effectiveness of different security policies.
【Keywords】: Internet; security of data; ISP; Internet; autonomous system; economic analysis; economic incentive; security investment; security policy; Economics; Internet; Investment; Monitoring; Regulators; Security; Time measurement
【Paper Link】 【Pages】:827-835
【Authors】: Qianyi Huang ; Yixin Tao ; Fan Wu
【Abstract】: The problem of dynamic spectrum redistribution has been extensively studied in recent years. Auction is believed to be one of the most effective tools to solve this problem. A great number of strategy-proof auction mechanisms have been proposed to improve spectrum allocation efficiency by stimulating bidders to truthfully reveal their valuations of spectrum, which are the private information of bidders. However, none of these approaches protects bidders' privacy. In this paper, we present SPRING, which is the first Strategy-proof and PRivacy preservING spectrum auction mechanism. We not only rigorously prove the properties of SPRING, but also extensively evaluate its performance. Our evaluation results show that SPRING achieves good spectrum redistribution efficiency with low overhead.
【Keywords】: data privacy; radio spectrum management; SPRING; bidder privacy protection; bidder private information; dynamic spectrum redistribution; privacy preserving spectrum auction mechanism; strategy-proof auction mechanism; Cost accounting; Encryption; Interference; Privacy; Resource management; Springs
【Paper Link】 【Pages】:836-844
【Authors】: Shang-Pin Sheng ; Mingyan Liu
【Abstract】: In this paper we formulate a contract design problem where a primary license holder wishes to profit from its excess spectrum capacity by selling it to potential secondary users/buyers. It needs to determine how to optimally price the excess spectrum so as to maximize its profit, knowing that this excess capacity is stochastic in nature, does not come with exclusive access, and cannot provide deterministic service guarantees to a buyer. At the same time, buyers are of different types, characterized by different communication needs, tolerance for the channel uncertainty, and so on, all of which are a buyer's private information. The license holder must then try to design different contracts catered to different types of buyers in order to maximize its profit. We address this problem by adopting as a reference a traditional spectrum market where the buyer can purchase exclusive access with fixed/deterministic guarantees. We fully characterize the optimal solution in the cases where there is a single buyer type, and when multiple types of buyers share the same, known channel condition as a result of the primary user activity. In the most general case we construct an algorithm that generates a set of contracts in a computationally efficient manner, and show that this set is optimal when the buyer types satisfy a monotonicity condition.
【Keywords】: channel capacity; contracts; incentive schemes; spread spectrum communication; buyer private information; channel uncertainty; communication needs; contract design approach; deterministic service; exclusive access; fixed-deterministic guarantees; monotonicity condition; primary license holder; primary user activity; profit incentive; secondary spectrum market; spectrum capacity; Bandwidth; Licenses; Numerical models; Stochastic processes; Uncertainty
【Paper Link】 【Pages】:845-853
【Authors】: Randall Berry ; Michael L. Honig ; Thanh Nguyen ; Vijay G. Subramanian ; Hang Zhou ; Rakesh Vohra
【Abstract】: In a limited form, cellular providers have long shared spectrum through roaming agreements. The primary motivation for this has been to extend the coverage of a wireless carrier's network into regions where it has no infrastructure. As devices and infrastructure become more agile, such sharing could be done on a much faster time-scale and have advantages even when two providers both have coverage in a given area, e.g., by enabling one provider to acquire “overflow” capacity from another provider during periods of high demand. This may provide carriers with an attractive means to better meet their rapidly increasing bandwidth demands. On the other hand, the presence of such a sharing agreement could encourage providers to underinvest in their networks, resulting in poorer performance. We adapt the newsvendor model from the operations management literature to model such a situation and to gain insight into these trade-offs. In particular, we analyze the structure of revenue-sharing contracts that incentivize both capacity sharing and increased access for end-users.
【Keywords】: cellular radio; incentive schemes; radio spectrum management; capacity sharing; cellular providers; incentivize spectrum-sharing; newsvendor model; operation management literature; overflow capacity; revenue-sharing contracts; roaming agreements; shared spectrum; sharing agreement; wireless carrier network; Contracts; Distribution functions; Games; Investment; Joints; Nash equilibrium; Roaming
【Paper Link】 【Pages】:854-862
【Authors】: Hong Xu ; Baochun Li
【Abstract】: Many cloud services are running on geographically distributed datacenters for better reliability and performance. We consider the emerging problem of joint request mapping and response routing with distributed datacenters in this paper. We formulate the problem as a general workload management optimization. A utility function is used to capture various performance goals, and the location diversity of electricity and bandwidth costs are realistically modeled. To solve the large-scale optimization, we develop a distributed algorithm based on the alternating direction method of multipliers (ADMM). Following a decomposition-coordination approach, our algorithm allows for a parallel implementation in a datacenter where each server solves a small sub-problem. The solutions are coordinated to find an optimal solution to the global problem. Our algorithm converges to near optimum within tens of iterations, and is insensitive to step sizes. We empirically evaluate our algorithm based on real-world workload traces and latency measurements, and demonstrate its effectiveness compared to conventional methods.
【Keywords】: Web services; cloud computing; computer centres; computer network reliability; distributed algorithms; power aware computing; telecommunication network routing; telecommunication power management; ADMM; alternating direction method of multipliers; bandwidth costs; decomposition-coordination approach; distributed algorithm; electricity location diversity; geo-distributed cloud services; geographically distributed datacenters; joint request mapping and response routing problem; large-scale optimization; latency measurements; parallel datacenter implementation; real-world workload traces; utility function; workload management optimization; Accuracy; Algorithm design and analysis; Bandwidth; Electricity; Optimization; Routing; Servers
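The decomposition-coordination structure described in the abstract above can be sketched with a minimal consensus-ADMM example (a toy quadratic objective, not the paper's workload-management model): each server holds a local cost and solves a closed-form sub-problem, and the coordination step drives all local copies toward one global decision.

```python
# Consensus ADMM sketch (toy objective, not the paper's model): each server i
# holds f_i(x) = (x - a_i)^2; the global minimizer of sum_i f_i is mean(a).

def consensus_admm(a, rho=2.0, iters=200):
    n = len(a)
    x = [0.0] * n  # local copies (one per server)
    u = [0.0] * n  # scaled dual variables
    z = 0.0        # global consensus variable
    for _ in range(iters):
        # x-update: closed-form minimizer of (x - a_i)^2 + (rho/2)(x - z + u_i)^2,
        # solved in parallel by each server
        x = [(2 * a[i] + rho * (z - u[i])) / (2 + rho) for i in range(n)]
        # z-update: averaging coordinates the sub-problem solutions
        z = sum(x[i] + u[i] for i in range(n)) / n
        # dual update penalizes disagreement with the consensus
        u = [u[i] + x[i] - z for i in range(n)]
    return z

demands = [3.0, 7.0, 8.0, 2.0]
print(consensus_admm(demands))  # converges to the mean, 5.0
```

The paper's setting replaces the quadratic with a utility function minus electricity and bandwidth costs, but the parallel x-update followed by a cheap coordination step is the same pattern.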
【Paper Link】 【Pages】:863-871
【Authors】: Elisha J. Rosensweig ; Daniel Sadoc Menasché ; Jim Kurose
【Abstract】: Over the past few years Content-Centric Networking, a networking model in which host-to-content communication protocols are introduced, has been gaining much attention. A central component of such an architecture is a large-scale interconnected caching system. To date, the way these Cache Networks operate and perform is still poorly understood. In this work, we demonstrate that certain cache networks are non-ergodic in that their steady-state characterization depends on the initial state of the system. We then establish several important properties of cache networks, in the form of three independently-sufficient conditions for a cache network to comprise a single ergodic component. Each property targets a different aspect of the system - topology, admission control and cache replacement policies. Perhaps most importantly we demonstrate that cache replacement can be grouped into equivalence classes, such that the ergodicity (or lack-thereof) of one policy implies the same property holds for all policies in the class.
【Keywords】: cache storage; content management; network topology; peer-to-peer computing; admission control; cache network steady state; cache replacement policy; content centric networking; ergodic component; host-to-content communication protocol; interconnected caching system; network topology; sufficient condition; Admission control; Delays; Markov processes; Network topology; Routing; Steady-state; Topology
【Paper Link】 【Pages】:872-880
【Authors】: Zhi Zhou ; Fangming Liu ; Hai Jin ; Bo Li ; Baochun Li ; Hongbo Jiang
【Abstract】: In this paper, we present an analytical framework for characterizing and optimizing the power-performance tradeoff in Software-as-a-Service (SaaS) cloud platforms. Our objectives are two-fold: (1) We maximize the operating profit when serving heterogeneous SaaS applications with unpredictable user requests, and (2) we minimize the power consumption when processing user requests. To achieve these objectives, we take advantage of Lyapunov Optimization techniques to design and analyze an optimal control framework to make online decisions on request admission control, routing, and virtual machine (VMs) scheduling. In particular, our control framework can be flexibly extended to incorporate various design choices and practical requirements of a data-center in the cloud, such as enforcing a certain power budget for improving the performance (dollar) per watt. Our mathematical analyses and simulations have demonstrated both the optimality (in terms of a cost-effective power-performance tradeoff) and system stability (in terms of robustness and adaptivity to time-varying and bursty user requests) achieved by our proposed control framework.
【Keywords】: Lyapunov methods; cloud computing; computer centres; optimal control; optimisation; scheduling; telecommunication congestion control; telecommunication network routing; virtual machines; Lyapunov optimization techniques; SaaS cloud platforms; datacenter; heterogeneous SaaS applications; mathematical analyses; online decision making; operating profit; optimal control framework; power budget; power consumption minimization; power-performance tradeoff; request admission control; routing; software-as-a-service cloud platforms; system stability; unpredictable user requests; virtual machine scheduling; Admission control; Computational modeling; Control systems; Power demand; Routing; Servers; Throughput
【Paper Link】 【Pages】:881-889
【Authors】: Wei Wei ; Harry Tian Gao ; Fengyuan Xu ; Qun Li
【Abstract】: Mencius is a protocol for general state machine replication that tolerates crash failures. It has high performance in wide-area networks. However, the commit latency of Mencius is limited by the slowest replica. This paper presents Fast Mencius, a crash fault-tolerant state machine replication protocol, which enhances Mencius with Active Revoke and Multi-instance Propose. Active Revoke allows the non-slow replicas to proceed without being delayed by the slowest replica, while Multi-instance Propose enables the slow replicas to have their proposals chosen by the replicated state machine. Our evaluation shows that in the presence of slow replicas, Fast Mencius's commit latency is significantly lower than that of Mencius, and it also achieves high throughput.
【Keywords】: fault tolerant computing; finite state machines; protocols; wide area networks; active revoke; crash failures; crash fault-tolerant state machine replication protocol; fast Mencius; low-commit latency; multiinstance propose; slowest replica; Computer crashes; Delays; Detectors; Optimization; Proposals; Protocols; Throughput
【Paper Link】 【Pages】:890-898
【Authors】: Wen Luo ; Yan Qiao ; Shigang Chen
【Abstract】: RFID technology has many applications such as object tracking, automatic inventory control, and supply chain management. They can be used to identify individual objects or count the population of each type of objects in a deployment area, no matter whether the objects are passports, retail products, books or even humans. Most existing work adopts a “flat” RFID system model and performs functions of collecting tag IDs, estimating the number of tags, or detecting the missing tags. However, in practice, tags are often attached to objects of different groups, where each group may represent a different product type in a warehouse, a different book category in a library, etc. An interesting problem, called multigroup threshold-based classification, is to determine whether the number of objects in each group is above or below a prescribed threshold value. Solving this problem is important for inventory tracking applications. If the number of groups is very large, it will be inefficient to measure the groups one at a time. The best existing solution for multigroup threshold-based classification is based on generic group testing, whose design is however geared towards detecting a small number of populous groups. Its performance degrades quickly when the number of groups above the threshold becomes large. In this paper, we propose a new classification protocol based on logical bitmaps. It achieves high efficiency by measuring all groups in a mixed fashion. At the same time, we show that the new method is able to perform threshold-based classification with an accuracy that can be pre-set to any desirable level, allowing a tradeoff between time efficiency and accuracy.
【Keywords】: protocols; radiofrequency identification; RFID multigroup threshold-based classification; RFID system model; RFID technology; automatic inventory control; classification protocol; efficient protocol; generic group testing; inventory tracking applications; logical bitmaps; multigroup threshold-based classification; object tracking; retail products; supply chain management; tag ID; Accuracy; Maximum likelihood estimation; Protocols; Radiofrequency identification; Sociology
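The bitmap-based estimation behind the abstract above can be illustrated in a simplified form (one bitmap per group here; the paper's protocol shares a single physical bitmap across all groups, which this sketch does not model): each tag hashes to one bit, and linear counting inverts the zero-bit fraction to estimate the group size, which is then compared against the threshold.

```python
# Simplified per-group bitmap size estimation for threshold classification
# (illustrative only; not the paper's shared logical-bitmap protocol).
import hashlib
import math

M = 1024  # bits per group bitmap

def fill_bitmap(tag_ids):
    bits = [0] * M
    for tag in tag_ids:
        h = hashlib.sha256(str(tag).encode()).digest()
        bits[int.from_bytes(h[:8], "big") % M] = 1
    return bits

def estimate_size(bits):
    """Linear-counting estimator: n ~ -M * ln(V0), V0 = zero-bit fraction."""
    v0 = bits.count(0) / M
    return -M * math.log(v0)

def classify(groups, threshold):
    """Flag groups whose estimated population is above the threshold."""
    return {name: estimate_size(fill_bitmap(tags)) > threshold
            for name, tags in groups.items()}

groups = {"books": range(400), "dvds": range(1000, 1050)}
print(classify(groups, threshold=100))  # books above, dvds below
```

Measuring groups "in a mixed fashion" in the paper amounts to letting groups share bit positions and correcting for the resulting noise, trading bitmap space for accuracy.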
【Paper Link】 【Pages】:899-907
【Authors】: Min Chen ; Wen Luo ; Zhen Mo ; Shigang Chen ; Yuguang Fang
【Abstract】: Radio frequency identification (RFID) technology has many applications in inventory management, supply chain, product tracking, transportation and logistics. One research issue of practical importance is to search for a particular group of tags in a large-scale RFID system. Time efficiency is a core factor that must be taken into consideration when designing a tag search protocol to ensure scalability. In this paper, we design a new technique called filtering vector, which can significantly reduce transmission overhead during search process, thereby shortening search time. Based on this technique, we propose an iterative tag search protocol. In each round, we filter out some tags and eventually terminate the search process when the search result meets the accuracy requirement. The simulation results demonstrate that our protocol performs much better than the best existing ones.
【Keywords】: protocols; radiofrequency identification; telecommunication network reliability; filtering vector; inventory management; large-scale RFID system; logistics; product tracking; radio frequency identification technology; scalability; search process; supply chain; tag search protocol; transmission overhead reduction; transportation; Arrays; Cats; Protocols; Radiofrequency identification; Search problems; Servers; Vectors
【Paper Link】 【Pages】:908-916
【Authors】: Yuanqing Zheng ; Mo Li
【Abstract】: Estimating the RFID cardinality with accuracy guarantee is an important task in large-scale RFID systems. This paper proposes a fast RFID cardinality estimation scheme. The proposed Zero-One Estimator (ZOE) protocol rapidly converges to optimal parameter settings and achieves high estimation efficiency. ZOE significantly improves the cardinality estimation efficiency, achieving a 3x performance gain compared with existing protocols. Meanwhile, ZOE guarantees any prescribed accuracy requirement without imposing computation or memory overhead on RFID tags. Due to its simplicity and robustness, the ZOE protocol provides reliable cardinality estimation even over noisy channels. We implement a prototype system using the USRP software defined radio and Intel WISP RFID tags. We also evaluate the performance of ZOE with extensive simulations. The evaluation of ZOE shows encouraging results in terms of estimation accuracy, time efficiency, as well as robustness.
【Keywords】: protocols; radiofrequency identification; software radio; Intel WISP RFID tags; USRP software defined radio; fast cardinality estimation; large-scale RFID systems; zero-one estimator protocol; Accuracy; Channel estimation; Estimation; Probabilistic logic; Protocols; RFID tags
【Paper Link】 【Pages】:917-925
【Authors】: Yuanqing Zheng ; Mo Li
【Abstract】: RFID systems are emerging platforms that support a variety of pervasive applications. The problem of identifying missing tags in RFID systems has attracted wide attention due to its practical importance. This paper presents P-MTI: a Physical-layer Missing Tag Identification scheme which effectively makes use of the lower layer information and dramatically improves operational efficiency. Unlike conventional approaches, P-MTI looks into the aggregated responses instead of focusing on individual tag responses and extracts useful information from physical layer symbols. P-MTI leverages the sparsity of missing tag events and reconstructs tag responses through compressive sensing. We implement P-MTI and prototype the system based on the USRP software defined radio and Intel WISP platform, which demonstrates its efficacy. We also evaluate the performance of P-MTI with extensive simulations and compare with previous approaches under various scenarios. The evaluation shows promising results of P-MTI in terms of identification accuracy, time efficiency, as well as robustness over noisy channels.
【Keywords】: compressed sensing; radiofrequency identification; software radio; Intel WISP platform; P-MTI; RFID systems; USRP software defined radio; compressive sensing; identification accuracy; noisy channel; physical layer symbol; physical-layer missing tag identification; tag response; time efficiency; Compressed sensing; Monitoring; Noise; Physical layer; RFID tags; Vectors
【Paper Link】 【Pages】:926-934
【Authors】: Greg Kuperman ; Eytan Modiano
【Abstract】: We consider the problem of providing protection against failures in wireless networks subject to interference constraints. Typically, protection in wired networks is provided through the provisioning of backup paths. This approach has not been previously considered in the wireless setting due to the prohibitive cost of backup capacity. However, we show that in the presence of interference, protection can often be provided with no loss in throughput. This is due to the fact that after a failure, links that previously interfered with the failed link can be activated, thus leading to a “recapturing” of some of the lost capacity. We provide both an ILP formulation for the optimal solution, as well as algorithms that perform close to optimal. More importantly, we show that providing protection in a wireless network uses up to 72% fewer protection resources than similar protection schemes designed for wired networks, and that in many cases, no additional resources for protection are needed.
【Keywords】: protection; telecommunication security; wireless mesh networks; ILP formulation; backup capacity; multihop wireless mesh networks; optimal solution; protection resources; wired networks; Interference constraints; Routing; Schedules; Throughput; Wireless networks
【Paper Link】 【Pages】:935-943
【Authors】: János Tapolcai ; Gábor Rétvári
【Abstract】: IP-level failure protection based on the IP Fast ReRoute/Loop-Free Alternates (LFA) specification has become industrial requirement recently. The success of LFA lies in its inherent simplicity, but this comes at the expense of letting certain failure scenarios go unprotected. Realizing full failure coverage with LFA so far has only been possible through completely reengineering the network around LFA-compliant design patterns. In this paper, we show that attaining high LFA coverage is possible without any alteration to the installed IP infrastructure, by introducing a carefully designed virtual overlay on top of the physical network that provides LFAs to otherwise unprotected routers. We study the problem of how to provision the overlay to maximize LFA coverage, we find that this problem is NP-complete, and we give Integer Linear Programs to solve it. We also propose novel methods to work around the limitations of current LFA implementations concerning Shared Risk Link Groups (SRLGs), which might be of independent interest. Our numerical evaluations suggest that router virtualization is an efficient tool for improving LFA-based resilience in real topologies.
【Keywords】: IP networks; computer network reliability; integer programming; linear programming; numerical analysis; overlay networks; telecommunication network routing; IP infrastructure; IP-level failure protection; IP-level resilience improvement; LFA specification; LFA-compliant design pattern; SRLG; fast reroute-loop-free alternate specification; integer linear program; numerical analysis; router virtualization; shared risk link group; unprotected routers; virtual overlay design; IP networks; Network topology; Optimization; Routing protocols; Substrates; Topology; Virtualization; IP Fast ReRoute; Loop-Free Alternates; router virtualization
【Paper Link】 【Pages】:944-952
【Authors】: Jose Yallouz ; Ariel Orda
【Abstract】: Coping with network failures has been recognized as an issue of major importance in terms of social security, stability and prosperity. It has become clear that current networking standards fall short of coping with the complex challenge of surviving failures. The need to address this challenge has become a focal point of networking research. In particular, the concept of tunable survivability offers major performance improvements over traditional approaches. Indeed, while the traditional approach is to provide full (100%) protection against network failures through disjoint paths, it was realized that this requirement is too restrictive in practice. Tunable survivability provides a quantitative measure for specifying the desired level (0%-100%) of survivability and offers flexibility in the choice of the routing paths. Previous work focused on the simpler class of “bottleneck” criteria, such as bandwidth. In this study, we focus on the important and much more complex class of additive criteria, such as delay and cost. First, we establish some (in part, counter-intuitive) properties of the optimal solution. Then, we establish efficient algorithmic schemes for optimizing the level of survivability under additive end-to-end QoS bounds. Subsequently, through extensive simulations, we show that, at the price of negligible reduction in the level of survivability, a major improvement (up to a factor of 2) is obtained in terms of end-to-end QoS performance. Finally, we exploit the above findings in the context of a network design problem, in which we need to best invest a given “budget” for improving the performance of the network links.
【Keywords】: computer network management; computer network reliability; quality of service; telecommunication network routing; QoS aware network survivability; additive end-to-end QoS bound; bottleneck criteria; network delay; network design problem; network failure; network link budget; network link cost; quantitative measure; routing path; tunable network survivability; tunable survivability; Additives; Delays; Maximum likelihood detection; Optimization; Quality of service; Standards
【Paper Link】 【Pages】:953-961
【Authors】: Shuang Li ; Zizhan Zheng ; Eylem Ekici ; Ness B. Shroff
【Abstract】: In Cognitive Radio Networks (CRNs), secondary users (SUs) are allowed to opportunistically access the unused/under-utilized channels of primary users (PUs). To utilize spectrum resources efficiently, an auction scheme is often applied where an operator serves as an auctioneer and accepts spectrum requests from SUs. Most existing works on spectrum auctions assume that the operator has perfect knowledge of PU activities. In practice, however, it is more likely that the operator only has statistical information of the PU traffic when it is trading a spectrum hole, and it is acquiring more accurate information in real time. In this paper, we distinguish PU channels that are under the control of the operator, where accurate channel states are revealed in real-time, and channels that the operator acquires from PUs out of its control, where a sense-before-use paradigm has to be followed. Considering both spectrum uncertainty and sensing inaccuracy, we study the social welfare maximization problem for serving SUs with various levels of delay tolerance. We first model the problem as a finite horizon Markov decision process when the operator knows all spectrum requests in advance, and propose an optimal dynamic programming based algorithm. We then investigate the case when spectrum requests are submitted online, and propose a greedy algorithm that is 1/2-competitive for homogeneous channels and is comparable to the offline algorithm for more general settings. We further extend the online algorithm to an online auction scheme, which ensures incentive compatibility for the SUs and also provides a way for trading off social welfare and revenue.
【Keywords】: Markov processes; cognitive radio; delay tolerant networks; dynamic programming; greedy algorithms; radio networks; radio spectrum management; telecommunication traffic; CRN; PU traffic; SU; auction scheme; delay tolerance; finite horizon Markov decision process; online auction scheme; operator-based cognitive radio network; optimal dynamic programming based algorithm; primary user; secondary user; sense-before-use paradigm; sensing inaccuracy; social welfare maximization problem; spectrum resource; spectrum uncertainty; Decision support systems
【Paper Link】 【Pages】:962-970
【Authors】: Chunxiao Jiang ; Yan Chen ; Yu-Han Yang ; Chih-Yu Wang ; K. J. Ray Liu
【Abstract】: In a cognitive radio network with mobility, secondary users can arrive at and leave the primary users' licensed networks at any time. After arrival, secondary users are confronted with channel access under the uncertain primary channel state. On one hand, they have to estimate the channel state, i.e., the primary users' activities, through performing spectrum sensing and learning from other secondary users' sensing results. On the other hand, they need to predict subsequent secondary users' access decisions to avoid competition when accessing the “spectrum hole”. In this paper, we propose a Dynamic Chinese Restaurant Game to study such a learning and decision making problem in cognitive radio networks. We introduce a Bayesian learning based method for secondary users to learn the channel state and propose a Multi-dimensional Markov Decision Process based approach for secondary users to make optimal channel access decisions. Finally, we conduct simulations to verify the effectiveness and efficiency of the proposed scheme.
【Keywords】: cognitive radio; game theory; Bayesian learning based method; channel access; cognitive radio networks; decision making; dynamic Chinese restaurant game; multi-dimensional Markov decision process; primary users; secondary users; spectrum sensing; Bayes methods; Channel estimation; Cognitive radio; Educational institutions; Games; Markov processes; Sensors; Bayesian Learning; Chinese Restaurant Game; Cognitive Radio; Game Theory; Markov Decision Process
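The Bayesian learning step, in which a secondary user refines its belief about the primary channel state from a sequence of binary sensing reports, can be sketched as a plain sequential Bayes update. The detection and false-alarm probabilities below are illustrative assumptions, not values from the paper:

```python
def update_belief(prior_busy, reports, p_d=0.9, p_fa=0.1):
    """Posterior P(busy) after independent binary sensing reports.

    p_d  = P(report '1' | channel busy)   (detection probability)
    p_fa = P(report '1' | channel idle)   (false-alarm probability)
    """
    belief = prior_busy
    for r in reports:
        like_busy = p_d if r == 1 else 1 - p_d
        like_idle = p_fa if r == 1 else 1 - p_fa
        num = like_busy * belief
        belief = num / (num + like_idle * (1 - belief))
    return belief

# Two 'busy' reports followed by one 'idle' report
b = update_belief(0.5, [1, 1, 0])
print(round(b, 4))  # 0.9
```

An access decision would then compare this posterior (and a prediction of later users' moves) against a threshold before transmitting in the suspected spectrum hole.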
【Paper Link】 【Pages】:971-979
【Authors】: Wei Li ; Xiuzhen Cheng ; Tao Jing ; Xiaoshuang Xing
【Abstract】: The cooperation between the primary and the secondary users has attracted a lot of attention in cognitive radio networks. However, most existing research mainly focuses on single-hop relay selection for a primary transmitter-receiver pair, which might not fully explore the benefit brought by cooperative transmissions. In this paper, we study the problem of multi-hop relay selection by applying the network formation game. In order to mitigate interference and reduce delay, we propose a cooperation framework, FTCO, that considers spectrum sharing in both the time and the frequency domains. We then formulate the multi-hop relay selection problem as a network formation game, in which the multi-hop relay path is computed by executing the primary player's strategies in the form of link operations. We also devise a distributed dynamic algorithm, PRADA, to obtain a global-path stable network. Finally, we conduct extensive numerical experiments, and our results indicate that cooperative multi-hop relaying can significantly benefit both the primary and the secondary network, and that the network graph resulting from our PRADA algorithm achieves global-path stability.
【Keywords】: cognitive radio; cooperative communication; game theory; interference suppression; network theory (graphs); radio receivers; radio spectrum management; radio transmitters; relay networks (telecommunication); time-frequency analysis; FTCO; PRADA algorithm; cognitive radio network; cooperation framework; cooperative multihop relay selection; cooperative transmission; distributed dynamic algorithm; frequency domain analysis; global path stable network; interference mitigation; network formation game; network graph; primary network; primary player strategy; primary transmitter-receiver pair; primary user; secondary user; single hop relay selection; spectrum sharing; time domain analysis; Bit rate; Delays; Games; Heuristic algorithms; Receivers; Relays; Spread spectrum communication; Cognitive radio networks; cooperative multi-hop relaying; global-path stable network; network formation game
【Paper Link】 【Pages】:980-988
【Authors】: Shengrong Bu ; F. Richard Yu ; Yi Qian
【Abstract】: Rapidly rising energy costs and increasingly rigid environmental standards have led to an emerging trend of addressing the “energy efficiency” aspect of mobile cellular networks. Cognitive heterogeneous mobile networks are considered as important techniques to improve the energy efficiency. However, most existing works do not consider the power grid, which provides electricity to cellular networks. Currently, the power grid is experiencing a significant shift from the traditional grid to the smart grid. In the smart grid environment, only considering energy efficiency may not be sufficient, since the dynamics of the smart grid will have significant impacts on mobile networks. In this paper, we study cognitive heterogeneous mobile networks in the smart grid environment. Unlike most existing studies on cognitive networks, where only the radio spectrum is sensed, our cognitive networks sense not only the radio spectrum environment but also the smart grid environment, based on which power allocation and interference management are performed. We formulate the problems of electricity price decision, energy-efficient power allocation and interference management as a three-level Stackelberg game. A homogeneous Bertrand game with asymmetric costs is used to model price decisions made by the electricity retailers. A backward induction method is used to analyze the proposed Stackelberg game. Simulation results show that our proposed scheme can significantly reduce operational expenditure and CO2 emissions in cognitive heterogeneous mobile networks.
【Keywords】: cellular radio; cognitive radio; energy conservation; smart power grids; telecommunication power management; Stackelberg game; backward induction method; cognitive heterogeneous mobile networks; electricity price decision; electricity retailer; energy efficiency; energy efficient cognitive heterogeneous network; energy efficient power allocation; environmental standards; homogeneous Bertrand game; interference management; mobile cellular networks; radio spectrum environment; smart grid environment; Electricity; Femtocells; Games; Interference; Mobile communication; Mobile computing; Smart grids; Energy efficiency; heterogeneous networks; smart grid
【Paper Link】 【Pages】:989-997
【Authors】: Danny De Vleeschauwer ; Harish Viswanathan ; Andre Beck ; Steven A. Benno ; Gang Li ; Raymond B. Miller
【Abstract】: Video streaming, in particular hypertext transfer protocol (HTTP) based adaptive streaming (HAS) of video, is expected to be a dominant application over mobile networks in the near future. The observation that the base station can alter the video quality requested by a HAS client from its server, by controlling the over-the-air throughput from the base station to the client, implies that the base station can jointly maximize the aggregate video quality of all the HAS flows and the throughput of the data flows that it serves. We formulate a utility maximization problem that separately takes into account different utility functions for video and data flows, and show that the maximum can be achieved through an algorithm we term adaptive guaranteed bit rate (AGBR), in which target bit rates are calculated for each HAS flow and passed on to an underlying minimum-rate proportional fair scheduler that schedules resources across all the flows. This approach has the advantage of retaining the existing scheduling function in the base station, with a separate function computing the target bit rates for the video flows; these rates change only slowly over time, avoiding frequent video quality changes. Through analytical modeling and simulations we show that the proposed algorithm achieves the required fairness among the video flows and automatically and fairly adapts video quality with increasing congestion, thereby preventing data flow throughput starvation.
【Keywords】: cellular radio; transport protocols; video streaming; AGBR; HAS client; HAS flow; HTTP adaptive streaming; adaptive guaranteed bit rate; analytical modeling; analytical simulations; hypertext transfer protocol based adaptive streaming; minimum rate proportional fair scheduler; mobile cellular networks; mobile networks; scheduling function; utility maximization problem; video quality; Base stations; Bit rate; Optimization; Quality assessment; Streaming media; Throughput; Video recording; 3GPP; GBR; HAS; LTE; mobile networks; resource allocation; scheduling; video
【Paper Link】 【Pages】:998-1006
【Authors】: Ehsan Aryafar ; Alireza Keshavarz-Haddad ; Michael Wang ; Mung Chiang
【Abstract】: We study the dynamics of network selection in heterogeneous wireless networks (HetNets). Users in such networks selfishly select the best radio access technology (RAT) with the objective of maximizing their own throughputs. We propose two general classes of throughput models that capture the basic properties of random access (e.g., Wi-Fi) and scheduled access (e.g., WiMAX, LTE, 3G) networks. Next, we formulate the problem as a non-cooperative game, and study its convergence, efficiency, and practicality. Our results reveal that: (i) Single-class RAT selection games converge to Nash equilibria, whereas with a mixture of classes an improvement path can be repeated infinitely. We next introduce a hysteresis mechanism in RAT selection games, and prove that with appropriate hysteresis policies, convergence can still be guaranteed; (ii) We analyze the Pareto-efficiency of the Nash equilibria of these games. We derive the conditions under which Nash equilibria are Pareto-optimal, and we quantify the distance of Nash equilibria with respect to the set of Pareto-dominant points when the conditions are not satisfied; (iii) Finally, with extensive measurement-driven simulations we show that RAT selection games converge to Nash equilibria in a small number of steps, and hence are amenable to practical implementation. We also investigate the impact of noisy throughput measurements, and propose solutions to handle them.
【Keywords】: Pareto optimisation; convergence; game theory; radio access networks; telecommunication congestion control; telecommunication network management; HetNets; Nash equilibria; Pareto efficiency; Pareto-dominant points; RAT selection games; heterogeneous wireless networks; hysteresis mechanism; network selection; radio access technology; random access networks; scheduled access networks; throughput models; Convergence; Games; Hysteresis; IEEE 802.11 Standards; Rats; Switches; Throughput
【Paper Link】 【Pages】:1007-1015
【Authors】: Ashwin Sridharan ; Jean Bolot
【Abstract】: The opportunities to understand human mobility have increased significantly of late with the rapid adoption of wireless devices that report locations frequently. In this work, we utilize one such rich data-set, comprising nationwide call data records from several million users, to analyze and understand their location patterns. We define a location pattern as the set of locations visited by a user, which, roughly speaking, can be considered the footprint of the user. Such an analysis is useful since it allows insight into aspects such as the range covered by a user, the general direction and major routes of travel, and the characterization of geographic areas. These in turn are useful inputs for network planning, traffic planning and mobility models. We propose a systematic methodology that utilizes geometric structures like the Minimum Area Rectangle, line segmentation and clustering techniques to extract meaningful information from location patterns, and apply it to our large data-set. Based on this we report on aspects such as the size and orientation of footprints and the length of major routes, as well as characterize and compare locales based on movement patterns. Finally, we identify some key features of location patterns that can be modeled very well with a single statistical distribution, the Double Pareto LogNormal (DPLN) distribution, regardless of locale.
【Keywords】: Pareto distribution; mobile radio; statistical distributions; telecommunication network planning; telecommunication traffic; clustering techniques; double Pareto lognormal distribution; human-mobility; information extraction; line segmentation; location patterns; minimum area rectangle; mobile users; mobility models; network planning; statistical distribution; traffic planning; wireless devices; Approximation methods; Cities and towns; Clustering algorithms; Planning; Poles and towers; Shape; Trajectory
【Paper Link】 【Pages】:1016-1024
【Authors】: Yin Sun ; Can Emre Koksal ; Sung-Ju Lee ; Ness B. Shroff
【Abstract】: Wireless network scheduling and control techniques (e.g., opportunistic scheduling) rely heavily on access to Channel State Information (CSI). However, obtaining this information is costly in terms of bandwidth, time, and power, and could result in large overhead. Therefore, a critical question is how to optimally manage network resources in the absence of such information. To that end, we develop a cross-layer solution for downlink cellular systems with imperfect (and possibly no) CSI at the transmitter. We use rateless codes to resolve channel uncertainty. To keep the decoding complexity low, we explicitly incorporate time-average block-size constraints, and aim to maximize the system utility. The block-size of a rateless code is determined by both the network control decisions and the unknown CSI of many time slots. Therefore, unlike standard utility maximization problems, this problem can be viewed as a constrained partially observed Markov decision problem (CPOMDP), which is known to be hard due to the “curse of dimensionality.” However, by using a modified Lyapunov drift method, we develop a dynamic network control scheme which yields a total network utility within O(1/Lav) of the utility-optimal point achieved by infinite block-size channel codes, where Lav is the enforced value of the time-average block-size of the rateless codes. This opens the door to trading complexity/delay for performance gains in the absence of accurate CSI. Our simulation results show that the proposed scheme improves the network throughput by up to 68% over schemes that use fixed-rate codes.
【Keywords】: Markov processes; cellular radio; channel coding; computational complexity; decision theory; optimisation; resource allocation; scheduling; telecommunication control; telecommunication network management; CPOMDP; CSI; Lyapunov drift method; channel state information; channel uncertainty; constrained partial observed Markov decision problem; cross-layer solution; curse-of-dimensionality; downlink cellular systems; dynamic network control scheme; fixed-rate codes; infinite block-size channel codes; low decoding complexity; network control decisions; network resource management; network throughput improvement; opportunistic scheduling; performance gains; rateless codes; system utility maximization; time slots; time-average block-size constraints; total network utility; utility-optimal point; wireless network control technique; wireless network scheduling technique; Complexity theory; Downlink; Maximum likelihood decoding; Mutual information; Receivers; Transmitters
【Paper Link】 【Pages】:1025-1033
【Authors】: Kevin D. Bowers ; Ari Juels ; Ronald L. Rivest ; Emily Shen
【Abstract】: We introduce Drifting Keys (DKs), a simple new approach to detecting device impersonation. DKs enable detection of complete compromise by an attacker of the device and its secret state, e.g., cryptographic keys. A DK evolves within a device randomly over time. Thus an attacker will create DKs that randomly diverge from those in the original, valid device over time, alerting a trusted verifier to the attack. DKs may be transmitted unidirectionally from a device, eliminating interaction between the device and verifier. Device emissions of DK values can be quite compact - even just a single bit - and DK evolution and emission require minimal computation. Thus DKs are well suited for highly constrained devices, such as sensors and hardware authentication tokens. We offer a formal adversarial model for DKs, and present a simple scheme that we prove essentially optimal (undominated) for a natural class of attack timelines. We explore application of this scheme to one-time passcode authentication tokens. Using the logs of a large enterprise, we experimentally study the effectiveness of DKs in detecting the compromise of such tokens.
【Keywords】: cryptography; trusted computing; DK evolution; DK values; complete compromise detection; constrained device impersonation detection; cryptographic keys; device emissions; drifting keys; formal adversarial model; one-time passcode authentication tokens; secret state; trusted verifier; Authentication; Cloning; Cryptography; Forgery; Sensors; Synchronization
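The core drifting-key mechanism lends itself to a tiny simulation. This is a hedged sketch with a made-up key size and epoch count, not the paper's scheme: the key flips one random bit per epoch, so a clone captured at compromise time drifts independently and its state diverges from the legitimate device's, which is what alerts the verifier.

```python
import random

def drift(key, rng, nbits=64):
    """One epoch of drift: flip a single uniformly chosen key bit."""
    return key ^ (1 << rng.randrange(nbits))

rng_dev = random.Random(1)    # legitimate device's randomness
rng_atk = random.Random(2)    # attacker clone's randomness
key = 0x0123456789ABCDEF
clone = key                   # complete compromise: clone starts identical
for _ in range(32):           # 32 epochs after the compromise
    key = drift(key, rng_dev)
    clone = drift(clone, rng_atk)

diverged = bin(key ^ clone).count("1")
print(diverged)
```

Here `diverged` counts the bit positions where the two keys now differ; emitting even a single bit of the current key per epoch lets a verifier notice the growing mismatch over time without any interaction with the device.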
【Paper Link】 【Pages】:1034-1042
【Authors】: Lu Shi ; Shucheng Yu ; Wenjing Lou ; Y. Thomas Hou
【Abstract】: Lacking a trusted central authority, distributed systems have faced serious security threats from Sybil attacks, in which an adversary forges multiple node identities and attempts to control the system. By utilizing the real-world trust relationships between users, social network-based defense schemes have been proposed to mitigate the impact of Sybil attacks. These solutions are mostly built on the assumption that the social network graph can be partitioned into two loosely linked regions - a tightly connected non-Sybil region and a Sybil region. Although such an assumption may hold in certain settings, studies have shown that real-world social connections tend to divide users into multiple inter-connected small worlds instead of a single uniformly connected large region. Given this fact, the applicability of existing schemes is greatly undermined by their inability to distinguish Sybil users from valid ones in the small non-Sybil regions. This paper addresses this problem and presents SybilShield, the first protocol that defends against Sybil attacks by utilizing the multi-community structure of real-world social networks. Our scheme leverages the sociological property that the number of cutting edges, which represent human-established trust relationships, between a non-Sybil community and a Sybil community is much smaller than that among non-Sybil communities. With the help of agent nodes, SybilShield greatly reduces the false positive rate of non-Sybils among multiple communities, while effectively identifying Sybil nodes. Analytical results prove the superiority of SybilShield. Our experiments on a real-world social network graph with 100,000 nodes also validate the effectiveness of SybilShield.
【Keywords】: distributed processing; graph theory; multi-agent systems; network theory (graphs); security of data; trusted computing; Sybil attack impact mitigation; Sybil users; SybilShield; adversary; agent node; agent-aided social network-based Sybil defense; distributed system; human-established trust relationship; identity forging; loosely linked region; multicommunity social network structure; multiple interconnected small world; protocol; serious security threat; single uniformly connected large region; social connection; social network graph partitioning; sociological property; system control; tightly connected nonSybil region; trusted central authority; Communities; Image edge detection; Peer-to-peer computing; Routing; Routing protocols; Social network services
【Paper Link】 【Pages】:1043-1051
【Authors】: Zhen Ling ; Junzhou Luo ; Kui Wu ; Xinwen Fu
【Abstract】: Tor hidden services are commonly used to provide a TCP based service to users without exposing the hidden server's IP address in order to achieve anonymity and anti-censorship. However, hidden services are currently abused in various ways. Illegal content such as child pornography has been discovered on various Tor hidden servers. In this paper, we propose a protocol-level hidden server discovery approach to locate the Tor hidden server that hosts the illegal website. We investigate the Tor hidden server protocol and develop a hidden server discovery system, which consists of a Tor client, a Tor rendezvous point, and several Tor entry onion routers. We manipulate Tor cells, the basic transmission unit over Tor, at the Tor rendezvous point to generate a protocol-level feature at the entry onion routers. Once our controlled entry onion routers detect such a feature, we can confirm the IP address of the hidden server. We conduct extensive analysis and experiments to demonstrate the feasibility and effectiveness of our approach.
【Keywords】: IP networks; client-server systems; computer network security; telecommunication network routing; transport protocols; IP address; TCP-based service; Tor cells; Tor client; Tor entry onion routers; Tor hidden server localisation; Tor hidden server protocol; Tor rendezvous point; anonymous communication; hidden services; illegal Web site; illegal content; protocol-level hidden server discovery; transmission unit; Correlation; IP networks; Relays; Routing protocols; Servers; Timing; Anonymous Communication; Hidden Service; Tor
【Paper Link】 【Pages】:1052-1060
【Authors】: Liqun Chen ; Hoon Wei Lim ; Guomin Yang
【Abstract】: We revisit the problem of cross-domain secure communication between two users belonging to different security domains within an open and distributed environment. Existing approaches presuppose that either the users are in possession of public key certificates issued by a trusted certificate authority (CA), or the associated domain authentication servers share a long-term secret key. In this paper, we propose a four-party password-based authenticated key exchange (4PAKE) protocol that takes a different approach from previous work. The users are not required to have public key certificates, but they simply reuse their login passwords they share with their respective domain authentication servers. On the other hand, the authentication servers, assumed to be part of a standard PKI, act as ephemeral CAs that “certify” some key materials that the users can subsequently exchange and agree on a session key. Moreover, we adopt a compositional approach. That is, by treating any secure two-party password-based key exchange protocol and two-party asymmetric-key based key exchange protocol as black boxes, we combine them to obtain a generic and provably secure 4PAKE protocol.
【Keywords】: cryptographic protocols; public key cryptography; telecommunication security; cross-domain password-based authenticated key exchange; cross-domain secure communication; domain authentication servers; four-party password-based authenticated key exchange protocol; long-term secret key; public key certificates; trusted certificate; two-party asymmetric-key based key exchange protocol; two-party password-based key exchange protocol; Authentication; Electronic mail; Materials; Protocols; Public key; Servers; Password-based protocol; client-to-client; cross-domain; key exchange
【Paper Link】 【Pages】:1061-1069
【Authors】: Carlee Joe-Wong ; Soumya Sen ; Sangtae Ha
【Abstract】: To alleviate the congestion caused by rapid growth in demand for mobile data, ISPs have begun encouraging users to offload some of their traffic onto a supplementary, better quality network technology, e.g., offloading from 3G or 4G to WiFi and femtocells. With the growing popularity of such offerings, a deeper understanding of the underlying economic principles and their impact on technology adoption is necessary. To this end, we develop a model for user adoption of a base wireless technology and a bundle of the base plus a supplementary technology. In our model, individual users make their adoption decisions based on several factors, including the technologies' intrinsic qualities, throughput degradation due to congestion externalities from other subscribers, and the flat access rates that an ISP charges. We study the adoption dynamics and show that they converge to a unique equilibrium for a given set of exogenously determined system parameters. In particular, we characterize the occurrence of interesting adoption behaviors, including a possible decrease in the adoption of the supplementary technology as its coverage increases. Similar behaviors occur at an ISP's profit-maximizing prices and the optimal coverage area for the supplementary technology. To account for the potential benefits from offloading in practice, we collect 3G and WiFi usage and location data from twenty mobile users. We then use this data to numerically investigate the profit-maximizing adoption levels when an ISP accounts for its cost of deploying the supplemental technology and savings from offloading traffic onto this technology.
【Keywords】: 3G mobile communication; 4G mobile communication; Internet; femtocellular radio; numerical analysis; pricing; telecommunication traffic; wireless LAN; 3G mobile communication; 4G mobile communication; ISP profit-maximizing prices; Internet service provider; WiFi; adoption dynamics; femtocellular radio; flat access rates; mobile data; numerical analysis; profit-maximizing adoption levels; quality network technology; supplementary wireless technology; technology adoption behavior; user adoption; Cost accounting; Degradation; Economics; Femtocells; IEEE 802.11 Standards; Throughput; Wireless communication
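The adoption dynamics in the abstract above can be illustrated with a one-line best-response iteration. The uniform valuation model and every parameter value here are illustrative assumptions, not the paper's model: a user of type theta adopts the supplementary bundle iff theta*q exceeds the congestion externality c*x plus the flat price p, where x is the current adoption share.

```python
def adoption_equilibrium(q, c, p, iters=200):
    """Fixed point of x = clip(1 - (c*x + p)/q, 0, 1).

    Users have valuations theta ~ Uniform[0,1]; a user adopts iff
    theta*q - c*x - p > 0 given the current adoption share x, so the
    induced share is the fraction of types above (c*x + p)/q.
    """
    x = 0.0
    for _ in range(iters):
        x = max(0.0, min(1.0, 1.0 - (c * x + p) / q))
    return x

x_star = adoption_equilibrium(q=2.0, c=1.0, p=0.5)
print(round(x_star, 4))  # 0.5: solves x = 1 - (x + 0.5)/2
```

Because the best-response map here is a contraction (slope c/q = 1/2 in magnitude), the iteration converges to the unique equilibrium regardless of the starting share, mirroring the convergence result stated in the abstract.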
【Paper Link】 【Pages】:1070-1078
【Authors】: Lingjie Duan ; Jianwei Huang ; Jean C. Walrand
【Abstract】: As the successor to the 3G standard, 4G provides much higher data rates to address cellular users' ever-increasing demands for high-speed multimedia communications. This paper analyzes cellular operators' timing of network upgrades in a model where users can switch operators and services. By being the first to upgrade from 3G to 4G service, an operator increases its market share but takes on more risk and upgrade cost, because 4G technology matures over time. This paper first studies a 4G monopoly market with one dominant operator and some small operators, where the monopolist decides its upgrade time by trading off increased market share against upgrade cost. The paper also considers a 4G competition market and develops a game-theoretic model for studying operators' interactions. The analysis shows that operators select different upgrade times to avoid severe competition. One operator takes the lead to upgrade, using the benefit of a larger market share to compensate for the larger cost of an early upgrade. This result matches well with many industry observations of asymmetric 4G upgrades. The paper further shows that the availability of a 4G upgrade may decrease both operators' profits due to increased competition. Perhaps surprisingly, the profits can increase with the upgrade cost.
【Keywords】: 4G mobile communication; economics; marketing; multimedia communication; 4G competition market; 4G monopoly market; 4G network upgrade; cellular operator; economic analysis; market share; multimedia communication; operator interaction; upgrade cost; Biological system modeling; Games; Industries; Monopoly; Quality of service; Switches; Timing
【Paper Link】 【Pages】:1079-1087
【Authors】: Valentino Pacifici ; György Dán
【Abstract】: A future content-centric Internet would likely consist of autonomous systems (ASes) just like today's Internet. It would thus be a network of interacting cache networks, each of them optimized for local performance. To understand the influence of interactions between autonomous cache networks, in this paper we consider ASes that maintain peering agreements with each other for mutual benefit, and engage in content-level peering to leverage each other's cache contents. We propose a model of the interaction and the coordination between the caches managed by peering ASes. We address whether stable and efficient content-level peering can be implemented without explicit coordination between the neighboring ASes or, alternatively, whether the interaction needs to rely on explicit announcements of content reachability in order for the system to be stable. We show that content-level peering leads to stable cache configurations, both with and without coordination. If the ASes do coordinate, then coordination that avoids simultaneous updates by peering ISPs provides faster and more cost-efficient convergence to a stable configuration. Furthermore, if the content popularity estimates are inaccurate, content-level peering is likely to lead to cost-efficient cache allocations. We validate our analytical results using simulations on the measured peering topology of more than 600 ASes.
【Keywords】: Internet; cache storage; computer network performance evaluation; convergence; optimisation; reachability analysis; telecommunication network topology; autonomous cache content-peering dynamics; autonomous cache networks; autonomous systems; cache configurations; cache management; content popularity estimation; content reachability; content-centric Internet; content-centric network; convergence; cost efficient cache allocations; interacting cache networks; optimisation; peering ASes; peering ISP; peering agreements; peering topology; Convergence; Internet; Nash equilibrium; Numerical models; Protocols; Resource management; Routing
【Paper Link】 【Pages】:1088-1096
【Authors】: Lingjie Duan ; Jianwei Huang ; Biying Shou
【Abstract】: This paper analyzes two pricing schemes commonly used in WiFi markets: flat-rate pricing and usage-based pricing. Flat-rate pricing encourages users to achieve the maximum WiFi usage and targets users with high valuations of mobile Internet access, whereas usage-based pricing is flexible enough to attract more users - even those with low valuations. First, we show that for a local provider, flat-rate pricing provides more revenue than usage-based pricing, which is consistent with the common practice in today's local markets. Second, we study how Skype may work with many local WiFi providers to provide a global WiFi service. We formulate the interactions between Skype, local providers, and users as a two-stage dynamic game. In Stage I, Skype bargains with each local provider to determine the global Skype WiFi service price and a revenue sharing agreement; in Stage II, local users and travelers decide whether and how to use the local or Skype WiFi service. Our analysis uncovers two key insights behind Skype's current choice of usage-based pricing for its global WiFi service: to avoid severe competition with local providers and to attract travelers to the service. We further show that at the equilibrium, Skype needs to share the majority of its revenue with a local provider to compensate for the local provider's revenue loss due to competition. When there are more travelers or fewer local users, the competition between Skype and a local provider becomes less severe, and Skype can give away less revenue and reduce its usage-based price to attract more users.
【Keywords】: game theory; pricing; wireless LAN; Skype; flat-rate pricing; global WiFi market; local WiFi market; mobile Internet access; optimal pricing; revenue sharing agreement; two-stage dynamic game; usage-based pricing; Cost accounting; Educational institutions; Elasticity; Games; IEEE 802.11 Standards; Internet; Pricing
【Paper Link】 【Pages】:1097-1105
【Authors】: Dimitrios Tsilimantos ; Jean-Marie Gorce ; Eitan Altman
【Abstract】: The issue of energy efficiency (EE) in Orthogonal Frequency-Division Multiple Access (OFDMA) wireless networks is discussed in this paper. Our interest is focused on the promising concept of base station (BS) sleep mode, introduced recently as a key feature for dramatically reducing network energy consumption. The proposed technical approach fully exploits the properties of stochastic geometry, where the number of active cells is reduced in a way that the outage probability, or equivalently the signal-to-interference-plus-noise ratio (SINR) distribution, remains the same. The optimal EE gains are then specified with the help of a simplified yet realistic BS power consumption model. Furthermore, the authors extend their initial work by studying a non-singular path loss model in order to verify the validity of the analysis, and finally the impact on the achieved user capacity is investigated. In this context, the significant contribution of this paper is the evaluation of the theoretically optimal energy savings of sleep mode, highlighting the decisive role that the BS power profile plays.
【Keywords】: OFDM modulation; energy conservation; frequency division multiple access; geometry; power consumption; probability; radiofrequency interference; stochastic processes; telecommunication power management; BS power consumption model; BS power profile; EE; OFDMA wireless network; SINR distribution; active cells; base station; energy efficiency; network energy consumption; nonsingular path loss model; optimal energy saving; orthogonal frequency-division multiple access wireless network; outage probability; signal to interference plus noise; sleep mode; stochastic analysis; stochastic geometry; user capacity; Analytical models; Interference; Mathematical model; Optimized production technology; Power demand; Signal to noise ratio; Stochastic processes
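The headline trade-off, that sleep-mode savings are governed by the BS power profile, reduces to simple arithmetic under an affine power model. All numbers below are illustrative, not the paper's: each active BS draws P = P0 + delta_p * Pt, sleeping BSs are assumed to draw nothing, and if the SINR distribution is preserved while a fraction s of cells sleeps, area power drops by exactly s.

```python
def area_power(density, p0, delta_p, pt):
    """Area power consumption (W per unit area) under an affine BS
    power model: each active BS consumes p0 + delta_p * pt."""
    return density * (p0 + delta_p * pt)

# Illustrative values: 4 BS per km^2, 130 W static draw,
# load-dependent slope 4.7, 20 W transmit power.
lam, p0, dp, pt = 4.0, 130.0, 4.7, 20.0
baseline = area_power(lam, p0, dp, pt)
s = 0.25                                  # fraction of cells put to sleep
with_sleep = area_power(lam * (1 - s), p0, dp, pt)
saving = 1 - with_sleep / baseline
print(round(baseline, 1), round(saving, 3))
```

The point the abstract makes is visible in the model: with a large static term p0, only switching whole BSs off (not merely lowering pt) yields savings close to the sleeping fraction s.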
【Paper Link】 【Pages】:1106-1114
【Authors】: R. M. Karthik ; Arvind Chakrapani
【Abstract】: In this paper, we consider wireless communication systems where user equipments (UEs) monitor packet-data traffic characteristics and adopt Discontinuous Reception (DRX) to conserve battery power. With DRX, the receiver circuitry is configured to toggle between on (active) and off (inactive) states for specified durations, depending on the packet arrival process. Arriving at an appropriate DRX configuration remains a challenging research issue, especially when multiple applications generate traffic. Our objective in this work is to provide a practical mechanism for selecting a suitable DRX configuration. First, we derive an analytical expression for the expected maximum time (delay) required for a packet arriving during the off-duration to be serviced, for any arrival process. Second, we obtain an estimate for the on-duration, T_on, for which the expected delay is below a certain threshold. Using T_on, we compute the active duration, T_active, in each DRX cycle by considering the timers specified in 3rd Generation Partnership Project (3GPP) Release 10 and for a given interarrival time distribution between packets. Finally, using our analysis, we propose a pragmatic algorithm and show how to select an appropriate DRX configuration which will lead to high power efficiency with acceptable buffer requirements. Through extensive analysis and simulations both with general arrival processes and real-time traces, we show that our algorithm can lead to significant extension in battery life at the UE.
【Keywords】: 3G mobile communication; next generation networks; telecommunication traffic; 3GPP; 3rd Generation Partnership Project; DRX cycle; UE; active duration; battery power; discontinuous reception; general arrival processes; next generation mobiles; packet arrival process; packet-data traffic characteristics; power efficient DRX configuration; pragmatic algorithm; real-time traces; user equipments; wireless communication systems; Batteries; Delays; Equations; Joints; Mathematical model; Monitoring; Tin
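The waiting time that drives the delay analysis above is easy to state for a plain periodic cycle. Below is a minimal sketch of it; the simplifications are ours (zero service time, immediate service during the on-duration, and the 3GPP Release 10 timers ignored):

```python
def drx_delay(arrival, t_on, t_off):
    """Waiting time for a packet arriving at time `arrival` until the
    receiver is next active, under a periodic DRX cycle consisting of
    `t_on` units of active time followed by `t_off` units of sleep."""
    cycle = t_on + t_off
    phase = arrival % cycle
    # Served immediately if the receiver is on; otherwise wait for
    # the start of the next cycle.
    return 0.0 if phase < t_on else cycle - phase
```

With t_on = 20 and t_off = 80, a packet arriving at time 30 waits 70 units; averaged over a uniformly random arrival within the off-duration, the wait is t_off/2.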
【Paper Link】 【Pages】:1115-1123
【Authors】: Rahul Vaze
【Abstract】: The design of online algorithms for minimizing packet transmission time is considered for the single-user Gaussian channel and the two-user Gaussian multiple access channel (GMAC) powered by natural renewable sources. The most general case of arbitrary energy arrivals is considered, where neither the future energy arrival instants nor amounts, nor their distribution, are known. The online algorithm adaptively changes the transmission rate according to the causal energy arrival information, so as to minimize the packet transmission time. For a minimization problem, the utility of an online algorithm is tested by finding its competitive ratio, or competitiveness, that is, the maximum over all input sequences of the ratio between the cost of the online algorithm and that of the optimal offline algorithm. We derive a lower bound showing that the competitive ratio of any online algorithm is at least 1.38 for the single-user Gaussian channel and 1.356 for the GMAC. A `lazy' transmission policy that chooses its transmission power to minimize the transmission time, assuming that no further energy arrivals are going to occur in the future, is shown to be strictly two-competitive for both the single-user Gaussian channel and the GMAC.
【Keywords】: Gaussian channels; energy harvesting; minimisation; multi-access systems; telecommunication power management; GMAC; causal energy arrival information; competitive ratio analysis; competitiveness; energy harvesting communication system; lazy transmission policy; minimization problem; natural renewable sources; online algorithm design; optimal offline algorithm; packet transmission time minimization; single-user Gaussian channel; transmission power; transmission rate; two-user Gaussian multiple access channel; Algorithm design and analysis; Channel models; Communication systems; Energy harvesting; Indexes; Optimization; Upper bound
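The `lazy' policy has a compact computational core: pick the largest constant power whose transmission just exhausts the currently available energy, assuming nothing more arrives. The sketch below is an illustrative single-user version with rate log2(1 + p); the function name, the bisection, and the feasibility assumption energy/bits > ln 2 are ours, not the paper's exact formulation.

```python
import math

def lazy_power(energy, bits, hi=1e6):
    """Largest constant power p such that sending `bits` at rate
    log2(1 + p) consumes at most `energy` (p * time <= energy).
    Since p / log2(1 + p) is increasing in p, bisection applies.
    Assumes energy / bits > ln 2, so some positive power is feasible."""
    def used(p):
        # Energy consumed to finish the packet at constant power p.
        return p * bits / math.log2(1.0 + p)
    lo = 1e-12
    if used(hi) <= energy:
        return hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if used(mid) <= energy:
            lo = mid
        else:
            hi = mid
    return lo
```

A larger power shortens the transmission time bits / log2(1 + p), so the returned p is also the time-minimizing feasible choice; whenever new energy does arrive, the lazy policy simply recomputes.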
【Paper Link】 【Pages】:1124-1132
【Authors】: Cheng Liu ; Karthikeyan Sundaresan ; Meilong Jiang ; Sampath Rangarajan ; Gee-Kung Chang
【Abstract】: Small cells have become an integral component in meeting the increased demand for cellular network capacity. Cloud radio access networks (C-RAN) have been proposed as an effective means to harness the capacity benefits of small cells at reduced capital and operational expenses. With the baseband units (BBUs) separated from the radio access units (RAUs) and moved to the cloud for centralized processing, the backhaul between BBUs and RAUs forms a key component of any C-RAN. In this work, we argue that a one-to-one mapping of BBUs to RAUs is highly sub-optimal, thereby calling for a functional decoupling of the BBU pool from the RAUs. Further, the backhaul architecture must be made re-configurable to allow the mapping between BBUs and RAUs to be flexible and changed dynamically so as to not just optimize RAN performance but also energy consumption in the BBU pool. Towards this end, we design and implement the first OFDMA-based C-RAN test-bed with a reconfigurable backhaul that allows 4 BBUs to connect flexibly with 4 RAUs using radio-over-fiber technology. We demonstrate the feasibility of our system over a 10 km separation between the BBU pool and RAUs. Further, real world experiments with commercial off-the-shelf WiMAX clients reveal the performance benefits of our reconfigurable backhaul in catering effectively to heterogeneous user (static and mobile clients) and traffic profiles, while also delivering energy benefits in the BBU pool.
【Keywords】: OFDM modulation; WiMax; cellular radio; frequency division multiple access; radio access networks; radio-over-fibre; C-RAN test-bed; OFDMA; RAU; baseband unit; cellular network capacity; cloud radio access network; cloud-RAN; energy consumption; functional decoupling; mobile client; off-the-shelf WiMAX client; radio access unit; radio-over-fiber technology; reconfigurable backhaul; small cell network; static client; Baseband; Computer architecture; Microprocessors; Mobile communication; Optical receivers; Optical switches; WiMAX
【Paper Link】 【Pages】:1133-1141
【Authors】: Shaojie Tang ; Qiuyuan Huang ; Xiang-Yang Li ; Dapeng Wu
【Abstract】: Assume that a set of Demand Response Switch (DRS) devices are deployed in smart meters for autonomous demand side management within one house. The DRS devices are able to sense and control the activity of each appliance. We propose a set of appliance scheduling algorithms to 1) minimize the peak power consumption under a fixed delay requirement, and 2) minimize the delay under a fixed peak demand constraint. For both problems, we first prove that they are NP-Hard. Then we propose a set of approximation algorithms with constant approximation ratios. We conduct extensive simulations using both real-life appliance energy consumption data traces and synthetic data to evaluate the performance of our algorithms. Extensive evaluations verify that the schedules obtained by our methods significantly reduce the peak demand or delay compared with a naive greedy algorithm or a randomized algorithm.
【Keywords】: demand side management; energy consumption; optimisation; scheduling; smart power grids; DRS devices; NP-hard; delay; demand response switch; demand side management; energy consumption data trace; greedy algorithm; peak demand; peak demand reduction; peak power consumption; randomized algorithm; Approximation algorithms; Delays; Energy consumption; Home appliances; Minimization; Schedules; Strips
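For intuition, here is the kind of naive greedy baseline the abstract compares against, not the paper's constant-ratio approximation algorithms: each job (power, duration, deadline slot) is placed at the start slot yielding the smallest resulting peak. The slot model and all names are our own illustration.

```python
def greedy_schedule(jobs, horizon):
    """Place each job at the start slot minimizing the resulting peak
    load; a job (power, duration, deadline) must finish by `deadline`.
    Returns the per-slot load profile."""
    load = [0.0] * horizon
    for power, duration, deadline in jobs:
        best_start, best_peak = None, float("inf")
        for s in range(0, deadline - duration + 1):
            # Peak if the job ran in slots [s, s + duration).
            peak = max(max(load[s:s + duration]) + power, max(load))
            if peak < best_peak:
                best_start, best_peak = s, peak
        for t in range(best_start, best_start + duration):
            load[t] += power
    return load
```

Because each placement is committed without lookahead, such a greedy can be far from the optimal peak, which is what motivates the paper's approximation guarantees.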
【Paper Link】 【Pages】:1142-1150
【Authors】: Yingsong Huang ; Shiwen Mao ; R. M. Nelms
【Abstract】: Microgrid (MG) is a promising component for future smart grid (SG) deployment. The balance of supply and demand of electric energy is one of the most important requirements of MG management. In this paper, we present a novel framework for smart energy management based on the concept of quality-of-service in electricity (QoSE). Specifically, the resident electricity demand is classified into basic usage and quality usage. The basic usage is always guaranteed by the MG, while the quality usage is controlled based on the MG state. The microgrid control center (MGCC) aims to minimize the MG operation cost and maintain the outage probability of quality usage, i.e., QoSE, below a target value, by scheduling electricity among renewable energy resources, energy storage systems, and macrogrid. The problem is formulated as a constrained stochastic programming problem. The Lyapunov optimization technique is then applied to derive an adaptive electricity scheduling algorithm by introducing the QoSE virtual queues and energy storage virtual queues. The proposed algorithm is an online algorithm since it does not require any statistics and future knowledge of the electricity supply, demand and price processes. We derive several "hard" performance bounds for the proposed algorithm, and evaluate its performance with trace-driven simulations. The simulation results demonstrate the efficacy of the proposed electricity scheduling algorithm.
【Keywords】: Lyapunov methods; adaptive scheduling; distributed power generation; renewable energy sources; smart power grids; stochastic programming; Lyapunov optimization technique; QoSE virtual queues; adaptive electricity scheduling algorithm; electric energy; energy storage systems; energy storage virtual queues; macrogrid; microgrid control center; microgrids; outage probability; quality-of-service; renewable energy resources; smart energy management; smart grid deployment; stochastic programming; Decision support systems; Zinc
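The virtual-queue device behind the algorithm can be sketched in a few lines: each outage event is enqueued, the tolerated outage rate is dequeued, and keeping the queue stable enforces the long-run constraint. This is a generic Lyapunov-optimization illustration with our own names; the paper's exact queue definitions differ in detail.

```python
def qose_queue_step(q, outage, epsilon):
    """One slot of a QoSE-style virtual queue: `outage` is 1 if quality
    usage was curtailed this slot, `epsilon` the target outage rate.
    If q stays bounded, curtailments occur at long-run rate <= epsilon."""
    return max(q + outage - epsilon, 0.0)

def run_queue(trace, epsilon):
    """Feed a 0/1 outage trace through the virtual queue."""
    q = 0.0
    for outage in trace:
        q = qose_queue_step(q, outage, epsilon)
    return q
```

An outage rate at or below the target leaves the queue near zero, while persistent violation makes it grow linearly; that growth is precisely the pressure a drift-minimizing scheduler reacts to.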
【Paper Link】 【Pages】:1151-1159
【Authors】: Yang Gao ; Yan Chen ; Chih-Yu Wang ; K. J. Ray Liu
【Abstract】: With the foreseeable large scale deployment of electric vehicles (EVs) and the development of vehicle-to-grid (V2G) technologies, it is possible to provide ancillary services to the power grid in a cost efficient way, i.e., through the bidirectional power flow of EVs. A key issue in such kind of schemes is how to stimulate a large number of EVs to act coordinately to achieve the service request. This is challenging since EVs are self-interested and generally have different preferences toward charging and discharging based on their own constraints. In this paper, we propose a contract-based mechanism to tackle this challenge. Through the design of an optimal contract, the aggregator can provide incentives for EVs to participate in ancillary services to the power grid, match the aggregated energy rate with the service request and maximize its own profits. We prove that under mild conditions, the optimal contract-based mechanism takes a very simple form, i.e., the aggregator only needs to publish an optimal unit price to EVs, which is determined based on the statistical distribution of EVs' preferences. We then consider a more practical scenario where the aggregator has no prior knowledge regarding the statistical distribution, and study how the aggregator should learn the optimal unit price from its interactions with EVs. Simulation results are shown to verify the effectiveness of the proposed contract-based mechanism.
【Keywords】: contracts; electric vehicles; power grids; pricing; statistical distributions; EV; V2G networks; bidirectional power flow; contract-based approach; electric vehicles; optimal contract-based mechanism; optimal unit price; power grid; service request; statistical distribution; vehicle-to-grid technologies; Batteries; Contracts; Incentive schemes; Integrated circuits; Optimization; Power grids; Statistical distributions
【Paper Link】 【Pages】:1160-1168
【Authors】: Xuan Liu ; Kui Ren ; Yanling Yuan ; Zuyi Li ; Qian Wang
【Abstract】: The power network is one of the most critical infrastructures in a nation and is always a target of attackers. Recently, many schemes have been proposed to protect the security of power systems. However, most existing works do not consider the component attacking cost and ignore the relationship between the budget deployed on a component and its attacking cost. To address this problem, in this paper we introduce the concept of a budget-cost function, which describes the dynamic characteristics of component attacking cost, and propose a new model to protect the power grid against intentional attacks. In our model, the attackers have limited attacking capacity and aim to maximize the damage of attacks. On the other hand, the defenders aim to find the optimal strategy of budget deployment to limit the damage to an expected level. We formulate the above problem as a nonlinear optimization problem and solve it by employing the primal-dual interior-point method. To the authors' best knowledge, this is the first work that analyzes the optimal budget deployment strategy based on a budget-cost function. Simulations on the IEEE 5-bus system demonstrate the correctness and effectiveness of the proposed model and algorithms. The results provide a basis of budget investment for power systems.
【Keywords】: budgeting; nonlinear programming; power grids; power system economics; power system protection; power system security; IEEE 5-bus system; attack damage maximization; budget investment; budget-cost function concept; dynamic component attacking cost characteristics; intentional attacks; limited attacking capacity; nonlinear optimization problem; optimal budget deployment strategy; power grid interdiction; power network; power system security protection; power systems; primal-dual interior-point method; Computational modeling; Generators; Linear matrix inequalities; Load modeling; Optimization; Power systems; Vectors; attacking cost; budget-cost function; candidate line combination; optimal strategy; power system security; primal-dual interior-point method; redundant line combination
【Paper Link】 【Pages】:1169-1177
【Authors】: Xiaowen Gong ; Junshan Zhang ; Douglas Cochran
【Abstract】: Radar sensors, which actively transmit radio waves and collect RF energy scattered by objects in the environment, offer a number of advantages over purely passive sensors. An important issue in radar is that the transmitted energy may be scattered by objects that are not of interest as well as objects of interest (e.g., targets). The detection performance of radar systems is affected by such clutter as well as noise. Further, in many applications, clutter can be substantially stronger than the signals of interest. To combat the effect of clutter, a popular method is to take advantage of the Doppler frequency shift (DFS) extracted from the echo signal due to the relative motion of a target with respect to the radar. Unfortunately, a sensor coverage model that only depends on the distance to a target would fail to capture the DFS. In this paper, we set forth the concept of Doppler coverage for a network of spatially distributed radars. Specifically, a target is said to be Doppler-covered if, regardless of its direction of motion, there exists some radar in the network whose signalto-noise ratio (SNR) is sufficiently high and the DFS at that radar is sufficiently large. Based on the Doppler coverage model, we first propose an efficient method to characterize Dopplercovered regions for arbitrarily deployed radars. Then we design an algorithm for deriving the minimum radar density required to achieve Doppler coverage in a region under any polygonal deployment pattern, and further apply it to investigate the regular triangle based deployment.
【Keywords】: Doppler radar; Doppler shift; radar signal processing; radiowaves; Doppler frequency shift; Doppler sensor coverage model; RF energy; SNR; echo signal; passive sensor; polygonal deployment pattern; radar density; radar sensor network; radar system; radio waves; signal to noise ratio; target motion; triangle based deployment; Clutter; Doppler effect; Doppler radar; Radar detection; Sensors; Silicon; Doppler effect; critical sensor density; deterministic deployment; radar sensor network
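Why coverage here must depend on direction is easy to make concrete. With the standard monostatic Doppler shift f_d = 2 v cos(theta) / lambda, a radar looking perpendicular to the target's motion sees no shift at all, so a Doppler-covered target needs, for every heading, some radar that is both close enough and angularly well placed. The sketch below checks this by discretizing headings; distance is used as a crude SNR proxy, and all names are ours.

```python
import math

def doppler_shift(v, theta, wavelength):
    """Monostatic Doppler shift f_d = 2 v cos(theta) / lambda, where
    theta is the angle between the target velocity and the line from
    target to radar."""
    return 2.0 * v * math.cos(theta) / wavelength

def doppler_covered(target, radars, v, wavelength, snr_range, f_min,
                    n_dirs=360):
    """True if for EVERY heading some radar within `snr_range` of the
    target observes |f_d| >= f_min."""
    tx, ty = target
    for k in range(n_dirs):
        heading = 2.0 * math.pi * k / n_dirs
        ok = False
        for rx, ry in radars:
            if math.hypot(rx - tx, ry - ty) > snr_range:
                continue
            bearing = math.atan2(ry - ty, rx - tx)
            if abs(doppler_shift(v, heading - bearing, wavelength)) >= f_min:
                ok = True
                break
        if not ok:
            return False
    return True
```

A single radar can never Doppler-cover a target (headings near broadside give f_d near 0), whereas two radars at perpendicular bearings can, which hints at why the deployment geometry matters.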
【Paper Link】 【Pages】:1178-1186
【Authors】: Yilin Shen ; Dung T. Nguyen ; My T. Thai
【Abstract】: Region coverage and network connectivity are among the most important problems for the quality of service in wireless sensor networks. Unfortunately, due to sensor failures and hostile environments, such as active volcanic regions or battlefields, the emergence of coverage holes and disconnections among sensors is unavoidable. One way to handle this problem is to deploy mobile sensors in the network, forming what is called a hybrid sensor network, so that these mobile sensors can be relocated to heal the holes or maintain the network connectivity. However, because of the low power of mobile sensors, it is extremely challenging to design a fast and effective movement schedule for mobile sensors to (1) maintain both the region coverage and network connectivity at any time, and (2) minimize the moving energy consumption. In this paper, we develop an adaptive algorithm, the AHCH algorithm, to adaptively heal the holes with the guarantee of network connectivity without recomputing from scratch. By comparing the AHCH algorithm with the optimal solution at each time-slot, we show its expected adaptive approximation ratio as O(log |M|) with |M| mobile sensors in some special cases. In more general cases, we extend our AHCH algorithm to the InAHCH and GenAHCH algorithms, handling insufficient mobile sensors as well as disconnected regions, along with proofs of their corresponding theoretical adaptive approximation ratios. The experimental evaluation shows the effectiveness of our proposed algorithms with respect to both low energy consumption and hole healing latency.
【Keywords】: approximation theory; mobile communication; quality of service; wireless sensor networks; AHCH algorithm; GenAHCH algorithms; InAHCH algorithms; O(log |M|); adaptive approximation algorithms; coverage holes; energy consumption; hole healing; hybrid wireless sensor networks; mobile sensors; network connectivity; quality of service; region coverage; sensor failures; Approximation algorithms; Approximation methods; Energy consumption; Measurement; Mobile communication; Mobile computing; Schedules; Approximation Algorithms; Hybrid Sensor Networks; Sensor Coverage
【Paper Link】 【Pages】:1187-1194
【Authors】: Lidong Wu ; Hongwei Du ; Weili Wu ; Deying Li ; Jing Lv ; Wonjun Lee
【Abstract】: Given a requested area, the Minimum Connected Sensor Cover problem is to find a minimum number of sensors such that their communication ranges induce a connected graph and their sensing ranges cover the requested area. Several polynomial-time approximation algorithms have been designed previously in the literature. Their best known performance ratio is O(r ln n) where r is the link radius of the sensor network and n is the number of sensors. In this paper, we will present two polynomial-time approximation algorithms. The first one is a random algorithm, with probability 1 - ε, producing an approximation solution with performance ratio O(log³ n log log n), independent of r. The second one is a deterministic approximation with performance ratio O(r), independent of n.
【Keywords】: approximation theory; computational complexity; deterministic algorithms; graph theory; wireless sensor networks; connected graph; deterministic approximation algorithm; link radius; minimum connected sensor cover; polynomial-time approximation algorithms; probability 1-ε algorithm; random algorithm; wireless sensor network; Algorithm design and analysis; Approximation algorithms; Approximation methods; Educational institutions; Measurement; Sensors; Steiner trees
【Paper Link】 【Pages】:1195-1203
【Authors】: Zuoming Yu ; Jin Teng ; Xinfeng Li ; Dong Xuan
【Abstract】: In this paper, we study the problem of wireless coverage in bounded areas. Coverage is one of the fundamental requirements of wireless networks. There has been considerable research on optimal coverage of infinitely large areas. However, in the real world, the deployment areas of wireless networks are always geographically bounded. It is a much more challenging and significant problem to find optimal deployment patterns to cover bounded areas. In this paper, we approach this problem starting from the development of tight lower bounds on the number of nodes needed to cover a bounded area. Then we design several deployment patterns for different kinds of convex and concave shapes such as rectangles and L-shapes. These patterns require only a few more nodes than the theoretical lower bound, and can achieve efficient coverage. We have also carefully addressed and evaluated practical conditions such as coverage modeling and connectivity regarding our deployment patterns.
【Keywords】: radio networks; L-shapes; bounded areas; concave shapes; convex shapes; optimal deployment patterns; wireless network coverage; Approximation methods; Honeycomb structures; Shape; Tiles; Wireless networks; Wireless sensor networks
【Paper Link】 【Pages】:1204-1212
【Authors】: Longbo Huang ; Jean C. Walrand
【Abstract】: Benes networks are constructed with simple switch modules and have many advantages, including small latency and an almost linear number of switch modules. As circuit-switches, Benes networks are rearrangeably non-blocking, which implies that they are full-throughput as packet switches, with suitable routing. Routing in Benes networks can be done by time-sharing permutations. However, this approach requires centralized control of the switch modules and statistical knowledge of the traffic arrivals. We propose a backpressure-based routing scheme for Benes networks, combined with end-to-end congestion control. This approach achieves the maximal utility of the network and requires only four queues per module, independently of the size of the network.
【Keywords】: packet radio networks; packet switching; queueing theory; telecommunication congestion control; telecommunication network routing; telecommunication traffic; Benes packet network; backpressure-based routing scheme; centralized control; circuit-switches; end-to-end congestion control; packet switches; queues per module; statistical knowledge analysis; switch module; time-sharing permutation; traffic arrivals; Algorithm design and analysis; Optical switches; Resource management; Routing; Scheduling algorithms; Servers; Benes Network; Dynamic Control; Queueing; Stochastic Network Optimization
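The per-module decision in backpressure routing is local and simple, which is what makes a four-queues-per-module result plausible. A generic sketch of the rule follows (not the paper's Benes-specific construction, which additionally couples it with end-to-end congestion control):

```python
def backpressure_choice(q_local, q_neighbors):
    """Forward to the neighbor with the largest positive backlog
    differential q_local - q_neighbor; hold (return None) if no
    differential is positive."""
    best, best_diff = None, 0
    for name, backlog in q_neighbors.items():
        diff = q_local - backlog
        if diff > best_diff:
            best, best_diff = name, diff
    return best
```

No global traffic statistics are needed: packets drain toward lightly loaded modules purely from local queue comparisons, in contrast to the centralized time-sharing approach.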
【Paper Link】 【Pages】:1213-1221
【Authors】: Yaoqing Liu ; Beichuan Zhang ; Lan Wang
【Abstract】: The fast growth of global routing table size has been causing concerns that the Forwarding Information Base (FIB) will not be able to fit in existing routers' expensive line-card memory, and upgrades will lead to higher cost for network operators and customers. FIB Aggregation, a technique that merges multiple FIB entries into one, is probably the most practical solution since it is a software solution local to a router, and does not require any changes to routing protocols or network operations. While previous work on FIB aggregation mostly focuses on reducing table size, this work focuses on algorithms that can update compressed FIBs quickly and incrementally. Quick update is critical to routers because they have very limited time to process routing updates without impacting packet delivery performance. We have designed three algorithms: FIFA-S for smallest table size, FIFA-T for shortest running time, and FIFA-H for both small tables and short running time, and operators can use the one best suited to their needs. These algorithms significantly improve over existing work in terms of reducing routers' computation overhead and limiting impact on the forwarding plane while maintaining a good compression ratio.
【Keywords】: IP networks; computer network performance evaluation; routing protocols; FIFA; FIFA-S algorithm; FIFA-T algorithm; compressed FIB update; compression ratio maintenance; fast incremental FIB aggregation; forwarding information base; forwarding plane impact reduction; global routing table size; multiple FIB entry merging; network operations; packet delivery performance; router computation overhead reduction; router line-card memory; routing protocols; routing update; running time; Algorithm design and analysis; Binary trees; Educational institutions; Merging; Routing; Routing protocols; Software
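In its simplest form, FIB aggregation merges sibling prefixes that share a next hop into their common parent, which is forwarding-equivalent because the two siblings jointly cover the parent's address space. The fixed-point sketch below illustrates the idea on bit-string prefixes; the FIFA algorithms are trie-based, incremental, and handle far more cases than this.

```python
def aggregate(fib):
    """Repeatedly merge sibling prefixes (same next hop, differing only
    in the last bit) into their parent, until no merge applies.
    `fib` maps bit-string prefixes to next hops."""
    fib = dict(fib)
    changed = True
    while changed:
        changed = False
        for p in list(fib):
            if not p:
                continue  # the root prefix has no sibling
            sibling = p[:-1] + ("1" if p[-1] == "0" else "0")
            parent = p[:-1]
            if (p in fib and fib.get(sibling) == fib[p]
                    and parent not in fib):
                next_hop = fib.pop(p)
                fib.pop(sibling)
                fib[parent] = next_hop
                changed = True
    return fib
```

Here aggregate({"00": "A", "01": "A", "10": "B"}) yields {"0": "A", "10": "B"}: longest-prefix matching returns the same next hop for every address before and after, with one fewer entry.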
【Paper Link】 【Pages】:1222-1230
【Authors】: Layong Luo ; Gaogang Xie ; Kavé Salamatian ; Steve Uhlig ; Laurent Mathy ; Yingke Xie
【Abstract】: Virtual routers are increasingly being studied, as an important building block to enable network virtualization. In a virtual router platform, multiple virtual router instances coexist, each having its own FIB (Forwarding Information Base). In this context, memory scalability and route updates are two major challenges. Existing approaches addressed one of these challenges but not both. In this paper, we present a trie merging approach, which compactly represents multiple FIBs by a merged trie and a table of next-hop-pointer arrays to achieve good memory scalability, while supporting fast incremental updates by avoiding the use of leaf pushing during merging. Experimental results show that storing the merged trie requires limited memory space, e.g., we only need 10MB memory space to store the merged trie for 14 full FIBs from IPv4 core routers, achieving a memory reduction by 87% when compared to the total size of the individual tries. We implement our approach in an SRAM (Static Random Access Memory)-based lookup pipeline. Using our approach, an on-chip SRAM-based lookup pipeline with 5 external stages is sufficient to store the 14 full IPv4 FIBs. Furthermore, our approach can guarantee a minimum update overhead of one write bubble per update, as well as a high lookup throughput of one lookup per clock cycle, which corresponds to a throughput of 251 million lookups per second in the implementation.
【Keywords】: IP networks; SRAM chips; telecommunication network routing; virtualisation; FIB; IPv4 core routers; forwarding information base; incremental updates; leaf pushing; lookup throughput; memory reduction; memory scalability; network virtualization; next-hop-pointer arrays; on-chip SRAM-based lookup pipeline; route updates; static random access memory; trie merging approach; virtual router platform; write bubble; Arrays; IP networks; Memory management; Merging; Routing; Scalability
【Paper Link】 【Pages】:1231-1239
【Authors】: Ori Rottenstreich ; Marat Radan ; Yuval Cassuto ; Isaac Keslassy ; Carmi Arad ; Tal Mizrahi ; Yoram Revah ; Avinatan Hassidim
【Abstract】: With the rise of datacenter virtualization, the number of entries in forwarding tables is expected to scale from several thousands to several millions. Unfortunately, such forwarding table sizes can hardly be implemented today in on-chip memory. In this paper, we investigate the compressibility of forwarding tables. We first introduce a novel forwarding table architecture with separate encoding in each column. It is designed to keep supporting fast random accesses and fixed-width memory words. Then, we suggest an encoding whose memory requirement per row entry is guaranteed to be within a small additive constant of the optimum. Next, we analyze the common case of two-column forwarding tables, and show that such tables can be presented as bipartite graphs. We deduce graph-theoretical bounds on the encoding size. We also introduce an algorithm for optimal conditional encoding of the second column given an encoding of the first one. In addition, we explain how our architecture can handle table updates. Last, we evaluate our suggested encoding techniques on synthetic forwarding tables as well as on real-life tables.
【Keywords】: computer centres; computer networks; data compression; encoding; graph theory; storage management; telecommunication network routing; bipartite graph; data center virtualization; encoding size; encoding technique; fixed-width memory words; forwarding table architecture; forwarding table compression; forwarding table size; graph-theoretical bounds; memory requirement; on-chip memory; optimal conditional encoding; random access; row entry; table update; two-column forwarding table; Additives; Approximation methods; Dictionaries; Encoding; Optimization; Servers; System-on-chip
【Paper Link】 【Pages】:1240-1248
【Authors】: Qiben Yan ; Ming Li ; Feng Chen ; Tingting Jiang ; Wenjing Lou ; Y. Thomas Hou ; Chang-Tien Lu
【Abstract】: Passive monitoring by distributed wireless sniffers has been used to strategically capture the network traffic, as the basis of automatic network diagnosis. However, the traditional monitoring techniques fall short in cognitive radio networks (CRNs) due to the much larger number of channels to be monitored, and the secondary users' channel availability uncertainty imposed by primary user activities. To better serve CRNs, we propose a systematic passive monitoring framework for traffic collection using a limited number of sniffers in WiFi-like CRNs. We jointly consider primary user activity and secondary user channel access pattern to optimize the traffic capturing strategy. In particular, we exploit a non-parametric density estimation method to learn and predict secondary users' access pattern in an online fashion, which rapidly adapts to the users' dynamic behaviors and supports accurate estimation of merged access patterns from multiple users. We also design near-optimal monitoring algorithms that maximize two levels of quality-of-monitoring goals respectively, based on the predicted channel access patterns. The simulations and experiments show that our proposed framework outperforms the existing schemes significantly.
【Keywords】: cognitive radio; computerised monitoring; radio networks; telecommunication traffic; wireless LAN; wireless channels; CRN; Wi-Fi; automatic network traffic diagnosis; cognitive radio network; distributed wireless sniffer; near-optimal monitoring algorithm; nonparametric density estimation method; nonparametric passive traffic monitoring; primary user activity; quality-of-monitoring goal; secondary user channel access pattern; systematic passive monitoring framework; traffic capturing strategy; Channel estimation; Data models; Estimation; Inspection; Monitoring; Sensors; Switches
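The density-estimation step can be illustrated with a plain Gaussian kernel density estimate over observed channel access times. This is the textbook estimator; the paper's online variant, kernel choice, and bandwidth adaptation are more elaborate, and all names here are ours.

```python
import math

def kde(samples, x, bandwidth):
    """Gaussian kernel density estimate at point x from `samples`."""
    if not samples:
        return 0.0
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                      for s in samples)
```

One simple way to estimate a merged access pattern from several users is to pool their observed samples before estimating, in the spirit of the abstract's multi-user merging.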
【Paper Link】 【Pages】:1249-1257
【Authors】: Yang Liu ; Mingyan Liu
【Abstract】: In this paper we study opportunistic spectrum access (OSA) policies in a multiuser multichannel random access setting, where users perform channel probing and switching in order to obtain better channel condition or higher instantaneous transmission quality. However, unlike many prior works in this area, which include channel probing and switching policies for a single user to exploit spectral diversity, and probing and access policies for multiple users over a single channel to exploit temporal and multiuser diversity, in this study we consider the collective switching of multiple users over multiple channels. In addition, we consider finite arrivals, i.e., users are not assumed to always have data to send, and demands for the channel follow a certain arrival process. Under such a scenario, the users' ability to opportunistically exploit temporal diversity (the temporal variation in channel quality over a single channel) and spectral diversity (quality variation across multiple channels at a given time) is greatly affected by the level of congestion in the system. We investigate the optimal decision process in this case, and evaluate the extent to which congestion affects potential gains from opportunistic dynamic channel switching.
【Keywords】: diversity reception; multi-access systems; multiuser channels; telecommunication switching; channel condition; channel probing; channel quality; collective switching; finite arrival; multiple users; multiuser diversity; multiuser dynamic channel access; multiuser multichannel random access; opportunistic dynamic channel switching; opportunistic spectrum access policy; optimal decision process; spectral diversity; switching policies; temporal diversity; temporal variation; Data communication; Delays; Diversity methods; Sensors; Switches; Throughput; Wireless communication
【Paper Link】 【Pages】:1258-1266
【Authors】: Wessam Afifi ; Marwan Krunz
【Abstract】: Inspired by recent developments in full-duplex communications, we propose and study new modes of operation for cognitive radios with the goal of achieving improved primary user (PU) detection and/or secondary user (SU) throughput. Specifically, we consider an opportunistic PU/SU setting in which the SU is equipped with partial/complete self-interference suppression (SIS), enabling it to transmit and receive/sense at the same time. Following a brief sensing period, the SU can operate in either simultaneous transmit-and-sense (TS) mode or simultaneous transmit-and-receive (TR) mode. We analytically study the performance metrics for the two modes, namely the detection and false-alarm probabilities, the PU outage probability, and the SU throughput. From this analysis, we evaluate the sensing-throughput tradeoff for both modes. Our objective is to find the optimal sensing and transmission durations for the SU that maximize its throughput subject to a given outage probability. We also explore the spectrum awareness/efficiency tradeoff that arises from the two modes by determining an efficient adaptive strategy for the SU link. This strategy has a threshold structure, which depends on the PU traffic load. Our study considers both perfect and imperfect sensing as well as perfect/imperfect SIS.
【Keywords】: cognitive radio; interference suppression; probability; radio spectrum management; telecommunication traffic; PU outage probability; PU traffic load; SU throughput; TR mode; TS mode; adaptive strategy; cognitive radio system; false-alarm probability; full-duplex communication; opportunistic PU-SU setting; performance metrics; primary user detection; secondary user throughput; self-interference suppression; sensing-throughput tradeoff; simultaneous transmit-and-receive mode; simultaneous transmit-and-sense mode; spectrum awareness-efficiency tradeoff; spectrum efficiency; High definition video; Interference; Measurement; Receivers; Sensors; Signal to noise ratio; Throughput
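The sensing-throughput tradeoff studied above is commonly formalized with the secondary user sensing for a duration tau out of a frame of length T and transmitting in the remainder. As an illustration only (the false-alarm model and every parameter value below are invented for this sketch, not taken from the paper), a small Python sweep locates the throughput-maximizing sensing duration:

```python
import math

def su_throughput(tau, T, C, p_fa):
    """Secondary-user throughput for a frame of length T with sensing
    time tau: the SU transmits for the remaining (T - tau) and gains
    capacity C only when no false alarm is raised (probability
    1 - p_fa(tau)). All names and numbers here are illustrative."""
    return (T - tau) / T * (1.0 - p_fa(tau)) * C

# Toy false-alarm model: longer sensing lowers the false-alarm rate.
p_fa = lambda tau: math.exp(-50.0 * tau)

T, C = 0.1, 6.0  # frame length (s) and link capacity (bits/s/Hz), invented
taus = [i * 0.001 for i in range(1, 100)]
best_tau = max(taus, key=lambda t: su_throughput(t, T, C, p_fa))
```

The sweep exhibits the tradeoff in the abstract: sensing too briefly wastes throughput on false alarms, sensing too long leaves too little time to transmit, and the optimum lies strictly in between.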
【Paper Link】 【Pages】:1267-1275
【Authors】: Xiang Sheng ; Jian Tang ; Chenfei Gao ; Weiyi Zhang ; Chonggang Wang
【Abstract】: With wireless resource virtualization, multiple Mobile Virtual Network Operators (MVNOs) can be supported over a shared physical wireless network, and the traffic load in a Base Station (BS) can be easily migrated to more power-efficient BSs in its neighborhood such that idle BSs can be turned off or put to sleep to save power. In this paper, we propose to leverage load migration and BS consolidation for green communications and consider a power-efficient network planning problem in virtualized Cognitive Radio Networks (CRNs), with the objective of minimizing total power consumption while meeting the traffic load demand of each MVNO. First, we present a Mixed Integer Linear Program (MILP) to provide optimal solutions. Then we present a general optimization framework to guide algorithm design, which solves two subproblems, channel assignment and load allocation, in sequence. For channel assignment, we present a (Δ+1)-approximation algorithm (where Δ is the maximum number of BSs a BS can potentially interfere with). For load allocation, we present a polynomial-time optimal algorithm for the special case where BSs are power-proportional, as well as two effective heuristic algorithms for the general case. In addition, we present an effective heuristic algorithm that jointly solves the two subproblems. Extensive simulation results show that the proposed algorithms produce close-to-optimal solutions and, moreover, achieve over 45% power savings compared to a baseline algorithm that does not migrate loads or consolidate BSs.
【Keywords】: cognitive radio; communication complexity; integer programming; linear programming; radio networks; telecommunication network planning; telecommunication traffic; virtualisation; MILP; MVNO; base station consolidation; green communications; leveraging load migration; mixed integer linear programming; mobile virtual network operators; physical wireless network; polynomial-time optimal algorithm; power-efficient network planning; traffic loads; virtualized cognitive radio networks; wireless resource virtualization; Approximation algorithms; Optimization; Power demand; Resource management; Virtualization; Wireless communication; Wireless sensor networks; Green wireless communications; base station consolidation; cognitive radio; load migration; virtualization
【Paper Link】 【Pages】:1276-1284
【Authors】: Yadi Ma ; Thyaga Nandagopal ; Krishna P. N. Puttaswamy ; Suman Banerjee
【Abstract】: Geographically distributed storage is an important method of ensuring high data availability in cloud computing and storage systems. With the increasing demand for moving file systems to the cloud, current methods of providing such enterprise-grade resiliency are very inefficient. For example, replication based methods incur large storage cost though they provide low access latencies. While erasure coded schemes reduce storage cost, they are associated with large access latencies and high bandwidth cost. In this paper, we propose a novel scheme named CAROM, an ensemble of replication and erasure codes, to provide resiliency in cloud file systems with high efficiency. While maintaining the same consistency semantics seen in today's cloud file systems, CAROM provides the benefit of low bandwidth cost, low storage cost, and low access latencies. We perform a large-scale evaluation using real-world file system traces and demonstrate that CAROM outperforms replication based schemes in storage cost by up to 60% and erasure coded schemes in bandwidth cost by up to 43%, while maintaining low access latencies close to those in replication based schemes.
【Keywords】: cloud computing; costing; error correction codes; redundancy; storage management; CAROM; access latencies; bandwidth cost; cloud computing systems; cloud file system resiliency; cloud storage systems; data availability; erasure codes; geographically distributed storage; replication codes; storage cost; Availability; Bandwidth; Cloud computing; Encoding; Reed-Solomon codes; Semantics; Servers
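The storage-cost gap between replication and erasure coding that motivates schemes like CAROM is simple arithmetic: r-way replication stores r copies and tolerates r-1 node losses, while a (k+m, k) MDS erasure code stores (k+m)/k times the data and tolerates m losses. A minimal sketch, with parameters chosen only for illustration (they are not CAROM's actual configuration):

```python
def replication_overhead(r):
    """Storage blow-up factor for r-way replication (tolerates r-1 losses)."""
    return float(r)

def erasure_overhead(k, m):
    """Storage blow-up factor for a (k+m, k) MDS erasure code, e.g. a
    Reed-Solomon-style code (tolerates m losses)."""
    return (k + m) / k

# Illustrative comparison: 3-way replication vs a (9, 6) code.
rep = replication_overhead(3)   # tolerates 2 failures
ec = erasure_overhead(6, 3)     # tolerates 3 failures at half the cost
savings = 1 - ec / rep
```

Here the code halves storage relative to 3-way replication while tolerating more failures, which is exactly the cost side of the tradeoff; the latency and bandwidth penalties the abstract mentions come from having to contact k servers and decode on reads.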
【Paper Link】 【Pages】:1285-1293
【Authors】: Marco Valerio Barbera ; Sokol Kosta ; Alessandro Mei ; Julinda Stefa
【Abstract】: The cloud seems to be an excellent companion of mobile systems, to alleviate battery consumption on smartphones and to back up users' data on-the-fly. Indeed, many recent works focus on frameworks that enable mobile computation offloading to software clones of smartphones on the cloud and on designing cloud-based backup systems for the data stored in our devices. Both mobile computation offloading and data backup involve communication between the real devices and the cloud. This communication certainly does not come for free. It costs in terms of bandwidth (the traffic overhead to communicate with the cloud) and in terms of energy (computation and use of network interfaces on the device). In this work we study the feasibility of both mobile computation offloading and mobile software/data backups in real-life scenarios. In our study we assume an architecture where each real device is associated with a software clone on the cloud. We consider two types of clones: the off-clone, whose purpose is to support computation offloading, and the back-clone, which comes into use when a restore of the user's data and apps is needed. We give a precise evaluation of the feasibility and costs of both off-clones and back-clones in terms of bandwidth and energy consumption on the real device. We achieve this through measurements done on a real testbed of 11 Android smartphones and an equal number of software clones running on the Amazon EC2 public cloud. The smartphones were used as the primary mobile devices by the participants for the whole experiment duration.
【Keywords】: cloud computing; energy consumption; mobile computing; smart phones; Amazon EC2 public cloud; Android smartphones software clones; back-clone; battery consumption; cloud-based backup systems; energy consumption; mobile cloud computing bandwidth; mobile cloud computing energy cost; mobile computation offloading; mobile software-data backup; off-clone; Bandwidth; Batteries; Cloning; IEEE 802.11 Standards; Mobile communication; Smart phones; Software
【Paper Link】 【Pages】:1294-1302
【Authors】: Lluis Pamies-Juarez ; Anwitaman Datta ; Frédérique E. Oggier
【Abstract】: To achieve reliability in distributed storage systems, data has usually been replicated across different nodes. However, the increasing volume of data to be stored has motivated the introduction of erasure codes, a storage-efficient alternative to replication particularly suited for archival in data centers, where old (rarely accessed) datasets can be erasure encoded while replicas are maintained only for the latest data. Many recent works consider the design of new storage-centric erasure codes for improved repairability. In contrast, this paper addresses the migration from replication to encoding: traditionally, erasure coding is an atomic operation in that a single node with the whole object encodes and uploads all the encoded pieces. Although large datasets can be concurrently archived by distributing individual object encodings among different nodes, the network and computing capacity of individual nodes constrain the archival process due to such atomicity. We propose a new pipelined coding strategy that distributes the network and computing load of single-object encodings among different nodes, which also speeds up multiple-object archival. We further present RapidRAID codes, an explicit family of pipelined erasure codes which provides fast archival without compromising either data reliability or storage overheads. Finally, we provide a real implementation of RapidRAID codes and benchmark its performance using both a cluster of 50 nodes and a set of Amazon EC2 instances. Experiments show that RapidRAID codes reduce a single object's coding time by up to 90%, while the reduction is up to 20% when multiple objects are encoded concurrently.
【Keywords】: computer centres; distributed databases; forward error correction; pipeline processing; Amazon EC2 instances; RapidRAID codes; atomic operation; data centers; data reliability; distributed storage systems; fast data archival; pipelined coding strategy; pipelined erasure codes; single-object encodings; storage overheads; storage-centric erasure codes; Distributed databases; Encoding; Fault tolerant systems; Pipelines; Redundancy; archival; distributed storage; erasure codes; migration
【Paper Link】 【Pages】:1303-1311
【Authors】: Yu Hua ; Bin Xiao ; Xue Liu
【Abstract】: Cloud computing applications face the challenge of dealing with huge volumes of data, which calls for fast approximate queries to enhance system scalability and improve quality of service, especially when users are not aware of exact query inputs. Locality-Sensitive Hashing (LSH) can support such approximate queries, but unfortunately suffers from imbalanced load and space inefficiency among distributed data servers, which severely limits query accuracy and incurs long query latency between users and cloud servers. In this paper, we propose a novel scheme, called NEST, which offers an easy-to-use and cost-effective approximate query service for cloud computing. The novelty of NEST is to leverage cuckoo-driven locality-sensitive hashing to find similar items, which are then placed close together to obtain load-balanced buckets in hash tables. NEST hence carries out flat and manageable addressing in adjacent buckets, and obtains constant-scale query complexity even in the worst case. The benefits of NEST include increased space utilization and fast query responses. Theoretical analysis and extensive experiments on a large-scale cloud testbed demonstrate the salient properties of NEST in meeting the needs of approximate query services in cloud computing environments.
【Keywords】: cloud computing; computational complexity; file organisation; quality of service; query processing; resource allocation; LSH; NEST; cloud computing; cloud servers; constant-scale query complexity; cuckoo-driven locality-sensitive hashing; distributed data servers; fast approximate queries; fast query response; hash tables; imbalanced load; large-scale cloud testbed; load balancing buckets; locality-aware approximate query service; quality of service improvement; query accuracy; query latency; space inefficiency; space utilization; system scalability enhancement; Artificial neural networks; Cloud computing; Complexity theory; Educational institutions; Servers; Standards; Vectors
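Locality-sensitive hashing, the primitive NEST builds on, maps similar items to the same bucket with high probability. A minimal random-hyperplane sketch of the idea (this is the classic sign-of-dot-product family, not NEST's cuckoo-driven variant; all dimensions and vectors are invented):

```python
import random

def lsh_signature(vec, planes):
    """Random-hyperplane LSH: one sign bit per hyperplane. Vectors with
    a small angle between them tend to get identical signatures, and so
    land in the same hash bucket."""
    bits = 0
    for plane in planes:
        dot = sum(v * w for v, w in zip(vec, plane))
        bits = (bits << 1) | (1 if dot >= 0 else 0)
    return bits

random.seed(7)
dim, nplanes = 8, 12
planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(nplanes)]

a = [1.0] * dim
b = [2.0] * dim   # same direction as a, so the signature is identical
sig_a, sig_b = lsh_signature(a, planes), lsh_signature(b, planes)
```

Approximate query processing then hashes the query the same way and inspects only the matching (and, in NEST's case, adjacent) buckets instead of scanning all servers.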
【Paper Link】 【Pages】:1312-1320
【Authors】: Rafael P. Laufer ; Leonard Kleinrock
【Abstract】: Due to a poor understanding of the interactions among transmitters, wireless multihop networks have commonly been stigmatized as unpredictable in nature. Even elementary questions regarding the throughput limitations of these networks cannot be answered in general. In this paper we investigate the behavior of wireless multihop networks using carrier sense multiple access with collision avoidance (CSMA/CA). Our goal is to understand how the transmissions of a particular node affect the medium access, and ultimately the throughput, of other nodes in the network. We introduce a theory which accurately models the behavior of these networks and show that, contrary to popular belief, their performance is easily predictable and can be described by a system of equations. Using the proposed theory, we provide the analytical expressions necessary to fully characterize the capacity region of any wireless CSMA/CA multihop network. We show that this region is nonconvex in general and entirely agnostic to the probability distributions of all network parameters, depending only on their expected values.
【Keywords】: carrier sense multiple access; radio networks; statistical distributions; carrier sense multiple access with collision avoidance; probability distributions; transmitters; wireless CSMA/CA multihop network capacity; wireless multihop network behavior modelling; Multiaccess communication; Radiation detectors; Spread spectrum communication; Steady-state; Throughput; Transmitters; Wireless communication
【Paper Link】 【Pages】:1321-1329
【Authors】: Wang Liu ; Kejie Lu ; Jianping Wang ; Yi Qian ; Liusheng Huang ; Jun Liu ; Dapeng Oliver Wu
【Abstract】: In mobile ad hoc networks (MANETs), it is important to understand the throughput-delay trade-off (TD trade-off) problem in large-scale scenarios. In the literature, the TD trade-off problem has been studied extensively, and many studies are based on the independent and identically distributed (i.i.d.) mobility model, in which each node can randomly move to any place in the network after every time slot. Although the i.i.d. model has been widely used, it cannot fully represent MANETs in which nodes change positions less frequently. To characterize such MANETs, in this paper we propose a generalized i.i.d. (g.i.i.d.) mobility model, in which each node moves once after every 1/f (0 < f ≤ 1) time slots and remains static between two moves. To investigate the TD trade-off under the g.i.i.d. model, we develop a novel multi-relay multi-hop (MRMH) scheme that exploits the opportunities of multi-hop transmissions when the network is static. Furthermore, to enable the multi-hop transmissions, we construct a new percolation highway system, which has not previously been used in TD trade-off analysis for MANETs. Using the proposed MRMH scheme, we develop and prove constructive bounds for throughput and delay in MANETs with different scales of f. Our constructive bound is asymptotically optimal for f = 1 (i.e., the i.i.d. model).
【Keywords】: mobile ad hoc networks; mobility management (mobile radio); MRMH scheme; TD trade-off; constructive bounds; generalized mobility model; identically distributed mobility model; large-scale MANET; mobile ad hoc networks; multihop transmissions; multirelay multihop scheme; throughput-delay trade-off; Ad hoc networks; Delays; Educational institutions; Mobile computing; Relays; Road transportation; Throughput
【Paper Link】 【Pages】:1330-1338
【Authors】: Huacheng Zeng ; Yi Shi ; Y. Thomas Hou ; Wenjing Lou ; Sastry Kompella ; Scott F. Midkiff
【Abstract】: Interference alignment (IA) is a major advance in information theory. Despite its rapid development in the information theory community, most results on IA remain point-to-point or single-hop, and little progress has been made on IA in the context of multi-hop wireless networks. The goal of this paper is to make a concrete step toward advancing the IA technique in multi-hop MIMO networks. We present an IA model consisting of a set of constraints at a transmitter and a receiver that can be used to determine a subset of interfering streams for IA. Based on this IA model, we develop an IA optimization framework for a multi-hop MIMO network. For performance evaluation, we compare the performance of a network throughput optimization problem under our proposed IA framework and the same problem when IA is not employed. Simulation results show that the use of IA can significantly decrease the degrees-of-freedom (DoF) consumption for interference cancellation (IC), thereby improving network throughput.
【Keywords】: MIMO communication; interference suppression; optimisation; radio links; radio receivers; radio transmitters; IA optimization; information theory; interference alignment; multihop MIMO network; multihop wireless network; network throughput; point-to-point hop; receiver; transmitter; Integrated circuits; Interference; MIMO; Receivers; Spread spectrum communication; Transmitters; Vectors
【Paper Link】 【Pages】:1339-1347
【Authors】: Guanhong Pei ; Anil Kumar S. Vullikanti
【Abstract】: In this paper, we develop the first rigorous distributed algorithm for link scheduling in the SINR model under any length-monotone sub-linear power assignment. Our algorithms give constant-factor approximation guarantees, matching the bounds of the sequential algorithms for these problems, with provable bounds on the running time in terms of the graph topology. We also study a related and fundamental problem of local broadcasting for uniform power levels, and obtain similar bounds. These problems are much more challenging in the SINR model than in the more standard graph-based interference models because of the non-locality of the SINR model. Our algorithms are randomized and rely crucially on physical carrier sensing for the distributed communication steps. We find that the wireless devices' capability for duplex or half-duplex communication significantly impacts performance. Our main technique involves the distributed computation of affectance and a construct called a ruling, both of which are likely to be useful in other scheduling problems in the SINR model. We also study the empirical performance of our algorithms, and find that the performance depends on the topology and that the approximation ratio is very close to that of the best sequential algorithm.
【Keywords】: approximation theory; broadcasting; distributed algorithms; graph theory; radio networks; radiofrequency interference; scheduling; telecommunication links; SINR model; distributed approximation algorithms; distributed communication steps; distributed computation; duplex/halfduplex communication; factor approximation; graph based interference models; graph topology; length-monotone sub-linear power assignments; local broadcasting; maximum link scheduling; physical carrier sensing; physical interference model; sequential algorithms; wireless device capability; Approximation algorithms; Approximation methods; Computational modeling; Distributed algorithms; Interference; Signal to noise ratio; Silicon
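The non-locality that makes the SINR (physical interference) model hard comes from its feasibility condition: a set of links may be active concurrently only if every receiver's signal-to-interference-plus-noise ratio clears a threshold β, where interference sums over all other transmitters, however distant. A toy feasibility check with an invented 1-D path-loss geometry (all names, positions, and constants are illustrative, not from the paper):

```python
def sinr_feasible(links, power, gain, noise, beta):
    """Physical-model check: for each link (tx, rx), the received signal
    divided by (noise + summed interference from every other active
    transmitter) must reach the SINR threshold beta."""
    for i, (tx_i, rx_i) in enumerate(links):
        signal = power[tx_i] * gain(tx_i, rx_i)
        interference = sum(power[tx_j] * gain(tx_j, rx_i)
                           for j, (tx_j, _) in enumerate(links) if j != i)
        if signal / (noise + interference) < beta:
            return False
    return True

def make_gain(pos, alpha=3.0):
    """Path-loss gain d^-alpha between 1-D node positions."""
    return lambda u, v: abs(pos[u] - pos[v]) ** (-alpha)

far = {0: 0.0, 1: 1.0, 2: 10.0, 3: 11.0}   # well-separated link pair
near = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}    # tightly packed link pair
power = {0: 1.0, 2: 1.0}
links = [(0, 1), (2, 3)]

far_ok = sinr_feasible(links, power, make_gain(far), noise=1e-4, beta=2.0)
near_ok = sinr_feasible(links, power, make_gain(near), noise=1e-4, beta=2.0)
```

The same two links are schedulable together when far apart but not when packed closely, which is the global coupling a distributed scheduler in this model must contend with.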
【Paper Link】 【Pages】:1348-1356
【Authors】: Mahanth Gowda ; Souvik Sen ; Romit Roy Choudhury ; Sung-Ju Lee
【Abstract】: Cooperative packet recovery has been widely investigated in wireless networks, where corrupt copies of a packet are combined to recover the original packet. While previous work such as MRD (Multi Radio Diversity) and Soft apply combining to bits and bit-confidences, combining at the symbol level has been avoided. The reason is rooted in the prohibitive overhead of sharing raw symbol information between different APs of an enterprise WLAN. We present Epicenter that overcomes this constraint, and combines multiple copies of incorrectly received “symbols” to infer the actual transmitted symbol. Our core finding is that symbols need not be represented in full fidelity - coarse representation of symbols can preserve most of their diversity, while substantially lowering the overhead. We then develop a rate estimation algorithm that actually exploits symbol level combining. Our USRP/GNURadio testbed confirms the viability of our ideas, yielding 40% throughput gain over Soft, and 25-90% over 802.11. While the gains are modest, we believe that they are realistic, and available with minimal modifications to today's EWLAN systems.
【Keywords】: cooperative communication; packet radio networks; wireless LAN; EWLAN systems; Multi Radio Diversity; USRP/GNURadio testbed; cooperative packet recovery; enterprise WLAN; rate estimation algorithm; symbol level combining; wireless networks; Algorithm design and analysis; Diversity reception; Estimation; IEEE 802.11 Standards; Modulation; Throughput; Vectors
【Paper Link】 【Pages】:1357-1365
【Authors】: Jin Tang ; Yu Cheng
【Abstract】: The open and distributed nature of IEEE 802.11 based wireless networks gives selfish users the opportunity to gain an unfair share of the network throughput by manipulating protocol parameters, e.g., by using a smaller contention window. In this paper, we propose an adaptive approach for real-time detection of such selfish misbehavior. An adaptive detector is necessary in practice, as it needs to deal with different misbehaving scenarios in which the number of selfish users and the contention windows exploited by each selfish user differ. We first design a basic misbehavior detector based on the non-parametric cumulative sum (CUSUM) test. While the basic detector can be modeled with a Markov chain, we further resort to the Markov decision process (MDP) technique to enhance the basic detector to an adaptive design. In particular, we develop a novel reward function from which the optimal policy of the MDP can be determined. The optimal policy indicates how the adaptive detector should operate in each state. Another important feature of our detector is that it enables an effective iterative method to detect multiple misbehaving nodes. We present thorough simulation results to confirm the accuracy of our analysis, and demonstrate the efficiency of the adaptive detector compared to a static solution.
【Keywords】: Markov processes; iterative methods; protocols; radio networks; wireless LAN; CUSUM test; IEEE 802.11 based wireless networks; MDP technique; Markov chain; Markov decision process; Markov decision process technique; adaptive approach; adaptive detector; iterative method; nonparametric cumulative sum; optimal policy; protocol parameters; selfish misbehavior detection; smaller contention window; Delays; Detectors; IEEE 802.11 Standards; Markov processes; Mathematical model; Probability; Protocols
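The non-parametric CUSUM test at the core of the basic detector accumulates positive deviations of an observed statistic above its expected value and raises an alarm when the running sum crosses a threshold. A minimal sketch; the monitored statistic (a node's share of channel accesses) and all numbers are invented for illustration, not the paper's exact formulation:

```python
def cusum_detect(samples, target_mean, drift, threshold):
    """Non-parametric CUSUM: accumulate deviations of each sample above
    (target_mean + drift), clipped at zero; flag misbehavior when the
    running sum exceeds the threshold. Returns the 1-based index of the
    sample that triggered the alarm, or None if no alarm is raised."""
    s = 0.0
    for i, x in enumerate(samples, 1):
        s = max(0.0, s + (x - target_mean - drift))
        if s > threshold:
            return i
    return None

# Fair share among 4 contending nodes is 0.25 of the accesses.
fair = [0.25, 0.24, 0.26, 0.25] * 10
# A node that shrinks its contention window grabs a larger share.
greedy = [0.25] * 5 + [0.45, 0.5, 0.48, 0.47, 0.5] * 4

alarm_fair = cusum_detect(fair, target_mean=0.25, drift=0.02, threshold=0.5)
alarm_greedy = cusum_detect(greedy, target_mean=0.25, drift=0.02, threshold=0.5)
```

The drift term absorbs normal fluctuation so the fair trace never accumulates, while the greedy trace triggers within a few samples of the misbehavior starting; tuning drift and threshold per scenario is precisely what motivates the paper's adaptive MDP layer.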
【Paper Link】 【Pages】:1366-1374
【Authors】: Yifan Zhang ; Qun Li
【Abstract】: We propose HoWiES, a system that saves energy consumed by WiFi interfaces in mobile devices with the assistance of ZigBee radios. The core component of HoWiES is a WiFi-ZigBee message delivery scheme that enables WiFi radios to convey different messages to ZigBee radios in mobile devices. Based on this scheme, we design three protocols that target three WiFi energy saving opportunities: scanning, standby, and wakeup. We have implemented the HoWiES system with two mobile device platforms and two AP platforms. Our real-world experimental evaluation shows that our system can convey thousands of different messages from WiFi radios to ZigBee radios with an accuracy over 98%, and that our energy saving protocols, while maintaining a wakeup delay comparable to that of the standard 802.11 power save mode, save 88% and 85% of the energy consumed in the scanning and standby states, respectively.
【Keywords】: Zigbee; mobile radio; protocols; wireless LAN; AP platforms; HoWiES; WiFi energy saving opportunities; WiFi interfaces; WiFi radios; WiFi-ZigBee message delivery scheme; ZigBee assisted WiFi energy savings; ZigBee radios; energy consumption saving; energy saving protocols; mobile devices platforms; scanning opportunity; standard 802.11 power save mode; standby opportunity; wakeup opportunity; Decoding; IEEE 802.11 Standards; Power demand; Protocols; Smart phones; Zigbee
【Paper Link】 【Pages】:1375-1383
【Authors】: Pei Huang ; Xi Yang ; Li Xiao
【Abstract】: Advances in wireless communication techniques have increased wireless physical layer (PHY) data rates by hundreds of times in a dozen years. These high PHY data rates, however, have not been translated into commensurate throughput gains, due to overheads incurred by medium access control (MAC) and the PHY convergence procedure. At high PHY data rates, the time used for collision avoidance (CA) at the MAC layer and the time used for the PHY convergence procedure can easily exceed the time used for transmission of an actual data frame. Recent work attempts to reduce the CA overhead by shrinking the backoff time slot, but this introduces more collisions in the presence of hidden terminals, because tiny backoff slots can no longer de-synchronize hidden terminals, leading to persistent collisions among them. As collision detection (CD) in wireless communication has recently become feasible, some protocols migrate random backoff from the time domain to the frequency domain, but they fail to address the resulting high collision probability. We investigate the practical issues of CD in the frequency domain and introduce a binary mapping scheme to reduce the collision probability. Based on the binary mapping, a bitwise arbitration (BA) mechanism is devised to grant only one transmitter the permission to initiate data transmission in a contention. With the low collision probability achieved in a short, bounded arbitration phase, throughput is significantly improved by our proposed WiFi-BA. Because collisions are unlikely to happen, unfairness caused by the capture effect of radios is also reduced. The bitwise arbitration mechanism can further be set to let high-priority messages get through unimpeded, making WiFi-BA suitable for real-time prioritized communication. We validate the effectiveness of WiFi-BA through an FPGA implementation on the USRP E110. Performance evaluation demonstrates that WiFi-BA is more efficient than current Wi-Fi solutions.
【Keywords】: access protocols; frequency-domain analysis; probability; time-domain analysis; wireless LAN; FPGA; MAC layer; PHY convergence procedure; PHY data rates; USRP E110; WiFi-BA; backoff time slot size; binary mapping scheme; bitwise arbitration mechanism; collision avoidance; collision detection; frequency domain; high collision probability; high speed multicarrier wireless networks; low collision probability; medium access control; random backoff; real time prioritized communication; time domain; wireless communication techniques; wireless physical layer data rates; Binary codes; Data communication; Frequency-domain analysis; IEEE 802.11 Standards; Interference; OFDM; Wireless communication
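Bitwise arbitration of the kind the abstract describes resolves contention one bit at a time: in each round, contenders asserting the dominant bit survive and the others defer, so with distinct binary IDs exactly one transmitter remains after a bounded number of rounds. A CAN-bus-style sketch of the principle (the fixed ID assignment here is invented for illustration; WiFi-BA's actual binary mapping is randomized per contention):

```python
def arbitrate(ids):
    """Bitwise arbitration over distinct binary IDs, most significant
    bit first: contenders asserting the dominant bit (1) eliminate the
    rest in that round; if nobody asserts it, everyone survives. With
    distinct IDs the unique survivor is the largest ID, found in
    bit_length rounds instead of a long random time-domain backoff."""
    alive = list(ids)
    for bit in reversed(range(max(c.bit_length() for c in alive))):
        dominant = [c for c in alive if (c >> bit) & 1]
        if dominant:
            alive = dominant
    return alive[0]

winner = arbitrate([5, 12, 9])   # 0b0101, 0b1100, 0b1001
```

Prioritization falls out naturally: giving high-priority traffic IDs with leading dominant bits lets it win every contention unimpeded, matching the real-time behavior claimed for WiFi-BA.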
【Paper Link】 【Pages】:1384-1392
【Authors】: Xuefeng Liu ; Jiannong Cao ; Shaojie Tang
【Abstract】: Reliably detecting events in the presence of faulty nodes, particularly nodes with faulty readings, is a fundamental task in wireless sensor networks (WSNs). Existing fault-tolerant event detection schemes usually 'mask' the effect of faulty readings through high-level fusion techniques. However, in some applications such as structural health monitoring (SHM) and volcano monitoring, detecting the events of interest requires low-level data collaboration among multiple sensors. This implies that the effect of faulty readings cannot be masked once they are involved in event detection; nodes with faulty readings must first be detected and removed from the system. Unfortunately, most existing techniques for detecting faulty nodes can only take boolean or scalar data as input, whereas in these applications each sensor generates a sequence of dynamic data. In this paper, we address these issues using SHM as an example. Detecting events in SHM (i.e., structural damage) requires low-level collaboration among multiple sensors, and each sensor generates a sequence of dynamic vibration data. We propose a fault-tolerant event detection scheme for SHM called FTED, which introduces three novel techniques: (1) distributed extraction of features for faulty node detection, (2) iterative faulty node detection (I-FUND), and (3) distributed event detection. In particular, I-FUND takes vectors as input and can even handle the 'element mismatch problem', where comparable elements in vectors are located at unknown, different positions. The effectiveness of FTED is demonstrated through both simulations and real experiments.
【Keywords】: condition monitoring; fault tolerance; structural engineering; wireless sensor networks; FTED; WSN; data collaboration; distributed event detection; dynamic data; dynamic vibrational data; element mismatch problem; fault tolerant complex event detection; fault tolerant event detection; faulty reading; high level fusion technique; iterative faulty node detection; multiple sensors; reliably detecting event; scalar data; structural health monitoring; volcano monitoring; wireless sensor network; Event detection; Fault tolerance; Feature extraction; Sensors; Shape; Vectors
【Paper Link】 【Pages】:1393-1401
【Authors】: Long Cheng ; Yu Gu ; Tian He ; Jianwei Niu
【Abstract】: Reliable flooding in wireless sensor networks (WSNs) is desirable for a broad range of applications and network operations, and has been extensively investigated. However, relatively little work has been done on reliable flooding in low-duty-cycle WSNs with unreliable wireless links. It is a challenging problem to efficiently ensure 100% flooding coverage given the combined effects of low-duty-cycle operation and unreliable wireless transmission. In this work, we propose a novel dynamic switching-based reliable flooding (DSRF) framework, designed as an enhancement layer that provides efficient and reliable delivery for a variety of existing flooding tree structures in low-duty-cycle WSNs. The key novelty of DSRF lies in the dynamic switching decision made when encountering a transmission failure: the flooding tree structure is dynamically adjusted based on the packet reception results to save energy and reduce delay. DSRF is distinctive from existing works in that it explores both poor links and good links on demand. Through comprehensive performance comparisons, we demonstrate that, compared with flooding protocols without DSRF enhancement, DSRF effectively reduces the flooding delay by 12%-25% and the total number of packet transmissions by 10%-15%. Remarkably, the achieved performance is close to the theoretical lower bound.
【Keywords】: decision making; failure analysis; radio links; telecommunication network reliability; telecommunication switching; wireless sensor networks; DSRF framework; delay reduction; dynamic switching decision making; dynamic switching-based reliable flooding; dynamic switching-based reliable flooding framework; energy saving; enhancement layer; flooding tree structures; low duty-cycle WSN; low-duty-cycle operation; low-duty-cycle wireless sensor networks; network operations; packet reception; packet transmission; transmission failure; unreliable wireless links; unreliable wireless transmission; Dynamic scheduling; Receivers; Reliability; Schedules; Switches; Synchronization; Wireless sensor networks
【Paper Link】 【Pages】:1402-1410
【Authors】: Iordanis Koutsopoulos
【Abstract】: Participatory sensing has emerged as a novel paradigm for data collection and collective knowledge formation about a state or condition of interest, sometimes linked to a geographic area. In this paper, we address the problem of incentive mechanism design for data contributors in participatory sensing applications. The service provider receives service queries in an area from service requesters and initiates an auction for user participation. Upon request, each user reports its perceived cost per unit of participation, which essentially maps to a requested amount of compensation for participating. The participation cost quantifies the dissatisfaction caused to a user by participation. This cost is considered private information for each device, as it strongly depends on various factors inherent to it, such as the energy cost for sensing, data processing and transmission to the closest point of wireless access, the residual battery level, the number of concurrent jobs at the device processor, the required bandwidth to transmit data and the related charges of the mobile network operator, or even the user discomfort due to the manual effort of submitting data. Hence, participants have a strong motive to misreport their cost, i.e., declare a higher cost than the actual one, so as to obtain a higher payment. We seek a mechanism for determining user participation levels and allocating payments which is most viable for the provider, that is, one that minimizes the total cost of compensating participants while delivering a certain quality of experience to service requesters. We cast the problem in the context of optimal reverse auction design, and we show how the varying quality of the information submitted by participants can be tracked by the service provider and used in the participation level and payment selection procedures.
We derive a mechanism that optimally solves the problem above and is at the same time individually rational (i.e., it motivates users to participate) and incentive-compatible (i.e., it motivates truthful cost reporting by participants). Finally, a representative participatory sensing case study involving parameter estimation is presented, which exemplifies the incentive mechanism above.
【Keywords】: artificial intelligence; commerce; cost reduction; data acquisition; incentive schemes; mobile radio; quality of experience; quality of service; query processing; wireless sensor networks; data collection; incentive mechanism design; mobile network operator; optimal incentive driven design; optimal reverse auction design; parameter estimation; participation compensation; participation cost; participatory sensing system; payment allocation; payment selection procedure; quality of experience; residual battery level; service provider; service query; service requester; total cost minimization; user discomfort; user participation level determination; Air pollution; Atmospheric measurements; Bayes methods; Quality of service; Resource management; Sensors; Vectors
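The reverse-auction setting above has a classic truthful baseline that is useful for intuition. This is an illustrative sketch only, not the paper's optimal mechanism; the function name and the fixed winner count k are my assumptions. The idea: select the k lowest-cost bidders and pay each of them the first excluded bid, so no bidder can gain by inflating its reported cost.

```python
def threshold_reverse_auction(bids, k):
    """Select the k lowest-cost bidders; pay each the (k+1)-th lowest cost.

    Paying the first excluded bid (rather than each winner's own bid) makes
    truthful cost reporting a dominant strategy, analogous to a Vickrey
    auction. `bids` maps bidder id -> reported cost.
    """
    ranked = sorted(bids, key=bids.get)
    winners = ranked[:k]
    # Threshold price: the lowest losing bid (unbounded if everyone wins).
    price = bids[ranked[k]] if len(ranked) > k else float("inf")
    return winners, price
```

With bids `{'a': 3, 'b': 5, 'c': 4, 'd': 9}` and k = 2, the winners are `a` and `c`, and each is paid 5, the first losing bid.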
【Paper Link】 【Pages】:1411-1419
【Authors】: Hengchang Liu ; Shaohan Hu ; Wei Zheng ; Zhiheng Xie ; Shiguang Wang ; Pan Hui ; Tarek F. Abdelzaher
【Abstract】: This paper explores efficient 3G budget utilization in mobile participatory sensing applications. Distinct from previous research that either relies on limited WiFi access points or assumes unlimited 3G communication capability, we offer a more practical participatory sensing system that leverages the 3G budgets that participants contribute at will, and uses them efficiently, customized for the needs of multiple participatory sensing applications with heterogeneous sensitivity to environmental changes. We address the challenge that data generation and WiFi encounters are not known a priori, and propose an online decision-making algorithm that takes advantage of participants' historical data. We also develop a heuristic algorithm that consumes less energy and reduces storage overhead while maintaining efficient 3G budget utilization. Experimental results from a 30-participant deployment demonstrate that, even when the budget is as small as 2.5% of a popular data plan, these two algorithms achieve higher utility of uploaded data than the baseline solution; in particular, they increase the utility of received data by 151.4% and 137.8% for sensitive applications.
【Keywords】: 3G mobile communication; data communication; decision making; wireless LAN; 30-participant deployment; 3G budget utilization; WiFi access points; data generation; environmental changes; heterogeneous sensitivity; heuristic algorithm; mobile participatory sensing applications; online decision making; sensitive applications; unlimited 3G communication capability; Heuristic algorithms; IEEE 802.11 Standards; Mobile communication; Sensors; Smart phones; Vehicles; Wireless communication
【Paper Link】 【Pages】:1420-1428
【Authors】: Sriharsha Gangam ; Puneet Sharma ; Sonia Fahmy
【Abstract】: Accurate online network monitoring is crucial for detecting attacks, faults, and anomalies, and determining traffic properties across the network. With high bandwidth links and consequently increasing traffic volumes, it is difficult to collect and analyze detailed flow records in an online manner. Traditional solutions that decouple data collection from analysis resort to sampling and sketching to handle large monitoring traffic volumes. We propose a new system, Pegasus, to leverage commercially available co-located compute and storage devices near routers and switches. Pegasus adaptively manages data transfers between monitors and aggregators based on traffic patterns and user queries. We use Pegasus to detect global icebergs or global heavy-hitters. Icebergs are flows with a common property that contribute a significant fraction of network traffic. For example, DDoS attack detection is an iceberg detection problem with a common destination IP. Other applications include identification of “top talkers,” top destinations, and detection of worms and port scans. Experiments with Abilene traces, sFlow traces from an enterprise network, and deployment of Pegasus as a live monitoring service on PlanetLab show that our system is accurate and scales well with increasing traffic and number of monitors.
【Keywords】: computer network performance evaluation; computer network security; supervisory programs; system monitoring; Abilene traces; DDoS attack detection; Pegasus; PlanetLab; adaptive data transfer management; aggregators; co-located compute-storage devices; enterprise network; global heavy-hitter detection; global icebergs; iceberg detection problem; live monitoring service; monitors; network flows; network traffic; online network monitoring; port scan detection; sFlow traces; top destination detection; top talkers identification; traffic patterns; user queries; worm detection; Accuracy; Bandwidth; Blades; Computer crime; IP networks; Monitoring; Ports (Computers)
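Iceberg (heavy-hitter) detection as described above is commonly built on streaming summaries. As an illustrative building block only — Pegasus itself coordinates monitors and aggregators, which this sketch omits — the classic Misra-Gries algorithm guarantees that every item whose frequency exceeds n/k survives in at most k-1 counters:

```python
def misra_gries(stream, k):
    """Misra-Gries summary over a stream of n items: any item with
    frequency > n/k is guaranteed to remain among the at most k-1
    tracked counters (counts are lower bounds, not exact)."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # Decrement every counter; drop those that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters
```

For a DDoS-style query, the stream items would be destination IPs; any destination receiving more than a 1/k fraction of packets is reported.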
【Paper Link】 【Pages】:1429-1437
【Authors】: Avinatan Hassidim ; Danny Raz ; Michal Segalov ; Ariel Shaqed
【Abstract】: Building and operating a large backbone network can take months or even years, and it requires a substantial investment. Therefore, there is an economic incentive to increase the utilization of network resources (links, switches, etc.) in order to improve the cost efficiency of the network. At the same time, the utilization of network components has a direct impact on the performance of the network and its resilience to failure, and thus operational considerations are a critical aspect of decisions regarding the desired network load and utilization. However, the actual utilization of network resources is not easy to predict or control. It depends on many parameters, such as the traffic demand and the routing scheme (or Traffic Engineering, if deployed), and it varies over time and space. As a result, it is very difficult to define real network utilization and to understand the reasons behind it. In this paper we introduce a novel way to look at network utilization. Unlike traditional approaches that consider the average link utilization, we take the flow perspective and consider network utilization in terms of the growth potential of the flows in the network. After defining this new Flow Utilization, and discussing how it differs from common definitions of network utilization, we study ways to compute it efficiently over large networks. We then show, using real backbone data, that Flow Utilization is very useful in identifying network state and evaluating the performance of TE algorithms.
【Keywords】: network parameters; telecommunication network management; telecommunication network routing; telecommunication traffic; TE algorithms; cost efficiency; flow utilization; growth potential; network load; network resources; network state; network utilization; operational considerations; routing scheme; traffic demand; Google; Monitoring; Packet loss; Planning; Vectors
【Paper Link】 【Pages】:1438-1446
【Authors】: Gerhard Haßlinger ; Anne Schwahn ; Franz Hartleb
【Abstract】: Two-state Markov models are applied in many performance evaluation studies as a simple form of autocorrelated process, starting with Gilbert-Elliott channels for the analysis of transmission protocols subject to error bursts. We derive an explicit formula for the 2nd-order statistics of 2-state semi-Markov processes in order to adapt them to correlated traffic and error processes. State- and transition-specific distribution functions are included in a general representation covering the special cases usually studied in the literature. The results reveal the influence of model parameters on short- and long-term dependency and give rise to a straightforward procedure for parameter adaptation. In general, 2-state models provide a 2-dimensional fitting space, whereas special 2-state cases often have only one parameter left to fit the shape of the 2nd-order statistics. In our evaluation of IP packet measurements on aggregation links, we found that adaptations by general 2-state Markov models achieve a much closer fit to the traffic variability on different time scales than self-similar processes.
【Keywords】: IP networks; Internet; Markov processes; computer network performance evaluation; correlation methods; higher order statistics; telecommunication channels; telecommunication traffic; transport protocols; Gilbert-Elliott channels; IP packet measurements; Internet traffic measurement; autocorrelated processes; channel models; error process; explicit formula; long term dependency; model parameters; parameter adaption; performance evaluation; second order statistics; short term dependency; state specific distribution function; traffic models; traffic variability; transition specific distribution function; transmission protocols; two-dimensional fitting space; two-state Markov models; two-state semi-Markov process; Adaptation models; Equations; Fitting; IP networks; Markov processes; Mathematical model; Time measurement; 2-state (semi-)Markov; 2nd order statistics; Gilbert-Elliott; Internet traffic measurement; autocorrelation; self-similar processes; traffic variability
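For a stationary two-state Markov chain with transition probabilities p = P(0→1) and q = P(1→0), the lag-k autocorrelation of the state process is (1-p-q)^k — the eigenvalue-driven decay underlying the 2nd-order statistics the abstract adapts. A small simulation (the notation and function name are mine, not the paper's) can check this closed form empirically:

```python
import random

def simulate_autocorr(p, q, lag, n=200000, seed=1):
    """Empirical lag-k autocorrelation of a 2-state Markov chain with
    P(0->1)=p and P(1->0)=q; the closed form is (1 - p - q)**lag."""
    rng = random.Random(seed)
    state, xs = 0, []
    for _ in range(n):
        xs.append(state)
        flip = p if state == 0 else q  # probability of leaving the state
        if rng.random() < flip:
            state = 1 - state
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    cov = sum((xs[i] - mean) * (xs[i + lag] - mean)
              for i in range(n - lag)) / (n - lag)
    return cov / var
```

With p = 0.1 and q = 0.3, the lag-2 autocorrelation should be close to (1 - 0.4)² = 0.36.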
【Paper Link】 【Pages】:1447-1455
【Authors】: Yang Liu ; Wenji Chen ; Yong Guan
【Abstract】: There is a long history of designing space-efficient data structures to support approximate membership queries, starting with Bloom's work in the 1970s. Given a set A of n items and an additional item x from the same universe U of size m ≫ n, we want to distinguish whether x ∈ A or not, using small (limited) space. Solutions to the membership query problem are needed in many network applications, such as cache directories, load balancing, security, etc. If A is static, there exist optimal algorithms to find a randomized data structure representing A using only (1+o(1))n log 1/δ bits, which allows only a small false positive rate δ but no false negatives. However, existing optimal algorithms are not practical for many Internet applications, e.g., social network services, peer-to-peer systems, network traffic monitoring, etc. They are too space- and time-expensive under frequent changes to the set A, because recomputing the optimal data structure for each change requires all items and linear running time. In this paper, we propose a novel data structure to support approximate membership queries in the time-decaying window model. In this model, items are inserted one-by-one over a data stream, and we want to determine whether an item is among the most recent w items for any given window size w ≤ n. Our data structure requires only O(n(log 1/δ + log n)) bits and O(1) running time. We also prove a non-trivial space lower bound, i.e., (n - δm) log(n - δm) bits, which guarantees that our data structure is near-optimal. Our data structure has been evaluated using both synthetic and real data sets.
【Keywords】: Internet; computational complexity; data structures; query processing; randomised algorithms; resource allocation; Internet applications; cache directory; data stream; linear running time; load-balancing; near-optimal approximate membership query; network applications; network traffic monitoring; nontrivial space lower bound; optimal algorithms; peer-to-peer systems; randomized data structure; social network services; space-efficient data structure; time-decaying window model; Algorithm design and analysis; Data models; Data structures; Dictionaries; Internet; Radiation detectors; Security
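The time-decaying window model above can be illustrated with a simple block-rotation scheme. This is an intuition-level sketch only; the paper's structure replaces the exact sets below with compact randomized structures to reach near-optimal space. Two rotating blocks of at most w items each always cover at least the last w inserts, so queries never produce false negatives within the window (stale positives from the older block are the tolerated error).

```python
class WindowMembership:
    """'Seen among the last w inserts?' via two rotating blocks.

    Together the blocks cover at least the last w inserts (and at most
    2w), so there are no false negatives within the window; items older
    than w but newer than 2w may still answer positively.
    """
    def __init__(self, w):
        self.w = w
        self.current, self.previous = set(), set()

    def insert(self, item):
        if len(self.current) >= self.w:
            # Rotate: the full block becomes the old block.
            self.previous, self.current = self.current, set()
        self.current.add(item)

    def query(self, item):
        return item in self.current or item in self.previous
```

Both operations are O(1), matching the running time the abstract claims for the (far more space-efficient) proposed structure.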
【Paper Link】 【Pages】:1456-1464
【Authors】: Ying Dai ; Jie Wu ; Chunsheng Xin
【Abstract】: The advantages of virtual backbones have been proven in wireless networks. In cognitive radio networks (CRNs), virtual backbones can also play a critical role in both routing and data transport. However, the virtual backbone construction for CRNs is more challenging than for traditional wireless networks because of the opportunistic spectrum access. Moreover, when no common control channel is available to exchange the control information, this problem is even more difficult. In this paper, we propose a novel approach for constructing virtual backbones in CRNs, without relying on a common control channel. Our approach first utilizes the geographical information to let the nodes of a CRN self-organize into cells. Next, the nodes in each cell form into clusters, and a virtual backbone is established over the cluster heads. The virtual backbone is then applied to carry out the end-to-end data transmission. The proposed virtual backbone construction approach requires only limited exchange of control messages. It is efficient and highly adaptable under the opportunistic spectrum access. We evaluate our approach through extensive simulations.
【Keywords】: cognitive radio; data communication; radio networks; radio spectrum management; CRN; cognitive radio networks; common control channel; end-to-end data transmission; opportunistic spectrum access; virtual backbone construction; wireless networks; Availability; Cognitive radio; Data communication; Global Positioning System; Organizations; Routing; Wireless networks; Cognitive radio networks; end-to-end data transmission; self-organization; virtual backbone construction
【Paper Link】 【Pages】:1465-1473
【Authors】: Xiaoshuang Xing ; Tao Jing ; Yan Huo ; Hongjuan Li ; Xiuzhen Cheng
【Abstract】: The problem of channel quality prediction in cognitive radio networks is investigated in this paper. First, the spectrum sensing process is modeled as a Non-Stationary Hidden Markov Model (NSHMM), which captures the fact that the channel state transition probability is a function of the time interval the primary user has stayed in the current state. Then the model parameters, which carry the information about the expected duration of the channel states and the spectrum sensing accuracy (detection accuracy and false alarm probability) of the SU, are estimated via Bayesian inference with Gibbs sampling. Finally, the estimated NSHMM parameters are employed to design a channel quality metric according to the predicted channel idle duration and spectrum sensing accuracy. Extensive simulation study has been performed to investigate the effectiveness of our design. The results indicate that channel ranking based on the proposed channel quality prediction mechanism captures the idle state duration of the channel and the spectrum sensing accuracy of the SUs, and provides more high quality transmission opportunities and higher successful transmission rates at shorter spectrum waiting times for dynamic spectrum access.
【Keywords】: Bayes methods; cognitive radio; hidden Markov models; parameter estimation; radio spectrum management; sampling methods; wireless channels; Bayesian inference; Gibbs sampling; NSHMM parameter estimation; channel idle duration; channel quality metric design; channel quality prediction mechanism; channel ranking; channel state transition probability; cognitive radio network; detection accuracy; dynamic spectrum access; false alarm probability; nonstationary hidden Markov model; spectrum sensing accuracy; spectrum sensing process; Accuracy; Bayes methods; Channel estimation; Cognitive radio; Hidden Markov models; Probability distribution; Sensors; Bayesian inference; Channel quality prediction; cognitive radio networks; non-stationary HMM
【Paper Link】 【Pages】:1474-1482
【Authors】: Yanxiao Zhao ; Min Song ; Chunsheng Xin
【Abstract】: Cognitive radio is viewed as a disruptive technology innovation to improve spectrum efficiency. The deployment of coexisting cognitive radio networks, however, raises a great challenge to the medium access control (MAC) protocol design. While there have been many MAC protocols developed for cognitive radio networks, most of them have not considered the coexistence of cognitive radio networks, and thus do not provide a mechanism to ensure fair and efficient coexistence of cognitive radio networks. In this paper, we introduce a novel MAC protocol, termed fairness-oriented media access control (FMAC), to address the dynamic availability of channels and achieve fair and efficient coexistence of cognitive radio networks. Different from the existing MACs, FMAC utilizes a three-state spectrum sensing model to distinguish whether a busy channel is being used by a primary user or a secondary user from an adjacent cognitive radio network. As a result, secondary users from coexisting cognitive radio networks are able to share the channel together, and hence to achieve fair and efficient coexistence. We develop an analytical model using two-level Markov chain to analyze the performance of FMAC including throughput and fairness. Numerical results verify that FMAC is able to significantly improve the fairness of coexisting cognitive radio networks while maintaining a high throughput.
【Keywords】: Markov processes; access protocols; cognitive radio; innovation management; radio spectrum management; wireless channels; FMAC; coexisting cognitive radio networks; disruptive technology innovation; dynamic availability; fair MAC protocol; fairness-oriented media access control; medium access control protocol design; primary user; secondary user; spectrum efficiency; three-state spectrum sensing model; two-level Markov chain; Cognitive radio; IEEE 802.11 Standards; Markov processes; Media Access Protocol; Sensors; Switches; Throughput
【Paper Link】 【Pages】:1483-1491
【Authors】: Mustafa Ozger ; Özgür B. Akan
【Abstract】: Wireless sensor networks (WSNs) with dynamic spectrum access (DSA) capability, namely cognitive radio sensor networks (CRSNs), are a promising solution to the spectrum scarcity problem. Despite the improvement in spectrum utilization that DSA brings, energy-efficient solutions for CRSNs are required due to the resource-constrained nature CRSNs inherit from WSNs. Clustering is an efficient way to decrease energy consumption. Existing clustering approaches for WSNs are not applicable in CRSNs, and existing solutions for cognitive radio networks are not suitable for sensor networks. In this paper, we propose an event-driven clustering protocol which forms a temporary cluster for each event in a CRSN. Upon detection of an event, we determine the nodes eligible for clustering according to their local positions between the event and the sink. Cluster-heads are selected among eligible nodes according to node degree, available channels, and distance to the sink in their neighborhood. They select one-hop members so as to maximize the number of two-hop neighbors reachable by one-hop neighbors through cluster channels, which increases connectivity between clusters. Clusters exist only between the event and the sink and are dissolved when the event ends; this avoids the energy consumption of unnecessary cluster formation and maintenance overheads. Performance evaluation reveals that our solution is energy-efficient, at the cost of a delay due to spontaneous cluster formation.
【Keywords】: cognitive radio; energy conservation; pattern clustering; protocols; radio spectrum management; telecommunication power management; wireless channels; wireless sensor networks; CRSN; DSA capability; WSN; cluster channels; cluster-head selection; clustering node determination; cognitive radio sensor networks; dynamic spectrum access capability; energy consumption reduction; energy-efficient solutions; event detection; event-driven clustering protocol; event-driven spectrum-aware clustering; maintenance overheads; one-hop members; one-hop neighbors; performance evaluation; spectrum scarcity problem; spectrum utilization; temporal cluster; two-hop neighbors; unnecessary cluster formation; wireless sensor networks; Availability; Clustering algorithms; Cognitive radio; Energy consumption; Protocols; Wireless sensor networks
【Paper Link】 【Pages】:1492-1500
【Authors】: Ali Khanafer ; Murali S. Kodialam ; Krishna P. N. Puttaswamy
【Abstract】: Cloud service providers (CSPs) enable tenants to elastically scale their resources to meet their demands. In fact, there are various types of resources offered at various price points. While running applications on the cloud, a tenant aiming to minimize cost is often faced with crucial trade-off considerations. For instance, upon each arrival of a query, a web application can either choose to pay for CPU to compute the response fresh, or pay for cache storage to store the response so as to reduce the compute costs of future requests. The Ski-Rental problem abstracts such scenarios where a tenant faces a rent-or-buy trade-off; in its basic form, a skier should choose between renting or buying a set of skis without knowing the number of days she will be skiing. In this paper, we introduce a variant of the classical Ski-Rental problem in which we assume that the skier knows the first (or second) moment of the distribution of the number of ski days in a season. We demonstrate that utilizing this information leads to achieving the best worst-case expected competitive ratio (CR). Our method yields a new class of randomized algorithms that provide arrival-distribution-free performance guarantees. Further, we apply our solution to a cloud file system and demonstrate the cost savings obtained in comparison to competing schemes. Simulations illustrate that our scheme exhibits robust average-cost performance that combines the best of the well-known deterministic and randomized schemes previously proposed to tackle the Ski-Rental problem.
【Keywords】: cloud computing; cost reduction; deterministic algorithms; optimisation; pricing; randomised algorithms; CPU; CSP; Web application; arrival-distribution-free performance guarantees; cache storage; cloud file system; cloud service providers; constrained ski-rental problem; cost minimization; cost reduction; cost savings; deterministic schemes; first-moment-constrained ski-rental problem; online cloud cost optimization; query arrival; randomized algorithms; response computation; response storage; robust average-cost performance; second-moment-constrained ski-rental problem; ski buying; to-rent-or-to-buy trade-off; worst-case expected CR performance; worst-case expected competitive ratio performance; Algorithm design and analysis; Cache storage; Game theory; Games; Optimization; Snow; Standards
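The classical deterministic baseline for the Ski-Rental problem described above is the break-even strategy: with a buy price of b and a rent price of 1 per day, rent for the first b-1 days and buy on day b, which guarantees cost at most twice the offline optimum. A minimal sketch (the paper's moment-constrained randomized schemes improve on this worst-case bound):

```python
def break_even_cost(days, buy_price):
    """Cost of the deterministic break-even strategy: rent for the first
    buy_price - 1 days, then buy on day buy_price. Total cost is at most
    2x the offline optimum min(days, buy_price)."""
    if days < buy_price:
        return days                      # rented every day, never bought
    return (buy_price - 1) + buy_price   # rented b-1 days, then bought

def competitive_ratio(days, buy_price):
    """Ratio of the online cost to the clairvoyant optimum."""
    return break_even_cost(days, buy_price) / min(days, buy_price)
```

For a season of 100 days and a buy price of 10, the strategy pays 19 against an optimum of 10, a ratio of 1.9 < 2; in the cloud setting, "renting" corresponds to recomputing a response and "buying" to caching it.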
【Paper Link】 【Pages】:1501-1509
【Authors】: Di Niu ; Baochun Li
【Abstract】: In modern large-scale systems, fast distributed resource allocation and utility maximization are becoming increasingly important. Traditional solutions to such problems rely on primal/dual decomposition and gradient methods, whose convergence is sensitive to the choice of the stepsize and may not be sufficient to satisfy the requirement of large-scale real-time applications. We propose a new iterative approach to distributed resource allocation in coupled systems. Without complicating message-passing, the new approach is robust to parameter choices and expedites convergence by exploiting problem structures. We theoretically analyze the asynchronous algorithm convergence conditions, and empirically evaluate its benefits in a case of cloud network resource reservation based on real-world data.
【Keywords】: distributed processing; gradient methods; resource allocation; asynchronous algorithm convergence conditions; cloud network resource reservation; efficient distributed algorithm; fast distributed resource allocation; gradient methods; iterative approach; large-scale coupled systems; message-passing; utility maximization; Algorithm design and analysis; Convergence; Cost function; Gradient methods; Jacobian matrices; Resource management; Vectors
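The stepsize-sensitive baseline the abstract contrasts against can be sketched as a dual (sub)gradient method on a toy utility-maximization instance: maximize Σ log x_i subject to Σ x_i ≤ C. The instance, function name, and parameter values here are illustrative assumptions, not the paper's setup.

```python
def dual_gradient_allocation(n_users, capacity, stepsize=0.01, iters=5000):
    """Dual (sub)gradient for: maximize sum(log x_i) s.t. sum(x_i) <= capacity.

    Each user best-responds to the current price with x_i = 1/price
    (from d/dx log x = price), and the price ascends on the capacity
    violation. Convergence speed hinges on the stepsize choice, which is
    the sensitivity the paper's iterative approach is designed to avoid.
    """
    price = 1.0
    for _ in range(iters):
        demands = [1.0 / price] * n_users        # users' best responses
        # Subgradient ascent on the dual variable, kept positive.
        price = max(1e-9, price + stepsize * (sum(demands) - capacity))
    return demands
```

For 4 users and capacity 8, the allocation converges to x_i = 2 (the proportionally fair split); a larger stepsize oscillates and a smaller one crawls, illustrating the tuning burden.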
【Paper Link】 【Pages】:1510-1518
【Authors】: Hong Zhang ; Bo Li ; Hongbo Jiang ; Fangming Liu ; Athanasios V. Vasilakos ; Jiangchuan Liu
【Abstract】: The paradigm of cloud computing has prompted wide interest in market-based resource allocation mechanisms by which a cloud provider aims to efficiently allocate cloud resources among potential users. Among these mechanisms, auction-style pricing policies have recently attracted research interest, as they can effectively reflect the underlying trends in demand and supply for computing resources. This paper presents the first framework for truthful online cloud auctions in which users with heterogeneous demands can come and leave on the fly. Our framework supports a variety of design requirements, including (1) dynamic design, for timely reflecting fluctuations in supply-demand relations; (2) joint design, for supporting heterogeneous user demands; and (3) truthful design, for discouraging bidders from cheating. Concretely, we first design a novel bidding language in which users' heterogeneous demands are generalized into regulated and consistent forms. Building on top of our bidding language, we then propose COCA, an incentive-Compatible (truthful) Online Cloud Auction mechanism based on two proposed guidelines. Our theoretical analysis shows that the worst-case performance of COCA is well-bounded. Further, in simulations the performance of COCA is comparable to that of the well-known off-line Vickrey-Clarke-Groves (VCG) mechanism [11].
【Keywords】: cloud computing; commerce; pricing; research and development; resource allocation; supply and demand; trusted computing; COCA; VCG; auction-style pricing policies; bidders; cheating behaviors; cloud computing; cloud provider; design requirements; dynamic design; heterogeneous user demands; incentive-compatible online cloud auction mechanism; market-based resource allocation mechanisms; research interest; supply-demand relations; truthful online auctions; well-known off-line Vickrey-Clarke-Groves mechanism; Buildings; Cloud computing; Computer science; Cost accounting; Educational institutions; Pricing; Resource management
【Paper Link】 【Pages】:1519-1527
【Authors】: Heejun Roh ; Cheoulhoon Jung ; Wonjun Lee ; Ding-Zhu Du
【Abstract】: Cloud computing enables larger classes of application service providers to distribute their services to worldwide users in multiple regions without their own private data centers. The heterogeneity and resource limitations of geographically distributed cloud data centers give application service providers an incentive to optimize their computing resource usage while guaranteeing some level of quality of service. Recent studies have proposed various techniques for optimizing computing resource usage from the cloud users' (or application service providers') perspective, with little consideration of competition. In addition, these optimization efforts motivate cloud service providers owning multiple geo-distributed clouds to set their computing resource prices with this optimization behavior in mind. In this context, we formulate this problem for cloud service providers as a game of resource pricing in geo-distributed clouds. One of the main challenges in this problem is how to model the best responses of application service providers, given the resource price information of clouds in non-overlapping regions. We propose a novel concave game to describe the quantity competition among application service providers, who reduce payments while guaranteeing fair service delay to end users. Furthermore, we optimize the prices of computing resources so that they converge to the equilibrium. In addition, we show several characteristics of the equilibrium point and discuss their implications for designing computing resource markets for geo-distributed clouds.
【Keywords】: cloud computing; game theory; pricing; quality of service; resource allocation; application service provider; cloud computing; computing resource market; computing resource price; computing resource usage optimization; concave game; geodistributed cloud; geographically distributed cloud data center; heterogeneity; nonoverlapped region; payment; private data center; quality of service; resource limitation; resource price information; resource pricing game; service delay; Cloud computing; Distributed databases; Games; Optimization; Pricing; Quality of service; Time factors
【Paper Link】 【Pages】:1528-1536
【Authors】: Bo Ji ; Changhee Joo ; Ness B. Shroff
【Abstract】: In this paper, we focus on the issue of stability in multihop wireless networks under flow-level dynamics, and explore the inefficiency and instability of the celebrated Back-Pressure algorithms. It has been well-known that the Back-Pressure (or Max-Weight) algorithms achieve queue stability and throughput optimality in a wide variety of scenarios. Yet, these results all rely on the assumptions that the set of flows is fixed, and that all the flows are long-lived and keep injecting packets into the network. Recently, in the presence of flow-level dynamics, where flows arrive and request to transmit a finite amount of packets, it has been shown that the Max-Weight algorithms may not guarantee stability due to channel fading or inefficient spatial reuse. However, these observations are made only for single-hop traffic, and thus have resulted in partial solutions that are limited to the single-hop scenarios. An interesting question is whether straightforward extensions of the previous solutions to the known instability problems would achieve throughput optimality in multihop traffic setting. To answer the question, we explore potential inefficiency and instability of the Back-Pressure algorithms, and provide interesting examples that are useful to obtain insights into developing an optimal solution. We also conduct simulations to further illustrate the instability issue of the Back-Pressure algorithms in various scenarios. Our study reveals that new types of inefficiencies may arise in the settings with multihop traffic due to underutilization of the link capacity or inefficient routing, and the stability problem becomes more challenging than in the single-hop traffic counterpart.
【Keywords】: queueing theory; radio links; radio networks; stability; telecommunication network routing; telecommunication traffic; celebrated back-pressure algorithm; channel fading; flow-level dynamics; instability problem; link capacity; max-weight algorithm; multihop traffic setting; multihop wireless network; queue stability; routing; single-hop traffic; throughput optimality; Delays; Dynamic scheduling; Heuristic algorithms; Routing; Schedules; Scheduling algorithms; Throughput
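The Back-Pressure decision rule this abstract critiques can be sketched in a few lines. The following is a minimal single-slot example of the textbook rule (not the paper's counterexamples); the node names, backlogs, and link rates are hypothetical:

```python
# Illustrative single-slot Back-Pressure (Max-Weight) decision on a toy
# topology; node names, backlogs, and link rates below are hypothetical.

def backpressure_schedule(queues, links):
    """Pick the link whose best per-flow queue differential, scaled by the
    link rate, is largest (assuming one link can be active per slot).

    queues: {node: {flow: backlog}}, links: [((src, dst), rate), ...]
    """
    best = None
    for (src, dst), rate in links:
        w = rate * max((queues[src].get(f, 0) - queues[dst].get(f, 0)
                        for f in queues[src]), default=0)
        if best is None or w > best[1]:
            best = ((src, dst), w)
    return best

queues = {"a": {"f1": 5, "f2": 1}, "b": {"f1": 2}, "c": {"f2": 0}}
links = [(("a", "b"), 2), (("a", "c"), 1)]
print(backpressure_schedule(queues, links))  # (('a', 'b'), 6)
```

The paper's point is precisely that this rule, driven only by current backlogs, can underutilize links or route inefficiently once flows arrive and depart dynamically.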
【Paper Link】 【Pages】:1537-1545
【Authors】: Sharayu Moharir ; Sanjay Shakkottai
【Abstract】: We study routing and scheduling algorithms for relay-assisted, multi-channel downlink wireless networks (e.g., OFDM-based cellular systems with relays). Over such networks, while it is well understood that the BackPressure algorithm is stabilizing (i.e., queue lengths do not become arbitrarily large), its performance (e.g., delay, buffer usage) can be poor. In this paper, we study an alternative, the MaxWeight algorithm, variants of which are known to have good performance in a single-hop setting. In a general relay setting, however, MaxWeight is not even stabilizing (and thus can have very poor performance). We therefore study an iterative MaxWeight algorithm for routing and scheduling in downlink multi-channel relay networks. We show that, surprisingly, the iterative MaxWeight algorithm can stabilize the system in several large-scale instantiations of this setting (e.g., general arrivals with full-duplex relays, bounded arrivals with half-duplex relays). Further, using both many-channel large-deviations analysis and simulations, we show that iterative MaxWeight outperforms the BackPressure algorithm from a queue-length/delay perspective.
【Keywords】: OFDM modulation; cellular radio; iterative methods; scheduling; telecommunication channels; telecommunication network routing; BackPressure algorithm; OFDM-based cellular systems; bounded arrivals; buffer usage; delay; downlink multichannel relay networks; full-duplex relays; general arrivals; half-duplex relays; iterative MaxWeight algorithm; many-channel large-deviations analysis; many-channel large-deviations simulations; multichannel relay networks; queue-length-delay perspective; relay-assisted multichannel downlink wireless networks; routing algorithms; scheduling algorithms; Algorithm design and analysis; Delays; Downlink; Indexes; Relays; Resource management; Routing; Downlink Relay Networks; Wireless Scheduling and Routing
【Paper Link】 【Pages】:1546-1554
【Authors】: Huacheng Zeng ; Yi Shi ; Y. Thomas Hou ; Wenjing Lou
【Abstract】: The Degree-of-Freedom (DoF)-based model is a simple yet powerful tool to analyze MIMO's spatial multiplexing (SM) and interference cancellation (IC) capabilities in a multi-hop network. Recently, a new DoF model was proposed and shown to achieve the same rate region as the matrix-based model (under SM and IC). The essence of this new DoF model is a novel node ordering concept, which eliminates potential duplication of DoF allocation for IC. In this paper, we investigate DoF scheduling for a multi-hop MIMO network based on this new DoF model. Specifically, we study how to perform DoF allocation among the nodes for SM and IC so as to maximize the minimum rate among a set of sessions. We formulate this problem as a mixed integer linear program (MILP) and develop an efficient DoF scheduling algorithm to solve it. We show that our algorithm is amenable to local implementation and has polynomial time complexity. More importantly, it guarantees the feasibility of the final solution (upon algorithm termination), even though node ordering establishment and adjustment are performed locally. Simulation results show that our algorithm can offer a result that is close to an upper bound found by the CPLEX solver, thus showing that the result found by our algorithm is highly competitive.
【Keywords】: MIMO communication; integer programming; interference suppression; linear programming; space division multiplexing; CPLEX solver; DoF allocation; DoF scheduling algorithm; MILP; degree-of-freedom-based model; interference cancellation; matrix-based model; mixed integer linear programming; multihop MIMO network; node ordering concept; polynomial time complexity; spatial multiplexing; Integrated circuit modeling; Interference; MIMO; Random access memory; Resource management; Spread spectrum communication
【Paper Link】 【Pages】:1555-1563
【Authors】: Hulya Seferoglu ; Eytan Modiano
【Abstract】: Backpressure routing and scheduling, with its throughput-optimality guarantee, is a promising technique to improve throughput in wireless multi-hop networks. Although backpressure is conceptually viewed as layered, the decisions of routing and scheduling are made jointly, which imposes several challenges in practice. In this work, we present Diff-Max, an approach that separates routing and scheduling and has three strengths: (i) Diff-Max improves throughput significantly; (ii) the separation of routing and scheduling makes practical implementation easier by minimizing cross-layer operations, i.e., routing is implemented in the network layer and scheduling in the link layer; and (iii) the separation of routing and scheduling leads to modularity, i.e., routing and scheduling are independent modules in Diff-Max, and one can continue to operate even if the other does not. Our approach is grounded in a network utility maximization (NUM) formulation and its solution. Based on the structure of Diff-Max, we propose two practical schemes: Diff-subMax and wDiff-subMax. We demonstrate the benefits of our schemes through simulation in ns-2.
【Keywords】: radio networks; scheduling; telecommunication network routing; Diff-Max; NUM formulation; backpressure routing; backpressure-based wireless networks; cross-layer operations; network utility maximization; scheduling; wireless multihop networks; Algorithm design and analysis; Joints; Routing; Scheduling; Scheduling algorithms; Vectors; Wireless networks
【Paper Link】 【Pages】:1564-1572
【Authors】: Haochao Li ; Kaishun Wu ; Qian Zhang ; Lionel M. Ni
【Abstract】: Improving channel utilization is a well-known issue in wireless networks. In traditional point-to-point wireless communication, significant efforts have been made in existing studies to enhance the utilization of channel access time. However, in emerging wireless networks using MU-MIMO, considering only the time domain in channel utilization is not sufficient. As multiple transmitters are allowed to transmit packets simultaneously to the same AP, allowing more antennas at the AP leads to higher channel utilization. Thus channel utilization in MU-MIMO should consider both the time and spatial domains, i.e., the channel access time and the antenna usage, which has not been considered in existing methods. In this paper, we point out that the fundamental problem is the lack of antenna information about contending nodes during channel contention. To address this issue, we propose a new MAC-PHY architecture design, CUTS, to utilize the channel in both domains. In particular, CUTS adopts interference nulling to attach the antenna information during channel contention. Meanwhile, techniques such as channel contention in the frequency domain and frequency-domain ACKs using self-jamming are adopted. Through real experiments on a software-defined radio platform and extensive simulations, we demonstrate the feasibility of our design and illustrate that CUTS provides better channel utilization, with gains over IEEE 802.11 reaching up to 470%.
【Keywords】: MIMO communication; antenna arrays; interference suppression; software radio; time-frequency analysis; wireless LAN; wireless channels; ACK; AP; CUTS; IEEE 802.11; MAC-PHY architecture design; MU-MIMO; WLAN; antenna usage; channel access time; channel contention; channel utilization; contention node antenna information; frequency domain; interference nulling; multiple transmitters; point-to-point wireless communication; self-jamming; software defined radio; spatial domains; time domains; wireless networks; Frequency-domain analysis; Interference; Receiving antennas; Transmitting antennas; Wireless networks
【Paper Link】 【Pages】:1573-1581
【Authors】: Julien Herzen ; Ruben Merz ; Patrick Thiran
【Abstract】: We consider the problem of jointly allocating channel center frequencies and bandwidths for IEEE 802.11 wireless LANs (WLANs). The bandwidth used on a link affects significantly both the capacity experienced on this link and the interference produced on neighboring links. Therefore, when jointly assigning both center frequencies and channel widths, there is a trade-off between interference mitigation and the potential capacity offered on each link. We study this tradeoff and we present SAW (spectrum assignment for WLANs), a decentralized algorithm that finds efficient configurations. SAW is tailored for 802.11 home networks. It is distributed, online and transparent. It does not require a central coordinator and it constantly adapts the spectrum usage without disrupting network traffic. A key feature of SAW is that the access points (APs) need only a few out-of-band measurements in order to make spectrum allocation decisions. Despite being completely decentralized, the algorithm is self-organizing and provably converges towards efficient spectrum allocations. We evaluate SAW using both simulation and a deployment on an indoor testbed composed of off-the-shelf 802.11 hardware. We observe that it dramatically increases the overall network efficiency and fairness.
【Keywords】: home networks; radio links; radio spectrum management; radiofrequency interference; radiofrequency measurement; wireless LAN; wireless channels; AP; IEEE 802.11 home networks; IEEE 802.11 wireless LAN; SAW; access points; channel center frequencies allocation; channel widths; distributed spectrum assignment; home WLAN; indoor testbed; interference mitigation; network efficiency; network traffic; off-the-shelf 802.11 hardware; out-of-band measurements; spectrum allocation decisions; Bandwidth; IEEE 802.11 Standards; Interference; Monitoring; Radio spectrum management; Resource management; Surface acoustic waves
【Paper Link】 【Pages】:1582-1590
【Authors】: Oscar Bejarano ; Edward W. Knightly
【Abstract】: Virtual Multiple-Input Single-Output (vMISO) systems distribute multi-antenna diversity capabilities between a sending and a cooperating node. vMISO has the potential to vastly improve wireless link reliability and bit error rates by exploiting spatial diversity. In this paper, we present the first design and experimental evaluation of vMISO triggers (when to invoke vMISO rather than traditional transmission) in Wi-Fi networking environments. We consider the joint effect of gains obtained at the physical layer with MAC and network-scale factors and show that 802.11 MAC mechanisms represent a major bottleneck to realizing gains that can be attained by a vMISO PHY. In contrast, we show how vMISO alters node interconnectivity and coordination and therefore can vastly transform the network throughput distribution in beneficial ways that are not described merely by vMISO link gains. Moreover, we show how to avoid triggering vMISO when the increased spatial footprint of the new cooperator would excessively hinder other flows' performance. In this paper, we build the first multi-flow vMISO testbed and explore the trigger criteria that are essential to attain substantial gains in a fully integrated vMISO system. We find that the largest gains are achieved by a largely isolated flow (gains of 110%) whereas cooperator interference and contention effects are pronounced in larger topologies, limiting typical gains to 14%.
【Keywords】: access protocols; diversity reception; error statistics; probability; telecommunication standards; wireless LAN; 802.11 MAC mechanisms; Wi-Fi networking environments; Wi-Fi-like networks; bit error rates; contention effects; cooperator interference; distribute multi-antenna diversity; network-scale factors; physical layer; spatial diversity; virtual MISO triggers; virtual multiple-input single-output systems; wireless link reliability; Fading; Gain; Modulation; Protocols; Receivers; Signal to noise ratio; Throughput
【Paper Link】 【Pages】:1591-1599
【Authors】: Yuan Yao ; Lei Rao ; Xue Liu ; Xingshe Zhou
【Abstract】: As a key enabling technology for next-generation inter-vehicle safety communications, the IEEE 802.11p protocol is currently attracting much attention. Many inter-vehicle safety communications have stringent real-time requirements on broadcast messages to ensure drivers have enough reaction time in emergencies. Most existing studies only focus on the average delay performance of IEEE 802.11p, which contains only very limited information about the real capacity for inter-vehicle communication. In this paper, we propose an analytical model that characterizes the performance of broadcast under IEEE 802.11p in terms of the mean, deviation, and probability distribution of the MAC access delay. Comparison with NS-2 simulations validates the accuracy of the proposed analytical model. In addition, we show that the exponential distribution is a good approximation to the MAC access delay distribution. Numerical analysis indicates that the QoS support in IEEE 802.11p can provide relatively good performance guarantees for higher-priority messages while failing to meet the real-time requirements of lower-priority messages.
【Keywords】: access protocols; automated highways; emergency management; exponential distribution; next generation networks; numerical analysis; quality of service; radio broadcasting; road safety; road vehicles; telecommunication network reliability; vehicular ad hoc networks; DSRC safety communication; IEEE 802.11p protocol; MAC access delay distribution; NS-2 simulation; QoS support; VANET; average delay performance; broadcast message; broadcast performance; delay analysis; deviation; driver reaction time; emergencies; exponential distribution; highway environment; mean; next generation intervehicle safety communication; numerical analysis; priority message; probability distribution; real-time requirement; Analytical models; Delays; Markov processes; Protocols; Quality of service; Safety; Vehicles
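One practical consequence of the exponential approximation reported in this abstract is easy to illustrate: if the MAC access delay is approximately exponential with mean m, the probability that a broadcast misses a deadline d is exp(-d/m). The mean delays and the 100 ms deadline below are hypothetical numbers, not values from the paper:

```python
import math

# Under an exponential approximation of the MAC access delay with mean
# mean_delay_ms, P(delay > deadline_ms) = exp(-deadline_ms / mean_delay_ms).
# The numbers below are hypothetical, chosen only to contrast priorities.

def miss_probability(mean_delay_ms, deadline_ms):
    return math.exp(-deadline_ms / mean_delay_ms)

# A higher-priority access class with a smaller mean access delay misses
# a 100 ms safety deadline far less often than a lower-priority class.
print(miss_probability(5.0, 100.0))   # high priority
print(miss_probability(40.0, 100.0))  # low priority
```

This mirrors the abstract's conclusion: the same deadline that is comfortably met by high-priority messages can be missed with non-negligible probability by low-priority ones.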
【Paper Link】 【Pages】:1600-1608
【Authors】: Yousi Zheng ; Ness B. Shroff ; Prasun Sinha
【Abstract】: With the rapid increase in the size and number of jobs that are processed in the MapReduce framework, efficiently scheduling jobs under this framework is becoming increasingly important. We consider the problem of minimizing the total flow-time of a sequence of jobs in the MapReduce framework, where the jobs arrive over time and need to be processed through both Map and Reduce procedures before leaving the system. We show that for this problem with non-preemptive tasks, no online algorithm can achieve a constant competitive ratio (defined as the ratio of the completion time of the online algorithm to that of the optimal non-causal offline algorithm). We then construct a slightly weaker metric of performance called the efficiency ratio. An online algorithm is said to achieve an efficiency ratio of γ when the flow-time incurred by that scheduler divided by the minimum flow-time achieved over all possible schedulers is almost surely less than or equal to γ. Under some weak assumptions, we then show a surprising property that, for the flow-time problem, any work-conserving scheduler has a constant efficiency ratio in both preemptive and non-preemptive scenarios. More importantly, we are able to develop an online scheduler with a very small efficiency ratio (2), and through simulations we show that it outperforms state-of-the-art schedulers.
【Keywords】: data handling; parallel algorithms; parallel programming; scheduling; MapReduce framework; MapReduce schedulers; completion time; constant efficiency ratio; flow-time problem; job scheduling; nonpreemptive tasks; online algorithm; online scheduler; optimal noncausal off-line algorithm; work-conserving scheduler; Algorithm design and analysis; Delays; Schedules; Scheduling algorithms; Writing
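The flow-time metric behind the efficiency ratio in this abstract can be made concrete on a toy instance. As a simplification of my own, a single machine stands in for the MapReduce cluster, FIFO plays the role of a work-conserving scheduler, and SRPT (optimal for preemptive single-machine total flow time) plays the stronger baseline; the jobs are hypothetical:

```python
# Total flow time = sum over jobs of (completion time - arrival time).
# Jobs are (arrival, size) pairs with integer sizes, sorted by arrival.

def fifo_flowtime(jobs):
    """Non-preemptive FIFO on one machine (a work-conserving scheduler)."""
    t, total = 0, 0
    for arrival, size in jobs:
        t = max(t, arrival) + size
        total += t - arrival
    return total

def srpt_flowtime(jobs):
    """Discrete-time SRPT: each unit step, serve the arrived job with the
    shortest remaining size (preemptive, optimal for this toy setting)."""
    remaining = {i: size for i, (_, size) in enumerate(jobs)}
    t, total = 0, 0
    while remaining:
        avail = [i for i in remaining if jobs[i][0] <= t]
        if not avail:
            t += 1
            continue
        i = min(avail, key=lambda j: remaining[j])
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            del remaining[i]
            total += t - jobs[i][0]
    return total

jobs = [(0, 5), (1, 1), (2, 1)]
print(fifo_flowtime(jobs), srpt_flowtime(jobs))  # 15 9
```

The ratio between the two totals on a given arrival sequence is exactly the kind of quantity the efficiency ratio bounds almost surely.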
【Paper Link】 【Pages】:1609-1617
【Authors】: Weina Wang ; Kai Zhu ; Lei Ying ; Jian Tan ; Li Zhang
【Abstract】: Scheduling map tasks to improve data locality is crucial to the performance of MapReduce. Many works have been devoted to increasing data locality for better efficiency. However, to the best of our knowledge, fundamental limits of MapReduce computing clusters with data locality, including the capacity region and theoretical bounds on the delay performance, have not been studied. In this paper, we address these problems from a stochastic network perspective. Our focus is to strike the right balance between data-locality and load-balancing to simultaneously maximize throughput and minimize delay. We present a new queueing architecture and propose a map task scheduling algorithm constituted by the Join the Shortest Queue policy together with the MaxWeight policy. We identify an outer bound on the capacity region, and then prove that the proposed algorithm stabilizes any arrival rate vector strictly within this outer bound. It shows that the algorithm is throughput optimal and the outer bound coincides with the actual capacity region. Further, we study the number of backlogged tasks under the proposed algorithm, which is directly related to the delay performance based on Little's law. We prove that the proposed algorithm is heavy-traffic optimal, i.e., it asymptotically minimizes the number of backlogged tasks as the arrival rate vector approaches the boundary of the capacity region. Therefore, the proposed algorithm is also delay optimal in the heavy-traffic regime.
【Keywords】: minimisation; parallel algorithms; queueing theory; resource allocation; scheduling; software architecture; vectors; Little law; MapReduce computing clusters; MaxWeight policy; actual capacity region; arrival rate vector stabilization; backlogged task number minimization; data locality improvement; delay minimization; delay performance; heavy-traffic optimality; join-the-shortest queue policy; load-balancing; map task scheduling algorithm; queueing architecture; stochastic network perspective; throughput maximization; throughput optimality; Computational modeling; Delays; Markov processes; Routing; Scheduling algorithms; Throughput; Vectors
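The two policies named in this abstract can be sketched in simplified form. The details below are assumptions of my own, not the paper's construction: tasks carry the set of servers holding their data, routing is Join-the-Shortest-Queue over per-server queues, and a free server applies a MaxWeight-style preference weighted toward its local queue (alpha > 1 standing in for faster local service):

```python
# Hypothetical simplification of JSQ routing + MaxWeight-style service
# for map tasks with data locality; names and the alpha weight are mine.

def jsq_route(task_locality, queues):
    """Send an arriving task to the shortest queue, breaking ties in
    favor of servers that hold the task's data (locality)."""
    return min(queues, key=lambda s: (len(queues[s]), s not in task_locality))

def maxweight_serve(server, queues, alpha=3):
    """A free server keeps serving its own queue if the locality-weighted
    backlog alpha * |local| beats the longest queue; otherwise it helps
    the longest queue (a MaxWeight-style comparison)."""
    longest = max(queues, key=lambda s: len(queues[s]))
    return server if alpha * len(queues[server]) >= len(queues[longest]) else longest

queues = {"s1": ["t1", "t2"], "s2": ["t3"], "s3": []}
print(jsq_route({"s1"}, queues))      # s3: the shortest queue wins
print(maxweight_serve("s3", queues))  # s1: s3 is idle locally, helps s1
```

The balance the paper formalizes is visible even here: JSQ spreads load, while the alpha weighting keeps service local whenever local backlog justifies it.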
【Paper Link】 【Pages】:1618-1626
【Authors】: Jian Tan ; Xiaoqiao Meng ; Li Zhang
【Abstract】: Schedulers are critical in enhancing the performance of MapReduce/Hadoop in the presence of multiple jobs with different characteristics and performance goals. Though current schedulers for Hadoop are quite successful, they still have room for improvement: map tasks (MapTasks) and reduce tasks (ReduceTasks) are not jointly optimized, although there is a strong dependence between them. This can cause job starvation and unfavorable data locality. In this paper, we design and implement a resource-aware scheduler for Hadoop. It couples the progress of MapTasks and ReduceTasks, utilizing Wait Scheduling for ReduceTasks and Random Peeking Scheduling for MapTasks to jointly optimize task placement. This mitigates the starvation problem and improves overall data locality. Our extensive experiments demonstrate significant improvements in job response times.
【Keywords】: resource allocation; Hadoop; MapReduce resource-aware scheduling; MapTasks; ReduceTasks; coupling task progress; random peeking scheduling; task placement; wait scheduling; Couplings; Delays; Heart beat; Instruction sets; Processor scheduling; Synchronization; Time factors
【Paper Link】 【Pages】:1627-1635
【Authors】: Jian Tan ; Shicong Meng ; Xiaoqiao Meng ; Li Zhang
【Abstract】: Improving data locality for MapReduce jobs is critical for the performance of large-scale Hadoop clusters, embodying the principle of moving computation close to data for big data platforms. Scheduling tasks in the vicinity of stored data can significantly diminish network traffic, which is crucial for system stability and efficiency. Though the issue of data locality has been investigated extensively for MapTasks, most existing schedulers ignore data locality for ReduceTasks when fetching the intermediate data, causing performance degradation. This problem of reducing the fetching cost for ReduceTasks has been identified recently. However, the proposed solutions are exclusively based on a greedy approach, relying on the intuition of placing ReduceTasks in the slots that are closest to the majority of the already generated intermediate data. The consequence is that, in the presence of job arrivals and departures, assigning the ReduceTasks of the current job to the nodes with the lowest fetching cost can prevent a subsequent job with an even better match of data locality from being launched on the already taken slots. To this end, we formulate a stochastic optimization framework to improve the data locality for ReduceTasks, with the optimal placement policy exhibiting a threshold-based structure. To ease the implementation, we further propose a receding horizon control policy based on the optimal solution under restricted conditions. The improved performance is validated through simulation experiments and real performance tests on our testbed.
【Keywords】: data handling; predictive control; scheduling; stochastic programming; MapTask; ReduceTask data locality improvement; fetching cost reduction; greedy approach; intermediate data fetching; job arrivals; job departure; large-scale Hadoop clusters; moving computation principle; network traffic; optimal placement policy; receding horizon control policy; sequential MapReduce jobs; stochastic optimization framework; task scheduling; threshold-based structure; Bismuth; Indexes; Network topology; Optimization; Random variables; System performance; Virtual machining
【Paper Link】 【Pages】:1636-1644
【Authors】: Liguang Xie ; Yi Shi ; Y. Thomas Hou ; Wenjing Lou ; Hanif D. Sherali ; Scott F. Midkiff
【Abstract】: Wireless energy transfer is a promising technology to fundamentally address energy and lifetime problems in a wireless sensor network (WSN). On the other hand, it has been well recognized that a mobile base station has significant advantages over a static one. In this paper, we study the interesting problem of co-locating the mobile base station on the wireless charging vehicle (WCV). The goal is to minimize energy consumption of the entire system while ensuring none of the sensor nodes runs out of energy. We develop a mathematical model for this complex problem. Instead of studying the general problem formulation (OPT-t), which is time-dependent, we show that it is sufficient to study a special subproblem (OPT-s) which only involves space-dependent variables. Subsequently, we develop a provably near-optimal solution to OPT-s. The novelty of this research mainly resides in the development of several solution techniques to tackle a complex problem that is seemingly intractable at first glance. In addition to addressing a challenging and interesting problem in a WSN, we expect the techniques developed in this research can be applied to address other related networking problems involving time-dependent movement, flow routing, and energy consumption.
【Keywords】: electric vehicles; minimisation; mobile radio; telecommunication network routing; telecommunication power management; wireless sensor networks; OPT-s; OPT-t; WCV; WSN; colocation problem; energy consumption minimization; energy problem; flow routing; general problem formulation; lifetime problem; mathematical model; mobile base station bundling; networking problems; sensor nodes; space-dependent variables; special subproblem; time-dependent movement; wireless charging vehicle; wireless energy transfer; wireless sensor network; Base stations; Energy consumption; Mobile communication; Routing; Vehicles; Wireless communication; Wireless sensor networks
【Paper Link】 【Pages】:1645-1653
【Authors】: Xiaolin Fang ; Hong Gao ; Jianzhong Li ; Yingshu Li
【Abstract】: Data sharing for data collection among multiple applications is an efficient way to reduce the communication cost of Wireless Sensor Networks (WSNs). This paper is the first work to introduce the interval data sharing problem, which investigates how to transmit as little data as possible over the network while the transmitted data still satisfies the requirements of all the applications. Different from current studies where each application requires a single data sample during each task, we study the problem where each application instead requires a continuous interval of data samples in each task. The proposed problem is a nonlinear nonconvex optimization problem. In order to avoid the high complexity of solving a nonlinear nonconvex optimization problem on resource-restricted sensor nodes, a 2-factor approximation algorithm with O(n²) time complexity and O(n) memory complexity is provided. A special instance of this problem is also analyzed. This special instance can be solved with a dynamic programming algorithm in polynomial time, which gives an optimal result in O(n²) time complexity and O(n) memory complexity. We evaluate the proposed algorithms with TOSSIM, a widely used simulation tool for WSNs. Theoretical analysis and simulation results both demonstrate the effectiveness of the proposed algorithms.
【Keywords】: wireless sensor networks; application-aware data collection; data sampling; data sharing; nonlinear nonconvex optimization problem; resource restricted sensor nodes; wireless sensor networks; Approximation algorithms; Approximation methods; Heuristic algorithms; Optimization; Time complexity; Wireless sensor networks
【Paper Link】 【Pages】:1654-1662
【Authors】: Linghe Kong ; Mingyuan Xia ; Xiao-Yang Liu ; Min-You Wu ; Xue Liu
【Abstract】: Reconstructing the environment in cyber space from sensory data is a fundamental operation for understanding the physical world in depth. A lot of basic scientific work (e.g., nature discovery, organic evolution) heavily relies on the accuracy of environment reconstruction. However, data loss in wireless sensor networks is common and has distinctive patterns due to noise, collisions, unreliable links, and unexpected damage, which greatly reduce the accuracy of reconstruction. Existing interpolation methods do not consider these patterns and thus fail to provide satisfactory accuracy when the amount of missing data becomes large. To address this problem, this paper proposes a novel approach based on compressive sensing to reconstruct the massive missing data. First, we analyze real sensory data from the Intel Indoor, GreenOrbs, and Ocean Sense projects. They all exhibit the features of spatial correlation, temporal stability, and low-rank structure. Motivated by these observations, we then develop an environmental space time improved compressive sensing (ESTICS) algorithm to optimize the estimation of missing data. Finally, extensive experiments with real-world sensory data show that the proposed approach significantly outperforms existing solutions in terms of reconstruction accuracy. Typically, ESTICS can successfully reconstruct the environment with less than 20% error in the face of 90% missing data.
【Keywords】: compressed sensing; signal reconstruction; wireless sensor networks; ESTICS algorithm; GreenOrbs project; Intel Indoor project; Ocean Sense project; compressive sensing; data loss; low rank structure; massive missing data reconstruction; sensory data; spatial correlation; temporal stability; wireless sensor networks; Compressed sensing; Estimation; Interpolation; Ocean temperature; Temperature sensors; Wireless sensor networks
【Paper Link】 【Pages】:1663-1671
【Authors】: Guohua Li ; Jianzhong Li ; Hong Gao
【Abstract】: Rapid developments in processor, memory, and radio technology have contributed to the rise of decentralized sensor networks of small, inexpensive nodes that are capable of sensing, computation, and wireless communication. Due to the limited communication bandwidth and other resource constraints of sensor networks, an important and practical demand is to compress time series data generated by sensor nodes in an online manner with a precision guarantee. Although a large number of data compression algorithms have been proposed to reduce data volume, their offline nature or super-linear time complexity prevents them from being applied directly to time series data generated by sensor nodes. To remedy the deficiencies of previous methods, we propose an optimal online algorithm, GDPLA, for constructing a disconnected piecewise linear approximation representation of a time series, which guarantees that the vertical distance between each real data point and the corresponding fit line is less than or equal to ε. GDPLA not only generates the minimum number of segments to approximate a time series with a precision guarantee, but also requires only linear time O(n) bounded by the constant coefficient 6, where one unit denotes the cost of comparing the slopes of two lines. The low cost of our method makes it a natural choice for resource-constrained sensor networks. Extensive experiments on a real dataset have been conducted to demonstrate the superior compression performance of our approach.
【Keywords】: approximation theory; time series; wireless sensor networks; ε-approximation; GDPLA optimal online algorithm; data point; data streams; data volume reduction; decentralized sensor networks; disconnected piecewise linear approximation representation; fit line; inexpensive nodes; limited communication bandwidth; radio technology; resource constraints; resource-constrained sensor networks; sensor nodes; super-linear time complexity; time series data compression; wireless communication; Approximation algorithms; Arrays; Linear approximation; Piecewise linear approximation; Time complexity; Time series analysis
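The ε-guarantee described in this abstract can be illustrated with the classic online "swing filter": maintain a feasible cone of slopes from the segment's start point, and close the segment when a new sample empties the cone. This is a simpler, *connected* variant of my own choosing, not GDPLA itself (which builds disconnected segments), but it shows the same L∞-style guarantee; the data points are hypothetical:

```python
# Minimal online piecewise-linear approximation: each segment's line is
# anchored at the segment's first point, and every later point in the
# segment is within eps of that line. Not GDPLA, just an illustration.

def swing_filter(points, eps):
    """points: [(t, v)] with strictly increasing t. Returns the list of
    indices where segments start."""
    segments, start = [0], 0
    lo, hi = float("-inf"), float("inf")  # feasible slope interval
    for i in range(1, len(points)):
        t0, v0 = points[start]
        t, v = points[i]
        # Slopes that keep point i within eps of the anchored line.
        lo = max(lo, (v - eps - v0) / (t - t0))
        hi = min(hi, (v + eps - v0) / (t - t0))
        if lo > hi:  # no single line fits the segment within eps
            segments.append(i)
            start = i
            lo, hi = float("-inf"), float("inf")
    return segments

data = [(0, 0.0), (1, 1.0), (2, 2.1), (3, 10.0), (4, 11.0)]
print(swing_filter(data, eps=0.5))  # [0, 3]: the jump at t=3 forces a break
```

Like GDPLA, the work per point is a constant number of slope comparisons, hence linear total time.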
【Paper Link】 【Pages】:1672-1680
【Authors】: Oded Argon ; Yuval Shavitt ; Udi Weinsberg
【Abstract】: Many Internet events exhibit periodic patterns. Such events include the availability of end-hosts, the usage of inter-network links for balancing load and transit costs, traffic shaping during peak hours, etc. Internet monitoring systems that collect huge amounts of data can leverage periodicity information to improve resource utilization. However, automatic periodicity inference is a nontrivial task, especially when facing measurement “noise”. In this paper we present two methods for assessing the periodicity of network events and inferring their periodic patterns. The first method uses the Power Spectral Density to infer a single dominant period in a signal representing the sampling process. This method is highly robust to noise, but is most useful for single-period processes. Thus, we present a novel method for detecting the multiple periods that comprise a single process, using iterative relaxation of the time-domain autocorrelation function. We evaluate these methods using extensive simulations, and show their applicability to real Internet measurements of end-host availability and IP address alternation.
【Keywords】: Internet; computer network performance evaluation; resource allocation; telecommunication traffic; IP address alternations; Internet events; Internet monitoring systems; automatic periodicity inference; end-host availability; internetwork links; iterative relaxation; large-scale Internet measurements; load balancing; power spectral density; resource utilization; sampling process; time-domain autocorrelation function; traffic shaping; transit cost; Discrete Fourier transforms; Harmonic analysis; Internet; Phase noise; Robustness; Time-domain analysis
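A toy version of the time-domain half of this abstract: score each candidate period by the autocorrelation of a 0/1 availability signal and return the lag with the highest normalized score. The signal here is synthetic, and the sketch omits the paper's PSD method and iterative relaxation:

```python
# Hypothetical sketch: dominant-period detection via the time-domain
# autocorrelation of a binary end-host availability trace.

def dominant_period(signal, max_lag=None):
    n = len(signal)
    mean = sum(signal) / n
    centered = [x - mean for x in signal]
    var = sum(c * c for c in centered)
    best_lag, best_score = None, float("-inf")
    for lag in range(2, (max_lag or n // 2) + 1):
        score = sum(centered[i] * centered[i - lag]
                    for i in range(lag, n)) / var
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A host that is "up" one day out of every seven, over ten weeks.
signal = [1 if day % 7 == 0 else 0 for day in range(70)]
print(dominant_period(signal))  # 7
```

Real traces are noisier, which is why the paper prefers the PSD for single-period processes and adds iterative relaxation for multi-period ones.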
【Paper Link】 【Pages】:1681-1689
【Authors】: Xun Fan ; John S. Heidemann ; Ramesh Govindan
【Abstract】: IP anycast is a central part of production DNS. While prior work has explored proximity, affinity and load balancing for some anycast services, there has been little attention to third-party discovery and enumeration of the components of an anycast service. Enumeration can reveal abnormal service configurations, benign masquerading or hostile hijacking of anycast services, and help characterize anycast deployment. In this paper, we discuss two methods to identify and characterize anycast nodes. The first uses an existing anycast diagnosis method based on CHAOS-class DNS records but augments it with traceroute to resolve ambiguities. The second proposes Internet-class DNS records which permit accurate discovery through the use of existing recursive DNS infrastructure. We validate these two methods against three widely-used anycast DNS services, using a very large number (60k and 300k) of vantage points, and show that they can provide excellent precision and recall. Finally, we use these methods to evaluate anycast deployments in top-level domains (TLDs), and find one case where a third party operates a server masquerading as a root DNS anycast node, as well as a noticeable proportion of unusual DNS proxies. We also show that, across all TLDs, up to 72% use anycast.
【Keywords】: IP networks; Internet; CHAOS-class DNS records; DNS proxies; IP anycast; Internet-class DNS records; TLD; abnormal service configurations; anycast evaluation; anycast services; benign masquerading; domain name system; hostile hijacking; production DNS; recursive DNS infrastructure; root DNS anycast node; third-party discovery; top-level domains; Chaos; Extraterrestrial measurements; IP networks; Peer-to-peer computing; Routing; Servers; Standards
【Paper Link】 【Pages】:1690-1698
【Authors】: Yi-Chao Chen ; Gene Moo Lee ; Nick G. Duffield ; Lili Qiu ; Jia Wang
【Abstract】: Customer care calls serve as a direct channel for a service provider to learn feedback from its customers. They reveal details about the nature and impact of major events and problems observed by customers. By analyzing the customer care calls, a service provider can detect important events to speed up problem resolution. However, automating event detection based on customer care calls poses several significant challenges. First, the relationship between customers' calls and network events is blurred because customers respond to an event in different ways. Second, customer care calls can be labeled inconsistently across agents and across call centers, and a given event naturally gives rise to calls spanning a number of categories. Third, many important events cannot be detected by looking at calls in one category. How to aggregate calls from different categories for event detection is important but challenging. Lastly, customer care call records are high-dimensional (e.g., thousands of categories in our dataset). In this paper, we propose a systematic method for detecting events in a major cellular network using customer care call data. It consists of three main components: (i) using a regression approach that exploits temporal stability and low-rank properties to automatically learn the relationship between customer calls and major events, (ii) reducing the number of unknowns by clustering call categories and using L1 norm minimization to identify important categories, and (iii) employing multiple classifiers to enhance the robustness against noise and differing response times. For the detected events, we leverage Twitter social media to summarize them and to locate the impacted regions. We show the effectiveness of our approach using data from a large cellular service provider in the US.
【Keywords】: call centres; customer satisfaction; customer services; pattern clustering; regression analysis; social networking (online); L1 norm minimization; Twitter social media; US cellular service provider; automatic event detection; call category clustering; cellular network; customer care call records; customer feedbacks; low-rank properties; network events; problem resolution; regression approach; systematic method; temporal stability; Measurement; Noise; Principal component analysis; Scalability; Testing; Time factors; Training
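The L1-norm minimization step, singling out the few call categories that explain an event, can be illustrated with a basic iterative soft-thresholding (ISTA) solver for the lasso problem. This is a generic sketch of the technique, not the paper's pipeline; the matrix sizes, planted support, and regularization weight below are arbitrary.

```python
import numpy as np

def ista(A, b, lam, iters=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    The L1 penalty drives most coefficients to zero, leaving only the
    'important' columns (here: call categories) with nonzero weight."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - b) / L      # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))          # 50 time bins x 20 call categories
x_true = np.zeros(20)
x_true[[3, 11]] = [2.0, -1.5]              # only two categories matter
x_hat = ista(A, A @ x_true, lam=0.1)
top2 = sorted(np.argsort(-np.abs(x_hat))[:2].tolist())
print(top2)                                # → [3, 11]
```

The two largest recovered coefficients land on the planted categories, which is exactly the sparsity-driven category selection the abstract describes.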
【Paper Link】 【Pages】:1699-1707
【Authors】: Mehdi Malboubi ; Cuong Vu ; Chen-Nee Chuah ; Puneet Sharma
【Abstract】: Two forms of network inference (or tomography) problems have been studied rigorously: (a) traffic matrix estimation or completion based on link-level traffic measurements, and (b) link-level loss or delay inference based on end-to-end measurements. These problems are often posed as underdetermined linear inverse (UDLI) problems and solved in a centralized manner, where all the measurements are collected at a central node, which then applies a variety of inference techniques to estimate the attributes of interest. This paper proposes a novel framework for decentralizing these large-scale UDLI network inference problems by intelligently partitioning them into smaller sub-problems and solving them independently and in parallel. The resulting estimates, referred to as multiple descriptions, can then be fused together to compute the global estimate. We apply this Multiple Description and Fusion Estimation (MDFE) framework to three classical problems: traffic matrix estimation, traffic matrix completion, and loss inference. Using real topologies and traces, we demonstrate how MDFE can speed up computation time while maintaining (even improving) the estimation accuracy and how it enhances robustness against noise and failures. We also show that our MDFE framework is compatible with a variety of existing inference techniques used to solve the UDLI problems.
【Keywords】: Internet; delays; inference mechanisms; matrix algebra; telecommunication traffic; Internet; MDFE; MDFE framework; UDLI network inference problem; decentralizing network inference problem; delay inference; end-to-end measurement; inference technique; link-level loss; link-level traffic measurements; multiple-description fusion estimation; traffic matrix completion; traffic matrix estimation; under-determined linear inverse; Accuracy; Clustering algorithms; Complexity theory; Estimation; Loss measurement; Partitioning algorithms; Robustness
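An underdetermined linear inverse (UDLI) problem in its simplest form looks like this, with the classic minimum-L2-norm answer. The 3-link, 5-flow routing matrix is a made-up toy instance; the paper's MDFE framework additionally partitions such problems and fuses the sub-estimates.

```python
import numpy as np

# Routing matrix: 3 link measurements, 5 origin-destination flows.
# With fewer equations than unknowns, many flow vectors explain the
# same link loads — the estimation problem is underdetermined.
A = np.array([[1, 1, 0, 0, 1],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 1]], dtype=float)
x_true = np.array([2.0, 1.0, 3.0, 0.5, 1.5])
y = A @ x_true                      # observed link loads

x_hat = np.linalg.pinv(A) @ y       # minimum-L2-norm estimate
print(np.allclose(A @ x_hat, y))    # → True: it reproduces the measurements
```

Any inference technique must pick one solution from this infinite set (here, the smallest-norm one); MDFE's contribution is making that choice decentralized and parallel without losing accuracy.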
【Paper Link】 【Pages】:1708-1716
【Authors】: Anastasios Giannoulis ; Paul Patras ; Edward W. Knightly
【Abstract】: Wireless networks increasingly utilize diverse spectral bands that exhibit vast differences in both transmission range and usage. In this work, we present MAWS (Mobile Access of Wide-Spectrum Networks), the first scheme designed for mobile clients to evaluate and select both APs and spectral bands in wide-spectrum networks. Because of the potentially vast number of spectrum and AP options, scanning may be prohibitive. Consequently, our key technique is for clients to infer channel quality and spectral usage for their current location and bands using limited measurements collected in other bands and at other locations. We experimentally evaluate MAWS via a wide-spectrum network that we deploy, a testbed providing access to four bands at 700 MHz, 900 MHz, 2.4 GHz and 5 GHz. To the best of our knowledge, the spectrum of these bands is the widest to be spanned to date by a single operational access network. A key finding of our evaluation is that under a diverse set of operating conditions, mobile clients can accurately predict their performance without a direct measurement at their current location and spectral bands.
【Keywords】: mobile radio; radio access networks; radio spectrum management; AP; MAWS; access point; channel quality; frequency 2.4 GHz; frequency 5 GHz; frequency 700 MHz; frequency 900 MHz; mobile access of wide-spectrum networks; mobile clients; single operational access network; wireless networks; Channel estimation; Delays; Frequency measurement; Interference; Mobile communication; Mobile computing
【Paper Link】 【Pages】:1717-1725
【Authors】: Hongxing Li ; Chuan Wu ; Zongpeng Li ; Francis C. M. Lau
【Abstract】: In a cognitive radio system, licensed primary users can lease idle spectrum to secondary users for monetary remuneration. Secondary users acquire available spectrum for their data delivery needs, with the goal of achieving high throughput and low spectrum charges. Maximizing such a net utility (throughput utility minus spectrum cost) is a central problem faced by a multihop secondary network. Optimal decision making is challenging, since it involves multiple data flows, cross-layer coordination, and economic constraints (budgets of sources). The picture is further complicated by the interplay between secondary data communication and primary spectrum leasing mechanisms. This work is the first to investigate the full spectrum of socially optimal secondary user communication. We design a social welfare maximization framework for multi-session multi-hop secondary data dissemination based on Lyapunov optimization techniques. A salient feature of the framework is that it takes any given primary user mechanism as input, and produces correspondingly a dynamic, distributed rate control, routing, and spectrum allocation and pricing protocol that can achieve long-term maximization of the overall system utility. Through rigorous theoretical analysis, we prove that our online protocol can achieve a social welfare that is arbitrarily close to the offline optimum, with only finite buffer space requirement at each secondary user, and guarantee of no buffer overflow. Empirical studies are conducted to examine the performance of the protocol.
【Keywords】: Lyapunov methods; cognitive radio; decision making; optimisation; Lyapunov optimization technique; arbitrary primary user mechanism; cognitive radio system; cross layer coordination; data delivery needs; distributed rate control; economic constraints; finite buffer space requirement; idle spectrum; monetary remuneration; multihop secondary network; multiple data flow; multisession multihop secondary data dissemination; net utility; online protocol; optimal decision making; pricing protocol; secondary user communication; social welfare maximization framework; socially optimal multihop secondary communication; spectrum allocation; system utility; Algorithm design and analysis; Heuristic algorithms; Optimization; Protocols; Routing; Throughput; Unicast
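The Lyapunov drift-plus-penalty technique underlying such frameworks can be sketched for a single flow on a fixed-capacity link. This is a toy model with invented parameters (V, the capacity, the admission cap), not the paper's multi-session multi-hop protocol: each slot, the admitted rate maximizes V times the utility minus the queue-weighted backlog, trading optimality gap against queue size.

```python
import math

def drift_plus_penalty(capacity=1.0, V=10.0, slots=5000):
    """Toy drift-plus-penalty rate control for one flow: each slot, admit
    the rate a maximizing V*log(1+a) - Q*a (closed form: a = V/Q - 1),
    then serve up to `capacity`.  Q is the data queue backlog."""
    Q, total_utility = 0.0, 0.0
    for _ in range(slots):
        a = max(V / max(Q, 1e-9) - 1.0, 0.0)   # utility-vs-backlog maximizer
        a = min(a, 5.0)                        # cap instantaneous admissions
        total_utility += math.log(1.0 + a)
        Q = max(Q + a - capacity, 0.0)         # queue dynamics
    return total_utility / slots, Q

util, backlog = drift_plus_penalty()
print(util, backlog)   # utility ≈ log 2, backlog settles near V/2
```

Raising V pushes the average utility closer to the offline optimum while growing the steady backlog (here toward V/2), which mirrors the paper's "arbitrarily close to optimal with finite buffers" trade-off.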
【Paper Link】 【Pages】:1726-1734
【Authors】: Bin Cao ; Jon W. Mark ; Qinyu Zhang ; Rongxing Lu ; Xiaodong Lin ; Xuemin (Sherman) Shen
【Abstract】: This work is concerned with enhancement of spectrum-energy efficiency whereby a primary user (PU) engages secondary users (SUs) to relay its transmission in an energy-aware cognitive radio network, i.e., forming a cooperative cognitive radio network (CCRN). The cooperation framework in CCRN can be multiple two-hop relaying with or without PU's direct link transmission using an amplify-and-forward or decode-and-forward mode. In the energy-aware CCRN, an individual cooperating partner attempts to maximize its own utility. The partner selection and parameter optimization, led by the PU, are formulated as two Stackelberg games, namely a sum-constrained power allocation game for two-phase cooperation and a power control game for three-phase cooperation, respectively. Unique Nash Equilibrium is proved and achieved in analytical form for each game. The optimal communication strategy is chosen which achieves the maximum PU utility among different optimal communication strategies. Moreover, an implementation scheme is presented to perform the partner selection and parameter optimization based on the analytical results. Theoretical analysis and performance evaluation show that the proposed CCRN model is a promising framework under which the PU's utility is maximized, while the relaying SUs can attain acceptable utilities.
【Keywords】: amplify and forward communication; cognitive radio; cooperative communication; decode and forward communication; game theory; relay networks (telecommunication); CCRN; PU; SU; Stackelberg games; amplify-and-forward mode; cooperative cognitive radio networking; decode-and-forward mode; energy-aware cognitive radio network; multiple two-hop relaying; optimal communication strategies; parameter optimization; partner selection; power control game; primary user; secondary users; spectrum-energy efficiency; sum-constrained power allocation game; three-phase cooperation; two-phase cooperation; unique Nash equilibrium; Cognitive radio; Games; Optimization; Relays; Resource management; Silicon; Throughput
【Paper Link】 【Pages】:1735-1743
【Authors】: Sang-Woon Jeon ; Michael Gastpar
【Abstract】: The capacity scaling laws of two overlaid networks sharing the same wireless resources with different priorities are investigated. The primary network is assumed to operate in an order-optimal fashion to achieve its standalone capacity scaling law. The secondary “cognitive” network must keep its interference to the primary network below a certain threshold while at the same time maximizing its own throughput scaling law based on cognition information. The existing scaling results for cognitive networks inherently assume multihop communication treating all other signals except from a single intended transmitter as noise. By contrast, in this paper, a general coding model is considered without any specific physical layer coding assumptions. Therefore, this paper provides a general framework for comprehensive understanding of fundamental limits on the capacity scaling laws of cognitive networks. For the extended network model, the capacity scaling laws of both the primary and secondary networks are completely characterized. For the dense network model, an improved throughput scaling law is achieved by inducing cooperation within the secondary network. In both cases, it turns out that the conventional multihop approach is in general quite suboptimal.
【Keywords】: cognitive radio; encoding; interference suppression; radio transmitters; cognition information; extended network model; general coding model; interference-limited communication; multihop communication; order-optimal fashion; overlaid networks; physical layer coding assumptions; primary network; secondary cognitive network capacity scaling; single intended transmitter; standalone capacity scaling law; Interference; Physical layer; Protocols; Spread spectrum communication; Throughput; Upper bound; Wireless communication
【Paper Link】 【Pages】:1744-1752
【Authors】: Youngmi Jin ; Yung Yi ; George Kesidis ; Fatih Kocak ; Jinwoo Shin
【Abstract】: This paper considers a hybrid peer-to-peer (p2p) system, a dynamic distributed caching system with an authoritative server dispensing contents only if the contents fail to be found by searching an unstructured peer-to-peer (p2p) system. We study the case when some peers may not be fully cooperative in the search process and examine the impact of various noncooperative behaviors on the querying load on the server as the peer population size increases. We categorize selfish peers into three classes: impatient peers that directly query the server without searching the p2p system, non-forwarders that refuse to forward query requests, and non-resolvers that refuse to share contents. It is shown that in the hybrid p2p system, impatient and/or nonforwarding behaviors prevent the system from scaling well because of the high server load, while the system scales well under the non-resolving selfish peers. Our study implies that the hybrid p2p system does not mandate an incentive mechanism for content sharing, which is in stark contrast to unstructured p2p systems, where incentivizing peers to share contents is known to be a key factor for the system's scalability.
【Keywords】: cache storage; client-server systems; peer-to-peer computing; query processing; authoritative server; client-server system; content sharing; dynamic distributed caching system; hybrid P2P system; impatient behavior; impatient peer; incentive mechanism; nonforwarder; nonforwarding behavior; nonresolvers; nonresolving selfish peers; peer incentive; peer noncooperative behavior; peer population size; peer-to-peer caching system; query request forwarding; server load; server querying load; system scalability; unstructured peer-to-peer system; Markov processes; Peer-to-peer computing; Probabilistic logic; Scalability; Servers; Sociology; Waste materials
【Paper Link】 【Pages】:1753-1761
【Authors】: François Baccelli ; Fabien Mathieu ; Ilkka Norros ; Rémi Varloot
【Abstract】: We propose a new model for peer-to-peer networking which takes the network bottlenecks into account beyond the access. This model can cope with key features of P2P networking, such as degree or locality constraints, together with the fact that distant peers often have a smaller rate than nearby peers. Using a network model based on rate functions, we give a closed-form expression of peers' download performance in the system's fluid limit, as well as approximations for the other cases. Our results show the existence of realistic settings for which the average download time is a decreasing function of the load, a phenomenon that we call super-scalability.
【Keywords】: peer-to-peer computing; P2P networks; degree constraints; locality constraints; peer-to-peer networking; peers download performance closed form expression; rate functions; superscalability phenomenon; system fluid limit; Availability; Bandwidth; Fluids; Load modeling; Mathematical model; Peer-to-peer computing; Scalability
【Paper Link】 【Pages】:1762-1770
【Authors】: Zhongmei Yao ; Daren B. H. Cline ; Dmitri Loguinov
【Abstract】: We revisit link lifetimes in random P2P graphs under dynamic node failure and create a unifying stochastic model that generalizes the majority of previous efforts in this direction. We not only allow non-exponential user lifetimes and age-dependent neighbor selection, but also cover both active and passive neighbor-management strategies, model the lifetimes of incoming and outgoing links, derive churn-related message volume of the system, and obtain the distribution of transient in/out degree at each user. We then discuss the impact of design parameters on overhead and resilience of the network.
【Keywords】: graph theory; peer-to-peer computing; stochastic processes; dynamic node failure; link lifetimes redux; neighbor-management strategies; random P2P graphs; stochastic model; unstructured P2P; Delays; Educational institutions; Linear approximation; Peer-to-peer computing; Random variables; Resilience; Shape
【Paper Link】 【Pages】:1771-1779
【Authors】: Kianoosh Mokhtarian ; Hans-Arno Jacobsen
【Abstract】: Delivering delay-sensitive data to a group of receivers with minimum latency is a fundamental problem for various distributed applications. In this paper, we study multicast routing with minimum end-to-end delay to the receivers. The delay to each receiver in a multicast tree consists of the time that the data spends in overlay links as well as the latency incurred at each overlay node, which has to send out a piece of data several times over a finite-capacity network connection. The latter portion of the delay, which is proportional to the degree of nodes in the tree, can be a significant portion of the total delay as we show in the paper. Yet, it is often ignored or only partially addressed by previous multicast algorithms. We formulate the actual delay to the receivers in a multicast tree and consider minimizing the average and the maximum delay in the tree. We show the NP-hardness of these problems and prove that they cannot be approximated in polynomial time to within any reasonable approximation ratio. We then present a number of efficient algorithms to build a multicast tree in which the average or the maximum delay is minimized. These algorithms cover a wide range of overlay sizes for both versions of our problem. The effectiveness of our algorithms is demonstrated through comprehensive experiments on different real-world datasets, and using various overlay network models. The results confirm that our algorithms can achieve much lower delays (up to 60% less) and up to orders of magnitude faster running times (i.e., supporting larger scales) than previous minimum-delay multicast approaches.
【Keywords】: multicast communication; telecommunication network routing; trees (mathematics); delay-sensitive data; end-to-end delay; finite-capacity network connection; minimum-delay overlay multicast; multicast routing; multicast tree; receivers; Algorithm design and analysis; Approximation algorithms; Approximation methods; Delays; Overlay networks; Receivers; Routing
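The degree-dependent delay component the abstract highlights is easy to make concrete: an overlay node with d children must serialize d copies of the data, so later children wait longer. The sketch below uses a simple serialization model (the i-th child hears its parent after i send times plus the link delay); the tree, delays, and unit send time are invented for illustration and this is not the paper's algorithm, only its delay formulation.

```python
def receiver_delays(tree, link_delay, send_time=1.0, root="s"):
    """Per-receiver delay in an overlay multicast tree where each node
    transmits the data once per child (degree-dependent serialization)."""
    delays, stack = {root: 0.0}, [root]
    while stack:
        u = stack.pop()
        for i, v in enumerate(tree.get(u, []), start=1):
            # i-th child waits for i serialized sends, then the link delay
            delays[v] = delays[u] + i * send_time + link_delay[(u, v)]
            stack.append(v)
    return delays

tree = {"s": ["a", "b"], "a": ["c"]}
ld = {("s", "a"): 2.0, ("s", "b"): 2.0, ("a", "c"): 1.0}
d = receiver_delays(tree, ld)
# d["a"] = 3.0; d["b"] = 4.0 (second child pays an extra send); d["c"] = 5.0
```

Note how "b" sees a higher delay than "a" over an identical link purely because of its parent's degree — the effect that models ignoring node latency miss.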
【Paper Link】 【Pages】:1780-1787
【Authors】: Hao Chen ; Yu Chen ; Douglas H. Summerville ; Zhou Su
【Abstract】: Shrew Distributed Denial-of-Service (DDoS) attacks are stealthy, concealing their malicious activities in normal traffic. Although it is difficult to detect shrew DDoS attacks in the time domain, their energy exposes them in the frequency domain. For this purpose, online Power Spectral Density (PSD) analysis necessitates real-time PSD data conversion. In this paper, an optimized FPGA based accelerator for real-time PSD conversion is proposed, which is based on our innovative component-reusable Auto-Correlation (AC) algorithm and the adapted 2N-point real-valued Discrete Fourier Transform (DFT) algorithm. Further optimization is achieved through the exploration of algorithm characteristics and hardware parallelism for this case. Evaluation results from both simulation and synthesis are provided. The overall design can be easily placed in a Xilinx Virtex2 Pro FPGA.
【Keywords】: computer network security; discrete Fourier transforms; electronic data interchange; field programmable gate arrays; frequency-domain analysis; parallel processing; reconfigurable architectures; telecommunication traffic; 2N-point real-valued discrete Fourier transform algorithm; AC algorithm; DFT algorithm; Xilinx Virtex2 Pro FPGA; component-reusable auto-correlation algorithm; frequency domain; hardware parallelism; malicious activities; normal traffic; online power spectral density analysis; online shrew DDoS attack detection; optimized FPGA based accelerator; real-time PSD data conversion; reconfigurable PSD accelerator optimized design; shrew distributed denial-of-service attacks; Algorithm design and analysis; Computer crime; Convolution; Discrete Fourier transforms; Field programmable gate arrays; Frequency-domain analysis; Real-time systems
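The autocorrelation-to-PSD pipeline the accelerator implements in hardware rests on the Wiener-Khinchin relation: the DFT of the circular autocorrelation equals the periodogram. A software reference of that identity is below (a sketch for checking the math, not the FPGA design; the 2N-point real-valued DFT optimization is not reproduced, and the test signal is arbitrary).

```python
import numpy as np

def circular_autocorr(x):
    """Circular autocorrelation r[k] = (1/N) * sum_n x[n] * x[(n+k) mod N]."""
    N = len(x)
    return np.array([np.dot(x, np.roll(x, -k)) for k in range(N)]) / N

x = np.sin(2 * np.pi * np.arange(64) / 8)        # a period-8 tone
r = circular_autocorr(x)
psd = np.real(np.fft.fft(r))                     # Wiener-Khinchin route
periodogram = np.abs(np.fft.fft(x)) ** 2 / len(x)  # direct route
print(np.allclose(psd, periodogram))             # → True
```

Computing the PSD via the autocorrelation is what lets the hardware reuse AC components and a real-valued transform instead of a full complex FFT of the traffic signal.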
【Paper Link】 【Pages】:1788-1796
【Authors】: Wenji Chen ; Yang Liu ; Yong Guan
【Abstract】: Cyber-attacks are happening every day, with a variety of behaviors and objects. For example, email spammers may compromise computers to sign up millions of email accounts for sending spam emails; during worm spreading, each infected host may try to connect to many hosts to further spread the worm, etc. However, many such large-scale and often distributed cyber-attacks share a common characteristic that the activities involved in them result in changes in the cardinality of attack traffic. Examples include: the cardinality of the accounts signed up by a compromised host often increases in spam email delivery scenarios, and the cardinality of the connections made from a host may increase in worm spreading scenarios. In this paper, we focus on changes in the cardinality of the network/attack traffic that may indicate on-going cyber-attacks. We formulate this problem as cardinality-based change point detection in distributed streams of attack traffic, and develop a nonparametric error-bounded scheme for it. Our scheme supports the capability of merging information collected from multiple monitoring points to detect large-scale attacks. Also, our scheme uses small space as well as constant processing time, which makes it applicable for space-constrained network or security systems. We have conducted experiments using both real-world traces and synthetic data. Experimental results and theoretical analysis show that our scheme can detect changes in the cardinality within given time and error bounds. We expect the solutions of this work will be deployed as a building block in network and security monitoring systems to detect large distributed cyber attacks.
【Keywords】: nonparametric statistics; pattern recognition; security of data; unsolicited e-mail; attack traffic cardinality; cardinality change-based early detection; cardinality-based change point detection; connection cardinality; distributed cyber-attack; distributed stream; email accounts; email spammers; error bound; infected host; information merging; large distributed cyber attack detection; large-scale cyber-attack; network monitoring system; nonparametric error-bounded scheme; security monitoring system; security system; space-constrained network; spam email delivery scenario; spam email sending; worm spreading; Electronic mail; Grippers; IP networks; Merging; Monitoring; Radiation detectors; Time series analysis
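The core idea, flagging a change point when traffic cardinality jumps, can be shown with exact per-window distinct counts and a simple ratio rule. This is a deliberately simplified stand-in for the paper's scheme, which instead uses small-space sketches with provable error bounds; the windows, threshold, and "worm-like" burst below are fabricated for the example.

```python
def detect_cardinality_changes(windows, ratio=2.0):
    """Flag window t as a change point when its number of distinct items
    (cardinality) is at least `ratio` times that of the previous window."""
    changes, prev = [], None
    for t, w in enumerate(windows):
        card = len(set(w))                 # exact count; the paper sketches this
        if prev is not None and prev > 0 and card / prev >= ratio:
            changes.append(t)
        prev = card
    return changes

# A host contacting a few repeat destinations, then suddenly many (worm-like):
normal = [[1, 2, 3], [2, 3, 4], [1, 4, 5]]
attack = [list(range(100))]
print(detect_cardinality_changes(normal + attack))   # → [3]
```

Replacing `len(set(w))` with a streaming cardinality sketch is what makes the approach fit the small-space, constant-time budget the abstract claims.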
【Paper Link】 【Pages】:1797-1805
【Authors】: Guanyu Tian ; Zhenhai Duan ; Todd Baumeister ; Yingfei Dong
【Abstract】: Freenet is a popular peer-to-peer anonymity network, with the objective of providing anonymity for both content publishers and retrievers. Despite more than a decade of active development and deployment and the adoption of well-established cryptographic algorithms in Freenet, it remains unanswered how well the anonymity objective of the initial Freenet design has been met. In this paper we develop a traceback attack on Freenet, and show that the originating machine of a content request message in Freenet can be identified; that is, the anonymity of a content retriever can be broken, even if a single request message has been issued by the retriever. We present the design of the traceback attack, and perform Emulab-based experiments to confirm the feasibility and effectiveness of the attack. With randomly chosen content requesters (and random contents stored in the Freenet testbed), the experiments show that, for 24% to 43% of the content request messages, we can identify their originating machines. We also briefly discuss potential solutions to address the developed traceback attack. Despite being developed specifically on Freenet, the basic principles of the traceback attack and solutions have important security implications for similar anonymous content sharing systems.
【Keywords】: computer network security; cryptography; peer-to-peer computing; Emulab-based experiments; Freenet testbed; anonymous content sharing systems; content publisher anonymity; content request message; content retriever anonymity; cryptographic algorithms; peer-to-peer anonymous network; random content requester selection; random content storage; traceback attack design; Algorithm design and analysis; Educational institutions; Monitoring; Peer-to-peer computing; Probes; Routing; Security
【Paper Link】 【Pages】:1806-1814
【Authors】: Hongli Zhang ; Jiantao Shi ; Lin Ye ; Xiaojiang Du
【Abstract】: In this paper, we study several important issues that can be used to prevent pirated content propagation in BitTorrent (BT) Distributed Hash-Tables (DHT) networks. We design a system called PPBD to stop pirated content propagation by utilizing several attacking methods. First, the system can efficiently deal with massive concurrent connections to reduce bandwidth consumption, schedule peers to cooperate, and optimize the protection methods according to clients. Second, we construct two mathematical models for BT DHT attacks, and we theoretically analyze the system performance. Third, we take into account countermeasures adopted by different BT clients and optimize our PPBD system accordingly. Our real-world experiments show that: (1) our system can extend the download duration by at least a factor of three using the fake-block attack, and it is more effective in a small swarm; (2) DHT index poisoning and routing pollution can confine sharing to a small swarm.
【Keywords】: Internet; computer crime; computer network reliability; computer network security; cryptography; mathematical analysis; peer-to-peer computing; telecommunication network routing; BT DHT attack; BT DHT network; BT client; BitTorrent; DHT index poison; PPBD system; bandwidth consumption reduction; concurrent connection; distributed hash-table network; download duration; fake-block attacking method; mathematical model; optimization; peer scheduling; piracy preventing system; pirated content propagation; protection method; routing pollution; system design; system performance; Bandwidth; Crawlers; Indexes; Peer-to-peer computing; Pollution; Routing; Toxicology; BitTorrent; DHT; Peer-to-peer networking; piracy prevention
【Paper Link】 【Pages】:1815-1823
【Authors】: Giuseppa Alfano ; Michele Garetto ; Emilio Leonardi
【Abstract】: We analyze throughput-delay scaling laws of mobile ad-hoc networks under a content-centric traffic scenario, where users are mainly interested in retrieving contents cached by other nodes. We assume limited buffer size available at each node and Zipf-like content popularity. We consider nodes uniformly visiting the network area according to a random-walk mobility model, whose flight size is varied from the typical distance among the nodes (quasi-static case) up to the edge length of the network area (reshuffling mobility model). Our main findings are i) the best throughput-delay trade-offs are achieved in the quasi-static case: increasing the mobility degree of nodes leads to worse and worse performance; ii) the best throughput-delay trade-offs can be recovered by power control (i.e., by adapting the transmission range to the content) even in the complete reshuffling case.
【Keywords】: mobile ad hoc networks; mobility management (mobile radio); radio networks; telecommunication traffic; Zipf-like content; content-centric traffic scenario; content-centric wireless networks; limited buffers; mobile ad hoc networks; random-walk mobility model; throughput-delay scaling laws; Aggregates; Delays; Indexes; Mobile communication; Mobile computing; Receivers; Throughput
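The Zipf-like popularity assumption is easy to make concrete: with exponent α, the k-th most popular content is requested with probability proportional to k^(-α), so a small cache of top-ranked contents absorbs a disproportionate share of requests. The catalogue size, α, and cache size below are arbitrary illustration values, not tied to the paper's scaling analysis.

```python
def zipf_pmf(n, alpha):
    """Request probability of contents ranked 1..n under Zipf-like popularity."""
    weights = [k ** -alpha for k in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

p = zipf_pmf(10_000, alpha=0.8)
# Caching only the 100 most popular of 10,000 contents (1% of the
# catalogue) already serves far more than 1% of all requests:
top_hit = sum(p[:100])
print(top_hit)
```

This skew is why limited per-node buffers can still yield good throughput-delay trade-offs in content-centric settings.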
【Paper Link】 【Pages】:1824-1832
【Authors】: Seon-Yeong Han ; Nael B. Abu-Ghazaleh ; Dongman Lee
【Abstract】: The accuracy of wireless network packet simulation critically depends on the quality of the wireless channel models. These models directly affect the fundamental network characteristics, such as link quality, transmission range, and capture effect, as well as their dynamic variation in time and space. Path loss is the stationary component of the channel model affected by the shadowing in the environment. Existing path loss models are inaccurate, require very high measurement or computational overhead, and/or often cannot be made to represent a given environment. The paper contributes a flexible path loss model that uses a novel approach for spatially coherent interpolation from available nearby channels to allow accurate and efficient modeling of path loss. We show that the proposed model, called Double Regression (DR), generates a correlated space, allowing both the sender and the receiver to move without abrupt change in path loss. Combining DR with a traditional temporal fading model, such as Rayleigh fading, provides an accurate and efficient channel model that we integrate with the NS-2 simulator. We use measurements to validate the accuracy of the model for a number of scenarios. We also show that there is substantial impact on simulation behavior (e.g., up to 600% difference in throughput for simple scenarios) when path loss is modeled accurately.
【Keywords】: fading channels; radiofrequency measurement; regression analysis; DR; NS-2 simulator; double regression; flexible path loss model; fundamental network characteristics; spatially correlated path loss model; temporal fading model; wireless channel models; wireless network packet simulation; Channel models; Computational modeling; Correlation; Fading; Loss measurement; Receivers; Shadow mapping
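For context, the stationary component the abstract discusses is usually written as the classic log-distance path loss model with log-normal shadowing. The sketch below implements that textbook baseline with made-up parameter values; it is not the paper's Double Regression model, whose point is precisely to replace the i.i.d. shadowing term with a spatially correlated one interpolated from nearby measured channels.

```python
import math
import random

def path_loss_db(d, d0=1.0, pl0=40.0, n=3.0, sigma=4.0, rng=None):
    """Log-distance path loss with log-normal shadowing, in dB:
    PL(d) = PL(d0) + 10*n*log10(d/d0) + X_sigma, X_sigma ~ N(0, sigma^2)."""
    shadow = (rng or random).gauss(0.0, sigma)   # i.i.d. shadowing term
    return pl0 + 10.0 * n * math.log10(d / d0) + shadow

# With shadowing disabled, loss grows 10*n dB per decade of distance:
print(path_loss_db(10.0, sigma=0.0))   # → 70.0 dB
```

Because `X_sigma` here is redrawn independently per call, two adjacent positions can see wildly different loss — the abrupt-change artifact that spatially coherent interpolation (the DR idea) removes.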
【Paper Link】 【Pages】:1833-1841
【Authors】: Hussein Al-Zubaidy ; Jörg Liebeherr ; Almut Burchard
【Abstract】: A fundamental problem for the delay and backlog analysis across multi-hop paths in wireless networks is how to account for the random properties of the wireless channel. Since the usual statistical models for radio signals in a propagation environment do not lend themselves easily to a description of the available service rate, the performance analysis of wireless networks has resorted to higher-layer abstractions, e.g., using Markov chain models. In this work, we propose a network calculus that can incorporate common statistical models of fading channels and obtain statistical bounds on delay and backlog across multiple nodes. We conduct the analysis in a transfer domain, which we refer to as the SNR domain, where the service process at a link is characterized by the instantaneous signal-to-noise ratio at the receiver. We discover that, in the transfer domain, the network model is governed by a dioid algebra, which we refer to as (min, ×) algebra. Using this algebra we derive the desired delay and backlog bounds. An application of the analysis is demonstrated for a simple multi-hop network with Rayleigh fading channels.
【Keywords】: Markov processes; Rayleigh channels; algebra; calculus; statistical analysis; (min, ×) algebra; (min, ×) network calculus; Markov chain models; Rayleigh fading channels; SNR domain; higher-layer abstractions; instantaneous signal-to-noise ratio; multihop fading channels; propagation environment; radio signals; statistical models; transfer domain; wireless channel; Algebra; Calculus; Delays; Fading; Servers; Signal to noise ratio; Transforms
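The (min, ×) operator at the heart of the analysis can be sketched in a toy discrete-time form: end-to-end service in the SNR (transfer) domain is the (min, ×)-convolution of per-hop service processes. This deterministic example with a constant per-slot gain is only an illustration of the algebra; the paper's processes are random fading channels analyzed via moment bounds.

```python
def min_times_conv(S1, S2):
    """(min, x)-convolution of two SNR-domain service processes:
    (S1 (x) S2)(s, t) = min over s <= u <= t of S1(s, u) * S2(u, t)."""
    return lambda s, t: min(S1(s, u) * S2(u, t) for u in range(s, t + 1))

# Two hops, each with constant per-slot SNR gain g = 2, i.e. S(s,t) = g**(t-s):
S = lambda s, t: 2 ** (t - s)
end_to_end = min_times_conv(S, S)
print(end_to_end(0, 4))   # → 16, i.e. 2**4: the cascade behaves like one hop
```

The analogy to (min, +) network calculus is direct: multiplication of SNR gains in the transfer domain plays the role that addition of service amounts plays in the bit domain.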
【Paper Link】 【Pages】:1842-1850
【Authors】: Ming Li ; Pan Li ; Miao Pan ; Jinyuan Sun
【Abstract】: The rapid growth of wireless devices and services exacerbates the problem of spectrum scarcity in wireless networks. Recently, spectrum auction has emerged as one of the most promising techniques to enhance spectrum utilization and mitigate this problem. Although there exist some works studying spectrum auction, most of them are designed for single-hop communications, and it is usually not clear with whom a winning user communicates. Moreover, most previous auction schemes focus only on satisfying the incentive compatibility property, also called truthfulness, but ignore two other critical properties: individual rationality and budget balance. Thus, they may not be economic-robust. In this paper, we propose a transmission opportunity auction scheme, called TOA, which can support multi-hop data traffic, ensure economic-robustness, and generate high revenue for the auctioneer. Specifically, in TOA, instead of bidding for spectrum bands as in traditional spectrum auction schemes, users bid for transmission opportunities (TOs). A TO is defined as the permit of data transmission on a specific link using a certain band, i.e., a link-band pair. The TOA scheme is composed of three procedures: TO allocation, TO scheduling, and pricing, which are performed sequentially and iteratively until the aforementioned goals are reached. We prove that TOA is economic-robust, and conduct extensive simulations to show its effectiveness and efficiency.
【Keywords】: data communication; radio networks; radio spectrum management; telecommunication traffic; time-of-arrival estimation; TO allocation; TO pricing; TO scheduling; TOA; budget balance; data transmission; economic-robust transmission opportunity auction; incentive compatibility property; individual rationality; link-band pair; multihop data traffic; multihop wireless networks; single-hop communications; spectrum auction; spectrum scarcity; spectrum utilization enhancement; transmission opportunities; wireless devices; wireless services; Integrated circuits; Interference; Pricing; Relays; Resource management; Scheduling; Wireless networks
【Paper Link】 【Pages】:1851-1859
【Authors】: Devu Manikantan Shila ; Yu Cheng
【Abstract】: The main focus of this paper is to show theoretically that power is a crucial factor in multi-radio multi-channel (MR-MC) wireless networks and hence, by judiciously leveraging the power, one can realize a considerable gain in the capacity of MR-MC wireless networks. Such a capacity gain through power enhancement is revealed by our new insight of a co-channel enlarging effect. In particular, when the number of available channels (c) in a network is larger than that necessary for enabling the maximum set of simultaneous transmissions (c̃), allocating transmissions to those additional c-c̃ channels could enlarge the distance between the co-channel transmissions; the larger co-channel distance then allows a higher transmission power for higher link capacity. The findings of this paper specifically indicate that by exploiting the co-channel enlarging effect with power, one can realize the following capacity gains for MR-MC wireless networks: (i) In the channel-constraint region (c̃ < c < nφ/2), if each node augments its power from the minimum P_min to P_min·(c/c̃)^(α/2), then a gain of Θ(log(c/c̃)^(α/2)) is achieved; (ii) In the power-constraint region (c ≥ nφ/2), if each node sends at the maximum power level, P_max = P_min·n^K or P_min·2^(nφ/2), depending on the power availability at a node, then a gain of Θ(log n) or Θ(n) is achieved, respectively.
【Keywords】: channel capacity; cochannel interference; radio networks; wireless channels; MR-MC wireless networks; capacity gain; channel-constraint region; cochannel distance; cochannel enlarging effect; cochannel interference; cochannel transmissions; link capacity; multiradio multichannel wireless networks; power enhancement; power-constraint region; Bandwidth; Channel models; Interference; Receivers; Signal to noise ratio; Wireless networks
【Paper Link】 【Pages】:1860-1868
【Authors】: Haopeng Li ; Jiang Xie
【Abstract】: Fast handoff support is a basic requirement for an Internet-based wireless mesh network (WMN), aiming to guarantee that mobile users remain continuously connected to the Internet regardless of their physical locations or moving trajectories. Due to the multi-hop transmission of network-layer handoff signaling packets, handoff performance in WMNs can be largely degraded by the increasing number of wireless hops as well as by the channel access contention between data and signaling packets in the mesh backbone. However, these issues are ignored in existing handoff solutions and multi-channel medium access control schemes. In this paper, we address seamless handoff support from a different perspective and propose a gateway scheduling-based handoff scheme for single-radio multi-hop WMNs. Our proposed handoff scheme realizes single-hop handoff signaling packet transmissions and eliminates the channel contention between data and signaling packets. Simulation results show that the total handoff delay is improved significantly using our proposed gateway scheduling-based handoff scheme under various scenarios, as compared to existing handoff solutions in multi-hop WMNs. In addition, due to the single-hop transmission of signaling packets, the signaling overhead in the wireless mesh backbone can be substantially reduced.
【Keywords】: Internet; access protocols; internetworking; scheduling; telecommunication signalling; wireless mesh networks; GaS; Internet-based wireless mesh network; WMN; channel contentions; data packets; gateway scheduling-based handoff scheme; handoff performance; handoff solutions; handoff support; multichannel medium access control schemes; multihop transmission; physical locations; seamless handoff support; signaling overhead; signaling packets; single-hop transmission; single-radio infrastructure wireless mesh networks; single-radio multi-hop WMNs; wireless hops; wireless mesh backbone; Delays; Directional antennas; Internet; Logic gates; Manganese; Schedules; Wireless communication
【Paper Link】 【Pages】:1869-1877
【Authors】: Ramanujan K. Sheshadri ; Dimitrios Koutsonikolas
【Abstract】: We conduct the first experimental study of the performance of link quality-based routing metrics in an 802.11n wireless mesh network (WMN). Link quality-based metrics have been shown to significantly outperform the traditional hopcount metric, but they have only been evaluated over legacy 802.11a/b/g radios. The new 802.11n standard introduces a number of enhancements at the MAC and PHY layers (MIMO technology, channel bonding, frame aggregation, short guard interval, and more aggressive modulation and coding schemes), marking the beginning of a new generation of 802.11 radios. Our study in a 21-node indoor 802.11n WMN testbed reveals that the gains of link quality-based metrics over the hopcount metric in legacy 802.11 WMNs do not carry over to 802.11n MIMO WMNs. We analyze the causes of this behavior and make recommendations for the design of new routing metrics in 802.11n WMNs.
【Keywords】: MIMO communication; access protocols; modulation coding; telecommunication links; telecommunication network routing; wireless LAN; wireless mesh networks; IEEE 802.11n MIMO WMN; IEEE 802.11n wireless mesh networks; MAC; PHY layers; channel bonding; coding scheme; frame aggregation; hopcount metric; link quality-based routing metrics; modulation scheme; short guard interval; Barium; Bit rate; IEEE 802.11g Standard; IEEE 802.11n Standard; Routing; Throughput
【Paper Link】 【Pages】:1878-1886
【Authors】: Zi Feng ; George Papageorgiou ; Srikanth V. Krishnamurthy ; Ramesh Govindan ; Tom La Porta
【Abstract】: The end-user experience in viewing a video depends on the distortion; however, also of importance is the delay experienced by the packets of the video flow since it impacts the timeliness of the information contained and the playback rate at the receiver. Unfortunately, these performance metrics are in conflict with each other in a wireless network. Packet losses can be minimized by perfectly avoiding interference by separating transmissions in time or frequency; however, this decreases the rate at which transmissions occur, and this increases delay. Relaxing the requirement for interference avoidance can lead to packet losses and thus increase distortion, but can decrease the delay for those packets that are delivered. In this paper, we investigate this trade-off between distortion and delay for video. To understand the trade-off between video quality and packet delay, we develop an analytical framework that accounts for characteristics of the network (e.g. interference, channel variations) and the video content (motion level), assuming as a basis, a simple channel access policy that provides flexibility in managing the interference in the network. We validate our model via extensive simulations. Surprisingly, we find that the trade-off depends on the specific features of the video flow: it is better to trade-off high delay for low distortion with fast motion video, but not with slow motion video. Specifically, for an increase in PSNR (a metric that quantifies distortion) from 20 to 25 dB, the penalty in terms of the increase in mean delay with fast motion video is 91 times that with slow motion video. Our simulation results further quantify the trade-offs in various scenarios.
【Keywords】: interference suppression; radio networks; radiofrequency interference; video communication; wireless channels; PSNR; channel access policy; distortion; end-user experience; interference avoidance; interference management; packet delay; packet loss minimisation; performance metrics; video quality; video transmission; wireless network; Delays; Interference; Packet loss; Signal to noise ratio; Streaming media; Video recording
【Paper Link】 【Pages】:1887-1895
【Authors】: Siva Theja Maguluri ; R. Srikant
【Abstract】: We consider a stochastic model of jobs arriving at a cloud data center. Each job requests a certain amount of CPU, memory, disk space, etc. Job sizes (durations) are also modeled as random variables, with possibly unbounded support. These jobs need to be scheduled non-preemptively on servers. The jobs are first routed to one of the servers when they arrive and are queued at the servers. Each server then chooses a set of jobs from its queues so that it has enough resources to serve all of them simultaneously. This problem has been studied previously under the assumption that job sizes are known and upper-bounded, and an algorithm was proposed that stabilizes traffic load in a diminished capacity region. Here, we present a load balancing and scheduling algorithm that is throughput-optimal, without assuming that job sizes are known or upper-bounded.
【Keywords】: cloud computing; computer centres; network servers; processor scheduling; resource allocation; stochastic processes; telecommunication traffic; CPU; cloud data center; cloud job scheduling algorithm; diminished capacity region; job queues; job requests; job sizes; load balancing algorithm; random variables; servers; stochastic job model; traffic load stabilization; Markov processes; Routing; Schedules; Scheduling; Servers; Throughput; Vectors
【Paper Link】 【Pages】:1896-1904
【Authors】: Furong Huang ; Anima Anandkumar
【Abstract】: The problem of distributed load balancing among m agents operating in an n-server slotted system is considered. A randomized local search mechanism, the FCD (fast, concurrent and distributed) algorithm, is implemented concurrently by each agent associated with a user. It involves switching to a different server with a certain exploration probability and then backtracking with a probability proportional to the ratio of the measured loads in the two servers (in consecutive time slots). The exploration and backtracking operations are executed concurrently by users in local alternating time slots. To ensure that users do not switch to other servers asymptotically, each user chooses the exploration probability to decay polynomially with time, with decay rate β ∈ [0.5, 1]. The backtracking decision is then based on an estimate of the server load which is computed from local information. Thus, the FCD algorithm does not require synchronization or coordination with other users. The main contribution of this work, besides the FCD algorithm, is the analysis of the convergence time for the system to be approximately balanced, i.e., to reach an ϵ-Nash equilibrium. We show that the system reaches an ϵ-Nash equilibrium in expected time O(max{n log n/ϵ + n^(1/β), (n³/m³ log(n²/ϵ))^(1/β)}) when m > n². This implies that the convergence rate is robust for large-scale systems (large user populations) and is not affected by imperfect measurements of the server load. We also extend our analysis to open systems where users arrive and depart from a system with an initial load of m users. We allow for general time-dependent arrival processes (including heavy-tailed processes) and consider both uniform and load-oblivious routing of the arrivals to the servers. A wide class of departure processes, including load-dependent departures from the servers, is also allowed.
Our analysis demonstrates that it is possible to design fast, concurrent and distributed load balancing mechanisms in large multi-agent systems via randomized local search.
【Keywords】: cloud computing; computational complexity; multi-agent systems; resource allocation; FCD; decaying rate; exploration probability; fast-concurrent-distributed load balancing; load-dependent departures; multi-agent systems; n-server slotted system; randomized local search mechanism; switching costs; Convergence; Games; Load management; Nash equilibrium; Open systems; Servers; Switches
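The explore-then-backtrack dynamics described in the abstract above can be illustrated with a toy simulation. The function name, parameter defaults, and the Metropolis-style reading of "backtracking with a probability proportional to the ratio of the measured loads" are assumptions for illustration, not the paper's exact rule:

```python
import random

def fcd_simulate(m=300, n=10, beta=0.75, slots=2000, seed=1):
    """Toy simulation of FCD-style explore/backtrack load balancing.

    Each agent explores a uniformly random server with probability
    t**(-beta) (the polynomially decaying exploration rate from the
    abstract) and then backtracks with probability
    min(1, new_load / old_load) -- a hypothetical instantiation of
    "proportional to the ratio of the measured loads".
    """
    rng = random.Random(seed)
    server = [rng.randrange(n) for _ in range(m)]   # agent -> server
    load = [0] * n
    for s in server:
        load[s] += 1
    for t in range(1, slots + 1):
        p_explore = t ** (-beta)                    # decaying exploration
        for a in range(m):
            if rng.random() >= p_explore:
                continue
            old, new = server[a], rng.randrange(n)
            if new == old:
                continue
            # tentatively switch, then backtrack based on measured loads
            load[old] -= 1
            load[new] += 1
            server[a] = new
            if rng.random() < min(1.0, load[new] / (load[old] + 1)):
                load[new] -= 1                      # backtrack to old server
                load[old] += 1
                server[a] = old
    return load
```

Because moves to more-loaded servers are always backtracked under this rule, repeated local decisions drive the loads toward the balanced value of m/n without any coordination between agents.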
【Paper Link】 【Pages】:1905-1913
【Authors】: Johannes Schneider ; Stefan Schmid
【Abstract】: This paper addresses a generalized version of the classic page migration problem in which migration costs are not necessarily given by the migration distance alone, but may depend on prior migrations, or on the available bandwidth along the migration path. Interestingly, this problem cannot be viewed from a Metrical Task System (MTS) perspective, despite the generality of MTS: the corresponding MTS has an unbounded state space and, thus, an unbounded competitive ratio. Nevertheless, we are able to present an optimal online algorithm for a wide range of problem variants, improving the best upper bounds known so far for more specific problems. For example, we present a tight bound of Θ(log n/log log n) for the competitive ratio of the recently introduced virtual server migration problem.
【Keywords】: computational complexity; graph theory; network servers; optimisation; virtual machines; MTS; generalized migration cost; metrical task system; migration distance; migration path; online page migration; optimal bound; optimal online algorithm; tight bound; unbounded competitive ratio; unbounded state space; virtual server migration problem; Algorithm design and analysis; Bandwidth; Cost function; Extraterrestrial measurements; Gravity; Servers; Upper bound
【Paper Link】 【Pages】:1914-1922
【Authors】: Yuval Rochman ; Hanoch Levy ; Eli Brosh
【Abstract】: We consider the problem of how to place and efficiently utilize resources in network environments. The setting consists of a regionally organized system which must satisfy regionally varying demands for various resources. The operator aims at placing resources in the regions so as to minimize the cost of providing the demands. Examples of systems falling under this paradigm are 1) a peer-supported Video on Demand service, where the problem is how to place various video movies, and 2) a cloud-based system consisting of regional server-farms, where the problem is where to place various contents or end-user services. The main challenge posed by this paradigm is the need to deal with an arbitrary multi-dimensional (high-dimensionality) stochastic demand. We show that, despite this complexity, one can optimize the system operation while accounting for the full demand distribution. We provide algorithms for conducting this optimization and show that their complexity is quite low, implying that they can handle very large systems. The algorithms can be used for: 1) exact system optimization, 2) deriving lower bounds for heuristic-based analysis, and 3) sensitivity analysis. The importance of the model is demonstrated by showing that an alternative analysis based only on the demand means may, in certain cases, achieve performance that is drastically worse than the optimal one.
【Keywords】: optimisation; resource allocation; sensitivity analysis; telecommunication network topology; cloud-based system; content services; distributed network topology; end-user services; exact system optimization; heuristic based analysis; multidimensional stochastic demand; regional server-farms; regionally organized system; resource assignment; resource placement; resource utilization; sensitivity analysis; video movies; video on demand service; Algorithm design and analysis; Complexity theory; Motion pictures; Network topology; Servers; Stochastic processes; Streaming media
【Paper Link】 【Pages】:1923-1931
【Authors】: Seksan Laitrakun ; Edward J. Coyle
【Abstract】: We consider a distributed detection application for a fusion center that has a limited time to make a global decision by collecting, weighting, and fusing local decisions made by nodes in a wireless sensor network that uses a random access channel. When this time is not long enough to collect decisions from all nodes in the network, a strategy is needed for collecting those with the highest reliability. This is accomplished with an easily implemented modification of the random access protocol: the collection time is divided into frames and only nodes with a particular range of reliabilities compete for the channel within each frame. Nodes with the most reliable decisions attempt transmission in the first frame; nodes with the next most reliable set of decisions attempt in the next frame; etc. We formulate an optimization problem that determines the reliability interval that defines who attempts in each frame in order to minimize the Detection Error Probability (DEP) at the fusion center. When the noise distribution affecting nodes' local decisions is continuous, symmetric, unimodal, and has a monotone likelihood ratio, reliability thresholds that maximize the channel throughput in each frame are optimal when the ratio of the collection time to the number of nodes is small. The number of frames that should be used depends on whether the average reliability or the worst-case reliability of local decisions in each frame is used to determine the DEP.
【Keywords】: decision theory; error statistics; optimisation; protocols; sensor fusion; signal detection; telecommunication network reliability; wireless channels; wireless sensor networks; DEP; WSN; channel throughput maximization; collection time; detection error probability; fusion center; local decision collection optimization; monotone likelihood ratio; node local decision fusion; noise distribution; random access channel; random access protocol; reliability interval; reliability thresholds; time-constrained distributed detection application; wireless sensor network; Approximation methods; Frequency modulation; Random variables; Reliability; Sensors; Throughput; Wireless sensor networks
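The frame-based collection rule in the abstract above (the most reliable decisions contend for the channel first) can be sketched as a simple threshold partition. The function and the example thresholds are hypothetical; the paper's contribution is choosing the thresholds to minimize the detection error probability, which is not reproduced here:

```python
def assign_frames(reliabilities, thresholds):
    """Map each node to a collection frame by its decision reliability.

    `thresholds` must be decreasing: frame 0 gets the most reliable
    decisions (reliability >= thresholds[0]), frame 1 the next band,
    and anything below the last threshold lands in the final frame.
    Hypothetical illustration of the frame rule only -- the optimal
    thresholds come from minimizing the detection error probability.
    """
    frames = [[] for _ in range(len(thresholds) + 1)]
    for node, r in enumerate(reliabilities):
        for f, th in enumerate(thresholds):
            if r >= th:
                frames[f].append(node)   # first band the node qualifies for
                break
        else:
            frames[-1].append(node)      # below every threshold
    return frames
```

For example, `assign_frames([0.9, 0.2, 0.5, 0.7], [0.8, 0.4])` places node 0 in the first frame, nodes 2 and 3 in the second, and node 1 in the last.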
【Paper Link】 【Pages】:1932-1940
【Authors】: Songtao Guo ; Cong Wang ; Yuanyuan Yang
【Abstract】: The emerging wireless energy transfer technology enables charging sensor batteries in a wireless sensor network (WSN) and maintaining perpetual operation of the network. Recent breakthroughs in this area have opened up a new dimension to the design of sensor network protocols. Meanwhile, mobile data gathering has been considered as an efficient alternative to data relaying in WSNs. However, the time variation of recharging rates in wireless rechargeable sensor networks imposes a great challenge in obtaining an optimal data gathering strategy. In this paper, we propose a framework of joint Wireless Energy Replenishment and anchor-point based Mobile Data Gathering (WerMDG) in WSNs by considering various sources of energy consumption and the time-varying nature of energy replenishment. To that end, we first determine the anchor point selection and the sequence in which to visit the anchor points. We then formulate the WerMDG problem as a network utility maximization problem which is constrained by flow conservation, energy balance, link and battery capacity, and the bounded sojourn time of the mobile collector. Furthermore, we present a distributed algorithm composed of cross-layer data control, scheduling and routing subalgorithms for each sensor node, and a sojourn time allocation subalgorithm for the mobile collector at different anchor points. Finally, we give extensive numerical results to verify the convergence of the proposed algorithm and the impact of utility weight on network performance.
【Keywords】: routing protocols; telecommunication power management; wireless sensor networks; WerMDG; anchor point based mobile data gathering; anchor point selection; cross layer data control; distributed algorithm; energy balance; energy consumption; flow conservation; optimal data gathering strategy; rechargeable sensor networks; routing subalgorithm; scheduling subalgorithm; sensor network protocol; time-varying nature; wireless energy replenishment; wireless energy transfer technology; Batteries; Distributed databases; Mobile communication; Optimization; Robot sensing systems; Wireless communication; Wireless sensor networks; Mobile data gathering; distributed algorithm; energy replenishment; rechargeable sensor networks
【Paper Link】 【Pages】:1941-1949
【Authors】: Dawei Gong ; Yuanyuan Yang
【Abstract】: Data gathering is a fundamental operation for various applications of wireless sensor networks (WSNs), where sensor nodes sense information and forward data to a sink node via multihop wireless communications. Typically, data in a WSN is relayed over a tree topology to the sink for effective data gathering. A number of tree-based data gathering schemes have been proposed in the literature, most of which aim at maximizing network lifetime. However, the timeliness and reliability of gathered data are also of great importance to many applications in WSNs. To achieve low-latency, high-reliability data gathering in WSNs, in this paper, we construct a data gathering tree based on a reliability model, schedule data transmissions for the links on the tree, and assign transmitting power to each link accordingly. Since the reliability of a link is highly related to its signal to interference plus noise ratio (SINR), the SINR of all the currently used links on the data gathering tree should be greater than a threshold to guarantee high reliability. We formulate the joint problem of tree construction, link scheduling and power assignment for data gathering as an optimization problem, with the objective of minimizing data gathering latency. We show the problem is NP-hard and divide it into two subproblems: construction of a low-latency data gathering tree, and joint link scheduling and power assignment for the data gathering tree. We then propose a polynomial heuristic algorithm for each subproblem and conduct extensive simulations to verify the effectiveness of the proposed algorithms. Our simulation results show that the proposed algorithms achieve much lower data gathering latency than existing data gathering strategies while guaranteeing high reliability.
【Keywords】: data communication; scheduling; telecommunication network reliability; telecommunication network topology; time division multiple access; trees (mathematics); wireless sensor networks; SINR; WSN; data gathering latency; link scheduling; low latency data gathering; multihop wireless communications; network lifetime; polynomial heuristic algorithm; power assignment; reliability model; signal to interference plus noise ratio; tree construction; tree topology; wireless sensor networks; Data models; Interference; Optimization; Reliability; Signal to noise ratio; Wireless communication; Wireless sensor networks; SINR constraint; Wireless sensor networks (WSNs); data gathering; link scheduling; power assignment
【Paper Link】 【Pages】:1950-1958
【Authors】: Yeqing Yi ; Rui Li ; Fei Chen ; Alex X. Liu ; Yaping Lin
【Abstract】: Two-tiered wireless sensor networks offer good scalability, efficient power usage, and space saving. However, storage nodes are more attractive to attackers than sensors because they store sensor-collected data and process sink-issued queries. A compromised storage node not only reveals sensor-collected data, but may also return incomplete or wrong query results. In this paper, we propose QuerySec, a protocol that enables storage nodes to process queries correctly while preventing them from revealing both data from sensors and queries from the sink. To protect privacy, we propose an order-preserving function-based scheme to encode both sensor-collected data and sink-issued queries, which allows storage nodes to process queries correctly without knowing the actual values of either data or queries. To preserve integrity, we propose a link watermarking scheme, in which data items are formed into a link by the watermarks embedded in them so that any deletion in query results can be detected.
【Keywords】: query processing; watermarking; wireless sensor networks; QuerySec; attackers; digital watermarking; link watermarking scheme; query processing; sensors; storage nodes; wireless sensor networks; Cryptography; Data privacy; Privacy; Protocols; Silicon; Upper bound; Watermarking
【Paper Link】 【Pages】:1959-1967
【Authors】: Pengda Huang ; Matthew Jordan Tonnemacher ; Yongjiu Du ; Dinesh Rajan ; Joseph David Camp
【Abstract】: Channel emulators are valuable tools for controllable and repeatable wireless experimentation. Often, however, the high cost of such emulators precludes their widespread usage, especially in large-scale wireless networks. Moreover, existing channel emulators offer either very realistic channels for simplistic topologies or complex topologies with highly-abstracted, low-fidelity channels. To bridge the gap by offering a low-cost channel emulation solution that can scale to a large network size, in this paper we study the tradeoff between channel emulation fidelity and the hardware resources consumed, using both analytical modeling and FPGA-based implementation. To reduce the memory footprint of our design, we optimize our channel emulation using an iterative structure to generate the Rayleigh fading channel. In addition, the channel update rate and word length selection are also evaluated, both of which greatly improve the efficiency of the implementation. We then extend our analysis of a single channel to understand how the implementation scales for the emulation of a large-scale wireless network, showing that up to 24 vehicular channels can be emulated in real time on a single Virtex-4 FPGA.
【Keywords】: Rayleigh channels; field programmable gate arrays; iterative methods; telecommunication network topology; Rayleigh fading channel; Virtex-4 FPGA-based implementation; channel accuracy; controllable wireless experimentation; hardware resources; highly-abstracted channels; implementation resources; iterative structure; large-scale wireless network; large-scale wireless networks; low-cost channel emulation solution; low-fidelity channels; memory footprint; repeatable wireless experimentation; scalable network emulation; simplistic topologies; word length selection; Accuracy; Emulation; Fading; Field programmable gate arrays; Hardware; Optimization; Wireless communication
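For context on what such an emulator approximates: a Rayleigh fading envelope is the magnitude of a zero-mean complex Gaussian process. The floating-point sketch below is the ideal reference model that a fixed-word-length hardware design targets, not the paper's FPGA architecture; the function name and defaults are assumptions:

```python
import math
import random

def rayleigh_taps(num_samples, sigma=1.0 / math.sqrt(2), seed=7):
    """Reference floating-point Rayleigh envelope generator.

    The envelope is |h| for h = x + jy with independent
    x, y ~ N(0, sigma**2).  With sigma = 1/sqrt(2) the envelope has
    unit mean power, E[|h|^2] = 2*sigma**2 = 1.
    """
    rng = random.Random(seed)
    taps = []
    for _ in range(num_samples):
        x = rng.gauss(0.0, sigma)       # in-phase component
        y = rng.gauss(0.0, sigma)       # quadrature component
        taps.append(math.hypot(x, y))   # Rayleigh-distributed envelope
    return taps
```

A fixed-point implementation (as studied in the paper) quantizes these samples; comparing its statistics against this reference is one way to measure emulation fidelity versus word length.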
【Paper Link】 【Pages】:1968-1976
【Authors】: Minas Gjoka ; Maciej Kurant ; Athina Markopoulou
【Abstract】: Understanding network structure and having access to realistic graphs play a central role in computer and social networks research. In this paper, we propose a complete, practical methodology for generating graphs that resemble a real graph of interest. The metrics of the original topology we target to match are the joint degree distribution (JDD) and the degree-dependent average clustering coefficient (c̅(k)). We start by developing efficient estimators for these two metrics based on a node sample collected via either independence sampling or random walks. Then, we process the output of the estimators to ensure that the target metrics are realizable. Finally, we propose an efficient algorithm for generating topologies that have the exact target JDD and a c̅(k) close to the target. Extensive simulations using real-life graphs show that the graphs generated by our methodology are similar to the original graph with respect to not only the two target metrics, but also a wide range of other topological metrics. Furthermore, our generator is orders of magnitude faster than state-of-the-art techniques.
【Keywords】: graph theory; network theory (graphs); pattern clustering; sampling methods; statistical distributions; JDD; computer network research; degree-dependent average clustering coefficient; graph generation; graph sampling; independence sampling; joint degree distribution; network structure; node sample; random walks; real-life graphs; realistic graphs; social network research; state-of-the-art techniques; target metrics; topological metrics; topology generation; Clustering algorithms; Estimation; Joints; Measurement; Network topology; Social network services; Topology
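The kind of random-walk estimator the abstract above builds on can be illustrated for the simplest case, the marginal degree distribution: a simple random walk visits nodes proportionally to their degree, so each visit is re-weighted by 1/degree. This sketch is illustrative only and covers neither the full JDD nor the clustering-coefficient estimator:

```python
import random
from collections import defaultdict

def rw_degree_estimate(adj, steps=20000, seed=3):
    """Estimate a graph's degree distribution from a random-walk sample.

    `adj` maps each node to its list of neighbors.  The walk's
    stationary distribution is proportional to degree, so each visited
    node contributes weight 1/degree (Hansen-Hurwitz style correction),
    yielding an asymptotically unbiased estimate of P(deg = k).
    """
    rng = random.Random(seed)
    v = next(iter(adj))                  # arbitrary start node
    weight = defaultdict(float)
    total = 0.0
    for _ in range(steps):
        d = len(adj[v])
        weight[d] += 1.0 / d             # undo the degree bias of the walk
        total += 1.0 / d
        v = rng.choice(adj[v])           # move to a uniform random neighbor
    return {k: w / total for k, w in weight.items()}
```

On a star with one hub and ten leaves, the walk alternates between hub and leaves, yet the re-weighting recovers the true degree fractions 10/11 (degree 1) and 1/11 (degree 10).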
【Paper Link】 【Pages】:1977-1985
【Authors】: Tung-Wei Kuo ; Kate Ching-Ju Lin ; Ming-Jer Tsai
【Abstract】: In this paper, we investigate the wireless network deployment problem, which seeks the best deployment of a given limited number of wireless routers. We find that many goals for network deployment, such as maximizing the number of covered users or areas, or the total throughput of the network, can be modelled with a submodular set function. Specifically, given a set of routers, the goal is to find a set of locations S, each of which is equipped with a router, such that S maximizes a predefined submodular set function. However, this deployment problem is more difficult than the traditional maximum submodular set function problem, e.g., the maximum coverage problem, because it requires all the deployed routers to form a connected network. In addition, deploying a router in different locations might incur different costs. To address these challenges, this paper introduces two approximation algorithms, one for homogeneous deployment cost scenarios and the other for heterogeneous deployment cost scenarios. Our simulations, using synthetic data and real census traces from Taipei, show that the proposed algorithms achieve better performance than other heuristics.
【Keywords】: approximation theory; radio networks; telecommunication network routing; Taipei; approximation algorithm; census; connected network; connectivity constraint; heterogeneous deployment cost scenario; homogeneous deployment cost scenario; maximum coverage problem; network throughput; router deployment; submodular set function maximization; wireless network deployment problem; wireless router; Algorithm design and analysis; Approximation algorithms; Approximation methods; Logic gates; Optimized production technology; TV
【Paper Link】 【Pages】:1986-1994
【Authors】: Yuefei Zhu ; Baochun Li ; Zongpeng Li
【Abstract】: In a secondary spectrum market, the utility of a secondary user often depends not only on whether it wins, but also on which channels it wins. Combinatorial auctions are a natural fit here to allow secondary users to bid for combinations of channels. In this context, the VCG mechanism constitutes a generic auction that uniquely guarantees both truthfulness and efficiency, but it is vulnerable to shill bidding and generates low revenue. In this paper, without compromising efficiency, we propose to design core-selecting auctions instead, which resolve VCG's vulnerability and improve seller revenue. We prove that in a secondary spectrum market, the revenue gleaned from a core-selecting auction is at least that of the VCG mechanism, and shills are not profitable to bidders. Employing linear programming and quadratic programming techniques, we design two payment rules suitable for our core-selecting auction, which aim to minimize the incentives of bidders to deviate from truth-telling. Our extensive simulation results show that revenue can be significantly increased due to spectrum sharing.
【Keywords】: commerce; linear programming; quadratic programming; radio spectrum management; wireless channels; VCG mechanism; VCG vulnerability; core-selecting combinatorial auction design; generic auction; linear programming techniques; quadratic programming techniques; secondary spectrum markets; secondary user; seller revenue; spectrum sharing; Channel allocation; Cost accounting; Economics; Resource management; Robustness; Vectors; Wireless communication
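As background for the revenue comparison, VCG charges each winner the externality it imposes on the other bidders. A brute-force sketch for hypothetical single-minded channel bidders follows; it illustrates the mechanism the paper improves on, not its core-selecting payment rules:

```python
from itertools import combinations

def vcg(bids):
    """bids: {bidder: (frozenset_of_channels, value)} for single-minded bidders.
    Returns the welfare-maximizing winner set and VCG payments."""
    def best_welfare(names):
        top_val, top_set = 0, set()
        for r in range(len(names) + 1):
            for combo in combinations(names, r):
                used, feasible = set(), True
                for b in combo:
                    if used & bids[b][0]:   # bundles must be pairwise disjoint
                        feasible = False
                        break
                    used |= bids[b][0]
                val = sum(bids[b][1] for b in combo)
                if feasible and val > top_val:
                    top_val, top_set = val, set(combo)
        return top_val, top_set

    welfare, winners = best_welfare(list(bids))
    payments = {}
    for w in winners:
        without_w, _ = best_welfare([b for b in bids if b != w])
        # Externality: others' best welfare without w, minus their welfare with w.
        payments[w] = without_w - (welfare - bids[w][1])
    return winners, payments

bids = {"A": (frozenset({1}), 10),
        "B": (frozenset({2}), 8),
        "C": (frozenset({1, 2}), 12)}
winners, payments = vcg(bids)
```

Here A and B win together (welfare 18 beats C's 12), and C's losing bid sets their prices: A pays 12 − 8 = 4 and B pays 12 − 10 = 2, below their bids, which is exactly the low-revenue behavior the abstract targets.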
【Paper Link】 【Pages】:1995-2003
【Authors】: Xiaojun Feng ; Qian Zhang ; Jin Zhang
【Abstract】: According to the recent rulings of the Federal Communications Commission (FCC), TV white spaces (TVWS) can now be accessed by secondary users (SUs) after a list of vacant TV channels is obtained via a geo-location database. Proper business models are essential for database operators to manage the cost of maintaining geo-location databases. Database access can be priced simultaneously under two different schemes: the registration scheme and the service plan scheme. In the registration scheme, the database reserves part of the TV bandwidth for registered White Space Devices (WSDs) in a soft-license manner. In the service plan scheme, WSDs are charged according to their queries. In this paper, we investigate the business model for the TVWS database under a hybrid pricing scheme. We consider the scenario where a database operator employs both the registration scheme and the service plan scheme to serve the SUs. The SUs' choices among pricing schemes are modeled as a non-cooperative game, and we derive distributed algorithms to achieve the Nash Equilibrium (NE). Given the NE of the SUs, the database operator optimally determines the pricing parameters for both schemes in terms of bandwidth reservation, registration fee, and query plans.
【Keywords】: business data processing; database management systems; game theory; pricing; query processing; television; FCC; Federal Communications Commission; Nash equilibrium; TV white space database; TVWS; WSD; business models; database access; distributed algorithms; geo-location database; hybrid pricing; noncooperative game; query processing; white space devices; Bandwidth; Contracts; Cost accounting; Databases; Games; Pricing; TV
【Paper Link】 【Pages】:2004-2012
【Authors】: Yanjiao Chen ; Lingjie Duan ; Jianwei Huang ; Qian Zhang
【Abstract】: To accommodate users' ever-increasing traffic in wireless broadband services, the Federal Communications Commission (FCC) in the U.S. is considering allocating additional spectrum to the wireless market. There are two major directions: licensed (e.g., 3G) and unlicensed (e.g., Wi-Fi) services. On the one hand, 3G service can realize high spectrum efficiency and provide ubiquitous connectivity. On the other hand, Wi-Fi service (often with limited coverage) can provide users with high-speed local connections, but is subject to uncontrollable interference. Regarding spectrum allocation, prior studies focused only on revenue maximization. However, one of the FCC's missions is to improve all wireless users' utilities. This motivates us to design a spectrum allocation scheme that jointly considers social welfare and revenue. In this paper, we formulate the interactions among the FCC, typical 3G and Wi-Fi operators, and the end users as a three-stage dynamic game and derive the equilibrium of the entire game. Compared to the benchmark case where the FCC only maximizes its revenue, the consideration of social welfare encourages the FCC to allocate more spectrum to the service that lacks spectrum, to better serve its users. Notably, this consideration of social welfare brings only a limited revenue loss for the FCC.
【Keywords】: 3G mobile communication; radio spectrum management; wireless LAN; 3G service; FCC; Federal Communications Commission; Wi-Fi; high-speed local connection; revenue; social welfare; spectrum allocation; wireless broadband service; Economics; FCC; Games; IEEE 802.11 Standards; Pricing; Resource management; Wireless communication
【Paper Link】 【Pages】:2013-2021
【Authors】: Peng Lin ; Xiaojun Feng ; Qian Zhang ; Mounir Hamdi
【Abstract】: Spectrum auctions are widely applied in spectrum redistribution, especially in the dynamic spectrum management context. However, due to the high prices asked by spectrum holders, secondary users (SUs) with limited budgets cannot benefit from such auctions directly. Motivated by recent group-buying behaviors in Internet-based services, we advocate that SUs can be grouped together to take part in the spectrum auction as a whole, increasing their chances of winning a channel. The cost and benefit of the won spectrum are then shared evenly among the SUs within the group. None of the existing auction models applies in this scenario, due to three unique challenges: how a group leader can select the winning SUs and charge them fairly and efficiently; how to guarantee the truthfulness of users' bids; and how to match heterogeneous channels to groups when each group would like to buy at most one channel. In this paper, we propose TASG, a Three-stage Auction framework for Spectrum Group-buying, to address the above challenges and enable group-buying behaviors among SUs. In the first stage, we propose an algorithm to decide the group members and the bids for the channels. In the second stage, we conduct an auction between the group leaders and the spectrum holder, with a novel winner determination algorithm. In the third stage, the group leaders further distribute spectrum and bills to the SUs in their groups. TASG possesses good properties such as truthfulness, individual rationality, improved system efficiency, and computational tractability.
【Keywords】: radio spectrum management; Internet based service; SU; TASG; dynamic spectrum management context; group leaders; heterogeneous channels; novel winner determination algorithm; secondary users; spectrum auction; spectrum holder; three-stage auction framework for spectrum group-buying; Aggregates; Algorithm design and analysis; Biological system modeling; Time complexity; Tin; Vectors; Wireless communication
【Paper Link】 【Pages】:2022-2030
【Authors】: Prakash Mandayam Comar ; Lei Liu ; Sabyasachi Saha ; Pang-Ning Tan ; Antonio Nucci
【Abstract】: Malware is one of the most damaging security threats facing the Internet today. Despite the burgeoning literature, accurate detection of malware remains an elusive and challenging endeavor due to the increasing usage of payload encryption and sophisticated obfuscation methods. Also, the large variety of malware classes coupled with their rapid proliferation and polymorphic capabilities and imperfections of real-world data (noise, missing values, etc) continue to hinder the use of more sophisticated detection algorithms. This paper presents a novel machine learning based framework to detect known and newly emerging malware at a high precision using layer 3 and layer 4 network traffic features. The framework leverages the accuracy of supervised classification in detecting known classes with the adaptability of unsupervised learning in detecting new classes. It also introduces a tree-based feature transformation to overcome issues due to imperfections of the data and to construct more informative features for the malware detection task. We demonstrate the effectiveness of the framework using real network data from a large Internet service provider.
【Keywords】: Internet; invasive software; learning (artificial intelligence); tree data structures; Internet service provider; Internet today; layer 3 network traffic features; layer 4 network traffic features; obfuscation methods; payload encryption; polymorphic capabilities; security threats; supervised learning; tree-based feature transformation; unsupervised learning; zero-day malware detection; Feature extraction; Kernel; Malware; Payloads; Support vector machines; Training; Unsupervised learning
【Paper Link】 【Pages】:2031-2039
【Authors】: Seyed Ali Ahmadzadeh ; Gordon B. Agnew
【Abstract】: Inspired by the challenges of designing a robust and undetectable covert channel, in this paper we introduce a design methodology for timing covert channels that achieves provable polynomial-time undetectability. This means that the covert channel cannot be detected by any polynomial-time statistical test that analyzes samples of the covert traffic and the legitimate traffic. The proposed framework is based on modeling the covert channel as a differential communication channel, with modulation/demodulation processes formulated according to this communication model. The proposed scheme incorporates a trellis structure in modulating the covert message. The trellis structure is also used at the covert receiver to perform iterative demodulation/decoding of the covert message, which significantly enhances channel reliability. In addition, the paper presents an adaptive modulation strategy that improves channel robustness without compromising the stealthiness of the channel. The combination of adaptive modulation and the trellis structure gives the covert channel considerable flexibility and a low error rate at the covert receiver. In fact, performance analysis reveals that the proposed covert communication scheme withstands extremely high levels of network noise and adversarial disruption, while maintaining an outstanding undetectability level and covert rate.
【Keywords】: adaptive modulation; communication complexity; demodulation; error statistics; iterative decoding; radio receivers; radiofrequency interference; statistical testing; telecommunication network reliability; telecommunication traffic; trellis codes; turbo codes; wireless channels; adaptive modulation strategy; channel performance analysis; channel reliability; channel robustness; communication model; covert channel modeling; covert communication scheme; covert message; covert rate; covert receiver; covert traffic; data network; design methodology; differential communication channel; error rate; iterative decoding; iterative demodulation; iterative framework; legitimate traffic; modulation/demodulation process; network noise; polynomial-time statistical test; polynomial-time undetectability; timing covert channel; trellis structure; turbo covert channel; undetectability level; Delays; Demodulation; Noise; Receivers; Transmitters
【Paper Link】 【Pages】:2040-2048
【Authors】: James Daly ; Alex X. Liu ; Eric Torng
【Abstract】: Access Control Lists (ACLs) are the core of many networking and security devices. As new threats and vulnerabilities emerge, ACLs on routers and firewalls are getting larger. Therefore, compressing ACLs is an important problem. In this paper, we propose a new approach to ACL compression, called Diplomat. The key idea is to transform higher-dimensional target patterns into lower-dimensional ones by dividing the original pattern into a series of hyperplanes and then resolving the differences between adjacent hyperplanes by adding rules that specify those differences. This approach is fundamentally different from prior ACL compression algorithms and is shown to be very effective. We implemented Diplomat and conducted a side-by-side comparison with the prior Firewall Compressor algorithm on real-life classifiers. The experimental results show that Diplomat outperforms Firewall Compressor most of the time, often by a considerable margin. In particular, on our largest ACLs, Diplomat has an average improvement ratio over Firewall Compressor of 30.6%.
【Keywords】: authorisation; data compression; firewalls; ACL compression algorithm; access control list compression; difference resolution approach; diplomat; firewall compressor; networking device; router; security device; Approximation algorithms; Approximation methods; Color; Dynamic programming; Heuristic algorithms; Merging; Security
【Paper Link】 【Pages】:2049-2057
【Authors】: Ori Rottenstreich ; Isaac Keslassy ; Avinatan Hassidim ; Haim Kaplan ; Ely Porat
【Abstract】: Hardware-based packet classification has become an essential component in many networking devices. It often relies on TCAMs (ternary content-addressable memories), which need to compare the packet header against a set of rules. But efficiently encoding these rules is not an easy task. In particular, the most complicated rules are range rules, which usually require multiple TCAM entries to encode them. However, little is known on the optimal encoding of such non-trivial rules. In this work, we take steps towards finding an optimal encoding scheme for every possible range rule. We first present an optimal encoding for all possible generalized extremal rules. Such rules represent 89% of all non-trivial rules in a typical real-life classification database. We also suggest a new method of simply calculating the optimal expansion of an extremal range, and present a closed-form formula of the average optimal expansion over all extremal ranges. Next, we present new bounds on the worst-case expansion of general classification rules, both in one-dimensional and two-dimensional ranges. Last, we introduce a new TCAM architecture that can leverage these results by providing a guaranteed expansion on the tough rules, while dealing with simpler rules using a regular TCAM. We conclude by verifying our theoretical results in experiments with synthetic and real-life classification databases.
【Keywords】: computer networks; content-addressable storage; encoding; telecommunication equipment; average optimal expansion; closed form formula; general classification rule; generalized extremal rule; hardware based packet classification; nontrivial rule; optimal TCAM encoding scheme; optimal encoding; packet header; real life classification database; ternary content addressable memory; worst case expansion; Educational institutions; Encoding; Heuristic algorithms; Indexes; Ports (Computers); Reflective binary codes
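The range expansion the abstract analyzes can be made concrete with the standard greedy prefix-cover construction for one-dimensional ranges ('*' marks a don't-care bit); this is the textbook method, not the paper's optimal encoding scheme:

```python
# Expand an integer range [lo, hi] over width-bit values into ternary
# TCAM prefixes by repeatedly carving off the largest aligned
# power-of-two block that starts at lo and fits inside the range.
def range_to_prefixes(lo, hi, width):
    """Greedy minimal prefix cover of the integer range [lo, hi]."""
    prefixes = []
    while lo <= hi:
        size = (lo & -lo) if lo else (1 << width)  # largest aligned block at lo
        while lo + size - 1 > hi:
            size >>= 1                             # shrink until it fits
        bits = size.bit_length() - 1               # trailing don't-care bits
        stem = format(lo >> bits, "0%db" % (width - bits)) if width > bits else ""
        prefixes.append(stem + "*" * bits)
        lo += size
    return prefixes

entries = range_to_prefixes(1, 6, 3)  # ['001', '01*', '10*', '110']
```

The worst-case range [1, 2^W − 2] needs 2W − 2 such entries in one dimension, which is why bounding and reducing range expansion matters for TCAM capacity.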
【Paper Link】 【Pages】:2058-2066
【Authors】: Wei Gao ; Qinghua Li
【Abstract】: Opportunistic mobile networks consist of mobile devices which communicate only when they opportunistically contact each other. Periodic contact probing is required to facilitate opportunistic communication, but seriously reduces the limited battery life of mobile devices. Current research efforts on reducing the energy consumption of contact probing are restricted to optimizing the probing interval, and are insufficient for energy-efficient opportunistic communication. In this paper, we propose novel techniques to adaptively schedule the wakeup periods of mobile nodes between their inter-contact times. A node stays asleep during inter-contact times when contact probing is unnecessary, and wakes up only when a contact with another node is likely to happen. Our approach probabilistically predicts future node contacts, and analytically balances the energy consumption of contact probing against the performance of opportunistic communication. Extensive trace-driven simulations show that our approach significantly improves the energy efficiency of opportunistic communication compared to existing schemes.
【Keywords】: energy consumption; mobile radio; scheduling; battery life; contact probing; energy consumption; energy-efficient opportunistic communication; energy-efficient communication; mobile devices; mobile nodes; opportunistic mobile networks; wakeup scheduling; Energy consumption; Mobile computing; Mobile nodes; Peer-to-peer computing; Schedules
【Paper Link】 【Pages】:2067-2075
【Authors】: Yoora Kim ; Kyunghan Lee ; Ness B. Shroff ; Injong Rhee
【Abstract】: A variety of mathematical tools have been developed for predicting spreading patterns in a number of varied environments, including infectious diseases, computer viruses, and urgent messages broadcast to mobile agents (e.g., humans, vehicles, and mobile devices). These tools have mainly focused on estimating the average time for the spread to reach a fraction (e.g., α) of the agents, i.e., the so-called average completion time E(Tα). We claim that providing a probabilistic guarantee on the spread time Tα, rather than only its average, gives a much better understanding of the spread, and hence could be used to design improved methods to prevent epidemics or devise accelerated methods for distributing data. To demonstrate the benefits, we introduce a new metric Gα,β that denotes the time required to guarantee α completion with probability β, and develop a new framework to characterize the distribution of Tα for various spread parameters such as the number of seeds, the level of contact rates, and the heterogeneity in contact rates. We apply our technique to an experimental mobility trace of taxis in Shanghai and show that our framework enables us to allocate resources (i.e., to control spread parameters) to accelerate the spread far more efficiently than the state-of-the-art.
【Keywords】: computer network security; mobile agents; pattern recognition; probability; data distribution; epidemics; information spread; mathematical tools; mobile agents; opportunistic networks; pattern spreading; probabilistic guarantees; Acceleration; Analytical models; Diseases; Markov processes; Measurement; Sociology; Vectors
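By definition, Gα,β is the β-quantile of the completion time Tα. As an illustrative sketch (not the paper's analytical framework), the following simulates a fully mixed SI-style spread with a hypothetical contact rate and reads off the quantile empirically:

```python
import random

def simulate_t_alpha(n, rate, alpha, rng):
    """Time for a fully mixed SI spread to reach a fraction alpha of n agents."""
    infected, t = 1, 0.0
    target = int(alpha * n)
    while infected < target:
        # Each infected-susceptible pair makes contact at the given rate,
        # so the next infection arrives after an exponential waiting time.
        total_rate = rate * infected * (n - infected)
        t += rng.expovariate(total_rate)
        infected += 1
    return t

def g_metric(samples, beta):
    """G_{alpha,beta}: the beta-quantile of sampled completion times T_alpha."""
    s = sorted(samples)
    return s[min(len(s) - 1, int(beta * len(s)))]

rng = random.Random(1)
samples = [simulate_t_alpha(100, 0.01, 0.9, rng) for _ in range(200)]
g95 = g_metric(samples, 0.95)  # time guaranteeing 90% completion w.p. 0.95
g50 = g_metric(samples, 0.5)   # the median, close to the average-based view
```

The gap between g95 and g50 is exactly the information the average completion time E(Tα) discards.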
【Paper Link】 【Pages】:2076-2084
【Authors】: Chul-Ho Lee ; Jaewook Kwak ; Do Young Eun
【Abstract】: With recent drastic growth in the number of users carrying smart mobile devices, it is not hard to envision opportunistic ad-hoc communications taking place with such devices carried by humans. This leads to, however, a new challenge to the conventional link-level metrics, solely defined based on user mobility, such as inter-contact time, since there are many constraints including limited battery power that prevent the wireless interface of each user from being always `on' for communication. By taking into account the process of each user's availability jointly with mobility-induced contact/inter-contact process, we investigate how each of them affects the link-level connectivity depending on their relative operating time scales. We then identify three distinct regimes in each of which (1) the so-called impact of mobility on network performance prevails; (2) such impact of mobility disappears or its extent is not that significant; (3) the user availability process becomes dominant. Our findings not only caution that mobility alone is not sufficient to characterize the link-level dynamics, which in turn can lead to highly misleading results, but also suggest the presence of many uncharted research territories for further exploration.
【Keywords】: mobile ad hoc networks; mobility management (mobile radio); smart phones; intercontact time; limited battery power; link connectivity; link-level connectivity; link-level dynamics; link-level metrics; mobility-induced contact-intercontact process; opportunistic ad-hoc communications; opportunistic mobile networking; smart mobile devices; user availability; user availability process; wireless interface; Ad hoc networks; Availability; IEEE 802.11 Standards; Measurement; Mobile computing; Mobile nodes
【Paper Link】 【Pages】:2085-2091
【Authors】: Mingjun Xiao ; Jie Wu ; Cong Liu ; Liusheng Huang
【Abstract】: In this paper, we propose a time-sensitive utility model for delay tolerant networks (DTNs), in which each message has an attached time-sensitive benefit that decays over time. The utility of a message is the benefit minus the transmission cost incurred by delivering the message. This model is analogous to the postal service in the real world, which inherently provides a good balance between delay and cost. Under this model, we propose a Time-sensitive Opportunistic Utility-based Routing (TOUR) algorithm. TOUR is a single-copy opportunistic routing algorithm, in which a time-sensitive forwarding set is maintained for each node by considering the probabilistic contacts in DTNs. By forwarding messages via nodes in these sets, TOUR can achieve the optimal expected utilities. We show the outstanding performance of TOUR through extensive simulations with several real DTN traces. To the best of our knowledge, TOUR is the first utility-based routing algorithm in DTNs.
【Keywords】: delay tolerant networks; delays; postal services; radio networks; telecommunication network routing; DTN; TOUR algorithm; delay; delay tolerant networks; forwarding messages; postal service; single-copy opportunistic routing algorithm; time-sensitive benefit; time-sensitive forwarding set; time-sensitive opportunistic utility-based routing algorithm; transmission cost; Delays; Educational institutions; Exponential distribution; Nickel; Probabilistic logic; Probability density function; Routing; Delay tolerant networks; opportunistic routing; utility
【Paper Link】 【Pages】:2094-2102
【Authors】: Nathaniel M. Jones ; Brooke Shrader ; Eytan Modiano
【Abstract】: We consider distributed strategies for joint routing, scheduling, and network coding to maximize throughput in wireless networks. Network coding allows for an increase in network throughput under certain routing conditions. We previously developed a centralized control policy to jointly optimize for routing and scheduling combined with a simple network coding strategy using max-weight scheduling (MWS) [9]. In this work we focus on pairwise network coding and develop a distributed carrier sense multiple access (CSMA) policy that supports all arrival rates allowed by the network subject to the pairwise coding constraint. We extend our scheme to optimize for packet overhearing to increase the number of beneficial coding opportunities. Simulation results show that the CSMA strategy yields the same throughput as the optimal centralized policy of [9], but at the cost of increased delay. Moreover, overhearing provides up to an additional 25% increase in throughput on random topologies.
【Keywords】: carrier sense multiple access; network coding; radio networks; scheduling; telecommunication network routing; telecommunication network topology; centralized control policy; distributed CSMA; distributed carrier sense multiple access; joint routing; max-weight scheduling; network coding strategy; packet overhearing; pairwise coding constraint; pairwise network coding; random topology; throughput maximization; wireless network; Encoding; Multiaccess communication; Network coding; Routing; Standards; Throughput; Wireless networks
【Paper Link】 【Pages】:2103-2111
【Authors】: Jia Liu ; Cathy H. Xia ; Ness B. Shroff ; Hanif D. Sherali
【Abstract】: Due to the rapidly growing scale and heterogeneity of wireless networks, the design of distributed cross-layer optimization algorithms has received significant interest from the networking research community. So far, the standard distributed cross-layer approach in the literature is based on the first-order Lagrangian dual decomposition and the subgradient method, which suffers from a slow convergence rate. In this paper, we make the first known attempt to develop a distributed Newton's method, which is second-order and enjoys a quadratic convergence rate. However, due to the inherent interference in wireless networks, the Hessian matrix of the cross-layer problem has a non-separable structure. As a result, developing a distributed second-order algorithm is far more difficult than its counterpart for wireline networks. Our main contributions in this paper are two-fold: i) For a special network setting where all links mutually interfere, we derive closed-form expressions for the Hessian inverse, which further yield a distributed Newton's method; ii) For general wireless networks where the interference relationships are arbitrary, we propose a double matrix-splitting scheme, which also leads to a distributed Newton's method. Collectively, these results create a new theoretical framework for distributed cross-layer optimization in wireless networks. More importantly, our work contributes to a potential second-order paradigm shift in wireless networks optimization theory.
【Keywords】: Hessian matrices; Newton method; gradient methods; optimisation; radio networks; Hessian inverse; Hessian matrix; cross-layer problem; distributed Newton method; distributed cross-layer optimization; distributed second-order algorithm; double matrix-splitting scheme; first-order Lagrangian dual decomposition; quadratic convergence rate; subgradient method; wireless network optimization theory; Convergence; Interference; Newton method; Optimization; Routing; Vectors; Wireless networks
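The abstract's second contribution is a matrix-splitting scheme that lets the Newton direction, the solution of H d = g, be computed iteratively across nodes. As a hedged illustration of the general idea, here is a plain Jacobi splitting H = D + R on a small hypothetical diagonally dominant system (not the paper's double matrix-splitting scheme):

```python
# Jacobi iteration for H d = g: each "node" i repeatedly updates its own
# coordinate using only its row of H and its neighbors' current values,
# which is what makes such splittings amenable to distributed computation.
def jacobi(H, g, iters=100):
    n = len(g)
    d = [0.0] * n
    for _ in range(iters):
        d = [(g[i] - sum(H[i][j] * d[j] for j in range(n) if j != i)) / H[i][i]
             for i in range(n)]
    return d

H = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 5.0]]
g = [6.0, 10.0, 17.0]
d = jacobi(H, g)  # converges toward the solution of H d = g
```

Convergence here relies on H being diagonally dominant; the technical difficulty the abstract highlights is precisely that wireless interference makes the real Hessian non-separable, so a naive splitting like this does not directly apply.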
【Paper Link】 【Pages】:2112-2120
【Authors】: Bin Li ; Atilla Eryilmaz ; Ruogu Li
【Abstract】: In this paper, we study the design of joint flow rate control and scheduling policies in multi-hop wireless networks for achieving maximum network utility with provably optimal convergence speed. Fast convergence is especially important in wireless networks, which are dominated by the dynamics of incoming and outgoing flows as well as time-sensitive applications. Yet, the design of fast-converging policies in wireless networks is complicated by: (i) the interference-constrained communication capabilities, and (ii) the finite set of transmission rates to select from due to operational and physical-layer constraints. We tackle these challenges by explicitly incorporating such discrete constraints to understand their impact on the convergence speed at which the running averages of the received service rates and the network utility converge to their limits. In particular, we establish the fundamental fact that the convergence speed of any feasible policy cannot be faster than Ω(1/T) after T time slots under both the rate and utility metrics. Then, we develop an algorithm that achieves this optimal convergence speed in both metrics. We also show that the well-known dual algorithm can achieve the optimal convergence speed in terms of its utility value. These results reveal the interesting fact that the convergence speed of rates and utilities in wireless networks is dominated by the discrete choices of scheduling and transmission rates, which also implies that the use of higher-order flow rate controllers with fast convergence guarantees cannot overcome this fundamental limitation.
【Keywords】: convergence; optimisation; radio networks; radiofrequency interference; scheduling; telecommunication congestion control; discrete constraints; dual algorithm; fast converging policy; higher-order flow rate controllers; interference-constrained communication; joint flow rate control design; multihop wireless networks; network utility maximization; operational layer constraints; optimal convergence speed; physical-layer constraints; received service rates; transmission rates; utility metrics; wireless scheduling policy; Algorithm design and analysis; Convergence; Joints; Measurement; Upper bound; Vectors; Wireless networks
【Paper Link】 【Pages】:2121-2129
【Authors】: Peng-Jun Wan ; Xiaohua Jia ; Guojun Dai ; Hongwei Du ; Zhiguo Wan ; Ophir Frieder
【Abstract】: For wireless link scheduling in multi-channel multi-radio wireless networks aiming at maximizing (concurrent) multi-flow, constant-approximation algorithms have recently been developed in [11]. However, the running time of those algorithms grows quickly with the number of radios per node (at least in the sixth order) and the number of channels (at least in the cubic order). Such poor scalability stems intrinsically from the exploding size of the fine-grained network representation upon which those algorithms are built. In this paper, we introduce a new structure, termed the concise conflict graph, on the node-level links directly. This structure succinctly captures the essential advantage of multiple radios and multiple channels. By exploring and exploiting the rich structural properties of concise conflict graphs, we are able to develop fast and scalable link scheduling algorithms for either minimizing the communication latency or maximizing the (concurrent) multi-flow. These algorithms have running times growing linearly in both the number of radios per node and the number of channels, without sacrificing the approximation bounds.
【Keywords】: approximation theory; minimisation; network theory (graphs); radio links; radio networks; scheduling; wireless channels; communication latency minimization; concise conflict graph; constant approximation algorithm; fine grained network representation; multichannel multiradio wireless network; multiflow maximization; node level link; scalable wireless link scheduling algorithm; Approximation algorithms; Approximation methods; Educational institutions; Interference; Schedules; Scheduling; Wireless networks; Link scheduling; approximation algorithms; multi-channel multi-radio
【Paper Link】 【Pages】:2130-2138
【Authors】: Advait Abhay Dixit ; Pawan Prakash ; Y. Charlie Hu ; Ramana Rao Kompella
【Abstract】: Modern data center networks are commonly organized in multi-rooted tree topologies. They typically rely on equal-cost multipath to split flows across multiple paths, which can lead to significant load imbalance. Splitting individual flows can provide better load balance, but is not preferred because of potential packet reordering that conventional wisdom suggests may negatively interact with TCP congestion control. In this paper, we revisit this “myth” in the context of data center networks which have regular topologies such as multi-rooted trees. We argue that due to symmetry, the multiple equal-cost paths between two hosts are composed of links that exhibit similar queuing properties. As a result, TCP is able to tolerate the induced packet reordering and maintain a single estimate of RTT. We validate the efficacy of random packet spraying (RPS) using a data center testbed comprising real hardware switches. We also reveal the adverse impact on the performance of RPS when the symmetry is disturbed (e.g., during link failures) and suggest solutions to mitigate this effect.
【Keywords】: computer centres; queueing theory; telecommunication congestion control; telecommunication network topology; transport protocols; trees (mathematics); RPS; TCP congestion control; data center networks; data center testbed; equal-cost multipath; link failures; load imbalance; multiple equal-cost paths; multirooted tree topologies; packet spraying impact; potential packet reordering; random packet spraying efficacy; similar queuing properties; Bandwidth; Kernel; Network topology; Servers; Spraying; Throughput; Topology
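To see why per-packet spraying balances load better than flow hashing, here is a toy comparison between ECMP-style flow hashing and random packet spraying (RPS); the flow sizes and path count are hypothetical:

```python
import random

# ECMP pins every packet of a flow to one hashed path, so a single
# elephant flow can overload one path; RPS sprays each packet
# independently across the equal-cost paths.
def ecmp_load(flows, n_paths):
    load = [0] * n_paths
    for flow_id, pkts in flows:
        load[hash(flow_id) % n_paths] += pkts  # whole flow on one path
    return load

def rps_load(flows, n_paths, rng):
    load = [0] * n_paths
    for _, pkts in flows:
        for _ in range(pkts):
            load[rng.randrange(n_paths)] += 1  # each packet sprayed independently
    return load

flows = [("f%d" % i, 1000 if i == 0 else 10) for i in range(8)]  # one elephant
e_load = ecmp_load(flows, 4)
r_load = rps_load(flows, 4, random.Random(0))
```

ECMP's busiest path carries at least the whole 1000-packet elephant, while RPS keeps every path near the 1070/4 ≈ 268-packet average; the abstract's point is that on symmetric multi-rooted trees the resulting reordering is mild enough for TCP to tolerate.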
【Paper Link】 【Pages】:2139-2147
【Authors】: Jian Guo ; Fangming Liu ; Dan Zeng ; John C. S. Lui ; Hai Jin
【Abstract】: In current IaaS datacenters, tenants suffer from unfairness because network bandwidth is shared in a best-effort manner. To achieve predictable network performance for rented virtual machines (VMs), cloud providers should guarantee minimum bandwidth for VMs or allocate network bandwidth fairly at the VM level. At the same time, the network should be efficiently utilized in order to maximize cloud providers' revenue. In this paper, we model the bandwidth sharing problem as a Nash bargaining game, and propose allocation principles by defining a tunable base bandwidth for each VM. Specifically, we guarantee bandwidth for those VMs with network rates lower than their base bandwidth, while maintaining fairness among the VMs with network rates higher than their base bandwidth. Based on rigorous cooperative game-theoretic methods, we design a distributed algorithm to achieve efficient and fair bandwidth allocation corresponding to the Nash bargaining solution (NBS). Simulations under typical scenarios show that our strategy meets both desirable requirements: predictable performance for tenants and high utilization for providers. Moreover, by tuning the base bandwidth, our solution enables cloud providers to flexibly balance the tradeoff between minimum guarantees and fair sharing of datacenter networks.
【Keywords】: bandwidth allocation; cloud computing; computer centres; game theory; virtual machines; IaaS datacenters; Nash bargaining game; Nash bargaining solution; VM-level; bandwidth sharing problem; cloud providers; cooperative game based allocation; cooperative game-theoretic approaches; data center networks; distributed algorithm; fair bandwidth allocation; fairness fashion; network bandwidth; network performance; network rates; rented virtual machines; tunable base bandwidth; Bandwidth; Bismuth; Channel allocation; Games; NIST; Resource management; Servers
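For intuition on the allocation principle: with each VM's base bandwidth as its disagreement point, the Nash bargaining solution on a single link with symmetric bargaining powers reduces to splitting the surplus capacity equally. A minimal sketch under those assumptions (not the paper's distributed algorithm over a datacenter network):

```python
# Nash bargaining solution for one link of capacity C: maximize
# prod(x_i - b_i) subject to sum(x_i) <= C and x_i >= b_i, where b_i is
# VM i's base bandwidth. Maximizing sum(log(x_i - b_i)) makes all
# surpluses x_i - b_i equal, so each VM gets its base plus an equal share.
def nbs_allocation(base, capacity):
    assert sum(base) <= capacity, "base bandwidths must fit the capacity"
    surplus = (capacity - sum(base)) / len(base)
    return [b + surplus for b in base]

alloc = nbs_allocation([100, 200, 300], 900)  # -> [200.0, 300.0, 400.0]
```

Raising a VM's base bandwidth shifts the allocation toward a minimum guarantee for it, while setting all bases to zero recovers an equal (fair) split, which is the tradeoff knob the abstract describes.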
【Paper Link】 【Pages】:2148-2156
【Authors】: Ziyu Shao ; Xin Jin ; Wenjie Jiang ; Minghua Chen ; Mung Chiang
【Abstract】: Today's data centers are shared among multiple tenants running a wide range of applications. These applications require a scalable and robust layer-2 network management solution that enables load balancing and QoS provisioning. Ensemble routing was proposed to achieve management scalability and robustness by using Virtual Local Area Networks (VLANs) and operating on the granularity of flow ensembles, i.e., groups of flows. The key challenge of intra-data-center traffic engineering with ensemble routing is the combinatorial optimization of VLAN assignment, i.e., optimally assigning flow ensembles to VLANs to achieve load balancing and low network costs. Based on the Markov approximation framework, we solve the VLAN assignment problem with a general objective function and arbitrary network topologies by designing approximation algorithms with close-to-optimal performance guarantees. We study several properties of our algorithms, including performance optimality, perturbation bounds, convergence, and the impact of algorithmic parameter choices. We then extend these results to variants of the VLAN assignment problem, including interaction with TCP congestion control and QoS considerations. We validate our analytical results by conducting extensive numerical experiments. The results show that our algorithms can be tuned to meet different temporal constraints, incorporate fine-grained traffic management, overcome traffic measurement limitations, and tolerate imprecise and incomplete traffic matrices.
【Keywords】: Markov processes; approximation theory; combinatorial mathematics; computer centres; convergence; local area networks; optimisation; quality of service; resource allocation; telecommunication congestion control; telecommunication network management; telecommunication network reliability; telecommunication network routing; telecommunication network topology; telecommunication traffic; transport protocols; Markov approximation framework; QoS provisioning; TCP congestion; VLAN assignment problem; algorithmic parameter choices; approximation algorithm; close-to-optimal performance guarantees; combinatorial optimization; convergence; ensemble routing; fine-grained traffic management; flow ensembles granularity; intra-data-center traffic engineering; layer-2 network management solution; load balancing; management scalability; network cost; network topologies; performance optimality; perturbation bound; temporal constraint; traffic matrices; traffic measurement limitation; virtual local area network; Algorithm design and analysis; Approximation algorithms; Approximation methods; Markov processes; Routing; Switches; Upper bound
【Paper Link】 【Pages】:2157-2165
【Authors】: Ali Munir ; Ihsan Ayyub Qazi ; Zartash Afzal Uzmi ; Aisha Mushtaq ; Saad N. Ismail ; M. Safdar Iqbal ; Basma Khan
【Abstract】: For provisioning large-scale online applications such as web search, social networks and advertisement systems, data centers face extreme challenges in providing low latency for short flows (which result from end-user actions) and high throughput for background flows (which are needed to maintain data consistency and structure across massively distributed systems). We propose L2DCT, a practical data center transport protocol that targets a reduction in flow completion times for short flows by approximating the Least Attained Service (LAS) scheduling discipline, without requiring any changes in application software or router hardware, and without adversely affecting the long flows. L2DCT can co-exist with TCP and works by adapting flow rates to the extent of network congestion inferred via Explicit Congestion Notification (ECN) marking, a feature widely supported by the installed router base. Though L2DCT is deadline-unaware, our results indicate that, for typical data center traffic patterns and deadlines and over a wide range of traffic loads, its deadline miss rate is consistently smaller than that of existing deadline-driven data center transport protocols. L2DCT reduces the mean flow completion time by up to 50% over DCTCP and by up to 95% over TCP. In addition, it reduces the completion time for 99th-percentile flows by 37% over DCTCP. We present the design and analysis of L2DCT, evaluate its performance, and discuss an implementation built upon the standard Linux protocol stack.
【Keywords】: Linux; computer centres; protocols; scheduling; DCTCP; ECN; L2DCT; LAS; Linux protocol stack; Web search; advertisement systems; application software; data centers; explicit congestion notification marking; flow completion times minimization; large-scale online applications; least attained service scheduling discipline; router hardware; social networks; Bandwidth; Hardware; Oscillators; Routing protocols; Throughput; Transport protocols
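A loose sketch of the rate-adaptation idea summarized above (the EWMA gain, the weight function of attained service, and all constants are illustrative assumptions, not L2DCT's actual parameters):

```python
class L2DCTLikeFlow:
    """Toy model of LAS-approximating congestion control: the window
    back-off grows with both the ECN-marked fraction (DCTCP-style) and
    the data the flow has already sent, so long flows yield bandwidth to
    short ones.  Constants and the weight function are illustrative
    assumptions, not L2DCT's exact design."""
    def __init__(self, g=0.0625, max_weight=2.5):
        self.cwnd = 10.0        # congestion window, in packets
        self.alpha = 0.0        # EWMA of the ECN-marked fraction
        self.sent = 0.0         # packets sent so far (attained service)
        self.g = g
        self.max_weight = max_weight

    def on_ack_round(self, marked_fraction):
        self.sent += self.cwnd
        # DCTCP-style moving average of congestion extent
        self.alpha = (1 - self.g) * self.alpha + self.g * marked_fraction
        # weight rises with attained service: long flows back off harder
        w = min(self.max_weight, 1.0 + self.sent / 1000.0)
        if marked_fraction > 0:
            self.cwnd *= 1 - min(1.0, w * self.alpha) / 2
        else:
            self.cwnd += 1      # additive increase when no marks are seen
```

Because the back-off scales with attained service, a flow that has sent little data keeps most of its window under congestion, approximating LAS without router changes.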
【Paper Link】 【Pages】:2166-2174
【Authors】: Xiaomeng Ban ; Mayank Goswami ; Wei Zeng ; Xianfeng Gu ; Jie Gao
【Abstract】: In this paper we propose an algorithm to construct a “space filling” curve for a sensor network with holes. Mathematically, for a given multi-hole domain R, we generate a path P that is provably aperiodic (i.e., any point is covered at most a constant number of times) and dense (i.e., any point of R is arbitrarily close to P). In a discrete setting such as a sensor network, the path visits the nodes with progressive density, which can adapt to the budget of the path length. Given a higher budget, the path covers the network with higher density; with a lower budget, the path becomes proportionally sparser. We show how this density-adaptive space filling curve can be useful for applications such as serial data fusion, motion planning for data mules, sensor node indexing, and double-ruling type in-network data storage and retrieval. We show by simulation the superior performance of our algorithm versus standard space filling curves and random walks.
【Keywords】: sensor fusion; wireless sensor networks; data mules; data retrieval; data storage; density-adaptive space filling curve; discrete setting; double ruling type in-network; motion planning; multihole domain R; progressive density; random walks; sensor networks; sensor node indexing; serial data fusion; standard space filling curves; topology dependent space filling curves; Approximation methods; Data integration; Educational institutions; Harmonic analysis; Indexing; Planning; Shape
【Paper Link】 【Pages】:2175-2183
【Authors】: Tianlong Yu ; Hongbo Jiang ; Guang Tan ; Chonggang Wang ; Chen Tian ; Yu Wu
【Abstract】: In this paper, we put forward a novel scalable and distributed routing algorithm, called SINUS, for sensor networks deployed on the surfaces of complex-connected 3D settings such as tunnels, whose topologies are often theoretically modeled as high-genus 3D surfaces. SINUS first slices the genus-n surface along a maximum cut set, based on Morse theory and the Reeb graph, to form a genus-0 surface with 2n boundaries. It then groups these 2n boundaries into two groups, each of which is connected together. By doing so, a genus-0 surface with exactly two boundaries emerges, which can be flattened into a strip using the Ricci flow algorithm and then mapped to a planar annulus by a Möbius transform. By assigning nodes virtual coordinates on the planar annulus, SINUS finally realizes a variation of greedy routing that enables individual nodes to make local routing decisions. Our simulation results show that SINUS achieves low-stretch routing with guaranteed delivery, as well as balanced traffic load.
【Keywords】: graph theory; set theory; telecommunication network routing; wireless sensor networks; Mobius transform; Morse theory; Reeb graph; Ricci flow algorithm; SINUS; WSN; complex connected 3D setting; distributed routing algorithm; greedy routing; guaranteed delivery; high genus 3D surface; local routing decision; maximum cut set; planar annulus; scalable routing algorithm; traffic load balancing; virtual coordinate; Indexes; Measurement; Network topology; Routing; Strips; Topology; Wireless sensor networks
【Paper Link】 【Pages】:2184-2192
【Authors】: Kai Xing ; Shuo Zhang ; Lei Shi ; Haojin Zhu ; Yuepeng Wang
【Abstract】: In this paper we propose and analyze a localized backbone renovating algorithm (LBR) to renovate a broken backbone in the network. This research is motivated by the problem of virtual backbone maintenance in wireless ad hoc and sensor networks, where the coverage areas of nodes are disks with identical radii. According to our theoretical analysis, the proposed algorithm can renovate the backbone in a purely localized manner with guaranteed network connectivity, while keeping the backbone size within a constant factor of that of the minimum CDS. Both the communication overhead and the computation overhead of the LBR algorithm are O(k), where k is the number of nodes broken or added. We also conduct an extensive simulation study on connectivity, backbone size, and the communication/computation overhead. The simulation results show that the proposed algorithm always keeps the renovated backbone connected at low communication/computation overhead with a relatively small backbone, compared with other existing schemes. Furthermore, the LBR algorithm can deal with an arbitrary number of node failures and additions in the network.
【Keywords】: ad hoc networks; wireless sensor networks; CDS; LBR algorithm; communication overhead; communication-computation overhead; computation overhead; identical radii; localized backbone renovating algorithm; virtual backbone maintenance; wireless ad hoc networks; wireless sensor networks; Ad hoc networks; Algorithm design and analysis; Maintenance engineering; Network topology; Topology; Wireless communication; Wireless sensor networks; backbone renovating; maximal independent set
【Paper Link】 【Pages】:2193-2201
【Authors】: Markus Benter ; Florentin Neumann ; Hannes Frey
【Abstract】: Within reactive topology control, a node determines its adjacent edges of a network subgraph without prior knowledge of its neighborhood. The goal is to construct a local view on a topology which provides certain desired properties such as planarity. During algorithm execution, a node, in general, is not allowed to determine all its neighbors of the network graph. There are well-known reactive algorithms for computing planar subgraphs. However, the subgraphs obtained do not have constant Euclidean spanning ratio. This means that routing along these subgraphs may result in potentially long detours. So far, it has been unknown if planar spanners can be constructed reactively. In this work, we show that at least under the unit disk network model, this is indeed possible, by proposing an algorithm for reactive construction of the partial Delaunay triangulation, which recently turned out to be a spanner. Furthermore, we show that our algorithm is message-optimal as a node will only exchange messages with nodes that are also neighbors in the spanner. The algorithm's presentation is complemented by a rigorous proof of correctness.
【Keywords】: ad hoc networks; graph theory; graphs; mesh generation; telecommunication network topology; wireless sensor networks; Euclidean spanning ratio; network subgraph; partial Delaunay triangulation; planar subgraphs; planarity; reactive algorithm execution; reactive construction; reactive planar spanner construction; reactive topology control; unit disk network model; wireless ad hoc network; wireless sensor network; Bismuth; Computational modeling; Network topology; Protocols; Routing; Topology; Wireless sensor networks; Euclidean spanner; Reactive topology control; localized algorithm; partial Delaunay triangulation
【Paper Link】 【Pages】:2202-2210
【Authors】: Ting He ; Dennis Goeckel ; Ramya Raghavendra ; Don Towsley
【Abstract】: We consider the problem of endhost-based shortest path routing in a network with unknown, time-varying link qualities. Endhost-based routing is needed when internal nodes of the network do not have the scope or capability to provide globally optimal paths to given source-destination pairs, as can be the case in networks consisting of autonomous subnetworks or those with endhost-based routing restrictions. Assuming the source can probe links along selected paths, we formulate the problem as an online learning problem, where an existing solution achieves a performance loss (called regret) that is logarithmic in time with respect to (wrt) an offline algorithm that knows the link qualities. Current solutions assume coupled probing and routing; in contrast, we give a simple algorithm based on decoupled probing and routing, whose regret is only constant in time. We then extend our solution to support multi-path probing and cooperative learning between multiple sources, where we show an inversely proportional decay in regret wrt the probing rate. We also show that without the decoupling, the regret grows at least logarithmically in time, thus establishing decoupling as critical for obtaining constant regret. Although our analysis assumes certain conditions (i.i.d.) on link qualities, our solution applies with straightforward amendments to much broader scenarios where these conditions are relaxed. The efficacy of the proposed solution is verified by trace-driven simulations.
【Keywords】: learning (artificial intelligence); telecommunication computing; telecommunication network routing; autonomous subnetwork; cooperative learning; decoupled probing; dynamic network; endhost-based routing restriction; endhost-based shortest path routing; inversely proportional decay; multipath probing; offline algorithm; online learning approach; source-destination pair; time-varying link quality; trace-driven simulation; Probes; Routing; Time measurement; Topology; Upper bound; Weight measurement
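The decoupling idea described above, probing traffic chosen independently of the routing decision, can be sketched as follows; the fixed probe rate and the empirical-mean exploitation rule are illustrative assumptions, not the paper's algorithm:

```python
import random

def decoupled_route(paths, link_sample, rounds, probe_rate=0.2, seed=1):
    """Toy decoupled probing/routing: with probability probe_rate send on a
    uniformly random path (exploration, independent of the routing choice),
    otherwise route on the path with the best empirical mean quality.
    link_sample(path) returns one noisy quality observation for that path.
    (A sketch of the decoupling idea only, not the paper's algorithm.)"""
    rng = random.Random(seed)
    total = {p: 0.0 for p in paths}
    count = {p: 0 for p in paths}
    choices = []
    for _ in range(rounds):
        if rng.random() < probe_rate or min(count.values()) == 0:
            p = rng.choice(paths)   # probe: uniform exploration
        else:
            # exploit: path with the best empirical mean so far
            p = max(paths, key=lambda q: total[q] / count[q])
        total[p] += link_sample(p)
        count[p] += 1
        choices.append(p)
    return choices
```

Because probing happens at a fixed rate regardless of which path is currently preferred, every path keeps accumulating fresh samples, which is the property the paper exploits to obtain constant (rather than logarithmic) regret.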
【Paper Link】 【Pages】:2211-2219
【Authors】: Sugam Agarwal ; Murali S. Kodialam ; T. V. Lakshman
【Abstract】: Software Defined Networking is a new networking paradigm that separates the network control plane from the packet forwarding plane and provides applications with an abstracted centralized view of the distributed network state. A logically centralized controller that has a global network view is responsible for all the control decisions and it communicates with the network-wide distributed forwarding elements via standardized interfaces. Google recently announced [5] that it is using a Software Defined Network (SDN) to interconnect its data centers due to the ease, efficiency and flexibility in performing traffic engineering functions. It expects the SDN architecture to result in better network capacity utilization and improved delay and loss performance. The contribution of this paper is the effective use of SDNs for traffic engineering, especially when SDNs are incrementally introduced into an existing network. In particular, we show how to leverage the centralized controller to obtain significant improvements in network utilization as well as to reduce packet losses and delays. We show that these improvements are possible even in cases where there is only a partial deployment of SDN capability in a network. We formulate the SDN controller's optimization problem for traffic engineering with partial deployment and develop fast Fully Polynomial Time Approximation Schemes (FPTAS) for solving these problems. We show, by both analysis and ns-2 simulations, the performance gains that are achievable using these algorithms even with an incrementally deployed SDN.
【Keywords】: approximation theory; computer centres; computer networks; network interfaces; telecommunication traffic; FPTAS; Google; SDN architecture; SDN controller optimization problem; data centers; distributed network state; fully polynomial time approximation schemes; logically centralized controller; network control plane; network utilization; network-wide distributed forwarding elements; packet delay reduction; packet forwarding plane; packet loss reduction; software defined networking; standardized interfaces; traffic engineering; Current measurement; Delays; IP networks; Optimization; Peer-to-peer computing; Routing; Standards
【Paper Link】 【Pages】:2220-2228
【Authors】: Laurent Vanbever ; Stefano Vissicchio ; Luca Cittadini ; Olivier Bonaventure
【Abstract】: Network upgrades, performance optimizations and traffic engineering activities often force network operators to adapt their IGP configuration. Recently, several techniques have been proposed to change an IGP configuration (e.g., link weights) in a disruption-free manner. Unfortunately, none of these techniques considers the impact of IGP changes on BGP correctness. In this paper, we show that known reconfiguration techniques can trigger various kinds of BGP anomalies. First, we illustrate the relevance of the problem by performing simulations on a Tier-1 network. Our simulations highlight that even a few link weight changes can produce long-lasting BGP anomalies affecting a significant part of the BGP routing table. Then, we study the problem of finding a reconfiguration ordering which maintains both IGP and BGP correctness. Unfortunately, we show examples in which such an ordering does not exist. Furthermore, we prove that deciding if such an ordering exists is NP-hard. Finally, we provide sufficient conditions and configuration guidelines that enable graceful operations for both IGP and BGP.
【Keywords】: network topology; routing protocols; telecommunication traffic; BGP anomalies; IGP configuration; link weight changes; network operators; network upgrades; performance optimizations; reconfiguration techniques; traffic engineering activities; Maintenance engineering; Oscillators; Routing; Routing protocols; Topology; Transient analysis
【Paper Link】 【Pages】:2229-2237
【Authors】: Jerry T. Chiang ; Yih-Chun Hu
【Abstract】: Spectrum sensing is one of the most important enabling techniques on which to build a cognitive radio network. However, previously proposed techniques often have shortcomings in non-ideal environments: 1) An energy detector is simple but cannot perform in the face of uncertain noise power; 2) A matched filter is the optimal detector, but performs poorly with clock drifts; 3) Eigenvalue-based blind feature detectors show great promise, but cannot detect signals that are noise-like; and 4) The above protocols all rely on field surveys to determine the proper decision thresholds. We propose HIPSS and its extension Δ-HIPSS, which are based on the Hermitian inner product of two observations acquired by a wireless receiver over multiple radio paths. HIPSS and Δ-HIPSS are lightweight, and through extensive analysis and evaluation we show that 1) HIPSS and Δ-HIPSS are robust in the presence of noise power uncertainties; 2) HIPSS and Δ-HIPSS require neither a much longer observation duration nor complex computation compared to an energy detector in an ideal setting; 3) HIPSS and Δ-HIPSS can detect noise-like primary signals; and 4) Δ-HIPSS can reliably return sensing decisions without necessitating any field surveys.
【Keywords】: Hermitian matrices; cognitive radio; eigenvalues and eigenfunctions; matched filters; protocols; radio spectrum management; signal detection; Δ-HIPSS; Hermitian inner product; clock drifts; cognitive radio network; eigenvalue-based blind feature detectors; energy detector; matched filter; protocols; signal detection; spectrum sensing; uncertain noise power; Correlation; Detectors; Feature extraction; Receivers; Signal to noise ratio
【Paper Link】 【Pages】:2238-2246
【Authors】: Shimin Gong ; Ping Wang ; Wei Liu ; Weihua Zhuang
【Abstract】: The harmonic coexistence of secondary users (SUs) and primary users (PUs) in cognitive radio networks requires SUs to identify the idle spectrum bands. One common approach to achieve spectrum awareness is through spectrum sensing, which usually assumes known distributions of the received signals. However, due to the nature of wireless channels, such an assumption is often too strong to be realistic, and leads to unreliable detection performance in practical networks. In this paper, we study the sensing performance under distribution uncertainty, i.e., the actual distribution functions of the received signals are subject to ambiguity and not fully known. Firstly, we define a series of uncertainty models based on signals' moment statistics in different spectrum conditions. Then we present mathematical formulations to study the detection performance corresponding to these uncertainty models. Moreover, in order to make use of the distribution information embedded in historical data, we extract a reference distribution from past channel observations, and define a new uncertainty model in terms of it. With this uncertainty model, we propose two iterative procedures to study the false alarm probability and detection probability, respectively. Numerical results show that the detection performance with a reference distribution is less conservative compared with that of the uncertainty models merely based on signal statistics.
【Keywords】: cognitive radio; iterative methods; radio spectrum management; signal detection; statistical distributions; wireless channels; cognitive radio network; detection probability; distribution function; distribution information; energy detection; false alarm probability; harmonic coexistence; idle spectrum band identification; iterative procedure; performance bound; primary user; secondary user; signal moment statistics; signal uncertainty distribution; spectrum awareness; spectrum condition; spectrum sensing; uncertainty model detection; wireless channels; Distribution functions; Gaussian distribution; Mathematical model; Noise; Numerical models; Sensors; Uncertainty; Cognitive radio; distribution uncertainty; probabilistic distance measure; spectrum sensing
【Paper Link】 【Pages】:2247-2255
【Authors】: Sungro Yoon ; Li Erran Li ; Soung Chang Liew ; Romit Roy Choudhury ; Injong Rhee ; Kun Tan
【Abstract】: Spectrum sensing, the task of discovering spectrum usage at a given location, is a fundamental problem in dynamic spectrum access networks. While sensing in narrow spectrum bands is well studied in previous work, wideband spectrum sensing is challenging since a wideband radio is generally too expensive and power consuming for mobile devices. Sequential scan, on the other hand, can be very slow if the wide spectrum band contains many narrow channels. In this paper, we propose an analog-filter based spectrum sensing technique, which is much faster than sequential scan and much cheaper than using a wideband radio. The key insight is that, if the sum of energy on a contiguous band is low, we can conclude that all channels in this band are clear with just one measurement. Based on this insight, we design an intelligent search algorithm to minimize the number of total measurements. We prove that the algorithm has the same asymptotic complexity as compressed sensing, while our design is much simpler and easily implementable in real hardware. We demonstrate the feasibility of our technique using hardware devices that include analog filters and analog energy detectors. Our extensive evaluation using real TV “white space” signals shows the effectiveness of our technique.
【Keywords】: broadband networks; compressed sensing; mobile handsets; radio spectrum management; subscriber loops; wireless channels; TV white space signal; analog energy detectors; analog filters; asymptotic complexity; compressed sensing; dynamic spectrum access networks; energy-efficient channel sensing; hardware devices; intelligent search algorithm; mobile devices; narrow spectrum bands; power consumption; spectrum sensing; wideband radio; wideband spectrum sensing; Algorithm design and analysis; Bandwidth; Detectors; Hardware; Noise; Noise measurement
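The key insight above (one aggregate energy measurement can clear an entire contiguous band) lends itself to a simple recursive sketch; the bisection strategy and the fixed threshold below are illustrative assumptions, not the authors' exact search algorithm:

```python
def find_occupied(energies, threshold, lo=0, hi=None, measurements=None):
    """Toy recursive band scan: one (simulated) analog-filter measurement
    sums the energy over a contiguous band; if the sum is below threshold,
    every channel in the band is declared clear in a single step, otherwise
    the band is bisected.  Returns the occupied channel indices.
    (Illustrates the search idea only; the threshold is illustrative.)"""
    if hi is None:
        hi = len(energies)
    if measurements is not None:
        measurements[0] += 1            # count each simulated measurement
    band_energy = sum(energies[lo:hi])  # one wideband energy measurement
    if band_energy < threshold:
        return []                       # whole band clear in one shot
    if hi - lo == 1:
        return [lo]                     # narrowed down to one occupied channel
    mid = (lo + hi) // 2
    return (find_occupied(energies, threshold, lo, mid, measurements)
            + find_occupied(energies, threshold, mid, hi, measurements))
```

When occupied channels are sparse, this needs far fewer measurements than scanning each channel sequentially, which is the source of the speedup over sequential scan.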
【Paper Link】 【Pages】:2256-2264
【Authors】: Qiang Liu ; Xin Wang ; Yong Cui
【Abstract】: Spectrum sensing enables cognitive radios (CRs) to opportunistically access the under-utilized spectrum. Existing efforts on sensing have not adequately addressed sensing scheduling over time for better detection performance. In this work, we consider sequential periodic sensing of an in-band channel. We focus primarily on finding the appropriate sensing frequency during an SU's active data transmission on a licensed channel. Change and outlier detection schemes are designed specifically to facilitate short-term sensing adaptation to the variations in sensed data. Simulation results demonstrate that our design guarantees better conformity to the spectrum access policies by significantly reducing the delay in change detection while ensuring better sensing accuracy.
【Keywords】: channel allocation; cognitive radio; radio spectrum management; active data transmission; cognitive radio; in-band channel; sequential periodic sensing; spectrum access policy; spectrum sensing; Accuracy; Data communication; Detectors; Interference; Signal to noise ratio; Cognitive radio; change and outlier detection; channel detection time; in-band channel; sequential periodic spectrum sensing
【Paper Link】 【Pages】:2265-2273
【Authors】: Iris Safaka ; Christina Fragouli ; Katerina J. Argyraki ; Suhas N. Diggavi
【Abstract】: We consider the problem where a group of wireless nodes, connected to the same broadcast domain, want to create pairwise secrets, in the presence of an adversary Eve, who tries to listen in and steal these secrets. Existing solutions assume that Eve cannot perform certain computations (e.g., large-integer factorization) in useful time. We ask the question: can we solve this problem without assuming anything about Eve's computational capabilities? We propose a simple secret-agreement protocol, where the wireless nodes keep exchanging bits until they have agreed on pairwise secrets that Eve cannot reconstruct with very high probability. Our protocol relies on Eve's limited network presence (the fact that she cannot be located at an arbitrary number of points in the network at the same time), but assumes nothing about her computational capabilities. We formally show that, under standard theoretical assumptions, our protocol is information-theoretically secure (it leaks zero information to Eve about the secrets). Using a small wireless testbed of smart-phones, we provide experimental evidence that it is feasible for 5 nodes to create thousands of secret bits per second, with their secrecy being independent from the adversary's capabilities.
【Keywords】: cryptographic protocols; probability; radio networks; telecommunication security; Eve computational capabilities; broadcast domain; pairwise secrets; probability; secret-agreement protocol; smartphone wireless testbed; wireless node group; Authentication; Communication system security; Privacy; Protocols; Reliability; Standards; Wireless communication
【Paper Link】 【Pages】:2274-2282
【Authors】: Chunqiang Hu ; Xiuzhen Cheng ; Fan Zhang ; Dengyuan Wu ; Xiaofeng Liao ; Dechang Chen
【Abstract】: Body Area Networks (BANs) are expected to play a major role in patient health monitoring in the near future. Providing an efficient key agreement with the properties of plug-n-play and transparency to support secure inter-sensor communications is critical, especially during the stages of network initialization and reconfiguration. In this paper, we present a novel key agreement scheme termed Ordered-Physiological-Feature-based Key Agreement (OPFKA), which allows two sensors belonging to the same BAN to agree on a symmetric cryptographic key generated from the overlapping physiological signal features, thus avoiding the pre-distribution of keying materials among the sensors embedded in the same human body. The secret features computed from the same physiological signal at different parts of the body by different sensors exhibit some overlap, but they are not completely identical. To overcome this challenge, we detail a computationally efficient protocol to securely transfer the secret features of one sensor to another such that two sensors can easily identify the overlapping ones. This protocol possesses many nice features such as resistance against brute-force attacks. Experimental results indicate that OPFKA is secure, efficient, and feasible. Compared with the state-of-the-art PSKA protocol, OPFKA achieves a higher level of security at a lower computational overhead.
【Keywords】: body area networks; cryptographic protocols; patient monitoring; telecommunication security; BAN; OPFKA; PSKA protocol; computational overhead; computationally efficient protocol; efficient ordered-physiological-feature-based key agreement; ordered-physiological-feature-based key agreement; overlapping physiological signal features; patient health monitoring; physiological signal; plug-n-play; secure ordered-physiological-feature-based key agreement; security level; symmetric cryptographic key; transparency; wireless body area networks; Biomedical monitoring; Indexes; Receivers; Security; Sensor phenomena and characterization; Vectors; Body Area Networks (BANs); Inter-Pulse-Interval (IPI); physiological feature based key agreement; secure intersensor communications
【Paper Link】 【Pages】:2283-2291
【Authors】: Xiaojun Zhu ; Fengyuan Xu ; Edmund Novak ; Chiu Chiang Tan ; Qun Li ; Guihai Chen
【Abstract】: A crucial component of vehicular network security is to establish a secure wireless channel between any two vehicles. In this paper, we propose a scheme that allows two cars to extract a secret key from RSSI (Received Signal Strength Indicator) values in such a way that nearby cars cannot obtain the same secret. Our solution can be executed in noisy, outdoor vehicular environments. We also propose an online parameter learning mechanism to adapt to different channel conditions. We conduct extensive real-world experiments to validate our solution.
【Keywords】: automobiles; private key cryptography; radio links; telecommunication security; vehicular ad hoc networks; wireless channels; RSSI values; online parameter learning mechanism; outdoor vehicular environments; real-world experiments; received signal strength indicator values; secret key extraction; vehicular network security; wireless channel security; wireless link dynamics; Communication system security; Correlation; Entropy; Markov processes; Noise; Vehicle dynamics; Wireless communication
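The RSSI-based key extraction summarized above can be illustrated with a toy quantizer (the mean/guard-band thresholding and the dropped-index bookkeeping are common in this literature but are assumptions here, not the paper's exact scheme):

```python
import statistics

def rssi_to_bits(rssi, guard=0.5):
    """Toy level-crossing quantizer: samples well above the mean become 1,
    samples well below become 0, and samples inside the guard band are
    dropped to reduce bit mismatches between the two cars (in a real
    protocol both sides would reconcile the kept indices).  Returns
    (bits, kept_indices).  The guard width is an illustrative choice."""
    mu = statistics.mean(rssi)
    sd = statistics.pstdev(rssi) or 1.0
    bits, kept = [], []
    for i, r in enumerate(rssi):
        if r > mu + guard * sd:
            bits.append(1); kept.append(i)
        elif r < mu - guard * sd:
            bits.append(0); kept.append(i)
        # else: ambiguous sample near the mean, discarded
    return bits, kept
```

Because the reciprocal channel between the two cars decorrelates quickly with distance, an eavesdropper a few wavelengths away observes different RSSI sequences and so quantizes to different bits.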
【Paper Link】 【Pages】:2292-2300
【Authors】: Pengfei Huang ; Xudong Wang
【Abstract】: Physical layer secret key generation exploits the inherent randomness of wireless channels. However, for wireless channels with long coherence time, key generation rate can be extremely low due to lack of randomness in channels. In this paper, a novel key generation scheme is proposed for a general wireless network where channels are static and each transmitter is equipped with two antennas. It integrates opportunistic beamforming and frequency diversity to achieve fast secret key generation. In this scheme, channel fluctuations are first induced by controlling amplitude and phase of each symbol in the training sequence on each antenna. Thus, key generation rate is significantly increased. However, the induced channel fluctuations lead to correlation between the legitimate channel and the eavesdropping channel, which compromises key secrecy. To this end, frequency diversity is then exploited to ensure that key secrecy grows with the key size. The secret key generation scheme is investigated in both narrowband and wideband systems, and its performance is evaluated through both theoretical analysis and simulations. Performance results have validated randomness and secrecy of secret keys and also illustrate that the proposed scheme can generate secret keys at a rate of 2Kb/s for narrowband systems and 20Kb/s for wideband systems.
【Keywords】: antenna arrays; cryptography; diversity reception; radio networks; radio transmitters; telecommunication security; wireless channels; amplitude control; bit rate 2 kbit/s; bit rate 20 kbit/s; channel fluctuations; eavesdropping channel; frequency diversity; induced channel fluctuations; key secrecy; lack-of-channel randomness; legitimate channel; long coherence time; narrowband systems; performance evaluation; phase control; physical layer secret key generation; static wireless networks; virtual channel approach; wideband systems; wireless channels; Antennas; Channel estimation; Coherence; Error probability; Frequency diversity; Quantization (signal); Wireless communication
【Paper Link】 【Pages】:2301-2309
【Authors】: Linke Guo ; Chi Zhang ; Hao Yue ; Yuguang Fang
【Abstract】: Mobile content dissemination is very useful for many mobile applications in delay tolerant networks (DTNs), such as instant messaging, file sharing, and advertisement dissemination. Recently, social-based approaches, which attempt to exploit the social behaviors of DTN users to forward time-insensitive data, such as family photos and friends' sightseeing video clips, have attracted intensive attention in the design of routing schemes for DTNs. Most social-based schemes leverage users' contact history and social information (e.g., community and friendship) as metrics to improve dissemination performance. In these schemes, users need to obtain others' social information to determine their dissemination strategy, which apparently compromises other users' privacy. Moreover, the owner of mobile content may want to disclose his/her data only to a particular group of users rather than revealing it to the public. In this paper, we propose a privacy-preserving social-assisted mobile content dissemination scheme in DTNs. We apply users' verifiable attributes to establish their potential social relationships in terms of identical attributes in a privacy-preserving way. In addition, to ensure the confidentiality of mobile content, our approach enables users to encrypt content before the dissemination process and only allows users who have particular attributes to decrypt it. Through trace-driven simulations and experiments, we show the security and efficiency of our proposed scheme.
【Keywords】: cryptography; delay tolerant networks; mobile computing; DTN; delay tolerant networks; dissemination performance; dissemination strategy; mobile applications; privacy-preserving social-assisted mobile content dissemination scheme; routing schemes; social behaviors; threshold attribute-based encryption; time-insensitive data; Encryption; Mobile communication; Privacy; Protocols; Routing; Threshold attribute-based encryption; privacy-preserving; witness-indistinguishable proof
【Paper Link】 【Pages】:2310-2318
【Authors】: Ting Ning ; Zhipeng Yang ; Hongyi Wu ; Zhu Han
【Abstract】: In this paper, we propose a Self-Interest-Driven (SID) incentive scheme to stimulate cooperation among selfish nodes for ad dissemination in autonomous mobile social networks. As a key innovation of SID, we introduce “virtual checks” to eliminate the need for accurate knowledge of whom, and how many credits, the ad provider should pay. A virtual check is included in each ad packet. When an intended receiver receives the packet for the first time from an intermediate node, the former issues the latter a digitally signed check, which serves as a proof of successful ad delivery. Multiple copies of a virtual check can be created and signed by different receivers. When a node that owns a signed check meets the ad provider, it requests the provider to cash the check. Both ad packets and signed checks can be traded among mobile nodes. We propose effective mechanisms to define virtual rewards for ad packets and virtual checks, and formulate the nodal interaction as a two-player cooperative game, whose solution is obtained by the Nash Bargaining Theorem. Extensive simulations are carried out to compare SID with other existing incentive algorithms under real-world mobility traces.
【Keywords】: advertising data processing; game theory; mobile computing; social networking (online); Nash bargaining theorem; SID incentive scheme; ad delivery; ad dissemination; ad packet; autonomous mobile social networks; digitally signed check; nodal interaction; real world mobility traces; self-interest-driven incentives; two-player cooperative game; virtual check; virtual rewards; Games; Mobile computing; Mobile nodes; Peer-to-peer computing; Receivers; Social network services
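For a transferable-utility version of the two-player cooperative game mentioned above, the Nash bargaining solution has a simple closed form: each player receives its disagreement payoff plus half the surplus. The sketch below, with illustrative numbers rather than the paper's reward model, checks the closed form against a grid search.

```python
def nash_bargaining_split(reward, d1, d2):
    """Closed-form Nash bargaining solution for splitting a reward.

    Maximizes (u1 - d1) * (u2 - d2) subject to u1 + u2 = reward:
    each player gets its disagreement payoff plus half the surplus.
    """
    surplus = reward - d1 - d2
    return d1 + surplus / 2, d2 + surplus / 2

def best_split_numeric(reward, d1, d2, steps=10**5):
    """Grid-search check of the same maximization."""
    best, best_u1 = -1.0, None
    for i in range(steps + 1):
        u1 = reward * i / steps
        prod = (u1 - d1) * (reward - u1 - d2)
        if prod > best:
            best, best_u1 = prod, u1
    return best_u1

u1, u2 = nash_bargaining_split(10.0, d1=2.0, d2=4.0)
print(u1, u2)                              # 4.0 6.0
print(best_split_numeric(10.0, 2.0, 4.0))  # 4.0
```

Here a reward of 10 with disagreement payoffs 2 and 4 leaves a surplus of 4, split evenly between the two traders.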
【Paper Link】 【Pages】:2319-2327
【Authors】: Jie Wu ; Mingjun Xiao ; Liusheng Huang
【Abstract】: A mobile social network (MSN) is a special delay tolerant network (DTN) composed of mobile nodes with social characteristics. Mobile nodes in MSNs generally visit community homes frequently, while other locations are visited less frequently. We propose a novel zero-knowledge MSN routing algorithm, homing spread (HS), in which community homes have a higher priority to spread messages into the network. Theoretical analysis shows that the proposed algorithm spreads a given number of message copies in an optimal way when the inter-meeting times between any two nodes, and between a node and a community home, follow exponential distributions. We also calculate the expected delivery delay of HS. In addition, extensive simulations are conducted. Results show that community homes are an important factor in efficient message spreading. By using homes to spread messages faster, HS achieves better performance than existing zero-knowledge MSN routing algorithms, including Epidemic (with a given number of copies) and Spray&Wait.
【Keywords】: delay tolerant networks; mobile radio; telecommunication network routing; DTN; HS; community home based multicopy routing; delay tolerant network; exponential distributions; homing spread; message copies; message spreading; mobile nodes; mobile social networks; novel zero knowledge MSN routing algorithm; social characteristics; Communities; Delays; Mobile nodes; Probability density function; Routing; Community home; mobile social networks (MSNs); routing
【Paper Link】 【Pages】:2328-2336
【Authors】: Jaeseong Jeong ; Yung Yi ; Jeong-woo Cho ; Do Young Eun ; Song Chong
【Abstract】: An essential precondition for the success of mobile applications based on Wi-Fi (e.g., iCloud) is energy-efficient Wi-Fi sensing. From a user's perspective, a good Wi-Fi sensing policy should depend on both the inter-AP arrival time and contact duration distributions. Prior work focuses on limited cases of those two distributions (e.g., exponential) or introduces heuristic approaches such as AI (Additive Increase). In this paper, we formulate a functional optimization problem on Wi-Fi sensing under general inter-AP arrival and contact duration distributions, and propose how each user should sense Wi-Fi APs to strike a balance between energy efficiency and performance, depending on the user's mobility pattern. To that end, we derive an optimality condition which sheds insight into the aging property, the key feature required by efficient Wi-Fi sensing policies. Guided by the analytical studies and their implications, we develop a new sensing algorithm, called WiSAG (Wi-Fi Sensing with AGing), which is demonstrated to outperform existing sensing algorithms by up to 34% in extensive trace-driven simulations using real mobility traces gathered from smartphones.
【Keywords】: heuristic programming; mobility management (mobile radio); optimisation; smart phones; wireless LAN; AI; Wi-Fi sensing with aging; WiSAG sensing algorithm; additive increase; contact duration distributions; energy-efficient Wi-Fi sensing policy; functional optimization problem; heuristic approaches; inter-AP arrival; smart phones; trace-driven simulations; user mobility pattern; Aging; Artificial intelligence; IEEE 802.11 Standards; Mobile communication; Optimized production technology; Sensors
【Paper Link】 【Pages】:2337-2345
【Authors】: Georgios S. Paschos ; Constantinos Fragiadakis ; Leonidas Georgiadis ; Leandros Tassiulas
【Abstract】: We study a 1-hop broadcast channel with two receivers. Due to overhearing channels, the receivers have side information which can be leveraged by interflow network coding techniques to increase throughput. In this setup, we consider two different control mechanisms: the deterministic system, where the contents of the receivers' buffers are announced to the coding node via overhearing reports, and the stochastic system, where the coding node makes stochastic control decisions based on statistics and the performance is improved via NACK messages. We study the minimal evacuation times for the two systems and obtain analytical expressions of the throughput region for the deterministic system and of the code-constrained region for the stochastic one. We show that maximum performance is achieved by simple XOR policies. For equal transmission rates r1 = r2, the two regions coincide. If r1 ≠ r2, we showcase the tradeoff between throughput and overhead.
【Keywords】: broadcast channels; network coding; radio networks; radio receivers; stochastic processes; telecommunication control; wireless channels; 1-hop broadcast channel; NACK messages; XOR policy; code-constrained region; control mechanisms; interflow network coding technique; partial overhearing information; receivers; statistics; stochastic control decision; stochastic system; wireless network coding; Decoding; Encoding; Receivers; Stability analysis; Stochastic systems; Throughput; Wireless networks
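The simple XOR policy that achieves maximum performance in this setting can be sketched directly: when each receiver has overheard the packet intended for the other, a single coded transmission serves both. The packet contents below are placeholders; real systems would also pad packets to a common length.

```python
def xor(p, q):
    """Bytewise XOR of two equal-length packets."""
    return bytes(a ^ b for a, b in zip(p, q))

# Receiver 1 wants p1 but overheard p2; receiver 2 wants p2 but
# overheard p1. One coded transmission then serves both receivers.
p1 = b"packet-for-receiver-1"
p2 = b"packet-for-receiver-2"

coded = xor(p1, p2)          # the single XOR-coded transmission
r1_decoded = xor(coded, p2)  # receiver 1 cancels its side info
r2_decoded = xor(coded, p1)  # receiver 2 cancels its side info
print(r1_decoded == p1, r2_decoded == p2)  # True True
```

The deterministic/stochastic distinction in the abstract is about how the coding node learns which packets were overheard, not about the coding operation itself.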
【Paper Link】 【Pages】:2346-2354
【Authors】: Jun Li ; Xin Wang ; Baochun Li
【Abstract】: In distributed storage systems, a substantial volume of data is stored in a distributed fashion across a large number of storage nodes. To maintain data integrity, when existing storage nodes fail, lost data are regenerated at replacement nodes. Regenerating multiple data losses in batches can reduce the consumption of bandwidth. However, existing schemes are only able to achieve lower bandwidth consumption by utilizing a large number of participating nodes. In this paper, we propose a cooperative pipelined regeneration process that regenerates multiple data losses cooperatively with far fewer participating nodes. We show that cooperative pipelined regeneration not only maintains optimal data integrity, but also further reduces the consumption of bandwidth.
【Keywords】: data integrity; information storage; linear codes; bandwidth consumption reduction; cooperative pipelined regeneration process; data loss regeneration; distributed storage systems; optimal data integrity; replacement nodes; storage nodes; Bandwidth; Distributed databases; Educational institutions; Linear code; Redundancy; Resilience
【Paper Link】 【Pages】:2355-2363
【Authors】: Yuchong Hu ; Patrick P. C. Lee ; Kenneth W. Shum
【Abstract】: Modern distributed storage systems apply redundancy coding techniques to stored data. One form of redundancy is based on regenerating codes, which can minimize the repair bandwidth, i.e., the amount of data transferred when repairing a failed storage node. Existing regenerating codes mainly require that surviving storage nodes encode data during repair. In this paper, we study functional minimum storage regenerating (FMSR) codes, which enable uncoded repair without the encoding requirement in surviving nodes, while preserving the minimum repair bandwidth guarantees and also minimizing disk reads. Under double-fault tolerance settings, we formally prove the existence of FMSR codes and provide a deterministic FMSR code construction that can significantly speed up the repair process. We further implement and evaluate our deterministic FMSR codes to demonstrate their benefits. Our work is built atop a practical cloud storage system that implements FMSR codes, and we provide theoretical validation to justify the practicality of FMSR codes.
【Keywords】: cloud computing; codes; fault tolerant computing; redundancy; storage management; cloud storage system; data encoding; data storage; data transfer; deterministic FMSR codes; disk read minimization; distributed storage system; double-fault tolerance setting; encoding requirement; functional minimum storage regenerating codes; minimum repair bandwidth; redundancy coding technique; repair bandwidth minimization; repair process; storage node; uncoded repair; Bandwidth; Encoding; Fault tolerance; Fault tolerant systems; Maintenance engineering; Peer-to-peer computing; Reed-Solomon codes
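For context on the repair-bandwidth savings at stake, the standard minimum-storage regenerating (MSR) point from the regenerating-codes literature (not the FMSR construction itself) gives repair bandwidth γ = dM/(k(d−k+1)), versus downloading the whole file of size M under naive repair. A small calculation with illustrative numbers:

```python
def msr_repair_bandwidth(M, k, d):
    """Repair bandwidth at the minimum-storage regenerating point.

    M: file size; k: nodes sufficient to reconstruct the file;
    d: helper nodes contacted during a repair (k <= d <= n-1).
    gamma = d*M / (k*(d-k+1)); naive repair downloads all M.
    """
    return d * M / (k * (d - k + 1))

M, k = 1024.0, 4
for d in (4, 5, 6, 7):
    print(d, msr_repair_bandwidth(M, k, d))
# More helpers shrink the per-repair traffic from 1024.0 to 448.0.
```

This illustrates the tension the abstract mentions: lower repair bandwidth typically requires contacting more helper nodes, which is what the cooperative and FMSR lines of work try to relax.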
【Paper Link】 【Pages】:2364-2372
【Authors】: Xunrui Yin ; Yan Wang ; Xin Wang ; Xiangyang Xue ; Zongpeng Li
【Abstract】: Network coding encourages information coding across a communication network. While the necessity, benefit and complexity of network coding are sensitive to the underlying graph structure of a network, existing theory on network coding often treats the network topology as a black box, focusing on algebraic or information-theoretic aspects of the problem. This work aims at an in-depth examination of the relation between algebraic coding and network topologies. We mathematically establish a series of results along the following direction: if network coding is necessary or beneficial, or if a particular finite field is required for coding, then the network must have a corresponding hidden structure embedded in its underlying topology, and such an embedding is computationally efficient to verify. Specifically, we first formulate a meta-conjecture, the NC-Minor Conjecture, that articulates such a connection between graph theory and network coding in the language of graph minors. We next prove that the NC-Minor Conjecture is almost equivalent to the Hadwiger Conjecture, which connects graph minors with graph coloring. Such equivalence implies the existence of K4, K5, K6, and K_O(q/log q) minors for networks requiring F3, F4, F5 and Fq, respectively. We finally prove that network coding can make a difference from routing only if the network contains a K4 minor, and this minor containment result is tight. Practical implications of the above results are discussed.
【Keywords】: algebraic codes; graph colouring; information theory; network coding; telecommunication network topology; Hadwiger conjecture; algebraic coding; black box; communication network; finite field; graph coloring; graph minor perspective; graph minors; graph structure; graph theory; hidden structure; information coding; information theoretic aspects; meta-conjecture; minor conjecture; network coding; network topologies; Color; Encoding; Network coding; Network topology; Receivers; Routing; Vectors
【Paper Link】 【Pages】:2373-2381
【Authors】: Bo Jiang ; Nidhi Hegde ; Laurent Massoulié ; Don Towsley
【Abstract】: We consider the performance of information propagation through social networks in a scenario where each user has a budget of attention, that is, a constraint on the frequency with which he pulls content from neighbors. In this context we ask the question “when users make selfish decisions on how to allocate their limited access frequency among neighbors, does information propagate efficiently?” For the metric of average propagation delay, we provide characterizations of the optimal social cost and the social cost under selfish user optimizations for various topologies of interest. Three situations may arise: well-connected topologies where delay is small even under selfish optimization; tree-like topologies where selfish optimization performs poorly while optimal social cost is low; and “stretched” topologies where even optimal social cost is high. We propose a mechanism for incentivizing users to modify their selfish behaviour, and observe its efficiency in the family of tree-like topologies mentioned above.
【Keywords】: information dissemination; social networking (online); trees (mathematics); access frequency; average propagation delay; information propagation; optimal social cost; selfish user optimizations; social networks; stretched topologies; tree-like topologies; Delays; Network topology; Radio spectrum management; Resource management; Social network services; Stability analysis; Topology
【Paper Link】 【Pages】:2382-2390
【Authors】: Jin Cao ; Hongyu Gao ; Li Erran Li ; Brian D. Friedman
【Abstract】: Like their public counterparts such as Facebook and Twitter, enterprise social networks are poised to revolutionize how people interact in the workplace, and there is a pressing need to understand how people are using them. Unlike public social networks, which are normally characterized using the social graph or the interaction graph, enterprise social networks are also governed by an organization graph. Based on a six-month dataset of a large enterprise social network, collected from May through October 2011, we study the characteristics of its activity. We observe that user attributes in the organization graph, such as geographic location (e.g., country) and rank in the company hierarchy, have a significant impact on how users use the social network and interact with each other. We then build formal statistical models of user interaction graphs in the enterprise social network and quantify the effects of user attributes from the organization graph on these interactions. Furthermore, as the enterprise social network brings together users from diverse locations and social statuses, forming ad-hoc communities, our statistical model can be further enhanced by including these ad-hoc communities.
【Keywords】: graph theory; social networking (online); statistical analysis; Facebook; Twitter; enterprise social network analysis; formal statistical model; geographic location; organization graph; public social network; social graph; user interaction graph; Blogs; Communities; Companies; Logistics; Twitter
【Paper Link】 【Pages】:2391-2399
【Authors】: Dong Wang ; Hosung Park ; Gaogang Xie ; Sue B. Moon ; Mohamed Ali Kâafar ; Kavé Salamatian
【Abstract】: In this paper, we study the process of information diffusion in a microblog service by developing a Galton-Watson with Killing (GWK) model. Microblog services offer a unique approach to online information sharing by allowing users to forward messages to others. We describe information propagation as a discrete GWK process based on the Galton-Watson model, which was originally used to model the evolution of family names. Our model captures the interaction between the topology of the social graph and the intrinsic interest of the message. We validate our model on datasets collected from Sina Weibo and Twitter. Sina Weibo is a Chinese microblog web service which reached over 100 million users as of January 2011. Our Sina Weibo dataset contains over 261 thousand tweets with retweets and 2 million retweets from 500 thousand users; the Twitter dataset contains over 1.1 million tweets with retweets and 3.3 million retweets from 4.3 million users. The validation results show that the proposed GWK model fits the information diffusion of microblog services very well in terms of the number of message receivers. We show that our model can be used to generate tweet load, and we analyze the relationships between the parameters of our model and the popularity of the diffused information. To the best of our knowledge, this paper is the first to give a systematic and comprehensive analysis of information diffusion on microblog services that can be used in tweet-like load generators while still guaranteeing popularity distribution characteristics.
【Keywords】: graph theory; social networking (online); Chinese microblog Web service; GWK model; Galton-Watson with Killing model; Galton-Watson-based explicative model; Sina Weibo; Twitter microblog; family name evolution; information diffusion process; information propagation; information spreading genealogy; message intrinsic interest; online information sharing; popularity distribution characteristics; social graph; tweets-like load generators; Analytical models; Integrated circuit modeling; Load modeling; Media; Predictive models; Twitter
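A plain branching-process simulation conveys the flavor of such cascade models: each receiver forwards a message to a random number of followers, and branches die out ("killing") with some probability. The offspring distribution, killing rule, and parameter values below are illustrative stand-ins, not the paper's fitted GWK model.

```python
import random

def cascade_size(mean_children, p_kill, max_size=10**5, rng=random):
    """Total receivers of one message under a branching cascade.

    Each receiver is 'killed' (loses interest) with probability
    p_kill; otherwise it forwards to a random number of followers
    drawn from a geometric stand-in distribution with the given
    mean. Subcritical settings yield finite cascades.
    """
    size, frontier = 1, 1
    q = mean_children / (1.0 + mean_children)  # geometric parameter
    while frontier and size < max_size:
        nxt = 0
        for _ in range(frontier):
            if rng.random() < p_kill:
                continue                # branch dies: "killing"
            while rng.random() < q:     # sample geometric offspring
                nxt += 1
        size += nxt
        frontier = nxt
    return size

random.seed(7)
sizes = [cascade_size(0.8, 0.2) for _ in range(2000)]
print(sum(sizes) / len(sizes))  # small finite mean: subcritical
```

With effective offspring mean 0.8 × (1 − 0.2) = 0.64 < 1, the process is subcritical and the expected cascade size is finite (1/(1 − 0.64) ≈ 2.8), the regime where most real retweet cascades live.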
【Paper Link】 【Pages】:2400-2408
【Authors】: Jilong Xue ; Zhi Yang ; Xiaoyong Yang ; Xiao Wang ; Lijiang Chen ; Yafei Dai
【Abstract】: Online social networks (OSNs) currently face a significant challenge from the existence and continuous creation of fake user accounts (Sybils), which can undermine the quality of the social network service by introducing spam and manipulating online ratings. Recently, there has been much excitement in the research community over exploiting the social network structure to detect Sybils. However, these approaches rely on the assumption that Sybils form a tight-knit community, which may not hold in real OSNs. In this paper, we present VoteTrust, a Sybil detection system that further leverages the user interactions of initiating and accepting links. VoteTrust uses the techniques of trust-based vote assignment and global vote aggregation to evaluate the probability that a user is a Sybil. Using detailed evaluation on a real social network (Renren), we show VoteTrust's ability to prevent Sybils from gathering victims (e.g., a spam audience) by sending a large number of unsolicited friend requests and befriending many normal users, and demonstrate that it can significantly outperform traditional ranking systems (such as TrustRank or BadRank) in Sybil detection.
【Keywords】: graph theory; social networking (online); trusted computing; OSN; Renren; VoteTrust; fake user accounts; friend invitation graph; global vote aggregation; online rating; online social networks; social network service; social network sybils; sybil detection system; trust-based vote assignment; Communities; Equations; Facebook; Mathematical model; Security; Upper bound
【Paper Link】 【Pages】:2409-2417
【Authors】: Siming Li ; Wei Zeng ; Dengpan Zhou ; Xianfeng David Gu ; Jie Gao
【Abstract】: Motivated by mobile sensor networks as in participatory sensing applications, we are interested in developing a practical, lightweight solution for routing in a mobile network. While greedy routing is robust to mobility, location errors and link dynamics, it may get stuck in a local minimum, which then requires non-trivial recovery methods. We follow the approach taken by Sarkar et al. [24] to find an embedding of the network such that greedy routing using the virtual coordinates guarantees delivery, thus eliminating the necessity of any recovery methods. Our new contribution is to replace the in-network computation of the embedding with a preprocessing of the domain before network deployment, and to encode the map from the network domain to the virtual coordinate space by a small number of parameters which can be pre-loaded onto all sensor nodes. As a result, the map depends only on the network domain and is independent of the network connectivity. Each node can directly compute or update its virtual coordinates by applying the locally stored map to its geographical coordinates. This represents the first practical solution for using virtual coordinates for greedy routing in a sensor network and can be easily extended to the case of a mobile network. Being extremely lightweight, greedy routing on the virtual coordinates is shown to be very robust to mobility, link dynamics and non-unit disk graph connectivity models.
【Keywords】: graph theory; mobility management (mobile radio); telecommunication network routing; wireless sensor networks; compact conformal map; disk graph connectivity; greedy routing; lightweight solution; link dynamics; location errors; mobility; network connectivity; nontrivial recovery methods; virtual coordinates; wireless mobile sensor networks; Conformal mapping; Educational institutions; Mobile communication; Mobile computing; Routing; Sensors; Surface treatment
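Greedy forwarding on (virtual) coordinates, and the local minimum it can hit without a delivery-guaranteeing embedding, can be sketched in a few lines. The topology and coordinates below are a hypothetical example, not the paper's conformal map.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_route(coords, neighbors, src, dst):
    """Greedy forwarding on (virtual) coordinates.

    At each step, forward to the neighbor closest to the
    destination; return None on a local minimum (no neighbor is
    closer than the current node). With a delivery-guaranteeing
    greedy embedding, the local-minimum branch is never taken.
    """
    path, cur = [src], src
    while cur != dst:
        nxt = min(neighbors[cur], key=lambda v: dist(coords[v], coords[dst]))
        if dist(coords[nxt], coords[dst]) >= dist(coords[cur], coords[dst]):
            return None  # stuck: a recovery method would be needed
        path.append(nxt)
        cur = nxt
    return path

# Hypothetical 4-node chain with simple planar coordinates.
coords = {0: (0, 0), 1: (1, 0), 2: (2, 0), 3: (3, 0)}
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(greedy_route(coords, neighbors, 0, 3))  # [0, 1, 2, 3]
```

The paper's contribution is precisely how `coords` is obtained: a precomputed map applied locally to each node's geographic position, rather than an in-network embedding computation.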
【Paper Link】 【Pages】:2418-2426
【Authors】: Novella Bartolini ; Giancarlo Bongiovanni ; Tom La Porta ; Simone Silvestri
【Abstract】: In this paper we point out the vulnerabilities of the virtual force approach to mobile sensor deployment, which is at the basis of many deployment algorithms. For the first time in the literature, we show that some attacks significantly hinder the capability of these algorithms to guarantee satisfactory coverage. An attacker can compromise a few mobile sensors and force them to pursue a malicious purpose by influencing the movement of other legitimate sensors. We give an example of a simple and effective attack, called Opportunistic Movement, and provide an analytical study of its efficacy. We also show through simulations that, in a typical scenario, this attack can reduce coverage by more than 50% by compromising as few as 7% of the nodes. We propose SecureVF, a virtual force deployment algorithm able to neutralize the above-mentioned attack. We show that under SecureVF malicious sensors are detected and then ignored whenever their movement does not comply with the moving strategy that SecureVF prescribes. We also investigate the performance of SecureVF through simulations and compare it to one of the most acknowledged algorithms based on virtual forces. We show that SecureVF enables remarkably improved coverage of the area of interest, at the expense of a small additional energy consumption.
【Keywords】: energy consumption; mobile radio; sensor placement; telecommunication security; wireless sensor networks; SecureVF; energy consumption; legitimate sensors; malicious sensors; mobile sensor; virtual force deployment algorithm; virtual force security vulnerabilities; virtual forces; Decision support systems; Radio frequency; Mobile sensors; security; self-deployment; virtual force approach
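A minimal sketch of the virtual force rule that such deployment algorithms build on: neighbors farther than a threshold distance attract a sensor (to fill coverage gaps), closer ones repel it (to avoid clustering), and each sensor moves along the net force. The gains and threshold below are illustrative parameters, not SecureVF's.

```python
import math

def virtual_force(node, others, d_th=1.0, k_a=0.5, k_r=0.5):
    """Net virtual force on one sensor from its neighbors.

    Neighbors farther than d_th attract the sensor; closer ones
    repel it. The force from each neighbor acts along the line
    connecting the two sensors.
    """
    fx = fy = 0.0
    for ox, oy in others:
        dx, dy = ox - node[0], oy - node[1]
        d = math.hypot(dx, dy)
        if d == 0:
            continue
        mag = k_a * (d - d_th) if d > d_th else -k_r * (d_th - d)
        fx += mag * dx / d
        fy += mag * dy / d
    return fx, fy

# A neighbor at distance 0.5 (below d_th) repels the sensor:
f = virtual_force((0.0, 0.0), [(0.5, 0.0)])
print(f)  # (-0.25, 0.0): pushed away along the -x axis
```

The attack in the abstract works because a compromised node can announce or take positions that exert crafted forces on honest neighbors; SecureVF counters this by checking that observed movements are consistent with the prescribed force rule.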
【Paper Link】 【Pages】:2427-2435
【Authors】: Dongxiao Yu ; Qiang-Sheng Hua ; Yuexuan Wang ; Jiguo Yu ; Francis C. M. Lau
【Abstract】: Multiple-message broadcast is a generalization of the traditional broadcast problem: disseminate k distinct (1 ≤ k ≤ n) messages stored at k arbitrary nodes to the entire network in the fewest timeslots. In this paper, we study this basic communication primitive in unstructured wireless networks under the physical interference model (also known as the SINR model). The unstructured wireless network model assumes unknown network topology, no collision detection, and asynchronous communication. Our proposed randomized distributed algorithm can accomplish multiple-message broadcast in O((D + k) log n + log^2 n) timeslots with high probability, where D is the network diameter and n is the number of nodes in the network. To the best of our knowledge, this work is the first to consider a distributed implementation of multiple-message broadcast in unstructured wireless networks under a global interference model, which may shed some light on how to efficiently solve in general a “global” problem in a “local” fashion with “global” interference constraints in asynchronous wireless ad hoc networks. Apart from the algorithm, we also show an Ω(D + k + log n) lower bound for randomized distributed multiple-message broadcast algorithms under the assumed network model.
【Keywords】: ad hoc networks; probability; radio broadcasting; radiofrequency interference; telecommunication network topology; asynchronous communications; asynchronous wireless ad hoc networks; basic communication primitive; collision detection; distributed multiple-message broadcasting; global interference constraints; global interference model; network topology; physical interference model; probability; randomized distributed algorithm; randomized distributed multiple message broadcast algorithms; unstructured wireless networks; Algorithm design and analysis; Interference; Receivers; Signal to noise ratio; Synchronization; TV; Wireless networks
【Paper Link】 【Pages】:2436-2444
【Authors】: Zhongming Zheng ; Anfeng Liu ; Lin X. Cai ; Zhigang Chen ; Xuemin (Sherman) Shen
【Abstract】: Wireless sensor networks (WSNs) play an increasing role in a wide variety of applications ranging from hostile environment monitoring to telemedicine services. The hardware and cost constraints of sensor nodes, however, make sensors prone to clone attacks and pose great challenges in the design and deployment of an energy-efficient WSN. In this paper, we propose a location-aware clone detection protocol, which guarantees successful clone attack detection and has little negative impact on the network lifetime. Specifically, we utilize the location information of sensors and randomly select witness nodes located in a ring area to verify the privacy of sensors and to detect clone attacks. The ring structure facilitates energy efficient data forwarding along the path towards the witnesses and the sink, and the traffic load is distributed across the network, which improves the network lifetime significantly. Theoretical analysis and simulation results demonstrate that the proposed protocol can approach 100% clone detection probability with trustful witnesses. We further extend the work by studying the clone detection performance with untrustful witnesses and show that the clone detection probability still approaches 98% when 10% of witnesses are compromised. Moreover, our proposed protocol can significantly improve the network lifetime, compared with the existing approach.
【Keywords】: data privacy; probability; telecommunication security; telecommunication traffic; wireless sensor networks; ERCD; WSN; clone attack detection; clone detection probability; data forwarding; energy-efficient clone detection protocol; environment monitoring; location-aware clone detection protocol; network lifetime improvement; ring structure; sensor privacy; telemedicine service; traffic load; wireless sensor network; Broadcasting; Cloning; Indexes; Privacy; Protocols; Security; Wireless sensor networks
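The core check performed by witness nodes in such protocols can be sketched as conflict detection over location claims: a genuine sensor cannot announce two far-apart positions at once, so duplicate IDs with distant claims expose a clone. The claim format and distance threshold below are illustrative, not the paper's protocol.

```python
import math

def detect_clones(location_claims, min_separation=1.0):
    """Flag sensor IDs that announce two far-apart locations.

    location_claims: iterable of (node_id, (x, y)) pairs gathered
    by witness nodes. A genuine node cannot be at two distant
    spots at once, so conflicting claims expose a clone.
    """
    seen = {}
    clones = set()
    for node_id, loc in location_claims:
        for prev in seen.setdefault(node_id, []):
            if math.dist(prev, loc) >= min_separation:
                clones.add(node_id)
        seen[node_id].append(loc)
    return clones

claims = [("a", (0.0, 0.0)), ("b", (5.0, 5.0)),
          ("a", (0.1, 0.0)),   # same node, small localization jitter
          ("b", (40.0, 2.0))]  # far-apart duplicate claim: a clone
print(detect_clones(claims))   # {'b'}
```

The paper's contribution lies in where this check runs: witnesses are randomly chosen within a ring area so that claims for the same ID meet with high probability while the forwarding load stays balanced and energy-efficient.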
【Paper Link】 【Pages】:2445-2453
【Authors】: Yashar Ghiassi-Farrokhfal ; Jörg Liebeherr
【Abstract】: Capacity and buffer sizes are critical design parameters in schedulers which multiplex many flows. Previous studies show that in an asymptotic regime, when the number of traffic flows N goes to infinity, the choice of scheduling algorithm does not have a big impact on performance. We raise the question of whether the choice of scheduling algorithm impacts capacity and buffer sizing for moderate values of N (e.g., a few hundred). For Markov-modulated On-Off sources and finite N, we show that the choice of scheduling algorithm influences (1) the buffer overflow probability, (2) capacity provisioning, and (3) the viability of network decomposition in a non-asymptotic regime. This conclusion is drawn from numerical examples and a comparison of the scaling properties of different scheduling algorithms. In particular, we show that the per-flow capacity converges to the per-flow long-term average arrival rate with convergence speeds ranging from O(√log N/N) to O(1/N), depending on the scheduling algorithm. This difference in the convergence speed of the required capacity (to meet a target buffer overflow probability) is perceptible across schedulers even for moderate values of N in our numerical examples.
【Keywords】: Markov processes; buffer storage; computational complexity; convergence of numerical methods; probability; scheduling; Markov-modulated on-off sources; buffer overflow probability; buffer sizing; capacity provisioning; capacity sizes; convergence speeds; design parameters; network decomposition viability; nonasymptotic regime; schedulers; scheduling algorithm; tiny buffers; traffic flows; Aggregates; Capacity planning; Convergence; Multiplexing; Optical buffering; Probabilistic logic; Scheduling algorithms
【Paper Link】 【Pages】:2454-2462
【Authors】: Sachin Kadloor ; Negar Kiyavash
【Abstract】: Traditionally, scheduling policies have been optimized to perform well on metrics such as throughput, delay, and fairness. In the context of shared event schedulers, where a common processor is shared among multiple users, one also has to consider the privacy offered by the scheduling policy. The privacy offered by a scheduling policy measures how much information about the usage pattern of one user of the system can be learnt by another as a consequence of sharing the scheduler. In [1], we introduced an estimation-error based metric to quantify this privacy and showed that the most commonly deployed scheduling policy, first-come-first-served (FCFS), offers very little privacy to its users. We also proposed a parametric non-work-conserving policy which traded off delay for improved privacy. In this work, we ask: is a trade-off between delay and privacy fundamental to the design of scheduling policies? In particular, is there a work-conserving, possibly randomized, scheduling policy that scores high on the privacy metric? Answering the first question, we show that there does exist a fundamental limit on the privacy performance of a work-conserving scheduling policy, and we quantify this limit. Answering the second question, we demonstrate that the round-robin scheduling policy (a deterministic policy) is privacy-optimal within the class of work-conserving policies.
【Keywords】: data privacy; estimation theory; processor scheduling; FCFS; delay optimal policies; estimation error-based metric; first-come-first-served; nonwork-conserving policy; privacy fundamental; privacy metric; round-robin scheduling policy; shared event schedulers; work-conserving scheduling policy; Delays; Estimation error; Optimal scheduling; Privacy; Processor scheduling; Time division multiple access
【Paper Link】 【Pages】:2463-2471
【Authors】: Zhoujia Mao ; Can Emre Koksal ; Ness B. Shroff
【Abstract】: The problem of online job or packet scheduling with hard deadlines has been studied extensively in the single-hop setting, whereas it is notoriously difficult in the multihop setting. This difficulty stems from the fact that packet scheduling decisions at each hop influence and are influenced by decisions at other hops, and only a few provably efficient online scheduling algorithms exist in the multihop setting. We consider a general multihop network topology in which packets with various deadlines and weights arrive at and are destined for different nodes through given routes. We study the problem of joint admission control and packet scheduling in order to maximize the cumulative weight of the packets that reach their destinations within their deadlines. We first focus on uplink transmissions in the tree topology and show that the well-known earliest-deadline-first (EDF) algorithm achieves the same performance as the optimal off-line algorithm for any feasible arrival pattern. We then address the general topology with multiple source-destination pairs, develop a simple online algorithm, and show that it is O(P_M log P_M)-competitive, where P_M is the maximum route length among all packets. Our algorithm only requires information along the route of each packet, and our result is valid for general arrival samples. Via numerical results, we show that our algorithm achieves performance comparable to the non-causal optimal off-line algorithm. To the best of our knowledge, this is the first algorithm with a provable (sample-path-based) competitive ratio subject to hard deadline constraints for general network topologies.
【Keywords】: telecommunication congestion control; telecommunication network topology; admission control; competitive ratio; earliest deadline first algorithm; hard deadline constraint; multihop communication network; multihop network topology; multiple source-destination pairs; online packet scheduling; uplink transmission; Admission control; Delays; Network topology; Scheduling algorithms; Spread spectrum communication; Topology; Uplink
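The earliest-deadline-first rule that the abstract proves optimal for tree uplinks is easy to state for a single link: at every transmission slot, serve the pending packet whose deadline is closest, discarding packets that have already expired. A minimal single-server sketch of that rule (the function name and the one-packet-per-slot service model are illustrative assumptions, not the paper's multihop algorithm):

```python
import heapq

def edf_schedule(packets, horizon):
    """packets: (arrival_slot, deadline_slot, weight) tuples.
    One packet is served per slot; EDF always serves the pending
    packet with the earliest deadline.  A packet served in slot t
    meets its deadline if t < deadline.  Returns delivered weight."""
    packets = sorted(packets)              # order by arrival slot
    pending, delivered, i = [], 0, 0
    for t in range(horizon):
        while i < len(packets) and packets[i][0] <= t:
            _, deadline, weight = packets[i]
            heapq.heappush(pending, (deadline, weight))
            i += 1
        while pending and pending[0][0] <= t:   # drop expired packets
            heapq.heappop(pending)
        if pending:
            delivered += heapq.heappop(pending)[1]
    return delivered
```

For feasible arrival patterns on a single link, this greedy choice delivers every packet, which is the single-hop intuition behind the tree-topology result.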
【Paper Link】 【Pages】:2472-2478
【Authors】: Leana Golubchik ; Samir Khuller ; Koyel Mukherjee ; Yuan Yao
【Abstract】: Frequently, ISPs charge for Internet use not based on peak bandwidth usage, but according to a percentile (often the 95th percentile) cost model. In other words, the time slots with the top 5 percent (in the case of the 95th percentile) of data transmission volume do not affect the cost of transmission. Instead, we are charged based on the volume of traffic sent in the 95th-percentile slot. In such an environment, by allowing a short delay in the transmission of some data, we may be able to reduce our cost considerably. We provide an optimal solution to the offline version of this problem (in which the job arrivals are known), for any delay D > 0. The algorithm works for any choice of percentile. We also show that there is no efficient deterministic online algorithm for this problem. However, for a slightly different problem, where the maximum amount of data transmitted is used for cost accounting, we provide an online algorithm with a competitive ratio of (2D+1)/(D+1). Furthermore, we prove that no online algorithm can achieve a competitive ratio better than (2D+1)/(D+F(D)), where F(D) = Σ_{i=1}^{D+1} i/(D+i), for any D > 0 in an adversarial setting. We also provide a heuristic, based on the solution to the offline percentile problem, that can be used in an online setting where the network traffic has a strong correlation over consecutive accounting cycles. Experimental results illustrate the performance of the algorithms proposed in this work.
【Keywords】: Internet; cost reduction; data communication; heuristic programming; telecommunication traffic; ISP; Internet service provider; consecutive accounting cycles; cost accounting; cost reduction; data transmission volume; heuristic programming; network traffic; offline percentile problem; online algorithm; percentile cost model; Bandwidth; Delays; Heuristic algorithms; Optimized production technology; Schedules; Servers; Vectors
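The percentile cost model described above is concrete enough to sketch: sort the per-slot volumes, discard the top (1 − pct) fraction of slots for free, and bill the highest remaining slot. A small illustration of that accounting rule (the function name is ours; the slot granularity does not matter to the rule):

```python
import math

def percentile_bill(volumes, pct=0.95):
    """Billed volume under percentile billing: the top (1 - pct)
    fraction of slots is free; we pay for the volume of the slot
    sitting at the pct-th percentile of the sorted list."""
    ranked = sorted(volumes)
    idx = math.ceil(pct * len(ranked)) - 1   # 0-based percentile slot
    return ranked[idx]
```

With 100 five-minute slots of volumes 1..100, the five busiest slots (96..100) are free and the bill is based on the slot with volume 95, which is why deferring a little traffic out of the top slots can cut the bill sharply.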
【Paper Link】 【Pages】:2481-2489
【Authors】: Sungwon Yang ; Pralav Dessai ; Mansi Verma ; Mario Gerla
【Abstract】: Many indoor localization techniques that rely on RF signals from wireless Access Points have been proposed in the last decade. In recent years, research on crowdsourced (also known as “Organic”) Wi-Fi fingerprint positioning systems has been attracting much attention. This participatory approach introduces new challenges that no previously proposed technique has taken into account. This paper proposes “FreeLoc”, an efficient localization method addressing three major technical issues posed in crowdsourcing-based systems. Our novel solution facilitates 1) extracting accurate fingerprint values from short RSS measurement times, 2) calibration-free positioning across different devices, and 3) maintaining a single fingerprint for each location in a radio map, irrespective of the number of uploaded data sets for a given location. Through experiments using four different smartphones, we evaluate our new indoor positioning method. The experimental results confirm that the proposed scheme provides consistent localization accuracy in environments where device heterogeneity and multiple-surveyor problems exist.
【Keywords】: indoor radio; wireless LAN; RF signal; Wi-Fi fingerprint positioning systems; calibration-free crowdsourced indoor localization method; crowdsourcing based systems; device heterogeneity; indoor positioning method; localization accuracy; multiple surveyor problem; radio map; uploaded data sets; wireless access points; Accuracy; Antenna measurements; Buildings; Calibration; Databases; IEEE 802.11 Standards; Servers
【Paper Link】 【Pages】:2490-2498
【Authors】: Yinjie Chen ; Zhongli Liu ; Xinwen Fu ; Benyuan Liu ; Wei Zhao
【Abstract】: In many wireless localization applications, we rotate a directional antenna to derive the angle of arrival (AOA) of wireless signals transmitted from a target mobile device. The AOA corresponds to the direction in which the maximum received signal strength (RSS) is sensed. However, an unanswered question is how to make sure the directional antenna picks up the packets producing the maximum RSS while rotating. We propose a novel RSS sampling theory to answer this question. We view the process by which a rotating directional antenna measures the RSS of wireless packets as a sampling of the antenna's radiation pattern. Therefore, if the RSS samples can reconstruct the antenna's radiation pattern, the direction corresponding to the peak of the radiation pattern is the AOA of the target. We derive mathematical models to determine the RSS sampling rate given the target's packet transmission rate. Our RSS sampling theory is applicable to various types of directional antennas. To validate it, we developed BotLoc, a programmable and self-coordinated robot armed with a wireless sniffer. We conducted extensive real-world experiments, and the experimental results match the theory very well. A video of BotLoc is at www.youtube.com/watch?v=WtUt0IqhXRU&feature=youtu.be.
【Keywords】: antenna radiation patterns; direction-of-arrival estimation; directive antennas; mobile antennas; mobile radio; mobile robots; signal sampling; AOA; BotLoc robot; RSS sampling theory; angle of arrival; antenna radiation pattern; packet transmission rate; programmable robot; received signal strength; rotating directional antenna; self-coordinated robot; target mobile device; wireless localization application; wireless packet; wireless signal; wireless sniffer; Angular velocity; Antenna measurements; Antenna radiation patterns; Directional antennas; Directive antennas; Wireless communication
【Paper Link】 【Pages】:2499-2507
【Authors】: Senshan Ji ; Kam-Fung Sze ; Zirui Zhou ; Anthony Man-Cho So ; Yinyu Ye
【Abstract】: The successful deployment and operation of location-aware networks, which have recently found many applications, depends crucially on the accurate localization of the nodes. Currently, a powerful approach to localization is that of convex relaxation. In a typical application of this approach, the localization problem is first formulated as a rank-constrained semidefinite program (SDP), where the rank corresponds to the target dimension in which the nodes should be localized. Then, the non-convex rank constraint is either dropped or replaced by a convex surrogate, thus resulting in a convex optimization problem. In this paper, we explore the use of a non-convex surrogate of the rank function, namely the so-called Schatten quasi-norm, in network localization. Although the resulting optimization problem is non-convex, we show, for the first time, that a first-order critical point can be approximated to arbitrary accuracy in polynomial time by an interior-point algorithm. Moreover, we show that such a first-order point is already sufficient for recovering the node locations in the target dimension if the input instance satisfies certain established uniqueness properties in the literature. Finally, our simulation results show that in many cases, the proposed algorithm can achieve more accurate localization results than standard SDP relaxations of the problem.
【Keywords】: convex programming; mathematical programming; polynomials; radio networks; Schatten quasinorm; convex relaxation; convex surrogate; first-order point; interior-point algorithm; location-aware networks; network localization; nonconvex rank constraint; polynomial time; polynomial-time nonconvex optimization approach; rank-constrained SDP; rank-constrained semidefinite program; standard SDP relaxations; Approximation algorithms; Distance measurement; Optimization; Polynomials; Standards; Symmetric matrices; Vectors
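For reference, the Schatten quasi-norm used above as the non-convex rank surrogate is, in its standard form (this definition is background knowledge, not taken from the abstract):

```latex
\|X\|_p \;=\; \Big( \sum_i \sigma_i(X)^p \Big)^{1/p}, \qquad 0 < p < 1,
```

where the σ_i(X) are the singular values of X. As p → 0⁺, ‖X‖_p^p tends to the number of nonzero singular values, i.e., rank(X), which is why it serves as a tighter surrogate for the rank than the (convex, p = 1) nuclear norm.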
【Paper Link】 【Pages】:2508-2516
【Authors】: Xiaojun Zhu ; Qun Li ; Guihai Chen
【Abstract】: This paper presents APT, a localization system for outdoor pedestrians with smartphones. APT performs better than the built-in GPS module of the smartphone in terms of accuracy. This is achieved by introducing a robust dead-reckoning algorithm and an error-tolerant algorithm for map matching. When the user is walking with the smartphone, the dead-reckoning algorithm monitors steps and walking direction in real time. It then reports new steps and turns to the map-matching algorithm, which, based on the updated information, adjusts the user's location on a map in an error-tolerant manner. If location ambiguity among several routes occurs after adjustments, the GPS module is queried to help eliminate the ambiguity. Real-world evaluations show that the error of our system is less than half that of GPS.
【Keywords】: Global Positioning System; handicapped aids; pedestrians; smart phones; target tracking; APT; GPS module; accurate outdoor pedestrian tracking; dead reckoning; error-tolerant algorithm; localization system; map matching; smart phones; Acceleration; Accuracy; Dead reckoning; Global Positioning System; Gyroscopes; Legged locomotion; Smart phones
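The dead-reckoning component that APT builds on reduces, at its core, to a per-step position update: each detected step advances the estimate by a stride length along the current heading. A minimal sketch of that update (the fixed stride length and the heading inputs are simplifying assumptions; the paper's algorithm is considerably more robust than this):

```python
import math

def dead_reckon(start, headings, stride=0.7):
    """start: (x, y) position in metres.  headings: one heading per
    detected step, in radians (0 = east, pi/2 = north).  Each step
    advances the estimate by `stride` metres along its heading."""
    x, y = start
    for heading in headings:
        x += stride * math.cos(heading)
        y += stride * math.sin(heading)
    return x, y
```

Because each step's error compounds, systems like APT correct the accumulated drift externally, here by map matching and occasional GPS queries.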
【Paper Link】 【Pages】:2517-2525
【Authors】: Xinlei Oscar Wang ; Wei Cheng ; Prasant Mohapatra ; Tarek F. Abdelzaher
【Abstract】: With the proliferation of sensor-embedded mobile computing devices, participatory sensing is becoming popular as a way to collect information from and outsource tasks to participating users. These applications deal with a lot of personal information, e.g., users' identities and locations at a specific time. Therefore, we need to pay closer attention to privacy and anonymity. However, from a data consumer's point of view, we want to know the source of the sensing data, i.e., the identity of the sender, in order to evaluate how much the data can be trusted. “Anonymity” and “trust” are two conflicting objectives in participatory sensing networks, and no existing research effort has investigated the possibility of achieving both at the same time. In this paper, we propose ARTSense, a framework to solve the problem of “trust without identity” in participatory sensing networks. Our solution consists of a privacy-preserving provenance model, a data trust assessment scheme and an anonymous reputation management protocol. We show that ARTSense achieves the anonymity and security requirements, and validations show that we can accurately capture the trust of information and the reputation of participants.
【Keywords】: data communication; mobile computing; wireless sensor networks; ARTSense; anonymity; anonymous reputation management protocol; data trust assessment scheme; participatory sensing; proliferation; sensor-embedded mobile computing devices; trust without identity; Privacy; Protocols; Registers; Security; Sensors; Servers; Videos
【Paper Link】 【Pages】:2526-2534
【Authors】: Rui Zhang ; Jinxue Zhang ; Yanchao Zhang ; Chi Zhang
【Abstract】: Cooperative (spectrum) sensing is a key function for dynamic spectrum access and is essential for avoiding interference with licensed primary users and identifying spectrum holes. A promising approach for effective cooperative sensing over a large geographic region is to rely on special spectrum-sensing providers (SSPs), which outsource spectrum-sensing tasks to distributed mobile users. Its feasibility is deeply rooted in the ubiquitous penetration of mobile devices into everyday life. Crowdsourcing-based cooperative spectrum sensing is, however, vulnerable to malicious sensing data injection attacks, in which malicious CR users submit false sensing reports containing power measurements much larger (or smaller) than the true value to inflate (or deflate) the final average, in which case the SSP may falsely determine that the channel is busy (or vacant). In this paper, we propose a novel scheme to enable secure crowdsourcing-based cooperative spectrum sensing by jointly considering the instantaneous trustworthiness of mobile detectors in combination with their reputation scores during data fusion. Our scheme enables robust cooperative sensing even if the malicious CR users are in the majority. The efficacy and efficiency of our scheme have been confirmed by extensive simulation studies.
【Keywords】: mobile radio; signal detection; telecommunication security; cooperative spectrum sensing; distributed mobile use; dynamic spectrum access; instantaneous trustworthiness; large geographic region; malicious CR; malicious sensing data injection attack; mobile detector; secure crowdsourcing; spectrum sensing provider; Cascading style sheets; Detectors; Fading; Mobile communication; Robustness; Silicon
【Paper Link】 【Pages】:2535-2543
【Authors】: Shuai Li ; Haojin Zhu ; Zhaoyu Gao ; Xinping Guan ; Kai Xing
【Abstract】: Collaborative spectrum sensing has been recognized as a promising approach to improve sensing performance by exploiting the spatial diversity of secondary users. In this study, a new selfishness issue is identified: selfish users sense no spectrum in collaborative sensing. For ease of presentation, we denote this entropy selfishness. This selfish behavior is difficult to distinguish, making existing detection-based incentive schemes fail to work. To thwart entropy selfishness in distributed collaborative sensing, we propose YouSense, a One-Time Pad (OTP) based incentive design that naturally isolates entropy-selfish users from honest users without selfish-node detection. The basic idea of YouSense is to construct a trapdoor one-time pad for each sensing report by combining the original report with a random key. Such one-time-pad-based encryption prevents entropy-selfish users from accessing the original sensing report while enabling honest users to recover it. Different from traditional cryptography-based OTP, which requires key delivery, YouSense allows an honest user to recover the pad (or key) by exploiting a unique characteristic of collaborative sensing: different secondary users share some common observations on the same radio spectrum. We further extend YouSense to improve the recovery success rate by reducing the cardinality of the set of possible pads. Through extensive USRP-based experiments, we show that YouSense can successfully thwart entropy selfishness with low system overhead.
【Keywords】: cognitive radio; entropy; radio spectrum management; YouSense; cryptography based OTP; distributed collaborative spectrum sensing; entropy selfish users; entropy selfishness; incentive schemes; key delivery; one-time pad based encryption; one-time pad based incentive design; radio spectrum; secondary users; selfish behavior; selfish node detection; sensing performance; sensing report; spatial diversity; system overhead; trapdoor onetime pad; Collaboration; Encryption; Entropy; Incentive schemes; Sensors; Silicon; Cognitive Radio Security; Collaborative Spectrum Sensing; Incentive Mechanism Design
【Paper Link】 【Pages】:2544-2552
【Authors】: Zhiping Jiang ; Jizhong Zhao ; Xiang-Yang Li ; Jinsong Han ; Wei Xi
【Abstract】: Compared to well-protected data frames, Wi-Fi management frames (MFs) are extremely vulnerable to various attacks. Since MFs are transmitted without encryption or authentication, attackers can easily launch various attacks by forging them. In a collaborative environment with many Wi-Fi sniffers, such attacks can easily be detected by sensing anomalous RSS changes. However, it is quite difficult to identify these spoofing attacks without assistance from other nodes. By exploiting some unique characteristics (e.g., rapid spatial decorrelation, independence of Tx power, and much richer dimensions) of 802.11n Channel State Information (CSI), we design and implement CSITE, a prototype system that authenticates Wi-Fi management frames at the PHY layer using only a single station. CSITE, built upon off-the-shelf hardware, achieves precise spoofing detection without collaboration or in-advance fingerprinting. Several novel techniques are designed to address the challenges caused by user mobility and channel dynamics. To verify the performance of our solution, we conduct extensive evaluations in various scenarios. Our test results show that our design significantly outperforms the RSS-based method: we observe an approximately 8× improvement by CSITE over the RSS-based method in terms of falsely accepted attack frames.
【Keywords】: telecommunication channels; telecommunication network management; telecommunication security; wireless LAN; 802.11n; CSI information; CSITE; PHY layer; RSS-based method; Txpower; Wi-Fi management frames; Wi-Fi sniffers; channel dynamics; channel state information; rapid spatial decorrelation; source authentication; user mobility; Authentication; Cryptography; Decorrelation; IEEE 802.11n Standard; OFDM
【Paper Link】 【Pages】:2553-2561
【Authors】: Yujin Li ; Wenye Wang
【Abstract】: The vehicular ad hoc network (VANET) is one of the most promising large-scale applications of mobile ad hoc networks. VANET applications are rooted in an advanced understanding of communication networks because both control messages and data need to be disseminated in geographic regions (i.e., geocast). The challenges come from the highly dynamic environment of VANETs. Destination nodes in geocast change over time due to vehicle mobility, which undermines existing results on dissemination latency and information propagation speed with pre-determined destinations. Moreover, the area affected by the dissemination, referred to as the horizon of message (HOM), is critical in geocast, as it determines the latency for the message to reach nodes inside the area of interest (AOI), in which the message is relevant to drivers. Therefore, we characterize the HOM in geocast by how far the message can reach within time t (referred to as the dissemination distance) and how long the message takes to inform nodes at certain locations (referred to as the hitting time). Analytic bounds on the dissemination distance and the hitting time are derived under four types of dissemination mechanisms, which provide insights into the spatial and temporal limits of the HOM as well as how the number of disseminators and geographic information exchanges affect them. Applying analytic and simulation results to two real applications, we observe that geocast with an AOI near the source, or with a high reliability requirement, should recruit multiple disseminators, while geocast with an AOI far from the source needs to utilize geographic information for fast message propagation.
【Keywords】: information dissemination; telecommunication network reliability; vehicular ad hoc networks; AOI; HOM; VANET applications; analytic bounds; area of interest; communication networks; control messages; data information; destination nodes; dissemination distance; dissemination latency; dissemination mechanisms; fast message propagation; geocast; geographic information exchanges; geographic regions; hitting time; horizon of message; information propagation speed; intermittently connected vehicular ad hoc networks; message reaching nodes; mobile ad hoc networks; vehicle mobility; Simulation; Upper bound; Vectors; Vehicle dynamics; Vehicles; Vehicular ad hoc networks
【Paper Link】 【Pages】:2562-2570
【Authors】: Tom H. Luan ; Xuemin (Sherman) Shen ; Fan Bai
【Abstract】: The effective inter-vehicle transmission of content files, e.g., images, music and video clips, is the basis of media communications in vehicular networks, such as social communications and video sharing. However, due to the presence of diverse node velocities, severe channel fading and intensive mutual interference among vehicles, inter-vehicle or vehicle-to-vehicle (V2V) communications tend to be transient and highly dynamic. Content transmissions among vehicles over the volatile and spotty V2V channels are thus susceptible to frequent interruptions and failures, resulting in many fragmentary content transmissions which cannot finish during the connection time and are unusable by on-top media applications. The interruption of content transmissions not only leads to the failure of media presentations to users; the transmission of invalid content fragments also results in a significant waste of precious vehicular bandwidth. To address this issue, in this work we target the provisioning of integrity-oriented inter-vehicle content transmissions. Given the initial distance and mobility statistics of vehicles, we develop an analytical framework to evaluate the data volume that can be transmitted over the short-lived and spotty V2V connection from the source to the destination vehicle. Given the content file size, we are able to evaluate the likelihood of successful content transmission through the model. Based on this analysis, we propose an admission control scheme at the transmitters which filters out suspicious content transmission requests that are unlikely to be completed over the transient inter-vehicle links. Using extensive simulations, we demonstrate the accuracy of the developed analytical model and the effectiveness of the proposed admission control scheme.
In the simulated scenario, with the proposed admission control scheme applied, it is observed that about 30% of the network bandwidth can be saved for effective content transmissions.
【Keywords】: data communication; telecommunication channels; telecommunication network reliability; telecommunication security; vehicular ad hoc networks; V2V communications; admission control scheme; content file size; data volume; diverse node velocities; integrity oriented intervehicle content transmissions; intensive mutual interferences; severe channel fadings; successful content transmissions; suspicious content transmission requests; transient intervehicle links; vehicle to vehicle; vehicular ad hoc networks; Analytical models; Fading; Media; Physical layer; Road transportation; Throughput; Vehicles
【Paper Link】 【Pages】:2571-2579
【Authors】: Xu Li ; Chunming Qiao ; Yunfei Hou ; Yunjie Zhao ; Aditya Wagh ; Adel W. Sadek ; Liusheng Huang ; Hongli Xu
【Abstract】: We consider a promising application in Vehicular Cyber-Physical Systems (VCPS) called On-road Ad Delivery (OAD), where targeted advertisements are delivered via roadside APs to attract commuters to nearby shops. Different from most existing works on VANETs, which focus only on a single technical area, this work on OAD involves technical elements from human factors, cyber systems and transportation systems, since a commuter's shopping decision depends on, e.g., the attractiveness of the ads, the induced detour, and traffic conditions on different routes. In this paper, we address a new optimization problem in OAD whose goal is to schedule ad messages and allocate a limited amount of AP bandwidth so as to maximize the system-wide performance in terms of the total realized utilities (TRU) of the delivered ads. A number of efficient heuristics are proposed to deal with ad message scheduling and AP bandwidth allocation. Besides large-scale simulations, we also present a case study in a more realistic scenario utilizing real traces collected from taxis in the city of Shanghai. In addition, we use a commercial traffic simulator (PARAMICS) to show that our proposed solutions are also useful for traffic management in terms of balancing vehicular traffic and alleviating congestion.
【Keywords】: bandwidth allocation; optimisation; telecommunication traffic; vehicular ad hoc networks; OAD; PARAMICS; VANET; VCPS; bandwidth allocation; commercial traffic simulator; cyber system; human factor; on-road ads delivery scheduling; optimization problem; total realized utilities; traffic management; transportation system; vehicular CPS; vehicular cyber-physical system; vehicular traffic; Bandwidth; Channel allocation; Protocols; Roads; Unicast; Wireless communication; Daily Commuters; Human-in-the-Loop; On-road Ad Delivery; Shopping Activities; Vehicular CPS
【Paper Link】 【Pages】:2580-2588
【Authors】: Yuchen Wu ; Yanmin Zhu ; Hongzi Zhu ; Bo Li
【Abstract】: Given the unique characteristics of vehicular networks, specifically frequent communication unavailability and short encounter times, packet replication has been commonly used to facilitate data delivery. Replication enables multiple copies of the same packet to be forwarded towards the destination, which increases the chance of delivery to the target destination. However, this is achieved at the expense of consuming extra, already scarce bandwidth in vehicular networks. Therefore, it is crucial to investigate the fundamental problem of exploiting constrained network capacity with packet replication. We make the first attempt in this work to address this challenging problem. We first conduct extensive empirical analysis using three large datasets of real vehicle GPS traces. We show that a replication scheme that either underestimates or overestimates the network capacity results in poor delivery performance. Based on this observation, we propose a Capacity-Constrained Replication (CCR) scheme for data delivery in vehicular networks. The key idea is to exploit the residual capacity for packet replication. We introduce an analytical model characterizing the relationship among the number of replicated copies of a packet, the replication limit and the queue length. Based on this insight, we derive a rule for adaptive adjustment towards the optimal replication strategy. We then design a distributed algorithm that dictates how each vehicle can adaptively determine its replication strategy subject to the current network capacity. Extensive simulations based on real vehicle GPS traces show that our proposed CCR can significantly improve the delivery ratio compared with state-of-the-art algorithms.
【Keywords】: Global Positioning System; mobile radio; queueing theory; CCR; capacity-constrained replication; communication unavailability; constrained network capacity; data delivery; delivery ratio; extensive empirical analysis; optimal replication strategy; packet replication; queue length; replication limit; residual capacity; vehicle GPS traces; vehicular networks; Analytical models; Bandwidth; Barium; Global Positioning System; Performance gain; Queueing analysis; Vehicles; analytical model; data delivery; network capacity; packet replication; trace-driven simulations; vehicular networks
【Paper Link】 【Pages】:2589-2597
【Authors】: Bo Ji ; Gagan Raj Gupta ; Xiaojun Lin ; Ness B. Shroff
【Abstract】: In this paper, we focus on the scheduling problem in multi-channel wireless networks, e.g., the downlink of a single cell in fourth generation (4G) OFDM-based cellular networks. Our goal is to design efficient scheduling policies that can achieve provably good performance in terms of both throughput and delay, at a low complexity. While a recently developed scheduling policy, called Delay Weighted Matching (DWM), has been shown to be both rate-function delay-optimal (in the many-channel many-user asymptotic regime) and throughput-optimal (in the general non-asymptotic setting), it has a high complexity, O(n^5), which makes it impractical for modern OFDM systems. To address this issue, we first develop a simple greedy policy called Delay-based Queue-Side-Greedy (D-QSG) with a lower complexity, O(n^3), and rigorously prove that D-QSG not only achieves throughput optimality, but also guarantees near-optimal rate-function-based delay performance. Specifically, the rate-function attained by D-QSG, for any fixed integer threshold b > 0, is no smaller than the maximum rate-function achievable by any scheduling policy for threshold b-1. Further, we develop another simple greedy policy called Delay-based Server-Side-Greedy (D-SSG) with an even lower complexity, O(n^2), and show that D-SSG achieves the same performance as D-QSG. Thus, we are able to achieve a dramatic reduction in complexity (from the O(n^5) of DWM to O(n^2)) with a minimal drop in delay performance. Finally, we conduct numerical simulations to validate our theoretical results in various scenarios. The simulation results show that our proposed greedy policies not only guarantee a near-optimal rate-function, but are also empirically virtually indistinguishable from the delay-optimal policy DWM.
【Keywords】: 4G mobile communication; cellular radio; greedy algorithms; numerical analysis; queueing theory; scheduling; wireless channels; 4G OFDM-based cellular networks; D-QSG; D-SSG; delay weighted matching; delay-based queue-side-greedy; delay-based server-side-greedy; delay-optimal policy DWM; fixed integer; fourth generation OFDM-based cellular networks; low-complexity greedy scheduling policies; multichannel wireless networks; near-optimal rate-function-based delay performance; numerical simulations; rate-function delay-optimal; throughput optimality; Complexity theory; Delays; Indexes; Markov processes; Scheduling; Servers; Throughput
【Paper Link】 【Pages】:2598-2606
【Authors】: Po-Kai Huang ; Xiaojun Lin
【Abstract】: CSMA algorithms have recently received a significant amount of interest in the literature for designing efficient wireless control algorithms. CSMA algorithms are attractive because they incur low computational complexity and communication overhead, and can be shown to achieve the optimal capacity under certain assumptions. However, it has also been observed that CSMA algorithms suffer from the starvation problem and incur large delays that may grow exponentially with the network size. In this paper, we propose a new algorithm, called Virtual-Multi-Channel (VMC-) CSMA, that can dramatically reduce delay without sacrificing the high capacity and low complexity of CSMA. The key idea of VMC-CSMA for avoiding the starvation problem is to use multiple virtual channels to emulate a multi-channel system and compute a good set of feasible schedules simultaneously (without constantly switching or re-computing schedules). Under the protocol interference model and a single-hop utility-maximization setting, our proposed VMC-CSMA algorithm can approach arbitrarily close to the optimal total system utility, with both the number of virtual channels and the computational complexity increasing logarithmically with the network size. The VMC-CSMA algorithm inherits the distributed nature of CSMA algorithms. Further, once the algorithm converges to the steady state, the expected packet delay for each link equals the inverse of its long-term average rate, and the distribution of its head-of-line (HOL) waiting time can also be asymptotically bounded. Our simulation results confirm that the proposed VMC-CSMA algorithm indeed achieves both high throughput and low delay, and can quickly adapt to network traffic changes.
【Keywords】: carrier sense multiple access; computational complexity; telecommunication traffic; CSMA algorithms; communication overhead; computation complexity; delay performance; head-of-line; network traffic; optimal capacity; starvation problem; virtual multichannel approach; wireless control algorithms; Algorithm design and analysis; Complexity theory; Delays; Multiaccess communication; Schedules; Standards; Throughput
【Paper Link】 【Pages】:2607-2615
【Authors】: Rahul Urgaonkar ; Ram Ramanathan ; Jason Redi ; William N. Tetteh
【Abstract】: We investigate optimal channel assignment algorithms that maximize per-node throughput in dense multi-channel multi-radio (MC-MR) wireless networks. Specifically, we consider an MC-MR network where all nodes are within transmission range of each other. This situation is encountered in many real-life settings, such as students in a lecture hall, delegates attending a conference, or soldiers on a battlefield. In this scenario, we show that intelligent assignment of the available channels results in a significantly higher per-node throughput. We first propose a class of channel assignment algorithms, parameterized by T (the number of transceivers per node), that can achieve Θ(1/N^(1/T)) per-node throughput using Θ(T·N^(1-1/T)) channels. In view of practical constraints on T, we then propose another algorithm that can achieve Θ(1/(log₂ N)^2) per-node throughput using only two transceivers per node. Finally, we identify a fundamental relationship between the achievable per-node throughput, the total number of channels used, and the network size under any strategy. Using analysis and simulations, we show that our algorithms achieve close-to-optimal performance at different operating points on this curve. Our work has several interesting implications for the optimal design of dense MC-MR wireless networks.
【Keywords】: channel allocation; radio networks; radio transceivers; channel assignment; dense multi-channel multi-radio wireless networks; intelligent assignment; optimal network design; transceivers; Algorithm design and analysis; Indexes; Level set; Routing; Throughput; Transceivers; Wireless networks
【Paper Link】 【Pages】:2616-2624
【Authors】: Ruogu Li ; Atilla Eryilmaz ; Bin Li
【Abstract】: Motivated by the low-jitter requirements of streaming multimedia traffic, we focus on the development of scheduling strategies under fading conditions that not only maximize throughput performance but also provide regular inter-service times to users. Since the service regularity of the traffic is related to the higher-order statistics of the arrival process and the policy operation, it is highly challenging to characterize and analyze directly. We overcome this obstacle by introducing a new quantity, namely the time-since-last-service, whose evolution differs from that of a traditional queue. By combining it with the queue length in the weight, we propose a novel maximum-weight-type scheduling policy that is proven to be throughput-optimal and also provides provable service regularity guarantees. In particular, our algorithm can achieve a degree of service regularity within a constant factor of a fundamental lower bound we derive. This constant is independent of the higher-order statistics of the arrival process and can be as low as two. Our results, both analytical and numerical, exhibit significant service regularity improvements over traditional throughput-optimal policies, which reveals the importance of incorporating the time-since-last-service metric into the scheduling policy for providing regulated service.
【Keywords】: multimedia communication; scheduling; statistical analysis; telecommunication traffic; arrival process; degree of service regularity; higher-order statistics; inter-service times; maximum-weight type scheduling policy; multimedia traffic; queue-length; regulated inter-service times; scheduling strategies; service regularity; throughput performance; throughput-optimal policies; throughput-optimal wireless scheduling; time-since-last-service; unique evolution; Delays; Fading; Stability analysis; Steady-state; Throughput; Vectors
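The core policy idea, combining the queue length with the time-since-last-service (TSLS) in a max-weight rule, can be sketched as follows. This is a minimal illustration, not the paper's exact policy; the weighting parameter `beta`, the unit-service model, and all function names are assumptions made for the sketch.

```python
def regular_max_weight_schedule(queues, tsls, beta=1.0):
    """Pick the link with the largest combined weight of queue length
    and time-since-last-service (TSLS). A larger beta favors links that
    have waited long since their last service, improving regularity."""
    weights = [q + beta * t for q, t in zip(queues, tsls)]
    return max(range(len(queues)), key=lambda i: weights[i])

def step(queues, tsls, arrivals, served_rate=1):
    """One slot: serve the chosen link, then update queues and TSLS.
    The served link's TSLS resets to 0; every other link's TSLS grows."""
    i = regular_max_weight_schedule(queues, tsls)
    queues[i] = max(0, queues[i] - served_rate)
    for j in range(len(tsls)):
        tsls[j] = 0 if j == i else tsls[j] + 1
    for j, a in enumerate(arrivals):
        queues[j] += a
    return i
```

With `beta=0` the rule degenerates to plain queue-length max-weight; increasing `beta` trades some short-term queue reduction for more regular inter-service times.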
【Paper Link】 【Pages】:2625-2633
【Authors】: Taeho Jung ; Xiang-Yang Li ; Zhiguo Wan ; Meng Wan
【Abstract】: Cloud computing is a revolutionary computing paradigm which enables flexible, on-demand, and low-cost usage of computing resources. Those advantages, ironically, are the causes of security and privacy problems, which emerge because data owned by different users are stored in cloud servers instead of under the owners' own control. To deal with the security problems, various schemes based on Attribute-Based Encryption have been proposed recently. However, the privacy problem of cloud computing is yet to be solved. This paper presents an anonymous privilege control scheme, AnonyControl, to address not only the data privacy problem in cloud storage, but also the user identity privacy issues in existing access control schemes. By using multiple authorities in the cloud computing system, our proposed scheme achieves anonymous cloud data access and fine-grained privilege control. Our security proof and performance analysis show that AnonyControl is both secure and efficient for cloud computing environments.
【Keywords】: authorisation; cloud computing; cryptography; data privacy; access control schemes; anonymous cloud data access; anonymous privilege control scheme AnonyControl; attribute-based encryption; cloud computing privacy problem; cloud servers; cloud storage; fine-grained privilege control; multiauthorities; privacy preserving cloud data access; user identity privacy issues; Cloud computing; Encryption; Generators; Gold; Public key; Servers
【Paper Link】 【Pages】:2634-2642
【Authors】: Taeho Jung ; XuFei Mao ; Xiang-Yang Li ; Shaojie Tang ; Wei Gong ; Lan Zhang
【Abstract】: Much research has been conducted to securely outsource multiple parties' data aggregation to an untrusted aggregator without disclosing each individual's privately owned data, or to enable multiple parties to jointly aggregate their data while preserving privacy. However, those works either require secure pair-wise communication channels or suffer from high complexity. In this paper, we consider how an external aggregator or multiple parties can learn some algebraic statistics (e.g., sum, product) over participants' privately owned data while preserving the data privacy. We assume all channels are subject to eavesdropping attacks, and all the communications throughout the aggregation are open to others. We propose several protocols that successfully guarantee data privacy under this weak assumption while limiting both the communication and computation complexity of each participant to a small constant.
【Keywords】: computational complexity; data privacy; telecommunication channels; computation complexity; multivariate polynomial evaluation; pairwise communication channels; privacy-preserving data aggregation; secure channel; Communication channels; Complexity theory; Computational modeling; Cryptography; Polynomials; Protocols; Privacy; SMC; aggregation; homomorphic; secure channels
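The flavor of privacy-preserving sum aggregation can be illustrated with simple additive masking: each participant perturbs its value with pairwise random masks that cancel in the aggregate, so the aggregator learns only the sum. This is a generic textbook sketch, not the paper's protocol — in particular, the paper avoids secure pairwise channels, whereas this toy version assumes the pairwise masks have somehow been agreed upon; the modulus `p` and the seed are illustrative.

```python
import random

def masked_reports(values, p=2**31 - 1, seed=0):
    """Each of n participants publishes value_i + sum of its pairwise
    masks (mod p). For each pair (i, j), i adds a random r and j
    subtracts the same r, so all masks cancel in the aggregate."""
    n = len(values)
    rng = random.Random(seed)
    masks = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = rng.randrange(p)
            masks[i][j] = r    # participant i adds r
            masks[j][i] = -r   # participant j subtracts r
    return [(values[i] + sum(masks[i])) % p for i in range(n)]

def aggregate(reports, p=2**31 - 1):
    """The untrusted aggregator sums the masked reports; the pairwise
    masks cancel, leaving exactly the sum of the private values."""
    return sum(reports) % p
```

Each individual report is statistically masked, yet the aggregator recovers the exact sum (assuming it fits below `p`).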
【Paper Link】 【Pages】:2643-2651
【Authors】: Zhigang Zhou ; Hongli Zhang ; Xiaojiang Du ; Panpan Li ; Xiang-Zhan Yu
【Abstract】: With the advent of cloud computing, data owners are motivated to outsource their data to cloud platforms for great flexibility and economic savings. However, this development is hampered by data privacy concerns: data owners may hold private data that cannot be outsourced to the cloud directly. Previous solutions mainly use encryption. However, encryption causes considerable inconvenience and large overheads for other data operations, such as search and query. To address this challenge, we adopt a hybrid cloud. In this paper, we present a suite of novel techniques for efficient privacy-aware data retrieval. The basic idea is to split the data, keeping sensitive data in a trusted private cloud while moving insensitive data to the public cloud. However, privacy-aware data retrieval on a hybrid cloud is not supported by current frameworks, so data owners have to split their data manually. Our system, called Prometheus, adopts the popular MapReduce framework and uses a data partition strategy independent of specific applications. Prometheus can automatically separate sensitive information from public data. We formally prove the privacy-preserving property of Prometheus. We also show that our scheme can defend against the malicious cloud model, in addition to the semi-honest cloud model. We implement Prometheus on Hadoop and evaluate its performance using a real data set on a large-scale cloud test-bed. Our extensive experiments demonstrate the validity and practicality of the proposed scheme.
【Keywords】: cloud computing; cryptography; data privacy; outsourcing; query processing; trusted computing; Hadoop; MapReduce framework; Prometheus; cloud platform; data operations; data outsourcing; data owner; data partition strategy; data privacy concerns; economic savings; encryption; hybrid cloud computing; large-scale cloud test-bed; malicious cloud model; privacy-aware data retrieval; privacy-preserving features; public data; semihonest cloud model; sensitive information; trusted private cloud; Algorithm design and analysis; Cloud computing; Data privacy; Encryption; Partitioning algorithms; Privacy; MapReduce; data partition; data retrieval; hybrid cloud; privacy-aware
【Paper Link】 【Pages】:2652-2660
【Authors】: Jiawei Yuan ; Shucheng Yu
【Abstract】: Biometric identification is a reliable and convenient way of identifying individuals. The widespread adoption of biometric identification requires solid privacy protection against possible misuse, loss, or theft of biometric data. Existing techniques for privacy-preserving biometric identification primarily rely on conventional cryptographic primitives such as homomorphic encryption and oblivious transfer, which inevitably introduce tremendous cost to the system and are not applicable to practical large-scale applications. In this paper, we propose a novel privacy-preserving biometric identification scheme which achieves efficiency by exploiting the power of cloud computing. In our proposed scheme, the biometric database is encrypted and outsourced to the cloud servers. To perform a biometric identification, the database owner generates a credential for the candidate biometric trait and submits it to the cloud. The cloud servers perform identification over the encrypted database using the credential and return the result to the owner. During the identification, the cloud learns nothing about the original private biometric data. Because the identification operations are securely outsourced to the cloud, the real-time computational/communication costs at the owner side are minimal. Thorough analysis shows that our proposed scheme is secure and offers a higher level of privacy protection than related solutions such as kNN search in encrypted databases. Real experiments on the Amazon cloud, over databases of different sizes, show that the computational/communication costs at the owner side are several orders of magnitude lower than those of existing biometric identification schemes.
【Keywords】: biometrics (access control); cloud computing; data privacy; Amazon cloud; biometric trait; cloud computing; cloud servers; encrypted biometric database; privacy protection; privacy-preserving biometric identification; private biometric data; realtime computational-communication costs; Biological system modeling; Cryptography; Euclidean distance; Indexes; Servers; Vectors
【Paper Link】 【Pages】:2661-2669
【Authors】: Hao Huang ; Jihoon Yun ; Ziguo Zhong ; Song Min Kim ; Tian He
【Abstract】: Low-duty-cycle radio operations have been proposed for wireless networks facing severe energy constraints. Despite energy savings, duty-cycling the radio creates transient-available wireless links, making communication rendezvous a challenging task under the practical issue of clock drift. To overcome limitations of prior work, this paper presents PSR, a practical design for synchronous rendezvous in low-duty-cycle wireless networks. The key idea behind PSR is to extract timing information naturally embedded in the pattern of radio duty-cycling, so that normal traffic in the network can be utilized as a “free” input for drift detection, which helps reduce (or even eliminate) the overhead of traditional time-stamp exchange with dedicated packets or bits. To prevent an overuse of such free information, leading to energy waste, an energy-driven adaptive mechanism is developed for clock calibration to balance between energy efficiency and rendezvous accuracy. PSR is evaluated with both test-bed experiments and extensive simulations, by augmenting and comparing with four different MAC protocols. Results show that PSR is practical and effective under different levels of traffic load, and can be fused with those MAC protocols to improve their energy efficiency without major change of the original designs.
【Keywords】: access protocols; radio links; radio networks; telecommunication traffic; MAC protocol; PSR; clock calibration; clock drift; energy constraints; energy efficiency; energy savings; energy waste; energy-driven adaptive mechanism; low-duty-cycle wireless networks; practical synchronous rendezvous; radio duty-cycling; time-stamp exchange; timing information extraction; traffic load; transient-available wireless links; Calibration; Clocks; Estimation; Media Access Protocol; Schedules; Synchronization; Wireless networks
【Paper Link】 【Pages】:2670-2678
【Authors】: Yang Peng ; Zi Li ; Daji Qiao ; Wensheng Zhang
【Abstract】: We present a novel holistic approach (called I2C - Intra-route and Inter-route Coordination) to prolong the sensor network lifetime under an end-to-end delivery delay constraint. I2C is composed of two lifetime-balancing modules: (i) the Intra-Route Coordination module, which allows the nodes on the same route to balance their nodal lifetimes by collaboratively adjusting their MAC behaviors; and (ii) the Inter-Route Coordination module, which balances the nodal lifetimes across different routes by adjusting the communication routes. Unlike existing works, which conduct either intra-route or inter-route lifetime balancing, or a simple combination of the two, I2C leverages the advantages of both techniques with a sophisticated design that emphasizes awareness and collaboration between the two modules. Thus, I2C is able to prolong the network lifetime much more effectively than state-of-the-art solutions, while guaranteeing the desired delay bound and maintaining a similar level of network power consumption. This has been demonstrated with extensive ns-2 simulations and TinyOS experiments.
【Keywords】: power consumption; telecommunication network routing; wireless sensor networks; I2C; MAC behaviors; TinyOS experiment; communication routes; end-to-end delivery delay constraint; intraroute and interroute coordination; lifetime balancing modules; network power consumption; nodal lifetimes; ns-2 simulation; sensor network lifetime; Data collection; Delays; Energy consumption; Equations; Media Access Protocol; Routing; Switches
【Paper Link】 【Pages】:2679-2687
【Authors】: Wei Dong ; Yunhao Liu ; Yuan He ; Tong Zhu
【Abstract】: Understanding the packet delivery performance of a wireless sensor network (WSN) is critical for improving system performance and exploring further development and applications of WSN techniques. In spite of many empirical measurements in the literature, we still lack an in-depth understanding of how, and to what extent, different factors contribute to the overall packet losses with respect to a complete protocol stack at large scales. Specifically, very little is known about (1) when, where, and under what circumstances packet losses occur, and (2) why packets are lost. As a step towards addressing those issues, we deploy a large-scale WSN and design a measurement system for retrieving important system metrics. We propose MAP, a step-by-step methodology to identify the losses, extract system events, and perform spatial-temporal correlation analysis by employing a carefully examined causal graph. MAP enables us to take a closer look at the root causes of packet losses in a low-power ad hoc network. This study validates some earlier conjectures on WSNs and reveals some new findings. The quantitative results also shed light on future large-scale WSN deployments.
【Keywords】: sensor placement; wireless sensor networks; MAP; WSN design; causal graph; complete protocol stack; empirical measurement; large scale WSN deployment; large scale sensor network; loss identification; measurement system; packet delivery performance; packet loss; performance analysis; performance measurement; spatial-temporal correlation analysis; system event extraction; wireless sensor network; Correlation; Packet loss; Protocols; Radiation detectors; Routing; Wireless sensor networks
【Paper Link】 【Pages】:2688-2696
【Authors】: Qiang Ma ; Kebin Liu ; Xiangrong Xiao ; Zhichao Cao ; Yunhao Liu
【Abstract】: In large-scale wireless sensor networks, it proves very difficult to dynamically monitor system degradation and detect bad links. Faulty link detection plays a critical role in network diagnosis. Indeed, a misbehaving node degrades the transmitting and receiving performance of its links. Similarly, other potential network bottlenecks, such as network partitions and routing errors, can be detected by link scans. Since sequentially checking all potential links incurs high transmission and storage costs, existing approaches often focus on links currently in use while overlooking those not yet used, and thus fail to offer further insights to guide subsequent operations. We propose a novel scheme, Link Scanner (LS), for monitoring wireless links in real time. LS issues one probe message into the network and collects the hop counts of the received probe messages at sensor nodes. Based on the observation that faulty links result in mismatches between the received hop counts and the network topology, we are able to deduce the status of all links with a probabilistic model. We evaluate our scheme by carrying out experiments on a testbed with 60 TelosB motes and conducting extensive simulation tests. A real outdoor system is also deployed to verify that LS can be reliably applied to surveillance networks.
【Keywords】: fault diagnosis; probability; radio links; telecommunication network topology; wireless sensor networks; bad link detection; destructive node; faulty link detection; link scanner; network diagnosis; network partition error detection; network topology; outdoor system; probabilistic model; routing error detection; surveillance network; system degradation monitoring; wireless link monitoring; wireless sensor network; Educational institutions; Monitoring; Network topology; Probes; Routing; Topology; Wireless sensor networks
【Paper Link】 【Pages】:2697-2705
【Authors】: Can Zhao ; Jian Zhao ; Xiaojun Lin ; Chuan Wu
【Abstract】: The performance of large-scale peer-to-peer (P2P) video-on-demand (VoD) streaming systems can be very challenging to analyze. In practical P2P VoD systems, each peer interacts with only a small number of other peers/neighbors. Further, its upload capacity may vary randomly, and both its downloading position and content availability change dynamically. In this paper, we rigorously study the achievable streaming capacity of large-scale P2P VoD systems with sparse connectivity among peers, and investigate simple and decentralized P2P control strategies that can provably achieve close-to-optimal streaming capacity. We first focus on a single streaming channel. We show that a close-to-optimal streaming rate can be asymptotically achieved for all peers with high probability as the number of peers N increases, by assigning each peer a random set of Θ(log N) neighbors and using a uniform rate-allocation algorithm. Further, the tracker does not need to obtain detailed knowledge of which chunks each peer caches, and hence incurs low overhead. We then study multiple streaming channels, where peers watching one channel may help in another channel with insufficient upload bandwidth. We propose a simple random cache-placement strategy, and show that a close-to-optimal streaming capacity region for all channels can be attained with high probability, again with only Θ(log N) neighbors per peer. These results provide important insights into the dynamics of large-scale P2P VoD systems, which will be useful for guiding the design of improved P2P control protocols.
【Keywords】: cache storage; computational complexity; decentralised control; peer-to-peer computing; video on demand; video streaming; Θ(log N) neighbors; P2P On-demand streaming; P2P VoD systems; close-to-optimal streaming capacity; close-to-optimal streaming capacity region; decentralized P2P control strategies; decentralized control; large-scale P2P VoD systems; large-scaled peer-to-peer video-on-demand streaming systems; multiple streaming channels; peer caches; random cache-placement strategy; streaming channel; uniform rate-allocation algorithm; upload bandwidth; Algorithm design and analysis; Availability; Decentralized control; Peer-to-peer computing; Robustness; Servers; Streaming media
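The tracker-side neighbor assignment, giving each peer a random set of Θ(log N) neighbors, can be sketched as follows. The multiplicative constant `c`, the seed, and all names are illustrative assumptions, not values from the paper.

```python
import math
import random

def assign_neighbors(n_peers, c=3, seed=1):
    """Tracker-side sketch: give each of N peers roughly c*log(N)
    uniformly random neighbors. The tracker needs no knowledge of
    which chunks each peer caches, only the peer population."""
    rng = random.Random(seed)
    k = max(1, math.ceil(c * math.log(max(n_peers, 2))))
    neighbors = {}
    for peer in range(n_peers):
        others = [p for p in range(n_peers) if p != peer]
        neighbors[peer] = rng.sample(others, min(k, len(others)))
    return neighbors
```

The per-peer neighborhood thus grows only logarithmically with N, which is the sparse-connectivity regime the paper analyzes.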
【Paper Link】 【Pages】:2706-2714
【Authors】: Zheng Lu ; Gustavo de Veciana
【Abstract】: This paper considers the design of cross-layer opportunistic transport for stored video over wireless networks with a slowly varying (average) capacity. We focus on two key ideas: (1) scheduling data transmissions when capacity is high; and (2) exploiting knowledge of future capacity variations. The latter is possible when users' mobility is known or predictable, e.g., users riding on public transportation or using navigation systems. We consider the design of cross-layer transmission schedules which minimize system utilization (and thus possibly transmit/receive energy) while avoiding, if at all possible, rebuffering/delays, in several scenarios. For the single-user anticipative case, where all future capacity variations are known beforehand, we establish that the optimal transmission schedule is a Generalized Piecewise Constant Thresholding (GPCT) scheme. For the single-user partially anticipative case, where only a finite window of future capacity variations is known, we propose an online Greedy Fixed Horizon Control (GFHC). An upper bound on the competitive ratio of GFHC and GPCT is established, showing how the performance loss depends on the window size, the receiver playback buffer, and capacity variability. Finally, we consider the multiuser case, where we can exploit both future temporal and multiuser diversity. Our simulations and evaluation based on a measured wireless capacity trace exhibit robust potential gains for our proposed transmission schemes.
【Keywords】: mobile radio; video signal processing; GFHC; GPCT scheme; capacity variability; cross-layer opportunistic transport; data transmission scheduling; generalized piecewise constant thresholding scheme; mobile networks; multiuser case; online greedy fixed horizon control; public transportation; receiver playback buffer; stored video delivery; window size; wireless capacity; wireless networks; Delays; Mobile communication; Optimization; Schedules; Servers; Streaming media; Wireless communication
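The first key idea, transmitting when capacity is high, reduces to a threshold rule once future capacity is known: serve the highest-capacity slots until the total demand is met. This toy sketch ignores the per-slot playback-deadline constraints that the paper's GPCT scheme handles; the function and variable names are illustrative.

```python
def threshold_schedule(capacities, demand):
    """Given known per-slot capacities and a total demand (in bits),
    transmit only in the highest-capacity slots until the demand is
    covered. Picking slots by decreasing capacity is equivalent to
    applying a single capacity threshold, and it minimizes the number
    of active slots (a proxy for system utilization/energy)."""
    order = sorted(range(len(capacities)), key=lambda t: -capacities[t])
    chosen, delivered = [], 0
    for t in order:
        if delivered >= demand:
            break
        chosen.append(t)
        delivered += capacities[t]
    return sorted(chosen), delivered
```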
【Paper Link】 【Pages】:2715-2723
【Authors】: Yuedong Xu ; Salah-Eddine Elayoubi ; Eitan Altman ; Rachid El Azouzi
【Abstract】: The Quality of Experience (QoE) of a streaming service is often degraded by frequent playback interruptions. To mitigate interruptions, the media player prefetches streaming content before starting playback, at the cost of delay. We study the QoE of streaming from the perspective of flow dynamics. First, a framework is developed for QoE when streaming users join the network randomly and leave after completing their downloads. We compute the distribution of the prefetching delay using partial differential equations (PDEs), and the probability generating function of playout buffer starvations using ordinary differential equations (ODEs). Second, we extend our framework to characterize the throughput variation caused by opportunistic scheduling at the base station in the presence of fast fading. Our study reveals that flow dynamics are the fundamental cause of playback starvation. The QoE of the streaming service is dominated by the average throughput of opportunistic scheduling, while the variance of the throughput has very limited impact on starvation behavior.
【Keywords】: mobile radio; partial differential equations; probability; quality of experience; radio networks; video streaming; ODE; PDE; QoE; base station; fast fading presence; flow-level dynamics impact; frequent playback interruptions; interruption mitigation; media player; mobile networks; opportunistic scheduling; ordinary differential equations; partial differential equations; playout buffer starvations; prefetching delay distribution; probability generating function; quality-of-experience; starvation behavior; streaming contents; video streaming service; wireless networks; Delays; Markov processes; Mathematical model; Prefetching; Streaming media; Throughput
【Paper Link】 【Pages】:2724-2732
【Authors】: Delia Ciullo ; Valentina Martina ; Michele Garetto ; Emilio Leonardi
【Abstract】: We propose an analytical framework to tightly characterize the scaling laws for the additional bandwidth that servers must supply to guarantee perfect service in peer-assisted Video-on-Demand systems, taking into account essential aspects such as peer churn, bandwidth heterogeneity, and Zipf-like video popularity. Our results reveal that the catalog size and the content popularity distribution have a huge effect on the system performance. We show that users' cooperation can effectively reduce the servers' burden for a wide range of system parameters, confirming to be an attractive solution to limit the costs incurred by content providers as the system scales to large populations of users.
【Keywords】: network servers; peer-to-peer computing; video on demand; Zipf-like video popularity; bandwidth heterogeneity; content popularity distribution; peer churn; peer-assisted video-on-demand systems; servers; Aggregates; Bandwidth; Catalogs; Servers; Streaming media; Upper bound; Watches
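A Zipf-like popularity distribution of the kind assumed in this analysis can be generated as follows; the exponent `alpha` is an illustrative choice, not a value from the paper.

```python
def zipf_popularity(catalog_size, alpha=0.8):
    """Return Zipf-like request probabilities over a video catalog:
    the r-th most popular video is requested with probability
    proportional to 1/r^alpha, normalized to sum to 1."""
    weights = [1.0 / (r ** alpha) for r in range(1, catalog_size + 1)]
    total = sum(weights)
    return [w / total for w in weights]
```

With such a distribution a small head of the catalog attracts most requests, which is why the paper finds catalog size and popularity skew to have such a large effect on the server bandwidth needed.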
【Paper Link】 【Pages】:2733-2741
【Authors】: He Wang ; Zhiyang Wang ; Guobin Shen ; Fan Li ; Song Han ; Feng Zhao
【Abstract】: The proliferation of location-based services and applications calls for the provisioning of location service as a first-class system component that can return accurate location fixes with short response times and is energy efficient. In this paper, we present the design, implementation, and evaluation of WheelLoc - a continuous location service for outdoor scenarios. Unlike previous localization efforts that try to directly obtain a point location fix, WheelLoc adopts an indirect approach: it first captures a user's mobility trace and then obtains any point location by time- and speed-aware interpolation or extrapolation. WheelLoc avoids energy-expensive sensors completely and relies solely on commonly available cheap sensors such as the accelerometer and magnetometer. With a set of novel techniques and by leveraging publicly available road maps and cell tower information, WheelLoc is able to meet the requirements of a first-class component. Experimental results confirm the effectiveness of WheelLoc: it can return a location estimate within 40 ms with an accuracy of about 40 meters, consumes only 240 mW, and strikes a better energy-accuracy tradeoff than GPS duty-cycling.
【Keywords】: accelerometers; extrapolation; interpolation; magnetometers; mobile handsets; mobility management (mobile radio); telecommunication services; GPS duty cycling; accelerometer; cell tower information; extrapolation; interpolation; location based services; magnetometer; mobile phone; outdoor scenarios; point location; power 240 mW; road maps; user mobility trace; Acceleration; Accelerometers; Estimation; Magnetometers; Poles and towers; Roads; Vectors
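The indirect approach, answering a point-location query by interpolating on a captured mobility trace, can be sketched as follows. This is a minimal time-aware linear interpolation over (timestamp, x, y) fixes, not WheelLoc's actual speed-aware method; the flat extrapolation beyond the trace ends is also an assumption of the sketch.

```python
def interpolate_position(trace, t):
    """Return the (x, y) position at time t by linear interpolation on
    a mobility trace of (timestamp, x, y) fixes. Queries before the
    first fix or after the last one return the nearest endpoint."""
    trace = sorted(trace)
    if t <= trace[0][0]:
        return trace[0][1], trace[0][2]
    if t >= trace[-1][0]:
        return trace[-1][1], trace[-1][2]
    for (t0, x0, y0), (t1, x1, y1) in zip(trace, trace[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return x0 + f * (x1 - x0), y0 + f * (y1 - y0)
```

Because only the trace needs to be maintained, any point query becomes a cheap lookup rather than a fresh (and energy-expensive) sensor fix.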
【Paper Link】 【Pages】:2742-2750
【Authors】: Chih-Chuan Cheng ; Pi-Cheng Hsiu
【Abstract】: Reducing the communication energy is essential to facilitate the growth of emerging mobile applications. In this paper, we introduce signal strength into location-based applications to reduce the energy consumption of mobile devices for data reception. First, we model the problem of data fetch scheduling, with the objective of minimizing the energy required to fetch location-based information without adversely impacting user experience. Then, we propose a dynamic-programming algorithm to solve the fundamental problem and prove its optimality in terms of energy savings. We also provide an optimality condition with respect to signal strength fluctuations. Finally, based on the algorithm, we consider implementation issues. We have also developed a virtual tour system integrated with existing web applications to validate the practicability of the proposed concept. The results of experiments conducted based on real-world case studies are very encouraging.
【Keywords】: data handling; dynamic programming; mobile computing; scheduling; signal processing; communication energy; data fetch scheduling; data reception; dynamic programming; location-based applications; mobile applications; signal strength; Delays; Energy consumption; Heuristic algorithms; IEEE 802.11 Standards; Mobile communication; Mobile handsets; Schedules; Energy-efficient optimization; cellular data fetch scheduling; location-based applications; signal strength
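The underlying optimization, choosing fetch slots so that energy is minimized while each piece of location-based content arrives by its deadline, can be sketched with a small dynamic program. This is an illustrative formulation only: per-slot costs stand in for signal strength (weak signal means a high fetch cost), chunks are assumed ordered by nondecreasing deadline with at most one fetch per slot, and none of this is the paper's exact model.

```python
def min_energy_schedule(costs, deadlines):
    """Fetch chunk i (1-indexed deadlines in slots) in some slot
    <= deadlines[i], at most one chunk per slot and in chunk order,
    minimizing total fetch energy. costs[t] is the energy of fetching
    in slot t+1 (high when the signal is weak). Returns the minimum
    total energy, or infinity if the deadlines are infeasible."""
    n, T = len(deadlines), len(costs)
    INF = float("inf")
    # dp[i][t]: min energy to fetch the first i chunks in the first t slots
    dp = [[INF] * (T + 1) for _ in range(n + 1)]
    for t in range(T + 1):
        dp[0][t] = 0
    for i in range(1, n + 1):
        for t in range(1, T + 1):
            dp[i][t] = dp[i][t - 1]  # slot t left idle for chunk i
            if t <= deadlines[i - 1] and dp[i - 1][t - 1] < INF:
                # fetch chunk i in slot t, on top of the best schedule
                # for the first i-1 chunks in the first t-1 slots
                dp[i][t] = min(dp[i][t], dp[i - 1][t - 1] + costs[t - 1])
    return dp[n][T]
```

The DP prefers cheap (strong-signal) slots but never pushes a chunk past its deadline, which is the energy/user-experience trade-off the paper formalizes.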
【Paper Link】 【Pages】:2751-2759
【Authors】: Zhaoyu Gao ; Haojin Zhu ; Yao Liu ; Muyuan Li ; Zhenfu Cao
【Abstract】: Cognitive Radio Networks (CRNs) are regarded as a promising way to address the increasing demand for wireless channel resources. A CRN alleviates the channel resource shortage by allowing a Secondary User (SU) to access a Primary User's (PU) channel when the channel is not occupied by the PU. The FCC's latest rule, issued in May 2012, enforces database-driven CRNs, in which an SU queries a database to obtain spectrum availability information by submitting a location-based query. However, one concern about database-driven CRNs is that the queries sent by SUs inevitably leak location information. In this study, we identify a new kind of attack against the location privacy of database-driven CRNs. Instead of directly learning SUs' locations from their queries, our discovered attack can infer an SU's location from the channels it uses. We propose a Spectrum-Utilization-based Location Inferring Algorithm that enables an attacker to geo-locate an SU. To thwart location privacy leakage from the query process, we propose a novel Private Spectrum Availability Information Retrieval scheme that utilizes a blind factor to hide the location of the SU. To defend against the discovered attack, we propose a novel prediction-based Private Channel Utilization protocol that reduces the possibility of location privacy leakage by choosing the most stable channels. We implement our discovered attack and proposed schemes on data extracted from the Google Earth Coverage Maps released by the FCC. Experimental results show that the proposed protocols can significantly improve location privacy.
【Keywords】: cognitive radio; data privacy; protocols; query processing; radio spectrum management; telecommunication security; wireless channels; FCC; Google Earth Coverage Maps; PU channel access; SU location; attack identification; blind factor; channel resource shortage problem; database queries; database-driven CRN; database-driven cognitive radio networks; location inferring algorithm; location privacy improvement; location privacy leakage possibility reduction; location-based query submission; prediction-based private channel utilization protocol; primary user; private spectrum availability information retrieval scheme; secondary user; spectrum availability information; spectrum utilization; wireless channel resources; Availability; Data privacy; Databases; Privacy; Protocols; Vectors; Wireless communication; Location Privacy; Private Information Retrieval; database-driven Cognitive Radio Network
【Paper Link】 【Pages】:2760-2768
【Authors】: Xiang-Yang Li ; Taeho Jung
【Abstract】: Location-Based Services (LBS) have become increasingly popular with the dramatic growth of smartphones and social network services (SNS), and their context-rich functionalities attract a considerable number of users. Many LBS providers use users' location information to offer them convenient and useful functions. However, an LBS can greatly breach personal privacy, because location itself contains much information. Hence, preserving location privacy while still deriving utility from it remains a challenging problem. This paper tackles this non-trivial challenge by designing a novel fine-grained Privacy-preserving Location Query Protocol (PLQP). Our protocol allows different levels of location queries on encrypted location information for different users, and it is efficient enough to be applied on mobile platforms.
【Keywords】: protocols; smart phones; LBS providers; PLQP; context-rich functionalities; encrypted location information; location-based service; mobile platforms; non-trivial challenge; personal privacy; privacy-preserving location query protocol; privacy-preserving location query service; smartphones; social network services; user location information; Access control; Encryption; Iron; Privacy; Protocols; Smart phones
【Paper Link】 【Pages】:2769-2777
【Authors】: Ningning Cheng ; Xinlei Oscar Wang ; Wei Cheng ; Prasant Mohapatra ; Aruna Seneviratne
【Abstract】: The deployment of public wireless access points (also known as public hotspots) and the prevalence of portable computing devices have made it more convenient for travelers to access the Internet. On the other hand, they also raise serious privacy concerns due to the open environment. Most users nevertheless neglect these privacy threats, because there is currently no way for them to know to what extent their privacy is exposed. In this paper, we examine the privacy leakage in public hotspots from activities such as domain name querying, web browsing, search engine querying, and online advertising. We discover that these activities can leak multiple categories of user privacy, such as identity privacy, location privacy, financial privacy, social privacy, and personal privacy. We collected real data from 20 airport datasets in four countries and found that the privacy leakage can be as high as 68%, meaning that two thirds of traveling users leak private information while accessing the Internet at airports. Our results indicate that users are not fully aware of the privacy leakage they can encounter in wireless environments, especially in public WiFi networks. This fact should urge network service providers and website designers to improve their services by developing better privacy-preserving mechanisms.
【Keywords】: Internet; data privacy; online front-ends; radio access networks; wireless LAN; Internet; Web browsing; airports; domain name querying; financial privacy; identity privacy; location privacy; network service provider; online advertising; personal privacy; portable computing device; privacy leakage; privacy preserving mechanism; public Wi-Fi networks; public hotspots; public wireless access points; search engine querying; social privacy; wireless environment; Cognition; Data privacy; Electronic mail; Engines; IEEE 802.11 Standards; Privacy; Protocols
【Paper Link】 【Pages】:2778-2786
【Authors】: Ting Wang ; Yaling Yang
【Abstract】: Location spoofing attacks pose serious threats to location-based wireless network mechanisms. Most existing literature focuses on detecting location spoofing attacks or on designing robust localization algorithms. However, our study shows that, in many circumstances, perfect location spoofing (PLS) can stay undetected even if robust localization algorithms or detection mechanisms are used. In this paper, we present a theoretical analysis of the feasibility of beamforming-based PLS attacks and of how they are affected by anchor deployment. We formulate PLS as a nonlinear feasibility problem based on smart antenna array pattern synthesis. Due to the intractable nature of this feasibility problem, we solve it using semidefinite relaxation (SDR) in conjunction with a heuristic local search algorithm. Simulation results show the effectiveness of our analytical approach and provide insightful advice for defending against PLS attacks.
【Keywords】: adaptive antenna arrays; array signal processing; radio networks; search problems; telecommunication security; SDR; anchor deployment; beamforming-based PLS attack; heuristic local search algorithm; location based wireless network; nonlinear feasibility problem; perfect location spoofing attack; semidefinite relaxation; smart antenna array pattern synthesis; Algorithm design and analysis; Array signal processing; Partitioning algorithms; Robustness; Search problems; Vectors; Wireless communication
【Paper Link】 【Pages】:2787-2795
【Authors】: Yan Zhang ; Loukas Lazos
【Abstract】: We address the problem of MAC-layer misbehavior in distributed multi-channel MAC protocols. We show that selfish users can manipulate the protocol parameters to gain an unfair share of the available bandwidth while remaining undetected. We identify optimal misbehavior strategies that can lead to the isolation of a subset of frequency bands for exclusive use by the misbehaving nodes, and we evaluate their impact on performance and fairness. We develop corresponding detection and mitigation strategies that practically eliminate the misbehavior gains. To the best of our knowledge, this is the first attempt at characterizing the impact of misbehavior on multi-channel MAC protocols.
【Keywords】: access protocols; wireless channels; MAC-layer misbehavior; bandwidth; detection strategies; distributed multichannel MAC protocol; frequency band; misbehaving node; misbehavior gains; mitigation strategies; optimal misbehavior strategies; protocol parameter; selfish misbehavior; selfish user; Bandwidth; Media Access Protocol; Monitoring; Radiation detectors; Receivers; Throughput
【Paper Link】 【Pages】:2796-2804
【Abstract】: This paper investigates secure cooperative communication in the presence of multiple malicious eavesdroppers. Characterizing the security performance of the system by its secrecy capacity, we study the secrecy capacity maximization problem in cooperative-communication-aware ad hoc networks. Specifically, we propose a system model in which secrecy capacity enhancement is achieved by the assignment of cooperative relays. We present a corresponding formulation of the problem and discuss the security gain brought by the relay assignment process. We then develop an optimal relay assignment algorithm that solves the secrecy capacity maximization problem in polynomial time. The basic idea behind the proposed algorithm is to boost the capacity of the primary channel while simultaneously decreasing the capacity of the eavesdropping channel. To further increase the system secrecy capacity, we exploit jamming and propose a smart jamming algorithm that interferes with the eavesdropping channels. Analysis and experimental results reveal that our proposed algorithms significantly increase the system secrecy capacity under various network settings.
【Keywords】: ad hoc networks; communication complexity; cooperative communication; jamming; telecommunication security; jamming technique; multiple malicious eavesdroppers; polynomial time; primary channel; secrecy capacity maximization; secure cooperative ad hoc networks; secure cooperative communication; Ad hoc networks; Channel capacity; Jamming; Polynomials; Relays; Security; Wireless communication
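The secrecy-capacity metric this abstract optimizes is conventionally defined as the positive part of the gap between the main-channel and eavesdropper-channel capacities (Wyner's formulation). The sketch below illustrates greedy relay selection under that metric; it is not the paper's polynomial-time assignment algorithm, and all SNR values are hypothetical.

```python
import math

def secrecy_capacity(snr_main, snr_eve):
    """Secrecy capacity (bits/s/Hz): positive part of the gap between
    the main-channel and eavesdropper-channel Shannon capacities."""
    return max(0.0, math.log2(1 + snr_main) - math.log2(1 + snr_eve))

def best_relay(relays):
    """Pick the relay whose assignment maximizes secrecy capacity.
    `relays` maps relay id -> (SNR to destination, SNR to eavesdropper)."""
    return max(relays, key=lambda r: secrecy_capacity(*relays[r]))

# Hypothetical SNR values for three candidate relays: note r2 has the
# best main channel but leaks heavily to the eavesdropper.
relays = {"r1": (15.0, 3.0), "r2": (30.0, 20.0), "r3": (8.0, 0.5)}
print(best_relay(relays))  # prints "r3"
```

The metric is zero whenever the eavesdropper's channel is at least as good as the main channel, which is why the paper's algorithm jointly boosts the primary channel and degrades the eavesdropping channel.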
【Paper Link】 【Pages】:2805-2813
【Authors】: Nguyen Tien Viet ; François Baccelli ; Kai Zhu ; Sundar Subramanian ; Xinzhou Wu
【Abstract】: The broadcast of periodic messages is a key functionality in vehicular ad hoc networks. In the emerging vehicular networks, IEEE 802.11p is the standard of choice to support the PHY and MAC layer functionalities. The broadcast process in IEEE 802.11p is based on the CSMA mechanism, where a device transmitting a packet senses the channel for ongoing transmissions and performs a random back-off before accessing the channel. Without RTS/CTS mechanisms, carrier sensing is expected to provide a protection region around the transmitter where no other transmitters are allowed. The point process characterizing the concurrent transmitters is expected to enforce a minimum separation between concurrent transmitters. However, at increasing densities, the CSMA behavior breaks down to an ALOHA-like transmission pattern where concurrent transmitters are distributed as a Poisson point process, indicating the lack of protection around transmitters. In this paper, we model the CSMA mechanism as a slotted system and analytically characterize the critical node/packet arrival density at which the CSMA mechanism approaches an ALOHA-like behavior. Further, we use tools from stochastic geometry to establish closed-form expressions for the performance metrics of the broadcast mechanism in the ALOHA regime. Finally, using ns2 (an unslotted asynchronous simulator), we compare the theoretical results with simulations.
【Keywords】: carrier sense multiple access; radio transmitters; stochastic processes; vehicular ad hoc networks; wireless LAN; ALOHA; ALOHA-like transmission pattern; CSMA performance analysis; IEEE 802.11p standard; MAC layer functionality; PHY layer functionality; Poisson point process; RTS-CTS mechanism; VANET; broadcast protocol; carrier sensing; closed-form expression; concurrent transmitter; critical node-packet arrival density; slotted system; stochastic geometry; vehicular ad hoc network; Multiaccess communication; Protocols; Radiation detectors; Safety; Transmitters; Vehicles
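The spatial protection that carrier sensing is supposed to enforce can be illustrated with a Matérn type-II hard-core thinning, a standard stochastic-geometry stand-in for CSMA (not the paper's slotted model): as the offered density grows, the density of retained concurrent transmitters saturates near 1/(π r²) instead of growing without bound. The parameters below are illustrative.

```python
import random

def matern_ii_density(lam, r, area=10.0, seed=1):
    """Empirical density of transmitters retained by Matérn type-II
    thinning (a CSMA-like hard-core rule, sensing range r) applied to a
    field of intensity `lam` on an `area` x `area` square."""
    rng = random.Random(seed)
    n = int(lam * area * area)  # approximate the Poisson count by its mean
    pts = [(rng.uniform(0, area), rng.uniform(0, area), rng.random())
           for _ in range(n)]   # (x, y, back-off mark)
    kept = 0
    for x, y, m in pts:
        # A point transmits iff no point with a smaller mark lies within r.
        if all(m < m2 or (x - x2) ** 2 + (y - y2) ** 2 > r * r
               for x2, y2, m2 in pts if (x2, y2, m2) != (x, y, m)):
            kept += 1
    return kept / (area * area)

# Retained density saturates near 1/(pi r^2) ~ 0.318 as lam grows.
for lam in (0.5, 2.0, 8.0):
    print(lam, round(matern_ii_density(lam, r=1.0), 3))
```

The hard-core model captures the protection region at moderate densities; the paper's contribution is identifying the density beyond which real CSMA loses this protection and behaves like ALOHA.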
【Paper Link】 【Pages】:2814-2822
【Authors】: Lei Zhang ; Lin Cai ; Jianping Pan
【Abstract】: Connectivity has been extensively studied in ad hoc networks, most recently with the application of percolation theory to two-dimensional square lattices. Given a message source and the bond probability of connecting neighboring vertices on the lattice, percolation theory tries to determine the critical bond probability above which an infinite connected giant component exists with high probability. This paper studies a related but different problem: what is the connectivity from the source to any vertex on the square lattice following certain directions? The original directed percolation problem has been studied in statistical physics for more than half a century, with only simulation results available. In this paper, by using a recursive decomposition approach, we obtain analytical expressions for directed connectivity. The results can be widely used in wireless and mobile ad hoc networks, including vehicular ad hoc networks.
【Keywords】: lattice theory; mobile ad hoc networks; percolation; probability; 2D lattice network; 2D square lattice; analytical expression; critical bond probability; directed connectivity; directed percolation problem; infinite connected giant component; mobile ad hoc network; percolation theory; recursive decomposition; vehicular ad hoc network; wireless ad hoc network; Complexity theory; Lattices; Poles and towers; Probability; Silicon; Vehicular ad hoc networks; Connectivity; directed percolation; square lattice
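The directed-percolation quantity the abstract derives analytically, the probability that a given vertex is reachable from the source when each bond is open with probability p and steps follow two fixed directions (say rightward and upward), is easy to estimate by Monte Carlo for comparison. This sketch is not the paper's recursive decomposition; lattice size and trial count are illustrative.

```python
import random

def directed_reach_prob(p, target, trials=2000, seed=7):
    """Monte Carlo estimate of the probability that `target` = (i, j) is
    reachable from (0, 0) on a square lattice where each bond is open
    independently with probability p and only rightward/upward steps
    are allowed."""
    rng = random.Random(seed)
    ti, tj = target
    hits = 0
    for _ in range(trials):
        # Sample every bond in the (ti+1) x (tj+1) sublattice.
        right = [[rng.random() < p for _ in range(tj + 1)] for _ in range(ti)]
        up = [[rng.random() < p for _ in range(tj)] for _ in range(ti + 1)]
        # Dynamic programming over the directed acyclic lattice.
        reach = [[False] * (tj + 1) for _ in range(ti + 1)]
        reach[0][0] = True
        for i in range(ti + 1):
            for j in range(tj + 1):
                if i > 0 and reach[i - 1][j] and right[i - 1][j]:
                    reach[i][j] = True
                if j > 0 and reach[i][j - 1] and up[i][j - 1]:
                    reach[i][j] = True
        if reach[ti][tj]:
            hits += 1
    return hits / trials

print(directed_reach_prob(0.9, (3, 3)))
```

Because the directed lattice is acyclic, each trial is a single dynamic-programming sweep; the value the paper obtains in closed form is the exact limit of this estimate as the number of trials grows.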
【Paper Link】 【Pages】:2823-2831
【Authors】: Hongjian Wang ; Yanmin Zhu ; Qian Zhang
【Abstract】: Vehicles are becoming powerful mobile sensors, and vehicular networks provide a promising platform to support a wide range of large-scale monitoring applications, such as road surface monitoring. In vehicular networks, inter-vehicle contacts are scarce resources for data delivery, which presents a major challenge for monitoring applications. By analyzing a large dataset of taxi traces collected from around 2,600 taxis in Shanghai, China, we reveal that there is strong correlation among the data readings of vehicles. Motivated by this important observation, we propose a compressive sensing based approach, called CSM, for monitoring with vehicular networks. Two key issues must be addressed. First, there is an intrinsic tradeoff between communication cost and estimation accuracy. Second, guaranteed estimation accuracy should be provided over a highly dynamic network. To address these issues, we first characterize the relationship between the estimation error (ℓ2 error) and the sparsity property of a dataset. We then determine two critical parameters: the minimum number of seeds and the minimum transmission hop length for compressive measurements in the network. The selection of these two parameters reduces the communication cost while guaranteeing the required estimation accuracy. Extensive simulations based on real vehicular GPS traces collected in Shanghai, China demonstrate that CSM achieves much higher estimation accuracy at the same communication cost than alternative schemes.
【Keywords】: compressed sensing; vehicular ad hoc networks; wireless sensor networks; compressive sensing; dataset; estimation error; mobile sensors; monitoring applications; real vehicular GPS traces; taxi; vehicular networks; Accuracy; Compressed sensing; Entropy; Estimation error; Monitoring; Vehicles; Vehicular networks; compressive Sensing; monitoring; routing; seed selection
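The compressive-sensing step CSM relies on, recovering a sparse reading vector from far fewer linear measurements than unknowns, can be sketched with orthogonal matching pursuit, a standard recovery algorithm (not necessarily the one used in CSM). Dimensions, sparsity, and the Gaussian measurement matrix are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x
    from noiseless measurements y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit all selected coefficients by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))         # 30 measurements, 50 unknowns
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.5, -2.0, 0.7]    # hypothetical 3-sparse readings
x_hat = omp(A, A @ x_true, k=3)
print(sorted(np.flatnonzero(x_hat).tolist()))
```

Correlated vehicular readings are sparse in a suitable basis, which is what lets far fewer inter-vehicle transmissions (measurements) still yield a bounded ℓ2 estimation error.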
【Paper Link】 【Pages】:2832-2840
【Authors】: Hongzi Zhu ; Mianxiong Dong ; Shan Chang ; Yanmin Zhu ; Minglu Li ; Xuemin (Sherman) Shen
【Abstract】: Vehicular networks consist of highly mobile vehicles with intermittent connectivity. Due to the distributed and highly dynamic nature of vehicular networks, minimizing both the end-to-end delay and the network traffic in data forwarding is very hard. Heuristic algorithms utilizing either the contact-level or the social-level scale of vehicular mobility have only a one-sided view of the network and are therefore not optimal. In this paper, by analyzing three large sets of Global Positioning System (GPS) traces from more than ten thousand public vehicles, we find that pairwise contacts have strong temporal correlation. Furthermore, the contact graph of vehicles presents a complex structure when the underlying contacts are aggregated. Building on this understanding of how both levels of mobility affect data forwarding, we propose an innovative scheme, named ZOOM, for fast opportunistic forwarding in vehicular networks, which automatically chooses the most appropriate mobility information when deciding the next data relays, in order to minimize the end-to-end delay while reducing network traffic. Extensive trace-driven simulations demonstrate the efficacy of the ZOOM design. On average, ZOOM achieves a 30% performance gain compared to state-of-the-art algorithms.
【Keywords】: Global Positioning System; data communication; mobile radio; telecommunication traffic; GPS trace; Global Positioning System; ZOOM design; contact-level scale; data forwarding; data-relays; end-to-end delay; fast opportunistic forwarding; mobile vehicles communications; mobility; network traffic; opportunistic forwarding; public vehicles; social-level scale; temporal correlation; vehicular mobility; vehicular networks; Algorithm design and analysis; Communities; Delays; Entropy; Global Positioning System; Routing; Vehicles; inter-contact time; mobility scale; opportunistic forwarding; social network analysis; vehicular networks
【Paper Link】 【Pages】:2841-2849
【Authors】: Richard Combes ; Zwi Altman ; Eitan Altman
【Abstract】: In dense wireless networks, inter-cell interference highly limits the capacity and quality of service perceived by users. Previous work has shown that approaches based on frequency reuse provide important capacity gains. We model a wireless network with Inter-Cell Interference Coordination (ICIC) at the flow level where users arrive and depart dynamically, in order to optimize quality of service indicators perceivable by users such as file transfer time for elastic traffic. We propose an algorithm to tune the parameters of ICIC schemes automatically based on measurements. The convergence of the algorithm to a local optimum is proven, and a heuristic to improve its convergence speed is given. Numerical experiments show that the distance between local optima and the global optimum is very small, and that the algorithm is fast enough to track changes in traffic on the time scale of hours. The proposed algorithm can be implemented in a distributed way with very small signaling load.
【Keywords】: convergence; numerical analysis; quality of service; radio networks; radiofrequency interference; telecommunication traffic; ICIC scheme; convergence speed; elastic traffic; file transfer time; flow-level perspective; global optimum; intercell interference coordination; interference coordination; local optima; numerical analysis; quality of service; small signaling load; wireless networks; Convergence; Fading; Interference; Load modeling; Niobium; Optimization; Wireless networks; Load Balancing; OFDMA; Queuing Theory; Self Optimization; Self configuration; Self-Organizing Networks; Stability; Stochastic Approximation; Traffic Engineering; Wireless Networks
【Paper Link】 【Pages】:2850-2858
【Authors】: Ahmed Badr ; Ashish Khisti ; Wai-Tian Tan ; John G. Apostolopoulos
【Abstract】: We study low-delay error correction codes for streaming recovery over a class of packet-erasure channels that introduce both burst-erasures and isolated erasures. We propose a simple, yet effective class of codes whose parameters can be tuned to obtain a tradeoff between the capability to correct burst and isolated erasures. Our construction generalizes previously proposed low-delay codes which are effective only against burst erasures. We establish an information theoretic upper bound on the capability of any code to simultaneously correct burst and isolated erasures and show that our proposed constructions meet the upper bound in some special cases. We discuss the operational significance of column-distance and column-span metrics and establish that the rate 1/2 codes discovered by Martinian and Sundberg [IT Trans. 2004] through a computer search indeed attain the optimal column-distance and column-span tradeoff. Numerical simulations over a Gilbert-Elliott channel model and a Fritchman model show significant performance gains over previously proposed low-delay codes and random linear codes for certain range of channel parameters.
【Keywords】: channel coding; error correction codes; video coding; video streaming; Fritchman model; Gilbert Elliott channel model; burst erasures; column distance metrics; column span metrics; information theoretic upper bound; isolated erasures; low delay error correction codes; packet erasure channels; random linear codes; rate 1/2 codes; streaming recovery; Convolutional codes; Decoding; Delays; Error correction codes; Linear code; Parity check codes; Upper bound
【Paper Link】 【Pages】:2859-2867
【Authors】: Chenxi Qiu ; Haiying Shen ; Sohraab Soltani ; Karan Sapra ; Hao Jiang ; Jason O. Hallstrom
【Abstract】: Link-layer protocols of wireless networks that use the conventional “store and forward” design paradigm cannot provide highly sustainable reliability and stability in wireless communication, which introduces significant barriers and setbacks to the scalability and deployment of wireless networks. In this paper, we propose a Code Embedded Distributed Adaptive and Reliable (CEDAR) link-layer framework that targets low latency and high throughput. CEDAR is the first comprehensive theoretical framework for analyzing and designing distributed and adaptive error recovery for wireless networks. It employs a theoretically sound framework for embedding channel codes in each packet and performs the error-correcting process at selected intermediate nodes on a packet's route. To identify the intermediate nodes for en/decoding that minimize average packet latency, we mathematically analyze the average packet delay using a Finite State Markov Channel model and a priority queuing model, and then formalize the problem as a non-linear integer programming problem. We also propose a scalable and distributed scheme to solve this problem. Results from the real-world testbed NESTbed and from Matlab simulations show that CEDAR is superior to schemes using hop-by-hop decoding and destination decoding in both packet delay and throughput. In addition, the simulation results show that CEDAR achieves the optimal performance in most cases.
【Keywords】: channel coding; decoding; error correction; error correction codes; packet radio networks; CEDAR; Matlab; adaptive error recovery; average packet delay; average packet latency; channel codes; code embedded distributed adaptive link layer framework; destination decoding; distributed strategy; error correcting process; finite state Markovian channel model; hop by hop decoding; link layer protocols; nonlinear integer programming problem; packet recovery; queuing model; reliable link layer framework; sustainable reliability; wireless communication stability; wireless networks; Bit error rate; Decoding; Delays; Mathematical model; Stability analysis; Wireless communication
【Paper Link】 【Pages】:2868-2876
【Authors】: Peng-Jun Wan ; Zhiguo Wan ; Zhu Wang ; XiaoHua Xu ; Shaojie Tang ; Xiaohua Jia
【Abstract】: Static greedy link schedulings have much simpler implementations than dynamic greedy link schedulings such as Longest-Queue-First (LQF) link scheduling. However, their stability performance in multi-channel multi-radio (MC-MR) wireless networks is largely under-explored. In this paper, we present a closed-form stability subregion of static greedy link scheduling in MC-MR wireless networks under the 802.11 interference model. By adopting some special static link orderings, the stability subregion is within a constant factor of the stable capacity region of the network. We also obtain constant lower bounds on the throughput efficiency ratios of static greedy link schedulings under these special static link orderings.
【Keywords】: radio links; radiofrequency interference; scheduling; wireless LAN; wireless channels; 802.11 interference model; MC-MR wireless networks; constant lower bounds; multichannel multiradio wireless networks; stability analysis; stability performance; stability subregion; stable network capacity region; static greedy link scheduling; static link orderings; throughput efficiency ratios; IEEE 802.11 Standards; Interference; Stability analysis; Throughput; Vectors; Wireless networks; Stability; link scheduling; multi-channel multi-radio
【Paper Link】 【Pages】:2877-2885
【Authors】: Fang Hao ; Murali S. Kodialam ; T. V. Lakshman ; Krishna P. N. Puttaswamy
【Abstract】: Preventing the flow of confidential data out of a network is a fundamental problem faced by network operators. The problem becomes even more complex in the context of cloud computing, where multiple distrusting customers share the same underlying infrastructure, and data is often replicated and moved across regions. Despite the significance of this problem, existing solutions are based on generic keyword searches over outgoing data, and hence severely lack the ability to control data flow at a fine granularity with low false positives. In this paper, we advocate a fine-grained approach to preventing confidential data from leaking out of the cloud. We propose a solution based on document-level fingerprint checks. We show via analysis and experiments that our algorithm for checking fingerprints on the fly scales to a large number of documents at very low cost. For example, for 1 TB of documents, our solution requires only 340 MB of memory to achieve a worst-case expected detection lag (i.e., leakage length) of 1000 bytes.
【Keywords】: cloud computing; document handling; security of data; cloud computing; cloud data protection; confidential data flow; document-level fingerprint checks; dynamic inline fingerprint checks; fingerprint on-the-fly scale checking; keyword generic search; worst case expected detection lag; Algorithm design and analysis; Databases; Equations; Heuristic algorithms; Memory management; Probabilistic logic; Protocols
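The on-the-fly fingerprint check described here can be illustrated with a Rabin–Karp rolling hash: fingerprint every fixed-size window of the confidential documents once, then slide a window over outgoing traffic and flag any hash hit. This is a generic sketch of windowed fingerprinting, not the paper's scheme; the window size, base, and modulus are illustrative.

```python
BASE, MOD, W = 256, (1 << 61) - 1, 8   # illustrative hash parameters
TOPW = pow(BASE, W, MOD)

def window_hashes(data):
    """Rolling hashes of every W-byte window of `data`, in O(len(data))."""
    h, hashes = 0, set()
    for i, b in enumerate(data):
        h = (h * BASE + b) % MOD          # extend window by one byte
        if i >= W:
            h = (h - data[i - W] * TOPW) % MOD  # drop the oldest byte
        if i >= W - 1:
            hashes.add(h)
    return hashes

def leaks(outgoing, fingerprint_db):
    """True if any W-byte window of `outgoing` matches a fingerprinted
    confidential document (hash collisions give rare false positives)."""
    return not fingerprint_db.isdisjoint(window_hashes(outgoing))

secret = b"account 4711: balance 9000 USD"   # hypothetical document
db = window_hashes(secret)
print(leaks(b"hello world, nothing secret here", db))   # prints False
print(leaks(b"leaked -> balance 9000 USD <- oops", db)) # prints True
```

Checking every window ensures a leaked fragment is caught within roughly one window length of its start, which is the "detection lag" the abstract bounds; the fingerprint set, not the documents, is what must fit in memory.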
【Paper Link】 【Pages】:2886-2894
【Authors】: Guojun Wang ; Qin Liu ; Feng Li ; Shuhui Yang ; Jie Wu
【Abstract】: In the real world, companies may publish social networks to a third party, e.g., a cloud service provider, for marketing reasons. Preserving privacy when publishing social network data therefore becomes an important issue. In this paper, we identify a novel type of privacy attack, termed the 1*-neighborhood attack. We assume that an attacker has knowledge of the degrees of a target's one-hop neighbors, in addition to the target's 1-neighborhood graph, which consists of the one-hop neighbors of the target and the relationships among these neighbors. With this information, an attacker may re-identify the target from a k-anonymity social network, in which any node's 1-neighborhood graph is isomorphic with those of k - 1 other nodes, with a probability higher than 1/k. To resist the 1*-neighborhood attack, we define a key privacy property, probability indistinguishability, for an outsourced social network, and propose a heuristic indistinguishable group anonymization (HIGA) scheme to generate an anonymized social network with this privacy property. An empirical study indicates that the anonymized social networks can still be used to answer aggregate queries with high accuracy.
【Keywords】: cloud computing; data privacy; graph theory; outsourcing; query processing; social networking (online); 1*-neighborhood attack; 1-neighborhood graph; HIGA; aggregate queries; anonymized social network; cloud service provider; heuristic indistinguishable group anonymization scheme; k-anonymity social network; marketing reasons; one-hop neighbors; privacy attack; privacy property; privacy-preserving social network outsourcing; Aggregates; Educational institutions; Measurement; Outsourcing; Privacy; Probabilistic logic; Social network services; Cloud computing; privacy; probability indistinguishability; social networks
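The re-identification risk can be made concrete with the simpler degree-knowledge part of such an attack: filter the anonymized graph for nodes whose multiset of one-hop neighbor degrees matches the attacker's background knowledge about the target. This is an illustrative reduction, not the paper's isomorphism-based attack, and the graph data is hypothetical.

```python
from collections import Counter

def neighbor_degree_signature(graph, node):
    """Multiset of degrees of `node`'s one-hop neighbors in an
    adjacency-list graph {node: set(neighbors)}."""
    return Counter(len(graph[n]) for n in graph[node])

def candidates(graph, target_signature):
    """Nodes in the anonymized graph whose signature matches the
    attacker's background knowledge about the target."""
    return [v for v in graph
            if neighbor_degree_signature(graph, v) == target_signature]

# Hypothetical anonymized social graph (adjacency lists).
g = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2}, 4: {2, 5}, 5: {4}}
# Attacker knows the target's two friends have degrees 3 and 2.
known = Counter({3: 1, 2: 1})
print(candidates(g, known))  # prints [1, 3]
```

Anonymization only helps when every such candidate set has size at least k; richer background knowledge, like the full 1-neighborhood graph in the paper, shrinks these sets further, which is why probability indistinguishability is the property HIGA targets.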
【Paper Link】 【Pages】:2895-2903
【Authors】: Kan Yang ; Xiaohua Jia ; Kui Ren ; Bo Zhang
【Abstract】: Data access control is an effective way to ensure data security in the cloud. However, due to data outsourcing and untrusted cloud servers, data access control becomes a challenging issue in cloud storage systems. Existing access control schemes are no longer applicable to cloud storage systems, because they either produce multiple encrypted copies of the same data or require a fully trusted cloud server. Ciphertext-Policy Attribute-based Encryption (CP-ABE) is a promising technique for access control over encrypted data. It requires a trusted authority to manage all the attributes and distribute keys in the system. In cloud storage systems, multiple authorities co-exist, and each authority is able to issue attributes independently. However, existing CP-ABE schemes cannot be directly applied to data access control in multi-authority cloud storage systems, due to the inefficiency of their decryption and revocation. In this paper, we propose DAC-MACS (Data Access Control for Multi-Authority Cloud Storage), an effective and secure data access control scheme with efficient decryption and revocation. Specifically, we construct a new multi-authority CP-ABE scheme with efficient decryption, and we also design an efficient attribute revocation method that achieves both forward security and backward security. The analysis and simulation results show that DAC-MACS is highly efficient and provably secure under the security model.
【Keywords】: authorisation; cloud computing; cryptography; network servers; outsourcing; storage management; trusted computing; DAC-MACS; ciphertext-policy attribute-based encryption; cloud data security; data access control for multiauthority cloud storage; data outsourcing; encrypted copies; multiauthority CP-ABE scheme; secure data access control scheme; trusted cloud server; untrusted cloud servers; Access control; Cloud computing; Encryption; Public key; Servers; Access Control; Attribute Revocation; CP-ABE; Decryption Outsourcing; Multi-authority Cloud
【Paper Link】 【Pages】:2904-2912
【Authors】: Boyang Wang ; Baochun Li ; Hui Li
【Abstract】: With data services in the cloud, users can easily modify and share data as a group. To allow data integrity to be audited publicly, users need to compute signatures on all the blocks in shared data. Different blocks are signed by different users, due to data modifications performed by different users. For security reasons, once a user is revoked from the group, the blocks that were previously signed by the revoked user must be re-signed by an existing user. The straightforward method, in which an existing user downloads the corresponding part of the shared data and re-signs it during user revocation, is inefficient due to the large size of shared data in the cloud. In this paper, we propose a novel public auditing mechanism for the integrity of shared data with efficient user revocation in mind. By utilizing proxy re-signatures, we allow the cloud to re-sign blocks on behalf of existing users during user revocation, so that existing users do not need to download and re-sign blocks themselves. In addition, a public verifier is always able to audit the integrity of shared data without retrieving the entire data from the cloud, even if some part of the shared data has been re-signed by the cloud. Experimental results show that our mechanism can significantly improve the efficiency of user revocation.
【Keywords】: cloud computing; data integrity; digital signatures; cloud; data modifications; data services; proxy resignatures; public auditing mechanism; public verifier; shared data integrity; straightforward method; user revocation; Forgery; Games; Indexes; Manganese; Public key; Writing
【Paper Link】 【Pages】:2913-2921
【Authors】: Sk. Kajal Arefin Imon ; Adnan Khan ; Mario Di Francesco ; Sajal K. Das
【Abstract】: In most wireless sensor network (WSN) applications, data are gathered by the sensor nodes and reported to a data collection point, called the sink. To support such data collection, a tree structure rooted at the sink is usually defined. Depending on several factors, including the actual WSN topology and the available energy budget, the energy consumption of nodes belonging to different paths in the data collection tree may vary significantly. This affects the overall network lifetime, defined as the time until the first node in the network runs out of energy. In this paper, we address the problem of lifetime maximization of WSNs in the context of data collection trees. In particular, we propose a novel and efficient algorithm, called Randomized Switching for Maximizing Lifetime (RaSMaLai), that aims at maximizing the lifetime of WSNs through load balancing with low time complexity. We further design a distributed version of our algorithm, called D-RaSMaLai. Simulation results show that both proposed algorithms outperform several existing approaches in terms of network lifetime. Moreover, RaSMaLai offers lower time complexity, while the distributed version, D-RaSMaLai, is very efficient in terms of energy expenditure.
【Keywords】: telecommunication network topology; telecommunication switching; trees (mathematics); wireless sensor networks; D-RaSMaLai; WSN topology; data collection point; energy budget; lifetime maximization; network lifetime; randomized switching for maximizing lifetime; sensor nodes; sink nodes; time complexity; tree-based wireless sensor networks; Data collection; Educational institutions; Load management; Oscillators; Switches; Time complexity; Wireless sensor networks; Data Collection Tree; Load Balancing; Network Lifetime; Randomized Algorithm; Sensor Networks
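The lifetime objective above can be made concrete: in a collection tree, each node forwards its own reading plus everything from its subtree, and the network lifetime is set by the most loaded node relative to its remaining energy. The sketch below computes that bottleneck and shows how switching one node's parent rebalances it; the toy tree and energy values are hypothetical, and this is not the RaSMaLai randomized switching algorithm itself.

```python
def lifetime(parent, energy):
    """Rounds until the first node dies, where each node sends one
    reading per round, relayed up the tree {node: parent, sink: None},
    and transmitting one packet costs one energy unit."""
    load = {v: 0 for v in parent}
    for v in parent:
        u = v
        while parent[u] is not None:   # charge v's packet to every sender
            load[u] += 1               # on the path from v to the sink
            u = parent[u]
    return min(energy[v] / load[v] for v in load if load[v] > 0)

# Hypothetical tree rooted at sink 'S'; every node starts with 10 units.
tree = {"S": None, "a": "S", "b": "S", "c": "a", "d": "a"}
energy = {v: 10.0 for v in tree}
print(lifetime(tree, energy))   # bottleneck: 'a' relays 3 packets per round
tree["d"] = "b"                 # switch d's parent to balance the load
print(lifetime(tree, energy))   # bottleneck load drops from 3 to 2
```

A parent switch is exactly the local move randomized here: it shifts whole subtrees between branches, and accepting switches that raise the bottleneck ratio is what drives the lifetime up.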
【Paper Link】 【Pages】:2922-2930
【Authors】: Lingkun Fu ; Peng Cheng ; Yu Gu ; Jiming Chen ; Tian He
【Abstract】: As a pioneering experimental platform for wireless rechargeable sensor networks, the Wireless Identification and Sensing Platform (WISP) is an open-source platform that integrates sensing and computation capabilities into traditional RFID tags. Unlike traditional tags, an RFID-based wireless rechargeable sensor node needs to charge its onboard energy storage above a threshold in order to power its sensing, computation and communication components. Consequently, such charging delay imposes a unique design challenge for deploying wireless rechargeable sensor networks. In this paper, we tackle this problem by planning the optimal movement strategy of the RFID reader, such that the time to charge all nodes in the network above their energy threshold is minimized. We first propose an optimal solution using the linear programming method. To further reduce the computational complexity, we then introduce a heuristic solution with a provable approximation ratio of (1 + θ)/(1 - ε), obtained by discretizing the charging power over a two-dimensional space. Through extensive evaluations, we demonstrate that our design outperforms the set-cover-based design by an average of 24.7%, while the computational complexity is O((N/ε)²).
【Keywords】: approximation theory; communication complexity; energy storage; linear programming; radiofrequency identification; wireless sensor networks; RFID tag; WISP; approximation ratio; charging delay; computation capability; computational complexity; heuristic solution; linear programming method; onboard energy storage; open-source platform; optimal movement strategy; sensing capability; two-dimensional space; wireless identification; wireless rechargeable sensor network; wireless sensing platform; Delays; Merging; Radiofrequency identification; Robot sensing systems; Wireless communication; Wireless sensor networks
【Paper Link】 【Pages】:2931-2939
【Authors】: Maria Isabel Vergara Gallego ; Franck Rousseau
【Abstract】: The main idea of this paper is to rely only on analog channel sensing to provide integrated neighborhood maintenance and medium access control in a single mechanism for energy-constrained low-power wireless sensor nodes. We propose Wake on Idle, a solution that can be implemented in the radio chip and provides these services without relying on the processor once the stable state is reached. No digital decoding is needed, as we use pseudo-random sequences to identify analog beacon positions in time and assess neighbors' presence. Additionally, we use a code violation principle to provide medium access, ensuring that when a node wants to transmit data to a neighbor, the latter will be listening. We analyze this proposition, give some simulation results, and also provide extensive experimentation results based on an implementation running on two different hardware platforms, showing the concept to be valid, although some adaptation was needed to run on off-the-shelf radios. Even with this far-from-optimal initial implementation, we can reach duty cycles below 0.1% for signaling periods on the order of minutes.
【Keywords】: access protocols; analogue circuits; energy conservation; low-power electronics; microprocessor chips; random sequences; telecommunication power management; telecommunication signalling; wireless sensor networks; analog beacon position; analog channel sensing; code violation principle; duty cycle; energy constrained low-power wireless sensor node; energy efficient neighborhood maintenance; hardware platform; integrated neighborhood maintenance; medium access control; off-the-shelf radio; pseudorandom sequence; radio chip; signaling period; wake on idle; Decoding; Maintenance engineering; Media Access Protocol; Schedules; Sensors; Synchronization
【Paper Link】 【Pages】:2940-2948
【Authors】: Yuhang Gao ; Jianwei Niu ; Ruogu Zhou ; Guoliang Xing
【Abstract】: Indoor localization becomes increasingly important as context-aware applications gain popularity among mobile users. A promising approach for indoor localization is to leverage the pervasive WiFi infrastructure via fingerprinting-based inference. However, a WiFi device must frequently scan for WiFi signals during localization, leading to high power consumption. Moreover, switching to the scanning mode introduces inevitable disruptions to the data communication of the WiFi interface. This paper presents a new indoor localization system called ZiFind that exploits the cross-technology interference in the unlicensed 2.4 GHz frequency spectrum. ZiFind utilizes a low-power ZigBee interface to collect WiFi interference signals and adopts digital signal processing techniques to extract unique signatures as fingerprints for localization. To deal with the noise in the fingerprints, we design a new learning algorithm called R-KNN that can improve the accuracy of localization by assigning different weights to fingerprint features according to their importance. We implement ZiFind on TelosB motes and evaluate its performance through extensive experiments on a 16,000 ft² office building floor consisting of 28 rooms. Our results show that ZiFind leads to significant power savings compared with existing approaches based on the WiFi interface, and yields satisfactory localization accuracy in a range of realistic settings.
【Keywords】: indoor communication; ubiquitous computing; wireless LAN; ZiFind; context-aware applications; cross-technology interference signatures; data communication; energy-efficient indoor localization; fingerprinting; mobile users; pervasive WiFi infrastructure; Accuracy; IEEE 802.11 Standards; Interference; Mobile communication; Servers; Timing; Zigbee
【Paper Link】 【Pages】:2949-2957
【Authors】: Chih-Ping Li ; Eytan Modiano
【Abstract】: We consider utility maximization in networks where the sources do not employ flow control and may consequently overload the network. In the absence of flow control at the sources, some packets will inevitably have to be dropped when the network is in overload. To address this, we first develop a distributed, threshold-based packet dropping policy that maximizes the weighted sum throughput. Next, we consider utility maximization and develop a receiver-based flow control scheme that, when combined with threshold-based packet dropping, achieves the optimal utility. The flow control scheme creates virtual queues at the receivers as a push-back mechanism to optimize the amount of data delivered to the destinations via back-pressure routing. A novel feature of our scheme is that a utility function can be assigned to a collection of flows, generalizing the traditional approach of optimizing per-flow utilities. Our control policies use finite-buffer queues and are independent of arrival statistics. Their near-optimal performance is proved and further supported by simulation results.
【Keywords】: optimisation; queueing theory; statistical analysis; telecommunication control; telecommunication network routing; arrival statistics; back-pressure routing; data networks; distributed dropping policy; finite-buffer queues; flow collection; optimal utility; overload; push-back mechanism; receiver-based flow control scheme; threshold-based packet dropping policy; utility maximization; virtual queues; weighted sum throughput maximization; Aggregates; Optimization; Receivers; Resource management; Routing; Throughput; Vectors
【Paper Link】 【Pages】:2958-2966
【Authors】: Chengdi Lai ; Ka-Cheong Leung ; Victor O. K. Li
【Abstract】: The congestion control mechanisms in the standardized Transmission Control Protocol (TCP) may misinterpret packet reordering as congestive loss, leading to spurious congestion response and under-utilization of network capacity. Therefore, many TCP enhancements have been proposed to better differentiate between packet reordering and congestive loss, in order to enhance the reordering robustness (RR) of TCP. Since such enhancements are incrementally deployed, it is important to study the interactions of TCP flows with heterogeneous RR. This paper presents the first systematic study of such interactions by exploring how changing RR of TCP flows influences the bandwidth sharing among these flows. We define the quantified RR (QRR) of a TCP flow as the probability that packet reordering causes congestion response. We analyze the variation of bandwidth sharing as QRR changes. This leads to the discovery of several interesting properties. Most notably, we discover the counter-intuitive result that changing one flow's QRR does not affect its competing flows in certain network topologies. We further characterize the deviation, from the ideal case of bandwidth sharing, as RR changes. We find that enhancing RR of a flow may increase, rather than decrease, the deviation in some typical network scenarios.
【Keywords】: bandwidth allocation; telecommunication congestion control; telecommunication network topology; transport protocols; TCP; bandwidth sharing; congestion control mechanism; congestive loss; network topology; packet reordering; quantified RR; reordering robustness; spurious congestion response; standardized transmission control protocol; Aggregates; Bandwidth; Internet; Network topology; Robustness; Routing; Systematics
【Paper Link】 【Pages】:2967-2975
【Authors】: Ken Inoue ; Davide Pasetto ; Karol Lynch ; Massimiliano Meneghin ; Kay Müller ; John Sheehan
【Abstract】: Ultra low-latency networking is critical in many domains, such as high frequency trading and high performance computing (HPC), and highly desirable in many others such as VoIP and on-line gaming. In closed systems - such as those found in HPC - Infiniband, iWARP or RoCE are common choices, as system architects have the opportunity to choose the best host configurations and networking fabric. However, the vast majority of networks are built upon Ethernet, with nodes exchanging data using the standard TCP/IP stack. On such networks, achieving ultra low-latency while maintaining compatibility with a standard TCP/IP stack is crucial. To date, most efforts for low-latency packet transfers have focused on three main areas: (i) avoiding context switches, (ii) avoiding buffer copies, and (iii) off-loading protocol processing. This paper describes IBM PowerEN™ and its networking stack, showing that an integrated system design which treats Ethernet adapters as first-class citizens that share the system bus with CPUs and memory, rather than as peripheral PCI Express attached devices, is a winning solution for achieving minimal latency. The work presents outstanding performance figures, including 1.30μs from wire to wire for UDP, usually the chosen protocol for latency-sensitive applications, and excellent latency and bandwidth figures for the more complex TCP.
【Keywords】: hardware-software codesign; local area networks; system buses; transport protocols; Ethernet adapter; HPC; HW-SW approach; IBM PowerEN™; Infiniband; RoCE; high bandwidth TCP/IP protocol; high frequency trading; high performance computing; iWARP; low-latency TCP/IP protocol; packet transfer; system bus; ultra low-latency networking; Hardware; IP networks; Kernel; Payloads; Ports (Computers); Protocols; Sockets
【Paper Link】 【Pages】:2976-2984
【Authors】: Jayakrishnan Nair ; Krishna Jagannathan ; Adam Wierman
【Abstract】: This paper focuses on the design and analysis of scheduling policies for multi-class queues, such as those found in wireless networks and high-speed switches. In this context, we study the response time tail under generalized max-weight policies in settings where the traffic flows are highly asymmetric. Specifically, we study an extreme setting with two traffic flows, one heavy-tailed, and one light-tailed. In this setting, we prove that classical max-weight scheduling, which is known to be throughput optimal, results in the light-tailed flow having heavy-tailed response times. However, we show that via a careful design of inter-queue scheduling policy (from the class of generalized max-weight policies) and intra-queue scheduling policies, it is possible to maintain throughput optimality, and guarantee light-tailed delays for the light-tailed flow, without affecting the response time tail for the heavy-tailed flow.
【Keywords】: queueing theory; radio networks; scheduling; telecommunication traffic; generalized max-weight scheduling; heavy-tailed flow; heavy-tailed response time; high-speed switch; interqueue scheduling policy; intraqueue scheduling policy; light-tailed delay; light-tailed flow; multiclass queue; scheduling policy; traffic flow; wireless networks; Indexes; Optimal scheduling; Queueing analysis; Servers; Stability analysis; Throughput; Time factors
【Paper Link】 【Pages】:2985-2993
【Authors】: Xinxin Liu ; Kaikai Liu ; Linke Guo ; Xiaolin Li ; Yuguang Fang
【Abstract】: Location Based Service (LBS), although it greatly benefits the daily life of mobile device users, has introduced significant threats to privacy. In an LBS system, even under the protection of pseudonyms, users may become victims of inference attacks, where an adversary reveals a user's real identity and complete moving trajectory with the aid of side information, e.g., accidental identity disclosure through personal encounters. To enhance privacy protection for LBS users, a common approach is to include extra fake location information associated with different pseudonyms, known as dummy users, in normal location reports. Due to the high cost of dummy generation using resource constrained mobile devices, self-interested users may free-ride on others' efforts. The presence of such selfish behaviors may have an adverse effect on privacy protection. In this paper, we study the behaviors of self-interested users in the LBS system from a game-theoretic perspective. We model the distributed dummy user generation as Bayesian games in both static and timing-aware contexts, and analyze the existence and properties of the Bayesian Nash Equilibria for both models. Based on the analysis, we propose a strategy selection algorithm to help users achieve optimized payoffs. Leveraging a beta distribution generalized from real-world location privacy data traces, we perform simulations to assess the privacy protection effectiveness of our approach. The simulation results validate our theoretical analysis for the dummy user generation game models.
【Keywords】: Bayes methods; data privacy; game theory; mobile computing; Bayesian Nash equilibria; Bayesian game; K-anonymity; dummy user; extra fake location information; game theory; location based service; location privacy data trace; location report; moving trajectory; privacy protection effectiveness; self-interested user; selfish behavior; static aware context; timing aware context; Analytical models; Bayes methods; Correlation; Games; Privacy; Servers; Trajectory
【Paper Link】 【Pages】:2994-3002
【Authors】: Dejun Yang ; Xi Fang ; Guoliang Xue
【Abstract】: Tremendous efforts have been made to protect the location privacy of mobile users. Some of them, e.g., k-anonymity, require the participation of multiple mobile users to impede the adversary from tracing them. These participating mobile users constitute an anonymity set. However, not all mobile users are seriously concerned about their location privacy. Therefore, to achieve k-anonymity, we need to provide incentives for mobile users to participate in the anonymity set. In this paper, we study the problem of incentive mechanism design for k-anonymity location privacy. We first consider the case where all mobile users have the same privacy degree requirement. We then study the case where the requirements are different. Finally, we consider a more challenging case where mobile users can cheat about not only their valuations but also their requirements. We design an auction-based incentive mechanism for each of these cases and prove that all the auctions are computationally efficient, individually rational, budget-balanced, and truthful. We evaluate the performance of the different auctions through extensive simulations.
【Keywords】: data privacy; game theory; mobile computing; K-anonymity location privacy; auction based incentive mechanism; budget balancing; multiple mobile user; privacy degree requirement; truthful incentive mechanism; Algorithm design and analysis; Cost accounting; Data privacy; Mechanical factors; Mobile communication; Privacy; Sorting
【Paper Link】 【Pages】:3003-3011
【Authors】: Xinxin Zhao ; Lingjun Li ; Guoliang Xue
【Abstract】: In current location-based social networks (LBSNs), users expose their location when they check in at a venue or search for a place. The disclosure of location information could lead to a severe breach of other private information, such as identity or health condition. In this paper, we propose a framework to safeguard users' location information as well as their check-in records. Considering the special demands of LBSNs, we design a novel index structure to provide a fast search for users when they check in at the same venue frequently. At the same time, our framework outsources the heavy cryptographic computations to the server to reduce the computational overhead for mobile clients. Due to the dynamic nature of LBSNs, our framework uses a lightweight approach to handle a user's revoked friends and new friends. We prove the security of our framework in the random oracle model and demonstrate its efficiency on a Motorola Droid phone.
【Keywords】: cryptography; data privacy; mobile radio; social networking (online); LBSN; Motorola Droid phone; check-in records; computational overhead; heavy cryptographic computations; index structure design; location based social networks; location privacy; mobile clients; random oracle model; user location information; Encryption; Indexes; Privacy; Protocols; Servers
【Paper Link】 【Pages】:3012-3020
【Authors】: Ming Li ; Sergio Salinas ; Arun Thapa ; Pan Li
【Abstract】: With great advances in mobile devices, e.g., smart phones and tablets, location-based services (LBSs) have recently emerged as a very popular application in mobile networks. However, since LBS service providers require users to report their location information, how to preserve users' location privacy is one of the most challenging problems in LBSs. Most existing approaches either cannot fully protect users' location privacy, or cannot provide accurate LBSs. Many of them also need the help of a trusted third-party, which may not always be available. In this paper, we propose a geometric approach, called n-CD, to provide realtime accurate LBSs while preserving users' location privacy without involving any third-party. Specifically, we first divide a user's region of interest (ROI), which is a disk centered at the user's location, into n equal sectors. Then, we generate n concealing disks (CDs), one for each sector, one by one to collaboratively and fully cover each of the n sectors. We call the area covered by the n CDs the concealing space, which fully contains the user's ROI. After rotating the concealing space with respect to the user's location, we send the rotated centers of the n CDs along with their radii to the service provider, instead of the user's real location and his/her ROI. To investigate the performance of n-CD, we theoretically analyze its privacy level and concealing cost. Extensive simulations are finally conducted to evaluate the efficacy and efficiency of the proposed schemes.
【Keywords】: data privacy; geometry; mobile computing; LBS service providers; ROI; concealing disks; concealing space; equal sectors; geometric approach; location based services; location information; mobile devices; mobile networks; n-CD; preserving location privacy; region of interest; smart phones; tablets; Accuracy; Engines; Mobile radio mobility management; Privacy; Security; Servers
【Paper Link】 【Pages】:3021-3029
【Authors】: Rui Shi ; Mayank Goswami ; Jie Gao ; Xianfeng Gu
【Abstract】: Random walk on a graph is a Markov chain and thus is 'memoryless', as the next node to visit depends only on the current node and not on the sequence of events that preceded it. With these properties, random walk and its many variations have been used in network routing to 'randomize' the traffic pattern and hide the location of the data sources. In this paper we examine a myth in the common understanding of the memoryless property of a random walk applied for protecting source location privacy in a wireless sensor network. In particular, if one monitors only the network boundary and records the first boundary node hit by a random walk, this distribution can be related to the location of the source node. For the scenario of a single data source, a very simple algorithm by integrating along the network boundary would reveal the location of the source. We also develop a generic algorithm to reconstruct the source locations for various sources that have simple descriptions (e.g., k source locations, sources on a line segment, sources in a disk). This represents a new type of traffic analysis attack for invading sensor data location privacy and essentially re-opens the problem for further examination.
【Keywords】: Markov processes; graph theory; memoryless systems; telecommunication network routing; telecommunication security; telecommunication traffic; wireless sensor networks; Markov chain; data sources; graph; memoryless property; network routing; random walk; sensor data location privacy; source location privacy; traffic analysis; traffic pattern; wireless sensor network; Brownian motion; Harmonic analysis; Monitoring; Position measurement; Privacy; Routing; Wireless sensor networks
【Paper Link】 【Pages】:3030-3038
【Authors】: Xiali Hei ; Xiaojiang Du ; Shan Lin ; Insup Lee
【Abstract】: Wireless insulin pumps have been widely deployed in hospitals and home healthcare systems. Most of these insulin pump systems have only limited security mechanisms embedded to protect them from malicious attacks. In this paper, two attacks against insulin pump systems via wireless links are investigated: a single acute overdose with a significant amount of medication, and a chronic overdose with an insignificant amount of extra medication over a long time period, e.g., several months. These attacks can be launched unobtrusively and may jeopardize patients' lives. It is very important and urgent to protect patients from these attacks. To address this issue, we propose a novel patient infusion pattern based access control scheme (PIPAC) for wireless insulin pumps. This scheme employs a supervised learning approach to learn the normal patient infusion pattern from the dosage amount, rate, and time of infusion, which are automatically recorded in insulin pump logs. The generated regression models are used to dynamically configure a safety infusion range for abnormal infusion identification. The proposed algorithm is evaluated with real insulin pump logs used by several patients for up to 6 months. The evaluation results demonstrate that our scheme can reliably detect the single overdose attack with a success rate of up to 98% and defend against the chronic overdose attack with a very high success rate.
【Keywords】: access control; biomedical equipment; health care; learning (artificial intelligence); patient care; radio links; regression analysis; PIPAC; abnormal infusion identification; chronic overdose; healthcare system; hospital; insulin pump log; medication; patient infusion pattern; patient infusion pattern based access control scheme; regression model; safety infusion range; wireless insulin pump system; wireless link; Access control; Communication system security; Diabetes; Insulin; Safety; Wireless communication; Wireless sensor networks; access control; implantable medical devices; infusion pattern; patient safety; wireless insulin pump
【Paper Link】 【Pages】:3039-3047
【Authors】: Kai Xing ; Zhiguo Wan ; Pengfei Hu ; Haojin Zhu ; Yuepeng Wang ; Xi Chen ; Yang Wang ; Liusheng Huang
【Abstract】: With the advancement of sensing and networking technologies, participatory sensing has attracted more and more attention, as it provides a promising way for public and professional users to gather and analyze private data to understand the world. However, in these participatory sensing applications, both the data held by individual participants and the analysis results obtained by the users are usually private and too sensitive to be disclosed, e.g., locations, salaries, utility usage, consumptions, behaviors, etc. A natural question, and an important but challenging problem, is how to preserve the privacy of both participants' and users' data while still producing the best analysis to explain a phenomenon. In this paper, we address this issue and propose M-PERM, a mutual privacy preserving regression modeling approach. In particular, we launch a series of data transformation and aggregation operations at the participatory nodes, the clusters, and the user. During regression model fitting, we provide a new way of model fitting without any need for the original private data or exact knowledge of the model expression. To evaluate our approach, we conduct both theoretical analysis and a simulation study. The evaluation results show that the proposed approach produces exactly the same best model as if the original private data were used, without leakage of the fitted model to any participatory nodes, which is a significant advance compared with the existing approaches [1-5]. It is also shown that the data gathering design is able to reach maximum privacy protection under certain conditions and is robust against collusion attacks. Furthermore, compared with existing works in the same context (e.g., [1-5]), to the best of our knowledge this is the first work showing that not only model coefficient estimation but also a series of regression analysis and model selection methods are reachable in mutual privacy preserving data analysis scenarios such as participatory sensing.
【Keywords】: data acquisition; data analysis; data privacy; regression analysis; wireless sensor networks; M-PERM; aggregation operation; collusion attack; data cluster; data gathering design; data transformation series; model selection method; mutual privacy preserving data analysis; mutual privacy preserving regression model fitting; networking technology; participatory node; participatory sensing; users data privacy protection; Analytical models; Computational modeling; Data models; Data privacy; Privacy; Regression analysis; Sensors
【Paper Link】 【Pages】:3048-3056
【Authors】: Hongbo Liu ; Yang Wang ; Jie Yang ; Yingying Chen
【Abstract】: Securing wireless communication remains challenging in dynamic mobile environments, due to the shared nature of the wireless medium and the lack of fixed key management infrastructures. Generating secret keys using physical layer information has thus drawn much attention as a complement to traditional cryptographic methods. Although recent work has demonstrated that Received Signal Strength (RSS) based secret key extraction is practical, existing RSS-based key generation techniques are largely limited in the rate at which they generate secret bits and are mainly applicable to mobile wireless networks. In this paper, we show that exploiting the channel response from multiple Orthogonal Frequency-Division Multiplexing (OFDM) subcarriers can provide fine-grained channel information and achieve a higher bit generation rate for both static and mobile cases in real-world scenarios. We further develop a Channel Gain Complement (CGC) assisted secret key extraction scheme to cope with the channel non-reciprocity encountered in practice. Our extensive experiments using WiFi networks in both indoor and outdoor environments demonstrate that our approach can achieve a significantly faster secret bit generation rate of 60 to 90 bits per packet, and is resilient to malicious attacks identified to be harmful to RSS-based techniques, including the predictable channel attack and the stalking attack.
【Keywords】: cryptography; mobile communication; radiocommunication; telecommunication security; OFDM subcarriers; RSS based key generation; RSS based secret key extraction; WiFi networks; channel gain complement assisted secret key extraction; channel nonreciprocity; channel response; complement traditional cryptographic based method; dynamic mobile environment; fine grained channel information; fixed key management infrastructure; malicious attack; mobile wireless networks; multiple orthogonal frequency division multiplexing; physical layer information; predictable channel attack; received signal strength; secret bit generation rate; secret bits; stalking attack; wireless communication; wireless medium; Data mining; OFDM; Probes; Quantization (signal); Time measurement; Wireless communication; Wireless sensor networks
【Paper Link】 【Pages】:3057-3065
【Authors】: Zimu Zhou ; Zheng Yang ; Chenshu Wu ; Longfei Shangguan ; Yunhao Liu
【Abstract】: Passive human detection and localization serve as key enablers for various pervasive applications such as smart spaces, human-computer interaction and asset security. The primary concern in devising scenario-tailored detection systems is the coverage of their monitoring units. In conventional radio-based schemes, the basic unit tends to demonstrate a directional coverage, even if the underlying devices are all equipped with omnidirectional antennas. Such an inconsistency stems from the link-centric architecture, which creates an anisotropic wireless propagation environment. To achieve an omnidirectional coverage while retaining the link-centric architecture, we propose the concept of Omnidirectional Passive Human Detection, and investigate how to harness PHY layer features to virtually tune the shape of the unit coverage through fingerprinting approaches, which was previously impossible with MAC layer RSSI alone. We design the scheme with ubiquitously deployed WiFi infrastructure and evaluate it in typical multipath-rich indoor scenarios. Experimental results show that our scheme achieves an average false positive rate of 8% and an average false negative rate of 7% in detecting human presence in 4 directions.
【Keywords】: human computer interaction; omnidirectional antennas; asset security; directional coverage; human-computer interaction; link-centric architecture; monitoring units; omnidirectional antennas; passive human detection; scenario-tailored detecting systems; smart space; Azimuth; Computer architecture; Feature extraction; Histograms; Microprocessors; Monitoring; Wireless communication
【Paper Link】 【Pages】:3066-3074
【Authors】: Longfei Shangguan ; Zhenjiang Li ; Zheng Yang ; Mo Li ; Yunhao Liu
【Abstract】: In many logistics applications of RFID technology, goods attached with tags are placed on moving conveyor belts for processing. It is important to figure out the order of goods on the belts so that further actions, like sorting, can be accurately taken on the proper goods. Due to arbitrary goods placement and the irregularity of wireless signal propagation, neither the order of tag identification nor the received signal strength provides sufficient evidence of the goods' relative positions on the belts. In this study, we observe from experiments a critical region of reading rate when a tag gets close enough to a reader. This phenomenon, together with other signal attributes, yields a stable indication of tag order. We establish a probabilistic model for recognizing the transient critical region and propose the OTrack protocol to continuously monitor the order of tags. To validate the protocol, we evaluate its accuracy and effectiveness through a one-month experiment conducted on a working conveyor at Beijing Capital International Airport.
【Keywords】: mobile radio; probability; radiofrequency identification; radiowave propagation; Beijing Capital International Airport; OTrack protocol; RFID technology; conveyor belts; logistics applications; luggage; mobile RFID systems; order tracking; received signal strength; signal attributes; stable indication; tag identification; tag order; transient critical region; wireless signal propagation; Accuracy; Airports; Belts; Correlation; Market research; Protocols; Radiofrequency identification
【Paper Link】 【Pages】:3075-3083
【Authors】: Wei Cheng ; Kefeng Tan ; Victor Omwando ; Jindan Zhu ; Prasant Mohapatra
【Abstract】: RSS (Received Signal Strength) has been widely utilized in wireless applications. It is, however, susceptible to environmental unknowns in both the temporal and spatial domains. As a result, the fluctuation of RSS may degrade the performance of RSS-based applications. In this work, we propose a novel RSS processing method at the receiver for three-antenna systems. The output of our approach is the `RSS-Ratio', which eliminates the environmental unknowns and is thus a more stable variable than RSS itself. To validate the efficacy of the proposed method, we conduct a series of experiments in a range of wireless scenarios, including indoor laptop-based measurement, indoor software-defined radio (WARP) based measurement, and outdoor wireless measurement. In addition, we analyze the relationship between the location of the transmitter and the value of the RSS-Ratio, and examine the accuracy of the estimated RSS-Ratio value via both simulations and experiments. All the experimental, analytical, and simulated results demonstrate that the RSS-Ratio is a better alternative to RSS for improving the performance of RSS-based applications.
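The abstract does not give the paper's exact formulation; the sketch below only illustrates the core ratio idea under the assumption that per-antenna RSS readings are reported in dBm (all function names and the averaging step are illustrative, not from the paper). In the linear (mW) domain, an environmental factor common to two antennas is multiplicative and cancels in their ratio, which in dB reduces to a simple difference:

```python
def dbm_to_mw(dbm):
    """Convert an RSS reading from dBm to linear milliwatts."""
    return 10 ** (dbm / 10.0)

def rss_ratio_db(rss_a_dbm, rss_b_dbm):
    """Ratio of two antennas' RSS in the linear domain, expressed in dB.
    A multiplicative environmental factor common to both antennas
    cancels, so the ratio is more stable than either raw RSS value."""
    return rss_a_dbm - rss_b_dbm

def mean_rss_ratio_db(samples_a, samples_b):
    """Average the per-sample ratios to suppress residual noise."""
    pairs = list(zip(samples_a, samples_b))
    return sum(rss_ratio_db(a, b) for a, b in pairs) / len(pairs)
```

For example, if a shared fade shifts both antennas by the same number of dB, every raw sample moves but the ratio stays constant.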
【Keywords】: indoor radio; receiving antennas; software radio; RSS processing method; RSS-based applications; WARP based measurement; antenna based systems; estimated RSS-Ratio value; indoor laptop based measurement; indoor software defined radio; outdoor wireless measurement; performance enhancement; received signal strength; receiver antenna; transmitter; wireless applications; Decision support systems
【Paper Link】 【Pages】:3084-3092
【Authors】: Siyao Cheng ; Jianzhong Li ; Zhipeng Cai
【Abstract】: To observe the complicated physical world with a WSN, the sensors in the WSN sense and sample data from the physical world. Currently, most existing work uses equi-frequency sampling (EFS) or EFS-based methods for data acquisition in sensor networks. However, the accuracy of EFS and EFS-based sampling methods cannot be guaranteed in practice since the physical world usually varies continuously, and these methods do not support reconstruction of the monitored physical world. To overcome these shortcomings, this paper focuses on designing physical-world-aware data acquisition algorithms that support an O(ϵ)-approximation to the physical world for any ϵ > 0. Two physical-world-aware data acquisition algorithms, based on Hermite and spline interpolation, are proposed in the paper. Both algorithms can adjust the sensing frequency automatically based on the changing trend of the physical world and the given ϵ. A thorough analysis of the performance of the algorithms is also provided, covering their accuracy, the smoothness of the output curves, the error bounds for computing first and second derivatives, the number of sampling times, and the complexities of the algorithms. It is proven that the error bounds of the algorithms are O(ϵ) and their complexities are O(1/ϵ^(1/4)). Based on the new data acquisition algorithms, an algorithm for reconstructing the physical world is also proposed and analyzed. The theoretical analysis and experimental results show that all the proposed algorithms achieve high performance in terms of accuracy and energy consumption.
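As a reference point for the interpolation machinery named in the abstract, here is the standard cubic Hermite interpolant between two samples; the adaptive choice of the next sampling time is the paper's contribution and is not reproduced here, and the function name and test values are illustrative:

```python
def hermite(t, t0, t1, y0, y1, d0, d1):
    """Cubic Hermite interpolant on [t0, t1] matching values y0, y1
    and derivatives d0, d1 at the two endpoints."""
    h = t1 - t0
    s = (t - t0) / h  # normalized position in [0, 1]
    # the four Hermite basis polynomials
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * y0 + h10 * h * d0 + h01 * y1 + h11 * h * d1
```

Matching both the value and the derivative at each endpoint is what makes it possible to bound the error of the reconstructed curve and of its first derivatives between samples.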
【Keywords】: approximation theory; data acquisition; interpolation; splines (mathematics); wireless sensor networks; EFS based sampling methods; Hermite interpolation; WSN; equi-frequency sampling methods; error bounds; physical-world-aware data acquisition algorithms; spline interpolation; wireless sensor networks; Accuracy; Algorithm design and analysis; Data acquisition; Interpolation; Monitoring; Splines (mathematics); Wireless sensor networks
【Paper Link】 【Pages】:3094-3101
【Authors】: Xinyu Zhang ; Kang G. Shin
【Abstract】: Coordination of co-located wireless devices is a fundamental function/requirement for reducing interference. However, different devices cannot directly coordinate with one another as they often use incompatible modulation schemes. Even for the same type (e.g., WiFi) of devices, their coordination is infeasible when neighboring transmitters adopt different spectrum widths. Such an incompatibility between heterogeneous devices may severely degrade network performance. In this paper, we introduce Gap Sense (GSense), a novel mechanism that can coordinate heterogeneous devices without modifying their PHY-layer modulation schemes or spectrum widths. GSense prepends legacy packets with a customized preamble, which piggybacks information to enhance inter-device coordination. The preamble leverages the quiet period between signal pulses to convey such information, and can be detected by neighboring nodes even when they have incompatible PHY layers. We have implemented and evaluated GSense on a software radio platform, demonstrating its significance and utility in three popular protocols. GSense is shown to deliver coordination information with close to 100% accuracy within practical SNR regions. It can also reduce energy consumption by around 44%, and the collision rate by more than 88%, in networks of heterogeneous transmitters and receivers.
【Keywords】: mobile radio; modulation; protocols; radio receivers; radio transmitters; radiofrequency interference; software radio; GSense; PHY-layer modulation schemes; SNR regions; WiFi; collision rate; colocated wireless device coordination; energy consumption reduction; gap sense; heterogeneous receivers; heterogeneous transmitters; heterogeneous wireless devices; incompatible modulation schemes; interdevice coordination enhancement; interference reduction; legacy packets; mobile networks; neighboring nodes; neighboring transmitters; network performance degradation; piggy-backs information; signal pulses; software radio platform; spectrum widths; wireless networks; Clocks; IEEE 802.11 Standards; Protocols; Radio transmitters; Receivers; Signal to noise ratio; Zigbee
【Paper Link】 【Pages】:3102-3110
【Authors】: Justin Manweiler ; Naveen Santhapuri ; Romit Roy Choudhury ; Srihari Nelakuditi
【Abstract】: Today's smartphones provide a variety of sensors, enabling high-resolution measurements of user behavior. We envision that many services can benefit from short-term predictions of complex human behavioral patterns. While enabling behavior awareness through sensing is a broad research theme, one possibility is predicting how quickly a person will move through a space. Such a prediction service could have numerous applications. As one example, we imagine shop owners predicting how long a particular customer is likely to browse merchandise, and issuing targeted mobile coupons accordingly - customers in a hurry can be encouraged to stay and consider discounts. Within a space of moderate size, WiFi access points are uniquely positioned to support a statistical framework for predicting user length of stay, passively recording metrics such as WiFi signal strength (RSSI) and potentially receiving client-uploaded sensor data. In this work, we attempt to quantify this opportunity, and show that human dwell time can be predicted with reasonable accuracy, even when restricted to passively observed WiFi RSSI.
【Keywords】: smart phones; wireless LAN; RSSI; WiFi hotspots; access points; client-uploaded sensor data; high-resolution measurements; sensors; smartphones; Accuracy; Compass; Feature extraction; IEEE 802.11 Standards; Sensor phenomena and characterization; Support vector machines
【Paper Link】 【Pages】:3111-3119
【Authors】: Rong Zheng ; Thanh Le ; Zhu Han
【Abstract】: We consider the problem of optimally assigning p sniffers to K channels to monitor the transmission activities in a multi-channel wireless network. The activity of users is initially unknown to the sniffers and is to be learned along with channel assignment decisions. Previously proposed online learning algorithms face high computational costs due to the NP-hardness of the decision problem. In this paper, we propose two approximate online learning algorithms, ϵ-GREEDY-APPROX and EXP3-APPROX, which are shown to have better scalability and achieve sub-linear regret bounds over time compared to a greedy offline algorithm with complete information. We demonstrate both analytically and empirically the trade-offs between the computation cost and the rate of learning.
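The abstract does not spell out ϵ-GREEDY-APPROX; as a hedged illustration of the ϵ-greedy principle behind it, the sketch below explores a random sniffer-to-channel assignment with probability ϵ and otherwise exploits the p channels with the highest estimated activity. The paper's actual exploitation step solves an approximate coverage problem; the simple top-p rule here merely stands in for it, and the names are illustrative.

```python
import random

def epsilon_greedy_assign(estimates, p, epsilon, rng=random):
    """Pick p of the K channels: explore a random subset with
    probability epsilon, otherwise exploit the p channels with the
    highest estimated user activity."""
    k = len(estimates)
    if rng.random() < epsilon:
        return rng.sample(range(k), p)
    return sorted(range(k), key=lambda c: estimates[c], reverse=True)[:p]

def update(estimates, counts, channel, reward):
    """Incremental-mean update of a channel's observed activity."""
    counts[channel] += 1
    estimates[channel] += (reward - estimates[channel]) / counts[channel]
```

Each monitoring round, the sniffers observe the chosen channels, feed the observed activity back through update(), and the estimates converge while exploration keeps every channel from being starved.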
【Keywords】: computational complexity; radio networks; K channels; NP hardness; approximate online learning; computation cost; decision problem; face high computational costs; greedy offline algorithm; multichannel wireless networks; online learning algorithms; passive monitoring; sublinear regret bounds; transmission activities; Approximation algorithms; Approximation methods; Complexity theory; Greedy algorithms; Joints; Monitoring; Wireless networks
【Paper Link】 【Pages】:3120-3128
【Authors】: Wenchi Cheng ; Xi Zhang ; Hailin Zhang
【Abstract】: We consider full-duplex transmission over bidirectional channels with imperfect self-interference cancellation in wireless networks. In particular, jointly using propagation-domain interference suppression, analog-domain interference cancellation, and digital-domain interference cancellation, we develop optimal dynamic power allocation schemes for the wireless full-duplex sum-rate optimization problem, which aims at maximizing the sum-rate of wireless full-duplex bidirectional transmissions. In the high signal-to-interference-plus-noise ratio (SINR) region, the full-duplex sum-rate maximization problem is a convex optimization problem. For interference-dominated wireless full-duplex transmission in the high SINR region, we derive a closed-form expression for the optimal dynamic power allocation scheme. For non-interference-dominated wireless full-duplex transmission in the high SINR region, we obtain the optimal dynamic power allocation scheme by numerically solving the corresponding Karush-Kuhn-Tucker (KKT) conditions. In general, however, the full-duplex sum-rate maximization problem is not convex; by developing the tightest lower-bound function and using the logarithmic change-of-variables technique, we convert it into a convex optimization problem. Then, using our proposed iterative algorithm, we can numerically derive the optimal dynamic power allocation scheme for the more generic scenario. Also presented are numerical results which validate our developed optimal dynamic power allocation schemes.
【Keywords】: convex programming; interference suppression; optimal control; power control; radio networks; telecommunication control; wireless channels; KKT conditions; Karush-Kuhn-Tucker conditions; SINR region; analog-domain interference cancellation; closed-form expression; convex optimization problem; digital-domain interference cancellation; full-duplex bidirectional channel; full-duplex sum-rate maximization problem; imperfect self-interference cancelation; interference-dominated wireless full-duplex transmission; iteration algorithm; lower bound function; noninterference-dominated wireless full-duplex transmission; optimal dynamic power allocation scheme; optimal dynamic power control; propagation-domain interference suppression; signal-to-interference-plus-noise ratio; wireless full-duplex bidirectional transmission; wireless full-duplex sum-rate optimization problem; wireless networks; Dynamic scheduling; Interference; Lead; Resource management; Signal to noise ratio; Transmitters; Wireless communication; Full-duplex; bidirectional transmission; power control; self-interference; sum-rate
【Paper Link】 【Pages】:3129-3134
【Authors】: Gautam S. Thakur ; Pan Hui ; Ahmed Helmy
【Abstract】: Future vehicular networks shall enable new classes of services and applications for car-to-car and car-to-roadside communication. The underlying vehicular mobility patterns significantly impact the operation and effectiveness of these services, and hence it is essential to model and characterize such patterns. In this paper, we examine the mobility of vehicles as a function of traffic density at more than 800 locations from six major metropolitan regions around the world. The traffic densities are generated from more than 25 million images and processed using a background subtraction algorithm. The resulting vehicular density time series and distributions are then analyzed. Using goodness-of-fit tests, we find that the vehicular density distribution follows heavy-tail distributions such as Log-gamma, Log-logistic, and Weibull in over 90% of these locations. Moreover, heavy tails give rise to long-range dependence and self-similarity, which we studied by estimating the Hurst exponent (H). Our analysis based on seven different Hurst estimators signifies that the traffic patterns are stochastically self-similar (0.5 ≤ H ≤ 1.0). We believe this is an important finding, which will influence the design and deployment of next-generation vehicular networks and also aid in the development of opportunistic communication services and applications for vehicles. In addition, it shall provide much-needed input for the development of smart cities.
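The abstract names seven Hurst estimators without detailing them. The classic rescaled-range (R/S) estimator, sketched below for a plain numeric series, illustrates how H is read off as the slope of log(R/S) against log(window size); it is an illustrative estimator, not necessarily one of the seven the paper uses.

```python
import math

def hurst_rs(series):
    """Estimate the Hurst exponent by regressing log(R/S) on
    log(window size) over dyadic window sizes (rescaled-range method)."""
    n = len(series)
    log_sizes, log_rs = [], []
    size = 8
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            w = series[start:start + size]
            m = sum(w) / size
            dev = [x - m for x in w]
            # range of the cumulative deviations within the window
            cum, c = [], 0.0
            for d in dev:
                c += d
                cum.append(c)
            r = max(cum) - min(cum)
            s = math.sqrt(sum(d * d for d in dev) / size)
            if s > 0:
                rs.append(r / s)
        if rs:
            log_sizes.append(math.log(size))
            log_rs.append(math.log(sum(rs) / len(rs)))
        size *= 2
    # least-squares slope of log(R/S) vs log(size) is the H estimate
    k = len(log_sizes)
    mx, my = sum(log_sizes) / k, sum(log_rs) / k
    num = sum((x - mx) * (y - my) for x, y in zip(log_sizes, log_rs))
    den = sum((x - mx) ** 2 for x in log_sizes)
    return num / den
```

A strongly trending series yields H near 1 (persistent, self-similar), while noise-like series fall nearer 0.5, matching the 0.5 ≤ H ≤ 1.0 range the paper reports for traffic densities.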
【Keywords】: gamma distribution; mobility management (mobile radio); next generation networks; telecommunication traffic; time series; Hurst exponent estimation; Weibull distribution; background subtraction algorithm; car-to-car communication; car-to-roadside communication; heavy-tail distribution; log-gamma distribution; log-logistic distribution; next generation vehicular network; opportunistic communication service; smart city; traffic pattern; vehicular density distribution; vehicular density modeling; vehicular density time series; vehicular mobility pattern; Analytical models; Cameras; Data models; Internet; Kernel; Time series analysis; Vehicles
【Paper Link】 【Pages】:3135-3140
【Authors】: Yi Wang ; Michalis Faloutsos ; Hui Zang
【Abstract】: How do people use phone calls and text messages for their communication needs? Most studies so far examine each mode of communication in isolation. Here, we study the interplay of multi-modal communications. We analyze more than a billion call and text records from a Chinese city and the San Francisco area between 2007 and 2011. First, we provide some definitions towards a framework for analyzing multi-modal communications. Then, we study the relationship of the two communication modes and quantify several aspects of correlation and inference. For a communicating pair, we find that the existence of texting during the weekend is the strongest indicator that the pair will communicate at other times with texts or calls. We compare the behavior between China and the U.S. and find several similarities and differences. For example, we find evidence of an after-lunch siesta among Chinese users. Finally, we study the evolution of the two modes over time. We find that texting has taken over in sheer number of "events", with the ratio of calls to texts flipping from 2:1 in 2007 to 1:2 in 2011.
【Keywords】: cellular radio; electronic messaging; pattern recognition; multimodal communication; phone calls; text messages; usage patterns; Correlation; Cultural differences; Electronic mail; Internet; Measurement; Pricing; Telecommunications
【Paper Link】 【Pages】:3141-3146
【Authors】: Matteo Varvello ; Moritz Steiner
【Abstract】: BitTorrent is both the dominant Peer-to-Peer (P2P) protocol for file-sharing and a nightmare for ISPs due to its network-agnostic nature. Many solutions exist to localize BitTorrent traffic, relying on cooperation between ISPs and the trackers. Recently, BitTorrent users have been abandoning the trackers in favor of Distributed Hash Tables (DHTs). Although DHTs are complex heterogeneous systems, DHT-based traffic localization is also possible; however, it is unclear how well it performs. The goal of this work is to measure DHT-based traffic localization in the wild. We run multiple experiments involving up to five commercial ISPs with a maximum duration of one month, collecting about 400 GB of BitTorrent traffic. Then, we perform an extensive analysis with the following goals: understand the impact of system parameters, verify the accuracy of the measurements, and estimate the localization benefits.
【Keywords】: cryptography; peer-to-peer computing; protocols; telecommunication traffic; BitTorrent traffic localization; DHT-based traffic localization; ISP; P2P protocol; distributed hash table; file-sharing; heterogeneous system; localization benefit estimation; network agnostic nature; peer-to-peer protocol; Accuracy; Instruments; Internet; Linux; Peer-to-peer computing; Protocols; Radio access networks
【Paper Link】 【Pages】:3147-3152
【Authors】: Yingdi Yu ; Duane Wessels ; Matt Larson ; Lixia Zhang
【Abstract】: As more and more authority DNS servers turn on DNS security extensions (DNSSEC), it becomes increasingly important to understand whether, and how many, DNS resolvers perform DNSSEC validation. In this paper we present a query-based measurement method, called Check-Repeat, to gauge the presence of DNSSEC validating resolvers. Utilizing the fact that most validating resolver implementations retry DNS queries with a different authority server if they receive a bad DNS response, Check-Repeat can identify validating resolvers by removing the signatures from regular DNS responses and observing whether a resolver retries DNS queries. We tested Check-Repeat in different scenarios and our results showed that Check-Repeat can identify validating resolvers with a low error rate. We also cross-checked our measurement results with DNS query logs from .COM and .NET domains, and confirmed that the resolvers measured in our study can account for more than 60% of DNS queries in the Internet.
【Keywords】: Internet; computer network security; query processing; .COM domain; .NET domain; Check-Repeat; DNS query log; DNS response signature; DNS security extension; DNSSEC validating resolver measurement; Internet; authority DNS server; authority server; query-based measurement method; validating resolver identification; Browsers; Conferences; IP networks; Monitoring; Probes; Public key; Servers
【Paper Link】 【Pages】:3153-3158
【Authors】: Andreas Berger ; Wilfried N. Gansterer
【Abstract】: More and more Internet services are hosted by Content Distribution Networks or Cloud operators. Often, IP addresses are reused for several services, and the mapping between domain names and IPs has become highly agile. This complicates the analysis of monitoring data, as it is no longer clear which IP address represents which service at which time. We propose a system that continuously monitors this activity using captured DNS packets in a large network. Thereby we are able to (i) understand the allocation strategies inside a hosting provider, and (ii) report significant changes that are not due to the normal agility of a particular service. We evaluate our system using a two-week data set from a large network operator, and demonstrate how it can be used to find malicious sites.
【Keywords】: Web sites; cloud computing; security of data; DNS agility modeling; DNS packets; DNSMap; IP addresses; Internet services; cloud operators; content distribution networks; malicious sites; network operator; Clustering algorithms; Conferences; Facebook; IP networks; Merging; Monitoring; Quality of service
【Paper Link】 【Pages】:3159-3164
【Authors】: Arian Bär ; Antonio Paciello ; Peter Romirer-Maierhofer
【Abstract】: In recent years, botnets have become one of the major sources of cyber-crime activities carried out via the public Internet. Typically, they may serve a number of different malicious activities such as Distributed Denial of Service (DDoS) attacks, email spam, and phishing attacks. In this paper we validate the Domain Name System (DNS) failure graph approach presented earlier in [1]. In our work we apply this approach in an operational 3G mobile network serving a significantly larger user population. Based on the introduction of stable host identifiers, we implement a novel approach to the tracking of botnets over a period of several weeks. Our results reveal the presence of several groups of hosts that are members of botnets. We analyze the host groups exhibiting the most suspicious behavior and elaborate on how they participate in botnets and other malicious activities. In the last part of this work, we discuss how the accuracy of our detection approach could be improved in the future by correlating the knowledge obtained from applying our method in different networks.
【Keywords】: 3G mobile communication; Internet; computer crime; computer network security; graph theory; DDoS attacks; DNS failure graphs; botnet tracking; botnet trapping; cyber-crime activities; distributed denial of service attacks; domain name system failure graph approach; email spam; host identifiers; malicious activities; operational 3G mobile network; phishing attacks; public Internet; Algorithm design and analysis; Clustering algorithms; Electronic mail; IP networks; Monitoring; Servers; Superluminescent diodes
【Paper Link】 【Pages】:3165-3170
【Authors】: Ali Tizghadam ; Ali Shariat ; Alberto Leon-Garcia ; Hassan Naser
【Abstract】: Due to the time-varying nature of wireless networks, robust optimal methods are required to control the behavior and performance of such networks; however, this is a challenging task since robustness metrics and QoS-based (Quality of Service) constraints in a wireless environment are typically highly non-linear and non-convex. This paper explores the possibility of using graph-theoretic metrics to provide robustness in a wireless network in the presence of a set of QoS constraints. In particular, we are interested in robust planning of a wireless network for a given demand matrix while keeping the end-to-end delay for input demands below a given threshold set. To this end, we show that the upper bound of the end-to-end round trip time between two nodes of a network can be approximated by the point-to-point network criticality (or resistance distance) of the network. We construct a convex optimization problem to provide a delay-guaranteed, jointly optimal allocation of transmit powers and link flows. We show that the solution provides robust behavior, i.e., it is insensitive to environmental changes such as wireless link disruption; this is expected because network criticality is a robustness metric. Our framework can be applied to a wide range of SINR (Signal to Interference plus Noise Ratio) values.
【Keywords】: convex programming; delays; graph theory; interference; quality of service; radio networks; telecommunication network planning; telecommunication network topology; QoS-guaranteed network engineering; SINR; convex optimization problem; demand matrix; end-to-end delay; graph theoretic metrics; interference aware wireless networks; joint optimal allocation; link flows; point-to-point network criticality; quality of service; resistance distance; robust network planning; round trip time; signal to interference plus noise ratio; wireless environment; wireless link disruption; Delays; Interference; Optimization; Quality of service; Robustness; Wireless networks
【Paper Link】 【Pages】:3171-3176
【Authors】: Simone Mainardi ; Enrico Gregori ; Luciano Lenzini
【Abstract】: Autonomous Systems (ASes) exist and co-exist in two parallel dimensions. In one dimension, they are physical networks whose interconnections are necessary to ensure global Internet reachability. In the other dimension, ASes are large, well-known companies competing in the same industry. In this paper we bridge these dimensions by investigating synchronous cross-correlations of stock market data and AS-level topological properties. We find that geographically close companies offering similar services are driven by common economic factors. We also provide evidence on the existence and nature of factors governing both global and local AS topological properties.
【Keywords】: Internet; correlation methods; economics; stock markets; topology; AS-level topological properties; autonomous systems; economic factors; geographically close companies; global Internet reachability; parallel dimensions; physical networks; stock market data; synchronous cross correlations; Companies; Correlation; Internet; Stock markets; Topology
【Paper Link】 【Pages】:3177-3182
【Authors】: S. N. Akshay Uttama Nambi ; Thanasis G. Papaioannou ; Dipanjan Chakraborty ; Karl Aberer
【Abstract】: The continuous growth of energy needs and the fact that unpredictable energy demand is mostly served by unsustainable (i.e., fossil-fuel) power generators have given rise to the development of Demand Response (DR) mechanisms for flattening energy demand. Building effective DR mechanisms and user awareness of power consumption can significantly benefit from fine-grained monitoring of user consumption at the appliance level. However, installing and maintaining such a monitoring infrastructure in residential settings can be quite expensive. In this paper, we study the problem of fine-grained appliance power-consumption monitoring based on one house-level meter and a few plug-level meters. We explore the trade-off between monitoring accuracy and cost, and exhaustively find the minimum subset of plug-level meters that maximizes accuracy. As exhaustive search is time- and resource-consuming, we define a heuristic approach that finds the optimal set of plug-level meters without evaluating any other sets of plug-level meters. Based on experiments with real data, we found that a few plug-level meters, when appropriately placed, can very accurately disaggregate the total real power consumption of a residential setting, and we verified the effectiveness of our heuristic approach.
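The abstract does not give the selection objective in closed form; the sketch below shows the exhaustive-search baseline, using the variance of the unmetered residual load as an illustrative stand-in for the paper's disaggregation-accuracy objective (the function name, the dict-of-traces input, and the variance proxy are all assumptions, not the paper's method):

```python
from itertools import combinations

def pick_meters(appliance_traces, m):
    """Exhaustively choose m appliances to meter directly so that the
    unmetered residual has minimum variance -- a simple proxy for how
    easy the remaining load is to disaggregate from the house-level
    meter. appliance_traces maps appliance name -> power trace."""
    names = list(appliance_traces)
    n = len(next(iter(appliance_traces.values())))
    best, best_var = None, float("inf")
    for subset in combinations(names, m):
        # residual seen by the house-level meter after direct metering
        residual = [
            sum(appliance_traces[a][t] for a in names if a not in subset)
            for t in range(n)
        ]
        mean = sum(residual) / n
        var = sum((x - mean) ** 2 for x in residual) / n
        if var < best_var:
            best, best_var = set(subset), var
    return best
```

With this proxy, the search naturally picks the most erratic loads for direct metering and leaves steady, easily modeled loads to the house-level meter, which mirrors the intuition that a few well-placed plug meters suffice.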
【Keywords】: demand side management; heuristic programming; power consumption; sustainable development; demand response mechanisms; energy demand; energy needs; fine-grained appliance; heuristic approach; house-level meter; plug-level meters; residential settings; sustainable energy consumption monitoring; Accuracy; Energy consumption; Heuristic algorithms; Hidden Markov models; Home appliances; Monitoring; Power demand; Energy disaggregation; FHMM; Hidden Markov Models; NILM; plug-level meter
【Paper Link】 【Pages】:3183-3188
【Authors】: Ugo Montanari ; Alain Tcheukam Siwe
【Abstract】: Decentralized power management systems will play a key role in reducing greenhouse gas emissions and increasing electricity production through alternative energy sources. In this paper, we focus on power market models in which prosumers interact in a distributed environment during the purchase or sale of electric power. We have chosen to follow the distributed power market model DEZENT. Our contribution is a planning phase for prosumer consumption based on the negotiation mechanism of DEZENT. We propose a controller for consumption planning that aims to minimize the electricity cost incurred by the end of a day. In the paper we discuss the assumptions on which the controller design is based.
【Keywords】: marketing; multi-agent systems; power engineering computing; power markets; software agents; DEZENT; consumption planning; decentralized power management systems; distributed environment; distributed power market model; electric power purchase; electric power sale; electricity cost minimization; negotiation mechanism; prosumer profiling; real time market model; Conferences; Electricity; Heuristic algorithms; Learning (artificial intelligence); Planning; Power generation; Power markets
【Paper Link】 【Pages】:3189-3194
【Authors】: John Tadrous ; Atilla Eryilmaz ; Hesham El Gamal
【Abstract】: We address the question of optimal proactive service and demand shaping for content distribution in data networks through smart pricing. We develop a proactive download scheme that utilizes the probabilistic predictability of human demand by proactively serving potential users' future requests during off-peak times. Thus, it smooths out the network traffic and minimizes the time-average cost of service. Moreover, we incorporate the varying economic responsiveness and demand flexibilities of users into our model to develop a demand shaping mechanism that further improves the gains of proactive downloads. To that end, we propose a model that captures the uncertainty about the users' demand as well as their responsiveness to the pricing employed by the service providers. We propose a joint proactive resource allocation and demand shaping scheme based on non-convex optimization algorithms, and show that it always leads to strictly better performance than its proactive counterpart without demand shaping.
【Keywords】: computer networks; concave programming; packet radio networks; pricing; probability; resource allocation; telecommunication services; telecommunication traffic; content distribution; demand flexibility; demand shaping mechanism; economic responsiveness; human demand; joint proactive resource allocation; network traffic; nonconvex optimization algorithm; optimal proactive service; proactive download scheme; proactive potential user future request service; probabilistic predictability; smart data network; smart pricing; time average cost of service minimization; Approximation methods; Entropy; Joints; Linear programming; Pricing; Resource management; Wireless communication
【Paper Link】 【Pages】:3195-3200
【Authors】: Yih-Farn Robin Chen ; Rittwik Jana
【Abstract】: The explosive growth of cellular traffic and its highly dynamic nature often make it increasingly expensive or even infeasible for a cellular service provider to provision enough cellular resources to support the peak traffic demands. Some service providers have started exploring various economic incentives, including smart data pricing, to manage network congestion. We present SpeedGate, a smart mobile data pricing testbed that allows a service provider to experiment with different dynamic pricing strategies. SpeedGate maintains persistent VPN connections to smartphones as users roam between different wireless networks (3G, 4G/LTE, WiFi). The maximum available bandwidth per user session can be adjusted according to various data pricing strategies. We report preliminary results on two trials with a total of 29 users for assessing their willingness to pay (WTP) for various speed tiers. Preliminary observations suggest the challenges of QoS guarantees through speed tiers in the field, the limited dynamic range of WTP values from individual users for different speed tiers, and potential opportunities for auction-based dynamic pricing.
【Keywords】: 3G mobile communication; 4G mobile communication; Long Term Evolution; cellular radio; data communication; pricing; smart phones; telecommunication traffic; wireless LAN; 3G; 4G-LTE; QoS; SpeedGate; WTP values; WiFi; auction-based dynamic pricing; cellular service provider; cellular traffic; data pricing strategies; dynamic range; economic incentives; network congestion; peak traffic demands; service provider; smart data pricing; smart mobile data pricing testbed; speed tiers; willingness to pay; wireless networks; Bandwidth; IEEE 802.11 Standards; Pricing; Quality of service; Smart phones; Virtual private networks; Wireless communication
【Paper Link】 【Pages】:3201-3206
【Authors】: Hyojung Lee ; Hyeryung Jang ; Yung Yi ; Jeong-woo Cho
【Abstract】: The Internet consists of economically selfish players in terms of access/transit connection, content distribution, and users. Such selfish behaviors often lead to techno-economic inefficiencies such as unstable peering and revenue imbalance. Recent research results suggest that cooperation in revenue sharing (and thus multi-level ISP settlements) can be a candidate solution for the problem of unfair revenue shares. However, it is unclear whether providers are willing to behave cooperatively. In this paper, we study the interaction between content-oriented traffic scheduling at the edge and the stability of the intended cooperation. We consider three traffic scheduling policies with varying degrees of content-value preference, compare them in terms of implementation complexity, network neutrality, and stability of cooperation, and present interesting trade-offs among them.
【Keywords】: Internet; scheduling; telecommunication traffic; Internet; access-transit connection; content distribution; content-oriented traffic scheduling; content-value preference; cooperation stability; implementation complexity; multilevel ISP settlements; network neutrality; revenue imbalance; revenue sharing; techno-economic inefficiencies; traffic scheduling policies; unstable peering; Conferences; Economics; Games; Internet; Sociology; Stability analysis; Statistics
【Paper Link】 【Pages】:3207-3212
【Authors】: Yang Song ; Arun Venkataramani ; Lixin Gao
【Abstract】: Content Delivery Networks (CDNs) serve a large fraction of Internet traffic today, improving user-perceived response time and availability of content. With tens of CDNs competing for content producers, it is important to understand the game played by these CDNs and whether the game is sustainable in the long term. In this paper, we formulate a game-theoretic model to analyze price competition among CDNs. Under this model, we propose an optimal strategy for two-CDN games. The strategy is incentive-compatible, since any CDN that deviates from it ends up with a lower utility. The strategy is also efficient, since it produces a total utility that is at least two thirds of the social optimum. We formally derive the sufficient conditions for such a strategy to exist, and empirically show that an optimal strategy exists for games with more than two CDNs.
【Keywords】: Internet; game theory; pricing; CDN pricing game; Internet traffic; content availability improvement; content delivery networks; content producers; game-theoretic model; incentive-compatible strategy; price competition analysis; social optimal utility; sufficient conditions; two-CDN games; user-perceived response time improvement; Conferences; Equations; Games; Internet; Markov processes; Pricing; Servers
【Paper Link】 【Pages】:3213-3218
【Authors】: Matthew Andrews ; Ulas Özen ; Martin I. Reiman ; Qiong Wang
【Abstract】: The interaction of a content provider with end users on an infrastructure platform built and maintained by a service provider can be viewed as a two-sided market. Content sponsoring, i.e., charging the content provider instead of viewers for resources consumed in viewing the content, can benefit all parties involved. Without being charged directly or having it counted against their monthly data quotas, end users will view more content, allowing the content provider to generate more advertising revenue, extracted by the service provider to subsidize its investment and operation of the network infrastructure. However, realizing such gains requires a proper contractual relationship between the service provider and content provider. We consider the determination of this contract through a Stackelberg game. The service provider sets a pricing schedule for sponsoring and the content provider responds by deciding how much content to sponsor. We analyze the best strategies for the content provider and service provider in the event that the underlying demand for the content is uncertain. Two separate settings are defined. In the first, end users can be charged for non-sponsored views on a per-byte basis. In the second we extend the model to the more common case in which end users purchase data quotas on a periodic basis. Our main conclusion is that a coordinating contract can be designed that maximizes total system profit. Moreover, the additional profit due to sponsoring can be split between the content provider and service provider in an arbitrary manner.
【Keywords】: contracts; game theory; investment; pricing; telecommunication industry; Stackelberg game; advertising revenue; content provider; content sponsoring; contractual relationship; data quotas; economic models; pricing schedule; service provider; total system profit; wireless networks; Advertising; Bandwidth; Conferences; Contracts; Pricing; Random variables; Standards
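The leader-follower structure described in the abstract (the service provider posts a sponsoring price, the content provider responds with a sponsored quantity) can be illustrated by backward induction on a toy model. The functional forms below (square-root ad revenue, constant per-byte delivery cost) are illustrative assumptions, not the paper's model:

```python
# Toy Stackelberg sponsoring game: the SP (leader) anticipates the CP's
# (follower's) best response before setting its sponsoring price.

def cp_best_response(p, alpha):
    # CP maximizes alpha*sqrt(q) - p*q over sponsored bytes q => q* = (alpha/(2p))^2
    return (alpha / (2.0 * p)) ** 2

def sp_profit(p, alpha, c):
    # SP earns the price minus its delivery cost c on every sponsored byte.
    return (p - c) * cp_best_response(p, alpha)

def sp_optimal_price(alpha, c, grid=None):
    # Backward induction: pick the price maximizing anticipated SP profit.
    grid = grid or [c + 0.01 * k for k in range(1, 400)]
    return max(grid, key=lambda p: sp_profit(p, alpha, c))

p_star = sp_optimal_price(alpha=2.0, c=1.0)
```

For these particular forms the leader's optimum works out to p* = 2c (a 100% markup over delivery cost) regardless of alpha, which the grid search recovers.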
【Paper Link】 【Pages】:3219-3224
【Authors】: Sun Sun ; Min Dong ; Ben Liang
【Abstract】: The concept of vehicle-to-grid (V2G) has gained recent interest as more and more electric vehicles (EVs) are put to use. In this paper, we consider a dynamic aggregator-EVs system, where an aggregator centrally coordinates a large number of EVs to perform regulation service. We propose a Welfare-Maximizing Regulation Allocation (WMRA) algorithm for the aggregator to fairly allocate the regulation amount among the EVs. The algorithm operates in real time and does not require any prior knowledge on the statistical information of the system. Compared with previous works, WMRA accommodates a wide spectrum of vital system characteristics, including limited EV battery size, EV self charging/discharging, EV battery degradation cost, and the cost of using external energy sources. Furthermore, our simulation results indicate that WMRA can substantially outperform a suboptimal greedy algorithm.
【Keywords】: battery powered vehicles; greedy algorithms; statistical analysis; EV battery degradation cost; EV self charging-discharging; V2G concept; WMRA algorithm; aggregator-EV systems; dynamic aggregator-EV system; electric vehicles; real-time welfare-maximizing regulation allocation; statistical information; suboptimal greedy algorithm; vehicle-to-grid concept; vital system characteristics; welfare-maximizing regulation allocation algorithm; Batteries; Conferences; Degradation; Energy states; Power grids; Real-time systems; Resource management
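The paper's WMRA algorithm is an online, Lyapunov-style scheme beyond what an abstract can convey, but the core fairness idea of splitting a regulation amount among capacity-limited EVs can be sketched with a simple max-min water-filling allocation (purely illustrative, not the authors' algorithm):

```python
def maxmin_allocate(total, capacities):
    """Max-min fair split of `total` regulation among EVs with battery caps:
    process EVs from smallest cap up, giving each its cap or an equal share."""
    order = sorted(range(len(capacities)), key=lambda i: capacities[i])
    alloc = [0.0] * len(capacities)
    remaining = float(total)
    for rank, i in enumerate(order):
        fair_share = remaining / (len(order) - rank)
        alloc[i] = min(capacities[i], fair_share)
        remaining -= alloc[i]
    return alloc

alloc = maxmin_allocate(10.0, [2.0, 9.0, 4.0])
```

The small-battery EV is capped at 2.0 and the slack is shared equally by the rest, so no EV is asked to exceed its battery limit.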
【Paper Link】 【Pages】:3225-3230
【Authors】: Tian Zhang ; Wei Chen ; Zhu Han ; Zhigang Cao
【Abstract】: In this paper, we consider the power allocation of the physical layer and the buffer delay of the upper application layer in energy harvesting green networks. We analyze the delay-optimal power allocation problem over fading channels. The total power required for reliable transmission includes the transmission power as well as the circuit power. The harvested power (which is stored in a battery) and the grid power constitute the power resource. The objective is to find a policy to minimize the buffer delay under the constraint on the average grid power. The policy is a two-dimensional vector with the transmission rate and the power allocation of the battery as its elements. In each transmission, the transmitter decides the transmission rate and the allocated power from the battery (the rest of the required power will be supplied by the power grid). A constrained Markov decision process (MDP) problem is formulated when the data arrival process, the harvested energy arrival process, and the channel process are Markov processes. We prove that the optimal policy can be obtained as follows. First, we solve the optimal rate through a reduced MDP problem that is only related to the average harvested energy but not the harvested energy arrival process. Second, the battery's power allocation can be given based on the optimal rate. By analyzing the reduced MDP problem through the transformations to the average cost MDP and discount optimal MDP, we derive some structural properties of the optimal policy. Moreover, the closed-form expression is obtained for the independent and identically distributed (i.i.d.) cases.
【Keywords】: Markov processes; environmental factors; fading channels; telecommunication power supplies; vectors; MDP; Markov decision process; buffer delay; cross-layer perspective; delay-optimal power allocation; energy harvesting; fading channels; green communications; green networks; physical layer; two-dimensional vector; Batteries; Delays; Energy harvesting; Green products; Markov processes; Resource management; Transmitters
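The constrained average-cost MDP in the abstract is involved, but the flavor of the battery-versus-grid trade-off can be sketched with value iteration on a toy discounted MDP with unit battery quanta and a fixed per-slot transmission power (all parameters below are illustrative assumptions, not the paper's formulation):

```python
def value_iteration(B=3, w=2, ph=0.5, gamma=0.9, iters=300):
    """Toy MDP: state = battery level 0..B; action a = battery units spent;
    the grid supplies the remaining w - a at unit cost; one energy unit is
    harvested with probability ph each slot."""
    V = [0.0] * (B + 1)
    for _ in range(iters):
        V = [min((w - a) + gamma * ((1 - ph) * V[min(B, s - a)]
                                    + ph * V[min(B, s - a + 1)])
                 for a in range(min(s, w) + 1))
             for s in range(B + 1)]
    return V

V = value_iteration()
```

As expected, the value (expected discounted grid cost) is nonincreasing in the stored energy: a fuller battery can only help.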
【Paper Link】 【Pages】:3231-3236
【Authors】: Giulio Betti ; Edoardo Amaldi ; Antonio Capone ; Giulia Ercolani
【Abstract】: We address a traffic engineering problem where, given a communication network and a set of origin-destination demands, we have to select a single-path routing for each demand and decide which communication interfaces to switch off or run at partial load so as to minimize the total operational costs. We account for the presence of renewable energy plants at some nodes of the network, as well as feed-in-tariffs, rebates and variable energy prices. We also consider the related problem of deciding where renewable energy sources (photovoltaic modules in this case) have to be installed so as to maximize the profit, while respecting a maximum investment budget constraint. We propose mixed integer optimization models for these two problems and we report results for two different network topologies.
【Keywords】: computer networks; integer programming; photovoltaic power systems; telecommunication network routing; telecommunication network topology; telecommunication power management; telecommunication power supplies; communication network; cost aware optimization model; maximum investment budget constraint; mixed integer optimization model; network topology; origin-destination demand; photovoltaic module; renewable energy plant; renewable energy source; single path routing; traffic engineering problem; Electricity; Optimization; Photovoltaic systems; Power demand; Routing; Switches
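At a small scale, the joint path-selection / switch-off decision can be brute-forced: each demand picks one candidate path, only links on chosen paths stay powered, and the operator pays the energy cost of the powered links. A tiny illustrative instance with hypothetical costs and paths (ignoring renewables and tariffs, which the paper's MILP models also capture):

```python
import itertools

def min_cost_routing(demand_paths, link_cost):
    """Enumerate single-path choices; cost = energy of links left powered on."""
    best_cost, best_choice = float('inf'), None
    for choice in itertools.product(*demand_paths.values()):
        powered = set().union(*(set(path) for path in choice))
        cost = sum(link_cost[l] for l in powered)
        if cost < best_cost:
            best_cost, best_choice = cost, dict(zip(demand_paths, choice))
    return best_cost, best_choice

link_cost = {'e1': 5, 'e2': 3, 'e3': 4, 'e4': 1}
demand_paths = {'d1': [('e1',), ('e2', 'e4')], 'd2': [('e2',), ('e3',)]}
cost, routing = min_cost_routing(demand_paths, link_cost)
```

Note how the optimum routes both demands over the shared link e2 so that e1 and e3 can be switched off entirely.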
【Paper Link】 【Pages】:3237-3242
【Authors】: Pietro Marchetta ; Antonio Pescapè
【Abstract】: Traceroute is probably the most famous networking tool, widely adopted in both industry and research. Despite its long life, however, measurements based on Traceroute are potentially inaccurate, misleading or incomplete due to several unresolved issues. In this paper, we address the limitation posed by hidden routers, i.e., devices that do not decrement the TTL and are thus totally invisible to Traceroute. We present, evaluate and release DRAGO, a novel active probing technique composed of three main steps. First, a novel Traceroute enhanced with the IP Timestamp option is launched toward a destination. Second, a procedure is applied to quantify the hidden routers contained in the path, if any. Third, a final procedure identifies the exact position in the path of the detected hidden routers. Experimental results demonstrate that the phenomenon is not uncommon: DRAGO detects the presence of hidden routers in at least 6% of the considered Traceroute IP paths and limits the affected area to one fifth of the traces containing these devices.
【Keywords】: IP networks; computer network performance evaluation; system monitoring; DRAGO; IP Timestamp option; TTL; Traceroute; active probing technique; hidden router detection; hidden router location; hidden router quantification; networking tool; time-to-live field; Binary trees; IP networks; Internet; Payloads; Probes; Uncertainty; Vectors
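DRAGO's quantification step compares how many devices process a probe (revealed by IP Timestamp stamps) against how many decrement the TTL. Under the simplifying assumption that every on-path router stamps the option (many real routers ignore or mishandle it), a per-hop accounting like the following localizes where the extra, TTL-invisible devices sit. This is illustrative logic, not the released tool:

```python
def locate_hidden_routers(stamp_counts):
    """stamp_counts[i] = total timestamps accumulated by the probe that
    expires at TTL hop i+1; each TTL-visible hop should add exactly one,
    so any surplus is attributed to hidden routers before that hop."""
    hidden, prev = [], 0
    for hop, count in enumerate(stamp_counts, start=1):
        extra = count - prev - 1      # stamps beyond the single visible hop
        if extra > 0:
            hidden.append((hop, extra))
        prev = count
    return hidden

# probes toward hops 1..3 returned 1, 3 and 4 stamps respectively
result = locate_hidden_routers([1, 3, 4])
```

Here the jump from 1 to 3 stamps between hops 1 and 2 pins one hidden router just before the second TTL-visible hop.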
【Paper Link】 【Pages】:3243-3248
【Authors】: Andra Lutu ; Marcelo Bagnulo ; Olaf Maennel
【Abstract】: By tweaking their BGP configurations, network operators are able to express interdomain routing preferences designed to accommodate a myriad of goals. Given the complex policy interactions in the Internet, the origin AS cannot ensure that merely configuring a routing policy will achieve the anticipated results. Moreover, the definition of routing policies is a complicated process, involving a number of subtle tuning operations prone to errors. In this paper, we propose the BGP Visibility Scanner, which allows network operators to validate the correct implementation of their routing policies by corroborating BGP routing information from approximately 130 independent observation points in the Internet. We exemplify the use of the proposed methodology and perform an initial validation of the BGP Visibility Scanner's capabilities through various real operational use cases.
【Keywords】: Internet; internetworking; protocols; telecommunication network routing; AS; BGP configurations; BGP routing information; BGP visibility scanner; Border Gateway Protocol; Internet; autonomous system; interdomain routing preferences; routing policy; Educational institutions; Feeds; Internet; Labeling; Monitoring; Routing; Topology
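The scanner's core check is counting, for each prefix, how many of the independent BGP feeds carry it; a prefix visible from only a subset of vantage points hints that a policy's effect differs from the operator's intent. A minimal sketch of that visibility test (the threshold value and feed structure are assumptions for illustration):

```python
from collections import Counter

def limited_visibility(feeds, threshold=0.95):
    """feeds: vantage-point name -> set of announced prefixes.
    Returns prefixes seen by fewer than `threshold` of the feeds."""
    counts = Counter(p for prefixes in feeds.values() for p in prefixes)
    n = len(feeds)
    return {p for p, c in counts.items() if c < threshold * n}

feeds = {'vp1': {'10.0.0.0/8', '192.0.2.0/24'},
         'vp2': {'10.0.0.0/8'},
         'vp3': {'10.0.0.0/8', '192.0.2.0/24'}}
flagged = limited_visibility(feeds)
```

With 130 real feeds, prefixes missing from even a handful of vantage points become candidates for a misconfigured or unintentionally scoped policy.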
【Paper Link】 【Pages】:3249-3254
【Authors】: Vasileios Giotsas ; Shi Zhou
【Abstract】: The Internet Autonomous System (AS) topology has important implications on end-to-end routing, network economics and security. Despite the significance of the AS topology research, it has not been possible to collect a complete map of the AS interconnections due to the difficulties involved in discovering peering links. The problem of topology incompleteness is amplified by the increasing popularity of Internet eXchange Points (IXPs) and the “flattening” AS hierarchy. A recent study discovered that the number of missing peering links at a single IXP is larger than the total number of the observable peering links. As a result a large body of research focuses on measurement techniques that can alleviate the incompleteness problem. Most of these proposals require the deployment of additional BGP vantage points and traceroute monitors. In this paper we propose a new measurement methodology for improving the discovery of missing peering links through the publicly available BGP data. Our approach utilizes the traffic engineering BGP Communities used by IXPs' Route Servers to implement multi-lateral peering agreements. We are able to discover 36K additional p2p links from 11 large IXPs. The discovered links are not only invisible in previous BGP-based AS topology collections, but also 97% of those links are invisible to traceroute data from CAIDA's Ark and DIMES projects for June 2012. The advantages of the proposed technique are threefold. First, it provides a new source of previously invisible p2p links. Second, it does not require changes in the existing measurement infrastructure. Finally, it offers a new source of policy data regarding multilateral peering links at IXPs.
【Keywords】: Internet; computer network security; internetworking; peer-to-peer computing; protocols; telecommunication links; telecommunication network routing; telecommunication network topology; telecommunication traffic; BGP-based AS topology collections; CAIDA's Ark; DIMES projects; IXP peering link discovery; IXP route servers; Internet autonomous system topology; Internet exchange points; P2P links; border gateway protocol; end-to-end routing; flattening AS hierarchy; multilateral peering agreements; multilateral peering links; network economics; network security; passive BGP measurement; policy data; publicly available BGP data; traffic engineering BGP Communities; Communities; Internet; Monitoring; Network topology; Routing; Servers; Topology; Autonomous Systems; BGP; IXP; Internet; inter-domain; measurement; missing links; routing; topology
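The key observation is that route servers tag announcements with traffic-engineering communities of the form `rs-asn:peer-asn` to control which members receive a route, so parsing those tags exposes multilateral peering links without deploying new vantage points. A sketch under assumed community semantics (real IXPs differ in their encodings, so treat the `6695:` prefix and the `:0` wildcard below as hypothetical examples):

```python
def infer_multilateral_links(routes, rs_asn='6695'):
    """routes: list of (advertising_member_asn, [community strings]).
    A community 'rs_asn:target' is read here as 'announce to member target'."""
    links = set()
    for member, communities in routes:
        for comm in communities:
            left, _, right = comm.partition(':')
            if left == rs_asn and right not in ('', '0'):  # '0' = assumed wildcard
                links.add(tuple(sorted((member, int(right)))))
    return links

links = infer_multilateral_links([(100, ['6695:200', '6695:0']),
                                  (200, ['6695:300'])])
```

Each recovered pair is a p2p link established through the route server, i.e., exactly the class of link invisible to conventional BGP collectors and traceroute campaigns.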
【Paper Link】 【Pages】:3255-3260
【Authors】: Eitan Altman ; Francesco De Pellegrini ; Rachid El Azouzi ; Daniele Miorandi ; Tania Jiménez
【Abstract】: Social scientists have observed that human behavior in society can often be modeled as following a threshold-type policy: a new behavior propagates by a procedure in which an individual adopts it if the fraction of his neighbors or friends having adopted it exceeds some threshold. In this paper we study whether the emergence of threshold policies may be modeled as the result of a rational process describing the behavior of non-cooperative rational members of a social network. We focus on situations in which individuals decide whether or not to access some content, based on the number of views that the content has. Our analysis aims at understanding not only the behavior of individuals, but also the way in which information about the quality of a given content can be deduced from view counts when only part of the viewers that access the content are informed about its quality. We present a game formulation for the behavior of individuals using a mean-field model: the number of individuals is approximated by a continuum of atomless players, for which the Wardrop equilibrium is the solution concept. We derive conditions on the problem's parameters under which threshold equilibrium policies indeed emerge, but we also identify parameter regimes in which the equilibrium behavior of individuals takes other structures.
【Keywords】: content management; game theory; social networking (online); Wardrop equilibrium; atomless player; behavior adoption; content access; content quality; equilibrium behavior; game formulation; human behavior; meanfield model; noncooperative rational member; online content diffusion; rational process; social network; social science; society; threshold equilibrium policy; threshold policy; threshold type policy; Communication networks; Conferences; Games; Market research; Measurement; YouTube; Complex Systems; Game theory; User-generated content; Video popularity; Wardrop equilibria
【Paper Link】 【Pages】:3261-3266
【Authors】: Pavlos Sermpezis ; Thrasyvoulos Spyropoulos
【Abstract】: In technological or social networks, diffusion processes (e.g. information dissemination, rumour/virus spreading) strongly depend on the structure of the network. In this paper, we focus on epidemic processes over one such class of networks, Opportunistic Networks, where mobile nodes within range can communicate with each other directly. As the node degree distribution is a salient property for process dynamics on complex networks, we use the well-known Configuration Model, which captures generic degree distributions, for modeling and analysis. We also assume that information spreading between two neighboring nodes can only occur during random contact times. Using this model, we derive closed-form approximate formulas for the information spreading delay that require only the first and second moments of the node degree distribution. Despite the simplicity of our model, simulations based on both synthetic and real traces suggest considerable accuracy for a large range of heterogeneous contact networks arising in this context, validating its usefulness for performance prediction.
【Keywords】: complex networks; delays; graph theory; mobile radio; telecommunication network topology; closed-form approximative formula; complex network; configuration model approach; diffusion process; epidemic process; heterogeneous contact networks; heterogeneous network; information diffusion; information dissemination; information spreading delay; mobile nodes; network structure; node degree distribution; opportunistic networks; performance prediction; process dynamics; random contact time; rumour spreading; social network; technological network; virus spreading; Accuracy; Approximation methods; Complex networks; Delays; Peer-to-peer computing; Random variables; Social network services
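The two degree moments the formulas rely on are straightforward to compute, and the derived quantity that typically governs spreading on a configuration-model graph is the mean excess degree E[k(k-1)]/E[k], i.e., how many fresh neighbors an informed node offers. Below is a sketch of the moment computation plus a hop-count spreading front via BFS; unit per-hop delays are an assumption here, whereas the paper uses random contact times:

```python
from collections import deque

def degree_moments(degrees):
    n = len(degrees)
    m1 = sum(degrees) / n
    m2 = sum(d * d for d in degrees) / n
    return m1, m2, m2 / m1 - 1.0      # first, second moment, mean excess degree

def spreading_front(edges, n, seed=0):
    """Hop distance of every reached node from the seed (unit delays)."""
    adj = {i: [] for i in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    dist, queue = {seed: 0}, deque([seed])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

m1, m2, excess = degree_moments([3] * 20)              # 3-regular sample
dist = spreading_front({(0, 1), (1, 2), (2, 3)}, 4)    # tiny path graph
```

For a 3-regular population the excess degree is 2, matching the intuition that each newly informed node forwards to two fresh neighbors.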
【Paper Link】 【Pages】:3267-3272
【Authors】: Youngmi Jin ; Jungseul Ok ; Yung Yi ; Jinwoo Shin
【Abstract】: This paper studies how global information affects the diffusion of innovations on a network. The diffusion of innovation is modeled by the logit dynamics of a weighted N-person coordination game among (bounded) rational users where innovations spread through users' strategic choices. We find a critical asymptotic threshold for the weight on global information where the diffusion of innovations undergoes a transition in the rate of convergence regardless of any network structure. In particular, it is found that the convergence to the pervasive adoption is slowed down by global information.
【Keywords】: convergence; game theory; innovation management; social networking (online); convergence rate; critical asymptotic threshold; global information; innovation diffusion; logit dynamics; network structure; rational users; social networks; user strategic choices; weighted N-person coordination game; Communication networks; Conferences; Convergence; Games; Nickel; Social network services; Technological innovation
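The logit dynamics in question can be sketched directly: a randomly chosen user adopts the innovation with probability proportional to exp(β · payoff), where the payoff blends local (neighbor) adoption with global adoption at weight w. The ring topology, payoff form and parameters below are illustrative assumptions, not the paper's model:

```python
import math
import random

def logit_prob(u1, u0, beta):
    """Logit choice rule: probability of the action with payoff u1."""
    e1, e0 = math.exp(beta * u1), math.exp(beta * u0)
    return e1 / (e0 + e1)

def simulate(n=50, beta=4.0, w=0.3, steps=2000, seed=7):
    rng = random.Random(seed)
    state = [rng.randrange(2) for _ in range(n)]               # 1 = innovation
    nbrs = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}   # ring network
    for _ in range(steps):
        i = rng.randrange(n)
        local = sum(state[j] for j in nbrs[i]) / 2.0
        global_ = sum(state) / n
        u1 = (1 - w) * local + w * global_      # innovation payoff (toy form)
        state[i] = 1 if rng.random() < logit_prob(u1, 1 - u1, beta) else 0
    return sum(state) / n

final_fraction = simulate()
```

Raising w makes each revision lean on the population-wide fraction rather than neighbors, which is exactly the knob whose effect on convergence speed the paper characterizes.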
【Paper Link】 【Pages】:3273-3278
【Authors】: Konstantinos Poularakis ; Leandros Tassiulas
【Abstract】: As the processing and transport capacity of the information and communication technologies (ICT) infrastructure has increased vastly over the last few years, the bottleneck of the information exchange process has moved to its end points, i.e., the consumers and the producers of information. On one hand there is the limited time that a consumer has to access the information; on the other hand there is the minimum utility level that a provider needs to deliver to the society of consumers to cover its investment cost. In this paper we present a novel decision model for a set of competing providers that wish to enter a market. Due to the competition, some competitors may not be able to cover their investment cost and will therefore disappear. We analyze the optimum way of forming the market in order to maximize its aggregate utility. We show that this problem is NP-complete and present a linear programming rounding heuristic algorithm to solve it. Besides, we study a game where every player (provider) chooses whether to join the market or not. We compute the price of anarchy of the game and present a heuristic algorithm that belongs to the family of best-response dynamic algorithms. Systematic experiments on a real-world data set demonstrate the effectiveness of our proposed approach.
【Keywords】: computational complexity; decision making; electronic data interchange; linear programming; ICT; NP-complete; best response dynamic algorithms; competitive market; decision model; heuristic algorithm; information and communication technologies infrastructure; information exchange process; information providers; investment cost; linear programming rounding heuristic algorithm; transport capacity; Aggregates; Conferences; Games; Heuristic algorithms; Linear programming; Nash equilibrium; Vectors
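The best-response entry dynamics can be sketched with the simplest possible market model: providers split a fixed aggregate revenue equally, and a provider stays only if its share covers its investment cost. Equal revenue splitting is an illustrative assumption here, not the paper's utility model:

```python
def entry_best_response(costs, revenue):
    """Iterate exits until every remaining provider covers its cost."""
    in_market = set(range(len(costs)))
    changed = True
    while changed and in_market:
        changed = False
        for i in sorted(in_market):           # sorted() copies, safe to remove
            if revenue / len(in_market) < costs[i]:
                in_market.remove(i)
                changed = True
    return in_market

survivors = entry_best_response([3.0, 5.0, 8.0], revenue=12.0)
```

With all three in, each gets 4: the cost-5 and cost-8 providers exit in turn, and the sole survivor then comfortably covers its cost; exits can thus cascade even when the initial market looked almost viable.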
【Paper Link】 【Pages】:3279-3284
【Authors】: Alessio Botta ; Antonio Pescapè
【Abstract】: In the context of Internet access technologies, satellite networks have traditionally been considered for specific purposes or as a backup technology for users not reached by traditional access networks such as 3G, cable or ADSL. In recent years, however, new satellite technologies have been introduced to the market, reopening the debate on the possibilities of high-performance satellite access networks. In this paper, we describe the testbed we set up, in collaboration with one of the main satellite operators in Europe, and the experiments we performed to evaluate and analyze the performance of both Tooway and Tooway on KA-SAT (or KASAT for short), two satellite broadband Internet access services. We also build a simulator to study the behavior of the traffic shaping mechanism used by the satellite operator. In terms of performance, our results show how new-generation Internet satellite services are a promising way to provide broadband Internet connections to users. In terms of traffic shaping, our results shed light on the mechanisms employed by the operator for shaping user traffic and the possibilities left to the users.
【Keywords】: IP networks; broadband networks; computer network performance evaluation; radio access networks; satellite communication; telecommunication traffic; Europe; KA-SAT performance analysis; KA-SAT performance evaluation; Tooway performance analysis; Tooway performance evaluation; broadband Internet connection; high-performance satellite access networks; satellite broadband Internet access services; satellite operators; traffic shaping mechanism; Delays; Downlink; Internet; Jitter; Satellites; Throughput; Uplink
【Paper Link】 【Pages】:3285-3290
【Authors】: Heng Cui ; Ernst Biersack
【Abstract】: One common way to search and access information available on the Internet is via a Web browser. When opening a Web page, the user expects the page to render quickly; otherwise he will lose interest and may abort the page load. The causes of a slow page load are manifold and not easy for an end user to comprehend. In this paper, we present FireLog, a plugin for the Firefox Web browser that relies on passive measurements during users' browsing and helps identify why a Web page loads slowly. We present details of our methodology and illustrate it in a case study with real users.
【Keywords】: Internet; Web sites; information retrieval; online front-ends; search problems; FireLog; Firefox Web browser; Internet; Web browser; information access; information search; slow Web page download troubleshooting; Browsers; Degradation; Delays; Internet; Servers; Web pages
【Paper Link】 【Pages】:3291-3296
【Authors】: YiXi Gong ; Dario Rossi ; Claudio Testa ; Silvio Valenti ; M. Dave Taht
【Abstract】: Nowadays, due to excessive queuing, delays on the Internet can grow longer than several round trips between the Moon and the Earth, a phenomenon for which the “bufferbloat” term was recently coined. Some point to active queue management (AQM) as the solution. Others propose end-to-end low-priority congestion control techniques (LPCC). Under both approaches, promising advances have been made in recent times: notable examples are CoDel for AQM and LEDBAT for LPCC. In this paper, we warn of a potentially fateful interaction when AQM and LPCC techniques are combined: namely, (i) AQM resets the relative level of priority between best-effort and low-priority congestion control protocols; (ii) while this reprioritization generally equalizes the priority of LPCC and TCP, we also find that some AQM settings may actually lead best-effort TCP to starvation. Through an extended set of experiments conducted both on controlled testbeds and on the Internet, we show the problem to hold in the real world for all tested combinations of AQM policies and LPCC protocols. To further validate the generality of our findings, we complement our experiments with packet-level simulations covering other popular AQM and LPCC schemes that are not available in the Linux kernel. To promote cross-comparison, we make our scripts and dataset available to the research community.
【Keywords】: Internet; computer network management; queueing theory; telecommunication congestion control; transport protocols; AQM policies; Internet; LPCC protocols; active queue management; bufferbloat; end-to-end low-priority congestion control techniques; packet-level simulation; potentially fateful interaction; research community; testbed control; Delays; Electric breakdown; Internet; Kernel; Linux; Monitoring; Protocols; AQM; Bufferbloat; Experiments; Scavenger protocol; Simulation
【Paper Link】 【Pages】:3297-3302
【Authors】: Chiara Chirichella ; Dario Rossi
【Abstract】: Recently, the “bufferbloat” term has been coined to describe very large queuing delays (up to several seconds) experienced by Internet users. This problem has pushed protocol designers to deploy alternative (delay-based) models in place of the standard (loss-based) TCP best-effort congestion control. In this work, we exploit timestamp information carried in the LEDBAT header, a protocol proposed by BitTorrent as a replacement for TCP data transfer, to infer the queuing delay suffered by remote hosts. We conduct a thorough measurement campaign that lets us conclude that (i) LEDBAT delay-based congestion control is effective in keeping the queuing delay low for the bulk of the peers, (ii) yet about 1% of peers often experience queuing delays in excess of 1 s, and (iii) not only the network access type, but also the BitTorrent client and the operating system concur in determining the bufferbloat magnitude.
【Keywords】: Internet; operating systems (computers); packet switching; peer-to-peer computing; protocols; queueing theory; telecommunication congestion control; BitTorrent client; Internet bufferbloat delays; Internet users; LEDBAT delay-based congestion control; LEDBAT header; network access type; operating system; queuing delay; remote hosts; timestamp information; Conferences; Delays; Internet; Monitoring; Operating systems; Probes; Protocols
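The inference exploits LEDBAT's own design: each header carries a sender timestamp, so an observer can collect one-way delay samples and subtract the running minimum (the propagation "base delay") to estimate the queuing component at the remote access link. A simplified sketch; real LEDBAT maintains a sliding window of base-delay minima, so the single running minimum here is an assumption:

```python
def queuing_delay_estimates(one_way_delays_ms):
    """Subtract the running minimum (base delay) from each delay sample;
    what remains is attributed to queuing."""
    base, estimates = float('inf'), []
    for d in one_way_delays_ms:
        base = min(base, d)
        estimates.append(d - base)
    return estimates

est = queuing_delay_estimates([50, 52, 49, 80])
```

The third sample lowers the base-delay estimate to 49 ms, so the final 80 ms sample is read as 31 ms of queuing, i.e., bufferbloat at the remote end.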
【Paper Link】 【Pages】:3303-3308
【Authors】: Lin Gao ; George Iosifidis ; Jianwei Huang ; Leandros Tassiulas
【Abstract】: Mobile data offloading is a promising approach to alleviate network congestion and enhance quality of service (QoS) in mobile cellular networks. In this paper, we investigate the economics of mobile data offloading through third-party WiFi or femtocell access points (APs). Specifically, we consider a market-based data offloading solution, where macrocellular base stations (BSs) pay APs for offloading traffic. The key questions arising in such a marketplace are the following: (i) how much traffic should each AP offload for each BS? and (ii) what is the corresponding payment of each BS to each AP? We answer these questions using non-cooperative game theory. In particular, we define a multi-leader multi-follower data offloading game (DOFF), where BSs (leaders) propose market prices, and accordingly APs (followers) determine the traffic volumes they are willing to offload. We characterize the subgame perfect equilibrium (SPE) of this game, and further compare the SPE with two other classic market outcomes: (i) the market balance (MB) in a perfect competition market (i.e., without price participation), and (ii) the monopoly outcome (MO) in a monopoly market (i.e., without price competition). Our results analytically show that (i) the price participation (of BSs) drives market prices down, compared to those under the MB outcome, and (ii) the price competition (among BSs) drives market prices up, compared to those under the MO outcome.
【Keywords】: economics; femtocellular radio; game theory; monopoly; quality of service; telecommunication traffic; wireless LAN; DOFF; QoS; economics; femtocell access point; macrocellular base station; market balance; market outcome; market price; market-based data offloading; mobile cellular network; mobile data offloading; monopoly market; monopoly outcome; multileader multifollower data offloading game; network congestion; noncooperative game theory; offloading traffic; perfect competition market; price competition; price participation; quality of service; subgame perfect equilibrium; third-party WiFi; traffic volume; Conferences; Data models; Games; IEEE 802.11 Standards; Manganese; Mobile communication; Monopoly
【Paper Link】 【Pages】:3309-3314
【Authors】: Joohyun Lee ; Yung Yi ; Song Chong ; Youngmi Jin
【Abstract】: Cellular networks are facing severe traffic overloads due to the proliferation of smart handheld devices and traffic-hungry applications. A cost-effective and practical solution is to offload cellular data through WiFi. Recent theoretical and experimental studies show that a scheme referred to as delayed WiFi offloading can significantly save cellular capacity by delaying users' data and exploiting mobility, thus increasing the chance of meeting WiFi APs (Access Points). Despite the huge potential of WiFi offloading for alleviating the mobile data explosion, its success largely depends on the economic incentives provided to users and network providers to deploy and use delayed offloading. In this paper, we study how much economic benefit can be generated by delayed WiFi offloading, by modeling the interaction between a single provider and users as a two-stage sequential game. We first analytically prove that WiFi offloading is economically beneficial for both the provider and users. We also conduct trace-driven numerical analysis to quantify the practical gain, where the increase ranges from 21 to 152% in the provider's revenue, and from 73 to 319% in the users' surplus.
【Keywords】: cellular radio; wireless LAN; WiFi AP; WiFi offloading economics; cellular capacity; cellular data; cellular networks; cost effective; economic incentives; mobile data explosion; mobility; network providers; smart handheld device; trace driven numerical analysis; trading delay; traffic hungry application; Conferences; Delays; Economics; IEEE 802.11 Standards; Mobile communication; Numerical models; Pricing
【Paper Link】 【Pages】:3315-3320
【Authors】: Byung-Gook Kim ; Shaolei Ren ; Mihaela van der Schaar ; Jang-Won Lee
【Abstract】: Future-generation smart grids will allow customers to trade energy bidirectionally. Specifically, each customer will be able not only to buy energy from the aggregator during its peak hours but also to sell its surplus energy during its off-peak hours. In these emerging energy trading markets, a key component will be the deployment of effective energy billing schemes that consider customers' residential load scheduling. In this paper, we consider a residential load scheduling problem with bidirectional energy trading. Unlike previous work, in which customers are assumed to be obedient and to agree to maximize the social welfare of the smart grid system, we consider a non-collaborative approach in which consumers are self-interested. We model the energy scheduling problem as a non-cooperative game, where each customer determines its load scheduling and energy trading to maximize its own profit. In order to resolve the unfairness between heavy and light customers, we propose a novel tiered billing scheme that can control the electricity rates for customers according to their different energy consumption levels. We also propose a distributed energy scheduling algorithm that converges to the unique Nash equilibrium of the studied non-cooperative game. Through numerical results, we study the impact of the proposed tiered billing scheme on selfish customers' behavior and on their incentives to participate in the energy trading market.
【Keywords】: energy consumption; game theory; power markets; smart power grids; bidirectional energy trading; distributed energy scheduling algorithm; electricity rates; energy billing schemes; energy consumption levels; energy trading markets; noncooperative game; off-peak hours; residential load scheduling; smart grids; surplus energy; tiered billing scheme; unique Nash equilibrium; Batteries; Collaboration; Electricity; Energy consumption; Games; Home appliances; Nash equilibrium
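The convergence to a unique Nash equilibrium described in the abstract can be illustrated with a toy best-response iteration. The quadratic discomfort cost, linear price equal to total load, and two-customer setup below are illustrative assumptions, not the paper's tiered-billing model.

```python
# Toy best-response dynamics for a noncooperative load-scheduling game:
# customer i picks load l_i to minimize  price * l_i + (l_i - d_i)^2,
# where the price equals the total load and d_i is a desired load level.

def best_response(others_load, desired):
    # argmin_l [ l * (l + others_load) + (l - desired)^2 ]  =>  l = (2d - S) / 4
    return max(0.0, (2 * desired - others_load) / 4)

def nash_loads(desired, rounds=50):
    """Gauss-Seidel best-response sweeps until (approximate) fixed point."""
    loads = [0.0] * len(desired)
    for _ in range(rounds):
        for i, d in enumerate(desired):
            loads[i] = best_response(sum(loads) - loads[i], d)
    return loads

eq = nash_loads([4.0, 2.0])   # converges to roughly [1.867, 0.533]
```

Because each best response is a strong contraction here, the sweeps settle quickly on the unique equilibrium, mirroring the convergence property the distributed algorithm in the paper is designed to provide.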
【Paper Link】 【Pages】:3321-3326
【Authors】: Ozgur Dalkilic ; Ozan Candogan ; Atilla Eryilmaz
【Abstract】: In this paper, we consider the design of the day-ahead market for the smart electrical grid. Consumers with flexible demand and generator companies participate in the market to settle on their load and supply schedules, respectively. The market is operated by an Independent System Operator (ISO) whose purpose is to maximize social welfare while keeping load and supply balanced in the electricity network. We develop two distributed pricing algorithms that achieve optimal welfare. The first algorithm yields time-dependent market prices under convexity assumptions on utility and cost functions, and the second yields bundle prices for arbitrary utility and cost functions. In both algorithms, flexible consumers and generator companies simply determine their own schedules based on the prices updated by the ISO at each iteration. We show that the participation of flexible demand in the day-ahead market reduces supply volatility, which would otherwise arise when flexible demand takes no part in the price-setting procedure.
【Keywords】: iterative methods; power markets; pricing; smart power grids; ISO; cost functions; day-ahead electricity market; distributed pricing algorithms; electricity network; flexible consumer participation; generator companies; independent system operator; load and supply schedules; smart electrical grid; social welfare; supply volatility; time-dependent market prices; Companies; Cost function; Electricity supply industry; Generators; ISO; Pricing; Schedules
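The iterative price update performed by the ISO can be sketched as a standard tatonnement step: raise the price when demand exceeds supply, lower it otherwise. The linear demand and supply curves below are toy placeholders, not the paper's utility and cost functions.

```python
# Hypothetical sketch of an ISO-style distributed price iteration
# (a dual/tatonnement update), assuming toy linear market curves.

def iterate_price(demand, supply, p0=1.0, step=0.1, tol=1e-6, max_iter=10000):
    """Adjust the price proportionally to the demand-supply imbalance."""
    p = p0
    for _ in range(max_iter):
        imbalance = demand(p) - supply(p)
        if abs(imbalance) < tol:
            break
        p += step * imbalance   # excess demand raises the price
    return p

demand = lambda p: 10.0 - 2.0 * p   # demand falls with price
supply = lambda p: 3.0 * p          # supply rises with price
p_star = iterate_price(demand, supply)   # market clears at p = 2.0
```

At the fixed point, demand equals supply, which is exactly the load-supply balance the ISO enforces; participants only ever react to the posted price, matching the distributed flavor of the paper's algorithms.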
【Paper Link】 【Pages】:3327-3332
【Authors】: Ming-Jye Sheng ; Carlee Joe-Wong ; Sangtae Ha ; Felix Ming Fai Wong ; Soumya Sen
【Abstract】: Rapid growth in the demand for broadband data is driving up costs for communication service providers (CSPs). Yet under current pricing plans, CSPs' revenue has not kept pace with these costs. Thus, many CSPs are considering Smart Data Pricing (SDP) as a way to reduce cost or increase revenue. Before offering such novel data plans, however, CSPs must conduct trials of the specific data plans proposed. Due to the complexity of the necessary changes in network equipment and the need to carefully design the trial in order to understand customer behavior, planning such trials is not only a critical precursor to SDP deployment, but also a nontrivial undertaking in itself. This paper discusses general principles of trial design and proposes two methods for estimating their effectiveness. We first give an introduction to the goals of SDP research and review three possible SDP approaches. We then discuss the importance of pre-trial participant surveys and some technical considerations of implementing the trial infrastructure for a particular SDP algorithm. Finally, we show how a CSP may extrapolate from the trial results to estimate the SDP trial's benefits, in terms of changes in traffic patterns and a reduction in spectrum requirements. We conclude with some remarks about future work.
【Keywords】: broadband networks; cost reduction; pricing; telecommunication network planning; broadband data; communication service provider; cost reduction; customer behavior; effectiveness estimation; network equipment; smart data pricing; spectrum requirement reduction; traffic pattern; trial planning; Conferences; Internet; Mobile communication; Monitoring; Planning; Pricing; Protocols; Smart Data Pricing; Spectrum Requirements
【Paper Link】 【Pages】:3333-3338
【Authors】: Larissa Spinelli ; Mark Crovella ; Brian Eriksson
【Abstract】: Internet topologies discovered by standard traceroute-based probing schemes are limited by many factors. One of the main factors is the ambiguity of the returned interfaces, where multiple unique interface IP addresses belong to the same physical router. The unknown assignment of interface IPs to physical routers can result in grossly inflated estimated topologies compared with the true underlying physical infrastructure of the network. The ability to determine which interfaces belong to which router would aid in the ability to accurately reconstruct the underlying topology of the Internet. In this paper, we present ALIASCLUSTER, a lightweight learning-based methodology that disambiguates router aliases using only observed traceroute measurements and requires no additional load on the network. Compared with existing techniques, we find that ALIASCLUSTER can resolve the same number of true router alias pairs with 50% fewer false alarms.
【Keywords】: IP networks; Internet; telecommunication network routing; telecommunication network topology; ALIASCLUSTER; Internet topology; interface IP assignment; interface disambiguation; lightweight learning-based methodology; physical infrastructure; physical router; traceroute measurement; traceroute-based probing scheme; Bayes methods; Data mining; Feature extraction; IP networks; Internet; Probes; Topology
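Once pairwise alias decisions exist, the final grouping step of alias resolution reduces to merging interface IPs into routers via their transitive closure. The sketch below hard-codes the alias pairs; ALIASCLUSTER infers them from traceroute-derived features, which this sketch does not attempt.

```python
# Union-find over inferred alias pairs: interfaces reachable through a
# chain of alias decisions collapse into one physical router.

class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

alias_pairs = [("10.0.0.1", "10.0.1.1"), ("10.0.1.1", "10.0.2.1"),
               ("192.168.0.1", "192.168.1.1")]   # hypothetical decisions
uf = UnionFind()
for a, b in alias_pairs:
    uf.union(a, b)

routers = {}
for ip in {ip for pair in alias_pairs for ip in pair}:
    routers.setdefault(uf.find(ip), []).append(ip)
# -> two routers: one with three interfaces, one with two
```

This is why false alias pairs are so costly, as the abstract's 50% false-alarm reduction suggests: a single wrong pair merges two whole routers and deflates the recovered topology.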
【Paper Link】 【Pages】:3339-3344
【Authors】: Georg Hampel ; Moritz Steiner ; Tian Bu
【Abstract】: The concept of Software-Defined Networking (SDN) has been successfully applied to data centers and campus networks, but it has had little impact in the fixed wireline and mobile telecom domain. Although telecom networks demand fine-granular flow definition, which is one of SDN's principal strengths, the scale of these networks and their legacy infrastructure constraints considerably limit the applicability of SDN principles. Instead, telecom networks resort to tunneling solutions using a plethora of specialized gateway nodes, which create high operation cost and single points of failure. We propose extending the concept of SDN so that it can tackle the challenges of the telecom domain. We see vertical forwarding, i.e. programmable en- and decapsulation operations on top of IP, as one of the fundamental features to be integrated into SDN. We discuss how vertical forwarding enables flow-based policy enforcement, mobility and security by replacing specialized gateways with virtualized controllers and commoditized forwarding elements, which reduces cost while adding robustness and flexibility.
【Keywords】: IP networks; cellular radio; computer network security; mobility management (mobile radio); IP; SDN; commoditized forwarding elements; cost reduction; flexibility; flow-based policy enforcement; mobility; programmable decapsulation operations; programmable encapsulation operations; robustness; security; software-defined networking; telecom networks; vertical forwarding; virtualized controllers; Decision support systems; Internet; Software-defined networking; cellular network; fixed wireline network; gateway; telecom; tunneling
【Paper Link】 【Pages】:3345-3350
【Authors】: Yan Shvartzshnaider ; Maximilian Ott ; Olivier Mehani ; Guillaume Jourjon ; Thierry Rakotoarivelo ; David Levy
【Abstract】: In this paper, we introduce the Moana network infrastructure. It draws on well-adopted practices from the database and software engineering communities to provide a robust and expressive information-sharing service using hypergraph-based network indirection. Our proposal is twofold. First, we argue for the need for additional layers of indirection used in modern information systems to bring the network layer abstraction closer to the developer's world, allowing for expressiveness and flexibility in the creation of future services. Second, we present a modular and extensible design of the network fabric to support incremental architectural evolution and innovation, as well as its initial evaluation.
【Keywords】: Internet; graph theory; information systems; Moana network infrastructure; hypergraph-based network layer indirection; incremental architectural evolution; modern information systems; network fabric design; network layer abstraction; robust expressive information-sharing service; software engineering communities; Communities; Databases; Engines; Fabrics; Internet; Iron; Ports (Computers)
【Paper Link】 【Pages】:3351-3356
【Authors】: Gautam S. Thakur ; Ahmed Helmy
【Abstract】: The future global Internet will have to cater to users who are largely mobile. Mobility is one of the main factors affecting the design and performance of wireless networks. Mobility modeling has been an active field for the past decade, mostly focusing on matching a specific mobility or encounter metric, with little focus on matching protocol performance. This study investigates the adequacy of existing mobility models in capturing various aspects of human mobility behavior (including communal behavior), as well as network protocol performance. This is achieved systematically through the introduction of a framework that includes a multi-dimensional mobility metric space. We then introduce COBRA, a new mobility model capable of spanning the mobility metric space to match realistic traces. We methodically analyze various mobility models (SMOOTH and TVC) and traces (university campuses and theme parks) using protocol-dependent metrics (for Epidemic, Spray-and-Wait, PRoPHET, and Bubble Rap) and protocol-independent metrics (modularity). Our results indicate significant gaps in several metric dimensions between real traces and existing mobility models. Our findings show that COBRA matches the communal aspect and realistic protocol performance, reducing the overhead gap (w.r.t. existing models) from 80% to less than 12%, demonstrating the efficacy of our framework.
【Keywords】: mobile computing; protocols; COBRA; future global Internet; human mobility behavior; multidimensional mobility metric space; network protocol performance; overhead gap; realistic mobility models; realistic traces; wireless networks; Accuracy; Analytical models; Communities; Extraterrestrial measurements; Mobile communication; Protocols
【Paper Link】 【Pages】:3357-3362
【Authors】: Yanhua Li ; Moritz Steiner ; Limin Wang ; Zhi-Li Zhang ; Jie Bao
【Abstract】: In this paper, we provide a detailed analysis of venue popularity in Foursquare, a leading location-based social network. By collecting 2.4 million venues from 14 geographic regions all over the world, we study the common characteristics of popular venues and make the following observations. First, venues with more complete profile information are more likely to be popular. Second, venues in the Food category attract the most (43%) public tips (comments) by users, and Travel & Transport is the most popular category with the highest per-venue check-ins, i.e., each venue in this category attracts on average 376 check-ins. Moreover, the stickiness of users checking in at venues in the residence, office, and school categories is higher than in other categories. Last but not least, old venues created in the early stage of Foursquare are generally more popular than new venues. Our results help in understanding the factors that cause venues to become popular, and have applications in venue recommendation and advertisement in location-based social networks.
【Keywords】: mobile computing; social networking (online); Foursquare; food category; geographic regions; location-based social network; office category; profile information; public tips; residence category; school category; travel & transport category; venue check-ins; venue popularity; Art; Cities and towns; Communication networks; Conferences; Data collection; Educational institutions; Social network services
【Paper Link】 【Pages】:3363-3368
【Authors】: Yi Wang ; Hui Zang ; Michalis Faloutsos
【Abstract】: Homophily refers to the phenomenon whereby people who are socially connected share many characteristics, including demographic and behavioral properties. The goal of this paper is to see whether homophily exists in call networks and, if so, to what degree we can infer a cellphone user's demographic properties by knowing the demographic information of the people that s/he talks to. We focus on three types of demographic information: a) home location, b) age group, and c) income level. The novelty is twofold. First, we use both communication metrics and structural properties of call graphs to identify those “important” friends for each user with whom s/he is most likely to exhibit homophily. Second, we assess the importance of different time slices, such as weekdays or nights and weekends, for capturing different user relationships. We conduct our study on a real data trace with 20M subscribers during one month from a nationwide cellular carrier. Our first contribution is that we quantify the extent of homophily on the call graph and identify the correlations between homophily and communication and structural features. As a second contribution, we develop effective methods to infer demographic information for a cellular user using linear regression to select her/his most homophily-like friend. We find that we can predict home location within a 20km radius with 80% accuracy, and age group and income level with 78% and 72% accuracy, respectively.
【Keywords】: cellular radio; demography; graph theory; regression analysis; age group; behavioral properties; call graph; call network; cellphone user demographic properties; cellular user demographic information; communication metrics; home location; homophily-like friend; income level; linear regression; nationwide cellular carrier; socially-connected people; structural feature; structural property; user relationship; Accuracy; Communication networks; Conferences; Correlation; Linear regression; Prediction algorithms; Social network services
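The inference step described above (predicting a user's attribute from that of a well-chosen friend via linear regression) can be sketched in miniature. The data below is fabricated purely for illustration; the paper's features, friend-selection criteria, and regression setup are richer.

```python
# Simple least-squares fit of user_age against the age of the user's
# most-contacted friend, exploiting homophily: similar people call each
# other, so the friend's attribute is a strong predictor.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    return beta, my - beta * mx          # slope, intercept

friend_age = [22, 35, 41, 28, 55, 60]    # age of most homophily-like friend
user_age   = [24, 33, 44, 27, 52, 63]    # strongly homophilous by construction
b, a = fit_line(friend_age, user_age)
predict = lambda x: b * x + a            # predicted user age from friend age
```

With homophilous data the fitted slope sits near 1, which is the quantitative signature of homophily the paper exploits; without it the slope would drift toward 0 and the prediction would collapse to the population mean.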
【Paper Link】 【Pages】:3369-3374
【Authors】: Tiphaine Phe-Neau ; Marcelo Dias de Amorim ; Vania Conan
【Abstract】: Most disruption-tolerant networking protocols focus on mere contact and intercontact characteristics to make forwarding decisions. We propose to relax such a simplistic approach and include multi-hop opportunities by annexing a node's vicinity to its network vision. We investigate how the vicinity of a node evolves through time and whether such knowledge is useful when routing data. By analyzing a modified version of the pure WAIT forwarding strategy, we observe a clear tradeoff between routing performance and cost for monitoring the neighborhood. By observing a vicinity-aware WAIT strategy, we emphasize how the pure WAIT misses interesting end-to-end transmission opportunities through nearby nodes. For the datasets we consider, our analyses also suggest that limiting a node's neighborhood view to four hops is enough to improve forwarding efficiency while keeping control overhead low.
【Keywords】: computer networks; protocols; telecommunication network routing; WAIT forwarding strategy; control overhead; disruption-tolerant networking protocols; end-to-end transmission opportunities; forwarding decisions; multihop opportunities; network vision; opportunistic networking; routing performance; vicinity annexation; vicinity-aware WAIT strategy; Communication networks; Communities; Conferences; Delays; Monitoring; Protocols; Routing; Opportunistic networks; contact; disruption-tolerant networks; intercontact; vicinity
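The vicinity knowledge the abstract describes (a node's view limited to a few hops of the current contact graph) amounts to a hop-bounded breadth-first search. The adjacency map below is an illustrative line topology, not one of the paper's datasets.

```python
# k-hop vicinity of a node in the contact graph: BFS stopped at the hop
# limit (the paper suggests 4 hops is enough for forwarding decisions).
from collections import deque

def vicinity(graph, src, max_hops=4):
    seen = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if seen[u] == max_hops:
            continue                      # do not expand past the hop limit
        for v in graph.get(u, []):
            if v not in seen:
                seen[v] = seen[u] + 1
                queue.append(v)
    seen.pop(src)
    return seen                           # node -> hop distance within limit

chain = {i: [i + 1] for i in range(6)}    # line topology 0-1-2-3-4-5-6
near = vicinity(chain, 0, max_hops=4)     # reaches nodes 1..4 only
```

Bounding the search this way is what keeps the monitoring cost low: the vicinity a node maintains grows with the hop limit, so capping it at four hops trades a small loss of end-to-end opportunities for much cheaper neighborhood tracking.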
【Paper Link】 【Pages】:3375-3380
【Authors】: Stojan Trajanovski ; Fernando A. Kuipers ; Piet Van Mieghem
【Abstract】: It is important that our vital networks (e.g., infrastructures) are robust to more than single-link failures. Failures might for instance affect a part of the network that resides in a certain geographical region. In this paper, considering networks embedded in a two-dimensional plane, we study the problem of finding a critical region - that is, a part of the network that can be enclosed by a given elementary figure (a circle, ellipse, rectangle, square, or equilateral triangle) with a predetermined size - whose removal would lead to the highest network disruption. We determine that there is a polynomial number of non-trivial positions for such a figure that need to be considered and, subsequently, we propose a polynomial-time algorithm for the problem. Simulations on realistic networks illustrate that different figures with equal area result in different critical regions in a network.
【Keywords】: computational complexity; computational geometry; geography; network theory (graphs); circle; computational geometry; critical region; elementary figure; ellipse; equilateral triangle; geographical failure; geographical region; infrastructure; network disruption; nontrivial position; polynomial number; polynomial-time algorithm; realistic network; rectangle; single-link failure; square; two-dimensional plane; vital network; Communication networks; Complexity theory; Conferences; Measurement; Polynomials; Robustness; Shape; computational geometry; critical regions; geographical failures; network robustness
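The polynomial-number-of-positions idea can be made concrete for the circular case: a worst-case circle of fixed radius can be assumed to pass through (at least) two network nodes, so only centers determined by node pairs, plus circles anchored on single nodes, need checking. The brute-force sketch below illustrates that enumeration on a toy point set; it is not the paper's algorithm for the other figures.

```python
# Enumerate candidate centers for a radius-r circular critical region and
# pick the one enclosing the most nodes.
import math

def candidate_centers(points, r):
    for p in points:
        yield p                                  # circle centered on a node
    for i, (x1, y1) in enumerate(points):
        for x2, y2 in points[i + 1:]:
            dx, dy = x2 - x1, y2 - y1
            d2 = dx * dx + dy * dy
            if d2 == 0 or d2 > 4 * r * r:
                continue                         # pair cannot lie on one circle
            d = math.sqrt(d2)
            h = math.sqrt(r * r - d2 / 4)        # offset along perpendicular bisector
            mx, my = (x1 + x2) / 2, (y1 + y2) / 2
            ux, uy = -dy / d, dx / d
            yield (mx + h * ux, my + h * uy)
            yield (mx - h * ux, my - h * uy)

def covered(points, c, r):
    return sum((x - c[0]) ** 2 + (y - c[1]) ** 2 <= r * r + 1e-9 for x, y in points)

def most_critical_circle(points, r):
    best = max(candidate_centers(points, r), key=lambda c: covered(points, c, r))
    return best, covered(points, best, r)

pts = [(0, 0), (1, 0), (0, 1), (5, 5)]
center, hit = most_critical_circle(pts, r=1.0)   # encloses the 3-node cluster
```

With n nodes there are O(n^2) candidate centers and O(n) work per candidate, giving the polynomial bound the abstract refers to for this figure.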
【Paper Link】 【Pages】:3381-3386
【Authors】: Luigi Grimaudo ; Marco Mellia ; Elena Baralis ; Ram Keralapura
【Abstract】: Network visibility is a critical part of traffic engineering, network management, and security. Recently, unsupervised algorithms have been envisioned as a viable alternative for automatically identifying classes of traffic. However, the accuracy achieved so far does not allow their use for traffic classification in practical scenarios. In this paper, we propose SeLeCT, a Self-Learning Classifier for Internet traffic. It uses unsupervised algorithms along with an adaptive learning approach to automatically let classes of traffic emerge, be identified, and (easily) be labeled. SeLeCT automatically groups flows into pure (or homogeneous) clusters by alternating simple clustering and filtering phases to remove outliers. SeLeCT uses an adaptive learning approach to boost its ability to spot new protocols and applications. Finally, SeLeCT also simplifies label assignment (which is still based on some manual intervention) so that proper class labels can be easily discovered. We evaluate the performance of SeLeCT using traffic traces collected in different years from various ISPs located on 3 different continents. Our experiments show that SeLeCT achieves overall accuracy close to 98%. Unlike state-of-the-art classifiers, the biggest advantage of SeLeCT is its ability to help discover new protocols and applications in an almost automated fashion.
【Keywords】: Internet; information filtering; pattern classification; pattern clustering; protocols; telecommunication traffic; unsupervised learning; ISP; Internet service provider; Internet traffic; SeLeCT; adaptive learning approach; label assignment; network management; network security; network visibility; outlier removal; protocols; self-learning classifier; traffic classification; traffic engineering; traffic traces; unsupervised learning algorithms; Accuracy; Algorithm design and analysis; Clustering algorithms; Labeling; Ports (Computers); Protocols; Servers
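The alternating clustering/filtering idea can be sketched on toy one-dimensional flow features: assign each flow to its nearest centroid, drop flows that sit too far from any centroid (the filtering phase), and re-estimate. Real SeLeCT operates on multi-dimensional flow features with adaptive learning; this is only the skeleton of the cluster-then-filter loop.

```python
# Minimal cluster-and-filter loop: outliers are excluded before centroids
# are re-estimated, so clusters stay "pure" (homogeneous).

def cluster_and_filter(values, centroids, max_dist, rounds=5):
    groups = {}
    for _ in range(rounds):
        groups = {c: [] for c in centroids}
        for v in values:
            c = min(centroids, key=lambda c: abs(v - c))
            if abs(v - c) <= max_dist:        # filtering phase: drop outliers
                groups[c].append(v)
        centroids = [sum(g) / len(g) if g else c for c, g in groups.items()]
    return centroids, groups

flows = [1.0, 1.2, 0.9, 5.0, 5.1, 4.8, 42.0]  # 42.0 is an outlier flow
centers, pure = cluster_and_filter(flows, centroids=[0.0, 6.0], max_dist=2.0)
```

The payoff of filtering before re-estimation is visible even here: without it, the single 42.0 outlier would drag the second centroid far from the true group, polluting the cluster that a human would later have to label.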
【Paper Link】 【Pages】:3387-3392
【Authors】: David Malone ; Darren F. Kavanagh ; Niall Richard Murphy
【Abstract】: Femtocells are small cellular telecommunication base stations that provide improved cellular coverage. While these devices provide important improvements in coverage, battery life and throughput, they also present security challenges. We identify a problem that has not been identified in previous studies of femtocell security: rogue owners of femtocells can secretly monitor third-party mobile devices by using the femtocell's access control features. We present traffic analysis of real femtocell traces and demonstrate the ability to monitor mobile devices through classification of the femtocell's encrypted backhaul traffic. We also consider the femtocell's power usage and status LEDs as other side channels that provide information on the femtocell's operation. We conclude by presenting suitable solutions to overcome this problem.
【Keywords】: access control; femtocellular radio; telecommunication security; telecommunication traffic; LED; access control; backhaul traffic; battery life; cellular coverage; femtocell owners; small cellular telecommunication base stations; third-party mobile devices; traffic analysis; Algorithm design and analysis; Conferences; Cryptography; History; Light emitting diodes; Monitoring; Femtocell; cellular devices; rogue owners; security; traffic analysis
【Paper Link】 【Pages】:3393-3398
【Authors】: Stefan Ruehrup ; Pierfrancesco Urbano ; Andreas Berger ; Alessandro D'Alconzo
【Abstract】: In this paper we review state-of-the-art botnet detection algorithms that reveal the control traffic of malicious peer-to-peer (P2P) networks by targeting topological properties of their interconnectivity graph. This class of detection methods does not rely on the exchanged content and is therefore also applicable to encrypted control traffic. However, in practice, an ISP monitoring customer traffic over an edge router will usually see only a fraction of the overall botnet, thus restricting the available bot connectivity information and limiting the applicability of general community detection approaches. We critically review graph-based detection methods suitable for edge-router monitoring using two types of real network traces. We show experimentally that the meta-graphs of mutual contacts proposed by Coskun et al. (2010) offer the highest result quality. We improve on this approach by presenting a computationally less complex algorithm with similar result quality. Furthermore, we describe ways to alleviate the cost of dealing with false positives in the result set.
【Keywords】: computer network security; graph theory; peer-to-peer computing; ISP; Internet connection graphs; bot connectivity information; botnet detection algorithm; customer traffic; edge router monitoring; graph based detection methods; interconnectivity graph; malicious P2P networks; malicious peer-to-peer networks; Clustering algorithms; Communities; DSL; Dispersion; Monitoring; Peer-to-peer computing; Topology
【Paper Link】 【Pages】:3399-3404
【Authors】: Luca Deri ; Alfredo Cardigliano ; Francesco Fusco
【Abstract】: Capturing packets to disk at line rate and with high-precision packet timestamping is required whenever evidence of network communications has to be provided. Typical applications of long-term network traffic repositories are network troubleshooting, analysis of security violations, and analysis of high-frequency trading communications. Appliances for 10 Gbit packet capture to disk are often based on dedicated network adapters, and therefore very expensive, making them usable only in specific domains. This paper covers the design and implementation of n2disk, a packet capture to disk application, capable of dumping 10 Gbit traffic to disk using commodity hardware and open-source software. In addition to packet capture, n2disk is able to index the traffic at line rate during capture, enabling users to efficiently search for specific packets in network traffic dump files.
【Keywords】: computer network reliability; computer network security; public domain software; storage area networks; storage management; telecommunication traffic; 10 Gbit line rate packet-to-disk; commodity hardware; high precision packet timestamping; high-frequency trading communications; long-term network traffic repositories; n2disk; network adapters; network communications; network traffic dump files; network troubleshooting; open-source software; packet searching; security violation analysis; Band-pass filters; Indexing; Instruction sets; Matched filters; Monitoring; 10 Gbit Traffic Monitoring; Packet Capture; Traffic Dump to Disk
【Paper Link】 【Pages】:3405-3410
【Authors】: Velin Kounev ; David Tipper
【Abstract】: Using IEEE 802.15.4 and Zigbee for home area networks (HANs) in the Smart Grid is becoming an increasingly prominent research topic. As the standard designed for low-data-rate and low-cost wireless personal area networks, IEEE 802.15.4 is widely employed in the construction of home sensor networks to assist with real-time environment information. For the purposes of the Smart Grid, the Zigbee Alliance has defined the new Smart Energy Profile (SEP) protocol, which leverages the existing TCP and HTTP protocols. In this paper, we provide an overview of the Smart Grid's Advanced Metering Infrastructure (AMI) and Demand Response (DR) functionalities, and the communication requirements they pose for the new SEP protocol. The discussion is followed by an evaluation of the theoretical performance bounds of the new architecture based on an analytical model. We conclude by extending the model to account for WiFi interference, which is expected to be present in home and office environments.
【Keywords】: Zigbee; demand side management; home networks; radiofrequency interference; real-time systems; smart meters; smart power grids; transport protocols; wireless LAN; wireless sensor networks; AMI; DR functionalities; HTTP protocols; IEEE 802.15.4 networks; SEP protocol; TCP protocols; WiFi interference; Zigbee-based HAN; advanced metering infrastructure; analytical model; demand response communication performance; home area networks; home environments; home sensor networks; low cost wireless personal area networks; low data rate; office environments; real-time environment information; smart energy profile protocol; smart grid; theoretical performance bounds; Delays; Home appliances; IEEE 802.11 Standards; Load management; Protocols; Smart grids; Zigbee
【Paper Link】 【Pages】:3411-3416
【Authors】: Sören Finster ; Ingmar Baumgart
【Abstract】: The deployment of smart metering provides an immense amount of data for power grid operators and energy providers. By using this data, a more efficient and flexible power grid can be realized. However, this data also raises privacy concerns, since it contains very sensitive information about customers. In this paper, we present Elderberry, a peer-to-peer protocol that enables near real-time smart metering while preserving the customers' privacy. By forming small groups of cooperating smart meters, their consumption traces are anonymized before being aggregated and sent to the grid operator. Through aggregation, Elderberry realizes efficient monitoring of large numbers of smart meters. It reaches this goal without computationally complex cryptography and adds only a little communication overhead.
【Keywords】: data privacy; peer-to-peer computing; power engineering computing; power meters; smart power grids; Elderberry; flexible power grid; peer-to-peer protocol; privacy-aware smart metering protocol; real-time smart metering; Aggregates; Cryptography; Meter reading; Peer-to-peer computing; Protocols; Time measurement; Vegetation
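The privacy-through-aggregation principle behind group-based metering can be illustrated with pairwise random masks that cancel in the sum: the collector learns the group total but no individual reading. Elderberry's actual protocol builds anonymized groups over a peer-to-peer overlay; this sketch only shows the cancel-on-aggregate idea, with illustrative values.

```python
# Each pair of meters (i, j) shares an antisymmetric mask m_ij = -m_ji.
# A meter reports reading_i + sum_j(m_ij); the masks vanish in the total.
import random

def masked_readings(readings, seed=0):
    rng = random.Random(seed)
    n = len(readings)
    masks = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.randint(-1000, 1000)
            masks[i][j], masks[j][i] = m, -m   # antisymmetric pair cancels
    return [readings[i] + sum(masks[i]) for i in range(n)]

group = [3, 7, 2, 5]                 # individual kWh readings
reports = masked_readings(group)     # individually meaningless values
total = sum(reports)                 # exactly sum(group) == 17
```

Note this achieves aggregation privacy with only additions and random numbers, consistent with the abstract's claim of avoiding computationally complex cryptography (a real deployment would still need authenticated channels for the mask exchange).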
【Paper Link】 【Pages】:3417-3422
【Authors】: Hanno Georg ; Nils Dorsch ; Markus Putzke ; Christian Wietfeld
【Abstract】: Driven by the increasing application of Smart Grid technologies in today's power systems, communication networks are becoming more and more important for exchanging monitoring, control and protection information at the local and wide-area levels. The IEC 61850 standard, introduced for Substation Automation Systems (SAS) some years ago, is a candidate communication standard for the Smart Grid. IEC 61850 provides interoperability among various manufacturers and enables system-wide communication between intelligent components of future power systems. However, as IEC 61850 addresses Ethernet (the ISO/IEC 8802-3 family) as its network technology, the high-performance aspects of Ethernet have become increasingly important for time-critical communication within substation automation systems. In this paper we introduce the generic architecture of IEC 61850 and present our modelling approach for evaluating the high-performance and real-time capabilities of communication technologies for future smart grid applications. First, we give a short overview of the IEC 61850 protocol and present communication flows in substation automation systems according to the standard. Here we focus on substation automation at bay level, located inside an exemplary substation node taken from the IEEE 39-bus power system network. Afterwards we demonstrate our modelling approach for communication networks based on IEC 61850. For performance evaluation we developed a simulation model along with an analytical approach based on Network Calculus, enabling identification of worst-case boundaries for intra-substation communication. Finally, results for simulative and analytical modelling are provided and cross-validated for two bay-level scenarios, showing the applicability of Network Calculus to real-time constrained smart grid communication.
【Keywords】: IEC standards; open systems; smart power grids; substation automation; Ethernet; IEC 61850 standard; IEEE 39-bus power system network; ISO/IEC 8802-3 family; bay level scenarios; communication flows; communication technologies; generic architecture; interoperability; intra-substation communication; network calculus; network technology; power systems; protection information; smart grids; substation automation systems; substation node; system-wide communication; time-critical communication networks; wide area level; worst case boundaries; Calculus; Delays; IEC standards; Merging; Substations; Switches; Communication Network Simulation; IEC 61850; Network Calculus; Power System Simulation; Smart Grid
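The Network Calculus analysis referred to above rests on a standard worst-case result: for a token-bucket arrival curve a(t) = b + r·t and a rate-latency service curve s(t) = R·(t − T)+, the worst-case delay is D = T + b/R, provided r ≤ R. The numbers below are illustrative, not taken from the paper's IEC 61850 scenarios.

```python
# Classic Network Calculus delay bound: the maximum horizontal deviation
# between a token-bucket arrival curve and a rate-latency service curve.

def delay_bound(b, r, R, T):
    """b: burst [bit], r: sustained rate [bit/s], R: service rate [bit/s],
    T: service latency [s]. Valid (finite) only when r <= R."""
    if r > R:
        raise ValueError("unstable: arrival rate exceeds service rate")
    return T + b / R

# e.g. 2 kbit burst, 1 Mbit/s sustained rate, 10 Mbit/s link, 0.1 ms latency
D = delay_bound(b=2e3, r=1e6, R=10e6, T=1e-4)   # 0.1 ms + 0.2 ms = 0.3 ms
```

Bounds of this form are what make cross-validation against simulation meaningful: the simulated delays must stay below the analytical worst case in every scenario.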
【Paper Link】 【Pages】:3423-3428
【Authors】: Ting Liu ; Yun Gu ; Dai Wang ; Yuhong Gui ; Xiaohong Guan
【Abstract】: Bad data injection is one of the most dangerous attacks on the smart grid, as it may lead to energy theft by end users and device breakdowns in power generation. Attackers can construct bad data that evades the bad data detection mechanisms of the power system. In this paper, a novel method, named Adaptive Partitioning State Estimation (APSE), is proposed to detect bad data injection attacks. The basic ideas are: 1) the large system is divided into several subsystems to improve the sensitivity of bad data detection; 2) the detection results are used to guide subsystem updating and re-partitioning to locate the bad data. Two attack cases are constructed to inject bad data into an IEEE 39-bus system, evading the traditional bad data detection mechanism. The experiments demonstrate that all bad data can be detected and located within a small area using APSE.
【Keywords】: IEEE standards; electric power generation; power engineering computing; power system security; power system state estimation; security of data; smart power grids; APSE; Chi-squares method; IEEE 39-bus system; adaptive partitioning state estimation; bad data injection attack detection mechanism; device breakdown; power generation; power system; sensitivity; smart grid; subsystem-extension; testing result; Conferences; Decision support systems; adaptive partitioning state estimation; bad data injection; detection; security; smart grid
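The traditional detection mechanism the paper partitions around is the classical chi-square test on weighted measurement residuals (the "Chi-squares method" in the keywords); a generic sketch with invented illustrative values, not the paper's IEEE 39-bus data:

```python
def chi_square_statistic(z, h_x, sigma):
    """Weighted sum of squared residuals J(x) = sum(((z_i - h_i(x)) / sigma_i)^2),
    where z are measurements, h_x the values predicted by the estimated state,
    and sigma the measurement standard deviations."""
    return sum(((zi - hi) / si) ** 2 for zi, hi, si in zip(z, h_x, sigma))

# Bad data is suspected when J exceeds the chi-square critical value for the
# measurement redundancy; 7.815 is the hard-coded 95% value for 3 degrees
# of freedom (normally looked up from a table or scipy.stats.chi2.ppf).
CHI2_95_DOF3 = 7.815

z     = [1.02, 0.97, 5.00]   # per-unit measurements; the last one is injected
h_x   = [1.00, 1.00, 1.00]   # values consistent with the estimated state
sigma = [0.05, 0.05, 0.05]
J = chi_square_statistic(z, h_x, sigma)
suspect = J > CHI2_95_DOF3   # True: the injected value dominates J
```

Partitioning the grid into subsystems, as APSE does, shrinks the degrees of freedom per test, so a given injected error stands out against a smaller threshold.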
【Paper Link】 【Pages】:3429-3434
【Authors】: Hasen Nicanfar ; Seyedali Hosseininezhad ; Peyman TalebiFard ; Victor C. M. Leung
【Abstract】: The concept of the Electric Vehicle as Power Energy Storage has recently gained much attention from the research community and the market. The increasing capacity of the power storage in electric vehicles (EV) motivates this concept and makes it more feasible. However, the privacy of customers can be compromised by tracing the stations that an EV has been connected to over a period of time. The stations that provide power charging, as well as purchase power back from EVs, can be owned by third-party businesses. EVs should be authenticated through these stations in order to give or receive appropriate credit for the power. In this paper, we identify potential privacy issues and propose a robust privacy-preserving authentication scheme for communication between the EV and the station to prevent customer information leakage. In our approach, the EV-station communication utilizes a pseudonym of the EV, which only the smart grid server (a trusted entity) can map to the real vehicle identity, thereby providing identity management. The pseudonym of an EV changes when the EV moves from one station to another, which prevents an adversary from tracing the footprints of the EV. Our analysis shows that our model is robust enough to ensure that the privacy of the customers is fully preserved and, at the same time, efficient, consuming very limited resources.
【Keywords】: battery powered vehicles; computer network security; data privacy; energy storage; power engineering computing; secondary cells; smart power grids; customer information leakage; electric vehicle; foot print tracing; identity management; potential privacy issue; power charging; power energy storage; power station; robust privacy preserving authentication scheme; station communication; Authentication; Privacy; Public key; Servers; Smart grids; Vehicles; Electric Vehicle; Identity Management; Privacy; Pseudonymity; Security; Smart Grid; Untraceability
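The abstract does not specify how the per-station pseudonyms are constructed; one common way to realize the stated property (stations cannot link an EV across visits, while the trusted server can recover the identity) is a keyed derivation held only by the server. A hypothetical sketch, not the authors' scheme:

```python
import hashlib
import hmac

def station_pseudonym(server_key: bytes, vehicle_id: str, station_id: str) -> str:
    """Derive a per-station pseudonym for an EV. Without server_key the
    pseudonyms at two stations are unlinkable; the trusted smart grid server
    can recompute them and map each back to the real vehicle identity."""
    msg = f"{vehicle_id}|{station_id}".encode()
    return hmac.new(server_key, msg, hashlib.sha256).hexdigest()[:16]

key = b"trusted-grid-server-key"   # held only by the smart grid server
p1 = station_pseudonym(key, "EV-42", "station-A")
p2 = station_pseudonym(key, "EV-42", "station-B")
# p1 != p2: a station (or eavesdropper) cannot link the same EV across stations.
```

The identifiers `EV-42`, `station-A`, and the key are illustrative placeholders; the point is only that the pseudonym changes per station while remaining resolvable by the key holder.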
【Paper Link】 【Pages】:3435-3440
【Authors】: Liang Tang ; Yun Rui ; Qian Wang ; Husheng Li ; Zhiyong Bu
【Abstract】: A reverse transmission mechanism for the smart grid is introduced, in which the transmission robustness of the ad-hoc net-segment is guaranteed. The goal is to address the deadlock issue caused by the breakdown of the targeted tower, and to reduce transmission power consumption. A performance analysis of the new mechanism is presented, in which configurations with different numbers of electrical towers are discussed. Upon examination of the simulation results, we conclude that the proposed mechanism provides lower latency and power consumption compared to the traditional transmission mechanism, and guarantees transmission robustness for the smart grid.
【Keywords】: power consumption; smart power grids; ad-hoc net-segment; electrical towers; reverse transmission; smart grid; surveillance network; transmission power consumption; Equations; Logic gates; Monitoring; Poles and towers; Power demand; Silicon; Smart grids
【Paper Link】 【Pages】:3441-3446
【Authors】: Razvan Beuran ; Shinsuke Miwa ; Yoichi Shinoda
【Abstract】: Wireless devices are widely used today to access the Internet, despite the intermittent network connectivity they often provide, especially in mobile circumstances. The paradigm of Delay/Disruption Tolerant Networks (DTN) can be applied in such cases to improve the user experience. In this paper we present a network testbed for DTN applications and protocols that we developed based on the general-purpose wireless network emulation testbed named QOMB. Our testbed is intended for quantitative performance assessments of DTN application and protocol implementations in realistic scenarios. We illustrate the practicality of our emulation testbed through a series of experiments with the DTN2 and IBR-DTN implementations, focusing on mobility in urban environments. The scalability issues that we identified for DTN2 emphasize the need to perform large-scale repeatable evaluations of DTN applications and protocols for functionality validation and performance optimization.
【Keywords】: delay tolerant networks; mobile radio; protocols; radio networks; telecommunication network reliability; DTN2; IBR-DTN; Internet; QOMB; delay-disruption tolerant network; generic-purpose wireless network emulation testbed; intermittent network connectivity; large-scale repeatable evaluation; performance optimization; protocol; quantitative performance assessment; scalability issue; wireless device; Emulation; Internet; Libraries; Mobile communication; Protocols; Wireless networks
【Paper Link】 【Pages】:3447-3452
【Authors】: Selcuk Cevher ; Mustafa Ulutas ; Ibrahim Hökelek
【Abstract】: To seamlessly support real-time services such as voice and video over next-generation IP networks, routers must continue their forwarding tasks in the event of link/node failures, limiting the service disruption time to sub-100 ms. The IETF Routing Area Working Group (RTGWG) has been working on standardizing IP Fast Reroute (IPFRR) methods with complete alternate path coverage. In this paper, a trade-off analysis of Multi Topology Routing (MTR) based IPFRR technologies targeting full coverage, namely Multiple Routing Configurations (MRC) and Maximally Redundant Trees (MRT), is presented. We implemented a comprehensive analysis tool to evaluate the performance of the MRC and MRT mechanisms on various synthetic network topologies. The performance results show that MRT's alternative path lengths do not scale with network size and density, while the alternative path lengths of MRC change only slightly as the network size and density vary. We believe that this is an important scalability result providing guidance in the selection of an MTR-based IPFRR mechanism for improving the availability of ISP networks.
【Keywords】: IP networks; computer network reliability; next generation networks; quality of service; redundancy; telecommunication network routing; telecommunication network topology; trees (mathematics); IP fast reroute mechanism; IPFRR; ISP network; MRC; MRT; Routing Area Working Group; alternate path coverage; comprehensive analysis tool; maximally redundant tree; multiple routing configuration; multitopology routing; network density; network size; next generation service; synthetic network topology; IP networks; Internet; Network topology; Nickel; Routing; Routing protocols; Topology; IP fast reroute; multi topology routing; redundant tree
【Paper Link】 【Pages】:3453-3458
【Authors】: Han Zhang ; Christos Papadopoulos ; Daniel Massey
【Abstract】: Bot detection methods that rely on deep packet inspection (DPI) can be foiled by encryption. Encryption, however, increases entropy. This paper investigates whether adding high-entropy detectors to an existing bot detection tool that uses DPI can restore some of the bot visibility. We present two high-entropy classifiers, and use one of them to enhance BotHunter. Our results show that while BotHunter misses about 50% of the bots when they employ encryption, our high-entropy classifier restores most of its ability to detect bots, even when they use encryption.
【Keywords】: cryptography; entropy; BotHunter; bot detection tool; bot visibility; deep packet inspection; encrypted botnet traffic detection; encryption; entropy; Detectors; Encryption; Entropy; IP networks; Malware; Payloads
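The paper's exact classifiers are not given in the abstract, but the underlying signal is the Shannon entropy of payload bytes: encrypted or compressed traffic approaches the 8 bits/byte maximum, while plaintext protocols sit well below it. A minimal sketch of such a detector:

```python
import math
from collections import Counter

def byte_entropy(payload: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 .. 8.0).
    Ciphertext-like payloads tend to score close to 8.0."""
    if not payload:
        return 0.0
    counts = Counter(payload)
    n = len(payload)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Plain ASCII protocol text scores well below ciphertext-like data.
low  = byte_entropy(b"GET /index.html HTTP/1.1\r\n" * 40)
high = byte_entropy(bytes(range(256)) * 40)  # uniform bytes: exactly 8.0
```

A classifier built on this would flag flows whose payload entropy stays near the maximum, a threshold choice that trades false positives on compressed media against missed encrypted command-and-control traffic.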
【Paper Link】 【Pages】:3459-3464
【Authors】: Valerio Arnaboldi ; Marco Conti ; Andrea Passarella ; Fabio Pezzoni
【Abstract】: Online Social Networks are amongst the most important platforms for maintaining social relationships online, supporting content generation and exchange between users. They are therefore natural candidates to form the basis of future human-centric networks and data exchange systems, in addition to novel forms of Internet services exploiting the properties of human social relationships. Understanding the structural properties of OSNs and how they are influenced by human behaviour is thus fundamental to designing such human-centred systems. In this paper we analyse a real Twitter data set to investigate whether well-known structures of human social networks identified in "offline" environments can also be identified in the social networks maintained by users on Twitter. According to the well-known model proposed by Dunbar, offline social networks are formed of circles of relationships having different social characteristics (e.g., intimacy, contact frequency and size). These circles can be directly ascribed to cognitive constraints of the human brain, which impose limits on the number of social relationships maintainable at different levels of emotional closeness. Our results indicate that a similar structure can also be found in the Twitter users' social networks. This suggests that the structure of social networks in online environments is also controlled by the same cognitive properties of the human brain that operate offline.
【Keywords】: Internet; social networking (online); social sciences computing; Internet service; Twitter; cognitive constraint; content generation; data exchange system; ego network; experimental analysis; human brain; human social network; human social relationship; human-centred system; humancentric network; offline environment; offline social network; online social network; social characteristics; social relationship maintenance; structural property; Accuracy; Communication networks; Conferences; Facebook; Indexes; Twitter
【Paper Link】 【Pages】:3465-3470
【Authors】: Swati Rallapalli ; Wei Dong ; Gene Moo Lee ; Yi-Chao Chen ; Lili Qiu
【Abstract】: Users around the world have embraced a new generation of mobile devices, such as smartphones, at a remarkable rate. These devices are equipped with powerful communication and computation capabilities, and they enable a wide range of exciting location-based services, e.g., location-based ads, content prefetching, etc. Many of these services can benefit from a better understanding of smartphone user mobility, which may differ significantly from general user mobility. Hence, previous work on understanding user mobility models and predicting user mobility may not directly apply to smartphone users. To overcome this, in this paper we analyze data from two popular location-based social networks, where the users are real smartphone users and the places they check in represent the typical locations where they use their smartphone applications. Specifically, we analyze how individual users move across different locations. We identify several factors that affect user mobility and their relative significance. We then leverage these factors to perform individual mobility prediction. We further show that our mobility prediction yields significant benefit to two important location-based applications: content prefetching and shared ride recommendation.
【Keywords】: mobility management (mobile radio); smart phones; social networking (online); storage management; content prefetching; individual mobility prediction; location-based services; mobile devices; real smartphone users; shared ride recommendation; smartphone user mobility; social networks-based popular location; user mobility prediction; Accuracy; Cities and towns; Markov processes; Measurement; Predictive models; Prefetching; Wireless communication
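The abstract does not spell out the predictor, but the keywords mention Markov processes; a first-order Markov model over check-in sequences is the usual baseline for this kind of next-location prediction. An illustrative sketch with made-up check-in data:

```python
from collections import Counter, defaultdict

def train_markov(checkins):
    """First-order Markov model: count transitions between consecutive
    check-in locations in a user's history."""
    trans = defaultdict(Counter)
    for prev, nxt in zip(checkins, checkins[1:]):
        trans[prev][nxt] += 1
    return trans

def predict_next(trans, current):
    """Most frequently observed successor of the current location, or None
    if the location was never seen as a transition source."""
    if current not in trans:
        return None
    return trans[current].most_common(1)[0][0]

history = ["home", "office", "gym", "home", "office", "cafe",
           "home", "office", "gym"]
model = train_markov(history)
guess = predict_next(model, "office")  # "gym": observed twice vs "cafe" once
```

A prefetching or ride-sharing service would query such a model with the user's latest check-in and act on the predicted destination before the user arrives.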
【Paper Link】 【Pages】:3471-3476
【Authors】: Karyn Benson ; Alberto Dainotti ; kc claffy ; Emile Aben
【Abstract】: Internet Background Radiation (IBR) is unsolicited network traffic mostly generated by malicious software, e.g., worms, scans. In previous work, we extracted a signal from IBR traffic arriving at a large (/8) segment of unassigned IPv4 address space to identify large-scale disruptions of connectivity at an Autonomous System (AS) granularity, and used our technique to study episodes of government censorship and natural disasters [1]. Here we explore other IBR-derived metrics that may provide insights into the causes of macroscopic connectivity disruptions. We propose metrics indicating packet loss (e.g., due to link congestion) along a path from a specific AS to our observation point. We use three case studies to illustrate how our metrics can help identify packet loss characteristics of an outage. These metrics could be used in the diagnostic component of a semiautomated system for detecting and characterizing large-scale outages.
【Keywords】: Internet; computer network security; invasive software; telecommunication traffic; AS-level outages; IBR; IBR-derived metrics; Internet background radiation analysis; autonomous system; macroscopic connectivity disruptions; malicious software; network traffic; packet loss; IP networks; Internet; Packet loss; Routing; Telescopes
【Paper Link】 【Pages】:3477-3482
【Authors】: Pierre-Antoine Vervier ; Olivier Thonnard
【Abstract】: The Internet routing infrastructure is vulnerable to the injection of erroneous routing information, resulting in BGP hijacking. Some spammers, also known as fly-by spammers, have been reported using this attack to steal blocks of IP addresses and use them for spamming. Using stolen IP addresses may allow spammers to elude spam filters based on sender IP address reputation and remain stealthy. This remains an open conjecture despite some anecdotal evidence published several years ago. In order to confirm the first observations and reproduce the experiments at large scale, a system called SpamTracer has been developed to monitor the routing behavior of spamming networks using BGP data and IP/AS traceroutes. We then propose a set of specifically tailored heuristics for detecting possible BGP hijacks. Through extensive experimentation on a six-month dataset, we found a limited number of cases of spamming networks that were likely hijacked. In one case, the network owner confirmed the hijack. However, from the experiments performed so far, we can conclude that the fly-by spammer phenomenon does not currently seem to be a significant threat.
【Keywords】: Internet; protocols; security of data; telecommunication network routing; unsolicited e-mail; BGP data; BGP hijacking; IP/AS traceroutes; Internet routing infrastructure; SpamTracer; fly-by spammers; sender IP address reputation; spam filters; spamming networks routing behavior; stealthy spammer; stolen IP addresses; Conferences; Feeds; IP networks; Internet; Monitoring; Routing; Unsolicited electronic mail
【Paper Link】 【Pages】:3483-3488
【Authors】: Lior Neudorfer ; Yuval Shavitt ; Noa Zilberman
【Abstract】: The Internet is a complex network comprised of thousands of interconnected Autonomous Systems. Considerable research has been done to infer the undisclosed commercial relationships between ASes. These relationships, which have commonly been classified into four distinct Types of Relationships (ToRs), dictate the routing policies between ASes. These policies are crucial to understanding the Internet's traffic and behavior patterns. This work leverages Internet Point of Presence (PoP) level maps to improve AS ToR inference. We propose a method that uses PoP-level maps to find complex AS relationships and detect anomalies at the AS relationship level. We present experimental results of applying the method to ToRs reported by CAIDA and report several types of anomalies and errors. The results demonstrate the benefits of using PoP-level maps for ToR inference, requiring considerably fewer resources than other methods theoretically capable of detecting similar phenomena.
【Keywords】: Internet; telecommunication network routing; telecommunication traffic; AS relationship inference; CAIDA; Internet traffic; PoP level maps; PoPs; ToR inference; interconnected autonomous system; internet point-of-presence; routing policy; Conferences; Databases; Educational institutions; IP networks; Internet; Monitoring; Routing