INFOCOM 2011. 30th IEEE International Conference on Computer Communications, Joint Conference of the IEEE Computer and Communications Societies, 10-15 April 2011, Shanghai, China. IEEE 【DBLP Link】
【Paper Link】 【Pages】:1-5
【Authors】: Salvatore Scellato ; Ilias Leontiadis ; Cecilia Mascolo ; Prithwish Basu ; Murtaza Zafer
【Abstract】: The application of complex network theory to communication systems has led to several important results. Nonetheless, previous research has often neglected the temporal properties of such systems, which in many real scenarios play a pivotal role. Mainly because of mobility, transmission delays or protocol design, a communication network should not be considered only as a static entity. At the same time, network robustness has come extensively under scrutiny. Understanding whether networked systems can undergo structural damage and yet perform efficiently is crucial both to their protection against failures and to the design of new applications. In spite of this, it is still unclear what type of resilience we may expect in a network that continuously changes over time. In this work we present the first attempt to define the concept of temporal network robustness: we describe a measure of network robustness for time-varying networks and we show how it performs on different classes of random models by means of analytical and numerical evaluation. In particular, we show how a static approximation can wrongly indicate high robustness for fragile networks when adopted for mobile time-varying networks, while a temporal approach captures the system performance more accurately.
【Keywords】: approximation theory; mobile communication; protocols; communication network; complex network theory; failure protection; fragile network; mobile network; mobile time-varying network; numerical evaluation; protocol design; static approximation; static entity; structural damage; temporal network measure; temporal network robustness; transmission delays; Erbium; Measurement; Mobile communication; Mobile computing; Passive optical networks; Peer to peer computing; Robustness
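The contrast between static and temporal robustness described in this abstract can be made concrete with a small sketch (an illustrative toy, not the authors' actual measure, and all function names here are hypothetical): in a time-varying network a node can reach another only via a time-respecting path, so aggregating all contacts into one static graph overstates connectivity.

```python
def temporal_reach(events, nodes):
    """Fraction of ordered pairs (u, v) such that u can reach v via a
    time-respecting path, given directed contacts as (time, u, v) tuples."""
    # reach[x] = set of nodes that can reach x so far (includes x itself)
    reach = {n: {n} for n in nodes}
    for _, u, v in sorted(events):  # process contacts in time order
        reach[v] |= reach[u]
    pairs = sum(len(r) - 1 for r in reach.values())
    return pairs / (len(nodes) * (len(nodes) - 1))

def static_reach(events, nodes):
    """Same ratio on the time-aggregated graph (transitive closure)."""
    adj = {n: set() for n in nodes}
    for _, u, v in events:
        adj[u].add(v)
    total = 0
    for s in nodes:
        seen, stack = {s}, [s]
        while stack:
            for y in adj[stack.pop()]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        total += len(seen) - 1
    return total / (len(nodes) * (len(nodes) - 1))
```

With contacts b→c at time 1 and a→b at time 2, the static aggregate suggests a can reach c, but no time-respecting path exists, so the temporal ratio is strictly lower, which is the kind of over-estimation the abstract warns about.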
【Paper Link】 【Pages】:6-10
【Authors】: Sukhyun Song ; Peter J. Keleher ; Bobby Bhattacharjee ; Alan Sussman
【Abstract】: The distributed nature of modern computing makes end-to-end prediction of network bandwidth increasingly important. Our work is inspired by prior work that treats Internet bandwidth as an approximate tree metric space. This paper presents a decentralized, accurate, and low-cost system that predicts the pairwise bandwidth between hosts. We describe an algorithm to construct a distributed tree that embeds bandwidth measurements. The correctness of the algorithm is provable when it is driven by precise measurements. We then describe three novel heuristics that achieve high accuracy in predicting bandwidth even with imprecise input data. Simulation experiments with a real-world dataset confirm that our approach achieves high accuracy at low cost.
【Keywords】: Internet; bandwidth allocation; trees (mathematics); Internet; approximate tree metric space; bandwidth measurement; decentralized low cost system; distributed tree; end-to-end prediction; low-cost network bandwidth prediction; pairwise bandwidth; Accuracy; Bandwidth; Extraterrestrial measurements; Measurement uncertainty; Peer to peer computing; Prediction algorithms
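The tree-metric intuition behind this line of work can be sketched as follows (a toy under the modeling assumption the abstract mentions, not the paper's actual algorithm; the tree encoding and function names are assumptions of this sketch): if hosts sit at the leaves of a tree whose edges carry capacities, the end-to-end bandwidth between two hosts is the bottleneck (minimum) capacity on the path connecting them, so a few pairwise measurements constrain many predictions.

```python
def path_to_root(node, parent):
    """List of ancestors from node up to the root.
    parent maps child -> (parent, capacity of the connecting edge)."""
    path = [node]
    while node in parent:
        node = parent[node][0]
        path.append(node)
    return path

def predicted_bandwidth(a, b, parent):
    """Bottleneck (minimum edge capacity) on the tree path from a to b."""
    anc = set(path_to_root(a, parent))
    # Climb from b until hitting a's ancestor chain: that node is the LCA.
    node, best = b, float('inf')
    while node not in anc:
        p, cap = parent[node]
        best = min(best, cap)
        node = p
    lca = node
    # Then climb from a to the LCA, tightening the bottleneck.
    node = a
    while node != lca:
        p, cap = parent[node]
        best = min(best, cap)
        node = p
    return best
```

For example, with hosts h1 and h2 hanging off the same switch, the prediction is the smaller of their two access-link capacities; hosts in different subtrees are additionally limited by the links toward the root.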
【Paper Link】 【Pages】:11-15
【Authors】: Xingang Shi ; Chi-Kin Chau ; Dah-Ming Chiu
【Abstract】: The information of temporal correlations among network-wide data flows is crucial to a wide range of network management applications, such as root-cause analysis, threat monitoring, and traffic profiling. While prior work has studied only the centralized and offline computation of flow correlations, we present DisTrack, a space-efficient network management mechanism for online tracking of network-wide temporal flow correlations. The major benefits of DisTrack include low space complexity, high processing speed, and ease of distributed deployment. This paper presents its randomized data structures, with theoretical analysis on the trade-off between space complexity and accuracy. We further provide extensive empirical evaluations on real network traces.
【Keywords】: communication complexity; data structures; telecommunication network management; telecommunication traffic; tracking; DisTrack; distributed deployment; network-wide flow correlation; network-wide temporal flow correlation; online tracking; randomized data structure; root-cause analysis; space complexity; space-efficient network management mechanism; space-efficient tracking; temporal correlation; threat monitoring; traffic profiling; Manuals; Monitoring
【Paper Link】 【Pages】:16-20
【Authors】: Gonca Gürsun ; Mark Crovella ; Ibrahim Matta
【Abstract】: Computer systems are increasingly driven by workloads that reflect large-scale social behavior, such as rapid changes in the popularity of media items like videos. Capacity planners and system designers must plan for rapid, massive changes in workloads when such social behavior is a factor. In this paper we make two contributions intended to assist in the design and provisioning of such systems. We analyze an extensive dataset consisting of the daily access counts of hundreds of thousands of YouTube videos. In this dataset, we find that there are two types of videos: those that show rapid changes in popularity, and those that are consistently popular over long time periods. We call these two types rarely-accessed and frequently-accessed videos, respectively. We observe that most of the videos in our dataset clearly fall into one of these two types. In this work, we study the frequently-accessed videos by asking two questions: first, is there a relatively simple model that can describe their daily access patterns? And second, can we use this simple model to predict the number of accesses that a video will have in the near future, as a tool for capacity planning? To answer these questions we develop a framework for the characterization and forecasting of access patterns. We show that for frequently-accessed videos, daily access patterns can be extracted via principal component analysis and used efficiently for forecasting.
【Keywords】: Internet; multimedia systems; principal component analysis; video retrieval; YouTube videos; capacity planners; daily access patterns; frequently-accessed videos; large-scale social behavior; principal component analysis; rarely-accessed videos; system designers; video access pattern forecasting; Autoregressive processes; Forecasting; Internet; Matrix decomposition; Predictive models; Time series analysis; YouTube
【Paper Link】 【Pages】:21-25
【Authors】: Yiming Zhang
【Abstract】: Structured Peer-to-Peer (P2P) networks have been popularized through large-scale Internet applications in recent years. In this paper we present TPD, a Tag-based scheme exploiting Path Diversity in a structured P2P network (DLG-Kautz) to satisfy source nodes' requirements. In TPD, nodes with a certain property are tagged, and routing messages requiring that property are forwarded only to tagged next-hops. The effectiveness of our proposals is demonstrated through theoretical analysis.
【Keywords】: Internet; peer-to-peer computing; telecommunication network routing; DLG-Kautz network; Internet; P2P networks; routing messages; source node requirement; structured peer-to-peer networks; tag-based path diversity; Ear
【Paper Link】 【Pages】:26-30
【Authors】: Ping Xu ; XiaoHua Xu ; Shaojie Tang ; Xiang-Yang Li
【Abstract】: We propose efficient spectrum channel allocation and auction methods for online wireless channel scheduling. We assume that each user requests the exclusive usage of a number of wireless channels for a certain time interval. The scheduler has to decide whether to grant this exclusive usage and how much to charge. To serve users with higher priority, preemptions are allowed, with penalties. We analytically prove that our protocols are efficient and truthful, and that they have asymptotically optimal competitive ratios. Our extensive simulations show that they perform nearly optimally: most of our methods achieve more than 50% of the optimum attained by an offline method.
【Keywords】: channel allocation; protocols; radio networks; scheduling; wireless channels; auction method; multichannel wireless network; online wireless channel scheduling; protocol; time interval; truthful online spectrum channel allocation; Algorithm design and analysis; Cognitive radio; Heuristic algorithms; Processor scheduling; Resource management; Upper bound; Wireless networks; Spectrum; competitive ratio; mechanisms; online algorithm; strategyproof; wireless networks
【Paper Link】 【Pages】:31-35
【Authors】: Keren Tan ; Guanhua Yan ; Jihwang Yeo ; David Kotz
【Abstract】: User association logs play an important role in wireless network research. One concern of sharing such logs with other researchers, however, is that they pose potential privacy risks for the network users. Today, the common practice in sanitizing these logs before releasing them to the public is to anonymize users' sensitive information, such as their devices' MAC addresses and their exact association locations. In this work, we aim to study whether such sanitization measures are sufficient to protect user privacy. By simulating an adversary's role, we propose a novel type of correlation attack in which the adversary uses the anonymized association log to build signatures against each user, and when combined with auxiliary information, such signatures can help to identify users within the anonymized log. Using a user association log that contains more than four thousand users and millions of association records, we demonstrate that this attack technique, under certain circumstances, is able to pinpoint the victim's identity exactly with a probability as high as 70%, or narrow it down to a set of 20 candidates with a probability close to 100%. We further evaluate the effectiveness of standard anonymization techniques, including generalization and perturbation, in mitigating correlation attacks; our experimental results reveal only limited success of these methods, suggesting that more thorough treatment is needed when anonymizing wireless user association logs before public release.
【Keywords】: computer network security; data privacy; wireless LAN; anonymized log; correlation attacks; large-scale wireless LAN; privacy analysis; privacy risks; sanitization measures; standard anonymization techniques; user association logs; user privacy; wireless network; Buildings; Correlation; Hidden Markov models; Privacy; Wireless LAN; Wireless networks
【Paper Link】 【Pages】:36-40
【Authors】: Shaxun Chen ; Kai Zeng ; Prasant Mohapatra
【Abstract】: In cognitive radio networks, an adversary may transmit signals whose characteristics emulate those of primary users, in order to prevent secondary users from transmitting. Such an attack is called a primary user emulation (PUE) attack. There are two main types of primary users in white space: TV towers and wireless microphones. Existing work on PUE attack detection has focused on the first category. For the latter category, however, primary users are mobile and their transmission power is low. These unique properties of wireless microphones introduce great challenges, and existing methods are not applicable. In this paper, we propose a novel method to detect PUE attacks against mobile primary users. We exploit the correlations between RF signals and acoustic information to verify the existence of wireless microphones. The effectiveness of our approach is validated through extensive real-world experiments, which show that our method achieves both a false positive rate and a false negative rate lower than 0.1.
【Keywords】: cognitive radio; microphones; mobile radio; telecommunication security; PUE attack detection; RF signal; TV tower; acoustic information; cognitive radio network; mobile primary user emulation attack detection; transmission power; white space; wireless microphone; Microphones; Microwave integrated circuits; Transmitters; Variable speed drives
【Paper Link】 【Pages】:41-45
【Authors】: Hossein Kaffash Bokharaei ; Alireza Sahraei ; Yashar Ganjali ; Ram Keralapura ; Antonio Nucci
【Abstract】: Spam over Internet Telephony (SPIT) is a new form of spam delivered using the phone network. With the low cost of Internet telephony, SPIT has become an attractive alternative for spammers to carry out unsolicited marketing and phishing. SPIT is more intrusive than email spam as it demands immediate recipient attention. In this paper, we study characteristics of communications in a phone network with the objective of identifying “SPITters”. We collect and analyze the data from one of the largest phone providers in North America. First, we propose a new technique, Loose Tie Detection (LTD), to identify outliers based on social ties. Second, we introduce Enhanced Progressive Multi Grey-Leveling (EPMG), which identifies outliers based on call density and reciprocity. Finally, we propose SymRank, an adaptation of the PageRank algorithm that computes the reputation of subscribers based on both incoming and outgoing calls. We evaluate the three techniques and find that they compute an overlapping set of outliers. Our experiments reveal that LTD and SymRank, although seemingly independent approaches, closely match with regard to outliers, thus showing that our techniques are effective in identifying SPITters.
【Keywords】: Internet telephony; EPMG; LTD; SPIT; enhanced progressive multi grey-leveling; loose tie detection; pagerank algorithm; phone network; spam over Internet telephony; symrank; Business; Correlation; Electronic mail; History; Internet telephony; Measurement; Social network services
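SymRank is described as an adaptation of PageRank to call graphs. As a hedged point of reference, plain power-iteration PageRank over a directed caller→callee graph looks like the sketch below; this is the textbook algorithm, not the authors' SymRank, which additionally folds incoming calls into the reputation score.

```python
def pagerank(edges, nodes, d=0.85, iters=50):
    """Power-iteration PageRank on a directed call graph.
    edges: list of (caller, callee) pairs; d: damping factor."""
    out = {n: [] for n in nodes}
    for u, v in edges:
        out[u].append(v)
    n = len(nodes)
    rank = {x: 1.0 / n for x in nodes}
    for _ in range(iters):
        nxt = {x: (1 - d) / n for x in nodes}  # teleportation mass
        for u in nodes:
            if out[u]:
                share = d * rank[u] / len(out[u])
                for v in out[u]:
                    nxt[v] += share
            else:  # dangling node: spread its mass uniformly
                for v in nodes:
                    nxt[v] += d * rank[u] / n
        rank = nxt
    return rank
```

On a call graph, a low score flags subscribers whom nobody calls back, which is consistent with the abstract's use of reciprocity as a SPIT signal.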
【Paper Link】 【Pages】:46-50
【Authors】: Mahmoud Taghizadeh ; Amir R. Khakpour ; Alex X. Liu ; Subir Biswas
【Abstract】: Firewalls are one of the essential security elements for enforcing access policies in computer networks. Open network architectures, a shared wireless medium, stringent resource constraints, and highly dynamic network topologies impose a new set of challenges on deploying firewalls in a mobile wireless environment. The current state of the art demands self-protection by personal (i.e., local) firewalls on each node; however, this requires that all unwanted traffic travel all the way to a node before it is discarded at the destination. This wastes considerable bandwidth and power across all of the nodes in a network with multi-hop routing, especially if a node is under a denial-of-service (DoS) attack. In this paper, we develop a novel distributed firewalling scheme for wireless networks in which nodes collaboratively perform packet filtering to address this resource squandering. The proposed scheme introduces techniques to distribute discarding rules based on both proactive and reactive routing protocols. It also proposes efficient rule placement mechanisms to maximize the number of packets discarded remotely, before they reach the destination, and to minimize the number of unwanted packet forwardings. The scheme is evaluated through various simulation scenarios. The simulation results show that by distributing only 1% of the rules, about 42% of the unwanted traffic is discarded before it reaches the destination, which significantly saves network resources. Saving about 30% of the wasted bandwidth can be crucial for the performance of a wireless network.
【Keywords】: computer network security; mobile communication; radio networks; routing protocols; access policies; collaborative firewalling; computer networks; denial of service attack; discarding rules; distributed firewalling; highly dynamic network topology; mobile wireless environment; multihop routing; open network architecture; packet filtering; resource squandering; routing protocols; security elements; shared wireless medium; stringent resource constraints; wireless networks; Ad hoc networks; Mobile communication; Mobile computing; Routing; Routing protocols; Wireless networks
【Paper Link】 【Pages】:51-55
【Authors】: Yan Zhang ; Nirwan Ansari
【Abstract】: TCP Incast, also known as TCP throughput collapse, describes a link-capacity under-utilization phenomenon in certain many-to-one communication patterns, typical of many datacenter applications. The main root cause of TCP Incast identified by prior work is packet drops at the congested switch, which result in TCP timeouts. Congestion control algorithms have been developed to reduce or eliminate packet drops at the congested switch. In this paper, the performance of Quantized Congestion Notification (QCN) with respect to the TCP Incast problem during data access from clustered servers in datacenters is investigated. QCN can effectively control link rates very rapidly in a datacenter environment; however, it performs poorly when TCP Incast occurs. To explain this low link utilization, we examine the rate fluctuation of different flows within one synchronous read request, and find that the poor TCP throughput with QCN is due to rate unfairness among flows. Therefore, an enhanced QCN congestion control algorithm, called fair Quantized Congestion Notification (FQCN), is proposed to improve the fairness of multiple flows sharing one bottleneck link. We evaluate the performance of FQCN, as compared to that of QCN, in terms of fairness and convergence with four simultaneous and eight staggered source flows. Compared to QCN, fairness is greatly improved and the queue length at the bottleneck link converges to the equilibrium queue length very quickly. The effects of FQCN on TCP throughput collapse are also investigated. Simulation results show that FQCN significantly enhances TCP throughput performance in a TCP Incast setup.
【Keywords】: telecommunication congestion control; transport protocols; FQCN; QCN congestion control; TCP incast; TCP throughput collapse; congestion control algorithm; congestion switch; data center network; fair quantized congestion notification; link capacity underutilization phenomenon; packet drop; Artificial intelligence; Bandwidth; Convergence; Servers; Switches; Synchronization; Throughput; Data Center Networks (DCN); Quantized Congestion Notification (QCN); TCP Incast; TCP throughput collapse; congestion control; fairness
【Paper Link】 【Pages】:56-60
【Authors】: Dan Li ; Mingwei Xu ; Ming-Chen Zhao ; Chuanxiong Guo ; Yongguang Zhang ; Min-You Wu
【Abstract】: Multicast benefits data center group communication by both saving network traffic and improving application throughput. The SLA (Service Level Agreement) of cloud services requires the computation correctness of distributed applications, translating into a requirement for reliable Multicast delivery. In this paper we present RDCM, a novel reliable Multicast approach for data center networks. The key idea of RDCM is to minimize the impact of packet loss on Multicast performance by leveraging the rich link resources in data centers. A Multicast-tree-aware backup overlay is purposely built on group members for peer-to-peer packet repair. Riding on Unicast, packet repair not only achieves complete repair isolation, but also has a high probability of bypassing the pathological links in the Multicast tree where packet loss occurs. The backup overlay is organized in such a way that it causes little individual repair burden, control overhead, or overall repair traffic. We have implemented RDCM as a user-level library on the Windows platform. The experiments on our testbed show that RDCM handles packet loss without obvious throughput degradation during high-speed data transmission.
【Keywords】: computer centres; data communication; multicast communication; operating systems (computers); peer-to-peer computing; telecommunication network reliability; telecommunication traffic; trees (mathematics); RDCM; SLA; Windows platform; cloud service; data center group communication; high-speed data transmission; multicast tree-aware backup overlay; network traffic; packet loss; peer-to-peer packet repair; reliable data center multicast; service level agreement; unicast communication; user level library; Internet; Maintenance engineering; Peer to peer computing; Receivers; Servers; Throughput; Unicast
【Paper Link】 【Pages】:61-65
【Authors】: Deke Guo ; Tao Chen ; Dan Li ; Yunhao Liu ; Xue Liu ; Guihai Chen
【Abstract】: A fundamental challenge in data centers is how to design networking structures that efficiently interconnect a large number of servers. Several server-centric structures have been proposed, but they are not truly expansible and suffer from a low degree of regularity and symmetry. To address this issue, we propose two novel structures called HCN and BCN, which utilize hierarchical compound graphs to interconnect a large population of servers, each with only two ports. They offer two topological advantages: expansibility and equal degree. In addition, HCN offers a high degree of regularity, scalability and symmetry, which conforms well to the modular design of data centers. Moreover, a BCN of level one in each dimension involves more servers than FiConn with server degree 2 and diameter 7, and is large enough for a single data center. Mathematical analysis and comprehensive simulations show that BCN possesses excellent topological properties and is a viable network structure for data centers.
【Keywords】: computer centres; graph theory; network servers; BCN; data center modular design; expansible network structures; hierarchical compound graphs; mathematical analysis; networking structures; server-centric structures; Compounds; Network topology; Routing; Scalability; Servers; Topology; USA Councils
【Paper Link】 【Pages】:66-70
【Authors】: Vivek Shrivastava ; Petros Zerfos ; Kang-Won Lee ; Hani Jamjoom ; Yew-Huey Liu ; Suman Banerjee
【Abstract】: While virtual machine (VM) migration is allowing data centers to rebalance workloads across physical machines, the promise of a maximally utilized infrastructure is yet to be realized. Part of the challenge is due to the inherent dependencies between VMs comprising a multi-tier application, which introduce complex load interactions between the underlying physical servers. For example, simply moving an overloaded VM to a (random) underloaded physical machine can inadvertently overload the network. We introduce AppAware, a novel, computationally efficient scheme for incorporating (1) inter-VM dependencies and (2) the underlying network topology into VM migration decisions. Using simulations, we show that our proposed method decreases network traffic by up to 81% compared to a well-known alternative VM migration method that is not application-aware.
【Keywords】: computer centres; network servers; resource allocation; telecommunication network topology; telecommunication traffic; virtual machines; AppAware; application-aware virtual machine migration; data centers; interVM dependency; multitier application; network topology; network traffic; overloaded VM; underloaded physical machine; workload rebalancing; Computer architecture; Load modeling; Network topology; Servers; Topology; Vegetation; Virtual machining
【Paper Link】 【Pages】:71-75
【Authors】: Meng Wang ; Xiaoqiao Meng ; Li Zhang
【Abstract】: Recent advances in virtualization technology have made it a common practice to consolidate virtual machines (VMs) onto a smaller number of servers. An efficient consolidation scheme requires that VMs are packed tightly, yet receive resources commensurate with their demands. However, measurements from production data centers show that the network bandwidth demands of VMs are dynamic, making it difficult to characterize the demands by a fixed value and to apply traditional consolidation schemes. In this work, we formulate VM consolidation as a Stochastic Bin Packing problem and propose an online packing algorithm by which the number of servers required is within (1+ε)(√2+1) of the optimum for any ε > 0. The result can be improved to within (√2+1) of the optimum in a special case. In addition, we use numerical experiments to evaluate the proposed consolidation algorithm and observe a 30% server reduction compared to several benchmark algorithms.
【Keywords】: bin packing; computer centres; virtual machines; benchmark algorithm; consolidation algorithm; data centers; dynamic bandwidth demand; network bandwidth demands; online packing algorithm; server reduction; stochastic bin packing problem; virtual machines; virtualization technology; Educational institutions; Servers
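For context on the online setting, a deterministic first-fit heuristic, the classical baseline for online bin packing, can be sketched as follows (the paper's algorithm handles stochastic bandwidth demands; this toy assumes fixed scalar demands and a normalized server capacity, and the names are illustrative):

```python
def first_fit(demands, capacity=1.0):
    """Place each arriving VM demand on the first server with room,
    opening a new server when none fits. Returns per-server load lists.
    Decisions are made online: each demand is placed as it arrives."""
    servers = []
    for d in demands:
        for s in servers:
            # small tolerance guards against float rounding at the boundary
            if sum(s) + d <= capacity + 1e-9:
                s.append(d)
                break
        else:
            servers.append([d])
    return servers
```

First-fit never uses more than roughly 1.7 times the optimal number of bins in the worst case; the abstract's contribution is a sharper guarantee for the stochastic-demand variant of the problem.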
【Paper Link】 【Pages】:76-80
【Authors】: Mahdi Hajiaghayi ; Min Dong ; Ben Liang
【Abstract】: We consider the problem of jointly optimizing channel pairing, channel-user assignment, and power allocation in a single-relay multiple-access system. The optimization objective is to maximize the weighted sum-rate under total and individual power constraints on the transmitters. By observing the special structure of a three-dimensional assignment problem derived from the original problem, we propose a polynomial-time algorithm based on continuity relaxation and dual minimization. The proposed method is shown to be optimal for all relaying strategies that give a concave rate function in terms of power constraints.
【Keywords】: channel allocation; optimisation; polynomials; dual-hop multichannel multiuser relaying; optimal channel-user assignment; power allocation; single-relay multiple-access system; three-dimensional assignment problem; transmitter; Complexity theory; Joints; OFDM; Optimization; Relays; Resource management; Signal to noise ratio
【Paper Link】 【Pages】:81-85
【Authors】: Fan Wu ; Nitin H. Vaidya
【Abstract】: With the growing deployment of wireless communication technologies, radio spectrum is becoming a scarce resource. Thus mechanisms to efficiently allocate the available spectrum are of interest. In this paper, we model the radio spectrum allocation problem as a sealed-bid reserve auction, and propose SMALL, which is a Strategy-proof Mechanism for radio spectrum ALLocation. Furthermore, we extend SMALL to adapt to multi-radio spectrum buyers, which can bid for more than one radio.
【Keywords】: channel allocation; radio spectrum management; SMALL; multiradio spectrum buyers; radio spectrum allocation; sealed-bid reserve auction; strategy-proof mechanism; wireless communication
【Paper Link】 【Pages】:86-90
【Authors】: Ajay Gopinathan ; Zongpeng Li
【Abstract】: Dynamic spectrum allocation has proven promising for mitigating the spectrum scarcity problem. In this model, primary users lease chunks of under-utilized spectrum to secondary users on a short-term basis. Primary users may need financial motivation to share spectrum, since they assume costs in obtaining spectrum licenses. Auctions are a natural revenue-generating mechanism to apply. Recent designs of spectrum auctions make the strong assumption that the primary user knows the probability distribution of user valuations. We study revenue-maximizing spectrum auctions in the more realistic prior-free setting, when information on user valuations is unavailable. A two-phase auction framework is constructed. In phase one, we design a strategyproof mechanism that computes a subset of users with an interference-free spectrum allocation, such that the potential revenue in the second phase is maximized. A tailored payment scheme ensures truthful bidding at this stage. The selected users then participate in phase two, where we design a randomized competitive auction and prove its strategyproofness through the argument of bid independence. Employing probabilistic techniques, we prove that our auction generates a revenue that is at least 1/3 of the optimal revenue, improving the best known ratio of 1/4 proven for similar settings.
【Keywords】: pricing; radio spectrum management; dynamic spectrum allocation; interference-free spectrum allocation; primary users; randomized competitive auction; revenue-maximizing spectrum auctions; secondary spectrum access; spectrum licenses; spectrum scarcity problem; spectrum sharing; tailored payment scheme; truthful bidding; two-phase auction framework; user valuations; Algorithm design and analysis; Cost accounting; Dynamic scheduling; Economics; Interference; Protocols; Resource management
【Paper Link】 【Pages】:91-95
【Authors】: Jonathan Wellons ; Yuan Xue
【Abstract】: Routing is a critical element of wireless mesh network design and serves to enhance both the network's capacity as a whole and the performance of individual flows. Achieving both robustness and efficiency in mesh routing is an important yet challenging issue due to uncertain traffic demands. To ensure routing performance while retaining robustness, we explore using knowledge of historical traffic demands. Two concepts of robustness, namely congestion robustness and performance-ratio robustness, are examined. We show that although a routing that is robust with respect to the interior of a convex region of traffic demands implies future robustness, a similar property does not hold for the boundaries of the region, because the performance ratio is not limited by the boundaries while the absolute congestion is. We develop a performance-ratio robust routing formulation for multi-radio, multi-channel networks that exploits traffic demands that fall into a predicted region. A linear problem transformation is presented to solve this highly complex non-linear formulation. A detailed simulation study is conducted over representative topologies with a real traffic trace to evaluate the novel algorithm. We find a strong performance improvement with little margin for further gains.
【Keywords】: telecommunication network routing; telecommunication network topology; telecommunication traffic; wireless channels; wireless mesh networks; complex nonlinear formulation; congestion robustness; linear problem transformation; mesh routing; multichannel wireless mesh networks; multiradio network; network traffic; performance-ratio robust routing formulation; Interference; Optimized production technology; Robustness; Routing; Topology; Wireless communication; Wireless mesh networks
【Paper Link】 【Pages】:96-100
【Authors】: Reuven Cohen ; Guy Grebla
【Abstract】: In an OFDMA network, the modulation and coding scheme (MCS) of the messages sent to the mobile stations (MSs) varies according to channel condition. To determine the appropriate MCS level, the base station (BS) allocates to every active MS a CQI (Channel Quality Information) channel. The CQI bandwidth is a scarce resource whose allocation must be adjusted to the actual needs of the MSs. However, allocations and deallocations of CQI channels require expensive signaling messages between the BS and the MSs, and therefore should be minimized. In this paper we propose a framework for the management of the CQI bandwidth by the BS. We identify three related optimization problems and propose efficient algorithms for solving them.
【Keywords】: OFDM modulation; bandwidth allocation; frequency division multiple access; mobile radio; modulation coding; telecommunication signalling; wireless channels; CQI bandwidth allocation; CQI bandwidth management; CQI channel; MCS level; OFDMA network; base station; broadband wireless network; channel condition; channel quality information; mobile station; modulation and coding scheme; signaling message; Bandwidth; IEEE 802.16 Standards; Mobile communication; Optimization; Resource management; Vegetation; Wireless communication
【Paper Link】 【Pages】:101-105
【Authors】: Yu Jin ; Nick G. Duffield ; Alexandre Gerber ; Patrick Haffner ; Wen-Ling Hsu ; Guy Jacobson ; Subhabrata Sen ; Shobha Venkataraman ; Zhi-Li Zhang
【Abstract】: Effective management of large-scale cellular data networks is critical to meet customer demands and expectations. Customer calls for technical support provide direct indication of the problems customers encounter. In this paper, we study the customer tickets - free-text recordings and classifications by customer support agents - collected at a large cellular network provider, with two inter-related goals: i) to characterize and understand the major factors which lead customers to call and seek support; and ii) to utilize such customer tickets to help identify potential network problems. For this purpose, we develop a novel statistical approach to model customer call rates which accounts for customer-side factors (e.g., user tenure and handset types) and geo-locations. We show that most calls are due to customer-side factors and can be well captured by the model. Furthermore, we also demonstrate that location-specific deviations from the model provide a good indicator of potential network-side issues.
【Keywords】: cellular radio; customer services; statistical analysis; technical support services; telecommunication network management; telecommunication traffic recording; cellular networks; customer demands; customer tickets; customer-side factors; free-text classifications; free-text recordings; location-specific deviations; statistical approach; technical support; 3G mobile communication; Correlation; Ground penetrating radar; IEEE Potentials; Land mobile radio cellular systems; Portable computers; Software
【Paper Link】 【Pages】:106-110
【Authors】: Salah-Eddine Elayoubi ; Louai Saker ; Tijani Chahed
【Abstract】: In this paper, we investigate network sleep mode for reducing the energy consumption of radio access networks. We propose an offline-optimized controller that associates with each traffic load an activation/deactivation policy maximizing a multi-objective function of the Quality of Service (QoS) and the energy consumption. We focus on practical implementation issues that may affect the QoS and the stability of the system. Specifically, we consider the activation time issue, which degrades throughput, and the ping-pong effect, which causes unnecessary ON/OFF oscillations. We illustrate our results numerically in beyond-3G networks.
【Keywords】: 3G mobile communication; energy consumption; optimal control; quality of service; radio access networks; 3G networks; QoS; base station; energy consumption; network sleep mode; offline-optimized controller; optimal control; ping-pong effect; quality of service; radio access networks; Base stations; Energy consumption; Hysteresis; Multiaccess communication; Quality of service; Switches; Throughput
【Paper Link】 【Pages】:111-115
【Authors】: David López-Pérez ; Ákos Ladányi ; Alpár Jüttner ; Hervé Rivano ; Jie Zhang
【Abstract】: This article investigates the problem of allocating modulation and coding schemes, subcarriers and power to users in LTE. The proposed model achieves inter-cell interference mitigation through the dynamic and distributed self-organization of cells; therefore, no a priori frequency planning is needed. Moreover, a two-level decomposition method able to find near-optimal solutions is proposed to solve the optimization problem. Finally, simulation results show that, compared to classic reuse schemes, the proposed approach is able to pack more users into the same bandwidth, decreasing the probability of user outage.
【Keywords】: Long Term Evolution; decomposition; encoding; modulation; coding rates; distributed cell self-organization; dynamic cell self-organization; frequency planning; inter-cell interference mitigation; joint allocation; modulation schemes; near optimal solutions; optimization method; optimization problem; resource blocks; self-organizing LTE networks; subcarriers; two-level decomposition method; Interference; Macrocell networks; Modulation; OFDM; Optimization; Resource management; Throughput
【Paper Link】 【Pages】:116-120
【Authors】: SeYoung Yun ; Yung Yi ; Dong-Ho Cho ; Jeonghoon Mo
【Abstract】: The femtocell is an enabling technology for handling exponentially increasing wireless data traffic. Despite the extensive attention paid to resource control, e.g., power control and load balancing in femtocell networks, success largely depends on whether operators and users accept this technology. In this paper, we study the economic aspects of femtocell services with game-theoretic models between providers and/or users. We consider three services: users can access only macro BSs (mobile-only), or openly/exclusively use their femto BS (open- or closed-femto). The main messages are: 1) the operator is better off providing just the open-femto service rather than a mix of closed- and open-femto services; 2) the two policies of allowing or blocking access of mobile-only users to open femto BSs are not significantly differentiated in revenue.
【Keywords】: femtocellular radio; game theory; power control; resource allocation; telecommunication traffic; femtocell networks; game theoretic models; load balancing; macro BSs; mobile-only; open-femto service; power control; resource control; wireless data traffic; Biological system modeling; Economics; Femtocells; Games; Measurement; Pricing; Subscriptions
【Paper Link】 【Pages】:121-125
【Authors】: Gyan Ranjan ; Zhi-Li Zhang ; Supranamaya Ranjan ; Ram Keralapura ; Joshua Robinson
【Abstract】: Despite the rapid growth in cellular data traffic, we know very little about the (operational) cellular data service network (CDSN) infrastructure. A key step in the process of developing any such understanding is to first understand the locations and distribution of the basestations in the CDSN infrastructure that serve as physical access points for end users to communicate with the underlying network. Such knowledge not only can provide critical insight into the CDSN infrastructure, but can also guide the development of innovative (e.g. location-aware) services and applications. In this paper we propose a novel approach for mapping the CDSN basestation infrastructure via (explicit) user geo-intent. The intuition behind the proposed approach is to exploit specific geo-locations (i.e. geo-intent) contained in user queries to location-based services, and correlate them with basestation IDs to geo-map the CDSN infrastructure. To investigate the validity of our approach, we employ data (RADIUS/RADA data sessions and application sessions) collected at the core IP network inside a CDSN. We develop heuristics for identifying user geo-intent and for geo-mapping the CDSN infrastructure - in particular, the basestations - and evaluate their efficacy using a subset of basestations with ground-truth GPS locations.
【Keywords】: Global Positioning System; cellular radio; mobility management (mobile radio); telecommunication traffic; CDSN infrastructure; IP network; cellular data service network; cellular data traffic; cellular infrastructure locations; geo-locations; ground-truth GPS locations; location-based services; user geo-intent; Global Positioning System
【Paper Link】 【Pages】:126-130
【Authors】: Péter Mátray ; Péter Hága ; Sándor Laki ; István Csabai ; Gábor Vattay
【Abstract】: The geographic layout of the physical Internet inherently determines important network properties and traffic characteristics. To give insight into the geography of the Internet, we examine the spatial properties of the topology and routing. To map the network, we conducted a geographically dispersed traceroute campaign and embedded the extracted topology into geographic space by applying a novel IP geolocalization service called Spotter. In this paper we present the frequency analysis of link lengths, quantify path circuitousness and explore the symmetry of end-to-end Internet routes.
【Keywords】: Internet; telecommunication network routing; telecommunication network topology; IP geolocalization service; Internet network geography; Spotter service; end-to-end Internet route; network routing property; network topology property; traceroute campaign; Asia; Europe; IP networks; Internet; Network topology; Routing; Topology
【Paper Link】 【Pages】:131-135
【Authors】: Shiva Prasad Kasiviswanathan ; Stephan Eidenbenz ; Guanhua Yan
【Abstract】: In this paper, we study some geographic aspects of the Internet. We base our analysis on a large set of geolocated IP hop-level session data (including about 300,000 backbone routers, 130 million end hosts, and one billion sessions) that we synthesized from a variety of input sources such as US census data, computer usage statistics, Internet market share data, IP geolocation data sets, CAIDA's Skitter data set for backbone connectivity, and BGP routing tables. We use this model to perform a nationwide and statewide geographic analysis of the Internet. Our main observations are: (1) There is a dominant coast-to-coast pattern in US Internet traffic; in many instances, even if the end-devices are near neither coast, the traffic between them still takes a long detour through the coasts. (2) More than half of the Internet paths are inflated by 100% or more compared to their corresponding geometric straight-line distance. This circuitousness makes the average ratio between routing distance and geometric distance large (around 10). (3) The weighted mean hop count is around 5, but hop counts are only loosely correlated with distances. The weighted mean AS count (number of ASes traversed) is around 3.
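The path-inflation (circuitousness) ratio discussed in this abstract is straightforward to compute from geolocated hops. Below is an illustrative sketch (not the paper's code), using the haversine great-circle distance and approximate coordinates for three US cities:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def circuitousness(hops):
    """Ratio of routed distance (sum of hop-to-hop great-circle legs)
    to the straight-line distance between the path's endpoints."""
    routed = sum(haversine_km(*hops[i], *hops[i + 1]) for i in range(len(hops) - 1))
    direct = haversine_km(*hops[0], *hops[-1])
    return routed / direct

# A NYC -> Miami -> LA route vs. the direct NYC -> LA great circle
path = [(40.71, -74.01), (25.76, -80.19), (34.05, -118.24)]
print(round(circuitousness(path), 2))
```

A ratio of 1.0 means the route follows the great circle exactly; the paper reports average inflation around 10 for US Internet paths.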
【Keywords】: Internet; telecommunication network routing; US Internet traffic; coast-to-coast pattern; geography-based analysis; geolocated IP hop-level session data; geometric straight-line distance; internet infrastructure; Computational modeling; Computers; Correlation; Heating; Internet; Routing; Topology
【Paper Link】 【Pages】:136-140
【Authors】: Mohit Chamania ; Marcel Caria ; Admela Jukan
【Abstract】: We present a novel analytical framework for modeling current IP offloading schemes and compare their performance. We analyze relevant parameters, including IP routing re-convergence time, the number of affected IP routes, and the cost of circuit capacity used for offloading. Our results show that emerging offloading solutions based on invisible bypasses can better maintain network stability and reduce routing reconfiguration effort compared to traditional traffic engineering approaches that modify the IP routing.
【Keywords】: IP networks; telecommunication network routing; telecommunication traffic; IP routing reconvergence time; IP traffic offloading scheme; circuit capacity; dynamic circuit; network stability; IP networks; Integrated circuit modeling; Multiprotocol label switching; Network topology; Optical switches; Routing; Topology
【Paper Link】 【Pages】:141-145
【Authors】: Onur Turkcu ; Arun K. Somani
【Abstract】: We consider routing multicast traffic requests originating in a hierarchical optical network architecture with multiple access networks attached to a core network. A request originates in a source access network (also called a collection network) and includes multiple destinations in different access networks (also called distribution networks); we refer to these collectively as collection-distribution networks (CDNs). The connection point between a CDN and the core network is an edge node, which can groom several multicast connections together. In such an architecture, given a set of multicast requests, our goal is to maximize the number of multicast destinations and the bandwidth served. We therefore formulate an optimization problem whose objective is to maximize the bandwidth-destination product under a partial multicast service discipline. We show that this problem is NP-hard and present a suboptimal and a heuristic algorithm. We develop a lower bound on the performance, present numerical results, and show that our algorithms perform very close to the lower bound.
【Keywords】: computational complexity; heuristic programming; multicast communication; optical fibre subscriber loops; optimisation; telecommunication network routing; telecommunication traffic; NP-hard problem; collection-distribution network; heuristic algorithm; multiple access network; optical network architecture; optimization problem; partial multicast service discipline; routing multicast traffic request; source access network; suboptimal algorithm; Bandwidth; Measurement; Optical fiber networks; Optimization; Receivers; Transceivers; Transmitters
【Paper Link】 【Pages】:146-150
【Authors】: Ravish Khosla ; Sonia Fahmy ; Y. Charlie Hu
【Abstract】: The Border Gateway Protocol (BGP), the de-facto Internet interdomain routing protocol, disseminates information about Internet prefixes to Autonomous Systems (ASes). Prefixes are announced and withdrawn as routes and policies change, making them unreachable from portions of the Internet for certain time periods. This paper aims to predict routing failures of prefixes in the Internet. We investigate the similarity of prefixes in the Internet with respect to their propensity to fail, i.e., become unreachable. Given a prefix of interest, we define a “BGP molecule” - the prefixes in the Internet that are likely to fail together with this prefix. We show that the AS paths to prefixes, coupled with knowledge of the prefix geographical location, contribute to its failure tendency. The BGP molecules constructed are used in four failure prediction schemes, among which a hybrid scheme achieves 91% predictability of failures with 99.3% coverage of prefixes in the Internet.
【Keywords】: Internet; routing protocols; BGP molecule; Internet interdomain routing protocol; Internet prefix; border gateway protocol; prefix failure prediction; Arrays; Correlation; Internet; Measurement; Prediction algorithms; Routing; Training
【Paper Link】 【Pages】:151-155
【Authors】: Udi Weinsberg ; Augustin Soule ; Laurent Massoulié
【Abstract】: The increasing adoption of high-speed Internet connectivity in homes has led to the development of bandwidth-hungry applications. This, in turn, induces ISPs to protect their core networks by deploying traffic-shaping devices. End users, ISPs and regulators need to better understand the shaping policies that are enforced by the network. This paper presents a method for inferring flow discrimination and shaping parameters in the presence of cross traffic using active probing. The key concept is a stochastic comparison of the packet inter-arrival times and measured bandwidth of a baseline flow and the measured flow. We present Packsen, a framework designed to provide high detection accuracy by sending interleaved flows at a very precise bandwidth, and use it for measurements on a local testbed and on PlanetLab. Evaluation shows the accuracy and robustness of the proposed method for detecting traffic shaping and inferring its parameters.
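The stochastic comparison of a baseline flow against the measured flow can be illustrated with a simple two-sample statistic. The sketch below uses a Kolmogorov-Smirnov-style CDF gap as a stand-in for Packsen's actual detection statistic, on made-up inter-arrival samples:

```python
import bisect

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for x in sorted(set(a) | set(b)):
        fa = bisect.bisect_right(a, x) / len(a)
        fb = bisect.bisect_right(b, x) / len(b)
        d = max(d, abs(fa - fb))
    return d

# Hypothetical inter-arrival times (seconds) for two interleaved flows
baseline = [0.010, 0.011, 0.010, 0.012, 0.011, 0.010]
measured = [0.010, 0.025, 0.026, 0.024, 0.025, 0.026]  # packets spaced out, as a shaper would

print(ks_statistic(baseline, measured) > 0.5)  # large CDF gap suggests discrimination
```

A near-zero statistic means the two flows see statistically similar treatment; a large gap flags the measured flow as potentially shaped.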
【Keywords】: Internet; bandwidth allocation; telecommunication traffic; ISP; Packsen framework; PlanetLab; active probing; bandwidth hungry application; bandwidth measurement; base-line flow; cross traffic; end host measurement; flow discrimination; high detection accuracy; high speed Internet connectivity; home; interarrival time; interleaved flow; shaping policy parameter; traffic shaping device; Artificial neural networks; Extraterrestrial measurements; Internet; Nickel; Q measurement; Traffic control
【Paper Link】 【Pages】:156-160
【Authors】: Pinghui Wang ; Xiaohong Guan ; Weibo Gong ; Donald F. Towsley
【Abstract】: We present a new virtual indexing method for estimating host connection degrees for high speed links. It is based on the virtual connection degree sketch, where a compact sketch of network traffic is built by generating associated virtual bitmaps for each host. Each virtual bitmap consists of a fixed number of bits selected randomly from a shared bit array by a new method for recording the traffic flows of the corresponding host. The shared bit array is efficiently utilized by all hosts, since every bit is shared by the virtual bitmaps of multiple hosts. To reduce the “noise” contaminating a host's virtual bitmaps due to sharing, we propose a new method to generate a “filtered” bitmap used to estimate the host connection degree. Furthermore, the method can be easily implemented in parallel and distributed processing environments. Experimental and testing results based on actual network traffic show that the new method is accurate and efficient.
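A minimal sketch of the virtual-bitmap idea, assuming SHA-256-based index selection and a linear-counting degree estimator (the paper's filtered-bitmap noise correction is omitted, and the array sizes here are arbitrary):

```python
import hashlib
import math

M = 4096   # size of the shared bit array
K = 64     # virtual bitmap size per host

bits = [0] * M

def _h(*parts):
    """Deterministic hash of string parts to a large integer."""
    return int(hashlib.sha256("|".join(parts).encode()).hexdigest(), 16)

def virtual_index(host, i):
    """Index in the shared array of the i-th bit of host's virtual bitmap."""
    return _h("sel", host, str(i)) % M

def record(src, dst):
    """Record one flow: set the virtual bit of src selected by hashing dst."""
    slot = _h("flow", src, dst) % K
    bits[virtual_index(src, slot)] = 1

def estimate_degree(host):
    """Linear-counting estimate of host out-degree from its virtual bitmap
    (cross-host sharing noise is ignored in this sketch)."""
    zeros = sum(1 - bits[virtual_index(host, i)] for i in range(K))
    return K * math.log(K / max(zeros, 1))

for d in range(30):                       # host contacts 30 distinct destinations
    record("10.0.0.1", f"dst{d}")
print(round(estimate_degree("10.0.0.1")))  # close to 30
```

Because every bit of the shared array may belong to several hosts' virtual bitmaps, memory is used efficiently; the cost is the sharing noise that the paper's filtered bitmap is designed to remove.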
【Keywords】: media streaming; parallel processing; radio links; telecommunication traffic; compact sketch; data streaming; distributed processing; filtered bitmap; high speed link; host connection degree; network traffic; noise; parallel processing; traffic flow; virtual bitmap; virtual connection degree sketch; virtual indexing; Accuracy; Arrays; Bismuth; Indexing; Monitoring; Noise; Pollution measurement; Bitmap; Data streaming; Host connection degree; Virtual Indexing
【Paper Link】 【Pages】:161-165
【Authors】: Kohei Watabe ; Masaki Aida
【Abstract】: Active measurement, which can provide end-to-end measurements of network performance, is critical since the Internet is managed by multiple organizations. Recently, regarding active measurement of delay and loss, Baccelli et al. reported that many probing policies can provide appropriate estimation, in addition to the traditional policy based on the PASTA property, provided the volume of the probe stream is negligible compared to that of the traffic stream. Probing schemes with fixed probe packet intervals provide superior accuracy, but suffer from the phase-lock phenomenon due to synchronization with the network's behavior. A remaining issue is how to decide the optimal probing policy while taking the phase-lock phenomenon into consideration. In this paper, we propose a probing policy that randomly fluctuates the probe packet interval to avoid the phase-lock phenomenon. We start by clarifying the relationships among the fluctuation magnitude, the properties of the target network, and the estimation accuracy, and we discuss the optimal probing policy with regard to the properties of the target network.
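The proposed policy of fluctuating the probe interval can be sketched as follows. This is illustrative only; the base interval and uniform fluctuation magnitude are assumptions, not the paper's parameters:

```python
import random

def probe_schedule(n, base, fluctuation, seed=0):
    """Send times for n probes: each interval is the fixed base interval
    plus a uniform fluctuation in [-f, +f] (with f < base), which breaks
    synchronization with periodic network behavior (phase lock)."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n):
        t += base + rng.uniform(-fluctuation, fluctuation)
        times.append(t)
    return times

times = probe_schedule(1000, base=1.0, fluctuation=0.3)
gaps = [b - a for a, b in zip(times, times[1:])]
mean_gap = sum(gaps) / len(gaps)
print(0.9 < mean_gap < 1.1)  # mean probe rate is preserved
```

The mean probing rate stays at one probe per base interval, so estimator bias from rate changes is avoided, while the randomized offsets prevent the probe stream from locking onto periodic fluctuations in the target network.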
【Keywords】: Internet; measurement; quality of service; Internet; active measurement; fluctuation magnitude; optimal probing policy; phase-lock phenomenon; probe interval; probe packet interval; Accuracy; Delay; Internet; Loss measurement; Probes; Quality of service; Synchronization
【Paper Link】 【Pages】:166-170
【Authors】: Mikkel Thorup
【Abstract】: We consider portable software implementations of hash tables with timeouts. The context is a high-volume stream of keyed items. When a new item arrives, we want to know if it has been seen recently, in terms of a fixed lifespan. This problem has numerous applications as a front-end for Internet traffic processing, where the key could be a selection of fields from the header of a packet, e.g., tracing packets through networks, aggregating netflows from routers, and helping firewalls reuse decisions from recent packets with the same key. We propose time-reversed linear probing tables to deal with timeouts. In experiments, compared with tumbling windows and lazy deletions, our time-reversed tables typically gain at least 25% in speed while using only half the space. The cost per item approaches that of a single random memory access. Our new scheme is cleaner to implement than previous schemes, and also more versatile: for example, it is trivial to allow different items to have different lifespans, something not possible with tumbling windows. Together with the improved time and space efficiency, this makes time-reversed linear probing a canonical first choice for hash tables with timeouts.
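A much-simplified sketch of a linear-probing table with timeouts, in which expired slots are reclaimed in place. Thorup's actual time-reversed scheme additionally maintains probe-sequence invariants under expiry, which this toy version ignores:

```python
import time

class TimeoutTable:
    """Linear-probing hash table where an entry expires after `lifespan`
    seconds; expired slots are treated as free and reused in place.
    (A simplified sketch, not Thorup's time-reversed scheme.)"""

    def __init__(self, capacity, lifespan):
        self.cap, self.lifespan = capacity, lifespan
        self.keys = [None] * capacity
        self.stamps = [0.0] * capacity

    def seen_recently(self, key, now=None):
        """Return True if key was seen within lifespan; record it either way."""
        now = time.monotonic() if now is None else now
        i = hash(key) % self.cap
        for _ in range(self.cap):
            k = self.keys[i]
            fresh = k is not None and now - self.stamps[i] < self.lifespan
            if k == key and fresh:
                self.stamps[i] = now       # refresh the entry
                return True
            if k is None or not fresh:     # empty or expired slot: claim it
                self.keys[i], self.stamps[i] = key, now
                return False
            i = (i + 1) % self.cap         # linear probing
        raise RuntimeError("table full")

t = TimeoutTable(capacity=8, lifespan=5.0)
print(t.seen_recently("flowA", now=0.0))   # False: first sighting
print(t.seen_recently("flowA", now=3.0))   # True: within lifespan
print(t.seen_recently("flowA", now=9.0))   # False: entry expired
```

Reclaiming expired slots in place is what lets the table live within a fixed memory budget without a separate cleanup pass; per-key lifespans fall out naturally by storing a lifespan next to each timestamp.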
【Keywords】: Internet; cryptography; file organisation; random-access storage; telecommunication network routing; Internet traffic processing; front-end; hash tables; lazy deletions; portable software implementations; random memory access; routing; time-reversed linear probing; timeouts; Arrays; Hardware; Internet; Monitoring; Probes; Software; Switches
【Paper Link】 【Pages】:171-175
【Authors】: Brian Eriksson ; Paul Barford ; Joel Sommers ; Robert D. Nowak
【Abstract】: Despite many efforts over the past decade, the ability to generate topological maps of the Internet at the router level accurately and in a timely fashion remains elusive. Mapping campaigns commonly involve traceroute-like probing that is usually non-adaptive and incomplete, thus revealing only a portion of the underlying topology. In this paper we demonstrate that standard probing methods yield datasets that implicitly contain information about much more than just the directly observed links and routers. Each probe, in addition to the underlying domain knowledge, returns information that places constraints on the underlying topology, and by integrating a large number of such constraints it is possible to accurately infer the existence of unseen components of the Internet. We describe DomainImpute, a novel data analysis methodology designed to accurately infer the unseen hop-count distances between observed routers. We use both synthetic and a large empirical dataset to validate the proposed methods. On our empirical real-world dataset, we show that our methods can estimate over 55% of the unseen distances between observed routers to within a one-hop error.
【Keywords】: Internet; data analysis; telecommunication network routing; telecommunication network topology; DomainImpute; Internet; data analysis methodology; mapping campaigns; router; topological maps; traceroute-like probing; Approximation error; Estimation; Extraterrestrial measurements; Internet; Network topology; Probes; Topology
【Paper Link】 【Pages】:176-180
【Authors】: Kaustubh Nyalkalkar ; Sushant Sinha ; Michael Bailey ; Farnam Jahanian
【Abstract】: Modern networks are complex, and hence network operators often rely on automation to assist in assuring the security, availability, and performance of these networks. At the core of many of these systems are general-purpose anomaly-detection algorithms that seek to identify normal behavior and detect deviations. While these algorithms are numerous and varied, two broad categories have emerged as leading approaches to this problem: those based on spatial correlation and those based on temporal analysis. In this paper, we compare one promising approach from each category, namely entropy-based PCA and HHH-based wavelets.
【Keywords】: principal component analysis; telecommunication network management; telecommunication network reliability; telecommunication security; HHH-based wavelet; entropy-based PCA; general-purpose anomaly detection algorithm; modern network automation; modern networksecurity; network operators; network-based anomaly detection method; temporal analysis; Accuracy; Algorithm design and analysis; Correlation; Detectors; Measurement; Principal component analysis; Time series analysis
【Paper Link】 【Pages】:181-185
【Authors】: Jing Gao ; Wei Fan ; Deepak S. Turaga ; Olivier Verscheure ; Xiaoqiao Meng ; Lu Su ; Jiawei Han
【Abstract】: Network operators are continuously confronted with malicious events, such as port scans, denial-of-service attacks, and spreading of worms. Due to the detrimental effects caused by these anomalies, it is critical to detect them promptly and effectively. Numerous software tools, algorithms, and rules have been developed to conduct anomaly detection over traffic data. However, each of them captures only a limited description of the anomalies, and thus suffers from high false positive/false negative rates. In contrast, the combination of multiple atomic detectors can provide a more powerful anomaly-capturing capability when the base detectors complement each other. In this paper, we propose to infer a discriminative model by reaching consensus among multiple atomic anomaly detectors in an unsupervised manner, when there are very few or even no known anomalous events for training. The proposed algorithm produces a per-event, non-trivial weighted combination of the atomic detectors by iteratively maximizing the probabilistic consensus among the outputs of the base detectors applied to different traffic records. The resulting model is different from, and not obtainable by, Bayesian model averaging or weighted voting. Through experimental results on three network anomaly detection datasets, we show that the combined detector improves over the base detectors by 10% to 20% in accuracy.
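The iterative consensus idea can be illustrated with a simple agreement-based reweighting loop. This is a stand-in for intuition only, much cruder than the paper's probabilistic consensus-maximization model: detectors whose scores deviate most from the weighted consensus receive lower weight on the next iteration.

```python
def consensus_combine(scores, iters=10):
    """Unsupervised detector weighting: alternately (1) set the consensus
    to the weighted mean of detector scores and (2) reweight each detector
    inversely to its squared deviation from that consensus."""
    n_det, n_evt = len(scores), len(scores[0])
    w = [1.0 / n_det] * n_det
    for _ in range(iters):
        consensus = [sum(w[d] * scores[d][e] for d in range(n_det))
                     for e in range(n_evt)]
        err = [sum((scores[d][e] - consensus[e]) ** 2 for e in range(n_evt))
               for d in range(n_det)]
        inv = [1.0 / (1e-9 + e) for e in err]
        total = sum(inv)
        w = [x / total for x in inv]
    return w, consensus

# Three detectors score four traffic records in [0, 1]; detector 2 is noisy
scores = [
    [0.9, 0.1, 0.8, 0.2],
    [0.8, 0.2, 0.9, 0.1],
    [0.1, 0.9, 0.5, 0.9],   # disagrees with the other two
]
w, _ = consensus_combine(scores)
print(w[2] < w[0] and w[2] < w[1])  # the outlier detector is down-weighted
```

The two agreeing detectors dominate the consensus, so the combined score tracks their shared signal without any labeled anomalies for training.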
【Keywords】: invasive software; telecommunication security; telecommunication traffic; Bayesian model averaging; consensus extraction; denial-of-service attacks; heterogeneous detectors; malicious events; multiple atomic anomaly detectors; network anomaly detection datasets; network operators; network traffic anomaly detection; nontrivial weighted combination; port scans; probabilistic consensus; weighted voting; worm spreading; Accuracy; Clustering algorithms; Correlation; Detectors; Intrusion detection; Optimization; Prediction algorithms
【Paper Link】 【Pages】:186-190
【Authors】: Zhen Ling ; Xinwen Fu ; Weijia Jia ; Wei Yu ; Dong Xuan
【Abstract】: Anonymizer is a proprietary anonymous communication system. We discovered its architecture and found that the size of web packets through Anonymizer is highly dynamic at the client. Motivated by this finding, we investigated a novel packet-size-based covert channel attack against the anonymity service. In the attack, an attacker manipulates the web packet size between the web server and Anonymizer and embeds signal symbols into the target traffic. An accomplice at the user side can sniff the traffic and recognize the secret signal. We developed intelligent and robust algorithms to cope with the packet size distortion incurred by Anonymizer and the Internet, along with several techniques to make the attack harder to detect: (i) we pick the right packets of web objects to manipulate in order to preserve the regularity of the TCP packet size dynamics; (ii) we adopt a Monte Carlo sampling technique to preserve the distribution of the web packet size despite manipulation. We have implemented the attack over Anonymizer and conducted extensive analysis and experimental evaluations. We observe that the attack is highly efficient and requires only tens of packets to compromise anonymous web surfing. The experimental results are consistent with our theoretical analysis.
【Keywords】: Monte Carlo methods; telecommunication channels; telecommunication security; transport protocols; Monte Carlo sampling technique; anonymizer; anonymous communication system; packet size based covert channel attack; web packet size despite manipulation; Spread spectrum communication; Anonymizer; Covert Channel; TCP dynamics
【Paper Link】 【Pages】:191-195
【Authors】: Anh Le ; Athina Markopoulou ; Michalis Faloutsos
【Abstract】: Phishing is an increasingly sophisticated method to steal personal user information using sites that pretend to be legitimate. In this paper, we take the following steps to identify phishing URLs. First, we carefully select lexical features of the URLs that are resistant to obfuscation techniques used by attackers. Second, we evaluate the classification accuracy when using only lexical features, both automatically and hand-selected, vs. when using additional features. We show that lexical features are sufficient for all practical purposes. Third, we thoroughly compare several classification algorithms, and we propose to use an online method (AROW) that is able to overcome noisy training data. Based on the insights gained from our analysis, we propose PhishDef, a phishing detection system that uses only URL names and combines the above three elements. PhishDef is a highly accurate method (when compared to state-of-the-art approaches over real datasets), lightweight (thus appropriate for online and client-side deployment), proactive (based on online classification rather than blacklists), and resilient to training data inaccuracies (thus enabling the use of large noisy training data).
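A few obfuscation-resistant lexical features of the kind described above can be sketched as follows. This is an illustrative subset with assumed feature names, not PhishDef's actual feature set or classifier:

```python
import re

def lexical_features(url):
    """Extract simple lexical features from a URL string alone
    (no page content or blacklist lookups needed)."""
    host = re.sub(r"^https?://", "", url).split("/")[0]
    path = url.split(host, 1)[-1]
    return {
        "url_len": len(url),
        "host_len": len(host),
        "num_dots": host.count("."),
        "num_hyphens": host.count("-"),
        "num_digits": sum(c.isdigit() for c in url),
        "host_is_ip": bool(re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host)),
        "has_at": "@" in url,
        "path_depth": path.count("/"),
    }

# A brand name buried in the path of an IP-hosted URL is a classic phishing cue
f = lexical_features("http://192.168.0.1/paypal.com/login")
print(f["host_is_ip"], f["num_dots"])
```

Feature vectors like this one would then feed an online classifier such as AROW, which can tolerate the label noise inherent in large phishing feeds.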
【Keywords】: Web sites; computer crime; AROW; PhishDef; URL names; classification accuracy; lexical features; obfuscation technique resilience; online classification; online method; personal user information stealing; phishing detection system; Accuracy; Classification algorithms; Error analysis; Feature extraction; Noise; Prediction algorithms; Servers
【Paper Link】 【Pages】:196-200
【Authors】: Yeim-Kuan Chang ; Chun-I Lee ; Cheng-Chien Su
【Abstract】: Packet classification has wide applications, such as unauthorized-access prevention in firewalls and Quality of Service support in Internet routers. The classifier containing pre-defined rules is processed by the router to find the best matching rule for each incoming packet and take the appropriate action. Although many software-based solutions have been proposed, the high search speed required for Internet backbone routers is not easy to achieve. To accelerate packet classification, the state-of-the-art ternary content-addressable memory (TCAM) is a promising solution. In this paper, we propose an efficient multi-field range encoding scheme to solve the problem of storing ranges in TCAM and to decrease TCAM usage. Existing range encoding schemes are usually single-field schemes that perform range encoding in each range field independently. Our performance experiments on real-life classifiers show that the proposed multi-field range encoding scheme uses less TCAM memory than existing single-field schemes. Compared with notable single-field encoding schemes, the proposed scheme uses 12% to 33% of the TCAM memory needed by DRIPE or SRGE and 56% to 86% of that needed by PPC for classifiers of up to 10k rules.
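The underlying difficulty of storing ranges in TCAM comes from prefix expansion: a single port range can require several ternary entries. The sketch below shows the standard minimal aligned-block expansion that range-encoding schemes such as the one proposed here aim to improve on:

```python
def range_to_prefixes(lo, hi, width=16):
    """Minimal prefix expansion of the integer range [lo, hi] into
    (value, prefix-length) pairs on `width`-bit fields -- the baseline
    TCAM entry count that range encoding tries to reduce."""
    out = []
    while lo <= hi:
        # Largest power-of-two block that is aligned at lo and fits in [lo, hi]
        size = lo & -lo if lo else 1 << width
        while size > hi - lo + 1:
            size >>= 1
        bits = size.bit_length() - 1
        out.append((lo, width - bits))
        lo += size
    return out

# The 3-bit range [1, 6] needs 4 ternary prefixes: 001, 01*, 10*, 110
print(range_to_prefixes(1, 6, width=3))
```

A w-bit range can require up to 2w-2 prefixes in the worst case, which is why classifiers with many range fields blow up in TCAM; encoding ranges (per field, or jointly across fields as in this paper) trades a small lookup-key transformation for far fewer entries.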
【Keywords】: Internet; computer network security; content-addressable storage; encoding; telecommunication network routing; Internet backbone router; TCAM; firewalls; multifield range encoding; packet classification; quality of service; ternary content addressable memory; unauthorized access prevention; Erbium; Random access memory; Packet classification; TCAM; multi-field range encoding
【Paper Link】 【Pages】:201-205
【Authors】: Zixia Huang ; Chao Mei ; Li Erran Li ; Thomas Woo
【Abstract】: Existing media providers such as YouTube and Hulu deliver videos by turning them into progressive downloads. This can result in frequent video freezes under varying network dynamics. In this paper, we present CloudStream: a cloud-based video proxy that can deliver high-quality streaming video by transcoding the original video in real time to a scalable codec that allows streaming adaptation to network dynamics. The key is a multi-level transcoding parallelization framework with two mapping options (Hash-based Mapping and Lateness-first Mapping) that optimize transcoding speed and reduce transcoding jitter while preserving the encoded video quality. We evaluate the performance of CloudStream on our campus cloud testbed.
【Keywords】: media streaming; video coding; CloudStream; Hash-based mapping; Hulu; YouTube; cloud-based SVC proxy; cloud-based video proxy; high-quality streaming videos; lateness-first mapping; multilevel transcoding parallelization framework; network dynamics; video transcoding; Jitter; Parallel processing; Static VAr compensators; Transcoding; Upper bound; Videos
【Paper Link】 【Pages】:206-210
【Authors】: Upendra Sharma ; Prashant J. Shenoy ; Sambit Sahu ; Anees Shaikh
【Abstract】: In this paper we present Kingfisher, a cost-aware system that provides efficient support for elasticity in the cloud by (i) leveraging multiple mechanisms to reduce the time to transition to new configurations, and (ii) optimizing the selection of a virtual server configuration that minimizes the cost. We have implemented a prototype of Kingfisher and have evaluated its efficacy on a laboratory cloud platform. Our experiments with varying application workloads demonstrate that Kingfisher is able to (i) decrease the cost of virtual server resources by as much as 24% compared to the current cost-unaware approach, (ii) reduce by an order of magnitude the time to transition to a new configuration through multiple elasticity mechanisms in the cloud, and (iii) illustrate the opportunity for further design alternatives that trade off the cost of server resources against the time required to scale the application.
【Keywords】: activity based costing; cloud computing; Kingfisher cost-aware system; cloud computing; elasticity mechanism; virtual server configuration; Cloud computing; Elasticity; Engines; Hardware; Pricing; Random access memory; Servers
【Paper Link】 【Pages】:211-215
【Authors】: Yousuk Seung ; Terry Lam ; Li Erran Li ; Thomas Woo
【Abstract】: This paper proposes and studies a system, called CloudFlex, which transparently taps cloud resources to serve application requests that exceed capacity of internal infrastructure. CloudFlex operates as a feedback control system with two key interacting components: load balancer and controller. We focus on operational optimality and stability of the system, highlight the tradeoffs between cost and responsiveness, and address important design considerations such as choke point detection that are critical in avoiding pathological system operations. For evaluation, we develop a prototype of CloudFlex on our testbed comprising servers of our enterprise data center and Amazon EC2 instances.
【Keywords】: business data processing; cloud computing; feedback; Amazon EC2; CloudFlex system; cloud computing; controller component; enterprise application; enterprise data center; feedback control system; load balancer component; Databases; World Wide Web
【Paper Link】 【Pages】:216-220
【Authors】: Xavier León ; Leandro Navarro
【Abstract】: Energy-related costs are becoming one of the largest contributors to the overall cost of operating a data center, whereas the degree of data center utilization continues to be very low. Energy-aware dynamic provisioning of resources based on the consolidation of existing application instances can simultaneously address the under-utilization of servers while greatly reducing energy costs. Thus, energy costs cannot be treated separately from resource provisioning and allocation. However, current scheduling techniques based on market mechanisms do not specifically deal with such a scenario. In this paper we model the problem of minimizing the energy consumption of allocating resources to networked applications as a Stackelberg leadership game, and use it to find an upper bound on the achievable energy savings. The model is applied to a proportional-share mechanism in which resource providers can maximize their profit by minimizing energy costs while users can select resources that satisfy their minimum requirements. We show that our mechanism can determine the optimal set of resources to power on and off, even under realistic conditions with incomplete information and heterogeneous applications.
【Keywords】: computer centres; computer network management; game theory; resource allocation; Stackelberg leadership game; data center resources; data center utilization; energy costs; energy saving; energy-aware dynamic resource provision; proportional-share mechanism; resource allocation; Computational modeling; Data models; Energy consumption; Games; Resource management; Servers; Virtual machining
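The consolidation step that underlies such energy-aware provisioning can be sketched with a standard first-fit-decreasing packing: instances are packed onto as few servers as possible so the remainder can be powered off. This is a generic heuristic with made-up demands, not the paper's Stackelberg mechanism:

```python
def consolidate(demands, capacity=1.0):
    """Assign fractional server demands to servers by first-fit decreasing."""
    servers = []  # remaining capacity of each powered-on server
    for d in sorted(demands, reverse=True):
        for i, free in enumerate(servers):
            if d <= free + 1e-12:   # first server with room wins
                servers[i] = free - d
                break
        else:
            servers.append(capacity - d)  # power on a new server
    return len(servers)

# Ten instances at 20% load fit on 2 servers instead of 10:
# consolidate([0.2] * 10) -> 2
```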
【Paper Link】 【Pages】:221-225
【Authors】: Xin Sun ; Sanjay G. Rao
【Abstract】: Recent works have shown the benefits of a systematic approach to designing enterprise networks. However, these works are limited to the design of greenfield (newly deployed) networks, or to the incremental evolution of existing networks without altering prior design decisions. In this paper, we focus on redesigning existing networks, allowing for changes to existing decisions. Such redesign (migration) may be desirable from the perspective of improved network performance or lower complexity. However, the key challenge is that the costs of redesign may be high due to the presence of complex dependencies between network configurations. We consider these issues in the context of virtual local area networks (VLANs), an important area of enterprise network design. We make three contributions. First, we present a model to capture VLAN redesign costs. Such costs may arise from the need to reconfigure policies (e.g., security policies) to reflect the changes in VLAN design and ensure the continued correctness of the network. Second, we present a framework that enables operators to systematically determine the best strategies to redesign VLANs so that the desired performance goals are achieved while the costs of redesign are minimized. Finally, we demonstrate the effectiveness of our approach using data obtained from a large-scale campus network.
【Keywords】: business communication; computer network management; cost-benefit analysis; local area networks; telecommunication network planning; cost-benefit framework; enterprise network design; judicious enterprise network redesign; large scale campus network; network configuration; virtual local area networks; Algorithm design and analysis; Complexity theory; IP networks; Merging; Routing; Security; Systematics
【Paper Link】 【Pages】:226-230
【Authors】: Chuan Han ; Yaling Yang
【Abstract】: Information propagation speed (IPS) in a multihop cognitive radio network (CRN) is an important factor that affects the network's delay performance and needs to be considered in network planning. The impact of primary user (PU) activities on IPS makes the problem of analyzing IPS in multihop CRNs very challenging, and it has remained unsolved in the existing literature. In this paper, we fill this technical void. We establish an IPS model for multihop CRNs, and compute the maximum network IPS over a network topology on an infinite plane. We reveal that the maximum network IPS is determined by the PU activity level and the placement of secondary user relay nodes. We design optimal relay placement strategies in CRNs to maximize the network IPS under different PU activity levels. The correctness of our analytical results is validated by simulations and numerical experiments.
【Keywords】: cognitive radio; telecommunication network planning; CRN; IPS model; information propagation speed; multihop cognitive radio networks; network planning; secondary user relay nodes; Analytical models; Delay; Numerical models; Radio transmitters; Receivers; Relays; Sensors
【Paper Link】 【Pages】:231-235
【Authors】: Ying Zhu ; Minsu Huang ; Siyuan Chen ; Yu Wang
【Abstract】: Cooperative communication (CC) allows multiple nodes to simultaneously transmit the same packet to the receiver so that the combined signal at the receiver can be correctly decoded. Since CC can reduce the transmission power and extend the transmission coverage, it has been considered in topology control protocols. However, prior research on topology control with CC focuses only on maintaining network connectivity and minimizing the transmission power of each node, while ignoring the energy efficiency of paths in the constructed topologies. This may cause inefficient routes and hurt the overall network performance. In this paper, to address this problem, we introduce a new topology control problem: the energy-efficient topology control problem with cooperative communication, and propose two topology control algorithms to build cooperative energy spanners in which the energy efficiency of individual paths is guaranteed. Simulation results confirm the good performance of the proposed algorithms.
【Keywords】: ad hoc networks; cooperative communication; telecommunication network topology; cooperative ad hoc networks; cooperative communication; cooperative energy spanners; energy-efficient topology control; topology control protocols; Bridges; Energy consumption; Mobile ad hoc networks; Network topology; Topology; Wireless networks
【Paper Link】 【Pages】:236-240
【Authors】: Kaigui Bian ; Jung-Min Park
【Abstract】: In Cognitive Radio (CR) networks, establishing a link between a pair of communicating nodes requires that their radios are able to “rendezvous” on a common channel (a.k.a. a rendezvous channel). When unlicensed (secondary) users opportunistically share spectrum with licensed (primary or incumbent) users, a given rendezvous channel may become unavailable due to the appearance of licensed user signals, which makes rendezvous impossible. Ideally, any node pair should be able to rendezvous over every available channel to minimize the possibility of such rendezvous failures. Channel hopping (CH) protocols have been proposed previously for establishing pairwise rendezvous. Some of them enable pairwise rendezvous over all channels but require global clock synchronization, which is very difficult to achieve in decentralized networks. In this paper, we present a systematic approach, called asynchronous channel hopping (ACH), for designing CH-based rendezvous protocols for decentralized CR networks. The resulting protocols are resistant to rendezvous failures caused by the appearance of primary user signals and do not require clock synchronization. We propose an optimal ACH design that maximizes the rendezvous probability between any pair of nodes, and show its rendezvous performance via simulation results.
【Keywords】: cognitive radio; probability; protocols; radio spectrum management; wireless channels; asynchronous channel hopping; channel hopping protocol; common channel; decentralized cognitive radio network; decentralized network; global clock synchronization; pairwise rendezvous; rendezvous channel; rendezvous probability; rendezvous protocol; spectrum sharing; Arrays; Clocks; Indexes; Protocols; Radio transmitters; Receivers; Synchronization
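The rendezvous guarantee that asynchronous channel hopping aims for can be illustrated with a classic toy construction (not the paper's ACH design): one node cycles through all N channels, one per slot, while the other dwells N slots per channel, so the pair meets on every channel within N^2 slots regardless of their clock offset:

```python
N = 5  # number of channels (illustrative)

def hop(t):   # cycles through all N channels every N slots
    return t % N

def stay(t):  # dwells N slots on each channel, then moves on
    return (t // N) % N

def rendezvous_channels(offset, frame=N * N):
    """Channels on which the two nodes overlap within one N^2-slot frame,
    when the staying node's clock is shifted by `offset` slots."""
    return {hop(t) for t in range(frame) if hop(t) == stay(t + offset)}

# For any integer clock offset, the pair rendezvouses on all N channels,
# which is exactly the property that avoids single-channel rendezvous failures.
```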
【Paper Link】 【Pages】:241-245
【Authors】: Chengzhi Li ; Huaiyu Dai
【Abstract】: Due to the emergence of Cognitive Radio, a special type of heterogeneous network has recently attracted increasing interest, in which a secondary network composed of cognitive users opportunistically shares the same resources with a primary network of licensed users. Network throughput in this setting is of essential importance. Some pioneering works in this area showed that this type of heterogeneous network performs as well as two stand-alone networks, under the dense network model where the size of a network grows with the node density in a fixed area. A key assumption behind this conclusion is that the density of the secondary network is higher than that of the primary one in the order sense, which essentially decouples the two overlaid networks, as the secondary network dominates asymptotically. In this paper we investigate this problem under the weaker condition that the dimensions of the two overlaid networks are of the same order, and consider the extended network model where the size of a network scales with the area while the node density remains fixed. Surprisingly, our analysis shows that this weaker (and arguably more practical) condition does not degrade the throughput of either network in terms of the scaling law. Our result further reveals the potential of CR technology in real applications.
【Keywords】: ad hoc networks; cognitive radio; ad hoc networks; cognitive radio technology; cognitive users; heterogeneous networks; licensed users; network throughput; node density; overlaid networks; primary network; scaling law; secondary network; throughput scaling; Interference; Protocols; Road transportation; Routing; Slabs; Throughput; Transmitters
【Paper Link】 【Pages】:246-250
【Authors】: Lei Sun ; Wenye Wang
【Abstract】: Dissemination latency and speed are central to the applications of cognitive radio networks, which have become an important component of current communication infrastructure. In this paper, we investigate the distributions and limits of information dissemination latency and speed in a cognitive radio network where licensed users (primary users) are static and cognitive radio users (secondary users) are mobile. We show that the dissemination latency depends on the stationary spatial distribution and mobility capability α (characterizing the region that a mobile secondary user can reach) of secondary users. Given any stationary spatial distribution, we find that there exists a critical value on α, below which the latency and speed are heavy-tailed and above which the right tails of their distribution are bounded by Gamma random variables. We further show that as the network grows to infinity, the latency asymptotically scales linearly with the “distance” (characterized by transmission hops or Euclidean distance) between the source and the destination. Our results are validated through simulations.
【Keywords】: cognitive radio; mobile radio; radio networks; Euclidean distance; cognitive radio users; communication infrastructure; gamma random variables; information dissemination latency; licensed users; mobile cognitive radio networks; primary users; secondary users; stationary spatial distribution; transmission hops; Cognitive radio; Euclidean distance; Markov processes; Mobile communication; Mobile computing; Random variables; Wireless networks
【Paper Link】 【Pages】:251-255
【Authors】: Alessandro Mei ; Giacomo Morabito ; Paolo Santi ; Julinda Stefa
【Abstract】: In this paper we describe SANE, the first forwarding mechanism that combines the advantages of both social-aware and stateless approaches in pocket switched network routing. SANE is based on the observation, which we validate on real-world traces, that individuals with similar interests tend to meet more often. In our approach, individuals (network members) are characterized by their interest profile, a compact representation of their interests. Through extensive experiments, we show the superiority of social-aware, stateless forwarding over existing stateful, social-aware and stateless, social-oblivious forwarding. An important byproduct of our interest-based approach is that it easily enables innovative routing primitives, such as interest-casting. An interest-casting protocol is also described, and extensively evaluated through experiments based on both real-world and synthetic mobility traces.
【Keywords】: packet switching; protocols; telecommunication network routing; SANE; forwarding mechanism; innovative routing primitive; interest profile; interest-casting protocol; network member; packet switched network routing; real-world mobility trace; real-world trace; social-aware stateless forwarding; social-oblivious forwarding; synthetic mobility trace; Correlation; Delay; Protocols; Relays; Routing; Unicast
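The interest-profile idea admits a compact sketch: profiles as vectors, with a stateless forwarding rule that hands a message to an encountered node whose profile is closer to the destination's. The profiles and the cosine metric below are illustrative assumptions, not necessarily the paper's exact similarity measure:

```python
import math

def cosine(u, v):
    """Cosine similarity between two interest-profile vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def should_forward(carrier, encountered, destination):
    """Stateless rule: relay only to nodes more similar to the destination."""
    return cosine(encountered, destination) > cosine(carrier, destination)

# Profiles over three interests (sports, music, tech): a tech-oriented
# encounter is a better relay toward a tech-oriented destination.
# should_forward([1, 0, 0], [0.2, 0.1, 0.9], [0, 0, 1]) -> True
```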
【Paper Link】 【Pages】:256-260
【Authors】: Yifan Li ; Ping Wang ; Dusit Niyato ; Weihua Zhuang
【Abstract】: Cooperative communication has attracted considerable attention in the last few years due to its advantage in mitigating channel fading. Despite much effort devoted to theoretical analysis of the performance gain, cooperative relay selection, one of the fundamental issues in cooperative communications, remains an open problem. In this paper, we address the tradeoff between the performance improvement and the corresponding cost of cooperative communication, focusing on relay selection. We consider a challenging scenario that takes user mobility into consideration. Based on the user mobility pattern, we propose a dynamic relay selection scheme that minimizes the long-term average cost while satisfying the QoS requirement. To achieve maximal performance in relay selection, an optimization model based on the constrained Markov decision process (CMDP) is formulated and solved by applying the linear programming (LP) technique. Comprehensive analysis and comparison with several other relay selection schemes are presented. Through extensive simulations, our scheme shows high effectiveness and flexibility in balancing the cost and QoS performance.
【Keywords】: Markov processes; cooperative communication; decision theory; fading channels; linear programming; mobility management (mobile radio); quality of service; radio networks; QoS requirement; channel fading mitigation; constrained Markov decision process; cooperative communication; cooperative relay selection; dynamic relay selection scheme; linear programming technique; long-term average cost; mobile users; mobility pattern; optimization model; wireless relay networks; Throughput
【Paper Link】 【Pages】:261-265
【Authors】: Riccardo Masiero ; Giovanni Neglia
【Abstract】: In this paper we apply distributed sub-gradient methods to optimize global performance in Delay Tolerant Networks (DTNs). These methods rely on simple local node operations and consensus algorithms to average neighbours' information. Existing results for convergence to optimal solutions can only be applied to DTNs in the case of synchronous operation of the nodes and memory-less random meeting processes. In this paper we address both these issues. First, we prove convergence to the optimal solution for a more general class of mobility models. Second, we show that, under asynchronous operations, a direct application of the original sub-gradient method would lead to suboptimal solutions and we propose some adjustments to solve this problem. Further, at the end of the paper, we illustrate a possible DTN application to demonstrate the validity of this optimization approach.
【Keywords】: gradient methods; random processes; telecommunication networks; DTN; consensus algorithms; delay tolerant networks; distributed subgradient methods; memoryless random meeting process; mobility models; optimization approach; Bandwidth; Convergence; Convex functions; Markov processes; Mobile computing; Optimization; Resource management; consensus; delay tolerant networks; distributed optimization; sub-gradient method
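The combination of consensus averaging with local subgradient steps can be sketched for the synchronous, fixed-topology case (the asynchronous adjustments are the paper's contribution and are not reproduced here). Each node holds a private cost |x - a_i|; the network jointly minimizes the sum, whose minimizer is the median of the a_i. The ring topology, values and step-size schedule are illustrative choices:

```python
def sign(v):
    return (v > 0) - (v < 0)  # a subgradient of |x - a| is sign(x - a)

def distributed_subgradient(a, steps=3000):
    """Synchronous consensus + subgradient descent on sum_i |x - a_i|."""
    n = len(a)
    x = [0.0] * n  # local estimates
    for k in range(1, steps + 1):
        # Consensus step: average with ring neighbours (equal weights).
        avg = [(x[(i - 1) % n] + x[i] + x[(i + 1) % n]) / 3 for i in range(n)]
        # Subgradient step on the local cost, with a diminishing step size.
        x = [avg[i] - (1.0 / k) * sign(avg[i] - a[i]) for i in range(n)]
    return x

# With a = [1, 2, 3, 10, 100], all local estimates approach the median, 3,
# even though no node ever sees the other nodes' private values a_i.
```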
【Paper Link】 【Pages】:266-270
【Authors】: Suk-Bok Lee ; Starsky H. Y. Wong ; Kang-Won Lee ; Songwu Lu
【Abstract】: We study the challenging problem of strategic content placement in a dynamic MANET. Existing content placement techniques cannot cope with such network dynamics since they are designed for fixed networks. Opportunistic caching approaches are insufficient as they do not actively manage contents for certain goals. In this paper, we present a novel content management approach called LACMA, which leverages the location information available to mobile devices via GPS. The main idea of LACMA is to bind data to geographic location (as opposed to network nodes). This location-based strategy decouples the content placement problem from the changing network topology, and allows us to design an optimization framework even in a dynamic MANET environment. We present key components of LACMA used for strategic content placement and content-location binding (through proactive content push). We evaluate LACMA and compare its performance with existing caching schemes and show that LACMA considerably outperforms existing schemes over a wide range of scenarios.
【Keywords】: Global Positioning System; content management; mobile ad hoc networks; optimisation; telecommunication network topology; GPS; LACMA; content management; dynamic MANET; location information; location-aided content management architecture; mobile ad hoc network; mobile devices; opportunistic caching approach; optimization; strategic content placement; telecommunication network topology; Algorithm design and analysis; Content management; Maintenance engineering; Mobile ad hoc networks; Network topology; TV
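LACMA's content-to-location binding can be sketched as hashing a content identifier to a geographic grid cell; any node currently inside that cell is responsible for caching the content, regardless of topology changes. Grid size, cell size and hash choice below are assumptions for illustration, not LACMA's actual parameters:

```python
import hashlib

GRID = 10  # 10 x 10 cells over the deployment field (assumed)

def home_cell(content_id):
    """Deterministically map a content identifier to a grid cell."""
    h = int(hashlib.sha256(content_id.encode()).hexdigest(), 16)
    return (h % GRID, (h // GRID) % GRID)

def responsible(node_pos, content_id, cell_size=100.0):
    """A node caches the content iff its GPS position falls in the home cell."""
    cx, cy = home_cell(content_id)
    node_cell = (int(node_pos[0] // cell_size), int(node_pos[1] // cell_size))
    return node_cell == (cx, cy)
```

Because the binding depends only on the content identifier and geography, nodes that move out of a cell simply stop being responsible, which is the decoupling from network topology the abstract describes.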
【Paper Link】 【Pages】:271-275
【Authors】: Chi-Kin Chau ; Richard J. Gibbens ; Robert E. Hancock ; Donald F. Towsley
【Abstract】: One of the challenges of wireless networks is to provide a reliable end-to-end path between two end hosts in the face of link and node outages. These can occur due to fluctuations in channel quality, node movement, or node failure. One mechanism that has been proposed is based on multipath routing, the idea being to establish two or more paths between the end hosts so that, with high probability, they always have a path between them in the face of outages. This naturally raises the question of how to discover these paths in an unknown, random wireless network to enable robust multipath routing. In order to answer this question, we model a random wireless network as a 2D spatial Poisson process. Based on the results on percolation highways by Franceschetti et al., we present accurate conditions that enable robust multipath routing. If the number of hops of a path between the end hosts is n, then there exists a path between them in a strip of width proportional to log n. More precisely, there exist C log n disjoint paths in a strip of width a(C, p) · log n, where p is the probability that characterizes the availability of an individual wireless communication link. We derive tight bounds for the function a(C, p). This provides a useful guideline for the establishment of multiple paths in a real wireless network, namely that the width should grow logarithmically in the number of hops on the path between the hosts.
【Keywords】: radio networks; stochastic processes; telecommunication network routing; 2D spatial Poisson process; channel quality; node failure; node movement; random wireless network; robust multipath routing; wireless communication link; Approximation methods; Lattices; Robustness; Routing; Strips; Upper bound; Wireless networks
【Paper Link】 【Pages】:276-280
【Authors】: Tao Chen ; Zheng Yang ; Yunhao Liu ; Deke Guo ; Xueshan Luo
【Abstract】: Localization is an enabling technique for many sensor and ad-hoc network applications. Real-world deployments demonstrate that, in practice, a network is not always entirely localizable, leaving a certain number of theoretically non-localizable nodes. Previous studies mainly focus on how to tune network settings to make a network localizable; however, they are considered to be coarse-grained, since they equally deal with localizable and non-localizable nodes. Ignoring localizability induces unnecessary adjustments and accompanying costs. In this study, we propose a fine-grained approach, Localizability-aided Localization (LAL), which basically consists of three phases: node localizability testing, component tree construction, and network adjustment. LAL triggers a single round adjustment, after which some popular localization methods can be successfully carried out. Being aware of node localizability, all adjustments made by LAL are purposefully selected. Simulation results show that LAL effectively guides the adjustment.
【Keywords】: ad hoc networks; wireless sensor networks; ad-hoc networks; localizability-aided localization; non-localizable sensor networks; Ad hoc networks; Distance measurement; Educational institutions; Sea measurements; Testing; Wireless communication; Wireless sensor networks
【Paper Link】 【Pages】:281-285
【Authors】: XuFei Mao ; Shaojie Tang ; XiaoHua Xu ; Xiang-Yang Li ; Huadong Ma
【Abstract】: Target tracking is a major application of wireless sensor networks (WSNs) and has been studied widely [4], [10]. In this work, we study the indoor passive tracking problem using WSNs, in which no equipment is carried by the target and the tracking procedure is passive. We propose to use light to track a moving target in WSNs. To the best of our knowledge, this is the first work that tracks a moving object using light sensors and general light sources. We design a novel probabilistic protocol (system), iLight, to track a moving target, together with several efficient methods to compute the target's movement patterns (such as height) at the same time. We implement and evaluate our tracking system iLight on a testbed consisting of 40 sensor nodes, 10 general light sources and one base station. Through extensive experiments, we show that iLight can track a moving target efficiently and accurately.
【Keywords】: probability; protocols; target tracking; wireless sensor networks; iLight; indoor device-free passive tracking; light sensor; probabilistic protocol; target tracking; wireless sensor network; Light sources; Monitoring; Sensors; Target tracking; Trajectory; Wireless communication; Wireless sensor networks
【Paper Link】 【Pages】:286-290
【Authors】: Wenbo Zhao ; Xueyan Tang
【Abstract】: The network traffic pattern of continuous sensor data collection often changes constantly over time due to the exploitation of temporal and spatial data correlations as well as the nature of condition-based monitoring applications. This paper develops a novel TDMA schedule that is capable of efficiently collecting sensor data for any network traffic pattern and is thus well suited to continuous data collection with dynamic traffic patterns. Following this schedule, the energy consumed by sensor nodes for any traffic pattern is very close to the minimum required by their workloads given in the traffic pattern. The schedule also allows the base station to conclude data collection as early as possible according to the traffic load, thereby reducing the latency of data collection. Experimental results using real-world data traces show that, compared with existing schedules that are targeted on a fixed traffic pattern, our proposed schedule significantly improves the energy efficiency and time efficiency of sensor data collection with dynamic traffic patterns.
【Keywords】: scheduling; telecommunication traffic; wireless sensor networks; condition-based monitoring; continuous sensor data collection; data collection scheduling; dynamic traffic patterns; fixed traffic pattern; network traffic pattern; spatial data correlations; temporal data correlations; wireless sensor networks; Base stations; Dynamic scheduling; Radiation detectors; Schedules; Scheduling algorithm; Time division multiple access; Wireless sensor networks
【Paper Link】 【Pages】:291-295
【Authors】: Ming Liu ; Haigang Gong ; Yonggang Wen ; Guihai Chen ; Jiannong Cao
【Abstract】: Disasters (e.g., earthquakes, flooding, tornadoes, oil spills and mining accidents) often result in tremendous cost to our society. Previously, wireless sensor networks (WSNs) have been proposed and deployed to provide information for decision making in post-disaster relief operations. The existing WSN solutions for post-disaster operations normally assume that the deployed sensor network can tolerate the damage caused by disasters and maintain its connectivity and coverage, even though a significant portion of nodes have been physically destroyed. In reality, however, this assumption is often invalid for disastrous events like large-scale earthquakes, limiting the relief capability of the existing solutions. Inspired by the “blackbox” technique in the aviation industry, we propose that preserving “the last snapshot” of the whole network and transferring those data to a safe zone would be the most logical approach to provide the information necessary for rescuing lives and controlling damage. In this paper, we introduce Data Evacuation (DE), an original idea that takes advantage of the survival time of the WSN, i.e., the gap between the time when the disaster hits and the time when the WSN is paralyzed, to transmit critical data to sensor nodes in the safe area. Mathematically, the problem can be formulated as a nonlinear programming problem with multiple minima. We propose a gradient-based DE algorithm (GRAD-DE) to verify our DE strategy. Numerical investigations reveal the effectiveness of the GRAD-DE algorithm.
【Keywords】: data analysis; disasters; gradient methods; linear programming; sensor placement; wireless sensor networks; WSN; data evacuation; gradient-based DE algorithm; linear programming; post disaster; sensor deployment; sensor nodes; wireless sensor networks; Fires; Floods; Volcanoes; data evacuation; post-disaster applications; sensor networks
【Paper Link】 【Pages】:296-300
【Authors】: Cheng Wang ; Shaojie Tang ; Xiang-Yang Li ; Changjun Jiang
【Abstract】: In this work, for a wireless sensor network (WSN) of n randomly placed sensors with node density λ ∈ [1, n], we study the tradeoffs between aggregation throughput and gathering efficiency. The gathering efficiency refers to the ratio of the number of sensors whose data have been gathered to the total number of sensors. Specifically, we design two efficient aggregation schemes, called the single-hop-length (SLH) scheme and the multiple-hop-length (MLH) scheme. By integrating these two schemes in a novel way, we theoretically prove that our protocol achieves the optimal tradeoffs, and derive the optimal aggregation throughput depending on a given threshold value (lower bound) on the gathering efficiency. In particular, we show that under the MLH scheme, for a practically important set of symmetric functions called perfectly compressible functions, including the mean, max, and various kinds of indicator functions, the data from Θ(n) sensors can be aggregated to the sink at a throughput of constant order Θ(1), implying that our MLH scheme is indeed scalable.
【Keywords】: protocols; sensor fusion; wireless sensor networks; SelectCast protocol; indicator function; multiple hop length scheme; perfectly compressible function; scalable data aggregation; single hop length scheme; wireless sensor networks; Blogs; Lattices; Power capacitors; TV; Wireless sensor networks; Data Aggregation; Percolation theory; Wireless sensor networks; aggregation capacity
【Paper Link】 【Pages】:301-305
【Authors】: Jin Zhao ; Xinya Zhang ; Xin Wang ; Yangdong Deng ; Xiaoming Fu
【Abstract】: As physical link speeds grow and the size of routing tables continues to increase, IP address lookup has become a challenging problem at routers. There are growing demands for achieving high-performance IP lookup cost-effectively. Existing approaches typically resort to specialized hardware, such as TCAMs. While these approaches can take advantage of hardware parallelism to achieve high-performance IP lookup, they also have the disadvantage of high cost. This paper investigates a new way to build a cost-effective IP lookup scheme using graphics processing units (GPUs). Our contribution here is to design a practical architecture for a high-performance IP lookup engine with a GPU, and to develop efficient algorithms for routing prefix update operations such as deletion, insertion, and modification. Leveraging the GPU's many-core parallelism, the proposed schemes address the challenges of designing IP lookup for GPU-based software routers. Our experimental results on real-world route traces show promising gains in IP lookup and update operations.
【Keywords】: IP networks; coprocessors; parallel processing; IP address lookup; IP lookup engine; TCAM; graphics processor units; hardware parallelism; high-performance IP lookup; many-core parallelism; physical link; routing prefix update operations; routing table; software routers; Engines; Graphics processing unit; IP networks; Instruction sets; Routing; Throughput
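GPU lookup engines favor flat, pointer-free tables so that thousands of threads can each resolve an address with a single memory read. Below is a toy version of that idea for 8-bit addresses: the prefix table is expanded into a 256-entry direct-index array. The prefixes and next hops are made up, and real designs (e.g., DIR-24-8-style tables) index on 24 bits rather than 8:

```python
PREFIXES = [("0", "A"), ("0101", "B"), ("11", "C")]  # (bit-prefix, next hop)

def build_table(prefixes, width=8):
    """Expand a longest-prefix-match table into a flat 2^width array."""
    table = [None] * (1 << width)
    # Write shorter prefixes first so longer ones overwrite them (leaf pushing).
    for bits, hop in sorted(prefixes, key=lambda p: len(p[0])):
        base = int(bits, 2) << (width - len(bits))
        for addr in range(base, base + (1 << (width - len(bits)))):
            table[addr] = hop
    return table

TABLE = build_table(PREFIXES)

def lookup(addr):
    return TABLE[addr]  # a single array read per packet

# lookup(0b01010111) -> "B"; lookup(0b11000000) -> "C"; lookup(0b10000000) -> None
```

The tradeoff is classic: lookup becomes trivially data-parallel, while updates must rewrite a range of table entries, which is why update algorithms are a contribution in their own right.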
【Paper Link】 【Pages】:306-310
【Authors】: Kun Huang ; Gaogang Xie ; Yanbiao Li ; Alex X. Liu
【Abstract】: This paper presents a novel offset encoding scheme for memory-efficient IP address lookup, called the Offset Encoded Trie (OET). Each node in the OET contains only a next-hop bitmap and an offset value, without child pointers or next-hop pointers. Each traversed node uses the next-hop bitmap and the offset value as two offsets to determine the location of the next node to be searched. The on-chip OET is searched to find the longest matching prefix, and the prefix is then used as a key to retrieve the corresponding next hop from an off-chip prefix hash table. Experiments on real IP forwarding tables show that the OET outperforms previous multi-bit trie schemes in terms of memory consumption. The OET thus enables far more effective use of on-chip memory for faster IP address lookup.
【Keywords】: IP networks; encoding; table lookup; tree data structures; child pointers; matching prefix; memory consumption; memory-efficient IP address lookup; next hop bitmap; next hop pointers; off-chip prefix hash table; offset addressing approach; offset encoded trie; offset encoding scheme; traversal node; Algorithm design and analysis; Data structures; Encoding; IP networks; Memory management; Software; System-on-a-chip
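The bitmap-plus-offset idea, locating a child by a popcount over a bitmap instead of storing per-child pointers, can be sketched for a binary trie whose nodes sit in one flat array. This is a generic illustration of pointer-free trie encoding, not the paper's exact OET node layout:

```python
# Each node is (child_bitmap, child_offset, next_hop). The children of a
# node are stored contiguously starting at child_offset, and a popcount
# over the low bits of the bitmap selects the right child slot.
# This toy table encodes the prefixes: "0" -> A, "01" -> B, "1" -> C.
NODES = [
    (0b11, 1, None),  # root: children for bits 0 and 1 start at index 1
    (0b10, 3, "A"),   # "0"  -> next hop A, child for bit 1 at index 3
    (0b00, 0, "C"),   # "1"  -> next hop C, leaf
    (0b00, 0, "B"),   # "01" -> next hop B, leaf
]

def lookup(bits):
    """Longest-prefix match over a string of address bits, e.g. '010'."""
    idx, hop = 0, None
    for ch in bits:
        bitmap, offset, nh = NODES[idx]
        if nh is not None:
            hop = nh                      # remember the latest matching hop
        b = int(ch)
        if not bitmap & (1 << b):
            return hop                    # no child on this branch: done
        # popcount of the lower bits gives the child's slot among its siblings
        idx = offset + bin(bitmap & ((1 << b) - 1)).count("1")
    nh = NODES[idx][2]
    return nh if nh is not None else hop

# lookup("010") -> "B" (longest match "01"); lookup("10") -> "C" (match "1")
```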
【Paper Link】 【Pages】:311-315
【Authors】: Zhuo Huang ; Jih-Kwon Peir ; Shigang Chen
【Abstract】: IP lookup is one of the key functions in the design of core routers. Its efficiency determines how fast a router can forward packets. As new content is continuously brought to the Internet, novel routing technologies must be developed to meet the increasing throughput demand. Hash-based lookup schemes are promising because they have low lookup delays and can handle large routing tables. To achieve high throughput, we must choose the hash function so as to reduce the lookup bandwidth to the off-chip memory where the routing table is stored. Routing table updates also need to be handled to avoid a costly re-setup. In this paper, we propose AP-Hash, an approximately perfect hashing approach that not only distributes routing-table entries evenly among the hash buckets but also handles routing table updates with low overhead. We also present an enhanced approach, called AP-Hash-E, which is able to process far more updates before a complete re-setup becomes necessary. Experimental results based on real routing tables show that our new hashing approaches achieve a throughput of 250M packets per second and require a re-setup as seldom as once per month.
【Keywords】: IP networks; Internet; file organisation; table lookup; telecommunication network routing; AP-Hash-E; IP lookup; Internet; approximately-perfect hashing; hash function; hash-based lookup scheme; network throughput; off-chip routing table lookup; Delay; IP networks; Indexes; Radiation detectors; Routing; System-on-a-chip; Throughput
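The abstract does not specify AP-Hash's construction, so the sketch below illustrates only the stated goal of even bucket loads, using a generic "two-choice" placement (every name here is an assumption, not the paper's scheme): each entry hashes to two candidate buckets and goes to the less loaded one, which sharply flattens the load distribution relative to a single hash function.

```python
# Illustrative only: generic two-choice bucket placement, NOT the AP-Hash
# algorithm itself (the abstract does not give its construction).
import hashlib

def h(key: str, salt: str, nbuckets: int) -> int:
    """Salted hash of key into [0, nbuckets)."""
    d = hashlib.sha256((salt + key).encode()).digest()
    return int.from_bytes(d[:8], "big") % nbuckets

def insert_two_choice(buckets, key: str) -> None:
    """Place key into the less loaded of its two candidate buckets."""
    i, j = h(key, "a", len(buckets)), h(key, "b", len(buckets))
    target = i if len(buckets[i]) <= len(buckets[j]) else j
    buckets[target].append(key)
```

With 256 prefixes spread over 64 buckets this way, the maximum bucket load stays close to the average of 4, which is the kind of evenness a hash-based lookup needs to bound off-chip memory accesses per packet.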
【Paper Link】 【Pages】:316-320
【Authors】: Guner D. Celik ; Long Bao Le ; Eytan Modiano
【Abstract】: We consider a dynamic server control problem for two parallel queues with randomly varying connectivity and server switchover delay between the queues. At each time slot the server decides either to stay with the current queue or switch to the other queue based on the current connectivity and the queue length information. The introduction of switchover time is a new modeling component of this problem, which makes the problem much more challenging. We develop a novel approach to characterize the stability region of the system by using state-action frequencies, which are stationary solutions to a Markov Decision Process (MDP) formulation of the corresponding saturated system. We characterize the stability region explicitly in terms of the connectivity parameters and develop a frame-based dynamic control (FBDC) policy that is shown to be throughput-optimal. In fact, the FBDC policy provides a new framework for developing throughput-optimal network control policies using state-action frequencies. Furthermore, we develop simple Myopic policies that achieve more than 96% of the stability region. Finally, simulation results show that the Myopic policies may achieve the full stability region and are more delay efficient than the FBDC policy in most cases.
【Keywords】: Markov processes; queueing theory; scheduling; Markov decision process; Myopic policy; dynamic server control problem; frame-based dynamic control policy; parallel queues; queue length information; randomly varying connectivity; scheduling; server switchover delay; state-action frequency; switchover time; throughput-optimal network control policy; Markov processes; Switches; Variable speed drives
【Paper Link】 【Pages】:321-325
【Authors】: Qing Li ; Dan Wang ; Mingwei Xu ; Jiahai Yang
【Abstract】: In recent years, the core-net routing table, e.g., the Forwarding Information Base (FIB), has been growing at an alarming rate, which has become a major concern for Internet Service Providers. One effective solution for this routing scalability problem, which requires only upgrades on individual routers, is FIB aggregation. Intrinsically, IP prefixes with numerical prefix matching and the same next hop can be aggregated. Previous studies commonly assume that each IP prefix has one corresponding next hop, i.e., one optimal path. In this paper, we argue that a packet can be delivered to its destination through a path other than the one optimal path. Based on this observation, we for the first time propose Nexthop-Selectable FIB Aggregation, which is fundamentally different from all previous aggregation schemes. IP prefixes are aggregated if they have numerical prefix matching and share one common next hop. Consequently, IP prefixes that previously could not be aggregated, for lack of an identical next hop, now can be; and we achieve a substantially higher aggregation ratio. In this paper, we provide a systematic study of the Nexthop-Selectable FIB Aggregation problem. We present several practical choices to build the sets of selectable next hops for the IP prefixes. To maximize the aggregation, we formulate it as an optimization problem. We show that the problem can be solved by dynamic programming. While the straightforward application of dynamic programming has exponential complexity, we propose a novel algorithm that is O(N). We then develop an optimal online algorithm with constant running time. We evaluate our algorithms through a comprehensive set of simulations on BRITE topologies with RIBs collected from RouteViews. Our evaluation shows that we can reduce the FIB size by more than an order of magnitude.
【Keywords】: IP networks; Internet; computational complexity; dynamic programming; telecommunication network routing; BRITE; IP prefixes; Internet service providers; Nexthop-selectable FIB aggregation; RIB; aggregation ratio; constant running time; core-net routing table; dynamic programming; exponential complexity; forwarding information base; individual routers; numerical prefix matching; optimal online algorithm; optimal path; optimization problem; router forwarding tables; routeviews; routing scalability; Complexity theory; Dynamic programming; Heuristic algorithms; IP networks; Internet; Routing; Topology
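The core condition can be sketched in a toy form: two sibling prefixes merge into their parent only if their selectable-next-hop sets intersect, and the merged prefix inherits that intersection. The recursion below assumes a full binary trie of prefixes (the paper's dynamic program additionally handles holes and partial coverage, which this sketch ignores).

```python
# Toy recursion: a subtree collapses to ONE FIB entry iff all its leaves
# share at least one selectable next hop; the shared set propagates up.
# A leaf is a set of selectable next hops; an internal node is a
# (left, right) pair of subtrees. Simplified relative to the paper.

def mergeable_hops(node):
    """Next hops under which the whole subtree fits in a single entry
    (empty set means the subtree cannot be aggregated into one entry)."""
    if isinstance(node, set):
        return node
    left, right = node
    return mergeable_hops(left) & mergeable_hops(right)
```

For instance, two /25s with hop sets {A, B} and {B, C} aggregate into one /24 routed via B, even though their single "optimal" next hops might differ.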
【Paper Link】 【Pages】:326-330
【Authors】: Sushmita Ruj ; Amiya Nayak ; Ivan Stojmenovic
【Abstract】: We address pairwise and (for the first time) triple key establishment problems in wireless sensor networks (WSN). We use combinatorial designs to establish pairwise keys between nodes in a WSN. A BIBD(v; b; r; k; λ) (or t - (v; b; r; k; λ)) design can be mapped to a sensor network, where v represents the size of the key pool, b represents the maximum number of nodes that the network can support, and k represents the size of the key chain. Any pair (or t-subset) of keys occurs together uniquely in exactly λ nodes. λ = 2 and λ = 3 are used to establish unique pairwise or triple keys. Our pairwise key distribution is the first one that is fully secure (none of the links among uncompromised nodes is affected) and applicable to mobile sensor networks (as key distribution is independent of the connectivity graph), while preserving low storage, computation and communication requirements. We also use combinatorial trades to establish pairwise keys. This is the first time that trades have been applied to key management. We describe a new construction of Strong Steiner Trades. We introduce a novel concept of triple key distribution, in which a common key is established between three nodes. This allows secure passive monitoring of forwarding progress in routing tasks. We present a polynomial-based approach and a combinatorial approach (using trades) for triple key distribution.
【Keywords】: combinatorial mathematics; mobile radio; polynomials; public key cryptography; telecommunication network routing; telecommunication security; wireless sensor networks; Steiner trades; WSN; combinatorial designs; mobile sensor networks; pairwise key distribution; passive monitoring; polynomial based approach; routing tasks; security; triple key distribution; wireless sensor networks; Computer science; Cryptography; Mobile communication; Mobile computing; Polynomials; Wireless sensor networks; Key predistribution; Pairwise-keys; Security; Steiner Trades
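The design-to-network mapping is concrete enough to illustrate with the smallest classic example: the Fano plane, a 2-(7, 3, 1) design. Here v = 7 keys, b = 7 nodes, key chains of size k = 3, and every pair of keys co-occurs in exactly λ = 1 node. (The paper uses λ = 2 and λ = 3 to obtain unique pairwise and triple keys; the λ = 1 case below is only a sketch of the mapping.)

```python
# The Fano plane as a key-predistribution map: each block is the key
# chain loaded into one sensor node. In this 2-(7,3,1) design every
# PAIR of keys appears together in exactly one node (lambda = 1).
from itertools import combinations

FANO_BLOCKS = [
    {0, 1, 2}, {0, 3, 4}, {0, 5, 6},
    {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5},
]

def cooccurrence(blocks, t=2):
    """Map each t-subset of keys to the number of blocks containing it."""
    keys = sorted(set().union(*blocks))
    return {s: sum(1 for b in blocks if set(s) <= b)
            for s in combinations(keys, t)}
```

Checking that every one of the 21 key pairs appears in exactly one block verifies the design property that the scheme scales up with larger λ.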
【Paper Link】 【Pages】:331-335
【Authors】: Daojing He ; Jiajun Bu ; Sencun Zhu ; Mingjian Yin ; Yi Gao ; Haodong Wang ; Sammy Chan ; Chun Chen
【Abstract】: A distributed access control module in wireless sensor networks (WSNs) allows the network to authorize and grant user access privileges for in-network data access. Prior research mainly focuses on designing such access control modules for WSNs, but little attention has been paid to protect user's identity privacy when a user is verified by the network for data accesses. Often, a user does not want the WSN to associate his identity to the data he requests, particularly in a single-owner multi-user WSN. In this paper, we present the design, implementation, and evaluation of a novel approach, Priccess, to ensure privacy-preserving access control. In addition to the theoretical analysis that demonstrates the security properties of Priccess, this paper also reports the experimental results of Priccess in a network of Imote2 motes, which show the efficiency of Priccess in practice.
【Keywords】: authorisation; data privacy; multi-access systems; telecommunication security; wireless sensor networks; Priccess approach; distributed access control module; distributed privacy-preserving access control; in-network data access; security property; single-owner multiuser sensor network; user access privilege; user authorization; user identity privacy protection; wireless sensor network; Access control; Protocols; Public key; Registers; Silicon; Wireless sensor networks
【Paper Link】 【Pages】:336-340
【Authors】: Chuang Wang ; Guiling Wang ; Wensheng Zhang ; Taiming Feng
【Abstract】: When wireless sensors are deployed to monitor the working or life conditions of people, the data collected and processed by these sensors may reveal privacy of people. The actual content of sensory data should be concealed to preserve privacy, but the data concealment feature may be abused by compromised sensors to modify or ill-process data without being caught. Hence, reconciling privacy preservation and intrusion detection, which apparently conflict with each other, is important. This paper studies this problem in the context of sensory data aggregation, a fundamental primitive for efficient operation of sensor networks. A scheme is proposed that can detect ill-performed aggregation without knowing the actual content of sensory data, and therefore allow sensory data to be kept concealed. The results show that the actual content of raw and aggregated sensory data can be well concealed. Meanwhile, most ill-performed aggregations can be detected; the ill-performed aggregations that can escape detection have only negligible impact on the final aggregation results.
【Keywords】: data privacy; security of data; wireless sensor networks; data concealment feature; data privacy; intrusion detection; privacy preservation; sensory data aggregation; wireless sensors; Data privacy; Histograms; Intrusion detection; Monitoring; Sensor systems; Wireless sensor networks
【Paper Link】 【Pages】:341-345
【Authors】: Xiaoyan Li ; Yingying Chen ; Jie Yang ; Xiuyuan Zheng
【Abstract】: Received Signal Strength (RSS) based localization algorithms are sensitive to a set of non-cryptographic attacks. For example, the attacker can perform signal strength attacks by placing an absorbing or reflecting material around a wireless device to modify its RSS readings. In this work, we first formulate the all-around signal strength attacks, where similar attacks are launched towards all landmarks, and experimentally show the feasibility of launching such attacks. We then propose a general principle for designing RSS-based algorithms so that they are robust to all-around signal strength attacks. To evaluate our approach, we adapt two RSS-based localization algorithms according to our principle and experiment with real attack scenarios. All the experiments show that our design principle can be applied to achieve comparable performance with much better robustness.
【Keywords】: radiocommunication; signal processing; telecommunication security; RSS readings; RSS-based localization algorithm; all-around signal strength attack; noncryptographic attack; received signal strength; wireless device; Algorithm design and analysis; Degradation; Measurement; Mobile handsets; Robustness; Tin; Wireless communication
【Paper Link】 【Pages】:346-350
【Authors】: Xiali Hei ; Xiaojiang Du
【Abstract】: Implantable Medical Devices (IMDs) are widely used to treat chronic diseases. Nowadays, many IMDs can wirelessly communicate with an outside programmer (reader). However, the wireless access also introduces security concerns. An attacker may get an IMD reader and gain access to a patient's IMD. IMD security is an important issue since attacks on IMDs may directly harm the patient. A number of research groups have studied IMD security issues when the patient is in nonemergency situations. However, these security schemes usually require the patient's participation, and they may not work during emergencies (e.g., when the patient is in a coma) for various reasons. In this paper, we propose a lightweight secure access control scheme for IMDs during emergencies. Our scheme utilizes the patient's biometric information to prevent unauthorized access to IMDs. The scheme consists of two levels: level 1 employs some basic biometric information of the patient and is lightweight; level 2 utilizes the patient's iris data for authentication and is very effective. In this research, we also make contributions in human iris verification: we discover that it is possible to perform iris verification by comparing partial iris data rather than the entire iris data. This significantly reduces the overhead of iris verification, which is critical for resource-limited IMDs. We evaluate the performance of our schemes by using real iris data sets. Our experimental results show that the secure access control scheme is very effective and has small overhead (hence feasible for IMDs). Specifically, the false acceptance rate (FAR) and false rejection rate (FRR) of our secure access control scheme are close to 0.000% with a suitable threshold, and the memory and computation overheads are acceptable. Our analysis shows that the secure access control scheme reduces computation overhead by an average of 58%.
【Keywords】: biomedical communication; biomedical electronics; iris recognition; medical image processing; patient treatment; prosthetics; radio access networks; IMD security; basic biometric information; biometric based two level secure access control; chronic disease treatment; emergencies; false acceptance rate; false rejection rate; human iris verification; implantable medical devices; light weight secure access control scheme; partial iris data; patient biometric information; patient iris data; resource limited IMD; wireless IMD communication; wireless access; Access control; Authentication; Biomedical imaging; Iris; Iris recognition; Noise; access control; biometric-based security; implantable medical devices; iris
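The partial-comparison idea can be sketched in a few lines: iris codes are conventionally matched by fractional Hamming distance, and comparing only a prefix of the code cuts the work roughly in proportion. The bit-string format, the 50% fraction, and the 0.32 acceptance threshold below are illustrative assumptions, not the paper's parameters (0.32 is a commonly cited iris-matching threshold).

```python
# Sketch of partial iris-code verification via fractional Hamming
# distance. Parameters (fraction compared, threshold) are hypothetical.

def hamming_frac(a: str, b: str) -> float:
    """Fractional Hamming distance between equal-length bit strings."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def verify(stored: str, probe: str, frac: float = 0.5,
           thresh: float = 0.32) -> bool:
    """Accept iff the distance over the first `frac` of the code is
    below `thresh`; comparing a prefix halves the per-bit work."""
    n = int(len(stored) * frac)
    return hamming_frac(stored[:n], probe[:n]) < thresh
```

Cutting the compared portion is what drives the reported reduction in computation overhead on the resource-limited IMD.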
【Paper Link】 【Pages】:351-355
【Authors】: Lei Kang ; Kaishun Wu ; Jin Zhang ; Haoyu Tan
【Abstract】: RFID has been gaining popularity due to its variety of applications, such as inventory control and localization. One important issue in RFID systems is tag identification. A tag randomly selects a slot in which to send a Random Number (RN16) packet to contend for identification. A collision happens when multiple tags select the same slot, which makes the RN packet undecodable and thus reduces channel utilization. In this paper, we redesign the RN pattern to make collided RNs decodable. By leveraging the collision slots, the system performance can be dramatically enhanced. This novel scheme, called DDC, is able to directly decode collisions without exact knowledge of the collided RNs. In the DDC scheme, we modify the RN generator in the RFID tag and add a collision decoding scheme to the RFID reader. We implement DDC in a GNU Radio and USRP2 based testbed to verify its feasibility. Both theoretical analysis and testbed experiments show that DDC achieves a 40% tag read rate gain compared with the traditional RFID protocol.
【Keywords】: decoding; radiofrequency identification; random number generation; telecommunication congestion control; GNU Radio; RFID protocol; RFID systems; RN generator; USRP2 based; channel utilization; collision decoding scheme; collision slots; inventory control; radiofrequency identification; random number packet; tag identification; Algorithm design and analysis; Decoding; Estimation; Interference; Protocols; Radiofrequency identification; Signal to noise ratio
【Paper Link】 【Pages】:356-360
【Authors】: Wen Luo ; Shigang Chen ; Tao Li ; Shiping Chen
【Abstract】: RFID tags have many important applications in automated warehouse management. One example is to monitor a set of tags and detect whether some tags are missing - the objects to which the missing tags are attached are likely to be missing, too, due to theft or administrative error. Prior research on this problem has primarily focused on efficient protocols that reduce the execution time in order to avoid disruption of normal inventory operations. This paper makes several new advances. First, we observe that the existing protocol is far from being optimal in terms of execution time. We are able to cut the execution time to a fraction of what is currently needed. Second, we study the missing-tag detection problem from a new energy perspective, which is very important when battery-powered active tags are used. The new insight provides flexibility for the practitioners to meet their energy and time requirements.
【Keywords】: radiofrequency identification; storage automation; warehouse automation; RFID systems; RFID tags; automated warehouse management; missing tag detection; Batteries; Energy consumption; Estimation; Inventory management; Protocols; RFID tags
【Paper Link】 【Pages】:361-365
【Authors】: Yong Cui ; Wei Li ; Xiuzhen Cheng
【Abstract】: In this study, we investigate the problem of partially overlapping channel assignment to improve the performance of 802.11 wireless networks. We first derive a novel interference model that takes into account both the adjacent channel separation and the physical distance of the two nodes employing adjacent channels. This model defines “node orthogonality”, which states that two nodes over adjacent channels are orthogonal if they are physically sufficiently separated. We propose an approximate algorithm MICA to minimize the total interference for throughput maximization. Extensive simulation study has been performed to validate our design and to compare the performances of our algorithm with those of the state-of-the-art.
【Keywords】: channel allocation; radiofrequency interference; wireless LAN; 802.11 wireless networks; adjacent channel separation; approximate algorithm MICA; interference model; node orthogonality; partially overlapping channel assignment; throughput maximization; Approximation algorithms; Bit rate; Interference; Optimization; Signal to noise ratio; Throughput; Wireless networks
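The "node orthogonality" condition lends itself to a small encoding: two links on partially overlapping channels do not interfere if their physical separation exceeds a threshold that shrinks as the channel separation grows. The threshold values below are hypothetical placeholders, not the paper's derived interference model.

```python
# Toy encoding of node orthogonality for partially overlapping channels.
# THRESH maps channel gap -> required physical separation; the numbers
# are hypothetical, only the monotone shape matters for the sketch.

THRESH = {0: 4.0, 1: 2.0, 2: 1.0, 3: 0.5}

def orthogonal(c1: int, c2: int, dist: float) -> bool:
    """True iff nodes on channels c1, c2 at distance `dist` do not
    interfere under this toy model."""
    gap = abs(c1 - c2)
    if gap >= 4:            # effectively non-overlapping channels
        return True
    return dist > THRESH[gap]
```

A channel-assignment heuristic such as the paper's MICA would then try to minimize the total number of non-orthogonal pairs.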
【Paper Link】 【Pages】:366-370
【Authors】: Sriram Lakshmanan ; Shruti Sanadhya ; Raghupathy Sivakumar
【Abstract】: The IEEE 802.11n standard is gaining popularity to achieve high throughput in Wireless LANs. In this paper, we explore link adaptation in practical 802.11n systems using experiments with off-the-shelf hardware. Our experiments reveal several non-trivial insights. Specifically, (1) trivial extensions of algorithms developed for 802.11g provide minimal benefits in 802.11n systems; (2) in contrast to theoretical expectation, multiple antenna transmission does not always lead to higher throughput in practice; (3) both stream and antenna selection are essential to reap the full benefits of MIMO technologies. We use insights developed from experiments to develop a new metric for stream selection called the Median Multiplexing Factor (MMF). The proposed metric can be used to develop intelligent rate selection algorithms that can achieve high throughput with purely software changes.
【Keywords】: IEEE standards; MIMO communication; wireless LAN; 802.11g; IEEE 802.11n standard; MIMO technologies; WLAN; antenna selection; link rate adaptation; median multiplexing factor; multiple antenna transmission; stream selection; wireless LAN; Antennas; IEEE 802.11n Standard; Interference; Modulation; Multiplexing; Signal to noise ratio; Throughput
【Paper Link】 【Pages】:371-375
【Authors】: Libin Jiang ; Mathieu Leconte ; Jian Ni ; R. Srikant ; Jean C. Walrand
【Abstract】: Glauber dynamics is a powerful tool to generate randomized, approximate solutions to combinatorially difficult problems. It has been recently used to design distributed CSMA scheduling algorithms for multi-hop wireless networks. In this paper, we derive bounds on the mixing time of a generalization of Glauber dynamics where multiple links update their states in parallel and the fugacity of each link can be different. The results are used to prove that the average queue length (and hence, the delay) under the parallel-Glauber-dynamics-based CSMA grows polynomially in the number of links for wireless networks with bounded-degree interference graphs when the arrival rate lies in a fraction of the capacity region. Other versions of adaptive CSMA can be analyzed similarly.
【Keywords】: carrier sense multiple access; dynamic scheduling; bounded-degree interference graphs; fast mixing; low-delay CSMA scheduling; multi-hop wireless networks; parallel Glauber dynamics; Delay; Heuristic algorithms; Interference; Markov processes; Multiaccess communication; Schedules; Scheduling algorithm
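The single-site Glauber dynamics that the paper's parallel version generalizes is simple to state: at each step one link wakes up; if a conflicting neighbor is active it stays off, otherwise it turns on with probability λ/(1 + λ) for fugacity λ. The sketch below is this idealized-CSMA chain on a conflict graph (a textbook baseline, not the paper's parallel variant).

```python
# Single-site Glauber dynamics over independent sets of a conflict
# graph: the stationary distribution weights a set S by prod(lam_i).
import random

def glauber_step(state, neighbors, lam, rng):
    """One update: pick a random link; blocked links stay off,
    unblocked links turn on w.p. lam/(1+lam)."""
    i = rng.randrange(len(state))
    if any(state[j] for j in neighbors[i]):
        state[i] = 0                       # blocked by an active neighbor
    else:
        state[i] = 1 if rng.random() < lam / (1 + lam) else 0
    return state

def run(neighbors, lam, steps=1000, seed=0):
    """Run the chain from the empty set; return the visited states."""
    rng = random.Random(seed)
    state = [0] * len(neighbors)
    hist = []
    for _ in range(steps):
        state = glauber_step(state, neighbors, lam, rng)
        hist.append(tuple(state))
    return hist
```

On two conflicting links the chain never activates both simultaneously, which is exactly the collision-free scheduling property CSMA relies on; mixing-time bounds quantify how fast such chains approach their stationary distribution.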
【Paper Link】 【Pages】:376-380
【Authors】: Wei Dong ; Yunhao Liu ; Chun Chen ; Jiajun Bu ; Chao Huang
【Abstract】: We present ℛ2, an incremental ℛeprogramming approach using Relocatable code, to improve program similarity for efficient incremental reprogramming in networked embedded systems. ℛ2 achieves a higher degree of similarity than existing approaches by mitigating the effects of both function shifts and data shifts. ℛ2 makes efficient use of memory and does not degrade program quality. It adopts an optimized differencing algorithm to generate small delta files for efficient dissemination. We demonstrate ℛ2's advantages through detailed analysis of TinyOS examples. We also present case studies on the software programs of a large-scale, long-term sensor system, GreenOrbs. Results show that ℛ2 reduces the dissemination cost by approximately 65% compared to Deluge, the state-of-the-art network reprogramming approach, and by approximately 20% compared to Zephyr and Hermes, the latest works on incremental reprogramming.
【Keywords】: embedded systems; large-scale systems; operating systems (computers); programming; software quality; telecommunication computing; wireless sensor networks; Deluge-state-of-the-art network reprogramming approach; GreenOrbs; TinyOS examples; data shifts; delta files; dissemination cost; function shifts; incremental reprogramming; large-scale system; long-term sensor system; networked embedded systems; optimized differencing algorithm; program quality; program similarity; relocatable code; software programs; Algorithm design and analysis; Ash; Embedded systems; Geophysical measurement techniques; Green products; Protocols; Wireless sensor networks
【Paper Link】 【Pages】:381-385
【Authors】: Jun Sun ; Yonggang Wen ; Lizhong Zheng
【Abstract】: With the emergence of the adaptive bit rate (ABR) streaming technology, the video/content streaming technology is shifting toward a file-based content distribution. That is, video content is encoded into a set of smaller media files containing video of 2-10 seconds before transmission. This file-based content distribution, coupled with increasingly rapid adoption of smartphones, requires an efficient file-based distribution algorithm to satisfy the QoS demand in wireless networks. In this paper, we study the transmission of a finite-sized file over wireless networks using multipath routing, with the objective to minimize file transmission delay instead of average packet delay. The file transmission delay is defined as the time interval from the instant that a file is first transmitted to the time at which the file can be reconstructed in the destination node. We observe that file transmission delay depends not only on the mean of the packet delay but also on its distribution, especially the tail. This observation leads to a better understanding of the file transfer delay in wireless networks and a minimum delay file transmission strategy. In a wireless multipath communication scenario, we propose to use packet level erasure code (e.g., digital fountain code) to transmit data file with redundancy. Given that a file with k packets is encoded into n packets for transmission, the use of digital fountain code allows the file to be received when only k out of n packets are received. By adding redundant packets, the destination node does not have to wait for the packet to arrive late, hence reducing the delay of the file transmission. We characterize the tradeoff between the code rate (i.e., the ratio of the number of transmitted packets to the number of the original packets) and the file delay reduction. As a rule of thumb, we provide practical guidelines in determining an appropriate code rate for a fixed file to achieve a reasonable transmission delay. 
We show that only a few redundant packets are needed to achieve a significant reduction in file transmission delay.
【Keywords】: quality of service; radio networks; telecommunication network routing; video coding; video streaming; QoS demand; adaptive bit rate streaming technology; average packet delay; coding; data file; delay trade-off; destination node; digital fountain code; file delay reduction; file transmission delay; file-based distribution algorithm; media files; multipath routing; multiple path; on file-based content distribution; redundant packets; smartphones; video content streaming technology; wireless multipath communication; wireless networks; Delay; Encoding; Relays; Routing; Satellites; Streaming media; Wireless networks
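The k-of-n effect behind this result is easy to verify numerically: with a fountain code the file completes when any k of the n coded packets arrive, so completion time is the k-th order statistic of the per-packet delays rather than their maximum. The exponential delay distribution below is a hypothetical stand-in for a real multipath delay tail.

```python
# Monte Carlo sketch: completion time = k-th smallest of n packet delays.
# With n = k (no redundancy) this is the MAX of k delays; a few extra
# coded packets cut the tail. Delay model (Exp(1)) is hypothetical.
import random

def completion_time(k, n, rng):
    delays = sorted(rng.expovariate(1.0) for _ in range(n))
    return delays[k - 1]          # file decodes at the k-th arrival

def avg_completion(k, n, trials=2000, seed=1):
    rng = random.Random(seed)
    return sum(completion_time(k, n, rng) for _ in range(trials)) / trials
```

For k = 8, adding just four redundant packets (n = 12) drops the mean completion time well below the no-redundancy case, mirroring the paper's observation that a small amount of redundancy yields most of the delay reduction.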
【Paper Link】 【Pages】:386-390
【Authors】: Wai-Leong Yeow ; Cédric Westphal ; Ulas C. Kozat
【Abstract】: Erasure coding has been proposed in distributed storage systems for both high data reliability and low storage redundancy. With virtualization, virtual machines (VMs) are essentially defined by software and by their memory states, and erasure coding can be used in the same manner as in storage for high reliability and availability. Each VM can back up its memory state to a hot spare, and multiple memory states are coded at the hot spare to provide data reliability and low redundancy. This poses a new set of challenges. Both synchronization and recovery of the VMs consume significant bandwidth, which may impede the performance of a data center. To guarantee high availability, recovery from a machine failure (decoding) at the hot spare must be fast. When (de)coding is done in-network, computation can be distributed over hosts, routers or switches, and bandwidth for synchronization and recovery can be reduced significantly. We show the conditions needed to support optimal rates for synchronization and recovery for many-to-one backup in arbitrary networks. Furthermore, we propose routing and code construction algorithms that run in polynomial time.
【Keywords】: computer centres; computer network reliability; digital storage; network coding; synchronisation; system recovery; telecommunication network routing; virtual machines; code construction algorithms; data center; data recovery; data reliability; distributed storage systems; erasure coding; highly available virtual machine; machine failure; network coding; routing; storage redundancy; synchronization; Availability; Bandwidth; Encoding; Fault tolerance; Network coding; Synchronization
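The coded hot-spare idea can be sketched with the simplest erasure code, XOR parity: the spare stores only the XOR of the VMs' memory pages, and one failed VM's page is rebuilt from the parity plus the survivors. This is a one-failure toy; the paper's construction uses general (network) codes over arbitrary topologies.

```python
# Toy coded backup: the hot spare keeps XOR parity of the VMs' pages
# instead of full copies, so storage redundancy is one page, not N.

def xor_pages(pages):
    """Bytewise XOR of equal-length byte strings."""
    out = bytearray(len(pages[0]))
    for p in pages:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

def recover(parity, survivors):
    """Rebuild the one missing page from parity + surviving pages."""
    return xor_pages([parity, *survivors])
```

The bandwidth question the paper studies is then where this (de)coding runs: pushing it into routers or switches lets synchronization traffic be combined in-flight instead of converging raw on the spare.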
【Paper Link】 【Pages】:391-395
【Authors】: Dan Zhang ; Narayan B. Mandayam
【Abstract】: In this paper, we consider multicast with Random Network Coding (RNC) over a wireless network using Orthogonal Frequency Division Multiple Access (OFDMA). Specifically, we propose a cross-layer resource allocation mechanism to minimize the total transmit power in the network to achieve a target throughput. The problem in its original form is a NP-hard mixed integer program. We alleviate this problem with a greedy power and subcarrier allocation algorithm that is combined with a node selection strategy that is enabled by RNC, which we refer to as “min-cut chasing.” We compare it with a reference algorithm that assigns subcarriers independently based on the max-min fairness criterion followed by optimal power allocation. Our results reveal that the proposed greedy algorithm with min-cut chasing, which is of polynomial complexity, yields power savings of 3dB and is within 1dB of a lower bound based on an interference-free assumption.
【Keywords】: OFDM modulation; communication complexity; frequency division multiple access; greedy algorithms; integer programming; minimax techniques; multicast communication; network coding; radio networks; resource allocation; NP-hard mixed integer program; OFDMA network; cross-layer resource allocation mechanism; greedy power; interference-free assumption; max-min fairness criterion; min-cut chasing; node selection strategy; optimal power allocation; orthogonal frequency division multiple access; polynomial complexity; random network coding; reference algorithm; subcarrier allocation algorithm; wireless network; Greedy algorithms; Interference; Network coding; OFDM; Resource management; Throughput; Wireless networks
【Paper Link】 【Pages】:396-400
【Authors】: Shanshan Wang ; Yalin Evren Sagduyu ; Junshan Zhang ; Jason Hongjun Li
【Abstract】: We consider a cognitive radio network where primary users (PUs) employ network coding for data transmissions. We view network coding as a spectrum shaper, in the sense that it increases spectrum availability to secondary users (SUs) and offers more structure of spectrum holes, which in turn improves the predictability of the primary spectrum. With this spectrum shaping effect of network coding, each SU can carry out adaptive channel sensing by dynamically updating the list of the (predicted) idle PU channels and giving priority to these channels for spectrum sensing. This dynamic spectrum access approach with network coding improves how SUs detect and utilize spectrum holes over PU channels. Our results show that compared to the existing approaches based on retransmission, both PUs and SUs can achieve higher stable throughput, thanks to the spectrum shaping effect of network coding.
【Keywords】: cognitive radio; data communication; network coding; adaptive channel sensing; cognitive radio networks; data transmissions; dynamic spectrum access; network coding; primary users; secondary users; spectrum availability; spectrum holes; spectrum shaping; Adaptive systems; Automatic repeat request; Cognitive radio; Markov processes; Network coding; Sensors; Throughput; Cognitive radio networks; dynamic spectrum access; network coding; spectrum shaping
【Paper Link】 【Pages】:401-405
【Authors】: Matthieu Latapy ; Clémence Magnien ; Raphaël Fournier
【Abstract】: Increasing knowledge of paedophile activity in P2P systems is a crucial societal concern, with important consequences for child protection, policy making, and internet regulation. Because of a lack of traces of P2P exchanges and rigorous analysis methodology, however, current knowledge of this activity remains very limited. We consider here a widely used P2P system, eDonkey, and focus on two key statistics: the fraction of paedophile queries entered in the system and the fraction of users who entered such queries. We collect hundreds of millions of keyword-based queries; we design a paedophile query detection tool for which we establish false positive and false negative rates using assessment by experts; with this tool and these rates, we then estimate the fraction of paedophile queries in our data. We conclude that approximately 0.25% of queries are paedophile. Our statistics are by far the most precise and reliable ever obtained in this domain.
【Keywords】: Internet; peer-to-peer computing; query processing; Internet regulation; child protection; crucial societal concern; keyword-based queries; large P2P system; policy making; quantifying paedophile queries; Context; Encyclopedias; IP networks; Internet; Law enforcement; Peer to peer computing; Servers
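The error-rate correction the abstract alludes to is one line of algebra: if the detector flags a fraction p_obs of queries, a false-positive rate fp and false-negative rate fn (measured against expert assessment) relate it to the true fraction via p_obs = p_true·(1 − fn) + (1 − p_true)·fp. The numeric values below are illustrative, not the paper's measurements.

```python
# Invert the detector's confusion model to recover the true fraction:
#   p_obs = p_true*(1 - fn) + (1 - p_true)*fp
# Values used in the usage example are illustrative only.

def true_fraction(p_obs: float, fp: float, fn: float) -> float:
    """Estimated true positive fraction given observed rate and
    the tool's false-positive / false-negative rates."""
    return (p_obs - fp) / (1 - fn - fp)
```

Round-tripping a synthetic example (pick p_true, compute p_obs, invert) confirms the formula recovers the input exactly.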
【Paper Link】 【Pages】:406-410
【Authors】: Rafit Izhak-Ratzin ; Hyunggon Park ; Mihaela van der Schaar
【Abstract】: In this paper, we propose a BitTorrent-like protocol that replaces the peer selection mechanisms in the regular BitTorrent protocol with a novel reinforcement learning based mechanism. The inherent operation of P2P systems, which involves repeated interactions among peers over a long time period, allows peers to efficiently identify free-riders as well as desirable collaborators by learning the behavior of their associated peers. Thus, it can help peers improve their download rates and discourage free-riding (FR), while improving fairness. We model the peers' interactions in the BitTorrent-like network as a repeated interaction game, where we explicitly consider the strategic behavior of the peers. A peer that applies the reinforcement learning based mechanism uses a partial history of the observations on associated peers' statistical reciprocal behaviors to determine its best responses and estimate the corresponding impact on its expected utility. The policy determines the peer's resource reciprocations with other peers, which would maximize the peer's long-term performance.
【Keywords】: learning (artificial intelligence); peer-to-peer computing; protocols; P2P systems; bittorrent protocol; peer selection mechanisms; reinforcement learning; strategic behavior; Bandwidth; Games; History; Learning; Peer to peer computing; Protocols; Robustness; BitTorrent; P2P; reinforcement learning
【Paper Link】 【Pages】:411-415
【Authors】: Brett Stone-Gross ; Marco Cova ; Christopher Kruegel ; Giovanni Vigna
【Abstract】: Drive-by-download attacks have become the method of choice for cyber-criminals to infect machines with malware. Previous research has focused on developing techniques to detect web sites involved in drive-by-download attacks, and on measuring their prevalence by crawling large portions of the Internet. In this paper, we take a different approach at analyzing and understanding drive-by-download attacks. Instead of horizontally searching the Internet for malicious pages, we examine in depth one drive-by-download campaign, that is, the coordinated efforts used to spread malware. In particular, we focus on the Mebroot campaign, which we periodically monitored and infiltrated over several months, by hijacking parts of its infrastructure and obtaining network traces at an exploit server. By studying the Mebroot drive-by-download campaign from the inside, we could obtain an in-depth and comprehensive view into the entire life-cycle of this campaign and the involved parties. More precisely, we could study the security posture of the victims of drive-by attacks (e.g., by measuring the prevalence of vulnerable software components and the effectiveness of software updating mechanisms), the characteristics of legitimate web sites infected during the campaign (e.g., the infection duration), and the modus operandi of the miscreants controlling the campaign.
【Keywords】: Internet; Web sites; computer crime; invasive software; Internet; Mebroot drive-by-download campaign; Web sites; cyber-criminals; drive-by-download attacks; iFrame; malicious pages; malware; Browsers; Fires; Linux; Security; Software
【Paper Link】 【Pages】:416-420
【Authors】: Jin Zhang ; Wei Lou
【Abstract】: In this paper we model the digital rights management (DRM) for peer-to-peer streaming (P2PS) systems as a game. We construct the DRM game from the aspects of both the content service provider (CSP) and the users, and propose a design of DRM policy based on homogeneous peers and homogeneous digital goods, which yields the maximal utility for the CSP as well as a criterion for whether DRM is suitable for a P2PS system. A second class of games in this paper considers how a peer deals with digital goods in various situations in P2PS systems with DRM, together with the CSP's response to the peer's actions. We construct different games to avoid three notorious misbehaviors of peers: freeriding, jailbreaking and whitewashing. We give examples to show how these games work in P2PS systems with DRM and how equilibria are established in these games. Numerical experiments are conducted to demonstrate the effectiveness of the strategies devised from these games.
【Keywords】: digital rights management; game theory; media streaming; peer-to-peer computing; DRM game; DRM policy; P2PS system; content service provider; digital rights management game; freeriding; game theory; homogeneous digital goods; homogeneous peers; jailbreaking; peer-to-peer streaming system; whitewashing; Games; Digital rights management; game theory; peer-to-peer streaming systems
【Paper Link】 【Pages】:421-425
【Authors】: Di Niu ; Zimu Liu ; Baochun Li ; Shuqiao Zhao
【Abstract】: Peer-assisted on-demand video streaming services are extremely large-scale distributed systems on the Internet. Automated demand forecast and performance prediction, if implemented, can help with capacity planning and quality control so that sufficient server bandwidth can always be supplied to each video channel without incurring wastage. In this paper, we use time-series analysis techniques to automatically predict the online population, the peer upload and the server bandwidth demand in each video channel, based on the learning of both human factors and system dynamics from online measurements. The proposed mechanisms are evaluated on a large dataset collected from a commercial Internet video-on-demand system.
【Keywords】: Internet; peer-to-peer computing; quality control; time series; video on demand; video streaming; Internet video-on-demand system; automated demand forecasting; capacity planning; distributed system; online measurement; online population prediction; peer-assisted on-demand video streaming service; performance prediction; quality control; server bandwidth; time-series analysis technique; video channel; Bandwidth; Channel estimation; Internet; Peer to peer computing; Predictive models; Servers; Streaming media
【Paper Link】 【Pages】:426-430
【Authors】: Francesco Malandrino ; Claudio Casetti ; Carla-Fabiana Chiasserini ; Marco Fiore
【Abstract】: Content downloading in vehicular networks is a topic of increasing interest: services based upon it are expected to be hugely popular, and investments are planned for wireless roadside infrastructure to support it. We focus on a content downloading system leveraging both infrastructure-to-vehicle and vehicle-to-vehicle communication. With the goal of maximizing the system throughput, we formulate a max-flow problem that accounts for several practical aspects, including channel contention and the data transfer paradigm. Through our study, we identify the factors that have the largest impact on the performance and derive guidelines for the design of the vehicular network and of the roadside infrastructure supporting it.
【Keywords】: optimisation; vehicular ad hoc networks; channel contention; content downloading system; data transfer paradigm; infrastructure-to-vehicle communication; max-flow problem; vehicle-to-vehicle communication; vehicular network; wireless roadside infrastructure; TV
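The throughput objective above is cast as a max-flow problem; the paper's formulation additionally models channel contention and the data transfer paradigm, but the underlying computation can be illustrated with textbook Edmonds-Karp on a hypothetical AP-relay-downloader topology (node names and capacities below are made up):

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp max-flow: repeatedly push flow along a shortest
    augmenting path found by BFS in the residual graph.
    capacity: dict of dicts {u: {v: cap}} for a directed graph."""
    res = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in capacity:
        for v in capacity[u]:
            res.setdefault(v, {}).setdefault(u, 0)  # reverse residual edges
    flow = 0
    while True:
        parent = {source: None}
        q = deque([source])
        while q and sink not in parent:         # BFS for an augmenting path
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:                  # no augmenting path left
            return flow
        path, v = [], sink                      # walk back to collect edges
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)   # bottleneck capacity
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

# Hypothetical topology: one roadside AP, two relay vehicles, one downloader.
caps = {'ap': {'v1': 3, 'v2': 2}, 'v1': {'dst': 2, 'v2': 1},
        'v2': {'dst': 3}, 'dst': {}}
print(max_flow(caps, 'ap', 'dst'))  # 5
```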
【Paper Link】 【Pages】:431-435
【Authors】: Nianbo Liu ; Ming Liu ; Wei Lou ; Guihai Chen ; Jiannong Cao
【Abstract】: In Vehicular Ad Hoc Networks (VANETs), the major communication challenge lies in very poor connectivity, which can be caused by sparse or unbalanced traffic. Deploying supporting infrastructure could relieve this problem, but it often requires a large amount of investment and elaborate design, especially at the city scale. In this paper, we propose the idea of Parked Vehicle Assistance (PVA), which allows parked vehicles to join VANETs as static nodes. With wireless devices and rechargeable batteries, parked vehicles can easily communicate with one another and with their moving counterparts. Owing to the extensive parking in cities, parked vehicles are natural roadside nodes characterized by large numbers, long-time staying, wide distribution, and specific locations. Parked vehicles can therefore serve as a static backbone and service infrastructure to improve connectivity. We investigate network connectivity in PVA through theoretic analysis, a realistic survey and simulations. The results show that even a small proportion of PVA vehicles can overcome sparse or unbalanced traffic and greatly improve network connectivity. Thus, PVA enhances VANETs from the bottom up, and paves the way for new hybrid networks with static and mobile nodes.
【Keywords】: traffic engineering computing; vehicular ad hoc networks; PVA; VANET; network connectivity; parked vehicle assistance; rechargable battery; vehicular ad hoc networks; wireless device; Ad hoc networks; Cities and towns; Delay; Relays; Roads; Vehicles; Wireless communication; PVA; VANET; connectivity; parking
【Paper Link】 【Pages】:436-440
【Authors】: Emmanuel Baccelli ; Philippe Jacquet ; Bernard Mans ; Georgios Rodolakis
【Abstract】: In this paper, we provide an analysis of the information propagation speed in bidirectional vehicular delay tolerant networks on highways. We show that a phase transition occurs concerning the information propagation speed, with respect to the vehicle densities in each direction of the highway. We prove that under a certain threshold, information propagates on average at vehicle speed, while above this threshold, information propagates dramatically faster, at a speed that increases exponentially as vehicle density increases. We provide the exact expressions of the threshold and of the average propagation speed near the threshold. We show that under the threshold, the information propagates over a distance that is bounded by a sub-linear power law with respect to the elapsed time, in the reference frame of the moving cars. On the other hand, we show that the information propagation speed grows quasi-exponentially with respect to the vehicle densities in each direction of the highway when the densities become large, above the threshold. We confirm our analytical results using simulations carried out in several environments.
【Keywords】: roads; vehicular ad hoc networks; bidirectional vehicular delay tolerant networks; highways; information propagation speed; moving cars; sublinear power law; Ad hoc networks; Analytical models; Bridges; Delay; Roads; Vehicles
【Paper Link】 【Pages】:441-445
【Authors】: Fulong Xu ; Shuo Guo ; Jaehoon Jeong ; Yu Gu ; Qing Cao ; Ming Liu ; Tian He
【Abstract】: Vehicular ad hoc networks (VANETs) represent promising technologies for improving driving safety and efficiency. Due to the highly dynamic driving patterns of vehicles, it has been a challenging research problem to achieve effective and time-sensitive data forwarding in vehicular networks. In this paper, a Shared-Trajectory-based Data Forwarding Scheme (STDFS) is proposed, which utilizes shared vehicle trajectory information to address this problem. With access points sparsely deployed to disseminate vehicles' trajectory information, the encounters between vehicles can be predicted by the vehicle that has data to send, and an encounter graph is then constructed to aid packet forwarding. This paper focuses on the specific issues of STDFS such as encounter prediction, encounter graph construction, forwarding sequence optimization and the data forwarding process. Simulation results demonstrate the effectiveness of the proposed scheme.
【Keywords】: packet radio networks; vehicular ad hoc networks; VANET; data forwarding process; disseminate vehicle trajectory information; forwarding sequence optimization; graph construction; packet forwarding; shared vehicle trajectory information; shared-trajectory-based data forwarding scheme; time-sensitive data forwarding; vehicular ad hoc networks; Delay; Protocols; Roads; Safety; Trajectory; Vehicles
【Paper Link】 【Pages】:446-450
【Authors】: Utku Günay Acer ; Paolo Giaccone ; David Hay ; Giovanni Neglia ; Saed Tarapiah
【Abstract】: WiFi-enabled buses and stops may form the backbone of a metropolitan delay tolerant network that exploits nearby communications, temporary storage at stops, and predictable bus mobility to deliver non-real-time information. This paper studies the problem of how to route data from its source to its destination in order to maximize the delivery probability by a given deadline. We assume the bus schedule is known, but we take into account that randomness, due to road traffic conditions or passengers boarding and alighting, affects bus mobility. We propose a simple stochastic model for bus arrivals at stops, supported by a study of real-life traces collected in a large urban network. A succinct graph representation of this model allows us to devise an optimal (under our model) single-copy routing algorithm and then extend it to cases where several copies of the same data are permitted. Through an extensive simulation study, we compare the optimal routing algorithm with three other approaches: minimizing the expected traversal time over our graph, minimizing the number of hops a packet can travel, and a recently proposed heuristic based on bus frequencies. Our optimal algorithm outperforms all of them, but most of the time it essentially reduces to minimizing the expected traversal time. For values of the deadline close to the expected delivery time, the multi-copy extension requires only 10 copies to reach almost the performance of the costly flooding approach.
【Keywords】: graph theory; probability; road traffic; stochastic processes; telecommunication network routing; transportation; wireless LAN; WiFi-enabled buses; bus arrivals; bus frequency; bus schedule; costly flooding approach; delivery probability; delivery time; expected traversal time; large urban network; metropolitan delay tolerant network; multicopy extension; nonreal time information; optimal algorithm; optimal routing algorithm; passengers alighting; passengers boarding; predictable bus mobility; real-life traces; realistic bus network; recently-proposed heuristic; road traffic conditions; route data; single-copy routing algorithm; stochastic model; succinct graph representation; temporary storage; timely data delivery; Delay; Random variables; Routing; Schedules; Trajectory; Vehicles
【Paper Link】 【Pages】:451-455
【Authors】: Sourjya Bhaumik ; David Chuck ; Girija J. Narlikar ; Gordon T. Wilfong
【Abstract】: Access networks, in particular, Digital Subscriber Line (DSL) equipment, are a significant source of energy consumption for wireline operators. Replacing large monolithic DSLAMs with smaller remote DSLAM units closer to customers can reduce the energy consumption as well as increase the reach of the access network. This paper attempts to formalize the design and optimization of the “last mile” wireline access network with energy as one of the costs to be minimized. In particular, the placement of remote DSLAM units needs to be optimized. We propose solutions for two scenarios. For the scenario where an existing all-copper network from the central office to the customers is to be transformed into a fiber-copper network with remote DSLAM units, we present efficient polynomial-time solutions. For the green-field scenario, where both the access network layout and the placement of remote DSLAM units must be determined, we show that this problem is NP-complete. We present an optimal ILP formulation and also design an efficient heuristic-based approach to build a power and cost optimized access network. Our heuristic-based approach yields results that are very close to optimal. We show how the power consumption of the access network can be reduced by carefully planning the access network and introducing remote DSLAM units.
【Keywords】: communication complexity; digital subscriber lines; telecommunication network planning; DSL equipment; DSLAM; NP-complete; all-copper network; digital subscriber line; energy-efficient design; energy-efficient optimization; fiber-copper network; green-field scenario; heuristic-based approach; planning; power consumption; wireline access networks; Conferences; Copper; DSL; Green products; Heuristic algorithms; Power demand; Steiner trees
【Paper Link】 【Pages】:456-460
【Authors】: Omur Ozel ; Kaya Tutuncuoglu ; Jing Yang ; Sennur Ulukus ; Aylin Yener
【Abstract】: Wireless systems comprised of rechargeable nodes have a significantly prolonged lifetime and are sustainable. A distinct characteristic of these systems is the fact that the nodes can harvest energy throughout the duration in which communication takes place. As such, transmission policies of the nodes need to adapt to these harvested energy arrivals. In this paper, we consider optimization of the transmission policy of an energy harvesting transmitter which has a limited battery capacity, communicating over a wireless fading channel. In particular, we identify the optimal offline transmission policies that maximize the number of bits delivered by a deadline, and minimize the transmission completion time of the communication session. We introduce a directional water-filling algorithm which provides a simple and concise interpretation of the necessary optimality conditions, as well as of the energy storage capacity and causality constraints. We solve the throughput maximization problem for the fading channel using the directional water-filling algorithm, which simultaneously adapts to the energy harvested as well as to the channel variations in time. We then solve the transmission completion time minimization problem by utilizing its equivalence to its throughput maximization counterpart.
【Keywords】: energy harvesting; fading channels; minimisation; radio transmitters; resource allocation; channel variation; communication session; directional water-filling algorithm; energy harvesting node; energy harvesting transmitter; energy storage capacity; harvested energy arrival; limited battery capacity; optimal offline transmission policy; optimization; rechargeable node; resource management; throughput maximization problem; transmission completion time minimization; wireless fading channel; wireless system; Batteries; Energy harvesting; Fading; Minimization; Resource management; Transmitters; Wireless communication
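The directional water-filling idea, where energy may only flow forward in time through "right-permeable" taps, can be illustrated numerically. The sketch below is a toy simplification relative to the paper: it assumes a static (non-fading) channel, unit-length epochs, and an unlimited battery, and the function name is mine:

```python
def directional_waterfill(energies, sweeps=10000, tol=1e-12):
    """Per-epoch transmit power maximizing total throughput when energy
    E_i arrives at the start of epoch i. Causality: energy may only be
    shifted forward in time, so adjacent water levels equalize only when
    the earlier level is higher (a right-permeable tap between epochs)."""
    levels = [float(e) for e in energies]
    for _ in range(sweeps):
        moved = False
        for i in range(len(levels) - 1):
            if levels[i] > levels[i + 1] + tol:   # water flows right only
                levels[i] = levels[i + 1] = 0.5 * (levels[i] + levels[i + 1])
                moved = True
        if not moved:
            break
    return levels

print(directional_waterfill([3, 0, 0]))  # ~[1.0, 1.0, 1.0]: spread forward
print(directional_waterfill([0, 2]))     # [0.0, 2.0]: cannot flow backward
```

Note that the resulting power levels are non-decreasing in time, which matches the known structure of optimal offline policies for a static channel.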
【Paper Link】 【Pages】:461-465
【Authors】: Miao He ; Sugumar Murugesan ; Junshan Zhang
【Abstract】: Integrating volatile renewable energy resources into the bulk power grid is challenging, due to the reliability requirement that the load and generation in the system remain balanced all the time. In this study, we tackle this challenge for smart grid with integrated wind generation, by leveraging multi-timescale dispatch and scheduling. Specifically, we consider smart grids with two classes of energy users - traditional energy users and opportunistic energy users (e.g., smart meters or smart appliances), and investigate pricing and dispatch at two timescales, via day-ahead scheduling and real-time scheduling. In day-ahead scheduling, with the statistical information on wind generation and energy demands, we characterize the optimal procurement of the energy supply and the day-ahead retail price for the traditional energy users; in real-time scheduling, with the realization of wind generation and the load of traditional energy users, we optimize real-time prices to manage the opportunistic energy users so as to achieve system-wide reliability. More specifically, when the opportunistic users are non-persistent, we obtain closed-form solutions to the two-level scheduling problem. For the persistent case, we treat the scheduling problem as a multi-timescale Markov decision process. We show that it can be recast, explicitly, as a classic Markov decision process with continuous state and action spaces, the solution to which can be found via standard techniques.
【Keywords】: Markov processes; power generation planning; power generation reliability; power grids; wind power; Markov decision process; bulk power grid; day-ahead retail price; day-ahead scheduling; energy supply; multiple timescale dispatch; multiple timescale scheduling; opportunistic energy user; real-time scheduling; smart appliance; smart grids; smart meter; statistical information; stochastic reliability; traditional energy user; volatile renewable energy resources; wind generation integration; Elasticity; Markov processes; Pricing; Real time systems; Schedules; Uncertainty; Wind energy generation
【Paper Link】 【Pages】:466-470
【Authors】: Mario Marchese ; Marco Cello ; Giorgio Gnecco ; Marcello Sanguineti
【Abstract】: Necessary optimality conditions for Call Admission Control (CAC) problems with nonlinearly-constrained feasibility regions and two classes of users are derived. The policies are restricted to the class of coordinate-convex policies. Two kinds of structural properties of the optimal policies and their robustness with respect to changes of the feasibility region are investigated: 1) general properties not depending on the revenue ratio associated with the two classes of users and 2) more specific properties depending on such a ratio. The results allow one to narrow the search for the optimal policies to a suitable subset of the set of coordinate-convex policies.
【Keywords】: convex programming; telecommunication congestion control; call admission control problem; nonlinearly-constrained feasibility region; optimal coordinate-convex policy; structural property; Robustness; Call Admission Control; Coordinate Convex Policies; Feasibility Region
【Paper Link】 【Pages】:471-475
【Authors】: Somayeh Sojoudi ; Steven H. Low ; John C. Doyle
【Abstract】: Almost all existing fluid models of congestion control assume that the fluid flow at the output of a link is the same as the fluid flow at the input of the link. This means that all links in the path of a flow see the original source rate. In reality, a fluid flow is modified by the queueing processes on its path, so that an intermediate link will generally not see the original source rate. In this paper, we propose a simple model that explicitly takes into account the effect of buffering on output flows. We study the dual and primal-dual algorithms that use implicit feedback and show that, while they are always asymptotically stable if feedback delay is ignored, they can be unstable in the new model.
【Keywords】: Internet; buffer storage; queueing theory; stability; telecommunication congestion control; Internet congestion controllers; buffers; fluid flow; fluid models; queueing processes; stability; Aggregates; Asymptotic stability; Delay; Equations; Heuristic algorithms; Internet; Stability analysis
【Paper Link】 【Pages】:476-480
【Authors】: Hao Jin ; Deng Pan ; Jason Liu ; Niki Pissinou
【Abstract】: Flow level bandwidth provisioning offers fine granularity bandwidth assurance for individual flows. It is especially important for virtual network based experiment environments, to isolate traffic of different experiments or different types, which may be fed to the same switch or router port. Existing flow level bandwidth provisioning solutions suffer from a number of drawbacks, including high implementation complexity, poor performance guarantees, and inefficiency in processing variable-length packets. In this paper, we study flow level bandwidth provisioning for combined-input-crosspoint-queued switches in the OpenFlow context. We propose the FEBR (Flow lEvel Bandwidth pRovisioning) algorithm, which reduces the switch scheduling problem to multiple instances of fair queueing problems, each employing a well studied fair queueing algorithm. FEBR can tightly emulate the ideal Generalized Processor Sharing model, and accurately guarantee the provisioned bandwidth. Further, we implement FEBR in the OpenFlow version 1.0 software switch. In conjunction with the capability of OpenFlow to flexibly define and manipulate flows, we thus provide a practical flow level bandwidth provisioning solution. Finally, we present extensive simulation and experiment data to validate the analytical results and evaluate our design.
【Keywords】: bandwidth allocation; packet switching; queueing theory; scheduling; CICQ switches; OpenFlow based flow level bandwidth provisioning; OpenFlow context; combined-input-crosspoint-queued switches; fair queueing problems; generalized processing sharing model; switch scheduling problem; variable length packets; virtual network; Algorithm design and analysis; Bandwidth; Delay; Global Positioning System; Scheduling; Software; Software algorithms; CICQ switches; OpenFlow; bandwidth provisioning
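FEBR reduces switch scheduling to per-crosspoint fair queueing instances. The standalone sketch below is not FEBR itself; it shows the tag-based mechanism that self-clocked fair queueing algorithms use to emulate Generalized Processor Sharing, with illustrative class and flow names:

```python
import heapq

class SCFQ:
    """Self-clocked fair queueing sketch: a packet of length L from flow
    f gets finish tag F = max(F_prev[f], V) + L / weight[f]; the link
    serves packets in increasing tag order, and virtual time V is set to
    the tag of the packet in service."""
    def __init__(self, weights):
        self.weights = dict(weights)
        self.finish = {f: 0.0 for f in weights}   # last finish tag per flow
        self.v = 0.0                              # self-clocked virtual time
        self.heap = []                            # (tag, seq, flow, length)
        self.seq = 0                              # FIFO tie-breaker

    def enqueue(self, flow, length):
        tag = max(self.finish[flow], self.v) + length / self.weights[flow]
        self.finish[flow] = tag
        heapq.heappush(self.heap, (tag, self.seq, flow, length))
        self.seq += 1

    def dequeue(self):
        tag, _, flow, length = heapq.heappop(self.heap)
        self.v = tag
        return flow, length

# Flow 'a' has twice flow 'b''s weight; with equal-size packets it
# drains its backlog about twice as fast.
s = SCFQ({'a': 2.0, 'b': 1.0})
for _ in range(3):
    s.enqueue('a', 1.0)
    s.enqueue('b', 1.0)
order = [s.dequeue()[0] for _ in range(6)]
print(order)
```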
【Paper Link】 【Pages】:481-485
【Authors】: Xiao Lei ; Laura Cottatellucci ; Konstantin Avrachenkov
【Abstract】: We consider a block-fading interference channel with partial channel state information and address the issue of joint power and rate allocation in a game-theoretic framework. The system is intrinsically affected by outage events. Resource allocation algorithms based on Bayesian games are proposed. The existence, uniqueness, and some stability properties of Nash equilibria (NE) are analyzed. For some asymptotic settings, closed-form expressions of the NEs are also provided.
【Keywords】: Bayes methods; fading channels; game theory; radiofrequency interference; Bayesian game; Nash equilibrium; block fading interference channel; game theoretic framework; outage event; partial channel state information; partial knowledge; power allocation; rate allocation; resource allocation; slow fading interfering channel; stability property; Bayesian methods; Fading; Games; Interference channels; Noise; Resource management
【Paper Link】 【Pages】:486-490
【Authors】: Boulat A. Bash ; Dennis Goeckel ; Donald F. Towsley
【Abstract】: Low power ad hoc wireless networks operate in conditions where channels are subject to fading. Cooperative diversity mitigates fading in these networks by establishing virtual antenna arrays through clustering the nodes. A cluster in a cooperative diversity network is a collection of nodes that cooperatively transmits a single packet. There are two types of clustering schemes: static and dynamic. In static clustering all nodes start and stop transmission simultaneously, and nodes do not join or leave the cluster while the packet is being transmitted. Dynamic clustering allows a node to join an ongoing cooperative transmission of a packet as soon as the packet is received. In this paper we take a broad view of the cooperative network by examining packet flows, while still faithfully implementing the physical layer at the bit level. We evaluate both clustering schemes using simulations on large multi-flow networks. We demonstrate that dynamically-clustered cooperative networks substantially outperform both statically-clustered cooperative networks and classical point-to-point networks.
【Keywords】: ad hoc networks; antenna arrays; cooperative communication; diversity reception; fading channels; ad hoc wireless network; cooperative diversity; cooperative network clustering; dynamic clustering; fading channel; packet flow; static clustering; virtual antenna arrays; Fading; Interference; Physical layer; Routing; Strips; Transmitters; Wireless networks
【Paper Link】 【Pages】:491-495
【Authors】: Krishna P. Jagannathan ; Shie Mannor ; Ishai Menache ; Eytan Modiano
【Abstract】: We consider scheduling over a wireless system, where the channel state information is not available a priori to the scheduler, but can be inferred from the past. Specifically, the wireless system is modeled as a network of parallel queues. We assume that the channel state of each queue evolves stochastically as an ON/OFF Markov chain. The scheduler, which is aware of the queue lengths but is oblivious of the channel states, has to choose one queue at a time for transmission. The scheduler has no information regarding the current channel states, but can estimate them by using the acknowledgment history. We first characterize the capacity region of the system using tools from Markov Decision Processes (MDP) theory. Specifically, we prove that the capacity region boundary is the uniform limit of a sequence of Linear Programming (LP) solutions. Next, we combine the LP solution with a queue length based scheduling mechanism that operates over long `frames,' to obtain a throughput optimal policy for the system. By incorporating results from MDP theory within the Lyapunov-stability framework, we show that our frame-based policy stabilizes the system for all arrival rates that lie in the interior of the capacity region.
【Keywords】: Lyapunov methods; Markov processes; linear programming; scheduling; stability; wireless channels; Lyapunov stability framework; Markov chain; Markov decision processes theory; channel state information; frame-based policy; linear programming; parallel queues; scheduling mechanism; state action frequency approach; throughput maximization; uncertain wireless channels; wireless system
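Because the scheduler in this setting sees channel states only through past acknowledgments, it must maintain a belief about each ON/OFF Markov channel. For a two-state chain this belief has a simple closed form; the sketch below uses illustrative parameter values, not any from the paper:

```python
def p_on(p01, p10, last_state, k):
    """Belief that a two-state (ON/OFF) Markov channel is ON, k slots
    after last observing it in last_state (1 = ON, 0 = OFF).
    p01 = P(OFF -> ON), p10 = P(ON -> OFF). Closed form:
        P(ON) = pi + (last_state - pi) * lam**k
    with stationary pi = p01 / (p01 + p10) and lam = 1 - p01 - p10."""
    pi = p01 / (p01 + p10)
    lam = 1.0 - p01 - p10      # second eigenvalue: memory decays as lam**k
    return pi + (last_state - pi) * lam ** k

# With P(OFF->ON) = 0.2 and P(ON->OFF) = 0.1 (illustrative numbers):
print(p_on(0.2, 0.1, 1, 1))    # ~0.9: one slot after an ACK (channel was ON)
print(p_on(0.2, 0.1, 0, 50))   # ~0.667: old observations fade to stationary
```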
【Paper Link】 【Pages】:496-500
【Authors】: Saeed Moghaddam ; Ahmed Helmy
【Abstract】: Users' behavior and interests will play a central role in future mobile networks. We introduce a systematic method for large-scale multi-dimensional analysis of online activity for thousands of mobile users across 79 buildings and over 100 web domains. We propose a modeling approach based on self-organizing maps for discovering, organizing and visualizing different mobile users' trends from billions of WLAN and netflow records. We find, surprisingly, that users' trends based on domains or locations can be modeled using a self-organizing map with clearly distinct characteristics. This is the first study to acquire such detailed results for wireless users' Internet access patterns.
【Keywords】: Internet; behavioural sciences; mobile radio; self-organising feature maps; WLAN; Web domain; large-scale multidimensional analysis; mobile network; mobile user trend; netflow record; online activity; self-organizing map; spatio-temporal modeling; user behavior; user interest; wireless user Internet access pattern; Analytical models; Correlation; IP networks; Internet; Mobile communication; Mobile computing; Wireless communication; data-driven; self-organizing map; trend; wireless
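The modeling machinery here is the self-organizing map. A minimal pure-Python 1-D SOM sketch; the hyperparameters, toy data, and function name are illustrative, and the paper's maps over domain/location features are far larger:

```python
import math
import random

def train_som(data, n_units, dim, epochs=200, seed=0):
    """Train a 1-D self-organizing map: for each sample, find the
    best-matching unit (BMU) and pull it and its grid neighbours toward
    the sample; learning rate and neighbourhood width decay over time."""
    rnd = random.Random(seed)
    units = [[rnd.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        lr = 0.5 * (1.0 - t / epochs)                            # step size
        sigma = max(1e-3, (n_units / 2.0) * (1.0 - t / epochs))  # width
        for x in data:
            bmu = min(range(n_units),
                      key=lambda i: sum((units[i][d] - x[d]) ** 2
                                        for d in range(dim)))
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
                for d in range(dim):
                    units[i][d] += lr * h * (x[d] - units[i][d])
    return units

# Two artificial 'trend' clusters; after training some unit lands near each.
data = [(0.0, 0.0), (1.0, 1.0), (0.1, 0.0), (0.9, 1.0),
        (0.0, 0.1), (1.0, 0.9)] * 5
units = train_som(data, n_units=5, dim=2)
```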
【Paper Link】 【Pages】:501-505
【Authors】: Thanh Dang ; Wu-chi Feng ; Nirupama Bulusu
【Abstract】: As sensor networking technologies continue to develop, the notion of adding large-scale mobility into sensor networks is becoming feasible by crowd-sourcing data collection to personal mobile devices. However, tasking such networks at fine granularity becomes problematic because the sensors are heterogeneous and owned by users instead of network operators. In this paper, we present Zoom, a multi-resolution tasking framework for crowdsourced geo-spatial sensor networks. Zoom allows users to define arbitrary sensor groupings over heterogeneous, unstructured and mobile networks and assign different sensing tasks to each group. The key idea is the separation of the task information (what task a particular sensor should perform) from the task implementation (code). Zoom consists of (i) a map, an overlay on top of a geographic region, to represent both the sensor groups and the task information, and (ii) adaptive encoding of the map at multiple resolutions and region-of-interest cropping for resource-constrained devices, allowing sensors to zoom in quickly to a specific region to determine their task. Simulation of a realistic traffic application over an area of 1 sq. km with a task map of size 1.5 KB shows that more than 90% of nodes are tasked correctly. Zoom also outperforms Logical Neighborhoods, the state-of-the-art tasking protocol, in task information size for similar tasks. Its encoded map size is always less than 50% of Logical Neighborhoods' predicate size.
【Keywords】: mobile communication; wireless sensor networks; Zoom; adaptive encoding; crowd-sourcing data collection; crowdsourced geo-spatial sensing; large-scale mobility; logical neighborhoods; mobile networks; multiresolution tasking framework; personal mobile devices; sensor networking technologies; sensor networks; tasking protocol; Encoding; Image resolution; Indexes; Mobile communication; Pixel; Roads; Sensors
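Zoom's actual map encoding is image-based with adaptive resolution and region-of-interest cropping; a quadtree lookup is a simple stand-in that conveys the same "zoom in to your region, read your task" idea. The data layout below is my own illustrative convention, not Zoom's format:

```python
def task_at(node, x, y, x0=0.0, y0=0.0, size=1.0):
    """Look up the task id for position (x, y) in a quadtree task map.
    A node is either a leaf (task id) or a list of four children
    ordered [bottom-left, bottom-right, top-left, top-right]."""
    if not isinstance(node, list):
        return node                       # leaf: uniform task over this cell
    half = size / 2.0
    ix = 1 if x >= x0 + half else 0       # right half?
    iy = 1 if y >= y0 + half else 0       # top half?
    return task_at(node[2 * iy + ix], x, y,
                   x0 + ix * half, y0 + iy * half, half)

# Coarse map: most quarters carry a single task, but the top-right
# quarter is refined, so sensors there need one more 'zoom' step.
tree = ['A', 'B', 'C', ['A', 'D', 'D', 'D']]
print(task_at(tree, 0.1, 0.1))  # 'A' (bottom-left quarter)
print(task_at(tree, 0.9, 0.9))  # 'D' (refined top-right quarter)
```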
【Paper Link】 【Pages】:506-510
【Authors】: Massimiliano Pierobon ; Ian F. Akyildiz
【Abstract】: Molecular Communication (MC) is a promising bio-inspired paradigm in which molecules are transmitted, propagated and received between nanoscale machines. One of the main challenges is the theoretical study of the maximum achievable information rate (capacity). The objective of this paper is to provide a mathematical expression for the capacity in MC nanonetworks when the propagation of the information relies on the free diffusion of molecules. Solutions from statistical mechanics and thermodynamics are used to derive a closed-form expression for the capacity as a function of physical parameters, such as the size of the system, the temperature and the number of molecules, as well as of the bandwidth of the system and the transmitted power. The extremely high order of magnitude of the numerical capacity values demonstrates the enormous potential of diffusion-based MC systems.
【Keywords】: biomimetics; channel capacity; nanotechnology; statistical mechanics; thermodynamics; bioinspired paradigm; closed form expression; diffusion based molecular communication; information capacity; maximum achievable information rate; molecular communication nanonetworks; nanoscale machine; statistical mechanics; thermodynamics; Bandwidth; Entropy; Mathematical model; Molecular communication; Receivers; Thermodynamics; Transmitters
【Paper Link】 【Pages】:511-515
【Authors】: Peng-Jun Wan ; Minming Li ; Lixin Wang ; Zhu Wang ; Ophir Frieder
【Abstract】: Longest Queue First (LQF) is a well-known link scheduling strategy in multihop wireless networks. Its throughput efficiency ratio was shown to be exactly the local pooling factor (LPF) of the multihop wireless network in recent seminal work by Joo et al. Under the 802.11 interference model with uniform interference radii, the LPF of a multihop wireless network was known to be at least 1/6. However, little is known about the LPF of a multihop wireless network under the 802.11 interference model with arbitrary interference radii or under the protocol interference model. In this paper, we derive constant lower bounds on the LPFs of these multihop wireless networks. Specifically, under the 802.11 interference model with arbitrary interference radii, the LPF is at least 1/16. Under the protocol interference model, if the communication radius of each node is at most c times its interference radius for some c < 1, then the LPF is at least 1/(2⌈π/arcsin((1-c)/2)⌉ - 1).
【Keywords】: protocols; queueing theory; radio networks; radiofrequency interference; scheduling; wireless LAN; 802.11 interference model; constant lower bounds; link scheduling strategy; local pooling factor; longest queue first; multihop wireless networks; protocol interference model; throughput efficiency ratio; uniform interference radii; IEEE 802.11 Standards; Interference; Optimal scheduling; Protocols; Spread spectrum communication; Throughput; Wireless networks; Link scheduling; interference; local pooling factor; longest queue first; throughput efficiency ratio
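The greedy scheduling step behind LQF is compact enough to sketch. The conflict-graph representation and function below are illustrative assumptions for exposition, not the paper's formulation or code:

```python
def lqf_schedule(queues, conflicts):
    """One slot of Longest Queue First: activate backlogged links in
    decreasing queue-length order, skipping any link that conflicts
    (interferes) with an already-activated one.
    queues: dict link -> backlog; conflicts: dict link -> set of links."""
    schedule = set()
    for link in sorted(queues, key=queues.get, reverse=True):
        if queues[link] > 0 and not (conflicts.get(link, set()) & schedule):
            schedule.add(link)
    return schedule
```

For example, with backlogs a=5, b=3, c=4 and links a, b mutually conflicting, LQF activates a and c while deferring b. The LPF bounds above quantify how far this greedy choice can fall short of the optimal throughput region.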
【Paper Link】 【Pages】:516-520
【Authors】: Greg Kuperman ; Eytan Modiano ; Aradhana Narula-Tam
【Abstract】: This paper develops a mesh network protection scheme that guarantees a quantifiable minimum grade of service upon a failure within a network. The scheme guarantees that a fraction q of each demand remains after any single link failure. A linear program is developed to find the minimum-cost capacity allocation to meet both demand and protection requirements. For q ≤ 1/2, an exact algorithmic solution for the optimal routing and allocation is developed using multiple shortest paths. For q > 1/2, a heuristic algorithm based on disjoint path routing is developed that performs, on average, within 1.4% of optimal, and runs four orders of magnitude faster than the minimum-cost solution achieved via the linear program. Moreover, the partial protection strategies developed achieve reductions of up to 82% over traditional full protection schemes.
【Keywords】: telecommunication network routing; telecommunication security; wireless mesh networks; heuristic algorithm; linear program; mesh network protection; minimum-cost capacity allocation; optimal allocation; optimal routing; partial protection; Algorithm design and analysis; Heuristic algorithms; Mesh networks; Optical fiber networks; Resource management; Routing; WDM networks
【Paper Link】 【Pages】:521-525
【Authors】: Abhijeet Bhorkar ; Tara Javidi ; Alex C. Snoeren
【Abstract】: This work presents the Congestion Diversity Protocol (CDP), a routing protocol for multi-hop wireless networks that combines important aspects of shortest-path and backpressure routing to achieve improved end-to-end delay performance. In particular, CDP delivers lower end-to-end delay and fewer packet drops than existing routing protocols while maintaining equivalent throughput. This paper reports on a practical (hardware and software) implementation of CDP in an indoor WiFi network consisting of 12 802.11g nodes. This small test-bed enables an empirical comparison of CDP's performance against a set of state-of-the-art protocols, including both congestion-unaware and congestion-aware routing protocols. In most topologies and scenarios we consider, CDP provides improvements for UDP traffic with respect to both end-to-end delay and throughput over the existing protocols.
【Keywords】: ad hoc networks; routing protocols; UDP traffic; backpressure routing; congestion diversity protocol; end-end delay performance; multihop wireless networks; routing protocol; shortest-path; wireless ad hoc networks; Ad hoc networks; Delay; Routing; Routing protocols; Throughput
【Paper Link】 【Pages】:526-530
【Authors】: Honghai Zhang ; Yuanxi Jiang ; Karthikeyan Sundaresan ; Sampath Rangarajan ; Baohua Zhao
【Abstract】: Using beamforming antennas to improve wireless multicast transmissions has received considerable attention recently. Prior work proposes to partition all single-lobe beams into groups and to form composite multi-lobe beam patterns to transmit multicast traffic. The key remaining challenge is how to partition the beams into multiple groups so as to minimize the total multicast delay. This work makes significant progress on both the algorithmic and analytic aspects of the problem. We prove that, under the asymmetric power split (ASP) model, it is NP-hard to obtain a (3/2 - ε)-approximation algorithm for any ε > 0. We then develop an APTAS, an asymptotic (3/2 + β)-approximation solution (where β ≥ 0 depends on the wireless technology), and an asymptotic 2-approximation solution to the problem by relating it to a generalized version of the bin-packing problem. Extensive trace-driven simulations based on real-world channel measurements corroborate our analytical results by showing significant improvement over state-of-the-art algorithms.
【Keywords】: approximation theory; array signal processing; communication complexity; multicast communication; radio networks; APTAS; NP-hard; approximation algorithm; asymmetric power split model; asymptotic approximation solution; bin-packing problem; multi-lobe beam patterns; multicast traffic; single-lobe beams; switched beamforming antennas; wireless data multicasting; wireless multicast transmissions; Antennas; Approximation algorithms; Array signal processing; Delay; Signal to noise ratio; Switches; Wireless communication
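Since the beam-grouping problem is related above to a generalized bin-packing problem, the classical first-fit-decreasing heuristic gives a feel for the packing step. This is the textbook baseline, not the paper's APTAS; treating per-beam weights and a group capacity as the only inputs is a simplifying assumption:

```python
def first_fit_decreasing(weights, capacity):
    """Classic first-fit-decreasing bin packing: place each item
    (largest first) into the first group whose residual capacity fits it;
    open a new group otherwise. Returns the list of groups."""
    bins = []  # each bin is [load, items]
    for w in sorted(weights, reverse=True):
        for b in bins:
            if b[0] + w <= capacity:
                b[0] += w
                b[1].append(w)
                break
        else:  # no existing group fits this item
            bins.append([w, [w]])
    return [items for _, items in bins]
```

With integer weights [6, 5, 4, 3, 2] and capacity 10, the heuristic produces two groups, [6, 4] and [5, 3, 2]; the paper's asymptotic approximation guarantees bound how far such packings can be from the delay-optimal grouping.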
【Paper Link】 【Pages】:531-535
【Authors】: Hsiao-Chen Lu ; Wanjiun Liao
【Abstract】: In this paper, we study the cooperative strategies of relay stations in wireless relay networks. As pure relay stations distributed across the network and centrally controlled by the base station can be exploited to form cooperative antenna arrays, we determine which relay stations should cooperate with one another so that the performance of wireless relay networks can be optimized. Specifically, we define and formulate the throughput maximization problem for wireless relay networks under a practical model of relay station cooperation. We design an iterative search algorithm to solve the relay station cooperation problem. The proposed algorithm is shown via simulations to yield near-optimal solutions with only linear-time complexity. The simulation results also show that cooperative transmissions of relay stations can significantly improve system throughput and reduce the handover probability of mobile stations. More importantly, the placement of relay stations is crucial to the throughput gain obtained by relay station cooperation.
【Keywords】: cooperative communication; iterative methods; mobility management (mobile radio); radio networks; search problems; base station; cooperative antenna arrays; cooperative strategy; handover probability; iterative search algorithm; linear-time complexity; mobile stations; relay station cooperation problem; wireless relay networks; Clustering algorithms; Diversity methods; Gain; Relays; Throughput; Wireless communication; IEEE 802.16j; cooperative communications; cooperative strategies; wireless relay networks
【Paper Link】 【Pages】:536-540
【Authors】: Niels Karowski ; Aline Carneiro Viana ; Adam Wolisz
【Abstract】: We consider the problem of neighbor discovery in wireless networks with nodes operating in multiple frequency bands and with asymmetric beacon intervals. This is a challenging task under such heterogeneous operation conditions and when performed without any external assistance. We present a linear programming (LP) optimization and two strategies, named OPT and SWOPT, allowing nodes to perform fast, asynchronous, and passive discovery. Our optimization is slot-based and determines a listening schedule describing when to listen, for how long, and on which channel. We compare our strategies with the passive discovery of the IEEE 802.15.4 standard. The results confirm that our optimization improves performance in terms of first, average, and last discovery time.
【Keywords】: linear programming; radio networks; IEEE 802.15.4 standard; LP optimization; asymmetric beacon intervals; linear programming; listening schedule; multiple frequency bands; optimized asynchronous multichannel neighbor discovery; wireless networks; Acceleration; Linear programming; Optimized production technology; Schedules; Switches; Wireless networks
【Paper Link】 【Pages】:541-545
【Authors】: Ruichuan Chen ; Eng Keong Lua ; Zhuhua Cai
【Abstract】: Online social networking systems are rapidly becoming popular for users to share, organize and locate interesting content. However, these systems have increasingly been employed as platforms to spread spam and irrelevant content, abusing valuable human attention and service resources. In this paper, we propose a social reputation model to guide users to browse desirable content. First, we compute the statistical correlation between different users to distinguish various user interests; then, since a user's friends are usually trustworthy and share similar interests, we further exploit the inherent friend relationships to perform reliable social enhancements of vote history extension and efficient reputation estimation. Our social reputation model provides strong incentives for user cooperation, and moreover, our model can handle the practical problems of inactive users, unpopular content and Sybil attacks effectively and efficiently. Our evaluation on a large-scale network validates our analysis, and shows that our social reputation model can help users find desirable content in various scenarios with a precision of 94%.
【Keywords】: security of data; social networking (online); statistical analysis; user interfaces; Sybil attacks; online social networks; reputation estimation; social enhancement; social networking system; social reputation model; statistical correlation; user cooperation; user interest; vote history extension; Analytical models; Computational modeling; Correlation; Databases; Estimation; History; Social network services
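The first step above, measuring statistical correlation between users' vote histories, can be sketched with a Pearson correlation over co-rated items. This is an illustrative stand-in; the paper's exact correlation statistic and conditional-probability machinery may differ:

```python
from statistics import mean

def interest_similarity(votes_u, votes_v):
    """Pearson correlation over items both users voted on.
    votes_u, votes_v: dict item -> numeric vote. Returns a value in
    [-1, 1]; 0 when fewer than two common items or zero variance."""
    common = sorted(set(votes_u) & set(votes_v))
    if len(common) < 2:
        return 0.0
    xs = [votes_u[i] for i in common]
    ys = [votes_v[i] for i in common]
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0
```

Two users whose votes rise and fall together score near +1, opposed tastes score near -1, and the friend-relationship enhancements in the paper then build on such pairwise similarities.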
【Paper Link】 【Pages】:546-550
【Authors】: Ajay Sridharan ; Yong Gao ; Kui Wu ; James Nastos
【Abstract】: Degree distribution of nodes, especially a power law degree distribution, has been regarded as one of the most significant structural characteristics of social and information networks. Node degree, however, only discloses the first-order structure of a network. Higher-order structures such as the edge embeddedness and the size of communities may play more important roles in many online social networks. In this paper, we provide empirical evidence on the existence of rich higher-order structural characteristics in online social networks, develop mathematical models to interpret and model these characteristics, and discuss their various applications in practice. In particular, 1) We show that the embeddedness distribution of links in social networks has interesting and rich behavior that cannot be captured by well-known network models. 2) We formally prove that the random k-tree, a recent model for complex networks, has a power law embeddedness distribution, and show empirically that the random k-tree model can be used to capture the rich behavior of higher-order structures we observed in a real-world social network. 3) Going beyond embeddedness, we show that a variant of the random k-tree model can be used to capture the power law distribution of the size of communities of overlapping cliques discovered recently.
【Keywords】: social networking (online); trees (mathematics); community size; edge embeddedness behavior; node degree distribution; online social networks; overlapping clique community; power law distribution; random k-tree model; Analytical models; Barium; Communities; Complex networks; Electronic mail; Mathematical model; Social network services
【Paper Link】 【Pages】:551-555
【Authors】: Xiwang Yang ; Yang Guo ; Yong Liu
【Abstract】: In this paper, we propose a Bayesian-inference based recommendation system for online social networks. In our system, users share their movie ratings with friends. The rating similarity between a pair of friends is measured by a set of conditional probabilities derived from their mutual rating history. A user propagates a movie rating query along the social network to his direct and indirect friends. Based on the query responses, a Bayesian network is constructed to infer the rating of the querying user. We develop distributed protocols that can be easily implemented in online social networks. The proposed algorithm is evaluated in a synthesized social network derived from a movie rating data set of real users. We show that the Bayesian-inference based recommendation provides personalized recommendations as accurate as traditional collaborative filtering (CF) approaches, and allows flexible trade-offs between recommendation quality and recommendation quantity.
【Keywords】: belief networks; inference mechanisms; probability; recommender systems; social networking (online); user interfaces; Bayesian network; Bayesian-inference based recommendation system; conditional probability; distributed protocols; mutual rating history; online social networks; user query; user rating; Accuracy; Bayesian methods; Correlation; Engines; Motion pictures; Probability distribution; Social network services
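The inference step, combining friends' responses through conditional probabilities into a posterior over the querying user's rating, can be sketched with a naive-Bayes-style fusion. The factorized (independent-friends) form below is a simplifying assumption; the paper constructs a full Bayesian network:

```python
def infer_rating(prior, cond, responses):
    """Posterior over the querying user's rating r, fusing each friend f's
    reported rating responses[f] under independence.
    prior: dict r -> P(r); cond: dict (f, r, rf) -> P(rf | r)."""
    post = dict(prior)
    for f, rf in responses.items():
        for r in post:
            # unseen (friend, rating) pairs get a small smoothing probability
            post[r] *= cond.get((f, r, rf), 1e-6)
    z = sum(post.values())
    return {r: p / z for r, p in post.items()}
```

For a binary rating with a uniform prior, a friend whose "likes" co-occur far more often with the user's own "likes" (P = 0.9 vs. 0.2) shifts the posterior strongly toward "like" after a single positive response.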
【Paper Link】 【Pages】:556-560
【Authors】: Bin Liu ; Peter Terlecky ; Amotz Bar-Noy ; Ramesh Govindan ; Michael J. Neely
【Abstract】: With the advent of smartphone technology, it has become possible to conceive of entirely new classes of applications. Social swarming, in which users armed with smartphones are directed by a central director to report on events in the physical world, has several real-world applications. In this paper, we focus on the following problem: how does the director optimize the selection of reporters to deliver credible corroborating information about an event? We first propose a model, based on common intuitions of believability, about the credibility of information. We then cast the problem as a discrete optimization problem, and introduce optimal centralized solutions and an approximate solution amenable to decentralized implementation whose performance is about 20% off on average from the optimal while being 3 orders of magnitude more computationally efficient. More interestingly, a time-averaged version of the problem is amenable to a novel stochastic utility optimization formulation, and can be solved optimally, while in some cases yielding decentralized solutions.
【Keywords】: mobile handsets; stochastic programming; user interfaces; believability intuition; discrete optimization problem; information credibility; smartphone technology; social swarming application; stochastic utility optimization formulation; Approximation algorithms; Approximation methods; Complexity theory; Context; Dynamic programming; Heuristic algorithms; Optimization
【Paper Link】 【Pages】:561-565
【Authors】: Muhammad Usman Ilyas ; Muhammad Zubair Shafiq ; Alex X. Liu ; Hayder Radha
【Abstract】: This paper addresses the problem of identifying the top-k information hubs in a social network. Identifying top-k information hubs is crucial for many applications such as advertising in social networks, where advertisers are interested in identifying hubs to whom free samples can be given. Existing solutions are centralized and require time-stamped information about pairwise user interactions, and can only be used by social network owners as only they have access to such data. Existing distributed and privacy-preserving algorithms suffer from poor accuracy. In this paper, we propose a new algorithm to identify information hubs that preserves user privacy. The intuition is that highly connected users tend to have more interactions with their neighbors than less connected users. Our method can identify hubs without requiring a central entity to access the complete friendship graph. We achieve this by fully distributing the computation using the Kempe-McSherry algorithm to address user privacy concerns. To the best of our knowledge, the proposed algorithm represents arguably the first attempt that (1) uses friendship graphs (instead of interaction graphs), (2) employs a truly distributed method over friendship graphs, and (3) maintains user privacy by not requiring users to disclose their friend associations and interactions, for identifying information hubs in social networks. We evaluate the effectiveness of our proposed technique using a real-world Facebook data set containing about 3.1 million users and more than 23 million friendship links. The results of our experiments show that our algorithm is 50% more accurate than existing distributed algorithms. Results also show that the proposed algorithm can estimate the rank of the top-k information hub users more accurately than existing approaches.
【Keywords】: data privacy; distributed algorithms; social networking (online); user interfaces; Kempe-McSherry algorithm; distributed algorithm; friendship graph; information hub identification; pairwise user interaction; privacy preserving algorithm; social network; top-k information hub; user privacy; Communities; Correlation; Eigenvalues and eigenfunctions; Equations; Facebook; Privacy
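The intuition above, that hub identification reduces to a spectral ranking of the friendship graph, can be sketched with plain power iteration for eigenvector centrality. This centralized sketch is only a stand-in: the Kempe-McSherry algorithm computes the same kind of quantity in a decentralized, privacy-preserving fashion, which is the paper's point:

```python
def top_k_hubs(adj, k, iters=50):
    """Rank nodes by eigenvector centrality via power iteration.
    adj: dict node -> list of neighbors (undirected friendship graph)."""
    nodes = sorted(adj)
    x = {v: 1.0 for v in nodes}
    for _ in range(iters):
        # shift by the identity (x[v] + ...) to damp oscillation
        # that pure power iteration exhibits on bipartite graphs
        y = {v: x[v] + sum(x[u] for u in adj[v]) for v in nodes}
        norm = max(y.values())
        x = {v: y[v] / norm for v in nodes}
    return sorted(nodes, key=lambda v: -x[v])[:k]
```

On a star graph the center dominates the leading eigenvector and is returned as the top hub, matching the intuition that highly connected users surface first.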
【Paper Link】 【Pages】:566-570
【Authors】: Felix Ming Fai Wong ; Peter Marbach
【Abstract】: A fundamental challenge in peer-to-peer and online social networks is the design of a simple, distributed algorithm that allows users to discover, and connect to, peers who closely match their interests or preferences. In this paper, we consider an algorithm that is based on simple, local comparisons, and analyze it to provide insights into why similar peer discovery algorithms work well in practice. To do so, we use a mathematical framework to characterize the closeness of individual interests, and formally introduce the notion of a "perfect network formation" under the framework. Our analysis shows that the proposed algorithm indeed achieves perfect network formation. Our analysis uses bounding techniques based on Chernoff bounds.
【Keywords】: distributed algorithms; peer-to-peer computing; social networking (online); Chernoff bounds; bounding technique; distributed algorithm; mathematical framework; online social network; peer discovery algorithm; peer-to-peer network; perfect network formation; Algorithm design and analysis; Copper; Mathematical model; Motion pictures; Noise; Peer to peer computing; Social network services
【Paper Link】 【Pages】:571-575
【Authors】: Jaeok Park ; Mihaela van der Schaar
【Abstract】: Overcoming the inefficiency of non-cooperative outcomes poses an important challenge for network managers in achieving efficient utilization of network resources. This paper studies a class of incentive schemes based on intervention, which are aimed to drive self-interested users towards a system objective. A manager can implement an intervention scheme by introducing in the network an intervention device that is able to monitor the actions of users and to take an action that influences the network usage of users. We consider the case of perfect monitoring, where the intervention device can immediately observe the actions of users without errors. We also assume that there exist actions of the intervention device that are most and least preferred by all users and the intervention device, regardless of the actions of users. We derive analytical results about the outcomes achievable with intervention and optimal intervention rules, and illustrate the results with an example based on random access networks.
【Keywords】: computer network management; game theory; incentive schemes; incentive provision; intervention device; intervention games; network management; network resources; noncooperative outcomes; optimal intervention rules; perfect monitoring; random access networks; Economics; Games; Incentive schemes; Internet; Monitoring; Nash equilibrium; Pricing; Game theory; incentive schemes; intervention; network management; random access networks
【Paper Link】 【Pages】:576-580
【Authors】: Stefano Paris ; Cristina Nita-Rotaru ; Fabio Martignon ; Antonio Capone
【Abstract】: Wireless mesh networks (WMNs) have emerged as a flexible and low-cost network infrastructure, where heterogeneous mesh routers managed by different users collaborate to extend network coverage. Several routing protocols have been proposed to improve the packet delivery rate based on enhanced metrics that capture the wireless link quality. However, these metrics do not take into account that some participants can exhibit selfish behavior by selectively dropping packets sent by other mesh routers in order to prioritize their own traffic and increase their network utilization. This paper proposes a novel routing metric to cope with the problem of selfish behavior (i.e., packet dropping) of mesh routers in a WMN. Our solution combines, in a cross-layer fashion, routing-layer observations of forwarding behavior with MAC-layer measurements of wireless link quality to select the most reliable and high-performance path. We integrated the proposed metric with a well-known routing protocol for wireless mesh networks, OLSR, and evaluated it using the NS2 simulator. The results show that our cross-layer metric accurately captures the path reliability, even when a high percentage of network nodes misbehave, thus considerably increasing the WMN performance.
【Keywords】: routing protocols; wireless mesh networks; EFW; MAC-layer measurements; cross-layer metric; heterogeneous mesh routers; network infrastructure; network utilization; packet delivery rate; reliable routing; routing metric; routing protocols; selfish participants; wireless link quality; wireless mesh networks; Ad hoc networks; Measurement; Network topology; Routing; Routing protocols; Topology; Wireless communication; Packet Dropping; Routing Metrics; Selfish Nodes; Wireless Mesh Networks
【Paper Link】 【Pages】:581-585
【Authors】: MohammadHossein Bateni ; Mohammad Taghi Hajiaghayi ; Sina Jafarpour ; Dan Pei
【Abstract】: As wireless service providers move from flat-fee unlimited data plans to tiered usage-based ones, there has been little published research on how such tiered plans should be designed. In this paper, we tackle this problem from an algorithmic perspective: formulating the problem of tiered data pricing plans for a wireless provider, and proposing an efficient algorithmic framework to compute the plans. Our algorithmic framework can be applied to the usage and cost data of any provider to obtain the pricing functions specific to that provider.
【Keywords】: cellular radio; mobile handsets; pricing; cellular data service; efficient algorithmic framework; flat-fee unlimited data plans; tiered data pricing plans; wireless service providers; Approximation algorithms; Approximation methods; Cost function; Data models; Internet; Pricing; Wireless communication
【Paper Link】 【Pages】:586-590
【Authors】: Hong Xu ; Baochun Li
【Abstract】: In this paper, we advocate the use of the stable matching framework in solving networking problems, which are traditionally solved using utility-based optimization or game theory. Born in economics, stable matching efficiently resolves conflicts of interest among selfish agents in the market, with a simple and elegant procedure of deferred acceptance. We illustrate through one technical case study how it can be applied in practical scenarios where the complexity of idiosyncratic factors makes defining a utility function difficult. Owing to its use of generic preferences, stable matching has the potential to offer efficient and practical solutions to networking problems, while its mathematical structure and rich literature in economics provide many opportunities for theoretical studies. In closing, we discuss open questions in applying the stable matching framework.
【Keywords】: telecommunication networks; idiosyncratic factor; mathematical structure; networking problem; selfish agent; stable marriages; stable matching framework; utility function; Complexity theory; Economics; Educational institutions; Internet; Optimization; Proposals; Stability analysis
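The deferred acceptance procedure mentioned above is the classical Gale-Shapley algorithm, sketched here for one-to-one matching (the variable names and dict-based interface are illustrative choices):

```python
def deferred_acceptance(prop_prefs, acc_prefs):
    """Gale-Shapley deferred acceptance. Proposers work down their
    preference lists; each acceptor tentatively holds the best offer
    received so far, releasing a worse-ranked tentative partner.
    prop_prefs / acc_prefs: dict agent -> preference list of the other side.
    Returns dict acceptor -> matched proposer (proposer-optimal, stable)."""
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acc_prefs.items()}
    free = list(prop_prefs)          # proposers without a tentative match
    next_choice = {p: 0 for p in prop_prefs}
    match = {}                       # acceptor -> proposer
    while free:
        p = free.pop()
        a = prop_prefs[p][next_choice[p]]
        next_choice[p] += 1
        cur = match.get(a)
        if cur is None:
            match[a] = p             # first offer: hold it
        elif rank[a][p] < rank[a][cur]:
            match[a] = p             # better offer: swap, release cur
            free.append(cur)
        else:
            free.append(p)           # rejected: p tries its next choice
    return match
```

The appeal the abstract highlights is visible here: agents only submit ordinal preference lists, so no utility function over idiosyncratic factors ever needs to be defined.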
【Paper Link】 【Pages】:591-595
【Authors】: Gaoning He ; Samson Lasaulce ; Yezekael Hayel
【Abstract】: This paper addresses the power control problem in wireless networks where transmitters choose their control policy freely and selfishly in order to maximize their individual energy-efficiency. In this framework, two new scenarios are studied in detail: 1. a scenario where a fraction of the transmitters can observe the power levels of the other transmitters while the latter have no sensing capabilities; 2. a scenario where the observation structure is triangular, that is, the kth transmitter can observe transmitters 1 through k-1 (which corresponds to a multi-level hierarchical game). In both scenarios the equilibrium analysis (existence, uniqueness, determination, efficiency) is conducted. In scenario 1, it is proved that the game outcome Pareto dominates the one obtained when no transmitters can sense the others. Taking the sensing cost into account, a simple condition is provided under which being a follower (namely a transmitter who senses) is better than being a leader. Interestingly, the existence of an optimal fraction of cognitive transmitters in terms of sum utility is proved in the case where the sensing cost is neglected. In scenario 2, it is proved analytically that knowing more leads to a better utility and the game outcome Pareto dominates the solution with no sensing. The derived results are illustrated by numerical examples and provide insights on how to deploy cognitive radios in heterogeneous networks in terms of sensing capabilities.
【Keywords】: game theory; power control; radio networks; telecommunication control; Stackelberg games; cognitive transmitters; energy-efficiency; energy-efficient power control; equilibrium analysis; heterogeneous networks; multilevel hierarchical game; wireless networks; Games; Lead; Transmitters
【Paper Link】 【Pages】:596-600
【Authors】: Yuan Wu ; Hongseok Kim ; Prashanth Hande ; Mung Chiang ; Danny H. K. Tsang
【Abstract】: In this paper, we study the revenue sharing and rate allocation for Internet Service Providers (ISPs) that jointly provide network connectivity between content providers and end-users. Without colluding, each ISP may selfishly set a high transit-price to cover its cost and maximize its own profit, which inevitably results in a loss in social profit. We model this noncooperative interaction between an “eyeball” ISP and a “content” ISP as a Stackelberg game and quantify the resulting loss in social profit. To recover the profit loss, we propose a revenue sharing contract between ISPs by modeling them as a supply chain to deliver traffic in a two-sided market. Parameterized by the profit division factor, the sharing contract coordinates ISPs' objectives such that they aim to maximize the social profit self-incentively. We further propose a Nash bargaining process to determine the profit division factor such that all ISPs are simultaneously better off compared to the noncooperative equilibrium.
【Keywords】: incentive schemes; industrial economics; ISP; network connectivity; profit division factor; rate allocation; revenue sharing; two-sided markets; Contracts; Economics; Elasticity; Games; NIST; Pricing; Resource management
【Paper Link】 【Pages】:601-605
【Authors】: Guohua Li ; Jianzhong Li
【Abstract】: In wireless sensor networks, congestion not only leads to buffer overflow, but also increases delays and lowers network throughput, with much energy wasted on retransmissions. Therefore an effective solution should be proposed to avoid congestion, increase energy efficiency and prolong the lifetime of the network. Traditional solutions work in an open-loop fashion and hence fail to adapt to changes in the network after deployment. In this paper, we propose a novel decentralized rate control based congestion avoidance protocol (RCCAP), which is an adaptive rate control algorithm based on the combination of a discrete proportional-integral-derivative (PID) control method and a single neuron. The main idea of RCCAP is to ensure that the buffer length of each sensor node stays as close as possible to an ideal length by a feedback control loop that periodically calibrates the total rate of data packets entering each sensor node, according to the information recently collected by the node. In addition, we prove the stability of the system based on Lyapunov stability theory. We also prove that the global weighted fairness metric satisfies f_M = 1 - O(M^{-2}), where M is the number of periods, under the condition that the system is stable as time approaches infinity. Our approach has been evaluated on a real wireless sensor network testbed. Detailed experimental results demonstrate that the global weighted fairness of RCCAP achieves 99% on average, which is much stronger than the existing rate-based congestion control protocol. Moreover, RCCAP achieves 52% and 18% gains in network throughput and global weighted fairness over PCCP on average, respectively.
【Keywords】: adaptive control; decentralised control; feedback; telecommunication congestion control; three-term control; wireless sensor networks; PID control method; RCCAP; adaptive rate control algorithm; decentralized rate control based congestion avoidance protocol; discrete proportional-integral-derivative control method; feedback control loop; global weighted fairness guaranteed congestion avoidance protocol; rate-based congestion control protocol; wireless sensor networks; Frequency modulation; Measurement; Neurons; Protocols; Routing; Throughput; Wireless sensor networks
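The feedback loop at the heart of RCCAP, a discrete PID controller steering the buffer length toward a target, can be sketched as follows. The class, gains, and rate-adjustment interpretation are illustrative assumptions; the paper additionally couples the PID with a single neuron, which is omitted here:

```python
class BufferPID:
    """Discrete PID controller: given the current buffer length, emit an
    adjustment to the admitted data rate so the buffer tracks a target.
    A positive output raises the admitted rate, a negative output lowers it."""
    def __init__(self, kp, ki, kd, target):
        self.kp, self.ki, self.kd, self.target = kp, ki, kd, target
        self.integral = 0.0   # accumulated error (I term)
        self.prev_err = 0.0   # last error, for the difference (D term)

    def update(self, buffer_len):
        err = self.target - buffer_len
        self.integral += err
        deriv = err - self.prev_err
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

Called once per control period with the measured buffer length, an overfull buffer (length above target) yields a negative output that throttles incoming traffic, and an underfull buffer yields a positive one.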
【Paper Link】 【Pages】:606-610
【Authors】: Yongle Cao ; Ziguo Zhong ; Yu Gu ; Tian He
【Abstract】: Working in the duty cycling mode enables sensor nodes to utilize limited energy efficiently instead of unnecessary idle listening. In such networks, awareness of neighboring nodes' working schedules is essential, especially when each node sets up the schedule independently. Most traditional research assumes that a node can always share its working schedule with its neighbors once it joins the network. However, dynamic energy supply and varied requirements of system performance make adjusting duty cycles necessary. Consequently, sensor nodes have to regularly change their schedules and advertise the new schedule to their one-hop neighbors. In this work, instead of changing nodes' working schedule directly, we introduce SSD, a staged and smooth design for safeguarding schedule updates. Our design is aimed at bounding average packet loss rate with a minimum energy cost. We evaluate our design with large-scale simulation and show that under the comparable packet loss rate, our design saves up to 35% energy compared with the minimal packet loss solution.
【Keywords】: wireless sensor networks; dynamic energy supply; idle listening; large-scale simulation; minimum energy cost; neighboring nodes; one-hop neighbors; packet loss rate; safeguarding schedule updates; sensor nodes; wireless sensor networks; Loss measurement; Merging; Radiation detectors; Receivers; Schedules; Synchronization; Wireless sensor networks
【Paper Link】 【Pages】:611-615
【Authors】: Chul-Ho Lee ; Do Young Eun
【Abstract】: A simple random walk (SRW) has been considered an effective forwarding method for many applications in wireless sensor networks (WSNs) due to its desirable properties. However, a critical downside of SRW, slow diffusion or exploration over the space, typically leads to longer packet delay and undermines its own benefits. Such a slow-mixing problem becomes even worse under the random duty cycling adopted for energy conservation. In this paper, we study how to overcome this problem without any sacrifice or tradeoff, and propose a simple modification of random duty cycling, named Smart Sleep, which achieves greater power savings as well as faster packet diffusion (or smaller delay) while retaining the benefits of SRW. We also introduce a class of p-backtracking random walks and establish its properties to analytically explain the fast packet diffusion induced by Smart Sleep. We further obtain a necessary condition to achieve optimal performance under Smart Sleep, and finally demonstrate remarkable performance improvement via independent simulation results over various network topologies.
【Keywords】: wireless sensor networks; SRW; WSN; duty-cycled wireless sensor networks; energy conservation; p-backtracking random walks; packet diffusion; power-saving; simple random walk; slow-mixing problem; smart sleep; Artificial neural networks; Delay; Wireless sensor networks
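The p-backtracking random walk in the abstract above can be illustrated with a minimal sketch. Assumption: "p-backtracking" is read here as "with probability p, step back to the node just visited; otherwise take a simple-random-walk step" — the paper's exact definition may differ, and the function and variable names are ours.

```python
import random

def p_backtracking_walk(adj, start, steps, p, seed=0):
    """Random walk that, with probability p, returns to the node it just
    came from; otherwise it moves to a uniformly random neighbor.
    adj: dict mapping node -> list of neighbors (undirected graph)."""
    rng = random.Random(seed)
    path = [start]
    prev, cur = None, start
    for _ in range(steps):
        if prev is not None and rng.random() < p:
            nxt = prev                      # backtracking step
        else:
            nxt = rng.choice(adj[cur])      # simple-random-walk step
        prev, cur = cur, nxt
        path.append(cur)
    return path

# 6-node ring: every step must land on a neighbor of the previous node.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
walk = p_backtracking_walk(ring, 0, 100, p=0.3)
```

Setting p = 0 recovers the plain SRW; tuning p trades off revisiting against exploration, which is the knob the paper analyzes.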
【Paper Link】 【Pages】:616-620
【Abstract】: When data productions and consumptions are heavily unbalanced and when the origins of data queries are spatially and temporally distributed, the so-called in-network data storage paradigm supersedes the conventional data collection paradigm in wireless sensor networks (WSNs). In this paper, we first introduce geometric quorum systems (along with their metrics) to embody the idea of in-network data storage. These quorum systems are “geometric” because curves (rather than discrete node sets) are used to form quorums. We then propose GeoQuorum as a new quorum system, for which the quorum-forming curves are parameterized. Our proposal significantly expands the quorum design methodology by endowing a system with great flexibility to fine-tune itself towards different application requirements. In particular, this tunability allows GeoQuorum to substantially improve load-balancing performance while remaining competitive in energy efficiency. Our simulation results confirm the performance enhancement brought by GeoQuorum.
【Keywords】: energy conservation; wireless sensor networks; GeoQuorum; data collection paradigm; energy efficiency; energy efficient data access; geometric quorum systems; in-network data storage paradigm; load balancing; quorum design methodology; wireless sensor networks; Geometry; Load management; Measurement; Production; Robustness; Tuning; Wireless sensor networks
【Paper Link】 【Pages】:621-625
【Authors】: Xiaokang Yu ; Xiaomeng Ban ; Wei Zeng ; Rik Sarkar ; Xianfeng Gu ; Jie Gao
【Abstract】: In this paper we address the problem of scalable and load-balanced routing for wireless sensor networks. Motivated by the continuous analog, in which geodesic routing on a sphere gives perfect load balancing, we embed sensor nodes on a convex polyhedron in 3D and use greedy routing to deliver messages between any pair of nodes with guaranteed success. Such an embedding is known to exist for any 3-connected planar graph by the Koebe-Andreev-Thurston theorem. We use discrete Ricci flow to develop a distributed algorithm that computes this embedding. Further, the embedding is not unique; any two embeddings differ by a Möbius transformation. We employ an optimization routine to look for the Möbius transformation that spreads the nodes on the polyhedron as uniformly as possible. We evaluate the load-balancing property of this greedy routing scheme and show that it compares favorably with previous schemes.
【Keywords】: computational geometry; distributed algorithms; graph theory; greedy algorithms; optimisation; resource allocation; telecommunication network routing; wireless sensor networks; 3-connected planar graphs; Koebe-Andreev-Thurston theorem; Möbius transformation; continuous setting; convex polyhedron; discrete Ricci flow; distributed algorithm; geodesic routing; greedy routing scheme; load balanced routing; load balancing property; message delivery; optimization routine; polyhedron routing; sensor nodes; spherical representation; wireless sensor networks; Batteries; Face; Load management; Optimization; Routing; Three dimensional displays; Wireless networks
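The greedy routing step described above (forward to the neighbor whose 3D position is closest to the destination) can be sketched as follows. The octahedron coordinates merely stand in for the Ricci-flow-computed embedding, which is beyond a short example; all names are illustrative.

```python
import math

def greedy_route(coords, adj, src, dst):
    """Greedy geographic routing: repeatedly forward to the neighbor
    closest (in Euclidean distance) to the destination; report failure
    if no neighbor makes progress (a local minimum)."""
    path, cur = [src], src
    while cur != dst:
        nxt = min(adj[cur], key=lambda v: math.dist(coords[v], coords[dst]))
        if math.dist(coords[nxt], coords[dst]) >= math.dist(coords[cur], coords[dst]):
            return None  # stuck in a local minimum: greedy delivery failed
        path.append(nxt)
        cur = nxt
    return path

# Regular octahedron: vertices at the unit axis points; every pair of
# non-antipodal vertices shares an edge (a convex polyhedron in 3D).
coords = {0: (1, 0, 0), 1: (-1, 0, 0), 2: (0, 1, 0),
          3: (0, -1, 0), 4: (0, 0, 1), 5: (0, 0, -1)}
antipode = {0: 1, 1: 0, 2: 3, 3: 2, 4: 5, 5: 4}
adj = {v: [u for u in coords if u != v and u != antipode[v]] for v in coords}
route = greedy_route(coords, adj, 0, 1)
```

On a convex embedding like this one, greedy forwarding never gets stuck, which is exactly the guaranteed-delivery property the paper exploits.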
【Paper Link】 【Pages】:626-630
【Authors】: Ai Chen ; Zhizhou Li ; Ten-Hwang Lai ; Cong Liu
【Abstract】: One of the most extensively studied coverage models in wireless sensor networks is barrier coverage, which guarantees that any movement crossing a given belt is detected, regardless of its direction. For some intrusion detection applications, such as border guarding, only one direction of crossing the belt may be illegal. We therefore introduce a new coverage model, called one-way barrier coverage, which requires that the network report illegal intruders while ignoring legal ones. We propose an appropriate definition of one-way barrier coverage and investigate it in depth for binary sensors. Our research shows that providing one-way barrier coverage is not straightforward even when there is only a single intruder. For multiple intruders, we introduce the concept of neighboring barriers and design protocols that provide one-way barrier coverage for different sensor models based on neighboring barriers.
【Keywords】: wireless sensor networks; border guarding; intrusion detection; one-way barrier coverage; sensor models; wireless sensor networks; Belts; Law; Protocols; Sensors; Wireless communication; Wireless sensor networks
【Paper Link】 【Pages】:631-639
【Authors】: Guoqiang Mao ; Brian D. O. Anderson
【Abstract】: Consider a network where all nodes are distributed on a unit square following a Poisson distribution with known density ρ, and a pair of nodes separated by an Euclidean distance x are directly connected with probability g(x/rρ), independently of the event that any other pair of nodes are directly connected, where g : [0,∞) → [0,1] satisfies three conditions (rotational invariance, non-increasing monotonicity and integral boundedness), rρ = √((log ρ + b)/(Cρ)), C = ∫_ℝ² g(‖x‖) dx, and b is a constant. In this paper, we analyze the asymptotic distribution of the number of isolated nodes in the above network using the Chen-Stein technique, as well as the impact of the boundary effect on the number of isolated nodes as ρ → ∞. On that basis we derive a necessary condition for the above network to be asymptotically almost surely connected. These results form an important link in extending recent results on the connectivity of random geometric graphs from the commonly used unit disk model to the more generic and more practical random connection model.
【Keywords】: Poisson distribution; network theory (graphs); radio networks; random processes; Chen-Stein technique; Euclidean distance; Poisson distribution; asymptotic connectivity; asymptotic distribution; boundary effect; random connection model; random geometric graph; random network; Analytical models; Couplings; Equations; Euclidean distance; Indexes; Markov processes; Random variables; Isolated nodes; connectivity; random connection model
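The radius rρ = √((log ρ + b)/(Cρ)) with C = ∫_ℝ² g(‖x‖) dx can be evaluated once g is fixed. A minimal sketch, using an example connection function g(r) = e^(−r²) of our own choosing (for which C = π analytically); the numerical scheme and names are illustrative, not from the paper:

```python
import math

def C_of_g(g, upper=10.0, n=100000):
    """C = integral of g(||x||) over R^2 = 2*pi * int_0^inf g(r) r dr,
    by the polar-coordinate change; midpoint rule on [0, upper]."""
    h = upper / n
    total = sum(g(r) * r for r in (h * (i + 0.5) for i in range(n))) * h
    return 2 * math.pi * total

def critical_radius(rho, b, C):
    """r_rho = sqrt((log rho + b) / (C * rho)), as in the abstract."""
    return math.sqrt((math.log(rho) + b) / (C * rho))

g = lambda r: math.exp(-r * r)   # example g: [0, inf) -> [0, 1]
C = C_of_g(g)                    # analytically C = pi for this g
r = critical_radius(rho=10000.0, b=0.0, C=C)
```

Note how rρ shrinks like √(log ρ / ρ): denser networks need a proportionally smaller connection range to avoid isolated nodes, which is the scaling the connectivity analysis hinges on.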
【Paper Link】 【Pages】:640-648
【Authors】: Yun Wang ; Xiaoyu Chu ; Xinbing Wang ; Yu Cheng
【Abstract】: In this paper, we give a global perspective on multicast capacity and delay analysis in Mobile Ad-hoc Networks (MANETs). Specifically, we consider two node mobility models: (1) two-dimensional i.i.d. mobility, and (2) one-dimensional i.i.d. mobility. Two mobility time-scales are considered: (i) fast mobility, where node mobility is at the same time-scale as data transmissions; and (ii) slow mobility, where node mobility is assumed to occur at a much slower time-scale than data transmissions. Given a delay constraint D, we first characterize the optimal multicast capacity for each of the four resulting mobility models, and then develop a scheme that achieves a capacity-delay tradeoff close to the upper bound, up to a logarithmic factor. Our study can be further extended to two-dimensional/one-dimensional hybrid random walk fast/slow mobility models and heterogeneous networks.
【Keywords】: mobile ad hoc networks; MANET; capacity-delay tradeoff; data transmissions; delay tradeoffs; heterogeneous networks; mobile ad-hoc networks; mobility models; optimal multicast capacity; Ad hoc networks; Delay; Mobile communication; Mobile computing; Relays; Upper bound; Wireless communication
【Paper Link】 【Pages】:649-657
【Authors】: Luoyi Fu ; Yi Qin ; Xinbing Wang ; Xue Liu
【Abstract】: This paper investigates throughput and delay under a newly predominant traffic pattern, called converge-cast, where each of the n nodes in the network acts as a destination with k randomly chosen sources corresponding to it. Adopting Multiple-Input-Multiple-Output (MIMO) technology, we devise two many-to-one cooperative schemes under converge-cast, for static and mobile ad hoc networks (MANETs) respectively. In a static network, our scheme makes heavy use of hierarchical cooperative MIMO transmission. This feature overcomes the bottleneck that prevents converge-cast traffic from yielding ideal performance in traditional ad hoc networks, by turning the originally interfering signals into interference-resistant ones, and achieves an aggregate throughput of up to Ω(n^(1−ε)) for any ε > 0. In the mobile ad hoc case, our scheme is characterized by joint transmission from multiple nodes to multiple receivers. With an optimal network division, in which the number of nodes per cell is bounded by a constant, the achievable per-node throughput reaches Θ(1) with the corresponding delay reduced to Θ(k). The gain comes from the strong and intelligent cooperation between nodes in our scheme, along with the maximum number of concurrently active cells and the shortest waiting time before transmission for each node within a cell. This greatly increases the chances for each destination to receive the data it needs with minimum overhead from extra transmissions. Moreover, our converge-cast-based analysis unifies and generalizes previous work, since the results derived from converge-cast in our schemes also cover other traffic patterns. Last but not least, our cooperative schemes are not only of theoretical interest but also shed light on the future design of MIMO schemes in wireless networks.
【Keywords】: MIMO communication; mobile ad hoc networks; telecommunication traffic; MANET; converge-cast traffic; hierarchical cooperation MIMO transmission; joint transmission; mobile ad hoc networks; multiple receivers; multiple-input-multiple-output technology; predominant traffic pattern; static network; wireless networks; Ad hoc networks; Irrigation; Mobile computing; Routing
【Paper Link】 【Pages】:658-666
【Authors】: Yuxin Chen ; Sanjay Shakkottai ; Jeffrey G. Andrews
【Abstract】: Information dissemination in a large network is typically achieved when each user shares its own information or resources with every other user. Consider n users randomly located over a fixed region, k of whom wish to flood their individual messages to all other users, where each user only has knowledge of its own contents and state information. The goal is to disseminate all messages using a low-overhead strategy that is one-sided and distributed while achieving an order-optimal spreading rate over a random geometric graph. In this paper, we investigate the random-push gossip-based algorithm, where message selection is based on the sender's own state in a random fashion. We first show that random-push is inefficient in static random geometric graphs; specifically, it is Ω(√n) times slower than optimal spreading. This gap can be closed if each user is mobile and at each time moves “locally” using a random walk with velocity v(n). We propose an efficient dissemination strategy that alternates between individual message flooding and random gossiping. We show that this scheme achieves the optimal spreading rate as long as the velocity satisfies v(n) = ω(√(log n/k)). The key insight is that the mixing introduced by this velocity-limited mobility approximately uniformizes the locations of all copies of each message within the optimal spreading time, which emulates a balanced geometry-free evolution over a complete graph.
【Keywords】: information dissemination; mobile communication; information dissemination; low-overhead strategy; message flooding; message selection; mobile networks; multiple messages sharing; order-optimal spreading rate; random gossiping; random-push gossip-based algorithm; static random geometric graphs; Mobile communication; Mobile computing; Protocols; Tiles; Transmitters; Unicast; Wireless networks
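The random-push primitive studied above (each round, every node pushes one uniformly random message from its own collection to a random peer) can be sketched on a complete graph. This illustrates only the baseline protocol, not the paper's mobility model or its flooding/gossiping alternation; all names are ours.

```python
import random

def random_push_gossip(n, k, max_rounds=2000, seed=1):
    """Random-push gossip over a complete graph on n nodes: in each
    round, every node holding at least one message sends one uniformly
    random message from its collection to a uniformly random other
    node. Messages 0..k-1 start at nodes 0..k-1. Returns the number of
    rounds until every node holds all k messages (None on timeout)."""
    rng = random.Random(seed)
    have = [set() for _ in range(n)]
    for m in range(k):
        have[m].add(m)
    for rnd in range(1, max_rounds + 1):
        sends = []
        for u in range(n):
            if have[u]:
                v = rng.randrange(n - 1)
                v = v if v < u else v + 1            # random peer != u
                sends.append((v, rng.choice(sorted(have[u]))))
        for v, m in sends:                           # synchronous round
            have[v].add(m)
        if all(len(h) == k for h in have):
            return rnd
    return None

rounds = random_push_gossip(n=30, k=3)
```

On a complete graph dissemination finishes in a logarithmic number of rounds; the Ω(√n) slowdown the paper proves appears when the same push rule runs over a static random geometric graph, where copies can only hop to nearby nodes.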
【Paper Link】 【Pages】:667-675
【Authors】: Haiying Shen ; Lianyu Zhao ; Harrison Chandler ; Jared Stokes ; Jin Li
【Abstract】: Online forums have long been among the most popular platforms for people to communicate and share ideas. Nowadays, with the boom of multimedia sharing, users tend to share more and more with their online peers within online communities such as forums. The server-client model of forums has been used since their creation in the mid-nineties. However, this model has begun to fall short in meeting the increasing need for bandwidth and storage as more and more people share ever more multimedia content. In this work, we first investigate the unique properties of forums based on data collected from the Disney discussion boards. Guided by these properties, we design a scheme to support P2P-based multimedia sharing in forums, called Multimedia Board (MBoard). Extensive simulation results using real trace data show that MBoard can significantly reduce the load on the server while maintaining a high quality of service for the users.
【Keywords】: client-server systems; multimedia communication; peer-to-peer computing; quality of service; Disney discussion boards; P2P-based multimedia sharing; multimedia board; online forums; online peers; quality of service; server-client model; user generated contents; Media; Multimedia communication; Peer to peer computing; Servers; Streaming media; Videos; YouTube
【Paper Link】 【Pages】:676-684
【Authors】: Ke Xu ; Meng Shen ; Mingjiang Ye
【Abstract】: Peer-to-Peer (P2P) applications have become increasingly popular in recent years, bringing new challenges to network management and traffic engineering (TE). As basic input information, P2P traffic matrices are of significant importance for TE. Due to the excessively high cost of direct measurement, many studies aim at modeling and estimating general traffic matrices, but few focus on P2P traffic matrices. In this paper, we propose a model to estimate P2P traffic matrices in networks. Important factors are considered, including the number of peers, the localization ratio of P2P traffic, and the distances among different networks, where distance can be hop count or geographic distance as appropriate. To validate our model, we evaluate its performance using both real P2P live streaming traces and file-sharing application traces. Evaluation results show that the proposed model outperforms two typical models for general traffic matrix estimation in terms of estimation error. To the best of our knowledge, this is the first study of P2P traffic matrix estimation. P2P traffic matrices derived from the model can be applied to P2P traffic optimization and other TE fields.
【Keywords】: computer network management; estimation theory; file organisation; matrix algebra; peer-to-peer computing; telecommunication traffic; P2P live streaming trace; P2P traffic optimization; file sharing application trace; localization ratio; network management; peer-to-peer traffic matrix estimation; traffic engineering; Data communication; Estimation; Mathematical model; Peer to peer computing; Routing; Silicon; Throughput; Peer-to-Peer (P2P); Traffic engineering; Traffic matrix
【Paper Link】 【Pages】:685-693
【Authors】: Milan Vojnovic ; Alexandre Proutière
【Abstract】: We study the performance of hop-limited broadcasting of a message in dynamic graphs, where links between nodes switch between active and inactive states. We analyze performance with respect to the completion time, defined as the time for the message to reach a given portion of the nodes, and the communication complexity, defined as the number of message forwardings per node. We analyze two natural flooding algorithms. The first is a lazy algorithm, where the message can be forwarded by a node only if the node first received it through a path shorter than the hop limit count. The second is a more complex protocol, where each node forwards the message at a given time if the message could have been received by this node through a path shorter than the hop limit count. We derive exact asymptotics for the completion time and the communication complexity for large network sizes, which reveal the effect of the hop limit count. Perhaps surprisingly, we find that both flooding algorithms perform near optimum and that the simpler (lazy) algorithm is only slightly worse than the other, more complicated one. The results provide insights into the performance of networked systems that use hop limits, for example in the contexts of peer-to-peer systems and mobile ad-hoc networks.
【Keywords】: ad hoc networks; communication complexity; message passing; peer-to-peer computing; protocols; communication complexity; completion time; dynamic graphs; dynamic networks; hop limit count; hop limited flooding; hop-limited broadcasting; lazy algorithm; message forwarding; mobile ad-hoc networks; peer-to-peer systems; Ad hoc networks; Algorithm design and analysis; Heuristic algorithms; Markov processes; Mobile communication; Mobile computing; Peer to peer computing
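On a static graph, the lazy rule above (forward only if the message first arrived over a path shorter than the hop limit) reduces to breadth-first search truncated at the hop limit; the dynamic-graph case analyzed in the paper is much subtler, since link states change between rounds. A sketch of the static special case, with illustrative names:

```python
from collections import deque

def hop_limited_flood(adj, source, hop_limit):
    """Lazy hop-limited flooding on a static graph: a node forwards the
    message only if it first received it over a path shorter than the
    hop limit. On a static graph this is exactly BFS truncated at depth
    hop_limit; returns {node: hop count at first reception}."""
    reached = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        if reached[u] >= hop_limit:
            continue                 # u may receive but not forward
        for v in adj[u]:
            if v not in reached:
                reached[v] = reached[u] + 1
                q.append(v)
    return reached

# Path graph 0-1-2-3-4-5: with hop limit 3 the message stops at node 3.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}
covered = hop_limited_flood(path, 0, 3)
```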
【Paper Link】 【Pages】:694-702
【Authors】: Bo Tan ; Laurent Massoulié
【Abstract】: In this paper, we address the problem of content placement in peer-to-peer systems, with the objective of maximizing the utilization of peers' uplink bandwidth resources. We consider system performance under a many-user asymptotic. We distinguish two scenarios, namely “Distributed Server Networks” (DSN) for which requests are exogenous to the system, and “Pure P2P Networks” (PP2PN) for which requests emanate from the peers themselves. For both scenarios, we consider a loss network model of performance, and determine asymptotically optimal content placement strategies in the case of a limited content catalogue. We then turn to an alternative “large catalogue” scaling where the catalogue size scales with the peer population. Under this scaling, we establish that storage space per peer must necessarily grow unboundedly if bandwidth utilization is to be maximized. Relating the system performance to properties of a specific random graph model, we then identify a content placement strategy and a request acceptance policy which jointly maximize bandwidth utilization, provided storage space per peer grows unboundedly, although arbitrarily slowly, with system size.
【Keywords】: DiffServ networks; graph theory; peer-to-peer computing; video on demand; DSN; PP2PN; distributed server networks; optimal content placement; peer-to-peer video-on-demand systems; pure P2P networks; random graph model; request acceptance policy; Bandwidth; Equations; Load modeling; Optimization; Peer to peer computing; Servers; Streaming media
【Paper Link】 【Pages】:703-711
【Authors】: Qiuyu Peng ; Xinbing Wang ; Huan Tang
【Abstract】: In this paper, we investigate the multicast capacity of static networks with heterogeneous clusters. We study the effect of heterogeneity on the achievable capacity from two aspects: heterogeneous cluster traffic (HCT) and heterogeneous cluster size (HCS). HCT means that cluster clients are more likely to appear near their cluster head rather than being uniformly distributed across the network; HCS means that clusters are not all equal in size, as most prior literature assumes. Both properties are commonly found in realistic networks. For this class of networks, we find that HCT increases the network capacity for all clusters, whereas HCS does not influence it. Our work generalizes various results in the literature obtained for non-heterogeneous networks.
【Keywords】: multicast communication; radio networks; telecommunication traffic; HCT; cluster clients; clustered network; heterogeneous cluster size; heterogeneous cluster traffic; multicast capacity; network capacity; static network; wireless network; Density functional theory; Dispersion; Relays; Routing; Unicast; Upper bound; Wireless communication
【Paper Link】 【Pages】:712-720
【Authors】: Cheng Wang ; Changjun Jiang ; Xiang-Yang Li ; Shaojie Tang ; Panlong Yang
【Abstract】: We study the general scaling laws of the capacity of random wireless networks under the generalized physical model. The generality of this work is embodied in three dimensions, denoted by (λ ∈ [1, n], nd ∈ [1, n], ns ∈ (1, n]): (1) We study random networks of a general node density λ ∈ [1, n], rather than only the random dense network (RDN, λ = n) or the random extended network (REN, λ = 1) as in the literature. (2) We focus on the multicast capacity, which unifies the unicast and broadcast capacities, by setting the number of destinations for each session to a general value nd ∈ [1, n]. (3) We allow the number of sessions to vary in the range ns ∈ (1, n], rather than assuming ns = Θ(n) as in the literature. We derive general lower bounds on the capacity for the arbitrary case of (λ, nd, ns). In particular, we show that for the special cases (λ = 1, nd ∈ [1, n], ns = n) and (λ = n, nd ∈ [1, n], ns = n), our schemes achieve the highest multicast throughputs proposed in existing works.
【Keywords】: channel capacity; multicast communication; radio broadcasting; wireless sensor networks; broadcast capacity; capacity scaling; generalized physical model; multicast capacity; random dense network; random extended network; random wireless networks; unicast capacity; Lattices; Mathematical model; Roads; Routing; Schedules; Throughput
【Paper Link】 【Pages】:721-729
【Authors】: Abhigyan Sharma ; Aditya Kumar Mishra ; Vikas Kumar ; Arun Venkataramani
【Abstract】: Traffic engineering (TE) has been long studied as a network optimization problem, but its impact on user-perceived application performance has received little attention. Our paper takes a first step to address this disparity. Using real traffic matrices and topologies from three ISPs, we conduct very large-scale experiments simulating ISP traffic as an aggregate of a large number of TCP flows. Our application-centric, empirical approach yields two rather unexpected findings. First, link utilization metrics, and MLU in particular, are poor predictors of application performance. Despite significant differences in MLU, all TE schemes and even a static shortest-path routing scheme achieve nearly identical application performance. Second, application adaptation in the form of location diversity, i.e., the ability to download content from multiple potential locations, significantly improves the capacity achieved by all schemes. Even the ability to download from just 2-4 locations enables all TE schemes to achieve near-optimal capacity, and even static routing to be within 30% of optimal. Our findings call into question the value of TE as practiced today, and compel us to significantly rethink the TE problem in the light of application adaptation.
【Keywords】: Internet; matrix algebra; optimisation; telecommunication network routing; telecommunication network topology; telecommunication traffic; ISP traffic simulation; TCP flows; application-centric comparison; link utilization metrics; network optimization problem; static shortest-path routing scheme; topology; traffic engineering scheme; traffic matrix; Delay; Internet; Loss measurement; Routing; Throughput; Topology
【Paper Link】 【Pages】:730-738
【Authors】: Cheng-Shang Chang ; Chin-Yi Hsu ; Jay Cheng ; Duan-Shin Lee
【Abstract】: Based on Newman's fast algorithm, in this paper we develop a general probabilistic framework for detecting community structure in a network. The key idea of our generalization is to characterize a network (graph) by a bivariate distribution that specifies the probability of two vertices appearing at the two ends of a randomly selected path in the graph. With such a bivariate distribution, we give a probabilistic definition of a community and a definition of a modularity index. To detect communities in a network, we propose a class of distribution-based clustering algorithms with computational complexity comparable to that of Newman's fast algorithm. Our generalization provides the additional freedom to choose a bivariate distribution and a correlation measure. As a result, we obtain significant performance improvements over the original Newman fast algorithm in computer simulations of random graphs with known community structure.
【Keywords】: computational complexity; computer networks; graph theory; pattern clustering; Newman fast algorithm; bivariate distribution; community structure; computational complexity; correlation measure; distribution-based clustering algorithm; general probabilistic framework; graph theory; modularity index; random graphs; clustering algorithms; graph partitioning; large complex networks
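For reference, the standard (non-generalized) Newman modularity that this framework builds on can be computed as follows; the graph and partition are illustrative, and the code shows the classical index, not the paper's probabilistic generalization.

```python
def modularity(adj, partition):
    """Newman modularity Q = sum_c (e_c/m - (d_c/(2m))^2), where m is
    the number of edges, e_c the number of edges inside community c,
    and d_c the total degree of c's nodes. adj: node -> neighbor list;
    partition: list of disjoint node sets covering all nodes."""
    m = sum(len(v) for v in adj.values()) / 2
    comm = {u: c for c, nodes in enumerate(partition) for u in nodes}
    e = [0.0] * len(partition)   # intra-community edge counts
    d = [0.0] * len(partition)   # community degree sums
    for u, nbrs in adj.items():
        d[comm[u]] += len(nbrs)
        for v in nbrs:
            if comm[u] == comm[v]:
                e[comm[u]] += 0.5        # each intra edge seen twice
    return sum(e[c] / m - (d[c] / (2 * m)) ** 2
               for c in range(len(partition)))

# Two 4-cliques joined by a single bridge edge (0-4).
adj = {u: [v for v in range(4) if v != u] for u in range(4)}
adj.update({u: [v for v in range(4, 8) if v != u] for u in range(4, 8)})
adj[0] = adj[0] + [4]
adj[4] = adj[4] + [0]
good = modularity(adj, [{0, 1, 2, 3}, {4, 5, 6, 7}])   # correct split
flat = modularity(adj, [set(range(8))])                # trivial split
```

The correct two-community split scores well above the trivial one-community partition (whose modularity is exactly 0), which is the signal greedy agglomerative algorithms like Newman's climb.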
【Paper Link】 【Pages】:739-747
【Authors】: Guanfeng Liang ; Nitin H. Vaidya
【Abstract】: We consider the problem of maximizing the throughput of Byzantine agreement, when communication links have finite capacity. Byzantine agreement is a classical problem in distributed computing. In existing literature, the communication links are implicitly assumed to have infinite capacity. The problem changes significantly when the capacity of links is finite. We define the throughput and capacity of agreement, and identify necessary conditions of achievable agreement throughputs. We propose an algorithm structure for achieving agreement capacity in general networks. We also introduce capacity achieving algorithms for two classes of networks: (i) arbitrary four-node networks with at most 1 failure; and (ii) symmetric networks of arbitrary size.
【Keywords】: distributed processing; fault tolerant computing; radio links; Byzantine agreement; agreement capacity; agreement throughput; communication links; distributed computing; finite link capacity; four-node network; general network; point-to-point links; Algorithm design and analysis; Arrays; Barium; Image edge detection; Peer to peer computing; Throughput; Upper bound
【Paper Link】 【Pages】:748-756
【Authors】: Yuan Feng ; Zimu Liu ; Baochun Li
【Abstract】: Multi-touch mobile devices (e.g. iPhone and iPad) and motion-sensing game controllers (e.g. Kinect for Xbox 360) share one common feature: users interact with computing devices in non-conventional, gesture-intensive ways, be they multi-touch gestures on the iPad or body-motion gestures with the Kinect. As a new way to interact with computing devices, gestures have proven to be intuitive and natural, with a minimal learning curve. They can be used in applications beyond games, such as those that allow the creation of artistic and musical content in a collaborative fashion. In order for multiple users to collaborate or compete in real time, however, such gestures need to be streamed in multiple broadcast sessions of an “all-to-all” nature, with each session corresponding to one user as the source of a gesture stream. These streams of gestures typically incur low yet bursty bit rates, but have unique requirements with respect to delay and loss. In this paper, we present the design of GestureFlow, a gesture broadcast protocol designed specifically for concurrent gesture streams in multiple broadcast sessions. We motivate the effectiveness and practicality of using inter-session network coding, and address the challenges introduced by linear dependence, discovered in our extensive experiments with a new gesture-intensive iPad application that we developed from scratch.
【Keywords】: computer games; gesture recognition; mobile handsets; motion control; network coding; radio broadcasting; touch sensitive screens; body motion gestures; broadcast protocol; gesture flow; gesture stream; iPad; inter-session network coding; learning curve; motion sensing game controllers; multitouch mobile devices; scratch; Bit rate; Collaboration; Delay; Encoding; Network coding; Receivers; Relays
【Paper Link】 【Pages】:757-765
【Authors】: Jin Wang ; Jianping Wang ; Kejie Lu ; Yi Qian ; Bin Xiao ; Naijie Gu
【Abstract】: In this paper, we study the optimal design of linear network coding (LNC) for secure unicast against passive attacks, under the requirement of information-theoretical security (ITS). The objectives of our optimal LNC design are (1) satisfying the ITS requirement, (2) maximizing the transmission rate of a unicast stream, and (3) minimizing the number of additional random symbols. We first formulate the problem of maximizing the secure transmission rate under the ITS requirement, which is then transformed into a constrained maximum network flow problem. We devise an efficient algorithm that finds the optimal transmission topology. Based on this topology, we then design a deterministic LNC that satisfies the aforementioned objectives, and provide a constructive upper bound on the required size of the finite field. In addition, we study the potential of random LNC and derive a lower bound on the probability that a random LNC is information-theoretically secure.
【Keywords】: network coding; security of data; telecommunication network topology; telecommunication security; constrained maximum network flow problem; information theoretical security; information theoretically secure unicast; linear network coding; optimal design; optimal transmission topology; secure transmission rate; unicast stream; Data communication; Encoding; Network topology; Security; Topology; Unicast; Vectors
【Paper Link】 【Pages】:766-774
【Authors】: Tracey Ho ; Sidharth Jaggi ; Svitlana Vyetrenko ; Lingxiao Xia
【Abstract】: Random linear network codes can be designed and implemented in a distributed manner, with low computational complexity. However, these codes are classically implemented over finite fields whose size depends on global network parameters (the size of the network, the number of sinks) that may be unknown prior to code design. Also, the entire network code may have to be redesigned when a new node joins. In this work, we present the first universal and robust distributed linear network coding schemes. Our schemes are universal since they are independent of all network parameters. They are robust since, in case nodes join or leave, the remaining nodes do not need to change their coding operations and the receivers can still decode. They are distributed since nodes need only topological information about the part of the network upstream of them, which can be naturally streamed as part of the communication protocol. We present both probabilistic and deterministic schemes that are all asymptotically rate-optimal in the coding block-length, with guarantees of correctness. Our probabilistic designs are computationally efficient, with order-optimal complexity. Our deterministic designs guarantee zero-error decoding, albeit via codes with high computational complexity in general. Our coding schemes are based on network codes over “scalable fields”. Instead of choosing coding coefficients from one field at every node as in previous designs, each node uses linear coding operations over an “effective field size” that depends on the node's distance from the source node. The analysis of our schemes requires technical tools that may be of independent interest. In particular, we generalize the Schwartz-Zippel lemma by proving a nonuniform version in which the variables are chosen from sets of possibly different sizes. We also provide a novel robust distributed algorithm to assign unique IDs to network nodes.
【Keywords】: computational complexity; linear codes; network coding; probability; random codes; Schwartz-Zippel lemma; computational complexity; deterministic schemes; error decoding; global network parameters; probabilistic schemes; random linear network codes; robust distributed linear network codes; universal distributed linear network codes; Computational modeling
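The classical, uniform Schwartz-Zippel lemma that the paper generalizes states that a nonzero n-variate polynomial of total degree d vanishes at a point drawn uniformly from Sⁿ with probability at most d/|S|. A sketch of its standard use for randomized polynomial identity testing (the paper's nonuniform version instead samples each variable from a set of possibly different size); the prime and function names are our choices.

```python
import random

P = 2_147_483_647  # a large prime: evaluations are compared modulo P

def probably_equal(f, g, nvars, trials=20, field_size=P, seed=7):
    """Randomized polynomial identity test via the classical (uniform)
    Schwartz-Zippel lemma: if f != g as polynomials, a uniformly random
    evaluation point exposes the difference with probability at least
    1 - degree/field_size on each independent trial."""
    rng = random.Random(seed)
    for _ in range(trials):
        point = [rng.randrange(field_size) for _ in range(nvars)]
        if f(*point) % field_size != g(*point) % field_size:
            return False     # witnessed f != g: a certain answer
    return True              # equal with high probability

# (x + y)^2 and x^2 + 2xy + y^2 are the same polynomial...
same = probably_equal(lambda x, y: (x + y) ** 2,
                      lambda x, y: x * x + 2 * x * y + y * y, nvars=2)
# ...but adding 1 makes them differ, and random points expose this.
diff = probably_equal(lambda x, y: (x + y) ** 2,
                      lambda x, y: x * x + 2 * x * y + y * y + 1, nvars=2)
```

This is the tool used to show that randomly chosen coding coefficients yield a decodable (full-rank) network code with high probability once the field is large enough.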
【Paper Link】 【Pages】:775-783
【Authors】: Jiliang Wang ; Yunhao Liu ; Mo Li ; Wei Dong ; Yuan He
【Abstract】: Due to its large scale and constrained communication radius, a wireless sensor network mostly relies on multi-hop transmissions to deliver a data packet along a sequence of nodes. It is therefore essential to measure the forwarding quality of multi-hop paths and to use this information in designing efficient routing strategies. Existing metrics like ETX and ETF mainly focus on quantifying the link performance between nodes while overlooking the forwarding capability inside the sensor nodes. Our experience operating GreenOrbs, a large-scale sensor network with 330 nodes, reveals that the quality of forwarding inside each sensor node is at least an equally important contributor to path quality in data delivery. In this paper we propose QoF, Quality of Forwarding, a new metric that explores the performance in the gray zone inside a node, left unexamined in previous studies. By combining QoF measurements within a node and over a link, we can comprehensively measure the complete path quality when designing efficient multi-hop routing protocols. We implement QoF and build a modified Collection Tree Protocol (CTP). We evaluate the data collection performance on a testbed of 50 TelosB nodes and compare it with the original CTP. The experimental results show that our approach takes both transmission cost and forwarding reliability into consideration, thus achieving high throughput for data collection.
【Keywords】: quality of service; routing protocols; telecommunication network reliability; wireless sensor networks; CTP protocol; QoF; comprehensive path quality measurement; constrained communication radius; data collection performance; data packet; forwarding reliability; large-scale sensor network; modified collection tree protocol; multihop path transmissions; multihop routing protocols; quality of forwarding; routing strategy; sensor nodes; transmission cost; wireless sensor networks; Estimation; Measurement; Reliability; Routing; Software; Throughput; Wireless sensor networks
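For context, the link-level ETX metric that QoF builds on is computed from per-direction delivery ratios, and a path's cost is the sum over its links. The node-aware variant below is only a hypothetical illustration of folding an in-node forwarding probability into the path cost; it is not the paper's actual QoF formulation:

```python
def link_etx(df, dr):
    """Expected transmissions on one link, given the forward delivery
    ratio df and the reverse (ACK) delivery ratio dr."""
    return 1.0 / (df * dr)

def path_etx(links):
    """Classic ETX path metric: sum of per-link ETX values.
    links: list of (df, dr) pairs along the path."""
    return sum(link_etx(df, dr) for df, dr in links)

def path_cost_with_forwarding(links, node_forward_prob):
    """Hypothetical QoF-style cost: inflate each hop's cost by the
    forwarding success probability inside the relaying node
    (illustrative only, not the paper's exact formula)."""
    return sum(link_etx(df, dr) / p
               for (df, dr), p in zip(links, node_forward_prob))
```

With perfect in-node forwarding (all probabilities 1) the node-aware cost degenerates to plain ETX, which is the sense in which such a metric strictly generalizes the link-only one.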
【Paper Link】 【Pages】:784-792
【Authors】: Hongbo Jiang ; Shengkai Zhang ; Guang Tan ; Chonggang Wang
【Abstract】: Sensor networks are invariably coupled tightly with the geometric environment in which the sensor nodes are deployed. Network boundary is one of the key features that characterize such environments. While significant advances have been made for 2D cases, so far boundary extraction for 3D sensor networks has not been thoroughly studied. We present CABET, a novel Connectivity-bAsed Boundary Extraction scheme for large-scale Three-dimensional sensor networks. To the best of our knowledge, CABET is the first 3D-capable and pure connectivity-based solution for detecting sensor network boundaries. It is fully distributed. A highlight of CABET is its non-uniform critical node sampling, called (r, r')-sampling, that selects landmarks to form boundary surfaces with bias toward nodes embodying salient topological features. Simulations show that CABET is able to extract a well-connected boundary in the presence of holes and shape variation, with performance superior to that of some state-of-the-art alternatives. In addition, we show how CABET benefits a range of sensor network applications including 3D skeleton extraction and 3D segmentation.
【Keywords】: wireless sensor networks; 3D segmentation; 3D skeleton extraction; CABET; connectivity-based boundary extraction scheme; hole variation; large-scale 3D sensor network; nonuniform critical node sampling; salient topological feature; shape variation; Feature extraction; Image edge detection; Joining processes; Nickel; Shape; Surface reconstruction; Three dimensional displays
【Paper Link】 【Pages】:793-801
【Authors】: Hongbo Jiang ; Jie Cheng ; Dan Wang ; Chonggang Wang ; Guang Tan
【Abstract】: Top-k query has long been an important topic in many fields of computer science. Efficient implementation of the top-k queries is the key for information searching. With new frontiers such as cyber-physical systems, where a large number of users may search for information directly in the physical world, many new challenges arise for top-k query processing. From the client's perspective, different users may request different sets of information, with different priorities and at different times. Thus, the top-k search should not only be multi-dimensional, but should also span the time domain. From the system's perspective, the data collection is usually carried out by small sensing devices. Unlike the data centers used for searching in cyberspace, these devices are often extremely resource-constrained and system efficiency is of paramount importance. In this paper, we develop a framework that can effectively satisfy the two ends. The sensor network maintains an efficient dominant graph data structure for data readings. A simple top-k extraction algorithm is used for the user query processing and two schemes are proposed to further reduce communication cost. Our proposed methods can be used for top-k query with any linear convex query function. To the best of our knowledge, this is the first work for continuous multi-dimensional top-k query processing in sensor networks; and our simulation results show that our schemes can reduce the total communication cost by up to 90%, compared with the centralized scheme or a straightforward extension from previous top-k algorithms on one-dimensional sensor data.
【Keywords】: data structures; query processing; wireless sensor networks; continuous multidimensional top-k query processing; data readings; graph data structure; sensor network; Aggregates; Algorithm design and analysis; Data mining; Data structures; Query processing; Routing; Temperature sensors
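The key property behind a dominant-graph index is that the top answer of any monotone linear scoring function must lie among the non-dominated (skyline) readings. A minimal sketch with brute-force dominance checks, not the paper's incremental structure or distributed schemes:

```python
def dominates(a, b):
    """a dominates b if a is >= b in every dimension and strictly
    greater in at least one (assuming larger readings score higher)."""
    return (all(x >= y for x, y in zip(a, b)) and
            any(x > y for x, y in zip(a, b)))

def skyline(points):
    """First layer of a dominant-graph index: the points that no
    other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

def top_k(points, weights, k):
    """Top-k under a linear scoring function sum(w_i * x_i), w_i >= 0."""
    score = lambda p: sum(w * x for w, x in zip(weights, p))
    return sorted(points, key=score, reverse=True)[:k]
```

Since the top-1 result under any nonnegative weight vector is always a skyline point, a node only needs to ship its skyline layer to answer arbitrary linear top-1 queries, which is the intuition the framework exploits to cut communication.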
【Paper Link】 【Pages】:802-810
【Authors】: Jun Zhu ; Zhefu Jiang ; Zhen Xiao
【Abstract】: A key benefit of Amazon EC2-style cloud computing service is the ability to instantiate a large number of virtual machines (VMs) on the fly during flash crowd events. Most existing research focuses on the policy decision such as when and where to start a VM for an application. In this paper, we study a different problem: how can the VMs and the applications inside be brought up as quickly as possible? This problem has not been solved satisfactorily in existing cloud services. We develop a fast start technique for cloud applications by restoring previously created VM snapshots of fully initialized applications. We propose a set of optimizations, including working set estimation, demand prediction, and free page avoidance, that allow an application to start running with only partially loaded memory, yet without noticeable performance penalty during its subsequent execution. We implement our system, called Twinkle, in the Xen hypervisor and employ the two-dimensional page walks supported by the latest virtualization technology. We use the RUBiS and TPC-W benchmarks to evaluate its performance under flash crowd and failover scenarios. The results indicate that Twinkle can provision VMs and restore the QoS significantly faster than the current approaches.
【Keywords】: cloud computing; quality of service; virtual machines; Amazon EC2-style cloud computing service; Internet service; QoS; RUBiS; TPC-W; VM; fast resource provisioning mechanism; twinkle; virtual machine; Ash; Image storage; Operating systems; Random access memory; Servers; Virtual machine monitors; Web and internet services
【Paper Link】 【Pages】:811-819
【Authors】: Clint Sparkman ; Hsin-Tsang Lee ; Dmitri Loguinov
【Abstract】: With the proliferation of web spam and questionable content with virtually infinite auto-generated structure, large-scale web crawlers now require low-complexity ranking methods to effectively budget their limited resources and allocate the majority of bandwidth to reputable sites. To shed light on Internet-wide spam avoidance, we study the domain-level graph from a 6.3B-page web crawl and compare several agnostic topology-based ranking algorithms on this dataset. We first propose a new methodology for comparing the various rankings and then show that in-degree BFS-based techniques decisively outperform classic PageRank-style methods. However, since BFS requires several orders of magnitude higher overhead and is generally infeasible for real-time use, we propose a fast, accurate, and scalable estimation method that can achieve much better crawl prioritization in practice, especially in applications with limited hardware resources.
【Keywords】: Internet; security of data; unsolicited e-mail; BFS-based technique; PageRank-style method; Web crawl; Web spam; agnostic topology-based ranking algorithm; spam avoidance; Algorithm design and analysis; Crawlers; Electronic mail; Google; Internet; Manuals; Search engines
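As a baseline for the comparison the abstract describes, classic PageRank can be computed by power iteration on an out-link adjacency map. A toy sketch; real crawl-scale rankings use sparse-matrix implementations and convergence tests rather than a fixed iteration count:

```python
def pagerank(out_links, damping=0.85, iters=50):
    """Plain PageRank by power iteration.
    out_links: dict node -> list of out-neighbours.
    Dangling nodes distribute their mass uniformly."""
    nodes = list(out_links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v in nodes:
            targets = out_links[v]
            if targets:
                share = damping * rank[v] / len(targets)
                for t in targets:
                    new[t] += share
            else:                      # dangling node: spread evenly
                for t in nodes:
                    new[t] += damping * rank[v] / n
        rank = new
    return rank
```

In-degree or BFS-based alternatives, which the paper finds more spam-resistant, replace this eigenvector computation with breadth-first expansion from trusted seeds.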
【Paper Link】 【Pages】:820-828
【Authors】: Cong Wang ; Kui Ren ; Jia Wang
【Abstract】: Cloud computing enables customers with limited computational resources to outsource large-scale computational tasks to the cloud, where massive computational power can be easily utilized in a pay-per-use manner. However, security is the major concern that prevents the wide adoption of computation outsourcing in the cloud, especially when end-user's confidential data are processed and produced during the computation. Thus, secure outsourcing mechanisms are in great need to not only protect sensitive information by enabling computations with encrypted data, but also protect customers from malicious behaviors by validating the computation result. Such a mechanism of general secure computation outsourcing was recently shown to be feasible in theory, but to design mechanisms that are practically efficient remains a very challenging problem. Focusing on engineering computing and optimization tasks, this paper investigates secure outsourcing of widely applicable linear programming (LP) computations. In order to achieve practical efficiency, our mechanism design explicitly decomposes the LP computation outsourcing into public LP solvers running on the cloud and private LP parameters owned by the customer. The resulting flexibility allows us to explore appropriate security/efficiency tradeoff via higher-level abstraction of LP computations than the general circuit representation. In particular, by formulating private data owned by the customer for LP problem as a set of matrices and vectors, we are able to develop a set of efficient privacy-preserving problem transformation techniques, which allow customers to transform original LP problem into some random one while protecting sensitive input/output information. To validate the computation result, we further explore the fundamental duality theorem of LP computation and derive the necessary and sufficient conditions that correct result must satisfy. 
Such a result verification mechanism is extremely efficient and incurs close-to-zero additional cost on both the cloud server and customers. Extensive security analysis and experiment results show the immediate practicability of our mechanism design.
【Keywords】: cloud computing; data privacy; linear programming; cloud computing; computation outsourcing; end-user confidential data; engineering computing task; linear programming; optimization task; privacy-preserving problem transformation technique; secure computation outsourcing mechanism; Computational modeling; Encryption; Linear programming; Outsourcing; Servers
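The verification side of such a scheme rests on LP duality: a claimed primal solution together with a dual certificate proves optimality iff both are feasible and their objective values coincide. A minimal numeric sketch for min c^T x s.t. Ax >= b, x >= 0; the encryption and problem-transformation steps of the paper are omitted entirely:

```python
def verify_lp_result(A, b, c, x, y, tol=1e-9):
    """Check a claimed optimum of   min c^T x  s.t.  A x >= b, x >= 0
    against a dual certificate y of  max b^T y  s.t.  A^T y <= c, y >= 0.
    By strong duality the pair (x, y) is optimal iff all three checks pass."""
    m, n = len(A), len(A[0])
    primal_feasible = (all(xi >= -tol for xi in x) and
                       all(sum(A[i][j] * x[j] for j in range(n)) >= b[i] - tol
                           for i in range(m)))
    dual_feasible = (all(yi >= -tol for yi in y) and
                     all(sum(A[i][j] * y[i] for i in range(m)) <= c[j] + tol
                         for j in range(n)))
    objectives_match = abs(sum(c[j] * x[j] for j in range(n)) -
                           sum(b[i] * y[i] for i in range(m))) <= tol
    return primal_feasible and dual_feasible and objectives_match
```

For example, for min x1 + x2 s.t. x1 + x2 >= 2 the point x = (1, 1) with dual certificate y = (1) passes all three checks, so the customer accepts the cloud's answer without re-solving the LP.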
【Paper Link】 【Pages】:829-837
【Authors】: Ning Cao ; Cong Wang ; Ming Li ; Kui Ren ; Wenjing Lou
【Abstract】: With the advent of cloud computing, data owners are motivated to outsource their complex data management systems from local sites to the commercial public cloud for great flexibility and economic savings. But for protecting data privacy, sensitive data has to be encrypted before outsourcing, which obsoletes traditional data utilization based on plaintext keyword search. Thus, enabling an encrypted cloud data search service is of paramount importance. Considering the large number of data users and documents in the cloud, it is necessary to allow multiple keywords in the search request and return documents in the order of their relevance to these keywords. Related works on searchable encryption focus on single keyword search or Boolean keyword search, and rarely sort the search results. In this paper, for the first time, we define and solve the challenging problem of privacy-preserving multi-keyword ranked search over encrypted cloud data (MRSE). We establish a set of strict privacy requirements for such a secure cloud data utilization system. Among various multi-keyword semantics, we choose the efficient similarity measure of “coordinate matching”, i.e., as many matches as possible, to capture the relevance of data documents to the search query. We further use “inner product similarity” to quantitatively evaluate such similarity measure. We first propose a basic idea for the MRSE based on secure inner product computation, and then give two significantly improved MRSE schemes to achieve various stringent privacy requirements in two different threat models. Thorough analysis investigating privacy and efficiency guarantees of proposed schemes is given. Experiments on the real-world dataset further show proposed schemes indeed introduce low overhead on computation and communication.
【Keywords】: cloud computing; cryptography; data privacy; query processing; Boolean keyword search; cloud computing; cloud data utilization system; coordinate matching measurement; data encryption; data privacy; inner product similarity measurement; multikeyword semantics; plaintext keyword search; privacy-preserving multikeyword ranked search; single keyword search; Cloud computing; Data privacy; Encryption; Indexes; Privacy; Servers
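The "coordinate matching" similarity is simply the inner product of binary keyword vectors. The plaintext sketch below illustrates the scoring that the MRSE schemes compute over encrypted vectors; the dictionary and documents are made up, and all cryptographic machinery is omitted:

```python
def build_index_vector(document_words, dictionary):
    """Binary index vector: 1 where the dictionary keyword appears."""
    words = set(document_words)
    return [1 if w in words else 0 for w in dictionary]

def coordinate_match_score(index_vec, query_vec):
    """Inner product similarity = number of query keywords matched."""
    return sum(d * q for d, q in zip(index_vec, query_vec))

def ranked_search(docs, dictionary, query_words, k):
    """Rank documents by coordinate matching and return top-k indices."""
    qv = build_index_vector(query_words, dictionary)
    scored = [(coordinate_match_score(build_index_vector(d, dictionary), qv), i)
              for i, d in enumerate(docs)]
    return [i for _, i in sorted(scored, reverse=True)[:k]]
```

In the actual schemes, both vectors are split and encrypted with secret matrices so that the server can compute exactly this inner product without learning the keywords.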
【Paper Link】 【Pages】:838-845
【Authors】: Peng-Jun Wan ; Ophir Frieder ; Xiaohua Jia ; F. Frances Yao ; XiaoHua Xu ; Shaojie Tang
【Abstract】: Link scheduling is a fundamental problem in multihop wireless networks because the capacities of the communication links in multihop wireless networks, rather than being fixed, vary with the underlying link schedule subject to the wireless interference constraint. The majority of algorithmic works on link scheduling in multihop wireless networks assume binary interference models such as the 802.11 interference model and the protocol interference model, which often put severe restrictions on interference constraints for practical applicability of the link schedules. On the other hand, while the physical interference model is much more realistic, the link scheduling problem under the physical interference model is notoriously hard to resolve and has been studied only recently by a few works. This paper conducts a full-scale algorithmic study of link scheduling for maximizing throughput capacity or minimizing the communication latency in multihop wireless networks under the physical interference model. We build a unified algorithmic framework and develop approximation algorithms for link scheduling with or without power control.
【Keywords】: approximation theory; interference (signal); radio links; radio networks; scheduling; 802.11 interference model; approximation algorithm; binary interference model; communication latency minimization; communication link; multihop wireless network; physical interference model; protocol interference model; throughput capacity maximization; wireless interference constraint; wireless link scheduling; Approximation algorithms; Approximation methods; Interference; Polynomials; Schedules; Spread spectrum communication; Wireless networks; Link scheduling; approximation algorithm; maximum (concurrent) multiflow; maximum independent set; physical interference model
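Under the physical interference model, a set of concurrently scheduled links is feasible iff every receiver's SINR clears a threshold β. A minimal feasibility check with uniform transmit power and a plain path-loss law; the power level, noise floor, and path-loss exponent below are illustrative values, not parameters from the paper:

```python
import math

def sinr_feasible(links, power, beta, noise, alpha=4.0):
    """Physical interference model feasibility test.
    links: list of (sender_xy, receiver_xy) pairs transmitting at once,
    all with the same power and path-loss exponent alpha."""
    def gain(a, b):
        d = math.dist(a, b)
        return power / (d ** alpha)
    for i, (si, ri) in enumerate(links):
        signal = gain(si, ri)
        interference = sum(gain(sj, ri)
                           for j, (sj, _) in enumerate(links) if j != i)
        if signal / (noise + interference) < beta:
            return False   # receiver i's SINR falls below the threshold
    return True
```

Because feasibility depends on the cumulative interference of all concurrent senders, not just pairwise conflicts, scheduling under this model resists the graph-coloring techniques that suffice for binary interference models.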
【Paper Link】 【Pages】:846-854
【Authors】: Peng-Jun Wan ; Yu Cheng ; Zhu Wang ; F. Frances Yao
【Abstract】: This paper studies maximum multiflow (MMF) and maximum concurrent multiflow (MCMF) in multi-channel multi-radio multihop wireless networks under the 802.11 interference model or the protocol interference model. We introduce a fine-grained network representation of multi-channel multi-radio multihop wireless networks and present some essential topological properties of its associated conflict graph. By exploiting these properties, we develop practical polynomial approximation algorithms for MMF and MCMF with constant approximation bounds regardless of the number of channels and radios. Under the 802.11 interference model, their approximation bounds are at most 20 in general and at most 8 with uniform interference radii; under the protocol interference model, if the interference radius of each node is at least c times its communication radius, their approximation bounds are at most 2(⌈π/arcsin((c-1)/(2c))⌉ + 1). In addition, we also prove that if the number of channels is bounded by a constant (which is typical in practical networks), both MMF and MCMF admit a polynomial-time approximation scheme under the 802.11 interference model or under the protocol interference model with some additional mild conditions.
【Keywords】: communication complexity; graph theory; radio networks; radiofrequency interference; wireless LAN; wireless channels; 802.11 interference model; MCMF; MMF; associated conflict graph; maximum concurrent multiflow; maximum multiflow; multichannel multiradio multihop wireless network; polynomial-time approximation scheme; protocol interference model; topological property; Approximation methods; IEEE 802.11 Standards; Interference; Polynomials; Protocols; Spread spectrum communication; Wireless networks; Link scheduling; approximation algorithm; maximum (concurrent) multiflow; multi-channel multi-radio
【Paper Link】 【Pages】:855-863
【Authors】: Fung Po Tso ; Lin Cui ; Lizhuo Zhang ; Weijia Jia ; Di Yao ; Jin Teng ; Dong Xuan
【Abstract】: Wide range wireless networks often suffer from annoying service deterioration due to the fickle wireless environment. This is especially the case for passengers on a long distance train (LDT) connecting to the Internet. To improve the service quality of wide range wireless networks, we present the DragonNet protocol with its implementation. The DragonNet system is a chained gateway which consists of a group of interlinked DragonNet routers working specifically for mobile chain transport systems. The protocol makes use of the spatial diversity of wireless signals: not all spots on a surface see the same level of radio frequency radiation. In the case of an LDT of around 500 meters, it is highly possible that some of the spanning routers still see sound signal quality when the LDT is partially blocked from the wireless Internet. The DragonNet protocol fully utilizes this feature to amortize single-point router failure over the whole router chain by intelligently rerouting traffic from failed routers to sound ones. We have implemented the DragonNet system and tested it on real railways over a period of three months. Our results pinpoint two fundamental contributions of the DragonNet protocol. First, DragonNet significantly reduces the average temporary communication blackout (i.e., no Internet connection) to 1.5 seconds, compared with 6 seconds without the DragonNet protocol. Second, DragonNet effectively doubles the aggregate throughput on average.
【Keywords】: Internet; mobile computing; protocols; radio networks; telecommunication network routing; DragonNet protocol; DragonNet routers; LDT; communication blackout; long distance trains; mobile chain transport systems; radiofrequency radiation; robust mobile Internet service system; time 1.5 s; wireless Internet; wireless environment; wireless networks; wireless signals; Internet; Power system faults; Power system protection; Routing protocols; Servers; Wireless communication
【Paper Link】 【Pages】:864-872
【Authors】: Marjan A. Baghaie ; Bhaskar Krishnamachari
【Abstract】: We formulate the problem of delay constrained energy-efficient broadcast in cooperative multihop wireless networks. We show that this important problem is not only NP-complete, but also o(log(n)) inapproximable. We derive approximation results and an analytical lower-bound for this problem. We break this NP-hard problem into three parts: ordering, scheduling and power control. We show that when the ordering is given, the joint scheduling and power-control problem can be solved in polynomial time by a novel algorithm that combines dynamic programming and linear programming to yield the minimum energy broadcast for a given delay constraint. We further show empirically that this algorithm used in conjunction with an ordering derived heuristically using Dijkstra's shortest path algorithm yields near-optimal performance in typical settings. We use our algorithm to study numerically the trade-off between delay and power-efficiency in cooperative broadcast and compare the performance of our cooperative algorithm with a smart non-cooperative algorithm.
【Keywords】: broadcast channels; computational complexity; dynamic programming; graph theory; linear programming; radio networks; scheduling; Dijkstra's shortest path algorithm; NP hard problem; NP-complete; analytical lower-bound; cooperative broadcast; cooperative multihop wireless networks; cooperative wireless networks; delay constrained energy-efficient broadcast; delay constrained minimum energy broadcast; delay constraint; dynamic programming; joint scheduling; linear programming; near-optimal performance; ordering; polynomial time; power control; smart noncooperative algorithm; Artificial neural networks
【Paper Link】 【Pages】:873-881
【Authors】: Yunhao Liu ; Yuan He ; Mo Li ; Jiliang Wang ; Kebin Liu ; Lufeng Mo ; Wei Dong ; Zheng Yang ; Min Xi ; Jizhong Zhao ; Xiang-Yang Li
【Abstract】: In spite of the remarkable efforts the community has put into building sensor systems, an essential question still remains unclear at the system level, motivating us to explore the answer from a real-world deployment point of view. Does the wireless sensor network really scale? We present findings from a large scale operating sensor network system, GreenOrbs, with up to 330 nodes deployed in the forest. We instrument such an operating network throughout the protocol stack and present observations across layers in the network. Based on our findings from the system measurement, we propose and make initial efforts to validate three conjectures that give potential guidelines for future designs of large scale sensor networks. (1) A small portion of nodes bottlenecks the entire network, and most of the existing network indicators may not accurately capture them. (2) The network dynamics mainly come from the inherent concurrency of network operations instead of environment changes. (3) The environment, although the dynamics are not as significant as we assumed, has an unpredictable impact on the sensor network. We suggest that an event-based routing structure can be trained to be optimal and thus better adapt to the wild environment when building a large scale sensor network.
【Keywords】: protocols; sensor placement; telecommunication network routing; wireless sensor networks; GreenOrbs; event-based routing structure; large scale sensor networks; network dynamics; operating network throughout; protocol stack; sensor deployment; wireless sensor network; Area measurement; Heating
【Paper Link】 【Pages】:882-890
【Authors】: Utpal Paul ; Anand Prabhu Subramanian ; Milind M. Buddhikot ; Samir R. Das
【Abstract】: We conduct the first detailed measurement analysis of network resource usage and subscriber behavior using a large-scale data set collected inside a nationwide 3G cellular data network. The data set tracks close to a million subscribers over thousands of base stations. We analyze individual subscriber behaviors and observe a significant variation in network usage among subscribers. We characterize subscriber mobility and temporal activity patterns and identify their relation to traffic volume. We then investigate how efficiently radio resources are used by different subscribers as well as by different applications. We also analyze the network traffic from the point of view of the base stations and find significant temporal and spatial variations in different parts of the network, while the aggregated behavior appears predictable. Broadly, our observations deliver important insights into network-wide resource usage. We describe implications in pricing, protocol design and resource and spectrum management.
【Keywords】: 3G mobile communication; cellular radio; mobility management (mobile radio); protocols; telecommunication traffic; 3G cellular data network; network traffic; network-wide resource; protocol design; resource management; spectrum management; subscriber mobility; temporal activity; traffic dynamics; Base stations; Bit rate; Mobile communication; Planning; Protocols; Trajectory; Virtual private networks
【Paper Link】 【Pages】:891-899
【Authors】: Biao Han ; Jie Li ; Jinshu Su
【Abstract】: One of the challenging issues for supporting emergency services in wireless ad hoc networks (WANETs) is coordinating the network under emergency situations. Such situations may lead to inefficient use of the network resources by increasing congestion, as well as affect the network connectivity due to the non-cooperation behaviors of some selfish users. In this paper, we focus on promoting self-supported and congestion-aware networking for emergency services in WANETs based on the idea of Do-It-Yourself. We model network congestion and non-cooperation behaviors according to the relations between nodes in the constructed dependency graph. Then we propose an energy-efficient and congestion-aware routing protocol for the emergency services of WANETs. Based on the proposed model and routing protocol, we design two novel movement schemes, called Direct Movement to potential selfish/busy Relays (DMR) scheme and Iterative Movement to potential selfish/busy Relays (IMR) scheme, for urgent sources to support themselves and to avoid congestion and non-cooperation. Analysis and simulation results show that our approaches achieve significantly better network performance and typically satisfy the requirements for emergency services in WANETs.
【Keywords】: ad hoc networks; routing protocols; telecommunication congestion control; Direct Movement to potential selfish/busy Relays scheme; Iterative Movement to potential selfish/busy Relays scheme; WANET; congestion-aware routing protocol; emergency services; network congestion; noncooperation behaviors; self-supported congestion-aware networking; wireless ad hoc networks; Ad hoc networks; Emergency services; Mathematical model; Relays; Routing protocols; Wireless communication; Wireless sensor networks
【Paper Link】 【Pages】:900-908
【Authors】: Roy Friedman ; Alex Kogan ; Yevgeny Krivolapov
【Abstract】: This paper describes a combined power and throughput performance study of WiFi and Bluetooth usage in smartphones. The study reveals several interesting phenomena and tradeoffs. The conclusions from this study suggest preferred usage patterns, as well as operative suggestions for researchers and smartphone developers.
【Keywords】: Bluetooth; mobile handsets; wireless LAN; Bluetooth; WiFi; smartphones; Ad hoc networks; Bluetooth; IEEE 802.11 Standards; Power demand; Power measurement; Smart phones; Throughput
【Paper Link】 【Pages】:909-917
【Authors】: Ze Li ; Haiying Shen
【Abstract】: Encouraging cooperative and deterring selfish behaviors are important for proper operations of MANETs. For this purpose, most previous efforts rely on either reputation systems or price systems. However, both systems are neither sufficiently effective in providing cooperation incentives nor efficient in resource consumption. Nodes in both systems can be uncooperative while still being considered trustworthy. Also, information exchange between mobile nodes in reputation systems and credit circulation in price systems consume significant resources. This paper presents a hierarchical Account-aided Reputation Management system (ARM) to efficiently and effectively provide cooperation incentives. ARM builds a hierarchical locality-aware DHT infrastructure for efficient and integrated operations of both reputation and price systems. The infrastructure helps to globally collect all reputation information in the system, which helps to calculate more accurate reputation and detect abnormal reputation information. Also, ARM integrates the reputation and price systems in a coordinated manner by enabling higher-reputed nodes to pay less for their received services. Theoretical analysis demonstrates the properties of ARM. Simulation results show that ARM outperforms both a reputation system and a price system in terms of effectiveness and efficiency.
【Keywords】: mobile ad hoc networks; telecommunication network management; ARM; cooperation incentive; hierarchical account-aided reputation management system; hierarchical locality-aware DHT infrastructure; large-scale MANET; price systems; resource consumption; Ad hoc networks; Maintenance engineering; Mobile communication; Mobile computing; Monitoring; Routing; Topology
【Paper Link】 【Pages】:918-926
【Authors】: Chi Zhang ; Xiaoyan Zhu ; Yang Song ; Yuguang Fang
【Abstract】: For a multi-hop wireless network (MWN) consisting of mobile nodes controlled by independent self-interested users, an incentive mechanism is essential for motivating mobile nodes to cooperate and forward packets for each other. Existing solutions such as barter-based, virtual-currency-based and reputation-based schemes are either less effective or incur high implementation costs, and therefore do not fit well with the unique requirements of MWNs. In this paper, we propose a novel and promising incentive paradigm, Controlled Coded packets as virtual Commodity Currency (C4), to induce cooperative behaviors in MWNs. In our C4, through introducing several techniques from network coding, coded information packets are utilized as a new kind of virtual currency to facilitate packet/service exchanges among self-interested nodes in a MWN. Since the virtual currency implemented in this way also carries useful data information, it is the counterpart of the so-called commodity currency in the physical world, and the overhead brought by C4 is extremely small compared to traditional schemes. We theoretically show that C4 is perfectly efficient to support MWNs with broadcast and multicast traffic. For pure unicast communications, by adjusting the grouping parameter, our C4 provides a systematic way to smoothly trade incentive effectiveness for implementation cost, and traditional barter-based and virtual-currency-based schemes are just two extreme cases of C4. We also show that when our C4 is combined with the social network formed by mobile users in the MWN, the implementation costs can be further reduced without sacrificing incentive effectiveness.
【Keywords】: broadcast channels; incentive schemes; mobile radio; multicast communication; network coding; packet radio networks; telecommunication traffic; MWN; barter based scheme; broadcast traffic; coded information packets; controlled coded packets; cooperative behaviors; data information; forward packets; incentive mechanism; incentive paradigm; independent self-interested users; mobile nodes; multicast traffics; multihop wireless networks; network coding; packet exchange; reputation based schemes; self-interested nodes; service exchange; social network; unicast communications; virtual commodity currency; virtual currency; virtual-currency based schemes; Mobile communication; Vehicles
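The coded packets C4 circulates are random linear combinations of source packets; over GF(2) this is just XOR, with the coefficient vector carried as a bitmask and decoding done by Gaussian elimination. A minimal sketch of that coding substrate; the incentive layer and the grouping parameter of the paper are omitted:

```python
import random

def encode(packets, rng):
    """Random linear combination over GF(2): XOR a random nonzero
    subset of the source packets; the coefficient vector is a bitmask."""
    coeffs = rng.randrange(1, 1 << len(packets))
    payload = 0
    for i, p in enumerate(packets):
        if coeffs >> i & 1:
            payload ^= p
    return coeffs, payload

def decode(coded, n):
    """Gaussian elimination over GF(2); returns the n source packets
    once the coefficient vectors reach full rank, else None."""
    basis = {}                                # pivot bit -> (coeffs, payload)
    for coeffs, payload in coded:
        for pivot in sorted(basis, reverse=True):   # reduce by existing rows
            if coeffs >> pivot & 1:
                bc, bp = basis[pivot]
                coeffs ^= bc
                payload ^= bp
        if coeffs:
            basis[coeffs.bit_length() - 1] = (coeffs, payload)
    if len(basis) < n:
        return None
    out = [0] * n
    for pivot in sorted(basis):               # back-substitute, low pivots first
        bc, bp = basis[pivot]
        val = bp
        for j in range(pivot):
            if bc >> j & 1:
                val ^= out[j]
        out[pivot] = val
    return out
```

Because any full-rank set of n coded packets suffices to recover the originals, a coded packet retains value no matter which subset a node happens to collect, which is what makes it usable as a commodity currency.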
【Paper Link】 【Pages】:927-935
【Authors】: Joshua Reich ; Vishal Misra ; Dan Rubenstein ; Gil Zussman
【Abstract】: We explore distributed mechanisms for maintaining the physical layer connectivity of a mobile wireless network while still permitting significant area coverage. Moreover, we require that these mechanisms maintain connectivity despite the unpredictable wireless propagation behavior found in complex real-world environments. To this end, we propose the Spreadable Connected Autonomic Network (SCAN) algorithm, a fully distributed, on-line, low overhead mechanism for maintaining the connectivity of a mobile wireless network. SCAN leverages knowledge of the local (2-hop) network topology to enable each node to intelligently halt its own movement and thereby avoid network partitioning events. By relying on topology data instead of locality information and deterministic connectivity models, SCAN can be applied in a wide range of realistic operational environments. We believe it is for precisely this reason that, to our best knowledge, SCAN was the first such approach to be implemented in hardware. Here, we present results from our implementation of SCAN, finding that our mobile robotic testbed maintains full connectivity over 99% of the time. Moreover, SCAN achieves this in a complex indoor environment, while still allowing testbed nodes to cover a significant area.
【Keywords】: mobile ad hoc networks; telecommunication network topology; connectivity maintenance; constrained mobility; mobile wireless networks; network topology; spreadable connected autonomic network algorithm; Maintenance engineering; Mobile communication; Mobile computing; Neodymium; Robots; Robustness; Wireless communication
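The halting rule in SCAN amounts to a node asking whether its departure would disconnect the graph it knows about, i.e. whether it is a cut vertex. A sketch of that check, run here on a full adjacency map for clarity; SCAN itself makes the decision from 2-hop local topology only:

```python
def is_cut_vertex(adj, v):
    """True if removing node v disconnects the graph.
    adj: dict node -> set of neighbours (undirected).
    A SCAN-style node whose movement would partition the network
    halts rather than break connectivity."""
    others = [u for u in adj if u != v]
    if len(others) <= 1:
        return False
    # DFS over the graph with v removed
    seen = {others[0]}
    stack = [others[0]]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w != v and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) < len(others)   # some node unreachable without v
```

Restricting the same test to the 2-hop neighbourhood makes it fully local and conservative: a node that looks like a cut vertex locally halts, trading a little coverage for guaranteed connectivity.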
【Paper Link】 【Pages】:936-944
【Authors】: Fangming Liu ; Shijun Shen ; Bo Li ; Baochun Li ; Hao Yin ; Sanli Li
【Abstract】: In this paper, we present Novasky, a real-world Video-on-Demand (VoD) system capable of delivering cinematic-quality video streams to end users. The foundation of the Novasky design is a peer-to-peer (P2P) storage cloud, storing and refreshing media streams in a decentralized fashion using local storage spaces of end users. We present our design objectives in Novasky, and how these objectives are achieved using a collection of unique mechanisms, with respect to caching strategies, coding mechanisms, and the maintenance of the supply-demand relationship when it comes to media availability in the P2P storage cloud. The production Novasky system has been implemented with over 100,000 lines of code. It has been deployed in the Tsinghua University campus network, operational since September 2009, attracting 10,000 users to date, and providing over 1,000 cinematic-quality video streams with bit rates of 1 - 2 Mbps. Based on real-world traces collected over 6 months, we show that Novasky can achieve rapid startups within 4 - 9 seconds, and extremely short seek latencies within 3 seconds. Our empirical experiences with Novasky may bring valuable insights to future designs of production-quality P2P storage cloud systems.
【Keywords】: cache storage; cinematography; cloud computing; peer-to-peer computing; video on demand; video streaming; P2P storage; VoD; cinematic quality; cloud computing; coding mechanisms; media streams; peer-to-peer storage; video streams; video-on-demand; Availability; Bandwidth; Encoding; Peer to peer computing; Reed-Solomon codes; Servers; Streaming media
【Paper Link】 【Pages】:945-953
【Authors】: Yipeng Zhou ; Tom Z. J. Fu ; Dah Ming Chiu
【Abstract】: Traditional Video-on-Demand (VoD) systems rely purely on servers to stream video content to clients, which does not scale. In recent years, peer-to-peer-assisted VoD (P2P VoD) has proven to be practical and effective. In P2P VoD, each peer contributes some storage to store videos (or segments of videos) to help the video server. Assuming peers have sufficient bandwidth for the given video playback rate, a fundamental question is: what is the relationship between the storage capacity (at each peer), the number of videos, the number of peers, and the resultant off-loading of video server bandwidth? In this paper, we use a simple statistical model to derive this relationship. We propose and analyze a generic replication algorithm, RLB, which balances the service to all movies, for both deterministic and random demand models, and for both homogeneous and heterogeneous peers (in upload bandwidth). We use simulation to validate our results, for sensitivity analysis, and for comparisons with other popular replication algorithms. This study leads to several fundamental insights for designing P2P VoD systems in practice.
【Keywords】: peer-to-peer computing; sensitivity analysis; statistical analysis; video on demand; video servers; video streaming; VoD service; generic replication algorithm; peer-to-peer replication; random demand models; sensitivity analysis; statistical analysis; statistical modeling; storage capacity; video playback rate; video server bandwidth; video streaming; video-on-demand; Bandwidth; Correlation; Load modeling; Motion pictures; Peer to peer computing; Servers; Streaming media
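The replication balance described in the abstract above lends itself to a small sketch. This is a hypothetical illustration only (RLB's actual rules are specified in the paper): peer storage slots are allocated to videos in proportion to expected demand so that per-replica load is balanced; the function name and the largest-remainder rounding are our own assumptions.

```python
def proportional_replication(demand, total_slots):
    """Allocate replica slots to videos in proportion to expected demand,
    so per-replica load is balanced (largest-remainder rounding)."""
    total = sum(demand.values())
    # ideal (fractional) replica count per video
    ideal = {v: total_slots * d / total for v, d in demand.items()}
    alloc = {v: max(1, int(x)) for v, x in ideal.items()}  # at least one copy
    # hand leftover slots to the videos with the largest fractional remainder
    leftover = total_slots - sum(alloc.values())
    for v in sorted(ideal, key=lambda v: ideal[v] - int(ideal[v]), reverse=True):
        if leftover <= 0:
            break
        alloc[v] += 1
        leftover -= 1
    return alloc
```

For example, with demand split 50/30/20 across three videos and 10 slots, the allocation is 5/3/2, so each replica of every video serves roughly the same expected load.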
【Paper Link】 【Pages】:954-962
【Authors】: Raphael Eidenbenz ; Thomas Locher ; Roger Wattenhofer
【Abstract】: We consider the question of how a conspiring subgroup of peers in a p2p network can find each other and communicate without provoking suspicion among regular peers or an authority that monitors the network. In particular, we look at the problem of how a conspirer can broadcast a message secretly to all fellow conspirers. As a subproblem of independent interest, we study the problem of how a conspirer can safely determine a connected peer's type, i.e., learning whether the connected peer is a conspirer or a regular peer without giving away its own type in the latter case. For several levels of monitoring, we propose distributed and efficient algorithms that transmit hidden information by varying the block request sequence meaningfully. We find that a p2p protocol offers several steganographic channels through which hidden information can be transmitted, and p2p networks are susceptible to hidden communication even if they are completely monitored.
【Keywords】: distributed algorithms; peer-to-peer computing; protocols; steganography; P2P networks; P2P protocol; distributed algorithms; hidden communication; steganographic channels; Algorithms; Blogs; Complexity theory; Monitoring; Peer to peer computing; Protocols; Servers
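One way to picture the "varying the block request sequence" channel described in the abstract above (a generic illustration, not the paper's algorithms) is to encode an integer message in the order in which otherwise-interchangeable blocks are requested, using the factorial number system: an observer sees only plausible block requests, while a fellow conspirer recovers the message from the ordering.

```python
from math import factorial

def encode_order(blocks, msg):
    """Encode integer msg (0 <= msg < len(blocks)!) as a request order
    over `blocks` via the factorial number system (Lehmer code)."""
    blocks = sorted(blocks)
    order = []
    for i in range(len(blocks), 0, -1):
        f = factorial(i - 1)
        idx, msg = divmod(msg, f)
        order.append(blocks.pop(idx))
    return order

def decode_order(order):
    """Recover the integer message from an observed request order."""
    remaining = sorted(order)
    msg = 0
    for i, b in enumerate(order):
        idx = remaining.index(b)
        msg += idx * factorial(len(order) - 1 - i)
        remaining.pop(idx)
    return msg
```

A window of k interchangeable blocks carries log2(k!) hidden bits per round; the request sequence itself remains a valid download schedule.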
【Paper Link】 【Pages】:963-971
【Authors】: Rubén Cuevas Rumín ; Nikolaos Laoutaris ; Xiaoyuan Yang ; Georgos Siganos ; Pablo Rodriguez
【Abstract】: A substantial amount of work has recently gone into localizing BitTorrent traffic within an ISP in order to avoid excessive and often unnecessary transit costs. Several architectures and systems have been proposed, and the initial results from specific ISPs and a few torrents have been encouraging. In this work we attempt to deepen and scale our understanding of locality and its potential. First, looking at specific ISPs, we consider tens of thousands of concurrent torrents, and thus capture ISP-wide implications that cannot be appreciated by looking at only a handful of torrents. Second, we go beyond individual case studies and present results for the top 100 ISPs in terms of the number of users represented in our dataset of up to 40K torrents involving more than 3.9M concurrent peers, and more than 20M peers in the course of a day, spread across 11K ASes. We develop scalable methodologies that allow us to process this huge dataset and obtain concrete quantitative answers, rather than qualitative speculations, to questions like: “what is the minimum and the maximum transit traffic reduction across hundreds of ISPs?”, “what are the win-win boundaries for ISPs and their users?”, and “what is the maximum amount of transit traffic that can be localized without requiring fine-grained control of inter-AS overlay connections?”.
【Keywords】: Internet; ubiquitous computing; BitTorrent; Crawlers; Numerical models; Peer to peer computing; Protocols; Quality of service; Steady-state; Upper bound
【Paper Link】 【Pages】:972-980
【Authors】: Yuanzhong Xu ; Xinbing Wang
【Abstract】: We study the fundamental lower bound for node buffer size in intermittently connected wireless networks. The intermittent connectivity is caused by the possibility of node inactivity due to some external constraints. We find that even with infinite network capacity and node processing speed, buffer occupation in each node does not approach zero in a static random network where each node keeps a constant message generation rate. Given the condition that each node has the same probability p of being inactive during each time slot, there exists a critical value pc(λ) for this probability from a percolation-based perspective. When p < pc(λ), the network is in the supercritical case, and there is an achievable lower bound for the occupied buffer size of each node, which is asymptotically independent of the size of the network. If p > pc(λ), the network is in the subcritical case, and there is a tight lower bound Θ(√n) for buffer occupation, where n is the number of nodes in the network.
【Keywords】: radio networks; telecommunication network management; buffer occupation; infinite network capacity; intermittent connectivity; intermittently connected wireless networks; message generation rate; node buffer size; node processing speed; static random network; Delay; Mobile communication; Routing; Throughput; Transmitters; Wireless networks; Wireless sensor networks
【Paper Link】 【Pages】:981-989
【Authors】: Peng-Jun Wan ; Lixin Wang
【Abstract】: Consider a random multihop wireless network represented by a Poisson point process over a unit-area disk with mean n. Let øn denote its critical transmission radius for greedy forward routing. Recently, asymptotic bounds on øn have been progressively improved. However, the precise asymptotic probability distribution of øn remained open. In this paper, we settle this open problem. Specifically, let σ = 2π/3 - √3/2. Then for any constant c, the asymptotic probability of the event in question is proved to be exactly exp(-(1/σ/π-1/3-π/2σ)e^{-c}).
【Keywords】: probability; radio networks; random processes; stochastic processes; telecommunication network routing; Poisson point process; asymptotic probability distribution; critical transmission radius; greedy forward routing; random multihop wireless network; Area measurement; Electronic mail; Euclidean distance; Markov processes; Probability distribution; Routing; Topology; Greedy forward routing; asymptotic distribution; critical transmission radius; random deployment
【Paper Link】 【Pages】:990-998
【Authors】: Hongkun Li ; Yu Cheng ; Peng-Jun Wan ; Jiannong Cao
【Abstract】: It is very challenging to compute the capacity region of a multi-radio multi-channel (MR-MC) network, which involves complex resource contention including co-channel interference and radio interface contention. In this paper, we study the local sufficient rate constraints that can be constructed at each network node in a distributed manner to ensure a feasible flow allocation for the MR-MC network. The analysis of the capacity region under these rate constraints is facilitated by our tool of the multi-dimensional conflict graph (MDCG), which systematically describes all kinds of conflict relationships in an MR-MC network. Specifically, we establish two types of local sufficient constraints, the neighborhood constraint and the sufficient clique constraint; both types can ensure a constant portion of the optimal capacity region, termed the capacity efficiency ratio. The capacity efficiency ratios associated with the neighborhood constraint and the sufficient clique constraint are related to the analysis of the interference degree and the imperfection ratio of an MDCG, respectively. A specific challenge is that the methodology for computing the interference degree and the imperfection ratio of single-radio single-channel (SR-SC) networks cannot be directly extended to the MR-MC context, because an MR-MC network has markedly different geometric properties compared to an SR-SC network: in an MR-MC network, geometric closeness does not necessarily imply interference, due to possible parallel transmissions over different radios and channels. The fundamental contributions of this paper are the theoretical studies of the interference degree and the imperfection ratio of an MDCG, revealing how such graphical characteristics are related to those in the SR-SC context under the impact of the MR-MC geometric properties.
We also present extensive numerical results to demonstrate the effectiveness of the proposed local sufficient constraints in ensuring a larger capacity region compared to well-known existing results.
【Keywords】: channel capacity; cochannel interference; graph theory; radio networks; capacity efficiency ratio; cochannel interference contention; flow allocation; geometric property; graphical characteristics; guaranteed capacity region; imperfection ratio; interference degree computation; local sufficient rate constraint; multidimensional conflict graph; multiradio multichannel wireless network; neighborhood constraint; parallel transmission; radio interface contention; resource contention; single-radio single-channel network; sufficient clique constraint; Context; Interchannel interference; Mathematical model; Resource management; Schedules; Upper bound
【Paper Link】 【Pages】:999-1007
【Authors】: Aditya Gopalan ; Siddhartha Banerjee ; Abhik Kumar Das ; Sanjay Shakkottai
【Abstract】: We study infection spreading on large static networks when the spread is assisted by a small number of additional virtually mobile agents. For networks which are “spatially constrained”, we show that the spread of infection can be significantly sped up even by a few virtually mobile agents acting randomly. More specifically, for general networks with bounded virulence (e.g., a single or finite number of random virtually mobile agents), we derive upper bounds on the order of the time taken (as a function of network size) for infection to spread. Conversely, for certain common classes of networks such as linear graphs, grids and random geometric graphs, we also derive lower bounds on the order of the spreading time over all (potentially network-state aware and adversarial) virtual mobility strategies. We show that up to a logarithmic factor, these lower bounds for adversarial virtual mobility match the upper bounds on spreading via an agent with random virtual mobility. This demonstrates that random, state-oblivious virtual mobility is in fact order-wise optimal for dissemination in such spatially constrained networks.
【Keywords】: computer network security; computer viruses; graph theory; mobile agents; adversarial virtual mobility; general network; grids; infection spreading; large static network; linear graph; logarithmic factor; network size; network state-aware virtual mobility; random geometric graph; random mobility; spatially constrained network; virtual mobile agent
【Paper Link】 【Pages】:1008-1016
【Authors】: Jin Wang ; Jianping Wang ; Chuan Wu ; Kejie Lu ; Naijie Gu
【Abstract】: Flow untraceability is one critical requirement for anonymous communication with network coding: it prevents malicious attackers with wiretapping and traffic-analysis abilities from relating senders to receivers using the linear dependency of the received packets. There have recently been proposals advocating encryption of the Global Encoding Vectors (GEVs) of network coding to thwart such attacks. Nevertheless, there has been no exploration of the capability of network coding itself to constitute more efficient and effective algorithms which guarantee anonymity. In this paper, we design a novel, simple, and effective linear network coding mechanism (ALNCode) to achieve flow untraceability in a communication network with multiple unicast flows. With solid theoretical analysis, we first show that linear network coding (LNC) can be applied to thwart traffic-analysis attacks without the need to encrypt GEVs. Our key idea is to mix multiple flows at their intersection nodes by generating downstream GEVs from the common basis of upstream GEVs belonging to multiple flows, in order to hide the correlation of upstream and downstream GEVs in each flow. We then design a deterministic LNC scheme to implement our idea, by which the downstream GEVs produced are guaranteed to obfuscate their correlation with the corresponding upstream GEVs. We also give extensive theoretical analysis of the intersection probability of GEV bases and the factors influencing the effectiveness of our scheme, as well as the algorithm complexity, to support its efficiency.
【Keywords】: linear codes; network coding; telecommunication security; telecommunication traffic; ALNCode; algorithm complexity; anonymous communication; communication network; downstream GEV; flow untraceability; global encoding vector; linear dependency; linear network coding; malicious attacker; multiple unicast flows; received packet; traffic analysis attack; upstream GEV; wiretapping; Correlation; Cryptography; Encoding; Network coding; Routing protocols; Unicast; Vectors
【Paper Link】 【Pages】:1017-1025
【Authors】: Arash Saber Tehrani ; Alexandros G. Dimakis ; Michael J. Neely
【Abstract】: The multiple-access framework of ZigZag decoding is a useful technique for combating interference via multiple repeated transmissions, and is known to be compatible with distributed random access protocols. However, in the presence of noise this type of decoding can magnify errors, particularly when packet sizes are large. We present a simple soft-decoding version, called SigSag, that improves performance. We show that for two users, collisions result in a cycle-free factor graph that can be optimally decoded via belief propagation. For collisions between more than two users, we show that if a simple bit-permutation is used then the graph is locally tree-like with high probability, and hence belief propagation is near optimal. Through simulations we show that our scheme performs better than coordinated collision-free time division multiple access (TDMA) and the ZigZag decoder.
【Keywords】: iterative decoding; message passing; time division multiple access; SigSag; ZigZag decoding; belief propagation; coordinated collision-free time division multiple access; cycle-free factor graph; distributed random access protocols; iterative detection; multiple-access framework; simple bit-permutation; soft message-passing; Decoding; Signal to noise ratio
【Paper Link】 【Pages】:1026-1034
【Authors】: Peng Zhang ; Yixin Jiang ; Chuang Lin ; Hongyi Yao ; Albert Wasef ; Xuemin Shen
【Abstract】: Network coding provides a promising alternative to traditional store-and-forward transmission paradigm. However, due to its information-mixing nature, network coding is notoriously susceptible to pollution attacks: a single polluted packet can end up corrupting bunches of good ones. Existing authentication mechanisms either incur high computation/bandwidth overheads, or cannot resist the tag pollution proposed recently. This paper presents a novel idea termed “padding for orthogonality” for network coding authentication. Inspired by it, we design a public-key based signature scheme and a symmetric-key based MAC scheme, which can both effectively contain pollution attacks at forwarders. In particular, we combine them to propose a unified scheme termed MacSig, the first hybrid-key cryptographic approach to network coding authentication. It can thwart both normal pollution and tag pollution attacks in an efficient way. Simulative results show that our MacSig scheme has a low bandwidth overhead, and a verification process 2-4 times faster than typical signature-based solutions in some circumstances.
【Keywords】: access protocols; network coding; public key cryptography; MAC scheme; MacSig; bandwidth overheads; hybrid-key cryptographic approach; information-mixing nature; network coding; orthogonality; padding; pollution attacks; public-key based signature scheme; store-and-forward transmission; subspace authentication; symmetric-key; Ink; Manganese
【Paper Link】 【Pages】:1035-1043
【Authors】: Hulya Seferoglu ; Athina Markopoulou ; K. K. Ramakrishnan
【Abstract】: In this work, we are interested in improving the performance of constructive network coding schemes in lossy wireless environments. We propose I2NC - an approach that combines inter-session and intra-session network coding and has two strengths. First, the error-correcting capabilities of intra-session network coding make our scheme resilient to loss. Second, redundancy allows intermediate nodes to operate without knowledge of the decoding buffers of their neighbors. Based only on the knowledge of the loss rates on the direct and overhearing links, intermediate nodes can make decisions for both intra-session (i.e., how much redundancy to add in each flow) and inter-session (i.e., what percentage of flows to code together) coding. Our approach is grounded on a network utility maximization (NUM) formulation of the problem. We propose two practical schemes, I2NC-state and I2NC-stateless, which mimic the structure of the NUM optimal solution. We also address the interaction of our approach with the transport layer. We demonstrate the benefits of our schemes through simulation in GloMoSim.
【Keywords】: error correction codes; network coding; wireless mesh networks; inter-session coding; intra-session coding; lossy wireless environments; network coding; network utility maximization; unicast flows; wireless networks; Encoding; Network coding; Propagation losses; Redundancy; Throughput; Unicast; Wireless communication; Network coding; cross-layer optimization; error correction; wireless networks
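The abstract above notes that intermediate nodes choose how much intra-session redundancy to add using only link loss rates. A minimal back-of-the-envelope sketch of that decision (an assumed expected-value rule with an optional safety margin, not the paper's actual formula) looks like this:

```python
import math

def coded_packets_to_send(k, loss_rate, margin=0.0):
    """Number of random linear combinations to transmit so that, in
    expectation (plus an optional fractional safety margin), at least
    k packets survive a link with the given loss rate."""
    if not 0.0 <= loss_rate < 1.0:
        raise ValueError("loss rate must be in [0, 1)")
    return math.ceil(k * (1.0 + margin) / (1.0 - loss_rate))
```

With a 20% loss rate, delivering a generation of 8 coded packets requires transmitting 10 in expectation; a 25% margin pushes that to 13 to guard against loss-rate fluctuations.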
【Paper Link】 【Pages】:1044-1052
【Authors】: Dezun Dong ; Yunhao Liu ; Xiangke Liao ; Xiang-Yang Li
【Abstract】: Extracting a planar graph from a network topology is of great importance for efficient protocol design in wireless ad hoc and sensor networks. Previous techniques for planar topology extraction are often based on ideal assumptions, such as the UDG communication model and accurate node location measurements. To make these protocols work effectively in practice, we need to extract a planar topology in a location-free and distributed manner with a small stretch factor. Current location-free methods cannot provide any guarantee on the stretch factor of the constructed planar topologies. In this work, we present a fine-grained and location-free network planarization method. Compared with existing location-free planarization approaches, our method can extract a high-quality planar graph, called TPS (Topological Planar Simplification), from the communication graph using local connectivity information. TPS is proved to be a planar graph and has a constant stretch factor for a large class of network instances. We evaluate our design through extensive simulations and compare with state-of-the-art approaches. The simulation results show that our method produces planar graphs with a small constant stretch factor, often less than 1.5.
【Keywords】: graph theory; protocols; telecommunication network topology; wireless sensor networks; UDG communication model; communication graph; fine-grained location-free planarization; local connectivity information; network topologies; node location measurements; planar graph; planar topology extraction; protocol design; topological planar simplification; wireless ad hoc networks; wireless sensor networks; Ad hoc networks; Network topology; Planarization; Protocols; Routing; Tiles; Topology
【Paper Link】 【Pages】:1053-1061
【Authors】: Hongyu Zhou ; Hongyi Wu ; Su Xia ; Miao Jin ; Ning Ding
【Abstract】: Triangulation serves as the basis for many geometry-based algorithms in wireless sensor networks. In this paper we propose a distributed algorithm that produces a triangulation for an arbitrary sensor network, with no constraints on the communication model or the granularity of the triangulation. We prove its correctness in 2D, and further extend it to sensor networks deployed on 3D open and closed surfaces. Our simulation results show that the proposed algorithms can tolerate distance measurement errors, and thus work well under practical sensor network settings and effectively improve the performance of a range of applications that depend on triangulations.
【Keywords】: distance measurement; distributed algorithms; mesh generation; wireless sensor networks; 2D surface; 3D surface; communication model; distance measurement error; distributed triangulation algorithm; geometry-based algorithm; wireless sensor network; Artificial neural networks; Variable speed drives
【Paper Link】 【Pages】:1062-1070
【Authors】: Shouling Ji ; Yingshu Li ; Xiaohua Jia
【Abstract】: Data collection is an important operation in wireless sensor networks (WSNs). The performance of data collection can be measured by its achievable network capacity. Most existing works focus on the capacity of unicast, multicast or snapshot data collection in single-radio single-channel wireless networks, and no dedicated works consider the continuous data collection capacity for WSNs in detail under the protocol interference model. In this paper, we first propose a multi-path scheduling algorithm for snapshot data collection in single-radio multi-channel WSNs and prove that its achievable network capacity is at least W/[(3.63/H)ρ²+o(ρ)], which is a tighter lower bound than the previously best known result of W/(8ρ²), where W is the bandwidth over a channel, H is the number of available orthogonal channels, ρ is the ratio of the interference radius to the transmission radius of a sensor, and o(ρ) is a linear function of ρ. For the continuous data collection problem, although prior work claims that data collection can be pipelined with existing schemes, we find that such an idea cannot actually improve network capacity. We explain the reason for this and propose a novel continuous data collection method for dual-radio multi-channel WSNs. This method significantly speeds up the data collection process, and achieves a capacity of nW/[12M((3.63/H)ρ²+o(ρ))] when Δe ≤ 12, or nW/[MΔe((3.63/H)ρ²+o(ρ))] when Δe > 12, where n is the number of sensors, M is a constant value (usually M ≪ n), and Δe is the maximum number of leaf nodes having the same parent node in the routing tree (i.e., the data collection tree). The simulation results also indicate that the proposed algorithms significantly improve network capacity compared with existing works.
【Keywords】: multipath channels; scheduling; telecommunication network routing; trees (mathematics); wireless sensor networks; continuous data collection; data collection tree; dual-radio multichannel wireless sensor networks; multipath scheduling algorithm; network capacity; routing tree; single-radio multichannel WSN; snapshot data collection; Wireless communication; Wireless sensor networks
【Paper Link】 【Pages】:1071-1079
【Authors】: Matthew P. Johnson ; Amotz Bar-Noy
【Abstract】: We introduce the pan-and-scan problem, in which cameras are configured to observe multiple target locations. This is a representative example within a broad family of problems in which multiple sensing devices are deployed, each in general observing multiple targets. A camera's configuration here consists of its orientation and its zoom factor or field of view (its position is given); the quality of a target's reading by a camera depends (inversely) on both the distance and the field of view. After briefly discussing an easy setting in which a target accumulates measurement quality from all cameras observing it, we move on to a more challenging setting in which for each target only the best measurement of it is counted. Although both variants admit continuous solutions, we observe that we may restrict our attention to solutions based on pinned cones. For a geometrically constrained setting, we give an optimal dynamic programming algorithm. For the unconstrained setting of this problem, we prove NP-hardness, present efficient centralized and distributed 2-approximation algorithms, and observe that a PTAS exists under certain assumptions. For a synchronized distributed setting, we give a 2-approximation protocol and a (2β)/(1 - α)-approximation protocol (for all 0 ≤ α ≤ 1 and β > 1, though satisfying these constraints with equality will in different ways trivialize the guarantees) with the stability feature that no target's camera assignment changes more than log_β(m/α) times. We also discuss the running times of the algorithms and study the speed-ups that are possible in certain situations.
【Keywords】: approximation theory; cameras; computational complexity; dynamic programming; sensor fusion; 2-approximation protocol; NP-hardness; cameras; distributed 2-approximation algorithms; multiple target locations; optimal dynamic programming algorithm; pan and scan problem; zoom factor; Approximation algorithms; Approximation methods; Cameras; Image sensors; Optimized production technology; Polynomials; Sensors
【Paper Link】 【Pages】:1080-1088
【Authors】: Mathias Björkqvist ; Lydia Y. Chen ; Marko Vukolic ; Xi Zhang
【Abstract】: Content cloud systems, e.g. CloudFront and CloudBurst, in which content items are retrieved by end-users from the edge nodes of the cloud, are becoming increasingly popular. The retrieval latency in content clouds depends on content availability in the edge nodes, which in turn depends on the caching policy at the edge nodes. In the case of local content unavailability (i.e., a cache miss), edge nodes resort to source selection strategies to retrieve the content items either vertically from the central server, or horizontally from other edge nodes. Consequently, managing the latency in content clouds needs to take into account several interrelated issues: asymmetric bandwidth and caching capacity for both source types, as well as edge node heterogeneity in terms of the caching policies and source selection strategies applied. In this paper, we study the problem of minimizing the retrieval latency considering both the caching and retrieval capacity of the edge nodes and the server simultaneously. We derive analytical models to evaluate the content retrieval latency under two source selection strategies, i.e., Random and Shortest-Queue, and three caching policies: selfish, collective, and a novel caching policy that we call the adaptive caching policy. Our analysis allows the quantification of the interrelated performance impacts of caching and retrieval capacity and the exploration of the corresponding design space. In particular, we show that the adaptive caching policy combined with Shortest-Queue selection scales well across various network configurations and adapts to load changes, as shown by both our simulation and analytical results.
【Keywords】: cloud computing; information retrieval; CloudBurst; CloudFront; adaptive caching policy; collective caching policy; content cloud systems; edge node caching capacity; edge node retrieval capacity; random source selection strategy; retrieval latency minimization; selfish caching policy; shortest-queue source selection strategy; source selection strategy; Adaptive systems; Bandwidth; Collaboration; Gold; Peer to peer computing; Servers; Silver
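The Random and Shortest-Queue source selection strategies compared in the abstract above reduce to a few lines. This is an illustrative sketch (the data layout and function name are our assumptions, and the paper's analytical models are of course not captured here): on a cache miss, the requesting node picks a source among the nodes holding the item, either uniformly at random or by smallest queue length.

```python
import random

def select_source(item, sources, policy="shortest-queue"):
    """Pick a source for `item` on a cache miss. `sources` maps name ->
    {'queue': int, 'cache': set}; the central server holds everything,
    so it is always a valid (vertical) source."""
    holders = [s for s, st in sources.items()
               if s == "server" or item in st["cache"]]
    if policy == "random":
        return random.choice(holders)       # Random strategy
    return min(holders, key=lambda s: sources[s]["queue"])  # Shortest-Queue
```

Shortest-Queue pulls load away from the busy central server whenever an edge node caches the item and has spare capacity, which is the behavior the paper's adaptive policy exploits.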
【Paper Link】 【Pages】:1089-1097
【Authors】: Danhua Guo ; Laxmi N. Bhuyan
【Abstract】: As the line speed of the network evolves at an unprecedented rate, a wide spectrum of network applications call for increasing processing density on network devices. The prevalence of multicore chips ameliorates the stress on processing power, but the QoS guarantee is often ignored. In addition, the results of legacy QoS studies are difficult to apply to multicore web servers. Therefore, a multicore scheduler that incorporates QoS concerns is missing. As network development moves towards cloud computing, we see an increasing importance of QoS guarantees on high-performance multicore network appliances. In this paper, we propose a proportional-share hash-based scheduler, PS-HRW, which extends existing optimizations in multicore scheduling with QoS concerns. We address the network QoS requirement by assigning weights to each connection following the classic Generalized Processor Sharing (GPS) theory. Based on our previous multicore scheduling studies, PS-HRW allocates computing resources according to the QoS requirement, such that the workload is balanced at the packet level and connection locality is maintained. To provide an accurate QoS guarantee, PS-HRW allocates an integral number of cores first and then allocates the residuals using a partitioning theory. However, different from traditional simulation-based approaches, we target two popular applications on modern network appliances: Deep Packet Inspection (DPI) and multimedia transcoding. In addition, we generalize the topology of different multicore architectures into a communication matrix and optimize PS-HRW to incorporate cache awareness. Essentially, PS-HRW schedules incoming traffic efficiently by balancing connection locality, load balancing, core/cache topology, and QoS guarantees.
【Keywords】: Web services; cloud computing; microprocessor chips; multimedia computing; multiprocessing systems; optimisation; peer-to-peer computing; processor scheduling; quality of service; radio spectrum management; resource allocation; transcoding; QoS; Web servers; cache awareness; cloud computing; deep packet inspection; general processor sharing; load balancing; multicore architectures; multicore chips; multicore hash scheduling; multicore network; multimedia transcoding; network development; network spectrum; optimization; Load management; Multicore processing; Processor scheduling; Quality of service; Scheduling; Servers; Topology; Cache Locality; DPI; Load Balancing; Multicore; Multimedia; QoS; Scheduling
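PS-HRW builds on Highest Random Weight (rendezvous) hashing. The sketch below uses the standard weighted-HRW score (a well-known formulation, not necessarily the exact one in the paper; the flow-id encoding is our assumption) to show how connection locality falls out: the same flow identifier always maps to the same core, and load splits roughly in proportion to per-core weights.

```python
import hashlib
import math

def weighted_hrw(flow_id, cores):
    """Assign a flow to a core by weighted rendezvous (HRW) hashing.
    `cores` maps core id -> capacity weight. The same flow id always
    maps to the same core (connection locality), and load splits
    roughly in proportion to the weights."""
    def score(core):
        h = hashlib.sha256(f"{flow_id}:{core}".encode()).digest()
        # map the first 8 hash bytes to a uniform value in (0, 1]
        u = (int.from_bytes(h[:8], "big") + 1) / (2**64 + 1)
        return -cores[core] / math.log(u)   # weighted-HRW score
    return max(cores, key=score)
```

Because the assignment is a pure function of (flow, core), no shared state is needed to keep packets of one connection on one core, and removing a core only remaps the flows that scored highest on it.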
【Paper Link】 【Pages】:1098-1106
【Authors】: Minghong Lin ; Adam Wierman ; Lachlan L. H. Andrew ; Eno Thereska
【Abstract】: Power consumption imposes a significant cost for data centers implementing cloud services, yet much of that power is used to maintain excess service capacity during periods of predictably low load. This paper investigates how much can be saved by dynamically 'right-sizing' the data center by turning off servers during such periods, and how to achieve that saving via an online algorithm. We prove that the optimal offline algorithm for dynamic right-sizing has a simple structure when viewed in reverse time, and this structure is exploited to develop a new 'lazy' online algorithm, which is proven to be 3-competitive. We validate the algorithm using traces from two real data center workloads and show that significant cost savings are possible.
【Keywords】: cloud computing; computer centres; power aware computing; power consumption; cloud services; dynamic right-sizing algorithm; lazy online algorithm; power consumption; power-proportional data centers; Data models; Delay; Heuristic algorithms; Optimization; Prediction algorithms; Servers; Switches
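The paper's 3-competitive lazy algorithm is not reproduced here, but a much simpler classical relative conveys the flavor of lazy right-sizing: a ski-rental rule that keeps an idle server on until its accumulated idle-energy cost reaches the switching cost β, then turns it off (this single-server sketch and its cost accounting are our own simplification, with serving and idle energy charged at the same rate).

```python
def lazy_shutdown(idle_energy_per_slot, beta, busy_trace):
    """Ski-rental-style shutdown rule for one server: stay on while idle
    until the accumulated idle energy cost reaches the switching cost
    beta, then power off; power back on (paying beta) when work arrives.
    Returns the total cost incurred over the trace."""
    cost, on, idle_spend = 0.0, True, 0.0
    for busy in busy_trace:
        if busy:
            if not on:
                cost += beta                 # wake-up / switching cost
                on, idle_spend = True, 0.0
            cost += idle_energy_per_slot     # serving energy (same rate, simplified)
            idle_spend = 0.0
        elif on:
            cost += idle_energy_per_slot     # burn energy while idling
            idle_spend += idle_energy_per_slot
            if idle_spend >= beta:
                on = False                   # idle spend reached beta: turn off
    return cost
```

Waiting until the idle spend equals β before switching off guarantees the rule never pays more than twice the offline optimum for that idle period, which is the same intuition the paper's reverse-time analysis refines into a 3-competitive multi-server algorithm.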
【Paper Link】 【Pages】:1107-1115
【Authors】: Honghai Zhang ; Yuanxi Jiang ; Sampath Rangarajan ; Baohua Zhao
【Abstract】: Wireless multicast based video delivery is constrained by the user with the lowest channel quality within a multicast group. Beamforming increases the SINR at the clients thereby increasing the minimum channel quality among the users that form the multicast group. Thus it can be utilized to enhance the wireless multicast video transmission. In this paper, we investigate how to exploit switched beamforming antennas to improve wireless multicast based video transmission in indoor environments. The fundamental problem to be solved is the selection and scheduling of beams to be used to cover different subsets of users within the multicast group. We consider both multi-resolution videos and multi-layered videos and formulate the problem as maximizing the total utility of all multicast clients subject to a total delay constraint, where the utility is a general measure of video quality or user-satisfaction. We prove that it is NP-hard to have a (1-1/e+ε)-approximation solution to this problem for any ε > 0, under both multi-resolution and multi-layered video models. For the multi-resolution video model, we develop a unified approximation algorithm with a parameter k that controls both the algorithm complexity and the approximation factor. For k = 0, 1, 2, 3, we prove that the proposed algorithm achieves an approximation guarantee close to 0.31, 0.38, 0.55, and 0.63, respectively. For the multi-layered video model, we propose a similar heuristic solution. The proposed algorithms are evaluated with both hypothetical video sequences and real video sequences using channel data collected in an indoor wireless testbed. Evaluation results show that the proposed algorithms have much better performance than naive multicast algorithms for scheduling the beams.
【Keywords】: antennas; approximation theory; array signal processing; image resolution; image sequences; indoor radio; multicast communication; multimedia communication; video communication; video signal processing; wireless channels; NP-hard; SINR; algorithm complexity; approximation factor; beam scheduling; beam selection; channel quality; delay constraint; heuristic solution; hypothetical video sequence; indoor environment; indoor wireless network; multicast group; multilayered video; multiresolution video; real video sequence; switched beamforming antenna; unified approximation algorithm; video quality; wireless multicast video delivery; wireless multicast video transmission; Approximation algorithms; Approximation methods; Array signal processing; Complexity theory; Streaming media; Switches; Wireless communication
【Paper Link】 【Pages】:1116-1124
【Authors】: Guanhong Pei ; V. S. Anil Kumar ; Srinivasan Parthasarathy ; Aravind Srinivasan
【Abstract】: We study the problem of throughput maximization in multi-hop wireless networks with end-to-end delay constraints for each session. This problem has received much attention starting with the work of Grossglauser and Tse (2002), and it has been shown that there is a significant tradeoff between the end-to-end delays and the total achievable rate. We develop algorithms to compute such tradeoffs with provable performance guarantees for arbitrary instances, with general interference models. Given a target delay-bound Δ(c) for each session c, our algorithm gives a stable flow vector with a total throughput within a factor of O(log Δm / log log Δm) of the maximum, so that the per-session (end-to-end) delay is O(((log Δm / log log Δm) Δ(c))²), where Δm = max_c {Δ(c)}; note that these bounds depend only on the delays, and not on the network size, and this is the first such result, to our knowledge.
【Keywords】: approximation theory; delays; optimisation; radio networks; radiofrequency interference; approximation algorithms; delay constraints; general interference model; multihop wireless networks; target delay bound; throughput maximization; Approximation algorithms; Approximation methods; Delay; Interference; Optimized production technology; Throughput; Wireless networks
【Paper Link】 【Pages】:1125-1133
【Authors】: Shyamnath Gollakota ; Dina Katabi
【Abstract】: There is a growing interest in physical layer security. Recent work has demonstrated that wireless devices can generate a shared secret key by exploiting variations in their channel. The rate at which the secret bits are generated, however, depends heavily on how fast the channel changes. As a result, existing schemes have a low secrecy rate and are mainly applicable to mobile environments. In contrast, this paper presents a new physical-layer approach to secret key generation that is both fast and independent of channel variations. Our approach makes a receiver jam the signal in a manner that still allows it to decode the data, yet prevents other nodes from decoding. Results from a testbed implementation show that our method is significantly faster and more accurate than state of the art physical-layer secret key generation protocols. Specifically, while past work generates up to 44 secret bits/s with a 4% bit disagreement between the two devices, our design has a secrecy rate of 3-18 Kb/s with 0% bit disagreement.
【Keywords】: cryptographic protocols; decoding; jamming; telecommunication security; wireless channels; bit rate 3 kbit/s to 18 kbit/s; data decoding; mobile environment; physical layer wireless security; physical-layer secret key generation protocol; secret bits generation; signal jamming; Binary phase shift keying; Bit error rate; Communication system security; Jamming; OFDM; Receivers; Wireless communication
【Paper Link】 【Pages】:1134-1142
【Authors】: Sungro Yoon ; Injong Rhee ; Bang Chul Jung ; Babak Daneshrad ; Jae H. Kim
【Abstract】: A practical protocol jointly considering PHY and MAC for MIMO based concurrent transmissions in wireless ad hoc networks, called Contrabass, is presented. Concurrent transmissions refer to simultaneous transmissions by multiple nodes over the same carrier frequency within the same interference range. Contrabass is the first-to-date open-loop based concurrent transmission protocol which implements simultaneous channel training for concurrently transmitting links without any control message exchange. Its MAC protocol is designed for each active transmitter to independently decide to transmit with near optimal transmission probability. Contrabass maximizes the number of successful concurrent transmissions, thus achieving very high aggregate throughput, low delays and scalability even under dynamic environments. The design choices of Contrabass are deliberately made to enable practical implementation which is demonstrated through GNURadio implementation and experimentation.
【Keywords】: MIMO communication; access protocols; ad hoc networks; probability; radiofrequency interference; GNU radio implementation; MAC protocol; MIMO systems; PHY; active transmitter; concurrent transmission protocol; contrabass; first-to-date open-loop; optimal transmission probability; wireless ad hoc network; Indexes; Interference; Optimized production technology; Protocols; Receivers; Training; Transmitters
【Paper Link】 【Pages】:1143-1151
【Authors】: Xinyu Xing ; Jianxun Dang ; Shivakant Mishra ; Xue Liu
【Abstract】: WiFi access points that provide Internet access to users have been steadily increasing in urban areas. Different access points differ from one another in terms of services that they provide, including available upstream and downstream bandwidths, overall network capacity, open/blocked ports, security features, and so on. However, there is no reliable service available at present that can aid a user in selecting an access point from the many that are available. The primary research challenge is how to accurately estimate the current backhaul bandwidth of different access points in an efficient manner without requiring any installation of special software on the access points, and not burdening the WiFi subscribers to perform any communication or computation intensive task. This paper presents a new highly scalable bandwidth estimation technique that is suitable for efficiently estimating the backhaul bandwidth of a large number of APs. This technique has been extensively evaluated via a prototype implementation in an indoor testbed and in the Amazon EC2 platform. The evaluation demonstrates that the proposed technique exhibits high measurement accuracy, low latency, high scalability, and minimal intrusiveness.
【Keywords】: bandwidth allocation; indoor communication; subscriber loops; wireless LAN; Amazon EC2 platform; Internet access; WiFi access points; WiFi subscribers; backhaul bandwidth; commercial hotspot access points; indoor testbed; measurement accuracy; network capacity; scalable bandwidth estimation; Bandwidth; Delay; Estimation; IEEE 802.11 Standards; Probes; Random variables; Servers
【Paper Link】 【Pages】:1152-1160
【Authors】: Pallavi Arora ; Csaba Szepesvári ; Rong Zheng
【Abstract】: We consider the problem of optimally assigning p sniffers to K channels to monitor the transmission activities in a multi-channel wireless network. The activity of users is initially unknown to the sniffers and is to be learned along with channel assignment decisions while maximizing the benefits of this assignment, resulting in the fundamental trade-off between exploration versus exploitation. We formulate it as the linear partial monitoring problem, a super-class of multi-armed bandits. As the number of arms (sniffer-channel assignments) is exponential, novel techniques are called for, to allow efficient learning. We use the linear bandit model to capture the dependency amongst the arms and develop two policies that take advantage of this dependency. Both policies enjoy a regret bound that is logarithmic in the number of time-slots, with a term that is sub-linear in the number of arms.
【Keywords】: channel allocation; learning (artificial intelligence); radio networks; telecommunication computing; wireless channels; linear bandit model; linear partial monitoring problem; multiarmed bandits; multichannel wireless network; optimal monitoring; sequential learning; sniffer-channel assignment; transmission activity monitoring; Joints; Monitoring; Performance evaluation; Resource management; Uncertainty; Wireless networks
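For readers unfamiliar with the bandit framing, the classic UCB1 index illustrates the exploration-exploitation trade-off the abstract refers to: each arm's score is its empirical mean reward plus a bonus that shrinks with the number of pulls. This is only the standard baseline; the paper's policies additionally exploit the linear dependency among sniffer-channel assignments:

```python
import math

def ucb1_indices(pulls, rewards, t):
    """UCB1 index per arm: empirical mean plus an exploration bonus
    sqrt(2 ln t / n_i) that decays as an arm is sampled more often."""
    return [rewards[i] / pulls[i] + math.sqrt(2 * math.log(t) / pulls[i])
            for i in range(len(pulls))]

def pick_arm(pulls, rewards, t):
    """Play the arm with the highest UCB1 index."""
    idx = ucb1_indices(pulls, rewards, t)
    return idx.index(max(idx))
```

An under-sampled arm can outrank an arm with a better empirical mean, which is exactly how the index forces exploration.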
【Paper Link】 【Pages】:1161-1169
【Authors】: Mariyam Mirza ; Paul Barford ; Xiaojin Zhu ; Suman Banerjee ; Michael Blodgett
【Abstract】: The effectiveness of rate adaptation algorithms is an important determinant of 802.11 wireless network performance. The diversity of algorithms that has resulted from efforts to improve rate adaptation has introduced a new dimension of variability into 802.11 wireless networks, further complicating the already difficult task of understanding and debugging 802.11 performance. To assist with this task, in this paper we present and evaluate a methodology for accurately fingerprinting 802.11 rate adaptation algorithms. Our approach uses a Support Vector Machine (SVM)-based classifier that requires only simple passive measurements of 802.11 traffic. We demonstrate that careful conversion of raw packet traces into input features for SVM is necessary for achieving high classification accuracy. We tested our classifier on the four rate adaptation algorithms available in MadWifi, the most popular open source driver for commodity wireless cards. The classifier performs with an accuracy of 95%-100%. We also show that the classifier is robust over a variety of network conditions if the training data includes a sufficient sampling of the range of an algorithm's behavior.
【Keywords】: support vector machines; wireless LAN; 802.11 debugging performance; 802.11 rate adaptation algorithm; 802.11 traffic; 802.11 wireless network performance; MadWifi; SVM-based classifier; commodity wireless cards; fingerprinting; support vector machine; Fingerprint recognition; Hardware; Throughput
【Paper Link】 【Pages】:1170-1178
【Authors】: Ryad Ben-El-Kezadri ; Giovanni Pau ; Thomas Claveirole
【Abstract】: Clock synchronization is particularly challenging in resource constrained networks. This paper presents an accurate, coherent and bandwidth efficient synchronization scheme called TurboSync. Unlike traditional solutions that synchronize pairs of nodes, TurboSync is able to synchronize entire node clusters. TurboSync relies on principal component analysis with missing data. Packets are broadcast on the medium and their capture times at each node side are used to compute the clock conversion parameters. To have a complete and usable set of capture times for each broadcast, we propose to fill out the missing packet timestamps at the transmitters' side using an inference mechanism. TurboSync synchronizes all the clocks in the cluster at once, which leads to coherent clock conversion between nodes. Our performance results show better accuracy compared to the RBS protocol.
【Keywords】: principal component analysis; protocols; radio networks; synchronisation; RBS protocol; TurboSync; clock conversion parameter; clock synchronization; node cluster; principal component analysis; shared media network; Algorithm design and analysis; Clocks; Principal component analysis; Protocols; Receivers; Synchronization; Transmitters; Broadcast Media; Clock; Missing Data; Principal Component Analysis; Synchronization
【Paper Link】 【Pages】:1179-1187
【Authors】: Gjergji Zyba ; Geoffrey M. Voelker ; Stratis Ioannidis ; Christophe Diot
【Abstract】: Opportunistic ad-hoc communication enables portable devices such as smartphones to effectively exchange information, taking advantage of their mobility and locality. The nature of human interaction makes information dissemination using such networks challenging. We use three different experimental traces to study fundamental properties of human interactions. We break our traces down into multiple areas and classify mobile users in each area according to their social behavior: Socials are devices that show up frequently or periodically, while Vagabonds represent the rest of the population. We find that in most cases the majority of the population consists of Vagabonds. We evaluate the relative role of these two groups of users in data dissemination. Surprisingly, we observe that under certain circumstances, which appear to be common in real life situations, the effectiveness of dissemination predominantly depends on the number of users in each class rather than their social behavior, contradicting some of the previous observations. We validate and extend the findings of our experimental study through a mathematical analysis.
【Keywords】: information dissemination; mobile ad hoc networks; information dissemination; information exchange; mobile user; opportunistic mobile ad hoc network; social behavior; vagabond population; Ad hoc networks; Avatars; Knee; Mobile communication; Mobile computing; Second Life
【Paper Link】 【Pages】:1188-1196
【Authors】: Bruno Nardelli ; Jinsung Lee ; Kangwook Lee ; Yung Yi ; Song Chong ; Edward W. Knightly ; Mung Chiang
【Abstract】: By `optimal CSMA' we denote a promising approach to maximize throughput-based utility in wireless networks without message passing or synchronization among nodes. Despite the theoretical guarantees on the performance of these protocols, their evaluation in real networking scenarios has been preliminary. In this paper, we propose a methodical approach for the first comprehensive evaluation of optimal CSMA, via experimentation with a custom implementation. Example findings include: 1) hidden terminals with symmetric channels can drive the protocol to a state of extreme contention aggressiveness due to the low service received by flows. Since increasing aggressiveness does not mitigate collisions but actually aggravates them, optimal CSMA enters a positive-feedback loop eventually reaching a deadlock state of total flow starvation; 2) however, the use of RTS/CTS in such scenarios can reduce collisions to lower levels, restoring throughput and preventing an excessive contention aggressiveness by optimal CSMA flows; 3) in practical hidden terminal scenarios with physical layer capture, optimal CSMA reduces the aggressiveness of dominant flows, but the contention window sizes used by such adaptation mechanism are not long enough to solve competing flows' starvation when carrier sensing fails; 4) topologies with a “flow-in-the-middle” yield starvation in traditional CSMA but fairness in optimal CSMA, because its contention aggressiveness adaptation creates frequent transmission opportunities for the central (otherwise starved) flow; 5) optimal CSMA excessively prioritizes links with low channel quality, due to queue-based control that does not otherwise incorporate channel conditions; 6) in its current design, optimal CSMA conflicts with window-based end-to-end congestion control, and leads to an efficiency-fairness tradeoff in TCP performance. This study deepens our understanding of optimal CSMA and the general adaptation philosophy behind its design, and the derived insights suggest enhancements to optimal CSMA theory.
【Keywords】: carrier sense multiple access; protocols; queueing theory; radio networks; telecommunication congestion control; flow-in-the-middle yield starvation; frequent transmission; low channel quality; optimal CSMA; positive-feedback loop; protocols; queue-based control; window-based end-to-end congestion control; wireless networks; IEEE 802.11 Standards; Multiaccess communication; Network topology; Protocols; Throughput; Topology; Transmitters
【Paper Link】 【Pages】:1197-1205
【Authors】: Di Niu ; Baochun Li
【Abstract】: We consider the problem of distributing k blocks from a source to N nodes in a peer-to-peer (P2P) network with both node upload and download capacity constraints. As k scales up, we prove for homogeneous networks that if network coding is allowed, randomly matching senders and receivers in each time slot asymptotically achieves the maximum downloading rate at each node. For heterogeneous networks with network coding allowed, we show that a fair and optimal downloading rate at each node can be asymptotically approached, if in each time slot, each node randomly allocates its upload bandwidth to its receivers that have available download bandwidth. We also give a performance lower bound of the above randomized coded dissemination when both k and N scale under certain conditions. These results demonstrate that with network coding, simple randomized receiver selection and rate allocation suffice to achieve P2P broadcast capacity, forming a theoretical foundation for mesh-based P2P networks with network coding.
【Keywords】: bandwidth allocation; channel capacity; network coding; peer-to-peer computing; radio broadcasting; random processes; wireless mesh networks; P2P; asymptotic optimality; bandwidth allocation; broadcast capacity; homogeneous networks; mesh network; network coding; nodes distribution; peer-to-peer network; random network; randomized receiver selection; Bandwidth; Encoding; Heuristic algorithms; Network coding; Peer to peer computing; Receivers; Resource management
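The mechanics behind the paper's network-coding result can be illustrated with random linear network coding over GF(2): the sender transmits random XOR combinations of the k blocks, and a receiver decodes by Gaussian elimination once it collects k linearly independent combinations. A minimal sketch under those assumptions (function names are illustrative; real systems typically code over larger fields such as GF(2^8)):

```python
import random

def gf2_decode(packets, k):
    """Gaussian elimination over GF(2). Each packet is a pair
    (coeff_bitmask, payload); returns the k source blocks, or
    None if the packets do not yet have full rank."""
    pivots = {}                      # pivot bit -> (coeffs, payload)
    for coeffs, payload in packets:
        for bit in range(k):
            if not (coeffs >> bit) & 1:
                continue
            if bit in pivots:        # cancel this bit with its pivot row
                pc, pp = pivots[bit]
                coeffs ^= pc
                payload ^= pp
            else:                    # new pivot found
                pivots[bit] = (coeffs, payload)
                break
    if len(pivots) < k:
        return None
    for b in range(k):               # back-substitute to the identity
        pc, pp = pivots[b]
        for b2 in range(k):
            c2, p2 = pivots[b2]
            if b2 != b and (c2 >> b) & 1:
                pivots[b2] = (c2 ^ pc, p2 ^ pp)
    return [pivots[b][1] for b in range(k)]

def rlnc_broadcast(blocks, seed=7):
    """Transmit random XOR combinations of the k blocks until the
    receiver can decode; returns (decoded blocks, packets used)."""
    rng = random.Random(seed)
    k = len(blocks)
    packets = []
    while True:
        coeffs = rng.randrange(1, 1 << k)      # random nonzero GF(2) vector
        payload = 0
        for i in range(k):
            if (coeffs >> i) & 1:
                payload ^= blocks[i]
        packets.append((coeffs, payload))
        decoded = gf2_decode(packets, k)
        if decoded is not None:
            return decoded, len(packets)
```

The point the abstract makes is that, with coding, it does not matter *which* combinations a receiver gets, only *how many* independent ones, which is why random sender-receiver matching suffices.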
【Paper Link】 【Pages】:1206-1214
【Authors】: Weijie Wu ; John C. S. Lui
【Abstract】: Content providers of P2P-Video-on-Demand (P2P-VoD) services aim to provide a high quality, scalable service to users, and at the same time, operate the system with a manageable operating cost. Given the volume-based charging model by ISPs, it is in the best interest of the P2P-VoD content providers to reduce peers' access to the content server so as to reduce the operating cost. In this paper, we address an important open problem: what is the “optimal replication ratio” in a P2P-VoD system such that peers will receive service from each other and at the same time, reduce the traffic to the content server. We address two fundamental problems: (1) what is the optimal replication ratio of a movie given its popularity, and (2) how to achieve the optimal ratios in a distributed and dynamic fashion. We formally show how movie popularities can impact server's workload, and formulate the video replication as an optimization problem. We show that the conventional wisdom of using the proportional replication strategy is non-optimal, and expand the design space to both passive replacement policy and active push policy to achieve the optimal replication ratios. We consider practical implementation issues, evaluate the performance of P2P-VoD systems and show that our algorithms can greatly reduce server's workload and improve streaming quality.
【Keywords】: optimisation; peer-to-peer computing; video on demand; video streaming; ISP; P2P-VoD content providers; P2P-VoD services; dynamic fashion; optimal replication ratio; optimal replication strategy; optimization; peer-to-peer technology; streaming quality; video replication; video-on-demand; Algorithm design and analysis; Bandwidth; Motion pictures; Optimization; Peer to peer computing; Servers; Software
【Paper Link】 【Pages】:1215-1223
【Authors】: Frédérique E. Oggier ; Anwitaman Datta
【Abstract】: Erasure codes provide a storage efficient alternative to replication based redundancy in (networked) storage systems. They however entail high communication overhead for maintenance, when some of the encoded fragments are lost and need to be replenished. Such overheads arise from the fundamental need to recreate (or keep separately) first a copy of the whole object before any individual encoded fragment can be generated and replenished. There has recently been intense interest to explore alternatives, most prominent ones being regenerating codes (RGC) and hierarchical codes (HC). We propose as an alternative a new family of codes to improve the maintenance process, called self-repairing codes (SRC), with the following salient features: (a) encoded fragments can be repaired directly from other subsets of encoded fragments by downloading less data than the size of the complete object, ensuring that (b) a fragment is repaired from a fixed number of encoded fragments, the number depending only on how many encoded blocks are missing and independent of which specific blocks are missing. These properties allow for not only low communication overhead to recreate a missing fragment, but also independent reconstruction of different missing fragments in parallel, possibly in different parts of the network. The fundamental difference between SRCs and HCs is that different encoded fragments in HCs do not have symmetric roles (equal importance). Consequently the number of fragments required to replenish a specific fragment in HCs depends on which specific fragments are missing, and not solely on how many. Likewise, object reconstruction may need different number of fragments depending on which fragments are missing. RGCs apply network coding over (n, k) erasure codes, and provide network information flow based limits on the minimal maintenance overheads. RGCs need to communicate with at least k other nodes to recreate any fragment, and the minimal overhead is achieved if only one fragment is missing, and information is downloaded from all the other n-1 nodes. We analyze the static resilience of SRCs with respect to erasure codes, and observe that SRCs incur marginally larger storage overhead in order to achieve the aforementioned properties. The salient SRC properties naturally translate to low communication overheads for reconstruction of lost fragments, and allow reconstruction with lower latency by facilitating repairs in parallel. These desirable properties make SRC a practical candidate for networked distributed storage systems.
【Keywords】: codes; storage area networks; distributed storage systems; erasure codes; hierarchical codes; low communication overhead; redundancy; regenerating codes; self-repairing homomorphic codes; Decoding; Encoding; Maintenance engineering; Peer to peer computing; Polynomials; Redundancy; Resilience; coding; networked storage; self-repair
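Property (a) above, repairing a fragment from other fragments rather than from a rebuilt copy of the whole object, can be seen in miniature with a toy (3,2) XOR code. In this tiny example the repair subset happens to equal the object size; the point of SRCs is to make it strictly smaller for larger codes. A sketch, with illustrative names:

```python
def encode_fragments(b1, b2):
    """Toy (3,2) XOR code: store the two data fragments and their
    parity. Any single lost fragment is rebuilt directly from the
    two survivors, with no decode of the whole object first."""
    return [b1, b2, b1 ^ b2]

def repair(fragments, lost):
    """Rebuild fragment `lost` by XORing the two surviving fragments."""
    a, b = [f for i, f in enumerate(fragments) if i != lost]
    return a ^ b
```

Because repair uses a fixed number of survivors regardless of which fragment was lost, independent repairs can proceed in parallel, the property the abstract emphasizes.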
【Paper Link】 【Pages】:1224-1232
【Authors】: Jie Dai ; Bo Li ; Fangming Liu ; Baochun Li ; Hai Jin
【Abstract】: Collaborative ISP caching has been advocated to reduce the otherwise significant amount of costly inter-ISP traffic generated by peer-to-peer (P2P) applications. The fundamental design criteria employed by ISP cache servers are, however, not well understood, with respect to dynamic P2P traffic patterns, ISP peering policies and cache server capacity constraints. In particular, there is a lack of investigations on the design and analysis of resource allocation mechanisms with awareness of inter-ISP traffic and ISP policies in the context of collaborative ISP caching - which is our focus in this study. In this paper, by characterizing practical inter-ISP traffic patterns, we have developed a theoretical framework to analyze representative cache resource allocation schemes within the design space of collaborative caching, with a particular focus on minimizing costly inter-ISP traffic. The optimization framework incorporates both locality-aware and locality-unaware peer selection strategies and ISP peering agreements, in order to examine their respective effects on the design of ISP collaborative caching mechanisms. Our analyses not only help us understand the traffic characteristics of existing P2P systems in light of realistic elements, but also offer fundamental insights into designing collaborative ISP caching mechanisms.
【Keywords】: Internet; mobile computing; network servers; optimisation; peer-to-peer computing; telecommunication services; telecommunication traffic; P2P networks; cache servers; collaborative caching; context awareness; inter-ISP traffic; locality-aware peer selection; locality-unaware peer selection; optimization; peer-to-peer networks; resource allocation; Bandwidth; Collaboration; Optimization; Peer to peer computing; Resource management; Servers; Streaming media
【Paper Link】 【Pages】:1233-1241
【Authors】: Ting He ; Animashree Anandkumar ; Dakshi Agrawal
【Abstract】: We consider the problem of tracking the topology of a large-scale dynamic network with limited monitoring resources. By modeling the dynamics of links as independent ON-OFF Markov chains, we formulate the problem as that of maximizing the overall accuracy of tracking link states when only a limited number of network elements can be monitored at each time step. We consider two forms of sampling policies: link sampling, where we directly observe the selected links, and node sampling, where we observe states of the links adjacent to the selected nodes. We reduce the link sampling problem to a Restless Multi-armed Bandit (RMB) and prove its indexability under certain conditions. By applying the Whittle's index policy, we develop an efficient link sampling policy with methods to compute the Whittle's index explicitly. Under node sampling, we use a linear programming (LP) formulation to derive an extended policy that can be reduced to determining the nodes with maximum coverage of the Whittle's indices. We also derive performance upper bounds in both sampling scenarios. Simulations show the efficacy of the proposed policies. Compared with the myopic policy, our solution achieves significantly better tracking performance for heterogeneous links.
【Keywords】: Markov processes; linear programming; network theory (graphs); sampling methods; LP formulation; RMB; Whittle index policy; dynamic network tracking; heterogeneous links; index-based sampling policies; large-scale dynamic network topology; linear programming formulation; link sampling; myopic policy; node sampling; on-off Markov chains; restless multiarmed bandit; sampling constraints; Accuracy; Equations; Indexes; Markov processes; Monitoring; Network topology; Upper bound; Network sampling; Whittle's index policy; network topology tracking; restless multi-armed bandits
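The ON-OFF Markov model above implies a simple belief update for unmonitored links: the tracker's probability that a link is ON evolves through the chain's transition probabilities until the link is next sampled. A minimal sketch of that update plus a myopic sampling baseline (this is the myopic policy the paper compares against, not the Whittle index policy; parameter names are illustrative):

```python
def evolve(p_on, p01, p10):
    """One-step belief update for an unobserved ON-OFF Markov link
    with P(OFF->ON) = p01 and P(ON->OFF) = p10:
    p_on' = p_on*(1 - p10) + (1 - p_on)*p01."""
    return p_on * (1 - p10) + (1 - p_on) * p01

def myopic_sample(beliefs, budget):
    """Myopic baseline: spend the sampling budget on the links whose
    ON-probability is closest to 1/2, i.e. the most uncertain ones."""
    order = sorted(range(len(beliefs)), key=lambda i: abs(beliefs[i] - 0.5))
    return set(order[:budget])
```

Unsampled beliefs drift toward the chain's stationary probability p01/(p01+p10), which is why long-unmonitored links eventually look maximally uncertain and get sampled.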
【Paper Link】 【Pages】:1242-1250
【Authors】: Yashar Ghiassi-Farrokhfal ; Jörg Liebeherr ; Almut Burchard
【Abstract】: We study how the choice of packet scheduling algorithms influences end-to-end performance on long network paths. Taking a network calculus approach, we consider both deterministic and statistical performance metrics. A key enabling contribution for our analysis is a significantly sharpened method for computing a statistical bound for the service given to a flow by the network as a whole. For a suitably parsimonious traffic model we develop closed-form expressions for end-to-end delays, backlog, and output burstiness. The deterministic versions of our bounds yield optimal bounds on end-to-end backlog and output burstiness for some schedulers, and are highly accurate for end-to-end delay bounds.
【Keywords】: statistical analysis; telecommunication links; link scheduling; optimal bounds; packet scheduling algorithms; statistical analysis; Calculus; Convolution; Delay; Probabilistic logic; Probability; Scheduling algorithm
【Paper Link】 【Pages】:1251-1259
【Authors】: Shreeshankar Bodas ; Sanjay Shakkottai ; Lei Ying ; R. Srikant
【Abstract】: This paper considers the problem of designing scheduling algorithms for multi-channel (e.g., OFDM-based) wireless downlink systems. We show that the Server-Side Greedy (SSG) rule introduced in earlier papers for ON-OFF channels performs well even for more general channel models. The key contribution in this paper is the development of new mathematical techniques for analyzing Markov chains that arise when studying general channel models. These techniques include a way of calculating the distribution of the maximum of a multi-dimensional Markov chain (note that the maximum does not have the Markov property on its own), and also a Markov chain stochastic dominance result using coupling arguments.
【Keywords】: Markov processes; radio networks; scheduling; wireless channels; Markov chain stochastic dominance; ON-OFF channels; SSG rule; coupling arguments; mathematical techniques; multichannel wireless downlink systems; multidimensional Markov chain; multirate multichannel wireless networks; scheduling algorithms; server-side greed rule; IP networks; Queueing analysis; Silicon; Tin; Wireless communication; Markov chain stochastic dominance; Scheduling algorithms; large deviations; small buffer
【Paper Link】 【Pages】:1260-1268
【Authors】: Jian Tan ; Yang Yang ; Ness B. Shroff ; Hesham El Gamal
【Abstract】: Recent work has shown that retransmissions can cause heavy-tailed transmission delays even when packet sizes are light-tailed. Moreover, the impact of heavy-tailed delays persists even when packets are of finite size. The key question we study in this paper is how the use of coding techniques to transmit information could mitigate delays. To investigate this problem, we consider an important communication channel called the Binary Erasure Channel, where transmitted bits are either received successfully or lost (called an erasure). This model is a good abstraction of not only the wireless channel but also the higher layer link, where erasure errors can happen. Many coding schemes, known as erasure codes, have been designed for this channel. Specifically, we focus on the fixed rate coding scheme, where decoding is said to be successful if a certain fraction β of the codeword is received correctly. We study two different scenarios: (I) A codeword of length Lc is retransmitted as a unit until the receiver successfully receives more than βLc bits in the last transmission. (II) All successfully received bits from every (re)transmissions are buffered at the receiver according to their positions in the codeword, and the transmission completes once the received bits become decodable for the first time. Our studies reveal that complicated and surprising relationships exist between the coding complexity and the transmission delay/throughput. From a theoretical perspective, our results provide a benchmark to quantify the tradeoffs between coding complexity and transmission throughput for receivers that use memory to buffer (re)transmissions until success and those that do not buffer intermediate transmissions.
【Keywords】: channel coding; coding errors; decoding; delays; wireless channels; binary erasure channel; codeword; coding complexity; communication channel; decoding; erasure code; erasure error; fixed rate coding scheme; heavy-tailed transmission delay; retransmission code technique; wireless channel; Logic gates
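The two retransmission scenarios above are easy to compare with a small Monte Carlo sketch (hypothetical parameters: codeword length 64, β = 0.8, erasure probability 0.3; this is an illustration, not the paper's analytical model):

```python
import random

def tx_without_memory(lc, beta, erasure_p, rng):
    # Scenario I: the codeword is retransmitted as a unit until a single
    # transmission delivers at least beta*lc bits on its own.
    needed = beta * lc
    count = 0
    while True:
        count += 1
        received = sum(1 for _ in range(lc) if rng.random() > erasure_p)
        if received >= needed:
            return count

def tx_with_memory(lc, beta, erasure_p, rng):
    # Scenario II: successfully received positions are buffered across
    # (re)transmissions; decoding succeeds once beta*lc distinct bits arrive.
    needed = beta * lc
    have = set()
    count = 0
    while len(have) < needed:
        count += 1
        for pos in range(lc):
            if rng.random() > erasure_p:
                have.add(pos)
    return count

rng = random.Random(42)
trials = 300
mean_i = sum(tx_without_memory(64, 0.8, 0.3, rng) for _ in range(trials)) / trials
mean_ii = sum(tx_with_memory(64, 0.8, 0.3, rng) for _ in range(trials)) / trials
print(mean_i, mean_ii)
```

With these parameters the memoryless receiver needs tens of transmissions on average, while the buffering receiver almost always finishes in two, which is the throughput gap the paper quantifies.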
【Paper Link】 【Pages】:1269-1277
【Authors】: Dejun Yang ; Xi Fang ; Guoliang Xue
【Abstract】: The notion of a probabilistic network has been used to characterize the unpredictable environment in wireless communication networks and other unstable networks. In this paper, we are interested in the problem of placing servers in probabilistic networks subject to a budget constraint, so as to maximize the expected number of servable clients that can successfully connect to a server. We study this problem in both the single-hop model and the multi-hop model. We discuss the computational complexity of this problem and show that it is NP-hard under both models. We then develop efficient approximation algorithms, which produce solutions provably close to optimal. If the costs of candidate locations are uniform, then, when extra budget becomes available in the future, the progressive feature of our algorithms allows for placing additional servers instead of relocating all the servers, while retaining the guaranteed performance. Results of extensive experiments on different topologies confirm the performance of our algorithms compared to the optimal algorithm and other heuristic algorithms.
【Keywords】: computational complexity; network servers; radio networks; ESPN; NP-hard; budget constraint; computational complexity; efficient server placement; heuristic algorithm; multihop model; optimal algorithm; probabilistic network; single-hop model; unstable network; wireless communication network; Algorithm design and analysis; Approximation algorithms; Approximation methods; Computational modeling; Probabilistic logic; Servers; Silicon
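The single-hop objective can be illustrated with a uniform-cost greedy sketch; the `probs` matrix and location indices below are hypothetical, and this shows only the progressive-placement idea, not the authors' exact algorithm:

```python
def expected_servable(placed, probs):
    # probs[i][j]: probability that client i can reach candidate location j
    # (single-hop model, independent links). A client is servable if it
    # connects to at least one placed server.
    total = 0.0
    for client in probs:
        miss = 1.0
        for j in placed:
            miss *= 1.0 - client[j]
        total += 1.0 - miss
    return total

def greedy_placement(probs, budget):
    # Uniform-cost greedy: repeatedly add the location with the largest
    # marginal gain. Progressive feature: extra budget extends the
    # placement without relocating earlier servers.
    placed = []
    remaining = set(range(len(probs[0])))
    for _ in range(budget):
        best = max(remaining, key=lambda j: expected_servable(placed + [j], probs))
        placed.append(best)
        remaining.remove(best)
    return placed

# Three clients, three candidate locations (hypothetical probabilities).
probs = [
    [0.9, 0.1, 0.0],
    [0.8, 0.0, 0.2],
    [0.0, 0.7, 0.6],
]
print(greedy_placement(probs, 2))
```

Because the expected-coverage objective is monotone submodular, a greedy of this form is within a (1 − 1/e) factor of optimal under uniform costs, which is the flavor of guarantee such approximation algorithms provide.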
【Paper Link】 【Pages】:1278-1286
【Authors】: Moti Geva ; Amir Herzberg
【Abstract】: We present QoSoDoS, a protocol that ensures QoS over a DoS-prone (Best-Effort) network. QoSoDoS ensures timely delivery of time-sensitive messages over an unreliable network susceptible to high congestion and network-flooding DoS attacks. QoSoDoS is based on scheduling multiple transmissions of packets while attempting to minimize overhead and load and to avoid self-creation of DoS. We present a model and initial empirical results of a QoSoDoS implementation. Our results show that under typical scenarios, QoSoDoS can handle high congestion and DoS attacks quite well.
【Keywords】: Internet; computer network security; protocols; DoS prone network; QoSoDoS protocol; multiple transmission scheduling; network flooding DoS attack; time sensitive message; unreliable network; Quality of service; Subspace constraints; Variable speed drives
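The core idea of scheduling multiple transmissions reduces to a one-line reliability calculation; the loss rate and delivery target below are hypothetical, and QoSoDoS itself additionally spreads the copies over time against deadlines:

```python
def copies_for_target(loss_prob, delivery_target):
    # Smallest number of independently transmitted copies of a packet such
    # that the probability at least one copy arrives meets the target.
    k = 1
    while loss_prob ** k > 1.0 - delivery_target:
        k += 1
    return k

# Under a 50% loss rate (e.g. heavy congestion or a flooding attack),
# seven scheduled copies push delivery probability above 99%.
print(copies_for_target(0.5, 0.99))
```

This also shows the overhead tension the abstract mentions: meeting a tighter target under attack multiplies the load, so the number of copies must be kept minimal to avoid self-creating a DoS.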
【Paper Link】 【Pages】:1287-1295
【Authors】: Shaohe Lv ; Weihua Zhuang ; Xiaodong Wang ; Xingming Zhou
【Abstract】: Successive interference cancellation (SIC) is an effective way of multipacket reception (MPR) to combat interference in wireless networks. To understand the potential MPR advantages, we study link scheduling in an ad hoc network with SIC at the physical layer. The fact that the links detected sequentially by SIC are correlated at the receiver poses key technical challenges. We characterize the link dependence and propose the simultaneity graph (SG) to capture the effect of SIC. Then the interference number is defined to measure the interference of a link. We show that scheduling over SG is NP-hard and that the maximum interference number bounds the performance of maximal greedy schemes. An independent set based greedy scheme is explored to efficiently construct a maximal feasible schedule. Moreover, with careful selection of link ordering, we present a scheduling scheme that improves the bound. The performance is evaluated by both simulations and measurements in a testbed. The throughput gain is on average 40% and up to 120% over IEEE 802.11. The complexity of SG is comparable with that of the conflict graph, especially when the network size is not large.
【Keywords】: ad hoc networks; graph theory; interference suppression; MPR; NP-hard problem; SG; SIC; independent set based greedy scheme; link scheduling; maximal greedy scheme; multipacket reception; simultaneity graph; successive interference cancellation; wireless ad hoc network; Interference; Protocols; Receivers; Schedules; Scheduling; Silicon carbide; Wireless networks; Link scheduling; ad hoc network; successive interference cancellation
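A generic maximal greedy scheduler over a conflict-style graph, ordering links by interference number (here simply the conflict degree), can be sketched as follows; the graph is hypothetical and this does not reproduce the paper's simultaneity-graph construction:

```python
def greedy_schedule(num_links, conflicts):
    # conflicts: set of frozensets {a, b}, meaning links a and b cannot be
    # scheduled in the same slot. The interference number of a link is
    # approximated here by its conflict degree.
    degree = [0] * num_links
    for a, b in (tuple(c) for c in conflicts):
        degree[a] += 1
        degree[b] += 1
    # Schedule low-interference links first, greedily keeping feasibility,
    # which yields a maximal independent set of links.
    chosen = []
    for link in sorted(range(num_links), key=lambda l: degree[l]):
        if all(frozenset((link, c)) not in conflicts for c in chosen):
            chosen.append(link)
    return chosen

# Five links; link 0 conflicts with every other link, the rest are pairwise free.
conflicts = {frozenset((0, k)) for k in range(1, 5)}
print(sorted(greedy_schedule(5, conflicts)))
```

Ordering by interference number is exactly the kind of "careful selection of link ordering" the abstract refers to: scheduling the star center first would block the four other links.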
【Paper Link】 【Pages】:1296-1304
【Authors】: Yunbo Wang ; Mehmet C. Vuran ; Steve Goddard
【Abstract】: Emerging applications of wireless sensor networks (WSNs) require real-time event detection to be provided by the network. In a typical event monitoring WSN, multiple reports are generated by several nodes when a physical event occurs, and are then forwarded through multi-hop communication to a sink that detects the event. To improve the event detection reliability, timely delivery of a certain number of packets is usually required. Traditional timing analyses of WSNs, however, focus either on individual packets or on traffic flows from individual nodes. In this paper, a spatio-temporal fluid model is developed to capture the delay characteristics of event detection in large-scale WSNs. More specifically, the distribution of delay in event detection from multiple reports is modeled. Accordingly, metrics such as mean delay and soft delay bounds are analyzed for different network parameters. Motivated by the fact that queue build up in WSNs with low-rate traffic is negligible, a lower-complexity model is also developed. Testbed experiments and simulations are used to validate the accuracy of both approaches. The resulting framework can be utilized to analyze the effects of network and protocol parameters on event detection delay to realize real-time operation in WSNs. To the best of our knowledge, this is the first approach that provides a transient analysis of event detection delay when multiple reports via multi-hop communication are needed.
【Keywords】: telecommunication network reliability; transient analysis; wireless sensor networks; WSN; event detection delay reliability analysis; mean delay bound; multihop communication; multiple report; protocol parameter; soft delay bound; spatio-temporal fluid model; traffic flows; transient analysis; wireless sensor networks; Complexity theory; Delay; Event detection; Monitoring; Network topology; Protocols
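The k-th-report detection delay can be illustrated with a toy Monte Carlo (exponential per-hop delays and hypothetical rates; the paper derives this from a fluid model rather than simulation):

```python
import random

def detection_delay(num_reports, hops, hop_rate, k, rng):
    # Each report crosses `hops` hops with independent exponential per-hop
    # delays; the event is detected when the k-th report reaches the sink,
    # i.e. the k-th order statistic of the report arrival times.
    arrivals = sorted(sum(rng.expovariate(hop_rate) for _ in range(hops))
                      for _ in range(num_reports))
    return arrivals[k - 1]

rng = random.Random(9)
trials = 1000
first = sum(detection_delay(8, 5, 2.0, 1, rng) for _ in range(trials)) / trials
fifth = sum(detection_delay(8, 5, 2.0, 5, rng) for _ in range(trials)) / trials
print(first, fifth)
```

Requiring more reports for reliable detection (k = 5 instead of k = 1) visibly shifts the delay distribution to the right, which is the reliability/latency trade-off the analysis captures.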
【Paper Link】 【Pages】:1305-1313
【Authors】: Lei Tang ; Yanjun Sun ; Omer Gurewitz ; David B. Johnson
【Abstract】: This paper presents PW-MAC (Predictive-Wakeup MAC), a new energy-efficient MAC protocol based on asynchronous duty cycling. In PW-MAC, nodes each wake up to receive at randomized, asynchronous times. PW-MAC minimizes sensor node energy consumption by enabling senders to predict receiver wakeup times; to enable accurate predictions, PW-MAC introduces an on-demand prediction error correction mechanism that effectively addresses timing challenges such as unpredictable hardware and operating system delays and clock drift. PW-MAC also introduces an efficient prediction-based retransmission mechanism to achieve high energy efficiency even when wireless collisions occur and packets must be retransmitted. We evaluate PW-MAC on a testbed of MICAz motes and compare it to X-MAC, WiseMAC, and RI-MAC, three previous energy-efficient MAC protocols, under multiple concurrent multihop traffic flows and under hidden-terminal scenarios and scenarios in which nodes have wakeup schedule conflicts. In all experiments, PW-MAC significantly outperformed these other protocols. For example, evaluated on scenarios with 15 concurrent transceivers in the network, the average sender duty cycle for X-MAC, WiseMAC, and RI-MAC were all over 66%, while PW-MAC's average sender duty cycle was only 11%; the delivery latency for PW-MAC in these scenarios was less than 5% that for WiseMAC and X-MAC. In all experiments, PW-MAC maintained a delivery ratio of 100%.
【Keywords】: access protocols; telecommunication traffic; wireless sensor networks; MICAz motes; PW-MAC; RI-MAC; WiseMAC; X-MAC; asynchronous duty cycling; energy-efficient predictive-wakeup MAC protocol; multiple concurrent multihop traffic flow; ondemand prediction error correction mechanism; prediction-based retransmission mechanism; sensor node energy consumption; wireless collision; wireless sensor network; Clocks; Hardware; Media Access Protocol; Receivers; Wireless communication; Wireless sensor networks
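The prediction mechanism can be sketched with a wakeup schedule driven by a shared pseudo-random seed; the LCG constants and the drift handling below are illustrative assumptions, not PW-MAC's actual implementation:

```python
def wakeup_schedule(seed, base_interval, n):
    # The receiver wakes at randomized times driven by a shared LCG seed,
    # so a sender knowing the seed can reproduce the schedule exactly.
    state = seed
    t = 0.0
    times = []
    for _ in range(n):
        state = (1103515245 * state + 12345) % (1 << 31)
        jitter = state / (1 << 31)           # uniform in [0, 1)
        t += base_interval * (0.5 + jitter)  # interval in [0.5T, 1.5T)
        times.append(t)
    return times

def predict_next(seed, base_interval, last_index, observed_time):
    # Sender-side prediction: replay the schedule, then apply an on-demand
    # correction for clock drift measured at the last observed wakeup.
    ideal = wakeup_schedule(seed, base_interval, last_index + 2)
    drift = observed_time - ideal[last_index]
    return ideal[last_index + 1] + drift

schedule = wakeup_schedule(seed=7, base_interval=1.0, n=5)
# Receiver's clock runs 3 ms ahead; sender observed wakeup 2 at schedule[2] + 0.003.
predicted = predict_next(7, 1.0, 2, schedule[2] + 0.003)
print(predicted, schedule[3] + 0.003)
```

The point of the correction term is that unpredictable OS delays and clock drift shift the whole schedule, so re-anchoring on the last observed wakeup keeps predictions accurate without resynchronizing clocks.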
【Paper Link】 【Pages】:1314-1322
【Authors】: Yang Peng ; Zi Li ; Daji Qiao ; Wensheng Zhang
【Abstract】: This paper presents a new receiver-initiated sensor network MAC protocol, called CyMAC, which has the following unique features. It reduces the idle listening time of sensor nodes via establishing rendezvous times between neighbors, provides the desired relative delay bound guarantee for data delivery services via planning the rendezvous schedules carefully, and adjusts the sensor nodes' duty cycles dynamically to varying traffic conditions. More importantly, CyMAC achieves the above goals without requiring time synchrony between sensor nodes. We have implemented and evaluated CyMAC in both TinyOS and the ns-2 simulator. Experimental and simulation results show that, compared with RI-MAC, a state-of-the-art sensor network MAC protocol, CyMAC can always guarantee the desired delay bound for data delivery services and yields a lower duty cycle under reasonable delay requirements.
【Keywords】: access protocols; scheduling; wireless sensor networks; CyMAC protocol; RI-MAC protocol; TinyOS simulator; data delivery services; delay-bounded MAC protocol; idle listening time; ns-2 simulator; receiver-initiated sensor network MAC protocol; relative delay bound guarantee; rendezvous schedule planning; sensor node duty cycles; Clocks; Delay; Media Access Protocol; Receivers; Schedules; Sleep
【Paper Link】 【Pages】:1323-1331
【Authors】: Shuguang Xiong ; Jianzhong Li ; Mo Li ; Jiliang Wang ; Yunhao Liu
【Abstract】: For energy conservation, a wireless sensor network is usually designed to work in a low-duty-cycle mode, in which a sensor node keeps active for a small percentage of time during its working period. In applications where there are multiple data delivery tasks with high data rates and time constraints, the low-duty-cycle working mode may cause severe transmission congestion and data loss. In order to alleviate congestion and reduce data loss, the tasks need to be carefully scheduled to balance the workloads among the sensor nodes in both the spatial and temporal dimensions. This paper studies the load balancing problem and proves that it is NP-complete in general network graphs. Two efficient scheduling algorithms to achieve load balance are proposed and analyzed. Furthermore, a task scheduling protocol is designed relying on the proposed algorithms. To the best of our knowledge, this paper is the first to tackle multiple-task scheduling for low-duty-cycled sensor networks. The simulation results show that the proposed algorithms greatly improve the network performance in most scenarios.
【Keywords】: energy conservation; graph theory; optimisation; protocols; resource allocation; scheduling; wireless sensor networks; NP-complete; energy conservation; low-duty-cycle mode; network graphs; scheduling algorithms; task scheduling protocol; transmission congestion; wireless sensor networks; workload balancing; Algorithm design and analysis; Delay; Optimal scheduling; Schedules; Scheduling; Time factors; Wireless sensor networks
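The balancing flavor of the problem can be illustrated with the classic LPT greedy (largest tasks first, each to the least-loaded bin, where a bin could stand for a node/time-slot pair); the task loads are hypothetical and the paper's algorithms are more involved:

```python
def balance_tasks(task_loads, num_bins):
    # LPT greedy: sort tasks by decreasing load, assign each to the
    # currently least-loaded bin. For m identical bins the makespan is
    # within 4/3 - 1/(3m) of optimal (Graham's bound).
    loads = [0.0] * num_bins
    assignment = [None] * len(task_loads)
    order = sorted(range(len(task_loads)), key=lambda i: -task_loads[i])
    for i in order:
        bin_ = min(range(num_bins), key=lambda b: loads[b])
        loads[bin_] += task_loads[i]
        assignment[i] = bin_
    return assignment, max(loads)

tasks = [7, 5, 4, 3, 3, 2]
assignment, makespan = balance_tasks(tasks, 3)
print(assignment, makespan)
```

Here the greedy happens to reach the optimal makespan of 9 (the task of load 7 forces any schedule to 9), illustrating why spreading the heaviest tasks first avoids the congestion hot spots the abstract describes.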
【Paper Link】 【Pages】:1332-1340
【Authors】: Brian Guenter ; Navendu Jain ; Charles Williams
【Abstract】: We present ACES, an automated server provisioning system that aims to meet workload demand while minimizing energy consumption in data centers. To perform energy-aware server provisioning, ACES faces three key tradeoffs between cost, performance, and reliability: (1) maximizing energy savings vs. minimizing unmet load demand, (2) managing low power draw vs. high transition latencies for multiple power management schemes, and (3) balancing energy savings vs. reliability costs of server components due to on-off cycles. To address these challenges, ACES (1) predicts demand in the near future to turn on servers gradually before they are needed and avoids turning on unnecessary servers to cope with transient load spikes, (2) formulates an optimization problem that minimizes a linear combination of unmet demand and total energy and reliability costs, and uses the program structure to solve the problem efficiently in practice, and (3) constructs an execution plan based on the optimization decisions to transition servers between different power states and actuates them using system and load management interfaces. Our evaluation on three data center workloads shows that ACES's energy savings are close to the optimal and it delivers power proportionality while balancing the tradeoff between energy savings and reliability costs.
【Keywords】: Internet; computer centres; computer network reliability; power consumption; resource allocation; telecommunication services; ACES; Internet service; automated server provisioning system; cost management; data center; energy consumption; energy savings; energy-aware server provisioning; high transition latency; load management interface; low power draw; on-off cycle; optimization decision; power management; program structure; reliability cost; reliability tradeoff; server component; transient load spikes; unmet load demand; workload demand; Variable speed drives
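The three-way trade-off (unmet demand vs. energy vs. on-off reliability cost) can be sketched as a small dynamic program over server counts; the costs and demand trace are hypothetical, and ACES solves a richer formulation with demand prediction and multiple power states:

```python
def provision(demand, cap, energy_cost, switch_cost, unmet_penalty, max_n):
    # DP over the number of active servers: each step pays energy for the
    # active servers, a penalty per unit of unmet demand, and a
    # reliability/transition cost per server switched on or off.
    prev = {0: 0.0}                      # start with all servers off
    back = []
    for d in demand:
        cur, arg = {}, {}
        for n in range(max_n + 1):
            step = n * energy_cost + max(0.0, d - n * cap) * unmet_penalty
            best_m = min(prev, key=lambda m: prev[m] + abs(n - m) * switch_cost)
            cur[n] = prev[best_m] + abs(n - best_m) * switch_cost + step
            arg[n] = best_m
        back.append(arg)
        prev = cur
    # Recover the cheapest plan by backtracking.
    n = min(prev, key=prev.get)
    plan = [n]
    for arg in reversed(back[1:]):
        n = arg[n]
        plan.append(n)
    return list(reversed(plan))

# Hypothetical trace: demand spikes mid-horizon; each server serves 10 units.
plan = provision([10, 30, 30, 5], cap=10, energy_cost=1.0,
                 switch_cost=0.5, unmet_penalty=10.0, max_n=4)
print(plan)
```

Note how the switching cost keeps the plan from chasing the demand dip too aggressively: turning servers off and back on would pay the on-off reliability cost twice.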
【Paper Link】 【Pages】:1341-1349
【Authors】: Canming Jiang ; Yi Shi ; Y. Thomas Hou ; Sastry Kompella
【Abstract】: Network throughput and energy consumption are two important performance metrics for a multi-hop wireless network. The current state of the art is limited to either maximizing throughput under some energy constraint or minimizing energy consumption while satisfying some throughput requirement. In this paper, we take a multicriteria optimization approach to offer a systematic study of the relationship between the two performance objectives. We show that the solution to the multicriteria optimization problem is equivalent to finding an optimal throughput-energy curve, which characterizes the envelope of the entire throughput-energy region. We prove some important properties of the optimal throughput-energy curve. For case study, we consider both linear and nonlinear throughput functions. In the linear case, we characterize the optimal throughput-energy curve precisely through parametric analysis, while in the nonlinear case, we use a piece-wise linear approximation to approximate the optimal throughput-energy curve with arbitrary accuracy. Our results offer important insights on exploiting the trade-off between the two performance metrics.
【Keywords】: approximation theory; optimisation; radio networks; energy consumption; multicriteria optimization; multihop wireless network; network throughput; optimal throughput-energy curve; parametric analysis; performance metrics; piece-wise linear approximation; throughput-energy region; Accuracy; IEC; IEC standards; Routing
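The envelope idea can be illustrated by extracting the non-dominated (throughput, energy) points from a discrete set of operating points; the points below are hypothetical:

```python
def pareto_envelope(points):
    # points: (throughput, energy) operating points. Keep those not
    # dominated by another point with throughput >= and energy <=
    # (at least one strictly better) -- a discrete analogue of the
    # optimal throughput-energy curve.
    frontier = []
    for t, e in sorted(points, key=lambda p: (-p[0], p[1])):
        if not frontier or e < frontier[-1][1]:
            frontier.append((t, e))
    return list(reversed(frontier))

points = [(1, 1), (2, 3), (3, 6), (2, 5), (1, 2), (3, 7)]
print(pareto_envelope(points))
```

The continuous curve in the paper plays the same role: any point below it is infeasible, and any achievable point above it is dominated by a point on the curve.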
【Paper Link】 【Pages】:1350-1358
【Authors】: Yi Shi ; Liguang Xie ; Y. Thomas Hou ; Hanif D. Sherali
【Abstract】: Traditional wireless sensor networks are constrained by limited battery energy. Thus, finite network lifetime is widely regarded as a fundamental performance bottleneck. Recent breakthroughs in the area of wireless energy transfer offer the potential of removing this performance bottleneck, i.e., of allowing a sensor network to remain operational forever. In this paper, we investigate the operation of a sensor network under this new enabling energy transfer technology. We consider the scenario of a mobile charging vehicle periodically traveling inside the sensor network and charging each sensor node's battery wirelessly. We introduce the concept of a renewable energy cycle and offer both necessary and sufficient conditions. We study an optimization problem, with the objective of maximizing the ratio of the wireless charging vehicle (WCV)'s vacation time over the cycle time. For this problem, we prove that the optimal traveling path for the WCV is the shortest Hamiltonian cycle and provide a number of important properties. Subsequently, we develop a near-optimal solution and prove its performance guarantee.
【Keywords】: battery chargers; optimisation; telecommunication power supplies; wireless sensor networks; Hamiltonian cycle; battery charging; finite network lifetime; mobile charging vehicle scenario; optimization; renewable energy cycle; renewable sensor networks; wireless charging vehicle vacation time; wireless energy transfer; wireless sensor networks; Batteries; Energy exchange; Optimization; Renewable energy resources; Routing; Wireless communication; Wireless sensor networks
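For a handful of nodes, the WCV's shortest Hamiltonian cycle can be found by brute force; the coordinates are hypothetical, and realistic instances require TSP heuristics:

```python
import itertools, math

def shortest_cycle(points):
    # Brute-force shortest Hamiltonian cycle over the sensor nodes (the
    # WCV's optimal traveling path in the paper's formulation). Fixing the
    # first node removes rotational symmetry; feasible for small n only.
    n = len(points)
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    best_len, best_tour = float("inf"), None
    for perm in itertools.permutations(range(1, n)):
        tour = (0,) + perm
        length = sum(dist(points[tour[i]], points[tour[(i + 1) % n]])
                     for i in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# Four sensors on a unit square: the optimal cycle is the perimeter (length 4).
length, tour = shortest_cycle([(0, 0), (0, 1), (1, 1), (1, 0)])
print(length, tour)
```

Minimizing travel time matters here because every minute the WCV spends driving is a minute subtracted from its vacation time in the paper's objective ratio.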
【Paper Link】 【Pages】:1359-1367
【Authors】: Matthew Andrews ; Spyridon Antonakopoulos ; Lisa Zhang
【Abstract】: A key problem in the control of packet-switched data networks is to schedule the data so that the queue sizes remain bounded over time. Scheduling algorithms have been developed in a number of different models that ensure network stability as long as no queue is inherently overloaded. However, this literature typically assumes that each server runs at a fixed maximum speed. Although this is optimal for clearing queue backlogs as fast as possible, it may be suboptimal in terms of energy consumption. Indeed, a lightly loaded server could operate at a lower rate, at least temporarily, to save energy. Within an energy-aware framework, a natural question arises: "What is the minimum energy that is required to keep the network stable?" In this paper, we demonstrate the following results towards answering that question. Starting with the simplest case of a single server in isolation, we consider three types of rate adaptation policies: a heuristic policy, which sets server speed depending on queue size only, and two more complex ones that exhibit a tradeoff between queue size and energy usage. We also present a lower bound on the best such tradeoff that can possibly be achieved. Next, we study a general network environment and investigate two scenarios. In a temporary sessions scenario, where connection paths can rapidly change over time, we propose a combination of the above rate adaptation policies with the standard Farthest-to-Go scheduling algorithm. This approach provides stability in the network setting, while using an amount of energy that is within a bounded factor of the optimum. In a permanent sessions scenario, where connection paths are fixed, we examine an analogue of the well-known Weighted Fair Queueing scheduling policy and show how delay bounds are affected under rate adaptation.
【Keywords】: packet switching; queueing theory; scheduling; stability; delay bounds; energy consumption; energy-aware scheduling algorithms; farthest-to go scheduling algorithm; heuristic policy; loaded server; lower bound; network stability; packet-switched data networks; queue backlogs; queue size; single server; weighted fair queueing scheduling policy; Adaptation model; Delay; Energy consumption; Optimized production technology; Scheduling algorithm; Servers; Stability analysis
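The heuristic rate-adaptation policy (speed set from queue size only) is easy to contrast with a fixed maximum speed; the cubic power function s³ and the unit arrival pattern are illustrative assumptions:

```python
def run(arrivals, speed_policy, alpha=3.0):
    # One server, discrete time: work arrives, the server picks a speed s
    # from the current backlog, drains min(q, s), and pays energy s**alpha.
    q, energy, peak_q = 0.0, 0.0, 0.0
    for a in arrivals:
        q += a
        peak_q = max(peak_q, q)
        s = speed_policy(q)
        q -= min(q, s)
        energy += s ** alpha
    return energy, peak_q

arrivals = [1.0] * 100
fixed_energy, _ = run(arrivals, lambda q: 2.0)                 # fixed max speed
adaptive_energy, peak = run(arrivals, lambda q: q ** (1 / 3))  # queue-driven speed
print(fixed_energy, adaptive_energy, peak)
```

With a convex power curve, running at double the needed speed costs 8x the energy per slot, while the queue-driven policy keeps the backlog bounded at a fraction of that cost, which is exactly the queue-size/energy trade-off the paper quantifies.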
【Paper Link】 【Pages】:1368-1376
【Authors】: Dan Li ; Jiangwei Yu ; Junbiao Yu ; Jianping Wu
【Abstract】: Multicast benefits group communications in saving network traffic and improving application throughput, both of which are important for data center applications. However, the technical trend of future data center design poses new challenges for efficient and scalable Multicast routing. First, the densely connected networks make traditional receiver-driven Multicast routing protocols inefficient in Multicast tree formation. Second, it is quite difficult for the low-end switches largely used in data centers to hold the routing entries of massive Multicast groups.
【Keywords】: computational complexity; computer centres; multicast protocols; routing protocols; telecommunication network topology; trees (mathematics); Steiner-tree algorithm; bloom filter; computation complexity; data center networks; low-end switch; network traffic; regular topology; scalable multicast routing protocol; source-to-receiver expansion approach; Approximation algorithms; Bandwidth; Buildings; Receivers; Routing; Servers; Topology
【Paper Link】 【Pages】:1377-1385
【Authors】: Jiao Zhang ; Fengyuan Ren ; Chuang Lin
【Abstract】: Recently, the TCP incast problem has attracted increasing attention, since the receiver suffers a drastic goodput drop when it simultaneously strips data over multiple servers. Many attempts have been made to address the problem through experiments and simulations. However, to the best of our knowledge, few solutions can solve it fundamentally at low cost. In this paper, a goodput model of TCP incast is built to understand why goodput collapse occurs. We conclude that TCP incast goodput deterioration is mainly caused by two types of timeouts: one happens at the tail of a data block and dominates the goodput when the number of senders is small, while the other happens at the head of a data block and governs the goodput when the number of senders is large. The proposed model describes the causes of these two types of timeouts, which are related to the incast communication pattern, block size, bottleneck buffer, and so on. We validate the proposed model by comparing it with simulation data, finding that it characterizes the features of TCP incast well. We also discuss the impact of most parameters on the goodput of TCP incast.
【Keywords】: computer centres; multicast communication; transport protocols; TCP incast; data center networks; US Department of Defense; Data Center Networks; Goodput; Modeling; TCP incast
【Paper Link】 【Pages】:1386-1394
【Authors】: Chao-Chih Chen ; Lihua Yuan ; Albert G. Greenberg ; Chen-Nee Chuah ; Prasant Mohapatra
【Abstract】: In a multi-tenant data center environment, the current paradigm for route control customization involves a labor-intensive ticketing process, in which tenants submit route control requests to the landlord. This results in a tight coupling between tenants and landlord, extensive human resource deployment, and long ticket resolution time. We propose Routing-as-a-Service (RaaS), a framework for tenant-directed route control in data centers. We show that RaaS-based implementation provides a route control platform for multiple tenants to perform route control independently with little administrative involvement, and for the landlord to set the overall network policies. RaaS-based solutions can run on commercial off-the-shelf (COTS) hardware and leverage existing technologies, so it can be implemented in existing networks without major infrastructural overhaul. We present the design of RaaS, introduce its components, and evaluate a prototype based on RaaS.
【Keywords】: computer centres; telecommunication network routing; COTS hardware; RaaS; commercial off-the-shelf; data center; human resource deployment; labor-intensive ticketing process; route control customization; routing-as-a-service; tenant-directed route control; Data structures; IP networks; Measurement; Memory management; Protocols; Routing; Servers
【Paper Link】 【Pages】:1395-1403
【Authors】: Yong Cui ; Hongyi Wang ; Xiuzhen Cheng
【Abstract】: Unbalanced traffic demands of different data center applications are an important issue in designing Data center networks (DCNs). In this paper, we present our exploratory investigation of utilizing wireless transmissions in DCNs. Our work aims to solve the congestion problem caused by a few hot nodes to improve the global performance. We model the wireless transmissions in a DCN by considering both the wireless interference and the adaptive transmission rate. Moreover, both throughput and job completion time are taken into account to evaluate the impact of wireless transmissions on the global performance. Based on this model, we formulate the channel allocation in wireless DCNs as an optimization problem and design a genetic algorithm (GA) based approach to address it. To demonstrate the effectiveness of wireless transmissions as well as our GA-based algorithm in a wireless DCN, extensive simulation study is carried out and the results validate our design.
【Keywords】: channel allocation; computer centres; data communication; genetic algorithms; interference suppression; radio networks; DCN; adaptive transmission rate; channel allocation; genetic algorithm; optimization; wireless data center networks; wireless interference; wireless transmissions; Channel allocation; Genetic algorithms; Interference; Servers; Throughput; Transmitting antennas; Wireless communication
【Paper Link】 【Pages】:1404-1412
【Authors】: Nam Tuan Nguyen ; Guanbo Zheng ; Zhu Han ; Rong Zheng
【Abstract】: Each wireless device has its unique fingerprint, which can be utilized for device identification and intrusion detection. Most existing literature employs supervised learning techniques and assumes that the number of devices is known. In this paper, based on device-dependent, channel-invariant radio-metrics, we propose a non-parametric Bayesian method to detect the number of devices as well as to classify multiple devices in an unsupervised, passive manner. Specifically, the infinite Gaussian mixture model is used and a modified collapsed Gibbs sampling method is proposed. Sybil attacks and Masquerade attacks are investigated. We demonstrate the effectiveness of the proposed method with both simulation data and experimental measurements obtained from USRP2 and Zigbee devices.
【Keywords】: Bayes methods; Gaussian processes; Zigbee; radiocommunication; telecommunication security; Masquerade attacks; Sybil attacks; USRP2; Zigbee device; device dependent channel invariant radio metrics; device fingerprinting; device identification; infinite Gaussian mixture model; intrusion detection; modified collapsed Gibbs sampling method; nonparametric Bayesian method; unsupervised passive manner; wireless security; Bayesian methods; Data models; Feature extraction; Measurement; Radio transmitters; Receivers; Wireless communication
【Paper Link】 【Pages】:1413-1421
【Authors】: Qian Wang ; Ping Xu ; Kui Ren ; Xiang-Yang Li
【Abstract】: Anti-jamming communication without pre-shared secrets has gained increasing research interest recently and is commonly tackled by utilizing the technique of uncoordinated frequency hopping (UFH). Existing research, however, is almost entirely based on ad hoc designs of frequency-hopping strategies and lacks theoretical foundations for scheme design and performance evaluation. To fill this gap, this paper introduces online optimization theory into the solution and, for the first time, makes thorough quantitative performance characterization possible for UFH-based anti-jamming communications. Specifically, we propose an efficient online UFH algorithm achieving asymptotic optimality and analytically prove its optimality under different message coding scenarios. Extensive simulative evaluations are conducted to validate our theoretical analysis under both oblivious and adaptive jamming strategies.
【Keywords】: frequency hop communication; jamming; ad hoc design; adaptive jamming strategy; anti-jamming wireless communication; frequency hopping strategy; jamming communication; message coding; online optimization theory; optimality; uncoordinated frequency hopping; Algorithm design and analysis; Encoding; Heuristic algorithms; Jamming; Receivers; Spread spectrum communication; Time frequency analysis
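The basic UFH rendezvous dynamic is easy to simulate; the channel count is hypothetical and jamming is ignored, so this shows only the ~C-slot expected meeting time that any uncoordinated strategy must pay:

```python
import random

def slots_until_rendezvous(num_channels, rng):
    # Uncoordinated frequency hopping: sender and receiver independently
    # hop to uniformly random channels each slot; a fragment gets through
    # only in slots where they happen to coincide.
    slots = 0
    while True:
        slots += 1
        if rng.randrange(num_channels) == rng.randrange(num_channels):
            return slots

rng = random.Random(3)
trials = 2000
mean_slots = sum(slots_until_rendezvous(10, rng) for _ in range(trials)) / trials
print(mean_slots)  # geometric with success probability 1/C: mean close to C = 10
```

An online UFH algorithm of the kind the paper proposes adapts the sender's channel distribution against the jammer instead of hopping uniformly, which is where the optimality analysis comes in.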
【Paper Link】 【Pages】:1422-1430
【Authors】: Qian Wang ; Hai Su ; Kui Ren ; Kwangjo Kim
【Abstract】: Recently, there has been great interest in physical layer security techniques that exploit the randomness of wireless channels for securely extracting cryptographic keys. Several interesting approaches have been developed and demonstrated for their feasibility. The state-of-the-art, however, still has much room for improving their practicality. This is because i) the key bit generation rate supported by most existing approaches is very low, which significantly limits their practical usage given the intermittent connectivity in mobile environments; ii) existing approaches suffer from scalability and flexibility issues, i.e., they cannot be directly extended to support efficient group key generation and are not suited to static environments. With these observations in mind, we present a new secret key generation approach that utilizes the uniformly distributed phase information of channel responses to extract shared cryptographic keys under narrowband multipath fading models. The proposed approach enjoys a high key bit generation rate due to its efficient introduction of multiple randomized phase information within a single coherence time interval as the keying sources. The proposed approach also provides scalability and flexibility because it relies only on the transmission of periodical extensions of unmodulated sinusoidal beacons, which allows effective accumulation of channel phases across multiple nodes. The proposed scheme is thoroughly evaluated through both analytical and simulation studies. Compared to existing work that focuses on pairwise key generation, our approach is highly scalable and can improve the analytical key bit generation rate by a couple of orders of magnitude.
【Keywords】: cryptography; fading channels; multipath channels; radio networks; telecommunication security; channel phase randomness; channel response; coherence time interval; cryptographic key; key bit generation rate; mobile environment; narrowband multipath fading model; physical layer security; secret key generation; unmodulated sinusoidal beacons; wireless channel; wireless network; Coherence; Fading; Maximum likelihood estimation; Protocols; Quantization; Steady-state; Wireless communication
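Phase-based key extraction can be sketched by quantizing a shared (reciprocal) channel phase at both ends; the noise level and quantization width are hypothetical assumptions, and a real scheme adds reconciliation to repair the occasional sector-boundary disagreement:

```python
import math, random

def quantize(phase, bits):
    # Map a phase in [0, 2*pi) to one of 2**bits uniform sectors; the
    # sector index contributes `bits` key bits per channel sample.
    sectors = 1 << bits
    return int((phase % (2 * math.pi)) / (2 * math.pi) * sectors) % sectors

rng = random.Random(11)
channel_phases = [rng.uniform(0, 2 * math.pi) for _ in range(64)]
sigma = 0.01  # small, independent phase-estimation noise at each end

alice = [quantize(p + rng.gauss(0, sigma), 2) for p in channel_phases]
bob = [quantize(p + rng.gauss(0, sigma), 2) for p in channel_phases]
agreement = sum(a == b for a, b in zip(alice, bob)) / len(alice)
print(agreement)  # samples near sector boundaries may disagree
```

Because the phase is uniformly distributed, each sample yields near-uniform key bits; an eavesdropper at a different location sees an independent channel phase and learns nothing about the sector indices.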
【Paper Link】 【Pages】:1431-1439
【Authors】: Udi Ben-Porat ; Anat Bremler-Barr ; Hanoch Levy ; Bernhard Plattner
【Abstract】: Channel-aware schedulers of modern wireless networks - such as the popular Proportional Fairness Scheduler (PFS) - improve throughput performance by exploiting channel fluctuations while maintaining fairness among the users. To simplify the analysis, PFS was introduced and extensively investigated in a model where frame losses do not occur, which is of course not the case in practical wireless networks. Recent studies have focused on the efficiency of various implementations of PFS in a realistic model where frame losses can occur. In this work we show that the common straightforward adaptation of PFS to frame losses exposes the system to a malicious attack (which can alternatively be caused by malfunctioning user equipment) that can drastically degrade the performance of innocent users. We analyze the factors behind the vulnerability of the system and propose a modification of PFS, designed for the frame-loss model, that is resilient to such an attack while maintaining the fairness properties of the original PFS.
【Keywords】: radio networks; scheduling; telecommunication security; PFS; channel aware schedulers; channel fluctuations; fairness property; frame losses; malfunctioning user equipment; malicious attack; modern wireless networks; popular proportional fairness scheduler; retransmission attacks; throughput performance; vulnerability; Analytical models; Electronic mail; Propagation losses; Signal to noise ratio; Steady-state; Throughput; Wireless networks
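The loss-free PFS baseline that the paper starts from is the classical rule "serve the user with the largest instantaneous rate divided by its EWMA throughput"; the rate trace below is synthetic, and the paper's contribution is a loss-resilient modification not shown here:

```python
def proportional_fair(rate_trace, tau=10.0):
    # Each slot, serve the user maximizing rates[k] / avg[k], then update
    # every user's exponentially weighted average throughput.
    n = len(rate_trace[0])
    avg = [1e-6] * n        # tiny positive start to avoid division by zero
    served = [0.0] * n
    for rates in rate_trace:
        pick = max(range(n), key=lambda k: rates[k] / avg[k])
        for k in range(n):
            got = rates[k] if k == pick else 0.0
            avg[k] = (1 - 1 / tau) * avg[k] + got / tau
            served[k] += got
    return served

# Two users whose channel peaks alternate: PFS rides each user's peaks.
trace = [[2.0, 1.0] if t % 2 == 0 else [1.0, 2.0] for t in range(10)]
print(proportional_fair(trace))
```

The vulnerability the paper studies arises precisely from this averaging: a user who deliberately fails frames keeps its average low and its priority high, starving well-behaved users.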
【Paper Link】 【Pages】:1440-1448
【Authors】: Eitan Altman ; Philippe Nain ; Adam Shwartz ; Yuedong Xu
【Abstract】: The paper has two objectives. The first is to study rigorously the transient behavior of some peer-to-peer (P2P) networks in which information is replicated and disseminated according to epidemic-like dynamics. The second is to use the insight gained from this analysis to predict how effective measures taken against P2P networks are. We first introduce a stochastic model which extends a classical epidemic model, and characterize the P2P swarm behavior in the presence of free-riding peers. We then study a second model in which a peer initiates a contact with another peer chosen randomly. In both cases the network is shown to exhibit phase transitions: a small change in the parameters causes a large change in the behavior of the network. We show, in particular, how phase transitions affect the measures that content providers take against P2P networks distributing non-authorized music or books, and how efficient such counter-measures are.
【Keywords】: peer-to-peer computing; stochastic processes; transient analysis; P2P networks; classical epidemic model; peer-to-peer network; phase transitions; stochastic model; transient behaviors; Approximation methods; Availability; Equations; Internet; Markov processes; Peer to peer computing; Transient analysis
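A toy illustration of the epidemic-like dissemination dynamics the abstract refers to, under the standard logistic (SI) assumption that the file spreads in proportion to contacts between holders and non-holders. This is a generic textbook sketch with our own names, not the paper's stochastic model.

```python
# Generic logistic-spread sketch (illustrative, not the paper's model):
# dx/dt = contact_rate * x * (N - x), integrated with Euler steps.

def epidemic_growth(n_peers, infected0, contact_rate, t_end, dt=0.01):
    """Return the fraction of peers holding the file at time t_end."""
    x = float(infected0)
    for _ in range(int(t_end / dt)):
        x += dt * contact_rate * x * (n_peers - x)  # holder/non-holder contacts
    return x / n_peers
```

With 100 peers and a single initial seed, the swarm saturates quickly: the growth is slow at first, explosive in the middle, and slow again near full coverage, which is the qualitative shape behind the phase-transition discussion above.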
【Paper Link】 【Pages】:1449-1457
【Authors】: Can Zhao ; Xiaojun Lin ; Chuan Wu
【Abstract】: Peer-to-Peer (P2P) streaming technologies can take advantage of the upload capacity of clients, and hence can scale to large content distribution networks at lower cost. A fundamental question for P2P streaming systems is the maximum streaming rate that all users can sustain. Prior works have studied the optimal streaming rate for a complete network, where every peer is assumed to communicate with all other peers. This is, however, an impractical assumption in real systems. In this paper, we are interested in the achievable streaming rate when each peer can only connect to a small number of neighbors. We show that even with a random peer selection algorithm and uniform rate allocation, as long as each peer maintains Ω(log N) downstream neighbors, where N is the total number of peers in the system, the system can asymptotically achieve a streaming rate that is close to the optimal streaming rate of a complete network. We then extend our analysis to multi-channel P2P networks, and we study the scenario where “helpers” from channels with excessive upload capacity can help peers in channels with insufficient upload capacity. We show that by letting each peer select Ω(log N) neighbors randomly from either the peers in the same channel or from the helpers, we can achieve a close-to-optimal streaming capacity region. Simulation results are provided to verify our analysis.
【Keywords】: media streaming; peer-to-peer computing; content distribution network; distributed control; multichannel P2P network; optimal streaming rate; peer-to-peer streaming technology; random peer selection algorithm; sparsely-connected P2P system; streaming capacity; uniform rate allocation; Capacity planning; Clustering algorithms; Peer to peer computing; Random variables; Resource management; Servers; Streaming media
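The random Ω(log N) neighbor-selection rule analyzed above can be sketched as follows; the constant factor `c` and the function names are our illustrative choices, not the paper's.

```python
# Illustrative sketch: each peer keeps about c * log(N) randomly chosen
# downstream neighbors, sampled uniformly from the other peers.
import math
import random

def pick_neighbors(peer, all_peers, c=2, rng=None):
    """Select c * ceil(ln N) distinct neighbors uniformly at random
    (fewer if the swarm is very small)."""
    rng = rng or random.Random(0)
    others = [p for p in all_peers if p != peer]
    k = min(len(others), c * max(1, math.ceil(math.log(len(all_peers)))))
    return rng.sample(others, k)
```

For N = 100 peers this yields 2 * ceil(ln 100) = 10 neighbors per peer, a sparse overlay compared to the 99 links of a complete network.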
【Paper Link】 【Pages】:1458-1466
【Authors】: Michael J. Neely ; Leana Golubchik
【Abstract】: We consider a peer-to-peer network where nodes can send and receive files amongst their peers. File requests are generated randomly, and each new file can correspond to a different subset of peers that already have the file and hence can assist in the download. Nodes that help others are rewarded by being able to download more. The goal is to design a control algorithm that allocates requests and schedules transmissions to maximize overall throughput-utility, subject to meeting “tit-for-tat” constraints that incentivize participation. Our algorithm is shown to operate efficiently on networks with arbitrary traffic and channel sample paths, including wireless networks whose capacity can be significantly extended by the peer-to-peer functionality.
【Keywords】: optimisation; peer-to-peer computing; radio networks; telecommunication traffic; control algorithm design; dynamic peer-to-peer networks; file downloading; file generation; request allocation; telecommunication traffic; tit-for-tat constraint; utility optimization; wireless networks; Cloud computing; Downlink; Manganese; Optimization; Peer to peer computing; Wireless communication; Lyapunov drift; Queueing analysis; optimization
【Paper Link】 【Pages】:1467-1475
【Authors】: François Baccelli ; Jean Bolot
【Abstract】: The defining characteristic of wireless and mobile networking is user mobility, and related to it is the ability for the network to capture information on where users are located and how users change location over time. Information about location is becoming critical, and therefore valuable, for an increasingly larger number of location-based or location-aware services. One key open question, however, is how valuable exactly this information is. Our goal in this paper is to develop an analytic framework, namely models and the techniques to solve them, to help quantify the economics of location information. Our aim is to derive models which can be used as decision making tools for entities interested in or involved in the location data economics chain, such as mobile operators or providers of location aware services (mobile advertising, etc). We consider in particular the fundamental problem of quantifying the value of different granularities of location information, for example how much more valuable is it to know the GPS location of a mobile user compared to only knowing the access point, or the cell tower, that the user is associated with. We illustrate our approach by considering what is arguably the quintessential location-based service, namely proximity-based advertising. We make three main contributions. First, we develop several novel models, based on stochastic geometry, which capture the location-based economic activity of mobile users with diverse sets of preferences or interests. Second, we derive closed-form analytic solutions for the economic value generated by those users. Third, we augment the models to consider uncertainty about the users' location, and derive expressions for the economic value generated with different granularities of location information. To our knowledge, this paper is the first one to present and analyze this class of economic models.
【Keywords】: Global Positioning System; decision making; mobility management (mobile radio); stochastic processes; GPS location; decision making tools; economic value; location data economic chain; location-aware services; location-based economic activity; mobile networking; mobile operators; mobile user location data; proximity-based advertising; quintessential location-based service; stochastic geometry; wireless networking; Advertising; Analytical models; Biological system modeling; Data models; Economics; Internet; Mobile communication
【Paper Link】 【Pages】:1476-1484
【Authors】: Devu Manikantan Shila ; Yu Cheng ; Tricha Anjali
【Abstract】: How much information can one send through a random ad hoc network of n nodes if it is overlaid with a cellular architecture of m base stations? This network model is commonly referred to as a hybrid wireless network, and our paper analyzes the above question by characterizing its throughput capacity. Although several research efforts related to throughput capacity exist in the area of hybrid wireless networks, most of these solutions under-explore the capacity analysis. In particular, their results indicate that one can realize at most a logarithmic, or even no, capacity gain over pure ad hoc networks when m scales slower than some threshold. This unsatisfying capacity gain is due to the fact that the base stations were not properly exploited in the capacity analysis. Moreover, these research efforts also assume a one-hop wireless uplink between a node and its associated base station; for power-constrained wireless nodes, this assumption is clearly unrealistic. In this paper, we establish bounds on capacity and delay by resolving these issues, and at the heart of our analysis lies a simple routing policy known as the same-cell routing policy. Our findings stipulate that whether m = O(n/log n) or Ω(n/log n), each node can realize a throughput that scales, sublinearly or linearly, with m. This is a significant result as opposed to previous efforts, which claim that if m grows slower than some threshold, the benefit of augmenting the original ad hoc network with base stations is insignificant. Our analysis also shows that for a maximum per-node throughput Λ(n, m), the average end-to-end delay in a hybrid network can be bounded by Θ(Λ(n, m)n/m), which has an inverse relationship to m.
【Keywords】: ad hoc networks; cellular radio; telecommunication network routing; ad hoc network; base stations; cellular architecture; delay analysis; end-to-end delay; hybrid wireless networks; multihop uplinks; one-hop wireless uplink; power-constrained wireless nodes; same-cell routing policy; throughput capacity analysis; Ad hoc networks; Base stations; Delay; Routing; Throughput; Wireless networks
【Paper Link】 【Pages】:1485-1493
【Authors】: Pan Li ; Miao Pan ; Yuguang Fang
【Abstract】: Network capacity investigation has been intensive in the past few years, and a large body of work has appeared in the literature. However, so far most of the effort has been devoted to two-dimensional wireless networks only. With the great development of wireless technologies, wireless networks are envisioned to extend from two-dimensional to three-dimensional space. In this paper, we investigate for the first time the throughput capacity of 3D regular ad hoc networks (RANETs) and 3D heterogeneous ad hoc networks (HANETs) by employing a generalized physical model. In 3D RANETs, we assume that the nodes are regularly placed, while in 3D HANETs, we consider that the nodes are distributed according to a general Nonhomogeneous Poisson Process (NPP). We find both lower and upper bounds for both types of networks in a broad power propagation regime, i.e., when the path loss exponent is no less than 2.
【Keywords】: ad hoc networks; stochastic processes; 3D heterogeneous ad hoc network; 3D regular ad hoc network; general nonhomogeneous Poisson process; generalized physical model; network capacity; three-dimensional wireless ad hoc network; Ad hoc networks; Interference; Receivers; Three dimensional displays; Throughput; Transmitters; Wireless networks
【Paper Link】 【Pages】:1494-1502
【Authors】: George Iosifidis ; Iordanis Koutsopoulos ; Georgios Smaragdakis
【Abstract】: Recent technological advances have rendered storage a cheap and widely available resource. Yet, there exist only a few examples in networking that consider storage for enhancing data transfer capabilities. In this paper we study networks with time-varying link capacities and analyze the impact of node storage on their capability to convey data from source to destination. We show that storage capacity is quite beneficial in terms of the amount of data that can be pushed from the source to the destination within a given time horizon. Equivalently, storage can be used to reduce the delay incurred in delivering a certain amount of data. For linear networks, we show that this performance improvement depends on the relative patterns of link capacity variations. We extend our study to general networks and use a novel method that iteratively updates the minimum cut of the time-expanded graph in a constructive manner, in the sense that the process also reveals the storage capacity allocation in the network. Next, we incorporate routing into our methodology and derive a joint storage capacity management and routing policy that maximizes the amount of data transferred to the destination. This policy stems from the solution of the maximum flow problem defined for the dynamic network over a certain time period, using the ε-relaxation solution method. The latter is amenable to distributed implementation, which is a very desirable property for large-scale modern networks that operate without central control.
【Keywords】: channel allocation; channel capacity; data communication; delays; storage management; telecommunication network routing; time-varying networks; ε-relaxation solution method; data transfer capabilities; dynamic network; end-to-end delay; linear networks; network routing; node storage; storage capacity allocation; storage capacity management; time varying networks; Delay; Heuristic algorithms; Joining processes; Joints; Peer to peer computing; Routing; Upper bound
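The time-expanded-graph view in the abstract can be illustrated on a two-hop line network s → r → d: replicate the relay once per time slot, add storage edges between consecutive copies, and run a max-flow computation. This is a hedged sketch under our own naming, using textbook Edmonds-Karp rather than the paper's iterative min-cut update or ε-relaxation method.

```python
# Illustrative sketch: max deliverable data over T slots in a line network,
# with relay storage modeled as (r,t) -> (r,t+1) edges in a time-expanded
# graph. Edmonds-Karp max-flow on a nested dict cap[u][v] = capacity.
from collections import deque

def max_flow(cap, s, t):
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:        # BFS for an augmenting path
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                     # recover the path, find bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)
        for u, v in path:                   # augment along residual edges
            cap[u][v] -= b
            cap.setdefault(v, {}).setdefault(u, 0)
            cap[v][u] += b
        flow += b

def line_network_flow(c_sr, c_rd, storage):
    """c_sr[t], c_rd[t]: per-slot link capacities; storage: relay buffer."""
    cap = {}
    for t in range(len(c_sr)):
        cap.setdefault('s', {})[('r', t)] = c_sr[t]
        cap.setdefault(('r', t), {})['d'] = c_rd[t]
        if t + 1 < len(c_sr):
            cap[('r', t)][('r', t + 1)] = storage  # carry data to next slot
    return max_flow(cap, 's', 'd')
```

With capacities [3, 0] on the first hop and [1, 2] on the second, zero storage delivers only 1 unit, while a buffer of 2 lets the relay absorb the slot-0 surplus and deliver 3 - the kind of gain from matching link-capacity variation patterns that the abstract describes.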
【Paper Link】 【Pages】:1503-1511
【Authors】: Yang Wang ; Xiaojun Cao ; Yi Pan
【Abstract】: In OFDM-based optical networks, multiple subcarriers can be allocated to accommodate traffic demands of various sizes. By using the multi-carrier modulation technique, subcarriers for the same node-pair can overlap in the spectrum domain. Compared to traditional wavelength routed networks (WRNs), the OFDM-based Spectrum-sliced Elastic Optical Path (SLICE) network has higher spectrum efficiency due to its finer granularity and frequency-resource saving. In this work, for the first time, we comprehensively study the routing and spectrum allocation (RSA) problem in the SLICE network. After proving the NP-hardness of the static RSA problem, we formulate it as an Integer Linear Program (ILP) to minimize the maximum number of sub-carriers required on any fiber of a SLICE network. We then analyze the lower/upper bounds on the sub-carrier number in a network with general or specific topology. We also propose two efficient algorithms, namely, the balanced load spectrum allocation (BLSA) algorithm and the shortest path with maximum spectrum reuse (SPSR) algorithm, to minimize the required sub-carrier number in a SLICE network. The results show that the proposed algorithms match the analysis and closely approximate the optimal solutions obtained with the ILP model.
【Keywords】: OFDM modulation; computational complexity; integer programming; linear programming; optical fibre networks; optical modulation; telecommunication network routing; telecommunication traffic; BLSA algorithm; ILP model; NP-hard problem; OFDM-based optical networks; RSA problem; SLICE network; balanced load spectrum allocation algorithm; frequency-resource saving; integer linear programming; lower bounds; multicarrier modulation technique; routing and spectrum allocation problem; spectrum efficiency; spectrum-sliced elastic optical path networks; upper bounds; wavelength routed networks; Frequency domain analysis; Indexes; OFDM; Optical fiber networks; Resource management; Routing; Upper bound
【Paper Link】 【Pages】:1512-1520
【Authors】: Matthew Johnston ; Hyang-Won Lee ; Eytan Modiano
【Abstract】: This paper presents a scheme in which a dedicated backup network is designed to provide protection from random link failures. Upon a link failure in the primary network, traffic is rerouted through a preplanned path in the backup network. We introduce a novel approach for dealing with random link failures, in which probabilistic survivability guarantees are provided to limit capacity over-provisioning. We show that the optimal backup routing strategy in this respect depends on the reliability of the primary network. Specifically, as primary links become less likely to fail, the optimal backup networks employ more resource sharing amongst backup paths. We apply results from the field of robust optimization to formulate an ILP for the design and capacity provisioning of these backup networks. We then propose a simulated annealing heuristic to solve this problem for large-scale networks, and present simulation results that verify our analysis and approach.
【Keywords】: computer networks; simulated annealing; backup network design; optimal backup networks; optimal backup routing; over-provisioning; probabilistic survivability guarantees; random failures; random link failures; resource sharing; robust optimization approach; simulated annealing; Capacity planning; Optimization; Probabilistic logic; Robustness; Routing; Uncertainty
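A generic simulated-annealing skeleton of the kind the abstract proposes for large-scale instances; the cost function and neighbor move in the usage below are toy stand-ins, not the paper's backup-provisioning formulation.

```python
# Generic simulated-annealing skeleton (illustrative, not the paper's):
# accept improving moves always, worsening moves with probability
# exp(-delta / temperature), and cool the temperature geometrically.
import math
import random

def anneal(cost, neighbor, x0, t0=1.0, cooling=0.95, steps=500, seed=0):
    rng = random.Random(seed)
    x, best = x0, x0
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        delta = cost(y) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y                     # move accepted
        if cost(x) < cost(best):
            best = x                  # track the best solution seen
        t *= cooling
    return best
```

For instance, minimizing the toy cost (x - 3)² over the integers with ±1 moves converges to the neighborhood of 3; in the paper's setting the state would be a capacity assignment and the cost would encode provisioning cost and survivability constraints.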
【Paper Link】 【Pages】:1521-1529
【Authors】: Pankaj K. Agarwal ; Alon Efrat ; Shashidhara K. Ganjugunte ; David Hay ; Swaminathan Sankararaman ; Gil Zussman
【Abstract】: Telecommunications networks, and in particular optical WDM networks, are vulnerable to large-scale failures of their physical infrastructure, resulting from physical attacks (such as an Electromagnetic Pulse attack) or natural disasters (such as solar flares, earthquakes, and floods). Such events happen at specific geographical locations and disrupt specific parts of the network but their effects are not deterministic. Therefore, we provide a unified framework to model the network vulnerability when the event has a probabilistic nature, defined by an arbitrary probability density function. Our framework captures scenarios with a number of simultaneous attacks, in which network components consist of several dependent subcomponents, and in which either a 1+1 or a 1:1 protection plan is in place. We use computational geometric tools to provide efficient algorithms to identify vulnerable points within the network under various metrics. Then, we obtain numerical results for specific backbone networks, thereby demonstrating the applicability of our algorithms to real-world scenarios. Our novel approach allows for identifying locations which require additional protection efforts (e.g., equipment shielding). Overall, the paper demonstrates that using computational geometric techniques can significantly contribute to our understanding of network resilience.
【Keywords】: computational geometry; optical fibre networks; probability; telecommunication security; wavelength division multiplexing; arbitrary probability density function; backbone network; computational geometric tool; natural disaster; network vulnerability; optical WDM network resiliency; physical attack; probabilistic geographical failure; telecommunications network; Approximation methods; Complexity theory; Compounds; Loss measurement; Optical fiber networks; Probabilistic logic; Resilience; Network survivability; computational geometry; geographic networks; network protection; optical networks
【Paper Link】 【Pages】:1530-1538
【Authors】: Lin Liu ; Zhenghao Zhang ; Yuanyuan Yang
【Abstract】: Optical switches are widely considered the most promising candidate for providing ultra-high-speed interconnections. Due to the difficulty of implementing all-optical buffers, optical switches with electronic buffers have been proposed recently. Among these switches, the Optical Cut-Through (OpCut) switch has the capability to achieve low latency and minimize optical-electronic-optical (O/E/O) conversions. In this paper, we consider packet scheduling in this switch with wavelength division multiplexing (WDM). Our goal is to maximize throughput while maintaining packet order. We prove, via a reduction from the set packing problem, that the optimal scheduling problem is NP-hard and inapproximable in polynomial time within any constant factor; we then present an approximation algorithm that maintains packet order and approximates the optimal schedule within a factor of √2Nk with regard to the number of packets transmitted, where N is the switch size and k is the number of wavelengths multiplexed on each fiber. This result is in line with the best known approximation algorithm for set packing. Based on the approximation algorithm, we also give practical schedulers that can be implemented in fast optical switches. Simulation results show that the schedulers achieve performance close to the ideal WDM output-queued switch in terms of packet delay under various traffic models.
【Keywords】: approximation theory; computational complexity; optical communication equipment; optical switches; optimisation; wavelength division multiplexing; NP-hard problem; approximation algorithm; electronic buffer; low latency optical switch; optical cut through switch; optimal scheduling; packet scheduling; ultra high speed interconnection; wavelength division multiplexing; Indium tin oxide; Optical buffering; Optical fibers; Optical transmitters; Optical switch; approximate algorithm; electronic buffer; packet scheduling; wavelength division multiplexing (WDM)
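Since the abstract relates the scheduling problem to set packing, the standard greedy set-packing heuristic may be useful background; it is general textbook material, not the paper's scheduler.

```python
# Textbook greedy set packing (background, not the paper's algorithm):
# repeatedly take the largest remaining set disjoint from those chosen.

def greedy_set_packing(sets):
    """Return a disjoint subfamily, considering larger sets first."""
    chosen, used = [], set()
    for s in sorted(sets, key=len, reverse=True):
        if not (set(s) & used):       # keep only sets disjoint from the pack
            chosen.append(s)
            used |= set(s)
    return chosen
```

In the scheduling analogy, each candidate set of simultaneously transmittable packets competes for shared resources (outputs, wavelengths), and a packing corresponds to a conflict-free schedule.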
【Paper Link】 【Pages】:1539-1547
【Authors】: Kebin Liu ; Qiang Ma ; Xibin Zhao ; Yunhao Liu
【Abstract】: Existing approaches to diagnosing sensor networks are generally sink-based: they rely on actively pulling state information from all sensor nodes so as to conduct centralized analysis. However, sink-based diagnosis tools impose heavy communication overhead on traffic-sensitive sensor networks. Also, due to unreliable wireless communications, the sink often obtains incomplete and sometimes suspicious information, leading to highly inaccurate judgments. Even worse, we observe that it is often most difficult to obtain state information from the problematic or critical regions. To address these issues, we present the concept of self-diagnosis, in which each sensor joins the fault decision process. We design a series of novel fault detectors through which multiple nodes can cooperate in a diagnosis task. The fault detectors encode the diagnosis process as state transitions. Each sensor participates in the fault diagnosis by transiting the detector's current state to a new one based on local evidence and then passing the fault detector on to other nodes. Once sufficient evidence is gathered, the fault detector reaches the Accept state and outputs the final diagnosis report. We examine the performance of our self-diagnosis tool, called TinyD2, on a 100-node testbed.
【Keywords】: fault diagnosis; telecommunication network reliability; wireless sensor networks; TinyD2; centralized analysis; fault decision process; fault detector; fault diagnosis; large scale wireless sensor network; self-diagnosis tool; sensor nodes; sink-based diagnosis tool; state information; traffic sensitive sensor network; unreliable wireless communication; Debugging; Detectors; Fault detection; Fault diagnosis; Green products; Monitoring; Wireless sensor networks
【Paper Link】 【Pages】:1548-1556
【Authors】: Xin Miao ; Kebin Liu ; Yuan He ; Yunhao Liu ; Dimitris Papadias
【Abstract】: In wireless sensor networks (WSNs), diagnosis is a crucial and challenging task due to their distributed nature and stringent resources. Most previous approaches are supervised, relying on a-priori knowledge of network faults. On the other hand, our experience with GreenOrbs, a long-term large-scale WSN system, reveals the need for diagnosis in an agnostic manner. Specifically, in addition to predefined faults (i.e., those with known types and symptoms), silent failures, which are unknown beforehand, account for a large fraction of network performance degradation. Currently, there is no effective solution for silent failures because they are often diverse and highly system-related. In this paper, we propose Agnostic Diagnosis (AD), an online lightweight failure detection approach. AD is motivated by the fact that the system metrics (e.g., radio-on time, number of packets transmitted) of GreenOrbs sensors usually exhibit certain correlation patterns. Violations of such patterns indicate potential silent failures. We accordingly design a correlation graph, which systematically characterizes the internal correlations within a node. Silent failures are discovered by tracking the changes and anomalies of correlation graphs. We implement AD on a working WSN consisting of 330 nodes. Our experimental results demonstrate the advantages of AD in discovering silent failures, effectively expanding the capacity and scope of WSN diagnosis.
【Keywords】: correlation methods; failure analysis; wireless sensor networks; agnostic diagnosis; correlation graphs; online lightweight failure detection approach; wireless sensor networks; Correlation; Covariance matrix; Navigation; Silicon; Wireless sensor networks
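The correlation-tracking idea can be sketched with a single metric pair: learn the Pearson correlation between two node metrics from healthy history, then flag windows whose correlation deviates sharply. The threshold and function names are illustrative assumptions, not AD's actual detector.

```python
# Illustrative correlation-violation check (not AD's detector): compare a
# window's Pearson correlation against the value learned when healthy.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def is_silent_failure(healthy_r, xs, ys, tol=0.5):
    """Flag the window if the metric correlation drifts beyond tol."""
    return abs(pearson(xs, ys) - healthy_r) > tol
```

For example, radio-on time and packets transmitted normally move together; a window where they become anti-correlated would be flagged as a potential silent failure.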
【Paper Link】 【Pages】:1557-1565
【Authors】: Ling Ding ; Weili Wu ; James Willson ; Hongjie Du ; Wonjun Lee
【Abstract】: It is well-known that the application of directional antennas can help conserve bandwidth and reduce energy consumption in wireless networks. Thus, to achieve efficiency in wireless networks, we study a special virtual backbone (VB) using directional antennas, requiring that from any node to any other node in the network there exists at least one directional shortest path all of whose intermediate directions belong to the VB, named the Minimum rOuting Cost Directional VB (MOC-DVB). VBs have been well studied in the Unit Disk Graph (UDG) model. However, radio-wave-based communications in wireless networks may be interrupted by obstacles (e.g., buildings and mountains). Thus, in this paper, we model a network as a general directed graph. We prove that constructing a minimum MOC-DVB is NP-hard in a general directed graph and that, in terms of MOC-DVB size, there exists a lower bound that polynomial-time algorithms cannot attain. Therefore, we propose a distributed approximation algorithm for constructing an MOC-DVB with approximation ratio 1 + ln K + 2ln δD, where K is the number of antennas on each node and δD is the maximum direction degree in the network. Extensive simulations demonstrate that our constructed MOC-DVB is much more efficient in terms of size and routing cost compared to other VBs.
【Keywords】: computational complexity; directed graphs; directive antennas; polynomial approximation; radio networks; telecommunication network routing; MOC-DVB; bandwidth consumption; directional antennas; directional virtual backbones; distributed approximation algorithm; energy consumption; general directed graph; lower bound; minimum routing cost; minimum routing cost directional shortest path; polynomial-time approximation algorithm; radiowave based communications; unit disk graph; wireless networks; Approximation algorithms; Approximation methods; Digital video broadcasting; Directional antennas; Routing; Wireless networks
【Paper Link】 【Pages】:1566-1574
【Authors】: Dijun Luo ; Xiaojun Zhu ; Xiaobing Wu ; Guihai Chen
【Abstract】: In many applications of wireless sensor networks, a sensor node senses the environment and delivers the data to the sink via a single-hop or multi-hop path. Many systems use a tree rooted at the sink as the underlying routing structure. Since sensor nodes are energy-constrained, how to construct a good tree that prolongs the lifetime of the network is an important problem. We consider this problem in the scenario where nodes have different initial energy and can perform in-network aggregation. Previous work has proved that finding a maximum-lifetime tree among all feasible spanning trees is NP-complete. Since delay is also important in time-critical applications, and shortest path trees intuitively have short delay, it is desirable to find a shortest path tree with long lifetime. This paper studies the problem of maximizing the lifetime of data aggregation trees that are restricted to shortest path trees. We find that under this restriction the original problem is in P. We transform the problem into a general version of the semi-matching problem, and show that it can be solved by a min-cost max-flow approach in polynomial time. We also design a distributed solution. Simulation results show that our approach greatly improves the lifetime of the network and is especially competitive in dense networks.
【Keywords】: computational complexity; optimisation; polynomials; telecommunication network reliability; telecommunication network routing; trees (mathematics); wireless sensor networks; NP-complete; data aggregation tree; delay; in-network aggregation; min-cost max-flow approach; multi-hop path; network lifetime; polynomial time; routing structure; semi-matching problem; sensor node; shortest path aggregation tree; single-hop path; spanning tree; time-critical application; wireless sensor network; Delay; Energy consumption; Polynomials; Routing; Transforms; Vegetation; Wireless sensor networks
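A simplified sketch of the lifetime-aware shortest-path-tree idea: compute hop distances from the sink, then attach each node to the shortest-path parent candidate with the most residual energy. This greedy rule is our simplification for illustration; the paper's exact solution goes through semi-matching and min-cost max-flow.

```python
# Illustrative greedy variant (not the paper's optimal algorithm): BFS from
# the sink gives hop distances; each node then picks, among neighbors one
# hop closer to the sink, the parent with the largest residual energy.
from collections import deque

def lifetime_spt(adj, energy, sink):
    """adj: dict node -> list of neighbors; returns dict node -> parent."""
    dist = {sink: 0}
    order = deque([sink])
    while order:                           # BFS for hop distances
        u = order.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                order.append(v)
    parent = {}
    for v in sorted(dist, key=dist.get):
        if v == sink:
            continue
        cands = [u for u in adj[v] if dist[u] == dist[v] - 1]
        parent[v] = max(cands, key=lambda u: energy[u])  # energy-aware choice
    return parent
```

On a diamond topology where a leaf can reach the sink through either of two relays, the leaf attaches to the relay with more remaining energy while every path stays a shortest path.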
【Paper Link】 【Pages】:1575-1583
【Authors】: Paul Balister ; Béla Bollobás ; Animashree Anandkumar ; Alan S. Willsky
【Abstract】: The problem of designing policies for in-network function computation with minimum energy consumption subject to a latency constraint is considered. The scaling behavior of the energy consumption under the latency constraint is analyzed for random networks, where the nodes are uniformly placed in growing regions and the number of nodes goes to infinity. The special case of sum function computation and its delivery to a designated root node is considered first. A policy which achieves order-optimal average energy consumption in random networks subject to the given latency constraint is proposed. The scaling behavior of the optimal energy consumption depends on the path-loss exponent of wireless transmissions and the dimension of the Euclidean region where the nodes are placed. The policy is then extended to computation of a general class of functions which decompose according to maximal cliques of a proximity graph such as the k-nearest neighbor graph or the geometric random graph. The modified policy achieves order-optimal energy consumption albeit for a limited range of latency constraints.
【Keywords】: energy consumption; graph theory; random functions; telecommunication networks; wireless channels; Euclidean region; energy consumption; energy-latency tradeoff; geometric random graph; in-network function computation; k-nearest neighbor graph; latency constraint; proximity graph; random networks; root node; wireless transmissions; Computational modeling; Context; Electronic mail; Energy consumption; Markov processes; Routing; Wireless sensor networks; Euclidean random graphs; Function computation; latency-energy tradeoff; minimum broadcast problem
【Paper Link】 【Pages】:1584-1592
【Authors】: Yi Xu ; Wenye Wang
【Abstract】: Energy-efficient communication is a critical research problem in large-scale multihop wireless networks because of the limited energy supplied by batteries. In this paper we investigate the minimum energy required to fulfill various information delivery goals that correspond to the major communication paradigms in large wireless networks. We characterize the minimum energy requirement in two steps. We first derive lower bounds on the energy consumption of all possible solutions that deliver the information as required. We then design routing schemes that accomplish the information delivery tasks using an amount of energy comparable to the lower bounds. Our work provides a fundamental understanding of energy needs and efficient solutions for energy usage in major communication scenarios, contributing to the rational dimensioning and wise utilization of energy resources in large wireless networks.
【Keywords】: radio networks; telecommunication network routing; battery; energy consumption; energy efficient communication; information delivery goals; information delivery tasks; large-scale multihop wireless networks; limited energy supply; major communication paradigms; minimum energy expense; minimum energy requirement; rational dimensioning; routing schemes; Energy consumption; Interference; Noise; Receivers; Routing; Wireless networks
【Paper Link】 【Pages】:1593-1601
【Authors】: Tao Jin ; Guevara Noubir ; Bo Sheng
【Abstract】: The high density of WiFi Access Points and the large unlicensed RF bandwidth over which they operate make them good candidates to alleviate cellular networks' limitations. However, maintaining connectivity through WiFi depletes a mobile phone's battery in a very short time. We propose WiZi-Cloud, a system that utilizes dual WiFi-ZigBee radios on mobile phones and Access Points, supported by WiZi-Cloud protocols, to achieve ubiquitous connectivity, high energy efficiency, and real-time intra-device/inter-AP handover that is transparent to applications. WiZi-Cloud runs mostly on commodity hardware such as Android phones and OpenWrt-capable access points. Our extensive set of experiments demonstrates that, for maintaining connectivity, WiZi-Cloud achieves more than a factor of 11 improvement in energy consumption compared with energy-optimized WiFi, and a factor of 7 compared with GSM. WiZi-Cloud has better coverage than WiFi and a low delay, resulting in a good Mean Opinion Score (MOS) of 4.26 for a US cross-country VoIP call.
【Keywords】: Internet; Zigbee; cellular radio; wireless LAN; WiFi access points; WiZi-cloud protocols; application-transparent dual ZigBee-WiFi radios; cellular network; dual WiFi-ZigBee radio; low power Internet access; mobile phones; unlicensed RF bandwidth; Bluetooth; Hardware; IEEE 802.11 Standards; IP networks; Mobile handsets; Protocols; Zigbee
【Paper Link】 【Pages】:1602-1610
【Authors】: Maria Gorlatova ; Aya Wallwater ; Gil Zussman
【Abstract】: Recent advances in energy harvesting materials and ultra-low-power communications will soon enable the realization of networks composed of energy harvesting devices. These devices will operate using very low ambient energy, such as indoor light energy. We focus on characterizing the energy availability in indoor environments and on developing energy allocation algorithms for energy harvesting devices. First, we present results of our long-term indoor radiant energy measurements, which provide important inputs required for algorithm and system design (e.g., determining the required battery sizes). Then, we focus on algorithm development, which requires nontraditional approaches, since energy harvesting shifts the nature of energy-aware protocols from minimizing energy expenditure to optimizing it. Moreover, in many cases, different energy storage types (rechargeable battery and a capacitor) require different algorithms. We develop algorithms for determining time fair energy allocation in systems with predictable energy inputs, as well as in systems where energy inputs are stochastic.
【Keywords】: energy harvesting; energy measurement; energy storage; indoor communication; low-power electronics; energy availability; energy aware protocol; energy storage; long-term indoor radiant energy measurement; low-power energy harvesting device networking; time fair energy allocation; ultra low-power communication; Algorithm design and analysis; Availability; Energy harvesting; Energy measurement; Energy storage; Prediction algorithms; Radiation effects; Energy harvesting; energy-aware algorithms; indoor radiant energy; measurements; ultra-low-power networking
【Paper Link】 【Pages】:1611-1619
【Authors】: Longjiang Guo ; Raheem A. Beyah ; Yingshu Li
【Abstract】: Wireless sensors are attached to all kinds of mobile devices and entities, such as mobile phones, PDAs, vehicles, robots, and animals. This gives rise to Mobile Wireless Sensor Networks (MWSNs) with very dynamic topologies and loose connectivity that depend on the mobility of the devices. Data collection from these mobile sensors is a great challenge given their volatile topologies, loose connectivity, and limited buffer storage. This paper proposes a stochastic compressive data collection protocol for MWSNs named SMITE. SMITE consists of three parts: random collector election; stochastic direct transmission from common nodes to collectors when common nodes are within the collectors' transmission range; and angle transmission from collectors to the mobile sink when collectors have gathered enough data, using a predictive method. The collectors use Bloom filters to compress the received data. The protocol's performance is analyzed theoretically. The analytic results show that data from the common nodes can be gathered to the collectors with high probability, and that data gathered at the collectors can likewise be forwarded to the mobile sink with high probability. Simulations are carried out for performance evaluation. The simulation results show that SMITE significantly outperforms state-of-the-art solutions such as DFT-MSN, SCAR and Sidewinder in terms of delivery ratio, transmission overhead, and time delay.
【Keywords】: data compression; filtering theory; mobility management (mobile radio); probability; protocols; stochastic processes; telecommunication network topology; wireless sensor networks; PDA; animal; bloom filter; data compression; dynamic topology; mobile device mobility; mobile phone; mobile sink; mobile wireless sensor network; performance evaluation; predictive method; probability; random collector election; robot; stochastic compressive data collection protocol; stochastic direct transmission; vehicle; Context; Mobile communication; Mobile computing; Protocols; Sensors; Topology; Wireless sensor networks
【Paper Link】 【Pages】:1620-1628
【Authors】: Yingying Chen ; Sourabh Jain ; Vijay Kumar Adhikari ; Zhi-Li Zhang ; Kuai Xu
【Abstract】: Effectively managing multiple data centers and their traffic dynamics poses many challenges to their operators, as little is known about the characteristics of inter-data center (D2D) traffic. In this paper we present a first study of D2D traffic characteristics using anonymized NetFlow datasets collected at the border routers of five major Yahoo! data centers. Our contributions are two-fold: i) we develop novel heuristics to infer Yahoo! IP addresses and localize their locations from the anonymized NetFlow datasets, and ii) we study and analyze both D2D and client traffic characteristics and the correlations between these two types of traffic. Our study reveals that Yahoo! deploys data centers hierarchically, with several satellite data centers distributed in other countries and backbone data centers distributed in US locations. For the Yahoo! US data centers, we separate client-triggered D2D traffic and background D2D traffic from the aggregate D2D traffic using port-based correlation, and study their respective characteristics. Our findings shed light on the interplay of multiple data centers and their traffic dynamics within a large content provider, and provide insights to data center designers and operators as well as researchers.
【Keywords】: Internet; computer centres; telecommunication traffic; IP addresses; Yahoo! datasets; anonymized NetFlow datasets; client-triggered D2D traffic; data center management; interdata center traffic; Correlation; IP networks; Integrated circuits; Satellites; Anonymization; Content provider; Inter-data center; NetFlow
【Paper Link】 【Pages】:1629-1637
【Authors】: Andrew R. Curtis ; Wonho Kim ; Praveen Yalagandula
【Abstract】: Datacenters need high-bandwidth interconnection fabrics. Several researchers have proposed highly redundant topologies with multiple paths between pairs of end hosts for datacenter networks. However, traffic management is necessary to effectively utilize the bisection bandwidth provided by these topologies. This requires timely detection of elephant flows, i.e., flows that carry large amounts of data, and managing those flows. Previously proposed approaches incur high monitoring overheads, consume significant switch resources, and/or have long detection times. We propose, instead, to detect elephant flows at the end hosts. We do this by observing the end hosts' socket buffers, which provide better, more efficient visibility into flow behavior. We present Mahout, a low-overhead yet effective traffic management system that follows an OpenFlow-like central-controller approach to network management but augments the design with our novel end-host mechanism. Once an elephant flow is detected, the end host signals the network controller using low-overhead in-band signaling. Through analytical evaluation and experiments, we demonstrate the benefits of Mahout over previous solutions.
【Keywords】: computer centres; computer network management; telecommunication traffic; Mahout; OpenFlow-like central controller; bisection bandwidth; datacenter traffic management; elephant flows; end-host-based elephant detection; high-bandwidth interconnection fabrics; in-band signaling; monitoring overheads; network controller; network management; socket buffers; Heating; Switches
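Mahout's end-host detection amounts to a simple check on socket-buffer occupancy. A minimal sketch of the idea, where the threshold value is an illustrative assumption, not the paper's parameter:

```python
# Sketch of end-host elephant-flow detection via socket send-buffer backlog,
# in the spirit of Mahout. The threshold is an assumed illustrative value.
ELEPHANT_THRESHOLD_BYTES = 128 * 1024

def detect_elephants(socket_buffers):
    """socket_buffers maps flow_id -> bytes currently queued in the send buffer.
    A flow whose backlog reaches the threshold is flagged as an elephant."""
    return {fid for fid, backlog in socket_buffers.items()
            if backlog >= ELEPHANT_THRESHOLD_BYTES}
```

Once flagged, the end host would mark the flow in-band so the central controller can reroute it; the point of checking the buffer rather than sampling in-network is that the backlog reveals the application's sending intent before much traffic has crossed the fabric.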
【Paper Link】 【Pages】:1638-1646
【Authors】: Jin Tang ; Yu Cheng ; Weihua Zhuang
【Abstract】: The distributed nature of the CSMA/CA based wireless protocols, e.g., the IEEE 802.11 distributed coordinated function (DCF), allows malicious nodes to deliberately manipulate their backoff parameters and thus unfairly gain a large share of the network throughput. The non-parametric cumulative sum (CUSUM) test is a promising method for real-time misbehavior detection due to its ability to quickly find abrupt changes in a process without any a priori knowledge of the statistics of the change occurrences. While most of the existing schemes for selfish behavior detection depend on heuristic parameter configuration and experimental performance evaluation, we develop a Markov chain based analytical model to systematically study the CUSUM based scheme for real-time detection of the backoff misbehavior. Based on the analytical model, we can quantitatively compute the system configuration parameters for guaranteed performance in terms of average false positive rate, average detection delay and missed detection ratio under a detection delay constraint. Moreover, we find that the short-term fairness issue of the 802.11 DCF impacts the transition probabilities of the Markov model and thus the detection accuracy. We develop a shuffle scheme to mitigate the short-term fairness impact on the sample series, and investigate the proper shuffle period (in terms of observation windows) that can maintain the randomness in each node's backoff behavior while resolving the short-term fairness issue. We present simulation results to confirm the accuracy of our theoretical analysis as well as demonstrate the performance of the developed real-time detection scheme.
【Keywords】: Markov processes; carrier sense multiple access; wireless LAN; CSMA/CA based wireless protocols; IEEE 802.11 based wireless networks; IEEE 802.11 distributed coordinated function; Markov chain; analytical approach; backoff misbehavior; malicious nodes; nonparametric cumulative sum test; performance evaluation; real-time misbehavior detection; Analytical models; Delay; Detectors; IEEE 802.11 Standards; Markov processes; Protocols; Real time systems
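The non-parametric CUSUM test underlying the scheme above can be sketched in a few lines. This is a generic one-sided CUSUM, not the authors' exact detector; the drift and threshold parameters are illustrative assumptions:

```python
def cusum_detect(samples, drift, threshold):
    """One-sided non-parametric CUSUM: accumulate deviations above `drift`
    and report the index at which the statistic first crosses `threshold`,
    or None if no change is detected."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + x - drift)  # clamping at 0 keeps the statistic non-negative
        if s >= threshold:
            return i
    return None
```

Applied to a per-node sequence of backoff observations, the statistic hovers near zero under protocol-compliant behavior and drifts steadily upward once a node starts choosing abnormally small backoffs, which is why no a priori model of the change is needed.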
【Paper Link】 【Pages】:1647-1655
【Authors】: Wei Dong ; Vacha Dave ; Lili Qiu ; Yin Zhang
【Abstract】: Mobile social networks extend social networks in cyberspace into the real world by allowing mobile users to discover and interact with existing and potential friends who happen to be in their physical vicinity. Despite their promise to enable many exciting applications, serious security and privacy concerns have hindered wide adoption of these networks. To address these concerns, in this paper we develop novel techniques and protocols to compute the social proximity between two users in order to discover potential friends, an essential task for mobile social networks. We make three major contributions. First, we identify a range of potential attacks against friend discovery by analyzing real traces. Second, we develop a novel solution for secure proximity estimation, which allows users to identify potential friends by computing social proximity in a privacy-preserving manner. A distinctive feature of our solution is that it provides both privacy and verifiability, which are frequently at odds in secure multiparty computation. Third, we demonstrate the feasibility and effectiveness of our approaches using a real implementation on smartphones and show that it is efficient in terms of both computation time and power consumption.
【Keywords】: mobile computing; security of data; social networking (online); mobile social networks; physical vicinity; potential attacks; potential friend discovery; privacy concerns; privacy-preserving manner; protocols; secure friend discovery; secure proximity estimation; smartphones; verifiability; Cryptography; Mobile communication; Mobile computing; Privacy; Protocols; Servers; Social network services
【Paper Link】 【Pages】:1656-1664
【Authors】: Kanthakumar Pongaliur ; Li Xiao
【Abstract】: In a sensor network, an important problem is to provide privacy for the event-detecting sensor node and integrity for the data gathered by the node. Compromised source privacy can inadvertently leak the event location. Existing techniques either use a random walk path or generate fake event packets to make it hard for the adversary to trace back to the source, since encryption alone may not prevent a traffic analysis attack. In this work, without using the traditional overhead-intensive methods, we present a scheme that hides source information using cryptographic techniques with lower overhead. The packet is modified and routed by dynamically selected nodes to make it difficult for a malicious entity to trace the packet back to its source node, and also to prevent packet spoofing. This is important because the adversary model considers a super-local eavesdropper with the ability to compromise sensor nodes. We analyze the ability of our proposed scheme to withstand different attacks and demonstrate its efficiency in terms of overhead and functionality compared to existing work.
【Keywords】: cryptography; data privacy; telecommunication security; wireless sensor networks; compromised source privacy; cryptographic techniques; event detecting sensor node; node compromise attacks; overhead intensive methods; packet spoofing; random walk path; source node; source privacy; superlocal eavesdropper; traffic analysis attack; wireless sensor networks; Birds; Cryptography; Delay; Payloads; Privacy; Protocols; Routing
【Paper Link】 【Pages】:1665-1673
【Authors】: Hao Han ; Fengyuan Xu ; Chiu Chiang Tan ; Yifan Zhang ; Qun Li
【Abstract】: This paper considers vehicular rogue access points (APs): rogue APs set up in moving vehicles to mimic legitimate roadside APs and lure users into associating with them. Due to its mobility, a vehicular rogue AP is able to maintain a long connection with users, so the adversary has more time to launch various attacks to steal users' private information. We propose a practical detection scheme based on comparing Received Signal Strength (RSS) values to prevent users from connecting to rogue APs. The basic idea of our solution is to force APs (both legitimate and fake) to report their GPS locations and transmission powers in beacons. Based on this information, users can validate whether the measured RSS matches the value estimated from the AP's location, its transmission power, and the user's own GPS location. Furthermore, we consider the impact of path loss and shadowing and propose a method based on rate adaptation to deal with advanced rogue APs. We implemented our detection technique on commercial off-the-shelf devices, including wireless cards, antennas, and GPS modules, to evaluate the efficacy of our scheme.
【Keywords】: Global Positioning System; data privacy; vehicular ad hoc networks; GPS locations; GPS modules; antennas; beacons; commercial off-the-shelf devices; path loss; receive signal strength; shadowing; transmission powers; user private information; vehicular rogue access points; wireless cards
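The RSS validation step can be illustrated with a generic log-distance path-loss model; the exponent, reference loss, and tolerance below are illustrative assumptions, not the paper's calibrated values:

```python
import math

def expected_rss(tx_power_dbm, distance_m, path_loss_exp=3.0, ref_loss_db=40.0):
    """Log-distance path loss with a 1 m reference distance:
    RSS = Pt - PL(d0) - 10 * n * log10(d / d0)."""
    return tx_power_dbm - ref_loss_db - 10.0 * path_loss_exp * math.log10(max(distance_m, 1.0))

def looks_rogue(measured_rss_dbm, tx_power_dbm, distance_m, tolerance_db=10.0):
    """Flag an AP whose measured RSS deviates from the model prediction by
    more than the tolerance (which must be wide enough to absorb shadowing)."""
    return abs(measured_rss_dbm - expected_rss(tx_power_dbm, distance_m)) > tolerance_db
```

Here the distance would be computed from the AP's beacon-reported GPS location and the user's own GPS fix; a vehicular rogue AP sitting much closer than its claimed roadside position yields a measured RSS far above the prediction.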
【Paper Link】 【Pages】:1674-1682
【Authors】: Tae Hyun Kim ; Jian Ni ; R. Srikant ; Nitin H. Vaidya
【Abstract】: Recently, it has been shown that a simple, distributed CSMA algorithm can achieve throughput-optimality. However, the optimality is established under the ideal carrier sensing assumption, i.e., each link can precisely sense the presence of other active links in its neighborhood. This paper, in contrast, investigates the achievable throughput of the CSMA algorithm under imperfect carrier sensing. The main result is that CSMA can achieve an arbitrary fraction of the capacity region if certain access probabilities are set appropriately. To establish this result, we use the perturbation theory of Markov chains.
【Keywords】: Markov processes; carrier sense multiple access; distributed algorithms; perturbation techniques; probability; Markov chains; access probability; achievable throughput; active links; arbitrary fraction; capacity region; distributed CSMA algorithm; ideal carrier sensing assumption; imperfect carrier sensing; perturbation theory; throughput-optimality; Barium; Markov processes; Multiaccess communication; Probes; Schedules; Sensors; Throughput
【Paper Link】 【Pages】:1683-1691
【Authors】: M. H. R. Khouzani ; Saswati Sarkar ; Eitan Altman
【Abstract】: Epidemic models based on nonlinear differential equations have been extensively applied to systems as diverse as infectious outbreaks, marketing, and the diffusion of beliefs, as well as to the dissemination of messages in MANETs and P2P networks. Control of such systems is achieved at the cost of consuming resources. We construct a unifying framework that models the interactions of the control and the elements in systems with epidemic behavior. Specifically, we consider non-replicative and replicative dissemination of messages in a network: a pre-determined set of disseminators distributes the messages in the former, whereas in the latter the disseminator set continually grows as the nodes that receive the patch become disseminators themselves. In both cases, the desired trade-offs can be attained by activating only fractions of the disseminators at any given time and selecting their dissemination rates. We formulate these trade-offs as optimal control problems that seek to minimize a general aggregate cost function, which cogently depends on both the states and the overall resource consumption. We prove that the optimal dynamic control strategies have simple structures: (1) it is never optimal to activate a partial fraction of the disseminators (all or none); (2) when the resource consumption cost is concave, the dissemination rates of the activated nodes are bang-bang, with at most one jump from the maximum to the minimum value, and when the resource consumption cost is convex, the above transition is strict but continuous. We compare the efficacy and robustness of the different dispatch models, and of the optimal dynamic and static controls, using numerical computations.
【Keywords】: mobile ad hoc networks; nonlinear differential equations; optimal control; peer-to-peer computing; telecommunication control; MANET; P2P network; aggregate cost function; epidemic evolution control; mobile adhoc network; nonlinear differential equations; nonreplicative message dissemination; optimal control; peer-to-peer network; replicative message dissemination; resource consumption; Variable speed drives
【Paper Link】 【Pages】:1692-1700
【Authors】: Zhiyang Guo ; Zhemin Zhang ; Yuanyuan Yang
【Abstract】: All-optical packet switches (OPS) are considered a good candidate for future ultra-fast communications as they do not require optical-electronic-optical (O/E/O) conversions. However, no practical optical random access memory is currently available, which makes it difficult to reduce packet loss to an acceptable level in OPS. Thus, hybrid optical/electronic switch architectures, such as the previously proposed switch that we refer to in this paper as the OpCut switch, are promising alternatives due to their potential to achieve ultra-low packet loss and packet delay. Although there has been extensive work on the performance modeling of different types of electronic and all-optical switches, little has been done on the performance modeling of hybrid switches. In this paper, we present an efficient analytical model, called the aggregation model, that comprehensively analyzes various performance metrics of the OpCut switch under different types of traffic. By inductively aggregating more queues in the buffer into a block, the aggregation model achieves complexity polynomial in the switch size. We develop the aggregation model for the OpCut switch under both Bernoulli traffic and ON-OFF Markovian traffic. The effectiveness of our model is validated by extensive simulations, whose results show that the aggregation model is very accurate in all tested scenarios.
【Keywords】: optical switches; packet switching; ON-OFF Markovian traffic; aggregation model; all-optical packet switches; hybrid optical packet switches; hybrid optical/electronic switch architecture; optical random access memory; optical-electronic-optical conversion; packet delay; packet loss reduction; performance modeling; polynomial complexity; shared buffer; ultra-fast communication; Analytical models; Buffer storage; Markov processes; Optical buffering; Optical packet switching; Optical switches; Optical transmitters; Optical packet switching; analytical model; hybrid switch; optical packet switch; packet delay; packet loss; performance modeling; shared buffer
【Paper Link】 【Pages】:1701-1709
【Authors】: Cheng Wang ; Changjun Jiang ; Yunhao Liu ; Xiang-Yang Li ; Shaojie Tang ; Huadong Ma
【Abstract】: A critical function of wireless sensor networks (WSNs) is data gathering. However, one is often interested only in collecting a relevant function of the sensor measurements at a sink node, rather than downloading all the data from all the sensors. This paper studies the capacity of computing and transporting specific functions of the sensor measurements to the sink node, called the aggregation capacity, for WSNs. It focuses on random WSNs, which can be classified into two types: random extended WSNs and random dense WSNs. All existing results on aggregation capacity concern dense WSNs, including random cases and arbitrary cases, under the protocol model (ProM) or physical model (PhyM). In this paper, we propose the first aggregation capacity scaling laws for random extended WSNs. We point out that, unlike for random dense WSNs, the assumption made in ProM and PhyM that each successful transmission can sustain a constant rate is over-optimistic and impractical for random extended WSNs due to transmit power limitations. We derive the first result on aggregation capacity for random extended WSNs under the generalized physical model. In particular, we prove that, for type-sensitive perfectly compressible functions and type-threshold perfectly compressible functions, the aggregation capacities for random extended WSNs with n nodes are of order Θ((log n)^(-β/2-1)) and Θ((log n)^(-β/2)/(log log n)), respectively, where β > 2 denotes the power attenuation exponent in the generalized physical model.
【Keywords】: wireless sensor networks; aggregation capacity; data gathering; physical model; protocol model; random dense WSN; random extended WSN; sensor measurements; sink node; wireless sensor networks; Interference; Lattices; PROM; Routing; Signal to noise ratio; Throughput; Wireless sensor networks
【Paper Link】 【Pages】:1710-1718
【Authors】: Aniket Pingley ; Nan Zhang ; Xinwen Fu ; Hyeong-Ah Choi ; Suresh Subramaniam ; Wei Zhao
【Abstract】: Location-based services (LBS) have become an immensely valuable source of real-time information and guidance. Nonetheless, the potential abuse of users' sensitive personal data by an LBS server is evolving into a serious concern. Privacy concerns in LBS exist on two fronts: location privacy and query privacy. In this paper we investigate issues related to query privacy. In particular, we aim to prevent the LBS server from correlating the service attribute in a query, e.g., bar/tavern, to the user's real-world identity. Location obfuscation using spatial generalization, aided by anonymization of LBS queries, is a conventional means to this end. However, the effectiveness of this technique abates in continuous LBS scenarios, i.e., where users are moving and recurrently requesting LBS. In this paper, we present a novel query-perturbation-based scheme that protects query privacy in continuous LBS even when user identities are revealed. Unlike most existing works, our scheme does not require the presence of a trusted third party.
【Keywords】: data privacy; mobile computing; continuous location based service server; location based service queries anonymization; location obfuscation; location privacy; query privacy protection; query-perturbation-based scheme; spatial generalization; Computer architecture; Context; Mobile handsets; Predictive models; Privacy; Servers; Trajectory
【Paper Link】 【Pages】:1719-1727
【Authors】: Matthew Keally ; Gang Zhou ; Guoliang Xing ; Jianxin Wu
【Abstract】: Wireless sensor networks for human health monitoring, military surveillance, and disaster warning all have stringent accuracy requirements for detecting or classifying events while maximizing system lifetime. We define meeting such user accuracy requirements as confident sensing. To perform confident sensing and reduce energy, we must address sensing diversity: differences in sensing capability among heterogeneous and homogeneous sensors in a specific deployment. We are among the first to explore the impact of sensing diversity on sensor collaboration, exploit diversity for sensing confidence, and apply diversity exploitation to confident sensing coverage. We show that our diversity-exploiting confident coverage problem is NP-hard for any specific deployment and present a practical solution, Wolfpack. Through a distributed and iterative sensor collaboration approach, Wolfpack maximizes a specific deployment's capability to meet user detection requirements and saves energy by powering off unneeded nodes. Using real vehicle detection trace data, we demonstrate that Wolfpack provides confident event detection coverage for 30% more detection locations, using 20% less energy than a state-of-the-art approach.
【Keywords】: computational complexity; wireless sensor networks; NP-hard problem; distributed sensor collaboration approach; event detection coverage; heterogeneous sensors; homogeneous sensors; human health monitoring; iterative sensor collaboration approach; military surveillance; sensing diversity; vehicle detection trace data; wireless sensor networks; Accuracy; Collaboration; Event detection; Machine learning; Sensors; Vehicle detection; Vehicles
【Paper Link】 【Pages】:1728-1736
【Authors】: Michael J. Neely
【Abstract】: We first consider a multi-user, single-hop wireless network with arbitrarily varying (and possibly non-ergodic) arrivals and channels. We design an opportunistic scheduling algorithm that guarantees all sessions have a bounded worst case delay. The algorithm has no knowledge of the future, but yields throughput-utility that is close to (or better than) that of a T-slot lookahead policy that makes “ideal” decisions based on perfect knowledge up to T slots into the future. We then extend the algorithm to treat worst case delay guarantees in multi-hop networks. Our analysis uses a sample-path version of Lyapunov optimization together with a novel virtual queue structure.
【Keywords】: Lyapunov methods; delays; optimisation; queueing theory; radio networks; scheduling; Lyapunov optimization; T-slot lookahead policy; multihop wireless network; opportunistic scheduling; single-hop wireless network; virtual queue structure; worst case delay; Algorithm design and analysis; Delay; Heuristic algorithms; Optimization; Queueing analysis; Resource management; Spread spectrum communication; Queueing analysis; flow control; optimization; wireless networks
【Paper Link】 【Pages】:1737-1744
【Authors】: Hongwei Du ; Qiang Ye ; Weili Wu ; Wonjun Lee ; Deying Li ; Ding-Zhu Du ; Stephen Howard
【Abstract】: In wireless sensor networks, virtual backbone construction based on a connected dominating set is a competitive issue for routing efficiency and topology control. Assume that a sensor network is modeled as a connected unit disk graph (UDG). The problem is to find a minimum connected dominating set of a given UDG with minimum routing cost for each node pair. We present a constant-approximation scheme that produces a connected dominating set D whose size |D| is within a factor α of that of the minimum connected dominating set, and in which each node pair has a routing path with all intermediate nodes in D and with length at most 5 · d(u,v), where d(u,v) is the length of the shortest path between the node pair. A distributed algorithm with analogous performance is also provided. Extensive simulation shows that our distributed algorithm significantly outperforms the latest solutions in this research direction.
【Keywords】: approximation theory; graph theory; telecommunication network routing; telecommunication network topology; wireless sensor networks; UDG; connected dominating set; connected unit disk graph; constant approximation scheme; distributed algorithm; guaranteed routing cost; routing efficiency; topology control; virtual backbone construction; wireless sensor network; Approximation methods; Computer science; Distributed algorithms; Electronic mail; Polynomials; Routing; Wireless sensor networks
【Paper Link】 【Pages】:1745-1753
【Authors】: Yan Qiao ; Tao Li ; Shigang Chen
【Abstract】: Bloom filters have been extensively applied in many network functions. Their performance is judged by three criteria: processing overhead, space overhead, and false positive ratio. Due to their wide applicability, any improvement to the performance of Bloom filters can potentially have broad impact in many areas of networking research. In this paper, we propose Bloom-1, a new data structure that performs a membership check in one memory access, which compares favorably with the k memory accesses of a classical Bloom filter. We also generalize Bloom-1 to Bloom-g, allowing a performance tradeoff between membership query overhead and false positive ratio. We thoroughly examine the variants in this new family of filters, and show that they can be configured to outperform Bloom filters with a smaller number of memory accesses, a smaller or equal number of hash bits, and a smaller or comparable false positive ratio in practical scenarios. We also perform experiments based on a real traffic trace to support our new filter design.
【Keywords】: computer network security; cryptography; filtering theory; telecommunication traffic; Bloom filter; Bloom-1; Bloom-g; data structure; false positive ratio; filter design; hash bit; membership check; membership query overhead; memory access; network function; networking research; processing overhead; real traffic trace; space overhead; Arrays; Hardware; Information filters; Memory management; Random access memory; Throughput
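The one-memory-access idea behind Bloom-1 is that a first hash selects a single machine word and the remaining hash bits pick positions inside that word, so a membership query reads one word instead of k scattered bits. A hedged sketch of this idea (the word layout and hashing below are illustrative choices, not the paper's exact construction):

```python
import hashlib

class Bloom1:
    """Sketch of a one-memory-access Bloom filter: one hash picks a 64-bit
    word, and k further hash values pick bit positions inside that word."""
    def __init__(self, num_words=1024, k=3):
        self.words = [0] * num_words
        self.num_words = num_words
        self.k = k

    def _positions(self, item):
        # Derive one word index plus k in-word bit offsets from a single digest.
        h = int.from_bytes(hashlib.sha256(item.encode()).digest()[:16], "big")
        word = h % self.num_words
        h //= self.num_words
        bits = []
        for _ in range(self.k):
            bits.append(h % 64)
            h //= 64
        return word, bits

    def add(self, item):
        word, bits = self._positions(item)
        for b in bits:
            self.words[word] |= 1 << b

    def query(self, item):
        word, bits = self._positions(item)  # a single word is read here
        return all((self.words[word] >> b) & 1 for b in bits)
```

Confining all k bits to one word is what trades a slightly higher false positive ratio (bits cluster instead of spreading over the whole array) for a single memory access per query, the tradeoff the Bloom-g generalization then tunes.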
【Paper Link】 【Pages】:1754-1762
【Authors】: Rachit Agarwal ; Philip Brighten Godfrey ; Sariel Har-Peled
【Abstract】: An approximate distance query data structure is a compact representation of a graph that can be queried to approximate shortest paths between any pair of vertices. Any such data structure that retrieves stretch-(2k - 1) paths must require Ω(n^(1+1/k)) space for graphs of n nodes. The hard cases that enforce this lower bound are, however, rather dense graphs with average degree Ω(n^(1/k)). We present data structures that, for sparse graphs, substantially break that lower bound barrier at the expense of higher query time. For instance, general graphs require O(n^(3/2)) space and constant query time for stretch-3 paths. For the realistic scenario of a graph with average degree Θ(log n), special cases of our data structures retrieve stretch-2 paths with O(n^(3/2)) space and stretch-3 paths with Õ(n) space, albeit at the cost of Õ(√n) query time. Moreover, supported by large-scale simulations on graphs including the AS-level Internet graph, we argue that our stretch-2 scheme would be simple and efficient to implement as a distributed compact routing protocol.
【Keywords】: data structures; graph theory; query processing; approximate distance queries; constant query time; data structure; dense graphs; distributed compact routing protocol; lower bound; shortest paths; sparse graphs; Algorithm design and analysis; Approximation algorithms; Approximation methods; Data structures; Internet; Routing; Routing protocols
【Paper Link】 【Pages】:1763-1771
【Authors】: Lei Yu ; Jianzhong Li ; Siyao Cheng ; Shuguang Xiong
【Abstract】: In-network aggregation provides an energy-efficient way to extract summarization information from sensor networks. Continuous aggregation is usually required in many sensor applications to obtain the temporal variation of aggregates of interest. However, for continuous in-network aggregation in a hostile environment, the adversary could manipulate a series of aggregation results through compromised nodes to fabricate false temporal variation patterns of the aggregates. Existing secure aggregation schemes conduct one individual verification for each aggregation result. Due to the high rate and long period of a continuous aggregation, directly applying these schemes to detect false temporal variation patterns would incur a great communication cost. In this paper, we identify the distinct design issues in protecting continuous in-network aggregation and propose a novel scheme to detect false temporal variation patterns. Compared with existing schemes, our scheme greatly reduces the communication cost by selecting and checking only a small part of the aggregation results to verify the correctness of the temporal variation patterns in a time window. The checking of the aggregation results uses a sampling-based approach, which makes our scheme independent of any particular in-network aggregation protocol. We also propose a series of security mechanisms to protect the sampling process. Both theoretical analysis and simulations show the effectiveness and efficiency of our scheme.
【Keywords】: pattern recognition; telecommunication security; wireless sensor networks; false temporal variation pattern detection; in-network aggregation; sampling-based verification; secure continuous aggregation; security mechanisms; wireless sensor networks; Aggregates; Approximation methods; Authentication; Base stations; Heuristic algorithms; Protocols
【Paper Link】 【Pages】:1772-1780
【Authors】: Ryo Sugihara ; Rajesh K. Gupta
【Abstract】: Localizability of a network or node is an important subproblem in sensor localization. While rigidity theory plays an important role in identifying several localizability conditions, its major limitations are that the results are only applicable to generic frameworks and that the distance measurements need to be error-free. These limitations, together with the hardness of finding node locations even for a uniquely localizable graph, exclude a large portion of the practical application scenarios that require sensor localization. In this paper, we describe a novel SDP-based formulation for analyzing node localizability and providing a deterministic upper bound on localization error. Unlike other optimization-based formulations, which solve the localization problem for the whole network, our formulation allows fine-grained evaluation of the localization accuracy of each node. It gives a sufficient condition for unique node localizability for any framework, i.e., not only for generic frameworks. Furthermore, we extend it to the case with measurement errors and to computing directional error bounds. We also design an iterative algorithm for large-scale networks and demonstrate its effectiveness through simulation experiments.
【Keywords】: distance measurement; optimisation; sensor placement; SDP-based formulation; computing directional error bound; deterministic accuracy guarantee; distance measurement; fine-grained evaluation; generic frameworks; optimization-based formulation; sensor localization error; Accuracy; Distance measurement; Iterative methods; Measurement errors; Reliability; Ultrasonic variables measurement; Upper bound
【Paper Link】 【Pages】:1781-1789
【Authors】: Yi Wang ; Guohong Cao
【Abstract】: Camera sensors are different from traditional scalar sensors, as different cameras at different positions can form distinct views of the object. However, the traditional disk sensing model does not consider this intrinsic property of camera sensors. To this end, we propose a novel model called full-view coverage. An object is considered to be full-view covered if for any direction from 0 to 2π (the object's facing direction), there is always a sensor such that the object is within the sensor's range and, more importantly, the sensor's viewing direction is sufficiently close to the object's facing direction. With this model, we propose an efficient method for full-view coverage detection in any given camera sensor network. We also derive a sufficient condition on the sensor density needed for full-view coverage in a random uniform deployment. Finally, we show a necessary and sufficient condition on the sensor density for full-view coverage in a triangular lattice based deployment.
【Keywords】: cameras; object detection; camera sensor networks; disk sensing; full-view coverage; object facing direction; random uniform deployment; sensor density; triangular lattice
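For a single object point, the full-view condition can be checked geometrically. Under the simplifying assumption that each in-range sensor can orient itself toward the object, "every facing direction has a sufficiently close sensor" reduces to: the sorted angular bearings of in-range sensors, seen from the object, leave no gap larger than 2θ (θ being the closeness threshold on directions). This is a hedged sketch of that reduction, not the paper's detection method for whole deployment regions; the function and parameter names are invented.

```python
import math

def full_view_covered(obj, sensors, radius, theta):
    """Check full-view coverage of a single point `obj` = (x, y).
    Assumes each sensor within `radius` can face the object, so coverage
    holds iff consecutive sensor bearings leave no angular gap > 2*theta."""
    bearings = sorted(
        math.atan2(sy - obj[1], sx - obj[0]) % (2 * math.pi)
        for sx, sy in sensors
        if math.hypot(sx - obj[0], sy - obj[1]) <= radius
    )
    if not bearings:
        return False
    gaps = [b2 - b1 for b1, b2 in zip(bearings, bearings[1:])]
    gaps.append(2 * math.pi - bearings[-1] + bearings[0])  # wrap-around gap
    return max(gaps) <= 2 * theta

# four sensors at N/E/S/W of the object: bearings 90 degrees apart
ring = [(1, 0), (0, 1), (-1, 0), (0, -1)]
```

With 90-degree gaps, the point is full-view covered for θ = 60° but not for θ = 22.5°.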
【Paper Link】 【Pages】:1790-1798
【Authors】: Yi Wang ; Guohong Cao
【Abstract】: In directional sensor networks, sensors can steer around to serve multiple target points. Most previous works assume there are always enough deployed sensors so that all target points can be served simultaneously. However, this assumption may not hold when the mission requirement changes or when more target points need to be served. Since it is not always practical to deploy new sensors, we propose to reconfigure the network by letting existing sensors steer and serve the targets periodically. As a result, targets may not be served continuously, and the service delay affects the quality of service. One important problem is how to choose the optimal set of targets to serve by each sensor such that the maximum service delay is minimized. We first show that this problem is NP-complete, and then we propose a centralized protocol whose performance is within a logarithmic factor of the optimal solution. We also propose a distributed protocol which achieves the same performance as the centralized protocol. Finally, we extend the optimization model and the protocols by considering the rotation delay, which is critical for some applications but ignored by previous work.
【Keywords】: computational complexity; delays; protocols; quality of service; sensor placement; wireless sensor networks; NP-complete problem; centralized protocol; directional sensor networks; distributed protocol; logarithm factor; optimization model; quality of service; rotation delay; sensor deployment; service delay; Delay; Minimization; Monitoring; Protocols; Schedules; Sensors; Silicon
【Paper Link】 【Pages】:1799-1807
【Authors】: Tao Li ; Shigang Chen ; Yibei Ling
【Abstract】: Traffic measurement provides critical real-world data for service providers and network administrators to perform capacity planning, accounting and billing, anomaly detection, and service provision. One of the greatest challenges in designing an online measurement module is to minimize the per-packet processing time in order to keep up with the line speed of modern routers. To meet this challenge, we should minimize the number of memory accesses per packet and implement the measurement module in the on-die SRAM. The small size of SRAM requires extremely compact data structures to be designed for storing per-flow information. The best existing work, called counter braids, requires more than 4 bits per flow and performs 6 or more memory accesses per packet. In this paper, we design a fast and compact measurement function that estimates the sizes of all flows. It achieves the optimal processing speed: 2 memory accesses per packet. In addition, it provides reasonable measurement accuracy in a tight space where the counter braids no longer work. Our design is based on a new data encoding/decoding scheme, called randomized counter sharing. This scheme allows us to mix per-flow information together in storage for compactness and, at decoding time, to separate each flow's information by statistically removing the error that other flows introduced during mixing. The effectiveness of our online per-flow measurement approach is analyzed and confirmed through extensive experiments based on real network traffic traces.
【Keywords】: decoding; encoding; telecommunication network management; telecommunication network planning; telecommunication security; telecommunication traffic recording; anomaly detection; capacity planning; data encoding/decoding scheme; mix per-flow information; network administrator; online measurement module; per flow traffic measurement; randomized counter sharing; service provider; service provision; Accuracy; Estimation; Memory management; Noise; Radiation detectors; Random access memory; Size measurement
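The counter-sharing idea is easy to simulate. In the sketch below, each flow owns a small pseudo-random set of indices in one shared counter pool; each packet increments just one of its flow's counters (the single per-packet memory update), and decoding subtracts the expected noise that other flows mixed into those counters. This is an illustrative simplification with invented parameters (M, L, traffic mix), not the paper's exact estimator, which handles the noise term more carefully.

```python
import random

random.seed(7)
M, L = 5000, 10                       # shared counter pool size, counters per flow
counters = [0] * M

def flow_counters(fid):
    """Deterministic pseudo-random set of L indices per flow
    (a stand-in for the L hash functions a real implementation would use)."""
    return random.Random(fid).sample(range(M), L)

# traffic: one large flow plus many small background flows (invented mix)
sizes = {0: 20000}
sizes.update({f: random.randrange(1, 50) for f in range(1, 2000)})
total = sum(sizes.values())
for fid, s in sizes.items():
    idx = flow_counters(fid)
    for _ in range(s):
        counters[random.choice(idx)] += 1   # the only per-packet memory update

def estimate(fid):
    """Counter-sum decoding: the flow's counter sum minus the expected
    share of noise that uniformly mixed traffic contributes."""
    return sum(counters[i] for i in flow_counters(fid)) - L * total / M
```

Despite thousands of flows sharing 5000 counters, the large flow's size is recovered to within a few percent.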
【Paper Link】 【Pages】:1808-1816
【Authors】: Xiaoming Wang ; Xiaoyong Li ; Dmitri Loguinov
【Abstract】: Traffic monitoring and estimation of flow parameters in high speed routers have recently become challenging as the Internet grew in both scale and complexity. In this paper, we focus on a family of flow-size estimation algorithms we call Residual-Geometric Sampling (RGS), which generates a random point within each flow according to a geometric random variable and records all remaining packets in a flow counter. Our analytical investigation shows that previous estimation algorithms based on this method exhibit certain bias in recovering flow statistics from the sampled measurements. To address this problem, we derive a novel set of unbiased estimators for RGS, validate them using real Internet traces, and show that they provide an accurate and scalable solution to Internet traffic monitoring.
【Keywords】: Internet; parameter estimation; sampling methods; telecommunication network routing; telecommunication traffic; Internet traffic monitoring; RGS; flow counter; flow parameter estimation; flow-size estimation algorithms; geometric random variable; high speed routers; residual-geometric flow sampling modeling; Estimation; Internet; Mathematical model; Monitoring; Radiation detectors; Random access memory; Random variables
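The residual-geometric mechanism itself is a few lines of code: while a flow is untracked, each of its packets starts the counter with probability p (so the start position is geometric); afterwards every packet is counted. The simulation below shows why the raw residual count is biased low and applies the simple first-moment correction r + 1/p − 1, which is only approximately unbiased for large flows; the paper's exact unbiased estimators are more involved and are not reproduced here. All parameters are invented for the demo.

```python
import random

random.seed(42)
P = 0.05            # per-packet sampling probability (assumed parameter)
TRUE_SIZE = 500     # actual flow size (assumed)

def residual_count(size, p):
    """One run of residual-geometric sampling for a single flow:
    packets before the geometric trigger are missed, the rest are counted."""
    count = 0
    for _ in range(size):
        if count > 0:
            count += 1                 # flow already tracked: count the packet
        elif random.random() < p:
            count = 1                  # geometric trigger fires: start counting
    return count                       # 0 means the flow was never sampled

samples = [residual_count(TRUE_SIZE, P) for _ in range(20000)]
hits = [r for r in samples if r > 0]
naive = sum(hits) / len(hits)          # biased low: misses the pre-trigger packets
corrected = naive + 1 / P - 1          # approximate large-flow correction
```

The naive average lands near TRUE_SIZE − 1/p + 1, and the corrected estimate recovers the true size.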
【Paper Link】 【Pages】:1817-1825
【Authors】: Ralf Lübben ; Markus Fidler ; Jörg Liebeherr
【Abstract】: We develop a stochastic foundation for bandwidth estimation of networks with random service, where bandwidth availability is expressed in terms of bounding functions with a defined violation probability. Exploiting properties of a stochastic max-plus algebra and system theory, the task of bandwidth estimation is formulated as inferring an unknown bounding function from measurements of probing traffic. We derive an estimation methodology that is based on iterative constant rate probes. Our solution provides evidence for the utility of packet trains for bandwidth estimation in the presence of variable cross traffic. Taking advantage of statistical methods, we show how our estimation method can be realized in practice, with adaptive train lengths of probe packets, probing rates, and replicated measurements required to achieve both high accuracy and confidence levels. We evaluate our method in a controlled testbed network, where we show the impact of cross traffic variability on the time-scales of service availability, and provide a comparison with existing bandwidth estimation tools.
【Keywords】: algebra; bandwidth allocation; stochastic processes; adaptive train lengths; bandwidth availability; bounding functions; probe packets; probing rates; random service; replicated measurements; stochastic bandwidth estimation; stochastic max-plus algebra; system theory; violation probability; Algebra; Bandwidth; Delay; Estimation; Probes; Steady-state; Transforms
【Paper Link】 【Pages】:1826-1834
【Authors】: Imed Lassoued ; Amir Krifa ; Chadi Barakat ; Konstantin Avrachenkov
【Abstract】: The remarkable growth of the Internet infrastructure and the increasing heterogeneity of applications and users' behavior make the manageability and monitoring of ISP networks more complex and raise the cost of any new deployment. The main consequence of this trend is an inherent disagreement between existing monitoring solutions and the increasing needs of management applications. In this context, we present the design of an adaptive centralized architecture that provides visibility over the entire network through a network-wide cognitive monitoring system. Practically, given a measurement task and a constraint on the volume of collected information, the proposed architecture drives the sampling rates on the interfaces of network routers to achieve the maximum possible accuracy, while adapting itself to any change in network traffic conditions. We illustrate our work with an accounting application whose purpose is to estimate the volume of aggregate flows across a backbone transit network. The paper provides a global study of the functioning of the proposed system and the impact of the different parameters on its behavior. The performance of our system is validated in typical scenarios over an experimental platform we developed for the purpose of the study.
【Keywords】: Internet; telecommunication network routing; telecommunication traffic; ISP networks; Internet infrastructure; adaptive centralized architecture; aggregate flows; backbone transit network; measurement task; network routers; network traffic; network-wide cognitive monitoring system; self-configuring adaptive system; Accuracy; Engines; Estimation; IP networks; Monitoring; Network topology; Topology
【Paper Link】 【Pages】:1835-1843
【Authors】: Ze Li ; Haiying Shen
【Abstract】: The explosive growth of unsolicited emails has prompted the development of numerous spam filtering techniques. A Bayesian spam filter is superior to a static keyword-based spam filter because it can continuously evolve to tackle new spam by learning keywords in new spam emails. However, Bayesian spam filters can be easily poisoned by avoiding spam keywords and adding many innocuous keywords to the emails. In addition, they need a significant amount of time to adapt to new spam based on user feedback. Moreover, few current spam filters exploit social networks to assist spam detection. In order to develop an accurate and user-friendly spam filter, in this paper, we propose a SOcial network Aided Personalized and effective spam filter (SOAP). Unlike previous filters that focus on parsing keywords (e.g., Bayesian filters) or building blacklists, SOAP exploits the social relationship among email correspondents to detect spam adaptively and automatically. SOAP integrates three components into the basic Bayesian filter: social closeness-based spam filtering, social interest-based spam filtering, and adaptive trust management. We evaluate the performance of SOAP based on trace data from Facebook. Experimental results show that SOAP can greatly improve the performance of Bayesian spam filters in terms of the accuracy, attack-resilience and efficiency of spam detection. We also find that the performance of Bayesian spam filters is the lower bound of SOAP.
【Keywords】: e-mail filters; social networking (online); unsolicited e-mail; user interfaces; Bayesian spam filter; Facebook; SOAP spam filter; adaptive trust management; e-mail box; keyword-based spam filter; social closeness-based spam filtering; social interest-based spam filtering; social network aided personalized spam filter; spam detection; unsolicited email; user feedback; Accuracy; Bayesian methods; Simple object access protocol; Social network services; Training; Unsolicited electronic mail
【Paper Link】 【Pages】:1844-1852
【Authors】: Matthew Knysz ; Xin Hu ; Kang G. Shin
【Abstract】: In this paper, we explore the escalating “arms race” between fast-flux (FF) botnet detectors and the botmasters' efforts to subvert them, and investigate several novel mimicry-attack techniques that allow botmasters to avoid detection. We first analyze the state-of-the-art FF detectors and their effectiveness against the current botnet threat, demonstrating how botmasters can, with their current resources, thwart detection strategies. Based on realistic assumptions inferred from empirically observed trends, we create formal models for bot decay, online availability, DNS-advertisement strategies and performance, allowing us to demonstrate the effectiveness of different mimicry attacks and evaluate their effects on the overall online availability and capacity of botnets.
【Keywords】: authorisation; invasive software; DNS-advertisement strategy; FF detector; fast-flux botnet detector; fast-flux detection system; mimicry attack; Advertising; Availability; Computers; Detectors; IP networks; Monitoring; Servers
【Paper Link】 【Pages】:1853-1861
【Authors】: Yi-Hua E. Yang ; Viktor K. Prasanna
【Abstract】: Regular expression matching (REM) with nondeterministic finite automata (NFA) can be computationally expensive when a large number of patterns are matched concurrently. On the other hand, converting the NFA to a deterministic finite automaton (DFA) can cause state explosion, where the number of states and transitions in the DFA are exponentially larger than in the NFA. In this paper, we seek to answer the following question: to match an arbitrary set of regular expressions, is there a finite automaton that lies between the NFA and DFA in terms of computation and memory complexities? We introduce the semi-deterministic finite automata (SFA) and the state convolvement test to construct an SFA from a given NFA. An SFA consists of a fixed number (p) of constituent DFAs (c-DFA) running in parallel; each c-DFA is responsible for a subset of states in the original NFA. To match a set of regular expressions with n overlapping symbols (that can match to the same input character concurrently), the NFA can require O(n) computation per input character, whereas the DFA can have a state transition table with O(2^n) states. By exploiting the state convolvements during the SFA construction, an equivalent SFA reduces the computation complexity to O(p^2/c^2) per input character while limiting the space requirement to O(|Σ|×p^2×(n/p)^c) states, where Σ is the alphabet and c ≥ 1 is a small design constant. Although the problem of constructing the optimal (minimum-sized) SFA is shown to be NP-complete, we develop a greedy heuristic to quickly construct a near-optimal SFA in time and space quadratic in the number of states in the original NFA. We demonstrate our SFA construction using real-world regular expressions taken from the Snort IDS.
【Keywords】: computational complexity; deterministic automata; finite automata; greedy algorithms; pattern matching; NP-complete problem; computation complexity; deterministic finite automaton; greedy heuristic; memory complexity; nondeterministic finite automata; regular expression matching; semi-deterministic finite automata; space-time tradeoff; state explosion; Automata; Complexity theory; Computer architecture; Doped fiber amplifiers; Explosions; Pattern matching; Throughput; Aho-Corasick algorithm; DFA; NFA; Regular expression; deep packet inspection; graph coloring; space-time tradeoff; state explosion
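The NFA end of the trade-off is simple to demonstrate: subset simulation keeps a set of active states and does work proportional to the active set per input character, avoiding the up-to-2^n-state table that full determinization could require. The hand-built two-pattern NFA below is an invented toy, not an SFA; it only illustrates the baseline the paper improves on.

```python
def run_nfa(transitions, start, accepts, text):
    """Subset simulation of an NFA over `text`: per character, the work is
    proportional to the number of active states (O(n) worst case), with no
    precomputed DFA table. Returns (position, accepting-state) matches."""
    active = {start}
    matches = []
    for i, ch in enumerate(text):
        nxt = {start}                  # stay ready to start a match anywhere
        for s in active:
            nxt |= transitions.get((s, ch), set())
        active = nxt
        matches += [(i, a) for a in active & accepts]
    return matches

# toy NFA matching the substrings "ab" and "bc" concurrently
T = {
    (0, 'a'): {1}, (1, 'b'): {2},      # "ab" -> accepting state 2
    (0, 'b'): {3}, (3, 'c'): {4},      # "bc" -> accepting state 4
}
```

On input "abc" both patterns fire, with overlapping active states after the 'b'; a full DFA would need a distinct state for each reachable subset.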
【Paper Link】 【Pages】:1862-1870
【Authors】: Fengyuan Xu ; Zhengrui Qin ; Chiu Chiang Tan ; Baosheng Wang ; Qun Li
【Abstract】: Recent studies have revealed security vulnerabilities in implantable medical devices (IMDs). Security design for IMDs is complicated by the requirement that IMDs remain operable in an emergency when appropriate security credentials may be unavailable. In this paper, we introduce IMDGuard, a comprehensive security scheme for heart-related IMDs to fulfill this requirement. IMDGuard incorporates two techniques tailored to provide desirable protections for IMDs. One is an ECG based key establishment without prior shared secrets, and the other is an access control mechanism resilient to adversary spoofing attacks. The security and performance of IMDGuard are evaluated on our prototype implementation.
【Keywords】: access control; electrocardiography; prosthetics; ECG based key establishment; IMDGuard; access control mechanism; external wearable guardian; heart-related IMD; implantable medical devices; security design; Authentication; Electrocardiography; Feature extraction; Jamming; Medical services; Protocols
【Paper Link】 【Pages】:1871-1879
【Authors】: Zhuo Lu ; Wenye Wang ; Cliff Wang
【Abstract】: Time-critical wireless applications in emerging network systems, such as e-healthcare and smart grids, have been drawing increasing attention in both industry and academia. The broadcast nature of wireless channels unavoidably exposes such applications to jamming attacks. However, existing methods to characterize and detect jamming attacks cannot be applied directly to time-critical networks, whose communication traffic model differs from conventional models. In this paper, we aim at modeling and detecting jamming attacks against time-critical traffic. We introduce a new metric, message invalidation ratio, to quantify the performance of time-critical applications. A key insight that leads to our modeling is that the behavior of a jammer who attempts to disrupt the delivery of a time-critical message can be exactly mapped to the behavior of a gambler who tends to win a gambling game. We show via the gambling-based modeling and real-time experiments that there in general exists a phase transition phenomenon for a time-critical application under jamming attacks: as the probability that a packet is jammed increases from 0 to 1, the message invalidation ratio first increases slightly (even negligibly), then increases dramatically to 1. Based on analytical and experimental results, we further design and implement the JADE (Jamming Attack Detection based on Estimation) system to achieve efficient and robust jamming detection for time-critical wireless networks.
【Keywords】: jamming; telecommunication traffic; wireless channels; communication traffic; e-healthcare; gambling-based modeling; jamming attack detection; jamming attack estimation; phase transition phenomenon; real-time experiments; robust jamming detection; smart grids; time-critical message; time-critical networks; time-critical traffic; time-critical wireless applications; time-critical wireless networks; wireless channels; Delay; Games; Jamming; Time factors; Wireless networks
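The phase transition is visible even in a much cruder model than the paper's gambling analysis. Suppose a message is invalidated only if all of its D transmission attempts before the deadline are jammed, each independently with probability q; the invalidation ratio is then q^D, which stays negligible over most of [0, 1] and climbs steeply near 1. This toy closed form, with an invented deadline of 10 slots, is only meant to show the shape of the curve, not the paper's exact model.

```python
def invalidation_ratio(q, deadline_slots):
    """Toy model: a time-critical message is invalidated iff every one of its
    `deadline_slots` transmission attempts is jammed (each with prob. q)."""
    return q ** deadline_slots

# ratio stays tiny until q nears 1, then shoots up: a sharp phase transition
curve = {q: invalidation_ratio(q, 10) for q in (0.2, 0.5, 0.8, 0.9, 0.99)}
```

For a 10-slot deadline, q = 0.5 invalidates about 0.1% of messages while q = 0.99 invalidates over 90%.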
【Paper Link】 【Pages】:1880-1888
【Authors】: Kai Zeng ; Kannan Govindan ; Daniel Wu ; Prasant Mohapatra
【Abstract】: Identity-based attacks (IBAs) are one of the most serious threats to wireless networks. Recently, received signal strength (RSS) based detection mechanisms were proposed to detect IBAs in static networks. Although mobility is an inherent property of wireless networks, limited work has addressed IBA detection in mobile scenarios. In this paper, we propose a novel RSS based technique, Reciprocal Channel Variation-based Identification (RCVI), to detect IBAs in mobile wireless networks. RCVI takes advantage of the location decorrelation, randomness, and reciprocity of the wireless fading channel to decide if all packets come from a single sender or more. If the packets are only coming from the genuine sender, the RSS variations reported by the sender should be correlated with the receiver's observations. Otherwise, the correlation should be degraded, then an attack can be flagged. We evaluate RCVI through theoretical analysis, and validate it through experiments using off-the-shelf 802.11 devices under different attacking patterns in real indoor and outdoor mobile scenarios. We show that RCVI can detect IBAs with a high probability even when the attacker is half a meter away from the genuine user.
【Keywords】: fading channels; mobile radio; radio networks; wireless LAN; 802.11 devices; RCVI; identity-based attack detection; mobile wireless networks; received signal strength; reciprocal channel variation-based identification; wireless fading channel; Correlation; Fading; IEEE 802.11 Standards; Mobile communication; Mobile computing; Wireless networks
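The core reciprocity test can be sketched as a correlation check: a genuine sender's RSS reports share the channel's fading process with the receiver's observations, while an attacker on an independent channel does not. The simulation below (Gaussian fading and noise, all parameters invented) is a hedged illustration of that statistical idea, not RCVI's actual protocol or decision threshold.

```python
import math, random

random.seed(1)

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# reciprocity: genuine sender and receiver observe the same fading process
fading = [random.gauss(0, 4) for _ in range(200)]         # shared channel gain (dB)
rx_rss = [f + random.gauss(0, 1) for f in fading]         # receiver's observations
sender_rss = [f + random.gauss(0, 1) for f in fading]     # genuine sender's reports
attacker_rss = [random.gauss(0, 4) + random.gauss(0, 1)   # independent channel
                for _ in fading]
```

The genuine pair correlates strongly; the attacker's trace does not, which is the degradation an IBA detector can flag.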
【Paper Link】 【Pages】:1889-1897
【Authors】: Zhichao Zhu ; Guohong Cao
【Abstract】: Today's location-sensitive services rely on a user's mobile device to determine its location and send the location to the application. This approach allows the user to cheat by having his device transmit a fake location, which might enable the user to access a restricted resource erroneously or provide bogus alibis. To address this issue, we propose A Privacy-Preserving LocAtion proof Updating System (APPLAUS) in which co-located Bluetooth enabled mobile devices mutually generate location proofs and upload them to a location proof server. Periodically changed pseudonyms are used by the mobile devices to protect source location privacy from each other, and from the untrusted location proof server. We also develop a user-centric location privacy model in which individual users evaluate their location privacy levels in real time and decide whether and when to accept a location proof exchange request based on their location privacy levels. APPLAUS can be implemented with the existing network infrastructure and the current mobile devices, and can be easily deployed in Bluetooth enabled mobile devices with little computation or power cost. Extensive experimental results show that our scheme, besides providing location proofs effectively, can significantly preserve the source location privacy.
【Keywords】: Bluetooth; data privacy; mobile communication; APPLAUS; colocated Bluetooth; location-based services; location-sensitive service; mobile devices; network infrastructure; privacy-preserving location proof updating system; source location privacy; user-centric location privacy model; Bluetooth; Mobile communication; Mobile handsets; Peer to peer computing; Position measurement; Privacy; Servers
【Paper Link】 【Pages】:1898-1906
【Authors】: Sushant Sharma ; Yi Shi ; Y. Thomas Hou ; Hanif D. Sherali ; Sastry Kompella
【Abstract】: Network-coded cooperative communications (NC-CC) is a new paradigm in wireless networks that employs network coding (NC) to improve the performance of CC. The core mechanism to harness the benefits of NC-CC is to appropriately combine sessions into separate groups, and then have each group select the most beneficial relay node for NC-CC. In this paper, we study this joint grouping and relay node selection problem for NC-CC. Due to the NP-hardness of the problem, we propose a distributed and online algorithm that offers a near-optimal solution to this problem. The key idea in our algorithm is to have each neighboring relay node of a new session determine and offer its best local group, and then to have the source node of the new session select the best group among all offers. We show that our distributed algorithm has polynomial complexity. Using extensive numerical results, we show that our distributed algorithm adapts well to online network dynamics.
【Keywords】: cooperative communication; network coding; optimisation; NP-hard problem; joint session grouping; neighboring relay node; network coding; network-coded cooperative communications; polynomial complexity; relay node selection; source node; wireless networks; Bandwidth; Distributed algorithms; Joints; Mutual information; Relays; Signal to noise ratio
【Paper Link】 【Pages】:1907-1915
【Authors】: Xi Fang ; Dejun Yang ; Guoliang Xue
【Abstract】: Opportunistic routing is proposed to improve the performance of wireless networks by exploiting the broadcast nature and spatial diversity of the wireless medium. In this paper, we study the problems of how to choose an opportunistic route for each user to optimize the total utility or profit of multiple simultaneous users in a wireless mesh network (WMN) subject to node constraints. We formulate these two problems as two convex programming systems. By combining primal-dual and subgradient methods, we present a distributed iterative algorithm Consort (node-Constrained Opportunistic Routing). In each iteration, Consort updates Lagrange multipliers in a distributed manner according to the user and node behaviors obtained in the previous iteration, and then each user and each node individually adjusts its own behavior based on the updated Lagrange multipliers. We prove the convergence of this iterative algorithm, and provide bounds on the amount of feasibility violation and the gap between our solution and the optimal solution in each iteration.
【Keywords】: convex programming; diversity reception; gradient methods; telecommunication network routing; wireless mesh networks; WMN; broadcast nature; consort; convergence; convex programming systems; distributed iterative algorithm; distributed manner; feasibility violation; multiple simultaneous users; node behaviors; node constraints; node-constrained opportunistic routing; opportunistic route; primal-dual methods; spatial diversity; subgradient methods; updated Lagrange multipliers; wireless medium; wireless mesh networks; wireless networks; Energy consumption; Network coding; Optimization; Routing; Routing protocols; Wireless networks
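The primal-dual pattern the abstract describes (price update from observed behavior, then individual best responses to the price) can be shown on the smallest possible instance. The sketch below is a generic dual-decomposition toy, not Consort: two hypothetical users with log utilities share one node of capacity c, the Lagrange multiplier acts as the node's price, and a subgradient step adjusts it; all constants are invented.

```python
# Toy dual decomposition: two users with utility log(x_i) share one node
# of capacity c. The node price (Lagrange multiplier) rises when demand
# exceeds capacity; each user best-responds to the current price.
c, alpha = 10.0, 0.01       # capacity and subgradient step size (invented)
lam = 1.0                   # initial Lagrange multiplier ("price")
for _ in range(2000):
    x = [1.0 / lam, 1.0 / lam]                  # argmax of log(x) - lam*x
    lam = max(1e-6, lam + alpha * (sum(x) - c)) # projected subgradient step
```

The iteration converges to the optimum of max log(x1) + log(x2) s.t. x1 + x2 ≤ 10, namely x = (5, 5) with price λ = 0.2.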
【Paper Link】 【Pages】:1916-1924
【Authors】: Yi Shi ; Jia Liu ; Canming Jiang ; Cunhao Gao ; Y. Thomas Hou
【Abstract】: The rapid advances of MIMO to date have mainly stayed at the physical layer. These benefits have not been fully realized at the network layer, mainly due to the computational complexity associated with the matrix-based model that MIMO involves. Recently, there have been some efforts to simplify the link layer model for MIMO so as to ease research at the upper layers. These models only require numeric computations on MIMO's degrees-of-freedom (DoFs) for spatial multiplexing (SM) and interference cancellation (IC) to obtain a feasible rate region. Thus, these models are much simpler than the original matrix-based model from the communications world. However, none of these DoF-based models is shown to achieve the same rate region as that by the matrix-based model. In this paper, we re-visit this important problem of MIMO modeling. Based on accurate accounting of how DoFs are consumed, we develop a simple link layer model for multi-hop MIMO networks. We show that this model is optimal in the sense of achieving the same rate region as that by the matrix-based model under SM and IC for any network topology. This work offers an important building block for theoretical research on multi-hop MIMO networks.
【Keywords】: MIMO communication; communication complexity; interference suppression; space division multiplexing; DoF-based model; MIMO modeling; computational complexity; degrees-of-freedom based model; interference cancellation; matrix-based model; multihop MIMO networks; optimal link layer model; spatial multiplexing; MIMO; Variable speed drives
【Paper Link】 【Pages】:1925-1933
【Authors】: Jianping Wang ; Chunming Qiao ; Hongfang Yu
【Abstract】: A major disruption may affect many network components and significantly lower the capacity of a network measured in terms of the maximum total flow among a set of source-destination pairs. Since only a subset of the failed components may be repaired at a time due to, e.g., limited availability of repair resources, the network capacity can only be progressively increased over time by following a recovery process that involves multiple recovery stages. Different recovery processes will restore the failed components in different orders, and accordingly, result in different amounts of network capacity increase after each stage. This paper aims to investigate how to optimally recover the network capacity progressively, or in other words, to determine the optimal recovery process, subject to limited available repair resources. We formulate the optimization problem, analyze its computational complexity, devise solution schemes, and conduct numerical experiments to evaluate the algorithms. The concept of progressive network recovery proposed in this paper represents a paradigm shift in the field of resilient and survivable networking to handle large-scale failures, and will motivate a rich body of research in network design and other applications.
【Keywords】: computational complexity; optimisation; telecommunication network management; computational complexity; large-scale failures; major disruption; multiple recovery stages; network capacity; network components; network design; optimization problem; progressive network recovery; repair resources; resilient networking; survivable networking; Algorithm design and analysis; Availability; Communications technology; Computer science; Heuristic algorithms; Maintenance engineering; Sensitivity analysis; Disruption; Network flow; Recovery
【Paper Link】 【Pages】:1934-1942
【Authors】: Xu Li ; Nathalie Mitton ; Isabelle Simplot-Ryl ; David Simplot-Ryl
【Abstract】: We propose a radically new family of geometric graphs, i.e., Hypocomb, Reduced Hypocomb and Local Hypocomb. The first two are extracted from a complete graph; the last is extracted from a Unit Disk Graph (UDG). We analytically study their properties, including connectivity, planarity and degree bound. All these graphs are planar and connected (provided the original graph is connected). Hypocomb has unbounded degree, while Reduced Hypocomb and Local Hypocomb have maximum degree 6 and 8, respectively. To our knowledge, Local Hypocomb is the first strictly-localized, degree-bounded planar graph computed using merely 1-hop neighbor position information. We present a construction algorithm for these graphs and analyze its time complexity. Hypocomb family graphs are promising for wireless ad hoc networking. We report our numerical results on their average degree and their impact on FACE routing. We discuss their potential applications and some open problems.
【Keywords】: ad hoc networks; graph theory; graphs; FACE routing; Hypocomb family graphs; degree-bounded planar graph; geometric planar graphs; time complexity; unit disk graph; wireless ad hoc networking; wireless ad hoc networks
【Paper Link】 【Pages】:1943-1951
【Authors】: Abedelaziz Mohaisen ; Nicholas Hopper ; Yongdae Kim
【Abstract】: Social network-based Sybil defenses exploit the algorithmic properties of social graphs to infer the extent to which an arbitrary node in such a graph should be trusted. However, these systems do not consider the different amounts of trust represented by different graphs, or the different levels of trust between nodes, even though trust is a crucial requirement in these systems. For instance, co-authors in an academic collaboration graph are trusted differently than social friends, and some social friends are more trusted than others. Previous designs for social network-based Sybil defenses have not considered the inherent trust properties of the graphs they use. In this paper we introduce several designs that tune the performance of Sybil defenses by accounting for differential trust in social graphs, modeling trust values by biasing the random walks performed on these graphs. Surprisingly, we find that the cost function, the required length of random walks to accept all honest nodes with overwhelming probability, is much greater in graphs with high trust values, such as co-author graphs, than in graphs with low trust values such as online social networks. We show that this behavior is due to the community structure in high-trust graphs, which requires longer walks to traverse multiple communities. Furthermore, we show that our proposed designs, while increasing the cost function of graphs with low trust values, decrease the attacker's advantage.
【Keywords】: graph theory; probability; security of data; social networking (online); Sybil defense; co-author graphs; cost function; differential trust; random walk bias; social graph; social network; trust behavior; Algorithm design and analysis; Facebook; Knowledge engineering; Peer to peer computing; Physics; Time measurement
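The trust-biasing idea in the abstract above can be illustrated with a weighted random walk. A minimal Python sketch, where the graph, the node names and the trust weights are all hypothetical, and weighting each step by edge trust is only one plausible way to bias the walk (the paper evaluates several designs):

```python
import random

def trust_biased_walk(graph, start, length, rng):
    # graph: node -> {neighbor: trust weight}; at each step pick a neighbor
    # with probability proportional to the (assumed) trust on that edge
    node, path = start, [start]
    for _ in range(length):
        nbrs = list(graph[node])
        node = rng.choices(nbrs, weights=[graph[node][n] for n in nbrs], k=1)[0]
        path.append(node)
    return path

graph = {
    "alice": {"bob": 3.0, "carol": 1.0},   # stronger (e.g. co-author) tie to bob
    "bob":   {"alice": 3.0, "carol": 1.0},
    "carol": {"alice": 1.0, "bob": 1.0},
}
path = trust_biased_walk(graph, "alice", 10, random.Random(1))
assert len(path) == 11 and all(v in graph for v in path)
```

Setting all weights equal recovers the uniform random walk used by unmodified Sybil defenses.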
【Paper Link】 【Pages】:1952-1960
【Authors】: Yi-Ruei Chen ; J. D. Tygar ; Wen-Guey Tzeng
【Abstract】: Group key management allows a group manager to maintain a consistent group key for a dynamic group of members through a broadcast channel. In this paper we propose a group key management scheme based on a meta proxy re-encryption (PRE) scheme. In particular, we propose an RSA-based PRE scheme with special properties. It is the first RSA-based PRE scheme for group key management and has the desired properties of uni-directionality and multi-hop capability. In our group key management scheme, each group member holds just one secret auxiliary key and logN public auxiliary keys. The size of the rekey message for each group key update remains O(logN). Additionally, our scheme has some distinct features. First, the size of the key update history stays O(N) no matter how many group key updates occur. Second, the time to compute the newest group key from the key update history is always O(logN) no matter how many group key updates are missed. This feature provides a practical solution for group key update when members go offline from time to time. Finally, the proposed scheme is immune to collusion attacks by other members.
【Keywords】: broadcast channels; public key cryptography; telecommunication security; RSA-based PRE scheme; broadcast channel; computation time; key update history; multihop; public auxiliary key; rekey message; secret auxiliary key; secure group key management; unidirectional proxy reencryption scheme; unidirectionality; Broadcasting; Computer science; Encryption; History; Law; Group key management; proxy re-encryption
【Paper Link】 【Pages】:1961-1969
【Authors】: Sastry Kompella ; Gam D. Nguyen ; Jeffrey E. Wieselthier ; Anthony Ephremides
【Abstract】: This paper addresses fundamental issues in a shared channel where the users have different priority levels. In particular, we characterize the stable-throughput region in a two user cognitive shared channel where the primary (higher priority) user transmits whenever it has packets to transmit while the secondary (cognitive) node transmits its packets with probability p. Therefore, in this system, the secondary link is allowed to share the channel along with the primary link, in contrast to the traditional notion of cognitive radio, in which the secondary user is required to relinquish the channel as soon as the primary is detected. The analysis also takes into account the compound effects of multi-packet reception as well as of the relaying capability on the stability region of the network. We start by analyzing the non-cooperation case where nodes transmit their own packets to their respective destinations. We then extend the analysis to a system where the secondary node cooperatively relays some of the primary's packets. Specifically, in the cooperation case, the secondary node relays those packets that it receives successfully from the primary, but are not decoded properly by the primary destination. In such cognitive shared channels, a tradeoff arises in terms of activating the secondary along with the primary so that both transmissions may be successful, but with a lower probability, compared to the case of the secondary node staying idle when the primary user transmits. Results show the benefits of relaying for both the primary as well as the secondary nodes in terms of the stable-throughput region.
【Keywords】: cognitive radio; cooperative communication; probability; radio links; telecommunication network routing; cognitive radio; cognitive shared channels; cooperative relaying; multipacket reception; primary link; probability; secondary link; secondary node relays; stable throughput tradeoffs; Interference; Numerical stability; Queueing analysis; Receivers; Relays; Stability analysis; Throughput
【Paper Link】 【Pages】:1970-1978
【Authors】: Jens B. Schmitt ; Nicos Gollan ; Steffen Bondorf ; Ivan Martinovic
【Abstract】: Non-FIFO processing of flows by network nodes is not a rare phenomenon. Unfortunately, the state-of-the-art analytical tool for the computation of performance bounds in packet-switched networks, network calculus, cannot deal well with non-FIFO systems. The problem lies in its conventional service curve definitions. Either the definition is too strict to allow for a concatenation and consequent beneficial end-to-end analysis, or it is too loose and results in infinite delay bounds. Hence, in this paper, we propose a new service curve definition and demonstrate its strength with respect to achieving both finite delay bounds and a concatenation of systems resulting in a favorable end-to-end delay analysis. In particular, we show that the celebrated pay bursts only once phenomenon is retained even without any assumptions on the processing order of packets. This seems to contradict previous work; the reasons for this are discussed.
【Keywords】: calculus; packet switching; switched networks; consequent beneficial end-to-end analysis; conventional service curve definition; infinite delay bounds; network calculus; network nodes; nonFIFO system; packet-switched network; Adaptation model; Calculus; Delay; Internet; Numerical models; Servers; Silicon
【Paper Link】 【Pages】:1979-1987
【Authors】: Florin Ciucu ; Jens B. Schmitt ; Hao Wang
【Abstract】: Convolution-form networks have the property that the end-to-end service of network flows can be expressed in terms of a (min, +)-convolution of the per-node services. This property is instrumental for deriving end-to-end queueing results which fundamentally improve upon alternative results derived by a node-by-node analysis. This paper extends the class of convolution-form networks with stochastic settings to scenarios with flow transformations, e.g., by loss, dynamic routing or retransmissions. In these networks, it is shown that by using the tools developed in this paper end-to-end delays grow as O(n) in the number of nodes n; in contrast, by using the alternative node-by-node analysis, end-to-end delays grow as O(n2).
【Keywords】: communication complexity; queueing theory; convolution-form networks; end-to-end delays; end-to-end queueing results; flow transformations; network flows; node-by-node analysis; Calculus; Convolutional codes; Delay; Markov processes; Queueing analysis; Random processes
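The (min,+)-convolution property referenced in the abstract above can be illustrated deterministically on discrete time. This sketch only shows the tandem-composition idea with classic rate-latency service curves; the paper's setting is stochastic, and the rates and latencies here are made up:

```python
def min_plus_conv(f, g):
    # (f ⊗ g)(t) = min over 0 <= s <= t of f(s) + g(t - s), on integer times
    T = min(len(f), len(g)) - 1
    return [min(f[s] + g[t - s] for s in range(t + 1)) for t in range(T + 1)]

def rate_latency(R, L, T):
    # classic rate-latency service curve beta_{R,L}(t) = R * max(t - L, 0)
    return [R * max(t - L, 0) for t in range(T + 1)]

# two nodes in tandem: the end-to-end service curve is the convolution of the
# per-node curves, again rate-latency with rate min(R1, R2), latency L1 + L2
T = 10
beta1 = rate_latency(2, 3, T)   # R1 = 2, L1 = 3
beta2 = rate_latency(3, 1, T)   # R2 = 3, L2 = 1
e2e = min_plus_conv(beta1, beta2)
assert e2e == rate_latency(2, 4, T)
```

It is this composition that lets end-to-end delay bounds grow as O(n) in the number of nodes, instead of the O(n²) obtained by bounding each node separately.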
【Paper Link】 【Pages】:1988-1996
【Authors】: Florin Ciucu ; Oliver Hohlfeld ; Lydia Y. Chen
【Abstract】: Many computing and communication systems are based on FIFO queues whose performance, e.g., in terms of throughput and fairness, is strongly influenced by load fluctuations, especially in the case of short-term overload. This paper analytically proves that, for both Markovian and heavy-tailed/self-similar arrivals, overloaded FIFO queues are asymptotically fair in the sense that each flow, or aggregate of flows, receives a weighted fair share over large time scales. In addition, the paper provides the corresponding transient results and convergence rates, i.e., the amount of time it takes for a flow to probabilistically attain its fair share. Interestingly, for Markovian arrivals, the paper indicates smaller convergence rates at higher utilizations, exactly the opposite of the behavior characteristic of underloaded queueing systems.
【Keywords】: Markov processes; queueing theory; FIFO queues; Markovian arrivals; load fluctuations; overloaded FIFO systems; self-similar arrivals; short-term overload; underloaded queueing systems; Aggregates; Calculus; Convergence; Probabilistic logic; Probability; Queueing analysis; Transient analysis
【Paper Link】 【Pages】:1997-2005
【Authors】: Zengfeng Huang ; Ke Yi ; Yunhao Liu ; Guihai Chen
【Abstract】: Consider a distributed system with n nodes where each node holds a multiset of items. In this paper, we design sampling algorithms that allow us to estimate the global frequency of any item with a standard deviation of εN, where N denotes the total cardinality of all these multisets. Our algorithms have a communication cost of O(n +√n/ε), which is never worse than the O(n + 1/ε2) cost of uniform sampling, and could be much better when n ≪ 1/ε2. In addition, we prove that one version of our algorithm is instance-optimal in a fairly general sampling framework. We also design algorithms that achieve optimality on the bit level, by combining Bloom filters of various granularities. Finally, we present some simulation results comparing our algorithms with previous techniques. Other than the performance improvement, our algorithms are also much simpler and easily implementable in a large-scale distributed system.
【Keywords】: computational complexity; distributed processing; frequency estimation; sampling methods; Bloom filter; distributed system; global frequency estimation; optimal sampling algorithm; Algorithm design and analysis; Approximation algorithms; Distributed databases; Encoding; Frequency estimation; Peer to peer computing; Probabilistic logic
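For scale, the uniform-sampling baseline that the abstract's O(n + √n/ε) algorithms improve upon can be sketched as follows. The pooled list is a centralized stand-in for the n distributed nodes, and the item names and multiset sizes are made up; with m ≈ 1/ε² samples the estimate's standard deviation is about εN:

```python
import random

def estimate_frequency(multisets, item, m, rng):
    # uniform sampling: draw m items uniformly from the union of all node
    # multisets, then scale the sample frequency up by the total cardinality N
    pooled = [x for ms in multisets for x in ms]
    N = len(pooled)
    hits = sum(rng.choice(pooled) == item for _ in range(m))
    return N * hits / m

rng = random.Random(42)
multisets = [["a"] * 200 + ["b"] * 300, ["a"] * 100, ["c"] * 400]  # N = 1000
est = estimate_frequency(multisets, "a", 10_000, rng)   # true frequency: 300
assert abs(est - 300) < 50      # m = 1/eps^2 with eps = 0.01, so sd ≈ 0.005 N
```

The paper's point is that coordinating the sample across nodes brings the communication cost down from O(n + 1/ε²) to O(n + √n/ε) while preserving this accuracy.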
【Paper Link】 【Pages】:2006-2014
【Authors】: Shibo He ; Jiming Chen ; Fachang Jiang ; David K. Y. Yau ; Guoliang Xing ; Youxian Sun
【Abstract】: Wireless rechargeable sensor networks (WRSNs) have emerged as an alternative for addressing the size and operation-time challenges posed by traditional battery-powered systems. In this paper, we study a WRSN built from the industrial wireless identification and sensing platform (WISP) and commercial off-the-shelf RFID readers. The paper-thin WISP tags serve as sensors and can harvest energy from RF signals transmitted by the readers. This kind of WRSN is highly desirable for indoor sensing and activity recognition, and is gaining attention in the research community. One fundamental question in WRSN design is how to deploy readers in a network to ensure that the WISP tags can harvest sufficient energy for continuous operation. We refer to this issue as the energy provisioning problem. Based on a practical wireless recharge model supported by experimental data, we investigate two forms of the problem: point provisioning and path provisioning. Point provisioning uses the least number of readers to ensure that a static tag placed anywhere in the network receives a sufficient recharge rate for sustained operation. Path provisioning exploits the potential mobility of tags (e.g., those carried by human users) to further reduce the number of readers necessary: mobile tags can harvest excess energy in power-rich regions and store it for later use in power-deficient regions. Our analysis shows that our deployment methods, by exploiting the physical characteristics of wireless recharging, can greatly reduce the number of readers compared with methods assuming traditional coverage models.
【Keywords】: energy harvesting; indoor radio; radiofrequency identification; sensor placement; wireless sensor networks; RF signal; WRSN design; activity recognition; commercial off-the-shelf RFID reader; energy harvesting; energy provisioning problem; indoor sensing; industrial wireless identification and sensing platform; paper-thin WISP tag; path provisioning; point provisioning; sensor deployment; wireless rechargeable sensor networks; Antennas; Mathematical model; RF signals; Radiofrequency identification; Wireless communication; Wireless sensor networks
【Paper Link】 【Pages】:2015-2023
【Authors】: Khuong Vu ; Rong Zheng
【Abstract】: Uncertainty in sensor locations is the norm in both planned and unplanned deployments. Even when carefully positioned in the deployment phase, sensors may be displaced by environmental or human factors during the course of operation. In this paper, we present a systematic study of the impact of location uncertainty on the coverage properties of wireless sensor networks. The uncertainty is modeled as disks of possibly different radii around the nominal positions. We introduce the concept of the order-k (k ≥ 1) max Voronoi Diagram (VD) and devise an efficient polynomial algorithm to construct order-k max VDs. The order-k max VD is critical in determining the minimum sensing radius needed to ensure worst-case k-coverage, called the k-exposure. Simulation studies validate the correctness of the proposed algorithms and demonstrate their superiority over a naive approach.
【Keywords】: computational geometry; polynomial approximation; uncertain systems; wireless sensor networks; Voronoi diagram; call k-exposure; coverage property; location uncertainty; minimum sensing radius; order-k VD; order-k max VD; polynomial algorithm; robust coverage; sensor locations; systematic study; unplanned deployments; wireless sensor networks; worst-case k-coverage; Algorithm design and analysis; Complexity theory; Optimization; Robustness; Sensors; Uncertainty; Wireless sensor networks
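The naive alternative that the abstract compares against can be sketched directly: for each candidate target point, a sensor whose true position lies anywhere in its uncertainty disk is at worst its nominal distance plus the disk radius away, and worst-case k-coverage is governed by the k-th smallest such distance. A small Python sketch with made-up coordinates (the paper's order-k max VD computes this without enumerating target points):

```python
from math import hypot

def naive_k_exposure(targets, sensors, radii, k):
    # worst-case distance from a target q to its k-th nearest sensor: a sensor
    # with nominal position s and uncertainty radius u may lie anywhere in the
    # disk, so its worst-case distance to q is dist(q, s) + u
    worst = 0.0
    for q in targets:
        dists = sorted(hypot(q[0] - s[0], q[1] - s[1]) + u
                       for s, u in zip(sensors, radii))
        worst = max(worst, dists[k - 1])
    return worst

sensors, radii = [(0.0, 0.0), (2.0, 0.0)], [0.5, 0.5]
r = naive_k_exposure([(1.0, 0.0)], sensors, radii, k=2)
assert abs(r - 1.5) < 1e-9   # each sensor is 1.0 away, plus 0.5 uncertainty
```

Any sensing radius of at least this value guarantees k-coverage of the sampled targets regardless of where the sensors actually lie within their disks.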
【Paper Link】 【Pages】:2024-2032
【Authors】: Michael M. Groat ; Wenbo He ; Stephanie Forrest
【Abstract】: When wireless sensor networks accumulate sensitive or confidential data, privacy becomes an important concern. Sensors are often resource-limited and power-constrained, and data aggregation is commonly used to address these issues. However, providing privacy without disrupting in-network data aggregation is challenging. Although privacy-preserving data aggregation for additive and multiplicative aggregation functions has been studied, nonlinear aggregation functions such as maximum and minimum have not been well addressed. We present KIPDA, a privacy-preserving aggregation method, which we specialize for maximum and minimum aggregation functions. KIPDA obfuscates sensitive measurements by hiding them among a set of camouflage values, enabling k-indistinguishability for data aggregation. In principle, KIPDA can be used to hide a wide range of aggregation functions, although this paper considers only maximum and minimum. Because the sensitive data are not encrypted, it is easily and efficiently aggregated with minimal in-network processing delay. We quantify the efficiency of KIPDA in terms of power consumption and time delay, studying tradeoffs between the protocol's effectiveness and its resilience against collusion.
【Keywords】: data privacy; delays; power consumption; wireless sensor networks; KIPDA; additive aggregation function; in-network processing delay; k-indistinguishable privacy-preserving data aggregation; multiplicative aggregation function; nonlinear aggregation functions; power consumption; time delay; wireless sensor networks; Base stations; Data privacy; Encryption; Indexes; Privacy; Sensors; Wireless sensor networks
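The hiding idea can be sketched in a few lines: each sensor places its real reading at a secret slot shared with the sink and fills the remaining slots with camouflage, so in-network nodes can aggregate by a plain element-wise max without any decryption. The slot index, vector length and camouflage rule below are simplifications for illustration, not the paper's exact construction:

```python
import random

K, REAL_POS = 5, 2      # vector length and the secret slot index (assumed
                        # to be pre-shared between sensors and the sink)

def sensor_report(value, rng):
    # camouflage here never exceeds the real reading, one simple rule that
    # guarantees it cannot displace the true maximum at the secret slot
    return [value if i == REAL_POS else rng.uniform(0.0, value) for i in range(K)]

def aggregate_max(reports):
    # in-network aggregation: element-wise max over all reports, no crypto
    return [max(col) for col in zip(*reports)]

rng = random.Random(7)
reports = [sensor_report(v, rng) for v in (10.0, 7.0, 13.0)]
result = aggregate_max(reports)
assert result[REAL_POS] == 13.0   # only the sink knows which slot to read
```

An eavesdropper sees K plausible values per report and, without the slot index, cannot tell the real reading from the camouflage, which is the k-indistinguishability the title refers to.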
【Paper Link】 【Pages】:2033-2041
【Authors】: Robert Sauter ; Olga Saukh ; Oliver Frietsch ; Pedro José Marrón
【Abstract】: Logging and tracing are important methods to gain insight into the behavior of sensor network applications. Existing generic solutions are often limited to nodes with a direct serial connection and do not provide the required efficiency for network-wide logging. Instead, this is often realized by application-specific subsystems developed for custom logging statements. In this paper, we present TinyLTS - a generic and efficient Logging and Tracing System for TinyOS. TinyLTS consists of a compiler extension that separates dynamic from static information at compile time, a declarative solution for inserting logging statements, an extensible framework for flexible storing and transmitting of logging data and a frontend for recombining dynamic and static information. Our system provides concise yet expressive programming abstractions for the developer combined with efficiency comparable to custom solutions.
【Keywords】: operating systems (computers); system monitoring; TinyLTS; TinyOS; custom logging statements; dynamic information; efficient network-wide logging; expressive programming abstractions; logging data; static information; tracing system; Instruments; Random access memory; Routing; Routing protocols; Runtime; Weaving
【Paper Link】 【Pages】:2042-2050
【Authors】: Xin Zhao ; Jun Guo ; Chun Tung Chou ; Archan Misra ; Sanjay Jha
【Abstract】: We propose a routing metric for enabling high-throughput reliable multicast in multi-rate wireless mesh networks. This new multicast routing metric, called expected multicast transmission time (EMTT), captures the combined effects of 1) MAC-layer retransmission-based reliability, 2) transmission rate diversity, 3) the wireless broadcast advantage, and 4) link quality awareness. The EMTT of a one-hop transmission of a multicast packet is the minimized expected transmission time, including the time required for retransmissions. This minimum is achieved by allowing the sender to adapt its bit-rate for each ongoing transmission/retransmission, optimized exclusively for the next-hop receivers that have not yet received the multicast packet. We model the rate adaptation process as a Markov decision process (MDP) and derive an efficient procedure for computing EMTT from MDP theory. We present receiver-initiated algorithms and describe a protocol implementation for the EMTT-based multicast routing problem. Numerical results demonstrate the accuracy of the proposed algorithms against optimal solutions to the multicast routing problem. Simulation experiments confirm that, compared with single-rate multicast, multi-rate multicast using the EMTT metric effectively reduces the overall multicast transmission time while yielding a higher packet delivery ratio and lower end-to-end latency.
【Keywords】: Markov processes; access protocols; multicast communication; radio broadcasting; radio links; radio receivers; telecommunication network reliability; telecommunication network routing; wireless mesh networks; EMTT-based multicast routing problem; MAC-layer retransmission-based reliability; Markov decision process; bit rate; end-to-end latency; expected multicast transmission time; high-throughput routing metric; link quality awareness; multicast packet; multicast routing metric; multirate wireless mesh network; next-hop receiver; one-hop transmission; packet delivery ratio; protocol; rate adaptation process; receiver-initiated algorithm; reliable multicast; transmission rate diversity; wireless broadcast advantage; Ad hoc networks; Computational modeling; Markov processes; Measurement; Receivers; Routing; Wireless communication
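A single-rate special case of the expected-transmission-time idea can be computed exactly by dynamic programming over the set of next-hop receivers still missing the packet; each broadcast may reach several pending receivers at once. The per-receiver success probabilities below are hypothetical, and the full EMTT additionally chooses the bit-rate at every step:

```python
from functools import lru_cache
from itertools import chain, combinations

def expected_broadcasts(p):
    # expected number of broadcasts until every receiver has the packet,
    # with independent per-receiver success probabilities p[i], fixed rate
    def subsets(s):
        return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

    @lru_cache(maxsize=None)
    def E(pending):
        if not pending:
            return 0.0
        total, escape = 1.0, 1.0
        for got in subsets(pending):
            prob = 1.0
            for i in pending:
                prob *= p[i] if i in got else 1 - p[i]
            if got:   # condition on at least one pending receiver succeeding
                total += prob * E(tuple(x for x in pending if x not in got))
            else:
                escape = 1 - prob
        return total / escape

    return E(tuple(range(len(p))))

assert abs(expected_broadcasts((0.5,)) - 2.0) < 1e-9       # geometric: 1/p
assert abs(expected_broadcasts((0.5, 0.5)) - 8 / 3) < 1e-9
```

Note that 8/3 is less than the 2 + 2 = 4 broadcasts that per-receiver unicast would need: this gap is the wireless broadcast advantage the metric exploits.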
【Paper Link】 【Pages】:2051-2059
【Authors】: Yong Ding ; Yang Yang ; Li Xiao
【Abstract】: We study the multi-source video on-demand application in multi-channel multi-radio wireless mesh networks. When a user initiates a new video request, the application can stream the video not only from the media servers, but also from the peers that have buffered the video. The multi-path multi-source video on-demand streaming has been applied in wired networks with great success. However, it remains a challenging task in wireless networks due to wireless interference. In this paper, we first focus on the problem of finding the maximum number of high-quality and independent paths from the user to the servers or peers for each VoD request by considering the effect of wireless interference. We formulate it as a constrained maximum independent paths problem, and propose two efficient heuristic path discovery algorithms. Based on the multiple paths discovered, we further propose a joint routing and rate allocation algorithm, which minimizes the network congestion caused by the new VoD session. The algorithm is aware of the optimization for both existing and potential VoD sessions in the wireless mesh network. We evaluate our algorithms with real video traces. Simulation results demonstrate that our algorithm not only improves the average video streaming performance over all the coexisting VoD sessions in the network, but also increases the network's capacity of satisfying more subsequent VoD requests.
【Keywords】: multimedia communication; multipath channels; resource allocation; telecommunication network routing; video on demand; wireless mesh networks; constrained maximum independent paths problem; heuristic path discovery algorithms; multichannel multiradio wireless mesh networks; multipath routing; multisource video on-demand streaming; network congestion minimization; rate allocation; video streaming; wireless interference; wireless networks; Interference; Receivers; Servers; Streaming media; Topology; Wireless communication; Wireless mesh networks
【Paper Link】 【Pages】:2060-2068
【Authors】: Weiyi Zhang ; Shi Bai ; Guoliang Xue ; Jian Tang ; Chonggang Wang
【Abstract】: The emerging WiMAX technology (IEEE 802.16) is a fourth-generation standard for low-cost, high-speed and long-range wireless communications for a large variety of civilian and military applications. IEEE 802.16j has introduced the concept of a mesh network model and a special type of node, the Relay Station (RS), to relay traffic for Subscriber Stations (SSs). A WiMAX mesh network is able to provide larger wireless coverage, higher network capacity and Non-Line-Of-Sight (NLOS) communications. This paper studies the Distance-Aware Relay Placement (DARP) problem in WiMAX mesh networks, adopting a more realistic model that takes into account physical constraints such as channel capacity, signal strength and network topology, which were largely ignored in previous studies. The goal is to deploy the minimum number of RSs that meet system requirements such as user data rate requests, signal quality and network topology. We divide the DARP problem into two sub-problems: the LOwer-tier Relay Coverage (LORC) problem and the Minimum Upper-tier Steiner Tree (MUST) problem. For the LORC problem, we present two approximation algorithms, based on independent set and hitting set, respectively. For the MUST problem, an efficient approximation algorithm is provided and proved. We then combine the solutions of the two sub-problems into a proven approximation solution for DARP. We also present numerical results confirming the theoretical analysis of our schemes as the first solution to the DARP problem.
【Keywords】: WiMax; approximation theory; channel capacity; telecommunication network topology; telecommunication standards; wireless mesh networks; DARP; IEEE 802.16j; WiMAX mesh networks; approximation algorithms; channel capacity; distance-aware relay placement; hitting set; independent set; lower-tier relay coverage problem; minimum upper-tier Steiner tree problem; network capacity; network topology; non-line-of-sight communications; relay station; signal strength; subscriber stations; traffic relay; wireless coverage; Approximation algorithms; Approximation methods; Channel capacity; Mesh networks; Receivers; Relays; WiMAX; WiMAX mesh network; approximation algorithm; hitting set; independent set; relay station placement
【Paper Link】 【Pages】:2069-2077
【Authors】: Arik Motskin ; Ian Downes ; Branislav Kusy ; Omprakash Gnawali ; Leonidas J. Guibas
【Abstract】: We consider the problem of distributing time-sensitive information from a collection of sources to mobile users traversing a wireless mesh network. Our strategy is to distributively select a set of well-placed nodes (warehouses) to act as intermediaries between the information sources and clusters of users. Warehouses are selected via the distributed construction of Hierarchical Well-Separated Trees (HSTs), which are sparse structures that induce a natural spatial clustering of the network. Unlike many traditional multicast protocols, our approach is not data driven. Rather, it is agnostic to the number and position of sources as well as to the mobility patterns of users. Whereas source-rooted tree multicast algorithms construct a separate routing infrastructure to support each source, our sparse and flexible infrastructure is precomputed and efficiently reused by sources and users, its cost amortized over time. Moreover, the route acquisition delay inherent in on-demand wireless ad hoc network protocols is avoided by exploiting the HST addressing scheme. Our algorithm ensures with high probability a guaranteed stretch bound for the information delivery path, and is robust to lossy links and node failure by providing alternative HST-induced routes. Nearby users are clustered and their requests aggregated, further reducing communication overhead.
【Keywords】: mobile ad hoc networks; multicast protocols; routing protocols; trees (mathematics); wireless mesh networks; HST addressing scheme; distributed construction; efficient information distribution; hierarchical well-separated trees; mobile users; mobility patterns; multicast protocols; natural spatial clustering; network warehouses; on-demand wireless ad hoc network protocols; route acquisition delay; routing infrastructure; source-rooted tree multicast algorithms; time-sensitive information; wireless mesh network; Artificial neural networks; Routing; Wireless communication; Wireless sensor networks
【Paper Link】 【Pages】:2078-2086
【Authors】: Kuai Xu ; Feng Wang ; Lin Gu
【Abstract】: This paper explores the behavior similarity of Internet end hosts in the same network prefixes. We use bipartite graphs to model network traffic, and then construct one-mode projection graphs to capture the social-behavior similarity of end hosts. By applying a simple and efficient spectral clustering algorithm, we perform network-aware clustering of the end hosts in the same prefixes into distinct behavior clusters. Based on information-theoretic measures, we find that the clusters exhibit distinct traffic characteristics, which provides improved interpretation of the separated traffic compared with the aggregated traffic of the prefixes. Finally, we demonstrate applications of behavior similarity in profiling network behaviors and detecting anomalous behaviors through synthetic traffic that combines Internet backbone traffic with packet traces from real worm propagations and denial-of-service attacks.
【Keywords】: Internet; computer network security; graph theory; pattern clustering; Internet backbone traffic; Internet end hosts; aggregated traffic; anomalous behavior detection; bipartite graphs; denial of service attacks; information-theoretical measures; network behavior profiling; network traffic model; network-aware behavior clustering; one-mode projection graphs; packet traces; social behavior similarity; spectral clustering algorithm; synthetic traffic; worm propagations; Bipartite graph; Clustering algorithms; Eigenvalues and eigenfunctions; IP networks; Indexes; Internet; Uncertainty
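The one-mode projection step can be sketched directly: connect two source hosts whenever they contact a common destination, weighting the edge by the number of shared destinations. The toy traffic below is made up, and counting shared destinations is one simple similarity choice among several:

```python
from collections import defaultdict
from itertools import combinations

# bipartite traffic: source host -> set of destinations it contacted (toy data)
traffic = {
    "h1": {"d1", "d2", "d3"},
    "h2": {"d2", "d3"},
    "h3": {"d9"},
}

def one_mode_projection(bip):
    # connect two hosts iff they contacted a common destination;
    # edge weight = number of shared destinations (behavior similarity)
    proj = defaultdict(int)
    for a, b in combinations(sorted(bip), 2):
        shared = len(bip[a] & bip[b])
        if shared:
            proj[a, b] = shared
    return dict(proj)

assert one_mode_projection(traffic) == {("h1", "h2"): 2}
```

Spectral clustering is then run on the weighted adjacency matrix of this projection graph rather than on the bipartite traffic graph itself.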
【Paper Link】 【Pages】:2087-2095
【Authors】: Weiyu Xu ; Enrique Mallada ; Ao Tang
【Abstract】: In this paper, motivated by network inference and tomography applications, we study the problem of compressive sensing for sparse signal vectors over graphs. In particular, we are interested in recovering sparse vectors representing properties of the edges of a graph. Unlike in existing compressive sensing results, the collective additive measurements we are allowed to take must follow connected paths over the underlying graph. For a sufficiently connected graph with n nodes, we show that, using O(k log(n)) path measurements, we can recover any k-sparse link vector (with no more than k nonzero elements), even though the measurements must follow the graph's path constraints. In particular, we show that computationally efficient ℓ1 minimization provides theoretical guarantees for inferring such k-sparse vectors with O(k log(n)) path measurements from the graph.
【Keywords】: graph theory; minimisation; network theory (graphs); signal processing; vectors; ℓ1 minimization; collective additive measurement; compressive sensing; connected graph; k-sparse link vector; network inference; network tomography; sparse signal vector; sparse vector recovering; Bipartite graph; Compressed sensing; Delay; Indexes; Minimization; Testing; Tomography
【Paper Link】 【Pages】:2096-2104
【Authors】: Xiaofei Wu ; Ke Yu ; Xin Wang
【Abstract】: The Internet's structure possesses many properties of complex networks. However, existing studies are often constrained to deriving web connections from partially collected data, and the actual Internet traffic and user behaviors are far from being understood. With detailed traffic flow records collected through powerful hardware-based monitors, we study, from the perspective of complex networks, the characteristics of four types of traffic: P2P download, HTTP, Instant Messaging and overall traffic. Based on data analysis and comparison across applications, we confirm that the distributions of both node degree and node/edge strength follow power laws, but with significantly different exponents. Furthermore, taking advantage of the strict timing of the records, we study the dynamics of the flow graphs. The growth of edges with respect to nodes is nonlinear, and edges formed between existing nodes, rather than those arriving with new nodes, dominate the growth. We also observe linear preferential attachment behavior in the flow graphs.
【Keywords】: Internet; electronic messaging; flow graphs; hypermedia; peer-to-peer computing; telecommunication traffic; HTTP; Internet application flow; Internet traffic; P2Pdownload; Web connection; complex network; flow graph; instant messaging; linear preferential attachment behavior; overall traffic; traffic flow record; Internet flow; complex network; degree distribution; growth process; power law; preferential attachment
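A standard way to estimate the power-law exponents such measurements report is the continuous-approximation maximum-likelihood estimator α = 1 + n / Σ ln(d_i/d_min); a minimal sketch with made-up degree data (the paper does not specify its fitting procedure):

```python
from math import log

def powerlaw_alpha_mle(degrees, dmin=1):
    # continuous-approximation MLE: alpha = 1 + n / sum(ln(d / dmin)),
    # taken over the tail of degrees d >= dmin
    tail = [d for d in degrees if d >= dmin]
    return 1 + len(tail) / sum(log(d / dmin) for d in tail)

degrees = [1, 1, 1, 2, 2, 3, 5]          # hypothetical node degrees
alpha = powerlaw_alpha_mle(degrees)
assert 2.70 < alpha < 2.72
```

Comparing the α fitted per application class is how one quantifies the "significantly different exponents" claim.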
【Paper Link】 【Pages】:2105-2113
【Authors】: Paul Tune ; Darryl Veitch
【Abstract】: The main approaches to high-speed measurement in routers are traffic sampling and sketching. However, it is not known which paradigm is inherently better at extracting information from traffic streams. We tackle this problem for the first time using Fisher information as a means of comparison, in the context of flow size distribution measurement. We first provide a side-by-side information-theoretic comparison, then add resource constraints according to simple models of router implementations. Finally, we evaluate the performance of both methods on actual traffic traces.
【Keywords】: information theory; signal sampling; telecommunication network routing; telecommunication traffic; Fisher information; flow size distribution measurement; high speed measurement; information theoretic comparison; resource constraints; routers; traffic sampling; traffic sketching; traffic streams; Arrays; Context; Indexes; Measurement; Radiation detectors; Random access memory; Symmetric matrices
【Paper Link】 【Pages】:2114-2122
【Authors】: Yaxuan Qi ; Kai Wang ; Jeffrey Fong ; Yibo Xue ; Jun Li ; Weirong Jiang ; Viktor K. Prasanna
【Abstract】: Modern networks are increasingly becoming content-aware to improve data delivery and security via content-based network processing. Content-aware processing at the front end of distributed network systems, such as application identification for datacenter load balancers and deep packet inspection for security gateways, is especially challenging due to wire-speed and low-latency requirements. Existing work focuses on algorithm-level solutions while lacking the system-level design needed to meet the critical requirements of front-end content processing. In this paper, we propose a system-level solution named FEACAN for front-end acceleration of content-aware network processing. FEACAN employs a software-hardware co-design supporting both signature matching and regular expression matching for content-aware network processing. A two-dimensional DFA compression algorithm is designed to reduce memory usage, and a hardware lookup engine is proposed for high-performance lookup. Experimental results show that FEACAN outperforms existing work in terms of processing speed, resource utilization, and update time.
【Keywords】: content-based retrieval; hardware-software codesign; telecommunication security; telecommunication traffic; FEACAN; data delivery; distributed network systems; front-end acceleration for content-aware network processing; regular expression matching; signature matching; Acceleration; Algorithm design and analysis; Doped fiber amplifiers; Engines; Hardware; Redundancy; Security; DFA; hardware acceleration; regular expressions
【Paper Link】 【Pages】:2123-2128
【Authors】: Hrishikesh B. Acharya ; Mohamed G. Gouda
【Abstract】: A firewall is a packet filter placed at the entrance of a private network. It checks the header fields of each packet entering the private network and decides, based on the rules specified in the firewall, whether to accept the packet and allow it to proceed, or to discard it. To validate the correctness and effectiveness of the rules in a firewall, the rules are usually subjected to two types of analysis: verification and redundancy checking. Verification is used to verify that the rules in a firewall accept all packets that should be accepted and discard all packets that should be discarded. Redundancy checking is used to check that no rule in a firewall is redundant (i.e., can be removed from the firewall without changing the sets of packets the firewall accepts and discards). In this paper we show that, contrary to conventional wisdom, these two types of analysis are in fact equivalent. In particular, we show that (1) every verification algorithm can also be used to check whether a rule in a firewall is redundant, and (2) every redundancy checking algorithm can also be used to verify whether the rules in a firewall accept or discard an intended set of packets.
【Keywords】: authorisation; computer network security; redundancy; firewall verification; packet filter; private network; redundancy checking; Algorithm design and analysis; Complexity theory; Fires; Internet; Redundancy; TV; Testing
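One direction of the claimed equivalence (using a verifier to detect a redundant rule) can be sketched directly: rule i is redundant iff the firewall with rule i removed is verified to make the same decision on every packet. A minimal first-match-wins model, brute-forcing a toy header space; all names and the example rules are hypothetical:

```python
from itertools import product

# A rule: (match predicate over (src, dst), action). First match wins.
def decide(rules, pkt, default="discard"):
    for match, action in rules:
        if match(pkt):
            return action
    return default

def redundant(rules, i, universe):
    """Rule i is redundant iff the firewall without it makes the same
    decision on every packet -- i.e. we *verify* the reduced firewall
    against the behavior of the full one."""
    reduced = rules[:i] + rules[i + 1:]
    return all(decide(rules, p) == decide(reduced, p) for p in universe)

# Tiny 2-field header space, values 0..3 each
universe = list(product(range(4), range(4)))
rules = [
    (lambda p: p[0] == 1,               "accept"),
    (lambda p: p[0] == 1 and p[1] == 2, "accept"),   # shadowed by rule 0
    (lambda p: p[1] == 3,               "discard"),
]
print(redundant(rules, 1, universe))  # rule 1 never changes a decision
print(redundant(rules, 0, universe))  # removing rule 0 changes decisions
```

Real firewall analyzers avoid the exponential brute force by reasoning over rule predicates symbolically, but the reduction itself is exactly this comparison of two firewalls.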
【Paper Link】 【Pages】:2129-2137
【Authors】: Tingwen Liu ; Yifu Yang ; Yanbing Liu ; Yong Sun ; Li Guo
【Abstract】: Deep packet inspection plays an increasingly important role in network security devices and applications, which rely more and more on regular expressions to describe patterns. A DFA engine is usually used as the classical representation of regular expressions for pattern matching, because it needs only O(1) time to process each input character. However, DFAs for regular expression sets require a large amount of memory, which limits the practical application of regular expressions in high-speed networks. Several compression algorithms have been proposed in the recent literature to address this issue. In this paper, we reconsider the problem from a new perspective, observing the characteristics of the transition distribution inside each state, in contrast to previous algorithms that observe transition characteristics among states. We introduce a new compression algorithm that stably reduces DFA memory usage by 95% without significant impact on matching speed. Moreover, our work is orthogonal to previous compression algorithms such as D2FA and δFA: our experimental results show that applying it to them yields severalfold memory reduction and matching speeds up to dozens of times faster than the original δFA in software implementations.
【Keywords】: data compression; deterministic automata; finite automata; pattern matching; security of data; DFA engine; deep packet inspection; deterministic finite automata; network security; pattern matching; regular expression compression algorithm; Arrays; Clustering algorithms; Compression algorithms; Doped fiber amplifiers; Memory management; Pattern matching; Sparse matrices
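The per-state observation, that transitions inside a state are heavily skewed toward a few targets, suggests storing one default target per state plus an exception list. The following is a hedged sketch of that general idea, not the paper's actual algorithm; `compress_rows`, `step`, and the toy DFA are illustrative only:

```python
def compress_rows(dfa):
    """Per-state compression: store each state's most frequent target as
    a default and keep only the exceptions, exploiting the skew of the
    transition distribution inside each state."""
    compact = []
    for row in dfa:                      # row: next-states, one per symbol
        default = max(set(row), key=row.count)
        exceptions = {c: t for c, t in enumerate(row) if t != default}
        compact.append((default, exceptions))
    return compact

def step(compact, state, symbol):
    default, exceptions = compact[state]
    return exceptions.get(symbol, default)   # still O(1) per character

# 3-state toy DFA over a 6-symbol alphabet; most transitions fall back to 0
dfa = [
    [0, 1, 0, 0, 0, 0],
    [0, 1, 2, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
]
compact = compress_rows(dfa)
stored = sum(len(exc) for _, exc in compact) + len(compact)
print(stored, "entries instead of", sum(len(r) for r in dfa))
```

On real signature sets the skew is far stronger than in this toy table, which is what makes the reported 95% reduction plausible.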
【Paper Link】 【Pages】:2138-2146
【Authors】: M. H. R. Khouzani ; Saswati Sarkar ; Eitan Altman
【Abstract】: Given the flexibility that software-based operation provides, it is unreasonable to expect that new malware will demonstrate fixed behavior over time. Instead, malware can dynamically change the parameters of its infective hosts in response to the dynamics of the network, in order to maximize overall damage. In return, however, the network can also dynamically change its countermeasure parameters in order to attain a robust defense against the spread of malware while minimally affecting normal network performance. The infinite dimensions of freedom introduced by variation over time, and the antagonistic, strategic optimization of malware and network against each other, demand new approaches to modeling and analysis. We develop a zero-sum dynamic game model and investigate the structural properties of the saddle-point strategies. We specifically show that the saddle-point strategies are simple threshold-based policies, and hence a robust dynamic defense is practicable.
【Keywords】: game theory; invasive software; optimisation; antagonistic optimization; malware attack; saddle-point strategy; strategic optimization; zero-sum dynamic game model; Computational modeling; Media; Optimized production technology
【Paper Link】 【Pages】:2147-2155
【Authors】: Xiaodong Lin ; Rongxing Lu ; Xiaohui Liang ; Xuemin Shen
【Abstract】: Receiver-location privacy is an important security requirement in privacy-preserving Vehicular Ad hoc Networks (VANETs), yet the unavailability of the receiver's location information makes many existing packet forwarding protocols inefficient in VANETs. To tackle this challenging issue, in this paper we propose an efficient social-tier-assisted packet forwarding protocol, called STAP, for achieving receiver-location privacy preservation in VANETs. Specifically, observing that vehicles often visit social spots, such as well-traversed shopping malls and busy intersections in a city environment, we deploy storage-rich Roadside Units (RSUs) at social spots and form a virtual social tier with them. Then, without knowing the receiver's exact location, a packet can first be forwarded and disseminated in the social tier. Later, once the receiver visits one of the social spots, it successfully receives the packet. Detailed security analysis shows that the proposed STAP protocol can protect the receiver's location privacy against an active global adversary, and achieve the vehicle's conditional privacy preservation as well. In addition, performance evaluation via extensive simulations demonstrates its efficiency in terms of high delivery ratio and low average delay.
【Keywords】: data privacy; routing protocols; vehicular ad hoc networks; STAP; conditional privacy preservation; delivery ratio; exact location information; privacy-preserving VANET; receiver-location privacy preservation; roadside units; social-tier-assisted packet forwarding protocol; vehicular ad hoc networks; virtual social tier; Authentication; Cities and towns; Delay; Privacy; Protocols; Receivers; Vehicles; Packet forwarding; Receiver-location privacy preservation; Social-tier-assisted; VANETs
【Paper Link】 【Pages】:2156-2164
【Authors】: Zhuo Hao ; Sheng Zhong ; Erran L. Li
【Abstract】: Wireless security has been an active research area over the last decade. Many studies of wireless security use cryptographic tools, but traditional cryptographic tools are normally based on computational assumptions, which may turn out to be invalid in the future. Consequently, it is very desirable to build cryptographic tools that do not rely on computational assumptions. In this paper, we focus on a crucial cryptographic tool, namely 1-out-of-2 oblivious transfer. This tool plays a central role in cryptography because a cryptographic protocol for any polynomial-time computable function can be built from it. We present a novel 1-out-of-2 oblivious transfer protocol based on wireless channel characteristics, which does not rely on any computational assumption. We also illustrate the potentially broad applications of this protocol with an application to private communications. We have fully implemented the protocol on wireless devices and conducted experiments in real environments to evaluate it and its application to private communications. Our experimental results demonstrate that it has reasonable efficiency.
【Keywords】: computational complexity; cryptographic protocols; telecommunication security; wireless channels; computational assumptions; crucial cryptographic tool; cryptographic protocol; cryptographic tools; cryptography; oblivious transfer protocol; polynomial-time computable function; private communications; unauthenticated wireless channel; wireless channel characteristics; wireless devices; wireless security; Channel estimation; Communication system security; Cryptography; Probes; Protocols; Wireless communication
【Paper Link】 【Pages】:2165-2173
【Authors】: Yunchuan Wei ; Kai Zeng ; Prasant Mohapatra
【Abstract】: Generating a shared key between two parties from the wireless channel is of increasing interest. The procedure for obtaining information from the wireless channel is called channel probing. Previous works used a constant channel probing rate to generate a key, but they neither considered the tradeoff between the bit generation rate (BGR) and channel resource consumption, nor adjusted the probing rate to different scenarios. In order to satisfy users' requirements for BGR and to use the wireless channel efficiently, we first build a mathematical model of channel probing and derive the relationship between BGR and probing rate. Second, we introduce an adaptive channel probing system based on Lempel-Ziv complexity (LZ76) and a Proportional-Integral-Derivative (PID) controller. Our scheme uses LZ76 to estimate the entropy rate of the channel statistics, e.g. the Received Signal Strength (RSS), and uses the PID controller to control the channel probing rate. Our experiments show that this system is able to dynamically adjust its probing rate to achieve a desired BGR under different moving speeds, mobility types, and sites. Our results also show that the standard deviation of the LZ76 calculator is less than 0.15 bits/s, and that the PID controller stabilizes the bit generation rate at a desired value with a mean error of less than 0.9 bits/s.
【Keywords】: data compression; three-term control; wireless channels; LZ76; Lempel-Ziv complexity; PID controller; adaptive wireless channel probing; bit generation rate; channel resource consumption; proportional-integral-derivative controller; shared key generation; Adaptive systems; Calculators; Communication system security; Complexity theory; Entropy; Probes; Wireless communication
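The LZ76 estimator at the heart of the probing controller is well defined independently of the paper: it counts the phrases in the Lempel-Ziv exhaustive parsing of a sequence, and a low-complexity (predictable) channel yields far fewer phrases than a noisy one. A minimal sketch of the phrase-counting step (the PID loop is omitted; the quantized sequences below are illustrative):

```python
import random

def lz76_complexity(s):
    """Number of phrases in the LZ76 exhaustive parsing of sequence s.
    Higher phrase counts indicate a higher entropy rate."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend the current phrase while it already appears earlier in s
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

# a constant sequence is far less complex than a pseudo-random one
rng = random.Random(0)
flat = "0" * 64
noisy = "".join(rng.choice("01") for _ in range(64))
print(lz76_complexity(flat), lz76_complexity(noisy))
```

In a probing system of the kind described, the RSS samples would first be quantized into such a symbol sequence before the complexity is computed.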
【Paper Link】 【Pages】:2174-2182
【Authors】: Xu Li ; Xuegang Yu ; Aditya Wagh ; Chunming Qiao
【Abstract】: It is essential to consider drivers' perceptions and reactions when building Vehicular Cyber-Physical Systems (VCPS), since the effectiveness and efficiency of VCPS largely depend on how drivers benefit from such a system. This paper considers, for the first time, novel service scheduling problems from a Human Factors (HF) standpoint, taking into consideration the following fact: a driver may not be able to receive more than one service in a short period of time, even if the VCPS can somehow transmit multiple services to the driver from the conventional communications and networking standpoint. We study a family of HF-aware Service Scheduling (HFSS) problems, where the goal is to deliver up to n services, each having a time-dependent (and non-increasing) utility, to a subset of intended drivers so as to minimize the system-wide total utility loss due to unsuccessful delivery of some services. We show that these problems differ from all existing problems. We formulate the basic HFSS problem (BHFSSP) using Integer Linear Programming (ILP) and prove that it and other more general problems are NP-Complete. We also propose efficient heuristics and present numerical results from large-scale test cases.
【Keywords】: human factors; integer programming; linear programming; scheduling; vehicular ad hoc networks; HF-aware service scheduling; HFSS problem; NP-complete; VCPS; human factors; integer linear programming; service scheduling; vehicular cyber physical systems; Complexity theory; Driver circuits; Heuristic algorithms; Human factors; Humans; Receivers; Schedules; Human Factors; ITS; NP-Complete; Service Scheduling; VANET; Vehicular Cyber-Physical Systems
【Paper Link】 【Pages】:2183-2191
【Authors】: Yuchen Wu ; Yanmin Zhu ; Bo Li
【Abstract】: Efficient data delivery is a great challenge in vehicular networks because of frequent network disruption, fast topological change and mobility uncertainty. Vehicular trajectory knowledge plays a key role in data delivery. Existing algorithms have largely predicted trajectories with coarse-grained patterns such as the spatial distribution and/or the inter-meeting time distribution, which leads to poor data delivery performance. In this paper, by mining extensive trace datasets of vehicles in an urban environment through conditional entropy analysis, we find that there exists strong spatiotemporal regularity. By extracting mobility patterns from historical traces, we develop accurate trajectory predictions using multiple-order Markov chains. Based on an analytical model, we theoretically derive the packet delivery probability with predicted trajectories. We then propose routing algorithms that take full advantage of predicted vehicle trajectories. Finally, we carry out extensive simulations based on real vehicle traces. The results demonstrate that our proposed routing algorithms achieve a significantly higher delivery ratio at lower cost compared with existing algorithms.
【Keywords】: Markov processes; entropy; mobile radio; telecommunication network routing; telecommunication network topology; conditional entropy analysis; data delivery; mobile pattern; mobility uncertainty; multiple order Markov chains; network disruption; packet delivery probability; routing algorithm; spatiotemporal regularity; topological change; trace datasets; trajectory prediction; urban environment; vehicle trajectory; vehicular network; vehicular trajectory knowledge; Algorithm design and analysis; Entropy; Mobile communication; Prediction algorithms; Routing; Trajectory; Vehicles; Markov chain; prediction; routing; trajectory; vehicular networks
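A multiple-order (here k-th order) Markov predictor of the kind this abstract and the next one describe can be sketched in a few lines: count transitions conditioned on the last k visited locations, then predict the most frequent successor. The traces and helper names below are illustrative, not the papers' datasets:

```python
from collections import Counter, defaultdict

def train(trajectories, k=2):
    """Count k-th order transitions: frequency of next given last k locations."""
    model = defaultdict(Counter)
    for traj in trajectories:
        for i in range(len(traj) - k):
            model[tuple(traj[i:i + k])][traj[i + k]] += 1
    return model

def predict(model, recent, k=2):
    """Most likely next location given the last k visited; None if unseen."""
    counts = model.get(tuple(recent[-k:]))
    return counts.most_common(1)[0][0] if counts else None

traces = [
    ["home", "road1", "mall", "road2", "home"],
    ["home", "road1", "mall", "road3", "office"],
    ["office", "road1", "mall", "road2", "home"],
]
m = train(traces, k=2)
print(predict(m, ["road1", "mall"]))   # "road2" observed twice vs "road3" once
```

Raising k trades data sparsity for accuracy; the conditional entropy analysis the abstract mentions is one way to choose the order.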
【Paper Link】 【Pages】:2192-2200
【Authors】: Hongzi Zhu ; Shan Chang ; Minglu Li ; Kshirasagar Naik ; Sherman X. Shen
【Abstract】: Inter-contact times (ICTs) between moving vehicles are one of the key metrics in vehicular networks, and they are central to forwarding algorithms and the end-to-end delay. Recent studies of the tail distribution of ICTs, based on theoretical mobility models and empirical trace data, show that the delay between two consecutive contact opportunities drops exponentially. While theoretical results facilitate problem analysis, how to design practical opportunistic forwarding protocols in vehicular networks, where messages are delivered in carry-and-forward fashion, remains unclear. In this paper, we study three large sets of Global Positioning System (GPS) traces of more than ten thousand public vehicles, collected from Shanghai and Shenzhen, two metropolises in China. By mining the temporal correlation and the evolution of ICTs between each pair of vehicles, we use higher-order Markov chains to characterize urban vehicular mobility patterns, which adapt as the ICTs between vehicles are continuously updated. The next hop for message forwarding is then determined from the previous ICTs. Our message forwarding strategy can dramatically increase the delivery ratio (by up to 80%) and reduce end-to-end delay (by up to 50%) while generating network traffic similar to that of current strategies based on delivery probability or expected delay.
【Keywords】: Global Positioning System; Markov processes; mobility management (mobile radio); protocols; statistical distributions; telecommunication traffic; vehicular ad hoc networks; Global Positioning System; carry-and-forward fashion; delivery probability; end-to-end delay; higher order Markov chains; inter-contact times; message forwarding; network traffic; opportunistic forwarding protocol; tail distribution; temporal correlation mining; temporal dependency; theoretical mobility model; urban vehicular mobility pattern; urban vehicular network; Algorithm design and analysis; Correlation; Delay; Entropy; Markov processes; Redundancy; Vehicles; Inter-contact time; Markov chain; opportunistic forwarding; temporal dependency; vehicular networks
【Paper Link】 【Pages】:2201-2209
【Abstract】: This paper studies the problem of bulk data transfer between a pair of moving vehicles as they encounter each other, using IEEE 802.11p (Dedicated Short Range Communication, or DSRC) radios. Compared to the well-studied “Drive-Through Internet” problem, enabling efficient, reliable and fair data transfer between two fast-moving vehicles can be even more challenging because the effective link duration is halved, the vehicle-to-vehicle channel is unpredictable, and DSRC is a new technology. We design an Encounter Transfer Protocol (ETP) suited to this vehicle-encounter case. We introduce two new components that improve data transfer performance in such a challenging environment: (1) we advocate an enhanced window adjustment policy called BIBD (Bimodal Increase, Bimodal Decrease), which quickly adapts to fast-changing channel conditions and stabilizes afterwards; we experimentally demonstrate that the BIBD policy outperforms other policies, including the commonly used AIMD policy. (2) We also propose a variety of enhancement techniques that not only fully utilize the precious link duration of a vehicle encounter but also aggressively compensate for packet drops caused by the fading channel. Using a small fleet of DSRC-equipped vehicles, we experimentally demonstrate that ETP at least doubles the throughput of TCP when both vehicles are moving, and improves performance by about 20-50% when only one vehicle is moving. Our experiments also show that ETP allocates bandwidth resources fairly under stressful network congestion scenarios.
【Keywords】: fading channels; mobile communication; transport protocols; wireless LAN; BIBD policy; DSRC-equipped vehicle; ETP; IEEE 802.11p; bimodal increase-bimodal decrease policy; bulk data transfer; dedicated short range communication; encounter transfer protocol; fading channel; opportunistic vehicle communication; vehicle-to-vehicle channel; window adjustment policy; Delay; Internet; Protocols; Servers; Throughput; Vehicles; Wireless communication
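The abstract does not spell out the BIBD update rule, so as a hedged reference point here is the AIMD baseline it is compared against: additive window growth on success, multiplicative back-off on loss. The parameters `a`, `b`, `wmax` and the event encoding are assumptions of this sketch:

```python
def aimd(events, w0=1.0, wmax=64.0, a=1.0, b=0.5):
    """Additive-Increase Multiplicative-Decrease window trace.
    events: iterable of 'ack' (success) or 'loss'. Returns window sizes."""
    w, trace = w0, []
    for e in events:
        if e == "ack":
            w = min(wmax, w + a)       # probe for bandwidth additively
        else:
            w = max(1.0, w * b)        # back off multiplicatively on loss
        trace.append(w)
    return trace

trace = aimd(["ack"] * 10 + ["loss"] + ["ack"] * 5)
print(trace[9], trace[10], trace[15])
```

The slow additive recovery after each halving is exactly what hurts AIMD over a short, fast-changing vehicle-encounter link, and what a bimodal policy like BIBD is designed to avoid.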
【Paper Link】 【Pages】:2210-2218
【Authors】: Wentao Huang ; Xinbing Wang
【Abstract】: There has been recent interest within the networking research community in understanding how performance scales in cognitive networks with n overlapping primary nodes and m secondary nodes. Two important metrics, throughput and delay, are studied in this paper. We first propose a simple and extendable decision model, the hybrid protocol model, for the secondary nodes to exploit spatial gaps among primary transmissions for frequency reuse. A framework for general cognitive networks is then established on the hybrid protocol model to analyze the occurrence of transmission opportunities for secondary nodes. We show that when the transmission range of the secondary network is smaller in order than that of the primary network, as long as the primary network operates in a generalized round-robin TDMA fashion, the hybrid protocol model suffices to guide the secondary network to achieve the same throughput and delay scaling as a standalone network, without harming the transmissions of the primary network. Our approach is general in the sense that we make only a few weak assumptions about both networks, and it therefore yields a wide variety of results. We show that secondary networks can achieve the same order of throughput and delay as standalone networks when primary networks are classic static networks, networks with random walk mobility, hybrid networks, multicast networks, hierarchically cooperative networks, or clustered networks. Our work presents a relatively complete picture of the performance scaling of cognitive networks and provides fundamental insight into their design.
【Keywords】: cognitive radio; time division multiple access; classic static networks; clustered networks; cognitive networks; decision model; delay scaling; hierarchically cooperative networks; hybrid networks; hybrid protocol model; multicast networks; random walk mobility; round-robin TDMA; Ad hoc networks; Delay; Interference; Protocols; Signal to noise ratio; Throughput; Transmitters
【Paper Link】 【Pages】:2219-2227
【Authors】: Lara B. Deek ; Xia Zhou ; Kevin C. Almeroth ; Haitao Zheng
【Abstract】: Online spectrum auctions offer ample flexibility for bidders to request and obtain spectrum on the fly. Such flexibility, however, opens up new vulnerabilities to bidder manipulation. Aside from rigging their bids, selfish bidders can falsely report their arrival times to game the system and obtain unfair advantage over others. Such time-based cheating is easy to perform yet severely damages auction performance. We propose Topaz, a truthful online spectrum auction design that distributes spectrum efficiently while discouraging bidders from misreporting their bids or arrival times. Topaz makes three key contributions. First, Topaz applies a 3D bin packing mechanism to distribute spectrum across time, space and frequency, exploiting spatial and temporal reuse to improve allocation efficiency. Second, Topaz enforces truthfulness using a novel temporally-smoothed critical-value-based pricing. By capturing the temporal and spatial dependency among bidders who arrive in sequence, this pricing effectively diminishes gains from bid- and/or time-cheating. Finally, Topaz offers a “scalable” winner preemption to address the uncertainty of future arrivals at each decision time, which significantly boosts auction revenue. We analytically prove Topaz's truthfulness, which requires neither knowledge of bidder behavior nor an optimal spectrum allocation. Using empirical arrival and bidding models, we perform simulations to demonstrate the efficiency of Topaz. We show that proper winner preemption improves auction revenue by 45-65% at a minimal cost in spectrum utilization.
【Keywords】: Internet; bin packing; electronic commerce; frequency allocation; pricing; 3D bin packing mechanism; Topaz; allocation efficiency; auction performance; auction revenue; bidder behavior; bidder manipulation; distribute spectrum across time; online spectrum auctions; optimal spectrum allocation; scalable winner preemption; selfish bidders; spatial dependency; spectrum on-the-fly; spectrum utilization; tackling bid; temporal dependency; temporal-smoothed critical value based pricing; time-based cheating; time-cheating; truthful online spectrum auction design; Delay; Pricing; Resists; Resource management; Three dimensional displays; Time frequency analysis; Uncertainty
【Paper Link】 【Pages】:2228-2236
【Authors】: Lei Yang ; Hongseok Kim ; Junshan Zhang ; Mung Chiang ; Chee-Wei Tan
【Abstract】: Market-based mechanisms offer promising approaches for spectrum access in cognitive radio networks. In this paper, we focus on two market models, one with a monopoly primary user (PU) market and the other with a multiple PU market, where each PU sells its temporarily unused spectrum to secondary users (SUs). We propose a pricing-based spectrum trading mechanism that enables SUs to contend for channel usage by random access, in a distributed manner, which naturally mitigates the complexity and time overhead associated with centralized scheduling. For the monopoly PU market model, we first consider SUs contending via slotted Aloha. The revenue maximization problems here are nonconvex. We first characterize the Pareto optimal region, and then obtain a Pareto optimal solution that maximizes the SUs' throughput subject to the SUs' budget constraints. To mitigate the spectrum underutilization due to the “price of contention,” we revisit the problem where SUs contend via CSMA, and show that spectrum utilization is enhanced, resulting in higher revenue. When the PU's unused spectrum is a control parameter, we study further the tradeoff between the PU's utility and its revenue. For the multiple PU market model, we cast the competition among PUs as a three-stage Stackelberg game, where each SU selects a PU's channel to maximize its throughput. We characterize the Nash equilibria, in terms of access prices and the spectrum offered to SUs. Our findings reveal that the number of equilibria exhibits a phase transition phenomenon, in the sense that when the number of PUs is greater than a threshold, there exist infinitely many equilibria; otherwise, there exists a unique Nash equilibrium, where the access prices and spectrum opportunities are determined by the budgets/elasticity of SUs and the utility level of PUs.
【Keywords】: Pareto optimisation; carrier sense multiple access; cognitive radio; game theory; pricing; radio access networks; radio networks; radio spectrum management; telecommunication control; wireless channels; CSMA; Nash equilibrium; Pareto optimal solution; cognitive radio networks; market-based mechanism; monopoly primary user market model; multiple PU market model; pricing-based spectrum access control; pricing-based spectrum trading mechanism; random access; revenue maximization problem; secondary users; slotted Aloha; spectrum utilization; three-stage Stackelberg game; Access control; Games; Monopoly; Multiaccess communication; Pricing; Resource management; Throughput
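The “price of contention” under slotted Aloha can be seen from the classic per-slot success probability n·p·(1-p)^(n-1): it peaks at p = 1/n and approaches 1/e ≈ 0.37 for large n, so roughly 63% of slots are wasted even at the optimal access probability. A quick numerical check (illustrative, not the paper's market model):

```python
def aloha_success(n, p):
    """Probability that exactly one of n slotted-Aloha contenders
    transmits in a slot, i.e. the slot carries a successful packet."""
    return n * p * (1 - p) ** (n - 1)

n = 20
# grid-search the access probability; the optimum lands at p = 1/n
best_p = max((i / 1000 for i in range(1, 1000)),
             key=lambda p: aloha_success(n, p))
print(round(best_p, 3), round(aloha_success(n, best_p), 3))
```

This inherent waste is why the abstract reports higher spectrum utilization, and hence higher revenue, when SUs contend via CSMA instead.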
【Paper Link】 【Pages】:2237-2245
【Authors】: Miao Pan ; Chi Zhang ; Pan Li ; Yuguang Fang
【Abstract】: The essential impediment to applying cognitive radio (CR) technology for spectrum utilization improvement lies in the uncertainty of licensed spectrum supply. In this paper, we investigate the joint routing and link scheduling problem of multi-hop CR networks under uncertain spectrum supply. We model the vacancy of licensed bands with a series of random variables, and introduce corresponding scheduling constraints and flow routing constraints for such a network. From a CR network planner/operator's point of view, we characterize the network with a pair of (α, β) parameters, and present a mathematical formulation with the goal of minimizing the required network-wide spectrum resource at the (α, β) level. Given a specified (α, β), we derive a lower bound for the optimization problem and develop a threshold-based coarse-grained fixing algorithm for a feasible solution. Simulation results show that i) for any (α, β) level, the proposed algorithm provides a near-optimal solution to the formulated NP-hard problem; and ii) the (α, β)-based solution is better than the expected-bandwidth-based one in terms of blocking ratio as well as spectrum utilization in CR networks.
【Keywords】: cognitive radio; computational complexity; mathematical analysis; scheduling; telecommunication network routing; NP-hard problem; coarse-grained fixing algorithm; cognitive radio networks; joint routing; licensed spectrum supply; link scheduling; mathematical formulation; multihop CR networks; optimization problem; uncertain spectrum supply; Bandwidth; Optical wavelength conversion; Positron emission tomography
【Paper Link】 【Pages】:2246-2254
【Authors】: Zhenjiang Li ; Mo Li ; Jiliang Wang ; Zhichao Cao
【Abstract】: We study ubiquitous data collection for mobile users in wireless sensor networks. People with handheld devices can easily interact with the network and collect data. We propose a novel approach for mobile users to collect network-wide data. The routing structure for data collection is incrementally updated with the movement of the mobile user. With this approach, we perform only a local modification to update the routing structure, while the routing performance remains bounded and controlled relative to the optimum. The proposed protocol is easy to implement. Our analysis shows that the proposed approach is scalable in maintenance overhead, efficient in routing performance, and provides continuous data delivery during user movement. We implement the proposed protocol in a prototype system and test its feasibility and applicability on a 49-node testbed. We further conduct extensive simulations to examine the efficiency and scalability of our protocol under varied network settings.
【Keywords】: routing protocols; wireless sensor networks; handheld devices; maintenance overheads; mobile users; network-wide data; protocol; routing structure; ubiquitous data collection; wireless sensor networks; Delay; Mobile communication; Mobile computing; Protocols; Routing; Vegetation; Wireless sensor networks
【Paper Link】 【Pages】:2255-2263
【Authors】: Bowu Zhang ; Xiuzhen Cheng ; Nan Zhang ; Yong Cui ; Yingshu Li ; Qilian Liang
【Abstract】: In this paper, we propose a novel compressive sensing (CS) based approach for sparse target counting and positioning in wireless sensor networks. While this is not the first work on applying CS to count and localize targets, it is the first to rigorously justify the validity of the problem formulation. Moreover, we propose a novel greedy matching pursuit algorithm (GMP) that complements the well-known signal recovery algorithms in CS theory and prove that GMP can accurately recover a sparse signal with a high probability. We also propose a framework for counting and positioning targets from multiple categories, a novel problem that has never been addressed before. Finally, we perform a comprehensive set of simulations whose results demonstrate the superiority of our approach over the existing CS and non-CS based techniques.
【Keywords】: greedy algorithms; iterative methods; probability; sensor placement; signal detection; wireless sensor networks; CS; GMP; compressive sensing; greedy matching pursuit algorithm; probability; sensor localization; signal recovery algorithms; sparse target counting; wireless sensor networks; Algorithm design and analysis; Argon; Compressed sensing; Matching pursuit algorithms; Monitoring; Sensors; Sparse matrices; compressive sensing; sensor networks; target counting; target localization
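Greedy matching pursuit in general (the paper's GMP variant is not specified in the abstract) repeatedly selects the dictionary atom most correlated with the residual and peels off its contribution. A dependency-free sketch with orthonormal atoms, where two iterations recover a 2-sparse signal exactly; all names are illustrative:

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def matching_pursuit(y, atoms, iters=3):
    """Greedy sparse recovery: repeatedly pick the (unit-norm) atom most
    correlated with the residual and subtract its contribution."""
    residual = list(y)
    coeffs = {}
    for _ in range(iters):
        j = max(range(len(atoms)), key=lambda k: abs(dot(residual, atoms[k])))
        c = dot(residual, atoms[j])
        coeffs[j] = coeffs.get(j, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, atoms[j])]
    return coeffs, residual

# orthonormal atoms in R^3; y = 2*e0 + 5*e2 (a 2-sparse signal)
atoms = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
y = [2.0, 0.0, 5.0]
coeffs, residual = matching_pursuit(y, atoms, iters=2)
print(coeffs, max(abs(r) for r in residual))
```

In the target-counting setting, the atoms would be measurement responses of candidate target positions rather than the trivial basis used here.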
【Paper Link】 【Pages】:2264-2272
【Authors】: Zuoming Yu ; Jin Teng ; Xiaole Bai ; Dong Xuan ; Weijia Jia
【Abstract】: In this paper, we address a new unexplored problem - what are the optimal patterns to achieve connected coverage in wireless networks with directional antennas. As their name implies, directional antennas can focus their transmission energy in a certain direction. This feature leads to lower cross-interference and larger communication distance. It has been shown that with proper scheduling mechanisms, directional antennas may substantially improve networking performance in wireless networks. In this paper, we propose a set of optimal patterns to achieve full coverage and global connectivity under two different antenna models, i.e., the sector model and the knob model. We also introduce with detailed analysis several fundamental theorems and conjectures. Finally, we examine a more realistic physical model, where there might be strong interference, and both the sensing range and the communication range might be irregular. The results show that our designed patterns work well even in unstable and fickle physical environments.
【Keywords】: directive antennas; radio networks; radiofrequency interference; antenna knob model; antenna sector model; communication distance; communication range; connected coverage; cross-interference; directional antenna; global connectivity; networking performance; scheduling mechanism; sensing range; transmission energy; wireless network; Brain models; Directional antennas; Directive antennas; Sensors; Wireless networks
【Paper Link】 【Pages】:2273-2281
【Authors】: Shengbo Chen ; Prasun Sinha ; Ness B. Shroff ; Changhee Joo
【Abstract】: In this paper, we investigate the problem of maximizing the throughput over a finite-horizon time period for a sensor network with energy replenishment. The finite-horizon problem is important and challenging because it necessitates optimizing metrics over the short term rather than metrics averaged over a long period of time. Unlike in the infinite-horizon problem, inefficiencies cannot be made to vanish to infinitesimally small values, so the finite-horizon problem requires more delicate control. The finite-horizon throughput optimization problem can be formulated as a convex optimization problem, but turns out to be highly complex. The complexity is brought about by the “time coupling property,” which implies that current decisions can influence future performance. To address this problem, we employ a three-step approach. First, we focus on the throughput maximization problem for a single node with renewable energy, assuming that the replenishment rate profile for the entire finite-horizon period is known in advance. An energy allocation scheme that is equivalent to computing a shortest path in a simply-connected space is developed and proven to be optimal. We then relax the assumption that the future replenishment profile is known and develop an online algorithm that guarantees a fraction of the optimal throughput. Motivated by these results, we propose a low-complexity heuristic distributed scheme for rechargeable sensor networks, called NetOnline. We prove that this heuristic scheme is optimal under homogeneous replenishment profiles. Further, in more general settings, we show via simulations that NetOnline significantly outperforms a state-of-the-art infinite-horizon based scheme, and for certain configurations using data collected from a testbed sensor network, it achieves empirical performance close to optimal.
【Keywords】: optimisation; telecommunication network routing; wireless sensor networks; NetOnline; convex optimization problem; energy replenishment; finite-horizon energy allocation; finite-horizon throughput optimization problem; finite-horizon time period; low-complexity heuristic distributed scheme; rechargeable sensor networks; replenishment profile; replenishment rate profile; routing scheme; testbed sensor network; time coupling property; Routing
【Paper Link】 【Pages】:2282-2290
【Authors】: Nam P. Nguyen ; Thang N. Dinh ; Ying Xuan ; My T. Thai
【Abstract】: Social networks exhibit a very special property: community structure. Understanding the network community structure offers great advantages. It not only provides helpful information for developing more social-aware strategies for social network problems but also promises a wide range of applications enabled by mobile networking, such as routing in Mobile Ad Hoc Networks (MANETs) and worm containment in cellular networks. Unfortunately, understanding this structure is very challenging, especially in dynamic social networks where social activities and interactions evolve rapidly. Can we quickly and efficiently identify the network community structure? Can we adaptively update the network structure based on previously known information instead of recomputing from scratch? In this paper, we present Quick Community Adaptation (QCA), an adaptive modularity-based method for identifying and tracing the community structure of dynamic online social networks. Our approach not only quickly and efficiently updates network communities through a series of changes, using only the structures identified from previous network snapshots, but also traces the evolution of community structure over time. To illustrate the effectiveness of our algorithm, we extensively test QCA on real-world dynamic social networks including the ENRON email network, the arXiv e-print citation network and the Facebook network. Finally, we demonstrate the applicability of our algorithm via a realistic application to routing strategies in MANETs. The comparative results reveal that social-aware routing strategies employing QCA as a community detection core outperform currently available methods.
【Keywords】: mobile ad hoc networks; social networking (online); telecommunication network routing; ENRON email network; Facebook network; MANET routing strategy; adaptive algorithms; adaptive modularity-based method; arXiv e-print citation network; cellular networks; community structure detection; dynamic online social networks; dynamic social networks; mobile ad hoc network routing; network community structure; quick community adaptation method; social-aware routing strategy; social-aware strategy; worm containments; Communities; Electronic mail; Heuristic algorithms; Joining processes; Mobile computing; Periodic structures; Social network services
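QCA is modularity-based; the modularity Q that such methods maximize can be computed directly from Newman's definition (fraction of intra-community edges minus its expectation under degree-preserving random rewiring). The small graph and partition below are illustrative, not from the paper.

```python
def modularity(edges, community):
    """Newman modularity Q of a partition.  `edges` are undirected
    node pairs, `community` maps node -> community label."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = 0.0
    for u, v in edges:            # A_ij term: each intra edge adds 1/m
        if community[u] == community[v]:
            q += 1.0 / m
    for u in deg:                 # expected-edges term k_i k_j / (2m)^2
        for v in deg:
            if community[u] == community[v]:
                q -= deg[u] * deg[v] / (4.0 * m * m)
    return q

# Two triangles joined by a single bridge edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
split = {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B', 5: 'B'}
assert abs(modularity(edges, split) - 5 / 14) < 1e-9
# Putting everything in one community always gives Q = 0.
assert abs(modularity(edges, {n: 'X' for n in range(6)})) < 1e-9
```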
【Paper Link】 【Pages】:2291-2299
【Authors】: Shaojie Tang ; Jing Yuan ; XuFei Mao ; Xiang-Yang Li ; Wei Chen ; Guojun Dai
【Abstract】: In this paper, we study two tightly coupled topics in online social networks (OSN): relationship classification and information propagation. The links in a social network often reflect social relationships among users. In this work, we first investigate identifying the relationships among social network users based on certain social network properties and limited pre-known information. Social networks have been widely used for online marketing, where a critical step is propagation maximization: choosing a small set of seeds for marketing. Based on the social relationships learned in the first step, we show how to exploit these relationships to maximize marketing efficacy. We evaluate our approach on large-scale real-world data from the Renren network, showing that our relationship classification and propagation maximization algorithms perform well in practice.
【Keywords】: marketing data processing; pattern classification; social networking (online); Renren network; information propagation; online marketing; online social network; propagation maximization; relationship classification; social relationship; Accuracy; Communities; Educational institutions; Games; Labeling; Social network services; Software
【Paper Link】 【Pages】:2300-2308
【Authors】: Michael Sirivianos ; Kyungbaek Kim ; Xiaowei Yang
【Abstract】: We propose SocialFilter, a trust-aware collaborative spam mitigation system. Our proposal enables nodes with no email classification functionality to query the network on whether a host is a spammer. It employs Sybil-resilient trust inference to weigh the reports concerning spamming hosts that collaborating spam-detecting nodes (reporters) submit to the system. It weighs the spam reports according to the trustworthiness of their reporters to derive a measure of the system's belief that a host is a spammer. SocialFilter is the first collaborative unwanted traffic mitigation system that assesses the trustworthiness of spam reporters by both auditing their reports and by leveraging the social network of the reporters' administrators. The design and evaluation of our proposal offers us the following lessons: a) it is plausible to introduce Sybil-resilient Online-Social-Network-based trust inference mechanisms to improve the reliability and the attack-resistance of collaborative spam mitigation; b) using social links to obtain the trustworthiness of reports concerning spammers can result in comparable spam-blocking effectiveness with approaches that use social links to rate-limit spam (e.g., Ostra); c) unlike Ostra, in the absence of reports that incriminate benign email senders, SocialFilter yields no false positives.
【Keywords】: inference mechanisms; social networking (online); unsolicited e-mail; SocialFilter; Sybil-resilient Online-Social-Network-based trust inference mechanisms; Sybil-resilient trust inference; collaborating spam-detecting nodes; collaborative spam mitigation attack-resistance; collaborative spam mitigation reliability; collaborative unwanted traffic mitigation system; email classification functionality; network querying; social links; social network; social trust; trust-aware collaborative spam mitigation system; Collaboration; Electronic mail; Humans; Peer to peer computing; Proposals; Routing; Social network services
【Paper Link】 【Pages】:2309-2317
【Authors】: Youngsang Shin ; Minaxi Gupta ; Steven A. Myers
【Abstract】: Forums on the Web are increasingly spammed by miscreants in order to attract visitors to their (often malicious) websites. In this paper, we study the prevalence of forum spamming and find that Internet users are at a high risk of encountering forums with spam links posted on them. To mitigate the problem, we examine the characteristics of 286 days of forum spam posted at a research blog and develop light-weight features based on spammers' IP, commenting activity and the anatomy of their posts. We find that an SVM classifier trained on these features can achieve a 99.81% precision and 92.82% recall in identifying forum spam.
【Keywords】: Internet; support vector machines; unsolicited e-mail; IP; Internet; SVM classifier; Web; forum spamming; research blog; Blogs; IP networks; Information services; Internet; Search engines; Software; Unsolicited electronic mail
【Paper Link】 【Pages】:2318-2326
【Authors】: Krishna P. Jagannathan ; Mihalis G. Markakis ; Eytan Modiano ; John N. Tsitsiklis
【Abstract】: We investigate the asymptotic behavior of the steady-state queue length distribution under generalized max-weight scheduling in the presence of heavy-tailed traffic. We consider a system consisting of two parallel queues, served by a single server. One of the queues receives heavy-tailed traffic, and the other receives light-tailed traffic. We study the class of throughput optimal max-weight-α scheduling policies, and derive an exact asymptotic characterization of the steady-state queue length distributions. In particular, we show that the tail of the light queue distribution is heavier than a power-law curve, whose tail coefficient we obtain explicitly. Our asymptotic characterization also shows that the celebrated max-weight scheduling policy leads to the worst possible tail of the light queue distribution, among all non-idling policies. Motivated by the above `negative' result regarding the max-weight-α policy, we analyze a log-max-weight (LMW) scheduling policy. We show that the LMW policy guarantees an exponentially decaying light queue tail, while still being throughput optimal.
【Keywords】: queueing theory; scheduling; telecommunication networks; telecommunication traffic; generalized max-weight scheduling; heavy-tailed traffic; light-tailed traffic; log-max-weight scheduling policy; max-weight-α scheduling policy; parallel queues; power-law curve; queue length asymptotics; steady-state queue length distribution; Heating
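The two policies contrasted above reduce to weight comparisons between the queues. The sketch below is a simplified reading, not the paper's precise policies: max-weight-α serves the queue maximizing q_i^α_i, while LMW compresses the heavy-tailed queue's weight to log(1 + q_H), so the light queue is served unless the heavy backlog is exponentially larger.

```python
import math

def max_weight_alpha(queues, alphas):
    """Serve the queue maximizing q_i ** alpha_i (max-weight-alpha)."""
    weights = [q ** a for q, a in zip(queues, alphas)]
    return weights.index(max(weights))

def log_max_weight(q_light, q_heavy):
    """LMW sketch (simplified reading): the heavy queue's weight is
    compressed logarithmically, strongly favoring the light queue."""
    return 'light' if q_light >= math.log(1 + q_heavy) else 'heavy'

assert max_weight_alpha([10, 3], [1, 2]) == 0   # 10 > 3**2 = 9
assert log_max_weight(5, 100) == 'light'        # 5 > log(101) ~ 4.62
assert log_max_weight(2, 100) == 'heavy'        # 2 < log(101)
```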
【Paper Link】 【Pages】:2327-2335
【Authors】: Peng-Jun Wan ; Chao Ma ; Zhu Wang ; Boliu Xu ; Minming Li ; Xiaohua Jia
【Abstract】: Link scheduling is a fundamental design issue in multihop wireless networks. All existing link scheduling algorithms require precise information about the positions, and/or communication/interference radii, of all nodes. For practical networks, it is not only difficult or expensive to obtain these parameters, but also often impossible to get their precise values. A link scheduling computed from imprecise values of these parameters may fail to guarantee the same approximation bounds as one computed from precise values. Therefore, the existing link scheduling algorithms lack performance robustness. In this paper, we propose a robust link scheduling, which can be easily computed using only the information of whether a given pair of links conflict, and is therefore robust. In addition, our link scheduling does not compromise the approximation bound and can sometimes achieve a better one. In particular, under the 802.11 interference model, its approximation bound is 16 in general and 6 with uniform interference radii, an improvement over the respective best-known approximation bounds of 23 and 7.
【Keywords】: approximation theory; radio links; radio networks; scheduling; approximation bound; multihop wireless network; robust link scheduling; weighted wireless link scheduling; Approximation algorithms; Approximation methods; IEEE 802.11 Standards; Interference; Processor scheduling; Protocols; Schedules; Link scheduling; approximation algorithm; interference; latency; robustness
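The robustness claim rests on the scheduler needing only pairwise conflict information. A minimal sketch of a scheduler with exactly that interface is first-fit slot assignment over the conflict graph; this illustrates the input model only, not the paper's approximation algorithm, and the link names are illustrative.

```python
def greedy_slot_assignment(links, conflicts):
    """Assign each link the smallest time slot unused by any
    conflicting link, using only pairwise conflict information.
    `conflicts` is a set of frozensets {a, b} of conflicting links."""
    slot = {}
    for l in links:
        used = {slot[m] for m in slot if frozenset((l, m)) in conflicts}
        s = 0
        while s in used:
            s += 1
        slot[l] = s
    return slot

links = ['e1', 'e2', 'e3', 'e4']
conflicts = {frozenset(p) for p in [('e1', 'e2'), ('e2', 'e3'), ('e1', 'e3')]}
slots = greedy_slot_assignment(links, conflicts)
# e1, e2, e3 conflict pairwise -> three distinct slots; e4 reuses slot 0.
assert len({slots['e1'], slots['e2'], slots['e3']}) == 3
assert slots['e4'] == 0
```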
【Paper Link】 【Pages】:2336-2344
【Authors】: V. J. Venkataramanan ; Xiaojun Lin
【Abstract】: We consider the problem of link scheduling for efficient convergecast in a wireless system. While there have been many results on scheduling algorithms that attain the maximum possible throughput in such a system, there have been few results that provide scheduling algorithms that are optimal in terms of some quality-of-service metric such as the probability that the end-to-end buffer usage exceeds a large threshold. Using a large deviations framework, we design a novel and low complexity algorithm that attains the optimal asymptotic decay rate for the overflow probability of the sum-queue (i.e. the total queue backlog in the entire system) as the overflow threshold becomes large. Simulations show that this algorithm has better performance than well known algorithms such as the standard back-pressure algorithm and the multihop version of greedy maximal matching (combined with back-pressure). Our proposed algorithm performs better not only in terms of the asymptotic decay rate at large overflow thresholds, but also in terms of the actual probability of overflow for practical range of overflow thresholds.
【Keywords】: quality of service; queueing theory; radio networks; scheduling; end-to-end buffer usage; greedy maximal matching; low-complexity scheduling; optimal asymptotic decay rate; overflow probability; quality-of-service metric; queue backlog; sum-queue minimization; wireless convergecast; Algorithm design and analysis; Equations; Heuristic algorithms; Interference; Lyapunov methods; Scheduling algorithm; Topology
【Paper Link】 【Pages】:2345-2353
【Authors】: Ching-Ming Lien ; Cheng-Shang Chang ; Jay Cheng ; Duan-Shin Lee
【Abstract】: In this paper, we consider the problem of maximizing the throughput of a discrete-time wireless network, where only certain sets of links can transmit simultaneously. It is well known that each such set of links can be represented by a configuration vector, and the convex hull of the configuration vectors determines the capacity region of the wireless network. In the literature, packet scheduling policies that stabilize any admissible traffic in the capacity region are mostly related to the maximum weighted matching algorithm (MWM), which identifies the most suitable configuration vector in every time slot. Unlike the MWM algorithm, we propose a dynamic frame sizing (DFS) algorithm that also stabilizes any admissible traffic in the capacity region. The DFS algorithm, an extension of our previous work for wired networks, does not have a fixed frame size. To determine the frame size, an optimization problem needs to be solved at the beginning of each frame. Once the frame size is determined, a hierarchical smooth schedule is devised to determine both the schedule for configuration vectors and the schedule for multicast traffic flows in each link. Under the assumption of Bernoulli arrival processes with admissible rates, we show that the number of packets of each multicast traffic flow inside the wireless network is bounded above by a constant, so only a finite internal buffer needs to be implemented in each link of such a wireless network.
【Keywords】: queueing theory; radio networks; Bernoulli arrival process; DFS algorithm; MWM algorithm; configuration vector; convex hull; discrete-time wireless network; dynamic frame sizing algorithm; finite internal buffer; flow queueing; hierarchical smooth schedule; maximum weighted matching algorithm; multicast traffic flow; packet scheduling policy; Heuristic algorithms; Mathematical model; Partitioning algorithms; Schedules; Scheduling algorithm; Upper bound; Wireless networks
【Paper Link】 【Pages】:2354-2362
【Authors】: Myungjin Lee ; Mohammad Y. Hajjat ; Ramana Rao Kompella ; Sanjay G. Rao
【Abstract】: The Internet has significantly evolved in the number and variety of applications. Network operators need mechanisms to constantly monitor and study these applications. Given that modern applications routinely consist of several flows, potentially to many different destinations, existing measurement approaches such as Sampled NetFlow sample only a few flows per application session. To address this issue, in this paper we introduce the RelSamp architecture, which implements the notion of related sampling, where flows that are part of the same application session are given higher sampling probability. In our evaluation using real traces, we show that RelSamp achieves 5-10x more flows per application session compared to Sampled NetFlow for the same effective number of sampled packets. We also show that behavioral and statistical classification approaches such as BLINC, SVM and C4.5 achieve up to 50% better classification accuracy compared to Sampled NetFlow, while not breaking existing management tasks such as volume estimation.
【Keywords】: Internet; statistical analysis; Internet; RelSamp; application structure; sampled NetFlow; sampled flow measurements; statistical classification; Accuracy; Estimation; IP networks; Inspection; Internet; Monitoring; Random variables
【Paper Link】 【Pages】:2363-2371
【Authors】: Joel Sommers ; Rhys Alistair Bowden ; Brian Eriksson ; Paul Barford ; Matthew Roughan ; Nick G. Duffield
【Abstract】: Experiments on diverse topics such as network measurement, management and security are routinely conducted using empirical flow export traces. However, the availability of empirical flow traces from operational networks is limited, and they frequently come with significant restrictions. Furthermore, empirical traces typically lack critical meta-data (e.g., labeled anomalies), which reduces their utility in certain contexts. In this paper, we describe fs: a first-of-its-kind tool for automatically generating representative flow export records as well as basic SNMP-like router interface counts. fs generates measurements for a target network topology with specified traffic characteristics. The resulting records for each router in the topology have byte, packet and flow characteristics that are representative of what would be seen in a live network. fs also includes the ability to inject different types of anomalous events with precisely defined characteristics, thereby enabling evaluation of proposed attack and anomaly detection methods. We validate fs by comparing it with the ns-2 simulator, which targets accurate recreation of packet-level dynamics in small network topologies. We show that data generated by fs are virtually identical to those generated by ns-2, except over small time scales (below 1 second). We also show that fs is highly efficient, thus enabling test sets to be created for large topologies. Finally, we demonstrate the utility of fs through an assessment of anomaly detection algorithms, highlighting the need for flexible, scalable generation of network-wide measurement data with known ground truth.
【Keywords】: telecommunication network routing; telecommunication network topology; telecommunication traffic; SNMP-like router interface; anomaly detection; attack detection; empirical flow trace; flow export record; network-wide flow record generation; ns-2 simulator; operational network; packet-level dynamics; small network topology; traffic characteristics; Computational modeling; Detection algorithms; Generators; Modulation; Network topology; Throughput; US Department of Transportation
【Paper Link】 【Pages】:2372-2380
【Authors】: Angela Wang ; Cheng Huang ; Jin Li ; Keith W. Ross
【Abstract】: To optimize network performance, cloud service providers have a number of options available to them, including co-locating production servers in well-connected Internet eXchange (IX) points, deploying data centers in additional locations, or contracting with external Content Distribution Networks (CDNs). Some of these options can be very costly, and some may or may not improve performance significantly. Cloud service providers would clearly like to be able to estimate, a priori, the performance gain of the various options before sinking significant capital expenditures into major infrastructure changes. In this paper we take a measurement-oriented approach and develop methodologies that accurately predict the performance improvement from making major infrastructure changes. Our methodologies leverage active web content, existing large-scale CDN infrastructures, and the SpeedTest network. We then apply our methodologies and the CloudBeacon tool to the problem of locating satellite data centers throughout the world. The results show that for North America, a deployment limited to 11 locations is sufficient. However, in order to provide good latency and throughput performance on a global scale, somewhere between 36 and 72 cloud-service locations with good peering connections is most likely needed.
【Keywords】: cloud computing; CloudBeacon tool; Internet exchange point; SpeedTest network; cloud service providers; content distribution networks; hypothetical cloud service deployment; Australia; Servers
【Paper Link】 【Pages】:2381-2389
【Authors】: Sourabh Jain ; Yingying Chen ; Zhi-Li Zhang
【Abstract】: In this paper we propose VIRO - a novel virtual identifier (Id) routing paradigm for future networks. The objective is three-fold. First, VIRO directly addresses the challenges faced by traditional layer-2 technologies such as Ethernet, while retaining its simplicity. Second, it provides a uniform convergence layer that integrates and unifies the routing and forwarding performed by the traditional layer-2 and layer-3, as prescribed by the traditional local-area/wide-area network dichotomy. Third, and perhaps most importantly, VIRO decouples routing from addressing, and is thus namespace-independent. The key idea in our design is to introduce a topology-aware, structured virtual id (vid) space onto which both physical identifiers and higher-layer addresses/names are mapped. VIRO completely eliminates network-wide flooding in both the data and control planes, and is thus highly scalable and robust. Furthermore, VIRO effectively localizes failures, and possesses built-in mechanisms for fast rerouting and load-balancing.
【Keywords】: Internet; telecommunication network routing; telecommunication network topology; VIRO; convergence layer; namespace independent virtual Id routing; network-wide flooding; structured virtual Id space; topology-aware virtual Id space; virtual identifier; Binary trees; Logic gates; Network topology; Robustness; Routing; Switches; Topology; Addressing & Namespace management; Future Networks; Routing; Scalability
【Paper Link】 【Pages】:2390-2398
【Authors】: Talmai Oliveira ; Srisudha Mahadevan ; Dharma P. Agrawal
【Abstract】: With the constant evolution of wireless communication access technologies, there is a clear trend for mobile clients (MCs) to be equipped with multiple interfaces for simultaneous access to different types of networks, an environment termed a heterogeneous wireless network. However, in order to achieve a desired quality of service while satisfying the user's requirements, MCs must take advantage of the inherent characteristics of these access technologies and rely on an adaptable decision-making mechanism for data forwarding. Unfortunately, route selection satisfying multiple constraints has proven to be NP-Complete, and although various heuristic algorithms have been proposed, they assume that the network state information is static and that both the user's and the network's constraints are clearly specified. This paper focuses on the imprecise and dynamic nature of the network conditions while satisfying multiple, often conflicting, constraints. A fuzzy logic model is proposed that translates the uncertainty of the network conditions into accurate values using fuzzy logic tools and techniques. We then perform a thorough analysis of the metric values offered by various wireless technologies and derive crisp values for the imprecise network parameters. A sensitivity analysis reflecting the performance of, and relative importance of, metrics on each network is carried out. These results are shown to impact the user's decision in handing over data to the appropriate interface.
【Keywords】: computational complexity; decision making; fuzzy logic; heuristic programming; quality of service; radio networks; sensitivity analysis; NP-complete problem; data forwarding; decision making mechanism; fuzzy logic model; heterogeneous wireless networks; heuristic algorithms; mobile clients; network state information; network uncertainty handling; quality of service; sensitivity analysis; wireless communication access technology; Bandwidth; Delay; Fuzzy logic; Stability analysis; Throughput; Uncertainty; Fuzzy Logic; Heterogeneous Multiple Attribute Decision Making; Multiple Constraints; SAW; TOPSIS; Uncertainty; Wireless Networks
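As a minimal sketch of how fuzzy logic turns imprecise metrics into a crisp selection score: each metric is mapped through a membership function and the memberships are combined into one number per candidate network. The triangular membership functions, weights and metric ranges below are illustrative choices, not the paper's model.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership: rises on [a, b], falls on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def network_score(bandwidth_mbps, delay_ms):
    """Fuzzified 'goodness' of a candidate network: memberships for
    high bandwidth and low delay, combined by an illustrative
    weighted sum (weights 0.6 / 0.4 are assumptions)."""
    high_bw = tri(bandwidth_mbps, 0, 50, 100)   # peaks at 50 Mbps
    low_delay = tri(delay_ms, -1, 0, 100)       # best near 0 ms
    return 0.6 * high_bw + 0.4 * low_delay

wifi = network_score(bandwidth_mbps=50, delay_ms=20)
cellular = network_score(bandwidth_mbps=10, delay_ms=80)
assert wifi > cellular      # the WLAN wins on both fuzzy criteria here
```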
【Paper Link】 【Pages】:2399-2407
【Authors】: Mikko Särelä ; Christian Esteve Rothenberg ; Tuomas Aura ; András Zahemszky ; Pekka Nikander ; Jörg Ott
【Abstract】: Several recently proposed multicast protocols use in-packet Bloom filters to encode multicast trees. These mechanisms are in principle highly scalable because no per-flow state is required in the routers and because routing decisions can be made efficiently by simply checking for the presence of outbound links in the filter. Yet, the viability of previous approaches is limited by the possibility of forwarding anomalies caused by false positives inherent in Bloom filters. This paper explores such anomalies, namely (1) packet storms, (2) forwarding loops and (3) flow duplication. We propose stateless solutions that increase the robustness and the scalability of Bloom filter-based multicast protocols. In particular, we show that the parameters of the filter need to be varied to guarantee the stability of the packet forwarding, and we present a bit permutation technique that effectively prevents both accidental and maliciously created anomalies. We evaluate our solutions in the context of BloomCast, a source-specific inter-domain multicast protocol, using analytical methods and simulations.
【Keywords】: multicast protocols; telecommunication network routing; Bloom filter-based multicast protocols; BloomCast; anomaly forwarding; in-packet Bloom filters; multicast tree encoding; packet forwarding; routing decisions; source-specific inter-domain multicast protocol; Bandwidth; History; Internet; Peer to peer computing; Routing; Routing protocols; Storms
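The forwarding rule described above (check for the presence of each outbound link in the filter) reduces to a bitmask test. In the minimal sketch below, the filter width, hash count and the salt-based re-keying standing in for the paper's per-hop bit permutation are all illustrative choices.

```python
import hashlib

NBITS, NHASH = 64, 3

def link_mask(link_id, salt=0):
    """Bloom-filter mask of a link: NHASH bit positions derived from a
    hash of the link id.  `salt` loosely models per-hop re-keying, a
    stand-in for the paper's bit permutation against stale filters."""
    mask = 0
    for i in range(NHASH):
        h = hashlib.sha256(f'{salt}:{link_id}:{i}'.encode()).digest()
        mask |= 1 << (int.from_bytes(h[:4], 'big') % NBITS)
    return mask

def encode_tree(links, salt=0):
    """OR together the masks of all links in the multicast tree."""
    f = 0
    for l in links:
        f |= link_mask(l, salt)
    return f

def forward(f, link_id, salt=0):
    """Forward over a link iff all of its bits are set in the filter.
    False positives (non-tree links that match) are inherent here."""
    m = link_mask(link_id, salt)
    return (f & m) == m

tree = ['a->b', 'b->c', 'b->d']
f = encode_tree(tree)
assert all(forward(f, l) for l in tree)   # tree links always match
```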
【Paper Link】 【Pages】:2408-2416
【Authors】: Ting Wang ; Yaling Yang
【Abstract】: This paper studies the problem of location privacy protection in a wireless LAN (WLAN) environment, where the received signal strength (RSS) at access points (APs) can potentially be obtained by adversaries to infer the location of a legitimate mobile station. We propose a two-step location privacy protection scheme using a linear smart antenna array on the mobile station. In the first step, the mobile station observes the arrangement of surrounding APs by moving around and estimating the path losses from itself to the APs. Based on the path loss information, in the second step, the mobile station optimizes the radiation pattern of its smart antenna so that its location privacy is protected while its communication quality is not affected. Two strategies are used in the radiation pattern optimization. The first strategy is to limit the number of APs in range of the mobile station to a safe level, so that there are not enough measurements from the APs to estimate the mobile station's location. If the first strategy is not possible, the mobile station falls back on the second strategy, in which its radiation pattern introduces maximum bias into any location estimation attempt so that the mobile station's true location is not revealed. Simulation results show that, compared with a traditional transmit power control (TPC) scheme, the first strategy significantly increases the probability of inadequate measurements for location computation. Simulation also demonstrates that the second strategy can significantly degrade the precision of the positioning system. In many cases, the degraded location precision is as coarse as the coverage range of the AP with which the mobile station is associated for communications. This essentially means that the second strategy can invalidate the use of RSS measurements for precise localization.
【Keywords】: adaptive antenna arrays; antenna radiation patterns; linear antenna arrays; mobile radio; probability; wireless LAN; AP; RSS localization system; TPC scheme; WLAN environment; access points; antenna pattern synthesis; linear smart antenna array; mobile station; radiation pattern optimization; transmit power control scheme; two-step location privacy protection; wireless LAN environment; Antenna measurements; Antenna radiation patterns; Estimation; Mobile antennas; Mobile communication; Privacy; Wireless LAN
【Paper Link】 【Pages】:2417-2425
【Abstract】: Avionics Full DupleX (AFDX) Switched Ethernet technology provides a deterministic network with guaranteed service to support real-time data transmission in real-world avionics applications. The determinism provides a worst-case upper bound of end-to-end transmission delays of virtual links (VLs) that are often assumed to be homogeneous and have similar transmission requirements. However, performing the analysis of end-to-end delays of heterogeneous flows remains an open problem. This paper derives end-to-end delay bounds of transmitting heterogeneous flows, including avionics, multimedia (video & audio) and best-effort data flows, in an AFDX network that uses Deficit Round Robin (DRR) scheduling policy on switch output ports. To this end, we transform scaled multi-type flows to a single representation of utilization, i.e., DRR quanta, to efficiently handle heterogeneous flows in a unified way and comprehensively study their end-to-end delays. We further compare the DRR-based scheduling policy with current avionics standards, i.e., FIFO and a static priority policy, in terms of transmission delays and fairness. Extensive experiments based on periodical and sporadic flows in an AFDX prototype show the efficacy and efficiency of our proposed schemes. To the best of our knowledge, this is the first work that analyzes end-to-end delays based on the DRR policy for heterogeneous flows in an AFDX network.
【Keywords】: avionics; data communication; local area networks; real-time systems; scheduling; AFDX; avionics full duplex; avionics network; deficit round robin; end-to-end heterogeneous flows; end-to-end transmission delays; real-time data transmission; scheduling design; switched Ethernet technology; virtual links; Aerospace electronics; Calculus; Delay; Real time systems; Scheduling; Servers; Switches
【Paper Link】 【Pages】:2426-2434
【Authors】: Junwei Huang ; Xian Pan ; Xinwen Fu ; Jie Wang
【Abstract】: Cyber crimes often involve complicated scenes. In this paper, we investigate unidentified crimes committed through anonymous communication networks. We developed a long Pseudo-Noise (PN) code based Direct Sequence Spread Spectrum (DSSS) flow marking technique for invisibly tracing suspect anonymous flows. By interfering with a sender's traffic and marginally varying its rate, an investigator can embed a secret spread spectrum signal into the sender's traffic. Each signal bit is modulated with a small segment of a long PN code. By tracing where the embedded signal goes, the investigator can trace the sender and receiver of the suspect flow despite the use of anonymous networks. Benefits of the long PN code include its resistance to previously discovered detection approaches. We may also use the vast number of long PN codes at different phases to conduct parallel traceback without worrying about interference between codes. Using a combination of analytical modeling and experiments on Anonymizer, we demonstrate the effectiveness of the long PN code based DSSS watermarking technique.
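The spreading and despreading steps can be sketched as follows: each watermark bit multiplies its own segment of one long PN code, the chips drive small rate variations, and the investigator recovers a bit by correlating the observed variations with the corresponding code segment. The code length, chips per bit, and noise pattern below are illustrative choices, not the paper's parameters.

```python
import random

def spread(bits, pn_code, chips_per_bit):
    """Map each watermark bit (+1/-1) onto its own segment of the long PN code."""
    out = []
    for i, b in enumerate(bits):
        seg = pn_code[i * chips_per_bit:(i + 1) * chips_per_bit]
        out.extend(b * c for c in seg)
    return out

def despread(chips, pn_code, chips_per_bit, n_bits):
    """Correlate each observed segment with the code to recover the bits."""
    bits = []
    for i in range(n_bits):
        seg = pn_code[i * chips_per_bit:(i + 1) * chips_per_bit]
        obs = chips[i * chips_per_bit:(i + 1) * chips_per_bit]
        corr = sum(o * c for o, c in zip(obs, seg))
        bits.append(1 if corr > 0 else -1)
    return bits

random.seed(7)
bits = [1, -1, 1, 1, -1]
chips_per_bit = 32
pn = [random.choice([1, -1]) for _ in range(len(bits) * chips_per_bit)]

tx = spread(bits, pn, chips_per_bit)
# Rate jitter from cross traffic: flip a few chips per segment.
rx = [(-c if i % 11 == 0 else c) for i, c in enumerate(tx)]
print(despread(rx, pn, chips_per_bit, len(bits)) == bits)  # True
```

Because each segment correlates coherently over 32 chips, flipping roughly 3 chips per segment still leaves the correlation sign intact, which is the processing-gain effect the watermarking relies on.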
【Keywords】: Internet; computer crime; computer network security; pseudonoise codes; spread spectrum communication; watermarking; DSSS watermarking; anonymous communication network; cyber crime; direct sequence spread spectrum flow marking technique; long PN code; long pseudo noise code; unidentified crime; Chirp; Decision support systems; Spread spectrum communication
【Paper Link】 【Pages】:2435-2443
【Authors】: Ming Li ; Ning Cao ; Shucheng Yu ; Wenjing Lou
【Abstract】: Making new connections according to personal preferences is a crucial service in mobile social networking, where the initiating user can find matching users within physical proximity of him/her. In existing systems for such services, usually all the users directly publish their complete profiles for others to search. However, in many applications, the users' personal profiles may contain sensitive information that they do not want to make public. In this paper, we propose FindU, the first privacy-preserving personal profile matching schemes for mobile social networks. In FindU, an initiating user can find from a group of users the one whose profile best matches with his/her; to limit the risk of privacy exposure, only necessary and minimal information about the private attributes of the participating users is exchanged. Several increasing levels of user privacy are defined, with decreasing amounts of exchanged profile information. Leveraging secure multi-party computation (SMC) techniques, we propose novel protocols that realize two of the user privacy levels, which can also be personalized by the users. We provide thorough security analysis and performance evaluation on our schemes, and show their advantages in both security and efficiency over state-of-the-art schemes.
【Keywords】: data privacy; mobile computing; social networking (online); user interfaces; FindU scheme; mobile social networks; privacy exposure risk; privacy-preserving personal profile matching; secure multiparty computation technique; user personal profile; user privacy; Encryption; Polynomials; Privacy; Protocols; Silicon
【Paper Link】 【Pages】:2444-2452
【Authors】: Zhiyong Lin ; Hai Liu ; Xiaowen Chu ; Yiu-Wing Leung
【Abstract】: Cognitive radio networks (CRNs) have emerged as an advanced and promising paradigm for exploiting the existing wireless spectrum opportunistically. It is crucial for users in CRNs to search for neighbors via a rendezvous process and thereby establish communication links to exchange the information necessary for spectrum management, channel contention, etc. This paper focuses on the design of algorithms for blind rendezvous, i.e., rendezvous without using any central controller or common control channel (CCC). We propose a jump-stay based channel-hopping (CH) algorithm for blind rendezvous. The basic idea is to generate the CH sequence in rounds, where each round consists of a jump-pattern and a stay-pattern. Users “jump” on available channels in the jump-pattern while they “stay” on a specific channel in the stay-pattern. Compared with existing CH algorithms, our algorithm achieves the following advances: i) guaranteed rendezvous without the need for time synchronization; ii) applicability to rendezvous in multi-user and multi-hop scenarios. We derive the maximum time-to-rendezvous (TTR) and the upper bound of the expected TTR of our algorithm for both 2-user and multi-user scenarios (shown in Table I). Extensive simulations are further conducted to evaluate the performance of our algorithm.
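A minimal sketch of the jump-stay idea, simplified from the paper's construction and with illustrative parameters: pick the smallest prime P no smaller than the number of channels M; each round of 3P slots first sweeps channels with a per-user step in the 2P jump slots, then parks on a per-user channel in the P stay slots. Because the sweeps are generated modulo a prime, two users with different parameters and clock offsets overlap on some slot.

```python
def smallest_prime_at_least(m):
    def is_prime(n):
        return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))
    while not is_prime(m):
        m += 1
    return m

def jump_stay_channel(t, m, r, i0):
    """Channel visited at slot t (simplified jump-stay sketch).

    Each round spans 3P slots: 2P 'jump' slots sweeping channels with
    step r from start index i0, then P 'stay' slots parked on one
    channel. Channels are 0..m-1; indices >= m wrap around via % m.
    """
    p = smallest_prime_at_least(m)
    s = t % (3 * p)
    if s < 2 * p:                      # jump-pattern
        ch = (i0 + s * r) % p
    else:                              # stay-pattern
        ch = r - 1
    return ch % m

# Two unsynchronized users with different (r, i0) parameters still meet.
m = 5
a = [jump_stay_channel(t, m, r=2, i0=1) for t in range(60)]
b = [jump_stay_channel(t + 7, m, r=3, i0=4) for t in range(60)]  # 7-slot clock offset
print(any(x == y for x, y in zip(a, b)))  # True
```

The prime-modulus sweep is what removes the need for time synchronization: shifted arithmetic progressions with distinct steps modulo P must intersect within a bounded number of slots.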
【Keywords】: cognitive radio; radio spectrum management; synchronisation; blind rendezvous; channel contention; cognitive radio networks; common control channel; communication links; guaranteed rendezvous; jump-pattern; jump-stay based channel-hopping algorithm; spectrum management; stay-pattern; time synchronization; time-to-rendezvous; wireless spectrum; Algorithm design and analysis; Clocks; Cognitive radio; Indexes; Servers; Synchronization; blind rendezvous; channel hopping; cognitive radio
【Paper Link】 【Pages】:2453-2461
【Authors】: Hyoil Kim ; Jaehyuk Choi ; Kang G. Shin
【Abstract】: The whitespaces (WS) in the legacy spectrum provide new opportunities for future Wi-Fi-like Internet access, often called Wi-Fi 2.0, since service quality can be greatly enhanced thanks to the better propagation characteristics of the WS compared to the ISM bands. In Wi-Fi 2.0 networks, each wireless service provider (WSP) temporarily leases a licensed spectrum band from the licensees and opportunistically utilizes it during the absence of the legacy users. The WSPs in Wi-Fi 2.0 thus face unique challenges since the spectrum availability of the leased channel is time-varying due to the ON/OFF spectrum usage patterns of the legacy users, which necessitates eviction control of in-service customers upon the return of legacy users. As a result, to maximize its profit, a WSP should consider both channel leasing and eviction costs to optimally determine a spectrum band to lease and a service tariff. In this paper, we consider a duopoly Wi-Fi 2.0 market where two co-located WSPs compete for spectrum and customers. The competition between the WSPs is analyzed using game theory to derive the Nash Equilibria (NE) of the price competition (over service tariffs) and the quality competition (over the leased channel's utilization). The NE existence condition and market entry barriers are also derived. Via an extensive numerical analysis, we show the tradeoffs between leasing/eviction cost, customer arrivals, and channel usage patterns of the legacy users.
【Keywords】: Internet; cognitive radio; game theory; numerical analysis; pricing; quality of service; radio spectrum management; telecommunication network reliability; time-varying channels; wireless LAN; ISM band; Nash equilibria; ON-OFF spectrum usage pattern; Wi-Fi 2.0 network; Wi-Fi-like Internet access; duopoly cognitive radio wireless service provider; eviction control; game theory; market entry barrier; numerical analysis; time-varying channel; time-varying spectrum availability; whitespace; Argon; Availability; Games; IEEE 802.11 Standards; Markov processes; Numerical analysis; Pricing
【Paper Link】 【Pages】:2462-2470
【Authors】: Cem Tekin ; Mingyan Liu
【Abstract】: We consider an opportunistic spectrum access (OSA) problem where the time-varying condition of each channel (e.g., as a result of random fading or certain primary users' activities) is modeled as an arbitrary finite-state Markov chain. At each instance of time, a (secondary) user probes a channel and collects a certain reward as a function of the state of the channel (e.g., a good channel condition results in a higher data rate for the user). Each channel has a potentially different state space and statistics, both unknown to the user, who tries to learn which one is the best as it goes and to maximize its usage of the best channel. The objective is to construct a good online learning algorithm so as to minimize the difference between the user's performance in total rewards and that of using the best channel (on average) had it known which one is the best from a priori knowledge of the channel statistics (a difference also known as the regret). This is a classic exploration and exploitation problem, and results abound when the reward processes are assumed to be iid. Compared to prior work, the biggest difference is that in our case the reward process is assumed to be Markovian, of which iid is a special case. In addition, the reward processes are restless in that the channel conditions continue to evolve independently of the user's actions. This leads to a restless bandit problem, for which there exist few results on either algorithms or performance bounds in this learning context, to the best of our knowledge. In this paper we introduce an algorithm that utilizes regenerative cycles of a Markov chain and computes a sample-mean based index policy, and show that under mild conditions on the state transition probabilities of the Markov chains this algorithm achieves logarithmic regret uniformly over time, and that this regret bound is also optimal. We numerically examine the performance of this algorithm along with a few other learning algorithms in the case of an OSA problem with Gilbert-Elliott channel models, and discuss how this algorithm may be further improved (in terms of its constant) and how this result may lead to similar bounds for other algorithms.
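As a simplified illustration of a sample-mean based index policy on restless channels, the sketch below runs UCB1-style indices over two simulated Gilbert-Elliott channels. The paper's actual algorithm additionally confines index updates to regenerative cycles of each Markov chain; that refinement is omitted here, and the transition probabilities are made up.

```python
import math
import random

def step(state, p_gb, p_bg):
    """Advance a two-state Gilbert-Elliott channel (1 = good, 0 = bad)."""
    if state == 1:
        return 0 if random.random() < p_gb else 1
    return 1 if random.random() < p_bg else 0

def run(horizon, channels, seed=1):
    """Probe one channel per slot using a sample-mean (UCB1-style) index."""
    random.seed(seed)
    k = len(channels)
    states = [1] * k
    means, counts = [0.0] * k, [0] * k
    for t in range(1, horizon + 1):
        # All channels evolve whether probed or not (restlessness).
        states = [step(s, *ch) for s, ch in zip(states, channels)]
        if t <= k:
            arm = t - 1                       # initialization: probe each channel once
        else:
            arm = max(range(k), key=lambda i:
                      means[i] + math.sqrt(2 * math.log(t) / counts[i]))
        reward = states[arm]                  # reward = channel state (data rate proxy)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
    return counts

# Channel 0 is mostly good (mean ~0.9); channel 1 is mostly bad (mean ~0.1).
counts = run(5000, [(0.1, 0.9), (0.9, 0.1)])
print(counts[0] > counts[1])  # True
```

The exploration bonus shrinks as a channel accumulates probes, so the user concentrates on the better channel while still sampling the other often enough to bound the regret.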
【Keywords】: Internet; computer aided instruction; radio spectrum management; statistical analysis; time-varying channels; Gilbert-Elliot channel models; OSA problem; channel statistics; finite-state Markov chain; online learning; opportunistic spectrum access; restless bandit approach; time-varying channel; Context; Eigenvalues and eigenfunctions; Fading; Indexes; Markov processes; Probability; Silicon
【Paper Link】 【Pages】:2471-2479
【Authors】: Yifan Zhang ; Qun Li ; Gexin Yu ; Baosheng Wang
【Abstract】: In a dynamic spectrum access (DSA) network, communication rendezvous is the first step for two secondary users to be able to communicate with each other. In this step, the pair of secondary users meet on the same channel, over which they negotiate on the communication parameters, to establish the communication link. This paper presents ETCH, Efficient Channel Hopping based MAC-layer protocols for communication rendezvous in DSA networks. We propose two protocols, SYNC-ETCH and ASYNC-ETCH. Both protocols achieve better time-to-rendezvous and throughput compared to previous work.
【Keywords】: protocols; radio access networks; radio spectrum management; ASYNC-ETCH protocols; DSA networks; MAC-layer protocols; SYNC-ETCH protocols; communication link; communication rendezvous; dynamic spectrum access networks; efficient channel hopping; Jamming; Manganese; Protocols; Wireless communication; Wireless sensor networks
【Paper Link】 【Pages】:2480-2488
【Authors】: Ziguo Zhong ; Pengpeng Chen ; Tian He
【Abstract】: Time synchronization remains a challenging task in wireless sensor networks that face severe resource constraints. Unlike previous work aiming purely at clock accuracy, this paper proposes On-Demand Synchronization (ODS), a design to achieve efficient clock synchronization with customized performance. By carefully modeling the error uncertainty of skew detection and its propagation over time, ODS develops a novel uncertainty-driven mechanism that adaptively adjusts each clock calibration interval individually, rather than relying on traditional periodic synchronization, for minimum communication overhead while satisfying the desired accuracy. In addition, ODS provides predictable accuracy, allowing nodes to acquire useful information about the real-time quality of their synchronization. We implemented ODS on the MICAz mote platform, and evaluated it through test-bed experiments with 33 nodes as well as simulations under real-world conditions. Results show that ODS is practical, flexible, and quickly adapts to varying accuracy requirements and different traffic loads in the network for improved system efficiency.
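The uncertainty-driven calibration interval can be illustrated with a toy error model (an assumption for illustration, not the paper's exact formulation): if the clock error is predicted to grow as the offset uncertainty plus the skew uncertainty times elapsed time, the next calibration is scheduled just before that prediction crosses the desired accuracy bound.

```python
def next_sync_interval(offset_unc_us, skew_unc_ppm, desired_bound_us):
    """Illustrative uncertainty-driven calibration interval (seconds).

    Predicted error after t seconds: offset_unc + skew_unc * t, where a
    skew uncertainty of 1 ppm accumulates 1 us of error per second.
    Calibrate just before the prediction crosses the accuracy bound.
    """
    if offset_unc_us >= desired_bound_us:
        return 0.0                    # already out of spec: resynchronize now
    return (desired_bound_us - offset_unc_us) / skew_unc_ppm

# Tighter accuracy demands or noisier skew estimates shorten the interval.
print(next_sync_interval(10.0, 2.0, 100.0))   # 45.0 seconds
print(next_sync_interval(10.0, 2.0, 500.0))   # 245.0 seconds
print(next_sync_interval(10.0, 10.0, 100.0))  # 9.0 seconds
```

This captures why per-node adaptive intervals beat a fixed period: nodes with stable, well-estimated skews can synchronize far less often at the same guaranteed accuracy.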
【Keywords】: calibration; clocks; synchronisation; telecommunication traffic; wireless sensor networks; MICAz mote platform; clock accuracy; clock calibration interval; clock synchronization; communication overhead; error uncertainty modeling; network traffic; on-demand time synchronization; resource constraints; uncertainty-driven mechanism; wireless sensor networks; Accuracy; Clocks; Estimation; Frequency measurement; Synchronization; Uncertainty; Wireless sensor networks
【Paper Link】 【Pages】:2489-2497
【Authors】: Pu Wang ; Rui Dai ; Ian F. Akyildiz
【Abstract】: In wireless multimedia sensor networks (WMSNs), visual correlation exists among multiple nearby cameras, leading to considerable redundancy in the collected images. This paper addresses the problem of timely and efficiently gathering visually correlated images from camera sensors. Toward this, three fundamental problems are considered, namely, MinMax Degree Hub Location (MDHL), Minimum Sum-entropy Camera Assignment (MSCA), and Maximum Lifetime Scheduling (MLS). The MDHL problem aims to find the optimal locations to place the multimedia processing hubs, which operate on different channels for concurrently collecting images from adjacent cameras, such that the number of channels required for frequency reuse is minimized. With the locations of the hubs determined by the MDHL problem, the objective of the MSCA problem is to assign each camera to a hub in such a way that the global compression gain is maximized by jointly encoding the visually correlated images gathered at each hub. Finally, given a hub and its associated cameras, the MLS problem aims to design a schedule for the cameras such that their network lifetime is maximized by letting highly correlated cameras perform differential coding on the fly. It is proven in this paper that the MDHL problem is NP-complete, and the others are NP-hard. Consequently, approximation and heuristic algorithms are proposed. Since the designed algorithms only take the camera settings as inputs, they are independent of specific multimedia applications. Experiments and simulations show that the proposed image gathering schemes effectively enhance network throughput and image compression performance.
【Keywords】: approximation theory; communication complexity; data compression; entropy; frequency allocation; image coding; multimedia communication; wireless sensor networks; NP-complete; NP-hard; approximation algorithm; camera sensor; differential coding; frequency reusing; global compression gain; heuristic algorithm; image compression; image encoding; image gathering scheme; maximum lifetime scheduling problem; minimum sum-entropy camera assignment problem; minmax degree hub location problem; multimedia processing hubs; network throughput; visual correlation; wireless multimedia sensor network; Cameras; Correlation; Encoding; Entropy; Image coding; Joints; Visualization
【Paper Link】 【Pages】:2498-2506
【Authors】: Xiaole Bai ; Ziqiu Yun ; Dong Xuan ; Biao Chen ; Wei Zhao
【Abstract】:
Notice of Violation of IEEE Publication Principles
"Optimal Multiple-coverage of Sensor Networks"
by Xiaole Bai, Ziqiu Yun, Dong Xuan, Biao Chen and Wei Zhao,
in the Proceedings of INFOCOM, pp.2498-2506, April 2011
After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.
This paper contains significant portions of original text from the paper cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission. One author (Ziqiu Yun) was responsible for the violation, and this violation was done without the knowledge of the other authors.
"Planar Thinnest Deployment Pattern of Congruent Discs which Achieves 2-coverage"
by Ge Jun,
in his thesis presented at the School of Mathematical Sciences, Soochow University, Suzhou 215006, P. R. China, June 2010.
In wireless sensor networks, multiple-coverage, in which each point is covered by more than one sensor, is often required to improve detection quality and achieve high fault tolerance. However, finding optimal patterns that achieve multiple-coverage in a plane remains a long-standing open problem. In this paper, we first derive the optimal deployment density bound for two-coverage deployment patterns where the Voronoi polygons generated by sensor nodes are congruent. We then propose optimal two-coverage patterns based on the optimal bound. We further extend these patterns by considering the connectivity requirement and design a set of optimal patterns that achieve two-coverage and one-, two-, and three-connectivity. We also study optimal patterns under practical considerations. To our knowledge, our work is the first to prove the optimality of multiple-coverage deployment patterns.
【Keywords】: computational geometry; wireless sensor networks; Voronoi polygons; fault tolerance; multiple-coverage deployment patterns; optimal deployment density bound; optimal multiple-coverage; sensor nodes; two-coverage deployment patterns; wireless sensor networks; Geometry; Guidelines; Lattices; Sensors; Three dimensional displays; Transforms; Wireless sensor networks
【Paper Link】 【Pages】:2507-2515
【Authors】: Qing Cao ; Xiaorui Wang ; Hairong Qi ; Tian He
【Abstract】: In this paper, we present r-kernel, an operating system kernel foundation specifically designed to improve software reliability in networked embedded systems. The key novelty of r-kernel lies in that it exploits the time dimension of software execution to improve robustness. Specifically, r-kernel keeps track of the execution of applications through checkpoints. If an application is determined to have failed, r-kernel performs rollback operations to restore its state to one of the checkpoints created earlier. For the second round of operation, r-kernel provides a safe-mode environment to avoid triggering the same bugs. Finally, if the whole system has crashed, r-kernel relies on watchdog timers to reset the node, and uses a technique called past-run trace reconstruction to locate and report the thread that caused the system failure. We have implemented r-kernel based on the LiteOS operating system kernel running on the popular MicaZ platform. We demonstrate that it achieves the goals above with acceptable overhead.
【Keywords】: embedded systems; operating system kernels; wireless sensor networks; highly reliable networked embedded systems; operating system kernel foundation; Computer bugs; Instruction sets; Kernel; Message systems; Shadow mapping
【Paper Link】 【Pages】:2516-2524
【Authors】: Mingyi Hong ; Alfredo Garcia ; Jorge Barrera
【Abstract】: Spectrum management has been identified as a crucial step towards enabling the technology of the cognitive radio network (CRN). Most current work on spectrum management in the CRN focuses on a single task of the problem, e.g., spectrum sensing, spectrum decision, spectrum sharing or spectrum mobility. In this work, we argue that for certain network configurations, jointly performing several tasks of spectrum management improves the spectrum efficiency. Specifically, we study the uplink resource management problem in a CRN where there exist multiple cognitive users (CUs) and access points (APs), with each AP operating on a set of non-overlapping channels. The CUs, in order to maximize their uplink transmission rates, have to associate with a suitable AP (spectrum decision) and to share the channels belonging to this AP with other CUs (spectrum sharing). These tasks are clearly interdependent, and the problem of how they should be carried out efficiently and distributedly is still open in the literature. In this work we formulate this joint spectrum decision and spectrum sharing problem as a non-cooperative game, in which the feasible strategy of a player contains a discrete variable and a continuous vector. The structure of the game is hence very different from most non-cooperative spectrum management games proposed in the literature. We provide a characterization of the Nash Equilibrium (NE) of this game, and present a set of novel algorithms that allow the CUs to distributively and efficiently select a suitable AP and share the channels with other CUs. Finally, we study the properties of the proposed algorithms as well as their performance via extensive simulations.
【Keywords】: cognitive radio; cooperative communication; game theory; radio networks; AP; CRN; CU; NE; Nash equilibrium; cognitive radio network; joint distributed access point selection; joint spectrum decision; multiple cognitive user; noncooperative spectrum management game; power allocation; uplink transmission rate; Copper; Games; Interference; Joints; Radio spectrum management; Resource management; Switches
【Paper Link】 【Pages】:2525-2533
【Authors】: Zvi Lotker ; Merav Parter ; David Peleg ; Yvonne Anne Pignolet
【Abstract】: The power control problem for wireless networks in the SINR model requires determining the optimal power assignment for a set of communication requests such that the SINR threshold is met for all receivers. If the network topology is known to all participants, then it is possible to compute an optimal power assignment in polynomial time. In realistic environments, however, such global knowledge is usually not available to every node. In addition, protocols that are based on global computation cannot support mobility and hardly adapt when participants dynamically join or leave the system. In this paper we present and analyze a fully distributed power control protocol that is based on local information. For a set of communication pairs, each consisting of a sender node and a designated receiver node, the algorithm enables the nodes to converge to the optimal power assignment (if there is one under the given constraints) quickly with high probability. Two types of bounded resources are considered, namely, the maximal transmission energy and the maximum distance between any sender and receiver. It is shown that the restriction to local computation increases the convergence rate by only a multiplicative factor of O(log n + log log Ψmax), where Ψmax is the maximal power constraint of the network. If the diameter of the network is bounded by Lmax then the increase in convergence rate is given by O(log n + log log Lmax).
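For context, the classic Foschini-Miljanic iteration is the standard fully local update that distributed power control protocols of this kind build on: each sender rescales its power by the ratio of the SINR target to its currently measured SINR, and the powers converge to the optimal assignment whenever one exists. The gains, target, and noise below are illustrative; this is not the paper's bounded-resource protocol.

```python
def sinr(i, powers, gain, noise):
    """Measured SINR at receiver i; gain[i][j] couples sender j into receiver i."""
    signal = gain[i][i] * powers[i]
    interference = sum(gain[i][j] * powers[j]
                       for j in range(len(powers)) if j != i)
    return signal / (interference + noise)

def iterate(powers, gain, beta, noise, rounds=100):
    """Each sender locally rescales its power toward the SINR target beta."""
    for _ in range(rounds):
        powers = [beta / sinr(i, powers, gain, noise) * p
                  for i, p in enumerate(powers)]
    return powers

gain = [[1.0, 0.1], [0.2, 1.0]]   # two sender-receiver pairs, weak cross-gains
beta, noise = 2.0, 0.05
p = iterate([1.0, 1.0], gain, beta, noise)
print(all(abs(sinr(i, p, gain, noise) - beta) < 1e-6 for i in range(2)))  # True
```

Each update uses only the node's own measured SINR, which is exactly the kind of local information the distributed protocol in the abstract is restricted to; the convergence-rate question the paper studies is how fast such local schemes reach the fixed point.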
【Keywords】: computational complexity; convergence; power control; protocols; radio networks; radio receivers; telecommunication control; telecommunication network topology; SINR model; SINR threshold; communication pairs; communication requests; convergence rate; designated receiver node; fully distributed power control protocol; global computation; global knowledge; local information; maximal power constraint; maximal transmission energy; maximum distance; multiplicative factor; network topology; optimal power assignment; polynomial time; probability; sender node; wireless networks; Convergence; Interference; Power control; Protocols; Receivers; Signal to noise ratio
【Paper Link】 【Pages】:2534-2542
【Authors】: Haiping Liu ; Xiaoling Qiu ; Dipak Ghosal ; Chen-Nee Chuah ; Xin Liu ; Yueyue Fan
【Abstract】: Traffic density in wireless networks is time- and space-varying as users move from one area to another. For example, the majority of traffic stays in residential areas in the early morning and late evening, but moves to business or commercial areas in the daytime. It is therefore challenging to efficiently locate base stations during the network planning stage, due to the time-varying traffic distribution: base stations vary from highly congested to seldom utilized depending on the time of day. However, measurement studies show that the movement of the traffic density is highly predictable, and the traffic always travels along similar routes among different parts of a city or town over a day or a week. We therefore introduce the traffic-tracing gateway (TTG), a base station that tracks the movement of the traffic. Given the traffic distribution over a period, we design an algorithm to determine the optimal trajectories of TTGs that cover the maximum traffic. Our solution framework can optimally deploy TTGs in congested areas to provide better coverage and relieve congestion. Our simulation studies based on realistic user mobility show that TTGs can yield significant improvement over fixed-infrastructure networks across multiple metrics in multiple scenarios.
【Keywords】: telecommunication network planning; telecommunication traffic; wireless mesh networks; traffic density; traffic-tracing gateway; user mobility; wireless networks; Base stations; Dynamic programming; Indexes; Optimization; Trajectory; Wireless networks
【Paper Link】 【Pages】:2543-2551
【Authors】: Sundeep Rangan ; Ritesh Madan
【Abstract】: We consider a broad class of interference coordination and resource allocation problems for wireless links where the goal is to maximize the sum of functions of individual link rates. Such problems arise in the context of, for example, fractional frequency reuse (FFR) for macro-cellular networks and dynamic interference management in femtocells. The resulting optimization problems are typically hard to solve optimally even using centralized algorithms but are an essential computational step in implementing rate-fair and queue stabilizing scheduling policies in wireless networks. We consider a belief propagation framework to solve such problems approximately. In particular, we construct approximations to the belief propagation iterations to obtain computationally simple and distributed algorithms with low communication overhead. Notably, our methods are very general and apply to, for example, the optimization of transmit powers, transmit beamforming vectors, and sub-band allocation to maximize the above objective. Numerical results for femtocell deployments demonstrate that such algorithms compute a very good operating point in typically just a couple of iterations.
【Keywords】: array signal processing; femtocellular radio; interference (signal); radio links; radio networks; resource allocation; belief propagation methods; dynamic interference management; femtocells; fractional frequency reuse; intercell interference coordination; macro-cellular networks; queue stabilizing scheduling policies; rate-fair scheduling policies; resource allocation problems; sub-band allocation; transmit beamforming vectors; transmit powers; wireless links; wireless networks; Approximation algorithms; Approximation methods; Interference; Optimization; Receivers; Transmitters; Vectors; Interference coordination; belief propagation; cellular systems; femtocells; wireless communications
【Paper Link】 【Pages】:2552-2560
【Authors】: Chee Wei Tan
【Abstract】: Heterogeneous wireless networks employ varying degrees of network coverage using power control in a multi-tier configuration, where low-power femtocells are used to enhance performance, e.g., optimize outage probability. We study the worst outage probability problem under Rayleigh fading. As a by-product, we solve an open problem of convergence for a previously proposed algorithm in the interference-limited case. We then address a total power minimization problem with outage specification constraints and its feasibility condition. We propose a dynamic algorithm that adapts the outage probability specification in a heterogeneous network to minimize the total energy consumption and simultaneously guarantees all the femtocell users a min-max fairness in terms of the worst outage probability.
【Keywords】: Rayleigh channels; femtocellular radio; minimax techniques; optimal control; power control; telecommunication control; Rayleigh fading heterogeneous networks; femtocell users; min-max fairness; multi-tier configuration; network coverage; optimal power control; outage probability; total energy consumption; total power minimization problem; Femtocells; Heuristic algorithms; Interference; Rayleigh channels; Receivers; Signal to noise ratio; Optimization; femtocell networks; nonnegative matrix theory; outage probability; power control
【Paper Link】 【Pages】:2561-2569
【Authors】: Yingsong Huang ; Shiwen Mao
【Abstract】: We investigate the problem of downlink power control for streaming multiple variable bit rate (VBR) videos in a multicell wireless network, where downlink capacities are limited by inter-cell interference. We adopt a deterministic model for VBR traffic that considers video frame sizes and playout buffers at the mobile users. The problem is to find the optimal transmit powers for the base stations, such that VBR video data can be delivered to mobile users without causing playout buffer underflow or overflow. We formulate a nonlinear nonconvex optimization problem and prove the condition for the existence of feasible solutions. We then develop a centralized branch-and-bound algorithm incorporating the Reformulation-Linearization Technique, which can produce (1-ε)-optimal solutions. We also propose a low-complexity distributed algorithm with fast convergence. Through simulations with VBR video traces under fading channels, we find the distributed algorithm can achieve a performance very close to that of the centralized algorithm.
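The deterministic VBR model in the abstract reduces to two cumulative-curve constraints, which can be sketched as a feasibility check: the data delivered by each slot must be at least the cumulative playout demand (no underflow) and at most that demand plus the playout buffer size (no overflow). The frame sizes, rates, and buffer size below are illustrative values, not traces from the paper.

```python
def cumulative(xs):
    total, out = 0, []
    for x in xs:
        total += x
        out.append(total)
    return out

def schedule_feasible(rates, frame_sizes, buffer_size):
    """Check no-underflow and no-overflow at every slot of the schedule."""
    delivered = cumulative(rates)        # data received by the user by each slot
    consumed = cumulative(frame_sizes)   # data drained by playout by each slot
    return all(c <= d <= c + buffer_size
               for d, c in zip(delivered, consumed))

frames = [20, 10, 10, 40, 20]   # VBR frame sizes (e.g., kb per slot)
smooth = [20, 20, 20, 20, 20]   # constant-rate delivery schedule
print(schedule_feasible(smooth, frames, buffer_size=20))  # True
print(schedule_feasible(smooth, frames, buffer_size=15))  # False
```

The downlink power control problem in the paper then amounts to choosing base-station powers whose resulting link capacities admit, for every user, a delivery curve that stays inside this band.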
【Keywords】: adjacent channel interference; concave programming; distributed algorithms; fading channels; linearisation techniques; mobile radio; nonlinear programming; power control; tree searching; video streaming; wireless sensor networks; VBR traffic; base stations; branch and bound algorithm; centralized algorithm; distributed algorithm; downlink power control; fading channels; intercell interference; mobile users; multicell wireless networks; nonlinear nonconvex problem; optimization; reformulation-linearization technique; variable bit rate videos; video frame; videos streaming; Downlink; Interference; Schedules; Signal to noise ratio; Upper bound; Videos; Wireless networks
【Paper Link】 【Pages】:2570-2578
【Authors】: Wei Yu ; Taesoo Kwon ; Changyong Shin
【Abstract】: The mitigation of intercell interference is a central issue for future-generation wireless cellular networks, where frequencies are reused aggressively and where hierarchical cellular structures may heavily overlap. This paper examines the benefit of coordinating transmission strategies and resource allocation schemes across multiple cells for interference mitigation. For a multicell network serving multiple users per cell sector, where both the base stations and the remote users are equipped with multiple antennas, this paper proposes a joint proportionally fair scheduling, spatial multiplexing, and power spectrum adaptation method that coordinates multiple base stations with the objective of optimizing the overall network utility. The proposed scheme optimizes the user schedule, transmit and receive beamforming vectors, and transmit power spectra jointly, while taking into consideration both the intercell and intracell interference and the fairness among the users. The proposed system is shown to significantly improve the overall network throughput while maintaining fairness, as compared to a conventional network with per-cell zero-forcing beamforming and a fixed transmit power spectrum. The proposed system moves toward the vision of a fully coordinated multicell network, whereby transmission strategies and resource allocation schemes (rather than transmit signals) are coordinated across the base stations as a first step.
【Keywords】: antenna arrays; array signal processing; interference suppression; microcellular radio; space division multiplexing; frequency reuse; future-generation wireless cellular networks; hierarchical cellular structure; intercell interference mitigation; intracell interference; joint proportionally fair scheduling; multicell coordination; multicell network; multiple antennas; per-cell zero-forcing beamforming; power spectrum adaptation; receive beamforming vector; resource allocation scheme; spatial multiplexing; transmit beamforming vector; transmit power spectrum; Array signal processing; Downlink; Interference; Joints; Mobile communication; Optimization; Resource management
【Paper Link】 【Pages】:2579-2587
【Authors】: Bo Ji ; Changhee Joo ; Ness B. Shroff
【Abstract】: Scheduling is a critical and challenging resource allocation mechanism for multi-hop wireless networks. It is well known that scheduling schemes that give a higher priority to the link with larger queue length can achieve high throughput performance. However, this queue-length-based approach could potentially suffer from large (even infinite) packet delays due to the well-known last packet problem, whereby packets may get excessively delayed due to lack of subsequent packet arrivals. Delay-based schemes have the potential to resolve this last packet problem by scheduling the link based on the delay a packet has encountered. However, the throughput performance of delay-based schemes has largely been an open problem except in limited cases of single-hop networks. In this paper, we investigate delay-based scheduling schemes for multi-hop traffic scenarios. We view packet delays from a different perspective, and develop a scheduling scheme based on a new delay metric. Through rigorous analysis, we show that the proposed scheme achieves the optimal throughput performance. Finally, we conduct extensive simulations to support our analytical results, and show that the delay-based scheduler successfully removes excessive packet delays, while achieving the same throughput region as the queue-length-based scheme.
【Keywords】: delays; queueing theory; radio networks; resource allocation; scheduling; telecommunication traffic; delay-based back-pressure scheduling; multihop traffic; multihop wireless network; packet delay; queue-length-based approach; resource allocation mechanism; single-hop network
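The last-packet problem described in this abstract is easy to see in a toy simulation (a hypothetical sketch, not the paper's scheme or its new delay metric): two conflicting links share one transmission slot per step, one link has a steady stream and the other holds a single leftover packet.

```python
# Toy illustration of the last-packet problem: link A stays backlogged
# with fresh arrivals; link B holds a single "last" packet. A
# queue-length-based scheduler always prefers A, starving B's packet;
# a head-of-line-delay-based scheduler serves B once its delay dominates.

def schedule(policy, horizon=10):
    q_a, q_b = 5, 1          # backlogs: A is replenished, B has one packet
    hol_a = hol_b = 0        # head-of-line waiting times
    served_b_at = None
    for t in range(horizon):
        if q_a: hol_a += 1
        if q_b: hol_b += 1
        if policy == "queue":
            pick = "A" if q_a >= q_b else "B"
        else:  # "delay": the largest head-of-line delay wins
            pick = "A" if hol_a >= hol_b else "B"
        if pick == "A" and q_a:
            hol_a = 0        # A's served packet is replaced by a new arrival
        elif q_b:
            q_b, hol_b = 0, 0
            served_b_at = t
    return served_b_at
```

Under the queue-length policy B's packet is never served within the horizon; under the head-of-line-delay policy it is served almost immediately.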
【Paper Link】 【Pages】:2588-2596
【Authors】: Po-Kai Huang ; Xiaojun Lin ; Chih-Chun Wang
【Abstract】: We consider the problem of designing a joint congestion control and scheduling algorithm for multihop wireless networks. The goal is to maximize the total utility and achieve low end-to-end delay simultaneously. Assume that there are M flows inside the network, and each flow m has a fixed route with Hm hops. Further, the network operates under the one-hop interference constraint. We develop a new congestion control and scheduling algorithm that combines a window-based flow control algorithm and a new distributed rate-based scheduling algorithm. For any ϵ, ϵm ∈ (0, 1), by appropriately choosing the number of backoff mini-slots for the scheduling algorithm and the window-size of flow m, our proposed algorithm can guarantee that each flow m achieves throughput no smaller than rm(1 - ϵ)(1 - ϵm), where the total utility of the rate allocation vector r⃗ = [rm] is no smaller than the total utility of any rate vector within half of the capacity region. Furthermore, the end-to-end delay of flow m can be upper bounded by Hm/(rm(1 - ϵ)ϵm). Since a flow-m packet requires at least Hm time slots to reach the destination, the order of the per-flow delay upper bound is optimal with respect to the number of hops. To the best of our knowledge, this is the first fully-distributed joint congestion-control and scheduling algorithm that can guarantee order-optimal per-flow end-to-end delay and utilize close-to-half of the system capacity under the one-hop interference constraint. The throughput and delay bounds are proved by a novel stochastic dominance approach, which could be of independent value and be extended to general interference constraints. Our algorithm can be easily implemented in practice with a low per-node complexity that does not increase with the network size.
【Keywords】: communication complexity; interference (signal); optimisation; radio networks; scheduling; stochastic processes; telecommunication congestion control; distributed rate-based scheduling algorithm; end-to-end delay maximization; general interference constraint; low-complexity congestion control; multihop wireless network; one-hop interference constraint; order-optimal per-flow delay; per-node complexity; rate allocation vector; stochastic dominance approach; total utility maximization; window-based flow control algorithm; Delay; Interference constraints; Joints; Schedules; Scheduling algorithm; Throughput; Wireless networks
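The guarantees stated in the abstract can be made concrete with a small worked example (parameter values below are illustrative, not from the paper): flow m with H_m hops, rate r_m, and parameters ε, ε_m gets throughput at least r_m(1 - ε)(1 - ε_m) and end-to-end delay at most H_m/(r_m(1 - ε)ε_m) slots.

```python
def throughput_guarantee(r_m, eps, eps_m):
    """Per-flow throughput lower bound: r_m * (1 - eps) * (1 - eps_m)."""
    return r_m * (1 - eps) * (1 - eps_m)

def delay_bound(h_m, r_m, eps, eps_m):
    """Per-flow end-to-end delay upper bound H_m / (r_m * (1 - eps) * eps_m),
    in time slots; linear in the hop count H_m, hence order-optimal."""
    return h_m / (r_m * (1 - eps) * eps_m)

# Example: a 4-hop flow at r_m = 0.5 packets/slot with eps = eps_m = 0.1
# is guaranteed at least 0.405 packets/slot and at most ~88.9 slots of delay.
```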
【Paper Link】 【Pages】:2597-2605
【Authors】: Bin Li ; Atilla Eryilmaz
【Abstract】: Randomization is a powerful and pervasive strategy for developing efficient and practical transmission scheduling algorithms in interference-limited wireless networks. Yet, despite a variety of earlier works on the design and analysis of particular randomized schedulers, there does not exist an extensive study of the limitations of randomization for efficient scheduling in wireless networks. In this work, we aim to fill this gap by proposing a common modeling framework and three functional forms of randomized schedulers that utilize queue-length information to probabilistically schedule non-conflicting transmissions. This framework not only models many existing schedulers operating under a time-scale separation assumption as special cases, but also contains a much wider class of potential schedulers that have not been analyzed. Our main results are the identification of necessary and sufficient conditions on the network topology and on the functional forms used in the randomization for throughput-optimality. Our analysis reveals an exponential and a sub-exponential class of functions that differ in throughput-optimality. We also observe the significance of the network's scheduling diversity for throughput-optimality, as measured by the number of maximal schedules each link belongs to. We further validate our theoretical results through numerical studies.
【Keywords】: interference suppression; queueing theory; radio networks; telecommunication network topology; interference-limited wireless network; network topology; queue-length information; queue-length-based scheduling; randomization limitation; randomized scheduler; throughput-optimality; time-scale separation assumption; transmission scheduling algorithm; Interference; Network topology; Processor scheduling; Schedules; Silicon; Throughput; Wireless networks
【Paper Link】 【Pages】:2606-2614
【Authors】: Yishay Mansour ; Boaz Patt-Shamir ; Dror Rawitz
【Abstract】: We study an abstract setting, where the basic information units (called “superpackets”) do not fit into a single packet, and are therefore spread over multiple packets. We assume that a superpacket is useful only if the number of its delivered packets is above a certain threshold. Our focus of attention is communication link ingresses, where large arrival bursts result in dropped packets. The algorithmic question we address is which packets to drop so as to maximize goodput. Specifically, suppose that each superpacket consists of k packets, and that a superpacket can be reconstructed if at most β · k of its packets are lost, for some given parameter 0 ≤ β < 1. We present a simple online distributed randomized algorithm in this model, and prove that in any scenario, its expected goodput is Ω(OPT/(k √(1 - β)σ)), where OPT denotes the best possible goodput by any algorithm, and σ denotes the size of the largest burst (the bound can be improved as a function of burst-size variability). We also analyze the effect of buffers on goodput under the assumption of fixed burst size, and show that in this case, when the buffer is not too small, our algorithm can attain, with high probability, (1 - ε) goodput utilization for any ε > 0. Finally, we present some simulation results that demonstrate that the behavior of our algorithm in practice is far better than our worst-case analytical bounds.
【Keywords】: packet switching; telecommunication network management; multipart packets; online distributed randomized algorithm; overflow management; superpacket; Algorithm design and analysis; Optimized production technology; Redundancy; Schedules; Silicon; Simulation; Upper bound
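One simple randomized policy in this spirit (an illustrative sketch only; the paper's actual algorithm and analysis differ in detail) assigns each superpacket an i.i.d. random priority on first arrival and, when a burst overflows the link capacity, drops the packets belonging to the lowest-priority superpackets:

```python
import random

def run(bursts, capacity, k, beta, seed=0):
    """bursts: list of bursts, each a list of superpacket ids (one id per
    packet). Each superpacket draws a random priority when first seen;
    within a burst, only the `capacity` highest-priority packets survive.
    Returns the goodput: superpackets with >= (1 - beta) * k packets kept."""
    rng = random.Random(seed)
    prio, kept = {}, {}
    for burst in bursts:
        for sp in burst:
            prio.setdefault(sp, rng.random())
        for sp in sorted(burst, key=lambda s: -prio[s])[:capacity]:
            kept[sp] = kept.get(sp, 0) + 1
    return sum(1 for n in kept.values() if n >= (1 - beta) * k)
```

Consistently favoring the same superpackets across bursts is what makes this better than dropping uniformly at random: with capacity 1 and two interleaved superpackets of k = 2, exactly one superpacket completes instead of (typically) neither.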
【Paper Link】 【Pages】:2615-2623
【Authors】: Cheng Huang ; David A. Maltz ; Jin Li ; Albert G. Greenberg
【Abstract】: Cloud service providers operate data centers around the world, and they depend on Global Traffic Management (GTM) systems to direct requests from clients to the most appropriate data center to serve the requests. While GTM systems have been in use for years, they are attracting renewed interest due to the rapid expansion of cloud service providers' networks, the introduction of public DNS systems, as well as new proposals to alter how they should work and what information local DNS servers (LDNS) should make available to drive the GTM systems. This paper uses large-scale measurements conducted from more than 5M clients to establish properties of the current Internet that affect the design of GTM systems, such as the stretch between a client's actual position and its LDNS from the GTM's perspective, the impact of public DNS systems, and the granularity at which GTM decisions should be made. The results can inform the debate over how GTM systems should be designed.
【Keywords】: client-server systems; cloud computing; computer centres; computer network management; telecommunication traffic; GTM system; Internet; LDNS; cloud service provider; data centers; global traffic management; information local DNS server; public DNS system; Asia; Cities and towns; Europe; Google
【Paper Link】 【Pages】:2624-2632
【Authors】: Jun Li ; Scott Brooks
【Abstract】: Disruptive events such as large-scale power outages, undersea cable cuts, or Internet worms could cause the Internet to deviate from its normal state of operation. This deviation from normalcy is what we refer to as the “impact” on the Internet, or an “Internet earthquake.” As the Internet is a large, complex moving target, to date there has been little successful research on how to observe and quantify the impact on the Internet, whether it is during specific event periods or in real time. In this paper, we devise an Internet seismograph, or I-seismograph, to provide a “Richter scale” for the Internet. Since routing is the most basic function of the Internet and the Border Gateway Protocol (BGP) is the de facto standard inter-domain routing protocol, we focus on BGP. After defining what “impact” means with respect to BGP, we describe how I-seismograph measures the impact, exemplify its usage with several disruptive events, and further validate its accuracy and consistency. We show that we can evaluate the impact on BGP during an arbitrary period, including doing so in real time.
【Keywords】: Internet; computer network security; routing protocols; I-seismograph; Internet earthquake; Internet seismograph; Internet worms; Richter scale; border gateway protocol; disruptive event; inter-domain routing protocol; large-scale power outages; undersea cable cut; Earthquakes; Logic gates; Monitoring; Seismic measurements; Variable speed drives; BGP impact measurement; Border Gateway Protocol (BGP); Internet earthquake; Internet seismograph
【Paper Link】 【Pages】:2633-2641
【Authors】: Xin Hu ; Matthew Knysz ; Kang G. Shin
【Abstract】: This paper considers the global IP-usage patterns exhibited by different types of malicious and benign domains, with a focus on single and double fast-flux domains. We have developed and deployed a lightweight DNS probing engine, called DIGGER, on 240 PlanetLab nodes spanning 4 continents. Collecting DNS data on a plethora of domains for over 3.5 months, we used our global vantage points to identify behavioral features that distinguish the domain types based on their DNS-query results. To help analyze the enormous amount of data, we have quantified these features and designed an effective classifier capable of accurately discriminating between different types of domains. Applying the classifier to the 3.5-month DNS data allows us to reveal the relative prevalence of different fast-flux domains and to study them separately in detail. These results provide insight into the current global state of fast-flux botnets and their range in implementation, revealing potential trends for botnet-based services. We also uncover previously-unseen domains whose name servers alone demonstrate fast-flux behavior, as well as a new, cautious IP management strategy currently employed by criminals to evade detection.
【Keywords】: IP networks; DIGGER; DNS probing engine; IP management strategy; benign domains; fast-flux botnets; global IP-usage patterns; malicious domains; Computers; Continents; IP networks; Indexes; Monitoring; Recruitment; Servers
【Paper Link】 【Pages】:2642-2650
【Authors】: Giusi Alfano ; Michele Garetto ; Emilio Leonardi
【Abstract】: Stochastic geometry proves to be a powerful tool for modeling dense wireless networks adopting random MAC protocols such as ALOHA and CSMA. The main strength of this methodology lies in its ability to account for the randomness in the nodes' locations jointly with an accurate SINR-based description of the physical layer, which also allows random fading on each link to be considered. Existing models of CSMA networks adopting the stochastic geometry approach suffer from two important weaknesses: 1) they permit evaluating only spatial averages of the main performance measures, thus hiding possibly huge discrepancies in the performance achieved by individual nodes; 2) they are analytically tractable only when nodes are distributed over the area according to simple spatial processes (e.g., the Poisson point process). In this paper we show how the stochastic geometry approach can be extended to overcome the above limitations, allowing us to obtain node throughput distributions as well as to analyze a significant class of topologies in which nodes are not independently placed.
【Keywords】: carrier sense multiple access; ALOHA; MAC protocols; SINR; dense CSMA networks; node throughput distributions; stochastic geometry analysis; wireless networks; Fading; Geometry; Interference; Laplace equations; Multiaccess communication; Receivers; Throughput
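The flavor of the stochastic-geometry setup can be conveyed by a small Monte Carlo sketch (purely illustrative; the paper derives such quantities analytically and goes beyond Poisson layouts, which this does not): transmitters form a Poisson point process on a square, each is active with probability p as in slotted ALOHA, links see Rayleigh fading, and we estimate the SINR success probability of a typical receiver at the origin. All parameter values are arbitrary.

```python
import math
import random

def poisson(mean, rng):
    # Knuth's multiplication method for a Poisson draw
    limit, k, prod = math.exp(-mean), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def sinr_success(lam, p, side=20.0, d=1.0, alpha=4.0,
                 noise=1e-3, theta=1.0, trials=3000, seed=1):
    """Monte Carlo estimate of P(SINR > theta) at a receiver at the origin,
    with a desired transmitter at distance d, interferers as a Poisson
    point process of intensity lam (active w.p. p), and Rayleigh fading."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        interf = 0.0
        for _ in range(poisson(lam * side * side, rng)):
            if rng.random() >= p:        # node silent in this ALOHA slot
                continue
            x = rng.uniform(-side / 2, side / 2)
            y = rng.uniform(-side / 2, side / 2)
            r = max(math.hypot(x, y), 1e-3)
            interf += rng.expovariate(1.0) * r ** -alpha  # faded interference
        signal = rng.expovariate(1.0) * d ** -alpha
        wins += signal / (noise + interf) > theta
    return wins / trials
```

As expected, denser contention lowers the success probability: with the same node intensity, a high access probability yields markedly fewer successful slots than a sparse one.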
【Paper Link】 【Pages】:2651-2659
【Authors】: Yinjie Chen ; Zhongli Liu ; Benyuan Liu ; Xinwen Fu ; Wei Zhao
【Abstract】: The network address translation technique (NAT) is widely used in wireless routers. It is a low-cost solution to IPv4 address space limitations. However, cyber criminals may abuse NAT and hide behind wireless routers to commit crimes using mobile devices. To identify a suspect mobile device, we should be able to map the suspect's public traffic on the Internet to the private traffic behind the wireless router in the WLAN. In this paper, we propose a suite of novel packet-size-based traffic marking techniques to identify suspect mobiles in encrypted wireless networks as well as open wireless networks. To cope with severe packet loss during wireless sniffing, we propose using error-correcting codes to improve the detection rate. We conducted extensive analysis and experiments to demonstrate the efficiency and accuracy of our schemes, which achieve a high detection rate and a very small false positive rate. The proposed strategies can be used by law enforcement to combat cyber crimes in wireless network crime scene investigations.
【Keywords】: IP networks; Internet; computer crime; mobile computing; radio networks; telecommunication network routing; telecommunication security; telecommunication traffic; wireless LAN; IPv4 address; Internet; NAT; WLAN; cyber crimes; encrypted wireless networks; hide behind wireless routers; mobile computing; mobile device; network address translation technique; open wireless networks; packet size based traffic marking techniques; private traffic; public traffic; wireless network crime scene; Cryptography; Demodulation; Encoding; IP networks; Law enforcement; Wireless networks
【Paper Link】 【Pages】:2660-2668
【Authors】: Yan Gao ; Chee Wei Tan ; Ying Huang ; Zheng Zeng ; P. R. Kumar
【Abstract】: Due to the rapid growth of real-time applications and the ubiquity of IEEE 802.11 MAC as a layer-2 protocol for wireless local area networks (WLANs), there is increasing interest in supporting quality of service (QoS) in such WLANs. In this paper, we develop a simple yet sufficiently accurate analytical model for predicting queueing delay in non-homogeneous random-access-based WLANs. This leads to tractable solutions for meeting the queueing delay specifications of a number of flows. Using this model, we address the feasibility problem of whether the mean delays required by a set of inelastic flows can be guaranteed in WLANs. Based on the model and feasibility analysis, we further develop an optimization technique to minimize the delays for inelastic flows. We present extensive simulation results to demonstrate the accuracy of our model and the performance of the algorithms.
【Keywords】: access protocols; delays; quality of service; telecommunication standards; wireless LAN; IEEE 802.11 MAC protocol; delay guarantees; inelastic flows; layer-2 protocol; non-homogeneous flows; quality of service; queueing delay; wireless LAN; Analytical models; Approximation methods; Delay; IEEE 802.11 Standards; Quality of service; Real time systems; Wireless LAN
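As a toy analogue of the feasibility question (a hypothetical sketch; the paper's model captures 802.11 contention and non-homogeneous flows, which this crude shared-server view does not): in an M/M/1 queue with arrival rate λ and service rate μ, the mean sojourn time is W = 1/(μ - λ), and one can check whether the aggregate delay meets every flow's requirement.

```python
def mm1_delay(lam, mu):
    """Mean sojourn time of an M/M/1 queue: W = 1 / (mu - lam);
    infinite when the queue is unstable (lam >= mu)."""
    return float("inf") if lam >= mu else 1.0 / (mu - lam)

def feasible(flows, mu):
    """flows: list of (arrival_rate, max_mean_delay). Crudely treats the
    medium as one shared M/M/1 server, so every flow sees the same mean
    delay; feasibility requires that delay to meet every requirement."""
    total = sum(rate for rate, _ in flows)
    w = mm1_delay(total, mu)
    return all(w <= req for _, req in flows)
```

For example, two flows with rates 0.2 and 0.3 at μ = 2 see W = 1/1.5 ≈ 0.67, which satisfies deadlines of 2.0 and 1.0 but not a deadline of 0.5.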
【Paper Link】 【Pages】:2669-2677
【Authors】: Zheng Zeng ; Yan Gao ; Kun Tan ; P. R. Kumar
【Abstract】: IEEE 802.11 DCF is the dominant protocol used in existing WLANs. However, the efficiency of DCF progressively degrades as the number of contending clients and the wireless link rate increase. To address this issue, in this paper we present a distributed random media access protocol, named CHAIN, which significantly improves the uplink performance of WLANs. CHAIN mainly uses overhearing to coordinate clients in a network, and thus introduces little control overhead. The key in CHAIN is a novel piggyback transmission opportunity. In CHAIN, clients maintain a precedence relation among one another, and a client can immediately transmit a new packet after it overhears a successful transmission by its predecessor, without going through the regular contention process. When the network load is low, CHAIN behaves similarly to DCF; but when the network becomes congested, clients automatically start chains of transmissions to improve efficiency. CHAIN is derived from DCF and coexists gracefully with it. Moreover, it possesses all the advantages of the 802.11 DCF standard: simplicity, robustness, and scalability. We analytically prove the correctness and fairness of CHAIN. Our extensive simulations on J-SIM verify our analytical results, and demonstrate a significant performance gain of CHAIN over DCF.
【Keywords】: access protocols; wireless LAN; CHAIN; IEEE 802.11; WLAN; distributed random media access protocol; minimum controlled coordination; piggyback transmission opportunity; random access MAC; Analytical models; Law; Media; Media Access Protocol; Wireless LAN; Wireless communication
【Paper Link】 【Pages】:2678-2686
【Authors】: Yong Cui ; Tianze Ma ; Xiuzhen Cheng
【Abstract】: Public-area WLANs are WLANs deployed in public areas such as classrooms and office buildings to provide Internet connections. Nevertheless, such a service may not always be available because of limited AP coverage, poor signal strength, or password authentication. Multi-hop access is a feasible approach that lets users without direct AP access rely on other online users as relays for data forwarding. This paper employs credit exchange for multi-hop access in public-area WLANs to encourage users to cooperate, and proposes a complete pricing framework. We first investigate a revenue model to define the profit of a relay. Next we point out that cutoff bandwidth allocation is a crucial issue in the pricing strategy. Optimal bandwidth allocation schemes are then proposed for two bandwidth demand models. Following that, we consider a more practical scenario where the relay's bandwidth capacity and the client's bandwidth demand are bounded, and propose two heuristic algorithms, SRMC and MRMC, to compute the bandwidth allocation and/or the relay-client association. An extensive simulation study has been performed to validate our design.
【Keywords】: Internet; bandwidth allocation; heuristic programming; multi-access systems; pricing; wireless LAN; Internet connections; bandwidth demand models; data forwarding; heuristic algorithms; multihop access pricing strategy; optimal bandwidth allocation scheme; password authentication; public area WLAN; relay bandwidth capacity; relay-client association; signal strength; Bandwidth; Bismuth; Channel allocation; Cost function; Mathematical model; Pricing; Relays
【Paper Link】 【Pages】:2687-2695
【Authors】: Yi Gai ; Hua Liu ; Bhaskar Krishnamachari
【Abstract】: We study a novel game theoretic incentive mechanism design problem for network congestion control in the context of selfish users sending data through a single store-and-forward router (a.k.a. “server” in this work). The scenario is modeled as an M/M/1 queueing game with each user (a.k.a. “player”) aiming to optimize a tradeoff between throughput and delay in a selfish distributed manner. We first show that the original game has an inefficient unique Nash Equilibrium (NE). In order to improve the outcome efficiency, we propose an incentivizing packet dropping scheme that can be easily implemented at the server. We then show that if the packet dropping scheme is a function of the sum of arrival rates, we have a modified M/M/1 queueing game that is an ordinal potential game with a unique NE. In particular, for a linear packet dropping scheme, which is similar to the Random Early Detection (RED) algorithm used with TCP, we show that there exists a unique Nash Equilibrium. For this scheme, the social welfare (expressed either as the summation of utilities of all players or log summation of utilities of all players) at the equilibrium point can be arbitrarily close to the social welfare at the global optimal point. Finally, we show that the simple best response dynamic converges to this unique efficient Nash Equilibrium.
【Keywords】: game theory; incentive schemes; network servers; queueing theory; telecommunication congestion control; M/M/1 queues; Nash equilibrium; TCP; equilibrium point; game theoretic incentive mechanism design; global optimal point; linear packet dropping scheme; network congestion control; packet dropping-based incentive mechanism; random early detection; selfish users; social welfare; store-and-forward router; sum of arrival rates; Context; Delay; Games; Nash equilibrium; Optimization; Servers; Throughput
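The linear dropping scheme referenced in this abstract can be written down directly (an illustrative sketch; the thresholds and the `accepted_rate` helper are hypothetical): the drop probability is a piecewise-linear function of the sum of the users' arrival rates, in the style of RED.

```python
def drop_prob(total_rate, lo, hi):
    """RED-style linear dropping: 0 below `lo`, 1 above `hi`, and a
    linear ramp in between. The scheme depends only on the SUM of
    arrival rates, which is the property the abstract says turns the
    modified M/M/1 game into an ordinal potential game."""
    if total_rate <= lo:
        return 0.0
    if total_rate >= hi:
        return 1.0
    return (total_rate - lo) / (hi - lo)

def accepted_rate(rates, lo, hi):
    """Aggregate arrival rate actually admitted to the server."""
    total = sum(rates)
    return total * (1.0 - drop_prob(total, lo, hi))
```

The ramp means that as selfish users push their joint rate past `lo`, each additional packet is increasingly likely to be discarded, which is what counteracts the inefficiency of the original Nash equilibrium.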
【Paper Link】 【Pages】:2696-2704
【Authors】: Shaolei Ren ; Jaeok Park ; Mihaela van der Schaar
【Abstract】: In order to understand the complex interactions between different technologies in a communications market, it is of fundamental importance to understand how technologies affect the demand of users and the competition between network service providers (NSPs). To this end, we analyze user subscription dynamics and revenue maximization in monopoly and duopoly communications markets. First, by considering a monopoly market with only one NSP, we investigate the impact of technologies on the users' dynamic subscription. It is shown that, for any price charged by the NSP, there exists a unique equilibrium point of the considered user subscription dynamics. We also provide a sufficient condition under which the user subscription dynamics converges to the equilibrium point starting from any initial point. We then derive upper and lower bounds on the optimal price and market share that maximize the NSP's revenue. Next, we turn to the analysis of a duopoly market and show that, for any charged prices, the equilibrium point of the considered user subscription dynamics exists and is unique. As in a monopoly market, we derive a sufficient condition on the technologies of the NSPs that ensures that the user subscription dynamics reach the equilibrium point. Then, we model the NSP competition using a non-cooperative game, in which the two NSPs choose their market shares independently, and provide a sufficient condition that guarantees the existence of at least one pure Nash equilibrium in the market competition game.
【Keywords】: game theory; monopoly; telecommunication industry; Nash equilibrium; duopoly communications markets; market competition game; monopoly communications markets; network service providers; noncooperative game; revenue maximization; user subscription dynamics; Convergence; Cost accounting; Degradation; Games; Monopoly; Quality of service; Subscriptions
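The convergence claims above have a familiar fixed-point flavor, which can be sketched generically (the update map below is hypothetical, not the paper's subscription model): when the subscription-share update is a contraction, iterating it from any starting share reaches the unique equilibrium.

```python
def fixed_point(f, x0=0.5, tol=1e-10, max_iter=10_000):
    """Iterate x <- f(x) until successive values differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        nx = f(x)
        if abs(nx - x) < tol:
            return nx
        x = nx
    return x

# Hypothetical update: a fraction 0.3 of users always subscribe and a
# fraction 0.4 of current subscribers stay, so the equilibrium share
# solves x = 0.3 + 0.4 x, i.e. x = 0.3 / (1 - 0.4) = 0.5.
update = lambda share: 0.3 + 0.4 * share
```

Because the slope 0.4 is below 1, the iteration converges to 0.5 from any initial share, mirroring the "from any initial point" convergence condition in the abstract.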
【Paper Link】 【Pages】:2705-2713
【Authors】: Chandramani Kishore Singh ; Eitan Altman
【Abstract】: We study in this paper the problem of sharing the cost of a multicast service in a wireless network. In a wireless network, multiple users can decode the same signal from the base station provided the received power exceeds a certain minimum threshold. In this work, the cost of broadcasting is taken to be the transmission power. We begin by proposing various schemes to share the cost, and study their properties. We then study the association problem, where a user has the option of either joining the multicast group or opting for a unicast connection at a given cost. Next, we extend the association problem to scenarios with partial information: a user knows his own power requirement, but has to make a decision without knowledge of the number of other users in the network and their requirements. The unicast alternative that each mobile has results in limitations on the coverage (area covered by the multicast service) and the capacity (number of users connected to the multicast service). We derive the expected capacity and coverage as a function of the cost-sharing mechanism. We finally extend the model to the case where users have the option of joining any one of a given set of multicast service providers. A user's power requirement depends on its association, but its cost share depends on the association profile of all the users. We study the joint problem of cost allocation and equilibrium association.
【Keywords】: game theory; multicast communication; radio networks; multicast service; non-cooperative association problem; transmission power; wireless multicast coalition game; wireless network; Context; Cost function; Games; Quality of service; Resource management; Unicast; Wireless networks
【Paper Link】 【Pages】:2714-2722
【Authors】: Sha Hua ; Hang Liu ; Mingquan Wu ; Shivendra S. Panwar
【Abstract】: Recently, a new paradigm for cognitive radio networks has been advocated, in which primary users (PUs) recruit some secondary users (SUs) to cooperatively relay the primary traffic. However, all existing work on such cooperative cognitive radio networks (CCRNs) operates in the temporal domain: the PU needs to give a dedicated portion of channel access time to the SUs for transmitting the secondary data in exchange for the SUs' cooperation, which limits the performance of both PUs and SUs. On the other hand, Multiple Input Multiple Output (MIMO) enables the transmission of multiple independent data streams and the suppression of interference via beamforming in the spatial domain over MIMO antenna elements, providing significant performance gains. Research has not yet explored how to take advantage of the MIMO technique in CCRNs. In this paper, we propose a novel MIMO-CCRN framework, which enables the SUs to utilize the capability provided by MIMO to cooperatively relay traffic for the PUs while concurrently accessing the same channel to transmit their own traffic. We design the MIMO-CCRN architecture by considering both the temporal and spatial domains to improve spectrum efficiency. Further, we provide a theoretical analysis of the primary and secondary transmission rates under MIMO cooperation and then formulate an optimization model based on a Stackelberg game to maximize the utilities of PUs and SUs. Evaluation results show that both primary and secondary users achieve higher utility by leveraging MIMO spatial cooperation in MIMO-CCRN than with conventional schemes.
【Keywords】: MIMO communication; antennas; cognitive radio; cooperative communication; game theory; MIMO antenna; Stackelberg game; channel access time; cooperative cognitive radio networks; multiple input multiple output transmission; optimization model; Decoding; Interference; MIMO; Receivers; Relays; Transmitting antennas
【Paper Link】 【Pages】:2723-2731
【Authors】: Tengyi Zhang ; Danny H. K. Tsang
【Abstract】: Due to the problem of spectrum scarcity and the large energy consumption of wireless communications, designing energy-efficient Cognitive Radio Networks (CRNs) becomes important and necessary. In this paper, we consider the problem of optimal Cooperative Sensing Scheduling (CSS) and parameter design to achieve energy efficiency in CRNs using the framework of a Partially Observable Markov Decision Process (POMDP). In particular, we consider the CSS problem for a CRN with M Secondary Users (SUs) and N primary channels to determine how many SUs should be assigned to sense each channel in order to maximize an objective function related to energy efficiency. By assigning more SUs to sense one channel, higher sensing accuracy can be gained; however, by spreading the SUs out to sense more channels, spectrum opportunities can be better exploited. The CSS problem is formulated as a combinatorial optimization problem. While such problems are generally hard and can only be solved by numerical methods with high computational complexity, in this paper we provide a detailed analysis whose results yield useful and interesting insights. The optimality of myopic CSS is proved for the case of two channels, and is also conjectured for the general case. We also study the tradeoff between the sensing and transmission durations. In addition, the structure of the optimal sensing time that maximizes the energy efficiency objective is analyzed, the condition for the optimality of the myopic sensing time is obtained, and the performance upper bound of the myopic policy is derived. Based on the numerical results, we show that by carefully tuning a punishment parameter, better energy efficiency can be achieved.
【Keywords】: Markov processes; cognitive radio; computational complexity; energy conservation; optimisation; scheduling; combinatorial optimization problem; computation complexity; energy efficiency; energy-efficient cognitive radio networks; myopic policy; objective function; optimal cooperative sensing scheduling; partially observable Markov decision process; upper bound; wireless communications; Cascading style sheets; Cognitive radio; Markov processes; Optimal scheduling; Processor scheduling; Sensors
【Paper Link】 【Pages】:2732-2740
【Authors】: Chengzhi Li ; Huaiyu Dai
【Abstract】: Spectrum sharing systems such as cognitive radio networks have drawn much attention recently due to their potential to resolve the conflict between the increasing demand for spectrum and spectrum shortage. Such systems are typically composed of primary and secondary networks; the configuration of the latter depends on spectrum opportunity unexploited in the former. In this paper we explore the characteristics of the single hop transport throughput (STT) of the secondary network with outage constraints imposed on both networks. STT is a new metric that inherits the merits of both the traditional transport capacity and another popular metric, transmission capacity, incorporating transmission distance and outage probability into a uniform framework. We first derive the limit of STT, the single hop transport capacity (STC), together with a practical upper bound for it. Then we investigate STT with secondary receivers randomly located in the field of interest. Three models for the selection of receivers are considered: optimally selected, randomly selected, or the nearest. Studying these models provides a comprehensive view of achievable secondary network throughput, and offers insights into the configuration of secondary networks. In addition, the broadcast transport throughput (BTT) of the secondary networks is also investigated as an extension of STT, and its similarity with STT in the nearest neighbor model is revealed.
【Keywords】: cognitive radio; probability; radio spectrum management; broadcast transport throughput; cognitive radio network; nearest neighbor model; outage constraint; outage probability; primary network; secondary network; single hop transport capacity; single hop transport throughput; spectrum sharing system; transmission capacity; transmission distance; Capacity planning; Interference; Measurement; Receivers; Throughput; Transmitters; Upper bound
【Paper Link】 【Pages】:2741-2749
【Authors】: Chonggang Wang ; Kazem Sohraby ; Rittwik Jana ; Lusheng Ji ; Mahmoud Daneshmand
【Abstract】: Existing studies have demonstrated that uneven and dynamic usage patterns by the primary users of license-based wireless communication systems can often lead to temporal and spatial spectrum underutilization. This provides an opportunity for the secondary users (SUs) to tap into underutilized frequency bands provided that they are capable of cognitively accessing systems without colliding with or impacting the performance of the primary users (PUs). When there are multiple networks with spare spectrum, secondary users can opportunistically choose the best network to access, subject to certain constraints. In cognitive radio systems, this is referred to as the network selection problem for secondary users. This paper develops a Markov queuing model to obtain the maximum allowable arrival rate of secondary users subject to a target collision probability for the primary users. Based on this model, we design a novel Collision-Constrained Network Selection (CCNS) method that maximizes secondary users' throughput subject to a given PU collision probability. Further, we propose two approaches, referred to as CCNS-Greedy and CCNS-Energy, which aim to reduce collision probability and to decrease energy consumption of secondary users when the system is underloaded. This, however, has one practical drawback in that the proposed CCNS method depends on PU and SU traffic characteristics such as inter-arrival time and service time, which might not be available in real scenarios. We next illustrate that a MEAsurement-based Networks Selection (MEANS) scheme can be used to perform network selection for secondary users based on online measurement of the PU collision probability of each network. We evaluated the performance based on extensive simulation, which conclusively shows that the proposed schemes achieve the best performance in terms of resulting PU collision probability, SU throughput, and SU energy consumption, when compared to both Random and Greedy strategies.
【Keywords】: Markov processes; cognitive radio; probability; radio spectrum management; telecommunication congestion control; CCNS method; Markov queuing model; cognitive radio; collision constrained network selection; license based wireless communication; measurement based networks selection; online measurement; spatial spectrum underutilization; target collision probability; temporal spectrum underutilization; Algorithm design and analysis; Energy measurement; Irrigation; Queueing analysis; Switches; Tin
【Paper Link】 【Pages】:2750-2758
【Authors】: Zhen Ren ; Gang Zhou ; Andrew J. Pyles ; Matthew Keally ; Weizhen Mao ; Haining Wang
【Abstract】: Body sensor networks (BSNs) have been developed for a set of performance-critical applications, including smart healthcare, assisted living, emergency response, athletic performance evaluation, and interactive controls. Many of these applications require stringent performance assurance in terms of communication throughput and bounded time delay. While solutions exist in literature for providing joint throughput and time delay assurance by proposing specific MAC protocols or extensions, we provide this joint assurance in a novel radio-agnostic manner. In our approach, the underlying MAC and PHY layers can be heterogeneous and their details do not need to be known to upper layers like the resource management. Such a radio-agnostic performance assurance is critical because a range of radio platforms are adopted for practical body sensor usage. Our approach is based on a group-polling scheme that is essential for radio-agnostic BSN design. Through theoretical analysis, we prove that with the group-polling scheme, achieving joint throughput and time delay assurance is an NP-hard problem. For practical system deployment, we propose the BodyT2 framework that assures throughput and time delay performance in a heterogeneous BSN. Through both TelosB mote lab tests and real body experiments in an Android phone-centric BSN, we demonstrate that BodyT2 achieves superior performance over existing solutions.
【Keywords】: body area networks; body sensor networks; Android phone-centric BSN; BodyT2 framework; MAC protocols; NP-hard problem; assisted living; athletic performance evaluation; body sensor networks; bounded time delay; communication throughput; emergency response; group-polling scheme; heterogeneous BSN; interactive control; practical body sensor usage; radio platform; radio-agnostic BSN design; radio-agnostic performance assurance; resource management; smart healthcare; time delay assurance; time delay performance assurance
【Paper Link】 【Pages】:2759-2767
【Authors】: Miao Zhao ; Yuanyuan Yang
【Abstract】: In this paper, a three-layer framework is proposed for mobile data collection in wireless sensor networks, which includes the sensor layer, cluster head layer, and mobile collector (called SenCar) layer. The framework employs distributed load balanced clustering and MIMO uploading techniques, which is referred to as LBC-MU. The objective is to achieve good scalability, long network lifetime and low data collection latency. At the sensor layer, a distributed load balanced clustering (LBC) algorithm is proposed for sensors to self-organize themselves into clusters. In contrast to existing clustering methods, our scheme generates multiple cluster heads in each cluster to balance the work load and facilitate MIMO data uploading. At the cluster head layer, the inter-cluster transmission range is carefully chosen to guarantee the connectivity among the clusters. Multiple cluster heads within a cluster cooperate with each other to perform energy-saving inter-cluster communications. Through inter-cluster transmissions, cluster head information is forwarded to the SenCar for its moving trajectory planning. At the mobile collector layer, the SenCar is equipped with two antennas, which enables multiple cluster heads to simultaneously upload data to the SenCar. The trajectory planning for the SenCar is optimized to fully utilize MIMO uploading capability by properly selecting polling points in each cluster. By visiting each selected polling point, the SenCar can efficiently gather data from cluster heads and transport the data to the static data sink. Extensive simulations are conducted to evaluate the effectiveness of the proposed LBC-MU scheme. The results show that when each cluster has at most two cluster heads, LBC-MU can reduce the maximum number of transmissions a sensor performs by 90% and the average number of transmissions by 88% compared with the enhanced relay routing scheme. 
It also results in 25% shorter average data latency compared with the mobile collection scheme with single-head clustering.
【Keywords】: MIMO communication; antenna arrays; pattern clustering; resource allocation; telecommunication network planning; telecommunication network reliability; telecommunication network routing; wireless sensor networks; LBC-MU scheme; MIMO uploading techniques; SenCar; antennas; cluster head information; cluster head layer; distributed load balanced clustering algorithm; energy-saving intercluster communications; enhanced relay routing scheme; load balanced clustering; low data collection latency; mobile collector layer; mobile data collection; mobile data gathering framework; moving trajectory planning; multiple cluster heads; network lifetime; polling points; sensor layer; single-head clustering; static data sink; wireless sensor networks; Clustering algorithms; MIMO; Mobile communication; Peer to peer computing; Relays; Scalability; Trajectory
【Paper Link】 【Pages】:2768-2776
【Authors】: Xuefeng Liu ; Jiannong Cao ; Steven Lai ; Chao Yang ; Hejun Wu ; Youlin Xu
【Abstract】: In recent years, research on using wireless sensor networks (WSNs) for structural health monitoring (SHM) has attracted increasing attention. Unlike other monitoring applications, detection of possible structure damage requires a significant amount of domain knowledge that computer science researchers are usually unfamiliar with. As a result, most previous work in WSN-based SHM was done by researchers in civil engineering. However, civil researchers often tend to solve practical engineering problems but rarely consider designing a system in an optimal way, particularly when the limited wireless bandwidth and restricted resources of WSNs need to be addressed. Through collaboration with civil researchers, we demonstrate that optimization design can significantly help improve the performance of a WSN-based SHM system. We consider a fundamental problem in SHM: modal analysis, which is used to obtain the dynamic structural vibration characteristics. A cluster-based modal analysis approach is adopted. In each cluster, the vibration characteristics are identified and then assembled together. Different from other applications, clustering in this approach should meet some extra requirements of modal analysis. Moreover, cluster size should be optimized to minimize the total energy consumption. This clustering problem is formally formulated and proven to be NP-complete. Two centralized algorithms and one distributed algorithm are proposed to solve the problem. The effectiveness and efficiency of the proposed cluster-based modal analysis along with the clustering algorithms are evaluated using both simulation and experiments.
【Keywords】: condition monitoring; modal analysis; optimisation; structural engineering computing; vibration measurement; vibrations; wireless sensor networks; SHM; WSN; centralized algorithm; cluster-based modal analysis; distributed algorithm; dynamic structural vibration; energy efficient clustering; optimization design; structural health monitoring; wireless sensor network; Computer science; Energy consumption; Modal analysis; Monitoring; Shape; Vibrations; Wireless sensor networks; Clustering; In-network Processing; Structural Health Monitoring; WSN
【Paper Link】 【Pages】:2777-2785
【Authors】: Yaling Yang ; Yujun Li ; Mengshu Hou
【Abstract】: In this paper, we study deliverability of greedy routing in wireless sensor networks, where nodes are distributed over a disk area according to a homogeneous Poisson point process. In our work, we model the level of deliverability of a sensor network as the probability that all sensor nodes can successfully send their data to a base station, which is named probability of guaranteed delivery. We study the relationship between the critical transmission power of sensor nodes and the probability of guaranteed delivery, such that when all sensor nodes transmit with a higher power than the critical transmission power, the sensor network can reach the desired probability of guaranteed delivery. We identify two very tight analytical upper bounds on the critical transmission power for the idealistic u-disk model and the realistic log-normal shadowing model respectively. The correctness and tightness of these two upper bounds are verified by extensive simulations.
【Keywords】: probability; stochastic processes; telecommunication network routing; wireless sensor networks; 2D wireless sensor network; base station; critical transmission power; greedy routing; homogeneous Poisson point process; idealistic udisk model; log-normal shadowing model; many-to-one deliverability; probability of guaranteed delivery; USA Councils
【Paper Link】 【Pages】:2786-2794
【Authors】: Zhe Yang ; Yuanqian Luo ; Lin Cai
【Abstract】: We introduce an approach called network modulation which gives us a new dimension to improve wireless network throughput and save energy. In current wireless systems, when a source transmits data to the receiver through a single-hop or multi-hop wireless path, the physical layer modulates and demodulates the information bits hop-by-hop, and the transmission over each hop is treated the same as in a point-to-point communication link. Given the broadcast nature of the wireless medium and the wide variation of wireless channel quality, we let a sender transmit messages to multiple receivers simultaneously, using a software mapping technology, called network modulation, to redefine the constellation of typical quadrature amplitude modulation (QAM) schemes. As the software-based network modulation schemes do not require specialized communication hardware, they can be implemented with low cost and high flexibility. Network modulation can be used to improve network performance in a wide range of scenarios, for anycast (broadcast, multicast and unicast) services, one-way or two-way traffic, and single-hop or multi-hop wireless paths, in infrastructure or ad hoc networks. The minimum requirement for applying network modulation is that there are no fewer than three nodes within each other's transmission ranges, so we can consider modulation, topology control, resource allocation, and routing jointly.
【Keywords】: demodulation; quadrature amplitude modulation; radio networks; receivers; telecommunication links; wireless channels; QAM schemes; ad hoc networks; anycast services; broadcast services; infrastructure networks; multi-hop wireless paths; multicast services; multiple receivers; network performance; one-way traffic; point-to-point communication link; quadrature amplitude modulation; resource allocation; single-hop wireless paths; software mapping; software-based network modulation; topology control; two-way traffic; unicast services; wireless channel; wireless network; wireless systems; Bit error rate; Modulation; Relays; Signal to noise ratio; Throughput; Wireless networks
【Paper Link】 【Pages】:2795-2803
【Authors】: Zih-Ci Lin ; Huai-Lei Fu ; Phone Lin
【Abstract】: In wireless Multicast Broadcast Service (MBS), the common channel is used to multicast the MBS content to the Mobile Stations (MSs) on the MBS calls within the coverage area of a Base Station (BS), which causes interference to the dedicated channels serving the traditional calls, and degrades the system capacity. The MBS zone technology is proposed in Mobile Communications Network (MCN) standards to improve system capacity and reduce the handoff delay for the wireless MBS calls. In the MBS zone technology, a group of BSs form an MBS zone, where the macro diversity is applied in the MS, the BSs synchronize to transmit the MBS content on the same common channel, interference caused by the common channel is reduced, and the MBS MSs need not perform handoff while moving between the BSs in the same MBS zone. However, when there is no MBS MS in a BS with the MBS zone technology, the transmission on the common channel wastes the bandwidth of the BS. It is an important issue to determine the condition for the MBS Controller (MBSC) to enable the MBS zone technology by considering the QoS for traditional calls and MBS calls. In this paper, we propose two Dynamic Channel Allocation schemes: DCA and EDCA by considering the condition for enabling the MBS zone technology. Analysis and simulation experiments are conducted to investigate the performance of DCA and EDCA.
【Keywords】: broadcast channels; channel allocation; mobility management (mobile radio); multicast communication; quality of service; radiofrequency interference; telecommunication services; EDCA; MBS controller; MBS zone technology; MBSC; MCN standards; QoS; base station; dynamic channel allocation; handoff delay; mobile communications network standards; mobile stations; wireless multicast broadcast service; wireless zone-based multicast service
【Paper Link】 【Pages】:2804-2812
【Authors】: Fan Wu ; Nikhil Singh ; Nitin H. Vaidya ; Guihai Chen
【Abstract】: Due to the limitation of radio spectrum resource and fast growing of wireless applications, careful channel allocation is highly needed to mitigate the performance degradation of wireless networks because of interference among different users. While most of the existing works consider allocating fixed-width channels, combining contiguous channels may provide an alternative way to better utilize the available channels. In this paper, we study the problem of adaptive-width channel allocation from a game-theoretic point of view, in which the nodes are rational and always pursue their own objectives. We first model the problem as a strategic game, and show the existence of Nash equilibrium (NE), when there is no exogenous factor to influence players' behavior. We further propose a charging scheme to influence the players' behavior, by which the system is guaranteed to converge to a Dominant Strategy Equilibrium (DSE), a solution concept that gives participants much stronger incentives. We show that, when the system converges to a DSE, it also achieves global optimality, in terms of system-wide throughput without starvation. Numerical results verify that with our charging scheme, the system-wide throughput obtained is higher as compared to the throughput obtained when system is in NE.
【Keywords】: channel allocation; game theory; radio networks; DSE; NE; Nash equilibrium; adaptive-width channel allocation; dominant strategy equilibrium; game theory; multiradio wireless network; noncooperative network; radio spectrum resource; Throughput
【Paper Link】 【Pages】:2813-2821
【Authors】: Mahmoud Al-Ayyoub ; Himanshu Gupta
【Abstract】: In cellular networks, a recent trend is to make spectrum access dynamic in the spatial and temporal dimensions, for the sake of efficient utilization of spectrum. In such a model, the spectrum is divided into channels and periodically allocated to competing base stations using an auction-based market mechanism. An “efficient” auction mechanism is essential to the success of such a dynamic spectrum access model. Two of the key objectives in designing an auction mechanism are “truthfulness” and revenue maximization. In this article, we design a polynomial-time spectrum auction mechanism that is truthful and yields an allocation with O(1)-approximate expected revenue, in the Bayesian setting. Our mechanism generalizes to general interference models. To the best of our knowledge, ours is the first work to design a polynomial-time truthful spectrum auction mechanism with a performance guarantee on the expected revenue. We demonstrate the performance of our designed mechanism through simulations.
【Keywords】: cellular radio; radio spectrum management; approximate revenue; auction-based market mechanism; cellular networks; dynamic spectrum access model; efficient auction mechanism; polynomial-time spectrum auction mechanism; revenue maximization; truthful spectrum auctions; Approximation methods; Base stations; Bayesian methods; Color; Cost accounting; Interference; Resource management
【Paper Link】 【Pages】:2822-2830
【Authors】: Mohit Thakur ; Nadia Fawaz ; Muriel Médard
【Abstract】: We consider the broadcast relay channel (BRC), where a single source transmits to multiple destinations with the help of a relay, in the limit of a large bandwidth. We address the problem of optimal relay positioning and power allocations at source and relay, to maximize the multicast rate from source to all destinations. To solve such a network planning problem, we develop a three-faceted approach based on an underlying information theoretic model, computational geometric aspects, and network optimization tools. Firstly, assuming superposition coding and frequency division between the source and the relay, the information theoretic framework yields a hypergraph model of the wideband BRC, which captures the dependency of achievable rate-tuples on the network topology. As the relay position varies, so does the set of hyperarcs constituting the hypergraph, giving the optimization problem its combinatorial nature. We show that the convex hull C of all nodes in the 2-D plane can be divided into disjoint regions corresponding to distinct hyperarc sets. These sets are obtained by superimposing all k-th order Voronoi tessellations of C. We propose an easy and efficient algorithm to compute all hyperarc sets, and prove they are polynomially bounded. Then, we circumvent the combinatorial nature of the problem by introducing continuous switch functions, which allow adapting the network hypergraph in a continuous manner. Using this switched hypergraph approach, we model the original problem as a continuous yet non-convex network optimization program. Ultimately, availing of the techniques of geometric programming and p-norm surrogate approximation, we derive a good convex approximation. We provide a detailed characterization of the problem for collinearly located destinations, and then give a generalization for arbitrarily located destinations. Finally, we show strong gains for the optimal relay positioning compared to seemingly interesting positions.
【Keywords】: broadcast channels; multicast communication; telecommunication network topology; frequency division; hypergraph model; low SNR broadcast relay channels; network planning; optimal relay location; power allocation; superposition coding; AWGN; Optimization; Relays; Signal to noise ratio; Switches; Topology; Wideband; Low SNR; computational geometry; network optimization
【Paper Link】 【Pages】:2831-2839
【Authors】: Pan Li ; Xiaoxia Huang ; Yuguang Fang
【Abstract】: Wireless cellular networks are large-scale networks in which asymptotic capacity investigation is no longer a cliché. A substantial body of work has been carried out to improve the capacity of cellular networks by introducing ad hoc communications, resulting in the so-called multihop cellular networks. Most of the previous research allows ad hoc transmissions between certain source and destination pairs to alleviate base stations' relay burden. However, since reports show that Internet data traffic is becoming more and more dominant in cellular networks, we explore in this paper the capacity of multihop cellular networks with all traffic going through base stations and ad hoc transmissions only acting as relay. We first investigate the capacity of regular multihop cellular networks where both nodes and base stations are regularly placed. By fully exploiting the link rate variability, we find that multihop cellular networks can have higher per-node throughput than traditional cellular networks by a scaling factor of log2n. Then, for the first time we extend our study to the capacity of heterogeneous multihop cellular networks where nodes are distributed according to a general Inhomogeneous Poisson Process and base stations are randomly placed. We show that under certain conditions multihop cellular networks can also outperform traditional cellular networks by a scaling factor of log2n. Moreover, both throughput-fairness and bandwidth-fairness are considered as fairness constraints for both kinds of networks.
【Keywords】: Internet; ad hoc networks; cellular radio; channel capacity; stochastic processes; telecommunication traffic; Internet data traffic; ad hoc communications; asymptotic capacity investigation; bandwidth fairness; base stations relay burden; capacity scaling; destination pairs; fairness constraints; inhomogeneous Poisson process; link rate variability; multihop cellular networks; source pairs; throughput fairness; Ad hoc networks; Bandwidth; Base stations; Downlink; Relays; Spread spectrum communication; Throughput
【Paper Link】 【Pages】:2840-2848
【Authors】: Ionut Trestian ; Supranamaya Ranjan ; Aleksandar Kuzmanovic ; Antonio Nucci
【Abstract】: Smartphones have changed the way people communicate. Most prominently, using commonplace mobile device features (e.g., high resolution cameras), they started producing and uploading large amounts of content that increases at an exponential pace. In the absence of viable technical solutions, some cellular network providers are considering to start charging special usage fees to address the problem. Our contributions are twofold. First, we find that the user-generated content problem is a user-behavioral problem. By analyzing user mobility and data logs of close to 2 million users of a cellular network, we find that (i) users upload content from a small number of locations, typically corresponding to their home or work locations; (ii) because such locations are different for different users, we find that the problem appears ubiquitous, since user-generated content uploads grow exponentially at most locations. However, we also find that (iii) there exists a significant lag between content generation and uploading times. For example, we find that 55% of content that is uploaded via mobile phones is at least 1 day old. Second, based on the above insights, we propose a new cellular network architecture. Our approach proposes capacity upgrades at a select number of locations called Drop Zones. Although not particularly popular for uploads originally, Drop Zones seamlessly fall within the natural movement patterns of a large number of users. They are therefore better suited for uploading larger quantities of content in a postponed manner. We design infrastructure placement algorithms and demonstrate that by upgrading infrastructure in only 963 base stations across the entire United States, it is possible to deliver 50% of total content via the Drop Zones.
【Keywords】: cellular radio; data loggers; mobile handsets; mobility management (mobile radio); cellular network architecture; data logs; drop zones; mobile device; mobile networks; smartphones; user mobility; user-behavioral problem; user-generated content problem; Delay; Facebook; Greedy algorithms; Mobile communication; Mobile handsets; User-generated content; Videos
【Paper Link】 【Pages】:2849-2857
【Authors】: Urtzi Ayesta ; Peter Jacko ; Vladimir Novak
【Abstract】: We analyze a comprehensive model for multi-class job scheduling accounting for user abandonment, with the objective of minimizing the total discounted or time-average sum of linear holding costs and abandonment penalties. We assume geometric service times and Bernoulli abandonment probabilities. We solve analytically the case in which there are 1 or 2 users in the system to obtain an optimal index rule. For the case with more users we use recent advances from the restless bandits literature to obtain a new simple index rule, denoted by AJN, which we propose to use also in the system with arrivals. In the problem without abandonment, the proposed rule recovers the cμ-rule which is well-known to be optimal both without and with arrivals. Under certain conditions, our rule is equivalent to the cμ/θ-rule, which was recently proposed and shown to be asymptotically optimal in a multi-server system with overload conditions. We present results of an extensive computational study that suggest that our rule is almost always superior or equivalent to other rules proposed in the literature, and is often optimal.
【Keywords】: scheduling; Bernoulli abandonment probabilities; abandonment penalties; geometric service times; linear holding costs; multiclass job scheduling; multiserver system; nearly-optimal index rule; overload conditions; user abandonment; Indexes; Joints; Markov processes; Numerical models; Optimal scheduling; Servers
【Paper Link】 【Pages】:2858-2866
【Authors】: Meghana M. Amble ; Parimal Parag ; Srinivas Shakkottai ; Lei Ying
【Abstract】: The rapid increase of content delivery over the Internet has led to the proliferation of content distribution networks (CDNs). Management of CDNs requires algorithms for request routing, content placement, and eviction in such a way that user delays are small. We abstract the system of frontend source nodes and backend caches of the CDN in the likeness of the input and output nodes of a switch. In this model, queues of requests for different pieces of content build up at the source nodes, which route these requests to a cache that contains the requested content. For each request that is routed to a cache, a corresponding data file is transmitted back to the requesting source across links of finite capacity. Caches are of finite size, and the content of the caches can be refreshed periodically. Our objective is to design policies for request routing, content placement and content eviction with the goal of small user delays. Stable policies ensure the finiteness of the request queues, while good policies also lead to short queue lengths. We first design a throughput-optimal algorithm that solves the routing-placement-eviction problem. The design yields insight into the impact of different cache refresh policies on queue length, and we construct throughput-optimal algorithms that engender short queue lengths. We illustrate the potential of our approach through simulations on different CDN topologies.
【Keywords】: Internet; queueing theory; telecommunication network management; telecommunication network routing; CDN; Internet; content distribution network; content placement; content-aware caching; request routing; routing-placement-eviction problem; short queue length; throughput-optimal algorithm; traffic management; Algorithm design and analysis; Delay; Media; Schedules; Stability analysis; Switches; Throughput
【Paper Link】 【Pages】:2867-2875
【Authors】: Mahdi Lotfinezhad ; Peter Marbach
【Abstract】: In this paper, we consider CSMA policies for scheduling packet transmissions in multihop wireless networks with one-hop traffic. The main contribution of the paper is to propose a novel CSMA policy, called Unlocking CSMA (U-CSMA), that makes it possible to obtain both high throughput and low packet delays in large wireless networks. More precisely, we show that for torus interference graph topologies with one-hop traffic, U-CSMA is throughput optimal and achieves order-optimal delay. For one-hop traffic, the delay performance is defined to be order-optimal if the delay stays bounded as the network size increases. Simulations that we conducted suggest (a) that U-CSMA is throughput-optimal and achieves order-optimal delay for general geometric interference graphs and (b) that U-CSMA can be combined with congestion control algorithms to maximize the network-wide utility and obtain order-optimal delay. To the best of our knowledge, this is the first time that a simple distributed scheduling policy has been proposed that is both throughput/utility optimal and achieves order-optimal delay.
【Keywords】: carrier sense multiple access; delays; geometry; interference suppression; radio networks; radiofrequency interference; scheduling; telecommunication congestion control; telecommunication network topology; telecommunication traffic; CSMA policy; U-CSMA; congestion control algorithm; distributed scheduling policy; general geometric interference graph topology; multihop wireless network; network-wide utility; one-hop traffic; order-optimal delay; packet delay; packet transmissions scheduling; throughput-optimal random access; unlocking CSMA; Delay; Interference; Lattices; Multiaccess communication; Schedules; Throughput; Wireless networks
【Paper Link】 【Pages】:2876-2884
【Authors】: Vinod Ramaswamy ; Diganto Choudhury ; Srinivas Shakkottai
【Abstract】: A large number of congestion control protocols have been proposed in the last few years, all having the same purpose: to divide available bandwidth resources among different flows in a fair manner. Each protocol operates on the paradigm of some conception of link price (such as packet losses or packet delays) that determines source transmission rates. Recent work on network utility maximization has brought forth the idea that the fundamental price or Lagrange multiplier for a link is proportional to the queue length at that link, and that different congestion metrics (such as delays or drops) are essentially ways of interpreting such a Lagrange multiplier. We thus ask the following question: Suppose that each flow has a number of congestion control protocols to choose from, which one (or combination) should it choose? We introduce a framework wherein each flow has a utility that depends on throughput, and also has a disutility that is some function of the queue lengths encountered along the route taken. Flows must choose a combination of protocols that would maximize their payoffs. We study both the socially optimal and the selfish cases to determine the loss of system-wide value incurred through selfish decision making, so characterizing the “price of heterogeneity”. We also propose tolling schemes that incentivize flows to choose one of several different virtual networks catering to particular needs, and show that the total system value is greater, hence making a case for the adoption of such virtual networks.
【Keywords】: protocols; telecommunication congestion control; Lagrange multiplier; congestion control protocols; congestion metrics; heterogeneous congestion controllers; mutual interaction; network utility maximization; queue length; selfish decision making; tolling scheme; virtual networks; Bandwidth; Delay; Equations; Games; Nash equilibrium; Protocols
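The link-price idea in the abstract above, that a link's Lagrange multiplier behaves like its queue length, can be illustrated with a minimal dual-gradient simulation. This is a sketch assuming log utilities and a single shared link; the names and setup are illustrative, not the paper's model:

```python
# Sketch: two log-utility flows share one link. Each source picks the rate
# maximizing U(x) - price * x, i.e. x = (U')^{-1}(price) = 1/price for
# U = log. The link price follows a dual (sub)gradient step, which mirrors
# queue growth whenever total demand exceeds capacity.

def num_single_link(capacity=10.0, steps=5000, gamma=0.001):
    price = 1.0
    for _ in range(steps):
        rates = [1.0 / price, 1.0 / price]          # per-flow best response
        excess = sum(rates) - capacity              # net queue growth rate
        price = max(price + gamma * excess, 1e-6)   # price tracks backlog
    return rates

# Both flows converge to a fair split of the 10-unit link: about 5.0 each.
print(num_single_link())
```

At equilibrium the price settles where total demand equals capacity, so the two identical flows split the link evenly, matching the fair-sharing view of queue-length prices.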
【Paper Link】 【Pages】:2885-2893
【Authors】: Hyungsoo Jung ; Shin Gyu Kim ; Heon Young Yeom ; Sooyong Kang ; Lavy Libman
【Abstract】: The design of an end-to-end Internet congestion control protocol that could achieve high utilization, fair sharing of bottleneck bandwidth, and fast convergence while remaining TCP-friendly is an ongoing challenge that continues to attract considerable research attention. This paper presents ACP, an Adaptive end-to-end Congestion control Protocol that achieves the above goals in high bandwidth-delay product networks where TCP becomes inefficient. The main contribution of ACP is a new form of congestion window control, combining the estimation of the bottleneck queue size and a measure of fair sharing. Specifically, upon detecting congestion, ACP decreases the congestion window size by the exact amount required to empty the bottleneck queue while maintaining high utilization, whereas increases of the congestion window are based on a “fairness ratio” metric of each flow, which ensures fast convergence to a fair equilibrium. We demonstrate the benefits of ACP using both ns-2 simulation and experimental measurements of a Linux prototype implementation. In particular, we show that the new protocol is TCP-friendly and allows TCP and ACP flows to coexist in various circumstances, and that ACP indeed behaves more fairly than other TCP variants under heterogeneous round-trip times (RTT).
【Keywords】: Internet; Linux; bandwidth allocation; queueing theory; telecommunication congestion control; transport protocols; Linux prototype; TCP; adaptive delay-based congestion control; adaptive end-to-end congestion control protocol; bandwidth-delay product network; bottleneck bandwidth sharing; bottleneck queue size estimation; congestion window control; end-to-end Internet congestion control protocol design; experimental measurement; ns-2 simulation; round-trip times; Bandwidth; Convergence; Delay; Estimation; Routing protocols; Throughput
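The window rule described in the ACP abstract above (shrink by the estimated bottleneck backlog on congestion, grow in proportion to a fairness ratio otherwise) can be written down compactly. The Vegas-style backlog estimator and the gain below are assumptions for illustration, not the paper's exact algorithm:

```python
# Hedged sketch of ACP-style congestion window control.
def acp_window_update(cwnd, rtt, base_rtt, congested, fairness_ratio, gain=1.0):
    if congested:
        # Backlog estimate (Vegas-style): packets this flow has queued at
        # the bottleneck is roughly cwnd * (rtt - base_rtt) / rtt.
        backlog = cwnd * (rtt - base_rtt) / rtt
        # Shrink by exactly the backlog, just enough to drain the queue.
        return max(cwnd - backlog, 1.0)
    # Otherwise grow additively, scaled by the flow's fairness ratio so
    # flows below their fair share catch up faster.
    return cwnd + gain * fairness_ratio
```

For example, a 100-packet window seeing the RTT double from its 100 ms base would drop to 50 packets on congestion, which is the estimated queue contribution.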
【Paper Link】 【Pages】:2894-2902
【Authors】: Jingyuan Wang ; Jiangtao Wen ; Jun Zhang ; Yuxing Han
【Abstract】: The Transport Control Protocol (TCP) has been widely used by wired and wireless Internet applications such as FTP, email and HTTP. Numerous congestion algorithms have been proposed to improve the performance of TCP in various scenarios, especially for high bandwidth-delay product (BDP) and wireless networks. Although different algorithms may achieve different performance improvements under different network conditions, designing a congestion algorithm that performs well across a wide spectrum of network conditions remains a great challenge. In this paper, we propose a novel congestion control algorithm, named TCP-FIT, which performs gracefully in both wireless and high BDP networks. The algorithm was inspired by parallel TCP, but with the important distinctions that only one TCP connection with one congestion window is established for each TCP session, and that no modifications to other layers (e.g. the application layer) of the end-to-end system need to be made. Extensive experimental results obtained using both network simulators as well as over “live” wired line, WiFi and 3G networks at different geographical locations and at different times of the day are presented. The experimental results show that the algorithm significantly outperforms other state-of-the-art algorithms while maintaining good fairness.
【Keywords】: 3G mobile communication; Internet; telecommunication congestion control; transport protocols; wireless LAN; 3G networks; TCP congestion control; TCP-FIT control algorithm; Wi-Fi; bandwidth-delay product; network simulators; transport control protocol; wired Internet application; wired line; wireless Internet application; wireless fidelity; Algorithm design and analysis; Bandwidth; Delay; Propagation delay; Protocols; Throughput; Wireless communication
【Paper Link】 【Pages】:2903-2911
【Authors】: Fei Chen ; Bezawada Bruhadeshwar ; Alex X. Liu
【Abstract】: Firewalls have been widely deployed on the Internet for securing private networks. A firewall checks each incoming or outgoing packet to decide whether to accept or discard the packet based on its policy. Optimizing firewall policies is crucial for improving network performance. Prior work on firewall optimization focuses on either intra-firewall or inter-firewall optimization within one administrative domain where the privacy of firewall policies is not a concern. This paper explores inter-firewall optimization across administrative domains for the first time. The key technical challenge is that firewall policies cannot be shared across domains because a firewall policy contains confidential information and even potential security holes, which can be exploited by attackers. In this paper, we propose the first cross-domain privacy-preserving cooperative firewall policy optimization protocol. Specifically, for any two adjacent firewalls belonging to two different administrative domains, our protocol can identify in each firewall the rules that can be removed because of the other firewall. The optimization process involves cooperative computation between the two firewalls without any party disclosing its policy to the other. We implemented our protocol and conducted extensive experiments. The results on real firewall policies show that our protocol can remove as many as 49% of the rules in a firewall, with an average of 19.4%. The communication cost is less than a few hundred KBs. Our protocol incurs no extra online packet processing overhead and the offline processing time is less than a few hundred seconds.
【Keywords】: Internet; authorisation; computer network security; data privacy; protocols; Internet; cooperative firewall optimization; cross-domain privacy-preserving protocol; firewall administrative domain; firewall privacy; inter-firewall optimization; intra-firewall optimization; private network security; Cryptography; Fires; IP networks; Optimization; Privacy; Protocols; Redundancy
【Paper Link】 【Pages】:2912-2920
【Authors】: Shruti Sanadhya ; Raghupathy Sivakumar
【Abstract】: The focus of this work is to study the efficacy of TCP's flow control algorithm on mobile phones. Specifically, we identify the design limitations of the algorithm when operating in environments, such as mobile phones, where flow control assumes greater importance because of device resource limitations. We then propose an adaptive flow control (AFC) algorithm for TCP that relies not just on the available buffer space but also on the application read-rate at the receiver. We show, using NS2 simulations, that AFC can provide considerable performance benefits over classical TCP flow control.
【Keywords】: adaptive control; mobile handsets; transport protocols; TCP flow control algorithm; adaptive flow control algorithm; buffer space; mobile phones; transmission control protocol
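The idea behind AFC in the abstract above, advertising a receive window driven by the application read-rate as well as free buffer space, can be sketched as follows. The exact formula is an assumption for illustration; the paper's algorithm is more involved:

```python
# Hedged sketch: classical TCP flow control advertises only the free
# receive-buffer space. An adaptive receiver can additionally cap the
# window by how much the application will actually consume in one RTT,
# which matters on resource-limited mobile phones with slow readers.

def advertised_window(free_buffer, app_read_rate, rtt, mss=1460):
    # Cap by both free buffer space (bytes) and expected application
    # consumption over one round-trip (bytes/s * s); floor at one MSS
    # to avoid silly-window advertisements.
    return max(int(min(free_buffer, app_read_rate * rtt)), mss)
```

A receiver with 100 KB free but an application reading only 50 KB/s over a 0.5 s RTT would advertise 25 KB rather than the full buffer, keeping sender-side queuing in check.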
【Paper Link】 【Pages】:2921-2929
【Authors】: Amitabha Ghosh ; Rittwik Jana ; V. Ramaswami ; Jim Rowland ; N. K. Shankaranarayanan
【Abstract】: Server side measurements from several Wi-Fi hot-spots deployed in a nationwide network over different types of venues from small coffee shops to large enterprises are used to highlight differences in traffic volumes and patterns. We develop a common modeling framework for the number of simultaneously present customers. Our approach has many novel elements: (a) We combine statistical clustering with Poisson regression from Generalized Linear Models to fit a non-stationary Poisson process to the arrival counts and demonstrate its remarkable accuracy; (b) We model the heavy tailed distribution of connection durations through fitting a Phase Type distribution to its logarithm so that not only the tail but also the overall distribution is well matched; (c) We obtain the distribution of the number of simultaneously present customers from an Mt/G/∞ queuing model using a novel regenerative argument that is transparent and avoids the customarily made assumption of the queue starting empty at an infinite past; (d) Most importantly, we validate our models by comparison of their predictions and confidence intervals against test data that is not used in fitting the models.
【Keywords】: stochastic processes; wireless LAN; Poisson regression; Wi-Fi traffic; generalized linear models; nonstationary Poisson process; phase type distribution; public hot-spots; queuing model; statistical clustering; Authentication; Portable computers; Queueing analysis; Servers
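Element (c) of the framework above rests on a standard property of the Mt/G/∞ queue: the number of customers present at time t is Poisson with mean m(t) = ∫ λ(u) P(S > t−u) du. A numerical sketch follows, with a toy sinusoidal arrival rate and an exponential service law as stand-ins for the paper's fitted Poisson-regression and phase-type models:

```python
import math

# Sketch: mean occupancy of an Mt/G/infinity queue, i.e. the expected
# number of simultaneously present customers at time t, obtained by
# integrating arrivals that are still in service.
def mean_occupancy(t, lam, survival, dt=0.01, horizon=50.0):
    m, u = 0.0, t - horizon
    while u < t:
        m += lam(u) * survival(t - u) * dt
        u += dt
    return m

lam = lambda u: 5.0 + 3.0 * math.sin(2 * math.pi * u / 24.0)  # arrivals/hour
surv = lambda s: math.exp(-s / 1.5)                           # mean stay 1.5 h
print(mean_occupancy(12.0, lam, surv))
```

As a sanity check, a constant rate λ = 5 with mean stay 1.5 gives the familiar stationary M/G/∞ value λ·E[S] = 7.5.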
【Paper Link】 【Pages】:2930-2938
【Authors】: Kyu-Han Kim ; Alexander W. Min ; Dhruv Gupta ; Prasant Mohapatra ; Jatinder Pal Singh
【Abstract】: Mobile data usage over cellular networks has been dramatically increasing over the past years. Wi-Fi based wireless networks offer a high-bandwidth alternative for offloading such data traffic. However, intermittent connectivity, and battery power drain in mobile devices, inhibits always-on connectivity even in areas with good Wi-Fi coverage. This paper presents WiFisense, a system that employs user mobility information retrieved from low-power sensors (e.g., accelerometer) in smartphones, and further includes adaptive Wi-Fi sensing algorithms, to conserve battery power while improving Wi-Fi usage. We implement the proposed system in Android-based smartphones and evaluate the implementation in both indoor and outdoor Wi-Fi networks. Our evaluation results show that WiFisense saves energy consumption for scans by up to 79% and achieves considerable increase in Wi-Fi usage for various scenarios.
【Keywords】: cellular radio; mobile handsets; telecommunication traffic; wireless LAN; Android-based smartphones; Wi-Fi sensing; WiFisense; always-on connectivity; battery power drain; cellular networks; data traffic; energy efficiency; intermittent connectivity; mobile data usage; user mobility information; Accelerometers; IEEE 802.11 Standards; Mobile communication; Monitoring; Sensors; Smart phones
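The energy-saving mechanism in the WiFisense abstract above, adapting the Wi-Fi scan schedule to sensed mobility, can be sketched as a simple backoff rule. The thresholds and the doubling rule are assumptions for illustration, not WiFisense's actual algorithm:

```python
# Hedged sketch: when accelerometer data indicates the user is stationary,
# the scan interval backs off exponentially (few new APs will appear);
# detected movement resets it so new coverage is found promptly.

def next_scan_interval(current, moving, base=10.0, cap=300.0):
    if moving:
        return base                      # rescan promptly while on the move
    return min(current * 2.0, cap)       # exponential backoff when still

iv = 10.0
for _ in range(4):                       # user stays put: 20, 40, 80, 160
    iv = next_scan_interval(iv, moving=False)
print(iv)                                # 160.0
iv = next_scan_interval(iv, moving=True) # movement resets to base
```

The cap bounds worst-case discovery delay while the backoff eliminates most scans during long stationary periods, which is where the reported energy savings come from.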
【Paper Link】 【Pages】:2939-2947
【Authors】: Jie Xiong ; Romit Roy Choudhury
【Abstract】: Wireless multicast applications, such as MobiTV, web telecast, and multimedia classrooms, are gaining rapid popularity. The broadcast nature of the wireless channel is amenable to such multicasts because a single packet transmission can be received by all clients. Unfortunately, the rate of this transmission is bottlenecked by the data rate of the weakest client, degrading system performance. Attempts to increase the data rate result in lower reliability and higher unfairness. This paper presents PeerCast, a wireless multicast protocol that engages clients in cooperative relaying. The main idea is simple. Instead of multicasting at the bottleneck rate, the access point transmits at a high rate and suitably chooses a few stronger clients to relay the packet to the weaker ones. Multiple transmissions of the same packet, each at higher rate, can achieve better throughput than one transmission at the low, bottleneck rate. We propose a new simultaneous reply-back scheme for clients, together with power-level detection to estimate the AP's transmission strategy. PeerCast translates these ideas into a functional system using off-the-shelf hardware. Performance evaluation on a 9 node testbed demonstrates consistent throughput and reliability improvements over 802.11. Simulations in QualNet indicate similar trends in large-scale networks.
【Keywords】: multicast protocols; radiocommunication; MobiTV; PeerCast; access point; cooperative relaying; link layer multicast; multimedia classrooms; single packet transmission; web telecast; wireless channel; wireless multicast application; wireless multicast protocol; Manuals; Peer to peer computing; Relays; Reliability theory
【Paper Link】 【Pages】:2948-2956
【Authors】: Gábor Rétvári ; János Tapolcai ; Gábor Enyedi ; András Császár
【Abstract】: IP Fast ReRoute (IPFRR) is the IETF standard for providing fast failure protection in IP and MPLS/LDP networks and Loop Free Alternates (LFA) is a basic specification for implementing it. Even though LFA is simple and unobtrusive, it has a significant drawback: it does not guarantee protection for all possible failure cases. Consequently, many IPFRR proposals have appeared lately, promising full failure coverage at the price of added complexity and non-trivial modifications to IP hardware and software. Meanwhile, LFA remains the only commercially available, and therefore, the only deployable IPFRR solution. Deployment, however, crucially depends on the extent to which LFA can protect failures in operational networks. In this paper, therefore, we revisit LFA in order to give theoretical insights and practical hints to LFA failure coverage analysis. First, we identify the topological properties a network must possess to profit from good failure coverage. Then, we study how coverage varies as new links are added to a network and show how to do this optimally; through extensive simulations, we arrive at the conclusion that cleverly adding just a couple of new links can improve the quality of LFA protection drastically.
【Keywords】: IP networks; telecommunication network routing; IP fast reroute; IP hardware; IP software; IPFRR solution; LFA; MPLS/LDP networks; fast failure protection; loop free alternates; Approximation algorithms; Complexity theory; IP networks; Network topology; Proposals; Routing; Topology; IP Fast ReRoute; IP protection; Loop Free Alternates
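The coverage analysis revisited in the paper above builds on the basic LFA inequality: a neighbor n of source s is a loop-free alternate toward destination d iff dist(n,d) < dist(n,s) + dist(s,d), i.e. n will not loop the packet back through s. A minimal sketch that computes LFA coverage from all-pairs shortest paths (the graph encoding is an assumption):

```python
import heapq

def dijkstra(adj, src):
    # Shortest-path distances from src; adj maps node -> [(neighbor, weight)].
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def lfa_coverage(adj):
    dist = {u: dijkstra(adj, u) for u in adj}
    covered = total = 0
    for s in adj:
        for d in adj:
            if s == d:
                continue
            total += 1
            # Primary next hop: a neighbor on a shortest path from s to d.
            primary = min(adj[s], key=lambda nw: nw[1] + dist[nw[0]][d])[0]
            # (s, d) is protected if some other neighbor is loop-free.
            if any(n != primary and dist[n][d] < dist[n][s] + dist[s][d]
                   for n, _ in adj[s]):
                covered += 1
    return covered / total

path = {'a': [('b', 1)], 'b': [('a', 1), ('c', 1)], 'c': [('b', 1)]}
tri  = {'a': [('b', 1), ('c', 1)], 'b': [('a', 1), ('c', 1)],
        'c': [('a', 1), ('b', 1)]}
print(lfa_coverage(path), lfa_coverage(tri))   # 0.0 1.0
```

The two toy topologies mirror the paper's point: a 3-node path has zero LFA coverage, while adding a single link (closing the triangle) yields full coverage.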
【Paper Link】 【Pages】:2957-2965
【Authors】: Marco Chiesa ; Luca Cittadini ; Giuseppe Di Battista ; Stefano Vissicchio
【Abstract】: BGP, the core protocol of the Internet backbone, is well known to be prone to oscillations. Although prior work has shed some light on BGP stability, many problems remain open. For example, determining how hard it is to check that a BGP network is safe, i.e., it is guaranteed to converge, has been an elusive research goal up to now. In this paper, we address several problems related to BGP stability, stating the computational complexity of testing if a given configuration is safe, is robust, or is safe under filtering. Further, we determine the computational complexity of checking popular sufficient conditions for stability. We adopt a model that captures Local Transit policies, i.e., policies that are functions only of the ingress and the egress points. The focus on Local Transit policies is motivated by the fact that they represent a configuration paradigm commonly used by network operators. We also address the same BGP stability problems in the widely adopted SPP model. Unfortunately, we find that the most interesting problems are computationally hard even if policies are restricted to be as expressive as Local Transit policies. Our findings suggest that the computational intractability of BGP stability is an intrinsic property of policy-based path vector routing protocols that allow policies to be specified in complete autonomy.
【Keywords】: Internet; computational complexity; internetworking; routing protocols; stability; BGP stability testing complexity; Internet backbone; SPP model; border gateway protocol; computational complexity; core protocol; local transit policy; policy-based path vector routing protocols; Complexity theory; Computational modeling; Oscillators; Polynomials; Routing; Safety; Stability analysis
【Paper Link】 【Pages】:2966-2974
【Authors】: Martin Suchara ; Alex Fabrikant ; Jennifer Rexford
【Abstract】: We explore BGP safety, the question of whether a BGP system converges to a stable routing, in light of several BGP implementation features that have not been fully included in the previous theoretical analyses. We show that Route Flap Damping, MRAI timers, and other intra-router features can cause a router to briefly send “spurious” announcements of less-preferred routes. We demonstrate that, even in simple configurations, this short-term spurious behavior may cause long-term divergence in global routing. We then present DPVP, a general model that unifies these sources of spurious announcements in order to examine their impact on BGP safety. In this new, more robust model of BGP behavior, we derive a necessary and sufficient condition for safety, which furthermore admits an efficient algorithm for checking BGP safety in most practical circumstances - two complementary results that have been elusive in the past decade's worth of classical studies of BGP convergence in more simple models. We also consider the implications of spurious updates for well-known results on dispute wheels and safety under filtering.
【Keywords】: filtering theory; internetworking; routing protocols; telecommunication security; BGP behavior; BGP implementation features; BGP safety; BGP system; DPVP; MRAI timers; dispute wheels; global routing; intra-router features; less-preferred routes; long-term divergence; necessary and sufficient condition; route flap damping; safety under filtering; short-term spurious behavior; spurious announcements; spurious updates; stable routing; Blades; Convergence; Oscillators; Peer to peer computing; Routing; Safety; Wheels
【Paper Link】 【Pages】:2975-2983
【Authors】: Alex Fabrikant ; Umar Syed ; Jennifer Rexford
【Abstract】: To better support interactive applications, individual network operators are decreasing the timers that affect BGP convergence, leading to greater diversity in the timer settings across the Internet. While decreasing timers is intended to improve routing convergence, we show that, ironically, the resulting timer heterogeneity can make routing convergence substantially worse. We examine the widely-used Min Route Advertisement Interval (MRAI) timer that rate-limits update messages to reduce router overhead. We show that, while routing systems with homogeneous MRAI timers have linear convergence time, diverse MRAIs can cause exponential increases in both the number of BGP messages and the convergence time (as measured in “activations”). We prove tight upper bounds on these metrics in terms of MRAI timer diversity in general dispute-wheel-free networks and economically sensible (Gao-Rexford) settings. We also demonstrate significant impacts on the data plane: blackholes sometimes last throughout the route-convergence process, and forwarding changes, at best, are only polynomially less frequent than routing changes. We show that these problems vanish in contiguous regions of the Internet with homogeneous MRAIs or with next-hop-based routing policies, suggesting practical strategies for mitigating the problem, especially when all routers are administered by one institution.
【Keywords】: Internet; routing protocols; BGP convergence; Internet; MRAI timer; dispute-wheel-free network; economical sensible setting; individual network operator; min route advertisement interval timer; routing convergence system; timing diversity
【Paper Link】 【Pages】:2984-2992
【Authors】: Dongyue Xue ; Eylem Ekici
【Abstract】: Cognitive radio networks enable opportunistic sharing of bandwidth/spectrum. In this paper, we introduce optimal control and scheduling algorithms for multi-hop cognitive radio networks to maximize the throughput of secondary users while stabilizing the cognitive radio network subject to collision rate constraints required by primary users. We show that by employing our proposed optimal algorithm, the achievable network throughput can be arbitrarily close to the optimal value. To reduce complexity, we propose a class of feasible suboptimal algorithms that can achieve at least a fraction of the optimal throughput. In addition, we also analyze the optimal algorithm in the fixed-routing scenario and derive the corresponding lower bound on the average end-to-end delay across a link set.
【Keywords】: cognitive radio; radio networks; scheduling; bandwidth sharing; collision rate constraints; fixed-routing scenario; guaranteed opportunistic scheduling; lower bound; multihop cognitive radio networks; opportunistic sharing; optimal control; scheduling algorithm; spectrum sharing; Artificial neural networks; Manganese; Throughput
【Paper Link】 【Pages】:2993-3001
【Authors】: Alexander W. Min ; Kyu-Han Kim ; Jatinder Pal Singh ; Kang G. Shin
【Abstract】: Cognitive radios (CRs) can mitigate the impending spectrum scarcity problem by utilizing their capability of accessing licensed spectrum bands opportunistically. While most existing work focuses on enabling such opportunistic spectrum access for stationary CRs, mobility is an important concern to secondary users (SUs) because future mobile devices are expected to incorporate CR functionality. In this paper, we identify and address three fundamental challenges encountered specifically by mobile SUs. First, we model channel availability experienced by a mobile SU as a two-state continuous-time Markov chain (CTMC) and verify its accuracy via in-depth simulation. Then, to protect primary/incumbent communications from SU interference, we introduce guard distance in the space domain and derive the optimal guard distance that maximizes the spatio-temporal spectrum opportunities available to mobile CRs. To facilitate efficient spectrum sharing, we formulate the problem of maximizing secondary network throughput within a convex optimization framework, and derive an optimal, distributed channel selection strategy. Our simulation results show that the proposed spectrum sensing and distributed channel access schemes improve network throughput and fairness significantly, and reduce SU energy consumption for spectrum sensing by up to 74%.
【Keywords】: Markov processes; cognitive radio; mobile radio; continuous-time Markov chain; distributed channel access; licensed spectrum bands; mobile cognitive radios; opportunistic spectrum access; spectrum scarcity problem; Availability; Exponential distribution; Interference; Markov processes; Mobile communication; Sensors; Transmitters
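The channel-availability model in the abstract above is a two-state ON/OFF continuous-time Markov chain, whose stationary and transient ON-probabilities have simple closed forms. The rate symbols below are the conventional ones, not necessarily the paper's notation:

```python
import math

# Two-state CTMC channel model: ON = available to the secondary user,
# OFF = occupied by a primary user. Sojourn times are exponential;
# alpha is the OFF->ON rate, beta is the ON->OFF rate.

def stationary_availability(alpha, beta):
    # Long-run fraction of time the channel is ON.
    return alpha / (alpha + beta)

def transition_prob_on(t, alpha, beta, start_on=True):
    # P(channel ON at time t | initial state), from the standard
    # transient solution: geometric relaxation toward the stationary
    # probability at rate (alpha + beta).
    pi_on = alpha / (alpha + beta)
    decay = math.exp(-(alpha + beta) * t)
    p0 = 1.0 if start_on else 0.0
    return pi_on + (p0 - pi_on) * decay
```

The transient form is what makes mobility analysis tractable: the probability the channel is still free after a secondary user moves for time t decays smoothly toward the stationary availability.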
【Paper Link】 【Pages】:3002-3010
【Authors】: Karim Khalil ; Mehmet Karaca ; Özgür Erçetin ; Eylem Ekici
【Abstract】: Optimal transmission scheduling in wireless cognitive networks is considered under the spectrum leasing model. We propose a cooperative scheme in which secondary nodes share the time slot with primary nodes in return for cooperation. Cooperation is feasible only if the system's performance is improved over the non-cooperative case. First, we investigate a scenario where secondary users are interested in immediate rewards. Then, we formulate another problem where the secondary users are guaranteed a portion of the primary utility, on a long term basis, in return for cooperation. In both scenarios, our proposed schemes are shown to outperform non-cooperative scheduling schemes, in terms of both individual and total expected utility, for a given set of feasible constraints. Based on Lyapunov Optimization techniques, we show that our schemes are arbitrarily close to the optimal performance at the price of reduced convergence rate.
【Keywords】: Lyapunov methods; cognitive radio; optimisation; Lyapunov optimization techniques; cooperate-to-join cognitive radio networks; noncooperative scheduling schemes; optimal scheduling; optimal transmission scheduling; spectrum leasing model; wireless cognitive networks; Base stations; Cognitive radio; Optimal scheduling; Relays; Schedules; Scheduling algorithm
【Paper Link】 【Pages】:3011-3019
【Authors】: Yi Song ; Jiang Xie
【Abstract】: Cognitive radio (CR) technology is regarded as a promising solution to the spectrum scarcity problem. Due to the spectrum-varying nature of CR networks, unlicensed users are required to perform spectrum handoffs when licensed users reuse the spectrum. In this paper, we study the performance of the spectrum handoff process in a CR ad hoc network under homogeneous primary traffic. We propose a novel three dimensional discrete-time Markov chain to characterize the process of spectrum handoffs and analyze the performance of unlicensed users. Since in real CR networks a dedicated common control channel is not practical, in our model we implement a network coordination scheme where no dedicated common control channel is needed. Moreover, in wireless communications, collisions among simultaneous transmissions cannot be immediately detected and the whole collided packets need to be retransmitted, which greatly affects the network performance. With this observation, we also consider the retransmissions of the collided packets in our proposed discrete-time Markov chain. In addition, besides the random channel selection scheme, we study the impact of different channel selection schemes on the performance of the spectrum handoff process. Furthermore, we also consider the spectrum sensing delay in our proposed Markov model and investigate its effect on the network performance. We validate the numerical results obtained from our proposed Markov model against simulation and investigate other parameters of interest in the spectrum handoff scenario. Our proposed analytical model can be applied to various practical network scenarios. It also provides new insights into the process of spectrum handoffs. To the best of our knowledge, no existing analysis has considered spectrum handoffs as comprehensively as this paper does.
【Keywords】: Markov processes; ad hoc networks; cognitive radio; radio spectrum management; telecommunication traffic; 3D discrete-time Markov chain; cognitive radio ad hoc networks; collided packets; common control channel; homogeneous primary traffic; performance analysis; random channel selection scheme; spectrum handoff; spectrum reuse; unlicensed users
【Paper Link】 【Pages】:3020-3028
【Authors】: Ajay Gopinathan ; Zongpeng Li ; Chuan Wu
【Abstract】: Secondary spectrum access is emerging as a promising approach for mitigating the spectrum scarcity in wireless networks. Coordinated spectrum access for secondary users can be achieved using periodic spectrum auctions. Recent studies on such auction design mostly neglect the repeating nature of such auctions, and focus on greedily maximizing social welfare. Such auctions can cause subsets of users to experience starvation in the long run, reducing their incentive to continue participating in the auction. It is desirable to increase the diversity of users allocated spectrum in each auction round, so that a trade-off between social welfare and fairness is maintained. We study truthful mechanisms towards this objective, for both local and global fairness criteria. For local fairness, we introduce randomization into the auction design, such that each user is guaranteed a minimum probability of being assigned spectrum. Computing an optimal, interference-free spectrum allocation is NP-Hard; we present an approximate solution, and tailor a payment scheme to guarantee truthful bidding is a dominant strategy for all secondary users. For global fairness, we adopt the classic max-min fairness criterion. We tailor another auction by applying linear programming techniques for striking the balance between social welfare and max-min fairness, and for finding feasible channel allocations. In particular, a pair of primal and dual linear programs are utilized to guide the probabilistic selection of feasible allocations towards a desired tradeoff in expectation.
【Keywords】: approximation theory; channel allocation; communication complexity; interference (signal); linear programming; minimax techniques; probability; radio networks; radio spectrum management; NP-hard; approximate solution; auction design; channel allocation; coordinated spectrum access; dual linear program; global fairness criteria; interference-free spectrum allocation; linear programming technique; local fairness criteria; max-min fairness criterion; payment scheme; periodic spectrum auction; primal linear program; probability; secondary spectrum access; social welfare balancing; social welfare maximization; spectrum scarcity mitigation; strategyproof auction; truthful bidding; wireless network; Algorithm design and analysis; Approximation algorithms; Channel allocation; Cognitive radio; Cost accounting; Random variables; Resource management
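The local-fairness notion in the abstract above, a guaranteed minimum probability of being assigned spectrum in each round, can be illustrated by mixing the welfare-maximizing allocation with a uniform lottery. This toy construction is an assumption for illustration only and is far simpler than the paper's truthful, interference-aware mechanism:

```python
# Hedged sketch: every bidder keeps probability p_min of winning the
# round; all leftover probability mass goes to the highest bid, trading
# off social welfare against starvation avoidance.

def allocation_distribution(bids, p_min):
    n = len(bids)
    assert n * p_min <= 1.0, "minimum guarantees must be feasible"
    winner = max(range(n), key=lambda i: bids[i])
    probs = [p_min] * n
    probs[winner] += 1.0 - n * p_min   # leftover mass to the top bid
    return probs

print(allocation_distribution([5.0, 2.0, 1.0], 0.1))  # roughly [0.8, 0.1, 0.1]
```

Raising p_min increases diversity across auction rounds at the cost of expected welfare, which is precisely the trade-off the paper's randomized mechanism navigates.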
【Paper Link】 【Pages】:3029-3037
【Authors】: Eyjolfur Ingi Asgeirsson ; Pradipta Mitra
【Abstract】: We consider the capacity problem (or, the single slot scheduling problem) in wireless networks. Our goal is to maximize the number of successful connections in arbitrary wireless networks where a transmission is successful only if the signal-to-interference-plus-noise ratio at the receiver is greater than some threshold. We study a game theoretic approach towards capacity maximization introduced by Andrews and Dinitz (INFOCOM 2009) and Dinitz (INFOCOM 2010). We prove vastly improved bounds for the game theoretic algorithm. In doing so, we achieve the first distributed constant factor approximation algorithm for capacity maximization for the uniform power assignment. When compared to the optimum where links may use an arbitrary power assignment, we prove a O(log Δ) approximation, where Δ is the ratio between the largest and the smallest link in the network. This is an exponential improvement of the approximation factor compared to existing results for distributed algorithms. All our results work for links located in any metric space. In addition, we provide simulation studies clarifying the picture on distributed algorithms for capacity maximization.
【Keywords】: approximation theory; distributed algorithms; game theory; interference suppression; optimisation; radio networks; radiofrequency interference; scheduling; capacity maximization; distributed algorithm; distributed constant factor approximation algorithm; game theoretic approach; power assignment; receiver; signal-to-interference-plus-noise ratio; single slot scheduling problem; wireless network; Approximation algorithms; Approximation methods; Games; Interference; Measurement; Optimized production technology; Wireless networks
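The feasibility test at the heart of the capacity problem above is the SINR threshold check at each receiver: a set of simultaneously scheduled links succeeds iff every receiver's signal-to-interference-plus-noise ratio clears the threshold. A sketch under uniform power and a geometric path-loss model (the parameter values are assumptions):

```python
# Hedged sketch: SINR feasibility for a set of links in the plane under
# uniform transmit power and path-loss exponent alpha.

def sinr_feasible(links, beta=2.0, power=1.0, noise=0.1, alpha=3.0):
    # links: list of (sender_xy, receiver_xy) coordinate pairs.
    def gain(a, b):
        d = ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
        return power / max(d, 1e-9) ** alpha
    for i, (si, ri) in enumerate(links):
        # Interference at receiver i is the summed gain from all other senders.
        interference = sum(gain(sj, ri)
                           for j, (sj, _) in enumerate(links) if j != i)
        if gain(si, ri) / (noise + interference) < beta:
            return False
    return True
```

Two unit-length links succeed together when they are far apart but fail when packed close, which is the geometric tension a capacity-maximizing scheduler must resolve.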
【Paper Link】 【Pages】:3038-3046
【Authors】: Qinghe Du ; Xi Zhang
【Abstract】: We propose the QoS-aware BS-selection and the corresponding resource-allocation schemes for downlink multi-user transmissions over the distributed multiple-input-multiple-output (MIMO) links, where multiple location-independent base-stations (BS), controlled by a central server, cooperatively transmit data to multiple mobile users. Our proposed schemes aim at minimizing the BS usages and reducing the interfering range of the distributed MIMO transmissions, while satisfying diverse statistical delay-QoS requirements for all users, which are characterized by the delay-bound violation probability and the effective capacity technique. Specifically, we propose two BS-usage minimization frameworks to develop the QoS-aware BS-selection schemes and the corresponding wireless resource-allocation algorithms across multiple mobile users. The first framework applies the joint block-diagonalization (BD) and probabilistic transmission (PT) to implement multiple access over multiple mobile users, while the second one employs time-division multiple access (TDMA) approach to control multiple users' links. We then derive the optimal BS-selection schemes for these two frameworks, respectively. In addition, we further discuss the PT-only based BS-selection scheme. Also conducted is a set of simulation evaluations to comparatively study the average BS-usage and interfering range of our proposed schemes and to analyze the impact of QoS constraints on the BS selections for distributed MIMO transmissions.
【Keywords】: MIMO communication; quality of service; radio links; radio networks; time division multiple access; QoS provisioning; base-station selection; central server; delay-bound violation probability; distributed MIMO transmission; distributed multiple-input-multiple-output link; distributed multiuser MIMO link; downlink multiuser transmission; joint block-diagonalization; probabilistic transmission; resource allocation; statistical delay-QoS requirement; time division multiple access; wireless networks; wireless resource-allocation algorithm; Fading; Joints; MIMO; Mobile communication; Quality of service; Servers; Wireless communication; Distributed MIMO; broadband wireless networks; statistical QoS provisioning; wireless fading channels
【Paper Link】 【Pages】:3047-3055
【Authors】: Chandrashekhar Thejaswi P. S. ; Tuan Tran ; Junshan Zhang
【Abstract】: This paper studies multicasting compressively sampled signals from a source to many receivers, over lossy wireless channels. Our focus is on the network outage from the perspective of signal distortion across all receivers, for both cases where the transmitter may or may not be capable of reconstructing the compressively sampled signals. Capitalizing on extreme value theory, we characterize the network outage in terms of key system parameters, including the erasure probability, the number of receivers and the sparse structure of the signal. We show that when the transmitter can reconstruct the compressively sensed signal, the strategy of using network coding to multicast the reconstructed signal coefficients can reduce the network outage significantly. We observe, however, that traditional network coding can result in suboptimal performance with power-law decay signals. Thus motivated, we devise a new method, namely subblock network coding, which involves fragmenting the data into subblocks and allocating time slots to different subblocks based on their priority. We formulate the corresponding optimal allocation as an integer programming problem. Since integer programming is often intractable, we develop a heuristic algorithm that prioritizes the time slot allocation by exploiting the inherent priority structure of power-law decay signals. Numerical results show that the proposed schemes outperform the traditional methods by significant margins.
【Keywords】: integer programming; network coding; signal reconstruction; signal sampling; wireless channels; compressive sampling; erasure probability; heuristic algorithm; integer programming problem; outage analysis; power-law decay signals; signal distortion; signal reconstruction; subblock network coding; wireless channels; Compressed sensing; Network coding; Receivers; Redundancy; Sensors; Wireless communication; Wireless sensor networks
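The subblock network-coding idea above boils down to giving the earlier, higher-energy coefficients of a power-law decay signal more transmission slots. The paper formulates this as an integer program; the greedy marginal-gain heuristic below is only an illustrative stand-in, with made-up subblock energies and erasure parameters:

```python
def allocate_slots(block_energy, total_slots, p_succ):
    """Greedy time-slot allocation across subblocks: each extra slot goes
    to the subblock with the largest marginal gain in expected recovered
    energy. With t slots, a subblock is assumed recovered unless all t
    transmissions are erased, i.e. with probability 1 - (1 - p_succ)**t."""
    slots = [0] * len(block_energy)

    def gain(i):
        # marginal increase of 1 - (1 - p)^t when slots[i] goes from t to t+1
        return block_energy[i] * ((1 - p_succ) ** slots[i]) * p_succ

    for _ in range(total_slots):
        best = max(range(len(block_energy)), key=gain)
        slots[best] += 1
    return slots
```

Because the first subblock carries most of the energy under power-law decay, the greedy rule front-loads slots onto it, mirroring the priority structure the paper's heuristic exploits.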
【Paper Link】 【Pages】:3056-3064
【Authors】: Wenzhuo Ouyang ; Sugumar Murugesan ; Atilla Eryilmaz ; Ness B. Shroff
【Abstract】: We address the problem of opportunistic multiuser scheduling in downlink networks with Markov-modeled outage channels. We consider the scenario in which the scheduler does not have full knowledge of the channel state information, but instead estimates it by exploiting the memory inherent in the Markov channels along with ARQ-styled feedback from the scheduled users. Opportunistic scheduling is optimized in two stages: (1) channel estimation and rate adaptation to maximize the expected immediate rate of the scheduled user; (2) user scheduling, based on the optimized immediate rate, to maximize the overall long-term sum-throughput of the downlink. The scheduling problem is a partially observable Markov decision process with the classic 'exploitation vs. exploration' trade-off that is difficult to quantify. We therefore study the problem in the framework of Restless Multi-armed Bandit Processes (RMBP) and perform a Whittle's indexability analysis. Whittle's indexability is traditionally hard to establish, and the index policy derived from it is known to have optimality properties in various settings. We show that the problem of downlink scheduling under imperfect channel state information is Whittle indexable and derive the Whittle's index policy in closed form. Via extensive numerical experiments, we show that the index policy has near-optimal performance. Our work reveals that, under incomplete channel state information, exploiting channel memory for opportunistic scheduling can result in significant performance gains, and that almost all of these gains can be realized using an easy-to-implement index policy.
【Keywords】: Markov processes; automatic repeat request; channel estimation; multi-access systems; radio links; ARQ-styled feedback; Markov decision process; Markov-modeled outage channel; Whittle index policy; Whittle indexability analysis; channel estimation; channel memory; channel state information; downlink network; downlink scheduling; easy-to-implement index policy; opportunistic multiuser scheduling; rate adaptation; restless multiarmed bandit process; scheduling problem; sum-throughput; Adaptation model; Channel estimation; Data communication; Downlink; Estimation; Indexes; Markov processes
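The channel-memory exploitation above starts from a standard ingredient: the scheduler's belief that a Markov-modeled channel is currently good, updated from ARQ feedback. The sketch below shows only this textbook belief update for a two-state (Gilbert-Elliott) channel; the Whittle index computed on top of it is derived in the paper and not reproduced here:

```python
def update_belief(pi, feedback, p11, p01):
    """One-step belief update for a two-state Markov channel.
    pi: current probability the channel is in the good state.
    feedback: True/False ARQ outcome if the user was scheduled, None if not.
    p11: P(good -> good), p01: P(bad -> good)."""
    if feedback is True:       # scheduled and observed good
        pi = 1.0
    elif feedback is False:    # scheduled and observed bad
        pi = 0.0
    # propagate the belief one step through the Markov chain
    return pi * p11 + (1.0 - pi) * p01
```

An unscheduled user's belief drifts toward the chain's stationary value, which is exactly the information loss the opportunistic scheduler must weigh against exploiting users it already knows to be good.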
【Paper Link】 【Pages】:3065-3073
【Authors】: Ruogu Li ; Atilla Eryilmaz
【Abstract】: We attack the challenging problem of designing a scheduling policy for end-to-end deadline-constrained traffic with reliability requirements in a multi-hop network. It is well-known that the end-to-end delay performance for a multi-hop flow has a complex dependence on the high-order statistics of the arrival process and the algorithm itself. Thus, neither the earlier optimization-based approaches that aim to meet long-term throughput demands, nor the solutions that focus on a similar problem for single-hop flows, directly apply. Moreover, a dynamic programming-based approach becomes intractable for such multi-time-scale Quality-of-Service (QoS)-constrained traffic in a multi-hop environment. This motivates us to develop an alternative model that enables us to exploit the degree of freedom in choosing an appropriate service discipline. Based on the new model, we propose two alternative solutions, the first based on a Lyapunov-drift minimization approach, and the second based on a novel relaxed optimization formulation. We provide extensive numerical results comparing the performance of both of these solutions to throughput-optimal back-pressure-type schedulers and to longest-waiting-time-based schedulers that have provably optimal asymptotic performance characteristics. Our results reveal that the dynamic choice of service discipline in our proposed solutions yields substantial performance improvements over both of these types of traditional solutions under non-asymptotic conditions.
【Keywords】: minimisation; quality of service; radio networks; scheduling; telecommunication network reliability; telecommunication traffic; Lyapunov drift minimization; end-to-end deadline constrained traffic; multihop networks; multitime scale quality of service constrained traffic; optimal asymptotic performance; relaxed optimization formulation; reliability requirement; scheduling policy; Delay; Heuristic algorithms; Numerical models; Optimization; Quality of service; Spread spectrum communication; Upper bound
【Paper Link】 【Pages】:3074-3082
【Authors】: Hyunseok Chang ; Murali S. Kodialam ; Ramana Rao Kompella ; T. V. Lakshman ; Myungjin Lee ; Sarit Mukherjee
【Abstract】: Large-scale data processing needs of enterprises today are primarily met with distributed and parallel computing in data centers. MapReduce has emerged as an important programming model for these environments. Since today's data centers run many MapReduce jobs in parallel, it is important to find a good scheduling algorithm that can optimize the completion times of these jobs. While several recent papers have focused on optimizing the scheduler, there exists very little theoretical understanding of the scheduling problem in the context of MapReduce. In this paper, we seek to address this problem by first presenting a simplified abstraction of the MapReduce scheduling problem, and then formulating the scheduling problem as an optimization problem. We devise various online and offline algorithms to arrive at a good ordering of jobs that minimizes the overall job completion times. Since optimal solutions are hard to compute (NP-hard), we propose approximation algorithms that work within a factor of 3 of the optimal. Using simulations, we also compare our online algorithm with standard scheduling strategies such as FIFO and Shortest Job First, and show that our algorithm consistently outperforms these across different job distributions.
【Keywords】: computational complexity; optimisation; parallel processing; scheduling; MapReduce scheduling problem; MapReduce-like systems; NP-hard; approximation algorithm; data centers; distributed computing; fast completion time; optimization problem; parallel computing; programming model; scheduling algorithm; standard scheduling strategies; Approximation algorithms; Approximation methods; Equations; Optimal scheduling; Optimized production technology; Processor scheduling; Program processors
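As a concrete baseline for the comparison above: on a single machine, Shortest Job First minimizes the sum of completion times (a classical exchange-argument fact), while the MapReduce setting adds map/reduce phase coupling that the paper's 3-approximation handles. The toy below, with made-up job lengths, shows only the single-machine baseline:

```python
def sjf_order(jobs):
    """Order job lengths shortest-first; on one machine this minimizes
    the sum of completion times."""
    return sorted(jobs)

def total_completion_time(jobs):
    """Sum of completion times when jobs run back-to-back in the given order."""
    clock, total = 0, 0
    for length in jobs:
        clock += length
        total += clock
    return total
```

For job lengths [3, 1, 2], FIFO yields completion times 3, 4, 6 (total 13) while SJF yields 1, 3, 6 (total 10), illustrating the gap the paper's simulations quantify in the MapReduce setting.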
【Paper Link】 【Pages】:3083-3091
【Authors】: Lin Liu ; Zhenghao Zhang ; Yuanyuan Yang
【Abstract】: Optical switching architectures with electronic buffers have been proposed to tackle the lack of optical Random Access Memories (RAM). Among these architectures, the OpCut switch achieves low latency and minimizes optical-electronic-optical (O/E/O) conversions by allowing packets to cut through the switch. In an OpCut switch, a packet is converted and sent to the electronic buffers only if it cannot be directly routed to the switch output. As the length of a time slot shrinks with the increase of the line card rate in such a high-speed system, the time constraint may become too stringent to calculate a schedule in each single time slot. In such a case, pipelined scheduling can be adopted to relax the time constraint. In this paper, we present a novel mechanism to pipeline the packet scheduling in the OpCut switch by adopting multiple “sub-schedulers.” The computation of a complete schedule for each time slot is done under the collaboration of sub-schedulers and spans multiple time slots, while at any time schedules for different time slots are being calculated simultaneously. We present the implementation details when two sub-schedulers are adopted, and show that in this case our pipelining mechanism eliminates duplicate scheduling, a common problem in a pipelined environment. With an arbitrary number of sub-schedulers, duplicate scheduling becomes very difficult to eliminate due to the increased scheduling complexity; nevertheless, we propose several approaches to reducing it. Finally, to minimize the extra delay introduced by pipelining as well as the overall average packet delay under all traffic intensities, we further propose an adaptive pipelining scheme. Our simulation results show that the pipelining mechanism effectively reduces scheduler complexity while maintaining good system performance.
【Keywords】: integrated optoelectronics; optical switches; packet switching; pipeline processing; random-access storage; switching networks; RAM; electronic buffer; low latency optical packet switch; multiple subscheduler; multiple time slot; opcut switch; optical-electronic-optical conversion; pipelining packet scheduling; random access memory; scheduling complexity reduction; Ear; Optical buffering; Optical packet switching; Optical receivers; Optical transmitters; Silicon; Optical switches; packet scheduling; pipelined algorithm
【Paper Link】 【Pages】:3092-3100
【Authors】: Lei Yang ; Jinsong Han ; Yong Qi ; Cheng Wang ; Tao Gu ; Yunhao Liu
【Abstract】: Prior work on anti-collision for Radio Frequency IDentification (RFID) systems usually schedules adjacent readers to exclusively interrogate tags in order to avoid reader collisions. Although such a pattern can effectively deal with collisions, the lack of collaboration among readers wastes considerable time on the scheduling process and dramatically degrades identification throughput. Even worse, the tags within the overlapped interrogation regions of adjacent readers (termed contentious tags) introduce a significant delay to the identification process, even if the number of such tags is very small. In this paper, we propose a new strategy for collision resolution. First, we shelve the collisions and identify the tags that do not involve reader collisions. Second, we perform a joint identification, in which adjacent readers collaboratively identify the contentious tags. In particular, we find that neighboring readers can cause a new type of collision, the cross-tag-collision, which may impede the joint identification. We propose a protocol stack, named Season, to undertake these tasks in two phases and resolve cross-tag-collisions. We conduct extensive simulations and a preliminary implementation to demonstrate the efficiency of our scheme. The results show that our scheme can achieve more than a six-fold improvement in identification throughput in a large-scale dense-reader environment.
【Keywords】: radiofrequency identification; radiofrequency interference; RFID systems; anticollision; collision resolution; cross-tag-collision; joint identification; radio frequency identification systems; scheduling process; shelving interference; Probes; RFID; Reader Collision; Season; Tag Collision
【Paper Link】 【Pages】:3101-3109
【Authors】: Shigang Chen ; Ming Zhang ; Bin Xiao
【Abstract】: Similar to the revolutionary change that the barcode system brought to the retail industry, the RFID technologies are expected to revolutionize the warehouse and inventory management. After RFID tags are deployed to make the attached objects wirelessly identifiable, a natural next step is to invent new ways to benefit from this “infrastructure”. For example, sensors may be added to these tags to gather real-time information about the state of the objects or about the environment where these objects reside. This leads to the problem of designing efficient protocols to collect such information from the tags. It is a new problem that the existing work cannot solve well. In this paper, we first show that a straightforward polling solution will not be efficient. We then propose a single-hash information collection protocol that works much better than the polling solution. However, a wide gap still exists between the execution time of this protocol and a lower bound that we establish. Finally, we propose a multi-hash information collection protocol that further reduces the expected execution time to within 1.61 times the lower bound.
【Keywords】: cryptography; inventory management; protocols; radiofrequency identification; warehouse automation; inventory management; multi-hash information collection protocol; sensor-augmented RFID networks; single-hash information collection protocol; warehouse management; Batteries; Microwave integrated circuits; Protocols; Radiofrequency identification; Silicon carbide; Temperature sensors
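The single-hash idea above can be pictured as a framed-slotted protocol: the reader broadcasts a frame size and a seed, each tag reports in the slot its hash selects, and only singleton slots deliver data without collision. The sketch below is an illustrative reconstruction, not the paper's protocol; the hash choice and identifiers are assumptions:

```python
import hashlib

def assign_slots(tag_ids, frame_size, seed):
    """Single-hash slot assignment: each tag picks slot H(id, seed) mod f.
    Returns the tags that landed alone in a slot, i.e. whose reports the
    reader can collect in this frame; collided tags need another round."""
    slots = {}
    for tid in tag_ids:
        digest = hashlib.sha256(f"{tid}-{seed}".encode()).digest()
        slot = int.from_bytes(digest[:4], "big") % frame_size
        slots.setdefault(slot, []).append(tid)
    return [tags[0] for tags in slots.values() if len(tags) == 1]
```

Re-running with fresh seeds resolves the collided remainder; the multi-hash variant in the paper narrows this collision loss further, which is how it approaches the lower bound.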
【Paper Link】 【Pages】:3110-3118
【Authors】: Nazif C. Tas ; Tamer Nadeem ; Ashok K. Agrawala
【Abstract】: Wireless networks that support multiple modulation techniques and data rates suffer from the so-called multi-rate anomaly problem, which results in unfair environments in which all users, regardless of their data rate choices, observe the same low throughput. This situation violates the baseline fairness principle, which states that the desired throughput of a node competing against nodes running at different speeds is equal to the throughput that the node would achieve in a single-rate network in which all the competing nodes ran at its own rate. In this work, we propose a novel algorithm that utilizes the retry limit parameter of the CSMA/CA algorithm in order to differentiate user groups with different data rates. Our algorithm, MORAL (Multiuser data Rate Adjustment Algorithm), dynamically adjusts the retry limits of the users according to the observed channel characteristics and the neighbor information gathered from previous communications. MORAL chooses the most appropriate retry limit to use in order to establish a more baseline-fair environment.
【Keywords】: carrier sense multiple access; modulation; multiuser channels; radio networks; CSMA/CA algorithm; MORAL; baseline fair environment; channel characteristics; data rate user groups; multiple modulation techniques; multirate anomaly problem; multiuser data rate adjustment algorithm; neighbor information; retry limits; wireless networks; Equations; Ethics; IEEE 802.11 Standards; Mathematical model; Throughput; Wireless networks
【Paper Link】 【Pages】:3119-3127
【Authors】: Wei Gao ; Guohong Cao
【Abstract】: Data dissemination is useful for many applications of Disruption Tolerant Networks (DTNs). Current data dissemination schemes are generally network-centric, ignoring user interests. In this paper, we propose a novel approach for user-centric data dissemination in DTNs, which aims to satisfy user interests while maximizing the cost-effectiveness of data dissemination. Our approach is based on a social centrality metric, which considers the social contact patterns and the interests of mobile users simultaneously, and thus ensures effective relay selection. The performance of our approach is evaluated from both theoretical and experimental perspectives. By formal analysis, we show the lower bound on the cost-effectiveness of data dissemination, and analytically investigate the tradeoff between the effectiveness of relay selection and the overhead of maintaining network information. By trace-driven simulations, we show that our approach achieves better cost-effectiveness than existing data dissemination schemes.
【Keywords】: mobile radio; DTN; cost-effectiveness; data dissemination schemes; disruption tolerant networks; formal analysis; mobile users; network-centric; relay selection; social centrality metric; social contact patterns; trace-driven simulations; user interests; user-centric data dissemination; Data models; Exponential distribution; Maintenance engineering; Measurement; Probabilistic logic; Relays; Time factors
【Paper Link】 【Pages】:3128-3136
【Authors】: Kyunghan Lee ; Yoora Kim ; Song Chong ; Injong Rhee ; Yung Yi
【Abstract】: This paper analytically derives the delay-capacity tradeoffs for Lévy mobility: Lévy walks and Lévy flights. Lévy mobility is a random walk with a power-law flight distribution, where α is the power-law slope of the distribution and 0 < α ≤ 2. While in a Lévy flight each flight takes a constant flight time, in a Lévy walk each flight has a constant velocity, which incurs strong spatio-temporal correlation since flight time depends on travel distance. Lévy mobility is of special interest because it is known that Lévy mobility and human mobility share several common features, including heavy-tail flight distributions. Humans highly influence the mobility of nodes (smartphones and cars) in real mobile networks as they carry or drive mobile nodes. Understanding the fundamental delay-capacity tradeoffs of Lévy mobility thus provides important insight into the performance of real mobile networks. However, its power-law nature and strong spatio-temporal correlation make the scaling analysis non-trivial. This is in contrast to other random mobility models, including Brownian motion, random waypoint and i.i.d. mobility, which are amenable to a Markovian analysis. By exploiting the asymptotic characterization of the joint spatio-temporal probability density functions of Lévy models, we obtain the order of the critical delay, the minimum delay required to achieve more throughput than Θ(1/√n), where n is the number of nodes in the network. The results indicate that in Lévy walk there is a phase transition: for 0 < α < 1 the critical delay is Θ(n^{1/2}), while for 1 ≤ α ≤ 2 it is Θ(n^{α/2}). In contrast, Lévy flight has critical delay Θ(n^{α/2}) for the whole range 0 < α ≤ 2.
【Keywords】: mobile communication; Brownian motion; Levy flights; Levy mobility; Levy walks; Markovian analysis; asymptotic characterization; delay-capacity tradeoffs; heavy-tail flight distributions; mobile networks; power-law flight distribution; power-law slope; random mobility models; random walk; random waypoint; scaling analysis; spatio-temporal correlation; spatio-temporal probability density functions; Delay; Humans; Joints; Mathematical model; Mobile communication; Mobile computing; Throughput
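The power-law flight distribution underlying the analysis above is easy to sample by inverse transform. The Pareto form below, with tail P(X > x) = (x_min/x)^α, is a common concrete choice; treating it as the flight-length law and the parameter values used are assumptions for illustration, since the paper works with the general power-law slope α:

```python
import random

def sample_flight(alpha, x_min=1.0, rng=random):
    """Inverse-transform sample of a Pareto flight length with tail
    P(X > x) = (x_min / x)**alpha, the heavy-tailed flight-length law
    behind Levy mobility (alpha in (0, 2])."""
    u = rng.random()  # uniform in [0, 1)
    return x_min * (1.0 - u) ** (-1.0 / alpha)
```

Smaller α produces heavier tails, i.e. occasional very long flights, which is precisely what breaks the Markovian scaling arguments used for Brownian or i.i.d. mobility.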
【Paper Link】 【Pages】:3137-3145
【Authors】: Esa Hyytiä ; Jorma T. Virtamo ; Pasi E. Lassila ; Jussi Kangasharju ; Jörg Ott
【Abstract】: We consider an opportunistic content sharing system designed to store and distribute local spatio-temporal “floating” information in an uncoordinated P2P fashion, relying solely on the mobile nodes passing through the area of interest, referred to as the anchor zone. Nodes within the anchor zone exchange the information in an opportunistic manner, i.e., whenever two nodes come within each other's transmission range. Outside the anchor zone, the nodes are free to delete the information, since it is deemed relevant only for the nodes residing inside the anchor zone. Due to the random nature of the operation, there are no guarantees, e.g., for information availability. By means of analytical models, we show that such a system, without any supporting infrastructure, can be a viable and surprisingly reliable option for content sharing as long as a certain criterion, referred to as the criticality condition, is met. The important quantity is the average number of encounters a randomly chosen node experiences during its sojourn in the anchor zone, which in turn depends on the communication range and the mobility pattern. The theoretical studies are complemented with simulation experiments under various mobility models, showing good agreement with the analytical results.
【Keywords】: mobile communication; peer-to-peer computing; anchored information; information availability; local spatio-temporal floating information; mobile nodes; mobility pattern; opportunistic content sharing; uncoordinated P2P; Lead
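The criticality condition above hinges on the mean number of encounters during a node's sojourn in the anchor zone. A standard mobility back-of-envelope, not the paper's exact model, treats a moving node as sweeping a band of width twice the transmission range at its relative speed; all parameter values below are illustrative placeholders:

```python
def mean_encounters(node_density, tx_range, rel_speed, sojourn_time):
    """Rough expected number of contacts during a sojourn: a node sweeps
    an area of 2 * r * v * T, meeting on average
    density * 2 * r * v * T other nodes in that band."""
    return node_density * 2.0 * tx_range * rel_speed * sojourn_time
```

Floating content persists, roughly, when this quantity stays above a constant threshold, so that each copy spawns at least one replacement before its carrier leaves; the paper derives the precise criticality condition.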
【Paper Link】 【Pages】:3146-3154
【Authors】: Eitan Altman ; Veeraruna Kavitha ; Francesco De Pellegrini ; Vijay Kamble ; Vivek S. Borkar
【Abstract】: Epidemic dynamics can describe the dissemination of information in delay tolerant networks, in peer-to-peer networks and in content delivery networks. The control of such dynamics has thus gained a central role in all of these areas. However, a major difficulty in this context is that the objective functions to be optimized are often not additive in time but rather multiplicative. The classical objective function in DTNs, i.e., the successful delivery probability of a message within a given deadline, falls precisely into this category, because it often takes the form of the expectation of the exponent of some integral cost. So far, models involving such costs have been solved by interchanging the order of the expectation and the exponential function. While reducing the problem to a standard optimal control problem, this interchange is only tight in the mean-field limit obtained as the population tends to infinity. In this paper we identify a general framework from optimal control in finance, known as risk-sensitive control, which lets us handle the original (multiplicative) cost and obtain solutions to several novel control problems in DTNs. In particular, we derive the structure of state-dependent controls that optimize transmission power at the source node. Further, we account for the propagation loss factor of the wireless medium while obtaining these controls, and, finally, we address power control at the destination node, resulting in a novel threshold optimal activation policy. Combined optimal power control at the source and destination nodes is also obtained.
【Keywords】: Markov processes; computer networks; optimal control; telecommunication control; content delivery networks; delay tolerant networks; epidemics dynamics; expectation order; exponential function; message delivery probability; risk sensitive control; risk sensitive optimal control; Equations; Markov processes; Mobile communication; Optimal control; Power control; Switches; Wireless communication; Delay Tolerant Networks; Markov Decision Process; Risk Sensitive Control
【Paper Link】 【Pages】:3155-3163
【Authors】: Junxing Zhang ; Sneha Kumar Kasera ; Neal Patwari ; Piyush Rai
【Abstract】: Perimeter distinction in a wireless network is the ability to distinguish locations belonging to different perimeters. It is complementary to existing localization techniques. A drawback of the localization method is that when a transmitter is at the edge of an area, an algorithm with isotropic error will estimate its location in the wrong area at least half of the time. In contrast, perimeter distinction classifies the location as being in one area or the adjacent one, regardless of the transmitter position within the area. In this paper, we use naturally different wireless fading conditions to accurately distinguish locations across perimeters. We examine the use of two types of wireless measurements, received signal strength (RSS) and wireless link signature (WLS), and propose multiple methods to retain good distinction rates even when the receiver faces power manipulation by malicious transmitters. Using extensive measurements of indoor and outdoor perimeters, we find that WLS outperforms RSS in various fading conditions. Even without using signal power, WLS can achieve perimeter-distinction accuracy of up to 80%. When we train our perimeter distinction method with multiple measurements within the same perimeter, we show that we are able to improve the accuracy of perimeter distinction to up to 98%.
【Keywords】: mobile computing; telecommunication links; perimeter distinction; received signal strength; wireless fading; wireless link measurements; wireless link signature; Fading; Feature extraction; History; Power measurement; Receivers; Transmitters; Wireless communication
【Paper Link】 【Pages】:3164-3172
【Authors】: Miao Jin ; Su Xia ; Hongyi Wu ; Xianfeng Gu
【Abstract】: This work proposes a novel connectivity-based localization algorithm, well suited for large-scale sensor networks with complex shapes and non-uniform nodal distribution. In contrast to current state-of-the-art connectivity-based localization methods, the proposed algorithm is fully distributed, where each node only needs the information of its neighbors, without a cumbersome partitioning and merging process. The algorithm is highly scalable, with limited error propagation and linear computation and communication cost with respect to the size of the network. Moreover, the algorithm is theoretically guaranteed and numerically stable. Extensive simulations and comparisons with other methods under various representative network settings are carried out, showing the superior performance of the proposed algorithm.
【Keywords】: distributed algorithms; wireless sensor networks; distributed localization; error propagation; large-scale sensor networks; linear computation; mere connectivity; merging process; nonuniform nodal distribution; partitioning process
【Paper Link】 【Pages】:3173-3181
【Authors】: Sándor Laki ; Péter Mátray ; Péter Hága ; Tamas Sebok ; István Csabai ; Gábor Vattay
【Abstract】: The localization of Internet hosts opens the way to a wide range of applications, from targeted, location-aware content provision to localizing illegal content. In this paper we present a novel probabilistic approach, called Spotter, for estimating the geographic position of Internet devices with remarkable precision. While the existing methods use landmark-specific calibration for building their internal models, we show that the delay-distance data follow a generic distribution for each landmark. Hence, instead of describing the delay-distance space in a landmark-specific manner, our proposed method handles all the calibration points together and derives a common delay-distance model. This fundamental discovery indicates that, in contrast to prior techniques, Spotter is less prone to measurement errors and other anomalies such as indirect routing. To demonstrate the robustness and the accuracy of Spotter we test its performance on PlanetLab nodes as well as on a more realistic reference set collected by CAIDA explicitly for geolocation comparison purposes. To the best of our knowledge, we are the first to use this novel ground truth containing over 23,000 network routers with their geographic locations.
【Keywords】: Internet; delays; probability; telecommunication network routing; CAIDA; Internet; PlanetLab node; delay-distance data; geographic position estimation; indirect routing; landmark specific calibration; location aware content; measurement error; model based active geolocation service; network router; probabilistic approach; spotter; Accuracy; Approximation methods; Calibration; Delay; Geology; IP networks; Probabilistic logic
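Spotter replaces per-landmark calibration with one common delay-distance distribution. As a much simpler, contrasting illustration of delay-based geolocation (the classical constraint-based style such probabilistic methods improve on), a round-trip time bounds how far a target can be from a landmark, since signals in fiber cover roughly 200 km per millisecond one way; the helper names and planar distance function are assumptions:

```python
def max_distance_km(rtt_ms, km_per_ms=200.0):
    """Upper bound on landmark-target distance implied by a round-trip
    time: the path is traversed twice, so d <= (rtt / 2) * speed."""
    return (rtt_ms / 2.0) * km_per_ms

def feasible(landmarks, rtts, candidate, dist_fn):
    """A candidate location is geometrically feasible if it violates no
    landmark's distance bound (it lies in the intersection of disks)."""
    return all(dist_fn(candidate, lm) <= max_distance_km(r)
               for lm, r in zip(landmarks, rtts))
```

Queueing and circuitous routing inflate delays well beyond this bound, which is exactly the noise a common delay-distance distribution is meant to absorb.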
【Paper Link】 【Pages】:3182-3190
【Authors】: Jun-geun Park ; Dorothy Curtis ; Seth J. Teller ; Jonathan Ledlie
【Abstract】: Many indoor localization methods are based on the association of 802.11 wireless RF signals from wireless access points (WAPs) with location labels. An “organic” RF positioning system relies on regular users, not dedicated surveyors, to build the map of RF fingerprints to location labels. However, signal variation due to device heterogeneity may degrade localization performance. We analyze the diversity of those signal characteristics pertinent to indoor localization - signal strength and AP detection - as measured by a variety of 802.11 devices. We first analyze signal strength diversity, and show that pairwise linear transformation alone does not solve the problem. We propose kernel estimation with a wide kernel width to reduce the difference in probability estimates. We also investigate diversity in access point detection. We demonstrate that localization performance may degrade significantly when AP detection rate is used as a feature for localization, and correlate the loss of performance to a device dissimilarity measure captured by Kullback-Leibler divergence. Based on this analysis, we show that using only signal strength, without incorporating negative evidence, achieves good localization performance when devices are heterogeneous.
【Keywords】: diversity reception; estimation theory; probability; wireless LAN; 802.11 wireless RF signal; Kullback-Leibler divergence; access point detection; device diversity; indoor localization method; kernel estimation; organic RF positioning system; organic localization; pairwise linear transformation; probability estimation; signal strength diversity; wireless access point; Estimation; Global Positioning System; Histograms; IEEE 802.11 Standards; Kernel; Linux; Performance evaluation
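The wide-kernel idea above can be made concrete with a Gaussian-kernel log-likelihood of an observed RSS vector against a stored fingerprint: widening the kernel shrinks the penalty of a constant device-dependent offset. This sketch illustrates the principle only, not the authors' estimator; the dBm values and bandwidths are made up:

```python
import math

def rss_likelihood(observed, fingerprint, bandwidth):
    """Gaussian-kernel log-likelihood of an observed {AP: RSS} map against
    a stored fingerprint. A wide bandwidth flattens the likelihood, so a
    fixed per-device RSS offset costs less when comparing locations."""
    ll = 0.0
    for ap, rss in observed.items():
        if ap in fingerprint:
            d = rss - fingerprint[ap]
            ll += (-0.5 * (d / bandwidth) ** 2
                   - math.log(bandwidth * math.sqrt(2.0 * math.pi)))
    return ll
```

With a 5 dB device offset, the log-likelihood gap between the true and offset readings is 0.5·(5/bw)², so going from bw = 2 to bw = 10 shrinks it from 3.125 to 0.125, making heterogeneous devices rank candidate locations more consistently.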
【Paper Link】 【Pages】:3191-3199
【Authors】: Isaac Keslassy ; Kirill Kogan ; Gabriel Scalosub ; Michael Segal
【Abstract】: Current network processors (NPs) increasingly deal with packets with heterogeneous processing times. As a consequence, packets that require many processing cycles can significantly delay low-latency traffic, because the common approach in today's NPs is to employ run-to-completion processing. These difficulties have led to the emergence of the Multipass NP architecture, where after a processing cycle ends, all processed packets are recycled into the buffer and re-compete for processing resources. In this work we provide a model that captures many of the characteristics of this architecture, and consider several scheduling and buffer management algorithms that are specially designed to optimize the performance of multipass network processors. In particular, we provide analytical guarantees for the throughput performance of our algorithms. We further conduct a comprehensive simulation study that validates our results.
【Keywords】: buffer storage; microprocessor chips; multiprocessing systems; buffer management algorithm; heterogeneous processing time; low-latency traffic; multipass network processors; performance guarantee; run-to-completion processing; Algorithm design and analysis; Delay; Process control; Program processors; Recycling; Scheduling; Throughput
【Paper Link】 【Pages】:3200-3208
【Authors】: Tao Li ; Shigang Chen ; Wen Luo ; Ming Zhang
【Abstract】: Scan detection is one of the most important functions in intrusion detection systems. In order to keep up with ever-higher line speeds, the recent research trend is to implement scan detection in fast but small SRAM. This leads to a difficult technical challenge because the amount of traffic to be monitored is huge while the on-die memory space for performing such a monitoring task is very limited. We propose an efficient scan detection scheme based on dynamic bit sharing, which incorporates probabilistic sampling and bit sharing for compact information storage. We design a maximum likelihood estimation method to extract per-source information from the shared bits in order to determine the scanners. Our new scheme ensures that the false positive/false negative ratios are bounded with high probability. Moreover, given an arbitrary set of bounds, we develop a systematic approach to determine the optimal system parameters that minimize the amount of memory needed to meet the bounds. Experiments based on a real Internet traffic trace demonstrate that the proposed scan detection scheme reduces memory consumption by three to twenty times when compared with the best existing work.
【Keywords】: Internet; maximum likelihood detection; probability; telecommunication security; telecommunication traffic; compact information storage; high-speed network; intrusion detection system; maximum likelihood estimation; memory consumption; on-die memory space; optimal dynamic bit sharing; probabilistic sampling; probability; real Internet traffic trace; scan detection; Electrostatic discharge; Maximum likelihood estimation; Memory management; Probabilistic logic; Radiation detectors; Random access memory; Security
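A much-simplified sketch of the bit-sharing idea: each source owns a small "virtual bitmap" whose bits are drawn from one shared physical bit array, and its spread (number of distinct destinations contacted) is estimated from the fraction of zero bits, corrected for the noise other sources introduce. The class name, hash construction, and estimator below are illustrative simplifications; the paper's probabilistic sampling and optimal parameter selection are omitted.

```python
import hashlib
import math

def _h(*parts):
    # deterministic hash, so results are reproducible across runs
    raw = "|".join(map(str, parts)).encode()
    return int(hashlib.blake2b(raw, digest_size=8).hexdigest(), 16)

class SharedBitmapSpreadEstimator:
    def __init__(self, total_bits=4096, virtual_bits=64):
        self.M, self.m = total_bits, virtual_bits
        self.B = bytearray(total_bits)      # shared physical bit array

    def _slot(self, src, j):
        # j-th virtual bit of `src`, placed pseudo-randomly in B
        return _h("slot", src, j) % self.M

    def record(self, src, dst):
        # each distinct (src, dst) contact sets one virtual bit of src
        j = _h("pair", src, dst) % self.m
        self.B[self._slot(src, j)] = 1

    def estimate(self, src):
        # V_b: zero fraction of the whole array (background noise level)
        # V_s: zero fraction of src's own virtual bitmap
        Vb = (self.M - sum(self.B)) / self.M
        zeros = sum(1 for j in range(self.m)
                    if self.B[self._slot(src, j)] == 0)
        Vs = max(zeros, 1) / self.m
        # ML-style spread estimate: n ~= m * (ln V_b - ln V_s)
        return self.m * (math.log(Vb) - math.log(Vs))
```

A source contacting hundreds of distinct destinations drives its virtual bitmap toward all-ones and yields a large estimate, while a source contacting a couple of hosts stays near zero, even though both share the same physical bits.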
【Paper Link】 【Pages】:3209-3217
【Authors】: Lixiong Chen ; Xue Liu ; Qixin Wang ; Yufei Wang
【Abstract】: The rapid scaling up of Networked Control Systems (NCS) is forcing traditional single-hop shared medium industrial fieldbuses (a.k.a. fieldbuses) to evolve toward multi-hop switched fieldbuses. Such evolution faces many challenges. The first is the re-design of switch architecture. To meet the real-time nature of NCS traffic, and to lay a smooth evolution path for switch manufacturers, it is widely agreed that a (if not the) promising switch architecture is an input queueing crossbar architecture running TDMA scheduling. The second challenge is real-time multicast. NCS applications usually involve complex distributed multiple-input-multiple-output interactions, which by their nature necessitate real-time multicast. In shared medium fieldbuses, real-time multicast is straightforward, as data sent to the medium is heard by all nodes. On multi-hop switched fieldbuses, however, real-time multicast becomes non-trivial. In this paper, we prove that real-time multicast on multi-hop switched fieldbuses is NP-hard. What is more, real-time multicast on multi-hop switched fieldbuses is fundamentally different from Internet multicast, due to the real-time requirement and the homogeneous input queueing crossbar switch architecture. In particular, the capacities of a switch's external links are no longer mutually independent. Such a drastic change of assumptions warrants developing new routing algorithms; we therefore propose a heuristic algorithm.
【Keywords】: field buses; multicast communication; queueing theory; telecommunication network routing; telecommunication traffic; time division multiple access; NCS traffic; NP-hard; TDMA scheduling; distributed multiple-input-multiple-output interaction; input queueing crossbar architecture; multihop switched fieldbuses; networked control system; real-time multicast routing scheme; single-hop shared medium industrial fieldbuses; switch architecture; Computer architecture; Internet; Real time systems; Routing; Schedules; Spread spectrum communication; Switches
【Paper Link】 【Pages】:3218-3226
【Authors】: Dinh Nguyen Tran ; Jinyang Li ; Lakshminarayanan Subramanian ; Sherman S. M. Chow
【Abstract】: Most existing large-scale networked systems on the Internet such as peer-to-peer systems are vulnerable to Sybil attacks, where a single adversary can introduce many bogus identities. One promising defense against Sybil attacks is to perform social-network based admission control to bound the number of Sybil identities admitted. SybilLimit, the best known Sybil admission control mechanism, can restrict the number of Sybil identities admitted per attack edge to O(log n) with high probability assuming O(n/ log n) attack edges. In this paper, we propose Gatekeeper, a decentralized Sybil-resilient admission control protocol that significantly improves over SybilLimit. Gatekeeper is optimal for the case of O(1) attack edges and admits only O(1) Sybil identities (with high probability) in random expander social networks (real-world social networks exhibit expander properties). In the face of O(k) attack edges (for any k ∈ O(n/ log n)), Gatekeeper admits O(log k) Sybils per attack edge. This result provides a graceful continuum across the spectrum of attack edges. We demonstrate the effectiveness of Gatekeeper experimentally on real-world social networks and synthetic topologies.
【Keywords】: authorisation; computer network security; social networking (online); telecommunication network topology; Gatekeeper; Internet; Sybil attacks; SybilLimit; decentralized Sybil-resilient admission control protocol; large-scale networked systems; optimal Sybil-resilient node admission control; peer-to-peer systems; social-network based admission control; Admission control; Databases; Face; Logic gates; Open systems; Protocols; Social network services
【Paper Link】 【Pages】:3227-3235
【Authors】: Lu Zhang ; Xueyan Tang
【Abstract】: Distributed Interactive Applications (DIAs) are networked systems that allow multiple participants to interact with one another in real time. The wide spread of client locations in large-scale DIAs often requires geographical distribution of servers to meet the latency requirements of the applications. In the distributed server architecture, how the clients are assigned to the servers directly affects the network latency involved in the interactions between clients. This paper focuses on the client assignment problem for enhancing the interactivity performance of DIAs. We formulate the problem as a combinatorial optimization problem on graphs and prove that it is NP-complete. Several heuristic algorithms are proposed for fast computation of good client assignments and are experimentally evaluated. The experimental results show that the proposed greedy algorithms perform close to the optimal assignment and generally outperform the Nearest-Assignment algorithm that assigns each client to its nearest server.
【Keywords】: client-server systems; graph theory; greedy algorithms; interactive systems; optimisation; NP-complete problem; client assignment problem; combinational optimization problem; distributed interactive application; distributed server architecture; graph theory; greedy algorithm; large-scale DIA; network latency; networked system; Computer architecture; Games; Heuristic algorithms; Partitioning algorithms; Polynomials; Routing; Servers
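To make the contrast with Nearest-Assignment concrete, here is one plausible greedy heuristic (an illustration under assumed delay matrices, not necessarily the paper's exact algorithm): clients are considered in turn, and each is placed on the server minimizing the maximum client-to-client interaction latency of the partial assignment, where the latency between two clients is the client-server delay on each side plus the delay between their servers.

```python
def interaction_latency(assign, c2s, s2s):
    """Max over client pairs of d(a, s_a) + d(s_a, s_b) + d(s_b, b),
    where s_x is the server client x is assigned to."""
    cs = sorted(assign)
    return max((c2s[a][assign[a]] + s2s[assign[a]][assign[b]]
                + c2s[b][assign[b]]
                for i, a in enumerate(cs) for b in cs[i + 1:]),
               default=0)

def greedy_assign(clients, servers, c2s, s2s):
    """Place each client on the server that minimizes the worst
    interaction latency of the assignment built so far."""
    assign = {}
    for c in clients:
        costs = []
        for s in servers:
            assign[c] = s     # tentatively try server s
            costs.append((interaction_latency(assign, c2s, s2s), s))
        assign[c] = min(costs)[1]
    return assign
```

On a toy instance with two servers 10 ms apart and a client sitting between them, the greedy rule can co-locate interacting clients on one server and beat the nearest-server rule, mirroring the paper's experimental finding.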
【Paper Link】 【Pages】:3236-3244
【Authors】: Rami Cohen ; Danny Raz
【Abstract】: Overlay routing is a very attractive scheme that allows improving certain properties of routing without the need to change the standards of the current underlying routing. However, deploying overlay routing requires the placement and maintenance of overlay infrastructure. This gives rise to the following optimization problem: find a minimal set of overlay nodes such that the required routing properties are satisfied. In this paper we rigorously study this optimization problem. We show that it is NP-hard and derive a non-trivial approximation algorithm for it, where the approximation ratio depends on specific properties of the problem at hand. We examine the practical aspects of the scheme by evaluating the gain one can obtain in two real scenarios. The first is BGP routing: using up-to-date data reflecting the current BGP routing policy in the Internet, we show that a relatively small number of relay servers (fewer than 100) is sufficient to enable routing over shorter paths from a single source to all ASes, reducing the average path length of inflated paths by 40%. We also demonstrate that using the scheme for TCP performance improvement results in an almost optimal placement of overlay nodes.
【Keywords】: Internet; approximation theory; computational complexity; resource allocation; telecommunication network routing; transport protocols; BGP routing; Internet; NP hard; cost effective resource allocation; nontrivial approximation algorithm; optimization problem; overlay routing relay nodes; up-to-date data; Algorithm design and analysis; Approximation algorithms; Approximation methods; Internet; Relays; Resource management; Routing
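The paper's approximation algorithm is tied to problem-specific properties; as a generic illustration of the flavor of the optimization, here is the classic greedy set-cover heuristic applied to relay selection, where each candidate relay "covers" the set of routing demands it satisfies (e.g., source-destination pairs whose paths it shortens) and we repeatedly pick the relay covering the most uncovered demands. The data layout is hypothetical.

```python
def greedy_relay_selection(demands, covers):
    """demands: iterable of routing requirements.
    covers: dict mapping each candidate relay to the set of demands
    it satisfies. Returns (chosen relays, demands left uncovered)."""
    remaining = set(demands)
    chosen = []
    while remaining:
        # pick the relay covering the most still-uncovered demands
        best = max(covers, key=lambda r: len(covers[r] & remaining))
        if not covers[best] & remaining:
            break  # leftover demands that no relay can satisfy
        chosen.append(best)
        remaining -= covers[best]
    return chosen, remaining
```

This generic greedy rule carries the standard logarithmic set-cover guarantee; the paper's algorithm instead achieves a ratio that depends on the structure of the routing property being enforced.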