Proceedings of the 18th ACM Conference on Computer and Communications Security, CCS 2011, Chicago, Illinois, USA, October 17-21, 2011. ACM 【DBLP Link】
【Paper Link】 【Pages】:1-2
【Authors】: Farnam Jahanian
【Abstract】: Critical infrastructure, including the Internet, plays a vital role in the economic, political, and social fabric of society. This interdependency leaves society vulnerable to a wide range of threats that impact the security, reliability, availability, and overall trustworthiness of information technology resources. Assuring these properties in the face of adversarial behavior and an Internet that has changed dramatically in size, complexity, and diversity over the last decade has proven to be a critical challenge. In this talk, I will reflect on the evolution of Internet threats, from early threats such as viruses and worms to modern botnets. I will explore how attackers' changing technological means (e.g., resilient infrastructure, covert communication) have intertwined with their changing social, behavioral, and economic motives (e.g., vandalism, crime, activism) to create today's large, complex, and diverse ecosystem of threats. I will also touch on how future innovation in the threat landscape will likely be driven by Internet adoption patterns such as the explosive growth of on-line data, the proliferation of mobile devices, and the emergence of the "cloud" computing paradigm. In response to these challenges, I will discuss the need for sustained, long-term research investments in a spectrum of scientific and technical areas, with particular emphasis on calls to develop the scientific foundations of cyber-security and to accelerate the transition of knowledge into practice. I will articulate a vision in which a cyber-secure society is necessary if we are to achieve the promise of computing to address a wide range of national priorities, including health, energy, transportation, education and life-long learning, and public safety/emergency preparedness.
【Keywords】: computing and society; privacy; security; trustworthy cyberspace
【Paper Link】 【Pages】:3-16
【Authors】: Yanlin Li ; Jonathan M. McCune ; Adrian Perrig
【Abstract】: Recent research demonstrates that malware can infect peripherals' firmware in a typical x86 computer system, e.g., by exploiting vulnerabilities in the firmware itself or in the firmware update tools. Verifying the integrity of peripherals' firmware is thus an important challenge. We propose software-only attestation protocols to verify the integrity of peripherals' firmware, and show that they can detect all known software-based attacks. We implement our scheme using a Netgear GA620 network adapter in an x86 PC, and evaluate our system with known attacks.
【Keywords】: integrity of peripherals' firmware; proxy attack; software-based attestation
【Paper Link】 【Pages】:17-28
【Authors】: Mohammad Mannan ; Beom Heyn Kim ; Afshar Ganjali ; David Lie
【Abstract】: Malware and phishing are two major threats for users seeking to perform security-sensitive tasks using computers today. To mitigate these threats, we introduce Unicorn, which combines the phishing protection of standard security tokens with the malware protection of trusted computing hardware. The Unicorn security token holds user authentication credentials, but only releases them if it can verify an attestation that the user's computer is free of malware. In this way, the user is released from having to remember passwords, as well as having to decide when it is safe to use them. The user's computer is further verified by either a TPM or a remote server to produce a two-factor attestation scheme. We have implemented a Unicorn prototype using commodity software and hardware, and two Unicorn example applications (termed uApps, short for Unicorn Applications), to secure access to both remote data services and encrypted local data. Each uApp consists of a small, hardened and immutable OS image, and a single application. Our Unicorn prototype co-exists with a regular user OS, and significantly reduces the time to switch between the secure and general-purpose environments using a novel mechanism that removes the BIOS from the switch time.
【Keywords】: attestation; authentication; malware; phishing; security token; trusted computing
【Paper Link】 【Pages】:29-40
【Authors】: Bin Zeng ; Gang Tan ; Greg Morrisett
【Abstract】: In many software attacks, inducing an illegal control-flow transfer in the target system is one common step. Control-Flow Integrity (CFI) protects a software system by enforcing a pre-determined control-flow graph. In addition to providing strong security, CFI enables static analysis on low-level code. This paper evaluates whether CFI-enabled static analysis can help build efficient and validated data sandboxing. Previous systems generally sandbox memory writes for integrity, but avoid protecting confidentiality due to the high overhead of sandboxing memory reads. To reduce overhead, we have implemented a series of optimizations that remove sandboxing instructions if they are proven unnecessary by static analysis. On top of CFI, our system adds only 2.7% runtime overhead on SPECint2000 for sandboxing memory writes and adds a modest 19% for sandboxing both reads and writes. We have also built a principled data-sandboxing verifier based on range analysis. The verifier checks the safety of the results of the optimizer, which removes the need to trust the rewriter and optimizer. Our results show that the combination of CFI and static analysis has the potential to bring down the cost of general inlined reference monitors, while maintaining strong security.
【Keywords】: binary rewriting; control-flow integrity; inlined reference monitors; static analysis
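The write/read sandboxing described above can be illustrated with address masking, a standard software-fault-isolation idiom: forcing the high bits of every store address confines it to the data region without a check-and-branch. This is a sketch; the region layout and names are invented for illustration, and the paper's optimizer would additionally use CFI-enabled static analysis to drop the mask wherever an address is provably in bounds.

```python
REGION_BASE = 0x2000_0000        # start of the sandboxed data region (assumed)
REGION_MASK = 0x000F_FFFF        # 1 MiB region: keep only the low 20 bits

memory = {}                      # stand-in for the process address space


def sandboxed_store(addr, value):
    """Confine a store to [REGION_BASE, REGION_BASE + REGION_MASK].

    Whatever address the (possibly compromised) code computes, the
    mask-and-or rewrite lands it inside the data region.
    """
    safe = REGION_BASE | (addr & REGION_MASK)
    memory[safe] = value
    return safe
```

Sandboxing reads works the same way, which is why confidentiality protection roughly trades one extra masked instruction per load.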
【Paper Link】 【Pages】:41-50
【Authors】: Ralf Küsters ; Max Tuengerthal
【Abstract】: Canetti's universal composition theorem and the joint state composition theorems by Canetti and Rabin are useful and widely employed tools for the modular design and analysis of cryptographic protocols. However, these theorems assume that parties participating in a protocol session have pre-established a unique session ID (SID). While the use of such SIDs is a good design principle, existing protocols, in particular real-world security protocols, typically do not use pre-established SIDs, at least not explicitly and not in the particular way stipulated by the theorems. As a result, the composition theorems cannot be applied for analyzing such protocols in a modular and faithful way. In this paper, we therefore present universal and joint state composition theorems which do not assume pre-established SIDs. In our joint state composition theorem, the joint state is an ideal functionality which supports several cryptographic operations, including public-key encryption, (authenticated and unauthenticated) symmetric encryption, MACs, digital signatures, and key derivation. This functionality has recently been proposed by Küsters and Tuengerthal and has been shown to be realizable under standard cryptographic assumptions and for a reasonable class of environments. We demonstrate the usefulness of our composition theorems by several case studies on real-world security protocols, including IEEE 802.11i, SSL/TLS, SSH, IPsec, and EAP-PSK. While our applications focus on real-world security protocols, our theorems, models, and techniques should be useful beyond this domain.
【Keywords】: composition with joint state; real-world security protocols; universal composition theorems
【Paper Link】 【Pages】:51-62
【Authors】: Christina Brzuska ; Marc Fischlin ; Bogdan Warinschi ; Stephen C. Williams
【Abstract】: In this paper we examine composability properties for the fundamental task of key exchange. Roughly speaking, we show that key exchange protocols secure in the prevalent model of Bellare and Rogaway can be composed with arbitrary protocols that require symmetrically distributed keys. This composition theorem holds if the key exchange protocol satisfies an additional technical requirement that our analysis brings to light: it should be possible to determine which sessions derive equal keys given only the publicly available information. What distinguishes our results from virtually all existing work is that we do not rely, directly or indirectly, on the simulation paradigm. Instead, our security notions and composition theorems exclusively use a game-based formalism. We thus avoid several undesirable consequences of simulation-based security notions and support applicability to a broader class of protocols. In particular, we offer an abstract formalization of game-based security that should be of independent interest in other investigations using game-based formalisms.
【Keywords】: bellare-rogaway; composition; key exchange
【Paper Link】 【Pages】:63-74
【Authors】: Véronique Cortier ; Bogdan Warinschi
【Abstract】: Computational soundness results show that under certain conditions it is possible to conclude computational security whenever symbolic security holds. Unfortunately, each soundness result is usually established for some set of cryptographic primitives, and extending the result to encompass new primitives typically requires redoing most of the work. In this paper we suggest a way of getting around this problem. We propose a notion of computational soundness that we term deduction soundness. As for other soundness notions, our definition captures the idea that a computational adversary does not have any more power than a symbolic adversary. However, a key aspect of deduction soundness is that it considers, intrinsically, the use of the primitives in the presence of functions specified by the adversary. As a consequence, the resulting notion is amenable to modular extensions. We prove that a deduction sound implementation of some arbitrary primitives can be extended to include asymmetric encryption and public data structures (e.g., pairings or lists), without repeating the original proof effort. Furthermore, our notion of soundness concerns cryptographic primitives in a way that is independent of any protocol specification language. Nonetheless, we show that deduction soundness leads to computational soundness for languages (or protocols) that satisfy a so-called commutation property.
【Keywords】: composability; computational soundness
【Paper Link】 【Pages】:75-86
【Authors】: Nils Ole Tippenhauer ; Christina Pöpper ; Kasper Bonne Rasmussen ; Srdjan Capkun
【Abstract】: An increasing number of wireless applications rely on GPS signals for localization, navigation, and time synchronization. However, civilian GPS signals are known to be susceptible to spoofing attacks which make GPS receivers in range believe that they reside at locations different from their real physical locations. In this paper, we investigate the requirements for successful GPS spoofing attacks on individuals and groups of victims with civilian or military GPS receivers. In particular, we are interested in identifying from which locations and with which precision the attacker needs to generate its signals in order to successfully spoof the receivers. We show, for example, that any number of receivers can easily be spoofed to one arbitrary location; however, the attacker is restricted to only a few transmission locations when spoofing a group of receivers while preserving their constellation. In addition, we investigate the practical aspects of a satellite-lock takeover, in which a victim receives spoofed signals after first being locked on to legitimate GPS signals. Using a civilian GPS signal generator, we perform a set of experiments and find the minimal precision of the attacker's spoofing signals required for covert satellite-lock takeover.
【Keywords】: GPS; spoofing; spoofing countermeasures
【Paper Link】 【Pages】:87-98
【Authors】: Stephen E. McLaughlin ; Patrick McDaniel ; William Aiello
【Abstract】: The smart grid introduces concerns for the loss of consumer privacy; recently deployed smart meters retain and distribute highly accurate profiles of home energy use. These profiles can be mined by Non Intrusive Load Monitors (NILMs) to expose much of the human activity within the served site. This paper introduces a new class of algorithms and systems, called Non Intrusive Load Leveling (NILL) to combat potential invasions of privacy. NILL uses an in-residence battery to mask variance in load on the grid, thus eliminating exposure of the appliance-driven information used to compromise consumer privacy. We use real residential energy use profiles to drive four simulated deployments of NILL. The simulations show that NILL exposes only 1.1 to 5.9 useful energy events per day hidden amongst hundreds or thousands of similar battery-suppressed events. Thus, the energy profiles exhibited by NILL are largely useless for current NILM algorithms. Surprisingly, such privacy gains can be achieved using battery systems whose storage capacity is far lower than the residence's aggregate load average. We conclude by discussing how the costs of NILL can be offset by energy savings under tiered energy schedules.
【Keywords】: load monitor; privacy; smart meter
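The core NILL idea, using a battery to hold the utility-visible load near a steady-state target so appliance events vanish from the metered profile, can be sketched in a few lines. This is a toy simulation under assumed parameters (target level, battery capacity, charge/discharge rate); it is not the paper's control algorithm, which also manages recovery when the battery's limits bind.

```python
import statistics


def nill(load_profile, target, capacity, max_rate):
    """Mask per-interval load variance with an in-residence battery.

    When the home draws more than `target` the battery discharges to
    cover the difference; when it draws less, the battery charges to
    absorb the surplus, within rate and capacity limits.
    """
    charge = capacity / 2.0          # start half full
    visible = []
    for demand in load_profile:
        delta = demand - target      # positive => discharge needed
        if delta > 0:
            assist = min(delta, max_rate, charge)
        else:
            assist = -min(-delta, max_rate, capacity - charge)
        charge -= assist             # negative assist means charging
        visible.append(demand - assist)
    return visible


# Toy profile (kW per interval): base load plus appliance on/off events.
raw = [0.4, 0.4, 2.0, 2.0, 0.3, 1.5, 0.3, 0.3, 2.2, 0.4]
masked = nill(raw, target=0.9, capacity=5.0, max_rate=2.0)
```

On this profile the limits never bind, so the metered load is flat at the target and a NILM classifier sees no edges to label.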
【Paper Link】 【Pages】:99-110
【Authors】: Ashlesh Sharma ; Lakshminarayanan Subramanian ; Eric A. Brewer
【Abstract】: Paper forgery is among the leading causes of corruption in many developing regions. In this paper, we introduce PaperSpeckle, a robust system that leverages the natural randomness present in paper to generate a fingerprint for any piece of paper. Our goal in developing PaperSpeckle is to build a low-cost paper-based authentication mechanism for applications in rural regions such as microfinance, healthcare, land ownership records, supply chain services and education, which heavily rely on paper-based records. Unlike prior paper fingerprinting techniques that have extracted fingerprints based on the fiber structure of paper, PaperSpeckle uses the texture speckle pattern, a random bright/dark region formation at the microscopic level when light falls onto the paper, to extract a unique fingerprint to identify paper. In PaperSpeckle, we show how to extract a "repeatable" texture speckle pattern of a microscopic region of a paper using low-cost machinery involving paper, pen and a cheap microscope. Using extensive testing on different types of paper, we show that PaperSpeckle can produce a robust repeatable fingerprint even if paper is damaged due to crumpling, printing or scribbling, soaking in water, or aging with time.
【Keywords】: paper fingerprinting; paper speckle
【Paper Link】 【Pages】:111-124
【Authors】: Amir Moradi ; Alessandro Barenghi ; Timo Kasper ; Christof Paar
【Abstract】: Over the last two decades FPGAs have become central components for many advanced digital systems, e.g., video signal processing, network routers, data acquisition and military systems. In order to protect the intellectual property and to prevent fraud, e.g., by cloning a design embedded into an FPGA or manipulating its content, many current FPGAs employ a bitstream encryption feature. We develop a successful attack on the bitstream encryption engine integrated in the widespread Virtex-II Pro FPGAs from Xilinx, using side-channel analysis. After measuring the power consumption of a single power-up of the device and a modest amount of off-line computation, we are able to recover all three different keys used by its triple DES module. Our method allows extracting secret keys from any real-world device where the bitstream encryption feature of Virtex-II Pro is enabled. As a consequence, the target product can be cloned and manipulated at the will of the attacker since no side-channel protection was included into the design of the decryption module. Also, more advanced attacks such as reverse engineering or the introduction of hardware Trojans become potential threats. While performing the side-channel attack, we were able to deduce a hypothetical architecture of the hardware encryption engine. To our knowledge, this is the first attack against the bitstream encryption of a commercial FPGA reported in the open literature.
【Keywords】: FPGA; bitstream encryption; side-channel attacks; triple des
【Paper Link】 【Pages】:125-138
【Authors】: Elie Bursztein ; Matthieu Martin ; John C. Mitchell
【Abstract】: We carry out a systematic study of existing visual CAPTCHAs based on distorted characters that are augmented with anti-segmentation techniques. Applying a systematic evaluation methodology to 15 current CAPTCHA schemes from popular web sites, we find that 13 are vulnerable to automated attacks. Based on this evaluation, we identify a series of recommendations for CAPTCHA designers and attackers, and possible future directions for producing more reliable human/computer distinguishers.
【Keywords】: CAPTCHA; human interaction proof; machine learning; vision
【Paper Link】 【Pages】:139-150
【Authors】: Nan Zheng ; Aaron Paloski ; Haining Wang
【Abstract】: Biometric authentication verifies a user based on their inherent, unique characteristics --- who you are. In addition to physiological biometrics, behavioral biometrics has proven very useful in authenticating a user. Mouse dynamics, with its unique patterns of mouse movements, is one such behavioral biometric. In this paper, we present a user verification system using mouse dynamics, which is both accurate and efficient enough for practical use. The key feature of our system lies in using much more fine-grained (point-by-point) angle-based metrics of mouse movements for user verification. These metrics vary from person to person yet are independent of the computing platform. Moreover, we utilize support vector machines (SVMs) for accurate and fast classification. Our technique is robust across different operating platforms, and no specialized hardware is required. The efficacy of our approach is validated through a series of experiments. Our experimental results show that the proposed system can verify a user in an accurate and timely manner, with only minor system overhead.
【Keywords】: angle-based metrics; mouse dynamics; user verification
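The point-by-point angle-based metrics can be sketched directly: for each interior point of a trajectory, compute the direction of travel and the turn (curvature angle) between consecutive movement vectors. These would then be aggregated into feature vectors for an SVM; the feature names here are illustrative, not the paper's exact definitions.

```python
import math


def angle_metrics(points):
    """Per-point angle features of a mouse trajectory given as (x, y) pairs.

    Returns the direction of travel at each interior point and the
    signed turn between consecutive movement vectors, wrapped to
    (-pi, pi]. Both are resolution-independent, unlike raw speeds.
    """
    directions, curvatures = [], []
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        a = math.atan2(y1 - y0, x1 - x0)   # incoming direction
        b = math.atan2(y2 - y1, x2 - x1)   # outgoing direction
        directions.append(a)
        turn = (b - a + math.pi) % (2 * math.pi) - math.pi
        curvatures.append(turn)
    return directions, curvatures


straight = [(i, i) for i in range(5)]      # a perfectly straight drag
dirs, curls = angle_metrics(straight)      # constant direction, zero turn
```

A straight drag yields a constant direction and zero curvature everywhere; a human hand produces characteristic non-zero turn distributions.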
【Paper Link】 【Pages】:151-162
【Authors】: Deepak Garg ; Limin Jia ; Anupam Datta
【Abstract】: We present the design, implementation and evaluation of an algorithm that checks audit logs for compliance with privacy and security policies. The algorithm, which we name reduce, addresses two fundamental challenges in compliance checking that arise in practice. First, in order to be applicable to realistic policies, reduce operates on policies expressed in a first-order logic that allows restricted quantification over infinite domains. We build on ideas from logic programming to identify the restricted form of quantified formulas. The logic can, in particular, express all 84 disclosure-related clauses of the HIPAA Privacy Rule, which involve quantification over the infinite set of messages containing personal information. Second, since audit logs are inherently incomplete (they may not contain sufficient information to determine whether a policy is violated or not), reduce proceeds iteratively: in each iteration, it provably checks as much of the policy as possible over the current log and outputs a residual policy that can only be checked when the log is extended with additional information. We prove correctness, termination, time and space complexity results for reduce. We implement reduce and optimize the base implementation using two heuristics for database indexing that are guided by the syntactic structure of policies. The implementation is used to check simulated audit logs for compliance with the HIPAA Privacy Rule. Our experimental results demonstrate that the algorithm is fast enough to be used in practice.
【Keywords】: audit; formal logic; incomplete logs; privacy policy
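The iterative structure of reduce, discharge what the current log can decide and emit a residual policy for later, can be illustrated with a toy three-valued checker. The policy and log encodings below are invented for illustration; the real algorithm operates on first-order formulas with restricted quantification, not Python predicates.

```python
def reduce_policy(policy, log):
    """One iteration of toy compliance checking over an incomplete log.

    Each atomic check returns True (satisfied), False (violated), or
    None (the log lacks the information to decide). Undecided checks
    form the residual policy, to be re-run once the log grows.
    """
    violations, residual = [], []
    for name, check in policy:
        verdict = check(log)
        if verdict is False:
            violations.append(name)
        elif verdict is None:
            residual.append((name, check))
    return violations, residual


def disclosure_authorized(log):
    # Hypothetical clause: every disclosure needs a matching authorization.
    if "disclosure" not in log:
        return True                 # nothing to check
    if "authorization" not in log:
        return None                 # incomplete log: cannot decide yet
    return log["authorization"] == log["disclosure"]


policy = [("hipaa-disclosure", disclosure_authorized)]
partial_log = {"disclosure": "msg-17"}
violations, residual = reduce_policy(policy, partial_log)

# Later, the log is extended and only the residual is re-checked.
extended_log = {**partial_log, "authorization": "msg-17"}
violations2, residual2 = reduce_policy(residual, extended_log)
```

The first pass cannot decide the clause and keeps it in the residual; the second pass, with the extended log, discharges it.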
【Paper Link】 【Pages】:163-174
【Authors】: Karthick Jayaraman ; Vijay Ganesh ; Mahesh V. Tripunitara ; Martin C. Rinard ; Steve J. Chapin
【Abstract】: Verifying that access-control systems maintain desired security properties is recognized as an important problem in security. Enterprise access-control systems have grown to protect tens of thousands of resources, and there is a need for verification to scale commensurately. We present a new abstraction-refinement technique for automatically finding errors in Administrative Role-Based Access Control (ARBAC) security policies. ARBAC is the first and most comprehensive administrative scheme for Role-Based Access Control (RBAC) systems. Underlying our approach is a change in mindset: we propose that error finding complements verification, can be more scalable, and allows for the use of a wider variety of techniques. In our approach, we use an abstraction-refinement technique to first identify and discard roles that are unlikely to be relevant to the verification question (the abstraction step), and then restore such abstracted roles incrementally (the refinement steps). Errors are one-sided: if there is an error in the abstracted policy, then there is an error in the original policy. If there is an error in a policy whose role-dependency graph diameter is smaller than a certain bound, then we find the error. Our abstraction-refinement technique complements conventional state-space exploration techniques such as model checking. We have implemented our technique in an access-control policy analysis tool. We show empirically that our tool scales well to realistic policies, and is orders of magnitude faster than prior tools.
【Keywords】: access control; model checking; program verification
【Paper Link】 【Pages】:175-186
【Authors】: Aaron Johnson ; Paul F. Syverson ; Roger Dingledine ; Nick Mathewson
【Abstract】: We introduce a novel model of routing security that incorporates the ordinarily overlooked variations in trust that users have for different parts of the network. We focus on anonymous communication, and in particular onion routing, although we expect the approach to apply more broadly. This paper provides two main contributions. First, we present a novel model to consider the various security concerns for route selection in anonymity networks when users vary their trust over parts of the network. Second, to show the usefulness of our model, we present as an example a new algorithm to select paths in onion routing. We analyze its effectiveness against deanonymization and other information leaks, and particularly how it fares in our model versus existing algorithms, which do not consider trust. In contrast to those, we find that our trust-based routing strategy can protect anonymity against an adversary capable of attacking a significant fraction of the network.
【Keywords】: anonymous communication; onion routing; privacy; trust
【Paper Link】 【Pages】:187-200
【Authors】: Amir Houmansadr ; Giang T. K. Nguyen ; Matthew Caesar ; Nikita Borisov
【Abstract】: Many users face surveillance of their Internet communications and a significant fraction suffer from outright blocking of certain destinations. Anonymous communication systems allow users to conceal the destinations they communicate with, but do not hide the fact that the users are using them. The mere use of such systems may invite suspicion, or access to them may be blocked. We therefore propose Cirripede, a system that can be used for unobservable communication with Internet destinations. Cirripede is designed to be deployed by ISPs; it intercepts connections from clients to innocent-looking destinations and redirects them to the true destination requested by the client. The communication is encoded in a way that is indistinguishable from normal communications to anyone without the master secret key, while public-key cryptography is used to eliminate the need for any secret information that must be shared with Cirripede users. Cirripede is designed to work scalably with routers that handle large volumes of traffic while imposing minimal overhead on ISPs and not disrupting existing traffic. This allows Cirripede proxies to be strategically deployed at central locations, making access to Cirripede very difficult to block. We built a proof-of-concept implementation of Cirripede and performed a testbed evaluation of its performance properties.
【Keywords】: censorship-resistance; unobservability
【Paper Link】 【Pages】:201-214
【Authors】: Swagatika Prusty ; Brian Neil Levine ; Marc Liberatore
【Abstract】: OneSwarm is a system for anonymous p2p file sharing in use by thousands of peers. It aims to provide Onion Routing-like privacy and BitTorrent-like performance. We demonstrate several flaws in OneSwarm's design and implementation through three different attacks available to forensic investigators. First, we prove that the current design is vulnerable to a novel timing attack that allows just two attackers attached to the same target to determine if it is the source of queried content. When attackers comprise 15% of OneSwarm peers, we expect over 90% of remaining peers will be attached to two attackers and therefore vulnerable. Thwarting the attack increases OneSwarm query response times, making them longer than the equivalent in Onion Routing. Second, we show that OneSwarm's vulnerability to traffic analysis by colluding attackers is much greater than was previously reported, and is much worse than Onion Routing. We show for this second attack that when investigators comprise 25% of peers, over 40% of the network can be investigated with 80% precision to find the sources of content. Our examination of the OneSwarm source code found differences with the technical paper that significantly reduce security. For the implementation in use by thousands of people, attackers that comprise 25% of the network can successfully use this second attack against 98% of remaining peers with 95% precision. Finally, we show that a novel application of a known TCP-based attack allows a single attacker to identify whether a neighbor is the source of data or a proxy for it. Users that turn off the default rate-limit setting are exposed. Each attack can be repeated as investigators leave and rejoin the network. All of our attacks are successful in a forensics context: law enforcement can use them legally ahead of a warrant. Furthermore, private investigators, who have fewer restrictions on their behavior, can use them more easily in pursuit of evidence for such civil suits as copyright infringement.
【Keywords】: child sexual exploitation; digital forensics; p2p networks
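The claim that 15% attacker peers leave over 90% of targets attached to at least two attackers can be sanity-checked with a simple binomial model. The per-peer neighbor count used below (26) is our assumption for illustration, not a figure from the paper.

```python
def p_two_attackers(degree, frac):
    """Probability that a peer has at least two attacker neighbors.

    Simplifying model: each of `degree` neighbors is independently an
    attacker with probability `frac`; subtract the cases of zero and
    exactly one attacker neighbor from 1.
    """
    p0 = (1 - frac) ** degree
    p1 = degree * frac * (1 - frac) ** (degree - 1)
    return 1 - p0 - p1


p = p_two_attackers(degree=26, frac=0.15)   # comes out above 0.9
```

Under this model the 90% figure is reached once peers maintain a few dozen neighbors, which is consistent with the abstract's expectation.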
【Paper Link】 【Pages】:215-226
【Authors】: Prateek Mittal ; Ahmed Khurshid ; Joshua Juen ; Matthew Caesar ; Nikita Borisov
【Abstract】: Anonymity systems such as Tor aim to enable users to communicate in a manner that is untraceable by adversaries that control a small number of machines. To provide efficient service to users, these anonymity systems make full use of forwarding capacity when sending traffic between intermediate relays. In this paper, we show that doing this leaks information about the set of Tor relays in a circuit (path). We present attacks that, with high confidence and based solely on throughput information, can (a) reduce the attacker's uncertainty about the bottleneck relay of any Tor circuit whose throughput can be observed, (b) exactly identify the guard relay(s) of a Tor user when circuit throughput can be observed over multiple connections, and (c) identify whether two concurrent TCP connections belong to the same Tor user, breaking unlinkability. Our attacks are stealthy, and cannot be readily detected by a user or by Tor relays. We validate our attacks using experiments over the live Tor network. We find that the attacker can substantially reduce the entropy of the bottleneck-relay distribution of a Tor circuit whose throughput can be observed; in the median case the entropy is reduced by a factor of 2. Such information leaks from a single Tor circuit can be combined over multiple connections to exactly identify a user's guard relay(s). Finally, we are also able to link two connections from the same initiator with a crossover error rate of less than 1.5% in under 5 minutes. Our attacks are also more accurate and require fewer resources than previous attacks on Tor.
【Keywords】: anonymity; attacks; throughput
【Paper Link】 【Pages】:227-238
【Authors】: Eric Yawei Chen ; Jason Bau ; Charles Reis ; Adam Barth ; Collin Jackson
【Abstract】: Many browser-based attacks can be prevented by using separate browsers for separate web sites. However, most users access the web with only one browser. We explain the security benefits that using multiple browsers provides in terms of two concepts: entry-point restriction and state isolation. We combine these concepts into a general app isolation mechanism that can provide the same security benefits in a single browser. While not appropriate for all types of web sites, many sites with high-value user data can opt in to app isolation to gain defenses against a wide variety of browser-based attacks. We implement app isolation in the Chromium browser and verify its security properties using finite-state model checking. We also measure the performance overhead of app isolation and conduct a large-scale study to evaluate its adoption complexity for various types of sites, demonstrating how the app isolation mechanisms are suitable for protecting a number of high-value Web applications, such as online banking.
【Keywords】: cross-site request forgery; cross-site scripting; isolation; security modeling; web application security; web browser architecture
【Paper Link】 【Pages】:239-250
【Authors】: Mario Heiderich ; Tilman Frosch ; Meiko Jensen ; Thorsten Holz
【Abstract】: Scalable Vector Graphics (SVG) images have so far played a rather small role on the Internet, mainly due to the lack of proper browser support. Recently, things have changed: the W3C and WHATWG draft specifications for HTML5 require modern web browsers to support SVG images embedded in a multitude of ways, ranging from the classical method via specific embedding tags to novel forms of inclusion. In this paper, we introduce several novel attack techniques targeted at major websites, as well as modern browsers, email clients and other comparable tools. In particular, we illustrate that embedded SVG images can carry active content, enabling cross-site scripting and related injection attacks.
【Keywords】: active image injections; browser security; cross site scripting; scalable vector graphics; web security
【Paper Link】 【Pages】:251-262
【Authors】: Adam Doupé ; Bryce Boe ; Christopher Kruegel ; Giovanni Vigna
【Abstract】: The complexity of modern web applications makes it difficult for developers to fully understand the security implications of their code. Attackers exploit the resulting security vulnerabilities to gain unauthorized access to the web application environment. Previous research into web application vulnerabilities has mostly focused on input validation flaws, such as cross-site scripting and SQL injection, while logic flaws have received comparably less attention. In this paper, we present a comprehensive study of a relatively unknown logic flaw in web applications, which we call Execution After Redirect, or EAR. A web application developer can introduce an EAR by calling a redirect method under the assumption that execution will halt. A vulnerability occurs when server-side execution continues after the developer's intended halting point, which can lead to broken/insufficient access controls and information leakage. We start with an analysis of how susceptible applications written in nine web frameworks are to EAR vulnerabilities. We then discuss the results from the EAR challenge contained within the 2010 International Capture the Flag Competition. We present an open-source, white-box, static analysis tool to detect EARs in Ruby on Rails web applications; this tool found 3,944 EAR instances in 18,127 open-source applications. Finally, we describe an approach to prevent EARs in web frameworks.
【Keywords】: execution after redirect; static analysis; web applications
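The EAR pattern described above is easy to see in miniature. The sketch below uses a hypothetical `redirect()` helper rather than any real framework's API; it only illustrates the core bug, a redirect call the developer assumed would halt the handler:

```python
# Minimal sketch of an Execution After Redirect (EAR) bug. redirect()
# is a hypothetical helper: it queues a 302 response but does NOT stop
# the caller, mirroring how many framework redirect methods behave.

executed = []  # records privileged actions that actually ran

def redirect(response, location):
    # Queues a redirect; execution of the caller continues afterwards.
    response["status"] = 302
    response["Location"] = location

def delete_user(request, response, user_id):
    if not request.get("is_admin"):
        redirect(response, "/login")   # BUG: missing 'return'
    # Reached even for non-admins -> broken access control.
    executed.append(("deleted", user_id))

def delete_user_fixed(request, response, user_id):
    if not request.get("is_admin"):
        return redirect(response, "/login")  # halts as intended
    executed.append(("deleted", user_id))

resp = {}
delete_user({"is_admin": False}, resp, 42)       # vulnerable path runs
delete_user_fixed({"is_admin": False}, {}, 99)   # safe path skipped
```

The fix is a one-word change (`return`), which is exactly why such flaws survive code review and motivate static detection.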
【Paper Link】 【Pages】:263-274
【Authors】: Peter Chapman ; David Evans
【Abstract】: Web applications divide their state between the client and the server. The frequent and highly dynamic client-server communication that is characteristic of modern web applications leaves them vulnerable to side-channel leaks, even over encrypted connections. We describe a black-box tool for detecting and quantifying the severity of side-channel vulnerabilities by analyzing network traffic over repeated crawls of a web application. By viewing the adversary as a multi-dimensional classifier, we develop a methodology to more thoroughly measure the distinguishability of network traffic for a variety of classification metrics. We evaluate our detection system on several deployed web applications, accounting for proposed client and server-side defenses. Our results illustrate the limitations of entropy measurements used in previous work and show how our new metric based on the Fisher criterion can be used to more robustly reveal side-channels in web applications.
【Keywords】: side-channel leaks; web applications
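To give intuition for the Fisher-criterion idea the abstract mentions, here is a one-dimensional sketch. The traffic numbers are invented for illustration (not data from the paper), and the paper's actual metric is multi-dimensional; the principle is that well-separated class means relative to class variance indicate a distinguishable, and therefore leaky, feature:

```python
# One-dimensional Fisher criterion for two traffic classes, e.g. total
# response bytes observed for two different hidden application states.
# Illustrative values only.

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def fisher(a, b):
    # J = (mu_a - mu_b)^2 / (var_a + var_b): a large J means the two
    # classes are easy to tell apart from traffic -> likely side channel.
    return (mean(a) - mean(b)) ** 2 / (var(a) + var(b))

state_a = [5010, 5030, 4990, 5000]   # response sizes for state "a"
state_b = [7200, 7180, 7230, 7210]   # response sizes for state "b"
leaky = fisher(state_a, state_b)     # well separated -> large J

padded_a = [8000, 8010, 7990, 8000]  # after a padding defense
padded_b = [8005, 7995, 8000, 8010]
safe = fisher(padded_a, padded_b)    # overlapping -> small J
```

Unlike a raw entropy measure, this statistic accounts for within-class variance, which is why overlapping padded distributions score near zero.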
【Paper Link】 【Pages】:275-284
【Authors】: Kevin Coogan ; Gen Lu ; Saumya K. Debray
【Abstract】: When new malware samples are discovered, it is important for researchers to analyze and understand them as quickly as possible. This task has been made more difficult in recent years as researchers have seen an increasing use of virtualization-obfuscated malware code. These programs are difficult to comprehend and reverse engineer, since they are resistant to both static and dynamic analysis techniques. Current approaches to dealing with such code first reverse-engineer the byte code interpreter, then use this to work out the logic of the byte code program. This outside-in approach produces good results when the structure of the interpreter is known, but cannot be applied to all cases. This paper proposes a different approach to the problem that focuses on identifying instructions that affect the observable behavior of the obfuscated code. This inside-out approach requires fewer assumptions, and aims to complement existing techniques by broadening the domain of obfuscated programs eligible for automated analysis. Results from a prototype tool on real-world malicious code are encouraging.
【Keywords】: deobfuscation; dynamic analysis; virtualization
【Paper Link】 【Pages】:285-296
【Authors】: Clemens Kolbitsch ; Engin Kirda ; Christopher Kruegel
【Abstract】: Malware continues to remain one of the most important security problems on the Internet today. Whenever an anti-malware solution becomes popular, malware authors typically react promptly and modify their programs to evade defense mechanisms. For example, recently, malware authors have increasingly started to create malicious code that can evade dynamic analysis. One recent form of evasion against dynamic analysis systems is stalling code. Stalling code is typically executed before any malicious behavior. The attacker's aim is to delay the execution of the malicious activity long enough so that an automated dynamic analysis system fails to extract the interesting malicious behavior. This paper presents the first approach to detect and mitigate malicious stalling code, and to ensure forward progress within the amount of time allocated for the analysis of a sample. Experimental results show that our system, called HASTEN, works well in practice, and that it is able to detect additional malicious behavior in real-world malware samples.
【Keywords】: emulation; evasion; malware analysis
【Paper Link】 【Pages】:297-308
【Authors】: Giorgos Vasiliadis ; Michalis Polychronakis ; Sotiris Ioannidis
【Abstract】: Network intrusion detection systems are faced with the challenge of identifying diverse attacks in extremely high-speed networks. For this reason, they must operate at multi-Gigabit speeds, while performing highly-complex per-packet and per-flow data processing. In this paper, we present a multi-parallel intrusion detection architecture tailored for high speed networks. To cope with the increased processing throughput requirements, our system parallelizes network traffic processing and analysis at three levels, using multi-queue NICs, multiple CPUs, and multiple GPUs. The proposed design avoids locking, optimizes data transfers between the different processing units, and speeds up data processing by mapping different operations to the processing units where they are best suited. Our experimental evaluation shows that our prototype implementation based on commodity off-the-shelf equipment can reach processing speeds of up to 5.2 Gbit/s with zero packet loss when analyzing traffic in a real network, whereas the pattern matching engine alone reaches speeds of up to 70 Gbit/s, which is an almost four times improvement over prior solutions that use specialized hardware.
【Keywords】: GPU; NIDs; acceleration; intrusion detection; pattern matching
【Paper Link】 【Pages】:309-320
【Authors】: Jiyong Jang ; David Brumley ; Shobha Venkataraman
【Abstract】: The sheer volume of new malware found each day is growing at an exponential pace. This growth has created a need for automatic malware triage techniques that determine what malware is similar, what malware is unique, and why. In this paper, we present BitShred, a system for large-scale malware similarity analysis and clustering, and for automatically uncovering semantic inter- and intra-family relationships within clusters. The key idea behind BitShred is using feature hashing to dramatically reduce the high-dimensional feature spaces that are common in malware analysis. Feature hashing also allows us to mine correlated features between malware families and samples using co-clustering techniques. Our evaluation shows that BitShred speeds up typical malware triage tasks by up to 2,365x and uses up to 82x less memory on a single CPU, all with comparable accuracy to previous approaches. We also develop a parallelized version of BitShred, and demonstrate scalability within the Hadoop framework.
【Keywords】: co-clustering; feature hashing; hadoop; malware triage
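The feature-hashing idea behind BitShred can be sketched compactly. The parameters below (byte 4-grams, a 256-bit fingerprint, SHA-1 as the hash, Jaccard similarity on bit vectors) are illustrative assumptions, not the paper's exact configuration, but they show how hashing collapses a huge n-gram feature space into a small fixed-size vector that still supports similarity comparison:

```python
import hashlib

# Sketch of feature hashing for malware similarity: hash each byte
# n-gram into one of M bit positions, then compare fingerprints with
# bitwise Jaccard similarity. Parameters are illustrative only.

M = 256  # fingerprint size in bits

def fingerprint(data, n=4):
    bits = 0
    for i in range(len(data) - n + 1):
        gram = data[i:i + n]
        h = int.from_bytes(hashlib.sha1(gram).digest()[:4], "big") % M
        bits |= 1 << h           # set the bit this n-gram hashes to
    return bits

def jaccard(f1, f2):
    # |intersection| / |union| over the set bits of the fingerprints.
    inter = bin(f1 & f2).count("1")
    union = bin(f1 | f2).count("1")
    return inter / union if union else 1.0

a = fingerprint(b"push ebp; mov ebp, esp; call unpack_payload")
b = fingerprint(b"push ebp; mov ebp, esp; call other_payload")
c = fingerprint(b"completely unrelated byte sequence here....")
# Related samples share most hashed n-grams; unrelated ones share few.
sim_ab, sim_ac = jaccard(a, b), jaccard(a, c)
```

Because fingerprints are fixed-size bit vectors, pairwise comparison reduces to a few word-level AND/OR operations, which is what makes triage at this scale feasible.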
【Paper Link】 【Pages】:321-330
【Authors】: Vincent Cheval ; Hubert Comon-Lundh ; Stéphanie Delaune
【Abstract】: We consider security properties of cryptographic protocols that can be modeled using the notion of trace equivalence. The notion of equivalence is crucial when specifying privacy-type properties, like anonymity, vote-privacy, and unlinkability. In this paper, we give a calculus that is close to the applied pi calculus and that allows one to capture most existing protocols that rely on classical cryptographic primitives. First, we propose a symbolic semantics for our calculus relying on constraint systems to represent infinite sets of possible traces, and we reduce the decidability of trace equivalence to deciding a notion of symbolic equivalence between sets of constraint systems. Second, we develop an algorithm allowing us to decide whether two sets of constraint systems are in symbolic equivalence or not. Altogether, this yields the first decidability result of trace equivalence for a general class of processes that may involve else branches and/or private channels (for a bounded number of sessions).
【Keywords】: constraint solving; security protocols
【Paper Link】 【Pages】:331-340
【Authors】: Mihhail Aizatulin ; Andrew D. Gordon ; Jan Jürjens
【Abstract】: Consider the problem of verifying security properties of a cryptographic protocol coded in C. We propose an automatic solution that needs neither a pre-existing protocol description nor manual annotation of source code. First, symbolically execute the C program to obtain symbolic descriptions for the network messages sent by the protocol. Second, apply algebraic rewriting to obtain a process calculus description. Third, run an existing protocol analyser (ProVerif) to prove security properties or find attacks. We formalise our algorithm and appeal to existing results for ProVerif to establish computational soundness under suitable circumstances. We analyse only a single execution path, so our results are limited to protocols with no significant branching. The results in this paper provide the first computationally sound verification of weak secrecy and authentication for (single execution paths of) C code.
【Keywords】: C; ProVerif; model extraction; symbolic execution
【Paper Link】 【Pages】:341-350
【Authors】: Cédric Fournet ; Markulf Kohlweiss ; Pierre-Yves Strub
【Abstract】: Type systems are effective tools for verifying the security of cryptographic programs. They provide automation, modularity and scalability, and have been applied to large security protocols. However, they traditionally rely on abstract assumptions on the underlying cryptographic primitives, expressed in symbolic models. Cryptographers usually reason on security assumptions using lower level, computational models that precisely account for the complexity and success probability of attacks. These models are more realistic, but they are harder to formalize and automate. We present the first modular automated program verification method based on standard cryptographic assumptions. We show how to verify ideal functionalities and protocols written in ML by typing them against new cryptographic interfaces using F7, a refinement type checker coupled with an SMT-solver. We develop a probabilistic core calculus for F7 and formalize its type safety in Coq. We build typed modules and interfaces for MACs, signatures, and encryptions, and establish their authenticity and secrecy properties. We relate their ideal functionalities and concrete implementations, using game-based program transformations behind typed interfaces. We illustrate our method on a series of protocol implementations.
【Keywords】: cryptography; refinement types; security protocols
【Paper Link】 【Pages】:351-360
【Authors】: Cédric Fournet ; Jérémy Planul ; Tamara Rezk
【Abstract】: We develop a flexible information-flow type system for a range of encryption primitives, precisely reflecting their diverse functional and security features. Our rules enable encryption, blinding, homomorphic computation, and decryption, with selective key re-use for different types of payloads. We show that, under standard cryptographic assumptions, any well-typed probabilistic program using encryptions is secure (that is, computationally non-interferent) against active adversaries, both for confidentiality and integrity. We illustrate our approach on classic schemes such as ElGamal and Paillier encryption. We present two applications of cryptographic verification by typing: (1) private search on data streams; and (2) the bootstrapping part of Gentry's fully homomorphic encryption. We provide a prototype typechecker for our system.
【Keywords】: confidentiality; cryptography; integrity; non-interference; secure information flow; type systems
【Paper Link】 【Pages】:361-362
【Authors】: Jan Camenisch
【Abstract】: Using the Internet and other electronic media for our daily tasks has become common. Thereby a lot of sensitive information is exchanged, processed, and stored at many different places. Once released, controlling the dispersal of this information is virtually impossible. Worse, the press reports daily on incidents where sensitive information has been lost, stolen, or misused - often involving large and reputable organizations. Privacy-enhancing technologies can help to minimize the amount of information that needs to be revealed in transactions, on the one hand, and to limit the dispersal, on the other hand. Many of these technologies build on common cryptographic primitives that allow for data to be authenticated and encrypted in such a way that it is possible to efficiently prove possession and/or properties of data without revealing the data or side-information about it. Proving such statements is of course possible for any signature and encryption scheme. However, if the result is to be practical, special cryptographic primitives and proof protocols are needed. In this talk we will first consider a few example scenarios and motivate the need for such cryptographic building blocks before we then present and discuss these. We start with efficient discrete-logarithm-based proof protocols, often referred to as generalized Schnorr proofs. They allow one to prove knowledge of different discrete logarithms (exponents) and relations among them. Now, to be able to prove possession of a (valid) signature and a message with generalized Schnorr proofs, it is necessary that the signature and the message signed are exponents and that no hash-function is used in the signature verification. Similarly, for encryption schemes, the plain text needs to be an exponent. We will present and discuss a number of such signature and encryption schemes.
To show the power of these building blocks, we will consider a couple of example protocols such as anonymous access control and anonymous polling. We then conclude with a discussion on security definitions and proofs. We hope that the presented building blocks will enable many new privacy-preserving protocols and applications in the future.
【Keywords】: cryptographic protocols; privacy
【Paper Link】 【Pages】:363-374
【Authors】: Deepa Srinivasan ; Zhi Wang ; Xuxian Jiang ; Dongyan Xu
【Abstract】: Recent rapid malware growth has exposed the limitations of traditional in-host malware-defense systems and motivated the development of secure virtualization-based out-of-VM solutions. By running vulnerable systems as virtual machines (VMs) and moving security software from inside the VMs to outside, the out-of-VM solutions securely isolate the anti-malware software from the vulnerable system. However, the presence of the semantic gap also leads to a compatibility problem of not supporting existing defense software. In this paper, we present process out-grafting, an architectural approach to address both isolation and compatibility challenges in out-of-VM approaches for fine-grained process-level execution monitoring. Specifically, by relocating a suspect process from inside a VM to run side-by-side with the out-of-VM security tool, our technique effectively removes the semantic gap and supports existing user-mode process monitoring tools without any modification. Moreover, by forwarding the system calls back to the VM, we can smoothly continue the execution of the out-grafted process without weakening the isolation of the monitoring tool. We have developed a KVM-based prototype and used it to natively support a number of existing tools without any modification. The evaluation results including measurement with benchmark programs show it is effective and practical with a small performance overhead.
【Keywords】: process monitoring; semantic gap; virtualization
【Paper Link】 【Pages】:375-388
【Authors】: Ahmed M. Azab ; Peng Ning ; Xiaolan Zhang
【Abstract】: SICE is a novel framework to provide hardware-level isolation and protection for sensitive workloads running on x86 platforms in compute clouds. Unlike existing isolation techniques, SICE does not rely on any software component in the host environment (i.e., an OS or a hypervisor). Instead, the security of the isolated environments is guaranteed by a trusted computing base that only includes the hardware, the BIOS, and the System Management Mode (SMM). SICE provides fast context switching to and from an isolated environment, allowing isolated workloads to time-share the physical platform with untrusted workloads. Moreover, SICE supports a large range (up to 4GB) of isolated memory. Finally, the most unique feature of SICE is the use of multicore processors to allow the isolated environments to run concurrently and yet securely beside the untrusted host. We have implemented a SICE prototype using an AMD x86 hardware platform. Our experiments show that SICE performs fast context switching (67 microseconds) to and from the isolated environment and that it imposes a reasonable overhead (3% on all but one benchmark) on the operation of an isolated Linux virtual machine. Our prototype demonstrates that, subject to a careful security review of the BIOS software and the SMM hardware implementation, current hardware architecture already provides abstractions that can support building strong isolation mechanisms using a very small SMM software foundation of about 300 lines of code.
【Keywords】: isolation; trusted computing; virtualization security
【Paper Link】 【Pages】:389-400
【Authors】: Sven Bugiel ; Stefan Nürnberger ; Thomas Pöppelmann ; Ahmad-Reza Sadeghi ; Thomas Schneider
【Abstract】: Cloud Computing is an emerging technology promising new business opportunities and easy deployment of web services. Much has been written about the risks and benefits of cloud computing in the last years. The literature on clouds often points out security and privacy challenges as the main obstacles, and proposes solutions and guidelines to avoid them. However, most of these works deal with either malicious cloud providers or customers, but ignore the severe threats caused by unaware users. In this paper we consider security and privacy aspects of real-life cloud deployments, independently from malicious cloud providers or customers. We focus on the popular Amazon Elastic Compute Cloud (EC2) and give a detailed and systematic analysis of various crucial vulnerabilities in publicly available and widely used Amazon Machine Images (AMIs) and show how to eliminate them. Our Amazon Image Attacks (AmazonIA) deploy an automated tool that uses only publicly available interfaces and makes no assumptions on the underlying cloud infrastructure. We were able to extract highly sensitive information (including passwords, keys, and credentials) from a variety of publicly available AMIs. The extracted information allows us to (i) start (botnet) instances worth thousands of dollars per day, (ii) provide backdoors into the running machines, (iii) launch impersonation attacks, or (iv) access the source code of the entire web service. Our attacks can be used to completely compromise several real web services offered by companies (including IT-security companies), e.g., for website statistics/user tracking, two-factor authentication, or price comparison. Further, we show mechanisms to identify the AMI of certain running instances. Following the maxim "security and privacy by design" we show how our automated tools together with changes to the user interface can be used to mitigate our attacks.
【Keywords】: app store; attacks; awareness; cloud computing; privacy; secure shell; virtual machine images
【Paper Link】 【Pages】:401-412
【Authors】: Jakub Szefer ; Eric Keller ; Ruby B. Lee ; Jennifer Rexford
【Abstract】: Cloud computing is quickly becoming the platform of choice for many web services. Virtualization is the key underlying technology enabling cloud providers to host services for a large number of customers. Unfortunately, virtualization software is large, complex, and has a considerable attack surface. As such, it is prone to bugs and vulnerabilities that a malicious virtual machine (VM) can exploit to attack or obstruct other VMs -- a major concern for organizations wishing to move to the cloud. In contrast to previous work on hardening or minimizing the virtualization software, we eliminate the hypervisor attack surface by enabling the guest VMs to run natively on the underlying hardware while maintaining the ability to run multiple VMs concurrently. Our NoHype system embodies four key ideas: (i) pre-allocation of processor cores and memory resources, (ii) use of virtualized I/O devices, (iii) minor modifications to the guest OS to perform all system discovery during bootup, and (iv) avoiding indirection by bringing the guest virtual machine in more direct contact with the underlying hardware. Hence, no hypervisor is needed to allocate resources dynamically, emulate I/O devices, support system discovery after bootup, or map interrupts and other identifiers. NoHype capitalizes on the unique use model in cloud computing, where customers specify resource requirements ahead of time and providers offer a suite of guest OS kernels. Our system supports multiple tenants and capabilities commonly found in hosted cloud infrastructures. Our prototype utilizes Xen 4.0 to prepare the environment for guest VMs, and a slightly modified version of Linux 2.6 for the guest OS. Our evaluation with both SPEC and Apache benchmarks shows a roughly 1% performance gain when running applications on NoHype compared to running them on top of Xen 4.0. 
Our security analysis shows that, while there are some minor limitations with current commodity hardware, NoHype is a significant advance in the security of cloud computing.
【Keywords】: attack vectors; hardware security; hypervisor security; multicore; secure cloud computing; virtualization
【Paper Link】 【Pages】:413-422
【Authors】: Tibor Jager ; Juraj Somorovsky
【Abstract】: XML Encryption was standardized by W3C in 2002, and is implemented in XML frameworks of major commercial and open-source organizations like Apache, Red Hat, IBM, and Microsoft. It is employed in a large number of major web-based applications, ranging from business communications, e-commerce, and financial services over healthcare applications to governmental and military infrastructures. In this work we describe a practical attack on XML Encryption which allows an adversary to decrypt a ciphertext by sending related ciphertexts to a Web Service and evaluating the server response. We show that an adversary can decrypt a ciphertext by performing only 14 requests per plaintext byte on average. This poses a serious and truly practical security threat to all currently used implementations of XML Encryption. In a sense the attack can be seen as a generalization of padding oracle attacks (Vaudenay, Eurocrypt 2002). It exploits a subtle correlation between the block cipher mode of operation, the character encoding of encrypted text, and the response behaviour of a Web Service when an XML message cannot be parsed correctly.
【Keywords】: XML encryption; padding oracle attacks; web service security
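The CBC malleability that this family of attacks exploits can be demonstrated in a few lines. The sketch below is not the paper's attack: it replaces the real block cipher with a trivial per-byte substitution purely so the CBC structure is visible, and it only shows the key property that flipping a bit in ciphertext block C[i] flips the same bit in decrypted block P[i+1] while garbling P[i]:

```python
# Toy CBC mode over an illustrative invertible "block cipher" (per-byte
# add-key). Demonstrates the bit-flipping malleability that padding/
# parsing oracle attacks on CBC ciphertexts build on.

BS = 16  # block size in bytes

def E(key, block):
    return bytes((b + k) % 256 for b, k in zip(block, key))

def D(key, block):
    return bytes((b - k) % 256 for b, k in zip(block, key))

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(key, iv, pt):
    prev, out = iv, []
    for i in range(0, len(pt), BS):
        prev = E(key, xor(pt[i:i + BS], prev))  # C[i] = E(P[i] ^ C[i-1])
        out.append(prev)
    return b"".join(out)

def cbc_decrypt(key, iv, ct):
    prev, out = iv, []
    for i in range(0, len(ct), BS):
        c = ct[i:i + BS]
        out.append(xor(D(key, c), prev))        # P[i] = D(C[i]) ^ C[i-1]
        prev = c
    return b"".join(out)

key, iv = bytes(range(BS)), bytes(BS)
pt = b"<creditcard>1234" + b"567890</creditca"  # two 16-byte blocks
ct = bytearray(cbc_encrypt(key, iv, pt))
ct[0] ^= 0x01                                   # attacker flips 1 bit in C[0]
mangled = cbc_decrypt(key, iv, bytes(ct))
# Byte 0 of block 1 is garbled, but block 2 has exactly one bit flipped
# at the matching position; a server that parses the result as XML and
# reacts differently per outcome acts as a decryption oracle.
```

The paper's contribution is showing that XML parsing and character-encoding checks provide exactly such an oracle, even without classic padding errors.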
【Paper Link】 【Pages】:423-432
【Authors】: Mihir Bellare ; David Cash ; Sriram Keelveedhi
【Abstract】: In response to needs of disk encryption standardization bodies, we provide the first tweakable ciphers that are proven to securely encipher their own keys. We provide both a narrow-block design StE and a wide-block design EtE. Our proofs assume only standard PRP-CCA security of the underlying tweakable ciphers.
【Keywords】: AES; cipher; disk encryption
【Paper Link】 【Pages】:433-444
【Authors】: Ali Bagherzandi ; Stanislaw Jarecki ; Nitesh Saxena ; Yanbin Lu
【Abstract】: We revisit the problem of protecting a user's private data against adversarial compromise of the user's device(s) which store this data. We formalize the solution we propose as Password-Protected Secret-Sharing (PPSS), which allows a user to secret-share her data among n trustees in such a way that (1) the user can retrieve the shared secret upon entering a correct password into a reconstruction protocol, which succeeds as long as at least t+1 uncorrupted trustees are accessible, and (2) the shared data remains secret even against an adversary which corrupts up to t trustees, with the level of protection expected of password authentication, i.e., the probability that the adversary learns anything useful about the secret is at most q/|D|, where q is the number of reconstruction protocol instances the adversary manages to trigger and |D| is the size of the password dictionary. We propose an efficient PPSS protocol in the PKI model, secure under the DDH assumption, using non-interactive zero-knowledge proofs with efficient instantiations in the Random Oracle Model. Our protocol is practical, with fewer than 16 exponentiations per trustee and 8t+17 exponentiations per user, with O(1) bandwidth between the user and each trustee, and only three message flows, implying a single round of interaction in the on-line phase. As a side benefit our PPSS protocol yields a new Threshold Password Authenticated Key Exchange (T-PAKE) protocol in the PKI model with significantly lower message, communication, and server computation complexities than existing T-PAKEs.
【Keywords】: intrusion tolerance; password authentication; secret sharing
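The (t, n) threshold layer that the PPSS notion builds on is classical Shamir secret sharing. The sketch below shows only that layer, over an illustratively chosen prime field; the paper's protocol adds the password protection, zero-knowledge proofs, and DDH-based machinery on top:

```python
import random

# Minimal Shamir (t, n) secret sharing over GF(P): any t+1 of the n
# shares reconstruct the secret; t or fewer reveal nothing about it.
# This is only the threshold-sharing building block, not the paper's
# password-protected protocol.

P = 2 ** 127 - 1  # a Mersenne prime, used as the field modulus

def share(secret, t, n):
    # Random degree-t polynomial f with f(0) = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of f at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=2, n=5)   # tolerate up to 2 corrupted trustees
recovered = reconstruct(shares[:3])   # any t+1 = 3 shares suffice
```

The field inverse is computed via Fermat's little theorem (`pow(den, P-2, P)`), which is valid because P is prime.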
【Paper Link】 【Pages】:445-454
【Authors】: Ran Canetti ; Ben Riva ; Guy N. Rothblum
【Abstract】: The current move to Cloud Computing raises the need for verifiable delegation of computations, where a weak client delegates his computation to a powerful server, while maintaining the ability to verify that the result is correct. Although there are prior solutions to this problem, none of them is yet both general and practical for real-world use. We demonstrate a relatively efficient and general solution where the client delegates the computation to several servers, and is guaranteed to determine the correct answer as long as even a single server is honest. We show: A protocol for any efficiently computable function, with logarithmically many rounds, based on any collision-resistant hash family. The protocol is set in terms of Turing Machines but can be adapted to other computation models. An adaptation of the protocol for the X86 computation model and a prototype implementation, called Quin, for Windows executables. We describe the architecture of Quin and experiment with several parameters on live clouds. We show that the protocol is practical, can work with today's clouds, and is efficient both for the servers and for the client.
【Keywords】: cloud computing; verifiable computation
【Paper Link】 【Pages】:455-466
【Authors】: Tyler Moore ; Nektarios Leontiadis ; Nicolas Christin
【Abstract】: Online service providers are engaged in constant conflict with miscreants who try to siphon a portion of legitimate traffic to make illicit profits. We study the abuse of "trending" search terms, in which miscreants place links to malware-distributing or ad-filled web sites in web search and Twitter results, by collecting and analyzing measurements over nine months from multiple sources. We devise heuristics to identify ad-filled sites, report on the prevalence of malware and ad-filled sites in trending-term search results, and measure the success in blocking such content. We uncover collusion across offending domains using network analysis, and use regression analysis to conclude that both malware and ad-filled sites thrive on less popular, and less profitable trending terms. We build an economic model informed by our measurements and conclude that ad-filled sites and malware distribution may be economic substitutes. Finally, because our measurement interval spans February 2011, when Google announced changes to its ranking algorithm to root out low-quality sites, we can assess the impact of search-engine intervention on the profits miscreants can achieve.
【Keywords】: advertisements; malware; online crime; search engines
【Paper Link】 【Pages】:467-476
【Authors】: Long Lu ; Roberto Perdisci ; Wenke Lee
【Abstract】: Search engine optimization (SEO) techniques are often abused to promote websites among search results. This is a practice known as blackhat SEO. In this paper we tackle a newly emerging and especially aggressive class of blackhat SEO, namely search poisoning. Unlike other blackhat SEO techniques, which typically attempt to promote a website's ranking only under a limited set of search keywords relevant to the website's content, search poisoning techniques disregard any term relevance constraint and are employed to poison popular search keywords with the sole purpose of diverting large numbers of users to short-lived traffic-hungry websites for malicious purposes. To accurately detect search poisoning cases, we designed a novel detection system called SURF. SURF runs as a browser component to extract a number of robust (i.e., difficult to evade) detection features from search-then-visit browsing sessions, and is able to accurately classify malicious search-user redirections resulting from users clicking on poisoned search results. Our evaluation on real-world search poisoning instances shows that SURF can achieve a detection rate of 99.1% at a false positive rate of 0.9%. Furthermore, we applied SURF to analyze a large dataset of search-related browsing sessions collected over a period of seven months starting in September 2010. Through this long-term measurement study we were able to reveal new trends and interesting patterns related to a great variety of poisoning cases, thus contributing to a better understanding of the prevalence and gravity of the search poisoning problem.
【Keywords】: detection; malicious search engine redirection; measurement; search engine poisoning
【Paper Link】 【Pages】:477-490
【Authors】: David Y. Wang ; Stefan Savage ; Geoffrey M. Voelker
【Abstract】: Cloaking is a common 'bait-and-switch' technique used to hide the true nature of a Web site by delivering blatantly different semantic content to different user segments. It is often used in search engine optimization (SEO) to obtain user traffic illegitimately for scams. In this paper, we measure and characterize the prevalence of cloaking on different search engines, how this behavior changes for targeted versus untargeted advertising and ultimately the response to site cloaking by search engine providers. Using a custom crawler, called Dagger, we track both popular search terms (e.g., as identified by Google, Alexa and Twitter) and targeted keywords (focused on pharmaceutical products) for over five months, identifying when distinct results were provided to crawlers and browsers. We further track the lifetime of cloaked search results as well as the sites they point to, demonstrating that cloakers can expect to maintain their pages in search results for several days on popular search engines and maintain the pages themselves for longer still.
【Keywords】: cloaking; search engine optimization; web spam
【Paper Link】 【Pages】:491-500
【Authors】: Shai Halevi ; Danny Harnik ; Benny Pinkas ; Alexandra Shulman-Peleg
【Abstract】: Cloud storage systems are becoming increasingly popular. A promising technology that keeps their cost down is deduplication, which stores only a single copy of repeating data. Client-side deduplication attempts to identify deduplication opportunities already at the client and save the bandwidth of uploading copies of existing files to the server. In this work we identify attacks that exploit client-side deduplication, allowing an attacker to gain access to arbitrary-size files of other users based on a very small hash signature of these files. More specifically, an attacker who knows the hash signature of a file can convince the storage service that it owns that file, hence the server lets the attacker download the entire file. (In parallel to our work, a subset of these attacks were recently introduced in the wild with respect to the Dropbox file synchronization service.) To overcome such attacks, we introduce the notion of proofs-of-ownership (PoWs), which lets a client efficiently prove to a server that the client holds a file, rather than just some short information about it. We formalize the concept of proof-of-ownership under rigorous security definitions and the rigorous efficiency requirements of petabyte-scale storage systems. We then present solutions based on Merkle trees and specific encodings, and analyze their security. We implemented one variant of the scheme. Our performance measurements indicate that the scheme incurs only a small overhead compared to naive client-side deduplication.
【Keywords】: cloud storage; deduplication; merkle trees; proofs of ownership
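The Merkle-tree mechanism underlying a proof-of-ownership can be sketched directly. This is a bare challenge-response over an 8-block toy file, in the spirit of the PoW idea rather than the paper's exact schemes (which add specific encodings for rigorous security): the server stores only the root, challenges a random leaf, and the prover answers with the block plus its authentication path:

```python
import hashlib

# Minimal Merkle-tree ownership check: the server keeps the root; a
# prover holding the file answers a random-leaf challenge with the leaf
# block and its sibling-hash path. Block count must be a power of two.

def H(x):
    return hashlib.sha256(x).digest()

def build_tree(blocks):
    level = [H(b) for b in blocks]
    tree = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree  # tree[-1][0] is the Merkle root

def prove(blocks, tree, idx):
    # Authentication path: the sibling hash at every level below the root.
    path, i = [], idx
    for level in tree[:-1]:
        path.append(level[i ^ 1])
        i //= 2
    return blocks[idx], path

def verify(root, idx, block, path):
    h, i = H(block), idx
    for sib in path:
        h = H(h + sib) if i % 2 == 0 else H(sib + h)
        i //= 2
    return h == root

blocks = [bytes([b]) * 32 for b in range(8)]  # 8 toy file blocks
tree = build_tree(blocks)
root = tree[-1][0]                            # all the server must store
blk, path = prove(blocks, tree, idx=5)        # answer a challenge on leaf 5
ok = verify(root, 5, blk, path)               # True: prover holds the block
bad = verify(root, 5, b"hash-only guess", path)  # False without the block
```

Knowing only a short file hash is no longer enough: a correct response requires the challenged block itself, which is exactly the gap the hash-based deduplication attacks exploited.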
【Paper Link】 【Pages】:501-514
【Authors】: Kevin D. Bowers ; Marten van Dijk ; Ari Juels ; Alina Oprea ; Ronald L. Rivest
【Abstract】: This paper presents a new challenge: verifying that a remote server is storing a file in a fault-tolerant manner, i.e., such that it can survive hard-drive failures. We describe an approach called the Remote Assessment of Fault Tolerance (RAFT). The key technique in a RAFT is to measure the time taken for a server to respond to a read request for a collection of file blocks. The larger the number of hard drives across which a file is distributed, the faster the read-request response. Erasure codes also play an important role in our solution. We describe a theoretical framework for RAFTs and offer experimental evidence that RAFTs can work in practice in several settings of interest.
【Keywords】: auditing; cloud storage; erasure codes; fault tolerance
【Paper Link】 【Pages】:515-526
【Authors】: Kehuan Zhang ; Xiao-yong Zhou ; Yangyi Chen ; XiaoFeng Wang ; Yaoping Ruan
【Abstract】: The emergence of cost-effective cloud services offers organizations a great opportunity to reduce their cost and increase productivity. This development, however, is hampered by privacy concerns: a significant amount of organizational computing workload at least partially involves sensitive data and therefore cannot be directly outsourced to the public cloud. The scale of these computing tasks also renders existing secure outsourcing techniques less applicable. A natural solution is to split a task, keeping the computation on the private data within an organization's private cloud while moving the rest to the public commercial cloud. However, this hybrid cloud computing is not supported by today's data-intensive computing frameworks, MapReduce in particular, which forces users to manually split their computing tasks. In this paper, we present a suite of new techniques that make such privacy-aware data-intensive computing possible. Our system, called Sedic, leverages the special features of MapReduce to automatically partition a computing job according to the security levels of the data it works on, and arranges the computation across a hybrid cloud. Specifically, we modified MapReduce's distributed file system to strategically replicate data, moving sanitized data blocks to the public cloud. Over this data placement, map tasks are carefully scheduled to outsource as much workload to the public cloud as possible, while sensitive data always stay on the private cloud. To minimize inter-cloud communication, our approach also automatically analyzes and transforms the reduction structure of a submitted job to aggregate the map outcomes within the public cloud before sending the result back to the private cloud for the final reduction. This also allows users to interact with our system in the same way they work with MapReduce, and directly run their legacy code in our framework.
We implemented Sedic on Hadoop and evaluated it using both real and synthesized computing jobs on a large-scale cloud test-bed. The study shows that our techniques effectively protect sensitive user data, offload a large amount of computation to the public cloud and also fully preserve the scalability of MapReduce.
【Keywords】: automatic program analysis; cloud security; computation split; data privacy; mapreduce
【Paper Link】 【Pages】:527-536
【Authors】: Rahul Raguram ; Andrew M. White ; Dibyendusekhar Goswami ; Fabian Monrose ; Jan-Michael Frahm
【Abstract】: We investigate the implications of the ubiquity of personal mobile devices and reveal new techniques for compromising the privacy of users typing on virtual keyboards. Specifically, we show that so-called compromising reflections (in, for example, a victim's sunglasses) of a device's screen are sufficient to enable automated reconstruction, from video, of text typed on a virtual keyboard. Despite our deliberate use of low cost commodity video cameras, we are able to compensate for variables such as arbitrary camera and device positioning and motion through the application of advanced computer vision and machine learning techniques. Using footage captured in realistic environments (e.g., on a bus), we show that we are able to reconstruct fluent translations of recorded data in almost all of the test cases, correcting users' typing mistakes at the same time. We believe these results highlight the importance of adjusting privacy expectations in response to emerging technologies.
【Keywords】: compromising emanations; computer vision; language modeling; mobile devices; privacy
【Paper Link】 【Pages】:537-550
【Authors】: Miro Enev ; Sidhant Gupta ; Tadayoshi Kohno ; Shwetak N. Patel
【Abstract】: We conduct an extensive study of information leakage over the powerline infrastructure from eight televisions (TVs) spanning multiple makes, models, and underlying technologies. In addition to being of scientific interest, our findings contribute to the overall debate of whether or not measurements of residential powerlines reveal significant information about the activities within a home. We find that the power supplies of modern TVs produce discernible electromagnetic interference (EMI) signatures that are indicative of the video content being displayed. We measure the stability of these signatures over time and across multiple instances of the same TV model, as well as the robustness of these signatures in the presence of other noisy electronic devices connected to the same powerline.
【Keywords】: electromagnetic interference; information leakage; powerline security
【Paper Link】 【Pages】:551-562
【Authors】: Philip Marquardt ; Arunabh Verma ; Henry Carter ; Patrick Traynor
【Abstract】: Mobile phones are increasingly equipped with a range of highly responsive sensors. From cameras and GPS receivers to three-axis accelerometers, applications running on these devices are able to experience rich interactions with their environment. Unfortunately, some applications may be able to use such sensors to monitor their surroundings in unintended ways. In this paper, we demonstrate that an application with access to accelerometer readings on a modern mobile phone can use such information to recover text entered on a nearby keyboard. Note that unlike previous emanation recovery papers, the accelerometers on such devices sample at near the Nyquist rate, making previous techniques unworkable. Our application instead detects and decodes keystrokes by measuring the relative physical position and distance between each vibration. We then match abstracted words against candidate dictionaries and record word recovery rates as high as 80%. In so doing, we demonstrate the potential to recover significant information from the vicinity of a mobile device without gaining access to resources generally considered to be the most likely sources of leakage (e.g., microphone, camera).
【Keywords】: accelerometer; information leakage; mobile phones
【Paper Link】 【Pages】:563-574
【Authors】: Danfeng Zhang ; Aslan Askarov ; Andrew C. Myers
【Abstract】: Timing channels remain a difficult and important problem for information security. Recent work introduced predictive mitigation, a new way of mitigating leakage through timing channels; this mechanism works by predicting timing from past behavior, and then enforcing the predictions. This paper generalizes predictive mitigation to a larger and important class of systems: systems that receive input requests from multiple clients and deliver responses. The new insight is that timing predictions may be a function of any public information, rather than being a function simply of output events. Based on this insight, a more general mechanism and theory of predictive mitigation becomes possible. The result is that bounds on timing leakage can be tightened, achieving asymptotically logarithmic leakage under reasonable assumptions. By applying it to web applications, the generalized predictive mitigation mechanism is shown to be effective in practice.
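The epoch-doubling idea behind predictive mitigation can be simulated in a few lines. This is our own toy model, not the paper's generalized mechanism: outputs are buffered until their predicted release times, and each misprediction starts a new epoch with a doubled interval, so an observer learns at most on the order of logarithmically many timing decisions.

```python
def mitigate(ready_times, quantum=1.0):
    """Release each output only at its predicted slot.  When an output
    misses its slot (a misprediction), start a new epoch: double the
    prediction interval and repredict from the late event.  The observable
    release schedule then depends only on the (few) epoch transitions."""
    releases, q, next_slot, epochs = [], quantum, quantum, 0
    for t in ready_times:
        if t > next_slot:           # misprediction: new epoch
            epochs += 1
            q *= 2
            next_slot = t + q
        releases.append(next_slot)  # release exactly at the predicted time
        next_slot += q
    return releases, epochs

# Three outputs; only the third misses its slot, triggering one epoch.
print(mitigate([0.5, 1.2, 5.0]))  # ([1.0, 2.0, 7.0], 1)
```

The leakage bound intuition: with n outputs, the doubling schedule can trigger at most O(log n) epochs, and only epoch transitions carry secret-dependent timing information.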
【Keywords】: information flow; interactive systems; mitigation; timing channels
【Paper Link】 【Pages】:575-586
【Authors】: Prithvi Bisht ; Timothy L. Hinrichs ; Nazari Skrupsky ; V. N. Venkatakrishnan
【Abstract】: Parameter tampering attacks are dangerous to a web application whose server fails to replicate the validation of user-supplied data that is performed by the client. Malicious users who circumvent the client can capitalize on the missing server validation. In this paper, we describe WAPTEC, a tool that is designed to automatically identify parameter tampering vulnerabilities and generate exploits by construction to demonstrate those vulnerabilities. WAPTEC involves a new approach to whitebox analysis of the server's code. We tested WAPTEC on six open source applications and found previously unknown vulnerabilities in every single one of them.
【Keywords】: constraint solving; exploit construction; parameter tampering; program analysis
【Paper Link】 【Pages】:587-600
【Authors】: Mike Samuel ; Prateek Saxena ; Dawn Song
【Abstract】: Scripting vulnerabilities, such as cross-site scripting (XSS), plague web applications today. Most research on defense techniques has focused on securing existing legacy applications written in general-purpose languages, such as Java and PHP. However, recent and emerging applications have widely adopted web templating frameworks that have received little attention in research. Web templating frameworks offer an ideal opportunity to ensure safety against scripting attacks by secure construction, but most of today's frameworks fall short of achieving this goal. We propose a novel and principled type-qualifier based mechanism that can be bolted onto existing web templating frameworks. Our solution permits rich expressiveness in the templating language while achieving backwards compatibility, performance and formal security through a context-sensitive auto-sanitization (CSAS) engine. To demonstrate its practicality, we implement our mechanism in Google Closure Templates, a commercially used open-source templating framework that is used in GMail, Google Docs and other applications. Our approach is fast, precise and retrofits to existing commercially deployed template code without requiring any changes or annotations.
【Keywords】: cross-site scripting; type systems; web frameworks
【Paper Link】 【Pages】:601-614
【Authors】: Prateek Saxena ; David Molnar ; Benjamin Livshits
【Abstract】: We empirically analyzed sanitizer use in a shipping web application with over 400,000 lines of code and over 23,244 methods, the largest empirical analysis of sanitizer use of which we are aware. Our analysis reveals two novel classes of errors: context-mismatched sanitization and inconsistent multiple sanitization. Both of these arise not because sanitizers are incorrectly implemented, but rather because they are not placed in code correctly. Much of the work on cross-site scripting detection to date has focused on finding missing sanitizers in programs of average size. In large legacy applications, other sanitization issues leading to cross-site scripting emerge. To address these errors, we propose ScriptGard, a system for ASP.NET applications which can detect and repair the incorrect placement of sanitizers. ScriptGard serves both as a testing aid to developers as well as a runtime mitigation technique. While mitigations for cross-site scripting attacks have seen intense prior research, none of them achieve the same degree of precision when both server and browser context are considered, and many other mitigation techniques require major changes to server-side code or to browsers. Our approach, in contrast, can be incrementally retrofitted to legacy systems with no changes to the source code and no browser changes. With our optimizations, when used for mitigation, ScriptGard incurs virtually no statistically significant overhead.
【Keywords】: cross-site scripting; runtime analysis; web applications
【Paper Link】 【Pages】:615-626
【Authors】: Shuo Tang ; Nathan Dautenhahn ; Samuel T. King
【Abstract】: Browser designers create security mechanisms to help web developers protect web applications, but web developers are usually slow to use these features in web-based applications (web apps). In this paper we introduce Zan, a browser-based system for applying new browser security mechanisms to legacy web apps automatically. Our key insight is that web apps often contain enough information, via web developer source-code patterns or key properties of web-app objects, to allow the browser to infer opportunities for applying new security mechanisms to existing web apps. We apply this new concept to protect authentication cookies, prevent web apps from being framed unwittingly, and perform JavaScript object deserialization safely. We evaluate Zan on up to the 1000 most popular websites for each of the three cases. We find that Zan can provide complementary protection for the majority of potentially applicable websites automatically without requiring additional code from the web developers and with negligible incompatibility impact.
【Keywords】: client-side defense; cookies; frame busting; json; web security
【Paper Link】 【Pages】:627-638
【Authors】: Adrienne Porter Felt ; Erika Chin ; Steve Hanna ; Dawn Song ; David Wagner
【Abstract】: Android provides third-party applications with an extensive API that includes access to phone hardware, settings, and user data. Access to privacy- and security-relevant parts of the API is controlled with an install-time application permission system. We study Android applications to determine whether Android developers follow least privilege with their permission requests. We built Stowaway, a tool that detects overprivilege in compiled Android applications. Stowaway determines the set of API calls that an application uses and then maps those API calls to permissions. We used automated testing tools on the Android API in order to build the permission map that is necessary for detecting overprivilege. We apply Stowaway to a set of 940 applications and find that about one-third are overprivileged. We investigate the causes of overprivilege and find evidence that developers are trying to follow least privilege but sometimes fail due to insufficient API documentation.
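The core of an overprivilege check like Stowaway's reduces to set arithmetic over a map from API calls to required permissions. A sketch with a hypothetical, heavily abridged map (the real map is built by automated testing of the Android API):

```python
# Illustrative call-to-permission map; entries are examples only.
PERMISSION_MAP = {
    "TelephonyManager.getDeviceId": "READ_PHONE_STATE",
    "LocationManager.getLastKnownLocation": "ACCESS_FINE_LOCATION",
    "Camera.open": "CAMERA",
}

def overprivilege(declared_permissions, api_calls):
    """Return permissions declared in the manifest but not required by
    any API call the application actually makes."""
    needed = {PERMISSION_MAP[c] for c in api_calls if c in PERMISSION_MAP}
    return set(declared_permissions) - needed

# An app that declares three permissions but only reads the device ID:
extra = overprivilege({"READ_PHONE_STATE", "CAMERA", "INTERNET"},
                      ["TelephonyManager.getDeviceId"])
# extra == {"CAMERA", "INTERNET"}: declared but unused by observed calls
```

The hard part in practice, as the abstract notes, is building an accurate `PERMISSION_MAP` in the first place, since the official documentation is incomplete.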
【Keywords】: android; least privilege; permissions
【Paper Link】 【Pages】:639-652
【Authors】: Peter Hornyack ; Seungyeop Han ; Jaeyeon Jung ; Stuart E. Schechter ; David Wetherall
【Abstract】: We examine two privacy controls for Android smartphones that empower users to run permission-hungry applications while protecting private data from being exfiltrated: (1) covertly substituting shadow data in place of data that the user wants to keep private, and (2) blocking network transmissions that contain data the user made available to the application for on-device use only. We retrofit the Android operating system to implement these two controls for use with unmodified applications. A key challenge of imposing shadowing and exfiltration blocking on existing applications is that these controls could cause side effects that interfere with user-desired functionality. To measure the impact of side effects, we develop an automated testing methodology that records screenshots of application executions both with and without privacy controls, then automatically highlights the visual differences between the different executions. We evaluate our privacy controls on 50 applications from the Android Market, selected from those that were both popular and permission-hungry. We find that our privacy controls can successfully reduce the effective permissions of the application without causing side effects for 66% of the tested applications. The remaining 34% of applications implemented user-desired functionality that required violating the privacy requirements our controls were designed to enforce; there was an unavoidable choice between privacy and user-desired functionality.
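The two controls, shadowing and exfiltration blocking, can be illustrated with a small state machine. This is a sketch under assumed policy names ("shadow", "local"), not the paper's Android implementation:

```python
# Plausible-but-fake values returned in place of private data.
SHADOW = {"imei": "000000000000000", "contacts": []}

class Device:
    def __init__(self, private, policy):
        # policy maps a data source to "allow", "shadow", or "local"
        self.private, self.policy = private, policy
        self.tainted = set()

    def read(self, source):
        rule = self.policy.get(source, "allow")
        if rule == "shadow":          # app sees shadow data, never the real value
            return SHADOW[source]
        if rule == "local":           # real data, but marked for on-device use only
            self.tainted.add(source)
        return self.private[source]

    def send(self, payload, sources):
        """Exfiltration blocking: drop transmissions carrying tainted data."""
        return not (self.tainted & set(sources))

d = Device({"imei": "490154203237518", "contacts": ["alice"]},
           {"imei": "shadow", "contacts": "local"})
assert d.read("imei") == SHADOW["imei"]      # shadowed
d.read("contacts")                           # real, now tainted
assert not d.send(b"payload", ["contacts"])  # blocked
assert d.send(b"payload", [])                # untainted sends pass
```

The side effects measured in the paper correspond to applications whose desired functionality depends on the real value or on the blocked transmission.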
【Keywords】: android; privacy; smartphone
【Paper Link】 【Pages】:653-666
【Authors】: Raluca A. Popa ; Andrew J. Blumberg ; Hari Balakrishnan ; Frank H. Li
【Abstract】: A significant and growing class of location-based mobile applications aggregate position data from individual devices at a server and compute aggregate statistics over these position streams. Because these devices can be linked to the movement of individuals, there is significant danger that the aggregate computation will violate the location privacy of individuals. This paper develops and evaluates PrivStats, a system for computing aggregate statistics over location data that simultaneously achieves two properties: first, provable guarantees on location privacy even in the face of any side information about users known to the server, and second, privacy-preserving accountability (i.e., protection against abusive clients uploading large amounts of spurious data). PrivStats achieves these properties using a new protocol for uploading and aggregating data anonymously as well as an efficient zero-knowledge proof of knowledge protocol we developed from scratch for accountability. We implemented our system on Nexus One smartphones and commodity servers. Our experimental results demonstrate that PrivStats is a practical system: computing a common aggregate (e.g., count) over the data of 10,000 clients takes less than 0.46 s at the server and the protocol has modest latency (0.6 s) to upload data from a Nexus phone. We also validated our protocols on real driver traces from the CarTel project.
【Keywords】: aggregate statistics; location privacy
【Paper Link】 【Pages】:667-676
【Authors】: Alexey Reznichenko ; Saikat Guha ; Paul Francis
【Abstract】: Online tracking of users in support of behavioral advertising is widespread. Several researchers have proposed non-tracking online advertising systems that go well beyond the requirements of the Do-Not-Track initiative launched by the US Federal Trade Commission (FTC). The primary goal of these systems is to allow for behaviorally targeted advertising without revealing user behavior (clickstreams) or user profiles to the ad network. Although these designs purport to be practical solutions, none of them adequately consider the role of the ad auctions, which today are central to the operation of online advertising systems. This paper looks at the problem of running auctions that leverage user profiles for ad ranking while keeping the user profile private. We define the problem, broadly explore the solution space, and discuss the pros and cons of these solutions. We analyze the performance of our solutions using data from Microsoft Bing advertising auctions. We conclude that, while none of our auctions are ideal in all respects, they are adequate and practical solutions.
【Keywords】: auctions; online advertising; privacy; targeting
【Paper Link】 【Pages】:677-690
【Authors】: Ryan Henry ; Femi G. Olumofin ; Ian Goldberg
【Abstract】: We extend Goldberg's multi-server information-theoretic private information retrieval (PIR) with a suite of protocols for privacy-preserving e-commerce. Our first protocol adds support for single-payee tiered pricing, wherein users purchase database records without revealing the indices or prices of those records. Tiered pricing lets the seller set prices based on each user's status within the system; e.g., non-members may pay full price while members may receive a discounted rate. We then extend tiered pricing to support group-based access control lists with record-level granularity; this allows the servers to set access rights based on users' price tiers. Next, we show how to do some basic bookkeeping to implement a novel top-K replication strategy that enables the servers to construct bestsellers lists, which facilitate faster retrieval for these most popular records. Finally, we build on our bookkeeping functionality to support multiple payees, thus enabling several sellers to offer their digital goods through a common database while enabling the database servers to determine to what portion of revenues each seller is entitled. Our protocols maintain user anonymity in addition to query privacy; that is, queries do not leak information about the index or price of the record a user purchases, the price tier according to which the user pays, the user's remaining balance, or even whether the user has ever queried the database before. No other priced PIR or oblivious transfer protocol supports tiered pricing, access control lists, multiple payees, or top-K replication, whereas ours supports all of these features while preserving PIR's sublinear communication complexity. We have implemented our protocols as an add-on to Percy++, an open source implementation of Goldberg's PIR scheme. Measurements indicate that our protocols are practical for deployment in real-world e-commerce applications.
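For readers unfamiliar with multi-server PIR, the classic two-server XOR scheme (far simpler than Goldberg's Shamir-based protocol that this work extends) conveys the core idea: each query alone is a uniformly random bit vector, so neither non-colluding server learns the index, yet the XOR of the two answers is exactly the requested record.

```python
import os
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def server_answer(db, query_bits):
    """Each server XORs together the records its query selects."""
    acc = bytes(len(db[0]))
    for rec, bit in zip(db, query_bits):
        if bit:
            acc = xor_bytes(acc, rec)
    return acc

def retrieve(db_size, index, ask_server1, ask_server2):
    q1 = [secrets.randbits(1) for _ in range(db_size)]  # uniformly random
    q2 = list(q1)
    q2[index] ^= 1            # differs from q1 only at the wanted index
    # every record selected by both queries cancels; record[index] remains
    return xor_bytes(ask_server1(q1), ask_server2(q2))

db = [os.urandom(16) for _ in range(8)]
rec = retrieve(len(db), 5,
               lambda q: server_answer(db, q),
               lambda q: server_answer(db, q))
assert rec == db[5]
```

Goldberg's scheme generalizes this to more servers with Shamir secret-shared queries, which is what makes robustness and the paper's pricing and bookkeeping extensions possible; the XOR version is only the didactic base case.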
【Keywords】: access control; e-commerce; pets; pir; privacy-enhancing technologies; private information retrieval; zero-knowledge proofs
【Paper Link】 【Pages】:691-702
【Authors】: Pierre Baldi ; Roberta Baronio ; Emiliano De Cristofaro ; Paolo Gasti ; Gene Tsudik
【Abstract】: Recent advances in DNA sequencing technologies have put ubiquitous availability of fully sequenced human genomes within reach. It is no longer hard to imagine the day when everyone will have the means to obtain and store one's own DNA sequence. Widespread and affordable availability of fully sequenced genomes immediately opens up important opportunities in a number of health-related fields. In particular, common genomic applications and tests performed in vitro today will soon be conducted computationally, using digitized genomes. New applications will be developed as genome-enabled medicine becomes increasingly preventive and personalized. However, this progress also prompts significant privacy challenges associated with potential loss, theft, or misuse of genomic data. In this paper, we begin to address genomic privacy by focusing on three important applications: Paternity Tests, Personalized Medicine, and Genetic Compatibility Tests. After carefully analyzing these applications and their privacy requirements, we propose a set of efficient techniques based on private set operations. This allows us to implement in silico, in a secure fashion, some operations that are currently performed via in vitro methods. Experimental results demonstrate that the proposed techniques are both feasible and practical today.
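One standard building block for such tests is private set intersection. Below is a textbook Diffie-Hellman-style PSI sketch, illustrative only: demo-sized parameters, hypothetical marker names, and not the specific protocols proposed in the paper. Equality of double-masked values H(x)^(ab) reveals exactly the common items and nothing else (informally).

```python
import hashlib
import secrets

P = 2**127 - 1   # a Mersenne prime; demo-sized, NOT a secure parameter choice

def h_to_group(item: str) -> int:
    """Hash an item into Z_P (zero/collisions are negligible for a demo)."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

def psi(set_a, set_b):
    """DH-style PSI: both parties exponentiate hashed items with secret
    exponents; exponentiation commutes, so H(x)^(a*b) == H(y)^(b*a)
    exactly when the underlying items match."""
    a = secrets.randbelow(P - 3) + 2   # A's secret exponent
    b = secrets.randbelow(P - 3) + 2   # B's secret exponent
    # B masks its items with b and sends them to A ...
    b_masked = [pow(h_to_group(y), b, P) for y in set_b]
    # ... who re-masks them with a
    double_masked_b = {pow(v, a, P) for v in b_masked}
    # A's items, masked with a by A and then re-masked with b by B
    return {x for x in set_a
            if pow(pow(h_to_group(x), a, P), b, P) in double_masked_b}

# Hypothetical genetic markers shared by two parties:
common = psi({"rs123-A", "rs456-G"}, {"rs456-G", "rs789-T"})
assert common == {"rs456-G"}
```

A real deployment would use a proper prime-order group and blinded communication rounds; the sketch only shows why masked hashes let two parties compare genomic markers without exchanging them in the clear.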
【Keywords】: cryptographic protocols; dna; privacy
【Paper Link】 【Pages】:703-714
【Authors】: Florian Kerschbaum
【Abstract】: On the one hand, compilers for secure computation protocols, such as FairPlay or FairPlayMP, have significantly simplified the development of such protocols. On the other hand, optimized protocols with high performance for special problems demand manual development and security verification. The question considered in this paper is: Can we construct a compiler that produces optimized protocols? We present an optimization technique based on logic inference about what is known from input and output. Using the example of median computation we can show that our program analysis and rewriting technique translates a FairPlay program into an equivalent -- in functionality and security -- program that corresponds to the protocol by Aggarwal et al. Nevertheless our technique is general and can be applied to optimize a wide variety of secure computation protocols.
【Keywords】: optimization; programming; secure two-party computation
【Paper Link】 【Pages】:715-724
【Authors】: Lior Malka
【Abstract】: Garbled circuits play a key role in secure computation, but existing implementations do not scale and are not modular. In this paper we present VMCrypt, a library for secure computation. This library introduces novel algorithms that, regardless of the circuit being garbled or its size, have a very small memory requirement and use no disk storage. By providing an API (Application Programming Interface), VMCrypt can be integrated into existing projects and customized without any modifications to its source code. We measured the performance of VMCrypt on several circuits with hundreds of millions of gates. These are the largest scalable secure computations done to date.
【Keywords】: scalable; secure computation; software API
【Paper Link】 【Pages】:725-728
【Authors】: Florian Adamsky ; Hassan Khan ; Muttukrishnan Rajarajan ; Syed Ali Khayam ; Rudolf Jäger
【Abstract】: The BitTorrent protocol incentivizes sharing through its choking algorithm, which creates clusters of leechers with similar upload capacity to achieve higher overall transfer rates. We show that a malicious peer can exploit BitTorrent's choking algorithm to reduce the upload utilization of high-bandwidth leechers. Using a testbed comprising 24 nodes, we provide experimental evidence of a distributed attack in which malicious peers increase the download time of high-bandwidth leechers by up to 16% and the average download time of the swarm by up to 15%, using distributed, loosely coupled malicious peers that comprise only 4.7% of the swarm. Countermeasures to this attack are part of our ongoing research.
【Keywords】: BitTorrent; attack; choke algorithm; peer-to-peer
【Paper Link】 【Pages】:729-732
【Authors】: Seyed Ali Ahmadzadeh ; Gordon B. Agnew
【Abstract】: In this work, we investigate the application of geometric representation of hash vectors of the information packets in multicast authentication protocols. To this end, a new authentication approach based on geometric properties of hash vectors in an n-dimensional vector space is proposed. The proposed approach enables the receiver to authenticate the source packets and removes malicious packets that may have been injected by an adversary into the channel. A salient feature of the proposed scheme is that its bandwidth overhead is independent from the number of injected packets. Moreover, the performance analysis verifies that the proposed scheme significantly reduces the bandwidth overhead as compared to the well known multicast authentication protocols in the literature (e.g., PRABS).
【Keywords】: adversarial channel; multicast authentication; stream authentication
【Paper Link】 【Pages】:733-736
【Authors】: Patrik Bichsel ; Franz-Stefan Preiss
【Abstract】: Authentication is an all-embracing mechanism in today's (digital) society. While current systems require users to provide much personal data and offer many attack vectors due to their use of username/password combinations, systems that minimize the data released during authentication exist. Implementing such data-minimizing authentication reduces the number of attack vectors, enables enterprises to reduce the risk associated with possession of sensitive user data, and realizes better privacy for users. Our prototype demonstrates the use of data-minimizing authentication using the scenario of accessing a teenage chat room in a privacy-preserving way. The prototype allows a user to retrieve credentials, which may be seen as the digital equivalent of the plastic cards we carry in our wallets today. It also implements a service provider who requires authentication with respect to a service-specific policy. The prototype determines whether and how the user can fulfill the policy with her credentials, which typically results in various options. A graphical user interface then allows the user to select one of these options. Based on the user's input, the prototype generates an Identity Mixer proof that shows fulfillment of the service provider's policy without revealing unnecessary information. Finally, this proof is sent to the service provider for verification. Our prototype is the first implementation of such far-reaching data-minimizing authentication, and we provide the building blocks of our implementation as open-source software.
【Keywords】: anonymous credentials; authentication; digital credentials; policy languages; privacy
【Paper Link】 【Pages】:737-740
【Authors】: Erik-Oliver Blass ; Kaoutar Elkhiyaoui ; Refik Molva ; Olivier Savry ; Cédric Vérhilac
【Abstract】: In this demo, we present the realization and evaluation of a wireless hardware prototype of the previously proposed RFID authentication protocol 'Ff'. The motivation has been to get as close as possible to the (expensive) construction of a wafer and to analyze and demonstrate Ff's real-world feasibility and functional correctness in the field. Besides showing Ff's feasibility, our objective is to show implications of embedding authentication into an industry RFID communication standard. Apart from the documentation at hand, the demonstrator comprises the Ff RFID tag and reader prototypes and a standard EPC tag and reader. The hardware is connected to a laptop controlling the hardware and simulating attacks against authentication.
【Keywords】: Ff; RFID; authentication; gates; hardware; privacy; prototype
【Paper Link】 【Pages】:741-744
【Authors】: Sven Bugiel ; Lucas Davi ; Alexandra Dmitrienko ; Thomas Fischer ; Ahmad-Reza Sadeghi ; Bhargava Shastry
【Abstract】: In this paper we present the design and implementation of a security framework that extends the reference monitor of the Android middleware and deploys mandatory access control on the Linux kernel (based on Tomoyo [9]), aiming at detecting and preventing application-level privilege escalation attacks at runtime. In contrast to existing solutions, our framework is system-centric and efficient, and detects attacks that involve communication channels controlled by both the Android middleware and the Linux kernel (in particular, Binder IPC, Internet sockets, and the file system). It can prevent known confused deputy attacks without false positives and is also flexible enough to prevent unknown confused deputy attacks and attacks by colluding applications (e.g., Soundcomber [11]) at the cost of a small rate of false positives.
【Keywords】: android; confused deputy attacks; mobile security; privilege escalation attacks
【Paper Link】 【Pages】:745-748
【Authors】: Yinzhi Cao ; Vinod Yegneswaran ; Phillip A. Porras ; Yan Chen
【Abstract】: Worms exploiting JavaScript XSS vulnerabilities rampantly infect millions of webpages, while drawing the ire of helpless users. To date, users across all of the popular social networks, including Facebook, MySpace, Orkut, and Twitter, have been vulnerable to XSS worms. We propose PathCutter as a new approach to severing the self-propagation path of JavaScript worms. PathCutter works by blocking two critical steps in the propagation path of an XSS worm: (i) DOM access to different views at the client side and (ii) unauthorized HTTP requests to the server. As a result, even if an XSS vulnerability is successfully exercised at the client, the XSS worm is prevented from propagating to the would-be victim's own social network page. PathCutter is effective against all of the current forms of XSS worms, including those that exploit traditional XSS, DOM-based XSS, and content-sniffing XSS vulnerabilities. We demonstrate PathCutter using WordPress and perform a preliminary evaluation on a proof-of-concept JavaScript worm.
【Keywords】: cross site scripting (XSS); javascript worms; security; social network
【Paper Link】 【Pages】:749-752
【Authors】: Lucas Davi ; Alexandra Dmitrienko ; Manuel Egele ; Thomas Fischer ; Thorsten Holz ; Ralf Hund ; Stefan Nürnberger ; Ahmad-Reza Sadeghi
【Abstract】: Despite extensive research over the last two decades, runtime attacks on software are still prevalent. Recently, smartphones, of which millions are in use today, have become an attractive target for adversaries. However, existing solutions are either ad-hoc or limited in their effectiveness. In this poster, we present a general countermeasure against runtime attacks on smartphone platforms. Our approach makes use of control-flow integrity (CFI), and tackles unique challenges of the ARM architecture and smartphone platforms. Our framework and implementation are efficient, since they require no access to source code, perform CFI enforcement on-the-fly during runtime, and are compatible with memory randomization and code signing/encryption. We chose the Apple iPhone for our reference implementation, because it has become an attractive target for runtime attacks. Our performance evaluation on a real iOS device demonstrates that our implementation does not induce any notable overhead when applied to popular iOS applications.
【Keywords】: arm; control-flow integrity; software security
【Paper Link】 【Pages】:753-756
【Authors】: Shlomi Dolev ; Niv Gilboa ; Ofer Hermoni
【Abstract】: Traditional public key infrastructure is an example of basing the security of communication among users and servers on trusting a Certificate Authority (CA), which is a Trusted Authority (TA). A traditional, centralized CA or TA should only be involved in a setup stage for communication, or it risks becoming a bottleneck. Peer-to-peer assistance may replace the CA during the actual communication transactions. We introduce such assistants, which we call arbitrators. Arbitrators are semi-trusted entities that facilitate communication or business transactions. The communicating parties, users and servers, agree before a communication transaction on a set of arbitrators that they trust (reputation systems may support their choice). Then, the arbitrators receive resources, e.g., a deposit, and a service level agreement between participants such that the resources of a participant are returned if and only if the participant acts according to the agreement. We demonstrate the usage of arbitrators in the scope of conditional (positive) anonymity. A user may interact anonymously with a server as long as the terms for anonymous communication are honored. In case the server finds a violation of the terms, the server proves to the arbitrators that a violation took place and the arbitrators publish the identity of the user. Since the arbitrators may be corrupted, the scheme ensures that only a large enough set of arbitrators may reveal a user's identity, which is the deposited resource in the case of conditional anonymity.
【Keywords】: anonymous communication; arbitrators; certificate authority
【Paper Link】 【Pages】:757-760
【Authors】: Shlomi Dolev ; Niv Gilboa ; Marina Kopeetsky
【Abstract】: We propose a new and efficient scheme for broadcast encryption. A broadcast encryption system allows a broadcaster to send an encrypted message to a dynamically chosen subset RS, |RS|=n, of a given set of users, such that only users in this subset can decrypt the message. An important component of broadcast encryption schemes is revocation of users by the broadcaster, thereby updating the subset RS. Revocation may be either temporary, for a specific ciphertext, or permanent. We present the first public key broadcast encryption scheme that supports permanent revocation of users. Our scheme is fully collusion resistant. In other words, even if all the users in the network collude with a revoked user, the revoked user cannot decrypt messages without receiving new keys from the broadcaster. The procedure is based on Ciphertext-Policy Attribute-Based Encryption (CP-ABE). The overhead of our system is O(log n) in all major performance measures, including the length of private and public keys, the user's storage space, and the computational complexity of encryption and decryption.
【Keywords】: attribute based encryption; broadcast encryption
【Paper Link】 【Pages】:761-764
【Authors】: Carol J. Fung ; Quanyan Zhu ; Raouf Boutaba ; Tamer Basar
【Abstract】: Intrusion Detection Systems (IDSs) are designed to monitor network traffic and computer activities in order to alert users about suspicious intrusions. Collaboration among IDSs allows users to benefit from the collective knowledge and information from their collaborators and achieve more accurate intrusion detection. However, most existing collaborative intrusion detection networks rely on the exchange of intrusion data which raises privacy concerns. To overcome this problem, we propose SMURFEN: a knowledge-based intrusion detection network, which provides a platform for IDS users to effectively share their customized detection knowledge in an IDS community. An automatic knowledge propagation mechanism is proposed based on a decentralized two-level optimization problem formulation, leading to a Nash equilibrium solution which is proved to be scalable, incentive compatible, fair, efficient and robust.
【Keywords】: collaboration; decentralized optimization; game theory; incentive mechanisms; intrusion detection systems
【Paper Link】 【Pages】:765-768
【Authors】: Ma'ayan Gafny ; Asaf Shabtai ; Lior Rokach ; Yuval Elovici
【Abstract】: In this paper, we propose a new unsupervised approach for identifying suspicious access to sensitive relational data. In the proposed method, a tree-like model encapsulates the characteristics of the result-set (i.e., data) that the user normally accesses within each possible context. During the detection phase, result-sets are examined against the induced model and a similarity score is derived.
【Keywords】: data leakage; data misuse; insider threat; one class decision tree; unsupervised learning
【Paper Link】 【Pages】:769-772
【Authors】: Hongyu Gao ; Yan Chen ; Kathy Lee ; Diana Palsetia ; Alok N. Choudhary
【Abstract】: Online social networks (OSNs) are popular collaboration and communication tools for millions of users and their friends. Unfortunately, in the wrong hands, they are also effective tools for executing spam campaigns. In this paper, we present an online spam filtering system that can be deployed as a component of the OSN platform to inspect messages generated by users in real-time. We propose to reconstruct spam messages into campaigns for classification rather than examine them individually. Although campaign identification has been used for offline spam analysis, we apply this technique to aid the online spam detection problem with sufficiently low overhead. Accordingly, our system adopts a set of novel features that effectively distinguish spam campaigns. It is highly accurate and confidently drops messages classified as "spam" before they reach the intended recipients, thus protecting them from various kinds of fraud. In addition, the system achieves an average throughput of 1580 messages/sec and an average processing latency of 69.7 ms for each message. The high throughput and low latency guarantee that it will not become the bottleneck of the whole OSN platform.
【Keywords】: online social networks; spam
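The campaign-level classification idea can be sketched as follows: group messages by the URL they advertise and flag campaigns pushed by many distinct senders. This grouping rule and the sender threshold are invented for illustration; the paper's system uses a richer feature set over reconstructed campaigns.

```python
from collections import defaultdict

def cluster_campaigns(messages):
    """Group (sender, text, url) messages advertising the same URL
    into one campaign -- a crude stand-in for campaign reconstruction."""
    campaigns = defaultdict(list)
    for sender, text, url in messages:
        campaigns[url].append((sender, text))
    return campaigns

def looks_like_spam(campaign, sender_threshold=3):
    # A campaign pushed by many distinct accounts is suspicious.
    senders = {s for s, _ in campaign}
    return len(senders) >= sender_threshold
```

Classifying whole campaigns rather than single messages is what keeps per-message overhead low: the expensive decision is amortized over every message in the cluster.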
【Paper Link】 【Pages】:773-776
【Authors】: Xi Gong ; Ting Yu ; Adam J. Lee
【Abstract】: Reputation plays a critical role in managing trust in decentralized systems. Quite a few reputation-based trust functions have been proposed in the literature for many different application domains. However, one cannot always obtain all the information required by the trust evaluation process. For example, access control restrictions or high collection costs might limit the ability to gather all required records. Thus, one key question is how to analytically quantify the quality of scores computed using incomplete information. In this paper, we make a first effort to answer the above question by studying the following problem: given the existence of certain missing information, what are the worst and best trust scores (i.e., the bounds of trust) a target entity can be assigned? We formulate this problem based on a general model of reputation systems, and examine the monotonicity property of representative trust functions in the literature. We show that most existing trust functions are monotonic in terms of direct missing information about the target of a trust evaluation.
【Keywords】: missing information; reputation systems; trust functions
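For a simple monotonic trust function such as the mean of ratings in [0, 1], the worst- and best-case bounds under missing information follow by filling the unknown ratings with 0 or 1 respectively. This is only a minimal sketch of the kind of bounds the abstract describes; the paper's model of reputation systems is far more general.

```python
def trust_bounds(known_ratings, num_missing):
    """Bounds on a mean-based trust score when `num_missing` ratings
    (each in [0, 1]) are unobservable. Monotonicity of the mean makes
    the all-0 / all-1 completions the extremes."""
    n = len(known_ratings) + num_missing
    s = sum(known_ratings)
    worst = s / n                 # missing ratings assumed all 0
    best = (s + num_missing) / n  # missing ratings assumed all 1
    return worst, best
```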
【Paper Link】 【Pages】:777-780
【Authors】: Weili Han ; Zheran Fang ; Weifeng Chen ; Wenyuan Xu ; Chang Lei
【Abstract】: Policy driven management is widely used to manage networked resources and protect sensitive resources. Existing policy-driven management strategies rely heavily on policy administrators to specify and validate policies, which not only requires an in-depth understanding of policy languages and domain knowledge, but is also error-prone. To simplify the tasks of policy administration, this paper proposes a novel policy administration framework, named collaborative policy administration (CPA). Essentially, the idea is that applications with similar functionalities shall have similar policies. Thus, to specify or validate policies, CPA will examine policies already specified by other applications in the same category and perform collaborative recommendation. In this paper, we consider the Android system as a case study and show that CPA can strengthen Android security.
【Keywords】: android platform; policy making; policy validation
【Paper Link】 【Pages】:781-784
【Authors】: Weili Han ; Chenguang Shen ; Yuliang Yin ; Yun Gu ; Chen Chen
【Abstract】: Risk and benefit are two implicit key factors in determining accesses in secure information sharing. Recent research has shown that they can be explicitly quantified and used to improve the flexibility of information systems. This paper introduces the motivation for and a technical design of Quantified riSk and Benefit adaptive Access Control (QSBAC) to strengthen the security of information sharing. The paper also introduces the key issues in designing policies for QSBAC.
【Keywords】: QSBAC; quantified benefit; quantified risk; secure information sharing
【Paper Link】 【Pages】:785-788
【Authors】: Jun Hu ; Hongyu Gao ; Zhichun Li ; Yan Chen
【Abstract】: The prevalence of spam URLs in Internet services, such as email, social networks, blogs and online forums, has become a serious problem. These spam URLs host spam advertisements, phishing attempts, and malware, which are harmful to normal users. Existing URL blacklist approaches offer limited protection. Although recent machine learning based URL classification approaches demonstrate good accuracy and reasonable throughput, they are based on observations from existing spam URLs and struggle to detect new spam URLs when attackers employ new strategies. In this paper, we present CUD (Crowdsourcing for URL spam detection) as a supplement to existing detection tools. CUD leverages human intelligence for URL classification through crowdsourcing. CUD crawls existing user comments about spam URLs already on the Internet, and employs sentiment analysis from natural language processing to analyze the user comments automatically for detecting spam URLs. Since CUD does not use features directly associated with the URLs and their landing pages, it is more robust when attackers change their strategies. Through evaluation, we find up to 70% of URLs have user comments online. CUD achieves an accuracy of 86.8% in terms of true positive rate with a false positive rate of 0.9%. Moreover, about 75% of the spam URLs CUD detects are missed by other approaches. Therefore, CUD can be used as a good complement to other approaches.
【Keywords】: crowdsourcing; spam detection
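A crude stand-in for CUD's sentiment step: score crawled comments against small positive and negative word lists and flag the URL when negative mentions dominate. The lexicons and the ratio threshold here are invented for illustration; the paper applies proper NLP sentiment analysis rather than keyword matching.

```python
# Hypothetical mini-lexicons standing in for a real sentiment model.
NEGATIVE = {"scam", "spam", "phishing", "malware", "fake"}
POSITIVE = {"legit", "useful", "safe", "recommended"}

def score_comments(comments):
    """Count comments with at least one negative / positive cue word."""
    neg = sum(any(w in c.lower() for w in NEGATIVE) for c in comments)
    pos = sum(any(w in c.lower() for w in POSITIVE) for c in comments)
    return neg, pos

def is_spam_url(comments, ratio=0.5):
    # Flag the URL when negative opinions dominate the crawled comments.
    neg, pos = score_comments(comments)
    total = neg + pos
    return total > 0 and neg / total > ratio
```

Because the signal comes from human commentary rather than the URL or its landing page, a spammer changing hosting or obfuscation tricks does not directly defeat it, which is the robustness argument the abstract makes.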
【Paper Link】 【Pages】:789-792
【Authors】: Ashar Javed
【Abstract】: Modern web applications combine content from several sources (with varying security characteristics), and incorporate a significant portion of user-supplied content to enrich the browsing experience. However, the de facto web protection model, the same-origin policy (SOP), has not adequately evolved to manage the security consequences of this additional complexity. As a result, web applications are subject to a broad sphere of attacks (cross-site scripting, cross-site request forgery and others). The fundamental problem is the failure of access control. To solve this, in this work we present DIEGO, a new fine-grained access control model for web browsers. Our overall design approach is to combine the mandatory access control (MAC) principles of operating systems with a tag pairing isolation technique in order to provide stealthy protection. To support backwards compatibility, DIEGO defaults to the same-origin policy (SOP) for web applications.
【Keywords】: DIEGO; access control; browser; fine-grained; same-origin policy
【Paper Link】 【Pages】:793-796
【Authors】: Arjan Jeckmans ; Qiang Tang ; Pieter H. Hartel
【Abstract】: Currently, none of the existing online social networks (OSNs) enables its users to make new friends without revealing their private information. This leaves the users in a vulnerable position when searching for new friends. We propose a solution which enables a user to compute her profile similarity with another user in a privacy-preserving way. Our solution is designed for a realistic OSN environment, where a pair of users is unlikely to be online at the same time.
【Keywords】: matching; online social network; privacy
【Paper Link】 【Pages】:797-800
【Authors】: Ünal Koçabas ; Ahmad-Reza Sadeghi ; Christian Wachsmann ; Steffen Schulz
【Abstract】: We present the design and implementation of a lightweight remote attestation scheme for embedded devices that combines software attestation with Physically Unclonable Functions (PUFs). In contrast to standard software attestation, our scheme (i) is secure against collusion attacks to forge the attestation checksum, (ii) allows for the authentication and attestation of remote provers, and (iii) enables the detection of hardware attacks on the prover.
【Keywords】: embedded devices; physically unclonable functions (PUFs); remote attestation; software-based attestation
【Paper Link】 【Pages】:801-804
【Authors】: Yao Liu ; Peng Ning
【Abstract】: Wireless link signature is a physical layer authentication mechanism, which uses the multi-path effect between a transmitter and a receiver to provide authentication of wireless signals. We identify a new attack, called the mimicry attack, against the wireless link signature scheme in [7]. Past work assumed that an attacker cannot "spoof" an arbitrary link signature and that the attacker will not have the same link signature at the receiver unless it is at exactly the same location as the legitimate transmitter. However, we show that an attacker can forge an arbitrary link signature as long as it knows the legitimate signal at the receiver's location; the attacker does not have to be at exactly the same location as the legitimate transmitter in order to forge its link signature.
【Keywords】: attacks; link signature; wireless security
【Paper Link】 【Pages】:805-808
【Authors】: Stefano Maggi ; Alberto Volpatto ; Simone Gasparini ; Giacomo Boracchi ; Stefano Zanero
【Abstract】: Touchscreen devices increase the risk of shoulder surfing to such an extent that attackers could steal sensitive information simply by following the victim and observing his or her portable device. We underline this concern by proposing an automatic shoulder surfing attack against modern touchscreen keyboards that display magnified keys in predictable positions. We demonstrate this attack against the Apple iPhone - although it can work with other layouts and different devices - and show that it recognizes up to 97.07% (91.03% on average) of the keystrokes, with only 1.15% of errors, at 37 to 51 keystrokes per minute: about eight times faster than a human analyzing a recorded video. Our attack, described thoroughly in [2], accurately recovers the sequence of keystrokes input by the user. The attack described in [1], which targeted desktop scenarios and thus worked with very restrictive settings, is similar in spirit to ours. However, as it assumes that the camera and the target keyboard are both in a fixed, perpendicular position, it cannot suit mobile settings, characterized by a moving target and skewed, rotated viewpoints. Our attack, instead, requires no particular settings and even allows for natural movements of both the target device and the shoulder surfer's camera. In addition, our attack yields accurate output without any grammar or syntax checks, so it can detect large context-free text or non-dictionary words. In summary: - We are the first to study the practical risks brought forth by mainstream touchscreen keyboards. - We design a practical attack that detects keystrokes on modern touchscreen keyboards: the attacker need not stand exactly behind the victim nor observe the screen perpendicularly. Our attack is robust to occlusions (e.g., typing fingers), thanks to our efficient filtering technique that validates detected keys and reconstructs keystroke sequences accurately.
【Keywords】: computer vision; shoulder surfing
【Paper Link】 【Pages】:809-812
【Authors】: Shah Mahmood ; Yvo Desmedt
【Abstract】: In this paper we provide a preliminary analysis of Google+ privacy. We identified that Google+ shares photo metadata with users who can access the photograph and discuss its potential impact on privacy. We also identified that Google+ encourages the provision of other names including maiden name, which may help criminals performing identity theft. We show that Facebook lists are a superset of Google+ circles, both functionally and logically, even though Google+ provides a better user interface. Finally we compare the use of encryption and depth of privacy control in Google+ versus in Facebook.
【Keywords】: Facebook; Google+; privacy; social network
【Paper Link】 【Pages】:813-816
【Authors】: Nayantara Mallesh ; Matthew K. Wright
【Abstract】: While it is important to design anonymity systems to be robust against attacks, it is also important to provide good performance to users. We explore ways to improve the security and performance of anonymity systems by building both security and performance properties into the network topology. In particular, we study an expander graph based network topology and apply link-based performance metrics in order to build the topology graph. Such a network can be constructed to have enhanced performance and similar security properties to restricted route topologies with random links. Results show that a sparse, D-regular expander graph topology provides nearly the same security, as measured by the likelihood of an incoming stream exiting through any node in the network, as with a fully-connected graph. Further, when the expander graph is constructed with a bias towards faster links, there is a considerable gain in performance without much loss of security.
【Keywords】: anonymous communications; network topology; online privacy; performance-enhanced overlay networks
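The biased-topology idea can be sketched by having each node select a fixed number of neighbors with probability proportional to link speed. Note that this is not a true D-regular expander construction (which requires a careful procedure); it only illustrates the bias-toward-faster-links step the abstract describes, with an invented speed map.

```python
import random

def biased_neighbors(nodes, speed, d, seed=0):
    """For each node, pick d distinct neighbors with probability
    proportional to their (hypothetical) link speed -- a rough
    stand-in for a speed-biased restricted-route topology."""
    rng = random.Random(seed)
    topo = {}
    for v in nodes:
        others = [u for u in nodes if u != v]       # no self-loops
        weights = [speed[u] for u in others]
        chosen = set()
        while len(chosen) < d:                      # sample until d distinct
            chosen.add(rng.choices(others, weights=weights)[0])
        topo[v] = chosen
    return topo
```

The security question the paper measures on such a graph is how uniformly an incoming stream can exit across nodes, compared against a fully-connected topology.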
【Paper Link】 【Pages】:817-820
【Authors】: Ramon Francisco Pacquiao Mejia ; Yuichi Kaji ; Hiroyuki Seki
【Abstract】: Role-Based Access Control (RBAC) is a powerful and versatile access control system for large-scale access control management within an organization. Most studies so far consider RBAC models that have a single consistent access control policy, which implicitly confine an RBAC system to one organization. However, many real-world requirements of access control span multiple organizations; thus, there is a need to design scalable RBAC models for such use cases. We propose a trans-organizational RBAC model that enables access control within and across organizations. A formal definition of trans-organizational RBAC is presented. We show that the model is scalable in a multi-organization setup, and does not require the creation of federations. Finally, a security issue in the model is identified and possible approaches to address this are discussed.
【Keywords】: information security; role-based access control; service coalition; trans-organizational role
【Paper Link】 【Pages】:821-824
【Authors】: Mohamed Nabeel ; Elisa Bertino
【Abstract】: Attribute based systems enable fine-grained access control among a group of users each identified by a set of attributes. Secure collaborative applications need such flexible attribute based systems for managing and distributing group keys. However, current group key management schemes are not well designed to manage group keys based on the attributes of the group members. In this poster, we propose a novel key management scheme that allows users whose attributes satisfy a certain policy to derive the group key. Our scheme efficiently supports rekeying operations when the group changes due to joins or leaves of group members. During a rekey operation, the private information issued to existing members remains unaffected and only the public information is updated to change the group key. Our scheme is expressive; it is able to support any monotonic policy over a set of attributes. Our scheme is resistant to collusion attacks; group members are unable to pool their attributes and derive the group key which they cannot derive individually.
【Keywords】: attribute based; broadcast encryption; key management
【Paper Link】 【Pages】:825-828
【Authors】: Rishab Nithyanand ; Radu Sion ; John Solis
【Abstract】: Physical Unclonable Functions (PUFs) are physical systems whose responses to input stimuli (i.e., challenges) are easy to measure but difficult to clone. The unclonability property is due to the accepted hardness of replicating the multitude of uncontrollable manufacturing characteristics and makes PUFs useful in solving problems such as authentication, software protection/licensing, and certified execution. In this abstract, we claim that any multi-core computer is usable as a timing-PUF and can be measured via simple benchmarking tools (i.e., no specialized hardware required). We investigate several characteristics of standard off-the-shelf computers and present initial experimental results justifying our claim. Additionally, we argue that PUFs which are intrinsically involved in computations over sensitive data are preferable to peripheral device PUFs -- especially for intellectual property protection and continuous device authentication.
【Keywords】: authentication; hardware; physical unclonable functions; software protection
【Paper Link】 【Pages】:829-832
【Abstract】: Software suffers from security vulnerabilities and, to the best of our knowledge, no silver bullet exists to make all software absolutely secure. Network software applications, e.g. network servers, often have a monolithic architecture for historic reasons. Therefore, the whole application stays in a single protection domain, and a vulnerability in any part would jeopardize the whole application. The principle of least privilege provides an alternative way to design and implement software with better security. uPro is a software compartmentalization tool supporting fine-grained and flexible configuration. The configuration is provided by the developers and specifies the protection domain partition of the software application and the corresponding privilege of each partition. The configuration file is simple and extensible. Based on the configuration file, uPro loads all the protection domains into a single address space and places them in non-interleaved memory regions. The protection domain separation is achieved at the user level, so uPro is entirely OS-neutral. uPro supports concurrent execution. The execution units and the protection domains are orthogonal, and the execution units are implemented as threads, so their context-switch time in uPro is low compared to a process-based implementation.
【Keywords】: compartmentalization; configuration; security
【Paper Link】 【Pages】:833-836
【Authors】: Peng Liao ; Xiang Cui ; Shuhao Li ; Chaoge Liu
【Abstract】: In this paper, we introduce the design of Hybot, a botnet that can recover its command and control (C&C) channel within a tolerable delay even when most of its critical resources are destroyed. Hybot exploits a hybrid C&C structure, combining hybrid P2P and URL Flux, to ensure both robustness and effectiveness. Our preliminary results show that the design of Hybot is feasible, consequently posing a potential threat to Internet security. The goal of our work is to increase the understanding of advanced botnets, which will promote the development of more efficient countermeasures.
【Keywords】: C&C; botnet; hybrid; recoverable
【Paper Link】 【Pages】:837-840
【Authors】: Henning Perl ; Michael Brenner ; Matthew Smith
【Abstract】: Since the discovery of a fully homomorphic cryptographic scheme by Gentry, a number of different schemes have been proposed that apply the bootstrap technique of Gentry's original approach. However, to date no implementation of fully homomorphic encryption has been publicly released. This poster presents a working implementation of the Smart-Vercauteren scheme that will be freely available and gives substantial implementation hints.
【Keywords】: homomorphic encryption; implementation
【Paper Link】 【Pages】:841-844
【Authors】: Muhammad Rizwan Asghar ; Giovanni Russello ; Bruno Crispo
【Abstract】: The enforcement of security policies is an open challenge in environments where the IT infrastructure has been outsourced to a third party. Although the outsourcing allows companies to gain economical benefits and scalability, it imposes the threat of leaking the private information about the sensitive data managed and processed by untrusted parties. In this work, we propose an architecture to enforce Role-Based Access Control (RBAC) style of authorisation policies in outsourced environments. As a proof of concept, we have implemented a demo and measured the performance overhead incurred by the proposed architecture.
【Keywords】: data outsourcing; encrypted RBAC; encrypted policy enforcement; privacy; security
【Paper Link】 【Pages】:845-848
【Authors】: Mohammad Saiful Islam ; Mehmet Kuzu ; Murat Kantarcioglu
【Abstract】: The advent of cloud computing has ushered in an era of mass data storage in remote servers. Remote data storage offers reduced data management overhead for data owners in a cost effective manner. Sensitive documents, however, need to be stored in encrypted form due to security concerns. Encrypted storage, in turn, makes it difficult to search over the stored documents, posing a major barrier to the selective retrieval of encrypted documents from remote servers. Various protocols have been proposed for keyword search over encrypted data (commonly referred to as searchable encryption) to address this issue. Oblivious RAM type protocols offer secure search over encrypted data, but are too expensive to be used in practical applications. Unfortunately, all of the symmetric key based encryption protocols leak data access patterns for efficiency reasons. In this poster, we are the first to analyze the effects of access pattern disclosure. To that end, we introduce a novel attack model that exploits access pattern leakage to disclose a significant amount of sensitive information using a modicum of prior knowledge. We also present a preliminary set of empirical results on a real dataset to justify our claim.
【Keywords】: searchable encryption
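The access-pattern attack can be illustrated with a toy inference: an attacker with background knowledge of which documents contain which keyword matches each opaque query's observed result set against that knowledge. This exact-match sketch is purely illustrative; the paper's attack model works with partial, noisy prior knowledge.

```python
def infer_keywords(observed, background):
    """Guess the plaintext keyword behind each encrypted query.
    `observed` maps opaque query ids to the set of document ids the
    server returned; `background` maps keywords to the document sets
    the attacker believes contain them."""
    guesses = {}
    for qid, docs in observed.items():
        for kw, kw_docs in background.items():
            if docs == kw_docs:        # access pattern uniquely identifies kw
                guesses[qid] = kw
    return guesses
```

The encryption never has to be broken: the leakage comes entirely from which ciphertexts each query touches.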
【Paper Link】 【Pages】:849-852
【Authors】: Axel Schröpfer ; Florian Kerschbaum
【Abstract】: Secure computation, e.g. using Yao's garbled circuit protocol, allows two parties to compute arbitrary functions without disclosing their inputs. A profitable application of secure computation is business optimization. It is characterized by a monetary benefit for all participants and a high confidentiality of their respective input data. In most instances the consequences of input disclosure, e.g. loss of bargaining power, outweigh the benefits of collaboration. Therefore these optimizations are currently not performed in industrial practice. Our demo shows such an optimization as a secure computation. The joint economic lot size (JELS) is the optimal order quantity between a buyer and supplier. We implemented Yao's protocol in JavaScript, such that it can be executed using two web browsers. This has the additional benefit that the software can be offered as a service (SaaS) and can be easily integrated with other SaaS offerings, e.g. using mash-up technology.
【Keywords】: JavaScript; business optimization; mash-up; secure computation
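What the two browsers jointly evaluate under Yao's protocol is, in the clear, a small arithmetic formula. One simplified textbook form of the joint economic lot size is sketched below; the demo's exact cost model may differ, and in the actual system neither party ever sees the other's cost parameters.

```python
from math import sqrt

def jels(demand, order_cost_buyer, setup_cost_supplier,
         holding_buyer, holding_supplier):
    """Plaintext joint economic lot size (one simplified EOQ-style form):
    the order quantity minimizing the parties' combined setup/ordering
    and holding costs. The secure-computation demo evaluates such a
    function inside a garbled circuit instead of in the clear."""
    return sqrt(2 * demand * (order_cost_buyer + setup_cost_supplier)
                / (holding_buyer + holding_supplier))
```

The point of the secure version is exactly that the buyer's ordering/holding costs and the supplier's setup/holding costs stay private while both learn the jointly optimal quantity.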
【Paper Link】 【Pages】:853-856
【Authors】: Chao Shen ; Zhongmin Cai ; Xiaohong Guan
【Abstract】: Mouse dynamics is the process of verifying the identity of computer users on the basis of their mouse operating characteristics, which are derived from the movement and click events. Some researchers have explored this domain and reported encouraging results, but few focused on applicability in a realistic setting. Specifically, many of the existing approaches require an impractically long verification time to achieve a reasonable accuracy. In this work, we investigate the mouse dynamics of 26 subjects under a tightly-controlled environment. Using procedural features such as speed and acceleration curves to more accurately characterize mouse activity, and adopting distance metrics to overcome the within-class variability, we achieved a promising performance with a false-acceptance rate of 8.87%, a false-rejection rate of 7.16%, and an average verification time of 11.8 seconds. We find that while this level of accuracy comes close to meeting the requirements of identity verification, a tradeoff must be made between security and user acceptability. We also suggest opportunities for further investigation through additional, controlled experimental environments.
【Keywords】: authentication; human computer interaction; identity verification; mouse dynamics biometric
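A minimal sketch of the procedural-feature idea: derive a speed curve from timestamped mouse samples and accept a session if its curve is close, under a simple distance metric, to the enrolled profile. The feature set, distance metric, and threshold here are placeholders, not the paper's.

```python
from math import dist  # Python 3.8+

def speed_curve(events):
    """Per-segment speeds from (t, x, y) mouse samples -- a toy
    'procedural feature' in the spirit of the paper's speed curves."""
    out = []
    for (t0, *p0), (t1, *p1) in zip(events, events[1:]):
        out.append(dist(p0, p1) / (t1 - t0))
    return out

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def verify(profile, sample_events, threshold):
    # Accept only if the sample's speed curve stays near the enrolled one;
    # the distance metric is what absorbs within-user variability.
    return manhattan(profile, speed_curve(sample_events)) <= threshold
```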
【Paper Link】 【Pages】:857-860
【Authors】: Patrick Stewin ; Jean-Pierre Seifert ; Collin Mulliner
【Abstract】: Malware residing in dedicated isolated hardware containing an auxiliary processor, such as present in network, video, and CPU chipsets, is an emerging security threat. To attack the host system, this kind of malware uses the direct memory access (DMA) functionality. By utilizing DMA, the host system can be fully compromised, bypassing any kind of kernel-level protection. Traditional anti-virus software is not capable of detecting this kind of malware, since the auxiliary systems are completely isolated from the host CPU. In this work we present a novel method that is capable of detecting this kind of malware. To understand the properties of such malware we evaluated a prototype that attacks the host via DMA. Our prototype is executed in the chipset of an x86 architecture. We identified key properties of such malware that are crucial for our detection method. Our detection mechanism is based on monitoring the side effects of rogue DMA usage performed by the malware. We believe that our detection mechanism is general and a first step in the detection of malware in dedicated isolated hardware.
【Keywords】: Intel active management technology (IAMT); dedicated hardware; manageability engine (ME); northbridge; rootkit
【Paper Link】 【Pages】:861-864
【Authors】: Pengfei Sun ; Qingni Shen ; Ying Chen ; Zhonghai Wu ; Cong Zhang ; Anbang Ruan ; Liang Gu
【Abstract】: Load balancing has been widely used in the field of Cloud Computing, where providers ensure that no existing resources sit idle while other physical machines are heavily utilized. However, tenants' VMs may be migrated to a physical machine hosting potential attackers, who may use memory caches as side channels. The security problem of co-residency on the same physical machine is therefore an important barrier to enterprise adoption of cloud computing. We present a new secure load balancing architecture - Load Balancing based on Multilateral Security (LBMS) - which, through indexing and negotiation, can automatically migrate tenants' VMs to a suitably secure physical machine when peak load is reached. We are implementing our prototype on CloudSim, a Cloud computing simulator. Our architecture makes an effort to avoid potential attacks when VMs are migrated to a physical machine due to load balancing.
【Keywords】: IAAS; cloud computing; co-residency; load balancing; multilateral security; negotiation
【Paper Link】 【Pages】:865-868
【Authors】: Daniel Trivellato ; Nicola Zannone ; Sandro Etalle
【Abstract】: Systems of Systems (SoS) are dynamic, distributed coalitions of autonomous and heterogeneous systems that collaborate to achieve a common goal. While offering several advantages in terms of scalability and flexibility, the SoS paradigm has a strong impact on system interoperability and on the security requirements of collaborating parties. In this demo we present a prototype implementation of POLIPO, a security framework that combines context-aware access control with trust management and ontology-based services to protect information in SoS.
【Keywords】: protection of information; security framework; system interoperability; systems of systems
【Paper Link】 【Pages】:869-872
【Authors】: Xiaoxin Wu ; Lei Xu ; Xinwen Zhang
【Abstract】: We propose CL-PRE, a certificateless proxy re-encryption scheme for data sharing with the cloud. In CL-PRE, a data owner encrypts shared data in the cloud with an encryption key, which is further encrypted and transformed by the cloud, and then distributed to legitimate recipients for access control. Uniquely, the cloud-based transformation leverages re-encryption keys derived from the private key of the data owner and the public keys of the recipients, and eliminates both the key escrow problem of identity based cryptography and the need for certificates. While preserving data and key privacy from the semi-trusted cloud, CL-PRE maximally leverages cloud resources to reduce the computing and communication cost for the data owner. We implement CL-PRE and evaluate its security and performance.
【Keywords】: access control; certificateless public key cryptography; cloud computing; cloud storage; proxy re-encryption
【Paper Link】 【Pages】:873-876
【Authors】: Zhi Yang ; Lihua Yin ; Miyi Duan ; Shuyuan Jin
【Abstract】: Decentralized information flow control (DIFC) is a recent important innovation with flexible mechanisms to improve the availability of traditional information flow models. However, the flexibility of DIFC models also makes specifying and managing DIFC policies a challenging problem. Formal policy verification techniques can improve the current state of the art of policy specification and management. We show that the policy verification problems for the main DIFC systems are in general NP-hard, and that several subcases remain NP-complete. We also propose a model checking approach to solve these problems. Experiments are presented to show that this approach is effective.
【Keywords】: DIFC; NP-hard; formal method; model checking; verification
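The flow rules that such verification reasons about can be illustrated by the classic DIFC secrecy check: data may flow from one label to another only if no secrecy tag is dropped along the way. A minimal sketch follows; real DIFC systems add ownership, capabilities, and declassification, all of which this omits.

```python
def can_flow(src_secrecy, dst_secrecy):
    """Classic DIFC secrecy rule (no declassification): a flow is legal
    only if the destination label carries every secrecy tag of the
    source, i.e. src_secrecy is a subset of dst_secrecy."""
    return src_secrecy <= dst_secrecy  # subset test on frozensets

a = frozenset({"alice"})
ab = frozenset({"alice", "bob"})
assert can_flow(a, ab)       # adding tags is always safe
assert not can_flow(ab, a)   # dropping a tag would leak information
```

Policy verification asks global questions over all reachable label assignments under such local rules, which is where the NP-hardness the abstract mentions comes from.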
【Paper Link】 【Pages】:877-880
【Authors】: Ji Zhu ; Mudhakar Srivatsa
【Abstract】:
【Keywords】: quantitative information flow
【Paper Link】 【Pages】:881-884
【Authors】: Yan Zhu ; Hongxin Hu ; Gail-Joon Ahn ; Xiaorui Gong ; Shimin Chen
【Abstract】: There has been little work that explores cryptographic temporal constraints, especially for data sharing in cloud computing. In this paper, we present a temporal attribute-based encryption (TABE) scheme to implement temporal constraints for data access control in clouds. The scheme has constant-size ciphertexts and private keys, and nearly linear time complexity. In addition, we implement a prototype system to evaluate our proposed approach. Our experimental results not only validate the effectiveness of our scheme and algorithms, but also show that our scheme performs better for integer comparison than BSW's bitwise comparison scheme.
【Keywords】: access control; attribute-based encryption; cryptography; integer comparison; temporal