32nd IEEE Symposium on Security and Privacy, S&P 2011, 22-25 May 2011, Berkeley, California, USA. IEEE Computer Society 【DBLP Link】
【Paper Link】 【Pages】:3-18
【Authors】: Andrew M. White ; Austin R. Matthews ; Kevin Z. Snow ; Fabian Monrose
【Abstract】: In this work, we unveil new privacy threats against Voice-over-IP (VoIP) communications. Although prior work has shown that the interaction of variable bit-rate codecs and length-preserving stream ciphers leaks information, we show that the threat is more serious than previously thought. In particular, we derive approximate transcripts of encrypted VoIP conversations by segmenting an observed packet stream into subsequences representing individual phonemes and classifying those subsequences by the phonemes they encode. Drawing on insights from the computational linguistics and speech recognition communities, we apply novel techniques for unmasking parts of the conversation. We believe our ability to do so underscores the importance of designing secure (yet efficient) ways to protect the confidentiality of VoIP conversations.
【Keywords】: Internet telephony; computer network security; Hookt on Fon-iks; Voice-over-IP; bitrate codecs; computational linguistics; encrypted VoIP conversations; packet stream; phonotactic reconstruction; privacy threats; speech recognition communities; Codecs; Cryptography; Hidden Markov models; Pragmatics; Privacy; Speech; Speech coding
【Paper Link】 【Pages】:19-31
【Authors】: Elie Bursztein ; Romain Beauxis ; Hristo S. Paskov ; Daniele Perito ; Celine Fabry ; John C. Mitchell
【Abstract】: CAPTCHAs, which are automated tests intended to distinguish humans from programs, are used on many web sites to prevent bot-based account creation and spam. To avoid imposing undue user friction, CAPTCHAs must be easy for humans and difficult for machines. However, the scientific basis for successful CAPTCHA design is still emerging. This paper examines the widely used class of audio CAPTCHAs based on distorting non-continuous speech with certain classes of noise and demonstrates that virtually all current schemes, including ones from Microsoft, Yahoo, and eBay, are easily broken. More generally, we describe a set of fundamental techniques, packaged together in our Decaptcha system, that effectively defeat a wide class of audio CAPTCHAs based on non-continuous speech. Decaptcha's performance on actual observed and synthetic CAPTCHAs indicates that such speech CAPTCHAs are inherently weak and, because of the importance of audio for various classes of users, alternative audio CAPTCHAs must be developed.
【Keywords】: Web sites; audio signal processing; security of data; speech processing; bot-based account creation; noise-based non-continuous audio CAPTCHA; web sites; Cepstrum; Discrete Fourier transforms; Humans; Noise; Semantics; Speech; Training
【Paper Link】 【Pages】:32-46
【Authors】: Hugh Wimberly ; Lorie M. Liebrock
【Abstract】: Choosing the security architecture and policies for a system is a demanding task that must be informed by an understanding of user behavior. We investigate the hypothesis that adding visible security features to a system increases user confidence in the security of a system and thereby causes users to reduce how much effort they spend in other security areas. In our study, 96 volunteers each created a pair of accounts, one secured only by a password and one secured by both a password and a fingerprint reader. Our results strongly support our hypothesis: on average, when using the fingerprint reader, users created passwords that would take one three-thousandth as long to break, thereby potentially negating the advantage two-factor authentication could have offered.
【Keywords】: authorisation; fingerprint identification; fingerprint authentication; fingerprint reader; security architecture; system security reduction; user behavior; user confidence; Authentication; Complexity theory; Entropy; Frequency measurement; Markov processes; risk compensation; security policy; two-factor authentication; user study
【Paper Link】 【Pages】:49-63
【Authors】: Adam Waksman ; Simha Sethumadhavan
【Abstract】: Hardware components can contain hidden backdoors, which can be enabled with catastrophic effects or for ill-gotten profit. These backdoors can be inserted by a malicious insider on the design team or a third-party IP provider. In this paper, we propose techniques that allow us to build trustworthy hardware systems from components designed by untrusted designers or procured from untrusted third-party IP providers. We present the first solution for disabling digital, design-level hardware backdoors. The principle is that rather than try to discover the malicious logic in the design -- an extremely hard problem -- we make the backdoor design problem itself intractable to the attacker. The key idea is to scramble inputs that are supplied to the hardware units at runtime, making it infeasible for malicious components to acquire the information they need to perform malicious actions. We show that the proposed techniques cover the attack space of deterministic, digital HDL backdoors, provide probabilistic security guarantees, and can be applied to a wide variety of hardware components. Our evaluation with the SPEC 2006 benchmarks shows negligible performance loss (less than 1% on average) and that our techniques can be integrated into contemporary microprocessor designs.
【Keywords】: Hardware; Hardware design languages; Microprocessors; Nonvolatile memory; Security; System-on-a-chip; Testing; backdoors; hardware; obfuscation; performance; security; triggers
【Paper Link】 【Pages】:64-77
【Authors】: Cynthia Sturton ; Matthew Hicks ; David Wagner ; Samuel T. King
【Abstract】: In previous work, Hicks et al. proposed a method called Unused Circuit Identification (UCI) for detecting malicious backdoors hidden in circuits at design time. The UCI algorithm essentially looks for portions of the circuit that go unused during design-time testing and flags them as potentially malicious. In this paper we construct circuits that have malicious behavior, but that would evade detection by the UCI algorithm and still pass design-time test cases. To enable our search for such circuits, we define one class of malicious circuits and perform a bounded exhaustive enumeration of all circuits in that class. Our approach is simple and straightforward, yet it proves to be effective at finding circuits that can thwart UCI. We use the results of our search to construct a practical attack on an open-source processor. Our malicious backdoor allows any user-level program running on the processor to enter supervisor mode through the use of a secret "knock". We close with a discussion on what we see as a major challenge facing any future design-time malicious hardware detection scheme: identifying a sufficient class of malicious circuits to defend against.
【Keywords】: invasive software; UCI; design-time testing; malicious behavior; malicious circuits; malicious hardware; malicious hardware detection scheme; open-source processor; unused circuit identification; user-level program; Algorithm design and analysis; Hardware; Logic gates; Open source software; Security; Testing; Wires; attack; hardware; security
【Paper Link】 【Pages】:81-95
【Authors】: Ryan Henry ; Ian Goldberg
【Abstract】: Anonymous communications networks, such as Tor, help to solve the real and important problem of enabling users to communicate privately over the Internet. However, in doing so, anonymous communications networks introduce an entirely new problem for the service providers - such as websites, IRC networks or mail servers - with which these users interact. In particular, since all anonymous users look alike, there is no way for the service providers to hold individual misbehaving anonymous users accountable for their actions. Recent research efforts have focused on using anonymous blacklisting systems (which are sometimes called anonymous revocation systems) to empower service providers with the ability to revoke access from abusive anonymous users. In contrast to revocable anonymity systems, which enable some trusted third party to deanonymize users, anonymous blacklisting systems provide users with a way to authenticate anonymously with a service provider, while enabling the service provider to revoke access from any users that misbehave, without revealing their identities. In this paper, we introduce the anonymous blacklisting problem and survey the literature on anonymous blacklisting systems, comparing and contrasting the architecture of various existing schemes, and discussing the tradeoffs inherent with each design. The literature on anonymous blacklisting systems lacks a unified set of definitions; each scheme operates under different trust assumptions and provides different security and privacy guarantees. Therefore, before we discuss the existing approaches in detail, we first propose a formal definition for anonymous blacklisting systems, and a set of security and privacy properties that these systems should possess.
We also outline a set of new performance requirements that anonymous blacklisting systems should satisfy to maximize their potential for real-world adoption, and give formal definitions for several optional features already supported by some schemes in the literature.
【Keywords】: Internet; computer network security; IRC networks; Internet; abusive anonymous users; anonymous blacklisting systems; anonymous communications networks; anonymous revocation systems; formalizing anonymous blacklisting systems; mail servers; privacy properties; real-world adoption; service provider; service providers; trust assumptions; Authentication; Internet; Privacy; Protocols; Relays; Resistance; anonymity; anonymous blacklisting; authentication; privacy enhancing technologies; privacy-enhanced revocation
【Paper Link】 【Pages】:96-111
【Authors】: Michael Becher ; Felix C. Freiling ; Johannes Hoffmann ; Thorsten Holz ; Sebastian Uellenbeck ; Christopher Wolf
【Abstract】: We are currently moving from the Internet society to a mobile society where more and more access to information is done by previously dumb phones. For example, the number of mobile phones using a full-blown OS has risen by nearly 200% from Q3/2009 to Q3/2010. As a result, mobile security is no longer immanent, but imperative. This survey paper provides a concise overview of mobile network security, attack vectors using the back-end system and the web browser, but also the hardware layer and the user as attack enabler. We show differences and similarities between "normal" security and mobile security, and draw conclusions for further research opportunities in this area.
【Keywords】: Internet; mobile computing; security of data; Internet society; Web browser; bolts; mobile devices; mobile network security; nuts; Computers; Mobile communication; Mobile computing; Operating systems; Security; Smart phones; mobile security; smartphones; survey
【Paper Link】 【Pages】:115-130
【Authors】: Arjun Guha ; Matthew Fredrikson ; Benjamin Livshits ; Nikhil Swamy
【Abstract】: Popup blocking, form filling, and many other features of modern web browsers were first introduced as third-party extensions. New extensions continue to enrich browsers in unanticipated ways. However, powerful extensions require capabilities, such as cross-domain network access and local storage, which, if used improperly, pose a security risk. Several browsers try to limit extension capabilities, but an empirical survey we conducted shows that many extensions are over-privileged under existing mechanisms. This paper presents IBEX, a new framework for authoring, analyzing, verifying, and deploying secure browser extensions. Our approach is based on using type-safe, high-level languages to program extensions against an API providing access to a variety of browser features. We propose using Datalog to specify fine-grained access control and dataflow policies to limit the ways in which an extension can use this API, thus restricting its privilege over security-sensitive web content and browser resources. We formalize the semantics of policies in terms of a safety property on the execution of extensions and develop a verification methodology that allows us to statically check extensions for policy compliance. Additionally, we provide visualization tools to assist with policy analysis, and compilers to translate extension source code to either .NET bytecode or JavaScript, facilitating cross-browser deployment of extensions. We evaluate our work by implementing and verifying NumExt extensions with a diverse set of features and security policies. We deploy our extensions in Internet Explorer, Chrome, Firefox, and a new experimental HTML5 platform called C3. In so doing, we demonstrate the versatility and effectiveness of our approach.
【Keywords】: application program interfaces; data visualisation; online front-ends; program compilers; security of data; API; access control; browser extensions; compilers; data log; high-level languages; policy analysis; program extensions; security-sensitive Web content; verified security; visualization tools; Browsers; Fires; History; Internet; Security; Web pages; extensions; policy languages; security; type system; verification; web browsers
【Paper Link】 【Pages】:131-146
【Authors】: Matthew Fredrikson ; Benjamin Livshits
【Abstract】: We present RePriv, a system that combines the goals of privacy and content personalization in the browser. RePriv discovers user interests and shares them with third parties, but only with the explicit permission of the user. We demonstrate how always-on user interest mining can effectively infer user interests in a real browser. We go on to discuss an extension framework that allows third-party code to extract and disseminate more detailed information, as well as language-based techniques for verifying the absence of privacy leaks in this untrusted code. To demonstrate the effectiveness of our model, we present RePriv extensions that perform personalization for Netflix, Twitter, Bing, and GetGlue. This paper evaluates important aspects of RePriv in realistic scenarios. We show that RePriv's default in-browser mining can be done with no noticeable overhead to normal browsing, and that the results it produces converge quickly. We demonstrate that RePriv personalization yields higher quality results than those that may be obtained about the user from public sources. We then go on to show similar results for each of our case studies: that RePriv enables high-quality personalization, as shown by case studies in news and search result personalization that we evaluated on thousands of instances, and that the performance impact each case has on the browser is minimal. We conclude that personalized content and individual privacy on the web are not mutually exclusive.
【Keywords】: data privacy; online front-ends; social networking (online); Bing; Get Glue; Netflix; RePriv; RePriv extensions; Twitter; content personalization; extension framework; inbrowser privacy; language based techniques; privacy personalization; public sources; reimagining content personalization; untrusted code; Advertising; Browsers; Data mining; History; Privacy; Taxonomy; Web sites; Personalization; Privacy; Software Verification; Web Applications
【Paper Link】 【Pages】:147-161
【Authors】: Zachary Weinberg ; Eric Yawei Chen ; Pavithra Ramesh Jayaraman ; Collin Jackson
【Abstract】: History sniffing attacks allow web sites to learn about users' visits to other sites. The major browsers have recently adopted a defense against the current strategies for history sniffing. In a user study with 307 participants, we demonstrate that history sniffing remains feasible via interactive techniques which are not covered by the defense. While these techniques are slower and cannot hope to learn as much about users' browsing history, we see no practical way to defend against them.
【Keywords】: Web sites; online front-ends; security of data; Web sites; history browsing; history sniffing attacks; interactive techniques; side channel attacks; Browsers; Cascading style sheets; Color; History; Probes; Security; Servers; browsing history; privacy; web security
【Paper Link】 【Pages】:165-179
【Authors】: Aleksandar Nanevski ; Anindya Banerjee ; Deepak Garg
【Abstract】: We present Relational Hoare Type Theory (RHTT), a novel language and verification system capable of expressing and verifying rich information flow and access control policies via dependent types. We show that a number of security policies which have been formalized separately in the literature can all be expressed in RHTT using only standard type-theoretic constructions such as monads, higher-order functions, abstract types, abstract predicates, and modules. Example security policies include conditional declassification, information erasure, and state-dependent information flow and access control. RHTT can reason about such policies in the presence of dynamic memory allocation, deallocation, pointer aliasing and arithmetic. The system, theorems and examples have all been formalized in Coq.
【Keywords】: authorisation; formal verification; RHTT; Relational Hoare Type Theory; abstract predicates; abstract types; access control policies; deallocation; dependent types; dynamic memory allocation; higher order functions; information flow; information flow verification; language system; modules; pointer aliasing; pointer arithmetic; security policies; verification system; Access control; Context; Dynamic scheduling; Resource management; Semantics; Shape; Access Control; Information Flow; Type Theory
【Paper Link】 【Pages】:180-195
【Authors】: Jeffrey A. Vaughan ; Stephen Chong
【Abstract】: We explore the inference of expressive human-readable declassification policies as a step towards providing practical tools and techniques for strong language-based information security. Security-type systems can enforce expressive information-security policies, but can require enormous programmer effort before any security benefit is realized. To reduce the burden on the programmer, we focus on inference of expressive yet intuitive information-security policies from programs with few programmer annotations. We define a novel security policy language that can express what information a program may release, under what conditions (or, when) such release may occur, and which procedures are involved with the release (or, where in the code the release occurs). We describe a dataflow analysis for precisely inferring these policies, and build a tool that instantiates this analysis for the Java programming language. We validate the policies, analysis, and our implementation by applying the tool to a collection of simple Java programs.
【Keywords】: Java; data flow analysis; inference mechanisms; security of data; Java programming language; dataflow analysis; expressive human-readable declassification policies; language-based information security; security policy language; security-type systems; Information security; Java; Observers; Semantics; Syntactics; declassification policies; inference of security policies; information flow; language-based security
【Paper Link】 【Pages】:196-211
【Authors】: Sebastian Eggert ; Ron van der Meyden ; Henning Schnoor ; Thomas Wilke
【Abstract】: The paper considers several definitions of information flow security for intransitive policies from the point of view of the complexity of verifying whether a finite-state system is secure. The results are as follows. Checking (i) P-security (Goguen and Meseguer), (ii) IP-security (Haigh and Young), and (iii) TA-security (van der Meyden) are all in PTIME, while checking TO-security (van der Meyden) is undecidable. The most important ingredients in the proofs of the PTIME upper bounds are new characterizations of the respective security notions, which also enable the algorithms to return simple counterexamples demonstrating insecurity. Our results for IP-security improve a previous doubly exponential bound of Hadj-Alouane et al.
【Keywords】: computational complexity; finite state machines; security of data; IP-security; P-security; PTIME; TA-security; TO-security; finite-state system; information flow security; intransitive noninterference; Access control; Complexity theory; Cryptography; Resource management; Semantics; System analysis and design; information flow; noninterference; verification
【Paper Link】 【Pages】:212-227
【Authors】: Xin Zhang ; Hsu-Chun Hsiao ; Geoffrey Hasker ; Haowen Chan ; Adrian Perrig ; David G. Andersen
【Abstract】: We present the first Internet architecture designed to provide route control, failure isolation, and explicit trust information for end-to-end communications. SCION separates ASes into groups of independent routing sub-planes, called trust domains, which then interconnect to form complete routes. Trust domains provide natural isolation of routing failures and human misconfiguration, give endpoints strong control for both inbound and outbound traffic, provide meaningful and enforceable trust, and enable scalable routing updates with high path freshness. As a result, our architecture provides strong resilience and security properties as an intrinsic consequence of good design principles, avoiding piecemeal add-on protocols as security patches. Meanwhile, SCION only assumes that a few top-tier ISPs in the trust domain are trusted for providing reliable end-to-end communications, thus achieving a small Trusted Computing Base. Both our security analysis and evaluation results show that SCION naturally prevents numerous attacks and provides a high level of resilience, scalability, control, and isolation.
【Keywords】: Internet; computer network security; next generation networks; Internet architecture; SCION; failure isolation; next-generation networks; route control; trust domains; trust information; trusted computing base; Computer architecture; Internet; Law; Peer to peer computing; Routing; Routing protocols; Security
【Paper Link】 【Pages】:231-246
【Authors】: Joseph A. Calandrino ; Ann Kilzer ; Arvind Narayanan ; Edward W. Felten ; Vitaly Shmatikov
【Abstract】: Many commercial websites use recommender systems to help customers locate products and content. Modern recommenders are based on collaborative filtering: they use patterns learned from users' behavior to make recommendations, usually in the form of related-items lists. The scale and complexity of these systems, along with the fact that their outputs reveal only relationships between items (as opposed to information about users), may suggest that they pose no meaningful privacy risk. In this paper, we develop algorithms which take a moderate amount of auxiliary information about a customer and infer this customer's transactions from temporal changes in the public outputs of a recommender system. Our inference attacks are passive and can be carried out by any Internet user. We evaluate their feasibility using public data from popular websites Hunch, Last.fm, LibraryThing, and Amazon.
【Keywords】: Internet; Web sites; consumer behaviour; data privacy; groupware; inference mechanisms; information filtering; recommender systems; Amazon; Hunch; Internet user; Last.fm; LibraryThing; collaborative filtering; commercial Web sites; customer transactions; inference attacks; privacy risks; recommender systems; Accuracy; Collaboration; Covariance matrix; History; Inference algorithms; Privacy; Recommender systems
【Paper Link】 【Pages】:247-262
【Authors】: Reza Shokri ; George Theodorakopoulos ; Jean-Yves Le Boudec ; Jean-Pierre Hubaux
【Abstract】: It is a well-known fact that the progress of personal communication devices leads to serious concerns about privacy in general, and location privacy in particular. As a response to these issues, a number of Location-Privacy Protection Mechanisms (LPPMs) have been proposed during the last decade. However, their assessment and comparison remains problematic because of the absence of a systematic method to quantify them. In particular, the assumptions about the attacker's model tend to be incomplete, with the risk of a possibly wrong estimation of the users' location privacy. In this paper, we address these issues by providing a formal framework for the analysis of LPPMs; it captures, in particular, the prior information that might be available to the attacker, and various attacks that he can perform. The privacy of users and the success of the adversary in his location-inference attacks are two sides of the same coin. We revise location privacy by giving a simple, yet comprehensive, model to formulate all types of location-information disclosure attacks. Thus, by formalizing the adversary's performance, we propose and justify the right metric to quantify location privacy. We clarify the difference between three aspects of the adversary's inference attacks, namely their accuracy, certainty, and correctness. We show that correctness determines the privacy of users. In other words, the expected estimation error of the adversary is the metric of users' location privacy. We rely on well-established statistical methods to formalize and implement the attacks in a tool: the Location-Privacy Meter that measures the location privacy of mobile users, given various LPPMs. In addition to evaluating some example LPPMs, by using our tool, we assess the appropriateness of some popular metrics for location privacy: entropy and k-anonymity.
The results show a lack of satisfactory correlation between these two metrics and the success of the adversary in inferring the users' actual locations.
【Keywords】: data privacy; mobile computing; statistical analysis; LPPM; attackers model; formal framework; location inference attacks; location privacy protection mechanisms; mobile users; personal communication devices; quantify location privacy; quantifying location privacy; statistical methods; systematic method; users location privacy; wrong estimation; Accuracy; Data privacy; Gold; Measurement; Mobile communication; Privacy; Random variables; Evaluation Framework; Location Privacy; Location Traces; Location-Privacy Meter; Quantifying Metric
【Paper Link】 【Pages】:263-278
【Authors】: Philip W. L. Fong
【Abstract】: In Facebook-style Social Network Systems (FSNSs), which are a generalization of the access control model of Facebook, an access control policy specifies a graph-theoretic relationship between the resource owner and resource accessor that must hold in the social graph in order for access to be granted. Pseudonymous identities may collude to alter the topology of the social graph and gain access that would otherwise be forbidden. We formalize Denning's Principle of Privilege Attenuation (POPA) as a run-time property, and demonstrate that it is a necessary and sufficient condition for preventing the above form of Sybil attacks. A static policy analysis is then devised for verifying that an FSNS is POPA compliant (and thus Sybil free). The static analysis is proven to be both sound and complete. We also extend our analysis to cover a peculiar feature of FSNS, namely, what Fong et al. dubbed as Stage-I Authorization. We discuss the anomalies resulting from this extension, and point out the need to redesign Stage-I Authorization to support a rational POPA-compliance analysis.
【Keywords】: authorisation; graph theory; program diagnostics; social networking (online); Facebook; POPA-compliance analysis; Sybil attack prevention; access control model; gain access; graph-theoretic relationship; principle of privilege attenuation; social graph; social network systems; stage-I authorization; static policy analysis; Authorization; Facebook; Topology; Vocabulary; Principle of Privilege Attenuation; Sybil attacks; access control; completeness of static analysis; social network systems; soundness
【Paper Link】 【Pages】:281-296
【Authors】: Chris A. Owen ; Duncan A. Grove ; Tristan Newby ; Alex Murray ; Chris J. North ; Michael Pope
【Abstract】: We describe how to combine a minimal Trusted Computing Base (TCB) with polyinstantiated and slightly augmented Commercial Off The Shelf (COTS) software programs in separate Single Level Secure (SLS) partitions to create MultiLevel Secure (MLS) applications. These MLS applications can coordinate fine-grained (intra-document) Bell-LaPadula (BLP) [6] separation between information at multiple security levels. The untrusted COTS programs in the SLS partitions send at-level file edits as diff transactions to the TCB. The TCB verifies that BLP semantics will be observed and then patches these transactions into its canonical representation of the file. Finally, it releases appropriately filtered versions back to each SLS partition for re-assembly into the COTS program's standard file format. Furthermore, by judiciously restricting how the user can interact with the system, the multiple SLS instantiations of the COTS program can be made to appear as if they are a single MLS instantiation. We demonstrate the utility of this approach using Microsoft Word and DokuWiki.
【Keywords】: security of data; BLP; Bell LaPadula; COTS; Commercial Off The Shelf; DokuWiki; MLS; Microsoft Word; MultiLevel Secure; PRISM; SLS; TCB; filtered versions; program replication and integration for seamless MILS; single level secure; software programs; trusted computing base; Computer architecture; Internet; Monitoring; Operating systems; Security; Three dimensional displays; Application virtualization; Computer security; Data security; Data storage systems; File systems; Information entropy; Information security; Military computing; Multilevel systems; Software architecture
【Paper Link】 【Pages】:297-312
【Authors】: Brendan Dolan-Gavitt ; Tim Leek ; Michael Zhivich ; Jonathon T. Giffin ; Wenke Lee
【Abstract】: Introspection has featured prominently in many recent security solutions, such as virtual machine-based intrusion detection, forensic memory analysis, and low-artifact malware analysis. Widespread adoption of these approaches, however, has been hampered by the semantic gap: in order to extract meaningful information about the current state of a virtual machine, detailed knowledge of the guest operating system's inner workings is required. In this paper, we present a novel approach for automatically creating introspection tools for security applications with minimal human effort. By analyzing dynamic traces of small, in-guest programs that compute the desired introspection information, we can produce new programs that retrieve the same information from outside the guest virtual machine. We demonstrate the efficacy of our techniques by automatically generating 17 programs that retrieve security information across 3 different operating systems, and show that their functionality is unaffected by the compromise of the guest system. Our technique allows introspection tools to be effortlessly generated for multiple platforms, and enables the development of rich introspection-based security applications.
【Keywords】: security of data; virtual machines; forensic memory analysis; malware analysis; operating system; virtual machine introspection; virtual machine-based intrusion detection; virtuoso; Data mining; Kernel; Malware; Training; Virtual machining; dynamic analysis; security; virtual machine introspection; virtualization
【Paper Link】 【Pages】:313-328
【Authors】: Yinqian Zhang ; Ari Juels ; Alina Oprea ; Michael K. Reiter
【Abstract】: Security is a major barrier to enterprise adoption of cloud computing. Physical co-residency with other tenants poses a particular risk, due to pervasive virtualization in the cloud. Recent research has shown how side channels in shared hardware may enable attackers to exfiltrate sensitive data across virtual machines (VMs). In view of such risks, cloud providers may promise physically isolated resources to select tenants, but a challenge remains: Tenants still need to be able to verify physical isolation of their VMs. We introduce HomeAlone, a system that lets a tenant verify its VMs' exclusive use of a physical machine. The key idea in HomeAlone is to invert the usual application of side channels. Rather than exploiting a side channel as a vector of attack, HomeAlone uses a side channel (in the L2 memory cache) as a novel, defensive detection tool. By analyzing cache usage during periods in which "friendly" VMs coordinate to avoid portions of the cache, a tenant using HomeAlone can detect the activity of a co-resident "foe" VM. Key technical contributions of HomeAlone include classification techniques to analyze cache usage and guest operating system kernel modifications that minimize the performance impact of friendly VMs sidestepping monitored cache portions. HomeAlone requires no modification of existing hypervisors and no special action or cooperation by the cloud provider.
【Keywords】: cache storage; cloud computing; formal verification; operating system kernels; security of data; ubiquitous computing; virtual machines; virtualisation; HomeAlone; L2 memory cache; cache usage; cloud computing; co-residency detection; co-resident foe VM activity detection; defensive detection tool; guest operating system kernel modifications; pervasive virtualization; physical co-residency; side-channel analysis; virtual machines; Cloud computing; Hardware; Monitoring; Probes; Timing; Virtual machine monitors; Virtual machining; Cloud computing; Infrastructure-as-a-Service (IaaS); co-residency detection; side-channel analysis
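The detection idea in the abstract above can be illustrated with a toy simulation (the set counts, threshold, and function names below are invented for illustration, not taken from the HomeAlone implementation): friendly VMs sidestep a region of the L2 cache during a coordinated quiescent period, so any activity observed in that region suggests a co-resident foe.

```python
import random

AVOIDED_SETS = set(range(0, 64))  # cache sets the friendly VMs sidestep
TOTAL_SETS = 256                  # toy L2 cache size, in sets

def observe_accesses(foe_present, n=10000, rng=None):
    """Simulate which cache sets see activity during a quiescent period."""
    rng = rng or random.Random(0)
    hits = [0] * TOTAL_SETS
    for _ in range(n):
        if foe_present:
            s = rng.randrange(TOTAL_SETS)      # a foe VM touches any set
        else:
            s = rng.randrange(64, TOTAL_SETS)  # friendly VMs avoid sets 0..63
        hits[s] += 1
    return hits

def detect_foe(hits, threshold=10):
    # Activity inside the avoided region signals a co-resident foe VM.
    return sum(hits[s] for s in AVOIDED_SETS) > threshold
```

The real system must of course distinguish foe activity from noise (cache misses caused by the hypervisor itself), which is why the paper's classification techniques matter.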
【Paper Link】 【Pages】:329-344
【Authors】: Suman Jana ; Donald E. Porter ; Vitaly Shmatikov
【Abstract】: TxBox is a new system for sandboxing untrusted applications. It speculatively executes the application in a system transaction, allowing security checks to be parallelized and yielding significant performance gains for techniques such as on-access anti-virus scanning. TxBox is not vulnerable to TOCTTOU attacks and incorrect mirroring of kernel state. Furthermore, TxBox supports automatic recovery: if a violation is detected, the sandboxed program is terminated and all of its effects on the host are rolled back. This enables effective enforcement of security policies that span multiple system calls.
【Keywords】: security of data; TOCTTOU attacks; TxBox; automatic recovery; building security; efficient sandboxes; kernel state; on-access antivirus scanning; sandboxed program; sandboxing untrusted applications; security checks; security policies; system transaction; system transactions; Codecs; Instruments; Kernel; Malware; Monitoring; Semantics; sandbox; speculative execution; transaction
【Paper Link】 【Pages】:347-362
【Authors】: Noah M. Johnson ; Juan Caballero ; Kevin Zhijie Chen ; Stephen McCamant ; Pongsin Poosankam ; Daniel Reynaud ; Dawn Song
【Abstract】: A security analyst often needs to understand two runs of the same program that exhibit a difference in program state or output. This is important, for example, for vulnerability analysis, as well as for analyzing a malware program that features different behaviors when run in different environments. In this paper we propose a differential slicing approach that automates the analysis of such execution differences. Differential slicing outputs a causal difference graph that captures the input differences that triggered the observed difference and the causal path of differences that led from those input differences to the observed difference. The analyst uses the graph to quickly understand the observed difference. We implement differential slicing and evaluate it on the analysis of 11 real-world vulnerabilities and 2 malware samples with environment-dependent behaviors. We also evaluate it in an informal user study with two vulnerability analysts. Our results show that differential slicing successfully identifies the input differences that caused the observed difference and that the causal difference graph significantly reduces the amount of time and effort required for an analyst to understand the observed difference.
【Keywords】: program slicing; security of data; causal difference graph; causal execution differences; differential slicing; malware program; Algorithm design and analysis; Argon; Computer crashes; Indexing; Malware; Resource management
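The workflow in the abstract above can be sketched at a very high level (a simplified toy, not the authors' tool: real traces are instruction-level and alignment is far more involved): first locate where two aligned execution traces diverge, then walk dependency edges backwards from the observed difference to the input positions that caused it.

```python
def first_divergence(trace_a, trace_b):
    """Return the index and values of the first differing entries in
    two aligned execution traces, or None if no difference is found."""
    for i, (a, b) in enumerate(zip(trace_a, trace_b)):
        if a != b:
            return i, a, b
    return None

def causal_path_to_inputs(diverge_idx, deps, input_indices):
    """Walk dependency edges backwards from the observed difference;
    return the input positions reachable along the causal path."""
    frontier, seen = [diverge_idx], set()
    while frontier:
        i = frontier.pop()
        if i not in seen:
            seen.add(i)
            frontier.extend(deps.get(i, []))
    return sorted(seen & input_indices)
```

The set `seen` corresponds loosely to the causal difference graph: the analyst inspects it rather than the full traces.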
【Paper Link】 【Pages】:363-378
【Authors】: Ankur Taly ; Úlfar Erlingsson ; John C. Mitchell ; Mark S. Miller ; Jasvir Nagra
【Abstract】: JavaScript is widely used to provide client-side functionality in Web applications. To provide services ranging from maps to advertisements, Web applications may incorporate untrusted JavaScript code from third parties. The trusted portion of each application may then expose an API to untrusted code, interposing a reference monitor that mediates access to security-critical resources. However, a JavaScript reference monitor can only be effective if it cannot be circumvented through programming tricks or programming language idiosyncrasies. In order to verify complete mediation of critical resources for applications of interest, we define the semantics of a restricted version of JavaScript devised by the ECMA Standards committee for isolation purposes, and develop and test an automated tool that can soundly establish that a given API cannot be circumvented or subverted. Our tool reveals a previously undiscovered vulnerability in the widely-examined Yahoo! ADsafe filter and verifies confinement of the repaired filter and other examples from the Object-Capability literature.
【Keywords】: Java; application program interfaces; information filtering; security of data; JavaScript code; Yahoo ADsafe filter; automated analysis; client side functionality; critical resources; object capability literature; programming language idiosyncrasies; programming tricks; security critical JavaScript API; security critical resources; untrusted code; Arrays; Encapsulation; Monitoring; Prototypes; Reactive power; Semantics; Syntactics; APIs; Javascript; Language-Based Security; Points-to Analysis
【Paper Link】 【Pages】:379-394
【Authors】: Bryan Parno ; Jacob R. Lorch ; John R. Douceur ; James W. Mickens ; Jonathan M. McCune
【Abstract】: To protect computation, a security architecture must safeguard not only the software that performs it but also the state on which the software operates. This requires more than just preserving state confidentiality and integrity, since, e.g., software may err if its state is rolled back to a correct but stale version. For this reason, we present Memoir, the first system that fully ensures the continuity of a protected software module's state. In other words, it ensures that a module's state remains persistently and completely inviolate. A key contribution of Memoir is a technique to ensure rollback resistance without making the system vulnerable to system crashes. It does this by using a deterministic module, storing a concise summary of the module's request history in protected NVRAM, and allowing only safe request replays after crashes. Since frequent NVRAM writes are impractical on modern hardware, we present a novel way to leverage limited trusted hardware to minimize such writes. To ensure the correctness of our design, we develop formal, machine-verified proofs of safety. To demonstrate Memoir's practicality, we have built it and conducted evaluations demonstrating that it achieves reasonable performance on real hardware. Furthermore, by building three useful Memoir-protected modules that rely critically on state continuity, we demonstrate Memoir's versatility.
【Keywords】: security of data; software architecture; Memoir; NVRAM; deterministic module; security architecture; Autobiographies; Computer crashes; Cryptography; Hardware; Nonvolatile memory; Radiation detectors; Random access memory
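The freshness mechanism the Memoir abstract describes can be sketched abstractly (class and method names are invented for illustration; the actual design relies on trusted hardware and protected NVRAM, which an in-memory Python object obviously cannot model): keep a concise hash-chain summary of the request history, and accept only a state whose summary matches the protected copy, so a rolled-back state is detectable.

```python
import hashlib

class MemoirLikeModule:
    """Toy sketch of rollback resistance via a request-history summary
    kept in protected storage (standing in for NVRAM)."""

    def __init__(self):
        self.history_hash = b"\x00" * 32  # concise summary in "NVRAM"
        self.counter = 0                   # module state to protect

    def handle_request(self, request):
        # Extend the history summary before acting, so any rolled-back
        # state becomes inconsistent with the protected summary.
        self.history_hash = hashlib.sha256(self.history_hash + request).digest()
        self.counter += 1
        return self.counter

    def verify_state(self, claimed_summary):
        # After a crash, only a state whose summary matches the
        # protected copy is accepted (enabling safe request replay).
        return claimed_summary == self.history_hash
```

Because frequent NVRAM writes are impractical, the real system batches this idea using limited trusted hardware; the sketch only shows why a history summary defeats rollback.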
【Paper Link】 【Pages】:397-412
【Authors】: Frederik Armknecht ; Roel Maes ; Ahmad-Reza Sadeghi ; François-Xavier Standaert ; Christian Wachsmann
【Abstract】: Physical attacks against cryptographic devices typically take advantage of information leakage (e.g., side-channel attacks) or erroneous computations (e.g., fault injection attacks). Preventing or detecting these attacks has become a challenging task in modern cryptographic research. In this context, intrinsic physical properties of integrated circuits, such as Physical(ly) Unclonable Functions (PUFs), can be used to complement classical cryptographic constructions and to enhance the security of cryptographic devices. PUFs have recently been proposed for various applications, including anti-counterfeiting schemes, key generation algorithms, and the design of block ciphers. However, only rudimentary security models for PUFs currently exist, limiting the confidence in the security claims of PUF-based security primitives. A useful model should at the same time (i) define the security properties of PUFs abstractly and naturally, allowing one to design and formally analyze PUF-based security solutions, and (ii) provide practical quantification tools allowing engineers to evaluate PUF instantiations. In this paper, we present a formal foundation for security primitives based on PUFs. Our approach assumes as little as possible about the physics and focuses on the main properties at the heart of most published works on PUFs: robustness (generation of stable answers), unclonability (not provided by algorithmic solutions), and unpredictability. We first formally define these properties and then show that they can be achieved by previously introduced PUF instantiations. We stress that such a consolidating work allows for a meaningful security analysis of security primitives that take advantage of physical properties, which is becoming increasingly important in the development of the next generation of secure information systems.
【Keywords】: cryptography; PUF; algorithmic solutions; anticounterfeiting schemes; block ciphers; context intrinsic physical properties; cryptographic constructions; cryptographic devices; cryptographic research; erroneous computations; fault injection attacks; information leakage; information systems security; integrated circuits; key generation algorithms; physical attacks; physical functions; physical unclonable functions; quantification tools; rudimentary security models; security features; side channels attacks; Adaptive optics; Cryptography; Integrated optics; Manufacturing; Noise measurement; Physics; Formal Security Model; Physically Unclonable Function (PUF); Robustness; Unclonability; Unpredictability
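The robustness property above ("generation of stable answers") can be made concrete with a toy model (entirely illustrative; real PUF responses arise from manufacturing variation, not a hash, and the noise rate and vote count here are arbitrary): raw responses are noisy bits, and a stable answer is derived by majority vote over repeated evaluations.

```python
import hashlib
import random

def toy_puf_bit(challenge, device_secret, rng, noise=0.05):
    """Toy PUF: a device-unique response bit, flipped with small
    probability to model measurement noise."""
    clean = hashlib.sha256(device_secret + challenge).digest()[0] & 1
    return clean ^ (1 if rng.random() < noise else 0)

def robust_response(challenge, device_secret, rng, votes=15):
    """Robustness via majority vote over repeated noisy evaluations."""
    ones = sum(toy_puf_bit(challenge, device_secret, rng) for _ in range(votes))
    return 1 if 2 * ones > votes else 0
```

Unclonability and unpredictability are exactly the properties this toy cannot exhibit (anyone knowing `device_secret` can clone it), which is why the paper's formal treatment separates them from robustness.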
【Paper Link】 【Pages】:413-428
【Authors】: Vineeth Kashyap ; Ben Wiedermann ; Ben Hardekopf
【Abstract】: Secure information flow guarantees the secrecy and integrity of data, preventing an attacker from learning secret information (secrecy) or injecting untrusted information (integrity). Covert channels can be used to subvert these security guarantees; for example, timing and termination channels can, either intentionally or inadvertently, violate them by modifying the timing or termination behavior of a program based on secret or untrusted data. Attacks using these covert channels have been published and are known to work in practice. As techniques to prevent non-covert channels become increasingly practical, covert channels are likely to become even more attractive for attackers to exploit. The goal of this paper is to understand the subtleties of timing- and termination-sensitive noninterference, explore the space of possible strategies for enforcing noninterference guarantees, and formalize the exact guarantees that these strategies can enforce. As a result of this effort, we create a novel strategy that provides stronger security guarantees than existing work, and we clarify claims in existing work about what guarantees can be made.
【Keywords】: security of data; covert channels; secret information; secure information flow; Computational modeling; Lattices; Processor scheduling; Security; Semantics; Sensitivity; Timing
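The kind of timing channel discussed above can be illustrated with the classic early-exit comparison (illustrative code, not taken from the paper; a step counter stands in for wall-clock time): the observable running "time" depends on the secret, and a constant-time variant removes that dependence.

```python
def leaky_check(password, guess):
    """Early-exit comparison: the step count (a proxy for running
    time) reveals the length of the matching prefix."""
    steps = 0
    for a, b in zip(password, guess):
        steps += 1
        if a != b:
            break
    return steps  # observable "time" leaks secret-dependent information

def constant_time_check(password, guess):
    """Mitigation: always examine every character, so the observable
    step count is independent of the secret."""
    steps = 0
    diff = len(password) != len(guess)
    for a, b in zip(password, guess):
        steps += 1
        diff |= (a != b)
    return diff, steps
```

Enforcement strategies in the paper generalize this idea: they constrain when timing or termination behavior may depend on secret data at all.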
【Paper Link】 【Pages】:431-446
【Authors】: Kirill Levchenko ; Andreas Pitsillidis ; Neha Chachra ; Brandon Enright ; Márk Félegyházi ; Chris Grier ; Tristan Halvorson ; Chris Kanich ; Christian Kreibich ; He Liu ; Damon McCoy ; Nicholas Weaver ; Vern Paxson ; Geoffrey M. Voelker ; Stefan Savage
【Abstract】: Spam-based advertising is a business. While it has engendered both widespread antipathy and a multi-billion dollar anti-spam industry, it continues to exist because it fuels a profitable enterprise. We lack, however, a solid understanding of this enterprise's full structure, and thus most anti-spam interventions focus on only one facet of the overall spam value chain (e.g., spam filtering, URL blacklisting, site takedown). In this paper we present a holistic analysis that quantifies the full set of resources employed to monetize spam email -- including naming, hosting, payment and fulfillment -- using extensive measurements of three months of diverse spam data, broad crawling of naming and hosting infrastructures, and over 100 purchases from spam-advertised sites. We relate these resources to the organizations who administer them and then use this data to characterize the relative prospects for defensive interventions at each link in the spam value chain. In particular, we provide the first strong evidence of payment bottlenecks in the spam value chain: 95% of spam-advertised pharmaceutical, replica and software products are monetized using merchant services from just a handful of banks.
【Keywords】: information filtering; unsolicited e-mail; Spam based advertising; URL blacklisting; anti Spam interventions; anti spam industry; click trajectories; end-to-end analysis; extensive measurements; hosting infrastructures; merchant services; naming infrastructures; profitable enterprise; replica products; software products; spam advertised sites; spam email; spam filtering; spam value chain; spam-advertised pharmaceutical; widespread antipathy; Advertising; Business; Crawlers; Electronic mail; Feeds; Servers; Web sites
【Paper Link】 【Pages】:447-462
【Authors】: Kurt Thomas ; Chris Grier ; Justin Ma ; Vern Paxson ; Dawn Song
【Abstract】: On the heels of the widespread adoption of web services such as social networks and URL shorteners, scams, phishing, and malware have become regular threats. Despite extensive research, email-based spam filtering techniques generally fall short for protecting other web services. To better address this need, we present Monarch, a real-time system that crawls URLs as they are submitted to web services and determines whether the URLs direct to spam. We evaluate the viability of Monarch and the fundamental challenges that arise due to the diversity of web service spam. We show that Monarch can provide accurate, real-time protection, but that the underlying characteristics of spam do not generalize across web services. In particular, we find that spam targeting email qualitatively differs in significant ways from spam campaigns targeting Twitter. We explore the distinctions between email and Twitter spam, including the abuse of public web hosting and redirector services. Finally, we demonstrate Monarch's scalability, showing our system could protect a service such as Twitter -- which needs to process 15 million URLs/day -- for a bit under $800/day.
【Keywords】: Web services; information filtering; invasive software; social networking (online); unsolicited e-mail; Monarch scalability; Twitter spam; URL shorteners; email based spam filtering techniques; malware; phishing; public web hosting; real-time URL Spam filtering service; redirector services; scams; social networks; underlying characteristics; web services; Browsers; Electronic mail; Feature extraction; HTML; IP networks; Real time systems; Web services
【Paper Link】 【Pages】:465-480
【Authors】: Rui Wang ; Shuo Chen ; XiaoFeng Wang ; Shaz Qadeer
【Abstract】: Web applications increasingly integrate third-party services. The integration introduces new security challenges due to the complexity of coordinating an application's internal state with those of the component services and the web client across the Internet. In this paper, we study the security implications of this problem for merchant websites that accept payments through third-party cashiers (e.g., PayPal, Amazon Payments and Google Checkout), which we refer to as Cashier-as-a-Service or CaaS. We found that leading merchant applications (e.g., NopCommerce and Interspire), popular online stores (e.g., Buy.com and JR.com) and a prestigious CaaS provider (Amazon Payments) all contain serious logic flaws that can be exploited to cause inconsistencies between the states of the CaaS and the merchant. As a result, a malicious shopper can purchase an item at an arbitrarily low price, shop for free after paying for one item, or even avoid payment. We reported our findings to the affected parties; they either updated their vulnerable software or continued to work on fixes with high priority. We further studied the complexity of finding this type of logic flaw in typical CaaS-based checkout systems, and gained a preliminary understanding of the effort needed to improve the security assurance of such systems during their development and testing processes.
【Keywords】: Internet; Web sites; electronic commerce; financial data processing; retail data processing; security of data; Amazon Payments; Buy.com; CaaS-based checkout systems; Google Checkout; Internet; Interspire; JR.com; NopCommerce; PayPal; cashier-as-a-service based Web stores; merchant Web sites; security analysis; third-party cashiers; third-party services; Browsers; Complexity theory; Google; Security; Servers; Software; Web services; Cashier-as-a-Service; e-Commerce security; logic bug; program verification; web API
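The class of logic flaw the abstract above describes can be shown schematically (field names are invented for illustration; the actual PayPal, Amazon Payments, and Google Checkout APIs differ): a merchant that trusts the cashier's "paid" signal without binding it to the order, amount, and payee can be defrauded by a shopper who replays or redirects a cheap payment.

```python
def insecure_confirm(order, payment_msg):
    # Flawed check: accepts any message that claims "paid", regardless
    # of which order it pays for, how much, or to whom.
    return payment_msg["status"] == "paid"

def secure_confirm(order, payment_msg, merchant_id):
    # Bind the payment to this specific order, amount, and merchant.
    return (payment_msg["status"] == "paid"
            and payment_msg["order_id"] == order["id"]
            and payment_msg["amount"] == order["total"]
            and payment_msg["payee"] == merchant_id)
```

The subtlety the paper documents is that these bindings are easy to get wrong when the checkout conversation is split across merchant, cashier, and the shopper's browser.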
【Paper Link】 【Pages】:481-489
【Authors】: Thai Duong ; Juliano Rizzo
【Abstract】: This paper discusses how cryptography is misused in the security design of a large part of the Web. Our focus is on ASP.NET, the web application framework developed by Microsoft that powers 25% of all Internet web sites. We show that attackers can abuse multiple cryptographic design flaws to compromise ASP.NET web applications. We describe practical and highly efficient attacks that allow attackers to steal cryptographic secret keys and forge authentication tokens to access sensitive information. The attacks combine decryption oracles, unauthenticated encryptions, and the reuse of keys for different encryption purposes. Finally, we give some reasons why cryptography is often misused in web technologies, and recommend steps to avoid these mistakes.
【Keywords】: Internet; Web sites; cryptography; ASP.NET; Internet web sites; cryptographic design flaws; decryption oracles; forge authentication tokens; security design; sensitive information; steal cryptographic secret keys; unauthenticated encryptions; web application framework; Assembly; Authentication; Cryptography; Internet; Servers; Software; Application Security; Cryptography; Decryption oracle attack; Unauthenticated encryption; Web security
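One pitfall named above, unauthenticated encryption, can be demonstrated with a toy stream cipher (illustrative only; this is not ASP.NET's cipher and must never be used in practice): without a MAC the ciphertext is malleable bit-for-bit, whereas encrypt-then-MAC with an independent key detects tampering.

```python
import hashlib
import hmac

def xor_stream(key, data):
    """Toy stream cipher: keystream bytes from SHA-256 of key||counter.
    Encryption and decryption are the same XOR operation."""
    out = bytearray()
    for i, b in enumerate(data):
        ks = hashlib.sha256(key + i.to_bytes(8, "big")).digest()[0]
        out.append(b ^ ks)
    return bytes(out)

def encrypt_then_mac(enc_key, mac_key, plaintext):
    # Independent keys for encryption and authentication.
    ct = xor_stream(enc_key, plaintext)
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return ct, tag

def decrypt_checked(enc_key, mac_key, ct, tag):
    # Verify the MAC before touching the ciphertext.
    if not hmac.compare_digest(tag, hmac.new(mac_key, ct, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    return xor_stream(enc_key, ct)
```

Flipping one ciphertext bit flips the corresponding plaintext bit, which is exactly the malleability attackers exploit when a framework decrypts unauthenticated data.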
【Paper Link】 【Pages】:490-505
【Authors】: David Gullasch ; Endre Bangerter ; Stephan Krenn
【Abstract】: Side channel attacks on cryptographic systems exploit information gained from physical implementations rather than theoretical weaknesses of a scheme. In recent years, major achievements were made for the class of so-called access-driven cache attacks. Such attacks exploit the leakage of the memory locations accessed by a victim process. In this paper we consider the AES block cipher and present an attack which is capable of recovering the full secret key in almost real time for AES-128, requiring only a very limited number of observed encryptions. Unlike previous attacks, we do not require any information about the plaintext (such as its distribution). Moreover, for the first time, we also show how the plaintext can be recovered without having access to the ciphertext at all. It is the first working attack on AES implementations using compressed tables; for such implementations, no efficient technique to identify the beginning of AES rounds is known, which is the fundamental assumption underlying previous attacks. We have a fully working implementation of our attack which is able to recover AES keys after observing as few as 100 encryptions. It works against the OpenSSL 0.9.8n implementation of AES on Linux systems. Our spy process does not require any special privileges beyond those of a standard Linux user. A contribution of probably independent interest is a denial-of-service attack on the task scheduler of current Linux systems (CFS), which allows one to observe (on average) every single memory access of a victim process.
【Keywords】: Linux; cache storage; cryptography; AES block cipher; AES-128; CFS; OpenSSL 0.9.8n; access-based cache attacks; advanced encryption standard; cache games; cryptographic systems; current Linux systems; denial of service attack; memory location leakage; side channel attacks; task scheduler; Cryptography; Linux; Matrices; Memory management; Monitoring; Random access memory; AES; access-based cache attacks; side channel
【Paper Link】 【Pages】:506-520
【Authors】: Elie Bursztein ; Mike Hamburg ; Jocelyn Lagarenne ; Dan Boneh
【Abstract】: We present a generic tool, Kartograph, that lifts the fog of war in online real-time strategy games by snooping on the memory used by the game. Kartograph is passive and cannot be detected remotely. Motivated by these passive attacks, we present secure protocols for distributing game state among players so that each client only has data it is allowed to see. Our system, OpenConflict, runs real-time games with distributed state. To support our claim that OpenConflict is sufficiently fast for real-time strategy games, we show the results of an extensive study of 1000 replays of StarCraft II games between expert players. At the peak of a typical game, OpenConflict needs only 22 milliseconds on one CPU core each time state is synchronized.
【Keywords】: cartography; computer games; games of skill; security of data; CPU core; Kartograph; OpenConflict; StarCraft II games; online real-time strategy games; passive attacks; real time map hacks prevention; secure protocols; Computer crime; Computer hacking; Data visualization; Games; Heating; Instruments; Real time systems; map hacks; multi-player games
【Paper Link】 【Pages】:523-537
【Authors】: Ryan Henry ; Ian Goldberg
【Abstract】: We present several extensions to the Nymble framework for anonymous blacklisting systems. First, we show how to distribute the Verinym Issuer as a threshold entity. This provides liveness against a threshold Byzantine adversary and protects against denial-of-service attacks. Second, we describe how to revoke a user for a period spanning multiple linkability windows. This gives service providers more flexibility in deciding how long to block individual users. We also point out how our solution enables efficient blacklist transferability among service providers. Third, we augment the Verinym Acquisition Protocol for Tor-aware systems (that utilize IP addresses as a unique identifier) to handle two additional cases: 1) the operator of a Tor exit node wishes to access services protected by the system, and 2) a user's access to the Verinym Issuer (and the Tor network) is blocked by a firewall. Finally, we revisit the objective blacklisting mechanism used in Jack, and generalize this idea to enable objective blacklisting in other Nymble-like systems. We illustrate the approach by showing how to implement it in Nymble and Nymbler.
【Keywords】: IP networks; Internet; Web sites; computer network security; data privacy; protocols; IP addresses; Internet; Nymble-like systems; Tor-aware systems; Verinym Issuer; Verinym acquisition protocol; Web sites; anonymous blacklisting systems; blacklist transferability; denial-of-service attacks; privacy-enhanced revocation; threshold Byzantine adversary; threshold entity; Communication networks; IP networks; Joining processes; Nickel; Protocols; Public key; anonymity; anonymous blacklisting; authentication; privacy enhancing technologies; privacy-enhanced revocation
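Distributing an issuer "as a threshold entity", as in the abstract above, typically rests on threshold secret sharing: any k of n servers can act, but fewer learn nothing. A minimal Shamir sketch (a generic illustration of the primitive, not the paper's Verinym Issuer protocol; the field prime is chosen for convenience):

```python
import random

P = 2**61 - 1  # a Mersenne prime, used as the field modulus

def share(secret, k, n, rng=None):
    """Split `secret` into n shares; any k reconstruct it."""
    rng = rng or random.Random(42)
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

A threshold Byzantine adversary controlling fewer than k servers can neither issue verinyms alone nor deny service, which is the liveness property the paper targets.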
【Paper Link】 【Pages】:538-553
【Authors】: Ralf Küsters ; Tomasz Truderung ; Andreas Vogt
【Abstract】: In this paper, we present new insights into central properties of voting systems, namely verifiability, privacy, and coercion-resistance. We demonstrate that the combination of the two forms of verifiability considered in the literature -- individual and universal verifiability -- is, unlike commonly believed, insufficient to guarantee overall verifiability. We also demonstrate that the relationship between coercion-resistance and privacy is more subtle than suggested in the literature. Our findings are partly based on a case study of prominent voting systems, ThreeBallot and VAV, for which, among other things, we show that, contrary to common belief, they do not provide any reasonable level of verifiability, even though they satisfy individual and universal verifiability. We also show that the original variants of ThreeBallot and VAV provide a better level of coercion-resistance than of privacy.
【Keywords】: data privacy; government data processing; ThreeBallot; VAV; coercion-resistance; privacy; verifiability; voting systems; Nominations and elections; Observers; Privacy; Probability distribution; Protocols; Resistance; Security; coercion-resistance; privacy; protocol analysis; verifiability; voting