2015 IEEE Symposium on Security and Privacy, SP 2015, San Jose, CA, USA, May 17-21, 2015. IEEE Computer Society 【DBLP Link】
【Paper Link】 【Pages】:3-19
【Authors】: Le Guan ; Jingqiang Lin ; Bo Luo ; Jiwu Jing ; Jing Wang
【Abstract】: Cryptography plays an important role in computer and communication security. In practical implementations of cryptosystems, the cryptographic keys are usually loaded into the memory as plaintext, and then used in the cryptographic algorithms. Therefore, the private keys are subject to memory disclosure attacks that read unauthorized data from RAM. Such attacks could be performed through software methods (e.g., OpenSSL Heartbleed) even when the integrity of the victim system's executable binaries is maintained. They could also be performed through physical methods (e.g., cold-boot attacks on RAM chips) even when the system is free of software vulnerabilities. In this paper, we propose Mimosa, which protects RSA private keys against the above software-based and physical memory attacks. When the Mimosa service is idle, private keys are encrypted and reside in memory as ciphertext. During cryptographic computations, Mimosa uses hardware transactional memory (HTM) to ensure that (a) whenever a malicious process other than Mimosa attempts to read the plaintext private key, the transaction aborts and all sensitive data are automatically cleared with hardware mechanisms, due to the strong atomicity guarantee of HTM, and (b) all sensitive data, including private keys and intermediate states, appear as plaintext only within CPU-bound caches, and are never loaded to RAM chips. To the best of our knowledge, Mimosa is the first solution to use transactional memory to protect sensitive data against memory disclosure attacks. We have implemented Mimosa on a commodity machine with Intel Core i7 Haswell CPUs. Through extensive experiments, we show that Mimosa effectively protects cryptographic keys against various attacks that attempt to read sensitive data from memory, and it only introduces a small performance overhead.
【Keywords】: Cryptography; Hardware; Kernel; Random access memory; Registers
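Mimosa's abort-and-clear behavior comes from the CPU's hardware transactional memory; as an illustration of the control flow only, the Python sketch below mimics the compute path: the key rests encrypted, plaintext exists only inside the "transaction", and every exit path wipes it. All names, the toy XOR cipher, and the checksum "signature" are our hypothetical stand-ins, not Mimosa's actual implementation (which uses Intel TSX and keeps plaintext cache-resident).

```python
# Illustrative control-flow sketch of Mimosa's compute path (NOT real HTM).

def xor_crypt(data: bytes, master_key: bytes) -> bytes:
    # Stand-in cipher for the sketch; the real system uses a proper cipher
    # with a master key held outside RAM.
    return bytes(b ^ master_key[i % len(master_key)] for i, b in enumerate(data))

def protected_sign(encrypted_key: bytes, master_key: bytes, message: bytes) -> int:
    # "Transaction begins": decrypt the private key into a local buffer.
    plaintext = bytearray(xor_crypt(encrypted_key, master_key))
    try:
        # Hypothetical private-key operation; here just a toy checksum.
        return sum(plaintext) + sum(message)
    finally:
        # On commit or abort, the plaintext never survives; real HTM discards
        # the transactional write-set in hardware, here we zero it explicitly.
        for i in range(len(plaintext)):
            plaintext[i] = 0

enc = xor_crypt(b"secret-rsa-key", b"\x42")
out = protected_sign(enc, b"\x42", b"hello")
```

The point of the sketch is the invariant, not the crypto: no code path returns control with decrypted key material still live.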
【Paper Link】 【Pages】:20-37
【Authors】: Robert N. M. Watson ; Jonathan Woodruff ; Peter G. Neumann ; Simon W. Moore ; Jonathan Anderson ; David Chisnall ; Nirav H. Dave ; Brooks Davis ; Khilan Gudka ; Ben Laurie ; Steven J. Murdoch ; Robert Norton ; Michael Roe ; Stacey Son ; Munraj Vadera
【Abstract】: CHERI extends a conventional RISC Instruction-Set Architecture, compiler, and operating system to support fine-grained, capability-based memory protection to mitigate memory-related vulnerabilities in C-language TCBs. We describe how CHERI capabilities can also underpin a hardware-software object-capability model for application compartmentalization that can mitigate broader classes of attack. Prototyped as an extension to the open-source 64-bit BERI RISC FPGA soft-core processor, FreeBSD operating system, and LLVM compiler, we demonstrate multiple orders-of-magnitude improvement in scalability, simplified programmability, and resulting tangible security benefits as compared to compartmentalization based on pure Memory-Management Unit (MMU) designs. We evaluate incrementally deployable CHERI-based compartmentalization using several real-world UNIX libraries and applications.
【Keywords】: data protection; operating systems (computers); program compilers; reduced instruction set computing; software architecture; C-language TCB; CHERI; LLVM compiler; RISC instruction-set architecture; capability-based memory protection; hardware-software object-capability model; hybrid capability-system architecture; operating system; software compartmentalization; Hardware; Kernel; Libraries; Reduced instruction set computing; Registers; Security; CHERI processor; capability system; computer architecture; memory protection; object capabilities; software compartmentalization
【Paper Link】 【Pages】:38-54
【Authors】: Felix Schuster ; Manuel Costa ; Cédric Fournet ; Christos Gkantsidis ; Marcus Peinado ; Gloria Mainar-Ruiz ; Mark Russinovich
【Abstract】: We present VC3, the first system that allows users to run distributed MapReduce computations in the cloud while keeping their code and data secret, and ensuring the correctness and completeness of their results. VC3 runs on unmodified Hadoop, but crucially keeps Hadoop, the operating system and the hypervisor out of the TCB; thus, confidentiality and integrity are preserved even if these large components are compromised. VC3 relies on SGX processors to isolate memory regions on individual computers, and to deploy new protocols that secure distributed MapReduce computations. VC3 optionally enforces region self-integrity invariants for all MapReduce code running within isolated regions, to prevent attacks due to unsafe memory reads and writes. Experimental results on common benchmarks show that VC3 performs well compared with unprotected Hadoop: VC3's average runtime overhead is negligible for its base security guarantees, 4.5% with write integrity and 8% with read/write integrity.
【Keywords】: cloud computing; data analysis; data integrity; trusted computing; SGX; TCB; VC3; average runtime overhead; base security guarantees; cloud; hypervisor; memory regions; read-write integrity; region self-integrity invariants; secure distributed MapReduce computations; trustworthy data analytics; unmodified Hadoop; Encryption; Operating systems; Program processors; Protocols; Virtual machine monitors
【Paper Link】 【Pages】:55-69
【Authors】: Fengwei Zhang ; Kevin Leach ; Angelos Stavrou ; Haining Wang ; Kun Sun
【Abstract】: With the rapid proliferation of malware attacks on the Internet, understanding these malicious behaviors plays a critical role in crafting effective defense. Advanced malware analysis relies on virtualization or emulation technology to run samples in a confined environment, and to analyze malicious activities by instrumenting code execution. However, virtual machines and emulators inevitably create artifacts in the execution environment, making these approaches vulnerable to detection or subversion. In this paper, we present MALT, a debugging framework that employs System Management Mode, a CPU mode in the x86 architecture, to transparently study armored malware. MALT does not depend on virtualization or emulation and thus is immune to threats targeting such environments. Our approach reduces the attack surface at the software level, and advances state-of-the-art debugging transparency. MALT embodies various debugging functions, including register/memory accesses, breakpoints, and four stepping modes. We implemented a prototype of MALT on two physical machines, and we conducted experiments by testing an array of existing anti-virtualization, anti-emulation, and packing techniques against MALT. The experimental results show that our prototype remains transparent and undetected against the samples. Furthermore, our prototype of MALT introduces moderate but manageable overheads on both Windows and Linux platforms.
【Keywords】: Internet; Linux; invasive software; program debugging; software architecture; virtual machines; virtualisation; Internet; Linux platforms; MALT; Windows platforms; debugging transparency; emulation technology; hardware features; malicious behaviors; malware attacks; rapid proliferation; system management mode; virtual machines; virtualization technology; x86 architecture; Debugging; Hardware; Kernel; Malware; Registers; Servers; Virtualization; SMM; malware debugging; transparency
【Paper Link】 【Pages】:70-85
【Authors】: Ulrich Rührmair ; J. L. Martinez-Hurtado ; Xiaolin Xu ; Christian Kraeh ; Christian Hilgers ; Dima Kononchuk ; Jonathan J. Finley ; Wayne P. Burleson
【Abstract】: We discuss the question of how physical statements can be proven over digital communication channels between two parties (a "prover" and a "verifier") residing in two separate local systems. Examples include: (i) "a certain object in the prover's system has temperature X°C", (ii) "two certain objects in the prover's system are positioned at distance X", or (iii) "a certain object in the prover's system has been irreversibly altered or destroyed". As illustrated by these examples, our treatment goes beyond classical security sensors in considering more general physical statements. Another distinctive aspect is the underlying security model: We neither assume secret keys in the prover's system, nor do we suppose classical sensor hardware in his system which is tamper-resistant and trusted by the verifier. Without an established name, we call this new type of security protocol a "virtual proof of reality" or simply a "virtual proof" (VP). In order to illustrate our novel concept, we give example VPs based on temperature sensitive integrated circuits, disordered optical scattering media, and quantum systems. The corresponding protocols prove the temperature, relative position, or destruction/modification of certain physical objects in the prover's system to the verifier. These objects (so-called "witness objects") are prepared by the verifier and handed over to the prover prior to the VP. Furthermore, we verify the practical validity of our method for all our optical and circuit-based VPs in detailed proof-of-concept experiments. Our work touches upon, and partly extends, several established concepts in cryptography and security, including physical unclonable functions, quantum cryptography, interactive proof systems, and, most recently, physical zero-knowledge proofs. We also discuss potential advancements of our method, for example "public virtual proofs" that function without exchanging witness objects between the verifier and the prover.
【Keywords】: cryptographic protocols; private key cryptography; quantum cryptography; trusted computing; circuit-based VP; digital communication channels; disordered optical scattering media; interactive proof systems; optical-based VP; physical implementation; physical statements; physical unclonable functions; physical zero-knowledge proofs; proof-of-concept experiments; prover system; public virtual proofs; quantum cryptography; quantum systems; secret keys; security protocol; temperature sensitive integrated circuits; virtual proof of reality; witness objects; Cryptography; Protocols; Temperature distribution; Temperature measurement; Temperature sensors; Interactive Proof Systems; Keyless Security Sensors; Physical Cryptography; Physical Unclonable Functions (PUFs); Physical Zero-Knowledge Proofs; Quantum Cryptography; Virtual Proofs (VPs) of Reality
【Paper Link】 【Pages】:89-103
【Authors】: Ittay Eyal
【Abstract】: An open distributed system can be secured by requiring participants to present proof of work and rewarding them for participation. The Bitcoin digital currency introduced this mechanism, which is adopted by almost all contemporary digital currencies and related services. A natural process leads participants of such systems to form pools, where members aggregate their power and share the rewards. Experience with Bitcoin shows that the largest pools are often open, allowing anyone to join. It has long been known that a member can sabotage an open pool by seemingly joining it but never sharing its proofs of work. The pool shares its revenue with the attacker, and so each of its participants earns less. We define and analyze a game where pools use some of their participants to infiltrate other pools and perform such an attack. With any number of pools, no-pool-attacks is not a Nash equilibrium. We study the special cases where either two pools or any number of identical pools play the game and the rest of the participants are uninvolved. In both of these cases there exists an equilibrium that constitutes a tragedy of the commons where the participating pools attack one another and earn less than they would have if none had attacked. For two pools, the decision whether or not to attack is the miner's dilemma, an instance of the iterative prisoner's dilemma. The game is played daily by the active Bitcoin pools, which apparently choose not to attack. If this balance breaks, the revenue of open pools might diminish, making them unattractive to participants.
【Keywords】: data mining; distributed processing; financial data processing; game theory; iterative methods; Bitcoin digital currency; contemporary digital currencies; game; iterative prisoner's dilemma; miner's dilemma; no-pool-attacks; open distributed system; Data mining; Exponential distribution; Games; Nash equilibrium; Online banking; Protocols; Registers; bitcoin; cryptocurrency; mining; pools; proof of work
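The block-withholding game described above can be made concrete with a small numerical model. The sketch below follows the revenue-sharing logic in the abstract (infiltrating power earns pool shares but contributes no blocks); the variable names and example numbers are ours, not the paper's.

```python
# Toy model of block withholding between two open pools.
# m1, m2: loyal mining power of each pool; x12: power pool 1 sends to
# infiltrate pool 2 (and vice versa); total network power is 1.

def revenue_densities(m1, m2, x12, x21, total=1.0):
    # Withheld (infiltrating) power never submits full blocks, so it drops
    # out of effective network power; direct revenue scales accordingly.
    eff = total - x12 - x21
    R1 = (m1 - x12) / eff
    R2 = (m2 - x21) / eff
    # Each pool splits (direct revenue + infiltration earnings) over all
    # reported shares, loyal plus infiltrating:
    #   r1*(m1 + x21) = R1 + x12*r2
    #   r2*(m2 + x12) = R2 + x21*r1
    a, b = m1 + x21, m2 + x12
    r1 = (R1 * b + x12 * R2) / (a * b - x12 * x21)
    r2 = (R2 + x21 * r1) / b
    return r1, r2

# Two pools of 25% each; the rest of the network mines solo.
peace, _ = revenue_densities(0.25, 0.25, 0.0, 0.0)
one_sided, _ = revenue_densities(0.25, 0.25, 0.025, 0.0)
mutual, _ = revenue_densities(0.25, 0.25, 0.025, 0.025)
```

With these numbers, a unilateral attack lifts the attacker's revenue density above the no-attack baseline, while mutual attack drives both pools below it: the prisoner's-dilemma structure the abstract calls the miner's dilemma.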
【Paper Link】 【Pages】:104-121
【Authors】: Joseph Bonneau ; Andrew Miller ; Jeremy Clark ; Arvind Narayanan ; Joshua A. Kroll ; Edward W. Felten
【Abstract】: Bitcoin has emerged as the most successful cryptographic currency in history. Within two years of its quiet launch in 2009, Bitcoin grew to comprise billions of dollars of economic value despite only cursory analysis of the system's design. Since then a growing literature has identified hidden-but-important properties of the system, discovered attacks, proposed promising alternatives, and singled out difficult future challenges. Meanwhile a large and vibrant open-source community has proposed and deployed numerous modifications and extensions. We provide the first systematic exposition of Bitcoin and the many related cryptocurrencies or 'altcoins.' Drawing from a scattered body of knowledge, we identify three key components of Bitcoin's design that can be decoupled. This enables a more insightful analysis of Bitcoin's properties and future stability. We map the design space for numerous proposed modifications, providing comparative analyses for alternative consensus mechanisms, currency allocation mechanisms, computational puzzles, and key management tools. We survey anonymity issues in Bitcoin and provide an evaluation framework for analyzing a variety of privacy-enhancing proposals. Finally we provide new insights on what we term disintermediation protocols, which absolve the need for trusted intermediaries in an interesting set of applications. We identify three general disintermediation strategies and provide a detailed comparison.
【Keywords】: cryptography; data privacy; electronic money; financial data processing; protocols; Bitcoin; computational puzzle; consensus mechanism; cryptocurrency; cryptographic currency; currency allocation mechanism; disintermediation protocol; key management tool; privacy-enhancing proposal; Communities; Cryptography; Online banking; Peer-to-peer computing; Proposals; Protocols
【Paper Link】 【Pages】:122-134
【Authors】: Alex Biryukov ; Ivan Pustogarov
【Abstract】: Bitcoin is a decentralized P2P digital currency in which coins are generated by a distributed set of miners and transactions are broadcast via a peer-to-peer network. While Bitcoin provides some level of anonymity (or rather pseudonymity) by encouraging the users to have any number of random-looking Bitcoin addresses, recent research shows that this level of anonymity is rather low. This encourages users to connect to the Bitcoin network through anonymizers like Tor and motivates development of default Tor functionality for popular mobile SPV clients. In this paper we show that combining Tor and Bitcoin creates a new attack vector. A low-resource attacker can gain full control of information flows between all users who chose to use Bitcoin over Tor. In particular the attacker can link together a user's transactions regardless of pseudonyms used, control which Bitcoin blocks and transactions are relayed to the user, and can delay or discard the user's transactions and blocks. Moreover, we show how an attacker can fingerprint users and then recognize them and learn their IP addresses when they decide to connect to the Bitcoin network directly.
【Keywords】: IP networks; peer-to-peer computing; security of data; Bitcoin network; Bitcoin; IP address; decentralized P2P digital currency; default Tor functionality; information flow; low-resource attacker; peer-to-peer network; popular mobile SPV client; pseudonymity; random-looking Bitcoin address; user transactions; Bandwidth; Databases; IP networks; Online banking; Peer-to-peer computing; Relays; Servers; Anonymity; Bitcoin; P2P; Security; Tor; cryptocurrency
【Paper Link】 【Pages】:135-150
【Authors】: Mohammad Taha Khan ; Xiang Huo ; Zhou Li ; Chris Kanich
【Abstract】: While we have a good understanding of how cybercrime is perpetrated and the profits of the attackers, the harm experienced by humans is less well understood, and reducing this harm should be the ultimate goal of any security intervention. This paper presents a strategy for quantifying the harm caused by the cybercrime of typosquatting via the novel technique of intent inference. Intent inference allows us to define a new metric for quantifying harm to users, develop a new methodology for identifying typosquatting domain names, and quantify the harm caused by various typosquatting perpetrators. We find that typosquatting costs the typical user 1.3 seconds per typosquatting event over the alternative of receiving a browser error page, and legitimate sites lose approximately 5% of their mistyped traffic over the alternative of an unregistered typo. Although on average perpetrators increase the time it takes a user to find their intended site, many typosquatters actually improve the latency between a typo and its correction, calling into question the necessity of harsh penalties or legal intervention against this flavor of cybercrime.
【Keywords】: computer crime; law; browser error page; cybercrime; intent inference; legal intervention; security intervention; typosquatting domain name identification; typosquatting; Accuracy; Browsers; Computer crime; Internet; Malware; Measurement; cybercrime; economics; measurement
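The paper identifies typosquatting domains via intent inference; a common preliminary step in this line of work (sketched below as our illustration, not the paper's actual methodology) is enumerating the edit-distance-one "typo neighborhood" of a popular domain: omissions, duplications, substitutions, and adjacent transpositions.

```python
# Enumerate single-edit typo candidates for a domain's second-level label.
import string

def typo_candidates(domain: str) -> set:
    name, _, tld = domain.rpartition(".")
    cands = set()
    for i in range(len(name)):
        cands.add(name[:i] + name[i+1:])               # character omission
        cands.add(name[:i] + name[i] * 2 + name[i+1:])  # character duplication
        for c in string.ascii_lowercase:                # substitution
            cands.add(name[:i] + c + name[i+1:])
    for i in range(len(name) - 1):                      # adjacent transposition
        cands.add(name[:i] + name[i+1] + name[i] + name[i+2:])
    cands.discard(name)  # the correctly spelled name is not a typo
    return {f"{c}.{tld}" for c in cands}
```

For example, `typo_candidates("google.com")` contains `gogle.com` (omission), `googel.com` (transposition), and `ggoogle.com` (duplication). Real measurement pipelines additionally rank candidates by registration status and traffic.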
【Paper Link】 【Pages】:151-167
【Authors】: Kurt Thomas ; Elie Bursztein ; Chris Grier ; Grant Ho ; Nav Jagpal ; Alexandros Kapravelos ; Damon McCoy ; Antonio Nappa ; Vern Paxson ; Paul Pearce ; Niels Provos ; Moheeb Abu Rajab
【Abstract】: Today, web injection manifests in many forms, but fundamentally occurs when malicious and unwanted actors tamper directly with browser sessions for their own profit. In this work we illuminate the scope and negative impact of one of these forms, ad injection, in which users have ads imposed on them in addition to, or different from, those that websites originally sent them. We develop a multi-staged pipeline that identifies ad injection in the wild and captures its distribution and revenue chains. We find that ad injection has entrenched itself as a cross-browser monetization platform impacting more than 5% of unique daily IP addresses accessing Google -- tens of millions of users around the globe. Injected ads arrive on a client's machine through multiple vectors: our measurements identify 50,870 Chrome extensions and 34,407 Windows binaries, 38% and 17% of which are explicitly malicious. A small number of software developers support the vast majority of these injectors, who in turn syndicate from the larger ad ecosystem. We have contacted the Chrome Web Store and the advertisers targeted by ad injectors to alert each to the deceptive practices involved.
【Keywords】: advertising; online front-ends; profitability; Chrome Web store; Web injection; browser sessions; deceptive advertisement modifications; distribution chains; multistaged pipeline; profit; revenue chains; Browsers; Ecosystems; Google; Internet; Libraries; Pipelines; Security; ad fraud; ad injection; web injection
【Paper Link】 【Pages】:171-186
【Authors】: Liang Zhu ; Zi Hu ; John S. Heidemann ; Duane Wessels ; Allison Mankin ; Nikita Somaiya
【Abstract】: The Domain Name System (DNS) seems ideal for connectionless UDP, yet this choice results in challenges of eavesdropping that compromises privacy, source-address spoofing that simplifies denial-of-service (DoS) attacks on the server and third parties, injection attacks that exploit fragmentation, and reply-size limits that constrain key sizes and policy choices. We propose T-DNS to address these problems. It uses TCP to smoothly support large payloads and to mitigate spoofing and amplification for DoS. T-DNS uses transport-layer security (TLS) to provide privacy from users to their DNS resolvers and optionally to authoritative servers. TCP and TLS are hardly novel, and expectations about DNS suggest connections will balloon client latency and overwhelm servers with state. Our contribution is to show that T-DNS significantly improves security and privacy: TCP prevents denial-of-service (DoS) amplification against others, reduces the effects of DoS on the server, and simplifies policy choices about key size. TLS protects against eavesdroppers to the recursive resolver. Our second contribution is to show that with careful implementation choices, these benefits come at only modest cost: end-to-end latency from TLS to the recursive resolver is only about 9% slower when UDP is used to the authoritative server, and 22% slower with TCP to the authoritative. With diverse traces we show that connection reuse can be frequent (60-95% for stub and recursive resolvers, although half that for authoritative servers), and after connection establishment, experiments show that TCP and TLS latency is equivalent to UDP. With conservative timeouts (20 s at authoritative servers and 60 s elsewhere) and estimated per-connection memory, we show that server memory requirements match current hardware: a large recursive resolver may have 24k active connections requiring about 3.6 GB additional RAM. Good performance requires key design and implementation decisions we identify: query pipelining, out-of-order responses, TCP fast-open and TLS connection resumption, and plausible timeouts.
【Keywords】: Internet; computer network security; data privacy; query processing; transport protocols; DoS attacks; T-DNS; TCP fast-open; TLS connection resumption; connection-oriented DNS; connectionless UDP; denial-of-service attacks; domain name system; out-of-order responses; privacy; query pipelining; source-address spoofing; transport-layer security; Computer crime; IP networks; Pipeline processing; Privacy; Protocols; Servers
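The abstract's provisioning claim is easy to sanity-check: 24k active connections needing roughly 3.6 GB implies on the order of 150 KB of state per TCP/TLS connection. That per-connection figure is our back-of-envelope inference from the two quoted numbers, not a constant the abstract states directly.

```python
# Back-of-envelope memory estimate for connection-oriented DNS serving.
PER_CONNECTION_BYTES = 150_000  # inferred: ~3.6 GB / 24k connections

def extra_ram_gb(active_connections: int) -> float:
    """Estimated additional server RAM (decimal GB) for TCP/TLS state."""
    return active_connections * PER_CONNECTION_BYTES / 1e9
```

Plugging the paper's 24k-connection recursive-resolver scenario back in reproduces the quoted ~3.6 GB; scaling to other deployment sizes is then linear under this (simplistic) constant-per-connection assumption.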
【Paper Link】 【Pages】:187-198
【Authors】: Ryan Rasti ; Mukul Murthy ; Nicholas Weaver ; Vern Paxson
【Abstract】: We introduce "temporal lensing": a technique that concentrates a relatively low-bandwidth flood into a short, high-bandwidth pulse. By leveraging existing DNS infrastructure, we experimentally explore lensing and the properties of the pulses it creates. We also empirically show how attackers can use lensing alone to achieve peak bandwidths more than an order of magnitude greater than their upload bandwidth. While formidable by itself in a pulsing DoS attack, attackers can also combine lensing with amplification to potentially produce pulses with peak bandwidths orders of magnitude larger than their own.
【Keywords】: computer network security; DNS infrastructure; amplification; denial-of-service attacks; domain name system; pulsing DoS attack; temporal lensing technique; Bandwidth; Computer crime; Internet; Optimal scheduling; Schedules; Servers; Throughput; DoS; attacks; pulsing
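The scheduling idea behind lensing can be shown in a few lines: if the attacker knows (or estimates) the path latency through each open resolver, it sends through slower paths earlier so that all responses land on the victim in one narrow window. The latencies below are invented for illustration; the paper's contribution is measuring and exploiting real DNS-infrastructure latencies.

```python
# Temporal lensing, schematically: stagger sends so responses converge.

def send_schedule(latencies_s, impact_time_s):
    # Fire each query early by exactly its path latency.
    return [impact_time_s - lat for lat in latencies_s]

def arrival_times(send_times_s, latencies_s):
    return [s + lat for s, lat in zip(send_times_s, latencies_s)]

latencies = [0.020, 0.085, 0.140, 0.300]  # hypothetical per-resolver delays
sends = send_schedule(latencies, impact_time_s=1.0)
arrivals = arrival_times(sends, latencies)
```

Because the sends are spread over the latency range while the arrivals coincide, the victim-side peak bandwidth can exceed the attacker's sustained upload bandwidth by roughly the ratio of the spread to the arrival window.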
【Paper Link】 【Pages】:199-213
【Authors】: Matthias Schäfer ; Vincent Lenders ; Jens B. Schmitt
【Abstract】: We propose a new approach for securely verifying sequences of location claims from mobile nodes. The key idea is to exploit the inherent mobility of the nodes in order to constrain the degree of freedom of an attacker when spoofing consecutive location updates along a claimed track. We show that in the absence of noise, our approach is able to securely verify any 2-D track with a minimum of three verifiers or any 3-D track with four verifiers. Our approach is lightweight in the sense that it considerably relaxes the system requirements compared to previous secure location verification schemes which are all agnostic to mobility. As opposed to previous schemes, our track verification solution is at the same time (i) passive, (ii) does not require any time synchronization among the verifiers, (iii) does not need to keep the location of the verifiers secret, (iv) nor does it require specialized hardware. This makes our solution particularly suitable for large-scale deployments. We have evaluated our solution in a realistic air traffic monitoring scenario using real-world data. Our results show that 25 position claims on a track are sufficient to detect spoofing attacks with a false positive rate of 1.4% and a false negative rate of 1.2%. For tracks with more than 40 claims, the false positive and false negative rates drop to zero.
【Keywords】: aerospace computing; air traffic; formal verification; security of data; air traffic monitoring scenario; consecutive location update spoofing; degree of freedom; location claims; sequence verification; spoofing attack detection; track verification security; Aircraft; Aircraft navigation; Clocks; Mathematical model; Noise; Propagation delay; Tracking
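A key point above is that verifiers need no mutual time synchronization: each verifier can check a track using only differences of arrival times on its own clock, so any constant clock offset cancels. The toy 2-D model below (our reconstruction, with unit propagation speed and no noise, not the paper's full scheme) makes that concrete for a single verifier.

```python
# Toy single-verifier consistency check for a claimed track (2-D, c = 1).
import math

C = 1.0  # propagation speed in toy units

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def arrival(verifier, pos, t_send, clock_offset):
    # What the verifier's (arbitrarily offset) clock reads on reception.
    return t_send + dist(verifier, pos) / C + clock_offset

def consistent(verifier, claims, arrivals, tol=1e-6):
    # claims: [(claimed_pos, claimed_send_time)], arrivals on ONE local clock.
    # The offset cancels in (t - t0), so no synchronization is needed.
    (p0, s0), t0 = claims[0], arrivals[0]
    for (p, s), t in zip(claims[1:], arrivals[1:]):
        expected = (s - s0) + (dist(verifier, p) - dist(verifier, p0)) / C
        if abs((t - t0) - expected) > tol:
            return False
    return True

verifier = (0.0, 0.0)
offset = 123.4  # unknown to anyone; cancels out in differences
track = [((10.0, 0.0), 0.0), ((8.0, 0.0), 1.0), ((6.0, 0.0), 2.0)]
obs = [arrival(verifier, p, s, offset) for p, s in track]
spoofed = [((10.0, 0.0), 0.0), ((8.0, 0.0), 1.0), ((6.0, 5.0), 2.0)]
```

A single verifier only constrains the track along its own lines of position, which is why the paper requires three verifiers for arbitrary 2-D tracks (four in 3-D) in the noise-free case.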
【Paper Link】 【Pages】:214-231
【Authors】: Robert Lychev ; Samuel Jero ; Alexandra Boldyreva ; Cristina Nita-Rotaru
【Abstract】: QUIC is a secure transport protocol developed by Google and implemented in Chrome in 2013, currently representing one of the most promising solutions to decreasing latency while intending to provide security properties similar to TLS. In this work we shed some light on QUIC's strengths and weaknesses in terms of its provable security and performance guarantees in the presence of attackers. We first introduce a security model for analyzing performance-driven protocols like QUIC and prove that QUIC satisfies our definition under reasonable assumptions on the protocol's building blocks. However, we find that QUIC does not satisfy the traditional notion of forward secrecy that is provided by some modes of TLS, e.g., TLS-DHE. Our analyses also reveal that with simple bit-flipping and replay attacks on some public parameters exchanged during the handshake, an adversary could easily prevent QUIC from achieving minimal latency advantages either by having it fall back to TCP or by causing the client and server to have an inconsistent view of their handshake leading to a failure to complete the connection. We have implemented these attacks and demonstrated that they are practical. Our results suggest that QUIC's security weaknesses are introduced by the very mechanisms used to reduce latency, which highlights the seemingly inherent trade-off between minimizing latency and providing "good" security guarantees.
【Keywords】: client-server systems; computer network security; transport protocols; Chrome; Google; QUIC; TLS-DHE; bit-flipping; performance analysis; performance guarantee; provable security; secure transport protocol; Encryption; IP networks; Protocols; Public key; Servers
【Paper Link】 【Pages】:232-249
【Authors】: Nik Unger ; Sergej Dechand ; Joseph Bonneau ; Sascha Fahl ; Henning Perl ; Ian Goldberg ; Matthew Smith
【Abstract】: Motivated by recent revelations of widespread state surveillance of personal communication, many solutions now claim to offer secure and private messaging. This includes both a large number of new projects and many widely adopted tools that have added security features. The intense pressure in the past two years to deliver solutions quickly has resulted in varying threat models, incomplete objectives, dubious security claims, and a lack of broad perspective on the existing cryptographic literature on secure communication. In this paper, we evaluate and systematize current secure messaging solutions and propose an evaluation framework for their security, usability, and ease-of-adoption properties. We consider solutions from academia, but also identify innovative and promising approaches used "in-the-wild" that are not considered by the academic literature. We identify three key challenges and map the design landscape for each: trust establishment, conversation security, and transport privacy. Trust establishment approaches offering strong security and privacy features perform poorly from a usability and adoption perspective, whereas some hybrid approaches that have not been well studied in the academic literature might provide better trade-offs in practice. In contrast, once trust is established, conversation security can be achieved without any user involvement in most two-party conversations, though conversations between larger groups still lack a good solution. Finally, transport privacy appears to be the most difficult problem to solve without paying significant performance penalties.
【Keywords】: cryptography; data privacy; message passing; SoK; conversation security; cryptographic literature; personal communication; private messaging; secure messaging; state surveillance; transport privacy; Encryption; Privacy; Protocols; Synthetic aperture sonar; Usability
【Paper Link】 【Pages】:253-270
【Authors】: Craig Costello ; Cédric Fournet ; Jon Howell ; Markulf Kohlweiss ; Benjamin Kreuter ; Michael Naehrig ; Bryan Parno ; Samee Zahur
【Abstract】: Cloud computing sparked interest in Verifiable Computation protocols, which allow a weak client to securely outsource computations to remote parties. Recent work has dramatically reduced the client's cost to verify the correctness of their results, but the overhead to produce proofs remains largely impractical. Geppetto introduces complementary techniques for reducing prover overhead and increasing prover flexibility. With MultiQAPs, Geppetto reduces the cost of sharing state between computations (e.g., for MapReduce) or within a single computation by up to two orders of magnitude. Via a careful choice of cryptographic primitives, Geppetto's instantiation of bounded proof bootstrapping improves on prior bootstrapped systems by up to five orders of magnitude, albeit at some cost in universality. Geppetto also efficiently verifies the correct execution of proprietary (i.e., secret) algorithms. Finally, Geppetto's use of energy-saving circuits brings the prover's costs more in line with the program's actual (rather than worst-case) execution time. Geppetto is implemented in a full-fledged, scalable compiler and runtime that consume LLVM code generated from a variety of source C programs and cryptographic libraries.
【Keywords】: cloud computing; computer bootstrapping; cryptographic protocols; program compilers; program verification; Geppetto; LLVM code generation; QAPs; bootstrapped systems; bounded proof bootstrapping; cloud computing; compiler; correctness verification; cryptographic libraries; cryptographic primitives; energy-saving circuits; outsource computation security; prover flexibility; prover overhead reduction; source C programs; verifiable computation protocols; Cryptography; Generators; Libraries; Logic gates; Protocols; Random access memory; Schedules
【Paper Link】 【Pages】:271-286
【Authors】: Michael Backes ; Manuel Barbosa ; Dario Fiore ; Raphael M. Reischuk
【Abstract】: We study the problem of privacy-preserving proofs on authenticated data, where a party receives data from a trusted source and is requested to prove computations over the data to third parties in a correct and private way, i.e., the third party learns no information on the data but is still assured that the claimed proof is valid. Our work particularly focuses on the challenging requirement that the third party should be able to verify the validity with respect to the specific data authenticated by the source -- even without having access to that source. This problem is motivated by various scenarios emerging from several application areas such as wearable computing, smart metering, or general business-to-business interactions. Furthermore, these applications also demand any meaningful solution to satisfy additional properties related to usability and scalability. In this paper, we formalize the above three-party model, discuss concrete application scenarios, and then we design, build, and evaluate ADSNARK, a nearly practical system for proving arbitrary computations over authenticated data in a privacy-preserving manner. ADSNARK improves significantly over state-of-the-art solutions for this model. For instance, compared to corresponding solutions based on Pinocchio (Oakland'13), ADSNARK achieves up to 25x improvement in proof-computation time and a 20x reduction in prover storage space.
【Keywords】: computational complexity; cryptography; data privacy; message authentication; trusted computing; ADSNARK; authenticated data; cryptography; general business-to-business interactions; privacy-preserving proofs; proof-computation time; prover storage space; scalability; smart metering; third party; three-party model; trusted source; usability; wearable computing; Computational modeling; Cryptography; Data privacy; Logic gates; Polynomials; Wires; authentication; privacy; privacy-preserving proofs
【Paper Link】 【Pages】:287-304
【Authors】: Eli Ben-Sasson ; Alessandro Chiesa ; Matthew Green ; Eran Tromer ; Madars Virza
【Abstract】: Non-interactive zero-knowledge proofs (NIZKs) are a powerful cryptographic tool, with numerous potential applications. However, succinct NIZKs (e.g., zk-SNARK schemes) necessitate a trusted party to generate and publish some public parameters, to be used by all provers and verifiers. This party is trusted to correctly run a probabilistic algorithm (specified by the proof system) that outputs the public parameters, and to publish them without leaking any other information (such as the internal randomness used by the algorithm); violating either requirement may allow malicious parties to produce convincing "proofs" of false statements. This trust requirement poses a serious impediment to deploying NIZKs in many applications, because a party that is trusted by all users of the envisioned system may simply not exist. In this work, we show how public parameters for a class of NIZKs can be generated by a multi-party protocol, such that if at least one of the parties is honest, then the result is secure (in both aforementioned senses) and can be subsequently used for generating and verifying numerous proofs without any further trust. We design and implement such a protocol, tailored to efficiently support the state-of-the-art NIZK constructions with short and easy-to-verify proofs (Parno et al. IEEE S&P '13, Ben-Sasson et al. USENIX Sec '14, Danezis et al. ASIACRYPT '14). Applications of our system include generating public parameters for systems such as Zerocash (Ben-Sasson et al. IEEE S&P '14) and the scalable zero-knowledge proof system of Ben-Sasson et al. (CRYPTO '14).
【Keywords】: cryptography; trusted computing; NIZK; cryptographic tool; multiparty protocol; noninteractive zero-knowledge proofs; probabilistic algorithm; public parameter generation; public parameter sampling security; scalable zero-knowledge proof system; succinct zero knowledge proofs; trusted party; Complexity theory; Cryptography; Encoding; Frequency modulation; Logic gates; Protocols; Servers; distributed key generation; succinct non-interactive arguments; zero knowledge
【Paper Link】 【Pages】:305-320
【Authors】: Matthew D. Green ; Ian Miers
【Abstract】: In this paper we investigate new mechanisms for achieving forward secure encryption in store-and-forward messaging systems such as email and SMS. In a forward secure encryption scheme, a user periodically updates her secret key so that past messages remain confidential in the event that her key is compromised. A primary contribution of our work is to introduce a new form of encryption that we name puncturable encryption. Using a puncturable encryption scheme, recipients may repeatedly update their decryption keys to revoke decryption capability for selected messages, recipients, or time periods. Most importantly, this update process does not require the recipients to communicate with or distribute new key material to senders. We show how to combine puncturable encryption with the forward-secure public key encryption proposal of Canetti et al. to achieve practical forward-secure messaging with low overhead. We implement our schemes and provide experimental evidence that the new constructions are practical.
【Keywords】: public key cryptography; SMS; decryption capability; email; forward messaging system; forward secure asynchronous messaging; forward secure encryption scheme; forward-secure messaging; forward-secure public key encryption proposal; puncturable encryption; secret key; Electronic mail; Encryption; Games; Proposals; Public key
【Paper Link】 【Pages】:321-338
【Authors】: Henry Corrigan-Gibbs ; Dan Boneh ; David Mazières
【Abstract】: This paper presents Riposte, a new system for anonymous broadcast messaging. Riposte is the first such system, to our knowledge, that simultaneously protects against traffic-analysis attacks, prevents anonymous denial-of-service by malicious clients, and scales to million-user anonymity sets. To achieve these properties, Riposte makes novel use of techniques used in systems for private information retrieval and secure multi-party computation. For latency-tolerant workloads with many more readers than writers (e.g. Twitter, Wikileaks), we demonstrate that a three-server Riposte cluster can build an anonymity set of 2,895,216 users in 32 hours.
【Keywords】: Internet; data protection; electronic mail; electronic messaging; information retrieval; security of data; social networking (online); Riposte; Twitter; Wikileaks; anonymous broadcast messaging system; denial-of-service; latency-tolerant workload; multiparty computation security; private information retrieval; traffic-analysis attack protection; Cryptography; Databases; Privacy; Protocols; Resistance; Servers; anonymity; messaging; privacy; private information retrieval
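Riposte's anonymous-write core can be illustrated with a much-simplified toy. The paper compresses write requests using distributed point functions; the sketch below skips that compression and simply XOR-splits a one-hot write vector across servers, so that no single server learns which row was written, yet the merged tables reveal the message. All names and parameters here are illustrative, not from the paper:

```python
import secrets

ROWS, MSG_LEN = 8, 4  # toy parameters

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_write_shares(row, msg, n_servers=3):
    """Split a one-hot write vector (msg at `row`, zeros elsewhere)
    into XOR shares, one full-length vector per server."""
    vec = [msg if r == row else bytes(MSG_LEN) for r in range(ROWS)]
    shares = [[secrets.token_bytes(MSG_LEN) for _ in range(ROWS)]
              for _ in range(n_servers - 1)]
    last = vec
    for s in shares:                      # last share = vec XOR all random shares
        last = [xor(last[r], s[r]) for r in range(ROWS)]
    return shares + [last]

def apply_share(table, share):
    """Each server XORs its share into its local table."""
    return [xor(table[r], share[r]) for r in range(ROWS)]

tables = [[bytes(MSG_LEN)] * ROWS for _ in range(3)]
for i, sh in enumerate(make_write_shares(5, b"hi!!")):
    tables[i] = apply_share(tables[i], sh)

merged = tables[0]                        # combining all tables recovers the write
for t in tables[1:]:
    merged = [xor(merged[r], t[r]) for r in range(ROWS)]
```

Any single share (or any proper subset of them) is uniformly random, which is what hides the written row; real Riposte additionally shrinks each share from O(rows) to roughly O(sqrt(rows)) and adds defenses against malicious writers.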
【Paper Link】 【Pages】:341-358
【Authors】: Matteo Maffei ; Giulio Malavolta ; Manuel Reinert ; Dominique Schröder
【Abstract】: Cloud storage has rapidly become a cornerstone of many IT infrastructures, constituting a seamless solution for the backup, synchronization, and sharing of large amounts of data. Putting user data in the direct control of cloud service providers, however, raises security and privacy concerns related to the integrity of outsourced data, the accidental or intentional leakage of sensitive information, the profiling of user activities and so on. Furthermore, even if the cloud provider is trusted, users having access to outsourced files might be malicious and misbehave. These concerns are particularly serious in sensitive applications like personal health records and credit score systems. To tackle this problem, we present GORAM, a cryptographic system that protects the secrecy and integrity of outsourced data with respect to both an untrusted server and malicious clients, guarantees the anonymity and unlinkability of accesses to such data, and allows the data owner to share outsourced data with other clients, selectively granting them read and write permissions. GORAM is the first system to achieve such a wide range of security and privacy properties for outsourced storage. In the process of designing an efficient construction, we developed two new, generally applicable cryptographic schemes, namely, batched zero-knowledge proofs of shuffle and an accountability technique based on chameleon signatures, which we consider of independent interest. We implemented GORAM in Amazon Elastic Compute Cloud (EC2) and ran a performance evaluation demonstrating the scalability and efficiency of our construction.
【Keywords】: authorisation; cloud computing; cryptography; data integrity; data protection; outsourcing; storage management; Amazon elastic compute cloud; EC2; GORAM cryptographic system; IT infrastructures; access control; accountability technique; batched zero-knowledge shuffle proofs; chameleon signatures; cloud provider; cloud storage; credit score systems; data backup; data sharing; data synchronization; outsourced data integrity; outsourced data protection; outsourced data secrecy; outsourced personal records; outsourced storage privacy property; performance evaluation; Encryption; Indexes; Protocols; Servers
【Paper Link】 【Pages】:359-376
【Authors】: Chang Liu ; Xiao Shaun Wang ; Kartik Nayak ; Yan Huang ; Elaine Shi
【Abstract】: We design and develop ObliVM, a programming framework for secure computation. ObliVM offers a domain specific language designed for compilation of programs into efficient oblivious representations suitable for secure computation. ObliVM offers a powerful, expressive programming language and user-friendly oblivious programming abstractions. We develop various showcase applications such as data mining, streaming algorithms, graph algorithms, genomic data analysis, and data structures, and demonstrate the scalability of ObliVM to bigger data sizes. We also show how ObliVM significantly reduces development effort while retaining competitive performance for a wide range of applications in comparison with hand-crafted solutions. We are in the process of open-sourcing ObliVM and our rich libraries to the community (www.oblivm.com), offering a reusable framework to implement and distribute new cryptographic algorithms.
【Keywords】: cryptography; programming; specification languages; ObliVM programming framework; cryptographic algorithms; domain specific language; program compilation; programming abstraction; secure computation; Cryptography; Libraries; Logic gates; Program processors; Programming; Protocols; Compiler; Oblivious Algorithms; Oblivious RAM; Programming Language; Secure Computation; Type System
【Paper Link】 【Pages】:377-394
【Authors】: Kartik Nayak ; Xiao Shaun Wang ; Stratis Ioannidis ; Udi Weinsberg ; Nina Taft ; Elaine Shi
【Abstract】: We propose bringing modern parallel programming paradigms to secure computation, enabling secure execution on large datasets. To address this challenge, we present GraphSC, a framework that (i) provides a programming paradigm that allows non-cryptography experts to write secure code, (ii) brings parallelism to such secure implementations, and (iii) meets the need for obliviousness, thereby not leaking any private information. Using GraphSC, developers can efficiently implement an oblivious version of graph-based algorithms (including sophisticated data mining and machine learning algorithms) that execute in parallel with minimal communication overhead. Importantly, our secure version of graph-based algorithms incurs a small logarithmic overhead in comparison with the non-secure parallel version. We build GraphSC and demonstrate, using several algorithms as examples, that secure computation can be brought into the realm of practicality for big data analysis. Our secure matrix factorization implementation can process 1 million ratings in 13 hours, which is a multiple order-of-magnitude improvement over the only other existing attempt, which requires 3 hours to process 16K ratings.
【Keywords】: matrix decomposition; parallel programming; security of data; GraphSC framework; graph-based algorithms; logarithmic overhead; matrix factorization; parallel programming paradigm; parallel secure computation; secure code writing; Algorithm design and analysis; Clustering algorithms; Computational modeling; Data mining; Machine learning algorithms; Parallel processing; Programming; graph algorithms; oblivious algorithms; parallel algorithms; secure computation
【Paper Link】 【Pages】:395-410
【Authors】: Ben A. Fisch ; Binh Vo ; Fernando Krell ; Abishek Kumarasubramanian ; Vladimir Kolesnikov ; Tal Malkin ; Steven M. Bellovin
【Abstract】: The Blind Seer system (Oakland 2014) is an efficient and scalable DBMS that affords both client query privacy and server data protection. It also provides the ability to enforce authorization policies on the system, restricting clients' queries while maintaining the privacy of both query and policy. Blind Seer supports a rich query set, including arbitrary boolean formulas, and is provably secure with respect to a controlled amount of search pattern leakage. No other system to date achieves this tradeoff of performance, generality, and provable privacy. A major shortcoming of Blind Seer is its reliance on semi-honest security, particularly for access control and data protection. A malicious client could easily cheat the query authorization policy and obtain any database records satisfying any query of its choice, thus violating basic security features of any standard DBMS. In sum, Blind Seer offers additional privacy to a client, but sacrifices a basic security tenet of DBMS. In the present work, we completely resolve the issue of a malicious client. We show how to achieve robust access control and data protection in Blind Seer with virtually no added cost to performance or privacy. Our approach also involves a novel technique for semi-private function secure function evaluation (SPF-SFE) that may have independent applications. We fully implement our solution and report on its performance.
【Keywords】: Boolean functions; authorisation; data protection; database management systems; query processing; Blind Seer system; Boolean formulas; SPF-SFE; authorization policies; client query privacy; malicious-client security; query authorization policy; robust access control; scalable private DBMS; search pattern leakage; semiprivate function secure function evaluation; server data protection; Cryptography; Indexes; Logic gates; Privacy; Protocols; Servers; applied cryptography; private DBMS; searchable encryption
【Paper Link】 【Pages】:411-428
【Authors】: Ebrahim M. Songhori ; Siam U. Hussain ; Ahmad-Reza Sadeghi ; Thomas Schneider ; Farinaz Koushanfar
【Abstract】: We introduce TinyGarble, a novel automated methodology based on powerful logic synthesis techniques for generating and optimizing compressed Boolean circuits used in secure computation, such as Yao's Garbled Circuit (GC) protocol. TinyGarble achieves an unprecedented level of compactness and scalability by using a sequential circuit description for GC. We introduce new libraries and transformations, such that our sequential circuits can be optimized and securely evaluated by interfacing with available garbling frameworks. The circuit compactness makes the memory footprint of the garbling operation fit in the processor cache, resulting in fewer cache misses and thereby fewer CPU cycles. Our proof-of-concept implementation of benchmark functions using TinyGarble demonstrates a high degree of compactness and scalability. We improve the results of existing automated tools for GC generation by orders of magnitude; for example, TinyGarble can compress the memory footprint required for 1024-bit multiplication by a factor of 4,172, while decreasing the number of non-XOR gates by 67%. Moreover, with TinyGarble we are able to implement functions that have never been reported before, such as SHA-3. Finally, our sequential description enables us to design and realize a garbled processor, using the MIPS I instruction set, for private function evaluation. To the best of our knowledge, this is the first scalable emulation of a general purpose processor.
【Keywords】: logic circuits; logic design; TinyGarble methodology; Yao garbled circuit protocol; compactness degree; compressed Boolean circuits; general purpose processor; instruction set; logic synthesis techniques; private function evaluation; scalability degree; sequential description; sequential garbled circuits; Hardware design languages; Libraries; Logic gates; Optimization; Protocols; Sequential circuits; Wires; Garbled Circuit; Hardware Synthesis; Logic Design; Secure Function Evaluation
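The garbled-circuit primitive that TinyGarble synthesizes circuits for can be illustrated at the single-gate level. The sketch below garbles one AND gate using the classic try-all-rows construction with a zero tag to mark the decryptable row; it omits the point-and-permute, free-XOR, and row-reduction optimizations that real garbling frameworks use, and all parameters are illustrative:

```python
import hashlib
import secrets
from itertools import product

KLEN = 16                 # wire-key length in bytes (toy choice)
ZERO_TAG = bytes(KLEN)    # all-zero tag marks a successful decryption

def H(ka, kb):
    """Hash the pair of input wire keys into a 32-byte one-time pad."""
    return hashlib.sha256(ka + kb).digest()

def garble_and():
    """Garble one AND gate: two random keys per wire, four ciphertexts."""
    wa = [secrets.token_bytes(KLEN) for _ in range(2)]  # input wire a: keys for 0, 1
    wb = [secrets.token_bytes(KLEN) for _ in range(2)]  # input wire b
    wo = [secrets.token_bytes(KLEN) for _ in range(2)]  # output wire
    table = []
    for a, b in product((0, 1), repeat=2):
        pad = H(wa[a], wb[b])
        plain = wo[a & b] + ZERO_TAG                    # output key plus zero tag
        table.append(bytes(x ^ y for x, y in zip(plain, pad)))
    secrets.SystemRandom().shuffle(table)               # hide the row order
    return wa, wb, wo, table

def evaluate(ka, kb, table):
    """Holding one key per input wire, open the single row whose tag checks out."""
    pad = H(ka, kb)
    for ct in table:
        plain = bytes(x ^ y for x, y in zip(ct, pad))
        if plain[KLEN:] == ZERO_TAG:
            return plain[:KLEN]
    raise ValueError("no row decrypted")

wa, wb, wo, table = garble_and()
out_key = evaluate(wa[1], wb[1], table)   # evaluating 1 AND 1
```

The evaluator learns only an opaque output key, never the plaintext bits; TinyGarble's contribution is generating and compressing millions of such gate tables via sequential logic synthesis.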
【Paper Link】 【Pages】:431-446
【Authors】: John Vilk ; David Molnar ; Benjamin Livshits ; Eyal Ofek ; Chris Rossbach ; Alexander Moshchuk ; Helen J. Wang ; Ran Gal
【Abstract】: Immersive experiences that mix digital and real-world objects are becoming reality, but they raise serious privacy concerns as they require real-time sensor input. These experiences are already present on smartphones and game consoles via Kinect, and will eventually emerge on the web platform. However, browsers do not expose the display interfaces needed to render immersive experiences. Previous security research focuses on controlling application access to sensor input alone, and does not deal with display interfaces. Recent research in human computer interaction has explored a variety of high-level rendering interfaces for immersive experiences, but these interfaces reveal sensitive data to the application. Bringing immersive experiences to the web requires a high-level interface that mitigates privacy concerns. This paper presents SurroundWeb, the first 3D web browser, which provides the novel functionality of rendering web content onto a room while tackling many of the inherent privacy challenges. Following the principle of least privilege, we propose three abstractions for immersive rendering: 1) the room skeleton lets applications place content in response to the physical dimensions and locations of renderable surfaces in a room, 2) the detection sandbox lets applications declaratively place content near recognized objects in the room without revealing if the object is present, and 3) satellite screens let applications display content across devices registered with SurroundWeb. Through user surveys, we validate that these abstractions limit the amount of revealed information to an acceptable degree. In addition, we show that a wide range of immersive experiences can be implemented with acceptable performance.
【Keywords】: Internet; augmented reality; authorisation; data privacy; human computer interaction; online front-ends; rendering (computer graphics); 3D Web browser; Kinect; SurroundWeb; Web content rendering; application access control; detection sandbox; game consoles; high-level rendering interfaces; human computer interactions; immersive experiences; immersive rendering; least privilege principle; privacy concerns; real-time sensor input; room skeleton; satellite screens; smartphones; Browsers; Cascading style sheets; Privacy; Rendering (computer graphics); Satellites; Skeleton; Three-dimensional displays; JavaScript; augmented reality; projection mapping; web browser
【Paper Link】 【Pages】:447-462
【Authors】: Zhicong Huang ; Erman Ayday ; Jacques Fellay ; Jean-Pierre Hubaux ; Ari Juels
【Abstract】: Secure storage of genomic data is of great and increasing importance. The scientific community's improving ability to interpret individuals' genetic materials and the growing size of genetic database populations have been aggravating the potential consequences of data breaches. The prevalent use of passwords to generate encryption keys thus poses an especially serious problem when applied to genetic data. Weak passwords can jeopardize genetic data in the short term, but given the multi-decade lifespan of genetic data, even the use of strong passwords with conventional encryption can lead to compromise. We present a tool, called GenoGuard, for providing strong protection for genomic data both today and in the long term. GenoGuard incorporates a new theoretical framework for encryption called honey encryption (HE): it can provide information-theoretic confidentiality guarantees for encrypted data. Previously proposed HE schemes, however, can unfortunately be applied only to messages from a very restricted set of probability distributions. Therefore, GenoGuard addresses the open problem of applying HE techniques to the highly non-uniform probability distributions that characterize sequences of genetic data. In GenoGuard, a potential adversary can exhaustively attempt to guess keys or passwords and decrypt via a brute-force attack. We prove that decryption under any key will yield a plausible genome sequence, and that GenoGuard offers an information-theoretic security guarantee against message-recovery attacks. We also explore attacks that use side information. Finally, we present an efficient and parallelized software implementation of GenoGuard.
【Keywords】: biology computing; cryptography; data privacy; genetics; statistical distributions; storage management; GenoGuard; HE; brute-force attacks; data breaches; encryption keys; genetic database populations; genetic materials; genomic data protection; honey encryption; information-theoretic confidentiality; parallelized software implementation; passwords; probability distributions; storage security; Bioinformatics; Encoding; Encryption; Genomics; brute-force attack; distribution-transforming encoder; genomic privacy; honey encryption
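The honey-encryption idea behind GenoGuard rests on a distribution-transforming encoder (DTE): encode each message symbol as a uniform seed inside its interval of the cumulative distribution, so that decrypting under a wrong key decodes a random seed into a plausible sample rather than garbage. The sketch below is a toy per-symbol DTE with a made-up distribution and a one-time-pad-style key, not GenoGuard's actual sequence model:

```python
import secrets

# Toy per-site symbol distribution (an illustrative assumption)
PROBS = {"A": 0.5, "C": 0.25, "G": 0.15, "T": 0.10}
PRECISION = 2**16

# Build cumulative intervals [lo, hi) covering [0, PRECISION)
CUM, lo = {}, 0
for sym, p in PROBS.items():
    hi = lo + int(p * PRECISION)
    CUM[sym] = (lo, hi)
    lo = hi
CUM["T"] = (CUM["T"][0], PRECISION)   # absorb rounding into the last symbol

def encode(seq):
    """DTE encode: a uniformly random seed inside each symbol's CDF interval."""
    return [secrets.choice(range(*CUM[s])) for s in seq]

def decode(seeds):
    """Inverse CDF: any seed value maps back to some symbol."""
    return "".join(sym for v in seeds
                   for sym, (a, b) in CUM.items() if a <= v < b)

def encrypt(seq, key):
    return [v ^ k for v, k in zip(encode(seq), key)]

def decrypt(ct, key):
    return decode([c ^ k for c, k in zip(ct, key)])

key = [secrets.randbelow(PRECISION) for _ in range(6)]
ct = encrypt("ACGTAA", key)
wrong = [secrets.randbelow(PRECISION) for _ in range(6)]
decoy = decrypt(ct, wrong)   # a plausible-looking sequence, not an error
```

Because every seed value decodes to some symbol, a brute-force attacker sees only samples drawn from the encoded distribution; GenoGuard's real contribution is a DTE for the highly correlated, non-uniform distribution of genome sequences.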
【Paper Link】 【Pages】:463-480
【Authors】: Yinzhi Cao ; Junfeng Yang
【Abstract】: Today's systems produce a rapidly exploding amount of data, and the data further derives more data, forming a complex data propagation network that we call the data's lineage. There are many reasons that users want systems to forget certain data including its lineage. From a privacy perspective, users who become concerned with new privacy risks of a system often want the system to forget their data and lineage. From a security perspective, if an attacker pollutes an anomaly detector by injecting manually crafted data into the training data set, the detector must forget the injected data to regain security. From a usability perspective, a user can remove noise and incorrect entries so that a recommendation engine gives useful recommendations. Therefore, we envision forgetting systems, capable of forgetting certain data and their lineages, completely and quickly. This paper focuses on making learning systems forget, the process of which we call machine unlearning, or simply unlearning. We present a general, efficient unlearning approach by transforming learning algorithms used by a system into a summation form. To forget a training data sample, our approach simply updates a small number of summations -- asymptotically faster than retraining from scratch. Our approach is general, because the summation form is from the statistical query learning in which many machine learning algorithms can be implemented. Our approach also applies to all stages of machine learning, including feature selection and modeling. Our evaluation, on four diverse learning systems and real-world workloads, shows that our approach is general, effective, fast, and easy to use.
【Keywords】: data privacy; learning (artificial intelligence); recommender systems; security of data; complex data propagation network; data lineage; feature modeling; feature selection; forgetting systems; machine learning algorithms; machine unlearning; privacy risks; recommendation engine; security perspective; statistical query learning; summation form; usability perspective; Computational modeling; Data models; Data privacy; Feature extraction; Learning systems; Machine learning algorithms; Training data; Adversarial Machine Learning; Forgetting System; Machine Unlearning
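The summation-form idea can be made concrete with a toy count-based learner: if the model depends on the training set only through a handful of summations, forgetting one sample means subtracting its terms, which is far cheaper than retraining. This is a minimal sketch of the technique, not the paper's implementation:

```python
from collections import defaultdict

class SummationLearner:
    """Toy naive-Bayes-style learner kept entirely as summations (counts),
    so any training sample can be forgotten by subtracting its terms."""

    def __init__(self):
        self.class_count = defaultdict(int)
        self.feat_count = defaultdict(int)   # (label, feature) -> count

    def learn(self, features, label, sign=+1):
        self.class_count[label] += sign
        for f in features:
            self.feat_count[(label, f)] += sign

    def unlearn(self, features, label):
        """Forget one sample in O(#features): just subtract its contribution."""
        self.learn(features, label, sign=-1)

    def score(self, features, label):
        # Unnormalized count-based score; any zero count zeroes the score
        s = self.class_count[label]
        for f in features:
            s *= self.feat_count[(label, f)]
        return s

model = SummationLearner()
model.learn({"free", "prize"}, "spam")
model.learn({"meeting"}, "ham")
model.learn({"free", "meeting"}, "ham")   # a polluting/injected sample
model.unlearn({"free", "meeting"}, "ham") # forgotten without retraining
```

After `unlearn`, the model is identical to one trained without the injected sample, which is exactly the completeness property the paper targets; the paper generalizes this to any algorithm expressible in the statistical query learning framework.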
【Paper Link】 【Pages】:481-498
【Authors】: Rahul Chatterjee ; Joseph Bonneau ; Ari Juels ; Thomas Ristenpart
【Abstract】: Password vaults are increasingly popular applications that store multiple passwords encrypted under a single master password that the user memorizes. A password vault can greatly reduce the burden on a user of remembering passwords, but introduces a single point of failure. An attacker that obtains a user's encrypted vault can mount offline brute-force attacks and, if successful, compromise all of the passwords in the vault. In this paper, we investigate the construction of encrypted vaults that resist such offline cracking attacks and force attackers instead to mount online attacks. Our contributions are as follows. We present an attack and supporting analysis showing that a previous design for cracking-resistant vaults -- the only one of which we are aware -- actually degrades security relative to conventional password-based approaches. We then introduce a new type of secure encoding scheme that we call a natural language encoder (NLE). An NLE permits the construction of vaults which, when decrypted with the wrong master password, produce plausible-looking decoy passwords. We show how to build NLEs using existing tools from natural language processing, such as n-gram models and probabilistic context-free grammars, and evaluate their ability to generate plausible decoys. Finally, we present, implement, and evaluate a full, NLE-based cracking-resistant vault system called NoCrack.
【Keywords】: context-free grammars; cryptography; encoding; natural language processing; probability; NLE; NoCrack; cracking-resistant password vaults; cracking-resistant vault system; encoding scheme security; encrypted vault construction; force attackers; n-gram models; natural language encoders; natural language processing; offline brute-force attacks; offline cracking attacks; password encryption; plausible decoys; plausible-looking decoy passwords; probabilistic context-free grammars; Dictionaries; Encryption; Force; MySpace; Natural languages; Honey Encryption; Language Model; PCFG; Password Model; Password Vault
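The decoy-generation half of an NLE can be illustrated with a tiny character bigram model: train on password-like strings, then walk the bigram chain to emit strings that look like human-chosen passwords. This is a minimal sketch under a made-up five-word corpus, far simpler than NoCrack's n-gram and PCFG models:

```python
import random
from collections import defaultdict

# Tiny illustrative corpus of leaked-style passwords (not from the paper)
CORPUS = ["password1", "iloveyou", "monkey12", "sunshine", "princess1"]

def train_bigrams(words):
    """Character bigram counts, with ^ and $ as start/end markers."""
    counts = defaultdict(lambda: defaultdict(int))
    for w in words:
        chars = "^" + w + "$"
        for a, b in zip(chars, chars[1:]):
            counts[a][b] += 1
    return counts

def sample_decoy(counts, rng, max_len=16):
    """Walk the bigram chain to emit a plausible-looking decoy password."""
    out, cur = [], "^"
    while len(out) < max_len:
        followers = counts[cur]
        nxt = rng.choices(list(followers), weights=list(followers.values()))[0]
        if nxt == "$":
            break
        out.append(nxt)
        cur = nxt
    return "".join(out)

model = train_bigrams(CORPUS)
rng = random.Random(7)
decoys = [sample_decoy(model, rng) for _ in range(3)]
```

In a cracking-resistant vault, decryption under a wrong master password would feed pseudorandom bits into such a model, so the attacker sees decoys that are hard to distinguish from the real vault contents.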
【Paper Link】 【Pages】:499-516
【Authors】: David Bernhard ; Véronique Cortier ; David Galindo ; Olivier Pereira ; Bogdan Warinschi
【Abstract】: We critically survey game-based security definitions for the privacy of voting schemes. In addition to known limitations, we unveil several previously unnoticed shortcomings. Surprisingly, the conclusion of our study is that none of the existing definitions is satisfactory: they either provide only weak guarantees, or can be applied only to a limited class of schemes, or both. Based on our findings, we propose a new game-based definition of privacy which we call BPRIV. We also identify a new property which we call strong consistency, needed to express that tallying does not leak sensitive information. We validate our security notions by showing that BPRIV, strong consistency (and an additional simple property called strong correctness) for a voting scheme imply its security in a simulation-based sense. This result also yields a proof technique for proving entropy-based notions of privacy which offer the strongest security guarantees but are hard to prove directly: first prove your scheme BPRIV, strongly consistent (and correct), then study the entropy-based privacy of the result function of the election, which is a much easier task.
【Keywords】: data privacy; game theory; politics; BPRIV scheme; SoK; entropy-based privacy; game-based ballot privacy definition comprehensive analysis; simulation-based sense; voting schemes; Cleaning; Computational modeling; Cryptography; Nominations and elections; Privacy; Protocols; Voting; ballot privacy; cryptography
【Paper Link】 【Pages】:519-534
【Authors】: Boyuan He ; Vaibhav Rastogi ; Yinzhi Cao ; Yan Chen ; V. N. Venkatakrishnan ; Runqing Yang ; Zhenrui Zhang
【Abstract】: Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols have become the security backbone of the Web and Internet today. Many systems, including mobile and desktop applications, are protected by SSL/TLS protocols against network attacks. However, many vulnerabilities caused by incorrect use of SSL/TLS APIs have been uncovered in recent years. Such vulnerabilities, many of which are caused by poor API design and the inexperience of application developers, often lead to confidential data leakage or man-in-the-middle attacks. In this paper, to guarantee code quality and logic correctness of SSL/TLS applications, we design and implement SSLINT, a scalable, automated, static analysis system for detecting incorrect use of SSL/TLS APIs. SSLINT is capable of performing automatic logic verification with high efficiency and good accuracy. To demonstrate it, we apply SSLINT to one of the most popular Linux distributions -- Ubuntu. We find 27 previously unknown SSL/TLS vulnerabilities in Ubuntu applications, most of which are also distributed with other Linux distributions.
【Keywords】: Linux; application program interfaces; formal verification; program diagnostics; protocols; security of data; API design; Linux distributions; SSL usage vetting; SSL-TLS protocols; SSLINT; Ubuntu; application program interfaces; automatic logic verification; code quality; logic correctness; network attacks; secure sockets layer; static analysis system; transport layer security; Accuracy; Libraries; Protocols; Security; Servers; Software; Testing
【Paper Link】 【Pages】:535-552
【Authors】: Benjamin Beurdouche ; Karthikeyan Bhargavan ; Antoine Delignat-Lavaud ; Cédric Fournet ; Markulf Kohlweiss ; Alfredo Pironti ; Pierre-Yves Strub ; Jean Karim Zinzindohoue
【Abstract】: Implementations of the Transport Layer Security (TLS) protocol must handle a variety of protocol versions and extensions, authentication modes, and key exchange methods. Confusingly, each combination may prescribe a different message sequence between the client and the server. We address the problem of designing a robust composite state machine that correctly multiplexes between these different protocol modes. We systematically test popular open-source TLS implementations for state machine bugs and discover several critical security vulnerabilities that have lain hidden in these libraries for years, and have now finally been patched due to our disclosures. Several of these vulnerabilities, including the recently publicized FREAK flaw, enable a network attacker to break into TLS connections between authenticated clients and servers. We argue that state machine bugs stem from incorrect compositions of individually correct state machines. We present the first verified implementation of a composite TLS state machine in C that can be embedded into OpenSSL and accounts for all its supported cipher suites. Our attacks expose the need for the formal verification of core components in cryptographic protocol libraries; our implementation demonstrates that such mechanized proofs are within reach, even for mainstream TLS implementations.
【Keywords】: cryptographic protocols; formal verification; TLS protocol; authentication modes; composite TLS state machine; key exchange methods; message sequence; open-source TLS; security vulnerabilities; transport layer security protocol; Authentication; Computer bugs; Cryptography; Libraries; Protocols; Servers; Transport Layer Security; cryptographic protocols; formal methods; man-in-the-middle attacks; software verification
【Paper Link】 【Pages】:553-570
【Authors】: Joppe W. Bos ; Craig Costello ; Michael Naehrig ; Douglas Stebila
【Abstract】: Lattice-based cryptographic primitives are believed to offer resilience against attacks by quantum computers. We demonstrate the practicality of post-quantum key exchange by constructing cipher suites for the Transport Layer Security (TLS) protocol that provide key exchange based on the ring learning with errors (R-LWE) problem; we accompany these cipher suites with a rigorous proof of security. Our approach ties lattice-based key exchange together with traditional authentication using RSA or elliptic curve digital signatures: the post-quantum key exchange provides forward secrecy against future quantum attackers, while authentication can be provided using RSA keys that are issued by today's commercial certificate authorities, smoothing the path to adoption. Our cryptographically secure implementation, aimed at the 128-bit security level, reveals that the performance price when switching from non-quantum-safe key exchange is not too high. With our R-LWE cipher suites integrated into the OpenSSL library and using the Apache web server on a 2-core desktop computer, we could serve 506 RLWE-ECDSA-AES128-GCM-SHA256 HTTPS connections per second for a 10 KiB payload. Compared to elliptic curve Diffie-Hellman, this means an 8 KiB increased handshake size and a reduction in throughput of only 21%. This demonstrates that provably secure post-quantum key exchange can already be considered practical.
【Keywords】: cryptographic protocols; digital signatures; public key cryptography; quantum cryptography; 2-core desktop computer; Apache Web server; R-LWE cipher suites; RLWE-ECDSA-AES128-GCM-SHA256 HTTPS; RSA keys; TLS protocol; authentication; commercial certificate authority; elliptic curve Diffie-Hellman; elliptic curve digital signatures; handshake size; lattice-based cryptographic primitives; lattice-based key exchange; nonquantum-safe key exchange; open SSL library; post-quantum key exchange; quantum attackers; quantum computers; ring learning with error problem; security level; transport layer security protocol; Authentication; Computers; Cryptography; Lattices; Protocols; Quantum computing; Transport Layer Security (TLS); key exchange; learning with errors; post-quantum
【Paper Link】 【Pages】:571-587
【Authors】: Michel Abdalla ; Fabrice Benhamouda ; Philip MacKenzie
【Abstract】: J-PAKE is an efficient password-authenticated key exchange protocol that is included in the OpenSSL library and is currently being used in practice. We present the first proof of security for this protocol in a well-known and accepted model for authenticated key exchange that incorporates online and offline password guessing, concurrent sessions, forward secrecy, server compromise, and loss of session keys. This proof relies on the Decision Square Diffie-Hellman assumption, as well as a strong security assumption for the non-interactive zero-knowledge (NIZK) proofs in the protocol (specifically, simulation-sound extractability). We show that the Schnorr proof-of-knowledge protocol, which was recommended for the J-PAKE protocol, satisfies this strong security assumption in a model with algebraic adversaries and random oracles, and extend the full J-PAKE proof of security to this model. Finally, we show that by modifying the recommended labels in the Schnorr protocol used in J-PAKE, we can achieve a security proof for J-PAKE with a tighter security reduction.
【Keywords】: cryptographic protocols; J-PAKE password-authenticated key exchange protocol; NIZK proofs; OpenSSL library; Schnorr proof-of-knowledge protocol; algebraic adversaries; concurrent sessions; decision square Diffie-Hellman assumption; forward secrecy; noninteractive zero-knowledge proofs; offline password guessing; online password guessing; random oracles; security proof; security reduction; server compromise; session keys loss; simulation-sound extractability; Computational modeling; Cryptography; Dictionaries; Protocols; Servers; Standards
【Paper Link】 【Pages】:591-604
【Authors】: Gorka Irazoqui Apecechea ; Thomas Eisenbarth ; Berk Sunar
【Abstract】: The cloud computing infrastructure relies on virtualized servers that provide isolation across guest OSs through sandboxing. This isolation was shown to be imperfect in past work that exploited hardware-level information leakage to gain access to sensitive information across co-located virtual machines (VMs). In response, virtualization companies and cloud service providers have disabled features such as deduplication to prevent such attacks. In this work, we introduce a fine-grained cross-core cache attack that exploits access-time variations on the last-level cache. The attack exploits huge pages to work across VM boundaries without requiring deduplication. No configuration changes on the victim OS are needed, making the attack quite viable. Furthermore, only machine co-location is required, and the attacker and victim OS can reside on different cores of the machine. Our new attack is a variation of the prime-and-probe cache attack, whose applicability was previously limited to the L1 cache. In contrast, our attack works in the spirit of the flush-and-reload attack but targets the shared L3 cache instead. Indeed, by adjusting the huge page size, our attack can be customized to work at virtually any cache level and size. We demonstrate the viability of the attack by targeting an OpenSSL 1.0.1f implementation of AES. The attack recovers AES keys in the cross-VM setting on Xen 4.1 with deduplication disabled, and is only slightly less efficient than the flush-and-reload attack. Given that huge pages are a standard feature enabled in the memory management units of modern OSs, and that no assumptions beyond co-location are needed, the attack we present poses a significant risk to existing cloud servers.
【Keywords】: cache storage; cloud computing; security of data; virtual machines; AES keys; L1 cache; OpenSSL 1.0.1f implementation; S$A; VM sandboxing; cloud computing infrastructure; cloud servers; cloud services; probe cache attack; shared cache attack; virtual machines; virtualized servers; Cloud computing; Cryptography; Hardware; Monitoring; Program processors; Servers; Cross-VM; cache attacks; flush+reload; huge pages; memory deduplication; prime and probe
【Paper Link】 【Pages】:605-622
【Authors】: Fangfei Liu ; Yuval Yarom ; Qian Ge ; Gernot Heiser ; Ruby B. Lee
【Abstract】: We present an effective implementation of the Prime+Probe side-channel attack against the last-level cache. We measure the capacity of the covert channel the attack creates and demonstrate a cross-core, cross-VM attack on multiple versions of GnuPG. Our technique achieves a high attack resolution without relying on weaknesses in the OS or virtual machine monitor or on sharing memory between attacker and victim.
【Keywords】: cache storage; cloud computing; security of data; virtual machines; GnuPG; IaaS cloud computing; Prime+Probe side-channel attack; covert channel; cross-VM attack; cross-core attack; last-level cache side-channel attacks; virtual machine monitor; Cryptography; Indexes; Memory management; Monitoring; Multicore processing; Probes; Virtual machine monitors; ElGamal; covert channel; cross-VM side channel; last-level cache; side-channel attack
【Paper Link】 【Pages】:623-639
【Authors】: Marc Andrysco ; David Kohlbrenner ; Keaton Mowery ; Ranjit Jhala ; Sorin Lerner ; Hovav Shacham
【Abstract】: We identify a timing channel in the floating point instructions of modern x86 processors: the running time of floating point addition and multiplication instructions can vary by two orders of magnitude depending on their operands. We develop a benchmark measuring the timing variability of floating point operations and report on its results. We use floating point data timing variability to demonstrate practical attacks on the security of the Firefox browser (versions 23 through 27) and the Fuzz differentially private database. Finally, we initiate the study of mitigations to floating point data timing channels with libfixedtimefixedpoint, a new fixed-point, constant-time math library. Modern floating point standards and implementations are sophisticated, complex, and subtle, a fact that has not been sufficiently recognized by the security community. More work is needed to assess the implications of the use of floating point instructions in security-relevant software.
【Keywords】: adders; cryptography; floating point arithmetic; multiplying circuits; Firefox browser; Fuzz differentially private database; abnormal timing; cryptography; fixed-point constant-time math library; floating point addition instructions; floating point data timing channels; floating point data timing variability; floating point multiplication instructions; floating point operations; floating point standards; libfixedtimefixedpoint; modern x86 processors; operands; running time; security attacks; subnormal floating point; Browsers; Libraries; Program processors; Security; Standards; Timing
【Paper Link】 【Pages】:640-656
【Authors】: Yuanzhong Xu ; Weidong Cui ; Marcus Peinado
【Abstract】: The presence of large numbers of security vulnerabilities in popular feature-rich commodity operating systems has inspired a long line of work on excluding these operating systems from the trusted computing base of applications, while retaining many of their benefits. Legacy applications continue to run on the untrusted operating system, while a small hypervisor or trusted hardware prevents the operating system from accessing the applications' memory. In this paper, we introduce controlled-channel attacks, a new type of side-channel attack that allows an untrusted operating system to extract large amounts of sensitive information from protected applications on systems like Overshadow, InkTag, or Haven. We implement the attacks on Haven and InkTag and demonstrate their power by extracting complete text documents and outlines of JPEG images from widely deployed application libraries. Given these attacks, it is unclear whether Overshadow's vision of protecting unmodified legacy applications from legacy operating systems running on off-the-shelf hardware is still tenable.
【Keywords】: cryptography; document handling; operating systems (computers); trusted computing; Haven; InkTag; JPEG images; Overshadow; controlled-channel attacks; deterministic side channels; feature-rich commodity operating systems; hypervisor; legacy operating systems; security vulnerabilities; text documents; trusted computing; trusted hardware; untrusted operating systems; Control systems; Data mining; Hardware; Monitoring; Operating systems; Resource management; Virtual machine monitors; SGX; side-channel attack; untrusted operating system; virtualization
【Paper Link】 【Pages】:659-673
【Authors】: Xabier Ugarte-Pedrero ; Davide Balzarotti ; Igor Santos ; Pablo Garcia Bringas
【Abstract】: Run-time packers are often used by malware writers to obfuscate their code and hinder static analysis. The packer problem has been widely studied, and several solutions have been proposed in order to generically unpack protected binaries. Nevertheless, these solutions commonly rely on a number of assumptions that may not necessarily reflect the reality of the packers used in the wild. Moreover, previous solutions fail to provide useful information about the structure of the packer or its complexity. In this paper, we describe a framework for packer analysis and propose a taxonomy to measure the runtime complexity of packers. We evaluated our dynamic analysis system on two datasets, composed of both off-the-shelf packers and custom packed binaries. Based on the results of our experiments, we present several statistics about the packers' complexity and their evolution over time.
【Keywords】: invasive software; SoK; code obfuscation; custom packed binaries; deep packer inspection; dynamic analysis system; malware-writers; off-the-shelf packers; packer analysis; packer problem; packer structure; run-time packers complexity; Complexity theory; Instruments; Malware; Memory management; Monitoring; Runtime; Taxonomy; custom packer; malware analysis; unpacking
【Paper Link】 【Pages】:674-691
【Authors】: Babak Yadegari ; Brian Johannesmeyer ; Ben Whitely ; Saumya Debray
【Abstract】: Malicious software is usually obfuscated to avoid detection and resist analysis. When new malware is encountered, such obfuscations have to be penetrated or removed ("deobfuscated") in order to understand the internal logic of the code and devise countermeasures. This paper discusses a generic approach for deobfuscation of obfuscated executable code. Our approach does not make any assumptions about the nature of the obfuscations used, but instead uses semantics-preserving program transformations to simplify away obfuscation code. We have applied a prototype implementation of our ideas to a variety of different kinds of obfuscation, including emulation-based obfuscation, emulation-based obfuscation with runtime code unpacking, and return-oriented programming. Our experimental results are encouraging and suggest that this approach can be effective in extracting the internal logic from code obfuscated using a variety of obfuscation techniques, including tools such as Themida that previous approaches could not handle.
【Keywords】: invasive software; programming; Themida tool; automatic executable code deobfuscation; emulation-based obfuscation; generic approach; malicious software; obfuscation techniques; return-oriented programming; runtime code unpacking; Algorithm design and analysis; IP networks; Libraries; Programming; Reverse engineering; Security; Semantics; Deobfuscation; Return Oriented Programming; Virtualization-Obfuscation
【Paper Link】 【Pages】:692-708
【Authors】: Antonio Nappa ; Richard Johnson ; Leyla Bilge ; Juan Caballero ; Tudor Dumitras
【Abstract】: Vulnerability exploits remain an important mechanism for malware delivery, despite efforts to speed up the creation of patches and improvements in software updating mechanisms. Vulnerabilities in client applications (e.g., browsers, multimedia players, document readers and editors) are often exploited in spear phishing attacks and are difficult to characterize using network vulnerability scanners. Analyzing their lifecycle requires observing the deployment of patches on hosts around the world. Using data collected over 5 years on 8.4 million hosts, available through Symantec's WINE platform, we present the first systematic study of patch deployment for client-side vulnerabilities. We analyze the patch deployment process of 1,593 vulnerabilities from 10 popular client applications, and we identify several new threats presented by multiple installations of the same program and by shared libraries distributed with several applications. For the 80 vulnerabilities in our dataset that affect code shared by two applications, the time between patch releases in the different applications is up to 118 days (with a median of 11 days). Furthermore, as the patching rates differ considerably among applications, many hosts patch the vulnerability in one application but not in the other. We demonstrate two novel attacks that enable exploitation by invoking old versions of applications that are used infrequently but remain installed. We also find that the median fraction of vulnerable hosts patched when exploits are released is at most 14%. Finally, we show that the patching rate is affected by user-specific and application-specific factors; for example, hosts belonging to security analysts and applications with an automated updating mechanism have significantly lower median times to patch.
【Keywords】: invasive software; software reliability; Symantec WINE platform; application-specific factors; automated updating mechanism; malware delivery; network vulnerability scanners; shared code; software lifecycle analysis; software updating mechanisms; spear phishing attacks; user-specific factors; vulnerability patching; Databases; Delays; Libraries; Security; Sociology; Software; Statistics; client applications; patch deployment; shared code; software vulnerabilities; vulnerability exploits
【Paper Link】 【Pages】:709-724
【Authors】: Jannik Pewny ; Behrad Garmany ; Robert Gawlik ; Christian Rossow ; Thorsten Holz
【Abstract】: With the general availability of closed-source software for various CPU architectures, there is a need to identify security-critical vulnerabilities at the binary level to perform a vulnerability assessment. Unfortunately, existing bug finding methods fall short in that they i) require source code, ii) only work on a single architecture (typically x86), or iii) rely on dynamic analysis, which is inherently difficult for embedded devices. In this paper, we propose a system to derive bug signatures for known bugs. We then use these signatures to find bugs in binaries that have been deployed on different CPU architectures (e.g., x86 vs. MIPS). The variety of CPU architectures imposes many challenges, such as the incomparability of instruction set architectures between the CPU models. We solve this by first translating the binary code to an intermediate representation, resulting in assignment formulas with input and output variables. We then sample concrete inputs to observe the I/O behavior of basic blocks, which grasps their semantics. Finally, we use the I/O behavior to find code parts that behave similarly to the bug signature, effectively revealing code parts that contain the bug. We have designed and implemented a tool for cross-architecture bug search in executables. Our prototype currently supports three instruction set architectures (x86, ARM, and MIPS) and can find vulnerabilities in buggy binary code for any of these architectures. We show that we can find Heartbleed vulnerabilities, regardless of the underlying software instruction set. Similarly, we apply our method to find backdoors in closed-source firmware images of MIPS- and ARM-based routers.
【Keywords】: binary codes; computer architecture; computer debugging; instruction sets; safety-critical software; source code (software); ARM; CPU architectures; CPU models; Heartbleed vulnerabilities; I/O behavior; MIPS; binary executables; binary level; bug finding methods; bug signatures; buggy binary code vulnerabilities; closed-source software; code parts; cross-architecture bug search; dynamic analysis; embedded devices; instruction set architectures; security-critical vulnerabilities; single architecture; software instruction set; source code; vulnerability assessment; x86; Binary codes; Computer architecture; Computer bugs; Concrete; Measurement; Semantics; Software; Bug Search; Code Similarity; Cross-Architecture; Software Security
【Paper Link】 【Pages】:725-741
【Authors】: Sang Kil Cha ; Maverick Woo ; David Brumley
【Abstract】: We present the design of an algorithm to maximize the number of bugs found for black-box mutational fuzzing given a program and a seed input. The major intuition is to leverage white-box symbolic analysis on an execution trace for a given program-seed pair to detect dependencies among the bit positions of an input, and then use this dependency relation to compute a probabilistically optimal mutation ratio for this program-seed pair. Our result is promising: we found an average of 38.6% more bugs than three previous fuzzers over 8 applications using the same amount of fuzzing time.
【Keywords】: fuzzy set theory; probability; program debugging; bit positions; black-box mutational fuzzing; bugs; dependency relation; execution trace; fuzzing time; probabilistically optimal mutation ratio; program-adaptive mutational fuzzing; program-seed pair; white-box symbolic analysis; Computer bugs; Hamming distance; Optimization; Security; Software; Testing; fuzzing; mutation ratio optimization; mutational fuzzing; software testing
【Paper Link】 【Pages】:745-762
【Authors】: Felix Schuster ; Thomas Tendyck ; Christopher Liebchen ; Lucas Davi ; Ahmad-Reza Sadeghi ; Thorsten Holz
【Abstract】: Code reuse attacks such as return-oriented programming (ROP) have become prevalent techniques to exploit memory corruption vulnerabilities in software programs. A variety of corresponding defenses has been proposed, of which some have already been successfully bypassed -- and the arms race continues. In this paper, we perform a systematic assessment of recently proposed CFI solutions and other defenses against code reuse attacks in the context of C++. We demonstrate that many of these defenses that do not precisely consider object-oriented C++ semantics can be generically bypassed in practice. Our novel attack technique, denoted counterfeit object-oriented programming (COOP), induces malicious program behavior by only invoking chains of existing C++ virtual functions in a program through corresponding existing call sites. COOP is Turing complete in realistic attack scenarios and we show its viability by developing sophisticated, real-world exploits for Internet Explorer 10 on Windows and Firefox 36 on Linux. Moreover, we show that even recently proposed defenses (CPS, T-VIP, vfGuard, and VTint) that specifically target C++ are vulnerable to COOP. We observe that constructing defenses resilient to COOP that do not require access to source code seems to be challenging. We believe that our investigation and results are helpful contributions to the design and implementation of future defenses against control flow hijacking attacks.
【Keywords】: C++ language; Turing machines; object-oriented programming; security of data; C++ applications; C++ virtual functions; CFI solutions; COOP; CPS; Firefox 36; Internet Explorer 10; Linux; ROP; T-VIP; Turing complete; VTint; Windows; code reuse attack prevention; code reuse attacks; control flow hijacking attacks; counterfeit object-oriented programming; malicious program behavior; memory corruption vulnerabilities; return-oriented programming; software programs; source code; vfGuard; Aerospace electronics; Arrays; Layout; Object oriented programming; Runtime; Semantics; C++; CFI; ROP; code reuse attacks
【Paper Link】 【Pages】:763-780
【Authors】: Stephen Crane ; Christopher Liebchen ; Andrei Homescu ; Lucas Davi ; Per Larsen ; Ahmad-Reza Sadeghi ; Stefan Brunthaler ; Michael Franz
【Abstract】: Code-reuse attacks such as return-oriented programming (ROP) pose a severe threat to modern software. Designing practical and effective defenses against code-reuse attacks is highly challenging. One line of defense builds upon fine-grained code diversification to prevent the adversary from constructing a reliable code-reuse attack. However, all solutions proposed so far are either vulnerable to memory disclosure or are impractical for deployment on commodity systems. In this paper, we address the deficiencies of existing solutions and present the first practical, fine-grained code randomization defense, called Readactor, resilient to both static and dynamic ROP attacks. We distinguish between direct memory disclosure, where the attacker reads code pages, and indirect memory disclosure, where attackers use code pointers on data pages to infer the code layout without reading code pages. Unlike previous work, Readactor resists both types of memory disclosure. Moreover, our technique protects both statically and dynamically generated code. We use a new compiler-based code generation paradigm that uses hardware features provided by modern CPUs to enable execute-only memory and hide code pointers from leakage to the adversary. Finally, our extensive evaluation shows that our approach is practical -- we protect the entire Google Chromium browser and its V8 JIT compiler -- and efficient, with an average SPEC CPU2006 performance overhead of only 6.4%.
【Keywords】: online front-ends; program compilers; Google Chromium browser; ROP; Readactor; V8 JIT compiler; code randomization; code-reuse attacks; compiler-based code generation paradigm; memory disclosure; return-oriented programming; Hardware; Layout; Operating systems; Program processors; Security; Virtual machine monitors
【Paper Link】 【Pages】:781-796
【Authors】: Isaac Evans ; Sam Fingeret ; Julian Gonzalez ; Ulziibayar Otgonbaatar ; Tiffany Tang ; Howard Shrobe ; Stelios Sidiroglou-Douskos ; Martin Rinard ; Hamed Okhravi
【Abstract】: Memory corruption attacks continue to be a major vector of attack for compromising modern systems. Numerous defenses have been proposed against memory corruption attacks, but they all have their limitations and weaknesses. Stronger defenses such as complete memory safety for legacy languages (C/C++) incur a large overhead, while weaker ones such as practical control flow integrity have been shown to be ineffective. A recent technique called code pointer integrity (CPI) promises to balance security and performance by focusing memory safety on code pointers, thus preventing most control-hijacking attacks while maintaining low overhead. CPI protects access to code pointers by storing them in a safe region that is protected by instruction level isolation. On x86-32, this isolation is enforced by hardware; on x86-64 and ARM, it is enforced by information hiding. We show that, for architectures without segmentation support, where CPI relies on information hiding, CPI's safe region can be leaked and then maliciously modified by using data pointer overwrites. We implement a proof-of-concept exploit against Nginx and successfully bypass CPI implementations that rely on information hiding in 6 seconds with 13 observed crashes. We also present an attack that generates no crashes and is able to bypass CPI in 98 hours. Our attack demonstrates the importance of adequately protecting secrets in security mechanisms and the dangers of relying on difficulty of guessing without guaranteeing the absence of memory leaks.
【Keywords】: data protection; security of data; ARM; C-C++; CPI safe region; code pointer integrity effectiveness; code pointer protection; control flow integrity; control-hijacking attacks; data pointer overwrites; information hiding; instruction level isolation; legacy languages; memory corruption attacks; memory safety; security mechanisms; time 98 hour; Computer crashes; Delays; Libraries; Safety; Security
【Paper Link】 【Pages】:797-812
【Authors】: Fabian Yamaguchi ; Alwin Maier ; Hugo Gascon ; Konrad Rieck
【Abstract】: Taint-style vulnerabilities are a persistent problem in software development, as the recently discovered "Heartbleed" vulnerability strikingly illustrates. In this class of vulnerabilities, attacker-controlled data is passed unsanitized from an input source to a sensitive sink. While simple instances of this vulnerability class can be detected automatically, more subtle defects involving data flow across several functions or project-specific APIs are mainly discovered by manual auditing. Different techniques have been proposed to accelerate this process by searching for typical patterns of vulnerable code. However, all of these approaches require a security expert to manually model and specify appropriate patterns in practice. In this paper, we propose a method for automatically inferring search patterns for taint-style vulnerabilities in C code. Given a security-sensitive sink, such as a memory function, our method automatically identifies corresponding source-sink systems and constructs patterns that model the data flow and sanitization in these systems. The inferred patterns are expressed as traversals in a code property graph and enable efficiently searching for unsanitized data flows -- across several functions as well as with project-specific APIs. We demonstrate the efficacy of this approach in different experiments with 5 open-source projects. The inferred search patterns reduce the amount of code to inspect for finding known vulnerabilities by 94.9% and also enable us to uncover 8 previously unknown vulnerabilities.
【Keywords】: application program interfaces; data flow analysis; public domain software; security of data; software engineering; C code; attacker-controlled data; automatic inference; code property graph; data flow; data security; inferred search pattern; memory function; open-source project; project- specific API; search pattern; security-sensitive sink; sensitive sink; software development; source-sink system; taint-style vulnerability; Databases; Libraries; Payloads; Programming; Security; Software; Syntactics; Clustering; Graph Databases; Vulnerabilities
【Paper Link】 【Pages】:813-830
【Authors】: Arthur Azevedo de Amorim ; Maxime Dénès ; Nick Giannarakis ; Catalin Hritcu ; Benjamin C. Pierce ; Antal Spector-Zabusky ; Andrew Tolmach
【Abstract】: Recent advances in hardware design have demonstrated mechanisms allowing a wide range of low-level security policies (or micro-policies) to be expressed using rules on metadata tags. We propose a methodology for defining and reasoning about such tag-based reference monitors in terms of a high-level "symbolic machine," and we use this methodology to define and formally verify micro-policies for dynamic sealing, compartmentalization, control-flow integrity, and memory safety; in addition, we show how to use the tagging mechanism to protect its own integrity. For each micro-policy, we prove by refinement that the symbolic machine instantiated with the policy's rules embodies a high-level specification characterizing a useful security property. Last, we show how the symbolic machine itself can be implemented in terms of a hardware rule cache and a software controller.
【Keywords】: cache storage; inference mechanisms; meta data; security of data; formally verified security monitors; hardware design; hardware rule cache; metadata tags; micro-policies; reasoning; software controller; tag-based reference monitors; tag-based security monitors; Concrete; Hardware; Monitoring; Registers; Safety; Transfer functions
【Paper Link】 【Pages】:833-849
【Authors】: Eric Y. Chen ; Shuo Chen ; Shaz Qadeer ; Rui Wang
【Abstract】: The prevalence of security flaws in multiparty online services (e.g., single sign-on, third-party payment, etc.) calls for rigorous engineering supported by formal program verification. However, the adoption of program verification faces several hurdles in the real world: how to formally specify logic properties given that protocol specifications are often informal and vague; how to precisely model the attacker and the runtime platform; and how to deal with the unbounded set of all potential transactions. We introduce Certification of Symbolic Transaction (CST), an approach that significantly lowers these hurdles. CST tries to verify a protocol-independent safety property jointly defined over all parties, and thus avoids the burden of individually specifying every party's property for every protocol. CST invokes static verification at runtime, i.e., it symbolically verifies every transaction on the fly, and thus (1) avoids the burden of modeling the attacker and the runtime platform, and (2) reduces the proof obligation from considering all possible transactions to considering only the one at hand. We have applied CST to five commercially deployed applications and show that, with only tens (or 100+) of lines of code changes per party, the original implementations are enhanced to achieve the objective of CST. Our security analysis shows that 12 out of 14 logic flaws reported in the literature would be prevented by CST. We also stress-tested CST by building a gambling system integrating four different services, for which there is no existing protocol to follow. Because transactions are symbolic and cacheable, CST has near-zero amortized runtime overhead. We make the source code of these implementations public; they are ready to be deployed for real-world use.
【Keywords】: cache storage; formal verification; security of data; symbol manipulation; CST; attacker modeling; cacheable transactions; certification of symbolic transaction; code changes; commercially deployed applications; formal program verification; gambling system; logic flaws; logic properties; multiparty online service security; near-zero amortized runtime overhead; proof obligation; protocol specifications; protocol-independent safety property; runtime platform; security analysis; security flaws; static verification; Certification; Data structures; Facebook; Protocols; Runtime; Security; Servers; CST; multiparty protocol; online payment; single-sign-on; symbolic transaction; verification
【Paper Link】 【Pages】:850-865
【Authors】: Yuchen Zhou ; David Evans
【Abstract】: Modern web applications make frequent use of third-party scripts, often in ways that allow scripts loaded from external servers to make unrestricted changes to the embedding page and access critical resources, including private user information. This paper introduces tools to assist site administrators in understanding, monitoring, and restricting the behavior of third-party scripts embedded in their site. We developed Script Inspector, a modified browser that can intercept, record, and check third-party script accesses to critical resources against security policies, along with a Visualizer tool that lets users conveniently view recorded script behaviors and candidate policies, and a Policy Generator tool that aids script providers and site administrators in writing policies. Site administrators can manually refine these policies with minimal effort to produce policies that effectively and robustly limit the behavior of embedded scripts. Policy Generator is able to generate effective policies for all scripts embedded on 72 of the 100 test sites with minor human assistance. In this paper, we present the designs of our tools, report on what we have learned about script behaviors using them, and evaluate the value of our approach for website administrators.
【Keywords】: Internet; data privacy; online front-ends; security of data; Policy Generator; Script Inspector; Visualizer tool; Web application; Web browser; Web script; critical resource access; private user information; security policy; third-party script; Advertising; Browsers; Monitoring; Privacy; Robustness; Security; Visualization; Anomaly Detection; Security and Privacy Policy; Web security and Privacy
【Paper Link】 【Pages】:866-879
【Authors】: Jonas Wagner ; Volodymyr Kuznetsov ; George Candea ; Johannes Kinder
【Abstract】: Security vulnerabilities plague modern systems because writing secure systems code is hard. Promising approaches can retrofit security automatically via runtime checks that implement the desired security policy; these checks guard critical operations, like memory accesses. Alas, the induced slowdown usually exceeds by a wide margin what system users are willing to tolerate in production, so these tools are hardly ever used. As a result, the insecurity of real-world systems persists. We present an approach in which developers/operators can specify what level of overhead they find acceptable for a given workload (e.g., 5%); our proposed tool ASAP then automatically instruments the program to maximize its security while staying within the specified "overhead budget." Two insights make this approach effective: most overhead in existing tools is due to only a few "hot" checks, whereas the checks most useful to security are typically "cold" and cheap. We evaluate ASAP on programs from the Phoronix and SPEC benchmark suites. It can precisely select the best points in the security-performance spectrum. Moreover, we analyzed existing bugs and security vulnerabilities in RIPE, OpenSSL, and the Python interpreter, and found that the protection level offered by the ASAP approach is sufficient to protect against all of them.
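The selection step the abstract describes can be sketched as a greedy elimination of the most expensive checks until the measured overhead fits the budget. The function and check names below are hypothetical; ASAP itself works on compiler-inserted instrumentation with profiled costs.

```python
# Sketch: keep as many runtime checks as possible within an overhead budget.
# Check identifiers and cost numbers are illustrative, not from the paper.

def select_checks(checks, budget):
    """Greedily drop the hottest checks until total overhead <= budget.

    checks: dict mapping check id -> measured overhead fraction.
    Reflects the paper's insight: a few "hot" checks account for most of
    the overhead, while "cold" checks are nearly free and stay enabled.
    """
    kept = dict(checks)
    total = sum(kept.values())
    for cid, cost in sorted(checks.items(), key=lambda kv: -kv[1]):
        if total <= budget:
            break
        del kept[cid]   # disable the currently most expensive check
        total -= cost
    return set(kept)

# One hot check dominates; dropping it alone meets a 5% budget.
costs = {"bounds@hot_loop": 0.30, "bounds@parser": 0.02, "null@init": 0.01}
print(select_checks(costs, budget=0.05))
```

Under this scheme the security-critical cold checks survive because they contribute almost nothing to the total overhead.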
【Keywords】: security of data; ASAP tool; OpenSSL; Phoronix benchmark suites; Python interpreter; RIPE; SPEC benchmark suites; code writing; high system-code security; runtime checks; security policy; security vulnerabilities; security-performance spectrum; Computer bugs; Instruments; Production; Safety; Security; Software; Memory Safety; Security; Software Hardening; Software Instrumentation
【Paper Link】 【Pages】:880-896
【Authors】: Beom Heyn Kim ; David Lie
【Abstract】: Cloud storage services such as Amazon S3, Dropbox, Google Drive and Microsoft OneDrive have become increasingly popular. However, users may be reluctant to completely trust a cloud service. Current proposals in the literature to protect the confidentiality, integrity and consistency of data stored in the cloud all have shortcomings when used on battery-powered devices -- they either require devices to be on longer so they can communicate directly with each other, rely on a trusted service to relay messages, or cannot provide timely detection of attacks. We propose Caelus, which addresses these shortcomings. The key insight that enables Caelus to do this is having the cloud service itself declare the timing and order of operations it performs. This relieves Caelus devices from having to record and send the timing and order of operations to each other -- instead, they need only ensure that the declared timing and order of operations both conform to the cloud's promised consistency model and are perceived identically on all devices. In addition, we show that Caelus is general enough to support popular consistency models such as strong, eventual and causal consistency. Our experiments show that Caelus can detect consistency violations on Amazon's S3 service when the desired consistency requirements set by the user are stricter than what S3 provides. Caelus achieves this with a roughly 12.6% increase in CPU utilization on clients, 1.3% of network bandwidth overhead and negligible impact on the battery life of devices.
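The conformance check at the heart of the abstract can be illustrated for the simplest case, strong consistency: given the server-declared total order of operations, every read must return the most recent declared write to that key. This is a toy sketch with a hypothetical history encoding; Caelus additionally handles timing, eventual and causal consistency, and cross-device agreement on the declared history.

```python
# Sketch: verify a server-declared operation history against strong
# consistency. The ("w"/"r", key, value) tuple encoding is illustrative.

def check_strong_consistency(history):
    """Return True iff every read sees the latest preceding declared write.

    history: list of ("w", key, value) writes and ("r", key, value) reads,
    in the total order the cloud service declared.
    """
    store = {}
    for kind, key, value in history:
        if kind == "w":
            store[key] = value          # replay the declared write
        elif store.get(key) != value:   # read must match latest write
            return False                # consistency violation detected
    return True

ok = [("w", "x", 1), ("r", "x", 1), ("w", "x", 2), ("r", "x", 2)]
bad = [("w", "x", 1), ("w", "x", 2), ("r", "x", 1)]  # stale read
print(check_strong_consistency(ok), check_strong_consistency(bad))
```

Because the service itself declares the order, clients only replay and check it, rather than exchanging their own operation logs with each other.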
【Keywords】: cloud computing; power aware computing; storage management; trusted computing; Amazon S3; CPU utilization; Caelus; Dropbox; Google Drive; Microsoft OneDrive; battery-powered devices; cloud services consistency verification; cloud storage services; data confidentiality; data consistency; data integrity; network bandwidth; trusted service; Batteries; Cloud computing; History; Security; Servers; Timing; Cloud security; Distributed systems security; System security
【Paper Link】 【Pages】:899-914
【Authors】: Mingyuan Xia ; Lu Gong ; Yuanhao Lyu ; Zhengwei Qi ; Xue Liu
【Abstract】: Mobile applications can access both sensitive personal data and the network, giving rise to threats of data leaks. App auditing is a fundamental program analysis task to reveal such leaks. Currently, static analysis is the de facto technique, which exhaustively examines all data flows and pinpoints problematic ones. However, static analysis generates false alarms because it over-approximates data flows, and requires minutes or even hours to examine a real app. These shortcomings greatly limit the usability of automatic app auditing. To overcome these limitations, we design AppAudit, which relies on the synergy of static and dynamic analysis to provide effective real-time app auditing. AppAudit embodies a novel dynamic analysis that can simulate the execution of part of the program and perform customized checks at each program state. AppAudit utilizes this to prune false positives of an efficient but over-estimating static analysis. Overall, AppAudit makes app auditing useful for app market operators, app developers and mobile end users, to reveal data leaks effectively and efficiently. We apply AppAudit to more than 1,000 known malware and 400 real apps from various markets. Overall, AppAudit reports a comparable number of true data leaks and eliminates all false positives, while being 8.3x faster and using 90% less memory compared to existing approaches. AppAudit also uncovers 30 data leaks in real apps. Our further study reveals the common patterns behind these leaks: 1) most leaks are caused by third-party advertising modules; 2) most data are leaked with simple unencrypted HTTP requests. We believe AppAudit serves as an effective tool to identify data-leaking apps and provides implications for the design of promising runtime techniques against data leaks.
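The kind of source-to-sink flow AppAudit hunts for can be illustrated with a toy taint tracker over a hypothetical three-instruction IR. The instruction format, source name (`getDeviceId`) and sink name (`httpSend`) are invented for illustration; the real system simulates the execution of Android app code and uses this to discard flows the static pass over-approximated.

```python
# Sketch: report a data leak when a value derived from a sensitive source
# reaches a network sink. The IR and API names are hypothetical.

def audit(program, sources, sinks):
    """Toy forward taint analysis over a list of instructions.

    program: list of ("assign", dst, src) or ("call", func, arg) steps.
    """
    tainted = set()
    leaks = []
    for step in program:
        if step[0] == "assign":
            _, dst, src = step
            if src in sources or src in tainted:
                tainted.add(dst)       # taint propagates through the copy
            else:
                tainted.discard(dst)   # overwritten with untainted data
        else:  # ("call", func, arg)
            _, func, arg = step
            if func in sinks and (arg in tainted or arg in sources):
                leaks.append((func, arg))
    return leaks

prog = [
    ("assign", "id", "getDeviceId"),   # read sensitive data (source)
    ("assign", "msg", "id"),           # propagate taint
    ("call", "httpSend", "msg"),       # network sink -> reported leak
]
print(audit(prog, sources={"getDeviceId"}, sinks={"httpSend"}))
```

A purely static pass would flag every syntactic source-to-sink path, including infeasible ones; simulating the execution, as above, only reports flows that actually materialize in a reached program state.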
【Keywords】: invasive software; mobile computing; program diagnostics; smart phones; 3rd-party advertising modules; AppAudit design; app developers; app market operators; automatic app auditing; data flows; data leaks; data-leaking apps; dynamic analysis; malware; mobile applications; mobile end users; program analysis task; program execution simulation; program state checks; real-time Android application auditing; real-time app auditing; runtime techniques; sensitive personal data; static analysis; unencrypted HTTP requests; Androids; Approximation methods; Concrete; Context; Humanoid robots; Java; Registers; android; approximated execution; data leak; program analysis
【Paper Link】 【Pages】:915-930
【Authors】: Nan Zhang ; Kan Yuan ; Muhammad Naveed ; Xiao-yong Zhou ; XiaoFeng Wang
【Abstract】: Stealing of sensitive information from apps is always considered to be one of the most critical threats to Android security. Recent studies show that this can happen even to apps without explicit implementation flaws, through exploiting design weaknesses of the operating system, e.g., shared communication channels such as Bluetooth, and side channels such as memory and network-data usages. In all these attacks, a malicious app needs to run side-by-side with the target app (the victim) to collect its runtime information. Examples include recording phone conversations from the phone app, gathering WebMD's data usages to infer the disease condition the user looks at, etc. This runtime-information-gathering (RIG) threat is realistic and serious, as demonstrated by prior research and our new findings, which reveal that malware monitoring popular Android-based home security systems can figure out when the house is empty and the user is not looking at surveillance cameras, and can even turn off the alarm delivered to her phone. To defend against this new category of attacks, we propose a novel technique that changes neither the operating system nor the target apps, and provides immediate protection as soon as an ordinary app (with only normal and dangerous permissions) is installed. This new approach, called App Guardian, thwarts a malicious app's runtime monitoring attempt by pausing all suspicious background processes when the target app (called the principal) is running in the foreground, and resuming them after the app stops and its runtime environment is cleaned up. Our technique leverages a unique feature of Android, on which third-party apps running in the background are often considered to be disposable and can be stopped anytime with only a minor performance and utility implication.
We further limit such an impact by focusing only on a small set of suspicious background apps, which are identified by their behaviors inferred from their side channels (e.g., thread names, CPU scheduling and kernel time). App Guardian is also carefully designed to choose the right moments to start and end the protection procedure, and to effectively protect itself against malicious apps. Our experimental studies show that this new technique defeated all known RIG attacks, with small impacts on the utility of legitimate apps and the performance of the OS. Most importantly, the ideas underlying our approach, including app-level protection, side-channel based defense and lightweight response, not only significantly raise the bar for RIG attacks and the research on this subject, but can also inspire follow-up efforts on new detection systems practically deployable in the fragmented Android ecosystem.
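The pause/resume protection cycle described above can be sketched as a small state machine. The class and app names are hypothetical; the real App Guardian stops Android processes via platform mechanisms, infers suspiciousness from side channels, and cleans up the principal's runtime environment before resuming.

```python
# Sketch of App Guardian's pause/resume logic (hypothetical API; the real
# system kills and restarts Android background processes).

class AppGuardian:
    def __init__(self, suspicious):
        self.suspicious = set(suspicious)  # apps flagged via side channels
        self.paused = set()

    def on_foreground(self, app, running):
        """Principal enters the foreground: pause suspicious background apps."""
        if app == "principal":
            self.paused = self.suspicious & set(running)
        return self.paused

    def on_background(self, app):
        """Principal leaves the foreground: resume the apps after cleanup."""
        if app == "principal":
            resumed, self.paused = self.paused, set()
            return resumed
        return set()

g = AppGuardian(suspicious=["monitor_app"])
print(g.on_foreground("principal", ["monitor_app", "music_app"]))
print(g.on_background("principal"))
```

Note that only the suspicious subset is paused (`music_app` keeps running), which is how the design keeps the utility impact on legitimate apps small.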
【Keywords】: Internet of Things; cryptography; invasive software; mobile computing; smart phones; Android security; App Guardian; Internet of Things; IoT; RIG threat; app-level protection; malware monitoring; runtime information gathering; side-channel based defense; Androids; Bluetooth; Humanoid robots; Monitoring; Runtime; Security; Smart phones
【Paper Link】 【Pages】:931-948
【Authors】: Antonio Bianchi ; Jacopo Corbetta ; Luca Invernizzi ; Yanick Fratantonio ; Christopher Kruegel ; Giovanni Vigna
【Abstract】: Mobile applications are part of the everyday lives of billions of people, who often trust them with sensitive information. These users identify the currently focused app solely by its visual appearance, since the GUIs of the most popular mobile OSes do not show any trusted indication of the app origin. In this paper, we analyze in detail the many ways in which Android users can be confused into misidentifying an app, thus, for instance, being deceived into giving sensitive information to a malicious app. Our analysis of the Android platform APIs, assisted by an automated state-exploration tool, led us to identify and categorize a variety of attack vectors (some previously known, others novel, such as a non-escapable full-screen overlay) that allow a malicious app to surreptitiously replace or mimic the GUI of other apps and mount phishing and click-jacking attacks. Limitations in the system GUI make these attacks significantly harder to notice than on a desktop machine, leaving users completely defenseless against them. To mitigate GUI attacks, we have developed a two-layer defense. To detect malicious apps at the market level, we developed a tool that uses static analysis to identify code that could launch GUI confusion attacks. We show how this tool detects apps that might launch GUI attacks, such as ransomware programs. Since these attacks are meant to confuse humans, we have also designed and implemented an on-device defense that addresses the underlying issue of the lack of a security indicator in the Android GUI. We add such an indicator to the system navigation bar; this indicator securely informs users about the origin of the app with which they are interacting (e.g., the PayPal app is backed by "PayPal, Inc.").
We demonstrate the effectiveness of our attacks and the proposed on-device defense with a user study involving 308 human subjects, whose ability to detect the attacks increased significantly when using a system equipped with our defense.
【Keywords】: Android (operating system); graphical user interfaces; invasive software; program diagnostics; smart phones; Android platform API; Android user interface; GUI confusion attacks; app origin; attack vectors; automated state-exploration tool; click-jacking attacks; desktop machine; malicious app; mobile OS; mobile applications; on-device defense; phishing attacks; ransomware programs; security indicator; sensitive information; static analysis; system navigation bar; trusted indication; two-layer defense; visual appearance; Androids; Graphical user interfaces; Humanoid robots; Navigation; Security; Smart phones; mobile-security; static-analysis; usable-security