2014 IEEE Symposium on Security and Privacy, SP 2014, Berkeley, CA, USA, May 18-21, 2014. IEEE Computer Society 【DBLP Link】
【Paper Link】 【Pages】:3-18
【Authors】: Zhou Li ; Sumayah A. Alrwais ; XiaoFeng Wang ; Eihal Alowaisheq
【Abstract】: Compromised websites that redirect web traffic to malicious hosts play a critical role in organized web crimes, serving as doorways to all kinds of malicious web activities (e.g., drive-by downloads, phishing, etc.). They are also among the most elusive components of a malicious web infrastructure and extremely difficult to hunt down, due to the simplicity of redirect operations, which also happen on legitimate sites, and the extensive use of cloaking techniques. Making the detection even more challenging is the recent trend of injecting redirect scripts into JavaScript (JS) files, as those files are not indexed by search engines and their infections are therefore more difficult to catch. In our research, we look at the problem from a unique angle: the adversary's strategy and constraints for deploying redirect scripts quickly and stealthily. Specifically, we found that such scripts are often blindly injected into both JS and HTML files for rapid deployment, that changes to infected JS files are often kept to a minimum to evade detection, and that many JS files are actually JS libraries (JS-libs) whose uninfected versions are publicly available. Based upon those observations, we developed JsRED, a new technique for the automatic detection of unknown redirect-script injections. Our approach analyzes the difference between a suspicious JS-lib file and its clean counterpart to identify malicious redirect scripts, and further searches for similar scripts in other JS and HTML files. This simple, lightweight approach is found to work effectively against redirect injection campaigns: our evaluation shows that JsRED captured most of the compromised websites with almost no false positives, significantly outperforming a commercial detection service in terms of finding unknown JS infections.
Based upon the compromised websites reported by JsRED, we further conducted a measurement study that reveals interesting features of redirect payloads and a new Peer-to-Peer network the adversary constructed to evade detection.
【Keywords】: Internet; Java; Web sites; authoring languages; hypermedia markup languages; peer-to-peer computing; security of data; HTML files; JS files; JavaScript; JsRED; Web sites; malicious Web infrastructure; peer-to-peer network; redirect-script injection detection; Browsers; Feeds; HTML; Libraries; Payloads; Security; Servers; Compromised Web Sites; Differential Analysis; Web Redirection
【Paper Link】 【Pages】:19-33
【Authors】: Sangho Lee ; Youngsok Kim ; Jangwoo Kim ; Jong Kim
【Abstract】: Graphics processing units (GPUs) are important components of modern computing devices, used not only for graphics rendering but also for efficient parallel computations. However, their security problems are ignored despite their importance and popularity. In this paper, we first perform an in-depth security analysis on GPUs to detect security vulnerabilities. We observe that contemporary, widely-used GPUs, both NVIDIA's and AMD's, do not initialize newly allocated GPU memory pages, which may contain sensitive user data. By exploiting such vulnerabilities, we propose attack methods for revealing a victim program's data kept in GPU memory both during its execution and right after its termination. We further show the high applicability of the proposed attacks by applying them to the Chromium and Firefox web browsers, which use GPUs for accelerating webpage rendering. We detect that both browsers leave rendered webpage textures in GPU memory, so that we can infer which web pages a victim user has visited by analyzing the remaining textures. The accuracy of our advanced inference attack that uses both pixel sequence matching and RGB histogram matching is up to 95.4%.
【Keywords】: Browsers; Chromium; Context; Graphics processing units; Kernel; Memory management; Security
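The texture-based inference step the abstract describes can be sketched as a toy in Python: a leaked GPU texture is matched against reference renderings of candidate pages by comparing coarse RGB histograms. The quantization, similarity measure, and all page names and pixel data below are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter

def rgb_histogram(pixels, bins=4):
    """Quantize each (r, g, b) pixel into coarse bins and count occurrences."""
    step = 256 // bins
    return Counter((r // step, g // step, b // step) for r, g, b in pixels)

def histogram_similarity(h1, h2):
    """Histogram-intersection similarity between two pixel histograms, in [0, 1]."""
    shared = sum(min(h1[k], h2[k]) for k in h1)
    total = max(sum(h1.values()), sum(h2.values()), 1)
    return shared / total

def infer_page(leaked_pixels, known_pages):
    """Match a leaked texture against reference renderings; return the best match."""
    leaked = rgb_histogram(leaked_pixels)
    return max(known_pages,
               key=lambda name: histogram_similarity(leaked, rgb_histogram(known_pages[name])))

# Toy reference renderings: one mostly-blue page, one mostly-white page.
pages = {
    "bank.example": [(10, 20, 200)] * 90 + [(255, 255, 255)] * 10,
    "news.example": [(250, 250, 250)] * 95 + [(0, 0, 0)] * 5,
}
# A "leaked" texture resembling the bank page, with a little noise.
leaked = [(12, 22, 198)] * 85 + [(255, 255, 255)] * 15
print(infer_page(leaked, pages))  # matches the bank page
```

The paper additionally uses pixel-sequence matching; a histogram alone, as here, already discriminates pages with distinctive color profiles.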
【Paper Link】 【Pages】:34-48
【Authors】: Yuan Tian ; Ying Chuan Liu ; Amar Bhosale ; Lin-Shung Huang ; Patrick Tague ; Collin Jackson
【Abstract】: HTML5 changes many aspects of the browser world by introducing numerous new concepts; in particular, the new HTML5 screen sharing API has significant security implications for browsers. One of the core assumptions on which browser security is built is that there is no cross-origin feedback loop from the client to the server. However, the screen sharing API allows creating a cross-origin feedback loop. Consequently, websites will potentially be able to see all visible content from the user's screen, irrespective of its origin. This cross-origin feedback loop, when combined with human vision limitations, can introduce new vulnerabilities. An attacker can capture sensitive information from the victim's screen using the new API without the consent of the victim. We investigate the security implications of the screen sharing API and discuss how existing defenses against traditional web attacks fail during screen sharing. We show that several attacks are possible with the help of the screen sharing API: cross-site request forgery, history sniffing, and information stealing. We discuss how popular websites such as Amazon and Wells Fargo can be attacked using this API and demonstrate the consequences of the attacks, such as economic losses, compromised accounts, and information disclosure. The objective of this paper is to present the attacks using the screen sharing API, analyze their fundamental cause, and motivate potential defenses toward a more secure screen sharing API.
【Keywords】: Web sites; application program interfaces; hypermedia markup languages; security of data; HTML5 screen sharing API; Web attacks; Web sites; history sniffing; information stealing; Browsers; Feedback loop; Google; History; Receivers; Security; Servers; Browser Security; HTML5; Screen Sharing
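The history-sniffing attack the abstract lists can be modeled in a few lines: a page renders candidate links, the link color depends on the user's history, and a screen capture of those pixels leaks which links were visited. The colors and URLs below are assumed values for illustration; the real attack captures actual rendered pixels via the screen sharing API.

```python
VISITED_COLOR = (85, 26, 139)    # a typical visited-link purple (assumed)
UNVISITED_COLOR = (0, 0, 238)    # a typical unvisited-link blue (assumed)

def render_link(url, history):
    """Browser-side: a link's rendered color depends on the user's history."""
    return VISITED_COLOR if url in history else UNVISITED_COLOR

def sniff_history(candidate_urls, captured_pixels):
    """Attacker-side: recover visited URLs from a screen capture of the links."""
    return [u for u, px in zip(candidate_urls, captured_pixels)
            if px == VISITED_COLOR]

history = {"https://bank.example/login"}
candidates = ["https://bank.example/login", "https://forum.example/"]
capture = [render_link(u, history) for u in candidates]
print(sniff_history(candidates, capture))
```

Classic CSS-based history sniffing was fixed by hiding visited-link styling from scripts; the screen capture reintroduces the leak because the *pixels* still differ.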
【Paper Link】 【Pages】:49-64
【Authors】: Mike Bond ; Omar Choudary ; Steven J. Murdoch ; Sergei P. Skorobogatov ; Ross J. Anderson
【Abstract】: EMV, also known as "Chip and PIN", is the leading system for card payments worldwide. It is used throughout Europe and much of Asia, and is starting to be introduced in North America too. Payment cards contain a chip so they can execute an authentication protocol. This protocol requires point-of-sale (POS) terminals or ATMs to generate a nonce, called the unpredictable number, for each transaction to ensure it is fresh. We have discovered two serious problems: a widespread implementation flaw and a deeper, more difficult to fix flaw with the EMV protocol itself. The first flaw is that some EMV implementers have merely used counters, timestamps or home-grown algorithms to supply this nonce. This exposes them to a "pre-play" attack which is indistinguishable from card cloning from the standpoint of the logs available to the card-issuing bank, and can be carried out even if it is impossible to clone a card physically. Card cloning is the very type of fraud that EMV was supposed to prevent. We describe how we detected the vulnerability, a survey methodology we developed to chart the scope of the weakness, evidence from ATM and terminal experiments in the field, and our implementation of proof-of-concept attacks. We found flaws in widely-used ATMs from the largest manufacturers. We can now explain at least some of the increasing number of frauds in which victims are refused refunds by banks which claim that EMV cards cannot be cloned and that a customer involved in a dispute must therefore be mistaken or complicit. The second problem was exposed by the above work. Independent of the random number quality, there is a protocol failure: the actual random number generated by the terminal can simply be replaced by one the attacker used earlier when capturing an authentication code from the card. This variant of the pre-play attack may be carried out by malware in an ATM or POS terminal, or by a man-in-the-middle between the terminal and the acquirer. 
We explore the design and implementation mistakes that enabled these flaws to evade detection until now: shortcomings of the EMV specification, of the EMV kernel certification process, of implementation testing, formal analysis, and the monitoring of customer complaints. Finally, we discuss countermeasures. More than a year after our initial responsible disclosure of these flaws to the banks, action has only been taken to mitigate the first of them, while we have seen a likely case of the second in the wild, and the spread of ATM and POS malware is making it ever more of a threat.
【Keywords】: financial data processing; invasive software; ATM malware; Asia; EMV card cloning; Europe; North America; POS malware; POS terminals; automated teller machines; card payments; counters; home-grown algorithms; man-in-the-middle attack; point-of-sale terminals; preplay attack; proof-of-concept attacks; timestamps; unpredictable number; Authentication; Authorization; Cloning; Cryptography; Online banking; Protocols; Radiation detectors
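The first flaw is easy to illustrate in a toy simulation: if the terminal's "unpredictable number" is really a counter, an attacker with brief access to the card can harvest a cryptogram for the nonce the terminal will use next, then replay it. HMAC-SHA256 stands in for the EMV application cryptogram here, and all keys and field layouts are simplified assumptions.

```python
import hmac, hashlib

CARD_KEY = b"per-card-secret"  # hypothetical card master key

def card_cryptogram(unpredictable_number: int, amount: int) -> bytes:
    """Card-side: MAC over the transaction data, keyed by the card secret."""
    msg = unpredictable_number.to_bytes(4, "big") + amount.to_bytes(4, "big")
    return hmac.new(CARD_KEY, msg, hashlib.sha256).digest()

class WeakTerminal:
    """Flawed terminal: the 'unpredictable number' is just a counter."""
    def __init__(self):
        self.counter = 0
    def next_un(self) -> int:
        self.counter += 1
        return self.counter

# Attacker with brief card access harvests a cryptogram for the UN the
# terminal will use next (predictable, because it is a counter).
terminal = WeakTerminal()
predicted_un = terminal.counter + 1
harvested = card_cryptogram(predicted_un, amount=100)

# Later, at the real terminal: the replayed cryptogram verifies, so the
# bank's logs are indistinguishable from the genuine card being present.
un = terminal.next_un()
assert harvested == card_cryptogram(un, amount=100)
```

The paper's second, protocol-level flaw is worse: even a truly random nonce can simply be substituted by a man-in-the-middle with the one used during harvesting, because nothing binds the terminal's nonce to the authorization message.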
【Paper Link】 【Pages】:67-82
【Authors】: Jinjin Liang ; Jian Jiang ; Hai-Xin Duan ; Kang Li ; Tao Wan ; Jianping Wu
【Abstract】: Content Delivery Network (CDN) and Hypertext Transfer Protocol Secure (HTTPS) are two popular but independent web technologies, each of which has been well studied individually. This paper provides a systematic study of how the two work together. We examined 20 popular CDN providers and 10,721 of their customer web sites using HTTPS. Our study reveals various problems with the current HTTPS practice adopted by CDN providers, such as widespread use of invalid certificates, private key sharing, neglected revocation of stale certificates, and insecure back-end communication. While some of those problems are operational issues only, others are rooted in the fundamental semantic conflict between the end-to-end nature of HTTPS and the man-in-the-middle nature of CDN involving multiple parties in a delegated service. To address the delegation problem when HTTPS meets CDN, we proposed and implemented a lightweight solution based on DANE (DNS-based Authentication of Named Entities), an emerging IETF protocol complementing the current Web PKI model. Our implementation demonstrates that it is feasible for HTTPS to work with CDN securely and efficiently. This paper intends to provide a context for future discussion within the security and CDN community on more preferable solutions.
【Keywords】: Internet; computer network security; protocols; CDN; DANE protocol; DNS-based authentication of named entities; HTTPS; Web technology; content delivery network; delegated service authentication; domain name system; hypertext transfer protocol secure; stale certificates; Authentication; Browsers; Protocols; Servers; Uniform resource locators
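The core of a DANE-based fix can be sketched simply: the origin publishes, in DNS, a digest of the certificate it has delegated to its CDN, and clients accept the CDN's certificate only if it matches that record. This simplifies real TLSA records (which also carry usage, selector, and matching-type fields); the certificate bytes below are placeholders.

```python
import hashlib

def tlsa_record(cert_der: bytes) -> bytes:
    """Origin-side: publish in DNS the SHA-256 digest of the certificate
    delegated to the CDN (roughly TLSA matching type 1, simplified)."""
    return hashlib.sha256(cert_der).digest()

def validate_with_dane(presented_cert: bytes, dns_tlsa: bytes) -> bool:
    """Client-side: accept the CDN's certificate only if it matches the
    digest the origin published, instead of trusting any CA-signed cert."""
    return hashlib.sha256(presented_cert).digest() == dns_tlsa

cdn_cert = b"...cdn certificate bytes (placeholder)..."
record = tlsa_record(cdn_cert)
assert validate_with_dane(cdn_cert, record)
assert not validate_with_dane(b"...forged cert (placeholder)...", record)
```

This makes the delegation explicit: the origin, not the CDN or a CA, decides which certificate is valid for its name, addressing the private-key-sharing and invalid-certificate practices the measurement found.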
【Paper Link】 【Pages】:83-97
【Authors】: Lin-Shung Huang ; Alex Rice ; Erling Ellingsen ; Collin Jackson
【Abstract】: The SSL man-in-the-middle attack uses forged SSL certificates to intercept encrypted connections between clients and servers. However, due to a lack of reliable indicators, it is still unclear how commonly these attacks occur in the wild. In this work, we have designed and implemented a method to detect the occurrence of SSL man-in-the-middle attacks on a top global website, Facebook. Over 3 million real-world SSL connections to this website were analyzed. Our results indicate that 0.2% of the SSL connections analyzed were tampered with using forged SSL certificates, most of them related to antivirus software and corporate-scale content filters. We have also identified some SSL connections intercepted by malware. Limitations of the method and possible defenses against such attacks are also discussed.
【Keywords】: certification; security of data; Facebook; SSL man-in-the-middle attack; antivirus software; corporate-scale content filters; encrypted connections; forged SSL certificate analysis; global Web site; secure socket layer; Browsers; Cryptography; Java; Protocols; Servers; Sockets; SSL; certificates; man-in-the-middle attack
【Paper Link】 【Pages】:98-113
【Authors】: Karthikeyan Bhargavan ; Antoine Delignat-Lavaud ; Cédric Fournet ; Alfredo Pironti ; Pierre-Yves Strub
【Abstract】: TLS was designed as a transparent channel abstraction to allow developers with no cryptographic expertise to protect their application against attackers that may control some clients, some servers, and may have the capability to tamper with network connections. However, the security guarantees of TLS fall short of those of a secure channel, leading to a variety of attacks. We show how some widespread false beliefs about these guarantees can be exploited to attack popular applications and defeat several standard authentication methods that rely too naively on TLS. We present new client impersonation attacks against TLS renegotiations, wireless networks, challenge-response protocols, and channel-bound cookies. Our attacks exploit combinations of RSA and Diffie-Hellman key exchange, session resumption, and renegotiation to bypass many recent countermeasures. We also demonstrate new ways to exploit known weaknesses of HTTP over TLS. We investigate the root causes for these attacks and propose new countermeasures. At the protocol level, we design and implement two new TLS extensions that strengthen the authentication guarantees of the handshake. At the application level, we develop an exemplary HTTPS client library that implements several mitigations, on top of a previously verified TLS implementation, and verify that their composition provides strong, simple application security.
【Keywords】: authorisation; client-server systems; computer network security; cryptographic protocols; data protection; public key cryptography; transport protocols; Diffie-Hellman key exchange; HTTPS client library; RSA; TLS implementation; TLS renegotiations; TLS security guarantees; application protection; application security; authentication; challenge-response protocols; channel-bound cookies; client impersonation attacks; cookie cutters; network connections; secure channel; session resumption; transparent channel abstraction; triple-handshakes; wireless networks; Authentication; Browsers; Cryptography; Libraries; Protocols; Servers
【Paper Link】 【Pages】:114-129
【Authors】: Chad Brubaker ; Suman Jana ; Baishakhi Ray ; Sarfraz Khurshid ; Vitaly Shmatikov
【Abstract】: Modern network security rests on the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. Distributed systems, mobile and desktop applications, embedded devices, and all of the secure Web rely on SSL/TLS for protection against network attacks. This protection critically depends on whether SSL/TLS clients correctly validate X.509 certificates presented by servers during the SSL/TLS handshake protocol. We design, implement, and apply the first methodology for large-scale testing of certificate validation logic in SSL/TLS implementations. Our first ingredient is "frankencerts," synthetic certificates that are randomly mutated from parts of real certificates and thus include unusual combinations of extensions and constraints. Our second ingredient is differential testing: if one SSL/TLS implementation accepts a certificate while another rejects the same certificate, we use the discrepancy as an oracle for finding flaws in individual implementations. Differential testing with frankencerts uncovered 208 discrepancies between popular SSL/TLS implementations such as OpenSSL, NSS, CyaSSL, GnuTLS, PolarSSL, MatrixSSL, etc. Many of them are caused by serious security vulnerabilities. For example, any server with a valid X.509 version 1 certificate can act as a rogue certificate authority and issue fake certificates for any domain, enabling man-in-the-middle attacks against MatrixSSL and GnuTLS. Several implementations also accept certificate authorities created by unauthorized issuers, as well as certificates not intended for server authentication. We also found serious vulnerabilities in how users are warned about certificate validation errors. When presented with an expired, self-signed certificate, NSS, Safari, and Chrome (on Linux) report that the certificate has expired - a low-risk, often ignored error - but not that the connection is insecure against a man-in-the-middle attack.
These results demonstrate that automated adversarial testing with frankencerts is a powerful methodology for discovering security flaws in SSL/TLS implementations.
【Keywords】: authorisation; computer network security; online front-ends; protocols; Chrome; CyaSSL; Frankencerts; GnuTLS; MatrixSSL; NSS; OpenSSL; PolarSSL; SSL-TLS clients; SSL-TLS handshake protocol; SSL-TLS implementations; Safari; X.509 certificates; automated adversarial testing; certificate validation errors; certificate validation logic; differential testing; man-in-the-middle attacks; modern network security; network attacks; oracle; secure sockets layer protocols; security vulnerabilities; self-signed certificate; server authentication; synthetic certificates; transport layer security protocols; Authentication; Browsers; Computer bugs; Protocols; Servers; Testing; SSL; automated testing; certificate validation
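The frankencert-plus-differential-testing loop can be sketched as follows. Certificates are assembled from a pool of parts drawn from real certificates, fed to multiple validators, and any disagreement is flagged as an oracle. The two validators below are toy stand-ins with a deliberately planted version-1-CA bug, not real SSL/TLS implementations.

```python
import random

def make_frankencert(parts_pool, rng):
    """Assemble a synthetic certificate from fields of real certificates,
    producing unusual combinations of extensions and constraints."""
    return {field: rng.choice(values) for field, values in parts_pool.items()}

def validator_a(cert):
    """Toy stand-in for one implementation: rejects v1 CA certificates."""
    return cert["version"] == 3 or cert["is_ca"] is False

def validator_b(cert):
    """Toy stand-in for another; buggy, accepts any v1 certificate as valid."""
    return cert["version"] in (1, 3)

def differential_test(parts_pool, trials=200, seed=0):
    """Any certificate on which the implementations disagree is an oracle
    pointing at a validation flaw in (at least) one of them."""
    rng = random.Random(seed)
    certs = (make_frankencert(parts_pool, rng) for _ in range(trials))
    return [c for c in certs if validator_a(c) != validator_b(c)]

pool = {"version": [1, 3], "is_ca": [True, False]}
discrepancies = differential_test(pool)
```

Every discrepancy found here is exactly the v1-CA case, mirroring how the paper's method surfaced the MatrixSSL/GnuTLS rogue-CA flaw without needing a specification of "correct" validation.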
【Paper Link】 【Pages】:133-148
【Authors】: Aaron Blankstein ; Michael J. Freedman
【Abstract】: In many client-facing applications, a vulnerability in any part can compromise the entire application. This paper describes the design and implementation of Passe, a system that protects a data store from unintended data leaks and unauthorized writes even in the face of application compromise. Passe automatically splits (previously shared-memory-space) applications into sandboxed processes. Passe limits communication between those components and the types of accesses each component can make to shared storage, such as a backend database. In order to limit components to their least privilege, Passe uses dynamic analysis on developer-supplied end-to-end test cases to learn data and control-flow relationships between database queries and previous query results, and it then strongly enforces those relationships. Our prototype of Passe acts as a drop-in replacement for the Django web framework. By running eleven unmodified, off-the-shelf applications in Passe, we demonstrate its ability to provide strong security guarantees (Passe correctly enforced 96% of the applications' policies) with little additional overhead. Additionally, in the web-specific setting of the prototype, we also mitigate the cross-component effects of cross-site scripting (XSS) attacks by combining browser HTML5 sandboxing techniques with our automatic component separation.
【Keywords】: Web services; security of data; Django web framework; HTML5 sandboxing techniques; Passe system; Web services; XSS attack; client-facing applications; control-flow relationship; cross-site scripting attack; data-flow relationship; database queries; query results; sandboxed process; security guarantee; shared-memory-space application; Browsers; Databases; Libraries; Prototypes; Runtime; Security; Servers; capabilities; isolation; principle of least privilege; security policy inference; web security
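The learn-then-enforce idea can be sketched in miniature: during test runs, record for each query site whether a query parameter always equals an earlier trusted value (e.g. the session user id); at runtime, block any query that violates a learned relationship, even if a compromised component issues it. The trace format and site names are hypothetical simplifications of Passe's analysis.

```python
def learn_constraints(traces):
    """Training phase (sketch): each trace records, per query site, the query
    parameter and the session user that produced it. If they always match,
    infer the data-flow rule 'parameter must equal session user'."""
    learned = {}
    for trace in traces:
        for site, (param, session_user) in trace.items():
            learned[site] = learned.get(site, True) and (param == session_user)
    return learned

def enforce(site, param, session_user, learned):
    """Runtime phase: reject any query that violates a learned relationship,
    regardless of what the (possibly compromised) component requests."""
    if learned.get(site) and param != session_user:
        raise PermissionError(f"query at {site} violates learned constraint")
    return "executed"

# Developer-supplied end-to-end tests: users only ever fetch their own profile.
traces = [{"get_profile": ("alice", "alice")},
          {"get_profile": ("bob", "bob")}]
learned = learn_constraints(traces)
```

A compromised component asking for another user's profile is then stopped at the storage boundary rather than inside the (untrusted) application code.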
【Paper Link】 【Pages】:149-162
【Authors】: Collin Mulliner ; William K. Robertson ; Engin Kirda
【Abstract】: Graphical user interfaces (GUIs) are the predominant means by which users interact with modern programs. GUIs contain a number of common visual elements or widgets such as labels, text fields, buttons, and lists, and GUIs typically provide the ability to set attributes on these widgets to control their visibility, enabled status, and whether they are writable. While these attributes are extremely useful to provide visual cues that guide users through an application's GUI, they can also be misused for purposes for which they were not intended. In particular, in the context of GUI-based applications that include multiple privilege levels within the application, GUI element attributes are often misused as a mechanism for enforcing access control policies. In this work, we introduce GEMs, or instances of GUI element misuse, as a novel class of access control vulnerabilities in GUI-based applications. We present a classification of different GEMs that can arise through misuse of widget attributes, and describe a general algorithm for identifying and confirming the presence of GEMs in vulnerable applications. We then present GEM Miner, an implementation of our GEM analysis for the Windows platform. We evaluate GEM Miner over a test set of three complex, real-world GUI-based applications targeted at the small business and enterprise markets, and demonstrate the efficacy of our analysis by finding numerous previously unknown access control vulnerabilities in these applications. We have reported the vulnerabilities we discovered to the developers of each application, and in one case have received confirmation of the issue.
【Keywords】: authorisation; graphical user interfaces; pattern classification; GEM Miner; GEM analysis; GEM classification; GUI element attributes; GUI element misuse; Windows platform; access control policies; access control vulnerabilities; automated discovery; business; buttons; enterprise markets; graphical user interfaces; labels; lists; real-world GUI-based applications; text fields; visual cues; visual elements; widget attributes; Licenses; Privacy; Security; GUI; access control; automation; vulnerability analysis; widget
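The vulnerable pattern is easy to model: the only check guarding a privileged action is a client-side widget attribute, so flipping the attribute (which on Windows an unprivileged process can often do, e.g. via window messages) yields the privilege. The widget class and action below are a toy model, not GEM Miner's actual analysis target.

```python
class Widget:
    """A GUI widget whose 'enabled' attribute is the only gate on its
    action -- the flawed access-control pattern that GEMs exploit."""
    def __init__(self, action, enabled):
        self.action, self.enabled = action, enabled

    def click(self):
        # The sole "access control": a client-side attribute.
        if not self.enabled:
            raise RuntimeError("widget disabled")
        return self.action()

def gem_attack(widget):
    """A GEM: re-enable the widget from outside the application's logic;
    the privileged action then runs with no server-side check."""
    widget.enabled = True
    return widget.click()

admin_button = Widget(lambda: "deleted all records", enabled=False)
```

The fix is the usual one: authorization must be re-checked where the action executes, treating widget attributes as UI hints only.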
【Paper Link】 【Pages】:163-178
【Authors】: Steve Kremer ; Robert Künnemann
【Abstract】: Security APIs, key servers, and protocols that need to keep track of the status of transactions must maintain a global, non-monotonic state, e.g., in the form of a database or register. However, existing automated verification tools do not support the analysis of such stateful security protocols - sometimes for fundamental reasons, such as the encoding of the protocol as Horn clauses, which are inherently monotonic. An exception is the recent tamarin prover, which allows specifying protocols as multiset rewrite (MSR) rules, a formalism expressive enough to encode state. As multiset rewriting is a "low-level" specification language with no direct support for concurrent message passing, encoding protocols correctly is a difficult and error-prone process. We propose a process calculus which is a variant of the applied pi calculus with constructs for manipulation of a global state by processes running in parallel. We show that this language can be translated to MSR rules whilst preserving all security properties expressible in a dedicated first-order logic for security properties. The translation has been implemented in a prototype tool which uses the tamarin prover as a backend. We apply the tool to several case studies, among which a simplified fragment of PKCS#11, the Yubikey security token, and an optimistic contract signing protocol.
【Keywords】: Horn clauses; application program interfaces; cryptographic protocols; formal verification; message passing; pi calculus; specification languages; Horn clauses; PKCS#11; Yubikey security token; automated security protocol analysis; automated verification tools; concurrent message passing; encoding protocols; error-prone process; first-order logic; key servers; low-level specification language; msr rules; multiset rewrite rules; multiset rewriting; nonmonotonic state; optimistic contract signing protocol; pi calculus; process calculus; prototype tool; security API; security properties; tamarin prover; Analytical models; Calculus; Contracts; Mathematical model; Protocols; Security; Semantics; Multiset rewriting; Protocol Analysis; Security APIs
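A single multiset-rewriting step, the target formalism of the paper's translation, can be illustrated concretely: a rule consumes its left-hand-side facts and produces its right-hand-side facts, which is what makes the state non-monotonic (facts can disappear, unlike in Horn-clause models). The key-server rule below is an invented toy example, not one of the paper's case studies.

```python
from collections import Counter

def apply_rule(state: Counter, lhs: Counter, rhs: Counter):
    """One multiset-rewriting step: if the left-hand-side facts are present
    in the state, consume them and add the right-hand-side facts. Returns
    the new state, or None if the rule is not applicable."""
    if any(state[f] < n for f, n in lhs.items()):
        return None
    out = state.copy()
    out.subtract(lhs)   # facts are consumed, not merely derived
    out.update(rhs)
    return +out         # drop zero-count entries

# Toy key-server rule:  Store(k1) --[Wrap]--> Wrapped(k1)
state = Counter({"Store(k1)": 1})
state = apply_rule(state,
                   Counter({"Store(k1)": 1}),
                   Counter({"Wrapped(k1)": 1}))
```

The paper's contribution is precisely to spare protocol authors from writing such rules by hand: its calculus compiles message-passing processes with global-state operations down to rules of this shape.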
【Paper Link】 【Pages】:179-194
【Authors】: Benedikt Schmidt ; Ralf Sasse ; Cas Cremers ; David A. Basin
【Abstract】: We advance the state-of-the-art in automated symbolic cryptographic protocol analysis by providing the first algorithm that can handle Diffie-Hellman exponentiation, bilinear pairing, and AC-operators. Our support for AC-operators enables protocol specifications to use multisets, natural numbers, and finite maps. We implement the algorithm in the TAMARIN prover and provide the first symbolic correctness proofs for group key agreement protocols that use Diffie-Hellman or bilinear pairing, loops, and recursion, while at the same time supporting advanced security properties, such as perfect forward secrecy and eCK-security. We automatically verify a set of protocols, including the STR, group Joux, and GDH protocols, thereby demonstrating the effectiveness of our approach.
【Keywords】: cryptographic protocols; public key cryptography; set theory; AC-operators; Diffie-Hellman exponentiation; GDH protocols; STR; TAMARIN prover; advanced security properties; automated symbolic cryptographic protocol analysis; bilinear pairing; eCK-security; finite maps; group Joux; group key agreement protocols; multisets; natural numbers; perfect forward secrecy; protocol specifications; symbolic correctness proofs; Cognition; Cryptography; DH-HEMTs; Mathematical model; Protocols; Semantics
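The equational reasoning the prover must handle for Diffie-Hellman exponentiation boils down to identities such as (g^a)^b = (g^b)^a, which symbolic analysis cannot treat as free function symbols. A minimal numeric check of that identity, with deliberately toy (insecure) parameters:

```python
# Toy group parameters: a Mersenne prime modulus and a small generator.
# NOT secure -- illustration of the exponentiation identity only.
p, g = 2_147_483_647, 5

def dh_public(secret: int) -> int:
    """Public value g^secret mod p."""
    return pow(g, secret, p)

def dh_shared(peer_public: int, secret: int) -> int:
    """Shared key (peer_public)^secret mod p."""
    return pow(peer_public, secret, p)

a, b = 123_456, 654_321
# (g^a)^b == (g^b)^a -- the identity the symbolic backend must reason about.
assert dh_shared(dh_public(b), a) == dh_shared(dh_public(a), b)
```

Group protocols such as STR iterate this agreement up a key tree; the bilinear-pairing case adds a further equation that is beyond a plain modular-exponentiation sketch.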
【Paper Link】 【Pages】:197-211
【Authors】: Nedim Srndic ; Pavel Laskov
【Abstract】: Learning-based classifiers are increasingly used for detection of various forms of malicious data. However, if they are deployed online, an attacker may attempt to evade them by manipulating the data. Examples of such attacks have been previously studied under the assumption that an attacker has full knowledge about the deployed classifier. In practice, such assumptions rarely hold, especially for systems deployed online. A significant amount of information about a deployed classifier system can be obtained from various sources. In this paper, we experimentally investigate the effectiveness of classifier evasion using a real, deployed system, PDFrate, as a test case. We develop a taxonomy for practical evasion strategies and adapt known evasion algorithms to implement specific scenarios in our taxonomy. Our experimental results reveal a substantial drop of PDFrate's classification scores and detection accuracy after it is exposed even to simple attacks. We further study potential defense mechanisms against classifier evasion. Our experiments reveal that the original technique proposed for PDFrate is only effective if the executed attack exactly matches the anticipated one. In the discussion of the findings of our study, we analyze some potential techniques for increasing robustness of learning-based systems against adversarial manipulation of data.
【Keywords】: learning (artificial intelligence); pattern classification; security of data; PDFrate; adversarial data manipulation; learning-based classifier evasion; learning-based systems; malicious data detection; Feature extraction; Learning systems; Malware; Portable document format; Taxonomy; Training
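A mimicry-style evasion of the kind the paper studies can be sketched with a toy linear classifier: the attacker keeps the features tied to the exploit fixed and overwrites only the mutable features with values copied from a benign file. The feature names, weights, and profiles below are invented for illustration and are not PDFrate's actual feature set.

```python
def score(features, weights, bias=0.0):
    """Toy linear classifier: positive score => flagged as malicious."""
    return sum(w * features.get(f, 0.0) for f, w in weights.items()) + bias

def mimicry_evade(features, weights, mutable, benign_profile):
    """Gray-box evasion sketch: change only the features the attacker can
    alter without breaking the exploit, copying a benign file's values."""
    evaded = dict(features)
    for f in mutable:
        evaded[f] = benign_profile.get(f, 0.0)
    return evaded

weights = {"js_entropy": 2.0, "page_count": -0.5, "font_objs": -0.3}
malicious = {"js_entropy": 3.0, "page_count": 1, "font_objs": 0}
benign_profile = {"page_count": 12, "font_objs": 9}

evaded = mimicry_evade(malicious, weights,
                       mutable=["page_count", "font_objs"],
                       benign_profile=benign_profile)
```

Note how little knowledge the attacker needs: only a rough sense of which features matter and one benign example, consistent with the paper's finding that even simple attacks degrade the deployed system's scores substantially.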
【Paper Link】 【Pages】:212-226
【Authors】: Sadia Afroz ; Aylin Caliskan Islam ; Ariel Stolerman ; Rachel Greenstadt ; Damon McCoy
【Abstract】: Stylometry is a method for identifying the authors of anonymous texts by analyzing their writing style. While stylometric methods have produced impressive results in previous experiments, we wanted to explore their performance on a challenging dataset of particular interest to the security research community. Analysis of underground forums can provide key information about who controls a given bot network or sells a service, and about the size and scope of the cybercrime underworld. Previous analyses have been accomplished primarily through analysis of limited structured metadata and painstaking manual analysis. However, the key challenge is to automate this process, since this labor-intensive manual approach clearly does not scale. We consider two scenarios. The first involves text written by an unknown cybercriminal and a set of potential suspects. This is a standard supervised stylometry problem, made more difficult by multilingual forums that mix l33t-speak conversations with data dumps. In the second scenario, the goal is to feed a forum into an analysis engine and have it output possible doppelgangers, or users with multiple accounts. While other researchers have explored this problem, we propose a method that produces good results on actual separate accounts, as opposed to data sets created by artificially splitting authors into multiple identities. For scenario 1, we achieve 77% to 84% accuracy on private messages. For scenario 2, we achieve 94% recall with 90% precision on blogs and 85.18% precision with 82.14% recall for underground forum users. We demonstrate the utility of our approach with a case study that includes applying our technique to the Carders forum and manual analysis to validate the results, enabling the discovery of previously undetected doppelganger accounts.
【Keywords】: Internet; computer crime; meta data; standards; Carders forum; Doppelganger Finder; anonymous author identification; anonymous texts; cybercrime underworld; cybercriminal; data dumps; l33t-speak conversations; multilingual forums; security research community; structured metadata; stylometric methods; supervised stylometry problem; underground forum analysis; writing style analysis; Accuracy; Blogs; Detectors; Electronic mail; Manuals; Social network services; Stylometry; cybercrime; underground forum
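The doppelganger scenario can be sketched as pairwise style comparison: represent each account by the relative frequencies of a fixed set of function words and flag pairs whose style vectors are highly similar. The feature set, threshold, and example texts below are toy assumptions; the paper's actual feature set and scoring are far richer.

```python
import math
from collections import Counter

# A tiny function-word list, including forum-style tokens (assumed).
FUNCTION_WORDS = ["the", "of", "and", "to", "i", "u", "pls"]

def style_vector(text):
    """Relative frequency of each function word in the account's text."""
    words = Counter(text.lower().split())
    total = max(sum(words.values()), 1)
    return [words[w] / total for w in FUNCTION_WORDS]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def find_doppelgangers(accounts, threshold=0.9):
    """Pairwise compare writing styles; a very similar pair suggests the
    same person behind two accounts."""
    names = list(accounts)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if cosine(style_vector(accounts[a]),
                      style_vector(accounts[b])) > threshold]

accounts = {
    "acct1": "i sell cards pls u msg i pls",
    "acct2": "pls u msg i pls i sell stuff",
    "acct3": "the quality of the goods and the price to match",
}
pairs = find_doppelgangers(accounts)
```

Real forums require the paper's extra machinery (multilingual handling, separating l33t-speak from data dumps, and validation against genuinely separate accounts), but the pairwise-similarity core has this shape.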
【Paper Link】 【Pages】:227-242
【Authors】: Andrea Bittau ; Adam Belay ; Ali José Mashtizadeh ; David Mazières ; Dan Boneh
【Abstract】: We show that it is possible to write remote stack buffer overflow exploits without possessing a copy of the target binary or source code, against services that restart after a crash. This makes it possible to hack proprietary closed-binary services, or open-source servers manually compiled and installed from source where the binary remains unknown to the attacker. Traditional techniques are usually tailored to a particular binary and distribution, where the hacker knows the location of useful gadgets for Return Oriented Programming (ROP). Our Blind ROP (BROP) attack instead remotely finds enough ROP gadgets to perform a write system call and transfers the vulnerable binary over the network, after which an exploit can be completed using known techniques. This is accomplished by leaking a single bit of information based on whether a process crashed or not when given a particular input string. BROP requires a stack vulnerability and a service that restarts after a crash. We implemented Braille, a fully automated exploit that yielded a shell in under 4,000 requests (20 minutes) against a contemporary nginx vulnerability, yaSSL + MySQL, and a toy proprietary server written by a colleague. The attack works against modern 64-bit Linux with address space layout randomization (ASLR), no-execute page protection (NX) and stack canaries.
【Keywords】: Linux; security of data; Braille; Linux; address space layout randomization; blind ROP attack; nginx vulnerability; no-execute page protection; open-source servers hacking; proprietary closed-binary services hacking; return oriented programming; stack buffer overflow; stack canaries; write system call; Computer crashes; Layout; Libraries; Linux; Registers; Servers; Sockets
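The single-bit leak at the heart of BROP can be simulated: because a forking server keeps the same stack canary across restarts, the attacker extends a guess one byte at a time, observing only crash vs. no-crash, so each secret byte costs at most 256 probes instead of a 2^32 brute force. The service below is a stand-in, not a real network target.

```python
import os

class CrashingService:
    """Stand-in for a forking server: the canary stays the same after every
    restart, and the only observable signal is crash vs. survive."""
    def __init__(self):
        self._canary = os.urandom(4)

    def probe(self, guess: bytes) -> bool:
        """True if the service survived, i.e. the guess is a canary prefix."""
        return self._canary.startswith(guess)

def leak_canary(service, length=4):
    """BROP-style stack reading: grow the known prefix one byte at a time;
    each probe leaks one bit (crashed or not)."""
    known = b""
    for _ in range(length):
        for byte in range(256):
            if service.probe(known + bytes([byte])):
                known += bytes([byte])
                break
    return known

svc = CrashingService()
recovered = leak_canary(svc)
```

The same crash-oracle then locates ROP gadgets blindly, culminating in a write system call that exfiltrates the binary itself.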
【Paper Link】 【Pages】:243-258
【Authors】: Erik Bosman ; Herbert Bos
【Abstract】: Signal handling has been an integral part of UNIX systems since the earliest implementation in the 1970s. Nowadays, we find signals in all common flavors of UNIX systems, including BSD, Linux, Solaris, Android, and Mac OS. While each flavor handles signals in slightly different ways, the implementations are very similar. In this paper, we show that signal handling can be used as an attack method in exploits and backdoors. The problem has been a part of UNIX from the beginning, and now that advanced security measures like ASLR, DEP and stack cookies have made simple exploitation much harder, our technique is among the lowest hanging fruit available to an attacker. Specifically, we describe Sigreturn Oriented Programming (SROP), a novel technique for exploits and backdoors in UNIX-like systems. Like return-oriented programming (ROP), sigreturn oriented programming constructs what is known as a 'weird machine' that can be programmed by attackers to change the behavior of a process. To program the machine, attackers set up fake signal frames and initiate returns from signals that the kernel never really delivered. This is possible, because UNIX stores signal frames on the process' stack. Sigreturn oriented programming is interesting for attackers, OS developers and academics. For attackers, the technique is very versatile, with pre-conditions that are different from those of existing exploitation techniques like ROP. Moreover, unlike ROP, sigreturn oriented programming programs are portable. For OS developers, the technique presents a problem that has been present in one of the two main operating system families from its inception, while the fixes (which we also present) are non-trivial. From a more academic viewpoint, it is also interesting because we show that sigreturn oriented programming is Turing complete. We demonstrate the usefulness of the technique in three applications. 
First, we describe the exploitation of a vulnerable web server on different Linux distributions. Second, we build a very stealthy proof-of-concept backdoor. Third, we use SROP to bypass Apple's code signing and security vetting process by building an app that can execute arbitrary system calls. Finally, we discuss mitigation techniques.
【Keywords】: Linux; computer crime; ASLR; Android; Apples code signing; BSD; DEP; Linux distributions; Mac OS; OS developers; SROP; Solaris; Turing complete; UNIX systems; advanced security measures; attack method; attackers; exploitation techniques; fake signal frames; mitigation techniques; portable shellcode; process behavior; proof-of-concept backdoor; security vetting process; signal handling; sigreturn oriented programming; stack cookies; vulnerable Web server; weird machine; Context; Kernel; Linux; Program processors; Programming; Registers; Security; Operatings system security; backdoors; exploits
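The frame-chaining mechanism the abstract describes can be illustrated with a toy simulator (the `Frame`/`CPU` names and the register values below are invented for illustration, not taken from the paper): each forged sigreturn makes the kernel restore an entire register file from attacker-controlled stack data, so a stack of fake frames re-programs the process one full context at a time.

```python
# Toy model of sigreturn-oriented programming (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Frame:                      # a fake signal frame: a full register file
    regs: dict                    # e.g. {"rip": ..., "rax": ...}

@dataclass
class CPU:
    regs: dict = field(default_factory=dict)

def sigreturn(cpu, stack):
    """Pop the next fake frame and restore *every* register from it."""
    frame = stack.pop()
    cpu.regs = dict(frame.regs)   # the kernel restores the whole context
    return cpu

# Chain two frames: each restore hands the attacker a fresh, fully
# chosen machine state (popped from the end of the list, so the last
# element is the first frame "delivered").
stack = [Frame({"rip": 0xdead, "rax": 0}),       # consumed second
         Frame({"rip": 0x400123, "rax": 15})]    # consumed first

cpu = CPU()
sigreturn(cpu, stack)
assert cpu.regs["rax"] == 15      # attacker chose every register value
sigreturn(cpu, stack)
assert cpu.regs["rip"] == 0xdead  # and the next control transfer
```

The point of the sketch is that, unlike a ROP gadget, one "instruction" of this weird machine sets the complete register state, which is why the technique is portable across binaries.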
【Paper Link】 【Pages】:261-275
【Authors】: James Mickens
【Abstract】: Pivot is a new JavaScript isolation framework for web applications. Pivot uses iframes as its low-level isolation containers, but it uses code rewriting to implement synchronous cross-domain interfaces atop the asynchronous cross-frame postMessage( ) primitive. Pivot layers a distributed scheduling abstraction across the frames, essentially treating each frame as a thread which can invoke RPCs that are serviced by external threads. By rewriting JavaScript call sites, Pivot detects RPC invocations; it exchanges RPC requests and responses via postMessage( ), and it pauses and restarts frames using a novel rewriting technique that translates each frame's JavaScript code into a restartable generator function. By leveraging both iframes and rewriting, Pivot does not need to rewrite all code, providing an order-of-magnitude performance improvement over rewriting-only solutions. Compared to iframe-only approaches, Pivot provides synchronous RPC semantics, which developers typically prefer over asynchronous RPCs. Pivot also allows developers to use the full, unrestricted JavaScript language, including powerful statements like eval( ).
【Keywords】: Java; Web sites; program compilers; rewriting systems; JavaScript call sites; JavaScript code; JavaScript isolation framework; Pivot; RPC invocations; RPC requests; Web applications; asynchronous cross-frame postMessage() primitive; code rewriting; distributed scheduling abstraction; eval(); generator chains; iframe-only approaches; iframes; low-level isolation containers; order-of-magnitude performance improvement; restartable generator function; rewriting technique; rewriting-only solutions; synchronous cross-domain interfaces; synchronous mashup isolation; unrestricted JavaScript language; Browsers; Generators; Libraries; Reactive power; Runtime; Satellites; Security
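The pause-and-resume rewriting described above maps naturally onto generators; a minimal sketch (the `client`/`service`/`run` names and the toy scheduler are hypothetical, and the real system ferries messages with postMessage() between iframes rather than calling a local function):

```python
# Sketch: a synchronous-looking RPC implemented by rewriting the caller
# into a generator that pauses at each call site and resumes on reply.

def client():                      # "rewritten" code: each RPC is a yield
    total = yield ("add", 2, 3)    # reads like a blocking call
    doubled = yield ("add", total, total)
    return doubled

def service(request):              # stands in for the frame on the far side
    op, a, b = request
    assert op == "add"
    return a + b

def run(frame):
    """Toy scheduler: ferry requests and replies between the two sides."""
    gen = frame()
    try:
        request = next(gen)                      # run until the first RPC
        while True:
            request = gen.send(service(request)) # deliver reply, resume frame
    except StopIteration as done:
        return done.value                        # the frame's final result

assert run(client) == 10
```

Because the generator captures the frame's local state at each yield, the developer gets synchronous RPC semantics without the framework rewriting every statement.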
【Paper Link】 【Pages】:276-291
【Authors】: Per Larsen ; Andrei Homescu ; Stefan Brunthaler ; Michael Franz
【Abstract】: The idea of automatic software diversity is at least two decades old. The deficiencies of currently deployed defenses and the transition to online software distribution (the "App store" model) for traditional and mobile computers have revived the interest in automatic software diversity. Consequently, the literature on diversity grew by more than two dozen papers since 2008. Diversity offers several unique properties. First, unlike other defenses, it introduces uncertainty in the target. Precise knowledge of the target software provides the underpinning for a wide range of attacks. This makes diversity a broad rather than narrowly focused defense mechanism. Second, diversity offers probabilistic protection similar to cryptography: attacks may succeed by chance, so implementations must offer high entropy. Finally, the design space of diversifying program transformations is large. As a result, researchers have proposed multiple approaches to software diversity that vary with respect to threat models, security, performance, and practicality. In this paper, we systematically study the state of the art in software diversity and highlight fundamental trade-offs between fully automated approaches. We also point to open areas and unresolved challenges. These include "hybrid solutions", error reporting, patching, and implementation disclosure attacks on diversified software.
【Keywords】: cryptography; mobile computing; probability; software performance evaluation; App store model; SoK; automated software diversity; cryptography; error reporting; implementation disclosure attacks; mobile computers; online software distribution; patching attacks; probabilistic protection; program transformations; software attacks; Encoding; Layout; Monitoring; Operating systems; Registers; Security
【Paper Link】 【Pages】:292-307
【Authors】: John Criswell ; Nathan Dautenhahn ; Vikram S. Adve
【Abstract】: We present a new system, KCoFI, that is the first we know of to provide complete Control-Flow Integrity protection for commodity operating systems without using heavyweight complete memory safety. Unlike previous systems, KCoFI protects commodity operating systems from classical control-flow hijack attacks, return-to-user attacks, and code segment modification attacks. We formally verify a subset of KCoFI's design by modeling several features in small-step semantics and providing a partial proof that the semantics maintain control-flow integrity. The model and proof account for operations such as page table management, trap handlers, context switching, and signal delivery. Our evaluation shows that KCoFI prevents all the gadgets found by an open-source Return Oriented Programming (ROP) gadget-finding tool in the FreeBSD kernel from being used; it also reduces the number of indirect control-flow targets by 98.18%. Our evaluation also shows that the performance impact of KCoFI on web server bandwidth is negligible, while file transfer bandwidth using OpenSSH is reduced by an average of 13%, and at worst 27%, across a wide range of file sizes. Postmark, an extremely file-system intensive benchmark, shows a 2x overhead. Where comparable numbers are available, the overheads of KCoFI are far lower than heavyweight memory-safety techniques.
【Keywords】: Internet; file servers; object-oriented programming; operating system kernels; public domain software; FreeBSD kernel; KCoFI; OpenSSH; PostMark; Web server bandwidth; code segment modification attacks; commodity operating system kernels; control-flow hijack attacks; control-flow integrity; file transfer bandwidth; file-system intensive benchmark; formal verification; heavyweight memory-safety techniques; indirect control-flow targets; open-source ROP gadget-finding tool; open-source return oriented programming gadget-finding tool; page table management; return-to-user attacks; signal delivery; small-step semantics; trap handlers; Context; Hardware; Instruction sets; Instruments; Kernel; Security; Free BSD; compiler; control-flow integrity; formal verification; operating systems
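The core invariant KCoFI enforces, that indirect control transfers may only land on a precomputed set of valid targets, can be sketched as a toy check (illustrative only; the actual system enforces this with compiler instrumentation and protected memory, not a runtime lookup like this):

```python
# Toy control-flow-integrity check: an instrumented indirect call may
# only transfer to known-good entry points, so a ROP gadget address in
# the middle of a function is rejected.

VALID_TARGETS = {0x1000, 0x1040, 0x2000}   # e.g. function entry points

class CFIViolation(Exception):
    pass

def indirect_call(target):
    """Validate the target before 'transferring control' to it."""
    if target not in VALID_TARGETS:
        raise CFIViolation(hex(target))     # gadget mid-function: blocked
    return target                           # safe to jump

assert indirect_call(0x1040) == 0x1040      # legitimate call succeeds

blocked = False
try:
    indirect_call(0x1043)                   # address inside a function body
except CFIViolation:
    blocked = True
assert blocked
```

Shrinking `VALID_TARGETS` is exactly the 98.18% reduction in indirect control-flow targets the evaluation reports: the fewer valid targets, the fewer gadgets survive.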
【Paper Link】 【Pages】:308-323
【Authors】: Zongwei Zhou ; Miao Yu ; Virgil D. Gligor
【Abstract】: To be trustworthy, security-sensitive applications must be formally verified and hence small and simple, i.e., wimpy. Thus, they cannot include a variety of basic services available only in large and untrustworthy commodity systems, i.e., in giants. Hence, wimps must securely compose with giants to survive on commodity systems, i.e., rely on giants' services but only after efficiently verifying their results. This paper presents a security architecture based on a wimpy kernel that provides on-demand isolated I/O channels for wimp applications, without bloating the underlying trusted computing base. The size and complexity of the wimpy kernel are minimized by safely outsourcing I/O subsystem functions to an untrusted commodity operating system and exporting driver and I/O subsystem code to wimp applications. Using the USB subsystem as a case study, this paper illustrates the dramatic reduction of wimpy-kernel size and complexity, e.g., over 99% of the USB code base is removed. Performance measurements indicate that the wimpy-kernel architecture exhibits the desired execution efficiency.
【Keywords】: formal verification; operating systems (computers); peripheral interfaces; software architecture; trusted computing; I/O subsystem functions; USB code base; USB subsystem; commodity systems; formal verification; giants services; on-demand isolated I/O channels; security architecture; security-sensitive applications; trusted computing; trustworthy; untrusted commodity operating system; wimp applications; wimpy kernel complexity; wimpy kernel size; wimpy-kernel architecture; Complexity theory; Hardware; Kernel; Linux; Security; Universal Serial Bus
【Paper Link】 【Pages】:327-342
【Authors】: Shayak Sen ; Saikat Guha ; Anupam Datta ; Sriram K. Rajamani ; Janice Y. Tsai ; Jeannette M. Wing
【Abstract】: With the rapid increase in cloud services collecting and using user data to offer personalized experiences, ensuring that these services comply with their privacy policies has become a business imperative for building user trust. However, most compliance efforts in industry today rely on manual review processes and audits designed to safeguard user data, and therefore are resource intensive and lack coverage. In this paper, we present our experience building and operating a system to automate privacy policy compliance checking in Bing. Central to the design of the system are (a) Legalease, a language that allows specification of privacy policies that impose restrictions on how user data is handled, and (b) Grok, a data inventory for Map-Reduce-like big data systems that tracks how user data flows among programs. Grok maps code-level schema elements to data types in Legalease, in essence annotating existing programs with information flow types with minimal human input. Compliance checking is thus reduced to information flow analysis of big data systems. The system, bootstrapped by a small team, checks compliance daily of millions of lines of ever-changing source code written by several thousand developers.
【Keywords】: Big Data; Web services; cloud computing; computer bootstrapping; conformance testing; data privacy; parallel programming; search engines; source code (software); Bing; Grok data inventory; Legal ease language; Map-Reduce-like Big Data systems; automatic privacy policy compliance checking; business imperative privacy policies; cloud services; code-level schema element mapping; datatypes; information flow types; minimal human input; personalized user experiences; privacy compliance bootstrapping; privacy policy specification; program annotation; source code; user data handling; user trust; Advertising; Big data; Data privacy; IP networks; Lattices; Privacy; Semantics; big data; bing; compliance; information flow; policy; privacy; program analysis
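A rough sketch of how a Legalease-style clause might be checked against Grok-style program labels (the clause shape, data-type labels, and `compliant` helper below are invented for illustration; the real Legalease grammar and Grok schema differ):

```python
# Toy compliance check: a policy clause denies a data type for a
# purpose unless an exception attribute applies; each program carries
# the types it reads and the purpose it serves (Grok would infer these).

POLICY = [
    {"deny": "IPAddress", "for_purpose": "Advertising",
     "except_if": "Truncated"},            # DENY IPAddress FOR Advertising
]

def compliant(program, policy):
    """True if no deny clause matches the program's labels."""
    for clause in policy:
        if (clause["deny"] in program["reads"]
                and program["purpose"] == clause["for_purpose"]
                and clause["except_if"] not in program.get("attributes", [])):
            return False
    return True

ads_job = {"reads": {"IPAddress"}, "purpose": "Advertising", "attributes": []}
safe_job = {"reads": {"IPAddress"}, "purpose": "Advertising",
            "attributes": ["Truncated"]}   # exception clause applies

assert not compliant(ads_job, POLICY)      # raw IPs flow to advertising
assert compliant(safe_job, POLICY)         # truncated IPs are exempt
```

The paper's contribution is doing this at the scale of millions of lines of code by inferring the labels automatically, so that the check above becomes an information-flow analysis rather than a manual audit.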
【Paper Link】 【Pages】:343-358
【Authors】: Ralf Küsters ; Tomasz Truderung ; Andreas Vogt
【Abstract】: Mix nets with randomized partial checking (RPC mix nets) were introduced by Jakobsson, Juels, and Rivest as particularly simple and efficient verifiable mix nets. These mix nets have been used in several implementations of prominent e-voting systems to provide vote privacy and verifiability. In RPC mix nets, higher efficiency is traded for a lower level of privacy and verifiability. However, these mix nets have never undergone a rigorous formal analysis. Recently, Khazaei and Wikström even pointed out several severe problems in the original proposal and in implementations of RPC mix nets in e-voting systems, both for so-called re-encryption and Chaumian RPC mix nets. While Khazaei and Wikström proposed several fixes, the security status of Chaumian RPC mix nets (with the fixes applied) has been left open; re-encryption RPC mix nets, they suggest, should not be used at all. In this paper, we provide the first formal security analysis of Chaumian RPC mix nets. We propose security definitions that allow one to measure the level of privacy and verifiability RPC mix nets offer, and then, based on these definitions, carry out a rigorous analysis. Altogether, our results show that these mix nets provide a reasonable level of privacy and verifiability, and that they are still an interesting option for use in e-voting systems.
【Keywords】: cryptography; data privacy; government data processing; politics; Chaumian RPC mix nets; e-voting systems; formal security analysis; randomized partial checking; reencryption RPC mix nets; vote privacy; vote verifiability; Electronic voting; Privacy; Protocols; Public key; Servers; Accountability; Cryptographic Analysis; Mix Nets; Privacy; Random Partial Checking; Verifiability
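The "partial checking" trade-off can be illustrated with a toy audit of a single Chaumian-style mix server (a simplified sketch with invented helper names; real RPC pairs servers and reveals complementary halves of the permutation, and the inputs are onion-encrypted):

```python
# Toy randomized-partial-checking audit: the auditor challenges a random
# half of the positions and the server opens only those links, trading
# some privacy (half the links are revealed) for verifiability.

import random

def mix(inputs, perm):
    """Permute the batch (decryption is omitted in this toy)."""
    outputs = [None] * len(inputs)
    for i, j in enumerate(perm):
        outputs[j] = inputs[i]
    return outputs

def audit(inputs, outputs, perm, rng):
    """Open a random half of the links and verify each one."""
    opened = rng.sample(range(len(perm)), len(perm) // 2)
    for i in opened:
        if outputs[perm[i]] != inputs[i]:   # server opens link i -> perm[i]
            return False                    # cheating on an opened link
    return True

rng = random.Random(7)
perm = [2, 0, 3, 1]
outs = mix(["a", "b", "c", "d"], perm)
assert audit(["a", "b", "c", "d"], outs, perm, rng)   # honest server passes
```

An honest server always passes, while a server that tampered with a message is caught whenever the tampered link happens to be among the opened half, which is exactly why the scheme yields a measurable, rather than absolute, level of verifiability.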
【Paper Link】 【Pages】:359-374
【Authors】: Vasilis Pappas ; Fernando Krell ; Binh Vo ; Vladimir Kolesnikov ; Tal Malkin ; Seung Geol Choi ; Wesley George ; Angelos D. Keromytis ; Steve Bellovin
【Abstract】: Query privacy in secure DBMS is an important feature, although rarely formally considered outside the theoretical community. Because of the high overheads of guaranteeing privacy in complex queries, almost all previous works addressing practical applications consider limited queries (e.g., just keyword search), or provide a weak guarantee of privacy. In this work, we address a major open problem in private DB: efficient sublinear search for arbitrary Boolean queries. We consider scalable DBMS with provable security for all parties, including protection of the data from both server (who stores encrypted data) and client (who searches it), as well as protection of the query, and access control for the query. We design, build, and evaluate the performance of a rich DBMS system, suitable for real-world deployment on today's medium- to large-scale DBs. On a modern server, we are able to query a formula over a 10 TB, 100M-record DB, with 70 searchable index terms per DB row, in time comparable to (insecure) MySQL (many practical queries can be privately executed with work 1.2-3 times slower than MySQL, although some queries are costlier). We support a rich query set, including searching on arbitrary Boolean formulas on keywords and ranges, support for stemming, and free keyword searches over text fields. We identify and permit a reasonable and controlled amount of leakage, proving that no further leakage is possible. In particular, we allow leakage of some search pattern information, but protect the query and data, provide a high level of privacy for individual terms in the executed search formula, and hide the difference between a query that returned no results and a query that returned a very small result set. We also support private and complex access policies, integrated in the search process so that a query with an empty result set and a query that fails the policy are hard to tell apart.
【Keywords】: data privacy; database management systems; query processing; Blind Seer; MySQL; arbitrary Boolean queries; data protection; encrypted data; free keyword searches; medium-to large-scale DB; query control; query privacy; scalable private DBMS; search pattern information; secure DBMS; stemming support; sublinear search; text fields; Cryptography; Data privacy; Indexes; Privacy; Servers; Private database; private search
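Sublinear private search of this kind typically relies on a search tree of Bloom filters; the pruning logic can be sketched in the clear (helper names and parameters below are invented, and the real system evaluates this traversal under secure computation so that neither party sees the other's data):

```python
# Toy Bloom-filter search tree: each internal node's filter is the union
# of its children's, so whole subtrees with no possible match are pruned,
# giving sublinear search without scanning every record.

import hashlib

def bloom_add(bits, term):
    """Set 3 pseudorandom bit positions (out of 64) for a term."""
    for salt in (b"0", b"1", b"2"):
        bits.add(int(hashlib.sha256(salt + term).hexdigest(), 16) % 64)

def build(records):
    """records: list of (record_id, [terms]); returns the tree root."""
    if len(records) == 1:
        bits = set()
        for t in records[0][1]:
            bloom_add(bits, t)
        return {"bits": bits, "rec": records[0][0], "kids": []}
    mid = len(records) // 2
    kids = [build(records[:mid]), build(records[mid:])]
    return {"bits": kids[0]["bits"] | kids[1]["bits"], "rec": None, "kids": kids}

def search(node, term):
    probe = set()
    bloom_add(probe, term)
    if not probe <= node["bits"]:
        return []                      # prune: no descendant can match
    if node["rec"] is not None:
        return [node["rec"]]           # leaf (Bloom false positives possible)
    return [r for kid in node["kids"] for r in search(kid, term)]

tree = build([(1, [b"alice"]), (2, [b"bob"]), (3, [b"alice", b"carol"])])
assert {1, 3} <= set(search(tree, b"alice"))   # no false negatives
assert 2 in search(tree, b"bob")
```

Bloom filters never miss a true match, so the controlled leakage the abstract mentions comes from the traversal pattern, not from lost results.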
【Paper Link】 【Pages】:375-389
【Authors】: Susan Hohenberger ; Steven Myers ; Rafael Pass ; Abhi Shelat
【Abstract】: A secure ad-hoc survey scheme enables a survey authority to independently (without any interaction) select an ad-hoc group of registered users based only on their identities (e.g., their email addresses), and create a survey where only selected users can anonymously submit exactly one response. We present a formalization of secure ad-hoc surveys and a provably-secure implementation in the random oracle model, called ANONIZE. Our performance analysis shows that ANONIZE enables securely implementing million-person anonymous surveys using a single modern workstation. As far as we know, ANONIZE constitutes the first implementation of a large-scale secure computation protocol (of non-trivial functionalities) that scales to millions of users.
【Keywords】: authorisation; cryptographic protocols; ANONIZE; ad-hoc group; email addresses; large-scale anonymous survey system; large-scale secure computation protocol; modern workstation; nontrivial functionalities; million-person anonymous surveys; provably-secure implementation; random oracle model; secure ad-hoc survey scheme; survey authority; user identities; Abstracts; Educational institutions; Electronic mail; Protocols; Public key; Registers; ANONIZE; secure anonymous survey system
【Paper Link】 【Pages】:393-408
【Authors】: Luyi Xing ; Xiaorui Pan ; Rui Wang ; Kan Yuan ; XiaoFeng Wang
【Abstract】: Android is a fast evolving system, with new updates coming out one after another. These updates often completely overhaul a running system, replacing and adding tens of thousands of files across Android's complex architecture, in the presence of critical user data and applications (apps for short). To avoid accidental damages to such data and existing apps, the upgrade process involves complicated program logic, whose security implications, however, are less known. In this paper, we report the first systematic study on the Android updating mechanism, focusing on its Package Management Service (PMS). Our research brought to light a new type of security-critical vulnerabilities, called Pileup flaws, through which a malicious app can strategically declare a set of privileges and attributes on a low-version operating system (OS) and wait until it is upgraded to escalate its privileges on the new system. Specifically, we found that by exploiting the Pileup vulnerabilities, the app can not only acquire a set of newly added system and signature permissions but also determine their settings (e.g., protection levels), and it can further substitute for new system apps, contaminate their data (e.g., cache, cookies of Android default browser) to steal sensitive user information or change security configurations, and prevent installation of critical system services. We systematically analyzed the source code of PMS using a program verification tool and confirmed the presence of those security flaws on all Android official versions and over 3000 customized versions. Our research also identified hundreds of exploit opportunities the adversary can leverage over thousands of devices across different device manufacturers, carriers and countries. 
To mitigate this threat without endangering user data and apps during an upgrade, we also developed a new detection service, called SecUP, which deploys a scanner on the user's device to capture the malicious apps designed to exploit Pileup vulnerabilities, based upon the vulnerability-related information automatically collected from newly released Android OS images.
【Keywords】: Android (operating system); formal verification; invasive software; mobile computing; Android OS images; PMS; Pileup flaws; low-version operating system; malicious apps; malware; mobile OS updating; operating systems; package management service; program verification tool; security configuration; security-critical vulnerability; user information; Androids; Google; Humanoid robots; Mobile communication; Registers; Security; Smart phones; Android; OS update; Package Manager Service; Privilege Escalation
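The permission-squatting flaw at the heart of Pileup can be modeled with a toy updater (the merge logic below paraphrases the vulnerable pattern for illustration; it is not the actual PMS source, and the permission name is invented):

```python
# Toy model of a Pileup-style flaw: if a low-version app already
# *defines* a permission name that the new OS introduces, a naive
# updater keeps the app's definition -- letting the attacker choose the
# protection level of a future system permission.

def upgrade(installed_defs, new_os_perms):
    """Merge OS permission definitions, never clobbering existing ones."""
    merged = dict(new_os_perms)
    merged.update(installed_defs)          # vulnerable: the app's entry wins
    return merged

# The malicious app squats on a permission the *next* OS release defines
# as signature-protected, declaring it with the weakest level instead.
malicious_app = {"android.permission.READ_NEW_THING": "normal"}
new_os = {"android.permission.READ_NEW_THING": "signature"}

perms = upgrade(malicious_app, new_os)
# After the upgrade the sensitive permission carries only "normal"
# protection, so any app can request it without a matching signature.
assert perms["android.permission.READ_NEW_THING"] == "normal"
```

A fixed updater would have to let the OS definition take precedence (or refuse the upgrade), which is the kind of invariant the paper verifies against the PMS source.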
【Paper Link】 【Pages】:409-423
【Authors】: Xiao-yong Zhou ; Yeonjoon Lee ; Nan Zhang ; Muhammad Naveed ; XiaoFeng Wang
【Abstract】: Android phone manufacturers are under the perpetual pressure to move quickly on their new models, continuously customizing Android to fit their hardware. However, the security implications of this practice are less known, particularly when it comes to the changes made to Android's Linux device drivers, e.g., those for camera, GPS, NFC etc. In this paper, we report the first study aimed at a better understanding of the security risks in this customization process. Our study is based on ADDICTED, a new tool we built for automatically detecting some types of flaws in customized driver protection. Specifically, on a customized phone, ADDICTED performs dynamic analysis to correlate the operations on a security-sensitive device to its related Linux files, and then determines whether those files are under-protected on the Linux layer by comparing them with their counterparts on an official Android OS. In this way, we can detect a set of likely security flaws on the phone. Using the tool, we analyzed three popular phones from Samsung, identified their likely flaws and built end-to-end attacks that allow an unprivileged app to take pictures and screenshots, and even log the keys the user enters through touch screen. Some of those flaws are found to exist on over a hundred phone models and affect millions of users. We reported the flaws and helped the manufacturers fix those problems. We further studied the security settings of device files on 2423 factory images from major phone manufacturers, discovered over 1,000 vulnerable images and also gained insights about how they are distributed across different Android versions, carriers and countries.
【Keywords】: Android (operating system); computer crime; device drivers; flaw detection; smart phones; ADDICTED tool; Android Linux device driver customizations; Android OS; Android phone manufacturers; Linux files; Linux layer; Samsung phones; customized driver protection; customized phone; device files; dynamic analysis; end-to-end attacks; flaws detection; phone models; security flaws; security hazards; security implications; security risks; security settings; security-sensitive device; Cameras; Hardware; Kernel; Linux; Performance evaluation; Security; Smart phones; Android Security; Privacy
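ADDICTED's comparison step, flagging device files whose protection was weakened during customization, can be sketched as follows (the file paths and modes are invented examples; the real tool first uses dynamic analysis to find which Linux files back each sensitive device):

```python
# Toy version of the under-protection check: a device file on the
# customized image is flagged if its Unix mode grants group/other access
# that the reference (official) image does not.

import stat

def underprotected(custom, reference):
    """Return device paths whose customized mode grants extra access."""
    flagged = []
    for path, ref_mode in reference.items():
        cus_mode = custom.get(path, ref_mode)
        extra = cus_mode & ~ref_mode            # permission bits added
        if extra & (stat.S_IRWXG | stat.S_IRWXO):
            flagged.append(path)                # weaker than the reference
    return flagged

reference = {"/dev/camera": 0o600, "/dev/nfc": 0o660}
customized = {"/dev/camera": 0o666, "/dev/nfc": 0o660}  # camera opened up

assert underprotected(customized, reference) == ["/dev/camera"]
```

A world-readable/writable camera device is precisely the kind of flaw that let an unprivileged app take pictures in the paper's end-to-end attacks.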
【Paper Link】 【Pages】:424-439
【Authors】: Byoungyoung Lee ; Long Lu ; Tielei Wang ; Taesoo Kim ; Wenke Lee
【Abstract】: There have been many research efforts to secure Android applications and high-level system mechanisms. The low-level operating system designs have been overlooked, partially due to the belief that security issues at this level are similar to those on Linux, which are well studied. However, we identify that certain Android modifications are at odds with security and result in serious vulnerabilities that need to be addressed immediately. In this paper, we analyze the Zygote process creation model, an Android operating system design for speeding up application launches. Zygote weakens Address Space Layout Randomization (ASLR) because all application processes are created with largely identical memory layouts. We design both remote and local attacks capable of bypassing the weakened ASLR and executing return-oriented programming on Android. We demonstrate the attacks using real applications, such as the Chrome Browser and VLC Media Player. Further, we design and implement Morula, a secure replacement for Zygote. Morula introduces a small amount of code to the Android operating system and can be easily adopted by device vendors. Our evaluation shows that, compared to Zygote, Morula incurs a 13 MB memory increase for each running application but allows each Android process to have an individually randomized memory layout and even a slightly shorter average launch time.
【Keywords】: Android (operating system); security of data; Android modifications; Android operating system design; Chrome browser; Morula; VLC media player; Zygote process creation model; address space layout randomization; individually randomized memory layout; low-level operating system designs; return-oriented programming; security issues; weakened ASLR fortification; Androids; Browsers; Humanoid robots; Layout; Libraries; Media; Security; ASLR; Android; Security
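Why forking every app from one Zygote defeats ASLR can be shown with a toy model (the spawn functions and the deterministic stand-in RNG below are invented for a repeatable demo; real ASLR draws from a CSPRNG over a much larger range):

```python
# Toy contrast between Zygote-style and Morula-style process creation:
# Zygote randomizes the layout once and every child inherits it, so one
# leaked base address defeats ASLR in *all* apps; re-randomizing per
# process (as Morula does) keeps each app's layout independent.

class CountingRng:
    """Deterministic stand-in for an RNG, so the demo is repeatable."""
    def __init__(self):
        self.n = 0
    def randrange(self, stop):
        self.n += 1
        return self.n % stop

def zygote_spawn(n, rng):
    base = rng.randrange(256)          # randomized once, at boot
    return [base] * n                  # every forked app shares the layout

def morula_spawn(n, rng):
    return [rng.randrange(256) for _ in range(n)]   # fresh layout per app

apps_z = zygote_spawn(5, CountingRng())
apps_m = morula_spawn(5, CountingRng())

leak = apps_z[0]                       # leak one base from any one app...
assert all(b == leak for b in apps_z)  # ...and every Zygote child falls
assert len(set(apps_m)) == 5           # Morula children all differ
```

The paper's remote attack exploits exactly the first property: an address disclosed by one process is valid in every other process forked from the same Zygote.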
【Paper Link】 【Pages】:443-458
【Authors】: Marcin Andrychowicz ; Stefan Dziembowski ; Daniel Malinowski ; Lukasz Mazurek
【Abstract】: Bitcoin is a decentralized digital currency, introduced in 2008, that has recently gained noticeable popularity. Its main features are: (a) it lacks a central authority that controls the transactions, (b) the list of transactions is publicly available, and (c) its syntax allows more advanced transactions than simply transferring the money. The goal of this paper is to show how these properties of Bitcoin can be used in the area of secure multiparty computation protocols (MPCs). Firstly, we show that the Bitcoin system provides an attractive way to construct a version of "timed commitments", where the committer has to reveal his secret within a certain time frame, or to pay a fine. This, in turn, can be used to obtain fairness in some multiparty protocols. Secondly, we introduce a concept of multiparty protocols that work "directly on Bitcoin". Recall that the standard definition of the MPCs guarantees only that the protocol "emulates the trusted third party". Hence, ensuring that the inputs are correct and the outcome is respected is beyond the scope of the definition. Our observation is that the Bitcoin system can be used to go beyond the standard "emulation-based" definition, by constructing protocols that link their inputs and outputs with real Bitcoin transactions. As an instantiation of this idea we construct protocols for secure multiparty lotteries using the Bitcoin currency, without relying on a trusted authority (one of these protocols uses the Bitcoin-based timed commitments mentioned above). Our protocols guarantee fairness for the honest parties no matter how the loser behaves. For example: if one party interrupts the protocol, then her money is transferred to the honest participants. Our protocols are practical (to demonstrate it we performed their transactions in the actual Bitcoin system), and can be used in real life as a replacement for online gambling sites. 
We think that this paradigm can also have other applications. We discuss some of them.
【Keywords】: cryptographic protocols; electronic money; Bitcoin; MPC; decentralized digital currency; emulation-based definition; online gambling sites; secure multiparty computation protocols; secure multiparty lotteries; timed commitments; Cryptography; Games; Internet; Online banking; Protocols; Standards; bitcoin; lottery; multiparty computations
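The timed-commitment idea can be sketched off-chain in plain code (a hedged model with invented helper names; the actual construction encodes the deposit, deadline, and reveal condition in Bitcoin transaction scripts rather than in a local function):

```python
# Toy timed commitment: the committer publishes H(secret) together with
# a deposit; revealing the correct preimage before the deadline refunds
# the deposit, and missing the deadline forfeits it as the "fine".

import hashlib

def commit(secret, deposit):
    return {"h": hashlib.sha256(secret).hexdigest(),
            "deposit": deposit, "revealed": None}

def reveal(c, secret, now, deadline):
    """Refund the deposit on a timely, correct reveal; otherwise zero."""
    if now <= deadline and hashlib.sha256(secret).hexdigest() == c["h"]:
        c["revealed"] = secret
        return c["deposit"]            # committer gets the deposit back
    return 0                           # too late or wrong: the fine is paid

c = commit(b"heads", deposit=10)
assert reveal(c, b"heads", now=5, deadline=8) == 10   # timely reveal
c2 = commit(b"tails", deposit=10)
assert reveal(c2, b"tails", now=9, deadline=8) == 0   # missed deadline
```

Fairness in the lottery follows from this forfeit rule: a party who aborts rather than reveal a losing value still pays, so the honest participants are compensated.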
【Paper Link】 【Pages】:459-474
【Authors】: Eli Ben-Sasson ; Alessandro Chiesa ; Christina Garman ; Matthew Green ; Ian Miers ; Eran Tromer ; Madars Virza
【Abstract】: Bitcoin is the first digital currency to see widespread adoption. While payments are conducted between pseudonyms, Bitcoin cannot offer strong privacy guarantees: payment transactions are recorded in a public decentralized ledger, from which much information can be deduced. Zerocoin (Miers et al., IEEE S&P 2013) tackles some of these privacy issues by unlinking transactions from the payment's origin. Yet, it still reveals payments' destinations and amounts, and is limited in functionality. In this paper, we construct a full-fledged ledger-based digital currency with strong privacy guarantees. Our results leverage recent advances in zero-knowledge Succinct Non-interactive Arguments of Knowledge (zk-SNARKs). First, we formulate and construct decentralized anonymous payment schemes (DAP schemes). A DAP scheme enables users to directly pay each other privately: the corresponding transaction hides the payment's origin, destination, and transferred amount. We provide formal definitions and proofs of the construction's security. Second, we build Zerocash, a practical instantiation of our DAP scheme construction. In Zerocash, transactions are less than 1 kB and take under 6 ms to verify - orders of magnitude more efficient than the less-anonymous Zerocoin and competitive with plain Bitcoin.
【Keywords】: data privacy; electronic money; Bitcoin; DAP schemes; Zerocash; decentralized anonymous payment schemes; decentralized anonymous payments; full-fledged ledger-based digital currency; payment transactions; privacy guarantees; public decentralized ledger; zero-knowledge succinct noninteractive arguments of knowledge; zk-SNARKs; Logic gates; Online banking; Privacy; Protocols; Public key; Bitcoin; decentralized electronic cash; zero knowledge
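The coin-commitment/nullifier mechanics behind such a scheme can be sketched in the clear (a toy model with invented field names; in Zerocash these checks are hidden inside zk-SNARK proofs, and the hashes below merely stand in for the scheme's commitments):

```python
# Toy double-spend logic in the spirit of anonymous payment schemes:
# minting records an opaque commitment on the ledger; spending reveals
# only a nullifier derived from the coin's secret seed, so spends are
# unlinkable to mints but a second spend of the same coin is rejected.

import hashlib, os

def mint(value):
    rho = os.urandom(16)               # per-coin serial-number seed
    r = os.urandom(16)                 # commitment randomness
    cm = hashlib.sha256(rho + r + value.to_bytes(8, "big")).hexdigest()
    return {"rho": rho, "r": r, "value": value, "cm": cm}

def spend(coin, ledger_cms, spent_nullifiers):
    if coin["cm"] not in ledger_cms:
        return False                   # coin was never minted
    nf = hashlib.sha256(b"nf" + coin["rho"]).hexdigest()
    if nf in spent_nullifiers:
        return False                   # double spend detected
    spent_nullifiers.add(nf)
    return True

coin = mint(5)
ledger, spent = {coin["cm"]}, set()
assert spend(coin, ledger, spent)      # first spend accepted
assert not spend(coin, ledger, spent)  # second spend rejected
```

The zk-SNARK's job in the real scheme is to prove, without revealing `rho`, `r`, or the value, that some ledger commitment opens to the coin whose nullifier is being published.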
【Paper Link】 【Pages】:475-490
【Authors】: Andrew Miller ; Ari Juels ; Elaine Shi ; Bryan Parno ; Jonathan Katz
【Abstract】: Bitcoin is widely regarded as the first broadly successful e-cash system. An oft-cited concern, though, is that mining Bitcoins wastes computational resources. Indeed, Bitcoin's underlying mining mechanism, which we call a scratch-off puzzle (SOP), involves continuously attempting to solve computational puzzles that have no intrinsic utility. We propose a modification to Bitcoin that repurposes its mining resources to achieve a more broadly useful goal: distributed storage of archival data. We call our new scheme Permacoin. Unlike Bitcoin and its proposed alternatives, Permacoin requires clients to invest not just computational resources, but also storage. Our scheme involves an alternative scratch-off puzzle for Bitcoin based on Proofs-of-Retrievability (PORs). Successfully minting money with this SOP requires local, random access to a copy of a file. Given the competition among mining clients in Bitcoin, this modified SOP gives rise to highly decentralized file storage, thus reducing the overall waste of Bitcoin. Using a model of rational economic agents we show that our modified SOP preserves the essential properties of the original Bitcoin puzzle. We also provide parameterizations and calculations based on realistic hardware constraints to demonstrate the practicality of Permacoin as a whole.
【Keywords】: data mining; digital storage; electronic money; Bitcoin mining; Bitcoin; Permacoin; archival data; computational puzzles; computational resources; data preservation; distributed storage; e-cash system; proofs-of-retrievability; rational economic agents; scratch-off puzzle; Data mining; Investment; Online banking; Outsourcing; Peer-to-peer computing; Public key
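A storage-bound scratch-off puzzle of this kind can be sketched with toy parameters (the helper names, block sizes, and difficulty below are invented for illustration, and the real scheme additionally requires Merkle proofs that the served blocks belong to the archived file):

```python
# Toy scratch-off puzzle tied to storage: a puzzle seed pseudorandomly
# selects k file blocks, and a valid ticket hashes those blocks -- so a
# miner without local, random access to the file cannot even try.

import hashlib

def challenge_indices(seed, n_blocks, k):
    """Derive k block indices pseudorandomly from the puzzle seed."""
    idx = []
    for i in range(k):
        h = hashlib.sha256(seed + bytes([i])).digest()
        idx.append(int.from_bytes(h[:4], "big") % n_blocks)
    return idx

def mine(seed, blocks, k, difficulty):
    """A ticket hashes the challenged blocks; valid if below the target."""
    chosen = challenge_indices(seed, len(blocks), k)
    material = b"".join(blocks[i] for i in chosen)
    ticket = hashlib.sha256(seed + material).digest()
    return ticket, int.from_bytes(ticket[:2], "big") < difficulty

blocks = [bytes([b]) * 32 for b in range(16)]    # the locally stored file
ticket, ok = mine(b"block-header", blocks, k=3, difficulty=2**16)
assert ok                                        # 2**16 accepts every ticket here
```

Lowering `difficulty` restores a Bitcoin-style lottery, while the block-selection step is what converts each mining attempt into an implicit proof of retrievability.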
【Paper Link】 【Pages】:493-508
【Authors】: Sai Teja Peddinti ; Aleksandra Korolova ; Elie Bursztein ; Geetanjali Sampemane
【Abstract】: Most of what we understand about data sensitivity is through user self-report (e.g., surveys); this paper is the first to use behavioral data to determine content sensitivity, via the clues that users give as to what information they consider private or sensitive through their use of privacy-enhancing product features. We perform a large-scale analysis of user anonymity choices during their activity on Quora, a popular question-and-answer site. We identify categories of questions for which users are more likely to exercise anonymity and explore several machine learning approaches towards predicting whether a particular answer will be written anonymously. Our findings validate the viability of the proposed approach towards an automatic assessment of data sensitivity, show that data sensitivity is a nuanced measure that should be viewed on a continuum rather than as a binary concept, and advance the idea that machine learning over behavioral data can be effectively used in order to develop product features that can help keep users safe.
【Keywords】: data privacy; learning (artificial intelligence); Quora; automatic assessment; behavioral data; cloak; content sensitivity; data sensitivity; machine learning; privacy enhancing product features; question-and-answer site; swagger; user anonymity; user self-report; Context; Crawlers; Data privacy; Facebook; Privacy; Search engines; Sensitivity
【Paper Link】 【Pages】:509-523
【Authors】: Jose Lopes ; Nuno Neves
【Abstract】: RaptorQ is the most advanced fountain code proposed so far. Its properties make it attractive for forward error correction (FEC), offering high reliability at low overheads (i.e., for a small amount of repair information) and efficient encoding and decoding operations. Since its emergence, RaptorQ has already been standardized by the IETF, and there is the expectation that it will be adopted by several other standardization bodies, in areas related to digital media broadcast, cellular networks, and satellite communications. The paper describes a new attack on RaptorQ that breaks the near-ideal FEC performance by carefully choosing which packets are allowed to reach the receiver. Furthermore, the attack was extended to be performed over secure channels with IPsec/ESP. The paper also proposes a few solutions to protect the code from the attack, which could be easily integrated into the implementations.
【Keywords】: cryptography; data protection; decoding; encoding; forward error correction; telecommunication security; FEC; IETF; IPsec-ESP; RaptorQ; advanced fountain code; cellular networks; code protection; decoding operations; digital media broadcast; encoding operations; forward error correction; satellite communications; secure channels; Decoding; Encoding; Equations; Generators; Maintenance engineering; Mathematical model; Receivers; fountain codes; malicious faults; raptor codes; resilience
【Paper Link】 【Pages】:524-539
【Authors】: Michael Rushanan ; Aviel D. Rubin ; Denis Foo Kune ; Colleen M. Swanson
【Abstract】: Balancing security, privacy, safety, and utility is a necessity in the health care domain, in which implantable medical devices (IMDs) and body area networks (BANs) have made it possible to continuously and automatically manage and treat a number of health conditions. In this work, we survey publications aimed at improving security and privacy in IMDs and health-related BANs, providing clear definitions and a comprehensive overview of the problem space. We analyze common themes, categorize relevant results, and identify trends and directions for future research. We present a visual illustration of this analysis that shows the progression of IMD/BAN research and highlights emerging threats. We identify three broad research categories aimed at ensuring the security and privacy of the telemetry interface, software, and sensor interface layers and discuss challenges researchers face with respect to ensuring reproducibility of results. We find that while the security of the telemetry interface has received much attention in academia, the threat of software exploitation and the sensor interface layer deserve further attention. In addition, we observe that while the use of physiological values as a source of entropy for cryptographic keys holds some promise, a more rigorous assessment of the security and practicality of these schemes is required.
【Keywords】: biomedical equipment; biomedical telemetry; body area networks; data privacy; health care; private key cryptography; prosthetics; radiotelemetry; IMD; SoK; body area networks; cryptographic keys; entropy source; health care domain; health conditions; health-related BAN; implantable medical devices; physiological values; privacy improvement; safety issue; security improvement; sensor interface layers; software exploitation threat; telemetry interface; utility issue; Communication system security; Medical diagnostic imaging; Privacy; Security; Telemetry; Wireless communication; Wireless sensor networks; body area networks; health; implantable medical devices; privacy; security
【Paper Link】 【Pages】:540-555
【Authors】: Piotr Mardziel ; Mário S. Alvim ; Michael W. Hicks ; Michael R. Clarkson
【Abstract】: A metric is proposed for quantifying leakage of information about secrets and about how secrets change over time. The metric is used with a model of information flow for probabilistic, interactive systems with adaptive adversaries. The model and metric are implemented in a probabilistic programming language and used to analyze several examples. The analysis demonstrates that adaptivity increases information flow.
【Keywords】: cryptography; high level languages; interactive systems; probability; dynamic secrets; information flow; information leakage; interactive systems; probabilistic programming language; probabilistic systems; Adaptation models; Automata; Context; History; Measurement; Probabilistic logic; Security; dynamic secret; gain function; probabilistic programming; quantitative information flow; vulnerability
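To make the idea of quantifying leakage concrete, here is a minimal sketch of the standard min-entropy leakage measure that the paper's gain-function metric generalizes: leakage is the (log of the) factor by which observing the system's output improves the adversary's chance of guessing the secret in one try. The channel and prior below are invented toy values, not from the paper.

```python
from math import log2

def prior_vulnerability(prior):
    # Adversary's best single-guess success probability before observing anything.
    return max(prior.values())

def posterior_vulnerability(prior, channel):
    # channel[s][o] = P(observation o | secret s)
    observations = {o for s in channel for o in channel[s]}
    v = 0.0
    for o in observations:
        # Best guess given each observation, weighted by that observation's probability.
        v += max(prior[s] * channel[s].get(o, 0.0) for s in prior)
    return v

def min_entropy_leakage(prior, channel):
    # Bits of leakage: log of the improvement in guessing probability.
    return log2(posterior_vulnerability(prior, channel) / prior_vulnerability(prior))

# Toy example: a uniformly random secret bit, and a system whose
# output reveals the bit correctly with probability 0.9.
prior = {0: 0.5, 1: 0.5}
channel = {0: {"a": 0.9, "b": 0.1}, 1: {"a": 0.1, "b": 0.9}}
```

On this toy channel the posterior vulnerability is 0.9, so the leakage is log2(1.8) ≈ 0.85 bits; the paper's model additionally handles secrets that change over time and adversaries that adapt their queries.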
【Paper Link】 【Pages】:559-574
【Authors】: Adam Everspaugh ; Yan Zhai ; Robert Jellinek ; Thomas Ristenpart ; Michael M. Swift
【Abstract】: Virtualized environments are widely thought to cause problems for software-based random number generators (RNGs), due to use of virtual machine (VM) snapshots as well as fewer and believed-to-be lower quality entropy sources. Despite this, we are unaware of any published analysis of the security of critical RNGs when running in VMs. We fill this gap, using measurements of Linux's RNG systems (without the aid of hardware RNGs, the most common use case today) on Xen, VMware, and Amazon EC2. Despite CPU cycle counters providing a significant source of entropy, various deficiencies in the design of the Linux RNG make its first output vulnerable during VM boots and, more critically, make it suffer from catastrophic reset vulnerabilities. We show cases in which the RNG will output the exact same sequence of bits each time it is resumed from the same snapshot. This can compromise, for example, cryptographic secrets generated after resumption. We explore legacy-compatible countermeasures, as well as a clean-slate solution. The latter is a new RNG called Whirlwind that provides a simpler, more secure solution for providing system randomness.
【Keywords】: Linux; virtual machines; Linux RNG systems; VM boots; VM snapshots; Whirlwind RNG; cryptographic secrets; entropy sources; not-so-random numbers; software-based random number generators; virtual machine; virtualized Linux; virtualized environments; Cryptography; Entropy; Hardware; Instruments; Kernel; Linux; random number generator; virtualization
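The reset vulnerability is easy to illustrate by analogy in userspace: any PRNG whose internal state is captured and later restored replays its output byte for byte, just as a guest RNG does when a VM is resumed twice from the same snapshot. This toy uses Python's `random` module as a stand-in; it is not the Linux kernel RNG.

```python
import random

# Toy analogue of a VM snapshot: capture the PRNG's internal state,
# "resume" from it twice, and observe identical output streams.
rng = random.Random(1234)
snapshot = rng.getstate()          # take the "snapshot"

run1 = [rng.randrange(256) for _ in range(8)]

rng.setstate(snapshot)             # resume from the same snapshot
run2 = [rng.randrange(256) for _ in range(8)]

assert run1 == run2  # the "resumed" machine repeats its randomness
```

The paper shows the same effect in Linux guests because too little fresh entropy is mixed in after resumption; Whirlwind's design re-seeds aggressively so that restored state diverges quickly.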
【Paper Link】 【Pages】:575-589
【Authors】: Enes Göktas ; Elias Athanasopoulos ; Herbert Bos ; Georgios Portokalidis
【Abstract】: As existing defenses like ASLR, DEP, and stack cookies are not sufficient to stop determined attackers from exploiting our software, interest in Control Flow Integrity (CFI) is growing. In its ideal form, CFI prevents flows of control that were not intended by the original program, effectively putting a stop to exploitation based on return oriented programming (and many other attacks besides). Two main problems have prevented CFI from being deployed in practice. First, many CFI implementations require source code or debug information that is typically not available for commercial software. Second, in its ideal form, the technique is very expensive. It is for this reason that current research efforts focus on making CFI fast and practical. Specifically, much of the work on practical CFI is applicable to binaries, and improves performance by enforcing a looser notion of control flow integrity. In this paper, we examine the security implications of such looser notions of CFI: are they still able to prevent code reuse attacks, and if not, how hard is it to bypass their protection? Specifically, we show that with two new types of gadgets, return oriented programming is still possible. We assess the availability of our gadget sets, and demonstrate the practicality of these results with a working exploit against Internet Explorer that bypasses modern CFI implementations.
【Keywords】: programming; security of data; CFI; Internet Explorer; code reuse attacks; control flow integrity; return oriented programming; Electronic mail; Integrated circuits; Internet; Payloads; Programming; Security; Software; Control-flow integrity evaluation; code-reuse attack
【Paper Link】 【Pages】:590-604
【Authors】: Fabian Yamaguchi ; Nico Golde ; Daniel Arp ; Konrad Rieck
【Abstract】: The vast majority of security breaches encountered today are a direct result of insecure code. Consequently, the protection of computer systems critically depends on the rigorous identification of vulnerabilities in software, a tedious and error-prone process requiring significant expertise. Unfortunately, a single flaw suffices to undermine the security of a system and thus the sheer amount of code to audit plays into the attacker's hands. In this paper, we present a method to effectively mine large amounts of source code for vulnerabilities. To this end, we introduce a novel representation of source code called a code property graph that merges concepts of classic program analysis, namely abstract syntax trees, control flow graphs and program dependence graphs, into a joint data structure. This comprehensive representation enables us to elegantly model templates for common vulnerabilities with graph traversals that, for instance, can identify buffer overflows, integer overflows, format string vulnerabilities, or memory disclosures. We implement our approach using a popular graph database and demonstrate its efficacy by identifying 18 previously unknown vulnerabilities in the source code of the Linux kernel.
【Keywords】: computational linguistics; graphs; source code (software); trees (mathematics); Linux kernel; abstract syntax trees; buffer overflows; classic program analysis; code property graphs; computer systems; control flow graphs; data structure; discovering vulnerabilities; error-prone process; format string vulnerabilities; graph traversals; insecure code; integer overflows; memory disclosures; modeling; popular graph database; program dependence graphs; rigorous identification; security breaches; single flaw suffices; source code; Abstracts; Databases; Joints; Kernel; Security; Syntactics; Graph Databases; Static Analysis; Vulnerabilities
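The traversal idea can be sketched in miniature: represent program elements as property-carrying nodes, data-dependence (PDG) relations as labeled edges, and express a vulnerability template as "a taint source reaches a sink without passing a sanitizing check". The node names, edge label, and function names below are all hypothetical; the paper's implementation stores full ASTs, CFGs, and PDGs in a graph database and queries them with graph traversals.

```python
# Hypothetical miniature code property graph: nodes carry properties,
# edges carry a relation kind (here only PDG "REACHES" edges).
nodes = {
    "n1": {"type": "call", "name": "recv"},      # taint source
    "n2": {"type": "decl", "name": "len"},
    "n3": {"type": "call", "name": "memcpy"},    # sink guarded by a check
    "n4": {"type": "cond", "name": "len < MAX"}, # sanitizing condition
    "n5": {"type": "call", "name": "memcpy"},    # unchecked sink
}
edges = [
    ("n1", "n2", "REACHES"),
    ("n2", "n4", "REACHES"),
    ("n4", "n3", "REACHES"),   # checked path: recv -> len -> cond -> memcpy
    ("n2", "n5", "REACHES"),   # unchecked path: recv -> len -> memcpy
]

def dataflow_paths(src, dst):
    # Enumerate data-dependence paths src -> dst via DFS.
    paths, stack = [], [(src, [src])]
    while stack:
        node, path = stack.pop()
        if node == dst:
            paths.append(path)
            continue
        for a, b, kind in edges:
            if a == node and kind == "REACHES" and b not in path:
                stack.append((b, path + [b]))
    return paths

def unchecked_sinks(source_name, sink_name):
    # Vulnerability template: a sink reachable from a source along
    # some data-dependence path that contains no sanitizing condition.
    srcs = [n for n, p in nodes.items() if p["name"] == source_name]
    sinks = [n for n, p in nodes.items() if p["name"] == sink_name]
    hits = set()
    for s in srcs:
        for t in sinks:
            for path in dataflow_paths(s, t):
                if not any(nodes[n]["type"] == "cond" for n in path):
                    hits.add(t)
    return hits

print(unchecked_sinks("recv", "memcpy"))  # → {'n5'}
```

Only the `memcpy` on the unchecked path is flagged; the guarded one is suppressed because its path passes through the condition node — the essence of modeling vulnerability templates as graph traversals.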
【Paper Link】 【Pages】:605-620
【Authors】: Bhushan Jain ; Mirza Basim Baig ; Dongli Zhang ; Donald E. Porter ; Radu Sion
【Abstract】: An essential goal of Virtual Machine Introspection (VMI) is assuring security policy enforcement and overall functionality in the presence of an untrustworthy OS. A fundamental obstacle to this goal is the difficulty in accurately extracting semantic meaning from the hypervisor's hardware level view of a guest OS, called the semantic gap. Over the twelve years since the semantic gap was identified, immense progress has been made in developing powerful VMI tools. Unfortunately, much of this progress has been made at the cost of reintroducing trust into the guest OS, often in direct contradiction to the underlying threat model motivating the introspection. Although this choice is reasonable in some contexts and has facilitated progress, the ultimate goal of reducing the trusted computing base of software systems is best served by a fresh look at the VMI design space. This paper organizes previous work based on the essential design considerations when building a VMI system, and then explains how these design choices dictate the trust model and security properties of the overall system. The paper then identifies portions of the VMI design space that remain under-explored, as well as potential adaptations of existing techniques to bridge the semantic gap without trusting the guest OS. Overall, this paper aims to create an essential checkpoint in the broader quest for meaningful trust in virtualized environments through VM introspection.
【Keywords】: information retrieval; security of data; trusted computing; virtual machines; virtualisation; VMI system; semantic gap; semantic meaning extraction; system security; trusted computing base; virtual machine introspection; virtualized environments; Data structures; Kernel; Linux; Monitoring; Security; Semantics; Virtual machine monitors; VM Introspection; semantic gap; trust
【Paper Link】 【Pages】:623-638
【Authors】: Chang Liu ; Yan Huang ; Elaine Shi ; Jonathan Katz ; Michael W. Hicks
【Abstract】: RAM-model secure computation addresses the inherent limitations of circuit-model secure computation considered in almost all previous work. Here, we describe the first automated approach for RAM-model secure computation in the semi-honest model. We define an intermediate representation called SCVM and a corresponding type system suited for RAM-model secure computation. Leveraging compile-time optimizations, our approach achieves order-of-magnitude speedups compared to both circuit-model secure computation and the state-of-the-art RAM-model secure computation.
【Keywords】: program compilers; random-access storage; security of data; SCVM; automated RAM-model secure computation approach; compile-time optimizations; intermediate representation; order-of-magnitude speedups; random access machine; Arrays; Computational modeling; Integrated circuit modeling; Program processors; Protocols; Random access memory; Security
【Paper Link】 【Pages】:639-654
【Authors】: Muhammad Naveed ; Manoj Prabhakaran ; Carl A. Gunter
【Abstract】: Dynamic Searchable Symmetric Encryption allows a client to store a dynamic collection of encrypted documents with a server, and later quickly carry out keyword searches on these encrypted documents, while revealing minimal information to the server. In this paper we present a new dynamic SSE scheme that is simpler and more efficient than existing schemes while revealing less information to the server than prior schemes, achieving fully adaptive security against honest-but-curious servers. We implemented a prototype of our scheme and demonstrated its efficiency on datasets from prior work. Apart from its concrete efficiency, our scheme is also simpler: in particular, it does not require the server to support any operation other than upload and download of data. Thus the server in our scheme can be based solely on a cloud storage service, rather than a cloud computation service as well, as in prior work. In building our dynamic SSE scheme, we introduce a new primitive called Blind Storage, which allows a client to store a set of files on a remote server in such a way that the server does not learn how many files are stored, or the lengths of the individual files. As each file is retrieved, the server learns of its existence (and can notice the same file being downloaded subsequently), but the file's name and contents are not revealed. This is a primitive with several applications other than SSE, and is of independent interest.
【Keywords】: cloud computing; cryptography; storage management; blind storage; cloud computation service; cloud storage service; dynamic SSE scheme; dynamic searchable symmetric encryption; fully adaptive security; honest-but-curious server security; keyword search; Adaptation models; Cloud computing; Encryption; Privacy; Servers; cloud security; dynamic searchable encryption; secure cloud storage
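The core trick behind a blind-storage-style primitive can be sketched as follows: the client derives each file's block locations from a keyed PRF over the file name, so the server only ever sees reads and writes of opaque blocks in a large array. This toy, with invented names and sizes, omits essentials of the actual Blind Storage construction — per-block encryption, collision resolution, and the over-allocation that hides file lengths.

```python
import hmac, hashlib, os

N = 64              # total blocks held by the server
BLOCK = 16          # block size in bytes
store = [os.urandom(BLOCK) for _ in range(N)]   # server: random-looking blocks
key = os.urandom(16)                            # client-side secret

def locations(name, count):
    # PRF-derived pseudorandom block indices for file `name`.
    # (A real scheme encrypts blocks and hides file lengths; this
    #  toy version skips both and merely dedupes colliding indices.)
    idx, i = [], 0
    while len(idx) < count:
        h = hmac.new(key, f"{name}:{i}".encode(), hashlib.sha256).digest()
        j = int.from_bytes(h[:4], "big") % N
        if j not in idx:
            idx.append(j)
        i += 1
    return idx

def upload(name, data):
    blocks = [data[k:k + BLOCK] for k in range(0, len(data), BLOCK)]
    for j, b in zip(locations(name, len(blocks)), blocks):
        store[j] = b.ljust(BLOCK, b"\0")

def download(name, nblocks):
    return b"".join(store[j] for j in locations(name, nblocks))

upload("report.txt", b"attack at dawn")
assert download("report.txt", 1).rstrip(b"\0") == b"attack at dawn"
```

Because the server only performs block uploads and downloads, it can be a dumb storage service — which is exactly the deployment advantage the abstract highlights.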
【Paper Link】 【Pages】:655-670
【Authors】: Aseem Rastogi ; Matthew A. Hammer ; Michael Hicks
【Abstract】: In a Secure Multiparty Computation (SMC), mutually distrusting parties use cryptographic techniques to cooperatively compute over their private data; in the process, each party learns only the explicitly revealed outputs. In this paper, we present Wysteria, a high-level programming language for writing SMCs. As with past languages, like Fairplay, Wysteria compiles secure computations to circuits that are executed by an underlying engine. Unlike past work, Wysteria provides support for mixed-mode programs, which combine local, private computations with synchronous SMCs. Wysteria complements a standard feature set with built-in support for secret shares and with wire bundles, a new abstraction that supports generic n-party computations. We have formalized Wysteria, its refinement type system, and its operational semantics. We show that Wysteria programs have an easy-to-understand single-threaded interpretation and prove that this view corresponds to the actual multi-threaded semantics. We also prove type soundness, a property we show has security ramifications, namely that information about one party's data can only be revealed to another via (agreed upon) secure computations. We have implemented Wysteria, and used it to program a variety of interesting SMC protocols from the literature, as well as several new ones. We find that Wysteria's performance is competitive with prior approaches while making programming far easier and more trustworthy.
【Keywords】: cryptographic protocols; data privacy; high level languages; multi-threading; SMC protocols; Wysteria compiles; Wysteria performance; cryptographic techniques; formalized Wysteria programs; generic n-party computations; high level programming language; mixed mode multiparty computations; mixed-mode programs; multithreaded semantics; operational semantics; private data; refinement type system; secure computations; secure multiparty computation; security ramifications; synchronous SMC; Cryptography; Educational institutions; Protocols; Semantics; Standards; Wires; Writing; Dependent type system; Functional language; Secure multi-party computation
【Paper Link】 【Pages】:673-688
【Authors】: Daniel Fett ; Ralf Küsters ; Guido Schmitz
【Abstract】: The web constitutes a complex infrastructure and, as demonstrated by numerous attacks, rigorous analysis of standards and web applications is indispensable. Inspired by successful prior work, in particular the work by Akhawe et al. as well as Bansal et al., in this work we propose a formal model for the web infrastructure. Unlike prior works, which aim at automatic analysis, our model so far is not directly amenable to automation, but it is much more comprehensive and accurate with respect to the standards and specifications. As such, it can serve as a solid basis for the analysis of a broad range of standards and applications. As a case study and another important contribution of our work, we use our model to carry out the first rigorous analysis of the BrowserID system (a.k.a. Mozilla Persona), a recently developed complex real-world single sign-on system that employs technologies such as AJAX, cross-document messaging, and HTML5 web storage. Our analysis revealed a number of very critical flaws that could not have been captured in prior models. We propose fixes for the flaws, formally state relevant security properties, and prove that the fixed system in a setting with a so-called secondary identity provider satisfies these security properties in our model. The fixes for the most critical flaws have already been adopted by Mozilla and our findings have been rewarded by the Mozilla Security Bug Bounty Program.
【Keywords】: Internet; online front-ends; program debugging; security of data; AJAX; HTML5 web storage; Mozilla security bug bounty program; Web applications; Web infrastructure; Web security; automatic analysis; browserID SSO system; cross-document messaging; formal model; formal security analysis; rigorous analysis; secondary identity provider; single sign-on system; state relevant security properties; Analytical models; Browsers; Mathematical model; Security; Standards; Web servers; Formal Security Analysis; Single Sign-on; Web Model; Web Security
【Paper Link】 【Pages】:689-704
【Authors】: Jerry Ma ; Weining Yang ; Min Luo ; Ninghui Li
【Abstract】: A probabilistic password model assigns a probability value to each string. Such models are useful for research into understanding what makes users choose more (or less) secure passwords, and for constructing password strength meters and password cracking utilities. Guess number graphs generated from password models are a widely used method in password research. In this paper, we show that probability-threshold graphs have important advantages over guess-number graphs. They are much faster to compute, and at the same time provide information beyond what is feasible in guess-number graphs. We also observe that research in password modeling can benefit from the extensive literature in statistical language modeling. We conduct a systematic evaluation of a large number of probabilistic password models, including Markov models using different normalization and smoothing methods, and found that, among other things, Markov models, when done correctly, perform significantly better than the Probabilistic Context-Free Grammar model proposed by Weir et al., which has been used as the state-of-the-art password model in recent research.
【Keywords】: Markov processes; graph theory; probability; security of data; Markov models; guess number graphs; password cracking utilities; password strength meters; probabilistic password models; probability-threshold graphs; secure passwords; statistical language modeling; Computational modeling; Dictionaries; Educational institutions; Markov processes; Probabilistic logic; Testing; Training
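A Markov password model of the kind the paper evaluates can be sketched in a few lines: an order-1 character model assigns each string the product of smoothed transition probabilities. The tiny training set and add-one (Laplace) smoothing below are illustrative only; the paper compares many orders, normalization schemes, and smoothing variants over large leaked-password corpora.

```python
from collections import defaultdict

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"
START, END = "^", "$"

def train(passwords):
    # Count character-to-character transitions, with start/end markers.
    counts = defaultdict(lambda: defaultdict(int))
    for pw in passwords:
        seq = START + pw + END
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def probability(counts, pw):
    # P(pw) = product of P(next char | prev char), Laplace-smoothed.
    p = 1.0
    seq = START + pw + END
    vocab = len(ALPHABET) + 1          # +1 for the end symbol
    for a, b in zip(seq, seq[1:]):
        total = sum(counts[a].values())
        p *= (counts[a][b] + 1) / (total + vocab)
    return p

model = train(["password", "password1", "letmein"])
# Strings resembling the training data receive higher probability.
assert probability(model, "password") > probability(model, "zzzzzzzz")
```

Sorting candidate strings by such probabilities yields guess numbers; the paper's probability-threshold graphs avoid that expensive sort by working with the probabilities directly.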
【Paper Link】 【Pages】:705-720
【Authors】: Shrirang Mare ; Andres Molina-Markham ; Cory Cornelius ; Ronald A. Peterson ; David Kotz
【Abstract】: Common authentication methods based on passwords, tokens, or fingerprints perform one-time authentication and rely on users to log out from the computer terminal when they leave. Users often do not log out, however, which is a security risk. The most common solution, inactivity timeouts, inevitably fail security (too long a timeout) or usability (too short a timeout) goals. One solution is to authenticate users continuously while they are using the terminal and automatically log them out when they leave. Several solutions are based on user proximity, but these are not sufficient: they only confirm whether the user is nearby but not whether the user is actually using the terminal. Proposed solutions based on behavioral biometric authentication (e.g., keystroke dynamics) may not be reliable, as a recent study suggests. To address this problem we propose Zero-Effort Bilateral Recurring Authentication (ZEBRA). In ZEBRA, a user wears a bracelet (with a built-in accelerometer, gyroscope, and radio) on her dominant wrist. When the user interacts with a computer terminal, the bracelet records the wrist movement, processes it, and sends it to the terminal. The terminal compares the wrist movement with the inputs it receives from the user (via keyboard and mouse), and confirms the continued presence of the user only if they correlate. Because the bracelet is on the same hand that provides inputs to the terminal, the accelerometer and gyroscope data and input events received by the terminal should correlate because their source is the same - the user's hand movement. In our experiments ZEBRA performed continuous authentication with 85% accuracy in verifying the correct user and identified all adversaries within 11s. For a different threshold that trades security for usability, ZEBRA correctly verified 90% of users and identified all adversaries within 50s.
【Keywords】: accelerometers; authorisation; gyroscopes; human computer interaction; interactive terminals; message authentication; ZEBRA; authentication methods; behavioral biometric authentication; built-in accelerometer; computer terminal; fingerprints; gyroscope; inactivity timeouts; keystroke dynamics; one-time authentication; passwords; radio; security risk; tokens; user bracelet; user hand movement; user interaction; user proximity; users authentication; wrist movement; zero-effort bilateral recurring authentication; Accelerometers; Authentication; Gyroscopes; Keyboards; Mice; Wrist; continuous authentication; deauthentication; security; usability; wearable
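The correlation check at the heart of ZEBRA can be sketched as a timing comparison between two event streams: the session stays alive only if the terminal's input events line up with motion events reported by the bracelet on the typing hand. All thresholds, window sizes, and timestamps below are invented for illustration; the real system classifies richer accelerometer/gyroscope features.

```python
# Toy sketch of the bilateral correlation idea behind ZEBRA.

def match_fraction(input_times, motion_times, window=0.25):
    # Fraction of terminal input events that have a bracelet motion
    # event within `window` seconds.
    if not input_times:
        return 0.0
    hits = sum(
        any(abs(t - m) <= window for m in motion_times)
        for t in input_times
    )
    return hits / len(input_times)

def same_user(input_times, motion_times, threshold=0.8):
    # Keep the session authenticated only while the streams correlate.
    return match_fraction(input_times, motion_times) >= threshold

typing   = [0.0, 0.4, 0.9, 1.3, 1.8]         # keyboard/mouse event times (s)
bracelet = [0.05, 0.42, 0.88, 1.31, 1.79]    # same hand: tight alignment
intruder = [3.2, 5.0]                        # wearer's hand is busy elsewhere

assert same_user(typing, bracelet)
assert not same_user(typing, intruder)
```

Because both streams originate from the same hand movement for the legitimate user, correlation is high; for an adversary at the terminal while the wearer is elsewhere, it collapses — which is how ZEBRA logs the adversary out.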