the ACM Conference on Computer and Communications Security, CCS'12, Raleigh, NC, USA, October 16-18, 2012. ACM 【DBLP Link】
【Paper Link】 【Pages】:1
【Authors】: Virgil D. Gligor
【Abstract】: A general theory of trust in networks of humans and computers must be built on both a theory of behavioral trust and a theory of computational trust. This argument is motivated by the increased participation of people in online social networking, crowdsourcing, human computation, and socio-economic protocols; e.g., protocols modeled by trust and gift-exchange games, norms-establishing contracts, and scams/deception. We illustrate a class of interactive social protocols that relies both on trustworthy properties of commodity systems (e.g., verifiable end-to-end trusted path) and on participant trust, since online verification of protocol compliance is often impractical; e.g., it can lead to undecidable problems, co-NP-complete test procedures, and user inconvenience. Trust is captured by participant preferences (i.e., risk and betrayal aversion) and beliefs in the trustworthiness of other protocol participants. Both preferences and beliefs can be enhanced whenever protocol non-compliance leads to punishment of untrustworthy participants; i.e., it seems natural that betrayal aversion can be decreased and belief in trustworthiness increased by properly defined punishment. Similarly, risk aversion can be decreased and trustworthiness increased by feasible recovery from participant non-compliance. A general theory of trust which focuses on the establishment of new trust relations where none were possible before would help create new economic opportunities. New trust relations would increase the pool of services available to users, remove cooperation barriers, and enable the "network effect" where it really matters; i.e., at the application level. Hence, it seems important that security research should enable and promote trust-enhancement infrastructures in human and computer networks; e.g., trust networks.
Finally, we argue that a general theory of trust should mirror human expectations and mental models without relying on false metaphors and analogies with the physical world. Virgil D. Gligor received his B.Sc., M.Sc., and Ph.D. degrees from the University of California at Berkeley. He taught at the University of Maryland between 1976 and 2007, and is currently a Professor of Electrical and Computer Engineering at Carnegie Mellon University and co-Director of CyLab. Over the past thirty-five years, his research interests ranged from access control mechanisms, penetration analysis, and denial-of-service protection to cryptographic protocols and applied cryptography. Gligor was an editorial board member of several IEEE and ACM journals, and the Editor in Chief of the IEEE Transactions on Dependable and Secure Computing. He received the 2006 National Information Systems Security Award jointly given by NIST and NSA in the US, and the 2011 Outstanding Innovation Award given by the ACM Special Interest Group on Security, Audit and Control.
【Keywords】: social protocols; trust; trustworthy systems
【Paper Link】 【Pages】:2-13
【Authors】: Raoul Strackx ; Frank Piessens
【Abstract】: Protecting commodity operating systems against software exploits is known to be challenging, because of their sheer size. The same goes for key software applications such as web browsers or mail clients. As a consequence, a significant fraction of internet-connected computers is infected with malware. To mitigate this threat, we propose a combined approach of (1) a run-time security architecture that can efficiently protect fine-grained software modules executing on a standard operating system, and (2) a compiler that compiles standard C source code modules to such protected binary modules. The offered security guarantees are significant: relying on a TCB of only a few thousand lines of code, we show that the power of arbitrary kernel-level or process-level malware is reduced to interacting with the module through the module's public API. With a proper API design and implementation, modules are fully protected. The run-time architecture can be loaded on demand and only incurs performance overhead when it is loaded. Benchmarks show that, once loaded, it incurs a 3.22% system-wide performance cost. For applications that make intensive use of protected modules, and hence benefit most from the security guarantees provided, the performance cost is up to 14%.
【Keywords】: fully abstract compilation; secure execution; trusted computing
【Paper Link】 【Pages】:14-27
【Authors】: Yu-Yuan Chen ; Pramod A. Jamkhedkar ; Ruby B. Lee
【Abstract】: We propose a software-hardware architecture, DataSafe, that realizes the concept of self-protecting data: data that is protected by a given policy whenever it is accessed by any application -- including unvetted third-party applications. Our architecture provides dynamic instantiations of secure data compartments (SDCs), with hardware monitoring of the information flows from the compartment using hardware policy tags associated with the data at runtime. Unbypassable hardware output control prevents confidential information from being leaked out. Unlike previous hardware information flow tracking systems, DataSafe software architecture bridges the semantic gap by supporting flexible, high-level software policies for the data, seamlessly translating these policies to efficient hardware tags at runtime. Applications need not be modified to interface to these software-hardware mechanisms. DataSafe architecture is designed to prevent illegitimate secondary dissemination of protected plaintext data by authorized recipients, to track and protect data derived from sensitive data, and to provide lifetime enforcement of the confidentiality policies associated with the sensitive data.
【Keywords】: architecture; information flow tracking; policy languages; security; self-protecting data; trusted computing
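A minimal Python sketch of the self-protecting-data idea described above: values carry policy tags that propagate to derived data, and an output gate blocks any value whose tags forbid the channel. All names here are illustrative, not DataSafe's actual interfaces.

```python
class Tagged:
    """Toy model of DataSafe-style policy tags: every value derived from
    tagged data inherits the union of its sources' tags."""
    def __init__(self, value, tags=frozenset()):
        self.value, self.tags = value, frozenset(tags)

    def __add__(self, other):
        # Information-flow rule: the result is tainted by both operands.
        return Tagged(self.value + other.value, self.tags | other.tags)

def output(channel_allowed_tags, datum):
    """Unbypassable output control: refuse to emit data whose tags are
    not permitted on this channel."""
    if datum.tags - frozenset(channel_allowed_tags):
        raise PermissionError("confidential data blocked at output")
    return datum.value

record = Tagged(42, {"medical"})   # data in a secure data compartment
salt = Tagged(7)                   # untagged public data
derived = record + salt            # tag propagates to derived data
```

A channel cleared for the "medical" tag may emit `derived`; an unrestricted public channel may not.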
【Paper Link】 【Pages】:28-37
【Authors】: Hyungon Moon ; Hojoon Lee ; Jihoon Lee ; Kihwan Kim ; Yunheung Paek ; Brent ByungHoon Kang
【Abstract】: In this paper, we present the Vigilare system, a kernel integrity monitor that is architected to snoop the bus traffic of the host system from separate, independent hardware. This snoop-based monitoring, enabled by the Vigilare system, overcomes the limitations of the snapshot-based monitoring employed in previous kernel integrity monitoring solutions. Because they inspect snapshots collected over a certain interval, previous hardware-based monitoring solutions cannot detect transient attacks that occur between snapshots. We implemented a prototype of the Vigilare system on Gaisler's grlib-based system-on-a-chip (SoC) by adding a Snooper hardware module to the host system for bus snooping. To evaluate the benefit of snoop-based monitoring, we also implemented a similar SoC with a snapshot-based monitor for comparison. The Vigilare system detected all the transient attacks without performance degradation, while the snapshot-based monitor could not detect all the attacks and induced considerable performance degradation, as much as 10% in our tuned STREAM benchmark test.
【Keywords】: hardware-based integrity monitor; kernel integrity monitor; transient attack
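The snapshot-vs-snoop distinction above can be illustrated with a toy simulation: a transient attack tampers with kernel state and restores it before the next periodic snapshot, so only a monitor observing every bus write catches it. The memory layout and values are hypothetical.

```python
memory = {0x100: "good"}   # monitored kernel data
bus_writes = []            # everything a bus-snooping monitor would observe

def bus_write(addr, val):
    bus_writes.append((addr, val))   # the snooper sees each write as it happens
    memory[addr] = val

# Transient attack: modify kernel state, then restore it before the
# snapshot-based monitor takes its next periodic snapshot.
bus_write(0x100, "hooked")   # malicious modification
bus_write(0x100, "good")     # restored -- the snapshot now looks clean

snapshot_alert = memory[0x100] != "good"               # misses the attack
snoop_alert = any(v != "good" for _, v in bus_writes)  # catches it
print(snapshot_alert, snoop_alert)   # -> False True
```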
【Paper Link】 【Pages】:38-49
【Authors】: Martin Georgiev ; Subodh Iyengar ; Suman Jana ; Rishita Anubhai ; Dan Boneh ; Vitaly Shmatikov
【Abstract】: SSL (Secure Sockets Layer) is the de facto standard for secure Internet communications. Security of SSL connections against an active network attacker depends on correctly validating public-key certificates presented when the connection is established. We demonstrate that SSL certificate validation is completely broken in many security-critical applications and libraries. Vulnerable software includes Amazon's EC2 Java library and all cloud clients based on it; Amazon's and PayPal's merchant SDKs responsible for transmitting payment details from e-commerce sites to payment gateways; integrated shopping carts such as osCommerce, ZenCart, Ubercart, and PrestaShop; AdMob code used by mobile websites; Chase mobile banking and several other Android apps and libraries; Java Web-services middleware including Apache Axis, Axis 2, Codehaus XFire, and Pusher library for Android and all applications employing this middleware. Any SSL connection from any of these programs is insecure against a man-in-the-middle attack. The root causes of these vulnerabilities are badly designed APIs of SSL implementations (such as JSSE, OpenSSL, and GnuTLS) and data-transport libraries (such as cURL) which present developers with a confusing array of settings and options. We analyze perils and pitfalls of SSL certificate validation in software based on these APIs and present our recommendations.
【Keywords】: https; public-key certificates; public-key infrastructure; security vulnerabilities; ssl; tls
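The class of API misuse the paper documents can be shown with Python's standard `ssl` module: the secure default validates the chain and the hostname, and an innocuous-looking pair of settings silently disables both, leaving the connection open to a man-in-the-middle. This is an illustration of the bug class, not code from any of the audited libraries.

```python
import ssl

# Secure default: certificate-chain verification and hostname checking on.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED and ctx.check_hostname

# The anti-pattern: two settings that together accept ANY certificate.
ctx.check_hostname = False        # must be disabled before verify_mode
ctx.verify_mode = ssl.CERT_NONE   # now no validation happens at all
```

Libraries that expose such knobs as generic "options" invite exactly the silent validation failures the paper found across SDKs and middleware.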
【Paper Link】 【Pages】:50-61
【Authors】: Sascha Fahl ; Marian Harbach ; Thomas Muders ; Matthew Smith ; Lars Baumgärtner ; Bernd Freisleben
【Abstract】: Many Android apps have a legitimate need to communicate over the Internet and are then responsible for protecting potentially sensitive data during transit. This paper seeks to better understand the potential security threats posed by benign Android apps that use the SSL/TLS protocols to protect data they transmit. Since the lack of visual security indicators for SSL/TLS usage and the inadequate use of SSL/TLS can be exploited to launch Man-in-the-Middle (MITM) attacks, an analysis of 13,500 popular free apps downloaded from Google's Play Market is presented. We introduce MalloDroid, a tool to detect apps that are potentially vulnerable to MITM attacks. Our analysis revealed that 1,074 (8.0%) of the apps examined contain SSL/TLS code that is potentially vulnerable to MITM attacks. Various forms of SSL/TLS misuse were discovered during a further manual audit of 100 selected apps that allowed us to successfully launch MITM attacks against 41 apps and gather a large variety of sensitive data. Furthermore, an online survey was conducted to evaluate users' perceptions of certificate warnings and HTTPS visual security indicators in Android's browser, showing that half of the 754 participating users were not able to correctly judge whether their browser session was protected by SSL/TLS or not. We conclude by considering the implications of these findings and discuss several countermeasures with which these problems could be alleviated.
【Keywords】: android; apps; mitma; security; ssl
【Paper Link】 【Pages】:62-72
【Authors】: Nikos Mavrogiannopoulos ; Frederik Vercauteren ; Vesselin Velichkov ; Bart Preneel
【Abstract】: This paper describes a cross-protocol attack on all versions of TLS; it can be seen as an extension of the Wagner and Schneier attack on SSL 3.0. The attack presents valid explicit elliptic curve Diffie-Hellman parameters signed by a server to a client that incorrectly interprets these parameters as valid plain Diffie-Hellman parameters. Our attack enables an adversary to successfully impersonate a server to a random client after obtaining 2^40 signed elliptic curve keys from the original server. While attacking a specific client is improbable due to the high number of signed keys required during the lifetime of one TLS handshake, it is not completely unrealistic for a setting where the server has high computational power and the attacker contents itself with recovering one out of many session keys. We remark that popular open-source server implementations are not susceptible to this attack, since they typically do not support the explicit curve option. Finally we propose a fix that renders the protocol immune to this family of cross-protocol attacks.
【Keywords】: cross-protocol attack; man-in-the-middle; server impersonation attack; ssl; tls
【Paper Link】 【Pages】:73-84
【Authors】: Mashael AlSabah ; Kevin S. Bauer ; Ian Goldberg
【Abstract】: Tor is a low-latency anonymity-preserving network that enables its users to protect their privacy online. It consists of volunteer-operated routers from all around the world that serve hundreds of thousands of users every day. Due to congestion and a low relay-to-client ratio, Tor suffers from performance issues that can potentially discourage its wider adoption, and result in overall weaker anonymity for all users. We seek to improve the performance of Tor by defining different classes of service for its traffic. We recognize that although the majority of Tor traffic is interactive web browsing, a relatively small amount of bulk downloading consumes an unfair amount of Tor's scarce bandwidth. Furthermore, these traffic classes have different time and bandwidth constraints; therefore, they should not be given the same Quality of Service (QoS), which Tor offers them today. We propose and evaluate DiffTor, a machine-learning-based approach that classifies Tor's encrypted circuits by application in real time and subsequently assigns distinct classes of service to each application. Our experiments confirm that we are able to classify circuits we generated on the live Tor network with an extremely high accuracy that exceeds 95%. We show that our real-time classification in combination with QoS can considerably improve the experience of Tor clients, as our simple techniques result in a 75% improvement in responsiveness and an 86% reduction in download times at the median for interactive users.
【Keywords】: machine learning; quality of service; tor; traffic classification
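The classify-then-prioritize pipeline described above can be sketched in a few lines. The features (cell rate, circuit lifetime), thresholds, and QoS labels here are hypothetical stand-ins for DiffTor's actual ML classifier and scheduler.

```python
QOS = {"interactive": "high-priority", "bulk": "throttled"}

def classify_circuit(cells_per_sec, lifetime_sec):
    """Toy stand-in for DiffTor's real-time circuit classifier.
    The thresholds are illustrative, not taken from the paper."""
    if cells_per_sec > 50 and lifetime_sec > 600:
        return "bulk"        # sustained heavy transfer, e.g. bulk download
    return "interactive"     # bursty, short-lived web browsing

circuit_class = classify_circuit(cells_per_sec=120, lifetime_sec=1800)
print(circuit_class, QOS[circuit_class])   # -> bulk throttled
```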
【Paper Link】 【Pages】:85-96
【Authors】: Max Schuchard ; John Geddes ; Christopher Thompson ; Nicholas Hopper
【Abstract】: Decoy Routing is a new approach to Internet censorship circumvention that was recently and independently proposed at FOCI'11, USENIX Security'11 and CCS'11. Decoy routing aims to hamper nation-state level Internet censorship by having routers, rather than end hosts, relay traffic to blocked destinations. We analyze the security of these schemes against a routing capable adversary, a censoring authority that is willing to make routing decisions in response to decoy routing systems. We explore China, Syria, Iran, and Egypt as routing capable adversaries, and evaluate several attacks that defeat the security goals of existing decoy routing proposals. In particular, we show that a routing capable adversary can enumerate the participating routers implementing these protocols; can successfully avoid sending traffic along routes containing these routers with little or no adverse effects; can identify users of these schemes through active and passive attacks; and in some cases can probabilistically identify connections to targeted destinations.
【Keywords】: bgp; decoy routing; telex; cirripede; censorship
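The core avoidance attack above — once the censor has enumerated the ASes deploying decoy routers, it simply routes around them — reduces to path search in a graph with forbidden nodes. The topology below is a made-up example, not real AS data.

```python
from collections import deque

def route_avoiding(graph, src, dst, decoy_ases):
    """BFS for any path from src to dst that never transits an AS known
    to host decoy routers (the routing-capable adversary's attack)."""
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen and nxt not in decoy_ases:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None   # every route to dst passes through a decoy AS

# Hypothetical topology: the censoring country has two transit options.
topology = {"CN": ["AS1", "AS2"], "AS1": ["US"], "AS2": ["US"]}
print(route_avoiding(topology, "CN", "US", decoy_ases={"AS1"}))
```

If decoys cover only some transit ASes, the censor still reaches the destination; only full coverage forces traffic past a decoy.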
【Paper Link】 【Pages】:97-108
【Authors】: Hooman Mohajeri Moghaddam ; Baiyu Li ; Mohammad Derakhshani ; Ian Goldberg
【Abstract】: The Tor network is designed to provide users with low-latency anonymous communications. Tor clients build circuits with publicly listed relays to anonymously reach their destinations. However, since the relays are publicly listed, they can be easily blocked by censoring adversaries. Consequently, the Tor project envisioned the possibility of unlisted entry points to the Tor network, commonly known as bridges. We address the issue of preventing censors from detecting the bridges by observing the communications between them and nodes in their network. We propose a model in which the client obfuscates its messages to the bridge in a widely used protocol over the Internet. We investigate using Skype video calls as our target protocol, and our goal is to make it difficult for the censoring adversary to distinguish between the obfuscated bridge connections and actual Skype calls using statistical comparisons. We have implemented our model as a proof-of-concept pluggable transport for Tor, which is available under an open-source licence. Using this implementation we observed the obfuscated bridge communications, compared them with those of Skype calls, and present the results.
【Keywords】: bridges; censorship circumvention; pluggable transports; protocol obfuscation; skype; steganography; tor
【Paper Link】 【Pages】:109-120
【Authors】: Zachary Weinberg ; Jeffrey Wang ; Vinod Yegneswaran ; Linda Briesemeister ; Steven Cheung ; Frank Wang ; Dan Boneh
【Abstract】: Internet censorship by governments is an increasingly common practice worldwide. Internet users and censors are locked in an arms race: as users find ways to evade censorship schemes, the censors develop countermeasures for the evasion tactics. One of the most popular and effective circumvention tools, Tor, must regularly adjust its network traffic signature to remain usable. We present StegoTorus, a tool that comprehensively disguises Tor from protocol analysis. To foil analysis of packet contents, Tor's traffic is steganographed to resemble an innocuous cover protocol, such as HTTP. To foil analysis at the transport level, the Tor circuit is distributed over many shorter-lived connections with per-packet characteristics that mimic cover-protocol traffic. Our evaluation demonstrates that StegoTorus improves the resilience of Tor to fingerprinting attacks and delivers usable performance.
【Keywords】: anticensorship; circumvention tools; steganography
【Paper Link】 【Pages】:121-132
【Authors】: Qiyan Wang ; Xun Gong ; Giang T. K. Nguyen ; Amir Houmansadr ; Nikita Borisov
【Abstract】: A key challenge in censorship-resistant web browsing is being able to direct legitimate users to redirection proxies while preventing censors, posing as insiders, from discovering their addresses and blocking them. We propose a new framework for censorship-resistant web browsing called CensorSpoofer that addresses this challenge by exploiting the asymmetric nature of web browsing traffic and making use of IP spoofing. CensorSpoofer decouples the upstream and downstream channels, using a low-bandwidth indirect channel for delivering outbound requests (URLs) and a high-bandwidth direct channel for downloading web content. The upstream channel hides the request contents using steganographic encoding within email or instant messages, whereas the downstream channel uses IP address spoofing so that the real address of the proxies is not revealed either to legitimate users or censors. We built a proof-of-concept prototype that uses encrypted VoIP for this downstream channel and demonstrated the feasibility of using the CensorSpoofer framework in a realistic environment.
【Keywords】: asymmetric communication; censorship resistance; ip spoofing; voice-over-ip
【Paper Link】 【Pages】:133-144
【Authors】: Dimitris Geneiatakis ; Georgios Portokalidis ; Vasileios P. Kemerlis ; Angelos D. Keromytis
【Abstract】: Applications can be logically separated into parts that face different types of threats, or suffer dissimilar exposure to a particular threat because of external events or innate properties of the software. Based on this observation, we propose the virtual partitioning of applications that will allow the selective and targeted application of those protection mechanisms that are most needed on each partition, or manage an application's attack surface by protecting the most exposed partition. We demonstrate the value of our scheme by introducing a methodology to automatically partition software, based on the intrinsic property of user authentication. Our approach is able to automatically determine the point where users authenticate, without access to source code. At runtime, we employ a monitor that utilizes the identified authentication points, as well as events like accessing specific files, to partition execution and adapt defenses by switching between protection mechanisms of varied intensity, such as dynamic taint analysis and instruction-set randomization. We evaluate our approach using seven well-known network applications, including the MySQL database server. Our results indicate that our methodology can accurately discover authentication points. Furthermore, we show that using virtual partitioning to apply costly protection mechanisms can reduce performance overhead by up to 5x, depending on the nature of the application.
【Keywords】: adaptive defenses; application partitioning; authentication; dynamic taint analysis; information flow tracking; instruction-set randomization; risk management
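The runtime monitor described above — switch from an expensive defense to a cheap one once the authentication boundary is crossed — can be sketched as a small state machine. Mechanism names, event names, and cost factors below are illustrative, not the paper's.

```python
# Hypothetical slowdown factors for each protection mechanism.
COST = {"dynamic_taint_analysis": 10.0, "instruction_set_randomization": 1.05}

class PartitionMonitor:
    """Toy model of virtual partitioning: the pre-authentication partition
    runs under a costly defense; after a successful authentication event,
    execution moves to a partition with a lighter-weight defense."""
    def __init__(self):
        self.mechanism = "dynamic_taint_analysis"   # untrusted partition

    def on_event(self, event):
        if event == "authentication_succeeded":
            self.mechanism = "instruction_set_randomization"

mon = PartitionMonitor()
mon.on_event("authentication_succeeded")
print(mon.mechanism, COST[mon.mechanism])
```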
【Paper Link】 【Pages】:145-156
【Authors】: Divya Muthukumaran ; Trent Jaeger ; Vinod Ganapathy
【Abstract】: When servers manage resources on behalf of multiple, mutually-distrusting clients, they must mediate access to those resources to ensure that each client request complies with an authorization policy. This goal is typically achieved by placing authorization hooks at appropriate locations in server code. The goal of authorization hook placement is to completely mediate all security-sensitive operations on shared resources. To date, authorization hook placement in code bases, such as the X server and postgresql, has largely been a manual procedure, driven by informal analysis of server code and discussions on developer forums. Often, there is a lack of consensus about basic concepts, such as what constitutes a security-sensitive operation. In this paper, we propose an automated hook placement approach that is motivated by a novel observation --- that the deliberate choices made by clients for objects from server collections and for processing those objects must all be authorized. We have built a tool that uses this observation to statically analyze the server source. Using real-world examples (the X server and postgresql), we show that the hooks placed by our method are just as effective as hooks that were manually placed over the course of years while greatly reducing the burden on programmers.
【Keywords】: authorization hooks; static taint analysis
【Paper Link】 【Pages】:157-168
【Authors】: Richard Wartell ; Vishwath Mohan ; Kevin W. Hamlen ; Zhiqiang Lin
【Abstract】: Unlike library code, whose instruction addresses can be randomized by address space layout randomization (ASLR), application binary code often has static instruction addresses. Attackers can exploit this limitation to craft robust shell codes for such applications, as demonstrated by a recent attack that reuses instruction gadgets from the static binary code of victim applications. This paper introduces binary stirring, a new technique that imbues x86 native code with the ability to self-randomize its instruction addresses each time it is launched. The input to STIR is only the application binary code without any source code, debug symbols, or relocation information. The output is a new binary whose basic block addresses are dynamically determined at load-time. Therefore, even if an attacker can find code gadgets in one instance of the binary, the instruction addresses in other instances are unpredictable. An array of binary transformation techniques enable STIR to transparently protect large, realistic applications that cannot be perfectly disassembled due to computed jumps, code-data interleaving, OS callbacks, dynamic linking and a variety of other difficult binary features. Evaluation of STIR for both Windows and Linux platforms shows that stirring introduces about 1.6% overhead on average to application runtimes.
【Keywords】: obfuscation; randomization; return-oriented programming; software security
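The load-time randomization described above can be modeled in miniature: represent a program as basic blocks with symbolic successors, hand out fresh "addresses" at every load, and observe that semantics are unchanged while the layout is unpredictable. This is a conceptual sketch of the stirring idea, not STIR's x86 rewriting.

```python
import random

# A toy program as basic blocks: (transform, symbolic successor).
PROGRAM = {
    "entry": (lambda s: s + 1, "body"),
    "body":  (lambda s: s * 2, "exit"),
    "exit":  (lambda s: s,     None),
}

def load(program, seed):
    """Assign every basic block a fresh random address at load time,
    so gadget addresses found in one instance are useless in another."""
    rng = random.Random(seed)
    addrs = rng.sample(range(0x400000, 0x500000), len(program))
    return dict(zip(program, addrs))

def run(program, state=0):
    """Execution follows symbolic successors: the randomized layout
    never changes the program's behavior."""
    block = "entry"
    while block is not None:
        op, block_next = program[block]
        state = op(state)
        block = block_next
    return state

layout_a, layout_b = load(PROGRAM, seed=1), load(PROGRAM, seed=2)
print(layout_a != layout_b, run(PROGRAM))   # different layouts, same result
```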
【Paper Link】 【Pages】:169-182
【Authors】: Joan Calvet ; José M. Fernandez ; Jean-Yves Marion
【Abstract】: Analyzing cryptographic implementations has important applications, especially for malware analysis where they are an integral part both of the malware payload and the unpacking code that decrypts this payload. These implementations are often based on well-known cryptographic functions, whose description is publicly available. While potentially very useful for malware analysis, the identification of such cryptographic primitives is made difficult by the fact that they are usually obfuscated. Current state-of-the-art identification tools are ineffective due to the absence of easily identifiable static features in obfuscated code. However, these implementations still maintain the input-output (I/O) relationship of the original function. In this paper, we present a tool that leverages this fact to identify cryptographic functions in obfuscated programs, by retrieving their I/O parameters in an implementation-independent fashion, and comparing them with those of known cryptographic functions. In experimental evaluation, we successfully identified the cryptographic functions TEA, RC4, AES and MD5 both in synthetic examples protected by a commercial-grade packer (AsProtect), and in several obfuscated malware samples (Sality, Waledac, Storm Worm and SilentBanker). In addition, our tool was able to recognize basic operations done in asymmetric ciphers such as RSA.
【Keywords】: binary program analysis; cryptography; malware
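The identification principle above — obfuscation preserves a function's input-output relationship — can be demonstrated with standard-library hashes: probe an unknown routine with random inputs and match its outputs against reference implementations. The `mystery` function stands in for code lifted from a packed binary.

```python
import hashlib
import os

REFERENCES = {
    "md5":    lambda b: hashlib.md5(b).digest(),
    "sha1":   lambda b: hashlib.sha1(b).digest(),
    "sha256": lambda b: hashlib.sha256(b).digest(),
}

def identify(unknown, trials=8):
    """Identify a cryptographic function purely by its I/O behavior,
    which survives obfuscation even when static features do not."""
    inputs = [os.urandom(32) for _ in range(trials)]
    for name, ref in REFERENCES.items():
        if all(unknown(x) == ref(x) for x in inputs):
            return name
    return None

# Stand-in for an obfuscated routine recovered from a packed binary:
mystery = lambda data: hashlib.md5(data).digest()
print(identify(mystery))   # -> md5
```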
【Paper Link】 【Pages】:183-194
【Authors】: Nigel Edwards ; Liqun Chen
【Abstract】: This paper examines historical releases of Sendmail, Postfix, Apache httpd and OpenSSL by using static source code analysis and the entry-rate in the Common Vulnerabilities and Exposures dictionary (CVE) for a release, which we take as a measure of the rate of discovery of exploitable bugs. We show that the change in number and density of issues reported by the source code analyzer is indicative of the change in rate of discovery of exploitable bugs for new releases --- formally we demonstrate a statistically significant correlation of moderate strength. The strength of the correlation is an artifact of other factors such as the degree of scrutiny: the number of security analysts investigating the software. This also demonstrates that static source code analysis can be used to make some assessment of risk even when constraints do not permit human review of the issues identified by the analysis. We find only a weak correlation between absolute values measured by the source code analyzer and rate of discovery of exploitable bugs, so in general it is unsafe to use absolute values of number of issues or issue densities to compare different applications or software. Our results demonstrate that software quality, as measured by the number of issues, issue density or number of exploitable bugs, does not always improve with each new release. However, generally the rate of discovery of exploitable bugs begins to drop three to five years after the initial release.
【Keywords】: open source software; risk analysis; static analysis
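The paper's central measurement is a correlation between change in analyzer-reported issues and change in the rate of discovery of exploitable bugs. A Pearson coefficient over such per-release deltas can be computed directly; the data below is hypothetical, purely to show the computation.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-release deltas (not the paper's measurements):
issue_delta = [120, 80, 60, 30, 10]   # change in analyzer-reported issues
cve_delta   = [14, 11, 6, 4, 2]       # change in CVE entry rate
print(round(pearson(issue_delta, cve_delta), 2))
```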
【Paper Link】 【Pages】:195-204
【Authors】: Chunyi Peng ; Chi-Yu Li ; Guan-Hua Tu ; Songwu Lu ; Lixia Zhang
【Abstract】: 3G/4G cellular networks adopt usage-based charging: mobile users are billed based on the traffic volume of their data service. In this work, we assess both this metered accounting architecture and operators' application-specific charging policies from the security perspective. We have identified loopholes in both, and discovered two effective attacks exploiting the loopholes. The "toll-free-data-access-attack" enables the attacker to access any data service for free. The "stealth-spam-attack" inflicts an arbitrarily large traffic volume on the victim, who may not even be aware of such spam traffic. Our experiments on two operational 3G networks have confirmed the feasibility and simplicity of such attacks. We also propose defense remedies.
【Keywords】: accounting attacks; cellular networks; mobile data services
【Paper Link】 【Pages】:205-216
【Authors】: Myrto Arapinis ; Loretta Ilaria Mancini ; Eike Ritter ; Mark Ryan ; Nico Golde ; Kevin Redon ; Ravishankar Borgaonkar
【Abstract】: Mobile telephony equipment is daily carried by billions of subscribers everywhere they go. Avoiding linkability of subscribers by third parties, and protecting the privacy of those subscribers is one of the goals of mobile telecommunication protocols. We use formal methods to model and analyse the security properties of 3G protocols. We expose two novel threats to the user privacy in 3G telephony systems, which make it possible to trace and identify mobile telephony subscribers, and we demonstrate the feasibility of a low cost implementation of these attacks. We propose fixes to these privacy issues, which also take into account and solve other privacy attacks known from the literature. We successfully prove that our privacy-friendly fixes satisfy the desired unlinkability and anonymity properties using the automatic verification tool ProVerif.
【Keywords】: anonymity; mobile telephony; proverif; unlinkability
【Paper Link】 【Pages】:217-228
【Authors】: Kathy Wain Yee Au ; Yi Fan Zhou ; Zhen Huang ; David Lie
【Abstract】: Modern smartphone operating systems (OSs) have been developed with a greater emphasis on security and protecting privacy. One of the mechanisms these systems use to protect users is a permission system, which requires developers to declare what sensitive resources their applications will use, asks users to agree to these requests when they install the application, and constrains the application to the requested resources at runtime. As these permission systems become more common, questions have risen about their design and implementation. In this paper, we perform an analysis of the permission system of the Android smartphone OS in an attempt to begin answering some of these questions. Because the documentation of Android's permission system is incomplete and because we wanted to be able to analyze several versions of Android, we developed PScout, a tool that extracts the permission specification from the Android OS source code using static analysis. PScout overcomes several challenges, such as scalability due to Android's 3.4 million line code base, accounting for permission enforcement across processes due to Android's use of IPC, and abstracting Android's diverse permission checking mechanisms into a single primitive for analysis. We use PScout to analyze 4 versions of Android spanning version 2.2 up to the recently released Android 4.0. Our main findings are that while Android has over 75 permissions, there is little redundancy in the permission specification. However, if applications could be constrained to only use documented APIs, then about 22% of the non-system permissions are actually unnecessary. Finally, we find that a trade-off exists between enabling least-privilege security with fine-grained permissions and maintaining stability of the permission specification as the Android OS evolves.
【Keywords】: android; permissions; smartphone
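An API-to-permission map of the kind PScout extracts directly supports over-privilege checks like the paper's 22% finding: compare what an app declares against what its documented API calls actually require. The map fragment below is a hypothetical illustration, not PScout output.

```python
# Hypothetical fragment of a PScout-style API -> permission map.
PERMISSION_MAP = {
    "LocationManager.getLastKnownLocation": "ACCESS_FINE_LOCATION",
    "SmsManager.sendTextMessage":           "SEND_SMS",
    "Camera.open":                          "CAMERA",
}

def unnecessary_permissions(declared, apis_used):
    """Permissions an app declares but that none of its (documented)
    API calls actually require."""
    needed = {PERMISSION_MAP[a] for a in apis_used if a in PERMISSION_MAP}
    return declared - needed

extra = unnecessary_permissions(
    declared={"ACCESS_FINE_LOCATION", "SEND_SMS"},
    apis_used=["LocationManager.getLastKnownLocation"],
)
print(sorted(extra))   # -> ['SEND_SMS']
```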
【Paper Link】 【Pages】:229-240
【Authors】: Long Lu ; Zhichun Li ; Zhenyu Wu ; Wenke Lee ; Guofei Jiang
【Abstract】: An enormous number of apps have been developed for Android in recent years, making it one of the most popular mobile operating systems. However, the quality of these booming apps can be a concern [4]. Poorly engineered apps may contain security vulnerabilities that can severely undermine users' security and privacy. In this paper, we study a general category of vulnerabilities found in Android apps, namely the component hijacking vulnerabilities. Several types of previously reported app vulnerabilities, such as permission leakage, unauthorized data access, and intent spoofing, belong to this category. We propose CHEX, a static analysis method to automatically vet Android apps for component hijacking vulnerabilities. Modeling these vulnerabilities from a data-flow analysis perspective, CHEX analyzes Android apps and detects possible hijack-enabling flows by conducting low-overhead reachability tests on customized system dependence graphs. To tackle analysis challenges imposed by Android's special programming paradigm, we employ a novel technique to discover component entry points in their completeness and introduce app splitting to model the asynchronous executions of multiple entry points in an app. We prototyped CHEX based on Dalysis, a generic static analysis framework that we built to support many types of analysis on Android app bytecode. We evaluated CHEX with 5,486 real Android apps and found 254 potential component hijacking vulnerabilities. The median execution time of CHEX on an app is 37.02 seconds, which is fast enough to be used in very high volume app vetting and testing scenarios.
【Keywords】: app splitting; component hijacking vulnerability; static analysis
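CHEX's hijack detection ultimately reduces to reachability between attacker-controllable entry points and security-sensitive sinks on a dependence graph. A toy version of that reachability test; the three-node graph, source, and sink names are invented for illustration and are not CHEX's actual representation:

```python
from collections import deque

# Toy system dependence graph: an edge A -> B means "data flows from A to B".
GRAPH = {
    "Intent.getExtras": ["parseCommand"],
    "parseCommand": ["SmsManager.send"],
    "onCreate": ["loadUi"],
}

SOURCES = {"Intent.getExtras"}   # externally controllable inputs
SINKS = {"SmsManager.send"}      # security-sensitive operations

def hijack_flows(graph, sources, sinks):
    """Reachability test: does any source-tainted value reach a sink?"""
    found = []
    for src in sources:
        seen, queue = {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node in sinks:
                found.append((src, node))
            for nxt in graph.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return found

print(hijack_flows(GRAPH, SOURCES, SINKS))
# [('Intent.getExtras', 'SmsManager.send')]
```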
【Paper Link】 【Pages】:241-252
【Authors】: Hao Peng ; Christopher S. Gates ; Bhaskar Pratim Sarma ; Ninghui Li ; Yuan Qi ; Rahul Potharaju ; Cristina Nita-Rotaru ; Ian Molloy
【Abstract】: One of Android's main defense mechanisms against malicious apps is a risk communication mechanism which, before a user installs an app, warns the user about the permissions the app requires, trusting that the user will make the right decision. This approach has been shown to be ineffective as it presents the risk information of each app in a "stand-alone" fashion and in a way that requires too much technical knowledge and time to distill useful information. We introduce the notion of risk scoring and risk ranking for Android apps to improve risk communication, and identify three desiderata for an effective risk scoring scheme. We propose to use probabilistic generative models for risk scoring schemes, and identify several such models, ranging from the simple Naive Bayes, to advanced hierarchical mixture models. Experimental results conducted using real-world datasets show that probabilistic generative models significantly outperform existing approaches, and that Naive Bayes models give a promising risk scoring approach.
【Keywords】: data mining; malware; mobile; risk
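The simplest of the generative models mentioned, Naive Bayes, can be sketched directly: score an app by how unlikely its permission vector is under an independent-Bernoulli model of benign apps. The permission probabilities below are made up for illustration, not taken from the paper's dataset:

```python
import math

# Estimated probability that a benign app requests each permission
# (illustrative numbers only).
P_BENIGN = {"INTERNET": 0.80, "SEND_SMS": 0.02, "READ_CONTACTS": 0.10}

def risk_score(requested):
    """Naive-Bayes-style rarity score: negative log-likelihood of the
    permission vector under an independent-Bernoulli benign model.
    Higher means more unusual, hence riskier."""
    score = 0.0
    for perm, p in P_BENIGN.items():
        q = p if perm in requested else 1.0 - p
        score += -math.log(q)
    return score

# Requesting the rarely-used SEND_SMS permission raises the score:
assert risk_score({"INTERNET", "SEND_SMS"}) > risk_score({"INTERNET"})
```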
【Paper Link】 【Pages】:253-264
【Authors】: Shakeel Butt ; H. Andrés Lagar-Cavilla ; Abhinav Srivastava ; Vinod Ganapathy
【Abstract】: Modern cloud computing infrastructures use virtual machine monitors (VMMs) that often include a large and complex administrative domain with privileges to inspect client VM state. Attacks against or misuse of the administrative domain can compromise client security and privacy. Moreover, these VMMs provide clients inflexible control over their own VMs, as a result of which clients have to rely on the cloud provider to deploy useful services, such as VM introspection-based security tools. We introduce a new self-service cloud (SSC) computing model that addresses these two shortcomings. SSC splits administrative privileges between a system-wide domain and per-client administrative domains. Each client can manage and perform privileged system tasks on its own VMs, thereby providing flexibility. The system-wide administrative domain cannot inspect the code, data or computation of client VMs, thereby ensuring security and privacy. SSC also allows providers and clients to establish mutually trusted services that can check regulatory compliance while respecting client privacy. We have implemented SSC by modifying the Xen hypervisor. We demonstrate its utility by building user domains to perform privileged tasks such as memory introspection, storage intrusion detection, and anomaly detection.
【Keywords】: cloud computing; privacy; security; trust
【Paper Link】 【Pages】:265-280
【Authors】: Marten van Dijk ; Ari Juels ; Alina Oprea ; Ronald L. Rivest ; Emil Stefanov ; Nikos Triandopoulos
【Abstract】: We consider the following challenge: How can a cloud storage provider prove to a tenant that it's encrypting files at rest, when the provider itself holds the corresponding encryption keys? Such proofs demonstrate sound encryption policies and file confidentiality. (Cheating, cost-cutting, or misconfigured providers may bypass the computation/management burdens of encryption and store plaintext only.) To address this problem, we propose hourglass schemes, protocols that prove correct encryption of files at rest by imposing a resource requirement (e.g., time, storage or computation) on the process of translating files from one encoding domain (i.e., plaintext) to a different, target domain (i.e., ciphertext). Our more practical hourglass schemes exploit common cloud infrastructure characteristics, such as limited file-system parallelism and the use of rotational hard drives for at-rest files. For files of modest size, we describe an hourglass scheme that exploits trapdoor one-way permutations to prove correct file encryption whatever the underlying storage medium. We also experimentally validate the practicality of our proposed schemes, the fastest of which incurs minimal overhead beyond the cost of encryption. As we show, hourglass schemes can be used to verify properties other than correct encryption, e.g., embedding of "provenance tags" in files for tracing the source of leaked files. Of course, even if a provider is correctly storing a file as ciphertext, it could also store a plaintext copy to service tenant requests more efficiently. Hourglass schemes cannot guarantee ciphertext-only storage, a problem inherent when the cloud manages keys. By means of experiments in Amazon EC2, however, we demonstrate that hourglass schemes provide strong incentives for economically rational cloud providers against storage of extra plaintext file copies.
【Keywords】: challenge-response protocol; cloud auditing; cloud storage security; economic security model
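The core hourglass idea, making the plaintext-to-ciphertext translation resource-bounded so that a response deadline exposes plaintext-only storage, can be caricatured with iterated hashing standing in for the paper's actual encodings. This is a sketch of the economics, not their construction:

```python
import hashlib, os, random

ITERATIONS = 50_000   # resource requirement: re-encoding a block is slow

def encode_block(block: bytes) -> bytes:
    """Toy 'hourglass' encoding: deliberately iterated hashing stands in
    for the paper's resource-bounded transforms (not their construction)."""
    h = block
    for _ in range(ITERATIONS):
        h = hashlib.sha256(h).digest()
    return h

def respond(encoded_blocks, challenge):
    """An honest provider stores encoded blocks and answers instantly."""
    return [encoded_blocks[i] for i in challenge]

# Setup: the provider encodes file blocks once and stores the encodings.
blocks = [os.urandom(32) for _ in range(8)]
store = [encode_block(b) for b in blocks]

# The verifier challenges random positions and checks the answers; a
# provider keeping only plaintext must pay ITERATIONS hashes per
# challenged block, which a tight response deadline would expose.
challenge = random.sample(range(8), 3)
assert respond(store, challenge) == [encode_block(blocks[i]) for i in challenge]
```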
【Paper Link】 【Pages】:281-292
【Authors】: Venkatanathan Varadarajan ; Thawan Kooburat ; Benjamin Farley ; Thomas Ristenpart ; Michael M. Swift
【Abstract】: Cloud computing promises great efficiencies by multiplexing resources among disparate customers. For example, Amazon's Elastic Compute Cloud (EC2), Microsoft Azure, Google's Compute Engine, and Rackspace Hosting all offer Infrastructure as a Service (IaaS) solutions that pack multiple customer virtual machines (VMs) onto the same physical server. The gained efficiencies have some cost: past work has shown that the performance of one customer's VM can suffer due to interference from another. In experiments on a local testbed, we found that the performance of a cache-sensitive benchmark can degrade by more than 80% because of interference from another VM. This interference incentivizes a new class of attacks that we call resource-freeing attacks (RFAs). The goal is to modify the workload of a victim VM in a way that frees up resources for the attacker's VM. We explore in depth a particular example of an RFA. Counter-intuitively, by adding load to a co-resident victim, the attack speeds up a class of cache-bound workloads. In a controlled lab setting we show that this can improve performance of synthetic benchmarks by up to 60% over not running the attack. In the noisier setting of Amazon's EC2, we still show improvements of up to 13%.
【Keywords】: cloud computing; resource-freeing attacks; scheduling; security; virtualization
【Paper Link】 【Pages】:293-304
【Authors】: Peter Williams ; Radu Sion
【Abstract】: We present SR-ORAM, the first single-round-trip polylogarithmic-time Oblivious RAM that requires only logarithmic client storage. Taking only a single round trip to perform a query, SR-ORAM has an online communication/computation cost of O(log n log log n), and an offline, overall amortized per-query communication cost of O(log^2 n log log n), requiring under 2 round trips. The client folds an entire interactive sequence of Oblivious RAM requests into a single query object that the server can unlock incrementally, to satisfy a query without learning its result. This results in an Oblivious RAM secure against an actively malicious adversary, with unprecedented speeds in accessing large data sets over high-latency links. We show this to be the most efficient storage-free-client Oblivious RAM to date for today's Internet-scale network latencies.
【Keywords】: access privacy; cloud computing; oblivious ram
【Paper Link】 【Pages】:305-316
【Authors】: Yinqian Zhang ; Ari Juels ; Michael K. Reiter ; Thomas Ristenpart
【Abstract】: This paper details the construction of an access-driven side-channel attack by which a malicious virtual machine (VM) extracts fine-grained information from a victim VM running on the same physical computer. This attack is the first such attack demonstrated on a symmetric multiprocessing system virtualized using a modern VMM (Xen). Such systems are very common today, ranging from desktops that use virtualization to sandbox application or OS compromises, to clouds that co-locate the workloads of mutually distrustful customers. Constructing such a side-channel requires overcoming challenges including core migration, numerous sources of channel noise, and the difficulty of preempting the victim with sufficient frequency to extract fine-grained information from it. This paper addresses these challenges and demonstrates the attack in a lab setting by extracting an ElGamal decryption key from a victim using the most recent version of the libgcrypt cryptographic library.
【Keywords】: cache-based side channel; cross-vm side channel; side-channel attack
【Paper Link】 【Pages】:317-328
【Authors】: Muhammad Asim Jamshed ; Jihyung Lee ; Sangwoo Moon ; Insu Yun ; Deokjin Kim ; Sungryoul Lee ; Yung Yi ; KyoungSoo Park
【Abstract】: As high-speed networks are becoming commonplace, it is increasingly challenging to prevent the attack attempts at the edge of the Internet. While many high-performance intrusion detection systems (IDSes) employ dedicated network processors or special memory to meet the demanding performance requirements, it often increases the cost and limits functional flexibility. In contrast, existing software-based IDS stacks fail to achieve a high throughput despite modern hardware innovations such as multicore CPUs, manycore GPUs, and 10 Gbps network cards that support multiple hardware queues. We present Kargus, a highly-scalable software-based IDS that exploits the full potential of commodity computing hardware. First, Kargus batch processes incoming packets at network cards and achieves up to 40 Gbps input rate even for minimum-sized packets. Second, it exploits high processing parallelism by balancing the pattern matching workloads with multicore CPUs and heterogeneous GPUs, and benefits from extensive batch processing of multiple packets per IDS function call. Third, Kargus adapts its resource usage depending on the input rate, significantly saving power in normal situations. Our evaluation shows that Kargus on a 12-core machine with two GPUs handles up to 33 Gbps of normal traffic and achieves 9 to 10 Gbps even when all packets contain attack signatures, a factor of 1.9 to 4.3 performance improvement over the existing state-of-the-art software IDS. We design Kargus to be compatible with the most popular software IDS, Snort.
【Keywords】: batch processing; gpu; intrusion detection; pattern matching
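The signature-matching core of such an IDS can be reduced to a few lines. The sketch below batch-scans packet payloads with naive substring search, where Kargus would run automaton-based multi-pattern matching on CPUs and GPUs; the signatures are invented:

```python
SIGNATURES = [b"/etc/passwd", b"cmd.exe", b"<script>"]

def scan_batch(packets):
    """Scan a batch of packet payloads against all signatures.
    A real IDS would use an Aho-Corasick automaton (one pass per
    packet over all patterns at once); plain substring search keeps
    the sketch short."""
    alerts = []
    for i, payload in enumerate(packets):
        for sig in SIGNATURES:
            if sig in payload:
                alerts.append((i, sig))
    return alerts

batch = [b"GET /index.html", b"GET /../../etc/passwd", b"POST data"]
print(scan_batch(batch))  # [(1, b'/etc/passwd')]
```

Batching amortizes per-call overhead: the IDS pays one function-call and memory-transfer cost per batch of packets rather than per packet, which is the effect the abstract credits for much of the throughput.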
【Paper Link】 【Pages】:329-340
【Authors】: Chi-Yao Hong ; Fang Yu ; Yinglian Xie
【Abstract】: Populated IP addresses (PIP) -- IP addresses that are associated with a large number of user requests -- are important for online service providers to efficiently allocate resources and to detect attacks. While some PIPs serve legitimate users, many others are heavily abused by attackers to conduct malicious activities such as scams, phishing, and malware distribution. Unfortunately, commercial proxy lists like Quova have a low coverage of PIP addresses and offer little support for distinguishing good PIPs from abused ones. In this study, we propose PIPMiner, a fully automated method to extract and classify PIPs through analyzing service logs. Our method combines machine learning and time series analysis to distinguish good PIPs from abused ones with over 99.6% accuracy. When applying the derived PIP list to several applications, we can identify millions of malicious Windows Live accounts right on the day of their sign-ups, and detect millions of malicious Hotmail accounts well before the current detection system captures them.
【Keywords】: ip blacklisting; populated ip addresses; proxy; spam detection
【Paper Link】 【Pages】:341-352
【Authors】: Antonio Bianchi ; Yan Shoshitaishvili ; Christopher Kruegel ; Giovanni Vigna
【Abstract】: The lucrative rewards of security penetrations into large organizations have motivated the development and use of many sophisticated rootkit techniques to maintain an attacker's presence on a compromised system. Due to the evasive nature of such infections, detecting these rootkit infestations is a problem facing modern organizations. While many approaches to this problem have been proposed, various drawbacks that range from signature generation issues, to coverage, to performance, prevent these approaches from being ideal solutions. In this paper, we present Blacksheep, a distributed system for detecting a rootkit infestation among groups of similar machines. This approach was motivated by the homogeneous nature of many corporate networks. Taking advantage of the similarity amongst the machines that it analyzes, Blacksheep is able to efficiently and effectively detect both existing and new infestations by comparing the memory dumps collected from each host. We evaluate Blacksheep on two sets of memory dumps. One set is taken from virtual machines using virtual machine introspection, mimicking the deployment of Blacksheep on a cloud computing provider's network. The other set is taken from Windows XP machines via a memory acquisition driver, demonstrating Blacksheep's usage under more challenging image acquisition conditions. The results of the evaluation show that by leveraging the homogeneous nature of groups of computers, it is possible to detect rootkit infestations.
【Keywords】: computer security; kernel-based rootkits; malicious software; malware detection; rootkit detection
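The comparison step can be caricatured as outlier detection over per-host memory summaries: in a homogeneous pool, the host whose dump disagrees with every peer is the suspect. A toy version; the dump representation and threshold policy are invented, not Blacksheep's:

```python
def dump_distance(a, b):
    """Distance between two dumps, modeled here as sets of
    (address, page-hash) pairs; real dumps need normalization first."""
    return len(a ^ b)

def find_black_sheep(dumps, threshold):
    """Flag hosts whose memory image differs from every peer by more
    than `threshold` pages: a lone divergent image in a homogeneous
    pool is a rootkit suspect."""
    suspects = []
    for host, dump in dumps.items():
        others = [dump_distance(dump, d) for h, d in dumps.items() if h != host]
        if min(others) > threshold:
            suspects.append(host)
    return suspects

dumps = {
    "host1": {("0x1000", "aa"), ("0x2000", "bb")},
    "host2": {("0x1000", "aa"), ("0x2000", "bb")},
    "host3": {("0x1000", "ff"), ("0x2000", "ee")},  # diverges everywhere
}
print(find_black_sheep(dumps, threshold=1))  # ['host3']
```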
【Paper Link】 【Pages】:353-364
【Authors】: Yinglian Xie ; Fang Yu ; Qifa Ke ; Martín Abadi ; Eliot Gillum ; Krish Vitaldevaria ; Jason Walter ; Junxian Huang ; Zhuoqing Morley Mao
【Abstract】: This paper presents the design and implementation of Souche, a system that recognizes legitimate users early in online services. This early recognition contributes to both usability and security. Souche leverages social connections established over time. Legitimate users help identify other legitimate users through an implicit vouching process, strategically controlled within vouching trees. Souche is lightweight and fully transparent to users. In our evaluation on a real dataset of several hundred million users, Souche can efficiently identify 85% of legitimate users early, while reducing the percentage of falsely admitted malicious users from 44% to 2.4%. Our evaluation further indicates that Souche is robust in the presence of compromised accounts. It is generally applicable to enhance usability and security for a wide class of online services.
【Keywords】: account hijacking; legitimate user recognition; social graph; vouching
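The vouching process can be sketched as bounded propagation over a social graph: "legitimate" status spreads from seed users, with a fan-out cap standing in for the paper's strategic control of vouching trees. The policy details here are illustrative, not Souche's actual rules:

```python
from collections import deque

def recognize(seeds, vouches, max_fanout=2):
    """Spread 'legitimate' status from seed users along vouching edges.
    Capping each user's fan-out limits the damage a single compromised
    account can do (an illustrative policy)."""
    legit = set(seeds)
    queue = deque(seeds)
    while queue:
        user = queue.popleft()
        for peer in vouches.get(user, [])[:max_fanout]:
            if peer not in legit:
                legit.add(peer)
                queue.append(peer)
    return legit

# alice (a seed) implicitly vouches for bob and carol; bob for dave.
vouches = {"alice": ["bob", "carol"], "bob": ["dave"], "mallory": ["eve"]}
print(sorted(recognize({"alice"}, vouches)))
# ['alice', 'bob', 'carol', 'dave']
```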
【Paper Link】 【Pages】:365-377
【Authors】: Cristian Bravo-Lillo ; Lorrie Faith Cranor ; Julie S. Downs ; Saranga Komanduri ; Stuart E. Schechter ; Manya Sleeper
【Abstract】: When asking users to enter credentials, today's desktop operating systems often use windows that provide scant evidence that a trusted path has been established; evidence that would allow a user to know that a request is genuine and that the password will not be read by untrusted principals. We measure the efficacy of web-based attacks that spoof these operating system credential-entry windows to steal users' device-login passwords. We recruited 504 users of Amazon's Mechanical Turk to evaluate a series of games on third-party websites. The third such website indicated that it needed to install software from the publisher that provided the participants' operating system: Microsoft's Silverlight for Windows Vista/7 users and Apple's QuickTime for Mac OS users. The website then displayed a spoofed replica of a window the participant's client operating system would use to request a user's device credentials. In our most effective attacks, over 20% of participants entered passwords that they later admitted were the genuine credentials used to log in to their devices. Even among those who declined to enter their credentials, many participants were oblivious to the spoofing attack. Participants were more likely to confirm that they were worried about the consequences of installing software from a legitimate source than to report that they thought the credential-entry window might have appeared as a result of an attempt to steal their password.
【Keywords】: spoofing attack; trusted path; usable security; user interface
【Paper Link】 【Pages】:378-390
【Authors】: San-Tsai Sun ; Konstantin Beznosov
【Abstract】: Millions of web users today employ their Facebook accounts to sign into more than one million relying party (RP) websites. This web-based single sign-on (SSO) scheme is enabled by OAuth 2.0, a web resource authorization protocol that has been adopted by major service providers. The OAuth 2.0 protocol has been proven secure by several formal methods, but whether it is indeed secure in practice remains an open question. We examine the implementations of three major OAuth identity providers (IdP) (Facebook, Microsoft, and Google) and 96 popular RP websites that support the use of Facebook accounts for login. Our results uncover several critical vulnerabilities that allow an attacker to gain unauthorized access to the victim user's profile and social graph, and impersonate the victim on the RP website. Closer examination reveals that these vulnerabilities are caused by a set of design decisions that trade security for implementation simplicity. To improve the security of OAuth 2.0 SSO systems in real-world settings, we suggest simple and practical improvements to the design and implementation of IdPs and RPs that can be adopted gradually by individual sites.
【Keywords】: oauth 2.0; web single sign-on
【Paper Link】 【Pages】:391-403
【Authors】: Tiffany Hyun-Jin Kim ; Payas Gupta ; Jun Han ; Emmanuel Owusu ; Jason I. Hong ; Adrian Perrig ; Debin Gao
【Abstract】: Malware continues to thrive on the Internet. Besides automated mechanisms for detecting malware, we provide users with trust evidence information to enable them to make informed trust decisions. To scope the problem, we study the challenge of assisting users with judging the trustworthiness of software downloaded from the Internet. Through expert elicitation, we deduce indicators for trust evidence, then analyze these indicators with respect to scalability and robustness. We design OTO, a system for communicating these trust evidence indicators to users, and we demonstrate through a user study the effectiveness of OTO, even with respect to IE's SmartScreen Filter (SSF). The results from the between-subjects experiment with 58 participants confirm that the OTO interface helps people make correct trust decisions compared to the SSF interface regardless of their security knowledge, education level, occupation, age, or gender.
【Keywords】: human factors; software download; trust evidence; trust validation for uncertified software; user interfaces for security
【Paper Link】 【Pages】:404-414
【Authors】: Alexei Czeskis ; Michael Dietz ; Tadayoshi Kohno ; Dan S. Wallach ; Dirk Balfanz
【Abstract】: User authentication systems are at an impasse. The most ubiquitous method -- the password -- has numerous problems, including susceptibility to unintentional exposure via phishing and cross-site password reuse. Second-factor authentication schemes have the potential to increase security but face usability and deployability challenges. For example, conventional second-factor schemes change the user authentication experience. Furthermore, while more secure than passwords, second-factor schemes still fail to provide sufficient protection against (single-use) phishing attacks. We present PhoneAuth, a system intended to provide security assurances comparable to or greater than that of conventional two-factor authentication systems while offering the same authentication experience as traditional passwords alone. Our work leverages the following key insights. First, a user's personal device (e.g., a phone) can communicate directly with the user's computer (and hence the remote web server) without any interaction with the user. Second, it is possible to provide a layered approach to security, whereby a web server can enact different policies depending on whether or not the user's personal device is present. We describe and evaluate our server-side, Chromium web browser, and Android phone implementations of PhoneAuth.
【Keywords】: authentication; login; second factor; web
【Paper Link】 【Pages】:415-427
【Authors】: Weining Yang ; Ninghui Li ; Yuan Qi ; Wahbeh H. Qardaji ; Stephen E. McLaughlin ; Patrick McDaniel
【Abstract】: Smart electric meters pose a substantial threat to the privacy of individuals in their own homes. Combined with non-intrusive load monitors, smart meter data can reveal precise home appliance usage information. An emerging solution to behavior leakage in smart meter measurement data is the use of battery-based load hiding. In this approach, a battery is used to store and supply power to home devices at strategic times to hide appliance loads from smart meters. A few such battery control algorithms have already been studied in the literature, but none have been evaluated from an adversarial point of view. In this paper, we first consider two well known battery privacy algorithms, Best Effort (BE) and Non-Intrusive Load Leveling (NILL), and demonstrate attacks that recover precise load change information, which can be used to recover appliance behavior information, under both algorithms. We then introduce a stepping approach to battery privacy algorithms that fundamentally differs from previous approaches by maximizing the error between the load demanded by a home and the external load seen by a smart meter. By design, precise load change recovery attacks are impossible. We also propose mutual-information based measurements to evaluate the privacy of different algorithms. We implement and evaluate four novel algorithms using the stepping approach, and show that under the mutual-information metrics they outperform BE and NILL.
【Keywords】: load monitor; privacy; smart meter
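The stepping idea is simple to state: the meter-visible load moves only in fixed quanta, with the battery absorbing or supplying the remainder, so exact load changes are unrecoverable by construction. A minimal sketch that ignores battery capacity and state of charge, which the paper's actual algorithms must manage:

```python
def stepped_external_load(demand, step):
    """Quantize the load presented to the smart meter to multiples of
    `step`. The battery covers the gap between true demand and the
    reported level, so the meter never sees precise load changes.
    (A sketch of the stepping idea only, not the paper's algorithms.)"""
    external = []
    for d in demand:
        external.append(step * -(-d // step))  # round demand up to a step
    return external

demand = [130, 180, 410, 395, 120]          # watts drawn by the home
print(stepped_external_load(demand, 100))   # [200, 200, 500, 400, 200]
```

Note how the 180 W and 120 W samples become indistinguishable at the meter: both report 200 W, which is exactly the property that defeats precise load-change recovery attacks.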
【Paper Link】 【Pages】:428-438
【Authors】: Wei-Hong Chuang ; Ravi Garg ; Min Wu
【Abstract】: A time stamp based on the power network signature called the Electrical Network Frequency (ENF) has been used by an emerging class of approaches for authenticating digital audio and video recordings in computer-to-computer communications. However, the presence of adversaries may render the time stamp insecure, and it is crucial to understand the robustness of ENF analysis against anti-forensic operations. This paper investigates possible anti-forensic operations that can remove and alter the ENF signal while trying to preserve the host signal, and develops detection methods targeting these operations. Improvements over anti-forensic operations that can circumvent the detection are also examined, for which various trade-offs are discussed. To develop an understanding of the dynamics between a forensic analyst and an adversary, an evolutionary perspective and a game-theoretical perspective are proposed, which allow for a comprehensive characterization of plausible anti-forensic strategies and countermeasures. Such an understanding has the potential to lead to more secure and reliable time stamp schemes based on ENF analysis.
【Keywords】: digital recording authentication; electrical network frequency; game theory; information forensics; time stamp
【Paper Link】 【Pages】:439-449
【Authors】: Stephen E. McLaughlin ; Patrick McDaniel
【Abstract】: Programmable Logic Controllers (PLCs) drive the behavior of industrial control systems according to uploaded programs. It is now known that PLCs are vulnerable to the uploading of malicious code that can have severe physical consequences. What is not understood is whether an adversary with no knowledge of the PLC's interface to the control system can execute a damaging, targeted, or stealthy attack against a control system using the PLC. In this paper, we present SABOT, a tool that automatically maps the control instructions in a PLC to an adversary-provided specification of the target control system's behavior. This mapping recovers sufficient semantics of the PLC's internal layout to instantiate arbitrary malicious controller code. This lowers the prerequisite knowledge needed to tailor an attack to a control system. SABOT uses an incremental model checking algorithm to map a few plant devices at a time, until a mapping is found for all adversary-specified devices. At this point, a malicious payload can be compiled and uploaded to the PLC. Our evaluation shows that SABOT correctly compiles payloads for all tested control systems when the adversary correctly specifies full system behavior, and for 4 out of 5 systems in most cases where there were unspecified features. Furthermore, SABOT completed all analyses in under 2 minutes.
【Keywords】: attack; critical infrastructure; programmable logic controller
【Paper Link】 【Pages】:450-461
【Authors】: Tyler Nighswander ; Brent M. Ledvina ; Jonathan Diamond ; Robert Brumley ; David Brumley
【Abstract】: Since its creation, the Global Positioning System (GPS) has grown from a limited purpose positioning system to a ubiquitous trusted source for positioning, navigation, and timing data. To date, researchers have essentially taken a signal processing approach to GPS security and shown that GPS is vulnerable to jamming and spoofing. In this work, we systematically map out a larger attack surface by viewing GPS as a computer system. Our surface includes higher level GPS protocol messages than previous work, as well as the GPS OS and downstream dependent systems. We develop a new hardware platform for GPS attacks, and develop novel attacks against GPS infrastructure. Our experiments on consumer and professional-grade receivers show that GPS and GPS-dependent systems are significantly more vulnerable than previously thought. For example, we show that remote attacks via malicious GPS broadcasts are capable of bringing down up to 30% and 20% of the global CORS navigation and NTRIP networks, respectively, using hardware that costs about the same as a laptop. In order to improve security, we propose systems-level defenses and principles that can be deployed to secure critical GPS and dependent systems.
【Keywords】: gps; rf attacks; security
【Paper Link】 【Pages】:462-473
【Authors】: Ishtiaq Rouf ; Hossen A. Mustafa ; Miao Xu ; Wenyuan Xu ; Robert D. Miller ; Marco Gruteser
【Abstract】: Research on smart meters has shown that fine-grained energy usage data poses privacy risks since it allows inferences about activities inside the home. While smart meter deployments are very limited, more than 40 million meters in the United States have been equipped with Automatic Meter Reading (AMR) technology over the past decades. AMR utilizes wireless communication for remotely collecting usage data from electricity, gas, and water meters. Yet to the best of our knowledge, AMR has so far received no attention from the security research community. In this paper, we conduct a security and privacy analysis of this technology. Based on our reverse engineering and experimentation, we find that the technology lacks basic security measures to ensure privacy, integrity, and authenticity of the data. Moreover, the AMR meters we examined continuously broadcast their energy usage data over insecure wireless links every 30s, even though these broadcasts can only be received when a truck from the utility company passes by. We show how this design allows any individual to monitor energy usage from hundreds of homes in a neighborhood with modest technical effort and how this data allows identifying unoccupied residences or people's routines. To cope with the issues, we recommend security remedies, including a solution based on defensive jamming that may be easier to deploy than upgrading the meters themselves.
【Keywords】: amr meter; privacy; reverse engineering; spoofing
【Paper Link】 【Pages】:474-487
【Authors】: Joseph A. Akinyele ; Matthew Green ; Susan Hohenberger ; Matthew W. Pagano
【Abstract】: As devices everywhere increasingly communicate with each other, many security applications will require low-bandwidth signatures that can be processed quickly. Pairing-based signatures can be very short, but are often costly to verify. Fortunately, they also tend to have efficient batch verification algorithms. Finding these batching algorithms by hand, however, can be tedious and error prone. We address this by presenting AutoBatch, an automated tool for generating batch verification code in either Python or C++ from a high level representation of a signature scheme. AutoBatch outputs both software and, for transparency, a LaTeX file describing the batching algorithm and arguing that it preserves the unforgeability of the original scheme. We tested AutoBatch on over a dozen pairing-based schemes to demonstrate that a computer could find competitive batching solutions in a reasonable amount of time. Indeed, it proved highly competitive. In particular, it found an algorithm that is significantly faster than a batching algorithm from Eurocrypt 2010. Another novel contribution is that it handles cross-scheme batching, where it searches for a common algebraic structure between two distinct schemes and attempts to batch them together. We describe other features and performance details herein. AutoBatch is a useful tool for cryptographic designers and implementors, and to our knowledge, it is the first attempt to outsource to machines the design, proof writing and implementation of signature batch verification schemes.
【Keywords】: automation; batch verification; cryptographic compilers; digital signatures; pairing-based cryptography
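The batching technique AutoBatch searches for is, in essence, the small-exponents test: many verification equations are collapsed into one randomized check. A toy instance for plain discrete-log equations in Z_p* (tiny parameters, no pairings, purely illustrative of the principle):

```python
import random

# Toy group: Z_p* with a small safe prime; real schemes use pairings.
p = 1019                      # 1019 = 2*509 + 1, 509 prime
g = 2

def keypairs(n):
    """n claimed (secret, public) pairs with y_i = g^x_i mod p."""
    xs = [random.randrange(1, p - 1) for _ in range(n)]
    ys = [pow(g, x, p) for x in xs]
    return xs, ys

def batch_verify(xs, ys, security_bits=20):
    """Small-exponents test: instead of n separate checks g^x_i == y_i,
    draw random r_i and check the single combined equation
        g^(sum r_i * x_i) == prod y_i^r_i  (mod p).
    A bad pair passes only with probability about 2^-security_bits."""
    rs = [random.getrandbits(security_bits) | 1 for _ in ys]
    lhs = pow(g, sum(r * x for r, x in zip(rs, xs)) % (p - 1), p)
    rhs = 1
    for r, y in zip(rs, ys):
        rhs = rhs * pow(y, r, p) % p
    return lhs == rhs

xs, ys = keypairs(5)
assert batch_verify(xs, ys)        # all pairs valid: one check accepts
ys[2] = (ys[2] * g) % p            # corrupt one pair
assert not batch_verify(xs, ys)    # the combined check rejects (w.h.p.)
```

The batch costs one multi-exponentiation instead of n independent ones, which is the source of the speedups the abstract reports; AutoBatch automates finding such combinations (and proving them sound) for far more complex pairing equations.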
【Paper Link】 【Pages】:488-500
【Authors】: José Bacelar Almeida ; Manuel Barbosa ; Endre Bangerter ; Gilles Barthe ; Stephan Krenn ; Santiago Zanella Béguelin
【Abstract】: Developers building cryptography into security-sensitive applications face a daunting task. Not only must they understand the security guarantees delivered by the constructions they choose, they must also implement and combine them correctly and efficiently. Cryptographic compilers free developers from this task by turning high-level specifications of security goals into efficient implementations. Yet, trusting such tools is hard as they rely on complex mathematical machinery and claim security properties that are subtle and difficult to verify. In this paper we present ZKCrypt, an optimizing cryptographic compiler achieving an unprecedented level of assurance without sacrificing practicality for a comprehensive class of cryptographic protocols, known as Zero-Knowledge Proofs of Knowledge. The pipeline of ZKCrypt integrates purpose-built verified compilers and verifying compilers producing formal proofs in the CertiCrypt framework. By combining the guarantees delivered by each stage, ZKCrypt provides assurance that the output implementation securely realizes the abstract proof goal given as input. We report on the main characteristics of ZKCrypt, highlight new definitions and concepts at its foundations, and illustrate its applicability through a representative example of an anonymous credential system.
【Keywords】: cryptographic compiler; verifying compilation; zero-knowledge
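The canonical member of the protocol class ZKCrypt compiles is Schnorr's proof of knowledge of a discrete logarithm. A non-interactive toy version via the Fiat-Shamir heuristic, with parameters far too small for real use:

```python
import hashlib, random

# Tiny parameters for readability only; never use such sizes in practice.
p, q, g = 1019, 509, 4          # g generates the order-q subgroup of Z_p*

def prove(x):
    """Prove knowledge of x with y = g^x mod p, without revealing x.
    Fiat-Shamir: a hash plays the role of the verifier's challenge."""
    y = pow(g, x, p)
    k = random.randrange(1, q)           # fresh commitment randomness
    t = pow(g, k, p)                     # commitment
    c = int(hashlib.sha256(f"{g}{y}{t}".encode()).hexdigest(), 16) % q
    s = (k + c * x) % q                  # response
    return y, t, s

def verify(y, t, s):
    c = int(hashlib.sha256(f"{g}{y}{t}".encode()).hexdigest(), 16) % q
    return pow(g, s, p) == (t * pow(y, c, p)) % p

y, t, s = prove(x=123)
assert verify(y, t, s)                   # valid proof accepts
assert not verify(y, t, (s + 1) % q)     # mangled response rejects
```

A compiler like ZKCrypt starts from a specification of the statement ("I know x such that y = g^x") and emits code of this shape, together with machine-checked proofs that the emitted code realizes the specification.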
【Paper Link】 【Pages】:501-512
【Authors】: Dario Fiore ; Rosario Gennaro
【Abstract】: Outsourced computations (where a client requests a server to perform some computation on its behalf) are becoming increasingly important due to the rise of Cloud Computing and the proliferation of mobile devices. Since cloud providers may not be trusted, a crucial problem is the verification of the integrity and correctness of such computation, possibly in a public way, i.e., so that the result of a computation can be verified by any third party without a secret key -- akin to a digital signature on a message. We present new protocols for publicly verifiable secure outsourcing of Evaluation of High Degree Polynomials and Matrix Multiplication. Compared to previously proposed solutions, ours improve in efficiency and offer security in a stronger model. The paper also discusses several practical applications of our protocols.
【Keywords】: pseudorandom functions; verifiable computation
【Paper Link】 【Pages】:513-524
【Authors】: S. Dov Gordon ; Jonathan Katz ; Vladimir Kolesnikov ; Fernando Krell ; Tal Malkin ; Mariana Raykova ; Yevgeniy Vahlis
【Abstract】: Traditional approaches to generic secure computation begin by representing the function f being computed as a circuit. If f depends on each of its input bits, this implies a protocol with complexity at least linear in the input size. In fact, linear running time is inherent for non-trivial functions since each party must "touch" every bit of their input lest information about the other party's input be leaked. This seems to rule out many applications of secure computation (e.g., database search) in scenarios where inputs are huge. Adapting and extending an idea of Ostrovsky and Shoup, we present an approach to secure two-party computation that yields protocols running in sublinear time, in an amortized sense, for functions that can be computed in sublinear time on a random-access machine (RAM). Moreover, each party is required to maintain state that is only (essentially) linear in its own input size. Our approach combines generic secure two-party computation with oblivious RAM (ORAM) protocols. We present an optimized version of our approach using Yao's garbled-circuit protocol and a recent ORAM construction of Shi et al. We describe an implementation of our resulting protocol, and evaluate its performance for obliviously searching a database with over 1 million entries. Our implementation outperforms off-the-shelf secure-computation protocols for databases containing more than 2^18 entries.
【Keywords】: cryptography; secure computation
【Paper Link】 【Pages】:525-536
【Authors】: Jan Camenisch ; Anna Lysyanskaya ; Gregory Neven
【Abstract】: Password-authenticated secret sharing (PASS) schemes, first introduced by Bagherzandi et al. at CCS 2011, allow users to distribute data among several servers so that the data can be recovered using a single human-memorizable password, but no single server (nor any collusion of servers up to a certain size) can mount an off-line dictionary attack on the password or learn anything about the data. We propose a new, universally composable (UC) security definition for the two-server case (2PASS) in the public-key setting that addresses a number of relevant limitations of the previous, non-UC definition. For example, our definition makes no prior assumptions on the distribution of passwords, preserves security when honest users mistype their passwords, and guarantees secure composition with other protocols in spite of the unavoidable non-negligible success rate of online dictionary attacks. We further present a concrete 2PASS protocol and prove that it meets our definition. Given the strong security guarantees, our protocol is surprisingly efficient: in its most efficient instantiation under the DDH assumption in the random-oracle model, it requires fewer than twenty elliptic-curve exponentiations on the user's device. We achieve our results by careful protocol design and by exclusively focusing on the two-server public-key setting.
【Keywords】: password-authenticated secret sharing; universal composability
【Paper Link】 【Pages】:541-552
【Authors】: Alexandra Boldyreva ; Robert Lychev
【Abstract】: This paper provides the provable-security treatment of path vector routing protocols. We first design a security definition for path vector routing protocols by studying, generalizing, and formalizing numerous known threats. Our model incorporates three major security goals. It is quite strong, yet simple to use. We prove by reduction that S-BGP satisfies two out of the security model's three goals, assuming the underlying signature scheme is secure. Under the same assumption, we next show how the protocol can be modified to meet all three security goals simultaneously. Finally, we study security of partial PKI deployment of path vector protocols when not all nodes have public keys. We investigate the possibilities of relaxing the PKI requirement and relying on the non-cryptographic physical security of the protocol in order to achieve possibly weaker, but still well-defined, notions of security. We also present the necessary and sufficient conditions to achieve full security in the partial PKI deployment scenario. We believe our conclusions will prove useful for protocol developers, standards bodies and government agencies.
【Keywords】: path vector protocols; provable security; secure bgp
【Paper Link】 【Pages】:553-566
【Authors】: Guanhua Yan ; Ritchie Lee ; Alex Kent ; David H. Wolpert
【Abstract】: With a long history of compromising Internet security, Distributed Denial-of-Service (DDoS) attacks have been intensively investigated and numerous countermeasures have been proposed to defend against them. In this work, we propose a non-standard game-theoretic framework that facilitates evaluation of DDoS attacks and defense. Our framework can be used to study diverse DDoS attack scenarios where multiple layers of protection are deployed and a number of uncertain factors affect the decision making of the players, and it also allows us to model different sophistication levels of reasoning by both the attacker and the defender. We conduct a variety of experiments to evaluate DDoS attack and defense scenarios where one or more layers of defense mechanisms are deployed, and demonstrate that our framework sheds light on the interplay between the decisions of the attacker and the defender, and on how these decisions affect the outcomes of DDoS attack and defense games.
【Keywords】: bayesian networks; ddos attacks and defense; game theory
【Paper Link】 【Pages】:567-580
【Authors】: Haifeng Yu ; Phillip B. Gibbons ; Chenwei Shi
【Abstract】: A key challenge in large-scale collaborative distributed systems is to properly incentivize the rational/selfish users so that they will properly collaborate. Within such a context, this paper focuses on designing incentive mechanisms for overlay multicast systems. A key limitation shared by existing proposals on the problem is that they are no longer able to provide proper incentives and thus will collapse when rational users collude or launch sybil attacks. This work explicitly aims to properly sustain collaboration despite collusion and sybil attacks by rational users. To this end, we propose a new decentralized DCast multicast protocol that uses a novel mechanism with debt-links and circulating debts. We formally prove that the protocol offers a novel concept of safety-net guarantee: A user running the protocol will always obtain a reasonably good utility despite the deviation of any number of rational users that potentially collude or launch sybil attacks. Our prototyping as well as simulation demonstrates the feasibility and safety-net guarantee of our design in practice.
【Keywords】: algorithmic mechanism design; incentive mechanism; overlay multicast; rational collusion; sybil attack; whitewashing attack
【Paper Link】 【Pages】:581-592
【Authors】: Zhaoyan Xu ; Lingfeng Chen ; Guofei Gu ; Christopher Kruegel
【Abstract】: We propose a new, active scheme for fast and reliable detection of P2P malware by exploiting the enemies' strength against them. Our new scheme works in two phases: host-level dynamic binary analysis to automatically extract built-in remotely-accessible/controllable mechanisms (referred to as Malware Control Birthmarks or MCB) in P2P malware, followed by network-level informed probing for detection. Our new design demonstrates a novel combination of the strengths from both host-based and network-based approaches. Compared with existing detection solutions, it is fast, reliable, and scalable in its detection scope. Furthermore, it is applicable not only to P2P malware but, more broadly, to any malware that opens a service port for network communications (e.g., many Trojans/backdoors). We develop a prototype system, PeerPress, and evaluate it on many representative real-world P2P malware samples (including Storm, Conficker, and the more recent Sality). The results show that it can effectively detect the existence of malware when MCBs are extracted, and the detection occurs in an early stage during which other tools (e.g., BotHunter) typically do not have sufficient information to detect it. We further discuss its limitations and implications, and we believe it is a valuable complement to existing passive detection solutions.
【Keywords】: malware analysis; malware detection; p2p
【Paper Link】 【Pages】:593-604
【Authors】: Zhiyun Qian ; Zhuoqing Morley Mao ; Yinglian Xie
【Abstract】: In this study, we discover a new class of previously unknown side channels --- "sequence-number-dependent" host packet counters --- that exist in Linux/Android and BSD/Mac OS and enable TCP sequence number inference attacks. These counters allow a piece of unprivileged on-device malware to collaborate with an off-path attacker to infer the TCP sequence numbers used between a client and a server, leading to TCP injection and hijacking attacks. We show that the inference takes, in common cases, under a second to complete and is quick enough for attackers to inject malicious JavaScript into live Facebook sessions and to perform malicious actions on behalf of a victim user. Since supporting unprivileged access to global packet counters is an intentional design choice, we believe our findings provide important lessons and offer insights on future system and network design.
【Keywords】: network packet counters; tcp hijacking; tcp sequence number
【Paper Link】 【Pages】:605-616
【Authors】: Xiang Cai ; Xin Cheng Zhang ; Brijesh Joshi ; Rob Johnson
【Abstract】: We present a novel web page fingerprinting attack that is able to defeat several recently proposed defenses against traffic analysis attacks, including the application-level defenses HTTPOS and randomized pipelining over Tor. Regardless of the defense scheme, our attack was able to guess which of 100 web pages a victim was visiting at least 50% of the time and, with some defenses, over 90% of the time. Our attack is based on a simple model of network behavior and out-performs previously proposed ad hoc attacks. We then build a web site fingerprinting attack that is able to identify whether a victim is visiting a particular web site with over 90% accuracy in our experiments. Our results strongly suggest that ad hoc defenses against traffic analysis are not likely to succeed. Consequently, we describe a defense scheme that provides provable security properties, albeit with potentially higher overheads.
【Keywords】: anonymity; website fingerprinting attacks
【Paper Link】 【Pages】:617-627
【Authors】: Reza Shokri ; George Theodorakopoulos ; Carmela Troncoso ; Jean-Pierre Hubaux ; Jean-Yves Le Boudec
【Abstract】: The mainstream approach to protecting the location-privacy of mobile users in location-based services (LBSs) is to alter the users' actual locations in order to reduce the location information exposed to the service provider. The location obfuscation algorithm behind an effective location-privacy preserving mechanism (LPPM) must consider three fundamental elements: the privacy requirements of the users, the adversary's knowledge and capabilities, and the maximal tolerated service quality degradation stemming from the obfuscation of true locations. We propose the first methodology, to the best of our knowledge, that enables a designer to find the optimal LPPM for an LBS given each user's service quality constraints against an adversary implementing the optimal inference algorithm. Such an LPPM maximizes the expected distortion (error) that the optimal adversary incurs in reconstructing the actual location of a user, while fulfilling the user's service-quality requirement. We formalize the mutual optimization of user-adversary objectives (location privacy vs. correctness of localization) by using the framework of Stackelberg Bayesian games. In this setting, we develop two linear programs that output the best LPPM strategy and its corresponding optimal inference attack. Our optimal user-centric LPPM can be easily integrated into the mobile devices that users employ to access LBSs. We validate the efficacy of our game theoretic method against real location traces. Our evaluation confirms that the optimal LPPM strategy is superior to a straightforward obfuscation method, and that the optimal localization attack performs better compared to a Bayesian inference attack.
【Keywords】: location inference attacks; location privacy; location-based services; optimal defense strategy; privacy protection; service quality; stackelberg bayesian games
【Paper Link】 【Pages】:628-637
【Authors】: Mudhakar Srivatsa ; Mike Hicks
【Abstract】: Location-based services, which employ data from smartphones, vehicles, etc., are growing in popularity. To reduce the threat that shared location data poses to a user's privacy, some services anonymize or obfuscate this data. In this paper, we show these methods can be effectively defeated: a set of location traces can be deanonymized given an easily obtained social network graph. The key idea of our approach is that a user may be identified by those she meets: a "contact graph" identifying meetings between anonymized users in a set of traces can be structurally correlated with a social network graph, thereby identifying anonymized users. We demonstrate the effectiveness of our approach using three real world datasets: University of St Andrews mobility trace and social network (27 nodes each), SmallBlue contact trace and Facebook social network (125 nodes), and Infocom 2006 bluetooth contact traces and conference attendees' DBLP social network (78 nodes). Our experiments show that 80% of users are identified precisely, while only 8% are identified incorrectly, with the remainder mapped to a small set of users.
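A toy version of the structural-correlation idea can be sketched as follows. Matching purely on node degree is a deliberately weak stand-in for the paper's actual correlation technique, and the function name is made up:

```python
def degree_match(contact_edges, social_edges):
    """Map anonymized contact-graph nodes to social-network identities
    by pairing the two node sets in order of degree: the idea that
    'a user may be identified by those she meets', reduced to the
    weakest possible structural signal. Ties are broken arbitrarily,
    so only nodes with distinctive degrees are matched reliably."""
    def degrees(edges):
        d = {}
        for u, v in edges:
            d[u] = d.get(u, 0) + 1
            d[v] = d.get(v, 0) + 1
        return d

    dc, ds = degrees(contact_edges), degrees(social_edges)
    anon = sorted(dc, key=lambda n: dc[n])    # anonymized ids, by degree
    named = sorted(ds, key=lambda n: ds[n])   # real identities, by degree
    return dict(zip(anon, named))
```

On the paper's real datasets the correlation exploits far richer structure than degree; the point of the toy is only that graph structure alone can link the two identifier spaces.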
【Keywords】: graph deanonymization; information flow; spatio-temporal data
【Paper Link】 【Pages】:638-649
【Authors】: Rui Chen ; Gergely Ács ; Claude Castelluccia
【Abstract】: Sequential data is being increasingly used in a variety of applications. Publishing sequential data is of vital importance to the advancement of these applications. However, as shown by the re-identification attacks on the AOL and Netflix datasets, releasing sequential data may pose considerable threats to individual privacy. Recent research has indicated the failure of existing sanitization techniques to provide claimed privacy guarantees. It is therefore urgent to respond to this failure by developing new schemes with provable privacy guarantees. Differential privacy is one of the only models that can be used to provide such guarantees. Due to the inherent sequentiality and high-dimensionality, it is challenging to apply differential privacy to sequential data. In this paper, we address this challenge by employing a variable-length n-gram model, which extracts the essential information of a sequential database in terms of a set of variable-length n-grams. Our approach makes use of a carefully designed exploration tree structure and a set of novel techniques based on the Markov assumption in order to lower the magnitude of added noise. The published n-grams are useful for many purposes. Furthermore, we develop a solution for generating a synthetic database, which enables a wider spectrum of data analysis tasks. Extensive experiments on real-life datasets demonstrate that our approach substantially outperforms the state-of-the-art techniques.
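A minimal sketch of the underlying primitive, counting variable-length n-grams and releasing Laplace-noised counts, follows. The exploration tree, the Markov-based techniques, and the paper's exact sensitivity analysis are omitted; the sensitivity bound used here is a crude placeholder and all names are illustrative:

```python
import math
import random
from collections import Counter

def extract_ngrams(sequences, max_n):
    """Count every n-gram of length 1..max_n across a sequential database."""
    counts = Counter()
    for seq in sequences:
        for n in range(1, max_n + 1):
            for i in range(len(seq) - n + 1):
                counts[tuple(seq[i:i + n])] += 1
    return counts

def laplace_noise(scale):
    """Inverse-CDF Laplace sample with the given scale."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def noisy_ngram_release(sequences, max_n, epsilon, threshold):
    """Add Laplace noise to each n-gram count and prune noisy counts
    below a threshold, since small counts are dominated by noise.
    The noise scale max_n/epsilon is a placeholder, not the paper's
    calibrated sensitivity bound."""
    counts = extract_ngrams(sequences, max_n)
    scale = max_n / epsilon
    released = {}
    for gram, c in counts.items():
        noisy = c + laplace_noise(scale)
        if noisy >= threshold:
            released[gram] = noisy
    return released
```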
【Keywords】: differential privacy; markov assumption; n-gram model; sequential data
【Paper Link】 【Pages】:650-661
【Authors】: Ilya Mironov
【Abstract】: We describe a new type of vulnerability present in many implementations of differentially private mechanisms. In particular, all four publicly available general purpose systems for differentially private computations are susceptible to our attack. The vulnerability is based on irregularities of floating-point implementations of the privacy-preserving Laplacian mechanism. Unlike its mathematical abstraction, the textbook sampling procedure results in a porous distribution over double-precision numbers that allows one to breach differential privacy with just a few queries into the mechanism. We propose a mitigating strategy and prove that it satisfies differential privacy under some mild assumptions on available implementation of floating-point arithmetic.
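The flavor of the problem can be sketched in a few lines: a textbook inverse-CDF Laplace sampler operates on doubles derived from multiples of 2^-53, so its outputs form a porous set, and rounding the released answer onto a coarse grid is a simplified stand-in for the paper's mitigation (the snapping granularity below is an illustrative parameter, not the paper's prescription):

```python
import math
import random

def textbook_laplace(scale):
    """Textbook inverse-CDF Laplace sampler over doubles. Because
    random.random() only returns multiples of 2^-53, the resulting
    distribution is porous: many doubles can never occur, and the
    attainable output set shifts with the value being protected,
    which is what the attack exploits."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def snapped_laplace(true_value, scale, snap):
    """Simplified stand-in for a mitigation: round the noised answer
    onto a fixed grid so the released doubles no longer leak the
    low-order bits of true_value."""
    return snap * round((true_value + textbook_laplace(scale)) / snap)
```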
【Keywords】: differential privacy; floating point arithmetic
【Paper Link】 【Pages】:662-673
【Authors】: Michaela Hardt ; Suman Nath
【Abstract】: Mobile advertising is an increasingly important driver in the Internet economy. We point out fundamental trade-offs between important variables in the mobile advertisement ecosystem. In order to increase relevance, ad campaigns tend to become more targeted and personalized by using context information extracted from user's interactions and smartphone's sensors. This raises privacy concerns that are hard to overcome due to the limited resources (energy and bandwidth) available on the phones. We point out that in the absence of a trusted third party, it is impossible to maximize these three variables - ad relevance, privacy, and efficiency - in a single system. This leads to the natural question: can we formalize a common framework for personalized ad delivery that can be instantiated to any desired trade-off point? We propose such a flexible ad-delivery framework where personalization is done jointly by the server and the phone. We show that the underlying optimization problem is NP-hard and present an efficient algorithm with a tight approximation guarantee. Since tuning personalization rules requires implicit user feedback (clicks), we ask how can we, in an efficient and privacy-preserving way, gather statistics over a dynamic population of mobile users? This is needed for end-to-end privacy of an ad system. We propose the first differentially-private distributed protocol that works even in the presence of a dynamic and malicious set of users. We evaluate our methods with a large click log of location-aware searches in Microsoft Bing for mobile. Our experiments show that our framework can simultaneously achieve reasonable levels of privacy, efficiency, and ad relevance, and can efficiently support a high churn rate of users while gathering the statistics required for personalization.
【Keywords】: distributed systems; privacy; targeted advertising
【Paper Link】 【Pages】:674-686
【Authors】: Zhou Li ; Kehuan Zhang ; Yinglian Xie ; Fang Yu ; XiaoFeng Wang
【Abstract】: With the Internet becoming the dominant channel for marketing and promotion, online advertisements are also increasingly used for illegal purposes such as propagating malware, scamming, click frauds, etc. To understand the gravity of these malicious advertising activities, which we call malvertising, we perform a large-scale study through analyzing ad-related Web traces crawled over a three-month period. Our study reveals the rampancy of malvertising: hundreds of top-ranking Web sites fell victim and leading ad networks such as DoubleClick were infiltrated. To mitigate this threat, we identify prominent features from malicious advertising nodes and their related content delivery paths, and leverage them to build a new detection system called MadTracer. MadTracer automatically generates detection rules and utilizes them to inspect advertisement delivery processes and detect malvertising activities. Our evaluation shows that MadTracer was capable of capturing a large number of malvertising cases, 15 times as many as Google Safe Browsing and Microsoft Forefront did together, at a low false detection rate. It also detected new attacks, including a type of click-fraud attack that has never been reported before.
【Keywords】: malvertising; online advertising; statistical learning
【Paper Link】 【Pages】:687-698
【Authors】: Istemi Ekin Akkus ; Ruichuan Chen ; Michaela Hardt ; Paul Francis ; Johannes Gehrke
【Abstract】: Today, websites commonly use third party web analytics services to obtain aggregate information about users that visit their sites. This information includes demographics and visits to other sites as well as user behavior within their own sites. Unfortunately, to obtain this aggregate information, web analytics services track individual user browsing behavior across the web. This violation of user privacy has been strongly criticized, resulting in tools that block such tracking as well as anti-tracking legislation and standards such as Do-Not-Track. These efforts, while improving user privacy, degrade the quality of web analytics. This paper presents the first design of a system that provides web analytics without tracking. The system gives users differential privacy guarantees, can provide better quality analytics than current services, requires no new organizational players, and is practical to deploy. This paper describes and analyzes the design, gives performance benchmarks, and presents our implementation and deployment across several hundred users.
【Keywords】: differential privacy; tracking; web analytics
【Paper Link】 【Pages】:699-711
【Authors】: Michael Backes ; Ankit Malik ; Dominique Unruh
【Abstract】: The abstraction of cryptographic operations by term algebras, called Dolev-Yao models, is essential in almost all tool-supported methods for verifying security protocols. Recently significant progress was made in establishing computational soundness results: these results prove that Dolev-Yao style models can be sound with respect to actual cryptographic realizations and security definitions. However, these results came at the cost of imposing various constraints on the set of permitted security protocols: e.g., dishonestly generated keys must not be used, key cycles need to be avoided, and many more. In a nutshell, the cryptographic security definitions did not adequately capture these cases, but were considered carved in stone; in contrast, the symbolic abstractions were bent to reflect cryptographic features and idiosyncrasies, thereby requiring adaptations of existing verification tools. In this paper, we pursue the opposite direction: we consider a symbolic abstraction for public-key encryption and identify two cryptographic definitions called PROG-KDM (programmable key-dependent message) security and MKE (malicious-key extractable) security that we jointly prove to be sufficient for obtaining computational soundness without imposing assumptions on the protocols using this abstraction. In particular, dishonestly generated keys obtained from the adversary can be sent, received, and used. The definitions can be met by existing cryptographic schemes in the random oracle model. This yields the first computational soundness result for trace-properties that holds for arbitrary protocols using this abstraction (in particular permitting to send and receive dishonestly generated keys), and that is accessible to all existing tools for reasoning about Dolev-Yao models without further adaptations.
【Keywords】: computational soundness; key dependent messages; sending keys
【Paper Link】 【Pages】:712-723
【Authors】: Mihhail Aizatulin ; Andrew D. Gordon ; Jan Jürjens
【Abstract】: We verify cryptographic protocols coded in C for correspondence properties with respect to the computational model of cryptography. The first step uses symbolic execution to extract a process calculus model from a C implementation of the protocol. The new contribution is the second step in which we translate the extracted model to a CryptoVerif protocol description, such that successful verification with CryptoVerif implies the security of the original C implementation. We implement our method and apply it to verify several protocols out of reach of previous work in the symbolic model (using ProVerif), either due to the use of XOR and Diffie-Hellman commitments, or due to the lack of an appropriate computational soundness result. We analyse only a single execution path, so our tool is limited to code following a fixed protocol narration. This is the first security analysis of C code to target a verifier for the computational model. We successfully verify over 3000 LOC. One example (about 1000 LOC) is independently written and currently in testing phase for industrial deployment; during its analysis we uncovered a vulnerability now fixed by its author.
【Keywords】: computational; cryptoverif; protocols; security; symbolic execution; verification
【Paper Link】 【Pages】:724-735
【Authors】: Gilles Barthe ; David Pointcheval ; Santiago Zanella Béguelin
【Abstract】: Verified security provides a firm foundation for cryptographic proofs by means of rigorous programming language techniques and verification methods. EasyCrypt is a framework that realizes the verified security paradigm and supports the machine-checked construction and verification of cryptographic proofs using state-of-the-art SMT solvers, automated theorem provers and interactive proof assistants. Previous experiments have shown that EasyCrypt is effective for a posteriori validation of cryptographic systems. In this paper, we report on the first application of verified security to a novel cryptographic construction, with strong security properties and interesting practical features. Specifically, we use EasyCrypt to prove in the Random Oracle Model the IND-CCA security of a redundancy-free public-key encryption scheme based on trapdoor one-way permutations. Somewhat surprisingly, we show that even with a zero-length redundancy, Boneh's SAEP scheme (an OAEP-like construction with a single-round Feistel network rather than two) converts a trapdoor one-way permutation into an IND-CCA-secure scheme, provided the permutation satisfies two additional properties. We then prove that the Rabin function and RSA with short exponent enjoy these properties, and thus can be used to instantiate the construction we propose to obtain efficient encryption schemes. The reduction that justifies the security of our construction is tight enough to achieve practical security with reasonable key sizes.
【Keywords】: machine-checked proofs; provable security; public-key encryption
【Paper Link】 【Pages】:736-747
【Authors】: Nick Nikiforakis ; Luca Invernizzi ; Alexandros Kapravelos ; Steven Van Acker ; Wouter Joosen ; Christopher Kruegel ; Frank Piessens ; Giovanni Vigna
【Abstract】: JavaScript is used by web developers to enhance the interactivity of their sites, offload work to the users' browsers and improve their sites' responsiveness and user-friendliness, making web pages feel and behave like traditional desktop applications. An important feature of JavaScript is the ability to combine multiple libraries from local and remote sources into the same page, under the same namespace. While this enables the creation of more advanced web applications, it also allows a malicious JavaScript provider to steal data from other scripts and from the page itself. Today, when developers include remote JavaScript libraries, they trust that the remote providers will not abuse the power bestowed upon them. In this paper, we report on a large-scale crawl of more than three million pages of the top 10,000 Alexa sites, and identify the trust relationships of these sites with their library providers. We show the evolution of JavaScript inclusions over time and develop a set of metrics in order to assess the maintenance quality of each JavaScript provider, showing that in some cases, top Internet sites trust remote providers that could be successfully compromised by determined attackers and subsequently serve malicious JavaScript. In this process, we identify four previously unknown types of vulnerabilities that attackers could use to attack popular web sites. Lastly, we review some proposed ways of protecting a web application from malicious remote scripts and show that some of them may not be as effective as previously thought.
【Keywords】: javascript; remote inclusions; trust
【Paper Link】 【Pages】:748-759
【Authors】: Willem De Groef ; Dominique Devriese ; Nick Nikiforakis ; Frank Piessens
【Abstract】: We present FlowFox, the first fully functional web browser that implements a precise and general information flow control mechanism for web scripts based on the technique of secure multi-execution. We demonstrate how FlowFox subsumes many ad-hoc script containment countermeasures developed over the last years. We also show that FlowFox is compatible with the current web, by investigating its behavior on the Alexa top-500 web sites, many of which make intricate use of JavaScript. The performance and memory cost of FlowFox is substantial (a performance cost of around 20% on macro benchmarks for a simple two level policy), but not prohibitive. Our prototype implementation shows that information flow enforcement based on secure multi-execution can be implemented in full-scale browsers. It can support powerful, yet precise policies refining the same-origin-policy in a way that is compatible with existing websites.
【Keywords】: information flow; web browser architecture; web security
【Paper Link】 【Pages】:760-771
【Authors】: Mario Heiderich ; Marcus Niemietz ; Felix Schuster ; Thorsten Holz ; Jörg Schwenk
【Abstract】: Due to their high practical impact, Cross-Site Scripting (XSS) attacks have attracted a lot of attention from the security community. In the same way, a plethora of more or less effective defense techniques have been proposed, addressing the causes and effects of XSS vulnerabilities; examples include browser extensions such as NoScript and disabling scripting code in non-browser applications such as e-mail clients or instant messengers. As a result, an adversary often can no longer inject or even execute arbitrary scripting code in several real-life scenarios. In this paper, we examine the attack surface that remains after XSS and similar scripting attacks are supposedly mitigated by preventing an attacker from executing JavaScript code. We address the question of whether an attacker really needs JavaScript or similar functionality to perform attacks aiming for information theft. The surprising result is that an attacker can also abuse Cascading Style Sheets (CSS) in combination with other Web techniques like plain HTML, inactive SVG images or font files. Through several case studies, we introduce the so-called scriptless attacks and demonstrate that an adversary might not need to execute code to preserve his ability to extract sensitive information from well protected websites. More precisely, we show that an attacker can use seemingly benign features to build side channel attacks that measure and exfiltrate almost arbitrary data displayed on a given website. We conclude this paper with a discussion of potential mitigation techniques against this class of attacks. In addition, we have implemented a browser patch that enables a website to determine whether it is being loaded in a detached view or pop-up window. This approach proves useful for preventing certain types of the attacks discussed here.
【Keywords】: attack fonts; css; html5; scriptless attacks; svg; xss
【Paper Link】 【Pages】:772-783
【Authors】: Andreas Holzer ; Martin Franz ; Stefan Katzenbeisser ; Helmut Veith
【Abstract】: The practical application of Secure Two-Party Computation is hindered by the difficulty of implementing secure computation protocols. While recent work has proposed very simple programming languages which can be used to specify secure computations, it is still difficult for practitioners to use them, and cumbersome to translate existing source code into this format. Similarly, the manual construction of two-party computation protocols, in particular ones based on the approach of garbled circuits, is labor intensive and error-prone. The central contribution of the current paper is a tool which achieves Secure Two-Party Computation for ANSI C. Our work is based on a combination of model checking techniques and two-party computation based on garbled circuits. Our key insight is a nonstandard use of the bit-precise model checker CBMC which enables us to translate C programs into equivalent Boolean circuits. To this end, we modify the standard CBMC translation from programs into Boolean formulas whose variables correspond to the memory bits manipulated by the program. As CBMC attempts to minimize the size of the formulas, the circuits obtained by our tool chain are also size efficient; to improve the efficiency of the garbled circuit evaluation, we perform optimizations on the circuits. Experimental results with the new tool CBMC-GC demonstrate the practical usefulness of our approach.
【Keywords】: compilers; model checking; privacy; secure computations
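The core translation step described above, turning a C program into an equivalent Boolean circuit, can be illustrated with a toy example: bit-blasting a two-bit addition (as in a C statement `z = x + y;`) into XOR/AND/OR gates, the kind of circuit a bit-precise model checker produces before the garbled-circuit backend takes over. This is a minimal sketch under simplified assumptions, not CBMC-GC's actual translation; all function names are illustrative.

```python
# Sketch: bit-blasting a small addition into Boolean gates.
# Little-endian bit lists stand in for circuit wires.

def full_adder(a, b, cin):
    """One-bit full adder built only from XOR/AND/OR gates."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def add_circuit(xs, ys):
    """Ripple-carry adder over equal-length little-endian bit lists."""
    carry = 0
    out = []
    for a, b in zip(xs, ys):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)  # final carry becomes the top output bit
    return out

def to_bits(n, width):
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))
```

A real toolchain would then hand each gate of this circuit to the garbled-circuit evaluator; the size minimization the abstract mentions corresponds to producing fewer such gates.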
【Paper Link】 【Pages】:784-796
【Authors】: Mihir Bellare ; Viet Tung Hoang ; Phillip Rogaway
【Abstract】: Garbled circuits, a classical idea rooted in the work of Yao, have long been understood as a cryptographic technique, not a cryptographic goal. Here we cull out a primitive corresponding to this technique. We call it a garbling scheme. We provide a provable-security treatment for garbling schemes, endowing them with a versatile syntax and multiple security definitions. The most basic of these, privacy, suffices for two-party secure function evaluation (SFE) and private function evaluation (PFE). Starting from a PRF, we provide an efficient garbling scheme achieving privacy and we analyze its concrete security. We next consider obliviousness and authenticity, properties needed for private and verifiable outsourcing of computation. We extend our scheme to achieve these ends. We provide highly efficient blockcipher-based instantiations of both schemes. Our treatment of garbling schemes presages more efficient garbling, more rigorous analyses, and more modularly designed higher-level protocols.
【Keywords】: garbled circuits; garbling schemes; provable security; secure function evaluation; yao's protocol
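The classical technique the paper formalizes can be sketched for a single AND gate: each output-wire label is encrypted under the pair of input-wire labels, and the evaluator can open exactly one row. This toy version uses a hash in place of the PRF and a redundancy tag to recognize the correct row (instead of point-and-permute); parameters and names are illustrative, not the paper's construction.

```python
import hashlib
import os
import random

LBL = 16             # wire-label length in bytes
TAG = b"\x00" * 16   # redundancy used to recognize the correct row

def H(ka, kb):
    """Hash standing in for the PRF keyed by the two input labels."""
    return hashlib.sha256(ka + kb).digest()  # 32-byte mask

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and_gate():
    """Garble one AND gate: two labels per wire, four encrypted rows."""
    wa = {0: os.urandom(LBL), 1: os.urandom(LBL)}  # input wire a
    wb = {0: os.urandom(LBL), 1: os.urandom(LBL)}  # input wire b
    wc = {0: os.urandom(LBL), 1: os.urandom(LBL)}  # output wire
    rows = [xor(H(wa[a], wb[b]), wc[a & b] + TAG)
            for a in (0, 1) for b in (0, 1)]
    random.shuffle(rows)  # hide which row corresponds to which inputs
    return wa, wb, wc, rows

def evaluate(rows, ka, kb):
    """Holding one label per input wire, recover one output label."""
    for ct in rows:
        pt = xor(H(ka, kb), ct)
        if pt.endswith(TAG):
            return pt[:LBL]
    raise ValueError("no row decrypted")
```

The evaluator learns only a single output label and cannot tell whether it encodes 0 or 1, which is the privacy property the paper's definitions make precise.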
【Paper Link】 【Pages】:797-808
【Authors】: Seny Kamara ; Payman Mohassel ; Ben Riva
【Abstract】: Secure function evaluation (SFE) allows a set of mutually distrustful parties to evaluate a function of their joint inputs without revealing their inputs to each other. SFE has been the focus of active research and recent work suggests that it can be made practical. Unfortunately, current protocols and implementations have inherent limitations that are hard to overcome using standard and practical techniques. Among them are: (1) requiring participants to do work linear in the size of the circuit representation of the function; (2) requiring all parties to do the same amount of work; and (3) not being able to provide complete fairness. A promising approach for overcoming these limitations is to augment the SFE setting with a small set of untrusted servers that have no input to the computation and that receive no output, but that make their computational resources available to the parties. In this model, referred to as server-aided SFE, the goal is to trade off the parties' work at the expense of the servers. Motivated by the emergence of public cloud services such as Amazon EC2 and Microsoft Azure, recent work has explored the extent to which server-aided SFE can be achieved with a single server. In this work, we revisit the server-aided setting from a practical perspective and design single-server-aided SFE protocols that are considerably more efficient than all previously known protocols. We achieve this in part by introducing several new techniques for garbled-circuit-based protocols, including a new and efficient input-checking mechanism for cut-and-choose and a new pipelining technique that works in the presence of malicious adversaries. Furthermore, we extend the server-aided model to guarantee fairness, an important property to achieve in practice.
Finally, we implement and evaluate our constructions experimentally and show that our protocols (regardless of the number of parties involved) yield implementations that are 4 and 6 times faster than the most optimized two-party SFE implementation when the server is assumed to be malicious and covert, respectively.
【Keywords】: cloud computing; multi-party computation; secure computation; server-aided computation
【Paper Link】 【Pages】:809-820
【Authors】: Markus Kammerstetter ; Christian Platzer ; Gilbert Wondracek
【Abstract】: Today, a large number of software products include mechanisms to counter software piracy. However, most protection mechanisms can be easily circumvented by applying software patches (cracks) or license key generators (keygens), with seemingly no financial incentive. Our research shows that the distribution of cracks and keygens not only allows miscreants to generate revenue (e.g., through advertising or malware infections), but also leads to high risks for the end-users of pirated software. We collected more than 43,900 download links and analyzed more than 23,100 (3,551 unique) real-world cracks, showing that these tools are heavily used by criminals to spread malware. Our results indicate that even state-of-the-art virus scanners cannot fully protect users from these threats. Moreover, we conducted a manual analysis, showing how many cracks and keygens actually work and how much effort is necessary to acquire them. In addition, we made our data set publicly available to the research community.
【Keywords】: cracks; infection; malware
【Paper Link】 【Pages】:821-832
【Authors】: Chris Grier ; Lucas Ballard ; Juan Caballero ; Neha Chachra ; Christian J. Dietrich ; Kirill Levchenko ; Panayiotis Mavrommatis ; Damon McCoy ; Antonio Nappa ; Andreas Pitsillidis ; Niels Provos ; M. Zubair Rafique ; Moheeb Abu Rajab ; Christian Rossow ; Kurt Thomas ; Vern Paxson ; Stefan Savage ; Geoffrey M. Voelker
【Abstract】: We investigate the emergence of the exploit-as-a-service model for driveby browser compromise. In this regime, attackers pay for an exploit kit or service to do the "dirty work" of exploiting a victim's browser, decoupling the complexities of browser and plugin vulnerabilities from the challenges of generating traffic to a website under the attacker's control. Upon a successful exploit, these kits load and execute a binary provided by the attacker, effectively transferring control of a victim's machine to the attacker. In order to understand the impact of the exploit-as-a-service paradigm on the malware ecosystem, we perform a detailed analysis of the prevalence of exploit kits, the families of malware installed upon a successful exploit, and the volume of traffic that malicious web sites receive. To carry out this study, we analyze 77,000 malicious URLs received from Google Safe Browsing, along with a crowd-sourced feed of blacklisted URLs known to direct to exploit kits. These URLs led to over 10,000 distinct binaries, which we ran in a contained environment. Our results show that many of the most prominent families of malware now propagate through driveby downloads--32 families in all. Their activities are supported by a handful of exploit kits, with Blackhole accounting for 29% of all malicious URLs in our data, followed in popularity by Incognito. We use DNS traffic from real networks to provide a unique perspective on the popularity of malware families based on the frequency that their binaries are installed by drivebys, as well as the lifetime and popularity of domains funneling users to exploits.
【Keywords】: malware
【Paper Link】 【Pages】:833-844
【Authors】: Leyla Bilge ; Tudor Dumitras
【Abstract】: Little is known about the duration and prevalence of zero-day attacks, which exploit vulnerabilities that have not been disclosed publicly. Knowledge of new vulnerabilities gives cyber criminals a free pass to attack any target of their choosing, while remaining undetected. Unfortunately, these serious threats are difficult to analyze, because, in general, data is not available until after an attack is discovered. Moreover, zero-day attacks are rare events that are unlikely to be observed in honeypots or in lab experiments. In this paper, we describe a method for automatically identifying zero-day attacks from field-gathered data that records when benign and malicious binaries are downloaded on 11 million real hosts around the world. Searching this data set for malicious files that exploit known vulnerabilities indicates which files appeared on the Internet before the corresponding vulnerabilities were disclosed. We identify 18 vulnerabilities exploited before disclosure, of which 11 were not previously known to have been employed in zero-day attacks. We also find that a typical zero-day attack lasts 312 days on average and that, after vulnerabilities are disclosed publicly, the volume of attacks exploiting them increases by up to 5 orders of magnitude.
【Keywords】: full disclosure; vulnerabilities; zero-day attacks
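The identification method described above reduces to a timestamp comparison: a binary exploiting a known vulnerability that appeared in the field before that vulnerability's public disclosure indicates a zero-day attack. The sketch below, with entirely hypothetical records and CVE names, flags such binaries and computes the length of the zero-day window in days.

```python
from datetime import date

# Hypothetical field records: (file_hash, first_seen_in_field, CVE exploited)
sightings = [
    ("a1", date(2009, 3, 1), "CVE-X"),
    ("b2", date(2010, 7, 9), "CVE-Y"),
]
# Hypothetical public disclosure dates for the corresponding vulnerabilities
disclosure = {"CVE-X": date(2009, 11, 20), "CVE-Y": date(2010, 5, 1)}

def zero_day_windows(sightings, disclosure):
    """Flag binaries first seen before their vulnerability was disclosed,
    reporting how long each attack stayed 'zero-day' (in days)."""
    out = {}
    for file_hash, first_seen, cve in sightings:
        disclosed = disclosure[cve]
        if first_seen < disclosed:
            out[file_hash] = (disclosed - first_seen).days
    return out
```

In this toy data, only "a1" qualifies: it was seen 264 days before its vulnerability became public, while "b2" appeared after disclosure and is excluded.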
【Paper Link】 【Pages】:845-856
【Authors】: Damon McCoy ; Hitesh Dharmdasani ; Christian Kreibich ; Geoffrey M. Voelker ; Stefan Savage
【Abstract】: Large-scale abusive advertising is a profit-driven endeavor. Without consumers purchasing spam-advertised Viagra, search-advertised counterfeit software or malware-advertised fake anti-virus, these campaigns could not be economically justified. Thus, in addition to the numerous efforts focused on identifying and blocking individual abusive advertising mechanisms, a parallel research direction has emerged focused on undermining the associated means of monetization: payment networks. In this paper we explain the complex role of payment processing in monetizing the modern affiliate program ecosystem and characterize the dynamics of these banking relationships over two years within the counterfeit pharmaceutical and software sectors. By opportunistically combining our own active purchasing data with contemporary disruption efforts by brand-holders and payment card networks, we gather the first empirical dataset concerning this approach. We discuss how well such payment interventions work, how abusive merchants respond in kind and the role that the payments ecosystem is likely to play in the future.
【Keywords】: payment interventions
【Paper Link】 【Pages】:857-868
【Authors】: Jason Crampton ; Gregory Gutin ; Anders Yeo
【Abstract】: A workflow specification defines a set of steps and the order in which those steps must be executed. Security requirements may impose constraints on which groups of users are permitted to perform subsets of those steps. A workflow specification is said to be satisfiable if there exists an assignment of users to workflow steps that satisfies all the constraints. An algorithm for determining whether such an assignment exists is important, both as a static analysis tool for workflow specifications, and for the construction of run-time reference monitors for workflow management systems. Finding such an assignment is a hard problem in general, but work by Wang and Li in 2010 using the theory of parameterized complexity suggests that efficient algorithms exist under reasonable assumptions about workflow specifications. In this paper, we improve the complexity bounds for the workflow satisfiability problem. We also generalize and extend the types of constraints that may be defined in a workflow specification and prove that the satisfiability problem remains fixed-parameter tractable for such constraints.
【Keywords】: authorization constraints; parameterized complexity; workflow satisfiability
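The satisfiability question above can be made concrete with a brute-force checker. Note that this naive search is exponential in the number of steps, which is exactly why the paper's fixed-parameter-tractable algorithms matter; the constraint syntax below (binding-of-duty "=" and separation-of-duty "!=") is an illustrative subset of the constraint types the paper considers.

```python
from itertools import product

def satisfiable(steps, users, authorized, constraints):
    """Brute-force workflow satisfiability check.
    authorized[s]   -- set of users permitted to perform step s
    constraints     -- list of (s1, s2, rel), rel in {"=", "!="}
    Returns a satisfying assignment (dict step -> user) or None."""
    for combo in product(users, repeat=len(steps)):
        plan = dict(zip(steps, combo))
        if any(plan[s] not in authorized[s] for s in steps):
            continue  # violates per-step authorization
        ok = True
        for s1, s2, rel in constraints:
            same = plan[s1] == plan[s2]
            if (rel == "=") != same:  # constraint violated
                ok = False
                break
        if ok:
            return plan
    return None
```

For example, a two-step workflow where "prepare" and "approve" must be done by different users is satisfiable with two authorized users, but becomes unsatisfiable if only one user is authorized for both steps.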
【Paper Link】 【Pages】:869-880
【Authors】: Kai Engelhardt ; Ron van der Meyden ; Chenyi Zhang
【Abstract】: This paper addresses the question of how TA-security, a semantics for intransitive information-flow policies in deterministic systems, can be generalized to nondeterministic systems. Various definitions are proposed, including definitions that state that the system enforces as much of the policy as possible in the context of attacks in which groups of agents collude by sharing information through channels that lie outside the system. Relationships between the various definitions proposed are characterized, and an unwinding-based proof technique is developed. Finally, it is shown that on a specific class of systems, access control systems with local non-determinism, the strongest definition can be verified by checking a simple static property.
【Keywords】: access control; information-flow; nondeterminism; noninterference; security
【Paper Link】 【Pages】:881-893
【Authors】: Scott Moore ; Aslan Askarov ; Stephen Chong
【Abstract】: Program progress (or termination) is a covert channel that may leak sensitive information. To control information leakage on this channel, semantic definitions of security should be progress sensitive and enforcement mechanisms should restrict the channel's capacity. However, most state-of-the-art language-based information-flow mechanisms are progress insensitive---allowing arbitrary information leakage through this channel---and current progress-sensitive enforcement techniques are overly restrictive. We propose a type system and instrumented semantics that together enforce progress-sensitive security more precisely than existing approaches. Our system is permissive in that it is able to accept programs in which the termination behavior depends only on low-security (e.g., public or trusted) information. Our system is parameterized on a termination oracle, and controls the progress channel precisely, modulo the ability of the oracle to determine the termination behavior of a program based on low-security information. We have instantiated the oracle for a simple imperative language with a logical abstract interpretation that uses an SMT solver to synthesize linear rank functions. In addition, we extend the system to permit controlled leakage through the progress channel, with the leakage bound by an explicit budget. We empirically analyze progress channels in existing Jif code. Our evaluation suggests that security-critical programs appear to satisfy progress-sensitive security.
【Keywords】: information flow; progress channels; termination channels
【Paper Link】 【Pages】:894-905
【Authors】: Mads Dam ; Gurvan Le Guernic ; Andreas Lundblad
【Abstract】: Current approaches to security policy monitoring are based on linear control flow constraints such as 'runQuery' may be evaluated only after 'sanitize'. However, realistic security policies must be able to conveniently capture data flow constraints as well. An example is a policy stating that arguments to the function 'runQuery' must be either constants, outputs of a function 'sanitize', or concatenations of any such values. We present a novel approach to security policy monitoring that uses tree automata to capture constraints on the way data is processed along an execution. We present a λ-calculus-based model of the framework, investigate some of the model's meta-properties, and show how it can be implemented using labels corresponding to automaton states to reflect the computational histories of each data item. We show how a standard denotational semantics induces the expected monitoring regime on a simple "while" language. Finally, we implement the framework for the Dalvik VM using TaintDroid as the underlying data flow tracking mechanism, and evaluate its functionality and performance on five case studies.
【Keywords】: policy enforcement; runtime monitoring; tree automata
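The example policy above (arguments to 'runQuery' must be constants, outputs of 'sanitize', or concatenations of such values) can be sketched as label propagation: each value carries a state, and the state of a concatenation is computed from the states of its parts, just as a tree automaton computes a node's state from its children's. This is an illustrative stand-in for the paper's framework, with made-up function names and a trivial two-state "automaton".

```python
SAFE, TAINTED = "safe", "tainted"

def const(s):
    return (s, SAFE)           # string literals are safe by construction

def user_input(s):
    return (s, TAINTED)        # anything from outside starts tainted

def sanitize(value):
    s, _ = value
    return (s.replace("'", "''"), SAFE)  # sanitize's output is safe

def concat(*values):
    """State of a concatenation node: safe iff every child is safe."""
    label = SAFE if all(l == SAFE for _, l in values) else TAINTED
    return ("".join(s for s, _ in values), label)

def run_query(value):
    """The monitored sink: only accepts values in the SAFE state."""
    s, label = value
    if label != SAFE:
        raise PermissionError("policy violation: unsanitized argument")
    return "executed: " + s
```

A query built from a constant prefix and a sanitized input passes the monitor, while splicing raw user input anywhere into the argument trips it, regardless of the control-flow path taken.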
【Paper Link】 【Pages】:906-917
【Authors】: Ghassan Karame ; Elli Androulaki ; Srdjan Capkun
【Abstract】: Bitcoin is a decentralized payment system that relies on Proof-of-Work (PoW) to verify payments. Nowadays, Bitcoin is increasingly used in a number of fast payment scenarios, where the time between the exchange of currency and goods is short (on the order of a few seconds). While the Bitcoin payment verification scheme is designed to prevent double-spending, our results show that the system requires tens of minutes to verify a transaction and is therefore inappropriate for fast payments. An example of this use of Bitcoin was recently reported in the media: Bitcoins were used as a form of fast payment in a local fast-food restaurant. Until now, the security of fast Bitcoin payments has not been studied. In this paper, we analyze the security of using Bitcoin for fast payments. We show that, unless appropriate detection techniques are integrated in the current Bitcoin implementation, double-spending attacks on fast payments succeed with overwhelming probability and can be mounted at low cost. We further show that the measures recommended by Bitcoin developers for the use of Bitcoin in fast payments are not always effective in detecting double-spending; we show that if those recommendations are integrated in future Bitcoin implementations, double-spending attacks on Bitcoin will still be possible. Finally, we propose and implement a modification to the existing Bitcoin implementation that ensures the detection of double-spending attacks against fast payments.
【Keywords】: bitcoin; countermeasures; double-spending; fast payments
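The essence of the detection the paper argues for can be sketched in a few lines: a double-spend manifests as two distinct transactions referencing the same previous output, so a vendor observing the relayed transaction stream can flag conflicts before accepting a fast payment. The mempool representation below is a deliberate simplification (transaction IDs and outpoints as plain strings), not Bitcoin's actual data model.

```python
def detect_double_spends(mempool):
    """Scan a stream of (txid, spent_outpoints) pairs; any outpoint
    referenced by two different transactions signals a double-spend
    attempt. Returns the list of conflicts found."""
    spenders = {}   # outpoint -> first txid seen spending it
    conflicts = []
    for txid, inputs in mempool:
        for outpoint in inputs:
            prev = spenders.get(outpoint)
            if prev is not None and prev != txid:
                conflicts.append((outpoint, prev, txid))
            else:
                spenders[outpoint] = txid
    return conflicts
```

In practice the hard part, which the paper addresses, is ensuring the vendor actually sees the conflicting transaction in time rather than only the one the attacker sent to it directly.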
【Paper Link】 【Pages】:918-928
【Authors】: Véronique Cortier ; Graham Steel ; Cyrille Wiedling
【Abstract】: While extensive research addresses the problem of establishing session keys through cryptographic protocols, relatively little work has appeared addressing the problem of revocation and update of long term keys. We present an API for symmetric key management on embedded devices that supports key establishment and revocation, and prove security properties of our design in the symbolic model of cryptography. Our API supports two modes of revocation: a passive mode where keys have an expiration time, and an active mode where revocation messages are sent to devices. For the first we show that once enough time has elapsed after the compromise of a key, the system returns to a secure state, i.e. the API is robust against attempts by the attacker to use a compromised key to compromise other keys or to keep the compromised key alive past its validity time. For the second we show that once revocation messages have been received the system immediately returns to a secure state. Notable features of our designs are that all secret values on the device are revocable, and the device returns to a functionally equivalent state after revocation is complete.
【Keywords】: api; formal methods; revocation
【Paper Link】 【Pages】:929-940
【Authors】: Man Ho Au ; Apu Kapadia
【Abstract】: Some users may misbehave under the cover of anonymity by, e.g., defacing webpages on Wikipedia or posting vulgar comments on YouTube. To prevent such abuse, a few anonymous credential schemes have been proposed that revoke access for misbehaving users while maintaining their anonymity, such that no trusted third party (TTP) is involved in the revocation process. Recently we proposed BLACR, a TTP-free scheme that supports "reputation-based blacklisting": the service provider can score users' anonymous sessions (e.g., good vs. inappropriate comments) and users with insufficient reputation are denied access. The major drawback of BLACR is the linear computational overhead in the size of the reputation list, which allows it to support reputation for only a few thousand user sessions in practical settings. We propose PERM, a revocation-window-based scheme (misbehaviors must be caught within a window of time), which makes computation independent of the size of the reputation list. PERM thus supports millions of user sessions and makes reputation-based blacklisting practical for large-scale deployments.
【Keywords】: accountable anonymity; anonymous blacklisting; revocation
【Paper Link】 【Pages】:941-952
【Authors】: David Bernhard ; Véronique Cortier ; Olivier Pereira ; Bogdan Warinschi
【Abstract】: We propose a new measure for privacy of votes. Our measure relies on computational conditional entropy, an extension of the traditional notion of entropy that incorporates both information-theoretic and computational aspects. As a result, we capture in a unified manner privacy breaches due to two orthogonal sources of insecurity: combinatorial aspects that have to do with the number of participants, the distribution of their votes and published election outcome as well as insecurity of the cryptography used in an implementation. Our privacy measure overcomes limitations of two previous approaches to defining vote privacy and we illustrate its applicability through several case studies. We offer a generic way of applying our measure to a large class of cryptographic protocols that includes the protocols implemented in Helios. We also describe a practical application of our metric on Scantegrity audit data from a real election.
【Keywords】: cryptography; entropy; privacy; voting
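The combinatorial component of the privacy measure above can be illustrated by computing the conditional entropy of one voter's choice given only the published tally, under the toy assumption of independent, uniform votes over two candidates. This sketches only the information-theoretic side; the paper's measure additionally accounts for the computational security of the cryptography.

```python
from collections import defaultdict
from itertools import product
from math import log2

def vote_entropy(n_voters, observed_tally):
    """Entropy (in bits) of voter 1's choice given that exactly
    observed_tally voters chose candidate 'A', assuming independent
    uniform A/B votes. 0 bits means the tally fully reveals the vote."""
    joint = defaultdict(float)
    total = 0.0
    # Enumerate all vote profiles consistent with the published outcome.
    for profile in product("AB", repeat=n_voters):
        if profile.count("A") == observed_tally:
            joint[profile[0]] += 1
            total += 1
    if total == 0:
        return 0.0
    return -sum((c / total) * log2(c / total) for c in joint.values() if c)
```

A unanimous outcome leaves zero bits of uncertainty about any individual vote, whereas a split outcome preserves uncertainty; this is exactly the kind of combinatorial privacy loss the measure captures alongside cryptographic insecurity.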
【Paper Link】 【Pages】:953-964
【Authors】: Dominique Schröder ; Heike Schröder
【Abstract】: In a verifiable data streaming protocol, the client streams a long string to the server, who stores it in its database. The stream is verifiable in the sense that the server can neither change the order of the elements nor manipulate them. The client may also retrieve data from the database and update it. The content of the database is publicly verifiable, such that any party in possession of some value s and a proof π can check that s is indeed in the database. We introduce the notion of verifiable data streaming and present an efficient instantiation that supports an exponential number of values based on general assumptions. Our main technique is an authentication tree in which the leaves are not fixed in advance, such that the user, knowing some trapdoor, can authenticate a new element on demand without pre- or re-computing all other leaves. We call this data structure a chameleon authentication tree (CAT). We instantiate our scheme with primitives that are secure under the discrete logarithm assumption. The algebraic properties of this assumption allow us to obtain a very efficient verification algorithm. As a second application of CATs, we present a new transformation from any one-time signature scheme to a many-time signature scheme that is more efficient than previously known solutions.
【Keywords】: outsourcing; streaming; verifiable outsourcing
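For intuition, the verification side of such an authentication tree looks like ordinary Merkle-tree verification, sketched below over a plain hash tree (leaf count assumed a power of two). The chameleon authentication tree differs precisely in that leaves need not be fixed in advance thanks to the trapdoor; this simplification omits that and shows only how a proof binds a value to a position under a public root.

```python
import hashlib

def h(data):
    return hashlib.sha256(data).hexdigest()

def build_tree(values):
    """Plain hash tree over streamed values (len(values) a power of two).
    Returns the list of levels, leaves first."""
    level = [h(v.encode()) for v in values]
    tree = [level]
    while len(level) > 1:
        level = [h((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
        tree.append(level)
    return tree

def prove(tree, index):
    """Authentication path: sibling hash and its side at each level."""
    proof = []
    for level in tree[:-1]:
        sib = index ^ 1
        proof.append((level[sib], sib < index))  # (hash, sibling_is_left)
        index //= 2
    return proof

def verify(root, value, index, proof):
    """Recompute the root from the value, its position, and the path."""
    acc = h(value.encode())
    for sib, sib_is_left in proof:
        acc = h(((sib + acc) if sib_is_left else (acc + sib)).encode())
    return acc == root
```

The server cannot reorder or alter elements without changing the recomputed root, which is what makes the stream publicly verifiable.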
【Paper Link】 【Pages】:965-976
【Authors】: Seny Kamara ; Charalampos Papamanthou ; Tom Roeder
【Abstract】: Searchable symmetric encryption (SSE) allows a client to encrypt its data in such a way that this data can still be searched. The most immediate application of SSE is to cloud storage, where it enables a client to securely outsource its data to an untrusted cloud provider without sacrificing the ability to search over it. SSE has been the focus of active research and a multitude of schemes that achieve various levels of security and efficiency have been proposed. Any practical SSE scheme, however, should (at a minimum) satisfy the following properties: sublinear search time, security against adaptive chosen-keyword attacks, compact indexes and the ability to add and delete files efficiently. Unfortunately, none of the previously-known SSE constructions achieve all these properties at the same time. This severely limits the practical value of SSE and decreases its chance of deployment in real-world cloud storage systems. To address this, we propose the first SSE scheme to satisfy all the properties outlined above. Our construction extends the inverted index approach (Curtmola et al., CCS 2006) in several non-trivial ways and introduces new techniques for the design of SSE. In addition, we implement our scheme and conduct a performance evaluation, showing that our approach is highly efficient and ready for deployment.
【Keywords】: cloud computing; cloud storage; searchable symmetric encryption
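The inverted-index approach the construction extends can be sketched in a few lines: each keyword is replaced by a PRF token, so the server can match a client-supplied search token against the index without learning the underlying words. This toy model (HMAC-SHA256 as the PRF, plaintext document identifiers) omits the scheme's adaptive security, sublinear data structures, and update machinery.

```python
import hashlib
import hmac

def prf(key, msg):
    """Pseudorandom function instantiated with HMAC-SHA256."""
    return hmac.new(key, msg.encode(), hashlib.sha256).hexdigest()

def build_index(key, docs):
    """Encrypted inverted index: PRF token -> list of matching doc ids.
    docs maps a document id to its (whitespace-separated) text."""
    index = {}
    for doc_id, text in docs.items():
        for word in set(text.split()):
            index.setdefault(prf(key, word), []).append(doc_id)
    return index

def trapdoor(key, word):
    """Client-side: derive the search token for a keyword."""
    return prf(key, word)

def search(index, token):
    """Server-side: look up a token without learning the keyword."""
    return sorted(index.get(token, []))
```

Without the key, the server sees only opaque tokens; with a trapdoor, it can answer exactly the queried keyword, which is the leakage/efficiency trade-off SSE schemes are designed around.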
【Paper Link】 【Pages】:977-988
【Authors】: Peter Williams ; Radu Sion ; Alin Tomescu
【Abstract】: PrivateFS is an oblivious file system that enables access to remote storage while keeping both the file contents and client access patterns secret. PrivateFS is based on a new parallel Oblivious RAM mechanism (PD-ORAM): instead of waiting for the completion of all ongoing client-server transactions, client threads can now engage a server in parallel without loss of privacy. This critical piece is missing from existing Oblivious RAMs (ORAMs), which cannot allow multiple client threads to operate simultaneously without revealing intra- and inter-query correlations and thus incurring privacy leaks. And since ORAMs often require many communication rounds, this significantly and unnecessarily constrains throughput. The mechanisms introduced here eliminate this constraint, allowing overall throughput to be bound by server bandwidth only, and thus to increase by an order of magnitude. Further, new de-amortization techniques bring the worst-case query cost in line with the average cost. Both of these results are shown to be fundamental to any ORAM. Extensions providing fork consistency against an actively malicious adversary are then presented. A high-performance, fully functional PD-ORAM implementation was designed, built, and analyzed. It performs multiple queries per second on a 1TB+ database across 50ms latency links, with unamortized, bounded query latencies. Based on PD-ORAM, PrivateFS was built and deployed on Linux as a userspace file system.
【Keywords】: access privacy; cloud computing; oblivious ram
【Paper Link】 【Pages】:989-991
【Authors】: Marian Harbach ; Sascha Fahl ; Thomas Muders ; Matthew Smith
【Abstract】: Security systems frequently rely on warning messages to convey important information, especially when a machine is not able to assess a situation automatically. For a long time, researchers have investigated the effects of warning messages to optimise their reception by a user. Design guidelines and best practises help the developer or interaction designer to adequately channel urgent information. In this poster, we investigate the application of readability measures to assess the difficulty of the descriptive text in warning messages. Adapting such a measure to fit the needs of warning message design allows objective feedback on the quality of a warning's descriptive text. An automated process will be able to assist software developers and designers in creating more readable and hence more understandable security warning messages. We present an initial exploration of the use of readability measures on the descriptive text of warning messages. Existing measures were evaluated on warning messages extracted from current browsers in an experimental study with 15 undergraduate students. While our data has not yet yielded conclusive results, we argue that readability measures can provide valuable assistance when implementing security systems.
【Keywords】: readability; usable security; user interface design; warning messages
【Paper Link】 【Pages】:992-994
【Authors】: Lung-Hao Lee ; Yen-Cheng Juan ; Kuei-Ching Lee ; Wei-Lin Tseng ; Hsin-Hsi Chen ; Yuen-Hsien Tseng
【Abstract】: This paper studies the feasibility of an early warning system that prevents users from falling into dangerous situations during web surfing. Our approach adopts behavioral Hidden Markov Models to explore the collective intelligence embedded in users' browsing behaviors for context-aware category prediction, and applies the results to web security threat prevention. Large-scale experiments show that our method achieves an accuracy of 0.463 in predicting the fine-grained categories of users' next accesses. In real-life filtering simulations, our method achieves a macro-averaged blocking rate of 0.4293, finding web security threats at an early stage that the existing security protection solutions cannot detect, while maintaining a low macro-averaged over-blocking rate of 0.0005 over time. In addition, the behavioral HMM is able to alert users to security threats 8.4 hours earlier than the current URL filtering engine does. Our simulations show that shortening this lag time is critical to avoiding severe diffusion of security threats.
【Keywords】: collaborative filtering; collective intelligence; security assurance
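As a stand-in for the behavioral HMM above, a first-order transition model over browsing categories already conveys the idea of context-aware next-category prediction: count category-to-category moves across many sessions, then predict the most frequent successor of the current category. Categories and session data below are hypothetical.

```python
from collections import Counter, defaultdict

def train(sessions):
    """Count category-to-category transitions over browsing sessions.
    sessions is a list of category sequences, e.g. ["news", "mail"]."""
    trans = defaultdict(Counter)
    for seq in sessions:
        for cur, nxt in zip(seq, seq[1:]):
            trans[cur][nxt] += 1
    return trans

def predict(trans, current):
    """Most likely next category; None if the state was never observed."""
    if current not in trans:
        return None
    return trans[current].most_common(1)[0][0]
```

In the warning-system setting, a predicted next category of "malware" would trigger the alert before the user actually follows the dangerous link, which is the early-warning lag the abstract quantifies.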
【Paper Link】 【Pages】:995-997
【Authors】: Érik Archambault ; Craig A. Shue
【Abstract】: Anonymity networks have been studied for decades, from both theoretical and practical perspectives. Several anonymity systems, such as Tor and Java Anon Proxy (JAP), have become popular enough for general Internet users. However, both noticeably constrain the performance of a user's network and are considered too complicated for some users. A new anonymity service, SurfEasy, has created a physical device that purports to provide easy, high performance anonymous network usage. However, the service does not readily describe the anonymity system it uses. In this work, we examine Tor, JAP, and SurfEasy from a performance and end-user perspective to characterize the tradeoffs in these systems and to provide a guide for analyzing future anonymity systems. In doing so, we find that SurfEasy does indeed offer better browsing performance in some cases, but at the cost of robust anonymity.
【Keywords】: anonymity networks; online privacy
【Paper Link】 【Pages】:998-1000
【Authors】: Giovanni Russello ; Mauro Conti ; Bruno Crispo ; Earlence Fernandes ; Yury Zhauniarovich
【Abstract】: In this paper, we describe a demo of a light virtualisation solution for Android phones. We named our solution MOSES (MOde-of-uses SEcurity Separation). MOSES is a policy-based framework for enforcing software isolation of applications and data. In MOSES, it is possible to define distinct security profiles within a single smartphone. Each security profile is associated with a set of policies that control access to applications and data. One of the main characteristics of MOSES is the dynamic switching from one security profile to another. Each profile is also associated with a context. Through the smartphone's sensors, MOSES is able to detect changes in context and to dynamically switch to the security profile associated with the current context. Our current implementation of MOSES shows minimal overhead compared to standard Android in terms of latencies and battery consumption.
【Keywords】: android security extension; light virtualisation; separation of modes
【Paper Link】 【Pages】:1001-1003
【Authors】: Abedelaziz Mohaisen ; Xinwen Zhang ; Max Schuchard ; Haiyong Xie ; Yongdae Kim
【Abstract】: In an information centric network (ICN), contents are fetched by their names from caches deployed in the network or from origin servers. Once content is fetched from the origin server, it is replicated and cached in all routers along the routing and forwarding paths from the user that issued the interest to the origin server, allowing further "interests" by other users to be fulfilled quickly. However, the way ICN caching and interest fulfillment work poses a great privacy risk: the time difference between responses for interests in cached and uncached contents can be used as an indicator to infer whether or not a nearby user previously requested the same contents requested by the adversary. This work characterizes the extent to which the problem applies in ICN and provides several solutions to address it.
【Keywords】: caching; information centric networks; privacy
【Paper Link】 【Pages】:1004-1006
【Authors】: Eitan Menahem ; Gabi Nakibly ; Yuval Elovici
【Abstract】: In this work we investigate a new approach for detecting network-wide attacks that aim to degrade the network's Quality of Service (QoS). To this end, a new network-based intrusion detection system (NIDS) is proposed. In contrast to the passive approach that most contemporary NIDS follow, which relies solely on production traffic monitoring, the proposed NIDS takes an active approach: specially crafted probes are sent according to a known probability distribution in order to monitor the network for anomalous behavior. The proposed approach takes away much of the variability of network traffic that makes it so difficult to classify, and can therefore detect subtle attacks that would not be detected passively. Furthermore, the active probing approach allows the NIDS to be effectively trained using only examples of the network's normal states, enabling effective detection of zero-day attacks. Preliminary results on a real-life ISP network topology demonstrate the advantages of the proposed NIDS.
【Keywords】: intrusion detection; machine learning
【Paper Link】 【Pages】:1007-1009
【Authors】: Kenrick J. Mock ; Bogdan Hoanca ; Justin Weaver ; Mikal Milton
【Abstract】: The majority of today's authentication systems, including password and fingerprint scanners, are based on one-time, static authentication methods. A continuous, real-time authentication system opens up the possibility for greater security, but such a system must be unobtrusive and secure. In this work we studied whether a commercial eye tracker can be used for unobtrusive, continuous, real-time user authentication via iris recognition. In a user study, all 37 participants could be authenticated with 11% equal error rate (EER). For 14 of the 37 users, iris occlusion was sufficiently small to authenticate with 9% EER. When classified using a k-nearest neighbors algorithm and only the right iris, the same data set allowed 100% accuracy for k = 3. Although these error rates are too high for standalone use, iris recognition via an eye tracker might enable real-time continuous authentication when combined with other more reliable authentication means (e.g., a password). As eye trackers become widely available their capabilities for multiple factor, continuous authentication will become compelling.
【Keywords】: authentication; eye tracking; iris recognition
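The k-nearest-neighbors match at k = 3 mentioned above can be sketched minimally as follows. Real iris codes are binary templates compared by Hamming distance; the Euclidean features and user names here are illustrative stand-ins:

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Minimal k-NN over (feature_vector, user_id) pairs.
    Squared Euclidean distance is a simplifying assumption; real
    iris matching uses Hamming distance over binary iris codes."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical 2-D iris features for two enrolled users:
train = [((0.10, 0.20), "alice"), ((0.15, 0.22), "alice"),
         ((0.12, 0.18), "alice"), ((0.90, 0.80), "bob"),
         ((0.88, 0.85), "bob"), ((0.92, 0.79), "bob")]
print(knn_classify(train, (0.11, 0.21)))  # nearest three samples are alice's
```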
【Paper Link】 【Pages】:1010-1012
【Authors】: Giuseppe Petracca ; Anna Cinzia Squicciarini ; William G. Horne ; Marco Casassa Mont
【Abstract】: We present an approach for real-time analysis of ongoing events in a controlled network. We propose ReasONets (Reasoning On Networks), a distributed and lightweight system able to process and reason about anomalies and incidents observed in closed networks. To the best of our knowledge, this is the first system to combine detection and classification of network events with real-time reasoning. Our demo will show a running prototype of ReasONets, demonstrating the power and accuracy of the reasoning process in the presence of incidents of various kinds.
【Keywords】: reasoning; situational awareness
【Paper Link】 【Pages】:1013-1015
【Authors】: Xian Pan ; Zhen Ling ; Aniket Pingley ; Wei Yu ; Nan Zhang ; Xinwen Fu
【Abstract】: Raw mouse movement data can be sniffed via off-the-shelf tools. In this demo, we show that such data, while seemingly harmless, may reveal extremely sensitive information such as passwords. Nonetheless, such a Bluetooth-mouse-sniffing attack can be challenging to perform, mainly for two reasons: (i) packet loss is common for Bluetooth traffic, and (ii) modern operating systems use complex mouse acceleration strategies, which make it extremely difficult, if not impossible, to reconstruct the precise on-screen cursor coordinates from raw mouse movements. To address these challenges, we have conducted an extensive and careful study, over multiple operating systems, of the reconstruction of the mouse cursor trajectory from raw mouse data and the inference of privacy-sensitive information (e.g., user passwords) from the reconstructed trajectory. Our experimental data demonstrate the severity of the privacy leakage from an unencrypted Bluetooth mouse. To the best of our knowledge, our work is the first to retrieve sensitive information from sniffed raw mouse data. Video links to successful replay attacks on different target OSes are given in Section 3.2.
【Keywords】: bluetooth; mouse; passwords; privacy; sniffing
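The trajectory-reconstruction step, accumulating raw (dx, dy) reports into on-screen coordinates under an acceleration curve, can be sketched as follows. The identity acceleration curve, screen geometry, and starting point are simplifying assumptions; real OS pointer-acceleration curves are non-linear, which is exactly what makes the attack hard:

```python
def reconstruct_trajectory(deltas, accel=lambda d: d,
                           screen=(1920, 1080), start=(960, 540)):
    """Rebuild on-screen cursor positions from raw (dx, dy) mouse reports.
    `accel` is a pluggable stand-in for the OS pointer-acceleration
    curve (identity by default -- an illustrative simplification)."""
    x, y = start
    path = [(x, y)]
    for dx, dy in deltas:
        x = min(max(x + accel(dx), 0), screen[0] - 1)  # clamp to screen edges
        y = min(max(y + accel(dy), 0), screen[1] - 1)
        path.append((x, y))
    return path

print(reconstruct_trajectory([(5, 0), (5, -3)])[-1])  # → (970, 537)
```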
【Paper Link】 【Pages】:1016-1018
【Authors】: Aditi Gupta ; Sam Kerr ; Michael S. Kirkpatrick ; Elisa Bertino
【Abstract】: Code-reuse attacks, including return-oriented programming (ROP) and jump-oriented programming, bypass defenses against code injection by repurposing existing executable code in application binaries and shared libraries toward a malicious end. A common feature of these attacks is their reliance on knowledge of the layout of the executable code. We propose a fine-grained, randomization-based approach that modifies the layout of executable code and hinders code-reuse attacks. Our solution consists solely of a modified dynamic loader that randomizes the internal structure of the executable code, thereby denying the attacker the a priori knowledge necessary for constructing the desired sequence of gadgets. Our approach has the advantage that it can be applied to any ELF binary, and every execution of the binary uses a different randomization. We describe the initial implementation of Marlin, a customized loader for randomization of executable code. Our work shows that such an approach is feasible and significantly increases the level of security against code-reuse attacks.
【Keywords】: malware; return-oriented programming; security
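A toy model of such per-execution randomization, permuting code blocks so gadget addresses differ on every run, might look like this. The block granularity, load base, and block size are illustrative assumptions, not Marlin's actual mechanism:

```python
import random

def randomize_layout(blocks, seed=None):
    """Shuffle a binary's code blocks and record each block's new
    address -- a toy model of per-execution layout randomization.
    The fixed load base and block size are hypothetical."""
    rng = random.Random(seed)
    order = list(range(len(blocks)))
    rng.shuffle(order)               # a fresh permutation each execution
    base, size = 0x400000, 0x1000    # hypothetical load base / block size
    layout, addr_of = [], {}
    for slot, idx in enumerate(order):
        addr_of[idx] = base + slot * size
        layout.append(blocks[idx])
    return layout, addr_of

blocks = ["prologue", "crypto", "io", "epilogue"]
layout, addr_of = randomize_layout(blocks, seed=7)
print(sorted(layout) == sorted(blocks))  # same code, different layout → True
```

An attacker who hardcodes gadget addresses from one run finds them pointing at different code on the next, which is the property the loader relies on.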
【Paper Link】 【Pages】:1019-1021
【Authors】: Xiang Cui ; Binxing Fang ; Peng Liao ; Chaoge Liu
【Abstract】: Nowadays, most research on botnet survivability focuses only on the advanced design of the downstream command and control (C&C) channel (from botmasters to bots, used to deliver commands). However, the upstream C&C channel (from bots to botmasters, used to upload data collected from victims) remains vulnerable and inefficient in most botnets to this day. To address this problem, we propose a C&C channel division scheme and establish a Botnet Triple-Channel Model (BTM). BTM divides a traditional C&C channel into three independent sub-channels, denoted the Command Download Channel (CDC), Registration Channel (RC), and Data Upload Channel (DUC). To illustrate its feasibility and advantages, we implement a BTM botnet prototype that exploits URL Flux for the CDC, Domain Flux for the RC, and introduces a new approach (cloud-based file hosting and URL shortening services) for the DUC. Compared with current botnets, the proposed BTM botnet promises to be as robust as P2P botnets and as efficient as centralized botnets. The ultimate goal of our work is to increase the understanding of advanced botnets, which will promote the development of more efficient countermeasures.
【Keywords】: BTM; Botnet; C&C; Cloud; URL Shortening
【Paper Link】 【Pages】:1022-1024
【Authors】: Bilal Shebaro ; Salmin Sultana ; Shakthidhar Reddy Gopavaram ; Elisa Bertino
【Abstract】: The popularity of sensor networks and their many uses in critical domains such as the military and healthcare make them more vulnerable to malicious attacks. In such contexts, the trustworthiness of sensor data and their provenance is critical for decision-making. In this demonstration, we present an efficient and secure approach for transmitting provenance information about sensor data. Our provenance approach uses lightweight in-packet Bloom filters that are encoded as sensor data travel through intermediate sensor nodes, and are decoded and verified at the base station. Our provenance technique is also able to defend against malicious attacks such as packet dropping and allows one to detect the node responsible for packet drops. As such, it makes it possible to modify the transmission route to avoid nodes that may be compromised or malfunctioning. Our technique is designed to create a trustworthy environment for sensor nodes in which only trusted data are processed.
【Keywords】: bloom filters; data trustworthiness; malicious attacks; provenance; sensor networks
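The in-packet Bloom filter encoding described above can be sketched as follows. The 64-bit filter size, three hash positions per node, and SHA-256 derivation are illustrative choices, not the paper's parameters:

```python
import hashlib

M = 64  # Bloom filter size in bits (illustrative)

def _bits(node_id: str, k: int = 3):
    # k hash positions per node, derived from SHA-256 (illustrative choice)
    digest = hashlib.sha256(node_id.encode()).digest()
    return [digest[i] % M for i in range(k)]

def encode_provenance(path):
    """Fold each forwarding node's ID into an in-packet Bloom filter
    as the packet travels toward the base station."""
    bf = 0
    for node in path:
        for b in _bits(node):
            bf |= 1 << b
    return bf

def might_contain(bf, node):
    """Base-station check: was `node` plausibly on the packet's path?
    Bloom filters allow false positives but never false negatives."""
    return all(bf >> b & 1 for b in _bits(node))

bf = encode_provenance(["n1", "n2", "n3"])
print(might_contain(bf, "n2"))  # → True
```

The filter costs a constant few bytes per packet regardless of path length, which is what makes it suitable for resource-constrained sensor nodes.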
【Paper Link】 【Pages】:1025-1027
【Authors】: Zhaoyu Gao ; Haojin Zhu ; Yao Liu ; Muyuan Li ; Zhenfu Cao
【Abstract】: Database-driven cognitive radio networks are regarded as a promising way to better utilize radio channels without introducing interference to primary users. However, they also face a series of security threats. In this study, we identify a new kind of location-privacy attack that can geo-locate a secondary user from the spectrum he uses. We propose a Spectrum Utilization based Location Inference Algorithm, which is based on intersecting the possible location sets revealed by each channel access or channel transition event in the presence of the primary user. We implement our algorithm on data extracted from the Google Earth coverage maps released by the FCC. Our experimental results show that 80% of secondary users (SUs) can be located to within 10 cells based on 25 or fewer channels.
【Keywords】: db-driven cognitive radio network; location privacy; spectrum utilization
【Paper Link】 【Pages】:1028-1030
【Authors】: Jiawei Yuan ; Lu Shi ; Shucheng Yu ; Ming Li
【Abstract】: Simultaneously realizing device authentication and fast secret key extraction is a challenging issue in wireless networks. Most existing works solve this problem by utilizing either advanced hardware or an out-of-band channel, neither of which is always available for commercial off-the-shelf wireless devices. In this work, we solve this challenging issue in the setting of Wireless Body Area Networks (BANs) and propose a lightweight authenticated secret key extraction scheme, ASK-BAN. The proposed scheme is based solely on wireless channel measurements and does not introduce any advanced hardware or rely on any out-of-band channel. Experimental results show that our scheme offers a high secret key extraction rate while simultaneously providing effective device authentication.
【Keywords】: authenticated key generation; ban; physical layer; sensor
【Paper Link】 【Pages】:1031-1033
【Authors】: Shumin Guo ; Keke Chen
【Abstract】: This poster presents a preliminary study on the PerturBoost approach that aims to provide efficient and secure classifier learning in the cloud with both data and model privacy preserved.
【Keywords】: boosting; cloud computing; rasp perturbation; secure data mining
【Paper Link】 【Pages】:1034-1036
【Authors】: Chao Yang ; Vinod Yegneswaran ; Phillip A. Porras ; Guofei Gu
【Abstract】: The prevalence of malware in Android marketplaces is a growing and significant problem. Among the most worrisome concerns are malicious Android applications that attempt to steal money from unsuspecting users. These malicious applications are uploaded under the guise of benign applications, typically to third-party alternative marketplaces that lack proper security vetting procedures, and are subsequently downloaded and executed by unsuspecting victims. In this work, we propose Money-Guard, a systematic approach to detecting stealthy money-stealing applications in popular Android markets. Our technique relies on detecting two key behavioral heuristics that appear to be common across much money-stealing Android malware: hardcoded exfiltration and notification suppression. In a preliminary analysis of 47 SMS-based money-stealing applications, we confirm that 41 of them follow the above pattern, and we describe a lightweight detection approach that identifies this behavioral pattern.
【Keywords】: Android; malicious apps; security
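The two behavioral heuristics, a hardcoded exfiltration number plus notification suppression, could be checked over decompiled sources roughly as follows. The regex, the API names scanned for, and the sample snippet are illustrative assumptions about what such decompiled code contains, not Money-Guard's actual detector:

```python
import re

# Hardcoded premium SMS short codes embedded as string literals (assumed pattern):
PREMIUM_SMS = re.compile(r'"\d{4,6}"')
# APIs a malicious receiver might call to hide carrier replies (illustrative names):
SUPPRESS = ("abortBroadcast", "cancelNotification")

def looks_money_stealing(decompiled_source: str) -> bool:
    """Flag sources exhibiting BOTH heuristics from the abstract:
    a hardcoded exfiltration number and notification suppression."""
    has_number = bool(PREMIUM_SMS.search(decompiled_source))
    suppresses = any(api in decompiled_source for api in SUPPRESS)
    return has_number and suppresses

sample = 'sendTextMessage("10086", null, "SUB", null, null); this.abortBroadcast();'
print(looks_money_stealing(sample))  # → True
```

Requiring both signals together keeps false positives low, since legitimate SMS apps rarely combine a hardcoded destination with suppressed incoming notifications.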
【Paper Link】 【Pages】:1037-1039
【Authors】: Zhaoyan Xu ; Jialong Zhang ; Guofei Gu ; Zhiqiang Lin
【Abstract】: Inspired by biological vaccines, we explore the possibility of developing similar vaccines for malware immunization. We provide the first systematic study in this direction and present a prototype system, AGAMI, for automatic generation of vaccines for malware immunization. With a novel use of several dynamic malware analysis techniques, we show that it is possible to extract a lightweight vaccine from current malware; after injecting such a vaccine into clean machines, they become immune to future infection by the same malware family. We evaluate AGAMI on a large set of real-world malware samples and successfully extract working vaccines for many families, such as Conficker and Zeus. We believe it is an appealing complementary technique to existing malware defense solutions.
【Keywords】: malware analysis; malware detection
【Paper Link】 【Pages】:1040-1042
【Authors】: Jidong Xiao ; Zhang Xu ; Hai Huang ; Haining Wang
【Abstract】: Memory deduplication has been widely adopted in commodity hypervisors. However, while this technique improves memory efficiency, it has an impact on system security. In particular, memory deduplication is usually implemented using a variant of the copy-on-write technique, under which writing to a shared page incurs a longer access time than writing to a non-shared page. By exploiting this artifact, we demonstrate that a new covert channel can be built in a virtualized environment.
【Keywords】: covert channel; memory deduplication; virtualization
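The channel's encode/decode logic can be illustrated with a small simulation. The latency constants stand in for the real copy-on-write penalty; no actual deduplication is measured here:

```python
# Simulated timing covert channel over memory deduplication.
# FAST/SLOW are made-up constants standing in for the measured
# write latencies; the encode/decode logic is the point.

FAST, SLOW = 1.0, 5.0  # illustrative write latencies

def write_latency(page_is_deduplicated: bool) -> float:
    # Writing to a deduplicated (shared) page triggers copy-on-write,
    # so it takes measurably longer than writing to a private page.
    return SLOW if page_is_deduplicated else FAST

def sender_encode(bits):
    # Sender makes page i identical to a page the receiver also holds
    # iff bit i is 1, so the hypervisor deduplicates exactly those pages.
    return [bit == 1 for bit in bits]

def receiver_decode(pages, threshold=(FAST + SLOW) / 2):
    # Receiver writes to each page and recovers bits from the latency.
    return [1 if write_latency(shared) > threshold else 0
            for shared in pages]

message = [1, 0, 1, 1, 0]
print(receiver_decode(sender_encode(message)) == message)  # → True
```

In a real VM, the sender and receiver sit in different guests and the hypervisor's page scanner performs the deduplication between encode and decode.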
【Paper Link】 【Pages】:1043-1045
【Authors】: Bo Chen ; Reza Curtmola
【Abstract】: Remote Data Checking (RDC) allows clients to efficiently check the integrity of data stored at untrusted servers. This allows data owners to assess the risk of outsourcing data in the public cloud, making RDC a valuable tool for data auditing. Early RDC schemes focused on static data, whereas later schemes such as DPDP support the full range of dynamic operations on the outsourced data, including insertions, modifications, and deletions. Robustness is required for both static and dynamic RDC schemes that rely on spot checking for efficiency. In this paper, we propose the first RDC schemes that provide robustness and, at the same time, support dynamic updates, while requiring small, constant client storage.
【Keywords】: cloud storage; dynamic provable data possession; remote data integrity checking; robustness; small corruption
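Why spot-checking schemes need robustness can be made concrete with the standard detection-probability calculation: checking c random blocks out of n misses t corrupted blocks with probability C(n−t, c)/C(n, c), so small corruptions are likely to slip through unless the scheme can tolerate them. The block counts below are illustrative:

```python
from math import comb

def detection_probability(n: int, t: int, c: int) -> float:
    """Chance that spot-checking c random blocks out of n hits at
    least one of t corrupted blocks. When t is tiny, this is small
    for practical c -- the gap that robustness must cover."""
    return 1 - comb(n - t, c) / comb(n, c)

# Checking 460 of 10,000 blocks when 1% of them are corrupted:
print(round(detection_probability(10_000, 100, 460), 3))
```

The complementary case, corruption of only a handful of blocks, yields a detection probability near zero for any reasonable c, which is exactly the "small corruption" that robust RDC schemes protect against with error-correcting encoding.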
【Paper Link】 【Pages】:1046-1048
【Authors】: Supriyo Chakraborty ; Kasturi Rangan Raghavan ; Mani B. Srivastava ; Harris Teague
【Abstract】: Smartphones with increased computation and sensing capabilities have enabled the growth of a new generation of applications that are organic and designed to react to user contexts. These contexts typically define the personal, social, work, and urban spaces of an individual and are derived from underlying sensor measurements. The shared context streams therefore embed information which, when stitched together, can reveal behavioral patterns and enable sensitive inferences, raising serious privacy concerns. In this paper, we propose a model-based technique to capture the relationships between these contexts and better understand the privacy implications of sharing them. We further demonstrate that, by using a generative model of the context streams, we can simultaneously meet the utility objectives of context-aware applications while maintaining individual privacy. We present our current implementation, which uses offline model learning with online inference performed on the smartphone. Preliminary results provide a proof of concept of the proposed technique.
【Keywords】: context streams; context-awareness; dynamic bayesian networks; information leakage; privacy
【Paper Link】 【Pages】:1049-1051
【Authors】: Dongxi Liu ; Shenlu Wang
【Abstract】: Cloud database services are attractive for managing outsourced databases. However, data security and privacy are major concerns hampering the acceptance of cloud database services. A straightforward way to address these concerns is to encrypt the database, but an encrypted database cannot be easily queried. In this demo paper, we demonstrate that aggregate SQL queries with range conditions can be performed efficiently over encrypted databases, without first decrypting them, by using our new homomorphic encryption scheme. The techniques in this paper can be applied to existing database management systems (DBMSs). Moreover, they do not require predetermining the maximum sum or number of data items in a database table column. These features make our technology suitable for managing long-standing and large encrypted databases.
【Keywords】: database; homomorphic encryption; order-preserving index; sql query
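As a stand-in for the paper's scheme, any additively homomorphic cipher lets a server compute an encrypted SUM without decrypting the data. A tiny textbook Paillier instance (far too small for real security, and not the authors' construction) illustrates the idea:

```python
import secrets
from math import gcd

# Tiny textbook Paillier (additively homomorphic) -- an illustrative
# stand-in, NOT the paper's scheme, with toy parameters for clarity.
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1

def encrypt(m: int) -> int:
    while True:
        r = secrets.randbelow(n)
        if r and gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    L = lambda u: (u - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)
    return (L(pow(c, lam, n2)) * mu) % n

# The server multiplies ciphertexts to SUM encrypted salaries
# without ever decrypting them:
encrypted = [encrypt(v) for v in (1200, 800, 500)]
total_ct = 1
for c in encrypted:
    total_ct = (total_ct * c) % n2
print(decrypt(total_ct))  # → 2500
```

Only the client, holding the private key, recovers the aggregate; the server sees nothing but ciphertexts throughout.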
【Paper Link】 【Pages】:1052
【Authors】: Ruby Lee ; Simha Sethumadhavan ; G. Edward Suh
【Abstract】: Building a secure computing system requires careful coordination among all layers in the system from hardware to software. Even if secure by itself, a higher layer protection mechanism may be bypassed if lower layer software or hardware is vulnerable. Additionally, hardware complements software through its efficiency, tamper resistance, etc. There have been significant efforts recently in hardware communities that aim to leverage hardware strengths to secure software layers, and also to secure hardware itself. This tutorial presents some of these hardware-enhanced security techniques to the security community.
【Keywords】: hardware security; hardware trojan; puf; secure cloud computing; secure processors; side channel attacks
【Paper Link】 【Pages】:1053
【Authors】: Stuart S. Shapiro
【Abstract】: "Privacy by design" (PbD) represents a distinct philosophical movement and a shift away from the dominant legal-oriented approach to privacy and toward an approach that is more proactive, technical, and embedded. However, it suffers from the general absence of organized systematic techniques for carrying it out. In part, this gap reflects a failure to appropriately leverage and organize existing ideas and methods, but it also reflects the need to develop new methods capable of addressing the complexities of new socio-technical systems. This tutorial aims to survey the state of PbD and what will be required to move it into the realm of actionable and structured techniques.
【Keywords】: privacy by design
【Paper Link】 【Pages】:1054-1055
【Authors】: David Dagon
【Abstract】: DNS data is increasingly used in security analysis, intrusion detection, and research. Even small DNS collection systems can generate enormous amounts of DNS traffic, requiring tera-scale storage. As a result, researchers looking at DNS traffic must often develop real-time, in-line analysis tools.
【Keywords】: dns security; large-scale data analysis
【Paper Link】 【Pages】:1056-1057
【Authors】: Alvaro A. Cárdenas ; Blaine Nelson ; Benjamin I. P. Rubinstein
【Abstract】: The Workshop on Artificial Intelligence and Security (AISec) focuses on using Artificial Intelligence (AI) and Machine Learning methods to address the unique problems posed within the Security and Privacy application areas and on the implications of using those methods to solve such adversarial problems. The workshop serves as the premier venue for this particular fusion of application, algorithms, and theory and continues to attract submissions from a diverse set of researchers, who address newly arising problems within this ever growing field. The main goal of the workshop is to provide a forum for researchers within the Security, Privacy, Artificial Intelligence (AI), and Machine Learning communities to discuss the role of AI and learning in security and privacy applications and, conversely, to present the unique needs of these problems to the AI and learning communities.
【Keywords】: artificial intelligence; computer privacy; computer security; machine learning
【Paper Link】 【Pages】:1058-1059
【Authors】: Xinwen Zhang ; Xuhua Ding
【Abstract】: Trusted computing plays a pivotal role in enabling one party to evaluate the integrity of others or to ensure desired security assurance, a very challenging task in large-scale and heterogeneous computing environments. Building upon its success from 2006 to 2011, the seventh ACM Workshop on Scalable Trusted Computing continues to serve as a forum for researchers and practitioners to disseminate and discuss recent advances and emerging issues. These proceedings include selected papers that focus on system architectures, enabling mechanisms, and novel applications of trusted computing.
【Keywords】: ccs workshop; scalability; stc; trusted computing
【Paper Link】 【Pages】:1060-1061
【Authors】: Srdjan Capkun ; Seny Kamara
【Abstract】: Notwithstanding the latest buzzwords (grid, cloud, utility computing, SaaS, etc.), large-scale computing and cloud-like infrastructures are here to stay. What exactly they will look like tomorrow is still for the markets to decide, yet one thing is certain: clouds bring with them new, untested deployment models and associated adversarial models and vulnerabilities. It is essential that our community become involved at this early stage. The CCSW workshop aims to bring together researchers and practitioners in all security aspects of cloud-centric and outsourced computing, to act as fertile ground for creative debate and interaction in security-sensitive areas of computing impacted by clouds.
【Keywords】: cloud computing
【Paper Link】 【Pages】:1062-1063
【Authors】: William Enck ; Xuxian Jiang
【Abstract】: Mobile devices such as smartphones and Internet-capable tablets have achieved computing and networking capabilities comparable to traditional personal computers. Their successful consumerization has also become a source of pain for adopting users and organizations. The operating systems supporting these new devices have both advantages and disadvantages with respect to the security they offer. On one hand, they use application sandboxing to contain exploits and limit the privileges given to malware. On the other hand, they collect and organize many forms of security- and privacy-sensitive information simply as a matter of operation, and make that information easily accessible to downloaded third-party applications. Recognizing smartphone security and privacy as an emerging area, this workshop intends to provide a venue for interested researchers and practitioners to get together and exchange ideas, and thus to deepen our understanding of various security and privacy issues on smartphones, specifically platforms such as iOS and Android. To this end, the workshop solicits both technical and position papers on a variety of relevant topics, and strongly encourages novel paradigms and controversial ideas. With strong engagement from our community, the workshop is emerging as a vibrant venue for creative debate and interaction in security- and privacy-sensitive areas of computing and communication broadly impacted by smartphones and mobile devices.
【Keywords】: malware; mobile devices; permission; privacy; security; smartphones
【Paper Link】 【Pages】:1064-1065
【Authors】: Nikita Borisov
【Abstract】: The need for privacy-aware policies, regulations, and techniques has been widely recognized. This workshop discusses the problems of privacy in globally interconnected societies and possible solutions. The 2012 workshop, held in conjunction with the ACM CCS conference, is the eleventh edition of a yearly forum for papers on all aspects of privacy in today's electronic society.
【Keywords】: privacy
【Paper Link】 【Pages】:1066-1067
【Authors】: Mihai Christodorescu
【Abstract】: As more and more systems are controlled by computers and connected to the Internet, a tremendous amount of activity is generated when these systems interact with each other and with their users. The continuous stream of data related to such Internet-connected and computer-controlled systems is known as Big Data and introduces problems at new scales. Computer security and privacy can both benefit and suffer from Big Data, as tools and techniques are made readily available to process security-event streams and as attacks are harder to detect in the huge volume of event data. The BADGERS'12 workshop is a forum for discussing the challenges in and potential solutions for taking advantage of Big Data for improving computer security and privacy. The workshop builds on the successful first edition of BADGERS which was held in conjunction with the EuroSys 2011 conference in Salzburg, Austria [3].
【Keywords】: big data; data analysis; privacy; security; variety; velocity; volume