arXiv daily

Cryptography and Security (cs.CR)

Thu, 18 May 2023

1. BrutePrint: Expose Smartphone Fingerprint Authentication to Brute-force Attack

Authors: Yu Chen, Yiling He

Abstract: Fingerprint authentication has been widely adopted on smartphones to complement traditional password authentication, making it a tempting target for attackers. The smartphone industry is well aware of existing threats; in particular, the presentation attacks studied by most prior work are largely neutralized by liveness detection and attempt limits. In this paper, we study the seemingly impossible fingerprint brute-force attack on off-the-shelf smartphones and propose a generic attack framework. We implement BrutePrint to automate the attack: it acts as a man-in-the-middle that bypasses the attempt limit and hijacks fingerprint images. Specifically, the bypass exploits two zero-day vulnerabilities in the smartphone fingerprint authentication (SFA) framework, and the hijacking leverages the simplicity of the SPI protocol. Moreover, we consider a practical cross-device attack scenario and tackle the liveness and matching problems with neural style transfer (NST), which generates valid brute-forcing inputs from arbitrary fingerprint images. A case study shows that our attack always bypasses liveness detection and the attempt limit, while 71% of spoofs are accepted. We evaluate BrutePrint on 10 representative smartphones from top-5 vendors and three typical types of applications: screen lock, payment, and privacy. All of them are vulnerable to some extent; the fingerprint brute-force attack is validated on all devices except the iPhone, and the shortest time to unlock a smartphone without prior knowledge of the victim is estimated at 40 minutes. Furthermore, we suggest software and hardware mitigation measures.
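
A toy, self-contained sketch of the brute-force loop the abstract describes is given below. The simulated authenticator, the attempt-counter reset, and the candidate spoof stream are all illustrative stand-ins; the real attack injects style-transferred fingerprint images on the SPI bus and exploits two undisclosed vulnerabilities, none of which is reproduced here.

import random

class SimulatedSFA:
    """Stand-in for a smartphone fingerprint authentication (SFA) pipeline."""
    def __init__(self, accept_prob=0.001, attempt_limit=5):
        self.accept_prob = accept_prob      # chance a given spoof matches the enrolled print
        self.attempt_limit = attempt_limit  # lockout threshold enforced by the real system
        self.attempts = 0

    def try_fingerprint(self, image) -> bool:
        if self.attempts >= self.attempt_limit:
            raise RuntimeError("locked out: attempt limit reached")
        self.attempts += 1
        return random.random() < self.accept_prob

    def reset_counter(self):
        # Stand-in for the attempt-limit bypass (the paper's zero-day exploits).
        self.attempts = 0

def brute_force(sfa, candidate_images, bypass_limit=True):
    """Cycle candidate spoofs until one is accepted; never let the counter saturate."""
    for image in candidate_images:
        if bypass_limit:
            sfa.reset_counter()
        if sfa.try_fingerprint(image):  # in the real attack the image is injected over SPI
            return image
    return None

if __name__ == "__main__":
    spoofs = (f"nst_spoof_{i}" for i in range(10_000))  # stand-ins for NST-generated images
    print("unlocked with:", brute_force(SimulatedSFA(), spoofs))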

2. GraphMoco: A Graph Momentum Contrast Model Using Multimodal Structure Information for Large-scale Binary Function Representation Learning

Authors: Sun RuiJin, Guo ShiZe, Guo Xi, Pan ZhiSong

Abstract: The ability to compute similarity scores for binary code at the function level is essential for cybersecurity. A single binary file can contain tens of thousands of functions, so a deployable learning framework for cybersecurity applications needs to work not only accurately but also efficiently on large amounts of data. Traditional methods suffer from two drawbacks. First, it is very difficult to annotate pairs of functions with accurate labels, and supervised methods are easily overtrained on inaccurate labels. Second, existing approaches rely either on pre-trained encoders or on fine-grained graph comparison, both of which have shortcomings in time or memory consumption. We focus on large-scale Binary Code Similarity Detection (BCSD) and, to mitigate these problems, propose GraphMoco: a graph momentum contrast model that uses multimodal structure information for large-scale binary function representation learning. We take an unsupervised approach that makes full use of the structural information in the binary code and requires no manually labelled similar or dissimilar pairs. Our model trains efficiently on large amounts of data, and our experimental results show that it outperforms the state of the art in accuracy.
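
For readers unfamiliar with momentum contrast, the following PyTorch sketch shows a generic MoCo-style training step: a query encoder trained by gradient descent, a momentum-updated key encoder, a queue of negative keys, and an InfoNCE loss. It illustrates only the underlying technique, under the assumption that each binary function is summarized as a fixed-size feature vector; it is not the authors' multimodal graph encoder or training pipeline.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, in_dim=128, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, emb_dim))
    def forward(self, x):
        return F.normalize(self.net(x), dim=1)   # unit-norm embeddings

f_q, f_k = Encoder(), Encoder()
f_k.load_state_dict(f_q.state_dict())            # key encoder starts as a copy of the query encoder
queue = F.normalize(torch.randn(4096, 64), dim=1)  # dictionary of negative keys
opt = torch.optim.Adam(f_q.parameters(), lr=1e-3)
m, temperature = 0.999, 0.07

def train_step(x_query, x_key):
    """x_query / x_key: two 'views' of the same function (e.g. different compilations)."""
    global queue
    q = f_q(x_query)                              # queries: gradients flow
    with torch.no_grad():
        # momentum update of the key encoder
        for p_k, p_q in zip(f_k.parameters(), f_q.parameters()):
            p_k.data.mul_(m).add_(p_q.data, alpha=1 - m)
        k = f_k(x_key)                            # keys: no gradients
    l_pos = (q * k).sum(dim=1, keepdim=True)      # positive logits, shape (N, 1)
    l_neg = q @ queue.t()                         # negative logits, shape (N, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long)  # the positive key sits at index 0
    loss = F.cross_entropy(logits, labels)        # InfoNCE loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    queue = torch.cat([k, queue], dim=0)[: queue.size(0)]  # enqueue new keys, dequeue oldest
    return loss.item()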

3. Amplification by Shuffling without Shuffling

Authors: Borja Balle, James Bell, Adrià Gascón

Abstract: Motivated by recent developments in the shuffle model of differential privacy, we propose a new approximate shuffling functionality called the Alternating Shuffle and provide a protocol implementing it in a single-server threat model where the adversary observes all communication. Unlike previous shuffling protocols in this threat model, the per-client communication of our protocol grows only sub-linearly in the number of clients. Moreover, we study the concrete efficiency of our protocol and show that it can improve per-client communication by one or more orders of magnitude with respect to previous (approximate) shuffling protocols. We also show a differential privacy amplification result for alternating shuffling analogous to the one for uniform shuffling, and demonstrate that shuffling-based protocols for secure summation based on a construction of Ishai et al. (FOCS'06) remain secure under the Alternating Shuffle. In the process we also develop a protocol for exact shuffling in the single-server threat model with amortized logarithmic per-client communication, which may be of independent interest.
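
As background, the sketch below illustrates the standard shuffle model of differential privacy that the paper builds on: each client runs a local randomizer (binary randomized response here), a shuffler strips the link between clients and messages, and the analyst debiases the aggregate. The shuffler below is an idealized trusted one; the paper's contribution, realizing an approximate (alternating) shuffle with a single untrusted server and sub-linear per-client communication, is not reproduced in this toy example.

import math
import random

def local_randomizer(bit: int, epsilon0: float) -> int:
    """epsilon0-DP binary randomized response run by one client."""
    p_truth = math.exp(epsilon0) / (math.exp(epsilon0) + 1)
    return bit if random.random() < p_truth else 1 - bit

def shuffle_and_aggregate(client_bits, epsilon0=1.0):
    """Shuffle-model aggregation: randomize locally, shuffle, then debias the sum."""
    messages = [local_randomizer(b, epsilon0) for b in client_bits]
    random.shuffle(messages)  # idealized shuffler: message order carries no client identity
    n = len(messages)
    p = math.exp(epsilon0) / (math.exp(epsilon0) + 1)
    # E[sum] = C*(2p-1) + n*(1-p), so invert to estimate the true count C of ones.
    return (sum(messages) - n * (1 - p)) / (2 * p - 1)

if __name__ == "__main__":
    true_bits = [random.randint(0, 1) for _ in range(10_000)]
    print("true count:", sum(true_bits), "estimate:", round(shuffle_and_aggregate(true_bits)))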

4. Deep PackGen: A Deep Reinforcement Learning Framework for Adversarial Network Packet Generation

Authors: Soumyadeep Hore, Jalal Ghadermazi, Diwas Paudel, Ankit Shah, Tapas K. Das, Nathaniel D. Bastian

Abstract: Recent advancements in artificial intelligence (AI) and machine learning (ML) algorithms, coupled with the availability of faster computing infrastructure, have enhanced the security posture of cybersecurity operations centers (defenders) through the development of ML-aided network intrusion detection systems (NIDS). Concurrently, the abilities of adversaries to evade security have also increased with the support of AI/ML models. Therefore, defenders need to proactively prepare for evasion attacks that exploit the detection mechanisms of NIDS. Recent studies have found that the perturbation of flow-based and packet-based features can deceive ML models, but these approaches have limitations. Perturbations made to the flow-based features are difficult to reverse-engineer, while samples generated with perturbations to the packet-based features are not playable. Our methodological framework, Deep PackGen, employs deep reinforcement learning to generate adversarial packets and aims to overcome the limitations of approaches in the literature. By taking raw malicious network packets as inputs and systematically making perturbations on them, Deep PackGen camouflages them as benign packets while still maintaining their functionality. In our experiments, using publicly available data, Deep PackGen achieved an average adversarial success rate of 66.4% against various ML models and across different attack types. Our investigation also revealed that more than 45% of the successful adversarial samples were out-of-distribution packets that evaded the decision boundaries of the classifiers. The knowledge gained from our study on the adversary's ability to make specific evasive perturbations to different types of malicious packets can help defenders enhance the robustness of their NIDS against evolving adversarial attacks.
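
The sketch below illustrates the shape of such an adversarial packet-generation loop: an agent applies functionality-preserving perturbations to a malicious packet's mutable fields and is rewarded when a surrogate classifier scores the result as more benign. The feature set, action list, threshold classifier, and random policy are all illustrative assumptions; Deep PackGen itself operates on raw packet bytes with a trained deep reinforcement learning policy.

import random

def surrogate_nids_score(pkt: dict) -> float:
    """Stand-in NIDS: higher score means the packet looks more malicious."""
    return 0.6 * (pkt["payload_len"] > 900) + 0.4 * (pkt["inter_arrival_ms"] < 1.0)

# Each action is a functionality-preserving perturbation of a mutable packet field.
ACTIONS = [
    ("pad_payload",   lambda p: {**p, "payload_len": p["payload_len"] + 64}),
    ("split_payload", lambda p: {**p, "payload_len": max(64, p["payload_len"] // 2)}),
    ("delay_send",    lambda p: {**p, "inter_arrival_ms": p["inter_arrival_ms"] + 2.0}),
    ("tweak_window",  lambda p: {**p, "window_size": p["window_size"] + 128}),
]

def episode(packet: dict, max_steps=10, benign_threshold=0.5):
    """One RL episode: perturb the packet step by step until the surrogate NIDS is evaded."""
    for _ in range(max_steps):
        name, apply_action = random.choice(ACTIONS)  # a trained policy would choose this
        packet = apply_action(packet)
        reward = -surrogate_nids_score(packet)       # a learning agent would optimize this signal
        if surrogate_nids_score(packet) < benign_threshold:
            return packet, True                      # adversarial sample evades the classifier
    return packet, False

if __name__ == "__main__":
    malicious = {"payload_len": 1200, "ttl": 64, "window_size": 512, "inter_arrival_ms": 0.2}
    evaded_pkt, success = episode(malicious)
    print("evasion:", success, evaded_pkt)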