arXiv daily

Cryptography and Security (cs.CR)

Fri, 12 May 2023

1. A Lightweight Authentication Protocol against Modeling Attacks based on a Novel LFSR-APUF

Authors: Yao Wang, Xue Mei, Zhengtai Chang, Wenbing Fan, Benqing Guo, Zhi Quan

Abstract: Simple authentication protocols based on conventional physical unclonable functions (PUFs) are vulnerable to modeling attacks and other security threats. This paper proposes an arbiter PUF based on a linear feedback shift register (LFSR-APUF). Unlike previously reported designs that use an LFSR only for challenge extension, the proposed scheme feeds the external random challenges into the LFSR module to obfuscate the linear mapping between challenge and response. This prevents attackers from obtaining valid challenge-response pairs (CRPs), significantly increasing resistance to modeling attacks. A 64-stage LFSR-APUF has been implemented on a field-programmable gate array (FPGA) board. The experimental results reveal that the proposed design effectively resists modeling attacks such as logistic regression (LR), evolutionary strategy (ES), artificial neural network (ANN), and support vector machine (SVM) attacks, limiting their prediction rate to 51.79% (close to random guessing for a one-bit response) while only slightly affecting randomness, reliability, and uniqueness. Furthermore, a lightweight authentication protocol is established based on the proposed LFSR-APUF. The protocol incorporates a low-overhead, ultra-lightweight, novel private bit-conversion Cover function that is uniquely bound to each device in the authentication network, together with a dynamic, time-variant obfuscation scheme built on the proposed LFSR-APUF. The resulting protocol not only resists spoofing, physical, and modeling attacks effectively, but also secures the entire authentication network by transferring sensitive information from the server to the database in encrypted form, even when an attacker completely controls the server.
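
To make the obfuscation idea concrete, here is a minimal Python sketch of feeding an external challenge through an LFSR before it reaches a toy arbiter PUF. The tap positions, cycle count, and additive-delay PUF model below are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of LFSR-based challenge obfuscation for an arbiter PUF.
# The tap polynomial, cycle count, and PUF delay model are illustrative
# assumptions, not the paper's exact design.

import random

TAPS = (63, 62, 60, 59)  # assumed taps for a 64-bit Fibonacci LFSR
MASK = (1 << 64) - 1

def lfsr_obfuscate(challenge: int, cycles: int = 64) -> int:
    """Seed the LFSR with the external challenge and clock it,
    hiding the linear challenge-response mapping from an attacker."""
    state = challenge & MASK
    for _ in range(cycles):
        fb = 0
        for t in TAPS:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & MASK
    return state

def arbiter_puf_response(challenge: int, delays) -> int:
    """Toy additive-delay arbiter PUF model: response is the sign
    of the accumulated stage delay difference."""
    total = 0.0
    for i in range(64):
        bit = (challenge >> i) & 1
        total += delays[i] if bit else -delays[i]
    return 1 if total > 0 else 0

delays = [random.gauss(0, 1) for _ in range(64)]  # per-device variation
ext_challenge = random.getrandbits(64)
response = arbiter_puf_response(lfsr_obfuscate(ext_challenge), delays)
print(f"challenge={ext_challenge:016x} response={response}")
```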

2. Two-in-One: A Model Hijacking Attack Against Text Generation Models

Authors: Wai Man Si, Michael Backes, Yang Zhang, Ahmed Salem

Abstract: Machine learning has progressed significantly in applications ranging from face recognition to text generation. However, its success has been accompanied by various attacks. Recently, a new attack has been proposed that raises both accountability and parasitic-computing risks: the model hijacking attack. So far, however, this attack has been applied only to image classification tasks. In this work, we broaden its scope to text generation and classification models, demonstrating its wider applicability. More concretely, we propose a new model hijacking attack, Ditto, that can hijack different text classification tasks into multiple generation ones, e.g., language translation, text summarization, and language modeling. We use a range of text benchmark datasets, including SST-2, TweetEval, AGnews, QNLI, and IMDB, to evaluate the performance of our attacks. Our results show that, using Ditto, an adversary can successfully hijack text generation models without jeopardizing their utility.
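
The toy sketch below (with an invented label-to-sentence mapping; it is not the paper's actual Ditto construction) illustrates the flavor of hijacking a classification task into a generation one: each class label is camouflaged as a benign-looking target sentence, so a model fine-tuned on the transformed pairs looks like an ordinary generator while covertly classifying.

```python
# Sketch of disguising a text classification task as a generation task:
# each label is encoded as a benign target sentence, so training data for
# a "generator" secretly teaches classification. The mapping below is an
# illustrative assumption, not the paper's Ditto construction.

LABEL_CAMOUFLAGE = {
    0: "The weather stayed calm throughout the evening.",  # negative
    1: "The garden bloomed brightly in the morning sun.",  # positive
}

def hijack_dataset(classification_pairs):
    """Convert (sentence, label) pairs into seq2seq (source, target) pairs."""
    return [(text, LABEL_CAMOUFLAGE[label]) for text, label in classification_pairs]

def decode_label(generated: str) -> int:
    """Recover the hidden label by matching the generated output against
    the camouflage sentences."""
    scores = {lab: sum(w in generated for w in sent.split())
              for lab, sent in LABEL_CAMOUFLAGE.items()}
    return max(scores, key=scores.get)

sst2_like = [("a gripping, heartfelt film", 1), ("dull and lifeless", 0)]
print(hijack_dataset(sst2_like))
print(decode_label("The garden bloomed brightly in the morning sun."))  # -> 1
```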

3. Differentially Private Set-Based Estimation Using Zonotopes

Authors: Mohammed M. Dawoud, Changxin Liu, Amr Alanwar, Karl H. Johansson

Abstract: For large-scale cyber-physical systems, the collaboration of spatially distributed sensors is often needed to perform state estimation. Privacy concerns naturally arise from disclosing sensitive measurement signals to a cloud estimator that predicts the system state. To address this issue, we propose a differentially private set-based estimation protocol that preserves the privacy of the measurement signals. Compared to existing work, our approach achieves less privacy loss and utility loss by using a numerically optimized truncated noise distribution: the estimator is perturbed by weaker noise than the analytical approaches in the literature while guaranteeing the same level of privacy, thereby improving estimation utility. Numerical experiments and comparisons with truncated Laplace noise are presented to support our approach. Estimation sets are represented as zonotopes, a less conservative form of set representation that gives set operations a computational advantage. The privacy-preserving noise anonymizes the centers of these estimated zonotopes, concealing their precise positions.
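
As a rough illustration of center anonymization, the sketch below adds truncated Laplace noise to a zonotope's center. The scale and truncation bound are arbitrary, and plain rejection-sampled truncated Laplace stands in for the paper's numerically optimized distribution.

```python
# Sketch of anonymizing a zonotope's center with truncated Laplace noise.
# A zonotope is {c + G @ xi : ||xi||_inf <= 1}; perturbing the center c
# hides the set's exact position. The scale/bound values and the plain
# truncated-Laplace sampler are illustrative; the paper numerically
# optimizes the truncated noise distribution instead.

import numpy as np

rng = np.random.default_rng(0)

def truncated_laplace(scale: float, bound: float, size: int) -> np.ndarray:
    """Rejection-sample Laplace(0, scale) restricted to [-bound, bound]."""
    out = np.empty(size)
    filled = 0
    while filled < size:
        cand = rng.laplace(0.0, scale, size - filled)
        keep = cand[np.abs(cand) <= bound]
        out[filled:filled + len(keep)] = keep
        filled += len(keep)
    return out

# Zonotope: center c and generator matrix G.
c = np.array([1.0, 2.0])
G = np.array([[0.5, 0.2, 0.0],
              [0.0, 0.3, 0.4]])

c_private = c + truncated_laplace(scale=0.1, bound=0.5, size=c.shape[0])
print("original center:  ", c)
print("anonymized center:", c_private)
```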

4. Unconditionally Secure Access Control Encryption

Authors: Cheuk Ting Li, Sherman S. M. Chow

Abstract: Access control encryption (ACE) enforces, through a sanitizer acting as mediator, that only legitimate sender-receiver pairs can communicate, without the sanitizer learning the communication metadata, including the sender and recipient identities, the policy over them, and the underlying plaintext. Any illegitimate transmission is indistinguishable from pure noise. Existing works focus on computational security and require trapdoor functions and possibly other heavyweight primitives. We present the first ACE scheme with information-theoretic security, i.e., security against computationally unbounded adversaries. Our novel randomization techniques over matrices realize sanitization (traditionally done via homomorphism over a fixed randomness space) such that the secret message in the hidden message subspace remains intact if and only if there is no illegitimate transmission.
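
The following toy (a sketch only, with made-up vectors over a small prime field; not the paper's construction) conveys the flavor of subspace-based sanitization: the receiver's decoding vector annihilates the randomness direction, so a message encoded along the legitimate direction survives re-randomization by the sanitizer, while a transmission along any other direction decodes incorrectly.

```python
# Toy illustration (not the paper's scheme) of sanitization by
# re-randomization over a prime field. The decoding vector u satisfies
# u.v = 1 and u.w = 0, so messages along v survive fresh randomness
# along w, while messages along other directions do not decode to m.
# All vectors below are illustrative assumptions.

import random

P = 101  # small prime field F_P

def dot(a, b):
    return sum(x * y for x, y in zip(a, b)) % P

# Setup (trusted): v = legitimate message direction, w = randomness
# direction, u = dual decoding vector.
v = [1, 0, 2, 0]
w = [0, 1, 0, 3]
u = [1, 0, 0, 0]
assert dot(u, v) == 1 and dot(u, w) == 0

def encrypt(m, direction):
    r = random.randrange(P)
    return [(m * d + r * ww) % P for d, ww in zip(direction, w)]

def sanitize(ct):
    """Sanitizer adds fresh randomness along w; it never sees m."""
    r = random.randrange(P)
    return [(c + r * ww) % P for c, ww in zip(ct, w)]

def decrypt(ct):
    return dot(u, ct)

m = 42
print(decrypt(sanitize(encrypt(m, v))))             # 42: legitimate survives
print(decrypt(sanitize(encrypt(m, [3, 1, 4, 1]))))  # wrong direction: not m
```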