arXiv daily

Cryptography and Security (cs.CR)

Thu, 08 Jun 2023

1. FedMLSecurity: A Benchmark for Attacks and Defenses in Federated Learning and LLMs

Authors: Shanshan Han, Baturalp Buyukates, Zijian Hu, Han Jin, Weizhao Jin, Lichao Sun, Xiaoyang Wang, Chulin Xie, Kai Zhang, Qifan Zhang, Yuhui Zhang, Chaoyang He, Salman Avestimehr

Abstract: This paper introduces FedMLSecurity, a benchmark that simulates adversarial attacks and corresponding defense mechanisms in Federated Learning (FL). As an integral module of the open-sourced library FedML, which facilitates FL algorithm development and performance comparison, FedMLSecurity enhances the security assessment capacity of FedML. FedMLSecurity comprises two principal components: FedMLAttacker, which simulates attacks injected into FL training, and FedMLDefender, which emulates defensive strategies designed to mitigate the impacts of the attacks. FedMLSecurity is open-sourced and is customizable to a wide range of machine learning models (e.g., Logistic Regression, ResNet, GAN) and federated optimizers (e.g., FedAVG, FedOPT, FedNOVA). Experimental evaluations in this paper also demonstrate the ease of applying FedMLSecurity to Large Language Models (LLMs), further reinforcing its versatility and practical utility in various scenarios.
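The attacker/defender split described in the abstract can be pictured with a short, self-contained sketch of a federated averaging round with pluggable attack and defense hooks. This is a hypothetical illustration written for this digest: the function names are invented and do not correspond to FedML's actual API, and the median-based aggregation merely stands in for whatever strategies FedMLDefender provides.

```python
# Minimal sketch of an attacker/defender hook pattern in a federated round.
# Hypothetical illustration only -- these names are NOT FedML's real API.
import numpy as np

def client_update(global_w, x, y, lr=0.1):
    # One gradient step on a least-squares objective as a stand-in for local training.
    grad = x.T @ (x @ global_w - y) / len(y)
    return global_w - lr * grad

def attacker_hook(updates, malicious_ids, scale=-5.0):
    # Model-poisoning attack: malicious clients submit scaled, inverted updates.
    return [scale * u if i in malicious_ids else u for i, u in enumerate(updates)]

def defender_hook(updates):
    # Defense applied at aggregation time: coordinate-wise median instead of the mean.
    return np.median(np.stack(updates), axis=0)

rng = np.random.default_rng(0)
d, n_clients = 5, 10
w = np.zeros(d)
clients = [(rng.normal(size=(50, d)), rng.normal(size=50)) for _ in range(n_clients)]

for _ in range(20):                                          # federated training rounds
    updates = [client_update(w, x, y) for x, y in clients]
    updates = attacker_hook(updates, malicious_ids={0, 1})   # simulated attack
    w = defender_hook(updates)                               # simulated defense
```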

2. Machine Learning in Digital Forensics: A Systematic Literature Review

Authors: Tahereh Nayerifard, Haleh Amintoosi, Abbas Ghaemi Bafghi, Ali Dehghantanha

Abstract: The development and exploitation of technology have led to the further expansion and growing complexity of digital crimes. At the same time, the growing volume of data, and consequently of evidence, is a severe challenge in digital forensics. In recent years, the application of machine learning techniques to identify and analyze evidence has been on the rise across different digital forensics domains. This paper offers a systematic literature review of research published in major academic databases from January 2010 to December 2021 on the application of machine learning in digital forensics; to the best of our knowledge, no review of comparable comprehensiveness has been presented before. The review also identifies the domains of digital forensics and the machine learning methods that have received the most attention in previous papers, and finally highlights the remaining research gaps. Our findings demonstrate that image forensics has obtained the greatest benefit from machine learning methods compared to other forensic domains, and that CNN-based models are the most prominent machine learning methods increasingly used in digital forensics. We present a comprehensive mind map to give a clear overview of the analytical results. Furthermore, a visual analysis based on the papers' keywords reveals the main thematic clusters. This research gives digital forensics investigators, machine learning developers, security researchers, and enthusiasts a broad view of the application of machine learning in digital forensics.

3. G$^2$uardFL: Safeguarding Federated Learning Against Backdoor Attacks through Attributed Client Graph Clustering

Authors: Hao Yu, Chuan Ma, Meng Liu, Xinwang Liu, Zhe Liu, Ming Ding

Abstract: As a collaborative paradigm, Federated Learning (FL) empowers clients to engage in collective model training without exchanging their respective local data. Nevertheless, FL remains vulnerable to backdoor attacks, in which an attacker compromises a subset of clients and injects poisoned model weights into the aggregation process to yield attacker-chosen predictions for particular samples. Existing countermeasures, mainly based on anomaly detection, may erroneously reject legitimate weights while accepting malicious ones due to inadequacies in quantifying client model similarities. Other defense mechanisms prove effective only when confronted with a restricted number of malicious clients, e.g., fewer than 10%. To address these vulnerabilities, we present G$^2$uardFL, a protective framework that reframes the detection of malicious clients as an attributed graph clustering problem, thereby safeguarding FL systems. This framework employs a client graph clustering technique to identify malicious clients and incorporates an adaptive method to amplify the disparity between the aggregated model and poisoned client models, thereby eliminating previously embedded backdoors. A theoretical analysis of convergence is also performed to demonstrate that the global model closely approximates a model untouched by any backdoor. In empirical evaluations against various backdoor attacks and in comparison with cutting-edge defenses, G$^2$uardFL considerably undermines the effectiveness of backdoor attacks while having a negligible impact on benign sample performance.
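As a rough illustration of the general idea of casting malicious-client detection as graph clustering (not the G$^2$uardFL algorithm itself), one can build a similarity graph over client updates, split it spectrally, and keep the larger cluster for aggregation. The sketch below is a generic reconstruction of that idea under these assumptions:

```python
# Generic sketch: detect outlier clients by clustering a similarity graph of
# their model updates. Illustrative only; not the G^2uardFL algorithm.
import numpy as np

def cosine_similarity_matrix(updates):
    u = np.stack([v / (np.linalg.norm(v) + 1e-12) for v in updates])
    return u @ u.T

def spectral_bipartition(adjacency):
    # Split the client graph into two groups using the Fiedler vector
    # (eigenvector of the second-smallest eigenvalue of the graph Laplacian).
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    _, eigvecs = np.linalg.eigh(laplacian)
    return eigvecs[:, 1] >= 0                        # boolean group assignment

rng = np.random.default_rng(1)
benign_dir, poisoned_dir = rng.normal(size=100), rng.normal(size=100)
updates = [benign_dir + 0.05 * rng.normal(size=100) for _ in range(8)] + \
          [5.0 * poisoned_dir + 0.05 * rng.normal(size=100) for _ in range(2)]

adj = (cosine_similarity_matrix(updates) + 1.0) / 2.0   # non-negative edge weights
np.fill_diagonal(adj, 0.0)
group = spectral_bipartition(adj)
keep = group if group.sum() >= len(updates) / 2 else ~group   # larger cluster = benign
print("clients kept for aggregation:", np.where(keep)[0])     # expect clients 0..7
```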

4. Parallel and Asynchronous Smart Contract Execution

Authors: Jian Liu, Peilun Li, Raymond Cheng, N. Asokan, Dawn Song

Abstract: Today's blockchains suffer from low throughput and high latency, which impedes their adoption for more complex applications such as smart contracts. In this paper, we propose a novel paradigm for smart contract execution. It distinguishes between consensus nodes and execution nodes: different groups of execution nodes can execute transactions in parallel, while consensus nodes asynchronously order transactions and process execution results. Moreover, it requires no coordination among execution nodes and can effectively prevent livelocks. We show two ways of applying this paradigm to blockchains. First, we show how Ethereum can be made to support parallel and asynchronous contract execution \emph{without hard-forks}. Then, we propose a new public, permissionless blockchain. Our benchmark shows that, with a fast consensus layer, it can provide high throughput even for complex transactions such as CryptoKitties gene mixing, and it can also protect simple transactions from being starved by complex ones.
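The ordering/execution split can be pictured with a toy sketch in which transactions are partitioned into per-contract execution groups that run in parallel without coordinating, while a "consensus" step merely orders and commits their receipts. This illustrates the general paradigm only, not the concrete protocol or the Ethereum integration proposed in the paper; the transaction format below is invented.

```python
# Toy sketch of separating transaction ordering (consensus) from execution.
from concurrent.futures import ThreadPoolExecutor
from collections import defaultdict

transactions = [
    {"id": 0, "contract": "tokenA", "op": ("mint", 10)},
    {"id": 1, "contract": "tokenB", "op": ("mint", 7)},
    {"id": 2, "contract": "tokenA", "op": ("mint", 5)},
]

def execute_group(contract, txs):
    # Each group touches a disjoint contract, so groups need no coordination.
    state, receipts = 0, []
    for tx in txs:
        _, amount = tx["op"]
        state += amount
        receipts.append((tx["id"], state))
    return contract, receipts

groups = defaultdict(list)
for tx in transactions:                      # assign transactions to per-contract groups
    groups[tx["contract"]].append(tx)

with ThreadPoolExecutor() as pool:           # execution node groups work in parallel
    results = list(pool.map(lambda kv: execute_group(*kv), groups.items()))

# The consensus layer then orders receipts deterministically and commits them.
ordered = sorted(r for _, receipts in results for r in receipts)
print(ordered)                               # [(0, 10), (1, 7), (2, 15)]
```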

5. SmartBugs 2.0: An Execution Framework for Weakness Detection in Ethereum Smart Contracts

Authors: Monika di Angelo, Thomas Durieux, João F. Ferreira, Gernot Salzer

Abstract: Smart contracts are blockchain programs that often handle valuable assets. Writing secure smart contracts is far from trivial, and any vulnerability may lead to significant financial losses. To support developers in identifying and eliminating vulnerabilities, methods and tools for automated analysis have been proposed. However, the lack of commonly accepted benchmark suites and performance metrics makes it difficult to compare and evaluate such tools. Moreover, the tools are heterogeneous in their interfaces and reports as well as in their runtime requirements, and installing several of them is time-consuming. In this paper, we present SmartBugs 2.0, a modular execution framework. It provides a uniform interface to 19 tools aimed at smart contract analysis and accepts both Solidity source code and EVM bytecode as input. After describing its architecture, we highlight the features of the framework. We evaluate the framework via its reception by the community and illustrate its scalability by describing its role in a study involving 3.25 million analyses.

6. Re-aligning Shadow Models can Improve White-box Membership Inference Attacks

Authors: Ana-Maria Cretu, Daniel Jones, Yves-Alexandre de Montjoye, Shruti Tople

Abstract: Machine learning models have been shown to leak sensitive information about their training datasets. As models are increasingly deployed on devices to automate tasks and power new applications, there have been concerns that such white-box access to a model's parameters, as opposed to the black-box setting which only provides query access to the model, increases the attack surface. Directly extending the shadow modelling technique from the black-box to the white-box setting has been shown, in general, not to perform better than black-box-only attacks. A key reason is misalignment, a known characteristic of deep neural networks. We here present the first systematic analysis of the causes of misalignment in shadow models and show the use of a different weight initialisation to be the main cause of shadow model misalignment. Second, we extend several re-alignment techniques, previously developed in the model fusion literature, to the shadow modelling context, where the goal is to re-align the layers of a shadow model to those of the target model. We show re-alignment techniques to significantly reduce the measured misalignment between the target and shadow models. Finally, we perform a comprehensive evaluation of white-box membership inference attacks (MIA). Our analysis reveals that (1) MIAs suffer from misalignment between shadow models, but that (2) re-aligning the shadow models improves, sometimes significantly, MIA performance. On the CIFAR10 dataset with a false positive rate of 1\%, white-box MIA using re-aligned shadow models improves the true positive rate by 4.5\%. Taken together, our results highlight that on-device deployment increases the attack surface and that the newly available information can be used by an attacker.
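For intuition, one standard re-alignment technique from the model fusion literature is permutation matching of neurons between corresponding layers. The sketch below conveys the flavour of re-aligning a shadow layer to the target layer; the paper evaluates several such techniques, and this is not claimed to be its exact method.

```python
# Sketch of permutation-based layer re-alignment: permute a shadow layer's
# neurons so they best match the target layer's neurons. Illustrative only.
import numpy as np
from scipy.optimize import linear_sum_assignment

def realign_layer(target_w, shadow_w):
    # target_w, shadow_w: (out_features, in_features) weight matrices.
    # Cost = negative dot product between target neuron i and shadow neuron j.
    cost = -(target_w @ shadow_w.T)
    _, perm = linear_sum_assignment(cost)     # optimal one-to-one neuron matching
    # In a full network, the same permutation must also be applied to the
    # incoming weights of the next layer to keep the function unchanged.
    return shadow_w[perm], perm

rng = np.random.default_rng(0)
target = rng.normal(size=(16, 32))
shadow = target[rng.permutation(16)] + 0.01 * rng.normal(size=(16, 32))  # permuted copy

aligned, perm = realign_layer(target, shadow)
print("misalignment before:", np.linalg.norm(target - shadow))
print("misalignment after: ", np.linalg.norm(target - aligned))
```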

7. PriSampler: Mitigating Property Inference of Diffusion Models

Authors: Hailong Hu, Jun Pang

Abstract: Diffusion models have been remarkably successful in data synthesis. This success has also driven the application of diffusion models to sensitive data, such as human face data, which may raise severe privacy concerns. In this work, we present the first systematic privacy study of property inference attacks against diffusion models, in which adversaries aim to extract sensitive global properties of the training set from a diffusion model, such as the proportion of the training data that exhibits certain sensitive properties. Specifically, we consider the most practical attack scenario: adversaries are only allowed to obtain synthetic data. Under this realistic scenario, we evaluate property inference attacks on different types of samplers and diffusion models. A broad range of evaluations shows that various diffusion models and their samplers are all vulnerable to property inference attacks. Furthermore, a case study on off-the-shelf pre-trained diffusion models also demonstrates the effectiveness of the attack in practice. Finally, we propose PriSampler, a new model-agnostic plug-in method to mitigate property inference against diffusion models. PriSampler can be directly applied to well-trained diffusion models and supports both stochastic and deterministic sampling. Extensive experiments illustrate the effectiveness of our defense: it forces adversaries' property inferences to be close to random guesses. PriSampler also significantly outperforms diffusion models trained with differential privacy in both model utility and defense performance.
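The attack setting described above (the adversary sees only synthetic samples) can be sketched as follows: classify a sensitive attribute on each synthetic sample and use the observed fraction as the estimate of the training-set proportion. The attribute classifier and data below are hypothetical stand-ins, not the paper's implementation.

```python
# Generic sketch of property inference from synthetic samples only.
import numpy as np

def attribute_classifier(sample):
    # Stand-in for a real attribute classifier (e.g., "wears glasses").
    return sample.mean() > 0.0

def infer_property_proportion(synthetic_samples):
    labels = [attribute_classifier(s) for s in synthetic_samples]
    return float(np.mean(labels))             # estimated proportion of the property

rng = np.random.default_rng(0)
# Pretend the generator reproduces its training distribution: 30% of samples
# carry the sensitive property (positive mean), 70% do not.
synthetic = [rng.normal(+0.5, 1.0, 64) for _ in range(300)] + \
            [rng.normal(-0.5, 1.0, 64) for _ in range(700)]

print("inferred proportion:", infer_property_proportion(synthetic))   # ~0.3
```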

8. Ownership Protection of Generative Adversarial Networks

Authors: Hailong Hu, Jun Pang

Abstract: Generative adversarial networks (GANs) have shown remarkable success in image synthesis, making GAN models themselves commercially valuable to legitimate model owners. It is therefore critical to technically protect the intellectual property of GANs. Prior works need to tamper with the training set or training process, and they are not robust to emerging model extraction attacks. In this paper, we propose a new ownership protection method based on the common characteristics of a target model and its stolen models. Our method is directly applicable to all well-trained GANs, as it does not require retraining the target models. Extensive experimental results show that our method achieves the best protection performance compared to state-of-the-art methods. Finally, we demonstrate the effectiveness of our method with respect to the number of generations of model extraction attacks, the number of generated samples, different datasets, and adaptive attacks.

9. Detecting Neural Trojans Through Merkle Trees

Authors: Joshua Strubel

Abstract: Deep neural networks are utilized in a growing number of industries. Much of the current literature focuses on the applications of deep neural networks without discussing the security of the network itself. One security issue facing deep neural networks is neural trojans. Through a neural trojan, a malicious actor may force the deep neural network to act in unintended ways. Several potential defenses have been proposed, but they are computationally expensive, complex, or unusable in commercial applications. We propose Merkle trees as a novel way to detect and isolate neural trojans.
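The general idea of a Merkle-tree integrity check over network parameters can be sketched as follows: hash each layer's weights into the leaves of a Merkle tree and compare the root against a trusted reference to detect tampering. This is an illustrative reconstruction of the concept, not necessarily the exact construction proposed in the paper.

```python
# Minimal sketch: Merkle tree over layer weights to detect tampered parameters.
import hashlib
import numpy as np

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # leaves: list of hashes. Pair and hash upward until a single root remains.
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])           # duplicate last node on odd levels
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def model_root(layers):
    return merkle_root([sha256(w.tobytes()) for w in layers])

rng = np.random.default_rng(0)
layers = [rng.normal(size=(8, 8)) for _ in range(4)]   # stand-in model weights
trusted_root = model_root(layers)

layers[2][0, 0] += 0.5                        # simulate a trojaned weight
print("tampered:", model_root(layers) != trusted_root)   # True
# Comparing intermediate tree nodes, rather than only the root, would also
# localize which layer was modified.
```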