arXiv daily

Cryptography and Security (cs.CR)

Tue, 15 Aug 2023

1. Block-Wise Encryption for Reliable Vision Transformer models

Authors: Hitoshi Kiya, Ryota Iijima, Teru Nagamori

Abstract: This article presents block-wise image encryption for the vision transformer and its applications. Perceptual image encryption for deep learning enables us not only to protect the visual information of plain images but also to embed unique features, controlled with a key, into images and models. However, with conventional perceptual encryption methods, model performance degrades under the influence of encryption. In this paper, we focus on block-wise encryption for the vision transformer and introduce three applications: privacy-preserving image classification, access control, and the combined use of federated learning and encrypted images. Our scheme achieves the same performance as models trained without encryption, requires no network modification, and allows the secret key to be updated easily. In experiments, the effectiveness of the scheme is demonstrated in terms of performance degradation and access control on the CIFAR-10 and CIFAR-100 datasets.
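
The abstract gives no implementation details, but the core operation, a keyed permutation of image blocks sized to match the ViT patch grid, can be sketched as below. This is a minimal illustration with hypothetical parameters (block size, key handling), not the authors' reference code:

```python
# Minimal sketch of keyed block-wise image encryption (hypothetical
# parameters; not the authors' reference implementation). The block size
# is chosen to match the ViT patch size so that encryption permutes whole
# patches and the transformer can be trained on encrypted images directly.
import numpy as np

def blockwise_encrypt(image: np.ndarray, key: int, block: int = 16) -> np.ndarray:
    """Shuffle the non-overlapping blocks of `image` with a keyed permutation."""
    h, w, c = image.shape
    assert h % block == 0 and w % block == 0, "image must tile evenly"
    # Cut the image into a flat list of (block x block x c) patches.
    grid = image.reshape(h // block, block, w // block, block, c)
    patches = grid.transpose(0, 2, 1, 3, 4).reshape(-1, block, block, c)
    # Permute patch positions with a key-seeded RNG; the same key must be
    # applied consistently to training and test images.
    perm = np.random.default_rng(key).permutation(len(patches))
    patches = patches[perm]
    # Reassemble the permuted patches into an encrypted image.
    grid = patches.reshape(h // block, w // block, block, block, c)
    return grid.transpose(0, 2, 1, 3, 4).reshape(h, w, c)
```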

2. A Scalable Formal Verification Methodology for Data-Oblivious Hardware

Authors: Lucas Deutschmann, Johannes Mueller, Mohammad Rahmani Fadiheh, Dominik Stoffel, Wolfgang Kunz

Abstract: The importance of preventing microarchitectural timing side channels in security-critical applications has surged in recent years. Constant-time programming has emerged as a best-practice technique for preventing the leakage of secret information through timing. It is based on the assumption that the timing of certain basic machine instructions is independent of their respective input data. However, whether or not an instruction satisfies this data-independent timing criterion varies between individual processor microarchitectures. In this paper, we propose a novel methodology to formally verify data-oblivious behavior in hardware using standard property checking techniques. The proposed methodology is based on an inductive property that enables scalability even to complex out-of-order cores. We show that proving this inductive property is sufficient to exhaustively verify data-obliviousness at the microarchitectural level. In addition, the paper discusses several techniques that can be used to make the verification process easier and faster. We demonstrate the feasibility of the proposed methodology through case studies on several open-source designs. One case study uncovered a data-dependent timing violation in the extensively verified and highly secure Ibex RISC-V core. In addition to several hardware accelerators and in-order processors, our experiments also include RISC-V BOOM, a complex out-of-order processor, highlighting the scalability of the approach.
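
The paper's verification is exhaustive formal property checking on the RTL; as a rough intuition for the data-independent timing criterion itself, the following toy, simulation-based analogue checks whether an instruction's latency varies with its operand. `cycles_of` is a hypothetical stand-in for a cycle-accurate core model, not part of the paper's methodology:

```python
# Toy analogue of the data-independent timing criterion: the same instruction,
# executed on different secret operands, must take the same number of cycles.
# The paper proves this exhaustively via property checking; sampling, as here,
# only illustrates the criterion.
import random

def cycles_of(opcode: str, operand: int) -> int:
    # Hypothetical core model: an early-terminating multiplier leaks timing.
    if opcode == "mul":
        return 3 + operand.bit_length()  # latency depends on the data: a leak
    return 1  # e.g. add/xor finish in one cycle regardless of operand values

def looks_data_oblivious(opcode: str, trials: int = 10_000) -> bool:
    """Sample random operands; any latency mismatch exposes a timing channel."""
    rng = random.Random(0)
    baseline = cycles_of(opcode, rng.getrandbits(32))
    return all(cycles_of(opcode, rng.getrandbits(32)) == baseline
               for _ in range(trials))

print(looks_data_oblivious("xor"))  # True: timing independent of data
print(looks_data_oblivious("mul"))  # False: flags the data-dependent latency
```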

3. Fairness and Privacy in Federated Learning and Their Implications in Healthcare

Authors: Navya Annapareddy, Jade Preston, Judy Fox

Abstract: Currently, many contexts exist where distributed learning is difficult or otherwise constrained by security and communication limitations. One common domain where this is a consideration is healthcare, where data use is often governed by regulations such as HIPAA. On the other hand, larger sample sizes and shared data models are necessary to help models generalize, given the potential for greater variability and the need to balance underrepresented classes. Federated learning is a type of distributed learning that allows a model to be trained on decentralized data. This, in turn, addresses data security, privacy, and vulnerability considerations, as the data itself is never shared across the nodes of a given learning network. Three main challenges in federated learning are that node data is not independent and identically distributed (IID), that clients require high levels of communication overhead between peers, and that clients within a network are heterogeneous with respect to dataset bias and size. As the field has grown, the notion of fairness in federated learning has also been introduced through novel implementations. Fairness approaches differ from the standard form of federated learning and have distinct challenges and considerations in the healthcare domain. This paper outlines the typical lifecycle of fair federated learning in research and provides an updated taxonomy that accounts for the current state of fairness in implementations. Lastly, it offers insight into the implications and challenges of implementing and supporting fairness in federated learning in the healthcare domain.
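
For context, the decentralized training the abstract describes is typically realized as federated averaging (FedAvg), in which clients train locally and share only model parameters, never raw (e.g. HIPAA-governed) records. A minimal, hypothetical sketch, not a production federated-learning stack:

```python
# One FedAvg round over a toy linear model: private (X, y) stay on clients;
# only parameter vectors reach the server for weighted averaging.
import numpy as np

def local_step(w: np.ndarray, X: np.ndarray, y: np.ndarray,
               lr: float = 0.1) -> np.ndarray:
    """One gradient step of linear least squares on a client's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fedavg_round(global_w: np.ndarray,
                 clients: list[tuple[np.ndarray, np.ndarray]]) -> np.ndarray:
    """Each client updates locally; the server averages, weighted by data size."""
    # Raw (X, y) never leave a client; only parameter vectors reach the server.
    updates = [local_step(global_w.copy(), X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())
```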

4. Robustness Over Time: Understanding Adversarial Examples' Effectiveness on Longitudinal Versions of Large Language Models

Authors: Yugeng Liu, Tianshuo Cong, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang

Abstract: Large Language Models (LLMs) have led to significant improvements in many tasks across various domains, such as code interpretation, response generation, and ambiguity handling. When these LLMs are upgraded, however, the updates primarily prioritize enhanced user experience while neglecting security, privacy, and safety implications. Consequently, unintended vulnerabilities or biases can be introduced. Previous studies have predominantly focused on specific versions of the models and disregarded the potential emergence of new attack vectors targeting the updated versions. Through the lens of adversarial examples within the in-context learning framework, this longitudinal study addresses this gap by conducting a comprehensive assessment of the robustness of successive versions of LLMs, vis-à-vis GPT-3.5. We conduct extensive experiments to analyze and understand robustness in two distinct learning categories: zero-shot learning and few-shot learning. Our findings indicate that, in comparison to earlier versions of LLMs, the updated versions do not exhibit the anticipated level of robustness against adversarial attacks. In addition, our study emphasizes the increased effectiveness of synergized adversarial queries in most zero-shot and few-shot learning cases. We hope that our study can lead to a more refined assessment of the robustness of LLMs over time and provide valuable insights into these models for both developers and users.
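
A hypothetical sketch of the longitudinal protocol: the same clean and adversarial in-context queries are replayed against successive model versions and accuracy is compared. `query_model` is a caller-supplied placeholder for any chat-completion client, and the paper's actual adversarial query generation is considerably more involved:

```python
# Replay a fixed set of (prompt, expected label) pairs against two model
# versions and report clean vs. adversarial accuracy. `query_model(version,
# prompt) -> str` is a hypothetical callable the caller provides.
def evaluate(version: str, examples: list[tuple[str, str]], query_model) -> float:
    """Fraction of (prompt, expected label) pairs the model answers correctly."""
    correct = sum(query_model(version, prompt).strip() == label
                  for prompt, label in examples)
    return correct / len(examples)

def robustness_drift(old: str, new: str, clean, adversarial, query_model) -> dict:
    """Compare clean and adversarial accuracy across two versions of a model."""
    return {version: {"clean": evaluate(version, clean, query_model),
                      "adversarial": evaluate(version, adversarial, query_model)}
            for version in (old, new)}
```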

5. SplITS: Split Input-to-State Mapping for Effective Firmware Fuzzing

Authors: Guy Farrelly, Paul Quirk, Salil S. Kanhere, Seyit Camtepe, Damith C. Ranasinghe

Abstract: The ability to test firmware on embedded devices is critical to discovering vulnerabilities prior to their adversarial exploitation. State-of-the-art automated testing methods rehost firmware in emulators and attempt to facilitate inputs from a diversity of methods (interrupt-driven, status polling) and a plethora of devices (such as modems and GPS units). Despite recent progress in tackling peripheral input generation challenges in rehosting, a firmware's expectation of multi-byte magic values supplied from peripheral inputs for string operations still poses a significant roadblock. We solve the impediment posed by multi-byte magic strings in monolithic firmware. We propose feedback mechanisms for input-to-state mapping and for retaining seeds for targeted replacement mutations, together with an efficient method for solving multi-byte comparisons. The feedback allows an efficient search over a combinatorial solution space. We evaluate our prototype implementation, SplITS, on a diverse set of 21 real-world monolithic firmware binaries used in prior work and 3 new binaries from popular open-source projects. SplITS automatically solves 497% more multi-byte magic strings guarding further execution, uncovering new code and bugs compared to the state of the art. In 11 of the 12 real-world firmware binaries with string comparisons, including those extensively analyzed by prior work, SplITS outperformed the state of the art by a statistically significant margin. We observed up to a 161% increase in blocks covered and discovered 6 new bugs that remained guarded by string comparisons. Notably, deep and difficult-to-reproduce bugs guarded by comparisons, identified in prior work, were found consistently. To facilitate future research in the field, we release SplITS, the new firmware data sets, and our bug analysis at https://github.com/SplITS-Fuzzer
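
The abstract does not spell out the mutation mechanics, but input-to-state replacement mutations of this kind (the closest published analogue is Redqueen-style comparison logging) can be sketched as follows. This is a hypothetical simplification of SplITS, which additionally splits string comparisons byte by byte and retains promising seeds:

```python
# Input-to-state replacement mutation: a hooked comparison reports the value
# read from the input (`observed`) and the magic value it was compared
# against (`expected`); every occurrence of the observed bytes in the seed
# is patched to the expected bytes so the comparison succeeds next run.
def replacement_mutations(seed: bytes, observed: bytes, expected: bytes) -> list[bytes]:
    """Yield one mutant per position where `observed` occurs in the seed."""
    mutants, start = [], 0
    while (pos := seed.find(observed, start)) != -1:
        mutants.append(seed[:pos] + expected + seed[pos + len(observed):])
        start = pos + 1
    return mutants

# Example: the firmware compares a UART token against the magic string b"BOOT".
seed = b"xxxx HELO yyyy"
print(replacement_mutations(seed, b"HELO", b"BOOT"))
# [b'xxxx BOOT yyyy']  -- the mutant now satisfies the magic-string check
```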

6. Domain-Adaptive Device Fingerprints for Network Access Authentication Through Multifractal Dimension Representation

Authors: Benjamin Johnson, Bechir Hamdaoui

Abstract: RF data-driven device fingerprinting through the use of deep learning has recently surfaced as a potential solution for automated network access authentication. Traditional approaches are commonly susceptible to the domain adaptation problem, where a model trained on data from one domain performs poorly when tested on data from a different domain. Examples of a domain change include varying the device location or environment and varying the time or day of data collection. In this work, we propose using multifractal analysis and the variance fractal dimension trajectory (VFDT) as a data representation input to a deep neural network to extract device fingerprints that generalize across domains. We analyze the effectiveness of the proposed VFDT representation in detecting device-specific signatures from hardware-impaired IQ signals, and we evaluate its robustness in real-world settings using an experimental testbed of 30 WiFi-enabled Pycom devices across different locations and at different scales. Our results show that the VFDT representation significantly improves the scalability, robustness, and generalizability of deep learning models compared to using raw IQ data.
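
The abstract does not define the VFDT computation, but the standard variance fractal dimension estimates D = 2 - H over a sliding window, where the Hurst exponent H is half the log-log slope of increment variance against lag. A minimal sketch with hypothetical window, hop, and lag parameters:

```python
# Sketch of a variance fractal dimension trajectory (VFDT) as a signal
# representation. Per window, the Hurst exponent H is estimated from the
# log-log slope of increment variances (Var ~ dt^(2H)), giving D = 2 - H.
import numpy as np

def variance_fractal_dim(x: np.ndarray, lags=(1, 2, 4, 8, 16)) -> float:
    """Estimate D = 2 - H from increment variances at several lags."""
    log_var = [np.log(np.var(x[lag:] - x[:-lag])) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), log_var, 1)
    return 2.0 - slope / 2.0  # slope = 2H, so H = slope / 2

def vfdt(signal: np.ndarray, window: int = 256, hop: int = 64) -> np.ndarray:
    """Slide a window over the signal (real, or complex IQ) and track D."""
    starts = range(0, len(signal) - window + 1, hop)
    return np.array([variance_fractal_dim(signal[s:s + window]) for s in starts])
```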