arXiv daily

Cryptography and Security (cs.CR)

Wed, 02 Aug 2023

1. Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks

Authors: Jun Guo, Aishan Liu, Xingyu Zheng, Siyuan Liang, Yisong Xiao, Yichao Wu, Xianglong Liu

Abstract: Despite the broad application of Machine Learning as a Service (MLaaS), deployed models are vulnerable to model stealing attacks. These attacks can replicate the model's functionality through black-box queries, without any prior knowledge of the victim model. Existing stealing defenses add deceptive perturbations to the victim's posterior probabilities to mislead attackers. However, these defenses suffer from high inference-time computational overheads and unfavorable trade-offs between benign accuracy and stealing robustness, which challenges their feasibility for deployed models in practice. To address these problems, this paper proposes Isolation and Induction (InI), a novel and effective training framework for model stealing defenses. Instead of deploying auxiliary defense modules that introduce redundant inference overhead, InI directly trains a defensive model by isolating the adversary's training gradient from the expected gradient, which effectively reduces the inference computational cost. In contrast to adding perturbations over model predictions, which harms benign accuracy, we train models to produce uninformative outputs against stealing queries, inducing the adversary to extract little useful knowledge from victim models with minimal impact on benign performance. Extensive experiments on several visual classification datasets (e.g., MNIST and CIFAR10) demonstrate the superior robustness (up to 48% reduction in stealing accuracy) and speed (up to 25.4x faster) of InI over other state-of-the-art methods. Our code can be found at https://github.com/DIG-Beihang/InI-Model-Stealing-Defense.
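To make the induction idea concrete, below is a minimal PyTorch sketch of the general kind of training objective the abstract describes: standard cross-entropy on benign data plus a term pushing outputs toward the uniform distribution on stealing-like queries. The loss weight `lam` and the `query_x` batch are hypothetical placeholders; InI's actual gradient-isolation objective is specified in the paper and differs from this sketch.

```python
# Minimal sketch (PyTorch), not InI's actual loss: preserve benign accuracy
# while making outputs uninformative on (simulated) stealing queries.
import torch
import torch.nn.functional as F

def defensive_loss(model, benign_x, benign_y, query_x, lam=1.0):
    # Standard cross-entropy on benign data keeps clean accuracy.
    ce = F.cross_entropy(model(benign_x), benign_y)
    # KL to the uniform distribution makes outputs uninformative on
    # stealing-like queries, so a cloned model learns little from them.
    logits = model(query_x)
    log_probs = F.log_softmax(logits, dim=1)
    uniform = torch.full_like(log_probs, 1.0 / logits.size(1))
    kl = F.kl_div(log_probs, uniform, reduction="batchmean")
    return ce + lam * kl
```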

2. Integrating Homomorphic Encryption and Trusted Execution Technology for Autonomous and Confidential Model Refining in Cloud

Authors: Pinglan Liu, Wensheng Zhang

Abstract: With the popularity of cloud computing and machine learning, it has become a trend to outsource machine learning processes (including model training and model-based inference) to the cloud. Beyond utilizing the extensive and scalable resources offered by the cloud service provider, it is also attractive to users if the cloud servers can manage the machine learning processes autonomously on their behalf. Such a feature is especially salient when the machine learning is expected to be a long-term continuous process and the users are not always available to participate. Due to security and privacy concerns, it is also desirable that such autonomous learning preserves the confidentiality of the users' data and models involved. Hence, in this paper, we aim to design a scheme that enables autonomous and confidential model refining in the cloud. Homomorphic encryption and trusted execution environment technology can both protect confidentiality for autonomous computation, but each has its own limitations, and the two are complementary. We therefore propose to integrate these two techniques in the design of the model refining scheme. Through implementation and experiments, we evaluate the feasibility of our proposed scheme. The results indicate that, with our proposed scheme, the cloud server can autonomously refine an encrypted model with newly provided encrypted training data to continuously improve its accuracy. Though the efficiency is still significantly lower than that of a baseline scheme that refines a plaintext model with plaintext data, we expect that it can be improved by fully utilizing the higher level of parallelism and the computational power of the GPU at the cloud server.
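To illustrate the homomorphic-encryption half of such a design, here is a toy additively homomorphic (Paillier) implementation showing how a server can combine encrypted values without ever seeing plaintexts. The primes are insecurely small for readability, and the paper's scheme may use a different, production-grade cryptosystem; this sketches only the underlying principle.

```python
# Toy Paillier encryption: ciphertext multiplication decrypts to plaintext
# addition, which is why a cloud server can update encrypted quantities.
# Demo-sized primes only; real deployments use 2048-bit moduli and libraries.
import math, random

p, q = 293, 433                      # insecurely small demo primes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def enc(m):
    r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Homomorphic addition: Enc(a) * Enc(b) mod n^2 decrypts to a + b.
a, b = 42, 58
assert dec(enc(a) * enc(b) % n2) == a + b
```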

3. Evaluate and Guard the Wisdom of Crowds: Zero Knowledge Proofs for Crowdsourcing Truth Inference

Authors: Xuanming Liu, Xinpeng Yang, Xun Zhang, Xiaohu Yang

Abstract: Due to the correctness and security risks of outsourced cloud computing, we consider a different paradigm, crowdsourcing: distribute tasks, receive answers, and aggregate the results from multiple entities. Through this approach, we can aggregate the wisdom of the crowd to complete tasks, ensuring the accuracy of task completion while reducing the risks posed by the malicious acts of a single entity. However, the ensuing question is: how can we ensure that the aggregator has done its work honestly and that each contributor's work has been evaluated fairly? In this paper, we propose a new scheme called $\mathsf{zkTI}$. This scheme ensures that the aggregator has honestly completed the aggregation and that each data source is fairly evaluated. We combine a cryptographic primitive called \textit{zero-knowledge proof} with a class of \textit{truth inference algorithms} widely studied in AI/ML scenarios. Under this scheme, various complex outsourced tasks can be solved with efficiency and accuracy. To build our scheme, we propose a novel method for proving the precise computation of floating-point numbers, which is nearly optimal and compatible with existing argument systems; this may be of independent interest. Our work can thus prove the process of aggregation and inference without loss of precision. We fully implement and evaluate our ideas. Compared with recent works, our scheme achieves a $2$-$4\times$ efficiency improvement and is robust enough to be widely applied.
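For readers unfamiliar with truth inference, the sketch below shows a generic iterative weighted-voting algorithm of the class the paper proves in zero knowledge: alternate between estimating task truths by weighted vote and reweighting workers by their agreement with those truths. The smoothing constants and loop structure are textbook choices, not zkTI's exact algorithm or its proof system.

```python
# Generic truth inference by iterative weighted voting (illustrative only).
def truth_inference(answers, n_iters=10):
    """answers: dict mapping worker -> {task: label}."""
    weights = {w: 1.0 for w in answers}
    truths = {}
    for _ in range(n_iters):
        # E-step: weighted vote per task.
        votes = {}
        for w, labels in answers.items():
            for task, label in labels.items():
                votes.setdefault(task, {}).setdefault(label, 0.0)
                votes[task][label] += weights[w]
        truths = {t: max(v, key=v.get) for t, v in votes.items()}
        # M-step: reweight workers by agreement with current truths.
        for w, labels in answers.items():
            agree = sum(truths[t] == l for t, l in labels.items())
            weights[w] = (agree + 1) / (len(labels) + 2)  # smoothed accuracy
    return truths, weights

answers = {"w1": {"t1": "A", "t2": "B"},
           "w2": {"t1": "A", "t2": "A"},
           "w3": {"t1": "B", "t2": "B"}}
print(truth_inference(answers))
```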

4. Inaudible Adversarial Perturbation: Manipulating the Recognition of User Speech in Real Time

Authors: Xinfeng Li, Chen Yan, Xuancun Lu, Zihan Zeng, Xiaoyu Ji, Wenyuan Xu

Abstract: Automatic speech recognition (ASR) systems have been shown to be vulnerable to adversarial examples (AEs). Recent successes all assume that users will not notice or disrupt the attack process, despite the existence of music/noise-like sounds and spontaneous responses from voice assistants. Nonetheless, in practical user-present scenarios, user awareness may nullify existing attack attempts that launch unexpected sounds or ASR usage. In this paper, we seek to bridge this gap in existing research and extend the attack to user-present scenarios. We propose VRIFLE, an inaudible adversarial perturbation (IAP) attack via ultrasound delivery that can manipulate ASRs as a user speaks. The inherent differences between audible sounds and ultrasounds make IAP delivery face unprecedented challenges such as distortion, noise, and instability. In this regard, we design a novel ultrasonic transformation model that enhances the crafted perturbation so that it is physically effective and even survives long-distance delivery. We further strengthen VRIFLE's robustness by adopting a series of augmentations over user and real-world variations during the generation process. In this way, VRIFLE achieves effective real-time manipulation of the ASR output from different distances and under arbitrary user speech, with an alter-and-mute strategy that suppresses the impact of user disruption. Our extensive experiments in both the digital and physical worlds verify VRIFLE's effectiveness under various configurations, its robustness against six kinds of defenses, and its universality in a targeted manner. We also show that VRIFLE can be delivered with a portable attack device and even everyday loudspeakers.
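The generation process described here follows the common expectation-over-transformations pattern for physically robust adversarial examples; a minimal sketch is below. The `model`, the `channel_sim` transform, and the loss are stand-ins invented for illustration, not VRIFLE's actual ultrasonic transformation model or augmentation set.

```python
# Sketch: optimize a perturbation under random channel/speech variations so
# it remains effective after physical delivery (generic EOT-style loop).
import torch

def craft_perturbation(model, user_audio_batch, target_label, channel_sim,
                       steps=500, lr=1e-3, eps=0.05):
    delta = torch.zeros_like(user_audio_batch[0], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Average the attack loss over user-speech variations and simulated
        # channel distortions to gain physical-world robustness.
        loss = 0.0
        for audio in user_audio_batch:
            perturbed = audio + channel_sim(delta)   # simulated air channel
            logits = model(perturbed.unsqueeze(0))
            loss = loss + torch.nn.functional.cross_entropy(
                logits, torch.tensor([target_label]))
        opt.zero_grad(); loss.backward(); opt.step()
        delta.data.clamp_(-eps, eps)                 # keep perturbation small
    return delta.detach()
```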

5. An Adaptable Approach for Successful SIEM Adoption in Companies

Authors: Maximilian Rosenberg, Bettina Schneider, Christopher Scherb, Petra Maria Asprion

Abstract: In corporations around the world, the topic of cybersecurity and information security is becoming increasingly important as the number of cyberattacks against them continues to grow. Nowadays, it is no longer just a matter of protecting against cyberattacks, but rather of detecting such attacks at an early stage and responding accordingly. There is currently no generic methodological approach for the implementation of Security Information and Event Management (SIEM) systems that takes academic aspects into account and can be applied independently of the product or the developers of the systems. Applying Hevner's design science research approach, the goal of this paper is to develop a holistic procedure model for implementing SIEM systems in corporations. In the validation phase of the study, the procedure model was verified to be applicable. As future research, the procedure model should be applied in various implementation projects in different enterprises to further analyze its applicability and completeness.

6. A Practical Deep Learning-Based Acoustic Side Channel Attack on Keyboards

Authors: Joshua Harrison, Ehsan Toreini, Maryam Mehrnezhad

Abstract: With recent developments in deep learning, the ubiquity of microphones, and the rise in online services via personal devices, acoustic side channel attacks present a greater threat to keyboards than ever. This paper presents a practical implementation of a state-of-the-art deep learning model to classify laptop keystrokes using a smartphone-integrated microphone. When trained on keystrokes recorded by a nearby phone, the classifier achieved an accuracy of 95%, the highest accuracy seen without the use of a language model. When trained on keystrokes recorded using the video-conferencing software Zoom, an accuracy of 93% was achieved, a new best for the medium. Our results prove the practicality of these side channel attacks via off-the-shelf equipment and algorithms. We discuss a series of mitigation methods to protect users against this class of attacks.
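As an illustration of the pipeline shape such an attack takes, the sketch below turns a keystroke audio clip into a mel-spectrogram and classifies it with a small CNN. The layer sizes, sample rate, and 36-class output are invented for this example; the paper's state-of-the-art model is different.

```python
# Illustrative keystroke classifier: audio -> mel-spectrogram -> small CNN.
# All hyperparameters here are placeholders, not the paper's architecture.
import torch
import torch.nn as nn
import torchaudio

mel = torchaudio.transforms.MelSpectrogram(sample_rate=44100, n_mels=64)

classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 36),  # e.g., 26 letters + 10 digits
)

waveform = torch.randn(1, 14700)          # one ~0.33 s keystroke clip (fake)
logits = classifier(mel(waveform).unsqueeze(0))
```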

7. Stake Your Claim: Zero-Trust Validator Deployment Leveraging NFTs and Smart Contracts in Proof-of-Stake Networks

Authors: Scott Seidenberger, Alec Sokol, Anindya Maiti

Abstract: We present a novel method for a multi-party, zero-trust validator infrastructure deployment arrangement via smart contracts to secure Proof-of-Stake (PoS) blockchains. The proposed arrangement architecture employs a combination of non-fungible tokens (NFTs), a treasury contract, and validator smart contract wallets to facilitate trustless participation in staking mechanisms. The NFT minting process allows depositors to exchange their capital for an NFT representing their stake in a validator, while the treasury contract manages the registry of NFT holders and handles rewards distribution. Validator smart contract wallets are employed to create a trustless connection between the validator operator and the treasury, enabling autonomous staking and unstaking processes based on predefined conditions. In addition, the proposed system incorporates protection mechanisms for depositors, such as triggered exits in case of non-payment of rewards and a penalty payout from the validator operator. The arrangement benefits from the extensibility and interoperability of web3 technologies, with potential applications in the broader digital ecosystem. This zero-trust staking mechanism aims to serve users who desire increased privacy, trust, and flexibility in managing their digital wealth, while promoting greater decentralization and transparency in the PoS ecosystem.
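To make the treasury's bookkeeping concrete, here is a plain-Python model of the registry and pro-rata reward logic the abstract describes. The real arrangement is a set of on-chain smart contracts and validator wallets; this sketch only mirrors the accounting, with invented class and method names.

```python
# Off-chain model of the described treasury bookkeeping (illustrative only).
class Treasury:
    def __init__(self):
        self.stakes = {}          # nft_id -> deposited amount
        self.balances = {}        # nft_id -> claimable rewards

    def mint(self, nft_id, deposit):
        # Depositor exchanges capital for an NFT representing the stake.
        self.stakes[nft_id] = deposit
        self.balances[nft_id] = 0.0

    def distribute(self, reward):
        # Validator rewards are split pro-rata across registered stakes.
        total = sum(self.stakes.values())
        for nft_id, stake in self.stakes.items():
            self.balances[nft_id] += reward * stake / total

t = Treasury()
t.mint("nft-1", 16.0); t.mint("nft-2", 16.0)
t.distribute(0.5)
assert t.balances["nft-1"] == 0.25
```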

8. Mercury: An Automated Remote Side-channel Attack to Nvidia Deep Learning Accelerator

Authors: Xiaobei Yan, Xiaoxuan Lou, Guowen Xu, Han Qiu, Shangwei Guo, Chip Hong Chang, Tianwei Zhang

Abstract: DNN accelerators have been widely deployed in many scenarios to speed up the inference process and reduce energy consumption. One big concern about the usage of these accelerators is the confidentiality of the deployed models: model inference execution on the accelerators could leak side-channel information, which enables an adversary to precisely recover the model details. Such model extraction attacks can not only compromise the intellectual property of DNN models, but also facilitate some adversarial attacks. Although previous works have demonstrated a number of side-channel techniques to extract models from DNN accelerators, they are not practical for two reasons. (1) They only target simplified accelerator implementations, which have limited practicality in the real world. (2) They require heavy human analysis and domain knowledge. To overcome these limitations, this paper presents Mercury, the first automated remote side-channel attack against an off-the-shelf Nvidia DNN accelerator. The key insight of Mercury is to model the side-channel extraction process as a sequence-to-sequence problem. The adversary can leverage a time-to-digital converter (TDC) to remotely collect the power trace of the target model's inference. A learning model then automatically recovers the architecture details of the victim model from the power trace, without any prior knowledge. The adversary can further use an attention mechanism to localize the leakage points that contribute most to the attack. Evaluation results indicate that Mercury can keep the error rate of model extraction below 1%.
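A minimal sketch of the sequence-to-sequence formulation is shown below: a one-dimensional power trace in, a sequence of layer-type tokens out. The LSTM encoder-decoder and all dimensions are generic stand-ins (attention, which the paper uses, is omitted here), not Mercury's actual architecture.

```python
# Generic seq2seq shape for trace-to-architecture recovery (illustrative).
import torch
import torch.nn as nn

class TraceToArch(nn.Module):
    def __init__(self, n_layer_types=8, hidden=128):
        super().__init__()
        self.encoder = nn.LSTM(1, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_layer_types)

    def forward(self, trace, out_len):
        _, (h, c) = self.encoder(trace)          # summarize the power trace
        # Feed the encoder summary at every decoding step (simplest variant).
        dec_in = h[-1].unsqueeze(1).repeat(1, out_len, 1)
        out, _ = self.decoder(dec_in, (h, c))
        return self.head(out)                    # per-step layer-type logits

model = TraceToArch()
logits = model(torch.randn(1, 2048, 1), out_len=12)   # fake TDC trace
```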

9. LSF-IDM: Lightweight Deep Learning Models for Automotive Intrusion Detection Model Based on Semantic Fusion

Authors: Pengzhou Cheng, Lei Hua, Haobin Jiang, Mohammad Samie, Gongshen Liu

Abstract: Autonomous vehicles (AVs) are more vulnerable to network attacks due to the high connectivity and diverse communication modes between vehicles and external networks. Deep learning-based intrusion detection, an effective method for detecting network attacks, can provide functional safety as well as real-time communication guarantees for vehicles, and is therefore widely used in AVs. Existing methods work well against simple-mode cyber-attacks, but suffer from higher false-alarm rates in the resource-limited in-vehicle environment when an attack is concealed within contextual features. In this paper, we present a lightweight intrusion detection model based on semantic fusion, named LSF-IDM. Our motivation is the observation that, when malicious packets are injected into in-vehicle networks (IVNs), the packet log exhibits strictly ordered contextual features due to the periodicity and broadcast nature of the CAN bus. The model therefore first captures this context as the semantic feature of messages using the BERT language framework. A lightweight model (e.g., a BiLSTM) then learns the fused feature from an input packet's classification and BERT's output distribution via knowledge distillation. Experimental results demonstrate the effectiveness of our method in defending against several representative IVN attacks. We also compare the proposed method against standalone lightweight models and BERT to gain a deeper understanding of how the model balances detection performance and model complexity.
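The distillation step can be sketched as a standard soft-label/hard-label objective: the lightweight student is trained on ground-truth labels and on the BERT teacher's tempered output distribution. The temperature and weighting below are illustrative defaults, not LSF-IDM's reported settings.

```python
# Standard knowledge-distillation loss (Hinton-style), illustrative defaults.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Hard-label term keeps the student accurate on ground truth.
    hard = F.cross_entropy(student_logits, labels)
    # Soft-label term transfers the teacher's output distribution.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    return alpha * hard + (1 - alpha) * soft
```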

10. A Large-Scale Study of Phishing PDF Documents

Authors: Giada Stivala, Sahar Abdelnabi, Andrea Mengascini, Mariano Graziano, Mario Fritz, Giancarlo Pellegrino

Abstract: Phishing PDFs are malicious PDF documents that do not embed malware but trick victims into visiting malicious web pages leading to password theft or drive-by downloads. While recent reports indicate a surge of phishing PDFs, prior works have largely neglected this new threat, positioning phishing PDFs as mere accessories distributed via email phishing campaigns. This paper challenges that belief and presents the first systematic and comprehensive study centered on phishing PDFs. Starting from a real-world dataset, we first identify 44 phishing PDF campaigns via clustering and characterize them by their volumetric, temporal, and visual features. Among these, we identify three large campaigns covering 89% of the dataset, exhibiting significantly different volumetric and temporal properties compared to classical email phishing and relying on web UI elements as visual baits. Finally, we look at the distribution vectors and show that phishing PDFs are distributed not only via attachments but also via SEO attacks, placing phishing PDFs outside the email distribution ecosystem. This paper also assesses the usefulness of the VirusTotal scoring system, showing that phishing PDFs receive considerably low scores, creating a blind spot for organizations. While URL blocklists can help prevent victims from visiting the attack web pages, PDF documents seem not to be subject to any form of content-based filtering or detection.
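The campaign-identification step can be pictured as density-based clustering over per-PDF feature vectors, as in the sketch below. The feature set, `eps`, and `min_samples` are invented placeholders; the paper's actual features and clustering procedure may differ.

```python
# Illustrative campaign clustering over per-PDF feature vectors.
import numpy as np
from sklearn.cluster import DBSCAN

# One row per PDF: e.g., page count, embedded-link count, and coordinates
# derived from a perceptual hash of the first-page rendering (all invented).
features = np.random.rand(500, 8)          # placeholder feature matrix

campaigns = DBSCAN(eps=0.5, min_samples=5).fit_predict(features)
print("campaigns found:", len(set(campaigns) - {-1}))   # -1 = noise points
```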

11. BRNES: Enabling Security and Privacy-aware Experience Sharing in Multiagent Robotic and Autonomous Systems

Authors: Md Tamjid Hossain, Hung Manh La, Shahriar Badsha, Anton Netchaev

Abstract: Although experience sharing (ES) accelerates multiagent reinforcement learning (MARL) in an advisor-advisee framework, attempts to apply ES to decentralized multiagent systems have so far relied on trusted environments and overlooked the possibility of adversarial manipulation and inference. Nevertheless, in a real-world setting, Byzantine attackers disguised as advisors may provide false advice to the advisee and catastrophically degrade the overall learning performance. Likewise, an inference attacker disguised as an advisee may issue several queries to infer the advisors' private information, making the entire ES process questionable in terms of privacy leakage. To tackle these issues, we propose a novel MARL framework (BRNES) that heuristically selects a dynamic neighbor zone for each advisee at each learning step and adopts a weighted experience aggregation technique to reduce the impact of Byzantine attacks. Furthermore, to keep the agents' private information safe from adversarial inference attacks, we leverage local differential privacy (LDP)-induced noise during the ES process. Our experiments show that our framework outperforms the state-of-the-art in terms of the steps-to-goal, obtained-reward, and time-to-goal metrics. In particular, our evaluation shows that the proposed framework is 8.32x faster than current non-private frameworks and 1.41x faster than private frameworks in an adversarial setting.
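The two defensive ingredients named here, weighted experience aggregation and LDP noise, can be sketched as follows. The Laplace mechanism, epsilon, and weighting scheme below are generic illustrations, not BRNES's exact neighbor-zone selection or aggregation rule.

```python
# Generic sketch of LDP-noised advice plus weighted aggregation.
import numpy as np

def share_advice(q_values, epsilon=1.0, sensitivity=1.0):
    # Advisor adds Laplace noise locally (LDP) before sharing experience.
    noise = np.random.laplace(0.0, sensitivity / epsilon, size=q_values.shape)
    return q_values + noise

def aggregate(advice_list, weights):
    # Advisee down-weights outlier (possibly Byzantine) advisors.
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return sum(wi * a for wi, a in zip(w, advice_list))

advice = [share_advice(np.ones(4)) for _ in range(3)]
blended = aggregate(advice, weights=[0.5, 0.3, 0.2])
```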

12. Delegated Time-Lock Puzzle

Authors: Aydin Abadi, Dan Ristea, Steven J. Murdoch

Abstract: Time-Lock Puzzles (TLPs) are cryptographic protocols that enable a client to lock a message in such a way that a server can only unlock it after a specific time period. However, existing TLPs have certain limitations: (i) they assume that both the client and server always possess sufficient computational resources and (ii) they solely focus on the lower time bound for finding a solution, disregarding the upper bound that guarantees a regular server can find a solution within a certain time frame. Additionally, existing TLPs designed to handle multiple puzzles either (a) entail high verification costs or (b) lack generality, requiring identical time intervals between consecutive solutions. To address these limitations, this paper introduces, for the first time, the concept of a "Delegated Time-Lock Puzzle" and presents a protocol called "Efficient Delegated Time-Lock Puzzle" (ED-TLP) that realises this concept. ED-TLP allows the client and server to delegate their resource-demanding tasks to third-party helpers. It facilitates real-time verification of solution correctness and efficiently handles multiple puzzles with varying time intervals. ED-TLP ensures the delivery of solutions within predefined time limits by incorporating both an upper bound and a fair payment algorithm. We have implemented ED-TLP and conducted a comprehensive analysis of its overheads, demonstrating the efficiency of the construction.
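For context, the classic Rivest-Shamir-Wagner puzzle underlying TLP protocols can be sketched in a few lines: the client uses the trapdoor $\varphi(n)$ to prepare the puzzle cheaply, while a server without the trapdoor must perform T sequential squarings. The parameters are insecurely small for readability, and this shows only the base primitive, not ED-TLP's delegation, verification, or payment machinery.

```python
# Classic RSW time-lock puzzle (demo-sized, insecure parameters).
p, q = 1000003, 1000033            # demo primes
n, phi = p * q, (p - 1) * (q - 1)
T = 10_000                         # number of sequential squarings
message = 123456789

# Client: compute 2^(2^T) mod n fast via the trapdoor, then mask the message.
e = pow(2, T, phi)
mask = pow(2, e, n)
puzzle = (message + mask) % n

# Server: no trapdoor, so it must square sequentially T times.
x = 2
for _ in range(T):
    x = x * x % n
assert (puzzle - x) % n == message
```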