arXiv daily

Information Theory (cs.IT)

Wed, 19 Apr 2023

1.Contrastive Learning based Semantic Communication for Wireless Image Transmission

Authors:Shunpu Tang, Qianqian Yang, Lisheng Fan, Xianfu Lei, Yansha Deng, Arumugam Nallanathan

Abstract: Recently, semantic communication has been widely applied in wireless image transmission systems, as it prioritizes the preservation of meaningful semantic information in images over the accuracy of the transmitted symbols, leading to improved communication efficiency. However, existing semantic communication approaches still face limitations in achieving considerable inference performance in downstream AI tasks such as image recognition, or in balancing inference performance against the quality of the reconstructed image at the receiver. This paper therefore proposes a contrastive learning (CL)-based semantic communication approach to overcome these limitations. Specifically, we regard image corruption during transmission as a form of data augmentation in CL, and leverage CL to reduce the semantic distance between the original image and the corrupted reconstruction while maintaining the semantic distance among irrelevant images for better discrimination in downstream tasks. Moreover, we design a two-stage training procedure and the corresponding loss functions for jointly optimizing the semantic encoder and decoder, achieving a good trade-off between downstream image-recognition performance and reconstruction quality. Finally, simulations demonstrate the superiority of the proposed method over competing approaches; in particular, it achieves up to a 56% accuracy gain on the CIFAR10 dataset when the bandwidth compression ratio is 1/48.
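As background (this is not code from the paper), the "pull the corrupted reconstruction close, push irrelevant images apart" objective described above is typically realized with an InfoNCE-style contrastive loss. A minimal NumPy sketch, where each anchor embedding's positive is the corrupted reconstruction of the same image and the other reconstructions in the batch act as negatives:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss: anchors[i] (original image embedding)
    is matched to positives[i] (embedding of its corrupted reconstruction);
    positives[j], j != i, serve as negatives."""
    # L2-normalize embeddings so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives sit on the diagonal: anchor i matches reconstruction i
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss shrinks the semantic distance between an image and its channel-corrupted reconstruction (the "augmented view") while keeping unrelated images apart, which is what makes the learned features discriminative for the downstream recognition task.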

2.Randomly punctured Reed--Solomon codes achieve list-decoding capacity over linear-sized fields

Authors:Omar Alrabiah, Venkatesan Guruswami, Ray Li

Abstract: Reed--Solomon codes are a classic family of error-correcting codes consisting of evaluations of low-degree polynomials over a finite field on some sequence of distinct field elements. They are widely known for their optimal unique-decoding capabilities, but their list-decoding capabilities are not fully understood. Given the prevalence of Reed--Solomon codes, a fundamental question in coding theory is determining whether Reed--Solomon codes can optimally achieve list-decoding capacity. A recent breakthrough by Brakensiek, Gopi, and Makam established that Reed--Solomon codes are combinatorially list-decodable all the way to capacity. However, their results hold for randomly punctured Reed--Solomon codes over an exponentially large field size $2^{O(n)}$, where $n$ is the block length of the code. A natural question is whether Reed--Solomon codes can still achieve capacity over smaller fields. Recently, Guo and Zhang showed that Reed--Solomon codes are list-decodable to capacity with field size $O(n^2)$. We show that Reed--Solomon codes are list-decodable to capacity with linear field size $O(n)$, which is optimal up to the constant factor. We also give evidence that the ratio between the alphabet size $q$ and code length $n$ cannot be bounded by an absolute constant. Our proof is based on the proof of Guo and Zhang, and additionally exploits symmetries of reduced intersection matrices. With our proof, which maintains a hypergraph perspective of the list-decoding problem, we include an alternate presentation of ideas of Brakensiek, Gopi, and Makam that more directly connects the list-decoding problem to the GM-MDS theorem via a hypergraph orientation theorem.
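For readers outside coding theory, a brief reminder (standard background, not a claim specific to this paper) of what "achieving list-decoding capacity" means here:

```latex
% Standard background. Over an alphabet of size q, the list-decoding
% capacity for codes of rate R is the error fraction
%     p^*(R, q) = H_q^{-1}(1 - R),
% where H_q is the q-ary entropy function; as q grows, p^*(R, q) -> 1 - R.
% "Achieving capacity" thus means: for every eps > 0, a rate-R randomly
% punctured Reed--Solomon code is list-decodable from a
%     p = 1 - R - eps
% fraction of errors with list size bounded in terms of eps alone.
% The field sizes quoted in the abstract track how large q must be:
%     2^{O(n)} (Brakensiek--Gopi--Makam)  ->  O(n^2) (Guo--Zhang)
%                                         ->  O(n)   (this work).
```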

3.Resource Allocation in the RIS Assisted SCMA Cellular Network Coexisting with D2D Communications

Authors:Yukai Liu, Wen Chen, Kunlun Wang

Abstract: The cellular network coexisting with device-to-device (D2D) communications has been studied extensively. Reconfigurable intelligent surface (RIS) and non-orthogonal multiple access (NOMA) are promising technologies for the evolution of 5G, 6G and beyond. Moreover, sparse code multiple access (SCMA) is considered a suitable code-domain NOMA scheme for next-generation wireless networks. In this paper, we consider the RIS-aided uplink SCMA cellular network coexisting with D2D users. We formulate an optimization problem that aims to maximize the cellular sum-rate by jointly designing the D2D users' resource block (RB) association, the transmit power of both cellular and D2D users, and the phase shifts at the RIS. Power limitations and users' communication requirements are taken into account. The problem is non-convex and challenging to solve directly. To handle it, we propose an efficient iterative algorithm based on the block coordinate descent (BCD) method, decoupling the original problem into three subproblems that are solved separately. Simulation results demonstrate that the proposed scheme significantly improves the sum-rate performance over various benchmark schemes.
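To illustrate the algorithmic skeleton only (the paper's actual subproblems, RB association, power control, and RIS phase shifts, are replaced here by a toy objective), a block coordinate descent loop alternately optimizes each block of variables while holding the others fixed:

```python
def bcd_maximize(f, blocks, update_block, tol=1e-6, max_iter=100):
    """Generic BCD sketch: cycle through variable blocks, solve each
    subproblem with the other blocks fixed, stop when the objective
    (which is monotonically non-decreasing) stops improving."""
    prev = f(blocks)
    for _ in range(max_iter):
        for name in blocks:
            blocks[name] = update_block(name, blocks)  # one subproblem
        cur = f(blocks)
        if cur - prev < tol:
            break
        prev = cur
    return blocks, cur

# Toy usage with a separable objective, maximized at x = 1, y = 2
# (hypothetical stand-in for the paper's sum-rate objective):
f = lambda b: -(b["x"] - 1.0) ** 2 - (b["y"] - 2.0) ** 2
optimum = {"x": 1.0, "y": 2.0}
def update(name, b):
    return optimum[name]  # closed-form maximizer of this toy subproblem
```

The key property BCD exploits is that each per-block subproblem is tractable even though the joint problem is non-convex, and the objective never decreases across sweeps.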

4.Entropy Estimation via Uniformization

Authors:Ziqiao Ao, Jinglai Li

Abstract: Entropy estimation is of practical importance in information theory and statistical science. Many existing entropy estimators suffer from estimation bias that grows rapidly with dimensionality, rendering them unsuitable for high-dimensional problems. In this work we propose a transform-based method for high-dimensional entropy estimation, which consists of two main ingredients. First, by modifying the k-NN based entropy estimator, we propose a new estimator that enjoys small estimation bias for samples close to a uniform distribution. Second, we design a normalizing-flow based mapping that pushes samples toward a uniform distribution, and we derive the relation between the entropy of the original samples and that of the transformed ones. As a result, the entropy of a given set of samples is estimated by first transforming them toward a uniform distribution and then applying the proposed estimator to the transformed samples. The performance of the proposed method is compared against several existing entropy estimators, on both mathematical examples and real-world applications.
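For context (this is the classical baseline the paper modifies, not the paper's new estimator), the standard k-NN differential entropy estimator of Kozachenko and Leonenko can be sketched in a few lines:

```python
import numpy as np
from math import lgamma, log, pi

def digamma_int(n):
    """Digamma at a positive integer: psi(n) = -gamma + sum_{j<n} 1/j."""
    euler_gamma = 0.5772156649015329
    return -euler_gamma + sum(1.0 / j for j in range(1, n))

def knn_entropy(samples, k=3):
    """Kozachenko--Leonenko k-NN differential entropy estimator (in nats):
    H_hat = psi(N) - psi(k) + log V_d + (d/N) * sum_i log eps_i,
    where eps_i is the distance from sample i to its k-th nearest
    neighbour and V_d is the volume of the unit d-ball."""
    x = np.asarray(samples, dtype=float)
    n, d = x.shape
    # pairwise Euclidean distances; exclude self-distance on the diagonal
    dists = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    eps = np.sort(dists, axis=1)[:, k - 1]   # distance to k-th neighbour
    log_ball = (d / 2) * log(pi) - lgamma(d / 2 + 1)
    return (digamma_int(n) - digamma_int(k) + log_ball
            + d * np.mean(np.log(eps)))
```

The bias of this estimator grows with dimension, which is exactly the weakness motivating the paper's two-stage approach: first map samples toward uniformity with a normalizing flow, then apply a uniformity-friendly variant of the estimator.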

5.Weakly Secure Summation with Colluding Users

Authors:Zhou Li, Yizhou Zhao, Hua Sun

Abstract: In secure summation, $K$ users, each holding an input, wish to compute the sum of the inputs at a server without revealing any information about {\em all the inputs}, even if the server may collude with {\em an arbitrary subset of users}. In this work, we relax the security and colluding constraints: the set of inputs whose information is prohibited from leaking is drawn from a predetermined collection of sets (e.g., any set of up to $S$ inputs), and the set of colluding users is drawn from another predetermined collection of sets (e.g., any set of up to $T$ users). For an arbitrary collection of security input sets and colluding user sets, we characterize the optimal randomness assumption, i.e., the minimum number of key bits that must be held by the users, per input bit, for weakly secure summation to be feasible, which generally involves solving a linear program.
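As a point of reference (this is the classical fully secure baseline, not the paper's weakly secure scheme), secure summation is usually achieved by giving the users additive one-time-pad keys that sum to zero modulo $q$; the server then sees only masked messages whose sum equals the sum of the inputs. The paper asks how much of this key material can be saved when only weaker security, protecting certain input subsets against certain colluding user sets, is required. A toy sketch:

```python
import numpy as np

def secure_sum(inputs, q=2**16, seed=0):
    """Fully secure summation over Z_q with zero-sum additive keys.
    Each user sends (input + key) mod q; since the keys sum to 0 mod q,
    the server recovers the sum but each individual message is a
    uniformly random element of Z_q."""
    rng = np.random.default_rng(seed)
    k = len(inputs)
    keys = rng.integers(0, q, size=k)
    keys[-1] = (-keys[:-1].sum()) % q        # force keys to sum to 0 mod q
    messages = [(x + z) % q for x, z in zip(inputs, keys)]
    return int(sum(messages) % q)            # server's computation
```

In this baseline every user needs one key symbol per input symbol; the paper's linear-programming characterization quantifies how far below that rate one can go under relaxed security and collusion collections.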

6.Optimal Codes Detecting Deletions in Concatenated Binary Strings Applied to Trace Reconstruction

Authors:Serge Kas Hanna

Abstract: Consider two or more strings $\mathbf{x}^1,\mathbf{x}^2,\ldots,$ that are concatenated to form $\mathbf{x}=\langle \mathbf{x}^1,\mathbf{x}^2,\ldots \rangle$. Suppose that up to $\delta$ deletions occur in each of the concatenated strings. Since deletions alter the lengths of the strings, a fundamental question to ask is: how much redundancy do we need to introduce in $\mathbf{x}$ in order to recover the boundaries of $\mathbf{x}^1,\mathbf{x}^2,\ldots$? This boundary problem is equivalent to the problem of designing codes that can detect the exact number of deletions in each concatenated string. In this work, we answer the question above by first deriving converse results that give lower bounds on the redundancy of deletion-detecting codes. Then, we present a marker-based code construction whose redundancy is asymptotically optimal in $\delta$ among all families of deletion-detecting codes, and exactly optimal among all block-by-block decodable codes. To exemplify the usefulness of such deletion-detecting codes, we apply our code to trace reconstruction and design an efficient coded reconstruction scheme that requires a constant number of traces.
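The equivalence stated in the abstract, recovering boundaries is the same as detecting the exact number of deletions per string, has a simple constructive direction: once a deletion-detecting code reports the count $d_i$ for each concatenated string of original length $n_i$, the boundaries in the received sequence are just cumulative sums of $n_i - d_i$. An illustrative sketch (not the paper's code construction):

```python
def recover_boundaries(original_lengths, deletion_counts):
    """Given the original length n_i of each concatenated string and the
    exact number of deletions d_i detected in it, return the end position
    of each string in the received (shortened) sequence."""
    bounds, pos = [], 0
    for n, d in zip(original_lengths, deletion_counts):
        pos += n - d          # string i occupies n_i - d_i received symbols
        bounds.append(pos)
    return bounds
```

The hard part, which the paper's marker-based construction solves with asymptotically optimal redundancy in $\delta$, is producing those exact deletion counts in the first place.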