Information Theory (cs.IT)
Mon, 11 Sep 2023
1. Beamforming in Wireless Coded-Caching Systems
Authors: Sneha Madhusudan, Charitha Madapatha, Behrooz Makki, Hao Guo, Tommy Svensson
Abstract: Increased capacity in the access network poses challenges for the transport network due to the aggregated traffic. However, there are spatial and temporal correlations in the user data demands that could potentially be utilized. To that end, we investigate a wireless transport network architecture that integrates beamforming and coded-caching strategies. Specifically, our proposed design entails a server with multiple antennas that broadcasts content to cache nodes responsible for serving users. Traditional caching methods are limited by their reliance on each node's individual memory, which incurs additional overhead. Hence, we develop an efficient genetic algorithm-based scheme for beam optimization in the coded-caching system. By exploiting the advantages of beamforming and coded-caching, the architecture achieves gains in terms of multicast opportunities, interference mitigation, and reduced peak backhaul traffic. A comparative analysis of this joint design with traditional, uncoded caching schemes is also conducted to assess the benefits of the proposed approach. Additionally, we examine the impact of various buffering and decoding methods on the performance of the coded-caching scheme. Our findings suggest that proper beamforming enhances the effectiveness of the coded-caching technique, resulting in a significant reduction in peak backhaul traffic.
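As a rough illustration of the beam-optimization step, the sketch below runs a simple genetic algorithm over phase-only beamforming vectors for a multi-antenna server and a few cache nodes. The channel model, the fitness function (minimum received power across nodes), and all GA parameters are assumptions made here for illustration; the paper's actual objective and constraints are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: a server with N antennas serving K cache nodes over
# i.i.d. Rayleigh channels (illustrative only, not the paper's model).
N, K = 8, 4
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

def fitness(phases):
    """Minimum received power across nodes for a phase-only beamformer."""
    w = np.exp(1j * phases) / np.sqrt(N)
    return np.min(np.abs(H @ w) ** 2)

POP, GENS, MUT = 40, 200, 0.1
pop = rng.uniform(0.0, 2 * np.pi, size=(POP, N))
for _ in range(GENS):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[POP // 2:]]            # fitter half survives
    mates = parents[rng.integers(0, POP // 2, POP // 2)]
    children = parents.copy()
    mask = rng.random(children.shape) < 0.5                 # uniform crossover
    children[mask] = mates[mask]
    children += MUT * rng.standard_normal(children.shape)   # Gaussian mutation
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print(f"best min-power across nodes: {fitness(best):.3f}")
```

Maximizing the minimum received power is one plausible multicast-oriented fitness choice here, since broadcast content must be decodable at every cache node.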
2. Low Peak-to-Average Power Ratio FBMC-OQAM System based on Data Mapping and DFT Precoding
Authors: Liming Li, Liqin Ding, Yang Wang, Jiliang Zhang
Abstract: Filter bank multicarrier with offset quadrature amplitude modulation (FBMC-OQAM) is an alternative to OFDM that allows more flexible spectrum usage. To reduce the peak-to-average power ratio (PAPR), DFT spreading is usually adopted in OFDM systems. In FBMC-OQAM systems, however, because the OQAM pre-processing splits the spread data into real and imaginary parts, DFT spreading yields only marginal PAPR reduction. This letter proposes a novel map-DFT-spread FBMC-OQAM scheme, in which the data symbols to be transmitted are first mapped according to a conjugate-symmetry rule and then precoded by the DFT. With this mapping, the OQAM pre-processing can be avoided. Compared with the simple DFT-spread scheme, the proposed scheme achieves better PAPR reduction. In addition, the effect of the prototype filter on the PAPR is studied via numerical simulation, revealing a trade-off between PAPR and out-of-band performance.
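The key property behind such a mapping can be checked numerically: the DFT of a conjugate-symmetric sequence is real-valued, so no split into real and imaginary parts is needed. The sketch below uses a generic conjugate-symmetric extension of a 16-QAM payload as an illustrative stand-in for the letter's specific mapping rule.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mapping (illustrative, not the letter's exact rule): embed
# M 16-QAM symbols into a conjugate-symmetric vector of length N = 2M + 2,
# then apply the DFT. Conjugate symmetry x[n] = conj(x[N - n]) guarantees
# a real-valued DFT output.
M = 15
N = 2 * M + 2
qam = rng.choice([-3, -1, 1, 3], size=(M, 2)) @ np.array([1, 1j])

x = np.zeros(N, dtype=complex)
x[1:M + 1] = qam                    # data half
x[M + 2:] = np.conj(qam[::-1])      # mirrored conjugate half
# x[0] and x[M + 1] remain real (zero here), completing the symmetry.

X = np.fft.fft(x)                   # DFT precoding
print("max |Im(X)| =", np.max(np.abs(X.imag)))   # ~1e-13: output is real
```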
3. On the Structure of the Linear Codes with a Given Automorphism
Authors: Stefka Bouyuklieva
Abstract: The purpose of this paper is to present the structure of linear codes over a finite field with q elements that have a permutation automorphism of order m. These codes can be considered as generalized quasi-cyclic codes. Quasi-cyclic codes and almost quasi-cyclic codes are discussed in detail, and necessary and sufficient conditions are presented under which linear codes with such an automorphism are self-orthogonal, self-dual, or linear complementary dual.
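For a concrete (binary, hypothetical) instance of the self-orthogonality question treated here, the sketch below builds a quasi-cyclic generator matrix from circulant blocks over GF(2) and checks whether G G^T = 0 (mod 2); the repeated-circulant construction [C | C] is trivially self-orthogonal over GF(2).

```python
import numpy as np

def circulant(first_row):
    """Circulant matrix over GF(2) generated by its first row."""
    return np.array([np.roll(first_row, k) for k in range(len(first_row))]) % 2

m = 7
C = circulant([1, 1, 0, 1, 0, 0, 0])   # arbitrary illustrative circulant
G = np.hstack([C, C])                  # generator rows of a binary quasi-cyclic code, length 14

# Self-orthogonality over GF(2): G G^T = C C^T + C C^T = 2 C C^T = 0 (mod 2),
# so every codeword is orthogonal to every other (and to itself).
gram = (G @ G.T) % 2
print("self-orthogonal:", not gram.any())   # True
```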
4. Iterative Interference Cancellation for Time Reversal Division Multiple Access
Authors: Ali Mokh, George C. Alexandropoulos, Mohamed Kamoun, Abdelwaheb Ourir, Arnaud Tourin, Mathias Fink, Julien de Rosny
Abstract: Time Reversal (TR) has been proposed as a competitive precoding strategy for low-complexity devices, relying on ultra-wideband waveforms. This transmit processing paradigm can address the need for low-power and low-complexity receivers, which is particularly important for the Internet of Things, since it shifts most of the communications signal processing complexity to the transmitter side. Due to its spatio-temporal focusing property, TR has also been used to design multiple access schemes for multi-user communications scenarios. However, in wideband time-division multiple access schemes, the signals received by users suffer from significant inter-symbol interference as well as interference from uncoordinated users, which often requires additional processing at the receiver side. This paper proposes an iterative TR scheme that reduces the level of interference in wideband multi-user settings while keeping all processing complexity at the transmitter side. The performance of the proposed TR-based protocol is evaluated using analytical derivations. In addition, its superiority over the conventional Time Reversal Division Multiple Access (TRDMA) scheme is demonstrated through simulations as well as experimental measurements at a $2.5$ GHz carrier frequency with variable bandwidth values.
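The spatio-temporal focusing that TRDMA builds on can be illustrated in a few lines: prefiltering with the time-reversed, conjugated channel impulse response makes the equivalent channel the autocorrelation of the channel, with a strong central peak and residual ISI sidelobes. The exponential power-delay profile below is an assumed toy channel, not the paper's measured 2.5 GHz channels, and the paper's iterative cancellation step is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed toy channel: L-tap Rayleigh multipath, exponential power-delay profile.
L = 64
pdp = np.exp(-np.arange(L) / 16)
h = np.sqrt(pdp / 2) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))

g = np.conj(h[::-1])                 # TR prefilter: time-reversed conjugate
g /= np.linalg.norm(g)               # transmit power normalization

eq = np.convolve(g, h)               # equivalent channel = autocorrelation of h
taps = np.abs(eq)
peak = taps.max()                    # focusing peak at the central tap
isi = np.sort(taps)[-2]              # strongest residual ISI tap
print(f"peak-to-ISI ratio: {peak / isi:.2f}")
```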
5. Low-Complexity Vector Source Coding for Discrete Long Sequences with Unknown Distributions
Authors: Leah Woldemariam, Hang Liu, Anna Scaglione
Abstract: In this paper, we propose a source coding scheme that represents data from unknown distributions through frequency and support information. Existing encoding schemes often compress data by sacrificing computational efficiency or by assuming that the data follow a known distribution. We take advantage of the structure that arises within the spatial representation, encoding the run-lengths within this representation using Golomb coding. Through theoretical analysis, we show that our scheme yields an overall bit rate that approaches the entropy without a computationally complex encoding algorithm, and we verify these results through numerical experiments.
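A minimal sketch of the run-length/Golomb building block is given below, with a hypothetical Golomb parameter m = 4; the paper's parameter selection and the surrounding frequency/support encoding are not reproduced.

```python
def golomb_encode(n, m):
    """Golomb code of a non-negative integer n with parameter m."""
    q, r = divmod(n, m)
    code = "1" * q + "0"                 # unary-coded quotient
    b = m.bit_length() - 1
    if (1 << b) == m:                    # m is a power of two: Rice code
        code += format(r, f"0{b}b") if b else ""
    else:                                # general m: truncated binary remainder
        cutoff = (1 << (b + 1)) - m
        if r < cutoff:
            code += format(r, f"0{b}b")
        else:
            code += format(r + cutoff, f"0{b + 1}b")
    return code

# Run-length extraction followed by Golomb coding of the runs.
data = [0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1]
runs, count = [], 0
for bit in data:
    if bit:
        runs.append(count)
        count = 0
    else:
        count += 1
bitstream = "".join(golomb_encode(r, 4) for r in runs)
print(runs, "->", bitstream)   # [3, 1, 5] -> 0110011001
```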
6. Data efficiency, dimensionality reduction, and the generalized symmetric information bottleneck
Authors: K. Michael Martini, Ilya Nemenman
Abstract: The Symmetric Information Bottleneck (SIB), an extension of the more familiar Information Bottleneck, is a dimensionality reduction technique that simultaneously compresses two random variables so as to preserve the information between their compressed versions. We introduce the Generalized Symmetric Information Bottleneck (GSIB), which explores different functional forms of the cost of such simultaneous reduction. We then explore the dataset size requirements of simultaneous compression by deriving bounds and root-mean-squared estimates of the statistical fluctuations of the involved loss functions. We show that, in typical situations, simultaneous GSIB compression requires qualitatively less data to achieve the same errors as compressing the variables one at a time. We suggest that this is an example of a more general principle: simultaneous compression is more data efficient than independent compression of each input variable.
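For intuition, the sketch below evaluates a plain SIB-style objective, I(Z_X; Z_Y) - lambda * [I(Z_X; X) + I(Z_Y; Y)], for random hard clusterings of a toy joint distribution; for deterministic compressions, I(Z_X; X) = H(Z_X). The GSIB's generalized cost functionals are not reproduced here, and all sizes and the lambda value are illustrative assumptions.

```python
import numpy as np

def mi(pxy):
    """Mutual information in bits for a joint distribution (2-D array)."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    m = pxy > 0
    return float(np.sum(pxy[m] * np.log2(pxy[m] / (px * py)[m])))

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(3)
pxy = rng.random((8, 8)); pxy /= pxy.sum()                # toy joint distribution
zx = rng.integers(0, 3, 8); zy = rng.integers(0, 3, 8)    # random hard clusterings

# Joint distribution of the compressed variables (Z_X, Z_Y).
pzz = np.zeros((3, 3))
for i in range(8):
    for j in range(8):
        pzz[zx[i], zy[j]] += pxy[i, j]

# Relevance minus lambda times the compression cost; for hard clusterings
# the compression terms reduce to the entropies of Z_X and Z_Y.
lam = 0.1
pzx = np.bincount(zx, weights=pxy.sum(axis=1), minlength=3)
pzy = np.bincount(zy, weights=pxy.sum(axis=0), minlength=3)
objective = mi(pzz) - lam * (entropy(pzx) + entropy(pzy))
print(f"SIB-style objective: {objective:.4f}")
```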