arXiv daily

Optimization and Control (math.OC)

Wed, 05 Jul 2023

1.A Mini-Batch Quasi-Newton Proximal Method for Constrained Total-Variation Nonlinear Image Reconstruction

Authors:Tao Hong, Thanh-an Pham, Irad Yavneh, Michael Unser

Abstract: Over the years, computational imaging with accurate nonlinear physical models has drawn considerable interest due to its ability to achieve high-quality reconstructions. However, such nonlinear models are computationally demanding. A popular choice for solving the corresponding inverse problems is accelerated stochastic proximal methods (ASPMs), with the caveat that each iteration is expensive. To overcome this issue, we propose a mini-batch quasi-Newton proximal method (BQNPM) tailored to image-reconstruction problems with total-variation regularization. It involves an efficient approach that computes a weighted proximal mapping at a cost similar to that of the proximal mapping in ASPMs. However, BQNPM requires fewer iterations than ASPMs to converge. We assess the performance of BQNPM on three-dimensional inverse-scattering problems with linear and nonlinear physical models. Our results on simulated and real data show the effectiveness and efficiency of BQNPM.
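
The mini-batch proximal-gradient idea underlying such methods can be sketched in a few lines. The snippet below is an illustrative stochastic proximal gradient loop on an $\ell_1$-regularized least-squares toy problem; the $\ell_1$ soft-threshold stands in for the paper's total-variation proximal mapping, and all function names and parameter values are hypothetical, not taken from the paper:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal mapping of t*||.||_1, used here as a simple stand-in
    # for the total-variation proximal step discussed in the abstract.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def minibatch_prox_gradient(A, y, lam=0.1, step=None, batch=8, iters=200, seed=0):
    # Sketch of a mini-batch proximal gradient method for
    #   min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    # Each iteration uses only a random subset of the rows of A.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    if step is None:
        # Conservative step size based on the full operator norm.
        step = 0.5 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(iters):
        idx = rng.choice(m, size=min(batch, m), replace=False)
        # Rescaled mini-batch gradient (unbiased estimate of the full gradient).
        grad = (m / len(idx)) * A[idx].T @ (A[idx] @ x - y[idx])
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

A quasi-Newton variant would replace the scalar `step` with a metric and the plain soft-threshold with the weighted proximal mapping the paper computes efficiently.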

2.Mixed Leader-Follower Dynamics

Authors:Hsin-Lun Li

Abstract: The original Leader-Follower (LF) model partitions all agents, whose opinions are numbers in $[-1,1]$, into a follower group, a leader group with a positive target opinion in $[0,1]$, and a leader group with a negative target opinion in $[-1,0]$. A leader-group agent weights its target by a constant degree and mixes it with the average opinion of its group neighbors at each update. A follower weights the average opinion of the opinion neighbors of each leader group by a constant degree and mixes it with the average opinion of its group neighbors at each update. In this paper, we consider a variant of the LF model, namely the mixed model, in which the degrees can vary over time, the opinions can be high dimensional, and the number of leader groups can be more than two. We investigate circumstances under which all agents achieve a consensus. In particular, a few leaders can dominate the whole population.
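
The update rule can be illustrated with a small simulation. The sketch below is a deliberately simplified, one-dimensional, all-to-all instance (every agent in a group is a neighbor of every other); the mixing parameters `alpha` and `beta` play the role of the constant degrees and are hypothetical, not the paper's notation:

```python
import numpy as np

def lf_step(followers, leaders, targets, alpha=0.5, beta=0.5):
    # One synchronous update of a simplified Leader-Follower model.
    # Leaders mix their target with their group's average opinion;
    # followers mix the overall leader average with their own group's average.
    new_leaders = []
    for grp, tgt in zip(leaders, targets):
        avg = grp.mean()
        new_leaders.append((alpha * tgt + (1 - alpha) * avg) * np.ones_like(grp))
    leader_avg = np.mean([g.mean() for g in leaders])
    new_followers = (beta * leader_avg
                     + (1 - beta) * followers.mean()) * np.ones_like(followers)
    return new_followers, new_leaders
```

When all leader groups share the same target, iterating this map drives every opinion to that target, which is the kind of consensus result the paper analyzes in much greater generality.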

3.Ill-posed linear inverse problems with box constraints: A new convex optimization approach

Authors:Henryk Gzyl

Abstract: Consider the linear equation $\mathbf{A}\mathbf{x}=\mathbf{y}$, where $\mathbf{A}$ is an $M\times N$ matrix, $\mathbf{x}\in\mathcal{K}\subset \mathbb{R}^N$, and $\mathbf{y}\in\mathbb{R}^M$ is a given vector. When $\mathcal{K}$ is a convex set and $M\not= N$, this is a typical ill-posed linear inverse problem with convex constraints. Here we propose a new way to solve this problem when $\mathcal{K} = \prod_j[a_j,b_j]$. It consists of regarding $\mathbf{A}\mathbf{x}=\mathbf{y}$ as the constraint of a convex minimization problem, in which the objective (cost) function is the dual of a moment generating function. This leads to a nice minimization problem and some interesting comparison results. More importantly, the method provides a solution that lies in the interior of the constraint set $\mathcal{K}$. We also analyze the dependence of the solution on the data and relate it to the Le Chatelier principle.
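
One concrete instance of this barrier idea, on the box $[0,1]^N$, can be sketched numerically. The objective below is the Fermi-Dirac entropy $\sum_j x_j\ln x_j + (1-x_j)\ln(1-x_j)$, which (up to constants) is the convex dual of the log-moment-generating function of a coin-flip measure on $\{0,1\}$; because it blows up at the endpoints, the constrained minimizer stays in the interior of the box. This is an illustrative sketch, not the paper's construction, and the solver choice is an assumption:

```python
import numpy as np
from scipy.optimize import minimize

def box_constrained_solve(A, y, eps=1e-9):
    # Minimize the Fermi-Dirac entropy subject to Ax = y over (0,1)^N.
    # The entropy acts as a barrier, keeping the solution strictly
    # inside the box [0,1]^N.
    n = A.shape[1]

    def obj(x):
        xc = np.clip(x, eps, 1 - eps)
        return np.sum(xc * np.log(xc) + (1 - xc) * np.log(1 - xc))

    cons = {"type": "eq", "fun": lambda x: A @ x - y}
    res = minimize(obj, 0.5 * np.ones(n), bounds=[(eps, 1 - eps)] * n,
                   constraints=cons, method="SLSQP")
    return res.x
```

For a single constraint $x_1+x_2+x_3 = 1.5$, symmetry makes the interior point $(0.5, 0.5, 0.5)$ the minimizer.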

4.From NeurODEs to AutoencODEs: a mean-field control framework for width-varying Neural Networks

Authors:Cristina Cipriani, Massimo Fornasier, Alessandro Scagliotti

Abstract: In our work, we build upon the established connection between Residual Neural Networks (ResNets) and continuous-time control systems known as NeurODEs. By construction, NeurODEs have been limited to constant-width layers, making them unsuitable for modeling deep learning architectures with width-varying layers. In this paper, we propose a continuous-time Autoencoder, which we call AutoencODE, and we extend to this case the mean-field control framework already developed for usual NeurODEs. In this setting, we tackle the case of low Tikhonov regularization, resulting in potentially non-convex cost landscapes. While the global results obtained for high Tikhonov regularization may not hold globally, we show that many of them can be recovered in regions where the loss function is locally convex. Inspired by our theoretical findings, we develop a training method tailored to this specific type of Autoencoders with residual connections, and we validate our approach through numerical experiments conducted on various examples.
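
The ResNet–NeurODE connection the abstract builds on is the observation that a residual layer is an explicit-Euler step of an ODE. A minimal sketch (constant width, `tanh` activation and the step size `dt` are illustrative choices, not the paper's):

```python
import numpy as np

def neurode_forward(x, weights, biases, dt=0.1):
    # Explicit-Euler view of a constant-width ResNet:
    #   x_{k+1} = x_k + dt * tanh(W_k x_k + b_k),
    # i.e. a discretization of the NeurODE x'(t) = tanh(W(t) x + b(t)).
    for W, b in zip(weights, biases):
        x = x + dt * np.tanh(W @ x + b)
    return x
```

The width-varying architectures the paper targets break this picture precisely because `x` would change dimension between layers, which is what the AutoencODE construction addresses.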

5.Extended team orienteering problem: Algorithms and applications

Authors:Wen Ji, Ke Han, Qian Ge

Abstract: The team orienteering problem (TOP) determines a set of routes, each within a time or distance budget, which collectively visit a set of points of interest (POIs) such that the total score collected at those visited points is maximized. This paper proposes an extension of the TOP (ETOP) by allowing the POIs to be visited multiple times to accumulate scores. Such an extension is necessary for application scenarios like urban sensing, where each POI needs to be continuously monitored, or disaster relief, where certain locations need to be repeatedly covered. We present two approaches to solve the ETOP, one based on the adaptive large neighborhood search (ALNS) algorithm and the other being a bi-level matheuristic method. Sensitivity analyses are performed to fine-tune the algorithm parameters. Test results on complete graphs with different problem sizes show that: (1) both algorithms significantly outperform a greedy heuristic, with improvements ranging from 9.43% to 27.68%; and (2) while the ALNS-based algorithm slightly outperforms the matheuristic in terms of solution optimality, the latter is far more computationally efficient, being 11 to 385 times faster. Finally, a real-world case study of VOCs sensing is presented and formulated as an ETOP on a road network (incomplete graph), where the ALNS is outperformed by the matheuristic in terms of optimality, as the destroy and repair operators yield limited perturbation of existing solutions when constrained by a road network.
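
The greedy heuristic used as a baseline in such comparisons can be sketched for a single route. The code below repeatedly moves to the unvisited point with the best score-per-distance ratio that still fits in the budget; it is an illustrative simplification (single vehicle, no return-to-depot requirement, no repeat visits), not the paper's implementation:

```python
def greedy_route(scores, dist, budget, depot=0):
    # Greedy single-route orienteering baseline: at each step, visit the
    # reachable unvisited point maximizing score / travel distance.
    n = len(scores)
    visited = {depot}
    route, cur, used, total = [depot], depot, 0.0, 0.0
    while True:
        best, best_ratio = None, -1.0
        for j in range(n):
            if j in visited:
                continue
            d = dist[cur][j]
            if d > 0 and used + d <= budget and scores[j] / d > best_ratio:
                best, best_ratio = j, scores[j] / d
        if best is None:
            break  # no affordable unvisited point remains
        used += dist[cur][best]
        total += scores[best]
        cur = best
        visited.add(best)
        route.append(best)
    return route, total
```

ALNS and matheuristic approaches improve on such a baseline by destroying and repairing parts of an incumbent solution rather than committing to myopic choices.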

6.QUBO.jl: A Julia Ecosystem for Quadratic Unconstrained Binary Optimization

Authors:Pedro Maciel Xavier, Pedro Ripper, Tiago Andrade, Joaquim Dias Garcia, Nelson Maculan, David E. Bernal Neira

Abstract: We present QUBO.jl, an end-to-end Julia package for working with QUBO (Quadratic Unconstrained Binary Optimization) instances. This tool aims to convert a broad range of JuMP problems for straightforward application in many physics and physics-inspired solution methods whose standard optimization form is equivalent to the QUBO. These methods include quantum annealing, quantum gate-circuit optimization algorithms (Quantum Alternating Operator Ansatz, Variational Quantum Eigensolver), other hardware-accelerated platforms, such as Coherent Ising Machines and Simulated Bifurcation Machines, and more traditional methods such as simulated annealing. Besides working with reformulations, QUBO.jl allows its users to interface with the aforementioned hardware, sending QUBO models in various file formats and retrieving results for subsequent analysis. QUBO.jl was written as a JuMP / MathOptInterface (MOI) layer that automatically maps between the input and output formats, thus providing a smooth modeling experience.
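
The standard form all of these solvers target is $\min_{x\in\{0,1\}^n} x^{\mathsf T} Q x$. A tiny brute-force reference solver (in Python rather than Julia, and unrelated to the QUBO.jl API, which is not reproduced here) makes the form concrete and is useful for sanity-checking small instances:

```python
import itertools
import numpy as np

def solve_qubo_bruteforce(Q):
    # Exhaustively minimize x^T Q x over x in {0,1}^n.
    # Only viable for small n; real solvers (annealers, QAOA-style
    # circuits, Ising machines) handle this form at scale.
    n = Q.shape[0]
    best_x, best_val = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        val = x @ Q @ x
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val
```

Packages like QUBO.jl automate the harder part: reformulating a constrained JuMP model into this unconstrained binary-quadratic shape and shipping it to the target backend.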

7.AI4OPT: AI Institute for Advances in Optimization

Authors:Pascal Van Hentenryck, Kevin Dalmeijer

Abstract: This article is a short introduction to AI4OPT, the NSF AI Institute for Advances in Optimization. AI4OPT fuses AI and Optimization, inspired by end-use cases in supply chains, energy systems, chip design and manufacturing, and sustainable food systems. AI4OPT also applies its "teaching the teachers" philosophy to provide longitudinal educational pathways in AI for engineering.