arXiv daily

Optimization and Control (math.OC)

Thu, 01 Jun 2023

1.The Mini-batch Stochastic Conjugate Algorithms with the unbiasedness and Minimized Variance Reduction

Authors:Feifei Gao, Caixia Kou

Abstract: We first propose a new stochastic gradient estimate that is unbiased and has minimized variance. We then propose two algorithms, Algorithm 1 and Algorithm 2, which apply the new stochastic gradient estimate to the modern stochastic conjugate gradient algorithms SCGA [7] and CGVR [8]. We prove that the proposed algorithms attain a linear convergence rate under assumptions of strong convexity and smoothness. Finally, numerical experiments show that the new stochastic gradient estimate effectively reduces the variance of the stochastic gradient, and that our algorithms converge faster than SCGA and CGVR on a ridge regression model.
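The abstract does not spell out the estimator, but variance-reduced stochastic conjugate gradient methods in the spirit of CGVR typically combine an SVRG-style control variate (which keeps the estimate unbiased while shrinking its variance) with a conjugate-direction update. The sketch below illustrates that generic pattern on a ridge regression objective; the SVRG-style estimate, the Fletcher-Reeves coefficient, the fixed step size, and all function and variable names are illustrative assumptions, not the paper's Algorithm 1 or Algorithm 2.

```python
import numpy as np

def ridge_grad(w, A, b, lam, idx=None):
    """Gradient of 0.5/m * ||A w - b||^2 + 0.5*lam*||w||^2, over rows idx if given."""
    A_, b_ = (A, b) if idx is None else (A[idx], b[idx])
    return A_.T @ (A_ @ w - b_) / len(b_) + lam * w

def svrg_style_scg(A, b, lam=1e-2, epochs=20, inner=50, batch=16, step=0.05, seed=0):
    """Mini-batch stochastic conjugate gradient with an SVRG-style
    (unbiased, variance-reduced) gradient estimate -- an illustrative sketch."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)
    for _ in range(epochs):
        w_snap = w.copy()
        full_g = ridge_grad(w_snap, A, b, lam)   # full gradient at the snapshot point
        g = full_g.copy()
        direction = -full_g
        for _ in range(inner):
            idx = rng.choice(n, size=batch, replace=False)
            # Control-variate estimate: unbiased for the full gradient at w,
            # with variance shrinking as w approaches the snapshot.
            g_new = (ridge_grad(w, A, b, lam, idx)
                     - ridge_grad(w_snap, A, b, lam, idx) + full_g)
            beta = (g_new @ g_new) / max(g @ g, 1e-12)  # Fletcher-Reeves-type coefficient
            direction = -g_new + beta * direction
            w = w + step * direction                    # fixed step size (assumption)
            g = g_new
    return w
```

On synthetic data, e.g. `w = svrg_style_scg(A, b)`, each inner-loop estimate has the same expectation as the full gradient at the current iterate, which is the unbiasedness property the abstract emphasizes.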

2.Optimization Algorithm Synthesis based on Integral Quadratic Constraints: A Tutorial

Authors:Carsten W. Scherer, Christian Ebenbauer, Tobias Holicki

Abstract: We expose in a tutorial fashion the mechanisms which underlie the synthesis of optimization algorithms based on dynamic integral quadratic constraints. We reveal how these tools from robust control allow us to design accelerated gradient descent algorithms with optimal guaranteed convergence rates by solving small-sized convex semi-definite programs. It is shown that this extends to the design of extremum controllers, with the goal of regulating the output of a general linear closed-loop system to the minimum of an objective function. Numerical experiments illustrate that we can not only recover gradient descent and the triple momentum variant of Nesterov's accelerated first-order algorithm, but also automatically synthesize optimal algorithms even if the gradient information is passed through non-trivial dynamics, such as time-delays.
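For readers new to this line of work, the standard model behind such IQC-based analysis and synthesis (which the abstract leaves implicit) represents a first-order method as a linear system in feedback with the gradient; the display below is a schematic illustration, not a formula from the paper:

\[
\xi_{k+1} = A\,\xi_k + B\,u_k, \qquad y_k = C\,\xi_k, \qquad u_k = \nabla f(y_k).
\]

Plain gradient descent $x_{k+1} = x_k - \alpha\,\nabla f(x_k)$ corresponds to $(A,B,C) = (I,\,-\alpha I,\,I)$, while momentum-type methods such as triple momentum carry a two-step internal state. Synthesis then searches over the algorithm parameters subject to a small semi-definite feasibility condition, built from integral quadratic constraints, that certifies a guaranteed convergence rate.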

3.Robust Exponential Stability and Invariance Guarantees with General Dynamic O'Shea-Zames-Falb Multipliers

Authors:Carsten W. Scherer

Abstract: We propose novel time-domain dynamic integral quadratic constraints with a terminal cost for exponentially weighted slope-restricted gradients of not necessarily convex functions. This extends recent results for subdifferentials of convex functions and their link to so-called O'Shea-Zames-Falb multipliers. The benefit of merging time-domain and frequency-domain techniques is demonstrated for linear saturated systems.
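As background (this definition is standard in the multiplier literature and is not quoted from the paper), a gradient $\nabla f$ is said to be slope-restricted on $[m, L]$ if it satisfies the incremental quadratic constraint

\[
\bigl(\nabla f(x)-\nabla f(y)-m(x-y)\bigr)^{\top}\bigl(L(x-y)-\nabla f(x)+\nabla f(y)\bigr)\;\ge\;0
\qquad\text{for all } x, y,
\]

which for a scalar nonlinearity $\varphi$ reduces to $m \le \bigl(\varphi(x)-\varphi(y)\bigr)/(x-y) \le L$. O'Shea-Zames-Falb multipliers encode exactly this incremental property as dynamic integral quadratic constraints, which is the structure the exponentially weighted, terminal-cost constraints proposed here build on.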

4.Data-driven optimal control under safety constraints using sparse Koopman approximation

Authors:Hongzhe Yu, Joseph Moyalan, Umesh Vaidya, Yongxin Chen

Abstract: In this work we approach the dual optimal reach-safe control problem using sparse approximations of the Koopman operator. The matrix approximation of the Koopman operator requires solving a least-squares (LS) problem in the lifted function space, which is computationally intractable for fine discretizations and high dimensions. The physical meaning of the Koopman operator as a state-transition map leads to a sparse LS problem in this space. Leveraging this sparsity, we propose an efficient method to solve the sparse LS problem in which we reduce the problem dimension dramatically by formulating the problem using only the non-zero elements of the approximation matrix with a known sparsity pattern. The obtained matrix approximation of the operators is then used in a dual optimal reach-safe problem formulation, where a linear program with sparse linear constraints naturally appears. We validate our proposed method on various dynamical systems and show that the computation time for the operator approximation is greatly reduced while maintaining high precision in the solutions.
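The abstract's key computational idea is to solve the lifted least-squares problem only over the non-zero entries of a matrix with a known sparsity pattern. A minimal sketch of that idea, in the spirit of EDMD-type approximation, is given below; the monomial dictionary, the column-wise reduction, and all names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def lift(X):
    """Illustrative dictionary: constant, state, and element-wise squares."""
    return np.hstack([np.ones((X.shape[0], 1)), X, X**2])

def sparse_koopman_ls(X, Y, pattern):
    """Least-squares Koopman matrix approximation restricted to a known sparsity
    pattern: each column of K is solved only over its admissible non-zero rows.

    X, Y    : (M, n) snapshot pairs with y_k = F(x_k)
    pattern : (N, N) boolean mask of allowed non-zeros in K, N = lifted dimension
    """
    PX, PY = lift(X), lift(Y)          # (M, N) lifted snapshot matrices
    N = PX.shape[1]
    K = np.zeros((N, N))
    for j in range(N):
        support = np.flatnonzero(pattern[:, j])
        if support.size == 0:
            continue
        # Reduced LS: only the dictionary columns that may influence column j of K.
        kj, *_ = np.linalg.lstsq(PX[:, support], PY[:, j], rcond=None)
        K[support, j] = kj
    return K
```

With a fully dense pattern this collapses to an ordinary lifted least-squares fit; the savings appear when each column admits only a few non-zeros, since every reduced problem then involves only a handful of dictionary columns.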

5.Gauss-Southwell type descent methods for low-rank matrix optimization

Authors:Guillaume Olikier, André Uschmajew, Bart Vandereycken

Abstract: We consider gradient-related methods for low-rank matrix optimization with a smooth cost function. The methods operate on single factors of the low-rank factorization and share aspects of both alternating and Riemannian optimization. Two possible choices for the search directions based on Gauss-Southwell type selection rules are compared: one using the gradient of a factorized non-convex formulation, the other using the Riemannian gradient. While both methods provide gradient convergence guarantees that are similar to the unconstrained case, the version based on the Riemannian gradient is significantly more robust with respect to small singular values and the condition number of the cost function, as illustrated by numerical experiments. As a side result of our approach, we also obtain new convergence results for the alternating least squares method.
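A Gauss-Southwell rule in this two-block setting selects, at every iteration, the factor whose partial gradient is currently larger and updates only that factor. The sketch below illustrates the factorized (non-Riemannian) variant on the quadratic cost $\tfrac12\|L R^\top - M\|_F^2$ with block-Lipschitz step sizes; the cost, the step-size rule, and all names are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def gauss_southwell_lowrank(M, rank, iters=500, seed=0):
    """Gauss-Southwell-type block descent on the factors of X = L @ R.T
    for the cost 0.5 * ||L @ R.T - M||_F^2 (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    L = rng.standard_normal((m, rank))
    R = rng.standard_normal((n, rank))
    for _ in range(iters):
        residual = L @ R.T - M
        grad_L = residual @ R                  # partial gradient w.r.t. L
        grad_R = residual.T @ L                # partial gradient w.r.t. R
        # Gauss-Southwell selection rule: update only the factor whose
        # partial gradient currently has the larger norm.
        if np.linalg.norm(grad_L) >= np.linalg.norm(grad_R):
            L = L - grad_L / max(np.linalg.norm(R, 2) ** 2, 1e-12)  # block Lipschitz step
        else:
            R = R - grad_R / max(np.linalg.norm(L, 2) ** 2, 1e-12)
    return L, R
```

The per-block step sizes use the exact Lipschitz constants of the partial gradients (the squared spectral norm of the frozen factor), which is a simple way to make each block update a guaranteed descent step in this quadratic example.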

6.Mean-field limit for stochastic control problems under state constraint

Authors:Samuel Daudin

Abstract: We study the convergence problem of mean-field control theory in the presence of state constraints and non-degenerate idiosyncratic noise. Our main result is the convergence of the value functions associated with stochastic control problems for many interacting particles, subject to symmetric almost-sure constraints, toward the value function of a control problem of mean-field type set on the space of probability measures. The key step of the proof is to show that admissible controls for the limit problem can be turned into admissible controls for the $N$-particle problem, up to a correction which vanishes as the number of particles increases. The rest of the proof relies on compactness methods. We also provide optimality conditions for the mean-field problem and discuss the regularity of the optimal controls. Finally, we present some applications and connections with large deviations for weakly interacting particle systems.
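Schematically, and with purely illustrative notation not taken from the paper, the result concerns the limit of $N$-particle value functions of the form

\[
V^N(t, x_1,\dots,x_N) \;=\; \inf_{\alpha^1,\dots,\alpha^N}\;
\mathbb{E}\!\left[\int_t^T \frac{1}{N}\sum_{i=1}^N L\bigl(X^i_s,\mu^N_s,\alpha^i_s\bigr)\,ds
\;+\; \mathcal{G}\bigl(\mu^N_T\bigr)\right],
\qquad \mu^N_s=\frac{1}{N}\sum_{i=1}^N \delta_{X^i_s},
\]

where each controlled particle $X^i$ is driven by its own (idiosyncratic) Brownian motion and must satisfy the state constraint almost surely. The claimed convergence is that $V^N$ tends, as $N \to \infty$, to the value function $V(t,\mu)$ of an analogous control problem posed directly on the space of probability measures supported in the constraint set.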