arXiv daily

Optimization and Control (math.OC)

Wed, 31 May 2023

1.On the Linear Convergence of Policy Gradient under Hadamard Parameterization

Authors:Jiacai Liu, Jinchi Chen, Ke Wei

Abstract: The convergence of deterministic policy gradient under the Hadamard parameterization is studied in the tabular setting, and the global linear convergence of the algorithm is established. To this end, we first show that the error decreases at an $O(\frac{1}{k})$ rate for all iterations. Based on this result, we further show that the algorithm has a faster local linear convergence rate after $k_0$ iterations, where $k_0$ is a constant that depends only on the MDP problem and the step size. Overall, the algorithm displays a linear convergence rate for all iterations, albeit with a looser constant than that of the local linear convergence rate.
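
The abstract does not spell out the update rule, but a minimal sketch of exact (non-stochastic) tabular policy gradient under a Hadamard-type parameterization, $\pi(a|s) = \theta_{s,a}^2$ with each row of $\theta$ kept on the unit sphere, might look as follows. The random toy MDP, the sphere projection/retraction, and the step size are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny random MDP: S states, A actions, discount gamma, uniform start rho.
S, A, gamma = 4, 3, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] = next-state distribution
R = rng.uniform(size=(S, A))                 # reward r(s, a)
rho = np.full(S, 1.0 / S)

def q_values(pi):
    # Exact policy evaluation: solve (I - gamma P_pi) v = r_pi, then Q = r + gamma P v.
    P_pi = np.einsum("sa,saz->sz", pi, P)
    r_pi = np.einsum("sa,sa->s", pi, R)
    v = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    return R + gamma * np.einsum("saz,z->sa", P, v)

theta = rng.normal(size=(S, A))
theta /= np.linalg.norm(theta, axis=1, keepdims=True)

eta = 0.1
for k in range(300):
    pi = theta ** 2                          # Hadamard parameterization
    Q = q_values(pi)
    P_pi = np.einsum("sa,saz->sz", pi, P)
    # Discounted state occupancy d = (1 - gamma) (I - gamma P_pi^T)^{-1} rho.
    d = (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, rho)
    g = d[:, None] * 2 * theta * Q           # policy gradient theorem with pi = theta^2
    g -= (g * theta).sum(axis=1, keepdims=True) * theta   # project onto sphere tangent
    theta += eta * g
    theta /= np.linalg.norm(theta, axis=1, keepdims=True) # retract to the sphere

print("greedy policy:", (theta ** 2).argmax(axis=1))
```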

2.A converse Lyapunov-type theorem for control systems with regulated cost

Authors:Anna Chiara Lai, Monica Motta

Abstract: Given a nonlinear control system, a target set, a nonnegative integral cost, and a continuous function $W$, we say that the system is globally asymptotically controllable to the target with $W$-regulated cost whenever, starting from any point $z$, among the strategies that achieve classical asymptotic controllability we can select one that also keeps the cost below $W(z)$. In this paper, assuming mild regularity hypotheses on the data, we prove that a necessary and sufficient condition for global asymptotic controllability with regulated cost is the existence of a special, continuous Control Lyapunov function, called a Minimum Restraint function. The main novelty is the necessity implication, obtained here for the first time. The sufficiency implication, in turn, extends previous results that were based on semiconcavity of the Minimum Restraint function, as we require mere continuity.
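
For orientation (a schematic form in our notation; the paper's precise continuous-case definition may differ in technical details): a Minimum Restraint function is a continuous, proper, positive definite function $W$ whose Lyapunov-type decrease condition also involves the running cost $l$. For some constant $p_0 > 0$ and every $z$ outside the target,

$$\inf_{u}\ \big\{\, \langle \nabla W(z),\, f(z,u) \rangle + p_0\, l(z,u) \,\big\} < 0,$$

with $f$ the dynamics and $\nabla W$ replaced by suitable generalized gradients where $W$ fails to be differentiable. Selecting controls that realize this decrease yields asymptotic controllability while keeping the accumulated cost bounded in terms of $W(z)$, which is what "regulated cost" refers to.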

3.Bilevel Optimal Control: Theory, Algorithms, and Applications

Authors:Stephan Dempe, Markus Friedemann, Felix Harder, Patrick Mehlitz, Gerd Wachsmuth

Abstract: In this chapter, we are concerned with inverse optimal control problems, i.e., optimization models which are used to identify parameters in optimal control problems from given measurements. Here, we focus on linear-quadratic optimal control problems with control constraints where the reference control plays the role of the parameter and has to be reconstructed. First, it is shown that pointwise M-stationarity, associated with the reformulation of the hierarchical model as a so-called mathematical problem with complementarity constraints (MPCC) in function spaces, provides a necessary optimality condition under some additional assumptions on the data. Second, we review two recently developed algorithms (an augmented Lagrangian method and a nonsmooth Newton method) for the computational identification of M-stationary points of finite-dimensional MPCCs. Finally, a numerical comparison of these methods, based on instances of the appropriately discretized inverse optimal control problem of our interest, is provided.
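
Schematically (our notation, not the chapter's), the inverse optimal control problem has the bilevel form

$$\min_{w,\,u}\ \tfrac12\,\|u - u_{\mathrm{meas}}\|^2 \quad \text{s.t.} \quad u \in \operatorname*{arg\,min}_{v \in U_{\mathrm{ad}}}\ \tfrac12\,\|S v - y_d\|^2 + \tfrac{\alpha}{2}\,\|v - w\|^2,$$

where $w$ is the unknown reference control, $u_{\mathrm{meas}}$ the given measurement, $S$ the linear control-to-state map, and $U_{\mathrm{ad}}$ the set of admissible controls. Since the lower-level problem is convex, it can be replaced equivalently by its KKT system; doing so produces the MPCC in function spaces whose M-stationarity conditions the chapter analyzes.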

4.Convergence of the vertical gradient flow for the Gaussian Monge problem

Authors:Erik Jansson, Klas Modin

Abstract: We investigate a matrix dynamical system related to optimal mass transport in the linear category, namely, the problem of finding an optimal invertible matrix by which two covariance matrices are congruent. We first review the differential geometric structure of the problem in terms of a principal fiber bundle. The dynamical system is a gradient flow restricted to the fibers of the bundle. We prove global existence of solutions to the flow, with convergence to the polar decomposition of the matrix given as initial data. The convergence is illustrated in a numerical example.
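
The abstract does not state the flow itself, but its claimed limit, the polar decomposition of the initial data, is easy to compute directly via the SVD, a standard identity: $X = U \Sigma V^T$ gives $X = QP$ with orthogonal $Q = UV^T$ and symmetric positive definite $P = V \Sigma V^T$. A minimal check (the flow's trajectory would have to be integrated numerically; here we only compute the limit object):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 4))             # initial data for the flow

# Polar decomposition via the SVD: X = U S V^T  =>  Q = U V^T, P = V S V^T.
U, s, Vt = np.linalg.svd(X)
Q = U @ Vt                              # orthogonal polar factor
Pfac = Vt.T @ np.diag(s) @ Vt           # symmetric positive definite factor

assert np.allclose(Q @ Pfac, X)         # X = Q P
assert np.allclose(Q.T @ Q, np.eye(4))  # Q is orthogonal
```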

5.A fresh look at nonsmooth Levenberg--Marquardt methods with applications to bilevel optimization

Authors:Lateef O. Jolaoso, Patrick Mehlitz, Alain B. Zemkoho

Abstract: In this paper, we revisit the classical problem of solving over-determined systems of nonsmooth equations numerically. We suggest a nonsmooth Levenberg--Marquardt method for its solution which, in contrast to the existing literature, does not require local Lipschitzness of the data functions. This is possible when using Newton-differentiability instead of semismoothness as the underlying tool of generalized differentiation. Conditions for fast local convergence of the method are given. We then apply our findings to over-determined mixed nonlinear complementarity systems and construct globalized solution methods based on residuals induced by the maximum function and the Fischer--Burmeister function, respectively. The assumptions for fast local convergence are worked out and compared. Finally, these methods are applied to the numerical solution of bilevel optimization problems. We recall the derivation of a stationarity condition taking the shape of an over-determined mixed nonlinear complementarity system involving a penalty parameter, formulate explicit assumptions for fast local convergence of our solution methods, and present results of numerical experiments. In particular, we investigate whether treating the penalty parameter as an additional variable is beneficial.
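
As a concrete instance of the kind of residual being globalized, consider a complementarity problem $0 \le x \perp F(x) \ge 0$ recast via the Fischer--Burmeister function $\phi(a,b) = \sqrt{a^2 + b^2} - a - b$, whose roots are exactly the complementarity pairs. Below is a minimal Levenberg--Marquardt sketch on a small square system; the example map $F$ is our own, and the finite-difference Jacobian surrogate is an illustrative stand-in for the Newton derivatives the paper actually uses:

```python
import numpy as np

def fb(a, b):
    # Fischer--Burmeister: fb(a, b) = 0  <=>  a >= 0, b >= 0, a * b = 0.
    return np.sqrt(a**2 + b**2) - a - b

def F(x):
    # Example complementarity map (an assumption, for illustration only).
    M = np.array([[3.0, 1.0], [1.0, 2.0]])
    q = np.array([-1.0, -2.0])
    return M @ x + q

def residual(x):
    return fb(x, F(x))                   # zero exactly at the solutions

def jac_fd(x, h=1e-7):
    # Forward-difference surrogate for an element of the generalized Jacobian.
    r0, n = residual(x), len(x)
    J = np.empty((n, n))
    for i in range(n):
        e = np.zeros(n); e[i] = h
        J[:, i] = (residual(x + e) - r0) / h
    return J

x, mu = np.ones(2), 1e-2
for k in range(100):
    r, J = residual(x), jac_fd(x)
    # LM step: solve (J^T J + mu I) d = -J^T r.
    d = np.linalg.solve(J.T @ J + mu * np.eye(len(x)), -J.T @ r)
    x = x + d
    if np.linalg.norm(residual(x)) < 1e-8:
        break

print(x, F(x))   # x >= 0, F(x) >= 0, x * F(x) close to 0 componentwise
```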

6.Efficient PDE-constrained optimization under high-dimensional uncertainty using derivative-informed neural operators

Authors:Dingcheng Luo, Thomas O'Leary-Roseberry, Peng Chen, Omar Ghattas

Abstract: We propose a novel machine learning framework for solving optimization problems governed by large-scale partial differential equations (PDEs) with high-dimensional random parameters. Such optimization under uncertainty (OUU) problems may be computationally prohibitive using classical methods, particularly when a large number of samples is needed to evaluate risk measures at every iteration of an optimization algorithm and each sample requires the solution of an expensive PDE. To address this challenge, we propose a new neural operator approximation of the PDE solution operator that has the combined merits of (1) accurately approximating not only the map from the joint inputs of random parameters and optimization variables to the PDE state, but also its derivative with respect to the optimization variables, (2) admitting efficient construction using reduced basis architectures that are scalable to high-dimensional OUU problems, and (3) requiring only a limited amount of training data to achieve high accuracy for both the PDE solution and the OUU solution. We refer to such neural operators as multi-input reduced basis derivative-informed neural operators (MR-DINOs). We demonstrate the accuracy and efficiency of our approach through several numerical experiments, namely the risk-averse control of a semilinear elliptic PDE and of the steady-state Navier--Stokes equations in two and three spatial dimensions, each involving random field inputs. Across the examples, MR-DINOs offer $10^{3}$--$10^{7} \times$ reductions in execution time and produce OUU solutions of accuracy comparable to those from standard PDE-based approaches while being over $10 \times$ more cost-efficient after factoring in the cost of construction.
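
In symbols (schematic, our notation), the targeted OUU problem has the form

$$\min_{z}\ \mathcal{R}\big[\, Q(u(m, z), z) \,\big] \quad \text{s.t.} \quad R(u(m, z), m, z) = 0,$$

where $m$ is the high-dimensional random parameter, $z$ the optimization variable, $R$ the PDE residual, $Q$ a quantity of interest, and $\mathcal{R}$ a risk measure (e.g. a mean-variance combination) that must be estimated from many samples of $m$ at each optimizer iteration. MR-DINOs replace the repeated PDE solves with a surrogate $u_\theta(m, z)$ trained to match both the state and its derivative $\partial_z u$, which is what keeps gradient-based OUU accurate.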

7.Alternating Minimization for Regression with Tropical Rational Functions

Authors:Alex Dunbar, Lars Ruthotto

Abstract: We propose an alternating minimization heuristic for regression over the space of tropical rational functions with fixed exponents. The method alternates between fitting the numerator and denominator terms via tropical polynomial regression, which is known to admit a closed-form solution. We demonstrate the behavior of the alternating minimization method experimentally; the experiments show that the heuristic provides a reasonable approximation of the input data. Our work is motivated by applications to ReLU neural networks, a popular class of network architectures in the machine learning community that are closely related to tropical rational functions.
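
A tropical (max-plus) polynomial with fixed exponents $b_i$ is $p(x) = \max_i \big( a_i + \langle b_i, x \rangle \big)$, and a tropical rational function is a difference $p(x) - q(x)$. Below is a minimal sketch of the alternating scheme, using the standard closed-form sup-norm fit for tropical polynomials (greatest subsolution shifted by half the worst residual); the paper's exact updates and error measure may differ:

```python
import numpy as np

def tropical_poly(a, B, X):
    # p(x) = max_i ( a[i] + <B[i], x> )   (max-plus polynomial, exponents B fixed)
    return np.max(a[None, :] + X @ B.T, axis=1)

def fit_tropical_poly(B, X, y):
    # Closed-form sup-norm fit: greatest coefficients with p(x_j) <= y_j for all j,
    # then shift by half the largest residual (Chebyshev-optimal).
    a = np.min(y[:, None] - X @ B.T, axis=0)
    r = y - tropical_poly(a, B, X)
    return a + np.max(r) / 2.0

def alternating_fit(Bp, Bq, X, y, iters=20):
    # Fit y ~ p(x) - q(x) by alternating tropical polynomial regressions.
    q = np.zeros(Bq.shape[0])
    for _ in range(iters):
        p = fit_tropical_poly(Bp, X, y + tropical_poly(q, Bq, X))
        q = fit_tropical_poly(Bq, X, tropical_poly(p, Bp, X) - y)
    return p, q

# Toy 1-D example: fit a ReLU kink using exponents {0, 1} in both terms.
X = np.linspace(-2.0, 2.0, 50)[:, None]
y = np.maximum(X[:, 0], 0.0)
Bp = np.array([[0.0], [1.0]])
Bq = np.array([[0.0], [1.0]])
p, q = alternating_fit(Bp, Bq, X, y)
fit = tropical_poly(p, Bp, X) - tropical_poly(q, Bq, X)
print("max abs error:", np.max(np.abs(fit - y)))
```

The ReLU target here is itself tropical rational (indeed tropical polynomial), so the heuristic should drive the sup-norm error to near zero on this toy instance.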