arXiv daily

Optimization and Control (math.OC)

Thu, 15 Jun 2023

1.Optimization on product manifolds under a preconditioned metric

Authors: Bin Gao, Renfeng Peng, Ya-xiang Yuan

Abstract: Since optimization on Riemannian manifolds relies on the chosen metric, it is appealing to know how the performance of a Riemannian optimization method varies with different metrics and how to construct a metric such that the method is accelerated. To this end, we propose a general framework for optimization problems on product manifolds where the search space is endowed with a preconditioned metric, and we develop the Riemannian gradient descent and Riemannian conjugate gradient methods under this metric. Specifically, the metric is constructed by an operator that aims to approximate the diagonal blocks of the Riemannian Hessian of the cost function, which has a preconditioning effect. We explain the relationship between the proposed methods and variable metric methods, and show that various existing methods, e.g., the Riemannian Gauss--Newton method, can be interpreted within the proposed framework with specific metrics. In addition, we tailor new preconditioned metrics and adapt the proposed Riemannian methods to the canonical correlation analysis and truncated singular value decomposition problems, and we propose a Gauss--Newton method to solve the tensor ring completion problem. Numerical results across these applications verify that a carefully chosen metric does accelerate the Riemannian optimization methods.
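
To make the preconditioning idea concrete, the following minimal sketch (not the authors' implementation) runs Riemannian gradient descent on a product manifold, rescaling each Euclidean gradient block by a user-supplied operator that approximates the inverse of the corresponding diagonal Riemannian Hessian block; all names and signatures are illustrative assumptions.

```python
# Hypothetical sketch of Riemannian gradient descent on a product manifold
# M = M_1 x ... x M_k under a preconditioned metric; egrad, precond_blocks,
# and retractions are user-supplied placeholders.
import numpy as np

def preconditioned_rgd(x_blocks, egrad, precond_blocks, retractions,
                       step=1.0, max_iter=100, tol=1e-8):
    """x_blocks: list of component points; egrad returns a list of Euclidean
    gradient blocks; precond_blocks[i](x, g) applies an approximate inverse of
    the i-th diagonal Hessian block; retractions[i] maps a tangent step back
    onto the i-th manifold component."""
    for _ in range(max_iter):
        grads = egrad(x_blocks)
        # The preconditioned metric turns the gradient into a scaled direction,
        # mimicking a block-diagonal (Gauss--Newton-like) Hessian approximation.
        directions = [precond_blocks[i](x_blocks[i], grads[i])
                      for i in range(len(x_blocks))]
        # Stop when the (squared) preconditioned gradient norm is small.
        if sum(np.vdot(d, g).real for d, g in zip(directions, grads)) < tol:
            break
        x_blocks = [retractions[i](x_blocks[i], -step * directions[i])
                    for i in range(len(x_blocks))]
    return x_blocks
```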

2.Optimal control of port-Hamiltonian systems: energy, entropy, and exergy

Authors: Friedrich Philipp, Manuel Schaller, Karl Worthmann, Timm Faulwasser, Bernhard Maschke

Abstract: We consider irreversible and coupled reversible-irreversible nonlinear port-Hamiltonian systems and the respective sets of thermodynamic equilibria. In particular, we are concerned with optimal state transitions and output stabilization on finite-time horizons. We analyze a class of optimal control problems in which the performance functional can be interpreted as a linear combination of energy supply, entropy generation, and exergy supply. Our results establish the integral turnpike property towards the set of thermodynamic equilibria, providing a rigorous connection between optimal system trajectories and optimal steady states. Throughout the paper, we illustrate our findings by means of two examples: a network of heat exchangers and a gas-piston system.
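
As a rough illustration of the performance functionals described above, the following schematic display (notation assumed, not taken from the paper) writes the cost as a weighted combination of energy supply, entropy generation, and exergy supply over a finite horizon.

```latex
% Schematic finite-horizon optimal control problem for a port-Hamiltonian
% system with state x, input u, output y; sigma denotes an entropy generation
% rate and \dot{E}_{\mathrm{ex}} an exergy supply rate (assumed notation,
% weights alpha_i >= 0).
\begin{equation*}
  \min_{u}\; \int_0^T \Big(
      \alpha_1 \underbrace{y(t)^\top u(t)}_{\text{energy supply}}
    + \alpha_2 \underbrace{\sigma\big(x(t),u(t)\big)}_{\text{entropy generation}}
    + \alpha_3 \underbrace{\dot{E}_{\mathrm{ex}}\big(x(t),u(t)\big)}_{\text{exergy supply}}
  \Big)\,\mathrm{d}t
  \quad \text{s.t.}\quad \dot{x} = f(x,u),\; x(0)=x_0,\; x(T)=x_T.
\end{equation*}
```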

3.iNALM: An inexact Newton Augmented Lagrangian Method for Zero-One Composite Optimization

Authors: Penghe Zhang, Naihua Xiu, Hou-Duo Qi

Abstract: Zero-One Composite Optimization (0/1-COP) is a prototype of nonsmooth, nonconvex optimization problems, and it has attracted much attention recently. The Augmented Lagrangian Method (ALM) has stood out as a leading methodology for such problems. The main purpose of this paper is to extend the classical theory of ALM from smooth problems to 0/1-COP. We propose, for the first time, second-order optimality conditions for 0/1-COP. In particular, under a second-order sufficient condition (SOSC), we prove the R-linear convergence rate of the proposed ALM. In order to identify the subspace used in SOSC, we employ the proximal operator of the 0/1-loss function, leading to an active-set identification technique. Built around this identification process, we design practical stopping criteria for any algorithm used for the ALM subproblem. We justify that Newton's method is an ideal candidate for the subproblem and that it enjoys both global convergence and local quadratic convergence. These considerations result in an inexact Newton ALM (iNALM). iNALM is unique in that it is active-set based and inexact (hence more practical), and SOSC plays an important role in its R-linear convergence analysis. Numerical results on both simulated and real datasets show the fast running speed and high accuracy of iNALM compared with several leading solvers.
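
The following sketch shows the general shape of such an inexact augmented Lagrangian loop for a 0/1-composite model, using the componentwise proximal operator of the 0/1 loss for the auxiliary update; the splitting, the placeholder subproblem solver newton_solve, and all parameters are illustrative assumptions rather than the authors' iNALM code.

```python
# Schematic inexact ALM for min_x f(x) + lam * ||(Ax + b)_+||_0 with an
# auxiliary variable z = Ax + b; not the authors' iNALM implementation.
import numpy as np

def prox_01_loss(v, tau):
    """Componentwise proximal operator of t -> tau * 1_{t > 0}:
    entries in (0, sqrt(2*tau)) are set to 0, all others are kept."""
    out = v.copy()
    out[(v > 0) & (v < np.sqrt(2.0 * tau))] = 0.0
    return out

def inexact_alm(x, z, y, A, b, newton_solve, lam=1.0, rho=1.0, iters=50):
    for _ in range(iters):
        # Inexact primal step: a Newton-type solver drives the augmented
        # Lagrangian in x below a prescribed (loose) inner tolerance.
        x = newton_solve(x, z, y, rho)
        # Exact z-update via the proximal operator of the 0/1 loss.
        z = prox_01_loss(A @ x + b + y / rho, lam / rho)
        # Dual update on the multipliers.
        y = y + rho * (A @ x + b - z)
    return x, z, y
```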

4.Distributionally Robust Stratified Sampling for Stochastic Simulations with Multiple Uncertain Input Models

Authors: Seung Min Baik, Eunshin Byon, Young Myoung Ko

Abstract: This paper presents a robust version of the stratified sampling method for stochastic simulation when multiple uncertain input models are considered. Various variance reduction techniques have demonstrated superior performance in accelerating simulation processes. Nevertheless, they often use a single input model and further assume that the input model is exactly known and fixed. We consider more general cases in which it is necessary to assess a simulation's response to a variety of input models, such as when evaluating the reliability of wind turbines under nonstationary wind conditions or the operation of a service system whose customer inter-arrival time distribution varies over time. Moreover, the estimation variance may be considerably impacted by uncertainty in the input models. To address such nonstationary and uncertain input models, we offer a distributionally robust (DR) stratified sampling approach with the goal of minimizing the worst-case estimator variance among plausible but uncertain input models. Specifically, we devise a bi-level optimization framework for formulating DR stochastic problems with different ambiguity set designs based on the $L_2$-norm, the 1-Wasserstein distance, parametric families of distributions, and distribution moments. To cope with the non-convexity of the objective function, we present a solution approach that uses Bayesian optimization. Numerical experiments and the wind turbine case study demonstrate the robustness of the proposed approach.
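
A toy version of the distributionally robust allocation idea is sketched below: the worst-case stratified-estimator variance over a finite set of candidate input models is minimized over integer sample allocations by brute force (Bayesian optimization, as in the paper, would replace the search at realistic sizes); the variance formula and data structures are illustrative assumptions.

```python
# Illustrative distributionally robust (DR) stratified sampling allocation
# over a finite ambiguity set of input models; a stand-in, not the paper's code.
import itertools
import numpy as np

def stratified_variance(alloc, stratum_probs, stratum_stds):
    """Variance of a stratified estimator: sum_h p_h^2 * sigma_h^2 / n_h."""
    return float(np.sum(stratum_probs**2 * stratum_stds**2 / alloc))

def dr_allocation(n_total, candidate_models, n_strata):
    """candidate_models: list of (stratum_probs, stratum_stds) numpy arrays."""
    best_alloc, best_worst = None, np.inf
    # Brute-force search over integer allocations summing to n_total.
    for alloc in itertools.product(range(1, n_total), repeat=n_strata):
        if sum(alloc) != n_total:
            continue
        alloc = np.asarray(alloc, dtype=float)
        worst = max(stratified_variance(alloc, p, s) for p, s in candidate_models)
        if worst < best_worst:
            best_alloc, best_worst = alloc, worst
    return best_alloc, best_worst
```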

5.Kinetic based optimization enhanced by genetic dynamics

Authors: Giacomo Albi, Federica Ferrarese, Claudia Totzeck

Abstract: We propose and analyse a variant of the recently introduced kinetic-based optimization method that incorporates ideas such as survival-of-the-fittest and mutation strategies well known from genetic algorithms. We thus provide a first attempt to reach out from the class of consensus/kinetic-based algorithms towards genetic metaheuristics. Different generations of genetic algorithms are represented via two species identified with different labels; binary interactions are prescribed at the particle level, and we then derive a mean-field approximation in order to analyse the convergence of the method. Numerical results underline the feasibility of the approach and show in particular that the genetic dynamics improves the efficiency of this class of global optimization methods in terms of computational cost.
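
The hybrid dynamics can be caricatured as follows: a consensus/kinetic-style drift towards a Gibbs-weighted consensus point alternates with a selection-and-mutation step; the coefficients and the simplified elite/offspring labeling are illustrative assumptions, not the two-species binary interaction rules of the paper.

```python
# Toy particle scheme mixing a consensus-based drift with genetic selection
# and mutation; an illustration only, not the method analysed in the paper.
import numpy as np

def kinetic_genetic_opt(f, dim, n_particles=100, iters=200, alpha=30.0,
                        drift=0.5, noise=0.1, mutation=0.5, elite_frac=0.5,
                        seed=None):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3.0, 3.0, size=(n_particles, dim))
    n_elite = int(elite_frac * n_particles)
    for _ in range(iters):
        vals = np.apply_along_axis(f, 1, X)
        # Consensus point: Gibbs-type weighted average favouring low f values.
        w = np.exp(-alpha * (vals - vals.min()))
        x_cons = (w[:, None] * X).sum(axis=0) / w.sum()
        # Kinetic step: drift towards the consensus point plus scaled noise.
        X = X + drift * (x_cons - X) \
              + noise * np.abs(x_cons - X) * rng.standard_normal(X.shape)
        # Genetic step: keep the fittest particles, refill by mutating them.
        vals = np.apply_along_axis(f, 1, X)
        parents = X[np.argsort(vals)[:n_elite]]
        children = parents[rng.integers(0, n_elite, n_particles - n_elite)] \
                   + mutation * rng.standard_normal((n_particles - n_elite, dim))
        X = np.vstack([parents, children])
    vals = np.apply_along_axis(f, 1, X)
    return X[np.argmin(vals)]
```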

6.Two sided ergodic singular control and mean field game for diffusions

Authors: Sören Christensen, Ernesto Mordecki, Facundo Oliú Eguren

Abstract: Consider two independent controlled linear diffusions with the same dynamics and the same ergodic controls, the first corresponding to an individual player and the second to the market. Consider also a cost function that depends on the first diffusion and the expectation of the second one. In this framework, we study the mean-field game that consists of finding the equilibrium points where the controls chosen by the player to minimize an ergodic integrated cost coincide with the market controls. We first show that, in the control problem without market dependence, the best policy is to reflect the process between two boundaries. We use these results to obtain criteria for the optimal and market controls to coincide (i.e., for an equilibrium to exist), and give a pair of nonlinear equations to find these equilibrium points. We also obtain criteria for the existence and uniqueness of equilibrium points for the mean-field games under study. These results are illustrated through several examples in which the existence and uniqueness of the equilibrium points depend on the values of the parameters defining the underlying diffusion.
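
Schematically, and with notation assumed rather than taken from the paper, the player's ergodic cost under a reflection policy at two barriers a < b can be written as follows.

```latex
% X^{a,b} is the player's diffusion reflected at the barriers a < b, \bar{X}
% the market diffusion under the market controls, U_T and D_T the cumulative
% reflection (singular control) efforts at the lower and upper barriers, and
% q_a, q_b their marginal costs (all notation assumed for illustration).
\begin{equation*}
  J(a,b) \;=\; \limsup_{T\to\infty} \frac{1}{T}\,
  \mathbb{E}\!\left[ \int_0^T c\big(X^{a,b}_t,\, \mathbb{E}[\bar{X}_t]\big)\,\mathrm{d}t
  \;+\; q_a\, U_T \;+\; q_b\, D_T \right],
\end{equation*}
% and an equilibrium requires the minimizing pair (a,b) to coincide with the
% barriers generating the market process \bar{X}.
```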

7.A Score-based Nonlinear Filter for Data Assimilation

Authors: Feng Bao, Zezhong Zhang, Guannan Zhang

Abstract: We introduce a score-based generative sampling method for solving the nonlinear filtering problem with robust accuracy. A major drawback of existing nonlinear filtering methods, e.g., particle filters, is their low stability. To overcome this issue, we adopt the diffusion model framework to solve the nonlinear filtering problem. Instead of storing the information of the filtering density in a finite number of Monte Carlo samples, the score-based filter stores this information in a score model. Then, via a reverse-time diffusion sampler, we can generate an unlimited number of samples to characterize the filtering density. Moreover, thanks to the expressive capabilities of deep neural networks, it has been demonstrated that a well-trained score model in a diffusion model can produce samples from complex target distributions in very high-dimensional spaces. Extensive numerical experiments show that our score-based filter could potentially address the curse of dimensionality in very high-dimensional problems.
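
Conceptually, sampling from the filtering density then amounts to running a reverse-time diffusion driven by the learned score. The sketch below discretizes a variance-preserving reverse SDE with a placeholder score_model(x, t); it is an illustrative assumption, not the authors' implementation.

```python
# Conceptual reverse-time diffusion sampler: score_model(x, t) is a trained
# approximation of grad_x log p_t(x); not the authors' implementation.
import numpy as np

def reverse_diffusion_sample(score_model, dim, n_samples=1000, n_steps=500,
                             beta=lambda t: 0.1 + 19.9 * t, seed=None):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_samples, dim))   # start from the N(0, I) prior
    dt = 1.0 / n_steps
    for i in range(n_steps, 0, -1):
        t = i * dt
        b = beta(t)
        # Reverse-time VP-SDE drift, integrated backwards in time.
        drift = -0.5 * b * x - b * score_model(x, t)
        x = x - drift * dt + np.sqrt(b * dt) * rng.standard_normal(x.shape)
    return x   # approximate samples from the target (filtering) density
```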