arXiv daily

Optimization and Control (math.OC)

Mon, 21 Aug 2023

1. A relaxation method for binary orthogonal optimization problems with its applications

Authors: Lianghai Xiao, Yitian Qian, Shaohua Pan

Abstract: This paper focuses on a class of binary orthogonal optimization problems that arise frequently in semantic hashing. Because this class of problems may have an empty feasible set, and hence may not be well defined, we introduce an equivalent model involving a restricted Stiefel manifold and a matrix box set, and then investigate its penalty problems induced by the $\ell_1$-distance from the box set and by its Moreau envelope. The two penalty problems are always well defined; moreover, they serve as global exact penalties whenever the original model is well defined. Notably, the penalty problem induced by the Moreau envelope is a smooth optimization problem over an embedded submanifold with a favorable structure. We develop a retraction-based nonmonotone line-search Riemannian gradient method for this penalty problem, which yields a desirable solution to the original binary orthogonal problem. Finally, the proposed method is applied to supervised and unsupervised hashing tasks and compared with several popular methods on the MNIST and CIFAR-10 datasets. The numerical comparisons reveal that our algorithm is significantly superior to the other solvers in terms of feasibility violation, and comparable or even superior to them on evaluation metrics related to the Hamming distance.
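
To make the setting concrete, here is a minimal sketch of a retraction-based Riemannian gradient step on the Stiefel manifold, the kind of building block the method above rests on. It is an illustration only: the toy objective, the QR retraction, and the plain (monotone) Armijo line search are stand-in choices, not the paper's Moreau-envelope penalty or its nonmonotone search.

```python
# Retraction-based Riemannian gradient descent on St(n, p) = {X : X^T X = I_p}.
import numpy as np

def qr_retraction(X, xi):
    """Retract the step X + xi back onto the Stiefel manifold via QR."""
    Q, R = np.linalg.qr(X + xi)
    # Fix column signs so R has a nonnegative diagonal (continuous retraction).
    return Q * np.sign(np.sign(np.diag(R)) + 0.5)

def riemannian_grad(X, egrad):
    """Project the Euclidean gradient onto the tangent space at X."""
    S = X.T @ egrad
    return egrad - X @ (S + S.T) / 2

def gradient_step(X, f, egrad_f, step=1.0, shrink=0.5, max_bt=30):
    """One Riemannian gradient step with (monotone) Armijo backtracking."""
    g = riemannian_grad(X, egrad_f(X))
    fX = f(X)
    for _ in range(max_bt):
        X_new = qr_retraction(X, -step * g)
        if f(X_new) < fX - 1e-4 * step * np.sum(g * g):
            return X_new
        step *= shrink
    return X

# Toy usage: minimize f(X) = -trace(X^T A X) over St(n, p); minimizers span
# the leading eigenspace of the symmetric matrix A.
rng = np.random.default_rng(0)
n, p = 8, 3
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
f = lambda X: -np.trace(X.T @ A @ X)
egrad = lambda X: -2 * A @ X
X = np.linalg.qr(rng.standard_normal((n, p)))[0]
for _ in range(200):
    X = gradient_step(X, f, egrad)
print("f(X) =", f(X), "feasibility error =", np.linalg.norm(X.T @ X - np.eye(p)))
```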

2. Universal Approximation of Parametric Optimization via Neural Networks with Piecewise Linear Policy Approximation

Authors: Hyunglip Bae, Jang Ho Kim, Woo Chang Kim

Abstract: Parametric optimization solves a family of optimization problems as a function of parameters. It is a critical component in settings where optimal decisions must be made repeatedly for updated parameter values, but computation becomes challenging when complex problems need to be solved in real time. In this study, we therefore present theoretical foundations for approximating the optimal policy of a parametric optimization problem with neural networks, and we derive conditions under which the Universal Approximation Theorem applies to parametric optimization by explicitly constructing a piecewise linear policy approximation. This study fills a gap by formally analyzing the constructed piecewise linear approximation in terms of feasibility and optimality, and shows that neural networks with ReLU activations are valid approximators for it in terms of generalization and approximation error. Furthermore, based on these theoretical results, we propose a strategy to improve the feasibility of approximated solutions and discuss training with suboptimal solutions.
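
As a concrete illustration of this idea, the hedged sketch below learns the solution map of a small parametric LP with a ReLU network: the optimal solution is piecewise linear in the right-hand side, and a ReLU network is itself a piecewise linear function, which makes it a natural approximator. The specific LP, network architecture, and training setup are illustrative assumptions, not the paper's construction.

```python
# Learn the map theta -> x*(theta) for a parametric LP with a ReLU network.
import numpy as np
from scipy.optimize import linprog
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
c = np.array([-1.0, -2.0])              # maximize x1 + 2*x2, as a min problem
A = np.array([[1.0, 1.0], [1.0, 3.0]])  # resource constraints A x <= b(theta)

def solve_lp(theta):
    """Optimal solution of min c^T x s.t. A x <= [theta, 2*theta], x >= 0."""
    res = linprog(c, A_ub=A, b_ub=[theta, 2.0 * theta], bounds=[(0, None)] * 2)
    return res.x

# Training data: sample parameters and solve each LP instance to optimality.
thetas = rng.uniform(1.0, 10.0, size=500)
X_train = thetas.reshape(-1, 1)
Y_train = np.array([solve_lp(t) for t in thetas])

# A ReLU network (piecewise linear) fitted to a piecewise linear solution map.
net = MLPRegressor(hidden_layer_sizes=(64, 64), activation="relu",
                   max_iter=5000, random_state=0).fit(X_train, Y_train)

theta_test = 4.2
print("LP solution:", solve_lp(theta_test))
print("NN policy  :", net.predict([[theta_test]])[0])
```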

3. The Unique Solvability Conditions for the Generalized Absolute Value Equations

Authors: Shubham Kumar, Deepmala

Abstract: This paper investigates conditions that guarantee unique solvability and unsolvability of the generalized absolute value equations (GAVE) given by $Ax - B \vert x \vert = b$. These conditions remain valid for determining the unique solution of the generalized absolute value matrix equations (GAVME) $AX - B \vert X \vert = F$. Finally, certain aspects of the solvability and unsolvability of absolute value equations (AVE) are discussed.
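
For context, the sketch below solves a small GAVE instance with a classical Picard-type iteration. This is a standard approach from the AVE literature, not the paper's new conditions; a well-known sufficient condition for unique solvability (and for convergence of this iteration) is $\sigma_{\min}(A) > \sigma_{\max}(B)$, which the toy data below is chosen to satisfy.

```python
# Picard-type iteration for the GAVE  Ax - B|x| = b.
import numpy as np

def solve_gave(A, B, b, tol=1e-10, max_iter=1000):
    """Iterate x_{k+1} = A^{-1}(B|x_k| + b) until the update stalls."""
    x = np.linalg.solve(A, b)                 # start from the B = 0 solution
    for _ in range(max_iter):
        x_new = np.linalg.solve(A, B @ np.abs(x) + b)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    raise RuntimeError("Picard iteration did not converge")

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + 6 * np.eye(n)   # push sigma_min(A) up
B = 0.1 * rng.standard_normal((n, n))             # keep sigma_max(B) small
b = rng.standard_normal(n)
x = solve_gave(A, B, b)
print("GAVE residual:", np.linalg.norm(A @ x - B @ np.abs(x) - b))
```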

4. Decentralized Riemannian Conjugate Gradient Method on the Stiefel Manifold

Authors: Jun Chen, Haishan Ye, Mengmeng Wang, Tianxin Huang, Guang Dai, Ivor W. Tsang, Yong Liu

Abstract: The conjugate gradient method is a crucial first-order optimization method that generally converges faster than the steepest descent method and whose computational cost is much lower than that of second-order methods. However, while various types of conjugate gradient methods have been studied in Euclidean spaces and on Riemannian manifolds, there has been little study of them in distributed scenarios. This paper proposes a decentralized Riemannian conjugate gradient descent (DRCGD) method that minimizes a global function over the Stiefel manifold. The optimization problem is distributed among a network of agents, where each agent is associated with a local function and communication between agents occurs over an undirected connected graph. Since the Stiefel manifold is a non-convex set, the global function is represented as a finite sum of possibly non-convex (but smooth) local functions. The proposed method is free of expensive Riemannian geometric operations such as retractions, exponential maps, and vector transports, thereby reducing the computational complexity required by each agent. To the best of our knowledge, DRCGD is the first decentralized Riemannian conjugate gradient algorithm to achieve global convergence over the Stiefel manifold.
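
To picture the decentralized setting, the sketch below runs a generic decentralized Riemannian gradient scheme on the Stiefel manifold: each agent mixes its variable with its neighbors through a doubly stochastic matrix and takes a projected gradient step. This is for illustration only; in particular it uses a polar projection and plain gradient directions, whereas DRCGD is precisely designed to avoid retraction-like operations and uses conjugate gradient directions.

```python
# One generic decentralized Riemannian gradient loop on St(n, p), ring graph.
import numpy as np

def tangent_proj(X, G):
    """Project G onto the tangent space of the Stiefel manifold at X."""
    S = X.T @ G
    return G - X @ (S + S.T) / 2

def polar_proj(Y):
    """Map Y back onto the Stiefel manifold via the polar decomposition."""
    U, _, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(0)
m, n, p, step = 4, 6, 2, 0.1
# Local costs f_i(X) = -trace(X^T A_i X); the global cost is their sum.
A_loc = [(lambda M: (M + M.T) / 2)(rng.standard_normal((n, n))) for _ in range(m)]
X = [np.linalg.qr(rng.standard_normal((n, p)))[0] for _ in range(m)]
W = np.zeros((m, m))                          # ring graph: mix with 2 neighbors
for i in range(m):
    W[i, i] = 0.5
    W[i, (i - 1) % m] = W[i, (i + 1) % m] = 0.25

for _ in range(100):                          # decentralized iterations
    mixed = [sum(W[i, j] * X[j] for j in range(m)) for i in range(m)]
    grads = [tangent_proj(X[i], -2 * A_loc[i] @ X[i]) for i in range(m)]
    X = [polar_proj(mixed[i] - step * grads[i]) for i in range(m)]

consensus = np.mean([np.linalg.norm(X[i] - X[0]) for i in range(m)])
print("consensus error:", consensus)
```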

5. Restricted inverse optimal value problem on linear programming under weighted $l_1$ norm

Authors: Junhua Jia, Xiucui Guan, Xinqiang Qian, Panos M. Pardalos

Abstract: We study the restricted inverse optimal value problem on linear programming under the weighted $l_1$ norm (RIOVLP$_1$). Given a linear programming problem $LP_c: \min \{cx \mid Ax=b, x\geq 0\}$, a feasible solution $x^0$, and a value $K$, we aim to adjust the vector $c$ to $\bar{c}$ so that $x^0$ becomes an optimal solution of the problem LP$_{\bar c}$ with objective value $\bar{c}x^0 = K$. The objective is to minimize the distance $\|\bar c - c\|_1=\sum_{j=1}^n d_j|\bar c_j-c_j|$ under the weighted $l_1$ norm. Firstly, we formulate (RIOVLP$_1$) as a linear programming problem using duality theory. Secondly, for a given value $z$, we construct a sub-problem $(D^z)$ of the dual of (RIOVLP$_1$) that has the same form as $LP_c$. Thirdly, when the coefficient matrix $A$ is unimodular, we design a binary search algorithm to compute the critical value $z^*$ corresponding to an optimal solution of (RIOVLP$_1$). Finally, we solve the resulting problems on the Hitchcock transportation problem and the shortest path problem, respectively, in $O(T_{MCF}\log\max\{d_{max},x^0_{max},n\})$ time, where each iteration solves a sub-problem $(D^z)$ as a minimum cost flow problem in $T_{MCF}$ time, and $d_{max},x^0_{max}$ denote the maximum entries of $d$ and $x^0$, respectively.
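
The skeleton below shows the shape of the binary search described above, with the sub-problem solver left as a hypothetical oracle `solve_Dz` (in the paper, a minimum cost flow computation). The monotonicity direction and stopping rule here are assumptions for illustration; the unimodularity of $A$ is what justifies searching over integer values.

```python
def critical_value(z_lo, z_hi, solve_Dz, K):
    """Largest integer z in [z_lo, z_hi] with sub-problem value <= K.

    Assumes the optimal value of (D^z) is nondecreasing in z; one call to
    `solve_Dz` (a min-cost-flow solve in the paper) is made per iteration,
    matching the O(T_MCF log(.)) bound quoted in the abstract.
    """
    while z_lo < z_hi:
        z_mid = (z_lo + z_hi + 1) // 2
        if solve_Dz(z_mid) <= K:
            z_lo = z_mid          # still within budget: move lower bound up
        else:
            z_hi = z_mid - 1      # overshoot: shrink the upper bound
    return z_lo

# Toy check with a monotone stand-in oracle: largest z with z^2 <= 50 is 7.
print(critical_value(0, 100, solve_Dz=lambda z: z * z, K=50))
```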

6. Feedback rectifiable pairs and stabilization of switched linear systems

Authors: Maria C. Honecker, Hannes Gernandt, Kai Wulff, Carsten Trunk, Johann Reger

Abstract: We address the feedback design problem for switched linear systems. In particular, we aim to design a switched state feedback such that the resulting closed-loop switched system is in upper triangular form. To this end, we formulate and analyse the feedback rectification problem for pairs of matrices. We present necessary and sufficient conditions for the feedback rectifiability of pairs for two subsystems and give a constructive procedure to design a stabilizing state feedback for a class of switched systems. Several examples illustrate the characteristics of the problem considered and the application of the proposed constructive procedure.
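
As a small numerical companion, the hedged sketch below only checks whether a candidate switched state feedback rectifies a given pair, i.e., renders every closed-loop matrix $A_i + B_i F_i$ upper triangular and Hurwitz. The matrices are toy data; the paper's necessary and sufficient conditions and its constructive design procedure are not reproduced here.

```python
import numpy as np

def rectifies(A_list, B_list, F_list, tol=1e-9):
    """True iff every closed loop A_i + B_i F_i is upper triangular and Hurwitz."""
    for A, B, F in zip(A_list, B_list, F_list):
        M = A + B @ F
        if np.max(np.abs(np.tril(M, -1))) > tol:        # strictly lower part
            return False
        if np.max(np.linalg.eigvals(M).real) >= 0:      # stability check
            return False
    return True

# Toy pair: both subsystems are already triangular and stable, so F_i = 0 works.
A1 = np.array([[-1.0, 1.0], [0.0, -2.0]])
A2 = np.array([[-3.0, 0.5], [0.0, -1.0]])
B = np.array([[0.0], [1.0]])
F0 = np.zeros((1, 2))
print(rectifies([A1, A2], [B, B], [F0, F0]))            # -> True
```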

7. A Homogenization Approach for Gradient-Dominated Stochastic Optimization

Authors: Jiyuan Tan, Chenyu Xue, Chuwen Zhang, Qi Deng, Dongdong Ge, Yinyu Ye

Abstract: The gradient dominance property is a condition weaker than strong convexity, yet it suffices to ensure global convergence of first-order methods even in non-convex optimization. This property finds application in various machine learning domains, including matrix decomposition, linear neural networks, and policy-based reinforcement learning (RL). In this paper, we study the stochastic homogeneous second-order descent method (SHSODM) for optimization satisfying gradient dominance with exponent $\alpha \in [1, 2]$, based on a recently proposed homogenization approach. Theoretically, we show that SHSODM achieves a sample complexity of $O(\epsilon^{-7/(2 \alpha) +1})$ for $\alpha \in [1, 3/2)$ and $\tilde{O}(\epsilon^{-2/\alpha})$ for $\alpha \in [3/2, 2]$. We further provide a variant of SHSODM with a variance reduction technique that enjoys an improved sample complexity of $O( \epsilon ^{-( 7-3\alpha ) /( 2\alpha )})$ for $\alpha \in [1,3/2)$. Our results match the state-of-the-art sample complexity bounds for stochastic gradient-dominated optimization without \emph{cubic regularization}. Since the homogenization approach relies only on solving extremal eigenvector problems instead of Newton-type systems, our methods gain the advantage of cheaper iterations and robustness on ill-conditioned problems. Numerical experiments on several RL tasks demonstrate the efficiency of SHSODM compared to other off-the-shelf methods.
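
To illustrate the homogenization idea, the sketch below computes a search direction from the leftmost eigenvector of the augmented matrix $\begin{pmatrix} H & g \\ g^\top & -\delta \end{pmatrix}$ instead of solving a Newton-type system. This follows the published deterministic HSODM construction; the stochastic sampling, step-size rule, and variance reduction discussed in the abstract are omitted, and the value of $\delta$ is an illustrative choice.

```python
# Search direction from an extremal eigenvector of the homogenized matrix.
import numpy as np

def homogenized_direction(H, g, delta=0.1):
    """Direction from the leftmost eigenvector of [[H, g], [g^T, -delta]]."""
    n = g.shape[0]
    F = np.block([[H, g.reshape(-1, 1)],
                  [g.reshape(1, -1), -delta * np.ones((1, 1))]])
    w, V = np.linalg.eigh(F)          # eigenpairs in ascending order
    v, t = V[:n, 0], V[n, 0]          # leftmost eigenvector, split as (v, t)
    if abs(t) > 1e-12:
        return v / t                  # normalized direction; g^T d = w[0] + delta
    return v                          # degenerate case: fall back to v itself

# Toy usage with an indefinite (non-convex) Hessian:
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 5)); H = (H + H.T) / 2
g = rng.standard_normal(5)
d = homogenized_direction(H, g)
print("descent check g^T d < 0:", g @ d < 0)
```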