Optimization and Control (math.OC)
Thu, 20 Apr 2023
1. A Riemannian Dimension-reduced Second Order Method with Application in Sensor Network Localization
Authors: Tianyun Tang, Kim-Chuan Toh, Nachuan Xiao, Yinyu Ye
Abstract: In this paper, we propose a cubic-regularized Riemannian optimization method (RDRSOM), which partially exploits second order information and achieves the iteration complexity of $\mathcal{O}(1/\epsilon^{3/2})$. In order to reduce the per-iteration computational cost, we further propose a practical version of RDRSOM, which is an extension of the well-known Barzilai-Borwein method and achieves the iteration complexity of $\mathcal{O}(1/\epsilon^{3/2})$. We apply our method to solve a nonlinear formulation of the wireless sensor network localization problem whose feasible set is a Riemannian manifold that has not been considered in the literature before. Numerical experiments are conducted to verify the high efficiency of our algorithm compared to state-of-the-art Riemannian optimization methods and other nonlinear solvers.
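As a rough illustration of the Barzilai-Borwein ingredient mentioned in the abstract (not of the authors' RDRSOM itself), the sketch below runs Riemannian gradient descent with a BB step size on the unit sphere, minimizing a Rayleigh quotient. The manifold, objective, step-size safeguard, and the omission of vector transport are all illustrative simplifications.

    import numpy as np

    def riemannian_bb(A, x0, iters=300):
        """Riemannian gradient descent on the unit sphere with a (safeguarded) BB1 step size."""
        x = x0 / np.linalg.norm(x0)
        x_prev, g_prev, step = None, None, 1e-2
        for _ in range(iters):
            egrad = 2 * A @ x                     # Euclidean gradient of f(x) = x'Ax
            g = egrad - (x @ egrad) * x           # project onto the tangent space of the sphere
            if g_prev is not None:
                s, y = x - x_prev, g - g_prev
                denom = s @ y
                step = abs(s @ s / denom) if abs(denom) > 1e-12 else 1e-2
            x_prev, g_prev = x, g
            x = x - step * g
            x = x / np.linalg.norm(x)             # retraction back onto the sphere
        return x

    rng = np.random.default_rng(0)
    M = rng.standard_normal((50, 50))
    A = M + M.T                                   # symmetric test matrix
    x = riemannian_bb(A, rng.standard_normal(50))
    print(x @ A @ x, np.linalg.eigvalsh(A)[0])    # the two values should roughly agree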
2. An Adaptive Multi-Level Max-Plus Method for Deterministic Optimal Control Problems
Authors: Marianne Akian, Stéphane Gaubert, Shanqing Liu
Abstract: We introduce a new numerical method to approximate the solution of a finite horizon deterministic optimal control problem. We exploit two Hamilton-Jacobi-Bellman PDEs, arising by considering the dynamics in forward and backward time. This allows us to compute a neighborhood of the set of optimal trajectories, in order to reduce the search space. The solutions of both PDEs are successively approximated by max-plus linear combinations of appropriate basis functions, using a hierarchy of finer and finer grids. We show that the sequence of approximate value functions obtained in this way converges to the viscosity solution of the HJB equation in a neighborhood of optimal trajectories. Then, under certain regularity assumptions, we show that the number of arithmetic operations needed to compute an approximate optimal solution of a $d$-dimensional problem, up to a precision $\varepsilon$, is bounded by $O(C^d (1/\varepsilon))$, for some constant $C>1$, whereas ordinary grid-based methods have a complexity in $O(1/\varepsilon^{ad})$ for some constant $a>0$.
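The max-plus approximation underlying the method can be illustrated with a one-dimensional toy computation (a plain max-plus projection on a fixed grid, not the paper's adaptive multi-level scheme); the basis functions, curvature constant, and grid sizes below are illustrative assumptions.

    import numpy as np

    # Represent a value function as a max-plus linear combination
    #   V(x) ~ max_i ( lambda_i + phi_i(x) )
    # with concave quadratic basis functions phi_i(x) = -c (x - x_i)^2 and coefficients
    # given by the standard max-plus projection lambda_i = inf_x ( V(x) - phi_i(x) ).
    grid = np.linspace(-2.0, 2.0, 401)
    V = -np.abs(grid)                                    # toy nonsmooth "value function"
    centers = np.linspace(-2.0, 2.0, 9)
    c = 4.0
    Phi = -c * (grid[None, :] - centers[:, None]) ** 2   # shape (n_basis, n_grid)

    lam = np.min(V[None, :] - Phi, axis=1)               # max-plus projection
    V_approx = np.max(lam[:, None] + Phi, axis=0)        # max-plus linear combination

    print("max error:", np.max(np.abs(V - V_approx)))    # shrinks as more basis functions are added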
3. Uncertainty over Uncertainty in Environmental Policy Adoption: Bayesian Learning of Unpredictable Socioeconomic Costs
Authors: Matteo Basei, Giorgio Ferrari, Neofytos Rodosthenous
Abstract: The socioeconomic impact of pollution naturally comes with uncertainty due to, e.g., new technological developments in emissions abatement or demographic changes. On top of that, the trend of the future costs of the environmental damage is unknown: Will global warming dominate or technological advancements prevail? The truth is that we do not know which scenario will be realised and the scientific debate is still open. This paper captures those two layers of uncertainty by developing a real-options-like model in which a decision maker aims at adopting a once-and-for-all costly reduction in the current emissions rate, when the stochastic dynamics of the socioeconomic costs of pollution are subject to Brownian shocks and the drift is an unobservable random variable. By keeping track of the actual evolution of the costs, the decision maker is able to learn the unknown drift and to form a posterior dynamic belief of its true value. The resulting decision maker's timing problem boils down to a truly two-dimensional optimal stopping problem which we address via probabilistic free-boundary methods and a state-space transformation. We show that the optimal timing for implementing the emissions reduction policy is the first time that the learning process has become ``decisive'' enough; that is, when it exceeds a time-dependent percentage. This is given in terms of an endogenously determined threshold uniquely solving a nonlinear integral equation, which we can solve numerically. We discuss the implications of the optimal policy and also perform comparative statics to understand the role of the relevant model parameters in the optimal policy.
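A minimal sketch of the learning mechanism described above, under simplifying assumptions (arithmetic Brownian costs and a two-point prior on the drift; this is the textbook Bayesian filter, not the paper's stopping problem): observing the cost path updates the posterior probability of the high-drift scenario through a Girsanov likelihood ratio.

    import numpy as np

    mu_lo, mu_hi, sigma = 0.0, 0.5, 1.0           # candidate drifts and volatility (assumed values)
    true_mu, pi0 = mu_hi, 0.5                     # true scenario and prior belief
    T, n = 20.0, 20000
    dt = T / n
    rng = np.random.default_rng(1)

    # Simulate the cost path dX_t = mu dt + sigma dB_t under the true drift
    X = np.cumsum(true_mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n))
    t = dt * np.arange(1, n + 1)

    # Likelihood ratio of the high-drift model against the low-drift model, and the
    # resulting posterior belief pi_t = P(mu = mu_hi | observations up to time t)
    L = np.exp((mu_hi - mu_lo) / sigma**2 * X - (mu_hi**2 - mu_lo**2) / (2 * sigma**2) * t)
    pi = pi0 * L / (pi0 * L + (1 - pi0))

    print("posterior belief at time T:", pi[-1])  # typically close to 1, since the true drift is mu_hi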
4. Secondary Controller Design for the Safety of Nonlinear Systems via Sum-of-Squares Programming
Authors: Yankai Lin, Michelle S. Chong, Carlos Murguia
Abstract: We consider the problem of ensuring the safety of nonlinear control systems under adversarial signals. Using Lyapunov-based reachability analysis, we first give sufficient conditions to assess safety, i.e., to guarantee that the states of the control system, when starting from a given initial set, always remain in a prescribed safe set. We consider polynomial systems with semi-algebraic safe sets. Using the S-procedure for polynomial functions, safety conditions can be formulated as a Sum-Of-Squares (SOS) programme, which can be solved efficiently. When safety cannot be guaranteed, we provide tools via SOS to synthesize polynomial controllers that enforce safety of the closed-loop system. The theoretical results are illustrated through numerical simulations.
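To make the SOS step concrete, here is a minimal feasibility check (a toy certificate that a fixed univariate polynomial is a sum of squares, not the paper's safety conditions or controller synthesis); it assumes cvxpy with an SDP-capable solver such as SCS is installed.

    import cvxpy as cp

    # Certify that p(x) = x^4 + 2x^2 + 1 is SOS by finding a PSD Gram matrix Q with
    # p(x) = z(x)' Q z(x), where z(x) = [1, x, x^2].
    Q = cp.Variable((3, 3), PSD=True)
    constraints = [
        Q[0, 0] == 1,                 # constant term
        2 * Q[0, 1] == 0,             # coefficient of x
        2 * Q[0, 2] + Q[1, 1] == 2,   # coefficient of x^2
        2 * Q[1, 2] == 0,             # coefficient of x^3
        Q[2, 2] == 1,                 # coefficient of x^4
    ]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve(solver=cp.SCS)
    print(prob.status)                # "optimal" means an SOS certificate (Gram matrix) was found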
5. The spectral radius of a square matrix can be approximated by its "weighted" spectral norm
Authors: Yongqiang Wang
Abstract: In distributed optimization or Nash-equilibrium seeking over directed graphs, it is crucial to find a matrix norm under which the disagreement of individual agents' states contracts. In existing results, the matrix norm is usually defined by approximating the spectral radius of the matrix, which is possible when the matrix is real and has zero row-sums. In this brief note, we show that this technique can be applied to general square complex matrices. More specifically, we prove that the spectral radius of any complex square matrix can be approximated by its "weighted" spectral norm to an arbitrary degree of accuracy.
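The claim can be checked numerically with the classical Schur-based weighting (shown here as an illustration; it is not necessarily the construction used in the note): conjugating by the Schur basis and a geometric diagonal scaling drives the spectral norm down toward the spectral radius. The sketch assumes NumPy and SciPy.

    import numpy as np
    from scipy.linalg import schur

    rng = np.random.default_rng(2)
    A = rng.standard_normal((6, 6))
    rho = max(abs(np.linalg.eigvals(A)))          # spectral radius of A

    T, U = schur(A, output="complex")             # A = U T U*, with T upper triangular
    for scale in (1.0, 10.0, 100.0, 1000.0):
        D = np.diag(scale ** np.arange(6))        # geometric diagonal weights
        W = D @ U.conj().T @ A @ U @ np.linalg.inv(D)
        print(scale, np.linalg.norm(W, 2), rho)   # the weighted norm approaches rho as scale grows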
6. Projective Proximal Gradient Descent for A Class of Nonconvex Nonsmooth Optimization Problems: Fast Convergence Without Kurdyka-Lojasiewicz (KL) Property
Authors: Yingzhen Yang, Ping Li
Abstract: Nonconvex and nonsmooth optimization problems are important and challenging for statistics and machine learning. In this paper, we propose Projected Proximal Gradient Descent (PPGD), which solves a class of nonconvex and nonsmooth optimization problems where the nonconvexity and nonsmoothness come from a nonsmooth regularization term that is nonconvex but piecewise convex. In contrast with existing convergence analyses of accelerated PGD methods for nonconvex and nonsmooth problems based on the Kurdyka-\L{}ojasiewicz (K\L{}) property, we provide a new theoretical analysis showing local fast convergence of PPGD. It is proved that PPGD achieves a fast convergence rate of $\mathcal{O}(1/k^2)$ when the iteration number $k \ge k_0$ for a finite $k_0$, on a class of nonconvex and nonsmooth problems under mild assumptions, which is locally Nesterov's optimal convergence rate for first-order methods on smooth and convex objective functions with Lipschitz continuous gradient. Experimental results demonstrate the effectiveness of PPGD.
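For context, the sketch below shows the generic proximal gradient template on a nonconvex, nonsmooth problem, using the l0 penalty whose prox is hard thresholding; it is meant only to illustrate the proximal gradient mechanics, not the PPGD algorithm or the class of piecewise convex regularizers analysed in the paper.

    import numpy as np

    def prox_l0(v, lam_t):
        # prox of lam*||.||_0 with step t (lam_t = lam * t): hard thresholding
        out = v.copy()
        out[v**2 <= 2 * lam_t] = 0.0
        return out

    rng = np.random.default_rng(3)
    A = rng.standard_normal((60, 100))
    x_true = np.zeros(100)
    x_true[:5] = rng.standard_normal(5)
    b = A @ x_true
    lam = 0.05
    t = 1.0 / np.linalg.norm(A, 2) ** 2           # step 1/L, with L the Lipschitz constant of the gradient
    x = np.zeros(100)
    for _ in range(500):
        grad = A.T @ (A @ x - b)                  # gradient of the smooth part 0.5*||Ax - b||^2
        x = prox_l0(x - t * grad, lam * t)        # proximal (hard-thresholding) step

    print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.count_nonzero(x))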