Optimization and Control (math.OC)
Mon, 10 Jul 2023
1. Invex Programs: First Order Algorithms and Their Convergence
Authors: Adarsh Barik, Suvrit Sra, Jean Honorio
Abstract: Invex programs are a special class of non-convex problems in which every stationary point is a global minimum. While classical first-order gradient descent methods can solve them, they converge very slowly. In this paper, we propose new first-order algorithms to solve the general class of invex problems. We identify sufficient conditions for the convergence of our algorithms and provide rates of convergence. Furthermore, we go beyond unconstrained problems and provide a novel projected gradient method for constrained invex programs with convergence rate guarantees. We compare and contrast our results with existing first-order algorithms for a variety of unconstrained and constrained invex problems. To the best of our knowledge, our proposed algorithm is the first to solve constrained invex programs.
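(For context, a standard definition not taken from the abstract: a differentiable function $f:\mathbb{R}^n \to \mathbb{R}$ is invex if there exists a vector-valued map $\eta$ such that $f(y) - f(x) \geq \eta(y,x)^{\top} \nabla f(x)$ for all $x, y$. Any stationary point, i.e. $\nabla f(x) = 0$, is then a global minimizer, and convexity is recovered as the special case $\eta(y,x) = y - x$.)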
2. Tropical convexity in location problems
Authors: Andrei Comăneci
Abstract: We investigate location problems whose optima lie in the tropical convex hull of the input points. First, we study geodesically star-convex sets under the asymmetric tropical distance and introduce the class of tropically quasiconvex functions, whose sub-level sets have this shape; these functions are related to monotonic functions. We then show that location problems whose distances are measured by such tropically quasiconvex functions attain an optimum in the tropical convex hull of the input points. We also show that a similar result holds if the input points are replaced by tropically convex sets. Finally, we focus on applications to phylogenetics, presenting properties of consensus methods arising from our class of location problems.
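(For context, a standard definition not drawn from the abstract: in the max-plus convention, the tropical convex hull of points $v_1,\dots,v_n \in \mathbb{R}^d$ is $\mathrm{tconv}(v_1,\dots,v_n) = \{\lambda_1 \odot v_1 \oplus \cdots \oplus \lambda_n \odot v_n : \lambda_1,\dots,\lambda_n \in \mathbb{R}\}$, where $\oplus$ is the coordinatewise maximum and $\lambda \odot v$ adds the scalar $\lambda$ to every coordinate of $v$; the min-plus convention is analogous.)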
3. An Algorithm with Optimal Dimension-Dependence for Zero-Order Nonsmooth Nonconvex Stochastic Optimization
Authors: Guy Kornowski, Ohad Shamir
Abstract: We study the complexity of producing $(\delta,\epsilon)$-stationary points of Lipschitz objectives which are possibly neither smooth nor convex, using only noisy function evaluations. Recent works proposed several stochastic zero-order algorithms that solve this task, all of which suffer from a dimension-dependence of $\Omega(d^{3/2})$ where $d$ is the dimension of the problem, which was conjectured to be optimal. We refute this conjecture by providing a faster algorithm that has complexity $O(d\delta^{-1}\epsilon^{-3})$, which is optimal (up to numerical constants) with respect to $d$ and also optimal with respect to the accuracy parameters $\delta,\epsilon$, thus solving an open question due to Lin et al. (NeurIPS'22). Moreover, the convergence rate achieved by our algorithm is also optimal for smooth objectives, proving that in the nonconvex stochastic zero-order setting, nonsmooth optimization is as easy as smooth optimization. We provide algorithms that achieve the aforementioned convergence rate in expectation as well as with high probability. Our analysis is based on a simple yet powerful geometric lemma regarding the Goldstein-subdifferential set, which allows utilizing recent advancements in first-order nonsmooth nonconvex optimization.
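(For context, the standard definitions used in this line of work, not restated in the abstract: the Goldstein $\delta$-subdifferential of a Lipschitz function $f$ at $x$ is $\partial_{\delta} f(x) = \mathrm{conv}\big(\bigcup_{y \in B_{\delta}(x)} \partial f(y)\big)$, where $\partial f$ denotes the Clarke subdifferential, and a point $x$ is $(\delta,\epsilon)$-stationary if $\mathrm{dist}(0, \partial_{\delta} f(x)) \leq \epsilon$.)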