# Optimization and Control (math.OC)

## Mon, 24 Jul 2023

##### 1. Decentralized Optimization Over Slowly Time-Varying Graphs: Algorithms and Lower Bounds

**Authors:** Dmitry Metelev, Aleksandr Beznosikov, Alexander Rogozin, Alexander Gasnikov, Anton Proskurnikov

**Abstract:** We consider a decentralized convex unconstrained optimization problem, where the cost function can be decomposed into a sum of strongly convex and smooth functions associated with individual agents, interacting over a static or time-varying network. Our main concern is the convergence rate of first-order optimization algorithms as a function of the network's graph, more specifically, of the condition numbers of its gossip matrices. We are interested in the case when the network is time-varying but the rate of change is restricted. We study two cases: a randomly changing network satisfying the Markov property, and a network changing in a deterministic manner. For the random case, we propose a decentralized optimization algorithm with accelerated consensus. For the deterministic scenario, we show that if the graph changes in a worst-case way, accelerated consensus is impossible even if only two edges are changed at each iteration. The fact that such a low rate of network change suffices to make accelerated consensus impossible is novel and improves on previous results in the literature.
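
As background for the consensus rates discussed above, a minimal (non-accelerated) gossip sketch: agents repeatedly average with their neighbours through a doubly stochastic gossip matrix $W$, and the convergence rate is governed by the spectrum (hence condition number) of $W$. The path graph and Metropolis weights below are illustrative choices, not the paper's setting.

```python
# Illustrative sketch (not the paper's method): plain gossip averaging on a
# fixed 4-agent path graph 1-2-3-4. Each agent mixes with its neighbours via
# a symmetric doubly stochastic gossip matrix W, so iterates converge to the
# average of the initial values at a rate set by the second eigenvalue of W.

def gossip_round(x, W):
    """One consensus step: x <- W x."""
    n = len(x)
    return [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]

def run_consensus(x0, W, iters):
    x = list(x0)
    for _ in range(iters):
        x = gossip_round(x, W)
    return x

# Metropolis weights for the path graph: w_ij = 1/3 on edges,
# diagonal filled so each row (and column) sums to one.
W = [
    [2/3, 1/3, 0.0, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 1/3, 1/3, 1/3],
    [0.0, 0.0, 1/3, 2/3],
]
x = run_consensus([4.0, 0.0, 0.0, 0.0], W, 200)  # average is 1.0
```

Accelerated consensus (the subject of the paper) replaces the plain iteration `x <- W x` with a Chebyshev-type recursion over several gossip rounds, which is exactly what worst-case deterministic graph changes can rule out.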

##### 2. Finite-sum optimization: Adaptivity to smoothness and loopless variance reduction

**Authors:** Bastien Batardière, Julien Chiquet, Joon Kwon

**Abstract:** For finite-sum optimization, variance-reduced (VR) gradient methods compute at each iteration the gradient of a single function (or of a mini-batch), and yet achieve faster convergence than SGD thanks to a carefully crafted lower-variance stochastic gradient estimator that reuses past gradients. Another important line of research of the past decade in continuous optimization is adaptive algorithms such as AdaGrad, which dynamically adjust the (possibly coordinate-wise) learning rate to past gradients and thereby adapt to the geometry of the objective function. Variants such as RMSprop and Adam demonstrate outstanding practical performance that has contributed to the success of deep learning. In this work, we present AdaVR, which combines the AdaGrad algorithm with variance-reduced gradient estimators such as SAGA or L-SVRG. We show that AdaVR inherits both the good convergence properties of VR methods and the adaptive nature of AdaGrad: in the case of $L$-smooth convex functions we establish a gradient complexity of $O(n+(L+\sqrt{nL})/\varepsilon)$ without prior knowledge of $L$. Numerical experiments demonstrate the superiority of AdaVR over state-of-the-art methods. Moreover, we empirically show that the RMSprop and Adam algorithms combined with variance-reduced gradient estimators achieve even faster convergence.
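
A minimal sketch of the combination described above, assuming the 1-D finite sum $f_i(x) = \tfrac12 (x - b_i)^2$ whose minimizer is the mean of the $b_i$: a SAGA estimator feeding an AdaGrad-style step. This illustrates the idea only and is not the paper's AdaVR algorithm; the step-size constant `eta` is an arbitrary choice.

```python
import random

# Illustrative sketch: SAGA variance-reduced estimator + AdaGrad step on
# f(x) = (1/n) sum_i 0.5 * (x - b_i)^2. No knowledge of the smoothness
# constant L is used: the step is scaled by the accumulated gradient norms.

def saga_adagrad(b, iters=5000, eta=1.0, eps=1e-8, seed=0):
    rng = random.Random(seed)
    n = len(b)
    x = 0.0
    table = [x - bi for bi in b]           # stored component gradients at x0
    table_mean = sum(table) / n
    G = 0.0                                # AdaGrad accumulator
    for _ in range(iters):
        i = rng.randrange(n)
        gi = x - b[i]                      # fresh gradient of component i
        g = gi - table[i] + table_mean     # SAGA estimator (unbiased)
        table_mean += (gi - table[i]) / n  # keep the table mean up to date
        table[i] = gi
        G += g * g
        x -= eta * g / (G ** 0.5 + eps)    # AdaGrad-style adaptive step
    return x

b = [1.0, 2.0, 3.0, 6.0]
x_hat = saga_adagrad(b)                    # minimizer is mean(b) = 3.0
```

Because the SAGA estimator's variance vanishes at the optimum, the AdaGrad accumulator `G` stops growing there, so the effective step size does not decay to zero prematurely.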

##### 3. Simultaneous Optimization of Launch Vehicle Stage and Trajectory Considering Operational Safety Constraints

**Authors:** Jaeyoul Ko, Jaewoo Kim, Jimin Choi, Jaemyung Ahn

**Abstract:** A conceptual design of a launch vehicle involves the optimization of trajectory and stages considering its launch operations. This process encompasses various disciplines, such as structural design, aerodynamics, propulsion systems, flight control, and stage sizing. Traditional approaches used for the conceptual design of a launch vehicle conduct the stage and trajectory designs sequentially, often leading to high computational complexity and suboptimal results. This paper presents an optimization framework that addresses both trajectory optimization and staging in an integrated way. The proposed framework aims to maximize the payload-to-liftoff mass ratio while satisfying the constraints required for safe launch operations (e.g., the impact points of burnt stages and fairing). A case study demonstrates the advantage of the proposed framework compared to the traditional sequential optimization approach.
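
As a toy illustration of the coupling between staging and trajectory, the ideal rocket equation already shows how the split of total delta-v between stages changes the payload-to-liftoff mass ratio. The sketch below grid-searches a two-stage split; the specific impulses, structural coefficients, and delta-v budget are made-up placeholder numbers, and the paper's framework optimizes far more than this (trajectory, safety constraints on impact points, etc.).

```python
import math

# Toy staging illustration (not the paper's framework): for a two-stage
# vehicle under the ideal rocket equation, find the delta-v split that
# maximizes the payload-to-liftoff mass ratio. All numbers are placeholders.

G0 = 9.80665            # standard gravity, m/s^2
DV_TOTAL = 9400.0       # m/s, assumed total delta-v budget

def stage_payload_fraction(dv, isp, eps):
    """Per-stage payload fraction: (exp(-dv/(Isp*g0)) - eps) / (1 - eps).

    eps is the structural coefficient ms/(ms+mp); a non-positive result
    means the stage cannot deliver that delta-v at all.
    """
    return (math.exp(-dv / (isp * G0)) - eps) / (1.0 - eps)

def best_split(isp1, eps1, isp2, eps2, steps=1000):
    best_ratio, best_dv1 = 0.0, None
    for k in range(1, steps):
        dv1 = DV_TOTAL * k / steps
        l1 = stage_payload_fraction(dv1, isp1, eps1)
        l2 = stage_payload_fraction(DV_TOTAL - dv1, isp2, eps2)
        if l1 > 0 and l2 > 0 and l1 * l2 > best_ratio:
            best_ratio, best_dv1 = l1 * l2, dv1
    return best_ratio, best_dv1

ratio, dv1 = best_split(isp1=300.0, eps1=0.08, isp2=350.0, eps2=0.10)
```

Even this stripped-down version makes the point of the abstract: fixing the staging first and optimizing the trajectory afterwards (or vice versa) forfeits exactly this kind of joint trade-off.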

##### 4. Dissipative State and Output Estimation of Systems with General Delays

**Authors:** Qian Feng, Feng Xiao, Xiaoyu Wang

**Abstract:** Dissipative state and output estimation for continuous time-delay systems poses a significant challenge when an unlimited number of pointwise and general distributed delays (DDs) are concerned. We propose an effective solution to this open problem using the Krasovskiĭ functional (KF) framework in conjunction with a quadratic supply rate function, where both the plant and the estimator can accommodate an unlimited number of pointwise and general DDs. All DDs can contain an unlimited number of square-integrable kernel functions, which are treated by an equivalent decomposition-approximation scheme. This novel approach allows for the factorization or approximation of any kernel function without introducing conservatism, and facilitates the construction of a complete-type KF with integral kernels that can encompass any number of differentiable (in the sense of weak derivatives) and linearly independent functions. Our proposed solution is expressed as convex semidefinite programs, presented in two theorems along with an iterative algorithm, which eliminates the need for nonlinear solvers. We demonstrate the effectiveness of our method on two challenging numerical experiments, including a system stabilized by a non-smooth controller.
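
For background, the quadratic supply rate referred to here follows the standard template from dissipativity theory (this is the general form, not the paper's specific construction): with disturbance $w$ and estimation-error output $z$,

$$
s(w, z) = \begin{pmatrix} z \\ w \end{pmatrix}^{\top} \begin{pmatrix} Q & S \\ S^{\top} & R \end{pmatrix} \begin{pmatrix} z \\ w \end{pmatrix},
$$

and the error dynamics are dissipative with respect to $s$ if a functional $V$ (here of Krasovskiĭ type) satisfies $\dot V \le s(w(t), z(t))$ along trajectories. Choosing $Q = -I$, $S = 0$, $R = \gamma^2 I$ recovers an $\mathcal{L}_2$-gain bound of $\gamma$, while $Q = R = 0$, $S = \tfrac{1}{2} I$ recovers passivity.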

##### 5. Accelerated Zero-Order SGD Method for Solving the Black Box Optimization Problem under "Overparametrization" Condition

**Authors:** Aleksandr Lobanov, Alexander Gasnikov

**Abstract:** This paper is devoted to solving a convex stochastic optimization problem in an overparametrization setup for the case where the original gradient computation is not available, but the objective function value can be computed. For this class of problems we provide a novel gradient-free algorithm, constructed by applying a gradient approximation with $l_2$ randomization in place of the gradient oracle in the biased Accelerated SGD algorithm; this generalizes the convergence results of the AC-SA algorithm to the case where the oracle returns a noisy (inexact) objective function value. We also perform a detailed analysis to find the maximum admissible level of adversarial noise at which the desired accuracy can still be guaranteed. We verify the theoretical convergence results on a model example.
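
A minimal sketch of the building block the abstract describes: a two-point gradient approximation with $l_2$ (unit-sphere) randomization, here plugged into plain gradient descent on a simple quadratic rather than the paper's accelerated AC-SA-type scheme. The step size and smoothing parameter are arbitrary illustrative choices.

```python
import math
import random

# Illustrative sketch: l_2-randomized two-point gradient estimator
#   g(x) = (d / (2*tau)) * (f(x + tau*e) - f(x - tau*e)) * e,
# with e drawn uniformly from the unit sphere, used inside plain gradient
# descent on f(x) = 0.5 * ||x - b||^2 (not the paper's accelerated method).

def sphere_sample(d, rng):
    """Uniform sample from the unit sphere in R^d (normalized Gaussian)."""
    v = [rng.gauss(0.0, 1.0) for _ in range(d)]
    nrm = math.sqrt(sum(vi * vi for vi in v))
    return [vi / nrm for vi in v]

def zo_gradient(f, x, tau, rng):
    """Two-point zero-order gradient estimator: only f-values are queried."""
    d = len(x)
    e = sphere_sample(d, rng)
    xp = [xi + tau * ei for xi, ei in zip(x, e)]
    xm = [xi - tau * ei for xi, ei in zip(x, e)]
    scale = d * (f(xp) - f(xm)) / (2.0 * tau)
    return [scale * ei for ei in e]

b = [1.0, -2.0, 0.5]
f = lambda x: 0.5 * sum((xi - bi) ** 2 for xi, bi in zip(x, b))

rng = random.Random(0)
x = [0.0, 0.0, 0.0]
for _ in range(4000):
    g = zo_gradient(f, x, tau=1e-4, rng=rng)
    x = [xi - 0.1 * gi for xi, gi in zip(x, g)]
```

The estimator is unbiased for the $\tau$-smoothed objective and only ever queries function values, which is exactly the black-box oracle model of the abstract; the paper's contribution is running such an estimator through an accelerated scheme and quantifying the admissible oracle noise.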

##### 6. Open Problem: Polynomial linearly-convergent method for geodesically convex optimization?

**Authors:** Christopher Criscitiello, David Martínez-Rubio, Nicolas Boumal

**Abstract:** Let $f \colon \mathcal{M} \to \mathbb{R}$ be a Lipschitz and geodesically convex function defined on a $d$-dimensional Riemannian manifold $\mathcal{M}$. Does there exist a first-order deterministic algorithm which (a) uses at most $O(\mathrm{poly}(d) \log(\epsilon^{-1}))$ subgradient queries to find a point with target accuracy $\epsilon$, and (b) requires only $O(\mathrm{poly}(d))$ arithmetic operations per query? In convex optimization, the classical ellipsoid method achieves this. After detailing related work, we provide an ellipsoid-like algorithm with query complexity $O(d^2 \log^2(\epsilon^{-1}))$ and per-query complexity $O(d^2)$ for the limited case where $\mathcal{M}$ has constant curvature (hemisphere or hyperbolic space). We then detail possible approaches and corresponding obstacles for designing an ellipsoid-like method for general Riemannian manifolds.
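
For reference, the classical Euclidean ellipsoid method mentioned above can be sketched as follows for $d = 2$; the open problem asks for an analogue of this on Riemannian manifolds. The test function below is an arbitrary smooth convex example, and the initial radius is assumed to enclose the minimizer.

```python
import math

# Sketch of the classical ellipsoid method in flat R^d (d = 2 here).
# Ellipsoid {y : (y - x)' P^{-1} (y - x) <= 1} is cut through its center
# by the subgradient halfspace and replaced by the minimum-volume ellipsoid
# containing the remaining half.

def ellipsoid_minimize(grad, f, x0, radius, iters):
    d = len(x0)
    x = list(x0)
    P = [[radius ** 2 if i == j else 0.0 for j in range(d)] for i in range(d)]
    best_x, best_f = list(x), f(x)
    for _ in range(iters):
        g = grad(x)
        Pg = [sum(P[i][j] * g[j] for j in range(d)) for i in range(d)]
        gPg = sum(g[i] * Pg[i] for i in range(d))
        if gPg <= 0.0:           # zero subgradient: center is optimal
            break
        gt = [Pg[i] / math.sqrt(gPg) for i in range(d)]   # P g / sqrt(g'Pg)
        x = [x[i] - gt[i] / (d + 1) for i in range(d)]    # shifted center
        coef = d * d / (d * d - 1.0)
        P = [[coef * (P[i][j] - 2.0 / (d + 1) * gt[i] * gt[j])
              for j in range(d)] for i in range(d)]       # shrunken shape
        if f(x) < best_f:
            best_x, best_f = list(x), f(x)
    return best_x, best_f

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
grad = lambda x: [2.0 * (x[0] - 1.0), 2.0 * (x[1] + 2.0)]
best_x, best_f = ellipsoid_minimize(grad, f, [0.0, 0.0], radius=10.0, iters=300)
```

Each iteration costs $O(d^2)$ arithmetic and shrinks the ellipsoid volume by a constant factor, giving the $\mathrm{poly}(d)\log(\epsilon^{-1})$ behavior the open problem seeks on manifolds.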

##### 7. Impulsive optimal control problems with time delays in the drift term

**Authors:** Giovanni Fusco, Monica Motta

**Abstract:** We introduce a notion of bounded variation solution for a new class of nonlinear control systems with ordinary and impulsive controls, in which the drift function depends not only on the state, but also on its past history, through a finite number of time delays. After proving the well-posedness of such solutions and the continuity of the corresponding input-output map with respect to suitable topologies, we establish necessary optimality conditions for an associated optimal control problem. The approach, which involves approximating the problem by a non-impulsive optimal control problem with time delays and using Ekeland's principle combined with a recent, nonsmooth version of the Maximum Principle for conventional delayed systems, allows us to deal with mild regularity assumptions and a general endpoint constraint.

##### 8. Optimal Algorithm with Complexity Separation for Strongly Convex-Strongly Concave Composite Saddle Point Problems

**Authors:** Ekaterina Borodich, Georgiy Kormakov, Dmitry Kovalev, Aleksandr Beznosikov, Alexander Gasnikov

**Abstract:** In this work, we focus on the following saddle point problem $\min_x \max_y p(x) + R(x,y) - q(y)$, where $R(x,y)$ is $L_R$-smooth, $\mu_x$-strongly convex, $\mu_y$-strongly concave and $p(x), q(y)$ are convex and $L_p$-, $L_q$-smooth, respectively. We present a new algorithm with optimal overall complexity $\mathcal{O}\left(\left(\sqrt{\frac{L_p}{\mu_x}} + \frac{L_R}{\sqrt{\mu_x \mu_y}} + \sqrt{\frac{L_q}{\mu_y}}\right)\log \frac{1}{\varepsilon}\right)$ and separation of oracle calls between the composite and saddle parts. This algorithm requires $\mathcal{O}\left(\left(\sqrt{\frac{L_p}{\mu_x}} + \sqrt{\frac{L_q}{\mu_y}}\right) \log \frac{1}{\varepsilon}\right)$ oracle calls for $\nabla p(x)$ and $\nabla q(y)$ and $\mathcal{O} \left( \max\left\{\sqrt{\frac{L_p}{\mu_x}}, \sqrt{\frac{L_q}{\mu_y}}, \frac{L_R}{\sqrt{\mu_x \mu_y}} \right\}\log \frac{1}{\varepsilon}\right)$ oracle calls for $\nabla R(x,y)$ to find an $\varepsilon$-solution of the problem. To the best of our knowledge, we are the first to develop an optimal algorithm with complexity separation in the case $\mu_x \neq \mu_y$. We also apply this algorithm to a bilinear saddle point problem and obtain the optimal complexity for this class of problems.
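
A small numeric illustration of the separation (treating the stated bounds as exact counts with unit constants, and plugging in arbitrary example parameters): when the coupling term is ill-conditioned, $\nabla R$ is queried far more often than $\nabla p$ and $\nabla q$, which is exactly the saving the separation buys when the composite gradients are expensive.

```python
import math

# Evaluate the two oracle-call bounds from the abstract for sample constants.
# Constants hidden by the O(.) notation are taken as 1; the parameter values
# below are arbitrary examples, not from the paper.

def oracle_calls(Lp, Lq, LR, mux, muy, eps):
    log_term = math.log(1.0 / eps)
    # calls to grad p and grad q: (sqrt(Lp/mux) + sqrt(Lq/muy)) * log(1/eps)
    comp = (math.sqrt(Lp / mux) + math.sqrt(Lq / muy)) * log_term
    # calls to grad R: max{sqrt(Lp/mux), sqrt(Lq/muy), LR/sqrt(mux*muy)} * log(1/eps)
    saddle = max(math.sqrt(Lp / mux), math.sqrt(Lq / muy),
                 LR / math.sqrt(mux * muy)) * log_term
    return comp, saddle

# Ill-conditioned coupling: LR/sqrt(mux*muy) dominates both composite terms.
comp, saddle = oracle_calls(Lp=10.0, Lq=10.0, LR=1000.0, mux=1.0, muy=0.01, eps=1e-6)
```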