arXiv daily

Optimization and Control (math.OC)

Mon, 12 Jun 2023

1.Convergence Rates of the Regularized Optimal Transport: Disentangling Suboptimality and Entropy

Authors:Hugo Malamut CEREMADE, Maxime Sylvestre CEREMADE

Abstract: We study the convergence of the transport plans $\gamma_\epsilon$ towards $\gamma_0$ as well as the cost of the entropy-regularized optimal transport $(c, \gamma_\epsilon)$ towards $(c, \gamma_0)$ as the regularization parameter $\epsilon$ vanishes in the setting of finite entropy marginals. We show that under the assumption of infinitesimally twisted cost and compactly supported marginals the distance $W_2(\gamma_\epsilon, \gamma_0)$ is asymptotically greater than $C\sqrt{\epsilon}$ and the suboptimality $(c, \gamma_\epsilon) - (c, \gamma_0)$ is of order $\epsilon$. In the quadratic cost case the compactness assumption is relaxed into a moment of order $2 + \delta$ assumption. Moreover, in the case of a Lipschitz transport map for the non-regularized problem, the distance $W_2(\gamma_\epsilon, \gamma_0)$ converges to $0$ at rate $\sqrt{\epsilon}$. Finally, if in addition the marginals have finite Fisher information, we prove $(c, \gamma_\epsilon) - (c, \gamma_0) \sim d\epsilon/2$ and we provide a companion expansion of $H(\gamma_\epsilon)$. These results are achieved by disentangling the role of the cost and the entropy in the regularized problem.
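
A minimal numerical sketch of the objects in this abstract, under toy assumptions: discrete 1-D measures with quadratic cost, where for sorted, equally weighted samples the unregularized plan $\gamma_0$ is the monotone matching, and a plain log-domain Sinkhorn routine computes $\gamma_\epsilon$. The paper's sharp rates concern continuous marginals, so the discrete numbers below are only indicative of the qualitative behavior.

```python
# Toy illustration (not from the paper): entropic OT between two discrete 1-D measures
# with quadratic cost. For sorted, equally weighted samples the unregularized optimal
# plan gamma_0 is the monotone (diagonal) matching, so the suboptimality
# <c, gamma_eps> - <c, gamma_0> and the plan difference can be monitored as eps shrinks.
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
n = 50
x = np.sort(rng.normal(size=n))                # source support
y = np.sort(rng.normal(loc=0.5, size=n))       # target support
a = np.full(n, 1.0 / n)                        # uniform marginals
b = np.full(n, 1.0 / n)
C = (x[:, None] - y[None, :]) ** 2             # quadratic cost matrix

gamma0 = np.diag(a)                            # monotone matching = unregularized optimum
cost0 = np.sum(gamma0 * C)

def sinkhorn_plan(C, a, b, eps, iters=10_000):
    """Log-domain Sinkhorn for the entropic plan gamma_eps (entropy relative to a x b)."""
    loga, logb = np.log(a), np.log(b)
    f = np.zeros_like(a)                       # dual potentials
    g = np.zeros_like(b)
    for _ in range(iters):
        f = -eps * logsumexp((g[None, :] - C) / eps + logb[None, :], axis=1)
        g = -eps * logsumexp((f[:, None] - C) / eps + loga[:, None], axis=0)
    return np.exp((f[:, None] + g[None, :] - C) / eps + loga[:, None] + logb[None, :])

for eps in [0.5, 0.2, 0.1, 0.05]:
    gamma_eps = sinkhorn_plan(C, a, b, eps)
    subopt = np.sum(gamma_eps * C) - cost0
    drift = np.abs(gamma_eps - gamma0).sum()
    print(f"eps = {eps:4.2f}   suboptimality = {subopt:.5f}   ||gamma_eps - gamma_0||_1 = {drift:.4f}")
```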

2.Sensitivity Analysis in Parametric Convex Vector Optimization

Authors:Duong Thi Viet An, Le Thanh Tung

Abstract: In this paper, sensitivity analysis of the efficient sets in parametric convex vector optimization is considered. Namely, the perturbation, weak perturbation, and proper perturbation maps are defined as set-valued maps. We establish formulas for computing the Fr\'{e}chet coderivative of the profile of these three kinds of perturbation maps. Because of the convexity assumptions, the conditions obtained are fairly simple compared to those in the general case. In addition, our conditions are stated directly in terms of the data of the problem. It is worth emphasizing that our approach is based on convex analysis tools, which differ from those used in the general case.
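
For orientation, a sketch of the standard objects behind these maps, written under the usual setup in this literature (a vector objective $f$, a constraint map $G$, an ordering cone $K$); the paper's exact definitions and notation may differ, and the "profile" of such a map is commonly understood as the map plus the ordering cone.

```latex
% Common setup assumed here (illustrative; notation may differ from the paper):
% f : X \times W \to Y,  constraint map G : W \rightrightarrows X,  ordering cone K \subset Y.
\begin{aligned}
\mathcal{F}(w) &:= \{\, f(x,w) : x \in G(w) \,\}
    && \text{(feasible value map)}\\
\mathcal{P}(w) &:= \operatorname{Min}\nolimits_{K} \mathcal{F}(w)
    && \text{(perturbation map: efficient values)}\\
\mathcal{P}_{\mathrm{w}}(w) &:= \operatorname{WMin}\nolimits_{K} \mathcal{F}(w)
    && \text{(weak perturbation map)}\\
\mathcal{P}_{\mathrm{pr}}(w) &:= \operatorname{PrMin}\nolimits_{K} \mathcal{F}(w)
    && \text{(proper perturbation map)}\\
\text{profile of } \mathcal{P} &: \; w \mapsto \mathcal{P}(w) + K.
\end{aligned}
```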

3.Towards continuous-time MPC: a novel trajectory optimization algorithm

Authors:Souvik Das, Siddhartha Ganguly, Muthyala Anjali, Debasish Chatterjee

Abstract: This article introduces a numerical algorithm that serves as a preliminary step toward solving continuous-time model predictive control (MPC) problems directly, without explicit time-discretization. The chief ingredients of the underlying optimal control problem (OCP) are a linear time-invariant system, quadratic instantaneous and terminal cost functions, and convex path constraints. The thrust of the method is to finitely parameterize the admissible space of control trajectories and to solve the OCP, satisfying the given constraints at every time instant, in a tractable manner without explicit time-discretization. The ensuing OCP turns out to be a convex semi-infinite program (SIP), and some recently developed results are employed to obtain an optimal solution to this convex SIP. Numerical illustrations on benchmark models are included to show the efficacy of the algorithm.
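
A minimal sketch of the finite-parameterization idea, under illustrative assumptions (a double-integrator plant, a Legendre basis for the control, a box bound on the input, arbitrarily chosen weights): the control is a finite combination of basis functions, the state response is linear in the coefficients, and the resulting convex problem is solved with cvxpy. The path constraint "for all $t$" is only sampled on a fine grid here, which is a simplification; handling the resulting convex SIP without explicit time-discretization is exactly the paper's contribution and is not reproduced below.

```python
# Sketch (illustrative, not the paper's algorithm): continuous-time LQ-type OCP for a
# double integrator with the control finitely parameterized in a Legendre basis.
# The path constraint |u(t)| <= u_max is enforced only at grid points, a sampled
# surrogate for the semi-infinite constraint treated exactly in the paper.
import numpy as np
import cvxpy as cp

T, M, K = 2.0, 201, 6                        # horizon, quadrature points, basis size
t = np.linspace(0.0, T, M)
dt = t[1] - t[0]
w = np.full(M, dt); w[0] = w[-1] = dt / 2    # trapezoidal quadrature weights

# Legendre basis on [0, T]: u(t) = sum_k theta_k * phi_k(t).
Phi = np.polynomial.legendre.legvander(2.0 * t / T - 1.0, K - 1)   # shape (M, K)

# Double integrator x1' = x2, x2' = u, so
#   x1(t) = x1(0) + t*x2(0) + int_0^t (t - s) u(s) ds,   x2(t) = x2(0) + int_0^t u(s) ds.
x0 = np.array([1.0, 0.0])
G1 = np.zeros((M, K)); G2 = np.zeros((M, K))
for m in range(M):
    s = t[: m + 1]
    for k in range(K):
        G1[m, k] = np.trapz((t[m] - s) * Phi[: m + 1, k], s)
        G2[m, k] = np.trapz(Phi[: m + 1, k], s)

q1, q2, r, u_max = 10.0, 1.0, 0.1, 1.5       # cost weights and input bound (assumed)

theta = cp.Variable(K)
u = Phi @ theta
x1 = x0[0] + x0[1] * t + G1 @ theta
x2 = x0[1] + G2 @ theta
cost = cp.sum(cp.multiply(w, q1 * cp.square(x1) + q2 * cp.square(x2) + r * cp.square(u)))
prob = cp.Problem(cp.Minimize(cost), [cp.abs(u) <= u_max])
prob.solve()

print("basis coefficients:", np.round(theta.value, 3))
print("cost:", round(prob.value, 4), "  max |u| on grid:", round(np.max(np.abs(Phi @ theta.value)), 3))
```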

4.An agent-based decentralized threshold policy finding the constrained shortest paths

Authors:Francesca Rosset, Raffaele Pesenti, Franco Blanchini

Abstract: We consider a problem where autonomous agents enter a dynamic and unknown environment described by a network of weighted arcs. These agents move within the network from node to node according to a decentralized policy using only local information, with the goal of finding a path to an unknown sink node to leave the network. This policy makes each agent either move to some adjacent node or stop at the current node. The transition along an arc is allowed or denied based on a threshold mechanism that takes into account the number of agents already accumulated in the arc's end nodes and the arc's weight. We show that this policy ensures path-length optimality in the sense that, after a finite time, all new agents entering the network reach the closest sinks by shortest paths. The approach is then extended to support constraints on the paths that agents can follow.
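
A toy simulation in the spirit of this abstract, with a guessed threshold rule (a hop along arc $(i,j)$ is enabled when the occupancy gap exceeds the arc weight, $n_i - n_j > w_{ij}$); this rule is an assumption for illustration, not the paper's exact mechanism, and the constrained extension is not modelled. In this small example the occupancies along the route in use build up to the shortest-path distances to the sink, after which every newly injected agent travels to the sink along a shortest path.

```python
# Toy sketch (hypothetical threshold rule, not the paper's): agents injected at a source
# hop along an arc (i, j) only when n_i - n_j > w_ij; the sink absorbs arriving agents.
import heapq

# Undirected weighted graph as an adjacency dict; 's' is the source, 't' the sink.
graph = {
    's': {'a': 1, 'c': 4},
    'a': {'s': 1, 'b': 1},
    'b': {'a': 1, 't': 1},
    'c': {'s': 4, 't': 1},
    't': {'b': 1, 'c': 1},
}
source, sink = 's', 't'

def dijkstra_to(sink):
    """Reference shortest distances to the sink (centralized, for comparison only)."""
    dist = {v: float('inf') for v in graph}
    dist[sink] = 0
    pq = [(0, sink)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist[v]:
            continue
        for u, w in graph[v].items():
            if d + w < dist[u]:
                dist[u] = d + w
                heapq.heappush(pq, (d + w, u))
    return dist

def inject_agent(n):
    """Inject one agent at the source; it hops greedily while the threshold rule allows."""
    pos, path = source, [source]
    n[pos] += 1
    while pos != sink:
        moves = [v for v, w in graph[pos].items() if n[pos] - n[v] > w]
        if not moves:
            break                                 # the agent settles and adds to n[pos]
        nxt = min(moves, key=lambda v: n[v] + graph[pos][v])
        n[pos] -= 1
        n[nxt] += 1
        pos = nxt
        path.append(pos)
    if pos == sink:
        n[sink] -= 1                              # the sink absorbs the agent
    return path

n = {v: 0 for v in graph}
for _ in range(30):                               # inject agents until occupancies settle
    last_path = inject_agent(n)

print("occupancies        :", n)
print("distances to sink  :", dijkstra_to(sink))
print("path of last agent :", last_path)
```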

5.On the Computation-Communication Trade-Off with A Flexible Gradient Tracking Approach

Authors:Yan Huang, Jinming Xu

Abstract: We propose a flexible gradient tracking approach with adjustable computation and communication steps for solving distributed stochastic optimization problems over networks. The proposed method allows each node to perform multiple local gradient updates and multiple inter-node communications in each round, aiming to strike a balance between computation and communication costs according to the properties of the objective functions and the network topology in non-i.i.d. settings. Leveraging a properly designed Lyapunov function, we derive both the computation and communication complexities for achieving arbitrary accuracy on smooth and strongly convex objective functions. Our analysis demonstrates a sharp dependence of the convergence performance on the graph topology and the properties of the objective functions, highlighting the trade-off between computation and communication. Numerical experiments are conducted to validate our theoretical findings.
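
A minimal sketch of gradient tracking with an adjustable number of communication (gossip) rounds per iteration, on a ring of nodes solving a deterministic least-squares problem; several gossip rounds are modelled simply by mixing with $W^\tau$. The paper's flexible scheme additionally allows multiple local gradient updates per round and treats stochastic gradients, which this sketch omits; the step size and topology below are arbitrary choices.

```python
# Sketch (simplified relative to the paper): gradient tracking over a ring graph with an
# adjustable number of gossip rounds per iteration, for a deterministic least-squares
# objective f(x) = (1/n) * sum_i 0.5 * ||A_i x - b_i||^2 / m.
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 8, 5, 20                                  # nodes, dimension, samples per node
A = [rng.normal(size=(m, d)) for _ in range(n)]
b = [rng.normal(size=m) for _ in range(n)]

def grad(i, x):                                     # gradient of the local objective f_i
    return A[i].T @ (A[i] @ x - b[i]) / m

# Metropolis weights for a ring: symmetric, doubly stochastic mixing matrix W.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

def gradient_tracking(alpha=0.02, comm_rounds=2, iters=6000):
    X = np.zeros((n, d))                                  # local iterates
    Y = np.array([grad(i, X[i]) for i in range(n)])       # gradient trackers
    Wt = np.linalg.matrix_power(W, comm_rounds)           # tau gossip steps = mixing by W^tau
    for _ in range(iters):
        X_new = Wt @ X - alpha * Y
        Y = Wt @ Y + np.array([grad(i, X_new[i]) - grad(i, X[i]) for i in range(n)])
        X = X_new
    return X

# Compare against the centralized least-squares solution of the stacked system.
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
X = gradient_tracking()
print("max node distance to centralized optimum:", np.max(np.linalg.norm(X - x_star, axis=1)))
```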

6.Analysis of the vanishing discount limit for optimal control problems in continuous and discrete time

Authors:Piermarco Cannarsa, Stephane Gaubert, Cristian Mendico, Marc Quincampoix

Abstract: A classical problem in ergodic continuous-time control consists of studying the limit behavior of the optimal value of a discounted cost functional with infinite horizon as the discount factor $\lambda$ tends to zero. In the literature, this problem has been addressed under various controllability or ergodicity conditions ensuring that the rescaled value function converges uniformly to a constant limit. In this case the limit can be characterized as the unique constant such that a suitable Hamilton-Jacobi equation has at least one continuous viscosity solution. In this paper, we study this problem without such conditions, so that the aforementioned limit need not be constant. Our main result characterizes the uniform limit (when it exists) as the maximal subsolution of a system of Hamilton-Jacobi equations. Moreover, when such a subsolution is a viscosity solution, we obtain the convergence of optimal values as well as a rate of convergence. This mirrors the analysis of the discrete-time case, where we characterize the uniform limit as the supremum over a set of sub-invariant half-lines of the dynamic programming operator. The emerging structure in both the discrete- and continuous-time models shows that the supremum over sub-invariant half-lines with respect to the Lax-Oleinik semigroup/dynamic programming operator captures the behavior of the limit cost as the discount vanishes.
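
A toy discrete-time illustration (a hypothetical example, not from the paper) of why the rescaled values $\lambda v_\lambda$ need not converge to a constant without controllability or ergodicity: one state is trapped at running cost 1, another at running cost 0, so their limiting average costs differ.

```python
# Toy example (hypothetical): three-state deterministic control problem whose rescaled
# discounted values lambda * v_lambda converge to a NON-constant limit, because the
# dynamics are not controllable/ergodic. Convention: v(x) = min_a [c(x,a) + (1-lambda) v(x')].
import numpy as np

# actions[x] = list of (next_state, stage_cost); states 0 and 1 are absorbing.
actions = {
    0: [(0, 1.0)],                      # trapped with cost 1 per step
    1: [(1, 0.0)],                      # trapped with cost 0 per step
    2: [(2, 5.0), (0, 2.0), (1, 2.0)],  # may stay (expensive) or jump to 0 or 1
}

def discounted_value(lam, iters=50_000):
    """Value iteration for the discounted dynamic programming operator."""
    beta = 1.0 - lam
    v = np.zeros(3)
    for _ in range(iters):
        v = np.array([min(c + beta * v[y] for y, c in actions[x]) for x in range(3)])
    return v

for lam in [0.1, 0.01, 0.001]:
    v = discounted_value(lam)
    print(f"lambda = {lam:6.3f}   lambda * v_lambda = {np.round(lam * v, 4)}")
# As lambda -> 0 the rescaled values approach (1, 0, 0): the limit is state-dependent.
```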