arXiv daily

Optimization and Control (math.OC)

Mon, 03 Jul 2023

1. Quantifying Distributional Model Risk in Marginal Problems via Optimal Transport

Authors: Yanqin Fan, Hyeonseok Park, Gaoqian Xu

Abstract: This paper studies distributional model risk in marginal problems, where each marginal measure is assumed to lie in a Wasserstein ball centered at a fixed reference measure with a given radius. Theoretically, we establish several fundamental results including strong duality, finiteness of the proposed Wasserstein distributional model risk, and the existence of an optimizer at each radius. In addition, we show continuity of the Wasserstein distributional model risk as a function of the radius. Using strong duality, we extend the well-known Makarov bounds for the distribution function of the sum of two random variables with given marginals to Wasserstein distributionally robust Makarov bounds. Practically, we illustrate our results on four distinct applications when the sample information comes from multiple data sources and only some marginal reference measures are identified. They are: partial identification of treatment effects; externally valid treatment choice via robust welfare functions; Wasserstein distributionally robust estimation under data combination; and evaluation of the worst aggregate risk measures.
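For reference, the classical Makarov bounds that the paper robustifies can be stated as follows (up to the usual left-limit technicalities), for random variables X and Y with marginal distribution functions F and G:

```latex
% Classical Makarov bounds for the distribution of X + Y with fixed marginals F and G;
% the paper extends these bounds to Wasserstein balls around reference marginals.
\[
  \sup_{x \in \mathbb{R}} \max\{F(x) + G(z - x) - 1,\; 0\}
  \;\le\; \Pr(X + Y \le z)
  \;\le\; \inf_{x \in \mathbb{R}} \min\{F(x) + G(z - x),\; 1\}.
\]
```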

2. Variational theory and algorithms for a class of asymptotically approachable nonconvex problems

Authors: Hanyang Li, Ying Cui

Abstract: We investigate a class of composite nonconvex functions, where the outer function is the sum of univariate extended-real-valued convex functions and the inner function is the limit of difference-of-convex functions. A notable feature of this class is that the inner function can be merely lower semicontinuous instead of continuous. It covers a range of important yet challenging applications, including the composite value functions of nonlinear programs, the weighted value-at-risk for continuously distributed random variables, and composite rank functions. We propose an asymptotic decomposition of the composite function that guarantees epi-convergence to the original function, leading to necessary optimality conditions for the corresponding minimization problems. The proposed decomposition also enables us to design a numerical algorithm that is provably convergent to a point satisfying the newly introduced optimality conditions. These results expand on the study of so-called amenable functions introduced by Poliquin and Rockafellar in 1992, which are compositions of convex functions with smooth maps, and the prox-linear methods for their minimization.
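Schematically, and with purely illustrative notation (the symbols $\varphi_i$, $g_i$, $u_{i,k}$, $v_{i,k}$ below are not the paper's), the composite class described above can be written as

```latex
% Illustrative schematic of the composite class: univariate convex outer pieces
% composed with inner functions that are pointwise limits of DC functions.
\[
  \min_{x \in \mathbb{R}^n} \; \sum_{i=1}^{m} \varphi_i\bigl(g_i(x)\bigr),
  \qquad
  g_i = \lim_{k \to \infty} \bigl(u_{i,k} - v_{i,k}\bigr),
  \quad u_{i,k},\, v_{i,k} \ \text{convex},
\]
% where each \varphi_i : \mathbb{R} \to \mathbb{R} \cup \{+\infty\} is convex
% and each g_i may be merely lower semicontinuous.
```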

3. Monte Carlo Policy Gradient Method for Binary Optimization

Authors: Cheng Chen, Ruitao Chen, Tianyou Li, Ruichen Ao, Zaiwen Wen

Abstract: Binary optimization has a wide range of applications in combinatorial optimization problems such as MaxCut, MIMO detection, and MaxSAT. However, these problems are typically NP-hard due to the binary constraints. We develop a novel probabilistic model to sample the binary solution according to a parameterized policy distribution. Specifically, minimizing the KL divergence between the parameterized policy distribution and the Gibbs distribution of the function value leads to a stochastic optimization problem whose policy gradient can be derived explicitly, as in reinforcement learning. For coherent exploration in discrete spaces, parallel Markov Chain Monte Carlo (MCMC) methods are employed to sample from the policy distribution with diversity and approximate the gradient efficiently. We further develop a filter scheme that replaces the original objective function with one augmented by a local search technique to broaden the horizon of the function landscape. Convergence of the policy gradient method to stationary points in expectation is established based on a concentration inequality for MCMC. Numerical results show that this framework is very promising, providing near-optimal solutions for many binary optimization problems.
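As a rough illustration of the policy-gradient idea described above (not the paper's algorithm: the parallel MCMC sampler and the filter scheme are omitted, the policy is a simple independent-Bernoulli model, and the objective is a generic quadratic binary cost), a minimal sketch might look like:

```python
import numpy as np

# Minimal sketch: REINFORCE-style gradient of KL(p_theta || Gibbs) for binary x.
# p_theta is an independent-Bernoulli policy; the Gibbs target is exp(-f(x)/T)/Z.
# grad KL = E_{p_theta}[ (log p_theta(x) + f(x)/T) * grad log p_theta(x) ].

rng = np.random.default_rng(0)
n, T, lr, batch = 20, 1.0, 0.05, 256
Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2      # generic quadratic cost

def f(x):                                               # binary objective, x in {0,1}^n
    return x @ Q @ x

theta = np.zeros(n)                                     # logits of the Bernoulli policy
for it in range(500):
    p = 1.0 / (1.0 + np.exp(-theta))
    X = (rng.random((batch, n)) < p).astype(float)      # sample x ~ p_theta
    logp = (X * np.log(p + 1e-12) + (1 - X) * np.log(1 - p + 1e-12)).sum(axis=1)
    weight = logp + np.array([f(x) for x in X]) / T     # log p_theta(x) + f(x)/T
    score = X - p                                       # d log p_theta / d theta
    grad = ((weight - weight.mean())[:, None] * score).mean(axis=0)  # baseline-centred
    theta -= lr * grad

best = (theta > 0).astype(float)                        # greedy rounding of the policy
print("objective of rounded solution:", f(best))
```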

4. Global stabilization of sterile insect technique model by feedback laws

Authors: Kala Agbo Bidi (LJLL), Luis Almeida (LJLL), Jean-Michel Coron (LJLL)

Abstract: The Sterile Insect Technique or SIT is presently one of the most ecological methods for controlling insect pests responsible for disease transmission or crop destruction worldwide. This technique consists of releasing sterile males into the insect pest population. This approach aims at reducing fertility in the population and, consequently, at significantly reducing the native insect population after a few generations. In this work, we study the global stabilization of a pest population at the extinction equilibrium by the SIT method. We construct explicit feedback laws that stabilize the model and perform numerical simulations to show the efficiency of our feedback laws. The different feedback laws are also compared, taking into account their possible implementation in field interventions.

5. Minimal-time nonlinear control via semi-infinite programming

Authors: Antoine Oustry (OptimiX, LIX, ENPC), Matteo Tacchi (GIPSA-lab)

Abstract: We address the problem of computing a control for a time-dependent nonlinear system to reach a target set in a minimal time. To solve this minimal time control problem, we introduce a hierarchy of linear semi-infinite programs, the values of which converge to the value of the control problem. These semi-infinite programs are increasing restrictions of the dual of the nonlinear control problem, which is a maximization problem over the subsolutions of the Hamilton-Jacobi-Bellman (HJB) equation. Our approach is compatible with generic dynamical systems and state constraints. Specifically, we use an oracle that, for a given differentiable function, returns a point at which the function violates the HJB inequality. We solve the semi-infinite programs using a classical convex optimization algorithm with a convergence rate of O(1/k), where k is the number of calls to the oracle. This algorithm yields subsolutions of the HJB equation that approximate the value function and provide a lower bound on the optimal time. We study the closed-loop control built on the obtained approximate value functions, and we give theoretical guarantees on its performance depending on the approximation error for the value function. We show promising numerical results for three non-polynomial systems with up to 6 state variables and 5 control variables.
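To give a flavour of an oracle-based semi-infinite programming scheme (a generic exchange/cutting-plane loop on a toy linear SIP, not the paper's HJB hierarchy or its O(1/k) algorithm), consider:

```python
import numpy as np
from scipy.optimize import linprog

# Toy linear semi-infinite program solved by an exchange (cutting-plane) loop:
#   maximize  c^T x   subject to  a(t)^T x <= b(t)  for all t in [0, 1].
# The "oracle" returns a parameter t at which the current iterate violates the
# constraint the most, mirroring the HJB-violation oracle described in the abstract.

c = np.array([1.0, 1.0])
a = lambda t: np.array([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
b = lambda t: 1.0                          # constraints: x must lie in the unit disc

def oracle(x, grid=np.linspace(0.0, 1.0, 2001)):
    viol = np.array([a(t) @ x - b(t) for t in grid])
    k = int(np.argmax(viol))
    return grid[k], viol[k]

ts = [0.0, 0.25, 0.5, 0.75]                # initial finite constraint sample
for it in range(50):
    A_ub = np.array([a(t) for t in ts])
    b_ub = np.array([b(t) for t in ts])
    res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(-5, 5)] * 2)  # maximize c^T x
    x = res.x
    t_star, v = oracle(x)
    if v <= 1e-6:                          # (approximately) feasible for all t
        break
    ts.append(t_star)                      # add the most violated constraint

print("approximate SIP optimum:", x)       # should approach (1/sqrt(2), 1/sqrt(2))
```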

6. Coefficient Control of Variational Inequalities

Authors: Andreas Hehl, Denis Khimin, Ira Neitzel, Nicolai Simon, Thomas Wick, Winnifried Wollner

Abstract: Within this chapter, we discuss control in the coefficients of an obstacle problem. Utilizing tools from H-convergence, we show existence of optimal solutions. First-order necessary optimality conditions are obtained after deriving directional differentiability of the coefficient-to-solution mapping for the obstacle problem. Further, considering a regularized obstacle problem as a constraint yields a limiting optimality system after proving strong convergence of the regularized control and state variables. Numerical examples underline convergence with respect to the regularization. Finally, some numerical experiments highlight the possible extension of the results to coefficient control in phase-field fracture.

7. On the stochastic inventory problem under order capacity constraints

Authors: Roberto Rossi, Zhen Chen, S. Armagan Tarim

Abstract: We consider the single-item single-stocking location stochastic inventory system under a fixed ordering cost component. A long-standing problem is that of determining the structure of the optimal control policy when this system is subject to order quantity capacity constraints; to date, only partial characterisations of the optimal policy have been discussed. An open question is whether a policy with a single continuous interval over which ordering is prescribed is optimal for this problem. Under the so-called "continuous order property" conjecture, we show that the optimal policy takes the modified multi-$(s,S)$ form. Moreover, we provide a numerical counterexample in which the continuous order property is violated, and hence show that a modified multi-$(s,S)$ policy is not optimal in general. However, in an extensive computational study, we show that instances violating the continuous order property are extremely rare in practice, and that the plans generated by a modified multi-$(s,S)$ policy can therefore be considered, for all practical purposes, optimal. Finally, we show that a modified $(s,S)$ policy also performs well in practice.
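As a purely illustrative reading of a capacity-capped (s, S)-type rule (a single-(s, S) simplification with hypothetical parameters, not the modified multi-(s, S) policy characterised in the paper), the ordering decision could look like:

```python
# Illustrative capacity-capped (s, S) ordering rule (hypothetical parameters;
# the paper's modified multi-(s, S) policy involves several (s_i, S_i) pairs).
def order_quantity(inventory: float, s: float, S: float, cap: float) -> float:
    """Order up to S when inventory is at or below s, truncated by the capacity cap."""
    if inventory <= s:
        return min(S - inventory, cap)
    return 0.0

# Example: reorder point s=10, order-up-to level S=60, order capacity cap=30.
print(order_quantity(inventory=5.0, s=10.0, S=60.0, cap=30.0))   # -> 30.0
```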

8. Fast Convergence of Inertial Multiobjective Gradient-like Systems with Asymptotic Vanishing Damping

Authors: Konstantin Sonntag, Sebastian Peitz

Abstract: We present a new gradient-like dynamical system related to unconstrained convex smooth multiobjective optimization which involves inertial effects and asymptotic vanishing damping. To the best of our knowledge, this system is the first inertial gradient-like system for multiobjective optimization problems including asymptotic vanishing damping, expanding the ideas laid out in [H. Attouch and G. Garrigos, Multiobjective optimization: an inertial approach to Pareto optima, preprint, arXiv:1506.02823, 2015]. We prove existence of solutions to this system in finite dimensions and further prove that its bounded solutions converge weakly to weakly Pareto optimal points. In addition, we obtain a convergence rate of order $O(t^{-2})$ for the function values measured with a merit function. This approach presents a good basis for the development of fast gradient methods for multiobjective optimization.
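For orientation, the single-objective prototype of such dynamics with asymptotically vanishing damping (the system studied by Su, Boyd and Candès and by Attouch and co-authors, stated here only as background, not the paper's multiobjective system) is

```latex
% Single-objective inertial gradient system with asymptotically vanishing damping:
% for convex smooth f and \alpha \ge 3 one has f(x(t)) - \min f = O(t^{-2}).
\[
  \ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \nabla f\bigl(x(t)\bigr) = 0 .
\]
```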

9. Feasibility problems via paramonotone operators in a convex setting

Authors: J. Camacho, M. J. Cánovas, J. E. Martínez-Legaz, J. Parra

Abstract: This paper is focused on some properties of paramonotone operators on Banach spaces and their application to certain feasibility problems for convex sets in a Hilbert space and convex systems in the Euclidean space. In particular, it shows that operators that are simultaneously paramonotone and bimonotone are constant on their domains, and this fact is applied to tackle two particular situations. The first one, closely related to simultaneous projections, deals with a finite number of convex sets with an empty intersection and tackles the problem of finding the smallest perturbations (in the sense of translations) of these sets to reach a nonempty intersection. The second is focused on the distance to feasibility; specifically, given an inconsistent convex inequality system, our goal is to compute/estimate the smallest right-hand side perturbations that reach feasibility. We point out that this work derives lower and upper estimates of such a distance, which become the exact value when confined to linear systems.
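For completeness, a brief recap of the standard notion used above, as commonly stated in the literature: a monotone operator $T$ is paramonotone when

```latex
% Paramonotonicity of a monotone operator T (standard definition):
\[
  u \in T(x),\quad v \in T(y),\quad \langle u - v,\, x - y\rangle = 0
  \;\Longrightarrow\; u \in T(y) \ \text{and} \ v \in T(x).
\]
```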

10. Stochastic Recursive Optimal Control of McKean-Vlasov Type: A Viscosity Solution Approach

Authors: Liangquan Zhang

Abstract: In this paper, we study a class of optimal control problems for forward-backward stochastic differential equations (FBSDEs for short) of McKean--Vlasov type via the dynamic programming principle (DPP for short), motivated by the study of the infinite-dimensional Hamilton--Jacobi--Bellman (HJB for short) equation derived from the decoupling field of the FBSDEs posed by Carmona and Delarue (\emph{Ann Probab}, 2015, \cite{cd15}). First, considering the cost functional defined by the backward component of the solution of the controlled FBSDEs alluded to earlier, we prove, on the one hand, that the value function is a deterministic function of the initial random variable; on the other hand, we show that the value function is \emph{law-invariant}, i.e., it depends on the initial random variable only through its distribution, by virtue of the BSDE property. Afterward, utilizing the notion of differentiability with respect to probability measures introduced by P.L. Lions \cite{Lions2012}, we establish a DPP for the value function in the Wasserstein space of probability measures based on the BSDE approach, in particular employing the notion of stochastic \emph{backward semigroups} associated with stochastic optimal control problems and It\^{o}'s formula along the flow of conditional laws of the controlled forward state process. We prove that the value function is the unique viscosity solution of the associated generalized HJB equation in a separable Hilbert space. Finally, as an application, we formulate an optimal control problem for linear stochastic differential equations with quadratic cost functionals of McKean-Vlasov type under a nonlinear expectation, the $g$-expectation introduced by Peng \cite{Peng04}, and derive the optimal feedback control explicitly by means of several groups of Riccati equations.

11. Incomplete Information Linear-Quadratic Mean-Field Games and Related Riccati Equations

Authors: Min Li, Tianyang Nie, Shunjun Wang, Ke Yan

Abstract: We study a class of linear-quadratic mean-field games with incomplete information. For each agent, the state is given by a linear forward stochastic differential equation with common noise. Moreover, both the state and control variables can enter the diffusion coefficients of the state equation. We deduce the open-loop adapted decentralized strategies and the feedback decentralized strategies via a mean-field forward-backward stochastic differential equation and Riccati equations, respectively. The well-posedness of the corresponding consistency condition system is obtained, and the limiting state-average turns out to be the solution of a mean-field stochastic differential equation driven by common noise. We also verify the $\varepsilon$-Nash equilibrium property of the decentralized control strategies. Finally, a network security problem is studied to illustrate our results as an application.

12. Hoffman constant of the argmin mapping in linear optimization

Authors: J. Camacho, M. J. Cánovas, H. Gfrerer, J. Parra

Abstract: The main contribution of this paper consists of providing an explicit formula to compute the Hoffman constant of the argmin mapping in linear optimization. The work is developed in the context of right-hand side perturbations of the constraint system, as the Hoffman constant is always infinite when we perturb the objective function coefficients, unless the left-hand side of the constraints reduces to zero. In our perturbation setting, the argmin mapping is a polyhedral mapping whose graph is the union of convex polyhedral sets, which fit together so nicely that global measures of stability (Hoffman constants) can be computed through semilocal and local ones (such as Lipschitz upper semicontinuity and calmness moduli, whose computation has been developed in previous works). Indeed, we isolate this nice behavior of the graph in the concept of well-connected polyhedral mappings and, as a first step, the paper focuses on the Hoffman constant for these multifunctions. When confined to the optimal set, some specifics on directional stability are also presented.
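As background, the classical Hoffman error bound for a consistent linear inequality system $Ax \le b$ (the feasible-set analogue of the argmin-mapping constant computed in the paper) reads

```latex
% Classical Hoffman error bound: there is a constant \kappa(A), independent of b,
% bounding the distance to the feasible set by the constraint violation,
% whenever the system Ay <= b is consistent.
\[
  \operatorname{dist}\bigl(x, \{y : Ay \le b\}\bigr)
  \;\le\; \kappa(A)\,\bigl\|(Ax - b)_+\bigr\|
  \qquad \text{for all } x \in \mathbb{R}^n .
\]
```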

13. Synthesizing Control Laws from Data using Sum-of-Squares Optimization

Authors: Jason J. Bramburger, Steven Dahdah, James Richard Forbes

Abstract: The control Lyapunov function (CLF) approach to nonlinear control design is well established. Moreover, when the plant is control affine and polynomial, sum-of-squares (SOS) optimization can be used to find a polynomial controller as a solution to a semidefinite program. This letter considers the use of data-driven methods to design a polynomial controller by leveraging Koopman operator theory, CLFs, and SOS optimization. First, Extended Dynamic Mode Decomposition (EDMD) is used to approximate the Lie derivative of a given CLF candidate with polynomial lifting functions. Then, the polynomial Koopman model of the Lie derivative is used to synthesize a polynomial controller via SOS optimization. The result is a flexible data-driven method that skips the intermediary process of system identification and can be applied widely to control problems. The proposed approach is used to successfully synthesize a controller to stabilize an inverted pendulum on a cart.
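As a rough sketch of the EDMD ingredient mentioned above (a least-squares Koopman approximation with a hypothetical monomial dictionary; the CLF Lie-derivative modelling and the SOS controller synthesis from the letter are not reproduced here):

```python
import numpy as np

# Minimal EDMD sketch: fit a Koopman matrix K so that psi(x_{k+1}) ≈ K @ psi(x_k),
# using a small monomial dictionary (illustrative choice, not the paper's lifting).
def psi(x):
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])

def edmd(X, Xp):
    """X, Xp: arrays of shape (N, 2) with snapshot pairs x_k -> x_{k+1}."""
    Psi = np.array([psi(x) for x in X])        # lifted snapshots, shape (N, 6)
    Psip = np.array([psi(x) for x in Xp])
    # Least squares: find A with Psi @ A ≈ Psip, then K = A^T gives
    # psi(x_{k+1}) ≈ K @ psi(x_k).
    A, *_ = np.linalg.lstsq(Psi, Psip, rcond=None)
    return A.T

# Example data from a toy discrete-time system (illustrative only).
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
Xp = np.array([[0.9 * x1, 0.8 * x2 + 0.1 * x1**2] for x1, x2 in X])
K = edmd(X, Xp)
print(K.shape)   # (6, 6) approximate Koopman matrix
```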

14. Analyzing and Improving Greedy 2-Coordinate Updates for Equality-Constrained Optimization via Steepest Descent in the 1-Norm

Authors: Amrutha Varshini Ramesh, Aaron Mishkin, Mark Schmidt, Yihan Zhou, Jonathan Wilder Lavington, Jennifer She

Abstract: We consider minimizing a smooth function subject to a summation constraint over its variables. By exploiting a connection between the greedy 2-coordinate update for this problem and equality-constrained steepest descent in the 1-norm, we give a convergence rate for greedy selection under a proximal Polyak-Lojasiewicz assumption that is faster than random selection and independent of the problem dimension $n$. We then consider minimizing with both a summation constraint and bound constraints, as arises in the support vector machine dual problem. Existing greedy rules for this setting either guarantee only trivial progress or require $O(n^2)$ time to compute. We show that bound- and summation-constrained steepest descent in the 1-norm guarantees more progress per iteration than previous rules and can be computed in only $O(n \log n)$ time.
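A minimal sketch of the greedy 2-coordinate idea for the pure summation constraint (an illustrative implementation with a fixed step size, not the paper's method or its bound-constrained rule):

```python
import numpy as np

# Greedy 2-coordinate update for  min f(x)  s.t.  sum(x) = const.
# Feasible directions have the form e_i - e_j; the greedy choice pairs the
# coordinate with the smallest partial derivative (increase it) with the one
# with the largest partial derivative (decrease it), preserving the sum.

def greedy_two_coordinate(grad_f, x, step=0.1, iters=100):
    x = x.copy()
    for _ in range(iters):
        g = grad_f(x)
        i, j = int(np.argmin(g)), int(np.argmax(g))   # pick the extreme gradients
        if g[j] - g[i] < 1e-10:                        # no descent direction left
            break
        x[i] += step                                   # move mass from coordinate j to i
        x[j] -= step                                   # sum(x) is preserved
    return x

# Toy quadratic example: f(x) = 0.5 * ||x - t||^2 with sum(x) fixed at sum(x0) = 0.
t = np.array([3.0, -1.0, 2.0, 0.0])
grad_f = lambda x: x - t
x0 = np.zeros(4)
print(greedy_two_coordinate(grad_f, x0, step=0.05, iters=2000))
```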

15. A numerical algorithm for attaining the Chebyshev bound in optimal learning

Authors: Pradyumna Paruchuri, Debasish Chatterjee

Abstract: Given a compact subset of a Banach space, the Chebyshev center problem consists of finding a minimal circumscribing ball containing the set. In this article we establish a numerically tractable algorithm for solving the Chebyshev center problem in the context of optimal learning from a finite set of data points. For a hypothesis space realized as a compact but not necessarily convex subset of a finite-dimensional subspace of some underlying Banach space, this algorithm computes the Chebyshev radius and the Chebyshev center of the hypothesis space, thereby solving the problem of optimal recovery of functions from data. The algorithm itself is based on, and significantly extends, recent results for near-optimal solutions of convex semi-infinite problems by means of targeted sampling, and it is of independent interest. Several examples of numerical computations of Chebyshev centers are included in order to illustrate the effectiveness of the algorithm.
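For the special case of a finite point cloud in Euclidean space (a simplification of the compact hypothesis sets treated in the article; the article's targeted-sampling algorithm is not reproduced), the Chebyshev center is the center of the minimum enclosing ball and can be computed directly, e.g. with cvxpy:

```python
import numpy as np
import cvxpy as cp

# Chebyshev center of a finite point set: the center c of the smallest ball
# containing all points, i.e. minimize r subject to ||p_i - c|| <= r.
rng = np.random.default_rng(0)
points = rng.standard_normal((30, 3))          # 30 sample points in R^3

c = cp.Variable(3)                             # Chebyshev center
r = cp.Variable()                              # Chebyshev radius
constraints = [cp.norm(p - c, 2) <= r for p in points]
prob = cp.Problem(cp.Minimize(r), constraints)
prob.solve()

print("Chebyshev radius:", r.value)
print("Chebyshev center:", c.value)
```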

16. A geometric framework for discrete time port-Hamiltonian systems

Authors: Karim Cherifi, Hannes Gernandt, Dorothea Hinsen, Volker Mehrmann

Abstract: Port-Hamiltonian systems provide an energy-based formulation with a model class that is closed under structure preserving interconnection. For continuous-time systems these interconnections are constructed by geometric objects called Dirac structures. In this paper, we derive this geometric formulation and the interconnection properties for scattering passive discrete-time port-Hamiltonian systems.
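For context, the standard continuous-time port-Hamiltonian input-state-output form that such interconnections preserve (stated as background; the paper develops the discrete-time counterpart) is

```latex
% Standard continuous-time port-Hamiltonian form, with Hamiltonian H,
% skew-symmetric interconnection matrix J = -J^T, dissipation R = R^T >= 0,
% and port matrix B:
\[
  \dot{x} = (J - R)\,\nabla H(x) + B u, \qquad y = B^{\top} \nabla H(x),
\]
% which obeys the power balance \tfrac{d}{dt} H(x(t)) \le y^{\top} u.
```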