arXiv daily

Optimization and Control (math.OC)

Tue, 22 Aug 2023

1. Distorted optimal transport

Authors: Haiyan Liu, Bin Wang, Ruodu Wang, Sheng Chao Zhuang

Abstract: Classic optimal transport theory is built on minimizing the expected cost between two given distributions. We propose the framework of distorted optimal transport by minimizing a distorted expected cost. This new formulation is motivated by concrete problems in decision theory, robust optimization, and risk management, and it has many distinct features compared to the classic theory. We choose simple cost functions and study different distortion functions and their implications for the optimal transport plan. We show that on the real line, the comonotonic coupling is optimal for the distorted optimal transport problem when the distortion function is convex and the cost function is submodular and monotone. Some forms of duality and uniqueness results are provided. For inverse-S-shaped distortion functions and linear cost, we obtain the unique form of optimal coupling for all marginal distributions, which turns out to have an interesting "first comonotonic, then counter-monotonic" dependence structure; for S-shaped distortion functions a similar structure is obtained. Our results highlight several challenges and features of distorted optimal transport, offering a new mathematical bridge between the fields of probability, decision theory, and risk management.
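
As a point of reference (notation ours, not necessarily the authors'), the classic problem minimizes an expected cost over couplings, while a distorted expected cost applies a distortion function \(g\) to the survival function of the cost before integrating, in the spirit of a Choquet/distorted expectation:

\[
\inf_{\pi \in \Pi(\mu,\nu)} \mathbb{E}_{(X,Y)\sim\pi}\big[c(X,Y)\big]
\qquad \text{versus} \qquad
\inf_{\pi \in \Pi(\mu,\nu)} \int_0^{\infty} g\big(\pi\big(c(X,Y) > t\big)\big)\,\mathrm{d}t,
\]

where \(\Pi(\mu,\nu)\) is the set of couplings of the marginals \(\mu\) and \(\nu\), the cost \(c\) is nonnegative, and \(g:[0,1]\to[0,1]\) is increasing with \(g(0)=0\) and \(g(1)=1\); for the identity distortion the two objectives coincide.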

2. A Tight Formulation for the Dial-a-Ride Problem

Authors: Daniela Gaul, Kathrin Klamroth, Christian Pfeiffer, Arne Schulz, Michael Stiglmayr

Abstract: Ridepooling services play an increasingly important role in modern transportation systems. With soaring demand and growing fleet sizes, the underlying route planning problems become increasingly challenging. In this context, we consider the dial-a-ride problem (DARP): Given a set of transportation requests with pick-up and delivery locations, passenger numbers, time windows, and maximum ride times, an optimal routing for a fleet of vehicles, including an optimized passenger assignment, needs to be determined. We present tight mixed-integer linear programming (MILP) formulations for the DARP by combining two state-of-the-art models into novel location-augmented-event-based formulations. Strong valid inequalities and lower and upper bounding techniques are derived to further improve the formulations. We then demonstrate the theoretical and computational superiority of the new model: First, the formulation is tight in the sense that, if time windows shrink to a single point in time, the linear programming relaxation yields integer (and hence optimal) solutions. Second, extensive numerical experiments on benchmark instances show that computational times are on average reduced by 49.7% compared to state-of-the-art event-based approaches.
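
To make the tightness claim concrete (a generic illustration, not the authors' specific formulation): for a MILP with binary variables, the linear programming relaxation replaces \(x \in \{0,1\}^n\) by \(x \in [0,1]^n\), so that

\[
z_{\mathrm{LP}} = \min\{\, c^\top x : A x \le b,\ x \in [0,1]^n \,\} \;\le\; z_{\mathrm{MILP}} = \min\{\, c^\top x : A x \le b,\ x \in \{0,1\}^n \,\}.
\]

Tightness in the stated sense means that, once every time window \([e_i, \ell_i]\) collapses to a single point \(e_i = \ell_i\), the relaxation attains an integral (hence MILP-optimal) solution, so no branching is required in that limiting case.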

3. Reproducing kernel approach to linear quadratic mean field control problems

Authors: Pierre-Cyril Aubin-Frankowski, Alain Bensoussan

Abstract: Mean-field control problems have received continuous interest over the last decade. Despite being more intricate than in classical optimal control, the linear-quadratic setting can still be tackled through Riccati equations. Remarkably, we demonstrate that another significant attribute extends to the mean-field case: the existence of an intrinsic reproducing kernel Hilbert space associated with the problem. Our findings reveal that this Hilbert space not only encompasses deterministic controlled push-forward mappings but can also represent stochastic dynamics. Specifically, incorporating Brownian noise affects the deterministic kernel through a conditional expectation that makes the trajectories adapted. Introducing reproducing kernels allows us to rewrite the mean-field control problem as an optimization over a Hilbert space of trajectories rather than over controls. This framework even accommodates nonlinear terminal costs, without resorting to adjoint processes or Pontryagin's maximum principle, further highlighting the versatility of the proposed methodology.
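
For orientation, a linear-quadratic mean-field control problem of the kind considered can be written in the generic form (coefficients and notation are illustrative, not taken from the paper)

\[
\min_{u}\; \mathbb{E}\!\int_0^T \!\Big( x_t^\top Q\, x_t + \mathbb{E}[x_t]^\top \bar{Q}\, \mathbb{E}[x_t] + u_t^\top R\, u_t \Big)\,\mathrm{d}t + \text{terminal cost},
\qquad
\mathrm{d}x_t = \big(A x_t + \bar{A}\,\mathbb{E}[x_t] + B u_t\big)\,\mathrm{d}t + \sigma\, \mathrm{d}W_t ,
\]

the point being that the attainable state trajectories carry an intrinsic reproducing kernel Hilbert space structure, so the problem can be posed as an optimization over trajectories rather than over controls, as described above.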

4. Iterative risk-constrained model predictive control: A data-driven distributionally robust approach

Authors: Alireza Zolanvari, Ashish Cherukuri

Abstract: This paper proposes an iterative distributionally robust model predictive control (MPC) scheme to solve a risk-constrained infinite-horizon optimal control problem. In each iteration, the algorithm generates a trajectory from the starting point to the target equilibrium state with the aim of respecting, with high probability, risk constraints that encode safe operation of the system, while improving the cost of the trajectory compared to previous iterations. At the end of each iteration, the visited states and the observed samples of the uncertainty are stored and accumulated with the previous observations. Within each iteration, the previously stored states serve as terminal constraints of the MPC scheme, and the samples obtained thus far are used to construct distributionally robust risk constraints. As iterations progress, more data is obtained and the environment is explored progressively, ensuring better safety and cost optimality. We prove that the MPC scheme in each iteration is recursively feasible and that the resulting trajectories converge asymptotically to the target while ensuring safety with high probability. We identify conditions under which the cost-to-go decreases as iterations progress. For systems with a locally one-step reachable target, we specify scenarios that ensure finite-time convergence of the iterations. We provide computationally tractable reformulations of the risk constraints for total variation and Wasserstein distance-based ambiguity sets. A simulation example illustrates the application of our results to finding a risk-constrained path for two mobile robots facing an uncertain obstacle.
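
A distributionally robust risk constraint of the kind described can be sketched as follows (illustrative notation, not the paper's): given samples \(\hat{\xi}_1,\dots,\hat{\xi}_N\) of the uncertainty with empirical distribution \(\hat{\mathbb{P}}_N\), the controller enforces

\[
\sup_{\mathbb{P}\in\mathcal{P}_N} \rho^{\mathbb{P}}\big[g(x,\xi)\big] \le 0,
\qquad
\mathcal{P}_N = \big\{\mathbb{P} : d\big(\mathbb{P},\hat{\mathbb{P}}_N\big) \le \varepsilon_N \big\},
\]

where \(g\) encodes constraint violation, \(\rho\) is a risk measure such as the conditional value-at-risk, and \(d\) is the total variation or Wasserstein distance; the tractable reformulations mentioned in the abstract turn this worst-case constraint into finitely many explicit constraints.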

5. Risk-Minimizing Two-Player Zero-Sum Stochastic Differential Game via Path Integral Control

Authors: Apurva Patil, Yujing Zhou, David Fridovich-Keil, Takashi Tanaka

Abstract: This paper addresses a continuous-time risk-minimizing two-player zero-sum stochastic differential game (SDG), in which each player aims to minimize its probability of failure. Failure occurs when the state of the game enters predefined undesirable domains, and one player's failure is the other's success. We derive a sufficient condition for this game to have a saddle-point equilibrium and show that it can be solved via a Hamilton-Jacobi-Isaacs (HJI) partial differential equation (PDE) with a Dirichlet boundary condition. Under certain assumptions on the system dynamics and cost function, we establish the existence and uniqueness of the saddle point of the game. We provide explicit expressions for the saddle-point policies, which can be numerically evaluated using path integral control. This allows us to solve the game online via Monte Carlo sampling of system trajectories. We implement our control synthesis framework on two classes of risk-minimizing zero-sum SDGs: a disturbance attenuation problem and a pursuit-evasion game. Simulation studies are presented to validate the proposed control synthesis framework.
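
To indicate the structure (a generic sketch under standard assumptions, not the paper's exact equations): for dynamics \(\mathrm{d}x = \big(f(x) + G(x)u + K(x)v\big)\mathrm{d}t + \Sigma(x)\,\mathrm{d}W\) and a running cost \(\ell(x,u,v)\), the value function \(V\) of the zero-sum game satisfies an HJI PDE of the form

\[
\partial_t V + f(x)^\top \nabla V + \tfrac{1}{2}\,\mathrm{tr}\!\big(\Sigma(x)\Sigma(x)^\top \nabla^2 V\big)
+ \min_{u}\max_{v}\Big\{ \big(G(x)u + K(x)v\big)^\top \nabla V + \ell(x,u,v) \Big\} = 0,
\]

with Dirichlet data prescribed on the undesirable (failure) set, e.g. \(V = 1\) there when \(V\) is interpreted as a failure probability. Under the usual path-integral assumption linking the control penalties to the noise covariance, an exponential transformation of \(V\) linearizes the PDE, and the Feynman-Kac formula represents the solution as an expectation over sampled trajectories, which is what enables the online Monte Carlo evaluation mentioned above.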

6. Decision-Making for Land Conservation: A Derivative-Free Optimization Framework with Nonlinear Inputs

Authors: Cassidy K. Buhler, Hande Y. Benson

Abstract: Protected areas (PAs) are designated spaces where human activities are restricted to preserve critical habitats. When establishing PAs, decision-makers must balance a trade-off between financial feasibility and ecological benefit. Given the long-term ramifications of these decisions and a constantly shifting environment, it is crucial that PAs are carefully selected with long-term viability in mind. AI tools such as simulation and optimization are commonly used for designating PAs, but current decision models are primarily linear. In this paper, we propose a derivative-free optimization framework paired with a nonlinear component, population viability analysis (PVA). Formulated as a mixed-integer nonlinear programming (MINLP) problem, our model allows for both linear and nonlinear inputs. Connectivity, competition, crowding, and other similar concerns are handled by the PVA software rather than expressed as constraints of the optimization model. In addition, we present numerical results that serve as a proof of concept, showing that our models yield PAs with expected risk similar to that of preserving every parcel in a habitat, but at a significantly lower cost. The overall goal is to promote interdisciplinary work by providing a new mathematical programming tool for conservationists that allows for nonlinear inputs and can be paired with existing ecological software.
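
To illustrate the intended division of labor, here is a minimal sketch in which a placeholder pva_risk function stands in for the external PVA software and a simple random search stands in for the derivative-free solver; all names, costs, and thresholds are hypothetical, and none of this is the paper's actual model or implementation.

    import random

    def pva_risk(selection):
        """Placeholder for the PVA black box: extinction-risk estimate for the
        selected parcels. Here risk simply decreases as more parcels are protected."""
        return 1.0 / (1.0 + sum(selection))

    def total_cost(selection, parcel_costs):
        """Acquisition cost of the selected parcels."""
        return sum(c for keep, c in zip(selection, parcel_costs) if keep)

    def random_search(parcel_costs, risk_cap, samples=5000, seed=0):
        """Derivative-free random search: sample parcel subsets, query the black-box
        risk model, and keep the cheapest subset whose risk stays below the cap."""
        rng = random.Random(seed)
        n = len(parcel_costs)
        best, best_cost = None, float("inf")
        for _ in range(samples):
            candidate = [rng.randint(0, 1) for _ in range(n)]
            if pva_risk(candidate) <= risk_cap:
                c = total_cost(candidate, parcel_costs)
                if c < best_cost:
                    best, best_cost = candidate, c
        return best, best_cost

    if __name__ == "__main__":
        costs = [4.0, 2.5, 3.0, 1.5, 5.0]   # hypothetical parcel acquisition costs
        selection, cost = random_search(costs, risk_cap=0.3)
        print("selected parcels:", selection, "total cost:", cost)

In the framework described above, the toy risk function would be replaced by calls to PVA software and the random search by a derivative-free MINLP solver, but the structure is the same: nonlinear ecological effects live inside the black box, while budget-style constraints remain in the optimization model.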