arXiv daily

Optimization and Control (math.OC)

Tue, 12 Sep 2023

1. Relating Electric Vehicle Charging to Speed Scaling with Job-Specific Speed Limits

Authors: Leoni Winschermann, Marco E. T. Gerards, Antonios Antoniadis, Gerwin Hoogsteen, Johann Hurink

Abstract: Due to the ongoing electrification of transport in combination with limited power grid capacities, efficient ways to schedule electric vehicles (EVs) are needed for intraday operation of, for example, large parking lots. Common approaches like model predictive control repeatedly solve a corresponding offline problem. In this work, we present and analyze the Flow-based Offline Charging Scheduler (FOCS), an offline algorithm to derive an optimal EV charging schedule for a fleet of EVs that minimizes an increasing, convex and differentiable function of the corresponding aggregated power profile. To this end, we relate EV charging to mathematical speed scaling models with job-specific speed limits. We prove our algorithm to be optimal. Furthermore, we derive necessary and sufficient conditions for any EV charging profile to be optimal.
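
The offline problem described above can be stated directly as a small convex program: choose a charging power for every EV and time slot so that a convex function of the aggregated profile is minimized, subject to availability windows, job-specific power limits, and per-EV energy demands. The sketch below is not the paper's FOCS algorithm; it is a plain cvxpy formulation with made-up fleet data, a one-hour slot length, and an illustrative sum-of-squares (peak-shaving) objective.

```python
# A minimal convex-programming sketch of the offline EV charging problem
# (NOT the paper's FOCS algorithm).  All fleet data are made up; time slots
# are one hour long, so kW and kWh coincide numerically, and the objective
# is an illustrative sum of squares of the aggregated profile (peak shaving).
import cvxpy as cp

T = 24  # number of one-hour time slots
# (arrival slot, departure slot, energy demand [kWh], max charging power [kW])
evs = [
    (0, 8, 20.0, 7.4),
    (2, 12, 30.0, 11.0),
    (5, 20, 15.0, 3.7),
]

p = cp.Variable((len(evs), T), nonneg=True)  # charging power per EV and slot
constraints = []
for i, (a, d, e, pmax) in enumerate(evs):
    if a > 0:
        constraints.append(p[i, :a] == 0)       # not yet arrived
    if d < T:
        constraints.append(p[i, d:] == 0)       # already departed
    constraints.append(p[i, a:d] <= pmax)       # job-specific power ("speed") limit
    constraints.append(cp.sum(p[i, a:d]) == e)  # deliver the required energy

aggregated = cp.sum(p, axis=0)               # aggregated power profile
problem = cp.Problem(cp.Minimize(cp.sum_squares(aggregated)), constraints)
problem.solve()
print("peak of aggregated profile [kW]:", round(float(aggregated.value.max()), 2))
```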

2. Inexact Decentralized Dual Gradient Tracking for Constraint-Coupled Optimization

Authors: Jingwang Li, Housheng Su

Abstract: We propose an inexact decentralized dual gradient tracking method (iDDGT) for distributed optimization problems with a globally coupled equality constraint. Unlike existing algorithms that rely on either the exact dual gradient or an inexact one obtained through single-step gradient descent, iDDGT introduces a new approach: it uses an inexact dual gradient with a controllable level of inexactness. Numerical experiments demonstrate that iDDGT achieves significantly higher computational efficiency than state-of-the-art methods. Furthermore, it is proved that iDDGT achieves linear convergence over directed graphs without imposing any conditions on the constraint matrix. This broadens its applicability beyond existing algorithms, which require a full-row-rank constraint matrix and undirected graphs to achieve linear convergence.
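
To make the inexact-dual-gradient idea concrete, the toy sketch below solves a constraint-coupled problem, minimize sum_i 0.5*||x_i - c_i||^2 subject to sum_i A_i x_i = b, by dual ascent, where each local Lagrangian minimization is only approximated by K inner gradient steps. This is a centralized caricature with random data, not the decentralized iDDGT method or its gradient-tracking mechanism; it only illustrates how the level of dual-gradient inexactness can be controlled through K.

```python
# A toy, centralized sketch of dual ascent with an INEXACT dual gradient
# (not the decentralized iDDGT method itself).  Problem: minimize
# sum_i 0.5*||x_i - c_i||^2 subject to the coupling constraint
# sum_i A_i x_i = b.  The dual gradient g(lam) = sum_i A_i x_i(lam) - b
# needs the Lagrangian minimizers x_i(lam); here each is only approximated
# by K warm-started inner gradient steps, so K controls the inexactness.
# All problem data are random.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, m = 5, 3, 2
A = [rng.standard_normal((m, dim)) for _ in range(n_agents)]
c = [rng.standard_normal(dim) for _ in range(n_agents)]
b = rng.standard_normal(m)

x = [np.zeros(dim) for _ in range(n_agents)]
lam = np.zeros(m)
alpha, beta, K = 0.01, 0.2, 3   # dual step, inner step, inner iterations

for _ in range(2000):
    # inexact primal step: K gradient steps on each local Lagrangian
    for i in range(n_agents):
        for _ in range(K):
            grad = (x[i] - c[i]) + A[i].T @ lam
            x[i] = x[i] - beta * grad
    # ascent step along the (inexact) dual gradient
    g = sum(A[i] @ x[i] for i in range(n_agents)) - b
    lam = lam + alpha * g

print("coupling-constraint violation:", np.linalg.norm(g))
```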

3. Stochastic Bridges over Ensemble of Linear Systems

Authors: Daniel Owusu Adu, Yongxin Chen

Abstract: We consider particles that are conditioned on initial and final states. The trajectory of these particles is shaped by the interplay of internal and external sources of randomness. The internal randomness is modelled through a parameter varying over a deterministic set, thereby giving rise to an ensemble of systems, while the external randomness is introduced through white noise. Within this context, our primary objective is to generate the stochastic bridge by optimizing a random differential equation. In contrast to the literature, we show that the optimal control that generates the bridge does not conform to the typical Markov strategy. Instead, it adopts a non-Markovian strategy, which can be more precisely classified as a stochastic feedforward control input. This divergence from the established strategies underscores the complex interrelationships present in the dynamics of the system under consideration.
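
For contrast with the ensemble setting of the paper, the classical single-system case already illustrates what a stochastic bridge is: a scalar Brownian motion conditioned on its endpoints is generated by the Markov feedback drift u(t, x) = (xT - x)/(T - t). The sketch below simulates this standard Brownian bridge with Euler-Maruyama; the paper's point is that for an ensemble of linear systems the optimal control is no longer such a Markov feedback but a non-Markovian, feedforward strategy.

```python
# A classical single-system example for contrast (not the paper's ensemble
# setting): a scalar Brownian bridge from x0 at t=0 to xT at t=T, simulated
# with Euler-Maruyama.  Here the bridge is generated by the MARKOV feedback
# drift u(t, x) = (xT - x) / (T - t); the paper shows that for an ensemble
# of linear systems the optimal control is instead non-Markovian
# (a stochastic feedforward input).
import numpy as np

rng = np.random.default_rng(1)
T, N = 1.0, 1000
dt = T / N
x0, xT = 0.0, 2.0

x = x0
path = [x]
for k in range(N - 1):              # stop one step early so 1/(T - t) stays finite
    t = k * dt
    drift = (xT - x) / (T - t)      # Markov bridge feedback
    x = x + drift * dt + np.sqrt(dt) * rng.standard_normal()
    path.append(x)
path.append(xT)                     # the bridge is pinned at the final state

print("endpoint:", path[-1], "(conditioned on reaching xT)")
```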

4. Symmetric Stair Preconditioning of Linear Systems for Parallel Trajectory Optimization

Authors: Xueyi Bu, Brian Plancher

Abstract: There has been growing interest in parallel strategies for solving trajectory optimization problems. One key step in many algorithmic approaches to trajectory optimization is the solution of moderately large, sparse linear systems. Iterative methods are particularly well-suited for parallel solves of such systems. However, fast and stable convergence of iterative methods relies on the application of a high-quality preconditioner that reduces the spread and increases the clustering of the eigenvalues of the target matrix. To improve the performance of these approaches, we present a new parallel-friendly symmetric stair preconditioner. We prove that our preconditioner has advantageous theoretical properties, such as a more clustered eigenvalue spectrum, when used in conjunction with iterative methods for trajectory optimization. Numerical experiments with typical trajectory optimization problems reveal that, compared to the best alternative parallel preconditioner from the literature, our symmetric stair preconditioner provides up to a 34% reduction in condition number and up to a 25% reduction in the number of resulting linear system solver iterations.
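
The role a preconditioner plays here can be seen on any block-tridiagonal, symmetric positive definite system of the kind that arises in trajectory optimization. The sketch below is not the paper's symmetric stair preconditioner; it runs a hand-rolled preconditioned conjugate gradient on synthetic data and compares the iteration count without preconditioning against a simple block-Jacobi preconditioner, which already clusters the eigenvalues and cuts the iteration count.

```python
# A generic illustration of why preconditioning matters for these solves
# (NOT the paper's symmetric stair preconditioner): preconditioned conjugate
# gradient on a synthetic block-tridiagonal SPD system, comparing no
# preconditioner against a simple block-Jacobi preconditioner.
import numpy as np

def pcg(A, b, apply_Minv, tol=1e-8, maxiter=1000):
    """Preconditioned conjugate gradient; returns (solution, iteration count)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for k in range(1, maxiter + 1):
        Ap = A @ p
        step = rz / (p @ Ap)
        x += step * p
        r -= step * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

# synthetic block-tridiagonal SPD matrix with nb diagonal blocks of size bs
rng = np.random.default_rng(2)
nb, bs = 20, 4
n = nb * bs
A = np.zeros((n, n))
for i in range(nb):
    B = rng.standard_normal((bs, bs))
    A[i*bs:(i+1)*bs, i*bs:(i+1)*bs] = B @ B.T + (2 + i) * np.eye(bs)  # SPD block
    if i + 1 < nb:
        C = 0.1 * rng.standard_normal((bs, bs))  # weak off-diagonal coupling
        A[i*bs:(i+1)*bs, (i+1)*bs:(i+2)*bs] = C
        A[(i+1)*bs:(i+2)*bs, i*bs:(i+1)*bs] = C.T
b = rng.standard_normal(n)

# block-Jacobi preconditioner: apply the inverse of each diagonal block
blocks_inv = [np.linalg.inv(A[i*bs:(i+1)*bs, i*bs:(i+1)*bs]) for i in range(nb)]
def block_jacobi(r):
    return np.concatenate([blocks_inv[i] @ r[i*bs:(i+1)*bs] for i in range(nb)])

_, it_plain = pcg(A, b, lambda r: r)    # identity preconditioner = plain CG
_, it_prec = pcg(A, b, block_jacobi)    # block-Jacobi preconditioning
print(f"CG iterations: {it_plain} unpreconditioned vs {it_prec} preconditioned")
```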