By: Tianfu Luo, Yelin Feng, Qingfu Huang, Zongliang Zhang, Mingjiao Yan, Zaihong Yang, Dawei Zheng, Yang Yang
A Physics-Informed Neural Network (PINN) provides a distinct advantage by synergizing neural networks' capabilities with the problem's governing physical laws. In this study, we introduce an innovative approach for solving seepage problems by utilizing the PINN, harnessing the capabilities of Deep Neural Networks (DNNs) to approximate hydraulic head distributions in seepage analysis. To effectively train the PINN model, we introduce a comprehensive loss function comprising three components: one for evaluating differential operators, another for assessing boundary conditions, and a third for appraising initial conditions. The validation of the PINN involves solving four benchmark seepage problems. The results demonstrate the exceptional accuracy of the PINN in solving seepage problems, surpassing the accuracy of FEM in addressing both steady-state and free-surface seepage problems. Hence, the presented approach highlights the robustness of the PINN and underscores its precision in effectively addressing a spectrum of seepage challenges. This combination yields accurate solutions while overcoming limitations inherent in conventional methods, such as the need for mesh generation and limited adaptability to complex geometries.
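As a minimal illustration of the three-part loss described in this abstract (for a steady-state problem the initial-condition term drops out), the sketch below assembles a PDE-residual term and a boundary-condition term for 1D steady seepage, d²h/dx² = 0. This is not the authors' code: a quadratic surrogate stands in for the DNN so the terms can be shown without a deep-learning framework, and all collocation points and head values are invented.

```python
import numpy as np

# Surrogate for the DNN output h(x; w) = w0 + w1*x + w2*x**2, chosen so the
# second derivative (and hence the PDE residual) is available in closed form.
def hydraulic_head(x, w):
    return w[0] + w[1] * x + w[2] * x ** 2

def pinn_loss(w, x_interior, x_bc, h_bc):
    # PDE term: residual of d2h/dx2 = 0; for the quadratic surrogate the
    # second derivative is the constant 2*w2 at every interior point.
    pde_residual = 2.0 * w[2] * np.ones_like(x_interior)
    loss_pde = np.mean(pde_residual ** 2)
    # Boundary term: mismatch with the prescribed heads at the ends.
    loss_bc = np.mean((hydraulic_head(x_bc, w) - h_bc) ** 2)
    return loss_pde + loss_bc

x_int = np.linspace(0.1, 0.9, 9)       # interior collocation points
x_bc = np.array([0.0, 1.0])
h_bc = np.array([10.0, 4.0])           # prescribed heads at the two ends

# The exact steady solution is linear, h = 10 - 6x, which zeroes the loss.
w_exact = np.array([10.0, -6.0, 0.0])
print(pinn_loss(w_exact, x_int, x_bc, h_bc))  # → 0.0
```

Training would minimize this composite loss over the network weights; here the exact linear solution simply verifies that both terms vanish together.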
By: Sudhi Sharma, Pierre Jolivet, Victorita Dolean, Abhijit Sarkar
This article discusses uncertainty quantification (UQ) for time-independent linear and nonlinear partial differential equation (PDE)-based systems with random model parameters, carried out using a sampling-free intrusive stochastic Galerkin method that leverages multilevel scalable solvers constructed by combining a two-grid Schwarz method with algebraic multigrid (AMG). High-resolution spatial meshes, along with a large number of stochastic expansion terms, increase the system size, leading to significant memory consumption and computational costs. Domain decomposition (DD)-based parallel scalable solvers are developed to this end for linear and nonlinear stochastic PDEs. A generalized minimum residual (GMRES) iterative solver equipped with a multilevel preconditioner, consisting of restricted additive Schwarz (RAS) for the fine grid and AMG for the coarse grid, is constructed to improve scalability. Numerical experiments illustrate the scalability of the proposed solver for stochastic linear and nonlinear Poisson problems.
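The two-level structure described here can be sketched in a few lines. This is a hedged illustration, not the paper's solver: non-overlapping block solves stand in for RAS, a piecewise-constant aggregation coarse grid stands in for AMG, and preconditioned CG replaces GMRES only because the 1D Poisson model problem below is symmetric.

```python
import numpy as np

n, nb = 16, 4                                             # unknowns, block size
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # 1D Poisson matrix
P = np.kron(np.eye(n // nb), np.ones((nb, 1)))            # aggregation prolongator
Ac = P.T @ A @ P                                          # Galerkin coarse operator

def apply_preconditioner(r):
    z = np.zeros_like(r)
    for s in range(0, n, nb):                 # fine level: local block solves
        blk = slice(s, s + nb)
        z[blk] += np.linalg.solve(A[blk, blk], r[blk])
    z += P @ np.linalg.solve(Ac, P.T @ r)     # additive coarse-level correction
    return z

def pcg(b, tol=1e-10):
    """Preconditioned conjugate gradients with the two-level preconditioner."""
    x, r = np.zeros_like(b), b.copy()
    z = apply_preconditioner(r)
    p, rz = z.copy(), r @ z
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_preconditioner(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

x = pcg(np.ones(n))
print(np.linalg.norm(np.ones(n) - A @ x))   # residual near machine precision
```

The additive combination of a fine-level Schwarz sweep and a coarse correction is the same design pattern the abstract describes, just with the simplest possible ingredients at each level.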
By: Christian Faßbender, Tim Bürchner, Philipp Kopp, Ernst Rank, Stefan Kollmannsberger
Immersed boundary methods simplify mesh generation by embedding the domain of interest into an extended domain that is easy to mesh, introducing the challenge of dealing with cells that intersect the domain boundary. Combined with explicit time integration schemes, the finite cell method introduces a lower bound for the critical time step size. Explicit transient analyses commonly use the spectral element method due to its natural way of obtaining diagonal mass matrices through nodal lumping. Its combination with the finite cell method is called the spectral cell method. Unfortunately, a direct application of nodal lumping in the spectral cell method is impossible due to the special quadrature necessary to treat the discontinuous integrand inside the cut cells. We analyze an implicit-explicit (IMEX) time integration method to exploit the advantages of the nodal lumping scheme for uncut cells on one side and the unconditional stability of implicit time integration schemes for cut cells on the other. In this hybrid, immersed Newmark IMEX approach, we use explicit second-order central differences to integrate the uncut degrees of freedom that lead to a diagonal block in the mass matrix and an implicit trapezoidal Newmark method to integrate the remaining degrees of freedom (those supported by at least one cut cell). The immersed Newmark IMEX approach preserves the high-order convergence rates and the geometric flexibility of the finite cell method. We analyze a simple system of spring-coupled masses to highlight some of the essential characteristics of Newmark IMEX time integration. We then solve the scalar wave equation on two- and three-dimensional examples with significant geometric complexity to show that our approach is more efficient than state-of-the-art time integration schemes when comparing accuracy and runtime.
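The two ingredients paired in the IMEX scheme can each be shown on a single-DOF oscillator u'' = -ω²u. This is an illustrative sketch, not the authors' solver: explicit central differences (used for the uncut, lumped degrees of freedom) lose stability once the step size exceeds the critical value 2/ω, while the implicit trapezoidal Newmark update (used for cut cells) stays bounded for the same step.

```python
import numpy as np

omega, dt, steps = 10.0, 0.25, 40    # dt > 2/omega = 0.2: beyond critical

def central_difference(u0, v0):
    """Explicit second-order central differences for u'' = -omega^2 u."""
    u_prev = u0 - dt * v0            # simple start-up value
    u, hist = u0, []
    for _ in range(steps):
        u, u_prev = 2 * u - u_prev - dt**2 * omega**2 * u, u
        hist.append(u)
    return np.array(hist)

def trapezoidal_newmark(u0, v0):
    """Implicit average-acceleration (trapezoidal) Newmark update."""
    u, v, a, hist = u0, v0, -omega**2 * u0, []
    for _ in range(steps):
        rhs = u + dt * v + 0.25 * dt**2 * a
        u_new = rhs / (1 + 0.25 * dt**2 * omega**2)
        a_new = -omega**2 * u_new
        v += 0.5 * dt * (a + a_new)
        u, a = u_new, a_new
        hist.append(u)
    return np.array(hist)

print(np.abs(central_difference(1.0, 0.0)).max())   # grows without bound
print(np.abs(trapezoidal_newmark(1.0, 0.0)).max())  # stays bounded near 1
```

The IMEX idea is to apply the first update only where the mass matrix is diagonal and cheap, and the second only to the cut-cell degrees of freedom, so the small cut-cell contributions no longer dictate the global time step.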
By: Nastaran Dabiran, Brandon Robinson, Rimple Sandhu, Mohammad Khalil, Chris L. Pettit, Dominique Poirel, Abhijit Sarkar
Sparse Bayesian learning (SBL) has been extensively utilized in data-driven modeling to combat the issue of overfitting. While SBL excels in linear-in-parameter models, its direct applicability is limited in models where observations possess nonlinear relationships with unknown parameters. Recently, a semi-analytical Bayesian framework known as nonlinear sparse Bayesian learning (NSBL) was introduced by the authors to induce sparsity among model parameters during the Bayesian inversion of nonlinear-in-parameter models. NSBL relies on optimally selecting the hyperparameters of sparsity-inducing Gaussian priors. It is inherently an approximate method, since the uncertainty in the hyperparameter posterior is disregarded in favor of the maximum a posteriori (MAP) estimate of the hyperparameters (the type-II MAP estimate). This paper investigates the hierarchical structure that forms the basis of NSBL and validates its accuracy through a comparison with one-level hierarchical Bayesian inference as a benchmark, in the context of three numerical experiments: (i) a benchmark linear regression example with a Gaussian prior and Gaussian likelihood, (ii) the same regression problem with a highly non-Gaussian prior, and (iii) a dynamical system with a non-Gaussian prior and a highly non-Gaussian likelihood function, chosen to explore the performance of the algorithm in these new settings. These numerical examples show that NSBL is well-suited for physics-based models, as it can be readily applied to models with non-Gaussian prior distributions and non-Gaussian likelihood functions. Moreover, we illustrate the accuracy of the NSBL algorithm as an approximation to one-level hierarchical Bayesian inference and its ability to reduce computational cost while adequately exploring the parameter posteriors.
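The mechanism NSBL builds on — type-II estimation of sparsity-inducing Gaussian prior precisions, where a precision growing large prunes its parameter — can be sketched in its classical linear-in-parameter special case (standard SBL). This is a hedged demo, not the NSBL algorithm itself: the data sizes, the fixed noise precision beta, and the numerical safeguards are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
Phi = rng.normal(size=(50, 5))          # design matrix, 5 candidate features
w_true = np.array([2.0, 0.0, -3.0, 0.0, 0.0])
y = Phi @ w_true                        # noiseless observations for the demo
beta = 1e4                              # assumed (fixed) noise precision

alpha = np.ones(5)                      # hyperparameters: prior precisions
for _ in range(50):
    # Gaussian posterior of the weights given the current hyperparameters.
    Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
    mu = beta * Sigma @ Phi.T @ y
    # Type-II update: "well-determinedness" gamma_i drives alpha_i; a huge
    # alpha_i shrinks weight i to zero, pruning it from the model.
    gamma = np.maximum(1.0 - alpha * np.diag(Sigma), 0.0)
    alpha = np.minimum(gamma / (mu**2 + 1e-12), 1e12)

print(np.round(mu, 3))   # irrelevant weights driven to ~0
```

The same two-level structure — an analytically tractable inner layer and an optimized hyperparameter layer — is what NSBL extends to nonlinear-in-parameter models via its semi-analytical approximation.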
By: Florian Holzinger, Andreas Beham
Industrial manufacturing is currently amid its fourth great revolution, pushing towards the digital transformation of production processes. One key element of this transformation is the formalization and digitization of processes, creating an increased potential to monitor, understand, and optimize existing processes. However, one major obstacle is the increased diversification and specialisation of these processes, resulting in a dependency on multiple experts whose combined expertise is rarely available in small and medium-sized companies. To mitigate this issue, this paper presents a novel approach for multi-criteria optimization of workflow-based assembly tasks in manufacturing by combining a workflow modeling framework with the HeuristicLab optimization framework. To this end, a new generic problem definition is implemented in HeuristicLab, enabling the optimization of arbitrary workflows represented with the modeling framework. The resulting Pareto front of the multi-criteria optimization provides decision makers with a set of optimal workflows from which they can choose the one that best fits current demands. The advantages of the presented approach are highlighted with a real-world use case from an ongoing research project.
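The final selection step described above — presenting decision makers only the non-dominated workflows — can be sketched directly. The workflow names and the two minimization objectives (say, makespan and cost) below are invented for illustration; HeuristicLab performs this with its own multi-objective machinery.

```python
def pareto_front(candidates):
    """Return candidates not weakly dominated by any distinct objective
    vector, for minimization of all objectives."""
    front = []
    for name, obj in candidates:
        dominated = any(
            all(o2 <= o1 for o1, o2 in zip(obj, other)) and other != obj
            for _, other in candidates
        )
        if not dominated:
            front.append((name, obj))
    return front

workflows = [
    ("wf_a", (10, 5)),   # fast but expensive
    ("wf_b", (12, 3)),   # slower but cheap
    ("wf_c", (11, 6)),   # dominated by wf_a on both objectives
    ("wf_d", (15, 2)),   # slowest, cheapest
]
print(pareto_front(workflows))   # wf_c is filtered out
```

Each surviving entry represents a different trade-off, which is exactly what lets the decision maker pick per current demand rather than committing to one fixed weighting of objectives.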
By: Weian Mao, Muzhi Zhu, Zheng Sun, Shuaike Shen, Lin Yuanbo Wu, Hao Chen, Chunhua Shen
Innovations like protein diffusion have enabled significant progress in de novo protein design, a vital topic in the life sciences. These methods typically depend on protein structure encoders to model residue backbone frames, where atoms do not exist. Most prior encoders rely on atom-wise features, such as angles and distances between atoms, which are not available in this context. Thus far, only a few simple encoders, such as IPA, have been proposed for this scenario, leaving frame modeling as a bottleneck. In this work, we propose the Vector Field Network (VFN), which enables network layers to perform learnable vector computations between coordinates of frame-anchored virtual atoms, thus achieving a higher capability for modeling frames. The vector computation operates in a manner similar to a linear layer, with each input channel receiving 3D virtual atom coordinates instead of scalar values. The multiple feature vectors output by the vector computation are then used to update the residue representations and virtual atom coordinates via attention aggregation. Remarkably, VFN also excels in modeling both frames and atoms, as real atoms can be treated as virtual atoms, positioning VFN as a potential universal encoder. In protein diffusion (frame modeling), VFN exhibits an impressive performance advantage over IPA in terms of both designability (67.04% vs. 53.58%) and diversity (66.54% vs. 51.98%). In inverse folding (frame and atom modeling), VFN outperforms the previous SoTA model, PiFold (54.7% vs. 51.66%), on sequence recovery rate. We also propose a method of equipping VFN with the ESM model, which surpasses the previous ESM-based SoTA, LM-Design (62.67% vs. 55.65%), by a substantial margin.
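The "linear layer over vectors" analogy in the abstract can be made concrete with a minimal shape-level sketch. This is not the authors' implementation — channel counts are invented, and the learnable weights are random stand-ins — but it shows the key difference from an ordinary linear layer: each input channel carries a 3D virtual-atom coordinate rather than a scalar.

```python
import numpy as np

def vector_linear(coords, W):
    """coords: (C_in, 3) virtual-atom coordinates for one residue frame;
    W: (C_out, C_in) learnable mixing weights.
    Each output channel is a learned linear combination of input vectors,
    so the map is equivariant to a common rotation of all coordinates."""
    return W @ coords          # (C_out, 3) feature vectors

rng = np.random.default_rng(1)
coords = rng.normal(size=(8, 3))   # 8 virtual atoms anchored to a frame
W = rng.normal(size=(16, 8))       # 16 output vector channels
out = vector_linear(coords, W)
print(out.shape)                   # (16, 3)
```

In the full network these output vectors would feed the attention aggregation that updates residue representations and virtual-atom positions; here only the core channel-mixing step is shown.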
By: James Forster
In this paper, we present a framework of key algorithms and data structures for efficiently generating timetables for any number of AGVs, from any given positioning on any given graph, to accomplish any given demands, as long as a few easily satisfiable assumptions are met. Our proposed algorithms provide guaranteed solutions in predictable polynomial running times, which is fundamental to any real-time application. We also develop an improved geographic reservation algorithm that reduces the run-time of the previously best-known algorithm from $O(nm)$ to $O(n)$.
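The abstract does not spell out the reservation algorithm itself, so the following is a purely hypothetical sketch of the general idea behind geographic reservation: validate each of a timetable's n (location, timestep) claims against a hash set of existing reservations, giving time linear in the timetable length rather than in the number of existing reservations scanned.

```python
def try_reserve(reservations, timetable):
    """reservations: set of (node, t) pairs already claimed by other AGVs;
    timetable: list of (node, t) claims for one AGV, in time order.
    Check all claims first, then commit atomically; False on any conflict."""
    if any(claim in reservations for claim in timetable):
        return False
    reservations.update(timetable)
    return True

booked = set()
print(try_reserve(booked, [("n1", 0), ("n2", 1), ("n3", 2)]))  # True
print(try_reserve(booked, [("n4", 0), ("n2", 1)]))             # False: n2 taken at t=1
```

Checking before committing keeps a rejected timetable from leaving partial claims behind, which matters when many AGVs request reservations concurrently.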
By: Zhenyu Gao, John-Paul Clarke, Javid Mardanov, Karen Marais
Unmanned Aerial Systems (UAS), an integral part of the Advanced Air Mobility (AAM) vision, are capable of performing a wide spectrum of tasks in urban environments. The societal integration of UAS is a pivotal challenge, as these systems must operate harmoniously within the constraints imposed by regulations and societal concerns. In complex urban environments, UAS safety has been a perennial obstacle to their large-scale deployment. To mitigate UAS safety risk and facilitate risk-aware UAS operations planning, we propose a novel concept called \textit{3D virtual risk terrain}. This concept converts public risk constraints in an urban environment into 3D exclusion zones that UAS operations should avoid to adequately reduce risk to Entities of Value (EoV). To implement the 3D virtual risk terrain, we develop a conditional probability framework that comprehensively integrates most existing basic models for UAS ground risk. To demonstrate the concept, we build risk terrains on a Chicago downtown model and observe their characteristics under different conditions. We believe that the 3D virtual risk terrain has the potential to become a new routine tool for risk-aware UAS operations planning, urban airspace management, and policy development. The same idea can also be extended to other forms of societal impacts, such as noise, privacy, and perceived risk.
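The core construction — thresholding a conditional-probability ground-risk field into exclusion zones — can be sketched on a 2D vertical slice of the airspace. The simple probability chain below (P(failure) × P(impact | failure) × P(casualty | impact)) and all numbers are illustrative stand-ins for the paper's framework, not its actual models.

```python
import numpy as np

nx, nz = 8, 4                                  # horizontal cells, altitude levels
x = np.arange(nx).reshape(nx, 1)
z = np.arange(nz).reshape(1, nz)
p_failure = 1e-5                               # per-operation failure probability
p_impact = np.exp(-0.5 * z)                    # lower altitude -> likelier impact
p_casualty = np.exp(-0.3 * x)                  # near x = 0 -> denser crowds
risk = p_failure * p_impact * p_casualty       # expected casualty risk per cell

target = 1e-6                                  # acceptable risk level
exclusion = risk > target                      # the virtual risk terrain
print(int(exclusion.sum()), "of", risk.size, "cells are exclusion zone")
```

The resulting boolean field plays the role of terrain for a route planner: trajectories simply avoid the excluded cells, and changing the target level or the underlying risk models reshapes the terrain rather than the planner.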
By: Prabhat Kumar, Josh Pinskier, David Howard, Matthijs Langelaar
Compliant mechanisms actuated by pneumatic loads are receiving increasing attention due to their direct applicability as soft robots that perform tasks using their flexible bodies. Using multiple materials to build them can further improve their performance and efficiency. Due to developments in additive manufacturing, the fabrication of multi-material soft robots is becoming a real possibility. To exploit this opportunity, there is a need for a dedicated design approach. This paper offers a systematic approach to developing such mechanisms using topology optimization. The extended SIMP scheme is employed for multi-material modeling. The design-dependent nature of the pressure load is modeled using Darcy's law with a volumetric drainage term. The flow coefficient of each element is interpolated using a smoothed Heaviside function. The obtained pressure field is converted to consistent nodal loads. The adjoint-variable approach is employed to determine the sensitivities. A robust formulation is employed, wherein a min-max optimization problem is formulated using the output displacements of the eroded and blueprint designs. Volume constraints are applied to the blueprint design, whereas the strain energy constraint is formulated with respect to the eroded design. The efficacy and success of the approach are demonstrated by designing pneumatically actuated multi-material gripper and contractor mechanisms. A numerical study confirms that multiple-material mechanisms perform better than their single-material counterparts.
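The flow-coefficient interpolation mentioned above can be sketched with one common tanh-based smoothed Heaviside projection; the exact function, constants, and coefficient bounds in the paper may differ, so treat the values below as placeholders. The idea is that void regions (low density) keep a large flow coefficient so pressure penetrates, while solid regions (high density) nearly block the Darcy flow.

```python
import numpy as np

def smoothed_heaviside(rho, beta=10.0, eta=0.5):
    """Projects density rho in [0, 1] toward 0/1 around the threshold eta;
    beta controls the sharpness of the transition."""
    num = np.tanh(beta * eta) + np.tanh(beta * (rho - eta))
    den = np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))
    return num / den

def flow_coefficient(rho, K_void=1.0, K_solid=1e-7):
    """Interpolate the Darcy flow coefficient between void and solid."""
    h = smoothed_heaviside(rho)
    return K_void + (K_solid - K_void) * h

rho = np.array([0.0, 0.5, 1.0])
print(flow_coefficient(rho))   # ≈ [K_void, mid value, K_solid]
```

Because the coefficient varies smoothly with the design density, the resulting pressure field (and hence the consistent nodal loads) stays differentiable, which is what makes the adjoint sensitivity analysis in the abstract possible.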
By: Long Chen, Jan Rottmayer, Lisa Kusch, Nicolas R. Gauger, Yinyu Ye
We formulate and solve data-driven aerodynamic shape design problems with distributionally robust optimization (DRO) approaches. Building on the findings of the work \cite{gotoh2018robust}, we study the connections between a class of DRO and the Taguchi method in the context of robust design optimization. Our preliminary computational experiments on aerodynamic shape optimization in transonic turbulent flow show promising design results.
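One way to see the DRO–Taguchi connection this abstract refers to (following the cited work of Gotoh et al.): for small ambiguity sets, a distributionally robust objective over scenarios behaves approximately like the empirical mean plus a dispersion penalty, which mirrors Taguchi's trade-off between mean performance and variability. The sketch below uses a mean-plus-standard-deviation surrogate; the drag samples and the penalty weight are invented numbers, not results from the paper.

```python
import numpy as np

def robust_objective(samples, lam):
    """Mean-plus-standard-deviation surrogate for a DRO objective
    (minimization); lam scales aversion to variability."""
    samples = np.asarray(samples, dtype=float)
    return samples.mean() + lam * samples.std()

# Drag of two candidate shapes under uncertain operating conditions:
design_a = [0.020, 0.021, 0.020, 0.021]   # slightly higher mean, low spread
design_b = [0.018, 0.016, 0.030, 0.017]   # lower mean, large spread

for lam in (0.0, 2.0):
    a = robust_objective(design_a, lam)
    b = robust_objective(design_b, lam)
    print(lam, "prefer A" if a < b else "prefer B")
```

With lam = 0 the nominally better (lower-mean) design B wins; turning on the dispersion penalty flips the choice to the more consistent design A, which is the robust-design behavior the comparison with the Taguchi method is about.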