arXiv daily

Robotics (cs.RO)

Fri, 21 Apr 2023

1. Learning Semantic-Agnostic and Spatial-Aware Representation for Generalizable Visual-Audio Navigation

Authors: Hongcheng Wang, Yuxuan Wang, Fangwei Zhong, Mingdong Wu, Jianwei Zhang, Yizhou Wang, Hao Dong

Abstract: Visual-audio navigation (VAN) is attracting increasing attention from the robotics community due to its broad applications, e.g., household robots and rescue robots. In this task, an embodied agent must search for and navigate to a sound source using egocentric visual and audio observations. However, existing methods are limited in two aspects: 1) poor generalization to unheard sound categories; 2) sample inefficiency in training. Focusing on these two problems, we propose a brain-inspired plug-and-play method to learn a semantic-agnostic and spatial-aware representation for generalizable visual-audio navigation. We meticulously design two auxiliary tasks that respectively accelerate learning of representations with these desired characteristics. With these two auxiliary tasks, the agent learns a spatially correlated representation of visual and audio inputs that carries over to environments with novel sounds and maps. Experimental results on realistic 3D scenes (Replica and Matterport3D) demonstrate that our method achieves better generalization performance when zero-shot transferred to scenes with unseen maps and unheard sound categories.
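
The abstract does not give implementation details, so the following is only a rough illustration of how an auxiliary spatial-prediction head might be attached to a navigation agent's encoder; the module names, dimensions, and the relative-direction target are all assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class SpatialAuxHead(nn.Module):
    """Hypothetical auxiliary head: predict the sound source's relative
    direction from the fused visual-audio embedding, nudging the encoder
    toward a spatial-aware, semantic-agnostic representation."""
    def __init__(self, embed_dim: int = 256, num_directions: int = 8):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(embed_dim, 128), nn.ReLU(),
            nn.Linear(128, num_directions),
        )

    def forward(self, fused_embedding: torch.Tensor) -> torch.Tensor:
        return self.head(fused_embedding)

# Auxiliary loss added to the agent's main objective (the weighting is a guess):
aux_head = SpatialAuxHead()
embedding = torch.randn(32, 256)               # fused visual-audio features
direction_labels = torch.randint(0, 8, (32,))  # discretized source direction
aux_loss = 0.1 * nn.CrossEntropyLoss()(aux_head(embedding), direction_labels)
```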

2. Contrastive Language, Action, and State Pre-training for Robot Learning

Authors: Krishan Rana, Andrew Melnik, Niko Sünderhauf

Abstract: In this paper, we introduce a method for unifying language, action, and state information in a shared embedding space to facilitate a range of downstream tasks in robot learning. Our method, Contrastive Language, Action, and State Pre-training (CLASP), extends the CLIP formulation by incorporating distributional learning, capturing the inherent complexities and one-to-many relationships in behaviour-text alignment. By employing distributional outputs for both text and behaviour encoders, our model effectively associates diverse textual commands with a single behaviour and vice-versa. We demonstrate the utility of our method for the following downstream tasks: zero-shot text-behaviour retrieval, captioning unseen robot behaviours, and learning a behaviour prior for language-conditioned reinforcement learning. Our distributional encoders exhibit superior retrieval and captioning performance on unseen datasets, and the ability to generate meaningful exploratory behaviours from textual commands, capturing the intricate relationships between language, action, and state. This work represents an initial step towards developing a unified pre-trained model for robotics, with the potential to generalise to a broad range of downstream tasks.
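
The exact CLASP formulation is not given in the abstract; as a minimal sketch of the general idea it describes (a CLIP-style contrastive objective where each encoder outputs a Gaussian rather than a point embedding), with the dimensions, reparameterization trick, and temperature all assumed:

```python
import torch
import torch.nn.functional as F

def distributional_contrastive_loss(text_mu, text_logvar, beh_mu, beh_logvar,
                                    temperature: float = 0.07):
    """Hypothetical CLIP-like loss: sample an embedding from each encoder's
    Gaussian output and align matching text/behaviour pairs while pushing
    apart mismatched ones within the batch."""
    text_z = text_mu + torch.randn_like(text_mu) * torch.exp(0.5 * text_logvar)
    beh_z = beh_mu + torch.randn_like(beh_mu) * torch.exp(0.5 * beh_logvar)
    text_z = F.normalize(text_z, dim=-1)
    beh_z = F.normalize(beh_z, dim=-1)
    logits = text_z @ beh_z.t() / temperature          # (batch, batch) similarities
    targets = torch.arange(text_z.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage with 16 text/behaviour pairs in a 64-dimensional embedding space:
mu_t, lv_t = torch.randn(16, 64), torch.zeros(16, 64)
mu_b, lv_b = torch.randn(16, 64), torch.zeros(16, 64)
print(distributional_contrastive_loss(mu_t, lv_t, mu_b, lv_b))
```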

3. A Comprehensive Review on Ontologies for Scenario-based Testing in the Context of Autonomous Driving

Authors: Maximilian Zipfl, Nina Koch, J. Marius Zöllner

Abstract: The verification and validation of autonomous driving vehicles remains a major challenge due to the high complexity of autonomous driving functions. Scenario-based testing is a promising method for validating such a complex system. Ontologies can be utilized to produce test scenarios that are both meaningful and relevant. One crucial aspect of this process is selecting the appropriate method for describing the entities involved. The level of detail and specific entity classes required will vary depending on the system being tested. It is important to choose an ontology that properly reflects these needs. This paper summarizes key representative ontologies for scenario-based testing and related use cases in the field of autonomous driving. The considered ontologies are classified according to their level of detail for both static facts and dynamic aspects. Furthermore, the ontologies are evaluated based on the presence of important entity classes and the relations between them.

4. Inverse Universal Traffic Quality -- a Criticality Metric for Crowded Urban Traffic Scenes

Authors: Barbara Schütt, Maximilian Zipfl, J. Marius Zöllner, Eric Sax

Abstract: An essential requirement for scenario-based testing is the identification of critical scenes and their associated scenarios. However, critical scenes, such as collisions, occur comparatively rarely. Accordingly, large amounts of data must be examined. A further issue is that recorded real-world traffic often consists of scenes with a high number of vehicles, and it can be challenging to determine which are the most critical vehicles regarding the safety of an ego vehicle. Therefore, we present the inverse universal traffic quality, a criticality metric for urban traffic independent of predefined adversary vehicles and vehicle constellations such as intersection trajectories or car-following scenarios. Our metric is universally applicable to different urban traffic situations, e.g., intersections or roundabouts, and can be adjusted to certain situations if needed. Additionally, in this paper, we evaluate the proposed metric and compare its results to other well-known criticality metrics in this field, such as time-to-collision or post-encroachment time.
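
The inverse universal traffic quality metric itself is not defined in the abstract; for context, here is a minimal sketch of one of the baseline criticality metrics it is compared against, time-to-collision in a simple car-following situation (constant speeds assumed, all numbers are placeholders):

```python
def time_to_collision(gap_m: float, v_follower: float, v_leader: float) -> float:
    """Time-to-collision for a follower approaching a leader in the same lane,
    assuming both keep their current speeds (a standard simplification).
    Returns infinity when the follower is not closing the gap."""
    closing_speed = v_follower - v_leader
    if closing_speed <= 0.0:
        return float("inf")
    return gap_m / closing_speed

# Example: 20 m gap, follower at 15 m/s, leader at 10 m/s -> TTC = 4 s
print(time_to_collision(20.0, 15.0, 10.0))
```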

5. 1001 Ways of Scenario Generation for Testing of Self-driving Cars: A Survey

Authors: Barbara Schütt, Joshua Ransiek, Thilo Braun, Eric Sax

Abstract: Scenario generation is one of the essential steps in scenario-based testing and, therefore, a significant part of the verification and validation of driver assistance functions and autonomous driving systems. However, the term scenario generation is used for many different methods, e.g., extraction of scenarios from naturalistic driving data or variation of scenario parameters. This survey aims to give a systematic overview of different approaches, establish categories of scenario acquisition and generation, and show that each group of methods has typical input and output types. It shows that although the term is used widely throughout the literature, the evaluated methods take different inputs, and the resulting scenarios differ in their level of abstraction when viewed systematically. Additionally, recent research and literature examples are given to underline this categorization.

6. AMP in the wild: Learning robust, agile, natural legged locomotion skills

Authors: Yikai Wang, Zheyuan Jiang, Jianyu Chen

Abstract: The successful transfer of a learned controller from simulation to the real world for a legged robot requires not only the ability to identify the system, but also accurate estimation of the robot's state. In this paper, we propose a novel algorithm that infers not only the parameters of the dynamic system but also important information about the robot's state from previous observations. We integrate our algorithm with Adversarial Motion Priors and achieve a robust, agile, and natural gait both in simulation and on a Unitree A1 quadruped robot in the real world. Empirical results demonstrate that our proposed algorithm enables the robot to traverse challenging terrain with lower power consumption than the baselines. Both qualitative and quantitative results are presented in this paper.
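
The abstract leaves the estimator unspecified; the sketch below only illustrates one common pattern such an algorithm could follow (an MLP that maps a window of past observations to an estimate of system parameters and unobserved state), with all dimensions and training targets assumed rather than taken from the paper.

```python
import torch
import torch.nn as nn

class HistoryEncoder(nn.Module):
    """Hypothetical sketch: encode a window of past proprioceptive
    observations into an estimate of system parameters (e.g. payload,
    friction) and unobserved state (e.g. base velocity), which is then
    fed to the locomotion policy alongside the current observation."""
    def __init__(self, obs_dim: int = 45, history_len: int = 50, latent_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(obs_dim * history_len, 256), nn.ELU(),
            nn.Linear(256, 128), nn.ELU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, obs_history: torch.Tensor) -> torch.Tensor:
        # obs_history: (batch, history_len, obs_dim)
        return self.net(obs_history)

# In simulation, the latent can be regressed onto the true (privileged)
# parameters/state with an MSE loss; at deployment the encoder runs from
# real observation history only.
encoder = HistoryEncoder()
print(encoder(torch.randn(4, 50, 45)).shape)  # torch.Size([4, 16])
```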

7. Online Time-Optimal Trajectory Planning on Three-Dimensional Race Tracks

Authors: Matthias Rowold, Levent Ögretmen, Ulf Kasolowsky, Boris Lohmann

Abstract: We propose an online planning approach for racing that generates the time-optimal trajectory for the upcoming track section. The resulting trajectory takes the current vehicle state, effects caused by three-dimensional track geometries, and speed limits dictated by the race rules into account. In each planning step, an optimal control problem is solved, making a quasi-steady-state assumption with a point mass model constrained by gg-diagrams. For its online applicability, we propose an efficient representation of the gg-diagrams and identify negligible terms to reduce the computational effort. We demonstrate that the online planning approach can reproduce the lap times of an offline-generated racing line during single vehicle racing. Moreover, it finds a new time-optimal solution when a deviation from the original racing line is necessary, e.g., during an overtaking maneuver. Motivated by the application in a rule-based race, we also consider the scenario of a speed limit lower than the current vehicle velocity. We introduce an initializable slack variable to generate feasible trajectories despite the constraint violation while reducing the velocity to comply with the rules.
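
The paper's efficient gg-diagram representation is not detailed in the abstract; as a minimal illustration of the underlying constraint (combined longitudinal and lateral acceleration bounded by an ellipse), with placeholder limits that in practice would depend on speed and the 3D track geometry:

```python
def within_gg_diagram(ax: float, ay: float,
                      ax_max: float = 8.0, ay_max: float = 12.0) -> bool:
    """Check a point-mass acceleration against an elliptic gg-diagram.
    ax, ay: longitudinal/lateral acceleration in m/s^2. The limits here are
    arbitrary placeholders; in a real planner they vary with speed, banking,
    and slope of the 3D track."""
    return (ax / ax_max) ** 2 + (ay / ay_max) ** 2 <= 1.0

# Example: braking at 5 m/s^2 while cornering at 9 m/s^2 is still feasible,
# but 7 m/s^2 of braking combined with 11 m/s^2 of cornering is not.
print(within_gg_diagram(-5.0, 9.0))   # True
print(within_gg_diagram(-7.0, 11.0))  # False
```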

8. IBBT: Informed Batch Belief Trees for Motion Planning Under Uncertainty

Authors: Dongliang Zheng, Panagiotis Tsiotras

Abstract: In this work, we propose the Informed Batch Belief Trees (IBBT) algorithm for motion planning under motion and sensing uncertainties. The original stochastic motion planning problem is divided into a deterministic motion planning problem and a graph search problem. We solve the deterministic planning problem using sampling-based methods such as PRM or RRG to construct a graph of nominal trajectories. Then, an informed cost-to-go heuristic for the original problem is computed based on the nominal trajectory graph. Finally, we grow a belief tree by searching over the graph using the proposed heuristic. IBBT interleaves batch state sampling, nominal trajectory graph construction, heuristic computation, and graph search to find belief-space motion plans. IBBT is an anytime, incremental algorithm: as more batches of samples are added to the graph, the motion plans it finds converge to the optimal one. IBBT is efficient because it reuses results between sequential iterations. The belief tree search is an ordered search guided by an informed heuristic. We test IBBT in different planning environments. Our numerical investigation confirms that IBBT finds non-trivial motion plans and is faster than previous, similar methods.
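
As a small, self-contained illustration of one ingredient the abstract names, the informed cost-to-go heuristic, the sketch below runs a backward Dijkstra pass over a toy nominal trajectory graph; the graph and function are stand-ins, not the authors' implementation.

```python
import heapq

def cost_to_go(edges, goal):
    """Informed cost-to-go heuristic on the nominal trajectory graph:
    Dijkstra run backwards from the goal over deterministic edge costs.
    `edges` maps node -> list of (neighbor, cost); the toy graph below
    stands in for one built with PRM/RRG."""
    reverse = {}
    for u, nbrs in edges.items():
        for v, c in nbrs:
            reverse.setdefault(v, []).append((u, c))
    dist = {goal: 0.0}
    pq = [(0.0, goal)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, c in reverse.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                heapq.heappush(pq, (dist[v], v))
    return dist  # lower bounds used to order the belief tree search

# Toy nominal graph: start -> a -> goal (cost 2) and start -> goal directly (cost 3)
edges = {"start": [("a", 1.0), ("goal", 3.0)], "a": [("goal", 1.0)], "goal": []}
print(cost_to_go(edges, "goal"))  # start's cost-to-go is 2.0, via node a
```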

9. RGB-D Inertial Odometry for a Resource-Restricted Robot in Dynamic Environments

Authors: Jianheng Liu, Xuanfu Li, Yueqian Liu, Haoyao Chen

Abstract: Current simultaneous localization and mapping (SLAM) algorithms perform well in static environments but easily fail in dynamic environments. Recent works introduce deep learning-based semantic information into SLAM systems to reduce the influence of dynamic objects. However, achieving robust localization in dynamic environments remains challenging for resource-restricted robots. This paper proposes Dynamic-VINS, a real-time RGB-D inertial odometry system for resource-restricted robots in dynamic environments. Three main threads run in parallel: object detection, feature tracking, and state optimization. Dynamic-VINS combines object detection and depth information for dynamic feature recognition and achieves performance comparable to semantic segmentation. It adopts grid-based feature detection and proposes a fast and efficient method to extract high-quality FAST feature points. An IMU is used to predict motion for feature tracking and a moving-consistency check. The proposed method is evaluated on both public datasets and real-world applications and shows competitive localization accuracy and robustness in dynamic environments. To the best of our knowledge, it is currently the best-performing real-time RGB-D inertial odometry for resource-restricted platforms in dynamic environments. The proposed system is open source at: https://github.com/HITSZ-NRSL/Dynamic-VINS.git
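
The repository linked above contains the actual implementation; purely as an illustration of the grid-based FAST detection idea mentioned in the abstract, here is a simplified OpenCV sketch (grid size and detector threshold are arbitrary, and this is not the authors' code):

```python
import cv2

def grid_fast_features(gray, rows: int = 8, cols: int = 10, threshold: int = 20):
    """Detect FAST corners per grid cell and keep only the strongest corner
    in each cell, so features stay evenly distributed and cheap to track."""
    detector = cv2.FastFeatureDetector_create(threshold=threshold)
    h, w = gray.shape
    ch, cw = h // rows, w // cols
    keypoints = []
    for r in range(rows):
        for c in range(cols):
            cell = gray[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
            kps = detector.detect(cell, None)
            if not kps:
                continue
            best = max(kps, key=lambda k: k.response)
            # Shift the cell-local coordinates back into full-image coordinates.
            keypoints.append(cv2.KeyPoint(best.pt[0] + c * cw,
                                          best.pt[1] + r * ch,
                                          best.size))
    return keypoints

# Usage on a grayscale frame:
# gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# pts = grid_fast_features(gray)
```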

10. Minsight: A Fingertip-Sized Vision-Based Tactile Sensor for Robotic Manipulation

Authors: Iris Andrussow, Huanbo Sun, Katherine J. Kuchenbecker, Georg Martius

Abstract: Intelligent interaction with the physical world requires perceptual abilities beyond vision and hearing; vibrant tactile sensing is essential for autonomous robots to dexterously manipulate unfamiliar objects or safely contact humans. Therefore, robotic manipulators need high-resolution touch sensors that are compact, robust, inexpensive, and efficient. The soft vision-based haptic sensor presented herein is a miniaturized and optimized version of the previously published sensor Insight. Minsight has the size and shape of a human fingertip and uses machine learning methods to output high-resolution maps of 3D contact force vectors at 60 Hz. Experiments confirm its excellent sensing performance, with a mean absolute force error of 0.07 N and contact location error of 0.6 mm across its surface area. Minsight's utility is shown in two robotic tasks on a 3-DoF manipulator. First, closed-loop force control enables the robot to track the movements of a human finger based only on tactile data. Second, the informative value of the sensor output is shown by detecting whether a hard lump is embedded within a soft elastomer with an accuracy of 98%. These findings indicate that Minsight can give robots the detailed fingertip touch sensing needed for dexterous manipulation and physical human-robot interaction.
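
The published network architecture is not described in the abstract; the sketch below is only a generic stand-in showing the kind of image-to-force-map regression Minsight performs, with all layer choices assumed.

```python
import torch
import torch.nn as nn

class ForceMapNet(nn.Module):
    """Illustrative stand-in (not the published architecture): map the
    sensor's internal camera image to a per-pixel 3D contact force map,
    the kind of output Minsight produces at 60 Hz."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),  # channels: Fx, Fy, Fz
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (batch, 3, H, W) camera frame from inside the fingertip
        return self.decoder(self.encoder(image))

# Such a model would be trained by regressing against ground-truth forces
# from a calibration rig, e.g. with nn.MSELoss().
print(ForceMapNet()(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 3, 64, 64])
```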

11. Multi-level decision framework collision avoidance algorithm in emergency scenarios

Authors: Guoying Chen, Xinyu Wang, Min Hua, Wei Liu

Abstract: With the rapid development of autonomous driving, academic attention has increasingly focused on anti-collision systems for emergency scenarios, which have a crucial impact on driving safety. While numerous anti-collision strategies have emerged in recent years, most of them only consider steering or braking. The dynamic and complex nature of the driving environment presents a challenge to developing robust collision avoidance algorithms for emergency scenarios. To address complex, dynamic obstacle scenes and improve lateral maneuverability, this paper establishes a multi-level decision-making obstacle avoidance framework that employs a safe distance model and integrates emergency steering and emergency braking to complete the obstacle avoidance process. This approach helps avoid the high-risk vehicle instability that can result from separating steering and braking actions. In the emergency steering algorithm, we define the collision hazard moment and propose a multi-constraint dynamic collision avoidance planning method that considers the driving area. Simulation results demonstrate that the decision-making collision avoidance logic can be applied to dynamic collision avoidance scenarios in complex traffic situations, effectively completing the obstacle avoidance task in emergency scenarios and improving the safety of autonomous driving.
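
The paper's safe distance model and decision levels are not specified in the abstract; the sketch below uses a generic textbook stopping-distance formula and a toy decision rule purely to illustrate how braking and steering choices might be combined.

```python
def stopping_distance(v: float, mu: float = 0.8,
                      reaction_time: float = 0.3, g: float = 9.81) -> float:
    """Simple safe-distance model (a generic textbook form, not necessarily
    the paper's): distance travelled during the reaction time plus the
    braking distance at maximum deceleration mu * g."""
    return v * reaction_time + v ** 2 / (2.0 * mu * g)

def choose_maneuver(gap: float, v: float, lateral_clearance_ok: bool) -> str:
    """Toy multi-level decision: brake if the gap still allows a full stop,
    otherwise steer if an adjacent free corridor exists, else combine both."""
    if gap > stopping_distance(v):
        return "emergency braking"
    if lateral_clearance_ok:
        return "emergency steering"
    return "combined steering and braking"

# Example: a 30 m gap at 20 m/s needs ~31.5 m to stop, so braking alone is
# not enough and the logic escalates to steering.
print(stopping_distance(20.0))            # ~31.5 m
print(choose_maneuver(30.0, 20.0, True))  # "emergency steering"
```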

12. Robot-Enabled Construction Assembly with Automated Sequence Planning based on ChatGPT: RoboGPT

Authors: Hengxu You, Yang Ye, Tianyu Zhou, Qi Zhu, Jing Du

Abstract: Robot-based assembly in construction has emerged as a promising solution to address numerous challenges such as increasing costs, labor shortages, and the demand for safe and efficient construction processes. One of the main obstacles to realizing the full potential of these robotic systems is the need for effective and efficient sequence planning for construction tasks. Current approaches, including mathematical and heuristic techniques or machine learning methods, face limitations in their adaptability and scalability to dynamic construction environments. To expand the sequential-understanding ability of current robot systems, this paper introduces RoboGPT, a novel system that leverages the advanced reasoning capabilities of ChatGPT, a large language model, for automated sequence planning in robot-based assembly applied to construction tasks. The proposed system adapts ChatGPT for construction sequence planning and demonstrates its feasibility and effectiveness through an experimental evaluation comprising two case studies and 80 trials involving real construction tasks. The results show that RoboGPT-driven robots can handle complex construction operations and adapt to changes on the fly. This paper contributes to the ongoing efforts to enhance the capabilities and performance of robot-based assembly systems in the construction industry, and it paves the way for further integration of large language model technologies in the field of construction robotics.
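
The paper's prompt design and integration details are not given in the abstract; the sketch below only shows the general pattern of querying a chat model for an ordered assembly sequence using the OpenAI Python client (v1 interface). The prompt, model choice, and downstream parsing are assumptions, not the RoboGPT implementation.

```python
from openai import OpenAI  # OpenAI Python client, v1-style interface

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical task description illustrating how assembly constraints might
# be phrased; the actual prompt engineering used by RoboGPT is not public
# in the abstract.
task_description = (
    "Plan an assembly sequence for a frame made of 4 columns, 4 beams, and "
    "1 roof panel. Columns must be placed before the beams they support, "
    "and the roof panel last. Output a numbered list of pick-and-place steps."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a construction assembly sequence planner."},
        {"role": "user", "content": task_description},
    ],
)
print(response.choices[0].message.content)  # would be parsed into robot commands downstream
```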