arXiv daily

Robotics (cs.RO)

Fri, 14 Apr 2023

1. An NMPC-ECBF Framework for Dynamic Motion Planning and Execution in vision-based Human-Robot Collaboration

Authors: Dianhao Zhang, Mien Van, Pantelis Sopasakis, Seán McLoone

Abstract: To enable safe and effective human-robot collaboration (HRC) in smart manufacturing, seamless integration of sensing, cognition, and prediction into the robot controller is critical for real-time awareness, response, and communication inside a heterogeneous environment (robots, humans, and equipment). The proposed approach takes advantage of the prediction capabilities of nonlinear model predictive control (NMPC) to perform safe path planning based on feedback from a vision system. To satisfy the requirement of real-time path planning, an embedded solver based on a penalty method is applied. However, due to tight sampling times, NMPC solutions are only approximate, and hence the safety of the system cannot be guaranteed. To address this, we formulate a novel safety-critical paradigm with an exponential control barrier function (ECBF) used as a safety filter. We also design a simple human-robot collaboration scenario in V-REP to evaluate the performance of the proposed controller and investigate whether integrating human pose prediction can help with safe and efficient collaboration. The robot uses OptiTrack cameras for perception and dynamically generates collision-free trajectories to the predicted target interactive position. Results for a number of different configurations confirm the efficiency of the proposed motion planning and execution framework, yielding a 19.8% reduction in execution time for the HRC task considered.
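The abstract does not spell out the filter itself; a minimal sketch of the generic CBF safety-filter step — projecting the nominal NMPC control onto the half-space of a linear(ized) barrier constraint — might look like this, where the constraint row `a` and bound `b` are hypothetical stand-ins for the ECBF condition, not the paper's formulation:

```python
import numpy as np

def ecbf_safety_filter(u_nom, a, b):
    """Project a nominal control onto the half-space {u : a @ u >= b}.

    Closed-form solution of  min ||u - u_nom||^2  s.t.  a @ u >= b,
    i.e. the quadratic program a CBF-based safety filter solves when a
    single (exponential) barrier constraint is active.
    """
    slack = a @ u_nom - b
    if slack >= 0.0:                 # nominal control already safe
        return u_nom
    # otherwise shift u_nom minimally onto the constraint boundary
    return u_nom + a * (-slack) / (a @ a)

# Toy usage with a 2-D control input; a and b would come from the
# ECBF condition (e.g. a row of L_gL_f h(x) and the corresponding bound).
u_nom = np.array([1.0, -0.5])
a = np.array([0.0, 1.0])
b = 0.2
print(ecbf_safety_filter(u_nom, a, b))   # -> [1.0, 0.2]
```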

2. Study on Soft Robotic Pinniped Locomotion

Authors: Dimuthu D. K. Arachchige, Tanmay Varshney, Umer Huzaifa, Iyad Kanj, Thrishantha Nanayakkara, Yue Chen, Hunter B. Gilbert, Isuru S. Godage

Abstract: Legged locomotion is a highly promising but under-researched area of soft robotics. The compliant limbs of soft-limbed robots offer numerous benefits, including the ability to regulate impacts, tolerate falls, and navigate through tight spaces. These robots have the potential to be used for various applications, such as search and rescue, inspection, and surveillance. The state of the art still faces many challenges, including limited degrees of freedom, a lack of diversity in gait trajectories, insufficient limb dexterity, and limited payload capabilities. To address these challenges, we develop a modular soft-limbed robot that can mimic the locomotion of pinnipeds. Through a modular design approach, we aim to create a robot with improved degrees of freedom, gait trajectory diversity, limb dexterity, and payload capabilities. We derive a complete floating-base kinematic model of the proposed robot and use it to generate and experimentally validate a variety of locomotion gaits. Results show that the proposed robot is capable of replicating these gaits effectively. We compare the locomotion trajectories under different gait parameters against our modeling results to demonstrate the validity of the proposed gait models.
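The gait models themselves are not given in the abstract; as a rough illustration of how cyclic gaits for modular limbs are commonly parameterized (all names and values below are hypothetical, not the paper's):

```python
import numpy as np

def limb_reference(t, amplitude, frequency, phase_offset):
    """Sinusoidal bending reference for one soft limb.

    A common low-level gait parameterization: every limb tracks the
    same base oscillation, shifted by a per-limb phase offset.
    """
    return amplitude * np.sin(2 * np.pi * frequency * t + phase_offset)

# Hypothetical 4-limb crawling gait: diagonal limbs in phase,
# adjacent limbs half a cycle apart.
t = np.linspace(0.0, 2.0, 200)
phases = [0.0, np.pi, 0.0, np.pi]
refs = [limb_reference(t, amplitude=0.3, frequency=0.5, phase_offset=p)
        for p in phases]
```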

3. Collaborative Ground-Aerial Multi-Robot System for Disaster Response Missions with a Low-Cost Drone Add-On for Off-the-Shelf Drones

Authors: Shalutha Rajapakshe, Dilanka Wickramasinghe, Sahan Gurusinghe, Deepana Ishtaweera, Bhanuka Silva, Peshala Jayasekara, Nick Panitz, Paul Flick, Navinda Kottege

Abstract: In disaster-stricken environments, it is vital to assess the damage quickly, analyse the stability of the environment, and allocate resources to the most vulnerable areas where victims might be present. These missions are difficult and dangerous for humans to conduct directly. We investigate a collaborative approach that uses the complementary capabilities of aerial and ground robots to address this problem. With its wider field of view, faster speed, and compact size, the aerial robot explores the area and creates a 3D feature-based map graph of the environment while providing a live video stream to the ground control station. Once the aerial robot finishes the exploration run, the ground control station processes the map and sends it to the ground robot. The ground robot, with its longer operation time, static stability, payload delivery, and tele-conference capabilities, can then autonomously navigate to the identified high-vulnerability locations. We have conducted experiments using a quadcopter and a hexapod robot in an indoor modelled environment with obstacles and uneven ground. Additionally, we have developed a low-cost drone add-on with value-added capabilities, such as victim detection, that can be attached to an off-the-shelf drone. The system was assessed for cost-effectiveness, energy efficiency, and scalability.

4. Near Field iToF LIDAR Depth Improvement from Limited Number of Shots

Authors: Mena Nagiub, Thorsten Beuth, Ganesh Sistu, Heinrich Gotzig, Ciarán Eising

Abstract: Indirect time-of-flight (iToF) LiDARs calculate a scene's depth from the phase shift between transmitted and received laser signals whose amplitudes are modulated at a predefined frequency. Unfortunately, this method produces ambiguous depth estimates when the phase shift exceeds $2\pi$. Current state-of-the-art methods overcome this ambiguity by using raw samples acquired at two distinct modulation frequencies. However, this comes at the cost of increased stress on the laser components and higher operating temperatures, which reduce their lifetime and increase power consumption. In our work, we study two different methods to recover the full depth range of the LiDAR using fewer raw-sample shots at a single modulation frequency, supported by the sensor's grayscale output, thereby reducing the laser components' stress and power consumption.
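The underlying relation is the standard one for amplitude-modulated iToF: depth is proportional to the measured phase, $d = c\,\varphi / (4\pi f)$, and since $\varphi$ is only measured modulo $2\pi$, the unambiguous range is capped at $c/(2f)$. A worked sketch of the wrap-around (this is the textbook relation, not the paper's recovery method; the 20 MHz modulation frequency is an example value):

```python
import numpy as np

C = 299_792_458.0                          # speed of light [m/s]

def itof_depth(phi, f_mod):
    """Depth implied by phase shift phi at modulation frequency f_mod."""
    return C * phi / (4 * np.pi * f_mod)

f_mod = 20e6                               # 20 MHz modulation (example)
print(C / (2 * f_mod))                     # unambiguous range: ~7.49 m

# A target at 10 m wraps: its measured phase is indistinguishable
# from that of a target at 10 - 7.49 = 2.51 m.
true_d = 10.0
phi_wrapped = (4 * np.pi * f_mod * true_d / C) % (2 * np.pi)
print(itof_depth(phi_wrapped, f_mod))      # ~2.51 m, not 10 m
```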

5. FM-Loc: Using Foundation Models for Improved Vision-based Localization

Authors: Reihaneh Mirjalili, Michael Krawez, Wolfram Burgard

Abstract: Visual place recognition is essential for vision-based robot localization and SLAM. Despite the tremendous progress made in recent years, place recognition in changing environments remains challenging. A promising approach to coping with appearance variations is to leverage high-level semantic features such as objects or place categories. In this paper, we propose FM-Loc, a novel image-based localization approach based on Foundation Models that uses the Large Language Model GPT-3 in combination with the Visual-Language Model CLIP to construct a semantic image descriptor that is robust to severe changes in scene geometry and camera viewpoint. We deploy CLIP to detect objects in an image, GPT-3 to suggest potential room labels based on the detected objects, and CLIP again to propose the most likely location label. The object labels and the scene label together constitute an image descriptor that we use to calculate a similarity score between query and database images. We validate our approach on real-world data that exhibit significant changes in camera viewpoint and object placement between the database and query trajectories. The experimental results demonstrate that our method is applicable to a wide range of indoor scenarios without the need for training or fine-tuning.
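The abstract does not specify the similarity measure; one natural instantiation of comparing descriptors built from a set of object labels plus a scene label is sketched below (the Jaccard-plus-scene-bonus scoring is a hypothetical stand-in, not the paper's formula):

```python
def descriptor_similarity(query, database_entry, scene_weight=0.5):
    """Compare two semantic image descriptors of the form
    (set_of_object_labels, scene_label).

    Hypothetical scoring: Jaccard overlap of the object-label sets,
    plus a bonus when the room/scene labels agree.
    """
    q_objects, q_scene = query
    d_objects, d_scene = database_entry
    union = q_objects | d_objects
    jaccard = len(q_objects & d_objects) / len(union) if union else 0.0
    return jaccard + (scene_weight if q_scene == d_scene else 0.0)

query = ({"monitor", "keyboard", "chair"}, "office")
db_entry = ({"monitor", "chair", "plant"}, "office")
print(descriptor_similarity(query, db_entry))   # 2/4 + 0.5 = 1.0
```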

6. A Framework for Fast Prototyping of Photo-realistic Environments with Multiple Pedestrians

Authors: Sara Casao, Andrés Otero, Álvaro Serra-Gómez, Ana C. Murillo, Javier Alonso-Mora, Eduardo Montijano

Abstract: Robotic applications involving people often require advanced perception systems to better understand complex real-world scenarios. To address this challenge, photo-realistic physics simulators are gaining popularity as a means of generating accurately labeled data and designing scenarios for evaluating generalization capabilities, e.g., under lighting changes, camera movements, or different weather conditions. We develop a photo-realistic framework built on Unreal Engine and AirSim to easily generate scenarios with pedestrians and mobile robots. The framework can generate random and customized trajectories for each person and provides up to 50 ready-to-use people models along with an API for retrieving their metadata. We demonstrate the usefulness of the proposed framework with a use case of multi-target tracking, a popular problem in real pedestrian scenarios. The notable feature variability in the obtained perception data is presented and evaluated.
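The framework's own metadata API is not documented in the abstract; as an illustration, querying actor poses through AirSim's standard Python client could look like this (`simListSceneObjects` and `simGetObjectPose` are real AirSim calls, but the "Pedestrian_*" naming convention is a hypothetical assumption):

```python
import airsim

# Connect to a running AirSim/Unreal instance.
client = airsim.VehicleClient()
client.confirmConnection()

# List scene actors whose names match a regex. "Pedestrian_.*" is a
# hypothetical naming convention, not something the paper specifies.
pedestrians = client.simListSceneObjects("Pedestrian_.*")

for name in pedestrians:
    pose = client.simGetObjectPose(name)
    p = pose.position
    print(f"{name}: x={p.x_val:.2f} y={p.y_val:.2f} z={p.z_val:.2f}")
```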

7. Plant-inspired behavior-based controller to enable reaching in redundant continuum robot arms

Authors: Enrico Donato, Yasmin Tauqeer Ansari, Cecilia Laschi, Egidio Falotico

Abstract: Enabling reaching capabilities in highly redundant continuum robot arms is an active area of research. Existing solutions comprise task-space controllers whose proper functioning is still limited to laboratory environments. In contrast, this work proposes a novel plant-inspired behavior-based controller that exploits information from proximity sensors embedded near the end-effector to move towards a desired spatial target. The controller is tested on a 9-DoF modular cable-driven continuum arm reaching multiple setpoints in space. The results are promising for the deployment of these systems in unstructured environments.
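Only the high-level behavior is given in the abstract; a minimal sketch of the general idea — steering the end-effector along the direction of the strongest target response from the proximity sensors — might look like this (all quantities and the weighted-vector rule are illustrative, not the paper's controller):

```python
import numpy as np

def reach_step(proximity_readings, directions, gain=0.5):
    """One behavior-based update: head toward the weighted average of
    the sensor directions, weighted by target-signal strength.

    proximity_readings: per-sensor target-signal strengths
    directions: unit vectors (n_sensors x 3), one per sensor axis
    """
    w = np.asarray(proximity_readings, dtype=float)
    if w.sum() <= 0.0:
        return np.zeros(3)            # no target sensed: hold position
    heading = (w[:, None] * directions).sum(axis=0)
    n = np.linalg.norm(heading)
    return gain * heading / n if n > 0.0 else np.zeros(3)

dirs = np.eye(3)                      # three orthogonal sensor axes
print(reach_step([0.1, 0.8, 0.1], dirs))   # step mostly along +y
```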

8. EV-Catcher: High-Speed Object Catching Using Low-latency Event-based Neural Networks

Authors: Ziyun Wang, Fernando Cladera Ojeda, Anthony Bisulco, Daewon Lee, Camillo J. Taylor, Kostas Daniilidis, M. Ani Hsieh, Daniel D. Lee, Volkan Isler

Abstract: Event-based sensors have recently drawn increasing interest in robotic perception due to their lower latency, higher dynamic range, and lower bandwidth requirements compared to standard CMOS-based imagers. These properties make them ideal tools for real-time perception tasks in highly dynamic environments. In this work, we demonstrate an application where event cameras excel: accurately estimating the impact location of fast-moving objects. We introduce a lightweight event representation called the Binary Event History Image (BEHI) to encode event data at low latency, as well as a learning-based approach that allows real-time inference of a confidence-enabled control signal to the robot. To validate our approach, we present an experimental catching system that catches fast-flying ping-pong balls. We show that the system achieves a success rate of 81% in catching balls targeted at different locations, at velocities of up to 13 m/s, even on compute-constrained embedded platforms such as the Nvidia Jetson NX.
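The abstract names the representation but not its exact construction; a plausible minimal reading — mark every pixel that saw at least one event within a time window — is sketched below (dimensions and the event format are assumptions; the paper may add channels or time encoding):

```python
import numpy as np

def binary_event_history_image(events, height, width):
    """Collapse a window of events into a binary image: a pixel is 1
    if any event fired there during the window.

    events: iterable of (x, y) pixel coordinates
    """
    behi = np.zeros((height, width), dtype=np.uint8)
    for x, y in events:
        behi[y, x] = 1
    return behi

events = [(10, 5), (11, 5), (12, 6)]      # toy event stream
img = binary_event_history_image(events, height=64, width=64)
print(img.sum())                          # 3 active pixels
```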

9. Learning Perceptive Bipedal Locomotion over Irregular Terrain

Authors: Bart van Marum, Matthia Sabatelli, Hamidreza Kasaei

Abstract: In this paper, we propose a novel bipedal locomotion controller that uses noisy exteroception to traverse a wide variety of terrains. Building on cutting-edge advances in attention-based belief encoding for quadrupedal locomotion, our work extends these methods to the bipedal domain, resulting in a robust and reliable internal belief of the terrain ahead despite noisy sensor inputs. Additionally, we present a reward function that allows the controller to successfully traverse irregular terrain. We compare our method with a proprioceptive baseline and show that it traverses a wide variety of terrains and greatly outperforms the state of the art in terms of robustness, speed, and efficiency.
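The reward terms themselves are not given in the abstract; a typical shape for such locomotion rewards — track a commanded velocity while lightly penalizing effort and instability — might look like this (all weights and terms are hypothetical, not the paper's):

```python
import numpy as np

def locomotion_reward(v_actual, v_cmd, torques, base_tilt,
                      w_track=1.0, w_effort=1e-4, w_tilt=0.5):
    """Illustrative locomotion reward.

    Rewards matching the commanded planar velocity, penalizes joint
    torque magnitude, and penalizes deviation of the base from upright
    (a common proxy for stability on irregular terrain).
    """
    tracking = np.exp(-np.sum((v_actual - v_cmd) ** 2))
    effort = np.sum(np.square(torques))
    return w_track * tracking - w_effort * effort - w_tilt * base_tilt ** 2

r = locomotion_reward(np.array([0.9, 0.0]), np.array([1.0, 0.0]),
                      torques=np.ones(10) * 5.0, base_tilt=0.1)
print(r)   # ~0.96
```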

10. An Open Source Design Optimization Toolbox Evaluated on a Soft Finger

Authors: Stefan Escaida Navarro, Tanguy Navez, Olivier Goury, Luis Molina, Christian Duriez

Abstract: In this paper, we introduce a novel open-source toolbox for design optimization in soft robotics. We consider design optimization an important trend in soft robotics that is changing the way designs are shared and adopted. We evaluate the toolbox on the example of a cable-driven, sensorized soft finger. For devices like these, which feature both actuation and sensing, the need for multi-objective optimization arises naturally, because at the very least a trade-off between the two aspects has to be found. Multi-objective optimization capability is therefore one of the central features of the proposed toolbox. We optimize the soft finger and show that, for the established metrics, the extreme points of the trade-off between sensing and actuation are indeed far apart on actually fabricated devices. Furthermore, we provide an in-depth analysis of the example's sim-to-real behavior, taking into account factors such as the mesh density in the simulation, mechanical parameters, and fabrication tolerances.
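The core multi-objective notion the abstract leans on — keeping only the non-dominated designs along the sensing/actuation trade-off — can be sketched as follows (the scores and the brute-force filter are illustrative; the paper's actual metrics and optimizer differ):

```python
def pareto_front(designs):
    """Return the non-dominated subset of (sensing, actuation) score
    pairs, assuming both objectives are to be maximized.

    Brute-force O(n^2) dominance check, fine for small design sets.
    """
    front = []
    for i, a in enumerate(designs):
        dominated = any(
            all(b[k] >= a[k] for k in range(len(a))) and b != a
            for j, b in enumerate(designs) if j != i
        )
        if not dominated:
            front.append(a)
    return front

designs = [(0.9, 0.2), (0.5, 0.5), (0.2, 0.9), (0.4, 0.4)]
print(pareto_front(designs))   # (0.4, 0.4) is dominated by (0.5, 0.5)
```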