arXiv daily

Robotics (cs.RO)

Mon, 17 Apr 2023

1. Enabling safe walking rehabilitation on the exoskeleton Atalante: experimental results

Authors: Maxime Brunet (CAS), Marine Pétriaux (CAS), Florent Di Meglio (CAS), Nicolas Petit (CAS)

Abstract: This paper presents a control architecture enabling rehabilitation of walking-impaired patients with the lower-limb exoskeleton Atalante. Atalante's control system is modified to allow the patient to contribute to the walking motion through their own efforts. Only the swing-leg degree of freedom along the nominal path is relaxed. An online trajectory optimization checks that the muscle forces do not jeopardize stability. The optimization generates reference trajectories that satisfy several key constraints from the current point to the end of the step. One of the constraints requires that the center of pressure remain inside the support polygon, which ensures that the support-leg subsystem successfully tracks the reference trajectory. As a result of the presented work, the robot provides a non-zero force in the direction of motion only when required, helping the patient walk fast enough to maintain balance (or preventing them from going too fast). Experimental results are reported. They illustrate that variations of $\pm$50% in step duration can be achieved in response to the patient's efforts and that many steps are completed without falling. A video of the experiments can be viewed at https://youtu.be/_1A-2nLy5ZE
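
The center-of-pressure constraint described above lends itself to a compact check. Below is a minimal sketch, assuming a convex support polygon with counter-clockwise vertices; the function name and safety margin are illustrative, not from the paper.

```python
import numpy as np

def cop_inside_support_polygon(cop, polygon, margin=0.01):
    """Check that a center-of-pressure point lies inside a convex
    support polygon (vertices in counter-clockwise order), with a
    safety margin in metres.  Returns True if the constraint holds."""
    n = len(polygon)
    for i in range(n):
        a, b = polygon[i], polygon[(i + 1) % n]
        edge = b - a
        # Inward normal of a CCW polygon edge.
        normal = np.array([-edge[1], edge[0]])
        normal /= np.linalg.norm(normal)
        # Signed distance of the CoP from this edge must exceed the margin.
        if np.dot(cop - a, normal) < margin:
            return False
    return True

# Example: rectangular support foot, CoP near the centre.
foot = np.array([[0.0, 0.0], [0.2, 0.0], [0.2, 0.1], [0.0, 0.1]])
print(cop_inside_support_polygon(np.array([0.1, 0.05]), foot))  # True
```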

2. 2D Forward Looking Sonar Simulation with Ground Echo Modeling

Authors: Yusheng Wang, Chujie Wu, Yonghoon Ji, Hiroshi Tsuchiya, Hajime Asama, Atsushi Yamashita

Abstract: Imaging sonar produces clear images in underwater environments, independent of water turbidity and lighting conditions. Next-generation 2D forward-looking sonars are compact and able to generate high-resolution images, which facilitates underwater robotics research. Given the difficulties and expense of conducting experiments in underwater environments, considerable work has focused on sonar image simulation. However, sonar artifacts such as multi-path reflection, which cannot be ignored in water-tank environments, have not been sufficiently discussed. In this paper, we focus on the influence of echoes from the flat ground. We propose a method to physically simulate the ground-echo effect in acoustic images. We model the multi-bounce situations within the single-bounce framework for computational efficiency. We compare a real image captured in a water tank with the synthetic images to validate the proposed method.
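
The multi-bounce-via-single-bounce idea can be illustrated with the method of images: reflecting each scatterer about the ground plane turns a bent ground-bounce path into a straight single-bounce path to a virtual scatterer. A minimal sketch follows; the reflection-loss coefficient and function names are assumptions, not the paper's implementation.

```python
import numpy as np

def echoes_with_ground_bounce(sonar_pos, scatterers, ground_z=0.0,
                              reflection_loss=0.7):
    """Return (two-way path length, relative intensity) pairs for direct
    echoes and one-bounce ground echoes, modelled by mirroring each
    scatterer about the plane z = ground_z (method of images)."""
    echoes = []
    for p in scatterers:
        # Direct path: sonar -> scatterer -> sonar.
        r_direct = np.linalg.norm(p - sonar_pos)
        echoes.append((2 * r_direct, 1.0))
        # Ground-bounce path: reflect the scatterer about the ground
        # plane; the bent ray unfolds into a straight line to the image.
        p_img = p.copy()
        p_img[2] = 2 * ground_z - p[2]
        r_bounce = np.linalg.norm(p_img - sonar_pos)
        echoes.append((r_direct + r_bounce, reflection_loss))
    return echoes

sonar = np.array([0.0, 0.0, 1.0])
targets = [np.array([2.0, 0.0, 0.5])]
for rng, amp in echoes_with_ground_bounce(sonar, targets):
    print(f"two-way path {rng:.2f} m, relative amplitude {amp:.2f}")
```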

3. The Impact of Frame-Dropping on Performance and Energy Consumption for Multi-Object Tracking

Authors: Matti Henning, Michael Buchholz, Klaus Dietmayer

Abstract: The safety of automated vehicles (AVs) relies on the representation of their environment. Consequently, state-of-the-art AVs employ powerful sensor systems to achieve the best possible environment representation at all times. Although these high-performing systems achieve impressive results, they place significant demands on the processing capabilities of an AV's computational hardware and on its energy consumption. To enable a dynamic adaptation of such perception systems based on the situational perception requirements, we introduce a model-agnostic method for the scalable employment of single-frame object detection models using frame-dropping in tracking-by-detection systems. We evaluate our approach on the KITTI 3D Tracking Benchmark, showing that significant energy savings can be achieved at acceptable performance degradation: up to a 28% reduction in energy consumption at a 6.6% decline in HOTA score.
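
A minimal sketch of the frame-dropping idea in a tracking-by-detection loop: the expensive detector runs only on every k-th frame, while the tracker's motion model carries the tracks in between. The `detector` and `tracker` interfaces here are hypothetical, not the paper's code.

```python
def track_with_frame_dropping(frames, detector, tracker, keep_every=3):
    """Tracking-by-detection with frame-dropping: the (expensive)
    detector runs only on every `keep_every`-th frame; on dropped
    frames the tracker merely propagates its motion model, which
    saves energy at some cost in tracking accuracy."""
    results = []
    for t, frame in enumerate(frames):
        if t % keep_every == 0:
            detections = detector(frame)          # full detector pass
            tracks = tracker.update(detections)   # predict + associate
        else:
            tracks = tracker.predict()            # motion model only
        results.append(tracks)
    return results
```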

4. Underwater Autonomous Tank Cleaning Rover

Authors: Aditya Sundarajan, Jaideepnath Anand, Kevin Timothy Muller, Mangal Das

Abstract: To keep aquatic ecosystems safe and healthy, frequent cleaning is imperative. This research proposes autonomous underwater rovers for effective underwater cleaning as a novel approach to this issue. The rovers' enhanced sensing and navigational capabilities enable them to independently navigate underwater environments and to find and remove underwater garbage and uneaten fish feed, which can be recycled. The suggested solution not only eliminates the need for human divers but also provides a more effective and affordable technique for underwater cleaning. The paper also examines the development, testing, and potential of the autonomous underwater rovers.

5. Control and Coordination of a SWARM of Unmanned Surface Vehicles using Deep Reinforcement Learning in ROS

Authors: Shrudhi R S, Sreyash Mohanty, Dr. Susan Elias

Abstract: An unmanned surface vehicle (USV) can perform complex missions by continuously observing the state of its surroundings and taking action toward a goal. A swarm of USVs working together can complete missions faster and more effectively than a single USV alone. In this paper, we propose an autonomous communication model for a swarm of USVs. The goal of this system is to implement a software system using the Robot Operating System (ROS) and Gazebo. With the main objective of coordinated task completion, the Markov decision process (MDP) provides a basis for formulating a task-decision problem to achieve efficient localization and tracking in a highly dynamic water environment. To coordinate multiple USVs performing real-time target tracking, we propose an enhanced multi-agent reinforcement learning approach. Our proposed scheme uses MA-DDPG (Multi-Agent Deep Deterministic Policy Gradient), an extension of the Deep Deterministic Policy Gradient (DDPG) algorithm that allows for decentralized control of multiple agents in a cooperative environment. MA-DDPG's decentralized control allows each agent to make decisions based on its own observations and objectives, which can lead to better overall performance and improved stability. Additionally, it provides communication and coordination among agents through the use of collective readings and rewards.
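
MA-DDPG's division of labor, decentralized actors with a centralized critic, can be sketched as below (PyTorch; network sizes and agent counts are illustrative, not from the paper).

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Decentralized actor: maps one USV's own observation to its action."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, act_dim), nn.Tanh())
    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    """Centralized critic: scores the joint observation-action of all
    agents during training, which is what lets independently acting
    USVs learn to coordinate."""
    def __init__(self, n_agents, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_agents * (obs_dim + act_dim), 128), nn.ReLU(),
            nn.Linear(128, 1))
    def forward(self, all_obs, all_acts):
        return self.net(torch.cat([all_obs, all_acts], dim=-1))

# Execution is fully decentralized: each agent acts on its own observation.
n_agents, obs_dim, act_dim = 3, 8, 2
actors = [Actor(obs_dim, act_dim) for _ in range(n_agents)]
obs = torch.randn(n_agents, obs_dim)
actions = torch.stack([actors[i](obs[i]) for i in range(n_agents)])
critic = CentralCritic(n_agents, obs_dim, act_dim)
q = critic(obs.flatten(), actions.flatten())
```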

6. ATTACH Dataset: Annotated Two-Handed Assembly Actions for Human Action Understanding

Authors: Dustin Aganian, Benedict Stephan, Markus Eisenbach, Corinna Stretz, Horst-Michael Gross

Abstract: With the emergence of collaborative robots (cobots), human-robot collaboration in industrial manufacturing is coming into focus. For a cobot to act autonomously and as an assistant, it must understand human actions during assembly. To effectively train models for this task, a dataset containing suitable assembly actions in a realistic setting is crucial. For this purpose, we present the ATTACH dataset, which contains 51.6 hours of assembly with 95.2k annotated fine-grained actions monitored by three cameras representing potential viewpoints of a cobot. Since workers in an assembly context tend to perform different actions simultaneously with their two hands, we annotated the performed actions for each hand separately. As a result, in the ATTACH dataset more than 68% of annotations overlap with other annotations, which is many times more than in related datasets, which typically feature simpler assembly tasks. For better generalization with respect to the background of the working area, we not only recorded color and depth images but also used the Azure Kinect body tracking SDK to estimate 3D skeletons of the worker. To establish a first baseline, we report the performance of state-of-the-art methods for action recognition as well as action detection on video and skeleton-sequence inputs. The dataset is available at https://www.tu-ilmenau.de/neurob/data-sets-code/attach-dataset.
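
The per-hand annotation scheme makes the reported overlap statistic easy to picture. A minimal sketch, assuming annotations are simple (start, end) time intervals; the actual ATTACH annotation format may differ.

```python
def overlap_fraction(annotations):
    """Fraction of annotations that overlap at least one other
    annotation.  Each annotation is a (start, end) time interval;
    with separate left-/right-hand labels, simultaneous actions
    show up as overlapping intervals."""
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]
    n = len(annotations)
    hits = sum(
        any(overlaps(annotations[i], annotations[j])
            for j in range(n) if j != i)
        for i in range(n))
    return hits / n if n else 0.0

# Left hand screws while the right hand holds the part.
print(overlap_fraction([(0.0, 4.0), (1.5, 3.0), (5.0, 6.0)]))  # ~0.67
```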

7. Base Placement Optimization for Coverage Mobile Manipulation Tasks

Authors: Huiwen Zhang, Kai Mi, Zhijun Zhang

Abstract: Base placement optimization (BPO) is a fundamental capability for mobile manipulation and has been researched for decades. However, it remains challenging for several reasons. First, compared with humans, current robots are extremely inflexible and therefore place higher demands on the accuracy of base placements (BPs). Second, the BP and the task constraints are coupled: the optimal BP depends on the task constraints, and the BP in turn affects the task constraints. Trickier still, some task constraints are flexible and non-deterministic. Third, beyond fulfilling the task, other performance metrics such as optimal energy consumption and minimal execution time need to be considered, which makes the BPO problem even more complicated. In this paper, a scale-like disc (SLD) representation of the workspace is used to decouple task constraints and BPs. To evaluate reachability and return the optimal working pose over SLDs, a reachability map (RM) is constructed offline. To optimize the objectives of coverage, manipulability, and time cost simultaneously, this paper formulates the BPO as a multi-objective optimization problem (MOOP). In particular, the time-optimality objective is modeled as a traveling salesman problem (TSP), which better matches the actual situation. An evolutionary method is used to solve the MOOP. Moreover, to ensure the validity and optimality of the solution, collision detection is performed on the candidate BPs, and the solutions from the BPO are further fine-tuned for the specific task at hand. Finally, the proposed method is applied to a real-world toilet-coverage cleaning task. Experiments show that the optimized BPs significantly improve the coverage and efficiency of the task.
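
The time-cost objective's TSP formulation can be illustrated with a greedy nearest-neighbour tour over candidate base placements. This is only a sketch of how such an objective might be evaluated, not the paper's evolutionary solver; names are hypothetical.

```python
import numpy as np

def tour_length_nearest_neighbour(base_placements):
    """Approximate the time-optimal visiting order of candidate base
    placements (the TSP objective in the MOOP) with a greedy
    nearest-neighbour tour; returns (order, total travel distance)."""
    pts = np.asarray(base_placements, dtype=float)
    unvisited = list(range(1, len(pts)))
    order, total = [0], 0.0
    while unvisited:
        last = pts[order[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(pts[i] - last))
        total += np.linalg.norm(pts[nxt] - last)
        order.append(nxt)
        unvisited.remove(nxt)
    return order, total

order, dist = tour_length_nearest_neighbour([[0, 0], [2, 0], [1, 1], [0, 2]])
print(order, round(dist, 2))  # [0, 2, 1, 3] 5.66
```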

8. PaaS: Planning as a Service for reactive driving in CARLA Leaderboard

Authors: Truong Nhat Hao, Mai Huu Thien, Tran Tuan Anh, Tran Minh Quang, Nguyen Duc Duy, Pham Ngoc Viet Phuong

Abstract: End-to-end deep learning approaches have proven efficient in autonomous driving and robotics. Because they use deep learning techniques for decision-making, such systems are often referred to as black boxes, and the result is driven by data. In this paper, we propose PaaS (Planning as a Service), a vanilla module that generates local trajectory plans for autonomous driving in the CARLA simulation. Our method was submitted to the International CARLA Autonomous Driving Leaderboard (CADL), a platform for evaluating the driving proficiency of autonomous agents in realistic traffic scenarios. Our approach focuses on reactive planning in the Frenet frame under complex urban street constraints and driver-comfort requirements. The planner generates a collection of feasible trajectories and leverages heuristic cost functions with a controllable driving-style factor to choose the optimal-control path that satisfies safe-traveling criteria. PaaS handles challenging traffic situations in CADL well. In the strict evaluation of the CADL Map Track, our approach ranked 3rd out of 9 submissions in terms of driving score. Moreover, with our focus on minimizing maneuver risk and ensuring passenger safety, our infraction-penalty figures outperform the two leading submissions by 20%.
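
The planner's generate-and-score loop can be sketched as follows: sample candidate lateral end-offsets in the Frenet frame, score each with a heuristic cost mixing comfort and deviation, and keep the cheapest. The smoothing polynomial, the cost terms, and the `style` parameter are illustrative assumptions, not the submitted planner.

```python
import numpy as np

def best_frenet_trajectory(d0, candidates_d, horizon=4.0, n=40, style=0.5):
    """Pick the cheapest lateral profile d(t) in the Frenet frame from a
    set of candidate end-offsets.  The heuristic cost mixes comfort
    (lateral acceleration) and deviation from the lane centre;
    `style` in [0, 1] trades comfort against assertiveness."""
    t = np.linspace(0.0, horizon, n)
    u = t / horizon
    best, best_cost = None, np.inf
    for d_end in candidates_d:
        # Quintic smoothstep blend from d0 to d_end over the horizon.
        s = 10 * u**3 - 15 * u**4 + 6 * u**5
        d = d0 + (d_end - d0) * s
        accel = np.gradient(np.gradient(d, t), t)
        comfort = np.sum(accel ** 2)
        deviation = d_end ** 2
        cost = (1.0 - style) * comfort + style * deviation
        if cost < best_cost:
            best, best_cost = d, cost
    return best

traj = best_frenet_trajectory(d0=1.0, candidates_d=[-0.5, 0.0, 0.5])
```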

9. Integration of Reinforcement Learning Based Behavior Planning With Sampling Based Motion Planning for Automated Driving

Authors: Marvin Klimke, Benjamin Völz, Michael Buchholz

Abstract: Reinforcement learning has received high research interest for developing planning approaches in automated driving. Most prior works consider the end-to-end planning task that yields direct control commands and rarely deploy their algorithms to real vehicles. In this work, we propose a method to employ a trained deep reinforcement learning policy for dedicated high-level behavior planning. By populating an abstract objective interface, established motion planning algorithms can be leveraged, which derive smooth and drivable trajectories. Given the current environment model, we propose to use a built-in simulator to predict the traffic scene for a given horizon into the future. The behavior of automated vehicles in mixed traffic is determined by querying the learned policy. To the best of our knowledge, this work is the first to apply deep reinforcement learning in this manner, and as such lacks a state-of-the-art benchmark. Thus, we validate the proposed approach by comparing an idealistic single-shot plan with cyclic replanning through the learned policy. Experiments with a real testing vehicle on proving grounds demonstrate the potential of our approach to shrink the simulation-to-real-world gap of deep reinforcement learning based planning approaches. Additional simulative analyses reveal that more complex multi-agent maneuvers can be managed by employing the cyclic replanning approach.
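
The cyclic-replanning structure described above might look as follows. Every interface here (`env_model`, `simulator`, `policy`, `motion_planner`) is hypothetical, sketched only to show that the learned policy emits an abstract objective rather than direct control commands.

```python
def cyclic_replanning(env_model, policy, motion_planner, simulator,
                      n_cycles=100, horizon=3.0):
    """Cyclic replanning with a learned high-level behavior policy.
    The policy never outputs control commands directly; it populates
    an abstract objective (e.g. a target gap or desired velocity)
    that a conventional motion planner turns into a smooth,
    drivable trajectory."""
    for _ in range(n_cycles):
        scene = env_model.current_scene()
        # Predict how traffic evolves over the planning horizon.
        predicted = simulator.rollout(scene, horizon)
        # Query the learned policy for a high-level objective only.
        objective = policy(predicted)
        trajectory = motion_planner.plan(scene, objective)
        env_model.execute(trajectory)
```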

10. Robust human position estimation in cooperative robotic cells

Authors: António Amorim, Diana Guimarães, Tiago Mendonça, Pedro Neto, Paulo Costa, António Paulo Moreira

Abstract: Robots are increasingly present in our lives, sharing workspaces and tasks with human co-workers. However, existing interfaces for human-robot interaction/cooperation (HRI/C) have limited intuitiveness, and safety is a major concern when humans and robots share the same workspace. Often, this is due to the lack of a reliable estimate of the human pose in space, which is the primary input both for calculating the human-robot minimum distance (required for safety and collision avoidance) and for HRI/C featuring machine learning algorithms that classify human behaviours/gestures. Each sensor type has its own characteristics, resulting in problems such as occlusions (vision) and drift (inertial) when used in isolation. In this paper, we propose a combined system that merges the human tracking provided by a 3D vision sensor with the pose estimation provided by a set of inertial measurement units (IMUs) placed on the limbs of the human body. The IMUs compensate for the gaps in occluded areas to maintain tracking continuity. To mitigate the lingering effects of the IMU offset, we propose a continuous online calculation of the offset value. Experimental tests were designed to simulate human motion in a human-robot collaborative environment in which the robot moves away to avoid unexpected collisions with the human. Results indicate that our approach captures the human's position, for example the forearm, with millimetre-range precision and with robustness to occlusions.
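
A minimal sketch of the fusion logic with a continuous online offset calculation: trust the camera while the limb is visible and re-estimate the IMU offset, then fall back to the offset-corrected IMU estimate during occlusions. The scalar learning rate is an assumption; the paper's exact estimator may differ.

```python
import numpy as np

class VisionImuFusion:
    """Fuse 3D-vision and IMU position estimates for one body limb.
    While the camera sees the limb, its position is trusted and the
    slowly drifting IMU offset is re-estimated online; during
    occlusions the (offset-corrected) IMU estimate fills the gap."""
    def __init__(self, alpha=0.05):
        self.offset = np.zeros(3)   # running IMU-minus-camera offset
        self.alpha = alpha          # online offset learning rate

    def update(self, p_camera, p_imu):
        if p_camera is not None:
            # Continuous online offset calculation while visible.
            self.offset += self.alpha * ((p_imu - p_camera) - self.offset)
            return p_camera
        # Occluded: correct the drifting IMU estimate by the learned offset.
        return p_imu - self.offset
```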

11. Topology, dynamics, and control of an octopus-analog muscular hydrostat

Authors: Arman Tekinalp, Noel Naughton, Seung-Hyun Kim, Udit Halder, Rhanor Gillette, Prashant G. Mehta, William Kier, Mattia Gazzola

Abstract: Muscular hydrostats, such as octopus arms or elephant trunks, lack bones entirely, endowing them with exceptional dexterity and reconfigurability. Key to their unmatched ability to control nearly infinite degrees of freedom is the architecture into which muscle fibers are woven. Their arrangement is, effectively, the instantiation of a sophisticated mechanical program that mediates, and likely facilitates, the control and realization of complex, dynamic morphological reconfigurations. Here, by combining medical imaging, biomechanical data, live behavioral experiments, and numerical simulations, we synthesize a model octopus arm comprising ~200 continuous muscle groups and begin to unravel its complexity. We show how 3D arm motions can be understood in terms of storage, transport, and conversion of topological quantities, effected by simple muscle activation templates. These, in turn, can be composed into higher-level control strategies that, compounded by the arm's compliance, are demonstrated in a range of object manipulation tasks rendered additionally challenging by the need to appropriately align suckers, to sense and grasp. Overall, our work exposes broad design and algorithmic principles pertinent to muscular hydrostats, robotics, and dynamics, while significantly advancing our ability to model muscular structures from medical imaging, with potential implications for human health and care.
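
One of the topological quantities in question, the writhe of the arm's centerline, has a standard discrete form via the Gauss double integral. Below is a minimal sketch of that textbook computation, not the paper's simulation code.

```python
import numpy as np

def discrete_writhe(centerline):
    """Discrete writhe of a centerline (N x 3 array of points), via the
    Gauss double integral over pairs of segments.  Writhe is one of
    the topological quantities whose storage and transport can
    describe 3D arm motions."""
    r = np.asarray(centerline, dtype=float)
    seg = r[1:] - r[:-1]                # segment tangent vectors
    mid = 0.5 * (r[1:] + r[:-1])        # segment midpoints
    n = len(seg)
    wr = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = mid[j] - mid[i]
            dist = np.linalg.norm(d)
            if dist < 1e-9:
                continue
            wr += np.dot(np.cross(seg[i], seg[j]), d) / dist ** 3
    # Unordered pairs counted once, hence 2/(4*pi) -> 1/(2*pi).
    return wr / (2.0 * np.pi)

# A gentle helix has nonzero writhe; a planar curve has none.
t = np.linspace(0, 4 * np.pi, 200)
helix = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)
print(discrete_writhe(helix))
```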

12. Applications of Uncalibrated Image Based Visual Servoing in Micro- and Macroscale Robotics

Authors: Yifan Yin, Yutai Wang, Yunpu Zhang, Russell H. Taylor, Balazs P. Vagvolgyi

Abstract: We present a robust markerless image-based visual servoing method that enables precision robot control, without hand-eye or camera calibration, in 1, 3, and 5 degrees of freedom. The system uses two cameras to observe the workspace and a combination of classical image processing algorithms and deep-learning-based methods to detect features in camera images. The only restriction on the placement of the two cameras is that relevant image features must be visible in both views. The system enables precise robot-tool-to-workspace interactions even when the physical setup is disturbed, for example if the cameras are moved or the workspace shifts during manipulation. The usefulness of the visual servoing method is demonstrated and evaluated in two applications: the calibration of a micro-robotic system that dissects mosquitoes for the automated production of a malaria vaccine, and a macro-scale manipulation system for fastening screws using a UR10 robot. Evaluation results indicate that our image-based visual servoing method achieves human-like manipulation accuracy in challenging setups even without camera calibration.
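
Uncalibrated image-based visual servoing is commonly realized by estimating the image Jacobian online, e.g. with a rank-one Broyden update, and servoing along its pseudo-inverse; the paper's exact scheme may differ. A minimal sketch with illustrative names:

```python
import numpy as np

def ibvs_step(J, q, f, f_target, gain=0.5):
    """One image-based visual-servoing step: move the joints along the
    pseudo-inverse of the estimated image Jacobian J (features x
    joints) to drive the feature vector f toward f_target."""
    error = f - f_target
    dq = -gain * np.linalg.pinv(J) @ error
    return q + dq, dq

def broyden_update(J, dq, df, beta=0.5):
    """Rank-one Broyden update of the image Jacobian estimate from the
    observed feature change df caused by the joint change dq; this is
    what removes the need for hand-eye and camera calibration."""
    denom = float(dq @ dq)
    if denom < 1e-12:
        return J
    return J + beta * np.outer(df - J @ dq, dq) / denom
```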

13. Affordances from Human Videos as a Versatile Representation for Robotics

Authors: Shikhar Bahl, Russell Mendonca, Lili Chen, Unnat Jain, Deepak Pathak

Abstract: Building a robot that can understand and learn to interact by watching humans has inspired several vision problems. However, despite some successful results on static datasets, it remains unclear how current models can be used on a robot directly. In this paper, we aim to bridge this gap by leveraging videos of human interactions in an environment-centric manner. Utilizing internet videos of human behavior, we train a visual affordance model that estimates where and how in the scene a human is likely to interact. The structure of these behavioral affordances directly enables the robot to perform many complex tasks. We show how to seamlessly integrate our affordance model with four robot learning paradigms: offline imitation learning, exploration, goal-conditioned learning, and action parameterization for reinforcement learning. We show the efficacy of our approach, which we call VRB, across 4 real-world environments, over 10 different tasks, and 2 robotic platforms operating in the wild. Results, visualizations and videos at https://robo-affordances.github.io/
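
A minimal sketch of how an affordance prediction (a contact heatmap plus a per-pixel post-contact direction) could be turned into a goal for a downstream robot learning paradigm. The array shapes and output format are assumptions for illustration, not VRB's actual interface.

```python
import numpy as np

def affordance_to_goal(heatmap, directions):
    """Turn a visual-affordance prediction into a robot goal: pick the
    most likely contact pixel from the heatmap and pair it with the
    predicted post-contact direction, e.g. as a target for
    goal-conditioned learning or as an exploration prior."""
    contact = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    direction = directions[contact]            # per-pixel motion vector
    return {"contact_px": contact, "post_contact_dir": direction}

heatmap = np.random.rand(64, 64)               # hypothetical model output
directions = np.random.randn(64, 64, 2)        # hypothetical 2D directions
goal = affordance_to_goal(heatmap, directions)
```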