arXiv daily

Robotics (cs.RO)

Fri, 04 Aug 2023

1.World-Model-Based Control for Industrial box-packing of Multiple Objects using NewtonianVAE

Authors: Yusuke Kato, Ryo Okumura, Tadahiro Taniguchi

Abstract: Industrial box-packing, which involves the accurate placement of multiple objects, requires high-accuracy positioning and sequential actions. When a robot is tasked with placing an object at a specific location with high accuracy, it needs information not only about the target location but also about the posture of the object grasped by the robotic hand. Industrial box-packing often requires the sequential placement of identically shaped objects into a single box, and each placement should be determined by the same learned model. Moreover, because new kinds of products frequently appear in factories, the model should adapt to them easily, which means training data must be easy to collect. In this study, we designed a robotic system that automates real-world industrial tasks using a vision-based learning control model. We propose the in-hand-view-sensitive Newtonian variational autoencoder (ihVS-NVAE), which employs an RGB camera to obtain the in-hand posture of the grasped object. We demonstrate that our model, trained on a single object-placement task, can handle sequential tasks without additional training. To evaluate the efficacy of the proposed model, we employed a real robot to perform sequential industrial box-packing of multiple objects. The proposed model achieved a 100% success rate, outperforming state-of-the-art and conventional approaches and underscoring its effectiveness and potential for industrial tasks.
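The abstract does not give architectural details, but the key mechanism behind NewtonianVAE-based control is that images are encoded into a low-dimensional latent space in which a simple proportional controller can drive the system toward a goal; an in-hand-view-sensitive variant would additionally encode the in-hand camera image. The PyTorch sketch below illustrates only that control loop; the module names, dimensions, and dual-view fusion are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DualViewEncoder(nn.Module):
    """Hypothetical encoder fusing an external view and an in-hand view into a
    low-dimensional latent; the real ihVS-NVAE architecture is not given in the
    abstract."""
    def __init__(self, latent_dim=2):
        super().__init__()
        self.backbone = nn.Sequential(                 # shared CNN over 64x64 RGB images
            nn.Conv2d(3, 16, 4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.LazyLinear(latent_dim)

    def forward(self, external_img, in_hand_img):
        feats = torch.cat([self.backbone(external_img),
                           self.backbone(in_hand_img)], dim=-1)
        return self.head(feats)                        # latent state x_t

def latent_p_control(encoder, obs_external, obs_in_hand, goal_latent, gain=1.0):
    """NewtonianVAE-style control: a proportional law in the learned latent space,
    u_t = K * (x_goal - x_t). VAE and latent-dynamics training are omitted."""
    x_t = encoder(obs_external, obs_in_hand)
    return gain * (goal_latent - x_t)
```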

2.Learning to Shape by Grinding: Cutting-surface-aware Model-based Reinforcement Learning

Authors: Takumi Hachimine, Jun Morimoto, Takamitsu Matsubara

Abstract: Object shaping by grinding is a crucial industrial process in which a rotating grinding belt removes material. Object-shape transition models are essential for achieving automation by robots; however, learning such a complex model, which depends on the process conditions, is challenging because it requires a significant amount of data, and the irreversible nature of the removal process makes data collection expensive. This paper proposes a cutting-surface-aware Model-Based Reinforcement Learning (MBRL) method for robotic grinding. Our method employs a cutting-surface-aware model as the object's shape-transition model, composed of a geometric cutting model and a cutting-surface-deviation model, based on the assumption that the robot action specifies the cutting surface made by the tool. Furthermore, according to grinding resistance theory, the cutting-surface-deviation model does not require raw shape information, making it lower-dimensional and easier to learn than a naive shape-transition model that directly maps shapes. Through evaluation and comparison in simulation and real-robot experiments, we confirm that our MBRL method achieves high data efficiency for learning object shaping by grinding and generalizes to initial and target shapes that differ from the training data.
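A minimal sketch of the model decomposition described above, assuming a voxel shape representation and a plane-shaped cutting surface: the geometric cutting model removes material on one side of the commanded surface, while the learned deviation model only corrects that surface based on the action, so it never touches the raw shape. All names and interfaces below are hypothetical, not the paper's implementation.

```python
import numpy as np

def geometric_cut(shape_voxels, cutting_plane):
    """Geometric cutting model (illustrative): remove all material on one side
    of a plane. `shape_voxels` is a boolean occupancy grid; `cutting_plane` is
    a (normal, offset) pair in voxel coordinates."""
    normal, offset = cutting_plane
    coords = np.indices(shape_voxels.shape).reshape(3, -1).T     # (N, 3) voxel coords
    keep = (coords @ normal <= offset).reshape(shape_voxels.shape)
    return shape_voxels & keep

def predicted_next_shape(shape_voxels, commanded_plane, deviation_model, action):
    """Cutting-surface-aware transition: the learned model only predicts how the
    realized surface deviates from the commanded one, so it never sees the raw,
    high-dimensional shape."""
    normal, offset = commanded_plane
    realized_plane = (normal, offset + deviation_model(action))  # low-dimensional correction
    return geometric_cut(shape_voxels, realized_plane)
```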

3.ExploitFlow, cyber security exploitation routes for Game Theory and AI research in robotics

Authors: Víctor Mayoral-Vilches, Gelei Deng, Yi Liu, Martin Pinzger, Stefan Rass

Abstract: This paper addresses the prevalent lack of tools to facilitate and empower Game Theory and Artificial Intelligence (AI) research in cybersecurity. The primary contribution is the introduction of ExploitFlow (EF), an AI- and Game Theory-driven modular library designed for cybersecurity exploitation. EF aims to automate attacks by combining exploits from various sources and capturing system states post-action in order to reason about them and understand potential attack trees. The motivation behind EF is to bolster Game Theory and AI research in cybersecurity, with robotics as the initial focus. Results indicate that EF is effective for exploring machine learning in robot cybersecurity: an artificial agent powered by EF and trained with Reinforcement Learning outperformed both brute-force and human expert approaches, paving the way for further research with ExploitFlow. Nonetheless, we identified several limitations in EF-driven agents, including a propensity to overfit, the scarcity and production cost of datasets needed for generalization, and challenges in interpreting networking states across varied security settings. To leverage the strengths of ExploitFlow while addressing these shortcomings, we present Malism, our vision for a comprehensive automated penetration-testing framework with ExploitFlow at its core.
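ExploitFlow's actual API is not shown in the abstract, so the following toy sketch only illustrates the general pattern it describes: an agent chooses exploits, the post-action system state is captured, and a reinforcement-learning loop (here plain tabular Q-learning) learns which exploitation routes pay off. Every class and identifier below is hypothetical.

```python
import random

class ExploitEnv:
    """Toy exploitation environment: the state is the set of compromised hosts,
    actions are exploit identifiers. Entirely hypothetical, not ExploitFlow's API."""
    def __init__(self, attack_graph):
        self.attack_graph = attack_graph      # {exploit: (required_host, gained_host)}
        self.reset()

    def reset(self):
        self.compromised = {"attacker"}
        return frozenset(self.compromised)

    def step(self, exploit):
        required, gained = self.attack_graph[exploit]
        success = required in self.compromised
        if success:
            self.compromised.add(gained)
        return frozenset(self.compromised), (1.0 if success else -0.1)

def q_learning(env, episodes=500, alpha=0.1, gamma=0.9, eps=0.2):
    """Tabular Q-learning over exploit choices (a brute-force baseline would
    instead enumerate every exploit sequence)."""
    q, exploits = {}, list(env.attack_graph)
    for _ in range(episodes):
        s = env.reset()
        for _ in range(len(exploits)):
            a = random.choice(exploits) if random.random() < eps else \
                max(exploits, key=lambda e: q.get((s, e), 0.0))
            s2, r = env.step(a)
            best_next = max(q.get((s2, e), 0.0) for e in exploits)
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (r + gamma * best_next - q.get((s, a), 0.0))
            s = s2
    return q

# Example: a two-step route attacker -> host_a -> host_b
q = q_learning(ExploitEnv({"exploit_ssh": ("attacker", "host_a"),
                           "exploit_db": ("host_a", "host_b")}))
```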

4.Automated Vehicle Platform with Connected Driving Capabilities

Authors: Oskars Teikmanis, Aleksandrs Levinskis, Andris Ivars Mackus, Artis Rušiņš, Amr Elkenawy, Marta Tropa, Modris Greitans

Abstract: Augmenting automated vehicles to wirelessly detect and respond to external events before they are detectable by onboard sensors is crucial for developing context-aware driving strategies. To this end, we present an automated vehicle platform, designed with connectivity, ease of use and modularity in mind, both in hardware and software. It is based on the Kia Soul EV with a modified version of the Open-Source Car Control (OSCC) drive-by-wire module, uses the open-source Robot Operating System (ROS and ROS 2) in its software architecture, and provides a straightforward solution for transitioning from simulations to real-world tests. We demonstrate the effectiveness of the platform through a synchronised driving test, where sensor data is exchanged wirelessly, and a model-predictive controller is used to actuate the automated vehicle.
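The connected-driving pattern described above, in which a wirelessly received external event is made available to the onboard controller before onboard sensors could detect it, maps naturally onto a small ROS 2 relay node. The sketch below uses the standard rclpy API, but the topic names and message type are assumptions, not the platform's actual interfaces.

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class ExternalEventRelay(Node):
    """Relays an externally received (e.g. V2X) event to the onboard controller."""
    def __init__(self):
        super().__init__('external_event_relay')
        # Event received over the wireless link (topic name is an assumption).
        self.sub = self.create_subscription(String, 'v2x/incoming_event',
                                            self.on_event, 10)
        # Forwarded to the model-predictive controller node.
        self.pub = self.create_publisher(String, 'mpc/external_event', 10)

    def on_event(self, msg: String):
        self.get_logger().info(f'Relaying external event: {msg.data}')
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(ExternalEventRelay())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```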

5.Online Obstacle evasion with Space-Filling Curves

Authors: Ashay Wakode, Arpita Sinha

Abstract: This paper presents a strategy for robotic exploration problems using Space-Filling Curves (SFCs). The region of interest is first tessellated, and the tiles/cells are connected using an SFC. A robot follows the SFC to explore the entire area. However, obstacles may block the systematic movement of the robot. We overcome this problem with an evasion technique that avoids blocked tiles while ensuring that every free tile is visited at least once. The proposed strategy is online, meaning that prior knowledge of the obstacles is not required. It works for any SFC, but for the sake of demonstration we use the Hilbert curve. We establish the completeness of the algorithm and discuss its desirable properties with examples. We also address the non-uniform coverage problem using our strategy.
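As a concrete illustration of the curve-following idea (not the paper's evasion algorithm), the sketch below maps Hilbert-curve indices to grid tiles and visits the free tiles in curve order, skipping tiles already known to be blocked; the paper's contribution is the online evasion strategy that preserves complete coverage when obstacles are only discovered during motion.

```python
def hilbert_d2xy(order, d):
    """Map index d along a Hilbert curve covering a 2**order x 2**order grid
    to (x, y) tile coordinates (standard iterative conversion)."""
    x = y = 0
    t, s = d, 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                     # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def coverage_order(order, blocked):
    """Visit every free tile in Hilbert order, skipping blocked tiles.
    `blocked` is a set of (x, y) tiles known to be obstructed."""
    side = 1 << order
    path = []
    for d in range(side * side):
        tile = hilbert_d2xy(order, d)
        if tile not in blocked:
            path.append(tile)
    return path

# Example: 8x8 grid with two blocked tiles
print(coverage_order(3, blocked={(2, 3), (4, 4)}))
```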

6.Getting the Ball Rolling: Learning a Dexterous Policy for a Biomimetic Tendon-Driven Hand with Rolling Contact Joints

Authors: Yasunori Toshimitsu, Benedek Forrai, Barnabas Gavin Cangan, Ulrich Steger, Manuel Knecht, Stefan Weirich, Robert K. Katzschmann

Abstract: Biomimetic, dexterous robotic hands have the potential to replicate many of the tasks that a human can do and to serve as general manipulation platforms. Recent advances in reinforcement learning (RL) frameworks have achieved remarkable performance in quadrupedal locomotion and dexterous manipulation tasks. Combined with highly parallelized GPU-based simulations capable of simulating thousands of robots at once, RL-based controllers have become more scalable and approachable. However, to bring RL-trained policies to the real world, we require training frameworks that output policies compatible with physical actuators and sensors, as well as a hardware platform that can be manufactured from accessible materials yet is robust enough to run interactive policies. This work introduces the biomimetic tendon-driven Faive Hand and its system architecture, which uses tendon-driven rolling contact joints to achieve a 3D-printable, robust, high-DoF hand design. We model each element of the hand and integrate it into a GPU simulation environment to train a policy with RL, achieving zero-shot transfer of a dexterous in-hand sphere-rotation skill to the physical robot hand.
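A minimal sketch of the massively parallel training setup the abstract relies on: all environment state lives in batched GPU tensors, so a single policy forward pass controls thousands of simulated hands at once. The simulator, observation/action dimensions, and network below are stand-ins, not the paper's configuration.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
num_envs, obs_dim, act_dim = 4096, 64, 11   # sizes are placeholders, not the paper's

# Simple MLP policy producing joint targets for every environment in one pass.
policy = torch.nn.Sequential(
    torch.nn.Linear(obs_dim, 256), torch.nn.ELU(),
    torch.nn.Linear(256, act_dim), torch.nn.Tanh(),
).to(device)

obs = torch.zeros(num_envs, obs_dim, device=device)   # batched observations from the simulator
with torch.no_grad():
    actions = policy(obs)                              # (num_envs, act_dim) commands
# A GPU-based simulator would consume this batched tensor, step all environments,
# and return next observations and rewards on the same device, so the RL update
# (e.g. PPO) never leaves the GPU.
```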

7.Nonprehensile Planar Manipulation through Reinforcement Learning with Multimodal Categorical Exploration

Authors: Juan Del Aguila Ferrandis, João Moura, Sethu Vijayakumar

Abstract: Developing robot controllers capable of achieving dexterous nonprehensile manipulation, such as pushing an object on a table, is challenging. The underactuated and hybrid-dynamics nature of the problem, further complicated by the uncertainty resulting from frictional interactions, requires sophisticated control behaviors. Reinforcement Learning (RL) is a powerful framework for developing such robot controllers. However, previous RL literature addressing the nonprehensile pushing task achieves only low accuracy, non-smooth trajectories, and simple motions, i.e., without rotation of the manipulated object. We conjecture that previously used unimodal exploration strategies fail to capture the inherent hybrid dynamics of the task, which arise from the different possible contact modes between the robot and the object, such as sticking, sliding, and separation. In this work, we propose a multimodal exploration approach through categorical distributions, which enables us to train planar pushing RL policies for arbitrary starting and target object poses, i.e., positions and orientations, with improved accuracy. We show that the learned policies are robust to external disturbances and observation noise and scale to tasks with multiple pushers. Furthermore, we validate the transferability of the learned policies, trained entirely in simulation, to physical robot hardware using the KUKA iiwa robot arm. See our supplemental video: https://youtu.be/vTdva1mgrk4.
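The sketch below illustrates the categorical-exploration idea in isolation: each action dimension is discretized into bins and sampled from its own categorical distribution, which, unlike a unimodal Gaussian head, can place probability mass on distinct contact modes. The bin count, network sizes, and action parameterization are assumptions, not the paper's settings.

```python
import torch
from torch.distributions import Categorical

obs_dim, act_dims, num_bins = 32, 3, 11                # assumed sizes
bin_centers = torch.linspace(-1.0, 1.0, num_bins)      # discretized command values

# Policy head producing one set of logits per action dimension.
logits_head = torch.nn.Sequential(
    torch.nn.Linear(obs_dim, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, act_dims * num_bins),
)

def sample_action(obs):
    logits = logits_head(obs).view(-1, act_dims, num_bins)
    dist = Categorical(logits=logits)                  # independent categorical per dimension
    bins = dist.sample()                               # (batch, act_dims) bin indices
    action = bin_centers[bins]                         # map indices back to continuous commands
    return action, dist.log_prob(bins).sum(-1)         # summed log-prob for the policy-gradient update

action, logp = sample_action(torch.zeros(1, obs_dim))
```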