ProcNet: Deep Predictive Coding Model for Robust-to-occlusion Visual Segmentation and Pose Estimation

This paper is a preprint and has not been certified by peer review.


Authors

Michael Zechmair, Alban Bornet, Yannick Morel

Abstract

Systems involving human-robot collaboration necessarily require that steps be taken to ensure the safety of the participating human. This is usually achievable when accurate, reliable estimates of the human's pose are available. In this paper, we present a deep Predictive Coding (PC) model supporting visual segmentation, which we extend to pursue pose estimation. The model is designed to be robust to the type of transient occlusion that naturally occurs when human and robot operate in close proximity to one another. The impact of relevant model parameters on performance is assessed, and a comparison with an alternative pose estimation model (NVIDIA's PoseCNN) illustrates the efficacy of the proposed approach.
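For readers unfamiliar with predictive coding, the sketch below illustrates the general principle the abstract refers to: an internal representation is refined iteratively so that its top-down prediction matches the observed input, with the prediction error driving the update. This is a minimal, generic illustration; the weight matrix, dimensions, iteration count, and step size are assumed placeholders, and the code does not reproduce the ProcNet architecture described in the paper.

```python
import numpy as np

# Illustrative predictive-coding (PC) inference loop, not the ProcNet model:
# a latent vector r1 generates a prediction of the input x through weights W1,
# and is updated to reduce the prediction error.

rng = np.random.default_rng(0)

x = rng.standard_normal(64)          # observed input (e.g., flattened image patch)
W1 = rng.standard_normal((64, 16))   # generative weights: latent -> input prediction
r1 = np.zeros(16)                    # latent representation, refined iteratively
lr = 0.05                            # inference step size (assumed value)

for _ in range(50):
    pred = W1 @ r1                   # top-down prediction of the input
    err = x - pred                   # bottom-up prediction error
    r1 += lr * (W1.T @ err)          # update latents to reduce the error

print("residual error:", np.linalg.norm(x - W1 @ r1))
```

In PC-style models, robustness to occlusion is generally attributed to this same top-down prediction, which can partially fill in missing input; the specific mechanism used by ProcNet is detailed in the paper itself.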
