Articulated Object Interaction in Unknown Scenes with Whole-Body Mobile Manipulation

Mayank Mittal, David Hoeller, Farbod Farshidian, Marco Hutter, Animesh Garg

[arXiv Paper] [Video]

Abstract

A kitchen assistant needs to operate human-scale objects, such as cabinets and ovens, in unmapped environments with dynamic obstacles. Autonomous interaction in real-world environments requires integrating dexterous manipulation and fluid mobility. While mobile manipulators in different form factors provide an extended workspace by combining the dexterity of a manipulator arm with the reach of a mobile base, their real-world adoption has been limited. This is due largely to two reasons: (1) the inability to interact with unknown human-scale objects such as cabinets and ovens, and (2) inefficient joint control of the arm and the base. Executing a high-level task on general objects requires a perceptual understanding of the object as well as adaptive whole-body control among dynamic obstacles.

In this paper, we propose a two-stage architecture for autonomous interaction with large articulated objects in unknown environments. The first stage uses a learned model to estimate the articulated model of the target object from online RGB-D input and predicts an action-conditional sequence of states. The second stage comprises a whole-body motion controller to manipulate the object along the computed kinematic plan. We show that our proposed pipeline can handle complicated static and dynamic kitchen settings. Moreover, we demonstrate that the proposed approach achieves better performance than commonly used control methods in mobile manipulation.
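The following is a minimal, hypothetical sketch of how the two stages described above could be composed: a perception stage that estimates the object's articulation model from RGB-D input and plans a sequence of object states, followed by a whole-body controller that tracks that plan. All class and method names (ArticulationEstimator, WholeBodyController, run_pipeline, etc.) are illustrative assumptions for this page and do not correspond to the authors' actual code or API.

# Hypothetical pipeline sketch; names and interfaces are assumptions, not the paper's code.
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class ArticulationModel:
    joint_type: str      # e.g. "revolute" (oven door) or "prismatic" (drawer)
    axis: np.ndarray     # joint axis expressed in the camera frame
    origin: np.ndarray   # a point on the joint axis

class ArticulationEstimator:
    """Stage 1: estimate the articulation model from RGB-D and plan object states."""

    def estimate(self, rgb: np.ndarray, depth: np.ndarray) -> ArticulationModel:
        # In the paper this is a learned model; here we return a fixed placeholder
        # revolute joint purely so the sketch runs end to end.
        return ArticulationModel("revolute", np.array([0.0, 0.0, 1.0]), np.zeros(3))

    def plan_states(self, model: ArticulationModel, goal: float,
                    num_steps: int = 20) -> List[float]:
        # Action-conditional sequence of object joint values toward the goal
        # (e.g. door opening angle in radians).
        return list(np.linspace(0.0, goal, num_steps))

class WholeBodyController:
    """Stage 2: track the object-state plan with coordinated arm and base motion."""

    def track(self, model: ArticulationModel, state_plan: List[float]) -> None:
        for q_obj in state_plan:
            # For each planned object state, compute the end-effector pose that is
            # consistent with the articulation model at joint value q_obj, then
            # solve for joint arm + base commands (omitted in this sketch).
            print(f"tracking {model.joint_type} joint at value {q_obj:.3f}")

def run_pipeline(rgb: np.ndarray, depth: np.ndarray, goal: float) -> None:
    estimator = ArticulationEstimator()
    controller = WholeBodyController()
    model = estimator.estimate(rgb, depth)
    plan = estimator.plan_states(model, goal)
    controller.track(model, plan)

if __name__ == "__main__":
    run_pipeline(np.zeros((480, 640, 3)), np.zeros((480, 640)), goal=np.pi / 2)

The point of the sketch is the separation of concerns: the perception stage commits to a kinematic plan over object states, and the controller only needs to track that plan, which is what allows the arm and base to be commanded jointly rather than sequenced.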

Main Video


Acknowledgements

This project is supported by the following.