Articulated Object Interaction in Unknown Scenes with Whole-Body Mobile Manipulation

Mayank Mittal, David Hoeller, Farbod Farshidian, Marco Hutter, Animesh Garg

[arXiv Paper] [Video]

Abstract

A kitchen assistant needs to operate human-scale objects, such as cabinets and ovens, in unmapped environments with dynamic obstacles. Autonomous interactions in such environments require integrating dexterous manipulation and fluid mobility. While mobile manipulators in different form factors provide an extended workspace, their real-world adoption has been limited. Executing a high-level task for general objects requires a perceptual understanding of the object as well as adaptive whole-body control among dynamic obstacles. In this paper, we propose a two-stage architecture for autonomous interaction with large articulated objects in unknown environments. The first stage, an object-centric planner, focuses only on the object and uses RGB-D data to provide an action-conditional sequence of states for manipulation. The second stage, an agent-centric planner, formulates whole-body motion control as an optimal control problem that ensures safe tracking of the generated plan, even in scenes with moving obstacles. We show that the proposed pipeline can handle complex static and dynamic kitchen settings for both wheel-based and legged mobile manipulators. Compared to other agent-centric planners, our proposed planner achieves a higher success rate and a lower execution time. We also perform hardware tests on a legged mobile manipulator interacting with various articulated objects in a kitchen.
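To make the two-stage split concrete, the sketch below shows how an object-centric plan (a sequence of object states obtained from RGB-D perception) could be handed to an agent-centric whole-body controller. This is a minimal illustration only: the class and function names (ObjectCentricPlanner, AgentCentricPlanner, interaction_step) and the data layout are hypothetical placeholders, not the authors' released implementation.

```python
# Minimal sketch of the two-stage architecture described in the abstract.
# All interfaces here are hypothetical placeholders, not the released code.

from dataclasses import dataclass
from typing import List


@dataclass
class ObjectState:
    """One waypoint of the object-centric plan, e.g. a handle pose and the
    corresponding articulation angle of the object joint."""
    handle_pose: List[float]      # [x, y, z, qw, qx, qy, qz]
    articulation_angle: float     # e.g. drawer extension or door opening angle


class ObjectCentricPlanner:
    """Stage 1: reasons only about the object from RGB-D observations."""

    def plan(self, rgb, depth) -> List[ObjectState]:
        # Perceive the articulated object (handle, joint axis, joint type)
        # and return an action-conditional sequence of object states.
        raise NotImplementedError


class AgentCentricPlanner:
    """Stage 2: whole-body optimal control that tracks the object-centric
    plan while avoiding static and dynamic obstacles."""

    def track(self, plan: List[ObjectState], robot_state, obstacles):
        # Solve a receding-horizon optimal control problem and return
        # whole-body commands (base + arm) for the next control step.
        raise NotImplementedError


def interaction_step(obj_planner, agent_planner, rgb, depth, robot_state, obstacles):
    """One high-level interaction: plan on the object, then track with the robot."""
    plan = obj_planner.plan(rgb, depth)
    return agent_planner.track(plan, robot_state, obstacles)
```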

Method Overview

Main Video

Results

Simulation deployment

We create different kitchen layouts using NVIDIA Isaac Sim. The kitchens differ in architecture, free space for mobility, and articulated object instances. We use the Mabi-Mobile platform, which comprises a 6-DoF robotic arm on a wheeled base.

Hardware deployment

We deploy the resulting system on ALMA, a legged mobile manipulator. The robot manipulates various articulated objects in the kitchen. During the approach and manipulation phases, we set the robot’s gait schedule to trot.

Interaction evaluation

We consider the manipulation of a drawer with a long handle. By uniformly discretizing the grasp locations on the handle, we generate interaction plans. The agent-centric planner evaluates a merit function for these plans using the optimal control formulation with a long planning horizon. The merit function combines the task objective with measures of constraint violation. A lower merit implies a safer interaction plan.

For two different static scenes, we observe that the same interaction plans yield different merits. Plans that are closer to the obstacle receive higher merit scores due to the collision-avoidance penalty (environment cost). When the collision possibility is low, other factors such as the input cost and joint constraints drive the decision (embodiment cost). By combining these two factors, the proposed formulation for the agent-centric planner also helps in choosing between interaction plans.
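The selection logic described above can be summarized with a short sketch, assuming the merit takes the form of a task cost plus weighted penalties for environment and embodiment constraint violations. The weights, cost terms, and the evaluate_plan callback are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of merit-based selection over uniformly discretized grasp locations.
# The cost terms stand in for the outputs of the long-horizon optimal-control
# evaluation described above; weights and names are illustrative only.

import numpy as np


def merit(task_cost: float, env_cost: float, embodiment_cost: float,
          w_env: float = 1.0, w_emb: float = 1.0) -> float:
    """Merit = task objective + penalties on constraint violations.
    A lower merit implies a safer interaction plan."""
    return task_cost + w_env * env_cost + w_emb * embodiment_cost


def select_grasp(handle_start: float, handle_end: float, num_candidates: int,
                 evaluate_plan):
    """Uniformly discretize grasp locations along the handle and pick the
    candidate with the lowest merit.

    `evaluate_plan(grasp_point)` is assumed to run the long-horizon
    optimal-control evaluation and return (task_cost, env_cost, emb_cost).
    """
    candidates = np.linspace(handle_start, handle_end, num_candidates)
    merits = []
    for grasp in candidates:
        task_cost, env_cost, emb_cost = evaluate_plan(grasp)
        merits.append(merit(task_cost, env_cost, emb_cost))
    best = int(np.argmin(merits))
    return candidates[best], merits[best]
```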

Interaction Evaluation Results


Acknowledgements

This project is supported by the following.