
Learning by Watching

Physical Imitation of Manipulation Skills from Human Videos

Haoyu Xiong, Quanzhou Li, Yun-Chun Chen, Homanga Bharadhwaj, Samarth Sinha, Animesh Garg

Paper

Abstract

We present an approach for physical imitation from human videos for robot manipulation tasks. The key idea of our method lies in explicitly exploiting the kinematics and motion information embedded in the video to learn structured representations that endow the robot with the ability to imagine how to perform manipulation tasks in its own context. To achieve this, we design a perception module that learns to translate human videos to the robot domain followed by unsupervised keypoint detection.

LbW Overview Image

The resulting keypoint-based representations provide semantically meaningful information that can be directly used for reward computation and policy learning. We evaluate the effectiveness of our approach on five robot manipulation tasks, including reaching, pushing, sliding, coffee making, and drawer closing. Detailed experimental evaluations demonstrate that our method performs favorably against previous approaches.

Motivation

In robotic imitation learning, collecting expert demonstrations remains expensive and challenging because it assumes access to both observations and actions. We aim to relax this requirement and learn from human videos alone.

One way to bridge the human-robot domain gap is to translate the human videos to the robot domain with a generative model. However, such translation is generally imperfect, which makes the translated pixels alone unreliable for learning manipulation skills.

Our key insight is to explicitly exploit the kinematics and motion information embedded in the video, learning structured representations via human-to-robot translation followed by unsupervised keypoint detection.

Contribution

1) We propose a perception module for physical imitation from human videos that combines human-to-robot translation with unsupervised keypoint detection.

2) The resulting keypoint-based representations can be used to compute the task reward with a simple distance metric.

3) Experimental results on five robot manipulation tasks show that our method greatly outperforms previous approaches.

Method

LbW Method Image

Our LbW model is composed of three main components: an image-to-image translation network T, a keypoint detector Ψ, and a policy network π. The image-to-image translation network translates the input human demonstration video frame by frame to generate a robot demonstration video. Next, the keypoint detector takes the generated robot demonstration video as input and extracts a keypoint-based representation for each frame to form a keypoints trajectory. At each time step, the keypoint detector also extracts the keypoint-based representation of the current observation. The reward for physical imitation is defined by a distance metric d that measures the distance between the keypoint-based representation of the current observation and those in the keypoints trajectory. Finally, the keypoint-based representation of the current observation is passed to the policy network to predict an action that is used to interact with the environment.
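To make the reward definition above concrete, here is a minimal sketch in Python/NumPy. It assumes the reward at time t is the negative distance between the current keypoint-based representation and the closest entry in the demonstration keypoints trajectory; the L2 metric, the nearest-neighbor matching, and the names keypoint_reward, demo_traj, and z_t are illustrative assumptions, not the paper's actual implementation of the distance metric d.

```python
import numpy as np

def keypoint_reward(z_t: np.ndarray, demo_traj: np.ndarray) -> float:
    """Sketch of the keypoint-based reward.

    z_t: (K*2,) flattened keypoints detected on the current observation.
    demo_traj: (T, K*2) keypoints trajectory extracted from the translated
    robot demonstration video (translation network T + keypoint detector Psi).
    """
    dists = np.linalg.norm(demo_traj - z_t, axis=-1)  # distance to each demo frame
    return -float(dists.min())                        # reward = -d(z_t, closest demo keypoints)

# Toy usage with K = 6 keypoints, each an (x, y) coordinate.
K, T = 6, 50
demo_traj = np.random.rand(T, K * 2)    # keypoints trajectory from the translated demo
z_t = np.random.rand(K * 2)             # keypoints of the current observation
print(keypoint_reward(z_t, demo_traj))  # reward signal used to train the policy network pi
```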

Quantitative Results

LbW Method Image

To fairly evaluate the effectiveness of our perception module, we implement two baselines that use the same control module as LbW but different reward learning methods. We use success rate as the metric to compare our method with the baselines. At test time, a trial is considered successful if the robot completes the task within a specified number of time steps. The results are averaged over 10 test episodes for each task.
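As a concrete reference for this protocol, below is a hedged sketch of the success-rate computation. The Gym-style env.reset()/env.step() interface, the "success" flag in the info dict, and the step budget are assumptions for illustration, not the paper's actual evaluation code.

```python
def success_rate(env, policy, episodes: int = 10, max_steps: int = 200) -> float:
    """Fraction of test episodes in which the task is completed within the
    step budget (10 test episodes per task in the paper)."""
    successes = 0
    for _ in range(episodes):
        obs = env.reset()
        for _ in range(max_steps):
            action = policy(obs)                        # keypoint-based observation -> action
            obs, reward, done, info = env.step(action)  # assumed Gym-style interface
            if info.get("success", False):              # assumed success flag from the env
                successes += 1
                break
            if done:
                break
    return successes / episodes
```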

Qualitative Results

LbW Method Image

Given the human video in the first row as input, we present the images translated by CycleGAN in the second row. In the third row, we visualize our translated images together with the keypoints detected by our perception module. Our perception module accurately detects the robot arm pose and the location of the interacting object.

LbW Method Image

Finally, we show videos of LbW performing each of the manipulation tasks.

Video Summary