PlaTe: Visually-Grounded Planning with Transformers in Procedural Tasks

Jiankai Sun, De-An Huang, Bo Lu, Yun-Hui Liu, Bolei Zhou, Animesh Garg

Paper / Code

Abstract

In this work, we study how to leverage instructional videos to facilitate the understanding of human decision-making processes, focusing on training a model that can plan a goal-directed procedure from real-world videos. Learning structured and plannable state and action spaces directly from unstructured videos is the key technical challenge of our task, and it raises two problems: first, the appearance gap between the training and validation sets of unstructured videos can be large; second, these gaps lead to decision errors that compound over the steps. We address these limitations with the Planning Transformer (PlaTe), which circumvents the compounding prediction errors that single-step models accumulate during long model-based rollouts. Our method simultaneously learns the latent state and action information of assigned tasks and the representations of the decision-making process from human demonstrations. Experiments on real-world instructional videos show that our method reaches the indicated goal more reliably than previous algorithms. We also validate the feasibility of applying the proposed procedural planning on a UR-5 robot platform.
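To make the idea concrete, below is a minimal sketch of transformer-based procedural planning conditioned on start and goal observations, written in PyTorch. This is not the authors' implementation: the module name ProceduralPlanner, the single-decoder design, the dimensions, and the action-vocabulary size are all illustrative assumptions; the actual PlaTe architecture is described in the paper.

# Minimal sketch (assumptions, not the PlaTe code): given visual features of the
# start and goal states, autoregressively decode a sequence of discrete action steps.
import torch
import torch.nn as nn


class ProceduralPlanner(nn.Module):
    def __init__(self, visual_dim=512, num_actions=105, d_model=128,
                 nhead=4, num_layers=2, max_steps=6):
        super().__init__()
        self.max_steps = max_steps
        # Project start/goal visual features into the latent state space.
        self.state_proj = nn.Linear(visual_dim, d_model)
        # Embed previously predicted actions for autoregressive decoding (+1 for BOS).
        self.action_emb = nn.Embedding(num_actions + 1, d_model)
        self.pos_emb = nn.Embedding(max_steps + 2, d_model)
        decoder_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers)
        self.action_head = nn.Linear(d_model, num_actions)

    def forward(self, start_feat, goal_feat, action_history):
        """start_feat, goal_feat: (B, visual_dim); action_history: (B, T) action ids."""
        # Conditioning memory: latent start and goal states.
        memory = self.state_proj(torch.stack([start_feat, goal_feat], dim=1))
        tgt = self.action_emb(action_history)
        positions = torch.arange(action_history.size(1), device=tgt.device)
        tgt = tgt + self.pos_emb(positions)
        # Causal mask: each step attends to the start/goal states and all earlier
        # predicted steps, so the plan is decoded jointly rather than one
        # independent transition at a time.
        causal = nn.Transformer.generate_square_subsequent_mask(
            tgt.size(1)).to(tgt.device)
        h = self.decoder(tgt, memory, tgt_mask=causal)
        return self.action_head(h)  # (B, T, num_actions) logits


# Usage: greedy decoding of a plan from a start observation to a goal observation.
planner = ProceduralPlanner()
planner.eval()
start, goal = torch.randn(1, 512), torch.randn(1, 512)
history = torch.full((1, 1), 105)  # BOS token id (hypothetical)
with torch.no_grad():
    for _ in range(planner.max_steps):
        logits = planner(start, goal, history)
        next_action = logits[:, -1].argmax(dim=-1, keepdim=True)
        history = torch.cat([history, next_action], dim=1)
print(history[:, 1:])  # predicted action-step ids

Conditioning every step on the goal and on the entire partial plan, instead of rolling out a single-step model, is what allows a transformer planner to mitigate the compounding errors described above.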

Main Video

More Qualitative Results on CrossTask

Representative samples from CrossTask and a qualitative comparison of our method with the baselines.

Make Bread and Butter Pickles
Make French Toast
Make Kimchi Fried Rice

More Qualitative Results on ActioNet

Representative samples from ActioNet and a qualitative comparison of our method with the baselines.

Boil Water with A Kettle
Wash Dirty Cloth

Failure Cases

Reasons for the failure cases:

  • The task is complex.
  • A single state contains multiple semantic concepts.
  • Different states look very similar.
Make Kerala Fish Curry
Make Irish Coffee

Acknowledgements

This project is supported by the following.