Imitation learning (IL) is a compelling method for training robots to complete a wide variety of manipulation tasks. While many modern IL algorithms use RGB observations by default, several recent works have shown that 3D scene representations lifted from calibrated RGBD cameras can be useful for completing more complicated tasks, generalizing between camera viewpoints, generalizing to new instances of objects, and learning in low-data regimes. However, these works generally either focus on 3D keyframe prediction, utilize the scene information in a way that is highly specific to one action decoder, or omit semantic information important to task success. In this work, we argue that these 3D scene representations are useful for a variety of IL algorithms, even those originally designed to work with 2D inputs. To that end, we introduce Adaptive 3D Scene Representation (Adapt3R), a general-purpose 3D observation encoder. Adapt3R uses a novel architecture to synthesize data from one or more depth cameras into a single vector that can then be used as conditioning for a variety of IL algorithms. We show that when combined with state-of-the-art (SOTA) multitask IL algorithms, Adapt3R maintains their multitask learning capacity while enabling zero-shot transfer to novel embodiments and camera poses.
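To make the interface concrete, below is a minimal sketch of the general pattern the abstract describes: lifting calibrated depth images into a world-frame point cloud and encoding it into a single conditioning vector for an IL action decoder. This is not the authors' implementation; the function and module names, network sizes, and camera parameters are illustrative assumptions.

import torch
import torch.nn as nn


def backproject(depth, intrinsics, extrinsics):
    """Lift a calibrated depth image (H, W) into a world-frame point cloud (H*W, 3)."""
    h, w = depth.shape
    v, u = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    fx, fy, cx, cy = intrinsics
    z = depth.flatten()
    x = (u.flatten() - cx) * z / fx
    y = (v.flatten() - cy) * z / fy
    pts_cam = torch.stack([x, y, z, torch.ones_like(z)], dim=-1)  # homogeneous camera-frame points
    return (extrinsics @ pts_cam.T).T[:, :3]                      # camera -> world, drop homogeneous coord


class PointCloudEncoder(nn.Module):
    """Encodes a fused point cloud into a single fixed-size conditioning vector."""
    def __init__(self, out_dim=256):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, out_dim))

    def forward(self, points):               # points: (N, 3)
        feats = self.point_mlp(points)       # per-point features
        return feats.max(dim=0).values       # permutation-invariant pooling -> (out_dim,)


# Usage: fuse one or more calibrated depth cameras into one cloud, then condition
# any IL action decoder (e.g. a diffusion or transformer head) on the resulting vector.
depth = torch.rand(64, 64) + 0.5             # placeholder depth image
intrinsics = (60.0, 60.0, 32.0, 32.0)        # fx, fy, cx, cy (illustrative values)
extrinsics = torch.eye(4)                    # camera-to-world pose
cloud = backproject(depth, intrinsics, extrinsics)
cond = PointCloudEncoder()(cloud)            # single vector passed to the action decoder

The key design point from the abstract is that the encoder's output is decoder-agnostic: because the scene is summarized as one vector, it can condition different IL algorithms without modifying them.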
@misc{wilcox2025adapt3radaptive3dscene,
      title={Adapt3R: Adaptive 3D Scene Representation for Domain Transfer in Imitation Learning},
      author={Albert Wilcox and Mohamed Ghanem and Masoud Moghani and Pierre Barroso and Benjamin Joffe and Animesh Garg},
      year={2025},
      eprint={2503.04877},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.04877},
}