People, AI, & Robots Research Group
PAIR Lab is directed by Animesh Garg in the Department of Computer Science at the University of Toronto.
Our research vision is to build the Algorithmic Foundations for Generalizable Autonomy, enabling robots to acquire skills at both cognitive & dexterous levels and to seamlessly interact & collaborate with humans in novel environments. We focus on understanding structured inductive biases and causality in a quest for general-purpose embodied intelligence that learns from imprecise information and achieves the flexibility & efficiency of human reasoning.
Research: Our current research focuses on machine learning algorithms for perception and control in robotics, with the principal aim of understanding the representations and algorithms that enable efficient, general learning for interaction in autonomous agents. We work on challenging open problems at the intersection of computer vision, machine learning, and robotics, developing algorithms and systems that unify reinforcement learning, control-theoretic modeling, causality, and 2D/3D visual scene understanding to teach robots to perceive and interact with the physical world. Read more
Research Interests: Robotics, Reinforcement Learning, Causality, Perception
Current Applications: Mobile-manipulation in retail/warehouse, personal/service, and surgical/medical robotics.
Please note that the PAIR Lab is transitioning to Georgia Tech in 2023.
PAIR members will be affiliated with the Interactive Computing, Robotics, and Machine Learning centers.
We are accepting new students at all levels!
Please see openings for details.
Apr 3, 2023: Isaac ORBIT accepted at RA-L; to be presented at IROS 2023.
Jan 21, 2023: 2 papers at ICLR: SlotFormer & SEA for Structured Exploration.
Jan 15, 2023: 5 new papers at ICRA 2023.
Sep 15, 2022: 2 papers at CoRL: RoboTube & Bayesian Object Models.
Sep 12, 2022: 3 papers at NeurIPS: MoCoDA, Breaking Bad & SMPL.
Jul 8, 2022: New ECCV paper on differentiable simulation for grasping.
Jun 30, 2022: Workshop talks at ICRA and RSS 2022.
Jun 30, 2022: 2 papers at IROS: Scalable Sim2Real & Mobile-Manipulation with Articulated Objects.
May 25, 2022: 3 new RL papers: Koopman-RL @ICML, LFIW @L4DC, & GLIDE @WAFR.
Mar 2, 2022: MAC, NSM, and X-Pool accepted at CVPR 2022.
Affiliations: Vector Institute | UofT Mississauga | UofT Robotics Institute | UofT Engineering