ENGINEERING INTERACTIVE LEARNING IN ARTIFICIAL SYSTEMS
Most successful machine learning algorithms today rely either on carefully curated, human-labeled datasets or on vast amounts of interactive experience in simple environments. This reliance has critical drawbacks: the extensive, careful human effort required to curate the data is expensive, and models trained this way struggle to generalize beyond the scope of the data. In short, current AI is data-hungry, particularly for large-scale, carefully crafted human input. This creates challenges not only of expense and scale but also of ethics: the drive to collect more and better data conflicts with people's desire for privacy.
[Figure: Kim et al., "Active World Model Learning in Agent-Rich Environments with Progress Curiosity," ICML 2020.]
People, on the other hand, learn by virtue of their agency: they interact with varied environments, exploring and building rich mental models of their world that let them adapt flexibly to a wide variety of tasks. This loop of action and perception is inseparable from the human learning process: people decide what to look at, how to manipulate objects, and what to say to others.
Thus far, we have developed curious, self-supervised deep reinforcement learning agents that explore and learn to model 3D physical environments and other agents. In doing so, they gain predictive world models and useful visual representations (a minimal sketch of this kind of curiosity signal follows the list below). In the long term, we would like to grow these learning algorithms into a variety of robust technologies:
- Curious, exploring robots that autonomously adapt to new surroundings and perform a variety of tasks.
- Social AI that interacts with you and understands your beliefs and understandings.
- AI that accesses, harnesses, and wields human knowledge.
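To make the curiosity signal concrete: one common form of progress curiosity rewards the agent for the improvement of its world model rather than for raw prediction error, which keeps it from fixating on unpredictably noisy parts of the environment. The sketch below is a minimal, hypothetical PyTorch illustration, not the implementation from the papers cited here: the names (WorldModel, progress_reward, gamma) are ours, and the world model is a toy MLP. The intrinsic reward is the gap between the prediction error of a slowly updated "old" snapshot of the model and that of the current model, so it is positive precisely where the agent is still learning.

```python
# Hypothetical sketch of a learning-progress ("progress curiosity") reward.
# All names and hyperparameters here are illustrative assumptions.
import copy
import torch
import torch.nn as nn

class WorldModel(nn.Module):
    """Toy world model: predicts the next observation from (observation, action)."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, obs_dim),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

obs_dim, act_dim, gamma = 8, 2, 0.999
new_model = WorldModel(obs_dim, act_dim)
old_model = copy.deepcopy(new_model)      # slow-moving snapshot of the world model
opt = torch.optim.Adam(new_model.parameters(), lr=1e-3)
mse = nn.MSELoss(reduction="none")

def progress_reward(obs, act, next_obs):
    """Intrinsic reward: how much better the current model predicts a batch of
    transitions than an older snapshot of itself (an estimate of learning progress)."""
    with torch.no_grad():
        old_err = mse(old_model(obs, act), next_obs).mean(-1)
    new_err = mse(new_model(obs, act), next_obs).mean(-1)

    # Train the world model on the observed transitions.
    opt.zero_grad()
    new_err.mean().backward()
    opt.step()

    # Let the old snapshot slowly track the new model (EMA with rate 1 - gamma).
    with torch.no_grad():
        for p_old, p_new in zip(old_model.parameters(), new_model.parameters()):
            p_old.mul_(gamma).add_(p_new, alpha=1 - gamma)

    return (old_err - new_err).detach()   # positive where the model is improving

# Toy usage on a random batch of transitions:
obs, act = torch.randn(32, obs_dim), torch.randn(32, act_dim)
next_obs = torch.randn(32, obs_dim)
r_int = progress_reward(obs, act, next_obs)
print(r_int.shape)  # torch.Size([32])
```

Using an exponentially moving average snapshot as the baseline is just one simple way to estimate learning progress online; other instantiations track the history of prediction losses directly.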
- Kim, Kuno, Megumi Sano, Julian De Freitas, Nick Haber, and Daniel L. K. Yamins. "Active World Model Learning in Agent-Rich Environments with Progress Curiosity." In International Conference on Machine Learning (ICML), 2020.
- Haber, Nick, Damian Mrowca, Stephanie Wang, Li Fei-Fei, and Daniel L. K. Yamins. "Learning to Play with Intrinsically-Motivated, Self-Aware Agents." In Advances in Neural Information Processing Systems, pp. 8388-8399, 2018.
- Mrowca, Damian, Chengxu Zhuang, Elias Wang, Nick Haber, Li Fei-Fei, Josh Tenenbaum, and Daniel L. K. Yamins. "Flexible Neural Representation for Physics Prediction." In Advances in Neural Information Processing Systems, pp. 8799-8810, 2018.