{"id":4040,"date":"2015-12-08T04:38:21","date_gmt":"2015-12-08T04:38:21","guid":{"rendered":"http:\/\/www.kurzweilai.net\/?p=267891"},"modified":"2015-12-08T04:38:21","modified_gmt":"2015-12-08T04:38:21","slug":"how-robots-can-learn-from-babies","status":"publish","type":"post","link":"https:\/\/hoo.central12.com\/fugic\/2015\/12\/08\/how-robots-can-learn-from-babies\/","title":{"rendered":"How robots can learn from babies"},"content":{"rendered":"<div id=\"attachment_268230\" class=\"wp-caption aligncenter\" style=\"width: 610px;  border: 1px solid #dddddd; background-color: #f3f3f3; padding-top: 4px; margin: 10px; text-align:center; display: block; margin-right: auto; margin-left: auto;\"><a href=\"http:\/\/www.kurzweilai.net\/how-robots-can-learn-from-babies\/robots-and-kids-gaze-comparison\" rel=\"attachment wp-att-268230\"><img class=\"size-full wp-image-268230\" title=\"robots-and-kids-gaze-comparison\" src=\"http:\/\/www.kurzweilai.net\/images\/robots-and-kids-gaze-comparison.jpg\" alt=\"\" width=\"600\" height=\"189\" \/><\/a><p style=' padding: 0 4px 5px; margin: 0;'  class=\"wp-caption-text\">A collaboration between UW developmental psychologists and computer scientists aims to enable robots to learn in the same way that children naturally do. The team used research on how babies follow an adult\u2019s gaze to \u201cteach\u201d a robot to perform the same task. (credit: University of Washington)<\/p><\/div>\n<p>Babies learn about the world by exploring how their bodies move in space, grabbing toys, pushing things off tables and by watching and imitating what adults are doing. 
So instead of laboriously writing code (or moving a robot\u2019s arm or body to show it how to perform an action), why not just let robots learn like babies?<\/p>\n<p>That&#8217;s exactly what University of Washington (UW) developmental psychologists and computer scientists have now demonstrated in experiments that suggest that robots can \u201clearn\u201d much like kids &#8212; by amassing data through exploration, watching a human do something, and determining how to perform that task on their own.<\/p>\n<p>That new method would allow someone who doesn&#8217;t know anything about computer programming to teach a robot by demonstration \u2014 showing the robot how to clean your dishes, fold your clothes, or do household chores.<\/p>\n<p>&#8220;But to achieve that goal, you need the robot to be able to understand those actions and perform them on their own,\u201d said <a href=\"https:\/\/homes.cs.washington.edu\/~rao\/\" >Rajesh Rao<\/a>, a UW professor of computer science and engineering and senior author of an <a href=\"http:\/\/journals.plos.org\/plosone\/article?id=10.1371\/journal.pone.0141965\" >open-access<\/a> paper\u00a0in the journal\u00a0<em>PLoS ONE<\/em>.<\/p>\n<p>In the paper, the UW team developed a new probabilistic model aimed at solving a fundamental challenge in robotics: building robots that can learn new skills by watching people and imitating them. The roboticists collaborated with UW psychology professor and I-LABS co-director <a href=\"http:\/\/ilabs.uw.edu\/institute-faculty\/bio\/i-labs-andrew-n-meltzoff-phd\" >Andrew Meltzoff<\/a>, whose seminal research has shown that children as young as 18 months can infer the goal of an adult\u2019s actions and develop alternate ways of reaching that goal themselves.<\/p>\n<p>In one example, infants saw an adult try to pull apart a barbell-shaped toy, but the adult failed to achieve that goal because the toy was stuck together and his hands slipped off the ends. 
The infants watched carefully and then decided to use alternate methods \u2014 they wrapped their tiny fingers all the way around the ends and yanked especially hard \u2014 duplicating what the adult intended to do.<\/p>\n<p><strong>Machine-learning algorithms based on play<\/strong><\/p>\n<div id=\"attachment_268229\" class=\"wp-caption aligncenter\" style=\"width: 607px;  border: 1px solid #dddddd; background-color: #f3f3f3; padding-top: 4px; margin: 10px; text-align:center; display: block; margin-right: auto; margin-left: auto;\"><a href=\"http:\/\/www.kurzweilai.net\/how-robots-can-learn-from-babies\/robots-kids-tabletop-image\" rel=\"attachment wp-att-268229\"><img class=\"size-full wp-image-268229\" title=\"Robots-kids-tabletop-image\" src=\"http:\/\/www.kurzweilai.net\/images\/Robots-kids-tabletop-image.jpg\" alt=\"\" width=\"597\" height=\"300\" \/><\/a><p style=' padding: 0 4px 5px; margin: 0;'  class=\"wp-caption-text\">This robot used the new UW model to imitate a human moving toy food objects around a tabletop. By learning which actions worked best with its own geometry, the robot could use different means to achieve the same goal &#8212; a key to enabling robots to learn through imitation. (credit: University of Washington)<\/p><\/div>\n<p>Children acquire intention-reading skills, in part, through self-exploration that helps them learn the laws of physics and how their own actions influence objects, eventually allowing them to amass enough knowledge to learn from others and to interpret their intentions. Meltzoff thinks that one of the reasons babies learn so quickly is that they are so playful.<\/p>\n<p>\u201cBabies engage in what looks like mindless play, but this enables future learning. It\u2019s a baby\u2019s secret sauce for innovation,\u201d Meltzoff said. \u201cIf they\u2019re trying to figure out how to work a new toy, they\u2019re actually using knowledge they gained by playing with other toys. 
During play they\u2019re learning a mental model of how their actions cause changes in the world. And once you have that model you can begin to solve novel problems and start to predict someone else\u2019s intentions.\u201d<\/p>\n<p>Rao\u2019s team used that infant research to develop machine learning algorithms that allow a robot to explore how its own actions result in different outcomes. Then the robot uses that learned probabilistic model to infer what a human wants it to do and complete the task, and even to \u201cask\u201d for help if it\u2019s not certain it can.<\/p>\n<p><strong>How to follow a human&#8217;s gaze<\/strong><\/p>\n<p>The team tested its robotic model in two different scenarios: a computer simulation experiment in which a robot learns to follow a human\u2019s gaze, and another experiment in which an actual robot learns to imitate human actions involving moving toy food objects to different areas on a tabletop.<\/p>\n<p>In the gaze experiment, the robot learns a model of its own head movements and assumes that the human\u2019s head is governed by the same rules. The robot tracks the beginning and ending points of a human\u2019s head movements as the human looks across the room and uses that information to figure out where the person is looking. The robot then uses its learned model of head movements to fixate on the same location as the human.<\/p>\n<p>The team also recreated one of Meltzoff\u2019s tests that showed infants who had experience with visual barriers and blindfolds weren\u2019t interested in looking where a blindfolded adult was looking, because they understood the person couldn\u2019t actually see. 
Once the team enabled the robot to \u201clearn\u201d what the consequences of being blindfolded were, it no longer followed the human\u2019s head movement to look at the same spot.<\/p>\n<p><strong>Smart movements: beyond mimicking<\/strong><\/p>\n<p>In the second experiment, the team allowed a robot to experiment with pushing or picking up different objects and moving them around a tabletop. The robot used that model to imitate a human who moved objects around or cleared everything off the tabletop. Rather than rigidly mimicking the human action each time, the robot sometimes used different means to achieve the same ends.<\/p>\n<p>\u201cIf the human pushes an object to a new location, it may be easier and more reliable for a robot with a gripper to pick it up to move it there rather than push it,\u201d said lead author <a href=\"https:\/\/sites.google.com\/site\/gradstudentpage\/\" >Michael Jae-Yoon Chung<\/a>, a UW doctoral student in computer science and engineering. \u201cBut that requires knowing what the goal is, which is a hard problem in robotics and which our paper tries to address.\u201d<\/p>\n<p>Though the initial experiments involved learning how to infer goals and imitate simple behaviors, the team plans to explore how such a model can help robots learn more complicated tasks.<\/p>\n<p>\u201cBabies learn through their own play and by watching others,\u201d says Meltzoff, \u201cand they are the best learners on the planet &#8212; why not design robots that learn as effortlessly as a child?\u201d<\/p>\n<p>That raises a question: can babies also learn from robots they&#8217;ve taught &#8212; in a closed loop? 
And where might that eventually take education &#8212; and civilization?<\/p>\n<hr \/>\n<p><strong>Abstract of\u00a0<em>A Bayesian Developmental Approach to Robotic Goal-Based Imitation Learning<\/em><\/strong><\/p>\n<p>A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that children use self-experience to bootstrap the process of intention recognition and goal-based imitation. Our approach allows an autonomous agent to: (i) learn probabilistic models of actions through self-discovery and experience, (ii) utilize these learned models for inferring the goals of human actions, and (iii) perform goal-based imitation for robotic learning and human-robot collaboration. Such an approach allows a robot to leverage its increasing repertoire of learned behaviors to interpret increasingly complex human actions and use the inferred goals for imitation, even when the robot has very different actuators from humans. We demonstrate our approach using two different scenarios: (i) a simulated robot that learns human-like gaze following behavior, and (ii) a robot that learns to imitate human actions in a tabletop organization task. In both cases, the agent learns a probabilistic model of its own actions, and uses this model for goal inference and goal-based imitation. We also show that the robotic agent can use its probabilistic model to seek human assistance when it recognizes that its inferred actions are too uncertain, risky, or impossible to perform, thereby opening the door to human-robot collaboration.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Babies learn about the world by exploring how their bodies move in space, grabbing toys, pushing things off tables and by watching and imitating what adults are doing. 
So instead of laboriously writing code (or moving a robot&rsquo;s arm or body to show it how to perform an action), why not just let them learn [&#8230;]<\/p>\n","protected":false},"author":13,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[46,49,43],"tags":[],"class_list":["post-4040","post","type-post","status-publish","format-standard","hentry","category-airobotics","category-cognitive-scienceneuroscience","category-news"],"_links":{"self":[{"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/posts\/4040"}],"collection":[{"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/users\/13"}],"replies":[{"embeddable":true,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/comments?post=4040"}],"version-history":[{"count":1,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/posts\/4040\/revisions"}],"predecessor-version":[{"id":4041,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/posts\/4040\/revisions\/4041"}],"wp:attachment":[{"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/media?parent=4040"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/categories?post=4040"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/tags?post=4040"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}