{"id":14289,"date":"2017-03-08T01:29:33","date_gmt":"2017-03-08T01:29:33","guid":{"rendered":"http:\/\/www.kurzweilai.net\/?p=295100"},"modified":"2017-03-09T23:14:30","modified_gmt":"2017-03-09T23:14:30","slug":"how-to-control-robots-with-your-mind","status":"publish","type":"post","link":"https:\/\/hoo.central12.com\/fugic\/2017\/03\/08\/how-to-control-robots-with-your-mind\/","title":{"rendered":"How to control robots with your mind"},"content":{"rendered":"<div id=\"attachment_295183\" class=\"wp-caption aligncenter\" style=\"width: 524px;  border: 1px solid #dddddd; background-color: #f3f3f3; padding-top: 4px; margin: 10px; text-align:center; display: block; margin-right: auto; margin-left: auto;\"><img class=\" wp-image-295183 noshadow\" title=\"brain-wave communication with robot\" src=\"http:\/\/www.kurzweilai.net\/images\/brain-wave-communication-with-robot.png\" alt=\"\" width=\"514\" height=\"393\" \/><p style=' padding: 0 4px 5px; margin: 0;'  class=\"wp-caption-text\">The robot is informed that its initial motion was incorrect based upon real-time decoding of the observer\u2019s EEG signals, and it corrects its selection accordingly to properly sort an object (credit: Andres F. Salazar-Gomez et al.\/MIT, Boston University)<\/p><\/div>\n<p>Two research teams are developing new ways to communicate with robots and shape them one day into the kind of productive workers featured in the current AMC TV show <a href=\"http:\/\/www.amc.com\/shows\/humans?gclid=COy7_vTJxdICFcEbgQodAtMBjw\" ><em>HUMANS<\/em><\/a> (now in second season).<\/p>\n<p>Programming robots to function in a real-world environment is normally a complex process. 
But now a team from MIT\u2019s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University is creating a system that lets people correct robot mistakes instantly by simply thinking.<\/p>\n<p>In the initial experiment, the system uses data from an electroencephalography (EEG) helmet to correct robot performance on an object-sorting task. Novel machine-learning algorithms enable the system to classify brain waves within 10 to 30 milliseconds.<\/p>\n<div id=\"attachment_295188\" class=\"wp-caption aligncenter\" style=\"width: 524px;  border: 1px solid #dddddd; background-color: #f3f3f3; padding-top: 4px; margin: 10px; text-align:center; display: block; margin-right: auto; margin-left: auto;\"><img class=\" wp-image-295188\" title=\"human EEG - robot control system\" src=\"http:\/\/www.kurzweilai.net\/images\/human-EEG-robot-control-system.png\" alt=\"\" width=\"514\" height=\"402\" \/><p style=' padding: 0 4px 5px; margin: 0;'  class=\"wp-caption-text\">The system includes a main experiment controller, a Baxter robot, and an EEG acquisition and classification system. The goal is to make the robot pick up the cup that the experimenter is thinking about. An Arduino computer (bottom) relays messages between the EEG system and robot controller. A mechanical contact switch (yellow) detects robot arm motion initiation. (credit: Andres F. Salazar-Gomez et al.\/MIT, Boston University)<\/p><\/div>\n<p>While the system currently handles relatively simple binary-choice activities, one day we may be able to control robots in much more intuitive ways. \u201cImagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button, or even say a word,\u201d says CSAIL Director Daniela Rus. 
\u201cA streamlined approach like that would improve our abilities to supervise factory robots, driverless cars, and other technologies we haven\u2019t even invented yet.\u201d<\/p>\n<p>The team used a humanoid robot named \u201cBaxter\u201d from Rethink Robotics, the company led by former CSAIL director and iRobot co-founder Rodney Brooks.<\/p>\n<p><iframe frameborder=\"0\" height=\"315\" src=\"https:\/\/www.youtube.com\/embed\/Zd9WhJPa2Ok\" width=\"560\"><\/iframe><br \/>\n<em>MITCSAIL | Brain-controlled Robots<\/em><\/p>\n<p><strong>Intuitive human-robot interaction<\/strong><\/p>\n<p>The system detects brain signals called \u201cerror-related potentials\u201d (generated whenever our brains notice a mistake) to determine if the human agrees with a robot&#8217;s decision.<\/p>\n<p>\u201cAs you watch the robot, all you have to do is mentally agree or disagree with what it is doing,\u201d says Rus. \u201cYou don\u2019t have to train yourself to think in a certain way \u2014 the machine adapts to you, and not the other way around.\u201d Or if the robot\u2019s not sure about its decision, it can trigger a human response to get a more accurate answer.<\/p>\n<p>The team believes that future systems could extend to more complex multiple-choice tasks. The system could even be useful for people who can\u2019t communicate verbally: the robot could be controlled via a series of several discrete binary choices, similar to how paralyzed locked-in patients spell out words with their minds.<\/p>\n<p>The project was funded in part by Boeing and the National Science Foundation. 
An open-access <a href=\"http:\/\/groups.csail.mit.edu\/drl\/wiki\/images\/e\/ec\/Correcting_Robot_Mistakes_in_Real_Time_Using_EEG_Signals.pdf\" >paper<\/a> will be presented at the IEEE <a href=\"http:\/\/www.icra2017.org\/\" >International Conference on Robotics and Automation<\/a> (ICRA) in Singapore this May.<\/p>\n<p><strong>Here, robot, fetch!<\/strong><\/p>\n<div id=\"attachment_295150\" class=\"wp-caption aligncenter\" style=\"width: 521px;  border: 1px solid #dddddd; background-color: #f3f3f3; padding-top: 4px; margin: 10px; text-align:center; display: block; margin-right: auto; margin-left: auto;\"><img class=\" wp-image-295150\" title=\"robot learns to fetch\" src=\"http:\/\/www.kurzweilai.net\/images\/robot-learns-to-fetch.png\" alt=\"\" width=\"511\" height=\"307\" \/><p style=' padding: 0 4px 5px; margin: 0;'  class=\"wp-caption-text\">Robot asks questions, and based on a person\u2019s language and gesture, infers what item to deliver. (credit: David Whitney\/Brown University)<\/p><\/div>\n<p>But what if the robot is still confused? Researchers in <a href=\"http:\/\/h2r.cs.brown.edu\/\" >Brown University&#8217;s Humans to Robots Lab<\/a> have an app for that.<\/p>\n<p>\u201cFetching objects is an important task that we want collaborative robots to be able to do,\u201d said computer science <a href=\"http:\/\/cs.brown.edu\/~stefie10\/\" >professor Stefanie Tellex<\/a>. \u201cBut it\u2019s easy for the robot to make errors, either by misunderstanding what we want, or by being in situations where commands are ambiguous. So what we wanted to do here was come up with a way for the robot to ask a question when it\u2019s not sure.\u201d<\/p>\n<p>Tellex\u2019s lab previously developed an algorithm that enables robots to receive speech commands as well as information from human gestures. But it ran into problems when many very similar objects were close to each other. 
For example, on the table above, simply asking for \u201ca marker\u201d isn\u2019t specific enough, and it might not be clear which one a person is pointing to if a number of markers are clustered close together.<\/p>\n<p>\u201cWhat we want in these situations is for the robot to be able to signal that it\u2019s confused and ask a question rather than just fetching the wrong object,\u201d Tellex explained.<\/p>\n<p>The new algorithm does just that, enabling the robot to quantify how certain it is that it knows what a user wants. When its certainty is high, the robot will simply hand over the object as requested. When it\u2019s not so certain, the robot makes its best guess about what the person wants, then asks for confirmation by hovering its gripper over the object and asking, \u201cthis one?\u201d<\/p>\n<p><iframe frameborder=\"0\" height=\"315\" src=\"https:\/\/www.youtube.com\/embed\/xuPZ9zKVIfw\" width=\"560\"><\/iframe><br \/>\n<em>David Whitney | Reducing Errors in Object-Fetching Interactions through Social Feedback<\/em><\/p>\n<p>One of the important features of the system is that the robot doesn\u2019t ask questions with every interaction; it asks intelligently.<\/p>\n<p>And even though the system asks only a very simple question, it\u2019s able to make important inferences based on the answer. For example, say a user asks for a marker and there are two markers on a table. If the user tells the robot that its first guess was wrong, the algorithm deduces that the other marker must be the one that the user wants, and will hand that one over without asking another question. Those kinds of inferences, known as &#8220;implicatures,&#8221; make the algorithm more efficient.<\/p>\n<p>In future work, Tellex and her team would like to combine the algorithm with more robust speech recognition systems, which might further increase the system\u2019s accuracy and speed. &#8220;Currently we do not consider the parse of the human\u2019s speech. 
We would like the model to understand prepositional phrases (&#8216;on the left,&#8217; &#8216;nearest to me&#8217;). This would allow the robot to understand how items are spatially related to other items through language.&#8221;<\/p>\n<p>Ultimately, Tellex hopes, systems like this will help robots become useful collaborators both at home and at work.<\/p>\n<p>An open-access <a href=\"http:\/\/h2r.cs.brown.edu\/wp-content\/uploads\/2016\/10\/whitney17.pdf\" >paper<\/a> on the DARPA-funded research will also be presented at the\u00a0<a href=\"http:\/\/www.icra2017.org\/\" >International Conference on Robotics and Automation<\/a>.<\/p>\n<hr \/>\n<h4>Abstract of\u00a0<em>Correcting Robot Mistakes in Real Time Using EEG Signals<\/em><\/h4>\n<p>Communication with a robot using brain activity from a human collaborator could provide a direct and fast feedback loop that is easy and natural for the human, thereby enabling a wide variety of intuitive interaction tasks. This paper explores the application of EEG-measured error-related potentials (ErrPs) to closed-loop robotic control. ErrP signals are particularly useful for robotics tasks because they are naturally occurring within the brain in response to an unexpected error. We decode ErrP signals from a human operator in real time to control a Rethink Robotics Baxter robot during a binary object selection task. We also show that utilizing a secondary interactive error-related potential signal generated during this closed-loop robot task can greatly improve classification performance, suggesting new ways in which robots can acquire human feedback. The design and implementation of the complete system is described, and results are presented for real-time closed-loop and open-loop experiments as well as offline analysis of both primary and secondary ErrP signals. These experiments are performed using general population subjects that have not been trained or screened. 
This work thereby demonstrates the potential for EEG-based feedback methods to facilitate seamless robotic control, and moves closer towards the goal of real-time intuitive interaction.<\/p>\n<hr \/>\n<h4>Abstract of\u00a0<em>Reducing Errors in Object-Fetching Interactions through Social Feedback<\/em><\/h4>\n<p>Fetching items is an important problem for a social robot. It requires a robot to interpret a person\u2019s language and gesture and use these noisy observations to infer what item to deliver. If the robot could ask questions, it would help the robot be faster and more accurate in its task. Existing approaches either do not ask questions, or rely on fixed question-asking policies. To address this problem, we propose a model that makes assumptions about cooperation between agents to perform richer signal extraction from observations. This work defines a mathematical framework for an item-fetching domain that allows a robot to increase the speed and accuracy of its ability to interpret a person\u2019s requests by reasoning about its own uncertainty as well as processing implicit information (implicatures). We formalize the item-delivery domain as a Partially Observable Markov Decision Process (POMDP), and approximately solve this POMDP in real time. Our model improves speed and accuracy of fetching tasks by asking relevant clarifying questions only when necessary. To measure our model\u2019s improvements, we conducted a real-world user study with 16 participants. Our method achieved greater accuracy and a faster interaction time compared to state-of-the-art baselines. Our model is 2.17 seconds faster (25% faster) than a state-of-the-art baseline, while being 2.1% more accurate.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Two research teams are developing new ways to communicate with robots and shape them one day into the kind of productive workers featured in the current AMC TV show HUMANS (now in second season). 
Programming robots to function in a real-world environment is normally a complex process. But now a team from MIT&rsquo;s Computer Science [&#8230;]<\/p>\n","protected":false},"author":13,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[46,49,48,43],"tags":[],"class_list":["post-14289","post","type-post","status-publish","format-standard","hentry","category-airobotics","category-cognitive-scienceneuroscience","category-electronics","category-news"],"_links":{"self":[{"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/posts\/14289"}],"collection":[{"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/users\/13"}],"replies":[{"embeddable":true,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/comments?post=14289"}],"version-history":[{"count":3,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/posts\/14289\/revisions"}],"predecessor-version":[{"id":14358,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/posts\/14289\/revisions\/14358"}],"wp:attachment":[{"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/media?parent=14289"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/categories?post=14289"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/tags?post=14289"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}