{"id":22318,"date":"2018-02-09T23:05:43","date_gmt":"2018-02-09T23:05:43","guid":{"rendered":"http:\/\/www.kurzweilai.net\/?p=308184"},"modified":"2018-02-12T23:33:52","modified_gmt":"2018-02-12T23:33:52","slug":"ai-algorithm-with-social-skills-teaches-humans-how-to-collaborate","status":"publish","type":"post","link":"https:\/\/hoo.central12.com\/fugic\/2018\/02\/09\/ai-algorithm-with-social-skills-teaches-humans-how-to-collaborate\/","title":{"rendered":"AI algorithm with &lsquo;social skills&rsquo; teaches humans how to collaborate"},"content":{"rendered":"<div id=\"attachment_308782\" class=\"wp-caption aligncenter\" style=\"width: 570px;  border: 1px solid #dddddd; background-color: #f3f3f3; padding-top: 4px; margin: 10px; text-align:center; display: block; margin-right: auto; margin-left: auto;\"><img class=\" wp-image-308782\" title=\"Human-machine cooperation\" src=\"http:\/\/www.kurzweilai.net\/images\/Human-machine-cooperation.png\" alt=\"\" width=\"560\" height=\"453\" \/><p style=' padding: 0 4px 5px; margin: 0;'  class=\"wp-caption-text\">(credit: Iyad Rahwan)<\/p><\/div>\n<p>An international team has developed an AI algorithm with social skills that has outperformed humans in the ability to cooperate with people and machines in playing a variety of two-player\u00a0games.<\/p>\n<p>The researchers, led by Iyad Rahwan, PhD, an MIT Associate Professor of Media Arts and Sciences, tested humans and the algorithm, called S# (\u201cS sharp\u201d), in three types of interactions:\u00a0machine-machine, human-machine, and human-human. In most instances, machines programmed with S# outperformed humans in finding compromises that benefit both parties.<\/p>\n<p>\u201cTwo humans, if they were honest with each other and loyal, would have done as well as two machines,\u201d said lead author\u00a0<a href=\"http:\/\/www.byu.edu\/\" >BYU<\/a>\u00a0computer science professor\u00a0<a href=\"https:\/\/cs.byu.edu\/faculty\/jwc54\" >Jacob Crandall<\/a>. 
\u201cAs it is, about half of the humans lied at some point. So essentially, this particular algorithm is learning that moral characteristics are better [since it\u2019s programmed to not lie] and it also learns to maintain cooperation once it emerges.\u201d<\/p>\n<p>\u201cThe end goal is that we understand the mathematics behind cooperation with people and what attributes artificial intelligence needs to develop social skills,\u201d said Crandall.\u00a0\u201cAI needs to be able to respond to us and articulate what it\u2019s doing. It has to be able to interact with other people.\u201d<\/p>\n<p><strong>How casual talk by AI helps humans be more cooperative<\/strong><\/p>\n<p>One important finding: colloquial phrases (called \u201ccheap talk\u201d in the study) doubled the amount of cooperation. In tests, if human participants cooperated with the machine, the machine might respond with a \u201cSweet. We are getting rich!\u201d or \u201cI accept your last proposal.\u201d If the participants tried to betray the machine or back out of a deal with it, they might be met with a trash-talking \u201cCurse you!\u201d, \u201cYou will pay for that!\u201d or even an \u201cIn your face!\u201d<\/p>\n<p>And when machines used cheap talk, their human counterparts were often unable to tell whether they were playing a human or a machine &#8212; a sort of mini \u201cTuring test.\u201d<\/p>\n<p>The research findings, Crandall hopes, could have long-term implications for human relationships. \u201cIn society, relationships break down all the time,\u201d he said. \u201cPeople that were friends for years all of a sudden become enemies. 
Because the machine is often actually better at reaching these compromises than we are, it can potentially teach us how to do this better.\u201d<\/p>\n<p>The research is described in an <a href=\"https:\/\/www.nature.com\/articles\/s41467-017-02597-8\" >open-access paper<\/a> in <em>Nature Communications.<\/em><\/p>\n<p><strong>A human-machine collaborative chatbot system\u00a0<\/strong><\/p>\n<div id=\"attachment_308803\" class=\"wp-caption aligncenter\" style=\"width: 449px;  border: 1px solid #dddddd; background-color: #f3f3f3; padding-top: 4px; margin: 10px; text-align:center; display: block; margin-right: auto; margin-left: auto;\"><img class=\" wp-image-308803\" title=\"Evorus conversation\" src=\"http:\/\/www.kurzweilai.net\/images\/Evorus-conversation.png\" alt=\"\" width=\"439\" height=\"281\" \/><p style=' padding: 0 4px 5px; margin: 0;'  class=\"wp-caption-text\">An actual conversation on Evorus, combining multiple chatbots and workers. (credit: T. Huang et al.)<\/p><\/div>\n<p>In a related study,\u00a0Carnegie Mellon University (CMU) researchers have created a new collaborative chatbot called\u00a0<a href=\"https:\/\/www.youtube.com\/watch?v=3SAG8jP-Q-M\" >Evorus<\/a>\u00a0that goes beyond Siri, Alexa, and Cortana by adding humans in the loop.<\/p>\n<p>Evorus combines a chatbot called Chorus with inputs by paid crowd workers at Amazon Mechanical Turk, who answer questions from users and vote on the best answer. Evorus keeps track of the questions asked and answered and, over time, begins to suggest these answers for subsequent questions. 
It can also use multiple chatbots, such as vote bots, Yelp Bot (restaurants) and Weather Bot to provide enhanced information.<\/p>\n<p>Humans are simultaneously training the system&#8217;s AI, making it gradually less dependent on people, says\u00a0<a href=\"http:\/\/www.cs.cmu.edu\/~jbigham\/\" >Jeff Bigham<\/a>, associate professor in the CMU Human-Computer Interaction Institute.<\/p>\n<p>The hope is that as the system grows, the AI will be able to handle an increasing percentage of questions, while the number of crowd workers necessary to respond to \u201clong tail\u201d questions will remain relatively constant.<\/p>\n<p>Keeping humans in the loop also reduces the risk that malicious users will manipulate the conversational agent inappropriately, as occurred when Microsoft briefly deployed its <a href=\"https:\/\/arstechnica.com\/information-technology\/2016\/03\/microsoft-terminates-its-tay-ai-chatbot-after-she-turns-into-a-nazi\/\" >Tay chatbot<\/a> in 2016, noted co-developer Ting-Hao Huang, a Ph.D. student in the Language Technologies Institute (LTI).<\/p>\n<p>The preliminary system is available for <a href=\"http:\/\/talkingtothecrowd.org\/\" >download<\/a> and use by anyone willing to be part of the research effort. It is\u00a0deployed via <a href=\"https:\/\/hangouts.google.com\/\" >Google Hangouts<\/a>, which allows for voice input as well as access from computers, phones, and smartwatches. 
The software architecture can also accept automated question-answering components developed by third parties.<\/p>\n<p>An open-access research paper on Evorus,\u00a0<a href=\"https:\/\/www.cs.cmu.edu\/~jbigham\/pubs\/pdfs\/2018\/evorus.pdf\" >available online<\/a>, will be presented at <a href=\"https:\/\/chi2018.acm.org\/\" >CHI 2018<\/a>, the Conference on Human Factors in Computing Systems in Montreal, April 21&#8211;26, 2018.<\/p>\n<p><iframe frameborder=\"0\" height=\"316\" src=\"https:\/\/www.youtube.com\/embed\/3SAG8jP-Q-M\" width=\"562\"><\/iframe><\/p>\n<hr \/>\n<h4>Abstract of\u00a0<em>Cooperating with machines<\/em><\/h4>\n<p>Since Alan Turing envisioned artificial intelligence, technical progress has often been measured by the ability to defeat humans in zero-sum encounters (e.g., Chess, Poker, or Go). Less attention has been given to scenarios in which human\u2013machine cooperation is beneficial but non-trivial, such as scenarios in which human and machine preferences are neither fully aligned nor fully in conflict. Cooperation does not require sheer computational power, but instead is facilitated by intuition, cultural norms, emotions, signals, and pre-evolved dispositions. Here, we develop an algorithm that combines a state-of-the-art reinforcement-learning algorithm\u00a0with mechanisms for signaling. We show that this algorithm can cooperate with people and other algorithms at levels that rival human cooperation in a variety of two-player repeated stochastic games. These results indicate that general human\u2013machine cooperation is achievable using a non-trivial, but ultimately simple, set of algorithmic mechanisms.<\/p>\n<hr \/>\n<h4>Abstract of<em> A Crowd-powered Conversational Assistant Built to Automate Itself Over Time<\/em><\/h4>\n<p>Crowd-powered conversational assistants have been shown to be more robust than automated systems, but do so at the cost of higher response latency and monetary costs. 
A promising direction is to combine the two approaches for high quality, low latency, and low cost solutions. In this paper, we introduce Evorus, a crowd-powered conversational assistant built to automate itself over time by (i) allowing new chatbots to be easily integrated to automate more scenarios, (ii) reusing prior crowd answers, and (iii) learning to automatically approve response candidates. Our 5-month-long deployment with 80 participants and 281 conversations shows that Evorus can automate itself without compromising conversation quality. Crowd-AI architectures have long been proposed as a way to reduce cost and latency for crowd-powered systems; Evorus demonstrates how automation can be introduced successfully in a deployed system. Its architecture allows future researchers to make further innovation on the underlying automated components in the context of a deployed open domain dialog system.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>An international team has developed an AI algorithm with social skills that has outperformed humans in the ability to cooperate with people and machines in playing a variety of two-player&nbsp;games. 
The researchers, led by Iyad Rahwan, PhD, an MIT Associate Professor of Media Arts and Sciences, tested humans and the algorithm, called S# (&ldquo;S sharp&rdquo;), [&#8230;]<\/p>\n","protected":false},"author":13,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[46,43,808,69],"tags":[],"class_list":["post-22318","post","type-post","status-publish","format-standard","hentry","category-airobotics","category-news","category-social-networkingweb","category-socialethicallegal"],"_links":{"self":[{"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/posts\/22318"}],"collection":[{"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/users\/13"}],"replies":[{"embeddable":true,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/comments?post=22318"}],"version-history":[{"count":2,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/posts\/22318\/revisions"}],"predecessor-version":[{"id":22375,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/posts\/22318\/revisions\/22375"}],"wp:attachment":[{"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/media?parent=22318"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/categories?post=22318"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/tags?post=22318"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}