{"id":13633,"date":"2017-02-05T06:28:08","date_gmt":"2017-02-05T06:28:08","guid":{"rendered":"http:\/\/www.kurzweilai.net\/?p=293156"},"modified":"2017-02-07T03:56:30","modified_gmt":"2017-02-07T03:56:30","slug":"beneficial-ai-conference-develops-asilomar-ai-principles-to-guide-future-ai-research","status":"publish","type":"post","link":"https:\/\/hoo.central12.com\/fugic\/2017\/02\/05\/beneficial-ai-conference-develops-asilomar-ai-principles-to-guide-future-ai-research\/","title":{"rendered":"Beneficial AI conference develops &lsquo;Asilomar AI principles&rsquo; to guide future AI research"},"content":{"rendered":"<div id=\"attachment_293174\" class=\"wp-caption aligncenter\" style=\"width: 635px;  border: 1px solid #dddddd; background-color: #f3f3f3; padding-top: 4px; margin: 10px; text-align:center; display: block; margin-right: auto; margin-left: auto;\"><img class=\" wp-image-293174\" title=\"BAI17 audience\" src=\"http:\/\/www.kurzweilai.net\/images\/BAI17-audience.jpg\" alt=\"\" width=\"625\" height=\"322\" \/><p style=' padding: 0 4px 5px; margin: 0;'  class=\"wp-caption-text\">Beneficial AI conference (credit: Future of Life Institute)<\/p><\/div>\n<p>At the <a href=\"https:\/\/futureoflife.org\/bai-2017\/\" >Beneficial AI 2017 conference<\/a>, held January 5&#8211;8 at a conference center in Asilomar, California &#8212; a sequel to the 2015 <a title=\"Permanent Link: AI safety conference in Puerto Rico\" href=\"https:\/\/futureoflife.org\/2015\/10\/12\/ai-safety-conference-in-puerto-rico\/\" rel=\"bookmark\" >AI Safety conference in Puerto Rico<\/a> &#8212; the <a href=\"https:\/\/futureoflife.org\/\" >Future of Life Institute<\/a> (FLI) brought together more than 100 AI researchers from academia and industry, along with thought leaders in economics, law, ethics, and philosophy, to formulate principles of beneficial AI.<\/p>\n<p>FLI hosted a two-day workshop for its grant recipients, followed by a 2.5-day conference, in which people from various AI-related 
fields hashed out opportunities and challenges related to the future of AI and steps we can take to ensure that the resulting technology is beneficial.<\/p>\n<div id=\"attachment_293170\" class=\"wp-caption aligncenter\" style=\"width: 646px;  border: 1px solid #dddddd; background-color: #f3f3f3; padding-top: 4px; margin: 10px; text-align:center; display: block; margin-right: auto; margin-left: auto;\"><img class=\" wp-image-293170\" title=\"BAI17participants\" src=\"http:\/\/www.kurzweilai.net\/images\/BAI17participants.png\" alt=\"\" width=\"636\" height=\"96\" \/><p style=' padding: 0 4px 5px; margin: 0;'  class=\"wp-caption-text\">Beneficial AI conference participants (credit: Future of Life Institute)<\/p><\/div>\n<p>The result was 23 <a href=\"https:\/\/futureoflife.org\/ai-principles\/\">Asilomar AI Principles<\/a>, intended to suggest AI research guidelines, such as &#8220;The goal of AI research should be to create not undirected intelligence, but beneficial intelligence&#8221; and &#8220;An arms race in lethal autonomous weapons should be avoided&#8221;; identify ethics and values, such as safety and transparency; and address longer-term issues &#8212; notably, &#8220;Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.&#8221;<\/p>\n<p>To date, <a href=\"https:\/\/futureoflife.org\/principles-signatories\/\" >2515 AI researchers and others<\/a> are signatories of the Principles. The process is described <a href=\"https:\/\/futureoflife.org\/2017\/01\/17\/principled-ai-discussion-asilomar\/\" >here<\/a>.<\/p>\n<p>The conference location has historic significance. 
In 2009, the <a title=\"A.A.A.I.\u2019s page on ethical issues\" href=\"http:\/\/www.aaai.org\/AITopics\/pmwiki\/pmwiki.php\/AITopics\/Ethics\">Association for the Advancement of Artificial Intelligence<\/a> held the <a href=\"http:\/\/research.microsoft.com\/en-us\/um\/people\/horvitz\/AAAI_Presidential_Panel_2008-2009.htm\" >Asilomar Meeting on Long-Term AI Futures<\/a> to address similar concerns. And in 1975, the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Asilomar_Conference_on_Recombinant_DNA\" >Asilomar Conference on Recombinant DNA<\/a>\u00a0was held to discuss potential\u00a0biohazards\u00a0and regulation of emerging biotechnology.<\/p>\n<p>The non-profit\u00a0<a href=\"https:\/\/en.wikipedia.org\/wiki\/Future_of_Life_Institute\" >Future of Life Institute<\/a> was founded in March 2014 by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, DeepMind research scientist Viktoriya Krakovna,\u00a0Boston University Ph.D. candidate in Developmental Sciences Meia Chita-Tegmark, and UCSC\u00a0physicist\u00a0<a title=\"Anthony Aguirre\" href=\"https:\/\/en.wikipedia.org\/wiki\/Anthony_Aguirre\">Anthony Aguirre<\/a>. 
Its mission is &#8220;to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.&#8221;<\/p>\n<p>FLI&#8217;s <a href=\"https:\/\/futureoflife.org\/team\/\" >scientific advisory board<\/a> includes physicist Stephen Hawking, SpaceX CEO Elon Musk, Astronomer Royal Martin Rees, and UC Berkeley Professor of Computer Science\/Smith-Zadeh Professor in Engineering Stuart Russell.<\/p>\n<p><iframe frameborder=\"0\" height=\"315\" src=\"https:\/\/www.youtube.com\/embed\/h0962biiZa4\" width=\"560\"><\/iframe> <a href=\"https:\/\/www.youtube.com\/channel\/UC-rCCy3FQ-GItDimSR9lhzw\" data-ytid=\"UC-rCCy3FQ-GItDimSR9lhzw\" data-sessionlink=\"itct=CDUQ4TkiEwi5qPaDuvfRAhWR0JwKHcFhD3co-B0\"><br \/>\n<em>Future of Life Institute<\/em><\/a><em> | Superintelligence: Science or Fiction? | Elon Musk &amp; Other Great Minds<\/em><\/p>\n<p><em>Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss with Max Tegmark (moderator) what likely outcomes might be if we succeed in building human-level AGI [artificial general intelligence] (and beyond), and also what we would like to happen.<\/em><\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>At the Beneficial AI 2017 conference, held January 5&ndash;8 at a conference center in Asilomar, California &mdash; a sequel to the 2015 AI Safety conference in Puerto Rico &mdash; the Future of Life Institute (FLI) brought together more than 100 AI researchers from academia and industry, along with thought leaders in economics, law, ethics, and philosophy, to 
[&#8230;]<\/p>\n","protected":false},"author":13,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[46,43,57,50],"tags":[],"class_list":["post-13633","post","type-post","status-publish","format-standard","hentry","category-airobotics","category-news","category-singularityfutures","category-survivaldefense"],"_links":{"self":[{"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/posts\/13633"}],"collection":[{"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/users\/13"}],"replies":[{"embeddable":true,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/comments?post=13633"}],"version-history":[{"count":4,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/posts\/13633\/revisions"}],"predecessor-version":[{"id":13656,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/posts\/13633\/revisions\/13656"}],"wp:attachment":[{"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/media?parent=13633"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/categories?post=13633"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/tags?post=13633"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}