{"id":6469,"date":"2016-03-24T07:39:46","date_gmt":"2016-03-24T07:39:46","guid":{"rendered":"http:\/\/www.kurzweilai.net\/?p=276767"},"modified":"2016-03-27T20:51:07","modified_gmt":"2016-03-27T20:51:07","slug":"automated-lip-reading-invented","status":"publish","type":"post","link":"https:\/\/hoo.central12.com\/fugic\/2016\/03\/24\/automated-lip-reading-invented\/","title":{"rendered":"Automated lip-reading invented"},"content":{"rendered":"<div id=\"attachment_276780\" class=\"wp-caption aligncenter\" style=\"width: 306px;  border: 1px solid #dddddd; background-color: #f3f3f3; padding-top: 4px; margin: 10px; text-align:center; display: block; margin-right: auto; margin-left: auto;\"><img class=\"size-full wp-image-276780\" title=\"HAL reading lips\" src=\"http:\/\/www.kurzweilai.net\/images\/HAL-reading-lips.jpg\" alt=\"\" width=\"296\" height=\"414\" \/><p style=' padding: 0 4px 5px; margin: 0;'  class=\"wp-caption-text\">(credit: MGM)<\/p><\/div>\n<p>New lip-reading technology developed at the <a href=\"https:\/\/www.uea.ac.uk\/\" >University of East Anglia<\/a>\u00a0could help in solving crimes and provide communication assistance for people with hearing and speech impairments.<\/p>\n<p>The visual speech recognition\u00a0technology, created by Helen L.\u00a0Bear, PhD, and <a href=\"https:\/\/www.uea.ac.uk\/computing\/people\/profile\/r-w-harvey\" >Prof Richard Harvey<\/a> of UEA\u2019s\u00a0<a href=\"https:\/\/www.uea.ac.uk\/computing\" >School of Computing Sciences<\/a>, can be applied \u201cany place where the audio isn\u2019t good enough to determine what people are saying,\u201d Bear said. 
Those include criminal investigations, entertainment, and especially environments with high levels of noise, such as cars or aircraft cockpits, she said.<\/p>\n<p>Bear said unique\u00a0problems with determining speech\u00a0arise when sound isn\u2019t available &#8212; such as on video footage &#8212; or when the audio is inadequate and there aren\u2019t clues to give the context of a conversation. The same goes for those ubiquitous annoying videos with music that masks the speech. The sounds \u2018\/p\/,\u2019 \u2018\/b\/,\u2019 and \u2018\/m\/\u2019 all look similar on the lips, but the new machine lip-reading classification technology can differentiate between these sounds for a more accurate translation.<\/p>\n<p>\u201cWe are still learning the science of visual\u00a0speech and what it is people need to know to create a fool-proof recognition model for lip-reading, but this classification system improves upon previous lip-reading methods by using a novel training method for the classifiers,&#8221; said Bear.<\/p>\n<p>\u201cLip-reading is one of the most challenging problems in artificial intelligence, so it\u2019s great to make progress on one of the trickier aspects, which is how to train machines to recognize the appearance and shape of human lips,\u201d said Harvey.<\/p>\n<p>The research, part of a three-year project, was supported by the Engineering and Physical Sciences Research Council (EPSRC). It will be presented at the International Conference on Acoustics, Speech and Signal Processing (ICASSP)\u00a0in Shanghai.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>New lip-reading technology developed at the University of East Anglia&nbsp;could help in solving crimes and provide communication assistance for people with hearing and speech impairments. 
The visual speech recognition&nbsp;technology, created by Helen L.&nbsp;Bear, PhD, and Prof Richard Harvey of UEA&rsquo;s&nbsp;School of Computing Sciences, can be applied &ldquo;any place where the audio isn&rsquo;t good enough to [&#8230;]<\/p>\n","protected":false},"author":13,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[46,43],"tags":[],"class_list":["post-6469","post","type-post","status-publish","format-standard","hentry","category-airobotics","category-news"],"_links":{"self":[{"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/posts\/6469"}],"collection":[{"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/users\/13"}],"replies":[{"embeddable":true,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/comments?post=6469"}],"version-history":[{"count":3,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/posts\/6469\/revisions"}],"predecessor-version":[{"id":6496,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/posts\/6469\/revisions\/6496"}],"wp:attachment":[{"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/media?parent=6469"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/categories?post=6469"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hoo.central12.com\/fugic\/wp-json\/wp\/v2\/tags?post=6469"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}