This app lets autonomous video drones with facial recognition target persons

Creating a selfie video with a drone and an app (credit: Neurala)

Robotics company Neurala has combined facial-recognition and drone-control mobile software in an iOS/Android app called “Selfie Dronie” that enables low-cost Parrot Bebop and Bebop 2 drones to take hands-free videos and follow a subject autonomously.

To create a video, you simply select the person or object in the app. The drone then flies an arc around the subject to take a video selfie, moving with the person as it goes, or zooms upward for a dramatic aerial shot in “dronie” mode.


Neurala | This video demonstrates Neurala’s target-following technology

The app replaces remote-control hardware and GPS-based smartphone control: once the target person is designated, the drone operates autonomously.

Neurala explains that its Neurala Intelligence Engine (NIE) can immediately learn to recognize an object using an ordinary camera. Then, as the object moves, Neurala’s deep learning algorithms learn more about the object in real time and in different environments, comparing these observations to other things the system has learned in the past — going beyond conventional deep-learning visual processing, which requires extensive training beforehand.
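Neurala has not published the NIE’s internals, but the behavior described above, learning a designated target from a single frame and then refining that model as new views arrive, is the basic shape of online appearance learning. Here is a minimal, hypothetical sketch of the idea in Python (the histogram features and update rule are illustrative stand-ins, not Neurala’s method):

    import numpy as np

    def extract_features(patch):
        """Stand-in feature extractor: a normalized color histogram.
        (A production system would use a learned deep-network embedding.)"""
        hist, _ = np.histogramdd(patch.reshape(-1, 3), bins=(8, 8, 8),
                                 range=((0, 256),) * 3)
        v = hist.ravel().astype(float)
        return v / (np.linalg.norm(v) + 1e-8)

    class OnlineTargetModel:
        """Keeps a running appearance template for the designated target."""
        def __init__(self, first_patch, learning_rate=0.05):
            self.template = extract_features(first_patch)
            self.lr = learning_rate

        def score(self, candidate_patch):
            # Cosine similarity between a candidate patch and the template.
            return float(extract_features(candidate_patch) @ self.template)

        def update(self, matched_patch):
            # Blend new evidence into the template so the model keeps adapting
            # as lighting, pose, and background change from frame to frame.
            f = extract_features(matched_patch)
            self.template = (1 - self.lr) * self.template + self.lr * f
            self.template /= np.linalg.norm(self.template) + 1e-8

Each video frame would be scanned for the highest-scoring patch, and that patch fed back through update(), which is what lets such a tracker keep learning “in real time and in different environments.”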

Based on Mars rover technology

Neurala says NASA funded it in October to commercialize its autonomous navigation, object recognition, and obstacle-avoidance software, developed for planetary exploration robots such as the Curiosity rover, and to apply it in real-world situations on Earth to self-driving cars, home robots, and autonomous drones.

Neurala says what makes its software unique is its use of deep learning and passive sensors, instead of “expensive and power-hungry active systems,” such as radar and LIDAR, used in most prototype self-driving vehicles.

Of course, it’s a small step from this technology to surveillance drones with facial recognition and autonomous weaponized unmanned aerial vehicles (see “The proposed ban on offensive autonomous weapons is unrealistic and dangerous” and “Why we really should ban autonomous weapons: a response”), especially given the recent news from Paris and Brussels and current terrorist threats directed at the U.S. and other countries.

Pigeons diagnose breast cancer on X-rays as well as radiologists

The pigeons’ training environment included a food pellet dispenser, a touch-sensitive screen which projected the medical image, as well as blue and yellow choice buttons on either side of the image. Pecks to those buttons and to the screen were automatically recorded. (credit: Levenson RM et al./PloS)

“Pigeons do just as well as humans in categorizing digitized slides and mammograms of benign and malignant human breast tissue,” said Richard Levenson, professor of pathology and laboratory medicine at UC Davis Health System and lead author of a new open-access study in PLoS One by researchers at the University of California, Davis and The University of Iowa.

“The pigeons were able to generalize what they had learned, so that when we showed them a completely new set of normal and cancerous digitized slides, they correctly identified them,” Levenson  said. “The pigeons also learned to correctly identify cancer-relevant microcalcifications on mammograms, but they had a tougher time classifying suspicious masses on mammograms — a task that is extremely difficult, even for skilled human observers.”

Although a pigeon’s brain is no bigger than the tip of an index finger, the neural pathways involved operate in ways very similar to those at work in the human brain. “Research over the past 50 years has shown that pigeons can distinguish identities and emotional expressions on human faces, letters of the alphabet, misshapen pharmaceutical capsules, and even paintings by Monet vs. Picasso,” said Edward Wasserman, professor of psychological and brain sciences at The University of Iowa and co-author of the study. “Their visual memory is equally impressive, with a proven recall of more than 1,800 images.”

Pigeons rival radiologists at discriminating breast cancer

Examples of benign (left) and malignant (right) breast specimens stained with hematoxylin and eosin, at different magnifications. The birds were remarkably adept at discriminating between benign and malignant breast cancer slides at all magnifications, a task that can perplex inexperienced human observers, who typically require considerable training to attain mastery. (credit: Levenson RM et al./PloS)

For the study, each pigeon learned to discriminate cancerous from non-cancerous images and slides using traditional “operant conditioning,” a technique in which a bird was rewarded only when a correct selection was made; incorrect selections were not rewarded and prompted correction trials. Training with stained pathology slides included a large set of benign and cancerous samples from routine cases at UC Davis Medical Center.

“The birds were remarkably adept at discriminating between benign and malignant breast cancer slides at all magnifications, a task that can perplex inexperienced human observers, who typically require considerable training to attain mastery,” Levenson said. He said the pigeons achieved nearly 85 percent correct within 15 days.

Flock-sourcing: 99 percent accuracy

“When we showed a cohort of four birds a set of uncompressed images, an approach known as ‘flock-sourcing,’ the group’s accuracy level reached an amazing 99 percent correct, higher than that achieved by any of the four individual birds,” said Wasserman, who has conducted studies on pigeons for more than 40 years.
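The paper’s exact pooling rule isn’t spelled out here, but the “flock-sourcing” effect, several independent observers combined into one more accurate decision, can be illustrated with a simple vote-averaging simulation (the 85 percent figure comes from the article; the independence assumption and the 0.5 threshold are mine):

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_bird(truth, accuracy):
        """One bird's benign/malignant calls: correct with probability
        `accuracy`, otherwise flipped."""
        correct = rng.random(truth.shape) < accuracy
        return np.where(correct, truth, 1 - truth)

    truth = rng.integers(0, 2, size=10_000)            # 0 = benign, 1 = malignant
    flock = np.stack([simulate_bird(truth, 0.85) for _ in range(4)])

    # Pool the four calls: average the votes and threshold at 0.5.
    pooled = (flock.mean(axis=0) >= 0.5).astype(int)

    print("single-bird accuracy:", (flock[0] == truth).mean())
    print("flock accuracy:      ", (pooled == truth).mean())

With fully independent errors the pooled accuracy climbs well above any single bird’s, which is the qualitative effect the study reports; the real birds’ errors are not perfectly independent, so this simulation is only illustrative.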

The birds, however, had difficulty evaluating the malignant potential of breast masses (without microcalcifications) detected on mammograms, a task the authors acknowledge as “very challenging.”

After years of education and training, physicians can sometimes struggle with the interpretation of microscope slides and mammograms. Levenson, a pathologist who studies artificial intelligence for image analysis and other applications in biology and medicine, believes there is considerable room for enhancing the process.

“While new technologies are constantly being designed to enhance image acquisition, processing, and display, these potential advances need to be validated using trained observers to monitor quality and reliability,” Levenson said. “This is a difficult, time-consuming, and expensive process that requires the recruitment of clinicians as subjects for these relatively mundane tasks. Pigeons’ sensitivity to diagnostically salient features in medical images suggests that they can provide reliable feedback on many variables at play in the production, manipulation, and viewing of these diagnostically crucial tools, and can assist researchers and engineers as they continue to innovate.”

This work also suggests that pigeons, with their remarkable ability to discriminate between complex visual images, could be put to good use as trained medical image observers, helping researchers explore image quality and the impact of color, contrast, brightness, and image compression artifacts on diagnostic performance.


Victor Navarro | Pigeons (Columba livia) as Trainable Observers of Pathology and Radiology Breast Cancer Images


Abstract of Pigeons (Columba livia) as Trainable Observers of Pathology and Radiology Breast Cancer Images

Pathologists and radiologists spend years acquiring and refining their medically essential visual skills, so it is of considerable interest to understand how this process actually unfolds and what image features and properties are critical for accurate diagnostic performance. Key insights into human behavioral tasks can often be obtained by using appropriate animal models. We report here that pigeons (Columba livia)—which share many visual system properties with humans—can serve as promising surrogate observers of medical images, a capability not previously documented. The birds proved to have a remarkable ability to distinguish benign from malignant human breast histopathology after training with differential food reinforcement; even more importantly, the pigeons were able to generalize what they had learned when confronted with novel image sets. The birds’ histological accuracy, like that of humans, was modestly affected by the presence or absence of color as well as by degrees of image compression, but these impacts could be ameliorated with further training. Turning to radiology, the birds proved to be similarly capable of detecting cancer-relevant microcalcifications on mammogram images. However, when given a different (and for humans quite difficult) task—namely, classification of suspicious mammographic densities (masses)—the pigeons proved to be capable only of image memorization and were unable to successfully generalize when shown novel examples. The birds’ successes and difficulties suggest that pigeons are well-suited to help us better understand human medical image perception, and may also prove useful in performance assessment and development of medical imaging hardware, image processing, and image analysis tools.

Can humans empathize with robots? The knife test.

Examples of pictures of humans and robots in pain or perceived pain (credit: Toyohashi University of Technology)

Researchers at Toyohashi University of Technology and Kyoto University have found the first neurophysiological evidence of humans’ ability to empathize with robots in perceived pain — at least when it comes to losing a finger.

They monitored event-related electroencephalography (EEG) signals from 15 healthy adults who were observing pictures of either a human or robotic hand in painful or non-painful situations, such as a finger being cut by a knife.

They found that brain potentials related to humanoid robots in (perceived) pain were similar to those for humans in pain, with one exception: the ascending phase of the P3 wave (350-500 ms after stimulus presentation) did not show an effect for a robot in perceived pain, as it did for a human. The researchers suggest this could be explained by the perceived unnaturalness of robot hands being cut by knives.
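For readers unfamiliar with ERP analysis, the quantity being compared is simply the trial-averaged EEG voltage inside a fixed post-stimulus window. A minimal sketch of that computation (array shapes and condition names are hypothetical, not the study’s data):

    import numpy as np

    def p3_window_mean(epochs, times, t_start=0.350, t_end=0.500):
        """Mean amplitude of the event-related potential in the P3 window.

        epochs : array (n_trials, n_samples), EEG voltage at one electrode
        times  : array (n_samples,), seconds relative to stimulus onset
        """
        erp = epochs.mean(axis=0)                      # average over trials -> ERP
        window = (times >= t_start) & (times <= t_end)
        return erp[window].mean()

    # Hypothetical comparison of conditions:
    # p3_human_pain = p3_window_mean(human_pain_epochs, times)
    # p3_robot_pain = p3_window_mean(robot_pain_epochs, times)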

However, the researchers agree the experiment was not conclusive. What about a robot with a human-sized hand and thinner fingers? Or more natural-looking features (no wires or metal)? Or no actual fingers? Or an unfriendly robot?


Abstract of Measuring empathy for human and robot hand pain using electroencephalography

This study provides the first physiological evidence of humans’ ability to empathize with robot pain and highlights the difference in empathy for humans and robots. We performed electroencephalography in 15 healthy adults who observed either human- or robot-hand pictures in painful or non-painful situations such as a finger cut by a knife. We found that the descending phase of the P3 component was larger for the painful stimuli than the non-painful stimuli, regardless of whether the hand belonged to a human or robot. In contrast, the ascending phase of the P3 component at the frontal-central electrodes was increased by painful human stimuli but not painful robot stimuli, though the interaction of ANOVA was not significant, but marginal. These results suggest that we empathize with humanoid robots in late top-down processing similarly to human others. However, the beginning of the top-down process of empathy is weaker for robots than for humans.

IBM’s Watson shown to enhance human-computer co-creativity, support biologically inspired design

Using Watson for enhancing human-computer co-creativity (credit: Georgia Tech)

Georgia Institute of Technology researchers, working with student teams, trained a cloud-based version of IBM’s Watson called the Watson Engagement Advisor to provide answers to questions about biologically inspired design (biomimetics), a design paradigm that uses biological systems as analogues for inventing technological systems.

Ashok Goel, a professor at Georgia Tech’s School of Interactive Computing who conducts research on computational creativity, used this version of Watson as an “intelligent research assistant” to support teaching about biologically inspired design and computational creativity in the Georgia Tech CS4803/8803 class on Computational Creativity in Spring 2015. Goel found that Watson’s ability to retrieve natural-language information could allow a novice to quickly “train up” on complex topics and better determine whether an idea or hypothesis is worth pursuing.

An intelligent research assistant

In the form of a class project, the students fed Watson several hundred biology articles from Biologue, an interactive biology repository, and 1,200 question-answer pairs. The teams then posed questions to Watson about the research it had “learned” regarding big design challenges in areas such as engineering, architecture, systems, and computing.

Examples of questions:

“How do you make a better desalination process for consuming sea water?” (Animals have a variety of answers for this, such as how seagulls filter out seawater salt through special glands.)

“How can manufacturers develop better solar cells for long-term space travel?” One answer: Replicate how plants in harsh climates use high-temperature fibrous insulation material to regulate temperature.

Watson effectively acted as an intelligent sounding board to steer students through what would otherwise be a daunting task of parsing a wide volume of research that may fall outside their expertise.

This version of Watson also prompts users with alternate ways to ask questions for better results. Those results are packaged as a “treetop” where each answer is a “leaf” that varies in size based on its weighted importance. This was intended to allow the average user to navigate results more easily on a given topic.

Results from training the Watson AI system were packaged as a “treetop” where each answer is a “leaf” that varies in size based on its weighted importance. Each leaf is the starting point for a Q&A with Watson. (credit: Georgia Tech)
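Georgia Tech has not released the treetop visualization code; conceptually, though, each leaf is just an answer paired with a relevance weight that drives its display size. A hypothetical sketch of that data structure:

    from dataclasses import dataclass

    @dataclass
    class Leaf:
        """One answer in the 'treetop': text plus a relevance weight."""
        answer: str
        weight: float            # Watson's confidence / weighted importance

    def build_treetop(leaves, min_size=10, max_size=48):
        """Map each answer's weight to a display size, largest weight first."""
        ranked = sorted(leaves, key=lambda leaf: leaf.weight, reverse=True)
        lo = min(leaf.weight for leaf in ranked)
        hi = max(leaf.weight for leaf in ranked)
        span = (hi - lo) or 1.0
        return [(leaf, min_size + (max_size - min_size) * (leaf.weight - lo) / span)
                for leaf in ranked]

    # Example with made-up answers and weights:
    # build_treetop([Leaf("seagull salt glands", 0.82), Leaf("mangrove roots", 0.41)])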

“Imagine if you could ask Google a complicated question and it immediately responded with your answer — not just a list of links to manually open,” says Goel. “That’s what we did with Watson. Researchers are provided a quickly digestible visual map of the concepts relevant to the query and the degree to which they are relevant. We were able to add more semantic and contextual meaning to Watson to give some notion of a conversation with the AI.”

Georgia Tech’s Watson Engagement Advisor (credit: Georgia Tech)

Goel believes this approach to using Watson could assist professionals in a variety of fields by allowing them to ask questions and receive answers as quickly as in a natural conversation. He plans to investigate other areas with Watson such as online learning and healthcare.

The work was presented at the Association for the Advancement of Artificial Intelligence (AAAI) 2015 Fall Symposium on Cognitive Assistance in Government, Nov. 12–14, in Arlington, Va. and was published in Procs. AAAI 2015 Fall Symposium on Cognitive Assistance (open access).


Abstract of Using Watson for Enhancing Human-Computer Co-Creativity

We describe an experiment in using IBM’s Watson cognitive system to teach about human-computer co-creativity in a Georgia Tech Spring 2015 class on computational creativity. The project-based class used Watson to support biologically inspired design, a design paradigm that uses biological systems as analogues for inventing technological systems. The twenty-four students in the class self-organized into six teams of four students each, and developed semester-long projects that built on Watson to support biologically inspired design. In this paper, we describe this experiment in using Watson to teach about human-computer co-creativity, present one project in detail, and summarize the remaining five projects. We also draw lessons on building on Watson for (i) supporting biologically inspired design, and (ii) enhancing human-computer co-creativity.

Disney Research-CMU design tool helps novices design 3-D-printable robotic creatures

Digital designs for robotic creatures are shown on the left and the physical prototypes produced via 3-D printing are on the right (credit: Disney Research, Carnegie Mellon University)

Now you can design and build your own customized walking robot using a 3-D printer and off-the-shelf servo motors, with the help of a new DIY design tool developed by Disney Research and Carnegie Mellon University.

You can specify the shape, size, and number of legs for your robotic creature, using intuitive editing tools to interactively explore design alternatives. The system takes over much of the non-intuitive and tedious task of planning the robot’s motion, and ensures that your design can move the way you want without falling down. You can also alter your creature’s gait as desired.

Six robotic creatures designed with the Disney Research-CMU interactive design system: one biped, four quadrupeds, and one five-legged robot (credit: Disney Research, Carnegie Mellon University)

“Progress in rapid manufacturing technology is making it easier and easier to build customized robots, but designing a functioning robot remains a difficult challenge that requires an experienced engineer,” said Markus Gross, vice president of research for Disney Research. “Our new design system can bridge this gap and should be of great interest to technology enthusiasts and the maker community at large.”

The research team presented the system at SIGGRAPH Asia 2015, the ACM Conference on Computer Graphics and Interactive Techniques, in Kobe, Japan.

Design viewports

The design interface features two viewports: one that lets you edit the robot’s structure and motion and a second that displays how those changes would likely alter the robot’s behavior.

You can load an initial, skeletal description of the robot and the system then creates an initial geometry and places a motor at each joint position. You can then edit the robot’s structure, adding or removing motors, or adjust their position and orientation.

The researchers have developed an efficient optimization method that uses an approximate dynamics model to generate stable walking motions for robots with varying numbers of legs. In contrast to conventional methods that can require several minutes of computation time to generate motions, the process takes just a few seconds, enhancing the interactive nature of the design tool.

3-D printer-ready designs

Once the design process is complete, the system automatically generates 3-D geometry for all body parts, including connectors for the motors, which can then be sent to a 3-D printer for fabrication.
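Disney has not published its exporter, but the final step described here, handing generated geometry to a 3-D printer, typically means writing a standard mesh file such as STL. A minimal sketch of an ASCII STL writer (illustrative only, not the research team’s code):

    import numpy as np

    def write_ascii_stl(path, triangles, name="part"):
        """Write triangles (shape (n, 3, 3): n facets x 3 vertices x xyz)
        to an ASCII STL file that a slicer can turn into printer toolpaths."""
        with open(path, "w") as f:
            f.write(f"solid {name}\n")
            for tri in triangles:
                normal = np.cross(tri[1] - tri[0], tri[2] - tri[0])
                normal = normal / (np.linalg.norm(normal) or 1.0)
                f.write("  facet normal {:.6e} {:.6e} {:.6e}\n".format(*normal))
                f.write("    outer loop\n")
                for v in tri:
                    f.write("      vertex {:.6e} {:.6e} {:.6e}\n".format(*v))
                f.write("    endloop\n  endfacet\n")
            f.write(f"endsolid {name}\n")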

“In a test of creating two four-legged robots, it took only minutes to design these creatures, but hours to assemble them and days to produce parts on 3-D printers,” said Bernhard Thomaszewski, a research scientist at Disney Research. “It is both expensive and time-consuming to build a prototype — which underscores the importance of a design system [that] produces a final design without the need for building multiple physical iterations.”

The research team also included roboticists at ETH Zurich. For more information, see Interactive Design of 3D-Printable Robotic Creatures (open access) and visit the project website.


Disney Research | Interactive Design of 3D Printable Robotic Creatures

Google open-sources its TensorFlow machine learning system

Google announced today that it will make its new second-generation “TensorFlow” machine-learning system open source.

That means programmers can now achieve some of what Google engineers have done, using TensorFlow — from speech recognition in the Google app, to Smart Reply in Inbox, to search in Google Photos, to reading a sign in a foreign language using Google Translate.

Google says TensorFlow is a highly scalable machine learning system — it can run on a single smartphone or across thousands of computers in datacenters. The idea is to accelerate research on machine learning, “or wherever researchers are trying to make sense of very complex data — everything from protein folding to crunching astronomy data.”
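For readers who want a feel for what TensorFlow code looks like, here is a minimal example of defining and fitting a tiny neural network (written against the current Keras API; the 2015 release used an explicit graph-and-session style, and the toy dataset below is made up):

    import tensorflow as tf

    # A tiny dense network; the same code runs on a phone-class CPU, a GPU,
    # or a datacenter cluster, which is the portability Google emphasizes.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

    x = tf.random.normal((256, 4))
    y = tf.reduce_sum(x, axis=1, keepdims=True)        # toy regression target
    model.fit(x, y, epochs=3, verbose=0)
    print(model.predict(x[:2], verbose=0).shape)       # (2, 1)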

This blog post by Jeff Dean, Senior Google Fellow, and Rajat Monga, Technical Lead, provides a technical overview. “Our deep learning researchers all use TensorFlow in their experiments. Our engineers use it to infuse Google Search with signals derived from deep neural networks, and to power the magic features of tomorrow,” they note.

Semantic Scholar uses AI to transform scientific search

Example of the top return in a Semantic Scholar search for “quantum computer silicon” constrained to overviews (52 out of 1,397 selected papers since 1989) (credit: AI2)

The Allen Institute for Artificial Intelligence (AI2) launched Monday (Nov. 2) its free Semantic Scholar service, intended to allow scientific researchers to quickly cull through the millions of scientific papers published each year to find those most relevant to their work.

Semantic Scholar leverages AI2’s expertise in data mining, natural-language processing, and computer vision, according to Oren Etzioni, PhD, CEO at AI2. At launch, the system searches more than three million computer science papers, and will add scientific categories on an ongoing basis.

With Semantic Scholar, computer scientists can:

  • Home in quickly on what they are looking for, with advanced selection filtering tools. Researchers can filter search results by author, publication, topic, and date published. This gets the researcher to the most relevant result in the fastest way possible, and reduces information overload.
  • Instantly access a paper’s figures and findings. Unique among scholarly search engines, this feature pulls out the graphic results, which are often what a researcher is really looking for.
  • Jump to cited papers and references and see how many researchers have cited each paper, a good way to determine citation influence and usefulness.
  • Be prompted with key phrases within each paper to winnow the search further.

Example of figures and tables extracted from the first document discovered (“Quantum computation and quantum information”) in the search above (credit: AI2)

How Semantic Scholar works

Using machine reading and vision methods, Semantic Scholar crawls the web, finding all PDFs of publicly available scientific papers on computer science topics, extracting both text and diagrams/captions, and indexing it all for future contextual retrieval.
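The production pipeline is far richer than this, but the core of “indexing it all for future contextual retrieval” is an inverted index from terms to documents. A stripped-down sketch (the document format here is hypothetical):

    from collections import defaultdict

    def build_index(docs):
        """Minimal inverted index: token -> set of document ids.
        `docs` maps a document id to its extracted text."""
        index = defaultdict(set)
        for doc_id, text in docs.items():
            for token in text.lower().split():
                index[token].add(doc_id)
        return index

    def lookup(index, query):
        """Return ids of documents containing every query token."""
        hits = [index.get(token, set()) for token in query.lower().split()]
        return set.intersection(*hits) if hits else set()

    # index = build_index({"paper-1": "quantum computation in silicon"})
    # lookup(index, "quantum silicon")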

Using natural language processing, the system identifies the top papers, extracts filtering information and topics, and sorts by what type of paper and how influential its citations are. It provides the scientist with a simple user interface (optimized for mobile) that maps to academic researchers’ expectations.

Filters such as topic, date of publication, author and where published are built in. It includes smart, contextual recommendations for further keyword filtering as well. Together, these search and discovery tools provide researchers with a quick way to separate wheat from chaff, and to find relevant papers in areas and topics that previously might not have occurred to them.
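The filtering and ranking described above can be illustrated on simple paper-metadata records; the field names and the use of raw citation counts as a stand-in for “citation influence” are my simplifications, not AI2’s schema:

    from dataclasses import dataclass, field

    @dataclass
    class Paper:
        title: str
        authors: list
        year: int
        venue: str
        topics: list = field(default_factory=list)
        citations: int = 0        # stand-in for citation influence

    def search(papers, keyword, author=None, year_from=None, topic=None):
        """Filter by metadata, then rank by citation count (descending)."""
        hits = [p for p in papers if keyword.lower() in p.title.lower()]
        if author:
            hits = [p for p in hits if author in p.authors]
        if year_from:
            hits = [p for p in hits if p.year >= year_from]
        if topic:
            hits = [p for p in hits if topic in p.topics]
        return sorted(hits, key=lambda p: p.citations, reverse=True)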

Semantic Scholar builds from the foundation of other research-paper search applications such as Google Scholar, adding AI methods to overcome information overload.

“Semantic Scholar is a first step toward AI-based discovery engines that will be able to connect the dots between disparate studies to identify novel hypotheses and suggest experiments that would otherwise be missed,” said Etzioni. “Our goal is to enable researchers to find answers to some of science’s thorniest problems.”

MOTOBOT: the first autonomous motorcycle-riding humanoid robot

MOTOBOT Ver. 1 (credit: Yamaha)

Yamaha introduced MOTOBOT Ver. 1, the first autonomous motorcycle-riding humanoid robot, at the Tokyo Motor Show Wednesday (Oct. 28). A fusion of Yamaha’s motorcycle and robotics technology, MOTOBOT rides an unmodified Yamaha YZF-R1M; Yamaha says a future version of the robot will ride it around a racetrack at more than 200 km/h (124 mph).

“We want to apply the fundamental technology and know-how gained in the process of this challenge to the creation of advanced rider safety and rider-support systems and put them to use in our current businesses, as well as using them to pioneer new lines of business,” says Yamaha in its press release.


Yamaha | New Yamaha MotoBot Concept Ver. 1

This robot will out-walk and out-run you one day

A walk in the park. Oregon State University engineers have successfully field-tested their walking robot, ATRIAS. (credit: Oregon State University)

Imagine robots that can walk and run like humans — or better than humans. Engineers at Oregon State University (OSU) and Technische Universität München may have achieved a major step in that direction with their “spring-mass” implementation of human and animal walking dynamics, allowing robots to maintain balance and efficiency of motion in difficult environments.

Studies done with OSU’s ATRIAS robot model, which incorporates the spring-mass theory, show that it is three times more energy-efficient than any other human-sized bipedal robot.
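The “spring-mass” template the article refers to is the spring-loaded inverted pendulum (SLIP): a point mass bouncing on a massless springy leg. ATRIAS’s actual controllers are considerably more involved, but the stance-phase dynamics of that template can be integrated in a few lines (parameter and initial-condition values below are illustrative, not the robot’s):

    import numpy as np
    from scipy.integrate import solve_ivp

    m, k, l0, g = 80.0, 20_000.0, 1.0, 9.81     # mass, leg stiffness, rest length, gravity

    def stance_dynamics(t, state, foot_x=0.0):
        """Point mass on a springy leg whose foot is planted at (foot_x, 0)."""
        x, y, vx, vy = state
        dx, dy = x - foot_x, y
        l = np.hypot(dx, dy)                    # current leg length
        a = k * (l0 - l) / m                    # spring acceleration along the leg
        return [vx, vy, a * dx / l, a * dy / l - g]

    # One stance phase, starting from touchdown conditions.
    sol = solve_ivp(stance_dynamics, (0.0, 0.35), [-0.2, 0.97, 1.1, -0.3], max_step=1e-3)
    print("mass height at end of stance:", sol.y[1, -1])

Spring-mass walking alternates such stance phases with leg swings; controlling the robot so its body follows this template is what the paper calls exciting engineered passive dynamics.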

“I’m confident that this is the future of legged robotic locomotion,” said Jonathan Hurst, an OSU professor of mechanical engineering and director of the Dynamic Robotics Laboratory in the OSU College of Engineering. “We’ve basically demonstrated the fundamental science of how humans walk,” he said.

When further refined and perfected, walking and running robots may work in the armed forces, as firefighters, in factories, or doing ordinary household chores, he said. “This could become as big as the automotive industry,” Hurst added.

Wearable robots and prostheses too

Aspects of the locomotion technology may also assist people with disabilities, said Daniel Renjewski of the Technische Universität München, the lead author on the study published in IEEE Transactions on Robotics. “Robots are already used for gait training, and we see the first commercial exoskeletons on the market,” he said. “This enables us to build an entirely new class of wearable robots and prostheses that could allow the user to regain a natural walking gait.”

Topology and key technical features of the ATRIAS robot. ATRIAS has six electric motors powered by a lithium polymer battery. It can take impacts and retain its balance and walk over rough and bumpy terrain. Power electronics, batteries, and control computer are located inside the trunk. (credit: Daniel Renjewski et al./IEEE Transactions on Robotics)

In continued research, work will be done to improve steering, efficiency, leg configuration, inertial actuation, robust operation, external sensing, transmissions and actuators, and other technologies.

The work has been supported by the National Science Foundation, the Defense Advanced Research Projects Agency, and the Human Frontier Science Program.


Oregon State University | ATRIAS Bipedal Robot: Takes a Walk in the Park


Abstract of Exciting Engineered Passive Dynamics in a Bipedal Robot

A common approach in designing legged robots is to build fully actuated machines and control the machine dynamics entirely in software, carefully avoiding impacts and expending a lot of energy. However, these machines are outperformed by their human and animal counterparts. Animals achieve their impressive agility, efficiency, and robustness through a close integration of passive dynamics, implemented through mechanical components, and neural control. Robots can benefit from this same integrated approach, but a strong theoretical framework is required to design the passive dynamics of a machine and exploit them for control. For this framework, we use a bipedal spring-mass model, which has been shown to approximate the dynamics of human locomotion. This paper reports the first implementation of spring-mass walking on a bipedal robot. We present the use of template dynamics as a control objective exploiting the engineered passive spring-mass dynamics of the ATRIAS robot. The results highlight the benefits of combining passive dynamics with dynamics-based control and open up a library of spring-mass model-based control strategies for dynamic gait control of robots.

How to fall gracefully if you’re a robot


Georgia Tech | Algorithm allows robot to fall gracefully

Researchers at Georgia Tech are teaching robots how to fall with grace and without serious damage.

This is becoming important as costly robots become more common in manufacturing, healthcare, and domestic tasks.

Ph.D. graduate Sehoon Ha and Professor Karen Liu developed a new algorithm that tells a robot how to react to a wide variety of falls, from a single step to recover from a gentle nudge, to a rolling motion that breaks a high-speed fall. The idea is to learn the best sequence of movements that will slow the robot’s momentum and minimize the damage or injury it might cause to itself or others while falling.

“Our work unified existing research about how to teach robots to fall by giving them a tool to automatically determine the total number of contacts (how many hands shoved it, for example), the order of contacts, and the position and timing of those contacts,” said Ha, now a postdoctoral associate at Disney Research Pittsburgh. “All of that impacts the potential of a fall and changes the robot’s response.”

The algorithm was validated in physics simulation and experimentally tested on a BioloidGP humanoid.

With the latest finding, Ha builds upon Liu’s previous research that studied how cats modify their bodies in the midst of a fall. Liu knew from that work that one of the most important factors in a fall is the angle of the landing.

“From previous work, we knew a robot had the computational know-how to achieve a softer landing, but it didn’t have the hardware to move quickly enough like a cat,” Liu said. “Our new planning algorithm takes into account the hardware constraints and the capabilities of the robot, and suggests a sequence of contacts so the robot gradually can slow itself down.”
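The published algorithm plans whole-body motion by chaining inverted-pendulum phases; as a much-simplified illustration of why planning a sequence of contacts softens a fall, here is a toy dynamic program that splits the falling momentum across several impacts to minimize the sum of squared impulses (the cost model and discretization are my assumptions, not the authors’):

    import numpy as np

    def plan_contacts(initial_momentum, max_contacts, levels=101):
        """Split `initial_momentum` across up to `max_contacts` contact events,
        minimizing the sum of squared impulses (a stand-in for per-impact damage)."""
        grid = np.linspace(0.0, initial_momentum, levels)     # remaining momentum levels
        cost = np.full((max_contacts + 1, levels), np.inf)    # cost[k, i]: best cost to
        cost[0, 0] = 0.0                                      # dissipate grid[i] in k contacts
        choice = np.zeros((max_contacts + 1, levels), dtype=int)
        for k in range(1, max_contacts + 1):
            for i in range(levels):
                for j in range(i + 1):                        # dissipate grid[i] - grid[j] now
                    c = (grid[i] - grid[j]) ** 2 + cost[k - 1, j]
                    if c < cost[k, i]:
                        cost[k, i], choice[k, i] = c, j
        # Walk back through the choices to recover the impulse sequence.
        plan, i = [], levels - 1
        for k in range(max_contacts, 0, -1):
            j = choice[k, i]
            plan.append(grid[i] - grid[j])
            i = j
        return plan

    print(plan_contacts(10.0, max_contacts=3))   # roughly equal impulses

Spreading the momentum over several roughly equal impacts is the “gradually slow itself down” behavior Liu describes; the real planner additionally decides which body parts make contact, where, and when.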

They suggest robots may soon fall more gracefully than people — and possibly even cats.


DARPA TV | A Celebration of Risk (a.k.a., Robots Take a Spill)


Abstract of Multiple Contact Planning for Minimizing Damage of Humanoid Falls

This paper introduces a new planning algorithm to minimize the damage of humanoid falls by utilizing multiple contact points. Given an unstable initial state of the robot, our approach plans for the optimal sequence of contact points such that the initial momentum is dissipated with minimal impacts on the robot. Instead of switching among a collection of individual control strategies, we propose a general algorithm which plans for appropriate responses to a wide variety of falls, from a single step to recover a gentle nudge, to a rolling motion to break a high-speed fall. Our algorithm transforms the falling problem into a sequence of inverted pendulum problems and use dynamic programming to solve the optimization efficiently. The planning algorithm is validated in physics simulation and experimentally tested on a BioloidGP humanoid.