The Holy Grail: Machine Learning + Extreme Robotics

Two experts on robotics and machine learning will reveal breakthrough developments in humanlike robots and machine learning at the annual SXSW conference in Austin next March, in a proposed* panel called “The Holy Grail: Machine Learning + Extreme Robotics.”

Attendees will interact with Sophia, Hanson Robotics’ forthcoming state-of-the-art female robot, who will join the panel as a participant, spontaneously tracking human faces, listening to speech, and generating natural-language responses while taking part in the dialogue about the potential of genius machines.

This conversation on the future of advanced robotics combined with machine learning and cognitive science will feature visionary Hanson Robotics founder/CEO David Hanson and Microsoft executive Jim Kankanias, who heads Program Management for Information Management and Machine Learning in the Cloud + Enterprise Division at Microsoft. The panel will be moderated by Hanson Robotics consultant Eric Shuss.

Stay tuned here for updates.

* Contingent on getting enough votes by end of day Friday, Sept. 4 (cast your vote here — requires registration).

AI authors crowdsourced interactive fiction


GVU Center at Georgia Tech | A new Georgia Tech artificial intelligence system develops interactive stories through crowdsourced data for more robust fiction. Here (in a simplified example), the AI replicates a typical first date to the movies (user choices are in red), complete with loud theater talkers and the arm-over-shoulder movie move.

Georgia Institute of Technology researchers have developed a new artificially intelligent system that crowdsources plots for interactive stories, which are popular in video games and let players choose different branching story options.

“Our open interactive narrative system learns genre models from crowdsourced example stories so that the player can perform different actions and still receive a coherent story experience,” says Mark Riedl, lead investigator and associate professor of interactive computing at Georgia Tech.

With potentially limitless crowdsourced plot points, the system could allow for more creative stories and an easier method for interactive narrative generation. For example, imagine a Star Wars game using online fan fiction, generating paths for a player to take.

Current AI models for games have a limited number of scenarios, no matter what a player chooses. They depend on a dataset already programmed into a model by experts.
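
To make the contrast concrete, here is a minimal sketch, in Python, of the kind of plot-graph representation such a system can learn from crowdsourced example stories (purely illustrative; the event names, graph structure, and code below are hypothetical, not Georgia Tech’s): events become nodes, precedence constraints become edges, and the player may choose any event whose prerequisites have already occurred, so every path through the graph stays coherent.

```python
# Illustrative plot-graph sketch (hypothetical; not the Scheherazade IF code).
# Events mined from crowdsourced example stories are nodes; precedence edges
# constrain their order. The player can pick any event whose prerequisites
# have already happened, so every playthrough remains a coherent story.

from dataclasses import dataclass

@dataclass
class PlotGraph:
    events: set
    precedes: set  # pairs (a, b): event a must happen before event b

    def available(self, history):
        """Events not yet taken whose prerequisites all appear in the history."""
        done = set(history)
        return sorted(
            e for e in self.events - done
            if all(a in done for (a, b) in self.precedes if b == e)
        )

# Toy "movie date" graph in the spirit of the article's example.
graph = PlotGraph(
    events={"buy tickets", "buy popcorn", "find seats",
            "shush loud talkers", "arm over shoulder"},
    precedes={("buy tickets", "find seats"),
              ("find seats", "shush loud talkers"),
              ("find seats", "arm over shoulder")},
)

history = []
while True:
    choices = graph.available(history)
    if not choices:
        break
    history.append(choices[0])  # a real game would let the player choose
print(" -> ".join(history))
```

Crowdsourcing enters where this sketch hard-codes the graph: the system learns the events and their ordering constraints from many example stories instead of having experts program them in.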

Near human-level authoring

The Scheherazade-IF Architecture (credit: Matthew Guzdial et al.)

A test* of the AI system, called Scheherazade IF (Interactive Fiction) — a reference to the fabled Arabic queen and storyteller — showed that it can achieve near human-level authoring, the researchers claim.

“When enough data is available and that data sufficiently covers all aspects of the game experience, the system was able to meet or come close to meeting human performance in creating a playable story,” says Riedl.

The creators say they are seeking to inject more creative scenarios into the system. Right now, the AI plays it safe with the crowdsourced content, producing what one might expect in different genres. But opportunities exist to train Scheherazade (as its namesake implies) to surprise and immerse players in future interactive experiences.

This research could support not only online storytelling for entertainment, but also digital storytelling used in online course education or corporate training.

* The researchers evaluated the AI system by measuring the number of “commonsense” errors (e.g. scenes out of sequence) found by players, as well as players’ subjective experiences for things such as enjoyment and coherence of story.

Three test groups played through two interactive stories — a bank robbery and a date to the movies — to measure the performance of three narrative generators: the AI story generator, a human-programmed generator, and a random story generator.

For the bank robbery story, the AI system performed identically to the human-programmed generator in terms of errors reported by players, with a median of three each. The random generator produced a median of 12.5 errors reported.

For the movie date scenario, the median values of errors reported were three (human), five (AI) and 15 (random). This shows the AI system performing at 83.3 percent of the human-programmed generator, presumably measured as improvement over the random baseline: (15 - 5)/(15 - 3) ≈ 0.833.

As for the play experience itself, the human and AI generators compared favorably for coherence, player involvement, enjoyment and story recognition.


Abstract of Crowdsourcing Open Interactive Narrative

Interactive narrative is a form of digital interactive experience in which users influence a dramatic storyline through their actions. Artificial intelligence approaches to interactive narrative use a domain model to determine how the narrative should unfold based on user actions. However, domain models for interactive narrative require artificial intelligence and knowledge representation expertise. We present open interactive narrative, the problem of generating an interactive narrative experience about any possible topic. We present an open interactive narrative system — Scheherazade IF — that learns a domain model from crowdsourced example stories so that the player can perform different actions and still receive a coherent story experience. We report on an evaluation of our system showing near-human level authoring.

Completely paralyzed man voluntarily moves his legs, UCLA scientists report

Mark Pollock and trainer Simon O’Donnell (credit: Mark Pollock)

A 39-year-old man who had been completely paralyzed for four years was able to voluntarily control his leg muscles and take thousands of steps in a “robotic exoskeleton” device during five days of training, and for two weeks afterward, UCLA scientists report.

This is the first time that a person with chronic, complete paralysis has regained enough voluntary control to actively work with a robotic device designed to enhance mobility.

In addition to the robotic device, the man was aided by a novel noninvasive spinal stimulation technique that does not require surgery. His leg movements also resulted in other health benefits, including improved cardiovascular function and muscle tone.

The new approach combines a battery-powered wearable bionic suit that enables people to move their legs in a step-like fashion with a noninvasive procedure that the same researchers had previously used to enable five men who had been completely paralyzed to move their legs in a rhythmic motion.

That earlier achievement is believed to be the first time people who are completely paralyzed have been able to relearn voluntary leg movements without surgery. (The researchers do not describe the achievement as “walking” because no one who is completely paralyzed has independently walked in the absence of the robotic device and electrical stimulation of the spinal cord.)

Mountain racing blind? No problem. Paralyzed? “Iron ElectriRx” man is acing that too

In the latest study, the researchers treated Mark Pollock, who lost his sight in 1998 and later became the first blind man to race to the South Pole. In 2010, Pollock fell from a second-story window and suffered a spinal cord injury that left him paralyzed from the waist down.

At UCLA, outfitted with the robotic exoskeleton, Pollock made substantial progress after receiving a few weeks of physical training without spinal stimulation and then just five days of spinal stimulation training in a one-week span, for about an hour a day.

“In the last few weeks of the trial, my heart rate hit 138 beats per minute,” Pollock said. “This is an aerobic training zone, a rate I haven’t even come close to since being paralyzed while walking in the robot alone, without these interventions. That was a very exciting, emotional moment for me, having spent my whole adult life before breaking my back as an athlete.”

Even in the years since he lost his sight, Pollock has competed in ultra-endurance races across deserts, mountains and the polar ice caps. He also won silver and bronze medals in rowing at the Commonwealth Games and launched a motivational speaking business.

“Stepping with the stimulation and having my heart rate increase, along with the awareness of my legs under me, was addictive. I wanted more,” he said.

The research was published by the IEEE Engineering in Medicine and Biology Society, the world’s largest society of biomedical engineers.

Expanding the clinical toolbox for the paralyzed

“It will be difficult to get people with complete paralysis to walk completely independently, but even if they don’t accomplish that, the fact they can assist themselves in walking will greatly improve their overall health and quality of life,” said V. Reggie Edgerton, senior author of the research and a UCLA distinguished professor of integrative biology and physiology, neurobiology and neurosurgery.

The procedure used a robotic device manufactured by Richmond, California-based Ekso Bionics that captures data enabling the research team to determine how much the subject is moving his own limbs, as opposed to being aided by the device.

“If the robot does all the work, the subject becomes passive and the nervous system shuts down,” Edgerton said.

The data showed that Pollock was actively flexing his left knee and raising his left leg and that during and after the electrical stimulation, he was able to voluntarily assist the robot during stepping; it wasn’t just the robotic device doing the work.
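
The article doesn’t publish the analysis itself, but the logic it describes can be sketched simply (hypothetical Python; the torque-decomposition assumption and all names are illustrative, not Ekso Bionics’ software): if the device logs both the total torque at a joint and the torque its own motors supplied, the residual estimates the subject’s voluntary contribution to each step.

```python
# Hypothetical sketch (not Ekso Bionics' software) of the measurement logic
# described above: assuming total joint torque = motor torque + subject torque,
# the residual estimates how much of each step the subject drove himself.

def voluntary_fraction(total_torque, motor_torque):
    """Fraction of total joint effort supplied by the subject over a session."""
    subject = [t - m for t, m in zip(total_torque, motor_torque)]
    total = sum(abs(t) for t in total_torque)
    return sum(abs(s) for s in subject) / total if total else 0.0

# Toy per-step readings (newton-meters, invented for illustration):
print(voluntary_fraction([10.0, 12.0, 9.5], [7.0, 8.0, 7.5]))  # ~0.29
```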

“For people who are severely injured but not completely paralyzed, there’s every reason to believe that they will have the opportunity to use these types of interventions to further improve their level of function. They’re likely to improve even more,” Edgerton said. “We need to expand the clinical toolbox available for people with spinal cord injury and other diseases.”


Edgerton Lab, University of California Los Angeles | Paralyzed subject training in Ekso during spinal cord stimulation

The future of spinal-cord research

Edgerton and his research team have received many awards and honors for their research, including Popular Mechanics’ 2011 Breakthrough Award.

“Dr. Edgerton is a pioneer and we are encouraged by these findings to broaden our understanding of possible treatment options for paralysis,” said Peter Wilderotter, president and CEO of the Christopher and Dana Reeve Foundation, which helped fund the research. “Given the complexities of a spinal cord injury, there will be no one-size-fits-all cure but rather a combination of different interventions to achieve functional recovery.

“What we are seeing right now in the field of spinal cord research is a surge of momentum with new directions and approaches to remind the spine of its potential even years after an injury,” he said.

NeuroRecovery Technologies, a medical technology company Edgerton founded, designs and develops devices that help restore movement in patients with paralysis. The company provided the device used to stimulate the spinal cord in combination with the Ekso in this research.

Edgerton said although it likely will be years before the new approaches are widely available, he now believes it is possible to significantly improve quality of life for patients with severe spinal cord injuries, and to help them recover multiple body functions.

In addition to the Reeve foundation, the research was funded by the National Institutes of Health’s National Institute of Biomedical Imaging and Bioengineering, the F. M. Kirby Foundation, the Walkabout Foundation, the Dana and Albert R. Broccoli Foundation, Ekso Bionics, NeuroRecovery Technologies and the Mark Pollock Trust.

Almost 6 million Americans live with paralysis, including nearly 1.3 million with spinal cord injuries.


Abstract of Iron ‘ElectriRx’ Man: Overground Stepping in an Exoskeleton Combined with Noninvasive Spinal Cord Stimulation after Paralysis

We asked whether coordinated voluntary movement of the lower limbs could be regained in an individual having been completely paralyzed (>4 yr) and completely absent of vision (>15 yr) using a novel strategy – transcutaneous spinal cord stimulation at selected sites over the spinal vertebrae with just one week of training. We also asked whether this stimulation strategy could facilitate stepping assisted by an exoskeleton (EKSO, EKSO Bionics) that is designed so that the subject can voluntarily complement the work being performed by the exoskeleton. We found that spinal cord stimulation enhanced the level of effort that the subject could generate while stepping in the exoskeleton. In addition, stimulation improved the coordination patterns of the lower limb muscles resulting in a more continuous, smooth stepping motion in the exoskeleton. These stepping sessions in the presence of stimulation were accompanied by greater cardiac responses and sweating than could be attained without the stimulation. Based on the data from this case study it appears that there is considerable potential for positive synergistic effects after complete paralysis by combining the overground stepping in an exoskeleton, a novel transcutaneous spinal cord stimulation paradigm, and daily training.

How mass extinctions can accelerate robot evolution

At the start of the simulation, a biped robot controlled by a computationally evolved brain stands upright on a 16 meter by 16 meter surface. The simulation proceeds until the robot falls or until 15 seconds have elapsed. (credit: Joel Lehman)

Robots evolve more quickly and efficiently after a virtual mass extinction modeled after real-life disasters, such as the one that killed off the dinosaurs, computer scientists at The University of Texas at Austin have found.

Mass extinctions speed up evolution by unleashing new creativity in adaptations.

Computer scientists Risto Miikkulainen and Joel Lehman co-authored the study, published as an open-access paper in the journal PLOS ONE.

“Focused destruction can lead to surprising outcomes,” said Miikkulainen, a professor of computer science at UT Austin. “Sometimes you have to develop something that seems objectively worse in order to develop the tools you need to get better.”

Survival of the evolvable

In biology, mass extinctions are known for being highly destructive, erasing a lot of genetic material from the tree of life. But some evolutionary biologists hypothesize that extinction events actually accelerate evolution by promoting those lineages that are the most evolvable, meaning ones that can quickly create useful new features and abilities.

Miikkulainen and Lehman found that, at least with robots, this is the case.

For years, computer scientists have used computer algorithms inspired by evolution to train simulated robot brains, called neural networks, to improve at a task from one generation to the next. But could mass destruction speed things up?

To find out, they connected neural networks to simulated robotic legs with the goal of evolving a robot that could walk smoothly and stably. As with real evolution, random mutations were introduced through the computational evolution process. The scientists created many different niches so that a wide range of novel features and abilities would come about.

Pruning to achieve super-robots

After hundreds of generations, a wide range of robotic behaviors had evolved to fill these niches, many of which were not directly useful for walking. Then the researchers randomly killed off the robots in 90 percent of the niches, mimicking a mass extinction.

After several such cycles of evolution and extinction, they discovered that the lineages that survived were the most evolvable and, therefore, had the greatest potential to produce new behaviors. Not only that, but overall, better solutions to the task of walking were evolved in simulations with mass extinctions, compared with simulations without them.
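
A toy version of that evolve-then-extinguish loop can be written in a few lines (a sketch under stated assumptions: the genome encoding, fitness function, and parameters below are placeholders, not the paper’s simulated walking task or its neural-network controllers).

```python
# Sketch of evolution punctuated by mass extinctions (illustrative only).
# A population spread across many niches evolves hill-climbing style; every
# few hundred generations, 90% of the niches are emptied at random, and the
# surviving lineages recolonize the vacated niches.

import random

NICHES, GENERATIONS, EXTINCT_EVERY, KILL_FRAC = 100, 2000, 400, 0.9

def mutate(genome):
    return [g + random.gauss(0, 0.1) for g in genome]

def fitness(genome):  # placeholder for "walks smoothly and stably"
    return -sum(g * g for g in genome)

niches = {i: [random.uniform(-1, 1) for _ in range(8)] for i in range(NICHES)}

for gen in range(1, GENERATIONS + 1):
    # Ordinary evolution: each niche keeps a mutated child if it is fitter.
    for i, genome in niches.items():
        child = mutate(genome)
        if fitness(child) > fitness(genome):
            niches[i] = child
    # Mass extinction: randomly wipe out 90% of the niches, then let the
    # surviving lineages expand into the vacated ones.
    if gen % EXTINCT_EVERY == 0:
        survivors = random.sample(sorted(niches), round(NICHES * (1 - KILL_FRAC)))
        parents = {s: niches[s] for s in survivors}
        niches = {i: parents[i] if i in parents
                  else mutate(random.choice(list(parents.values())))
                  for i in range(NICHES)}

print("best fitness:", max(fitness(g) for g in niches.values()))
```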

Practical applications of the research could include the development of robots that can better overcome obstacles (such as robots searching for survivors in earthquake rubble, exploring Mars or navigating a minefield) and human-like game agents.

“This is a good example of how evolution produces great things in indirect, meandering ways,” explains Lehman, a former postdoctoral researcher in Miikkulainen’s lab, now at the IT University of Copenhagen. He and a former student of Miikkulainen’s at UT Austin, Kenneth Stanley, recently published a popular science book about evolutionary meandering, Why Greatness Cannot Be Planned: The Myth of the Objective. “Even destruction can be leveraged for evolutionary creativity,” Lehman says.

This research was funded by the National Science Foundation (NSF), National Institutes of Health and UT Austin’s Freshman Research Initiative. Funding from NSF was provided through grants to BEACON, a multi-university center established to study evolution in action in natural and virtual settings. The University of Texas at Austin is a member of BEACON. Evolutionary biologists in BEACON assisted Miikkulainen and Lehman in designing the research project and interpreting the results.


Abstract of Extinction Events Can Accelerate Evolution

Extinction events impact the trajectory of biological evolution significantly. They are often viewed as upheavals to the evolutionary process. In contrast, this paper supports the hypothesis that although they are unpredictably destructive, extinction events may in the long term accelerate evolution by increasing evolvability. In particular, if extinction events extinguish indiscriminately many ways of life, indirectly they may select for the ability to expand rapidly through vacated niches. Lineages with such an ability are more likely to persist through multiple extinctions. Lending computational support for this hypothesis, this paper shows how increased evolvability will result from simulated extinction events in two computational models of evolved behavior. The conclusion is that although they are destructive in the short term, extinction events may make evolution more prolific in the long term.

Speech-classifier program is better at predicting psychosis than psychiatrists

This image shows discrimination between at-risk youths who transitioned to psychosis (red) and those who did not (blue). The polyhedron contains all the at-risk youth who did NOT develop psychosis (blue). All of the at-risk youth who DID later develop psychosis (red) are outside the polyhedron. Thus the speech classifier had 100 percent discrimination or accuracy. The speech classifier consisted of “minimum semantic coherence” (the flow of meaning from one sentence to the next), and indices of reduced complexity of speech, including phrase length and decreased use of “determiner” pronouns (“that,” “what,” “whatever,” “which,” and “whichever”). (credit: Cheryl Corcoran et al./NPJ Schizophrenia/Columbia University Medical Center)

An automated speech analysis program correctly differentiated between at-risk young people who went on to develop psychosis over a two-and-a-half-year follow-up period and those who did not.

In a proof-of-principle study, researchers at Columbia University Medical Center, New York State Psychiatric Institute, and the IBM T. J. Watson Research Center found that the computerized analysis provided a more accurate classification than clinical ratings. The study was published Wednesday, Aug. 26, in an open-access paper in NPJ Schizophrenia.

About one percent of the population between the ages of 14 and 27 is considered to be at clinical high risk (CHR) for psychosis. CHR individuals have symptoms such as unusual or tangential thinking, perceptual changes, and suspiciousness. About 20% will go on to experience a full-blown psychotic episode. Identifying who falls in that 20% category before psychosis occurs has been an elusive goal. Early identification could lead to intervention and support that could delay, mitigate or even prevent the onset of serious mental illness.

Measuring psychosis

Speech provides a unique window into the mind, giving important clues about what people are thinking and feeling. Participants in the study took part in an open-ended, narrative interview in which they described their subjective experiences. These interviews were transcribed and then analyzed by computer for patterns of speech, including semantics (meaning) and syntax (structure).

The analysis established each patient’s semantic coherence (how well he or she stayed on topic), and syntactic structure, such as phrase length and use of determiner words that link the phrases. A clinical psychiatrist may intuitively recognize these signs of disorganized thoughts in a traditional interview, but a machine can augment what is heard by precisely measuring the variables. The participants were then followed for two and a half years.

The speech features that predicted psychosis onset included breaks in the flow of meaning from one sentence to the next, and speech that was characterized by shorter phrases with less elaboration.
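
As a rough illustration of what such features look like in code, here is a Python sketch (with stated assumptions: TruncatedSVD over tf-idf as the Latent Semantic Analysis step, whitespace tokenization, and the five determiners listed in the study; this is not the study’s actual pipeline).

```python
# Rough sketch (not the study's pipeline) of the three feature families:
# minimum semantic coherence between consecutive sentences (LSA embeddings),
# maximum phrase length, and rate of determiner use.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

DETERMINERS = {"that", "what", "whatever", "which", "whichever"}

def speech_features(sentences):
    # LSA: embed each sentence, then take the MINIMUM cosine similarity
    # between adjacent sentences -- the weakest link in the flow of meaning.
    tfidf = TfidfVectorizer().fit_transform(sentences)
    k = min(10, tfidf.shape[1] - 1, len(sentences) - 1)
    vecs = TruncatedSVD(n_components=k).fit_transform(tfidf)
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-9
    coherence = min(float(a @ b) for a, b in zip(vecs, vecs[1:]))

    words = [s.lower().split() for s in sentences]
    return {
        "min_semantic_coherence": coherence,
        "max_phrase_length": max(len(w) for w in words),
        "determiner_rate": sum(t in DETERMINERS for w in words for t in w)
                           / sum(len(w) for w in words),
    }

print(speech_features([
    "We went to the movies on Friday.",
    "The movie that we saw was quite loud.",
    "Afterwards we talked about which scenes we liked.",
]))
```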

Speech classifier: 100% accurate

The speech-classifier tool developed in this study to automatically sort these specific, symptom-related features achieved a striking 100% accuracy: the computer analysis correctly differentiated between the five individuals who later experienced a psychotic episode and the 29 who did not.
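
The classification step itself is geometrically simple. Below is a hedged sketch of the convex-hull idea using SciPy (the feature vectors are invented for illustration; the study applied this to its real speech features with leave-one-subject-out cross-validation).

```python
# Sketch of convex-hull classification (illustrative; all data are invented).
# Build the hull of the feature vectors of youths who did NOT transition;
# a subject whose features fall OUTSIDE the hull is predicted to develop
# psychosis. Delaunay.find_simplex returns -1 for points outside the hull.

import numpy as np
from scipy.spatial import Delaunay

def outside_hull(train_points, x):
    return Delaunay(train_points).find_simplex(np.atleast_2d(x))[0] < 0

rng = np.random.default_rng(0)
non_converters = rng.normal(0.6, 0.10, size=(29, 3))  # toy feature vectors
converters = rng.normal(0.2, 0.05, size=(5, 3))

predictions = [bool(outside_hull(non_converters, x)) for x in converters]
print(predictions)  # True = predicted to later transition to psychosis
```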

These results suggest that this method may be able to identify thought disorder in its earliest, most subtle form, years before the onset of psychosis. Thought disorder is a key component of schizophrenia, but quantifying it has proved difficult.

For the field of schizophrenia research, and for psychiatry more broadly, this opens the possibility that new technology can aid in prognosis and diagnosis of severe mental disorders, and track treatment response. Automated speech analysis is inexpensive, portable, fast, and non-invasive. It has the potential to be a powerful tool that can complement clinical interviews and ratings.

Further research with a second, larger group of at-risk individuals is needed to see if this automated capacity to predict psychosis onset is both robust and reliable. Automated speech analysis used in conjunction with neuroimaging may also be useful in reaching a better understanding of early thought disorder, and the paths to develop treatments for it.


Abstract of Automated analysis of free speech predicts psychosis onset in high-risk youths

Background/Objectives: Psychiatry lacks the objective clinical tests routinely used in other specializations. Novel computerized methods to characterize complex behaviors such as speech could be used to identify and predict psychiatric illness in individuals.

Aims: In this proof-of-principle study, our aim was to test automated speech analyses combined with Machine Learning to predict later psychosis onset in youths at clinical high-risk (CHR) for psychosis.

Methods: Thirty-four CHR youths (11 females) had baseline interviews and were assessed quarterly for up to 2.5 years; five transitioned to psychosis. Using automated analysis, transcripts of interviews were evaluated for semantic and syntactic features predicting later psychosis onset. Speech features were fed into a convex hull classification algorithm with leave-one-subject-out cross-validation to assess their predictive value for psychosis outcome. The canonical correlation between the speech features and prodromal symptom ratings was computed.

Results: Derived speech features included a Latent Semantic Analysis measure of semantic coherence and two syntactic markers of speech complexity: maximum phrase length and use of determiners (e.g., which). These speech features predicted later psychosis development with 100% accuracy, outperforming classification from clinical interviews. Speech features were significantly correlated with prodromal symptoms.

Conclusions: Findings support the utility of automated speech analysis to measure subtle, clinically relevant mental state changes in emergent psychosis. Recent developments in computer science, including natural language processing, could provide the foundation for future development of objective clinical tests for psychiatry.

3D-printed swimming microrobots can sense and remove toxins

3D-printed microfish contain functional nanoparticles that enable them to be self-propelled, chemically powered and magnetically steered. The microfish are also capable of removing and sensing toxins. (credit: J. Warner, UC San Diego Jacobs School of Engineering)

A new kind of fish-shaped microrobot called a “microfish” can swim around efficiently in liquids, is chemically powered by hydrogen peroxide, and is magnetically controlled. These microfish will inspire a new generation of “smart” microrobots with diverse capabilities such as detoxification, sensing, and directed drug delivery, said nanoengineers at the University of California, San Diego.

To manufacture the microfish, the researchers used an innovative 3D printing technology they developed, with numerous improvements over other methods traditionally employed to create microrobots, such as microjet engines, microdrillers, and microrockets.

Most of these microrobots are incapable of performing more sophisticated tasks because they feature simple mechanical designs — such as spherical or cylindrical structures — and are made of homogeneous inorganic materials.

The research, led by Professors Shaochen Chen and Joseph Wang of the NanoEngineering Department at UC San Diego, was published in the Aug. 12 issue of the journal Advanced Materials.

A microrobotic toxin scavenger

Platinum nanoparticles in the tail of the fish achieve propulsion via reaction with hydrogen peroxide; iron oxide nanoparticles are loaded into the head of the fish for magnetic control (credit: W. Zhu and J. Li, UC San Diego Jacobs School of Engineering)

The nanoengineers were able to easily add functional nanoparticles into certain parts of the microfish bodies.

They installed platinum nanoparticles in the tails, which react with hydrogen peroxide to propel the microfish forward, and magnetic iron oxide nanoparticles in the heads, which allowed them to be steered with magnets.

“We have developed an entirely new method to engineer nature-inspired microscopic swimmers that have complex geometric structures and are smaller than the width of a human hair. With this method, we can easily integrate different functions inside these tiny robotic swimmers for a broad spectrum of applications,” said co-first author Wei Zhu, a nanoengineering Ph.D. student in Chen’s research group at the Jacobs School of Engineering at UC San Diego.

As a proof-of-concept demonstration, the researchers incorporated toxin-neutralizing polydiacetylene (PDA) nanoparticles throughout the bodies of the microfish to neutralize harmful pore-forming toxins such as the ones found in bee venom.

The researchers noted that the powerful swimming of the microfish in solution greatly enhanced their ability to clean up toxins.

When the PDA nanoparticles bind with toxin molecules, they become fluorescent and emit red-colored light. The team was able to monitor the detoxification ability of the microfish by the intensity of their red glow. “The neat thing about this experiment is that it shows how the microfish can doubly serve as detoxification systems and as toxin sensors,” said Zhu.

“Another exciting possibility we could explore is to encapsulate medicines inside the microfish and use them for directed drug delivery,” said Jinxing Li, the other co-first author of the study and a nanoengineering Ph.D. student in Wang’s research group.

3D-printing microrobots

Schematic illustration of the μCOP method to fabricate microfish. (Left) UV light illuminates mirrors, generating an optical pattern specified by the control computer. The pattern is projected through optics onto the photosensitive monomer solution to fabricate the fish layer-by-layer. (Right) 3D microscopy image of an array of printed microfish. Scale bar, 100 micrometers. (credit: Wei Zhu et al./Advanced Materials)

The new microfish fabrication method is based on a rapid, high-resolution 3D printing technology called microscale continuous optical printing (μCOP) developed in Chen’s lab, offering speed, scalability, precision, and flexibility.

The key component of the μCOP technology is a digital micromirror array device (DMD) chip, which contains approximately two million micromirrors. Each micromirror is individually controlled to project UV light in the desired pattern (in this case, a fish shape) onto a photosensitive material, which solidifies upon exposure to UV light. The microfish are constructed one layer at a time, allowing each set of functional nanoparticles to be “printed” into specific parts of the fish bodies.
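
Schematically, the control loop might look like the following Python sketch (the mirror counts, layer height, cross-section geometry, and hardware calls are all stand-ins; UCSD’s actual control software is not described at this level of detail).

```python
# Hypothetical sketch of the layer-by-layer uCOP loop described above
# (mirror counts, layer height, fish geometry, and hardware calls are stand-ins).

import numpy as np

DMD_SHAPE = (1080, 1920)       # ~2 million individually controlled micromirrors
LAYER_UM, FISH_THICKNESS_UM = 2, 30

def layer_mask(z_um):
    """Binary mirror pattern for the fish cross-section at height z (toy ellipse)."""
    y, x = np.ogrid[:DMD_SHAPE[0], :DMD_SHAPE[1]]
    cy, cx = DMD_SHAPE[0] / 2, DMD_SHAPE[1] / 2
    squash = max(1e-6, 1 - (2 * z_um / FISH_THICKNESS_UM - 1) ** 2)  # thin at top/bottom
    return ((x - cx) / (300 * squash)) ** 2 + ((y - cy) / (80 * squash)) ** 2 <= 1

def project_uv(mask):          # stand-in for the real DMD/UV exposure call
    print(f"exposing layer: {int(mask.sum())} mirrors on")

def lower_stage(microns):      # stand-in for real stage control
    pass

for layer in range(FISH_THICKNESS_UM // LAYER_UM):
    project_uv(layer_mask(layer * LAYER_UM))  # monomer solidifies where lit
    lower_stage(LAYER_UM)
```

Functionalized nanoparticles would enter a loop like this as different resin mixtures at different stages, which is how platinum ends up only in the tail and iron oxide only in the head.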

Fluorescent images demonstrating the detoxification capability of microfish containing encapsulated PDA nanoparticles (credit: Wei Zhu et al./Advanced Materials)

Within seconds, the researchers can print an array containing hundreds of microfish, each measuring 120 microns long and 30 microns thick. This process also does not require the use of harsh chemicals. Because the μCOP technology is digitized, the researchers could easily experiment with different designs for their microfish, including shark and manta ray shapes. They could also build microrobots based on other biological organisms, such as birds, said Zhu.

“This method has made it easier for us to test different designs for these microrobots and to test different nanoparticles to insert new functional elements into these tiny structures. It’s my personal hope to further this research to eventually develop surgical microrobots that operate safer and with more precision,” said Li.


Abstract of 3D-Printed Artificial Microfish

Hydrogel microfish featuring biomimetic structures, locomotive capabilities, and functionalized nanoparticles are engineered using a rapid 3D printing platform: microscale continuous optical printing (μCOP). The 3D-printed microfish exhibit chemically powered and magnetically guided propulsion, as well as highly efficient detoxification capabilities that highlight the technical versatility of this platform for engineering advanced functional microswimmers for diverse biomedical applications.

The origin of the robot species

A “mother robot” (A) is used for automatic assembly of candidate agents from active and passive modules. For the construction process, the robotic manipulator is equipped with a gripper and a glue supplier. Each agent is represented by the information stored in its genome (B). It contains one gene per module, and each gene contains information about the module types, construction parameters and motor control of the agent. A construction sequence encoded by one gene is shown in (C). First, the part of the robot which was encoded by the previous genes is rotated (C1 to C2). Second, the new module (here active) is picked from stock, rotated (C3), and eventually attached on top of the agent (C4). (credit: Luzius Brodbeck et al./PLOS ONE)

Researchers led by the University of Cambridge have built a mother robot that can construct its own children, test which one does best, and automatically use the results to inform the design of the next generation, passing down preferential traits.

Without any human intervention or computer simulation, beyond the initial command to build a robot capable of movement, the mother created children constructed of between one and five plastic cubes with a small motor inside.

In each of five separate experiments, the mother designed, built and tested generations of ten children, using the information gathered from one generation to inform the design of the next.

The results, reported in an open access paper in the journal PLOS One, found that the “fittest” individuals in the last generation performed a set task twice as quickly as the fittest individuals in the first generation.

Natural selection

Natural selection is “essentially what this robot is doing — we can actually watch the improvement and diversification of the species,” said lead researcher Fumiya Iida of Cambridge’s Department of Engineering, who worked in collaboration with researchers at ETH Zurich.

For each robot child, there is a unique “genome” made up of a combination of between one and five different genes, which contains all of the information about the child’s shape, construction and motor commands.

As in nature, the evolution takes place through “mutation,” where components of one gene are modified or single genes are added or deleted, and “crossover,” where a new genome is formed by merging genes from two individuals.

To allow the mother to determine which children were the fittest, each child was tested on how far it traveled from its starting position in a given amount of time. The most successful individuals in each generation remained unchanged in the next generation to preserve their abilities, while mutation and crossover were introduced in the less successful children.
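
In genetic-algorithm terms, the mother’s scheme looks roughly like this (a minimal sketch with invented gene contents and parameters; the fitness function stands in for physically building each child and measuring how far it travels).

```python
# Minimal sketch of the mother robot's generational scheme (illustrative;
# gene contents and fitness are placeholders for the physical build-and-test).

import random

MAX_GENES, POP = 5, 10

def random_gene():
    # A gene bundles a module type, construction parameters, and motor control.
    return {"module": random.choice(["active", "passive"]),
            "angle": random.uniform(0, 360),
            "motor": random.uniform(-1, 1)}

def fitness(genome):  # placeholder: the real test measured distance traveled
    return sum(g["motor"] for g in genome if g["module"] == "active")

def mutate(genome):
    child = [dict(g) for g in genome]
    op = random.choice(["modify", "add", "delete"])
    if op == "add" and len(child) < MAX_GENES:
        child.append(random_gene())
    elif op == "delete" and len(child) > 1:
        child.pop(random.randrange(len(child)))
    else:  # modify a component of one gene
        child[random.randrange(len(child))]["motor"] = random.uniform(-1, 1)
    return child

def crossover(a, b):  # merge genes from two individuals
    cut = random.randint(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

population = [[random_gene() for _ in range(random.randint(1, MAX_GENES))]
              for _ in range(POP)]
for generation in range(5):
    population.sort(key=fitness, reverse=True)
    elite = population[:2]  # the fittest carry over unchanged
    population = elite + [mutate(crossover(*random.sample(population[:5], 2)))
                          for _ in range(POP - len(elite))]
    print(f"generation {generation}: best = {fitness(population[0]):.2f}")
```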

The increase in performance was due to both the fine-tuning of design parameters and the fact that the mother was able to invent new shapes and gait patterns for the children over time, including some designs that a human designer would not have been able to build.


Cambridge University | Fumiya Iida’s research looks at how robotics can be improved by taking inspiration from nature, whether that’s learning about intelligence, or finding ways to improve robotic locomotion. Iida’s lab is filled with a wide array of hopping robots, which may take their inspiration from grasshoppers, humans or even dinosaurs. One of his group’s developments, the “Chairless Chair,” is a wearable device that allows users to “sit” anywhere, without the need for a real chair.

Creative machines

“One of the big questions in biology is how intelligence came about — we’re using robotics to explore this mystery,” said Iida. “We think of robots as performing repetitive tasks, and they’re typically designed for mass production instead of mass customization, but we want to see robots that are capable of innovation and creativity.”

In nature, organisms are able to adapt their physical characteristics to their environment over time. These adaptations allow biological organisms to survive in a wide variety of different environments — allowing animals to make the move from living in the water to living on land, for instance.

But machines are not adaptable in the same way. They are essentially stuck in one shape for their entire “lives,” and it’s uncertain whether changing their shape would make them more adaptable to changing environments.

Using a computer simulation to study artificial evolution generates thousands, or even millions, of possibilities in a short amount of time, but the researchers found that having the robot generate its own possibilities, without any computer simulation, resulted in more successful children. The disadvantage is that it takes time: each child took the robot about 10 minutes to design, build and test. A robot also requires between ten and 100 times more energy than an animal to do the same thing.

According to Iida, in the future they might use a computer simulation to pre-select the most promising candidates, and use real-world models for actual testing.


Cambridge University | Researchers have observed the process of evolution by natural selection at work in robots, by constructing a “mother” robot that can design, build and test its own “children,” and then use the results to improve the performance of the next generation, without relying on computer simulation or human intervention.


Abstract of Morphological Evolution of Physical Robots through Model-Free Phenotype Development

Artificial evolution of physical systems is a stochastic optimization method in which physical machines are iteratively adapted to a target function. The key for a meaningful design optimization is the capability to build variations of physical machines through the course of the evolutionary process. The optimization in turn no longer relies on complex physics models that are prone to the reality gap, a mismatch between simulated and real-world behavior. We report model-free development and evaluation of phenotypes in the artificial evolution of physical systems, in which a mother robot autonomously designs and assembles locomotion agents. The locomotion agents are automatically placed in the testing environment and their locomotion behavior is analyzed in the real world. This feedback is used for the design of the next iteration. Through experiments with a total of 500 autonomously built locomotion agents, this article shows diversification of morphology and behavior of physical robots for the improvement of functionality with limited resources.

Should humans be able to marry robots?

(credit: AMC)

The Supreme Court’s recent 5–4 decision in Obergefell v. Hodges legalizing same-sex marriage raises the interesting question: what’s next on the “slippery slope”? Robot-human marriages? Robot-robot marriages?

Why yes, predicts Gary Marchant on Slate.

“There has recently been a burst of cogent accounts of human-robot sex and love in popular culture: Her and Ex Machina, the AMC drama series Humans, and the novel Love in the Age of Mechanical Reproduction,” he points out, along with David Levy’s 2007 book, Love and Sex With Robots.

But will the Supremes’ decision open the door to robot-human marriage? Marchant explains that the decision was based on an analysis of four “principles and traditions”:

  • Individual autonomy, the right of each of us to decide our own private choices. Check.
  • Between “two persons.” “Marriage responds to the universal fear that a lonely person might call out only to find no one there,” the court said. “It offers the hope of companionship and understanding and assurance that while both still live there will be someone to care for the other.” Existing care robots would exceed some people in meeting that criterion. Check.
  • Marriage safeguards children and families. Could a future robot be an effective parent? Why not? Check.
  • Marriage is “central to many practical and legal realities of modern life, such as taxation, inheritance, property rights, hospital access and insurance coverage.” Hmm … sounds like a legal/accounting robot would excel in those areas. Double check.

“While few people would understand or support robot-human intimacy today, as robots get more sophisticated and humanlike, more and more people will find love, happiness, and intimacy in the arms of a machine.”

As Humans viewers know, at least in fiction, “Robot sex and love is coming, and robot-human marriage will likely not be far behind.”

AI and robotics researchers call for global ban on autonomous weapons

More than 1,000 leading artificial intelligence (AI) and robotics researchers and others, including Stephen Hawking and Elon Musk, have signed an open letter, published today by the Future of Life Institute (FLI), calling for a ban on offensive autonomous weapons.

FLI defines “autonomous weapons” as those that select and engage targets without human intervention, such as armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions.

The researchers believe that AI technology has reached a point where the deployment of such systems is feasible within years, not decades, and that the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Only a matter of time until they appear on the black market

“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce.

“It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity.”

The proposed ban is similar to the broadly supported international agreements that have successfully prohibited chemical and biological weapons, blinding laser weapons, and space-based nuclear weapons.

“We believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control,” the letter concludes.

List of signatories

Deep Genomics launches, uniting deep learning and genome biology

“Deep learning” reveals the genetic origins of disease. A computational system mimics the biology of RNA splicing by correlating DNA elements with splicing levels in healthy human tissues. The system can scan DNA and identify damaging genetic variants, including those deep within introns. This procedure has led to insights into the genetics of autism, cancers, and spinal muscular atrophy. (credit: Hui Y. Xiong et al./Science)

Deep Genomics, a University of Toronto spinoff, launched today (July 22), combining deep learning and artificial intelligence with the study of the human genome. The company is building on more than a decade of research and expertise in both fields.

Using deep learning allows Deep Genomics to predict the consequences of genomic alterations on various cell mechanisms, predictions that could inform life-changing decisions, potentially via personalized medicine treatment, the researchers say.

“Our vision is to change the course of genomic medicine and help save lives by determining smarter treatment options,” says Brendan Frey, the company’s president and CEO, a Fellow of the Canadian Institute for Advanced Research, and a Professor at the University of Toronto.

SPIDEX, a Google for DNA mutation effects

Professor Brendan Frey (center-right) and colleagues at the University of Toronto Faculty of Applied Science & Engineering (credit: Roberta Baker/ U of T Engineering)

The scientific community has spent decades searching for mutations within specific genes that can be connected to disease, such as the BRCA-1 and BRCA-2 genes for breast cancer. But there is a vast number of mutations and combinations of mutations that have neither been observed nor studied, posing a challenge for today’s diagnostics and therapeutics.

“We envision a future where computers are trusted to predict the outcome of laboratory experiments and treatments, long before anyone picks up a test tube. Our first step will be to open up a genome-wide database of over 300 million potentially disease-causing variants, most of which are in regions of the genome that can’t be examined using other methods,” says Frey.

Deep Genomics’ first product, called SPIDEX, provides information about how these DNA mutations may alter splicing in the cell, a process that is crucial for normal development. It also connects the dots between a variant or mutation of unknown significance and a variant that has been linked to a disease to determine its level of danger.

Because errant splicing is behind many diseases and disorders, including cancers and autism spectrum disorder, SPIDEX has immediate and practical importance for genetic testing and pharmaceutical development. The science validating the SPIDEX tool was described in the January 9, 2015 issue of the journal Science.

Labs will send the mutations they’ve collected to Deep Genomics, and the company will use its proprietary deep learning system, which includes SPIDEX, to “read” the genome and assess how likely each mutation is to cause a problem.
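
The general shape of such a variant-scoring pipeline can be sketched as follows (a hypothetical interface, since SPIDEX itself is proprietary; the features and toy model stand in for the trained deep learning system).

```python
# Hypothetical sketch of variant scoring for splicing (not the SPIDEX API):
# predict the splicing level ("percent spliced in," psi) for the reference
# and the mutated sequence, and score the variant by the induced change.

from dataclasses import dataclass

@dataclass
class Variant:
    pos: int   # 0-based offset within the analyzed sequence window
    ref: str
    alt: str

def features(sequence):
    # Stand-in for real feature extraction (splice-site motifs, conservation...).
    return [sequence.count(m) / len(sequence) for m in ("GT", "AG", "CT")]

class ToyModel:  # stand-in for the trained deep network
    def predict_psi(self, sequence):
        gt, ag, ct = features(sequence)
        return max(0.0, min(1.0, 0.5 + 3 * gt + ag - 2 * ct))

def score_variant(model, ref_seq, v):
    """Delta-psi: a large absolute change flags a likely splice-disrupting variant."""
    alt_seq = ref_seq[:v.pos] + v.alt + ref_seq[v.pos + len(v.ref):]
    return model.predict_psi(alt_seq) - model.predict_psi(ref_seq)

seq = "CCAGGTAAGTCTGCTTAGGCC"
print(score_variant(ToyModel(), seq, Variant(pos=4, ref="G", alt="C")))
```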

The company plans to further grow its team of machine learning, genome biology, and computational biology experts, and continue to invent new deep learning technologies and work with diagnosticians and biologists to understand the many complex ways that cells interpret DNA.

The company’s scientific advisory board includes Yann LeCun, Director, Facebook AI Research; Stephen Scherer, Director, The Center for Applied Genomics; and Jordan Lerner-Ellis, Director, Molecular Diagnostics at Mount Sinai Hospital.

More information: www.deepgenomics.com.


Abstract of The human splicing code reveals new insights into the genetic determinants of disease

To facilitate precision medicine and whole-genome annotation, we developed a machine-learning technique that scores how strongly genetic variants affect RNA splicing, whose alteration contributes to many diseases. Analysis of more than 650,000 intronic and exonic variants revealed widespread patterns of mutation-driven aberrant splicing. Intronic disease mutations that are more than 30 nucleotides from any splice site alter splicing nine times as often as common variants, and missense exonic disease mutations that have the least impact on protein function are five times as likely as others to alter splicing. We detected tens of thousands of disease-causing mutations, including those involved in cancers and spinal muscular atrophy. Examination of intronic and exonic variants found using whole-genome sequencing of individuals with autism revealed misspliced genes with neurodevelopmental phenotypes. Our approach provides evidence for causal variants and should enable new discoveries in precision medicine.