Do you trust robots?

Would you trust this robot? (credit: Rethink Robotics)

Trust in robots is a critical component of safety that requires study, says MIT Professor Emeritus Thomas B. Sheridan in an open-access study published in the journal Human Factors.

Sheridan has studied humans and automation for decades. Here he examines four application areas: self-driving cars and highly automated transit systems; routine tasks such as the delivery of packages in Amazon warehouses; devices that handle tasks in hazardous or inaccessible environments, such as the Fukushima nuclear plant; and robots that engage in social interaction (such as interactive Barbie dolls). In each case, he notes significant human-factors challenges, particularly concerning safety.

For example, he argues that no human driver will stay alert enough to take over control of a self-driving car quickly should the automation fail. Nor does self-driving-car technology account for the value of social interaction between drivers, such as eye contact and hand signals. And would airline passengers be happy if computerized monitoring replaced the second pilot?

Designing a robot to move an elderly person in and out of bed would potentially reduce back injuries among human caregivers, but questions abound as to what physical form that robot should take, and hospital patients may be alienated by robots delivering their food trays. The ability of robots to learn from human feedback is an area that demands human factors research, as is understanding how people of different ages and abilities best learn from robots.

Sheridan also challenges the human factors community to address the inevitable trade-offs: the possibility of robots providing jobs rather than taking them away, robots as assistants that can enhance human self-worth instead of diminishing it, and the role of robots to improve rather than jeopardize security.


Abstract of Human–Robot Interaction: Status and Challenges

Objective: The current status of human–robot interaction (HRI) is reviewed, and key current research challenges for the human factors community are described.

Background: Robots have evolved from continuous human-controlled master–slave servomechanisms for handling nuclear waste to a broad range of robots incorporating artificial intelligence for many applications and under human supervisory control.

Methods: This mini-review describes HRI developments in four application areas and the corresponding challenges for human factors research.

Results: In addition to a plethora of research papers, evidence of success is manifest in live demonstrations of robot capability under various forms of human control.

Conclusions: HRI is a rapidly evolving field. Specialized robots under human teleoperation have proven successful in hazardous environments and medical application, as have specialized telerobots under human supervisory control for space and repetitive industrial tasks. Research in areas of self-driving cars, intimate collaboration with humans in manipulation tasks, human control of humanoid robots for hazardous environments, and social interaction with robots is at initial stages. The efficacy of humanoid general-purpose robots has yet to be proven.

Applications: HRI is now applied in almost all robot tasks, including manufacturing, space, aviation, undersea, surgery, rehabilitation, agriculture, education, package fetch and delivery, policing, and military operations.

Public beta of toolkit for developing machine learning for robots and games released

Make a three-dimensional bipedal robot walk forward as fast as possible, without falling over (credit: OpenAI Gym)

OpenAI (a non-profit AI research company sponsored by Elon Musk and others) has released the public beta of OpenAI Gym, a toolkit for developing and comparing algorithms for reinforcement learning (RL), a type of machine learning.

OpenAI Gym consists of a growing suite of environments (from simulated robots to Atari games) and a site for comparing and reproducing results. OpenAI Gym is compatible with algorithms written in any framework, such as TensorFlow and Theano. The environments are initially written in Python (other languages are planned).

If you’d like to dive in right away, you can work through a tutorial, and you can help out while learning by reproducing a result.
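
To give a flavor of the API, here is a minimal sketch of the agent-environment loop in OpenAI Gym, using a random agent on the CartPole-v0 environment. It assumes the gym package is installed; the environment name and method signatures reflect the 2016 beta release and may change in later versions.

```python
# Minimal OpenAI Gym loop: a random agent on CartPole-v0.
# Assumes `pip install gym`; reflects the 2016 beta API, which may
# differ in later releases.
import gym

env = gym.make("CartPole-v0")      # create an environment
observation = env.reset()          # start a new episode

for t in range(200):
    env.render()                           # optional: visualize the episode
    action = env.action_space.sample()     # random policy: sample a legal action
    observation, reward, done, info = env.step(action)  # apply it, observe feedback
    if done:                               # pole fell over or time limit reached
        print("Episode finished after {} timesteps".format(t + 1))
        break
```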

What is reinforcement learning?

Reinforcement learning (RL) is the subfield of machine learning concerned with decision making and motor control. It studies how an agent can learn to achieve goals in a complex, uncertain environment. It’s exciting for two reasons, according to OpenAI’s Greg Brockman and John Schulman:

  • RL is very general, encompassing all problems that involve making a sequence of decisions: for example, controlling a robot’s motors so that it’s able to run and jump, making business decisions like pricing and inventory management, or playing video games and board games. RL can even be applied to supervised learning problems with sequential or structured outputs.
  • RL algorithms have started to achieve good results in many difficult environments. RL has a long history, but until recent advances in deep learning, it required lots of problem-specific engineering. DeepMind’s Atari results, BRETT from Pieter Abbeel’s group, and AlphaGo all used deep RL algorithms, which did not make too many assumptions about their environment and thus can be applied in other settings.

However, RL research is also slowed down by two factors:

  • The need for better benchmarks. In supervised learning (learning from labeled examples), progress has been driven by large labeled datasets like ImageNet. In RL, the closest equivalent would be a large and diverse collection of environments. However, the existing open-source collections of RL environments don’t have enough variety, and they are often difficult to even set up and use.
  • Lack of standardization of environments used in publications. Subtle differences in the problem definition, such as the reward function or the set of actions, can drastically alter a task’s difficulty. This issue makes it difficult to reproduce published research and compare results from different papers.

OpenAI Gym is an attempt to fix both problems.
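
To make the “sequence of decisions” idea concrete, the sketch below trains a tabular Q-learning agent on Gym’s small FrozenLake-v0 grid world. It is a deliberately simple illustration rather than one of the deep RL methods mentioned above; the environment name follows the beta release, and the hyperparameters are arbitrary.

```python
# Tabular Q-learning on a small discrete Gym environment (FrozenLake-v0).
# A minimal illustration of reinforcement learning, not a deep RL method;
# hyperparameters are arbitrary and chosen only for demonstration.
import gym
import numpy as np

env = gym.make("FrozenLake-v0")
Q = np.zeros((env.observation_space.n, env.action_space.n))  # state-action values

alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount, exploration rate

for episode in range(5000):
    state = env.reset()
    done = False
    while not done:
        # epsilon-greedy action selection
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done, _ = env.step(action)
        # Q-learning update: move Q(s, a) toward reward + discounted best future value
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state

print("Greedy policy per state:", np.argmax(Q, axis=1))
```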

More information, including environments (Atari games, 2D and 3D robots, and toy text, for example), is available here.

“During the public beta, we’re looking for feedback on how to make this into an even better tool for research,” says the OpenAI team. “If you’d like to help, you can try your hand at improving the state-of-the-art on each environment, reproducing other people’s results, or even implementing your own environments. Also please join us in the community chat!”


John Schulman | hopper

System predicts 85 percent of cyber attacks using input from human experts

AI2 combs through data and detects suspicious activity using unsupervised machine-learning. It then presents this activity to human analysts, who confirm which events are actual attacks, and incorporate that feedback into its models for the next set of data. (credit: Kalyan Veeramachaneni/MIT CSAIL)

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the machine-learning startup PatternEx have developed an AI platform called AI2 that predicts cyber-attacks significantly better than existing systems by continuously incorporating input from human experts (AI2 refers to merging AI with “analyst intuition”: rules created by human experts).

The team showed that AI2 can detect 85 percent of attacks — about three times better than previous benchmarks — while also reducing the number of false positives by a factor of 5. The system was tested on 3.6 billion pieces of data known as “log lines,” which were generated by millions of users over a period of three months.

To predict attacks, AI2 combs through data and detects suspicious activity by clustering the data into meaningful patterns using unsupervised (automatic, no human help) machine learning. It then presents this activity to human analysts, who confirm which events are actual attacks, and incorporates that feedback into its models for the next set of data.

“You can think about the system as a virtual analyst,” says CSAIL research scientist Kalyan Veeramachaneni, who developed AI2 with Ignacio Arnaldo, a chief data scientist at PatternEx and a former CSAIL postdoc. “It continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates significantly and rapidly.”

Creating cybersecurity systems that merge human- and computer-based approaches is tricky, partly because of the challenge of manually labeling cybersecurity data for the algorithms. For example, let’s say you want to develop a computer-vision algorithm that can identify objects with high accuracy. Labeling data for that is simple: Just enlist a few human volunteers to label photos as either “objects” or “non-objects,” and feed that data into the algorithm.

But for a cybersecurity task, the average person on a crowdsourcing site like Amazon Mechanical Turk simply doesn’t have the skillset to apply labels like “DDOS” or “exfiltration attacks,” says Veeramachaneni. “You need security experts.” That opens up another problem: Experts are busy and expensive, so an effective machine-learning system has to be able to improve itself without overwhelming its human overlords.

Merging methods

AI2’s secret weapon is that it fuses together three different unsupervised-learning methods, and then shows the top events to analysts for them to label. It then builds a supervised model that it can constantly refine through what the team calls a “continuous active learning system.”

Specifically, on day one of its training, AI2 picks the 200 most abnormal events and gives them to the expert. As it improves over time, it identifies more and more of the events as actual attacks, meaning that in a matter of days, the analyst may only be looking at 30 or 40 events a day.
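
The published paper specifies the actual architecture; the sketch below only illustrates the continuous active-learning pattern described above: score each day’s events with an unsupervised outlier detector, show the top-scoring events to an analyst, and fold the returned labels into a supervised model. The scikit-learn models and the ask_analyst() helper are hypothetical stand-ins, not components of AI2.

```python
# Schematic of a continuous active-learning loop in the spirit of AI2:
# unsupervised outlier scoring -> analyst labels on the top-k events ->
# supervised re-training. The models and ask_analyst() are illustrative
# stand-ins, not the components used in the actual system.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

def ask_analyst(events):
    """Hypothetical hook: a human analyst labels events as attack (1) or benign (0)."""
    return np.random.randint(0, 2, size=len(events))  # placeholder labels

labeled_X, labeled_y = [], []
classifier = None

for day in range(5):                      # one iteration per day of log data
    X_today = np.random.rand(10000, 20)   # placeholder: feature vectors from today's log lines

    # 1. Unsupervised outlier detection scores every event (higher = more anomalous).
    detector = IsolationForest(contamination=0.01).fit(X_today)
    scores = -detector.decision_function(X_today)

    # 2. Show the top-k most abnormal events to the analyst (200 on day one).
    top_k = np.argsort(scores)[-200:]
    labels = ask_analyst(X_today[top_k])

    # 3. Fold the new labels into the supervised model.
    labeled_X.append(X_today[top_k])
    labeled_y.append(labels)
    classifier = RandomForestClassifier().fit(np.vstack(labeled_X), np.concatenate(labeled_y))
    # classifier.predict_proba(...) could now help rank the next day's events.
```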

“This paper brings together the strengths of analyst intuition and machine learning, and ultimately drives down both false positives and false negatives,” says Nitesh Chawla, the Frank M. Freimann Professor of Computer Science at the University of Notre Dame. “This research has the potential to become a line of defense against attacks such as fraud, service abuse and account takeover, which are major challenges faced by consumer-facing systems.”

The team says that AI2 can scale to billions of log lines per day, transforming the pieces of data on a minute-by-minute basis into different “features”, or discrete types of behavior that are eventually deemed “normal” or “abnormal.”

“The more attacks the system detects, the more analyst feedback it receives, which, in turn, improves the accuracy of future predictions,” Veeramachaneni says. “That human-machine interaction creates a beautiful, cascading effect.”

Veeramachaneni presented a paper about the system at last week’s IEEE International Conference on Big Data Security in New York City.


MIT CSAIL | AI2: an AI-driven predictive cybersecurity platform


Abstract of AI2: Training a big data machine to defend

We present an analyst-in-the-loop security system, where analyst intuition is put together with state-of-the-art machine learning to build an end-to-end active learning system. The system has four key features: a big data behavioral analytics platform, an ensemble of outlier detection methods, a mechanism to obtain feedback from security analysts, and a supervised learning module. When these four components are run in conjunction on a daily basis and are compared to an unsupervised outlier detection method, detection rate improves by an average of 3.41×, and false positives are reduced fivefold. We validate our system with a real-world data set consisting of 3.6 billion log lines. These results show that our system is capable of learning to defend against unseen attacks.

NYU Holodeck to be model for year 2041 cyberlearning

NYU-X Holodeck (credit: Winslow Burleson and Armanda Lewis)

In an open-access paper in the International Journal of Artificial Intelligence in Education, Winslow Burleson, PhD, MSE, associate professor, New York University Rory Meyers College of Nursing, suggests that advanced cyberlearning environments involving VR and AI innovations are needed to solve society’s “wicked challenges”* — entrenched and seemingly intractable societal problems.

Burleson and co-author Armanda Lewis imagine such technology in a year-2041 Holodeck, which Burleson’s NYU-X Lab is currently developing in prototype form, in collaboration with colleagues at NYU Courant, Tandon, Steinhardt, and Tisch.

“The Holodeck will support a broad range of transdisciplinary collaborations, integrated education, research, and innovation by providing a networked software/hardware infrastructure that can synthesize visual, audio, physical, social, and societal components,” said Burleson.

It’s intended as a model for the future of cyberlearning experience, integrating visual, audio, and physical (haptics, objects, real-time fabrication) components, with shared computation, integrated distributed data, immersive visualization, and social interaction to make possible large-scale synthesis of learning, research, and innovation.


This reminds me of the book Education and Ecstasy, written in 1968 by George B. Leonard, a respected editor for LOOK magazine and, in many respects, a pioneer in what has become the transhumanism movement. That book laid out the justification and promise of advanced educational technology in the classroom for an entire generation. Other writers, such as Harry S. Broudy in The Real World of the Public Schools (1972), followed, arguing that we cannot afford “master teachers” in every classroom, but still need to do far better, both then and now.

Today, theories and models of automated planning using computers in complex situations are advanced, and “wicked” social simulations can demonstrate the “potholes” in proposed action scenarios. Virtual realities, holodecks, interactive games, and robotic and/or AI assistants offer “sandboxes” for learning and for sharing that learning with others. Leonard’s vision, proposed in 1968 for the year 2000, has not yet been realized. However, by 2041, according to these authors, it just might be.

— Warren E. Lacefield, Ph.D. President/CEO Academic Software, Inc.; Associate Professor (retired), Evaluation, Measurement, and Research Program, Department of Educational Leadership, Research, and Technology, Western Michigan University (aka “Asiwel” on KurzweilAI)


Key aspects of the Holodeck: personal stories and interactive experiences that make it a rich environment; open streaming content that makes it real and compelling; and contributions that personalize the learning experience. The goal is to create a networked infrastructure and communication environment where “wicked challenges” can be iteratively explored and re-solved, utilizing visual, acoustic, and physical sensory feedback, human dynamics, and social collaboration.

Burleson and Lewis envision that in 2041, learning is unlimited — each individual can create a teacher, team, community, world, galaxy or universe of their own.

* In the late 1960s, urban planners Horst Rittel and Melvin Webber began formulating the concept of “wicked problems” or “wicked challenges”: problems so vexing in the realm of social and organizational planning that they could not be successfully ameliorated with traditional linear, analytical, systems-engineering types of approaches.

These “wicked challenges” are poorly defined, abstruse, and connected to strong moral, political, and professional issues. Some examples might include: “How should we deal with crime and violence in our schools?” “How should we wage the ‘War on Terror’?” or “What is good national immigration policy?”

“Wicked problems,” by their very nature, are strongly stakeholder dependent; there is often little consensus even about what the problem is, let alone how to deal with it. And, the challenges themselves are ever shifting sets of inherently complex, interacting issues evolving in a dynamic social context. Often, new forms of “wicked challenges” emerge as a result of trying to understand and treat just one challenge in isolation.


Abstract of Optimists’ Creed: Brave New Cyberlearning, Evolving Utopias (Circa 2041)

This essay imagines the role that artificial intelligence innovations play in the integrated living, learning and research environments of 2041. Here, in 2041, in the context of increasingly complex wicked challenges, whose solutions by their very nature continue to evade even the most capable experts, society and technology have co-evolved to embrace cyberlearning as an essential tool for envisioning and refining utopias–non-existent societies described in considerable detail. Our society appreciates that evolving these utopias is critical to creating and resolving wicked challenges and to better understanding how to create a world in which we are actively “learning to be” – deeply engaged in intrinsically motivating experiences that empower each of us to reach our full potential. Since 2015, Artificial Intelligence in Education (AIED) has transitioned from what was primarily a research endeavour, with educational impact involving millions of user/learners, to serving, now, as a core contributor to democratizing learning (Dewey 2004) and active citizenship for all (billions of learners throughout their lives). An expansive experiential super computing cyberlearning environment, we affectionately call the “Holodeck,” supports transdisciplinary collaboration and integrated education, research, and innovation, providing a networked software/hardware infrastructure that synthesizes visual, audio, physical, social, and societal components. The Holodeck’s large-scale integration of learning, research, and innovation, through real-world problem solving and teaching others what you have learned, effectively creates a global meritocratic network with the potential to resolve society’s wicked challenges while empowering every citizen to realize her or his full potential.

Microscope uses nanosecond-speed laser and deep learning to detect cancer cells more efficiently

This microscope uses specially designed optics that boost image clarity and slow the images enough for them to be detected and digitized at a rate of 36 million images per second. It then uses deep learning to distinguish cancer cells from healthy white blood cells. (credit: Tunde Akinloye/CNSI)

Scientists at the California NanoSystems Institute at UCLA have developed a new technique for identifying cancer cells in blood samples faster and more accurately than the current standard methods.

In one common approach to testing for cancer, doctors add biochemicals to blood samples. Those biochemicals attach biological “labels” to the cancer cells, and those labels enable instruments to detect and identify them. However, the biochemicals can damage the cells and render the samples unusable for future analyses. There are other current techniques that don’t use labeling but can be inaccurate because they identify cancer cells based only on one physical characteristic.

Time-stretch quantitative phase imaging (TS-QPI) and analytics system

The new technique images cells without destroying them and can identify 16 physical characteristics — including size, granularity and biomass — instead of just one.

Time-stretch quantitative phase imaging (TS-QPI) and analytics system (credit: Claire Lifan Chen et al./Nature Scientific Reports)

The new technique combines two components that were invented at UCLA:

  • A “photonic time stretch” microscope, which is capable of quickly imaging cells in blood samples. Invented by Bahram Jalali, professor and Northrop-Grumman Optoelectronics Chair in electrical engineering, it works by taking pictures of flowing blood cells using laser bursts (similar to how a camera uses a flash). Each flash lasts only nanoseconds (billionths of a second) to avoid damage to cells, but that normally means the images are both too weak to be detected and too fast to be digitized by normal instrumentation. The new microscope overcomes those challenges by using specially designed optics that amplify and boost the clarity of the images, and simultaneously slow them down enough to be detected and digitized at a rate of 36 million images per second.
  • A deep learning computer program, which identifies cancer cells with more than 95 percent accuracy. Deep learning is a form of artificial intelligence that uses complex algorithms to extract patterns and knowledge from rich multidimensional datasets, with the goal of achieving accurate decision making. (A schematic sketch of this classification step follows below.)
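
As a rough sketch of that final classification step (not the authors’ pipeline, which optimizes the receiver operating characteristic globally), the code below trains a small neural network on per-cell feature vectors with 16 biophysical features such as size, granularity, and biomass. The synthetic data and the scikit-learn MLPClassifier are illustrative assumptions.

```python
# Generic sketch of label-free cell classification from biophysical features.
# Each cell is represented by 16 features (e.g., size, granularity, biomass)
# extracted from TS-QPI images; a small neural network separates cancer cells
# from white blood cells. Synthetic data and MLPClassifier are stand-ins for
# the authors' actual pipeline.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_cells, n_features = 2000, 16

# Placeholder feature matrix and labels (1 = cancer cell, 0 = white blood cell).
X = rng.normal(size=(n_cells, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_cells) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

# The paper optimizes the ROC directly; here we simply report the area
# under the ROC curve on held-out cells.
print("ROC AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```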

The study was published in the open-access journal Nature Scientific Reports. The researchers write in the paper that the system could lead to data-driven diagnoses by cells’ physical characteristics, which could allow quicker and earlier diagnoses of cancer, for example, and better understanding of the tumor-specific gene expression in cells, which could facilitate new treatments for disease.

The research was supported by NantWorks, LLC.


Abstract of Deep Learning in Label-free Cell Classification

Label-free cell analysis is essential to personalized genomics, cancer diagnostics, and drug development as it avoids adverse effects of staining reagents on cellular viability and cell signaling. However, currently available label-free cell assays mostly rely only on a single feature and lack sufficient differentiation. Also, the sample size analyzed by these assays is limited due to their low throughput. Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification. Our system captures quantitative optical phase and intensity images and extracts multiple biophysical features of individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is performed for cell classification. We compare various learning algorithms including artificial neural network, support vector machine, logistic regression, and a novel deep learning pipeline, which adopts global optimization of receiver operating characteristics. As a validation of the enhanced sensitivity and specificity of our system, we show classification of white blood T-cells against colon cancer cells, as well as lipid accumulating algal strains for biofuel production. This system opens up a new path to data-driven phenotypic diagnosis and better understanding of the heterogeneous gene expressions in cells.

Autonomous vehicles might have to be test-driven tens or hundreds of years to demonstrate their safety

A Lexus RX450h retrofitted by Google for its driverless car fleet (credit: Steve Jurvetson/CC)

Autonomous vehicles would have to be driven hundreds of millions of miles, or even hundreds of billions of miles, over tens or even hundreds of years (under some scenarios) to create enough data to statistically demonstrate their safety, when compared to the rate at which injuries and fatalities* occur in human-controlled cars and trucks, according to a new open-access RAND report.**

Although the total number of crashes, injuries and fatalities from human drivers is high**, the rate of these failures (77 injuries per 100 million miles driven) is low in comparison with the number of miles that people drive.
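
A back-of-the-envelope version of the statistical argument (a sketch, not the RAND methodology) uses the “rule of three”: observing zero failures in N miles bounds the true failure rate below roughly 3/N at 95% confidence, so matching a benchmark rate r requires on the order of 3/r failure-free miles. The injury rate comes from the article; the fatality rate and fleet size below are approximate, illustrative assumptions.

```python
# Back-of-the-envelope version of the statistical argument (not the RAND model).
# "Rule of three": with zero observed failures in N miles, the 95% upper
# confidence bound on the failure rate is about 3 / N, so matching a benchmark
# rate requires roughly 3 / rate failure-free miles. Rates are approximate.
injury_rate   = 77 / 100e6   # ~77 reported injuries per 100 million miles (from the article)
fatality_rate = 1.1 / 100e6  # roughly 1 fatality per 100 million miles (approximate US figure)

for name, rate in [("injury", injury_rate), ("fatality", fatality_rate)]:
    miles_needed = 3 / rate                 # failure-free miles for a 95% bound
    years = miles_needed / 1_000_000        # hypothetical 100-car fleet, ~10,000 miles/car/year
    print(f"{name}: ~{miles_needed / 1e6:.0f} million failure-free miles "
          f"(~{years:.0f} years for a hypothetical 100-car fleet)")
```

Demonstrating that autonomous vehicles are actually safer than human drivers, rather than merely comparable, pushes the mileage requirement far higher still, which is roughly how the report arrives at figures in the billions of miles.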

The report suggests that developers find more practical testing methods, such as virtual testing and simulators, mathematical modeling, scenario testing, and pilot studies.

Another report, from the University of Michigan’s Transportation Research Institute in October 2015, found that even though they have not been at fault, self-driving test cars are involved in crashes at five times the rate of conventional cars per mile (or twice the rate, when figures are adjusted for the many accidents involving conventional cars that go unreported) and four times the injury rate, although the injuries have been minor.

* According to the Centers for Disease Control and Prevention, motor vehicle accidents are a leading cause of premature death in the United States and are responsible for over $80 billion annually in medical care and lost productivity due to injuries. Autonomous vehicles hold enormous potential for managing this crisis, and researchers say autonomous vehicles could significantly reduce the number of accidents caused by human error. According to the National Highway Traffic Safety Administration, more than 90 percent of automobile crashes are caused by human errors such as driving too fast, as well as alcohol impairment, distraction, and fatigue. Autonomous vehicles are never drunk, distracted, or tired; these factors are involved in 41 percent, 10 percent, and 2.5 percent of all fatal crashes, respectively.

** As of March 2016, Google self-driving cars in “autonomous mode” have driven 1,498,214 miles. As of Feb. 29, 2016, only one Google self-driving car has caused an accident, according to Google (on Feb. 14, a Google self-driving car, traveling at 2 mph, pulled out in front of a public bus going 15 mph, according to the California DMV).

MIT AI Lab 3D-prints first mobile robot made of solids and liquids

This 3D-printed hexapod robot moves via a single motor that spins a crankshaft that pumps fluid to the robot’s legs. Every component except the motor, battery, and added electronics is printed in a single step, no assembly required. (credit: R. MacCurdy/MIT CSAIL)

Ever want to just push a button and print out a hydraulically-powered robot that can immediately get to work for you?

MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers have designed a system that does just that: 3D-prints moving robots with solid and liquid materials in a single step and on a commercially available 3D printer. No assembly required.

“All you have to do is stick in a battery and motor, and you have a robot that can practically walk right out of the printer,” says CSAIL Director Daniela Rus, who oversaw the project and co-wrote the paper.

Printable hydraulics

The 3D-printed six-legged demo robot weighs about 1.5 pounds and is less than 6 inches long. The robot uses a tripod gait. A single DC motor spins a central crankshaft that pumps fluid via banks of bellows pumps. Fluid is forced out of the pumps and distributed to each leg actuator by pipes embedded within the robot’s body. Added after printing to enhance control, an onboard microcontroller (A) controls a motor (B) and enables responses to environmental stimuli via a sensor (C), with cellphone control via Bluetooth. (credit: R. MacCurdy/MIT CSAIL)

To demonstrate their “printable hydraulics” concept, the researchers 3D-printed a tiny six-legged robot that can crawl, using 12 hydraulic pumps embedded within its body. They also added an onboard microcontroller, sensor, and remote control via cellphone.

The inkjet printer deposits individual droplets of material that are each 20 to 30 micrometers in diameter. The printer proceeds layer-by-layer from the bottom up. For each layer, the printer deposits different materials in different parts and then uses high-intensity UV light to solidify all of the materials except the liquids. Each layer consists of a “photopolymer” (a solid) and a “non-curing material” (a liquid).

3D-printed bellows produced via inkjet printer in a single print. Co-deposition of liquids and solids allows fine internal channels to be fabricated and pre-filled. The part is ready to use when it is removed from the printer. (credit: R. MacCurdy/MIT CSAIL)

With current methods, liquid printing requires an additional post-printing step such as melting the materials away or having a human manually scrape them clean, making it hard for liquid-based methods to be used in factory-scale manufacturing.

Liquids also often interfere with the droplets that are supposed to solidify. To handle that issue, the team printed dozens of test geometries with different orientations to determine the proper resolutions for printing solids and liquids together.

The team also 3-D printed a silicone-rubber robotic hand with fluid-actuated fingers. They developed this “soft gripper” for the Baxter research robot, designed by former CSAIL director Rodney Brooks as part of his spinoff company Rethink Robotics.

“Printable hydraulics” allows for a customizable design template that can create robots of different sizes, shapes, and functions and is compatible with any multimaterial 3-D inkjet printer.

“The CSAIL team has taken multi-material printing to the next level by printing not just a combination of different polymers or a mixture of metals, but essentially a self-contained working hydraulic system,” says Hod Lipson, a professor of engineering at Columbia University and co-author of Fabricated: The New World of 3-D Printing. “It’s an important step towards the next big phase of 3-D printing — moving from printing passive parts to printing active integrated systems.”

An open-access paper will be included in this summer’s IEEE International Conference on Robotics and Automation (ICRA) proceedings. The research was funded in part by a grant from the National Science Foundation.


MIT CSAIL | Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have developed the first-ever technique for 3D-printing robots that involves printing solid and liquid materials at the same time.


Abstract of Printable Hydraulics: A Method for Fabricating Robots by 3D Co-Printing Solids and Liquids

This paper introduces a novel technique for fabricating functional robots using 3D printers. Simultaneously depositing photopolymers and a non-curing liquid allows complex, pre-filled fluidic channels to be fabricated. This new printing capability enables complex hydraulically actuated robots and robotic components to be automatically built, with no assembly required. The technique is showcased by printing linear bellows actuators, gear pumps, soft grippers and a hexapod robot, using a commercially-available 3D printer. We detail the steps required to modify the printer and describe the design constraints imposed by this new fabrication approach.

Largest network of cortical neurons mapped from ~100-terabyte data set

Neuroscientists have constructed a network map of connections between cortical neurons, traced from a ~100-terabyte 3D data set. The data were created by an electron microscope in nanoscopic detail, allowing every one of the “wires” to be seen, along with their connections. Some of the neurons are color-coded according to their activity patterns in the living brain. (credit: Clay Reid, Allen Institute; Wei-Chung Lee, Harvard Medical School; Sam Ingersoll, graphic artist)

The largest network of the connections between neurons in the cortex to date has been published by an international team of researchers from the Allen Institute for Brain Science, Harvard Medical School, and Neuro-Electronics Research Flanders (NERF).

In the process of their study*, the researchers developed new tools that will be useful for “reverse engineering the brain by discovering relationships between circuit wiring and neuronal and network computations,” says Wei-Chung Lee, Ph.D., Instructor in Neurobiology at Harvard Medical School and lead author of a paper published this week in the journal Nature.

The study is part of a “functional connectomics” research program started almost ten years ago that aims at bridging a longstanding gap between two areas of neuroscience study: brain activity (using fMRI imaging) and brain wiring (using detailed electron microscopy).

The research began by identifying neurons in the mouse visual cortex that responded to particular visual stimuli (pyramidal cells in V1, the rodent primary visual cortex), such as vertical or horizontal bars on a screen. The scientists then made ultra-thin slices of brain and captured millions of detailed images of those targeted cells and synapses, which were then reconstructed in three dimensions. Teams of annotators simultaneously traced individual neurons through the 3D stacks of images and located connections between individual neurons.

Analyzing this wealth of data yielded several results, including the first direct structural evidence to support the hypothesis that neurons that do similar tasks are more likely to be connected to each other than to nearby neurons that carry out different tasks.
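
A schematic of the kind of analysis behind that result: given each neuron’s preferred stimulus orientation and a wiring diagram, compare the connection probability of similarly tuned versus differently tuned pairs. The data below are synthetic and the 30-degree threshold is arbitrary; the actual study worked from EM-reconstructed synapses and functional imaging, with far more careful statistics.

```python
# Schematic of the "like connects to like" analysis: compare connection
# probability for similarly tuned vs. differently tuned neuron pairs.
# Synthetic data only; the published study used EM-reconstructed wiring
# and functional imaging.
import numpy as np

rng = np.random.default_rng(1)
n = 200
orientation = rng.uniform(0, 180, size=n)            # preferred orientation per neuron, degrees

# Build a synthetic connectivity matrix in which similar tuning raises
# the chance of a synapse (the effect the analysis should detect).
diff = np.abs(orientation[:, None] - orientation[None, :])
diff = np.minimum(diff, 180 - diff)                   # circular difference, 0-90 degrees
p_connect = 0.02 + 0.08 * (diff < 30)                 # higher probability for similar tuning
connected = rng.random((n, n)) < p_connect
np.fill_diagonal(connected, False)

# Compare empirical connection probabilities for the two pair populations.
similar = (diff < 30) & ~np.eye(n, dtype=bool)
different = (diff >= 30) & ~np.eye(n, dtype=bool)
print("P(connection | similar tuning):  ", connected[similar].mean())
print("P(connection | different tuning):", connected[different].mean())
```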

New tools for reverse engineering the brain

The researchers said these new research tools will also be employed in an $18.7 million IARPA project announced March 12 with the Allen Institute for Brain Science, Baylor College of Medicine, and Princeton University, which seeks to scale these methods to a larger segment of brain tissue. That project is part of the Machine Intelligence from Cortical Networks (MICrONS) program (see CMU announces research project to reverse-engineer brain algorithms, funded by IARPA), which seeks to revolutionize machine learning by reverse-engineering the algorithms of the brain.

The 3D image data from the Allen Institute-Baylor study will be sent to Princeton University, under Sebastian Seung, Ph.D., Professor of Computer Science and the Princeton Neuroscience Institute, where it will be painstakingly reconstructed in three dimensions by human annotators aided by powerful machine vision and machine learning algorithms, and each individual neuron with all its myriad processes will be traced and analyzed.

The end goal of that functional connectomics project is to create a reconstruction of a cubic millimeter of brain tissue, the size of a grain of sand, yet containing the largest section of brain ever to be studied in this way to date.

Improving artificial neural network algorithms

Beyond that, the ultimate goal of MICrONS is to implement the algorithms and learning rules that scientists decipher from the brain to advance the field of artificial intelligence by improving artificial neural network algorithms — for speech recognition, recognizing faces, and helping analyze big data for biomedical research, for example.

“In many ways, these artificial neural networks are still primitive compared to biological networks of neurons and do not learn the way real brains do,” says Andreas Tolias, Ph.D., Associate Professor in the Department of Neuroscience at Baylor College of Medicine. “Our goal is to fill this gap and apply the algorithms of the brain to engineer novel artificial network architectures.”

* This work was supported by the Harvard Medical School Vision Core Grant, the Bertarelli Foundation, the Edward R. and Anne G. Lefler Center, the Stanley and Theodora Feldberg Fund, Neuro-Electronics Research Flanders (NERF), the Allen Institute for Brain Science, and the National Institutes of Health, through resources provided by the National Resource for Biomedical Supercomputing at the Pittsburgh Supercomputing Center and the National Center for Multiscale Modeling of Biological Systems.


Abstract of Anatomy and function of an excitatory network in the visual cortex

Circuits in the cerebral cortex consist of thousands of neurons connected by millions of synapses. A precise understanding of these local networks requires relating circuit activity with the underlying network structure. For pyramidal cells in superficial mouse visual cortex (V1), a consensus is emerging that neurons with similar visual response properties excite each other, but the anatomical basis of this recurrent synaptic network is unknown. Here we combined physiological imaging and large-scale electron microscopy to study an excitatory network in V1. We found that layer 2/3 neurons organized into subnetworks defined by anatomical connectivity, with more connections within than between groups. More specifically, we found that pyramidal neurons with similar orientation selectivity preferentially formed synapses with each other, despite the fact that axons and dendrites of all orientation selectivities pass near (<5 μm) each other with roughly equal probability. Therefore, we predict that mechanisms of functionally specific connectivity take place at the length scale of spines. Neurons with similar orientation tuning formed larger synapses, potentially enhancing the net effect of synaptic specificity. With the ability to study thousands of connections in a single circuit, functional connectomics is proving a powerful method to uncover the organizational logic of cortical networks.

Automated lip-reading invented

(credit: MGM)

New lip-reading technology developed at the University of East Anglia could help in solving crimes and provide communication assistance for people with hearing and speech impairments.

The visual speech recognition technology, created by Helen L. Bear, PhD, and Prof. Richard Harvey of UEA’s School of Computing Sciences, can be applied “any place where the audio isn’t good enough to determine what people are saying,” Bear said. Those include criminal investigations, entertainment, and especially situations where there are high levels of noise, such as in cars or aircraft cockpits, she said.

Bear said unique problems with determining speech arise when sound isn’t available — such as on video footage — or if the audio is inadequate and there aren’t clues to give the context of a conversation. Or on those ubiquitous annoying videos with music that masks speech. The sounds ‘/p/,’ ‘/b/,’ and ‘/m/’ all look similar on the lips, but now the machine lip-reading classification technology can differentiate between the sounds for a more accurate translation.
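
As a toy illustration of the classification problem (not the UEA system or its training method), the sketch below trains an off-the-shelf classifier on synthetic lip-shape feature vectors for /p/, /b/, and /m/ and prints the confusion matrix; heavy off-diagonal mass is exactly the /p/-/b/-/m/ ambiguity described above.

```python
# Toy sketch of viseme classification: distinguish the visually similar
# phonemes /p/, /b/, /m/ from lip-shape feature vectors. Synthetic features
# and an off-the-shelf SVM stand in for the UEA visual-speech system.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(2)
phonemes = ["p", "b", "m"]
X, y = [], []
for label, offset in zip(phonemes, [0.0, 0.3, 0.6]):   # small offsets: classes overlap heavily
    X.append(rng.normal(loc=offset, scale=1.0, size=(300, 10)))  # 10 lip-shape features per frame
    y += [label] * 300
X = np.vstack(X)
y = np.array(y)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)

# Rows = true phoneme, columns = predicted; off-diagonal mass shows /p/-/b/-/m/ confusion.
print(confusion_matrix(y_test, clf.predict(X_test), labels=phonemes))
```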

“We are still learning the science of visual speech and what it is people need to know to create a fool-proof recognition model for lip-reading, but this classification system improves upon previous lip-reading methods by using a novel training method for the classifiers,” said Bear.

“Lip-reading is one of the most challenging problems in artificial intelligence, so it’s great to make progress on one of the trickier aspects, which is how to train machines to recognize the appearance and shape of human lips,” said Harvey.

The research, part of a three-year project, was supported by the Engineering and Physical Sciences Research Council (EPSRC). The research will be presented at the International Conference on Acoustics, Speech and Signal Processing (ICASSP) in Shanghai.

A morphing metal for soft robots and other machines

Morphed configurations that demonstrate the composite’s ability to hold bent (A), twisted (B), relaxed (C), and elongated (D) positions at room temperature (credit: Ilse M. Van Meerbeek et al./Advanced Materials)

Cornell University engineering professor Rob Shepherd and his group have developed a hybrid material combining a stiff metal called Field’s metal and a soft, porous silicone foam. Think T-1000 Terminator.

The material combines the best properties of both — stiffness when it’s called for, and elasticity when a change of shape is required. The material also has the ability to self-heal following damage.

“Sometimes you want a robot, or any machine, to be stiff,” said Shepherd. “But when you make them stiff, they can’t morph their shape very well. To give a soft robot both capabilities, to be able to morph their structure but also to be stiff and bear load, that’s what this material does.”

In addition to its low melting point of 144 degrees Fahrenheit, Field’s metal was chosen because, unlike similar alloys, it contains no lead, making it biocompatible.


Cornell University | Metal Elastomer Composite

To create the hybrid material, the elastomer foam is dipped into the molten metal, then placed in a vacuum so that the air in the foam’s pores is removed and replaced by the alloy. The foam has pore sizes of about 2 millimeters, which can be tuned to create a stiffer or a more flexible material. In testing of its strength and elasticity, the material showed an ability to deform when heated above 144 degrees, regain rigidity when cooled, then return to its original shape and strength when reheated.

His group’s work has been published in Advanced Materials and will be the cover story in an upcoming issue of the journal’s print edition.

The work was supported by the U.S. Air Force Office of Scientific Research, the National Science Foundation, and the Alfred P. Sloan Foundation.


Abstract of Morphing Metal and Elastomer Bicontinuous Foams for Reversible Stiffness, Shape Memory, and Self-Healing Soft Machines

A metal–elastomer-foam composite that varies in stiffness, that can change shape and store shape memory, that self-heals, and that welds into monolithic structures from smaller components is presented.