Metalens with artificial muscle simulates (and goes way beyond) human-eye and camera optical functions

A silicon-based metalens just 30 micrometers thick is mounted on a transparent, stretchy polymer film. The colored iridescence is produced by the large number of nanostructures within the metalens. (credit: Harvard SEAS)

Researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a breakthrough electronically controlled artificial eye. The thin, flat, adaptive silicon nanostructure (“metalens”) can simultaneously control focus, astigmatism, and image shift (three of the major contributors to blurry images) in real time, which the human eye (and eyeglasses) cannot do.

The 30-micrometer-thick metalens is tuned by lateral (in-plane) actuation to achieve optical zoom, autofocus, and image stabilization — making it possible to replace bulky lens systems in future optical systems used in eyeglasses, cameras, cell phones, and augmented- and virtual-reality devices.

The research is described in an open-access paper in Science Advances. In another paper, recently published in Optics Express, the researchers demonstrated the design and fabrication of metalenses up to centimeters in diameter and beyond.* That makes it possible to unify two industries, semiconductor manufacturing and lens-making, so the same technology used to make computer chips can be used to make metasurface-based optical components, such as lenses.

The adaptive metalens (right) focuses light rays onto an image sensor (left), such as one in a camera. An electrical signal controls the shape of the metalens to produce the desired optical wavefront patterns (shown in red), resulting in improved images. In the future, adaptive metalenses will be built into imaging systems, such as cell phone cameras and microscopes, enabling flat, compact autofocus as well as the capability for simultaneously correcting optical aberrations and performing optical image stabilization, all in a single plane of control. (credit: Second Bay Studios/Harvard SEAS)

Simulating the human eye’s lens and ciliary muscles

In the human eye, the lens is surrounded by the ciliary muscle, which stretches or compresses the lens, changing its shape to adjust its focal length. To achieve that function, the researchers adhered a metalens to a thin, transparent dielectric elastomer actuator (“artificial muscle”). They chose a dielectric elastomer with low loss — meaning light travels through the material with little scattering.
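
How lateral stretching translates into focal-length tuning can be seen from a simple scaling sketch (this is not the paper’s full strain-field derivation; it assumes an ideal, uniformly stretched phase profile):

```latex
% Ideal metalens phase profile, and its paraxial form after an equibiaxial stretch by factor s
\varphi(r) = -\frac{2\pi}{\lambda}\left(\sqrt{r^{2}+f^{2}}-f\right)
\approx -\frac{2\pi}{\lambda}\,\frac{r^{2}}{2f},
\qquad
\varphi_{s}(r) = \varphi(r/s) \approx -\frac{2\pi}{\lambda}\,\frac{r^{2}}{2\,s^{2}f}
\;\Rightarrow\; f' \approx s^{2} f .
```

Under this simplified scaling, the >100% focal-length tuning reported in the paper corresponds to lateral strains of roughly 40% (s ≈ 1.4).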

(Top) Schematic of the metasurface and dielectric elastomer actuators (“artificial muscles”), showing how the artificial muscles change focus, similar to how the ciliary muscle in the eye works. An applied voltage supplies transparent, stretchable electrode layers (gray), made up of single-wall carbon-nanotube nanopillars, with electrical charges (acting as a capacitor). The resulting electrostatic attraction compresses (red arrows) the dielectric elastomer actuators (artificial muscles) in the thickness direction and expands (black arrows) the elastomers in the lateral direction. The silicon metasurface (in the center), applied by photolithography, can simultaneously focus, control aberrations caused by astigmatism, and perform image shift. (Bottom) Photo of the actual device. (credit: Alan She et al./Sci. Adv.)

Next, the researchers aim to further improve the functionality of the lens and decrease the voltage required to control it.

The research was performed at the Harvard John A. Paulson School of Engineering and Applied Sciences, supported in part by the Air Force Office of Scientific Research and by the National Science Foundation. This work was performed in part at the Center for Nanoscale Systems (CNS), which is supported by the National Science Foundation. The Harvard Office of Technology Development is exploring commercialization opportunities.

* To build the artificial eye with a larger (more functional) metalens, the researchers had to develop a new algorithm to shrink the file size to make it compatible with the technology currently used to fabricate integrated circuits.

** “All optical systems with multiple components — from cameras to microscopes and telescopes — have slight misalignments or mechanical stresses on their components, depending on the way they were built and their current environment, that will always cause small amounts of astigmatism and other aberrations, which could be corrected by an adaptive optical element,” said Alan She, a graduate student at SEAS and first author of the paper. “Because the adaptive metalens is flat, you can correct those aberrations and integrate different optical capabilities onto a single plane of control. Our results demonstrate the feasibility of embedded autofocus, optical zoom, image stabilization, and adaptive optics, which are expected to become essential for future chip-scale image sensors. Furthermore, the device’s flat construction and inherently lateral actuation without the need for motorized parts allow for highly stackable systems such as those found in stretchable electronic eye camera sensors, providing possibilities for new kinds of imaging systems.”


Abstract of Adaptive metalenses with simultaneous electrical control of focal length, astigmatism, and shift

Focal adjustment and zooming are universal features of cameras and advanced optical systems. Such tuning is usually performed longitudinally along the optical axis by mechanical or electrical control of focal length. However, the recent advent of ultrathin planar lenses based on metasurfaces (metalenses), which opens the door to future drastic miniaturization of mobile devices such as cell phones and wearable displays, mandates fundamentally different forms of tuning based on lateral motion rather than longitudinal motion. Theory shows that the strain field of a metalens substrate can be directly mapped into the outgoing optical wavefront to achieve large diffraction-limited focal length tuning and control of aberrations. We demonstrate electrically tunable large-area metalenses controlled by artificial muscles capable of simultaneously performing focal length tuning (>100%) as well as on-the-fly astigmatism and image shift corrections, which until now were only possible in electron optics. The device thickness is only 30 μm. Our results demonstrate the possibility of future optical microscopes that fully operate electronically, as well as compact optical systems that use the principles of adaptive optics to correct many orders of aberrations simultaneously.


Abstract of Large area metalenses: design, characterization, and mass manufacturing

Optical components, such as lenses, have traditionally been made in the bulk form by shaping glass or other transparent materials. Recent advances in metasurfaces provide a new basis for recasting optical components into thin, planar elements, having similar or better performance using arrays of subwavelength-spaced optical phase-shifters. The technology required to mass produce them dates back to the mid-1990s, when the feature sizes of semiconductor manufacturing became considerably denser than the wavelength of light, advancing in stride with Moore’s law. This provides the possibility of unifying two industries: semiconductor manufacturing and lens-making, whereby the same technology used to make computer chips is used to make optical components, such as lenses, based on metasurfaces. Using a scalable metasurface layout compression algorithm that exponentially reduces design file sizes (by 3 orders of magnitude for a centimeter diameter lens) and stepper photolithography, we show the design and fabrication of metasurface lenses (metalenses) with extremely large areas, up to centimeters in diameter and beyond. Using a single two-centimeter diameter near-infrared metalens less than a micron thick fabricated in this way, we experimentally implement the ideal thin lens equation, while demonstrating high-quality imaging and diffraction-limited focusing.

Measuring deep-brain neurons’ electrical signals at high speed with light instead of electrodes

MIT researchers have developed a light-sensitive protein that can be embedded into neuron membranes, where it emits a fluorescent signal that indicates how much voltage a particular cell is experiencing. This could allow scientists to more effectively study how neurons behave, millisecond by millisecond, as the brain performs a particular function. (credit: Courtesy of the researchers)

Researchers at MIT have developed a new approach to measure electrical activity deep in the brain: using light — an easier, faster, and more informative method than inserting electrodes.

They’ve developed a new light-sensitive protein that can be embedded into neuron membranes, where it emits a fluorescent signal that indicates how much voltage a particular cell is experiencing. This could allow scientists to study how neurons behave, millisecond by millisecond, as the brain performs a particular function.

Better than electrodes. “If you put an electrode in the brain, it’s like trying to understand a phone conversation by hearing only one person talk,” says Edward Boyden*, Ph.D., an associate professor of biological engineering and brain and cognitive sciences at MIT and a pioneer in optogenetics (a technique that allows scientists to control neurons’ electrical activity with light by engineering them to express light-sensitive proteins). Boyden is the senior author of the study, which appears in the Feb. 26 issue of Nature Chemical Biology.

“Now we can record the neural activity of many cells in a neural circuit and hear them as they talk to each other,” he says. The new method is also more effective than current optogenetics methods, which also use light-sensitive proteins to silence or stimulate neuron activity.

“Imaging of neuronal activity using voltage sensors opens up the exciting possibility for simultaneous recordings of large populations of neurons with single-cell single-spike resolution in vivo,” the researchers report in the paper.

Robot-controlled protein evolution. For the past two decades, Boyden and other scientists have sought a way to monitor electrical activity in the brain through optogenetic imaging, instead of recording with electrodes. But fluorescent molecules used for this kind of imaging have been limited in their speed of response, sensitivity to changes in voltage, and resistance to photobleaching (fading caused by exposure to light).

Instead, Boyden and his colleagues built a robot that could screen millions of proteins, generated through a process called “directed protein evolution,” for the traits they wanted.** To demonstrate the power of this approach, they then narrowed the evolved protein variants down to a top performer, which they called “Archon1.” After the Archon1 gene is delivered into a cell, the expressed Archon1 protein embeds itself into the cell membrane — the ideal place for accurate measurement of a cell’s electrical activity.

Using light to measure neuron voltages. When the Archon1 cells are then exposed to a certain wavelength of reddish-orange light, the protein emits a longer wavelength of red light, and the brightness of that red light corresponds to the voltage (in millivolts) of that cell at a given moment in time. The researchers were able to use this method to measure electrical activity in mouse brain-tissue slices, in transparent zebrafish larvae, and in the transparent worm C. elegans (being transparent makes it easy to expose these organisms to light and to image the resulting fluorescence).
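
To make the optical readout concrete, here is a minimal sketch of how a fluorescence trace could be converted into an approximate membrane voltage. It is illustrative only: the linear calibration constants below are hypothetical placeholders, not values from the Archon1 paper, and real recordings require calibration against patch-clamp measurements.

```python
import numpy as np

def fluorescence_to_voltage(trace, baseline_frames=100,
                            mv_per_dff=125.0, v_rest_mv=-70.0):
    """Convert a fluorescence time series to an approximate membrane voltage.

    Assumes a linear indicator response: `mv_per_dff` (mV per unit dF/F) and
    `v_rest_mv` are hypothetical calibration values, not published numbers.
    """
    trace = np.asarray(trace, dtype=float)
    f0 = trace[:baseline_frames].mean()      # resting-state fluorescence
    dff = (trace - f0) / f0                  # relative fluorescence change, dF/F
    return v_rest_mv + mv_per_dff * dff      # linear map to millivolts

# Example: a synthetic 1 kHz recording with one brief, spike-like transient
t = np.arange(1000)
trace = 1000.0 + 40.0 * np.exp(-((t - 500) / 3.0) ** 2)
voltage_mv = fluorescence_to_voltage(trace)
print(round(float(voltage_mv.max()), 1))     # peak of the reconstructed transient
```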

The researchers are now working on using this technology to measure brain activity in live mice as they perform various tasks, which Boyden believes should allow for mapping neural circuits and discovering how the circuits produce specific behaviors. “We will be able to watch a neural computation happen,” he says. “Over the next five years or so, we’re going to try to solve some small brain circuits completely. Such results might take a step toward understanding what a thought or a feeling actually is.”

The researchers also showed that Archon1 can be used in conjunction with current optogenetics methods. In experiments with C. elegans, the researchers demonstrated that they could stimulate one neuron using blue light and then use Archon1 to measure the resulting effect in neurons that receive input from that cell.

Detecting electrical activity at millisecond speed. Harvard professor Adam Cohen, who developed the predecessor to Archon1, says the new protein brings scientists closer to the goal of imaging electrical activity in live brains at a millisecond timescale (1,000 measurements per second).

“Traditionally, it has been excruciatingly labor-intensive to engineer fluorescent voltage indicators, because each mutant had to be cloned individually and then tested through a slow, manual patch-clamp electrophysiology measurement,” says Cohen, who was not involved in this study. “The Boyden lab developed a very clever high-throughput screening approach to this problem. Their new reporter looks really great in fish and worms and in brain slices. I’m eager to try it in my lab.”

The research was funded by the HHMI-Simons Faculty Scholars Program, the IET Harvey Prize, the MIT Media Lab, the New York Stem Cell Foundation Robertson Award, the Open Philanthropy Project, John Doerr, the Human Frontier Science Program, the Department of Defense, the National Science Foundation, and the National Institutes of Health, including an NIH Director’s Pioneer Award.

* Boyden is also a member of MIT’s Media Lab, McGovern Institute for Brain Research, and Koch Institute for Integrative Cancer Research, and an HHMI-Simons Faculty Scholar.

** The researchers made 1.5 million mutated versions of a light-sensitive protein called QuasAr2 (previously engineered by Adam Cohen’s lab at Harvard University and based on the molecule Arch, which the Boyden lab reported in 2010). The researchers put each of those genes into mammalian cells (one mutant per cell), then grew the cells in lab dishes and used an automated microscope to take pictures of the cells. The robot was able to identify cells with proteins that met the criteria the researchers were looking for, the most important being the protein’s location within the cell and its brightness. The research team then selected five of the best candidates and did another round of mutation, generating 8 million new candidates. The robot picked out the seven best of these, which the researchers then narrowed down to Archon1.
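
The multi-round screen described above is, at its core, a select-and-mutate loop with more than one selection criterion. Here is a toy sketch of that logic (illustrative only: the traits, thresholds, scoring, and “mutation” step are hypothetical stand-ins for the lab’s actual image-based measurements and molecular biology):

```python
import random

def measure(variant):
    """Stand-in for automated microscopy: returns (brightness, membrane localization)."""
    b, loc = variant
    return b + random.gauss(0, 0.05), loc + random.gauss(0, 0.05)

def screen_round(variants, keep=5, brightness_min=0.5, localization_min=0.5):
    """Keep only variants that pass all criteria, ranked by a combined score."""
    passing = [(bm * lm, v)
               for v in variants
               for bm, lm in [measure(v)]
               if bm >= brightness_min and lm >= localization_min]
    return [v for _, v in sorted(passing, reverse=True)[:keep]]

def mutate(parent, n_children):
    """Stand-in for mutagenesis: jitter the parent's (latent) trait values."""
    return [(parent[0] + random.gauss(0, 0.1), parent[1] + random.gauss(0, 0.1))
            for _ in range(n_children)]

# Two rounds of screen-then-mutate over a toy library, mirroring the workflow above
library = [(random.random(), random.random()) for _ in range(100_000)]
for _ in range(2):
    winners = screen_round(library)
    library = [child for parent in winners for child in mutate(parent, 1_000)]
print(screen_round(library, keep=1))   # the toy equivalent of "Archon1"
```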


Abstract of A robotic multidimensional directed evolution approach applied to fluorescent voltage reporters

We developed a new way to engineer complex proteins toward multidimensional specifications using a simple, yet scalable, directed evolution strategy. By robotically picking mammalian cells that were identified, under a microscope, as expressing proteins that simultaneously exhibit several specific properties, we can screen hundreds of thousands of proteins in a library in just a few hours, evaluating each along multiple performance axes. To demonstrate the power of this approach, we created a genetically encoded fluorescent voltage indicator, simultaneously optimizing its brightness and membrane localization using our microscopy-guided cell-picking strategy. We produced the high-performance opsin-based fluorescent voltage reporter Archon1 and demonstrated its utility by imaging spiking and millivolt-scale subthreshold and synaptic activity in acute mouse brain slices and in larval zebrafish in vivo. We also measured postsynaptic responses downstream of optogenetically controlled neurons in C. elegans.

Low-cost EEG can now be used to reconstruct images of what you see

(left:) Test image displayed on computer monitor. (right:) Image captured by EEG and decoded. (credit: Dan Nemrodov et al./eNeuro)

A new technique developed by University of Toronto Scarborough neuroscientists has, for the first time, used EEG detection of brain activity in reconstructing images of what people perceive.

The new technique “could provide a means of communication for people who are unable to verbally communicate,” said Dan Nemrodov, Ph.D., a postdoctoral fellow in Assistant Professor Adrian Nestor’s lab at U of T Scarborough. “It could also have forensic uses for law enforcement in gathering eyewitness information on potential suspects, rather than relying on verbal descriptions provided to a sketch artist.”

(left:) EEG electrodes used in the study (photo credit: Ken Jones). (right in red:) The area where the images were detected, the occipital lobe, is the visual processing center of the mammalian brain, containing most of the anatomical region of the visual cortex. (credit: CC/Wikipedia)

For the study, test subjects were shown images of faces while their brain activity was recorded by EEG (electroencephalogram) electrodes over the occipital lobe, the brain’s visual processing center. The researchers then processed the data using machine-learning algorithms to digitally recreate the image in the subject’s mind.

More practical than fMRI for reconstructing brain images

This new technique was pioneered by Nestor, who successfully reconstructed facial images from functional magnetic resonance imaging (fMRI) data in the past.

According to Nemrodov, techniques like fMRI — which measures brain activity by detecting changes in blood flow — can grab finer details of what’s going on in specific areas of the brain, but EEG has greater practical potential given that it’s more common, portable, and inexpensive by comparison.

While fMRI captures activity at the time scale of seconds, EEG captures activity at the millisecond scale, he says. “So we can see, with very fine detail, how the percept of a face develops in our brain using EEG.” The researchers found that it takes the brain about 120 milliseconds (0.12 seconds) to form a good representation of a face we see, but the important time period for recording starts around 200 milliseconds, Nemrodov says. That’s followed by machine-learning processing to decode the image.*

This study provides validation that EEG has potential for this type of image reconstruction, notes Nemrodov, something many researchers doubted was possible, given its apparent limitations.

Clinical and forensic uses

“The fact we can reconstruct what someone experiences visually based on their brain activity opens up a lot of possibilities,” says Nestor. “It unveils the subjective content of our mind and it provides a way to access, explore, and share the content of our perception, memory, and imagination.”

Work is now underway in Nestor’s lab to test how EEG could be used to reconstruct images from a wider range of objects beyond faces — even to show “what people remember or imagine, or what they want to express,” says Nestor. (A new creative tool?)

The research, which is published (open-access) in the journal eNeuro, was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) and by a Connaught New Researcher Award.

* “After we obtain event-related potentials (ERPs) [the measured brain response from a visual sensory event, in this case] — we use a support vector machine (SVM) algorithm to compute pairwise classifications of the visual image identities,” Nemrodov explained to KurzweilAI. “Based on the resulting dissimilarity matrix, we build a face space from which we estimate in a pixel-wise manner the appearance of every individual left-out (to avoid circularity) face. We do it by a linear combination of the classification images plus the origin of the face space.” The method is based on a former study: Nestor, A., Plaut, D. C., & Behrmann, M. (2016). Feature-based face representations and image reconstruction from behavioral and neural data. Proceedings of the National Academy of Sciences, 113(2), 416–421.
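
A rough code sketch of the pipeline Nemrodov describes may help make it concrete (illustrative only: the array shapes, the MDS embedding, and the distance-based weighting below are simplified stand-ins for the study’s actual procedure):

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.manifold import MDS

def pairwise_dissimilarity(erps, labels):
    """Dissimilarity between face identities = pairwise SVM decoding accuracy.

    `erps` is an (n_trials, n_features) array of ERP patterns; `labels` holds
    the face-identity label of each trial.
    """
    ids = np.unique(labels)
    d = np.zeros((len(ids), len(ids)))
    for i, j in combinations(range(len(ids)), 2):
        mask = np.isin(labels, [ids[i], ids[j]])
        acc = cross_val_score(SVC(kernel="linear"),
                              erps[mask], labels[mask], cv=5).mean()
        d[i, j] = d[j, i] = acc              # better decoding -> more dissimilar
    return d

def reconstruct(face_images, diss, left_out):
    """Pixel-wise reconstruction of a left-out face from the EEG-derived face space.

    Simplified: `face_images` is an (n_identities, height, width) array;
    identities are embedded with MDS, and the left-out face is approximated as
    the face-space origin (mean of the other images) plus a weighted sum of
    those images, weighted by proximity in the face space.
    """
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(diss)
    others = [k for k in range(len(face_images)) if k != left_out]
    dist = np.linalg.norm(coords[others] - coords[left_out], axis=1)
    w = 1.0 / (dist + 1e-6)
    w /= w.sum()
    origin = face_images[others].mean(axis=0)
    return origin + np.tensordot(w, face_images[others] - origin, axes=1)
```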


University of Toronto Scarborough | Do you see what I see? Harnessing brain waves can help reconstruct mental images


Nature Video | Reading minds


Abstract of The Neural Dynamics of Facial Identity Processing: insights from EEG-Based Pattern Analysis and Image Reconstruction

Uncovering the neural dynamics of facial identity processing along with its representational basis outlines a major endeavor in the study of visual processing. To this end, here we record human electroencephalography (EEG) data associated with viewing face stimuli; then, we exploit spatiotemporal EEG information to determine the neural correlates of facial identity representations and to reconstruct the appearance of the corresponding stimuli. Our findings indicate that multiple temporal intervals support: facial identity classification, face space estimation, visual feature extraction and image reconstruction. In particular, we note that both classification and reconstruction accuracy peak in the proximity of the N170 component. Further, aggregate data from a larger interval (50-650 ms after stimulus onset) support robust reconstruction results, consistent with the availability of distinct visual information over time. Thus, theoretically, our findings shed light on the time course of face processing while, methodologically, they demonstrate the feasibility of EEG-based image reconstruction.

Do our brains use the same kind of deep-learning algorithms used in AI?

This is an illustration of a multi-compartment neural network model for deep learning. Left: Reconstruction of pyramidal neurons from mouse primary visual cortex, the most prevalent cell type in the cortex. The tree-like form separates the “roots” at the bottom of the neuron, positioned just where they need to be to receive signals about sensory input, from the “branches” at the top, which are well placed to receive feedback error signals. Right: Illustration of simplified pyramidal neuron models. (credit: CIFAR)

Deep-learning researchers have found that certain neurons in the brain have shapes and electrical properties that appear to be well-suited for “deep learning” — the kind of machine intelligence used to beat humans at Go and chess.

Canadian Institute For Advanced Research (CIFAR) Fellow Blake Richards and his colleagues — Jordan Guerguiev at the University of Toronto, Scarborough, and Timothy Lillicrap at Google DeepMind — developed an algorithm that simulates how a deep-learning network could work in our brains. It represents a biologically realistic way by which real brains could do deep learning.*

The finding is detailed in a study published December 5th in the open-access journal eLife. (The paper is highly technical; Adam Shai of Stanford University and Matthew E. Larkum of Humboldt University, Germany wrote a more accessible paper summarizing the ideas, published in the same eLife issue.)

Seeing the trees and the forest

Image of a neuron recorded in Blake Richards’ lab (credit: Blake Richards)

“Most of these neurons are shaped like trees, with ‘roots’ deep in the brain and ‘branches’ close to the surface,” says Richards. “What’s interesting is that these roots receive a different set of inputs than the branches that are way up at the top of the tree.” That separation allows the sensory input signals and the feedback error signals to be handled independently, which is what deep learning requires.

Using this knowledge of the neurons’ structure, the researchers built a computer model of neurons with the same shapes, receiving signals in specific compartments. It turns out that these compartments allowed simulated neurons in different layers to collaborate — achieving deep learning.

“It’s just a set of simulations so it can’t tell us exactly what our brains are doing, but it does suggest enough to warrant further experimental examination if our own brains may use the same sort of algorithms that they use in AI,” Richards says.

“No one has tested our predictions yet,” he told KurzweilAI. “But, there’s a new preprint that builds on what we were proposing in a nice way from Walter Senn’s group, and which includes some results on unsupervised learning (Yoshua [Bengio] mentions this work in his talk).”

How the brain achieves deep learning

The tree-like pyramidal neocortex neurons are only one of many types of cells in the brain. Richards says future research should model different brain cells and examine how they interact together to achieve deep learning. In the long term, he hopes researchers can overcome major challenges, such as how to learn through experience without receiving feedback or to solve the “credit assignment problem.”**

Deep learning has brought about machines that can “see” the world more like humans can, and recognize language. But does the brain actually learn this way? The answer has the potential to create more powerful artificial intelligence and unlock the mysteries of human intelligence, he believes.

“What we might see in the next decade or so is a real virtuous cycle of research between neuroscience and AI, where neuroscience discoveries help us to develop new AI and AI can help us interpret and understand our experimental data in neuroscience,” Richards says.

Perhaps this kind of research could one day also address future ethical and other human-machine-collaboration issues — including merger, as Elon Musk and Ray Kurzweil have proposed, to achieve a “soft takeoff” in the emergence of superintelligence.

* This research idea goes back to AI pioneers Geoffrey Hinton, a CIFAR Distinguished Fellow and founder of the Learning in Machines & Brains program, and program Co-Director Yoshua Bengio, one of the main motivators behind founding the program. These researchers sought not only to develop artificial intelligence, but also to understand how the human brain learns, says Richards.

In the early 2000s, Richards and Lillicrap took a course with Hinton at the University of Toronto and were convinced deep learning models were capturing “something real” about how human brains work. At the time, there were several challenges to testing that idea. Firstly, it wasn’t clear that deep learning could achieve human-level skill. Secondly, the algorithms violated biological facts proven by neuroscientists.

The paper builds on research from Bengio’s lab on a more biologically plausible way to train neural nets and an algorithm developed by Lillicrap that further relaxes some of the rules for training neural nets. The paper also incorporates research from Matthew Larkum on the structure of neurons in the neocortex.

By combining neurological insights with existing algorithms, Richards’ team was able to create a better and more realistic algorithm for simulating learning in the brain.

The study was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC), a Google Faculty Research Award, and CIFAR.

** In the paper, the authors note that a large gap exists between deep learning in AI and our current understanding of learning and memory in neuroscience. “In particular, unlike deep learning researchers, neuroscientists do not yet have a solution to the ‘credit assignment problem’ (Rumelhart et al., 1986; Lillicrap et al., 2016; Bengio et al., 2015). Learning to optimize some behavioral or cognitive function requires a method for assigning ‘credit’ (or ‘blame’) to neurons for their contribution to the final behavioral output (LeCun et al., 2015; Bengio et al., 2015). The credit assignment problem refers to the fact that assigning credit in multi-layer networks is difficult, since the behavioral impact of neurons in early layers of a network depends on the downstream synaptic connections.” The authors go on to suggest a solution.
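
The credit assignment problem is what backpropagation solves in artificial networks by sending errors backward through the same weights used in the forward pass, something real neurons are not thought to do. One biologically motivated workaround from the cited work (Lillicrap et al., 2016) is “feedback alignment,” in which errors are fed back through fixed random weights instead. The toy sketch below illustrates that idea only; it is not the multi-compartment model from the eLife paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network trained on a toy regression task
n_in, n_hid, n_out, lr = 10, 32, 1, 0.05
W1 = rng.normal(0, 0.3, (n_hid, n_in))    # forward weights, layer 1
W2 = rng.normal(0, 0.3, (n_out, n_hid))   # forward weights, layer 2
B  = rng.normal(0, 0.3, (n_hid, n_out))   # fixed random feedback weights (not W2.T)

X = rng.normal(size=(200, n_in))
y = X[:, :1] * X[:, 1:2]                  # target: product of the first two inputs

for epoch in range(500):
    h = np.tanh(X @ W1.T)                 # forward pass, hidden layer
    out = h @ W2.T                        # forward pass, output layer
    err = out - y                         # output error

    # Credit assignment: project the error back through B, not through W2.T
    delta_h = (err @ B.T) * (1.0 - h ** 2)

    W2 -= lr * err.T @ h / len(X)
    W1 -= lr * delta_h.T @ X / len(X)

print(float(np.mean((out - y) ** 2)))     # mean squared error after training
```

Even though the feedback weights are random and never updated, the forward weights tend to align with them over training, so useful error signals still reach the early layer, which is one reason such schemes are considered more biologically plausible than exact backpropagation.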


Two new wearable sensors may replace traditional medical diagnostic devices

Throat-motion sensor monitors stroke effects more effectively

A radical new type of stretchable, wearable sensor that measures vocal-cord movements could be a “game changer” for stroke rehabilitation, according to Northwestern University scientists. The sensors can also measure swallowing ability (which may be affected by stroke), heart function, muscle activity, and sleep quality. Developed in the lab of engineering professor John A. Rogers, Ph.D., in partnership with Shirley Ryan AbilityLab in Chicago, the new sensors have been deployed to tens of patients.

“One of the biggest problems we face with stroke patients is that their gains tend to drop off when they leave the hospital,” said Arun Jayaraman, Ph.D., research scientist at the Shirley Ryan AbilityLab and a wearable-technology expert. “With the home monitoring enabled by these sensors, we can intervene at the right time, which could lead to better, faster recoveries for patients.”

(credit: Elliott Abel/ Shirley Ryan AbilityLab)

Monitoring movements, not sounds. The new band-aid-like stretchable throat sensor (two are applied) measures speech patterns by detecting throat movements to improve diagnosis and treatment of aphasia, a communication disorder associated with stroke.

Speech-language pathologists currently monitor patients’ speech functions using microphones, which can’t distinguish between a patient’s voice and ambient noise.

(credit: Elliott Abel/ Shirley Ryan AbilityLab)

Full-body kinematics. AbilityLab also uses similar electronic biosensors (developed in Rogers’ lab) on the legs, arms and chest to monitor stroke patients’ recovery progress. The sensors stream data wirelessly to clinicians’ phones and computers, providing a quantitative, full-body picture of patients’ advanced physical and physiological responses in real time.

Patients can wear them even after they leave the hospital, allowing doctors to understand how their patients are functioning in the real world.


(credit: Elliott Abel/ Shirley Ryan AbilityLab)

Mobile displays. Data from the sensors will be presented in a simple iPad-like display that is easy for both clinicians and patients to understand. It will send alerts when patients are under-performing on a certain metric and allow them to set and track progress toward their goals. A smartphone app can also help patients make corrections.

The researchers plan to test the sensors on patients with other conditions, such as Parkinson’s disease.


(credit: Elliott Abel/ Shirley Ryan AbilityLab)

Body-chemicals sensor. Another patch developed by the Rogers Lab does colorimetric analysis — determining the concentration of a chemical — for measuring sweat rate/loss and electrolyte loss. The Rogers Lab has a contract with Gatorade, and is testing this technology with the U.S. Air Force, the Seattle Mariners, and other unnamed sports teams.

Phone apps will also be available to capture the precise colors and extract the data algorithmically.

A wearable electrocardiogram

Electrocardiogram on a prototype skin sensor (credit: 2018 Takao Someya Research Group)

Wearing your heart on your sleeve. Imagine looking at an electrocardiogram displayed on your wrist, using a simple skin sensor (replacing the usual complex array of EKG body electrodes), linked wirelessly to a smartphone or the cloud.

That’s the concept for a new wearable device developed by a team headed by Professor Takao Someya at the University of Tokyo’s Graduate School of Engineering and Dai Nippon Printing (DNP). It’s designed to provide continuous, non-invasive health monitoring.


The soft, flexible skin display is about 1 millimeter thick. (credit: 2018 Takao Someya Research Group.)

Stretchable nanomesh. The device uses a lightweight sensor made from a nanomesh electrode and a display made from a 16 x 24 array of micro LEDs and stretchable wiring, mounted on a rubber sheet. It’s stretchable by up to 45 percent of its original length and can be worn on the skin continuously for a week without causing inflammation.

The sensor can also measure temperature, pressure, and the electrical properties of muscle, and can display messages on skin.

DNP hopes to bring the integrated skin display to market within three years.

Neuroscientists reverse Alzheimer’s disease in mice

The brain of a 10-month-old mouse with Alzheimer’s disease (left) is full of amyloid plaques (red). These hallmarks of Alzheimer’s disease are reversed in animals that have gradually lost the BACE1 enzyme (right). (credit: Hu et al., 2018)

Researchers from the Cleveland Clinic Lerner Research Institute have completely reversed the formation of amyloid plaques in the brains of mice with Alzheimer’s disease by gradually depleting an enzyme called BACE1. The procedure also improved the animals’ cognitive function.

The study, published February 14 in the Journal of Experimental Medicine, raises hopes that drugs targeting this enzyme will be able to successfully treat Alzheimer’s disease in humans.


Background: Serious side effects

One of the earliest events in Alzheimer’s disease is an abnormal buildup of beta-amyloid peptide, which can form large, amyloid plaques in the brain and disrupt the function of neuronal synapses. The BACE1 (aka beta-secretase) enzyme helps produce beta-amyloid peptide by cleaving (splitting) amyloid precursor protein (APP). So drugs that inhibit BACE1 are being developed as potential Alzheimer’s disease treatments. But that’s a problem because BACE1 also controls many important neural processes; accidental cleaving of other proteins instead of APP could lead these drugs to have serious side effects. For example, mice completely lacking BACE1 suffer severe neurodevelopmental defects.


A genetic-engineering solution

To deal with the serious side effects, the researchers generated mice that gradually lose the BACE1 enzyme as they grow older. These mice developed normally and appeared to remain perfectly healthy over time. The researchers then bred these rodents with mice that start to develop amyloid plaques and Alzheimer’s disease when they are 75 days old.

The resulting offspring had BACE1 levels approximately 50% lower than normal, but still formed plaques at that age. However, as these mice continued to age and lose BACE1 activity, beta-amyloid peptide levels fell and the plaques began to disappear. At 10 months old, the mice had no plaques in their brains. Loss of BACE1 also improved the learning and memory of mice with Alzheimer’s disease.

“To our knowledge, this is the first observation of such a dramatic reversal of amyloid deposition in any study of Alzheimer’s disease mouse models,” says senior author Riqiang Yan, who will become chair of the department of neuroscience at the University of Connecticut this spring.

Decreasing BACE1 activity also reversed other hallmarks of Alzheimer’s disease, such as activation of microglial cells and the formation of abnormal neuronal processes.

However, the researchers also found that depletion of BACE1 only partially restored synaptic function, suggesting that BACE1 may be required for optimal synaptic activity and cognition.

“Our study provides genetic evidence that preformed amyloid deposition can be completely reversed after sequential and increased deletion of BACE1 in the adult,” says Yan. “Our data show that BACE1 inhibitors have the potential to treat Alzheimer’s disease patients without unwanted toxicity. Future studies should develop strategies to minimize the synaptic impairments arising from significant inhibition of BACE1 to achieve maximal and optimal benefits for Alzheimer’s patients.”


Abstract of BACE1 deletion in the adult mouse reverses preformed amyloid deposition and improves cognitive functions

BACE1 initiates the generation of the β-amyloid peptide, which likely causes Alzheimer’s disease (AD) when accumulated abnormally. BACE1 inhibitory drugs are currently being developed to treat AD patients. To mimic BACE1 inhibition in adults, we generated BACE1 conditional knockout (BACE1fl/fl) mice and bred BACE1fl/fl mice with ubiquitin-CreER mice to induce deletion of BACE1 after passing early developmental stages. Strikingly, sequential and increased deletion of BACE1 in an adult AD mouse model (5xFAD) was capable of completely reversing amyloid deposition. This reversal in amyloid deposition also resulted in significant improvement in gliosis and neuritic dystrophy. Moreover, synaptic functions, as determined by long-term potentiation and contextual fear conditioning experiments, were significantly improved, correlating with the reversal of amyloid plaques. Our results demonstrate that sustained and increasing BACE1 inhibition in adults can reverse amyloid deposition in an AD mouse model, and this observation will help to provide guidance for the proper use of BACE1 inhibitors in human patients.

How to train a robot to do complex abstract thinking

Robot inspects cooler, ponders next step (credit: Intelligent Robot Lab / Brown University)

Robots are great at following programmed steps. But asking a robot to “move the green bottle from the cooler to the cupboard” would require it to have abstract representations of these things and actions, plus knowledge of its surroundings.

(“Hmm, which of those millions of pixels is a ‘cooler,’ whatever that means? How do I get inside it and also the ‘cupboard’? …”)

To help robots answer these kinds of questions and plan complex multi-step tasks, robots can construct two kinds of abstract representations of the world around them, say Brown University and MIT researchers:

  • “Procedural abstractions”: bundling low-level movements into higher-level skills (such as opening a door). Most of those robots doing fancy athletic tricks are explicitly programmed with such procedural abstractions, say the researchers.
  • “Perceptual abstractions”: making sense out of the millions of confusing pixels in the real world.

Building truly intelligent robots

According to George Konidaris, Ph.D., an assistant professor of computer science at Brown and the lead author of the new study, there’s been less progress in perceptual abstraction — the focus of the new research.

To explore this, the researchers trained a robot they called “Anathema” (aka “Ana”). They started by teaching Ana “procedural abstractions” in a room containing a cupboard, a cooler, a switch that controls a light inside the cupboard, and a bottle that could be left in either the cooler or the cupboard. They gave Ana a set of high-level motor skills for manipulating the objects in the room, such as opening and closing both the cooler and the cupboard, flipping the switch, and picking up a bottle.

Ana was also able to learn a very abstract description of the visual environment that contained only what was necessary for her to be able to perform a particular skill. Once armed with these learned abstract procedures and perceptions, the researchers gave Ana a challenge: “Take the bottle from the cooler and put it in the cupboard.”


Ana’s dynamic concept of a “cooler,” based on configurations of pixels in open and closed positions. (credit: Intelligent Robot Lab / Brown University)

Accepting the challenge, Ana navigated to the cooler. She had learned the configuration of pixels in her visual field associated with the cooler lid being closed (the only state from which it can be opened). She had also learned what opening it requires: standing in front of it while not holding anything (because she needed both hands to open the lid).

She opened the cooler and sighted the bottle. But she didn’t pick it up. Not yet.

She realized that if she had the bottle in her gripper, she wouldn’t be able to open the cupboard — that requires both hands. Instead, she went directly to the cupboard.

There, she saw that the light switch was in the “on” position, and instantly realized that opening the cupboard would block the switch. So she turned the switch off before opening the cupboard. Finally, she returned to the cooler, retrieved the bottle, and placed it in the cupboard.

She had developed the entire plan in about four milliseconds.
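
The kind of plan search that such learned symbolic abstractions enable can be sketched in a few lines. The sketch below is a hypothetical hand-coded encoding of the room and its operators (not the representation Ana learned on her own), but it shows why planning becomes easy once abstract states and operators exist:

```python
from collections import deque

# Abstract state: (robot_location, cooler_open, cupboard_open, switch_on,
#                  bottle_location, holding_bottle)
START = ("start", False, False, True, "cooler", False)

def goal(s):
    return s[4] == "cupboard" and not s[5]   # bottle in cupboard, hands free

def actions(s):
    loc, cooler, cupboard, switch, bottle, holding = s
    yield "go_to_cooler",   ("cooler", cooler, cupboard, switch, bottle, holding)
    yield "go_to_cupboard", ("cupboard", cooler, cupboard, switch, bottle, holding)
    if loc == "cooler" and not holding:                    # opening needs both hands
        yield "open_cooler", (loc, True, cupboard, switch, bottle, holding)
    if loc == "cupboard" and not holding and not switch:   # the switch blocks the door
        yield "open_cupboard", (loc, cooler, True, switch, bottle, holding)
    if loc == "cupboard":
        yield "flip_switch_off", (loc, cooler, cupboard, False, bottle, holding)
    if loc == "cooler" and cooler and bottle == "cooler" and not holding:
        yield "pick_up_bottle", (loc, cooler, cupboard, switch, "hand", True)
    if loc == "cupboard" and cupboard and holding:
        yield "put_bottle_in_cupboard", (loc, cooler, cupboard, switch, "cupboard", False)

def plan(start):
    """Breadth-first search over the abstract state space."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal(state):
            return steps
        for name, nxt in actions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, steps + [name]))

print(plan(START))   # a short sequence of abstract skills achieving the goal
```

Because the search runs over a handful of abstract state variables rather than millions of pixels, even a naive breadth-first search finds a plan almost instantly.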


“She learned these abstractions on her own”

Once a robot has high-level motor skills, it can automatically construct a compatible high-level symbolic representation of the world by making sense of its pixelated surroundings, according to Konidaris. “We didn’t provide Ana with any of the abstract representations she needed to plan for the task,” he said. “She learned those abstractions on her own, and once she had them, planning was easy.”

Her entire knowledge and skill set was represented in a text file just 126 lines long.

Konidaris says the research provides an important theoretical building block for applying artificial intelligence to robotics. “We believe that allowing our robots to plan and learn in the abstract rather than the concrete will be fundamental to building truly intelligent robots,” he said. “Many problems are often quite simple, if you think about them in the right way.”

Source: Journal of Artificial Intelligence Research (open-access). Funded by DARPA and MIT’s Intelligence Initiative.


IRL Lab | Learning Symbolic Representations for High-Level Robot Planning

Ray Kurzweil’s ‘singularity’ prediction supported by prominent AI scientists


According to an article today in the web magazine Futurism, two prominent artificial intelligence (AI) experts have agreed with inventor, author, and futurist Ray Kurzweil’s prediction of the singularity — a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed — happening in about 30 years: Patrick Winston, Ph.D., Ford Professor of Artificial Intelligence and Computer Science at the Massachusetts Institute of Technology, and Jurgen Schmidhuber, Ph.D., Chief Scientist of the company NNAISENSE, which aims to build the first practical general-purpose AI.

Schmidhuber is confident that the singularity “is just 30 years away, if the trend doesn’t break, and there will be rather cheap computational devices that have as many connections as your brain, but are much faster. There is no doubt in my mind that AIs are going to become super smart.”

Are you a cyborg?

Bioprinting a brain

Cryogenic 3D-printing soft hydrogels. Top: the bioprinting process. Bottom: SEM image of general microstructure (scale bar: 100 µm). (credit: Z. Tan/Scientific Reports)

A new bioprinting technique combines cryogenics (freezing) and 3D printing to create geometrical structures that are as soft (and complex) as the most delicate body tissues — mimicking the mechanical properties of organs such as the brain and lungs.

The idea: “Seed” porous scaffolds that can act as a template for tissue regeneration (from neuronal cells, for example), where damaged tissues are encouraged to regrow — allowing the body to heal without tissue rejection or other problems. Using “pluripotent” stem cells that can change into different types of cells is also a possibility.

Smoothy. Solid carbon dioxide (dry ice) in an isopropanol bath is used to rapidly cool hydrogel ink (a rapid liquid-to-solid phase change) as it’s extruded, yogurt-smoothy-style. Once thawed, the gel is as soft as body tissues, but doesn’t collapse under its own weight — a previous problem.

Current structures produced with this technique are “organoids” a few centimeters in size. But the researchers hope to create replicas of actual body parts with complex geometrical structures — even whole organs. That could allow scientists to carry out experiments not possible on live subjects, or for use in medical training, replacing animal bodies for surgical training and simulations. Then on to mechanobiology and tissue engineering.

Source: Imperial College London, Scientific Reports (open-access).

How to generate electricity with your body

Bending a finger generates electricity in this prototype device. (credit: Guofeng Song et al./Nano Energy)

A new triboelectric nanogenerator (TENG) design, using a gold tab attached to your skin, will convert mechanical energy into electrical energy for future wearables and self-powered electronics. Just bend your finger or take a step.

Triboelectric charging occurs when certain materials become electrically charged after coming into contact with a different material. In this new design by University of Buffalo and Chinese scientists, when a stretched layer of gold is released, it crumples, creating what looks like a miniature mountain range. An applied force leads to friction between the gold layers and an interior PDMS layer, causing electrons to flow between the gold layers.

More power to you. Previous TENG designs have been difficult to manufacture (requiring complex lithography) or too expensive. The new 1.5-centimeter-long prototype generates a maximum of 124 volts, but at only 10 microamps, and has a power density of 0.22 milliwatts per square centimeter. The team plans to use larger pieces of gold to deliver more electricity and to add a portable battery.
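
For a sense of scale, here is a rough upper bound; it assumes the quoted peak voltage and peak current occur at the same instant, which the article doesn’t state:

```latex
P_{\max} \;\le\; V_{\max}\, I_{\max} \;=\; 124\ \mathrm{V} \times 10\ \mu\mathrm{A} \;\approx\; 1.2\ \mathrm{mW}
```

That is enough for small, low-power sensors but far short of charging consumer electronics, which is why the team wants larger gold layers and a storage battery.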

Source: Nano Energy. Support: U.S. National Science Foundation, the National Basic Research Program of China, National Natural Science Foundation of China, Beijing Science and Technology Projects, Key Research Projects of the Frontier Science of the Chinese Academy of Sciences, and National Key Research and Development Plan.

This artificial electric eel may power your implants

How the eel’s electrical organs generate electricity by moving sodium (Na) and potassium (K) ions across a selective membrane. (credit: Caitlin Monney)

Taking it a giant (and a bit scary) step further, an artificial electric organ, inspired by the electric eel, could one day power your implantable sensors, prosthetic devices, medication dispensers, augmented-reality contact lenses, and countless other gadgets. Unlike typical toxic batteries that need to be recharged, these systems are soft, flexible, transparent, and potentially biocompatible.

Doubles as a defibrillator? The system mimics eels’ electrical organs, which use thousands of alternating compartments with excess potassium or sodium ions, separated by selective membranes. To create a jolt of electricity (600 volts at 1 ampere), an eel’s membranes allow the ions to flow together. The researchers built a similar system, but using sodium and chloride ions dissolved in a water-based hydrogel. It generates more than 100 volts, but at a safe low current — just enough to power a small medical device like a pacemaker.

The researchers say the technology could also lead to using naturally occurring processes inside the body to generate electricity, a truly radical step.

Source: Nature, University of Fribourg, University of Michigan, University of California-San Diego. Funding: Air Force Office of Scientific Research, National Institutes of Health.

E-skin for Terminator wannabes

A section of “e-skin” (credit: Jianliang Xiao / University of Colorado Boulder)

A new type of thin, self-healing, translucent “electronic skin” (“e-skin,” which mimics the properties of natural skin) has applications ranging from robotics and prosthetic development to better biomedical devices and human-computer interfaces.

Ready for a Terminator-style robot baby nurse? What makes this e-skin different and interesting is its embedded sensors, which can measure pressure, temperature, humidity and air flow. That makes it sensitive enough to let a robot take care of a baby, the University of Colorado mechanical engineers and chemists assure us. The skin is also rapidly self-healing (by reheating), as in The Terminator, using a mix of three commercially available compounds in ethanol.

The secret ingredient: A novel network polymer known as polyimine, which is fully recyclable at room temperature. Laced with silver nanoparticles, it can provide better mechanical strength, chemical stability and electrical conductivity. It’s also malleable, so by applying moderate heat and pressure, it can be easily conformed to complex, curved surfaces like human arms and robotic hands.

Source: University of Colorado, Science Advances (open-access). Funded in part by the National Science Foundation.

Altered Carbon

Vertebral cortical stack (credit: Netflix)

Altered Carbon takes place in the 25th century, when humankind has spread throughout the galaxy. After 250 years in cryonic suspension, a prisoner returns to life in a new body with one chance to win his freedom: by solving a mind-bending murder.

Resleeve your stack. Human consciousness can be digitized and downloaded into different bodies. A person’s memories have been encapsulated into “cortical stack” storage devices surgically inserted into the vertebrae at the back of the neck. Disposable physical bodies called “sleeves” can accept any stack.

But only the wealthy can acquire replacement bodies on a continual basis. The long-lived are called Meths, as in the Biblical figure Methuselah. The uber rich are also able to keep copies of their minds in remote storage, which they back up regularly, ensuring that even if their stack is destroyed, the backup can be resleeved (except for any period of time not backed up — as in the hack-murder).

Source: Netflix. Premiered on February 2, 2018. Based on the 2002 novel of the same title by Richard K. Morgan.