How to open the blood-brain barrier with precision for safer drug delivery

Schematic representation of the feedback-controlled focused ultrasound drug delivery system. Serving as the acoustic indicator of drug-delivery dosage, the microbubble emission signal was sensed and compared with the expected value. The difference was used as feedback to the ultrasound transducer for controlling the level of the ultrasound transmission. The ultrasound transducer and sensor were located outside the rat skull. The microbubbles were generated in the bloodstream at the target location in the brain. (credit: Tao Sun/Brigham and Women’s Hospital; adapted by KurzweilAI)

Researchers at Brigham and Women’s Hospital have developed a safer way to use focused ultrasound to temporarily open the blood-brain barrier* and deliver vital drugs for treating glioma brain tumors — an alternative to invasive surgery or radiation.

Focused ultrasound drug delivery to the brain uses “cavitation” — the ultrasound-driven oscillation of tiny injected microbubbles — to temporarily open the blood-brain barrier. The problem with this method has been that if these bubbles destabilize and collapse, they can damage critical vasculature in the brain.

To create a finer degree of control over the microbubbles and improve safety, the researchers placed a sensor outside of the rat brain to listen to ultrasound echoes bouncing off the microbubbles, as an indication of how stable the bubbles were.** That data was used to modify the ultrasound intensity, stabilizing the microbubbles to maintain safe ultrasound exposure.
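
In control terms, this is a simple feedback loop: sense the microbubble emission level, compare it with the expected value, and nudge the transmit pressure accordingly. Here is a minimal sketch of that idea in Python, assuming a basic proportional-integral update; the function names and the toy “plant” standing in for the transducer and sensor are illustrative, not the researchers’ actual controller.

```python
# Minimal sketch of a feedback-controlled sonication loop (illustrative only).
# A proportional-integral rule adjusts acoustic pressure so the sensed
# stable-cavitation level tracks the target drug-delivery dose level.

def run_closed_loop_sonication(measure, apply_pressure, target_level,
                               n_bursts=50, kp=0.05, ki=0.01):
    pressure = 0.1            # starting acoustic pressure (arbitrary units)
    integral_error = 0.0
    for _ in range(n_bursts):
        apply_pressure(pressure)
        measured = measure(pressure)          # echo energy from the microbubbles
        error = target_level - measured       # compare with the expected value
        integral_error += error
        # Raise pressure when cavitation is below target, lower it when above,
        # keeping the bubbles oscillating stably instead of collapsing.
        pressure = max(0.0, pressure + kp * error + ki * integral_error)
    return pressure

# Toy stand-ins for the transducer and sensor (purely illustrative):
def apply_pressure(p):
    pass                                      # real code would drive the transducer

def measure(p):
    return 2.0 * p                            # pretend cavitation scales with pressure

final_pressure = run_closed_loop_sonication(measure, apply_pressure, target_level=1.0)
```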

The team tested the approach in both healthy rats and in an animal model of glioma brain cancer. Further research will be needed to adapt the technique for humans, but the approach could offer improved safety and efficacy control for human clinical trials, which are now underway in Canada.

The research, published this week in the journal Proceedings of the National Academy of Sciences, was supported by the National Institutes of Health.

* The blood-brain barrier is an impassable obstacle for 98% of drugs, which it treats as pathogens, blocking them from passing from the patient’s bloodstream into the brain. Using focused ultrasound, drugs can be administered with an intravenous injection of innocuous lipid-coated gas microbubbles.

** For the ultrasound transducer, the researchers combined two spherically curved transducers (operating at a resonant frequency of 274.3 kHz) to double the effective aperture size and provide significantly improved focusing in the axial direction.


Abstract of Closed-loop control of targeted ultrasound drug delivery across the blood–brain/tumor barriers in a rat glioma model

Cavitation-facilitated microbubble-mediated focused ultrasound therapy is a promising method of drug delivery across the blood–brain barrier (BBB) for treating many neurological disorders. Unlike ultrasound thermal therapies, during which magnetic resonance thermometry can serve as a reliable treatment control modality, real-time control of modulated BBB disruption with undetectable vascular damage remains a challenge. Here a closed-loop cavitation controlling paradigm that sustains stable cavitation while suppressing inertial cavitation behavior was designed and validated using a dual-transducer system operating at the clinically relevant ultrasound frequency of 274.3 kHz. Tests in the normal brain and in the F98 glioma model in vivo demonstrated that this controller enables reliable and damage-free delivery of a predetermined amount of the chemotherapeutic drug (liposomal doxorubicin) into the brain. The maximum concentration level of delivered doxorubicin exceeded levels previously shown (using uncontrolled sonication) to induce tumor regression and improve survival in rat glioma. These results confirmed the ability of the controller to modulate the drug delivery dosage within a therapeutically effective range, while improving safety control. It can be readily implemented clinically and potentially applied to other cavitation-enhanced ultrasound therapies.

Mapping connections of single neurons using a holographic light beam

Controlling single neurons using optogenetics (credit: the researchers)

Researchers at MIT and Paris Descartes University have developed a technique for precisely mapping connections of individual neurons for the first time by triggering them with holographic laser light.

The technique is based on optogenetics (using light to stimulate or silence neurons that have been genetically modified to express light-sensitive proteins called “opsins”). Current optogenetics techniques can’t isolate individual neurons (and their connections) because the light strikes a relatively large area — stimulating the axons and dendrites of other neurons simultaneously (and these nearby neurons may have different functions).

The new technique stimulates only the soma (body) of the neuron, not its connections. To achieve that, the researchers combined two new advances: an optimized holographic light-shaping microscope* and a localized, more powerful opsin protein called CoChR.

Two-photon computer-generated holography (CGH) was used to create three-dimensional sculptures of light that envelop only a target cell, using a conventional pulsed laser coupled with a widefield epifluorescence imaging system. (credit: Or A. Shemesh et al./Nature Neuroscience)

The researchers used an opsin protein called CoChR, which generates a very strong electric current in response to light, and fused it to a small protein that directs the opsin into the cell bodies of neurons and away from axons and dendrites, which extend from the neuron body, forming “somatic channelrhodopsin” (soCoChR). This new opsin enabled photostimulation of individual cells (regions of stimulation are highlighted by magenta circles) in mouse cortical brain slices with single-cell resolution and with less than 1 millisecond temporal (time) precision — achieving connectivity mapping on intact cortical circuits without crosstalk between neurons. (credit: Or A. Shemesh et al./Nature Neuroscience)

In the new study, by combining this approach with the new “somatic channelrhodopsin” opsins that cluster in the cell body, the researchers showed they could stimulate individual neurons with not only precise spatial control but also precise control over the timing of the stimulation. When they target a specific neuron, it responds consistently every time, with variability of less than one millisecond, even when the cell is stimulated many times in a row.

“For the first time ever, we can bring the precision of single-cell control toward the natural timescales of neural computation,” says Ed Boyden, an associate professor of brain and cognitive sciences and biological engineering at MIT, and a member of MIT’s Media Lab and McGovern Institute for Brain Research. Boyden is co-senior author with Valentina Emiliani, a research director at France’s National Center for Scientific Research (CNRS) and director of the Neurophotonics Laboratory at Paris Descartes University, of a study that appears in the Nov. 13 issue of Nature Neuroscience.

Mapping neural connections in real time

Using this technique, the researchers were able to stimulate single neurons in brain slices and then measure the responses from cells that are connected to that cell. This may pave the way for more precise diagramming of the connections of the brain, and analyzing how those connections change in real time as the brain performs a task or learns a new skill.

Optogenetics was co-developed in 2005 by Ed Boyden (credit: MIT)

One possible experiment, Boyden says, would be to stimulate neurons connected to each other to try to figure out if one is controlling the others or if they are all receiving input from a far-off controller.

“It’s an open question,” he says. “Is a given function being driven from afar, or is there a local circuit that governs the dynamics and spells out the exact chain of command within a circuit? If you can catch that chain of command in action and then use this technology to prove that that’s actually a causal link of events, that could help you explain how a sensation, or movement, or decision occurs.”

As a step toward that type of study, the researchers now plan to extend this approach into living animals. They are also working on improving their targeting molecules and developing high-current opsins that can silence neuron activity.

The research was funded by the National Institutes of Health, France’s National Research Agency, the Simons Foundation for the Social Brain, the Human Frontiers Science Program, John Doerr, the Open Philanthropy Project, the Howard Hughes Medical Institute, and the Defense Advanced Research Projects Agency.

* Traditional holography is based on reproducing, with light, the shape of a specific object, in the absence of that original object. This is achieved by creating an “interferogram” that contains the information needed to reconstruct an object that was previously illuminated by a reference beam. In computer-generated holography, the interferogram is calculated by a computer without the need of any original object. Combined with two-photon excitation, CGH can be used to refocus laser light to precisely illuminate a cell or a defined group of cells in the brain.
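
To give a concrete sense of how a computer can calculate such a hologram, here is a minimal sketch of the classic Gerchberg–Saxton iteration in Python (NumPy), which finds a phase mask whose far-field intensity approximates a desired light pattern. It illustrates the general principle only; the study’s optimized two-photon holographic microscope relies on far more sophisticated light-shaping software.

```python
import numpy as np

def gerchberg_saxton(target_intensity, n_iter=50):
    """Textbook Gerchberg-Saxton loop: find a pure-phase hologram whose
    far-field (Fourier-plane) intensity approximates the target light pattern."""
    target_amp = np.sqrt(target_intensity)
    # start from a random phase at the hologram (SLM) plane
    field = np.exp(1j * 2 * np.pi * np.random.rand(*target_amp.shape))
    for _ in range(n_iter):
        far_field = np.fft.fft2(field)
        # keep the phase, impose the desired amplitude at the focal plane
        far_field = target_amp * np.exp(1j * np.angle(far_field))
        field = np.fft.ifft2(far_field)
        # the physical phase modulator can only shape phase, so force unit amplitude
        field = np.exp(1j * np.angle(field))
    return np.angle(field)   # phase mask to display on the modulator

# Example: a single bright spot (one targeted cell body) in a 256 x 256 plane
target = np.zeros((256, 256))
target[128, 128] = 1.0
phase_mask = gerchberg_saxton(target)
```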


Abstract of Temporally precise single-cell-resolution optogenetics

Optogenetic control of individual neurons with high temporal precision within intact mammalian brain circuitry would enable powerful explorations of how neural circuits operate. Two-photon computer-generated holography enables precise sculpting of light and could in principle enable simultaneous illumination of many neurons in a network, with the requisite temporal precision to simulate accurate neural codes. We designed a high-efficacy soma-targeted opsin, finding that fusing the N-terminal 150 residues of kainate receptor subunit 2 (KA2) to the recently discovered high-photocurrent channelrhodopsin CoChR restricted expression of this opsin primarily to the cell body of mammalian cortical neurons. In combination with two-photon holographic stimulation, we found that this somatic CoChR (soCoChR) enabled photostimulation of individual cells in mouse cortical brain slices with single-cell resolution and <1-ms temporal precision. We used soCoChR to perform connectivity mapping on intact cortical circuits.

New silicon ‘Neuropixels’ probes record activity of hundreds of neurons simultaneously

A section of a Neuropixels probe (credit: Howard Hughes Medical Institute)

In a $5.5 million international collaboration, researchers and engineers have developed powerful new “Neuropixels” brain probes that can simultaneously monitor the neural activity of hundreds of neurons at several layers of a rodent’s brain for the first time.

Described in a paper published today (November 8, 2017) in Nature, Neuropixels probes represent a significant advance in neuroscience measurement technology, and will allow for the most precise understanding yet of how large networks of nerve cells coordinate to give rise to behavior and cognition, according to the researchers.

Illustrating the ability to record several layers of brain regions simultaneously, 741 electrodes in two Neuropixels probes recorded signals from five major brain structures in 13 awake head-fixed mice. The number of putative single neurons from each structure is shown in parentheses. (Approximate probe locations are shown overlaid on the Allen Mouse Brain Atlas at the left.) (credit: James J. Jun et al./Nature)

Neuropixels probes are similar to electrophysiology probes that neuroscientists have used for decades to detect extracellular electrical activity in the brains of living animals — but they incorporate two critical advances:

  • The new probes are thinner (70 x 20 micrometers) than a human hair, but about as long as a mouse brain (one centimeter), so they can pass through and collect data from multiple brain regions at the same time. Each probe’s 960 recording electrodes are densely packed along its length, recording hundreds of well-resolved single-neuron signal traces and making it easier for researchers to pinpoint the cellular sources of brain activity.
  • Each of the new probes incorporates a nearly complete recording system — reducing hardware size and cost and eliminating hundreds of bulky output wires.

These features will allow researchers to collect more meaningful data in a single experiment than current technologies (which can measure the activity of individual neurons within a specific spot in the brain or can reveal larger, regional patterns of activity, but not both simultaneously).
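
With hundreds of densely packed electrodes streaming voltage in parallel, a typical first processing step is per-channel spike detection. The following is a minimal Python sketch of threshold-crossing detection on a (channels x samples) array; the noise-based threshold rule and the simulated data are illustrative assumptions, not the consortium’s processing pipeline.

```python
import numpy as np

def detect_spikes(voltage, fs=30000.0, k=4.5):
    """Per-channel threshold-crossing spike detection.
    voltage: array of shape (n_channels, n_samples), assumed band-pass filtered.
    Returns a list of spike-time arrays (in seconds), one per channel."""
    spike_times = []
    for trace in voltage:
        # robust per-channel noise estimate (median absolute deviation)
        sigma = np.median(np.abs(trace)) / 0.6745
        threshold = -k * sigma                      # negative-going spikes
        crossings = np.where((trace[1:] < threshold) & (trace[:-1] >= threshold))[0]
        spike_times.append(crossings / fs)
    return spike_times

# Example with simulated noise on 8 channels, 1 s at 30 kHz
rng = np.random.default_rng(0)
data = rng.normal(0, 10e-6, size=(8, 30000))
spikes = detect_spikes(data)
```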

To develop the probes, researchers at HHMI’s Janelia Research Campus, the Allen Institute for Brain Science, and University College London worked with engineers at imec, an international nanoelectronics research center in Leuven, Belgium, with grant funding from the Gatsby Charitable Foundation and Wellcome.

Recordings from 127 neurons in entorhinal and medial prefrontal cortices using chronic implants in unrestrained rats. (a) Schematic representation of the implant in the entorhinal cortex. MEnt, medial entorhinal cortex; PrS, presubiculum; S, subiculum; V1B, primary visual cortex, binocular area; V2L, secondary visual cortex, lateral area. (b) Filtered voltage traces from 130 channels spanning 1.3 mm of the shank. (credit: James J. Jun et al./Nature)

Accelerating neuroscience research

With nearly 1,000 electrical sensors positioned along a probe thinner than a human hair but long enough to access many regions of a rodent’s brain simultaneously, the new technology could greatly accelerate neuroscience research, says Timothy Harris, senior fellow at Janelia, leader of the Neuropixels collaboration. “You can [detect the activity of] large numbers of neurons from multiple brain regions with higher fidelity and much less difficulty,” he says.

There are currently more than 400 prototype Neuropixels probes in testing at research centers worldwide.

“At the Allen Institute for Brain Science, one of our chief goals is to decipher the cellular-level code used by the brain,” says Christof Koch, President and Chief Scientist at the Allen Institute for Brain Science. “The Neuropixels probes represent a significant leap forward in measurement technology and will allow for the most precise understanding yet of how large coalitions of nerve cells coordinate to give rise to behavior and cognition.”

Neuropixels probes are expected to be available for purchase by research laboratories by mid-2018.

Scientists from the consortium will present data collected using prototype Neuropixels probes at the Annual Meeting of the Society for Neuroscience in Washington, DC, November 11–15, 2017.

Daydreaming means you’re smart and creative

MRI scan showing regions of the default mode network, involved in daydreaming (credit: CC)

Daydreaming during meetings or class might actually be a sign that you’re smart and creative, according to a Georgia Institute of Technology study.

“People with efficient brains may have too much brain capacity to stop their minds from wandering,” said Eric Schumacher, an associate psychology professor who co-authored a research paper published in the journal Neuropsychologia.

Participants were instructed to focus on a stationary fixation point for five minutes in an MRI machine. Functional MRI (fMRI), which measures changes in brain oxygen levels as a proxy for neural activity, was used.

The researchers examined that data to find correlated brain patterns (parts of the brain that worked together) between the daydreaming “default mode network” (DMN)* and two other brain networks. The team compared that data with tests of the participants that measured their intellectual and creative ability and with a questionnaire about how much a participant’s mind wandered in daily life.**
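
In analyses of this kind, “connectivity” is typically a correlation between region-averaged fMRI time series, which is then related to behavioral scores across participants. Below is a minimal sketch of both steps, assuming the ROI time series have already been extracted; the variable names are illustrative, not the authors’ code.

```python
import numpy as np

def network_connectivity(roi_timeseries):
    """roi_timeseries: (n_timepoints, n_rois) array of BOLD signals for one
    network (e.g., DMN regions). Returns the mean pairwise Pearson correlation,
    a simple summary of within-network connectivity."""
    corr = np.corrcoef(roi_timeseries.T)            # (n_rois, n_rois)
    upper = corr[np.triu_indices_from(corr, k=1)]   # unique region pairs only
    return upper.mean()

def brain_behavior_correlation(connectivity_scores, mind_wandering_scores):
    """Correlate per-subject connectivity with questionnaire scores."""
    return np.corrcoef(connectivity_scores, mind_wandering_scores)[0, 1]
```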

Is your brain efficient?

The scientists found a correlation between mind wandering and fluid intelligence, creative ability, and more efficient brain systems.

How can you tell if your brain is efficient? One clue is that you can zone in and out of conversations or tasks when appropriate, then naturally tune back in without missing important points or steps, according to Schumacher. He says higher efficiency also means more capacity to think and the ability to mind-wander when performing easy tasks.

“Our findings remind me of the absent-minded professor — someone who’s brilliant, but off in his or her own world, sometimes oblivious to their own surroundings,” said Schumacher. “Or school children who are too intellectually advanced for their classes. While it may take five minutes for their friends to learn something new, they figure it out in a minute, then check out and start daydreaming.”

The study also involved researchers at the University of New Mexico, U.S. Army Research Laboratory, University of Pennsylvania, and Charles River Analytics. It was funded by the National Science Foundation and based on work supported by the U.S. Intelligence Advanced Research Projects Activity (IARPA).

Performing on autopilot

However, recent research at the University of Cambridge, published in the Proceedings of the National Academy of Sciences, showed that daydreaming also plays an important role in allowing us to switch to “autopilot” once we are familiar with a task.

“Rather than waiting passively for things to happen to us, we are constantly trying to predict the environment around us,” says Deniz Vatansever, who carried out the study as part of his PhD at the University of Cambridge and who is now based at the University of York.

“Our evidence suggests it is the default mode network that enables us do this. It is essentially like an autopilot that helps us make fast decisions when we know what the rules of the environment are. So for example, when you’re driving to work in the morning along a familiar route, the default mode network will be active, enabling us to perform our task without having to invest lots of time and energy into every decision.”

In the study, 28 volunteers performed a task while lying inside a magnetic resonance imaging (MRI) scanner, with functional MRI (fMRI) used to monitor their brain activity.

This new study supports an idea expounded by Daniel Kahneman, winner of the 2002 Nobel Memorial Prize in Economic Sciences, in his book Thinking, Fast and Slow: that there are two systems that help us make decisions — a slow, rational system that helps us reach calculated decisions, and a fast system that allows us to make intuitive decisions. The new research suggests this latter system may be linked with the DMN.

The researchers believe their findings have relevance to brain injury, particularly following traumatic brain injury, where problems with memory and impulsivity can substantially compromise social reintegration. They say the findings may also have relevance for mental health disorders, such as addiction, depression and obsessive compulsive disorder — where particular thought patterns drive repeated behaviors — and with the mechanisms of anesthetic agents and other drugs on the brain.

* In 2001, scientists at the Washington University School of Medicine found that a collection of brain regions appeared to be more active during such states of rest. This network was named the “default mode network.” While it has since been linked to, among other things, daydreaming, thinking about the past, planning for the future, and creativity, its precise function is unclear.

** Specifically, the researchers examined the extent to which the default mode network (DMN), along with the dorsal attention network (DAN) and frontoparietal control network (FPCN), correlate with the tendency to mind wandering in daily life, based on a five-minute resting state fMRI scan. They also used measures of executive function, fluid intelligence (Ravens), and creativity (Remote Associates Task).


Abstract of Functional connectivity within and between intrinsic brain networks correlates with trait mind wandering

Individual differences across a variety of cognitive processes are functionally associated with individual differences in intrinsic networks such as the default mode network (DMN). The extent to which these networks correlate or anticorrelate has been associated with performance in a variety of circumstances. Despite the established role of the DMN in mind wandering processes, little research has investigated how large-scale brain networks at rest relate to mind wandering tendencies outside the laboratory. Here we examine the extent to which the DMN, along with the dorsal attention network (DAN) and frontoparietal control network (FPCN) correlate with the tendency to mind wander in daily life. Participants completed the Mind Wandering Questionnaire and a 5-min resting state fMRI scan. In addition, participants completed measures of executive function, fluid intelligence, and creativity. We observed significant positive correlations between trait mind wandering and 1) increased DMN connectivity at rest and 2) increased connectivity between the DMN and FPCN at rest. Lastly, we found significant positive correlations between trait mind wandering and fluid intelligence (Ravens) and creativity (Remote Associates Task). We interpret these findings within the context of current theories of mind wandering and executive function and discuss the possibility that certain instances of mind wandering may not be inherently harmful. Due to the controversial nature of global signal regression (GSReg) in functional connectivity analyses, we performed our analyses with and without GSReg and contrast the results from each set of analyses.


Abstract of Default mode contributions to automated information processing

Concurrent with mental processes that require rigorous computation and control, a series of automated decisions and actions govern our daily lives, providing efficient and adaptive responses to environmental demands. Using a cognitive flexibility task, we show that a set of brain regions collectively known as the default mode network plays a crucial role in such “autopilot” behavior, i.e., when rapidly selecting appropriate responses under predictable behavioral contexts. While applying learned rules, the default mode network shows both greater activity and connectivity. Furthermore, functional interactions between this network and hippocampal and parahippocampal areas as well as primary visual cortex correlate with the speed of accurate responses. These findings indicate a memory-based “autopilot role” for the default mode network, which may have important implications for our current understanding of healthy and adaptive brain processing.

Researchers watch video images people are seeing, decoded from their fMRI brain scans in near-real-time

Purdue Engineering researchers have developed a system that can show what people are seeing in real-world videos, decoded from their fMRI brain scans — an advanced new form of  “mind-reading” technology that could lead to new insights in brain function and to advanced AI systems.

The research builds on previous pioneering research at UC Berkeley’s Gallant Lab, which created a computer program in 2011 that translated fMRI brain-activity patterns into images that loosely mirrored a series of images being viewed.

The new system also decodes moving images that subjects see in videos and does it in near-real-time. But the researchers were also able to determine the subjects’ interpretations of the images they saw — for example, interpreting an image as a person or thing — and could even reconstruct a version of the original images that the subjects saw.

Deep-learning AI system for watching what the brain sees

Watching in near-real-time what the brain sees. Visual information generated by a video (a) is processed in a cascade from the retina through the thalamus (LGN area) to several levels of the visual cortex (b), detected from fMRI activity patterns (c) and recorded. A powerful deep-learning technique (d) then models this detected cortical visual processing. Called a convolutional neural network (CNN), this model transforms every video frame into multiple layers of features, ranging from orientations and colors (the first visual layer) to high-level object categories (face, bird, etc.) in semantic (meaning) space (the eighth layer). The trained CNN model can then be used to reverse this process, reconstructing the original videos — even creating new videos that the CNN model had never watched. (credit: Haiguang Wen et al./Cerebral Cortex)

The researchers acquired 11.5 hours of fMRI data from each of three female subjects watching 972 video clips, including clips showing people or animals in action and nature scenes.

To decode the fMRI data, the researchers pioneered the use of a deep-learning technique called a convolutional neural network (CNN). The trained CNN model was able to accurately decode the fMRI blood-flow data to identify specific image categories (such as the face, bird, ship, and scene examples in the above figure). The researchers could then compare (in near-real-time) the viewed video images side-by-side with the computer’s interpretation of what the person’s brain saw.
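
Encoding models of this kind typically map CNN layer activations for each video frame onto voxel responses with a regularized linear model (and decoding runs the mapping in reverse). The sketch below shows the encoding direction using ridge regression over precomputed CNN features; the array shapes and preprocessing assumptions are placeholders, not the authors’ exact pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_encoding_model(cnn_features, voxel_responses, alpha=1.0):
    """cnn_features: (n_frames, n_features) activations from one CNN layer,
    assumed already downsampled to the fMRI sampling rate and convolved with
    a hemodynamic response function.
    voxel_responses: (n_frames, n_voxels) measured fMRI signals.
    Returns a fitted linear map from CNN features to voxel responses."""
    model = Ridge(alpha=alpha)
    model.fit(cnn_features, voxel_responses)
    return model

def predict_responses(model, cnn_features):
    """Predict voxel responses to a held-out video from its CNN features."""
    return model.predict(cnn_features)
```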

Reconstruction of a dynamic visual experience in the experiment. The top row shows the example movie frames seen by one subject; the bottom row shows the reconstruction of those frames based on the subject’s cortical fMRI responses to the movie. (credit: Haiguang Wen et al./ Cerebral Cortex)

The researchers were also able to figure out how certain locations in the visual cortex were associated with specific information a person was seeing.

Decoding how the visual cortex works

CNNs have been used to recognize faces and objects, and to study how the brain processes static images and other visual stimuli. But the new findings represent the first time CNNs have been used to see how the brain processes videos of natural scenes. This is “a step toward decoding the brain while people are trying to make sense of complex and dynamic visual surroundings,” said doctoral student Haiguang Wen.

Wen was first author of a paper describing the research, appearing online Oct. 20 in the journal Cerebral Cortex.

“Neuroscience is trying to map which parts of the brain are responsible for specific functionality,” Wen explained. “This is a landmark goal of neuroscience. I think what we report in this paper moves us closer to achieving that goal. Using our technique, you may visualize the specific information represented by any brain location, and screen through all the locations in the brain’s visual cortex. By doing that, you can see how the brain divides a visual scene into pieces, and re-assembles the pieces into a full understanding of the visual scene.”

The researchers also were able to use models trained with data from one human subject to predict and decode the brain activity of a different human subject, a process called “cross-subject encoding and decoding.” This finding is important because it demonstrates the potential for broad applications of such models to study brain function, including people with visual deficits.

The research has been funded by the National Institute of Mental Health. The work is affiliated with the Purdue Institute for Integrative Neuroscience. Data reported in this paper are also publicly available at the Laboratory of Integrated Brain Imaging website.

UPDATE Oct. 28, 2017 — Additional figure added, comparing the original images and those reconstructed from the subject’s cortical fMRI responses to the movie; subhead revised to clarify the CNN function. Two references also added.


Abstract of Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision

Convolutional neural network (CNN) driven by image recognition has been shown to be able to explain cortical responses to static pictures at ventral-stream areas. Here, we further showed that such CNN could reliably predict and decode functional magnetic resonance imaging data from humans watching natural movies, despite its lack of any mechanism to account for temporal dynamics or feedback processing. Using separate data, encoding and decoding models were developed and evaluated for describing the bi-directional relationships between the CNN and the brain. Through the encoding models, the CNN-predicted areas covered not only the ventral stream, but also the dorsal stream, albeit to a lesser degree; single-voxel response was visualized as the specific pixel pattern that drove the response, revealing the distinct representation of individual cortical location; cortical activation was synthesized from natural images with high-throughput to map category representation, contrast, and selectivity. Through the decoding models, fMRI signals were directly decoded to estimate the feature representations in both visual and semantic spaces, for direct visual reconstruction and semantic categorization, respectively. These results corroborate, generalize, and extend previous findings, and highlight the value of using deep learning, as an all-in-one model of the visual cortex, to understand and decode natural vision.

Leading brain-training game improves memory and attention better than competing method

EEGs taken before and after the training showed that the biggest changes occurred in the brains of the group that trained using the “dual n-back” method (right). (credit: Kara J. Blacker/JHU)

A leading brain-training game called “dual n-back” was significantly better in improving memory and attention than a competing “complex span” game, Johns Hopkins University researchers found in a recent experiment.*

These results, published Monday Oct. 16, 2017 in an open-access paper in the Journal of Cognitive Enhancement, suggest it’s possible to train the brain like other body parts — with targeted workouts to improve the cognitive skills needed when tasks are new and you can’t just rely on old knowledge and habits, says co-author Susan Courtney, a Johns Hopkins neuroscientist and professor of psychological and brain sciences.


Johns Hopkins University | The Best Way to Train Your Brain: A Game

The dual n-back game is a memory sequence test in which you must remember a constantly updating sequence of visual and auditory stimuli. As shown in a simplified version in the video above, participants saw squares flashing on a grid while hearing letters. But in the experiment, the subjects also had to remember if the square they just saw and the letter they heard were both the same as one round back.

As the test got harder, they had to recall squares and letters two, three, and four rounds back. The subjects also showed significant changes in brain activity in the prefrontal cortex, the critical region responsible for higher learning.
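
The bookkeeping behind dual n-back is simple to state in code: on each round, there is a “position match” if the current square equals the one shown n rounds earlier, and a “letter match” if the current letter equals the one heard n rounds earlier. Below is a minimal sketch of that logic; it illustrates the task rules only, not the software used in the study.

```python
import random
import string

def generate_dual_nback_trials(n_back=2, n_trials=20, grid_positions=9):
    """Generate a random dual n-back sequence and the correct answers."""
    positions = [random.randrange(grid_positions) for _ in range(n_trials)]
    letters = [random.choice(string.ascii_uppercase[:8]) for _ in range(n_trials)]
    answers = []
    for i in range(n_trials):
        if i < n_back:
            answers.append((False, False))      # nothing to compare with yet
        else:
            pos_match = positions[i] == positions[i - n_back]
            letter_match = letters[i] == letters[i - n_back]
            answers.append((pos_match, letter_match))
    return positions, letters, answers

positions, letters, answers = generate_dual_nback_trials(n_back=2)
```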

With the easier complex span game, there’s a distraction between items, but participants don’t need to continually update the previous items in their mind.

(You can try an online version of the dual n-back test/game here and of the digit-span test here. The training programs Johns Hopkins compared are tools scientists rely on to test the brain’s working memory, not the commercial products sold to consumers.)

30 percent improvement in working memory

The researchers found that the group that practiced the dual n-back exercise showed a 30 percent improvement in their working memory — nearly double the gains in the group using complex span. “The findings suggest that [the dual n-back] task is changing something about the brain,” Courtney said. “There’s something about sequencing and updating that really taps into the things that only the pre-frontal cortex can do, the real-world problem-solving tasks.”

The next step, the researchers say, is to figure out why dual n-back is so good at improving working memory, then figure out how to make it even more effective so that it can become a marketable or even clinically useful brain-training program.

* Scientists trying to determine if brain exercises make people smarter have had mixed results. Johns Hopkins researchers suspected the problem wasn’t the idea of brain training, but the type of exercise researchers chose to test it. They decided to compare directly the leading types of exercises and measure people’s brain activity before and after training; that had never been attempted before, according to lead author Kara J. Blacker, a former Johns Hopkins post-doctoral fellow in psychological and brain sciences, now a researcher at the Henry M. Jackson Foundation for Advancement of Military Medicine, Inc. For the experiment, the team assembled three groups of participants, all young adults. Everyone took an initial battery of cognitive tests to determine baseline working memory, attention, and intelligence. Everyone also got an electroencephalogram, or EEG, to measure brain activity. Then, everyone was sent home to practice a computer task for a month. One group used one leading brain exercise while the second group used the other. The third group practiced on a control task. Everyone trained five days a week for 30 minutes, then returned to the lab for another round of tests to see if anything about their brain or cognitive abilities had changed.


Abstract of N-back Versus Complex Span Working Memory Training

Working memory (WM) is the ability to maintain and manipulate task-relevant information in the absence of sensory input. While its improvement through training is of great interest, the degree to which WM training transfers to untrained WM tasks (near transfer) and other untrained cognitive skills (far transfer) remains debated and the mechanism(s) underlying transfer are unclear. Here we hypothesized that a critical feature of dual n-back training is its reliance on maintaining relational information in WM. In experiment 1, using an individual differences approach, we found evidence that performance on an n-back task was predicted by performance on a measure of relational WM (i.e., WM for vertical spatial relationships independent of absolute spatial locations), whereas the same was not true for a complex span WM task. In experiment 2, we tested the idea that reliance on relational WM is critical to produce transfer from n-back but not complex span task training. Participants completed adaptive training on either a dual n-back task, a symmetry span task, or on a non-WM active control task. We found evidence of near transfer for the dual n-back group; however, far transfer to a measure of fluid intelligence did not emerge. Recording EEG during a separate WM transfer task, we examined group-specific, training-related changes in alpha power, which are proposed to be sensitive to WM demands and top-down modulation of WM. Results indicated that the dual n-back group showed significantly greater frontal alpha power after training compared to before training, more so than both other groups. However, we found no evidence of improvement on measures of relational WM for the dual n-back group, suggesting that near transfer may not be dependent on relational WM. These results suggest that dual n-back and complex span task training may differ in their effectiveness to elicit near transfer as well as in the underlying neural changes they facilitate.

Intel’s new ‘Loihi’ chip mimics neurons and synapses in the human brain

Loihi chip (credit: Intel Corporation)

Intel announced this week a self-learning, energy-efficient neuromorphic (brain-like) research chip codenamed “Loihi”* that mimics how the human brain functions. Under development for six years, the chip uses 130,000 “neurons” and 130 million “synapses” and learns in real time, based on feedback from the environment.**

Neuromorphic chip models are inspired by how neurons communicate and learn, using spikes (brain pulses) and synapses capable of learning.
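
The basic computational element behind such chips is a spiking neuron whose synaptic weights change with activity. Here is a minimal leaky integrate-and-fire neuron with a toy Hebbian-style weight update, written in plain Python as an illustration of the idea; it is not Intel’s neuron model or programming interface.

```python
import numpy as np

def simulate_lif_neuron(input_spikes, weights, threshold=1.0, leak=0.9, lr=0.01):
    """Leaky integrate-and-fire neuron with a toy plasticity rule.
    input_spikes: (n_steps, n_inputs) binary array of presynaptic spikes.
    weights: (n_inputs,) float array of synaptic weights, updated in place."""
    v = 0.0
    output_spikes = []
    for t in range(len(input_spikes)):
        v = leak * v + np.dot(weights, input_spikes[t])   # integrate with leak
        fired = v >= threshold
        output_spikes.append(int(fired))
        if fired:
            v = 0.0                                       # reset after a spike
            # Strengthen synapses whose inputs were active just before the
            # output spike (a crude stand-in for spike-timing-dependent plasticity).
            weights += lr * input_spikes[t]
    return np.array(output_spikes), weights
```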

“The idea is to help computers self-organize and make decisions based on patterns and associations,” Michael Mayberry, PhD, corporate vice president and managing director of Intel Labs at Intel Corporation, explained in a blog post.

He said the chip automatically gets smarter over time and doesn’t need to be trained in the traditional way. He sees applications in areas that would benefit from autonomous operation and continuous learning in an unstructured environment, such as automotive, industrial, and personal-robotics areas.

For example, a cybersecurity system could identify a breach or a hack based on an abnormality or difference in data streams. Or the chip could learn a person’s heartbeat reading under various conditions — after jogging, following a meal or before going to bed — to determine a “normal” heartbeat. The system could then continuously monitor incoming heart data to flag patterns that don’t match the “normal” pattern, and could be personalized for any user.

“Machine learning models such as deep learning have made tremendous recent advancements by using extensive training datasets to recognize objects and events. However, unless their training sets have specifically accounted for a particular element, situation or circumstance, these machine learning systems do not generalize well,” Mayberry notes.

The Loihi test chip

Loihi currently exists as a research test chip that offers flexible on-chip learning and combines training and inference. Researchers have demonstrated it learning 1 million times faster than other typical spiking neural networks, as measured by the total operations needed to achieve a given accuracy on MNIST digit-recognition problems, Mayberry said. “Compared to technologies such as convolutional neural networks and deep learning neural networks, the Loihi test chip uses many fewer resources on the same task.”

Fabricated on Intel’s 14 nm process technology, the chip is also up to 1,000 times more energy-efficient than general-purpose computing required for typical training systems, he added.

In the first half of 2018, Intel plans to share the Loihi test chip with leading university and research institutions with a focus on advancing AI. The goal is to develop and test several algorithms with high efficiency for problems including path planning, constraint satisfaction, sparse coding, dictionary learning, and dynamic pattern learning and adaptation.

“Looking to the future, Intel believes that neuromorphic computing offers a way to provide exascale performance in a construct inspired by how the brain works,” Mayberry said.

* “Loihi seamount, sometimes known as the ‘youngest volcano’ in the Hawaiian chain, is an undersea mountain rising more than 3000 meters above the floor of the Pacific Ocean … submerged in the Pacific off of the south-eastern coast of the Big Island of Hawaii.” — Hawaii Center for Volcanology

** For comparison, IBM’s TrueNorth neuromorphic chip currently has 1 million neurons and 256 million synapses.

Human vs. deep-neural-network performance in object recognition

(credit: UC Santa Barbara)

Before you read this: look for toothbrushes in the photo above.

Did you notice the huge toothbrush on the left? Probably not. That’s because when humans search through scenes for a particular object, we often miss objects whose size is inconsistent with the rest of the scene, according to scientists in the Department of Psychological & Brain Sciences at UC Santa Barbara.

The scientists are investigating this phenomenon in an effort to better understand how humans and computers compare in doing visual searches. Their findings are published in the journal Current Biology.

Hiding in plain sight

“When something appears at the wrong scale, you will miss it more often because your brain automatically ignores it,” said UCSB professor Miguel Eckstein, who specializes in computational human vision, visual attention, and search.

The experiment used scenes of ordinary objects featured in computer-generated images that varied in color, viewing angle, and size, mixed with “target-absent” scenes. The researchers asked 60 viewers to search for these objects (e.g., toothbrush, parking meter, computer mouse) while eye-tracking software monitored the paths of their gaze.

The researchers found that people tended to miss the target more often when it was mis-scaled (too large or too small) — even when looking directly at the target object.

Computer vision, by contrast, doesn’t have this issue, the scientists reported. However, in the experiments, the researchers found that the most advanced form of computer vision — deep neural networks — had its own limitations.

Human search strategies that could improve computer vision

Red rectangle marks incorrect image identification as a cell phone by a deep-learning algorithm (credit: UC Santa Barbara)

For example, a CNN deep-learning neural net incorrectly identified a computer keyboard as a cell phone, based on similarity in shape and the location of the object in spatial proximity to a human hand (as would be expected of a cell phone). But for humans, the object’s size (compared to the nearby hands) is clearly seen as inconsistent with a cell phone.

“This strategy allows humans to reduce false positives when making fast decisions,” the researchers note in the paper.

“The idea is when you first see a scene, your brain rapidly processes it within a few hundred milliseconds or less, and then you use that information to guide your search towards likely locations where the object typically appears,” Eckstein said. “Also, you focus your attention on objects that are actually at the size that is consistent with the object that you’re looking for.”

That is, human brains use the relationships between objects and their context within the scene to guide their eyes — a useful strategy to process scenes rapidly, eliminate distractors, and reduce false positives.
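
One way to borrow that strategy in computer vision is to discard candidate detections whose size is implausible for the scene, for example relative to a reference object of known scale. The sketch below shows such a post-processing filter over detector outputs; the detection format, the reference height, and the factor-of-two tolerance are illustrative assumptions, not a method proposed in the paper.

```python
def filter_by_size_consistency(detections, expected_height_px, tolerance=2.0):
    """Drop detections whose bounding-box height is wildly inconsistent with the
    expected size of the target object in this scene.
    detections: list of dicts like {"label": "cell phone", "box": (x1, y1, x2, y2),
    "score": 0.9}. expected_height_px: expected object height in pixels, e.g.,
    estimated from a detected hand or another reference object."""
    kept = []
    for det in detections:
        x1, y1, x2, y2 = det["box"]
        height = y2 - y1
        ratio = height / expected_height_px
        if 1.0 / tolerance <= ratio <= tolerance:   # within a factor of `tolerance`
            kept.append(det)
    return kept

# Example: a keyboard-sized "cell phone" detection gets rejected
detections = [{"label": "cell phone", "box": (10, 10, 30, 200), "score": 0.9}]
plausible = filter_by_size_consistency(detections, expected_height_px=40)
```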

This finding might suggest ways to improve computer vision by implementing some of the tricks the brain utilizes to reduce false positives, according to the researchers.

Future research

“There are some theories that suggest that people with autism spectrum disorder focus more on local scene information and less on global structure,” says Eckstein, who is contemplating a follow-up study. “So there is a possibility that people with autism spectrum disorder might miss the mis-scaled objects less often, but we won’t know that until we do the study.”

In the more immediate future, the team’s research will look into the brain activity that occurs when we view mis-scaled objects.

“Many studies have identified brain regions that process scenes and objects, and now researchers are trying to understand which particular properties of scenes and objects are represented in these regions,” said postdoctoral researcher Lauren Welbourne, whose current research concentrates on how objects are represented in the cortex, and how scene context influences the perception of objects.

“So what we’re trying to do is find out how these brain areas respond to objects that are either correctly or incorrectly scaled within a scene. This may help us determine which regions are responsible for making it more difficult for us to find objects if they are mis-scaled.”


Abstract of Humans, but Not Deep Neural Networks, Often Miss Giant Targets in Scenes

Even with great advances in machine vision, animals are still unmatched in their ability to visually search complex scenes. Animals from bees [ 1, 2 ] to birds [ 3 ] to humans [ 4–12 ] learn about the statistical relations in visual environments to guide and aid their search for targets. Here, we investigate a novel manner in which humans utilize rapidly acquired information about scenes by guiding search toward likely target sizes. We show that humans often miss targets when their size is inconsistent with the rest of the scene, even when the targets were made larger and more salient and observers fixated the target. In contrast, we show that state-of-the-art deep neural networks do not exhibit such deficits in finding mis-scaled targets but, unlike humans, can be fooled by target-shaped distractors that are inconsistent with the expected target’s size within the scene. Thus, it is not a human deficiency to miss targets when they are inconsistent in size with the scene; instead, it is a byproduct of a useful strategy that the brain has implemented to rapidly discount potential distractors.

Neuroscientists restore vegetative-state patient’s consciousness with vagus nerve stimulation

Information sharing increases after vagus nerve stimulation over centroposterior regions of the brain. (Left) Coronal view of weighted symbolic mutual information (wSMI) shared by all channels pre- and post-vagus nerve stimulation (VNS) (top and bottom, respectively). For visual clarity, only links with wSMI higher than 0.025 are shown. (Right) Topographies of the median wSMI that each EEG channel shares with all the other channels pre- and post-VNS (top and bottom, respectively). The bar graph represents the median wSMI over right centroposterior electrodes (darker dots) which significantly increases post-VNS. (credit: Martina Corazzol et al./Current Biology)

A 35-year-old man who had been in a vegetative state for 15 years after a car accident has shown signs of consciousness after neurosurgeons in France implanted a vagus nerve stimulator into his chest — challenging the general belief that disorders of consciousness that persist for longer than 12 months are irreversible.

In a 2007 Weill Cornell Medical College study reported in Nature, neurologists found temporary improvements in patients in a state of minimal consciousness while being treated with bilateral deep brain electrical stimulation (DBS) of the central thalamus. Aiming instead to achieve permanent results, the French researchers proposed use of vagus nerve stimulation* (VNS) to activate the thalamo-cortical network, based on the “hypothesis that vagus nerve stimulation functionally reorganizes the thalamo-cortical network.”

A vagus neural stimulation therapy system. The vagus nerve connects the brain to many other parts of the body, including the gut. It’s known to be important in waking, alertness, and many other essential functions. (credit: Cyberonics, Inc./LivaNova)

After one month of VNS — a treatment currently used for epilepsy and depression — the patient’s attention, movements, and brain activity significantly improved and he began responding to simple orders that were impossible before, the researchers report today (Sept. 25, 2017) in an open-access paper in Current Biology.

For example, he could follow an object with his eyes and turn his head upon request, and when the examiner’s head suddenly approached the patient’s face, he reacted with surprise by opening his eyes wide.

Evidence from brain-activity recordings

PET images acquired during baseline (left: pre-VNS) and 3 months post vagus nerve stimulation (right: post-VNS). After vagus nerve stimulation, the metabolism increased in the right parieto-occipital cortex, thalamus and striatum. (credit: Corazzol et al.)

“After one month of stimulation, when [electrical current] intensity reached 1 mA, clinical examination revealed reproducible and consistent improvements in general arousal, sustained attention, body motility, and visual pursuit,” the researchers note.

Brain-activity recordings in the new study revealed major changes. A theta EEG signal (important for distinguishing between a vegetative and minimally conscious state) increased significantly in those areas of the brain involved in movement, sensation, and awareness. The brain’s functional connectivity also increased. And a PET scan showed increases in metabolic activity in both cortical and subcortical regions of the brain.
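
Markers like theta-band power can be computed from the raw EEG with standard signal-processing steps. Below is a minimal sketch using SciPy’s Welch spectral estimate to compare per-channel theta power before and after stimulation; the sampling rate and the commented-out loader are placeholders, and this is not the wSMI connectivity measure used in the paper, which involves a symbolic transformation of the signals.

```python
import numpy as np
from scipy.signal import welch

def theta_band_power(eeg, fs=250.0, band=(4.0, 8.0)):
    """eeg: (n_channels, n_samples) array. Returns mean theta-band power per channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs), axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, mask].mean(axis=-1)

# Compare pre- vs post-stimulation recordings channel by channel:
# pre, post = load_eeg("pre_vns"), load_eeg("post_vns")   # hypothetical loader
# theta_change = theta_band_power(post) - theta_band_power(pre)
```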

The researchers also speculate that “since the vagus nerve has bidirectional control over the brain and the body, reactivation of sensory/visceral afferences might have enhanced brain activity within a body/brain closed loop process.”

The team is now planning a large collaborative study to confirm and extend the therapeutic potential of VNS for patients in a vegetative or minimally conscious state.

However, “some physicians and brain injury specialists remain skeptical about whether the treatment truly worked as described,” according to an article today in Science. “The surgery to implant the electrical stimulator, the frequent behavioral observations, and the moving in and out of brain scanners all could have contributed to the patient’s improved state, says Andrew Cole, a neurologist at Harvard Medical School in Boston who studies consciousness. ‘I’m not saying their claim is untrue,’ he says. ‘I’m just saying it’s hard to interpret based on the results as presented.’”

The study was supported by CNRS, ANR, and a grant from the University of Lyon.

* “The vagus nerve carries somatic and visceral efferents and afferents distributed throughout the central nervous system, either monosynaptically or via the nucleus of the solitary tract (NTS). The vagus directly modulates activity in the brainstem and via the NTS it reaches the dorsal raphe nuclei, the thalamus, the amygdala, and the hippocampus. In humans, vagus nerve stimulation increases metabolism in the forebrain, thalamus and reticular formation. It also enhances neuronal firing in the locus coeruleus which leads to massive release of norepinephrine in the thalamus and hippocampus, a noradrenergic pathway important for arousal, alertness and the fight-or-flight response.” — Corazzol and Lio et al./Current Biology


Abstract of Restoring consciousness with vagus nerve stimulation

Patients lying in a vegetative state present severe impairments of consciousness [1] caused by lesions in the cortex, the brainstem, the thalamus and the white matter [2]. There is agreement that this condition may involve disconnections in long-range cortico–cortical and thalamo-cortical pathways [3]. Hence, in the vegetative state cortical activity is ‘deafferented’ from subcortical modulation and/or principally disrupted between fronto-parietal regions. Some patients in a vegetative state recover while others persistently remain in such a state. The neural signature of spontaneous recovery is linked to increased thalamo-cortical activity and improved fronto-parietal functional connectivity [3]. The likelihood of consciousness recovery depends on the extent of brain damage and patients’ etiology, but after one year of unresponsive behavior, chances become low [1]. There is thus a need to explore novel ways of repairing lost consciousness. Here we report beneficial effects of vagus nerve stimulation on consciousness level of a single patient in a vegetative state, including improved behavioral responsiveness and enhanced brain connectivity patterns.