How to shine light deeper into the brain

Near-infrared (NIR) light can easily pass through brain tissue with minimal scattering, allowing it to reach deep structures. There, up-conversion nanoparticles (UCNPs; blue) previously inserted in the tissue can absorb this light to generate shorter-wavelength blue-green light that can activate nearby neurons. (credit: RIKEN)

An international team of researchers has developed a way to shine light at new depths in the brain. It may lead to the development of new non-invasive clinical treatments for neurological disorders, as well as new research tools.

The new method extends the depth that optogenetics — a method for stimulating neurons with light — can reach. With optogenetics, blue-green light is used to turn on “light-gated ion channels” in neurons to stimulate neural activity. But blue-green light is heavily scattered by tissue. That limits how deep the light can reach and currently requires insertion of invasive optical fibers.

The researchers took a new approach to brain stimulation, as they reported in Science on February 9.

  1. They used longer-wavelength (650 to 1,350 nm) near-infrared (NIR) light, which can penetrate deeper into the brain (via the skull) of mice.
  2. The NIR light illuminated “upconversion nanoparticles” (UCNPs), which absorbed the near-infrared laser light and glowed blue-green in formerly inaccessible (deep) targeted neural areas.*
  3. The blue-green light then triggered (via chromophores, light-responsive molecules) ion channels in the neurons to turn on memory cells in the hippocampus and other areas. These included the medial septum, where nanoparticle-emitted light contributed to synchronizing neurons in a brain wave called the theta cycle.**
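The energy bookkeeping behind step 2 can be checked in a few lines, using the wavelengths quoted in this article (980 nm excitation, emission peaking near 475 nm) and standard physical constants:

```python
# Photon-energy check for upconversion: several low-energy NIR photons
# must be pooled to emit one higher-energy blue photon.
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s

def photon_energy_j(wavelength_nm):
    """Energy of a single photon at the given wavelength, in joules."""
    return h * c / (wavelength_nm * 1e-9)

e_nir = photon_energy_j(980)    # excitation photon (980 nm laser)
e_blue = photon_energy_j(475)   # one of the blue emission peaks

# 980/475 ~ 2.06, so one blue photon carries slightly more energy than
# two NIR photons: at least three NIR photons must be absorbed per blue
# photon emitted, which is why this is a multiphoton process.
print(e_blue / e_nir)
```

The ratio is about 2.06, consistent with the footnote's description of up-conversion as far more efficient than ordinary multiphoton excitation, not a violation of energy conservation.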

Non-invasive activation of neurons in the VTA, a reward center of the mouse brain. The blue-light-sensitive ChR2 chromophores (green) were expressed (from an injection) on both sides of the VTA. But upconversion nanoparticles (blue) were only injected on the right. So when near-IR light was applied to both sides, it only activated expression of the activity-induced gene cFos (red) on the side with the nanoparticles. (credit: RIKEN)

This study was a collaboration between scientists at the RIKEN Brain Science Institute, the National University of Singapore, the University of Tokyo, Johns Hopkins University, and Keio University.

Non-invasive light therapy

“Nanoparticles effectively extend the reach of our lasers, enabling ‘remote’ delivery of light and potentially leading to non-invasive therapies,” says Thomas McHugh, research group leader at the RIKEN Brain Science Institute in Japan. In addition to activating neurons, UCNPs can also be used for inhibition. In this study, UCNPs were able to quell experimental seizures in mice by emitting yellow light to silence hyperexcitable neurons.

Schematic showing near-infrared radiation (NIR) being absorbed by upconversion nanoparticles (UCNPs) and re-radiated as shorter-wavelength (peaking at 450 and 475 nm) blue light that triggers a previously injected chromophore (a light-responsive molecule expressed by neurons) — in this case, channelrhodopsin-2 (ChR2). In one experiment, the chromophore opened calcium-permeable ion channels in neurons in the ventral tegmental area (VTA) of the mouse brain (a region located ~4.2 mm below the skull), stimulating those neurons. (credit: Shuo Chen et al./Science)

While current deep brain stimulation is effective in alleviating specific neurological symptoms, it lacks cell-type specificity and requires permanently implanted electrodes, the researchers note.

The nanoparticles described in this study are compatible with the various light-activated channels currently in use in the optogenetics field and can be employed for neural activation or inhibition in many deep brain structures. “The nanoparticles appear to be quite stable and biocompatible, making them viable for long-term use. Plus, the low dispersion means we can target neurons very specifically,” says McHugh.

However, “a number of challenges must be overcome before this technique can be used in patients,” say Neus Feliu et al. in “Toward an optically controlled brain,” Science, 9 Feb 2018. “Specifically, neurons have to be transfected with light-gated ion channels … a substantial challenge [and] … placed close to the target neurons. … Neuronal networks undergo continuous changes [so] the stimulation pattern and placement of [nanoparticles] may have to be adjusted over time. … Potent upconverting NPs are also needed … [which] may change properties over time, such as structural degradation and loss of functional properties. … Long-term toxicity studies also need to be carried out.”

* “The lanthanide-doped up-conversion nanoparticles (UCNPs) were capable of converting low-energy incident NIR photons into high-energy visible emission with an efficiency orders of magnitude greater than that of multiphoton processes. … The core-shell UCNPs exhibited a characteristic up-conversion emission spectrum peaking at 450 and 475 nm upon excitation at 980 nm. Upon transcranial delivery of 980-nm CW laser pulses at a peak power of 2.0 W (25-ms pulses at 20 Hz over 1 s), an upconverted emission with a power density of ~0.063 mW/mm2 was detected. The conversion yield of NIR to blue light was ~2.5%. NIR pulses delivered across a wide range of laser energies to living tissue result in little photochemical or thermal damage.” — Shuo Chen et al./Science
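The pulse parameters quoted in this footnote imply a simple back-of-the-envelope energy budget (all values taken directly from the excerpt above):

```python
# Laser delivery parameters from Shuo Chen et al.
peak_power_w = 2.0    # peak laser power, W
pulse_s = 0.025       # 25-ms pulses
rate_hz = 20          # pulse repetition rate, Hz
duration_s = 1.0      # total stimulation window, s

duty_cycle = pulse_s * rate_hz            # fraction of time the laser is on
avg_power_w = peak_power_w * duty_cycle   # time-averaged power at the skull
energy_per_pulse_j = peak_power_w * pulse_s
total_energy_j = avg_power_w * duration_s

print(duty_cycle, avg_power_w, energy_per_pulse_j, total_energy_j)
```

So the laser is on half the time, delivering 50 mJ per pulse and about 1 J over the one-second train — the regime the authors report causes little photochemical or thermal damage.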

** “Memory recall in mice also persisted in tests two weeks later. This indicates that the UCNPs remained at the injection site, which was confirmed through microscopy of the brains.” — Shuo Chen et al./Science


Abstract of Near-infrared deep brain stimulation via upconversion nanoparticle–mediated optogenetics

Optogenetics has revolutionized the experimental interrogation of neural circuits and holds promise for the treatment of neurological disorders. It is limited, however, because visible light cannot penetrate deep inside brain tissue. Upconversion nanoparticles (UCNPs) absorb tissue-penetrating near-infrared (NIR) light and emit wavelength-specific visible light. Here, we demonstrate that molecularly tailored UCNPs can serve as optogenetic actuators of transcranial NIR light to stimulate deep brain neurons. Transcranial NIR UCNP-mediated optogenetics evoked dopamine release from genetically tagged neurons in the ventral tegmental area, induced brain oscillations through activation of inhibitory neurons in the medial septum, silenced seizure by inhibition of hippocampal excitatory cells, and triggered memory recall. UCNP technology will enable less-invasive optical neuronal activity manipulation with the potential for remote therapy.

Superconducting ‘synapse’ could enable powerful future neuromorphic supercomputers

NIST’s artificial synapse, designed for neuromorphic computing, mimics the operation of a switch between two neurons. One artificial synapse is located at the center of each X. This chip is 1 square centimeter in size. (The thick black vertical lines are electrical probes used for testing.) (credit: NIST)

A superconducting “synapse” that “learns” like a biological system, operating like the human brain, has been built by researchers at the National Institute of Standards and Technology (NIST).

The NIST switch, described in an open-access paper in Science Advances, provides a missing link for neuromorphic (brain-like) computers, according to the researchers. Such “non-von Neumann architecture” future computers could significantly speed up analysis and decision-making for applications such as self-driving cars and cancer diagnosis.

The research is supported by the Intelligence Advanced Research Projects Activity (IARPA) Cryogenic Computing Complexity Program, which was launched in 2014 with the goal of paving the way to “a new generation of superconducting supercomputer development beyond the exascale.”*

A synapse is a connection or switch between two neurons, controlling transmission of signals. (credit: NIST)

NIST’s artificial synapse is a metallic cylinder 10 micrometers in diameter — about 10 times larger than a biological synapse. It simulates a real synapse by processing incoming electrical spikes (pulsed current from a neuron) and customizing spiking output signals. The more firing between cells (or processors), the stronger the connection. That process enables both biological and artificial synapses to maintain old circuits and create new ones.

Dramatically faster, lower-energy-required, compared to human synapses

But the NIST synapse has two unique features that the researchers say are superior to human synapses and to other artificial synapses:

  • It can fire at a rate much faster than the human brain — 1 billion times per second, compared with a brain cell’s rate of about 50 times per second. (The devices’ Josephson plasma frequencies, which set their dynamical time scales, all exceed 100 GHz.)
  • It uses only about one ten-thousandth as much energy as a human synapse. The spiking energy is less than 1 attojoule** — roughly equivalent to the minuscule chemical energy bonding two atoms in a molecule — compared with the roughly 10 femtojoules (10,000 attojoules) per synaptic event in the human brain. Current neuromorphic platforms are orders of magnitude less efficient than the human brain. “We don’t know of any other artificial synapse that uses less energy,” NIST physicist Mike Schneider said.
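The two comparisons in the bullets above can be sanity-checked directly from the quoted figures:

```python
# Figures quoted in the article.
brain_rate_hz = 50          # typical brain-cell firing rate
synapse_rate_hz = 1e9       # "1 billion times per second"
brain_energy_j = 10e-15     # ~10 fJ per synaptic event in the brain
spike_energy_j = 1e-18      # < 1 aJ per artificial-synapse spike

speedup = synapse_rate_hz / brain_rate_hz       # ~2e7: tens of millions of times faster
energy_ratio = brain_energy_j / spike_energy_j  # ~1e4: "one ten-thousandth" the energy
print(speedup, energy_ratio)
```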

Superconducting devices mimicking brain cells and transmission lines have been developed, but until now, efficient synapses — a crucial piece — have been missing. The new Josephson junction-based artificial synapse would be used in neuromorphic computers made of superconducting components (which can transmit electricity without resistance), so they would be more efficient than designs based on semiconductors or software. Data would be transmitted, processed, and stored in units of magnetic flux.

The brain is especially powerful for tasks like image recognition because it processes data both in sequence and simultaneously and it stores memories in synapses all over the system. A conventional computer processes data only in sequence and stores memory in a separate unit.

The new NIST artificial synapses combine small size, superfast spiking signals, and low energy needs, and could be stacked into dense 3D circuits for creating large systems. They could provide a unique route to a far more complex and energy-efficient neuromorphic system than has been demonstrated with other technologies, according to the researchers.

Nature News does raise some concerns about the research, quoting neuromorphic-technology experts: “Millions of synapses would be necessary before a system based on the technology could be used for complex computing; it remains to be seen whether it will be possible to scale it to this level. … The synapses can only operate at temperatures close to absolute zero, and need to be cooled with liquid helium. This might make the chips impractical for use in small devices, although a large data centre might be able to maintain them. … We don’t yet understand enough about the key properties of the [biological] synapse to know how to use them effectively.”


Inside a superconducting synapse 

The NIST synapse is a customized Josephson junction***, long used in NIST voltage standards. These junctions are a sandwich of superconducting materials with an insulator as a filling. When an electrical current through the junction exceeds a level called the critical current, voltage spikes are produced.

Illustration showing the basic operation of NIST’s artificial synapse, based on a Josephson junction. Very weak electrical current pulses are used to control the number of nanoclusters (green) pointing in the same direction. Shown here: a “magnetically disordered state” (left) vs. “magnetically ordered state” (right). (credit: NIST)

Each artificial synapse uses standard niobium electrodes but has a unique filling made of nanoscale clusters (“nanoclusters”) of manganese in a silicon matrix. The nanoclusters — about 20,000 per square micrometer — act like tiny bar magnets with “spins” that can be oriented either randomly or in a coordinated manner. The number of nanoclusters pointing in the same direction can be controlled, which affects the superconducting properties of the junction.

Diagram of circuit used in the simulation. The blue and red areas represent pre- and post-synapse neurons, respectively. The X symbol represents the Josephson junction. (credit: Michael L. Schneider et al./Science Advances)

The synapse rests in a superconducting state, except when it’s activated by incoming current and starts producing voltage spikes. Researchers apply current pulses in a magnetic field to boost the magnetic ordering — that is, the number of nanoclusters pointing in the same direction.

This magnetic effect progressively reduces the critical current level, making it easier to create a normal conductor and produce voltage spikes. The critical current is the lowest when all the nanoclusters are aligned. The process is also reversible: Pulses are applied without a magnetic field to reduce the magnetic ordering and raise the critical current. This design, in which different inputs alter the spin alignment and resulting output signals, is similar to how the brain operates.
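A toy model makes the thresholding behavior described above concrete. The specific numbers and the linear interpolation here are illustrative assumptions, not NIST's measured values; only the qualitative relationship (more magnetic order, lower critical current, easier spiking) comes from the article:

```python
# Toy model of a magnetic Josephson-junction synapse: the junction emits a
# voltage spike whenever the drive current exceeds its critical current,
# and the critical current falls as more nanoclusters align.
def critical_current(aligned_fraction, ic_max=100.0, ic_min=20.0):
    """Critical current (arbitrary units), interpolated linearly with the
    fraction of nanoclusters pointing in the same direction."""
    return ic_max - (ic_max - ic_min) * aligned_fraction

def spikes(drive_current, aligned_fraction):
    """True if this drive current pushes the junction past threshold."""
    return drive_current > critical_current(aligned_fraction)

# The same 60-unit drive is sub-threshold for a disordered junction but
# fires a fully ordered one -- the ordering acts like a synaptic weight.
print(spikes(60.0, 0.0))  # disordered: high critical current, no spike
print(spikes(60.0, 1.0))  # ordered: low critical current, spike
```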

Synapse behavior can also be tuned by changing how the device is made and its operating temperature. By making the nanoclusters smaller, researchers can reduce the pulse energy needed to raise or lower the magnetic order of the device. Raising the operating temperature slightly from minus 271.15 degrees C (minus 456.07 degrees F) to minus 269.15 degrees C (minus 452.47 degrees F), for example, results in more and higher voltage spikes.
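Converting the quoted Celsius temperatures to kelvin shows that this tuning is a shift from about 2 K to about 4 K, i.e., within the liquid-helium range mentioned earlier:

```python
# Convert the operating temperatures quoted in the article to kelvin.
def c_to_k(t_c):
    return t_c + 273.15

low_k = c_to_k(-271.15)    # ~2 K
high_k = c_to_k(-269.15)   # ~4 K
print(round(low_k, 2), round(high_k, 2))
```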


* Future exascale supercomputers would run at 10^18 flops (“flops” = floating point operations per second) — that is, 1 exaflops — or more. The current fastest supercomputer — the Sunway TaihuLight — operates at about 0.1 exaflops; zettascale computers, the next step beyond exascale, would run 10,000 times faster than that.
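The footnote's scale comparison checks out arithmetically:

```python
# Scale check: exascale vs. the current fastest machine vs. zettascale.
exaflops = 10**18             # 1 exaflops = 10^18 flops
taihulight = exaflops // 10   # ~0.1 exaflops (about 100 petaflops)
zettaflops = 10**21           # zettascale: 10^21 flops

print(zettaflops / taihulight)   # 10,000x the current fastest machine
```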

** An attojoule is 10^-18 joule, a unit of energy, and is one-thousandth of a femtojoule.

*** The Josephson effect is the phenomenon of supercurrent — i.e., a current that flows indefinitely long without any voltage applied — across a device known as a Josephson junction, which consists of two superconductors coupled by a weak link. — Wikipedia


Abstract of Ultralow power artificial synapses using nanotextured magnetic Josephson junctions

Neuromorphic computing promises to markedly improve the efficiency of certain computational tasks, such as perception and decision-making. Although software and specialized hardware implementations of neural networks have made tremendous accomplishments, both implementations are still many orders of magnitude less energy efficient than the human brain. We demonstrate a new form of artificial synapse based on dynamically reconfigurable superconducting Josephson junctions with magnetic nanoclusters in the barrier. The spiking energy per pulse varies with the magnetic configuration, but in our demonstration devices, the spiking energy is always less than 1 aJ. This compares very favorably with the roughly 10 fJ per synaptic event in the human brain. Each artificial synapse is composed of a Si barrier containing Mn nanoclusters with superconducting Nb electrodes. The critical current of each synapse junction, which is analogous to the synaptic weight, can be tuned using input voltage spikes that change the spin alignment of Mn nanoclusters. We demonstrate synaptic weight training with electrical pulses as small as 3 aJ. Further, the Josephson plasma frequencies of the devices, which determine the dynamical time scales, all exceed 100 GHz. These new artificial synapses provide a significant step toward a neuromorphic platform that is faster, more energy-efficient, and thus can attain far greater complexity than has been demonstrated with other technologies.

MIT nanosystem delivers precise amounts of drugs directly to a tiny spot in the brain

MIT’s miniaturized system can deliver multiple drugs to precise locations in the brain, also monitor and control neural activity (credit: MIT)

MIT researchers have developed a miniaturized system that can deliver tiny quantities of medicine to targeted brain regions as small as 1 cubic millimeter, with precise control over how much drug is given. The goal is to treat diseases that affect specific brain circuits without interfering with the normal functions of the rest of the brain.*

“We believe this tiny microfabricated device could have tremendous impact in understanding brain diseases, as well as providing new ways of delivering biopharmaceuticals and performing biosensing in the brain,” says Robert Langer, the David H. Koch Institute Professor at MIT and one of the senior authors of an open-access paper that appears in the Jan. 24 issue of Science Translational Medicine.**

Miniaturized neural drug delivery system (MiNDS). Top: Miniaturized delivery needle with multiple fluidic channels for delivering different drugs. Bottom: scanning electron microscope image of cannula tip for delivering a drug or optogenetic light (to stimulate neurons) and a tungsten electrode (yellow dotted area — magnified view in inset) for detecting neural activity. (credit: Dagdeviren et al., Sci. Transl. Med., adapted by KurzweilAI)

The researchers used state-of-the-art microfabrication techniques to construct cannulas (thin tubes) with diameters of about 30 micrometers (the width of a fine human hair) and lengths up to 10 centimeters. These cannulas are contained within a stainless steel needle with a diameter of about 150 micrometers. The cannulas connect to small pumps that can deliver tiny doses (hundreds of nanoliters***) deep into the brains of rats — with very precise control over how much drug is given and where it goes.
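A quick scale check relates these doses to the 1-cubic-millimeter target regions mentioned earlier (the 200 nL dose below is an illustrative figure within the quoted "hundreds of nanoliters" range, not a value from the paper):

```python
# 1 mm^3 equals exactly 1 microliter = 1000 nanoliters, so a
# few-hundred-nanoliter dose is a sizable fraction of the target volume.
target_nl = 1000            # 1 mm^3 target region, in nanoliters
dose_nl = 200               # hypothetical dose within "hundreds of nanoliters"

print(dose_nl / target_nl)  # fraction of the target volume filled by one dose
```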

In one experiment, they delivered a drug called muscimol to a rat brain region called the substantia nigra, which is located deep within the brain and helps to control movement. Previous studies have shown that muscimol induces symptoms similar to those seen in Parkinson’s disease. The researchers were able to stimulate the rats to continually turn in a clockwise direction. They could also halt the Parkinsonian behavior by delivering a dose of saline through a different channel to wash the drug away.

“Since the device can be customizable, in the future we can have different channels for different chemicals, or for light, to target tumors or neurological disorders such as Parkinson’s disease or Alzheimer’s,” says Canan Dagdeviren, the LG Electronics Career Development Assistant Professor of Media Arts and Sciences and the lead author of the paper.

This device could also make it easier to deliver potential new treatments for behavioral neurological disorders such as addiction or obsessive compulsive disorder. (These may be caused by specific disruptions in how different parts of the brain communicate with each other.)

Measuring drug response

The researchers also showed that they could incorporate an electrode into the tip of the cannula, which can be used to monitor how neurons’ electrical activity changes after drug treatment. They are now working on adapting the device so it can also be used to measure chemical or mechanical changes that occur in the brain following drug treatment.

The cannulas can be fabricated in nearly any length or thickness, making it possible to adapt them for use in brains of different sizes, including the human brain, the researchers say.

“This study provides proof-of-concept experiments, in large animal models, that a small, miniaturized device can be safely implanted in the brain and provide miniaturized control of the electrical activity and function of single neurons or small groups of neurons. The impact of this could be significant in focal diseases of the brain, such as Parkinson’s disease,” says Antonio Chiocca, neurosurgeon-in-chief and chairman of the Department of Neurosurgery at Brigham and Women’s Hospital, who was not involved in the research.

The research was funded by the National Institutes of Health and the National Institute of Biomedical Imaging and Bioengineering.

* To treat brain disorders, drugs (such as l-dopa, a dopamine precursor used to treat Parkinson’s disease, and Prozac, used to boost serotonin levels in patients with depression) often interact with brain chemicals called neurotransmitters (or the cell receptors interact with neurotransmitters) — creating side effects throughout the brain.

** Michael Cima, the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research, is also a senior author of the paper.

*** It would take one billion nanoliter drops to fill 4 cups.


Abstract of Miniaturized neural system for chronic, local intracerebral drug delivery

Recent advances in medications for neurodegenerative disorders are expanding opportunities for improving the debilitating symptoms suffered by patients. Existing pharmacologic treatments, however, often rely on systemic drug administration, which result in broad drug distribution and consequent increased risk for toxicity. Given that many key neural circuitries have sub–cubic millimeter volumes and cell-specific characteristics, small-volume drug administration into affected brain areas with minimal diffusion and leakage is essential. We report the development of an implantable, remotely controllable, miniaturized neural drug delivery system permitting dynamic adjustment of therapy with pinpoint spatial accuracy. We demonstrate that this device can chemically modulate local neuronal activity in small (rodent) and large (nonhuman primate) animal models, while simultaneously allowing the recording of neural activity to enable feedback control.

An artificial synapse for future miniaturized portable ‘brain-on-a-chip’ devices

Biological synapse structure (credit: Thomas Splettstoesser/CC)

MIT engineers have designed a new artificial synapse made from silicon germanium that can precisely control the strength of an electric current flowing across it.

In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting with 95 percent accuracy. The engineers say the new design, published today (Jan. 22) in the journal Nature Materials, is a major step toward building portable, low-power neuromorphic chips for use in pattern recognition and other machine-learning tasks.

Controlling the flow of ions: the challenge

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. The idea is to apply a voltage across layers that would cause ions (electrically charged atoms) to move in a switching medium (synapse-like space) to create conductive filaments in a manner that’s similar to how the “weight” (connection strength) of a synapse changes.

A typical human brain has more than 100 trillion synapses that mediate neuron signaling, strengthening some neural connections while pruning (weakening) others — a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks, all at lightning speed.

Instead of carrying out computations based on binary, on/off signaling, like current digital chips, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights” — much like neurons that activate in various ways (depending on the type and number of ions that flow across a synapse).

But it’s been difficult to control the flow of ions in existing synapse designs, which have multiple paths that make it hard to predict where ions will make it through, according to research team leader Jeehwan Kim, PhD, an assistant professor in the departments of Mechanical Engineering and Materials Science and Engineering and a principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” Kim says. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects. This stream is changing, and it’s hard to control. That’s the biggest problem — nonuniformity of the artificial synapse.”

Epitaxial random access memory (epiRAM)

(Left) Cross-sectional transmission electron microscope image of 60 nm silicon-germanium (SiGe) crystal grown on a silicon substrate (diagonal white lines represent candidate dislocations). Scale bar: 25 nm. (Right) Cross-sectional scanning electron microscope image of an epiRAM device with titanium (Ti)–gold (Au) and silver (Ag)–palladium (Pd) layers. Scale bar: 100 nm. (credit: Shinhyun Choi et al./Nature Materials)

So instead of using amorphous materials as an artificial synapse, Kim and his colleagues created a new “epitaxial random access memory” (epiRAM) design.

They started with a wafer of silicon. They then grew a similar pattern of silicon germanium — a material used commonly in transistors — on top of the silicon wafer. Silicon germanium’s lattice is slightly larger than that of silicon, and Kim found that together, the two perfectly mismatched materials could form a funnel-like dislocation, creating a single path through which ions can predictably flow.*

“This is the most uniform device we could achieve, which is the key to demonstrating artificial neural networks,” Kim says.

Testing the ability to recognize samples of handwriting

As a test, Kim and his team explored how the epiRAM device would perform if it were to carry out an actual learning task: recognizing samples of handwriting — which researchers consider to be a practical test for neuromorphic chips. Such chips would consist of artificial “neurons” connected to other “neurons” via filament-based artificial “synapses.”

Image-recognition simulation. (Left) A 3-layer multilayer-perceptron neural network with a black-and-white input signal for each layer at the algorithm level. The inner product (summation) of the input neuron signal vector and the first synapse array vector is transferred, after activation and binarization, as the input vector of the second synapse array. (Right) Circuit block diagram of the hardware implementation, showing a synapse layer composed of epiRAM crossbar arrays and the peripheral circuit. (credit: Shinhyun Choi et al./Nature Materials)

They ran a computer simulation of an artificial neural network consisting of three sheets of neural layers connected via two layers of artificial synapses, based on measurements from their actual neuromorphic chip. They fed into their simulation tens of thousands of samples from the MNIST handwritten recognition dataset**, commonly used by neuromorphic designers.

They found that their neural network device recognized handwritten samples 95.1 percent of the time — close to the 97 percent accuracy of existing software algorithms running on large computers.
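The simulated setup can be sketched in a few lines. Only the structure (three neural layers joined by two synapse layers, 784 inputs for a flattened 28×28 MNIST image, 10 digit classes) and the ~4 percent device-to-device variation come from the article; the hidden-layer size, weight scale, and noise model are illustrative assumptions:

```python
import math, random

random.seed(0)

# Three neural layers joined by two synapse (weight) layers, per the article.
# The 64-unit hidden layer is an illustrative assumption, not from the paper.
sizes = [784, 64, 10]

def rand_matrix(n_in, n_out, scale=0.1):
    """Random stand-in weights; in the real chip each weight is an epiRAM conductance."""
    return [[random.gauss(0, scale) for _ in range(n_out)] for _ in range(n_in)]

weights = [rand_matrix(a, b) for a, b in zip(sizes[:-1], sizes[1:])]

def forward(x, device_variation=0.04):
    """Forward pass with multiplicative weight noise standing in for the
    reported ~4% device-to-device conductance variation."""
    a = x
    for w in weights:
        out = []
        for j in range(len(w[0])):
            s = sum(a[i] * w[i][j] * (1 + device_variation * random.gauss(0, 1))
                    for i in range(len(a)))
            out.append(math.tanh(s))   # activation stands in for the neuron response
        a = out
    return a

x = [random.random() for _ in range(784)]  # stand-in for one flattened MNIST image
scores = forward(x)
print(len(scores))                         # one score per digit class
```

In the reported simulation, the measured epiRAM characteristics (rather than Gaussian noise) parameterized the synaptic weights, yielding the 95.1 percent recognition accuracy.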

A chip to replace a supercomputer

The team is now in the process of fabricating a real working neuromorphic chip that can carry out handwriting-recognition tasks. Looking beyond handwriting, Kim says the team’s artificial synapse design will enable much smaller, portable neural network devices that can perform complex computations that are currently only possible with large supercomputers.

“Ultimately, we want a chip as big as a fingernail to replace one big supercomputer,” Kim says. “This opens a stepping stone to produce real artificial intelligence hardware.”

This research was supported in part by the National Science Foundation. Co-authors included researchers at Arizona State University.

* They applied voltage to each synapse and found that all synapses exhibited about the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material. They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.

** The MNIST (Modified National Institute of Standards and Technology database) is a large database of handwritten digits that is commonly used for training various image processing systems and for training and testing in the field of machine learning. It contains 60,000 training images and 10,000 testing images. 


Abstract of SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations

Although several types of architecture combining memory cells and transistors have been used to demonstrate artificial synaptic arrays, they usually present limited scalability and high power consumption. Transistor-free analog switching devices may overcome these limitations, yet the typical switching process they rely on—formation of filaments in an amorphous medium—is not easily controlled and hence hampers the spatial and temporal reproducibility of the performance. Here, we demonstrate analog resistive switching devices that possess desired characteristics for neuromorphic computing networks with minimal performance variations using a single-crystalline SiGe layer epitaxially grown on Si as a switching medium. Such epitaxial random access memories utilize threading dislocations in SiGe to confine metal filaments in a defined, one-dimensional channel. This confinement results in drastically enhanced switching uniformity and long retention/high endurance with a high analog on/off ratio. Simulations using the MNIST handwritten recognition data set prove that epitaxial random access memories can operate with an online learning accuracy of 95.1%.

Tracking a thought’s fleeting trip through the brain


Repeating a word: as the brain receives (yellow), interprets (red), and responds (blue) within a second, the prefrontal cortex (red) coordinates all areas of the brain involved. (video credit: Avgusta Shestyuk/UC Berkeley).

Recording the electrical activity of neurons directly from the surface of the brain, using electrocorticography (ECoG)*, neuroscientists were able to track the flow of thought across the brain in real time for the first time. They showed clearly how the prefrontal cortex at the front of the brain coordinates activity to help us act in response to a perception.

Here’s what they found.

For a simple task, such as repeating a word seen or heard:

The visual and auditory cortices react first to perceive the word. The prefrontal cortex then kicks in to interpret the meaning, followed by activation of the motor cortex (preparing for a response). During the half-second between stimulus and response, the prefrontal cortex remains active to coordinate all the other brain areas.

For a particularly hard task, like determining the antonym of a word:

During the time the brain takes several seconds to respond, the prefrontal cortex recruits other areas of the brain — probably including memory networks (not tracked). The prefrontal cortex then hands off to the motor cortex to generate a spoken response.

In both cases, the brain begins to prepare the motor areas to respond very early (during initial stimulus presentation) — suggesting that we get ready to respond even before we know what the response will be.

“This might explain why people sometimes say things before they think,” said Avgusta Shestyuk, a senior researcher in UC Berkeley’s Helen Wills Neuroscience Institute and lead author of a paper reporting the results in the current issue of Nature Human Behaviour.


For a more difficult task, like saying a word that is the opposite of another word, people’s brains required 2–3 seconds to detect (yellow), interpret and search for an answer (red), and respond (blue) — with sustained prefrontal lobe activity (red) coordinating all areas of the brain involved. (video credit: Avgusta Shestyuk/UC Berkeley).

The research backs up what neuroscientists have pieced together over the past decades from studies in monkeys and humans.

“These very selective studies have found that the frontal cortex is the orchestrator, linking things together for a final output,” said co-author Robert Knight, a UC Berkeley professor of psychology and neuroscience and a professor of neurology and neurosurgery at UCSF. “Here we have eight different experiments, some where the patients have to talk and others where they have to push a button, where some are visual and others auditory, and all found a universal signature of activity centered in the prefrontal lobe that links perception and action. It’s the glue of cognition.”

Researchers at Johns Hopkins University, California Pacific Medical Center, and Stanford University were also involved. The work was supported by the National Science Foundation, National Institute of Mental Health, and National Institute of Neurological Disorders and Stroke.

* Other neuroscientists have used functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) to record activity in the thinking brain. The UC Berkeley scientists instead employed a much more precise technique, electrocorticography (ECoG), which records from several hundred electrodes placed on the brain surface and detects activity in the thin outer region, the cortex, where thinking occurs. ECoG provides better time resolution than fMRI and better spatial resolution than EEG, but requires access to epilepsy patients undergoing highly invasive surgery, involving opening the skull, to pinpoint the location of seizures. The new study employed 16 epilepsy patients who agreed to participate in experiments while undergoing epilepsy surgery at UC San Francisco and California Pacific Medical Center in San Francisco, Stanford University in Palo Alto, and Johns Hopkins University in Baltimore. Once the electrodes were placed on the brain of each patient, the researchers conducted a series of eight tasks that included visual and auditory stimuli. The tasks ranged from simple, such as repeating a word or identifying the gender of a face or a voice, to complex, such as determining a facial emotion, uttering the antonym of a word, or assessing whether an adjective describes the patient’s personality.
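The activity index used in ECoG studies of this kind is the broadband gamma signal. As a minimal sketch (not the authors' actual pipeline; the band limits and filter approach here are illustrative assumptions), a common way to estimate it is to band-pass a channel in the high-gamma range and take the magnitude of the analytic (Hilbert) signal as a moment-to-moment power envelope:

```python
import numpy as np

def broadband_gamma_envelope(x, fs, band=(70.0, 150.0)):
    """Estimate the broadband gamma envelope of one ECoG channel.

    FFT-based band-pass in the high-gamma range (band limits are
    illustrative), then the magnitude of the analytic (Hilbert)
    signal as a power envelope.
    """
    n = len(x)
    freqs = np.fft.fftfreq(n, d=1.0 / fs)
    spec = np.fft.fft(x)
    # Zero out everything outside the gamma band (band-pass)
    spec[(np.abs(freqs) < band[0]) | (np.abs(freqs) > band[1])] = 0.0
    # Analytic signal: zero negative frequencies, double positive ones
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(spec * h)
    return np.abs(analytic)

# Toy example: 1 s of data with a burst of 100 Hz activity mid-trial,
# riding on a slow 10 Hz rhythm that the band-pass removes
fs = 1000
t = np.arange(fs) / fs
burst = np.where((t > 0.4) & (t < 0.6), np.sin(2 * np.pi * 100 * t), 0.0)
signal = burst + 0.1 * np.sin(2 * np.pi * 10 * t)
env = broadband_gamma_envelope(signal, fs)
print(env[450:550].mean() > env[100:300].mean())  # True: envelope peaks during the burst
```

Tracking this envelope across electrodes over time is what lets such studies watch activity "flow" from sensory to prefrontal to motor areas.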


Abstract of Persistent neuronal activity in human prefrontal cortex links perception and action

How do humans flexibly respond to changing environmental demands on a subsecond temporal scale? Extensive research has highlighted the key role of the prefrontal cortex in flexible decision-making and adaptive behaviour, yet the core mechanisms that translate sensory information into behaviour remain undefined. Using direct human cortical recordings, we investigated the temporal and spatial evolution of neuronal activity (indexed by the broadband gamma signal) in 16 participants while they performed a broad range of self-paced cognitive tasks. Here we describe a robust domain- and modality-independent pattern of persistent stimulus-to-response neural activation that encodes stimulus features and predicts motor output on a trial-by-trial basis with near-perfect accuracy. Observed across a distributed network of brain areas, this persistent neural activation is centred in the prefrontal cortex and is required for successful response implementation, providing a functional substrate for domain-general transformation of perception into action, critical for flexible behaviour.

Scientists map mammalian neural microcircuits in precise detail

Nanoengineered electroporation microelectrodes (NEMs) allow for improved current distribution and electroporation effectiveness by reducing peak voltage regions (to avoid damaging tissue). (left) Cross-section of NEM model, illustrating the total effective electroporation volume and its distribution of the voltage around the pipette tip, at a safe current of 50 microamperes. (Scale bar = 5 micrometers.) (right) A five-hole NEM after successful insertion into brain tissue, imaged with high-resolution focused ion beam (FIB). (Scale bar = 2 micrometers) (credit: D. Schwartz et al./Nature Communications)

Neuroscientists at the Francis Crick Institute have developed a new technique to map electrical microcircuits in the brain in far greater detail than existing techniques*, which are limited to tiny sections of the brain (or remain confined to simpler model organisms, such as zebrafish).

In the brain, groups of neurons that connect up in microcircuits help us process information about things we see, smell and taste. Knowing how many neurons and other types of cells make up these microcircuits would give scientists a deeper understanding of how the brain computes complex information.

Nanoengineered microelectrodes

The researchers developed a new design called “nanoengineered electroporation** microelectrodes” (NEMs). They were able to use an NEM to map out all 250 cells that make up a specific microcircuit in a part of a mouse brain that processes smell (known as the “olfactory bulb glomerulus”) in a horizontal slice of the olfactory bulb — something never before achieved.

To do that, the team created a series of tiny pores (holes) near the end of a micropipette using nano-engineering tools. The new design distributes the electrical current uniformly over a wider area (up to a radius of about 50 micrometers — the size of a typical neural microcircuit), with minimal cell damage.

The researchers tested the NEM technique with a specific microcircuit, the olfactory bulb glomerulus (which detects smells). They were able to identify detailed, long-range, complex anatomical features (scale bar = 100 micrometers). (White arrows identify parallel staining of vascular structures.) (credit: D. Schwartz et al./Nature Communications)

Seeing 100% of the cells in a brain microcircuit for the first time

Unlike current methods, the team was able to stain up to 100% of the cells in the microcircuit they were investigating, according to Andreas Schaefer, who led the research, which was published in open-access Nature Communications today (Jan. 12, 2018).

“As the brain is made up of repeating units, we can learn a lot about how the brain works as a computational machine by studying it at this [microscopic] level,” he said. “Now that we have a tool for mapping these tiny units, we can start to interfere with specific cell types to see how they directly control behavior and sensory processing.”

The work was conducted in collaboration with researchers at the Max-Planck-Institute for Medical Research in Heidelberg, Heidelberg University, Heidelberg University Hospital, University College London, the MRC National Institute for Medical Research, and Columbia University Medical Center.

* Scientists currently use color-tagged viruses or charged dyes with applied electroporation current to stain brain cells. These methods, using a glass capillary with a single hole, are limited to low current (higher current could damage tissue), so they can only allow for identifying a limited area of a microcircuit.

** Electroporation is a microbiology technique that applies an electrical field to cells to increase the permeability (ease of penetration) of the cell membrane, allowing (in this case) fluorophores (fluorescent, or glowing dyes) to penetrate into the cells to label (identify parts of) the neural microcircuits (including the “inputs” and “outputs”) under a microscope.
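The advantage of spreading the electroporation current over several holes can be seen in a toy model (a deliberate simplification; the paper used finite element modeling, and the resistivity value and hole geometry below are illustrative guesses). Treating each hole as a point current source in a homogeneous medium, the potential a distance r from a source carrying current I is V = ρI / (4πr), so splitting a fixed total current across several holes lowers the peak voltage near the tip:

```python
import numpy as np

RHO = 3.0        # tissue resistivity, ohm*m (order-of-magnitude assumption)
I_TOTAL = 50e-6  # 50 microamperes total, as in the figure caption

def potential(points, holes, currents):
    """Potential from point current sources in a homogeneous medium:
    V(r) = rho * I / (4 * pi * r), summed over all sources."""
    v = np.zeros(len(points))
    for h, i in zip(holes, currents):
        r = np.linalg.norm(points - h, axis=1)
        v += RHO * i / (4 * np.pi * r)
    return v

# Probe the potential 1 um away from the nearest hole
probe = np.array([[1e-6, 0.0, 0.0]])

# One hole carrying all the current...
v_single = potential(probe, np.array([[0.0, 0.0, 0.0]]), [I_TOTAL])

# ...versus five holes spaced 4 um apart, each carrying a fifth of it
holes = np.array([[0.0, 0.0, k * 4e-6] for k in range(5)])
v_multi = potential(probe, holes, [I_TOTAL / 5] * 5)

print(v_multi[0] < v_single[0])  # True: peak voltage drops with five holes
```

In this crude model the five-hole arrangement cuts the peak voltage near the tip to roughly a third, which is the intuition behind the NEM design: the same total current (and thus labeling volume) with lower, tissue-safe peak fields.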


Abstract of Architecture of a mammalian glomerular domain revealed by novel volume electroporation using nanoengineered microelectrodes

Dense microcircuit reconstruction techniques have begun to provide ultrafine insight into the architecture of small-scale networks. However, identifying the totality of cells belonging to such neuronal modules, the “inputs” and “outputs,” remains a major challenge. Here, we present the development of nanoengineered electroporation microelectrodes (NEMs) for comprehensive manipulation of a substantial volume of neuronal tissue. Combining finite element modeling and focused ion beam milling, NEMs permit substantially higher stimulation intensities compared to conventional glass capillaries, allowing for larger volumes configurable to the geometry of the target circuit. We apply NEMs to achieve near-complete labeling of the neuronal network associated with a genetically identified olfactory glomerulus. This allows us to detect sparse higher-order features of the wiring architecture that are inaccessible to statistical labeling approaches. Thus, NEM labeling provides crucial complementary information to dense circuit reconstruction techniques. Relying solely on targeting an electrode to the region of interest and passive biophysical properties largely common across cell types, this can easily be employed anywhere in the CNS.

Brainwave ‘mirroring’ neurotechnology improves post-traumatic stress symptoms

Patient receiving a real-time reflection of her frontal-lobe brainwave activity as a stream of audio tones through earbuds. (credit: Brain State Technologies)

You are relaxing comfortably, eyes closed, with non-invasive sensors attached to your scalp that are picking up signals from various areas of your brain. The signals are converted by a computer to audio tones that you can hear on earbuds. Over several sessions, the different frequencies (pitches) of the tones associated with the two hemispheres of the brain create a mirror for your brainwave activity, helping your brain reset itself to reduce traumatic stress.

In a study conducted at Wake Forest School of Medicine, 20 sessions of noninvasive brainwave “mirroring” neurotechnology called HIRREM* (high-resolution, relational, resonance-based electroencephalic mirroring) significantly reduced symptoms of post-traumatic stress resulting from service as a military member or vet.


Example of tones (credit: Brain State Technologies)

“We observed reductions in post-traumatic symptoms**, including insomnia, depressive mood, and anxiety, that were durable through six months after the use of HIRREM, but additional research is needed to confirm these initial findings,” said the study’s principal investigator, Charles H. Tegeler, M.D., professor of neurology at Wake Forest School of Medicine, a part of Wake Forest Baptist.

About 500 patients have participated in HIRREM clinical trials at Wake Forest School of Medicine and other locations, according to Brain State Technologies Founder and CEO Lee Gerdes.


Brain State Technologies | A technologist applying the company’s proprietary HIRREM process with a military veteran client.

HIRREM is intended for medical research. A consumer version of the core underlying brainwave mirroring process is available as “Brainwave Optimization” from Brain State Technologies in Scottsdale, Arizona. The company also offers a wearable device for ongoing brain support, BRAINtellect B2v2.


How HIRREM neurotechnology works

(credit: Brain State Technologies)

HIRREM is a neurotechnology that dynamically measures brain electrical activity. It uses two or more EEG (electroencephalogram, or brain-wave detection) scalp sensors to pick up signals from both sides of the brain. Computer software algorithms then convert dominant brain frequencies in real time into audible tones with varying pitch and timing, which can be heard on earbuds.
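HIRREM's actual algorithms are proprietary, but the core idea of translating a dominant brain frequency into an audible tone can be sketched as follows (a minimal illustration; the band limits, tone range, and linear mapping are assumptions for demonstration, not the company's method):

```python
import numpy as np

def dominant_frequency(eeg, fs):
    """Dominant frequency of an EEG segment via the FFT peak (1-40 Hz)."""
    spec = np.abs(np.fft.rfft(eeg - eeg.mean()))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    band = (freqs >= 1.0) & (freqs <= 40.0)
    return freqs[band][np.argmax(spec[band])]

def to_tone(brain_hz, lo=1.0, hi=40.0, tone_lo=220.0, tone_hi=880.0):
    """Map a 1-40 Hz brain frequency linearly onto an audible pitch
    (illustrative two-octave range, 220-880 Hz)."""
    frac = (brain_hz - lo) / (hi - lo)
    return tone_lo + frac * (tone_hi - tone_lo)

# Toy 2-second EEG segment dominated by a 10 Hz alpha rhythm plus noise
fs = 256
rng = np.random.default_rng(0)
t = np.arange(2 * fs) / fs
eeg = np.sin(2 * np.pi * 10.0 * t) + 0.3 * rng.standard_normal(len(t))

f = dominant_frequency(eeg, fs)
print(round(f, 1), round(to_tone(f), 1))  # 10 Hz alpha maps to a ~372 Hz tone
```

Run continuously on short windows from left- and right-hemisphere sensors, a scheme like this would produce the two streams of shifting tones the article describes.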

In effect, the brain is listening to itself. In the process, it makes self-adjustments toward improved balance between temporal-lobe activity in the two hemispheres — sympathetic (right) and parasympathetic (left) — resulting in reduced hyper-arousal. No conscious cognitive activity is required. Signals from other areas of the brain can also be studied.

The net effect is to reset stress response patterns that have been wired by repetitive traumatic events (physical or non-physical).***

“Thus, if the stimulus is an acoustic response to brain function (often called neurofeedback, or NFB), then the response is based on a threshold set by the NFB provider. Since the brain moves three to five times faster than the thoughtful response of the client, the brain’s activity is well beyond any kind of activity the client can mitigate. The NFB hypothesis is that operant conditioning can be learned by the brain so it changes itself.

“In a HIRREM placebo-controlled insomnia study, HIRREM showed statistically significant improvement in sleep function over the placebo. Additionally, HIRREM demonstrated that biomarkers for the test were also statistically significant over the placebo. Posters for this study were presented at the International Sleep Conference and at the Dept. of Defense research meeting on sleep. A full-length manuscript of the study is in process, with hopes to be published in Q1 2018.”


The study was published (open access) in the Dec. 22 online edition of the journal Military Medical Research with co-authors at Brain State Technologies. It was supported through the Joint Capability Technology Demonstration Program within the Office of the Under Secretary of Defense and by a grant from The Susanne Marcus Collins Foundation, Inc. to the Department of Neurology at Wake Forest Baptist.

The researchers acknowledge limitations of the study, including the small number of participants and the absence of a control group. It was also an open-label project, meaning that both researchers and participants knew what treatment was being administered.

* HIRREM is a registered trademark of Brain State Technologies based in Scottsdale, Arizona, and has been licensed to Wake Forest University for collaborative research since 2011.  In this single-site study, 18 service members or recent veterans, who experienced symptoms over one to 25 years, received an average of 19½ HIRREM sessions over 12 days. Symptom data were collected before and after the study sessions, and follow-up online interviews were conducted at one-, three- and six-month intervals. In addition, heart rate and blood pressure readings were recorded after the first and second visits to analyze downstream autonomic balance with heart rate variability and baroreflex sensitivity. HIRREM has been used experimentally with more than 500 patients at Wake Forest School of Medicine.
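The heart rate variability analysis mentioned above rests on standard statistics computed over beat-to-beat (RR) intervals. As an illustration (textbook definitions of two measures the study reports, not the study's own code; the interval values are made up):

```python
import numpy as np

def sdnn(rr_ms):
    """SDNN: standard deviation of normal-to-normal (RR) intervals, in ms."""
    return float(np.std(rr_ms, ddof=1))

def rmssd(rr_ms):
    """rMSSD: root mean square of successive RR-interval differences, in ms."""
    diffs = np.diff(rr_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

# Toy beat-to-beat intervals (ms) around a resting rhythm of ~75 bpm
rr = np.array([800.0, 810, 790, 820, 805, 795, 815])
print(round(sdnn(rr), 1), round(rmssd(rr), 1))  # 10.8 18.8
```

Higher values on both measures generally indicate more flexible autonomic regulation, which is why increases after the intervention are reported as improvements.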

** According to the U.S. Department of Veterans Affairs, approximately 31 percent of Vietnam veterans, 10 percent of Gulf War (Desert Storm) veterans and 11 percent of veterans of the war in Afghanistan experience PTSD. Symptoms can include insomnia, poor concentration, sadness, re-experiencing traumatic events, irritability or hyper-alertness, and diminished autonomic cardiovascular regulation.

*** The effect is based on the “bihemispheric autonomic model” (BHAM), “which proposes that trauma-related sympathetic hyperarousal may be an expression of maladaptive right temporal lobe activity, whereas the avoidant and dissociative features of the traumatic stress response may be indicators of a parasympathetic “freeze” response that is significantly driven by the left temporal lobe. An implication [is that brain-based] intervention may facilitate the reduction of symptom clusters associated with autonomic disturbances through the mitigation of maladaptive asymmetries.” — Catherine L. Tegeler et al./Military Medical Research.

Update Jan. 10, 2018: What about a control group?

“Our study had an open label design, without a control group,” Tegeler explained to KurzweilAI in an email, in response to reader questions.

“We agree that a randomized design is scientifically a more powerful approach, and one we would have preferred.  The reality was that for this cohort of participants, mostly drawn from the special operations community, constraints due to limitation on allowable time away from duties, training cycle pressures, therapeutic expectations, and available funding, prevented consideration of a controlled design.

“Other studies have used a placebo-controlled design utilizing acoustic stimulation linked to brainwaves, as compared to acoustic stimulation not linked to brainwaves. Manuscripts are being prepared to report those results. Finally, our current studies are all focused on evaluation of the effects and benefits of HIRREM alone, for a variety of symptoms or conditions.  That said, in the future there may be opportunities to seek funding for projects that might combine, or follow up after HIRREM, with other strategies such as meditation, improved nutrition, or exercise.”

“Biofeedback/neurofeedback is an open-loop system indicating that the feedback from the brain or other biological function is provided back to the client as the function being analyzed triggers a stimulus,” Gerdes added.


Abstract of Successful use of closed-loop allostatic neurotechnology for post-traumatic stress symptoms in military personnel: self-reported and autonomic improvements

Background: Military-related post-traumatic stress (PTS) is associated with numerous symptom clusters and diminished autonomic cardiovascular regulation. High-resolution, relational, resonance-based, electroencephalic mirroring (HIRREM®) is a noninvasive, closed-loop, allostatic, acoustic stimulation neurotechnology that produces real-time translation of dominant brain frequencies into audible tones of variable pitch and timing to support the auto-calibration of neural oscillations. We report clinical, autonomic, and functional effects after the use of HIRREM® for symptoms of military-related PTS.

Methods: Eighteen service members or recent veterans (15 active-duty, 3 veterans, most from special operations, 1 female), with a mean age of 40.9 (SD = 6.9) years and symptoms of PTS lasting from 1 to 25 years, undertook 19.5 (SD = 1.1) sessions over 12 days. Inventories for symptoms of PTS (Posttraumatic Stress Disorder Checklist – Military version, PCL-M), insomnia (Insomnia Severity Index, ISI), depression (Center for Epidemiologic Studies Depression Scale, CES-D), and anxiety (Generalized Anxiety Disorder 7-item scale, GAD-7) were collected before (Visit 1, V1), immediately after (Visit 2, V2), and at 1 month (Visit 3, V3), 3 (Visit 4, V4), and 6 (Visit 5, V5) months after intervention completion. Other measures only taken at V1 and V2 included blood pressure and heart rate recordings to analyze heart rate variability (HRV) and baroreflex sensitivity (BRS), functional performance (reaction and grip strength) testing, blood and saliva for biomarkers of stress and inflammation, and blood for epigenetic testing. Paired t-tests, Wilcoxon signed-rank tests, and a repeated-measures ANOVA were performed.

Results: Clinically relevant, significant reductions in all symptom scores were observed at V2, with durability through V5. There were significant improvements in multiple measures of HRV and BRS [Standard deviation of the normal beat to normal beat interval (SDNN), root mean square of the successive differences (rMSSD), high frequency (HF), low frequency (LF), and total power, HF alpha, sequence all, and systolic, diastolic and mean arterial pressure] as well as reaction testing. Trends were seen for improved grip strength and a reduction in C-Reactive Protein (CRP), Angiotensin II to Angiotensin 1–7 ratio and Interleukin-10, with no change in DNA n-methylation. There were no dropouts or adverse events reported.

Conclusions: Service members or veterans showed reductions in symptomatology of PTS, insomnia, depressive mood, and anxiety that were durable through 6 months after the use of a closed-loop allostatic neurotechnology for the auto-calibration of neural oscillations. This study is the first to report increased HRV or BRS after the use of an intervention for service members or veterans with PTS. Ongoing investigations are strongly warranted.

Will artificial intelligence become conscious?

(Credit: EPFL/Blue Brain Project)

By Subhash Kak, Regents Professor of Electrical and Computer Engineering, Oklahoma State University

Forget about today’s modest incremental advances in artificial intelligence, such as the increasing abilities of cars to drive themselves. Waiting in the wings might be a groundbreaking development: a machine that is aware of itself and its surroundings, and that could take in and process massive amounts of data in real time. It could be sent on dangerous missions, into space or combat. In addition to driving people around, it might be able to cook, clean, do laundry — and even keep humans company when other people aren’t nearby.

A particularly advanced set of machines could replace humans at literally all jobs. That would save humanity from workaday drudgery, but it would also shake many societal foundations. A life of no work and only play may turn out to be a dystopia.

Conscious machines would also raise troubling legal and ethical problems. Would a conscious machine be a “person” under law and be liable if its actions hurt someone, or if something goes wrong? To think of a more frightening scenario, might these machines rebel against humans and wish to eliminate us altogether? If yes, they represent the culmination of evolution.

As a professor of electrical engineering and computer science who works in machine learning and quantum theory, I can say that researchers are divided on whether these sorts of hyperaware machines will ever exist. There’s also debate about whether machines could or should be called “conscious” in the way we think of humans, and even some animals, as conscious. Some of the questions have to do with technology; others have to do with what consciousness actually is.

Is awareness enough?

Most computer scientists think that consciousness is a characteristic that will emerge as technology develops. Some believe that consciousness involves accepting new information, storing and retrieving old information and cognitive processing of it all into perceptions and actions. If that’s right, then one day machines will indeed be the ultimate consciousness. They’ll be able to gather more information than a human, store more than many libraries, access vast databases in milliseconds and compute all of it into decisions more complex, and yet more logical, than any person ever could.

On the other hand, there are physicists and philosophers who say there’s something more about human behavior that cannot be computed by a machine. Creativity, for example, and the sense of freedom people possess don’t appear to come from logic or calculations.

Yet these are not the only views of what consciousness is, or whether machines could ever achieve it.

Quantum views

Another viewpoint on consciousness comes from quantum theory, which is the deepest theory of physics. According to the orthodox Copenhagen Interpretation, consciousness and the physical world are complementary aspects of the same reality. When a person observes, or experiments on, some aspect of the physical world, that person’s conscious interaction causes discernible change. Since it takes consciousness as a given and no attempt is made to derive it from physics, the Copenhagen Interpretation may be called the “big-C” view of consciousness, where it is a thing that exists by itself – although it requires brains to become real. This view was popular with the pioneers of quantum theory such as Niels Bohr, Werner Heisenberg and Erwin Schrödinger.

The interaction between consciousness and matter leads to paradoxes that remain unresolved after 80 years of debate. A well-known example of this is the paradox of Schrödinger’s cat, in which a cat is placed in a situation that results in it being equally likely to survive or die – and the act of observation itself is what makes the outcome certain.

The opposing view is that consciousness emerges from biology, just as biology itself emerges from chemistry which, in turn, emerges from physics. We call this less expansive concept of consciousness “little-C.” It agrees with the neuroscientists’ view that the processes of the mind are identical to states and processes of the brain. It also agrees with a more recent interpretation of quantum theory motivated by an attempt to rid it of paradoxes, the Many Worlds Interpretation, in which observers are a part of the mathematics of physics.

Philosophers of science believe that these modern quantum physics views of consciousness have parallels in ancient philosophy. Big-C is like the theory of mind in Vedanta – in which consciousness is the fundamental basis of reality, on par with the physical universe.

Little-C, in contrast, is quite similar to Buddhism. Although the Buddha chose not to address the question of the nature of consciousness, his followers declared that mind and consciousness arise out of emptiness or nothingness.

Big-C and scientific discovery

Scientists are also exploring whether consciousness is always a computational process. Some scholars have argued that the creative moment is not at the end of a deliberate computation. For instance, dreams or visions are supposed to have inspired Elias Howe‘s 1845 design of the modern sewing machine, and August Kekulé’s discovery of the structure of benzene in 1862.

A dramatic piece of evidence in favor of big-C consciousness existing all on its own is the life of self-taught Indian mathematician Srinivasa Ramanujan, who died in 1920 at the age of 32. His notebook, which was lost and forgotten for about 50 years and published only in 1988, contains several thousand formulas, without proof in different areas of mathematics, that were well ahead of their time. Furthermore, the methods by which he found the formulas remain elusive. He himself claimed that they were revealed to him by a goddess while he was asleep.

The concept of big-C consciousness raises the questions of how it is related to matter, and how matter and mind mutually influence each other. Consciousness alone cannot make physical changes to the world, but perhaps it can change the probabilities in the evolution of quantum processes. The act of observation can freeze and even influence atoms’ movements, as Cornell physicists proved in 2015. This may very well be an explanation of how matter and mind interact.

Mind and self-organizing systems

It is possible that the phenomenon of consciousness requires a self-organizing system, like the brain’s physical structure. If so, then current machines will come up short.

Scholars don’t know if adaptive self-organizing machines can be designed to be as sophisticated as the human brain; we lack a mathematical theory of computation for systems like that. Perhaps it’s true that only biological machines can be sufficiently creative and flexible. But then that suggests people should – or soon will – start working on engineering new biological structures that are, or could become, conscious.

Reprinted with permission from The Conversation

Video games and piano lessons improve cognitive functions in seniors, researchers find

(credit: Nintendo)

For seniors, playing 3D-platform games like Super Mario 64 or taking piano lessons can stave off mild cognitive impairment and perhaps even prevent Alzheimer’s disease, according to a new study by Université de Montréal psychology professors.

In the study, 33 people ages 55 to 75 were instructed either to play Super Mario 64 for 30 minutes a day, five days a week, for a period of six months, or to take piano lessons (for the first time in their lives) with the same frequency. A control group did not perform any particular task.

The researchers evaluated the effects of the experiment with cognitive performance tests and magnetic resonance imaging (MRI) to measure variations in the volume of gray matter.

Increased gray matter in the left and right hippocampus and the left cerebellum after older adults completed six months of video-game training. (credit: Greg L. West et al./PLoS One)

  • The participants in the video-game cohort saw increases in gray matter volume in the cerebellum (plays a major role in motor control and balance) and the hippocampus (associated with spatial and episodic memory, a key factor in long-term cognitive health); and their short-term memory improved. (The hippocampus gray matter acts as a marker for neurological disorders that can occur over time, including mild cognitive impairment and Alzheimer’s.)
  • There were gray-matter increases in the dorsolateral prefrontal cortex (controls planning, decision-making, and inhibition) and cerebellum of the participants who took piano lessons.
  • Some degree of atrophy was noted in all three areas of the brain among those in the passive control group.

“These findings can also be used to drive future research on Alzheimer’s, since there is a link between the volume of the hippocampus and the risk of developing the disease,” said Gregory West, an associate professor at the Université de Montréal and lead author of an open-access paper in the journal PLoS One.

“3-D video games engage the hippocampus into creating a cognitive map, or a mental representation, of the virtual environment that the brain is exploring,” said West. “Several studies suggest stimulation of the hippocampus increases both functional activity and gray matter within this region.”

However, “It remains to be seen whether it is specifically brain activity associated with spatial memory that affects plasticity, or whether it’s simply a matter of learning something new.”

Researchers at Memorial University in Newfoundland and at Montreal’s Douglas Hospital Research Centre were also involved in the study.

In two previous studies by the researchers in 2014 and 2017, young adults in their twenties were asked to play 3D video games of logic and puzzles on platforms like Super Mario 64. Findings showed that the gray matter in their hippocampus also increased after training.


Abstract of Playing Super Mario 64 increases hippocampal grey matter in older adults

Maintaining grey matter within the hippocampus is important for healthy cognition. Playing 3D-platform video games has previously been shown to promote grey matter in the hippocampus in younger adults. In the current study, we tested the impact of 3D-platform video game training (i.e., Super Mario 64) on grey matter in the hippocampus, cerebellum, and the dorsolateral prefrontal cortex (DLPFC) of older adults. Older adults who were 55 to 75 years of age were randomized into three groups. The video game experimental group (VID; n = 8) engaged in a 3D-platform video game training over a period of 6 months. Additionally, an active control group took a series of self-directed, computerized music (piano) lessons (MUS; n = 12), while a no-contact control group did not engage in any intervention (CON; n = 13). After training, a within-subject increase in grey matter within the hippocampus was significant only in the VID training group, replicating results observed in younger adults. Active control MUS training did, however, lead to a within-subject increase in the DLPFC, while both the VID and MUS training produced growth in the cerebellum. In contrast, the CON group displayed significant grey matter loss in the hippocampus, cerebellum and the DLPFC.

Take a fantastic 3D voyage through the brain with immersive VR system


Wyss Center for Bio and Neuroengineering/Lüscher lab (UNIGE) | Brain circuits related to natural reward

What happens when you combine access to unprecedented huge amounts of anatomical data of brain structures with the ability to display billions of voxels (3D pixels) in real time, using high-speed graphics cards?

Answer: An awesome new immersive virtual reality (VR) experience for visualizing and interacting with up to 10 terabytes (trillions of bytes) of anatomical brain data.

Developed by researchers from the Wyss Center for Bio and Neuroengineering and the University of Geneva, the system is intended to allow neuroscientists to highlight, select, slice, and zoom in, down to individual neurons at the micrometer (cellular) level.
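The article doesn't describe the rendering engine, but one common way systems like this keep terabytes of voxel data interactive is a multi-resolution (mip) pyramid: precompute coarser copies of the volume and render the finest level that fits the frame's voxel budget. A sketch under those assumptions (not the Wyss Center's implementation; the budget and volume sizes are illustrative):

```python
import numpy as np

def build_mip_pyramid(volume, max_levels=4):
    """Build coarser copies of a 3-D volume by 2x mean-downsampling each axis."""
    levels = [volume]
    while len(levels) < max_levels and min(levels[-1].shape) >= 2:
        v = levels[-1]
        d, h, w = (s // 2 * 2 for s in v.shape)  # trim to even dimensions
        v = v[:d, :h, :w].reshape(d // 2, 2, h // 2, 2, w // 2, 2).mean(axis=(1, 3, 5))
        levels.append(v)
    return levels

def pick_level(levels, voxel_budget):
    """Choose the finest level whose voxel count fits the rendering budget."""
    for lvl, v in enumerate(levels):
        if v.size <= voxel_budget:
            return lvl
    return len(levels) - 1

# Stand-in for a (tiny) block of microscope data
volume = np.random.rand(64, 64, 64)
levels = build_mip_pyramid(volume)
print([v.shape for v in levels])       # [(64,64,64), (32,32,32), (16,16,16), (8,8,8)]
print(pick_level(levels, 10_000))      # 2: the 16^3 level fits a 10k-voxel budget
```

Real viewers apply the same idea per-brick rather than to the whole volume, streaming in finer bricks only where the user zooms.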

This 2D image of a mouse brain injected with a fluorescent retrograde virus in the brain stem, captured with a lightsheet microscope, represents the kind of rich, detailed visual data that can be explored with the new VR system. (credit: Courtine Lab/EPFL/Leonie Asboth, Elodie Rey)

The new VR system grew out of a problem with using the Wyss Center’s lightsheet microscope (one of only three in the world): how can you navigate and make sense of the immense volume of neuroanatomical data it generates?

“The system provides a practical solution to experience, analyze and quickly understand these exquisite, high-resolution images,” said Stéphane Pages, PhD, Staff Scientist at the Wyss Center and Senior Research Associate at the University of Geneva, senior author of a dynamic poster presented November 15 at the annual meeting of the Society for Neuroscience 2017.

For example, using “mini-brains,” researchers will be able to see how new microelectrode probes behave in brain tissue, and how tissue reacts to them.

Journey to the center of the cell: VR movies

A team of researchers in Australia has taken the next step: allowing scientists, students, and members of the public to explore these kinds of images — even interact with cells and manipulate models of molecules.

As described in a paper published in the journal Traffic, the researchers built a 3D virtual model of a cell, combining lightsheet microscope images (for super-resolution, real-time, single-molecule detection of fluorescent proteins in cells and tissues) with scanning electron microscope imaging data (for a more complete view of the cellular architecture).

To demonstrate this, they created VR movies (shown below) of the surface of a breast cancer cell. The movies can be played on a Samsung Gear VR or Google Cardboard device, or in the built-in YouTube 360 player with the Chrome, Firefox, MS Edge, or Opera browsers. The movies will also play on a conventional smartphone (but without 3D immersion).

UNSW 3D Visualisation Aesthetics Lab | The cell “paddock” view puts the user on the surface of the cell and demonstrates different mechanisms by which nanoparticles can be internalized into cells.

UNSW 3D Visualisation Aesthetics Lab | The cell “cathedral” view takes the user inside the cell and allows them to explore key cellular compartments, including the mitochondria (red), lysosomes (green), early endosomes (light blue), and the nucleus (purple).


Abstract of Analyzing volumetric anatomical data with immersive virtual reality tools

Recent advances in high-resolution 3D imaging techniques allow researchers to access unprecedented amounts of anatomical data of brain structures. In parallel, the computational power of commodity graphics cards has made rendering billions of voxels in real time possible. Combining these technologies in an immersive virtual reality system creates a novel tool wherein observers can physically interact with the data. We present here the possibilities and demonstrate the value of this approach for reconstructing neuroanatomical data. We use a custom-built digitally scanned light-sheet microscope (adapted from Tomer et al., Cell, 2015) to image rodent clarified whole brains and spinal cords in which various subpopulations of neurons are fluorescently labeled. Improvements to existing microscope designs allow us to achieve an in-plane submicronic resolution in tissue that is immersed in a variety of media (e.g., organic solvents, Histodenz). In addition, our setup allows fast switching between different objectives, and thus changes in image resolution, within seconds. Here we show how the large amount of data generated by this approach can be rapidly reconstructed in a virtual reality environment for further analyses. Direct rendering of raw 3D volumetric data is achieved by voxel-based algorithms (e.g., ray marching), thus avoiding the classical step of data segmentation and meshing along with its inevitable artifacts. Visualization in a virtual reality headset together with interactive hand-held pointers allows the user to interact rapidly and flexibly with the data (highlighting, selecting, slicing, zooming, etc.). This natural interface can be combined with semi-automatic data analysis tools to accelerate and simplify the identification of relevant anatomical structures that are otherwise difficult to recognize using screen-based visualization.
Practical examples of this approach are presented from several research projects using the lightsheet microscope, as well as other imaging techniques (e.g., EM and 2-photon).


Abstract of Journey to the centre of the cell: Virtual reality immersion into scientific data

Visualization of scientific data is crucial not only for scientific discovery but also to communicate science and medicine to both experts and a general audience. Until recently, we have been limited to visualizing the three-dimensional (3D) world of biology in two dimensions. Renderings of 3D cells are still traditionally displayed using two-dimensional (2D) media, such as on a computer screen or paper. However, the advent of consumer-grade virtual reality (VR) headsets such as Oculus Rift and HTC Vive means it is now possible to visualize and interact with scientific data in a 3D virtual world. In addition, new microscopic methods provide an unprecedented opportunity to obtain new 3D data sets. In this perspective article, we highlight how we have used cutting-edge imaging techniques to build a 3D virtual model of a cell from serial block-face scanning electron microscope (SBEM) imaging data. This model allows scientists, students and members of the public to explore and interact with a “real” cell. Early testing of this immersive environment indicates a significant improvement in students’ understanding of cellular processes and points to a new future of learning and public engagement. In addition, we speculate that VR can become a new tool for researchers studying cellular architecture and processes by populating VR models with molecular data.