IBM scientists emulate neurons with phase-change technology

A prototype chip with large arrays of phase-change devices that store the states of artificial neuronal populations in their atomic configuration. In this prototype, the devices are accessed via an array of probes to allow for characterization and testing. The tiny squares are contact pads used to access the nanometer-scale phase-change cells (inset). Each set of probes can access a population of 100 cells; a single chip holds thousands to millions of cells, which are accessed here (in this photograph) by the sharp needles of a probe card. (credit: IBM Research)

Scientists at IBM Research in Zurich have developed artificial neurons that emulate how neurons spike (fire). The goal is to create energy-efficient, high-speed, ultra-dense integrated neuromorphic (brain-like) technologies for applications in cognitive computing, such as unsupervised learning for detecting and analyzing patterns.

Applications could include Internet of Things sensors that collect and analyze volumes of weather data for faster forecasts, or systems that detect patterns in financial transactions, for example.

The results of this research appeared today (Aug. 3) as a cover story in the journal Nature Nanotechnology.

Emulating neuron spiking

General pattern of a neural spike (action potential). When triggered by a sufficient stimulus, a neuron fires a rapid action potential (a voltage spike) that is transmitted to other neurons across synapses. (credit: Chris 73/Diberri CC)

IBM’s new neuron-like spiking mechanism is based on a recent IBM breakthrough in phase-change materials. Phase-change materials are used for storing and processing digital data in rewritable Blu-ray discs, for example. IBM’s newly developed phase-change materials are instead used for storing and processing analog data — like the synapses and neurons in our biological brains.

The new phase-change materials also help overcome a bottleneck in conventional computing, where separate memory and logic units slow down computation. In the new artificial neurons, these functions are combined, just as they are in a biological neuron.

In biological neurons, a thin lipid-bilayer membrane separates the electrical charges inside the cell from those outside it. The membrane potential is altered by the arrival of excitatory and inhibitory postsynaptic potentials through the dendrites of the neuron, and upon sufficient excitation, an action potential, or spike, is generated. IBM’s new germanium-antimony-telluride (GeSbTe, or GST) phase-change material emulates this process, with the device’s phase configuration playing the role of the membrane potential. The material has two stable states: an amorphous one (without a clearly defined structure) and a crystalline one (with a structure). (credit: Tomas Tuma et al./Nature Nanotechnology)
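
To make the integrate-and-fire analogy concrete, here is a minimal simulation sketch (not IBM’s device model) of a neuron whose “membrane potential” is the cell’s degree of crystallization; the threshold, noise level, and reset behavior are illustrative assumptions, but the overall shape matches the mechanism described above: inputs are integrated by progressive crystallization, and a melt-quench reset after each spike introduces randomness.

```python
# Minimal sketch (not IBM's device model): a stochastic integrate-and-fire
# neuron in which the "membrane potential" is a phase-change cell's degree of
# crystallization. Input pulses grow the crystalline fraction; when it crosses
# a threshold the neuron "fires" and is reset by a melt-quench step that
# re-amorphizes the cell imperfectly, giving intrinsically random behavior.
import random

def simulate_phase_change_neuron(inputs, threshold=1.0, noise=0.05, seed=0):
    rng = random.Random(seed)
    crystal_fraction = 0.0          # plays the role of the membrane potential
    spike_times = []
    for t, pulse in enumerate(inputs):
        # Temporal integration: each pulse partially crystallizes the cell.
        crystal_fraction = max(0.0, crystal_fraction + pulse + rng.gauss(0.0, noise))
        if crystal_fraction >= threshold:
            spike_times.append(t)
            # Melt-quench reset: the new amorphous state is not perfectly
            # reproducible, so the starting point varies from cycle to cycle.
            crystal_fraction = abs(rng.gauss(0.0, noise))
    return spike_times

# A steady train of identical sub-threshold pulses yields slightly irregular
# spike timing, illustrating the native stochasticity discussed below.
print(simulate_phase_change_neuron([0.2] * 50))
```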

Alternative to von-Neumann-based algorithms

In addition, previous artificial neurons have been built using CMOS-based circuits, the standard transistor technology in our computers. The new phase-change technology can reproduce similar functionality at reduced power consumption. The artificial neurons are also superior in that they function at nanometer-length-scale dimensions and feature native stochasticity (intrinsic randomness, like that of biological neurons).

“Populations of stochastic phase-change neurons, combined with other nanoscale computational elements such as artificial synapses, could be a key enabler for the creation of a new generation of extremely dense neuromorphic computing systems,” said Tomas Tuma, a co-author of the paper.

“The relatively complex computational tasks, such as Bayesian inference, that stochastic neuronal populations can perform with collocated processing and storage render them attractive as a possible alternative to von-Neumann-based algorithms in future cognitive computers,” the IBM scientists state in the paper.

IBM scientists have organized hundreds of these artificial neurons into populations and used them to represent fast and complex signals. These artificial neurons have been shown to sustain billions of switching cycles, which would correspond to multiple years of operation at an update frequency of 100 Hz. The energy required for each neuron update was less than five picojoules, and the average power was less than 120 microwatts (for comparison, a 60-watt lightbulb consumes 60 million microwatts).


IBM Research | All-memristive neuromorphic computing with level-tuned neurons


Abstract of Stochastic phase-change neurons

Artificial neuromorphic systems based on populations of spiking neurons are an indispensable tool in understanding the human brain and in constructing neuromimetic computational systems. To reach areal and power efficiencies comparable to those seen in biological systems, electroionics-based and phase-change-based memristive devices have been explored as nanoscale counterparts of synapses. However, progress on scalable realizations of neurons has so far been limited. Here, we show that chalcogenide-based phase-change materials can be used to create an artificial neuron in which the membrane potential is represented by the phase configuration of the nanoscale phase-change device. By exploiting the physics of reversible amorphous-to-crystal phase transitions, we show that the temporal integration of postsynaptic potentials can be achieved on a nanosecond timescale. Moreover, we show that this is inherently stochastic because of the melt-quench-induced reconfiguration of the atomic structure occurring when the neuron is reset. We demonstrate the use of these phase-change neurons, and their populations, in the detection of temporal correlations in parallel data streams and in sub-Nyquist representation of high-bandwidth signals.

Study reveals new measure of intelligence involving temporal variability of brain areas

Whole-brain temporal (time) variability is high in areas associated with intelligence and low in sensory cortices (credit: Jie Zhang et al./Brain)

In a new study of human intelligence, reported in the journal Brain, University of Warwick researchers and associates at nine universities in China and at NEC Laboratories America quantified the brain’s dynamic functions, identifying how different parts of the brain interact with each other at different times.

The researchers found that the more variable a brain is, and the more frequently its different parts connect with each other, the higher a person’s intelligence and creativity.

Specifically, using resting-state functional MRI analysis of 1,180 people’s brains across eight datasets from around the world, the researchers discovered that the areas of the brain associated with learning and development, such as the hippocampus, show high levels of temporal variability — meaning that they change their neural connections with other parts of the brain more frequently, over a matter of minutes or seconds. Regions of the brain that aren’t associated with intelligence — the visual, auditory, and sensory-motor areas — show low variability and adaptability.
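
As a rough illustration of what a region’s “temporal variability” can mean in practice, here is a minimal sketch; it assumes per-region resting-state time series have already been extracted, and the sliding-window parameters and similarity measure are illustrative conventions rather than the paper’s exact pipeline.

```python
# Minimal sketch: a region's temporal variability is taken here as how much its
# pattern of correlations with all other regions changes across time windows.
import numpy as np

def regional_temporal_variability(ts, window=30, step=10):
    """ts: array of shape (n_timepoints, n_regions) of resting-state signals."""
    n_t, n_regions = ts.shape
    starts = range(0, n_t - window + 1, step)
    # Functional-connectivity profile of every region in every window.
    profiles = np.array([np.corrcoef(ts[s:s + window].T) for s in starts])
    variability = np.empty(n_regions)
    for r in range(n_regions):
        p = np.delete(profiles[:, r, :], r, axis=1)   # drop self-connection
        sim = np.corrcoef(p)                          # window-to-window similarity
        variability[r] = 1.0 - sim[np.triu_indices_from(sim, k=1)].mean()
    return variability  # higher value = connections reconfigure more often

# Example with synthetic data: 300 timepoints, 20 regions.
rng = np.random.default_rng(0)
print(regional_temporal_variability(rng.standard_normal((300, 20))).round(2))
```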

This more accurate understanding of human intelligence could be applied to the construction of advanced artificial neural networks for computers, with the ability to learn, grow, and adapt, the researchers suggest. Currently, AI systems do not possess the functional variability and adaptability that is vital to the human brain for growth and learning, they note.

Improving mental-health treatments

Brain regions showing significant variability differences between patients with mental disorders and matched healthy controls. Blue indicates that the variability of patients is lower than that of controls, and red-to-yellow indicates the opposite. (credit: Jie Zhang et al./Brain)

This study may also have implications for a deeper understanding of another largely misunderstood field: mental health. Altered patterns of variability were observed in the brain’s default network in patients with schizophrenia, autism, and attention deficit hyperactivity disorder (ADHD).

Knowing the root causes of these mental-health disorders may bring scientists closer to treating and preventing them in the future, according to the researchers.


Abstract of Neural, electrophysiological and anatomical basis of brain-network variability and its characteristic changes in mental disorders 

Functional brain networks demonstrate significant temporal variability and dynamic reconfiguration even in the resting state. Currently, most studies investigate temporal variability of brain networks at the scale of single (micro) or whole-brain (macro) connectivity. However, the mechanism underlying time-varying properties remains unclear, as the coupling between brain network variability and neural activity is not readily apparent when analysed at either micro or macroscales. We propose an intermediate (meso) scale analysis and characterize temporal variability of the functional architecture associated with a particular region. This yields a topography of variability that reflects the whole-brain and, most importantly, creates an analytical framework to establish the fundamental relationship between variability of regional functional architecture and its neural activity or structural connectivity. We find that temporal variability reflects the dynamical reconfiguration of a brain region into distinct functional modules at different times and may be indicative of brain flexibility and adaptability. Primary and unimodal sensory-motor cortices demonstrate low temporal variability, while transmodal areas, including heteromodal association areas and limbic system, demonstrate the high variability. In particular, regions with highest variability such as hippocampus/parahippocampus, inferior and middle temporal gyrus, olfactory gyrus and caudate are all related to learning, suggesting that the temporal variability may indicate the level of brain adaptability. With simultaneously recorded electroencephalography/functional magnetic resonance imaging and functional magnetic resonance imaging/diffusion tensor imaging data, we also find that variability of regional functional architecture is modulated by local blood oxygen level-dependent activity and α-band oscillation, and is governed by the ratio of intra- to inter-community structural connectivity. Application of the mesoscale variability measure to multicentre datasets of three mental disorders and matched controls involving 1180 subjects reveals that those regions demonstrating extreme, i.e. highest/lowest variability in controls are most liable to change in mental disorders. Specifically, we draw attention to the identification of diametrically opposing patterns of variability changes between schizophrenia and attention deficit hyperactivity disorder/autism. Regions of the default-mode network demonstrate lower variability in patients with schizophrenia, but high variability in patients with autism/attention deficit hyperactivity disorder, compared with respective controls. In contrast, subcortical regions, especially the thalamus, show higher variability in schizophrenia patients, but lower variability in patients with attention deficit hyperactivity disorder. The changes in variability of these regions are also closely related to symptom scores. Our work provides insights into the dynamic organization of the resting brain and how it changes in brain disorders. The nodal variability measure may also be potentially useful as a predictor for learning and neural rehabilitation.

Electric brain stimulation during sleep found to enhance motor memory consolidation


UNC Health Care | UNC Science Short: Sleep Spindles

University of North Carolina (UNC) School of Medicine scientists report using transcranial alternating current stimulation (tACS) to enhance memory during sleep, laying the groundwork for a new treatment paradigm for neurological and psychiatric disorders.

The findings, published in the journal Current Biology, offer a non-invasive method to potentially help millions of people with conditions such as autism, Alzheimer’s disease, schizophrenia, and major depressive disorder.

Do sleep spindles cause memory consolidation? The experiment.

For years, researchers have recorded electrical brain activity that oscillates or alternates during sleep on an electroencephalogram (EEG) as waves called sleep spindles. And scientists have suspected their involvement in cataloging and storing memories as we sleep.

Sleep spindle and associated K-Complex (credit: Neocadre/public domain)

“But we didn’t know if sleep spindles enable or even cause memories to be stored and consolidated,” said senior author UNC neuroscientist Flavio Frohlich, PhD, assistant professor of psychiatry and member of the UNC Neuroscience Center. “They could’ve been merely byproducts of other brain processes that enabled what we learn to be stored as a memory. But our study shows that, indeed, the spindles are crucial for the process of creating memories we need for everyday life. And we can target them to enhance memory.”

Feedback-Controlled Spindle tACS (A) Graphical representation of real-time spindle detection and feedback-controlled transcranial current stimulation. (B) Schematic of tACS current source and stimulation electrode configuration; stimulation electrode placement according to International 10-20 locations. (credit: Caroline Lustenberger et al./Current Biology)

During Frohlich’s study, 16 male participants underwent a screening night of sleep before completing two nights of sleep for the study.

Before going to sleep each night, all participants performed two common memory exercises: associative word-pairing tests and motor-sequence tapping tasks, which involved repeatedly finger-tapping a specific sequence. During both study nights, each participant had electrodes placed at specific spots on their scalps. While they slept on one of the nights, each person received tACS — an alternating current of weak electricity synchronized with the brain’s natural sleep spindles. On the other night, each person received sham stimulation as a placebo.
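
To give a sense of what “synchronized with the brain’s natural sleep spindles” requires, here is a minimal offline sketch of spindle detection; the 12-15 Hz sigma band and the envelope threshold are common conventions assumed for illustration, not the study’s real-time parameters. A closed-loop system such as the one described above would trigger tACS whenever an epoch like this is detected.

```python
# Minimal offline sketch of spindle detection: band-pass the EEG in the
# spindle (sigma) band and flag samples where the band-limited envelope is
# well above its typical level.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_spindle_epochs(eeg, fs, band=(12.0, 15.0), threshold_factor=3.0):
    """Return a boolean mask marking samples with high spindle-band power."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    sigma = filtfilt(b, a, eeg)            # spindle-band signal
    envelope = np.abs(hilbert(sigma))      # instantaneous amplitude
    return envelope > threshold_factor * np.median(envelope)

# Example: synthetic EEG with a 13 Hz "spindle" burst embedded in noise.
fs = 200
t = np.arange(0, 10, 1 / fs)
eeg = 0.5 * np.random.default_rng(0).standard_normal(t.size)
burst = (t > 4) & (t < 5.5)
eeg[burst] += 2.0 * np.sin(2 * np.pi * 13 * t[burst])
mask = detect_spindle_epochs(eeg, fs)
print("fraction of time flagged as spindle:", round(mask.mean(), 3))
```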

Direct causal link

Each morning, researchers had participants perform the same standard memory tests. Frohlich’s team found no improvement in test scores for associative word-pairing but a significant improvement in the motor tasks when comparing the results between the stimulation and placebo night.

“This demonstrated a direct causal link between the electric activity pattern of sleep spindles and the process of motor memory consolidation,” Frohlich said.

This marks the first time a research group has reported selectively targeting sleep spindles without also increasing other natural electrical brain activity during sleep. This has never been accomplished with tDCS* (transcranial direct current stimulation), the much more popular cousin of tACS in which a constant stream of weak electrical current is applied to the scalp (see Neuroscience researchers caution public about hidden risks of self-administered brain stimulation*).

“We’re excited about this because we know sleep spindles, along with memory formation, are impaired in a number of disorders, such as schizophrenia and Alzheimer’s,” said Caroline Lustenberger, PhD, first author and postdoctoral fellow in the Frohlich lab. “We hope that targeting these sleep spindles could be a new type of treatment for memory impairment and cognitive deficits.”

The challenging research ahead

Frohlich said the next step is to try the same type of non-invasive brain stimulation in patients that have known deficits in these spindle activity patterns.

Based on the Current Biology paper, it’s clear the team is just getting started, with a lot of interesting possibilities to explore. For example, they note that “it is still unclear which specific cortical regions might be involved in sleep-dependent memory consolidation.”

It’s a sort of jungle in there, one with unknown species. The researchers say they may target posterior brain regions using faster frequencies (e.g., 15 Hz tACS) to optimally benefit motor memory consolidation, for example, and that they may also try synchronizing frontal oscillatory activity. They also want to find out whether spindles synchronized across cortical regions are “essential for memory consolidation to occur,” or whether spindles localized to the brain regions necessary for performing the task are enough.

Future studies will also be needed to investigate more complex ‘‘real-life’’ motor tasks that benefit from sleep and to relate those findings to sleep spindles, they add. “Future studies are also needed to further find optimized stimulation parameters by means of ideal stimulation location (centro-parietal instead of frontal) and (spindle) frequency applied.”

And when it comes to developing neuro-therapeutics, things will get even wilder. Will anomalous spindle patterns need to be reconstructed and reinforced, or will whole new customized patterns be needed? If so, would this amount to a kind of reconstruction of a person’s memory (at least their motor memory, for starters)? What about dealing with circadian desynchronization and its possible disruptive effects on spindle patterns? And does it make sense to look at phase patterns, in addition to frequency?

And perhaps most interesting to Kurzweilians,  could the ability to improve spindle patterns, using machine learning, lead one day to cyborg minds and merged human-machine superintelligence?

KurzweilAI will be closely following this fascinating research.

The study was funded by the National Institutes of Health, UNC Department of Psychiatry, UNC School of Medicine, and Swiss National Science Foundation.

* We asked Frohlich to comment on possible risks similar to those reported with tDCS. “So far, tACS used in laboratory studies (i.e., in a well-controlled environment) has been very safe and we have never experienced serious side effects in any of our tACS studies,” he said. “However, unregulated DIY use of tACS could indeed theoretically carry risks, albeit we do not know at this point.”


Abstract of Feedback-Controlled Transcranial Alternating Current Stimulation Reveals a Functional Role of Sleep Spindles in Motor Memory Consolidation

Transient episodes of brain oscillations are a common feature of both the waking and the sleeping brain. Sleep spindles represent a prominent example of a poorly understood transient brain oscillation that is impaired in disorders such as Alzheimer’s disease and schizophrenia. However, the causal role of these bouts of thalamo-cortical oscillations remains unknown. Demonstrating a functional role of sleep spindles in cognitive processes has, so far, been hindered by the lack of a tool to target transient brain oscillations in real time. Here, we show, for the first time, selective enhancement of sleep spindles with non-invasive brain stimulation in humans. We developed a system that detects sleep spindles in real time and applies oscillatory stimulation. Our stimulation selectively enhanced spindle activity as determined by increased sigma activity after transcranial alternating current stimulation (tACS) application. This targeted modulation caused significant enhancement of motor memory consolidation that correlated with the stimulation-induced change in fast spindle activity. Strikingly, we found a similar correlation between motor memory and spindle characteristics during the sham night for the same spindle frequencies and electrode locations. Therefore, our results directly demonstrate a functional relationship between oscillatory spindle activity and cognition.

Imaging the brain at multiple size scales

A new technique called “magnified analysis of proteome” (MAP) developed at MIT allows researchers to peer at molecules within cells or take a wider view of the long-range connections between neurons. (credit: Courtesy of the researchers)

MIT researchers have developed a new technique for imaging brain tissue at multiple scales, allowing them to peer at molecules within cells or take a wider view of the long-range connections between neurons.

This technique, known as “magnified analysis of proteome” (MAP), should help scientists in their ongoing efforts to chart the connectivity and functions of neurons in the human brain, says Kwanghun Chung, the Samuel A. Goldblith Assistant Professor in the Department of Chemical Engineering, extending the work of Sebastian Seung and colleagues on the Human Connectome Project.

“We use a chemical process to make the whole brain size-adjustable, while preserving pretty much everything. We preserve the proteome (the collection of proteins found in a biological sample), we preserve nanoscopic details, and we also preserve brain-wide connectivity,” says Chung, the senior author of a paper describing the method in the July 25 issue of Nature Biotechnology.

The researchers also showed that the technique is applicable to other organs such as the heart, lungs, liver, and kidneys.

Multiscale imaging

A 3-dimensional image taken via the CLARITY technique showing a 1-millimeter slice of mouse hippocampus. The different colors represent proteins stained with fluorescent antibodies. Excitatory neurons are labeled in green, inhibitory neurons in red, and astrocytes in blue. (credit: Kwanghun Chung and Karl Deisseroth)

The new MAP technique builds on a tissue transformation method known as CLARITY, which Chung developed as a postdoc at Stanford University. CLARITY preserves cells and molecules in brain tissue and makes them transparent so the molecules inside the cell can be imaged in 3-D. In the new study, Chung sought a way to image the brain at multiple scales, within the same tissue sample.*

There are hundreds of thousands of commercially available antibodies that can be used to fluorescently tag specific proteins. In this study, the researchers imaged neuronal structures such as axons and synapses by labeling proteins found in those structures, and they also labeled proteins that allow them to distinguish neurons from glial cells.

“We can use these antibodies to visualize any target structures or molecules,” Chung says. “We can visualize different neuron types and their projections to see their connectivity. We can also visualize signaling molecules or functionally important proteins.”

High resolution

Once the tissue is expanded, the researchers can use any of several common microscopes to obtain images with a resolution as high as 60 nanometers — much better than the usual 200 to 250-nanometer limit of light microscopes, which are constrained by the wavelength of visible light. The researchers also demonstrated that this approach works with relatively large tissue samples, up to 2 millimeters thick.
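
A quick back-of-the-envelope check of that figure, under the assumption (consistent with the footnote below) that effective resolution scales inversely with the tissue’s linear expansion factor:

```python
# Rough check (assumption: effective resolution ~ diffraction limit divided by
# the linear expansion factor of the tissue).
diffraction_limit_nm = 250          # typical light-microscopy limit cited above
for expansion_factor in (4, 5):     # MAP expands tissue 4-5x linearly
    print(expansion_factor, "x ->", diffraction_limit_nm / expansion_factor, "nm")
# prints roughly 50-62 nm, consistent with the reported ~60 nm resolution
```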

“This is, as far as I know, the first demonstration of super-resolution proteomic imaging of millimeter-scale samples,” Chung says.

“This is an exciting advance for brain mapping, a technique that reveals the molecular and connectional architecture of the brain with unprecedented detail,” says Seung, a professor of computer science at the Princeton Neuroscience Institute, who was not involved in the research.

Currently, efforts to map the connections of the human brain rely on electron microscopy, but Chung and colleagues demonstrated that the higher-resolution MAP imaging technique can trace those connections more accurately.

Chung’s lab is now working on speeding up the imaging and the image processing, which is challenging because there is so much data generated from imaging the expanded tissue samples. “It’s already easier than other techniques because the process is really simple and you can use off-the-shelf molecular markers, but we are trying to make it even simpler,” Chung says.

* To achieve that, the researchers developed a method to reversibly expand tissue samples in a way that preserves nearly all of the proteins within the cells. Those proteins can then be labeled with fluorescent molecules and imaged.

The technique relies on flooding the brain tissue with acrylamide polymers, which can form a dense gel. In this case, the gel is 10 times denser than the one used for the CLARITY technique, which gives the sample much more stability. This stability allows the researchers to denature and dissociate the proteins inside the cells without destroying the structural integrity of the tissue sample.

Before denaturing the proteins, the researchers attach them to the gel using formaldehyde, as Chung did in the CLARITY method. Once the proteins are attached and denatured, the gel expands the tissue sample to four or five times its original size.

“It is reversible and you can do it many times,” Chung says. “You can then use off-the-shelf molecular markers like antibodies to label and visualize the distribution of all these preserved biomolecules.”


Melanie Gonick/MIT | MIT researchers have developed a new technique for imaging brain tissue at multiple scales, allowing them to peer at molecules within cells or take a wider view of the long-range connections between neurons.


Abstract of Multiplexed and scalable super-resolution imaging of three-dimensional protein localization in size-adjustable tissues

The biology of multicellular organisms is coordinated across multiple size scales, from the subnanoscale of molecules to the macroscale, tissue-wide interconnectivity of cell populations. Here we introduce a method for super-resolution imaging of the multiscale organization of intact tissues. The method, called magnified analysis of the proteome (MAP), linearly expands entire organs fourfold while preserving their overall architecture and three-dimensional proteome organization. MAP is based on the observation that preventing crosslinking within and between endogenous proteins during hydrogel-tissue hybridization allows for natural expansion upon protein denaturation and dissociation. The expanded tissue preserves its protein content, its fine subcellular details, and its organ-scale intercellular connectivity. We use off-the-shelf antibodies for multiple rounds of immunolabeling and imaging of a tissue’s magnified proteome, and our experiments demonstrate a success rate of 82% (100/122 antibodies tested). We show that specimen size can be reversibly modulated to image both inter-regional connections and fine synaptic architectures in the mouse brain.

Neuroscience researchers caution public about hidden risks of self-administered brain stimulation

TheBrainDriver v.2.0 tDCS device (credit: TheBrainDriver, LLC)

“Do-it-yourself” users of transcranial direct current stimulation (tDCS) seeking cognitive enhancement are exposing themselves to hidden risks, neuroscientists warn in an open-access Open Letter in the journal Annals of Neurology.

tDCS devices are made up of a band that wraps around one’s head with electrodes placed at specific scalp locations to target specific brain regions. The devices transmit varying levels of electrical current to the brain to achieve the desired result, such as an enhanced state of relaxation, energy, focus, creativity, or a variety of other goals.

Cognitive neuroscience research suggests that tDCS can enhance cognition, and relieve symptoms of anxiety, depression, and other conditions. “Published results of these studies might lead DIY tDCS users to believe that they can achieve the same results if they mimic the way stimulation is delivered in research studies. However, there are many reasons why this simply isn’t true,” said first author, Rachel Wurzman, PhD, a postdoctoral research fellow in the Laboratory for Cognition and Neural Stimulation at Perelman School of Medicine at the University of Pennsylvania.

“It is important for people to understand why outcomes of tDCS can be unpredictable, because we know that in some cases, the benefits that are seen after tDCS in certain mental abilities may come at the expense of others.”

The “Open Letter” is signed by 39 researchers who share this sentiment, representing an unprecedented consensus among tDCS experts. “Given the possibility that the improper use of our articles might cause harm, as a community we felt it necessary — an ethical obligation — to explain in a peer-reviewed journal why it is that we generally do not encourage do-it-yourself use of tDCS,” she said.

tDCS risks

Among the concerns explained in the paper:

  • It is not yet known whether stimulation extends beyond the specific brain regions targeted. These indirect effects may alter unintended brain functions. Stimulating one region could improve one’s ability to perform one task but hurt the ability to perform another.
  • What a person is doing during tDCS — reading a book, watching TV, sleeping — can change its effects. Which activity is best to achieve a certain change in brain function is not yet known.
  • The researchers have never performed tDCS at the frequency levels some home users experiment with, such as stimulating daily for months or longer. “We know that stimulation from a few sessions can be quite lasting, but we do not yet know the possible risks of a larger cumulative dose over several years or a lifetime,” they wrote.
  • Small changes in tDCS settings, including the current’s amplitude, stimulation duration and electrode placement, can have large and unexpected effects; more stimulation is not necessarily better.
  • The effects of tDCS vary across different people. Up to 30 percent of experimental subjects respond with changes in brain excitability in the opposite direction from other subjects using identical tDCS settings. Factors such as gender, handedness, hormones, medication, etc. could impact and potentially reverse a given tDCS effect.
  • Most research is conducted for the purpose of treating disease and alleviating symptoms, with detailed disclosure of risks, as required for studies involving human research subjects. The level of risk is quite different for healthy subjects performing tDCS at home.

Cinnamon may be the latest nootropic

(credit: The Great American Spice Co.)

Kalipada Pahan, PhD, a researcher at Rush University and the Jesse Brown VA Medical Center in Chicago, has found that cinnamon improved performance of mice in a maze test.

His group published their latest findings online June 24, 2016, in the Journal of Neuroimmune Pharmacology.

“The increase in learning in poor-learning mice after cinnamon treatment was significant,” says Pahan. “For example, poor-learning mice took about 150 seconds to find the right hole in the Barnes maze test. On the other hand, after one month of cinnamon treatment, poor-learning mice were finding the right hole within 60 seconds.”

Acts as slow-release form of sodium benzoate

Pahan’s research shows that the effect appears to be due mainly to sodium benzoate, a chemical produced as cinnamon is broken down in the body. Food makers use a synthetic form of it as a preservative. It is also an FDA-approved drug used to treat hyperammonemia — too much ammonia in the blood.

Though some health concerns exist regarding sodium benzoate, most experts agree it’s perfectly safe in the amounts generally consumed. One reassuring point is that it’s water-soluble and easily excreted in the urine.

Cinnamon acts as a slow-release form of sodium benzoate, says Pahan. His lab studies show that different compounds within cinnamon—including cinnamaldehyde, which gives the spice its distinctive flavor and aroma—are “metabolized into sodium benzoate in the liver. Sodium benzoate then becomes the active compound, which readily enters the brain and stimulates hippocampal plasticity.”

Those changes in the hippocampus—the brain’s main memory center—appear to be the mechanism by which cinnamon and sodium benzoate exert their benefits.

In their study, Pahan’s group first tested mice in mazes to separate the good and poor learners. Good learners made fewer wrong turns and took less time to find food. In analyzing baseline disparities between the good and poor learners, Pahan’s team found differences in two brain proteins. The gap was all but erased when cinnamon was given.

“Little is known about the changes that occur in the brains of poor learners,” says Pahan. “We saw increases in GABRA5 and a decrease in CREB in the hippocampus of poor learners. Interestingly, these particular changes were reversed by one month of cinnamon treatment.”

The researchers also examined brain cells taken from the mice. They found that sodium benzoate enhanced the structural integrity of the dendrites, the tree-like extensions of neurons that enable them to communicate with other brain cells.

High-quality clinical evidence on cinnamon is limited

Cinnamon, like many spices, has antioxidant and anti-inflammatory properties. So it could be expected to exert a range of health-boosting actions, and it does have a centuries-long history of medicinal use around the world.

But the U.S. National Center for Complementary and Integrative Health says that “high-quality clinical evidence to support the use of cinnamon for any medical condition is generally lacking.” Most of the clinical trials that have taken place have focused on the spice’s possible effect on blood sugar for people with diabetes. Little if any clinical research has been done on the spice’s possible brain-boosting properties.

Pahan hopes to change that. Based on the promising results from his group’s preclinical studies, he believes that “besides general memory improvement, cinnamon may target Alzheimer’s disease, mild cognitive impairment [a precursor to Alzheimer's], and Parkinson’s disease as well.” He is now talking with neurologists about planning a clinical trial on Alzheimer’s.

But Pahan warns that most cinnamon found in the store is the Chinese variety, which contains a compound called coumarin that may be toxic to the liver in high amounts. A person would likely have to eat tons of cinnamon to run into a problem, but just the same, Pahan recommends the Ceylon or Sri Lanka type, which is coumarin-free.

“Simply smelling the spice may not help because cinnamaldehyde should be metabolized into cinnamic acid and then sodium benzoate,” explains Pahan. “For metabolism [to occur], cinnamaldehyde should be within the cell.”

As for himself, Pahan isn’t waiting for clinical trials. He takes about a teaspoonful—about 3.5 grams—of cinnamon powder mixed with honey as a supplement every night.

Should the research on cinnamon continue to move forward, he envisions a similar remedy being adopted by struggling students worldwide.

Pahan’s study was funded by VA, the National Institutes of Health, and the Alzheimer’s Association.


Abstract of Cinnamon Converts Poor Learning Mice to Good Learners: Implications for Memory Improvement

This study underlines the importance of cinnamon, a commonly used natural spice and flavoring material, and its metabolite sodium benzoate (NaB) in converting poor learning mice to good learning ones. NaB, but not sodium formate, was found to upregulate plasticity-related molecules, stimulate NMDA- and AMPA-sensitive calcium influx and increase of spine density in cultured hippocampal neurons. NaB induced the activation of CREB in hippocampal neurons via protein kinase A (PKA), which was responsible for the upregulation of plasticity-related molecules. Finally, spatial memory consolidation-induced activation of CREB and expression of different plasticity-related molecules were less in the hippocampus of poor learning mice as compared to good learning ones. However, oral treatment of cinnamon and NaB increased spatial memory consolidation-induced activation of CREB and expression of plasticity-related molecules in the hippocampus of poor-learning mice and converted poor learners into good learners. These results describe a novel property of cinnamon in switching poor learners to good learners via stimulating hippocampal plasticity.


Americans worried about gene editing, brain chip implants, and synthetic blood

(iStock Photo)

Many in the general U.S. public are concerned about technologies to make people’s minds sharper and their bodies stronger and healthier than ever before, according to a new Pew Research Center survey of more than 4,700 U.S. adults.

The survey covers broad public reaction to scientific advances and examines public attitudes about the potential use of three specific emerging technologies for human enhancement.

The nationally representative survey centered on public views about gene editing that might give babies a lifetime with much reduced risk of serious disease, implantation of brain chips that potentially could give people a much improved ability to concentrate and process information, and transfusions of synthetic blood that might give people much greater speed, strength, and stamina.

A majority of Americans would be “very” or “somewhat” worried about gene editing (68%); brain chips (69%); and synthetic blood (63%), while no more than half say they would be enthusiastic about each of these developments.

Among the key data:

  • More say they would not want enhancements of their brains (66%) and of their blood (63%) than say they would want them (32% and 35%, respectively). U.S. adults are closely split on the question of whether they would want gene editing to help prevent diseases for their babies (48% would, 50% would not).
  • Majorities say these enhancements could exacerbate the divide between haves and have-nots. For instance, 73% believe inequality will increase if brain chips become available because initially they will be obtainable only by the wealthy. At least seven-in-ten predict each of these technologies will become available before they have been fully tested or understood.
  • Substantial shares say they are not sure whether these interventions are morally acceptable. But among those who express an opinion, more people say brain and blood enhancements would be morally unacceptable than say they are acceptable.
  • More adults say the downsides of brain and blood enhancements would outweigh the benefits for society than vice versa. Americans are a bit more positive about the impact of gene editing to reduce disease; 36% think it will have more benefits than downsides, while 28% think it will have more downsides than benefits.
  • Opinion is closely divided when it comes to the fundamental question of whether these potential developments are “meddling with nature” and cross a line that should not be crossed, or whether they are “no different” from other ways that humans have tried to better themselves over time. For example, 49% of adults say transfusions with synthetic blood for much improved physical abilities would be “meddling with nature,” while a roughly equal share (48%) say this idea is no different than other ways human have tried to better themselves.

The survey data reveal several patterns surrounding Americans’ views about these ideas:

  • People’s views about these human enhancements are strongly linked with their religiosity.
  • People are less accepting of enhancements that produce extreme changes in human abilities. And, if an enhancement is permanent and cannot be undone, people are less inclined to support it.
  • Women tend to be more wary than men about these potential enhancements from cutting-edge technologies.

The survey also finds some similarities between what Americans think about these three potential, future enhancements and their attitudes toward the kinds of enhancements already widely available today. As a point of comparison, this study examined public thinking about a handful of current enhancements, including elective cosmetic surgery, laser eye surgery, skin or lip injections, cosmetic dental procedures to improve one’s smile, hair replacement surgery and contraceptive surgery.

  • 61% of Americans say people are too quick to undergo cosmetic procedures to change their appearance in ways that are not really important, while 36% say “it’s understandable that more people undergo cosmetic procedures these days because it’s a competitive world and people who look more attractive tend to have an advantage.”
  • When it comes to views about elective cosmetic surgery, in particular, 34% say elective cosmetic surgery is “taking technology too far,” while 62% say it is an “appropriate use of technology.” Some 54% of U.S. adults say elective cosmetic surgery leads to about equal benefits and downsides for society, while 26% express the belief that there are more downsides than benefits, and just 16% say society receives more benefits than downsides from cosmetic surgery.

The survey data is drawn from a nationally representative survey of 4,726 U.S. adults conducted by Pew Research Center online and by mail from March 2-28, 2016.

Pew Research Center is a nonpartisan “fact tank” that informs the public about the issues, attitudes and trends shaping America and the world. It does not take policy positions. The center is a subsidiary of The Pew Charitable Trusts, its primary funder.

Whole-brain imaging method identifies common brain disorders

Synaptic-density images of the human brain, derived from PET scans. The sequential images are coronal slices (from front to back of the brain), sagittal slices (from left to right), and transverse images (from bottom to top). (credit: Video by Yale PET Center)

How many of the estimated 100 trillion synapses in your brain are actually functioning? It’s an important question for diagnosis and treatment of people with common brain disorders, such as epilepsy, Alzheimer’s disease, autism, depression, schizophrenia, and traumatic brain injury (TBI), but one that could not be answered, except in an autopsy (or an invasive surgical sample of a small area).

Now a Yale-led team of researchers has developed a way to measure the density of synapses in the brain using a PET (positron emission tomography) scan. They invented a radioligand (a radioactive tracer that, when injected into the body, binds to a type of protein and “lights up” during a PET scan) called [11C]UCB-J that allows for imaging a protein (called SV2A) that is uniquely present in all synapses in the brain.

PET scan reveals unilateral mesial temporal sclerosis in epilepsy patients (white arrows indicate loss of [11C]UCB-J binding in the mesial temporal lobe). (credit: Sjoerd J. Finnema et al./Science Translational Medicine)

They used this new imaging technique on baboons and humans, then applied mathematical tools to quantify synaptic density, and confirmed that the new method served as a marker for synaptic density. The method revealed synaptic loss in three patients with epilepsy compared to healthy individuals.

With this noninvasive method, researchers may now be able to follow the progression of many brain disorders by measuring changes in synaptic density over time or assess how well pharmaceuticals slow the loss of neurons.

Professor of radiology and biomedical imaging Richard Carson and his team plan future studies involving PET imaging of synapses for a variety of brain disorders.

Published July 20 in Science Translational Medicine, the study was supported in part by the Swebilius Foundation, UCB Pharma, and the National Center for Advancing Translational Science, a component of the National Institutes of Health.


Abstract of Imaging synaptic density in the living human brain

Chemical synapses are the predominant neuron-to-neuron contact in the central nervous system. Presynaptic boutons of neurons contain hundreds of vesicles filled with neurotransmitters, the diffusible signaling chemicals. Changes in the number of synapses are associated with numerous brain disorders, including Alzheimer’s disease and epilepsy. However, all current approaches for measuring synaptic density in humans require brain tissue from autopsy or surgical resection. We report the use of the synaptic vesicle glycoprotein 2A (SV2A) radioligand [11C]UCB-J combined with positron emission tomography (PET) to quantify synaptic density in the living human brain. Validation studies in a baboon confirmed that SV2A is an alternative synaptic density marker to synaptophysin. First-in-human PET studies demonstrated that [11C]UCB-J had excellent imaging properties. Finally, we confirmed that PET imaging of SV2A was sensitive to synaptic loss in patients with temporal lobe epilepsy. Thus, [11C]UCB-J PET imaging is a promising approach for in vivo quantification of synaptic density with several potential applications in diagnosis and therapeutic monitoring of neurological and psychiatric disorders.

Distinct stages of thinking revealed by brain activity patterns

Durations of four stages associated with problem solving and corresponding MRI images. In the four example problems (left), the arrows denote new mathematical operators that participants had learned. Color coding reflects the durations of the stages. (credit: John R. Anderson et al./Psychological Science)

Using neuroimaging data, Carnegie Mellon University researchers have identified four distinct stages of math problem solving, according to a new study published in the journal Psychological Science.

“How students were solving these kinds of problems was a total mystery to us until we applied these techniques,” says psychological scientist John Anderson, lead researcher on the study. “Now, when students are sitting there thinking hard, we can tell what they are thinking each second.”

Insights from this work may eventually be applied to the design of more effective classroom instruction, says Anderson.

Combining pattern analysis and hidden semi-Markov models

Anderson combined two analytical approaches — multivoxel pattern analysis (MVPA) and hidden semi-Markov models (HSMM) — to shed light on the different stages of thinking. MVPA has typically been used to identify momentary patterns of activation; adding HSMM, Anderson hypothesized, would yield information about how these patterns play out over time.
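
As a rough illustration of the idea (not the authors’ code), the sketch below reduces simulated voxel patterns to a few components, a stand-in for the MVPA step, and then fits a hidden Markov model over time to recover discrete stages and their durations. The paper’s hidden semi-Markov model additionally models stage durations explicitly, which this simplification omits; the sketch also assumes the scikit-learn and hmmlearn packages are available.

```python
# Minimal sketch: recover latent processing stages from a trial's fMRI-like
# time series by combining pattern reduction (PCA) with a Gaussian HMM.
import numpy as np
from sklearn.decomposition import PCA
from hmmlearn import hmm

rng = np.random.default_rng(0)

# Synthetic "brain activation": 200 scans x 500 voxels generated from
# 4 underlying stages, each with its own mean activation pattern.
n_scans, n_voxels, n_stages = 200, 500, 4
true_stages = np.repeat(np.arange(n_stages), n_scans // n_stages)
stage_patterns = rng.normal(size=(n_stages, n_voxels))
X = stage_patterns[true_stages] + 0.5 * rng.normal(size=(n_scans, n_voxels))

# MVPA-like step: reduce each scan's voxel pattern to a few components.
features = PCA(n_components=10).fit_transform(X)

# HMM step: infer a sequence of discrete stages over time.
model = hmm.GaussianHMM(n_components=n_stages, covariance_type="diag",
                        n_iter=100, random_state=0)
model.fit(features)
decoded = model.predict(features)

# Stage durations are the lengths of runs of the same decoded state.
boundaries = np.flatnonzero(np.diff(decoded)) + 1
durations = np.diff(np.concatenate(([0], boundaries, [n_scans])))
print("decoded stage boundaries (scan index):", boundaries)
print("stage durations (scans):", durations)
```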

The researchers applied this combined approach to neuroimaging data collected from participants as they solved specific types of math problems. To gauge whether the stages that were identified mapped on to actual stages of thinking, the researchers manipulated different features of the math problems; some problems required more effort in coming up with an appropriate solution plan and others required more effort in executing the solution.

The aim was to test whether these manipulations had the specific effects one would expect on the durations of the different stages.

Stages of cognition

The researchers identified four stages of cognition: encoding, planning, solving, and responding. The planning stage tended to be longer when the problem required more planning, and the solution stage tended to be longer when the solution was more difficult to execute, indicating that the method mapped onto real stages of cognition that were differentially affected by various features of the problems.

“Typically, researchers have looked at the total time to complete a task as evidence of the stages involved in performing that task and how they are related,” says Anderson.  “The methods in this paper allow us to measure the stages directly.”

Although the study focused specifically on mathematical problem solving, the method holds promise for broader application, the researchers argue. Using the same method with brain imaging techniques that have greater temporal resolution, such as EEG, could reveal even more detailed information about the various stages of cognitive processing.

This work was supported by a National Science Foundation grant and by a James S. McDonnell Foundation Scholar Award.


Abstract of Hidden Stages of Cognition Revealed in Patterns of Brain Activation

To advance cognitive theory, researchers must be able to parse the performance of a task into its significant mental stages. In this article, we describe a new method that uses functional MRI brain activation to identify when participants are engaged in different cognitive stages on individual trials. The method combines multivoxel pattern analysis to identify cognitive stages and hidden semi-Markov models to identify their durations. This method, applied to a problem-solving task, identified four distinct stages: encoding, planning, solving, and responding. We examined whether these stages corresponded to their ascribed functions by testing whether they are affected by appropriate factors. Planning-stage duration increased as the method for solving the problem became less obvious, whereas solving-stage duration increased as the number of calculations to produce the answer increased. Responding-stage duration increased with the difficulty of the motor actions required to produce the answer.