Using graphene to detect brain cancer cells

Brain cell culture. Left: Normal astrocyte brain cell; Right: cancerous Glioblastoma Multiforme (GBM) version, imaged by Raman spectroscopy. (credit: B. Keisham et al./ACS Appl. Mater. Interfaces)

By interfacing brain cells with graphene, University of Illinois at Chicago researchers have differentiated a single hyperactive Glioblastoma Multiforme cancerous astrocyte cell from a normal cell in the lab — pointing the way to developing a simple, noninvasive tool for early cancer diagnosis.

In the study, reported in the journal ACS Applied Materials & Interfaces, the researchers looked at lab-cultured human brain astrocyte cells taken from a mouse model. They compared normal astrocytes to their cancerous counterpart, the cells of the highly malignant brain tumor glioblastoma multiforme.

Illustration showing an astrocyte cell taken from a mouse brain draped over graphene (credit: B. Keisham et al./ACS Appl. Mater. Interfaces)

In a lab analysis, the cell is draped over graphene, explains Vikas Berry, associate professor and head of chemical engineering at UIC, who led the research along with Ankit Mehta, assistant professor of clinical neurosurgery in the UIC College of Medicine.

“The electric field around the cancer cell pushes away electrons in graphene’s electron cloud,” Berry said, which changes the vibration energy of the graphene’s carbon atoms. This change in vibration energy, which reflects the cell’s cancerous condition, can be pinpointed by Raman spectroscopy with a resolution of 300 nanometers, making it possible to assess the activity of a single cell. (Raman spectroscopy is a highly sensitive method commonly used in chemistry to identify molecules by how they scatter laser light.)
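To make the detection step concrete, here is a minimal sketch (not the authors' code) of classifying a cell by the shift of graphene's 2D Raman peak. Only the roughly 6 cm⁻¹ GBM-induced shift comes from the paper's abstract; the baseline peak position, the decision threshold, and the synthetic Lorentzian spectrum are illustrative assumptions.

```python
import numpy as np

# Assumed constants for illustration only
BASELINE_2D = 2690.0   # unperturbed 2D peak position (cm^-1), assumed
GBM_THRESHOLD = 4.0    # shift (cm^-1) separating GBM-like from astrocyte-like, assumed

def lorentzian(x, x0, gamma, amp):
    """Lorentzian line shape commonly used to model a Raman peak."""
    return amp * gamma**2 / ((x - x0)**2 + gamma**2)

def peak_position(wavenumbers, intensities):
    """Locate the peak by parabolic interpolation around the maximum bin.

    Assumes the peak lies in the interior of the measured range."""
    i = int(np.argmax(intensities))
    y0, y1, y2 = intensities[i - 1:i + 2]
    dx = wavenumbers[1] - wavenumbers[0]
    return wavenumbers[i] + 0.5 * dx * (y0 - y2) / (y0 - 2 * y1 + y2)

def classify_cell(wavenumbers, intensities):
    """Classify by how far the 2D peak has shifted from the baseline."""
    shift = peak_position(wavenumbers, intensities) - BASELINE_2D
    return "GBM-like" if shift >= GBM_THRESHOLD else "astrocyte-like"

# Synthetic spectrum of graphene under a GBM cell: 2D peak shifted +6 cm^-1
x = np.linspace(2650, 2750, 2001)
y = lorentzian(x, BASELINE_2D + 6.0, 15.0, 1.0)
print(classify_cell(x, y))  # prints "GBM-like"
```

In a real measurement the peak would be fit against noise and background; the point here is only that a sub-wavenumber shift in peak position is what carries the diagnostic signal.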

“Graphene is the thinnest known material and is very sensitive to whatever happens on its surface,” Berry said. The nanomaterial is composed of a single layer of carbon atoms linked in a hexagonal chicken-wire pattern, and all the atoms share a cloud of electrons moving freely about the surface.

Patient biopsies planned

The technique is now being studied in a mouse model of cancer, with results that are “very promising,” Berry said. Experiments with patient biopsies would be further down the road. “Once a patient has brain tumor surgery, we could use this technique to see if the tumor relapses,” Berry said. “For this, we would need a cell sample we could interface with graphene and look to see if cancer cells are still present.”

The same technique may also work to differentiate between other types of cells or the activity of cells. “We may be able to use it with bacteria to quickly see if the strain is Gram-positive or Gram-negative,” Berry said. “We may be able to use it to detect sickle cells.”

Earlier this year, Berry and other coworkers introduced nanoscale ripples in graphene, causing it to conduct differently in perpendicular directions, useful for electronics. They wrinkled the graphene by draping it over a string of rod-shaped bacteria, then vacuum-shrinking the germs. “We took the earlier work and sort of flipped it over,” Berry said. “Instead of laying graphene on cells, we laid cells on graphene and studied graphene’s atomic vibrations.”

Funding was provided by UIC.


Abstract of Cancer Cell Hyperactivity and Membrane Dipolarity Monitoring via Raman Mapping of Interfaced Graphene: Toward Non-Invasive Cancer Diagnostics

Ultrasensitive detection, mapping, and monitoring of the activity of cancer cells is critical for treatment evaluation and patient care. Here, we demonstrate that a cancer cell’s glycolysis-induced hyperactivity and enhanced electronegative membrane (from sialic acid) can sensitively modify the second-order overtone of in-plane phonon vibration energies (2D) of interfaced graphene via a hole-doping mechanism. By leveraging ultrathin graphene’s high quantum capacitance and responsive phononics, we sensitively differentiated the activity of interfaced Glioblastoma Multiforme (GBM) cells, a malignant brain tumor, from that of human astrocytes at a single-cell resolution. GBM cell’s high surface electronegativity (potential ∼310 mV) and hyperacidic-release induces hole-doping in graphene with a 3-fold higher 2D vibration energy shift of approximately 6 ± 0.5 cm–1 than astrocytes. From molecular dipole-induced quantum coupling, we estimate that the sialic acid density on the cell membrane increases from one molecule per ∼17 nm2 to one molecule per ∼7 nm2. Furthermore, graphene phononic response also identified enhanced acidity of cancer cell’s growth medium. Graphene’s phonon-sensitive platform to determine interfaced cell’s activity/chemistry will potentially open avenues for studying activity of other cancer cell types, including metastatic tumors, and characterizing different grades of their malignancy.

How to control a robotic arm with your mind — no implanted electrodes required

Research subjects at the University of Minnesota fitted with a specialized noninvasive EEG brain cap were able to move a robotic arm in three dimensions just by imagining moving their own arms (credit: University of Minnesota College of Science and Engineering)

Researchers at the University of Minnesota have achieved a “major breakthrough” that allows people to control a robotic arm in three dimensions, using only their minds. The research has the potential to help millions of people who are paralyzed or have neurodegenerative diseases.

The open-access study is published online today in Scientific Reports, a Nature research journal.


College of Science and Engineering, UMN | Noninvasive EEG-based control of a robotic arm for reach and grasp tasks

“This is the first time in the world that people can operate a robotic arm to reach and grasp objects in a complex 3D environment, using only their thoughts without a brain implant,” said Bin He, a University of Minnesota biomedical engineering professor and lead researcher on the study. “Just by imagining moving their arms, they were able to move the robotic arm.”

The noninvasive technique is based on a brain-computer interface (BCI) that uses electroencephalography (EEG) to record the weak electrical activity of the subject’s brain through a specialized, high-tech EEG cap fitted with 64 electrodes. A computer then converts the “thoughts” into actions using advanced signal processing and machine learning.
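As a hedged illustration of what such signal processing can involve in motor-imagery BCIs generally (this is not the Minnesota group's actual pipeline), the sketch below decodes an imagined movement from the lateralization of 8–12 Hz mu-rhythm power over the two motor cortices. The channel labels, sampling rate, and simulated signals are all assumptions.

```python
import numpy as np

FS = 256  # sampling rate in Hz, assumed

def band_power(signal, fs, lo=8.0, hi=12.0):
    """Power in the [lo, hi] Hz band, computed from the FFT power spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))**2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].sum()

def decode_direction(c3, c4, fs=FS):
    """Compare mu power at C3 (left motor cortex) vs C4 (right motor cortex).

    Imagining RIGHT-hand movement suppresses the mu rhythm at C3
    (contralateral cortex), and vice versa."""
    p3, p4 = band_power(c3, fs), band_power(c4, fs)
    return "right" if p3 < p4 else "left"

# Simulate 1 s of EEG: mu rhythm strong at C4 but suppressed at C3,
# as during imagined right-hand movement
t = np.arange(FS) / FS
rng = np.random.default_rng(0)
c3 = 0.2 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(FS)  # suppressed
c4 = 1.0 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(FS)  # strong mu
print(decode_direction(c3, c4))  # prints "right"
```

A deployed system learns per-user weights over many electrodes and frequency bands rather than comparing two channels, but the underlying feature, rhythm power modulated by imagined movement, is the same.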

An example of an implanted electrode array, allowing for a patient to control a robot arm with her thoughts (credit: UPMC)

The system solves problems with previous BCI systems. Early efforts used invasive electrode arrays implanted in the cortex to control robotic arms, or the patient’s own arm via neuromuscular electrical stimulation. These early systems carry the risk of post-surgery complications and infections, are difficult to keep working over time, and are impractical for broad use.

An EEG-based device that allows an amputee to grasp with a bionic hand, powered only by his thoughts (credit: University of Houston)

More recently, noninvasive EEG has been used. It doesn’t require risky, expensive surgery, and scalp electrodes are easy and fast to attach. For example, in 2015, University of Houston researchers developed an EEG-based system that let an amputee grasp selected objects, including a water bottle and a credit card, 80 percent of the time using a high-tech bionic hand fitted to his stump.

Other EEG-based systems for patients have included ones capable of controlling a lower-limb exoskeleton and a thought-controlled robotic exoskeleton for the hand. However, these systems have not been suitable for multi-dimensional control of a robotic arm, in which the patient can reach, grasp, and move an object in three-dimensional (3D) space.

Full 3D control of a robotic arm by just thinking — no implants

The new University of Minnesota EEG BCI system was developed to enable natural, unimpeded movements of a robotic arm in 3D space, such as picking up a cup, moving it around on a table, and drinking from it — similar to the robot arm controlled by implanted electrodes shown in the photo above, which a patient used to eat a chocolate bar.*

The new technology basically works in the same way as the robot system using implanted electrodes. It’s based on the motor cortex, the area of the brain that governs movement. When humans move, or think about a movement, neurons in the motor cortex produce tiny electric currents. Thinking about a different movement activates a new assortment of neurons, a phenomenon confirmed by cross-validation using functional MRI in He’s previous study. In the new study, the researchers used advanced signal processing to sort out which movement was intended.

User controls flight of a 3D virtual helicopter using brain waves

User controls flight of a 3D virtual helicopter using brain waves (credit: Bin He/University of Minnesota)

This robotic-arm research builds upon He’s research published in 2011, in which subjects were able to fly a virtual quadcopter using noninvasive EEG technology, and later research allowing for flying a physical quadcopter.

The next step of He’s research will be to further develop this BCI technology, using a brain-controlled robotic prosthetic limb attached to a person’s body for patients who have had a stroke or are paralyzed.

The University of Minnesota study was funded by the National Science Foundation (NSF), the National Center for Complementary and Integrative Health, National Institute of Biomedical Imaging and Bioengineering, and National Institute of Neurological Disorders and Stroke of the National Institutes of Health (NIH), and the University of Minnesota’s MnDRIVE (Minnesota’s Discovery, Research and InnoVation Economy) Initiative funded by the Minnesota Legislature.

* Eight healthy human subjects completed the experimental sessions of the study wearing the EEG cap. Subjects gradually learned to imagine moving their own arms without actually moving them to control a robotic arm in 3D space. They started from learning to control a virtual cursor on a computer screen and then learned to control a robotic arm to reach and grasp objects in fixed locations on a table. Eventually, they were able to move the robotic arm to reach and grasp objects in random locations on a table and move objects from the table to a three-layer shelf by only thinking about these movements.

All eight subjects could control a robotic arm to pick up objects in fixed locations with an average success rate above 80 percent and move objects from the table onto the shelf with an average success rate above 70 percent.


Abstract of Noninvasive Electroencephalogram Based Control of a Robotic Arm for Reach and Grasp Tasks

Brain-computer interface (BCI) technologies aim to provide a bridge between the human brain and external devices. Prior research using non-invasive BCI to control virtual objects, such as computer cursors and virtual helicopters, and real-world objects, such as wheelchairs and quadcopters, has demonstrated the promise of BCI technologies. However, controlling a robotic arm to complete reach-and-grasp tasks efficiently using non-invasive BCI has yet to be shown. In this study, we found that a group of 13 human subjects could willingly modulate brain activity to control a robotic arm with high accuracy for performing tasks requiring multiple degrees of freedom by combination of two sequential low dimensional controls. Subjects were able to effectively control reaching of the robotic arm through modulation of their brain rhythms within the span of only a few training sessions and maintained the ability to control the robotic arm over multiple months. Our results demonstrate the viability of human operation of prosthetic limbs using non-invasive BCI technology.

Macaque monkeys have the anatomy for human speech, so why can’t they speak?

Researchers used X-ray videos (right) to capture and trace the movements of the different parts of a macaque’s vocal anatomy — such as the tongue, lips, and larynx — during a number of orofacial behaviors. (credit: Illustration by Tecumseh Fitch, University of Vienna, and image courtesy of Asif Ghazanfar, Princeton Neuroscience Institute)

While they have a speech-ready vocal tract, primates can’t speak because they lack a speech-ready brain, contrary to the widespread opinion that they are limited by anatomy, researchers at Princeton University and their collaborators reported Dec. 9 in the open-access journal Science Advances.

The researchers reached this conclusion by first recording X-ray videos showing the movements of the different parts of a macaque’s vocal anatomy — such as the tongue, lips and larynx. They then converted that data into a computer model that could predict and simulate a macaque’s vocal range.


Audio file of researchers’ macaque vocal model uttering the same phrase “Will you marry me?,” synthesized with the same noisy source (credit: W. Tecumseh Fitch et al./Science Advances)


Audio file of an adult human female saying “Will you marry me?,” resynthesized with a noisy source (credit: W. Tecumseh Fitch et al./Science Advances)

The model was used to create the computer-generated audio clips above, which simulate what a macaque (top) might sound like if it could speak, compared with a human female (bottom), both producing the clearly audible phrase: “Will you marry me?”
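The vocal-tract model rests on the classic source-filter view of speech: a glottal source signal shaped by vocal-tract resonances (formants). The sketch below is a generic, self-contained formant synthesizer illustrating that idea, not the paper's model; the 100 Hz pitch and the roughly 700/1200 Hz formants (textbook values near the vowel /a/) are assumptions.

```python
import numpy as np

FS = 16000  # audio sampling rate in Hz, assumed

def resonator(signal, freq, bw, fs=FS):
    """Apply one formant as a second-order IIR resonator, sample by sample."""
    r = np.exp(-np.pi * bw / fs)          # pole radius from bandwidth
    theta = 2 * np.pi * freq / fs         # pole angle from center frequency
    a1, a2 = -2 * r * np.cos(theta), r**2
    b0 = 1 - r                            # rough gain normalization
    out = np.zeros_like(signal)
    for i in range(len(signal)):
        out[i] = b0 * signal[i] - a1 * out[i - 1] - a2 * out[i - 2]
    return out

# Glottal source: 100 Hz impulse train, 0.5 s long
src = np.zeros(FS // 2)
src[::FS // 100] = 1.0

# Cascade two formants typical of the vowel /a/
vowel = resonator(resonator(src, 700, 100), 1200, 100)
```

Changing the formant frequencies changes which vowel is heard; the study did the analogous computation using vocal-tract shapes measured from the X-ray videos to ask which vowels a macaque's tract could produce.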

The researchers found that a macaque would be able to produce comprehensible vowel sounds — and even full sentences — with its vocal tract if it had the neural ability to speak, according to co-corresponding author Asif Ghazanfar, a Princeton University professor of psychology and the Princeton Neuroscience Institute.

“The key conclusion from our study is that the basic primate vocal production apparatus is easily capable of producing five clearly distinguishable vowels … the worldwide norm for human languages, and many of the world’s languages make do with only three vowels,” the researchers note in their paper. “The common stop consonants (/p/, /b/, /k/, and /g/) along with a variety of other consonantal sounds (for example, /h/, /m/, and /w/) would be easily attainable by a macaque monkey.”

The work was supported by the National Institutes of Health and European Research Council advanced and starting grants.


Abstract of Monkey vocal tracts are speech-ready

For four decades, the inability of nonhuman primates to produce human speech sounds has been claimed to stem from limitations in their vocal tract anatomy, a conclusion based on plaster casts made from the vocal tract of a monkey cadaver. We used x-ray videos to quantify vocal tract dynamics in living macaques during vocalization, facial displays, and feeding. We demonstrate that the macaque vocal tract could easily produce an adequate range of speech sounds to support spoken language, showing that previous techniques based on postmortem samples drastically underestimated primate vocal capabilities. Our findings imply that the evolution of human speech capabilities required neural changes rather than modifications of vocal anatomy. Macaques have a speech-ready vocal tract but lack a speech-ready brain to control it.

Scientists track amazing restoration of communication in ‘minimally conscious’ patient

Nancy Smith Worthen doing art therapy with her daughter, Maggie, who was initially minimally conscious. (credit: Nancy Worthen)

In a three-year study of a severely brain-injured woman’s remarkable recovery, doctors for the first time measured aspects of brain structure and function before and after the recovery of communication — raising the question: could other minimally responsive or unresponsive chronic-care patients also regain organized, higher-level brain function?

Challenging what neurologists thought was possible, the pioneering study by Weill Cornell Medicine scientists was published Dec. 7 in Science Translational Medicine.

It began 21 months after Margaret Worthen suffered massive strokes. Most doctors diagnosed her as being in a vegetative state, unable to speak and unaware of herself and her environment.

Then, during her first visit with a new team led by senior study author Nicholas D. Schiff, M.D.,* doctors detected an unexpected ability to respond to their command to look down with her left eye.

Her ability was initially intermittent, but over the course of a year she developed a viable one-way communication system. She was able to respond to yes and no questions, such as “Is your name Margaret?” or “Is your father’s name Michael?” by moving her left eye down or up, but still lacked a method to ask questions or use a brain-computer interface.

But over the next nearly two years, the connections and function of the areas of her brain responsible for producing expressive language and responding to human speech were also gradually restored.


To measure the changes in Margaret’s brain as she recovered the ability to communicate, the researchers used a number of state-of-the-art imaging tools. The main technique, called diffusion tensor imaging, uses the diffusion of water molecules to generate contrast in magnetic resonance (MRI) images, enabling scientists to infer the degree of white-matter connection between specific areas.

Longitudinal structural remodeling of the brain, as identified by diffusion tensor imaging. The “fractional anisotropy” (FA) between cortical regions, which reflects the directional coherence of water diffusion within specific white matter fibers — measured (from 0 to 1) at the first (A) and last (B) time point of evaluation — is shown as colored lines between the connected brain regions (black squares), exhibiting a notable increase of connections. The FA changes reflect fiber density, axonal diameter, and myelination (covering) in white matter. The inferior temporal gyrus, which is associated with visual object recognition, is denoted with a white square. (credit: Daniel J. Thengone et al./Science Translation Medicine)
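For reference, fractional anisotropy is a standard closed-form function of the diffusion tensor's three eigenvalues. The small sketch below computes it for illustrative eigenvalue sets (not the patient's data): equal eigenvalues give FA near 0 (isotropic diffusion), and one dominant eigenvalue gives FA near 1 (coherent white-matter fiber).

```python
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    """Standard FA formula from the diffusion tensor's eigenvalues:
    FA = sqrt(3/2) * ||lambda - mean|| / ||lambda||, ranging 0 to 1."""
    lam = np.array([l1, l2, l3], dtype=float)
    mean = lam.mean()
    num = np.sqrt(((lam - mean)**2).sum())
    den = np.sqrt((lam**2).sum())
    return np.sqrt(1.5) * num / den

# Isotropic diffusion (e.g., free water): FA = 0
print(round(fractional_anisotropy(1.0, 1.0, 1.0), 3))  # prints 0.0
# Strongly directional diffusion (coherent fiber bundle): FA near 1
print(round(fractional_anisotropy(1.7, 0.2, 0.2), 3))  # prints 0.87
```

The FA increases mapped in the figure correspond to diffusion becoming more directionally constrained along reconnecting fiber tracts.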

The researchers also found a reconnection of Broca’s area — the region of the brain located in the frontal lobe of the dominant hemisphere that is associated with language and speech. In addition, the two hemispheres of the patient’s brain increased their connectivity. The findings suggest that reconnection began within the left hemisphere and extended to the right, corresponding to the restoration of Worthen’s ability to communicate.

Local and global changes in brain structure and function captured over time using multimodal imaging analyses. The time line shows the four time points of evaluation (T1 to T4) since the patient suffered her severe brain injury. From the first time point of evaluation (at 21 months after injury) to the last (at 54 months after injury), structural and functional brain imaging and electrophysiological studies revealed concordant findings in global remodeling and local changes, supporting the recovery of language abilities over the 2-year and 9-month period. Note the increase (more red) in white matter fiber density between 21 and 54 months. (credit: Daniel J. Thengone et al./Science Translation Medicine)

The researchers also used functional MRI (fMRI) to measure blood flow changes in response to brain activity while Ms. Worthen listened to a human voice speaking. They found that the functional recovery of Broca’s area over time corresponded with the change in structural connectivity. In addition, language-responsive brain regions showed increasing correlation across both hemispheres over time.


“She was being treated as if she was there, and she was present.”

What triggered the restoration of her brain’s expressive language system was her caregivers’ attempts to establish communication over the nearly three years of the study. Her recovered expressive language networks may also have triggered a restoration of her inner speech or inner dialogue, which could have reinforced those same networks, according to the study authors.

The findings raise important questions about how clinicians care for patients who are diagnosed as being in a vegetative or minimally conscious state, Schiff said. “We looked carefully at Margaret, made an observation that there was a way to try to connect with her, and the world changed for her. Her life changed because now she wasn’t being treated as if she may or may not be conscious. She was being treated as if she was there, and she was present.”

The discovery changed everything, according to her mother, Nancy Smith Worthen. For example, it opened the door to speech therapy, enabling caregivers to assess whether or not she was experiencing pain, and her mother to determine her daughter’s wants and needs. Margaret was able to produce paintings with her mother’s hand guiding her, which her mother credits with further stimulating her brain.

Margaret went on to develop enough facility with a computer interface that she was actually able to attend her five-year college reunion and hold conversations with classmates, assisted by a speech therapist. Her story is among those included in Rights Come to Mind: Brain Injury, Ethics, and the Struggle for Consciousness, by Joseph Fins, M.D., the E. William Davis Jr., MD, Professor of Medical Ethics and chief of the Division of Medical Ethics at Weill Cornell Medicine.

Regrettably, she died of complications from pneumonia in 2015.

“When I would ask Margaret if she wanted to do this work with Dr. Schiff, she would always say yes,” Nancy Smith Worthen said. “To have her brain be studied was the way she could do something in the world with her disability. And now, even after she’s gone, she’s still contributing. I’m just so proud of her.”

New hope for minimally conscious patients

Although the findings are from a single-subject study, scientists say they could apply to other patients, based on the proposed mechanism and previous research findings. Ten years ago, team members reported similar changes of increased connections between the two cerebral hemispheres in Terry Wallis, a minimally conscious patient who spontaneously recovered speech 19 years after a traumatic brain injury.

In addition, other studies have implicated injuries to the central thalamus, such as those Margaret experienced, as part of a general mechanism underlying impaired function in patients with disorders of consciousness.

This research was supported by the National Institutes of Health, the Charles A. Dana Foundation, the James S. McDonnell Foundation, the Jerold B. Katz Foundation and the Fred Plum Fellowship in Systems Neurology and Neurosciences.

* Schiff is the Jerold B. Katz Professor of Neurology and Neuroscience in the Feil Family Brain and Mind Research Institute at Weill Cornell Medicine.


Abstract of Local changes in network structure contribute to late communication recovery after severe brain injury

Spontaneous recovery of brain function after severe brain injury may evolve over a long time period and is likely to involve both structural and functional reorganization of brain networks. We longitudinally tracked the recovery of communication in a patient with severe brain injury using multimodal brain imaging techniques and quantitative behavioral assessments measured at the bedside over a period of 2 years and 9 months (beginning 21 months after the initial injury). Structural diffusion tensor imaging revealed changes in brain structure across interhemispheric connections and in local brain regions that support language and visuomotor function. These findings correlated with functional brain imaging using functional magnetic resonance imaging and positron emission tomography, which demonstrated increased language network recruitment in response to natural speech stimuli, graded increases in interhemispheric interactions of language-related frontal cortices, and increased cerebral metabolic activity in the language-dominant hemisphere. In addition, electrophysiological studies showed recovery of synchronization of sleep spindling activity. The observed changes suggest a specific mechanism for late recovery of communication after severe brain injury and provide support for the potential of activity-dependent structural and functional remodeling over long time periods.

Triggered by ultrasound, microbubbles open the blood-brain barrier to administer drugs without harming other areas of the body

Microbubbles containing a new fluorescent substance in their lipid coating, released in a designated point in the brain by ultrasound (credit: C. Sierra et al./Columbia University UEIL)

Using ultrasound to bypass the blood-brain barrier (BBB), Columbia University researchers have succeeded in releasing drugs only in the specific area of the brain where they are needed — not in the rest of the body. The goal is to help treat Parkinson’s, Alzheimer’s, and other neurodegenerative diseases without collateral damage.

The BBB is an impassable obstacle for 98% of drugs, which it treats as pathogens and blocks from passing from patients’ bloodstream into the brain. Using ultrasound (sound whose frequency is higher than the range of human hearing), drugs can be administered via an intravenous injection of innocuous lipid-coated gas microbubbles. That technique was perfected by Columbia University scientists, who reported their research in 2011 in PNAS and in 2014 in the Journal of Cerebral Blood Flow & Metabolism.

Microbubble (credit: C. Sierra et al./Journal of Cerebral Blood Flow & Metabolism)

With this method, ultrasound is focused on a specific region of the brain, causing the microbubbles there to oscillate and expand. When they reach the critical size of 8 microns, the blood–brain barrier near them opens, allowing the medicine circulating in the blood to pass through.
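The mechanism can be caricatured in a few lines of code: the barrier opens locally only once a bubble's ultrasound-driven size crosses the critical threshold. Only the 8-micron critical size comes from the article; the linear pressure-to-radius growth model and its numbers are made-up placeholders, not real bubble dynamics.

```python
CRITICAL_RADIUS_UM = 8.0  # critical bubble size for BBB opening (from the article)

def bubble_radius(rest_radius_um, pressure_kpa, gain_um_per_kpa=0.01):
    """Assumed linear expansion of peak bubble radius with acoustic pressure.

    Purely illustrative; real microbubble oscillation follows nonlinear
    bubble-dynamics equations, not a straight line."""
    return rest_radius_um + gain_um_per_kpa * pressure_kpa

def bbb_opens(rest_radius_um, pressure_kpa):
    """The local blood-brain barrier opens once the bubble reaches critical size."""
    return bubble_radius(rest_radius_um, pressure_kpa) >= CRITICAL_RADIUS_UM

print(bbb_opens(4.0, 300))  # 4 + 3.0 = 7.0 um -> prints False
print(bbb_opens(4.0, 450))  # 4 + 4.5 = 8.5 um -> prints True
```

The safety problem described next follows directly: push the pressure too far past this threshold and the bubble collapses instead of oscillating.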

This technique has been used experimentally for over ten years, but it has had a disadvantage: at excessive acoustic pressure, the microbubbles can suddenly collapse, destroying the lipid shell that contains the drug. The drug then disperses through the bloodstream, and the collapse can cause microdamage to the cerebral vessels and elsewhere.

Now, scientists at the Ultrasound Elasticity Imaging Laboratory (UEIL) at Columbia University have taken a major step forward by incorporating a fluorescent molecule called 5-dodecanoylaminofluorescein into the lipid coating of the microbubbles.

This molecule allows the scientists to determine the optimal pressure to prevent bursting of the microbubbles,* according to UEIL physicist Carlos Sierra, lead author of a paper on this new finding published in the current issue of the Journal of Cerebral Blood Flow & Metabolism.

Human trials planned

So far, the researchers have proven the efficacy of their technique in mice, confirming that the molecule reached the brain without affecting other parts of the animal. They also identified the acoustic-pressure thresholds at which the substance is guaranteed to safely reach its target in vivo.

“Defining these parameters means we can think about how to transfer the technique to human patients, although it has to be tested on monkeys first,” Sierra explains. “It could be applied to diseases like Parkinson’s, Alzheimer’s, Huntington’s diseases, brain tumors, strokes, multiple sclerosis, and amyotrophic lateral sclerosis, where we expect to see a very significant rise in the efficacy of treatment and a considerable reduction in side effects.”

Sierra is funded by a grant from the Berrié Foundation in Spain.

* By ex vivo fluorescence imaging and by in vivo transcranial passive cavitation detection.


Abstract of Lipid microbubbles as a vehicle for targeted drug delivery using focused ultrasound-induced blood–brain barrier opening

Focused ultrasound in conjunction with lipid microbubbles has fully demonstrated its ability to induce non-invasive, transient, and reversible blood–brain barrier opening. This study was aimed at testing the feasibility of our lipid-coated microbubbles as a vector for targeted drug delivery in the treatment of central nervous system diseases. These microbubbles were labeled with the fluorophore 5-dodecanoylaminofluorescein. Focused ultrasound targeted mouse brains in vivo in the presence of these microbubbles for trans-blood–brain barrier delivery of 5-dodecanoylaminofluorescein. This new approach, compared to previous studies of our group in which fluorescently labeled dextrans and microbubbles were co-administered, represents an appreciable improvement in safety outcome and targeted drug delivery. This novel technique allows the delivery of 5-dodecanoylaminofluorescein at the region of interest, unlike the alternative of systemic exposure. 5-dodecanoylaminofluorescein delivery was assessed by ex vivo fluorescence imaging and by in vivo transcranial passive cavitation detection. Stable and inertial cavitation doses were quantified. The cavitation dose thresholds for estimating, a priori, successful targeted drug delivery were identified for the first time, and inertial cavitation was concluded to be necessary for successful delivery. The findings presented herein indicate the feasibility and safety of the proposed microbubble-based targeted drug delivery, and that successful delivery can be predicted by cavitation detection in vivo.


 

Tracking large neural networks in the brain by making neurons glow

A neuron glows with bioluminescent light produced by a new genetically engineered sensor. (credit: Johnson Lab, Vanderbilt University)

A new kind of bioluminescent sensor developed by Vanderbilt scientists causes individual brain cells to glow in the dark, giving neuroscientists a new tool to track what’s happening in large neural networks in the brain.

The sensor is a genetically modified form of luciferase, the enzyme that fireflies and other species use to produce light.

Traditional electrical techniques for recording the activity of neurons are limited to small numbers of neurons at a time. The new sensors instead use a combination of optical techniques to record the activity of hundreds of neurons at the same time, according to Carl Johnson, Stevenson Professor of Biological Sciences, who headed the effort.

The research is a spinoff of the team’s research in bioluminescence* in the green alga Chlamydomonas. Johnson and his colleagues realized that if they could combine luminescence with optogenetics (which uses light to control cells, particularly neurons), they could create a powerful new tool for studying brain activity.

To create the new sensor, Johnson and his collaborators first genetically modified a type of luciferase obtained from a luminescent species of shrimp so that it would light up when exposed to calcium ions (whose level spikes briefly when a neuron receives an impulse from one of its neighbors). Then they attached the sensor to a virus that infects neurons and genetically modifies them to produce it.

They tested their new calcium sensor with an optogenetic protein called channelrhodopsin, which causes the calcium ion channels in the neuron’s outer membrane to open, flooding the cell with calcium. Using neurons grown in culture, they found that the luminescent enzyme reacted visibly to the influx of calcium produced when the probe was stimulated by brief flashes of visible light.
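Per the paper's abstract, the sensor's readout is a ratiometric BRET signal: calcium binding shifts emission from the luciferase (donor) channel toward the acceptor channel, so the acceptor/donor ratio tracks calcium independently of how much sensor a cell expresses. The Hill-type dose-response below and every constant in it are illustrative assumptions, not measured sensor parameters.

```python
def bret_ratio(ca_uM, ratio_min=0.3, ratio_max=2.0, kd_uM=1.0, n=2.0):
    """Assumed Hill-shaped BRET ratio as a function of Ca++ concentration (uM).

    ratio_min: acceptor/donor ratio with no calcium bound (assumed)
    ratio_max: ratio at saturating calcium (assumed)
    kd_uM, n:  assumed half-saturation constant and Hill coefficient
    """
    frac_bound = ca_uM**n / (kd_uM**n + ca_uM**n)
    return ratio_min + (ratio_max - ratio_min) * frac_bound

# Resting neuron: low calcium, emission mostly in the donor channel
print(round(bret_ratio(0.0), 2))   # prints 0.3
# During a calcium spike: emission shifts strongly toward the acceptor
print(round(bret_ratio(10.0), 2))  # prints 1.98
```

Because the output is a ratio of two simultaneously measured channels, it is insensitive to sensor expression level and to the slow decay of total luminescence, which is the usual motivation for ratiometric designs.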


Vanderbilt University | Bioluminescent sensor causes brain cells to glow in the dark

To determine how well their sensor works with larger numbers of neurons, they inserted it into brain slices from mouse hippocampus containing thousands of neurons. In this case, they flooded the slices with an increased concentration of potassium ions, which causes the cell’s ion channels to open. Again, they found that the sensor responded to the variations in calcium concentrations by brightening and dimming.


Vanderbilt University | Optical sensor illuminates activity of neural network

“We’ve shown that the approach works,” Johnson said. “Now we have to determine how sensitive it is. We have some indications that it is sensitive enough to detect the firing of individual neurons, but we have to run more tests to determine if it actually has this capability.”

The research is described in a paper published in the open-access journal Nature Communications on Oct. 27 and was funded by the National Institutes of Health, the National Science Foundation, and a grant from the Vanderbilt Brain Institute.

* There have also been efforts to record neurons optically using fluorescence, but fluorescence requires a strong external light source, which can heat the tissue and interfere with some biological processes, particularly light-sensitive ones, Johnson explained. It would also interfere with the light used in optogenetics. In contrast, bioluminescent light is produced by a biochemical reaction between the light-emitting pigment luciferin and the enzyme luciferase.


Abstract of Coupling optogenetic stimulation with NanoLuc-based luminescence (BRET) Ca++ sensing

Optogenetic techniques allow intracellular manipulation of Ca++ by illumination of light-absorbing probe molecules such as channelrhodopsins and melanopsins. The consequences of optogenetic stimulation would optimally be recorded by non-invasive optical methods. However, most current optical methods for monitoring Ca++ levels are based on fluorescence excitation that can cause unwanted stimulation of the optogenetic probe and other undesirable effects such as tissue autofluorescence. Luminescence is an alternate optical technology that avoids the problems associated with fluorescence. Using a new bright luciferase, we here develop a genetically encoded Ca++ sensor that is ratiometric by virtue of bioluminescence resonance energy transfer (BRET). This sensor has a large dynamic range and partners optimally with optogenetic probes. Ca++ fluxes that are elicited by brief pulses of light to cultured cells expressing melanopsin and to neurons expressing channelrhodopsin are quantified and imaged with the BRET Ca++ sensor in darkness, thereby avoiding undesirable consequences of fluorescence irradiation.

First glimpse of new concepts developing in the brain

This image illustrates how the study participants learned about the habitat and diet of eight animals, such as the “cytar” (not a real zoological name), and shows the brain regions where the new habitat knowledge (A, green) and diet knowledge (B, red and blue) was stored. (L refers to the left hemisphere of the brain.) (credit: Carnegie Mellon University)

Carnegie Mellon University (CMU) scientists have for the first time documented the actual formation of newly learned concepts inside the brain.

Predicting the unique brain activation patterns associated with names (credit: CMU)

Thanks to recent advances in brain imaging technology at CMU and elsewhere, it is now known how specific concrete objects are coded in the brain — neuroscientists can identify which object, such as a house or a banana, someone is thinking about from its functional magnetic resonance imaging (fMRI) brain signature.

Taking the next step, the researchers decided to observe the actual formation of these signatures in an experiment.

Neuroscientist Marcel Just, the D.O. Hebb University Professor of Cognitive Neuroscience in CMU’s Dietrich College of Humanities and Social Sciences, and Andrew Bauer, a Ph.D. student in psychology, taught 16 study participants diet and dwelling information about extinct animals to monitor the growth of the neural representations of eight new animal concepts in the participants’ brains.

Drawing on previous research findings, the team knew “where” to expect the new knowledge to emerge in the brains of their participants: information about dwellings and information about eating have each been shown to reside in their own sets of brain regions, which are common across people.

Over the course of an hour, the study participants were given a zoology mini-tutorial on the diets and habitats of the animals, while the scientists used fMRI to monitor the emergence of the concepts in the participants’ brains. As the new properties were taught, the activation levels in the eating regions and the dwelling regions changed.

Concept signatures

One important result was that after the zoology tutorial, each one of the eight animal concepts developed its own unique activation signature. This made it possible for a computer program to determine which of the eight animals a participant was thinking about at a given time. In effect, the program was reading their minds as they contemplated a brand-new thought.
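The article does not specify how the computer program worked; a minimal sketch of one plausible decoder, assuming each concept’s “signature” is a mean activation vector and a new scan is assigned to the best-correlating signature (all data here are synthetic, and the animal names other than the study’s “cytar” are placeholders):

```python
import numpy as np

# Hypothetical decoder sketch: assign a new multivoxel activation pattern to
# whichever learned concept signature (centroid) it correlates with most.
# The real study used fMRI voxel patterns; these vectors are synthetic.

rng = np.random.default_rng(0)
n_voxels = 50
animals = ["cytar", "animal_b", "animal_c"]  # "cytar" is the study's invented name

# Each concept's signature: a mean activation vector over training trials.
signatures = {a: rng.normal(size=n_voxels) for a in animals}

def decode(pattern: np.ndarray) -> str:
    """Return the concept whose signature best correlates with the pattern."""
    return max(signatures, key=lambda a: np.corrcoef(pattern, signatures[a])[0, 1])

# A noisy new scan of someone thinking about the "cytar":
test_pattern = signatures["cytar"] + 0.3 * rng.normal(size=n_voxels)
print(decode(test_pattern))  # "cytar" (with this seed)
```

The finding that animals with similar properties had similar signatures corresponds, in this sketch, to their centroid vectors being correlated with one another.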

Even though the animals had unique activation signatures, the animals that shared similar properties (such as a similar habitat) had similar activation signatures. That is, a resemblance between the properties of two animals resulted in a resemblance between their activation signatures. This finding shows that the activation signatures are not just arbitrary patterns, but are meaningful and interpretable.

“The activation signature of a concept is a composite of the different types of knowledge of the concept that a person has stored, and each type of knowledge is stored in its own characteristic set of regions,” Just said.

Another important result was that once a property of an animal was learned, it remained intact in the brain, even after other properties of the animal had been learned. This finding indicates the relative neural durability of what we learn.

Implications for learning and brain disorders

“Each time we learn something, we permanently change our brains in a systematic way,” said Bauer, the study’s lead author. “It was exciting to see our study successfully implant the information about extinct animals into the expected locations in the brain’s filing system.”

Just believes that the study provides a foundation for brain researchers to trace how a new concept makes its way into the brain from the words and graphics used to teach it. That suggests that it may be possible to assess the progress in learning a complicated concept like those in a high-school physics lesson. fMRI pattern analyses could diagnose which aspects of a concept students misunderstand (or lack), in a way that could guide the next iteration of instruction.

The results from this study also indicate that it may be possible to use a similar approach to understand the “loss” of knowledge in various brain disorders, such as dementia or Alzheimer’s disease, or due to brain injuries. The loss of a concept in the brain may be the reverse of the process that the study observed.


Abstract of Monitoring the growth of the neural representations of new animal concepts

Although enormous progress has recently been made in identifying the neural representations of individual object concepts, relatively little is known about the growth of a neural knowledge representation as a novel object concept is being learned. In this fMRI study, the growth of the neural representations of eight individual extinct animal concepts was monitored as participants learned two features of each animal, namely its habitat (i.e., a natural dwelling or scene) and its diet or eating habits. Dwelling/scene information and diet/eating-related information have each been shown to activate their own characteristic brain regions. Several converging methods were used here to capture the emergence of the neural representation of a new animal feature within these characteristic, a priori-specified brain regions. These methods include statistically reliable identification (classification) of the eight newly acquired multivoxel patterns, analysis of the neural representational similarity among the newly learned animal concepts, and conventional GLM assessments of the activation in the critical regions. Moreover, the representation of a recently learned feature showed some durability, remaining intact after another feature had been learned. This study provides a foundation for brain research to trace how a new concept makes its way from the words and graphics used to teach it, to a neural representation of that concept in a learner’s brain.

New unique brain ‘fingerprint’ method can identify a person with nearly 100% accuracy

A research team led by Carnegie Mellon University used diffusion MRI to measure the local connectome of 699 brains from five data sets. The local connectome comprises the point-by-point connections along all of the white matter pathways in the brain, as opposed to the connections between brain regions. To create a fingerprint, they used diffusion MRI data to calculate the distribution of water diffusion along the cerebral white matter’s fibers. (credit: Carnegie Mellon University)

Researchers have “fingerprinted” the white matter of the human brain using a new diffusion MRI method, mapping the brain’s connections (the connectome) at a more detailed level than ever before. They confirmed that these structural connections are unique to each individual and can identify a person with nearly 100% accuracy.

The new method could provide biomarkers to help researchers determine how disease, the environment, genetic and social factors, and different experiences impact the brain and how it changes over time.

“This means that many of your life experiences are somehow reflected in the connectivity of your brain,” said Timothy Verstynen, an assistant professor of psychology at Carnegie Mellon University and senior author of the study, published in open-access PLOS Computational Biology.

The local connectome: a personal biomarker

Demonstrating the level of detail, one local connectome fingerprint is shown in different zoom-in resolutions. A local connectome fingerprint has a total of 513,316 entries of scalar values. (credit: Fang-Cheng Yeh et al./PLoS Comput Biol)

For the study, the researchers used diffusion MRI to measure the local connectome of 699 brains from five data sets. The local connectome is the point-by-point connections along all of the white matter pathways in the brain, as opposed to the connections between brain regions. To create a fingerprint for each person, they used the diffusion MRI data to calculate the distribution of water diffusion along the cerebral white matter’s fibers.*

The measurements revealed the local connectome is highly unique to an individual and can be used as a personal biomarker for human identity. To test the uniqueness, the team ran more than 17,000 identification tests. With nearly 100 percent accuracy, they were able to tell whether two local connectomes, or brain “fingerprints,” came from the same person or not.
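The footnote below describes each fingerprint as a high-dimensional vector of diffusion-density values, compared via pairwise distances. A toy sketch of such a same-person/different-person test, using synthetic vectors, a scaled-down dimensionality, and a hypothetical distance threshold:

```python
import numpy as np

# Toy sketch of fingerprint-based identification (synthetic data).
# Each fingerprint is a long vector of local diffusion-density values;
# repeat scans of the same person should be far more similar than
# scans from two different people.

rng = np.random.default_rng(1)
dim = 513_316 // 1000  # scaled down; real fingerprints have 513,316 entries

def rescan(fingerprint, noise=0.05):
    """Simulate a repeat scan: same fingerprint plus small measurement noise."""
    return fingerprint + noise * rng.normal(size=fingerprint.size)

person_a = rng.normal(size=dim)
person_b = rng.normal(size=dim)

def same_person(f1, f2, threshold=0.5):
    """Classify a scan pair by root-mean-square difference per entry."""
    return np.linalg.norm(f1 - f2) / np.sqrt(f1.size) < threshold

assert same_person(person_a, rescan(person_a))  # within-subject pair
assert not same_person(person_a, person_b)      # between-subject pair
```

The study’s 17,000+ identification tests amount to running such a within-versus-between comparison across every available scan pair.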

Curiously, they discovered that identical twins share only about 12 percent of structural connectivity patterns. The brain’s unique local connectome is also sculpted over time, changing at an average rate of 13 percent every 100 days.

Decoding unexplored connectome data

“The most exciting part is that we can apply this new method to existing data and reveal new information that is already sitting there unexplored. The higher specificity allows us to reliably study how genetic and environmental factors shape the human brain over time, thereby opening a gate to understand how the human brain functions or dysfunctions,” said Fang-Cheng (Frank) Yeh, the study’s first author and now an assistant professor of neurological surgery at the University of Pittsburgh.

Researchers can thus start to look at how shared experiences — poverty, for example, or a common pathological disease — are reflected in brain connections, which could lead to new medical biomarkers for certain health concerns.

The team included researchers at the U.S. Army Research Laboratory, the University of Pittsburgh, the National Taiwan University, and the University of California, Santa Barbara. The Army Research Laboratory funded this research, which was supported by NSF BIGDATA, WU-Minn Consortium, the Ruentex Group, the Ministry of Economic Affairs, Taiwan, and National Institutes of Health.

* The local connectome is defined as the degree of connectivity between adjacent voxels within a white matter fascicle measured by the density of the diffusing water. A collection of these density measurements provides a high-dimensional feature vector that can describe the unique configuration of the structural connectome within an individual, providing a novel approach for comparing differences and similarities between individuals as pairwise distances. To evaluate the performance of this approach, the researchers used four independently collected diffusion MRI datasets with repeat scans at different time intervals (ranging from the same day to a year) to examine whether local connectome fingerprints can reliably distinguish the difference between within-subject and between-subject scans.


Abstract of Quantifying Differences and Similarities in Whole-Brain White Matter Architecture Using Local Connectome Fingerprints

Quantifying differences or similarities in connectomes has been a challenge due to the immense complexity of global brain networks. Here we introduce a noninvasive method that uses diffusion MRI to characterize whole-brain white matter architecture as a single local connectome fingerprint that allows for a direct comparison between structural connectomes. In four independently acquired data sets with repeated scans (total N = 213), we show that the local connectome fingerprint is highly specific to an individual, allowing for an accurate self-versus-others classification that achieved 100% accuracy across 17,398 identification tests. The estimated classification error was approximately one thousand times smaller than fingerprints derived from diffusivity-based measures or region-to-region connectivity patterns for repeat scans acquired within 3 months. The local connectome fingerprint also revealed neuroplasticity within an individual reflected as a decreasing trend in self-similarity across time, whereas this change was not observed in the diffusivity measures. Moreover, the local connectome fingerprint can be used as a phenotypic marker, revealing 12.51% similarity between monozygotic twins, 5.14% between dizygotic twins, and 4.51% between non-twin siblings, relative to differences between unrelated subjects. This novel approach opens a new door for probing the influence of pathological, genetic, social, or environmental factors on the unique configuration of the human connectome.

Test of transcranial direct current stimulation (tDCS) of the brain shows improved multitasking performance

Placement of five anode electrodes (left) over the dorsolateral prefrontal cortex and the cathode (right) over the right shoulder (to avoid spurious cognitive effects from cortical excitability) (credit: Justin Nelson et al./ Front. Hum. Neurosci.)

In an experiment at the Air Force Research Laboratory, Wright-Patterson Air Force Base in Ohio, researchers* have found that transcranial direct-current stimulation (tDCS) of the brain can improve people’s multitasking skills and help avoid the drop in performance that comes with information overload.

The study was reported in a pre-publication paper in the open-access journal Frontiers in Human Neuroscience. It was motivated by the observation that various Air Force operations, such as remotely piloted and manned aircraft operations, require a human operator to monitor and respond to multiple events simultaneously over a long period of time. “With the monotonous nature of these tasks, the operator’s performance may decline shortly after their work shift commences,” according to the researchers.

The study set out to determine at what input (“baud”) rate — the difficulty level — multitasking throughput capacity plateaus, and whether tDCS can improve performance, raising that capacity. The researchers used a common stimulation site for augmenting cognitive function via tDCS: the left dorsolateral prefrontal cortex (DLPFC), which has been associated with working memory, attention, vigilance, planning, and reasoning.**

Multitasking test

User interface of the multitasking Multi-Attribute Task Battery (MATB) (credit: Justin Nelson et al./ Front. Hum. Neurosci.)

The Air Force Multi-Attribute Task Battery (AF-MATB), based on the MATB multitasking test developed by NASA, requires the human operator (or experimental subject) to simultaneously monitor and respond to four independent tasks — system monitoring, communications, tracking, and resource management — on one computer screen.

There were 20 participants in the test, split evenly between experimental and control groups. The experimental group received 2 mA of tDCS for 30 minutes. To emulate the skin sensations of active tDCS, the control group received sham stimulation at 2 mA for only 30 seconds.

For example, the system-monitoring task (upper left) involved watching two lights (rectangles), keeping the left light on (displaying green) and the right light off (displaying black), as well as four dials. A dial’s triangular yellow marker would shift randomly toward the top or bottom of the dial and begin oscillating around a new location; when this occurred, participants pressed the corresponding F1–F4 key to reset the dial.

Augmented multitasking

The total input began at 0.6 bits/s and increased to 2.2 bits/s in increments of 0.2 bits/s every four minutes. The study found that with tDCS stimulation, the subjects’ multitasking throughput (or channel capacity) plateaued (maxed out) at 2.0 bits/s of input, whereas the control group (no stimulation) plateaued near 1.3 bits/s — showing that tDCS augmented multitasking throughput by a factor of about 1.5 in this experiment.
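The reported schedule and plateaus can be checked arithmetically. The ceiling model below is a simplification (output tracks input until a hard capacity limit), with the two plateau values, 2.0 and 1.3 bits/s, taken from the study:

```python
# Reconstruct the task's input-rate schedule: 0.6 bits/s rising in 0.2 bits/s
# increments to 2.2 bits/s, with four minutes spent at each level.
schedule = [round(0.6 + 0.2 * i, 1) for i in range(9)]
print(schedule)           # [0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2]
print(len(schedule) * 4)  # 36 minutes total

# Simplified throughput model: output follows input until it hits a
# capacity ceiling (the plateau values reported in the study).
def throughput(input_rate, capacity):
    return min(input_rate, capacity)

tdcs_out = [throughput(r, 2.0) for r in schedule]  # tDCS group plateaus at 2.0
sham_out = [throughput(r, 1.3) for r in schedule]  # control plateaus near 1.3
print(max(tdcs_out) / max(sham_out))  # ~1.5x augmentation
```

This also makes clear what the “factor of 1.5” refers to: the ratio of the two plateau capacities (2.0 / 1.3 ≈ 1.54), not a 1.5x change in any single measurement.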

The finding potentially has important implications for high-workload environments that provide information to the operator via a wide range of stimuli. Additional research may be conducted to evaluate the robustness of these observed effects.

However, as reported by KurzweilAI, a recent open letter by 39 neuroscience researchers, published in the Annals of Neurology, warns that “outcomes of tDCS can be unpredictable … the benefits that are seen after tDCS in certain mental abilities may come at the expense of others.”

* Human factors engineers at the Wright State University Department of Biomedical, Industrial and Human Factors Engineering were also involved in the research.

** The left dorsolateral prefrontal cortex was selected as the stimulation site because this region of the brain is associated with sustained attention, working memory, decision-making, planning, and reasoning, all of which are directly involved in multitasking. The left DLPFC was chosen instead of the right DLPFC to enhance performance by right-handed participants, but stimulation of the right DLPFC may also influence working-memory performance, according to the researchers.

Could these three brain regions be the seat of consciousness?

(Left) A coma-specific region in the left pontine tegmentum in the brainstem (red). (Right) Multiple nuclei implicated in arousal surround the coma-specific region, including the dorsal raphe (yellow dots), locus coeruleus (black dots), and parabrachial nucleus (red dashed line). (credit: David B. Fischer, MD et al./Neurology)

An international team of neurologists led by Beth Israel Deaconess Medical Center (BIDMC) has identified three specific regions of the brain that appear to be critical components of consciousness: one in the brainstem, involved in arousal; and two cortical regions involved in awareness.

To pinpoint the exact regions, the neurologists first analyzed 36 patients with brainstem lesions (injuries). They discovered that a specific small area of the brainstem — the pontine tegmentum (specifically, the rostral dorsolateral portion) — was significantly associated with coma.* (The brainstem connects the brain with the spinal cord and is responsible for the sleep/wake cycle and cardiac and respiratory rates.)

Human connectome (credit: Human Connectome Project)

Once they had identified the area involved in arousal, they next looked to see which cortical regions were connected to this arousal area and also become disconnected in disorders of consciousness. To do that, they used the Human Connectome — a sort of wiring diagram of the brain.

Thanks to the connectome, “we can look at not just the location of lesions, but also their connectivity,” said Michael D. Fox, MD, PhD, Director of the Laboratory for Brain Network Imaging and Modulation and the Associate Director of the Berenson-Allen Center for Noninvasive Brain Stimulation at BIDMC.

The coma-specific brainstem region is functionally connected to clusters in the anterior insula (AI) and pregenual anterior cingulate cortex (pACC). Voxels within these nodes were functionally connected to all 12 coma lesions, and were more functionally connected to coma lesions than control lesions. (credit: David B. Fischer, MD et al./Neurology)

They discovered two connected cortical regions: the pregenual anterior cingulate cortex (pACC) and the left ventral anterior insula (AI). Both regions had previously been implicated in arousal and awareness.

“Over the past year, researchers in my lab have used this approach to understand visual and auditory hallucinations, impaired speech, and movement disorders,” said Fox. “A collaborative team of neuroscientists and physicians had the insight and unique expertise needed to apply this approach to consciousness.”

Consciousness network

Finally, the team investigated whether this brainstem-cortex network was functioning in another subset of patients with disorders of consciousness, including coma. Using a special type of MRI scan, the scientists found that their newly identified “consciousness network” was disrupted in patients with impaired consciousness.

Published recently in the journal Neurology, the findings — bolstered by data from rodent studies — suggest that the network between the brainstem and these two cortical regions plays a role in maintaining human consciousness.

A next step, Fox notes, may be to investigate other data sets in which patients lost consciousness to find out if the same, different, or overlapping neural networks are involved.

“This is most relevant if we can use these networks as a target for brain stimulation for people with disorders of consciousness,” said Fox. “If we zero in on the regions and network involved, can we someday wake someone up who is in a persistent vegetative state? That’s the ultimate question.”

Researchers at the University of Iowa Carver College of Medicine, the Brain and Spine Institute (Institut du Cerveau et de la Moelle épinière-ICM) at Hôpital Pitié-Salpêtrière, University and University Hospital of Liège, the Comparative Neuroanatomy Lab and the Centre for Integrative Neuroscience in Tübingen, the Max Planck Institute for Biological Cybernetics, and Massachusetts General Hospital were also involved.

This work was supported by the Howard Hughes Medical Institute, the Parkinson’s Disease Foundation, NIH, American Academy of Neurology/American Brain Foundation, Sidney R. Baer, Jr. Foundation, Harvard Catalyst, the Belgian National Funds for Scientific Research, the European Commission, the James McDonnell Foundation, the European Space Agency, Mind Science Foundation, the French Speaking Community Concerted Research Action, the Public Utility Foundation “Université Européenne du Travail,” Fondazione Europea di Ricerca Biomedica, the University and University Hospital of Liège, the Center for Integrative Neuroscience, and the Max Planck Society.

* 12 lesions led to coma and 24 (the control group) did not. Ten of the 12 coma-inducing brainstem lesions involved this area, while just one of the 24 control lesions did.


Abstract of A human brain network derived from coma-causing brainstem lesions

Objective: To characterize a brainstem location specific to coma-causing lesions, and its functional connectivity network.

Methods: We compared 12 coma-causing brainstem lesions to 24 control brainstem lesions using voxel-based lesion-symptom mapping in a case-control design to identify a site significantly associated with coma. We next used resting-state functional connectivity from a healthy cohort to identify a network of regions functionally connected to this brainstem site. We further investigated the cortical regions of this network by comparing their spatial topography to that of known networks and by evaluating their functional connectivity in patients with disorders of consciousness.

Results: A small region in the rostral dorsolateral pontine tegmentum was significantly associated with coma-causing lesions. In healthy adults, this brainstem site was functionally connected to the ventral anterior insula (AI) and pregenual anterior cingulate cortex (pACC). These cortical areas aligned poorly with previously defined resting-state networks, better matching the distribution of von Economo neurons. Finally, connectivity between the AI and pACC was disrupted in patients with disorders of consciousness, and to a greater degree than other brain networks.

Conclusions: Injury to a small region in the pontine tegmentum is significantly associated with coma. This brainstem site is functionally connected to 2 cortical regions, the AI and pACC, which become disconnected in disorders of consciousness. This network of brain regions may have a role in the maintenance of human consciousness.