Brain scan better than polygraph in spotting lies

Significant clusters in the fMRI exam are located in the anterior cingulate cortex, bilateral inferior frontal, inferior parietal, and medial temporal gyri, and the precuneus. (credit: Perelman School of Medicine at the University of Pennsylvania/Journal of Clinical Psychiatry)

Scanning people’s brains with fMRI (functional magnetic resonance imaging) was significantly more effective at spotting lies than a traditional polygraph test, researchers in the Perelman School of Medicine at the University of Pennsylvania found in a study published in the Journal of Clinical Psychiatry.

When someone is lying, areas of the brain linked to decision-making are activated, and these light up on an fMRI scan for experts to see. While laboratory studies have shown that fMRI can detect deception with up to 90 percent accuracy, estimates of polygraph accuracy range wildly, from chance to 100 percent, depending on the study.

The Penn study is the first to compare the two modalities in the same individuals in a blinded and prospective fashion. The approach adds scientific data to the long-standing debate about this technology and builds the case for more studies investigating its potential real-life applications, such as evidence in criminal legal proceedings.

Neuroscientists better than polygraph examiners at detecting deception

Researchers from Penn’s departments of Psychiatry and Biostatistics and Epidemiology found that neuroscience experts without prior experience in lie detection, using fMRI data, were 24 percent more likely to detect deception than professional polygraph examiners reviewing polygraph recordings. In both fMRI and polygraph, participants took a standardized “concealed information” test.*

A polygraph monitors an individual’s electrical skin conductivity, heart rate, and respiration during a series of questions. Polygraphy is based on the assumption that incidents of lying are marked by upward or downward spikes in these measurements.

“Polygraph measures reflect complex activity of the peripheral nervous system that is reduced to only a few parameters, while fMRI is looking at thousands of brain clusters with higher resolution in both space and time. While neither type of activity is unique to lying, we expected brain activity to be a more specific marker, and this is what I believe we found,” said the study’s lead author, Daniel D. Langleben, MD, a professor of Psychiatry.

fMRI Correct and Polygraphy Incorrect. (Left) All 3 fMRI raters correctly identified number 7 as the concealed number. (Right) Representative fragments from the electrodermal activity polygraphy channel correspond to responses about the same concealed numbers. The gray bars mark the time of polygraph examiner’s question (“Did you write the number [X]?”), and the thin black bars immediately following indicate the time of participant’s “No” response. All 3 polygraph raters incorrectly identified number 6 as the Lie Item. (credit: Daniel D. Langleben et al./Journal of Clinical Psychiatry)

In one example in the paper, fMRI clearly shows increased brain activity when a participant who picked the number seven is asked whether that is their number. Experts who reviewed the corresponding polygraph recording incorrectly identified the number six as the lie: the trace for the number six shows high peaks after the participant is asked about it several times in a row, suggesting that answer was the lie.

The paper also demonstrates an example in which the scenario was reversed; neither the fMRI nor the polygraph experts were perfect. Overall, however, fMRI experts were 24 percent more likely to detect the lie in any given participant.
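For readers unfamiliar with the statistic behind that comparison, the short sketch below shows how a relative risk of 1.24 corresponds to being “24 percent more likely.” The detection rates used here are hypothetical illustrations, not the study’s per-rater figures.

```python
# Illustrative only: how a relative risk of 1.24 relates to the
# "24 percent more likely" claim. The rates below are made up.

def relative_risk(p_test: float, p_reference: float) -> float:
    """Ratio of two detection probabilities."""
    return p_test / p_reference

p_polygraph = 0.60   # hypothetical: polygraph raters pick the concealed number 60% of the time
p_fmri = 0.744       # hypothetical: fMRI raters pick it 74.4% of the time

rr = relative_risk(p_fmri, p_polygraph)
print(f"relative risk = {rr:.2f}")                                   # 1.24
print(f"fMRI raters are {100 * (rr - 1):.0f}% more likely to detect the lie")
```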

Combination of technologies was 100 percent correct

Beyond the accuracy comparison, the authors made another important observation: in the 17 cases in which polygraph and fMRI agreed on the concealed number, they were 100 percent correct. Such high precision of positive determinations could be especially important in U.S. and British criminal proceedings, where avoiding false convictions takes absolute precedence over catching the guilty, the authors said.

They cautioned that while this does suggest the two modalities may be complementary if used in sequence, their study was not designed to test combined use of both modalities, and the unexpected observation needs to be confirmed experimentally before any practical conclusions can be drawn.

The study was supported by the U.S. Army Research Office, No Lie MRI, Inc, and the University of Pennsylvania Center for MRI and Spectroscopy.

* To compare the two technologies, 28 participants were given the so-called “Concealed Information Test” (CIT). CIT is designed to determine whether a person has specific knowledge by asking carefully constructed questions, some of which have known answers, and looking for responses that are accompanied by spikes in physiological activity. Sometimes referred to as the Guilty Knowledge Test, CIT has been developed and used by polygraph examiners to demonstrate the effectiveness of their methods to subjects prior to the actual polygraph examination.

In the Penn study, a polygraph examiner asked participants to secretly write down a number between three and eight. Next, each person was administered the CIT while either hooked to a polygraph or lying inside an MRI scanner. Each of the participants had both tests, in a different order, a few hours apart. During both sessions, they were instructed to answer “no” to questions about all the numbers, making one of the six answers a lie.  The results were then evaluated by three polygraph and three neuroimaging experts separately and then compared to determine which technology was better at detecting the fib.


Abstract of Polygraphy and Functional Magnetic Resonance Imaging in Lie Detection: A Controlled Blind Comparison Using the Concealed Information Test

Objective: Intentional deception is a common act that often has detrimental social, legal, and clinical implications. In the last decade, brain activation patterns associated with deception have been mapped with functional magnetic resonance imaging (fMRI), significantly expanding our theoretical understanding of the phenomenon. However, despite substantial criticism, polygraphy remains the only biological method of lie detection in practical use today. We conducted a blind, prospective, and controlled within-subjects study to compare the accuracy of fMRI and polygraphy in the detection of concealed information. Data were collected between July 2008 and August 2009.

Method: Participants (N = 28) secretly wrote down a number between 3 and 8 on a slip of paper and were questioned about what number they wrote during consecutive and counterbalanced fMRI and polygraphy sessions. The Concealed Information Test (CIT) paradigm was used to evoke deceptive responses about the concealed number. Each participant’s preprocessed fMRI images and 5-channel polygraph data were independently evaluated by 3 fMRI and 3 polygraph experts, who made an independent determination of the number the participant wrote down and concealed.

Results: Using a logistic regression, we found that fMRI experts were 24% more likely (relative risk = 1.24, P < .001) to detect the concealed number than the polygraphy experts. Incidentally, when 2 out of 3 raters in each modality agreed on a number (N = 17), the combined accuracy was 100%.

Conclusions: These data justify further evaluation of fMRI as a potential alternative to polygraphy. The sequential or concurrent use of psychophysiology and neuroimaging in lie detection also deserves new consideration.

Researchers restore leg movement in primates using wireless neural interface

Brain-spinal interface bypasses spinal cord injuries in rhesus macaques, restoring nearly normal intentional walking movement (credit: Jemère Ruby)

An international team of scientists has used a wireless “brain-spinal interface” to bypass spinal cord injuries in a pair of rhesus macaques, restoring nearly normal intentional walking movement to a temporarily paralyzed leg.

The finding could help in developing a similar system to rehabilitate humans who have had spinal cord injuries.

The system uses signals recorded from a pill-sized electrode array implanted in the motor cortex of the brain to trigger coordinated electrical stimulation of nerves in the spine that are responsible for locomotion.

Monkeys were implanted with a microelectrode array into the leg area of the left motor cortex. (1) During recordings, a wireless module transmitted broadband neural signals to a control computer. A raster plot recorded three successive gait cycles. Each line represents spiking events identified from one electrode, while the horizontal axis indicates time. (2) A decoder running on the control computer identified motor states from these neural signals. (3) These motor states triggered electrical spinal cord stimulation protocols, using an implanted pulse generator with real-time triggering capabilities. (4) The stimulator was connected to a spinal implant targeting specific dorsal roots of the lumbar spinal cord. Electromyographic (muscle) signals of an extensor (gray) and flexor (black) muscles acting at the ankle recorded over three successive gait cycles are shown together with a stick diagram decomposition of leg movements during the stance (gray) and swing (black) phases of gait. (credit: Jemère Ruby)

A wireless neurosensor, developed in the neuroengineering lab of Brown University professor Arto Nurmikko, sends the signals gathered by the brain chip wirelessly to a computer that decodes them and sends them wirelessly back to an electrical spinal stimulator implanted in the lumbar spine, below the area of injury.

That electrical stimulation, delivered in patterns coordinated by the decoded brain signals, drives the spinal nerves that control locomotion.
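As a very rough illustration of that control loop (not the actual EPFL/Brown decoder), the sketch below reads a single made-up neural feature, decodes a motor state, and selects a stimulation protocol. All function names, thresholds, and timings are invented for this example.

```python
# Hypothetical sketch of the decode-and-stimulate loop described above.
import time
import random

def read_firing_rate() -> float:
    """Stand-in for wireless broadband neural data reduced to one feature."""
    return random.uniform(0.0, 1.0)

def decode_motor_state(firing_rate: float) -> str:
    """Toy decoder: map the feature to 'flexion', 'extension', or 'rest'."""
    if firing_rate > 0.7:
        return "flexion"
    if firing_rate < 0.3:
        return "extension"
    return "rest"

def trigger_stimulation(state: str) -> None:
    """Stand-in for the implanted pulse generator's protocol selection."""
    protocols = {"flexion": "swing-phase protocol", "extension": "stance-phase protocol"}
    if state in protocols:
        print(f"stimulate lumbar dorsal roots: {protocols[state]}")

for _ in range(5):                 # the real system runs continuously in real time
    trigger_stimulation(decode_motor_state(read_firing_rate()))
    time.sleep(0.02)               # assumed ~50 Hz update rate
```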

The sensor technology was developed in part for investigational use in humans by the BrainGate collaboration, a research team that includes Brown, Case Western Reserve University, Massachusetts General Hospital, the Providence VA Medical Center, and Stanford University. The technology is being used in ongoing pilot clinical trials, and was used previously in a study in which people with tetraplegia were able to operate a robotic arm simply by thinking about the movement of their own hand. (credit: Brown University)

The ability to transmit brain signals wirelessly was critical to this work with monkeys, the researchers note. Wired brain-sensing systems limit freedom of movement, which in turn limits the information researchers are able to gather about locomotion.

Despite current limitations, the research sets the stage for future studies in primates and, at some point, potentially as a rehabilitation aid in humans, the researchers suggest.

The study, published in the journal Nature, was performed by scientists and neuroengineers in a collaboration led by Ecole Polytechnique Federale Lausanne (EPFL) in Switzerland, together with Brown University, Medtronic and Fraunhofer ICT-IMM in Germany.

The research was funded by the European Community’s Seventh Framework Program, the International Foundation for Research in Paraplegia, a Starting Grant from the European Research Council, the Wyss Centre in Geneva, a Marie Curie Fellowship, Marie Curie COFUND EPFL fellowships, a Medtronic Morton Cure Paralysis Fund fellowship, the NanoTera.ch Programme (SpineRepair), the National Centre of Competence in Research in Robotics Sinergia program, Sino-Swiss Science and Technology Cooperation, and the Swiss National Science Foundation.


Abstract of A brain–spine interface alleviating gait deficits after spinal cord injury in primates

Spinal cord injury disrupts the communication between the brain and the spinal circuits that orchestrate movement. To bypass the lesion, brain–computer interfaces have directly linked cortical activity to electrical stimulation of muscles, and have thus restored grasping abilities after hand paralysis. Theoretically, this strategy could also restore control over leg muscle activity for walking. However, replicating the complex sequence of individual muscle activation patterns underlying natural and adaptive locomotor movements poses formidable conceptual and technological challenges. Recently, it was shown in rats that epidural electrical stimulation of the lumbar spinal cord can reproduce the natural activation of synergistic muscle groups producing locomotion. Here we interface leg motor cortex activity with epidural electrical stimulation protocols to establish a brain–spine interface that alleviated gait deficits after a spinal cord injury in non-human primates. Rhesus monkeys (Macaca mulatta) were implanted with an intracortical microelectrode array in the leg area of the motor cortex and with a spinal cord stimulation system composed of a spatially selective epidural implant and a pulse generator with real-time triggering capabilities. We designed and implemented wireless control systems that linked online neural decoding of extension and flexion motor states with stimulation protocols promoting these movements. These systems allowed the monkeys to behave freely without any restrictions or constraining tethered electronics. After validation of the brain–spine interface in intact (uninjured) monkeys, we performed a unilateral corticospinal tract lesion at the thoracic level. As early as six days post-injury and without prior training of the monkeys, the brain–spine interface restored weight-bearing locomotion of the paralysed leg on a treadmill and overground. The implantable components integrated in the brain–spine interface have all been approved for investigational applications in similar human research, suggesting a practical translational pathway for proof-of-concept studies in people with spinal cord injury.

Scientists find key protein for spinal cord repair in zebrafish


Duke University | Spinal Cord Injury and Regeneration in Zebrafish

Duke University scientists have found a protein that’s important for the ability of the freshwater zebrafish’s spinal cord to heal completely after being severed. Their study, published Nov. 4  in the journal Science, could generate new leads for what is a paralyzing and often fatal injury for humans.

Searching for the repair molecules

Schematic of the multistep process of spinal cord regeneration in zebrafish. (Injury response:) When the zebrafish’s severed spinal cord undergoes regeneration, a bridge forms. (Bridging:) The first cells extend projections tens of times their own length and connect across the wide gulf of the injury. Nerve cells follow. (Remodeling:) By 8 weeks, new nerve tissue has filled the gap and the animals have fully reversed their severe paralysis. (credit: Mayssa H. Mokalled et al./Science)

To understand what molecules were potentially responsible for this remarkable process, the scientists searched for the genes whose activity abruptly changed after spinal cord injury. Of dozens of genes strongly activated by injury, seven coded for proteins that are secreted from cells. One of these, called CTGF (connective tissue growth factor), was intriguing because its levels rose in the supporting cells, or glia, that formed the bridge in the first two weeks following injury.
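As a rough sketch of that screening step, the following filters a hypothetical table of injury-induced genes down to strongly up-regulated, secreted candidates. The gene list, fold changes, and threshold are invented for illustration; the actual study used a genome-wide profiling screen.

```python
# Toy version of the screen: rank genes by expression change after injury,
# then keep only those annotated as secreted factors.
import pandas as pd

genes = pd.DataFrame({
    "gene":        ["ctgfa", "geneA", "geneB", "geneC"],
    "fold_change": [6.2,      5.1,     1.1,     4.8],     # injured vs. uninjured (made up)
    "secreted":    [True,     False,   True,    True],
})

candidates = genes[(genes["fold_change"] > 2.0) & genes["secreted"]]
print(candidates.sort_values("fold_change", ascending=False))
```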

What’s more, when they tried deleting CTGF genetically, those fish failed to regenerate.

The human CTGF protein is 87% similar in its amino acid building blocks to the zebrafish form. So when the team added the human version of CTGF to the injury site in fish, it boosted regeneration and the fish swam better by two weeks after the injury.

The second half of the CTGF protein seems to be the key to the healing, the group found. CTGF is a large protein, made of four smaller parts, with more than one function, so a therapy based on only the critical portion might be easier to deliver and more specific for spinal injuries.

Mouse studies next

Unfortunately, CTGF is probably not sufficient on its own for people to regenerate their own spinal cords, according to the study’s senior investigator, Kenneth Poss, professor of cell biology and director of the Regeneration Next initiative at Duke. Healing is more complex in mammals, in part because scar tissue forms around the injury. The Poss team expects studies of CTGF to move into mammals like mice to determine when they express CTGF, and in what cell types.

These experiments may reveal some answers to why zebrafish can regenerate whereas mammals cannot. It may be a matter of how the protein is controlled rather than its make-up, Poss said.

The group also plans to follow up on other proteins secreted after injury that were identified in their initial search, which may provide additional hints into the zebrafish’s secrets of regeneration.

Scientists at the Max Planck Institute for Heart and Lung Research were also involved in the research, which was supported by the National Institutes of Health, the Max Planck Society, and Duke University School of Medicine.


Abstract of Injury-induced ctgfa directs glial bridging and spinal cord regeneration in zebrafish

Unlike mammals, zebrafish efficiently regenerate functional nervous system tissue after major spinal cord injury. Whereas glial scarring presents a roadblock for mammalian spinal cord repair, glial cells in zebrafish form a bridge across severed spinal cord tissue and facilitate regeneration. We performed a genome-wide profiling screen for secreted factors that are up-regulated during zebrafish spinal cord regeneration. We found that connective tissue growth factor a (ctgfa) is induced in and around glial cells that participate in initial bridging events. Mutations in ctgfa disrupted spinal cord repair, and transgenic ctgfa overexpression and local delivery of human CTGF recombinant protein accelerated bridging and functional regeneration. Our study reveals that CTGF is necessary and sufficient to stimulate glial bridging and natural spinal cord regeneration.

Neuroscience review reframes ‘mind-wandering’ and mental illness

Brain regions within the core default network subsystem (DNCORE) that are more active during task-unrelated thought than during task-related thought (credit: Kalina Christoff et al./Nature Reviews Neuroscience)

In a review of neuroscience literature from more than 200 journals, published in Nature Reviews Neuroscience, a University of British Columbia-led team has proposed a radical new framework for understanding “mind wandering” and mental illness.

Within this framework, spontaneous thought processes — including mind-wandering, creative thinking, and dreaming — arise when thoughts are relatively free from deliberate and automatic constraints. Mind-wandering is not far from creative thinking.

“We propose that mind-wandering isn’t an odd quirk of the mind,” said Kalina Christoff, Ph.D., the review’s lead author and a professor of psychology at UBC. “Rather, it’s something that the mind does when it enters into a spontaneous mode. Without this spontaneous mode, we couldn’t do things like dream or think creatively.”

How mental disorders disrupt mind-wandering and creativity

“Mind-wandering is typically characterized as thoughts that stray from what you’re doing, but we believe this definition is limited,” Christoff said. Instead, mind-wandering happens when thoughts flow freely (that is, when the mind is in its default state).

In contrast, mental disorders, such as anxiety and depression, arise when two types of constraints — automatic and deliberate — disrupt free movement of thoughts and ideas.

(Top) Brain regions more active during REM sleep vs. during waking rest. (Bottom) Brain regions more active during creative-idea generation vs. during creative-idea valuation (credit: Kalina Christoff et al./Nature Reviews Neuroscience)

“Sometimes the mind moves freely from one idea to another, but at other times it keeps coming back to the same idea, drawn by some worry or emotion,” said Christoff. “Understanding what makes thought free and what makes it constrained is crucial because it can help us understand how thoughts move in the minds of those diagnosed with mental illness.”

The team’s meta-analysis reframes mental disorders such as anxiety and ADHD as extensions of normal variation in thinking, the researchers explain. The anxious mind helps us focus on what’s personally important, while the ADHD mind is marked by excessive variability in thought (too much spontaneity).

This new perspective on mind-wandering could help psychologists gain a more in-depth understanding of mental illnesses, said review co-author Zachary Irving, a postdoctoral scholar at the University of California, Berkeley, who has ADHD.

The review was also co-authored by a researcher from Cornell University and one from the University of Colorado Boulder.


Abstract of Mind-wandering as spontaneous thought: a dynamic framework

Most research on mind-wandering has characterized it as a mental state with contents that are task unrelated or stimulus independent. However, the dynamics of mind-wandering — how mental states change over time — have remained largely neglected. Here, we introduce a dynamic framework for understanding mind-wandering and its relationship to the recruitment of large-scale brain networks. We propose that mind-wandering is best understood as a member of a family of spontaneous-thought phenomena that also includes creative thought and dreaming. This dynamic framework can shed new light on mental disorders that are marked by alterations in spontaneous thought, including depression, anxiety and attention deficit hyperactivity disorder.

‘Passive haptic learning’ (PHL) system teaches Morse code without trying

Study participants learned to tap Morse code into Google Glass after four hours (credit: Georgia Tech/Caitlyn Seim)

Researchers at the Georgia Institute of Technology have developed a “passive haptic learning” (PHL) system that teaches people Morse code within four hours, using a series of vibrations felt near the ear. Participants wearing Google Glass learned it without paying attention to the signals — they played games while feeling the taps and hearing the corresponding letters.

They were 94 percent accurate keying a sentence that included every letter of the alphabet and 98 percent accurate writing codes for every letter.

Georgia Tech researchers have used this same method to teach people braille and how to play the piano, and to improve hand sensation in people with partial spinal cord injury.

The team used Glass for this study because it has both a built-in speaker and tapper (a bone-conduction transducer).

Unconscious learning

In the study, participants played a game while feeling vibration taps between their temple and ear. The taps represented the dots and dashes of Morse code and passively “taught” users through their tactile senses — even while they were distracted by the game.

The taps were created when researchers sent a very low-frequency signal to Glass’s speaker system. At less than 15 Hz, the signal was below hearing range but, because it was played very slowly, the sound was felt as a vibration.
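To make that concrete, here is a hypothetical sketch of building a sub-audible tap waveform for a single Morse character. The 10 Hz carrier, dot length, and sample rate are assumptions for illustration, not the study’s actual parameters.

```python
# Generate a low-frequency sine burst per dot or dash, with silence between
# elements, so the "sound" is felt as a vibration rather than heard.
import numpy as np

SAMPLE_RATE = 8000          # Hz (assumed)
TAP_FREQ = 10               # Hz, below the audible range
DOT = 0.1                   # seconds per dot (assumed); a dash is three dots
MORSE = {"a": ".-", "s": "...", "7": "--..."}

def burst(seconds: float) -> np.ndarray:
    t = np.arange(int(seconds * SAMPLE_RATE)) / SAMPLE_RATE
    return np.sin(2 * np.pi * TAP_FREQ * t)

def silence(seconds: float) -> np.ndarray:
    return np.zeros(int(seconds * SAMPLE_RATE))

def letter_waveform(letter: str) -> np.ndarray:
    pieces = []
    for symbol in MORSE[letter]:
        pieces.append(burst(DOT if symbol == "." else 3 * DOT))
        pieces.append(silence(DOT))          # gap between dots and dashes
    return np.concatenate(pieces)

wave = letter_waveform("7")
print(f"{len(wave) / SAMPLE_RATE:.2f} s of vibration for '7' ({MORSE['7']})")
```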

Half of the participants in the study felt the vibration taps and heard a voice prompt for each corresponding letter. The other half — the control group — felt no taps to help them learn.

Participants were tested throughout the study on their knowledge of Morse code and their ability to type it.  After less than four hours of feeling every letter, everyone was challenged to type the alphabet in Morse code in a final test.

The findings demonstrate silent, eyes-free text entry on a mobile device without a keyboard (participants touched on Glass’s touchpad to enter Morse code during quizzes), but the system could be adapted to allow other mobile or wearable devices to be used (credit: Georgia Tech/Caitlyn Seim)

The control group was accurate only half the time.  Those who felt the passive cues were nearly perfect.

The method “shows that PHL lowers the barrier to learn text-entry methods — something we need for smartwatches and any text-entry that doesn’t require you to look at your device or keyboard,” said Georgia Tech Professor Thad Starner.

“This research also shows that other common devices with an actuator could be used for passive haptic learning,” he says. “Your smartwatch, Bluetooth headset, fitness tracker or phone.”

The researchers’ next study will go a step further, investigating whether PHL can teach people how to type on a QWERTY keyboard.

The work is supported in part by the National Science Foundation.


Abstract of Tactile taps teach rhythmic text entry: passive haptic learning of morse code

Passive Haptic Learning (PHL) is the acquisition of sensorimotor skills with little or no active attention to learning. This technique is facilitated by wearable computing, and applications are diverse. However, it is not known whether rhythm-based information can be conveyed passively. In a 12 participant study, we investigate whether Morse code, a rhythm-based text entry system, can be learned through PHL using the bone conduction transducer on Google Glass. After four hours of exposure to passive stimuli while focusing their attention on a distraction task, PHL participants achieved a 94% accuracy rate keying a pangram (a phrase with all the letters of the alphabet) using Morse code on Glass’s trackpad versus 53% for the control group. Most PHL participants achieved 100% accuracy before the end of the study. In written tests, PHL participants could write the codes for each letter of the alphabet with 98% accuracy versus 59% for control. When perceiving Morse code, PHL participants also performed significantly better than control: 83% versus 46% accuracy.

New study challenges consensus that math abilities are innate

How do you decide which cart to get behind to check out faster? (credit: iStock)

A new theory on how the brain first learns basic math could alter approaches to identifying and teaching students with math-learning disabilities, according to Ben-Gurion University of the Negev (BGU) researchers.

The widely accepted “sense of numbers” theory suggests that people are born with an innate ability to recognize different quantities, and that this ability improves with age. Early math curricula, and tools for diagnosing math-specific learning disabilities such as dyscalculia (a brain disorder that makes it hard to make sense of numbers and math concepts), have been based on that consensus.

Other theories suggest that a “sense of magnitude” that enables people to discriminate between different “continuous magnitudes,” such as the density of two groups of apples or total surface area of two pizza trays, is even more basic and automatic than a sense of numbers.

Not just numbers

The new study, published in the Behavioral and Brain Sciences journal, combines these approaches. The researchers argue that understanding the relationship between size and number is what’s critical for the development of higher math abilities. By combining number and size (e.g., area, density, and perimeter), we can make faster and more efficient decisions.

For example: how do you choose the quickest checkout line at the grocery store? While most people intuitively get behind someone with a less filled-looking cart, a fuller-looking cart with fewer, larger items may actually be quicker. The way we make these kinds of decisions reveals that people use the natural correlation between number and continuous magnitudes to compare options — not just numbers.
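A toy calculation makes the point: an estimate that folds item size into the item count can reverse the choice a raw count would suggest. All of the numbers below are made up.

```python
# Combine number (item count) with a continuous magnitude (time per item,
# a stand-in for item size) to estimate which checkout line clears faster.

def checkout_time(items: int, avg_scan_s: float, avg_bag_s_per_item: float) -> float:
    """Very rough per-cart time estimate in seconds."""
    return items * (avg_scan_s + avg_bag_s_per_item)

bulky_cart  = checkout_time(items=8,  avg_scan_s=4.0, avg_bag_s_per_item=3.0)   # looks full: few large items
sparse_cart = checkout_time(items=25, avg_scan_s=3.0, avg_bag_s_per_item=2.0)   # looks emptier: many small items

print(f"fuller-looking cart: {bulky_cart:.0f} s, emptier-looking cart: {sparse_cart:.0f} s")
# 56 s vs. 125 s: the cart that looks fuller clears faster.
```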

Other factors, such as language and cognitive control, also play a role in acquiring numerical concepts, they note.

“This new approach will allow us to develop diagnostic tools that do not require any formal math knowledge, thus allowing diagnosis and treatment of dyscalculia before school age,” says Tali Leibovich, PhD, of the University of Western Ontario, who led the study.

“If we are able to understand how the brain learns math, and how it understands numbers and more complex math concepts that shape the world we live in, we will be able to teach math in a more intuitive and enjoyable way,” says Leibovich.

The study was supported by the European Research Council under the European Union’s Seventh Framework Programme.


Abstract of From ‘sense of number’ to ‘sense of magnitude’ – The role of continuous magnitudes in numerical cognition

In this review, we are pitting two theories against each other: the more accepted theory—the ‘number sense’ theory—suggesting that a sense of number is innate and non-symbolic numerosity is being processed independently of continuous magnitudes (e.g., size, area, density); and the newly emerging theory suggesting that (1) both numerosities and continuous magnitudes are processed holistically when comparing numerosities, and (2) a sense of number might not be innate. In the first part of this review, we discuss the ‘number sense’ theory. Against this background, we demonstrate how the natural correlation between numerosities and continuous magnitudes makes it nearly impossible to study non-symbolic numerosity processing in isolation from continuous magnitudes, and therefore the results of behavioral and imaging studies with infants, adults and animals can be explained, at least in part, by relying on continuous magnitudes. In the second part, we explain the ‘sense of magnitude’ theory and review studies that directly demonstrate that continuous magnitudes are more automatic and basic than numerosities. Finally, we present outstanding questions. Our conclusion is that there is not enough convincing evidence to support the number sense theory anymore. Therefore, we encourage researchers not to assume that number sense is simply innate, but to put this hypothesis to the test, and to consider if such an assumption is even testable in light of the correlation of numerosity and continuous magnitudes.

Neurons from stem cells replace damaged neurons, precisely rewiring into the brain

As shown in this in vivo two-photon image, neuronal transplants (blue) connect with host neurons (yellow) in the adult mouse brain in a highly specific manner, rebuilding neural networks lost upon injury. (credit: Sofia Grade/LMU/Helmholtz Zentrum München)

Embryonic neural stem cells transplanted into damaged areas of the visual cortex of adult mice were able to differentiate into pyramidal cells — forming normal synaptic connections, responding to visual stimuli, and integrating into neural networks — researchers at LMU Munich, the Max Planck Institute for Neurobiology in Martinsried and the Helmholtz Zentrum München have demonstrated.

The adult human brain has very little ability to compensate for nerve-cell loss, so biomedical researchers and clinicians are exploring the possibility of using transplanted nerve cells to replace neurons that have been irreparably damaged as a result of trauma or disease, leading to a lifelong neurological deficit.

Previous studies have suggested there is potential to remedy at least some of the clinical symptoms resulting from acquired brain disease through the transplantation of fetal nerve cells into damaged neuronal networks. However, it has not been clear whether transplanted intact neurons could be sufficiently integrated to result in restored function of the damaged network.

Stem-cell-derived neurons mirror connections of damaged neurons

Now, in a study published in Nature, the researchers have found that transplanted embryonic nerve cells properly differentiated into pyramidal cells, forming normal synaptic connections, responding to visual stimuli, and carrying out the tasks performed by the damaged cells (videos here).

The researchers were also “astounded” to find that the replacement neurons grew axons throughout the adult brain, reaching proper target areas, and receiving V1-specific (from the primary visual cortex) inputs from host neurons — precisely the same inputs that the original neurons had received.

This includes neocortical circuits that normally never incorporate new neurons in the adult brain.

In addition, after 2–3 months, the transplanted neurons were fully integrated in the brain, showing functional properties indistinguishable from the original neurons.

The study was supported by funding from the German Research Foundation (DFG). 


Abstract of Transplanted embryonic neurons integrate into adult neocortical circuits

The ability of the adult mammalian brain to compensate for neuronal loss caused by injury or disease is very limited. Transplantation aims to replace lost neurons, but the extent to which new neurons can integrate into existing circuits is unknown. Here, using chronic in vivo two-photon imaging, we show that embryonic neurons transplanted into the visual cortex of adult mice mature into bona fide pyramidal cells with selective pruning of basal dendrites, achieving adult-like densities of dendritic spines and axonal boutons within 4–8 weeks. Monosynaptic tracing experiments reveal that grafted neurons receive area-specific, afferent inputs matching those of pyramidal neurons in the normal visual cortex, including topographically organized geniculo-cortical connections. Furthermore, stimulus-selective responses refine over the course of many weeks and finally become indistinguishable from those of host neurons. Thus, grafted neurons can integrate with great specificity into neocortical circuits that normally never incorporate new neurons in the adult brain.

A deep-learning system to alert companies before litigation

(credit: Intraspexion, Inc.)

Imagine a world with less litigation.

That’s the promise of a deep-learning system developed by Intraspexion, Inc. that can alert company or government attorneys to forthcoming risks before getting hit with expensive litigation.

“These risks show up in internal communications such as emails,” said CEO Nick Brestoff. “In-house attorneys have been blind to these risks, so they are stuck with managing the lawsuits.”

Example of employment discrimination indicators buried in emails (credit: Intraspexion, Inc.)

Intraspexion’s first deep learning model has been trained to find the risks of employment discrimination. “What we can do with employment discrimination now we can do with other litigation categories, starting with breach of contract and fraud, and then scaling up to dozens more,” he said.
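To give a flavor of the idea (this is not Intraspexion’s model, and it uses a simple classical classifier rather than deep learning), the sketch below scores toy emails for employment-discrimination risk. The example emails and labels are invented.

```python
# Minimal stand-in for a litigation-risk text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "He said I was passed over for promotion because of my age",
    "Please send the Q3 budget spreadsheet before Friday",
    "HR ignored my complaint about discriminatory comments from my manager",
    "Lunch meeting moved to 1pm in conference room B",
]
labels = [1, 0, 1, 0]   # 1 = possible employment-discrimination risk (toy labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

new_email = ["My supervisor keeps making remarks about my disability"]
print(model.predict_proba(new_email)[0][1])   # toy risk score between 0 and 1
```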

Brestoff claims that deep learning enables a huge paradigm shift for the legal profession. “We’re going straight after the behemoth of litigation. This shift doesn’t make attorneys better able to know the law; it makes them better able to know the facts, and to know them early enough to do something about them.”

And to prevent huge losses. “As I showed in my book, Preventing Litigation: An Early Warning System, using 10 years of cost (aggregated as $1.6 trillion) and caseload data (about 4 million lawsuits, federal and state, for that same time frame), the average cost per case was at least about $350,000,” Brestoff explained to KurzweilAI in an email.

Brestoff, who studied engineering at Caltech before attending law school at USC, will present Intraspexion’s deep learning system in a talk at the AI World Conference & Exposition 2016, November 7–9 in San Francisco.

 

Engineers reveal fabrication process for revolutionary transparent graphene neural sensors

A blue light shines through a transparent, implantable medical sensor onto a brain. The invention may help neural researchers better view brain activity. (credit: Justin Williams research group)

In an open-access paper published Thursday (Oct. 13, 2016) in the journal Nature Protocols, University of Wisconsin–Madison engineers have published details of how to fabricate and use neural microelectrocorticography (μECoG) arrays made with transparent graphene in applications in electrophysiology, fluorescent microscopy, optical coherence tomography, and optogenetics.

Graphene is one of the most promising candidates for transparent neural electrodes, because the material has a UV to IR transparency of more than 90%, in addition to its high electrical and thermal conductivity, flexibility, and biocompatibility, the researchers note in the paper. That allows for simultaneous high-resolution imaging and optogenetic control.

Left: Optical coherence tomography (OCT) image captured through an implanted transparent graphene electrode array, allowing for simultaneous observation of cells immediately beneath electrode sites during optical or electrical stimulation. Right: Optical coherence tomography (OCT) image taken with an implanted conventional opaque platinum electrode array. (credit: Dong-Wook Park et al./Nature Protocols)

The procedures in the paper are for a graphene μECoG electrode array implanted on the surface of the cerebral cortex and can be completed within 3–4 weeks by an experienced graduate student, according to the researchers. But this protocol “may be amenable to fabrication and testing of a multitude of other electrode arrays used in biological research, such as penetrating neural electrode arrays to study deep brain, nerve cuffs that are used to interface with the peripheral nervous system (PNS), or devices that interface with the muscular system,” according to the paper.

The researchers first announced the breakthrough in the open-access journal Nature Communications in 2014, as KurzweilAI reported. Now, the UW–Madison researchers are looking at ways to improve and build upon the technology. They also are seeking to expand its applications from neuroscience into areas such as research of stroke, epilepsy, Parkinson’s disease, cardiac conditions, and many others. And they hope other researchers do the same.

Funding for the initial research came from the Reliable Neural-Interface Technology program at the U.S. Defense Advanced Research Projects Agency.

The research was led by Zhenqiang (Jack) Ma, the Lynn H. Matthias Professor and Vilas Distinguished Achievement Professor in electrical and computer engineering at UW–Madison and Justin Williams, the Vilas Distinguished Achievement Professor in biomedical engineering and neurological surgery at UW–Madison.

Researchers at the University of Wisconsin-Milwaukee, Medtronic PLC Neuromodulation, the University of Washington, and Mahidol University in Bangkok, Thailand were also involved.


Abstract of Fabrication and utility of a transparent graphene neural electrode array for electrophysiology, in vivo imaging, and optogenetics

Transparent graphene-based neural electrode arrays provide unique opportunities for simultaneous investigation of electrophysiology, various neural imaging modalities, and optogenetics. Graphene electrodes have previously demonstrated greater broad-wavelength transmittance (~90%) than other transparent materials such as indium tin oxide (~80%) and ultrathin metals (~60%). This protocol describes how to fabricate and implant a graphene-based microelectrocorticography (μECoG) electrode array and subsequently use this alongside electrophysiology, fluorescence microscopy, optical coherence tomography (OCT), and optogenetics. Further applications, such as transparent penetrating electrode arrays, multi-electrode electroretinography, and electromyography, are also viable with this technology. The procedures described herein, from the material characterization methods to the optogenetic experiments, can be completed within 3–4 weeks by an experienced graduate student. These protocols should help to expand the boundaries of neurophysiological experimentation, enabling analytical methods that were previously unachievable using opaque metal–based electrode arrays.

Mars-bound astronauts face brain damage from galactic cosmic ray exposure, says NASA-funded study

An (unshielded) view of Mars (credit: SpaceX)

A NASA-funded study of rodents exposed to highly energetic charged particles — similar to the galactic cosmic rays that will bombard astronauts during extended spaceflights — found that the rodents developed long-term memory deficits, anxiety, depression, and impaired decision-making (not to mention long-term cancer risk).

The study by University of California, Irvine (UCI) scientists* appeared Oct. 10 in Nature’s open-access Scientific Reports. It follows a study published last year in the May issue of the open-access Science Advances, which showed somewhat shorter-term brain effects of galactic cosmic rays.

The rodents were subjected to charged particle irradiation (ionized charged atomic nuclei from oxygen and titanium) at the NASA Space Radiation Laboratory at New York’s Brookhaven National Laboratory.

Digital imaging revealed a reduction of dendrites (green) and spines (red) on neurons of  irradiated rodents, disrupting the transmission of signals among brain cells and thus impairing the brain’s neural network. Left: dendrites in unirradiated brains. Center: dendrites exposed to 0.05 Gy** ionized oxygen. Right: dendrites exposed to 0.30 Gy ionized oxygen. (credit: Vipan K. Parihar et al./Scientific Reports)

Six months after exposure, the researchers still found significant levels of brain inflammation and damage to neurons,  poor performance on behavioral tasks designed to test learning and memory, and reduced “fear extinction” (an active process in which the brain suppresses prior unpleasant and stressful associations) — leading to elevated anxiety.

Similar types of more severe cognitive dysfunction (“chemo brain”) are common in brain cancer patients who have received high-dose, photon-based radiation treatments.

“The space environment poses unique hazards to astronauts,” said Charles Limoli, a professor of radiation oncology in UCI’s School of Medicine. “Exposure to these particles can lead to a range of potential central nervous system complications that can occur during and persist long after actual space travel. Many of these adverse consequences to cognition may continue and progress throughout life.”

NASA health hazards advisory

“During a 360-day round trip [to Mars], an astronaut would receive a dose of about 662 millisieverts (0.662 Gy) [twice the highest amount of radiation used in the UCI experiment with rodents], according to data from the Radiation Assessment Detector (RAD) … piggybacking on Curiosity,” said Cary Zeitlin, PhD, a principal scientist in the Southwest Research Institute’s Space Science and Engineering Division and lead author of an article published in the journal Science in 2013. “In terms of accumulated dose, it’s like getting a whole-body CT scan once every five or six days [for a year],” he said in a NASA press release. There’s also the risk from increased radiation during periodic solar storms.
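Zeitlin’s comparison is easy to check with back-of-the-envelope arithmetic, assuming roughly 10 mSv for a whole-body CT scan (an assumption for illustration, not a figure from the article):

```python
# Dose rate for the stated 662 mSv, 360-day round trip, expressed in
# whole-body-CT equivalents (assumed ~10 mSv per scan).
round_trip_dose_msv = 662
trip_days = 360
ct_dose_msv = 10          # assumed typical whole-body CT dose

daily_dose = round_trip_dose_msv / trip_days          # ~1.8 mSv per day
days_per_ct_equivalent = ct_dose_msv / daily_dose     # ~5.4 days
print(f"{daily_dose:.2f} mSv/day, one CT-equivalent every {days_per_ct_equivalent:.1f} days")
```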

In addition, as dramatized in the movie The Martian (and explained in this analysis), there’s a risk on the surface of Mars, although less than in space, thanks to the atmosphere, and thanks to nighttime shielding of solar radiation by the planet.

In October 2015, the NASA Office of Inspector General issued a health hazards report related to space exploration, including a human mission to Mars.

“There’s going to be some risk of radiation, but it’s not deadly,” claimed SpaceX CEO Elon Musk Sept. 27 in an announcement of plans to establish a permanent, self-sustaining civilization of a million people on Mars (with an initial flight as soon as 2024). “There will be some slightly increased risk of cancer, but I think it’s relatively minor. … Are you prepared to die? If that’s OK, you’re a candidate for going.”

Sightseers expose themselves to galactic cosmic radiation on Europa, a moon of Jupiter, shown in the background (credit: SpaceX)

Not to be one-upped by Musk, President Obama said in an op-ed on the CNN blog on Oct. 11 (perhaps channeling JFK) that “we have set a clear goal vital to the next chapter of America’s story in space: sending humans to Mars by the 2030s and returning them safely to Earth, with the ultimate ambition to one day remain there for an extended time.”

In a follow-up explainer, NASA Administrator Charles Bolden and John Holdren, Director of the White House Office of Science and Technology Policy, announced that in August, NASA selected six companies (under the  Next Space Technologies for Exploration Partnerships-2 (NextSTEP-2) program) to produce ground prototypes for deep space habitat modules. No mention of plans for avoiding astronaut brain damage, and the NextSTEP-2 illustrations don’t appear to address that either.

Concept image of Sierra Nevada Corporation’s habitation prototype, based on its Dream Chaser cargo module. No multi-ton shielding is apparent. (credit: Sierra Nevada)

Hitchhiking on an asteroid

So what are the solutions (if any)? Material shielding can be effective against galactic cosmic rays, but it’s expensive and impractical for space travel. For instance, a NASA design study for a large space station envisioned four metric tons per square meter of shielding to drop radiation exposure to 2.5 millisieverts (mSv) (or 0.0025 Gy) annually (the annual global average dose from natural background radiation is 2.4 mSv (3.6 in the U.S., including X-rays), according to a United Nations report in 2008).
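A quick calculation with made-up habitat dimensions shows why that areal density is considered impractical:

```python
# Rough shield-mass estimate for a hypothetical cylindrical habitat at the
# design study's 4 metric tons of shielding per square meter.
import math

areal_density_kg_m2 = 4000          # 4 metric tons per square meter (from the design study)
radius_m, length_m = 4.0, 12.0      # hypothetical habitat dimensions

surface_area = 2 * math.pi * radius_m * (radius_m + length_m)   # closed cylinder
shield_mass_tons = areal_density_kg_m2 * surface_area / 1000

print(f"surface area ~ {surface_area:.0f} m^2, shield mass ~ {shield_mass_tons:.0f} metric tons")
# roughly 400 m^2 and 1,600 metric tons of shielding -- far beyond launch capabilities.
```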

Various alternative shielding schemes have been proposed. NASA scientist Geoffrey A. Landis suggested in a 1991 paper the use of magnetic deflection of charged radiation particles (imitating the Earth’s magnetosphere***). Improvements in superconductors since 1991 may make this approach more practical today, and possibly more so in the future.

In a 2011 paper in Acta Astronautica, Gregory Matloff of New York City College of Technology suggested that a Mars-bound spacecraft could tunnel into an asteroid for shielding, as long as the asteroid is at least 33 feet wide (if the asteroid were especially iron-rich, the necessary width would be smaller), National Geographic reported.

The calculated orbit of (357024) 1999 YR14 (credit: Lowell Observatory Near-Earth-Object Search)

“There are five known asteroids that fit the criteria and will pass from Earth to Mars before the year 2100. … The asteroids 1999YR14 and 2007EE26, for example, will both pass Earth in 2086, and they’ll make the journey to Mars in less than a year,” he said. Downside: it would be five years before either asteroid would swing around Mars as it heads back toward Earth.

Meanwhile, future preventive treatments may help. Limoli’s group is working on pharmacological strategies involving compounds that scavenge free radicals and protect neurotransmission.

* An Eastern Virginia Medical School researcher also contributed to the study.

** The Scientific Reports paper shows these values as centigray (cGy), a decimal fraction (0.01) of the SI derived Gy (Gray) unit of absorbed dose and specific energy (energy per unit mass). Such energies are usually associated with ionizing radiation such as gamma particles or X-rays.

*** Astronauts working for extended periods on the International Space Station do not face the same level of bombardment with galactic cosmic rays because they are still within the Earth’s protective magnetosphere. Astronauts on Apollo and Skylab missions received on average 1.2 mSv (0.0012 Gy) per day and 1.4 mSv (0.0014 Gy) per day respectively, according to a NASA study.


Abstract of Cosmic radiation exposure and persistent cognitive dysfunction

The Mars mission will result in an inevitable exposure to cosmic radiation that has been shown to cause cognitive impairments in rodent models, and possibly in astronauts engaged in deep space travel. Of particular concern is the potential for cosmic radiation exposure to compromise critical decision making during normal operations or under emergency conditions in deep space. Rodents exposed to cosmic radiation exhibit persistent hippocampal and cortical based performance decrements using six independent behavioral tasks administered between separate cohorts 12 and 24 weeks after irradiation. Radiation-induced impairments in spatial, episodic and recognition memory were temporally coincident with deficits in executive function and reduced rates of fear extinction and elevated anxiety. Irradiation caused significant reductions in dendritic complexity, spine density and altered spine morphology along medial prefrontal cortical neurons known to mediate neurotransmission interrogated by our behavioral tasks. Cosmic radiation also disrupted synaptic integrity and increased neuroinflammation that persisted more than 6 months after exposure. Behavioral deficits for individual animals correlated significantly with reduced spine density and increased synaptic puncta, providing quantitative measures of risk for developing cognitive impairment. Our data provide additional evidence that deep space travel poses a real and unique threat to the integrity of neural circuits in the brain.


Abstract of What happens to your brain on the way to Mars

As NASA prepares for the first manned spaceflight to Mars, questions have surfaced concerning the potential for increased risks associated with exposure to the spectrum of highly energetic nuclei that comprise galactic cosmic rays. Animal models have revealed an unexpected sensitivity of mature neurons in the brain to charged particles found in space. Astronaut autonomy during long-term space travel is particularly critical as is the need to properly manage planned and unanticipated events, activities that could be compromised by accumulating particle traversals through the brain. Using mice subjected to space-relevant fluences of charged particles, we show significant cortical- and hippocampal-based performance decrements 6 weeks after acute exposure. Animals manifesting cognitive decrements exhibited marked and persistent radiation-induced reductions in dendritic complexity and spine density along medial prefrontal cortical neurons known to mediate neurotransmission specifically interrogated by our behavioral tasks. Significant increases in postsynaptic density protein 95 (PSD-95) revealed major radiation-induced alterations in synaptic integrity. Impaired behavioral performance of individual animals correlated significantly with reduced spine density and trended with increased synaptic puncta, thereby providing quantitative measures of risk for developing cognitive decrements. Our data indicate an unexpected and unique susceptibility of the central nervous system to space radiation exposure, and argue that the underlying radiation sensitivity of delicate neuronal structure may well predispose astronauts to unintended mission-critical performance decrements and/or longer-term neurocognitive sequelae.