Brain-inspired algorithms may make for optimized computational networks

Salk and Carnegie Mellon researchers developed a new model for building efficient networks by studying the rate at which the brain prunes back some of its connections during development. In this model, nodes (such as neurons or sensors) make too many connections (left) before pruning back to the connections that are most relevant (right). The team applied their synaptic-pruning-based algorithm to airline flight data and found that it created routes that let passengers reach their destinations efficiently. (credit: Salk Institute and Carnegie Mellon University)

The developing brain prunes (eliminates) unneeded connections between neurons during early childhood. Now researchers from the Salk Institute for Biological Studies and Carnegie Mellon University have determined the rate at which that happens, and the implications of that finding for computational networks.


The brain’s neural networks take their mature shape through a process called pruning. At birth and throughout early childhood, the brain’s neurons make a vast number of connections — many more than the brain actually needs to function. As the brain matures and learns, it quickly cuts away connections that aren’t being used. By adulthood, the brain has about 50 to 60 percent fewer synaptic connections than it had at its peak in childhood. Understanding how the network of neurons in the brain organizes itself into its adult structure is key to understanding how the brain learns and functions.


“By thinking computationally about how the brain develops, we questioned how rates of synapse pruning may affect network topology and function,” says Saket Navlakha, assistant professor at the Salk Institute’s Center for Integrative Biology and a former postdoctoral researcher in Carnegie Mellon’s Machine Learning Department. “We have used the resulting insights to develop new algorithms for constructing adaptive and robust networks in other domains.” The findings were recently published in an open-access paper in PLOS Computational Biology.

But the brain’s approach is the reverse of the one network engineers conventionally use to arrive at an optimal network structure: computer-science and engineering networks initially contain a small number of connections and then add more as needed.

An improved computer-network algorithm based on brain pruning

“Engineered networks are built by adding connections rather than removing them. You would think that developing a network using a pruning process would be wasteful,” says Ziv Bar-Joseph, associate professor in Carnegie Mellon’s Machine Learning and Computational Biology departments. “But as we showed, there are cases where such a process can prove beneficial for engineering as well.”

The researchers first determined key aspects of the pruning process by counting the number of synapses present in a mouse model’s somatosensory cortex over time. After counting synapses in more than 10,000 electron microscopy images, they found that synapses were rapidly pruned early in development, and then as time progressed, the pruning rate slowed.

The results of these experiments allowed the team to develop an algorithm for designing computational networks based on the brain’s pruning approach. Using simulations and theoretical analysis, they found that the neuroscience-based algorithm produced computer networks that were much more efficient than those built by current engineering methods. The flow of information was more direct, and the networks provided multiple paths for information to reach the same endpoint, minimizing the risk of network failure.
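To make the approach concrete, here is a minimal Python sketch of decreasing-rate pruning — our illustration of the general idea described above, not the authors’ published code. It starts from an over-connected network, simulates usage, and removes the least-used links in progressively smaller batches while keeping the network connected:

```python
import random

import networkx as nx


def decreasing_rate_prune(n_nodes=60, target_edges=120, steps=10, seed=0):
    """Start hyper-connected, then prune the least-used edges in
    geometrically shrinking batches (aggressive early, gentle late)."""
    rng = random.Random(seed)
    g = nx.complete_graph(n_nodes)  # massively over-connected start

    # Simulate "usage": route random source->target demands along
    # shortest paths and count how often each edge is traversed.
    usage = {frozenset(e): 0 for e in g.edges()}
    for _ in range(2000):
        s, t = rng.sample(range(n_nodes), 2)
        path = nx.shortest_path(g, s, t)
        for u, v in zip(path, path[1:]):
            usage[frozenset((u, v))] += 1

    to_remove = g.number_of_edges() - target_edges
    # Decreasing-rate schedule: each batch removes about half as many
    # edges as the one before (a stand-in for the measured pruning curve).
    batches = [to_remove // 2 ** k for k in range(1, steps)]
    batches.append(to_remove - sum(batches))

    for batch in batches:
        ranked = sorted(g.edges(), key=lambda e: usage[frozenset(e)])
        for u, v in ranked[:batch]:
            g.remove_edge(u, v)
            if not nx.is_connected(g):  # never strand a node
                g.add_edge(u, v)
    return g


pruned = decreasing_rate_prune()
print(pruned.number_of_edges(), nx.average_shortest_path_length(pruned))
```

A fuller version would re-simulate traffic between batches, since usage patterns shift as the network changes; the paper’s point of comparison is that a decreasing schedule beats constant-rate or increasing-rate pruning on efficiency and robustness.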

Optimizing airline routes as a test case

Delta U.S. routes (not the focus of this study) (credit: David Galvin/University of Notre Dame)

“We took this high-level algorithm that explains how neural structures are built during development and used that to inspire an algorithm for an engineered network,” says Alison Barth, professor in Carnegie Mellon’s Department of Biological Sciences and member of the university’s BrainHub initiative. “It turns out that this neuroscience-based approach could offer something new for computer scientists and engineers to think about as they build networks.”

Improving airline efficiency and robustness using pruning algorithms. Based on actual data of travel frequency among 122 popular cities from the 3rd quarter of 2013, researchers derived a comparison of efficiency (travel time in terms of number of hops) and robustness (number of alternative routes with the same number of hops) using different algorithms. Decreasing-rate pruning produced more efficient networks with similar robustness. (credit: Saket Navlakha et al./PLOS Computational Biology)

As a test of how the algorithm could be used outside of neuroscience, Navlakha applied the algorithm to flight data from the U.S. Department of Transportation. He found that the synaptic pruning-based algorithm created the most effective routes to allow passengers to reach their destinations.
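The two measures in the comparison above are easy to compute on any route graph. The following sketch (an illustrative reimplementation paraphrasing the caption’s definitions, not the study’s code) scores a toy hub-and-spoke network:

```python
import itertools

import networkx as nx


def efficiency_and_robustness(g):
    """Efficiency = mean number of hops between city pairs (lower is
    better); robustness = mean number of alternative routes with that
    same minimal hop count (higher is better)."""
    hops, alternatives = [], []
    for s, t in itertools.combinations(g.nodes(), 2):
        shortest = list(nx.all_shortest_paths(g, s, t))
        hops.append(len(shortest[0]) - 1)   # edges on one shortest route
        alternatives.append(len(shortest))  # equally short alternatives
    return sum(hops) / len(hops), sum(alternatives) / len(alternatives)


# Toy example: five "cities" in a hub-and-spoke layout (node 0 is the hub).
g = nx.star_graph(4)
print(efficiency_and_robustness(g))  # -> (1.6, 1.0)
```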

“We realize that it wouldn’t be cost effective to apply this to networks that require significant infrastructure, like railways or pipelines,” Navlakha said. “But for those that don’t, like wireless networks and sensor networks, this could be a valuable adaptive method to guide the formation of networks.”


Abstract of Decreasing-rate Pruning Optimizes the Construction of Efficient and Robust Distributed Networks

Robust, efficient, and low-cost networks are advantageous in both biological and engineered systems. During neural network development in the brain, synapses are massively over-produced and then pruned-back over time. This strategy is not commonly used when designing engineered networks, since adding connections that will soon be removed is considered wasteful. Here, we use this process as inspiration for a new network design algorithm, which also led to a new experimental hypothesis. In particular, we show that for large distributed routing networks, network function is markedly enhanced by hyper-connectivity followed by aggressive pruning and that the global rate of pruning, a developmental parameter not previously studied by experimentalists, plays a critical role in optimizing network structure. We first used high-throughput image analysis techniques to quantify the rate of pruning in the mammalian neocortex across a broad developmental time window and found that the rate is decreasing over time. Based on these results, we analyzed a model of computational routing networks and show using both theoretical analysis and simulations that decreasing rates lead to more robust and efficient networks compared to other rates. We also present an application of this strategy to improve the distributed design of airline networks. This inspiration from neural network formation suggests effective ways to design distributed networks across several domains.

Can your phone really know you’re depressed?

StudentLife app, sensing, and analytics system architecture (credit: Rui Wang et al.)

Northwestern scientists believe an open-source Android phone app called Purple Robot can detect depression simply by tracking the number of minutes you use the phone and your daily geographical locations.

The more time you spend using your phone, the more likely you are to be depressed, they found in a small Northwestern Medicine study published yesterday (July 15) in the Journal of Medical Internet Research. The average daily usage for depressed individuals was about 68 minutes, versus about 17 minutes for non-depressed individuals.

Another factor was location: spending most of your time at home, and in fewer places overall — as measured by GPS tracking — was also linked to depression.

In addition, having a less regular day-to-day schedule — leaving your house and going to work at different times each day, for example — was also linked to depression.

Based on those three factors, they claim they could identify which of the 28 individuals they recruited from Craigslist had depressive symptoms — as determined by the PHQ-9, a standardized questionnaire measuring depression — with 87 percent accuracy.

Example phone usage data from a participant. Each row is a day, and the black bars show the extent of time during which the phone was in use. The bars on the right side show the overall phone usage duration for each day. (credit: Sohrab Saeb et al./Journal of Medical Internet Research)

“The significance of this is we can detect if a person has depressive symptoms and the severity of those symptoms without asking them any questions,” said senior author David Mohr, director of the Center for Behavioral Intervention Technologies at Northwestern University Feinberg School of Medicine. “We now have an objective measure of behavior related to depression. And we’re detecting it passively. Phones can provide data unobtrusively and with no effort on the part of the user.”

Better than questionnaires

The smartphone data was more reliable in detecting depression than daily questions participants answered about how sad they were feeling on a scale of 1 to 10. Those answers may be rote and often are not reliable, said lead author Sohrab Saeb, a postdoctoral fellow and computer scientist in preventive medicine at Feinberg.

“The data showing depressed people tended not to go many places reflects the loss of motivation seen in depression,” said Mohr, who is a clinical psychologist and professor of preventive medicine at Feinberg. “When people are depressed, they tend to withdraw and don’t have the motivation or energy to go out and do things.”

The research could ultimately lead to monitoring people at risk of depression and enabling health care providers to intervene more quickly, they suggest.

While the phone usage data didn’t identify how people were using their phones, Mohr suspects people who spent the most time on them were surfing the web or playing games, rather than talking to friends. “People are likely, when on their phones, to avoid thinking about things that are troubling, painful feelings or difficult relationships,” Mohr said. “It’s an avoidance behavior we see in depression.”

That assumption seems questionable; non-depressed people often spend time on phones texting, checking Facebook, reading email, etc.

Saeb also analyzed the GPS locations and phone usage of all 28 individuals (20 women and 8 men, average age 29) over two weeks. The sensor tracked GPS locations every five minutes.

To determine the relationship between phone usage and geographical location and depression, the subjects took a widely used standardized questionnaire measuring depression, the PHQ-9, at the beginning of the two-week study. The PHQ-9 asks about symptoms used to diagnose depression such as sadness, loss of pleasure, hopelessness, disturbances in sleep and appetite, and difficulty concentrating. Then, Saeb developed algorithms using the GPS and phone usage data collected from the phone, and correlated the results of those GPS and phone usage algorithms with the subjects’ depression test results.

Of the participants, 14 did not have any signs of depression and 14 had symptoms ranging from mild to severe depression.
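As a rough sketch of this kind of analysis — using synthetic stand-in data, since the study’s dataset isn’t public — one can compute a GPS feature such as location variance, correlate it and daily usage against PHQ-9 scores, and apply the paper’s PHQ-9 ≥ 5 symptom threshold:

```python
import numpy as np
from scipy.stats import pearsonr


def location_variance(lat, lon):
    """Log of the combined variance of latitude/longitude fixes --
    our paraphrase of the paper's 'location variance' feature."""
    return np.log(np.var(lat) + np.var(lon))


rng = np.random.default_rng(0)
n = 28  # participants

# Fake, trend-only stand-ins: higher PHQ-9 -> more phone use, less roaming.
phq9 = rng.integers(0, 20, n)
usage_min = 17 + 3 * phq9 + rng.normal(0, 10, n)  # daily minutes on phone
loc_var = np.array([
    location_variance(rng.normal(0, 0.01 * (20 - s), 500),
                      rng.normal(0, 0.01 * (20 - s), 500))
    for s in phq9
])  # 500 simulated GPS fixes per person

for name, feature in [("usage", usage_min), ("location variance", loc_var)]:
    r, p = pearsonr(feature, phq9)
    print(f"{name} vs PHQ-9: r={r:+.2f}, p={p:.3f}")

# The study classed participants with PHQ-9 >= 5 as having symptoms.
print(f"{(phq9 >= 5).sum()} of {n} participants above threshold")
```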

The goal of the research is to passively detect depression and different levels of emotional states related to depression, Saeb said. The information ultimately could be used to monitor people who are at risk of depression to, perhaps, offer them interventions if the sensor detected depression or to deliver the information to their clinicians. Future Northwestern research will look at whether getting people to change those behaviors linked to depression improves their mood.

“We will see if we can reduce symptoms of depression by encouraging people to visit more locations throughout the day, have a more regular routine, spend more time in a variety of places or reduce mobile phone use,” Saeb said.

In addition to studies that use mobile phone sensor data to better understand depression, Mohr’s team also is running clinical trials to treat depression and anxiety using evidence-based interventions.

Contact ehealth@northwestern.edu or 855-682-2487 to learn how to join one of their paid research studies, or visit http://cbitshealth.northwestern.edu/.

This research was funded by research grants from the National Institute of Mental Health of the National Institutes of Health.


Abstract of Mobile Phone Sensor Correlates of Depressive Symptom Severity in Daily-Life Behavior: An Exploratory Study

Background: Depression is a common, burdensome, often recurring mental health disorder that frequently goes undetected and untreated. Mobile phones are ubiquitous and have an increasingly large complement of sensors that can potentially be useful in monitoring behavioral patterns that might be indicative of depressive symptoms.

Objective: The objective of this study was to explore the detection of daily-life behavioral markers using mobile phone global positioning systems (GPS) and usage sensors, and their use in identifying depressive symptom severity.

Methods: A total of 40 adult participants were recruited from the general community to carry a mobile phone with a sensor data acquisition app (Purple Robot) for 2 weeks. Of these participants, 28 had sufficient sensor data received to conduct analysis. At the beginning of the 2-week period, participants completed a self-reported depression survey (PHQ-9). Behavioral features were developed and extracted from GPS location and phone usage data.

Results: A number of features from GPS data were related to depressive symptom severity, including circadian movement (regularity in 24-hour rhythm; r=-.63, P=.005), normalized entropy (mobility between favorite locations; r=-.58, P=.012), and location variance (GPS mobility independent of location; r=-.58, P=.012). Phone usage features, usage duration and usage frequency, were also correlated (r=.54, P=.011, and r=.52, P=.015, respectively). Using the normalized entropy feature and a classifier that distinguished participants with depressive symptoms (PHQ-9 score ≥5) from those without (PHQ-9 score <5), we achieved an accuracy of 86.5%. Furthermore, a regression model that used the same feature to estimate the participants’ PHQ-9 scores obtained an average error of 23.5%.

Conclusions: Features extracted from mobile phone sensor data, including GPS and phone usage, provided behavioral markers that were strongly related to depressive symptom severity. While these findings must be replicated in a larger study among participants with confirmed clinical symptoms, they suggest that phone sensors offer numerous clinical opportunities, including continuous monitoring of at-risk populations with little patient burden and interventions that can provide just-in-time outreach.

How to regenerate axons to recover from spinal-cord injury

HKUST researchers cut mouse corticospinal tract axons (labeled red). A year later, they deleted the Pten gene in the experimental group (bottom) but not the control group. The Pten gene removal resulted in axon regrowth in seven months, unlike the control group (top). (credit: Kaimeng Du et al./The Journal of Neuroscience)

Researchers at the Hong Kong University of Science and Technology (HKUST) have found a way to help patients recover from chronic spinal cord injury (SCI) by stimulating the growth of axons.

Chronic SCI prevents a large number of injured axons from crossing a lesion, particularly in the corticospinal tract (CST). Patients with SCI often suffer a temporary or permanent loss of mobility and paralysis.

As reported in the July 1 issue of The Journal of Neuroscience, the researchers found that deleting the Pten gene in mouse neurons stimulates the growth of axons across the lesion (wound) and past it — even when treatment was delayed up to one year after the original injury.

The deletion also up-regulated (increased) the activity of another gene, mTOR (mammalian target of rapamycin), which further promoted regeneration of the axons.

Corticospinal tract (credit: Gray’s Anatomy/Wikimedia Commons)

“As one of the long descending tracts controlling voluntary movement, the corticospinal tract (CST) plays an important role for functional recovery after spinal cord injury,” says Kai Liu, PhD, the study’s senior author and assistant professor in life sciences at HKUST.

“The regeneration of CST has been a major challenge in the field, especially after chronic injuries. Here we developed a strategy to modulate PTEN/mTOR signaling in adult corticospinal motor neurons in the post-injury paradigm.

“It not only promoted the sprouting of uninjured CST axons, but also enabled the regeneration of injured axons past the lesion in a mouse model of spinal cord injury. The results considerably extend the window of opportunity for regenerating CST axons severed in spinal cord injuries.”

“It is interesting to find that chronically injured neurons retain the ability to reform tentative synaptic connections,” says Liu. “PTEN inhibition can be targeted on particular neurons, which means that we can apply the procedure specifically on the region of interest as we continue our research.”


Abstract of Pten Deletion Promotes Regrowth of Corticospinal Tract Axons 1 Year after Spinal Cord Injury

Chronic spinal cord injury (SCI) is a formidable hurdle that prevents a large number of injured axons from crossing the lesion, particularly the corticospinal tract (CST). This study shows that Pten deletion in the adult mouse cortex enhances compensatory sprouting of uninjured CST axons. Furthermore, forced upregulation of mammalian target of rapamycin (mTOR) initiated either 1 month or 1 year after injury promoted regeneration of CST axons. Our results indicate that both developmental and injury-induced mTOR downregulation in corticospinal motor neurons can be reversed in adults. Modulating neuronal mTOR activity is a potential strategy for axon regeneration after chronic SCI.

SIGNIFICANCE STATEMENT As one of the long descending tracts controlling voluntary movement, the corticospinal tract (CST) plays an important role for functional recovery after spinal cord injury. The regeneration of CST has been a major challenge in the field, especially after chronic injuries. Here we developed a strategy to modulate Pten/mammalian target of rapamycin signaling in adult corticospinal motor neurons in the postinjury paradigm. It not only promoted the sprouting of uninjured CST axons, but also enabled the regeneration of injured axons past the lesion in a mouse model of spinal cord injury, even when treatment was delayed up to 1 year after the original injury. The results considerably extend the window of opportunity for regenerating CST axons severed in spinal cord injuries.

Could this new electrical brain-zap method help you learn muscle skills faster?

Three electrical brain-stimulation methods. Vertical axis: current-flow intensity; horizontal axis: time. (adapted from Shapour Jaberzadeh et al./PLOS ONE)

Researchers headed by Shapour Jaberzadeh and his group at Monash University have discovered a new noninvasive technique that could rev up your brain to improve your physical performance — for athletes and musicians, for instance — and might also improve treatments for brain-related conditions such as stroke, depression, and chronic pain.

The two neuroelectrical treatment methods currently in use are transcranial direct current stimulation (tDCS) — low-intensity direct current (direct current is what a battery creates) — and transcranial alternating current stimulation (tACS) — current that constantly changes and reverses polarity (alternating current, or AC, is used in houses and buildings).

Introducing transcranial pulsed current stimulation

The newest method, called transcranial pulsed current stimulation (tPCS), produces larger increases in corticospinal (muscle-movement-related) excitability, according to the researchers.

“We discovered that this new treatment produced larger excitability changes in the brain,” said Jaberzadeh. In addition, increasing the length of the pulse and decreasing the [time] interval between pulses heightened excitability even further.
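For readers who want to see the difference between the three techniques concretely, here is a small numpy sketch of the waveforms — constant DC, polarity-reversing AC, and unidirectional pulses with configurable pulse duration and inter-pulse interval. Parameter values are illustrative (the study compared 125, 250 and 500 ms pulses):

```python
import numpy as np


def waveforms(duration_s=2.0, fs=1000, amp_ma=1.0,
              pulse_ms=500, gap_ms=50, tacs_hz=10):
    """Return time axis plus tDCS, tACS and tPCS current traces."""
    t = np.arange(0, duration_s, 1 / fs)
    tdcs = np.full_like(t, amp_ma)                    # constant DC level
    tacs = amp_ma * np.sin(2 * np.pi * tacs_hz * t)   # polarity reverses
    period = (pulse_ms + gap_ms) / 1000               # pulse + gap, seconds
    tpcs = amp_ma * ((t % period) < pulse_ms / 1000)  # one-way pulses
    return t, tdcs, tacs, tpcs


# Longer pulses and shorter gaps raise the duty cycle -- the manipulation
# the Monash group reports boosts excitability further.
t, tdcs, tacs, tpcs = waveforms(pulse_ms=500, gap_ms=50)
```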

The research is described in a paper published Wednesday (July 15) in the open-access journal PLOS ONE.

“When we learn a task during movement training (for example playing the piano), gradually our performance gets better. This improvement coincides with enhancement of the brain excitability. Compared to tDCS, our novel technique can play an important role in enhancement of the brain excitability, which may help recipients learn new tasks faster.”

Jaberzadeh said the technique had exciting implications for a whole host of conditions in which “enhancement of the brain excitability” has a therapeutic effect. These include training for treatment of stroke and other neurological disorders, mental disorders, and even management of pain.

“Our next step is to investigate the underlying mechanisms for the efficacy of this new technique. This will enable us to develop more effective protocols for application of tPCS in patients with different pathological conditions.”

One side effect of the treatment: the patient sees lights flashing in their eyes (retinal phosphenes) — actually a plus for trippers.

New tinnitus treatment uses TMS

Transcranial magnetic stimulation being applied for tinnitus by Sarah Theodoroff, Ph.D., assistant professor of Otolaryngology/Head and Neck Surgery at OHSU (credit: VA Portland Health Care System/OHSU)

In related neuromodulation news, transcranial magnetic stimulation (TMS) significantly improved tinnitus symptoms for more than half of study participants in recent research at the VA Portland Medical Center and Oregon Health & Science University.

“For some study participants, this was the first time in years that they experienced any relief in symptoms,” according to the researchers.

The study was funded by the Veterans Administration and published in the journal JAMA Otolaryngology — Head & Neck Surgery.

Tinnitus affects nearly 45 million Americans. People with this condition hear a persistent sound that can range from ringing or buzzing to a hissing or white noise hum when there is no external sound source. Currently, there are no proven treatments available.

To date, the Food and Drug Administration has approved transcranial magnetic stimulation only for the treatment of depression.


Abstract of Anodal Transcranial Pulsed Current Stimulation: The Effects of Pulse Duration on Corticospinal Excitability

The aim is to investigate the effects of pulse duration (PD) on the modulatory effects of transcranial pulsed current stimulation (tPCS) on corticospinal excitability (CSE). CSE of the dominant primary motor cortex (M1) of the right first dorsal interosseous muscle was assessed by motor evoked potentials before, immediately after, and 10, 20 and 30 minutes after application of five experimental conditions: 1) anodal transcranial direct current stimulation (a-tDCS), 2) a-tPCS with 125 ms pulse duration (a-tPCS(PD=125)), 3) a-tPCS with 250 ms pulse duration (a-tPCS(PD=250)), 4) a-tPCS with 500 ms pulse duration (a-tPCS(PD=500)), and 5) sham a-tPCS. The total charges were kept constant in all experimental conditions except the sham condition. Post-hoc comparisons indicated that a-tPCS(PD=500) produced larger CSE compared to a-tPCS(PD=125) (P<0.0001), a-tPCS(PD=250) (P=0.009) and a-tDCS (P=0.008). Also, there was no significant difference between a-tPCS(PD=250) and a-tDCS on CSE changes (P>0.05). All conditions except a-tPCS(PD=125) showed a significant difference to the sham group (P<0.006). All participants tolerated the applied currents. It could be concluded that a-tPCS with a PD of 500 ms induces the largest CSE changes; however, further studies are required to identify optimal values.

Abstract of Repetitive Transcranial Magnetic Stimulation Treatment for Chronic Tinnitus: A Randomized Clinical Trial

Importance Chronic tinnitus negatively affects the quality of life for millions of people. This clinical trial assesses a potential treatment for tinnitus.

Objectives To determine if repetitive transcranial magnetic stimulation (rTMS) can reduce the perception or severity of tinnitus and to test the hypothesis that rTMS will result in a statistically significantly greater percentage of responders to treatment in an active rTMS group compared with a placebo rTMS group.

Design, Setting, and Participants A randomized, participant and clinician or observer–blinded, placebo-controlled clinical trial of rTMS involving individuals who experience chronic tinnitus. Follow-up assessments were conducted at 1, 2, 4, 13, and 26 weeks after the last treatment session. The trial was conducted between April 2011 and December 2014 at Portland Veterans Affairs Medical Center among 348 individuals with chronic tinnitus who were initially screened for participation. Of those, 92 provided informed consent and underwent more detailed assessments. Seventy individuals met criteria for inclusion and were randomized to receive active or placebo rTMS. Sixty-four participants (51 men and 13 women, with a mean [SD] age of 60.6 [8.9] years) were included in the data analyses. No participants withdrew because of adverse effects of rTMS.

Interventions Participants received 2000 pulses per session of active or placebo rTMS at a rate of 1-Hz rTMS daily on 10 consecutive workdays.

Main Outcomes and Measures The Tinnitus Functional Index (TFI) was the main study outcome. Our hypothesis was tested by comparing baseline and posttreatment TFIs for each participant and group.

Results Overall, 18 of 32 participants (56%) in the active rTMS group and 7 of 32 participants (22%) in the placebo rTMS group were responders to rTMS treatment. The difference in the percentage of responders to treatment in each group was statistically significant (χ²₁ = 7.94, P < .005).

Conclusions and Relevance Application of 1-Hz rTMS daily for 10 consecutive workdays resulted in a statistically significantly greater percentage of responders to treatment in the active rTMS group compared with the placebo rTMS group. Improvements in tinnitus severity experienced by responders were sustained during the 26-week follow-up period. Before this procedure can be implemented clinically, larger studies should be conducted to refine treatment protocols.

Trial Registration clinicaltrials.gov Identifier: NCT01104207

Miniature brain organoids made from patient skin cells reveal insights into autism

A human brain organoid showing complex internal organization, with immature proliferating cells (red) and a surrounding layer of maturing neurons (green) (credit: Jessica Mariani)

Taking a radical research approach to understanding autism, Yale School of Medicine researchers converted skin cells from autism patients into stem cells and then grew them into tiny brains in a dish — revealing unexpected mechanisms of the disease.

The study was published in an open-access paper today (July 16) in the journal Cell.

Most autism research has taken the approach of combing through patient genomes for mutations that may underlie the disorder and then using animal or cell-based models to study the genes and their possible roles in brain development. That has left more than 80% of autism cases with no clear genetic cause.

“Instead of starting from genetics, we’ve started with the biology of the disorder itself to try to get a window into the genome,” says senior author Flora Vaccarino, the Harris Professor of Child Psychiatry and professor of neurobiology at Yale.

The clinical characteristics of autism are complex and wide-ranging, making the prospect of finding common underlying factors slim. So the researchers focused on the approximately one-fifth of autism patients who share a distinctive feature correlated with disease severity — an enlarged brain.

Unexpected neuron-type imbalance

The “brain organoids” that resulted are just a few millimeters — about a fifth of an inch — in diameter, but they mimic the basics of early human brain development, roughly corresponding to the first few months of gestation. When the researchers analyzed the patient organoids, they uncovered altered expression networks for genes controlling neuronal development.

The patient organoids showed an unexpected overproduction of inhibitory neurons, which quiet down neural activity, while the excitatory neurons that stimulate the cells they’re wired to were unaffected — an imbalance in neuron type. Remarkably, by suppressing the expression of a single gene whose expression was abnormally increased in patient organoids, the authors were able to correct this bias, suggesting that it may be possible to intervene clinically to restore neuronal balance.

The authors are now using their data to home in on the difficult-to-find mutations or epigenetic changes responsible for the gene expression alterations and neuronal imbalance observed in the study. Efforts to extend their growth to later embryonic stages are also under way by a number of groups and will allow even further insights into the disease mechanisms.

“This study speaks to the importance of using human cells and using them in an assay that could bring a better understanding of the pathophysiology of autism and with that, possibly better treatments,” according to Vaccarino.

The success of the approach also suggests that similar methods might be used to gain important insights into other human developmental diseases that have until now been difficult to crack open.


Abstract of FOXG1-Dependent Dysregulation of GABA/Glutamate Neuron Differentiation in Autism Spectrum Disorders

Autism spectrum disorder (ASD) is a disorder of brain development. Most cases lack a clear etiology or genetic basis, and the difficulty of re-enacting human brain development has precluded understanding of ASD pathophysiology. Here we use three-dimensional neural cultures (organoids) derived from induced pluripotent stem cells (iPSCs) to investigate neurodevelopmental alterations in individuals with severe idiopathic ASD. While no known underlying genomic mutation could be identified, transcriptome and gene network analyses revealed upregulation of genes involved in cell proliferation, neuronal differentiation, and synaptic assembly. ASD-derived organoids exhibit an accelerated cell cycle and overproduction of GABAergic inhibitory neurons. Using RNA interference, we show that overexpression of the transcription factor FOXG1 is responsible for the overproduction of GABAergic neurons. Altered expression of gene network modules and FOXG1 are positively correlated with symptom severity. Our data suggest that a shift toward GABAergic neuron fate caused by FOXG1 is a developmental precursor of ASD.

Scientists ‘watch’ rats exploring their memories

Each of the five panels shows a memory snapshot created by hundreds of place cells while the rat was physically stationary at the top of the 1.8 m track (black). The time difference between the first and last snapshot is a mere one-fifth of a second; the positions represented by the neurons are shown in bright colors. (credit: Reprinted with permission from Pfeiffer and Foster, Science 349:180, 2015)

How do you visualize your memory? As a continuous video recording, or as a series of snapshots strung together?

According to Johns Hopkins scientists, who actually watched nerve cells firing in the brains of rats as they planned where to go next, it’s a series of snapshots — more like jumping across stepping stones than walking across a bridge.

“Our data from rats suggest that our memories are actually organized that way, with one network of neurons responsible for the snapshots and another responsible for the string that connects them,” says David Foster, Ph.D., an assistant professor of neuroscience at the Johns Hopkins University School of Medicine.

A summary of their experiments, published in the journal Science on July 10, sheds light on what memories are, how they form, and how the system could fail.

The 2014 Nobel Prize in Physiology or Medicine was awarded for the discovery of a positioning system in the brain. Grid cells, together with other cells in the entorhinal cortex of the brain that recognize the direction of the head and the border of the room, form networks with the place cells in the hippocampus. This circuitry constitutes a comprehensive positioning system in the brain that appears to have components similar to those of the rat brain. (Credit: Mattias Karlén/The Nobel Committee for Physiology or Medicine)

Foster and his team focused their experiments on a group of nerve cells in the hippocampus known — in animals and people — for creating a mental “map” of experiences, or memories. The cells are called place cells because each develops a preferred place in an environment and fires mainly when the animal is in that place.

In previous experiments, Foster’s group learned that when a rat wants to get from point A to point D, it maps out its route mentally before starting on its journey. They could “see” this happen by implanting many tiny wires in the brains of the rats so that they could monitor the activity of more than 200 place cells at a time. By doing so, they found that the place cells representing point A would fire first, followed by those for point B, then C and D.

Maps of gaps

Their latest work, says Foster, is essentially a higher-resolution “map” of the same process, which revealed gaps in between points A, B, C and D, corresponding to actual “gaps” between discrete “memories” in the rats’ brains.

“The trajectories that the rats reconstructed weren’t smooth,” says Foster. “We were able to see that neural activity ‘hovers’ in one place for about 20 milliseconds before ‘jumping’ to another place, where it hovers again before moving on to the next point.”

He says that what seems to be happening during the hovering phase is that an individual memory is being strengthened or focused. “At first, you get a ‘blurry’ representation of point A because a bunch of place cells all around point A fire, but, as time passes, the activity becomes more focused on A,” he explains. Then the activity jumps to a “blurry” version of B, which then gets focused.

“We think that there is a whole network of cells dedicated to this process of fine-tuning and jumping,” says Foster. “Without it, memory retrieval would be even messier than it is.”
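A toy decoder shows how such snapshots can be read out of place-cell activity. Assuming Gaussian place fields along the 1.8 m track (all parameters here are our illustrative choices, not the lab’s methods), each 20-millisecond window of spike counts is matched against the expected population activity at every candidate position:

```python
import numpy as np


def decode_position(counts, centers, width=0.08, peak=5.0, track_len=1.8):
    """Return the track position whose expected population activity
    best explains one window of spike counts (Poisson log-likelihood)."""
    positions = np.linspace(0, track_len, 200)
    # Expected spikes of each cell (rows) at each position (columns).
    tuning = peak * np.exp(-(positions[None, :] - centers[:, None]) ** 2
                           / (2 * width ** 2))
    loglik = (counts[:, None] * np.log(tuning + 1e-9) - tuning).sum(axis=0)
    return positions[np.argmax(loglik)]


# ~200 place cells with field centers spread along the 1.8 m track.
rng = np.random.default_rng(1)
centers = np.sort(rng.uniform(0.0, 1.8, 200))
true_pos = 0.9  # where this "snapshot" points, not where the rat sits
counts = rng.poisson(5.0 * np.exp(-(centers - true_pos) ** 2 / (2 * 0.08 ** 2)))
print(decode_position(counts, centers))  # ~0.9
```

Run over successive windows, such a decoder would show the estimate hovering near one location as it sharpens, then jumping to the next — the stepping-stone pattern Foster describes.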

In the future, the group plans to see what happens when certain memories within a path go missing, hoping to learn more about what memories are and how we can preserve them in those suffering from Alzheimer’s disease and other cognitive disorders.


Abstract of Autoassociative dynamics in the generation of sequences of hippocampal place cells

Neuronal circuits produce self-sustaining sequences of activity patterns, but the precise mechanisms remain unknown. Here we provide evidence for autoassociative dynamics in sequence generation. During sharp-wave ripple (SWR) events, hippocampal neurons express sequenced reactivations, which we show are composed of discrete attractors. Each attractor corresponds to a single location, the representation of which sharpens over the course of several milliseconds, as the reactivation focuses at that location. Subsequently, the reactivation transitions rapidly to a spatially discontiguous location. This alternation between sharpening and transition occurs repeatedly within individual SWRs and is locked to the slow-gamma (25 to 50 hertz) rhythm. These findings support theoretical notions of neural network function and reveal a fundamental discretization in the retrieval of memory in the hippocampus, together with a function for gamma oscillations in the control of attractor dynamics.

Wireless device delivers drugs to brain and triggers neurons via remote control

Tiny, implantable devices are capable of delivering light or drugs to specific areas of the brain, potentially improving drug delivery to targeted regions of the brain and reducing side effects. Eventually, the devices may be used to treat pain, depression, epilepsy and other neurological disorders in people. (credit: Alex David Jerez Roman)

A team of researchers has developed a tiny “wireless optofluidic neural probe” the width of a human hair that can be implanted in the brain and triggered by remote control to deliver drugs and activate targeted populations of brain cells.

The technology, demonstrated for the first time in mice, may one day be used to treat pain, depression, epilepsy, and other neurological disorders in people by targeting therapies to specific brain circuits with fewer side effects, according to the researchers at Washington University School of Medicine in St. Louis and the University of Illinois at Urbana-Champaign.

Soft optofluidic neural probe during simultaneous drug delivery and photostimulation (from micro-ILED). Drugs would be delivered via the fluidic channel and activated with light as needed. (Insets) Comparison of such a device (top) and a conventional metal cannula (bottom). Scale bars, 1 mm. (credit: Jae-Woong Jeong et al./Cell)

The research builds on earlier work in optogenetics, a technology that makes individual brain cells sensitive to light and then activates those targeted populations of cells with flashes of light.

The study was published online today (July 16) in the journal Cell and will appear in the July 30 print issue.

Previous attempts to deliver drugs or other agents, such as enzymes or other compounds, to experimental animals have required the animals to be tethered to rigid pumps and tubes that restricted their movement and often caused them to experience stress.

Exploded-view schematic diagram that illustrates an array of inorganic light-emitting diodes mounted on top of a soft microfluidic system that includes four separate microfluidic channels, each connected to a set of fluid reservoirs that include copper membranes as hermetic seals, expandable composite materials as mechanical transducers, and microscale heating elements as actuators (credit: Jae-Woong Jeong et al./Cell)

The new wireless optofluidic neural probes were built with four chambers to carry drugs directly into the brain via microfluidic channels and microscale pumps, and the probes are soft like brain tissue.
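The exploded-view caption above outlines the actuation chain: a wireless command heats a microscale element, the heat expands a composite transducer, and the expansion ruptures a reservoir’s copper seal to push its drug through the channel. Purely as a hypothetical software model of that control flow (class and method names are ours; the real probe is hardware driven by an infrared remote):

```python
from dataclasses import dataclass, field


@dataclass
class OptofluidicProbe:
    """Hypothetical model of the probe described above: four sealed
    drug reservoirs plus a micro-ILED for photostimulation."""
    used: set = field(default_factory=set)

    def deliver(self, channel: int):
        assert 0 <= channel < 4, "the probe has four microfluidic channels"
        assert channel not in self.used, "a ruptured seal cannot re-close"
        self._heat(channel)     # heater expands the composite transducer,
        self.used.add(channel)  # rupturing the hermetic copper membrane

    def photostimulate(self, n_pulses: int):
        for _ in range(n_pulses):
            self._led(True)     # flash the micro-ILED once per pulse
            self._led(False)

    # Stand-ins for the IR-triggered hardware actions.
    def _heat(self, channel: int): pass
    def _led(self, on: bool): pass


probe = OptofluidicProbe()
probe.deliver(channel=0)            # e.g., release a dopamine antagonist
probe.photostimulate(n_pulses=100)
```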

New tool for mapping brain-circuit activity

A freely moving rat with head-mounted device for drug delivery and photostimulation via the optofluidic system. The device is remotely controlled via infrared technology, similar to that used in a TV remote. Scale bar, 1 cm. (credit: Jae-Woong Jeong et al./Cell)

By activating brain cells with drugs and with light, the scientists are getting an unprecedented look at the inner workings of the brain.

“This is the kind of revolutionary tool development that neuroscientists need to map out brain circuit activity,” said James Gnadt, PhD, program director at the National Institute of Neurological Disorders and Stroke at the National Institutes of Health (NIH).

“It’s very much in line with the goals of the NIH’s BRAIN Initiative, a program designed to accelerate the development and application of new technologies to shed light on the complex links between brain function and behavior.”

The new devices may ultimately also help people with neurological disorders and other problems, according to co-first author Jae-Woong Jeong, PhD, a former postdoctoral researcher at the University of Illinois and now assistant professor of electrical, computer and energy engineering at the University of Colorado, Boulder.

“The device can remain in the brain and function for a long time without causing inflammation or neural damage,” Jeong said.

The researchers also believe that similar, more flexible devices could have applications in areas of the body other than the brain, including peripheral organs.

Messing with mice minds 

As part of the study, the researchers, who clearly are having way too much fun, showed that by delivering a drug to one side of an animal’s brain, they could stimulate neurons involved in movement, which caused the mouse to move in a circle.

In other mice, shining a light directly onto brain cells expressing a light-sensitive protein prompted the release of dopamine, a neurotransmitter that rewarded the mice by making them feel good. The mice then returned to the same location in a maze to seek another reward. But the researchers were able to interfere with that light-activated pursuit by remotely controlling the release of a drug that blocks the action of dopamine on its receptors.

The researchers hope to incorporate a design much like a printer’s ink cartridge, so that drugs can continue to be delivered to specific cells in the brain, or elsewhere in the body, for as long as needed — without having to replace the entire device.


Abstract of Wireless Optofluidic Systems for Programmable In Vivo Pharmacology and Optogenetics

In vivo pharmacology and optogenetics hold tremendous promise for dissection of neural circuits, cellular signaling, and manipulating neurophysiological systems in awake, behaving animals. Existing neural interface technologies, such as metal cannulas connected to external drug supplies for pharmacological infusions and tethered fiber optics for optogenetics, are not ideal for minimally invasive, untethered studies on freely behaving animals. Here, we introduce wireless optofluidic neural probes that combine ultrathin, soft microfluidic drug delivery with cellular-scale inorganic light-emitting diode (μ-ILED) arrays. These probes are orders of magnitude smaller than cannulas and allow wireless, programmed spatiotemporal control of fluid delivery and photostimulation. We demonstrate these devices in freely moving animals to modify gene expression, deliver peptide ligands, and provide concurrent photostimulation with antagonist drug delivery to manipulate mesoaccumbens reward-related behavior. The minimally invasive operation of these probes forecasts utility in other organ systems and species, with potential for broad application in biomedical science, engineering, and medicine.

Memory-loss case ‘like nothing we have ever seen before’

(credit: Newmarket Films)

Gerald Burgess, a University of Leicester lecturer in clinical psychology, has described treating an individual who suffered a “Memento/Before I Go to Sleep“-style anterograde amnesia after a dental treatment — “like nothing we have ever seen before.”

Since the one-hour root-canal treatment, during which the 38-year-old man from the UK was given a local anesthetic, the individual cannot remember anything beyond 90 minutes.

He is fully aware of his identity and his personality did not change, says Burgess, but every day the man thinks it is the day of his dental appointment. He has to manage his life through an electronic diary and access to prompts.

Burgess has now described the study, done a decade ago, in an open-access paper published in May in the journal Neurocase. He is also appealing for people who know of someone who might have suffered similar symptoms of memory loss, or medical or allied health professionals working with someone like this, to contact him to build up knowledge and evidence in this field of study.

Possible explanations

Burgess notes that “what we did know about from decades of research and hundreds of case studies, is that bilateral damage to the hippocampal and/or diencephalon structures causes profound amnesia … [but] we should perhaps not be so stuck in thinking that profound amnesia only occurs in the context of visible damage to the brain’s hippocampal and/or diencephalon structures.

“Those structures appear just to be needed for the initial holding or retention of information before engrams then proceed slowly through several other neuro-electrical and neuro-chemical events, before finally permanent memories are stored, and that something can occur at some later point in this process to vanquish the memory trace permanently.

“An acquired or manifest deficiency of protein synthesis, required for permanent re-structuring of synapses in the brain, seemed an intriguing speculation, and one we hope there might be further human research into. This speculation was sparked by two seemingly key coincidences of one, timing when this protein synthesis stage occurs coincides with the patient’s forgetting at 90 minutes or thereabouts, and two, both ‘episodic’ and ‘procedural’ memories appear to require successful protein synthesis to occur for long-term memory permanence, and the patient cannot retain any new either episodic or procedural memories — and this is unusual compared to traditional cases of amnesia.”

The work was done in collaboration with Bhanu Chadalavada, consultant psychiatrist at Northamptonshire Healthcare NHS Foundation Trust.


Newmarket | Memento (2000) (HD Trailer)

A man creates a strange system to help him remember things; so he can hunt for the murderer of his wife without his short-term memory loss being an obstacle.


Scott Free | Before I Go To Sleep Official Trailer #1 (2014) – Nicole Kidman, Colin Firth Movie HD

A woman wakes up every day, remembering nothing as a result of a traumatic accident in her past. One day, new terrifying truths emerge that force her to question everyone around her.


Columbia Pictures | 50 First Dates Trailer

A man afraid of commitment thinks he’s finally found the girl of his dreams, until he discovers she has short-term memory loss and forgets him the very next day.


Abstract of Profound anterograde amnesia following routine anesthetic and dental procedure: a new classification of amnesia characterized by intermediate-to-late-stage consolidation failure?

Anterograde amnesia caused by bilateral hippocampal or diencephalon damage manifests in characteristic symptoms of preserved intellect and implicit learning, and short span of awareness with complete and rapid forgetting of episodic material. A new case, WO, 38-year-old male with anterograde amnesia, in the absence of structural brain changes or psychological explanation is presented, along with four comparison cases from the extant literature that share commonalities between them including preserved intellect, span of awareness greater than working memory, and complete forgetting within hours or days following successful learning, including notably for both explicit and implicit material. WO’s amnesia onset coincided with anesthetic injection and root canal procedure, with extended vasovagal-like incident. The commonalities between the five cases presented may suggest a shared biological mechanism involving the breakdown of intermediate-to-late-stage consolidation that does not depend on the structural integrity of the hippocampi. Speculation on the mechanism of consolidation breakdown and diagnostic implications are discussed.

Gene therapy restores hearing in deaf mice

The inverted V’s above are sensory hair bundles in the ear, each containing 50 to 100 microvilli tipped with TMC proteins. Gene therapy restores hearing by providing working copies of those proteins. (credit: Gwenaelle Geleoc & Artur Indzhykulian)

Patients with hearing loss will one day have their genome sequenced and their hearing restored by gene therapy, says Jeffrey Holt, PhD, a scientist in the F.M. Kirby Neurobiology Center at Boston Children’s Hospital and an associate professor of otolaryngology at Harvard Medical School.

A proof-of-principle study published by the journal Science Translational Medicine takes a step in that direction, restoring hearing in deaf mice. Clinical trials of gene therapy for humans could be started within 5 to 10 years, Holt believes.

Holt, with first author Charles Askew and colleagues at École Polytechnique Fédérale de Lausanne in Switzerland, focused on deafness caused by a gene called TMC1 — one of more than 70 genes known to cause deafness when mutated. TMC1 accounts for 4 to 8 percent of genetic deafness, and encodes a protein critical for hearing: it helps convert sound into electrical signals that travel to the brain.

To deliver the functioning TMC1 gene into the ear, the team inserted it into an engineered virus called adeno-associated virus 1, or AAV1, and added a promoter, a genetic sequence that turns the gene on only in certain sensory cells in the cochlea, known as hair cells.

“I heard that!” (credit: Rasbak/Wikimedia Commons)

They then injected the engineered AAV1 into the inner ears of mutant, deaf mice modeling the more common recessive form of TMC1 deafness, which causes profound hearing loss in children from a very young age, usually by around 2 years. After the injection, the animals’ sensory hair cells began responding to sound and electrical activity began showing up in the auditory portion of their brainstems.

How it works

Holt’s team showed in 2013 that TMC1 and the related protein TMC2 are critical for hearing, ending a rigorous 30-year search by scientists. Sensory hair cells contain tiny projections called microvilli, each tipped with a channel formed by TMC1 and TMC2 proteins. Arriving sound waves wiggle the microvilli, causing the channels to open. That allows calcium to enter the cell, generating an electrical signal that travels to the brain and ultimately translates to hearing.

Although the channel is made up of either TMC1 or TMC2, a mutation in the TMC1 gene is sufficient to cause deafness. However, Holt’s study also showed that gene therapy with the TMC2 gene could compensate for loss of a functional TMC1, restoring hearing in the recessive deafness model and partial hearing in a mouse model of dominant TMC1 deafness, in which patients gradually go deaf beginning around 10 to 15 years of age.


Abstract of Tmc gene therapy restores auditory function in deaf mice

Genetic hearing loss accounts for up to 50% of prelingual deafness worldwide, yet there are no biologic treatments currently available. To investigate gene therapy as a potential biologic strategy for restoration of auditory function in patients with genetic hearing loss, we tested a gene augmentation approach in mouse models of genetic deafness. We focused on DFNB7/11 and DFNA36, which are autosomal recessive and dominant deafnesses, respectively, caused by mutations in transmembrane channel–like 1 (TMC1). Mice that carry targeted deletion of Tmc1 or a dominant Tmc1 point mutation, known as Beethoven, are good models for human DFNB7/11 and DFNA36. We screened several adeno-associated viral (AAV) serotypes and promoters and identified AAV2/1 and the chicken β-actin (Cba) promoter as an efficient combination for driving the expression of exogenous Tmc1 in inner hair cells in vivo. Exogenous Tmc1 or its closely related ortholog, Tmc2, were capable of restoring sensory transduction, auditory brainstem responses, and acoustic startle reflexes in otherwise deaf mice, suggesting that gene augmentation with Tmc1 or Tmc2 is well suited for further development as a strategy for restoration of auditory function in deaf patients who carry TMC1 mutations.

Self/Less movie features uploading … to an existing human body

In Self/Less, a science-fiction thriller to be released in the U.S. today, July 10, 2015, Damian Hale, an extremely wealthy aristocrat (Ben Kingsley) dying from cancer, undergoes a $250 million radical medical procedure at a lab called Phoenix Biogenic in Manhattan to have his consciousness transferred into the body of a healthy young man (Ryan Reynolds).

(credit: Hilary Bronwyn Gayle / Gramercy Pictures)

But when he starts to uncover the mystery of the body’s origin — he has flashbacks in a dream of a former life as Mark — he discovers the body was not grown in a laboratory, as promised, and that the “organization” he bought the body from will kill to protect its investment. To make matters worse, he faces the threat of losing control of the body he now possesses and its original owner’s consciousness resurfacing, which will erase his mind in the process.

Curiously, at one point, Mark looks up the scientist who did the transfer on Wikipedia, and finds that he was the “godfather of transhumanism.” “What many summer movie-goers might not realize is that Self/less is loosely based on a real-life project called the 2045 Initiative, which is being spearheaded by Dmitry Itskov, a Russian multi-millionaire,” Ars Technica suggests. But the theme has also been explored in a number of movies, ranging from Metropolis to The Sixth Day, Avatar, and Avengers: Age of Ultron.

(credit: 2045 Strategic Social Initiative)