Hierarchies exist in the brain because of lower connection costs, research shows

The Evolutionary Origins of Hierarchy: Evolution with performance-only selection results in non-hierarchical and non-modular networks, which take longer to adapt to new environments. However, evolving networks with a connection cost creates hierarchical and functionally modular networks that can solve the overall problem by recursively solving its sub-problems. These networks also adapt to new environments faster. (credit: Henok Mengistu et al./PLOS Comp. Bio)

New research suggests why the human brain and other biological networks exhibit a hierarchical structure, and the study may improve attempts to create artificial intelligence.

The study, by researchers from the University of Wyoming and the French Institute for Research in Computer Science and Automation (INRIA), demonstrates that the evolution of hierarchy — a simple system of ranking — in biological networks may arise because of the costs associated with network connections.

This study also supports Ray Kurzweil’s theory of the hierarchical structure of the neocortex, presented in his 2012 book, How to Create a Mind.

The human brain has separate areas for vision, motor control, and tactile processing, for example, and each of these areas consists of sub-regions that govern different parts of the body.

Evolutionary pressure to reduce the number and cost of connections

The research findings suggest that hierarchy evolves not because it produces more efficient networks, but instead because hierarchically wired networks have fewer connections. That’s because connections in biological networks are expensive — they have to be built, maintained, etc. — so there’s an evolutionary pressure to reduce the number of connections.

In addition to shedding light on the emergence of hierarchy across the many domains in which it appears, these findings may also accelerate future research into evolving more complex, intelligent computational brains in the fields of artificial intelligence and robotics.

The research, led by Henok S. Mengistu, is described in an open-access paper in PLOS Computational Biology. The researchers also simulated the evolution of computational brain models, known as artificial neural networks, both with and without a cost for network connections. They found that hierarchical structures emerge much more frequently when a cost for connections is present.
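
The selection pressure at the heart of the study is easy to sketch in code. The toy below is a minimal sketch, not the authors' algorithm (which evolved larger networks and measured hierarchy and modularity directly): it evolves tiny feedforward networks on a hierarchically decomposable task, an AND of two XOR sub-problems, with fitness defined either as performance alone or as performance minus a connection-cost penalty. The network sizes, mutation rates, and cost weight LAMBDA are illustrative assumptions.

```python
# Minimal sketch of selection with vs. without a connection cost (assumptions above).
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 4, 4, 1          # tiny illustrative network
POP, GENS, LAMBDA = 50, 200, 0.01     # population, generations, cost weight

# Hierarchical task: AND of two XOR sub-problems over all 16 input patterns.
X = np.array([[a, b, c, d] for a in (0, 1) for b in (0, 1)
              for c in (0, 1) for d in (0, 1)], dtype=float)
y = ((X[:, 0].astype(int) ^ X[:, 1].astype(int)) &
     (X[:, 2].astype(int) ^ X[:, 3].astype(int))).astype(float)

def new_genome():
    # Two weight layers plus binary masks saying which connections exist.
    return [rng.normal(0, 1, (N_IN, N_HID)), rng.normal(0, 1, (N_HID, N_OUT)),
            rng.random((N_IN, N_HID)) < 0.5, rng.random((N_HID, N_OUT)) < 0.5]

def forward(g):
    w1, w2, m1, m2 = g
    h = np.tanh(X @ (w1 * m1))
    out = 1.0 / (1.0 + np.exp(-(h @ (w2 * m2))))
    return out.ravel()

def fitness(g, cost_on):
    perf = 1.0 - np.mean((forward(g) - y) ** 2)        # performance term
    cost = LAMBDA * (g[2].sum() + g[3].sum())          # connection-cost term
    return perf - cost if cost_on else perf

def mutate(g):
    child = [part.copy() for part in g]
    child[0] += rng.normal(0, 0.1, child[0].shape)     # perturb weights
    child[1] += rng.normal(0, 0.1, child[1].shape)
    for m in (2, 3):                                   # occasionally add/remove connections
        child[m] ^= rng.random(child[m].shape) < 0.05
    return child

for cost_on in (False, True):
    pop = [new_genome() for _ in range(POP)]
    for _ in range(GENS):                              # truncation selection + mutation
        pop.sort(key=lambda g: fitness(g, cost_on), reverse=True)
        pop = pop[:POP // 2] + [mutate(g) for g in pop[:POP // 2]]
    best = max(pop, key=lambda g: fitness(g, cost_on))
    print(f"connection cost {'on ' if cost_on else 'off'}: "
          f"{int(best[2].sum() + best[3].sum())} connections, "
          f"performance {1.0 - np.mean((forward(best) - y) ** 2):.3f}")
```

Under the cost term, the surviving networks use markedly fewer connections for similar performance; that sparsity pressure is the ingredient the paper identifies as the driver of hierarchy and modularity.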

Aside from explaining why biological networks are hierarchical, the research might also explain why many man-made systems such as the Internet and road systems are also hierarchical. “The next step is to harness and combine this knowledge to evolve large-scale, structurally organized networks in the hopes of creating better artificial intelligence and increasing our understanding of the evolution of animal intelligence, including our own,” according to the researchers.


Abstract of The Evolutionary Origins of Hierarchy

Hierarchical organization—the recursive composition of sub-modules—is ubiquitous in biological networks, including neural, metabolic, ecological, and genetic regulatory networks, and in human-made systems, such as large organizations and the Internet. To date, most research on hierarchy in networks has been limited to quantifying this property. However, an open, important question in evolutionary biology is why hierarchical organization evolves in the first place. It has recently been shown that modularity evolves because of the presence of a cost for network connections. Here we investigate whether such connection costs also tend to cause a hierarchical organization of such modules. In computational simulations, we find that networks without a connection cost do not evolve to be hierarchical, even when the task has a hierarchical structure. However, with a connection cost, networks evolve to be both modular and hierarchical, and these networks exhibit higher overall performance and evolvability (i.e., faster adaptation to new environments). Additional analyses confirm that hierarchy independently improves adaptability after controlling for modularity. Overall, our results suggest that the same force, the cost of connections, promotes the evolution of both hierarchy and modularity, and that these properties are important drivers of network performance and adaptability. In addition to shedding light on the emergence of hierarchy across the many domains in which it appears, these findings will also accelerate future research into evolving more complex, intelligent computational brains in the fields of artificial intelligence and robotics.

Chronic stroke patients safely recover after injection of human stem cells


Injecting specially prepared human adult stem cells directly into the brains of chronic stroke patients proved safe and effective in restoring motor (muscle) function in a small clinical trial led by Stanford University School of Medicine investigators.

The 18 patients had suffered their first and only stroke between six months and three years before receiving the injections, which involved drilling a small hole through their skulls.

For most patients, at least a full year had passed since their stroke — well past the time when further recovery might be hoped for. In each case, the stroke had taken place beneath the brain’s outermost layer, or cortex, and had severely affected motor function. “Some patients couldn’t walk,” said Gary Steinberg, MD, PhD, professor and chair of neurosurgery at Stanford. “Others couldn’t move their arm.”

Sonia Olea Coontz had a stroke in 2011 that affected the movement of her right arm and leg. After modified stem cells were injected into her brain as part of a clinical trial, she says her limbs “woke up.” (credit: Mark Rightmire/Stanford University School of Medicine)

One of those patients, Sonia Olea Coontz, of Long Beach, California, now 36, had a stroke in May 2011. “My right arm wasn’t working at all,” said Coontz. “It felt like it was almost dead. My right leg worked, but not well.” She walked with a noticeable limp. “I used a wheelchair a lot. After my surgery, they woke up,” she said of her limbs.

‘Clinically meaningful’ results

The promising results set the stage for an expanded trial of the procedure now getting underway. They also call for new thinking regarding the permanence of brain damage, said Steinberg.

“This was just a single trial, and a small one,” cautioned Steinberg, who led the 18-patient trial and conducted 12 of the procedures himself. (The rest were performed at the University of Pittsburgh.) “It was designed primarily to test the procedure’s safety. But patients improved by several standard measures, and their improvement was not only statistically significant, but clinically meaningful. Their ability to move around has recovered visibly. That’s unprecedented. At six months out from a stroke, you don’t expect to see any further recovery.”

The trial’s results are detailed in a paper published online June 2 in Stroke. Steinberg, who has more than 15 years of experience with stem cell therapies for neurological indications, is the paper’s lead and senior author.

The procedure involved injecting SB623 mesenchymal stem cells, derived from the bone marrow of two donors and then modified to enhance their ability to restore neurologic function.*

Motor-function improvements

Substantial improvements were seen in patients’ scores on several widely accepted metrics of stroke recovery. Perhaps most notably, there was an overall 11.4-point improvement on the motor-function component of the Fugl-Meyer test, which specifically gauges patients’ movement deficits. “Patients who were in wheelchairs are walking now,” said Steinberg, who is the Bernard and Ronni Lacroute-William Randolph Hearst Professor in Neurosurgery and Neurosciences.

“We know these cells don’t survive for more than a month or so in the brain,” he added. “Yet we see that patients’ recovery is sustained for greater than one year and, in some cases now, more than two years.”

Importantly, the stroke patients’ postoperative improvement was independent of their age or their condition’s severity at the onset of the trial. “Older people tend not to respond to treatment as well, but here we see 70-year-olds recovering substantially,” Steinberg said. “This could revolutionize our concept of what happens after not only stroke, but traumatic brain injury and even neurodegenerative disorders. The notion was that once the brain is injured, it doesn’t recover — you’re stuck with it. But if we can figure out how to jump-start these damaged brain circuits, we can change the whole effect.

“We thought those brain circuits were dead. And we’ve learned that they’re not.”

New trial now recruiting 156 patients

A new randomized, double-blinded multicenter phase-2b trial aiming to enroll 156 chronic stroke patients is now actively recruiting patients. Steinberg is the principal investigator of that trial. For more information, you can e-mail stemcellstudy@stanford.edu. “There are close to 7 million chronic stroke patients in the United States,” Steinberg said. “If this treatment really works for that huge population, it has great potential.”

Some 800,000 people suffer a stroke each year in the United States alone. About 85 percent of all strokes are ischemic: they occur when a clot blocks a blood vessel supplying blood to part of the brain, damaging the affected area. The specific loss of function incurred depends on exactly where within the brain the stroke occurs, and on its magnitude.

Although approved therapies for ischemic stroke exist, to be effective they must be applied within a few hours of the event — a time frame that often is exceeded by the amount of time it takes for a stroke patient to arrive at a treatment center.

Consequently, only a small fraction of patients benefit from treatment during the stroke’s acute phase. The great majority of survivors end up with enduring disabilities. Some lost functionality often returns, but it’s typically limited. And the prevailing consensus among neurologists is that virtually all recovery that’s going to occur comes within the first six months after the stroke.

* Mesenchymal stem cells are the naturally occurring precursors of muscle, fat, bone and tendon tissues. In preclinical studies, they have not been found to cause problems by differentiating into unwanted tissues or forming tumors. Easily harvested from bone marrow, they appear to trigger no strong immune reaction in recipients, even when they come from an unrelated donor. In fact, they may actively suppress the immune system. For this trial, unlike the great majority of transplantation procedures, the stem cell recipients received no immunosuppressant drugs.

During the procedure, each patient’s head was held in a fixed position while a small hole was drilled through the skull; SB623 cells were then injected by syringe into a number of spots at the periphery of the stroke-damaged area, which varied from patient to patient.

Afterward, patients were monitored via blood tests, clinical evaluations and brain imaging. Interestingly, the implanted stem cells themselves do not appear to survive very long in the brain. Preclinical studies have shown that these cells begin to disappear about one month after the procedure and are gone by two months. Yet, patients showed significant recovery by a number of measures within a month’s time, and they continued improving for several months afterward, sustaining these improvements at six and 12 months after surgery. Steinberg said it’s likely that factors secreted by the mesenchymal cells during their early postoperative presence near the stroke site stimulate lasting regeneration or reactivation of nearby nervous tissue.

No relevant blood abnormalities were observed. Some patients experienced transient nausea and vomiting, and 78 percent had temporary headaches related to the transplant procedure.


Abstract of Clinical Outcomes of Transplanted Modified Bone Marrow–Derived Mesenchymal Stem Cells in Stroke: A Phase 1/2a Study

Background and Purpose—Preclinical data suggest that cell-based therapies have the potential to improve stroke outcomes.

Methods—Eighteen patients with stable, chronic stroke were enrolled in a 2-year, open-label, single-arm study to evaluate the safety and clinical outcomes of surgical transplantation of modified bone marrow–derived mesenchymal stem cells (SB623).

Results—All patients in the safety population (N=18) experienced at least 1 treatment-emergent adverse event. Six patients experienced 6 serious treatment-emergent adverse events; 2 were probably or definitely related to surgical procedure; none were related to cell treatment. All serious treatment-emergent adverse events resolved without sequelae. There were no dose-limiting toxicities or deaths. Sixteen patients completed 12 months of follow-up at the time of this analysis. Significant improvement from baseline (mean) was reported for: (1) European Stroke Scale: mean increase 6.88 (95% confidence interval, 3.5–10.3; P<0.001), (2) National Institutes of Health Stroke Scale: mean decrease 2.00 (95% confidence interval, −2.7 to −1.3; P<0.001), (3) Fugl-Meyer total score: mean increase 19.20 (95% confidence interval, 11.4–27.0; P<0.001), and (4) Fugl-Meyer motor function total score: mean increase 11.40 (95% confidence interval, 4.6–18.2; P<0.001). No changes were observed in modified Rankin Scale. The area of magnetic resonance T2 fluid-attenuated inversion recovery signal change in the ipsilateral cortex 1 week after implantation significantly correlated with clinical improvement at 12 months (P<0.001 for European Stroke Scale).

Conclusions—In this interim report, SB623 cells were safe and associated with improvement in clinical outcome end points at 12 months.

Implanted neuroprosthesis improves walking ability in stroke patient

Left: multichannel implantable gait-assist system. Right: participant walking with the system. (credit: N.S. Makowski et al./Am. J. Phys. Med. Rehabil.)

A surgically implanted neuroprosthesis has led to substantial improvement in walking speed and distance in a patient with limited mobility after a stroke, according to a single-patient study in the American Journal of Physical Medicine & Rehabilitation.

The system, programmed to stimulate coordinated activity of hip, knee, and ankle muscles, “is a promising intervention to provide assistance to stroke survivors during daily walking,” write Nathaniel S. Makowski, PhD, and colleagues of the Louis Stokes Cleveland Veterans Affairs Medical Center.

With technical refinements and further research, such implanted neuroprosthesis systems might help to promote walking ability for at least some patients with post-stroke disability.

Clinically relevant gait improvements

The researchers report their experience with an implanted neuroprosthesis in a 64-year-old man with impaired motion and sensation of his left leg and foot after a hemorrhagic (bleeding) stroke. After thorough evaluation, he underwent surgery to place an implanted pulse generator and intramuscular stimulating electrodes in seven muscles of the hip, knee, and ankle.*

Makowski and colleagues then created a customized electrical stimulation program to activate the muscles, with the goal of restoring a more natural gait pattern. The patient went through extensive training in the researchers’ laboratory for several months after neuroprosthesis placement.

With training alone (without muscle stimulation), gait speed increased only from 0.29 meters per second (m/s) before surgery to 0.35 m/s after training, a non-significant improvement. But when muscle stimulation was turned on, gait speed increased dramatically: to 0.72 m/s, with “more symmetrical and dynamic gait.”

In addition, the patient was able to walk much farther. When first evaluated, he could walk only 76 meters before becoming fatigued. After training but without stimulation, he could walk about 300 meters (in 16 minutes). With stimulation, his maximum walking distance increased to more than 1,400 meters (in 41 minutes).

Even though the patient wasn’t walking with stimulation outside the laboratory, his walking ability in daily life improved significantly. He went from “household-only” ambulation to increased walking outside in the neighborhood.

“The therapeutic effect is likely a result of muscle conditioning during stimulated exercise and gait training,” according to the authors. “Persistent use of the device during walking may provide ongoing training that maintains both muscle conditioning and cardiovascular health.”

While the results of this initial experience in a single patient are encouraging, the researchers emphasize that large-scale studies will be needed to demonstrate the wider applicability of a neuroprosthesis for multi-joint control. If the benefits are confirmed, Makowski and colleagues conclude, “daily use of an implanted system could have significant clinical relevance to a portion of the stroke population.”

* Tensor fasciae latae (hip flexor), sartorius (hip and knee flexor), gluteus maximus (hip extensor), short head of biceps femoris (knee flexor), quadriceps (knee extensor), tibialis anterior/peroneus longus (ankle dorsiflexors), and gastrocnemius.


Abstract of Improving Walking with an Implanted Neuroprosthesis for Hip, Knee, and Ankle Control After Stroke

Objective: The objective of this work was to quantify the effects of a fully implanted pulse generator to activate or augment actions of hip, knee, and ankle muscles after stroke.

Design: The subject was a 64-year-old man with left hemiparesis resulting from hemorrhagic stroke 21 months before participation. He received an 8-channel implanted pulse generator and intramuscular stimulating electrodes targeting unilateral hip, knee, and ankle muscles on the paretic side. After implantation, a stimulation pattern was customized to assist with hip, knee, and ankle movement during gait.

The subject served as his own concurrent and longitudinal control with and without stimulation. Outcome measures included 10-m walk and 6-minute timed walk to assess gait speed, maximum walk time, and distance to measure endurance, and quantitative motion analysis to evaluate spatial-temporal characteristics. Assessments were repeated under 3 conditions: (1) volitional walking at baseline, (2) volitional walking after training, and (3) walking with stimulation after training.

Results: Volitional gait speed improved with training from 0.29 m/s to 0.35 m/s and further increased to 0.72 m/s with stimulation. Most spatial-temporal characteristics improved and represented more symmetrical and dynamic gait.

Conclusions: These data suggest that a multijoint approach to implanted neuroprostheses can provide clinically relevant improvements in gait after stroke.

How to erase bad memories and enhance good ones

Mice normally freeze in position as a response to fear, as shown here under the control condition (center row): fear conditioning induces freezing behavior in response (recall) to exposure to the conditioned stimulus (a tone), but the freezing response normally decreases (extinction) following several days of multiple tone exposures (the mice get used to it). However, enhancing release of acetylcholine (blue light) in the amygdala during conditioned fear training resulted in continued freezing behavior 24 hours later that persisted over long periods of time (impaired extinction). In contrast, reducing acetylcholine (yellow light) during the initial training period reduced the freezing behavior (during recall) and led to greater retention of the extinction learning (reduced freezing). (credit: Li Jiang et al./Neuron)

Imagine if people with dementia could enhance good memories or those with post-traumatic stress disorder could wipe out bad memories. A Stony Brook University research team has now taken a step toward that goal by manipulating one of the brain’s natural mechanisms for signaling involved in emotional memory: a neurotransmitter called acetylcholine.

The region of the brain most involved in emotional memory is thought to be the amygdala. Cholinergic neurons that reside in the base of the brain — the same neurons that appear to be affected early in cognitive decline — stimulate release of acetylcholine by neurons in the amygdala, which strengthens emotional memories.

Because fear is a strong and emotionally charged experience, Lorna Role, PhD, Professor and Chair of the Department of Neurobiology and Behavior, and colleagues used a fear-based memory model in mice to test the underlying mechanism of memory and the specific role of acetylcholine in the amygdala.

A step toward reversing post-traumatic stress disorder

Be afraid. Be very afraid. Optogenetic stimulation with blue light. (credit: Deisseroth Laboratory)

To achieve precise control, the team used optogenetics, a research method that uses light to control the activity of genetically targeted neurons, to stimulate specific populations of cholinergic neurons in the amygdala during the experiments and trigger release of acetylcholine. As noted in previous studies reported on KurzweilAI, shining blue (or green) light on neurons treated with light-sensitive membrane proteins stimulates the neurons, while shining yellow (or red) light inhibits (blocks) them.

So when the researchers used optogenetics with blue light to increase the amount of acetylcholine released in the amygdala during the formation of a traumatic memory, they found it greatly strengthened fear memory, making the memory last more than twice as long as normal.

But when they decreased acetylcholine signaling in the amygdala (using yellow light) during a traumatic experience that would normally produce a fear response, they could actually extinguish (wipe out) the memory.

Role said the long-term goal of their research is to find ways (potentially independent of acetylcholine or drug administration) to enhance the strength of good memories and diminish the bad ones.

Their findings are published in the journal Neuron. The research was supported in part by the National Institutes of Health.


Abstract of Cholinergic Signaling Controls Conditioned Fear Behaviors and Enhances Plasticity of Cortical-Amygdala Circuits

We examined the contribution of endogenous cholinergic signaling to the acquisition and extinction of fear-related memory by optogenetic regulation of cholinergic input to the basal lateral amygdala (BLA). Stimulation of cholinergic terminal fields within the BLA in awake-behaving mice during training in a cued fear-conditioning paradigm slowed the extinction of learned fear as assayed by multi-day retention of extinction learning. Inhibition of cholinergic activity during training reduced the acquisition of learned fear behaviors. Circuit mechanisms underlying the behavioral effects of cholinergic signaling in the BLA were assessed by in vivo and ex vivo electrophysiological recording. Photostimulation of endogenous cholinergic input (1) enhances firing of putative BLA principal neurons through activation of acetylcholine receptors (AChRs), (2) enhances glutamatergic synaptic transmission in the BLA, and (3) induces LTP of cortical-amygdala circuits. These studies support an essential role of cholinergic modulation of BLA circuits in the inscription and retention of fear memories.

Cell-phone-radiation study finds associated brain and heart tumors in rodents

Glioma in rat brain (credit: Samuel Samnick et al./European Journal of Nuclear Medicine)

A series of studies over two years with rodents exposed to radio frequency radiation (RFR) found low incidences of malignant gliomas (tumors of glial support cells) in the brain and schwannoma tumors in the heart.*

The studies were performed under the auspices of the U.S. National Toxicology Program (NTP).

Potentially preneoplastic (pre-cancer) lesions were also observed in the brain and heart of male rats exposed to RFR, with higher confidence in the association with neoplastic lesions in the heart than the brain.

No biologically significant effects were observed in the brain or heart of female rats regardless of type of radiation.

The NTP notes that the open-access report is a preview and has not been peer-reviewed.**

In 2011, the WHO/International Agency for Research on Cancer (IARC) classified RFR as possibly carcinogenic to humans, also based on increased risk for glioma.

* The rodents were subjected to whole-body exposure to the two types of RFR modulation currently used in U.S. wireless networks — CDMA and GSM — at frequencies of 900 MHz for rats and 1900 MHz for mice, with a total exposure time of approximately 9 hours a day (spread over the course of the day), 7 days/week. The glioma lesions occurred in 2 to 3 percent of the rats and the schwannomas occurred in 1 to 6 percent of the rats.

** The NTP says further details will be published in the peer-reviewed literature later in 2016. The reports are “limited to select findings of concern in the brain and heart and do not represent a complete reporting of all findings from these studies of cell phone RFR,” which will be “reported together with the current findings in two forthcoming NTP peer-reviewed reports, to be available for peer review and public comment by the end of 2017.”


Abstract of Report of Partial Findings from the National Toxicology Program Carcinogenesis Studies of Cell Phone Radiofrequency Radiation

The U.S. National Toxicology Program (NTP) has carried out extensive rodent toxicology and carcinogenesis studies of radiofrequency radiation (RFR) at frequencies and modulations used in the US telecommunications industry. This report presents partial findings from these studies. The occurrences of two tumor types in male Harlan Sprague Dawley rats exposed to RFR, malignant gliomas in the brain and schwannomas of the heart, were considered of particular interest, and are the subject of this report. The findings in this report were reviewed by expert peer reviewers selected by the NTP and National Institutes of Health (NIH). These reviews and responses to comments are included as appendices to this report, and revisions to the current document have incorporated and addressed these comments. Supplemental information in the form of 4 additional manuscripts has or will soon be submitted for publication. These manuscripts describe in detail the designs and performance of the RFR exposure system, the dosimetry of RFR exposures in rats and mice, the results of a series of pilot studies establishing the ability of the animals to thermoregulate during RFR exposures, and studies of DNA damage:

  • Capstick M, Kuster N, Kühn S, Berdinas-Torres V, Wilson P, Ladbury J, Koepke G, McCormick D, Gauger J, Melnick R. A radio frequency radiation reverberation chamber exposure system for rodents.
  • Yijian G, Capstick M, McCormick D, Gauger J, Horn T, Wilson P, Melnick RL, Kuster N. Life time dosimetric assessment for mice and rats exposed to cell phone radiation.
  • Wyde ME, Horn TL, Capstick M, Ladbury J, Koepke G, Wilson P, Stout MD, Kuster N, Melnick R, Bucher JR, McCormick D. Pilot studies of the National Toxicology Program’s cell phone radiofrequency radiation reverberation chamber exposure system.
  • Smith-Roe SL, Wyde ME, Stout MD, Winters J, Hobbs CA, Shepard KG, Green A, Kissling GE, Tice RR, Bucher JR, Witt KL. Evaluation of the genotoxicity of cell phone radiofrequency radiation in male and female rats and mice following subchronic exposure.

Rapid eye movement sleep (dreaming) shown necessary for memory formation

Inhibition of medial septum GABA neurons during rapid eye movement (REM) sleep reduces theta rhythm (a characteristic of REM sleep). Schematic of the in vivo recording configuration: an optic fiber delivered orange laser light to the medial septum, allowing for optogenetic inhibition of medial septum GABA neurons while recording the local field potential signal from electrodes implanted in hippocampus area CA1. (credit: Richard Boyce et al./Science)

A study published in the journal Science by researchers at the Douglas Mental Health University Institute at McGill University and the University of Bern provides the first evidence that rapid eye movement (REM) sleep — the phase where dreams appear — is directly involved in memory formation (at least in mice).

“We already knew that newly acquired information is stored into different types of memories, spatial or emotional, before being consolidated or integrated,” says Sylvain Williams, a researcher and professor of psychiatry at McGill*. “How the brain performs this process has remained unclear until now. We were able to prove for the first time that REM sleep (dreaming) is indeed critical for normal spatial memory formation in mice,” said Williams.

Dream quest

Hundreds of previous studies have tried unsuccessfully to isolate neural activity during REM sleep using traditional experimental methods. In this new study, the researchers instead used optogenetics, which enables scientists to precisely target a population of neurons and control its activity by light.

“We chose to target [GABA neurons in the medial septum] that regulate the activity of the hippocampus, a structure that is critical for memory formation during wakefulness and is known as the ‘GPS system’ of the brain,” Williams says.

To test the long-term spatial memory of mice, the scientists trained the rodents to spot a new object placed in a controlled environment where two objects of similar shape and volume stand. Mice spontaneously spend more time exploring a novel object than a familiar one, a sign of learning and recall.

Shining orange laser light on medial septum (MS) GABA neurons during REM sleep reduces the frequency and power (purple section) of neuron signals in the dorsal CA1 area of the hippocampus (credit: Richard Boyce et al./Science)

When these mice were in REM sleep, however, the researchers used light pulses to turn off their memory-associated neurons to determine whether doing so affected memory consolidation. The next day, the same rodents failed the spatial memory task learned the previous day. Compared to the control group, their memory seemed erased, or at least impaired.

“Silencing the same neurons for similar durations outside of REM episodes had no effect on memory. This indicates that neuronal activity specifically during REM sleep is required for normal memory consolidation,” says the study’s lead author, Richard Boyce, a PhD student.

Implications for brain disease

REM sleep is understood to be a critical component of sleep in all mammals, including humans. Poor sleep quality is increasingly associated with the onset of various brain disorders such as Alzheimer’s and Parkinson’s disease.

In particular, REM sleep is often significantly perturbed in Alzheimer’s disease (AD), and results from this study suggest that disruption of REM sleep may contribute directly to memory impairments observed in AD, the researchers say.

This work was partly funded by the Canadian Institutes of Health Research (CIHR), the Natural Science and Engineering Research Council of Canada (NSERC), a postdoctoral fellowship from Fonds de la recherche en Santé du Québec (FRSQ) and an Alexander Graham Bell Canada Graduate scholarship (NSERC).

* Williams’ team is also part of the CIUSSS de l’Ouest-de-l’Île-de-Montréal research network. Williams co-authored the study with Antoine Adamantidis, a researcher at the University of Bern’s Department of Clinical Research and at the Sleep Wake Epilepsy Center of the Bern University Hospital.


Abstract of Causal evidence for the role of REM sleep theta rhythm in contextual memory consolidation

Rapid eye movement sleep (REMS) has been linked with spatial and emotional memory consolidation. However, establishing direct causality between neural activity during REMS and memory consolidation has proven difficult because of the transient nature of REMS and significant caveats associated with REMS deprivation techniques. In mice, we optogenetically silenced medial septum γ-aminobutyric acid–releasing (MSGABA) neurons, allowing for temporally precise attenuation of the memory-associated theta rhythm during REMS without disturbing sleeping behavior. REMS-specific optogenetic silencing of MSGABA neurons selectively during a REMS critical window after learning erased subsequent novel object place recognition and impaired fear-conditioned contextual memory. Silencing MSGABA neurons for similar durations outside REMS episodes had no effect on memory. These results demonstrate that MSGABA neuronal activity specifically during REMS is required for normal memory consolidation.

Your smartphone and tablet may be making you ADHD-like

(credit: KurzweilAI)

Smartphones and other digital technology may be causing ADHD-like symptoms, according to an open-access study published in the proceedings of ACM CHI ’16, the Human-Computer Interaction conference of the Association for Computing Machinery, ongoing in San Jose.

In a two-week experimental study, University of Virginia and University of British Columbia researchers showed that when students kept their phones on ring or vibrate and with notification alerts on, they reported more symptoms of inattention and hyperactivity than when they kept their phones on silent.

The results suggest that even people who have not been diagnosed with ADHD may experience some of the disorder’s symptoms, including distraction, fidgeting, having trouble sitting still, difficulty doing quiet tasks and activities, restlessness, and difficulty focusing and getting bored easily when trying to focus, the researchers said.

“We found the first experimental evidence that smartphone interruptions can cause greater inattention and hyperactivity — symptoms of attention deficit hyperactivity disorder — even in people drawn from a nonclinical population,” said Kostadin Kushlev, a psychology research scientist at the University of Virginia who led the study with colleagues at the University of British Columbia.

In the study, 221 students at the University of British Columbia, drawn from the general student population, were assigned for one week to maximize phone interruptions by keeping notification alerts on and their phones within easy reach.

Indirect effects of manipulating smartphone interruptions on psychological well-being via inattention symptoms. Numbers are unstandardized regression coefficients. (credit: Kostadin Kushlev et al./CHI 2016)

During another week participants were assigned to minimize phone interruptions by keeping alerts off and their phones away.

At the end of each week, participants completed questionnaires assessing inattention and hyperactivity. Unsurprisingly, the results showed that the participants experienced significantly higher levels of inattention and hyperactivity when alerts were turned on.
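
The mediation model in the figure (interruptions increase inattention, which in turn lowers well-being) can be sketched with two ordinary-least-squares regressions and the product of the two paths. The snippet below simulates illustrative data; the effect sizes are assumptions for demonstration, not the study's unstandardized coefficients.

```python
# Sketch of a two-step (Baron-Kenny style) mediation estimate on simulated data.
import numpy as np

rng = np.random.default_rng(1)
n = 221                                                # sample size from the study
alerts_on = rng.integers(0, 2, n).astype(float)        # 0 = alerts-off week, 1 = alerts-on week
inattention = 0.8 * alerts_on + rng.normal(0, 1, n)    # path a: assumed effect of alerts
well_being = -0.5 * inattention + rng.normal(0, 1, n)  # path b: assumed effect of inattention

def ols(y, predictors):
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(inattention, [alerts_on])[1]               # alerts -> inattention
b = ols(well_being, [alerts_on, inattention])[2]   # inattention -> well-being, controlling for alerts
print(f"path a = {a:+.3f}, path b = {b:+.3f}, indirect effect a*b = {a * b:+.3f}")
```

A negative indirect effect here mirrors the figure's reading: the alerts manipulation lowers well-being via increased inattention.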

Digital mobile users focus more on concrete details than the big picture

Using digital platforms such as tablets and laptops for reading may also make you more inclined to focus on concrete details rather than interpreting information more contemplatively or abstractly (seeing the big picture), according to another open-access study published in ACM CHI ’16 proceedings.

Researchers at Dartmouth’s Tiltfactor lab and the Human-Computer Interaction Institute at Carnegie Mellon University conducted four studies with a total of 300 participants. Participants were tested by reading a short story and a table of information about fictitious Japanese car models.

The studies revealed that individuals who completed the same information processing task on a digital mobile device (a tablet or laptop computer) versus a non-digital platform (a physical printout) exhibited a lower level of “construal” (abstract) thinking. However, the researchers also found that engaging the subjects in a more abstract mindset prior to an information processing task on a digital platform appeared to facilitate better performance on tasks that require abstract thinking.

Coping with digital overload

Given the widespread acceptance of digital devices, as evidenced by millions of apps, ubiquitous smartphones, and the distribution of iPads in schools, surprisingly few studies exist about how digital tools affect us, the researchers noted.

“The ever-increasing demands of multitasking, divided attention, and information overload that individuals encounter in their use of digital technologies may cause them to ‘retreat’ to the less cognitively demanding lower end of the concrete-abstract continuum,” according to the authors. They also say the new research suggests that “this tendency may be so well-ingrained that it generalizes to contexts in which those resource demands are not immediately present.”

Their recommendation for human-computer interaction designers and researchers: “Consider strategies for encouraging users to see the ‘forest’ as well as the ‘trees’ when interacting with digital platforms.”

Jony Ive, are you listening?


Abstract of “Silence your phones”: Smartphone notifications increase inattention and hyperactivity symptoms

As smartphones increasingly pervade our daily lives, people are ever more interrupted by alerts and notifications. Using both correlational and experimental methods, we explored whether such interruptions might be causing inattention and hyperactivity, symptoms associated with Attention Deficit Hyperactivity Disorder (ADHD), even in people not clinically diagnosed with ADHD. We recruited a sample of 221 participants from the general population. For one week, participants were assigned to maximize phone interruptions by keeping notification alerts on and their phones within their reach/sight. During another week, participants were assigned to minimize phone interruptions by keeping alerts off and their phones away. Participants reported higher levels of inattention and hyperactivity when alerts were on than when alerts were off. Higher levels of inattention in turn predicted lower productivity and psychological well-being. These findings highlight some of the costs of ubiquitous connectivity and suggest how people can reduce these costs simply by adjusting existing phone settings.


Abstract of High-Low Split: Divergent Cognitive Construal Levels Triggered by Digital and Non-digital Platforms

The present research investigated whether digital and non-digital platforms activate differing default levels of cognitive construal. Two initial randomized experiments revealed that individuals who completed the same information processing task on a digital mobile device (a tablet or laptop computer) versus a non-digital platform (a physical print-out) exhibited a lower level of construal, one prioritizing immediate, concrete details over abstract, decontextualized interpretations. This pattern emerged both in digital platform participants’ greater preference for concrete versus abstract descriptions of behaviors as well as superior performance on detail-focused items (and inferior performance on inference-focused items) on a reading comprehension assessment. A pair of final studies found that the likelihood of correctly solving a problem-solving task requiring higher-level “gist” processing was: (1) higher for participants who processed the information for the task on a non-digital versus digital platform and (2) heightened for digital platform participants who had first completed an activity activating an abstract mindset, compared to (equivalent) performance levels exhibited by participants who had either completed no prior activity or completed an activity activating a concrete mindset.

(credit: KurzweilAI/Apple)

Electronic devices that melt in your brain

Illustration of the construction of a bioresorbable neural electrode array for ECoG and subdermal EEG measurements. A photolithographically patterned, n-doped silicon nanomembrane (300 nm thick) is used for electrodes and interconnects. A 100-nm-thick film of silicon dioxide and a foil of PLGA (~30 μm thick) serve as a bioresorbable encapsulating layer and substrate, respectively. The device connects to an external data acquisition system through a conductive film interfaced to the silicon-nanomembrane interconnects at contact pads at the edge. (credit: Ki Jun Yu et al./Nature Materials)

Two implantable devices developed by American and Chinese researchers are designed to dissolve in the brain over time and may eliminate several current problems with implants.

University of Pennsylvania researchers have developed an electrode and an electrode array, both made of layers of silicon and molybdenum, that can measure physiological characteristics (like neuron signals) and dissolve at a known rate (determined by the material’s thickness). The team used the device in anesthetized rats to record brain waves (EEGs) and induced epileptic spikes in intact live tissue.

In another experiment, they showed the dissolvable electronics could be used in a complex, multiplexed ECoG (intracranial electroencephalography) array over a 30-day period.
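
A rough way to read the claim that dissolution timing is set by layer thickness: with an approximately constant dissolution rate, lifetime scales linearly with thickness. The rates in the sketch below are placeholder assumptions for illustration, not measured values; actual rates depend on doping, temperature, and fluid chemistry.

```python
# Back-of-the-envelope lifetime estimate: lifetime = thickness / dissolution rate.
# Both rates below are illustrative assumptions, not data from the study.
layers = {
    "Si electrode layer (300 nm)": (300.0, 10.0),   # (thickness in nm, assumed nm/day)
    "SiO2 encapsulation (100 nm)": (100.0, 3.0),
}
for name, (thickness_nm, rate_nm_per_day) in layers.items():
    print(f"{name}: ~{thickness_nm / rate_nm_per_day:.0f} days "
          f"at an assumed {rate_nm_per_day:g} nm/day")
```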

Cartoon illustration of a four-channel bioresorbable electrode array implanted on the left hemisphere of the brain of a rat for chronic recordings. A flexible cable connects the array to a custom-built circular interface board fixed to the skull using dental cement. (credit: Ki Jun Yu et al./Nature Materials)

As the researchers note online in Nature Materials, this new technology offers equal or greater resolution for measuring the brain’s electrical activity, compared to conventional electrodes, while eliminating “the risks, cost, and discomfort associated with surgery to extract current devices used for post-operative monitoring,” according to senior co-author Brian Litt, MD, a professor of Neurology, Neurosurgery, and Bioengineering at the Perelman School of Medicine.

Other potential uses of the dissolvable electronics include:

  • Disorders such as epilepsy, Parkinson’s disease, depression, chronic pain, and conditions of the peripheral nervous system. “These measurements are critically important for mapping and monitoring brain function during and in preparation for neurosurgery, for assisting in device placement, such as for Parkinson’s disease, and for guiding surgical procedures on complex, interconnected nerve structures,” Litt said.
  • Post-operative monitoring and recording of physiological characteristics after minimally invasive placement of vascular, cardiac, orthopaedic, neural or other devices. At present, post-operative monitoring is based on clinical examination or interventional radiology, which is invasive, expensive, and impractical for continuous monitoring over days to months.
  • Heart and brain surgery for applications such as aneurysm coiling, stent placement, embolization, and endoscopic operations. These new devices could also monitor structures that are exposed during surgery, but are too delicate to disturb later by removing devices.
  • More complex devices that also include flow, pressure, and other measurement capabilities.

The research was funded by the Defense Advanced Research Projects Agency, the Penn Medicine Neuroscience Center, a T32- Brain Injury Research Training Grant, the Mirowski Family Foundation, and by Neil and Barbara Smit.

A bioresorbable memristor

A 3D schematic drawing of cross-bar memristors on a silicon wafer made with dissolvable materials. (credit: Xingli He et al./Applied Materials & Interfaces)

In related research, Chinese researchers have developed a memristor (memory resistor) with biodissolvable components, using a 30 nm film of egg albumin protein on a silicon film substrate and electrodes made out of magnesium and tungsten.*

Testing showed that the device’s bipolar switching performance was comparable to oxide-based memristors, with a high-to-low resistance ratio in the range of 100 to 10,000. The device can store information for more than 10,000 seconds without any deterioration, showing its high stability and reliability.
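
Bipolar resistive switching of this kind can be pictured as a two-state model: a SET voltage of one polarity forms the conductive filament (low-resistance state), and a RESET voltage of the opposite polarity ruptures it (high-resistance state). The threshold voltages and resistance values below are illustrative assumptions chosen so the high/low ratio lands in the reported 100-to-10,000 range; they are not measured parameters of the albumen device.

```python
# Toy two-state model of a bipolar memristor (illustrative values only).
class BipolarMemristor:
    def __init__(self, r_high=1e7, r_low=1e3, v_set=2.0, v_reset=-1.5):
        self.r_high, self.r_low = r_high, r_low
        self.v_set, self.v_reset = v_set, v_reset
        self.resistance = r_high               # start in the high-resistance state

    def apply(self, voltage):
        if voltage >= self.v_set:
            self.resistance = self.r_low       # filament forms: device switches on
        elif voltage <= self.v_reset:
            self.resistance = self.r_high      # filament ruptures: device switches off
        return self.resistance                 # small voltages just read the state

m = BipolarMemristor()
for v in (0.5, 2.5, 0.5, -2.0, 0.5):           # read, SET, read, RESET, read
    print(f"V = {v:+.1f} V -> R = {m.apply(v):.0e} ohm")
print(f"high/low resistance ratio: {m.r_high / m.r_low:.0f}")
```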

Under dry conditions in the lab, the components worked reliably for more than three months. In water, the electrodes and albumin dissolved in two to 10 hours in the lab. The rest of the chip took about three days to break down, leaving minimal residues behind, the researchers report in the journal ACS Applied Materials & Interfaces.

The research was funded by the National Natural Science Foundation of China and the Research Fund for the Doctoral Program of Higher Education of China.

* Memristors may have future applications in nanoelectronic memories, computer logic, and neuromorphic/neuromemristive computer architectures. As shown in the following illustration, redox (oxidation and reduction) of iron molecules in albumen are primarily responsible for this memristor’s switching behavior. However, both Mg and W electrodes can dissolve in water easily and were shown to diffuse into the albumen film, where they also contribute to the formation of conductive filaments through redox reactions.

Schematic of the four switching processes for an albumen-based memristor, showing the initial state of a memristor with Mg and W as the top and bottom electrode, respectively. (i) The colored spheres represent different ions. (ii) When a positive voltage is applied to the top electrode, ions move along the electric field and accumulate locally in strong-field regions in the albumen layer; meanwhile, injected electrons from the bottom electrode reduce metallic ions such as Fe³⁺ and Mg²⁺ to metal elements. (iii) At a specific voltage, the filaments are formed to connect the top and bottom electrodes electrically, and the device is turned on (the low-resistance state). (iv) When applying a reset voltage, the conductive filaments are broken due to the oxidation of the metal elements by the injected electrons from the top electrode, the filaments are ruptured near the top electrode, and the device returns to the high-resistance state. (credit: Xingli He et al./Applied Materials & Interfaces)


Abstract of Bioresorbable silicon electronics for transient spatiotemporal mapping of electrical activity from the cerebral cortex

Bioresorbable silicon electronics technology offers unprecedented opportunities to deploy advanced implantable monitoring systems that eliminate risks, cost and discomfort associated with surgical extraction. Applications include postoperative monitoring and transient physiologic recording after percutaneous or minimally invasive placement of vascular, cardiac, orthopaedic, neural or other devices. We present an embodiment of these materials in both passive and actively addressed arrays of bioresorbable silicon electrodes with multiplexing capabilities, which record in vivo electrophysiological signals from the cortical surface and the subgaleal space. The devices detect normal physiologic and epileptiform activity, both in acute and chronic recordings. Comparative studies show sensor performance comparable to standard clinical systems and reduced tissue reactivity relative to conventional clinical electrocorticography (ECoG) electrodes. This technology offers general applicability in neural interfaces, with additional potential utility in treatment of disorders where transient monitoring and modulation of physiologic function, implant integrity and tissue recovery or regeneration are required.


Abstract of Transient Resistive Switching Devices Made from Egg Albumen Dielectrics and Dissolvable Electrodes

Egg albumen as the dielectric, and dissolvable Mg and W as the top and bottom electrodes, are used to fabricate water-soluble memristors. 4 × 4 cross-bar configuration memristor devices show a bipolar resistive switching behavior with a high to low resistance ratio in the range of 1 × 10² to 1 × 10⁴, higher than most other biomaterial-based memristors, and a retention time over 10⁴ s without any sign of deterioration, demonstrating its high stability and reliability. Metal filaments accompanied by hopping conduction are believed to be responsible for the switching behavior of the memory devices. The Mg and W electrodes, and albumen film all can be dissolved in water within 72 h, showing their transient characteristics. This work demonstrates a new way to fabricate biocompatible and dissolvable electronic devices by using cheap, abundant, and 100% natural materials for the forthcoming bioelectronics era as well as for environmental sensors when the Internet of things takes off.

More evidence that you’re a mindless robot with no free will

(credit: iStock)

The results of two Yale University psychology experiments suggest that what we believe to be a conscious choice may actually be constructed, or confabulated, unconsciously after we act — to rationalize our decisions. A trick of the mind.

“Our minds may be rewriting history,” said Adam Bear, a Ph.D. student in the Department of Psychology and lead author of a paper published April 28 in the journal Psychological Science.

Tricks of the mind

A model of “postdictive” choice. Although choice of a circle is not actually completed until after a circle has turned red, the choice may seem to have occurred before that event because the participant has not yet become conscious of the circle’s turning red. The circle’s turning red can therefore unconsciously bias a participant’s choice when the delay is sufficiently short. (credit: Adam Bear and Paul Bloom/Psychological Science)

Bear and Paul Bloom performed two simple experiments to test how we experience choices. In one experiment, participants were told that five white circles would appear on the computer screen in front of them and, in rapid-fire sequence, one would turn red. They were asked to predict which one would turn red and mentally note this. After a circle turned red, participants then recorded by keystroke whether they had chosen correctly, had chosen incorrectly, or had not had time to complete their choice.

The circle that turned red was always selected by the system randomly, so probability dictates that participants should predict the correct circle 20% of the time. But when they only had a fraction of a second to make a prediction, these participants were likely to report that they correctly predicted which circle would change color more than 20% of the time.

In contrast, when participants had more time to make their guess — approaching a full second — the reported number of accurate predictions dropped back to expected levels of 20% success, suggesting that participants were not simply lying about their accuracy to impress the experimenters.
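
That delay-dependence is easy to reproduce in a toy Monte Carlo simulation: if the red circle can bias a still-unfinished choice for a short window after it appears, reported accuracy exceeds the 20% chance rate at short delays and returns to chance at longer ones. The window length and bias strength below are illustrative assumptions, not values fitted to Bear and Bloom's data.

```python
# Monte Carlo sketch of a postdictive bias on a 5-circle guessing task.
import random

N_CIRCLES, TRIALS = 5, 100_000

def reported_accuracy(delay_ms, bias_window_ms=250, bias_strength=0.5):
    hits = 0
    for _ in range(TRIALS):
        red = random.randrange(N_CIRCLES)            # system picks the red circle at random
        if delay_ms < bias_window_ms and random.random() < bias_strength:
            choice = red                             # choice still unfinished: percept leaks in
        else:
            choice = random.randrange(N_CIRCLES)     # genuine independent guess
        hits += (choice == red)
    return hits / TRIALS

for delay in (100, 200, 400, 800):
    print(f"delay {delay:>3} ms: reported accuracy {reported_accuracy(delay):.1%}")
# Delays inside the assumed bias window land well above 20%; longer delays return to chance.
```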

(In a second experiment to eliminate artifacts, participants chose one of two different-colored circles, with similar results.)

Confabulating reality

What happened, Bear suggests, is that events were rearranged in subjects’ minds: people unconsciously perceived the circle turn red before completing their prediction, but immediately afterward consciously experienced the two events in the opposite order.

Bear said it is unknown whether this “postdictive” illusion is caused by a quirk in perceptual processing that can only be reproduced in the lab, or whether it might have “far more pervasive effects on our everyday lives and sense of free will.”

Previous research at Charité–Universitätsmedizin Berlin suggests the latter, and includes volition. That research involved a “duel” game between a human and a brain-computer interface (see Do we have free will?). It showed that there‘s a “point of no return” in the decision-making process (at about 200 milliseconds before actual movement onset), after which cancellation of a person’s movement is no longer possible.


Abstract of A Simple Task Uncovers a Postdictive Illusion of Choice

Do people know when, or whether, they have made a conscious choice? Here, we explore the possibility that choices can seem to occur before they are actually made. In two studies, participants were asked to quickly choose from a set of options before a randomly selected option was made salient. Even when they believed that they had made their decision prior to this event, participants were significantly more likely than chance to report choosing the salient option when this option was made salient soon after the perceived time of choice. Thus, without participants’ awareness, a seemingly later event influenced choices that were experienced as occurring at an earlier time. These findings suggest that, like certain low-level perceptual experiences, the experience of choice is susceptible to “postdictive” influence and that people may systematically overestimate the role that consciousness plays in their chosen behavior.

Deep neural networks that identify shapes nearly as well as humans

Self-driving car vs. pedestrians (credit: Google)

Deep neural networks (DNNs) are capable of learning to identify shapes, so “we’re on the right track in developing machines with a visual system and vocabulary as flexible and versatile as ours,” say KU Leuven researchers.

“For the first time, a dramatic increase in performance has been observed on object and scene categorization tasks, quickly reaching performance levels rivaling humans,” they note in an open-access paper in PLOS Computational Biology.

Categorization accuracy for models created by three DNNs (CaffeNet, VGG-19, and GoogLeNet) for three types of images (color, grayscaled, silhouette). For each type, mean human performance is indicated by a gray horizontal line, with the gray surrounding band depicting 95% confidence intervals. Error bars (vertical black lines) depict 95% confidence intervals. (credit: J. Kubilius et al./PLoS Comput Biol)

The researchers found that when trained for generic object recognition from natural photographs, several different DNNs developed visual representations that relate closely to human perceptual shape judgments, even though they were never explicitly trained for shape processing.
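
As a rough illustration of the color/grayscale/silhouette comparison in the figure, the sketch below runs three renderings of one image through a pretrained VGG-19 from torchvision and prints the top-1 ImageNet label for each. The image path and the silhouette threshold are hypothetical, the weights argument assumes a recent torchvision release, and the study's stimuli and analyses were considerably more involved than this.

```python
# Sketch: classify color, grayscale, and silhouette renderings with VGG-19.
import torch
from PIL import Image, ImageOps
from torchvision import models, transforms

model = models.vgg19(weights="IMAGENET1K_V1").eval()          # pretrained ImageNet weights
labels = models.VGG19_Weights.IMAGENET1K_V1.meta["categories"]

prep = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def top1(img):
    with torch.no_grad():
        logits = model(prep(img.convert("RGB")).unsqueeze(0))
    return labels[logits.argmax().item()]

img = Image.open("object_photo.jpg")                          # hypothetical stimulus image
gray = ImageOps.grayscale(img)                                # grayscaled rendering
silhouette = gray.point(lambda p: 0 if p < 200 else 255)      # crude black-on-white shape

for name, variant in (("color", img), ("grayscale", gray), ("silhouette", silhouette)):
    print(f"{name}: {top1(variant)}")
```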

However, “We’re not there just yet,” say the researchers. “Even if machines will at some point be equipped with a visual system as powerful as ours, self-driving cars would still make occasional mistakes — although, unlike human drivers, they wouldn’t be distracted because they’re tired or busy texting. However, even in those rare instances when self-driving cars would err, their decisions would be at least as reasonable as ours.”


Abstract of Deep Neural Networks as a Computational Model for Human Shape Sensitivity

Theories of object recognition agree that shape is of primordial importance, but there is no consensus about how shape might be represented, and so far attempts to implement a model of shape perception that would work with realistic stimuli have largely failed. Recent studies suggest that state-of-the-art convolutional ‘deep’ neural networks (DNNs) capture important aspects of human object perception. We hypothesized that these successes might be partially related to a human-like representation of object shape. Here we demonstrate that sensitivity for shape features, characteristic to human and primate vision, emerges in DNNs when trained for generic object recognition from natural photographs. We show that these models explain human shape judgments for several benchmark behavioral and neural stimulus sets on which earlier models mostly failed. In particular, although never explicitly trained for such stimuli, DNNs develop acute sensitivity to minute variations in shape and to non-accidental properties that have long been implicated to form the basis for object recognition. Even more strikingly, when tested with a challenging stimulus set in which shape and category membership are dissociated, the most complex model architectures capture human shape sensitivity as well as some aspects of the category structure that emerges from human judgments. As a whole, these results indicate that convolutional neural networks not only learn physically correct representations of object categories but also develop perceptually accurate representational spaces of shapes. An even more complete model of human object representations might be in sight by training deep architectures for multiple tasks, which is so characteristic in human development.