Graphene is an ideal substrate for brain electrodes, researchers find

This illustration portrays neurons interfaced with a sheet of graphene molecules in the background (credit: Graphene Flagship)

An international study headed by the European Graphene Flagship research consortium has found that graphene is a promising material for use in electrodes that interface with neurons, based on its excellent conductivity, flexibility for molding into complex shapes, biocompatibility, and stability within the body.

The graphene-based substrates they studied* promise to overcome problems with “glial scar” tissue formation (caused by electrode-induced brain trauma and long-term inflammation). To avoid such scarring, current tungsten- or silicon-based electrodes use a protective coating, which reduces charge transfer. Current electrodes are also rigid (causing tissue detachment and preventing neurons from moving) and generate electrical noise, with partial or complete loss of signal over time, the researchers note in a paper published recently in the journal ACS Nano.

Electrodes are used as neural biosensors and for prosthetic applications — such as deep-brain intracranial electrodes used to control motor disorders (mainly epilepsy or Parkinson’s) and brain-computer interfaces (BCIs), used to recover sensory functions or control robotic arms for paralyzed patients. These applications require a long-term, minimally interfering interface.

Interfacing graphene to neurons directly

Scanning electron microscope image of rat hippocampal neurons grown in the lab on a graphene-based substrate, showing normal morphology characterized by well-defined round neural soma, extended neurite arborization (branching), and cell density similar to control substrates (credit: A. Fabbro et al./ACS Nano)

“For the first time, we interfaced graphene to neurons directly, without any peptide-coating,” explained lead neuroscientist Prof. Laura Ballerini of the International School for Advanced Studies (SISSA/ISAS) and the University of Trieste.

Using electron microscopy and immunofluorescence, the researchers found that the neurons remained healthy and transmitted normal electrical impulses; importantly, no adverse glial reaction, which leads to damaging scar tissue, was seen.

Atomic force microscope (AFM) image of graphene-based substrate created using liquid phase exfoliation (credit: A. Fabbro et al./ACS Nano)

As a next step, Ballerini says the team plans to investigate how different forms of graphene, from multiple layers to monolayers, affect neurons, and “whether tuning the graphene material properties might alter the synapses and neuronal excitability in new and unique ways.”

Prof. Andrea C. Ferrari, Director of the Cambridge Graphene Centre and Chair of the Graphene Flagship Executive Board, said the Flagship will “support biomedical research and development based on graphene technology with a new work package and a significant cash investment from 2016.”

The interdisciplinary collaboration also included the University Castilla-La Mancha and the Cambridge Graphene Centre.

* The study used two methods of creating graphene-based substrates (GBSs). Liquid phase exfoliation (LPE) — peeling off graphene from graphite — can be performed without the potentially hazardous chemical treatments involved in graphene oxide production, is scalable, and operates at room temperature, with high yield. LPE dispersions can also be easily deposited on target substrates by drop-casting, filtration, or printing. Ball milling (BM), with the help of melamine (which forms large hydrogen-bond domains, unlike LPE), can be performed in a solid environment. “Our data indicate that both GBSs are promising for next-generation bioelectronic systems, to be used as brain interfaces,” the paper concludes.

Scientists decode brain signals to recognize images in real time

Using electrodes implanted in the temporal lobes of seven awake epilepsy patients, University of Washington scientists have decoded brain signals (representing images) at nearly the speed of perception for the first time* — enabling the scientists to predict in real time which images of faces and houses the patients were viewing, and when, with better than 95 percent accuracy.

Multi-electrode placements on the temporal lobe surface (credit: K.J. Miller et al./PLoS Comput Biol)

The research, published Jan. 28 in open-access PLOS Computational Biology, may lead to an effective way to help locked-in patients (who are paralyzed or have had a stroke) communicate, the scientists suggest.

Predicting what someone is seeing in real time

“We were trying to understand, first, how the human brain perceives objects in the temporal lobe, and second, how one could use a computer to extract and predict what someone is seeing in real time,” explained University of Washington computational neuroscientist Rajesh Rao. He is a UW professor of computer science and engineering and directs the National Science Foundation’s Center for Sensorimotor Neural Engineering, headquartered at UW.

The study involved patients receiving care at Harborview Medical Center in Seattle. Each had been experiencing epileptic seizures not relieved by medication, so each had undergone surgery in which their brains’ temporal lobes were implanted (temporarily, for about a week) with electrodes to try to locate the seizures’ focal points.

Temporal lobes process sensory input and are a common site of epileptic seizures. Situated behind mammals’ eyes and ears, the lobes are also involved in Alzheimer’s and dementias and appear somewhat more vulnerable than other brain structures to head traumas, said UW Medicine neurosurgeon Jeff Ojemann.

Recording digital signatures of images in real time

In the experiment, signals from electrocorticographic (ECoG) electrodes at multiple temporal-lobe locations were processed by powerful computational software that extracted two characteristic properties of the brain signals: “event-related potentials” (voltages from hundreds of thousands of neurons activated by an image) and “broadband spectral changes” (power measurements across a wide range of frequencies).

Averaged broadband power at two multi-electrode locations (1 and 4) following presentation of different images; note that responses to people are stronger than to houses. (credit: K.J. Miller et al./PLoS Comput Biol)

Target image (credit: K.J. Miller et al./PLoS Comput Biol)

The subjects, watching a computer monitor, were shown a random sequence of pictures: brief (400-millisecond) flashes of images of human faces and houses, interspersed with blank gray screens. Their task was to watch for an image of an upside-down house and verbally report this target, which appeared once during each of three runs of 300 stimuli. Patients identified the target with an error rate of less than 3 percent across all 21 experimental runs.

The computational software sampled and digitized the brain signals 1,000 times per second to extract their characteristics. The software also analyzed the data to determine which combination of electrode locations and signal types correlated best with what each subject actually saw.
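
For readers who want a concrete picture of those two signal features, here is a minimal sketch of how they might be computed from a 1,000-samples-per-second ECoG trace with NumPy/SciPy. This is our illustration, not the study’s actual code; the window length and frequency band are assumptions chosen for readability.

```python
# Illustrative sketch only -- not the study's pipeline. Assumes `trace` is a
# single-channel ECoG recording sampled at 1,000 Hz and `onset` is the
# sample index of a stimulus presentation.
import numpy as np
from scipy.signal import welch

FS = 1000  # samples per second, as described in the article

def erp_feature(trace, onset, win_ms=400):
    """Event-related potential: the raw voltage deflection after stimulus
    onset (in practice averaged over many trials)."""
    win = int(win_ms * FS / 1000)
    return trace[onset:onset + win]

def broadband_feature(trace, onset, win_ms=400, band=(50, 150)):
    """Broadband spectral change: total power across a wide frequency range
    in the post-stimulus window (band limits here are illustrative)."""
    win = int(win_ms * FS / 1000)
    freqs, psd = welch(trace[onset:onset + win], fs=FS, nperseg=win)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.log(psd[mask].sum())
```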

By training an algorithm on the subjects’ responses to the (known) first two-thirds of the images, the researchers could examine the brain signals representing the final third of the images, whose labels were unknown to them, and predict, with 96 percent accuracy, whether the subjects were seeing a house, a face, or a gray screen, and when, with only ~20 milliseconds of timing error.

This accuracy was attained only when event-related potentials and broadband changes were combined for prediction, which suggests they carry complementary information.
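
A sketch of the corresponding decoding step, under the same caveat that this is our simplified stand-in (a plain logistic-regression classifier) rather than the authors’ template-projection method: train on the labeled first two-thirds of trials, concatenate both feature types, and score on the final third.

```python
# Simplified decoding sketch; X_erp and X_bb are (n_trials, n_features)
# arrays of ERP and broadband features, y holds the trial labels
# (e.g., 0 = gray screen, 1 = house, 2 = face), in presentation order.
import numpy as np
from sklearn.linear_model import LogisticRegression

def decode_accuracy(X_erp, X_bb, y):
    X = np.hstack([X_erp, X_bb])   # combine the two complementary feature sets
    n_train = int(len(y) * 2 / 3)  # first two-thirds of trials for training
    clf = LogisticRegression(max_iter=1000).fit(X[:n_train], y[:n_train])
    return clf.score(X[n_train:], y[n_train:])  # accuracy on the held-out third
```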

Steppingstone to real-time brain mapping

“Traditionally scientists have looked at single neurons,” Rao said. “Our study gives a more global picture, at the level of very large networks of neurons, of how a person who is awake and paying attention perceives a complex visual object.”

The scientists’ technique, he said, is a steppingstone for brain mapping, in that it could be used to identify in real time which locations of the brain are sensitive to particular types of information.

“The computational tools that we developed can be applied to studies of motor function, studies of epilepsy, studies of memory. The math behind it, as applied to the biological, is fundamental to learning,” Ojemann added.

Lead author of the study is Kai Miller, a neurosurgery resident and physicist at Stanford University who obtained his M.D. and Ph.D. at the UW. Other collaborators were Dora Hermes, a Stanford postdoctoral fellow in neuroscience, and Gerwin Schalk, a neuroscientist at the Wadsworth Center in New York.

This work was supported by the National Aeronautics and Space Administration Graduate Student Research Program, the National Institutes of Health, the National Science Foundation, and the U.S. Army.

* In previous studies, such as these three covered on KurzweilAI, brain images were reconstructed after they were viewed, not in real time: Study matches brain scans with topics of thoughts, Neuroscape Lab visualizes live brain functions using dramatic images, How to make movies of what the brain sees.


Abstract of Spontaneous Decoding of the Timing and Content of Human Object Perception from Cortical Surface Recordings Reveals Complementary Information in the Event-Related Potential and Broadband Spectral Change

The link between object perception and neural activity in visual cortical areas is a problem of fundamental importance in neuroscience. Here we show that electrical potentials from the ventral temporal cortical surface in humans contain sufficient information for spontaneous and near-instantaneous identification of a subject’s perceptual state. Electrocorticographic (ECoG) arrays were placed on the subtemporal cortical surface of seven epilepsy patients. Grayscale images of faces and houses were displayed rapidly in random sequence. We developed a template projection approach to decode the continuous ECoG data stream spontaneously, predicting the occurrence, timing and type of visual stimulus. In this setting, we evaluated the independent and joint use of two well-studied features of brain signals, broadband changes in the frequency power spectrum of the potential and deflections in the raw potential trace (event-related potential; ERP). Our ability to predict both the timing of stimulus onset and the type of image was best when we used a combination of both the broadband response and ERP, suggesting that they capture different and complementary aspects of the subject’s perceptual state. Specifically, we were able to predict the timing and type of 96% of all stimuli, with less than 5% false positive rate and a ~20ms error in timing.

Information flows through a subset of high-traffic ‘hub neurons’ in cortical regions of the brain

Non-gray circles and their connections represent 80, 70 and 60 percent (from top to bottom) respectively of all outgoing traffic within the sampled section of a cortical region (credit: Indiana University)

A new study, reported in an open-access paper in The Journal of Neuroscience, shows that 70 percent of all information within cortical regions in the brain passes through only 20 percent of these regions’ neurons.

“The discovery of this small but information-rich subset of neurons within cortical regions suggests this sub-network might play a vital role in communication, learning and memory,” said Sunny Nigam, a Ph.D. candidate in the Indiana University Bloomington College of Arts and Sciences’ Department of Physics, the lead author on the study.

The scientists also report these high-traffic “hub neurons” could play a role in understanding brain health since this sort of highly efficient network — in which a small number of neurons are more essential to brain function — is also more vulnerable to disruption. That’s because relatively small breakages can cause the whole system to “go down.”

“The brain seems to favor efficiency over vulnerability,” said John M. Beggs, associate professor of biophysics in the IU Bloomington Department of Physics, who is senior author on the paper. “In addition to helping us understand how the cortex processes information, this work could shed light on how the brain responds to neurodegenerative diseases that affect the ‘network.’”

If the higher metabolic rates of hub neurons make them more vulnerable, for example, the resulting damage could be particularly harmful in conditions in which neurons are known to die, such as Alzheimer’s disease.

The existence of neurons that carry the majority of information between cortical regions in the brain has been previously reported by Olaf Sporns, Distinguished Professor and Robert H. Shaffer Chair in the IU Bloomington Department of Psychological and Brain Sciences, who is a co-author on the paper. But the new study is the first to show that a similar dynamic exists in communication within cortical regions (or “micro-structures”) of the brain.

It is also the first to measure activity across a particularly large number of neurons in these regions.

To conduct the study, IU scientists recorded small electrical impulses from up to 500 neurons from the somatosensory cortex — the part of the brain responsible for the sense of touch — measuring a surprisingly large volume of traffic across a relatively small area.

Through a collaboration spanning physics, informatics, neuroscience, and psychological and brain sciences, the IU scientists were able to reveal the flow of both outgoing and incoming information within the living neural network by combining data from extremely high-resolution recording technology with complex biophysical computer simulations of the brain.

“This is the first study to combine such a large number of neurons with such high temporal resolution,” Nigam said. “As a result, we can actually detect the direction of the communication flowing between neurons, creating a ‘transportation map’ from the connections within the cortex.”
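
The directed “traffic” measure behind that map is transfer entropy (see the abstract below). As a hedged illustration of the idea, here is a naive pairwise estimator on binned spike trains; the study used a previously validated variant with delays matched to synaptic timescales.

```python
# Naive transfer-entropy sketch (in bits) from spike train x to spike
# train y, using one-bin histories; illustrative only.
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """x, y: equal-length sequences of 0/1 spike counts per time bin."""
    n = len(y) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))  # (future, past_y, past_x)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles_y = Counter(y[:-1])
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n                              # p(y1, y0, x0)
        p_full = c / pairs_yx[(y0, x0)]              # p(y1 | y0, x0)
        p_self = pairs_yy[(y1, y0)] / singles_y[y0]  # p(y1 | y0)
        te += p_joint * np.log2(p_full / p_self)
    return te  # > 0 suggests x helps predict y beyond y's own past
```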

The experiments, conducted both in live animals and in tissue samples, used rodents. But similar high-traffic zones in the cortex have been shown to exist in more advanced mammals, including primates and adult humans. The IU study is the first to explore the behavior of this region in mammals at the level of individual neurons. The only previous similar experiment was conducted in worms.

Nigam added that understanding how the brain maintains good “air traffic control” between information-rich and information-poor neurons will be the next step in unraveling the mystery of hub neurons. “If we ever want to understand how these types of neurons keep information in our heads flowing smoothly,” he said, “we really need to learn a lot more about how they work together.”

Other authors on the paper are researchers at University of California, Santa Cruz; University of California, Los Angeles; AGH University of Science and Technology, Poland; and Duke-NUS Graduate Medical School, Singapore.

This research was supported by the National Science Foundation; the National Institutes of Health; the Ministry of Education, Culture, Sports, Science and Technology of Japan; and Indiana University.


Abstract of Rich-Club Organization in Effective Connectivity among Cortical Neurons

The performance of complex networks, like the brain, depends on how effectively their elements communicate. Despite the importance of communication, it is virtually unknown how information is transferred in local cortical networks, consisting of hundreds of closely spaced neurons. To address this, it is important to record simultaneously from hundreds of neurons at a spacing that matches typical axonal connection distances, and at a temporal resolution that matches synaptic delays. We used a 512-electrode array (60 μm spacing) to record spontaneous activity at 20 kHz from up to 500 neurons simultaneously in slice cultures of mouse somatosensory cortex for 1 h at a time. We applied a previously validated version of transfer entropy to quantify information transfer. Similar to in vivo reports, we found an approximately lognormal distribution of firing rates. Pairwise information transfer strengths also were nearly lognormally distributed, similar to reports of synaptic strengths. Some neurons transferred and received much more information than others, which is consistent with previous predictions. Neurons with the highest outgoing and incoming information transfer were more strongly connected to each other than chance, thus forming a “rich club.” We found similar results in networks recorded in vivo from rodent cortex, suggesting the generality of these findings. A rich-club structure has been found previously in large-scale human brain networks and is thought to facilitate communication between cortical regions. The discovery of a small, but information-rich, subset of neurons within cortical regions suggests that this population will play a vital role in communication, learning, and memory.

SIGNIFICANCE STATEMENT Many studies have focused on communication networks between cortical brain regions. In contrast, very few studies have examined communication networks within a cortical region. This is the first study to combine such a large number of neurons (several hundred at a time) with such high temporal resolution (so we can know the direction of communication between neurons) for mapping networks within cortex. We found that information was not transferred equally through all neurons. Instead, ∼70% of the information passed through only 20% of the neurons. Network models suggest that this highly concentrated pattern of information transfer would be both efficient and robust to damage. Therefore, this work may help in understanding how the cortex processes information and responds to neurodegenerative diseases.

How to rewire the brain with artificial axons to replace damaged pathways

Conceptual schematic of “micro-tissue engineered neural network” (micro-TENN, left) with neurons (red, bottom end) that extend axons (red lines). The new Micro-TENNs (red vertical lines in the middle of the diagram) soften immediately following insertion to better match the mechanical properties of the native brain tissue (purple). The Micro-TENN constructs emulate long axonal pathways with neuronal clusters at the end(s) to allow for integration with native neurons at superficial and deep targets (right, circle insets). (credit: J P Harris et al./Journal of Neural Engineering)

Penn Medicine scientists have grown improved transplantable artificial axons (brain pathways) in the lab. The new “micro-tissue engineered neural networks” (micro-TENNs) replace broken axon pathways when implanted in the brains of rats.

(Neurons are connected by long fibrous projections known as axons. When these connections are damaged, they have very limited capacity to regenerate — unlike many other cells in the body — thus permanently disrupting the body’s signal transmission and communication structure.)

The new lab-grown axons could one day replace damaged axons in the brains of patients with severe head injuries, strokes, or neurodegenerative diseases and could be safely delivered with minimal disruption to brain tissue, according to new research from Penn Medicine’s department of Neurosurgical Research. Their (literally) pathfinding work is published in an open-access paper in the Journal of Neural Engineering.

Senior author D. Kacy Cullen, PhD, an assistant professor of Neurosurgery, and his team previously reported in a 2015 publication in Tissue Engineering that micro-TENNs could be delivered into the brains of rats. The research team has now developed a new, less-invasive delivery method to minimize the body’s reaction and improve the survival and integration of the neural networks.

Micro-TENN axon fabrication process (credit: J P Harris et al./Journal of Neural Engineering)

The team replaced needles with a molded cylinder of agarose (a sugar) filled with an extracellular matrix (ECM) supporting structure through which axons — and associated neurons — could grow in the cerebral cortex and the thalamus. The new axons maintained their architecture for several weeks and successfully integrated into existing brain structures, softening immediately following insertion to better match the mechanical properties of the native brain tissue.

Cullen and team plan to perfect their processes and further integrate neuroscience and engineering to come up with unique ways to aid patients suffering from brain injury or common neurodegenerative diseases. “Additional research is required to directly test micro-TENN neuron survival and integration for each of these insertion methods,” Cullen said.

“We hope this regenerative medicine strategy will someday enable us to grow individualized neural networks that are tailored for each patient’s specific need,” he said, “ultimately, to replace lost neural circuits and improve brain function.”


Abstract of Advanced biomaterial strategies to transplant preformed micro-tissue engineered neural networks into the brain

Objective. Connectome disruption is a hallmark of many neurological diseases and trauma with no current strategies to restore lost long-distance axonal pathways in the brain. We are creating transplantable micro-tissue engineered neural networks (micro-TENNs), which are preformed constructs consisting of embedded neurons and long axonal tracts to integrate with the nervous system to physically reconstitute lost axonal pathways. Approach. We advanced micro-tissue engineering techniques to generate micro-TENNs consisting of discrete populations of mature primary cerebral cortical neurons spanned by long axonal fascicles encased in miniature hydrogel micro-columns. Further, we improved the biomaterial encasement scheme by adding a thin layer of low viscosity carboxymethylcellulose (CMC) to enable needle-less insertion and rapid softening for mechanical similarity with brain tissue. Main results. The engineered architecture of cortical micro-TENNs facilitated robust neuronal viability and axonal cytoarchitecture to at least 22 days in vitro. Micro-TENNs displayed discrete neuronal populations spanned by long axonal fasciculation throughout the core, thus mimicking the general systems-level anatomy of gray matter—white matter in the brain. Additionally, micro-columns with thin CMC-coating upon mild dehydration were able to withstand a force of 893 ± 457 mN before buckling, whereas a solid agarose cylinder of similar dimensions was predicted to withstand less than 150 μN of force. This thin CMC coating increased the stiffness by three orders of magnitude, enabling needle-less insertion into brain while significantly reducing the footprint of previous needle-based delivery methods to minimize insertion trauma. Significance. Our novel micro-TENNs are the first strategy designed for minimally invasive implantation to facilitate nervous system repair by simultaneously providing neuronal replacement and physical reconstruction of long-distance axon pathways in the brain. The micro-TENN approach may offer the ability to treat several disorders that disrupt the connectome, including Parkinson’s disease, traumatic brain injury, stroke, and brain tumor excision.


Abstract of Rebuilding Brain Circuitry with Living Micro-Tissue Engineered Neural Networks

Prominent neuropathology following trauma, stroke, and various neurodegenerative diseases includes neuronal degeneration as well as loss of long-distance axonal connections. While cell replacement and axonal pathfinding strategies are often explored independently, there is no strategy capable of simultaneously replacing lost neurons and re-establishing long-distance axonal connections in the central nervous system. Accordingly, we have created micro-tissue engineered neural networks (micro-TENNs), which are preformed constructs consisting of long integrated axonal tracts spanning discrete neuronal populations. These living micro-TENNs reconstitute the architecture of long-distance axonal tracts, and thus may serve as an effective substrate for targeted neurosurgical reconstruction of damaged pathways in the brain. Cerebral cortical neurons or dorsal root ganglia neurons were precisely delivered into the tubular constructs, and properties of the hydrogel exterior and extracellular matrix internal column (180–500 μm diameter) were optimized for robust neuronal survival and to promote axonal extensions across the 2.0 cm tube length. The very small diameter permits minimally invasive delivery into the brain. In this study, preformed micro-TENNs were stereotaxically injected into naive rats to bridge deep thalamic structures with the cerebral cortex to assess construct survival and integration. We found that micro-TENN neurons survived at least 1 month and maintained their long axonal architecture along the cortical–thalamic axis. Notably, we also found neurite penetration from micro-TENN neurons into the host cortex, with evidence of synapse formation. These micro-TENNs represent a new strategy to facilitate nervous system repair by recapitulating features of neural pathways to restore or modulate damaged brain circuitry.

Memory capacity of brain is 10 times more than previously thought

In a computational reconstruction of brain tissue in the hippocampus, Salk and UT-Austin scientists found the unusual occurrence of two synapses from the axon of one neuron (translucent black strip) forming onto two spines on the same dendrite of a second neuron (yellow). Separate terminals from one neuron’s axon are shown in synaptic contact with two spines (arrows) on the same dendrite of a second neuron in the hippocampus. The spine head volumes, synaptic contact areas (red), neck diameters (gray) and number of presynaptic vesicles (white spheres) of these two synapses are almost identical. (credit: Salk Institute)

Salk researchers and collaborators have achieved critical insight into the size of neural connections, putting the memory capacity of the brain far higher than common estimates. The new work also answers a longstanding question as to how the brain is so energy efficient, and could help engineers build computers that are incredibly powerful but also conserve energy.

“This is a real bombshell in the field of neuroscience,” says Terry Sejnowski, Salk professor and co-senior author of the paper, which was published in eLife. “We discovered the key to unlocking the design principle for how hippocampal neurons function with low energy but high computation power. Our new measurements of the brain’s memory capacity increase conservative estimates by a factor of 10 to at least a petabyte (1 quadrillion, or 10¹⁵, bytes).”

“When we first reconstructed every dendrite, axon, glial process, and synapse* from a volume of hippocampus the size of a single red blood cell, we were somewhat bewildered by the complexity and diversity amongst the synapses,” says Kristen Harris, co-senior author of the work and professor of neuroscience at the University of Texas, Austin. “While I had hoped to learn fundamental principles about how the brain is organized from these detailed reconstructions, I have been truly amazed at the precision obtained in the analyses of this report.”

10 times more discrete sizes of synapses discovered

The Salk team, while building a 3D reconstruction of rat hippocampus tissue (the memory center of the brain), noticed something unusual. In some cases, a single axon from one neuron formed two synapses reaching out to a single dendrite of a second neuron, signifying that the first neuron seemed to be sending a duplicate message to the receiving neuron.

At first, the researchers didn’t think much of this duplication, which occurs about 10 percent of the time in the hippocampus. But Tom Bartol, a Salk staff scientist, had an idea: if they could measure the difference between two very similar synapses such as these, they might glean insight into synaptic sizes, which so far had only been classified in the field as small, medium and large.

To do this, researchers used advanced microscopy and computational algorithms they had developed to image rat brains and reconstruct the connectivity, shapes, volumes and surface area of the brain tissue down to a nanomolecular level.

The scientists expected the synapses would be roughly similar in size, but were surprised to discover the synapses were nearly identical.

Salk scientists computationally reconstructed brain tissue in the hippocampus to study the sizes of connections (synapses). The larger the synapse, the more likely the neuron will send a signal to a neighboring neuron. The team found that there are actually 26 discrete sizes that can change over a span of a few minutes, meaning that the brain has a far greater capacity for storing information than previously thought. Pictured here is a synapse between an axon (green) and dendrite (yellow). (credit: Salk Institute)

“We were amazed to find that the difference in the sizes of the pairs of synapses was very small, on average only about eight percent different in size. No one thought it would be such a small difference. This was a curveball from nature,” says Bartol.

Because the memory capacity of neurons is dependent upon synapse size, this eight percent difference turned out to be a key number the team could then plug into their algorithmic models of the brain to measure how much information could potentially be stored in synaptic connections.

It was known before that the range in sizes between the smallest and largest synapses was a factor of 60 and that most are small.

But armed with the knowledge that synapse sizes could vary in increments as small as eight percent across a range spanning a factor of 60, the team determined there could be about 26 categories of sizes of synapses, rather than just a few.

“Our data suggests there are 10 times more discrete sizes of synapses than previously thought,” says Bartol. In computer terms, 26 sizes of synapses correspond to about 4.7 “bits” of information. Previously, it was thought that the brain was capable of just one to two bits for short and long memory storage in the hippocampus.
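
That conversion is just the base-2 logarithm of the number of distinguishable states, easy to verify:

```python
import math
print(math.log2(26))  # ~4.70 bits to specify one of 26 synaptic sizes
print(math.log2(4))   # 2 bits: the older assumption of only a few sizes
```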

“This is roughly an order of magnitude of precision more than anyone has ever imagined,” says Sejnowski.

What makes this precision puzzling is that hippocampal synapses are notoriously unreliable. When a signal travels from one neuron to another, it typically activates that second neuron only 10 to 20 percent of the time.

“We had often wondered how the remarkable precision of the brain can come out of such unreliable synapses,” says Bartol. One answer, it seems, is in the constant adjustment of synapses, averaging out their success and failure rates over time. The team used their new data and a statistical model to find out how many signals it would take a pair of synapses to get to that eight percent difference.

The researchers calculated that for the smallest synapses, about 1,500 signaling events (roughly 20 minutes’ worth) cause a change in their size/ability, and for the largest synapses, only a couple hundred signaling events (1 to 2 minutes) cause a change.

“This means that every 2 or 20 minutes, your synapses are going up or down to the next size. The synapses are adjusting themselves according to the signals they receive,” says Bartol.
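
A rough way to see why less reliable synapses need more events, using our own back-of-the-envelope model rather than the paper’s statistical analysis: if each transmission succeeds with probability p, estimating p from N events gives a relative standard error of sqrt((1-p)/(p*N)), so resolving an eight percent difference requires more events as reliability falls.

```python
# Back-of-the-envelope illustration (our model, not the paper's): number of
# transmission events needed before the estimate of a synapse's success
# probability p is precise to ~8 percent.
import math

def events_needed(p, rel_err=0.08):
    # relative standard error of a binomial estimate: sqrt((1-p)/(p*N))
    return math.ceil((1 - p) / (p * rel_err ** 2))

print(events_needed(0.10))  # unreliable synapse: ~1,400 events
print(events_needed(0.50))  # more reliable synapse: ~160 events
```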

“Our prior work had hinted at the possibility that spines and axons that synapse together would be similar in size, but the reality of the precision is truly remarkable and lays the foundation for whole new ways to think about brains and computers,” says Harris. “The work resulting from this collaboration has opened a new chapter in the search for learning and memory mechanisms.” Harris adds that the findings suggest more questions to explore, for example, if similar rules apply for synapses in other regions of the brain and how those rules differ during development and as synapses change during the initial stages of learning.

“The implications of what we found are far-reaching,” adds Sejnowski. “Hidden under the apparent chaos and messiness of the brain is an underlying precision to the size and shapes of synapses that was hidden from us.”

A model for energy-efficient computers

The findings also offer a valuable explanation for the brain’s surprising efficiency. The waking adult brain generates only about 20 watts of continuous power—as much as a very dim light bulb. The Salk discovery could help computer scientists build powerful and ultraprecise, but energy-efficient, computers, particularly ones that employ “deep learning” and artificial neural nets—techniques capable of sophisticated learning and analysis, such as speech, object recognition and translation.

“This trick of the brain absolutely points to a way to design better computers,” says Sejnowski. “Using probabilistic transmission turns out to be just as accurate and to require much less energy for both computers and brains.”

Other authors on the paper were Cailey Bromer of the Salk Institute; Justin Kinney of the McGovern Institute for Brain Research; and Michael A. Chirillo and Jennifer N. Bourne of the University of Texas, Austin.

The work was supported by the NIH and the Howard Hughes Medical Institute.

* Our memories and thoughts are the result of patterns of electrical and chemical activity in the brain. A key part of the activity happens when branches of neurons, much like electrical wire, interact at certain junctions, known as synapses. An output ‘wire’ (an axon) from one neuron connects to an input ‘wire’ (a dendrite) of a second neuron. Signals travel across the synapse as chemicals called neurotransmitters to tell the receiving neuron whether to convey an electrical signal to other neurons. Each neuron can have thousands of these synapses with thousands of other neurons. Synapses are still a mystery, though their dysfunction can cause a range of neurological diseases. Larger synapses — with more surface area and vesicles of neurotransmitters — are stronger, making them more likely to activate their surrounding neurons than medium or small synapses.

UPDATE 1/22/2016 “in the same ballpark as the World Wide Web” removed; appears to be inaccurate. The Internet Archive, a subset of the Web, currently stores 50 petabytes, for example.




Abstract of Nanoconnectomic upper bound on the variability of synaptic plasticity

Information in a computer is quantified by the number of bits that can be stored and recovered. An important question about the brain is how much information can be stored at a synapse through synaptic plasticity, which depends on the history of probabilistic synaptic activity. The strong correlation between size and efficacy of a synapse allowed us to estimate the variability of synaptic plasticity. In an EM reconstruction of hippocampal neuropil we found single axons making two or more synaptic contacts onto the same dendrites, having shared histories of presynaptic and postsynaptic activity. The spine heads and neck diameters, but not neck lengths, of these pairs were nearly identical in size. We found that there is a minimum of 26 distinguishable synaptic strengths, corresponding to storing 4.7 bits of information at each synapse. Because of stochastic variability of synaptic activation the observed precision requires averaging activity over several minutes.

Tiny electronic implants monitor brain injury, then melt away

Artist’s rendering of bioresorbable implanted brain sensor (top left) connected via biodegradable wires to external wireless transmitter (ring, top right) for monitoring a rat’s brain (red) (credit: Graphic by Julie McMahon)

Researchers at University of Illinois at Urbana-Champaign and Washington University School of Medicine in St. Louis have developed a new class of small, thin electronic sensors that can monitor temperature and pressure within the skull — crucial health parameters after a brain injury or surgery* — then melt away when they are no longer needed, eliminating the need for additional surgery to remove the monitors and reducing the risk of infection and hemorrhage.

Similar sensors could be adapted for postoperative monitoring in other body systems as well, the researchers say.

John A. Rogers, a professor of materials science and engineering at the U. of I. at Urbana-Champaign, and Wilson Ray, a professor of neurological surgery at Washington University, published their work in the journal Nature.

After a traumatic brain injury or brain surgery, it’s crucial to monitor the patient for swelling and pressure on the brain. Current monitoring technology is bulky and invasive, Rogers said, and the wires restrict the patient’s movement and hamper physical therapy during recovery. Because they require continuous, hard-wired access into the head, such implants also carry the risk of allergic reactions, infection and hemorrhage, and could even exacerbate the inflammation they are meant to monitor.

Bioresorbable materials

“If you simply could throw out all the conventional hardware and replace it with very tiny, fully implantable sensors capable of the same function, constructed out of bioresorbable materials in a way that also eliminates or greatly miniaturizes the wires, then you could remove a lot of the risk and achieve better patient outcomes,” Rogers said. “We were able to demonstrate all of these key features in animal models, with a measurement precision that’s just as good as that of conventional devices.”

Schematic illustration of a biodegradable pressure sensor. The inset shows the location of the silicon-nanomembrane (Si-NM) strain gauge. (credit: Seung-Kyun Kang et al./Nature)

The new devices incorporate dissolvable silicon technology developed by Rogers’ group at the U. of I. The sensors, smaller than a grain of rice, are built on extremely thin sheets of nanoporous silicon — which are naturally biodegradable. The sheets are configured to function normally for a few weeks, then dissolve away, completely and harmlessly, in the body’s own fluids (via hydrolysis and/or metabolic action).

The silicon platforms are also sensitive to clinically relevant pressure levels in the intracranial fluid surrounding the brain.

The researchers added a tiny temperature sensor and connected it to a wireless transmitter roughly the size of a postage stamp, implanted under the skin but on top of the skull.

Tiny pressure and temperature sensor (bottom right) connects via bioresorbable molybdenum wires (2 micrometers in diameter) to a penny-sized wireless transmitter externally mounted on top of the skull. (image credit: John A. Rogers)

The Illinois group worked with clinical experts in traumatic brain injury at Washington University to implant the sensors in rats, testing for performance and biocompatibility. They found that the temperature and pressure readings from the dissolvable sensors matched conventional monitoring devices for accuracy.

“The ultimate strategy is to have a device that you can place in the brain — or in other organs in the body — that is entirely implanted, intimately connected with the organ you want to monitor and can transmit signals wirelessly to provide information on the health of that organ, allowing doctors to intervene if necessary to prevent bigger problems,” said Rory Murphy, a neurosurgeon at Washington University and co-author of the paper.

“After the critical period that you actually want to monitor, it will dissolve away and disappear.”

Embedding drug-delivery and electrical-stimulator devices

The researchers are moving toward human trials for this technology, as well as extending its functionality for other biomedical applications.

“We have established a range of device variations, materials, and measurement capabilities for sensing in other clinical contexts,” Rogers said. “In the near future, we believe that it will be possible to embed therapeutic function, such as electrical stimulation or drug delivery, into the same systems while retaining the essential bioresorbable character.”

The National Institutes of Health, the Defense Advanced Research Projects Agency and the Howard Hughes Medical Institute supported this work. Rogers and Braun are affiliated with the Beckman Institute for Advanced Science and Technology at the U. of I.

* About 50,000 people die of traumatic brain injuries annually in the U.S. When patients with such injuries arrive in the hospital, doctors must be able to accurately measure intracranial pressure in the brain and inside the skull because an increase in pressure can lead to further brain injury, and there is no way to reliably estimate pressure levels from brain scans or clinical features in patients.


Abstract of Bioresorbable silicon electronic sensors for the brain

Many procedures in modern clinical medicine rely on the use of electronic implants in treating conditions that range from acute coronary events to traumatic injury. However, standard permanent electronic hardware acts as a nidus for infection: bacteria form biofilms along percutaneous wires, or seed haematogenously, with the potential to migrate within the body and to provoke immune-mediated pathological tissue reactions. The associated surgical retrieval procedures, meanwhile, subject patients to the distress associated with re-operation and expose them to additional complications. Here, we report materials, device architectures, integration strategies, and in vivo demonstrations in rats of implantable, multifunctional silicon sensors for the brain, for which all of the constituent materials naturally resorb via hydrolysis and/or metabolic action, eliminating the need for extraction. Continuous monitoring of intracranial pressure and temperature illustrates functionality essential to the treatment of traumatic brain injury; the measurement performance of our resorbable devices compares favourably with that of non-resorbable clinical standards. In our experiments, insulated percutaneous wires connect to an externally mounted, miniaturized wireless potentiostat for data transmission. In a separate set-up, we connect a sensor to an implanted (but only partially resorbable) data-communication system, proving the principle that there is no need for any percutaneous wiring. The devices can be adapted to sense fluid flow, motion, pH or thermal characteristics, in formats that are compatible with the body’s abdomen and extremities, as well as the deep brain, suggesting that the sensors might meet many needs in clinical medicine.

Why doesn’t my phone understand me yet?

A computer would have a hard time understanding this conversation, but humans get it immediately. That’s because human communicators share a conceptual space or common ground that enables them to quickly interpret a situation. (credit for Edward Hopper’s Nighthawks artwork: the Art Institute of Chicago)

Because machines can’t develop a shared understanding of the people, place, and situation — often including a long social history — that is the key to human communication, say University of California, Berkeley, postdoctoral fellow Arjen Stolk and his Dutch colleagues.

In other words, machines don’t consider the context of a conversation the way people do.

The word “bank,” for example, would be interpreted one way if you’re holding a credit card but a different way if you’re holding a fishing pole, says Stolk. “Without context, making a ‘V’ with two fingers could mean victory, the number two, or ‘these are the two fingers I broke.’”

Stolk argues that scientists and engineers should focus more on the contextual aspects of mutual understanding, basing his argument on experimental evidence from brain scans that humans achieve nonverbal mutual understanding using unique computational and neural mechanisms. Some of the studies Stolk has conducted suggest that a breakdown in mutual understanding is also behind social disorders such as autism.

Brain scans pinpoint site for “meeting of minds”

As two people conversing rely more and more on previously shared concepts, the same area of their brains — the right superior temporal gyrus — becomes more active (blue is activity in communicator, orange is activity in interpreter). This suggests that this brain region is key to mutual understanding as people continually update their shared understanding of the context of the conversation to improve mutual understanding. (credit: Arjen Stolk, UC Berkeley)

To explore how brains achieve mutual understanding, Stolk created a game that requires two players to communicate the rules to each other solely by game movements, without talking or even seeing one another, eliminating the influence of language or gesture. He then placed both players in an fMRI (functional magnetic resonance imager) and scanned their brains as they non-verbally communicated with one another via computer.

He found that the same regions of the brain — located in the poorly understood right temporal lobe, just above the ear — became active in both players during attempts to communicate the rules of the game. Critically, the superior temporal gyrus of the right temporal lobe maintained a steady, baseline activity throughout the game but became more active when one player suddenly understood what the other player was trying to communicate. The brain’s right hemisphere is more involved in abstract thought and social interactions than the left hemisphere.

“These regions in the right temporal lobe increase in activity the moment you establish a shared meaning for something, but not when you communicate a signal,” Stolk said. “The better the players got at understanding each other, the more active this region became.”

This means that both players are building a similar conceptual framework in the same area of the brain, constantly testing one another to make sure their concepts align, and updating only when new information changes that mutual understanding. The results were reported in 2014 in the Proceedings of the National Academy of Sciences.

“It is surprising,” said Stolk, “that for both the communicator, who has static input while she is planning her move, and the addressee, who is observing dynamic visual input during the game, the same region of the brain becomes more active over the course of the experiment as they improve their mutual understanding.”

Statistical reasoning vs. human reasoning

Robots and computers, on the other hand, converse based on a statistical analysis of a word’s meaning, Stolk said. If you usually use the word “bank” to mean a place to cash a check, then that will be the assumed meaning in a conversation, even when the conversation is about fishing.

“Apple’s Siri focuses on statistical regularities, but communication is not about statistical regularities,” he said. “Statistical regularities may get you far, but it is not how the brain does it. In order for computers to communicate with us, they would need a cognitive architecture that continuously captures and updates the conceptual space shared with their communication partner during a conversation.”

Hypothetically, such a dynamic conceptual framework would allow computers to resolve the intrinsically ambiguous communication signals produced by a real person, including drawing upon information stored years earlier.
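
As a toy illustration of what such an architecture might look like (our sketch, not a model from Stolk’s papers), an agent could keep a mutable store of common ground and let it, rather than global word statistics, select a word sense:

```python
# Toy sketch: word-sense choice driven by continuously updated common
# ground rather than by global word statistics. The lexicon and cues
# are invented for illustration.
SENSES = {
    "bank": {
        "finance": {"credit card", "check", "teller"},
        "riverside": {"fishing pole", "river", "boat"},
    }
}

class SharedContext:
    def __init__(self):
        self.grounded = set()       # cues both partners have established

    def update(self, cues):
        self.grounded |= set(cues)  # each exchange adds to common ground

    def interpret(self, word):
        # choose the sense whose cues overlap most with the common ground
        senses = SENSES.get(word, {})
        return max(senses, key=lambda s: len(senses[s] & self.grounded),
                   default=word)

ctx = SharedContext()
ctx.update({"fishing pole"})
print(ctx.interpret("bank"))  # -> "riverside", despite finance being more common
```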

Stolk’s studies have pinpointed other brain areas critical to mutual understanding. In a 2014 study, he used brain stimulation to disrupt a rear portion of the temporal lobe and found that it is important for integrating incoming signals with knowledge from previous interactions.

A later study found that in patients with damage to the frontal lobe (the ventromedial prefrontal cortex), decisions to communicate are no longer fine-tuned to stored knowledge about an addressee. Both studies could explain why such patients appear socially awkward in everyday social interactions.

“Most cognitive neuroscientists focus on the signals themselves, on the words, gestures and their statistical relationships, ignoring the underlying conceptual ability that we use during communication and the flexibility of everyday life,” he said. “Language is very helpful, but it is a tool for communication, it is not communication per se. By focusing on language, you may be focusing on the tool, not on the underlying mechanism, the cognitive architecture we have in our brain that helps us to communicate.”

Stolk and his colleagues discuss the importance of conceptual alignment for mutual understanding in an opinion piece appearing Jan. 11 in the journal Trends in Cognitive Sciences, in hopes of moving the study of communication to a new level, with a focus on conceptual alignment.

Stolk’s co-authors are Ivan Toni of the Donders Institute for Brain, Cognition and Behavior at Radboud University in the Netherlands, where the studies were conducted, and Lennart Verhagen of the University of Oxford.


Abstract of Conceptual Alignment: How Brains Achieve Mutual Understanding

We share our thoughts with other minds, but we do not understand how. Having a common language certainly helps, but infants’ and tourists’ communicative success clearly illustrates that sharing thoughts does not require signals with a pre-assigned meaning. In fact, human communicators jointly build a fleeting conceptual space in which signals are a means to seek and provide evidence for mutual understanding. Recent work has started to capture the neural mechanisms supporting those fleeting conceptual alignments. The evidence suggests that communicators and addressees achieve mutual understanding by using the same computational procedures, implemented in the same neuronal substrate, and operating over temporal scales independent from the signals’ occurrences.

Trends

State-of-the-art artificial agents such as the virtual assistants on our phones are powered by associative deep-learning algorithms. Yet those agents often make communicative errors that, if made by real people, would lead us to question their mental capacities.

We argue that these communicative errors are a consequence of focusing on the statistics of the signals we use to understand each other during communicative interactions.

Recent empirical work aimed at understanding our communicative abilities is showing that human communicators share concepts, not signals.

The evidence shows that communicators and addressees achieve mutual understanding by using the same computational procedures, implemented in the same neuronal substrate, and operating over temporal scales independent from the signals’ occurrences.

UCSD spinoffs create lab-quality portable 64-channel BCI headset

A dry-electrode, portable 64-channel wearable EEG headset (credit: Jacobs School of Engineering/UC San Diego)

The first dry-electrode, portable 64-channel wearable brain-computer interface (BCI) has been developed by bioengineers and cognitive scientists associated with UCSD Jacobs School.

The system is comparable to state-of-the-art equipment found in research laboratories, but with portability, allowing for tracking brain states throughout the day and augmenting the brain’s capabilities, the researchers say. Current BCI devices either require gel-based electrodes or offer fewer than 64 channels.

The dry EEG sensors are easier to apply than wet sensors, while still providing high-density/low-noise brain activity data, according to the researchers. The headset includes a Bluetooth transmitter, eliminating the usual array of wires. The system also includes a sophisticated software suite for data interpretation and analysis for applications including research, neuro-feedback, and clinical diagnostics.

Cognionics HD-72 64-channel mobile EEG system (credit: Cognionics)

“This is going to take neuroimaging to the next level by deploying on a much larger scale,” including use in homes and even while driving, said Mike Yu Chi, a Jacobs School alumnus and CTO of Cognionics who led the team that developed the headset.

The researchers also envision a future when neuroimaging can be used to bring about new therapies for neurological disorders. “We will be able to prompt the brain to fix its own problems,” said Gert Cauwenberghs, a bioengineering professor at the Jacobs School and a principal investigator on a National Science Foundation grant. “We are trying to get away from invasive technologies, such as deep brain stimulation and prescription medications, and instead start up a repair process by using the brain’s synaptic plasticity.”

“In 10 years, using a brain-machine interface might become as natural as using your smartphone is today,” said Tim Mullen, a UC San Diego alumnus, lead author on the study and a former researcher at the Swartz Center for Computational Neuroscience at UC San Diego.

The researchers from the Jacobs School of Engineering and Institute for Neural Computation at UC San Diego detailed their findings in an article in the Special Issue on Wearable Technologies published recently in IEEE Transactions on Biomedical Engineering.

EEG headset

The innovative dry electrodes developed by the researchers eliminate the complexity and mess of affixing gel electrodes. By using silver/silver chloride tips, along with associated electronics and special headset mechanical construction, the dry electrodes reduce electrical noise and can be conveniently used through hair. (credit: Tim R. Mullen et al./IEEE Transactions on Biomedical Engineering)

For this vision of the future to become a reality, sensors will need to become not only wearable but also comfortable, and algorithms for data analysis will need to be able to cut through noise to extract meaningful data. The EEG headset developed by Chi and his team has an octopus-like shape, in which each arm is elastic, so that it fits on many different kinds of head shapes. The sensors at the end of each arm are designed to make optimal contact with the scalp while adding as little noise in the signal as possible.

The researchers spent four years perfecting the sensors’ materials. The sensors are designed to work on a subject’s hair. The material allows the sensors to remain flexible and durable while still conducting high-quality signals, thanks to a silver/silver-chloride coating. The design includes shielding from interference from electrical equipment and other electronics. The researchers also developed sensors intended for direct application to the scalp.

Software and data analysis

In the study, the data that the headset captured were analyzed with software developed by Mullen and Christian Kothe, another former researcher at the Swartz Center for Computational Neuroscience.

First, brain signals needed to be separated from noise in the EEG data. The tiny electrical currents originating from the brain are often contaminated by high-amplitude artifacts generated when subjects move, speak, or even blink. The researchers designed an algorithm that separates the EEG data in real time into different components that are statistically unrelated to one another.

The algorithm then compares these elements with clean data obtained, for instance, when a subject is at rest. Abnormal data are labeled as noise and discarded. “The algorithm attempts to remove as much of the noise as possible while preserving as much of the brain signal as possible,” said Mullen.
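
The study’s real-time adaptive artifact rejection differs in detail, but the general recipe described above (unmix the channels into statistically unrelated components, flag the abnormal ones, and reconstruct without them) can be sketched as follows; FastICA and the kurtosis threshold are our illustrative stand-ins.

```python
# Hedged sketch of component-based EEG artifact removal; not the paper's
# actual pipeline. eeg: (n_samples, n_channels) array.
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

def clean_eeg(eeg, kurt_thresh=10.0):
    ica = FastICA(n_components=eeg.shape[1], random_state=0)
    sources = ica.fit_transform(eeg)               # statistically unrelated components
    bad = kurtosis(sources, axis=0) > kurt_thresh  # blinks/muscle noise are spiky
    sources[:, bad] = 0.0                          # discard abnormal components
    return ica.inverse_transform(sources)          # remix the remaining brain signal
```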

The researchers were also able to track, in real time, how signals from different areas of the brain interact with one another, building an ever-changing network map of brain activity. They then used machine learning to connect specific network patterns in brain activity to cognition and behavior.

Future plans

Mullen’s start-up, Qusp, has developed NeuroScale, a cloud-based software platform that provides continuous real-time interpretation of brain and body signals through an Internet application program interface (API). The goal is to enable brain-computer interface and advanced signal processing methods to be easily integrated with various everyday applications and wearable devices.

“A Holy Grail in our field is to track meaningful changes in distributed brain networks at the ‘speed of thought’,” Mullen said. “We’re closer to that goal, but we’re not quite there yet.”

Cognionics is selling the headset to research groups, especially for use in neurofeedback. Next steps include improving the headset’s performance while subjects are moving: the device can reliably capture signals while subjects walk, but less so during more strenuous activities such as running. The electronics also need improvement to function for longer periods: days and even weeks instead of hours.

The ultimate goal is to get the headset into the clinic to help diagnose a range of conditions, such as strokes and seizures, says Chi.

These researchers’ development and testing projects have been funded in part by a five-year Emerging Frontiers in Research and Innovation (EFRI) grant from the National Science Foundation, DARPA, and the Army Research Laboratory (Aberdeen, MD) Collaborative Technology Alliance (CTA).


Abstract of Real-time neuroimaging and cognitive monitoring using wearable dry EEG

Goal: We present and evaluate a wearable high-density dry-electrode EEG system and an open-source software framework for online neuroimaging and state classification. Methods: The system integrates a 64-channel dry EEG form factor with wireless data streaming for online analysis. A real-time software framework is applied, including adaptive artifact rejection, cortical source localization, multivariate effective connectivity inference, data visualization, and cognitive state classification from connectivity features using a constrained logistic regression approach (ProxConn). We evaluate the system identification methods on simulated 64-channel EEG data. Then, we evaluate system performance, using ProxConn and a benchmark ERP method, in classifying response errors in nine subjects using the dry EEG system. Results: Simulations yielded high accuracy (AUC = 0.97 ± 0.021) for real-time cortical connectivity estimation. Response error classification using cortical effective connectivity [short-time direct directed transfer function (sdDTF)] was significantly above chance with similar performance (AUC) for cLORETA (0.74 ± 0.09) and LCMV (0.72 ± 0.08) source localization. Cortical ERP-based classification was equivalent to ProxConn for cLORETA (0.74 ± 0.16) but significantly better for LCMV (0.82 ± 0.12). Conclusion: We demonstrated the feasibility for real-time cortical connectivity analysis and cognitive state classification from high-density wearable dry EEG. Significance: This paper is the first validated application of these methods to 64-channel dry EEG. This study addresses a need for robust real-time measurement and interpretation of complex brain activity in the dynamic environment of the wearable setting. Such advances can have broad impact in research, medicine, and brain-computer interfaces. The pipelines are made freely available in the open-source SIFT and BCILAB toolboxes.

Why evolution may be intelligent, based on deep learning

Moth Orchid flower (credit: Christian Kneidinger)

A computer scientist and biologist propose to unify the theory of evolution with learning theories to explain the “amazing, apparently intelligent designs that evolution produces.”

The scientists — University of Southampton School of Electronics and Computer Science professor Richard Watson* and Eötvös Loránd University (Budapest) professor of biology Eörs Szathmáry* — say they’ve found that it’s possible for evolution to exhibit some of the same intelligent behaviors as learning systems — including neural networks.

Writing in an opinion paper published in the journal Trends in Ecology and Evolution, they use “formal analogies” and transfer specific models and results between the two theories in an attempt to solve several evolutionary puzzles.

The authors cite work by Pavlicev and colleagues** showing that selection on relational alleles (gene variants) increases phenotypic (organism trait) correlation if the traits are selected together and decreases correlation if they are selected antagonistically, which is characteristic of Hebbian learning, they note.

“This simple step from evolving traits to evolving correlations between traits is crucial; it moves the object of natural selection from fit phenotypes (which ultimately removes phenotypic variability altogether) to the control of phenotypic variability,” the researchers say.
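The Hebbian analogy can be made concrete with a one-line toy model (an assumption-laden sketch, not the Pavlicev model itself): a “relational” coupling between two traits grows when selection favors them together and shrinks when it favors them apart.

```python
def hebbian_update(w, trait_a, trait_b, lr=0.01):
    """Delta-w = lr * a * b, the classic Hebbian rule: selection pressures
    of the same sign strengthen the trait coupling; antagonistic
    (opposite-sign) pressures weaken it."""
    return w + lr * trait_a * trait_b
```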

Why evolution is not blind

“Learning theory is not just a different way of describing what Darwin already told us,” said Watson. “It expands what we think evolution is capable of. It shows that natural selection is sufficient to produce significant features of intelligent problem-solving.”

Conventionally, evolution, which depends on random variation, has been considered blind, or at least myopic, he notes. “But showing that evolving systems can learn from past experience means that evolution has the potential to anticipate what is needed to adapt to future environments in the same way that learning systems do.

“A system exhibits learning if its performance at some task improves with experience,” the authors note in the paper. “Reusing behaviors that have been successful in the past (reinforcement learning) is intuitively similar to the way selection increases the proportion of fit phenotypes [an organism's observable characteristics or traits] in a population. In fact, evolutionary processes and simple learning processes are formally equivalent.

“In particular, learning can be implemented by incrementally adjusting a probability distribution over behaviors (e.g., Bayesian learning or Bayesian updating). Or, if a behavior is represented by a vector of features or components, by adjusting the probability of using each individual component in proportion to its average reward in past behaviors (e.g., Multiplicative Weights Update Algorithm, MWUA).”
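For concreteness, a single MWUA step looks like the following sketch (with illustrative assumptions: rewards scaled to [-1, 1] and a small learning rate, so the weights stay positive).

```python
import numpy as np

def mwua_step(weights, rewards, eta=0.1):
    """Scale each component's weight by (1 + eta * reward), then
    renormalize, so the weights remain a probability distribution over
    which components to use in future behaviors."""
    weights = weights * (1.0 + eta * np.asarray(rewards))
    return weights / weights.sum()
```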

The evolution of connections in a Recurrent Gene Regulation Network (GRN) shows associative learning behaviors. When a Hopfield network is trained on a set of patterns with Hebbian learning, it forms an associative memory of the patterns in the training set. When subsequently stimulated with random excitation patterns, the activation dynamics of the trained network will spontaneously recall the patterns from the training set or generate new patterns that are generalizations of the training patterns. (A–D) A GRN is evolved to produce first one phenotype (set of characteristics or traits; Charles Darwin in this example) and then another (Donald Hebb) in an alternating manner. The resulting phenotype is not merely an average of the two phenotypic patterns that were selected in the past. Rather, different embryonic phenotypes (e.g., random initial conditions C and D) developed into different adult phenotypes (with this evolved GRN) that match either A or B. These two phenotypes can be produced from genotypes (DNA sequences) that are a single mutation apart. In a separate experiment, selection iterates over a set of target phenotypes (E–H). In addition to developing phenotypes that match patterns selected in the past (e.g., I), this GRN also generalizes to produce new phenotypes that were not selected for in the past but belong to a structurally similar class, for example, by creating novel combinations of evolved modules (e.g., developmental attractors exist for a phenotype with all four “loops”) (J). This demonstrates a capability for evolution to exhibit phenotypic novelty in exactly the same sense that learning neural networks can generalize from past experience. (credit: Richard A. Watson and Eörs Szathmáry/Trends in Ecology and Evolution)
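The Hopfield behavior the caption invokes is itself compact enough to sketch (this is the generic textbook network, not the authors’ GRN simulation):

```python
# Hebbian training and recall for a Hopfield associative memory over
# +/-1 patterns; states settle into stored patterns or generalizations.
import numpy as np

def train_hopfield(patterns):
    """Weight between two units = average product of their activities
    across the training patterns (Hebbian learning)."""
    P = np.array(patterns)              # shape: (n_patterns, n_units)
    W = P.T @ P / P.shape[0]
    np.fill_diagonal(W, 0.0)            # no self-connections
    return W

def recall(W, state, steps=20):
    """Run the activation dynamics from a (possibly random) initial state."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1           # break ties toward +1
    return state
```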

Unsupervised learning

An even more interesting process in evolution is unsupervised learning, where mechanisms do not depend on an external reward signal, the authors say in the paper:

By reinforcing correlations that are frequent, regardless of whether they are good, unsupervised correlation learning can produce system-level behaviors without system-level rewards. This can be implemented without centralized learning mechanisms. (Recent theoretical work shows that selection acting only to maximize individual growth rate, when applied to interspecific competition coefficients within an ecological community, produces unsupervised learning at the system level.)

This is an exciting possibility because it means that, despite not being a unit of selection, an ecological community might exhibit organizations that confer coordinated collective behaviors — for example, a distributed ecological memory that can recall multiple past ecological states. …

Taken together, correlation learning, unsupervised correlation learning, and deep correlation learning thus provide a formal way to understand how variation, selection, and inheritance, respectively, might be transformed over evolutionary time.
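A toy version of that reward-free mechanism (an illustration of the quoted idea, not the cited ecological model) simply reinforces whatever correlations are currently present:

```python
import numpy as np

def unsupervised_step(W, state, lr=0.01):
    """Frequent co-activations accumulate into strong couplings, building
    a distributed memory of commonly visited system states without any
    reward signal."""
    return W + lr * np.outer(state, state)
```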

The authors’ new approach also offers an alternative to “intelligent design” (ID), which negates natural selection as an explanation for apparently intelligent features of nature. (The leading proponents of ID are associated with the Discovery Institute. See Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong A.I.*** — a debate between Kurzweil and several Discovery Institute fellows.)

So if evolutionary theory can learn from the principles of cognitive science and deep learning, can cognitive science and deep learning learn from evolutionary theory?

* The authors are also affiliated with the Parmenides Foundation in Munich.

** Watson, R.A. et al. (2014) The evolution of phenotypic correlations and ‘developmental memory.’ Evolution 68, 1124–1138; and Pavlicev, M. et al. (2011) Evolution of adaptive phenotypic variation patterns by direct selection for evolvability. Proc. R. Soc. B Biol. Sci. 278, 1903–1912

*** This book is available free on KurzweilAI, as noted.


Abstract of How Can Evolution Learn?
The theory of evolution links random variation and selection to incremental adaptation. In a different intellectual domain, learning theory links incremental adaptation (e.g., from positive and/or negative reinforcement) to intelligent behaviour. Specifically, learning theory explains how incremental adaptation can acquire knowledge from past experience and use it to direct future behaviours toward favourable outcomes. Until recently such cognitive learning seemed irrelevant to the ‘uninformed’ process of evolution. In our opinion, however, new results formally linking evolutionary processes to the principles of learning might provide solutions to several evolutionary puzzles – the evolution of evolvability, the evolution of ecological organisation, and evolutionary transitions in individuality. If so, the ability for evolution to learn might explain how it produces such apparently intelligent designs.

Do we have free will?

Human “duels” against a brain-computer interface (BCI) in an experiment. (credit: Carsten Bogler/Charité)

It’s a question that’s been debated by philosophers for centuries. Now neuroscientists from Charité – Universitätsmedizin Berlin have run an experiment to find out, using a “duel” game between a human and a brain-computer interface (BCI).

As KurzweilAI reported last year:

In the early 1980s, University of California, San Francisco neuroscientist Benjamin Libet conducted an experiment to assess the nature of free will. Subjects hooked up to an electroencephalogram (EEG) were asked to push a button whenever they liked. They were also asked to note the precise time that they first became aware of the wish or urge to move.

Libet’s experiments showed that distinctive “readiness potential” brain activity began, on average, several seconds before subjects became aware that they planned to move. Libet concluded that the desire to move arose unconsciously, and that “free will” could only take the form of a conscious veto, which he called “free won’t.”

Researchers at the Charité’s Bernstein Center for Computational Neuroscience have now created an experiment to test the “free won’t” part. Using state-of-the-art measurement techniques, they tested whether people are able to consciously stop a planned movement after the readiness potential (RP) for that movement has already been triggered.

The “point of no return”

The researchers asked study participants to play a “duel” game against a computer while their brain waves were monitored using electroencephalography (EEG). A specially trained computer was then tasked with using this EEG data to predict when a subject would move, the aim being to out-maneuver the player. The researchers manipulated the outcome of the game in favor of the computer as soon as brain-wave measurements indicated that the player was about to move.
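Conceptually, the computer’s side of the duel is a simple prediction loop, as in the sketch below. This is not the Charité implementation; `read_eeg_window`, `rp_classifier`, and the probability threshold are hypothetical placeholders.

```python
def duel_loop(rp_classifier, read_eeg_window, emit_stop_signal, threshold=0.8):
    """Poll the EEG stream; when the predicted probability that a
    readiness potential is unfolding crosses the threshold, signal the
    game to turn the outcome in the computer's favor."""
    while True:
        window = read_eeg_window()                    # most recent (channels x samples) data
        p_move = rp_classifier.predict_proba(window)  # hypothetical RP detector
        if p_move > threshold:
            emit_stop_signal()                        # the player can still veto until
            return                                    # ~200 ms before movement onset
```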

The idea was that if subjects could evade the computer’s prediction, this would be evidence that they remain in control of their actions for much longer than previously thought.

The researchers discovered that subjects could indeed do this, but that there’s a “point of no return” in the decision-making process [at about 200 milliseconds before movement onset], after which cancellation of the movement is no longer possible. “A person’s decisions are not at the mercy of unconscious and early brain waves. They are able to actively intervene in the decision-making process and interrupt a movement,” says research-team leader Prof. John-Dylan Haynes.

Further studies are planned to investigate more complex decision-making processes.

Technische Universität Berlin researchers were also involved in the study. The study results have been published in an open-access paper in the journal PNAS.

UPDATE: The point of no return is about 200 milliseconds before movement onset, not after the RP.


Abstract of The point of no return in vetoing self-initiated movements

In humans, spontaneous movements are often preceded by early brain signals. One such signal is the readiness potential (RP) that gradually arises within the last second preceding a movement. An important question is whether people are able to cancel movements after the elicitation of such RPs, and if so until which point in time. Here, subjects played a game where they tried to press a button to earn points in a challenge with a brain–computer interface (BCI) that had been trained to detect their RPs in real time and to emit stop signals. Our data suggest that subjects can still veto a movement even after the onset of the RP. Cancellation of movements was possible if stop signals occurred earlier than 200 ms before movement onset, thus constituting a point of no return.