Intelligence-augmentation device lets users ‘speak silently’ with a computer by just thinking

MIT Media Lab researcher Arnav Kapur demonstrates the AlterEgo device. It picks up neuromuscular facial signals generated by his thoughts; a bone-conduction headphone lets him privately hear responses from his personal devices. (credit: Lorrie Lejeune/MIT)

MIT researchers have invented a system that allows someone to communicate silently and privately with a computer or the internet by simply thinking — without any discernible facial movement.

The AlterEgo system consists of a wearable device with electrodes that pick up otherwise undetectable neuromuscular subvocalizations — saying words “in your head” in natural language. The signals are fed to a neural network that is trained to identify subvocalized words from these signals. Bone-conduction headphones also transmit vibrations through the bones of the face to the inner ear to convey information to the user — privately and without interrupting a conversation. The device connects wirelessly to any external computing device via Bluetooth.
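The pipeline described above (electrode signals in, a neural network that identifies subvocalized words out) can be illustrated in miniature. The sketch below is not the authors' implementation: the toy vocabulary, the per-channel RMS features, the nearest-centroid classifier, and the synthetic signals are all invented stand-ins for the trained neural network the paper describes.

```python
import numpy as np

# Hypothetical sketch of a subvocalization classifier: window the
# multi-channel electrode signal, reduce each window to per-channel RMS
# features, and label it by nearest centroid. The real AlterEgo system
# uses a trained neural network; this only shows the shape of the pipeline.

VOCAB = ["yes", "no", "up", "down"]   # assumed toy vocabulary
N_CHANNELS = 7                        # AlterEgo reportedly uses 7 electrodes

def features(window: np.ndarray) -> np.ndarray:
    """Per-channel RMS of a (samples, channels) signal window."""
    return np.sqrt((window ** 2).mean(axis=0))

def train_centroids(windows, labels):
    """Mean feature vector per vocabulary word."""
    feats = np.array([features(w) for w in windows])
    return {word: feats[np.array(labels) == word].mean(axis=0)
            for word in set(labels)}

def classify(window, centroids):
    f = features(window)
    return min(centroids, key=lambda w: np.linalg.norm(f - centroids[w]))

# Synthetic demo: each word gets a distinct per-channel activation profile.
rng = np.random.default_rng(0)
profiles = {w: rng.uniform(0.5, 2.0, N_CHANNELS) for w in VOCAB}

def synth(word):
    return rng.normal(0, profiles[word], size=(250, N_CHANNELS))

train_w, train_y = zip(*[(synth(w), w) for w in VOCAB for _ in range(20)])
centroids = train_centroids(train_w, train_y)
acc = np.mean([classify(synth(w), centroids) == w
               for w in VOCAB for _ in range(25)])
```

On this synthetic data the separation is easy; the hard part of the real system is that genuine neuromuscular signals are weak, noisy, and person-specific, which is why a trained neural network is used instead.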

A silent, discreet, bidirectional conversation with machines. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?” says Arnav Kapur, a graduate student at the MIT Media Lab who led the development of the new system. Kapur is first author on an open-access paper on the research presented in March at the IUI ’18 23rd International Conference on Intelligent User Interfaces.

In one of the researchers’ experiments, subjects used the system to silently report opponents’ moves in a chess game and silently receive recommended moves from a chess-playing computer program. In another experiment, subjects were able to discreetly obtain answers to difficult computational questions, such as the square root of a large number, and to queries about obscure facts. The researchers achieved a median word-identification accuracy of 92%, which they expect to improve. “I think we’ll achieve full conversation someday,” Kapur said.

Non-disruptive. “We basically can’t live without our cellphones, our digital devices,” says Pattie Maes, a professor of media arts and sciences and Kapur’s thesis advisor. “But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself.

“So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”*


Even the tiniest signal to her jaw or larynx might be interpreted as a command. Keeping one hand on the sensitivity knob, she concentrated to erase mistakes the machine kept interpreting as nascent words.

Few people used subvocals, for the same reason few ever became street jugglers. Not many could operate the delicate systems without tipping into chaos. Any normal mind kept intruding with apparent irrelevancies, many ascending to the level of muttered or almost-spoken words the outer consciousness hardly noticed, but which the device manifested visibly and in sound.

Tunes that pop into your head… stray associations you generally ignore… memories that wink in and out… impulses to action… often rising to tickle the larynx, the tongue, stopping just short of sound…

As she thought each of those words, lines of text appeared on the right, as if a stenographer were taking dictation from her subvocalized thoughts. Meanwhile, at the left-hand periphery, an extrapolation subroutine crafted little simulations. A tiny man with a violin. A face that smiled and closed one eye… It was well this device only read the outermost, superficial nervous activity, associated with the speech centers.

When invented, the sub-vocal had been hailed as a boon to pilots — until high-performance jets began plowing into the ground. We experience ten thousand impulses for every one we allow to become action. Accelerating the choice and decision process did more than speed reaction time. It also shortcut judgment.

Even as a computer input device, it was too sensitive for most people. Few wanted extra speed if it also meant the slightest sub-surface reaction could become embarrassingly real, in amplified speech or writing.

If they ever really developed a true brain to computer interface, the chaos would be even worse.

— From EARTH (1989) chapter 35 by David Brin (with permission)


IoT control. In the conference paper, the researchers suggest that an “internet of things” (IoT) controller “could enable a user to control home appliances and devices (switch on/off home lighting, television control, HVAC systems etc.) through internal speech, without any observable action.” Or schedule an Uber pickup.

Peripheral devices could also be directly interfaced with the system. “For instance, lapel cameras and smart glasses could directly communicate with the device and provide contextual information to and from the device. … The device also augments how people share and converse. In a meeting, the device could be used as a back-channel to silently communicate with another person.”

Applications of the technology could also include high-noise environments, like the flight deck of an aircraft carrier, or even places with a lot of machinery, like a power plant or a printing press, suggests Thad Starner, a professor in Georgia Tech’s College of Computing. “There’s a lot of places where it’s not a noisy environment but a silent environment. A lot of time, special-ops folks have hand gestures, but you can’t always see those. Wouldn’t it be great to have silent-speech for communication between these folks? The last one is people who have disabilities where they can’t vocalize normally.”

* Or users could, conceivably, simply zone out — checking texts, email messages, and Twitter (all converted to voice) during boring meetings, or even reply, using mentally selected “smart reply” type options.

DARPA-funded prosthetic memory system successful in humans, study finds

Hippocampal prosthesis restores memory functions by creating “MIMO” model-based electrical stimulation of the hippocampus — bypassing a damaged brain region (red X). (credit: USC)

Scientists at Wake Forest Baptist Medical Center and the University of Southern California (USC) Viterbi School of Engineering have demonstrated a neural prosthetic system that can improve memory by “writing” information “codes,” based on a patient’s specific memory patterns, into the hippocampus (a brain region involved in forming new memories) via an implanted electrode.

In this pilot study, described in a paper published in the Journal of Neural Engineering, epilepsy patients’ short-term memory performance showed a 35 to 37 percent improvement over baseline measurements. The research, funded by the U.S. Defense Advanced Research Projects Agency (DARPA), offers evidence supporting pioneering research by USC scientist Theodore Berger, Ph.D. (a co-author of the paper), on an electronic system for restoring memory in rats (reported on KurzweilAI in 2011).

“This is the first time scientists have been able to identify a patient’s own brain-cell code or pattern for memory and, in essence, ‘write in’ that code to make existing memory work better — an important first step in potentially restoring memory loss,” said the paper’s lead author Robert Hampson, Ph.D., professor of physiology/pharmacology and neurology at Wake Forest Baptist.

The study focused on improving episodic memory (information that is new and useful for a short period of time, such as where you parked your car on any given day) — the most common type of memory loss in people with Alzheimer’s disease, stroke, and head injury.

The researchers enrolled epilepsy patients at Wake Forest Baptist who were participating in a diagnostic brain-mapping procedure that used surgically implanted electrodes placed in various parts of the brain to pinpoint the origin of the patients’ seizures.

Reinforcing memories

(LEFT) In one test*, the researchers recorded (“Actual”) the neural patterns or “codes” between two of three main areas of the hippocampus, known as CA3 and CA1, while the eight study participants were performing a computerized memory task. The patients were shown a simple image, such as a color block, and after a brief delay where the screen was blanked, were then asked to identify the initial image out of four or five on the screen. The USC team, led by biomedical engineers Theodore Berger, Ph.D., and Dong Song, Ph.D., analyzed the recordings from the correct responses and synthesized a code (RIGHT) for correct memory performance, based on a multi-input multi-output (MIMO) nonlinear mathematical model. The Wake Forest Baptist team played back that code to the patients (“Predicted” signal) while the patients performed the image-recall task. In this test, the patients’ episodic memory performance then showed a 37 percent improvement over baseline. (credit: USC)
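The record-fit-play-back loop in the caption can be illustrated with a deliberately simplified stand-in. The real Berger/Song model is nonlinear with temporal kernels; the sketch below substitutes a linear least-squares fit, and every dimension and signal is synthetic, purely to show the multi-input multi-output shape of the prediction step.

```python
import numpy as np

# Toy MIMO sketch: predict output-region (CA1) activity from input-region
# (CA3) activity, one read-out per CA1 channel driven by all CA3 channels.
# The actual prosthesis model is nonlinear; a linear fit stands in here,
# and all dimensions below are invented for illustration.

rng = np.random.default_rng(1)
n_bins, n_ca3, n_ca1 = 400, 16, 8          # time bins, CA3 inputs, CA1 outputs

true_W = rng.normal(size=(n_ca3, n_ca1))   # hidden ground-truth coupling
ca3 = rng.poisson(2.0, size=(n_bins, n_ca3)).astype(float)  # CA3 spike counts
ca1 = ca3 @ true_W + rng.normal(0, 0.5, size=(n_bins, n_ca1))

# "Training": fit the coupling from recordings of correct-response trials.
W_hat, *_ = np.linalg.lstsq(ca3, ca1, rcond=None)

# "Playback": the predicted CA1 pattern for a new CA3 observation is the
# kind of code the prosthesis would deliver as stimulation.
ca3_new = rng.poisson(2.0, size=(1, n_ca3)).astype(float)
predicted_code = ca3_new @ W_hat
```

The essential point the toy preserves is that the model is fitted only on trials where memory encoding succeeded, so its predictions approximate "correct" output patterns rather than whatever the damaged circuit happens to produce.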

“We showed that we could tap into a patient’s own memory content, reinforce it, and feed it back to the patient,” Hampson said. “Even when a person’s memory is impaired, it is possible to identify the neural firing patterns that indicate correct memory formation and separate them from the patterns that are incorrect. We can then feed in the correct patterns to assist the patient’s brain in accurately forming new memories, not as a replacement for innate memory function, but as a boost to it.

“To date we’ve been trying to determine whether we can improve the memory skill people still have. In the future, we hope to be able to help people hold onto specific memories, such as where they live or what their grandkids look like, when their overall memory begins to fail.”

The current study is built on more than 20 years of preclinical research on memory codes led by Sam Deadwyler, Ph.D., professor of physiology and pharmacology at Wake Forest Baptist, along with Hampson, Berger, and Song. The preclinical work applied the same type of stimulation to restore and facilitate memory in animal models using the MIMO system, which was developed at USC.

* In a second test, participants were shown a highly distinctive photographic image, followed by a short delay, and asked to identify the first photo out of four or five others on the screen. The memory trials were repeated with different images while the neural patterns were recorded during the testing process to identify and deliver correct-answer codes. After another longer delay, Hampson’s team showed the participants sets of three pictures at a time with both an original and new photos included in the sets, and asked the patients to identify the original photos, which had been seen up to 75 minutes earlier. When stimulated with the correct-answer codes, study participants showed a 35 percent improvement in memory over baseline.


Abstract of Developing a hippocampal neural prosthetic to facilitate human memory encoding and recall

Objective. We demonstrate here the first successful implementation in humans of a proof-of-concept system for restoring and improving memory function via facilitation of memory encoding using the patient’s own hippocampal spatiotemporal neural codes for memory. Memory in humans is subject to disruption by drugs, disease and brain injury, yet previous attempts to restore or rescue memory function in humans typically involved only nonspecific modulation of brain areas and neural systems related to memory retrieval. Approach. We have constructed a model of processes by which the hippocampus encodes memory items via spatiotemporal firing of neural ensembles that underlie the successful encoding of short-term memory. A nonlinear multi-input, multi-output (MIMO) model of hippocampal CA3 and CA1 neural firing is computed that predicts activation patterns of CA1 neurons during the encoding (sample) phase of a delayed match-to-sample (DMS) human short-term memory task. Main results. MIMO model-derived electrical stimulation delivered to the same CA1 locations during the sample phase of DMS trials facilitated short-term/working memory by 37% during the task. Longer term memory retention was also tested in the same human subjects with a delayed recognition (DR) task that utilized images from the DMS task, along with images that were not from the task. Across the subjects, the stimulated trials exhibited significant improvement (35%) in both short-term and long-term retention of visual information. Significance. These results demonstrate the facilitation of memory encoding, which is an important feature for the construction of an implantable neural prosthetic to improve human memory.

The brain learns completely differently than we’ve assumed, new learning theory says

(credit: Getty)

A revolutionary new theory contradicts a fundamental assumption in neuroscience about how the brain learns. According to researchers at Bar-Ilan University in Israel led by Prof. Ido Kanter, the theory promises to transform our understanding of brain dysfunction and may lead to advanced, faster, deep-learning algorithms.

A biological schema of an output neuron, comprising a neuron’s soma (body, shown as gray circle, top) with two roots of dendritic trees (light-blue arrows), splitting into many dendritic branches (light-blue lines). The signals arriving from the connecting input neurons (gray circles, bottom) travel via their axons (red lines) and their many branches until terminating with the synapses (green stars). There, the signals connect with dendrites (some synapse branches travel to other neurons), which then connect to the soma. (credit: Shira Sardi et al./Sci. Rep)

The brain is a highly complex network containing billions of neurons. Each of these neurons communicates simultaneously with thousands of others via their synapses. A neuron collects its many synaptic incoming signals through dendritic trees.

In 1949, Donald Hebb suggested that learning occurs in the brain by modifying the strength of synapses. Hebb’s theory has remained a deeply rooted assumption in neuroscience.

Synaptic vs. dendritic learning

In vitro experimental setup. A micro-electrode array comprising 60 extracellular electrodes separated by 200 micrometers, indicating a neuron patched (connected) by an intracellular electrode (orange) and a nearby extracellular electrode (green line). (Inset) Reconstruction of a fluorescence image, showing a patched cortical pyramidal neuron (red) and its dendrites growing in different directions and in proximity to extracellular electrodes. (credit: Shira Sardi et al./Scientific Reports adapted by KurzweilAI)

Hebb was wrong, says Kanter. “A new type of experiments strongly indicates that a faster and enhanced learning process occurs in the neuronal dendrites, similarly to what is currently attributed to the synapse,” Kanter and his team suggest in an open-access paper in Nature’s Scientific Reports, published Mar. 23, 2018.

“In this new [faster] dendritic learning process, there are [only] a few adaptive parameters per neuron, in comparison to thousands of tiny and sensitive ones in the synaptic learning scenario,” says Kanter. “Does it make sense to measure the quality of air we breathe via many tiny, distant satellite sensors at the elevation of a skyscraper, or by using one or several sensors in close proximity to the nose?” he asks. “Similarly, it is more efficient for the neuron to estimate its incoming signals close to its computational unit, the neuron.”

Image representing the current synaptic (pink) vs. the new dendritic (green) learning scenarios of the brain. In the current scenario, a neuron (black) with a small number (two in this example) of dendritic trees (center) collects incoming signals via synapses (represented by red valves), with many thousands of tiny adjustable learning parameters. In the new dendritic learning scenario (green) a few (two in this example) adjustable controls (red valves) are located in close proximity to the computational element, the neuron. The scale is such that if a neuron collecting its incoming signals is represented by a person’s faraway fingers, the length of its hands would be as tall as a skyscraper (left). (credit: Prof. Ido Kanter)

The researchers also found that weak synapses, which comprise the majority of our brain and were previously assumed to be insignificant, actually play an important role in the dynamics of our brain.

According to the researchers, the new learning theory may lead to advanced, faster, deep-learning algorithms and other artificial-intelligence-based applications, and also suggests that we need to reevaluate our current treatments for disordered brain functionality.

This research is supported in part by the TELEM grant of the Israel Council for Higher Education.


Abstract of Adaptive nodes enrich nonlinear cooperative learning beyond traditional adaptation by links

Physical models typically assume time-independent interactions, whereas neural networks and machine learning incorporate interactions that function as adjustable parameters. Here we demonstrate a new type of abundant cooperative nonlinear dynamics where learning is attributed solely to the nodes, instead of the network links, whose number is significantly larger. The nodal, neuronal, fast adaptation follows its relative anisotropic (dendritic) input timings, as indicated experimentally, similarly to the slow learning mechanism currently attributed to the links, synapses. It represents a non-local learning rule, where effectively many incoming links to a node concurrently undergo the same adaptation. The network dynamics is now counterintuitively governed by the weak links, which previously were assumed to be insignificant. This cooperative nonlinear dynamic adaptation presents a self-controlled mechanism to prevent divergence or vanishing of the learning parameters, as opposed to learning by links, and also supports self-oscillations of the effective learning parameters. It hints at a hierarchical computational complexity of nodes, following their number of anisotropic inputs, and opens new horizons for advanced deep learning algorithms and artificial intelligence based applications, as well as a new mechanism for enhanced and fast learning by neural networks.

Recording data from one million neurons in real time

(credit: Getty)

Neuroscientists at the Neuronano Research Centre at Lund University in Sweden have developed and tested an ambitious new design for processing and storing the massive amounts of data expected from future implantable brain machine interfaces (BMIs) and brain-computer interfaces (BCIs).

The system would simultaneously acquire data from more than 1 million neurons in real time. It would convert the spike data (using bit encoding) and send it via an efficient communication format for processing and storage on conventional computer systems. It would also provide feedback to a subject in under 25 milliseconds — stimulating up to 100,000 neurons.
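The bit-encoding step can be sketched directly: one bit per neuron per time bin records whether that neuron spiked, packed eight neurons to a byte. The neuron count below matches the article's one-million target; the bin width and the API shape are assumptions for illustration, not the paper's actual format.

```python
import numpy as np

# Sketch of bit-encoded spike data: for each time bin, bit i is 1 iff
# neuron i spiked in that bin. A million-neuron bin always occupies a
# fixed 125,000 bytes, however many neurons fired, which makes the
# stream easy to transfer and store at a constant rate.

N_NEURONS = 1_000_000   # the article's target scale; bin width is assumed

def encode_bin(spiking_ids):
    """Pack one time bin: IDs of spiking neurons -> packed bit array."""
    bits = np.zeros(N_NEURONS, dtype=np.uint8)
    bits[spiking_ids] = 1
    return np.packbits(bits).tobytes()   # 8 neurons per byte

def decode_bin(payload):
    """Unpack a time bin back into the IDs of spiking neurons."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8),
                         count=N_NEURONS)
    return np.flatnonzero(bits)

payload = encode_bin(np.array([3, 17, 999_999]))
```

A useful side effect, noted in the abstract, is that spatiotemporal spike patterns become fast bitwise operations on these packed arrays (e.g., AND-ing two bins finds neurons active in both).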

Monitoring large areas of the brain in real time. Applications of this new design include basic research, clinical diagnosis, and treatment. It would be especially useful for future implantable, bidirectional BMIs and BCIs, which are used to communicate complex data between neurons and computers. This would include monitoring large areas of the brain in paralyzed patients, revealing an imminent epileptic seizure, and providing real-time feedback control to robotic arms used by quadriplegics and others.

The system is intended for recording neural signals from implanted electrodes, such as this 32-electrode grid, used for long-term, stable neural recording and treatment of neurological disorders. (credit: Thor Balkhed)

“A considerable benefit of this architecture and data format is that it doesn’t require further translation, as the brain’s [spiking] signals are translated directly into bitcode,” making it available for computer processing and dramatically increasing the processing speed and database storage capacity.

“This means a considerable advantage in all communication between the brain and computers, not the least regarding clinical applications,” says Bengt Ljungquist, lead author of the study and doctoral student at Lund University.

Future BMI/BCI systems. Current neural-data acquisition systems are typically limited to 512 or 1024 channels and the data is not easily converted into a form that can be processed and stored on PCs and other computer systems.

“The demands on hardware and software used in the context of BMI/BCI are already high, as recent studies have used recordings of up to 1792 channels for a single subject,” the researchers note in an open-access paper published in the journal Neuroinformatics.

That’s expected to increase. In 2016, DARPA (U.S. Defense Advanced Research Project Agency) announced its Neural Engineering System Design (NESD) program*, intended “to develop an implantable neural interface able to provide unprecedented signal resolution and data-transfer bandwidth between the human brain and the digital world. …

“Neural interfaces currently approved for human use squeeze a tremendous amount of information through just 100 channels, with each channel aggregating signals from tens of thousands of neurons at a time. The result is noisy and imprecise. In contrast, the NESD program aims to develop systems that can communicate clearly and individually with any of up to one million neurons in a given region of the brain.”

System architecture overview of storage for large amounts of real-time neural data, proposed by Lund University researchers. A master clock pulse (a) synchronizes n acquisition systems (b), which handle bandpass filtering, spike sorting (for spike data), and down-sampling (for narrow-band data), receiving electrophysiological data from the subject (e). Neuronal spike data is encoded in a data grid of neurons × time bins (c). The resulting data grid is serialized and sent to spike-data storage in HDF5 file format (d), as well as to narrow-band (f) and waveform data storage (g). In this work, a and b are simulated, c and d are implemented, while f and g are suggested (not yet implemented) components. (credit: Bengt Ljungquist et al./Neuroinformatics)

* DARPA has since announced that it has “awarded contracts to five research organizations and one company that will support the Neural Engineering System Design (NESD) program: Brown University; Columbia University; Fondation Voir et Entendre (The Seeing and Hearing Foundation); John B. Pierce Laboratory; Paradromics, Inc.; and the University of California, Berkeley. These organizations have formed teams to develop the fundamental research and component technologies required to pursue the NESD vision of a high-resolution neural interface and integrate them to create and demonstrate working systems able to support potential future therapies for sensory restoration. Four of the teams will focus on vision and two will focus on aspects of hearing and speech.”


Abstract of A Bit-Encoding Based New Data Structure for Time and Memory Efficient Handling of Spike Times in an Electrophysiological Setup

Recent neuroscientific and technical developments of brain machine interfaces have put increasing demands on neuroinformatic databases and data handling software, especially when managing data in real time from large numbers of neurons. Extrapolating these developments we here set out to construct a scalable software architecture that would enable near-future massive parallel recording, organization and analysis of neurophysiological data on a standard computer. To this end we combined, for the first time in the present context, bit-encoding of spike data with a specific communication format for real time transfer and storage of neuronal data, synchronized by a common time base across all unit sources. We demonstrate that our architecture can simultaneously handle data from more than one million neurons and provide, in real time (< 25 ms), feedback based on analysis of previously recorded data. In addition to managing recordings from very large numbers of neurons in real time, it also has the capacity to handle the extensive periods of recording time necessary in certain scientific and clinical applications. Furthermore, the bit-encoding proposed has the additional advantage of allowing an extremely fast analysis of spatiotemporal spike patterns in a large number of neurons. Thus, we conclude that this architecture is well suited to support current and near-future Brain Machine Interface requirements.

New algorithm will allow for simulating neural connections of entire brain on future exascale supercomputers

(credit: iStock)

An international team of scientists has developed an algorithm that represents a major step toward simulating neural connections in the entire human brain.

The new algorithm, described in an open-access paper published in Frontiers in Neuroinformatics, is intended to allow simulation of the human brain’s 100 billion interconnected neurons on supercomputers. The work involves researchers at the Jülich Research Centre, the Norwegian University of Life Sciences, Aachen University, RIKEN, and the KTH Royal Institute of Technology.

An open-source neural simulation tool. The algorithm was developed using NEST* (“neural simulation tool”) — open-source simulation software in widespread use by the neuroscientific community and a core simulator of the European Human Brain Project. With NEST, the behavior of each neuron in the network is represented by a small number of mathematical equations, the researchers explain in an announcement.

Since 2014, large-scale simulations of neural networks using NEST have been running on the petascale** K supercomputer at RIKEN and JUQUEEN supercomputer at the Jülich Supercomputing Centre in Germany to simulate the connections of about one percent of the neurons in the human brain, according to Markus Diesmann, PhD, Director at the Jülich Institute of Neuroscience and Medicine. Those simulations have used a previous version of the NEST algorithm.

Why supercomputers can’t model the entire brain (yet). “Before a neuronal network simulation can take place, neurons and their connections need to be created virtually,” explains senior author Susanne Kunkel of KTH Royal Institute of Technology in Stockholm.

During the simulation, a neuron’s action potentials (short electric pulses) first need to be sent to all 100,000 or so small computers, called nodes, each equipped with a number of processors doing the actual calculations. Each node then checks which of all these pulses are relevant for the virtual neurons that exist on this node.

That process requires one bit of information per processor for every neuron in the whole network. For a network of one billion neurons, a large part of the memory in each node is consumed by this single bit of information per neuron. Of course, the amount of computer memory required per processor for these extra bits per neuron increases with the size of the neuronal network. To go beyond the 1 percent and simulate the entire human brain would require the memory available to each processor to be 100 times larger than in today’s supercomputers.
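The arithmetic behind this bottleneck is easy to check. A back-of-envelope sketch, using the article's one-billion-neuron example (all numbers illustrative):

```python
# One bit per neuron, per process, just to mark "is this spike relevant
# to my local neurons?" -- the overhead the new algorithm eliminates.

neurons = 1_000_000_000                  # 1 billion neurons in the network
bytes_per_process = neurons * 1 // 8     # one bit each
mib_per_process = bytes_per_process / 1024**2   # ~119 MiB per process

# Scaling to the whole human brain (~100 billion neurons) multiplies this
# by 100, to roughly 11.6 GiB per process, far beyond the memory available
# to each processor on current supercomputers.
brain_factor = 100
gib_whole_brain = mib_per_process * brain_factor / 1024
```

Note that this cost grows with the size of the whole network, not with the number of neurons a node actually hosts, which is why adding more nodes does not help.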

In future exascale** computers, such as the post-K computer planned in Kobe and JUWELS at Jülich*** in Germany, the number of processors per compute node will increase, but the memory per processor and the number of compute nodes will stay the same.

Achieving whole-brain simulation on future exascale supercomputers. That’s where the next-generation NEST algorithm comes in. At the beginning of the simulation, the new NEST algorithm will allow the nodes to exchange information about which neuronal-activity data needs to be sent, and to where. Once this knowledge is available, the exchange of data between nodes can be organized such that a given node only receives the information it actually requires. That will eliminate the need for the additional bit for each neuron in the network.
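The scheme can be sketched with a toy network: connectivity is exchanged once during setup, producing a directed send-list per source neuron, and the simulation phase then routes each spike only to the nodes hosting its targets instead of broadcasting it everywhere. The network and the placement rule below are invented for illustration; the actual NEST infrastructure is far more elaborate.

```python
# Toy sketch of directed spike communication (invented network and
# placement rule). Setup: exchange connectivity once and build per-source
# send-lists. Simulation: deliver each spike only where it is needed.

connections = {          # source neuron -> its target neurons
    0: [10, 11],
    1: [12],
    2: [10, 13],
}

def node_of(neuron):
    """Toy placement: neurons distributed round-robin over 4 compute nodes."""
    return neuron % 4

# Setup phase: performed once, before the simulation starts.
send_list = {src: sorted({node_of(t) for t in targets})
             for src, targets in connections.items()}

def deliver(spiking_neuron):
    """Simulation phase: a spike travels only to the nodes that need it."""
    return send_list.get(spiking_neuron, [])

# Neuron 1 targets only neuron 12 (hosted on node 0), so its spikes reach
# a single node; a broadcast scheme would have touched all four.
```

Because the send-lists replace the per-neuron relevance bit on every process, memory consumption no longer scales with the size of the whole network, only with each node's actual connectivity.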

Brain-simulation software, running on a current petascale supercomputer, can only represent about 1 percent of neuron connections in the cortex of a human brain (dark red area of brain on left). Only about 10 percent of neuron connections (center) would be possible on the next generation of exascale supercomputers, which will exceed the performance of today’s high-end supercomputers by 10- to 100-fold. However, a new algorithm could allow for 100 percent (whole-brain-scale simulation) on exascale supercomputers, using the same amount of computer memory as current supercomputers. (credit: Forschungszentrum Jülich, adapted by KurzweilAI)

With memory consumption under control, simulation speed will then become the main focus. For example, a large simulation of 0.52 billion neurons connected by 5.8 trillion synapses running on the supercomputer JUQUEEN in Jülich previously required 28.5 minutes to compute one second of biological time. With the improved algorithm, the time will be reduced to just 5.2 minutes, the researchers calculate.

“The combination of exascale hardware and [forthcoming NEST] software brings investigations of fundamental aspects of brain function, like plasticity and learning, unfolding over minutes of biological time, within our reach,” says Diesmann.

The new algorithm will also make simulations faster on presently available petascale supercomputers, the researchers found.

NEST simulation software update. In one of the next releases of the simulation software by the Neural Simulation Technology Initiative, the researchers will make the new open-source code freely available to the community.

For the first time, researchers will have the computer power available to simulate neuronal networks on the scale of the entire human brain.

Kenji Doya of Okinawa Institute of Science and Technology (OIST) may be among the first to try it. “We have been using NEST for simulating the complex dynamics of the basal ganglia circuits in health and Parkinson’s disease on the K computer. We are excited to hear the news about the new generation of NEST, which will allow us to run whole-brain-scale simulations on the post-K computer to clarify the neural mechanisms of motor control and mental functions,” he says.

* NEST is a simulator for spiking neural network models that focuses on the dynamics, size and structure of neural systems, rather than on the exact morphology of individual neurons. NEST is ideal for networks of spiking neurons of any size, such as models of information processing, e.g., in the visual or auditory cortex of mammals, models of network activity dynamics, e.g., laminar cortical networks or balanced random networks, and models of learning and plasticity.

** Petascale supercomputers operate at petaflop/s speeds (quadrillions, or 10^15, floating-point operations per second). Future exascale supercomputers will operate at exaflop/s (10^18 flop/s). The fastest supercomputer at this time is the Sunway TaihuLight at the National Supercomputing Center in Wuxi, China, operating at 93 petaflop/s.

*** At Jülich, the work is supported by the Simulation Laboratory Neuroscience, a facility of the Bernstein Network Computational Neuroscience at Jülich Supercomputing Centre. Partial funding comes from the European Union Seventh Framework Programme (Human Brain Project, HBP) and the European Union’s Horizon 2020 research and innovation programme, and the Exploratory Challenge on Post-K Computer (Understanding the neural mechanisms of thoughts and its applications to AI) of the Ministry of Education, Culture, Sports, Science and Technology (MEXT) Japan. With their joint project between Japan and Europe, the researchers hope to contribute to the formation of an International Brain Initiative (IBI).

BernsteinNetwork | NEST — A brain simulator


Abstract of Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
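The sparsity argument in the abstract follows from a simple observation: a cortical neuron receives a roughly fixed number of incoming connections (on the order of 10^4), so as the network grows, the fraction of the dense N × N connection matrix that is actually used shrinks as 1/N. A toy calculation, with order-of-magnitude numbers only:

```python
# Toy illustration of why fixed in-degree makes brain-scale networks sparse:
# each neuron receives a bounded number of connections (~10^4 in cortex), so
# the fraction of all possible N*N connections that exist falls as 1/N.
# Numbers are order-of-magnitude only.

def connectivity_density(n_neurons, in_degree=10_000):
    """Fraction of the dense N x N connection matrix actually used."""
    possible = n_neurons * n_neurons
    actual = n_neurons * min(in_degree, n_neurons - 1)
    return actual / possible

for n in (10**4, 10**6, 10**9):  # small model, large model, ~human-brain scale
    print(f"N = {n:>13,}: density = {connectivity_density(n):.2e}")
```

At laptop scale the connection matrix is nearly full; at brain scale only about one entry in 100,000 exists, which is why a simulator targeting exascale machines must avoid any data structure proportional to N².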

 

Neuroscientists devise scheme for mind-uploading centuries in the future

Representative electron micrograph of white matter region in cryopreserved pig brain (credit: Brain Preservation Foundation)

Two researchers — Robert McIntyre, an MIT graduate, and Gregory M. Fahy, Ph.D., Chief Scientific Officer of 21st Century Medicine (21CM) — have developed a method for preserving a brain’s connectome (the 150 trillion microscopic synaptic connections presumed to encode all of a person’s knowledge) so that it could be scanned in the future.

That data could possibly be used, centuries later, to reconstruct a whole-brain emulation — uploading your mind into a computer or Avatar-style robotic, virtual, or synthetic body, McIntyre and others suggest.

According to MIT Technology Review, McIntyre has formed a startup company called Nectome that has won a large NIH grant for creating “technologies to enable whole-brain nanoscale preservation and imaging.”

McIntyre is also collaborating with Edward Boyden, Ph.D., a top neuroscientist at MIT and inventor of a new “expansion microscopy” technique (which achieves super-resolution with ordinary confocal microscopes), as KurzweilAI recently reported. The technique also causes brain tissue to swell, making it more accessible.

Preserving brain information patterns, not biological function

Unlike cryonics (freezing people or heads for future revival), the researchers did not intend to revive a pig or pig brain (or human, in the future). Instead, the idea is to develop a bridge to future mind-uploading technology by preserving the information content of the brain, as encoded within the frozen connectome.

The first step in the ASC procedure is to perfuse the brain’s vascular system with the toxic fixative glutaraldehyde (typically used as an embalming fluid but also used by neuroscientists to prepare brain tissue for the highest resolution electron microscopic and immunofluorescent examination). That instantly halts metabolic processes by covalently crosslinking the brain’s proteins in place, leading to death (by contemporary standards). The brain is then quickly stored at -130 degrees C, stopping all further decay.

The method, tested on a pig’s brain, led to 21st Century Medicine (21CM), lead researcher McIntyre, and senior author Fahy winning the $80,000 Large Mammal Brain Preservation Prize offered by the Brain Preservation Foundation (BPF), announced March 13.

To accomplish this, McIntyre’s team scaled up the same procedure they used to previously preserve a rabbit brain, for which they won the BPF’s Small Mammal Prize in February 2016, as KurzweilAI has reported. That research was judged by neuroscientist Ken Hayworth, Ph.D., President of the Brain Preservation Foundation, and noted connectome researcher Sebastian Seung, Ph.D., of the Princeton Neuroscience Institute.

Caveats

However, the BPF warns that this single prize-winning laboratory demonstration is “insufficient to address the types of quality control measures that should be expected of any procedure that would be applied to humans.” Hayworth outlines here his position on a required medical procedure and associated quality-control protocol, prior to any such offering.

The ASC method, if verified by science, raises serious ethical, legal, and medical questions. For example:

  • Should ASC be developed into a medical procedure and if so, how?
  • Should ASC be available in an assisted suicide scenario for terminal patients?
  • Could ASC be a “last resort” to enable a dying person’s mind to survive and reach a future world?
  • How real are claims of future mind uploading?
  • Is it legal?*

“It may take decades or even centuries to develop the technology to upload minds if it is even possible at all,” says the BPF press release. “ASC would enable patients to safely wait out those centuries. For now, neuroscience is actively exploring the plausibility of mind uploading through ongoing studies of the physical basis of memory, and through development of large-scale neural simulations and tools to map connectomes.”

Interested? Nectome has a $10,000 (refundable) wait list.

* Nectome “has consulted with lawyers familiar with California’s two-year-old End of Life Option Act, which permits doctor-assisted suicide for terminal patients, and believes its service will be legal.” — MIT Technology Review

Measuring deep-brain neurons’ electrical signals at high speed with light instead of electrodes

MIT researchers have developed a light-sensitive protein that can be embedded into neuron membranes, where it emits a fluorescent signal that indicates how much voltage a particular cell is experiencing. This could allow scientists to more effectively study how neurons behave, millisecond by millisecond, as the brain performs a particular function. (credit: Courtesy of the researchers)

Researchers at MIT have developed a new approach to measure electrical activity deep in the brain: using light — an easier, faster, and more informative method than inserting electrodes.

They’ve developed a new light-sensitive protein that can be embedded into neuron membranes, where it emits a fluorescent signal that indicates how much voltage a particular cell is experiencing. This could allow scientists to study how neurons behave, millisecond by millisecond, as the brain performs a particular function.

Better than electrodes. “If you put an electrode in the brain, it’s like trying to understand a phone conversation by hearing only one person talk,” says Edward Boyden*, Ph.D., an associate professor of biological engineering and brain and cognitive sciences at MIT and a pioneer in optogenetics (a technique that allows scientists to control neurons’ electrical activity with light by engineering them to express light-sensitive proteins). Boyden is the senior author of the study, which appears in the Feb. 26 issue of Nature Chemical Biology.

“Now we can record the neural activity of many cells in a neural circuit and hear them as they talk to each other,” he says. The new method is also more effective than current optogenetics methods, which also use light-sensitive proteins to silence or stimulate neuron activity.

“Imaging of neuronal activity using voltage sensors opens up the exciting possibility for simultaneous recordings of large populations of neurons with single-cell single-spike resolution in vivo,” the researchers report in the paper.

Robot-controlled protein evolution. For the past two decades, Boyden and other scientists have sought a way to monitor electrical activity in the brain through optogenetic imaging, instead of recording with electrodes. But fluorescent molecules used for this kind of imaging have been limited in their speed of response, sensitivity to changes in voltage, and resistance to photobleaching (fading caused by exposure to light).

Instead, Boyden and his colleagues built a robot to screen millions of proteins. They generated the appropriate proteins for the traits they wanted by using a process called “directed protein evolution.” To demonstrate the power of this approach, they then narrowed down the evolved protein versions to a top performer, which they called “Archon1.” After the Archon1 gene is delivered into a cell, the expressed Archon1 protein embeds itself into the cell membrane — the ideal place for accurate measurement of a cell’s electrical activity.

Using light to measure neuron voltages. When the Archon1 cells are then exposed to a certain wavelength of reddish-orange light, the protein emits a longer wavelength of red light, and the brightness of that red light corresponds to the voltage (in millivolts) of that cell at a given moment in time. The researchers were able to use this method to measure electrical activity in mouse brain-tissue slices, in transparent zebrafish larvae, and in the transparent worm C. elegans (being transparent makes it easy to expose these organisms to light and to image the resulting fluorescence).
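The readout step can be pictured as a linear calibration from fluorescence change (ΔF/F) to membrane voltage. The sketch below is illustrative only: the slope and baseline are assumed values, not measured Archon1 parameters.

```python
# Sketch of converting a fluorescence trace into a voltage estimate.
# Voltage indicators of this type respond approximately linearly over the
# physiological range; the dF/F-per-millivolt slope and resting values here
# are illustrative assumptions, not Archon1's published calibration.

def fluorescence_to_voltage(f_trace, f_rest, dff_per_mv=0.004, v_rest=-70.0):
    """Map fluorescence samples to mV via a linear dF/F calibration."""
    return [v_rest + ((f - f_rest) / f_rest) / dff_per_mv for f in f_trace]

f_rest = 1000.0                      # baseline brightness at resting potential
trace = [1000.0, 1100.0, 1240.0]     # cell brightening during a spike
print(fluorescence_to_voltage(trace, f_rest))   # roughly [-70, -45, -10] mV
```

Each imaging frame thus yields one voltage sample per cell, which is how fluorescence movies become millisecond-scale electrical recordings across many neurons at once.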

The researchers are now working on using this technology to measure brain activity in live mice as they perform various tasks, which Boyden believes should allow for mapping neural circuits and discovering how the circuits produce specific behaviors. “We will be able to watch a neural computation happen,” he says. “Over the next five years or so, we’re going to try to solve some small brain circuits completely. Such results might take a step toward understanding what a thought or a feeling actually is.”

The researchers also showed that Archon1 can be used in conjunction with current optogenetics methods. In experiments with C. elegans, the researchers demonstrated that they could stimulate one neuron using blue light and then use Archon1 to measure the resulting effect in neurons that receive input from that cell.

Detecting electrical activity at millisecond speed. Harvard professor Adam Cohen, who developed the predecessor to Archon1, says the new protein brings scientists closer to the goal of imaging electrical activity in live brains at a millisecond timescale (1,000 measurements per second).

“Traditionally, it has been excruciatingly labor-intensive to engineer fluorescent voltage indicators, because each mutant had to be cloned individually and then tested through a slow, manual patch-clamp electrophysiology measurement,” says Cohen, who was not involved in this study. “The Boyden lab developed a very clever high-throughput screening approach to this problem. Their new reporter looks really great in fish and worms and in brain slices. I’m eager to try it in my lab.”

The research was funded by the HHMI-Simons Faculty Scholars Program, the IET Harvey Prize, the MIT Media Lab, the New York Stem Cell Foundation Robertson Award, the Open Philanthropy Project, John Doerr, the Human Frontier Science Program, the Department of Defense, the National Science Foundation, and the National Institutes of Health, including an NIH Director’s Pioneer Award.

* Boyden is also a member of MIT’s Media Lab, McGovern Institute for Brain Research, and Koch Institute for Integrative Cancer Research, and an HHMI-Simons Faculty Scholar.

** The researchers made 1.5 million mutated versions of a light-sensitive protein called QuasAr2 (previously engineered by Adam Cohen’s lab at Harvard University and based on the molecule Arch, which the Boyden lab reported in 2010). The researchers put each of those genes into mammalian cells (one mutant per cell), then grew the cells in lab dishes and used an automated microscope to take pictures of the cells. The robot was able to identify cells with proteins that met the criteria the researchers were looking for, the most important being the protein’s location within the cell and its brightness. The research team then selected five of the best candidates and did another round of mutation, generating 8 million new candidates. The robot picked out the seven best of these, which the researchers then narrowed down to Archon1.
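The screen-and-mutate loop in this footnote can be sketched as follows. The fitness scores here are random stand-ins for the robot’s microscope measurements, and the library sizes are scaled far down from the real 1.5-million- and 8-million-member screens:

```python
import random

# Toy sketch of the multi-round directed-evolution screen described above:
# score a mutant library on two axes (brightness, membrane localization),
# keep the top performers, re-mutate them, and screen again. Scores are
# random stand-ins for microscope measurements; library sizes are scaled
# far down from the real 1.5M / 8M screens.

random.seed(0)

def measure(mutant):
    """Stand-in for imaging: returns (brightness, localization) scores."""
    return mutant  # in this toy model, a mutant *is* its pair of scores

def screen(library, keep):
    """Keep mutants that are good on BOTH axes (rank by the weaker score)."""
    return sorted(library, key=lambda m: min(measure(m)), reverse=True)[:keep]

def mutate(parents, n_children):
    """Jitter parents' scores to mimic a new round of mutagenesis."""
    return [(max(0.0, b + random.gauss(0, 0.1)), max(0.0, l + random.gauss(0, 0.1)))
            for b, l in (random.choice(parents) for _ in range(n_children))]

round1 = [(random.random(), random.random()) for _ in range(15_000)]
top5 = screen(round1, keep=5)                 # analog of the 5 best candidates
round2 = mutate(top5, n_children=80_000)      # second round of mutagenesis
top7 = screen(round2, keep=7)                 # analog of the robot's final 7
print("best candidate (brightness, localization):", max(top7, key=min))
```

Ranking by the weaker of the two scores is one simple way to demand that a candidate be good on every axis simultaneously, which is the multidimensional aspect the paper emphasizes.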


Abstract of A robotic multidimensional directed evolution approach applied to fluorescent voltage reporters

We developed a new way to engineer complex proteins toward multidimensional specifications using a simple, yet scalable, directed evolution strategy. By robotically picking mammalian cells that were identified, under a microscope, as expressing proteins that simultaneously exhibit several specific properties, we can screen hundreds of thousands of proteins in a library in just a few hours, evaluating each along multiple performance axes. To demonstrate the power of this approach, we created a genetically encoded fluorescent voltage indicator, simultaneously optimizing its brightness and membrane localization using our microscopy-guided cell-picking strategy. We produced the high-performance opsin-based fluorescent voltage reporter Archon1 and demonstrated its utility by imaging spiking and millivolt-scale subthreshold and synaptic activity in acute mouse brain slices and in larval zebrafish in vivo. We also measured postsynaptic responses downstream of optogenetically controlled neurons in C. elegans.

Low-cost EEG can now be used to reconstruct images of what you see

(left:) Test image displayed on computer monitor. (right:) Image captured by EEG and decoded. (credit: Dan Nemrodov et al./eNeuro)

A new technique developed by University of Toronto Scarborough neuroscientists has, for the first time, used EEG detection of brain activity in reconstructing images of what people perceive.

The new technique “could provide a means of communication for people who are unable to verbally communicate,” said Dan Nemrodov, Ph.D., a postdoctoral fellow in Assistant Professor Adrian Nestor’s lab at U of T Scarborough. “It could also have forensic uses for law enforcement in gathering eyewitness information on potential suspects, rather than relying on verbal descriptions provided to a sketch artist.”

(left:) EEG electrodes used in the study (photo credit: Ken Jones). (right in red:) The area where the images were detected, the occipital lobe, is the visual processing center of the mammalian brain, containing most of the anatomical region of the visual cortex. (credit: CC/Wikipedia)

For the study, test subjects were shown images of faces while their brain activity was detected by EEG (electroencephalogram) electrodes over the occipital lobe, the visual processing center of the brain. The data was then processed by the researchers, using a technique based on machine learning algorithms that allowed for digitally recreating the image that was in the subject’s mind.

More practical than fMRI for reconstructing brain images

This new technique was pioneered by Nestor, who successfully reconstructed facial images from functional magnetic resonance imaging (fMRI) data in the past.

According to Nemrodov, techniques like fMRI — which measures brain activity by detecting changes in blood flow — can grab finer details of what’s going on in specific areas of the brain, but EEG has greater practical potential given that it’s more common, portable, and inexpensive by comparison.

While fMRI captures activity at the time scale of seconds, EEG captures activity at the millisecond scale, he says. “So we can see, with very fine detail, how the percept of a face develops in our brain using EEG.” The researchers found that it takes the brain about 120 milliseconds (0.12 seconds) to form a good representation of a face we see, but the important time period for recording starts around 200 milliseconds, Nemrodov says. That’s followed by machine-learning processing to decode the image.*

This study provides validation that EEG has potential for this type of image reconstruction, notes Nemrodov, something many researchers doubted was possible, given its apparent limitations.

Clinical and forensic uses

“The fact we can reconstruct what someone experiences visually based on their brain activity opens up a lot of possibilities,” says Nestor. “It unveils the subjective content of our mind and it provides a way to access, explore, and share the content of our perception, memory, and imagination.”

Work is now underway in Nestor’s lab to test how EEG could be used to reconstruct images from a wider range of objects beyond faces — even to show “what people remember or imagine, or what they want to express,” says Nestor. (A new creative tool?)

The research, which is published (open-access) in the journal eNeuro, was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) and by a Connaught New Researcher Award.

* “After we obtain event-related potentials (ERPs) [the measured brain response from a visual sensory event, in this case] — we use a support vector machine (SVM) algorithm to compute pairwise classifications of the visual image identities,” Nemrodov explained to KurzweilAI. “Based on the resulting dissimilarity matrix, we build a face space from which we estimate in a pixel-wise manner the appearance of every individual left-out (to avoid circularity) face. We do it by a linear combination of the classification images plus the origin of the face space.” The method is based on a former study: Nestor, A., Plaut, D. C., & Behrmann, M. (2016). Feature-based face representations and image reconstruction from behavioral and neural data. Proceedings of the National Academy of Sciences, 113(2), 416–421.
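The structure of that pipeline — a dissimilarity matrix in, a pixel-wise estimate of a left-out face out — can be illustrated with a drastically simplified sketch. The weights below come from a made-up dissimilarity matrix rather than actual SVM classification scores, so this shows only the shape of the method, not the authors’ exact computation:

```python
# Highly simplified sketch of leave-one-out image reconstruction: a left-out
# face is estimated as a combination of the other faces' images, weighted by
# how similar their EEG responses are. The dissimilarity matrix here is
# invented for illustration, not derived from SVM classification.

def reconstruct(left_out, images, dissimilarity):
    """Weighted average of known images; weights ~ EEG similarity to left-out face."""
    weights = [(j, 1.0 / (1e-9 + dissimilarity[left_out][j]))
               for j in range(len(images)) if j != left_out]
    total = sum(w for _, w in weights)
    n_pix = len(images[0])
    return [sum(w * images[j][p] for j, w in weights) / total
            for p in range(n_pix)]

# Three tiny 4-pixel "faces"; faces 0 and 1 evoke similar EEG patterns.
images = [[1.0, 0.0, 0.0, 1.0],
          [0.9, 0.1, 0.0, 1.0],
          [0.0, 1.0, 1.0, 0.0]]
dissim = [[0.0, 0.2, 1.0],
          [0.2, 0.0, 1.0],
          [1.0, 1.0, 0.0]]
estimate = reconstruct(left_out=0, images=images, dissimilarity=dissim)
print([round(p, 2) for p in estimate])
```

Because face 1’s EEG response is much closer to face 0’s than face 2’s is, the estimate lands near face 0’s pixel values, which is the leave-one-out validation logic the study relies on.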


University of Toronto Scarborough | Do you see what I see? Harnessing brain waves can help reconstruct mental images


Nature Video | Reading minds


Abstract of The Neural Dynamics of Facial Identity Processing: insights from EEG-Based Pattern Analysis and Image Reconstruction

Uncovering the neural dynamics of facial identity processing along with its representational basis outlines a major endeavor in the study of visual processing. To this end, here we record human electroencephalography (EEG) data associated with viewing face stimuli; then, we exploit spatiotemporal EEG information to determine the neural correlates of facial identity representations and to reconstruct the appearance of the corresponding stimuli. Our findings indicate that multiple temporal intervals support: facial identity classification, face space estimation, visual feature extraction and image reconstruction. In particular, we note that both classification and reconstruction accuracy peak in the proximity of the N170 component. Further, aggregate data from a larger interval (50-650 ms after stimulus onset) support robust reconstruction results, consistent with the availability of distinct visual information over time. Thus, theoretically, our findings shed light on the time course of face processing while, methodologically, they demonstrate the feasibility of EEG-based image reconstruction.

Neuroscientists reverse Alzheimer’s disease in mice

The brain of a 10-month-old mouse with Alzheimer’s disease (left) is full of amyloid plaques (red). These hallmarks of Alzheimer’s disease are reversed in animals that have gradually lost the BACE1 enzyme (right). (credit: Hu et al., 2018)

Researchers from the Cleveland Clinic Lerner Research Institute have completely reversed the formation of amyloid plaques in the brains of mice with Alzheimer’s disease by gradually depleting an enzyme called BACE1. The procedure also improved the animals’ cognitive function.

The study, published February 14 in the Journal of Experimental Medicine, raises hopes that drugs targeting this enzyme will be able to successfully treat Alzheimer’s disease in humans.


Background: Serious side effects

One of the earliest events in Alzheimer’s disease is an abnormal buildup of beta-amyloid peptide, which can form large amyloid plaques in the brain and disrupt the function of neuronal synapses. The BACE1 (aka beta-secretase) enzyme helps produce beta-amyloid peptide by cleaving (splitting) amyloid precursor protein (APP). So drugs that inhibit BACE1 are being developed as potential Alzheimer’s disease treatments. But there’s a problem: BACE1 also controls many important neural processes, and accidental cleaving of other proteins instead of APP could give these drugs serious side effects. For example, mice completely lacking BACE1 suffer severe neurodevelopmental defects.


A genetic-engineering solution

To deal with the serious side effects, the researchers generated mice that gradually lose the BACE1 enzyme as they grow older. These mice developed normally and appeared to remain perfectly healthy over time. The researchers then bred these rodents with mice that start to develop amyloid plaques and Alzheimer’s disease when they are 75 days old.

The resulting offspring had BACE1 levels approximately 50% lower than normal, and still formed plaques at 75 days old. However, as these mice continued to age and lose BACE1 activity, beta-amyloid peptide levels fell and the plaques began to disappear. At 10 months old, the mice had no plaques in their brains. Loss of BACE1 also improved the learning and memory of mice with Alzheimer’s disease.

“To our knowledge, this is the first observation of such a dramatic reversal of amyloid deposition in any study of Alzheimer’s disease mouse models,” says senior author Riqiang Yan, who will become chair of the department of neuroscience at the University of Connecticut this spring.

Decreasing BACE1 activity also reversed other hallmarks of Alzheimer’s disease, such as activation of microglial cells and the formation of abnormal neuronal processes.

However, the researchers also found that depletion of BACE1 only partially restored synaptic function, suggesting that BACE1 may be required for optimal synaptic activity and cognition.

“Our study provides genetic evidence that preformed amyloid deposition can be completely reversed after sequential and increased deletion of BACE1 in the adult,” says Yan. “Our data show that BACE1 inhibitors have the potential to treat Alzheimer’s disease patients without unwanted toxicity. Future studies should develop strategies to minimize the synaptic impairments arising from significant inhibition of BACE1 to achieve maximal and optimal benefits for Alzheimer’s patients.”


Abstract of BACE1 deletion in the adult mouse reverses preformed amyloid deposition and improves cognitive functions

BACE1 initiates the generation of the β-amyloid peptide, which likely causes Alzheimer’s disease (AD) when accumulated abnormally. BACE1 inhibitory drugs are currently being developed to treat AD patients. To mimic BACE1 inhibition in adults, we generated BACE1 conditional knockout (BACE1fl/fl) mice and bred BACE1fl/fl mice with ubiquitin-CreER mice to induce deletion of BACE1 after passing early developmental stages. Strikingly, sequential and increased deletion of BACE1 in an adult AD mouse model (5xFAD) was capable of completely reversing amyloid deposition. This reversal in amyloid deposition also resulted in significant improvement in gliosis and neuritic dystrophy. Moreover, synaptic functions, as determined by long-term potentiation and contextual fear conditioning experiments, were significantly improved, correlating with the reversal of amyloid plaques. Our results demonstrate that sustained and increasing BACE1 inhibition in adults can reverse amyloid deposition in an AD mouse model, and this observation will help to provide guidance for the proper use of BACE1 inhibitors in human patients.

Are you a cyborg?

Bioprinting a brain

Cryogenic 3D-printing soft hydrogels. Top: the bioprinting process. Bottom: SEM image of general microstructure (scale bar: 100 µm). (credit: Z. Tan/Scientific Reports)

A new bioprinting technique combines cryogenics (freezing) and 3D printing to create geometrical structures that are as soft (and complex) as the most delicate body tissues — mimicking the mechanical properties of organs such as the brain and lungs.

The idea: “Seed” porous scaffolds that can act as a template for tissue regeneration (from neuronal cells, for example), where damaged tissues are encouraged to regrow — allowing the body to heal without tissue rejection or other problems. Using “pluripotent” stem cells that can change into different types of cells is also a possibility.

Smoothy. Solid carbon dioxide (dry ice) in an isopropanol bath is used to rapidly cool hydrogel ink (a rapid liquid-to-solid phase change) as it’s extruded, yogurt-smoothy-style. Once thawed, the gel is as soft as body tissues, but doesn’t collapse under its own weight — a previous problem.

Current structures produced with this technique are “organoids” a few centimeters in size. But the researchers hope to create replicas of actual body parts with complex geometrical structures — even whole organs. That could allow scientists to carry out experiments not possible on live subjects, or for use in medical training, replacing animal bodies for surgical training and simulations. Then on to mechanobiology and tissue engineering.

Source: Imperial College London, Scientific Reports (open-access).

How to generate electricity with your body

Bending a finger generates electricity in this prototype device. (credit: Guofeng Song et al./Nano Energy)

A new triboelectric nanogenerator (TENG) design, using a gold tab attached to your skin, will convert mechanical energy into electrical energy for future wearables and self-powered electronics. Just bend your finger or take a step.

Triboelectric charging occurs when certain materials become electrically charged after coming into contact with a different material. In this new design by University of Buffalo and Chinese scientists, when a stretched layer of gold is released, it crumples, creating what looks like a miniature mountain range. An applied force leads to friction between the gold layers and an interior PDMS layer, causing electrons to flow between the gold layers.

More power to you. Previous TENG designs have been difficult to manufacture (requiring complex lithography) or too expensive. The new 1.5-centimeter-long prototype generates a maximum of 124 volts, but at only 10 microamps; its power density is 0.22 milliwatts per square centimeter. The team plans to use larger pieces of gold to deliver more electricity, and to add a portable battery.
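The quoted figures can be sanity-checked with two lines of arithmetic (peak voltage and peak current need not occur at the same instant in a real TENG, so this is an upper-bound estimate):

```python
# Sanity-checking the prototype numbers quoted above: peak electrical power
# is P = V * I, and the quoted power density implies an active area.
# (In a real TENG the voltage and current peaks may not coincide, so treat
# this as an upper bound.)

v_peak = 124.0        # volts
i_peak = 10e-6        # amps (10 microamps)
density = 0.22e-3     # watts per square centimeter (0.22 mW/cm^2)

p_peak = v_peak * i_peak            # ~1.24 mW peak power
implied_area = p_peak / density     # ~5.6 cm^2 of active surface
print(f"peak power ~ {p_peak * 1e3:.2f} mW, implied area ~ {implied_area:.1f} cm^2")
```

So even at 124 volts, the microamp-scale current keeps the output around a milliwatt — plenty for low-power sensors, but far below what a phone would need.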

Source: Nano Energy. Support: U.S. National Science Foundation, the National Basic Research Program of China, National Natural Science Foundation of China, Beijing Science and Technology Projects, Key Research Projects of the Frontier Science of the Chinese Academy of Sciences, and National Key Research and Development Plan.

This artificial electrical eel may power your implants

How the eel’s electrical organs generate electricity by moving sodium (Na) and potassium (K) ions across a selective membrane. (credit: Caitlin Monney)

Taking it a giant (and a bit scary) step further, an artificial electric organ, inspired by the electric eel, could one day power your implantable sensors, prosthetic devices, medication dispensers, augmented-reality contact lenses, and countless other gadgets. Unlike typical toxic batteries that need to be recharged, these systems are soft, flexible, transparent, and potentially biocompatible.

Doubles as a defibrillator? The system mimics eels’ electrical organs, which use thousands of alternating compartments with excess potassium or sodium ions, separated by selective membranes. To create a jolt of electricity (600 volts at 1 ampere), an eel’s membranes allow the ions to flow together. The researchers built a similar system, but using sodium and chloride ions dissolved in a water-based hydrogel. It generates more than 100 volts, but at safe low current — just enough to power a small medical device like a pacemaker.
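A back-of-envelope estimate shows why thousands of compartments are needed: each selective-membrane junction contributes roughly a Nernst potential E = (RT/F) · ln(c_high/c_low), a few tens of millivolts. The 10:1 concentration ratio below is an assumed illustrative value, not a figure from the paper:

```python
import math

# Back-of-envelope: how many ion-gradient compartments must be stacked in
# series to reach eel-scale voltages? Each junction contributes roughly a
# Nernst potential E = (RT/F) * ln(c_high / c_low). The 10:1 concentration
# ratio is an assumed illustrative value.

R, T, F = 8.314, 298.0, 96485.0              # gas constant, temperature (K), Faraday
e_junction = (R * T / F) * math.log(10.0)    # ~59 mV per junction at a 10:1 ratio

for target_v in (110.0, 600.0):              # artificial organ vs. a real eel
    n = math.ceil(target_v / e_junction)
    print(f"{target_v:.0f} V needs ~{n:,} compartments in series")
```

At roughly 59 mV per junction, reaching 100+ volts takes on the order of two thousand compartments in series — which is why both the eel and the hydrogel device rely on massive stacking rather than any single high-voltage element.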

The researchers say the technology could also lead to using naturally occurring processes inside the body to generate electricity, a truly radical step.

Source: Nature, University of Fribourg, University of Michigan, University of California-San Diego. Funding: Air Force Office of Scientific Research, National Institutes of Health.

E-skin for Terminator wannabes

A section of “e-skin” (credit: Jianliang Xiao / University of Colorado Boulder)

A new type of thin, self-healing, translucent “electronic skin” (“e-skin,” which mimics the properties of natural skin) has applications ranging from robotics and prosthetic development to better biomedical devices and human-computer interfaces.

Ready for a Terminator-style robot baby nurse? What makes this e-skin different and interesting is its embedded sensors, which can measure pressure, temperature, humidity and air flow. That makes it sensitive enough to let a robot take care of a baby, the University of Colorado mechanical engineers and chemists assure us. The skin is also rapidly self-healing (by reheating), as in The Terminator, using a mix of three commercially available compounds in ethanol.

The secret ingredient: A novel network polymer known as polyimine, which is fully recyclable at room temperature. Laced with silver nanoparticles, it can provide better mechanical strength, chemical stability and electrical conductivity. It’s also malleable, so by applying moderate heat and pressure, it can be easily conformed to complex, curved surfaces like human arms and robotic hands.

Source: University of Colorado, Science Advances (open-access). Funded in part by the National Science Foundation.

Altered Carbon

Vertebral cortical stack (credit: Netflix)

Altered Carbon takes place in the 25th century, when humankind has spread throughout the galaxy. After 250 years in cryonic suspension, a prisoner returns to life in a new body with one chance to win his freedom: by solving a mind-bending murder.

Resleeve your stack. Human consciousness can be digitized and downloaded into different bodies. A person’s memories have been encapsulated into “cortical stack” storage devices surgically inserted into the vertebrae at the back of the neck. Disposable physical bodies called “sleeves” can accept any stack.

But only the wealthy can acquire replacement bodies on a continual basis. The long-lived are called Meths, after the Biblical figure Methuselah. The ultra-rich are also able to keep copies of their minds in remote storage, which they back up regularly, ensuring that even if their stack is destroyed, the backup can be resleeved (losing only whatever happened after the last backup — as in the hack-murder).

Source: Netflix. Premiered on February 2, 2018. Based on the 2002 novel of the same title by Richard K. Morgan.