Are you ready for atom-thin, ‘invisible’ displays everywhere?

Photograph of a proof-of-concept transparent display (left), and closeups showing the display in off and on states (credit: UC Berkeley)

Bloomberg reported this morning (April 4) that Apple is planning a new iPhone with touchless gesture control and displays that curve inward gradually from top to bottom. Apple’s probable use of microLED technology promises to offer “power savings and a reduced screen thickness when put beside current-generation display panels,” according to Apple Insider.

But UC Berkeley engineers have an even more radical concept for future electronics: invisible displays, using a new atomically thin display technology.

Imagine seeing the person you’re talking to projected onto a blank wall by just pointing at it, or seeing a map pop up on your car window (ideally, matched to the road you’re on) at night and disappear when you wave it off.

Schematic of transient-electroluminescent device. An AC voltage is applied between the gate (bottom) and source (top) electrodes. Light emission occurs near the source contact edge during the moment when the AC signal switches its polarity from positive to negative (and vice versa), so both positive and negative charges are present at the same time in the semiconductor, creating light. (credit: Der-Hsien Lien et al./Nature Communications)

The secret: an ultrathin monolayer semiconductor just three atoms thick — a bright “transient electroluminescent” device that is fully transparent when turned off and that can conform to curved surfaces, even human skin.* The four different monolayer materials each emit different colors of light.
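
To make the switching mechanism concrete, here is a minimal toy simulation (not the Berkeley group's model): a square-wave AC gate bias is generated, and a brief burst of light is assumed at every polarity flip, so average emission simply tracks how often the bias changes sign. The frequency, decay constant tau_s, and waveform are arbitrary assumptions for illustration.

```python
import numpy as np

# Toy model (illustrative only): an emission pulse occurs at each AC polarity flip.
def transient_el_pulses(freq_hz=1e3, duration_s=0.01, samples=100_000, tau_s=5e-6):
    t = np.linspace(0, duration_s, samples)
    gate_v = np.sign(np.sin(2 * np.pi * freq_hz * t))   # square-wave AC gate bias
    flips = np.where(np.diff(gate_v) != 0)[0]           # polarity transitions
    emission = np.zeros_like(t)
    for i in flips:
        dt = t[i:] - t[i]
        emission[i:] += np.exp(-dt / tau_s)             # brief light burst per flip
    return t, gate_v, emission

t, v, el = transient_el_pulses()
print(f"transitions: {np.count_nonzero(np.diff(v)):d}, mean emission (arb. units): {el.mean():.3f}")
```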

The display is currently a proof-of-concept design — just a few millimeters wide, and about 1 percent efficient (commercial LEDs have efficiencies of around 25 to 30 percent). “A lot of work remains to be done and a number of challenges need to be overcome to further advance the technology for practical applications,” explained Ali Javey, Ph.D., professor of Electrical Engineering and Computer Sciences at Berkeley.

The research study was published March 26 in an open-access paper in Nature Communications. It was funded by the National Science Foundation and the Department of Energy.

* Typically, two contact points are used in a semiconductor-based light emitting device: one for injecting negatively charged particles and one for injecting positively charged particles. Making contacts that can efficiently inject these charges is a fundamental challenge for LEDs, and it is particularly challenging for monolayer semiconductors since there is so little material to work with. The Berkeley research team engineered a way around this: designing a new device that only requires one contact on the transition-metal dichalcogenide (MoS2, WS2, MoSe2, and WSe2) monolayer instead of two contacts.


Abstract of Large-area and bright pulsed electroluminescence in monolayer semiconductors

Transition-metal dichalcogenide monolayers have naturally terminated surfaces and can exhibit a near-unity photoluminescence quantum yield in the presence of suitable defect passivation. To date, steady-state monolayer light-emitting devices suffer from Schottky contacts or require complex heterostructures. We demonstrate a transient-mode electroluminescent device based on transition-metal dichalcogenide monolayers (MoS2, WS2, MoSe2, and WSe2) to overcome these problems. Electroluminescence from this dopant-free two-terminal device is obtained by applying an AC voltage between the gate and the semiconductor. Notably, the electroluminescence intensity is weakly dependent on the Schottky barrier height or polarity of the contact. We fabricate a monolayer seven-segment display and achieve the first transparent and bright millimeter-scale light-emitting monolayer semiconductor device.

DARPA-funded prosthetic memory system successful in humans, study finds

Hippocampal prosthesis restores memory functions by creating “MIMO” model-based electrical stimulation of the hippocampus — bypassing a damaged brain region (red X). (credit: USC)

Scientists at Wake Forest Baptist Medical Center and the University of Southern California (USC) Viterbi School of Engineering have demonstrated a neural prosthetic system that can improve memory by “writing” information “codes” (based on a patient’s specific memory patterns) into the hippocampus (a part of the brain involved in making new memories) of human subjects via an implanted electrode.

In this pilot study, described in a paper published in Journal of Neural Engineering, epilepsy patients’ short-term memory performance showed a 35 to 37 percent improvement over baseline measurements, as shown in this video. The research, funded by the U.S. Defense Advanced Research Projects Agency (DARPA), offers evidence supporting pioneering research by USC scientist Theodore Berger, Ph.D. (a co-author of the paper), on an electronic system for restoring memory in rats (reported on KurzweilAI in 2011).

“This is the first time scientists have been able to identify a patient’s own brain-cell code or pattern for memory and, in essence, ‘write in’ that code to make existing memory work better — an important first step in potentially restoring memory loss,” said the paper’s lead author Robert Hampson, Ph.D., professor of physiology/pharmacology and neurology at Wake Forest Baptist.

The study focused on improving episodic memory (information that is new and useful for a short period of time, such as where you parked your car on any given day) — the most common type of memory loss in people with Alzheimer’s disease, stroke, and head injury.

The researchers enrolled epilepsy patients at Wake Forest Baptist who were participating in a diagnostic brain-mapping procedure that used surgically implanted electrodes placed in various parts of the brain to pinpoint the origin of the patients’ seizures.

Reinforcing memories

(LEFT) In one test*, the researchers recorded (“Actual”) the neural patterns or “codes” between two of three main areas of the hippocampus, known as CA3 and CA1, while the eight study participants were performing a computerized memory task. The patients were shown a simple image, such as a color block, and after a brief delay where the screen was blanked, were then asked to identify the initial image out of four or five on the screen. The USC team, led by biomedical engineers Theodore Berger, Ph.D., and Dong Song, Ph.D., analyzed the recordings from the correct responses and synthesized a code (RIGHT) for correct memory performance, based on a multi-input multi-output (MIMO) nonlinear mathematical model. The Wake Forest Baptist team played back that code to the patients (“Predicted” signal) while the patients performed the image-recall task. In this test, the patients’ episodic memory performance then showed a 37 percent improvement over baseline. (credit: USC)
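
For intuition about what a MIMO spike-prediction model does, here is a heavily simplified sketch on synthetic data: lagged CA3 spike counts are used as inputs, and one classifier per CA1 unit predicts whether that unit fires. The published Berger/Song model is a nonlinear dynamical MIMO model estimated from real recordings; the logistic-regression stand-in and all data below are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative stand-in for a multi-input multi-output (MIMO) spike-prediction model.
# A simple logistic regression per CA1 unit is fit on lagged CA3 spike counts (synthetic data).
rng = np.random.default_rng(0)
n_bins, n_ca3, n_ca1, n_lags = 2000, 16, 8, 5

ca3 = rng.binomial(1, 0.1, size=(n_bins, n_ca3))                      # synthetic CA3 spike trains
true_w = rng.normal(size=(n_ca3 * n_lags, n_ca1))
X = np.hstack([np.roll(ca3, lag, axis=0) for lag in range(n_lags)])   # spike-history features
ca1 = (X @ true_w + rng.normal(size=(n_bins, n_ca1)) > 2.0).astype(int)  # synthetic CA1 output

models = [LogisticRegression(max_iter=1000).fit(X, ca1[:, j]) for j in range(n_ca1)]
predicted = np.column_stack([m.predict(X) for m in models])           # "predicted" CA1 code
print("per-unit training accuracy:", (predicted == ca1).mean(axis=0).round(2))
```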

“We showed that we could tap into a patient’s own memory content, reinforce it, and feed it back to the patient,” Hampson said. “Even when a person’s memory is impaired, it is possible to identify the neural firing patterns that indicate correct memory formation and separate them from the patterns that are incorrect. We can then feed in the correct patterns to assist the patient’s brain in accurately forming new memories, not as a replacement for innate memory function, but as a boost to it.

“To date we’ve been trying to determine whether we can improve the memory skill people still have. In the future, we hope to be able to help people hold onto specific memories, such as where they live or what their grandkids look like, when their overall memory begins to fail.”

The current study is built on more than 20 years of preclinical research on memory codes led by Sam Deadwyler, Ph.D., professor of physiology and pharmacology at Wake Forest Baptist, along with Hampson, Berger, and Song. The preclinical work applied the same type of stimulation to restore and facilitate memory in animal models using the MIMO system, which was developed at USC.

* In a second test, participants were shown a highly distinctive photographic image, followed by a short delay, and asked to identify the first photo out of four or five others on the screen. The memory trials were repeated with different images while the neural patterns were recorded during the testing process to identify and deliver correct-answer codes. After another longer delay, Hampson’s team showed the participants sets of three pictures at a time with both an original and new photos included in the sets, and asked the patients to identify the original photos, which had been seen up to 75 minutes earlier. When stimulated with the correct-answer codes, study participants showed a 35 percent improvement in memory over baseline.


Abstract of Developing a hippocampal neural prosthetic to facilitate human memory encoding and recall

Objective. We demonstrate here the first successful implementation in humans of a proof-of-concept system for restoring and improving memory function via facilitation of memory encoding using the patient’s own hippocampal spatiotemporal neural codes for memory. Memory in humans is subject to disruption by drugs, disease and brain injury, yet previous attempts to restore or rescue memory function in humans typically involved only nonspecific, modulation of brain areas and neural systems related to memory retrieval. Approach. We have constructed a model of processes by which the hippocampus encodes memory items via spatiotemporal firing of neural ensembles that underlie the successful encoding of short-term memory. A nonlinear multi-input, multi-output (MIMO) model of hippocampal CA3 and CA1 neural firing is computed that predicts activation patterns of CA1 neurons during the encoding (sample) phase of a delayed match-to-sample (DMS) human short-term memory task. Main results. MIMO model-derived electrical stimulation delivered to the same CA1 locations during the sample phase of DMS trials facilitated short-term/working memory by 37% during the task. Longer term memory retention was also tested in the same human subjects with a delayed recognition (DR) task that utilized images from the DMS task, along with images that were not from the task. Across the subjects, the stimulated trials exhibited significant improvement (35%) in both short-term and long-term retention of visual information. Significance. These results demonstrate the facilitation of memory encoding which is an important feature for the construction of an implantable neural prosthetic to improve human memory.

round-up | Five important biomedical technology breakthroughs

Printing your own bioprinter

PrintrBot Simple Metal modified with the LVE for FRESH printing. (credit: Adam Feinberg/HardwareX)

Now you can build your own low-cost 3-D bioprinter by modifying a standard commercial desktop 3-D printer for under $500 — thanks to an open-source “LVE 3-D” design developed by Carnegie Mellon University (CMU) researchers. CMU provides detailed instructional videos.

You can print artificial human tissue scaffolds on a larger scale (such as an entire human heart) and at higher resolution and quality, the researchers say. Most 3-D bioprinters start between $10K and $20K, while commercial 3-D bioprinters cost up to $200,000 and are typically proprietary, closed-source machines that are difficult to modify.

CMU Associate Professor Adam Feinberg says his lab “aims to produce open-source biomedical research that other researchers can expand upon … to seed innovation widely [and] encourage the rapid development of biomedical technologies to save lives.” — CMU, video, HardwareX (open-access)

AI-enhanced medical imaging technique reduces radiation doses, MRI times

AUTOMAP yields higher-quality medical images from less data, reducing radiation doses for CT and PET and shortening scan times for MRI. Shown here: MRI images reconstructed from the same data with conventional approaches (left) and AUTOMAP (right). (credit: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital)

Massachusetts General Hospital (MGH) researchers have developed a machine-learning-based technique that enables clinicians to acquire higher-quality images without the increased radiation dose — from acquiring additional data from computed tomography (CT) or positron emission tomography (PET) — or the uncomfortably long scan times needed for magnetic resonance imaging (MRI).

The new AUTOMAP (automated transform by manifold approximation) deep-learning technique frees radiologists from having to tweak manual settings to overcome imperfections in raw data.

The technique could also help radiologists make real-time decisions about imaging protocols while the patient is in the scanner (image reconstruction time is just tens of milliseconds), thanks to AI algorithms running on graphical processing units (GPUs). — MGH Athinoula A. Martinos Center for Biomedical Imaging, Nature.
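
As a rough illustration of the AUTOMAP idea of learning the reconstruction transform directly from raw sensor data, the sketch below maps toy "k-space" measurements to the image domain with fully connected layers and then refines the result with convolutions. Layer sizes, activations, and the data are toy assumptions; the published architecture and training procedure differ.

```python
import torch
import torch.nn as nn

# Rough sketch of an AUTOMAP-style reconstruction network: fully connected layers learn
# an approximate transform from raw sensor data (e.g., two-channel k-space for MRI) to
# the image domain, and convolutional layers then refine the image. All sizes here are
# toy assumptions.
n = 32                                              # toy image side length
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(2 * n * n, n * n), nn.Tanh(),         # 2x: real + imaginary measurement channels
    nn.Linear(n * n, n * n), nn.Tanh(),
    nn.Unflatten(1, (1, n, n)),                     # back to an image-domain feature map
    nn.Conv2d(1, 64, 5, padding=2), nn.ReLU(),      # convolutional refinement
    nn.Conv2d(64, 1, 5, padding=2),
)
kspace = torch.randn(4, 2, n, n)                    # toy batch of raw measurements
print(model(kspace).shape)                          # -> torch.Size([4, 1, 32, 32])
```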

“Nanotweezers” trap cells and nanoparticles with a laser beam

Trapping a nanoparticle with laser beams (credit: Linhan Lin et al./Nature Photonics)

A new tool called “opto-thermoelectric nanotweezers” (OTENT) allows bioscientists to use light to manipulate biological cells and molecules at single-molecule resolution. The goal is to make nanomedicine discoveries for early disease diagnosis.

By optically heating a plasmonic substrate, a light-directed thermoelectric field is generated by spatial separation of dissolved ions within the heating laser spot — allowing for manipulating nanoparticles over a wide range of materials, sizes and shapes. — University of Texas at Austin, Nature Photonics.

Nanometer-scale MRI opens the door to viewing virus nanoparticles and proteins

MRFM imaging device under microwave irradiation. (Inset) The ultra-sensitive mechanical oscillator: a silicon nanowire with the sample coated at its tip. (credit: R. Annabestani et al./Physical Review X)

A new technique allows for magnetic resonance imaging (MRI) at the unprecedented resolution of 2 nanometers — 10,000 times smaller than current millimeter resolution.

It promises to open the door to major advances in understanding virus particles, proteins that cause diseases like Parkinson’s and Alzheimer’s, and discovery of new materials, say researchers at the University of Waterloo Institute for Quantum Computing.

The breakthrough technique combines magnetic resonance force microscopy (MRFM) with the ability to precisely control atomic spins. “We now have unprecedented access to understanding complex biomolecules,” says Waterloo physicist Raffi Budakian. — Waterloo, Physical Review X, arXiv (open-access)

Skin-implantable biosensor relays real-time personal health data to a cell phone, lasts for years

Left: Infrared light (arrow shows excitation light) causes a biosensor (blue) under the skin to fluoresce at a level determined by the chemical of interest (center). Right: A detector (arrow) receives and analyzes the signals from the biosensor and transmits data to a computer or phone. (credit: Profusa)

A technology for continuous monitoring of glucose, lactate, oxygen, carbon dioxide, and other molecules — using tiny biosensors that are placed under the skin with a single injection — has been developed by DARPA/NIH-supported Profusa. Using a flexible, biocompatible hydrogel fiber about 5 mm long and 0.5 mm wide, the biosensors can continuously measure body chemistries.

An external device sends light through the skin to the biosensor’s fluorescent dye, which then emits light at a level proportional to the concentration of the analyte of interest (such as glucose). A detector then wirelessly transmits the brightness measurement to a computer or cell phone to record the change. Data can be shared securely via digital networks with healthcare providers.
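
The readout step amounts to mapping a brightness measurement back to a concentration through a calibration curve. The sketch below shows that conversion with entirely hypothetical calibration points, an assumed near-linear response, and a made-up helper function; Profusa's actual sensor chemistry and calibration are not described at this level of detail.

```python
import numpy as np

# Toy calibration sketch: convert measured fluorescence brightness to an analyte
# concentration via a fitted calibration curve. All numbers are hypothetical.
cal_conc = np.array([0, 50, 100, 200, 400])             # hypothetical calibration glucose, mg/dL
cal_signal = np.array([0.02, 0.11, 0.21, 0.40, 0.79])   # hypothetical normalized brightness

slope, intercept = np.polyfit(cal_conc, cal_signal, 1)  # assume a roughly linear response
def brightness_to_concentration(signal):
    return (signal - intercept) / slope

print(f"{brightness_to_concentration(0.3):.0f} mg/dL for a reading of 0.3")
```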

To date, the injected biosensors have functioned for as long as four years. For example, tracking the rise and fall of oxygen levels around muscle with these sensors produces an “oxygen signature” that may reveal a person’s fitness level.

Until now, local inflammation and scar tissue from the “foreign body response” (to a sensing electrode wire that penetrates the skin) have prevented development of in-body sensors capable of continuous, long-term monitoring of body chemistry. The Lumee Oxygen Platform, the first medical application of the biosensor technology, was approved for sale in Europe in 2017; it is designed to help patients undergoing treatment for chronic limb ischemia avoid amputations. — Profusa

The brain learns completely differently than we’ve assumed, new learning theory says

(credit: Getty)

A revolutionary new theory contradicts a fundamental assumption in neuroscience about how the brain learns. According to researchers at Bar-Ilan University in Israel led by Prof. Ido Kanter, the theory promises to transform our understanding of brain dysfunction and may lead to advanced, faster, deep-learning algorithms.

A biological schema of an output neuron, comprising a neuron’s soma (body, shown as gray circle, top) with two roots of dendritic trees (light-blue arrows), splitting into many dendritic branches (light-blue lines). The signals arriving from the connecting input neurons (gray circles, bottom) travel via their axons (red lines) and their many branches until terminating with the synapses (green stars). There, the signals connect with dendrites (some synapse branches travel to other neurons), which then connect to the soma. (credit: Shira Sardi et al./Sci. Rep)

The brain is a highly complex network containing billions of neurons. Each of these neurons communicates simultaneously with thousands of others via their synapses. A neuron collects its many synaptic incoming signals through dendritic trees.

In 1949, Donald Hebb suggested that learning occurs in the brain by modifying the strength of synapses. Hebb’s theory has remained a deeply rooted assumption in neuroscience.

Synaptic vs. dendritic learning

In vitro experimental setup. A micro-electrode array comprising 60 extracellular electrodes separated by 200 micrometers, indicating a neuron patched (connected) by an intracellular electrode (orange) and a nearby extracellular electrode (green line). (Inset) Reconstruction of a fluorescence image, showing a patched cortical pyramidal neuron (red) and its dendrites growing in different directions and in proximity to extracellular electrodes. (credit: Shira Sardi et al./Scientific Reports adapted by KurzweilAI)

Hebb was wrong, says Kanter. “A new type of experiments strongly indicates that a faster and enhanced learning process occurs in the neuronal dendrites, similarly to what is currently attributed to the synapse,” Kanter and his team suggest in an open-access paper in Nature’s Scientific Reports, published Mar. 23, 2018.

“In this new [faster] dendritic learning process, there are [only] a few adaptive parameters per neuron, in comparison to thousands of tiny and sensitive ones in the synaptic learning scenario,” says Kanter. “Does it make sense to measure the quality of air we breathe via many tiny, distant satellite sensors at the elevation of a skyscraper, or by using one or several sensors in close proximity to the nose?” he asks. “Similarly, it is more efficient for the neuron to estimate its incoming signals close to its computational unit, the neuron.”
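
The scale of that difference is easy to quantify. The sketch below compares adjustable-parameter counts under the two scenarios, using rough, assumed numbers (the synapse count per neuron and the number of dendritic trees per neuron are illustrative orders of magnitude, not figures from the paper).

```python
# Illustrative parameter-count comparison (numbers are rough assumptions).
n_neurons = 1_000_000
synapses_per_neuron = 10_000        # assumed order of magnitude for cortical neurons
dendritic_trees_per_neuron = 5      # "a few adaptive parameters per neuron" in the new scenario

synaptic_params = n_neurons * synapses_per_neuron
dendritic_params = n_neurons * dendritic_trees_per_neuron
print(f"synaptic learning parameters:  {synaptic_params:.1e}")
print(f"dendritic learning parameters: {dendritic_params:.1e}")
print(f"reduction factor: {synaptic_params // dendritic_params}x")
```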

Image representing the current synaptic (pink) vs. the new dendritic (green) learning scenarios of the brain. In the current scenario, a neuron (black) with a small number (two in this example) of dendritic trees (center) collects incoming signals via synapses (represented by red valves), with many thousands of tiny adjustable learning parameters. In the new dendritic learning scenario (green), a few (two in this example) adjustable controls (red valves) are located in close proximity to the computational element, the neuron. The scale is such that if a neuron collecting its incoming signals is represented by a person’s faraway fingers, the length of its hands would be as tall as a skyscraper (left). (credit: Prof. Ido Kanter)

The researchers also found that weak synapses, which comprise the majority of our brain and were previously assumed to be insignificant, actually play an important role in the dynamics of our brain.

According to the researchers, the new learning theory may lead to advanced, faster, deep-learning algorithms and other artificial-intelligence-based applications, and also suggests that we need to reevaluate our current treatments for disordered brain functionality.

This research is supported in part by the TELEM grant of the Israel Council for Higher Education.


Abstract of Adaptive nodes enrich nonlinear cooperative learning beyond traditional adaptation by links

Physical models typically assume time-independent interactions, whereas neural networks and machine learning incorporate interactions that function as adjustable parameters. Here we demonstrate a new type of abundant cooperative nonlinear dynamics where learning is attributed solely to the nodes, instead of the network links which their number is significantly larger. The nodal, neuronal, fast adaptation follows its relative anisotropic (dendritic) input timings, as indicated experimentally, similarly to the slow learning mechanism currently attributed to the links, synapses. It represents a non-local learning rule, where effectively many incoming links to a node concurrently undergo the same adaptation. The network dynamics is now counterintuitively governed by the weak links, which previously were assumed to be insignificant. This cooperative nonlinear dynamic adaptation presents a self-controlled mechanism to prevent divergence or vanishing of the learning parameters, as opposed to learning by links, and also supports self-oscillations of the effective learning parameters. It hints on a hierarchical computational complexity of nodes, following their number of anisotropic inputs and opens new horizons for advanced deep learning algorithms and artificial intelligence based applications, as well as a new mechanism for enhanced and fast learning by neural networks.

Next-gen optical disc has 10TB capacity and six-century lifespan

(credit: Getty)

Scientists from RMIT University in Australia and Wuhan Institute of Technology in China have developed a radical new high-capacity optical disc called “nano-optical long-data memory” that they say can record and store 10 TB (terabytes, or trillions of bytes) of data per disc securely for more than 600 years. That’s a four-fold increase in storage density and a 300-fold increase in data lifespan over current storage technology.

Preparing  for zettabytes of data in 2025

Forecast of exponential growth of creation of Long Data, with three-year doubling time (credit: IDC)

According to IDC’s Data Age 2025 study in 2017, the recent explosion of Big Data and global cloud storage generates 2.5 PB (10¹⁵ bytes) a day, stored in massive, power-hungry data centers that use 3 percent of the world’s electricity supply. The data centers rely on hard disks, which have limited capacity (2 TB per disk) and last only two years. IDC forecasts that by 2025, the global datasphere will grow exponentially to 163 zettabytes (that’s 163 trillion gigabytes) — ten times the 16.1 ZB of data generated in 2016.
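
A quick sanity check of those figures, using the three-year doubling time shown in the chart above (the gap versus IDC's 163 ZB implies growth slightly faster than one doubling every three years):

```python
# Sanity check of the forecast under the chart's three-year doubling time.
start_zb, start_year, end_year = 16.1, 2016, 2025
doubling_time_years = 3.0           # from the chart caption

projected_zb = start_zb * 2 ** ((end_year - start_year) / doubling_time_years)
print(f"{projected_zb:.0f} ZB projected for {end_year}")   # ~129 ZB; IDC's 163 ZB is a bit higher
```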

Examples of massive Long Data:

  • The Square Kilometer Array (SKA) radio telescope produces 576 petabytes of raw data per hour.
  • The Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative to map the human brain is handling data measured in yottabytes (one trillion terabytes).
  • Studying the mutation of just one human family tree over ten generations (500 years) will require 8 terabytes of data.

IDC estimates that by 2025, nearly 20% of the data in the global datasphere will be critical to our daily lives (such as biomedical data) and nearly 10% of that will be hypercritical. “By 2025, an average connected person anywhere in the world will interact with connected devices nearly 4,800 times per day — basically one interaction every 18 seconds,” the study estimates.

Replacing hard drives with optical discs

There’s a current shift from focus on “Big Data” to “Long Data,” which enables new insights to be discovered by mining massive datasets that capture changes in the real world over decades and centuries.* The researchers say their new Long-data memory technology could offer a more cost-efficient and sustainable solution to the global data storage problem.

The new technology could radically improve the energy efficiency of data centers. It would use 1000 times less power than a hard-disk-based data center by requiring far less cooling and doing away with the energy-intensive task of data migration (backing up to a new disk) every two years. Optical discs are also inherently more secure than hard disks.

“While optical technology can expand capacity, the most advanced optical discs developed so far have only 50-year lifespans,” explained lead investigator Min Gu, a professor at RMIT and senior author of an open-access paper published in Nature Communications. “Our technique can create an optical disc with the largest capacity of any optical technology developed to date and our tests have shown it will last over half a millennium and is suitable for mass production of optical discs.”

There’s an existing Blu-ray disc technology called M-DISC that can store data for 1,000 years, but it is limited to 100 GB per disc; the new technology stores 10 TB per disc, or 100 times more data.

“This work can be the building blocks for the future of optical long-data centers over centuries, unlocking the potential of the understanding of the long processes in astronomy, geology, biology, and history,” the researchers note in the paper. “It also opens new opportunities for high-reliability optical data memory that could survive in extreme conditions, such as high temperature and high pressure.”

How the nano-optical long-data memory technology works

The high-capacity optical data memory uses gold nanoplasmonic hybrid glass composites to encode and preserve long data over centuries. (credit: Qiming Zhang et al./Nature Communications, adapted by KurzweilAI)

The new nano-optical long-data memory technology is based on a novel gold nanoplasmonic** hybrid glass matrix, unlike the materials used in current optical discs. The technique relies on a sol-gel process, which uses chemical precursors to produce ceramics and glass with higher purity and homogeneity than conventional processes. Glass is a highly durable material that can last up to 1000 years and can be used to hold data, but has limited native storage capacity because of its inflexibility. So the team combined glass with an organic material, halving its lifespan (to 600 years) but radically increasing its capacity.

Data is further encoded by heating gold nanorods, causing them to morph, in four discrete steps, into spheres. (credit: Qiming Zhang et al./Nature Communications, adapted by KurzweilAI)

To create the nanoplasmonic hybrid glass matrix, gold nanorods were incorporated into a hybrid glass composite. The researchers chose gold because like glass, it is robust and highly durable. The system allows data to be recorded in five dimensions — three dimensions in space (data is stored in gold nanorods at multiple levels in the disc and in four different shapes), plasmonic-controlled multi-color encoding**, and light-polarization encoding.
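
One way to see how the extra dimensions add capacity: each combination of shape, color, and polarization is a distinguishable state, so the bits stored per recording site grow with the logarithm of the product of the state counts. In the sketch below, only the four nanorod shapes come from the article; the color and polarization counts are hypothetical placeholders.

```python
import math

# Back-of-envelope: how multi-dimensional encoding multiplies capacity.
# Only the four nanorod shapes are stated in the article; the other counts are
# hypothetical placeholders, not figures from the paper.
shape_states = 4          # four discrete nanorod shapes (from the article)
color_states = 4          # hypothetical number of distinguishable plasmonic colors
polarization_states = 2   # hypothetical number of polarization channels

bits_per_site = math.log2(shape_states * color_states * polarization_states)
target_tb = 10
sites_needed = target_tb * 1e12 * 8 / bits_per_site
print(f"{bits_per_site:.0f} bits per recording site "
      f"-> ~{sites_needed:.1e} sites needed for {target_tb} TB")
```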

Scientists at Monash University were also involved in the research.

* “Long Data” refers here to Big Data across millennia (both historical and future), as explained here, not to be confused with the “long data” software data type. A short history of Big Data forecasts is here.

** As explained here, here, and here.

UPDATE MAR. 27, 2018 — nano-optical long-data memory disc capacity of 600TB corrected to read 10TB.


Abstract of High-capacity optical long data memory based on enhanced Young’s modulus in nanoplasmonic hybrid glass composites

Emerging as an inevitable outcome of the big data era, long data are the massive amount of data that captures changes in the real world over a long period of time. In this context, recording and reading the data of a few terabytes in a single storage device repeatedly with a century-long unchanged baseline is in high demand. Here, we demonstrate the concept of optical long data memory with nanoplasmonic hybrid glass composites. Through the sintering-free incorporation of nanorods into the earth abundant hybrid glass composite, Young’s modulus is enhanced by one to two orders of magnitude. This discovery, enabling reshaping control of plasmonic nanoparticles of multiple-length allows for continuous multi-level recording and reading with a capacity over 10 terabytes with no appreciable change of the baseline over 600 years, which opens new opportunities for long data memory that affects the past and future.

Recording data from one million neurons in real time

(credit: Getty)

Neuroscientists at the Neuronano Research Centre at Lund University in Sweden have developed and tested an ambitious new design for processing and storing the massive amounts of data expected from future implantable brain machine interfaces (BMIs) and brain-computer interfaces (BCIs).

The system would simultaneously acquire data from more than 1 million neurons in real time. It would convert the spike data (using bit encoding) and send it via an effective communication format for processing and storage on conventional computer systems. It would also provide feedback to a subject in under 25 milliseconds — stimulating up to 100,000 neurons.
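
A minimal sketch of what such bit encoding could look like, with assumed details (the paper's exact data layout is not reproduced here): spike events from a 25-ms window are written into a neurons-by-time-bins grid at one bit per cell, then bit-packed for transfer and storage.

```python
import numpy as np

# Minimal sketch (assumed details) of bit-encoding spike events into a
# neurons-x-time-bins grid, in the spirit of the Lund architecture.
n_neurons, bin_ms, window_ms = 1_000_000, 1, 25      # 25 ms feedback window from the article
n_bins = window_ms // bin_ms

rng = np.random.default_rng(1)
# Simulated spike events: (neuron_id, time_bin) pairs within one 25 ms window.
events = rng.integers([0, 0], [n_neurons, n_bins], size=(50_000, 2))

grid = np.zeros((n_neurons, n_bins), dtype=np.uint8)
grid[events[:, 0], events[:, 1]] = 1                 # one bit per neuron per time bin
packed = np.packbits(grid, axis=1)                   # 8 time bins per byte for storage/transfer

print(f"raw grid: {grid.nbytes/1e6:.1f} MB as bytes; "
      f"bit-packed: {packed.nbytes/1e6:.1f} MB per {window_ms} ms window")
```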

Monitoring large areas of the brain in real time. Applications of this new design include basic research, clinical diagnosis, and treatment. It would be especially useful for future implantable, bidirectional BMIs and BCIs, which are used to communicate complex data between neurons and computers. This would include monitoring large areas of the brain in paralyzed patients, revealing an imminent epileptic seizure, and providing real-time feedback control to robotic arms used by quadriplegics and others.

The system is intended for recording neural signals from implanted electrodes, such as this 32-electrode grid, used for long-term, stable neural recording and treatment of neurological disorders. (credit: Thor Balkhed)

“A considerable benefit of this architecture and data format is that it doesn’t require further translation, as the brain’s [spiking] signals are translated directly into bitcode,” making it available for computer processing and dramatically increasing the processing speed and database storage capacity.

“This means a considerable advantage in all communication between the brain and computers, not the least regarding clinical applications,” says Bengt Ljungquist, lead author of the study and doctoral student at Lund University.

Future BMI/BCI systems. Current neural-data acquisition systems are typically limited to 512 or 1024 channels and the data is not easily converted into a form that can be processed and stored on PCs and other computer systems.

“The demands on hardware and software used in the context of BMI/BCI are already high, as recent studies have used recordings of up to 1792 channels for a single subject,” the researchers note in an open-access paper published in the journal Neuroinformatics.

That’s expected to increase. In 2016, DARPA (U.S. Defense Advanced Research Project Agency) announced its Neural Engineering System Design (NESD) program*, intended “to develop an implantable neural interface able to provide unprecedented signal resolution and data-transfer bandwidth between the human brain and the digital world. …

“Neural interfaces currently approved for human use squeeze a tremendous amount of information through just 100 channels, with each channel aggregating signals from tens of thousands of neurons at a time. The result is noisy and imprecise. In contrast, the NESD program aims to develop systems that can communicate clearly and individually with any of up to one million neurons in a given region of the brain.”

System architecture overview of storage for large amounts of real-time neural data, proposed by Lund University researchers. A master clock pulse (a) synchronizes n acquisition systems (b), which handle bandpass filtering, spike sorting (for spike data), and down-sampling (for narrow-band data), receiving electrophysiological data from the subject (e). Neuronal spike data is encoded in a data grid of neurons × time bins (c). The resulting data grid is serialized and sent to spike-data storage in HDF5 file format (d), as well as to narrow-band (f) and waveform data storage (g). In this work, a and b are simulated, c and d are implemented, while f and g are suggested (not yet implemented) components. (credit: Bengt Ljungquist et al./Neuroinformatics)

* DARPA has since announced that it has “awarded contracts to five research organizations and one company that will support the Neural Engineering System Design (NESD) program: Brown University; Columbia University; Fondation Voir et Entendre (The Seeing and Hearing Foundation); John B. Pierce Laboratory; Paradromics, Inc.; and the University of California, Berkeley. These organizations have formed teams to develop the fundamental research and component technologies required to pursue the NESD vision of a high-resolution neural interface and integrate them to create and demonstrate working systems able to support potential future therapies for sensory restoration. Four of the teams will focus on vision and two will focus on aspects of hearing and speech.”


Abstract of A Bit-Encoding Based New Data Structure for Time and Memory Efficient Handling of Spike Times in an Electrophysiological Setup.

Recent neuroscientific and technical developments of brain machine interfaces have put increasing demands on neuroinformatic databases and data handling software, especially when managing data in real time from large numbers of neurons. Extrapolating these developments we here set out to construct a scalable software architecture that would enable near-future massive parallel recording, organization and analysis of neurophysiological data on a standard computer. To this end we combined, for the first time in the present context, bit-encoding of spike data with a specific communication format for real time transfer and storage of neuronal data, synchronized by a common time base across all unit sources. We demonstrate that our architecture can simultaneously handle data from more than one million neurons and provide, in real time (< 25 ms), feedback based on analysis of previously recorded data. In addition to managing recordings from very large numbers of neurons in real time, it also has the capacity to handle the extensive periods of recording time necessary in certain scientific and clinical applications. Furthermore, the bit-encoding proposed has the additional advantage of allowing an extremely fast analysis of spatiotemporal spike patterns in a large number of neurons. Thus, we conclude that this architecture is well suited to support current and near-future Brain Machine Interface requirements.

New algorithm will allow for simulating neural connections of entire brain on future exascale supercomputers

(credit: iStock)

An international team of scientists has developed an algorithm that represents a major step toward simulating neural connections in the entire human brain.

The new algorithm, described in an open-access paper published in Frontiers in Neuroinformatics, is intended to allow simulation of the human brain’s 100 billion interconnected neurons on supercomputers. The work involves researchers at the Jülich Research Centre, Norwegian University of Life Sciences, Aachen University, RIKEN, and KTH Royal Institute of Technology.

An open-source neural simulation tool. The algorithm was developed using NEST* (“neural simulation tool”) — open-source simulation software in widespread use by the neuroscientific community and a core simulator of the European Human Brain Project. With NEST, the behavior of each neuron in the network is represented by a small number of mathematical equations, the researchers explain in an announcement.

Since 2014, large-scale simulations of neural networks using NEST have been running on the petascale** K supercomputer at RIKEN and JUQUEEN supercomputer at the Jülich Supercomputing Centre in Germany to simulate the connections of about one percent of the neurons in the human brain, according to Markus Diesmann, PhD, Director at the Jülich Institute of Neuroscience and Medicine. Those simulations have used a previous version of the NEST algorithm.

Why supercomputers can’t model the entire brain (yet). “Before a neuronal network simulation can take place, neurons and their connections need to be created virtually,” explains senior author Susanne Kunkel of KTH Royal Institute of Technology in Stockholm.

During the simulation, a neuron’s action potentials (short electric pulses) first need to be sent to all 100,000 or so small computers, called nodes, each equipped with a number of processors doing the actual calculations. Each node then checks which of all these pulses are relevant for the virtual neurons that exist on this node.

That process requires one bit of information per processor for every neuron in the whole network. For a network of one billion neurons, a large part of the memory in each node is consumed by this single bit of information per neuron. Of course, the amount of computer memory required per processor for these extra bits per neuron increases with the size of the neuronal network. To go beyond the 1 percent and simulate the entire human brain would require the memory available to each processor to be 100 times larger than in today’s supercomputers.
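
The arithmetic behind that bottleneck is straightforward (illustrative numbers below): one bit of bookkeeping per neuron per processor already costs about 125 MB at the one-percent scale, and one hundred times that for the whole brain.

```python
# Rough arithmetic behind the memory bottleneck described above (illustrative).
bits_per_neuron_per_processor = 1
neurons_1pct = 1_000_000_000        # ~1% of the brain's ~100 billion neurons
neurons_full = 100_000_000_000

mb = lambda bits: bits / 8 / 1e6
print(f"per processor, 1% of brain:  {mb(neurons_1pct * bits_per_neuron_per_processor):,.0f} MB")
print(f"per processor, whole brain: {mb(neurons_full * bits_per_neuron_per_processor):,.0f} MB")
# Whole-brain bookkeeping alone is 100x larger, matching the article's claim that
# per-processor memory would need to be ~100 times today's capacity.
```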

In future exascale** computers, such as the post-K computer planned in Kobe and JUWELS at Jülich*** in Germany, the number of processors per compute node will increase, but the memory per processor and the number of compute nodes will stay the same.

Achieving whole-brain simulation on future exascale supercomputers. That’s where the next-generation NEST algorithm comes in. At the beginning of the simulation, the new NEST algorithm will allow the nodes to exchange information about which data on neuronal activity needs to be sent where. Once this knowledge is available, the exchange of data between nodes can be organized such that a given node only receives the information it actually requires. That will eliminate the need for the additional bit for each neuron in the network.
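
The idea can be sketched as a two-phase scheme, with all details assumed rather than taken from NEST's actual implementation: targets first announce which source neurons they need, and during the run each spike is shipped only to the nodes that subscribed to that neuron.

```python
from collections import defaultdict

# Minimal sketch of the directed-exchange idea (details assumed, not NEST's actual code).
def build_routing_table(subscriptions):
    """subscriptions: {target_node: set of source neuron ids it needs}."""
    routing = defaultdict(set)                 # source neuron -> set of target nodes
    for node, sources in subscriptions.items():
        for src in sources:
            routing[src].add(node)
    return routing

def route_spikes(spikes, routing):
    """Group spike events so each node receives only the spikes it subscribed to."""
    outbox = defaultdict(list)                 # target node -> list of (neuron, time) events
    for neuron, t in spikes:
        for node in routing.get(neuron, ()):
            outbox[node].append((neuron, t))
    return outbox

subs = {"node_A": {1, 2}, "node_B": {2, 3}}    # hypothetical subscriptions
table = build_routing_table(subs)
print(route_spikes([(1, 0.1), (2, 0.2), (3, 0.3), (4, 0.4)], table))
```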

Brain-simulation software, running on a current petascale supercomputer, can only represent about 1 percent of neuron connections in the cortex of a human brain (dark red area of brain on left). Only about 10 percent of neuron connections (center) would be possible on the next generation of exascale supercomputers, which will exceed the performance of today’s high-end supercomputers by 10- to 100-fold. However, a new algorithm could allow for 100 percent (whole-brain-scale simulation) on exascale supercomputers, using the same amount of computer memory as current supercomputers. (credit: Forschungszentrum Jülich, adapted by KurzweilAI)

With memory consumption under control, simulation speed will then become the main focus. For example, a large simulation of 0.52 billion neurons connected by 5.8 trillion synapses running on the supercomputer JUQUEEN in Jülich previously required 28.5 minutes to compute one second of biological time. With the improved algorithm, the time will be reduced to just 5.2 minutes, the researchers calculate.

“The combination of exascale hardware and [forthcoming NEST] software brings investigations of fundamental aspects of brain function, like plasticity and learning, unfolding over minutes of biological time, within our reach,” says Diesmann.

The new algorithm will also make simulations faster on presently available petascale supercomputers, the researchers found.

NEST simulation software update. In one of the next releases of the simulation software by the Neural Simulation Technology Initiative, the researchers will make the new open-source code freely available to the community.

For the first time, researchers will have the computer power available to simulate neuronal networks on the scale of the entire human brain.

Kenji Doya of Okinawa Institute of Science and Technology (OIST) may be among the first to try it. “We have been using NEST for simulating the complex dynamics of the basal ganglia circuits in health and Parkinson’s disease on the K computer. We are excited to hear the news about the new generation of NEST, which will allow us to run whole-brain-scale simulations on the post-K computer to clarify the neural mechanisms of motor control and mental functions,” he says.

* NEST is a simulator for spiking neural network models that focuses on the dynamics, size and structure of neural systems, rather than on the exact morphology of individual neurons. NEST is ideal for networks of spiking neurons of any size, such as models of information processing, e.g., in the visual or auditory cortex of mammals, models of network activity dynamics, e.g., laminar cortical networks or balanced random networks, and models of learning and plasticity.

** Petascale supercomputers operate at petaflop/s speeds (quadrillions, or 10¹⁵, floating-point operations per second). Future exascale supercomputers will operate at exaflop/s (10¹⁸ flop/s). The fastest supercomputer at this time is the Sunway TaihuLight at the National Supercomputing Center in Wuxi, China, operating at 93 petaflop/s.

*** At Jülich, the work is supported by the Simulation Laboratory Neuroscience, a facility of the Bernstein Network Computational Neuroscience at Jülich Supercomputing Centre. Partial funding comes from the European Union Seventh Framework Programme (Human Brain Project, HBP) and the European Union’s Horizon 2020 research and innovation programme, and the Exploratory Challenge on Post-K Computer (Understanding the neural mechanisms of thoughts and its applications to AI) of the Ministry of Education, Culture, Sports, Science and Technology (MEXT) Japan. With their joint project between Japan and Europe, the researchers hope to contribute to the formation of an International Brain Initiative (IBI).

BernsteinNetwork | NEST — A brain simulator


Abstract of Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

 

DARPA-funded ‘body on a chip’ microfluidic system could revolutionize drug evaluation

To measure the effects of drugs on different parts of the body, MIT’s new “Physiome-on-a-chip” microfluidic platform can connect 3D tissues from up to 10 “organs on chips” — allowing researchers to accurately replicate human-organ interactions for weeks at a time. (credit: Felice Frankel)

MIT bioengineers have developed a new microfluidic platform technology that could be used to evaluate new drugs and detect possible side effects before the drugs are tested in humans.

The microfluidic platform can connect 3D tissues from up to 10 organs. Replacing animal testing, it can accurately replicate human-organ interactions for weeks at a time and can allow for measuring the effects of drugs on different parts of the body, according to the engineers. For example, the system could reveal whether a drug that is intended to treat one organ will have adverse effects on another.

Physiome on a chip. The new technology was originally funded in 2012 by the Defense Advanced Research Projects Agency (DARPA) Microphysiological Systems (MPS) program (see “DARPA and NIH to fund ‘human body on a chip’ research”). The goal of the $32 million program was to model potential drug effects more accurately and rapidly.

Linda Griffith, PhD, the MIT School of Engineering Professor of Teaching Innovation, a professor of biological engineering and mechanical engineering, and her colleagues decided to pursue a technology that they call a “physiome on a chip.” Griffith is one of the senior authors of a paper on the study, which appears in the open-access Nature journal Scientific Reports.

Schematic overview of “Physiome-on-a-chip” approach, using bioengineered devices that nurture many interconnected 3D “microphysiological systems” (MPSs), aka “organs-on-chips.” MPSs are in vitro (lab) models that capture facets of in vivo (live) organ function. They represent specified functional behaviors of each organ of interest, such as gut, brain, and liver, as shown here. MPSs are designed to capture essential features of in vivo physiology that is based on quantitative systems models tailored for individual applications, such as drug fate or disease modeling. (illustration credit: Victor O. Leshyk)

To achieve this, the researchers needed to develop a platform that would allow tissues to grow and interact with each other. They also needed to develop engineered tissue that would accurately mimic the functions of human organs. Before this project was launched, no one had succeeded in connecting more than a few different tissue types on a platform. And most researchers working on this kind of chip were working with closed microfluidic systems, which allow fluid to flow in and out but do not offer an easy way to manipulate what is happening inside the chip. These systems also require awkward external pumps.

The MIT team decided to create an open system, making it easier to manipulate the system and remove samples for analysis. Their system was adapted from technology they previously developed and commercialized through U.K.-based CN BioInnovations*. It also incorporates several on-board pumps that can control the flow of liquid between the “organs,” replicating the circulation of blood, immune cells, and proteins through the human body. The pumps also allow larger tissues (for example, tumors within an organ) to be evaluated.

Complex interactions with 1 or 2 million cells in 10 organs. The researchers created three versions of their system, linking up to 10 organ types: liver, lung, gut, endometrium, brain, heart, pancreas, kidney, skin, and skeletal muscle. Each “organ” consists of clusters of 1 million to 2 million cells.

MPS platform and flow partitioning. (Left) Exploded rendering of the 7-MPS platform. Rigid plates of polysulfone (yellow) and acrylic (clear) sandwich an elastomeric (rubbery) membrane to form a pumping manifold with integrated fluid channels. Channels interface to the top side of the polysulfone plate (yellow) to deliver fluid to each MPS compartment (such as brain) in a defined manner. Fluid return occurs passively via spillway channels machined into the top plate. (Right) Flow partitioning mirrors physiological cardiac output for 7-way platforms. A 10-MPS platform adds muscle, skin, and kidney flow. (credit: Collin D. Edington et al./Scientific Reports)
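
The flow-partitioning idea itself is simple to express: the on-board pumps split a total recirculation flow among the MPS compartments in proportion to each organ's share of cardiac output. The fractions and total flow below are hypothetical placeholders, not the values used on the MIT platform.

```python
# Illustrative flow-partitioning calculation (fractions and total flow are hypothetical
# placeholders, not the values used on the MIT platform).
cardiac_output_fraction = {          # rough physiological proportions, assumed for illustration
    "liver": 0.25, "gut": 0.20, "kidney": 0.20, "brain": 0.12,
    "lung": 0.09, "heart": 0.05, "pancreas": 0.03, "endometrium": 0.02,
    "skin": 0.02, "skeletal_muscle": 0.02,
}
total_flow_ul_per_min = 1000         # hypothetical total platform recirculation flow

for organ, frac in cardiac_output_fraction.items():
    print(f"{organ:16s} {frac * total_flow_ul_per_min:6.0f} µL/min")
```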

These tissues don’t replicate the entire organ, but they do perform many of its important functions. Significantly, most of the tissues come directly from patient samples rather than from cell lines that have been developed for lab use. These “primary cells” are more difficult to work with, but offer a more representative model of organ function, Griffith says.

Using this system, the researchers showed that they could deliver a drug to the gastrointestinal tissue, mimicking oral ingestion of a drug, and then observe as the drug was transported to other tissues and metabolized. They could measure where the drugs went, the effects of the drugs on different tissues, and how the drugs were broken down.

Replacing animal testing. The new microfluidic platform technology is superior to animal testing, according to Griffith:

  • Preclinical testing in animals can offer information about a drug’s safety and effectiveness before human testing begins, but those tests may not reveal potential side effects.
  • Drugs that work in animals often fail in human trials.
  • “Some of these effects are really hard to predict from animal models because the situations that lead to them are idiosyncratic. With our chip, you can distribute a drug and then look for the effects on other tissues and measure the exposure and how it is metabolized.”
  • These chips could also be used to evaluate antibody drugs and other immunotherapies, which are difficult to test thoroughly in animals because the treatments are only designed to interact with the human immune system.

Modeling Parkinson’s and metastasizing tumors. Griffith believes that the most immediate applications for this technology involve modeling two to four specific organs. Her lab is now developing a model system for Parkinson’s disease that includes brain, liver, and gastrointestinal tissue, which she plans to use to investigate the hypothesis that bacteria found in the gut can influence the development of Parkinson’s disease. Other applications the lab is investigating include modeling tumors that metastasize to other parts of the body.

“An advantage of our platform is that we can scale it up or down and accommodate a lot of different configurations,” Griffith says. “I think the field is going to go through a transition where we start to get more information out of a three-organ or four-organ system, and it will start to become cost-competitive because the information you’re getting is so much more valuable.”

The research was funded by the U.S. Army Research Office and DARPA.

* Co-author David Hughes is an employee of CN BioInnovations, the commercial vendor for the Liverchip. Linda Griffith and Steve Tannenbaum receive patent royalties from the Liverchip.

Abstract of Interconnected Microphysiological Systems for Quantitative Biology and Pharmacology Studies

Microphysiological systems (MPSs) are in vitro models that capture facets of in vivo organ function through use of specialized culture microenvironments, including 3D matrices and microperfusion. Here, we report an approach to co-culture multiple different MPSs linked together physiologically on re-useable, open-system microfluidic platforms that are compatible with the quantitative study of a range of compounds, including lipophilic drugs. We describe three different platform designs – “4-way”, “7-way”, and “10-way” – each accommodating a mixing chamber and up to 4, 7, or 10 MPSs. Platforms accommodate multiple different MPS flow configurations, each with internal re-circulation to enhance molecular exchange, and feature on-board pneumatically-driven pumps with independently programmable flow rates to provide precise control over both intra- and inter-MPS flow partitioning and drug distribution. We first developed a 4-MPS system, showing accurate prediction of secreted liver protein distribution and 2-week maintenance of phenotypic markers. We then developed 7-MPS and 10-MPS platforms, demonstrating reliable, robust operation and maintenance of MPS phenotypic function for 3 weeks (7-way) and 4 weeks (10-way) of continuous interaction, as well as PK analysis of diclofenac metabolism. This study illustrates several generalizable design and operational principles for implementing multi-MPS “physiome-on-a-chip” approaches in drug discovery.

‘Minimalist machine learning’ algorithm analyzes complex microscopy and other images from very little data

(a) Raw microscopy image of a slice of mouse lymphoblastoid cells. (b) Reconstructed image using time-consuming manual segmentation — note missing data (arrow). (c) Equivalent output of the new “Mixed-Scale Dense Convolution Neural Network” algorithm with 100 layers. (credit: Data from A. Ekman and C. Larabell, National Center for X-ray Tomography.)

Mathematicians at Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a radical new approach to machine learning: a new type of highly efficient “deep convolutional neural network” that can automatically analyze complex experimental scientific images from limited data.*

As experimental facilities generate higher-resolution images at higher speeds, scientists struggle to manage and analyze the resulting data, which is often done painstakingly by hand.

For example, biologists record cell images and painstakingly outline the borders and structure by hand. One person may spend weeks coming up with a single fully three-dimensional image of a cellular structure. Or materials scientists use tomographic reconstruction to peer inside rocks and materials, and then manually label different regions, identifying cracks, fractures, and voids by hand. Contrasts between different yet important structures are often very small and “noise” in the data can mask features and confuse the best of algorithms.

To meet this challenge, mathematicians Daniël Pelt and James Sethian at Berkeley Lab’s Center for Advanced Mathematics for Energy Research Applications (CAMERA)** attacked the problem of machine learning from very limited amounts of data — to do “more with less.”

Their goal was to figure out how to build an efficient set of mathematical “operators” that could greatly reduce the number of required parameters.

“Mixed-Scale Dense” network learns quickly with far fewer images

Many applications of machine learning to imaging problems use deep convolutional neural networks (DCNNs), in which the input image and intermediate images are convolved in a large number of successive layers, allowing the network to learn highly nonlinear features. To train deeper and more powerful networks, additional layer types and connections are often required. DCNNs typically use a large number of intermediate images and trainable parameters, often more than 100 million, to achieve results for difficult problems.

The new method the mathematicians developed, the “Mixed-Scale Dense Convolution Neural Network” (MS-D), avoids many of these complications. It “learns” much more quickly and requires far fewer images than typical machine-learning methods, which need tens or hundreds of thousands of manually labeled images, according to Pelt and Sethian.

(Top) A schematic representation of a two-layer CNN architecture. (Middle) A schematic representation of a common DCNN architecture with scaling operations; downward arrows represent downscaling operations, upward arrows represent upscaling operations and dashed arrows represent skipped connections. (Bottom) Schematic representation of an MS-D network; colored lines represent 3×3 dilated convolutions, with each color corresponding to a different dilation. (credit: Daniël Pelt and James Sethian/PNAS, composited by KurzweilAI)

The “Mixed-Scale Dense” network architecture calculates “dilated convolutions” — a substitute for complex scaling operations. To capture features at various spatial ranges, it employs multiple scales within a single layer, and densely connects all intermediate images. The new algorithm achieves accurate results with few intermediate images and parameters, eliminating both the need to tune hyperparameters and additional layers or connections to enable training, according to the researchers.***
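
A minimal sketch of an MS-D-style network, following the description above (dilated 3×3 convolutions, dilations cycling across layers, and dense connections to all earlier feature maps); the depth, width, and dilation schedule here are assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn

# Minimal sketch of a Mixed-Scale Dense (MS-D) style network: every layer is a small
# dilated 3x3 convolution over ALL previous feature maps (dense connections), with the
# dilation cycling over layers to mix spatial scales. Depth/width values are assumptions.
class MixedScaleDense(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, depth=20, width=1, max_dilation=10):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_ch
        for i in range(depth):
            d = i % max_dilation + 1                       # cycle dilations 1..max_dilation
            self.layers.append(nn.Conv2d(channels, width, kernel_size=3,
                                         padding=d, dilation=d))
            channels += width                              # densely accumulate feature maps
        self.head = nn.Conv2d(channels, out_ch, kernel_size=1)  # 1x1 conv over everything

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(torch.relu(layer(torch.cat(features, dim=1))))
        return self.head(torch.cat(features, dim=1))

net = MixedScaleDense()
print(net(torch.randn(1, 1, 64, 64)).shape)                # -> torch.Size([1, 1, 64, 64])
```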

“In many scientific applications, tremendous manual labor is required to annotate and tag images — it can take weeks to produce a handful of carefully delineated images,” said Sethian, who is also a mathematics professor at the University of California, Berkeley. “Our goal was to develop a technique that learns from a very small data set.”

Details of the algorithm were published Dec. 26, 2017 in a paper in the Proceedings of the National Academy of Sciences.

Radically transforming our ability to understand disease

The MS-D approach is already being used to extract biological structure from cell images, and is expected to provide a major new computational tool to analyze data across a wide range of research areas. In one project, the MS-D method needed data from only seven cells to determine the cell structure.

“The breakthrough resulted from realizing that the usual downscaling and upscaling that capture features at various image scales could be replaced by mathematical convolutions handling multiple scales within a single layer,” said Pelt, who is also a member of the Computational Imaging Group at the Centrum Wiskunde & Informatica, the national research institute for mathematics and computer science in the Netherlands.

“In our laboratory, we are working to understand how cell structure and morphology influences or controls cell behavior. We spend countless hours hand-segmenting cells in order to extract structure, and identify, for example, differences between healthy vs. diseased cells,” said Carolyn Larabell, Director of the National Center for X-ray Tomography and Professor at the University of California San Francisco School of Medicine.

“This new approach has the potential to radically transform our ability to understand disease, and is a key tool in our new Chan-Zuckerberg-sponsored project to establish a Human Cell Atlas, a global collaboration to map and characterize all cells in a healthy human body.”

To make the algorithm accessible to a wide set of researchers, a Berkeley team built a web portal, “Segmenting Labeled Image Data Engine (SlideCAM),” as part of the CAMERA suite of tools for DOE experimental facilities.

High-resolution science from low-resolution data

A different challenge is to produce high-resolution images from low-resolution input. If you’ve ever tried to enlarge a small photo and found it only gets worse as it gets bigger, this may sound close to impossible.

(a) Tomographic images of a fiber-reinforced mini-composite, reconstructed using 1024 projections. Noisy images (b) of the same object were obtained by reconstructing using only 128 projections, and were used as input to an MS-D network (c). A small region indicated by a red square is shown enlarged in the bottom-right corner of each image. (credit: Daniël Pelt and James A. Sethian/PNAS)

As an example, consider denoising tomographic reconstructions of a fiber-reinforced mini-composite material. In an experiment described in the paper, low-noise images were reconstructed from 1,024 acquired X-ray projections; noisy images of the same object were then obtained by reconstructing from only 128 projections. The noisy images served as training inputs to the Mixed-Scale Dense network, with the corresponding low-noise images used as target outputs. The trained network could then take noisy input data and produce images that closely match the low-noise reconstructions.
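
A training loop for such a denoising setup might look like the following sketch (illustrative only: the random tensors are stand-ins for reconstructed slices, and the small convolutional model stands in for the MS-D network).

import torch
import torch.nn as nn

# Stand-in image-to-image model (the MS-D network plays this role in the paper).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Random stand-ins for real data: "noisy" slices reconstructed from 128
# projections, "clean" targets reconstructed from 1,024 projections.
noisy = torch.randn(8, 1, 64, 64)
clean = torch.randn(8, 1, 64, 64)

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), clean)  # learn the noisy -> low-noise mapping
    loss.backward()
    optimizer.step()

# After training, model(new_noisy_slice) approximates the low-noise reconstruction.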

Pelt and Sethian are now applying their approach to other new areas, such as real-time analysis of images coming out of synchrotron light sources, biological reconstruction of cells, and brain mapping.

* Inspired by the brain, convolutional neural networks are computer algorithms that have been successfully used in analyzing visual imagery. “Deep convolutional neural networks (DCNNs) use a network architecture similar to standard convolutional neural networks, but consist of a larger number of layers, which enables them to model more complicated functions. In addition, DCNNs often include downscaling and upscaling operations between layers, decreasing and increasing the dimensions of feature maps to capture features at different image scales.” — Daniël Pelt and James A. Sethian/PNAS

** In 2014, Sethian established CAMERA at the Department of Energy’s (DOE) Lawrence Berkeley National Laboratory as an integrated, cross-disciplinary center to develop and deliver fundamental new mathematics required to capitalize on experimental investigations at DOE Office of Science user facilities. CAMERA is part of the lab’s Computational Research Division. It is supported by the offices of Advanced Scientific Computing Research and Basic Energy Sciences in the Department of Energy’s Office of Science. The single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time.

*** “By combining dilated convolutions and dense connections, the MS-D network architecture can achieve accurate results with significantly fewer feature maps and trainable parameters than existing architectures, enabling accurate training with relatively small training sets. MS-D networks are able to automatically adapt by learning which combination of dilations to use, allowing identical MS-D networks to be applied to a wide range of different problems.” — Daniël Pelt and James A. Sethian/PNAS


Abstract of A mixed-scale dense convolutional neural network for image analysis

Deep convolutional neural networks have been successfully applied to many image-processing problems in recent works. Popular network architectures often add additional operations and connections to the standard architecture to enable training deeper networks. To achieve accurate results in practice, a large number of trainable parameters are often required. Here, we introduce a network architecture based on using dilated convolutions to capture features at different image scales and densely connecting all feature maps with each other. The resulting architecture is able to achieve accurate results with relatively few parameters and consists of a single set of operations, making it easier to implement, train, and apply in practice, and automatically adapts to different problems. We compare results of the proposed network architecture with popular existing architectures for several segmentation problems, showing that the proposed architecture is able to achieve accurate results with fewer parameters, with a reduced risk of overfitting the training data.

Neuroscientists devise scheme for mind-uploading centuries in the future

Representative electron micrograph of white matter region in cryopreserved pig brain (credit: Brain Preservation Foundation)

Robert McIntyre, an MIT graduate, and Gregory M. Fahy, Ph.D., Chief Scientific Officer of 21st Century Medicine (21CM), have developed a method for preserving a brain so that its connectome (the 150 trillion microscopic synaptic connections presumed to encode all of a person’s knowledge) could later be scanned.

That data could possibly be used, centuries later, to reconstruct a whole-brain emulation — uploading your mind into a computer or Avatar-style robotic, virtual, or synthetic body, McIntyre and others suggest.

According to MIT Technology Review, McIntyre has formed a startup company called Nectome that has won a large NIH grant for creating “technologies to enable whole-brain nanoscale preservation and imaging.”

McIntyre is also collaborating with Edward Boyden, Ph.D., a top neuroscientist at MIT and inventor of a new “expansion microscopy” technique (to achieve super-resolution with ordinary confocal microscopes), as KurzweilAI recently reported. The technique also causes brain tissue to swell, making it more accessible.

Preserving brain information patterns, not biological function

Unlike cryonics (freezing people or heads for future revival), the researchers did not intend to revive a pig or pig brain (or human, in the future). Instead, the idea is to develop a bridge to future mind-uploading technology by preserving the information content of the brain, as encoded within the frozen connectome.

The first step in the aldehyde-stabilized cryopreservation (ASC) procedure is to perfuse the brain’s vascular system with the toxic fixative glutaraldehyde (typically used as an embalming fluid, but also used by neuroscientists to prepare brain tissue for the highest-resolution electron-microscopic and immunofluorescent examination). That instantly halts metabolic processes by covalently crosslinking the brain’s proteins in place, leading to death (by contemporary standards). The brain is then quickly stored at -130 degrees C, stopping all further decay.

The method, tested on a pig’s brain, won 21CM, lead researcher McIntyre, and senior author Fahy the $80,000 Large Mammal Brain Preservation Prize offered by the Brain Preservation Foundation (BPF), announced March 13.

To accomplish this, McIntyre’s team scaled up the same procedure they had previously used to preserve a rabbit brain, for which they won the BPF’s Small Mammal Prize in February 2016, as KurzweilAI has reported. That research was judged by neuroscientist Ken Hayworth, Ph.D., President of the Brain Preservation Foundation, and noted connectome researcher Prof. Sebastian Seung, Ph.D., Princeton Neuroscience Institute.

Caveats

However, the BPF warns that this single prize-winning laboratory demonstration is “insufficient to address the types of quality control measures that should be expected of any procedure that would be applied to humans.” Hayworth has outlined his position on the medical procedure and associated quality-control protocol that should be required before any such service is offered.

The ASC method, if verified by science, raises serious ethical, legal, and medical questions. For example:

  • Should ASC be developed into a medical procedure and if so, how?
  • Should ASC be available in an assisted suicide scenario for terminal patients?
  • Could ASC be a “last resort” to enable a dying person’s mind to survive and reach a future world?
  • How real are claims of future mind uploading?
  • Is it legal?*

“It may take decades or even centuries to develop the technology to upload minds if it is even possible at all,” says the BPF press release. “ASC would enable patients to safely wait out those centuries. For now, neuroscience is actively exploring the plausibility of mind uploading through ongoing studies of the physical basis of memory, and through development of large-scale neural simulations and tools to map connectomes.”

Interested? Nectome’s wait list requires a $10,000 (refundable) deposit.

* Nectome “has consulted with lawyers familiar with California’s two-year-old End of Life Option Act, which permits doctor-assisted suicide for terminal patients, and believes its service will be legal.” — MIT Technology Review