Planarian regeneration model discovered by AI algorithm

Head-trunk-tail planarian regeneration results from experiments (credit: Daniel Lobo and Michael Levin/PLOS Computational Biology)

An artificial intelligence system has for the first time reverse-engineered the regeneration mechanism of planaria — the small worms whose extraordinary power to regrow body parts has made them a research model in human regenerative medicine.

The discovery by Tufts University biologists presents the first model of regeneration discovered by a non-human intelligence and the first comprehensive model of planarian regeneration, which had eluded human scientists for more than 100 years. The work, published in the June 4 issue of PLOS Computational Biology (open access), demonstrates how “robot science” can help human scientists in the future.

To bioengineer complex organs, scientists need to understand the mechanisms by which those shapes are normally produced by the living organism.

However, there’s a significant knowledge gap between identifying the molecular genetic components needed to produce a particular organism’s shape and understanding how to generate that complex shape in the correct size and orientation, said the paper’s senior author, Michael Levin, Ph.D., Vannevar Bush Professor of Biology and director of the Tufts Center for Regenerative and Developmental Biology.

“Most regenerative models today derived from genetic experiments are arrow diagrams, showing which gene regulates which other gene. That’s fine, but it doesn’t tell you what the ultimate shape will be. You cannot tell if the outcome of many genetic pathway models will look like a tree, an octopus or a human,” said Levin.

“Most models show some necessary components for the process to happen, but not what dynamics are sufficient to produce the shape, step by step. What we need are algorithmic or constructive models, which you could follow precisely and there would be no mystery or uncertainty. You follow the recipe and out comes the shape.”

Such models are required to know what triggers could be applied to such a system to cause regeneration of particular components, or other desired changes in shape. However, no such tools yet exist for mining the fast-growing mountain of published experimental data in regeneration and developmental biology, said the paper’s first author, Daniel Lobo, Ph.D., post-doctoral fellow in the Levin lab.

An evolutionary computation algorithm

To address this challenge, Lobo and Levin developed an algorithm that “evolves” candidate regulatory networks until they accurately predict the results of published laboratory experiments, which the researchers had entered into a database.

“Our goal was to identify a regulatory network that could be executed in every cell in a virtual worm so that the head-tail patterning outcomes of simulated experiments would match the published data,” Lobo said.

The algorithm generated networks by randomly combining previous networks and performing random changes, additions and deletions. Each candidate network was tested in a virtual worm, under simulated experiments. The algorithm compared the resulting shape from the simulation with real published data in the database.

As evolution proceeded, the new networks gradually explained more of the experiments in the database, which comprised most of the known planarian experimental literature on head-versus-tail regeneration.
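In outline, what Lobo and Levin describe is a classic evolutionary (genetic) algorithm: keep a population of candidate networks, score each one against the database, and breed the best. The Python sketch below shows only that outline; the virtual-worm simulator is replaced by a toy scoring function, and every name and parameter is illustrative rather than taken from the paper’s actual system.

```python
import random

# Toy stand-in for the virtual-worm test: score a candidate "network"
# by how many database experiments its simulated outcomes match. Here a
# network is just a bit string and the experiments a hidden target --
# purely illustrative, not the paper's encoding.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0]

def fitness(network):
    """Number of simulated experiments the candidate network explains."""
    return sum(1 for gene, want in zip(network, TARGET) if gene == want)

def crossover(a, b):
    """Randomly combine two parent networks."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(network, rate=0.1):
    """Random changes; the real system also adds and deletes components."""
    return [1 - gene if random.random() < rate else gene for gene in network]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                                  # explains every experiment
    parents = population[:10]                  # keep the best candidates
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(f"best network explains {fitness(best)}/{len(TARGET)} experiments")
```

The expensive step in the real system is the fitness test, where each candidate network must be simulated in a virtual worm across every experiment in the database.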

Regenerative model discovered by AI

Regulatory network found by the automated system, explaining the combined phenotypic (forms and characteristics) experimental data of the key publications of head-trunk-tail planarian regeneration (credit: Daniel Lobo and Michael Levin/PLOS Computational Biology)

The researchers ultimately applied the algorithm to a combined experimental dataset of 16 key planarian regeneration experiments to determine if the approach could identify a comprehensive regulatory network of planarian regeneration.

After 42 hours, the algorithm returned the discovered regulatory network, which correctly predicted all 16 experiments in the dataset. The network comprised seven known regulatory molecules as well as two proteins that had not yet been identified in existing papers on planarian regeneration.

“This represents the most comprehensive model of planarian regeneration found to date. It is the only known model that mechanistically explains head-tail polarity determination in planaria under many different functional experiments and is the first regenerative model discovered by artificial intelligence,” said Levin.

The paper represents a successful application of the growing field of “robot science.”

“While the artificial intelligence in this project did have to do a whole lot of computations, the outcome is a theory of what the worm is doing, and coming up with theories of what’s going on in nature is pretty much the most creative, intuitive aspect of the scientist’s job,” Levin said.

“One of the most remarkable aspects of the project was that the model it found was not a hopelessly tangled network that no human could actually understand, but a reasonably simple model that people can readily comprehend. All this suggests to me that artificial intelligence can help with every aspect of science, not only data mining, but also inference of meaning of the data.”

This work was supported with funding from the National Science Foundation, National Institutes of Health, USAMRMC, and the Mathers Foundation.

Next-generation energy-efficient light-based computers

Infrared light enters this silicon structure from the left. The cut-out patterns, determined by an algorithm, route two different wavelengths of this light into the two pathways on the right. (credit: Alexander Piggott)

Stanford University engineers have developed a new design algorithm that can automate the process of designing optical interconnects, which could lead to faster, more energy-efficient computers that use light rather than electricity for internal data transport.

Light can transmit more data while consuming far less power than electricity. According to a study by David Miller, the W.M. Keck Foundation Professor of Electrical Engineering at Stanford, up to 80 percent of microprocessor power is consumed by sending data over interconnects (the wires that connect chips).

In addition, “for chip-scale links, light can carry more than 20 times as much data,” said Stanford graduate student Alexander Y. Piggott, lead author of a Nature Photonics article.

However, designing optical interconnects (silicon structures that guide light on a chip) is complex, and each interconnect requires a custom design. Given that thousands of interconnects are needed for each electronic system, optical data transport has remained impractical.

Optimized design of optical interconnects

Now the Stanford engineers believe they’ve broken that bottleneck by inventing what they call an “inverse design algorithm.” It works as the name suggests: the engineers specify what they want the optical circuit to do, and the software provides the details of how to fabricate a silicon structure to perform the task.
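Conceptually, that inversion is an optimization loop: pick a figure of merit that encodes the desired behavior, then adjust the silicon pattern to maximize it. The Python sketch below illustrates the idea only; the electromagnetic solver is replaced by a fixed random linear map and the optimizer by single-pixel hill climbing, whereas a real flow would call a Maxwell solver and use gradient-based optimization. All names and sizes here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in "solver": a fixed random linear map from a 64-pixel design
# pattern to the field amplitudes at two output ports, one map per
# wavelength. A real inverse-design flow would solve Maxwell's equations.
solver = {1300: rng.normal(size=(2, 64)), 1550: rng.normal(size=(2, 64))}

def port_powers(pattern, wavelength_nm):
    amplitudes = solver[wavelength_nm] @ pattern
    powers = amplitudes ** 2
    return powers / powers.sum()       # fraction of output power per port

def objective(pattern):
    # Specify *what* the device should do -- 1,300 nm light to port 0,
    # 1,550 nm light to port 1 -- and let the optimizer work out *how*.
    return port_powers(pattern, 1300)[0] + port_powers(pattern, 1550)[1]

# Hill-climb over a binary etch pattern (0 = etched, 1 = silicon).
pattern = rng.integers(0, 2, size=64).astype(float)
for _ in range(3000):
    i = rng.integers(64)
    trial = pattern.copy()
    trial[i] = 1.0 - trial[i]          # flip one pixel
    if objective(trial) > objective(pattern):
        pattern = trial

print(f"1300 nm into port 0: {port_powers(pattern, 1300)[0]:.2f}")
print(f"1550 nm into port 1: {port_powers(pattern, 1550)[1]:.2f}")
```

The point of the sketch is the division of labor: the designer writes down only the objective, and the search over fabricable patterns is left entirely to the machine.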

The wavelength demultiplexer developed by the Stanford team comprised one input waveguide, two output waveguides, and a chip for switching outputs based on incoming wavelengths (credit: Alexander Y. Piggott et al./Nature Photonics)

“We used the algorithm to design a working optical circuit and made several copies in our lab,” said Jelena Vuckovic, a Stanford professor of electrical engineering and senior author of the article.

The optical circuit they created was a silicon wavelength demultiplexer (which splits incoming light into multiple channels based on the wavelengths of the light). The device split 1,300 nm and 1,550 nm light from an input waveguide into two output waveguides.

(“Multiplexing” allows multiple signals to be transmitted over a single thin fiber-optic cable; it is how the Internet and cable television transmit massive amounts of data that would be impossible to carry over copper wires.)

The engineers note that once the algorithm has calculated the proper shape for the task, standard scalable industrial processes can be used to transfer that pattern onto silicon. The device footprint is only 2.8 × 2.8 micrometers, making this the smallest dielectric wavelength splitter to date.

The researchers envision other potential applications for their inverse design algorithm, including high-bandwidth optical communications, compact microscopy systems, and ultra-secure quantum communications.


Abstract of Inverse design and demonstration of a compact and broadband on-chip wavelength demultiplexer

Integrated photonic devices are poised to play a key role in a wide variety of applications, ranging from optical interconnects and sensors to quantum computing. However, only a small library of semi-analytically designed devices is currently known. Here, we demonstrate the use of an inverse design method that explores the full design space of fabricable devices and allows us to design devices with previously unattainable functionality, higher performance and robustness, and smaller footprints than conventional devices. We have designed a silicon wavelength demultiplexer that splits 1,300 nm and 1,550 nm light from an input waveguide into two output waveguides, and fabricated and characterized several devices. The devices display low insertion loss (∼2 dB), low crosstalk (<−11 dB) and wide bandwidths (>100 nm). The device footprint is 2.8 × 2.8 μm², making this the smallest dielectric wavelength splitter.

‘Brainprints’ could replace passwords

Sarah Laszlo, an assistant professor of psychology, adjusting an EEG electrode (credit: Jonathan Cohen, Binghamton University)

The way your brain responds to certain words could be used to replace passwords, according to a study by researchers from Binghamton University, published in the journal Neurocomputing.

The psychologists recorded EEG signals from volunteers reading a list of acronyms, focusing on the part of the brain associated with reading and recognizing words.

Participants’ “event-related potential” signals reacted differently to each acronym, enough that a computer system was able to identify each volunteer with 94 percent accuracy, using only three electrodes.

The results suggest that brainwaves could be used by security systems to verify a person’s identity.
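At its simplest, such a verification step is template matching: enroll one averaged ERP per person, then assign a fresh recording to the nearest stored template. The Python sketch below runs on synthetic data and uses a nearest-template rule; the actual study applied several more sophisticated pattern classifiers, and nothing here is taken from its code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: each "volunteer" has an idiosyncratic mean ERP
# (3 electrodes x 20 time samples, flattened) plus trial-to-trial noise.
n_people, n_features = 5, 3 * 20
templates = rng.normal(size=(n_people, n_features))

def record_erp(person, n_trials=40):
    """Average many noisy single-trial responses into one ERP."""
    trials = templates[person] + rng.normal(scale=2.0,
                                            size=(n_trials, n_features))
    return trials.mean(axis=0)

# Enrollment: store one averaged ERP per person.
enrolled = np.stack([record_erp(p) for p in range(n_people)])

# Identification: match a fresh ERP to the nearest enrolled template.
correct = 0
for person in range(n_people):
    probe = record_erp(person)
    guess = np.argmin(np.linalg.norm(enrolled - probe, axis=1))
    correct += int(guess == person)

print(f"identified {correct}/{n_people} volunteers correctly")
```

Averaging over many trials is what makes the templates stable: the stimulus-locked ERP survives the averaging while uncorrelated noise cancels out.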

Better than fingerprints or retinal patterns

According to Sarah Laszlo, assistant professor of psychology and linguistics at Binghamton University and co-author of the “Brainprint” paper, brain biometrics are appealing because they are cancellable (can be reset) and cannot be stolen by malicious means, such as copying a fingerprint.

“If someone’s fingerprint is stolen, that person can’t just grow a new finger to replace the compromised fingerprint — the fingerprint for that person is compromised forever. Fingerprints are ‘non-cancellable.’ Brainprints, on the other hand, are potentially cancellable.

“So, in the unlikely event that attackers were actually able to steal a brainprint from an authorized user, the authorized user could then ‘reset’ their brainprint,” Laszlo said, meaning the user could simply record the EEG pattern associated with another word or phrase.

Useful in high-security environments

Sample correctly classified brainprint recording (credit: B.C. Armstrong/Neurocomputing)

Zhanpeng Jin, an assistant professor in Binghamton University’s departments of Electrical and Computer Engineering and Biomedical Engineering, doesn’t see brainprint as the kind of system that would be mass-produced for low-security applications (at least in the near future*), but it could have important security applications.

“We tend to see the applications of this system as being more along the lines of high-security physical locations, like the Pentagon, where there aren’t that many users that are authorized to enter, and those users don’t need to constantly be authorizing the way that a consumer might need to authorize into their phone or computer,” Jin said.

The project is funded by the National Science Foundation and Binghamton University’s Interdisciplinary Collaboration Grants (ICG) Program.

* Widespread use of low-cost EEG devices could potentially change that.


Abstract of Brainprint: Assessing the uniqueness, collectability, and permanence of a novel method for ERP biometrics

The human brain continually generates electrical potentials representing neural communication. These potentials can be measured at the scalp, and constitute the electroencephalogram (EEG). When the EEG is time-locked to stimulation – such as the presentation of a word – and averaged over many such presentations, the Event-Related Potential (ERP) is obtained. The functional characteristics of components of the ERP are well understood, and some components represent processing that may differ uniquely from individual to individual—such as the N400 component, which represents access to the semantic network. We applied several pattern classifiers to ERPs representing the response of individuals to a stream of text designed to be idiosyncratically familiar to different individuals. Results indicate that there are robustly identifiable features of the ERP that enable labeling of ERPs as belonging to individuals with accuracy reliably above chance (in the range of 82–97%). Further, these features are stable over time, as indicated by continued accurate identification of individuals from ERPs after a lag of up to six months. Even better, the high degree of labeling accuracy achieved in all cases was achieved with the use of only 3 electrodes on the scalp—the minimal possible number that can acquire clean data.