Single-molecule-level data storage may achieve 100 times higher data density

(credit: iStock)

Scientists at the University of Manchester have developed a data-storage method that could achieve 100 times higher data density than current technologies.*

The system would allow data servers to operate at the (relatively high) temperature of -213 °C. That could make it possible in the future to chill data servers with liquid nitrogen (-196 °C), a relatively cheap coolant compared with the far more expensive liquid helium (-269 °C) currently used.
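For reference, the kelvin figures in the research map onto these Celsius temperatures via the standard conversion (a quick illustrative sketch):

```python
# Kelvin-to-Celsius conversion for the temperatures quoted above.

def kelvin_to_celsius(kelvin: float) -> float:
    """Standard conversion: subtract 273.15."""
    return kelvin - 273.15

print(round(kelvin_to_celsius(60)))    # hysteresis temperature: -213 °C
print(round(kelvin_to_celsius(77)))    # liquid nitrogen boiling point: -196 °C
print(round(kelvin_to_celsius(4.2)))   # liquid helium boiling point: -269 °C
```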

The research provides proof-of-concept that such technologies could be achievable in the near future “with judicious molecular design.”

Huge benefits for the environment

Molecular-level data storage could lead to much smaller hard drives that require less energy, meaning data centers across the globe could be smaller, lower-cost, and a lot more energy-efficient.

Google data centers (credit: Google)

For example, Google currently has 15 data centers around the world. They process an average of 40,000 searches per second, which adds up to 3.5 billion searches per day and 1.2 trillion searches per year. To deal with all that data, Google had approximately 2.5 million servers across its data centers, it was reported in 2016, and that number was likely to rise.
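The daily and yearly totals follow directly from the per-second rate; a quick sanity check using the widely reported round figure of about 40,000 searches per second:

```python
# Sanity check of the search-volume arithmetic (round numbers only).
searches_per_second = 40_000
searches_per_day = searches_per_second * 86_400   # seconds in a day
searches_per_year = searches_per_day * 365

print(f"{searches_per_day:.2g} per day")    # ~3.5e9: 3.5 billion
print(f"{searches_per_year:.2g} per year")  # ~1.3e12: roughly the quoted 1.2 trillion
```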

Some reports say the energy consumed at such centers could account for as much as 2 per cent of the world’s total greenhouse gas emissions. This means any improvement in data storage and energy efficiency could have huge benefits for the environment, as well as vastly increasing the amount of information that can be stored.

The research, led by David Mills, PhD, and Nicholas Chilton, PhD, from the School of Chemistry, is published in the journal Nature. “Our aim is to achieve even higher operating temperatures in the future, ideally functioning above liquid nitrogen temperatures,” said Mills.

* The method uses single-molecule magnets, which display “hysteresis” — a magnetic memory effect that is a requirement of magnetic data storage, such as hard drives. Molecules containing lanthanide atoms have exhibited this phenomenon at the highest temperatures to date. Lanthanides are rare earth metals used in all forms of everyday electronic devices such as smartphones, tablets and laptops. The team achieved their results using the lanthanide element dysprosium.


Abstract of Molecular magnetic hysteresis at 60 kelvin in dysprosocenium

Lanthanides have been investigated extensively for potential applications in quantum information processing and high-density data storage at the molecular and atomic scale. Experimental achievements include reading and manipulating single nuclear spins, exploiting atomic clock transitions for robust qubits and, most recently, magnetic data storage in single atoms. Single-molecule magnets exhibit magnetic hysteresis of molecular origin—a magnetic memory effect and a prerequisite of data storage—and so far, lanthanide examples have exhibited this phenomenon at the highest temperatures. However, in the nearly 25 years since the discovery of single-molecule magnets, hysteresis temperatures have increased from 4 kelvin to only about 14 kelvin using a consistent magnetic field sweep rate of about 20 oersted per second, although higher temperatures have been achieved by using very fast sweep rates (for example, 30 kelvin with 200 oersted per second). Here we report a hexa-tert-butyldysprosocenium complex—[Dy(Cpttt)2][B(C6F5)4], with Cpttt = {C5H2tBu3-1,2,4} and tBu = C(CH3)3—which exhibits magnetic hysteresis at temperatures of up to 60 kelvin at a sweep rate of 22 oersted per second. We observe a clear change in the relaxation dynamics at this temperature, which persists in magnetically diluted samples, suggesting that the origin of the hysteresis is the localized metal–ligand vibrational modes that are unique to dysprosocenium. Ab initio calculations of spin dynamics demonstrate that magnetic relaxation at high temperatures is due to local molecular vibrations. These results indicate that, with judicious molecular design, magnetic data storage in single molecules at temperatures above liquid nitrogen should be possible.

Will AI enable the third stage of life?

In his new book Life 3.0: Being Human in the Age of Artificial Intelligence, MIT physicist and AI researcher Max Tegmark explores the future of technology, life, and intelligence.

The question of how to define life is notoriously controversial. Competing definitions abound, some of which include highly specific requirements such as being composed of cells, which might disqualify both future intelligent machines and extraterrestrial civilizations. Since we don’t want to limit our thinking about the future of life to the species we’ve encountered so far, let’s instead define life very broadly, simply as a process that can retain its complexity and replicate.

What’s replicated isn’t matter (made of atoms) but information (made of bits) specifying how the atoms are arranged. When a bacterium makes a copy of its DNA, no new atoms are created, but a new set of atoms are arranged in the same pattern as the original, thereby copying the information.

In other words, we can think of life as a self-replicating information-processing system whose information (software) determines both its behavior and the blueprints for its hardware.

Like our Universe itself, life gradually grew more complex and interesting, and as I’ll now explain, I find it helpful to classify life forms into three levels of sophistication: Life 1.0, 2.0 and 3.0.

It’s still an open question how, when and where life first appeared in our Universe, but there is strong evidence that here on Earth life first appeared about 4 billion years ago.

Before long, our planet was teeming with a diverse panoply of life forms. The most successful ones, which soon outcompeted the rest, were able to react to their environment in some way.

Specifically, they were what computer scientists call “intelligent agents”: entities that collect information about their environment from sensors and then process this information to decide how to act back on their environment. This can include highly complex information processing, such as when you use information from your eyes and ears to decide what to say in a conversation. But it can also involve hardware and software that’s quite simple.

For example, many bacteria have a sensor measuring the sugar concentration in the liquid around them and can swim using propeller-shaped structures called flagella. The hardware linking the sensor to the flagella might implement the following simple but useful algorithm: “If my sugar concentration sensor reports a lower value than a couple of seconds ago, then reverse the rotation of my flagella so that I change direction.”
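That rule can be sketched in a few lines of code (a toy illustration invented here, not from the book):

```python
# Toy simulation of the bacterium's hard-wired sugar-seeking rule:
# reverse the flagella whenever the sugar reading drops.

def update_direction(previous_sugar: float, current_sugar: float,
                     direction: int) -> int:
    """Return the new swim direction (+1 or -1)."""
    if current_sugar < previous_sugar:
        return -direction  # concentration fell: reverse flagella rotation
    return direction       # concentration steady or rising: keep going

# Hypothetical sensor readings over time: rising, then falling, then rising.
readings = [0.20, 0.30, 0.25, 0.40]
direction = +1
for previous, current in zip(readings, readings[1:]):
    direction = update_direction(previous, current, direction)
print(direction)  # -1: one reversal occurred at the dip in concentration
```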

You’ve learned how to speak and countless other skills. Bacteria, on the other hand, aren’t great learners. Their DNA specifies not only the design of their hardware, such as sugar sensors and flagella, but also the design of their software. They never learn to swim toward sugar; instead, that algorithm was hard-coded into their DNA from the start.

There was of course a learning process of sorts, but it didn’t take place during the lifetime of that particular bacterium. Rather, it occurred during the preceding evolution of that species of bacteria, through a slow trial-and-error process spanning many generations, where natural selection favored those random DNA mutations that improved sugar consumption. Some of these mutations helped by improving the design of flagella and other hardware, while other mutations improved the bacterial information-processing system that implements the sugar-finding algorithm and other software.


“Tegmark’s new book is a deeply thoughtful guide to the most important conversation of our time, about how to create a benevolent future civilization as we merge our biological thinking with an even greater intelligence of our own creation.” — Ray Kurzweil, Inventor, Author and Futurist, author of The Singularity Is Near and How to Create a Mind


Such bacteria are an example of what I’ll call “Life 1.0”: life where both the hardware and software are evolved rather than designed. You and I, on the other hand, are examples of “Life 2.0”: life whose hardware is evolved, but whose software is largely designed. By your software, I mean all the algorithms and knowledge that you use to process the information from your senses and decide what to do—everything from the ability to recognize your friends when you see them to your ability to walk, read, write, calculate, sing and tell jokes.

You weren’t able to perform any of those tasks when you were born, so all this software got programmed into your brain later through the process we call learning. Whereas your childhood curriculum is largely designed by your family and teachers, who decide what you should learn, you gradually gain more power to design your own software.

Perhaps your school allows you to select a foreign language: Do you want to install a software module into your brain that enables you to speak French, or one that enables you to speak Spanish? Do you want to learn to play tennis or chess? Do you want to study to become a chef, a lawyer or a pharmacist? Do you want to learn more about artificial intelligence (AI) and the future of life by reading a book about it?

This ability of Life 2.0 to design its software enables it to be much smarter than Life 1.0. High intelligence requires both lots of hardware (made of atoms) and lots of software (made of bits). The fact that most of our human hardware is added after birth (through growth) is useful, since our ultimate size isn’t limited by the width of our mom’s birth canal. In the same way, the fact that most of our human software is added after birth (through learning) is useful, since our ultimate intelligence isn’t limited by how much information can be transmitted to us at conception via our DNA, 1.0-style.

I weigh about twenty-five times more than when I was born, and the synaptic connections that link the neurons in my brain can store about a hundred thousand times more information than the DNA that I was born with. Your synapses store all your knowledge and skills as roughly 100 terabytes’ worth of information, while your DNA stores merely about a gigabyte, barely enough to store a single movie download. So it’s physically impossible for an infant to be born speaking perfect English and ready to ace her college entrance exams: there’s no way the information could have been preloaded into her brain, since the main information module she got from her parents (her DNA) lacks sufficient information-storage capacity.
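The book’s round figures make the ratio easy to verify (illustrative arithmetic only):

```python
# The book's round figures, expressed in bytes.
synapse_storage = 100e12  # ~100 terabytes stored in synaptic connections
dna_storage = 1e9         # ~1 gigabyte stored in DNA

ratio = synapse_storage / dna_storage
print(f"{ratio:,.0f}x")  # 100,000x: the "hundred thousand times" in the text
```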

The ability to design its software enables Life 2.0 to be not only smarter than Life 1.0, but also more flexible. If the environment changes, 1.0 can only adapt by slowly evolving over many generations. Life 2.0, on the other hand, can adapt almost instantly, via a software update. For example, bacteria frequently encountering antibiotics may evolve drug resistance over many generations, but an individual bacterium won’t change its behavior at all; in contrast, a girl learning that she has a peanut allergy will immediately change her behavior to start avoiding peanuts.

This flexibility gives Life 2.0 an even greater edge at the population level: even though the information in our human DNA hasn’t evolved dramatically over the past fifty thousand years, the information collectively stored in our brains, books and computers has exploded. By installing a software module enabling us to communicate through sophisticated spoken language, we ensured that the most useful information stored in one person’s brain could get copied to other brains, potentially surviving even after the original brain died.

By installing a software module enabling us to read and write, we became able to store and share vastly more information than people could memorize. By developing brain software capable of producing technology (i.e., by studying science and engineering), we enabled much of the world’s information to be accessed by many of the world’s humans with just a few clicks.

This flexibility has enabled Life 2.0 to dominate Earth. Freed from its genetic shackles, humanity’s combined knowledge has kept growing at an accelerating pace as each breakthrough enabled the next: language, writing, the printing press, modern science, computers, the internet, etc. This ever-faster cultural evolution of our shared software has emerged as the dominant force shaping our human future, rendering our glacially slow biological evolution almost irrelevant.

Yet despite the most powerful technologies we have today, all life forms we know of remain fundamentally limited by their biological hardware. None can live for a million years, memorize all of Wikipedia, understand all known science or enjoy spaceflight without a spacecraft. None can transform our largely lifeless cosmos into a diverse biosphere that will flourish for billions or trillions of years, enabling our Universe to finally fulfill its potential and wake up fully. All this requires life to undergo a final upgrade, to Life 3.0, which can design not only its software but also its hardware. In other words, Life 3.0 is the master of its own destiny, finally fully free from its evolutionary shackles.

The boundaries between the three stages of life are slightly fuzzy. If bacteria are Life 1.0 and humans are Life 2.0, then you might classify mice as 1.1: they can learn many things, but not enough to develop language or invent the internet. Moreover, because they lack language, what they learn gets largely lost when they die, not passed on to the next generation. Similarly, you might argue that today’s humans should count as Life 2.1: we can perform minor hardware upgrades such as implanting artificial teeth, knees and pacemakers, but nothing as dramatic as getting ten times taller or acquiring a thousand times bigger brain.

In summary, we can divide the development of life into three stages, distinguished by life’s ability to design itself:

• Life 1.0 (biological stage): evolves its hardware and software

• Life 2.0 (cultural stage): evolves its hardware, designs much of its software

• Life 3.0 (technological stage): designs its hardware and software

After 13.8 billion years of cosmic evolution, development has accelerated dramatically here on Earth: Life 1.0 arrived about 4 billion years ago, Life 2.0 (we humans) arrived about a hundred millennia ago, and many AI researchers think that Life 3.0 may arrive during the coming century, perhaps even during our lifetime, spawned by progress in AI. What will happen, and what will this mean for us? That’s the topic of this book.

From the book Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark, © 2017 by Max Tegmark. Published by arrangement with Alfred A. Knopf, an imprint of The Knopf Doubleday Publishing Group, a division of Penguin Random House LLC.

How to design a custom robot in minutes without being a roboticist

Robot designs by novices using Interactive Robogami (credit: MIT CSAIL)

MIT’s new “Interactive Robogami” system will let you design a robot in minutes and then 3D-print and assemble it in about four hours.

“Designing robots usually requires expertise that only mechanical engineers and roboticists have,” says PhD student Adriana Schulz, co-lead author of a paper in The International Journal of Robotics Research and a researcher in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “What’s exciting here is that we’ve created a tool that allows a casual user to design their own robot by giving them this expert knowledge.”

Interactive Robogami uses simulations and interactive feedback with algorithms for design composition, allowing users to focus on high-level conceptual design. Users can choose from a library of more than 50 different bodies, wheels, legs and “peripherals,” as well as a selection of different steps (“gaits”).

Gallery of designs created by a novice user after a 20-minute training session. Each of the models took between 3 and 25 minutes to design and contains multiple components from the database. (credit: Adriana Schulz et al./The International Journal of Robotics Research)

The system checks to make sure a design is actually physically possible, analyzing factors such as speed and stability. Once a design is complete, the team’s origami-inspired “3-D print and fold” technique prints it as flat faces connected at joints, which are then folded into the final shape, combining the most effective parts of 2D and 3D printing.*

CSAIL director Daniela Rus, a Professor of Electrical Engineering and Computer Science, hopes people will be able to use the system to build robots that can help with everyday tasks, and that similar systems with rapid printing technologies will enable large-scale customization and production of robots.

“These tools enable new approaches to teaching computational thinking and creating,” says Rus. “Students can not only learn by coding and making their own robots, but by bringing to life conceptual ideas about what their robots can actually do.”

This research was supported by the National Science Foundation’s Expeditions in Computing program.

* To test the system, the team used eight subjects who were given 20 minutes of training and asked to perform two tasks: Create a mobile, stable car design in just ten minutes, and create a trajectory to navigate a robot through an obstacle course in the least amount of travel time. The team fabricated a total of six robots, each of which took 10 to 15 minutes to design, 3 to 7 hours to print and 30 to 90 minutes to assemble. The team found that their 3D print-and-fold method reduced printing time by 73 percent and the amount of material used by 70 percent. The robots also demonstrated a wide range of movement, like using single legs to walk, using different step sequences, and using legs and wheels simultaneously.


Abstract of Interactive robogami: An end-to-end system for design of robots with ground locomotion

This paper aims to democratize the design and fabrication of robots, enabling people of all skill levels to make robots without needing expert domain knowledge. Existing work in computational design and rapid fabrication has explored this question of customization for physical objects but so far has not been able to conquer the complexity of robot designs. We have developed Interactive Robogami, a tool for composition-based design of ground robots that can be fabricated as flat sheets and then folded into 3D structures. This rapid prototyping process enables users to create lightweight, affordable, and materially versatile robots with short turnaround time. Using Interactive Robogami, designers can compose new robot designs from a database of print-and-fold parts. The designs are tested for the users’ functional specifications via simulation and fabricated on user satisfaction. We present six robots designed and fabricated using a 3D printing based approach, as well as a larger robot cut from sheet metal. We have also conducted a user study that demonstrates that our tool is intuitive for novice designers and expressive enough to create a wide variety of ground robot designs.

Flexible ‘electronic skin’ patch provides wearable health monitoring anywhere on the body

New soft electronic stick-on patch collects, analyzes, and diagnoses biosignals and sends data wirelessly to a mobile app. (credit: DGIST)

A radical new electronic skin monitor developed by Korean and U.S. scientists tracks heart rate, respiration, muscle movement, acceleration, and electrical activity in the heart, muscles, eyes, and brain and wirelessly transmits it to a smartphone, allowing for continuous health monitoring.

KurzweilAI has covered a number of biomedical skin-monitoring devices. This new design is noteworthy because the soft, flexible self-adhesive patch (a soft silicone material about four centimeters or 1.5 inches in diameter) can be instantly stuck just about anywhere on the body as needed — no battery required (it’s powered wirelessly).

Optical image of the three-dimensional network of helical coils as electrical interconnects for soft electronics. (credit: DGIST)

The patch is designed more like a mattress or creeping vine than a conventional electronic device. It contains about 50 components connected by a network of 250 tiny flexible wire coils embedded in protective silicone. Unlike flat sensors, the tiny helical wire coils, made of gold, chromium and phosphate, are firmly connected to the base only at one end and can stretch and contract like a spring without breaking.

Helical coils serve as 3D electrical interconnects for soft electronics. (credit: DGIST)

The researchers say the microsystem could also be used in soft robotics, virtual reality, and autonomous navigation.

The microsystem was developed by an international team led by Kyung-In Jang, a professor of robotics engineering at South Korea’s Daegu Gyeongbuk Institute of Science and Technology, and John A. Rogers, the director of Northwestern University’s Center for Bio-Integrated Electronics. The research is described in the open-access journal Nature Communications.

“We have several human subject studies ongoing with our medical school at Northwestern — mostly with a focus on health status monitoring in infants,” Rogers told KurzweilAI.


Abstract of Self-assembled three dimensional network designs for soft electronics

Low modulus, compliant systems of sensors, circuits and radios designed to intimately interface with the soft tissues of the human body are of growing interest, due to their emerging applications in continuous, clinical-quality health monitors and advanced, bioelectronic therapeutics. Although recent research establishes various materials and mechanics concepts for such technologies, all existing approaches involve simple, two-dimensional (2D) layouts in the constituent micro-components and interconnects. Here we introduce concepts in three-dimensional (3D) architectures that bypass important engineering constraints and performance limitations set by traditional, 2D designs. Specifically, open-mesh, 3D interconnect networks of helical microcoils formed by deterministic compressive buckling establish the basis for systems that can offer exceptional low modulus, elastic mechanics, in compact geometries, with active components and sophisticated levels of functionality. Coupled mechanical and electrical design approaches enable layout optimization, assembly processes and encapsulation schemes to yield 3D configurations that satisfy requirements in demanding, complex systems, such as wireless, skin-compatible electronic sensors.

A breakthrough new method for 3D-printing living tissues

The 3D droplet bioprinter, developed by the Bayley Research Group at Oxford, producing millimeter-sized tissues (credit: Sam Olof/ Alexander Graham)

Scientists at the University of Oxford have developed a radical new method of 3D-printing laboratory-grown cells that can form complex living tissues and cartilage to potentially support, repair, or augment diseased and damaged areas of the body.

Printing high-resolution living tissues is currently difficult because the cells often move within printed structures and can collapse on themselves. So the team devised a new way to produce tissues in protective nanoliter droplets wrapped in a lipid (oil-compatible) coating that is assembled, layer-by-layer, into living cellular structures.

3D-printing cellular constructs. (left) Schematic of cell printing. The dispensing nozzle ejects cell-containing bioink droplets into a lipid-containing oil. The droplets are positioned by the programmed movement of the oil container. The droplets cohere through the formation of droplet interface lipid bilayers. (center) A related micrograph of a patterned cell junction, containing two cell types, printed as successive layers of 130-micrometer droplets ejected from two glass nozzles. (right) A confocal fluorescence micrograph of about 700 printed human embryonic kidney cells under oil at a density of 40 million cells per milliliter (scale bar = 150 micrometers). (credit: Alexander D. Graham et al./Scientific Reports)

This new method improves the survival rate of the individual cells and allows for building each tissue one drop at a time to mimic the behaviors and functions of the human body. The patterned cellular constructs, once fully grown, can mimic or potentially enhance natural tissues.

“We were aiming to fabricate three-dimensional living tissues that could display the basic behaviors and physiology found in natural organisms,” explained Alexander Graham, PhD, lead author and 3D Bioprinting Scientist at OxSyBio (Oxford Synthetic Biology).*

“To date, there are limited examples of printed tissues [that] have the complex cellular architecture of native tissues. Hence, we focused on designing a high-resolution cell printing platform, from relatively inexpensive components, that could be used to reproducibly produce artificial tissues with appropriate complexity from a range of cells, including stem cells.”

A confocal micrograph of an artificial tissue containing two populations of human embryonic kidney cells (HEK-293T) printed in the form of an arborized structure within a cube (credit: Sam Olof/Alexander Graham)

The researchers hope that with further development, the materials could have a wide impact on healthcare worldwide and bypass clinical animal testing. The scientists plan to develop new complementary printing techniques that allow for a wider range of living and hybrid materials, producing tissues at industrial scale.

“We believe it will be possible to create personalized treatments by using cells sourced from patients to mimic or enhance natural tissue function,” said Sam Olof, PhD, Chief Technology Officer at OxSyBio. “In the future, 3D bio-printed tissues may also be used for diagnostic applications — for example, for drug or toxin screening.”

The study results were published August 1 in the open-access journal Scientific Reports.


Abstract of High-Resolution Patterned Cellular Constructs by Droplet-Based 3D Printing

Bioprinting is an emerging technique for the fabrication of living tissues that allows cells to be arranged in predetermined three-dimensional (3D) architectures. However, to date, there are limited examples of bioprinted constructs containing multiple cell types patterned at high-resolution. Here we present a low-cost process that employs 3D printing of aqueous droplets containing mammalian cells to produce robust, patterned constructs in oil, which were reproducibly transferred to culture medium. Human embryonic kidney (HEK) cells and ovine mesenchymal stem cells (oMSCs) were printed at tissue-relevant densities (10⁷ cells mL⁻¹) and a high droplet resolution of 1 nL. High-resolution 3D geometries were printed with features of ≤200 μm; these included an arborised cell junction, a diagonal-plane junction and an osteochondral interface. The printed cells showed high viability (90% on average) and HEK cells within the printed structures were shown to proliferate under culture conditions. Significantly, a five-week tissue engineering study demonstrated that printed oMSCs could be differentiated down the chondrogenic lineage to generate cartilage-like structures containing type II collagen.

KurzweilAI special project August 7–18

Dear reader,

The KurzweilAI editorial/research team will be working on a special 10-day project starting Monday August 7, so we will be suspending newsletter publication until Monday August 21.

Our website will remain up and we will continue to welcome your emails. We hope you’re enjoying your summer vacation.

Thanks for your always-interesting participation,

Amara D. Angelica
Research Director/Editor, KurzweilAI

How to turn a crystal into an erasable electrical circuit

Washington State University researchers used light to write a highly conducting electrical path in a crystal that can be erased and reconfigured. (Left) A photograph of a sample with four metal contacts. (Right) An illustration of a laser drawing a conductive path between two contacts. (credit: Washington State University)

Washington State University (WSU) physicists have found a way to write an electrical circuit into a crystal, opening up the possibility of transparent, three-dimensional electronics that, like an Etch A Sketch, can be erased and reconfigured.

Ordinarily, a crystal does not conduct electricity. But when the researchers heated a strontium titanate crystal under specific conditions, the crystal was altered so that light made it conductive: a laser “optical pen” could then write a low-resistance path between contacts. The circuit could be erased by heating the crystal.

Schematic diagram of experiment in writing an electrical circuit into a crystal (credit: Washington State University)

The physicists were able to increase the crystal’s conductivity 1,000-fold. The phenomenon occurred at room temperature.

“It opens up a new type of electronics where you can define a circuit optically and then erase it and define a new one,” said Matt McCluskey, a WSU professor of physics and materials science.

The work was published July 27, 2017 in the open-access on-line journal Scientific Reports. The research was funded by the National Science Foundation.


Abstract of Using persistent photoconductivity to write a low-resistance path in SrTiO3

Materials with persistent photoconductivity (PPC) experience an increase in conductivity upon exposure to light that persists after the light is turned off. Although researchers have shown that this phenomenon could be exploited for novel memory storage devices, low temperatures (below 180 K) were required. In the present work, two-point resistance measurements were performed on annealed strontium titanate (SrTiO3, or STO) single crystals at room temperature. After illumination with sub-gap light, the resistance decreased by three orders of magnitude. This markedly enhanced conductivity persisted for several days in the dark. Results from IR spectroscopy, electrical measurements, and exposure to a 405 nm laser suggest that contact resistance plays an important role. The laser was then used as an “optical pen” to write a low-resistance path between two contacts, demonstrating the feasibility of optically defined, transparent electronics.

Ray Kurzweil reveals plans for ‘linguistically fluent’ Google software

Smart Reply (credit: Google Research)

Ray Kurzweil, a director of engineering at Google, reveals plans for a future version of Google’s “Smart Reply” machine-learning email software (and more) in a Wired article by Tom Simonite published Wednesday (Aug. 2, 2017).

Running on mobile Gmail and Google Inbox, Smart Reply suggests up to three replies to an email message, saving typing time or giving you ideas for a better reply.

Smarter autocomplete

Kurzweil’s team is now “experimenting with empowering Smart Reply to elaborate on its initial terse suggestions,” Simonite says.

“Tapping a Continue button [in response to an email] might cause ‘Sure I’d love to come to your party!’ to expand to include, for example, ‘Can I bring something?’ He likes the idea of having AI pitch in anytime you’re typing, a bit like an omnipresent, smarter version of Google’s search autocomplete. ‘You could have similar technology to help you compose documents or emails by giving you suggestions of how to complete your sentence,’ Kurzweil says.”

As Simonite notes, Kurzweil’s software is based on his hierarchical theory of intelligence, articulated in Kurzweil’s latest book, How to Create a Mind, and in more detail in an arXiv paper by Kurzweil and key members of his team, published in May.

“Kurzweil’s work outlines a path to create a simulation of the human neocortex (the outer layer of the brain where we do much of our thinking) by building a hierarchy of similarly structured components that encode increasingly abstract ideas as sequences,” according to the paper. “Kurzweil provides evidence that the neocortex is a self-organizing hierarchy of modules, each of which can learn, remember, recognize and/or generate a sequence, in which each sequence consists of a sequential pattern from lower-level modules.”

The paper further explains that Smart Reply previously used “long short-term memory” (LSTM) networks*, “which are much slower than feed-forward networks [used in the new software] for training and inference,” because an LSTM requires more computation to handle longer sequences of words.

Kurzweil’s team was able to produce email responses of similar quality to LSTM, but using fewer computational resources by training hierarchically connected layers of simulated neurons on clustered numerical representations of text. Essentially, the approach propagates information through a sequence of ever more complex pattern recognizers until the final patterns are matched to optimal responses.
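The contrast between the two approaches can be illustrated with a minimal sketch in plain NumPy. This is a hypothetical toy, not Google’s code: the names (`featurize`, `score_replies`), the hashed bag-of-n-grams features, the random (untrained) weights, and the canned reply list are all assumptions made for illustration. The key property it shows is the one the article describes: a feed-forward stack consumes a whole message in one fixed-size pass, with no per-word recurrence, and matches it against candidate responses.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = 1000   # hashed feature buckets (hypothetical size)
HIDDEN = 32    # width of each feed-forward layer (hypothetical size)
REPLIES = ["Sure, I'd love to come!",
           "Sorry, I can't make it.",
           "Can I bring something?"]

def featurize(text: str) -> np.ndarray:
    """Hash word 1- and 2-grams into a fixed-size bag-of-features
    vector -- order-insensitive, so a feed-forward net can consume
    the whole email in a single pass."""
    vec = np.zeros(VOCAB)
    words = text.lower().split()
    for n in (1, 2):
        for i in range(len(words) - n + 1):
            vec[hash(" ".join(words[i:i + n])) % VOCAB] += 1.0
    return vec

# Two hidden layers stand in for the "hierarchy of pattern
# recognizers"; weights are random here -- a real system would
# train them on large email corpora.
W1 = rng.standard_normal((VOCAB, HIDDEN)) * 0.1
W2 = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1
W_out = rng.standard_normal((HIDDEN, len(REPLIES))) * 0.1

def score_replies(email: str) -> np.ndarray:
    """Score each canned reply: one forward pass, no recurrence."""
    h1 = np.maximum(featurize(email) @ W1, 0.0)   # ReLU layer 1
    h2 = np.maximum(h1 @ W2, 0.0)                 # ReLU layer 2
    logits = h2 @ W_out
    return np.exp(logits) / np.exp(logits).sum()  # softmax

probs = score_replies("Are you coming to the party on Saturday?")
best = REPLIES[int(np.argmax(probs))]
```

Because the cost of `score_replies` does not grow with any per-token recurrence, longer emails add only the (cheap) featurization work, which is the efficiency argument the article attributes to the new design.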

Kona: linguistically fluent software

But underlying Smart Reply is “a system for understanding the meaning of language, according to Kurzweil,” Simonite reports.

“Codenamed Kona, the effort is aiming for nothing less than creating software as linguistically fluent as you or me. ‘I would not say it’s at human levels, but I think we’ll get there,’ Kurzweil says. More applications of Kona are in the works and will surface in future Google products, he promises.”

* The previous sequence-to-sequence (Seq2Seq) framework [described in this paper] uses “recurrent neural networks (RNNs), typically long short-term memory (LSTM) networks, to encode sequences of word embeddings into representations that depend on the order, and uses a decoder RNN to generate output sequences word by word. …While Seq2Seq models provide a generalized solution, it is not obvious that they are maximally efficient, and training these systems can be slow and complicated.”
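For contrast with the footnote above, here is a minimal sketch of the Seq2Seq encode-then-decode pattern, again in plain NumPy. It is a simplification under stated assumptions: a vanilla tanh RNN stands in for the LSTMs the paper describes, the sizes are toy values, and all weights are random rather than trained. What it preserves is the structural point: the encoder folds the input word by word into a state, and the decoder emits output word by word, so both cost grows with sequence length.

```python
import numpy as np

rng = np.random.default_rng(1)

VOCAB, EMB, HID = 12, 8, 16   # toy sizes; real systems are far larger
E = rng.standard_normal((VOCAB, EMB)) * 0.1          # word embeddings
W_enc = rng.standard_normal((EMB + HID, HID)) * 0.1  # encoder RNN weights
W_dec = rng.standard_normal((EMB + HID, HID)) * 0.1  # decoder RNN weights
W_out = rng.standard_normal((HID, VOCAB)) * 0.1      # output projection
BOS = 0                                              # start-of-sequence id

def rnn_step(x, h, W):
    """One recurrent step (vanilla RNN standing in for an LSTM cell)."""
    return np.tanh(np.concatenate([x, h]) @ W)

def encode(token_ids):
    """Encoder: fold the input embeddings, in order, into one state.
    Each step depends on the previous one, so the work is sequential
    and grows with input length."""
    h = np.zeros(HID)
    for t in token_ids:
        h = rnn_step(E[t], h, W_enc)
    return h

def decode(h, max_len=5):
    """Decoder: generate output ids one word at a time (greedy)."""
    out, prev = [], BOS
    for _ in range(max_len):
        h = rnn_step(E[prev], h, W_dec)
        prev = int(np.argmax(h @ W_out))
        out.append(prev)
    return out

reply_ids = decode(encode([3, 7, 5]))
```

The strictly sequential recurrence in both `encode` and `decode` is what the footnote means by training being "slow and complicated" relative to the feed-forward alternative.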
