Ultra-thin ‘atomristor’ synapse-like memory-storage device paves way for faster, smaller, smarter computer chips

Illustration of single-atom-layer “atomristors” — the thinnest-ever memory-storage device (credit: Cockrell School of Engineering, The University of Texas at Austin)

A team of electrical engineers at The University of Texas at Austin and scientists at Peking University has developed a one-atom-thick 2D “atomristor” memory storage device that may lead to faster, smaller, smarter computer chips.

The atomristor (atomic memristor) improves upon memristor (memory resistor) memory storage technology by using atomically thin nanomaterials (atomic sheets). (Combining memory and logic functions, similar to the synapses of biological brains, memristors “remember” their previous state after being turned off.)

Schematic of atomristor memory sandwich based on molybdenum sulfide (MoS2) in the form of a single-layer atomic sheet grown on gold foil. (Blue: Mo; yellow: S) (credit: Ruijing Ge et al./Nano Letters)

Memory storage and transistors have, to date, been separate components on a microchip. Atomristors combine both functions on a single, more-efficient device. They use metallic atomic sheets (such as graphene or gold) as electrodes and semiconducting atomic sheets (such as molybdenum sulfide) as the active layer. The entire memory cell is a two-layer sandwich only ~1.5 nanometers thick.
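
To make the memory behavior concrete, here is a minimal software sketch of how a bistable, nonvolatile resistive cell of this kind behaves. The resistance values and switching thresholds are illustrative placeholders, not numbers from the Nano Letters paper.

```python
# Toy model of a two-terminal nonvolatile resistive memory cell.
# The resistances and switching thresholds below are illustrative only,
# not parameters measured for the MoS2 atomristor.

class ResistiveCell:
    def __init__(self, r_on=1e3, r_off=1e7, v_set=1.0, v_reset=-1.0):
        self.r_on, self.r_off = r_on, r_off        # low/high resistance (ohms)
        self.v_set, self.v_reset = v_set, v_reset  # switching thresholds (volts)
        self.state = "HRS"                         # starts in the high-resistance state

    def apply_voltage(self, v):
        """Switch state if the pulse crosses a threshold, then return the current."""
        if v >= self.v_set:
            self.state = "LRS"    # SET: a low-resistance path forms
        elif v <= self.v_reset:
            self.state = "HRS"    # RESET: high resistance is restored
        r = self.r_on if self.state == "LRS" else self.r_off
        return v / r

cell = ResistiveCell()
cell.apply_voltage(1.2)            # write: a SET pulse stores a '1'
print(cell.state)                  # 'LRS' -- the state persists with power off
print(cell.apply_voltage(0.1))     # read: a small voltage senses the state without disturbing it
```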

“The sheer density of memory storage that can be made possible by layering these synthetic atomic sheets onto each other, coupled with integrated transistor design, means we can potentially make computers that learn and remember the same way our brains do,” said Deji Akinwande, associate professor in the Cockrell School of Engineering’s Department of Electrical and Computer Engineering.

“This discovery has real commercialization value, as it won’t disrupt existing technologies,” Akinwande said. “Rather, it has been designed to complement and integrate with the silicon chips already in use in modern tech devices.”

The research is described in an open-access paper in the January issue of the American Chemical Society journal Nano Letters.

Longer battery life in cell phones

For nonvolatile operation (preserving data after power is turned off), the new design also “offers a substantial advantage over conventional flash memory, which occupies far larger space. In addition, the thinness allows for faster and more efficient electric current flow,” the researchers note in the paper.

The research team also discovered another unique application for the atomristor technology: Atomristors are the smallest radio-frequency (RF) memory switches to be demonstrated, with no DC battery consumption, which could ultimately lead to longer battery life for cell phones and other battery-powered devices.*

Funding for the UT Austin team’s work was provided by the National Science Foundation and the Presidential Early Career Award for Scientists and Engineers, awarded to Akinwande in 2015.

* “Contemporary switches are realized with transistor or microelectromechanical devices, both of which are volatile, with the latter also requiring large switching voltages [which are not ideal] for mobile technologies,” the researchers note in the paper. Atomristors instead allow for nonvolatile low-power radio-frequency (RF) switches with “low voltage operation, small form-factor, fast switching speed, and low-temperature integration compatible with silicon or flexible substrates.”


Abstract of Atomristor: Nonvolatile Resistance Switching in Atomic Sheets of Transition Metal Dichalcogenides

Recently, two-dimensional (2D) atomic sheets have inspired new ideas in nanoscience including topologically protected charge transport,1,2 spatially separated excitons,3 and strongly anisotropic heat transport.4 Here, we report the intriguing observation of stable nonvolatile resistance switching (NVRS) in single-layer atomic sheets sandwiched between metal electrodes. NVRS is observed in the prototypical semiconducting (MX2, M = Mo, W; and X = S, Se) transitional metal dichalcogenides (TMDs),5 which alludes to the universality of this phenomenon in TMD monolayers and offers forming-free switching. This observation of NVRS phenomenon, widely attributed to ionic diffusion, filament, and interfacial redox in bulk oxides and electrolytes,6−9 inspires new studies on defects, ion transport, and energetics at the sharp interfaces between atomically thin sheets and conducting electrodes. Our findings overturn the contemporary thinking that nonvolatile switching is not scalable to subnanometre owing to leakage currents.10 Emerging device concepts in nonvolatile flexible memory fabrics, and brain-inspired (neuromorphic) computing could benefit substantially from the wide 2D materials design space. A new major application, zero-static power radio frequency (RF) switching, is demonstrated with a monolayer switch operating to 50 GHz.

An artificial synapse for future miniaturized portable ‘brain-on-a-chip’ devices

Biological synapse structure (credit: Thomas Splettstoesser/CC)

MIT engineers have designed a new artificial synapse made from silicon germanium that can precisely control the strength of an electric current flowing across it.

In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting with 95 percent accuracy. The engineers say the new design, published today (Jan. 22) in the journal Nature Materials, is a major step toward building portable, low-power neuromorphic chips for use in pattern recognition and other machine-learning tasks.

Controlling the flow of ions: the challenge

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. The idea is to apply a voltage across layers that would cause ions (electrically charged atoms) to move in a switching medium (synapse-like space) to create conductive filaments in a manner that’s similar to how the “weight” (connection strength) of a synapse changes.

A typical human brain contains more than 100 trillion synapses, which mediate neuron signaling, strengthening some neural connections while pruning (weakening) others — a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks, all at lightning speeds.

Instead of carrying out computations based on binary, on/off signaling, like current digital chips, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights” — much like neurons that activate in various ways (depending on the type and number of ions that flow across a synapse).
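
As a rough illustration of that analog behavior (an illustrative sketch, not code from the study), a synaptic weight can be modeled as a conductance that is nudged up or down in small steps by programming pulses, rather than flipped between 0 and 1:

```python
# Illustrative analog synapse: a weight stored as a bounded conductance that
# potentiation/depression pulses adjust in small increments. Step size and
# bounds are arbitrary choices for the sketch.

G_MIN, G_MAX, STEP = 0.0, 1.0, 0.02

def potentiate(g):              # strengthen the connection
    return min(G_MAX, g + STEP)

def depress(g):                 # weaken the connection
    return max(G_MIN, g - STEP)

g = 0.5
for _ in range(10):             # ten facilitating pulses
    g = potentiate(g)
print(round(g, 2))              # 0.7 -- a graded weight, not an on/off bit
```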

But it has been difficult to control the flow of ions in existing synapse designs, which have multiple paths that make it hard to predict where the ions will make it through, according to research team leader Jeehwan Kim, PhD, an assistant professor in the departments of Mechanical Engineering and Materials Science and Engineering and a principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” Kim says. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects. This stream is changing, and it’s hard to control. That’s the biggest problem — nonuniformity of the artificial synapse.”

Epitaxial random access memory (epiRAM)

(Left) Cross-sectional transmission electron microscope image of 60 nm silicon-germanium (SiGe) crystal grown on a silicon substrate (diagonal white lines represent candidate dislocations). Scale bar: 25 nm. (Right) Cross-sectional scanning electron microscope image of an epiRAM device with titanium (Ti)–gold (Au) and silver (Ag)–palladium (Pd) layers. Scale bar: 100 nm. (credit: Shinhyun Choi et al./Nature Materials)

So instead of using amorphous materials as an artificial synapse, Kim and his colleagues created a new “epitaxial random access memory” (epiRAM) design.

They started with a wafer of silicon. They then grew a similar pattern of silicon germanium — a material used commonly in transistors — on top of the silicon wafer. Silicon germanium’s lattice is slightly larger than that of silicon, and Kim found that together, the two perfectly mismatched materials could form a funnel-like dislocation, creating a single path through which ions can predictably flow.*

“This is the most uniform device we could achieve, which is the key to demonstrating artificial neural networks,” Kim says.

Testing the ability to recognize samples of handwriting

As a test, Kim and his team explored how the epiRAM device would perform if it were to carry out an actual learning task: recognizing samples of handwriting — which researchers consider to be a practical test for neuromorphic chips. Such chips would consist of artificial “neurons” connected to other “neurons” via filament-based artificial “synapses.”

Image-recognition simulation. (Left) A 3-layer multilayer-perceptron neural network with black-and-white input signals for each layer, shown at the algorithm level. The inner product (summation) of the input-neuron signal vector and the first synapse-array vector is transferred, after activation and binarization, as the input vector of the second synapse array. (Right) Circuit block diagram of the hardware implementation, showing a synapse layer composed of epiRAM crossbar arrays and the peripheral circuit. (credit: Shinhyun Choi et al./Nature Materials)

They ran a computer simulation of an artificial neural network consisting of three sheets of neural layers connected via two layers of artificial synapses, based on measurements from their actual neuromorphic chip. They fed into their simulation tens of thousands of samples from the MNIST handwritten recognition dataset**, commonly used by neuromorphic designers.

They found that their neural network device recognized handwritten samples 95.1 percent of the time — close to the 97 percent accuracy of existing software algorithms running on large computers.
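
A rough software analogue of that simulation, offered only as an illustration: a perceptron with three neural layers (and therefore two weight, or “synapse,” layers) trained on MNIST, then re-scored after multiplicative noise is added to the weights to mimic the roughly 4 percent device-to-device variation. The network size, training settings, and noise model here are assumptions for the sketch, not the authors’ setup.

```python
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.neural_network import MLPClassifier

# Load MNIST (70,000 handwritten digits) and use the conventional 60k/10k split.
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]

# Three neural layers (784 inputs, 100 hidden, 10 outputs) = two weight layers.
mlp = MLPClassifier(hidden_layer_sizes=(100,), max_iter=20, random_state=0)
mlp.fit(X_train, y_train)
print("accuracy with ideal weights:", mlp.score(X_test, y_test))

# Mimic device-to-device variation: perturb every weight by ~4% (1 sigma).
rng = np.random.default_rng(0)
for W in mlp.coefs_:
    W *= rng.normal(1.0, 0.04, size=W.shape)
print("accuracy with 4% weight variation:", mlp.score(X_test, y_test))
```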

A chip to replace a supercomputer

The team is now in the process of fabricating a real working neuromorphic chip that can carry out handwriting-recognition tasks. Looking beyond handwriting, Kim says the team’s artificial synapse design will enable much smaller, portable neural network devices that can perform complex computations that are currently only possible with large supercomputers.

“Ultimately, we want a chip as big as a fingernail to replace one big supercomputer,” Kim says. “This opens a stepping stone to produce real artificial intelligence hardware.”

This research was supported in part by the National Science Foundation. Co-authors included researchers at Arizona State University.

* They applied voltage to each synapse and found that all synapses exhibited about the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material. They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.

** The MNIST (Modified National Institute of Standards and Technology database) is a large database of handwritten digits that is commonly used for training various image processing systems and for training and testing in the field of machine learning. It contains 60,000 training images and 10,000 testing images. 


Abstract of SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations

Although several types of architecture combining memory cells and transistors have been used to demonstrate artificial synaptic arrays, they usually present limited scalability and high power consumption. Transistor-free analog switching devices may overcome these limitations, yet the typical switching process they rely on—formation of filaments in an amorphous medium—is not easily controlled and hence hampers the spatial and temporal reproducibility of the performance. Here, we demonstrate analog resistive switching devices that possess desired characteristics for neuromorphic computing networks with minimal performance variations using a single-crystalline SiGe layer epitaxially grown on Si as a switching medium. Such epitaxial random access memories utilize threading dislocations in SiGe to confine metal filaments in a defined, one-dimensional channel. This confinement results in drastically enhanced switching uniformity and long retention/high endurance with a high analog on/off ratio. Simulations using the MNIST handwritten recognition data set prove that epitaxial random access memories can operate with an online learning accuracy of 95.1%.

Amazon’s store of the future opens

(credit: Amazon)

Amazon’s first Amazon Go store opened today in Seattle, automating most of the purchase, checkout, and payment steps associated with a retail transaction and replacing cash registers, cashiers, credit cards, self-checkout kiosks, RFID chips — and lines — with hundreds of small cameras, computer vision, deep-learning algorithms, and sensor fusion.

Just walk in (as long as you have the Amazon Go app and an Amazon.com account), scan a QR code at the turnstile, grab, and go.

Meanwhile, the shutdown of the dysfunctional U.S. government continues.* Hmm, what if we created Government Go?

If you visit the store (2131 7th Ave — 7 a.m. to 9 p.m. PT Monday to Friday), let us know about your experience and thoughts in the comments below.

* January 22 at 6:11 PM EST: House votes to end government shutdown, sending legislation to Trump — Washington Post

Deep neural network models score higher than humans in reading and comprehension test

(credit: Alibaba Group)

Microsoft and Alibaba have developed deep neural network models that scored higher than humans on a Stanford University reading-comprehension test, the Stanford Question Answering Dataset (SQuAD).

Microsoft achieved 82.650 on the ExactMatch (EM) metric* on Jan. 3, and Alibaba Group Holding Ltd. scored 82.440 on Jan. 5. The best human score so far is 82.304.

“SQuAD is a new reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage,” according to the Stanford NLP Group. “With 100,000+ question-answer pairs on 500+ articles, SQuAD is significantly larger than previous reading comprehension datasets.”

“A strong start to 2018 with the first model (SLQA+) to exceed human-level performance on @stanfordnlp SQuAD’s EM metric!,” said Pranav Rajpurkar, a Ph.D. student in the Stanford Machine Learning Group and lead author of a paper in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing on SQuAD (available on open-access ArXiv). “Next challenge: the F1 metric*, where humans still lead by ~2.5 points!” (Alibaba’s SLQA+ scored 88.607 on the F1 metric and Microsoft’s r-net+ scored 88.493.)

However, challenging the “comprehension” description, Gary Marcus, PhD, a Professor of Psychology and Neural Science at NYU, notes in a tweet that “the SQUAD test shows that machines can highlight relevant passages in text, not that they understand those passages.”

“The Chinese e-commerce titan has joined the likes of Tencent Holdings Ltd. and Baidu Inc. in a race to develop AI that can enrich social media feeds, target ads and services or even aid in autonomous driving,” Bloomberg notes. “Beijing has endorsed the technology in a national-level plan that calls for the country to become the industry leader by 2030.”

Read more: China’s Plan for World Domination in AI (Bloomberg)

*”The ExactMatch metric measures the percentage of predictions that match any one of the ground truth answers exactly. The F1 score metric measures the average overlap between the prediction and ground truth answer.” – Pranav Rajpurkar et al., ArXiv
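
For the curious, the two metrics are straightforward to compute. The sketch below follows the normalization used by the official SQuAD evaluation script (lowercasing, stripping punctuation and the articles a/an/the, collapsing whitespace), but it is a simplified illustration rather than the official code:

```python
import re, string
from collections import Counter

def normalize(text):
    """Lowercase, strip punctuation and articles, and collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, ground_truth):
    """1.0 if the normalized strings match exactly, else 0.0."""
    return float(normalize(prediction) == normalize(ground_truth))

def f1(prediction, ground_truth):
    """Harmonic mean of token-level precision and recall after normalization."""
    pred_tokens = normalize(prediction).split()
    gt_tokens = normalize(ground_truth).split()
    overlap = sum((Counter(pred_tokens) & Counter(gt_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Eiffel Tower", "Eiffel Tower"))   # 1.0 after normalization
print(round(f1("in the city of Paris", "Paris"), 2))     # 0.4 (partial token overlap)
```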

How to grow functioning human muscles from stem cells

A cross section of a muscle fiber grown from induced pluripotent stem cells, showing muscle cells (green), cell nuclei (blue), and the surrounding support matrix for the cells (credit: Duke University)

Biomedical engineers at Duke University have grown the first functioning human skeletal muscle from human induced pluripotent stem cells (iPSCs). (Pluripotent stem cells are important in regenerative medicine because they can generate any type of cell in the body and can propagate indefinitely; the induced version can be generated from adult cells instead of embryos.)

The engineers say the new technique is promising for cellular therapies, drug discovery, and studying rare diseases. “When a child’s muscles are already withering away from something like Duchenne muscular dystrophy, it would not be ethical to take muscle samples from them and do further damage,” explained Nenad Bursac, professor of biomedical engineering at Duke University and senior author of an open-access paper on the research published Tuesday, January 9, in Nature Communications.


How to grow a muscle

In the study, the researchers started with human induced pluripotent stem cells. These are cells taken from adult non-muscle tissues, such as skin or blood, and reprogrammed to revert to a primordial state. The pluripotent stem cells are then grown while being flooded with a molecule called Pax7 — which signals the cells to start becoming muscle.

After two to four weeks of 3-D culture, the resulting muscle cells form muscle fibers that contract and react to external stimuli such as electrical pulses and biochemical signals — mimicking neuronal inputs just like native muscle tissue. The researchers also implanted the newly grown muscle fibers into adult mice. The muscles survived and functioned for at least three weeks, while progressively integrating into the native tissue through vascularization (growing blood vessels).

A stained cross section of the new muscle fibers, showing muscle cells (red), receptors for neuronal input (green), and cell nuclei (blue) (credit: Duke University)

Once the cells were well on their way to becoming muscle, the researchers stopped providing the Pax7 signaling molecule and started giving the cells the support and nourishment they needed to fully mature. (At this point in the research, the resulting muscle is not as strong as native muscle tissue, and also falls short of the muscle grown in a previous study*, which started from muscle biopsies.)

However, the pluripotent stem cell-derived muscle fibers develop reservoirs of “satellite-like cells” that are necessary for normal adult muscle to repair damage, whereas the muscle from the previous study had far fewer of these cells. The stem cell method is also capable of growing many more cells from a smaller starting batch than the previous biopsy method.

“With this technique, we can just take a small sample of non-muscle tissue, like skin or blood, revert the obtained cells to a pluripotent state, and eventually grow an endless amount of functioning muscle fibers to test,” said Bursac.

The researchers could also, in theory, fix genetic malfunctions in the induced pluripotent stem cells derived from a patient, he added. Then they could grow small patches of completely healthy muscle. This could not heal or replace an entire body’s worth of diseased muscle, but it could be used in tandem with more widely targeted genetic therapies or to heal more localized problems.


The researchers are now refining their technique to grow more robust muscles and beginning work to develop new models of rare muscle diseases. This work was supported by the National Institutes of Health.


Duke Engineering | Human Muscle Grown from Skin Cells

Muscles for future microscale robot exoskeletons

Meanwhile, physicists at Cornell University are exploring ways to create muscles for future microscale robot exoskeletons — rapidly changing their shape upon sensing chemical or thermal changes in their environment. The new designs are compatible with semiconductor manufacturing, making them useful for future microscale robotics.

The microscale robot exoskeleton muscles move using a motor called a bimorph. (A bimorph is an assembly of two materials — in this case, graphene and glass — that bends when driven by a stimulus like heat, a chemical reaction or an applied voltage.) The shape change happens because, in the case of heat, two materials with different thermal responses expand by different amounts over the same temperature change. The bimorph bends to relieve some of this strain, allowing one layer to stretch out longer than the other. By adding rigid flat panels that cannot be bent by bimorphs, the researchers localize bending to take place only in specific places, creating folds. With this concept, they are able to make a variety of folding structures ranging from tetrahedra (triangular pyramids) to cubes. The bimorphs also fold in response to chemical stimuli by driving large ions into the glass, causing it to expand. (credit: Marc Z. Miskin et al./PNAS)
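
For a sense of why atomic thinness matters here, the classical result for an idealized bimorph (two layers of equal thickness and equal stiffness, an idealization rather than the graphene–glass geometry of the paper) relates curvature to the mismatch in thermal expansion:

\[ \kappa = \frac{1}{r} \approx \frac{3\,(\alpha_1 - \alpha_2)\,\Delta T}{2h} \]

where \(\alpha_1\) and \(\alpha_2\) are the two layers’ thermal-expansion coefficients, \(\Delta T\) is the temperature change, \(h\) is the total bimorph thickness, and \(r\) is the radius of curvature. Because curvature scales inversely with thickness, nanometer-thick bimorphs can fold structures at micrometer scales with only tiny strains.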

Their work is outlined in a paper published Jan. 2 in Proceedings of the National Academy of Sciences.

* The advance builds on work published in 2015, when the Duke engineers grew the first functioning human muscle tissue from cells obtained from muscle biopsies. In that research, Bursac and his team started with small samples of human cells obtained from muscle biopsies, called “myoblasts,” that had already progressed beyond the stem cell stage but hadn’t yet become mature muscle fibers. The engineers grew these myoblasts by many folds and then put them into a supportive 3-D scaffolding filled with a nourishing gel that allowed them to form aligned and functioning human muscle fibers.


Abstract of Engineering human pluripotent stem cells into a functional skeletal muscle tissue

The generation of functional skeletal muscle tissues from human pluripotent stem cells (hPSCs) has not been reported. Here, we derive induced myogenic progenitor cells (iMPCs) via transient overexpression of Pax7 in paraxial mesoderm cells differentiated from hPSCs. In 2D culture, iMPCs readily differentiate into spontaneously contracting multinucleated myotubes and a pool of satellite-like cells endogenously expressing Pax7. Under optimized 3D culture conditions, iMPCs derived from multiple hPSC lines reproducibly form functional skeletal muscle tissues (iSKM bundles) containing aligned multi-nucleated myotubes that exhibit positive force–frequency relationship and robust calcium transients in response to electrical or acetylcholine stimulation. During 1-month culture, the iSKM bundles undergo increased structural and molecular maturation, hypertrophy, and force generation. When implanted into dorsal window chamber or hindlimb muscle in immunocompromised mice, the iSKM bundles survive, progressively vascularize, and maintain functionality. iSKM bundles hold promise as a microphysiological platform for human muscle disease modeling and drug development.


Abstract of Graphene-based bimorphs for micron-sized, autonomous origami machines

Origami-inspired fabrication presents an attractive platform for miniaturizing machines: thinner layers of folding material lead to smaller devices, provided that key functional aspects, such as conductivity, stiffness, and flexibility, are preserved. Here, we show origami fabrication at its ultimate limit by using 2D atomic membranes as a folding material. As a prototype, we bond graphene sheets to nanometer-thick layers of glass to make ultrathin bimorph actuators that bend to micrometer radii of curvature in response to small strain differentials. These strains are two orders of magnitude lower than the fracture threshold for the device, thus maintaining conductivity across the structure. By patterning 2-μm-thick rigid panels on top of the bimorphs, we localize bending to the unpatterned regions to produce folds. […]

Will artificial intelligence become conscious?

(Credit: EPFL/Blue Brain Project)

By Subhash Kak, Regents Professor of Electrical and Computer Engineering, Oklahoma State University

Forget about today’s modest incremental advances in artificial intelligence, such as the increasing abilities of cars to drive themselves. Waiting in the wings might be a groundbreaking development: a machine that is aware of itself and its surroundings, and that could take in and process massive amounts of data in real time. It could be sent on dangerous missions, into space or combat. In addition to driving people around, it might be able to cook, clean, do laundry — and even keep humans company when other people aren’t nearby.

A particularly advanced set of machines could replace humans at literally all jobs. That would save humanity from workaday drudgery, but it would also shake many societal foundations. A life of no work and only play may turn out to be a dystopia.

Conscious machines would also raise troubling legal and ethical problems. Would a conscious machine be a “person” under law and be liable if its actions hurt someone, or if something goes wrong? To think of a more frightening scenario, might these machines rebel against humans and wish to eliminate us altogether? If yes, they represent the culmination of evolution.

As a professor of electrical engineering and computer science who works in machine learning and quantum theory, I can say that researchers are divided on whether these sorts of hyperaware machines will ever exist. There’s also debate about whether machines could or should be called “conscious” in the way we think of humans, and even some animals, as conscious. Some of the questions have to do with technology; others have to do with what consciousness actually is.

Is awareness enough?

Most computer scientists think that consciousness is a characteristic that will emerge as technology develops. Some believe that consciousness involves accepting new information, storing and retrieving old information and cognitive processing of it all into perceptions and actions. If that’s right, then one day machines will indeed be the ultimate consciousness. They’ll be able to gather more information than a human, store more than many libraries, access vast databases in milliseconds and compute all of it into decisions more complex, and yet more logical, than any person ever could.

On the other hand, there are physicists and philosophers who say there’s something more about human behavior that cannot be computed by a machine. Creativity, for example, and the sense of freedom people possess don’t appear to come from logic or calculations.

Yet these are not the only views of what consciousness is, or whether machines could ever achieve it.

Quantum views

Another viewpoint on consciousness comes from quantum theory, which is the deepest theory of physics. According to the orthodox Copenhagen Interpretation, consciousness and the physical world are complementary aspects of the same reality. When a person observes, or experiments on, some aspect of the physical world, that person’s conscious interaction causes discernible change. Since it takes consciousness as a given and no attempt is made to derive it from physics, the Copenhagen Interpretation may be called the “big-C” view of consciousness, where it is a thing that exists by itself – although it requires brains to become real. This view was popular with the pioneers of quantum theory such as Niels Bohr, Werner Heisenberg and Erwin Schrödinger.

The interaction between consciousness and matter leads to paradoxes that remain unresolved after 80 years of debate. A well-known example of this is the paradox of Schrödinger’s cat, in which a cat is placed in a situation that results in it being equally likely to survive or die – and the act of observation itself is what makes the outcome certain.

The opposing view is that consciousness emerges from biology, just as biology itself emerges from chemistry which, in turn, emerges from physics. We call this less expansive concept of consciousness “little-C.” It agrees with the neuroscientists’ view that the processes of the mind are identical to states and processes of the brain. It also agrees with a more recent interpretation of quantum theory motivated by an attempt to rid it of paradoxes, the Many Worlds Interpretation, in which observers are a part of the mathematics of physics.

Philosophers of science believe that these modern quantum physics views of consciousness have parallels in ancient philosophy. Big-C is like the theory of mind in Vedanta – in which consciousness is the fundamental basis of reality, on par with the physical universe.

Little-C, in contrast, is quite similar to Buddhism. Although the Buddha chose not to address the question of the nature of consciousness, his followers declared that mind and consciousness arise out of emptiness or nothingness.

Big-C and scientific discovery

Scientists are also exploring whether consciousness is always a computational process. Some scholars have argued that the creative moment is not at the end of a deliberate computation. For instance, dreams or visions are supposed to have inspired Elias Howe‘s 1845 design of the modern sewing machine, and August Kekulé’s discovery of the structure of benzene in 1862.

A dramatic piece of evidence in favor of big-C consciousness existing all on its own is the life of self-taught Indian mathematician Srinivasa Ramanujan, who died in 1920 at the age of 32. His notebook, which was lost and forgotten for about 50 years and published only in 1988, contains several thousand formulas, without proof in different areas of mathematics, that were well ahead of their time. Furthermore, the methods by which he found the formulas remain elusive. He himself claimed that they were revealed to him by a goddess while he was asleep.

The concept of big-C consciousness raises the questions of how it is related to matter, and how matter and mind mutually influence each other. Consciousness alone cannot make physical changes to the world, but perhaps it can change the probabilities in the evolution of quantum processes. The act of observation can freeze and even influence atoms’ movements, as Cornell physicists proved in 2015. This may very well be an explanation of how matter and mind interact.

Mind and self-organizing systems

It is possible that the phenomenon of consciousness requires a self-organizing system, like the brain’s physical structure. If so, then current machines will come up short.

Scholars don’t know if adaptive self-organizing machines can be designed to be as sophisticated as the human brain; we lack a mathematical theory of computation for systems like that. Perhaps it’s true that only biological machines can be sufficiently creative and flexible. But then that suggests people should – or soon will – start working on engineering new biological structures that are, or could become, conscious.

Reprinted with permission from The Conversation

AlphaZero’s ‘alien’ superhuman-level program masters chess in 24 hours with no domain knowledge

AlphaZero vs. Stockfish chess program | Round 1 (credit: Chess.com)

Demis Hassabis, the founder and CEO of DeepMind, announced at the Neural Information Processing Systems conference (NIPS 2017) last week that DeepMind’s new AlphaZero program achieved a superhuman level of play in chess within 24 hours.

The program started from random play, given no domain knowledge except the game rules, according to an arXiv paper by DeepMind researchers published Dec. 5.

“It doesn’t play like a human, and it doesn’t play like a program,” said Hassabis, an expert chess player himself. “It plays in a third, almost alien, way. It’s like chess from another dimension.”

AlphaZero also mastered both shogi (Japanese chess) and Go within 24 hours, defeating a world-champion program in all three cases. The original AlphaGo mastered Go by learning thousands of example games and then practicing against another version of itself.

“AlphaZero was not ‘taught’ the game in the traditional sense,” explains Chess.com. “That means no opening book, no endgame tables, and apparently no complicated algorithms dissecting minute differences between center pawns and side pawns. This would be akin to a robot being given access to thousands of metal bits and parts, but no knowledge of a combustion engine, then it experiments numerous times with every combination possible until it builds a Ferrari. … The program had four hours to play itself many, many times, thereby becoming its own teacher.”
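
To make the self-play idea concrete, here is a deliberately tiny sketch: tabular value learning for tic-tac-toe that starts from random play and improves using nothing but the rules and the outcomes of its own games. This is not AlphaZero’s method (which couples a deep network with Monte Carlo tree search); it only illustrates the “its own teacher” loop described above.

```python
import random
from collections import defaultdict

# Tiny tabula-rasa self-play learner for tic-tac-toe. A value table and a
# one-ply lookahead stand in for AlphaZero's deep network and tree search;
# the point is only the loop: random play at first, improvement driven solely
# by the game rules and the results of games played against itself.

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, s in enumerate(board) if s == "."]

V = defaultdict(float)   # value of a position, from the view of the player who just moved
ALPHA, EPS = 0.2, 0.2    # learning rate and exploration rate

def choose(board, player):
    opts = moves(board)
    if random.random() < EPS:
        return random.choice(opts)            # explore
    def value_after(m):
        nxt = board[:m] + player + board[m+1:]
        return 1.0 if winner(nxt) == player else V[nxt]
    return max(opts, key=value_after)         # exploit the learned table

def self_play_game():
    board, player, history = "." * 9, "X", []
    while True:
        m = choose(board, player)
        board = board[:m] + player + board[m+1:]
        history.append((board, player))
        w = winner(board)
        if w or not moves(board):
            break
        player = "O" if player == "X" else "X"
    for pos, p in history:                    # credit assignment from the final result
        target = 0.0 if w is None else (1.0 if p == w else -1.0)
        V[pos] += ALPHA * (target - V[pos])

for _ in range(20000):                        # the program is its own teacher
    self_play_game()
print("positions evaluated:", len(V))
```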

“What’s also remarkable, though, Hassabis explained, is that it sometimes makes seemingly crazy sacrifices, like offering up a bishop and queen to exploit a positional advantage that led to victory,” MIT Technology Review notes. “Such sacrifices of high-value pieces are normally rare. In another case the program moved its queen to the corner of the board, a very bizarre trick with a surprising positional value.”


Abstract of Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm

The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.

New technology allows robots to visualize their own future


UC Berkeley | Vestri the robot imagines how to perform tasks

UC Berkeley researchers have developed a robotic learning technology that enables robots to imagine the future of their actions so they can figure out how to manipulate objects they have never encountered before. It could help self-driving cars anticipate future events on the road and produce more intelligent robotic assistants in homes.

The initial prototype focuses on learning simple manual skills entirely from autonomous play — similar to how children can learn about their world by playing with toys, moving them around, grasping, etc.

Using this technology, called visual foresight, the robots can predict what their cameras will see if they perform a particular sequence of movements. These robotic imaginations are still relatively simple for now — predictions made only several seconds into the future — but they are enough for the robot to figure out how to move objects around on a table without disturbing obstacles.

The robot can learn to perform these tasks without any help from humans or prior knowledge about physics, its environment, or what the objects are. That’s because the visual imagination is learned entirely from scratch from unattended and unsupervised (no humans involved) exploration, where the robot plays with objects on a table.

After this play phase, the robot builds a predictive model of the world, and can use this model to manipulate new objects that it has not seen before.

“In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it,” said Sergey Levine, assistant professor in Berkeley’s Department of Electrical Engineering and Computer Sciences, whose lab developed the technology. “This can enable intelligent planning of highly flexible skills in complex real-world situations.”

The research team demonstrated the visual foresight technology at the Neural Information Processing Systems conference in Long Beach, California, on Monday, December 4, 2017.

Learning by playing: how it works

Robot’s imagined predictions (credit: UC Berkeley)

At the core of this system is a deep learning technology based on convolutional recurrent video prediction, or dynamic neural advection (DNA). DNA-based models predict how pixels in an image will move from one frame to the next, based on the robot’s actions. Recent improvements to this class of models, as well as greatly improved planning capabilities, have enabled robotic control based on video prediction to perform increasingly complex tasks, such as sliding toys around obstacles and repositioning multiple objects.
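
A stripped-down sketch of the “pixels move between frames” idea (not the DNA model itself, which learns the motion with a convolutional recurrent network conditioned on the robot’s action): given a frame and a per-pixel flow field, the predicted next frame is just the current frame with its pixels advected along that flow.

```python
import numpy as np

# Minimal illustration of pixel-motion ("advection") video prediction: the next
# frame is produced by moving pixels of the current frame along a 2D flow field.
# In the real DNA model that flow comes from a learned network conditioned on
# the robot's action; here a hand-made uniform flow shows only the warping step.

def warp(frame, flow):
    """Advect pixels: out[y, x] = frame[y - dy, x - dx] (nearest neighbor)."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys - flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - flow[..., 1]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

frame = np.zeros((8, 8)); frame[3, 3] = 1.0        # a single bright "object"
flow = np.zeros((8, 8, 2)); flow[..., 1] = 2.0     # action -> move 2 px to the right
print(np.argwhere(warp(frame, flow) == 1.0))       # [[3 5]]: the object has shifted
```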

“In the past, robots have learned skills with a human supervisor helping and providing feedback. What makes this work exciting is that the robots can learn a range of visual object manipulation skills entirely on their own,” said Chelsea Finn, a doctoral student in Levine’s lab and inventor of the original DNA model.

With the new technology, a robot pushes objects on a table, then uses the learned prediction model to choose motions that will move an object to a desired location. Robots use the learned model from raw camera observations to teach themselves how to avoid obstacles and push objects around obstructions.
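
Planning with such a model can be as simple as sampling candidate action sequences, rolling each one through the predictor, and executing the sequence whose predicted outcome looks best, a scheme often called visual model-predictive control. The sketch below stubs out the learned predictor with toy dynamics; in the real system, the rollout and the cost are computed on predicted video frames.

```python
import numpy as np

# Schematic of planning with a learned prediction model: sample candidate action
# sequences, roll each through the predictor, and keep the one whose predicted
# outcome lands the object closest to the goal. `predict` is a stand-in here.

rng = np.random.default_rng(0)

def predict(obj_pos, actions):
    # Stand-in dynamics: each action is a small 2D push that shifts the object.
    return obj_pos + actions.sum(axis=0)

def plan(obj_pos, goal, horizon=5, n_samples=200):
    best_cost, best_actions = np.inf, None
    for _ in range(n_samples):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, 2))   # candidate pushes
        cost = np.linalg.norm(predict(obj_pos, actions) - goal)
        if cost < best_cost:
            best_cost, best_actions = cost, actions
    return best_actions, best_cost

actions, cost = plan(obj_pos=np.array([0.0, 0.0]), goal=np.array([2.0, -1.5]))
print("predicted distance to goal after planning:", round(cost, 3))
```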

Since control through video prediction relies only on observations that can be collected autonomously by the robot, such as through camera images, the resulting method is general and broadly applicable. Building video prediction models only requires unannotated video, which can be collected by the robot entirely autonomously.

That contrasts with conventional computer-vision methods, which require humans to manually label thousands or even millions of images.

Why (most) future robots won’t look like robots

A future robot’s body could combine soft actuators and stiff structure, with distributed computation throughout — an example of the new “material robotics.” (credit: Nikolaus Correll/University of Colorado)

Future robots won’t be limited to humanoid form (like Boston Dynamics’ formidable backflipping Atlas). They’ll be invisibly embedded everywhere in common objects.

Such as a shoe that can intelligently support your gait, change stiffness as you’re running or walking, and adapt to different surfaces — or even help you do backflips.

That’s the vision of researchers at Oregon State University, the University of Colorado, Yale University, and École Polytechnique Fédérale de Lausanne, who describe the burgeoning new field of “material robotics” in a perspective article published Nov. 29, 2017 in Science Robotics. (The article cites nine articles in this special issue, three of which you can access for free.)

Disappearing into the background of everyday life

The authors challenge a widespread basic assumption: that robots are either “machines that run bits of code” or “software ‘bots’ interacting with the world through a physical instrument.”

“We take a third path: one that imbues intelligence into the very matter of a robot,” says Oregon State University researcher Yiğit Mengüç, an assistant professor of mechanical engineering in OSU’s College of Engineering and part of the college’s Collaborative Robotics and Intelligent Systems Institute.

On that path, materials scientists are developing new bulk materials with the inherent multifunctionality required for robotic applications, while roboticists are working on new material systems with tightly integrated components, disappearing into the background of everyday life. “The spectrum of possible approaches spans from soft grippers with zero knowledge and zero feedback all the way to humanoids with full knowledge and full feedback,” the authors note in the paper.

For example, “In the future, your smartphone may be made from stretchable, foldable material so there’s no danger of it shattering,” says Mengüç. “Or it might have some actuation, where it changes shape in your hand to help with the display, or it can be able to communicate something about what you’re observing on the screen. All these bits and pieces of technology that we take for granted in life will be living, physically responsive things, moving, changing shape in response to our needs, not just flat, static screens.”

Soft robots get superpowers

Origami-inspired artificial muscles capable of lifting up to 1,000 times their own weight, simply by applying air or water pressure (credit: Shuguang Li/Wyss Institute at Harvard University)

As a good example of material-enabled robotics, researchers at the Wyss Institute at Harvard University and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed origami-inspired, programmable, super-strong artificial muscles that will allow future soft robots to lift objects that are up to 1,000 times their own weight — using only air or water pressure.

The actuators are “programmed” by the structural design itself. The skeleton folds define how the whole structure moves — no control system required.

That allows the muscles to be very compact and simple, which makes them more appropriate for mobile or body-mounted systems that can’t accommodate large or heavy machinery, says Shuguang Li, Ph.D., a Postdoctoral Fellow at the Wyss Institute and MIT CSAIL and first author of an open-access article on the research published Nov. 21, 2017 in Proceedings of the National Academy of Sciences (PNAS).

Each artificial muscle consists of an inner “skeleton” that can be made of various materials, such as a metal coil or a sheet of plastic folded into a certain pattern, surrounded by air or fluid and sealed inside a plastic or textile bag that serves as the “skin.” The structural geometry of the skeleton itself determines the muscle’s motion. A vacuum applied to the inside of the bag initiates the muscle’s movement by causing the skin to collapse onto the skeleton, creating tension that drives the motion. Incredibly, no other power source or human input is required to direct the muscle’s movement — it’s automagically determined entirely by the shape and composition of the skeleton. (credit: Shuguang Li/Wyss Institute at Harvard University)

Resilient, multipurpose, scalable

Not only can the artificial muscles move in many ways, they do so with impressive resilience. They can generate about six times more force per unit area than mammalian skeletal muscle can, and are also incredibly lightweight. A 2.6-gram muscle can lift a 3-kilogram object, which is the equivalent of a mallard duck lifting a car. Additionally, a single muscle can be constructed within ten minutes using materials that cost less than $1, making them cheap and easy to test and iterate.
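
Those headline numbers are easy to sanity-check. Taking the ~600 kPa peak stress reported in the paper’s abstract (below), and roughly 100 kPa as a commonly quoted ballpark for mammalian skeletal muscle (an assumed reference value, not a figure from the paper):

\[ \frac{3\,\text{kg}}{2.6\,\text{g}} = \frac{3000}{2.6} \approx 1.2\times10^{3}, \qquad \frac{600\,\text{kPa}}{\sim\!100\,\text{kPa}} \approx 6 \]

which is consistent with the “up to 1,000 times their own weight” and “about six times more force per unit area” claims.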

These muscles can be powered by a vacuum, which makes them safer than most of the other artificial muscles currently being tested. The muscles have been built in sizes ranging from a few millimeters up to a meter. So the muscles can be used in numerous applications at multiple scales, from miniature surgical devices to wearable robotic exoskeletons, transformable architecture, and deep-sea manipulators for research or construction, up to large deployable structures for space exploration.

The team could also construct the muscles out of the water-soluble polymer PVA. That opens the possibility of bio-friendly robots that can perform tasks in natural settings with minimal environmental impact, or ingestible robots that move to the proper place in the body and then dissolve to release a drug.

The team constructed dozens of muscles using materials ranging from metal springs to packing foam to sheets of plastic, and experimented with different skeleton shapes to create muscles that can contract down to 10% of their original size, lift a delicate flower off the ground, and twist into a coil, all simply by sucking the air out of them.

This research was funded by the Defense Advanced Research Projects Agency (DARPA), the National Science Foundation (NSF), and the Wyss Institute for Biologically Inspired Engineering.


Wyss Institute | Origami-Inspired Artificial Muscles


Abstract of Fluid-driven origami-inspired artificial muscles

Artificial muscles hold promise for safe and powerful actuation for myriad common machines and robots. However, the design, fabrication, and implementation of artificial muscles are often limited by their material costs, operating principle, scalability, and single-degree-of-freedom contractile actuation motions. Here we propose an architecture for fluid-driven origami-inspired artificial muscles. This concept requires only a compressible skeleton, a flexible skin, and a fluid medium. A mechanical model is developed to explain the interaction of the three components. A fabrication method is introduced to rapidly manufacture low-cost artificial muscles using various materials and at multiple scales. The artificial muscles can be programed to achieve multiaxial motions including contraction, bending, and torsion. These motions can be aggregated into systems with multiple degrees of freedom, which are able to produce controllable motions at different rates. Our artificial muscles can be driven by fluids at negative pressures (relative to ambient). This feature makes actuation safer than most other fluidic artificial muscles that operate with positive pressures. Experiments reveal that these muscles can contract over 90% of their initial lengths, generate stresses of ∼600 kPa, and produce peak power densities over 2 kW/kg—all equal to, or in excess of, natural muscle. This architecture for artificial muscles opens the door to rapid design and low-cost fabrication of actuation systems for numerous applications at multiple scales, ranging from miniature medical devices to wearable robotic exoskeletons to large deployable structures for space exploration.