This low-power chip could make speech recognition practical for tiny devices

MIT researchers have built a low-power chip specialized for automatic speech recognition. Whereas a cellphone running speech-recognition software might require about 1 watt of power, the new chip requires roughly one-hundredth as much: between 0.2 and 10 milliwatts, depending on the number of words it has to recognize.

That could translate to a power savings of 90 to 99 percent, making voice control practical for wearables (especially watches, earbuds, and glasses, where speech recognition is essential) and for other simple electronic devices in the “internet of things” (IoT), including ones that have to harvest energy from their environments or go months between battery charges, says Anantha Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science at MIT, whose group developed the new chip.
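The headline figures can be checked with a line of arithmetic. A sketch, taking the article's rough 1-watt phone estimate as given:

```python
# Rough power-savings arithmetic from the figures quoted above.
PHONE_MW = 1000.0  # ~1 W for phone-based recognition (the article's estimate)

def savings_pct(chip_mw: float) -> float:
    """Power saving of the chip versus phone software, as a percentage."""
    return 100.0 * (1.0 - chip_mw / PHONE_MW)

# At the chip's 10 mW upper bound the saving is 99 percent;
# at its 0.2 mW lower bound it is higher still.
print(round(savings_pct(10.0), 2))
print(round(savings_pct(0.2), 2))
```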

A voice-recognition network is too big to fit in a chip’s onboard memory, which is a problem because going off-chip for data is much more energy intensive than retrieving it from local stores. So the MIT researchers’ design concentrates on minimizing the amount of data that the chip has to retrieve from off-chip memory.

The new MIT speech-recognition chip uses SRAM for on-chip memory (instead of MLC flash, which requires more energy per bit); deep neural networks (a first in a standalone hardware speech recognizer) optimized for power consumption as low as 3.3 mW through limited network widths and quantized, sparse weight matrices; and other techniques. The chip supports vocabularies of up to 145,000 words, recognized in real time. A simple “voice activity detection” circuit monitors ambient sound to determine whether it might be speech, rather than merely a rise in noise energy. (credit: Michael Price et al./MIT)
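The voice-activity detector described above gates the recognizer on candidate speech. A minimal software analogue of energy-based detection (an illustrative sketch only; the chip implements this as a dedicated low-power circuit) compares short-frame energy against a running noise floor:

```python
# Illustrative energy-based voice activity detection (VAD) sketch:
# wake the recognizer only when frame energy rises well above the noise floor.

def frame_energy(samples):
    """Mean squared amplitude of one audio frame."""
    return sum(s * s for s in samples) / len(samples)

def vad(frames, ratio=4.0, alpha=0.95):
    """Yield True for frames judged to contain speech.

    ratio: how far above the noise floor a frame must rise.
    alpha: smoothing factor for the running noise-floor estimate.
    """
    noise = None
    for f in frames:
        e = frame_energy(f)
        if noise is None:
            noise = e                      # initialize floor from first frame
        speech = e > ratio * noise
        if not speech:                     # adapt the floor only on non-speech
            noise = alpha * noise + (1 - alpha) * e
        yield speech

quiet = [[0.01] * 160] * 5   # low-energy "ambient" frames
loud = [[0.5] * 160] * 2     # high-energy "speech" frames
flags = list(vad(quiet + loud))
print(flags)                 # speech flagged only on the loud frames
```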

The new chip was presented last week at the International Solid-State Circuits Conference.

The research was funded through the Qmulus Project, a joint venture between MIT and Quanta Computer, and the chip was prototyped through the Taiwan Semiconductor Manufacturing Company’s University Shuttle Program.


Abstract of A Scalable Speech Recognizer with Deep-Neural-Network Acoustic Models and Voice-Activated Power Gating

The applications of speech interfaces, commonly used for search and personal assistants, are diversifying to include wearables, appliances, and robots. Hardware-accelerated automatic speech recognition (ASR) is needed for scenarios that are constrained by power, system complexity, or latency. Furthermore, a wakeup mechanism, such as voice activity detection (VAD), is needed to power gate the ASR and downstream system. This paper describes IC designs for ASR and VAD that improve on the accuracy, programmability, and scalability of previous work.

SpaceX plans global space internet

(credit: SpaceX)

SpaceX has applied to the FCC to launch 11,943 satellites into low-Earth orbit, providing “ubiquitous high-bandwidth (up to 1Gbps per user, once fully deployed) broadband services for consumers and businesses in the U.S. and globally,” according to FCC applications.

Recent meetings with the FCC suggest that the plan now looks like “an increasingly feasible reality — particularly with 5G technologies just a few years away, promising new devices and new demand for data,” Verge reports.

Such a service will be particularly useful to rural areas, which have limited access to internet bandwidth.

Low-Earth orbit (at up to 2,000 kilometers, or 1,200 mi) ensures lower latency (communication delay between Earth and satellite) — making the service usable for voice communications via Skype, for example — compared to geosynchronous orbit (at 35,786 kilometers, or 22,000 miles), offered by Dish Network and other satellite ISP services.* The downside: it takes a lot more satellites to provide the coverage.
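The latency gap follows directly from light-speed propagation delay. A back-of-the-envelope sketch (real latencies add routing and processing overhead, and ~1,150 km is an assumed representative LEO altitude):

```python
# Speed-of-light propagation delay for a user -> satellite -> ground hop,
# there and back. Real-world latency adds routing and processing time.
C_KM_S = 299_792.458  # speed of light, km/s

def round_trip_ms(altitude_km: float) -> float:
    # Four legs: up and down for the request, up and down for the reply.
    return 4 * altitude_km / C_KM_S * 1000

print(round(round_trip_ms(1_150), 1))    # LEO: ~15 ms, consistent with 25-35 ms
                                         # once overhead is included
print(round(round_trip_ms(35_786), 1))   # GEO: ~477 ms, consistent with 600+ ms
```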

Boeing, Softbank-backed OneWeb (which hopes to “connect every school to the Internet by 2022”), Telesat, and others** have proposed similar services, possibly bringing the total number of satellites in low and medium Earth orbits to about 20,000 in the 2020s, estimates Next Big Future.

* “SpaceX expects its latencies to be between 25 and 35 ms, similar to the latencies measured for wired Internet services. Current satellite ISPs have latencies of 600 ms or more, according to FCC measurements,” notes Ars Technica.

** Audacy, Karousel, Kepler Communications, LeoSat, O3b, Space Norway, Theia Holdings, and ViaSat, according to Space News. The ITU [the international counterpart of the FCC] has set rules preventing new constellations from interfering with established ground and satellite systems operating in the same frequencies. OneWeb, for example, has said it will basically switch off power as its satellites cross the equator so as not to disturb transmissions from geostationary-orbit satellites directly above that use Ku-band frequencies.

 

First nanoengineered retinal implant could help the blind regain functional vision

Activated by incident light, photosensitive silicon nanowires 1 micrometer in diameter stimulate residual undamaged retinal cells to induce visual sensations. (credit: adapted from Sohmyung Ha et al./J. Neural Eng.)

A team of engineers at the University of California San Diego and La Jolla-based startup Nanovision Biosciences Inc. has developed the first nanoengineered retinal prosthesis, a step closer to restoring the ability of neurons in the retina to respond to light.

The technology could help tens of millions of people worldwide suffering from neurodegenerative diseases that affect eyesight, including macular degeneration, retinitis pigmentosa, and loss of vision due to diabetes.

Despite advances in the development of retinal prostheses over the past two decades, the performance of devices currently on the market to help the blind regain functional vision is still severely limited — well under the acuity threshold of 20/200 that defines legal blindness.

The new prosthesis relies on two new technologies: implanted arrays of photosensitive nanowires and a wireless power/data system.

Implanted arrays of silicon nanowires

The new prosthesis uses arrays of nanowires that simultaneously sense light and electrically stimulate the retina. The nanowires provide higher resolution than anything achieved by other devices — closer to the dense spacing of photoreceptors in the human retina, according to the researchers.*

Comparison of retina and electrode geometries between an existing retinal prosthesis and new nanoengineered prosthesis design. (left) Planar platinum electrodes (gray) of the FDA-approved Argus II retinal prosthesis (a 60-element array with 200 micrometer electrode diameter). (center) Retinal photoreceptor cells: rods (yellow) and cones (green). (right) Fabricated silicon nanowires (1 micrometer in diameter) at the same spatial magnification as photoreceptor cells. (credit: Science Photo Library and Sohmyung Ha et al./ J. Neural Eng.)

Existing retinal prostheses require a vision sensor (such as a camera) outside of the eye to capture a visual scene and then transform it into signals to sequentially stimulate retinal neurons (in a matrix). Instead, the silicon nanowires mimic the retina’s light-sensing cones and rods to directly stimulate retinal cells. The nanowires are bundled into a grid of electrodes, directly activated by light.

This direct, local translation of incident light into electrical stimulation makes for a much simpler — and scalable — architecture for a prosthesis, according to the researchers.

Wireless power and telemetry system

For the new device, power is delivered wirelessly, from outside the body to the implant, through an inductive powering telemetry system. Data to the nanowires is sent over the same wireless link at record speed and energy efficiency. The telemetry system is capable of transmitting both power and data over a single pair of inductive coils, one emitting from outside the body, and another on the receiving side in the eye.**

Three of the researchers have co-founded La Jolla-based Nanovision Biosciences, a partner in this study, to further develop and translate the technology into clinical use, with the goal of restoring functional vision in patients with severe retinal degeneration. Animal tests with the device are in progress, with clinical trials to follow.***

The research was described in a recent issue of the Journal of Neural Engineering. It was funded by Nanovision Biosciences, Qualcomm Inc., and the Institute of Engineering in Medicine and the Clinical and Translational Research Institute at UC San Diego.

* For visual acuity of 20/20, an electrode pixel size of 5 μm (micrometers) is required; 20/200 visual acuity requires 50 μm. The minimum number of electrodes required for pattern recognition or reading text is estimated to be about 600. The new nanoengineered silicon nanowire electrodes are 1 μm in diameter; 2,500 silicon nanowires were used in the experiment.
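The footnote's figures follow a simple proportionality: the required pixel size scales linearly with the Snellen acuity denominator. A sketch of that arithmetic:

```python
# Electrode pixel size scales with the Snellen acuity denominator:
# 20/20 needs ~5 um pixels, so 20/200 (10x coarser) needs ~50 um.
def pixel_size_um(acuity_denominator: int, base_um: float = 5.0) -> float:
    """Required electrode pixel size (um) for acuity 20/<denominator>."""
    return base_um * acuity_denominator / 20

print(pixel_size_um(20))    # normal vision
print(pixel_size_um(200))   # legal-blindness threshold
```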

** The device is highly energy efficient because it minimizes energy losses in wireless power and data transmission and in the stimulation process, recycling electrostatic energy circulating within the inductive resonant tank, and between capacitance on the electrodes and the resonant tank. Up to 90 percent of the energy transmitted is actually delivered and used for stimulation, which means less RF wireless power emitting radiation in the transmission, and less heating of the surrounding tissue from dissipated power.

These are primary cortical neurons cultured on the surface of an array of optoelectronic nanowires. Here a neuron is pulling on the nanowires, indicating that the cell is doing well on this material. (credit: UC San Diego)

*** For proof-of-concept, the researchers inserted the wirelessly powered nanowire array beneath a transgenic rat retina with rhodopsin P23H knock-in retinal degeneration. The degenerated retina interfaced in vitro with a microelectrode array for recording extracellular neural action potentials (electrical “spikes” from neural activity).


Abstract of Towards high-resolution retinal prostheses with direct optical addressing and inductive telemetry

Objective. Despite considerable advances in retinal prostheses over the last two decades, the resolution of restored vision has remained severely limited, well below the 20/200 acuity threshold of blindness. Towards drastic improvements in spatial resolution, we present a scalable architecture for retinal prostheses in which each stimulation electrode is directly activated by incident light and powered by a common voltage pulse transferred over a single wireless inductive link. Approach. The hybrid optical addressability and electronic powering scheme provides separate spatial and temporal control over stimulation, and further provides optoelectronic gain for substantially lower light intensity thresholds than other optically addressed retinal prostheses using passive microphotodiode arrays. The architecture permits the use of high-density electrode arrays with ultra-high photosensitive silicon nanowires, obviating the need for excessive wiring and high-throughput data telemetry. Instead, the single inductive link drives the entire array of electrodes through two wires and provides external control over waveform parameters for common voltage stimulation. Main results. A complete system comprising inductive telemetry link, stimulation pulse demodulator, charge-balancing series capacitor, and nanowire-based electrode device is integrated and validated ex vivo on rat retina tissue. Significance. Measurements demonstrate control over retinal neural activity both by light and electrical bias, validating the feasibility of the proposed architecture and its system components as an important first step towards a high-resolution optically addressed retinal prosthesis.


Engineers shrink atomic-force microscope to dime-sized device

A MEMS-based atomic force microscope developed by engineers at the University of Texas at Dallas that is about 1 square centimeter in size (top center), shown attached here to a small printed circuit board that contains circuitry, sensors and other miniaturized components that control the movement and other aspects of the device. (credit: University of Texas at Dallas)

University of Texas at Dallas researchers have created an atomic force microscope (AFM) on a chip, dramatically shrinking the size — and, hopefully, the price — of a microscope used to characterize material properties down to molecular dimensions.

“A standard atomic force microscope is a large, bulky instrument, with multiple control loops, electronics and amplifiers,” said Dr. Reza Moheimani, professor of mechanical engineering at UT Dallas.  “We have managed to miniaturize all of the electromechanical components down onto a single small chip.”

Moheimani and his colleagues describe their prototype device in this month’s issue of the IEEE Journal of Microelectromechanical Systems.

A conventional AFM consists of a tiny cantilever, or arm, that has a sharp tip attached to one end. As the apparatus scans back and forth across the surface of a sample, or the sample moves under it, the interactive forces between the sample and the tip cause the cantilever to move up and down as the tip follows the contours of the surface. Those movements are then translated into an image. (credit: CC/Opensource Handbook of Nanoscience and Nanotechnology)

An atomic force microscope (AFM) is a scientific tool that is used to create detailed three-dimensional images of the surfaces of materials, down to the nanometer scale — roughly on the scale of individual molecules.

“An AFM is a microscope that ‘sees’ a surface kind of the way a visually impaired person might, by touching. You can get a resolution that is well beyond what an optical microscope can achieve,” explained Moheimani, who holds the James Von Ehr Distinguished Chair in Science and Technology in the Erik Jonsson School of Engineering and Computer Science.
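The scanning principle described above can be mimicked in a few lines: raster the tip over the sample and record the surface height at each point. This is a toy model of the imaging idea, not instrument-control code:

```python
# Toy raster-scan model of AFM imaging: the "tip" simply follows the sample
# height at each (x, y) position, and the recorded heights form the image.
def raster_scan(surface_height, nx, ny):
    """surface_height(x, y) -> height; returns an ny x nx grid of heights."""
    image = []
    for y in range(ny):
        row = [surface_height(x, y) for x in range(nx)]
        image.append(row)
    return image

# A sample with a single square "feature" two units tall.
def sample(x, y):
    return 2.0 if 2 <= x < 4 and 2 <= y < 4 else 0.0

image = raster_scan(sample, nx=6, ny=6)
print(image[3][3], image[0][0])   # height on the feature vs. off it
```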

The MEMS version

The UT Dallas team created its prototype on-chip AFM using a microelectromechanical systems (MEMS) approach.

“Classic examples of MEMS technology are the accelerometers and gyroscopes found in smartphones,” said Anthony Fowler, PhD, a research scientist in Moheimani’s Laboratory for Dynamics and Control of Nanosystems and one of the article’s co-authors. “These used to be big, expensive, mechanical devices, but using MEMS technology, accelerometers have shrunk down onto a single chip, which can be manufactured for just a few dollars apiece.”

The MEMS-based AFM is about 1 square centimeter in size, or a little smaller than a dime. It is attached to a small printed circuit board that contains circuitry, sensors, and other miniaturized components that control the movement and other aspects of the device.

Conventional AFM (credit: Asylum Research, Inc.)

Because conventional AFMs require lasers and other large components to operate, their use can be limited. They’re also expensive. “An educational version can cost about $30,000 or $40,000, and a laboratory-level AFM can run $500,000 or more,” Moheimani said. “Our MEMS approach to AFM design has the potential to significantly reduce the complexity and cost of the instrument.

“One of the attractive aspects about MEMS is that you can mass-produce them, building hundreds or thousands of them in one shot, so the price of each chip would only be a few dollars. As a result, you might be able to offer the whole miniature AFM system for a few thousand dollars.”

Semiconductor-industry uses

A reduced size and price tag also could expand the AFMs’ utility beyond current scientific applications.

“For example, the semiconductor industry might benefit from these small devices, in particular companies that manufacture the silicon wafers from which computer chips are made,” Moheimani said. “With our technology, you might have an array of AFMs to characterize the wafer’s surface to find micro-faults before the product is shipped out.”

The lab prototype is a first-generation device, Moheimani said, and the group is already working on ways to improve and streamline the fabrication of the device.

Moheimani’s research has been funded by UT Dallas startup funds, the Von Ehr Distinguished Chair, and the Defense Advanced Research Projects Agency.


Abstract of On-Chip Dynamic Mode Atomic Force Microscopy: A Silicon-on-Insulator MEMS Approach

The atomic force microscope (AFM) is an invaluable scientific tool; however, its conventional implementation as a relatively costly macroscale system is a barrier to its more widespread use. A microelectromechanical systems (MEMS) approach to AFM design has the potential to significantly reduce the cost and complexity of the AFM, expanding its utility beyond current applications. This paper presents an on-chip AFM based on a silicon-on-insulator MEMS fabrication process. The device features integrated xy electrostatic actuators and electrothermal sensors as well as an AlN piezoelectric layer for out-of-plane actuation and integrated deflection sensing of a microcantilever. The three-degree-of-freedom design allows the probe scanner to obtain topographic tapping-mode AFM images with an imaging range of up to 8 μm × 8 μm in closed loop.

A biocompatible stretchable material for brain implants and ‘electronic skin’

A printed electrode pattern of a new polymer being stretched to several times its original length (top), and a transparent, highly stretchable “electronic skin” patch (bottom) made from the same material, forming an intimate interface with the human skin to potentially measure various biomarkers (credit: Bao Lab)

Stanford chemical engineers have developed a soft, flexible plastic electrode that stretches like rubber but carries electricity like wires — ideal for brain interfaces and other implantable electronics, they report in an open-access March 10 paper in Science Advances.

Developed by Zhenan Bao, a professor of chemical engineering, and his team, the material is still a laboratory prototype, but the team hopes to develop it as part of their long-term focus on creating flexible materials that interface with the human body.

Flexible interface

“One thing about the human brain that a lot of people don’t know is that it changes volume throughout the day,” says postdoctoral research fellow Yue Wang, the first author on the paper. “It swells and de-swells.” The current generation of electronic implants can’t stretch and contract with the brain, making it complicated to maintain a good connection.

Illustration showing incorporation of ionic liquid-assisted stretchability and electrical conductivity (STEC) enhancers to convert conventional PEDOT:PSS film (top) to stretchable film (bottom). (credit: Wang et al., Sci. Adv.)

To create this flexible electrode, the researchers began with PEDOT:PSS, a plastic that offers high electrical conductivity and biocompatibility (it can be safely brought into contact with the human body) but is brittle. So they added a “STEC” (stretchability and electrical conductivity) enhancer, a molecule similar to the kinds of additives used to thicken soups in industrial kitchens.

This additive transformed the plastic’s chunky and brittle molecular structure into a fishnet pattern with holes in the strands, allowing the material to stretch and deform. The resulting plastic remained highly conductive even when stretched to 800 percent of its original length.
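Even when conductivity holds up under strain, geometry matters: stretching lengthens and thins a conductor, so its resistance rises. A rough sketch using the conductivities reported for this polymer (3,100 S/cm unstrained, 4,100 S/cm at 100% strain), under the idealizing assumption of an incompressible film:

```python
# Resistance of a stretched conductor: R = L / (sigma * A).
# For an incompressible film stretched by factor lam, L -> lam * L0 and
# A -> A0 / lam, so R / R0 = lam**2 * sigma0 / sigma(lam).
def resistance_ratio(lam: float, sigma0: float, sigma_strained: float) -> float:
    """Resistance relative to the unstrained state (idealized geometry)."""
    return lam ** 2 * sigma0 / sigma_strained

# 100% strain (lam = 2) with the reported conductivities: resistance still
# roughly triples, even though conductivity actually improved under strain.
print(round(resistance_ratio(2.0, 3100, 4100), 2))
```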

Scientists at SLAC National Accelerator Laboratory, UCLA, the Materials Science Institute of Barcelona, and Samsung Advanced Institute of Technology were also involved in the research, which was funded by Samsung Electronics and the Air Force Office of Scientific Research.


Stanford University School of Engineering | Stretchable electrodes pave way for flexible electronics


Abstract of A highly stretchable, transparent, and conductive polymer

Previous breakthroughs in stretchable electronics stem from strain engineering and nanocomposite approaches. Routes toward intrinsically stretchable molecular materials remain scarce but, if successful, will enable simpler fabrication processes, such as direct printing and coating, mechanically robust devices, and more intimate contact with objects. We report a highly stretchable conducting polymer, realized with a range of enhancers that serve dual functions to change morphology and as conductivity-enhancing dopants in poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS). The polymer films exhibit conductivities comparable to the best reported values for PEDOT:PSS, with higher than 3100 S/cm under 0% strain and higher than 4100 S/cm under 100% strain, among the highest for reported stretchable conductors. It is highly durable under cyclic loading, with the conductivity maintained at 3600 S/cm even after 1000 cycles to 100% strain. The conductivity remained above 100 S/cm under 600% strain, with a fracture strain as high as 800%, which is superior to even the best silver nanowire– or carbon nanotube–based stretchable conductor films. The combination of excellent electrical and mechanical properties allowed it to serve as interconnects for field-effect transistor arrays with a device density that is five times higher than typical lithographically patterned wavy interconnects.

IBM-led international research team stores one bit of data on a single atom

Scanning tunneling microscope image of a single atom of holmium, an element that researchers used as a magnet to store one bit of data. (credit: IBM Research — Almaden)

An international team led by IBM has created the world’s smallest magnet, using a single atom of rare-earth element holmium, and stored one bit of data on it over several hours.

The achievement represents the ultimate limit of the classical approach to high-density magnetic storage media, according to a paper published March 8 in the journal Nature.

Currently, hard disk drives use about 100,000 atoms to store a single bit. The ability to read and write one bit on one atom may lead to significantly smaller and denser storage devices in the future. (The researchers are currently working in an ultrahigh vacuum at 1.2 K, a temperature near absolute zero.)

Using a scanning tunneling microscope* (STM), the researchers also showed that a device using two magnetic atoms could be written and read independently, even when they were separated by just one nanometer.

IBM microscope mechanic Bruce Melior at scanning tunneling microscope, used to view and manipulate atoms (credit: IBM Research — Almaden)

The researchers believe this tight spacing could eventually yield magnetic storage that is 1,000 times denser than today’s hard disk drives and solid-state memory chips, so they could one day store 1,000 times more information in the same space. That means data centers, computers, and personal devices could be radically smaller and more powerful.
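The 1,000× estimate is roughly consistent with the one-nanometer bit spacing demonstrated: one bit per square nanometer corresponds to hundreds of terabits per square inch, versus roughly one terabit per square inch for current drives (an order-of-magnitude sketch, with today's density taken as an assumption):

```python
# Order-of-magnitude areal-density check for one bit per square nanometer.
NM_PER_INCH = 2.54e7                  # nanometers in one inch
bits_per_sq_inch = NM_PER_INCH ** 2   # one bit per nm^2
terabits = bits_per_sq_inch / 1e12
print(round(terabits))                # ~645 Tbit per square inch
# Versus roughly 1 Tbit/in^2 for today's hard drives (assumed here): a
# few-hundred-fold gain, the same order of magnitude as the 1,000x estimate.
```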

Single-atom write and read operations. (Left) To write the data onto the holmium atom, a pulse of electric current from the magnetized tip of a scanning tunneling microscope (STM) is used to flip the orientation of the atom’s field between a 0 or 1. The STM is also used to read it. (Right) A second read-out method used an iron atom as a magnetic sensor, which also allowed the team to read out multiple bits at the same time, making it more practical than an STM. (credit: IBM Research and Fabian D. Natterer et al./Nature)

Researchers at EPFL in Switzerland, University of Chinese Academy of Sciences in Hong Kong, University of Göttingen in Germany, Universität Zürich in Switzerland, Institute of Basic Science, Center for Quantum Nanoscience in South Korea, and Ewha Womans University in South Korea were also on the research team.

* The STM was developed in 1981, earning its inventors, Gerd Binnig and Heinrich Rohrer (at IBM Zürich), the Nobel Prize in Physics in 1986. IBM is planning future scanning tunneling microscope studies to investigate the potential of performing quantum information processing using individual magnetic atoms. Earlier this week, IBM announced it will be building the world’s first commercial quantum computers for business and science.


IBM Research | IBM Research Created the World’s Smallest Magnet — an Atom


How to control robots with your mind

The robot is informed that its initial motion was incorrect based upon real-time decoding of the observer’s EEG signals, and it corrects its selection accordingly to properly sort an object (credit: Andres F. Salazar-Gomez et al./MIT, Boston University)

Two research teams are developing new ways to communicate with robots, hoping to one day shape them into the kind of productive workers featured in the AMC TV show HUMANS (now in its second season).

Programming robots to function in a real-world environment is normally a complex process. But now a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University is creating a system that lets people correct robot mistakes instantly by simply thinking.

In the initial experiment, the system uses data from an electroencephalography (EEG) helmet to correct robot performance on an object-sorting task. Novel machine-learning algorithms enable the system to classify brain waves within 10 to 30 milliseconds.
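As a rough illustration of what such a classifier does (this is a generic sketch, not the MIT team's algorithm; the filter, features, weights, and signal values are all invented for the example), an ErrP detector can be thought of as a fast linear discriminant over a short EEG window recorded after the robot acts:

```python
import numpy as np

# Illustrative sketch only: detect an "error-related potential" (ErrP) in a
# short EEG window with a linear discriminant. In a real system the weights
# would be learned from labeled training trials, not built from a template.

def extract_features(window, n_bins=8):
    """Collapse a 1-D EEG window (volts) into coarse time-bin means."""
    return np.array([b.mean() for b in np.array_split(window, n_bins)])

def classify_errp(window, weights, bias=0.0):
    """Linear discriminant: True if the window looks like an ErrP."""
    return bool(extract_features(window) @ weights + bias > 0)

fs = 200                                   # sample rate, Hz (assumed)
t = np.arange(0, 0.8, 1 / fs)              # 800-ms window after the robot acts
# Fake an ErrP as a deflection peaking ~250 ms after the observed action
errp = 5e-6 * np.exp(-((t - 0.25) ** 2) / 0.002)
weights = extract_features(errp)           # template weights (illustrative)

print(classify_errp(errp, weights))                           # True
print(classify_errp(np.zeros_like(t), weights, bias=-1e-12))  # False
```

Because the features are just a handful of bin averages and the classifier is a single dot product, a decision like this takes microseconds, which is why millisecond-scale classification budgets are plausible.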

The system includes a main experiment controller, a Baxter robot, and an EEG acquisition and classification system. The goal is to make the robot pick up the cup that the experimenter is thinking about. An Arduino microcontroller board (bottom) relays messages between the EEG system and robot controller. A mechanical contact switch (yellow) detects robot arm motion initiation. (credit: Andres F. Salazar-Gomez et al./MIT, Boston University)

While the system currently handles relatively simple binary-choice activities, we may be able one day to control robots in much more intuitive ways. “Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button, or even say a word,” says CSAIL Director Daniela Rus. “A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars, and other technologies we haven’t even invented yet.”

The team used a humanoid robot named “Baxter” from Rethink Robotics, the company led by former CSAIL director and iRobot co-founder Rodney Brooks.


MITCSAIL | Brain-controlled Robots

Intuitive human-robot interaction

The system detects brain signals called “error-related potentials” (generated whenever our brains notice a mistake) to determine if the human agrees with a robot’s decision.

“As you watch the robot, all you have to do is mentally agree or disagree with what it is doing,” says Rus. “You don’t have to train yourself to think in a certain way — the machine adapts to you, and not the other way around.” Or if the robot’s not sure about its decision, it can trigger a human response to get a more accurate answer.

The team believes that future systems could extend to more complex multiple-choice tasks. The system could even be useful for people who can’t communicate verbally: the robot could be controlled via a series of discrete binary choices, similar to how paralyzed locked-in patients spell out words with their minds.

The project was funded in part by Boeing and the National Science Foundation. An open-access paper will be presented at the IEEE International Conference on Robotics and Automation (ICRA) in Singapore this May.

Here, robot, fetch!

Robot asks questions, and based on a person’s language and gesture, infers what item to deliver. (credit: David Whitney/Brown University)

But what if the robot is still confused? Researchers in Brown University’s Humans to Robots Lab have an app for that.

“Fetching objects is an important task that we want collaborative robots to be able to do,” said computer science professor Stefanie Tellex. “But it’s easy for the robot to make errors, either by misunderstanding what we want, or by being in situations where commands are ambiguous. So what we wanted to do here was come up with a way for the robot to ask a question when it’s not sure.”

Tellex’s lab previously developed an algorithm that enables robots to receive speech commands as well as information from human gestures. But it ran into problems when there were lots of very similar objects in close proximity to each other. For example, on the table above, simply asking for “a marker” isn’t specific enough, and it might not be clear which one a person is pointing to if a number of markers are clustered close together.

“What we want in these situations is for the robot to be able to signal that it’s confused and ask a question rather than just fetching the wrong object,” Tellex explained.

The new algorithm does just that, enabling the robot to quantify how certain it is that it knows what a user wants. When its certainty is high, the robot simply hands over the object as requested. When it’s not so certain, the robot makes its best guess about what the person wants, then asks for confirmation by hovering its gripper over the object and asking, “This one?”
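The decision rule just described can be sketched in a few lines (this is a simplified illustration, not Brown's actual POMDP solver; the function name, threshold, and probabilities are invented):

```python
# Illustrative sketch of "act when confident, ask when not": the robot keeps
# a posterior probability over candidate items and either delivers its best
# guess outright or hovers over it and asks "This one?"

def fetch_policy(item_probs, ask_threshold=0.75):
    """item_probs: dict mapping item -> probability the user wants it.
    Returns ('deliver', item) or ('ask', item) for the current best guess."""
    best = max(item_probs, key=item_probs.get)
    if item_probs[best] >= ask_threshold:
        return ("deliver", best)
    return ("ask", best)  # hover the gripper over the best guess and ask

print(fetch_policy({"red marker": 0.90, "blue marker": 0.10}))  # ('deliver', 'red marker')
print(fetch_policy({"red marker": 0.55, "blue marker": 0.45}))  # ('ask', 'red marker')
```

The real system chooses when to ask by approximately solving a POMDP rather than applying a fixed threshold, but the threshold version captures the intuition: questions are costly, so they are only worth asking when belief is genuinely split.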


David Whitney | Reducing Errors in Object-Fetching Interactions through Social Feedback

One of the important features of the system is that the robot doesn’t ask questions with every interaction; it asks intelligently.

And even though the system asks only a very simple question, it’s able to make important inferences based on the answer. For example, say a user asks for a marker and there are two markers on a table. If the user tells the robot that its first guess was wrong, the algorithm deduces that the other marker must be the one that the user wants, and will hand that one over without asking another question. Those kinds of inferences, known as “implicatures,” make the algorithm more efficient.
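The two-marker example above amounts to eliminating the rejected candidate and renormalizing the robot's belief; here is a minimal sketch of that implicature update (illustrative function and item names, not the paper's implementation):

```python
# Illustrative implicature update: a "no" to the robot's confirmation
# question rules that candidate out, and the remaining probability mass is
# renormalized over the other items.

def update_after_no(item_probs, rejected):
    """Remove the rejected item and renormalize the belief distribution."""
    remaining = {k: v for k, v in item_probs.items() if k != rejected}
    total = sum(remaining.values())
    return {k: v / total for k, v in remaining.items()}

belief = {"marker A": 0.55, "marker B": 0.45}
belief = update_after_no(belief, "marker A")
print(belief)  # {'marker B': 1.0} -- hand over marker B, no second question
```

With only two candidates, a single "no" collapses the belief to certainty, which is exactly why the robot can hand over the other marker without asking again.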

In future work, Tellex and her team would like to combine the algorithm with more robust speech recognition systems, which might further increase the system’s accuracy and speed. “Currently we do not consider the parse of the human’s speech. We would like the model to understand prepositional phrases (‘on the left,’ ‘nearest to me’). This would allow the robot to understand how items are spatially related to other items through language.”

Ultimately, Tellex hopes, systems like this will help robots become useful collaborators both at home and at work.

An open-access paper on the DARPA-funded research will also be presented at the International Conference on Robotics and Automation.


Abstract of Correcting Robot Mistakes in Real Time Using EEG Signals

Communication with a robot using brain activity from a human collaborator could provide a direct and fast feedback loop that is easy and natural for the human, thereby enabling a wide variety of intuitive interaction tasks. This paper explores the application of EEG-measured error-related potentials (ErrPs) to closed-loop robotic control. ErrP signals are particularly useful for robotics tasks because they are naturally occurring within the brain in response to an unexpected error. We decode ErrP signals from a human operator in real time to control a Rethink Robotics Baxter robot during a binary object selection task. We also show that utilizing a secondary interactive error-related potential signal generated during this closed-loop robot task can greatly improve classification performance, suggesting new ways in which robots can acquire human feedback. The design and implementation of the complete system is described, and results are presented for real-time closed-loop and open-loop experiments as well as offline analysis of both primary and secondary ErrP signals. These experiments are performed using general population subjects that have not been trained or screened. This work thereby demonstrates the potential for EEG-based feedback methods to facilitate seamless robotic control, and moves closer towards the goal of real-time intuitive interaction.


Abstract of Reducing Errors in Object-Fetching Interactions through Social Feedback

Fetching items is an important problem for a social robot. It requires a robot to interpret a person’s language and gesture and use these noisy observations to infer what item to deliver. If the robot could ask questions, it would help the robot be faster and more accurate in its task. Existing approaches either do not ask questions, or rely on fixed question-asking policies. To address this problem, we propose a model that makes assumptions about cooperation between agents to perform richer signal extraction from observations. This work defines a mathematical framework for an item-fetching domain that allows a robot to increase the speed and accuracy of its ability to interpret a person’s requests by reasoning about its own uncertainty as well as processing implicit information (implicatures). We formalize the item-delivery domain as a Partially Observable Markov Decision Process (POMDP), and approximately solve this POMDP in real time. Our model improves speed and accuracy of fetching tasks by asking relevant clarifying questions only when necessary. To measure our model’s improvements, we conducted a real world user study with 16 participants. Our method achieved greater accuracy and a faster interaction time compared to state-of-the-art baselines. Our model is 2.17 seconds faster (25% faster) than a state-of-the-art baseline, while being 2.1% more accurate.

Groundbreaking technology rewarms large-scale animal tissues preserved at low temperatures

Inductive radio-frequency heating of magnetic nanoparticles embedded in tissue (red material in container) preserved at very low temperatures restored the tissue without damage (credit: Navid Manuchehrabadi et al./Science Translational Medicine)

A research team led by the University of Minnesota has discovered a way to rewarm large-scale animal heart valves and blood vessels preserved at very low (cryogenic) temperatures without damaging the tissue. The discovery could one day save millions of human lives by enabling banks of cryogenically preserved tissues and organs for transplantation.

The research was published March 1 in an open-access paper in Science Translational Medicine.

Long-term preservation methods like vitrification cool biological samples to an ice-free glassy state, using very low temperatures between -160 and -196 degrees Celsius, but tissues larger than 1 milliliter (0.03 fluid ounce) often suffer major damage during the rewarming process, making them unusable for transplantation.

In the new research, the team rewarmed up to 50 milliliters (1.7 fluid ounces) of tissue and solution at more than 130°C per minute without damage.

Radiofrequency inductive heating of iron nanoparticles

To achieve that, they developed a revolutionary new method using silica-coated iron-oxide nanoparticles dispersed throughout a cryoprotectant solution around the tissue. The nanoparticles act as tiny heaters around the tissue when they are activated using noninvasive radiofrequency inductive energy, rapidly and uniformly warming the tissue.
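A back-of-envelope calculation shows why the nanoparticle heaters must deliver so much power (the density and specific-heat values below are rough, water-like assumptions, not figures from the paper): the warming rate of a uniformly heated volume is dT/dt = Q / (ρ·c_p), where Q is the volumetric heating power.

```python
# Rough estimate (illustrative values): volumetric heating power Q needed
# to rewarm a nanoparticle-loaded cryoprotectant at the reported rate,
# from dT/dt = Q / (rho * c_p).

rho = 1.0e3       # density of the solution, kg/m^3 (assumed, water-like)
c_p = 3.5e3       # specific heat, J/(kg*K) (assumed, water-like)
rate = 130 / 60   # target warming rate in K/s (130 degrees C per minute)

q_needed = rho * c_p * rate                     # W/m^3
print(f"{q_needed / 1e6:.1f} W per mL needed")  # 7.6 W per mL needed
```

Several watts per milliliter, delivered uniformly throughout a 50-mL sample, is far beyond what external convection can manage, which is the case for distributing the heat sources (the nanoparticles) inside the sample and driving them with a radiofrequency field.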

This transmission electron microscopy (TEM) image shows the iron oxide nanoparticles (coated in mesoporous silica) that are used in the tissue warming process. (credit: Haynes research group/University of Minnesota)

The results showed that none of the tissues displayed signs of harm — unlike vitrified control samples rewarmed slowly over ice or by convection warming. The researchers were also able to successfully wash away the iron oxide nanoparticles from the sample following the warming.

“This is the first time that anyone has been able to scale up to a larger biological system and demonstrate successful, fast, and uniform warming of hundreds of degrees Celsius per minute of preserved tissue without damaging the tissue,” said University of Minnesota mechanical engineering and biomedical engineering professor John Bischof, the senior author of the study.

Organs next

Bischof said there is a strong possibility they could scale up to even larger systems, like organs. The researchers plan to start with rodent organs (such as rat and rabbit) and then scale up to pig organs and then, hopefully, human organs. The technology might also be applied beyond cryogenics, including delivering lethal pulses of heat to cancer cells.

The researchers’ goal is to eliminate transplant waiting lists. Currently, many hearts and lungs donated for transplantation must be discarded because these organs cannot be kept on ice for longer than a matter of hours, according to the researchers.*

It will be interesting to see if the technology can one day be extended to cryonics.

The research was funded by the National Science Foundation (NSF), National Institutes of Health (NIH), U.S. Army Medical Research and Materiel Command, Minnesota Futures Grant from the University of Minnesota, and the University of Minnesota Carl and Janet Kuhrmeyer Chair in Mechanical Engineering. Researchers at Carnegie Mellon University, Clemson University and Tissue Testing Technologies LLC were also involved in the study.

* “A major limitation of transplantation is the ischemic injury that tissue and organs sustain during the time between recovery from the donor and implantation in the recipient. The maximum tolerable organ preservation for transplantation by hypothermic storage is typically 4 hours for heart and lungs; 8 to 12 hours for liver, intestine, and pancreas; and up to 36 hours for kidney transplants. In many cases, such limits actually prevent viable tissue or organs from reaching recipients. For instance, more than 60% of donor hearts and lungs are not used or transplanted partly because their maximum hypothermic preservation times have been exceeded. Further, if only half of these discarded organs were transplanted, then it has been estimated that wait lists for these organs could be extinguished within 2 to 3 years.” — Navid Manuchehrabadi et al./Science Translational Medicine


Abstract of Improved tissue cryopreservation using inductive heating of magnetic nanoparticles

Vitrification, a kinetic process of liquid solidification into glass, poses many potential benefits for tissue cryopreservation including indefinite storage, banking, and facilitation of tissue matching for transplantation. To date, however, successful rewarming of tissues vitrified in VS55, a cryoprotectant solution, can only be achieved by convective warming of small volumes on the order of 1 ml. Successful rewarming requires both uniform and fast rates to reduce thermal mechanical stress and cracks, and to prevent rewarming phase crystallization. We present a scalable nanowarming technology for 1- to 80-ml samples using radiofrequency-excited mesoporous silica–coated iron oxide nanoparticles in VS55. Advanced imaging including sweep imaging with Fourier transform and microcomputed tomography was used to verify loading and unloading of VS55 and nanoparticles and successful vitrification of porcine arteries. Nanowarming was then used to demonstrate uniform and rapid rewarming at >130°C/min in both physical (1 to 80 ml) and biological systems including human dermal fibroblast cells, porcine arteries and porcine aortic heart valve leaflet tissues (1 to 50 ml). Nanowarming yielded viability that matched control and/or exceeded gold standard convective warming in 1- to 50-ml systems, and improved viability compared to slow-warmed (crystallized) samples. Last, biomechanical testing displayed no significant biomechanical property changes in blood vessel length or elastic modulus after nanowarming compared to untreated fresh control porcine arteries. In aggregate, these results demonstrate new physical and biological evidence that nanowarming can improve the outcome of vitrified cryogenic storage of tissues in larger sample volumes.

Brain-computer interface advance allows paralyzed people to type almost as fast as some smartphone users

Typing with your mind. You are paralyzed. But now, tiny electrodes have been surgically implanted in your brain to record signals from your motor cortex, the brain region controlling muscle movement. As you think of mousing over to a letter (or clicking to choose it), those electrical brain signals are transmitted via a cable to a computer (replacing your spinal cord and muscles). There, advanced algorithms decode the complex electrical brain signals, converting them instantly into screen actions. (credit: Chethan Pandarinath et al./eLife)

Stanford University researchers have developed a brain-computer interface (BCI) system that can enable people with paralysis* to type (using an on-screen cursor) about three times faster, and more accurately, than previously reported.

Simply by imagining their own hand movements, one participant was able to type 39 correct characters per minute (about eight words per minute); the other two participants averaged 6.3 and 2.7 words per minute, respectively — all without auto-complete assistance (which could make typing much faster).

Those are communication rates that people with arm and hand paralysis would also find useful, the researchers suggest. “We’re approaching the speed at which you can type text on your cellphone,” said Krishna Shenoy, PhD, professor of electrical engineering, a co-senior author of the study, which was published in an open-access paper online Feb. 21 in eLife.

BrainGate and beyond

The three study participants used a brain-computer interface called the “BrainGate Neural Interface System.” On KurzweilAI, we first discussed BrainGate in 2011, followed by a 2012 clinical trial that allowed a paralyzed patient to control a robot.

BrainGate in 2012 (credit: Brown University)

The new research, led by Stanford, takes the BrainGate technology much further.** Participants can now move a cursor (by just thinking about a hand movement) on a computer screen that displays the letters of the alphabet, and they can “point and click” on letters, computer-mouse-style, to type letters and sentences.

The new BCI uses a tiny silicon chip, just over one-sixth of an inch square, with 100 electrodes that penetrate the brain to about the thickness of a quarter and tap into the electrical activity of individual nerve cells in the motor cortex.

As the participant thinks of a specific hand-to-mouse movement (pointing at or clicking on a letter), neural electrical activity is recorded using 96-channel silicon microelectrode arrays implanted in the hand area of the motor cortex. These signals are then filtered to extract multiunit spiking activity and high-frequency field potentials, then decoded (using two algorithms) to provide “point-and-click” control of a computer cursor.
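In spirit, the final decoding step maps a vector of per-channel firing rates to a cursor velocity plus a click decision. The sketch below is a deliberately simplified linear readout with made-up weights, not the study's actual two decoding algorithms:

```python
import numpy as np

# Illustrative sketch (not the study's decoders): a linear readout from
# 96 channels of motor-cortex firing rates to 2-D cursor velocity, plus a
# thresholded "click" signal.

def decode_cursor(firing_rates, W, click_w, click_threshold=1.0):
    """firing_rates: (96,) spikes/s; W: (2, 96) velocity weights;
    click_w: (96,) click weights. Returns (vx, vy, clicked)."""
    vx, vy = W @ firing_rates                       # cursor velocity
    clicked = bool(click_w @ firing_rates > click_threshold)
    return vx, vy, clicked

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(2, 96))        # stand-ins for trained weights
click_w = rng.normal(scale=0.01, size=96)
rates = rng.poisson(20, size=96).astype(float)  # simulated firing rates

vx, vy, clicked = decode_cursor(rates, W, click_w)
print(vx, vy, clicked)
```

In the real system the weight matrices are calibrated from the participant's own recorded activity while they imagine movements, and the decode runs continuously so the cursor tracks intention in real time.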

What’s next

The team next plans to adapt the system so that brain-computer interfaces can control commercial computers, phones, and tablets — perhaps extending out to the internet.

Beyond that, Shenoy predicted that a self-calibrating, fully implanted wireless BCI system with no required caregiver assistance and no “cosmetic impact” would be available within five to 10 years (“closer to five”).

Perhaps a future wireless, noninvasive version could let anyone simply think to select letters, words, ideas, and images — replacing the mouse and finger touch — along the lines of Elon Musk’s neural lace concept?

* Millions of people with paralysis reside in the U.S.

** The study’s results are the culmination of the long-running multi-institutional BrainGate consortium, which includes scientists at Massachusetts General Hospital, Brown University, Case Western Reserve University, and the VA Rehabilitation Research and Development Center for Neurorestoration and Neurotechnology in Providence, Rhode Island. The study was funded by the National Institutes of Health, the Stanford Office of Postdoctoral Affairs, the Craig H. Neilsen Foundation, the Stanford Medical Scientist Training Program, Stanford BioX-NeuroVentures, the Stanford Institute for Neuro-Innovation and Translational Neuroscience, the Stanford Neuroscience Institute, Larry and Pamela Garlick, Samuel and Betsy Reeves, the Howard Hughes Medical Institute, the U.S. Department of Veterans Affairs, the MGH-Dean Institute for Integrated Research on Atrial Fibrillation and Stroke and Massachusetts General Hospital.


Stanford | Stanford researchers develop brain-controlled typing for people with paralysis


Abstract of High performance communication by people with paralysis using an intracortical brain-computer interface

Brain-computer interfaces (BCIs) have the potential to restore communication for people with tetraplegia and anarthria by translating neural activity into control signals for assistive communication devices. While previous pre-clinical and clinical studies have demonstrated promising proofs-of-concept (Serruya et al., 2002; Simeral et al., 2011; Bacher et al., 2015; Nuyujukian et al., 2015; Aflalo et al., 2015; Gilja et al., 2015; Jarosiewicz et al., 2015; Wolpaw et al., 1998; Hwang et al., 2012; Spüler et al., 2012; Leuthardt et al., 2004; Taylor et al., 2002; Schalk et al., 2008; Moran, 2010; Brunner et al., 2011; Wang et al., 2013; Townsend and Platsko, 2016; Vansteensel et al., 2016; Nuyujukian et al., 2016; Carmena et al., 2003; Musallam et al., 2004; Santhanam et al., 2006; Hochberg et al., 2006; Ganguly et al., 2011; O’Doherty et al., 2011; Gilja et al., 2012), the performance of human clinical BCI systems is not yet high enough to support widespread adoption by people with physical limitations of speech. Here we report a high-performance intracortical BCI (iBCI) for communication, which was tested by three clinical trial participants with paralysis. The system leveraged advances in decoder design developed in prior pre-clinical and clinical studies (Gilja et al., 2015; Kao et al., 2016; Gilja et al., 2012). For all three participants, performance exceeded previous iBCIs (Bacher et al., 2015; Jarosiewicz et al., 2015) as measured by typing rate (by a factor of 1.4–4.2) and information throughput (by a factor of 2.2–4.0). This high level of performance demonstrates the potential utility of iBCIs as powerful assistive communication devices for people with limited motor function.