New transistor design enables flexible, high-performance wearable/mobile electronics

Advanced flexible transistor developed at UW-Madison (photo credit: Jung-Hun Seo/University at Buffalo, State University of New York)

A team of University of Wisconsin–Madison (UW–Madison) engineers has created “the most functional flexible transistor in the world,” along with a fast, simple, inexpensive fabrication process that’s easily scalable to the commercial level.

The development promises to allow manufacturers to add advanced, smart-wireless capabilities to wearable and mobile devices that curve, bend, stretch and move.*

The UW–Madison group’s advance is based on a BiCMOS (bipolar complementary metal oxide semiconductor) thin-film transistor, combining speed, high current, and low power dissipation (heat and wasted energy) on just one surface (a silicon nanomembrane, or “Si NM”).**

BiCMOS transistors are the chip of choice for “mixed-signal” devices (combining analog and digital capabilities), which include many of today’s portable electronic devices such as cellphones. “The [BiCMOS] industry standard is very good,” says Zhenqiang (Jack) Ma, the Lynn H. Matthias Professor and Vilas Distinguished Achievement Professor in electrical and computer engineering at UW–Madison. “Now we can do the same things with our transistor — but it can bend.”

The research was described in the inaugural issue of Nature Publishing Group’s open-access journal npj Flexible Electronics, published Sept. 27, 2017.***

Making traditional BiCMOS flexible electronics is difficult, in part because the process takes several months and requires a multitude of delicate, high-temperature steps. Even a minor variation in temperature at any point could ruin all of the previous steps.

Ma and his collaborators fabricated their flexible electronics on a single-crystal silicon nanomembrane on a single bendable piece of plastic. The secret to their success is their unique process, which eliminates many steps and slashes both the time and cost of fabricating the transistors.

“In industry, they need to finish these in three months,” he says. “We finished it in a week.”

He says his group’s much simpler process, which requires only a single high-temperature step, can scale to industry-level production right away.

“The key is that parameters are important,” he says. “One high-temperature step fixes everything — like glue. Now, we have more powerful mixed-signal tools. Basically, the idea is for [the flexible electronics platform] to expand with this.”

* Some companies (such as Samsung) have developed flexible displays, but not other flexible electronic components in their devices, Ma explained to KurzweilAI.

** “Flexible electronics have mainly focused on their form factors such as bendability, lightweight, and large area with low-cost processability…. To date, all the [silicon, or Si]-based thin-film transistors (TFTs) have been realized with CMOS technology because of their simple structure and process. However, as more functions are required in future flexible electronic applications (i.e., advanced bioelectronic systems or flexible wireless power applications), an integration of functional devices in one flexible substrate is needed to handle complex signals and/or various power levels.” — Jung-Hun Seo et al./npj Flexible Electronics. The n-channel, p-channel metal-oxide semiconductor field-effect transistors (N-MOSFETs & P-MOSFETs), and NPN bipolar junction transistors (BJTs) were realized together on a 340-nm thick Si NM layer.

*** Co-authors included researchers at the University at Buffalo, State University of New York, and the University of Texas at Arlington. This work was supported by the Air Force Office Of Scientific Research.


Abstract of High-performance flexible BiCMOS electronics based on single-crystal Si nanomembrane

In this work, we have demonstrated for the first time integrated flexible bipolar-complementary metal-oxide-semiconductor (BiCMOS) thin-film transistors (TFTs) based on a transferable single crystalline Si nanomembrane (Si NM) on a single piece of bendable plastic substrate. The n-channel, p-channel metal-oxide semiconductor field-effect transistors (N-MOSFETs & P-MOSFETs), and NPN bipolar junction transistors (BJTs) were realized together on a 340-nm thick Si NM layer with minimized processing complexity at low cost for advanced flexible electronic applications. The fabrication process was simplified by thoughtfully arranging the sequence of necessary ion implantation steps with carefully selected energies, doses and anneal conditions, and by wisely combining some costly processing steps that are otherwise separately needed for all three types of transistors. All types of TFTs demonstrated excellent DC and radio-frequency (RF) characteristics and exhibited stable transconductance and current gain under bending conditions. Overall, Si NM-based flexible BiCMOS TFTs offer great promises for high-performance and multi-functional future flexible electronics applications and is expected to provide a much larger and more versatile platform to address a broader range of applications. Moreover, the flexible BiCMOS process proposed and demonstrated here is compatible with commercial microfabrication technology, making its adaptation to future commercial use straightforward.

Ray Kurzweil on The Age of Spiritual Machines: A 1999 TV interview

Dear readers,

For your interest, this 1999 interview with me, which I recently re-watched, describes some interesting predictions that are still coming true. It’s intriguing to look back over the past 18 years and see what actually unfolded. The video is a compelling glimpse into a future we’re now living.

Enjoy!

— Ray


Dear readers,

This interview by Harold Hudson Channer was recorded on Jan. 14, 1999, and aired Feb. 1, 1999, on a Manhattan Neighborhood Network cable-access show, Conversations with Harold Hudson Channer.

In the discussion, Ray explains many of the ahead-of-their-time ideas presented in The Age of Spiritual Machines*, such as the “law of accelerating returns” (how technological change is exponential, contrary to the common-sense “intuitive linear” view); the forthcoming revolutionary impacts of AI; nanotech brain and body implants for increased intelligence, improved health, and life extension; and technological impacts on economic growth.

I was personally inspired by the book in 1999 and by Ray’s prophetic, uplifting vision of the future. I hope you also enjoy this blast from the past.

— Amara D. Angelica, Editor

* First published in hardcover January 1, 1999 by Viking. The series also includes The Age of Intelligent Machines (The MIT Press, 1992) and The Singularity Is Near (Penguin Books, 2006).

Intel’s new ‘Loihi’ chip mimics neurons and synapses in the human brain

Loihi chip (credit: Intel Corporation)

Intel announced this week a self-learning, energy-efficient neuromorphic (brain-like) research chip codenamed “Loihi”* that mimics how the human brain functions. Under development for six years, the chip uses 130,000 “neurons” and 130 million “synapses” and learns in real time, based on feedback from the environment.**

Neuromorphic chip models are inspired by how neurons communicate and learn, using spikes (brain pulses) and synapses capable of learning.
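
To make the spiking model concrete, here is a minimal sketch (not Intel’s implementation) of a leaky integrate-and-fire neuron whose synapses strengthen when input spikes coincide with an output spike; the parameters and the simple Hebbian update rule are invented for illustration.

```python
import numpy as np

# Illustrative sketch only: a leaky integrate-and-fire (LIF) neuron with a
# crude spike-driven weight update. Loihi's actual neuron model and learning
# rules are more sophisticated; all parameters here are invented.

rng = np.random.default_rng(0)

n_inputs = 10
weights = rng.uniform(0.1, 0.5, n_inputs)   # synaptic weights (hypothetical)
v = 0.0                                      # membrane potential
v_threshold = 1.0                            # spike threshold
leak = 0.9                                   # per-step leak factor
lr = 0.01                                    # learning rate

for t in range(100):
    in_spikes = (rng.random(n_inputs) < 0.2).astype(float)  # random input spikes
    v = leak * v + weights @ in_spikes        # integrate weighted input spikes
    if v >= v_threshold:
        # Output spike: potentiate synapses whose inputs were active
        # (a Hebbian simplification), then reset the membrane potential.
        weights += lr * in_spikes
        v = 0.0
        print(f"t={t:3d} spike; mean weight={weights.mean():.3f}")
```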

“The idea is to help computers self-organize and make decisions based on patterns and associations,” Michael Mayberry, PhD, corporate vice president and managing director of Intel Labs at Intel Corporation, explained in a blog post.

He said the chip automatically gets smarter over time and doesn’t need to be trained in the traditional way. He sees applications in areas that would benefit from autonomous operation and continuous learning in an unstructured environment, such as automotive, industrial, and personal robotics.

For example, a cybersecurity system could identify a breach or a hack based on an abnormality or difference in data streams. Or the chip could learn a person’s heartbeat reading under various conditions — after jogging, following a meal or before going to bed — to determine a “normal” heartbeat. The system could then continuously monitor incoming heart data to flag patterns that don’t match the “normal” pattern, and could be personalized for any user.
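
In conventional Python (rather than Loihi’s spiking substrate), the heartbeat idea might be sketched as a running per-condition statistic with an outlier test; the class name, thresholds, and data below are invented for illustration.

```python
# Minimal sketch of "learn a normal pattern, flag deviations" (not Loihi code).
# Uses Welford's online algorithm for a running mean/variance per condition.

class HeartRateMonitor:
    def __init__(self, tolerance=3.0):
        self.stats = {}          # condition -> (count, mean, M2) Welford state
        self.tolerance = tolerance

    def update(self, condition, bpm):
        n, mean, m2 = self.stats.get(condition, (0, 0.0, 0.0))
        n += 1
        delta = bpm - mean
        mean += delta / n
        m2 += delta * (bpm - mean)
        self.stats[condition] = (n, mean, m2)

    def is_abnormal(self, condition, bpm):
        n, mean, m2 = self.stats.get(condition, (0, 0.0, 0.0))
        if n < 10:
            return False          # not enough data to call anything abnormal
        std = (m2 / (n - 1)) ** 0.5
        return abs(bpm - mean) > self.tolerance * max(std, 1.0)

monitor = HeartRateMonitor()
for bpm in [62, 65, 63, 61, 64, 66, 62, 63, 65, 64]:
    monitor.update("resting", bpm)
print(monitor.is_abnormal("resting", 110))   # True: far outside "normal"
```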

“Machine learning models such as deep learning have made tremendous recent advancements by using extensive training datasets to recognize objects and events. However, unless their training sets have specifically accounted for a particular element, situation or circumstance, these machine learning systems do not generalize well,” Mayberry notes.

The Loihi test chip

Loihi currently exists as a research test chip that offers flexible on-chip learning and combines training and inference. Researchers have demonstrated it learning at a rate that represents a 1-million-fold improvement over other typical spiking neural nets, as measured by the total operations required to achieve a given accuracy on MNIST digit-recognition problems, Mayberry said. “Compared to technologies such as convolutional neural networks and deep learning neural networks, the Loihi test chip uses many fewer resources on the same task.”

Fabricated on Intel’s 14 nm process technology, the chip is also up to 1,000 times more energy-efficient than general-purpose computing required for typical training systems, he added.

In the first half of 2018, Intel plans to share the Loihi test chip with leading university and research institutions with a focus on advancing AI. The goal is to develop and test several algorithms with high efficiency for problems including path planning, constraint satisfaction, sparse coding, dictionary learning, and dynamic pattern learning and adaptation.

“Looking to the future, Intel believes that neuromorphic computing offers a way to provide exascale performance in a construct inspired by how the brain works,” Mayberry said.

* “Loihi seamount, sometimes known as the ‘youngest volcano’ in the Hawaiian chain, is an undersea mountain rising more than 3000 meters above the floor of the Pacific Ocean … submerged in the Pacific off of the south-eastern coast of the Big Island of Hawaii.” — Hawaii Center for Volcanology

** For comparison, IBM’s TrueNorth neuromorphic chip currently has 1 million neurons and 256 million synapses.

Why futurist Ray Kurzweil isn’t worried about technology stealing your job — Fortune

1985: Ray Kurzweil looks on as Stevie Wonder experiences the Kurzweil 250, the first synthesizer to accurately reproduce the sounds of the piano — replacing piano-maker jobs but adding many more jobs for musicians (credit: Kurzweil Music Systems)

Last week, Fortune magazine asked Ray Kurzweil to comment on some often-expressed questions about the future.

Does AI pose an existential threat to humanity?

Kurzweil sees the future as nuanced, notes writer Michal Lev-Ram. “A lot of people think things are getting worse, partly because that’s actually an evolutionary adaptation,” Kurzweil said. “It’s very important for your survival to be sensitive to bad news. … I think if you look at history, though, we’re being helped [by new technology] more than we’re being hurt.”

How will artificial intelligence and other technologies impact jobs?

“We have already eliminated all jobs several times in human history,” said Kurzweil, pointing out that “for every job we eliminate, we’re going to create more jobs at the top of the skill ladder. … You can’t describe the new jobs, because they’re in industries and concepts that don’t exist yet.”

Why are we so bad at predicting certain things? For example, Donald Trump winning the presidency?

Kurzweil: “He’s not technology.”

Read Fortune article here.

Human vs. deep-neural-network performance in object recognition

(credit: UC Santa Barbara)

Before you read this: look for toothbrushes in the photo above.

Did you notice the huge toothbrush on the left? Probably not. That’s because when humans search through scenes for a particular object, we often miss objects whose size is inconsistent with the rest of the scene, according to scientists in the Department of Psychological & Brain Sciences at UC Santa Barbara.

The scientists are investigating this phenomenon in an effort to better understand how humans and computers compare in doing visual searches. Their findings are published in the journal Current Biology.

Hiding in plain sight

“When something appears at the wrong scale, you will miss it more often because your brain automatically ignores it,” said UCSB professor Miguel Eckstein, who specializes in computational human vision, visual attention, and search.

The experiment used scenes of ordinary objects featured in computer-generated images that varied in color, viewing angle, and size, mixed with “target-absent” scenes. The researchers asked 60 viewers to search for these objects (e.g., toothbrush, parking meter, computer mouse) while eye-tracking software monitored the paths of their gaze.

The researchers found that people tended to miss the target more often when it was mis-scaled (too large or too small) — even when looking directly at the target object.

Computer vision, by contrast, doesn’t have this issue, the scientists reported. However, in the experiments, the researchers found that the most advanced form of computer vision — deep neural networks — had its own limitations.

Human search strategies that could improve computer vision

Red rectangle marks incorrect image identification as a cell phone by a deep-learning algorithm (credit: UC Santa Barbara)

For example, a deep convolutional neural network (CNN) incorrectly identified a computer keyboard as a cell phone, based on similarity in shape and the object’s spatial proximity to a human hand (as would be expected of a cell phone). But for humans, the object’s size (compared to the nearby hands) is clearly inconsistent with a cell phone.

“This strategy allows humans to reduce false positives when making fast decisions,” the researchers note in the paper.

“The idea is when you first see a scene, your brain rapidly processes it within a few hundred milliseconds or less, and then you use that information to guide your search towards likely locations where the object typically appears,” Eckstein said. “Also, you focus your attention on objects that are actually at the size that is consistent with the object that you’re looking for.”

That is, human brains use the relationships between objects and their context within the scene to guide their eyes — a useful strategy to process scenes rapidly, eliminate distractors, and reduce false positives.
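
One way to borrow this trick in a computer-vision pipeline is to filter detections by size plausibility. The sketch below assumes a hypothetical detection format, a made-up table of typical object sizes, and a detected hand used as the scene’s scale reference; none of these specifics come from the paper.

```python
# Hedged sketch of size-consistency filtering to reduce false positives.
# All sizes, formats, and the hand-based scale reference are invented.

TYPICAL_SIZE_CM = {"cell phone": 14.0, "toothbrush": 19.0, "keyboard": 45.0}

def plausible(det, cm_per_pixel, tolerance=2.0):
    """det: dict with 'label' and pixel 'width' of its bounding box."""
    expected = TYPICAL_SIZE_CM.get(det["label"])
    if expected is None:
        return True                      # unknown class: don't filter
    apparent_cm = det["width"] * cm_per_pixel
    ratio = apparent_cm / expected
    return 1.0 / tolerance <= ratio <= tolerance

# Suppose a detected hand (~18 cm long, 300 px in the image) sets the scale.
cm_per_pixel = 18.0 / 300
detections = [
    {"label": "cell phone", "width": 240},   # ~14 cm: plausible, keep
    {"label": "cell phone", "width": 750},   # ~45 cm: keyboard-sized, reject
]
kept = [d for d in detections if plausible(d, cm_per_pixel)]
print([d["width"] for d in kept])            # [240]
```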

This finding might suggest ways to improve computer vision by implementing some of the tricks the brain utilizes to reduce false positives, according to the researchers.

Future research

“There are some theories that suggest that people with autism spectrum disorder focus more on local scene information and less on global structure,” says Eckstein, who is contemplating a follow-up study. “So there is a possibility that people with autism spectrum disorder might miss the mis-scaled objects less often, but we won’t know that until we do the study.”

In the more immediate future, the team’s research will look into the brain activity that occurs when we view mis-scaled objects.

“Many studies have identified brain regions that process scenes and objects, and now researchers are trying to understand which particular properties of scenes and objects are represented in these regions,” said postdoctoral researcher Lauren Welbourne, whose current research concentrates on how objects are represented in the cortex, and how scene context influences the perception of objects.

“So what we’re trying to do is find out how these brain areas respond to objects that are either correctly or incorrectly scaled within a scene. This may help us determine which regions are responsible for making it more difficult for us to find objects if they are mis-scaled.”


Abstract of Humans, but Not Deep Neural Networks, Often Miss Giant Targets in Scenes

Even with great advances in machine vision, animals are still unmatched in their ability to visually search complex scenes. Animals from bees [ 1, 2 ] to birds [ 3 ] to humans [ 4–12 ] learn about the statistical relations in visual environments to guide and aid their search for targets. Here, we investigate a novel manner in which humans utilize rapidly acquired information about scenes by guiding search toward likely target sizes. We show that humans often miss targets when their size is inconsistent with the rest of the scene, even when the targets were made larger and more salient and observers fixated the target. In contrast, we show that state-of-the-art deep neural networks do not exhibit such deficits in finding mis-scaled targets but, unlike humans, can be fooled by target-shaped distractors that are inconsistent with the expected target’s size within the scene. Thus, it is not a human deficiency to miss targets when they are inconsistent in size with the scene; instead, it is a byproduct of a useful strategy that the brain has implemented to rapidly discount potential distractors.

Neuroscientists restore vegetative-state patient’s consciousness with vagus nerve stimulation

Information sharing increases after vagus nerve stimulation over centroposterior regions of the brain. (Left) Coronal view of weighted symbolic mutual information (wSMI) shared by all channels pre- and post-vagus nerve stimulation (VNS) (top and bottom, respectively). For visual clarity, only links with wSMI higher than 0.025 are shown. (Right) Topographies of the median wSMI that each EEG channel shares with all the other channels pre- and post-VNS (top and bottom, respectively). The bar graph represents the median wSMI over right centroposterior electrodes (darker dots) which significantly increases post-VNS. (credit: Martina Corazzol et al./Current Biology)

A 35-year-old man who had been in a vegetative state for 15 years after a car accident has shown signs of consciousness after neurosurgeons in France implanted a vagus nerve stimulator into his chest — challenging the general belief that disorders of consciousness that persist for longer than 12 months are irreversible.

In a 2007 Weill Cornell Medical College study reported in Nature, neurologists found temporary improvements in patients in a state of minimal consciousness while being treated with bilateral deep brain electrical stimulation (DBS) of the central thalamus. Aiming instead to achieve permanent results, the French researchers proposed use of vagus nerve stimulation* (VNS) to activate the thalamo-cortical network, based on the “hypothesis that vagus nerve stimulation functionally reorganizes the thalamo-cortical network.”

A vagus nerve stimulation therapy system. The vagus nerve connects the brain to many other parts of the body, including the gut. It’s known to be important in waking, alertness, and many other essential functions. (credit: Cyberonics, Inc./LivaNova)

After one month of VNS — a treatment currently used for epilepsy and depression — the patient’s attention, movements, and brain activity significantly improved and he began responding to simple orders that were impossible before, the researchers report today (Sept. 25, 2017) in an open-access paper in Current Biology.

For example, he could follow an object with his eyes and turn his head upon request, and when the examiner’s head suddenly approached the patient’s face, he reacted with surprise by opening his eyes wide.

Evidence from brain-activity recordings

PET images acquired during baseline (left: pre-VNS) and 3 months post vagus nerve stimulation (right: post-VNS). After vagus nerve stimulation, the metabolism increased in the right parieto-occipital cortex, thalamus and striatum. (credit: Corazzol et al.)

“After one month of stimulation, when [electrical current] intensity reached 1 mA, clinical examination revealed reproducible and consistent improvements in general arousal, sustained attention, body motility, and visual pursuit,” the researchers note.

Brain-activity recordings in the new study revealed major changes. A theta EEG signal (important for distinguishing between a vegetative and minimally conscious state) increased significantly in those areas of the brain involved in movement, sensation, and awareness. The brain’s functional connectivity also increased. And a PET scan showed increases in metabolic activity in both cortical and subcortical regions of the brain.
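
For readers curious what such measures look like computationally, here is a simplified sketch of theta-band power per channel and a basic correlation-based connectivity matrix. The study itself used weighted symbolic mutual information (wSMI), a more involved metric; the sampling rate and synthetic data below are purely illustrative.

```python
import numpy as np
from scipy.signal import welch

# Simplified EEG measures: theta-band (4-8 Hz) power and channel-by-channel
# correlation as a crude stand-in for functional connectivity (not wSMI).

fs = 250                                   # sampling rate (Hz), assumed
rng = np.random.default_rng(1)
eeg = rng.standard_normal((8, fs * 10))    # 8 channels, 10 s of fake EEG

def theta_power(channel, fs):
    """Mean power spectral density in the 4-8 Hz theta band."""
    freqs, psd = welch(channel, fs=fs, nperseg=fs * 2)
    band = (freqs >= 4) & (freqs <= 8)
    return psd[band].mean()

powers = np.array([theta_power(ch, fs) for ch in eeg])
connectivity = np.corrcoef(eeg)            # (8, 8) correlation matrix

print("theta power per channel:", np.round(powers, 4))
print("connectivity shape:", connectivity.shape)
```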

The researchers also speculate that “since the vagus nerve has bidirectional control over the brain and the body, reactivation of sensory/visceral afferences might have enhanced brain activity within a body/brain closed loop process.”

The team is now planning a large collaborative study to confirm and extend the therapeutic potential of VNS for patients in a vegetative or minimally conscious state.

However, “some physicians and brain injury specialists remain skeptical about whether the treatment truly worked as described,” according to an article today in Science. “The surgery to implant the electrical stimulator, the frequent behavioral observations, and the moving in and out of brain scanners all could have contributed to the patient’s improved state, says Andrew Cole, a neurologist at Harvard Medical School in Boston who studies consciousness. ‘I’m not saying their claim is untrue,’ he says. ‘I’m just saying it’s hard to interpret based on the results as presented.’”

The study was supported by CNRS, ANR, and a grant from the University of Lyon.

* “The vagus nerve carries somatic and visceral efferents and afferents distributed throughout the central nervous system, either monosynaptically or via the nucleus of the solitary tract (NTS). The vagus directly modulates activity in the brainstem and via the NTS it reaches the dorsal raphe nuclei, the thalamus, the amygdala, and the hippocampus. In humans, vagus nerve stimulation increases metabolism in the forebrain, thalamus and reticular formation. It also enhances neuronal firing in the locus coeruleus which leads to massive release of norepinephrine in the thalamus and hippocampus, a noradrenergic pathway important for arousal, alertness and the fight-or-flight response.” — Corazzol and Lio et al./Current Biology


Abstract of Restoring consciousness with vagus nerve stimulation

Patients lying in a vegetative state present severe impairments of consciousness [1] caused by lesions in the cortex, the brainstem, the thalamus and the white matter [2]. There is agreement that this condition may involve disconnections in long-range cortico–cortical and thalamo-cortical pathways [3]. Hence, in the vegetative state cortical activity is ‘deafferented’ from subcortical modulation and/or principally disrupted between fronto-parietal regions. Some patients in a vegetative state recover while others persistently remain in such a state. The neural signature of spontaneous recovery is linked to increased thalamo-cortical activity and improved fronto-parietal functional connectivity [3]. The likelihood of consciousness recovery depends on the extent of brain damage and patients’ etiology, but after one year of unresponsive behavior, chances become low [1]. There is thus a need to explore novel ways of repairing lost consciousness. Here we report beneficial effects of vagus nerve stimulation on consciousness level of a single patient in a vegetative state, including improved behavioral responsiveness and enhanced brain connectivity patterns.

Artificial ‘skin’ gives robotic hand a sense of touch

University of Houston researchers have reported a development in stretchable electronics that can serve as artificial skin for a robotic hand and biomedical devices (credit: University of Houston)

A team of researchers from the University of Houston has reported a development in stretchable electronics that can serve as an artificial skin, allowing a robotic hand to sense the difference between hot and cold, and also offering advantages for a wide range of biomedical devices.

The work, reported in the open-access journal Science Advances, describes a new mechanism for producing stretchable electronics, a process that relies upon readily available materials and could be scaled up for commercial production.

Cunjiang Yu, Bill D. Cook Assistant Professor of mechanical engineering and lead author of the paper, said the work is the first to create a semiconductor in a rubber composite format, designed to allow the electronic components to retain functionality even after the material is stretched by 50 percent.

He noted that traditional semiconductors are brittle and using them in otherwise stretchable materials has required a complicated system of mechanical accommodations. That’s both more complex and less stable than the new discovery, as well as more expensive, he said. “Our strategy has advantages for simple fabrication, scalable manufacturing, high-density integration, large strain tolerance, and low cost,” he said.

Photograph of a robotic hand with intrinsically stretchable rubbery sensors (credit: Hae-Jin Kim et al./Science Advances)

The team used the skin to demonstrate that a robotic hand could sense the temperature of hot and iced water in a cup. The skin also was able to interpret computer signals sent to the hand and reproduce the signals as American Sign Language.

Uses of the stretchable skin include soft wearable electronics such as health monitors, medical implants, and human-machine interfaces.

The stretchable composite semiconductor was prepared by combining a silicon-based polymer known as polydimethylsiloxane (PDMS) with tiny nanowires in a solution that was then hardened into a material in which the nanowires transport electric current.


Abstract of Rubbery electronics and sensors from intrinsically stretchable elastomeric composites of semiconductors and conductors

A general strategy to impart mechanical stretchability to stretchable electronics involves engineering materials into special architectures to accommodate or eliminate the mechanical strain in nonstretchable electronic materials while stretched. We introduce an all solution–processed type of electronics and sensors that are rubbery and intrinsically stretchable as an outcome from all the elastomeric materials in percolated composite formats with P3HT-NFs [poly(3-hexylthiophene-2,5-diyl) nanofibrils] and AuNP-AgNW (Au nanoparticles with conformally coated silver nanowires) in PDMS (polydimethylsiloxane). The fabricated thin-film transistors retain their electrical performances by more than 55% upon 50% stretching and exhibit one of the highest P3HT-based field-effect mobilities of 1.4 cm2/V∙s, owing to crystallinity improvement. Rubbery sensors, which include strain, pressure, and temperature sensors, show reliable sensing capabilities and are exploited as smart skins that enable gesture translation for sign language alphabet and haptic sensing for robotics to illustrate one of the applications of the sensors.

A battery-free origami robot powered and controlled by external magnetic fields

Wirelessly powered and controlled magnetic folding robot arm can grasp and bend (credit: Wyss Institute at Harvard University)

Harvard University researchers have created a battery-free, folding robot “arm” with multiple “joints,” gripper “hand,” and actuator “muscles” — all powered and controlled wirelessly by an external resonant magnetic field.

The design is inspired by the traditional Japanese art of origami (used to transform a simple sheet of paper into complex, three-dimensional shapes through a specific pattern of folds, creases, and crimps). The prototype device is capable of complex, repeatable movements at millimeter to centimeter scales.

The research, by scientists at the Wyss Institute for Biologically Inspired Engineering and the John A. Paulson School of Engineering and Applied Sciences (SEAS), is reported in Science Robotics.

How it works

Design of small-scale-structure prototype of wirelessly controlled robotic arm (credit: Mustafa Boyvat et al./Science Robotics)

The researchers designed a 0.8-gram small-scale-structure* prototype robotic “arm” capable of bending and opening or closing a gripper around an object. The “arm” is constructed with a special origami-like pattern that uses hinges (“joints”) to permit it to bend. There is also a “hand” (the gripper, left panel in the above image) that opens or closes.

To power the device, an external coil with its own power source (see video below) is used to generate a low-frequency magnetic field that induces an electrical current in three magnetic coils. The current heats the spiral-wire shape-memory-alloy actuator wires (coiled wire shown in inset above). That causes the actuator wires (“muscles”) to contract, making the attached nearby “joints” bend, and folding the robot body.

Mechanism of the origami gripper (for the small-scale prototype design). (Left) The coil SMA actuator pushes the center link connected to both fingers, opening the gripper’s fingers via dynamic folding at the joints. (Center) The plate spring, a passive compression spring, pulls the link back as the gripper closes the fingers, again by rotations at the folding joints. (Right) A photo of the gripper showing the SMA actuator wire attached at the center link. (credit: Mustafa Boyvat et al./Science Robotics)

By changing the resonant frequency of the external electromagnetic field, the two longer actuator wires (coiled wires shown in above illustration) are instead heated and stretched, opening the gripper (“hand”).

In both cases, when the external field-induced current stops, the actuators relax, springing back to their “memory” positions and causing the robot body to straighten out or the gripper’s outer triangles to close.
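
This addressing scheme rests on standard LC-circuit resonance, f = 1/(2π√(LC)): only the on-board circuit tuned near the external field’s drive frequency absorbs significant power. The sketch below illustrates the selection principle with invented component values, not the paper’s actual circuits.

```python
import math

# Hedged sketch of resonance-based addressing. Each on-board coil forms an
# LC circuit; the external field's frequency selects which one responds.
# Inductance/capacitance values are invented for illustration.

def resonant_frequency(L, C):
    """Resonant frequency (Hz) of an LC circuit: 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

actuators = {
    "joint bender": (100e-6, 47e-9),    # (inductance H, capacitance F)
    "gripper opener": (100e-6, 10e-9),
}

def addressed(drive_hz, tolerance=0.05):
    """Actuators whose resonance lies within `tolerance` of the drive frequency."""
    return [name for name, (L, C) in actuators.items()
            if abs(resonant_frequency(L, C) - drive_hz) / drive_hz < tolerance]

for name, (L, C) in actuators.items():
    print(f"{name}: {resonant_frequency(L, C) / 1e3:.0f} kHz")
print(addressed(73e3))    # only the circuit tuned near 73 kHz responds
```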

Minimally invasive medicine and surgery applications

As an example of a practical future application, instead of having an uncomfortable endoscope put down their throat to assist a doctor with surgery, a patient could just swallow a micro-robot that could move around and perform simple tasks, like holding tissue or filming, powered by a coil outside their body.

Using a much larger source coil — on the order of yards in diameter — could enable wireless, battery-free communication between multiple “smart” objects in a room or building.

“Medical devices today are commonly limited by the size of the batteries that power them, whereas these remotely powered origami robots can break through that size barrier and potentially offer entirely new, minimally invasive approaches for medicine and surgery in the future,” says Wyss Founding Director Donald Ingber, who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, as well as a Professor of Bioengineering at Harvard’s School of Engineering and Applied Sciences.

This work was supported by the National Science Foundation, the U.S. Army Research Laboratory, and the Swiss National Science Foundation.

* A large-scale-structure prototype version has minor differences, including 12-cm folding lines vs. 1.7-cm folding lines in the smaller version.

Wyss Institute | Battery-Free Folding Robots


Abstract of Addressable wireless actuation for multijoint folding robots and devices

“Printing” robots and other complex devices through a process of origami-like folding is an emerging and promising manufacturing method due to the inherent simplicity and low cost of folding-based assembly. Folding is used in this class of device to create both complex static structures and flexure-based compliant mechanisms. Dependency on batteries to power these folds with no external wires is a hurdle to giving small-scale folding robots and devices functionality. We demonstrate a battery-free wireless folding method for dynamic multijoint structures, achieving addressable folding motions—both individual and collective folding—using only basic passive electronic components on the device. The method is based on electromagnetic power transmission and resonance selectivity for actuation of resistive shape memory alloy actuators without the need for physical connection or line of sight. We demonstrate the utility of this approach using two folded devices at different sizes using different circuit approaches.