Creating complex structures using DNA origami and nanoparticles

Cluster assembled from DNA-functionalized gold nanoparticles on the vertices of an octahedral DNA origami frame (credit: Brookhaven National Laboratory)

Scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory and collaborators have developed a method using DNA for designing new customized materials with complex structures for applications in energy, optics, and medicine.

They used ropelike configurations of DNA to form a rigid geometrical framework and then added dangling pieces of single-stranded DNA to glue nanoparticles in place.

The method, described in the journal Nature Nanotechnology, produced predictable geometric configurations that are somewhat analogous to molecules made of atoms, according to Brookhaven physicist Oleg Gang, who led the project at the Lab’s Center for Functional Nanomaterials (CFN).

“While atoms form molecules based on the nature of their chemical bonds, there has been no easy way to impose such a specific spatial binding scheme on nanoparticles,” he said. “This is exactly the problem that our method addresses.”

“We may be able to design materials that mimic nature’s machinery to harvest solar energy, or manipulate light for telecommunications applications, or design novel catalysts for speeding up a variety of chemical reactions,” Gang said.

As a demonstration, the researchers used an octahedral (eight-sided) scaffold with particles positioned in precise locations on the scaffold according to specific DNA coding. They also used the geometrical clusters as building blocks for larger arrays, including linear chains and two-dimensional planar sheets.

“Our work demonstrates the versatility of this approach and opens up numerous exciting opportunities for high-yield precision assembly of tailored 3D building blocks in which multiple nanoparticles of different structures and functions can be integrated,” said CFN scientist Ye Tian, one of the lead authors on the paper.

A new DNA “origami” kit

Scientists built octahedrons using ropelike structures made of bundles of DNA double-helix molecules to form the frames (a). Single strands of DNA attached at the vertices (numbered in red) can be used to attach nanoparticles coated with complementary strands. This approach can yield a variety of structures, including ones with the same type of particle at each vertex (b), arrangements with particles placed only on certain vertices (c), and structures with different particles placed strategically on different vertices (d). (credit: Brookhaven National Laboratory)

This nanoscale construction approach takes advantage of two key characteristics of the DNA molecule: the twisted-ladder double helix shape, and the natural tendency of strands with complementary bases (the A, T, G, and C letters of the genetic code) to pair up in a precise way.

Here’s how the scientists built a complex structure with this “DNA origami” kit:

1. They created bundles of six double-helix DNA molecules.

2. They put four of these bundles together to make a stable, somewhat rigid building material — similar to the way individual fibrous strands are woven together to make a very strong rope.

3. They used these ropelike girders to form the frame of three-dimensional octahedrons, “stapling” the linear DNA chains together with hundreds of short complementary DNA strands. (“We refer to these as DNA origami octahedrons,” Gang said.)

4. To make it possible to “glue” nanoparticles to the 3D frames, the scientists engineered each of the original six-helix bundles to have one helix with an extra single-stranded piece of DNA sticking out from both ends.

5. When assembled into the 3D octahedrons, each vertex of the frame had a few of these “sticky end” tethers available for binding with objects coated with complementary DNA strands.
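The “sticky end” binding in steps 4 and 5 is driven by Watson–Crick complementarity: a tether hybridizes only with its reverse complement. A minimal sketch of that matching rule (the sequences below are hypothetical, not those used in the paper):

```python
# Watson-Crick base pairing: A<->T, G<->C
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return "".join(PAIR[b] for b in reversed(seq))

def binds(tether: str, particle_coat: str) -> bool:
    """A particle's coating strand hybridizes with a vertex tether
    only if it is the tether's reverse complement."""
    return particle_coat == complement(tether)

# Hypothetical sticky-end sequence on one octahedron vertex
tether = "ATGGCA"
print(complement(tether))        # the coat sequence that would bind: TGCCAT
print(binds(tether, "TGCCAT"))   # True
```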

“When nanoparticles coated with single-stranded tethers are mixed with the DNA origami octahedrons, the ‘free’ pieces of DNA find one another so the bases can pair up according to the rules of the DNA complementarity code. Thus the specifically DNA-encoded particles can find their correspondingly designed place on the octahedron vertices,” Gang explained.

A combination cryo-electron microscopy image of an octahedral frame with one gold nanoparticle bound to each of the six vertices, shown from three different angles. (Credit: Brookhaven National Laboratory)

The scientists can also change what binds to each vertex by changing the DNA sequences encoded on the tethers. In one experiment, they encoded the same sequence on all the octahedron’s tethers, and attached strands with a complementary sequence to gold nanoparticles. The result: one gold nanoparticle attached to each of the octahedron’s six vertices.

In additional experiments, the scientists changed the sequence of some vertices and used complementary strands on different kinds of particles, illustrating that they could direct the assembly and arrangement of the particles in a very precise way.

By strategically placing tethers on particular vertices, the scientists used the octahedrons to link nanoparticles into one-dimensional chainlike arrays (left) and two-dimensional square sheets (right). (Credit: Brookhaven National Laboratory)

In one case, they made two different arrangements of the same three pairs of particles of different sizes, producing products with different optical properties. They were even able to use DNA tethers on selected vertices to link octahedrons end-to-end, forming chains, and in 2D arrays, forming sheets.

Visualizing the structures

TEM image of part of the 1D array (credit: Brookhaven National Lab)

Confirming the particle arrangements and structures was a major challenge because the nanoparticles and the DNA molecules making up the frames have very different densities. Certain microscopy techniques would reveal only the particles, while others would distort the 3D structures.

To see both the particles and origami frames, the scientists used cryo-electron microscopy (cryo-EM), led by Brookhaven Lab and Stony Brook University biologist Huilin Li, an expert in this technique, and Tong Wang, the paper’s other lead co-author, who works in Brookhaven’s Biosciences department with Li.

They had to subtract information from the images to “see” the different density components separately, then combine the information using single particle 3D reconstruction and tomography to produce the final images.
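The density-separation idea can be caricatured with a toy one-dimensional example (synthetic numbers, not the actual cryo-EM pipeline): because the gold particles are far denser than the DNA frame, a thresholded estimate of the particle signal can be subtracted out to reveal the faint frame.

```python
# Synthetic 1-D "density profile": a faint DNA frame plus bright gold particles
frame     = [0.1, 0.2, 0.1, 0.2, 0.1, 0.2]
particles = [0.0, 0.0, 5.0, 0.0, 5.0, 0.0]
image = [f + p for f, p in zip(frame, particles)]

# The dense particles dominate the signal; threshold to estimate their contribution
particle_estimate = [v if v > 1.0 else 0.0 for v in image]

# Subtracting that estimate leaves an approximation of the faint frame
frame_estimate = [round(v - p, 10) for v, p in zip(image, particle_estimate)]
print(frame_estimate)  # [0.1, 0.2, 0.0, 0.2, 0.0, 0.2]
```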

This research was supported by the DOE Office of Science.


Abstract of Prescribed nanoparticle cluster architectures and low-dimensional arrays built using octahedral DNA origami frames

Three-dimensional mesoscale clusters that are formed from nanoparticles spatially arranged in pre-determined positions can be thought of as mesoscale analogues of molecules. These nanoparticle architectures could offer tailored properties due to collective effects, but developing a general platform for fabricating such clusters is a significant challenge. Here, we report a strategy for assembling three-dimensional nanoparticle clusters that uses a molecular frame designed with encoded vertices for particle placement. The frame is a DNA origami octahedron and can be used to fabricate clusters with various symmetries and particle compositions. Cryo-electron microscopy is used to uncover the structure of the DNA frame and to reveal that the nanoparticles are spatially coordinated in the prescribed manner. We show that the DNA frame and one set of nanoparticles can be used to create nanoclusters with different chiroptical activities. We also show that the octahedra can serve as programmable interparticle linkers, allowing one- and two-dimensional arrays to be assembled with designed particle arrangements.

One step closer to a single-molecule device

Molecular diode artist’s impression (credit: Columbia Engineering)

Columbia Engineering researchers have created the first single-molecule diode — the ultimate in miniaturization for electronic devices — with potential for real-world applications in electronic systems.

The diode has a high rectification ratio (>250) and a high “on” current (~0.1 microamps), says Latha Venkataraman, associate professor of applied physics. “Constructing a device where the active elements are only a single molecule … which has been the ‘holy grail’ of molecular electronics, represents the ultimate in functional miniaturization that can be achieved for an electronic device,” she said.

With electronic devices becoming smaller every day, the field of molecular electronics has become ever more critical in solving the problem of further miniaturization, and single molecules represent the limit of miniaturization. The idea of creating a single-molecule diode was suggested by Arieh Aviram and Mark Ratner who theorized in 1974 that a molecule could act as a rectifier, a one-way conductor of electric current.

The future of miniaturization

Single-molecule asymmetric molecular structure (alkyl side chains omitted for clarity) using a donor–bridge–acceptor architecture to mimic a semiconductor p–n junction (credit: Brian Capozzi et al./Nature Nanotechnology)

Researchers have since been exploring the charge-transport properties of molecules. They have shown that single-molecules attached to metal electrodes (single-molecule junctions) can be made to act as a variety of circuit elements, including resistors, switches, transistors, and, indeed, diodes.

They have learned that it is possible to see quantum mechanical effects, such as interference, manifest in the conductance properties of molecular junctions.

Since a diode acts as an electricity valve, its structure needs to be asymmetric so that electricity flowing in one direction experiences a different environment than electricity flowing in the other direction. To develop a single-molecule diode, researchers have simply designed molecules that have asymmetric structures.

“While such asymmetric molecules do indeed display some diode-like properties, they are not effective,” explains Brian Capozzi, a PhD student working with Venkataraman and lead author of the paper.

“A well-designed diode should only allow current to flow in one direction …  and it should allow a lot of current to flow in that direction. Asymmetric molecular designs have typically suffered from very low current flow in both ‘on’ and ‘off’ directions, and the ratio of current flow in the two has typically been low. Ideally, the ratio of ‘on’ current to ‘off’ current, the rectification ratio, should be very high.”
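The figure of merit Capozzi describes, the rectification ratio, is simply the magnitude of the “on” current divided by the “off” current at the same bias magnitude. A minimal sketch (the currents below are illustrative values of the order reported for the device, not measured data):

```python
def rectification_ratio(i_forward: float, i_reverse: float) -> float:
    """Ratio of 'on' to 'off' current magnitudes at the same |bias|."""
    return abs(i_forward) / abs(i_reverse)

# Hypothetical values: ~0.1 microamp 'on' current, ratio in the hundreds
i_on, i_off = 1.0e-7, 4.0e-10   # amps
print(rectification_ratio(i_on, i_off))  # 250.0
```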

To overcome the issues associated with asymmetric molecular design, Venkataraman and her colleagues — Chemistry Assistant Professor Luis Campos’ group at Columbia and Jeffrey Neaton’s group at the Molecular Foundry at UC Berkeley — focused on developing an asymmetry in the environment around the molecular junction. They created an environmental asymmetry through a rather simple method: they surrounded the active molecule with an ionic solution and used gold metal electrodes of different sizes to contact the molecule.

Avoiding quantum-mechanical effects

Their results achieved rectification ratios as high as 250 — 50 times higher than earlier designs. The “on” current flow in their devices can be more than 0.1 microamps, which, Venkataraman notes, is a lot of current to be passing through a single molecule. And because this new technique is so easily implemented, it can be applied to nanoscale devices of all types, including those made with graphene electrodes.

“It’s amazing to be able to design a molecular circuit, using concepts from chemistry and physics, and have it do something functional,” Venkataraman says. “The length scale is so small that quantum mechanical effects are absolutely a crucial aspect of the device. So it is truly a triumph to be able to create something that you will never be able to physically see and that behaves as intended.”

She and her team are now working on understanding the fundamental physics behind their discovery, and trying to increase the rectification ratios they observed, using new molecular systems.

The study, described in a paper published today (May 25) in Nature Nanotechnology, was funded by the National Science Foundation, the Department of Energy, and the Packard Foundation.

Fly-catching robot speeds biomedical research

A fruit fly hangs unharmed at the end of the robot’s suction tube. The robot uses machine vision to inspect and analyze the captured fly. (credit: Stanford Bio-X)

Stanford Bio-X scientists have created a robot that speeds and extends biomedical research with a common laboratory organism — fruit flies (Drosophila).

The robot can visually inspect awake flies and carry out behavioral experiments that were impossible with anesthetized flies. The work is described today (May 25) in the journal Nature Methods.

“Robotic technology offers a new prospect for automated experiments and enables fly researchers to do several things they couldn’t do previously,” said research team leader Mark Schnitzer, an associate professor of biology and of applied physics.

“For example, it can do studies with large numbers of flies inspected in very precise ways.” The group did one study of 1,000 flies in 10 hours, a task that would have taken much longer for even a highly skilled human.

Zap, you’re part of an experiment

When the robot’s fly-snatching apparatus is ready to grab a fly, it flashes a brief blast of infrared light, which is invisible to the fly. The light reflects off each fly’s thorax, revealing its precise location and allowing the robot to recognize individual flies by their reflection patterns. Then a tiny, narrow suction tube strikes one of the illuminated thoraxes, painlessly sucking onto the fly and lifting it up.

Once the fly is attached, the robot uses machine vision to analyze the fly’s physical attributes, sort the flies by male and female, and even carry out a microdissection to reveal the fly’s minuscule brain. In one experiment, the robot’s machine vision was able to differentiate between two strains of flies so similar they are indistinguishable to the human eye.
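The thorax-detection step described above amounts to finding bright infrared reflections in a camera frame. A toy illustration with a synthetic intensity grid (the threshold and data are hypothetical, not the robot’s actual vision pipeline):

```python
# Synthetic 2-D intensity grid: two bright thorax reflections on a dim background
grid = [
    [0.1, 0.2, 0.1, 0.1],
    [0.1, 0.9, 0.2, 0.1],
    [0.1, 0.2, 0.1, 0.8],
]

THRESHOLD = 0.5  # hypothetical reflectance cutoff

def find_reflections(image, threshold):
    """Return (row, col) positions whose intensity exceeds the threshold."""
    return [(r, c)
            for r, row in enumerate(image)
            for c, v in enumerate(row)
            if v > threshold]

print(find_reflections(grid, THRESHOLD))  # [(1, 1), (2, 3)]
```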

Speeding disease research

All this is good news to the legion of graduate students who still spend hours a day looking at flies under a microscope as part of work that continues to uncover mechanisms in human aging, cancer, diabetes and a range of other diseases.

Although flies and humans have obvious differences, in many cases our cells and organs behave in similar ways and it is easier to study those processes in flies than in humans. The earliest information about how radiation causes gene mutations came from fruit flies, as did an understanding of our daily sleep/waking rhythms. And many of the molecules that are now famous for their roles in regulating how cells communicate were originally discovered by scientists hunched over microscopes staring at the unmoving bodies of anesthetized flies.

Now, that list of fruit fly contributions can be expanded to include behavioral studies, previously impossible because the humans carrying out the analysis can neither see fly behaviors clearly nor distinguish between individuals.

In their paper, Schnitzer and his team had the robot pick up a fly and carry it to a trackball. Once there, they exposed the fly to different smells and could record how the fly behaved — racing along the trackball to get closer or attempting to turn away.

The work was funded by the W.M. Keck Foundation, the Stanford Bio-X program, an NIH Director’s Pioneer Award, and the Stanford-NIBIB Training Program in Biomedical Imaging Instrumentation.

Converting blood stem cells to sensory neural cells to predict and treat pain

McMaster University scientists have discovered how to make adult sensory neurons from a patient’s blood sample to measure pain (credit: McMaster University)

Stem-cell scientists at McMaster University have developed a way to directly convert adult human blood cells to sensory neurons, providing the first objective measure of how patients may feel things like pain, temperature, and pressure, the researchers reveal in an open-access paper in the journal Cell Reports.

Currently, scientists and physicians have a limited understanding of the complex issue of pain and how to treat it. “The problem is that unlike blood, a skin sample or even a tissue biopsy, you can’t take a piece of a patient’s neural system,” said Mick Bhatia, director of the McMaster Stem Cell and Cancer Research Institute and research team leader. “It runs like complex wiring throughout the body and portions cannot be sampled for study.

“Now we can take easy to obtain blood samples, and make the main cell types of neurological systems in a dish that is specialized for each patient,” said Bhatia. “We can actually take a patient’s blood sample, as routinely performed in a doctor’s office, and with it we can produce one million sensory neurons, [which] make up the peripheral nerves. We can also make central nervous system cells.”

Testing pain drugs

The new technology has “broad and immediate applications,” said Bhatia: It allows researchers to understand disease and improve treatments by asking questions such as: Why is it that certain people feel pain versus numbness? Is this something genetic? Can the neuropathy that diabetic patients experience be mimicked in a dish?

It also paves the way for the discovery of new pain drugs that don’t just numb the perception of pain. Bhatia said non-specific opioids used for decades are still being used today. “If I was a patient and I was feeling pain or experiencing neuropathy, the prized pain drug for me would target the peripheral nervous system neurons, but do nothing to the central nervous system, thus avoiding addictive drug side effects,” said Bhatia.

“Until now, no one’s had the ability and required technology to actually test different drugs to find something that targets the peripheral nervous system, and not the central nervous system, in a patient-specific, or personalized manner.”

A patient time machine 

Bhatia’s team also successfully tested their process with cryopreserved (frozen) blood. Since blood samples are taken and frozen in many clinical trials, this gives them “almost a bit of a time machine” to run tests on neurons created from the blood samples of patients in past clinical trials, where responses and outcomes have already been recorded.

In the future, the process may have prognostic (predictive diagnostic) potential, explained Bhatia: one might be able to look at a patient with type 2 diabetes and predict whether they will experience neuropathy, by running tests in the lab using their own neural cells derived from their blood sample.

“This bench-to-bedside research is very exciting and will have a major impact on the management of neurological diseases, particularly neuropathic pain,” said Akbar Panju, medical director of the Michael G. DeGroote Institute for Pain Research and Care, a clinician and professor of medicine.

“This research will help us understand the response of cells to different drugs and different stimulation responses, and allow us to provide individualized or personalized medical therapy for patients suffering with neuropathic pain.”

This research was supported by the Canadian Institutes of Health Research, Ontario Institute of Regenerative Medicine, Marta and Owen Boris Foundation, J.P. Bickell Foundation, the Ontario Brain Institute, and Brain Canada.

Pain insensitivity

In related news, an international team of researchers co-led by the University of Cambridge reported Monday in the journal Nature Genetics that they have identified a gene, PRDM12, that is essential to the production of pain-sensing neurons in humans. Rare individuals — around one in a million people in the UK — are born unable to feel pain, in a condition known as congenital insensitivity to pain (CIP). These people accumulate numerous self-inflicted injuries, often leading to reduced lifespan.

The researchers are hopeful that this new gene could be an excellent candidate for drug development.


Abstract of Single Transcription Factor Conversion of Human Blood Fate to NPCs with CNS and PNS Developmental Capacity

The clinical applicability of direct cell fate conversion depends on obtaining tissue from patients that is easy to harvest, store, and manipulate for reprogramming. Here, we generate induced neural progenitor cells (iNPCs) from neonatal and adult peripheral blood using single-factor OCT4 reprogramming. Unlike fibroblasts that share molecular hallmarks of neural crest, OCT4 reprogramming of blood was facilitated by SMAD+GSK-3 inhibition to overcome restrictions on neural fate conversion. Blood-derived (BD) iNPCs differentiate in vivo and respond to guided differentiation in vitro, producing glia (astrocytes and oligodendrocytes) and multiple neuronal subtypes, including dopaminergic (CNS related) and nociceptive neurons (peripheral nervous system [PNS]). Furthermore, nociceptive neurons phenocopy chemotherapy-induced neurotoxicity in a system suitable for high-throughput drug screening. Our findings provide an easily accessible approach for generating human NPCs that harbor extensive developmental potential, enabling the study of clinically relevant neural diseases directly from patient cohorts.

Combining light and sound to create nanoscale optical waveguides

Researchers have shown that a DC voltage applied to layers of graphene and boron nitride can be used to control light emission from a nearby atom. Here, graphene is represented by a maroon-colored top layer; boron nitride is represented by yellow-green lattices below the graphene; and the atom is represented by a grey circle. A low concentration of DC voltage (in blue) allows the light to propagate inside the boron nitride, forming a tightly confined waveguide for optical signals. (credit: Anshuman Kumar Srivastava and Jose Luis Olivares/MIT)

In a new discovery that could lead to chips that combine optical and electronic components, researchers at MIT, IBM and two universities have found a way to combine light and sound with far lower losses than when such devices are made separately and then interconnected, they say.

Light’s interaction with graphene produces vibrating electron particles called plasmons, while light interacting with hexagonal boron nitride (hBN) produces phonons (sound “particles”). MIT’s Nicholas Fang and his colleagues found that when the materials are combined in a certain way, the plasmons and phonons can couple, producing a strong resonance.

The properties of the graphene allow precise control over light, while hBN provides very strong confinement and guidance of the light. Combining the two makes it possible to create new “metamaterials” that marry the advantages of both, the researchers say.

The work is co-authored by MIT associate professor of mechanical engineering Nicholas Fang and graduate student Anshuman Kumar, and their co-authors at IBM’s T.J. Watson Research Center, Hong Kong Polytechnic University, and the University of Minnesota.

According to Phaedon Avouris, a researcher at IBM and co-author of the paper, “The combination of these two materials provides a unique system that allows the manipulation of optical processes.”

The two materials are structurally similar — both composed of hexagonal arrays of atoms that form two-dimensional sheets — but they each interact with light quite differently. The researchers found that these interactions can be complementary, and can couple in ways that afford a great deal of control over the behavior of light.

The hybrid material blocks light when a particular voltage is applied to the graphene layer. When a different voltage is applied, a special kind of emission and propagation, called “hyperbolicity,” occurs. This phenomenon has not been seen before in optical systems, Fang says.

Nanoscale optical waveguides

The result: an extremely thin sheet of material can interact strongly with light, allowing beams to be guided, funneled, and controlled by different voltages applied to the sheet.

The combined materials create a tuned system that can be adjusted to allow light only of certain specific wavelengths or directions to propagate, they say.

These properties should make it possible, Fang says, to create tiny optical waveguides, about 20 nanometers in size — the same size range as the smallest features that can now be produced in microchips.

“Our work paves the way for using 2-D material heterostructures for engineering new optical properties on demand,” says co-author Tony Low, a researcher at IBM and the University of Minnesota.

Single-molecule optical resolution

Another potential application, Fang says, comes from the ability to switch a light beam on and off at the material’s surface; because the material naturally works at near-infrared wavelengths, this could enable new avenues for infrared spectroscopy, he says. “It could even enable single-molecule resolution,” Fang says, of biomolecules placed on the hybrid material’s surface.

Sheng Shen, an assistant professor of mechanical engineering at Carnegie Mellon University who was not involved in this research, says, “This work represents significant progress on understanding tunable interactions of light in graphene-hBN.” The work is “pretty critical” for providing the understanding needed to develop optoelectronic or photonic devices based on graphene and hBN, he says, and “could provide direct theoretical guidance on designing such types of devices. … I am personally very excited about this novel theoretical work.”

The research team also included Kin Hung Fung of Hong Kong Polytechnic University. The work was supported by the National Science Foundation and the Air Force Office of Scientific Research.


Abstract of Tunable Light–Matter Interaction and the Role of Hyperbolicity in Graphene–hBN System

Hexagonal boron nitride (hBN) is a natural hyperbolic material, which can also accommodate highly dispersive surface phonon-polariton modes. In this paper, we examine theoretically the mid-infrared optical properties of graphene–hBN heterostructures derived from their coupled plasmon–phonon modes. We find that the graphene plasmon couples differently with the phonons of the two Reststrahlen bands, owing to their different hyperbolicity. This also leads to distinctively different interaction between an external quantum emitter and the plasmon–phonon modes in the two bands, leading to substantial modification of its spectrum. The coupling to graphene plasmons allows for additional gate tunability in the Purcell factor and narrow dips in its emission spectra.

Light-emitting, transparent flexible paper developed in China

Left: optical images of normal filter paper (bottom layer), nanocellulose-quantum dot paper (middle layer), and with acrylic resin coating added (top layer). Right: photo of luminescent nanocellulose-quantum dot paper in operation. (credit: Juan Xue et al./ACS Applied Materials & Interfaces)

The first light-emitting, transparent, flexible paper made from environmentally friendly materials has been developed by scientists at Sichuan University in China, the scientists report in the journal ACS Applied Materials & Interfaces.

Most current flexible electronics paper designs rely on petroleum-based plastics and toxic materials.

The researchers developed a thin, clear nanocellulose paper made from wood flour and infused it with biocompatible quantum dots — tiny semiconducting crystals — made out of zinc and selenium. The paper glowed at room temperature and could be rolled and unrolled without cracking.

The researchers are currently developing papers that emit other colors than blue.

The authors acknowledge funding from the Research Fund for the Doctoral Program of Higher Education of China and the National Natural Science Foundation of China.


Abstract of Let It Shine: A Transparent and Photoluminescent Foldable Nanocellulose/Quantum Dot Paper

Exploration of environmentally friendly light-emitting devices with extremely low weight has been a trend in recent decades for modern digital technology. Herein, we describe a simple suction filtration method to develop a transparent and photoluminescent nanocellulose (NC) paper, which contains ZnSe quantum dot (QD) with high quantum yield as a functional filler. ZnSe QD can be dispersed uniformly in NC, and a quite low coefficient of thermal expansion is determined for the resultant composite paper, suggesting its good dimensional stability. These results indicate that the meeting of NC with ZnSe QD can bring a brilliant future during the information age.

Printing low-cost, flexible radio-frequency antennas with graphene ink

These scanning electron microscope images show graphene ink after it was deposited and dried (a), and then compressed (b). Compression makes the graphene nanoflakes more dense, which improves their electrical conductivity. (credit: Xianjun Huang, et al./University of Manchester)

The first low-cost, flexible, environmentally friendly radio-frequency antenna using compressed graphene ink has been printed by researchers from the University of Manchester and BGT Materials Limited. Potential uses of the new process include radio-frequency identification (RFID) tags, wireless sensors, wearable electronics, and printing on materials like paper and plastic.

Commercial RFID tags are currently made from metals like silver (very expensive) or aluminum or copper (both prone to being oxidized).

Graphene conductive ink avoids those problems and can be used to print circuits and other electronic components, but typical graphene inks contain one or more binders (polymeric, epoxy, siloxane, or resin). These are required to form a continuous (unbroken) conductive film. The problem is that these binders are insulators, so they reduce the conductivity of the connection. Also, applying the binder material requires annealing, a high-heat process (similar to soldering with a resin binder), which would destroy materials like paper or plastic.

Printing graphene ink on paper

So the researchers developed a new process:

1. Graphene flakes are mixed with a solvent, and the resulting ink is deposited on the desired surface (paper, in the case of the experiment) and dried. (This is shown in step a in the illustration above.)

2. The flakes are compressed (step b above) with a roller (similar to using a roller to compress asphalt when making a road). That step increases the graphene’s conductivity by more than 50 times.
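The numbers in the paper’s abstract tie together through the standard relation between bulk conductivity, sheet resistance, and film thickness, σ = 1/(R_s · t). A back-of-the-envelope check (assuming the abstract’s truncated “3.8” is a sheet resistance in ohms per square — the implied thickness is our inference, not a figure from the paper):

```python
def conductivity(sheet_resistance_ohm_sq: float, thickness_m: float) -> float:
    """Bulk conductivity (S/m) from sheet resistance and film thickness."""
    return 1.0 / (sheet_resistance_ohm_sq * thickness_m)

def implied_thickness(sigma_s_per_m: float, sheet_resistance_ohm_sq: float) -> float:
    """Film thickness (m) consistent with a given conductivity and sheet resistance."""
    return 1.0 / (sigma_s_per_m * sheet_resistance_ohm_sq)

# Values reported in the paper's abstract
sigma = 4.3e4   # S/m
r_s   = 3.8     # ohms per square (assumed unit)

print(implied_thickness(sigma, r_s))  # ~6.1e-6 m, i.e. a laminate a few micrometers thick
```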

Graphene printed on paper (credit: Xianjun Huang et al./Applied Physics Letters)

The researchers tested their compressed graphene laminate by printing a graphene antenna onto a piece of paper. The material radiated radio-frequency power effectively, said Xianjun Huang, the first author of the paper and a PhD candidate in the Microwave and Communications Group in the School of Electrical and Electronic Engineering.

The researchers plan to further develop graphene-enabled RFID tags, as well as sensors and wearable electronics. They present their results in the journal Applied Physics Letters from AIP Publishing.


Abstract of Binder-free highly conductive graphene laminate for low cost printed radio frequency applications

In this paper we demonstrate realization of a printable RFID antenna by low-temperature processing of graphene ink. The required ultra-low resistance is achieved by rolling compression of binder-free graphene laminate. With compression, the conductivity of the graphene laminate is increased by more than 50 times compared to that of the as-deposited one. Graphene laminate with conductivity of 4.3×10⁴ S/m and sheet resistance of 3.8.

Robots master skills with ‘deep learning’ technique

Robot learns to use hammer. What could go wrong? (credit: UC Berkeley)

UC Berkeley researchers have developed new algorithms that enable robots to learn motor tasks by trial and error, using a process that more closely approximates the way humans learn.

They demonstrated their technique, a type of reinforcement learning, by having a robot complete various tasks — putting a clothes hanger on a rack, assembling a toy plane, screwing a cap on a water bottle, and more — without pre-programmed details about its surroundings.

A new AI approach

“What we’re reporting on here is a new approach to empowering a robot to learn,” said Professor Pieter Abbeel of UC Berkeley’s Department of Electrical Engineering and Computer Sciences. “The key is that when a robot is faced with something new, we won’t have to reprogram it. The exact same software, which encodes how the robot can learn, was used to allow the robot to learn all the different tasks we gave it.”

The work is part of a new People and Robots Initiative at UC’s Center for Information Technology Research in the Interest of Society (CITRIS). The new multi-campus, multidisciplinary research initiative seeks to keep the advances in artificial intelligence, robotics and automation aligned to human needs.

“Most robotic applications are in controlled environments where objects are in predictable positions,” said UC Berkeley faculty member Trevor Darrell, director of the Berkeley Vision and Learning Center. “The challenge of putting robots into real-life settings, like homes or offices, is that those environments are constantly changing. The robot must be able to perceive and adapt to its surroundings.”

Neural-inspired learning

Coat-hanger training (no wire hangers!) (credit: UC Berkeley)

Conventional, but impractical, approaches to helping a robot make its way through a 3D world include pre-programming it to handle the vast range of possible scenarios or creating simulated environments within which the robot operates.

Instead, the UC Berkeley researchers turned to a new branch of artificial intelligence known as deep learning, which is loosely inspired by the neural circuitry of the human brain when it perceives and interacts with the world.

“For all our versatility, humans are not born with a repertoire of behaviors that can be deployed like a Swiss army knife, and we do not need to be programmed,” said postdoctoral researcher Sergey Levine. “Instead, we learn new skills over the course of our life from experience and from other humans. This learning process is so deeply rooted in our nervous system that we cannot even communicate to another person precisely how the resulting skill should be executed. We can at best hope to offer pointers and guidance as they learn it on their own.”

In the world of artificial intelligence, deep learning programs create “neural nets” in which layers of artificial neurons process overlapping raw sensory data, whether it be sound waves or image pixels. This helps the robot recognize patterns and categories among the data it is receiving. People who use Siri on their iPhones, Google’s speech-to-text program or Google Street View might already have benefited from the significant advances deep learning has provided in speech and vision recognition.
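The layered processing described above can be sketched as a minimal feedforward network, where each layer re-represents the raw input so that later layers can pick out higher-level patterns. Every size and weight here is illustrative, not taken from any real system:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple artificial-neuron nonlinearity: pass positive values, zero the rest.
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass raw input through successive layers of artificial neurons."""
    for w, b in layers:
        x = relu(x @ w + b)
    return x

# Illustrative sizes: 64 raw "pixel" inputs -> 32 -> 16 -> 4 category scores.
sizes = [64, 32, 16, 4]
layers = [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

pixels = rng.normal(size=64)          # stand-in for raw sensory data
scores = forward(pixels, layers)      # one score per category
print(scores.shape)                   # (4,)
```

Training adjusts the weights in `layers` so the output scores match the right categories; the sketch shows only the forward, pattern-extracting pass.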

Applying deep reinforcement learning to motor tasks in unstructured 3D environments has been far more challenging, however, since the task goes beyond the passive recognition of images and sounds.

BRETT masters human tasks on its own

A little nightcap? BRETT learns to put a cap on a bottle by trial and error, calculating values for 92,000 parameters. (credit: UC Berkeley)

In the experiments, the UC Berkeley researchers worked with a Willow Garage Personal Robot 2 (PR2), which they nicknamed BRETT, or Berkeley Robot for the Elimination of Tedious Tasks.

They presented BRETT with a series of motor tasks, such as placing blocks into matching openings or stacking Lego blocks. The algorithm controlling BRETT’s learning included a reward function that provided a score based upon how well the robot was doing with the task.

BRETT takes in the scene, including the position of its own arms and hands, as viewed by the camera. The algorithm provides real-time feedback via the score based upon the robot’s movements. Movements that bring the robot closer to completing the task will score higher than those that do not. The score feeds back through the neural net, so the robot can learn which movements are better for the task at hand.

This end-to-end training process underlies the robot’s ability to learn on its own. As the PR2 moves its joints and manipulates objects, the algorithm calculates good values for the 92,000 parameters of the neural net it needs to learn.
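The trial-and-error loop above can be sketched in miniature: perturb the parameters, score the result with a reward function, and keep whatever scores better. This is schematic only; the real system trains a deep neural network over 92,000 parameters rather than a 3-element vector, and uses a more sophisticated learning algorithm than the hill climbing shown here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a motor task: the "policy" is a parameter vector, and
# the reward scores how close the resulting motion gets to a target pose.
target = np.array([0.4, -0.2, 0.7])

def reward(params):
    # Higher score = closer to completing the task (cf. BRETT's scoring).
    return -np.sum((params - target) ** 2)

def learn(steps=2000, noise=0.05):
    """Trial and error: try a perturbed parameter vector, keep it if it
    scores better. A schematic of reward-driven learning, nothing more."""
    params = np.zeros(3)
    for _ in range(steps):
        candidate = params + rng.normal(0, noise, size=3)
        if reward(candidate) > reward(params):
            params = candidate
    return params

learned = learn()
print(np.round(learned, 2))  # close to the target pose
```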

With this approach, when given the relevant coordinates for the beginning and end of the task, the PR2 could master a typical assignment in about 10 minutes. When the robot is not given the location for the objects in the scene and needs to learn vision and control together, the learning process takes about three hours.

Abbeel says the field will likely see significant improvements as the ability to process vast amounts of data improves.

“With more data, you can start learning more complex things,” he said. “We still have a long way to go before our robots can learn to clean a house or sort laundry, but our initial results indicate that these kinds of deep learning techniques can have a transformative effect in terms of enabling robots to learn complex tasks entirely from scratch. In the next five to 10 years, we may see significant advances in robot learning capabilities through this line of work.”

The latest developments will be presented on Thursday, May 28, in Seattle at the International Conference on Robotics and Automation (ICRA). The Defense Advanced Research Projects Agency, Office of Naval Research, U.S. Army Research Laboratory and National Science Foundation helped support this research.


UC Berkeley Campus Life | BRETT the robot learns to put things together on his own

Robotic arm precisely controlled by thought

Erik Sorto smoothly controls robotic arm with his brain (credit: Spencer Kellis and Christian Klaes /Caltech)

Paralyzed from the neck down, Erik G. Sorto now can smoothly move a robotic arm just by thinking about it, thanks to a clinical collaboration between Caltech, Keck Medicine of USC, and Rancho Los Amigos National Rehabilitation Center.

Previous neural prosthetic devices, such as BrainGate, were implanted in the motor cortex, resulting in delayed, jerky movements. The new device was implanted in the posterior parietal cortex (PPC), a part of the brain that controls the intent to move, not the movement directly.

That makes Sorto, who has been paralyzed for over 10 years, the first quadriplegic person in the world to perform a fluid hand-shaking gesture or play “rock, paper, scissors,” using a robotic arm.

In April 2013, Keck Medicine of USC surgeons implanted a pair of small electrode arrays in two parts of the posterior parietal cortex, one that controls reach and another that controls grasp.

Each 4-by-4 millimeter array contains 96 active electrodes that, in turn, each record the activity of single neurons in the PPC. The arrays are connected by a cable to a system of computers that process the signals, to decode the brain’s intent and control output devices, such as a computer cursor and a robotic arm.
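The decoding step, from per-electrode activity to intended movement, is often framed as a regression problem. Below is a minimal sketch on synthetic data, assuming a simple linear decoder fit by least squares; the trial's actual decoding algorithms are described in the paper, and everything here (rates, mapping, noise levels) is made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the recording setup: 96 electrodes, each sample
# a vector of firing rates; the decoder maps rates to a 2D movement
# intent (e.g., cursor velocity).
n_units, n_samples = 96, 500
true_mapping = rng.normal(size=(n_units, 2))   # unknown in a real experiment

rates = rng.poisson(5.0, size=(n_samples, n_units)).astype(float)
intent = rates @ true_mapping + rng.normal(0, 0.1, size=(n_samples, 2))

# Fit a linear decoder by least squares: intent ~= rates @ W.
W, *_ = np.linalg.lstsq(rates, intent, rcond=None)

# Decode a new sample of neural activity into a 2D intent vector.
new_rates = rng.poisson(5.0, size=n_units).astype(float)
decoded = new_rates @ W
print(decoded.shape)  # (2,)
```

In a real system the training pairs come from calibration sessions in which the subject imagines known movements; here the "true" mapping is simply generated so the fit can be checked.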

Although Sorto was able to move the robotic arm with his thoughts immediately, it took weeks of imagined practice for him to refine his control of the arm.

Now Sorto is able to use the robotic arm to execute advanced tasks with his mind, such as controlling a computer cursor, drinking a beverage, and making a hand-shaking gesture.

Designed to test the safety and effectiveness of this new approach, the clinical trial was led by principal investigator Richard Andersen, the James G. Boswell Professor of Neuroscience at Caltech, neurosurgeon Charles Y. Liu, professor of neurological surgery and neurology at the Keck School of Medicine of USC and biomedical engineering at USC, and neurologist Mindy Aisen, chief medical officer at Rancho Los Amigos.

Aisen, also a clinical professor of neurology at the Keck School of Medicine of USC, says that advancements in prosthetics like these hold promise for the future of patient rehabilitation.

NeuroPort microelectrode array implanted in Erik Sorto’s posterior parietal cortex (credit: Blackrock Microsystems)

“This research is relevant to the role of robotics and brain-machine interfaces as assistive devices, but also speaks to the ability of the brain to learn to function in new ways,” Aisen said. “We have created a unique environment that can seamlessly bring together rehabilitation, medicine, and science as exemplified in this study.”

Sorto has signed on to continue working on the project for a third year. He says the study has inspired him to continue his education and pursue a master’s degree in social work.

The results of the clinical trial appear in the May 22, 2015, edition of the journal Science. The implanted device and signal processors used in the clinical trial were the NeuroPort Array and NeuroPort Bio-potential Signal Processors developed by Blackrock Microsystems in Salt Lake City, Utah. The robotic arm used in the trial was the Modular Prosthetic Limb, developed at the Applied Physics Laboratory at Johns Hopkins.

This trial was funded by the National Institutes of Health, the Boswell Foundation, the Department of Defense, and the USC Neurorestoration Center.


Caltech | Next Generation of Neuroprosthetics: Science Explained — R. Andersen May 2015


Keck Medicine of USC | Next Generation of Neuroprosthetics: Erik’s Story


Abstract of Decoding motor imagery from the posterior parietal cortex of a tetraplegic human

Nonhuman primate and human studies have suggested that populations of neurons in the posterior parietal cortex (PPC) may represent high-level aspects of action planning that can be used to control external devices as part of a brain-machine interface. However, there is no direct neuron-recording evidence that human PPC is involved in action planning, and the suitability of these signals for neuroprosthetic control has not been tested. We recorded neural population activity with arrays of microelectrodes implanted in the PPC of a tetraplegic subject. Motor imagery could be decoded from these neural populations, including imagined goals, trajectories, and types of movement. These findings indicate that the PPC of humans represents high-level, cognitive aspects of action and that the PPC can be a rich source for cognitive control signals for neural prosthetics that assist paralyzed patients.

Tunable liquid-metal antennas

Antenna, feed, and reservoir of a liquid metal antenna (credit: Jacob Adams)

Using electrochemistry, North Carolina State University (NCSU) researchers have created a reconfigurable, voltage-controlled liquid metal antenna that may play a role in future mobile devices and the coming Internet of Things.

By placing a positive or negative electrical voltage across the interface between the liquid metal and an electrolyte, they found that they could cause the liquid metal to spread (flow into a capillary) or contract, changing its operating frequency and radiation pattern.

“Using a liquid metal — such as eutectic gallium and indium — that can change its shape allows us to modify antenna properties [such as frequency] more dramatically than is possible with a fixed conductor,” explained Jacob Adams, an assistant professor in the Department of Electrical and Computer Engineering at NCSU and a co-author of an open-access paper in the Journal of Applied Physics, from AIP Publishing.

The positive voltage “electrochemically deposits an oxide on the surface of the metal that lowers the surface tension, while a negative [voltage] removes the oxide to increase the surface tension,” Adams said. These differences in surface tension dictate which direction the metal will flow.

This advance makes it possible to “remove or regenerate enough of the ‘oxide skin’ with an applied voltage to make the liquid metal flow into or out of the capillary. We call this ‘electrochemically controlled capillarity,’ which is much like an electrochemical pump for the liquid metal,” Adams noted.

Although antenna properties can be reconfigured to some extent by using solid conductors with electronic switches, the liquid metal approach greatly increases the range over which the antenna’s operating frequency can be tuned. “Our antenna prototype using liquid metal can tune over a range of at least two times greater than systems using electronic switches,” he pointed out.

Previous liquid-metal designs typically required external pumps that can’t be easily integrated into electronic systems.
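The tuning itself follows from basic antenna physics: the element's electrical length sets its resonant frequency. A rough sketch, treating the liquid-metal element as an ideal quarter-wave monopole in free space (an assumption; the real device's geometry, feed, and dielectric loading will shift these numbers):

```python
# Quarter-wave monopole resonance: f = c / (4 * L), so a longer radiating
# element resonates at a lower frequency. Idealized model for illustration.

C = 3.0e8  # speed of light, m/s

def monopole_freq_hz(length_m):
    return C / (4.0 * length_m)

def monopole_length_m(freq_hz):
    return C / (4.0 * freq_hz)

# Lengths needed to span the abstract's 0.66-3.4 GHz tuning range:
long_arm = monopole_length_m(0.66e9)   # ~11.4 cm of liquid metal
short_arm = monopole_length_m(3.4e9)   # ~2.2 cm

print(f"{long_arm*100:.1f} cm -> {monopole_freq_hz(long_arm)/1e9:.2f} GHz")
print(f"{short_arm*100:.1f} cm -> {monopole_freq_hz(short_arm)/1e9:.2f} GHz")
```

The roughly 5:1 length ratio between the two arms mirrors the 5:1 tuning range reported in the abstract, which is why pumping metal into or out of the capillary tunes the antenna so widely.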

Extending frequencies for mobile devices

“Mobile device sizes are continuing to shrink and the burgeoning Internet of Things will likely create an enormous demand for small wireless systems,” Adams said. “And as the number of services that a device must be capable of supporting grows, so too will the number of frequency bands over which the antenna and RF front-end must operate. This combination will create a real antenna design challenge for mobile systems because antenna size and operating bandwidth tend to be conflicting tradeoffs.”

This is why tunable antennas are highly desirable: they can be miniaturized and adapted to correct for near-field loading problems such as the iPhone 4's well-publicized "death grip" issue, in which calls were dropped when the phone was held by the bottom. Liquid metal systems "yield a larger range of tuning than conventional reconfigurable antennas, and the same approach can be applied to other components such as tunable filters," Adams said.

In the long term, Adams and colleagues hope to gain greater control of the shape of the liquid metal in two-dimensional surfaces to obtain nearly any desired antenna shape. “This would enable enormous flexibility in the electromagnetic properties of the antenna and allow a single adaptive antenna to perform many functions,” he added.


Abstract of A reconfigurable liquid metal antenna driven by electrochemically controlled capillarity 

We describe a new electrochemical method for reversible, pump-free control of liquid eutectic gallium and indium (EGaIn) in a capillary. Electrochemical deposition (or removal) of a surface oxide on the EGaIn significantly lowers (or increases) its interfacial tension as a means to induce the liquid metal into (or out of) the capillary. A fabricated prototype demonstrates this method in a reconfigurable antenna application in which EGaIn forms the radiating element. By inducing a change in the physical length of the EGaIn, the operating frequency of the antenna tunes over a large bandwidth. This purely electrochemical mechanism uses low DC voltages to tune the antenna continuously and reversibly between 0.66 GHz and 3.4 GHz, resulting in a 5:1 tuning range. Gain and radiation pattern measurements agree with electromagnetic simulations of the device, and its measured radiation efficiency varies from 41% to 70% over its tuning range.