Future ‘lightwave’ computers could run 100,000 times faster

Terahertz pulses in a semiconductor crystal (credit: Fabian Langer, Regensburg University)

Using extremely short pulses of terahertz (THz) radiation instead of electrical currents could lead to future computers that run ten to 100,000 times faster than today’s state-of-the-art electronics, according to an international team of researchers writing in the journal Nature Photonics.

In a conventional computer, electrons moving through a semiconductor occasionally collide with other electrons, releasing energy as heat and slowing down. With the proposed “lightwave electronics” approach, electrons would instead be guided by ultrafast pulses of THz radiation, the part of the electromagnetic spectrum between microwaves and infrared light. The travel time could then be so short that the electrons would be statistically unlikely to hit anything, according to senior author Rupert Huber, a professor of physics at the University of Regensburg who led the experiment.

In the experiment, the researchers shined THz pulses into a crystal of the semiconductor gallium selenide.* These pulses were ultra-short (less than 100 femtoseconds, or 100 quadrillionths of a second). Each pulse popped electrons in the semiconductor into a higher energy level — which meant that they were free to move around.

When the electrons emitted light as they came down from the higher energy level, they emitted much shorter pulses than the electromagnetic radiation going in — just a few femtoseconds long — quick enough to read and write information to electrons at ultra-high speed.

But first, researchers need to be able to control electrons in a semiconductor. This work takes a step in that direction by mobilizing groups of electrons inside a semiconductor crystal.

Quantum computation

Because femtosecond pulses are fast enough to catch an electron in the interval between its excitation and its return to a lower energy level, they could potentially also be used for quantum computations, with electrons in excited states serving as qubits. The researchers managed to launch one electron along two excitation pathways simultaneously, which is not classically possible.

An electron is small enough that it behaves like a wave as well as a particle, and when it is in an excited state, its wavelength changes. Because the electron was in two excited states at once, those two waves interfered with one another and left a fingerprint in the femtosecond pulse that the electron emitted.

The research is funded by the European Research Council and the German Research Foundation.

* “We generated high harmonics by irradiating a 40-μm-thick crystal of gallium selenide with intense, multi-THz pulses. These pulses were obtained by difference frequency mixing of two phase-correlated near-infrared pulse trains from a dual optical parametric amplifier pumped by a titanium sapphire amplifier. … The centre frequency was tunable and set to 33 THz in the experiments.” — F. Langer et al./Nature Photonics

Abstract of Symmetry-controlled temporal structure of high-harmonic carrier fields from a bulk crystal

High-harmonic (HH) generation in crystalline solids marks an exciting development, with potential applications in high-efficiency attosecond sources, all-optical bandstructure reconstruction and quasiparticle collisions. Although the spectral and temporal shape of the HH intensity has been described microscopically, the properties of the underlying HH carrier wave have remained elusive. Here, we analyse the train of HH waveforms generated in a crystalline solid by consecutive half cycles of the same driving pulse. Extending the concept of frequency combs to optical clock rates, we show how the polarization and carrier-envelope phase (CEP) of HH pulses can be controlled by the crystal symmetry. For certain crystal directions, we can separate two orthogonally polarized HH combs mutually offset by the driving frequency to form a comb of even and odd harmonic orders. The corresponding CEP of successive pulses is constant or offset by π, depending on the polarization. In the context of a quantum description of solids, we identify novel capabilities for polarization- and phase-shaping of HH waveforms that cannot be accessed with gaseous sources.

Engineers shrink atomic-force microscope to dime-sized device

A MEMS-based atomic force microscope developed by engineers at the University of Texas at Dallas that is about 1 square centimeter in size (top center), shown attached here to a small printed circuit board that contains circuitry, sensors and other miniaturized components that control the movement and other aspects of the device. (credit: University of Texas at Dallas)

University of Texas at Dallas researchers have created an atomic force microscope (AFM) on a chip, dramatically shrinking the size — and, hopefully, the price — of a microscope used to characterize material properties down to molecular dimensions.

“A standard atomic force microscope is a large, bulky instrument, with multiple control loops, electronics and amplifiers,” said Dr. Reza Moheimani, professor of mechanical engineering at UT Dallas.  “We have managed to miniaturize all of the electromechanical components down onto a single small chip.”

Moheimani and his colleagues describe their prototype device in this month’s issue of the IEEE Journal of Microelectromechanical Systems.

A conventional AFM consists of a tiny cantilever, or arm, that has a sharp tip attached to one end. As the apparatus scans back and forth across the surface of a sample, or the sample moves under it, the interactive forces between the sample and the tip cause the cantilever to move up and down as the tip follows the contours of the surface. Those movements are then translated into an image. (credit: CC/Opensource Handbook of Nanoscience and Nanotechnology)

An atomic force microscope (AFM) is a scientific tool that is used to create detailed three-dimensional images of the surfaces of materials, down to the nanometer scale — roughly on the scale of individual molecules.

“An AFM is a microscope that ‘sees’ a surface kind of the way a visually impaired person might, by touching. You can get a resolution that is well beyond what an optical microscope can achieve,” explained Moheimani, who holds the James Von Ehr Distinguished Chair in Science and Technology in the Erik Jonsson School of Engineering and Computer Science.

The MEMS version

The UT Dallas team created its prototype on-chip AFM using a microelectromechanical systems (MEMS) approach.

“Classic examples of MEMS technology are the accelerometers and gyroscopes found in smartphones,” said Anthony Fowler, PhD, a research scientist in Moheimani’s Laboratory for Dynamics and Control of Nanosystems and one of the article’s co-authors. “These used to be big, expensive, mechanical devices, but using MEMS technology, accelerometers have shrunk down onto a single chip, which can be manufactured for just a few dollars apiece.”

The MEMS-based AFM is about 1 square centimeter in size, or a little smaller than a dime. It is attached to a small printed circuit board that contains circuitry, sensors, and other miniaturized components that control the movement and other aspects of the device.

Conventional AFM (credit: Asylum Research, Inc.)

Because conventional AFMs require lasers and other large components to operate, their use can be limited. They’re also expensive. “An educational version can cost about $30,000 or $40,000, and a laboratory-level AFM can run $500,000 or more,” Moheimani said. “Our MEMS approach to AFM design has the potential to significantly reduce the complexity and cost of the instrument.

“One of the attractive aspects about MEMS is that you can mass-produce them, building hundreds or thousands of them in one shot, so the price of each chip would only be a few dollars. As a result, you might be able to offer the whole miniature AFM system for a few thousand dollars.”

Semiconductor-industry uses

A reduced size and price tag also could expand the AFM’s utility beyond current scientific applications.

“For example, the semiconductor industry might benefit from these small devices, in particular companies that manufacture the silicon wafers from which computer chips are made,” Moheimani said. “With our technology, you might have an array of AFMs to characterize the wafer’s surface to find micro-faults before the product is shipped out.”

The lab prototype is a first-generation device, Moheimani said, and the group is already working on ways to improve and streamline the fabrication of the device.

Moheimani’s research has been funded by UT Dallas startup funds, the Von Ehr Distinguished Chair, and the Defense Advanced Research Projects Agency.


Abstract of On-Chip Dynamic Mode Atomic Force Microscopy: A Silicon-on-Insulator MEMS Approach

The atomic force microscope (AFM) is an invaluable scientific tool; however, its conventional implementation as a relatively costly macroscale system is a barrier to its more widespread use. A microelectromechanical systems (MEMS) approach to AFM design has the potential to significantly reduce the cost and complexity of the AFM, expanding its utility beyond current applications. This paper presents an on-chip AFM based on a silicon-on-insulator MEMS fabrication process. The device features integrated xy electrostatic actuators and electrothermal sensors as well as an AlN piezoelectric layer for out-of-plane actuation and integrated deflection sensing of a microcantilever. The three-degree-of-freedom design allows the probe scanner to obtain topographic tapping-mode AFM images with an imaging range of up to 8 μm × 8 μm in closed loop. [2016-0211]

A biocompatible stretchable material for brain implants and ‘electronic skin’

A printed electrode pattern of a new polymer being stretched to several times its original length (top), and a transparent, highly stretchy “electronic skin” patch (bottom) made from the same material, forming an intimate interface with human skin to potentially measure various biomarkers (credit: Bao Lab)

Stanford chemical engineers have developed a soft, flexible plastic electrode that stretches like rubber but carries electricity like wires — ideal for brain interfaces and other implantable electronics, they report in an open-access March 10 paper in Science Advances.

Developed by Zhenan Bao, a professor of chemical engineering, and his team, the material is still a laboratory prototype, but the team hopes to develop it as part of their long-term focus on creating flexible materials that interface with the human body.

Flexible interface

“One thing about the human brain that a lot of people don’t know is that it changes volume throughout the day,” says postdoctoral research fellow Yue Wang, the first author on the paper. “It swells and de-swells.” The current generation of electronic implants can’t stretch and contract with the brain, making it complicated to maintain a good connection.

Illustration showing incorporation of ionic liquid-assisted stretchability and electrical conductivity (STEC) enhancers to convert conventional PEDOT:PSS film (top) to stretchable film (bottom). (credit: Wang et al., Sci. Adv.)

To create this flexible electrode, the researchers began with PEDOT:PSS, a plastic that combines high electrical conductivity with biocompatibility (it can safely be brought into contact with the human body) but is brittle. So they added a “STEC” (stretchability and electrical conductivity) enhancer, a molecule similar to the kind of additives used to thicken soups in industrial kitchens.

This additive transformed the plastic’s chunky, brittle molecular structure into a fishnet pattern, with holes in the strands that allow the material to stretch and deform. The resulting plastic remained highly conductive even when stretched to 800 percent of its original length.

Scientists at SLAC National Accelerator Laboratory, UCLA, the Materials Science Institute of Barcelona, and Samsung Advanced Institute of Technology were also involved in the research, which was funded by Samsung Electronics and the Air Force Office of Scientific Research.


Stanford University School of Engineering | Stretchable electrodes pave way for flexible electronics


Abstract of A highly stretchable, transparent, and conductive polymer

Previous breakthroughs in stretchable electronics stem from strain engineering and nanocomposite approaches. Routes toward intrinsically stretchable molecular materials remain scarce but, if successful, will enable simpler fabrication processes, such as direct printing and coating, mechanically robust devices, and more intimate contact with objects. We report a highly stretchable conducting polymer, realized with a range of enhancers that serve dual functions to change morphology and as conductivity-enhancing dopants in poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS). The polymer films exhibit conductivities comparable to the best reported values for PEDOT:PSS, with higher than 3100 S/cm under 0% strain and higher than 4100 S/cm under 100% strain—among the highest for reported stretchable conductors. It is highly durable under cyclic loading, with the conductivity maintained at 3600 S/cm even after 1000 cycles to 100% strain. The conductivity remained above 100 S/cm under 600% strain, with a fracture strain as high as 800%, which is superior to even the best silver nanowire– or carbon nanotube–based stretchable conductor films. The combination of excellent electrical and mechanical properties allowed it to serve as interconnects for field-effect transistor arrays with a device density that is five times higher than typical lithographically patterned wavy interconnects.

Brain has more than 100 times higher computational capacity than previously thought, say UCLA scientists

Neuron (blue) with dendrites (credit: Shelley Halpain/UC San Diego)

The brain has more than 100 times higher computational capacity than was previously thought, a UCLA team has discovered.

The finding, which upends textbook neuroscience, suggests that our brains are both analog and digital computers, and it could lead to new approaches for treating neurological disorders and developing brain-like computers, according to the researchers.

Illustration of neuron and dendrites. Dendrites receive electrochemical stimulation (via synapses, not shown here) from neurons (not shown here), and propagate that stimulation to the neuron cell body (soma). A neuron sends electrochemical stimulation via an axon to communicate with other neurons via telodendria (purple, right) at the end of the axon and synapses (not shown here). (credit: Quasar/CC).

Dendrites have been considered simple passive conduits of signals. But by working with animals that were moving around freely, the UCLA team showed that dendrites are in fact electrically active — generating nearly 10 times more spikes than the soma (neuron cell body).

Fundamentally changes our understanding of brain computation

The finding, reported in the March 9 issue of the journal Science, challenges the long-held belief that spikes in the soma are the primary way in which perception, learning and memory formation occur.

“Dendrites make up more than 90 percent of neural tissue,” said UCLA neurophysicist Mayank Mehta, the study’s senior author. “Knowing they are much more active than the soma fundamentally changes the nature of our understanding of how the brain computes information.”

“This is a major departure from what neuroscientists have believed for about 60 years,” said Mehta, a UCLA professor of physics and astronomy, of neurology and of neurobiology.

Because the dendrites are nearly 100 times larger in volume than the neuronal centers, Mehta said, the large number of dendritic spikes taking place could mean that the brain has more than 100 times the computational capacity previously thought.

Study with moving rats made discovery possible

Previous studies have been limited to stationary rats, because scientists have found that placing electrodes in the dendrites themselves while the animals were moving actually killed those cells. But the UCLA team developed a new technique that involves placing the electrodes near, rather than in, the dendrites.

Using that approach, the scientists measured dendrites’ activity for up to four days in rats that were allowed to move freely within a large maze. Taking measurements from the posterior parietal cortex, the part of the brain that plays a key role in movement planning, the researchers found far more activity in the dendrites than in the somas — approximately five times as many spikes while the rats were sleeping, and up to 10 times as many when they were exploring.

Looking at the soma to understand how the brain works has provided a framework for numerous medical and scientific questions — from diagnosing and treating diseases to how to build computers. But, Mehta said, that framework was based on the understanding that the cell body makes the decisions, and that the process is digital.

“What we found indicates that such decisions are made in the dendrites far more often than in the cell body, and that such computations are not just digital, but also analog,” Mehta said. “Due to technological difficulties, research in brain function has largely focused on the cell body. But we have discovered the secret lives of neurons, especially in the extensive neuronal branches. Our results substantially change our understanding of how neurons compute.”

Funding was provided by the University of California.

Complete neuron cell diagram (credit: LadyofHats/CC)


Abstract of Dynamics of cortical dendritic membrane potential and spikes in freely behaving rats

Neural activity in vivo is primarily measured using extracellular somatic spikes, which provide limited information about neural computation. Hence, it is necessary to record from neuronal dendrites, which generate dendritic action potentials (DAP) and profoundly influence neural computation and plasticity. We measured neocortical sub- and suprathreshold dendritic membrane potential (DMP) from putative distal-most dendrites using tetrodes in freely behaving rats over multiple days with a high degree of stability and sub-millisecond temporal resolution. DAP firing rates were several fold larger than somatic rates. DAP rates were modulated by subthreshold DMP fluctuations, which were far larger than DAP amplitude, indicating hybrid, analog-digital coding in the dendrites. Parietal DAP and DMP exhibited egocentric spatial maps comparable to pyramidal neurons. These results have important implications for neural coding and plasticity.

IBM-led international research team stores one bit of data on a single atom

Scanning tunneling microscope image of a single atom of holmium, an element that researchers used as a magnet to store one bit of data. (credit: IBM Research — Almaden)

An international team led by IBM has created the world’s smallest magnet, using a single atom of rare-earth element holmium, and stored one bit of data on it over several hours.

The achievement represents the ultimate limit of the classical approach to high-density magnetic storage media, according to a paper published March 8 in the journal Nature.

Currently, hard disk drives use about 100,000 atoms to store a single bit. The ability to read and write one bit on one atom could lead to significantly smaller and denser storage devices in the future. (The researchers are currently working in an ultrahigh vacuum at 1.2 kelvin, a temperature near absolute zero.)

Using a scanning tunneling microscope* (STM), the researchers also showed that a device using two magnetic atoms could be written and read independently, even when they were separated by just one nanometer.

IBM microscope mechanic Bruce Melior at scanning tunneling microscope, used to view and manipulate atoms (credit: IBM Research — Almaden)

The researchers believe this tight spacing could eventually yield magnetic storage that is 1,000 times denser than today’s hard disk drives and solid state memory chips. So they could one day store 1,000 times more information in the same space. That means data centers, computers, and personal devices would be radically smaller and more powerful.
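That 1,000-fold figure is easy to sanity-check from the numbers above. Taking the one-nanometer bit spacing demonstrated in the experiment as the limiting pitch, and assuming a round figure of about 1 terabit per square inch for today’s drives (an illustrative assumption, not a number from the paper), the ratio lands at the stated order of magnitude:

```python
# Back-of-the-envelope density comparison. The 1 bit/nm^2 limit follows from
# the 1-nm spacing demonstrated in the experiment; the ~1 Tbit/in^2 figure
# for current drives is an assumed round number, not from the paper.
NM_PER_INCH = 25.4e6                    # 1 inch = 25.4 mm = 25.4 million nm
atomic_bits_per_in2 = NM_PER_INCH ** 2  # one bit per square nanometer
current_bits_per_in2 = 1e12             # ~1 terabit per square inch today
ratio = atomic_bits_per_in2 / current_bits_per_in2
print(f"~{ratio:.0f}x denser")          # ~645x, i.e. on the order of 1,000x
```

The exact multiple depends on the assumed baseline, but any reasonable figure for current drives gives a gain in the high hundreds to low thousands.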

Single-atom write and read operations. (Left) To write data onto the holmium atom, a pulse of electric current from the magnetized tip of a scanning tunneling microscope (STM) flips the orientation of the atom’s magnetic field, switching the bit between 0 and 1. The STM is also used to read the bit. (Right) A second read-out method used an iron atom as a magnetic sensor, which also allowed the team to read out multiple bits at the same time, making it more practical than an STM. (credit: IBM Research and Fabian D. Natterer et al./Nature)

Researchers at EPFL in Switzerland, the University of Chinese Academy of Sciences in Beijing, University of Göttingen in Germany, Universität Zürich in Switzerland, the Institute of Basic Science Center for Quantum Nanoscience in South Korea, and Ewha Womans University in South Korea were also on the research team.

* The STM was developed in 1981, earning its inventors, Gerd Binnig and Heinrich Rohrer (at IBM Zürich), the Nobel Prize in Physics in 1986. IBM is planning future scanning tunneling microscope studies to investigate the potential of performing quantum information processing using individual magnetic atoms. Earlier this week, IBM announced it will be building the world’s first commercial quantum computers for business and science.


IBM Research | IBM Research Created the World’s Smallest Magnet — an Atom


How to control robots with your mind

The robot is informed that its initial motion was incorrect based upon real-time decoding of the observer’s EEG signals, and it corrects its selection accordingly to properly sort an object (credit: Andres F. Salazar-Gomez et al./MIT, Boston University)

Two research teams are developing new ways to communicate with robots that could one day shape them into the kind of productive workers featured in the AMC TV show HUMANS (now in its second season).

Programming robots to function in a real-world environment is normally a complex process. But now a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University is creating a system that lets people correct robot mistakes instantly by simply thinking.

In the initial experiment, the system uses data from an electroencephalography (EEG) helmet to correct robot performance on an object-sorting task. Novel machine-learning algorithms enable the system to classify brain waves within 10 to 30 milliseconds.
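The decoding step can be pictured as a pre-trained linear read-out applied to a buffered EEG window. The sketch below is a simplified stand-in, not the authors’ classifier: the channel count, window length, and weights are all invented for illustration, and the weights here are random rather than trained.

```python
import random

# Simplified stand-in for the EEG decoding step (not the authors' classifier):
# buffer a short multichannel EEG window after each robot action and apply a
# pre-trained linear decoder to flag an error-related potential (ErrP).
# One dot product per decision keeps it well inside a 10-30 ms budget.
N_CHANNELS = 48        # assumed electrode count
WINDOW_SAMPLES = 160   # e.g. ~800 ms at 200 Hz (also an assumption)

random.seed(0)
weights = [random.gauss(0, 1) for _ in range(N_CHANNELS * WINDOW_SAMPLES)]
bias = 0.0  # stand-in values; real weights would come from supervised training

def errp_detected(eeg_window):
    """Return True if the flattened window scores as an ErrP."""
    flat = [sample for channel in eeg_window for sample in channel]
    score = sum(w * s for w, s in zip(weights, flat)) + bias
    return score > 0.0

# The robot controller would call this right after committing to a reach;
# a True result would trigger an immediate correction of the chosen target.
window = [[random.gauss(0, 1) for _ in range(WINDOW_SAMPLES)]
          for _ in range(N_CHANNELS)]
print(errp_detected(window))
```

The point of the sketch is the latency argument: a single fixed-size dot product is cheap enough to run inside the loop between a robot’s action and its correction.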

The system includes a main experiment controller, a Baxter robot, and an EEG acquisition and classification system. The goal is to make the robot pick up the cup that the experimenter is thinking about. An Arduino microcontroller (bottom) relays messages between the EEG system and the robot controller. A mechanical contact switch (yellow) detects robot arm motion initiation. (credit: Andres F. Salazar-Gomez et al./MIT, Boston University)

While the system currently handles relatively simple binary-choice activities, we may be able one day to control robots in much more intuitive ways. “Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button, or even say a word,” says CSAIL Director Daniela Rus. “A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars, and other technologies we haven’t even invented yet.”

The team used a humanoid robot named “Baxter” from Rethink Robotics, the company led by former CSAIL director and iRobot co-founder Rodney Brooks.


MITCSAIL | Brain-controlled Robots

Intuitive human-robot interaction

The system detects brain signals called “error-related potentials” (generated whenever our brains notice a mistake) to determine if the human agrees with a robot’s decision.

“As you watch the robot, all you have to do is mentally agree or disagree with what it is doing,” says Rus. “You don’t have to train yourself to think in a certain way — the machine adapts to you, and not the other way around.” Or if the robot’s not sure about its decision, it can trigger a human response to get a more accurate answer.

The team believes that future systems could extend to more complex multiple-choice tasks. The system could even be useful for people who can’t communicate verbally: the robot could be controlled via a series of discrete binary choices, similar to how paralyzed locked-in patients spell out words with their minds.

The project was funded in part by Boeing and the National Science Foundation. An open-access paper will be presented at the IEEE International Conference on Robotics and Automation (ICRA) conference in Singapore this May.

Here, robot, Fetch!

Robot asks questions, and based on a person’s language and gesture, infers what item to deliver. (credit: David Whitney/Brown University)

But what if the robot is still confused? Researchers in Brown University’s Humans to Robots Lab have an app for that.

“Fetching objects is an important task that we want collaborative robots to be able to do,” said computer science professor Stefanie Tellex. “But it’s easy for the robot to make errors, either by misunderstanding what we want, or by being in situations where commands are ambiguous. So what we wanted to do here was come up with a way for the robot to ask a question when it’s not sure.”

Tellex’s lab previously developed an algorithm that enables robots to receive speech commands as well as information from human gestures. But it ran into problems when there were lots of very similar objects in close proximity to each other. For example, on the table above, simply asking for “a marker” isn’t specific enough, and it might not be clear which one a person is pointing to if a number of markers are clustered close together.

“What we want in these situations is for the robot to be able to signal that it’s confused and ask a question rather than just fetching the wrong object,” Tellex explained.

The new algorithm does just that, enabling the robot to quantify how certain it is that it knows what a user wants. When its certainty is high, the robot will simply hand over the object as requested. When it’s not so certain, the robot makes its best guess about what the person wants, then asks for confirmation by hovering its gripper over the object and asking, “this one?”


David Whitney | Reducing Errors in Object-Fetching Interactions through Social Feedback

One of the important features of the system is that the robot doesn’t ask questions with every interaction; it asks intelligently.

And even though the system asks only a very simple question, it’s able to make important inferences based on the answer. For example, say a user asks for a marker and there are two markers on a table. If the user tells the robot that its first guess was wrong, the algorithm deduces that the other marker must be the one that the user wants, and will hand that one over without asking another question. Those kinds of inferences, known as “implicatures,” make the algorithm more efficient.
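The decision logic described in the last two paragraphs, thresholded confidence plus implicature, can be sketched as follows. This is a simplification: the paper formalizes the problem as a POMDP, and the confidence threshold and belief values below are invented for illustration.

```python
# Simplified sketch of the ask-or-fetch policy (the paper solves a full
# POMDP; the threshold and probabilities here are invented for illustration).
CONFIDENCE = 0.75  # assumed cutoff for handing an object over without asking

def next_action(belief):
    """Pick the most likely object; fetch it if confident, else ask."""
    best = max(belief, key=belief.get)
    if belief[best] >= CONFIDENCE:
        return ("hand_over", best)
    return ("ask", best)  # hover the gripper over the best guess

def observe_answer(belief, guess, answer_yes):
    """Update the belief after a yes/no reply. A 'no' is an implicature:
    all probability shifts onto the remaining candidates."""
    if answer_yes:
        return {guess: 1.0}
    rest = {k: v for k, v in belief.items() if k != guess}
    total = sum(rest.values())
    return {k: v / total for k, v in rest.items()}

# Ambiguous request: "a marker", with two markers on the table.
belief = {"red marker": 0.55, "blue marker": 0.45}
action, guess = next_action(belief)          # not confident -> ask about red
belief = observe_answer(belief, guess, answer_yes=False)
print(next_action(belief))                   # -> ("hand_over", "blue marker")
```

With two candidates, a single “no” makes the other object certain, which is exactly the one-question efficiency the article describes.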

In future work, Tellex and her team would like to combine the algorithm with more robust speech recognition systems, which might further increase the system’s accuracy and speed. “Currently we do not consider the parse of the human’s speech. We would like the model to understand prepositional phrases (‘on the left,’ ‘nearest to me’). This would allow the robot to understand how items are spatially related to other items through language.”

Ultimately, Tellex hopes, systems like this will help robots become useful collaborators both at home and at work.

An open-access paper on the DARPA-funded research will also be presented at the International Conference on Robotics and Automation.


Abstract of Correcting Robot Mistakes in Real Time Using EEG Signals

Communication with a robot using brain activity from a human collaborator could provide a direct and fast feedback loop that is easy and natural for the human, thereby enabling a wide variety of intuitive interaction tasks. This paper explores the application of EEG-measured error-related potentials (ErrPs) to closed-loop robotic control. ErrP signals are particularly useful for robotics tasks because they are naturally occurring within the brain in response to an unexpected error. We decode ErrP signals from a human operator in real time to control a Rethink Robotics Baxter robot during a binary object selection task. We also show that utilizing a secondary interactive error-related potential signal generated during this closed-loop robot task can greatly improve classification performance, suggesting new ways in which robots can acquire human feedback. The design and implementation of the complete system is described, and results are presented for real-time closed-loop and open-loop experiments as well as offline analysis of both primary and secondary ErrP signals. These experiments are performed using general population subjects that have not been trained or screened. This work thereby demonstrates the potential for EEG-based feedback methods to facilitate seamless robotic control, and moves closer towards the goal of real-time intuitive interaction.


Abstract of Reducing Errors in Object-Fetching Interactions through Social Feedback

Fetching items is an important problem for a social robot. It requires a robot to interpret a person’s language and gesture and use these noisy observations to infer what item to deliver. If the robot could ask questions, it would help the robot be faster and more accurate in its task. Existing approaches either do not ask questions, or rely on fixed question-asking policies. To address this problem, we propose a model that makes assumptions about cooperation between agents to perform richer signal extraction from observations. This work defines a mathematical framework for an item-fetching domain that allows a robot to increase the speed and accuracy of its ability to interpret a person’s requests by reasoning about its own uncertainty as well as processing implicit information (implicatures). We formalize the item-delivery domain as a Partially Observable Markov Decision Process (POMDP), and approximately solve this POMDP in real time. Our model improves speed and accuracy of fetching tasks by asking relevant clarifying questions only when necessary. To measure our model’s improvements, we conducted a real world user study with 16 participants. Our method achieved greater accuracy and a faster interaction time compared to state-of-the-art baselines. Our model is 2.17 seconds faster (25% faster) than a state-of-the-art baseline, while being 2.1% more accurate.

Should we use CRISPR to domesticate wild plants, creating ‘biologically inspired organisms’?

Accelerating the domestication of wild plants. During the domestication of ancestral crops, plants carrying spontaneous mutations in domestication genes were selected for. The same genes can be targeted in wild plants by genome editing, resulting in a rapidly domesticated plant.  (credit: Cell)

Here’s a radical new idea for creating new GMO (genetically modified organism) plants that may appeal even to staunch organic-food consumers, farmers, and #NonGMOProjectVerified advocates: instead of inserting a foreign gene into today’s domestic plants, delete existing genes in semi-domesticated or even wild plants to make those plants more domestic, reducing pesticide use in the process.

“All of the plants we eat today are mutants, but the crops we have now were selected for over thousands of years, and their mutations … such as reduced bitterness and those that facilitate easy harvest … arose by chance,” says Michael Palmgren, a botanist who heads an interdisciplinary think tank* called “Plants for a Changing World” at the University of Copenhagen. “With gene editing, we can create ‘biologically inspired organisms’ in that we don’t want to improve nature, we want to benefit from what nature has already created.”

Palmgren is senior author of an open-access review published March 2 in the journal Trends in Plant Science.

How to turn nitrogen in the atmosphere into fertilizer, reducing environmental damage

This strategy could also address problems from pesticide use and the damaging impact of large-scale agriculture on the environment. For example, runoff from excess nitrogen in fertilizers is a common pollutant; however, wild legumes, through symbiosis with bacteria, can turn nitrogen available in the atmosphere into their own fertilizer, he suggests.

Future logo? (credit: KurzweilAI)

Out of the more than 300,000 plant species in existence, fewer than 200 are commercially important, and only three species — rice, wheat, and maize — account for most of the plant matter that humans consume, partly because in the history of agriculture, mutations arose that made these crops the easiest to harvest, the researchers note.

But with CRISPR technology, we don’t have to wait for nature to help us domesticate plants, argue the researchers. Instead, gene editing could make, for example, wild legumes, quinoa, or amaranth, which are already sustainable and nutritious, more farmable.

The approach has already been successful in accelerating domestication of undervalued crops using less precise gene-editing methods. For example, researchers used chemical mutagenesis to induce random mutations in weeping rice grass, an Australian wild relative of domestic rice, making it more likely to hold onto its seeds after ripening. And in wild field cress, a weedy plant, scientists used RNA interference to silence genes involved in fatty acid synthesis, resulting in improved seed-oil quality.

Palmgren’s group published a related open-access paper two years ago on using gene editing to make domesticated plants more “wild” and thus hardier for organic farmers.

While we’re at it, what about pharming (creating pharmaceuticals from plants) — using genetically modified wild plants?

* Supported by the University of Copenhagen Excellence Programme for Interdisciplinary Research.


Abstract of Accelerating the Domestication of New Crops: Feasibility and Approaches

The domestication of new crops would promote agricultural diversity and could provide a solution to many of the problems associated with intensive agriculture. We suggest here that genome editing can be used as a new tool by breeders to accelerate the domestication of semi-domesticated or even wild plants, building a more varied foundation for the sustainable provision of food and fodder in the future. We examine the feasibility of such plants from biological, social, ethical, economic, and legal perspectives.

Programmable shape-shifting molecular robots respond to DNA signals

Japanese researchers have developed an amoeba-like, shape-changing molecular robot — assembled from biomolecules such as DNA, proteins, and lipids — that could act as a programmable, controllable robot for applications such as treating live cultured cells or monitoring environmental pollution.

This is the first time a molecular robotic system has been able to recognize signals and control its shape-changing function; according to the researchers, such molecular robots could in the near future operate in ways similar to living organisms.

Developed by a research group at Tohoku University and Japan Advanced Institute of Science and Technology, the molecular robot integrates molecular machines within an artificial cell membrane and is about one micrometer in diameter — similar in size to human cells. It can start and stop its shape-changing function in response to a specific DNA signal.

Schematic diagram of the molecular robot. (A) In response to a start-stop DNA signal, molecular actuators (microtubules) inside the robot change the shape of the artificial cell membrane (liposome), controlled by a “molecular clutch” that transmits the force from the actuator (kinesin proteins, shown in green, assemble DNA to the cell membrane when activated). (B) Microscopy images of molecular robots. When the input DNA signal is “stop,” the clutch is turned “OFF,” deactivating the shape-changing behavior. The shape-changing is activated when the clutch is turned “ON.” Scale bar: 20 μm. The white arrow indicates the molecular actuator part that transforms the shape of the membrane. (credit: Yusuke Sato)

The movement force is generated by molecular actuators (microtubules) controlled by a molecular clutch (composed of DNA and kinesin — a “walker” that carries molecules along microtubules in the body). The shape of the robot’s body (artificial cell membrane, or liposome — a vesicle made from a lipid bilayer) is changed (from static to active) by the actuator, triggered by specific DNA signals activated by UV irradiation.
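The clutch mechanism described above amounts to a simple switch: force from the kinesin/microtubule actuator reaches the membrane only while the DNA clutch couples them. As a toy state model of that logic (the class name, method names, and signal strings below are hypothetical, not the authors’ implementation):

```python
class MolecularRobotModel:
    """Toy model of the DNA clutch: the kinesin/microtubule actuator
    deforms the liposome membrane only while the clutch is engaged."""

    def __init__(self):
        self.clutch_engaged = False

    def receive_signal(self, dna_signal):
        # A "start" DNA signal engages the clutch; the UV-released
        # "stop" signal disengages it (signal names are illustrative).
        if dna_signal == "start":
            self.clutch_engaged = True
        elif dna_signal == "stop":
            self.clutch_engaged = False

    def state(self):
        return "shape-changing" if self.clutch_engaged else "static"

robot = MolecularRobotModel()
print(robot.state())           # static
robot.receive_signal("start")
print(robot.state())           # shape-changing
robot.receive_signal("stop")
print(robot.state())           # static
```

Note that this two-state abstraction is more forgiving than the real system: as the authors’ footnote below points out, the actual switching is not yet reversible in all directions.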

Kinesin motor protein “walking” along microtubule filament (credit: Jzp706/CC)

The realization of a molecular robot whose components are designed at a molecular level and that can function in a small and complicated environment, such as the human body, is expected to significantly expand the possibilities of robotics engineering, according to the researchers.*

“With more than 20 chemicals at varying concentrations, it took us a year and a half to establish good conditions for working our molecular robots,” says Associate Professor Shin-ichiro Nomura at Tohoku University’s Graduate School of Engineering, who led the study. “It was exciting to see the robot shape-changing motion through the microscope. It meant our designed DNA clutch worked perfectly, despite the complex conditions inside the robot.”

Programmable by DNA computing devices

The research results were published in an open-access paper in Science Robotics on March 1, 2017.

The authors say that “combining other molecular devices would lead to the realization of a molecular robot with advanced functions. For example, artificial nanopores, such as an artificial channel composed of DNA, could be used to sense signal molecules in the surrounding environments through the channel.

“In addition, the behavior of a molecular robot could be programmed by DNA computing devices, such as judging the condition of environments. These implementations could allow for the development of molecular robots capable of chemotaxis [movement in a direction corresponding to a gradient of increasing or decreasing concentration of a particular substance], [similar to] white blood cells, and beyond.”

The research was supported by the JSPS KAKENHI, AMED-CREST and Tohoku University-DIARE.

* In the current design, “there are still limitations in the functions of the robot. For example, the switching of robot behavior is not reversible. The shape change is not directional and as yet not possible for complex tasks, for example, locomotion. However, to the best of our knowledge, this is the first implementation of a molecular robot that can control its shape-changing behavior in response to specific signal molecules.” — Yusuke Sato et al./Science Robotics


Abstract of Micrometer-sized molecular robot changes its shape in response to signal molecules

Rapid progress in nanoscale bioengineering has allowed for the design of biomolecular devices that act as sensors, actuators, and even logic circuits. Realization of micrometer-sized robots assembled from these components is one of the ultimate goals of bioinspired robotics. We constructed an amoeba-like molecular robot that can express continuous shape change in response to specific signal molecules. The robot is composed of a body, an actuator, and an actuator-controlling device (clutch). The body is a vesicle made from a lipid bilayer, and the actuator consists of proteins, kinesin, and microtubules. We made the clutch using designed DNA molecules. It transmits the force generated by the motor to the membrane, in response to a signal molecule composed of another sequence-designed DNA with chemical modifications. When the clutch was engaged, the robot exhibited continuous shape change. After the robot was illuminated with light to trigger the release of the signal molecule, the clutch was disengaged, and consequently, the shape-changing behavior was successfully terminated. In addition, the reverse process—that is, initiation of shape change by input of a signal—was also demonstrated. These results show that the components of the robot were consistently integrated into a functional system. We expect that this study can provide a platform to build increasingly complex and functional molecular systems with controllable motility.

Groundbreaking technology rewarms large-scale animal tissues preserved at low temperatures

Inductive radio-frequency heating of magnetic nanoparticles embedded in tissue (red material in container) preserved at very low temperatures restored the tissue without damage (credit: Navid Manuchehrabadi et al./Science Translational Medicine)

A research team led by the University of Minnesota has discovered a way to rewarm large-scale animal heart valves and blood vessels preserved at very low (cryogenic) temperatures without damaging the tissue. The discovery could one day save millions of human lives by enabling cryogenic banks of tissues and organs for transplantation.

The research was published March 1 in an open-access paper in Science Translational Medicine.

Long-term preservation methods like vitrification cool biological samples to an ice-free glassy state at very low temperatures, between -160 and -196 degrees Celsius, but tissues larger than 1 milliliter (0.03 fluid ounce) often suffer major damage during rewarming, making them unusable.

In the new research, the researchers were able to restore 50 milliliters (1.7 fluid ounces) of tissue with warming at more than 130°C/minute without damage.
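At that rate, the rewarming step itself takes only a minute or two; a back-of-envelope calculation (the 0 °C endpoint is an assumption for illustration, not a figure from the study):

```python
def rewarm_time_minutes(start_c, end_c, rate_c_per_min):
    """Time to rewarm a vitrified sample at a constant warming rate."""
    return (end_c - start_c) / rate_c_per_min

# A sample vitrified at -160 degrees C, warmed toward 0 degrees C
# at the reported rate of 130 degrees C per minute:
t = rewarm_time_minutes(-160.0, 0.0, 130.0)
print(f"{t:.2f} min")  # about 1.23 minutes
```

The engineering challenge the nanoparticles solve is not the rate alone but achieving it uniformly throughout the volume, since uneven warming causes cracking and crystallization.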

Radiofrequency inductive heating of iron nanoparticles

To achieve that, they developed a revolutionary new method using silica-coated iron-oxide nanoparticles dispersed throughout a cryoprotectant solution around the tissue. The nanoparticles act as tiny heaters around the tissue when they are activated using noninvasive radiofrequency inductive energy, rapidly and uniformly warming the tissue.

This transmission electron microscopy (TEM) image shows the iron oxide nanoparticles (coated in mesoporous silica) that are used in the tissue warming process. (credit: Haynes research group/University of Minnesota)

The results showed that none of the tissues displayed signs of harm — unlike control samples that were vitrified and then rewarmed slowly over ice or with convection warming. The researchers were also able to successfully wash away the iron-oxide nanoparticles from the sample after warming.

“This is the first time that anyone has been able to scale up to a larger biological system and demonstrate successful, fast, and uniform warming of hundreds of degrees Celsius per minute of preserved tissue without damaging the tissue,” said University of Minnesota mechanical engineering and biomedical engineering professor John Bischof, the senior author of the study.

Organs next

Bischof said there is a strong possibility they could scale up to even larger systems, like organs. The researchers plan to start with rodent organs (such as rat and rabbit) and then scale up to pig organs and then, hopefully, human organs. The technology might also be applied beyond cryogenics, including delivering lethal pulses of heat to cancer cells.

The researchers’ goal is to eliminate transplant waiting lists. Currently, many hearts and lungs donated for transplantation must be discarded because these tissues cannot be kept on ice for longer than a matter of hours, according to the researchers.*

It will be interesting to see if the technology can one day be extended to cryonics.

The research was funded by the National Science Foundation (NSF), National Institutes of Health (NIH), U.S. Army Medical Research and Materiel Command, Minnesota Futures Grant from the University of Minnesota, and the University of Minnesota Carl and Janet Kuhrmeyer Chair in Mechanical Engineering. Researchers at Carnegie Mellon University, Clemson University and Tissue Testing Technologies LLC were also involved in the study.

* “A major limitation of transplantation is the ischemic injury that tissue and organs sustain during the time between recovery from the donor and implantation in the recipient. The maximum tolerable organ preservation for transplantation by hypothermic storage is typically 4 hours for heart and lungs; 8 to 12 hours for liver, intestine, and pancreas; and up to 36 hours for kidney transplants. In many cases, such limits actually prevent viable tissue or organs from reaching recipients. For instance, more than 60% of donor hearts and lungs are not used or transplanted partly because their maximum hypothermic preservation times have been exceeded. Further, if only half of these discarded organs were transplanted, then it has been estimated that wait lists for these organs could be extinguished within 2 to 3 years.” — Navid Manuchehrabadi et al./Science Translational Medicine


Abstract of Improved tissue cryopreservation using inductive heating of magnetic nanoparticles

Vitrification, a kinetic process of liquid solidification into glass, poses many potential benefits for tissue cryopreservation including indefinite storage, banking, and facilitation of tissue matching for transplantation. To date, however, successful rewarming of tissues vitrified in VS55, a cryoprotectant solution, can only be achieved by convective warming of small volumes on the order of 1 ml. Successful rewarming requires both uniform and fast rates to reduce thermal mechanical stress and cracks, and to prevent rewarming phase crystallization. We present a scalable nanowarming technology for 1- to 80-ml samples using radiofrequency-excited mesoporous silica–coated iron oxide nanoparticles in VS55. Advanced imaging including sweep imaging with Fourier transform and microcomputed tomography was used to verify loading and unloading of VS55 and nanoparticles and successful vitrification of porcine arteries. Nanowarming was then used to demonstrate uniform and rapid rewarming at >130°C/min in both physical (1 to 80 ml) and biological systems including human dermal fibroblast cells, porcine arteries and porcine aortic heart valve leaflet tissues (1 to 50 ml). Nanowarming yielded viability that matched control and/or exceeded gold standard convective warming in 1- to 50-ml systems, and improved viability compared to slow-warmed (crystallized) samples. Last, biomechanical testing displayed no significant biomechanical property changes in blood vessel length or elastic modulus after nanowarming compared to untreated fresh control porcine arteries. In aggregate, these results demonstrate new physical and biological evidence that nanowarming can improve the outcome of vitrified cryogenic storage of tissues in larger sample volumes.

Tiny fibers open new windows into the brain

A multifunctional flexible fiber that enables viral delivery, optical stimulation, and recording with one-step surgery. (credit: Seongjun Park et al./Nature Neuroscience)

Imagine a single flexible polymer fiber 200 micrometers across — about the width of a human hair — that can deliver a combination of optical, electrical, and chemical signals between different brain regions, yet has the softness and flexibility of brain tissue. Such a fiber would allow neuroscientists to leave implants in place and have them retain their functions over much longer periods than is currently possible with typical stiff, metallic fibers.

That’s what a team of MIT scientists has reported in the journal Nature Neuroscience. (Previous research efforts in neuroscience have generally relied on separate devices: needles to inject viral vectors for optogenetics, optical fibers for light delivery, and arrays of electrodes for recording, adding complication and the need for tricky alignments among the different devices.)

Multifunctional

For example, in tests with lab mice, the researchers were able to inject viral vectors that carried genes called opsins (which sensitize neurons to light) through one of two fluid channels in the fiber. They waited for the opsins to take effect, then sent a pulse of light through the optical waveguide in the center, and recorded the resulting neuronal activity, using six electrodes to pinpoint specific reactions. All of this was done through a single flexible fiber.

“It can deliver the virus [containing the opsins] straight to the cell, and then stimulate the response and record the activity — and [the fiber] is sufficiently small and biocompatible so it can be kept in for a long time,” says Polina Anikeeva, a professor in the MIT Department of Materials Science and Engineering.

Since each fiber is so small, “potentially, we could use many of them to observe different regions of activity,” she says. In their initial tests, the researchers placed probes in two different brain regions at once, varying which regions they used from one experiment to the next, and measuring how long it took for responses to travel between them.

The key ingredient that made this multifunctional fiber possible was the development of conductive “wires” that maintained the needed flexibility while also carrying electrical signals well. The team engineered a composite of conductive polyethylene doped with graphite flakes. The polyethylene was initially formed into layers, sprinkled with graphite flakes, then compressed; then another pair of layers was added and compressed, and then another, and so on.

The team aims to reduce the width of the fibers further, to make their properties even closer to those of the neural tissue and use material that is even softer to match the adjacent tissue.

The research team included members of MIT’s Research Laboratory of Electronics, Department of Electrical Engineering and Computer Science, McGovern Institute for Brain Research, Department of Chemical Engineering, and Department of Mechanical Engineering, as well as researchers at Tohoku University in Japan and Virginia Polytechnic Institute. It was supported by the National Institute of Neurological Disorders and Stroke, the National Science Foundation, the MIT Center for Materials Science and Engineering, the Center for Sensorimotor Neural Engineering, and the McGovern Institute for Brain Research.


Abstract of One-step optogenetics with multifunctional flexible polymer fibers

Optogenetic interrogation of neural pathways relies on delivery of light-sensitive opsins into tissue and subsequent optical illumination and electrical recording from the regions of interest. Despite the recent development of multifunctional neural probes, integration of these modalities in a single biocompatible platform remains a challenge. We developed a device composed of an optical waveguide, six electrodes and two microfluidic channels produced via fiber drawing. Our probes facilitated injections of viral vectors carrying opsin genes while providing collocated neural recording and optical stimulation. The miniature (<200 μm) footprint and modest weight (<0.5 g) of these probes allowed for multiple implantations into the mouse brain, which enabled opto-electrophysiological investigation of projections from the basolateral amygdala to the medial prefrontal cortex and ventral hippocampus during behavioral experiments. Fabricated solely from polymers and polymer composites, these flexible probes minimized tissue response to achieve chronic multimodal interrogation of brain circuits with high fidelity.