Neuroscientists create organic-computing ‘Brainet’ network of rodent and primate brains — humans next

Experimental apparatus scheme for a Brainet computing device. A Brainet of four interconnected brains is shown. The arrows represent the flow of information through the Brainet. Inputs were delivered (red) as simultaneous intracortical microstimulation (ICMS) patterns (via implanted electrodes) to the somatosensory cortex of each rat. Neural activity (black) was then recorded and analyzed in real time. Rats were required to synchronize their neural activity with the other Brainet participants to receive water. (credit: Miguel Pais-Vieira et al./Scientific Reports)

Duke University neuroscientists have created a network called “Brainet” that uses signals from arrays of electrodes implanted in the brains of multiple rodents to merge their collective brain activity and jointly control a virtual avatar arm, or even perform sophisticated computations, including image pattern recognition and weather forecasting.

Brain-machine interfaces (BMIs) are computational systems that allow subjects to use their brain signals to directly control the movements of artificial devices, such as robotic arms, exoskeletons or virtual avatars. The Duke researchers at the Center for Neuroengineering previously built BMIs to capture and transmit the brain signals of individual rats, monkeys, and even human subjects, to control devices.

“Supra-brain” — the Matrix for monkeys?

In the new research, reported in two open-access papers in the July 9, 2015, issue of Scientific Reports, rhesus monkeys were outfitted with electrocorticographic (ECoG) multiple-electrode arrays implanted in their motor and somatosensory cortices to capture and transmit their brain activity.

For one experiment, two monkeys were placed in separate rooms where they observed identical images of an avatar on a display monitor in front of them, and worked together to move the avatar on the screen to touch a moving target.

In another experiment, three monkeys were able to mentally control three degrees of freedom (dimensions) of a virtual arm movement in 3-D space. To achieve this performance, all three monkeys had to synchronize their collective brain activity to produce a “supra-brain” in charge of generating the 3-D movements of the virtual arm.

In the second Brainet study, three to four rats whose brains had been interconnected via pairwise brain-to-brain interfaces (BtBIs) were able to perform a variety of sophisticated shared classification and other computational tasks in a distributed, parallel computing architecture.

Human Brainets next

These results support the Duke researchers’ original claim that Brainets could serve as test beds for the development of organic computers created by interfacing multiple animal brains with computers. This arrangement would employ a unique hybrid digital-analog computational engine as the basis of its operation, a clear departure from the classical digital-only mode of operation of modern computers.

“This is the first demonstration of a shared brain-machine interface,” said Miguel Nicolelis, M.D., Ph.D., co-director of the Center for Neuroengineering at the Duke University School of Medicine and principal investigator of the study. “We foresee that shared-BMIs will follow the same track and soon be translated to clinical practice.”

Nicolelis and colleagues of the Walk Again Project, based at the project’s laboratory in Brazil, are currently working to implement a non-invasive human Brainet to be employed in their neuro-rehabilitation training paradigm with severely paralyzed patients.


In this movie, three monkeys share control over the movement of a virtual arm in 3-D space. Each monkey contributes to two of three axes (X, Y and Z). Monkey C contributes to y- and z-axes (red dot), monkey M contributes to x- and y-axes (blue dot), and monkey K contributes to x- and z-axes (green dot). The contributions of the two monkeys assigned to each axis are averaged to determine the arm position (represented by the black dot). (credit: Arjun Ramakrishnan et al./Scientific Reports)
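The averaging scheme described in the caption can be sketched in a few lines. This is a hedged toy illustration, not the study's decoding pipeline: the `combine_contributions` function and the decoded values are assumptions made up for the example; each axis is simply the mean of the monkeys that contribute to it.

```python
# Hedged sketch: combining three monkeys' decoded 2-D contributions into one
# 3-D avatar position by averaging per axis, per the caption's description.
# The decoded values below are illustrative placeholders, not real data.

def combine_contributions(contribs):
    """contribs: dict monkey -> dict axis -> decoded value.
    Each axis is the mean of all monkeys contributing to it."""
    axes = {}
    for per_axis in contribs.values():
        for axis, value in per_axis.items():
            axes.setdefault(axis, []).append(value)
    return {axis: sum(vals) / len(vals) for axis, vals in axes.items()}

# Monkey C drives y/z, M drives x/y, K drives x/z: two monkeys per axis
decoded = {
    "C": {"y": 0.2, "z": -0.4},
    "M": {"x": 0.6, "y": 0.0},
    "K": {"x": 0.2, "z": 0.0},
}
arm = combine_contributions(decoded)
# arm == {"y": 0.1, "z": -0.2, "x": 0.4}
```

Because each monkey only sees two axes, full 3-D control emerges only when the animals' activity is coordinated, which is the point of the "supra-brain" result.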


Abstract of Building an organic computing device with multiple interconnected brains

Recently, we proposed that Brainets, i.e. networks formed by multiple animal brains, cooperating and exchanging information in real time through direct brain-to-brain interfaces, could provide the core of a new type of computing device: an organic computer. Here, we describe the first experimental demonstration of such a Brainet, built by interconnecting four adult rat brains. Brainets worked by concurrently recording the extracellular electrical activity generated by populations of cortical neurons distributed across multiple rats chronically implanted with multi-electrode arrays. Cortical neuronal activity was recorded and analyzed in real time, and then delivered to the somatosensory cortices of other animals that participated in the Brainet using intracortical microstimulation (ICMS). Using this approach, different Brainet architectures solved a number of useful computational problems, such as discrete classification, image processing, storage and retrieval of tactile information, and even weather forecasting. Brainets consistently performed at the same or higher levels than single rats in these tasks. Based on these findings, we propose that Brainets could be used to investigate animal social behaviors as well as a test bed for exploring the properties and potential applications of organic computers.

Abstract of Computing arm movements with a monkey Brainet

Traditionally, brain-machine interfaces (BMIs) extract motor commands from a single brain to control the movements of artificial devices. Here, we introduce a Brainet that utilizes very-large-scale brain activity (VLSBA) from two (B2) or three (B3) nonhuman primates to engage in a common motor behaviour. A B2 generated 2D movements of an avatar arm where each monkey contributed equally to X and Y coordinates; or one monkey fully controlled the X-coordinate and the other controlled the Y-coordinate. A B3 produced arm movements in 3D space, while each monkey generated movements in 2D subspaces (X-Y, Y-Z, or X-Z). With long-term training we observed increased coordination of behavior, increased correlations in neuronal activity between different brains, and modifications to neuronal representation of the motor plan. Overall, performance of the Brainet improved owing to collective monkey behaviour. These results suggest that primate brains can be integrated into a Brainet, which self-adapts to achieve a common motor goal.

A graphene microphone and loudspeaker that operate at up to 500 kilohertz

Construction of graphene electrostatic wideband receiver (microphone). The graphene membrane is suspended across the supporting frame (A). The membrane is electrically contacted with gold wires, and spacers are added (B) to control the distance from the membrane to the gold-coated stationary electrodes (C). (credit: Qin Zhou et al./PNAS)

University of California, Berkeley, physicists have used graphene to build lightweight ultrasonic loudspeakers and microphones, enabling people to mimic bats’ or dolphins’ ability to use sound to communicate and gauge the distance and speed of objects around them.

More practically, the wireless ultrasound devices complement standard radio transmission using electromagnetic waves in areas where radio is impractical, such as underwater, but with far greater fidelity than current ultrasound or sonar devices. They can also be used to communicate through objects, such as steel, that electromagnetic waves can’t penetrate.

“Sea mammals and bats use high-frequency sound for echolocation and communication, but humans just haven’t fully exploited that before, in my opinion, because the technology has not been there,” said UC Berkeley physicist Alex Zettl. “Until now, we have not had good wideband ultrasound transmitters or receivers. These new devices are a technology opportunity.”

The diaphragms in the new devices are graphene sheets a mere one atom thick that have the right combination of stiffness, strength and light weight to respond to frequencies ranging from subsonic (below 20 hertz) to ultrasonic (above 20 kilohertz). Humans can hear from 20 hertz up to 20,000 hertz, whereas bats hear only in the kilohertz range, from 9 to 200 kilohertz. The graphene loudspeakers and microphones operate from well below 20 hertz to over 500 kilohertz.

Practical graphene uses

“There’s a lot of talk about using graphene in electronics and small nanoscale devices, but they’re all a ways away,” said Zettl, who is a senior scientist at Lawrence Berkeley National Laboratory and a member of the Kavli Energy NanoSciences Institute, operated jointly by UC Berkeley and Berkeley Lab. “The microphone and loudspeaker are some of the closest devices to commercial viability, because we’ve worked out how to make the graphene and mount it, and it’s easy to scale up.”

Zettl, UC Berkeley postdoctoral fellow Qin Zhou and colleagues describe their graphene microphone and ultrasonic radio in a paper appearing online this week in the Proceedings of the National Academy of Sciences.

One big advantage of graphene is that the atom-thick sheet is so lightweight that it responds well to the different frequencies of an electronic pulse, unlike today’s piezoelectric microphones and speakers. This comes in handy when using ultrasonic transmitters and receivers to transmit large amounts of information through many different frequency channels simultaneously, or to measure distance, as in sonar applications.

“Because our membrane is so light, it has an extremely wide frequency response and is able to generate sharp pulses and measure distance much more accurately than traditional methods,” Zhou said.
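The ranging Zhou describes is standard pulse-echo time-of-flight: a sharper pulse gives a sharper arrival time and therefore a more precise distance. A minimal sketch, with illustrative numbers (the speed-of-sound constant and the 10 ms delay are assumptions for the example, not values from the paper):

```python
# Hedged sketch of pulse-echo ranging: distance is half the round-trip
# time multiplied by the speed of sound.

SPEED_OF_SOUND_AIR = 343.0  # m/s in air at roughly 20 C

def echo_distance(round_trip_seconds, speed=SPEED_OF_SOUND_AIR):
    """Distance to a reflecting object from a pulse's round-trip time."""
    return speed * round_trip_seconds / 2.0

d = echo_distance(0.01)  # a 10 ms echo delay
# d == 1.715 (meters)
```

A wideband membrane helps because short, sharp pulses require many frequency components; a narrowband transducer smears the pulse and blurs the arrival time.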

Graphene membranes are also more efficient, converting over 99 percent of the energy driving the device into sound, whereas today’s conventional loudspeakers and headphones convert only 8 percent into sound. Zettl anticipates that in the future, communications devices like cellphones will utilize not only electromagnetic waves – radio – but also acoustic or ultrasonic sound, which can be highly directional and long-range.

Bat chirps

Bat expert Michael Yartsev, a newly hired UC Berkeley assistant professor of bioengineering and member of the Helen Wills Neuroscience Institute, said, “These new microphones will be incredibly valuable for studying auditory signals at high frequencies, such as the ones used by bats.


A recording of the pipistrelle bat’s ultrasonic chirps, slowed to one-eighth normal speed (credit: Qin Zhou/UC Berkeley)

“The use of graphene allows the authors to obtain very flat frequency responses in a wide range of frequencies, including ultrasound, and will permit a detailed study of the auditory pulses that are used by bats.”

Zettl noted that audiophiles would also appreciate the graphene loudspeakers and headphones, which have a flat response across the entire audible frequency range.

The work was supported by the U.S. Department of Energy, the Office of Naval Research and the National Science Foundation. Other co-authors were Zheng, Michael Crommie, a UC Berkeley professor of physics, and Seita Onishi.


Abstract of Graphene electrostatic microphone and ultrasonic radio

We present a graphene-based wideband microphone and a related ultrasonic radio that can be used for wireless communication. It is shown that graphene-based acoustic transmitters and receivers have a wide bandwidth, from the audible region (20 Hz to 20 kHz) to the ultrasonic region (20 kHz to at least 0.5 MHz). Using the graphene-based components, we demonstrate efficient high-fidelity information transmission using an ultrasonic band centered at 0.3 MHz. The graphene-based microphone is also shown to be capable of directly receiving ultrasound signals generated by bats in the field, and the ultrasonic radio, coupled to electromagnetic (EM) radio, is shown to function as a high-accuracy rangefinder. The ultrasonic radio could serve as a useful addition to wireless communication technology where the propagation of EM waves is difficult.

Crowdsourcing neurofeedback data

In front of an audience, the collective neurofeedback of 20 participants was projected onto the 360° surface of the semi-transparent dome as artistic video animations, with soundscapes generated from a pre-recorded sound library and improvisations by live musicians (credit: Natasha Kovacevic et al./PLoS ONE/Photo: David Pisarek)

In a large-scale art-science installation called My Virtual Dream in Toronto in 2013, more than 500 adults wearing a Muse wireless electroencephalography (EEG) headband inside a 60-foot geodesic dome participated in an unusual neuroscience experiment.

As they played a collective neurofeedback computer game where they were required to manipulate their mental states of relaxation and concentration, the group’s collective EEG signals triggered a catalog of related artistic imagery displayed on the dome’s 360-degree interior, along with spontaneous musical interpretation by live musicians on stage.
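Per the paper's abstract, the feedback targeted relative spectral power in the alpha and beta frequency ranges. A hedged sketch of that kind of measure follows; the band edges, sampling rate, and synthetic signal are illustrative assumptions, not the study's actual processing chain:

```python
# Hedged sketch: relative spectral power of an EEG trace in the alpha
# (8-12 Hz) and beta (13-30 Hz) bands, the quantities the neurofeedback
# game modulated. The synthetic signal below stands in for real EEG.
import numpy as np

def relative_band_power(signal, fs, band):
    """Fraction of total spectral power falling inside `band` = (lo, hi) Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    lo, hi = band
    in_band = (freqs >= lo) & (freqs <= hi)
    return power[in_band].sum() / power.sum()

fs = 256                      # samples per second
t = np.arange(fs * 4) / fs    # 4 seconds of data
eeg = np.sin(2 * np.pi * 10 * t)          # strong 10 Hz "alpha" component
eeg += 0.3 * np.sin(2 * np.pi * 20 * t)   # weaker 20 Hz "beta" component

alpha = relative_band_power(eeg, fs, (8, 12))
beta = relative_band_power(eeg, fs, (13, 30))
# alpha dominates here, which a relaxation-style feedback display would reward
```

In a setup like this, raising alpha relative to beta would typically be mapped to "relaxation" and the reverse to "concentration."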

“What we’ve done is taken the lab to the public. We collaborated with multimedia artists, made this experiment incredibly engaging, attracted highly motivated subjects, which is not easy to do in the traditional lab setting, and collected useful scientific data from their experience.”

Collective neurofeedback: a new kind of neuroscience research

Participant instructions (credit: Natasha Kovacevic et al./PLoS ONE)

Results from the experiment demonstrated the scientific viability of collective neurofeedback as a potential new avenue of neuroscience research that takes into account the individuality, complexity and sociability of the human mind. They also yielded new evidence that neurofeedback learning can affect the brain almost immediately, the researchers say.

Studying brains in a social and multi-sensory environment is closer to real life and may help scientists to approach questions of complex real-life social cognition that otherwise are not accessible in traditional labs that study one person’s cognitive functions at a time.

“In traditional lab settings, the environment is so controlled that you can lose some of the fine points of real-time brain activity that occur in a social life setting,” said Natasha Kovacevic, creative producer of My Virtual Dream and program manager of the Centre for Integrative Brain Dynamics at Baycrest’s Rotman Research Institute.

The massive amount of EEG data collected in one night yielded an interesting and statistically significant finding: subtle brain-activity changes were taking place within approximately one minute of starting the neurofeedback learning exercise, a speed of learning-related change that had not been demonstrated before.

Building the world’s first virtual brain

“These results really open up a whole new domain of neuroscience study that actively engages the public to advance our understanding of the brain,” said Randy McIntosh, director of the Rotman Research Institute and vice-president of Research at Baycrest. He is a senior author on the paper.

The idea for the Nuit Blanche art-science experiment was inspired by Baycrest’s ongoing international project to build the world’s first functional, virtual brain — a research and diagnostic tool that could one day revolutionize brain healthcare.

Baycrest cognitive neuroscientists collaborated with artists and gaming and wearable technology industry partners for over a year to create the My Virtual Dream installation. Partners included the University of Toronto, Scotiabank Nuit Blanche, Muse, and Uken Games.

Plans are underway to travel My Virtual Dream to other cities around the world.


Abstract of ‘My Virtual Dream’: Collective Neurofeedback in an Immersive Art Environment

While human brains are specialized for complex and variable real world tasks, most neuroscience studies reduce environmental complexity, which limits the range of behaviours that can be explored. Motivated to overcome this limitation, we conducted a large-scale experiment with electroencephalography (EEG) based brain-computer interface (BCI) technology as part of an immersive multi-media science-art installation. Data from 523 participants were collected in a single night. The exploratory experiment was designed as a collective computer game where players manipulated mental states of relaxation and concentration with neurofeedback targeting modulation of relative spectral power in alpha and beta frequency ranges. Besides validating robust time-of-night effects, gender differences and distinct spectral power patterns for the two mental states, our results also show differences in neurofeedback learning outcome. The unusually large sample size allowed us to detect unprecedented speed of learning changes in the power spectrum (~ 1 min). Moreover, we found that participants’ baseline brain activity predicted subsequent neurofeedback beta training, indicating state-dependent learning. Besides revealing these training effects, which are relevant for BCI applications, our results validate a novel platform engaging art and science and fostering the understanding of brains under natural conditions.

A graphene-based molecule sensor

Shining infrared light on a graphene surface makes surface electrons oscillate in different ways that identify the specific molecule attached to the surface (credit: EPFL/Miguel Spuch/Daniel Rodrigo)

European scientists have harnessed graphene’s unique optical and electronic properties to develop a highly sensitive sensor to detect molecules such as proteins and drugs — one of the first such applications of graphene.

The results are described in an article appearing in the latest edition of the journal Science.

The researchers at EPFL’s Bionanophotonic Systems Laboratory (BIOS) and the Institute of Photonic Sciences (ICFO, Spain) used graphene to improve on a molecule-detection method called infrared absorption spectroscopy, in which infrared light is used to excite the molecules. Each type of molecule absorbs differently across the spectrum, creating a signature that can be recognized.

This method is not effective, however, in detecting molecules that are under 10 nanometers in size (such as proteins), because the mid-infrared wavelengths used are huge in comparison — 2 to 6 micrometers (2,000 to 6,000 nanometers).
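The "signature" idea above amounts to comparing a measured absorption pattern against known reference spectra. A minimal sketch of that matching step, with made-up toy spectra (the reference names and values are assumptions for illustration, not real spectroscopy data):

```python
# Hedged sketch of signature recognition in absorption spectroscopy:
# score a measured absorption pattern against reference spectra by cosine
# similarity and report the best match. Toy data, illustrative only.
import math

REFERENCES = {                 # absorption at a few shared wavelengths
    "protein A": [0.9, 0.1, 0.4, 0.0],
    "protein B": [0.1, 0.8, 0.1, 0.5],
}

def cosine(u, v):
    """Cosine similarity between two equal-length spectra."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def identify(measured):
    """Reference whose spectrum is most similar to the measurement."""
    return max(REFERENCES, key=lambda name: cosine(measured, REFERENCES[name]))

best = identify([0.85, 0.15, 0.35, 0.05])
# best == "protein A"
```

The graphene advance described next is about making such signatures measurable at all for sub-10-nanometer molecules, by concentrating the light down to molecular scales.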

Conceptual view of the graphene biosensor. An infrared beam excites a plasmon resonance across the graphene nanoribbons. Protein sensing is achieved by changing the voltage applied to the graphene and detecting a plasmon resonance spectral shift accompanied by narrow dips corresponding to the molecular vibration bands of the protein. (credit: Daniel Rodrigo et al./Science)

Resonant vibrations

With the new graphene method, the target proteins to be analyzed are attached to the graphene surface. “We pattern nanostructures on the graphene surface by bombarding it with electron beams and etching it with oxygen ions,” said Daniel Rodrigo, co-author of the publication. “When the light arrives, the electrons in graphene nanostructures begin to oscillate. This phenomenon, known as ‘localized surface plasmon resonance,’ serves to concentrate light into tiny spots, which are comparable with the [tiny] dimensions of the target molecules. It is then possible to detect nanometric structures.”

This process can also reveal the nature of the bonds connecting the atoms that make up the molecule. When a molecule vibrates, it does so at a range of frequencies, which are determined by the bonds connecting its different atoms. To detect these frequencies, the researchers “tuned” the graphene to different frequencies by applying a voltage, which is not possible with current sensors. Making graphene’s electrons oscillate in different ways makes it possible to “read” all the vibrations of the molecule on its surface. “It gave us a full picture of the molecule,” said co-author Hatice Altug.

According to the researchers, this simple method shows that it is possible to conduct a complex analysis using a single device, where many different instruments would normally be required, and without stressing or modifying the biological sample. “The method should also work for polymers, and many other substances,” she added.


Abstract of Mid-infrared plasmonic biosensing with graphene

Infrared spectroscopy is the technique of choice for chemical identification of biomolecules through their vibrational fingerprints. However, infrared light interacts poorly with nanometric-size molecules. We exploit the unique electro-optical properties of graphene to demonstrate a high-sensitivity tunable plasmonic biosensor for chemically specific label-free detection of protein monolayers. The plasmon resonance of nanostructured graphene is dynamically tuned to selectively probe the protein at different frequencies and extract its complex refractive index. Additionally, the extreme spatial light confinement in graphene—up to two orders of magnitude higher than in metals—produces an unprecedentedly high overlap with nanometric biomolecules, enabling superior sensitivity in the detection of their refractive index and vibrational fingerprints. The combination of tunable spectral selectivity and enhanced sensitivity of graphene opens exciting prospects for biosensing.

Omnidirectional wireless charging up to half a meter away from a power source

Omnidirectional wireless-charging system can charge multiple numbers of mobile devices simultaneously in a one-cubic-meter range. Above: charging transmitter; below: a Samsung Galaxy Note with embedded receiver. (credit: KAIST)

A group of researchers at KAIST in Korea has developed a wireless-power transfer (WPT) technology that allows mobile devices in the “Wi-Power” zone (within 0.5 meters from the power source) to be charged at any location and in any direction and orientation, tether-free.

The WPT system is capable of simultaneously charging 30 smartphones at one watt each, or five laptops at 2.4 watts each.

To induce the magnetic fields, the research team used its Dipole Coil Resonance System (DCRS), composed of two magnetic dipole coils (one transmitting, one receiving) placed in parallel. Each coil has a ferrite core and is connected to a resonant capacitor.
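The coil-plus-capacitor pairing matters because each side forms an LC resonator, and power transfers efficiently when both sides share the same resonant frequency. A hedged sketch using the textbook formula; the component values are illustrative assumptions, not the ones KAIST used:

```python
# Hedged sketch: resonant frequency of an LC circuit, f = 1/(2*pi*sqrt(L*C)).
# Illustrative component values chosen to land near the 280 kHz operating
# frequency quoted in the paper's abstract.
import math

def lc_resonant_frequency(inductance_h, capacitance_f):
    """Resonant frequency in Hz of an LC circuit."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# e.g. a hypothetical 100 uH coil with a ~3.2 nF resonant capacitor
f = lc_resonant_frequency(100e-6, 3.23e-9)
# f is roughly 280 kHz
```

Tuning both coils to the same frequency is what lets the receiver pick up useful power anywhere in the charging zone rather than only in direct contact.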

Current wireless-power technologies require close contact with a charging pad and are limited to a fixed position.

The research was published in the June 2015 online issue of IEEE Transactions on Power Electronics.


KAIST | KAIST Omnidirectional Wireless Smartphone Charger at 1m


Abstract of Six Degrees of Freedom Mobile Inductive Power Transfer by Crossed Dipole Tx and Rx Coils

Crossed dipole coils for the wide-range 3-D omnidirectional inductive power transfer (IPT) are proposed. Free positioning of a plane receiving (Rx) coil is obtained for an arbitrary direction within 1m from a plane transmission (Tx) coil. Both the Tx and Rx coils consist of crossed dipole coils with an orthogonal phase difference; hence, a rotating magnetic field is generated from the Tx, which enables the Rx to receive power vertically or horizontally. Thus, the 3-D omnidirectional IPT is first realized for both the plate type Tx and Rx coils, which is crucial for practical applications where volumetric coil structure is highly prohibited. This optimized configuration of coils has been obtained through a general classification of power transfer and searching for mathematical constraints on multi-D omnidirectional IPT. Conventional loop coils are thoroughly analyzed and verified to be inadequate for the plate-type omnidirectional IPT in this paper. Simulation-based design of the proposed crossed dipole coils for a uniform magnetic field distribution is provided, and the 3-D omnidirectional IPT is experimentally verified by prototype Rx coils for a wireless power zone of 1 m3 with a prototype Tx coil of 1 m2 at an operating frequency of 280 kHz, meeting the Power Matters Alliance (PMA). The maximum overall efficiency was 33.6% when the input power was 100 W.

AI algorithm learns to ‘see’ features in galaxy images

Hubble Space Telescope image of the cluster of galaxies MACS0416.1-2403, one of the Hubble “Frontier Fields” images. Bright yellow “elliptical” galaxies can be seen, surrounded by numerous blue spiral and amorphous (star-forming) galaxies. This image forms the test data that the machine learning algorithm is applied to, having not previously “seen” the image. (credit: NASA/ESA/J. Geach/A. Hocking)

A team of astronomers and computer scientists at the University of Hertfordshire has taught a machine to “see” astronomical images, using data from the Hubble Space Telescope Frontier Fields set of images of distant galaxy clusters that contain several different types of galaxies.

The technique, which uses a form of AI called unsupervised machine learning, allows galaxies to be automatically classified at high speed, something previously done by thousands of human volunteers in projects like Galaxy Zoo.
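Unsupervised learning means the algorithm groups similar-looking regions without ever being given labels. A hedged toy sketch of the idea, clustering patches by a single colour value with a minimal k-means; this is an illustration of unsupervised clustering in general, not the algorithm the Hertfordshire team used:

```python
# Hedged sketch: unsupervised grouping of image patches by colour, loosely
# analogous to separating yellow ellipticals from blue star-forming galaxies
# without labels. Toy 1-D k-means on made-up colour indices.
import numpy as np

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Minimal 1-D k-means: returns cluster centres and a label per value."""
    rng = np.random.default_rng(seed)
    centres = rng.choice(values, size=k, replace=False)  # init from data
    for _ in range(iters):
        # assign each value to its nearest centre, then recompute centres
        labels = np.argmin(np.abs(values[:, None] - centres[None, :]), axis=1)
        centres = np.array([values[labels == c].mean() for c in range(k)])
    return centres, labels

# Colour index per patch: negative ~ "yellow/elliptical",
# positive ~ "blue/star-forming" (illustrative values)
colour = np.array([-0.9, -1.1, -1.0, 0.8, 1.2, 1.0, 0.9])
centres, labels = kmeans_1d(colour)
# the two visually distinct groups end up in different clusters,
# with no labels ever supplied
```

Real pipelines cluster richer feature vectors (colours, textures, shapes per patch), but the principle is the same: structure emerges from the data rather than from human-provided categories.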

Image highlighting parts of the MACS0416.1-2403 cluster image that the algorithm has identified as “star-forming” galaxies (credit: NASA/ESA/J. Geach/A. Hocking)

“We have not told the machine what to look for in the images, but instead taught it how to ‘see,’” said graduate student Alex Hocking.

“Our aim is to deploy this tool on the next generation of giant imaging surveys where no human, or even group of humans, could closely inspect every piece of data. But this algorithm has a huge number of applications far beyond astronomy, and investigating these applications will be our next step,” said University of Hertfordshire Royal Society University Research Fellow James Geach, PhD.

The scientists are now looking for collaborators to make use of the technique in applications like medicine, where it could for example help doctors to spot tumors, and in security, to find suspicious items in airport scans.

Your entire viral infection history from a single drop of blood

Systematic viral epitope scanning (VirScan). This method allows comprehensive analysis of antiviral antibodies in human sera. VirScan combines DNA microarray synthesis and bacteriophage display to create a uniform, synthetic representation of peptide epitopes comprising the human virome. Immunoprecipitation and high-throughput DNA sequencing reveal the peptides recognized by antibodies in the sample. The color of each cell in the heatmap depicts the relative number of antigenic epitopes detected for a virus (rows) in each sample (columns). (credit: George J. Xu et al./Science).

New technology called VirScan, developed by Howard Hughes Medical Institute (HHMI) researchers, makes it possible to test for current and past infections with any known human virus by analyzing a single drop of a person’s blood.

With VirScan, scientists can run a single test to determine which viruses have infected an individual, rather than limiting their analysis to particular viruses. That unbiased approach could uncover unexpected factors affecting individual patients’ health, and also expands opportunities to analyze and compare viral infections in large populations.

The comprehensive analysis can be performed for about $25 per blood sample, but the test is currently being used only as a research tool and is not yet commercially available.

Stephen J. Elledge, an HHMI investigator at Brigham and Women’s Hospital, led the development of VirScan. He and his colleagues have already used VirScan to screen the blood of 569 people in the United States, South Africa, Thailand, and Peru. The scientists described the new technology and reported their findings in the June 5, 2015, issue of the journal Science.

Virus antibodies: clues that last decades

Bacteriophage (credit: Wikimedia Commons)

VirScan works by screening the blood for antibodies against any of the 206 species of viruses known to infect humans*. The immune system ramps up production of pathogen-specific antibodies when it encounters a virus for the first time, and it can continue to produce those antibodies for years or decades after it clears an infection.

That means VirScan not only identifies viral infections that the immune system is actively fighting, but also provides a history of an individual’s past infections.

To develop the new test, Elledge and his colleagues synthesized more than 93,000 short pieces of DNA encoding different segments of viral proteins. They introduced those pieces of DNA into bacteria-infecting viruses called bacteriophage.

Each bacteriophage manufactured one of the protein segments — known as a peptide — and displayed the peptide on its surface. As a group, the bacteriophage displayed all of the protein sequences found in the more than 1,000 known strains of human viruses.

To test the method, the team used it to analyze blood samples from patients known to be infected with particular viruses, including HIV and hepatitis C. “It turns out that it works really well,” Elledge says. “We were in the sensitivity range of 95 to 100 percent for those, and the specificity was good — we didn’t falsely identify people who were negative. That gave us confidence that we could detect other viruses, and when we did see them we would know they were real.”


Harvard Medical School | Viral History in a Drop of Blood

International study

Elledge and his colleagues used VirScan to analyze the antibodies in 569 people from four countries, examining about 100 million potential antibody/epitope interactions. They found that on average, each person had antibodies to ten different species of viruses. As expected, antibodies against certain viruses were common among adults but not in children, suggesting that children had not yet been exposed to those viruses. Individuals residing in South Africa, Peru, and Thailand tended to have antibodies against more viruses than people in the United States. The researchers also found that people infected with HIV had antibodies against many more viruses than did people without HIV.

Elledge says the team was surprised to find how similar antibody responses against specific viruses were between individuals, with different people’s antibodies recognizing identical amino acids in the viral peptides. “In this paper alone we identified more antibody/peptide interactions to viral proteins than had been identified in the previous history of all viral exploration,” he says. The reproducibility of those interactions allowed the team to refine their analysis and improve the sensitivity of VirScan, and Elledge says the method will continue to improve as his team analyzes more samples. Their findings on viral epitopes may also have important implications for vaccine design.

Elledge says the approach his team has developed is not limited to antiviral antibodies. His own lab is also using it to look for antibodies that attack a body’s own tissue in certain autoimmune diseases that are associated with cancer. A similar approach could also be used to screen for antibodies against other types of pathogens.

* Antibodies in the blood find their viral targets by recognizing unique features known as epitopes that are embedded in proteins on the virus surface. To perform the VirScan analysis, all of the peptide-displaying bacteriophage are allowed to mingle with a blood sample. Antiviral antibodies in the blood find and bind to their target epitopes within the displayed peptides. The scientists then retrieve the antibodies and wash away everything except for the few bacteriophage that cling to them. By sequencing the DNA of those bacteriophage, they can identify which viral protein pieces were grabbed onto by antibodies in the blood sample. That tells the scientists which viruses a person’s immune system has previously encountered, either through infection or through vaccination. Elledge estimates it would take about 2-3 days to process 100 samples, assuming sequencing is working optimally. He is optimistic the speed of the assay will increase with further development.
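The tallying step the footnote describes (sequenced phage identify bound peptides, peptides map back to viruses, and viruses with enough distinct hits are called as exposures) can be sketched as follows. The peptide names, virus mapping, and hit threshold are all hypothetical illustrations, not VirScan's actual library or scoring rules:

```python
# Hedged sketch: calling viral exposures from antibody-bound peptides.
# Each sequenced peptide maps to its source virus; a virus supported by
# multiple distinct bound peptides is called as a past exposure.
from collections import Counter

PEPTIDE_TO_VIRUS = {            # hypothetical peptide-library annotation
    "pep_001": "HSV-1", "pep_002": "HSV-1",
    "pep_101": "EBV",   "pep_102": "EBV", "pep_103": "EBV",
    "pep_201": "HCV",
}

def call_exposures(bound_peptides, min_hits=2):
    """Viruses supported by at least `min_hits` distinct bound peptides."""
    hits = Counter(PEPTIDE_TO_VIRUS[p] for p in set(bound_peptides))
    return {virus for virus, n in hits.items() if n >= min_hits}

# peptides recovered from one (hypothetical) blood sample's bound phage
sample = ["pep_001", "pep_002", "pep_101", "pep_102", "pep_201", "pep_001"]
exposures = call_exposures(sample)
# exposures == {"HSV-1", "EBV"}; HCV has only one supporting peptide
```

Requiring multiple independent peptides per virus is one plausible way to suppress spurious single-peptide cross-reactions, in the spirit of the specificity figures quoted above.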


Abstract of Comprehensive serological profiling of human populations using a synthetic human virome

The human virome plays important roles in health and immunity. However, current methods for detecting viral infections and antiviral responses have limited throughput and coverage. Here, we present VirScan, a high-throughput method to comprehensively analyze antiviral antibodies using immunoprecipitation and massively parallel DNA sequencing of a bacteriophage library displaying proteome-wide peptides from all human viruses. We assayed over 10⁸ antibody-peptide interactions in 569 humans across four continents, nearly doubling the number of previously established viral epitopes. We detected antibodies to an average of 10 viral species per person and 84 species in at least two individuals. Although rates of specific virus exposure were heterogeneous across populations, antibody responses targeted strongly conserved “public epitopes” for each virus, suggesting that they may elicit highly similar antibodies. VirScan is a powerful approach for studying interactions between the virome and the immune system.

Creating DNA-based nanostructures without water

Three different DNA nanostructures assembled at room temperature in water-free glycholine (left) and in 75 percent glycholine-water mixture (center and right). The structures are (from left to right) a tall rectangle two-dimensional DNA origami, a triangle made of single-stranded tails, and a six-helix bundle three-dimensional DNA origami (credit: Isaac Gállego).

Researchers at the Georgia Institute of Technology have discovered a new process for assembling DNA nanostructures in a water-free solvent, which may allow for fabricating more complex nanoscale structures — especially, nanoelectronic chips based on DNA.

Scientists have been using DNA to construct sophisticated new structures from nanoparticles (such as a recent development at Brookhaven National Laboratory reported by KurzweilAI May 26), but the use of DNA has required a water-based environment, because DNA naturally functions inside the watery environment of living cells. That requirement has limited the types of structures that are possible.

The viscosity of a new solvent used for assembling DNA nanostructures (credit: Rob Felt)

In addition, the Georgia Tech researchers discovered that, paradoxically, adding a small amount of water to their water-free solvent during the assembly process (and removing it later) increases the assembly rate. It could also allow for even more complex structures, by reducing the aggregation (clumping) that traps DNA in unintended structures.

The new solvent they used is known as glycholine, a mixture of glycerol (used for sweetening and preserving food) and choline chloride, but the researchers are exploring other materials.

The solvent system could improve the combined use of metallic nanoparticles and DNA based materials at room temperature. The solvent’s low volatility could also allow for storage of assembled DNA structures without the concern that a water-based medium would dry out.

The research on water-free solvents grew out of Georgia Tech researchers’ studies in the origins of life. They wondered if the molecules necessary for life, such as the ancestor of DNA, could have developed in a water-free solution. In some cases, they found, the chemistry necessary to make the molecules of life would be much easier without water being present.

Sponsored by the National Science Foundation and NASA, the research will be published as the cover story in Volume 54, Issue 23 of the journal Angewandte Chemie International Edition.

* The assembly rate of DNA nanostructures can be very slow, and depends strongly on temperature. Raising the temperature increases this rate, but temperatures that are too high can cause the DNA structures to fall apart. The solvent system developed at Georgia Tech adds a new level of control over DNA assembly. DNA structures assemble at lower temperatures in this solvent, and adding water can adjust the solvent’s viscosity (resistance to flow), which allows for faster assembly compared to the water-free version of the solvent.
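The trade-off described in the note above — temperature speeds assembly but risks melting the structure, while lowering viscosity speeds assembly safely — can be captured in a toy Kramers/Arrhenius-style rate model. All parameter values below (activation energy, prefactor, viscosities) are illustrative assumptions, not numbers from the paper:

```python
import math

# Toy model: a diffusion-limited folding rate proportional to
# (1/viscosity) * exp(-Ea / RT). Parameters are illustrative only.
R = 8.314  # gas constant, J/(mol*K)

def assembly_rate(temp_k, viscosity_pa_s, ea_j_mol=80e3, prefactor=1e12):
    """Relative folding rate: faster when warmer or less viscous."""
    return (prefactor / viscosity_pa_s) * math.exp(-ea_j_mol / (R * temp_k))

# Pure glycholine is far more viscous than the hydrated mixture, so
# adding water speeds assembly even at the same (safe) temperature.
k_dry = assembly_rate(293.15, viscosity_pa_s=0.4)   # assumed viscosity
k_wet = assembly_rate(293.15, viscosity_pa_s=0.02)  # assumed viscosity
print(k_wet / k_dry)  # a 20x lower viscosity gives a 20x rate gain here
```

In this sketch the viscosity term alone accounts for the speed-up, which is consistent with the paper’s observation that hydrated glycholine folds the same origami in hours rather than days without raising temperature.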


Abstract of Folding and Imaging of DNA Nanostructures in Anhydrous and Hydrated Deep-Eutectic Solvents

There is great interest in DNA nanotechnology, but its use has been limited to aqueous or substantially hydrated media. The first assembly of a DNA nanostructure in a water-free solvent, namely a low-volatility biocompatible deep-eutectic solvent composed of a 4:1 mixture of glycerol and choline chloride (glycholine), is now described. Glycholine allows for the folding of a two-dimensional DNA origami at 20 °C in six days, whereas in hydrated glycholine, folding is accelerated (≤3 h). Moreover, a three-dimensional DNA origami and a DNA tail system can be folded in hydrated glycholine under isothermal conditions. Glycholine apparently reduces the kinetic traps encountered during folding in aqueous solvent. Furthermore, folded structures can be transferred between aqueous solvent and glycholine. It is anticipated that glycholine and similar solvents will allow for the creation of functional DNA structures of greater complexity by providing a milieu with tunable properties that can be optimized for a range of applications and nanostructures.

South Korean Team KAIST wins DARPA Robotics Challenge

DRC-Hubo robot turns valve 360 degrees in DARPA Robotics Challenge Final (credit: DARPA)

First place in the DARPA Robotics Challenge Finals this past weekend in Pomona, California went to Team KAIST of South Korea for its DRC-Hubo robot, winning $2 million in prize money.

Team IHMC Robotics of Pensacola, Fla., with its Running Man (Atlas) robot came in at second place ($1 million prize), followed by Tartan Rescue of Pittsburgh with its CHIMP robot ($500,000 prize).

DRC-Hubo, Running Man, and CHIMP (credit: DARPA)

The DARPA Robotics Challenge, with three increasingly demanding competitions over two years, was launched in response to a humanitarian need that became glaringly clear during the nuclear disaster at Fukushima, Japan, in 2011, DARPA said.

The goal was to “accelerate progress in robotics and hasten the day when robots have sufficient dexterity and robustness to enter areas too dangerous for humans and mitigate the impacts of natural or man-made disasters.”

The difficult course of eight tasks simulated Fukushima-like conditions, such as driving alone, walking through rubble, tripping circuit breakers, turning valves, and climbing stairs.

Representing some of the most advanced robotics research and development organizations in the world, a dozen teams from the United States and another eleven from Japan, Germany, Italy, Republic of Korea and Hong Kong competed.

DARPA | DARPA Robotics Challenge 2015 Proving the Possible


DARPA | A Celebration of Risk (a.k.a., Robots Take a Spill)

More DARPA Robotics Challenge videos

Super-resolution electron microscopy of soft materials like biomaterials

CLAIRE image of Al nanostructures with an inset that shows a cluster of six Al nanostructures (credit: Lawrence Berkeley National Laboratory)

Soft matter encompasses a broad swath of materials, including liquids, polymers, gels, foam and — most importantly — biomolecules. At the heart of soft materials, governing their overall properties and capabilities, are the interactions of nano-sized components.

Observing the dynamics behind these interactions is critical to understanding key biological processes, such as protein crystallization and metabolism, and could help accelerate the development of important new technologies, such as artificial photosynthesis or high-efficiency photovoltaic cells.

Observing these dynamics at sufficient resolution has been a major challenge, but this challenge is now being met with a new non-invasive nanoscale imaging technique that goes by the acronym of CLAIRE.

CLAIRE stands for “cathodoluminescence activated imaging by resonant energy transfer.” Invented by researchers with the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California (UC) Berkeley, CLAIRE extends the extremely high resolution of electron microscopy to the dynamic imaging of soft matter.

“Traditional electron microscopy damages soft materials and has therefore mainly been used to provide topographical or compositional information about robust inorganic solids or fixed sections of biological specimens,” says chemist Naomi Ginsberg, who leads CLAIRE’s development and holds appointments with Berkeley Lab’s Physical Biosciences Division and its Materials Sciences Division, as well as UC Berkeley’s departments of chemistry and physics.

“CLAIRE allows us to convert electron microscopy into a new non-invasive imaging modality for studying soft materials and providing spectrally specific information about them on the nanoscale.”

Ginsberg is also a member of the Kavli Energy NanoScience Institute (Kavli-ENSI) at Berkeley. She and her research group recently demonstrated CLAIRE’s imaging capabilities by applying the technique to aluminum nanostructures and polymer films that could not have been directly imaged with electron microscopy.

“What microscopic defects in molecular solids give rise to their functional optical and electronic properties? By what potentially controllable process do such solids form from their individual microscopic components, initially in the solution phase? The answers require observing the dynamics of electronic excitations or of molecules themselves as they explore spatially heterogeneous landscapes in condensed phase systems,” Ginsberg says.

“In our demonstration, we obtained optical images of aluminum nanostructures with 46 nanometer resolution, then validated the non-invasiveness of CLAIRE by imaging a conjugated polymer film. The high resolution, speed and non-invasiveness we demonstrated with CLAIRE positions us to transform our current understanding of key biomolecular interactions.”

How to avoid destroying soft matter with electron beams

CLAIRE works by essentially combining the best attributes of optical and scanning electron microscopy into a single imaging platform.

Scanning electron microscopes use beams of electrons rather than light for illumination and magnification. With much shorter wavelengths than photons of visible light, electron beams can be used to observe objects hundreds of times smaller than those that can be resolved with an optical microscope. However, these electron beams destroy most forms of soft matter and are incapable of spectrally specific molecular excitation.
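The resolution advantage comes from the de Broglie wavelength of the electron, which even at CLAIRE’s low beam energy is thousands of times shorter than visible-light wavelengths. A quick back-of-the-envelope check (non-relativistic formula, which is adequate at ~1 keV):

```python
import math

# De Broglie wavelength of an electron accelerated through V volts:
# lambda = h / p, with p = sqrt(2 * m_e * e * V). Non-relativistic,
# which is a good approximation at the ~1 keV energies used in CLAIRE.
H = 6.626e-34         # Planck constant, J*s
M_E = 9.109e-31       # electron mass, kg
E_CHARGE = 1.602e-19  # elementary charge, C

def electron_wavelength_nm(accel_volts):
    energy = E_CHARGE * accel_volts         # kinetic energy, J
    momentum = math.sqrt(2 * M_E * energy)  # p = sqrt(2mE)
    return H / momentum * 1e9               # wavelength in nm

lam = electron_wavelength_nm(1000)  # ~0.039 nm at 1 keV
print(lam, 550 / lam)  # vs. ~550 nm green light: ~10,000x shorter
```

Even so, CLAIRE’s demonstrated 46 nm resolution is set by the energy-transfer optics rather than this wavelength limit — the point is that the electron probe is not what limits resolution.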

Ginsberg and her colleagues get around these problems by employing a process called “cathodoluminescence,” in which an ultrathin scintillating film, about 20 nanometers thick, composed of cerium-doped yttrium aluminum perovskite, is inserted between the electron beam and the sample.

When the scintillating film is excited by a low-energy electron beam (about 1 keV), it emits energy that is transferred to the sample, causing the sample to radiate. This luminescence is recorded and correlated with the electron beam position to form an image that is not restricted by the optical diffraction limit (which constrains optical microscopy).
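The image-formation step — raster the beam, record the detected luminescence at each position, and treat intensity-versus-position as the picture — can be sketched as follows. The sample response function here is a made-up stand-in for a real detector signal:

```python
# Sketch of CLAIRE-style image assembly: the electron beam is rastered
# across the scintillating film, luminescence is recorded per position,
# and the intensity map becomes the image. sample_response() is a
# hypothetical stand-in for the measured signal.

def sample_response(x, y):
    """Detected luminescence at beam position (x, y): bright where a
    (hypothetical) nanostructure sits, dim elsewhere."""
    return 1.0 if (4 <= x <= 6 and 4 <= y <= 6) else 0.1

def raster_scan(width, height):
    """Correlate detected intensity with beam position to form an image."""
    return [[sample_response(x, y) for x in range(width)]
            for y in range(height)]

image = raster_scan(10, 10)
# The bright 3x3 block in the middle marks the simulated nanostructure.
```

The resolution of the real instrument is then set by how tightly the energy transfer from film to sample is localized around each beam position, not by the optics used to collect the light.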

The CLAIRE imaging demonstration was carried out at the Molecular Foundry, a DOE Office of Science User Facility.

Observing biomolecular interactions, solar cells, and LEDs

While there is still more work to do to make CLAIRE widely accessible, Ginsberg and her group are moving forward with further refinements for several specific applications.

“We’re interested in non-invasively imaging soft functional materials like the active layers in solar cells and light-emitting devices,” she says. “It is especially true in organics and organic/inorganic hybrids that the morphology of these materials is complex and requires nanoscale resolution to correlate morphological features to functions.”

Ginsberg and her group are also working on the creation of liquid cells for observing biomolecular interactions under physiological conditions. Since electron microscopes can only operate in a high vacuum, as molecules in the air disrupt the electron beam, and since liquids evaporate in high vacuum, aqueous samples must either be freeze-dried or hermetically sealed in special cells.

“We need liquid cells for CLAIRE to study the dynamic organization of light-harvesting proteins in photosynthetic membranes,” Ginsberg says. “We should also be able to perform other studies in membrane biophysics to see how molecules diffuse in complex environments, and we’d like to be able to study molecular recognition at the single molecule level.”

In addition, Ginsberg and her group will be using CLAIRE to study the dynamics of nanoscale systems for soft materials in general. “We would love to be able to observe crystallization processes or to watch a material made of nanoscale components anneal or undergo a phase transition,” she says. “We would also love to be able to watch the electric double layer at a charged surface as it evolves, as this phenomenon is crucial to battery science.”

A paper describing the most recent work on CLAIRE has been published in the journal Nano Letters. This research was primarily supported by the DOE Office of Science and by the National Science Foundation.


Abstract of Cathodoluminescence-Activated Nanoimaging: Noninvasive Near-Field Optical Microscopy in an Electron Microscope

We demonstrate a new nanoimaging platform in which optical excitations generated by a low-energy electron beam in an ultrathin scintillator are used as a noninvasive, near-field optical scanning probe of an underlying sample. We obtain optical images of Al nanostructures with 46 nm resolution and validate the noninvasiveness of this approach by imaging a conjugated polymer film otherwise incompatible with electron microscopy due to electron-induced damage. The high resolution, speed, and noninvasiveness of this “cathodoluminescence-activated” platform also show promise for super-resolution bioimaging.