Gigapixel multicolor microscope is powerful new tool to advance drug research

Parallelized multispectral imaging. Each rainbow-colored bar is the fluorescent spectrum from a discrete point in a cell culture. The gigapixel multispectral microscope records nearly a million such spectra every second. (credit: Antony Orth et al./Optica)

A new multispectral microscope capable of processing nearly 17 billion pixels in a single image has been developed by a team of researchers from the United States and Australia — the largest such microscopic image ever created.

This level of multicolor detail is essential for studying the impact of experimental drugs on biological samples and is an important advance over traditional microscope designs, the researchers say. The goal is to process large amounts of data simultaneously, addressing a major bottleneck in pharmaceutical research: the need for rapid, data-rich biomedical imaging.

The microscope merges data collected simultaneously by thousands of microlenses to produce a continuous series of datasets that essentially reveal how much of each of multiple colors (frequencies) is present at each point in a single biological sample.

“We recognized that the microscopy part of the drug development pipeline was much slower than it could be and designed a system specifically for this task,” said Antony Orth, a researcher formerly at the Rowland Institute, Harvard University in Cambridge and now with the ARC Centre for Nanoscale BioPhotonics, RMIT University in Melbourne, Australia.

Orth and his colleagues published their results in Optica, a journal of The Optical Society.

Multispectral imaging 

Multispectral imaging is used for a variety of scientific and medical research applications. This process adds data about specific colors, or frequencies, to images. Medical researchers can study these frequencies to learn about the composition and chemical processes taking place within a biological sample. This is essential for pharmaceutical research — particularly cancer research — where researchers observe how cells and tissues respond to specific chemicals and experimental drugs.

Such research, however, is very data-intensive and slow, since current multispectral microscopes can survey only a single point at a time, with only a few color channels, typically 4 or 5. This process must then be repeated over and over to scan the entire sample.

Slices of a spectral data cube. HeLa cells are imaged at 11 wavelengths from blue to red. The bottom right panel is a composite of all wavelength channels. (credit: Antony Orth et al./Optica)

Microlenses and parallel processing for big data

To overcome these limitations, Orth and his team took inspiration from modern computing, in which massive amounts of data and calculations are handled simultaneously by multicore processors. In the case of imaging, however, the work of a single microscope lens is distributed among an entire array of microlenses, each responsible for collecting multispectral data for a very narrow portion of the sample.

To capture this data, a laser is focused onto a small spot on the sample by each microlens. The laser light causes the sample to fluoresce, emitting specific wavelengths of light that differ depending on the molecules that are present. This fluorescence is then imaged back onto the camera. This is done for thousands of microlenses at once.

This multipoint scanning greatly reduces the amount of time necessary to image a sample by simultaneously harnessing thousands of lenses.

“By recording the color spectrum of the fluorescence, we can determine how much of each fluorescing molecule is in the sample,” said Orth. “What makes our microscope particularly powerful is that it records many different colors at once, allowing researchers to highlight a large number of structures in a single experiment.”
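Determining "how much of each fluorescing molecule is in the sample" from a recorded spectrum is conventionally done by linear spectral unmixing (the paper's abstract reports unmixing of up to 6 fluorescent channels). Here is a minimal sketch of that idea in Python — illustrative only, not the authors' pipeline; the reference spectra and dye mix below are invented:

```python
# Minimal linear-unmixing sketch (illustrative only, not the authors' code).
# Assumes each measured spectrum is a mix of known dye reference spectra.
import numpy as np

rng = np.random.default_rng(0)

n_channels = 13    # spectral samples per point, as in the paper
n_dyes = 3         # hypothetical number of fluorescent dyes

# Hypothetical reference spectra, one column per dye (measured in practice).
references = np.abs(rng.normal(size=(n_channels, n_dyes)))

# Simulate one measured spectrum: a known dye mix plus camera noise.
true_abundances = np.array([0.7, 0.2, 0.1])
measured = references @ true_abundances + 0.01 * rng.normal(size=n_channels)

# Least-squares unmixing: solve references @ x ~= measured for dye abundances.
abundances, *_ = np.linalg.lstsq(references, measured, rcond=None)
print("estimated dye abundances:", np.round(abundances, 3))
```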

To demonstrate their design, the researchers applied various dyes that adhere to specific molecules within a cell sample. These dyes respond to laser light by fluorescing at specific frequencies so they can be detected and localized with high precision. Each microlens then looked at a very small part of the sample, an area about 0.6 by 0.1 millimeters in size. The raw data produced by this was a series of small images, each roughly 1,200 by 200 pixels.

These individual multicolor images were then stitched together into a large mosaic image. By simultaneously imaging 13 separate color bands, the dataset produced was nearly 17 billion pixels in size.

In scientific imaging, such multilayered files are referred to as “datacubes,” because they contain three dimensions — two spatial (the X and Y coordinates) and one dimension of color. “The dataset basically tells you how much of each color you have at any given X-Y position in the sample,” explained Orth.
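In code, such a datacube is simply a three-dimensional array. A toy NumPy sketch (with made-up dimensions far smaller than the real dataset) makes the X-Y-color indexing concrete:

```python
# A spectral datacube is just a 3D array: two spatial axes plus one color axis.
import numpy as np

ny, nx, n_bands = 400, 600, 13        # toy spatial size; 13 bands as in the paper
cube = np.random.rand(ny, nx, n_bands)

spectrum_at_point = cube[120, 250, :] # "how much of each color" at one X-Y spot
band_image = cube[:, :, 5]            # the whole sample at a single wavelength

# The real dataset: ~1.26 gigapixels spatially x 13 bands:
print(f"{1.26e9 * 13:.3g} spatial-spectral samples")  # ~1.64e10, i.e. ~16.4 billion
```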

This design is a significant improvement over regular, single-lens microscopes, which take a series of medium-sized pictures in a serial fashion. Since they cannot see the entire sample at once, it’s necessary to take one picture and then move the sample to capture the next. This means the sample has to remain still while the microscope is refocused or color filters are changed. Orth and his colleagues’ design eliminates much of this mechanical dead-time and is almost always imaging.

Multispectral fluorescence image of an entire cancer cell culture. A gradient wavelength filter is applied in post processing to visualize the full spectral nature of the dataset – 13 discrete wavelengths from red to blue. (credit: Antony Orth et al./Optica)

Speeding up drug discovery with big data

This novel approach initially presented a challenge in the data pipeline. The raw data takes the form of one-megapixel images recorded at 200 frames per second, a data rate much higher than that of current microscopes, which required the team to capture and process a tremendous amount of data each second.
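A back-of-the-envelope calculation makes that throughput concrete. The frame size and rate are from the article; the 16-bit pixel depth is an assumption:

```python
# Rough raw data rate for the stream described above (pixel depth is assumed).
pixels_per_frame = 1_000_000
frames_per_second = 200
bytes_per_pixel = 2  # assumed 16-bit camera output

rate = pixels_per_frame * frames_per_second * bytes_per_pixel
print(f"{pixels_per_frame * frames_per_second / 1e6:.0f} Mpx/s "
      f"~= {rate / 1e6:.0f} MB/s sustained to disk")
```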

Over time, fast cameras and fast hard drives have become more widely available and much cheaper, allowing for a much more affordable and efficient design. The current limiting factor is loading the recorded data from hard drives into active computer memory to produce an image. The researchers estimate that about 100 gigabytes of active memory, enough to hold the raw dataset, would speed up the entire process even further.

The goal of this technology is to speed up drug discovery. For example, to study the impact of a new cancer drug it’s essential to determine if a specific drug kills cancer cells more often than healthy cells. This requires testing the same drug on thousands to millions of cells with varying doses and under different conditions, which is normally a very time-consuming and labor-intensive task.

The new microscope presented in this paper speeds up this process while also looking at many different colors at once. “This is important because the more colors you can see, the more insightful and efficient your experiments become,” noted Orth. “Because of this, the speed-up afforded by our microscope goes beyond just the improvement in raw data rate.”

Continuing this research, the team would like to expand to live cell imaging in which billion-pixel, time-lapse movies of cells moving and responding to various stimuli could be made, opening the door to experiments that currently aren’t possible with small-scale time-lapse movies.


Abstract of Gigapixel multispectral microscopy

Understanding the complexity of cellular biology often requires capturing and processing an enormous amount of data. In high-content drug screens, each cell is labeled with several different fluorescent markers and frequently thousands to millions of cells need to be analyzed in order to characterize biology’s intrinsic variability. In this work, we demonstrate a new microlens-based multispectral microscope designed to meet this throughput-intensive demand. We report multispectral image cubes of up to 1.26 gigapixels in the spatial domain, with up to 13 spectral samples per pixel, for a total image size of 16.4 billion spatial-spectral samples. To our knowledge, this is the largest multispectral microscopy dataset reported in the literature. Our system has highly reconfigurable spectral sampling and bandwidth settings and we have demonstrated spectral unmixing of up to 6 fluorescent channels.

Continued destruction of Earth’s plant life places humankind in jeopardy, say researchers

Earth-space battery. The planet is a positive charge of stored chemical energy (cathode) in the form of fossil and nuclear fuels and biomass. As this energy is dissipated by humans, it eventually radiates as heat toward the chemical equilibrium of deep space (anode). The battery is rapidly discharging without replenishment. (credit: John R. Schramski et al./PNAS)

Unless humans slow the destruction of Earth’s declining supply of plant life, civilization as we know it may become unsustainable, according to a paper published recently by University of Georgia researchers in the Proceedings of the National Academy of Sciences.

“You can think of the Earth like a battery that has been charged very slowly over billions of years,” said the study’s lead author, John Schramski, an associate professor in UGA’s College of Engineering. “The sun’s energy is stored in plants and fossil fuels, but humans are draining energy much faster than it can be replenished.”

Number of years of phytomass food potentially available to feed the global human population (credit: John R. Schramski et al./PNAS)

Earth was once a barren landscape devoid of life, he explained, and it was only after billions of years that simple organisms evolved the ability to transform the sun’s light into energy. This eventually led to an explosion of plant and animal life that bathed the planet with lush forests and extraordinarily diverse ecosystems.

The study’s calculations are grounded in the fundamental principles of thermodynamics, a branch of physics concerned with the relationship between heat and mechanical energy. Chemical energy is stored in plants, or biomass, which is used for food and fuel, but which is also destroyed to make room for agriculture and expanding cities.

Scientists estimate that the Earth contained approximately 1,000 billion tons of carbon in living biomass 2,000 years ago. Since that time, humans have reduced that amount by almost half. It is estimated that just over 10 percent of that biomass was destroyed in just the last century.
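Taken at face value, those figures imply a crude timescale. The following is a naive linear extrapolation from the numbers quoted above — not the paper's thermodynamic model — with "almost half" approximated as 55 percent remaining:

```python
# Naive linear extrapolation from the figures quoted above (not the paper's model).
initial_stock = 1000.0                       # billion tons of carbon, ~2,000 years ago
current_stock = 0.55 * initial_stock         # "reduced that amount by almost half"
loss_per_year = 0.10 * initial_stock / 100   # "just over 10 percent ... last century"

print(f"~{current_stock / loss_per_year:.0f} years to exhaustion "
      "if the last century's loss rate simply continued")
```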

“If we don’t reverse this trend, we’ll eventually reach a point where the biomass battery discharges to a level at which Earth can no longer sustain us,” Schramski said.

Major causes: deforestation, large-scale farming, population growth

Working with James H. Brown from the University of New Mexico and with UGA’s David Gattie, an associate professor in the College of Engineering, Schramski shows that the vast majority of losses come from deforestation, hastened by the advent of large-scale mechanized farming and the need to feed a rapidly growing population. As more biomass is destroyed, the planet has less of the stored energy it needs to maintain Earth’s complex food webs and biogeochemical balances.

NASA Earth Observatory biomass map of the U.S. by Robert Simmon, generated from the National Biomass and Carbon Dataset (NBCD) assembled by scientists at the Woods Hole Research Center

“As the planet becomes less hospitable and more people depend on fewer available energy options, their standard of living and very survival will become increasingly vulnerable to fluctuations, such as droughts, disease epidemics and social unrest,” Schramski said.

Even if human beings do not go extinct, once biomass drops below sustainable thresholds the population will decline drastically, and people will be forced to return to life as hunter-gatherers or simple horticulturalists, according to the paper.

“I’m not an ardent environmentalist; my training and my scientific work are rooted in thermodynamics,” Schramski said. “These laws are absolute and incontrovertible; we have a limited amount of biomass energy available on the planet, and once it’s exhausted, there is absolutely nothing to replace it.”

Schramski and his collaborators are hopeful that recognition of the importance of biomass, elimination of its destruction and increased reliance on renewable energy will slow the steady march toward an uncertain future, but the measures required to stop that progression may have to be drastic.

The model does not take into account potential future breakthroughs in more efficient biomass use and alternate energy systems.


Abstract of Human domination of the biosphere: Rapid discharge of the earth-space battery foretells the future of humankind

Earth is a chemical battery where, over evolutionary time with a trickle-charge of photosynthesis using solar energy, billions of tons of living biomass were stored in forests and other ecosystems and in vast reserves of fossil fuels. In just the last few hundred years, humans extracted exploitable energy from these living and fossilized biomass fuels to build the modern industrial-technological-informational economy, to grow our population to more than 7 billion, and to transform the biogeochemical cycles and biodiversity of the earth. This rapid discharge of the earth’s store of organic energy fuels the human domination of the biosphere, including conversion of natural habitats to agricultural fields and the resulting loss of native species, emission of carbon dioxide and the resulting climate and sea level change, and use of supplemental nuclear, hydro, wind, and solar energy sources. The laws of thermodynamics governing the trickle-charge and rapid discharge of the earth’s battery are universal and absolute; the earth is only temporarily poised a quantifiable distance from the thermodynamic equilibrium of outer space. Although this distance from equilibrium is comprised of all energy types, most critical for humans is the store of living biomass. With the rapid depletion of this chemical energy, the earth is shifting back toward the inhospitable equilibrium of outer space with fundamental ramifications for the biosphere and humanity. Because there is no substitute or replacement energy for living biomass, the remaining distance from equilibrium that will be required to support human life is unknown.

A graphene-based molecule sensor

Shining infrared light on a graphene surface makes surface electrons oscillate in different ways that identify the specific molecule attached to the surface (credit: EPFL/Miguel Spuch/Daniel Rodrigo)

European scientists have harnessed graphene’s unique optical and electronic properties to develop a highly sensitive sensor to detect molecules such as proteins and drugs — one of the first such applications of graphene.

The results are described in an article appearing in the latest edition of the journal Science.

The researchers at EPFL’s Bionanophotonic Systems Laboratory (BIOS) and the Institute of Photonic Sciences (ICFO, Spain) used graphene to improve on a molecule-detection method called infrared absorption spectroscopy, in which infrared light is used to excite the molecules. Each type of molecule absorbs differently across the spectrum, creating a signature that can be recognized.

This method is not effective, however, in detecting molecules that are under 10 nanometers in size (such as proteins), because the mid-infrared wavelengths used are huge in comparison — 2 to 6 micrometers (2,000 to 6,000 nanometers).

Conceptual view of the graphene biosensor. An infrared beam excites a plasmon resonance across the graphene nanoribbons. Protein sensing is achieved by changing the voltage applied to the graphene and detecting a plasmon resonance spectral shift accompanied by narrow dips corresponding to the molecular vibration bands of the protein. (credit: Daniel Rodrigo et al./Science)

Resonant vibrations

With the new graphene method, the target proteins to be analyzed are attached to the graphene surface. “We pattern nanostructures on the graphene surface by bombarding it with electron beams and etching it with oxygen ions,” said Daniel Rodrigo, co-author of the publication. “When the light arrives, the electrons in graphene nanostructures begin to oscillate. This phenomenon, known as ‘localized surface plasmon resonance,’ serves to concentrate light into tiny spots, which are comparable with the [tiny] dimensions of the target molecules. It is then possible to detect nanometric structures.”

This process can also reveal the nature of the bonds connecting the atoms that the molecule is composed of. When a molecule vibrates, it does so in a range of frequencies, which are generated by the bonds connecting the different atoms. To detect these frequencies, the researchers “tuned” the graphene to different frequencies by applying voltage, which is not possible with current sensors. Making graphene’s electrons oscillate in different ways makes it possible to “read” all the vibrations of the molecule on its surface. “It gave us a full picture of the molecule,” said co-author Hatice Altug.
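A toy numerical model conveys the idea: sweep a gate voltage, shift a plasmon resonance across the mid-infrared, and read out dips where the resonance overlaps the protein's amide vibration bands. Every coefficient below is invented for illustration; only the amide I/II band positions are standard protein spectroscopy values:

```python
# Toy model of voltage-tuned plasmonic sensing; all coefficients are invented.
import numpy as np

freqs = np.linspace(1000, 2000, 2000)   # mid-infrared wavenumbers (1/cm)
amide_bands = [1550, 1660]              # protein amide II / amide I vibrations

def extinction(gate_voltage):
    """Broad plasmon resonance whose center shifts with gate voltage,
    carved by narrow dips where it overlaps the protein's vibrational bands."""
    center = 1300 + 40 * gate_voltage   # hypothetical tuning coefficient
    plasmon = 1.0 / (1 + ((freqs - center) / 80) ** 2)
    for band in amide_bands:
        plasmon *= 1 - 0.15 / (1 + ((freqs - band) / 8) ** 2)
    return plasmon

# Sweeping the voltage walks the resonance across the bands, probing each in turn.
for v in (2.0, 6.0, 9.0):
    spectrum = extinction(v)
    print(f"V={v:.0f}: resonance near {freqs[spectrum.argmax()]:.0f} 1/cm")
```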

According to the researchers, this simple method shows that it is possible to conduct, with a single device and without stressing or modifying the biological sample, a complex analysis that normally requires several different instruments. “The method should also work for polymers, and many other substances,” she added.


Abstract of Mid-infrared plasmonic biosensing with graphene

Infrared spectroscopy is the technique of choice for chemical identification of biomolecules through their vibrational fingerprints. However, infrared light interacts poorly with nanometric-size molecules. We exploit the unique electro-optical properties of graphene to demonstrate a high-sensitivity tunable plasmonic biosensor for chemically specific label-free detection of protein monolayers. The plasmon resonance of nanostructured graphene is dynamically tuned to selectively probe the protein at different frequencies and extract its complex refractive index. Additionally, the extreme spatial light confinement in graphene—up to two orders of magnitude higher than in metals—produces an unprecedentedly high overlap with nanometric biomolecules, enabling superior sensitivity in the detection of their refractive index and vibrational fingerprints. The combination of tunable spectral selectivity and enhanced sensitivity of graphene opens exciting prospects for biosensing.

Your entire viral infection history from a single drop of blood

Systematic viral epitope scanning (VirScan). This method allows comprehensive analysis of antiviral antibodies in human sera. VirScan combines DNA microarray synthesis and bacteriophage display to create a uniform, synthetic representation of peptide epitopes comprising the human virome. Immunoprecipitation and high-throughput DNA sequencing reveal the peptides recognized by antibodies in the sample. The color of each cell in the heatmap depicts the relative number of antigenic epitopes detected for a virus (rows) in each sample (columns). (credit: George J. Xu et al./Science).

New technology called VirScan, developed by Howard Hughes Medical Institute (HHMI) researchers, makes it possible to test for current and past infections with any known human virus by analyzing a single drop of a person’s blood.

With VirScan, scientists can run a single test to determine which viruses have infected an individual, rather than limiting their analysis to particular viruses. That unbiased approach could uncover unexpected factors affecting individual patients’ health, and also expands opportunities to analyze and compare viral infections in large populations.

The comprehensive analysis can be performed for about $25 per blood sample, but the test is currently being used only as a research tool and is not yet commercially available.

Stephen J. Elledge, an HHMI investigator at Brigham and Women’s Hospital, led the development of VirScan. He and his colleagues have already used VirScan to screen the blood of 569 people in the United States, South Africa, Thailand, and Peru. The scientists described the new technology and reported their findings in the June 5, 2015, issue of the journal Science.

Virus antibodies: clues that last decades

Bacteriophage (credit: Wikimedia Commons)

VirScan works by screening the blood for antibodies against any of the 206 species of viruses known to infect humans*. The immune system ramps up production of pathogen-specific antibodies when it encounters a virus for the first time, and it can continue to produce those antibodies for years or decades after it clears an infection.

That means VirScan not only identifies viral infections that the immune system is actively fighting, but also provides a history of an individual’s past infections.

To develop the new test, Elledge and his colleagues synthesized more than 93,000 short pieces of DNA encoding different segments of viral proteins. They introduced those pieces of DNA into bacteria-infecting viruses called bacteriophage.

Each bacteriophage manufactured one of the protein segments — known as a peptide — and displayed the peptide on its surface. As a group, the bacteriophage displayed all of the protein sequences found in the more than 1,000 known strains of human viruses.

To test the method, the team used it to analyze blood samples from patients known to be infected with particular viruses, including HIV and hepatitis C. “It turns out that it works really well,” Elledge says. “We were in the sensitivity range of 95 to 100 percent for those, and the specificity was good—we didn’t falsely identify people who were negative. That gave us confidence that we could detect other viruses, and when we did see them we would know they were real.”


Harvard Medical School | Viral History in a Drop of Blood

International study

Elledge and his colleagues used VirScan to analyze the antibodies in 569 people from four countries, examining about 100 million potential antibody/epitope interactions. They found that on average, each person had antibodies to ten different species of viruses. As expected, antibodies against certain viruses were common among adults but not in children, suggesting that children had not yet been exposed to those viruses. Individuals residing in South Africa, Peru, and Thailand tended to have antibodies against more viruses than people in the United States. The researchers also found that people infected with HIV had antibodies against many more viruses than did people without HIV.

Elledge says the team was surprised to find that antibody responses against specific viruses were remarkably similar between individuals, with different people’s antibodies recognizing identical amino acids in the viral peptides. “In this paper alone we identified more antibody/peptide interactions to viral proteins than had been identified in the previous history of all viral exploration,” he says. The reproducibility of those interactions allowed the team to refine their analysis and improve the sensitivity of VirScan, and Elledge says the method will continue to improve as his team analyzes more samples. Their findings on viral epitopes may also have important implications for vaccine design.

Elledge says the approach his team has developed is not limited to antiviral antibodies. His own lab is also using it to look for antibodies that attack a body’s own tissue in certain autoimmune diseases that are associated with cancer. A similar approach could also be used to screen for antibodies against other types of pathogens.

* Antibodies in the blood find their viral targets by recognizing unique features known as epitopes that are embedded in proteins on the virus surface. To perform the VirScan analysis, all of the peptide-displaying bacteriophage are allowed to mingle with a blood sample. Antiviral antibodies in the blood find and bind to their target epitopes within the displayed peptides. The scientists then retrieve the antibodies and wash away everything except for the few bacteriophage that cling to them. By sequencing the DNA of those bacteriophage, they can identify which viral protein pieces were grabbed onto by antibodies in the blood sample. That tells the scientists which viruses a person’s immune system has previously encountered, either through infection or through vaccination. Elledge estimates it would take about 2-3 days to process 100 samples, assuming sequencing is working optimally. He is optimistic the speed of the assay will increase with further development.
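The tail end of that pipeline amounts to tallying enriched peptides by their source virus. A schematic sketch with hypothetical data (the real analysis scores peptide enrichment statistically from sequencing read counts):

```python
# Final tallying step, schematically (peptide names and hits are hypothetical).
from collections import Counter

peptide_to_virus = {      # peptide -> source virus, as designed into the library
    "pep_001": "EBV", "pep_002": "EBV", "pep_003": "HSV-1",
    "pep_004": "rhinovirus A", "pep_005": "HSV-1",
}

# Peptides pulled down by one serum sample's antibodies, after enrichment filtering.
enriched = ["pep_001", "pep_002", "pep_005"]

hits = Counter(peptide_to_virus[p] for p in enriched)
MIN_PEPTIDES = 2          # require independent peptides before calling an exposure
for virus, n in hits.items():
    if n >= MIN_PEPTIDES:
        print(f"evidence of {virus} exposure ({n} peptides)")
```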


Abstract of Comprehensive serological profiling of human populations using a synthetic human virome

The human virome plays important roles in health and immunity. However, current methods for detecting viral infections and antiviral responses have limited throughput and coverage. Here, we present VirScan, a high-throughput method to comprehensively analyze antiviral antibodies using immunoprecipitation and massively parallel DNA sequencing of a bacteriophage library displaying proteome-wide peptides from all human viruses. We assayed over 10⁸ antibody-peptide interactions in 569 humans across four continents, nearly doubling the number of previously established viral epitopes. We detected antibodies to an average of 10 viral species per person and 84 species in at least two individuals. Although rates of specific virus exposure were heterogeneous across populations, antibody responses targeted strongly conserved “public epitopes” for each virus, suggesting that they may elicit highly similar antibodies. VirScan is a powerful approach for studying interactions between the virome and the immune system.

Creating DNA-based nanostructures without water

Three different DNA nanostructures assembled at room temperature in water-free glycholine (left) and in 75 percent glycholine-water mixture (center and right). The structures are (from left to right) a tall rectangle two-dimensional DNA origami, a triangle made of single-stranded tails, and a six-helix bundle three-dimensional DNA origami (credit: Isaac Gállego).

Researchers at the Georgia Institute of Technology have discovered a new process for assembling DNA nanostructures in a water-free solvent, which may allow for fabricating more complex nanoscale structures — especially nanoelectronic chips based on DNA.

Scientists have been using DNA to construct sophisticated new structures from nanoparticles (such as a recent development at Brookhaven National Labs reported by KurzweilAI May 26), but the use of DNA has required a water-based environment. That’s because DNA naturally functions inside the watery environment of living cells. However, the use of water has limited the types of structures that are possible.

The viscosity of a new solvent used for assembling DNA nanostructures (credit: Rob Felt)

In addition, the Georgia Tech researchers discovered that, paradoxically, adding a small amount of water to their water-free solvent during the assembly process (and removing it later) increases the assembly rate.* It could also allow for even more complex structures by reducing the problem of DNA becoming trapped in unintended structures by aggregation (clumping).

The new solvent they used is known as glycholine, a mixture of glycerol (used for sweetening and preserving food) and choline chloride, but the researchers are exploring other materials.

The solvent system could improve the combined use of metallic nanoparticles and DNA-based materials at room temperature. The solvent’s low volatility could also allow for storage of assembled DNA structures without the concern that a water-based medium would dry out.

The research on water-free solvents grew out of Georgia Tech researchers’ studies in the origins of life. They wondered if the molecules necessary for life, such as the ancestor of DNA, could have developed in a water-free solution. In some cases, they found, the chemistry necessary to make the molecules of life would be much easier without water being present.

Sponsored by the National Science Foundation and NASA, the research will be published as the cover story in Volume 54, Issue 23 of the journal Angewandte Chemie International Edition.

* The assembly rate of DNA nanostructures can be very slow, and depends strongly on temperature. Raising the temperature increases this rate, but temperatures that are too high can cause the DNA structures to fall apart. The solvent system developed at Georgia Tech adds a new level of control over DNA assembly. DNA structures assemble at lower temperatures in this solvent, and adding water can adjust the solvent’s viscosity (resistance to flow), which allows for faster assembly compared to the water-free version of the solvent.
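The folding times reported in the abstract below give a sense of the magnitude of the water effect:

```python
# Speedup implied by the folding times in the abstract below.
anhydrous_hours = 6 * 24   # 2D origami: six days in water-free glycholine
hydrated_hours = 3         # <= 3 hours in hydrated glycholine
print(f"hydration speeds folding by >= {anhydrous_hours / hydrated_hours:.0f}x")
```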


Abstract of Folding and Imaging of DNA Nanostructures in Anhydrous and Hydrated Deep-Eutectic Solvents

There is great interest in DNA nanotechnology, but its use has been limited to aqueous or substantially hydrated media. The first assembly of a DNA nanostructure in a water-free solvent, namely a low-volatility biocompatible deep-eutectic solvent composed of a 4:1 mixture of glycerol and choline chloride (glycholine), is now described. Glycholine allows for the folding of a two-dimensional DNA origami at 20 °C in six days, whereas in hydrated glycholine, folding is accelerated (≤3 h). Moreover, a three-dimensional DNA origami and a DNA tail system can be folded in hydrated glycholine under isothermal conditions. Glycholine apparently reduces the kinetic traps encountered during folding in aqueous solvent. Furthermore, folded structures can be transferred between aqueous solvent and glycholine. It is anticipated that glycholine and similar solvents will allow for the creation of functional DNA structures of greater complexity by providing a milieu with tunable properties that can be optimized for a range of applications and nanostructures.

Super-resolution electron microscopy of soft materials like biomaterials

CLAIRE image of Al nanostructures with an inset that shows a cluster of six Al nanostructures (credit: Lawrence Berkeley National Laboratory)

Soft matter encompasses a broad swath of materials, including liquids, polymers, gels, foam and — most importantly — biomolecules. At the heart of soft materials, governing their overall properties and capabilities, are the interactions of nano-sized components.

Observing the dynamics behind these interactions is critical to understanding key biological processes, such as protein crystallization and metabolism, and could help accelerate the development of important new technologies, such as artificial photosynthesis or high-efficiency photovoltaic cells.

Observing these dynamics at sufficient resolution has been a major challenge, but this challenge is now being met with a new non-invasive nanoscale imaging technique that goes by the acronym of CLAIRE.

CLAIRE stands for “cathodoluminescence activated imaging by resonant energy transfer.” Invented by researchers with the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California (UC) Berkeley, CLAIRE extends the extremely high resolution of electron microscopy to the dynamic imaging of soft matter.

“Traditional electron microscopy damages soft materials and has therefore mainly been used to provide topographical or compositional information about robust inorganic solids or fixed sections of biological specimens,” says chemist Naomi Ginsberg, who leads CLAIRE’s development and holds appointments with Berkeley Lab’s Physical Biosciences Division and its Materials Sciences Division, as well as UC Berkeley’s departments of chemistry and physics.

“CLAIRE allows us to convert electron microscopy into a new non-invasive imaging modality for studying soft materials and providing spectrally specific information about them on the nanoscale.”

Ginsberg is also a member of the Kavli Energy NanoScience Institute (Kavli-ENSI) at Berkeley. She and her research group recently demonstrated CLAIRE’s imaging capabilities by applying the technique to aluminum nanostructures and polymer films that could not have been directly imaged with electron microscopy.

“What microscopic defects in molecular solids give rise to their functional optical and electronic properties? By what potentially controllable process do such solids form from their individual microscopic components, initially in the solution phase? The answers require observing the dynamics of electronic excitations or of molecules themselves as they explore spatially heterogeneous landscapes in condensed phase systems,” Ginsberg says.

“In our demonstration, we obtained optical images of aluminum nanostructures with 46 nanometer resolution, then validated the non-invasiveness of CLAIRE by imaging a conjugated polymer film. The high resolution, speed and non-invasiveness we demonstrated with CLAIRE positions us to transform our current understanding of key biomolecular interactions.”

How to avoid destroying soft matter with electron beams

CLAIRE works by essentially combining the best attributes of optical and scanning electron microscopy into a single imaging platform.

Scanning electron microscopes use beams of electrons rather than light for illumination and magnification. With much shorter wavelengths than photons of visible light, electron beams can be used to observe objects hundreds of times smaller than those that can be resolved with an optical microscope. However, these electron beams destroy most forms of soft matter and are incapable of spectrally specific molecular excitation.

Ginsberg and her colleagues get around these problems by employing a process called “cathodoluminescence,” in which an ultrathin scintillating film, about 20 nanometers thick, composed of cerium-doped yttrium aluminum perovskite, is inserted between the electron beam and the sample.

When the scintillating film is excited by a low-energy electron beam (about 1 keV), it emits energy that is transferred to the sample, causing the sample to radiate. This luminescence is recorded and correlated with the electron beam position to form an image that is not restricted by the optical diffraction limit (which constrains optical microscopy).
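Conceptually, image formation here works like any scanning-probe method: the detected signal is simply indexed by beam position. A toy simulation of that bookkeeping (all values invented, purely illustrative of the scanning idea):

```python
# Toy bookkeeping for scan-based image formation: the image is the detected
# luminescence indexed by beam position. All values are invented.
import numpy as np

ny, nx = 64, 64
sample = np.zeros((ny, nx))
sample[20:30, 20:30] = 1.0    # hypothetical nanostructure that enhances emission

rng = np.random.default_rng(1)
image = np.zeros((ny, nx))
for y in range(ny):           # raster-scan the (virtual) electron beam
    for x in range(nx):
        signal = 0.1 + 0.9 * sample[y, x]           # near-field energy transfer
        image[y, x] = signal + 0.02 * rng.normal()  # detected photons + noise

print("structure vs background:", image[25, 25].round(2), "vs", image[5, 5].round(2))
```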

The CLAIRE imaging demonstration was carried out at the Molecular Foundry, a DOE Office of Science User Facility.

Observing biomolecular interactions, solar cells, and LEDs

While there is still more work to do to make CLAIRE widely accessible, Ginsberg and her group are moving forward with further refinements for several specific applications.

“We’re interested in non-invasively imaging soft functional materials like the active layers in solar cells and light-emitting devices,” she says. “It is especially true in organics and organic/inorganic hybrids that the morphology of these materials is complex and requires nanoscale resolution to correlate morphological features to functions.”

Ginsberg and her group are also working on the creation of liquid cells for observing biomolecular interactions under physiological conditions. Since electron microscopes can only operate in a high vacuum, as molecules in the air disrupt the electron beam, and since liquids evaporate in high vacuum, aqueous samples must either be freeze-dried or hermetically sealed in special cells.

“We need liquid cells for CLAIRE to study the dynamic organization of light-harvesting proteins in photosynthetic membranes,” Ginsberg says. “We should also be able to perform other studies in membrane biophysics to see how molecules diffuse in complex environments, and we’d like to be able to study molecular recognition at the single molecule level.”

In addition, Ginsberg and her group will be using CLAIRE to study the dynamics of nanoscale systems for soft materials in general. “We would love to be able to observe crystallization processes or to watch a material made of nanoscale components anneal or undergo a phase transition,” she says. “We would also love to be able to watch the electric double layer at a charged surface as it evolves, as this phenomenon is crucial to battery science.”

A paper describing the most recent work on CLAIRE has been published in the journal Nano Letters. This research was primarily supported by the DOE Office of Science and by the National Science Foundation.


Abstract of Cathodoluminescence-Activated Nanoimaging: Noninvasive Near-Field Optical Microscopy in an Electron Microscope

We demonstrate a new nanoimaging platform in which optical excitations generated by a low-energy electron beam in an ultrathin scintillator are used as a noninvasive, near-field optical scanning probe of an underlying sample. We obtain optical images of Al nanostructures with 46 nm resolution and validate the noninvasiveness of this approach by imaging a conjugated polymer film otherwise incompatible with electron microscopy due to electron-induced damage. The high resolution, speed, and noninvasiveness of this “cathodoluminescence-activated” platform also show promise for super-resolution bioimaging.

Planarian regeneration model discovered by AI algorithm

Head-trunk-tail planarian regeneration results from experiments (credit: Daniel Lobo and Michael Levin/PLOS Computational Biology)

An artificial intelligence system has for the first time reverse-engineered the regeneration mechanism of planaria — the small worms whose extraordinary power to regrow body parts has made them a research model in human regenerative medicine.

The discovery by Tufts University biologists presents the first model of regeneration discovered by a non-human intelligence and the first comprehensive model of planarian regeneration, which had eluded human scientists for more than 100 years. The work, published in the June 4 issue of PLOS Computational Biology (open access), demonstrates how “robot science” can help human scientists in the future.

To bioengineer complex organs, scientists need to understand the mechanisms by which those shapes are normally produced by the living organism.

However, there’s a significant knowledge gap between the molecular genetic components needed to produce a particular organism shape and understanding how to generate that particular complex shape in the correct size, shape and orientation, said the paper’s senior author, Michael Levin, Ph.D., Vannevar Bush professor of biology and director of the Tufts Center for Regenerative and Developmental Biology.

“Most regenerative models today derived from genetic experiments are arrow diagrams, showing which gene regulates which other gene. That’s fine, but it doesn’t tell you what the ultimate shape will be. You cannot tell if the outcome of many genetic pathway models will look like a tree, an octopus or a human,” said Levin.

“Most models show some necessary components for the process to happen, but not what dynamics are sufficient to produce the shape, step by step. What we need are algorithmic or constructive models, which you could follow precisely and there would be no mystery or uncertainty. You follow the recipe and out comes the shape.”

Such models are required to know what triggers could be applied to such a system to cause regeneration of particular components, or other desired changes in shape. However, no such tools yet exist for mining the fast-growing mountain of published experimental data in regeneration and developmental biology, said the paper’s first author, Daniel Lobo, Ph.D., post-doctoral fellow in the Levin lab.

An evolutionary computation algorithm

To address this challenge, Lobo and Levin developed an algorithm that could be used to produce regulatory networks able to “evolve” to accurately predict the results of published laboratory experiments that the researchers entered into a database.

“Our goal was to identify a regulatory network that could be executed in every cell in a virtual worm so that the head-tail patterning outcomes of simulated experiments would match the published data,” Lobo said.

The algorithm generated networks by randomly combining previous networks and performing random changes, additions and deletions. Each candidate network was tested in a virtual worm, under simulated experiments. The algorithm compared the resulting shape from the simulation with real published data in the database.

As the evolution proceeded, the new networks gradually came to explain more of the experiments in the database, which comprises most of the known planarian experimental literature on head vs. tail regeneration.
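In outline, this is a classic evolutionary (genetic) algorithm. A toy version of the loop — not the authors' implementation — with bitstrings standing in for regulatory networks and 16 binary "experiment outcomes" standing in for the database:

```python
# Toy evolutionary search in the same spirit (not the authors' implementation):
# bitstrings stand in for regulatory networks; fitness counts how many of the
# 16 database "experiments" a candidate predicts correctly.
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(net):
    return sum(a == b for a, b in zip(net, TARGET))

def mutate(net):
    child = list(net)
    child[random.randrange(len(child))] ^= 1   # flip one "regulatory link"
    return child

def crossover(a, b):
    cut = random.randrange(1, len(a))          # recombine two parent networks
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                                  # explains every experiment
    parents = population[:10]
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(20)
    ]

print(f"generation {generation}: best network explains "
      f"{fitness(population[0])}/{len(TARGET)} experiments")
```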

Regenerative model discovered by AI

Regulatory network found by the automated system, explaining the combined phenotypic (forms and characteristics) experimental data of the key publications of head-trunk-tail planarian regeneration (credit: Daniel Lobo and Michael Levin/PLOS Computational Biology)

The researchers ultimately applied the algorithm to a combined experimental dataset of 16 key planarian regeneration experiments to determine if the approach could identify a comprehensive regulatory network of planarian regeneration.

After 42 hours, the algorithm returned the discovered regulatory network, which correctly predicted all 16 experiments in the dataset. The network comprised seven known regulatory molecules as well as two proteins that had not yet been identified in existing papers on planarian regeneration.

“This represents the most comprehensive model of planarian regeneration found to date. It is the only known model that mechanistically explains head-tail polarity determination in planaria under many different functional experiments and is the first regenerative model discovered by artificial intelligence,” said Levin.

The paper represents a successful application of the growing field of “robot science.”

“While the artificial intelligence in this project did have to do a whole lot of computations, the outcome is a theory of what the worm is doing, and coming up with theories of what’s going on in nature is pretty much the most creative, intuitive aspect of the scientist’s job,” Levin said.

“One of the most remarkable aspects of the project was that the model it found was not a hopelessly-tangled network that no human could actually understand, but a reasonably simple model that people can readily comprehend. All this suggests to me that artificial intelligence can help with every aspect of science, not only data mining, but also inference of meaning of the data.”

This work was supported with funding from the National Science Foundation, National Institutes of Health, USAMRMC, and the Mathers Foundation.

Dynamically reprogramming matter

Various types of reprogramming DNA strands can be used to selectively trigger transformations to radically different phases (configurations) of the initial particle structure (credit: Brookhaven National Laboratory)

Scientists at the U.S. Department of Energy’s Brookhaven National Laboratory have developed the capability of creating dynamic nanomaterials — ones whose structure and associated properties can be switched, on-demand. In a paper appearing in Nature Materials, they describe a way to selectively rearrange nanoparticles in three-dimensional arrays to produce different configurations, or “phases,” from the same nano-components.

“One of the goals in nanoparticle self-assembly has been to create structures by design,” said Oleg Gang, who led the work at Brookhaven’s Center for Functional Nanomaterials (CFN), a DOE Office of Science User Facility. “Until now, most of the structures we’ve built have been static.” KurzweilAI covered that development in a previous article, “Creating complex structures using DNA origami and nanoparticles.”

The new advance in nanoscale engineering builds on that previous work in developing ways to get nanoparticles to self-assemble into complex composite arrays, including linking them together with tethers constructed of complementary strands of synthetic DNA.

“We know that properties of materials built from nanoparticles are strongly dependent on their arrangements,” said Gang. “Previously, we’ve even been able to manipulate optical properties by shortening or lengthening the DNA tethers. But that approach does not permit us to achieve a global reorganization of the entire structure once it’s already built.”

DNA-directed rearrangement

“Now we are trying to achieve an even more ambitious goal,” said Gang: “making materials that can transform so we can take advantage of properties that emerge with the particles’ rearrangements.”

The ability to direct particle rearrangements, or phase changes, will allow the scientists to choose the desired properties — say, the material’s response to light or a magnetic field — and switch them whenever needed. Such phase-changing materials could lead to radical new applications, such as dynamic energy-harvesting or responsive optical materials.

Injecting different kinds of reprogramming DNA strands can change the interparticle interactions in different ways depending on whether the new strands increase attraction or repulsion, or there’s a combination of these forces between particles (credit: Brookhaven National Laboratory)

In the new approach, the reprogramming DNA strands adhere to open binding sites on the already assembled nanoparticles. These strands exert additional forces on the linked-up nanoparticles.

“By introducing different types of reprogramming DNA strands, we modify the DNA shells surrounding the nanoparticles,” explained CFN postdoctoral fellow Yugang Zhang, the lead author on the paper. “Altering these shells can selectively shift the particle-particle interactions, either by increasing both attraction and repulsion, or by separately increasing only attraction or only repulsion. These reprogrammed interactions impose new constraints on the particles, forcing them to achieve a new structural organization to satisfy those constraints.”
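A schematic way to picture this: each strand type shifts the attraction/repulsion balance between particles, and the shifted balance selects the resulting phase. The mapping below is entirely hypothetical, a cartoon of the logic rather than the actual physics:

```python
# Entirely hypothetical mapping from strand type to interaction shifts, and
# from shifted interactions to the selected phase (a cartoon, not the physics).
strand_effects = {   # strand -> (added attraction, added repulsion)
    "type A": (1.0, 0.0),
    "type B": (0.0, 1.0),
    "type C": (0.8, 0.3),
}

def selected_phase(attraction, repulsion):
    net = attraction - repulsion
    if net > 0.2:
        return "more compact daughter phase"
    if net < -0.2:
        return "more open daughter phase"
    return "mother phase retained"

for strand, (da, dr) in strand_effects.items():
    print(f"{strand}: {selected_phase(da, dr)}")
```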

Using their method, the team demonstrated that they could switch their original nanoparticle array, the “mother” phase, into multiple different daughter phases with precision control.

Introducing “reprogramming” of DNA strands in an already assembled nanoparticle array triggers a transition from a “mother phase,” where particles occupy the corners and center of a cube (left), to a more compact “daughter phase” (right). The change represented in the schematic diagrams is revealed by the associated small-angle x-ray scattering patterns. Such phase-changes could potentially be used to switch a material’s properties on demand. (credit: Brookhaven National Laboratory)

DNA-based matter reprogramming

This is quite different from phase changes driven by external physical conditions such as pressure or temperature, Gang said, which typically result in single phase shifts, or sometimes sequential ones. “In those cases, to go from phase A to phase C, you first have to shift from A to B and then B to C,” said Gang. “Our method allows us to pick which daughter phase we want and go right to that one because the daughter phase is completely determined by the type of DNA reprogramming strands we use.”

The scientists were able to observe the structural transformations to various daughter phases using a technique called in situ small-angle x-ray scattering at the National Synchrotron Light Source, a DOE Office of Science User Facility that operated at Brookhaven Lab from 1982 until last September (now replaced by NSLS-II, which produces x-ray beams 10,000 times brighter). The team also used computational modeling to calculate how different kinds of reprogramming strands would alter the interparticle interactions, and found their calculations agreed well with their experimental observations.

“The ability to dynamically switch the phase of an entire superlattice array will allow the creation of reprogrammable and switchable materials wherein multiple, different functions can be activated on demand,” said Gang. “Our experimental work and accompanying theoretical analysis confirm that reprogramming DNA-mediated interactions among nanoparticles is a viable way to achieve this goal.”

This research was done in collaboration with scientists from Columbia University’s School of Engineering and Applied Science and the Indian Institute of Technology Gandhinagar. The work was funded by the DOE Office of Science.


Abstract of Selective transformations between nanoparticle superlattices via the reprogramming of DNA-mediated interactions

The rapid development of self-assembly approaches has enabled the creation of materials with desired organization of nanoscale components. However, achieving dynamic control, wherein the system can be transformed on demand into multiple entirely different states, is typically absent in atomic and molecular systems and has remained elusive in designed nanoparticle systems. Here, we demonstrate with in situ small-angle X-ray scattering that, by using DNA strands as inputs, the structure of a three-dimensional lattice of DNA-coated nanoparticles can be switched from an initial ‘mother’ phase into one of multiple ‘daughter’ phases. The introduction of different types of reprogramming DNA strands modifies the DNA shells of the nanoparticles within the superlattice, thereby shifting interparticle interactions to drive the transformation into a particular daughter phase. Moreover, we mapped quantitatively with free-energy calculations the selective reprogramming of interactions onto the observed daughter phases.

Creating complex structures using DNA origami and nanoparticles

Cluster assembled from DNA-functionalized gold nanoparticles on the vertices of an octahedral DNA origami frame (credit: Brookhaven National Laboratory)

Scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory and collaborators have developed a method using DNA for designing new customized materials with complex structures for applications in energy, optics, and medicine.

They used ropelike configurations of DNA to form a rigid geometrical framework and then added dangling pieces of single-stranded DNA to glue nanoparticles in place.

The method, described in the journal Nature Nanotechnology, produced predictable geometric configurations that are somewhat analogous to molecules made of atoms, according to Brookhaven physicist Oleg Gang, who led the project at the Lab’s Center for Functional Nanomaterials (CFN).

“While atoms form molecules based on the nature of their chemical bonds, there has been no easy way to impose such a specific spatial binding scheme on nanoparticles,” he said. “This is exactly the problem that our method addresses.”

“We may be able to design materials that mimic nature’s machinery to harvest solar energy, or manipulate light for telecommunications applications, or design novel catalysts for speeding up a variety of chemical reactions,” Gang said.

As a demonstration, the researchers used an octahedral (eight-sided) scaffold (structure) with particles positioned in precise locations on the scaffold according to specific DNA coding. They also used the geometrical clusters as building blocks for larger arrays, including linear chains and two-dimensional planar sheets.

“Our work demonstrates the versatility of this approach and opens up numerous exciting opportunities for high-yield precision assembly of tailored 3D building blocks in which multiple nanoparticles of different structures and functions can be integrated,” said CFN scientist Ye Tian, one of the lead authors on the paper.

A new DNA “origami” kit

Scientists built octahedrons using ropelike structures made of bundles of DNA double-helix molecules to form the frames (a). Single strands of DNA attached at the vertices (numbered in red) can be used to attach nanoparticles coated with complementary strands. This approach can yield a variety of structures, including ones with the same type of particle at each vertex (b), arrangements with particles placed only on certain vertices (c), and structures with different particles placed strategically on different vertices (d). (credit: Brookhaven National Laboratory)

This nanoscale construction approach takes advantage of two key characteristics of the DNA molecule: the twisted-ladder double helix shape, and the natural tendency of strands with complementary bases (the A, T, G, and C letters of the genetic code) to pair up in a precise way.

Here’s how the scientists built a complex structure with this “DNA origami” kit:

1. They created bundles of six double-helix DNA molecules.

2. They put four of these bundles together to make a stable, somewhat rigid building material — similar to the way individual fibrous strands are woven together to make a very strong rope.

3. They used these ropelike girders to form the frame of three-dimensional octahedrons, “stapling” the linear DNA chains together with hundreds of short complementary DNA strands. (“We refer to these as DNA origami octahedrons,” Gang said.)

4. To make it possible to “glue” nanoparticles to the 3D frames, the scientists engineered each of the original six-helix bundles to have one helix with an extra single-stranded piece of DNA sticking out from both ends.

5. When assembled into the 3D octahedrons, each vertex of the frame had a few of these “sticky end” tethers available for binding with objects coated with complementary DNA strands.

“When nanoparticles coated with single strand tethers are mixed with the DNA origami octahedrons, the ‘free’ pieces of DNA find one another so the bases can pair up according to the rules of the DNA complementarity code. Thus the specifically DNA-encoded particles can find their correspondingly designed place on the octahedron vertices,” Gang explained.
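The addressing rule itself is just Watson-Crick complementarity, which is easy to express in a few lines. The tether sequences below are hypothetical; only the base-pairing rule is standard:

```python
# Watson-Crick addressing in a few lines (tether sequences are hypothetical).
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    return "".join(COMPLEMENT[base] for base in reversed(seq))

vertex_a = "ATGGCAC"                      # sticky end displayed at one vertex
vertex_b = "TTAGGCA"                      # a differently encoded vertex
particle = reverse_complement(vertex_a)   # strand coating one nanoparticle type

# The particle binds only the vertex whose tether it complements:
for name, tether in (("vertex A", vertex_a), ("vertex B", vertex_b)):
    print(f"particle binds {name}:", particle == reverse_complement(tether))
```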

A combination cryo-electron microscopy image of an octahedral frame with one gold nanoparticle bound to each of the six vertices, shown from three different angles. (Credit: Brookhaven National Laboratory)

The scientists can also change what binds to each vertex by changing the DNA sequences encoded on the tethers. In one experiment, they encoded the same sequence on all the octahedron’s tethers, and attached strands with a complementary sequence to gold nanoparticles. The result: one gold nanoparticle attached to each of the octahedron’s six vertices.

In additional experiments, the scientists changed the sequence of some vertices and used complementary strands on different kinds of particles, illustrating that they could direct the assembly and arrangement of the particles in a very precise way.

By strategically placing tethers on particular vertices, the scientists used the octahedrons to link nanoparticles into one-dimensional chainlike arrays (left) and two-dimensional square sheets (right). (Credit: Brookhaven National Laboratory)

In one case, they made two different arrangements of the same three pairs of particles of different sizes, producing products with different optical properties. They were even able to use DNA tethers on selected vertices to link octahedrons end-to-end, forming chains, and in 2D arrays, forming sheets.

Visualizing the structures

TEM image of part of the 1D array (credit: Brookhaven National Lab)

Confirming the particle arrangements and structures was a major challenge because the nanoparticles and the DNA molecules making up the frames have very different densities. Certain microscopy techniques would reveal only the particles, while others would distort the 3D structures.

To see both the particles and origami frames, the scientists used cryo-electron microscopy (cryo-EM), led by Brookhaven Lab and Stony Brook University biologist Huilin Li, an expert in this technique, and Tong Wang, the paper’s other lead co-author, who works in Brookhaven’s Biosciences department with Li.

They had to subtract information from the images to “see” the different density components separately, then combine the information using single particle 3D reconstruction and tomography to produce the final images.

This research was supported by the DOE Office of Science.


Abstract of Prescribed nanoparticle cluster architectures and low-dimensional arrays built using octahedral DNA origami frames

Three-dimensional mesoscale clusters that are formed from nanoparticles spatially arranged in pre-determined positions can be thought of as mesoscale analogues of molecules. These nanoparticle architectures could offer tailored properties due to collective effects, but developing a general platform for fabricating such clusters is a significant challenge. Here, we report a strategy for assembling three-dimensional nanoparticle clusters that uses a molecular frame designed with encoded vertices for particle placement. The frame is a DNA origami octahedron and can be used to fabricate clusters with various symmetries and particle compositions. Cryo-electron microscopy is used to uncover the structure of the DNA frame and to reveal that the nanoparticles are spatially coordinated in the prescribed manner. We show that the DNA frame and one set of nanoparticles can be used to create nanoclusters with different chiroptical activities. We also show that the octahedra can serve as programmable interparticle linkers, allowing one- and two-dimensional arrays to be assembled with designed particle arrangements.

The geometry of immune system cloaking

Cloaking materials: The sugar polymers that make up the spheres in this image are designed to package and protect specially engineered cells that produce drugs and fight disease while remaining undetected by the body’s natural defense system. However, the reddish markers on the spheres’ surfaces indicate that immune cells (blue/green) have discovered these invaders and begun to block them off from the rest of the body. (credit: MIT researchers)

A team of MIT researchers has come up with a way to reduce immune-system rejection of implantable devices used for drug delivery, tissue engineering, or sensing.

Previous research had found that smooth surfaces, especially spheres, perform better — but counterintuitively, the researchers discovered that larger spheres actually work better at reducing scar tissue.

“We were surprised by how much the size and shape of an implant can affect its triggering of an immune response. What it’s made of is still an important piece of the puzzle, but it turns out if you really want to have the least amount of scar tissue you need to pick the right size and shape,” says Daniel Anderson, the Samuel A. Goldblith Associate Professor in MIT’s Department of Chemical Engineering, a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science (IMES), and the paper’s senior author.

Tests of spheres

The study grew out of the researchers’ efforts to build an artificial pancreas. The goal is to deliver pancreatic islet cells encapsulated within a particle made of alginate — a polysaccharide (sugar) naturally found in algae — or another material. These implanted cells could replace patients’ pancreatic islet cells, which are nonfunctional in Type I diabetes.

Increasing the spherical diameter of a variety of materials including hydrogels, ceramics, metals and plastics (a) — scale bar: 2 millimeters — results in reduced foreign-body responses (b) — scale bar: 300 micrometers (credit: Omid Veiseh et al./Nature Materials)

The researchers tested spheres in two sizes — 0.5 and 1.5 millimeters in diameter. In tests in diabetic mice, the spheres were implanted within the abdominal cavity, and the researchers tracked the devices’ ability to respond accurately to changes in glucose levels. The devices prepared with the smaller spheres were completely surrounded by scar tissue and failed after about a month, while the larger ones were not rejected and continued to function for more than six months.

The larger spheres also evaded the immune response in tests in nonhuman primates. Smaller spheres implanted under the skin were engulfed by scar tissue after only two weeks, while the larger ones remained clear for up to four weeks.

A universal size effect

This effect was seen not only with alginate, but also with spheres made of stainless steel, glass, polystyrene, and polycaprolactone, a type of polyester. “We realized that regardless of what the composition of the material is, this effect still persists, and that made it a lot more exciting because it’s a lot more generalizable,” said Koch Institute postdoc Omid Veiseh, one of the lead authors of a paper in the May 18 issue of Nature Materials.

The researchers believe this finding could also be applicable to any other type of implantable device, including drug-delivery vehicles and sensors for glucose and insulin, which could also help improve diabetes treatment. Optimizing particle size and shape could also help guide scientists in developing other types of implantable cells for treating diseases other than diabetes.

The research was funded by the Juvenile Diabetes Research Foundation, the Leona M. and Harry B. Helmsley Charitable Trust Foundation, the National Institutes of Health, the Koch Institute Support Grant from the National Cancer Institute, and the Tayebati Family Foundation. Veiseh was also supported by the Department of Defense.


Abstract of Size- and shape-dependent foreign body immune response to materials implanted in rodents and non-human primates

The efficacy of implanted biomedical devices is often compromised by host recognition and subsequent foreign body responses. Here, we demonstrate the role of the geometry of implanted materials on their biocompatibility in vivo. In rodent and non-human primate animal models, implanted spheres 1.5 mm and above in diameter across a broad spectrum of materials, including hydrogels, ceramics, metals and plastics, significantly abrogated foreign body reactions and fibrosis when compared with smaller spheres. We also show that for encapsulated rat pancreatic islet cells transplanted into streptozotocin-treated diabetic C57BL/6 mice, islets prepared in 1.5-mm alginate capsules were able to restore blood-glucose control for up to 180 days, a period more than five times longer than for transplanted grafts encapsulated within conventionally sized 0.5-mm alginate capsules. Our findings suggest that the in vivo biocompatibility of biomedical devices can be significantly improved simply by tuning their spherical dimensions.