New mass spectral imaging instrument maps cells’ composition in 3-D at more than 100 times higher resolution

A mass spectral imaging instrument developed at Colorado State University (credit: William Cotton/Colorado State University)

A one-of-a-kind mass spectral imaging instrument built at Colorado State University (CSU) lets scientists map cellular composition in three dimensions at nanoscale resolution (75 nanometers laterally and 20 nanometers in depth), more than 100 times higher resolution than was previously possible, according to the scientists.
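
As a rough sanity check on those figures, the short calculation below treats the probed spot as a cylinder matching the stated 75-nanometer lateral and 20-nanometer depth resolution (our simplification, not the paper's crater model); it lands on the same order of magnitude as the roughly 50-zeptoliter single-shot volume reported in the paper's abstract, reproduced further below.

```python
import math

# Assumption (ours, not the paper's): model the single-shot probed volume as a
# cylinder whose diameter and depth equal the stated lateral and depth resolution.
diameter_nm, depth_nm = 75.0, 20.0

volume_nm3 = math.pi * (diameter_nm / 2) ** 2 * depth_nm   # cylinder volume in nm^3
volume_zl = volume_nm3 * 1e-24 / 1e-21                     # 1 nm^3 = 1e-24 L; 1 zL = 1e-21 L

print(f"probed volume ~ {volume_nm3:,.0f} nm^3 ~ {volume_zl:.0f} zeptoliters")
```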

The instrument may be able to observe how well experimental drugs penetrate and are processed by cells as new medications are developed to combat disease, customize treatments for specific cell types in specific conditions, identify the sources of pathogens propagated for bioterrorism, or investigate new ways to overcome antibiotic resistance among patients with surgical implants, according to professor Dean Crick of the CSU Mycobacteria Research Laboratories.

Crick’s primary research interest is tuberculosis, an infectious respiratory disease that contributes to an estimated 1.5 million deaths around the world each year. “We’ve developed a much more refined instrument,” Crick said. “It’s like going from using a dull knife to using a scalpel. You could soak a cell in a new drug and see how it’s absorbed, how quickly, and how it affects the cell’s chemistry.”

(a) Schematic showing the focused extreme ultraviolet (EUV) laser beam ablating (removing material from the surface of) a sample to produce an ion stream that is analyzed by a mass spectrometer. (b) Atomic force microscope (AFM) images of craters ablated in polymethyl methacrylate (PMMA) by single EUV laser shots at different irradiation fluences. The craters show smooth profiles with no signs of thermal damage. (c) Schematic of the instrument setup, including the collimating extreme ultraviolet laser optics, focusing zone plate, and spectrometer. (credit: Ilya Kuznetsov et al./Nature Communications)

The earlier generation of laser-based mass-spectral imaging could identify the chemical composition of a cell and could map its surface in two dimensions at microscale (about one micrometer), but could not chart cellular anatomy at more-detailed nanoscale dimensions and in 3-D, Crick said.

The research is described in an open-access paper in Nature Communications and was funded by a $1 million grant from the National Institutes of Health as part of an award to the Rocky Mountain Regional Center of Excellence for Biodefense and Emerging Infectious Disease Research. The optical equipment that focuses the laser beam was created by the Center for X-Ray Optics at the Lawrence Berkeley National Laboratory in Berkeley, Calif.

A special issue of Optics and Photonics News this month highlights the CSU research as among “the most exciting peer-reviewed optics research to have emerged over the past 12 months.”


CSU College of Veterinary Medicine and Biomedical Sciences | Nanoscale Mass-Spectral Imaging in 3-D at Colorado State University


Abstract of Three-dimensional nanoscale molecular imaging by extreme ultraviolet laser ablation mass spectrometry

Analytical probes capable of mapping molecular composition at the nanoscale are of critical importance to materials research, biology and medicine. Mass spectral imaging makes it possible to visualize the spatial organization of multiple molecular components at a sample’s surface. However, it is challenging for mass spectral imaging to map molecular composition in three dimensions (3D) with submicron resolution. Here we describe a mass spectral imaging method that exploits the high 3D localization of absorbed extreme ultraviolet laser light and its fundamentally distinct interaction with matter to determine molecular composition from a volume as small as 50 zl in a single laser shot. Molecular imaging with a lateral resolution of 75 nm and a depth resolution of 20 nm is demonstrated. These results open opportunities to visualize chemical composition and chemical changes in 3D at the nanoscale.

Will this DNA molecular switch replace conventional transistors?

A model of one form of double-stranded DNA attached to two electrodes (credit: UC Davis)

What do you call a DNA molecule that changes between high and low electrical conductance (amount of current flow)?

Answer: a molecular switch (transistor) for nanoscale computing. That’s what a team of researchers from the University of California, Davis and the University of Washington have documented in a paper published in Nature Communications Dec. 9.

“As electronics get smaller they are becoming more difficult and expensive to manufacture, but DNA-based devices could be designed from the bottom-up using directed self-assembly techniques such as ‘DNA origami’,” said Josh Hihath, assistant professor of electrical and computer engineering at UC Davis and senior author on the paper.

DNA origami is the folding of DNA to create two- and three-dimensional shapes at the nanoscale level (see DNA origami articles on KurzweilAI).

Hihath suggests that DNA-based devices could also improve the energy efficiency of electronic circuits. In conventional transistor technology, on-chip power density has risen as transistors have shrunk, which limits further miniaturization.

This illustration shows double-stranded DNA in two configurations, B-form (blue) and A-form (green), bound to gold electrodes (yellow). The linkers to the electrodes (either amines or thiols) are shown in orange. (credit: Juan Manuel Artés et al./Nature Communications)

To develop DNA into a reversible switch, the scientists focused on switching between two stable conformations of DNA, known as the A-form and the B-form. In DNA, the B-form is the conventional DNA duplex molecule. The A-form is a more compact version with different spacing and tilting between the base pairs. Exposure to ethanol forces the DNA into the A-form conformation, resulting in increased conductance. Removing the ethanol causes the DNA to switch back to the B-form and return to its original reduced conductance value.
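
For readers who want to play with the idea, here is a minimal sketch (not the authors' model or measurements) of the reversible switch described above: a two-state toggle whose relative conductance rises roughly tenfold when "ethanol" drives it into the A-form and falls back when "water" restores the B-form. The conductance numbers are illustrative placeholders only.

```python
# Toy two-state model of the reported behavior: B-form DNA in aqueous buffer has a
# baseline conductance; switching to the A-form (induced here by "ethanol") raises
# it by about an order of magnitude, reversibly. Values are placeholders, not data.

class DnaConformationalSwitch:
    # Relative conductance levels; the paper reports roughly a 10x difference.
    CONDUCTANCE = {"B-form": 1.0, "A-form": 10.0}

    def __init__(self):
        self.form = "B-form"            # conventional duplex in aqueous solution

    def set_environment(self, solvent):
        # Ethanol drives the duplex into the compact A-form; water restores the B-form.
        self.form = "A-form" if solvent == "ethanol" else "B-form"

    @property
    def conductance(self):
        return self.CONDUCTANCE[self.form]


if __name__ == "__main__":
    switch = DnaConformationalSwitch()
    for cycle, solvent in enumerate(["ethanol", "water"] * 3, start=1):
        switch.set_environment(solvent)
        print(f"cycle {cycle}: {solvent:7s} -> {switch.form}, "
              f"relative conductance = {switch.conductance:.1f}")
```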

But the authors caution that developing this finding into a technologically viable platform for electronics will require a great deal of work to overcome two major hurdles: billions of active DNA molecular devices must be integrated into the same circuit, as is done currently in conventional electronics; and scientists must be able to gate specific devices individually in such a large system.


Abstract of Conformational gating of DNA conductance

DNA is a promising molecule for applications in molecular electronics because of its unique electronic and self-assembly properties. Here we report that the conductance of DNA duplexes increases by approximately one order of magnitude when its conformation is changed from the B-form to the A-form. This large conductance increase is fully reversible, and by controlling the chemical environment, the conductance can be repeatedly switched between the two values. The conductance of the two conformations displays weak length dependencies, as is expected for guanine-rich sequences, and can be fit with a coherence-corrected hopping model. These results are supported by ab initio electronic structure calculations that indicate that the highest occupied molecular orbital is more disperse in the A-form DNA case. These results demonstrate that DNA can behave as a promising molecular switch for molecular electronics applications and also provide additional insights into the huge dispersion of DNA conductance values found in the literature.

How to create a synthesized actor performance in post-production

Given a pair of facial performances (horizontal and vertical faces, left), a new performance (film strip, right) can be blended (credit: Charles Malleson et al./Disney Research)

Disney Research has devised a way to blend an actor’s facial performances from a few or multiple takes to allow a director to get just the right emotion, instead of re-shooting the scene multiple times.

“It’s not unheard of for a director to re-shoot a crucial scene dozens of times, even 100 or more times, until satisfied,” said Markus Gross, vice president of research at Disney Research. “That not only takes a lot of time — it also can be quite expensive. Now our research team has shown that a director can exert control over an actor’s performance after the shoot with just a few takes, saving both time and money.”

And the work can be done in post-production, rather than on an expensive film set.

How FaceDirector works

Developed jointly with the University of Surrey, the system, called FaceDirector, works with normal 2D video input acquired by standard cameras, without the need for additional hardware or 3D face reconstruction.

“The central challenge for combining an actor’s performances from separate takes is video synchronization,” said Jean-Charles Bazin, associate research scientist at Disney Research. “But differences in head pose, emotion, expression intensity, as well as pitch accentuation and even the wording of the speech, are just a few of many difficulties in syncing video takes.”

The system analyzes both facial expressions and audio cues, then identifies frames that correspond between the takes, using a graph-based framework. Once this synchronization has occurred, the system enables a director to control the performance by choosing the desired facial expressions and timing from either video, which are then blended together using facial landmarks, optical flow, and compositing.
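
As a rough illustration of the synchronization step, the sketch below uses dynamic time warping (DTW) over per-frame feature vectors (which might concatenate audio descriptors with facial-landmark positions) to compute dense frame correspondences between two takes. DTW is a standard stand-in here, not the graph-based framework from the paper, and the feature construction is left abstract.

```python
import numpy as np

# Hedged sketch: dense temporal correspondences between two takes via dynamic time
# warping over per-frame audio+facial features. Not the authors' exact algorithm.

def dtw_alignment(features_a, features_b):
    """Return a list of (i, j) frame correspondences between two takes.

    features_a, features_b: arrays of shape (n_frames, feature_dim).
    """
    n, m = len(features_a), len(features_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(features_a[i - 1] - features_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])

    # Backtrack the optimal warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]


# Toy usage: two "takes" with 8-dimensional per-frame features.
take_a = np.random.rand(100, 8)
take_b = np.random.rand(110, 8)
correspondences = dtw_alignment(take_a, take_b)
print(f"{len(correspondences)} frame correspondences found")
```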


Disney Research | FaceDirector: Continuous Control of Facial Performance in Video

To test the system, actors performed several lines of dialog, repeating the performances to convey different emotions – happiness, sadness, excitement, fear, anger, etc. The line readings were captured in HD resolution using standard compact cameras. The researchers were able to synchronize the videos in real-time and automatically on a standard desktop computer. Users could generate novel versions of the performances by interactively blending the video takes.

Multiple uses

The researchers showed how it could be used for a variety of purposes, including generation of multiple performances from just a few video takes (for use elsewhere in the video), for script correction and editing, and switching between voices (for example to create an entertaining performance with a sad voice over a happy face).

Speculation: It might also be possible to use this to create a fake video in which a person’s different facial expressions are combined, along with audio clips, to make a person show apparently inappropriate emotions, for example.

The researchers will present their findings at ICCV 2015, the International Conference on Computer Vision, Dec. 11–18, in Santiago, Chile.


Abstract of FaceDirector: Continuous Control of Facial Performance in Video

We present a method to continuously blend between multiple facial performances of an actor, which can contain different facial expressions or emotional states. As an example, given sad and angry video takes of a scene, our method empowers a movie director to specify arbitrary weighted combinations and smooth transitions between the two takes in post-production. Our contributions include (1) a robust nonlinear audio-visual synchronization technique that exploits complementary properties of audio and visual cues to automatically determine robust, dense spatio-temporal correspondences between takes, and (2) a seamless facial blending approach that provides the director full control to interpolate timing, facial expression, and local appearance, in order to generate novel performances after filming. In contrast to most previous works, our approach operates entirely in image space, avoiding the need of 3D facial reconstruction. We demonstrate that our method can synthesize visually believable performances with applications in emotion transition, performance correction, and timing control.

Musk, others commit $1 billion to non-profit AI research company to ‘benefit humanity’

(credit: OpenAI)

Elon Musk and associates announced OpenAI, a non-profit AI research company, on Friday (Dec. 11), committing $1 billion toward their goal to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

The funding comes from a group of tech leaders including Musk, Reid Hoffman, Peter Thiel, and Amazon Web Services, but the venture expects to only spend “a tiny fraction of this in the next few years.”

The founders note that it’s hard to predict how much AI could “damage society if built or used incorrectly” or how soon. But the hope is to have a leading research institution that can “prioritize a good outcome for all over its own self-interest … as broadly and evenly distributed as possible.”

Brains trust

OpenAI’s co-chairs are Musk, who is also the principal funder of Future of Life Institute, and Sam Altman, president of venture-capital seed-accelerator firm Y Combinator, who is also providing funding.

“I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower.” — Elon Musk on Medium

The founders say the organization’s patents (if any) “will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies.”

OpenAI’s research director is machine learning expert Ilya Sutskever, formerly at Google, and its CTO is Greg Brockman, formerly the CTO of Stripe. The group’s other founding members are “world-class research engineers and scientists” Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba. Pieter Abbeel, Yoshua Bengio, Alan Kay, Sergey Levine, and Vishal Sikka are advisors to the group. The company will be based in San Francisco.


If I’m Dr. Evil and I use it, won’t you be empowering me?

“There are a few different thoughts about this. Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it’s far more likely that many, many AIs will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else. If that one thing goes off the rails or if Dr. Evil gets that one thing and there is nothing to counteract it, then we’re really in a bad place.” — Sam Altman in an interview with Steven Levy on Medium.


The announcement follows recent announcements by Facebook to open-source the hardware design of its GPU-based “Big Sur” AI server (used for large-scale machine learning software to identify objects in photos and understand natural language, for example); by Google to open-source its TensorFlow machine-learning software; and by Toyota Corporation to invest $1 billion in a five-year private research effort in artificial intelligence and robotics technologies, jointly with Stanford University and MIT.

To follow OpenAI: @open_ai or info@openai.com

Worm research in life extension leads scientists to discover new metric to track aging

C. elegans roundworm (credit: The Goldstein Lab)

When researchers at The Scripps Research Institute (TSRI) in California administered an antidepressant called mianserin to the Caenorhabditis elegans roundworm in 2007, they discovered that the drug extended the roundworms’ “young adulthood,” increasing their lifespan by 30–40 percent.

So, does that mean it will work in humans? Not necessarily. “There are millions of years of evolution between worms and humans,” says TSRI researcher Michael Petrascheck. “We may have done this in worms, but we don’t want people to get the impression they can take the drug we used in our study to extend their own teens or early twenties.”

Nonetheless, the researchers are now aiming to find out how the drug worked. In a study published Dec. 1 in an open-access article in the journal eLife, the researchers report they treated thousands of worms with either water or mianserin. Then they looked at the activity of genes as the worms aged, compared to the activity of genes in young adults.

Curiously, as the worms aged, the team observed dramatic changes in gene expression: groups of genes that together play a role in the same function were found to unpredictably change expression in opposing directions — making it difficult to predict the effect of drugs, for example.

Extending youth only works at the right time of life

They’ve called this phenomenon “transcriptional drift.” And by examining data from mice and from 32 human brains aged 26 to 106 years, they confirmed that it also occurs in mammals.

And that means transcriptional drift can be used as a new metric for measuring age-associated changes that start in young adulthood, they believe. Using this new metric in their worm research revealed that treatment with mianserin can suppress transcriptional drift, but only when administered at the right time of life.
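
To make the metric concrete, here is one plausible way to score transcriptional drift, assuming it is measured as the spread of log fold-changes (aged vs. young-adult expression) within a functional gene group; the scoring function and the toy numbers are ours, not the study's.

```python
import numpy as np

# Hedged sketch: genes in the same functional group drifting apart in opposite
# directions with age. Drift is scored here as the variance of log fold-changes
# (aged vs. young-adult expression) within the group; a perfectly preserved young
# transcriptome gives a variance near zero. Names and values are illustrative.

def drift_variance(young_expr, aged_expr):
    """young_expr, aged_expr: 1-D arrays of expression for genes in one functional group."""
    log_fold_change = np.log2(aged_expr / young_expr)
    return np.var(log_fold_change)

# Toy example: a group of 6 co-regulated genes.
young = np.array([100.0, 80.0, 120.0, 95.0, 110.0, 105.0])
aged_untreated = young * np.array([2.1, 0.4, 1.8, 0.5, 2.5, 0.3])   # genes drift apart
aged_treated = young * np.array([1.1, 0.9, 1.05, 0.95, 1.1, 0.9])   # drift suppressed

print("drift (untreated):", round(drift_variance(young, aged_untreated), 3))
print("drift (drug-treated):", round(drift_variance(young, aged_treated), 3))
```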

By 10 days old, treated worms still had the gene expression characteristics of a three-day-old — physiologically they were seven days younger. But by 12 days, the physiological changes required to extend lifespan were complete and lifelong exposure to the drug had no additional effect.

So how does this work? They suggest that mianserin blocked signals related to the regulation of serotonin and this delayed physiological changes associated with age, including transcriptional drift and degenerative processes that lead to death. But the effect only occurred during young adulthood; the duration of this period of life was significantly extended.

What about us mammals?

(credit: Kapa65/Pixabay CC)

“How much of our findings with regards to lifespan extension will spill over to mammals is anyone’s guess; for example, the extension of lifespan might not be as dramatic,” says Petrascheck.

Meanwhile, the anomalous findings have opened up new avenues of research for the team and are likely to spawn a wealth of research by others.

A significant next step for the team will be to test the effect in mice and to investigate whether there are any side effects. Different environments could also produce different results and this will need to be explored. They would also like to test whether the impact is different for different organs in the body.


Abstract of Suppression of transcriptional drift extends C. elegans lifespan by postponing the onset of mortality

Longevity mechanisms increase lifespan by counteracting the effects of aging. However, whether longevity mechanisms counteract the effects of aging continually throughout life, or whether they act during specific periods of life, preventing changes that precede mortality is unclear. Here, we uncover transcriptional drift, a phenomenon that describes how aging causes genes within functional groups to change expression in opposing directions. These changes cause a transcriptome-wide loss in mRNA stoichiometry and loss of co-expression patterns in aging animals, as compared to young adults. Using Caenorhabditis elegans as a model, we show that extending lifespan by inhibiting serotonergic signals by the antidepressant mianserin attenuates transcriptional drift, allowing the preservation of a younger transcriptome into an older age. Our data are consistent with a model in which inhibition of serotonergic signals slows age-dependent physiological decline and the associated rise in mortality levels exclusively in young adults, thereby postponing the onset of major mortality.

MIT invention could boost resolution of 3-D depth cameras 1,000-fold

By combining the information from the Kinect depth frame in (a) with polarized photographs, MIT researchers reconstructed the 3-D surface shown in (c). Polarization cues can allow coarse depth sensors like Kinect to achieve laser scan quality (b). (credit: courtesy of the researchers)

MIT researchers have shown that by exploiting light polarization (as in polarized sunglasses) they can increase the resolution of conventional 3-D imaging devices such as the Microsoft Kinect as much as 1,000 times.

The technique could lead to high-quality 3-D cameras built into cellphones, and perhaps the ability to snap a photo of an object and then use a 3-D printer to produce a replica. Further out, the work could also help the development of driverless cars.

Headed by Ramesh Raskar, associate professor of media arts and sciences in the MIT Media Lab, the researchers describe the new system, which they call Polarized 3D, in a paper they’re presenting at the International Conference on Computer Vision in December.

How polarized light works

If an electromagnetic wave can be thought of as an undulating squiggle, polarization refers to the squiggle’s orientation. It could be undulating up and down, or side to side, or somewhere in-between.

Polarization also affects the way in which light bounces off of physical objects. If light strikes an object squarely, much of it will be absorbed, but whatever reflects back will have the same mix of polarizations (horizontal and vertical) that the incoming light did. At wider angles of reflection, however, light within a certain range of polarizations is more likely to be reflected.

This is why polarized sunglasses are good at cutting out glare: Light from the sun bouncing off asphalt or water at a low angle features an unusually heavy concentration of light with a particular polarization. So the polarization of reflected light carries information about the geometry of the objects it has struck.

This relationship has been known for centuries, but it’s been hard to do anything with it, because of a fundamental ambiguity about polarized light. Light with a particular polarization, reflecting off of a surface with a particular orientation and passing through a polarizing lens is indistinguishable from light with the opposite polarization, reflecting off of a surface with the opposite orientation.

This means that for any surface in a visual scene, measurements based on polarized light offer two equally plausible hypotheses about its orientation. Canvassing all the possible combinations of either of the two orientations of every surface, in order to identify the one that makes the most sense geometrically, is a prohibitively time-consuming computation.

Polarization plus depth sensing

To resolve this ambiguity, the Media Lab researchers use coarse depth estimates provided by some other method, such as the time a light signal takes to reflect off of an object and return to its source. Even with this added information, calculating surface orientation from measurements of polarized light is complicated, but it can be done in real-time by a graphics processing unit, the type of special-purpose graphics chip found in most video game consoles.

The researchers’ experimental setup consisted of a Microsoft Kinect — which gauges depth using reflection time — with an ordinary polarizing photographic lens placed in front of its camera. In each experiment, the researchers took three photos of an object, rotating the polarizing filter each time, and their algorithms compared the light intensities of the resulting images.
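
For the curious, the sketch below shows the standard way such a three-angle measurement can be turned into per-pixel polarization cues: the degree of polarization and an azimuth estimate that still carries the 180-degree ambiguity discussed above, which the coarse depth is then used to resolve. It follows the textbook shape-from-polarization intensity model, not the exact Polarized 3D pipeline.

```python
import numpy as np

# Hedged sketch: per-pixel polarization cues from three photos taken through a
# linear polarizer at 0, 45 and 90 degrees, using the standard intensity model
#   I(phi_pol) = C0 + C1*cos(2*phi_pol) + C2*sin(2*phi_pol).

def polarization_cues(i0, i45, i90):
    """i0, i45, i90: 2-D intensity images at polarizer angles 0, 45, 90 degrees."""
    c0 = (i0 + i90) / 2.0               # unpolarized (average) intensity
    c1 = (i0 - i90) / 2.0               # cos(2*phi) component
    c2 = i45 - c0                        # sin(2*phi) component
    degree = np.sqrt(c1**2 + c2**2) / np.maximum(c0, 1e-9)   # degree of polarization
    azimuth = 0.5 * np.arctan2(c2, c1)   # surface-azimuth estimate, ambiguous by 180 deg
    return degree, azimuth

# Toy usage with random "images" (so the values are not physically meaningful);
# in practice these would be the three rotated-filter photos from the experiment.
rng = np.random.default_rng(0)
i0, i45, i90 = (rng.uniform(0.2, 1.0, (4, 4)) for _ in range(3))
degree, azimuth = polarization_cues(i0, i45, i90)
print(degree.round(2))
print(np.degrees(azimuth).round(1))
```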

On its own, at a distance of several meters, the Kinect can resolve physical features as small as a centimeter or so across. But with the addition of the polarization information, the researchers’ system could resolve features in the range of hundreds of micrometers, or one-thousandth the size.

For comparison, the researchers also imaged several of their test objects with a high-precision laser scanner, which requires that the object be inserted into the scanner bed. Polarized 3D still offered the higher resolution.

Uses in cameras and self-driving cars

A mechanically rotated polarization filter would probably be impractical in a cellphone camera, but grids of tiny polarization filters that can overlay individual pixels in a light sensor are commercially available. Capturing three pixels’ worth of light for each image pixel would reduce a cellphone camera’s resolution, but no more than the color filters that existing cameras already use.

The new paper also offers the tantalizing prospect that polarization systems could aid the development of self-driving cars. Today’s experimental self-driving cars are, in fact, highly reliable under normal illumination conditions, but their vision algorithms go haywire in rain, snow, or fog.

That’s because water particles in the air scatter light in unpredictable ways, making it much harder to interpret. The MIT researchers show that in some very simple test cases their system can exploit information contained in interfering waves of light to handle scattering.


Abstract of Polarized 3D: High-Quality Depth Sensing with Polarization Cues

Coarse depth maps can be enhanced by using the shape information from polarization cues. We propose a framework to combine surface normals from polarization (hereafter polarization normals) with an aligned depth map. Polarization normals have not been used for depth enhancement before. This is because polarization normals suffer from physics-based artifacts, such as azimuthal ambiguity, refractive distortion and fronto-parallel signal degradation. We propose a framework to overcome these key challenges, allowing the benefits of polarization to be used to enhance depth maps. Our results demonstrate improvement with respect to state-of-the-art 3D reconstruction techniques.

Shaking out the nanomaterials: a new method to purify water

Extracting one- and two-dimensional nanomaterials from contaminated water (credit: Michigan Tech)

A new study published in the American Chemical Society’s journal Applied Materials and Interfaces has found a novel—and very simple—way to remove nearly 100 percent of nanomaterials from water.

Water and oil don’t mix, of course, but shaking them together is what makes salad dressing so great. Only instead of emulsifying and capturing bits of shiitake or basil in tiny olive oil bubbles, this mixture grabs nanomaterials.

Dongyan Zhang, a research professor of physics at Michigan Tech, led the experiments, which covered tests on carbon nanotubes, graphene, boron nitride nanotubes, boron nitride nanosheets and zinc oxide nanowires. Those are used in everything from carbon fiber golf clubs to sunscreen.

“These materials are very, very tiny, and that means if you try to remove them and clean them out of contaminated water, that it’s quite difficult,” Zhang says, adding that techniques like filter paper or meshes often don’t work.

What makes shaking work is the shape of one- and two-dimensional nanomaterials. As the oil and water separate after some rigorous shaking, the wires, tubes and sheets settle at the bottom of the oil, just above the water. The oils trap them. However, zero-dimensional nanomaterials, such as nanospheres, do not get trapped.

Green Nanotechnology

We don’t have to wait until the final vote is in on whether nanomaterials help or harm human health and the environment. Given the simplicity of this technique, and how prolific nanomaterials are becoming, removing them from water makes sense. Also, finding ways to effectively remove nanomaterials sooner rather than later could improve the technology’s market potential.

“Ideally for a new technology to be successfully implemented, it needs to be shown that the technology does not cause adverse effects to the environment,” the authors write. “Therefore, unless the potential risks of introducing nanomaterials into the environment are properly addressed, it will hinder the industrialization of products incorporating nanotechnology.”

Purifying water and greening nanotechnology could be as simple as shaking a vial of water and oil.


Michigan Technological University | Shaking the Nanomaterials Out: New Method to Purify Water


Abstract of A Simple and Universal Technique To Extract One- and Two-Dimensional Nanomaterials from Contaminated Water

We demonstrate a universal approach to extract one- and two-dimensional nanomaterials from contaminated water, which is based on a microscopic oil–water interface trapping mechanism. Results indicate that carbon nanotubes, graphene, boron nitride nanotubes, boron nitride nanosheets, and zinc oxide nanowires can be successfully extracted from contaminated water at a successful rate of nearly 100%. The effects of surfactants, particle shape, and type of organic extraction fluids are evaluated. The proposed extraction mechanism is also supported by in situ monitoring of the extraction process. We believe that this extraction approach will prove important for the purification of water contaminated by nanoparticles and will support the widespread adoption of nanomaterial applications.

Periodic table of protein complexes helps predict novel protein structures

An interactive Periodic Table of Protein Complexes is available at http://sea31.user.srcf.net/periodictable/ (credit: EMBL-EBI/Spencer Phillips)

The Periodic Table of Protein Complexes, developed by researchers in the UK and to be published Dec. 11 in the journal Science, offers a new way of looking at the enormous variety of structures that proteins can build in nature. More importantly, it suggests which ones might be discovered next and how entirely novel structures could be engineered.

Created by an interdisciplinary team led by researchers at the Wellcome Genome Campus and the University of Cambridge, the Table provides a valuable tool for research into evolution and protein engineering.

Handling complexity

Almost every biological process depends on proteins interacting and assembling into complexes in a specific way, and many diseases are associated with problems in complex assembly. “Evolution has given rise to a huge variety of protein complexes, and it can seem a bit chaotic,” explains Joe Marsh of the MRC Human Genetics Unit at the University of Edinburgh. “But if you break down the steps proteins take to become complexes, there are some basic rules that can explain almost all of the assemblies people have observed so far.”

Fundamentally, protein complex assembly can be seen as endless variations on dimerization (one protein doubles, becoming two copies), cyclization (copies form a ring of three or more), and subunit addition (two different proteins bind to each other). Because these steps happen in a fairly predictable way, it’s not as hard as you might think to predict how a novel protein complex would form.
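
To make the combinatorics tangible, the toy sketch below repeatedly applies those three steps to enumerate candidate stoichiometries such as "A2B2". It tracks copy numbers only; the actual periodic table also tracks symmetry and the pattern of interfaces between subunits, so this is an illustration of the idea rather than the authors' procedure.

```python
# Hedged sketch: enumerate candidate quaternary-structure stoichiometries by
# applying the three basic assembly steps (dimerization, cyclization, heteromeric
# subunit addition) starting from a monomer. Toy bookkeeping only.

def assembly_steps(stoich):
    """Yield stoichiometries (dicts of subunit type -> copy number) reachable in one step."""
    # Dimerization: the whole complex doubles.
    yield {t: 2 * n for t, n in stoich.items()}
    # Cyclization: the whole complex forms a ring of 3 or more copies (toy cap at 4).
    for ring_size in (3, 4):
        yield {t: ring_size * n for t, n in stoich.items()}
    # Heteromeric subunit addition: a new subunit type binds each repeat unit.
    new_type = chr(ord("A") + len(stoich))
    copies_per_repeat = min(stoich.values())        # toy assumption about symmetry
    yield {**stoich, new_type: copies_per_repeat}

def enumerate_stoichiometries(max_steps=2):
    seen, frontier = set(), [{"A": 1}]               # start from a single monomer
    for _ in range(max_steps):
        new_frontier = []
        for complex_ in frontier:
            for child in assembly_steps(complex_):
                label = "".join(f"{t}{n}" for t, n in sorted(child.items()))
                if label not in seen:
                    seen.add(label)
                    new_frontier.append(child)
        frontier = new_frontier
    return sorted(seen)

print(enumerate_stoichiometries(max_steps=2))
```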

“By analyzing the tens of thousands of protein complexes for which three-dimensional structures have already been experimentally determined, we could see repeating patterns in the assembly transitions that occur — and with new data from mass spectrometry we could start to see the bigger picture,” says Marsh.


Abstract of Principles of assembly reveal a periodic table of protein complexes

INTRODUCTION: The assembly of proteins into complexes is crucial for most biological processes. The three-dimensional structures of many thousands of homomeric and heteromeric protein complexes have now been determined, and this has had a broad impact on our understanding of biological function and evolution. Despite this, the organizing principles that underlie the great diversity of protein quaternary structures observed in nature remain poorly understood, particularly in comparison with protein folds, which have been extensively classified in terms of their architecture and evolutionary relationships.

RATIONALE: In this work, we sought a comprehensive understanding of the general principles underlying quaternary structure organization. Our approach was to consider protein complexes in terms of their assembly. Many protein complexes assemble spontaneously via ordered pathways in vitro, and these pathways have a strong tendency to be evolutionarily conserved. Furthermore, there are strong similarities between protein complex assembly and evolutionary pathways, with assembly pathways often being reflective of evolutionary histories, and vice versa. This suggests that it may be useful to consider the types of protein complexes that have evolved from the perspective of what assembly pathways are possible.

RESULTS: We first examined the fundamental steps by which protein complexes can assemble, using electrospray mass spectrometry experiments, literature-curated assembly data, and a large-scale analysis of protein complex structures. We found that most assembly steps can be classified into three basic types: dimerization, cyclization, and heteromeric subunit addition. By systematically combining different assembly steps in different ways, we were able to enumerate a large set of possible quaternary structure topologies, or patterns of key interfaces between the proteins within a complex. The vast majority of real protein complex structures lie within these topologies. This enables a natural organization of protein complexes into a “periodic table,” because each heteromer can be related to a simpler symmetric homomer topology. Exceptions are mostly the result of quaternary structure assignment errors, or cases where sequence-identical subunits can have different interactions and thus introduce asymmetry. Many of these asymmetric complexes fit the paradigm of a periodic table when their assembly role is considered. Finally, we implemented a model based on the periodic table, which predicts the expected frequencies of each quaternary structure topology, including those not yet observed. Our model correctly predicts quaternary structure topologies of recent crystal and electron microscopy structures that are not included in our original data set.

CONCLUSION: This work explains much of the observed distribution of known protein complexes in quaternary structure space and provides a framework for understanding their evolution. In addition, it can contribute considerably to the prediction and modeling of quaternary structures by specifying which topologies are most likely to be adopted by a complex with a given stoichiometry, potentially providing constraints for multi-subunit docking and hybrid methods. Lastly, it could help in the bioengineering of protein complexes by identifying which topologies are most likely to be stable, and thus which types of essential interfaces need to be engineered.

When machines learn like humans

Humans and machines were given an image of a novel character (top) and asked to produce new versions. A machine generated the nine-character grid on the left (credit: Jose-Luis Olivares/MIT — figures courtesy of the researchers)

A team of scientists has developed an algorithm that captures human learning abilities, enabling computers to recognize and draw simple visual concepts that are mostly indistinguishable from those created by humans.

The work by researchers at MIT, New York University, and the University of Toronto, which appears in the latest issue of the journal Science, marks a significant advance in the field — one that dramatically shortens the time it takes computers to “learn” new concepts and broadens their application to more creative tasks, according to the researchers.

“Our results show that by reverse-engineering how people think about a problem, we can develop better algorithms,” explains Brenden Lake, a Moore-Sloan Data Science Fellow at New York University and the paper’s lead author. “Moreover, this work points to promising methods to narrow the gap for other machine-learning tasks.”

The paper’s other authors are Ruslan Salakhutdinov, an assistant professor of Computer Science at the University of Toronto, and Joshua Tenenbaum, a professor at MIT in the Department of Brain and Cognitive Sciences and the Center for Brains, Minds and Machines.

When humans are exposed to a new concept — such as a new piece of kitchen equipment, a new dance move, or a new letter in an unfamiliar alphabet — they often need only a few examples to understand its make-up and recognize new instances. But machines typically need to be given hundreds or thousands of examples to perform with similar accuracy.

“It has been very difficult to build machines that require as little data as humans when learning a new concept,” observes Salakhutdinov. “Replicating these abilities is an exciting area of research connecting machine learning, statistics, computer vision, and cognitive science.”

Salakhutdinov helped to launch recent interest in learning with “deep neural networks,” in a paper published in Science almost 10 years ago with his doctoral advisor Geoffrey Hinton. Their algorithm learned the structure of 10 handwritten character concepts — the digits 0-9 — from 6,000 examples each, or a total of 60,000 training examples.

Bayesian Program Learning

Simple visual concepts for comparing human and machine learning. 525 (out of 1623) character concepts, shown with one example each. (credit: Brenden M. Lake et al./Science)

In the work appearing in Science this week, the researchers sought to shorten the learning process and make it more akin to the way humans acquire and apply new knowledge: learning from a small number of examples and performing a range of tasks, such as generating new examples of a concept or generating whole new concepts.

To do so, they developed a “Bayesian Program Learning” (BPL) framework, where concepts are represented as simple computer programs. For instance, the form of the letter “A” is represented by computer code that generates examples of that letter when the code is run. Yet no programmer is required during the learning process. Also, these probabilistic programs produce different outputs at each execution. This allows them to capture the way instances of a concept vary, such as the differences between how different people draw the letter “A.”

This differs from standard pattern-recognition algorithms, which represent concepts as configurations of pixels or collections of features. The BPL approach learns “generative models” of processes in the world, making learning a matter of “model building” or “explaining” the data provided to the algorithm.

The researchers “explained” to the system that characters in human writing systems consist of strokes (lines demarcated by the lifting of the pen) and substrokes, demarcated by points at which the pen’s velocity is zero. With that simple information, the system then analyzed hundreds of motion-capture recordings of humans drawing characters in several different writing systems, learning statistics on the relationships between consecutive strokes and substrokes, as well as on the variation tolerated in the execution of a single stroke.
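
The toy sketch below illustrates the "concept as a probabilistic program" idea in miniature: a character type is a short program built from stroke primitives, and every run of the program yields a slightly different exemplar. This is an illustration of the principle only, not the authors' BPL model, which also learns the primitives and their relational structure from motion-capture data.

```python
import random

# Hedged toy version of "concept as a probabilistic program": a character type is
# a list of strokes, each a list of coarse direction primitives; running
# draw_character() re-executes the program with motor noise, producing a new
# exemplar of the same concept every time.

PRIMITIVE_DIRECTIONS = {"right": (1, 0), "down": (0, -1), "diag": (1, -1)}

def sample_character_type(rng, n_strokes=2, subs_per_stroke=2):
    """Sample a character 'program': a list of strokes, each a list of primitives."""
    return [[rng.choice(list(PRIMITIVE_DIRECTIONS)) for _ in range(subs_per_stroke)]
            for _ in range(n_strokes)]

def draw_character(char_type, rng, jitter=0.15):
    """Run the program: turn primitives into a noisy polyline per stroke (a 'token')."""
    strokes = []
    for stroke in char_type:
        x, y, points = 0.0, 0.0, [(0.0, 0.0)]
        for sub in stroke:
            dx, dy = PRIMITIVE_DIRECTIONS[sub]
            x += dx + rng.gauss(0, jitter)      # motor noise: every token differs
            y += dy + rng.gauss(0, jitter)
            points.append((round(x, 2), round(y, 2)))
        strokes.append(points)
    return strokes

rng = random.Random(42)
concept = sample_character_type(rng)            # one new "letter"
for i in range(3):                               # three handwritten exemplars of it
    print(f"exemplar {i + 1}:", draw_character(concept, rng))
```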

That means that the system learned the concept of a character and what to ignore (minor variations) in any specific instance.

The BPL model also “learns to learn” by using knowledge from previous concepts to speed learning on new concepts — for example, using knowledge of the Latin alphabet to learn letters in the Greek alphabet.

Cipher for Futurama Alien Language 1 (credit: The Infosphere, the Futurama Wiki)

The authors applied their model to more than 1,600 types of handwritten characters in 50 of the world’s writing systems, including Sanskrit and Tibetan — and even some invented characters such as those from the television series “Futurama.”

Visual Turing tests

In addition to testing the algorithm’s ability to recognize new instances of a concept, the authors asked both humans and computers to reproduce a series of handwritten characters after being shown a single example of each character, or in some cases, to create new characters in the style of those it had been shown. The scientists then compared the outputs from both humans and machines through “visual Turing tests.” Here, human judges were given paired examples of both the human and machine output, along with the original prompt, and asked to identify which of the symbols were produced by the computer.

While judges’ correct responses varied across characters, for each visual Turing test, fewer than 25 percent of judges performed significantly better than chance in assessing whether a machine or a human produced a given set of symbols.
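
For context on what "performed significantly better than chance" means here, the sketch below runs a one-sided binomial test for a judge who gets k of n machine-vs-human pairs right; the trial count is a made-up example, not the number of pairs used in the study.

```python
from math import comb

# Hedged sketch: probability that a judge guessing at random (50% accuracy) would
# get at least k of n pairs right. The n below is hypothetical.

def p_value_above_chance(k_correct, n_trials, p_chance=0.5):
    """P(X >= k_correct) for X ~ Binomial(n_trials, p_chance)."""
    return sum(comb(n_trials, k) * p_chance**k * (1 - p_chance)**(n_trials - k)
               for k in range(k_correct, n_trials + 1))

n = 40                                   # hypothetical pairs judged per person
for k in (20, 24, 27, 30):
    p = p_value_above_chance(k, n)
    verdict = "better than chance" if p < 0.05 else "consistent with guessing"
    print(f"{k}/{n} correct ({k/n:.0%}): p = {p:.3f} -> {verdict}")
```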

“Before they get to kindergarten, children learn to recognize new concepts from just a single example, and can even imagine new examples they haven’t seen,” notes Tenenbaum. “I’ve wanted to build models of these remarkable abilities since my own doctoral work in the late nineties.

“We are still far from building machines as smart as a human child, but this is the first time we have had a machine able to learn and use a large class of real-world concepts — even simple visual concepts such as handwritten characters — in ways that are hard to tell apart from humans.”

Beyond deep-learning methods

The researchers argue that their system captures something of the elasticity of human concepts, which often have fuzzy boundaries but still seem to delimit coherent categories. It also mimics the human ability to learn new concepts from few examples.

It thus offers hope, they say, that the type of computational structure it’s built on, called a probabilistic program, could help model human acquisition of more sophisticated concepts as well.

“I feel that this is a major contribution to science, of general interest to artificial intelligence, cognitive science, and machine learning,” says Zoubin Ghahramani, a professor of information engineering at the University of Cambridge. “Given the major successes of deep learning, the paper also provides a very sobering view of the limitations of such deep-learning methods — which are very data-hungry and perform poorly on the tasks in this paper — and an important alternative avenue for achieving human-level machine learning.”

The work was supported by grants from the National Science Foundation to MIT’s Center for Brains, Minds and Machines, the Army Research Office, the Office of Naval Research, and the Moore-Sloan Data Science Environment at New York University.


Brenden Lake | NYU fellow Brenden Lake on human-level concept learning


Abstract of Human-level concept learning through probabilistic program induction

People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms—for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world’s alphabets. The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. We also present several “visual Turing tests” probing the model’s creative generalization abilities, which in many cases are indistinguishable from human behavior.