Noninvasive imaging method can look twice as deep inside the living brain

In vivo 1.3-μm VCSEL SS-OCT imaging of a 12-week-old adult mouse with cranial window preparation. (a) Representative OCT image visualizing morphological details of the cerebral cortex and deeper brain compartments. (b) OCT brain anatomy showing good correlation with a photomicrograph of a Nissl-stained histology section from a mouse brain of the same strain. (credit: Allen Institute for Brain Science/Journal of Biomedical Optics)

University of Washington (UW) researchers have developed a noninvasive light-based imaging technology that can see inside the living brain at more than twice the depth of conventional methods, providing a new tool to study how diseases such as dementia, Alzheimer’s disease, and brain tumors change brain tissue over time.

The work was reported Oct. 8 by Woo June Choi and Ruikang Wang of the UW Department of Bioengineering in the Journal of Biomedical Optics, published by SPIE, the international society for optics and photonics.

Noninvasive deep imaging

According to the authors, this new optical coherence tomography (OCT) approach to brain study may allow for examining acute and chronic morphological or functional vascular changes in the deep brain.

OCT is normally used to obtain sub-surface images of biological tissue at about the same resolution as a low-power microscope, and it can instantly deliver cross-section images of layers of tissue without invasive surgery or ionizing radiation. OCT images are formed from light reflected directly back from sub-surface structures.

Widely used in clinical ophthalmology, OCT has recently been adapted for brain imaging in small animal models. Its application in neuroscience has been limited, however, because conventional OCT technology hasn’t been able to image more than 1 millimeter below the surface of biological tissue.

Portion of schematic of the 1.3-μm vertical cavity surface emitting laser (VCSEL) swept-source optical coherence tomography (SS-OCT) system. FC: optical fiber coupler; BD: dual balanced detector; DAQ: data acquisition. (credit: Woo June Choi and Ruikang K. Wang/Journal of Biomedical Optics)

In the paper, Choi and Wang describe how a new technique called “swept-source OCT” (SS-OCT) powered by a vertical-cavity surface-emitting laser (VCSEL) increases signal sensitivity, extending the imaging depth range to more than 2 millimeters. That may make it possible to do things that have been barely attempted in the OCT community, such as noninvasive imaging of the mouse hippocampus or full-length imaging of a human eye from cornea to retina.
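
A rough sanity check on those depth figures: OCT measures optical path length, so a depth range measured in air shrinks inside tissue, where light travels more slowly. The sketch below assumes an average brain-tissue refractive index of about 1.38 (a typical literature value, not a figure from this paper); the 4.25-mm air depth range is from the paper’s abstract.

```python
# Why a 4.25-mm depth range in air becomes only ~2-3 mm of usable depth
# in tissue: geometric depth = depth in air / refractive index.
n_tissue = 1.38                # assumed average refractive index of brain tissue
depth_air_mm = 4.25            # system depth range in air (from the abstract)

geometric_depth_mm = depth_air_mm / n_tissue
print(f"{geometric_depth_mm:.2f} mm")   # ~3.08 mm geometric upper bound
```

Scattering and signal roll-off reduce that geometric upper bound further, to the roughly 2.3 mm of usable imaging depth the authors report.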

It could also allow researchers to monitor deeper morphological changes caused by diseases such as Alzheimer’s disease and dementia, and even to study the effects of aging on the brain.


Abstract of Swept-source optical coherence tomography powered by a 1.3-μm vertical cavity surface emitting laser enables 2.3-mm-deep brain imaging in mice in vivo

We report noninvasive, in vivo optical imaging deep within a mouse brain by swept-source optical coherence tomography (SS-OCT), enabled by a 1.3-μm vertical cavity surface emitting laser (VCSEL). VCSEL SS-OCT offers a constant signal sensitivity of 105 dB throughout an entire depth of 4.25 mm in air, ensuring an extended usable imaging depth range of more than 2 mm in turbid biological tissue. Using this approach, we show deep brain imaging in mice with an open-skull cranial window preparation, revealing intact mouse brain anatomy from the superficial cerebral cortex to the deep hippocampus. VCSEL SS-OCT would be applicable to small animal studies for the investigation of deep tissue compartments in living brains where diseases such as dementia and tumor can take their toll.

Graphene nano-coils discovered to be powerful natural electromagnets

A nano-coil made of graphene could be an effective solenoid/inductor for electronic applications (credit: Yakobson Research Group/Rice University)

Rice University scientists have discovered that a widely used electronic part called a solenoid could be scaled down to nano-size with macro-scale performance.

The secret: a spiral form of atom-thin graphene that, remarkably, can be found in nature, even in common coal, according to Rice theoretical physicist Boris Yakobson and his colleagues.

The researchers determined that when a voltage is applied to such a “nano-coil,” current will flow around the helical path and produce a magnetic field, as it does in macroscale solenoids. The discovery is detailed in a new paper in the American Chemical Society journal Nano Letters.

“Perhaps this might work in reverse here: An electron current, pumped through by the applied voltage, at certain conditions may just cause the graphene spiral to spin, like a fast little electro-turbine,” Yakobson speculated.

Basic solenoid design. A current flowing through the coil generates a magnetic field, which causes a ferromagnetic plunger to be attracted or repelled. (credit: Society of Robots)

Solenoids are components with wires coiled around a metallic core. They produce a magnetic field when carrying current, turning them into electromagnets. These are widespread in electronic and mechanical devices, from circuit boards to transformers to cars.

They also serve as inductors, which are primary components in electric circuits that regulate current (the lump in power cables that feed electronic devices contains an inductor, which blocks RF interference). In their smallest form, inductors are a part of integrated circuits.

Nano-solenoids with a macro punch

While transistors get steadily smaller, basic inductors in electronics have become relatively bulky, said Fangbo Xu, a Rice alumnus and lead author of the paper. “It’s the same inside the circuits,” he said. “Commercial spiral inductors on silicon occupy excessive area. If realized, graphene nano-solenoids could change that.”

The nano-solenoids analyzed through computer models at Rice should be capable of producing powerful magnetic fields of about 1 tesla, about the same as the coils found in typical loudspeakers, according to Yakobson and his team — and about the same field strength as some MRI machines. They found the magnetic field would be strongest in the hollow, nanometer-wide cavity at the spiral’s center.
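
The ~1-tesla figure can be sanity-checked with the ideal-solenoid formula B = μ0 · n · I, where n is the number of turns per unit length. This is a back-of-envelope sketch, not the team’s modeling: the winding pitch of 0.335 nm (the graphite interlayer spacing) is an illustrative assumption.

```python
import math

# Current needed for a 1 T field in an ideal solenoid wound at one turn
# per graphite interlayer spacing (0.335 nm). Illustrative estimate only.
mu0 = 4 * math.pi * 1e-7            # vacuum permeability, T*m/A
pitch_m = 0.335e-9                  # assumed winding pitch: one graphene layer
n_turns_per_m = 1.0 / pitch_m       # ~3.0e9 turns per meter

I_for_1T = 1.0 / (mu0 * n_turns_per_m)   # from B = mu0 * n * I with B = 1 T
print(f"{I_for_1T * 1e3:.2f} mA")        # ~0.27 mA
```

A fraction of a milliamp is plausible for graphene, which supports very high current densities; the enormous winding density is what makes such a strong, confined field conceivable at the nanoscale.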

The spiral form is attributable to a simple topological trick, he said. Graphene is made of hexagonal arrays of carbon atoms. Malformed hexagons known as dislocations along one edge force the graphene to twist around itself, akin to a continuous nanoribbon that mimics a mathematical construct known as a Riemann surface.

The researchers demonstrated theoretically how energy would flow through the hexagons in nano-solenoids with edges in either armchair or zigzag formations. In one case, they determined the performance of a conventional spiral inductor of 205 microns in diameter could be matched by a nano-solenoid just 70 nanometers wide.

Multiple uses

Because graphene has no energy band gap (the feature that gives a material semiconducting properties), electric current should move through it without any barriers. In fact, though, the width of the spiral and the configuration of the edges — either armchair or zigzag — influence how the current is distributed, and thus the coil’s inductive properties.

The researchers suggested it should be possible to isolate graphene screw dislocations from crystals of graphitic carbon (graphene in bulk form), but enticing graphene sheets to grow in a spiral would allow for better control of its properties, Yakobson said.

Xu suggested nano-solenoids may also be useful as molecular relays or switchable traps for magnetic molecules or radicals in chemical probes.

Co-authors are Rice graduate student Henry Yu and alumnus Arta Sadrzadeh. Yakobson is the Karl F. Hasselmann Professor of Materials Science and NanoEngineering and a professor of chemistry.

The research was supported by the Office of Naval Research’s Multidisciplinary University Research Initiative (MURI), the National Science Foundation and the Air Force Office of Scientific Research MURI.

The world’s strongest nondestructive pulsed magnet, at the Los Alamos National Laboratory campus of the National High Magnetic Field Laboratory, reaches 100 tesla. Does that mean 100 of these nano-coils could equal it? Comment below.


Abstract of Surfaces of Carbon as Graphene Nanosolenoids

Traditional inductors in modern electronics consume excessive areas in the integrated circuits. Carbon nanostructures can offer efficient alternatives if the recognized high electrical conductivity of graphene can be properly organized in space to yield a current-generated magnetic field that is both strong and confined. Here we report on an extraordinary inductor nanostructure naturally occurring as a screw dislocation in graphitic carbons. Its elegant helicoid topology, resembling a Riemann surface, ensures full covalent connectivity of all graphene layers, joined in a single layer wound around the dislocation line. If voltage is applied, electrical currents flow helically and thus give rise to a very large (∼1 T at normal operational voltage) magnetic field and bring about superior (per mass or volume) inductance, both owing to unique winding density. Such a solenoid of small diameter behaves as a quantum conductor whose current distribution between the core and exterior varies with applied voltage, resulting in nonlinear inductance.

Affordable camera reveals hidden details invisible to the naked eye

HyperFrames taken with HyperCam predicted the relative ripeness of 10 different fruits with 94 percent accuracy, compared with only 62 percent for a typical RGB (visible light) camera (credit: University of Washington)

HyperCam, an affordable “hyperspectral” (sees beyond the visible range) camera technology being developed by the University of Washington and Microsoft Research, may enable consumers of the future to use a cell phone to tell which piece of fruit is perfectly ripe or if a work of art is genuine.

The technology uses both visible and invisible near-infrared light to “see” beneath surfaces and capture unseen details. Cameras of this type, typically used in industrial applications, can cost from several thousand to tens of thousands of dollars.

In a paper presented at the UbiComp 2015 conference, the team detailed a hardware solution that costs roughly $800, or potentially as little as $50 to add to a mobile phone camera. It illuminates a scene with 17 different wavelengths and generates an image for each. They also developed intelligent software that easily finds “hidden” differences between what the hyperspectral camera captures and what can be seen with the naked eye.

In one test, the team took hyperspectral images of 10 different fruits, from strawberries to mangoes to avocados, over the course of a week. The HyperCam images predicted the relative ripeness of the fruits with 94 percent accuracy, compared with only 62 percent for a typical camera.

The HyperCam system was also able to differentiate between hand images of users with 99 percent accuracy. That can aid in everything from gesture recognition to biometrics to distinguishing between two different people playing the same video game.

“It’s not there yet, but the way this hardware was built you can probably imagine putting it in a mobile phone,” said Shwetak Patel, Washington Research Foundation Endowed Professor of Computer Science & Engineering and Electrical Engineering at the UW.

Compared to an image taken with a normal camera (left), HyperCam images (right) reveal detailed vein and skin texture patterns that are unique to each individual (credit: University of Washington)

How it works

Hyperspectral imaging is used today in everything from satellite imaging and energy monitoring to infrastructure and food safety inspections, but the technology’s high cost has limited its use to industrial or commercial purposes. Near-infrared cameras, for instance, can reveal whether crops are healthy. Thermal infrared cameras can visualize where heat is escaping from leaky windows or an overloaded electrical circuit.

HyperCam is a low-cost hyperspectral camera developed by UW and Microsoft Research that reveals details that are difficult or impossible to see with the naked eye (credit: University of Washington)

One challenge in hyperspectral imaging is sorting through the sheer volume of frames produced. The UW software analyzes the images and finds ones that are most different from what the naked eye sees, essentially zeroing in on ones that the user is likely to find most revealing.
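
That selection step can be sketched in a few lines. This is an illustrative reconstruction of the idea, not the authors’ code: the synthetic data and the simple mean-absolute-difference score are assumptions, and only the 17-band count comes from the article.

```python
import numpy as np

# Rank narrow-band images by how different they look from a plain
# visible-light rendering, keeping the most "revealing" bands.
rng = np.random.default_rng(0)
bands = rng.random((17, 64, 64))      # synthetic stack: 17 wavelength images
visible = bands[:3].mean(axis=0)      # crude stand-in for an RGB camera view

# Score each band by its mean absolute difference from the visible image
scores = np.abs(bands - visible).mean(axis=(1, 2))
top3 = np.argsort(scores)[::-1][:3]   # indices of the three most-different bands
print(top3)
```

A real system would compare against a properly rendered RGB image and could use a perceptual difference metric, but the ranking structure would be the same.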

“It mines all the different possible images and compares it to what a normal camera or the human eye will see and tries to figure out what scenes look most different,” said lead author Mayank Goel.

“Next research steps will include making it work better in bright light and making the camera small enough to be incorporated into mobile phones and other devices,” he said.


Mayank Goel | HyperCam: HyperSpectral Imaging for Ubiquitous Computing Applications


Abstract of HyperCam: hyperspectral imaging for ubiquitous computing applications

Emerging uses of imaging technology for consumers cover a wide range of application areas from health to interaction techniques; however, typical cameras primarily transduce light from the visible spectrum into only three overlapping components of the spectrum: red, blue, and green. In contrast, hyperspectral imaging breaks down the electromagnetic spectrum into more narrow components and expands coverage beyond the visible spectrum. While hyperspectral imaging has proven useful as an industrial technology, its use as a sensing approach has been fragmented and largely neglected by the UbiComp community. We explore an approach to make hyperspectral imaging easier and bring it closer to the end-users. HyperCam provides a low-cost implementation of a multispectral camera and a software approach that automatically analyzes the scene and provides a user with an optimal set of images that try to capture the salient information of the scene. We present a number of use-cases that demonstrate HyperCam’s usefulness and effectiveness.

Protein-folding discovery opens a window on basic life processes

Biochemists have discovered “impossible” shapes of proteins as they shift from one stable shape to a different, folded one. (credit: Oregon State University)

Biochemists at Oregon State University have made a fundamental discovery about protein structure that sheds new light on how proteins fold — one of the most basic processes of life. Even the process of thinking involves proteins at the end of one neuron passing a message to different proteins on the next neuron.

The findings, announced today (Oct. 16) in an open-access paper in Science Advances, promise to help scientists better understand some important changes that proteins undergo.

Scientists previously thought it was impossible to characterize these changes, in part because the transitions are so incredibly small and fleeting. Proteins convert from one observable shape to another in less than one trillionth of a second, in molecules that are less than one millionth of an inch in size. These changes have been simulated by computers, but no one had ever observed how they happen.

Hiding in plain sight

“Actual evidence of these transitions was hiding in plain sight all this time,” said Andrew Brereton, an OSU doctoral student and lead author on this study. “We just didn’t know what to look for, and didn’t understand how significant it was.”

X-ray crystallography has been able to capture images of proteins in their more stable shapes. But the changes in shape needed for those transitions are fleeting and involve distortions in the molecules that are extreme and difficult to predict.

What the OSU researchers discovered is that these stable shapes actually contained some parts that were trapped in the act of changing shape, conceptually similar to finding mosquitoes trapped in amber.

“We discovered that some proteins were holding single building blocks in shapes that were supposed to be impossible to find in a stable form,” said Andrew Karplus, the corresponding author on the study and a distinguished professor of biochemistry and biophysics in the OSU College of Science.

“Apparently about one building block out of every 6,000 gets trapped in a highly unlikely shape that is like a single frame in a movie,” Karplus said. “The set of these trapped residues taken together have basically allowed us to make a movie that shows how these special protein shape changes occur. And what this movie shows has real differences from what the computer simulations have predicted.”
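
The “trapped” residues are identified by their backbone dihedral angles (the φ and ψ angles of the Ramachandran plot). As an illustration of how such an angle is computed from four atom positions, here is the standard four-point dihedral calculation; the coordinates below are synthetic test points, not from any structure in the paper.

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Signed dihedral angle in degrees defined by four points.

    For a protein backbone, phi is the dihedral C(i-1)-N-CA-C.
    """
    b0, b1, b2 = p1 - p0, p2 - p1, p3 - p2
    n1 = np.cross(b0, b1)                      # normal of the first plane
    n2 = np.cross(b1, b2)                      # normal of the second plane
    m1 = np.cross(n1, b1 / np.linalg.norm(b1)) # frame vector for the sign
    return np.degrees(np.arctan2(np.dot(m1, n2), np.dot(n1, n2)))

# Synthetic planar test: an eclipsed (cis-like) arrangement gives 0 degrees.
pts = [np.array(p, float) for p in [(1, 0, 0), (0, 0, 0), (0, 1, 0), (1, 1, 0)]]
print(round(dihedral(*pts), 1))   # 0.0
```

Applied to the backbone atoms of a high-resolution structure, a calculation like this is what flags a residue sitting at an otherwise forbidden angle such as φ ≈ 0°.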

As with most fundamental discoveries, the researchers said, the full value of the findings may take years or decades to play out.

The movie below, created by Andrew E. Brereton and P. Andrew Karplus, is an alanine dipeptide animation generated according to the “general” model of the ψ ~ +90° conformational transition described in their paper.


Abstract of Native proteins trap high-energy transit conformations

During protein folding and as part of some conformational changes that regulate protein function, the polypeptide chain must traverse high-energy barriers that separate the commonly adopted low-energy conformations. How distortions in peptide geometry allow these barrier-crossing transitions is a fundamental open question. One such important transition involves the movement of a non-glycine residue between the left side of the Ramachandran plot (that is, ϕ < 0°) and the right side (that is, ϕ > 0°). We report that high-energy conformations with ϕ ~ 0°, normally expected to occur only as fleeting transition states, are stably trapped in certain highly resolved native protein structures and that an analysis of these residues provides a detailed, experimentally derived map of the bond angle distortions taking place along the transition path. This unanticipated information lays to rest any uncertainty about whether such transitions are possible and how they occur, and in doing so lays a firm foundation for theoretical studies to better understand the transitions between basins that have been little studied but are integrally involved in protein folding and function. Also, the context of one such residue shows that even a designed highly stable protein can harbor substantial unfavorable interactions.