Scientists reveal secrets for reaching age 100 (or more)

Leukocyte (white blood cell) telomere length in study participants up to 115 years of age. Regression lines for each study group are shown in the same color as that group’s data. (credit: Yasumichi Arai et al./EBioMedicine)

Scientists say they have cracked the secret of why some people live a healthy and physically independent life over the age of 100: keeping inflammation down and telomeres long.

Newcastle University’s Institute for Ageing in the U.K. and Keio University School of Medicine note that severe inflammation accompanies many diseases of old age, such as diabetes and diseases of the bones and joints, and that chronic inflammation can develop from any of them.

The study was published online as an open-access paper in EBioMedicine, a new journal jointly supported by the journals Cell and The Lancet.

“Centenarians and supercentenarians are different,” said Professor Thomas von Zglinicki, from Newcastle University’s Institute for Ageing, and lead author. “Put simply, they age slower. They can ward off diseases for much longer than the general population.”

Keeping telomeres long

The researchers studied groups of people aged 105 and over (semi-supercentenarians), those aged 100 to 104 (centenarians), those nearly 100 (85 to 99), and the offspring of centenarians. They measured a number of health markers they believe contribute to successful aging, including blood cell counts, metabolism, liver and kidney function, inflammation, and telomere length.
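
According to the paper, these markers were standardized and combined into per-domain scores (the abstract below describes combining z scores across haematopoiesis, inflammation, metabolism, and other domains). Here is a minimal Python sketch of that kind of z-score averaging; the column names and values are hypothetical, not from the study:

    import pandas as pd

    # Hypothetical biomarker table; columns and values are illustrative only.
    df = pd.DataFrame({
        "crp": [0.5, 3.2, 1.1],      # C-reactive protein (inflammation marker)
        "il6": [1.8, 6.0, 2.4],      # interleukin-6 (inflammation marker)
        "albumin": [4.1, 3.2, 3.9],  # a liver-function marker
    })

    # Standardize each biomarker to a z score across the cohort, then
    # average z scores within a domain to form a domain score, as the
    # paper's abstract describes for its inflammation score.
    z = (df - df.mean()) / df.std()
    inflammation_score = z[["crp", "il6"]].mean(axis=1)
    print(inflammation_score)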

The scientists expected to see continuous shortening of telomeres with age. Instead, they found that the children of centenarians, who have a good chance of becoming centenarians themselves, maintained their telomeres at a “youthful” level corresponding to about 60 years of age — even when they reached 80 or older.

“Our data reveals that once you’re really old [meaning centenarians and those older than 100], telomere length does not predict further successful aging,” said von Zglinicki. “However, it does show that [they] maintain their telomeres better than the general population, which suggests that keeping telomeres long may be necessary or at least helpful to reach extreme old age.”

Lower inflammation levels

Centenarian offspring maintained lower levels of markers for chronic inflammation. These levels increased with age in all the subjects studied, including centenarians and older, but those who succeeded in keeping them low had the best chance of maintaining good cognition, independence, and longevity.

“It has long been known that chronic inflammation is associated with the aging process in younger, more ‘normal’ populations, but it’s only very recently we could mechanistically prove that inflammation actually causes accelerated aging in mice,” von Zglinicki said.

“This study, showing for the first time that inflammation levels predict successful aging even in the extreme old, makes a strong case to assume that chronic inflammation drives human aging too. … Designing novel, safe anti-inflammatory or immune-modulating medication has major potential to improve healthy lifespan.”

Data from three studies combined

Data were collated by combining three community-based cohort studies: the Tokyo Oldest Old Survey on Total Health, the Tokyo Centenarians Study, and the Japanese Semi-Supercentenarians Study.

The research comprised 1,554 individuals, including 684 centenarians and (semi-)supercentenarians, 167 pairs of centenarian offspring and their spouses, and 536 very old people (85 to 99 years). The total group covered ages from around 50 up to the world’s oldest man, at 115 years.

However, “presently available potent anti-inflammatories are not suited for long-term treatment of chronic inflammation because of their strong side-effects,” said Yasumichi Arai, Head of the Tokyo Oldest Old Survey on Total Health cohort and first author of the study.


Abstract of Inflammation, But Not Telomere Length, Predicts Successful Ageing at Extreme Old Age: A Longitudinal Study of Semi-supercentenarians

To determine the most important drivers of successful ageing at extreme old age, we combined community-based prospective cohorts: Tokyo Oldest Old Survey on Total Health (TOOTH), Tokyo Centenarians Study (TCS) and Japanese Semi-Supercentenarians Study (JSS) comprising 1554 individuals including 684 centenarians and (semi-)supercentenarians, 167 pairs of centenarian offspring and spouses, and 536 community-living very old (85 to 99 years). We combined z scores from multiple biomarkers to describe haematopoiesis, inflammation, lipid and glucose metabolism, liver function, renal function, and cellular senescence domains. In Cox proportional hazard models, inflammation predicted all-cause mortality with hazard ratios (95% CI) 1.89 (1.21 to 2.95) and 1.36 (1.05 to 1.78) in the very old and (semi-)supercentenarians, respectively. In linear forward stepwise models, inflammation predicted capability (10.8% variance explained) and cognition (8.6% variance explained) in (semi-) supercentenarians better than chronologic age or gender. The inflammation score was also lower in centenarian offspring compared to age-matched controls with Δ (95% CI) = − 0.795 (− 1.436 to − 0.154). Centenarians and their offspring were able to maintain long telomeres, but telomere length was not a predictor of successful ageing in centenarians and semi-supercentenarians. We conclude that inflammation is an important malleable driver of ageing up to extreme old age in humans.

Cheap, power-efficient flash memory for big data without sacrificing speed

A 20-node BlueDBM Cluster (credit: Sang-Woo Jun et al./ISCA 2015)

There’s a big problem with big data: the huge amount of RAM required. Now MIT researchers have developed a new system called “BlueDBM” that should make servers using flash memory as efficient as those using conventional RAM for several common big-data applications, while preserving flash’s power and cost savings.

Here’s the context: data sets in areas such as genomics, geological data, and daily Twitter feeds can be as large as 5 TB to 20 TB. Complex queries over such data sets call for high-speed random-access memory (RAM), but holding all the data in RAM would require a huge cluster with up to 100 servers, each with 128 GB to 256 GB of DRAM (dynamic random-access memory).

Flash memory (used in smartphones and other portable devices) could provide an alternative to conventional RAM for such applications. It’s about a tenth as expensive, and it consumes about a tenth as much power. The problem: it’s also about a tenth as fast.

But at the International Symposium on Computer Architecture in June, the MIT researchers, with colleagues at Quanta Computer, presented experimental evidence showing that if conventional servers executing a distributed computation have to go to disk for data even 5 percent of the time, their performance falls to a level that’s comparable with flash anyway.

In fact, they found that for a 10.5-terabyte computation, just 20 servers with 20 terabytes’ worth of flash memory each could do as well as 40 servers with 10 terabytes’ worth of RAM, and could consume only a fraction as much power. This was even without the researchers’ new techniques for accelerating data retrieval from flash memory.
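
The arithmetic behind that 5 percent figure is easy to reproduce. The sketch below plugs in round-number latencies (DRAM roughly 100 nanoseconds, flash roughly 100 microseconds, disk roughly 10 milliseconds); these are common ballpark values, not numbers from the paper:

    # Illustrative storage latencies in seconds (assumed, not from the paper).
    DRAM, FLASH, DISK = 100e-9, 100e-6, 10e-3

    def avg_latency(fast, slow, miss_rate):
        """Average access time when a fraction of requests fall through to slower storage."""
        return (1 - miss_rate) * fast + miss_rate * slow

    ram_cluster = avg_latency(DRAM, DISK, 0.05)  # ~500 microseconds per access
    flash_only = FLASH                           # ~100 microseconds per access

    # Going to disk for just 5% of accesses makes the nominal "RAM" cluster
    # several times slower per access than an all-flash system.
    print(ram_cluster / flash_only)  # ~5.0

In this toy model, the nominally RAM-based cluster spends almost all of its time waiting on the 5 percent of accesses that go to disk, which is the effect the researchers measured.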

“This is not a replacement for DRAM [dynamic RAM] or anything like that,” says Arvind, the Johnson Professor of Computer Science and Engineering at MIT, whose group performed the new work. “But there may be many applications that can take advantage of this new style of architecture. Companies recognize this — everybody’s experimenting with different aspects of flash. We’re just trying to establish another point in the design space.”

Technical details

The researchers were able to make a network of flash-based servers competitive with a network of RAM-based servers by moving a little computational power off the servers and onto the chips that control the flash drives. By preprocessing some of the data on the flash drives before passing it back to the servers, those chips can make distributed computation much more efficient. And since the preprocessing algorithms are wired into the chips, they dispense with the computational overhead associated with running an operating system, maintaining a file system, and the like.
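
As a conceptual illustration of why in-store preprocessing pays off (this is a sketch of the general idea, not the BlueDBM interface), compare how much data crosses the network when a filter runs on the storage controller versus on the host:

    # Conceptual sketch of in-store filtering (not the BlueDBM API).
    records = [{"id": i, "score": i % 100} for i in range(1_000_000)]

    def filter_on_host(records):
        # Ship every record over the network, then filter on the server.
        shipped = len(records)
        hits = [r for r in records if r["score"] > 95]
        return shipped, hits

    def filter_in_store(records):
        # Filter on the storage controller; ship only the matches.
        hits = [r for r in records if r["score"] > 95]
        return len(hits), hits

    print(filter_on_host(records)[0])   # 1,000,000 records shipped
    print(filter_in_store(records)[0])  # 40,000 records shipped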

With hardware contributed by some of their sponsors — Quanta, Samsung, and Xilinx — the researchers built a prototype network of 20 servers. Each server was connected to a field-programmable gate array, or FPGA, a kind of chip that can be reprogrammed to mimic different types of electrical circuits. Each FPGA, in turn, was connected to two half-terabyte — or 500-gigabyte — flash chips and to the two FPGAs nearest it in the server rack.

Because the FPGAs were connected to each other, they created a very fast network that allowed any server to retrieve data from any flash drive. They also controlled the flash drives, which is no simple task: The controllers that come with modern commercial flash drives have as many as eight different processors and a gigabyte of working memory.

Finally, the FPGAs also executed the algorithms that preprocessed the data stored on the flash drives. The researchers tested three such algorithms, geared to three popular big-data applications. One is image search, or trying to find matches for a sample image in a huge database. Another is an implementation of Google’s PageRank algorithm, which assesses the importance of different Web pages that meet the same search criteria. And the third is an application called Memcached, which big, database-driven websites use to store frequently accessed information.

FPGAs are about one-tenth as fast as purpose-built chips with hardwired circuits, but they’re much faster than central processing units using software to perform the same computations. Ordinarily, either they’re used to prototype new designs, or they’re used in niche products whose sales volumes are too small to warrant the high cost of manufacturing purpose-built chips.

But the MIT and Quanta researchers’ design suggests a new use for FPGAs: A host of applications could benefit from accelerators like the three the researchers designed. And since FPGAs are reprogrammable, they could be loaded with different accelerators, depending on the application. That could lead to distributed processing systems that lose little versatility while providing major savings in energy and cost.

“Many big-data applications require real-time or fast responses,” says Jihong Kim, a professor of computer science and engineering at Seoul National University. “For such applications, BlueDBM” — the MIT and Quanta researchers’ system — “is an appealing solution.”

Relative to some other proposals for streamlining big-data analysis, “The main advantage of BlueDBM might be that it can easily scale up to a lot bigger storage system with specialized accelerated supports,” Kim says.


Abstract of BlueDBM: An Appliance for Big Data Analytics

Complex data queries, because of their need for random accesses, have proven to be slow unless all the data can be accommodated in DRAM. There are many domains, such as genomics, geological data and daily twitter feeds where the datasets of interest are 5TB to 20 TB. For such a dataset, one would need a cluster with 100 servers, each with 128GB to 256GBs of DRAM, to accommodate all the data in DRAM. On the other hand, such datasets could be stored easily in the flash memory of a rack-sized cluster. Flash storage has much better random access performance than hard disks, which makes it desirable for analytics workloads. In this paper we present BlueDBM, a new system architecture which has flash-based storage with in-store processing capability and a low-latency high-throughput inter-controller network. We show that BlueDBM outperforms a flash-based system without these features by a factor of 10 for some important applications. While the performance of a ram-cloud system falls sharply even if only 5%~10% of the references are to the secondary storage, this sharp performance degradation is not an issue in BlueDBM. BlueDBM presents an attractive point in the cost-performance trade-off for Big Data analytics.

A fast, accurate, nanoscale ‘biochemical nose’ sensor

A nanoplasmonic resonator (NPR) consists of a thin silicon dioxide layer sandwiched between metallic nanodisks. NPRs can enhance surface-enhanced Raman spectroscopic (SERS) signals by a factor of 60 billion to detect target molecules with high sensitivity. (credit: Cheng Sun et al./ ACS Nano)

Imagine being able to test your food in your kitchen to quickly determine if it carried any deadly microbes. Technology now being commercialized by Optokey may soon make that possible.

Optokey, a startup based in Hayward, California, has developed a miniaturized sensor using surface-enhanced Raman spectroscopy (SERS) that can quickly and accurately detect or diagnose substances at a molecular level. The technology is based on research conducted at Lawrence Berkeley National Laboratory (Berkeley Lab) and published in 2010.

Molecular fingerprinting

“Our system can do chemistry, biology, biochemistry, molecular biology, clinical diagnosis, and chemical analysis,” said Optokey president and co-founder Fanqing Frank Chen, a scientist at Berkeley Lab who was co-author of an ACS Nano paper on the research. The system can be implemented “very cheaply, without much human intervention,” he said.

SERS is a highly sensitive analytical tool used for “molecular fingerprinting,” but its results have not been easily reproducible. Chen and colleagues developed a solution to this problem using what they call “nanoplasmonic resonators”: nanostructures that measure the interaction of photons with an activated surface to do chemical and biological sensing. The method produces measurements much more reliably.

“At Optokey we’re able to mass produce this nanoplasmonic resonator on a wafer scale,” Chen said. “We took something from the R&D realm and turned it into something industrial-strength.”

The miniaturized sensors use a microfluidic control system for “lab on a chip” automated liquid sampling. “We’re leveraging knowledge acquired from high-tech semiconductor manufacturing methods to get the cost, the volume, and the accuracy in the chip,” said VP of Manufacturing Robert Chebi, a veteran of the microelectronic industry who previously worked at Lam Research and Applied Materials. “We’re also leveraging all the knowledge in lasers and optics for this specific Raman-based method.”

A biochemical nose

Chebi calls Optokey’s product a “biochemical nose,” or an advanced nanophotonic automated system, with sensitivity to the level of a single molecule, far superior to sensors on the market today, he claims. “Today’s detection and diagnosis methods are far from perfect … Also, our system can provide information in minutes, or even on a continuous basis, versus other methods where it could take hours or even days, if samples have to be sent to another lab.”

The potential applications include food safety, environmental monitoring (of both liquids and gases), medical diagnosis, and chemical analysis. Optokey’s customers include a major European company interested in food safety, a Chinese petrochemical company interested in detecting impurities in its products, and a German company interested in point-of-care diagnosis.

“The product we’re envisioning is something that is compact and automated but also connected, and it can go into schools, restaurants, factories, hospitals, ambulances, airports, and even battlefields,” Chen said. Next, they plan to introduce it in the smart home, where a nanophotonic sensor could be built to scan for pollutants not just in food but also in air and water.

Key discovery: nanoplasmonic resonators

Ultimately, Chen and his Berkeley Lab group generated about 20 patents involving hybrid bionanomaterials. The key discovery that led to the formation of Optokey was the development of nanoplasmonic resonators to dramatically improve the signal and reliability of Raman spectroscopy. The method was initially used in the research lab to quickly and accurately detect a biomarker for prostate cancer, which has a high rate of false positives using conventional diagnostic tools.

“There was 10 years of research that went into this, funded by NIH, DARPA, the federal government, private foundations,” said Chen. “Berkeley Lab has a really good culture of multidisciplinary research, excellent engineering, and very strong basic science. Plus it has strong support for startups.”


Abstract of Time-Resolved Single-Step Protease Activity Quantification Using Nanoplasmonic Resonator Sensors

Protease activity measurement has broad application in drug screening, diagnosis and disease staging, and molecular profiling. However, conventional immunopeptidemetric assays (IMPA) exhibit low fluorescence signal-to-noise ratios, preventing reliable measurements at lower concentrations in the clinically important picomolar to nanomolar range. Here, we demonstrated a highly sensitive measurement of protease activity using a nanoplasmonic resonator (NPR). NPRs enhance Raman signals by 6.1 × 1010 times in a highly reproducible manner, enabling fast detection of proteolytically active prostate-specific antigen (paPSA) activities in real-time, at a sensitivity level of 6 pM (0.2 ng/mL) with a dynamic range of 3 orders of magnitude. Experiments on extracellular fluid (ECF) from the paPSA-positive cells demonstrate specific detection in a complex biofluid background. This method offers a fast, sensitive, accurate, and one-step approach to detect the proteases’ activities in very small sample volumes.

The MIND diet may slow brain aging by 7.5 years

Eating a group of specific foods — known as the MIND diet — may slow cognitive decline among aging adults, even when the person is not at risk of developing Alzheimer’s disease, according to researchers at Rush University Medical Center.

This finding supplements a previous study by the research team, reported by KurzweilAI in March, which found that the MIND diet may reduce a person’s risk of developing Alzheimer’s disease.

The researchers’ new study shows that older adults who followed the MIND diet most rigorously were, cognitively, the equivalent of 7.5 years younger than those who followed it least. Results of the study were recently published online in the journal Alzheimer’s & Dementia: The Journal of the Alzheimer’s Association.

So what is the MIND diet?

The MIND diet, which is short for “Mediterranean-DASH Diet Intervention for Neurodegenerative Delay,” was developed by Martha Clare Morris, ScD, a nutritional epidemiologist, and her colleagues. As the name suggests, the MIND diet is a hybrid of the Mediterranean and DASH (Dietary Approaches to Stop Hypertension) diets. Both have been found to reduce the risk of cardiovascular conditions, like hypertension, heart attack, and stroke.

“Everyone experiences decline with aging, and Alzheimer’s disease, which accounts for 60 to 80 percent of dementia cases, is now the sixth leading cause of death in the U.S. Therefore, prevention of cognitive decline, the defining feature of dementia, is now more important than ever,” Morris says. “Delaying dementia’s onset by just five years can reduce the cost and prevalence by nearly half.”

The MIND diet has 15 dietary components, including 10 “brain-healthy food groups” and five “unhealthy groups” to avoid — red meat, butter and stick margarine, cheese, pastries and sweets, and fried or fast food.

To adhere to and benefit from the MIND diet, a person would need to eat at least three servings of whole grains, a green leafy vegetable and one other vegetable every day — along with a glass of wine (or red-grape juice) — snack most days on nuts, have beans every other day or so, eat poultry and berries at least twice a week and fish at least once a week.

In addition, the study found that to have a real shot at avoiding the devastating effects of cognitive decline, a person must limit intake of the designated unhealthy foods, especially butter (less than 1 tablespoon a day), sweets and pastries, whole-fat cheese, and fried or fast food (less than a serving a week for any of the three).

Berries are the only fruit specifically to be included in the MIND diet. “Blueberries are one of the more potent foods in terms of protecting the brain,” Morris says, and strawberries also have performed well in past studies of the effect of food on cognitive function.
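
As a rough illustration of how adherence rules like these can be turned into a score, the toy Python check below tests a week of made-up intake against a few of the thresholds above. The real MIND score has 15 graded components; this simplification is ours, not the researchers’:

    # Toy check of a week's (made-up) intake against a few MIND thresholds
    # from the article; the published score has 15 graded components.
    weekly = {
        "whole_grain_servings": 21,  # at least 3 per day
        "leafy_green_servings": 7,   # at least 1 per day
        "nut_snacks": 5,             # snack on nuts most days
        "fish_meals": 1,             # fish at least once a week
        "butter_tbsp_per_day": 0.5,  # butter under 1 tablespoon a day
        "fried_or_fast_food": 0,     # under 1 serving a week
    }

    rules = {
        "whole_grain_servings": lambda v: v >= 21,
        "leafy_green_servings": lambda v: v >= 7,
        "nut_snacks": lambda v: v >= 5,
        "fish_meals": lambda v: v >= 1,
        "butter_tbsp_per_day": lambda v: v < 1,
        "fried_or_fast_food": lambda v: v < 1,
    }

    met = sum(rule(weekly[key]) for key, rule in rules.items())
    print(f"{met} of {len(rules)} simplified MIND components met")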

The National Institute on Aging-funded study evaluated cognitive change over a period of 4.7 years among 960 older adults who were free of dementia on enrollment. Averaging 81.4 years of age, the study participants were also part of the Rush Memory and Aging Project, a study of residents of more than 40 retirement communities and senior public housing units in the Chicago area.

During the course of the study, they received annual, standardized testing for cognitive ability in five areas — episodic memory, working memory, semantic memory, visuospatial ability and perceptual speed. The study group also completed annual food frequency questionnaires, allowing the researchers to compare participants’ reported adherence to the MIND diet with changes in their cognitive abilities as measured by the tests.


Abstract of MIND diet slows cognitive decline with aging

Background: The Mediterranean and DASH diets have been shown to slow cognitive decline; however, neither diet is specific to the nutrition literature on dementia prevention.

Methods: We devised the Mediterranean-Dietary Approach to Systolic Hypertension (DASH) diet intervention for neurodegenerative delay (MIND) diet score that specifically captures dietary components shown to be neuroprotective and related it to change in cognition over an average 4.7 years among 960 participants of the Memory and Aging Project.

Results: In adjusted mixed models, the MIND score was positively associated with slower decline in global cognitive score (β = 0.0092; P < .0001) and with each of five cognitive domains. The difference in decline rates for being in the top tertile of MIND diet scores versus the lowest was equivalent to being 7.5 years younger in age.

Conclusions: The study findings suggest that the MIND diet substantially slows cognitive decline with age. Replication of these findings in a dietary intervention trial would be required to verify its relevance to brain health.

Looking at living cells down to individual molecules

3D rendered correlative AFM/PALM image of a fixed mammalian cell (mouse embryonic fibroblast (MEF) cell) expressing the fusion protein paxillin-mEOS2 (credit: Pascal D. Odermatt et al./Nano Letters)

EPFL scientists have captured images of living cells with unprecedented nanoscale resolution — even the evolution of their structure and molecular characteristics.

They did that by combining two cutting-edge microscopy techniques — high-speed atomic force microscopy and a single-molecule-localization, super-resolution optical imaging system — into one instrument.

Their work was published in the ACS journal Nano Letters.

The “correlated single molecule localization microscope” combines two methods:

  • An atomic force microscope (AFM), installed above the sample, “feels” the surface being observed using a tiny force-sensitive needle, capturing the 3D structure.
  • Meanwhile, a technique known as PALM (photo-activated localization microscopy) observes the sample from below. It selectively stains the sample with fluorescent molecules that label selected molecules and make them blink, then follows their paths in the interior of a cell. Its inventors were awarded the Nobel Prize in Chemistry last year.

The scientists also developed special software that assembles the images from the two instruments, providing a precise 3D visualization of the observed sample.
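
Conceptually, that assembly step amounts to mapping coordinates from one instrument into the other’s frame and overlaying the two data sets. A minimal sketch of the idea follows; the transform values, pixel scale, and data are made up, and this is not EPFL’s software:

    import numpy as np

    # Conceptual sketch of correlating the two modalities: map PALM
    # localization coordinates into the AFM frame with an affine transform
    # (fitted from fiducial markers in practice), then read the AFM height
    # under each localization. All values here are illustrative.
    affine = np.array([[1.02, -0.01, -5.0],
                       [0.01,  1.02,  3.0]])

    palm_xy = np.array([[120.0, 80.0],       # localizations in nm, PALM frame
                        [200.5, 150.2]])
    ones = np.ones((len(palm_xy), 1))
    afm_xy = np.hstack([palm_xy, ones]) @ affine.T   # now in the AFM frame (nm)

    height_map = np.random.default_rng(1).random((512, 512))  # stand-in AFM map
    px = (afm_xy / 2.0).astype(int)          # assume 2 nm per AFM pixel
    print(height_map[px[:, 1], px[:, 0]])    # height under each localization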

Correlative AFM-SMLM: instrument setup. (a) Schematic of the aligned optical path with the AFM cantilever. By laterally translating the incoming laser beam using a micrometer screw, the TIRF illumination condition is enabled. The AFM cantilever is centered in the field of view by adjusting the position of the inverted optical microscope mounted on an x/y-translation stage (as shown in b and c). (b) Mechanical integration of an inverted optical microscope and the AFM. The inverted optical microscope is mounted on an x/y-translation stage. Around it a mechanical support structure is built to hold the AFM in place without mechanically contacting the microscope body. The whole instrument is placed on a vibration isolation platform inside an acoustic isolation box. (c) Photograph of the instrument and (d) zoom in to the AFM cantilever aligned to the optical axis. (credit: Pascal D. Odermatt et al./Nano Letters)

By taking successive images of the same living cell, the scientists were, for the first time ever, able to follow the behavior of protein clusters in relation to the 3D structure of the cell. “That could, for example, allow us to observe the inner workings of cell division, or unravel how stem cells react to mechanical forces,” says Henrik Deschout, a postdoctoral researcher in EPFL’s Laboratory of Nanometer-Scale Biology, which is directed by Aleksandra Radenovic.

Even at the prototype stage, the instrument has already attracted the interest of many other researchers as well as leading microscope manufacturers. The microscope could be of great interest to researchers in cellular-, micro-, and mechanobiology, allowing scientists to shed new light on the intricate mechanisms occurring in living cells, the researchers say.


Abstract of High-Resolution Correlative Microscopy: Bridging the Gap between Single Molecule Localization Microscopy and Atomic Force Microscopy

Nanoscale characterization of living samples has become essential for modern biology. Atomic force microscopy (AFM) creates topological images of fragile biological structures from biomolecules to living cells in aqueous environments. However, correlating nanoscale structure to biological function of specific proteins can be challenging. To this end we have built and characterized a correlated single molecule localization microscope (SMLM)/AFM that allows localizing specific, labeled proteins within high-resolution AFM images in a biologically relevant context. Using direct stochastic optical reconstruction microscopy (dSTORM)/AFM, we directly correlate and quantify the density of localizations with the 3D topography using both imaging modalities along (F-)actin cytoskeletal filaments. In addition, using photo activated light microscopy (PALM)/AFM, we provide correlative images of bacterial cells in aqueous conditions. Moreover, we report the first correlated AFM/PALM imaging of live mammalian cells. The complementary information provided by the two techniques opens a new dimension for structural and functional nanoscale biology.

Heating and cooling genetic samples with light leads to ultrafast DNA diagnostics

An artist’s rendering of photonic PCR on a chip using light to rapidly heat and cool electrons at the surface of a thin film of gold. This method yields gene amplification results in mere minutes, and promises to transform point-of-care diagnostics in fields as diverse as medicine, food security and evolutionary biology. (credit: Luke Lee’s BioPOETS lab)

New technology developed by bioengineers at the University of California, Berkeley, promises to dramatically speed up the polymerase chain reaction (PCR) DNA test and make it cheaper and more portable by simply accelerating the heating and cooling of genetic samples with the switch of a light.

This turbocharged thermal cycling, described in an open-access paper published Friday, July 31, in the journal Light: Science & Applications, greatly expands the clinical and research applications of the PCR test, with results in minutes instead of an hour or more.

The PCR test, which amplifies a single copy of a DNA sequence to produce thousands to millions of copies, has become vital in genomics applications, ranging from cloning research to forensic analysis to paternity tests. PCR is used in the early diagnosis of hereditary and infectious diseases, and even for analysis of ancient DNA samples of mummies and mammoths.

The huge impact of the PCR test in modern science was recognized in 1993 with a Nobel Prize in Chemistry, shared by PCR’s inventor, Kary Mullis, and Michael Smith.

Using light-emitting diodes (LEDs), the UC Berkeley researchers were able to heat electrons at the interface of thin films of gold and a DNA solution. They clocked the heating of the solution at about 13 degrees Celsius (23 degrees Fahrenheit) per second; the cooling rate was about 6.6 degrees Celsius (12 degrees Fahrenheit) per second.

The heating-time bottleneck

“PCR is powerful, and it is widely used in many fields, but existing PCR systems are relatively slow,” said study senior author Luke Lee, a professor of bioengineering. “It is usually done in a lab because the conventional heater used for this test requires a lot of power and is expensive. Because it takes an hour or longer to complete each test, it is not practical for use for point-of-care diagnostics. Our system can generate results within minutes.”

Schematic showing the ultrafast photonic PCR using LED lights under a thin gold film to amplify genetic samples. The repeated heating and cooling process, called thermal cycling, is needed to separate the double-stranded DNA (1-Denaturation). Complementary bases from a primer then bind to the single strand (2-Annealing and extension), resulting in two copies of the gene. The process is repeated for at least 30 cycles. (credit: Jun Ho Son, UC Berkeley)

The slowdown in conventional PCR tests comes from the time it takes to heat and cool the DNA solution. The PCR test requires repeated temperature changes — an average of 30 thermal cycles at three different temperatures — to amplify the genetic sequence. This process involves breaking up the double-stranded DNA and binding the single strand with a matching primer. With each heating-cooling cycle, the amount of the DNA sample is doubled.
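
Those figures make the speedup easy to sanity-check. Using the heating and cooling rates from the paper’s abstract, and ignoring hold times at each temperature, 30 cycles between the annealing and denaturation temperatures take under five minutes:

    # Back-of-the-envelope cycle timing from the abstract's reported rates;
    # real protocols also add hold times at each temperature.
    heat_rate, cool_rate = 12.79, 6.6   # degrees C per second
    t_anneal, t_denature = 55.0, 95.0   # degrees C
    cycles = 30

    span = t_denature - t_anneal
    seconds_per_cycle = span / heat_rate + span / cool_rate
    print(cycles * seconds_per_cycle / 60)  # about 4.6 minutes

    # Each cycle doubles the target sequence: 30 doublings is ~1 billion copies.
    print(2 ** cycles)  # 1073741824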

To speed up this thermal cycling, Lee and his team of researchers took advantage of plasmonics, or the interaction between light and free electrons on a metal’s surface. When exposed to light, the free electrons get excited and begin to oscillate, generating heat. Once the light is off, the oscillations and the heating stop.

Gold, it turns out, is a popular metal for this plasmonic photothermal heating because it is so efficient at absorbing light. It has the added benefit of being inert to biological systems, so it can be used in biomedical applications.

For their experiments, the researchers used thin films of gold that were 120 nanometers thick, or about the width of a rabies virus. The gold was deposited onto a plastic chip with microfluidic wells to hold the PCR mixture with the DNA sample.

The light source was an array of off-the-shelf LEDs positioned beneath the PCR wells. The peak wavelength of the blue LED light was 450 nanometers, tuned to get the most efficient light-to-heat conversion.

The researchers were able to cycle from 131 degrees to 203 degrees Fahrenheit 30 times in less than five minutes.

They tested the ability of the photonic PCR system to amplify a sample of DNA, and found that the results compared well with conventional PCR tests.

“This photonic PCR system is fast, sensitive and low-cost,” said Lee, who is also co-director of the Berkeley Sensor and Actuator Center. “It can be integrated into an ultrafast genomic diagnostic chip, which we are developing for practical use in the field. Because this technology yields point-of-care results, we can use this in a wide range of settings, from rural Africa to a hospital ER.”


Abstract of Ultrafast photonic PCR

Nucleic acid amplification and quantification via polymerase chain reaction (PCR) is one of the most sensitive and powerful tools for clinical laboratories, precision medicine, personalized medicine, agricultural science, forensic science and environmental science. Ultrafast multiplex PCR, characterized by low power consumption, compact size and simple operation, is ideal for timely diagnosis at the point-of-care (POC). Although several fast/ultrafast PCR methods have been proposed, the use of a simple and robust PCR thermal cycler remains challenging for POC testing. Here, we present an ultrafast photonic PCR method using plasmonic photothermal light-to-heat conversion via photon–electron–phonon coupling. We demonstrate an efficient photonic heat converter using a thin gold (Au) film due to its plasmon-assisted high optical absorption (approximately 65% at 450 nm, the peak wavelength of heat source light-emitting diodes (LEDs)). The plasmon-excited Au film is capable of rapidly heating the surrounding solution to over 150 °C within 3 min. Using this method, ultrafast thermal cycling (30 cycles; heating and cooling rate of 12.79±0.93 °C s⁻¹ and 6.6±0.29 °C s⁻¹, respectively) from 55 °C (temperature of annealing) to 95 °C (temperature of denaturation) is accomplished within 5 min. Using photonic PCR thermal cycles, we demonstrate here successful nucleic acid (λ-DNA) amplification. Our simple, robust and low cost approach to ultrafast PCR using an efficient photonic-based heating procedure could be generally integrated into a variety of devices or procedures, including on-chip thermal lysis and heating for isothermal amplifications.

A precision brain-controlled prosthesis nearly as good as one-finger typing

Brain-controlled prostheses sample a few hundred neurons to estimate motor commands that involve millions of neurons. So tiny sampling errors can reduce the precision and speed of thought-controlled keypads. A Stanford technique can analyze this sample and quickly make dozens of corrective adjustments to make thought control more precise. (credit: Jonathan Kao, Shenoy Lab)

An interdisciplinary team led by Stanford electrical engineer Krishna Shenoy has developed a technique to improve brain-controlled prostheses. These brain-computer-interface (BCI) devices, for people with neurological disease or spinal cord injury, deliver thought commands to devices such as virtual keypads, bypassing the damaged area.

The new technique addresses a problem with these brain-controlled prostheses: they currently access a sample of only a few hundred neurons, so tiny errors in the sample — neurons that fire too fast or too slow — reduce the precision and speed of thought-controlled keypads.

Understanding brain dynamics for arm movements

In essence the new prostheses analyze the neuron sample and quickly make dozens of corrective adjustments to the estimate of the brain’s electrical pattern.

Shenoy’s team tested a brain-controlled cursor meant to operate a virtual keyboard. The system is intended for people with paralysis and amyotrophic lateral sclerosis (ALS), also called Lou Gehrig’s disease, a condition that Stephen Hawking has. ALS degrades one’s ability to move.

The new corrective technique is based on a recently discovered understanding of how monkeys naturally perform arm movements. The researchers studied animals that were normal in every way. The monkeys used their arms, hands and fingers to reach for targets presented on a video screen. The researchers sought to learn, through hundreds of experiments, what the electrical patterns from the 100- to 200-neuron sample looked like during a normal reach — to understand the “brain dynamics” underlying reaching arm movements.

“These brain dynamics are analogous to rules that characterize the interactions of the millions of neurons that control motions,” said Jonathan Kao, a doctoral student in electrical engineering and first author of the open-access Nature Communications paper on the research. “They enable us to use a tiny sample more precisely.”

A decoding algorithm

In their current experiments, Shenoy’s team members distilled their understanding of brain dynamics into an algorithm that could decode (analyze) the measured electrical signals that their prosthetic device obtained from the sampled neurons. The algorithm tweaked these measured signals so that the sample’s dynamics were more like the baseline brain dynamics and thus more precise.
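
The published decoder is more sophisticated, but the core idea (nudging noisy single-trial activity toward trajectories that a learned dynamical model considers plausible) can be sketched in a few lines. Everything below, including the dynamics matrix, the blending weight, and the simulated data, is a stand-in for illustration, not the authors’ model:

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in linear dynamics x_t = A @ x_{t-1}; in the real system a model
    # like this is learned offline from hundreds of natural-reach trials.
    A = np.array([[0.99, -0.10],
                  [0.10,  0.99]])

    def denoise(observations, A, alpha=0.7):
        """Blend each noisy observation with the dynamical model's prediction.

        alpha weights the model's prediction; (1 - alpha) weights the raw sample.
        """
        x = observations[0]
        smoothed = [x]
        for y in observations[1:]:
            pred = A @ x                         # where the dynamics say we should be
            x = alpha * pred + (1 - alpha) * y   # correct toward the noisy sample
            smoothed.append(x)
        return np.array(smoothed)

    # A clean 2-D "neural state" trajectory standing in for the 100- to
    # 200-neuron sample, plus noise standing in for sampling error.
    clean = np.array([np.linalg.matrix_power(A, t) @ np.array([1.0, 0.0])
                      for t in range(50)])
    noisy = clean + 0.3 * rng.standard_normal(clean.shape)

    print("raw error:     ", np.abs(noisy - clean).mean())
    print("denoised error:", np.abs(denoise(noisy, A) - clean).mean())

Because the toy model matches the clean dynamics exactly, the blended estimate typically lands closer to the true trajectory than the raw noisy samples do, which is the denoising effect the researchers exploit.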

To test this algorithm, the Stanford researchers first trained two monkeys to choose targets on a simplified keypad. The keypad consisted of several rows and columns of blank circles. When a light flashed on a given circle the monkeys were trained to reach for that circle with their arms.

To set a performance baseline, the researchers measured how many targets the monkeys could tap with their fingers in 30 seconds. The monkeys averaged 29 correct finger taps in 30 seconds.

In the actual experiment, the researchers scored only virtual taps made with the monkeys’ brain-controlled cursor. Although the monkeys may still have moved their fingers, the researchers counted a hit only when the algorithm-corrected, brain-controlled cursor reached the target.

The prosthesis scored 26 thought-taps in 30 seconds, about 90 percent as fast as a monkey’s finger. (See video of hand versus thought-controlled cursor taps.)

Thought-controlled keypads are not unique to Shenoy’s lab. Other brain-controlled prosthetics use different techniques to solve the problem of sampling error. But of several alternative techniques tested by the Stanford team, the closest resulted in 23 targets in 30 seconds.

Next steps

The goal of all this research is to get thought-controlled prosthetics to people with ALS. Today these people may use an eye-tracking system to direct cursors or a “head mouse” that tracks the movement of the head. Both are fatiguing to use. Neither provides the natural and intuitive control of readings taken directly from the brain.

“Brain-controlled prostheses will lead to a substantial improvement in quality of life,” Shenoy said. “The speed and accuracy demonstrated in this prosthesis results from years of basic neuroscience research and from combining these scientific discoveries with the principled design of mathematical control algorithms.”

The U.S. Food and Drug Administration recently gave Shenoy’s team the green light to conduct a pilot clinical trial of their thought-controlled cursor on people with spinal cord injuries.

“This is a fundamentally new approach that can be further refined and optimized to give brain-controlled prostheses greater performance, and therefore greater clinical viability,” Shenoy said.


Abstract of Single-trial dynamics of motor cortex and their applications to brain-machine interfaces

Increasing evidence suggests that neural population responses have their own internal drive, or dynamics, that describe how the neural population evolves through time. An important prediction of neural dynamical models is that previously observed neural activity is informative of noisy yet-to-be-observed activity on single-trials, and may thus have a denoising effect. To investigate this prediction, we built and characterized dynamical models of single-trial motor cortical activity. We find these models capture salient dynamical features of the neural population and are informative of future neural activity on single trials. To assess how neural dynamics may beneficially denoise single-trial neural activity, we incorporate neural dynamics into a brain–machine interface (BMI). In online experiments, we find that a neural dynamical BMI achieves substantially higher performance than its non-dynamical counterpart. These results provide evidence that neural dynamics beneficially inform the temporal evolution of neural activity on single trials and may directly impact the performance of BMIs.

Unlikely graphene-nanotube combination forms high-speed digital switch

Hair-like boron nitride nanotubes intersect a sheet of graphene (top) to create a high-speed digital switch (credit: Michigan Tech, Yoke Khin Yap)

By itself, graphene is too conductive, while boron nitride nanotubes are too insulating; but combining them could create a workable digital switch, which can be used for controlling electrons in computers and other electronic devices.

To create this serendipitous super-hybrid, Yoke Khin Yap, a professor of physics at Michigan Technological University, and his team exfoliated (peeled off) graphene (from graphite) and modified the material’s surface with tiny pinholes, then grew the boron nitride nanotubes up and through the pinholes — like a plant randomly poking up through a crack in a concrete pavement. That formed a “band gap” mismatch, which created “a potential barrier that stops electrons,” he said.

In other words, a switch.

The chemical structures of graphene (gray) and boron nitride nanotubes (pink and purple) can be used to create a digital switch at the point where the two materials come in contact (credit: Michigan Tech, Yoke Khin Yap)

High switching speed

The band gap mismatch results from the materials’ structure: graphene’s flat sheet conducts electricity quickly, and the atomic structure in the nanotubes halts electric currents. This disparity creates a barrier, caused by the difference in electron movement as currents move next to and past the hair-like boron nitride nanotubes. These points of contact between the materials, called heterojunctions, are what make the digital on/off switch possible.

Yap and his research team have also shown that because the materials are, respectively, so effective at conducting and stopping electricity, the resulting switching ratio is high: the contrast between the on and off states is several orders of magnitude greater than in current graphene switches. That performance could eventually quicken the pace of electronics and computing.

Yap says this study is a continuation of past research into making transistors without semiconductors. The problem with semiconductors like silicon is that they can only get so small, and they give off a lot of heat; the use of graphene and nanotubes bypasses those problems. In addition, the graphene and boron nitride nanotubes have the same atomic arrangement pattern, or lattice matching. With their aligned atoms, the graphene-nanotube digital switches could avoid the issues of electron scattering.

“You want to control the direction of the electrons,” Yap explains, comparing the challenge to a pinball machine that traps, slows down and redirects electrons. “This is difficult in high speed environments, and the electron scattering reduces the number and speed of electrons.”

The journal Scientific Reports recently published their work in an open-access paper.


Abstract of Switching Behaviors of Graphene-Boron Nitride Nanotube Heterojunctions

High electron mobility of graphene has enabled their application in high-frequency analogue devices but their gapless nature has hindered their use in digital switches. In contrast, the structural analogues, h-BN sheets and BN nanotubes (BNNTs), are wide band gap insulators. Here we show that the growth of electrically insulating BNNTs on graphene can enable the use of graphene as effective digital switches. These graphene-BNNT heterojunctions were characterized at room temperature by four-probe scanning tunneling microscopy (4-probe STM) under real-time monitoring of scanning electron microscopy (SEM). A switching ratio as high as 10⁵ at a turn-on voltage as low as 0.5 V were recorded. Simulation by density functional theory (DFT) suggests that mismatch of the density of states (DOS) is responsible for these novel switching behaviors.

Sleeping on your side may clear waste from your brain most effectively

The brain’s glymphatic pathway clears harmful wastes, especially during sleep. This lateral position could prove to be the best position for the brain-waste clearance process (credit: Stony Brook University)

Sleeping in the lateral, or side position, as compared to sleeping on one’s back or stomach, may more effectively remove brain waste, and could reduce the chances of developing Alzheimer’s, Parkinson’s and other neurological diseases, according to researchers at Stony Brook University.

Stony Brook University researchers discovered this in experiments with rodents, using dynamic contrast magnetic resonance imaging (MRI) to image the brain’s glymphatic pathway, a complex system that clears wastes and other harmful chemical solutes from the brain. They also used kinetic modeling to quantify the exchange rates between cerebrospinal fluid (CSF) and interstitial fluid (ISF) in anesthetized rodents’ brains in lateral, prone, and supine positions.

Colleagues at the University of Rochester used fluorescence microscopy and radioactive tracers to validate the MRI data and to assess the influence of body posture on the clearance of amyloid from the brain.

Their finding is published in the Journal of Neuroscience.

Most popular position in humans and animals

“It is interesting that the lateral sleep position is already the most popular in humans and most animals — even in the wild — and it appears that we have adapted the lateral sleep position to most efficiently clear our brain of the metabolic waste products that build up while we are awake,” says Maiken Nedergaard, PhD, a co-author at the University of Rochester.

“The study therefore adds further support to the concept that sleep subserves a distinct biological function: to ‘clean up’ the mess that accumulates while we are awake. Many types of dementia are linked to sleep disturbances, including difficulties in falling asleep. It is increasingly acknowledged that these sleep disturbances may accelerate memory loss in Alzheimer’s disease.”

The brain-waste clearing system

Cerebrospinal fluid (CSF) filters through the brain and exchanges with interstitial fluid (ISF) to clear waste in the glymphatic pathway, similar to the way the body’s lymphatic system clears waste from organs. The glymphatic pathway is most efficient during sleep. Brain waste includes amyloid β (amyloid) and tau proteins, chemicals that negatively affect brain processes if they build up.

Helene Benveniste, MD, PhD, Principal Investigator and a Professor in the Departments of Anesthesiology and Radiology at Stony Brook University School of Medicine, cautioned that further testing with MRI or other imaging methods in humans is necessary.

New York University Langone Medical Center was also involved in the research.


Abstract of The Effect of Body Posture on Brain Glymphatic Transport

The glymphatic pathway expedites clearance of waste, including soluble amyloid β (Aβ) from the brain. Transport through this pathway is controlled by the brain’s arousal level because, during sleep or anesthesia, the brain’s interstitial space volume expands (compared with wakefulness), resulting in faster waste removal. Humans, as well as animals, exhibit different body postures during sleep, which may also affect waste removal. Therefore, not only the level of consciousness, but also body posture, might affect CSF–interstitial fluid (ISF) exchange efficiency. We used dynamic-contrast-enhanced MRI and kinetic modeling to quantify CSF-ISF exchange rates in anesthetized rodents’ brains in supine, prone, or lateral positions. To validate the MRI data and to assess specifically the influence of body posture on clearance of Aβ, we used fluorescence microscopy and radioactive tracers, respectively. The analysis showed that glymphatic transport was most efficient in the lateral position compared with the supine or prone positions. In the prone position, in which the rat’s head was in the most upright position (mimicking posture during the awake state), transport was characterized by “retention” of the tracer, slower clearance, and more CSF efflux along larger caliber cervical vessels. The optical imaging and radiotracer studies confirmed that glymphatic transport and Aβ clearance were superior in the lateral and supine positions. We propose that the most popular sleep posture (lateral) has evolved to optimize waste removal during sleep and that posture must be considered in diagnostic imaging procedures developed in the future to assess CSF-ISF transport in humans.

Intracellular microlasers for precise labeling of a trillion individual cells

Massachusetts General Hospital investigators have induced subcutaneous fat cells in a piece of skin from a pig to emit laser light in response to energy delivered through an optical fiber (credit: Matjaž Humar and Seok Hyun Yun/Nature Photonics)

Imagine being able to label a trillion cells in the body to detect what’s going on in each individual cell.

That’s the eventual goal of a Massachusetts General Hospital (MGH) study to allow individual cells to produce laser light. The wavelengths of light emitted by these intracellular microlasers differ based on factors such as the size, shape, and composition of each microlaser, allowing precise labeling of individual cells.

“The fluorescent dyes currently used for research and for medical diagnosis are limited because they emit a very broad spectrum of light,” explains Seok Hyun Yun, PhD, of the Wellman Center for Photomedicine at MGH, corresponding author of the report. “As a result, only a handful of dyes can be used at a time, since their spectral signatures would overlap.”

(Left) Bright-field image of a HeLa cell containing a polystyrene fluorescent bead. (Right) False-color image of the cell. (scale bars: 10 micrometers) (credit: Matjaž Humar and Seok Hyun Yun/Nature Photonics)

Lead author Matjaž Humar, PhD, also of the Wellman Center, adds, “The narrow-band spectrum of light emitted by these intracellular lasers would allow us to label thousands — in principle, up to a trillion — of cells individually [the estimated number of cells in the human body], and the very specific wavelengths emitted by these microlasers also would allow us to measure small changes happening within a cell with much greater sensitivity than is possible with broadband fluorescence.”

The trick is to use solid plastic fluorescent microbeads, which are readily taken up into cells, each with a unique signature spectrum based on the size and number of beads within a cell and the fluorescent dye used.
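
The dependence on bead size comes from the physics of whispering-gallery resonances. As a rough textbook approximation (ours, not a formula from the paper), a bead of radius R and refractive index n supports modes near wavelengths where a whole number m of wavelengths fits around its circumference:

    % Whispering-gallery resonance condition (textbook approximation):
    % an integer number m of wavelengths fits around the bead's circumference.
    \lambda_m \approx \frac{2 \pi n R}{m}, \qquad m = 1, 2, 3, \ldots

Small differences in bead radius therefore shift the set of supported wavelengths, giving each bead, and each combination of beads and dye, a distinguishable spectral signature.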

“One immediate application of these intracellular lasers could be basic studies, such as understanding how cells move and respond to external forces,” says Yun, an associate professor of Dermatology at Harvard Medical School.

“Another challenging step will be figuring out how to use biologically generated energy from mechanical movement or a biochemical reaction to pump a cellular laser in a living body. Cells are smart machines, and we are interested in exploiting their amazing capabilities by developing smart-cell lasers that might be able to find diseases and fire light at them on their own.

“We can envision lasers completely made out of materials that are safe for use within the human body, which could enable remote sensing within the body or be used in laser-light therapies.”

The researchers’ report was published as an Advance Online Publication in Nature Photonics.


Abstract of Intracellular microlasers

Optical microresonators, which confine light within a small cavity, are widely exploited for various applications ranging from the realization of lasers and nonlinear devices to biochemical and optomechanical sensing. Here we use microresonators and suitable optical gain materials inside biological cells to demonstrate various optical functions in vitro including lasing. We explore two distinct types of microresonator—soft and hard—that support whispering-gallery modes. Soft droplets formed by injecting oil or using natural lipid droplets support intracellular laser action. The laser spectra from oil-droplet microlasers can chart cytoplasmic internal stress (∼500 pN μm⁻²) and its dynamic fluctuations at a sensitivity of 20 pN μm⁻² (20 Pa). In a second form, whispering-gallery modes within phagocytized polystyrene beads of different sizes enable individual tagging of thousands of cells easily and, in principle, a much larger number by multiplexing with different dyes.