Semantic Scholar uses AI to transform scientific search

Example of the top return in a Semantic Scholar search for “quantum computer silicon” constrained to overviews (52 out of 1,397 selected papers since 1989) (credit: AI2)

The Allen Institute for Artificial Intelligence (AI2) launched its free Semantic Scholar service on Monday (Nov. 2), intended to let scientific researchers quickly cull through the millions of scientific papers published each year and find those most relevant to their work.

Semantic Scholar leverages AI2’s expertise in data mining, natural-language processing, and computer vision, according to Oren Etzioni, PhD, CEO at AI2. At launch, the system searches more than three million computer science papers, and will add further scientific categories on an ongoing basis.

With Semantic Scholar, computer scientists can:

  • Home in quickly on what they are looking for, with advanced selection filtering tools. Researchers can filter search results by author, publication, topic, and date published. This gets the researcher to the most relevant result in the fastest way possible, and reduces information overload.
  • Instantly access a paper’s figures and findings. Unique among scholarly search engines, this feature pulls out the graphic results, which are often what a researcher is really looking for.
  • Jump to cited papers and references and see how many researchers have cited each paper, a good way to determine citation influence and usefulness.
  • Be prompted with key phrases within each paper to winnow the search further.
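The faceted filtering described above can be sketched in a few lines of code. This is a hypothetical illustration only: the field names, sample records, and `filter_papers` function are invented for the example and are not Semantic Scholar's actual data or API.

```python
# Invented sample records; not real Semantic Scholar data.
papers = [
    {"title": "Quantum computation and quantum information",
     "author": "Nielsen", "year": 2000, "topic": "quantum computing", "citations": 25000},
    {"title": "Surface codes", "author": "Fowler", "year": 2012,
     "topic": "quantum computing", "citations": 3000},
    {"title": "ImageNet classification", "author": "Krizhevsky", "year": 2012,
     "topic": "deep learning", "citations": 90000},
]

def filter_papers(papers, topic=None, year_from=None):
    """Keep papers matching every supplied facet, then rank by citation count."""
    hits = [p for p in papers
            if (topic is None or p["topic"] == topic)
            and (year_from is None or p["year"] >= year_from)]
    return sorted(hits, key=lambda p: p["citations"], reverse=True)

# Combining facets narrows results quickly: only the 2012
# surface-codes paper survives both filters below.
results = filter_papers(papers, topic="quantum computing", year_from=2010)
```

Ranking the filtered hits by citation count mirrors the citation-influence signal the service exposes.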

Example of figures and tables extracted from the first document discovered (“Quantum computation and quantum information”) in the search above (credit: AI2)

How Semantic Scholar works

Using machine reading and vision methods, Semantic Scholar crawls the web, finding all PDFs of publicly available scientific papers on computer science topics, extracting both text and diagrams/captions, and indexing it all for future contextual retrieval.

Using natural language processing, the system identifies the top papers, extracts filtering information and topics, and sorts by what type of paper and how influential its citations are. It provides the scientist with a simple user interface (optimized for mobile) that maps to academic researchers’ expectations.

Filters such as topic, date of publication, author and where published are built in. It includes smart, contextual recommendations for further keyword filtering as well. Together, these search and discovery tools provide researchers with a quick way to separate wheat from chaff, and to find relevant papers in areas and topics that previously might not have occurred to them.

Semantic Scholar builds from the foundation of other research-paper search applications such as Google Scholar, adding AI methods to overcome information overload.

“Semantic Scholar is a first step toward AI-based discovery engines that will be able to connect the dots between disparate studies to identify novel hypotheses and suggest experiments that would otherwise be missed,” said Etzioni. “Our goal is to enable researchers to find answers to some of science’s thorniest problems.”

First complete pictures of cells’ DNA-copying machinery

These cartoons show the old “textbook” view of the replisome, left, and the new view, right, revealed by electron micrograph images in the current study. Prior to this study, scientists believed the two polymerases (green) were located at the bottom (or back end) of the helicase (tan), adding complementary DNA strands to the split DNA to produce copies side by side. The new images reveal that one of the polymerases is actually located at the front end (top) of the helicase. The scientists are conducting additional studies to explore the biological significance of this unexpected location. (credit: Brookhaven National Laboratory)

The first-ever electron microscope images of the protein complex that unwinds, splits, and copies double-stranded DNA reveal something rather different from the standard textbook view.

The images, created by scientists at the U.S. Department of Energy’s Brookhaven National Laboratory with partners from Stony Brook University and Rockefeller University, offer new insight into how this molecular machinery functions, including new possibilities about its role in DNA “quality control” and cell differentiation.

Huilin Li, a biologist with a joint appointment at Brookhaven Lab and Stony Brook University, says the new images show the fully assembled and fully activated helicase protein complex — which encircles and separates the two strands of the DNA double helix as it passes through a central pore in the structure — and how the helicase coordinates with the two polymerase enzymes that duplicate each strand to copy the genome.

Three blind men and an elephant

Studying this molecular machinery, known collectively as a “replisome,” and the details of its DNA-copying process can help scientists understand what happens when DNA is miscopied — a major source of mutation that can lead to cancer. Scientists can also learn more about how a single cell can eventually develop into the many cell types that make up a multicellular organism.

“All the textbook drawings and descriptions of how a replisome should look and work are based on biochemical and genetic studies,” Li said, likening the situation to the famous parable of the three blind men trying to describe an elephant, each looking at only one part.

To test these assumptions, Li’s group turned to electron microscopy (EM). The team’s first-ever images of an intact replisome revealed that only one of the polymerases is located at the back of the helicase.

The other is on the front side of the helicase, where the helicase first encounters the double-stranded helix. This means that while one of the two split DNA strands is acted on by the polymerase at the back end, the other has to thread itself back through or around the helicase to reach the front-side polymerase before having its new complementary strand assembled.

Unforeseen functions?

The counterintuitive position of one polymerase at the front of the helicase suggests that it may have an unforeseen function. The authors suggest several possibilities, including keeping the two “daughter” strands separate to help organize them during replication and cell division. It might also be possible that, as the single strand moves over other portions of the structure, some “surveillance” protein components check for lesions or mistakes in the nucleotide sequence before it gets copied — a sort of molecular quality control.

This architecture could also potentially play an important role in developmental biology by providing a pathway for treating the two daughter strands differently. Many modifications to DNA, including how it is packaged with other proteins, control which of the many genes in the sequence are eventually expressed in cells. An asymmetric replisome may result in asymmetric treatment of the two daughter strands during cell division, an essential step for making different tissues within a multicellular organism.

“Clearly, further studies will be required to understand the functional implications of the unexpected replisome architecture reported here,” concludes the researchers’ paper published Monday (Nov. 2) online by the journal Nature Structural & Molecular Biology.


Brookhaven National Laboratory | Three-dimensional structure of the active DNA helicase bound to the front-end DNA polymerase (Pol epsilon). The DNA polymerase epsilon (green) sits on top rather than the bottom of the helicase.


Abstract of The Architecture of a Eukaryotic Replisome

At the eukaryotic DNA replication fork, it is widely believed that the Cdc45–Mcm2–7–GINS (CMG) helicase is positioned in front to unwind DNA and that DNA polymerases trail behind the helicase. Here we used single-particle EM to directly image a Saccharomyces cerevisiae replisome. Contrary to expectations, the leading strand Pol ε is positioned ahead of CMG helicase, whereas Ctf4 and the lagging-strand polymerase (Pol) α–primase are behind the helicase. This unexpected architecture indicates that the leading-strand DNA travels a long distance before reaching Pol ε, first threading through the Mcm2–7 ring and then making a U-turn at the bottom and reaching Pol ε at the top of CMG. Our work reveals an unexpected configuration of the eukaryotic replisome, suggests possible reasons for this architecture and provides a basis for further structural and biochemical replisome studies.

Just one junk-food snack triggers signals of metabolic syndrome

(credit: iStock)

Just one high-calorie milkshake was enough to worsen markers of metabolic syndrome in some people. And overindulgence in even a single meal or snack (especially junk food) is enough to trigger the beginnings of metabolic syndrome, a cluster of conditions associated with the risk of developing cardiovascular disease and diabetes (obesity around the waist and trunk is the main sign).

That finding by researchers at the Microbiology and Systems Biology Group of the Netherlands Organisation for Applied Scientific Research (TNO) was reported in the online edition of the Nov. 2015 issue of The FASEB Journal.

For some people, “acute effects of diet are mostly small, but may have large consequences in the long run,” said TNO researcher Suzan Wopereis, Ph.D., senior author of the report.

The researchers gave male volunteers in two groups a high-fat milkshake consisting of 53% whipping cream, 3% sugar, and 44% water (1.6 g protein, 16 g fat, and 3.2 g carbohydrates).
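As a rough sanity check, the shake's energy content can be estimated from the macronutrient figures above using the standard Atwater general factors (4 kcal/g for protein and carbohydrate, 9 kcal/g for fat); the article itself does not state a calorie count, so this is an estimate.

```python
# Atwater general factors: kcal per gram of each macronutrient.
ATWATER = {"protein": 4, "fat": 9, "carbohydrate": 4}

# Grams per shake, from the composition reported above.
shake = {"protein": 1.6, "fat": 16.0, "carbohydrate": 3.2}

kcal = sum(grams * ATWATER[macro] for macro, grams in shake.items())
# ~163 kcal, with roughly 88% of the energy coming from fat
```

The dominance of fat in the energy breakdown is what makes this a "high-fat challenge" in the study's terminology.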

The first group included 10 healthy male volunteers. They were also given a snack diet consisting of an additional 1300 kcal per day, in the form of sweets and savory products such as candy bars, tarts, peanuts, and crisps for four weeks.

The second group included nine volunteers with metabolic syndrome, who had a combination of two or more risk factors for heart disease, such as unhealthy cholesterol levels, high blood pressure, high blood sugar, high blood lipids, and abdominal fat.

Test results: not good

Both groups had blood samples taken, before and after the snacks. In these blood samples, the researchers measured 61 biomarkers, such as cholesterol and blood sugar.

For the subjects with metabolic syndrome, the blood tests showed that biochemical processes related to sugar metabolism, fat metabolism, and inflammation were abnormal.

For the 10 healthy male volunteers, the blood tests showed that signaling molecules such as hormones regulating the control of sugar and fat metabolism and inflammation were changed, resembling the very subtle start of negative health effects similar to those found at the start of metabolic disease.

“Eating junk food is one of those situations where our brains say ‘yes’ and our bodies say ‘no,’” said Gerald Weissmann, M.D., Editor-in-Chief of The FASEB Journal. “Unfortunately for us, this report shows that we need to use our brains and listen to our bodies. Even one unhealthy snack has negative consequences that extend far beyond any pleasure it brings.”


Abstract of Quantifying phenotypic flexibility as the response to a high-fat challenge test in different states of metabolic health

Metabolism maintains homeostasis at chronic hypercaloric conditions, activating postprandial response mechanisms, which come at the cost of adaptation processes such as energy storage, eventually with negative health consequences. This study quantified the metabolic adaptation capacity by studying challenge response curves. After a high-fat challenge, the 8 h response curves of 61 biomarkers related to adipose tissue mass and function, systemic stress, metabolic flexibility, vascular health, and glucose metabolism was compared between 3 metabolic health stages: 10 healthy men, before and after 4 wk of high-fat, high-calorie diet (1300 kcal/d extra), and 9 men with metabolic syndrome (MetS). The MetS subjects had increased fasting concentrations of biomarkers representing the 3 core processes, glucose, TG, and inflammation control, and the challenge response curves of most biomarkers were altered. After the 4 wk hypercaloric dietary intervention, these 3 processes were not changed, as compared with the preintervention state in the healthy subjects, whereas the challenge response curves of almost all endocrine, metabolic, and inflammatory processes regulating these core processes were altered, demonstrating major molecular physiologic efforts to maintain homeostasis. This study thus demonstrates that change in challenge response is a more sensitive biomarker of metabolic resilience than are changes in fasting concentrations.—Kardinaal, A. F. M., van Erk, M. J., Dutman, A. E., Stroeve, J. H. M., van de Steeg, E., Bijlsma, S., Kooistra, T., van Ommen, B., Wopereis, S. Quantifying phenotypic flexibility as the response to a high-fat challenge test in different states of metabolic health.

China plans world’s largest supercollider

China’s Circular Electron Positron Collider (CEPC) is expected to be at least twice the size of the world’s current leading collider, the Large Hadron Collider at CERN, partially shown here (credit: Maximilien Brice/CERN)

Chinese scientists are completing plans for the Circular Electron Positron Collider (CEPC), a supergiant particle collider. With a circumference of 80 kilometers (50 miles) when built, it will be at least twice the size of the world’s current leading collider, the Large Hadron Collider at CERN, outside Geneva, according to the Institute of High Energy Physics in Beijing. Work on the collider is expected to start in 2020.

The collider complex is initially designed to smash together electrons and their antimatter counterparts, and later more massive protons, at velocities approaching the speed of light. The process aims to recreate, inside the accelerator, the hyper-energy conditions that prevailed just after the Big Bang.

Physicists aim to explore the origins of matter, energy, and space-time. China says its collider will ultimately be able to reach higher energy levels than CERN; this might help physicists discover a new range of particles beyond those already charted in the Standard Model of Particle Physics.

According to Professor Nima Arkani-Hamed, a scholar at Princeton’s Institute for Advanced Study, a perfect circle-shaped city, hosting the globe’s leaders in experimental particle physics, new-technology firms and other future-oriented scholars and designers, could be created inside the massive Chinese collider complex. The complex would also host a multipurpose science-technology campus aimed at conducting secondary and supplemental science experiments.

The same 80-km tunnel is planned to house two different supercolliders. The Circular Electron Positron Collider (CEPC) is designed to study the Higgs boson and how it decays following collisions between electrons and positrons. The Super Proton Proton Collider (SPPC) will be used to study near-light-speed collisions of protons.

New quadrupole magnets, which focus particle beams before collisions, are one of the key technologies for the High-Luminosity LHC. (credit: CERN)

Last week, more than 230 scientists and engineers from around the world met at CERN to discuss the High-Luminosity LHC — a major upgrade to the Large Hadron Collider (LHC) that “will increase its discovery potential from 2025,” according to the CERN website.

Massive supercomputer simulation models universe from near birth until today

Galaxies have halos surrounding them, which may be composed of both dark and regular matter. This image shows a substructure within a halo in the Q Continuum simulation, with “subhalos” marked in different colors. (credit: Heitmann et al.)

The Q Continuum simulation, one of the largest cosmological simulations ever performed, has modeled the evolution of the universe from just 50 million years after the Big Bang to the present day.

DOE’s Argonne National Laboratory led the simulation on the Titan supercomputer at DOE’s Oak Ridge National Laboratory.

Over the course of 13.8 billion years, the matter in the universe clumped together to form galaxies, stars, and planets. These kinds of simulations help scientists understand dark energy (a form of energy that affects the expansion rate of the universe) and the distribution of galaxies, composed of ordinary matter and mysterious dark matter.

This series, run on the Titan supercomputer, simulates the evolution of the universe. The images give an impression of the detail in the matter distribution in the simulation. At first, the matter is very uniform, but over time, gravity acts on the dark matter, which begins to clump more and more, and in the clumps, galaxies form. (credit: Heitmann et al.)

Intensive sky surveys with powerful telescopes, like the Sloan Digital Sky Survey and the new, more detailed Dark Energy Survey, show scientists where galaxies and stars were when their light was first emitted. And surveys of the Cosmic Microwave Background (light remaining from when the universe was only 300,000 years old) show us how the universe began — “very uniform, with matter clumping together over time,” said Katrin Heitmann, an Argonne physicist who led the simulation.

The simulation fills in the temporal gap to show how the universe might have evolved in between: “Gravity acts on the dark matter, which begins to clump more and more, and in the clumps, galaxies form,” said Heitmann.

The Q Continuum simulation involved half a trillion particles — dividing the universe up into cubes with sides 100,000 kilometers long. This makes it one of the largest cosmology simulations at such high resolution. It ran using more than 90 percent of the supercomputer.
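The particle count and the (1300 Mpc)³ volume quoted in the paper's abstract fix the mass resolution: each simulation particle carries an equal share of the box's total matter. The back-of-the-envelope check below reproduces the abstract's ~1.5 × 10⁸ solar-mass figure, assuming WMAP-7-like cosmological parameters (Ωₘ ≈ 0.265, h ≈ 0.71) and an 8192³ particle grid, none of which are stated in this article.

```python
# Particle mass of an N-body simulation: m_p = Omega_m * rho_crit * V / N
# (the total matter content of the box divided evenly among particles).
OMEGA_M = 0.265              # assumed matter density parameter (WMAP-7-like)
H = 0.71                     # assumed dimensionless Hubble parameter
RHO_CRIT = 2.775e11 * H**2   # critical density in M_sun per Mpc^3

volume = 1300.0**3           # (1300 Mpc)^3, from the paper's abstract
n_particles = 8192**3        # assumed grid; ~0.55 trillion particles

m_p = OMEGA_M * RHO_CRIT * volume / n_particles
# comes out near 1.5e8 solar masses, matching the quoted resolution
```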

“This is a very rich simulation,” Heitmann said. “We can use this data to look at why galaxies clump this way, as well as the fundamental physics of structure formation itself.”

Analysis has already begun on the two and a half petabytes of data that were generated, and will continue for several years, she said. Scientists can pull information on such astrophysical phenomena as strong lensing, weak lensing shear, cluster lensing, and galaxy-galaxy lensing.


Abstract of The Q Continuum simulation: Harnessing the power of GPU accelerated supercomputers

Modeling large-scale sky survey observations is a key driver for the continuing development of high-resolution, large-volume, cosmological simulations. We report the first results from the “Q Continuum” cosmological N-body simulation run carried out on the GPU-accelerated supercomputer Titan. The simulation encompasses a volume of (1300 Mpc)³ and evolves more than half a trillion particles, leading to a particle mass resolution of m_p ≈ 1.5 × 10⁸ M⊙. At this mass resolution, the Q Continuum run is currently the largest cosmology simulation available. It enables the construction of detailed synthetic sky catalogs, encompassing different modeling methodologies, including semi-analytic modeling and sub-halo abundance matching in a large, cosmological volume. Here we describe the simulation and outputs in detail and present first results for a range of cosmological statistics, such as mass power spectra, halo mass functions, and halo mass-concentration relations for different epochs. We also provide details on challenges connected to running a simulation on almost 90% of Titan, one of the fastest supercomputers in the world, including our usage of Titan’s GPU accelerators.

Single-agent phototherapy system diagnoses and kills cancer cells

A new single-agent phototherapy system combines silicon naphthalocyanine (which is toxic to cancer) and PEG-PCL (biodegradable carrier) for diagnosis and treatment of cancer (credit: Oregon State University)

Researchers at Oregon State University have announced a new single-agent phototherapy (light-based) approach to combating cancer, using a single chemical compound (SiNc-PNP), for both diagnosis and treatment.

The compound makes cancer cells glow when exposed to near-infrared light so a surgeon can identify the cancer. The compound includes a copolymer called PEG-PCL as the biodegradable carrier. The carrier causes the silicon naphthalocyanine to accumulate selectively in cancer cells and reach a maximum level in the cells after about one day. At that point, doctors would do surgery, and then use phototherapy treatment to kill the remaining cancer cells. The compounds are naturally and completely excreted from the body.

In tests completed with laboratory animals, tumors were completely eradicated without side effects, and did not return.

The findings were presented Thursday (Nov. 29) at the annual meeting of the American Association of Pharmaceutical Scientists in Orlando, Florida, and were also recently published in Chemistry of Materials, a publication of the American Chemical Society.

An alternative to surgery, radiation, and chemotherapy

The researchers believe that phototherapy may become a new and promising addition to the three primary ways that most cancer is treated today: surgery, radiation, and/or chemotherapy. Phototherapy may have special value with cancers that have formed resistance to chemotherapeutic drugs, or present other problems that can’t be managed with existing therapies, the researchers suggest.

Their research so far has studied ovarian cancers in laboratory animals, but the treatment may also be useful for other solid tumors, they suggest. There were no apparent side effects on animals tested.

“A single-agent based system is simple and very good at targeting only cancer tumors and should significantly improve outcomes,” said Oleh Taratula, an assistant professor in the Oregon State University/Oregon Health & Science University College of Pharmacy. “It’s small, nontoxic, and highly efficient.”

In continued research with the OSU College of Veterinary Medicine, the treatment will eventually move on to human clinical trials.


Abstract of Naphthalocyanine-Based Biodegradable Polymeric Nanoparticles for Image-Guided Combinatorial Phototherapy

Image-guided phototherapy is extensively considered as a promising therapy for cancer treatment. To enhance translational potential of this modality, we developed a single agent-based biocompatible nanoplatform that provides both real time near-infrared (NIR) fluorescence imaging and combinatorial phototherapy with dual photothermal and photodynamic therapeutic mechanisms. The developed theranostic nanoplatform consists of two building blocks: (1) silicon naphthalocyanine (SiNc) as a NIR fluorescence imaging and phototherapeutic agent and (2) a copolymer, poly(ethylene glycol)-block-poly(ε-caprolactone) (PEG–PCL) as the biodegradable SiNc carrier. Our simple, highly reproducible, and robust approach results in preparation of spherical, monodisperse SiNc-loaded PEG–PCL polymeric nanoparticles (SiNc-PNP) with a hydrodynamic size of 37.66 ± 0.26 nm (polydispersity index = 0.06) and surface charge of −2.76 ± 1.83 mV. The SiNc-loaded nanoparticles exhibit a strong NIR light absorption with an extinction coefficient of 2.8 × 105 M–1 cm–1 and efficiently convert the absorbed energy into fluorescence emission (ΦF = 11.8%), heat (ΔT ∼ 25 °C), and reactive oxygen species. Moreover, the SiNc-PNP are characterized by superior photostability under extensive photoirradiation and structure integrity during storage at room temperature over a period of 30 days. Following intravenous injection, the SiNc-PNP accumulated selectively in tumors and provided high lesion-to-normal tissue contrast for sensitive fluorescence detection. Finally, adriamycin-resistant tumors treated with a single intravenous dose of SiNc-PNP (1.5 mg/kg) combined with 10 min of a 785 nm light irradiation (1.3 W/cm2) were completely eradicated from the mice without cancer recurrence or side effects. The reported characteristics make the developed SiNc-PNP a promising platform for future clinical application.

How to build a full-scale quantum computer in silicon

Physical layout of the surface code* quantum computer. The system comprises three layers. The 2D donor qubit array resides in the middle layer. A mutually perpendicular (crisscross) pattern of control gates in the upper and lower planes form a regular 3D grid of cells. (credit: Charles D. Hill et al./Science Advances)

A new 3D silicon-chip architecture based on single-atom quantum bits has been designed by researchers at UNSW Australia (The University of New South Wales) and the University of Melbourne.

The use of silicon makes it compatible with existing atomic-scale fabrication techniques, providing a way to build a large-scale quantum computer.**

The scientists and engineers from the Australian Research Council Centre of Excellence for Quantum Computation and Communication Technology (CQC2T), headquartered at UNSW, previously demonstrated a fabrication strategy. But the hard part in scaling up to an operational quantum computer was the architecture: How to precisely control multiple qubits in parallel across an array of many thousands of qubits and constantly correct for quantum errors in calculations.

The CQC2T collaboration says they have now designed such a device. In a study published Friday (Oct. 30) in an open-access paper in Science Advances, the CQC2T team describes a new silicon architecture that uses atomic-scale qubits aligned to control lines (essentially very narrow wires) inside a 3D design.


UNSW | How to build a quantum computer in silicon

Error correction

Errors (caused by decoherence and other quantum noise) are endemic to quantum computing, so error correction protocols are essential in creating a practical system that can be scaled up to larger numbers of qubits.

“The great thing about this work, and architecture, is that it gives us an endpoint,” says UNSW Scientia Professor Michelle Simmons, study co-author and Director of the CQC2T. “We now know exactly what we need to do in the international race to get there.”

In the team’s conceptual design, they have moved from the conventional one-dimensional array (in a line) of qubits to a two-dimensional array (in a surface), which is far more tolerant of errors. This qubit layer is “sandwiched” between two layers of control wires arranged in a 3D grid.

By applying voltages to a subset of these wires, multiple qubits can be controlled in parallel, performing a series of operations using far fewer controls. They can also perform the 2D surface-code* error correction protocols, so any computational errors that creep into the calculation can be corrected faster than they occur.
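The surface code itself is too involved for a short sketch, but the underlying idea of redundancy-based error correction can be illustrated with a much simpler classical analog: a three-bit repetition code with majority-vote decoding. This is a toy illustration only, not the surface code the paper proposes.

```python
import collections

def encode(bit):
    """Repetition code: store one logical bit as three physical copies."""
    return [bit] * 3

def decode(codeword):
    """Majority vote: any single flipped copy is outvoted by the other two."""
    return collections.Counter(codeword).most_common(1)[0][0]

noisy = encode(1)
noisy[0] ^= 1               # a single error flips one copy: [0, 1, 1]
recovered = decode(noisy)   # majority vote still recovers the logical 1
```

The surface code applies the same redundancy principle, but across a 2D lattice of qubits and for both bit-flip and phase-flip errors, which is why the 2D qubit layer in this architecture matters.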

The researchers believe their structure is scalable to millions of qubits, and that means they may be on the fast track to a full-scale quantum processor.

* “Surface code is a powerful quantum error correcting code that can be defined on a 2D square lattice of qubits with only nearest neighbor interactions.” — Austin G. Fowler et al. Surface code quantum error correction incorporating accurate error propagation. arXiv, 4/2010

** In classical computers, data is rendered as binary bits, which are always in one of two states: 0 or 1. However, a qubit can exist in both of these states at once, a condition known as a superposition. A qubit operation exploits this quantum weirdness by allowing many computations to be performed in parallel (a two-qubit system performs the operation on 4 values, a three-qubit system on 8, and so on). As a result, quantum computers will far exceed today’s most powerful supercomputers, and offer enormous advantages for a range of complex problems, such as rapidly scouring vast databases, modeling financial markets, optimizing huge metropolitan transport networks, and modeling complex biological molecules.
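The exponential growth the footnote describes is easy to see in a small state-vector sketch. This is a toy illustration of the bookkeeping only; real quantum hardware does not store amplitudes as a table.

```python
import itertools
import math

def uniform_superposition(n):
    """Amplitudes after a Hadamard on each of n qubits starting from |0...0>:
    all 2**n basis states get equal amplitude 1/sqrt(2**n)."""
    amp = 1 / math.sqrt(2 ** n)
    return {"".join(bits): amp for bits in itertools.product("01", repeat=n)}

state = uniform_superposition(3)
n_states = len(state)                          # 3 qubits span 2**3 = 8 states
total_prob = sum(a * a for a in state.values())  # probabilities sum to 1
```

Doubling the register size squares the number of basis states a classical simulator must track, which is the source of the quantum speedup claims in the footnote.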


Abstract of A surface code quantum computer in silicon

The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel—posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited.

Is this the ‘ultimate’ battery?

False-color microscopic view of a reduced graphene oxide electrode (black), which hosts the large (about 20 micrometers) lithium hydroxide particles (pink) that form when a lithium-oxygen battery discharges (credit: T. Liu et al./Science)

University of Cambridge scientists have developed a working laboratory demonstrator of a lithium-oxygen battery that has very high energy density (storage capacity per unit volume), is more than 90% efficient, and can be recharged more than 2000 times (so far), showing how several of the problems holding back the development of more powerful batteries could be solved.

Lithium-oxygen (lithium-air) batteries have been touted as the “ultimate” battery due to their theoretical energy density, which is ten times higher than a lithium-ion battery. Such a high energy density would be comparable to that of gasoline — allowing for an electric car with a battery that is a fifth the cost and a fifth the weight of those currently on the market and that could drive about 666 km (414 miles) on a single charge. (This compares to 500 kilometers (311 miles) with the new University of Waterloo design, using a silicon anode — see “Longer-lasting, lighter lithium-ion batteries from silicon anodes.”)

The challenges associated with making a better battery are holding back the widespread adoption of two major clean technologies: electric cars and grid-scale storage for solar power.

A lab demonstrator based on graphene

The researchers have now demonstrated how some of the obstacles to the ultimate battery could be overcome in a lab-based demonstrator of a lithium-oxygen battery with higher capacity, increased energy efficiency, and improved stability over previous attempts.

SEM images of pristine, fully discharged, and charged reduced graphene oxide electrodes in lab demonstrator. Scale bars: 20 micrometers. (credit: Tao Liu et al./Science)

Their demonstrator relies on a highly porous, “fluffy” carbon electrode made from reduced graphene oxide (comprising one-atom-thick sheets of carbon atoms), and additives that alter the chemical reactions at work in the battery, making it more stable and more efficient. While the results, reported in the journal Science, are promising, the researchers caution that a practical lithium-air battery still remains at least a decade away.

“What we’ve achieved is a significant advance for this technology and suggests whole new areas for research — we haven’t solved all the problems inherent to this chemistry, but our results do show routes forward towards a practical device,” said Professor Clare Grey of Cambridge’s Department of Chemistry, the paper’s senior author.

Batteries are made of three components: a positive electrode, a negative electrode and an electrolyte. In the lithium-ion (Li-ion) batteries currently used in laptops and smartphones, the negative electrode is made of graphite (a form of carbon), the positive electrode is made of a metal oxide, such as lithium cobalt oxide, and the electrolyte is a lithium salt dissolved in an organic solvent. The action of the battery depends on the movement of lithium ions between the electrodes. Li-ion batteries are light, but their capacity deteriorates with age, and their relatively low energy densities mean that they need to be recharged frequently.

Over the past decade, researchers have been developing various alternatives to Li-ion batteries, and lithium-air batteries are considered the ultimate in next-generation energy storage, because of their extremely high theoretical energy density. However, attempts at working demonstrators so far have had low efficiency, poor rate performance, and unwanted chemical reactions. Also, they can only be cycled in pure oxygen.

The battery that Liu, Grey, and their colleagues have developed uses a very different chemistry: lithium hydroxide (LiOH) instead of lithium peroxide (Li2O2). With the addition of water and the use of lithium iodide as a “mediator,” their battery showed far fewer of the side reactions that can cause cells to die, making it far more stable over multiple charge and discharge cycles.
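The overall discharge reaction implied by this LiOH chemistry (an inference from the stated reactants, not an equation quoted from the paper) can be sketched as:

```latex
% Inferred overall discharge reaction forming lithium hydroxide;
% lithium iodide acts as a redox mediator rather than a reactant.
4\,\mathrm{Li} + \mathrm{O_2} + 2\,\mathrm{H_2O} \longrightarrow 4\,\mathrm{LiOH}
```

On charge the reaction runs in reverse, with the LiI mediator helping to decompose the crystalline LiOH.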

By precisely engineering the structure of the electrode (changing it to a highly porous form of graphene), adding lithium iodide, and changing the chemical makeup of the electrolyte, the researchers were able to reduce the “voltage gap” between charge and discharge to 0.2 volts. A smaller voltage gap means a more efficient battery: previous lithium-air batteries have only managed to get the gap down to 0.5–1.0 volts, whereas 0.2 volts is closer to that of a Li-ion battery and equates to an energy efficiency of 93%.
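The link between voltage gap and efficiency is simple arithmetic: round-trip energy efficiency is the discharge voltage divided by the charge voltage (discharge voltage plus gap). A minimal sketch in Python, assuming a ~2.7 V discharge plateau (an illustrative value; only the 0.2 V and 0.5–1.0 V gaps come from the article):

```python
# Round-trip energy efficiency from the charge/discharge voltage gap.
# The 2.72 V discharge plateau is an assumed, illustrative value; only
# the voltage-gap figures are taken from the article.

def energy_efficiency(discharge_v: float, voltage_gap: float) -> float:
    """Energy efficiency = discharge voltage / (discharge voltage + gap)."""
    return discharge_v / (discharge_v + voltage_gap)

v_dis = 2.72  # assumed discharge voltage (volts)
print(f"0.2 V gap -> {energy_efficiency(v_dis, 0.2):.1%}")  # ~93%
print(f"1.0 V gap -> {energy_efficiency(v_dis, 1.0):.1%}")  # ~73%
```

Under this assumption a 0.2 V gap gives roughly the 93% efficiency reported, while gaps of 0.5–1.0 V waste a quarter or more of the input energy as heat on each cycle.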

Problems to be solved

The highly porous graphene electrode also greatly increases the capacity of the demonstrator, although only at certain rates of charge and discharge. Other issues that still have to be addressed include finding a way to protect the metal electrode so that it doesn’t form spindly lithium metal fibers known as dendrites, which can cause batteries to explode if they grow too much and short-circuit the battery.

Additionally, the demonstrator can only be cycled in pure oxygen, while the air around us also contains carbon dioxide, nitrogen and moisture, all of which are generally harmful to the metal electrode.

The authors acknowledge support from the U.S. Department of Energy, the Engineering and Physical Sciences Research Council (EPSRC), Johnson Matthey, the European Union via Marie Curie Actions, and the Graphene Flagship. The technology has been patented and is being commercialized through Cambridge Enterprise, the University’s commercialization arm.


Abstract of Cycling Li-O2 batteries via LiOH formation and decomposition

The rechargeable aprotic lithium-air (Li-O2) battery is a promising potential technology for next-generation energy storage, but its practical realization still faces many challenges. In contrast to the standard Li-O2 cells, which cycle via the formation of Li2O2, we used a reduced graphene oxide electrode, the additive LiI, and the solvent dimethoxyethane to reversibly form and remove crystalline LiOH with particle sizes larger than 15 micrometers during discharge and charge. This leads to high specific capacities, excellent energy efficiency (93.2%) with a voltage gap of only 0.2 volt, and impressive rechargeability. The cells tolerate high concentrations of water, water being the dominant proton source for the LiOH; together with LiI, it has a decisive impact on the chemical nature of the discharge product and on battery performance.

Flexible phototransistor is world’s fastest, most sensitive

New phototransistor is flexible yet fastest and most responsive in the world, according to UW engineers (credit: Jung-Hun Seo)

University of Wisconsin-Madison (UW) electrical engineers have created the fastest, most responsive flexible silicon phototransistor ever made, inspired by mammals’ eyes.

Phototransistors (an advanced type of photodetector) convert light to electricity. They are widely used in products ranging from digital cameras, night-vision goggles, and smoke detectors to surveillance systems and satellites.

Developed by UW-Madison collaborators Zhenqiang “Jack” Ma, professor of electrical and computer engineering, and research scientist Jung-Hun Seo, the new phototransistor design uses thin-film single-crystalline silicon nanomembranes and achieves the highest sensitivity and fastest response time to date, the engineers say.

They suggest it could improve performance of products that rely on electronic light sensors. Integrated into a digital camera lens, for example, it could reduce bulkiness and boost the acquisition speed and quality of video or still photos.

Silicon nanomembrane phototransistor design. An anti-reflection coating (ARC) with a low refractive index increases light absorption by the silicon nanomembrane (Si NM) below, which is backed by transistor electrodes (source, gate, and drain), a reflective metal layer, and protective polyethylene terephthalate (PET). (credit: Jung-Hun Seo et al./Advanced Optical Materials)

While many phototransistors are fabricated on rigid surfaces, and are therefore flat, the new devices are flexible, meaning they more easily mimic the behavior of mammalian eyes. “We actually can make the curve any shape we like to fit the optical system,” Ma says.

The new “flip-transfer” fabrication method places the electrodes under the phototransistor’s ultrathin silicon nanomembrane layer, with a reflective metal layer on the bottom. The metal layer and electrodes act as reflectors and improve light-absorption sensitivity without the need for an external amplifier.

“Light absorption can be much more efficient because light is not blocked by any metal layers or other materials,” Ma says.

The researchers published details this week in the journal Advanced Optical Materials. The work was supported by the U.S. Air Force. The researchers are patenting the technology through the Wisconsin Alumni Research Foundation.


Abstract of Flexible Phototransistors Based on Single-Crystalline Silicon Nanomembranes

In this work, flexible phototransistors with a back gate configuration based on transferrable single-crystalline Si nanomembrane (Si NM) have been demonstrated. Having the Si NM as the top layer enables full exposure of the active region to an incident light and thus allows for effective light sensing. Flexible phototransistors are performed in two operation modes: 1) the high light detection mode that exhibits a photo-to-dark current ratio of 10^5 at voltage bias of VGS < 0.5 V, and VDS = 50 mV and 2) the high responsivity mode that shows a maximum responsivity of 52 A W−1 under blue illumination at voltage bias of VGS = 1 V, and VDS = 3 V. Due to the good mechanical flexibility of Si NMs with the assistance of a polymer layer to enhance light absorption, the device exhibits stable responsivity with less than 5% of variation under bending at small radii of curvatures (up to 15 mm). Overall, such flexible phototransistors with the capabilities of high sensitivity light detection and stable performance under the bending conditions offer great promises for high-performance flexible optical sensor applications, with easy integration for multifunctional applications.
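The abstract's headline figures translate directly into expected signal levels: a responsivity R (in A/W) converts incident optical power into photocurrent, I = R × P. A back-of-envelope sketch, with an assumed 1 µW input; note the two figures were measured under different bias conditions, so combining them here is purely illustrative:

```python
# Back-of-envelope signal estimate from the reported figures.
# The responsivity and photo-to-dark ratio come from the abstract;
# the 1-microwatt optical input is an illustrative assumption, and
# the two figures correspond to different bias modes.

responsivity = 52.0     # A/W, high-responsivity mode (blue light)
photo_to_dark = 1e5     # reported photo-to-dark current ratio

optical_power = 1e-6    # W, assumed incident power
photocurrent = responsivity * optical_power   # amps
dark_current = photocurrent / photo_to_dark   # implied dark level

print(f"photocurrent: {photocurrent * 1e6:.0f} uA")  # 52 uA
print(f"dark current: {dark_current * 1e9:.2f} nA")  # 0.52 nA
```

Even a microwatt of blue light would thus produce a photocurrent five orders of magnitude above the implied dark level, which is what makes the device attractive for low-light sensing.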

Long-term aerobic exercise prevents age-related brain deterioration

Schematic illustration of age-related changes in the neurovascular unit that are prevented by exercise. In the aged cortex of sedentary mice, neurovascular dysfunction is evident by decreased numbers of pericytes (surrounding capillaries, pink), decline in basement membrane (BM) coverage (blue), increased transcytosis (a process that transports macromolecules across cells, allowing pathogens to invade) on endothelial cells (green), reduced expression of AQP4 in astrocytes, down-regulation of Apoe (an essential protein, light purple), decrease in synaptic proteins such as synaptophysin (SYN, green), and increased proinflammatory IBA1+ microglia/monocytes (indicating age-related neuroinflammation, yellow). These age-related changes were successfully prevented (horizontal T line, “Exercise”) by 6 months of voluntary running during aging. (credit: Ileana Soto et al./PLOS Biology)

A study of the brains of mice shows that structural deterioration associated with old age can be prevented by long-term aerobic exercise starting in mid-life, according to the authors of an open-access paper in the journal PLOS Biology yesterday (October 29).

Old age is the major risk factor for Alzheimer’s disease, like many other diseases, as the authors at The Jackson Laboratory in Bar Harbor, Maine, note. Age-related cognitive deficits are due partly to changes in neuronal function, but also correlate with deficiencies in the blood supply to the brain and with low-level inflammation.

“Collectively, our data suggests that normal aging causes significant dysfunction to the cortical neurovascular unit, including basement membrane reduction and pericyte (cells that wrap around blood capillaries) loss. These changes correlate strongly with an increase in microglia/monocytes in the aged cortex,” said Ileana Soto, lead author on the study.*

Benefits of aerobic exercise

However, the researchers found that if they let the mice run freely, the structural changes that make the blood-brain barrier leaky and result in inflammation of brain tissue in old mice can be mitigated. That suggests exercise may also have beneficial effects on dementia in humans.**

Further work will be required to establish the mechanism(s): What is the role of the complement-producing microglia/macrophages? How does the decline in Apoe contribute to age-related neurovascular decline? Does the leaky blood-brain barrier allow damaging factors to pass from the circulation into the brain?

This work was funded in part by The Jackson Laboratory Nathan Shock Center, the Fraternal Order of the Eagle, the Jane B Cook Foundation and NIH.

* The authors investigated the changes in the brains of normal young and aged laboratory mice by comparing their gene expression profiles using a technique called RNA sequencing, and by comparing their structures at high resolution using fluorescence microscopy and electron microscopy. The gene expression analysis indicated age-related changes in the expression of genes relevant to vascular function (including focal adhesion, vascular smooth muscle and ECM-receptor interactions), and inflammation (especially related to the complement system, which clears foreign particles) in the brain cortex.

These changes were accompanied by a decline in the function of astrocytes (key support cells in the brain) and loss of pericytes (the contractile cells that surround small capillaries and venules and maintain the blood-brain barrier). There were also effects on the basement membrane, which forms an integral part of the blood-brain barrier, as well as an increase in the density and functional activation of the immune cells known as microglia/monocytes, which scavenge the brain for infectious agents and damaged cells.

** To investigate the impact of long-term physical exercise on the brain changes seen in the aging mice, the researchers provided the animals with a running wheel from 12 months old (equivalent to middle-aged in humans) and assessed their brains at 18 months (equivalent to ~60 years old in humans, when the risk of Alzheimer’s disease is greatly increased). Young and old mice alike ran about two miles per night, and this physical activity improved the ability and motivation of the old mice to engage in the typical spontaneous behaviors that seem to be affected by aging.

This exercise significantly reduced age-related pericyte loss in the brain cortex and improved other indicators of dysfunction of the vascular system and blood-brain barrier. Exercise also decreased the numbers of microglia/monocytes expressing a crucial initiating component of the complement pathway that others have shown previously to play a role in age-related cognitive decline. Interestingly, these beneficial effects of exercise were not seen in mice deficient in a gene called Apoe, variants of which are a major genetic risk factor for Alzheimer’s disease. The authors also report that Apoe expression in the brain cortex declines in aged mice and that this decline can also be prevented by exercise.


Abstract of APOE Stabilization by Exercise Prevents Aging Neurovascular Dysfunction and Complement Induction

Aging is the major risk factor for neurodegenerative diseases such as Alzheimer’s disease, but little is known about the processes that lead to age-related decline of brain structures and function. Here we use RNA-seq in combination with high resolution histological analyses to show that aging leads to a significant deterioration of neurovascular structures including basement membrane reduction, pericyte loss, and astrocyte dysfunction. Neurovascular decline was sufficient to cause vascular leakage and correlated strongly with an increase in neuroinflammation including up-regulation of complement component C1QA in microglia/monocytes. Importantly, long-term aerobic exercise from midlife to old age prevented this age-related neurovascular decline, reduced C1QA+ microglia/monocytes, and increased synaptic plasticity and overall behavioral capabilities of aged mice. Concomitant with age-related neurovascular decline and complement activation, astrocytic Apoe dramatically decreased in aged mice, a decrease that was prevented by exercise. Given the role of APOE in maintaining the neurovascular unit and as an anti-inflammatory molecule, this suggests a possible link between astrocytic Apoe, age-related neurovascular dysfunction and microglia/monocyte activation. To test this, Apoe-deficient mice were exercised from midlife to old age and in contrast to wild-type (Apoe-sufficient) mice, exercise had little to no effect on age-related neurovascular decline or microglia/monocyte activation in the absence of APOE. Collectively, our data shows that neurovascular structures decline with age, a process that we propose to be intimately linked to complement activation in microglia/monocytes. Exercise prevents these changes, but not in the absence of APOE, opening up new avenues for understanding the complex interactions between neurovascular and neuroinflammatory responses in aging and neurodegenerative diseases such as Alzheimer’s disease.