Rechargeable batteries with almost infinite lifetimes coming, say MIT-Samsung engineers

Illustration of the crystal structure of a superionic conductor. The backbone of the material is a cubic-like arrangement of sulphur anions (yellow). Lithium atoms are depicted in green, PS4 tetrahedra in violet, and GeS4 tetrahedra in blue. (credit: Yan Wang)

MIT and Samsung researchers have developed a new approach to achieving long life and a 20 to 30 percent improvement in power density (the amount of power stored in a given space) in rechargeable batteries — using a solid electrolyte, rather than the liquid used in today’s most common rechargeables. The new materials could also greatly improve safety and last through “hundreds of thousands of cycles.”

The results are reported in the journal Nature Materials. Solid-state electrolytes could be “a real game-changer,” says co-author Gerbrand Ceder, MIT visiting professor of materials science and engineering, creating “almost a perfect battery, solving most of the remaining issues” in battery lifetime, safety, and cost.

Superionic lithium-ion conductors

The electrolyte in rechargeable batteries is typically a liquid organic solvent whose function is to transport charged particles from one of a battery’s two electrodes to the other during charging and discharging. That material has been responsible for the overheating and fires that, for example, resulted in a temporary grounding of all of Boeing’s 787 Dreamliner jets.

With a solid electrolyte, there’s no safety problem, Ceder says. “You could throw it against the wall, drive a nail through it — there’s nothing there to burn.”

The key to making all this feasible, Ceder says, was finding solid materials that could conduct ions fast enough to be useful in a battery. The initial findings focused on a class of materials known as superionic lithium-ion conductors, which are compounds of lithium, germanium, phosphorus, and sulfur. But the principles derived from this research could lead to even more effective materials, the team says, and they could function below about minus 20 degrees Fahrenheit.

Researchers at the University of California at San Diego and the University of Maryland were also involved in the study.



Abstract of Design principles for solid-state lithium superionic conductors

Lithium solid electrolytes can potentially address two key limitations of the organic electrolytes used in today’s lithium-ion batteries, namely, their flammability and limited electrochemical stability. However, achieving a Li+ conductivity in the solid state comparable to existing liquid electrolytes (>1 mS cm−1) is particularly challenging. In this work, we reveal a fundamental relationship between anion packing and ionic transport in fast Li-conducting materials and expose the desirable structural attributes of good Li-ion conductors. We find that an underlying body-centred cubic-like anion framework, which allows direct Li hops between adjacent tetrahedral sites, is most desirable for achieving high ionic conductivity, and that indeed this anion arrangement is present in several known fast Li-conducting materials and other fast ion conductors. These findings provide important insight towards the understanding of ionic transport in Li-ion conductors and serve as design principles for future discovery and design of improved electrolytes for Li-ion batteries.

Why wind — and soon solar — are already cheaper than fossil fuels

Global levelized cost of energy (LCOE) by various fuel types in $/megawatt-hours (credit: Citigroup)

Citigroup has published an analysis of the costs of various energy sources called “Energy Darwinism II.” It concludes that if all the costs of generation are included (known as the levelized cost of energy), renewables turn out to be cheaper than fossil fuels and a “benefit rather than a cost to society,” RenewEconomy reports.

“Capital costs are often cited by the promoters of fossil fuels as evidence that coal and gas are, and will remain, cheaper than renewable energy sources such as wind and solar. But this focuses on the short term only — a trap repeated by opponents of climate action and clean energy, who focus on the upfront costs of policies.”

Actually, fuel costs can “account for 80 per cent of the cost of gas-fired generation, and more than half the cost of coal,” RenewEconomy says.

Renewables ahead

The graph above shows that the lowest-cost wind (in the best regions) is already beating coal and gas. Solar in the sunniest regions will do so by 2020, based on conservative estimates. And the cost of solar and wind will continue to fall, with solar eventually beating wind, RenewEconomy projects.

Citigroup estimates a “learning rate” of 19 per cent — meaning that solar costs will fall that much with each doubling in capacity (a variation of Moore’s Law). This translates into cost falls of 2 per cent a year.

“But as real-life experience shows, cost falls are happening faster than that. Last week, one of the big solar module manufacturers, Trina Solar, said costs had fallen 19 per cent in the past year, and would continue to fall by at least 5 per cent to 6 per cent a year in coming years as efficiencies were improved.”
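
As a sanity check on that arithmetic, here is a minimal Python sketch relating the 19 per cent learning rate to the roughly 2 per cent annual decline; the capacity growth rate it back-calculates is purely illustrative and is not a figure from the Citigroup report.

```python
import math

learning_rate = 0.19        # costs fall 19 per cent with each doubling of installed capacity (Citigroup)
annual_cost_decline = 0.02  # the report's translation: roughly 2 per cent a year

# Cost scales as (1 - learning_rate) ** doublings, so the two figures together
# imply a certain number of capacity doublings per year.
doublings_per_year = math.log(1 - annual_cost_decline) / math.log(1 - learning_rate)
implied_growth = 2 ** doublings_per_year - 1

print(f"implied doublings per year: {doublings_per_year:.2f}")   # ~0.10
print(f"implied annual capacity growth: {implied_growth:.1%}")   # ~7 per cent
```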

“We should think of installing renewable energy as a benefit rather than a cost to society,” Citigroup writes.

 

Making hydrogen fuel from water and visible light at 100 times higher efficiency

Test unit schematic for temperature-induced photocatalytic hydrogen production from H2O with methanol as a sacrificial agent: (1) thermocouple (temperature sensor), (2) black Pt/TiO2 on SiO2 substrate, (3) quartz wool, (4) quartz tube reactor, (5) electrical tube furnace; (GC) gas chromatograph (analyzes gas components) (credit: Bing Han and Yun Hang Hu/Journal of Physical Chemistry)

Researchers at Michigan Technological University have found a way to convert light to hydrogen fuel more efficiently — a big step closer to mimicking photosynthesis.

Current methods for creating hydrogen fuel are based on using electrodes made from titanium dioxide (TiO2), which acts as a catalyst to stimulate the light → water → hydrogen chemical reaction. This works great with ultraviolet (UV) light, but UV comprises only about 4% of the total solar energy, making the overall process highly inefficient.*

The ideal would be to use visible light, since it constitutes about 45 percent of solar energy. Now two Michigan Tech scientists — Yun Hang Hu, the Charles and Carroll McArthur professor of Materials Science and Engineering, and his PhD student, Bing Han — have developed a way to do exactly that.

They report in the Journal of Physical Chemistry that by absorbing the entire visible light spectrum, they have increased the yield and energy efficiency of hydrogen fuel production by up to two orders of magnitude (100 times) over previously reported results.**

As described in the paper, they used three new techniques to achieve that:

  • “Black titanium dioxide” (with 1 percent platinum) on a silicon dioxide substrate;
  • A “light-diffuse-reflected surface” to trap light;
  • An elevated reaction temperature (280 degrees Celsius).

In addition, the new setup is “convenient for scaling up commercially,” said Hu.

* TiO2 has a relatively large band gap energy (3.0−3.2 eV) and thus it can absorb only ultraviolet (UV) light (about 4% of the total solar energy), leading to a low photoconversion efficiency (less than 2% under AM 1.5 global sunlight illumination).

** The new method achieves a photo hydrogen yield of 497 mmol/h/g and an apparent quantum efficiency of 65.7% for the entire visible light range at 280 °C.
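
The band-gap figures in the first footnote can be translated into an absorption-edge wavelength with the standard photon-energy relation (wavelength in nm ≈ 1240 / energy in eV); a minimal sketch, using only that textbook conversion:

```python
# Convert a semiconductor band gap (eV) into its absorption-edge wavelength (nm)
# via E = hc / wavelength, i.e. wavelength [nm] ~ 1239.84 / E [eV].

def absorption_edge_nm(band_gap_ev: float) -> float:
    return 1239.84 / band_gap_ev

for gap_ev in (3.0, 3.2):  # TiO2 band-gap range quoted in the first footnote
    print(f"E_g = {gap_ev} eV -> absorption edge ~ {absorption_edge_nm(gap_ev):.0f} nm")
# ~413 nm and ~387 nm: only near-UV photons are energetic enough, which is why
# bare TiO2 captures so little of the solar spectrum.
```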


Abstract of Highly Efficient Temperature-Induced Visible Light Photocatalytic Hydrogen Production from Water

Intensive effort has led to numerous breakthroughs for photoprocesses. So far, however, energy conversion efficiency for the visible-light photocatalytic splitting of water is still very low. In this paper, we demonstrate (1) surface-diffuse-reflected-light can be 2 orders of magnitude more efficient than incident light for photocatalysis, (2) the inefficiency of absorbed visible light for the photocatalytic H2 production from water with a sacrificial agent is due to its kinetic limitation, and (3) the dispersion of black Pt/TiO2 catalyst on the light-diffuse-reflection-surface of a SiO2 substrate provides a possibility for exploiting a temperature higher than H2O boiling point to overcome the kinetic limitation of visible light photocatalytic hydrogen production. Those findings create a novel temperature-induced visible light photocatalytic H2 production from water steam with a sacrificial agent, which exhibits a high photohydrogen yield of 497 mmol/h/gcat with a large apparent quantum efficiency (QE) of 65.7% for entire visible light range at 280 °C. The QE and yield are one and 2 orders of magnitude larger than most reported results, respectively.

‘Diamonds from the sky’ approach to turn CO2 into valuable carbon nanofibers

Researchers are removing a greenhouse gas from the air while generating carbon nanofibers like these (credit: Stuart Licht, Ph.D)

A research team of chemists at George Washington University has developed a technology that can economically convert atmospheric CO2 directly from the air into highly valued carbon nanofibers for industrial and consumer products — converting an anthropogenic greenhouse gas from a climate change problem to a valuable commodity, they say.

The team presented their research today (Aug. 19) at the 250th National Meeting & Exposition of the American Chemical Society (ACS).

“Such nanofibers are used to make strong carbon composites, such as those used in the Boeing Dreamliner, as well as in high-end sports equipment, wind turbine blades and a host of other products,” said Stuart Licht, Ph.D., team leader.

The researchers previously reported making fertilizer and cement without emitting CO2. Now the team, which includes postdoctoral fellow Jiawen Ren, Ph.D., and graduate student Jessica Stuart, says its research could shift CO2 from a global-warming problem to a feedstock for the manufacture of in-demand carbon nanofibers.

Licht calls his approach “diamonds from the sky.” That refers to carbon being the material that diamonds are made of, and also hints at the high value of the products, such as carbon nanofibers.

A low-energy, high-efficiency process

The researchers claim this low-energy process can be run efficiently, using only a few volts of electricity, sunlight, and a whole lot of carbon dioxide. The system uses electrolytic syntheses to make the nanofibers. Here’s how:

  1. To power the syntheses, heat and electricity are produced through a hybrid and extremely efficient concentrating solar-energy system. The system focuses the sun’s rays on a photovoltaic solar cell to generate electricity and on a second system to generate heat and thermal energy, which raises the temperature of an electrolytic cell.
  2. CO2 is broken down in a high-temperature electrolytic bath of molten carbonates at 1,380 degrees F (750 degrees C).
  3. Atmospheric air is added to an electrolytic cell.
  4. The CO2 dissolves when subjected to the heat and direct current through electrodes of nickel and steel.
  5. The carbon nanofibers build up on the steel electrode, where they can be removed.

Licht estimates electrical energy costs of this “solar thermal electrochemical process” to be around $1,000 per ton of carbon nanofiber product. That means the cost of running the system is hundreds of times less than the value of product output, he says.

Decreasing CO2 to pre-industrial-revolution levels

“We calculate that with a physical area less than 10 percent the size of the Sahara Desert, our process could remove enough CO2 to decrease atmospheric levels to those of the pre-industrial revolution within 10 years,” he says.

At this time, the system is experimental. Licht’s biggest challenge will be to ramp up the process and gain experience to make consistently sized nanofibers. “We are scaling up quickly,” he adds, “and soon should be in range of making tens of grams of nanofibers an hour.”

Licht explains that one advance the group has recently achieved is the ability to synthesize carbon fibers using even less energy than when the process was initially developed. “Carbon nanofiber growth can occur at less than 1 volt at 750 degrees C, which for example is much less than the 3–5 volts used in the 1,000 degree C industrial formation of aluminum,” he says.
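
As a rough, independent cross-check on the roughly $1,000-per-ton electricity estimate above, Faraday’s law gives the minimum charge needed to deposit a ton of carbon via a four-electron reduction of CO2-derived carbonate; the cell voltage and electricity price below are illustrative assumptions, not figures from Licht’s group.

```python
# Back-of-envelope electricity requirement per ton of deposited carbon, assuming
# a four-electron reduction of CO2-derived carbonate to solid carbon.
FARADAY = 96485        # coulombs per mole of electrons
M_CARBON = 12.011      # g/mol
ELECTRONS = 4          # electrons per carbon atom deposited

cell_voltage = 1.0     # V; illustrative, near the "less than 1 volt" figure quoted above
price_per_kwh = 0.10   # $/kWh; illustrative electricity price, not from the paper

mol_c_per_ton = 1e6 / M_CARBON                  # moles of carbon in one metric ton
charge = mol_c_per_ton * ELECTRONS * FARADAY    # coulombs required
energy_kwh = charge * cell_voltage / 3.6e6      # joules -> kWh

print(f"minimum electricity: {energy_kwh:,.0f} kWh per ton of carbon")        # ~8,900 kWh
print(f"cost at assumed price: ${energy_kwh * price_per_kwh:,.0f} per ton")   # ~$900
```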

No published details on overall energy costs and efficiency are yet available (to be updated).


Abstract of New approach to carbon dioxide utilization: The carbon molten air battery

As the levels of carbon dioxide (CO2) increase in the Earth’s atmosphere, the effects on climate change become increasingly apparent. As the demand to reduce our dependence on fossil fuels and lower our carbon emissions increases, a transition to renewable energy sources is necessary. Cost effective large-scale electrical energy storage must be established for renewable energy to become a sustainable option for the future. We’ve previously shown that carbon dioxide can be captured directly from the air at solar efficiencies as high as 50%, and that carbon dioxide associated with cement formation and the production of other commodities can be electrochemically avoided in the STEP process.1-3

The carbon molten air battery, presented by our group in late 2013, is attractive due to its scalability, location flexibility, and construction from readily available resources, providing a battery that can be useful for large scale applications, such as the storage of renewable electricity.4

Uncommonly, the carbon molten air battery can utilize carbon dioxide directly from the air:
(1) charging: CO2(g) → C(solid) + O2(g)
(2) discharging: C(solid) + O2(g) → CO2(g)
More specifically, in a molten carbonate electrolyte containing added oxide, such as lithium carbonate with lithium oxide, the four-electron charging reaction (eq. 1) approaches 100% faradaic efficiency and can be described by the following two equations:
(1a) O^2−(dissolved) + CO2(g) → CO3^2−(molten)
(1b) CO3^2−(molten) → C(solid) + O2(g) + O^2−(dissolved)
Thus, powered by carbon formed directly from the CO2 in our earth’s atmosphere, the carbon molten air battery is a viable system to provide large-scale energy storage.

1. S. Licht, “Efficient Solar-Driven Synthesis, Carbon Capture, and Desalinization, STEP: Solar Thermal Electrochemical Production of Fuels, Metals, Bleach,” Advanced Materials, 23 (47), 5592 (2011).
2. S. Licht, H. Wu, C. Hettige, B. Wang, J. Lau, J. Asercion, and J. Stuart, “STEP Cement: Solar Thermal Electrochemical Production of CaO without CO2 emission,” Chemical Communications, 48, 6019 (2012).
3. S. Licht, B. Cui, B. Wang, F.-F. Li, J. Lau, and S. Liu, “Ammonia synthesis by N2 and steam electrolysis in molten hydroxide suspensions of nanoscale Fe2O3,” Science, 345, 637 (2014).
4. S. Licht, B. Cui, J. Stuart, B. Wang, and J. Lau, “Molten Air Batteries – A new, highest energy class of rechargeable batteries,” Energy & Environmental Science, 6, 3646 (2013).

Glass paint could keep metal roofs and other structures cool even on sunny days

Silica-based paint (credit: American Chemical Society/Johns Hopkins University Applied Physics Lab)

Scientists at the Johns Hopkins University Applied Physics Lab have developed a new, environmentally friendly paint made from glass that bounces sunlight off metal surfaces — keeping them cool and durable.

“Most paints you use on your car or house are based on polymers, which degrade in the ultraviolet light rays of the sun,” says Jason J. Benkoski, Ph.D. “So over time you’ll have chalking and yellowing. Polymers also tend to give off volatile organic compounds, which can harm the environment. That’s why I wanted to move away from traditional polymer coatings to inorganic glass ones.”

Glass, which is made out of silica, would be an ideal coating. It’s hard, durable and has the right optical properties. But it’s very brittle.

To address that brittleness in a new coating, Benkoski started with silica, one of the most abundant materials in the earth’s crust. He modified one form of it, potassium silicate, which normally dissolves in water. His tweaks transformed the compound so that when it’s sprayed onto a surface and dries, it becomes water-resistant.

Unlike acrylic, polyurethane or epoxy paints, Benkoski’s paint is almost completely inorganic, which should make it last far longer than its counterparts that contain organic compounds. His paint is also designed to expand and contract with metal surfaces to prevent cracking.

Mixing pigments with the silicate gives the coating an additional property: the ability to reflect all sunlight and passively radiate heat. Since it doesn’t absorb sunlight, any surface coated with the paint will remain at air temperature, or even slightly cooler. That’s key to protecting structures from the sun.

“When you raise the temperature of any material, any device, it almost always by definition ages much more quickly than it normally would,” Benkoski says. “It’s not uncommon for aluminum in direct sunlight to heat 70 degrees Fahrenheit above ambient temperature. If you make a paint that can keep an outdoor surface close to air temperature, then you can slow down corrosion and other types of degradation.”
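
A minimal steady-state energy-balance sketch shows why solar absorptance dominates the temperature rise; the insolation, emissivity, convective coefficient, and absorptance values below are illustrative assumptions rather than measurements from Benkoski’s lab, but they land in the same range as the 70-degree figure quoted above.

```python
# Linearized steady-state balance for a sunlit surface:
#   absorbed solar = convective + radiative losses to ambient
#   alpha * S = (h_conv + h_rad) * dT,  with h_rad ~ 4 * eps * sigma * T_amb**3
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1000.0         # peak insolation, W/m^2 (assumption)
T_AMB = 300.0      # ambient temperature, K (assumption)
EPS = 0.9          # thermal emissivity (assumption)
H_CONV = 15.0      # convective coefficient, W m^-2 K^-1, light wind (assumption)

h_rad = 4 * EPS * SIGMA * T_AMB**3

for label, alpha in (("strongly absorbing surface", 0.85),
                     ("highly reflective coating", 0.10)):
    dT = alpha * S / (H_CONV + h_rad)
    print(f"{label}: ~{dT:.0f} K ({dT * 1.8:.0f} degrees F) above ambient")
# roughly 75 F above ambient for the absorbing surface vs. under 10 F for the
# reflective coating, illustrating the press release's point.
```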


American Chemical Society | Glass Paint That Can Keep Structures Cool

The paint Benkoski’s lab is developing is intended for use on naval ships (with funding from the U.S. Office of Naval Research), but has many potential commercial applications.

“You might want to paint something like this on your roof to keep heat out and lower your air-conditioning bill in the summer,” he says. It could even go on metal playground slides or bleachers. And it would be affordable: the materials needed to make the coating are abundant and inexpensive.

Benkoski says he expects his lab will start field-testing the material in about two years.

The researchers presented their work today at the 250th National Meeting & Exposition of the American Chemical Society (ACS), held in Boston through Thursday. It features more than 9,000 presentations on a wide range of science topics.


Abstract of Passive cooling with UV-resistant siloxane coatings in direct sunlight

Solar exposure is a leading cause of material degradation in outdoor use. Polymers and other organic materials photo-oxidize due to ultraviolet (UV) exposure. Even in metals, solar heating can cause unwanted property changes through precipitation and Ostwald ripening. In more complex systems, cyclic temperature changes cause fatigue failure wherever thermal expansion mismatch occurs. Most protective coatings designed to prevent these effects inevitably succumb to the same phenomena because of their polymeric matrix. In contrast, siloxane coatings have the potential to provide indefinite solar protection because they do not undergo photo-oxidation. This study therefore demonstrates UV-reflective siloxane coatings with low solar absorptance and high thermal emissivity that prevent any increase in temperature above ambient conditions in direct sunlight. Mathematical modeling suggests that even sub-ambient cooling is possible for ZnO-filled potassium silicate. Preventing widespread adoption of potassium silicates until now has been their tendency to crack at large thicknesses, dissolve in water, and delaminate from untreated surfaces. This investigation has successfully addressed these limitations by formulating potassium silicates to behave more like a flexible siloxane polymer than a brittle inorganic glass. The addition of plasticizers (potassium, glycerol), gelling agents (polyethylenimine), and water-insoluble precipitates (zinc silicates, cerium silicates, organosilanes) make it possible to form thick, water resistant coatings that exhibit excellent adhesion even to untreated aluminum surfaces.

MIT designs small, modular, efficient fusion power plant

A cutaway view of the proposed ARC reactor (credit: MIT ARC team)

MIT plans to create a new compact version of a tokamak fusion reactor with the goal of producing practical fusion power, which could offer a nearly inexhaustible energy resource in as little as a decade.

Fusion, the nuclear reaction that powers the sun, involves fusing pairs of hydrogen atoms together to form helium, accompanied by enormous releases of energy.

The new fusion reactor, called ARC, would take advantage of new, commercially available superconductors — rare-earth barium copper oxide (REBCO) superconducting tapes (the dark brown areas in the illustration above) — to produce stronger magnetic field coils, according to Dennis Whyte, a professor of Nuclear Science and Engineering and director of MIT’s Plasma Science and Fusion Center.

The stronger magnetic field makes it possible to produce the required magnetic confinement of the superhot plasma — that is, the working material of a fusion reaction — but in a much smaller device than those previously envisioned. The reduction in size, in turn, makes the whole system less expensive and faster to build, and also allows for some ingenious new features in the power plant design.

The proposed reactor is described in a paper in the journal Fusion Engineering and Design, co-authored by Whyte, PhD candidate Brandon Sorbom, and 11 others at MIT.

Power plant prototype

The new reactor is designed for basic research on fusion and also as a potential prototype power plant that could produce 270MW of electrical power. The basic reactor concept and its associated elements are based on well-tested and proven principles developed over decades of research at MIT and around the world, the team says. An experimental tokamak was built at the Princeton Plasma Physics Laboratory around 1980, for example.

The hard part has been confining the superhot plasma — an electrically charged gas — while heating it to temperatures hotter than the cores of stars. This is where the magnetic fields are so important — they effectively trap the heat and particles in the hot center of the device.

While most characteristics of a system tend to vary in proportion to changes in dimensions, the effect of changes in the magnetic field on fusion reactions is much more extreme: The achievable fusion power increases according to the fourth power of the increase in the magnetic field.

Tenfold boost in power

The new superconductors are strong enough to increase fusion power by about a factor of 10 compared to standard superconducting technology, Sorbom says. This dramatic improvement leads to a cascade of potential improvements in reactor design.
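
A quick worked version of that fourth-power scaling (an illustration of the stated rule, not a calculation from the paper): a roughly 1.8-fold increase in field strength is enough for a ten-fold gain in fusion power, and doubling the field would give a sixteen-fold gain.

```latex
P_{\mathrm{fusion}} \propto B^{4}
\qquad\Longrightarrow\qquad
\frac{P_2}{P_1} = \left(\frac{B_2}{B_1}\right)^{4},
\qquad
\frac{P_2}{P_1} = 10 \;\Longrightarrow\; \frac{B_2}{B_1} = 10^{1/4} \approx 1.8 .
```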

ITER — the world’s largest tokamak — is expected to be completed in 2019, with deuterium-tritium operations in 2027 and 2000–4000MW of fusion power onto the grid in 2040 (credit: ITER Organization)

The world’s most powerful planned fusion reactor, a huge device called ITER that is under construction in France, is expected to cost around $40 billion. Sorbom and the MIT team estimate that the new design, about half the diameter of ITER (which was designed before the new superconductors became available), would produce about the same power at a fraction of the cost, in a shorter construction time, and with the same physics.

Another key advance in the new design is a method for removing the fusion power core from the donut-shaped reactor without having to dismantle the entire device. That makes it especially well-suited for research aimed at further improving the system by using different materials or designs to fine-tune the performance.

In addition, as with ITER, the new superconducting magnets would enable the reactor to operate in a sustained way, producing a steady power output, unlike today’s experimental reactors that can only operate for a few seconds at a time without overheating of copper coils.

Liquid protection

Another key advantage is that most of the solid blanket materials used to surround the fusion chamber in such reactors are replaced by a liquid material that can easily be circulated and replaced, eliminating the need for costly replacement procedures as the materials degrade over time.

Right now, as designed, the reactor should be capable of producing about three times as much electricity as is needed to keep it running, but the design could probably be improved to increase that proportion to about five or six times, Sorbom says. So far, no fusion reactor has produced as much energy as it consumes, so this kind of net energy production would be a major breakthrough in fusion technology, the team says.

The design could produce a reactor that would provide electricity to about 100,000 people, they say. Devices of a similar complexity and size have been built within about five years, they say.

“Fusion energy is certain to be the most important source of electricity on earth in the 22nd century, but we need it much sooner than that to avoid catastrophic global warming,” says David Kingham, CEO of Tokamak Energy Ltd. in the UK, who was not connected with this research. “This paper shows a good way to make quicker progress,” he says.

The MIT research, Kingham says, “shows that going to higher magnetic fields, an MIT specialty, can lead to much smaller (and hence cheaper and quicker-to-build) devices.” The work is of “exceptional quality,” he says; “the next step … would be to refine the design and work out more of the engineering details, but already the work should be catching the attention of policy makers, philanthropists and private investors.”

The research was supported by the U.S. Department of Energy and the National Science Foundation.


Abstract of ARC: A compact, high-field, fusion nuclear science facility and demonstration power plant with demountable magnets

The affordable, robust, compact (ARC) reactor is the product of a conceptual design study aimed at reducing the size, cost, and complexity of a combined fusion nuclear science facility (FNSF) and demonstration fusion Pilot power plant. ARC is a ∼200–250 MWe tokamak reactor with a major radius of 3.3 m, a minor radius of 1.1 m, and an on-axis magnetic field of 9.2 T. ARC has rare earth barium copper oxide (REBCO) superconducting toroidal field coils, which have joints to enable disassembly. This allows the vacuum vessel to be replaced quickly, mitigating first wall survivability concerns, and permits a single device to test many vacuum vessel designs and divertor materials. The design point has a plasma fusion gain of Qp ≈ 13.6, yet is fully non-inductive, with a modest bootstrap fraction of only ∼63%. Thus ARC offers a high power gain with relatively large external control of the current profile. This highly attractive combination is enabled by the ∼23 T peak field on coil achievable with newly available REBCO superconductor technology. External current drive is provided by two innovative inboard RF launchers using 25 MW of lower hybrid and 13.6 MW of ion cyclotron fast wave power. The resulting efficient current drive provides a robust, steady state core plasma far from disruptive limits. ARC uses an all-liquid blanket, consisting of low pressure, slowly flowing fluorine lithium beryllium (FLiBe) molten salt. The liquid blanket is low-risk technology and provides effective neutron moderation and shielding, excellent heat removal, and a tritium breeding ratio ≥ 1.1. The large temperature range over which FLiBe is liquid permits an output blanket temperature of 900 K, single phase fluid cooling, and a high efficiency helium Brayton cycle, which allows for net electricity generation when operating ARC as a Pilot power plant.

How hybrid solar-cell materials may capture more solar energy

Two universities have recently announced innovative techniques that use hybrid materials to capture more solar energy per unit area, which could reduce solar-cell installation costs.

Capturing more of the spectrum

Chemists at the University of California, Riverside have found an ingenious way to lower solar cell installation costs by reducing the size of solar collectors (credit: David Monniaux)

The University of California, Riverside strategy for making solar cells more efficient is to use the near-infrared region of the sun’s spectrum, which is not absorbed by current solar cells.

The researchers report in Nano Letters that a hybrid material that combines inorganic materials (cadmium selenide and lead selenide semiconductor nanocrystals) with organic molecules (diphenylanthracene and rubrene) could allow for an increase of solar photovoltaic efficiency by 30 percent or more, according to Christopher Bardeen, a UC Riverside professor of chemistry.

The new material also has wide-ranging applications such as in biological imaging, data storage and organic light-emitting diodes. “The ability to move light energy from one wavelength to [a] more useful region — for example, from red to blue — can impact any technology that involves photons as inputs or outputs,” he said.

The research was supported by grants from the National Science Foundation and the U.S. Army.

Plasmonic nanostructures and metal oxides

Rice researchers selectively filtered high-energy hot electrons from their less-energetic counterparts using a Schottky barrier (left) created with a gold nanowire on a titanium dioxide semiconductor. A second setup (right), which included a thin layer of titanium between the gold and the titanium dioxide, did not filter electrons based on energy level. (credit: B. Zheng/Rice University)

Meanwhile, new research from Rice University’s Laboratory for Nanophotonics (LANP) has found a way to boost the efficiency and also reduce the cost of photovoltaic solar cells by using high-efficiency light-gathering plasmonic nanostructures combined with low-cost semiconductors, such as metal oxides.

“We can tune plasmonic structures to capture light across the entire solar spectrum,” claims Rice’s Naomi Halas, co-author of an open-access paper in Nature Communications. “The efficiency of [conventional] semiconductor-based solar cells can never be extended in this way because of the inherent optical properties of the semiconductors.”

The researchers found in an experiment that a solar cell using a “Schottky barrier” device allowed only “hot electrons” (electrons in the metal that have a much higher energy level) to pass from a gold nanowire to the semiconductor, unlike an “Ohmic device,” which let all electrons pass.

Today’s most efficient photovoltaic cells use a combination of semiconductors that are made from rare and expensive elements like gallium and indium, so this finding promises to further reduce the cost of solar cells.


Abstract of Hybrid Molecule–Nanocrystal Photon Upconversion Across the Visible and Near-Infrared

The ability to upconvert two low energy photons into one high energy photon has potential applications in solar energy, biological imaging, and data storage. In this Letter, CdSe and PbSe semiconductor nanocrystals are combined with molecular emitters (diphenylanthracene and rubrene) to upconvert photons in both the visible and the near-infrared spectral regions. Absorption of low energy photons by the nanocrystals is followed by energy transfer to the molecular triplet states, which then undergo triplet–triplet annihilation to create high energy singlet states that emit upconverted light. By using conjugated organic ligands on the CdSe nanocrystals to form an energy cascade, the upconversion process could be enhanced by up to 3 orders of magnitude. The use of different combinations of nanocrystals and emitters shows that this platform has great flexibility in the choice of both excitation and emission wavelengths.

Abstract of Distinguishing between plasmon-induced and photoexcited carriers in a device geometry

The use of surface plasmons, charge density oscillations of conduction electrons of metallic nanostructures, to boost the efficiency of light-harvesting devices through increased light-matter interactions could drastically alter how sunlight is converted into electricity or fuels. These excitations can decay directly into energetic electron–hole pairs, useful for photocurrent generation or photocatalysis. However, the mechanisms behind plasmonic carrier generation remain poorly understood. Here we use nanowire-based hot-carrier devices on a wide-bandgap semiconductor to show that plasmonic carrier generation is proportional to internal field-intensity enhancement and occurs independently of bulk absorption. We also show that plasmon-induced hot electrons have higher energies than carriers generated by direct excitation and that reducing the barrier height allows for the collection of carriers from plasmons and direct photoexcitation. Our results provide a route to increasing the efficiency of plasmonic hot-carrier devices, which could lead to more efficient devices for converting sunlight into usable energy.

Phosphorene could lead to ultrathin solar cells


Australian National University | Sticky tape the key to ultrathin solar cells

Scientists at Australian National University (ANU) have used simple transparent sticky (aka “Scotch”) tape to create single-atom-thick layers of phosphorene from “black phosphorus,” a black crystalline form of phosphorus similar to graphite (which is used to create graphene).

Unlike graphene, phosphorene is a natural semiconductor that can be switched on and off, like silicon, as KurzweilAI has reported. “Because phosphorene is so thin and light, it creates possibilities for making lots of interesting devices, such as LEDs or solar cells,” said lead researcher Yuerui (Larry) Lu, PhD, from ANU College of Engineering and Computer Science.

Properties that vary with layer thickness

Phosphorene is a thinner and lighter semiconductor than silicon, and it has unusual light emission properties that vary widely with the thickness of the layers, which enables more flexibility for manufacturing. “This property has never been reported before in any other material,” said Lu.

Schematic of the “puckered honeycomb” crystal structure of black phosphorus (credit: Vahid Tayari/McGill University)

“By changing the number of layers [peeled off] we can tightly control the band gap, which determines the material’s properties, such as the color of LED it would make,”* said Lu. “You can see quite clearly under the microscope the different colors of the sample, which tells you how many layers are there.”

The study was recently described in an open-access paper in the Nature journal Light: Science and Applications.

* Lu’s team found the optical gap for monolayer (single-layer) phosphorene was 1.75 electron volts, corresponding to red light with a wavelength of 700 nanometers. As more layers were added, the optical gap decreased. For instance, for five layers, the optical gap value was 0.8 electron volts, an infrared wavelength of 1550 nanometers. For very thick layers, the value was around 0.3 electron volts, a mid-infrared wavelength of around 3.5 microns.
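
As a check, the first two wavelengths in the footnote follow directly from the standard photon energy-to-wavelength conversion:

```latex
\lambda\,[\mathrm{nm}] \;\approx\; \frac{1240\ \mathrm{eV\cdot nm}}{E_g\,[\mathrm{eV}]}
\qquad\Longrightarrow\qquad
\frac{1240}{1.75} \approx 709\ \mathrm{nm}\ \text{(red)},
\qquad
\frac{1240}{0.8} = 1550\ \mathrm{nm}\ \text{(infrared)}.
```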


Abstract of Optical tuning of exciton and trion emissions in monolayer phosphorene

Monolayer phosphorene provides a unique two-dimensional (2D) platform to investigate the fundamental dynamics of excitons and trions (charged excitons) in reduced dimensions. However, owing to its high instability, unambiguous identification of monolayer phosphorene has been elusive. Consequently, many important fundamental properties, such as exciton dynamics, remain underexplored. We report a rapid, noninvasive, and highly accurate approach based on optical interferometry to determine the layer number of phosphorene, and confirm the results with reliable photoluminescence measurements. Furthermore, we successfully probed the dynamics of excitons and trions in monolayer phosphorene by controlling the photo-carrier injection in a relatively low excitation power range. Based on our measured optical gap and the previously measured electronic energy gap, we determined the exciton binding energy to be ~0.3 eV for the monolayer phosphorene on SiO2/Si substrate, which agrees well with theoretical predictions. A huge trion binding energy of ~100 meV was first observed in monolayer phosphorene, which is around five times higher than that in transition metal dichalcogenide (TMD) monolayer semiconductor, such as MoS2. The carrier lifetime of exciton emission in monolayer phosphorene was measured to be ~220 ps, which is comparable to those in other 2D TMD semiconductors. Our results open new avenues for exploring fundamental phenomena and novel optoelectronic applications using monolayer phosphorene.

Continued destruction of Earth’s plant life places humankind in jeopardy, say researchers

Earth-space battery. The planet is a positive charge of stored chemical energy (cathode) in the form of fossil and nuclear fuels and biomass. As this energy is dissipated by humans, it eventually radiates as heat toward the chemical equilibrium of deep space (anode). The battery is rapidly discharging without replenishment. (credit: John R. Schramski et al./PNAS)

Unless humans slow the destruction of Earth’s declining supply of plant life, civilization as we know it may become unsustainable, according to a paper published recently by University of Georgia researchers in the Proceedings of the National Academy of Sciences.

“You can think of the Earth like a battery that has been charged very slowly over billions of years,” said the study’s lead author, John Schramski, an associate professor in UGA’s College of Engineering. “The sun’s energy is stored in plants and fossil fuels, but humans are draining energy much faster than it can be replenished.”

Number of years of phytomass food potentially available to feed the global human population (credit: John R. Schramski et al./PNAS)

Earth was once a barren landscape devoid of life, he explained, and it was only after billions of years that simple organisms evolved the ability to transform the sun’s light into energy. This eventually led to an explosion of plant and animal life that bathed the planet with lush forests and extraordinarily diverse ecosystems.

The study’s calculations are grounded in the fundamental principles of thermodynamics, a branch of physics concerned with the relationship between heat and mechanical energy. Chemical energy is stored in plants, or biomass, which is used for food and fuel, but which is also destroyed to make room for agriculture and expanding cities.

Scientists estimate that the Earth contained approximately 1,000 billion tons of carbon in living biomass 2,000 years ago. Since that time, humans have reduced that amount by almost half. It is estimated that just over 10 percent of that biomass was destroyed in just the last century.

“If we don’t reverse this trend, we’ll eventually reach a point where the biomass battery discharges to a level at which Earth can no longer sustain us,” Schramski said.

Major causes: deforestation, large-scale farming, population growth

Working with James H. Brown from the University of New Mexico and with UGA’s David Gattie, an associate professor in the College of Engineering, Schramski found that the vast majority of losses come from deforestation, hastened by the advent of large-scale mechanized farming and the need to feed a rapidly growing population. As more biomass is destroyed, the planet has less stored energy, which is needed to maintain Earth’s complex food webs and biogeochemical balances.

NASA Earth Observatory biomass map of the U.S. by Robert Simmon, generated from the National Biomass and Carbon Dataset (NBCD) assembled by scientists at the Woods Hole Research Center

“As the planet becomes less hospitable and more people depend on fewer available energy options, their standard of living and very survival will become increasingly vulnerable to fluctuations, such as droughts, disease epidemics and social unrest,” Schramski said.

Even if human beings do not go extinct, should biomass drop below sustainable thresholds, the population will decline drastically, and people will be forced to return to life as hunter-gatherers or simple horticulturalists, according to the paper.

“I’m not an ardent environmentalist; my training and my scientific work are rooted in thermodynamics,” Schramski said. “These laws are absolute and incontrovertible; we have a limited amount of biomass energy available on the planet, and once it’s exhausted, there is absolutely nothing to replace it.”

Schramski and his collaborators are hopeful that recognition of the importance of biomass, elimination of its destruction and increased reliance on renewable energy will slow the steady march toward an uncertain future, but the measures required to stop that progression may have to be drastic.

The model does not take into account potential future breakthroughs in more efficient biomass use and alternate energy systems.


Abstract of Human domination of the biosphere: Rapid discharge of the earth-space battery foretells the future of humankind

Earth is a chemical battery where, over evolutionary time with a trickle-charge of photosynthesis using solar energy, billions of tons of living biomass were stored in forests and other ecosystems and in vast reserves of fossil fuels. In just the last few hundred years, humans extracted exploitable energy from these living and fossilized biomass fuels to build the modern industrial-technological-informational economy, to grow our population to more than 7 billion, and to transform the biogeochemical cycles and biodiversity of the earth. This rapid discharge of the earth’s store of organic energy fuels the human domination of the biosphere, including conversion of natural habitats to agricultural fields and the resulting loss of native species, emission of carbon dioxide and the resulting climate and sea level change, and use of supplemental nuclear, hydro, wind, and solar energy sources. The laws of thermodynamics governing the trickle-charge and rapid discharge of the earth’s battery are universal and absolute; the earth is only temporarily poised a quantifiable distance from the thermodynamic equilibrium of outer space. Although this distance from equilibrium is comprised of all energy types, most critical for humans is the store of living biomass. With the rapid depletion of this chemical energy, the earth is shifting back toward the inhospitable equilibrium of outer space with fundamental ramifications for the biosphere and humanity. Because there is no substitute or replacement energy for living biomass, the remaining distance from equilibrium that will be required to support human life is unknown.

A jet engine powered by lasers and nuclear explosions?

Lasers vaporize radioactive material and cause a fusion reaction — in effect, a small thermonuclear explosion (credit: Patent Yogi/YouTube)

The U.S. Patent and Trademark Office has awarded a patent (US 9,068,562) to Boeing engineers and scientists for a laser- and nuclear-driven airplane engine.

“A stream of pellets containing nuclear material such as deuterium or tritium is fed into a hot-spot within a thruster of the aircraft,” Patent Yogi explains. “Then multiple high powered laser beams are all focused onto the hot-spot. The pellet is instantly vaporized and the high temperature causes a nuclear fusion reaction. In effect, it causes a tiny nuclear explosion that scatters atoms and high energy neutrons in all directions. This flow of material is concentrated to exit out of the thruster thus propelling the aircraft forward with great force.

“And this is where Boeing has done something extremely clever. The inner walls of the thruster are coated with a fissile material like Uranium-238 that undergoes nuclear fission upon being struck by the high energy neutrons. This releases enormous energy in the form of heat. A coolant is circulated along the inner walls to pick up this heat and power a turbine which in turn generates huge amounts of electric power. And guess what this electric power is used for? To power the same lasers that created the electric power! In effect, this spacecraft is self-powered with virtually no external energy needed.

“Soon, tiny nuclear bombs exploding inside a plane may be business as usual.”

An artist’s conception of the NASA reference design for the Project Orion spacecraft powered by nuclear propulsion (credit: NASA)

The basic concept was initially proposed by physicist Freeman Dyson in his Project Orion concept in 1957 and is described in George Dyson’s book Project Orion: The Atomic Spaceship 1957–1965.