Most Earth-like worlds have yet to be born, says new NASA study

This is an artist’s impression of innumerable Earth-like planets that have yet to be born over the next trillion years in the evolving universe (credit: NASA, ESA, and G. Bacon (STScI); Science: NASA, ESA, P. Behroozi and M. Peeples (STScI))

When our solar system was born 4.6 billion years ago, only eight percent of the potentially habitable planets that will ever form in the universe existed, according to an assessment of data collected by NASA’s Hubble Space Telescope and Kepler space observatory and published today (Oct. 20) in an open-access paper in the Monthly Notices of the Royal Astronomical Society.


In related news, UCLA geochemists have found evidence that life probably existed on Earth at least 4.1 billion years ago, which is 300 million years earlier than previous research suggested. The finding implies that life in the universe could be abundant, said Mark Harrison, co-author of the study and a professor of geochemistry at UCLA. The study was published Monday (Oct. 19) in the online early edition of the journal Proceedings of the National Academy of Sciences.


The data show that the universe was making stars at a fast rate 10 billion years ago, but only a small fraction of the universe’s hydrogen and helium gas was involved. Today, star birth is happening at a much slower rate, but so much gas is left over from the big bang that the universe will keep making stars and planets for a very long time to come.

A billion Earth-sized worlds

Based on the survey, scientists predict that there should already be 1 billion Earth-sized worlds in the Milky Way galaxy. That estimate skyrockets when you include the other 100 billion galaxies in the observable universe.
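
As a rough sanity check on those numbers (my arithmetic, not a calculation from the paper), multiplying the per-galaxy estimate by the number of galaxies reproduces the roughly 10^20 Earth-like planets quoted in the abstract below:

```python
# Back-of-the-envelope check of the press-release numbers (not the paper's own calculation)
earthlike_per_galaxy = 1e9       # ~1 billion Earth-sized worlds predicted for the Milky Way
galaxies_in_universe = 1e11      # ~100 billion galaxies in the observable universe
total_earthlike = earthlike_per_galaxy * galaxies_in_universe
print(f"~{total_earthlike:.0e} Earth-like planets")   # ~1e+20, matching the abstract's ~10^20
```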

Kepler’s planet survey indicates that Earth-sized planets in a star’s habitable zone — the range of distances at which liquid water could pool on a planet’s surface — are ubiquitous in our galaxy. This leaves plenty of opportunity for untold more Earth-sized planets in the habitable zone to arise in the future — the last star isn’t expected to burn out until 100 trillion years from now.

The researchers say that future Earths are more likely to appear inside giant galaxy clusters and also in dwarf galaxies, which have yet to use up all their gas for building stars and accompanying planetary systems. By contrast, our Milky Way galaxy has used up much more of the gas available for future star formation.

One advantage of our civilization having arisen early in the evolution of the universe is that we can use powerful telescopes like Hubble to trace our lineage from the big bang through the early evolution of galaxies.

Regrettably, the observational evidence for the big bang and cosmic evolution, encoded in light and other electromagnetic radiation, will be all but erased 1 trillion years from now, due to the runaway expansion of space. Any far-future civilizations that might arise will have little way of knowing how, or whether, the universe began and evolved.


Abstract of On The History and Future of Cosmic Planet Formation

We combine constraints on galaxy formation histories with planet formation models, yielding the Earth-like and giant planet formation histories of the Milky Way and the Universe as a whole. In the Hubble volume (10¹³ Mpc³), we expect there to be ∼10²⁰ Earth-like and ∼10²⁰ giant planets; our own galaxy is expected to host ∼10⁹ and ∼10¹⁰ Earth-like and giant planets, respectively. Proposed metallicity thresholds for planet formation do not significantly affect these numbers. However, the metallicity dependence for giant planets results in later typical formation times and larger host galaxies than for Earth-like planets. The Solar system formed at the median age for existing giant planets in the Milky Way, and consistent with past estimates, formed after 80 per cent of Earth-like planets. However, if existing gas within virialized dark matter haloes continues to collapse and form stars and planets, the Universe will form over 10 times more planets than currently exist. We show that this would imply at least a 92 per cent chance that we are not the only civilization the Universe will ever have, independent of arguments involving the Drake equation.


Abstract of Potentially biogenic carbon preserved in a 4.1 billion-year-old zircon

Evidence for carbon cycling or biologic activity can be derived from carbon isotopes, because a high ¹²C/¹³C ratio is characteristic of biogenic carbon due to the large isotopic fractionation associated with enzymatic carbon fixation. The earliest materials measured for carbon isotopes at 3.8 Ga are isotopically light, and thus potentially biogenic. Because Earth’s known rock record extends only to ∼4 Ga, earlier periods of history are accessible only through mineral grains deposited in later sediments. We report ¹²C/¹³C of graphite preserved in 4.1-Ga zircon. Its complete encasement in crack-free, undisturbed zircon demonstrates that it is not contamination from more recent geologic processes. Its ¹²C-rich isotopic signature may be evidence for the origin of life on Earth by 4.1 Ga.

Affordable camera reveals hidden details invisible to the naked eye

HyperFrames taken with HyperCam predicted the relative ripeness of 10 different fruits with 94 percent accuracy, compared with only 62 percent for a typical RGB (visible light) camera (credit: University of Washington)

HyperCam, an affordable “hyperspectral” (sees beyond the visible range) camera technology being developed by the University of Washington and Microsoft Research, may enable consumers of the future to use a cell phone to tell which piece of fruit is perfectly ripe or if a work of art is genuine.

The technology uses both visible and invisible near-infrared light to “see” beneath surfaces and capture unseen details. This type of camera, typically used in industrial applications, can cost from several thousand to tens of thousands of dollars.

In a paper presented at the UbiComp 2015 conference, the team detailed a hardware solution that costs roughly $800, or potentially as little as $50 to add to a mobile phone camera. It illuminates a scene with 17 different wavelengths and generates an image for each. They also developed intelligent software that easily finds “hidden” differences between what the hyperspectral camera captures and what can be seen with the naked eye.
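
The paper does not include code, but the idea can be sketched roughly as follows: capture one frame per illumination wavelength, then rank the frames by how much they differ from an ordinary RGB view so that the most revealing ones surface first. This is a conceptual sketch with hypothetical function and variable names, not the UW/Microsoft implementation.

```python
import numpy as np

def rank_bands_by_novelty(cube, rgb_gray):
    """Rank hyperspectral frames by dissimilarity from a normal-camera view.

    cube:     array of shape (17, H, W), one frame per illumination wavelength
    rgb_gray: array of shape (H, W), grayscale rendering of the RGB image
    """
    g = (rgb_gray - rgb_gray.mean()) / (rgb_gray.std() + 1e-9)
    scores = []
    for band in cube:
        b = (band - band.mean()) / (band.std() + 1e-9)
        corr = float(np.mean(b * g))      # normalized correlation with the RGB view
        scores.append(1.0 - abs(corr))    # low correlation = more "hidden" detail
    return np.argsort(scores)[::-1]       # indices of the most-different bands first

# Toy usage with random data standing in for real captures
cube = np.random.rand(17, 64, 64)
rgb_gray = np.random.rand(64, 64)
print(rank_bands_by_novelty(cube, rgb_gray)[:3])
```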

In one test, the team took hyperspectral images of 10 different fruits, from strawberries to mangoes to avocados, over the course of a week. The HyperCam images predicted the relative ripeness of the fruits with 94 percent accuracy, compared with only 62 percent for a typical camera.

The HyperCam system was also able to differentiate between hand images of users with 99 percent accuracy. That can aid in everything from gesture recognition to biometrics to distinguishing between two different people playing the same video game.

“It’s not there yet, but the way this hardware was built you can probably imagine putting it in a mobile phone,” said Shwetak Patel, Washington Research Foundation Endowed Professor of Computer Science & Engineering and Electrical Engineering at the UW.

Compared to an image taken with a normal camera (left), HyperCam images (right) reveal detailed vein and skin texture patterns that are unique to each individual (credit: University of Washington)

How it works

Hyperspectral imaging is used today in everything from satellite imaging and energy monitoring to infrastructure and food safety inspections, but the technology’s high cost has limited its use to industrial or commercial purposes. Near-infrared cameras, for instance, can reveal whether crops are healthy. Thermal infrared cameras can visualize where heat is escaping from leaky windows or an overloaded electrical circuit.
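
One concrete example of how near-infrared imaging reveals plant health is the normalized difference vegetation index (NDVI), a standard remote-sensing measure; the snippet below is illustrative only and is not taken from the HyperCam paper.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: healthy vegetation reflects strongly
    in the near-infrared and absorbs red light, so higher NDVI ~ healthier plants."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)

# Toy usage with random reflectance maps standing in for real NIR and red bands
nir_band = np.random.rand(64, 64)
red_band = np.random.rand(64, 64)
print(float(ndvi(nir_band, red_band).mean()))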

HyperCam is a low-cost hyperspectral camera developed by UW and Microsoft Research that reveals details that are difficult or impossible to see with the naked eye (credit: University of Washington)

One challenge in hyperspectral imaging is sorting through the sheer volume of frames produced. The UW software analyzes the images and finds ones that are most different from what the naked eye sees, essentially zeroing in on ones that the user is likely to find most revealing.

“It mines all the different possible images and compares it to what a normal camera or the human eye will see and tries to figure out what scenes look most different,” said lead author Mayank Goel.

“Next research steps will include making it work better in bright light and making the camera small enough to be incorporated into mobile phones and other devices,” he said.


Mayank Goel | HyperCam: HyperSpectral Imaging for Ubiquitous Computing Applications


Abstract of HyperCam: hyperspectral imaging for ubiquitous computing applications

Emerging uses of imaging technology for consumers cover a wide range of application areas from health to interaction techniques; however, typical cameras primarily transduce light from the visible spectrum into only three overlapping components of the spectrum: red, blue, and green. In contrast, hyperspectral imaging breaks down the electromagnetic spectrum into more narrow components and expands coverage beyond the visible spectrum. While hyperspectral imaging has proven useful as an industrial technology, its use as a sensing approach has been fragmented and largely neglected by the UbiComp community. We explore an approach to make hyperspectral imaging easier and bring it closer to the end-users. HyperCam provides a low-cost implementation of a multispectral camera and a software approach that automatically analyzes the scene and provides a user with an optimal set of images that try to capture the salient information of the scene. We present a number of use-cases that demonstrate HyperCam’s usefulness and effectiveness.

Hybrid bio-robotic system models physics of human leg locomotion

Schematic of bio-robotic modeling system (credit: Benjamin D. Robertson and Gregory S. Sawicki/PNAS)

North Carolina State University (NC State) researchers have developed a bio-inspired system that models how human leg locomotion works, by using a computer-controlled nerve stimulator (acting as the spinal cord) to activate a biological muscle-tendon.

The findings could help design robotic devices that begin to merge human and machine to assist human locomotion, serving as prosthetic systems for people with mobility impairments or exoskeletons for increasing the abilities of able-bodied individuals.

The model is based on the natural spring-like physics (mass, stiffness, and leverage) of the ankle’s primary muscle-tendon unit (using a bullfrog’s muscle). The system uses a feedback-controlled servomotor to simulate the inertial and gravitational environment of terrestrial gait.

Tuning for natural resonance

The research showed that the natural resonance* of the system is a likely mechanism behind springy leg behavior during locomotion, according to Gregory Sawicki, associate professor in the joint department of biomedical engineering at NC State and the University of North Carolina at Chapel Hill and co-author of a paper on the work published in Proceedings of the National Academy of Sciences.

In this case, the electrical system — the body’s nervous system — drives the mechanical system (the leg’s muscle-tendon unit) at a frequency that provides maximum power output.

The researchers found that by matching the stimulation frequency to the natural resonance frequency of the passive biomechanical system, muscle-tendon interactions (resulting in spring-like behavior) occur naturally and do not require closed-loop neural control — simplifying system design.

“In locomotion, resonance comes from tuning the interaction between the nervous system and the leg so they work together,” said Sawicki. “It turns out that if I know the mass, leverage, and stiffness of a muscle-tendon unit, I can tell you exactly how often I should stimulate it to get resonance in the form of spring-like, elastic behavior.”
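
As a point of reference, the textbook relation behind that statement (not a formula quoted from the paper) is the natural frequency of a simple mass-spring system:

```latex
f_0 \;=\; \frac{1}{2\pi}\sqrt{\frac{k_\mathrm{eff}}{m_\mathrm{eff}}}
```

Here k_eff is the series stiffness of the muscle-tendon unit and m_eff is the load it drives, both rescaled by the leverage (moment arm) of the joint; stimulating the muscle at a rate near f_0 lets the tendon store and return elastic energy with little extra muscle work.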

“In the end, we found that the same simple underlying principles that govern resonance in simple mechanical systems also apply to these extraordinarily complicated physiological systems,” said Temple University post-doctoral researcher Ben Robertson, corresponding author of the paper.

“This outcome points to mechanical resonance as an underlying principle governing muscle-tendon interactions and provides a physiology-based framework for understanding how mechanically simple elastic limb behavior may emerge from a complex biological system comprised of many simultaneously tuned muscle-tendons within the lower limb,” the researchers conclude in the paper.

* NC State biomedical engineer Greg Sawicki likened resonance tuning to interacting with a slinky toy. “When you get it oscillating well, you hardly have to move your hand — it’s the timing of the interaction forces that matters.”


Abstract of Unconstrained muscle-tendon workloops indicate resonance tuning as a mechanism for elastic limb behavior during terrestrial locomotion

In terrestrial locomotion, there is a missing link between observed spring-like limb mechanics and the physiological systems driving their emergence. Previous modeling and experimental studies of bouncing gait (e.g., walking, running, hopping) identified muscle-tendon interactions that cycle large amounts of energy in series tendon as a source of elastic limb behavior. The neural, biomechanical, and environmental origins of these tuned mechanics, however, have remained elusive. To examine the dynamic interplay between these factors, we developed an experimental platform comprised of a feedback-controlled servo-motor coupled to a biological muscle-tendon. Our novel motor controller mimicked in vivo inertial/gravitational loading experienced by muscles during terrestrial locomotion, and rhythmic patterns of muscle activation were applied via stimulation of intact nerve. This approach was based on classical workloop studies, but avoided predetermined patterns of muscle strain and activation—constraints not imposed during real-world locomotion. Our unconstrained approach to position control allowed observation of emergent muscle-tendon mechanics resulting from dynamic interaction of neural control, active muscle, and system material/inertial properties. This study demonstrated that, despite the complex nonlinear nature of musculotendon systems, cyclic muscle contractions at the passive natural frequency of the underlying biomechanical system yielded maximal forces and fractions of mechanical work recovered from previously stored elastic energy in series-compliant tissues. By matching movement frequency to the natural frequency of the passive biomechanical system (i.e., resonance tuning), muscle-tendon interactions resulting in spring-like behavior emerged naturally, without closed-loop neural control. This conceptual framework may explain the basis for elastic limb behavior during terrestrial locomotion.

Stephen Hawking on AI

Stephen Hawking on Last Week Tonight with John Oliver (credit: HBO)

Reddit published Stephen Hawking’s answers to questions in an “Ask me anything” (AMA) event on Thursday (Oct. 8).

Most of the answers focused on his concerns about the future of AI and its role in our future. Here are some of the most interesting ones. The full list is in this Wired article. (His answers to John Oliver below are funnier.)

The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.

There’s no consensus among AI researchers about how long it will take to build human-level AI and beyond, so please don’t trust anyone who claims to know for sure that it will happen in your lifetime or that it won’t happen in your lifetime. When it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right. We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence. It might take decades to figure out how to do this, so let’s start researching this today rather than the night before the first strong AI is switched on.

An AI that has been designed rather than evolved can in principle have any drives or goals. However, as emphasized by Steve Omohundro, an extremely intelligent future AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal. This can cause problems for humans whose resources get taken away.

If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.

Forbes offers a different opinion on the last answer.


HBO | Last Week Tonight with John Oliver: Stephen Hawking Interview

A new way to create spintronic magnetic information storage

A magnetized cobalt disk (red) placed atop a thin cobalt-palladium film (light purple background) can be made to confer its own ringed configuration of magnetic moments (orange arrows) to the film below, creating a skyrmion in the film (purple arrows). The skyrmion might be usable in computer data storage systems. (credit: Dustin Gilbert / NIST)

Exotic ring-shaped magnetic effects called “skyrmions*” could be the basis for a new type of nonvolatile magnetic computer data storage, replacing current hard-drive technology, according to a team of researchers at the National Institute of Standards and Technology (NIST) and several universities.

Skyrmions have the advantage of operating at magnetic fields that are several orders of magnitude weaker, but until now they have worked only at very low temperatures. The research breakthrough was the discovery of a practical way to create and access magnetic skyrmions under ambient, room-temperature conditions.

The skyrmion effect refers to extreme conditions in which certain magnetic materials can develop spots where the magnetic moments** curve and twist, forming a winding, ring-like configuration. To achieve that, the physicists placed arrays of tiny magnetized cobalt disks atop a thin film made of cobalt and palladium. The twisted configuration protects the skyrmions from outside influence, meaning the data they store would not be corrupted easily.

But “seeing” these skyrmion configurations underneath was a challenge. The team solved that by using neutrons to see through the disk.

That discovery has implications for spintronics (using magnetic spin to store data). “The advantage [with skyrmions] is that you’d need way less power to push them around than any other method proposed for spintronics,” said NIST’s Dustin Gilbert. “What we need to do next is figure out how to make them move around.”

Physicists at the University of California, Davis; University of Maryland, College Park; University of California, Santa Cruz; and Lawrence Berkeley National Laboratory were also involved in the study.

* Named after Tony Skyrme, the British physicist who proposed them.

** A quantity that determines the force a magnet can exert on electric currents and the torque that a magnetic field will exert on it.


Abstract of Realization of ground-state artificial skyrmion lattices at room temperature

The topological nature of magnetic skyrmions leads to extraordinary properties that provide new insights into fundamental problems of magnetism and exciting potentials for novel magnetic technologies. Prerequisite are systems exhibiting skyrmion lattices at ambient conditions, which have been elusive so far. Here, we demonstrate the realization of artificial Bloch skyrmion lattices over extended areas in their ground state at room temperature by patterning asymmetric magnetic nanodots with controlled circularity on an underlayer with perpendicular magnetic anisotropy (PMA). Polarity is controlled by a tailored magnetic field sequence and demonstrated in magnetometry measurements. The vortex structure is imprinted from the dots into the interfacial region of the underlayer via suppression of the PMA by a critical ion-irradiation step. The imprinted skyrmion lattices are identified directly with polarized neutron reflectometry and confirmed by magnetoresistance measurements. Our results demonstrate an exciting platform to explore room-temperature ground-state skyrmion lattices.

Invisibility cloak may enhance efficiency of solar cells

A special invisibility cloak redirects sunlight past solar-cell contacts to the active surface area of the solar cell (credit: Martin Schumann/KIT)

A new approach to increasing solar-cell panel efficiency using an “invisibility cloak” has been developed by scientists at Karlsruhe Institute of Technology (KIT) in Germany.

Up to one tenth of the surface area of solar cells is typically covered by “contact fingers” that extract current generated by solar cells. The fingers block some of the light from the active area of the solar cell, decreasing cell efficiency. By guiding the incident light around the contact fingers, the cloak layer makes the contact fingers nearly completely invisible, according to doctoral student Martin Schumann of the KIT Institute of Applied Physics, who conducted the experiments and simulations.

Coordinate transformations enabling invisible contacts on solar cells. The elongated metal contact can be arbitrarily shaped within the black region to make it invisible. (credit: Martin F. Schumann et al./Optica)

To achieve the cloaking effect, the scientists applied a polymer coating onto the solar cell and added grooves along the contact fingers; together, these refract incident light away from the contact fingers and toward the active surface area of the solar cell. They expect an efficiency increase of about 10 percent in follow-up tests.
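
A back-of-the-envelope estimate (mine, not KIT's) shows why the expected gain is on the order of ten percent: if a fraction f of the cell is shaded and the cloak recovers essentially all of that light, output rises by roughly f/(1 − f).

```python
# Rough estimate, assuming the cloak redirects essentially all light hitting the contacts
f = 0.10                        # up to one tenth of the cell area covered by contact fingers
relative_gain = f / (1 - f)     # extra light reaching the active area, relative to before
print(f"{relative_gain:.1%}")   # ~11.1%, in line with the ~10% increase the team expects
```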

The research was published Sept. 25 in an open-access article in the journal Optica.


Abstract of Cloaked contact grids on solar cells by coordinate transformations: designs and prototypes

Nontransparent contact fingers on the sun-facing side of solar cells represent optically dead regions which reduce the energy conversion per area. We consider two approaches for guiding the incident light around the contacts onto the active area. The first approach uses graded-index metamaterials designed by two-dimensional Schwarz–Christoffel conformal maps, and the second uses freeform surfaces designed by one-dimensional coordinate transformations of a point to an interval. We provide proof-of-principle demonstrators using direct laser writing of polymer structures on silicon wafers with opaque contacts. Freeform surfaces are amenable to mass fabrication and allow for complete recovery of the shadowing effect for all relevant incidence angles.

Pushing the resolution and exposure-time limits of lensless imaging

With “coherent diffraction imaging,” extreme ultraviolet light scatters off a sample and produces a diffraction pattern, which a computer then analyzes to reconstruct an image of the target material (credit: Dr. Michael Zürch, Friedrich Schiller University Jena, Germany)

Physicists at Friedrich Schiller University in Germany are pushing the boundaries of nanoscale imaging by shooting ultra-high-resolution, real-time images in extreme ultraviolet light — without lenses. The new method could be used to study everything from semiconductor chips to cancer cells, the scientists say.

They are improving a lensless imaging technique called “coherent diffraction imaging,” which has been around since the 1980s. To take a picture with this method, scientists fire an extreme ultraviolet laser or X-ray at a target. The light scatters off, and some of those photons interfere with one another and find their way onto a detector, creating a diffraction pattern.

Diffraction pattern of red laser beam (credit: Wisky/Wikipedia)

By analyzing that pattern, a computer then reconstructs the path those photons must have taken, which generates an image of the target material.
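
For readers curious how a computer can recover an image from intensities alone, here is a minimal sketch of iterative phase retrieval, the standard reconstruction approach in coherent diffraction imaging. It is illustrative only (simulated data, a known support mask, and hypothetical names), not the Jena group's code.

```python
import numpy as np

def error_reduction(measured_amplitude, support, n_iter=200, seed=0):
    """Classic error-reduction phase retrieval: alternate between enforcing the
    measured Fourier magnitudes and a real-space support/positivity constraint."""
    rng = np.random.default_rng(seed)
    phases = np.exp(2j * np.pi * rng.random(measured_amplitude.shape))
    obj = np.fft.ifft2(measured_amplitude * phases)            # random starting guess
    for _ in range(n_iter):
        F = np.fft.fft2(obj)
        F = measured_amplitude * np.exp(1j * np.angle(F))      # keep measured magnitudes
        obj = np.fft.ifft2(F)
        obj = np.where(support, np.clip(obj.real, 0, None), 0.0)  # support + positivity
    return obj

# Toy example: simulate a diffraction pattern from a small square "sample"
sample = np.zeros((64, 64)); sample[28:36, 28:36] = 1.0
measured_amplitude = np.abs(np.fft.fft2(sample))   # amplitude of the simulated pattern
support = np.zeros((64, 64), dtype=bool); support[20:44, 20:44] = True
recovered = error_reduction(measured_amplitude, support)
```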

But the quality of the images depends on the radiation source. Traditionally, researchers have used big, powerful X-ray beams like the one at the SLAC National Accelerator Laboratory, which can pump out lots of photons.

To make the process more accessible, researchers have developed smaller machines using coherent laser-like beams, which are cheaper but produce lower-quality images and require short focal lengths (similar to placing a specimen close to a microscope to boost the magnification) and long exposure times.

As in conventional photography, those constraints rule out large, real-time images.

Now, Michael Zürch and his research team have built an ultrafast laser that fires extreme UV photons 100 times faster than previous table-top machines and can snap an image at a resolution of 26 nanometers (the size of a blackline walnut virus) — almost the theoretical diffraction limit for the 33-nanometer UV light used. They were also able to capture real-time images at a rate of one per second at a reduced resolution of 80 nanometers.

The prospect of high-resolution, real-time imaging using a relatively low-cost, small setup could lead to all kinds of applications, Zürch said. Engineers could use this to hunt for tiny defects in semiconductor chips. Biologists could zoom in on the organelles that make up a cell. Eventually, he said, the researchers might be able to reach shorter exposure times and higher resolution levels.

The team will present their work at Frontiers in Optics, the Optical Society’s annual meeting and conference in San Jose, California on October 22, 2015.


New ‘stealth dark matter’ theory may explain mystery of the universe’s missing mass

This 3D map illustrates the large-scale distribution of dark matter, reconstructed from measurements of weak gravitational lensing by using the Hubble Space Telescope (credit: image courtesy of DOE/Lawrence Livermore National Laboratory)

A new theory that may explain why dark matter has evaded direct detection in Earth-based experiments has been developed by a team of Lawrence Livermore National Laboratory (LLNL) particle physicists known as the Lattice Strong Dynamics Collaboration.

The group has combined theoretical and computational physics techniques and used the Laboratory’s massively parallel 2-petaflop Vulcan supercomputer to devise a new model of dark matter. The model identifies today’s dark matter as naturally “stealthy.” But in the extremely high-temperature plasma conditions that pervaded the early universe, it would have been easy to see dark matter via interactions with ordinary matter, the model shows.

A balancing act in the early universe

“These interactions in the early universe are important because ordinary and dark matter abundances today are strikingly similar in size, suggesting this occurred because of a balancing act performed between the two before the universe cooled,” said Pavlos Vranas of LLNL, one of the authors of a paper in an upcoming edition of the journal Physical Review Letters.

Dark matter makes up 83 percent of all matter in the universe and does not interact directly with the electromagnetic force or the strong and weak nuclear forces. Light does not bounce off of it, and ordinary matter goes through it with only the feeblest of interactions. It is essentially invisible, yet its interactions with gravity produce striking effects on the movement of galaxies and galactic clusters, leaving little doubt of its existence.

The key to stealth dark matter’s split personality is its compositeness and the miracle of confinement. Like quarks in a neutron, at high temperatures these electrically charged constituents interact with nearly everything. But at lower temperatures, they bind together to form an electrically neutral composite particle. Unlike a neutron, which is bound by the ordinary strong interaction of quantum chromodynamics (QCD), the stealthy neutron would have to be bound by a new and yet-unobserved strong interaction, a dark form of QCD.

CERN experiments may test the stealth dark matter theory

“It is remarkable that a dark matter candidate just several hundred times heavier than the proton could be a composite of electrically charged constituents and yet have evaded direct detection so far,” Vranas said.

Similar to protons, stealth dark matter is stable and does not decay over cosmic times. However, like QCD, it produces a large number of other nuclear particles that decay shortly after their creation. These particles can have net electric charge but would have decayed away a long time ago. In a particle collider with sufficiently high energy (such as the Large Hadron Collider in Switzerland), these particles can be produced again for the first time since the early universe. They could generate unique signatures in the particle detectors because they could be electrically charged.

“Underground direct detection experiments or experiments at the Large Hadron Collider may soon find evidence of (or rule out) this new stealth dark matter theory,” Vranas said.

Other collaborators include researchers from Yale University, Boston University, Institute for Nuclear Theory, Argonne Leadership Computing Facility, University of California, Davis, University of Oregon, University of Colorado, Brookhaven National Laboratory, and Syracuse University. The DOE Office of Science High Energy Theory and the High Energy Physics Lattice SciDAC program supported this research.


Abstract of Direct Detection of Stealth Dark Matter through Electromagnetic Polarizability

We calculate the spin-independent scattering cross section for direct detection that results from the electromagnetic polarizability of a composite scalar baryon dark matter candidate — “Stealth Dark Matter”, that is based on a dark SU(4) confining gauge theory. In the nonrelativistic limit, electromagnetic polarizability proceeds through a dimension-7 interaction leading to a very small scattering cross section for dark matter with weak scale masses. This represents a lower bound on the scattering cross section for composite dark matter theories with electromagnetically charged constituents. We carry out lattice calculations of the polarizability for the lightest baryons in SU(3) and SU(4) gauge theories using the background field method on quenched configurations. We find the polarizabilities of SU(3) and SU(4) to be comparable (within about 50%) normalized to the baryon mass, which is suggestive for extensions to larger SU(N) groups. The resulting scattering cross sections with a xenon target are shown to be potentially detectable in the dark matter mass range of about 200-700 GeV, where the lower bound is from the existing LUX constraint while the upper bound is the coherent neutrino background. Significant uncertainties in the cross section remain due to the more complicated interaction of the polarizablity operator with nuclear structure, however the steep dependence on the dark matter mass, 1/m_B^6, suggests the observable dark matter mass range is not appreciably modified. We briefly highlight collider searches for the mesons in the theory as well as the indirect astrophysical effects that may also provide excellent probes of stealth dark matter.

How to catch a molecule

With a nano-ring-based toroidal trap, cold polar molecules near the gray shaded surface approaching the central region may be trapped within a nanometer-scale volume (credit: ORNL)

In a paper published in Physical Review A, Oak Ridge National Laboratory and University of Tennessee physicists describe conceptually how they may be able to trap and exploit a molecule’s energy to advance a number of fields.

“A single molecule has many degrees of freedom, or ways of expressing its energy and dynamics, including vibrations, rotations and translations,” said Ali Passian of Oak Ridge National Lab. “For years, physicists have searched for ways to take advantage of these molecular states, including how they could be used in high-precision instruments or as an information storage device for applications such as quantum computing.”

It’s a trap!

Catching a molecule with minimal disturbance is not an easy task, considering its size — about 1 nanometer — but this paper proposes a method that may overcome that obstacle.

When interacting with laser light, the toroidal (ring-shaped) nanostructure can trap the slower molecules at its center. That’s because the nano-trap, which can be made of gold using conventional nanofabrication techniques, creates a highly localized force field surrounding the molecules. The team envisions using scanning probe microscopy techniques, which can measure extremely small forces, to access individual nano-traps.
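
For orientation, the standard quasistatic result for a polarizable particle in a non-uniform optical field (a textbook relation, not a formula taken from the paper) is that the induced dipole is pulled toward regions of high field intensity:

```latex
U(\mathbf{r}) = -\tfrac{1}{2}\,\alpha \left\langle |\mathbf{E}(\mathbf{r})|^{2} \right\rangle ,
\qquad
\mathbf{F}(\mathbf{r}) = -\nabla U = \tfrac{1}{2}\,\alpha\, \nabla \left\langle |\mathbf{E}(\mathbf{r})|^{2} \right\rangle
```

For a positive, real polarizability α, the force points toward the field maximum that the illuminated nanoring concentrates near its center, which is what localizes the slow molecules there.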

“Once trapped, we can interrogate the molecules for their spectroscopic and electromagnetic properties and study them in isolation without disturbance from the neighboring molecules,” Passian said.

Previous demonstrations of trapping molecules have relied on large systems to confine charged particles such as single ions. Next, the researchers plan to build actual nanotraps and conduct experiments to determine the feasibility of fabricating a large number of traps on a single chip.

“If successful, these experiments could help enable information storage and processing devices that greatly exceed what we have today, thus bringing us closer to the realization of quantum computers,” Passian said.


Abstract of Toroidal nanotraps for cold polar molecules

Electronic excitations in metallic nanoparticles in the optical regime that have been of great importance in surface-enhanced spectroscopy and emerging applications of molecular plasmonics, due to control and confinement of electromagnetic energy, may also be of potential to control the motion of nanoparticles and molecules. Here, we propose a concept for trapping polarizable particles and molecules using toroidal metallic nanoparticles. Specifically, gold nanorings are investigated for their scattering properties and field distribution to computationally show that the response of these optically resonant particles to incident photons permit the formation of a nanoscale trap when proper aspect ratio, photon wavelength, and polarization are considered. However, interestingly the resonant plasmonic response of the nanoring is shown to be detrimental to the trap formation. The results are in good agreement with analytic calculations in the quasistatic limit within the first-order perturbation of the scalar electric potential. The possibility of extending the single nanoring trapping properties to two-dimensional arrays of nanorings is suggested by obtaining the field distribution of nanoring dimers and trimers.