How to use laser cloaking to hide Earth from remote detection by aliens

A 22W laser used for adaptive optics on the Very Large Telescope in Chile. A suite of similar lasers could be used to cloak our planet’s transit around the Sun. (credit: ESO/G. Hüdepohl)

We could conceal the Earth from observation by an advanced extraterrestrial civilization by shining massive laser beams at a specific star where aliens might be located, masking our planet during its transit of the Sun, suggest two astronomers at Columbia University in an open-access paper in Monthly Notices of the Royal Astronomical Society.

The idea comes from the NASA Kepler mission’s search method for exoplanets (planets around other stars), which looks for transits (a planet crossing in front of a star) — identified by a tiny decrease in the star’s brightness.*

To detect exoplanets, NASA’s Kepler measures the light from a star. When a planet crosses in front of a star, the event is called a transit. The planet is usually too small to see, but it can produce a small change in a star’s brightness of about 1/10,000 (100 parts per million), lasting for 2 to 16 hours. (credit: NASA Ames)

Kepler has confirmed the existence of more than 1,000 planets using this technique, with tens of these worlds similar in size to the Earth. Kipping and Teachey speculate that alien scientists could use this approach to locate Earth, which sits in the “habitable zone” of our Sun (a distance where the temperature is right for liquid water), making it a promising place for life and therefore of potential interest to aliens.*

How to cloak our Earth from aliens

Columbia Professor David Kipping and graduate student Alex Teachey suggest that transits could be masked by controlled laser emission, with the beam directed at the star where the suspected aliens might be located. When the planet’s transit takes place, the laser would be switched on to compensate for the dip in light.**

Illustration (not to scale) of the transit cloaking device. To cloak the Earth, a laser beam (orange) is fired from the night side of the Earth (blue circle) toward a target star (“receiver”) during the transit. (credit: David M. Kipping and Alex Teachey/MNRAS)

According to the authors, emitting a continuous 30 MW laser for about 10 hours, once a year, would be enough to eliminate the transit signal, at least in the visible-light range. The energy needed is comparable to that collected by the International Space Station solar array in a year. A chromatic (multi-wavelength) cloak, effective at all solar wavelengths, is more challenging, and would need a large array of tuneable lasers with a total power of 250 MW.***
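As a back-of-the-envelope check of that claim, here is a minimal sketch in Python (the 30 MW and 10 hours are from the article; the ~100 kW ISS solar-array output is an assumed round number for illustration, not a figure from the paper):

```python
# Energy budget for the monochromatic cloak described above.
laser_power_w = 30e6        # 30 MW peak laser power (from the article)
transit_hours = 10          # duration of one Earth transit, once per year

energy_per_year_j = laser_power_w * transit_hours * 3600
print(f"Cloak energy per year: {energy_per_year_j / 1e12:.2f} TJ "
      f"({energy_per_year_j / 3.6e9:.0f} MWh)")

# Rough comparison: energy a ~100 kW solar array collects in a year (assumed figure)
iss_array_power_w = 100e3
iss_energy_per_year_j = iss_array_power_w * 365.25 * 24 * 3600
print(f"Assumed ISS-class array, one year: {iss_energy_per_year_j / 1e12:.2f} TJ")
```

The two totals come out within a small factor of each other (roughly 1 TJ versus 3 TJ under these assumptions), consistent with the authors’ comparison.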

“Alternatively, we could cloak only the atmospheric signatures associated with biological activity, such as oxygen, which is achievable with a peak laser power of just 160 kW per transit. To another civilization, this should make the Earth appear as if life never took hold on our world”, said Teachey.


Cool Worlds Lab/Columbia University | A Cloaking Device for Planets

Broadcasting our existence: the METI (message SETI) approach

The lasers could also be used to broadcast our existence by modifying the light from the Sun during a transit to make it obviously artificial, such as modifying the normal “U” transit light curve (the intensity vs. time pattern during transit). The authors suggest that we could even transmit information by modulating the laser beams at the same time, providing a way to send messages to aliens.

However, several prominent scientists, including Stephen Hawking, have cautioned against humanity broadcasting our presence to intelligent life on other planets. Hawking and others are concerned that extraterrestrials might wish to take advantage of the Earth’s resources, and that their visit, rather than being benign, could be as devastating as when Europeans first traveled to the Americas. (See Are you ready for contact with extraterrestrial intelligence? and METI: should we be shouting at the cosmos?)

Perhaps aliens have had the same thought. The two astronomers propose that the Search for Extraterrestrial Intelligence (SETI), which currently looks mainly for alien radio signals, could be broadened to search for artificial star transits. Such signatures could also be readily searched in the NASA archival data of Kepler transit surveys.

* Once detected, the planet’s orbital size can be calculated from the period (how long it takes the planet to orbit once around the star) and the mass of the star using Kepler’s Third Law of planetary motion. The size of the planet is found from the depth of the transit (how much the brightness of the star drops) and the size of the star. From the orbital size and the temperature of the star, the planet’s characteristic temperature can be calculated. From this, the question of whether or not the planet is habitable (not necessarily inhabited) can be answered. — Kepler and K2, NASA Mission Overview
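For readers who want the arithmetic, here is a sketch of those two standard relations applied to an Earth-Sun analogue (textbook constants and illustrative values, not numbers from the Kepler pipeline):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_star = 1.989e30      # stellar mass (Sun), kg
R_star = 6.957e8       # stellar radius (Sun), m
P = 365.25 * 86400     # orbital period, s (Earth-like)

# Kepler's Third Law: a^3 = G * M * P^2 / (4 * pi^2)
a = (G * M_star * P**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"Orbital semi-major axis: {a / 1.496e11:.3f} AU")   # ~1 AU

# Transit depth = (R_planet / R_star)^2, so R_planet = R_star * sqrt(depth)
depth = 8.4e-5         # ~84 ppm dip, roughly the "1/10,000" quoted above
R_planet = R_star * math.sqrt(depth)
print(f"Inferred planet radius: {R_planet / 6.371e6:.2f} Earth radii")
```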

** It’s not clear what indicators might lead to such a suspicion, aside from a confirmed SETI transmission detection. It would be interesting to calculate the required number and locations of lasers, their operational schedule, and their power requirements for a worst-case scenario — assuming potential threats from certain types of stars, or from all stars — considering laser beam divergence angle, beam flux gradients, and the maximum distance of stars that could see Earth transit (only stars within about one degree of our ecliptic plane can), based on assumed maximum alien telescope resolving power.

[UPDATE 1/3/2016: Kipping correction: "within about one degree" and added "based on assumed maximum alien telescope resolving power"]

[UPDATE 1/3/2016: from Kipping, regarding beam divergence angle, flux gradients, and the primary focus of the paper: “Beam shaping, through the use of multiple beams, can produce effectively isotropic radiation within the beam width. Unless the target is very close, the beam width typically encompasses the entire alien solar system by the time it reaches, due to beam divergence. So we don’t even really need to know the position of the target planet that well (although we likely do anyway thanks to our detection methods). A common misunderstanding of our paper is to erroneously assume that we are advocating that humanity should build this for the Earth, but actually we are pointing out that if even our current technology can pull off a pretty effective cloak then other more advanced civilizations may be able to hide from us perfectly.”]

*** For example, a chromatic cloak covering the 0.6 to 5 µm range of the NIRSpec instrument planned for the James Webb Space Telescope would require approximately 6000 monochromatic lasers in the array.


Abstract of A Cloaking Device for Transiting Planets

The transit method is presently the most successful planet discovery and characterization tool at our disposal. Other advanced civilizations would surely be aware of this technique and appreciate that their home planet’s existence and habitability is essentially broadcast to all stars lying along their ecliptic plane. We suggest that advanced civilizations could cloak their presence, or deliberately broadcast it, through controlled laser emission. Such emission could distort the apparent shape of their transit light curves with relatively little energy, due to the collimated beam and relatively infrequent nature of transits. We estimate that humanity could cloak the Earth from Kepler-like broadband surveys using an optical monochromatic laser array emitting a peak power of ∼30 MW for ∼10 hours per year. A chromatic cloak, effective at all wavelengths, is more challenging requiring a large array of tunable lasers with a total power of ∼250 MW. Alternatively, a civilization could cloak only the atmospheric signatures associated with biological activity on their world, such as oxygen, which is achievable with a peak laser power of just ∼160 kW per transit. Finally, we suggest that the time of transit for optical SETI is analogous to the water-hole in radio SETI, providing a clear window in which observers may expect to communicate. Accordingly, we propose that a civilization may deliberately broadcast their technological capabilities by distorting their transit to an artificial shape, which serves as both a SETI beacon and a medium for data transmission. Such signatures could be readily searched in the archival data of transit surveys.

Lawrence Livermore National Laboratory and IBM build brain-inspired supercomputer

Lawrence Livermore’s new supercomputer system uses 16 IBM TrueNorth chips developed by IBM Research (credit: IBM Research)

Lawrence Livermore National Laboratory (LLNL) has purchased IBM Research’s supercomputing platform for deep-learning inference, based on 16 IBM TrueNorth neurosynaptic computer chips, to explore deep learning algorithms.

IBM says the scalable platform processing power is the equivalent of 16 million artificial “neurons” and 4 billion “synapses.” The brain-like neural-network design of the IBM Neuromorphic System can process complex cognitive tasks such as pattern recognition and integrated sensory processing far more efficiently than conventional chips, says IBM.

The technology represents a fundamental departure from computer design that has been prevalent for the past 70 years and could be incorporated in next-generation supercomputers able to perform at exascale speeds — 50 times faster than today’s most advanced petaflop (quadrillion floating point operations per second) systems.

Ultra-low-energy TrueNorth processor

IBM TrueNorth neuromorphic supercomputing processor chip (credit: IBM Research)

The TrueNorth processor chip was introduced in 2014 (see IBM launches functioning brain-inspired chip). It consists of 5.4 billion transistors wired together to create an array of 1 million digital “neurons” that communicate with one another via 256 million electrical “synapses.”

Like the human brain, neurosynaptic systems require significantly less electrical power and volume than conventional computers. The 16 TrueNorth chips consume the energy equivalent of only a tablet computer — 2.5 watts of power. At 0.8 volts, each chip consumes 70 milliwatts of power running in real time and delivers 46 billion synaptic operations per second.
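A quick sanity check on those figures (illustrative arithmetic only, using the numbers quoted in this article):

```python
# Totals and energy efficiency implied by the per-chip specs above.
chips = 16
neurons_per_chip = 1_000_000
synapses_per_chip = 256_000_000
power_per_chip_w = 0.070        # 70 mW per chip, per the article
ops_per_chip_per_s = 46e9       # 46 billion synaptic operations per second per chip

print(f"Total neurons:  {chips * neurons_per_chip / 1e6:.0f} million")
print(f"Total synapses: {chips * synapses_per_chip / 1e9:.3f} billion")

# Energy efficiency: synaptic operations per joule for one chip
print(f"Efficiency: {ops_per_chip_per_s / power_per_chip_w:.2e} synaptic ops per joule")
```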

TrueNorth was originally developed under the auspices of DARPA’s Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program in collaboration with Cornell University (see IBM simulates 530 billion neurons, 100 trillion synapses on supercomputer).

“The delivery of this advanced computing platform represents a major milestone as we enter the next era of cognitive computing,” said Dharmendra S. Modha, IBM Fellow, chief scientist, brain-inspired computing, IBM Research – Almaden. “Prior to design and fabrication, we simulated the IBM TrueNorth processor using LLNL’s Sequoia supercomputer. This collaboration will push the boundaries of brain-inspired computing to enable future systems that deliver unprecedented capability and throughput, while helping to minimize the capital, operating, and programming costs.”

Protecting the US nuclear stockpile

The new system will be used to explore new computing capabilities important to the National Nuclear Security Administration’s (NNSA) missions in cyber security, stewardship of the nation’s nuclear deterrent, and non-proliferation.

NNSA’s Advanced Simulation and Computing (ASC) program — a cornerstone of NNSA’s Stockpile Stewardship Program — will evaluate machine learning applications, deep learning algorithms, and architectures, and conduct general computing feasibility studies.

Researchers use optogenetic light to block tumor development

 

Setup for delivering spatio-temporally precise light stimulation of optogenetic proteins expressed in tumor-like structures induced in tadpole embryos. (credit: Brook T. Chernet et al./Oncotarget)

Tufts University biologists have demonstrated (using a frog model*) for the first time that it is possible to prevent tumors from forming (and to normalize tumors after they have formed) by using optogenetics (light) to control bioelectrical signalling among cells.

Light/bioelectric control of tumors

Virtually all healthy cells maintain a more negative voltage in the cell interior compared with the cell exterior. But the opening and closing of ion channels in the cell membrane can cause the voltage to become more positive (depolarizing the cell) or more negative (hyperpolarizing the cell). That makes it possible to detect tumors by their abnormal bioelectrical signature before they are otherwise apparent.
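For a sense of scale, a textbook Nernst-equation calculation (a generic illustration, not part of the Tufts study) shows how ion concentration differences set a negative interior voltage:

```python
import math

# Nernst equation: equilibrium potential set by a single ion species.
R = 8.314      # gas constant, J mol^-1 K^-1
T = 295.0      # temperature, K (roughly room temperature)
F = 96485.0    # Faraday constant, C mol^-1

def nernst(z, conc_out, conc_in):
    """Equilibrium potential (volts) for an ion of valence z."""
    return (R * T) / (z * F) * math.log(conc_out / conc_in)

# Typical K+ concentrations (mM): high inside, low outside -> negative interior
E_K = nernst(z=+1, conc_out=5.0, conc_in=140.0)
print(f"K+ equilibrium potential: {E_K * 1000:.0f} mV")   # roughly -85 mV
```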

The study was published online in an open-access paper in Oncotarget on March 16.

The use of light to control ion channels has been a ground-breaking tool in research on the nervous system and brain, but optogenetics had not yet been applied to cancer.

Optogenetics modulation of membrane voltage to control induced tumor-like structures. (Top) Tumor induced in tadpole embryo. (Bottom left) Control embryo not injected with light-sensitive protein is highly fluorescent, indicating relative depolarization. (Bottom right) Embryo injected with light-sensitive protein exhibits hyperpolarization, significantly lowering the incidence of tumor formation. Scale bar = 150 micrometers. (credit: Brook T. Chernet et al./Oncotarget)

The researchers first injected cells in Xenopus laevis (frog) embryos with RNA encoding a mutant RAS oncogene known to cause cancer-like growths.

The researchers then used blue light to activate positively charged ion channels, which induced an electric current that caused the cells to go from a cancer-like depolarized state to a normal, more negative polarized state. They did the same with a green light-activated proton pump, Archaerhodopsin (Arch). Activation of either agent significantly lowered the incidence of tumor formation and also increased the frequency with which tumors regressed into normal tissue.

“These electrical properties are not merely byproducts of oncogenic processes. They actively regulate the deviations of cells from their normal anatomical roles towards tumor growth and metastatic spread,” said senior and corresponding author Michael Levin, Ph.D., who holds the Vannevar Bush chair in biology and directs the Center for Regenerative and Developmental Biology at Tufts School of Arts and Sciences.

“Discovering new ways to specifically control this bioelectrical signaling could be an important path towards new biomedical approaches to cancer. This provides proof of principle for a novel class of therapies which use light to override the action of oncogenic mutations,” said Levin. “Using light to specifically target tumors would avoid subjecting the whole body to toxic chemotherapy or similar reagents.”

This work was supported by the G. Harold and Leila Y. Mathers Charitable Foundation.

* Frogs are a good model for basic science research into cancer because tumors in frogs and mammals share many of the same characteristics. These include rapid cell division, tissue disorganization, increased vascular growth, invasiveness and cells that have an abnormally positive internal electric voltage.


Abstract of Use of genetically encoded, light-gated ion translocators to control tumorigenesis

It has long been known that the resting potential of tumor cells is depolarized relative to their normal counterparts. More recent work has provided evidence that resting potential is not just a readout of cell state: it regulates cell behavior as well. Thus, the ability to control resting potential in vivo would provide a powerful new tool for the study and treatment of tumors, a tool capable of revealing living-state physiological information impossible to obtain using molecular tools applied to isolated cell components. Here we describe the first use of optogenetics to manipulate ion-flux mediated regulation of membrane potential specifically to prevent and cause regression of oncogene-induced tumors. Injection of mutant-KRAS mRNA induces tumor-like structures with many documented similarities to tumors, in Xenopus tadpoles. We show that expression and activation of either ChR2D156A, a blue-light activated cation channel, or Arch, a green-light activated proton pump, both of which hyperpolarize cells, significantly lowers the incidence of KRAS tumor formation. Excitingly, we also demonstrate that activation of co-expressed light-activated ion translocators after tumor formation significantly increases the frequency with which the tumors regress in a process called normalization. These data demonstrate an optogenetic approach to dissect the biophysics of cancer. Moreover, they provide proof-of-principle for a novel class of interventions, directed at regulating cell state by targeting physiological regulators that can over-ride the presence of mutations.

New 2D material could upstage graphene

The atoms in the new structure are arranged in a hexagonal pattern as in graphene, but that is where the similarity ends. The three elements forming the new material all have different sizes; the bonds connecting the atoms are also different. As a result, the sides of the hexagons formed by these atoms are unequal, unlike in graphene. (credit: Madhu Menon)

A new one-atom-thick flat material made up of silicon, boron, and nitrogen can function as a conductor or semiconductor (unlike graphene) and could upstage graphene and advance digital technology, say scientists at the University of Kentucky, Daimler in Germany, and the Institute for Electronic Structure and Laser (IESL) in Greece.

Reported in Physical Review B, Rapid Communications, the new Si2BN material was predicted theoretically (it has not yet been made in the lab). It is made of light, inexpensive, earth-abundant elements and is extremely stable, a property many other graphene alternatives lack, says University of Kentucky Center for Computational Sciences physicist Madhu Menon, PhD.

Limitations of other 2D semiconducting materials

A search for new 2D semiconducting materials has led researchers to a new class of three-layer materials called transition-metal dichalcogenides (TMDCs). TMDCs are mostly semiconductors and can be made into digital processors with greater efficiency than anything possible with silicon. However, these are much bulkier than graphene and made of materials that are not necessarily earth-abundant and inexpensive.

Other graphene-like materials have been proposed but lack the strengths of the new material. Silicene, for example, does not have a flat surface and eventually forms a 3D surface. Other materials are highly unstable, some remaining stable for only a few hours at most.

The new Si2BN material is metallic, but by attaching other elements on top of the silicon atoms, its band gap can be changed (from conductor to semiconductor, for example) — a key advantage over graphene for electronics applications and solar-energy conversion.

The presence of silicon also suggests possible seamless integration with current silicon-based technology, allowing the industry to move away from silicon gradually rather than abruptly, notes Menon.


University of Kentucky | Dr. Madhu Menon Proposes New 2D Material


Abstract of Prediction of a new graphenelike Si2BN solid

While the possibility to create a single-atom-thick two-dimensional layer from any material remains, only a few such structures have been obtained other than graphene and a monolayer of boron nitride. Here, based upon ab initio theoretical simulations, we propose a new stable graphenelike single-atomic-layer Si2BN structure that has all of its atoms with sp2 bonding with no out-of-plane buckling. The structure is found to be metallic with a finite density of states at the Fermi level. This structure can be rolled into nanotubes in a manner similar to graphene. Combining first- and second-row elements in the Periodic Table to form a one-atom-thick material that is also flat opens up the possibility for studying new physics beyond graphene. The presence of Si will make the surface more reactive and therefore a promising candidate for hydrogen storage.

New type of molecular tag makes MRI 10,000 times more sensitive

Duke scientists have discovered a new class of inexpensive, long-lived molecular tags that enhance MRI signals by 10,000 times. To activate the tags, the researchers mix them with a newly developed catalyst (center) and a special form of hydrogen (gray), converting them into long-lived magnetic resonance “lightbulbs” that might be used to track disease metabolism in real time. (credit: Thomas Theis, Duke University)

Duke University researchers have discovered a new class of molecular tags that make MRI 10,000 times more sensitive, potentially allowing it to record actual biochemical reactions, such as those involved in cancer and heart disease, in real time.


Let’s review how MRI (magnetic resonance imaging) works: MRI takes advantage of a property called spin, which makes the nuclei in hydrogen atoms act like tiny magnets. By generating a strong magnetic field (such as 3 Tesla) and a series of radio-frequency waves, MRI induces these hydrogen magnets in atoms to broadcast their locations. Since most of the hydrogen atoms in the body are bound up in water, the technique is used in clinical settings to create detailed images of soft tissues like organs (such as the brain), blood vessels, and tumors inside the body.
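Two textbook numbers behind that description, computed below as an illustration (standard physical constants, not values from the Duke paper): the proton resonance frequency at 3 tesla, and the tiny thermal spin polarization that limits conventional MRI sensitivity.

```python
import math

gamma_over_2pi = 42.577e6   # proton gyromagnetic ratio, Hz per tesla
B0 = 3.0                    # field strength, T
hbar = 1.0546e-34           # reduced Planck constant, J s
k_B = 1.3807e-23            # Boltzmann constant, J/K
T = 310.0                   # body temperature, K

larmor_hz = gamma_over_2pi * B0
print(f"Proton Larmor frequency at {B0} T: {larmor_hz / 1e6:.1f} MHz")

# High-temperature approximation of the thermal (Boltzmann) spin polarization
gamma = 2 * math.pi * gamma_over_2pi
polarization = gamma * hbar * B0 / (2 * k_B * T)
print(f"Thermal polarization: {polarization:.1e}  (about 1 proton in 100,000)")
```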


MRI’s ability to track chemical transformations in the body has been limited by the low sensitivity of the technique. That makes it impossible to detect small numbers of molecules (without using unattainably more massive magnetic fields).

So to take MRI a giant step further in sensitivity, the Duke researchers created a new class of molecular “tags” that can track disease metabolism in real time and can last for more than an hour, using a technique called hyperpolarization.* These tags are biocompatible and inexpensive to produce, and they work with existing MRI machines.

“This represents a completely new class of molecules that doesn’t look anything at all like what people thought could be made into MRI tags,” said Warren S. Warren, James B. Duke Professor and Chair of Physics at Duke, and senior author on the study. “We envision it could provide a whole new way to use MRI to learn about the biochemistry of disease.”

Sensitive tissue detection without radiation

The new molecular tags open up a new world for medicine and research by making it possible to detect what’s happening in optically opaque tissue without requiring CT x-rays or expensive positron emission tomography (PET), which uses a radioactive tracer chemical to look at organs in the body and typically works for only about 20 minutes, according to the researchers.

This research was reported in the March 25 issue of Science Advances. It was supported by the National Science Foundation, the National Institutes of Health, the Department of Defense Congressionally Directed Medical Research Programs Breast Cancer grant, the Pratt School of Engineering Research Innovation Seed Fund, the Burroughs Wellcome Fellowship, and the Donors of the American Chemical Society Petroleum Research Fund.

* For the past decade, researchers have been developing methods to “hyperpolarize” biologically important molecules. “Hyperpolarization gives them 10,000 times more signal than they would normally have if they had just been magnetized in an ordinary magnetic field,” Warren said. But while promising, Warren says these hyperpolarization techniques face two fundamental problems: incredibly expensive equipment — around 3 million dollars for one machine — and most of these molecular “lightbulbs” burn out in a matter of seconds.

“It’s hard to take an image with an agent that is only visible for seconds, and there are a lot of biological processes you could never hope to see,” said Warren. “We wanted to try to figure out what molecules could give extremely long-lived signals so that you could look at slower processes.”

So the researchers synthesized a series of molecules containing diazirines — a chemical structure in which two nitrogen atoms are bound together in a ring. Diazirines were a promising target for screening because their geometry traps hyperpolarization in a “hidden state” where it cannot relax quickly. Using a simple and inexpensive approach to hyperpolarization called SABRE-SHEATH, in which the molecular tags are mixed with a spin-polarized form of hydrogen and a catalyst, the researchers were able to rapidly hyperpolarize one of the diazirine-containing molecules, greatly enhancing its magnetic resonance signals for over an hour.

The scientists believe their SABRE-SHEATH catalyst could be used to hyperpolarize a wide variety of chemical structures at a fraction of the cost of other methods.


Abstract of Direct and cost-efficient hyperpolarization of long-lived nuclear spin states on universal 15N2-diazirine molecular tags

Conventional magnetic resonance (MR) faces serious sensitivity limitations, which can be overcome by hyperpolarization methods, but the most common method (dynamic nuclear polarization) is complex and expensive, and applications are limited by short spin lifetimes (typically seconds) of biologically relevant molecules. We use a recently developed method, SABRE-SHEATH, to directly hyperpolarize 15N2 magnetization and long-lived 15N2 singlet spin order, with signal decay time constants of 5.8 and 23 min, respectively. We find >10,000-fold enhancements generating detectable nuclear MR signals that last for more than an hour. 15N2-diazirines represent a class of particularly promising and versatile molecular tags, and can be incorporated into a wide range of biomolecules without significantly altering molecular function.

A new nontoxic way to generate portable power

In this time-lapse series of photos, progressing from top to bottom, a coating of sucrose (ordinary sugar) over a wire made of carbon nanotubes is lit at the left end, and burns from one end to the other. As it heats the wire, it drives a wave of electrons along with it, thus converting the heat into electricity. (credit: MIT)

Here’s a new idea for a nontoxic battery: light fuel-coated carbon nanotubes on fire (like a fuse) to generate electricity.

Sounds crazy, but it works, according to inventor Michael Strano, the Carbon P. Dubbs Professor in Chemical Engineering at MIT. Plus, it avoids toxic materials such as lithium, which can be difficult to dispose of and is in limited global supply.

The new approach is based on a discovery announced in 2010 by Strano and his co-workers: A wire made from carbon nanotubes can produce an electrical current when it is progressively heated from one end to the other — for example, by coating it with a combustible material and then lighting one end to let it burn like a fuse.

Basically, the effect arises as a pulse of heat pushes electrons through the bundle of carbon nanotubes, carrying the electrons with it like a bunch of surfers riding a wave.

Experiments at the time produced only a minuscule amount of current in a simple laboratory setup. But now, Strano and his team have increased the efficiency of the process more than a thousandfold and have produced devices that can put out power that is, pound for pound, in the same ballpark as what can be produced by today’s best batteries. The researchers caution, however, that it could take several years to develop the concept into a commercializable product.

The new results were published in the journal Energy & Environmental Science, in a paper by Strano, doctoral students Sayalee Mahajan PhD ’15 and Albert Liu, and five others.


MPC-MIT | Experimenting With Thermopower Waves

Virtually indefinite shelf life

The improvements in efficiency, he says, “brings [the technology] from a laboratory curiosity to being within striking distance of other portable energy technologies,” such as lithium-ion batteries or fuel cells. In their latest version, the device is more than 1 percent efficient in converting heat energy to electrical energy, the team reports, which is about 10,000 times greater than that reported in the original discovery paper.
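A rough sketch of what those figures imply, using the article’s “more than 1 percent” efficiency and the energy densities quoted in the abstract below; the sucrose heat of combustion (~16 MJ/kg) is an assumed textbook value, not a number from the paper:

```python
# Chemical-to-electrical output per kilogram of fuel at the quoted efficiency.
fuel_energy_mj_per_kg = 16.0     # approximate heat of combustion of sucrose (assumed)
efficiency = 0.01                # ">1 percent" chemical-to-electrical conversion

electrical_mj_per_kg = fuel_energy_mj_per_kg * efficiency
print(f"Electrical output per kg of fuel: ~{electrical_mj_per_kg:.2f} MJ "
      f"({electrical_mj_per_kg / 3.6:.3f} kWh)")

# Device-level energy densities reported in the paper's abstract, for comparison
tpw_mj_per_l = 0.2      # thermopower wave device
li_ion_mj_per_l = 0.8   # Li-ion operating at 80% efficiency
print(f"TPW vs Li-ion energy density: {tpw_mj_per_l} vs {li_ion_mj_per_l} MJ/L")
```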

“It took lithium-ion technology 25 years to get where they are” in terms of efficiency, Strano points out, whereas this technology has had only about a fifth of that development time. And lithium is extremely flammable if the material ever gets exposed to the open air — unlike the fuel used in the new device, which is much safer and also a renewable resource.

Already, the device is powerful enough to show that it can power simple electronic devices such as an LED light. And unlike batteries that can gradually lose power if they are stored for long periods, the new system should have a virtually indefinite shelf life, Liu says. That could make it suitable for uses such as a deep-space probe that remains dormant for many years as it travels to a distant planet and then needs a quick burst of power to send back data when it reaches its destination.

In addition, the new system is very scalable for use in wearable devices. Batteries and fuel cells have limitations that make it difficult to shrink them to tiny sizes, Mahajan says, whereas this system “can scale down to very small limits. The scale of this is unique.”

This work is “an important demonstration of increasing the energy and lifetime of thermopower wave-based systems,” says Kourosh Kalantar-Zadeh, a professor of electrical and computer engineering at RMIT University in Australia, who was not involved in this research. “I believe that we are still far from the upper limit that the thermopower wave devices can potentially reach,” he says. “However, this step makes the technology more attractive for real applications.”

He adds that with this technology, “We can obtain phenomenal bursts of power, which is not possible from batteries. For instance, the thermopower wave systems can be used for powering long-distance transmission units in micro- and nano-telecommunication hubs.”

The work was supported by the Air Force Office of Scientific Research and the Office of Naval Research.


Abstract of Sustainable power sources based on high efficiency thermopower wave devices

There is a pressing need to find alternatives to conventional batteries such as Li-ion, which contain toxic metals, present recycling difficulties due to harmful inorganic components, and rely on elements in finite global supply. Thermopower wave (TPW) devices, which convert chemical to electrical energy by means of self-propagating reaction waves guided along nanostructured thermal conduits, have the potential to address this demand. Herein, we demonstrate orders of magnitude higher chemical-to-electrical conversion efficiency of thermopower wave devices, in excess of 1%, with sustainable fuels such as sucrose and NaN3 for the first time, that produce energy densities on par with Li-ion batteries operating at 80% efficiency (0.2 MJ L−1 versus 0.8 MJ L−1). We show that efficiency can be increased significantly by selecting fuels such as sodium azide or sucrose with potassium nitrate to offset the inherent penalty in chemical potential imposed by strongly p-doping fuels, a validation of the predictions of Excess Thermopower theory. Such TPW devices can be scaled to lengths greater than 10 cm and durations longer than 10 s, an over 5-fold improvement over the highest reported values, and they are capable of powering a commercial LED device. Lastly, a mathematical model of wave propagation, coupling thermal and electron transport with energy losses, is presented to describe the dynamics of power generation, explaining why both unipolar and bipolar waveforms can be observed. These results represent a significant advancement toward realizing TPW devices as new portable, high power density energy sources that are metal-free.

A wearable graphene-based biomedical device to monitor and treat diabetes

Graphene-based patch for non-invasive blood-sugar diabetes monitoring and painless drug delivery (credit: IBS)

A wearable graphene-based patch that allows for accurate, non-invasive blood-sugar monitoring for diabetes and painless drug delivery has been developed by researchers at The Institute for Basic Science (IBS) Center for Nanoparticle Research in South Korea.

The device uses a hybrid of gold-doped graphene and a serpentine-shape gold mesh to measure the amount of glucose in sweat, along with pH (acidity) and temperature readings that help correct the glucose measurement. If abnormally high levels of glucose are detected, an insulin-regulating drug (such as Metformin) is released into a patient’s bloodstream via drug-loaded microneedles.*
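To make the closed-loop idea concrete, here is a minimal sketch of that sense-correct-actuate logic (a hypothetical illustration, not the IBS firmware; the threshold and correction coefficients are placeholders):

```python
# Hypothetical control loop: correct the sweat-glucose reading with pH and
# temperature, then trigger the heated microneedles above a threshold.
GLUCOSE_THRESHOLD_MM = 0.3   # hypothetical sweat-glucose threshold, millimolar

def corrected_glucose(raw_glucose_mm, ph, temp_c):
    """Apply simple (placeholder) pH and temperature corrections to the raw reading."""
    ph_correction = 1.0 + 0.05 * (7.0 - ph)         # placeholder coefficient
    temp_correction = 1.0 + 0.02 * (temp_c - 33.0)  # placeholder coefficient
    return raw_glucose_mm * ph_correction * temp_correction

def control_step(raw_glucose_mm, ph, temp_c):
    glucose = corrected_glucose(raw_glucose_mm, ph, temp_c)
    if glucose > GLUCOSE_THRESHOLD_MM:
        return "activate heater -> dissolve microneedle coating -> release drug"
    return "keep monitoring"

print(control_step(raw_glucose_mm=0.4, ph=6.8, temp_c=34.0))
```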

Wireless smartphone monitoring of glucose levels (credit: IBS)

The current treatments available to diabetics are painful, inconvenient, and costly, requiring regular visits to a doctor, the researchers note. Home testing kits are available to record glucose levels, but for treatment, patients have to inject uncomfortable insulin** shots to regulate glucose levels. The IBS device provides non-invasive, painless, and stress-free monitoring of important markers of diabetes using multifunctional wearable devices, reducing lengthy and expensive cycles of visiting doctors and pharmacies, according to the researchers.

“The device shows dramatic advances over current treatment methods by allowing non-invasive treatments,” according to Center for Nanoparticle Research scientist Kim Dae-Hyeong.

Diabetes monitoring and drug-delivery device. (Left) Schematic of diabetes patch, composed of sweat-control (sweat-uptake layer and waterproof film) and sensing (humidity, glucose, pH and tremor sensors) components; (Middle) therapeutic components (microneedles, heater, and temperature sensor); (Right) graphene-hybrid electrochemical unit: electrochemically active and soft functional materials (red), gold-doped graphene (yellow spheres), and serpentine gold mesh (bottom). (credit: Hyunjae Lee et al./Nature Nanotechnology)

* The researchers tested the therapeutic effects by experimenting on diabetic mice. They applied the device near the abdomen of each mouse. Microneedles pierced the skin of the mouse and released Metformin, an insulin-regulating drug, into the bloodstream. The group treated with microneedles showed a significant suppression of blood glucose concentrations with respect to control groups. Two healthy human males also participated in tests to demonstrate the sweat-based glucose sensing of the device. Glucose and pH levels of both subjects were recorded; a statistical analysis confirmed a reliable correlation between sweat-glucose data from the diabetes patch and those from commercial glucose tests.

** Insulin is produced in the pancreas and regulates the use of glucose, maintaining a balance in blood sugar levels. Diabetes causes an imbalance: insufficient amounts of insulin result in high blood glucose levels, known as hyperglycemia. Type 2 diabetes is the most common form of diabetes, and it has no known cure.


Abstract of A graphene-based electrochemical device with thermoresponsive microneedles for diabetes monitoring and therapy

Owing to its high carrier mobility, conductivity, flexibility and optical transparency, graphene is a versatile material in micro- and macroelectronics. However, the low density of electrochemically active defects in graphene synthesized by chemical vapour deposition limits its application in biosensing. Here, we show that graphene doped with gold and combined with a gold mesh has improved electrochemical activity over bare graphene, sufficient to form a wearable patch for sweat-based diabetes monitoring and feedback therapy. The stretchable device features a serpentine bilayer of gold mesh and gold-doped graphene that forms an efficient electrochemical interface for the stable transfer of electrical signals. The patch consists of a heater, temperature, humidity, glucose and pH sensors and polymeric microneedles that can be thermally activated to deliver drugs transcutaneously. We show that the patch can be thermally actuated to deliver Metformin and reduce blood glucose levels in diabetic mice.

How to detect radioactive material remotely

Researchers have proposed a new way to detect radioactive material using two co-located laser beams that interact with elevated levels of oxygen ions near a gamma-ray emitting source (credit: Joshua Isaacs, et al./University of Maryland)

University of Maryland researchers have proposed a new technique to remotely detect radioactive materials* (such as those in dirty bombs or other sources) from up to a few hundred meters away, based on the elevated ion density those materials create in the surrounding air. The technique might be used to screen vehicles, suspicious packages, or cargo.

The researchers calculate that a low-power laser aimed near the radioactive material could free electrons from the oxygen ions. A second, high-power laser could energize the electrons and start a cascading breakdown of the air. When the breakdown process reaches a certain critical point, the high-power laser light is reflected back. The more radioactive material in the vicinity, the more quickly the critical point is reached.
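The qualitative link between radioactivity and detection speed can be illustrated with a toy avalanche-ionization model (not from the paper; all densities and rates below are hypothetical): the seed electron density set by the radioactive source determines how quickly the exponential breakdown reaches the critical density.

```python
import math

# Avalanche growth n(t) = n_seed * exp(nu * t), so the time to reach the
# critical density falls as the photo-ionized seed density rises.
def breakdown_time(n_seed, n_critical, ionization_rate):
    """Time for the electron avalanche to reach the critical density (seconds)."""
    return math.log(n_critical / n_seed) / ionization_rate

n_critical = 1e19          # hypothetical critical electron density, m^-3
ionization_rate = 1e9      # hypothetical avalanche growth rate, s^-1

for n_seed in (1e6, 1e8, 1e10):   # higher seed density near stronger sources
    t = breakdown_time(n_seed, n_critical, ionization_rate)
    print(f"seed density {n_seed:.0e} m^-3 -> breakdown after {t * 1e9:.1f} ns")
```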

“We calculate we could easily detect 10 milligrams [of cobalt-60] with a laser aimed within half a meter from an unshielded source, which is a fraction of what might go into a dirty bomb,” said Joshua Isaacs, first author on the paper and a graduate student working with University of Maryland physics and engineering professors Phillip Sprangle and Howard Milchberg. Lead could shield radioactive substances, but most ordinary materials like walls or glass do not stop gamma rays.


In 2004 British national Dhiren Barot was arrested for conspiring to commit a public nuisance by the use of radioactive materials, among other charges. Authorities claimed that Barot had researched the production of “dirty bombs,” and planned to detonate them in New York City, Washington DC, and other cities. A dirty bomb combines conventional explosives with radioactive material. Although Barot did not build the bombs, national security experts believe terrorists continue to be interested in such devices for terror plots.


The lasers themselves could be located up to a few hundred meters away from the radioactive source, Isaacs said, as long as line-of-sight was maintained and the air was not too turbulent or polluted with aerosols. He estimated that the entire device, when built, could be transported by truck through city streets or past shipping containers in ports. It could also help police or security officials detect radiation without being too close to a potentially dangerous gamma ray emitter.

The proposed remote radiation detection method has advantages over two other approaches. Terahertz radiation, proposed as a way to break down air in the vicinity of radioactive materials, requires complicated and costly equipment. A high-power infrared laser can also strip electrons and break down the air, but that method requires the detector to be located in the opposite direction of the laser, making it impractical as a mobile device.

The new method is described in a paper in the journal Physics of Plasmas, from AIP Publishing.

* Radioactive materials are routinely used at hospitals for diagnosing and treating diseases, at construction sites for inspecting welding seams, and in research facilities. Cobalt-60, for example, is used to sterilize medical equipment, produce radiation for cancer treatment, and preserve food, among many other applications. In 2013, thieves in Mexico stole a shipment of cobalt-60 pellets used in hospital radiotherapy machines, although the shipment was later recovered intact.

Cobalt-60 and many other radioactive elements emit highly energetic gamma rays when they decay. The gamma rays strip electrons from the molecules in the surrounding air, and the resulting free electrons lose energy and readily attach to oxygen molecules to create elevated levels of negatively charged oxygen ions around the radioactive materials.


Abstract of Remote Monostatic Detection of Radioactive Materials by Laser-induced Breakdown

This paper analyzes and evaluates a concept for remotely detecting the presence of radioactivity using electromagnetic signatures. The detection concept is based on the use of laser beams and the resulting electromagnetic signatures near the radioactive material. Free electrons, generated from ionizing radiation associated with the radioactive material, cascade down to low energies and attach to molecular oxygen. The resulting ion density depends on the level of radioactivity and can be readily photo-ionized by a low-intensity laser beam. This process provides a controllable source of seed electrons for the further collisional ionization (breakdown) of the air using a high-power, focused, CO2 laser pulse. When the air breakdown process saturates, the ionizing CO2 radiation reflects off the plasma region and can be detected. The time required for this to occur is a function of the level of radioactivity. This monostatic detection arrangement has the advantage that both the photo-ionizing and avalanche laser beams as well as the detector can be co-located.

When slower is faster: how to get rid of traffic lights

Intersection congestion (credit: Google Earth)

Traffic-light-free transportation design, if it ever arrives, could allow twice as much traffic to use the roads, according to a newly published open-access study in PLoS One co-authored by MIT researchers.

The idea is based on future vehicles that are equipped with the kind of sensors used in autonomous vehicles and that communicate wirelessly with each other, coordinating their crossings rather than grinding to a halt at traffic lights.

The researchers created a mathematical model based on a scenario in which high-tech vehicles use sensors to remain at a safe distance from each other as they move through a four-way intersection. By removing the waits caused by traffic lights, these new “Slot-based Intersections” (SIs) speed up traffic flow in the model.


MIT Senseable City Lab | Light Traffic | MIT Senseable City Lab

The greater capacity of the system does not stem from vehicles moving more quickly, notes Paolo Santi, a researcher in the MIT Senseable City Lab, a member of the Italian National Research Council, and a co-author of the study. Rather, it comes from creating a more consistent flow at an optimal middle speed, at which automobiles can keep moving. “You want the car to use the intersection for the shortest possible time,” Santi says.

Such a system would also reduce pollution and save on gasoline.

The “slower is faster” effect

To see why the system could at least work in theory, consider what the researchers call the “slower is faster” effect. When passengers board an airplane, they tend to move faster if they are in smaller clusters that keep going steadily, as opposed to a scenario in which everyone crowds around the entrance, creating a giant bottleneck.

“If you need to slow down the vehicles because there is a lot of traffic,” says Santi, “you slow them down early in the road, so they approach the intersection at slow speed, but then when they cross, you use the best speed.”

By “early,” Santi means that control of intelligent vehicles in the proposed system would occur not just at intersections but on the road segments leading to them. The modeling in the paper also finds that the most efficient traffic flow puts vehicles together in batches — like those smaller groups of passengers boarding a plane — and then slides them through the intersection accordingly.
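A minimal slot-assignment sketch conveys the flavor of this batching idea (an illustrative toy scheduler, not the near-optimal algorithm from the paper; the slot length is a hypothetical value, and same-road vehicles are assumed to keep safe headways on their own):

```python
# Toy slot-based intersection: each vehicle gets the earliest slot that does not
# overlap a slot already granted to a vehicle on the crossing road.
SLOT_S = 2.0   # hypothetical time a vehicle occupies the intersection, seconds

def assign_slots(arrivals):
    """arrivals: list of (road, earliest_arrival_time); returns (road, arrival, slot_start)."""
    booked = []     # (road, slot_start, slot_end) slots already granted
    schedule = []
    for road, t in sorted(arrivals, key=lambda a: a[1]):
        start = t
        shifted = True
        while shifted:             # push the slot later until no crossing-road conflict remains
            shifted = False
            for r, s, e in booked:
                if r != road and start < e and start + SLOT_S > s:
                    start = e
                    shifted = True
        booked.append((road, start, start + SLOT_S))
        schedule.append((road, t, start))
    return schedule

demo = [("north-south", 0.0), ("east-west", 0.5), ("north-south", 1.0), ("east-west", 6.0)]
for road, arrival, slot in assign_slots(demo):
    print(f"{road:11s} arrives at {arrival:4.1f} s -> crosses at {slot:4.1f} s")
```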

The authors noted that in many cities, intersections with lights are often placed relatively close to each other. So how would the dynamics of traffic at one intersection propagate through a whole urban network of roads? They’re working on that.

A previous study with the same idea, but based on driverless vehicles, was done at Virginia Tech Transportation Research (see “Driverless vehicles to zip at full speed through intersections“).

Vehicle communication systems

So exactly how would this work? The authors suggest that vehicles might communicate with roadside infrastructure and other vehicles to produce better coordinated flows. The notion of a vehicle-to-vehicle (V2V) communication system has been studied fairly extensively (see “V2V: Department of Transportation’s new communication system helps cars avoid crashes by talking to each other” and “A 3,000-vehicle test of wireless crash-avoidance system“) but has been criticized as hackable.


Abstract of Revisiting Street Intersections Using Slot-Based Systems

Since their appearance at the end of the 19th century, traffic lights have been the primary mode of granting access to road intersections. Today, this centuries-old technology is challenged by advances in intelligent transportation, which are opening the way to new solutions built upon slot-based systems similar to those commonly used in aerial traffic: what we call Slot-based Intersections (SIs). Despite simulation-based evidence of the potential benefits of SIs, a comprehensive, analytical framework to compare their relative performance with traffic lights is still lacking. Here, we develop such a framework. We approach the problem in a novel way, by generalizing classical queuing theory. Having defined safety conditions, we characterize capacity and delay of SIs. In the 2-road crossing configuration, we provide a capacity-optimal SI management system. For arbitrary intersection configurations, near-optimal solutions are developed. Results theoretically show that transitioning from a traffic light system to SI has the potential of doubling capacity and significantly reducing delays. This suggests a reduction of non-linear dynamics induced by intersection bottlenecks, with positive impact on the road network. Such findings can provide transportation engineers and planners with crucial insights as they prepare to manage the transition towards a more intelligent transportation infrastructure in cities.

You’ll interact with smartphones and smartwatches by writing/gesturing on any surface, using sonar signals

FingerIO lets you interact with mobile devices by writing or gesturing on any nearby surface, turning a smartphone or smartwatch into an active sonar device (credit: Dennis Wise, University of Washington)

A new sonar technology called FingerIO will make it easier to interact with screens on smartwatches and smartphones by simply writing or gesturing on any nearby surface. It’s an active sonar system that uses the device’s own microphones and speakers to track fine-grained finger movements (to within 8 mm).

Because sound waves travel through fabric and do not require line of sight, users can even interact with these devices (including writing text) inside a front pocket or a smartwatch hidden under a sweater sleeve.


University of Washington Computer Science & Engineering | FingerIO

Developed by University of Washington computer scientists and electrical engineers, FingerIO uses the device’s own speaker to emit an inaudible ultrasonic wave. That signal bounces off the finger, and those “echoes” are recorded by the device’s microphones and used to calculate the finger’s location in space.
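The basic pulse-echo geometry is simple to sketch (illustrative arithmetic; FingerIO’s actual OFDM processing, described below, is more involved):

```python
# Pulse-echo ranging: distance = speed_of_sound * round_trip_delay / 2.
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 C

def echo_distance_m(round_trip_delay_s):
    return SPEED_OF_SOUND * round_trip_delay_s / 2

# Example: an echo arriving 1.2 ms after the signal left the speaker
delay = 1.2e-3
print(f"Finger distance: {echo_distance_m(delay) * 100:.1f} cm")

# At a 48 kHz sampling rate, one sample of delay corresponds to this range step,
# which is why finer tracking needs the phase information discussed below.
sample_resolution_cm = echo_distance_m(1 / 48000) * 100
print(f"Per-sample range resolution: {sample_resolution_cm:.2f} cm")
```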

Using sound waves to track finger motion offers several advantages over cameras — which don’t work without line-of-sight or when the device is hidden by fabric or other obstructions — and other technologies like radar that require both custom sensor hardware and greater computing power, said senior author and UW assistant professor of computer science and engineering Shyam Gollakota.

But standard sonar echoes are weak and typically not accurate enough to track finger motion at high resolution. Errors of a few centimeters would make it impossible to distinguish individual written letters or subtle hand gestures.

So the UW researchers used Orthogonal Frequency Division Multiplexing (OFDM), a modulation technique used in cellular telecommunications and WiFi, which lets the system track phase changes in the echoes and correct for errors in the finger location.
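As a rough illustration of why phase helps (a generic calculation, not FingerIO’s exact algorithm; the 19 kHz subcarrier is an assumed value), a small shift in the echo’s phase on one subcarrier maps to a sub-millimeter change in finger distance:

```python
import math

SPEED_OF_SOUND = 343.0       # m/s
subcarrier_hz = 19_000       # assumed near-inaudible subcarrier frequency

def distance_change_from_phase(delta_phase_rad):
    """Round-trip phase change -> one-way change in finger distance (meters)."""
    delta_path = (delta_phase_rad / (2 * math.pi)) * (SPEED_OF_SOUND / subcarrier_hz)
    return delta_path / 2    # echo path is out-and-back

# A 30-degree phase shift on the 19 kHz subcarrier:
dd = distance_change_from_phase(math.radians(30))
print(f"Distance change: {dd * 1000:.2f} mm")
```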

Applications of fingerIO. a) Transform any surface into a writing interface; b) provide a new interface for smartwatch form factor devices; c) enable gesture interaction with a phone in a pocket; d) work even when the watch is occluded. (credit: R. Nandakumar et al.)

Two microphones are needed to track finger motion in two dimensions, and three for three dimensions. So this system may work (when available commercially*) with some smartphones (it was tested with a Samsung Galaxy S4), but today’s smartwatches typically only have one microphone.

Next steps for the research team include demonstrating how FingerIO can be used to track multiple fingers moving at the same time, and extending its tracking abilities into three dimensions by adding additional microphones to the devices.

The research was funded by the National Science Foundation and Google and will be described in a paper to be presented in May at the Association for Computing Machinery’s CHI 2016 conference in San Jose, California.

* Hint: Microsoft Research principal researcher Desney Tan is a co-author.


editor’s comments: This tech will be great for students and journalists taking notes and for controlling music and videos. It could also help prevent robberies. How would you use it?


Abstract of FingerIO: Using Active Sonar for Fine-Grained Finger Tracking

We present fingerIO, a novel fine-grained finger tracking solution for around-device interaction. FingerIO does not require instrumenting the finger with sensors and works even in the presence of occlusions between the finger and the device. We achieve this by transforming the device into an active sonar system that transmits inaudible sound signals and tracks the echoes of the finger at its microphones. To achieve subcentimeter level tracking accuracies, we present an innovative approach that use a modulation technique commonly used in wireless communication called Orthogonal Frequency Division Multiplexing (OFDM). Our evaluation shows that fingerIO can achieve 2-D finger tracking with an average accuracy of 8 mm using the in-built microphones and speaker of a Samsung Galaxy S4. It also tracks subtle finger motion around the device, even when the phone is inside a pocket. Finally, we prototype a smart watch form-factor fingerIO device and show that it can extend the interaction space to a 0.5×0.25 m2 region on either side of the device and work even when it is fully occluded from the finger.