Phosphorene could lead to ultrathin solar cells


Australian National University | Sticky tape the key to ultrathin solar cells

Scientists at Australian National University (ANU) have used simple transparent sticky (aka “Scotch”) tape to create single-atom-thick layers of phosphorene from “black phosphorus,” a black crystalline form of phosphorus similar to graphite (which is used to create graphene).

Unlike graphene, phosphorene is a natural semiconductor that can be switched on and off, like silicon, as KurzweilAI has reported. “Because phosphorene is so thin and light, it creates possibilities for making lots of interesting devices, such as LEDs or solar cells,” said lead researcher Yuerui (Larry) Lu, PhD, from ANU College of Engineering and Computer Science.

Properties that vary with layer thickness

Phosphorene is a thinner and lighter semiconductor than silicon, and it has unusual light emission properties that vary widely with the thickness of the layers, which enables more flexibility for manufacturing. “This property has never been reported before in any other material,” said Lu.

Schematic of the “puckered honeycomb” crystal structure of black phosphorus (credit: Vahid Tayari/McGill University)

“By changing the number of layers [peeled off] we can tightly control the band gap, which determines the material’s properties, such as the color of LED it would make.”* “You can see quite clearly under the microscope the different colors of the sample, which tells you how many layers are there,” said Lu.

The study was recently described in an open-access paper in the Nature journal Light: Science and Applications.

* Lu’s team found the optical gap for monolayer (single-layer) phosphorene was 1.75 electron volts, corresponding to red light with a wavelength of 700 nanometers. As more layers were added, the optical gap decreased. For instance, for five layers, the optical gap was 0.8 electron volts, an infrared wavelength of 1550 nanometers. For very thick layers, the value was around 0.3 electron volts, a mid-infrared wavelength of around 3.5 microns.
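Those gap-to-wavelength pairings follow directly from the photon-energy relation λ = hc/E. A minimal sketch in Python (the layer labels and gap values come from the article; hc ≈ 1239.84 eV·nm is a standard physical constant):

```python
# Convert an optical gap (eV) to its emission wavelength via lambda = h*c / E.
# Gap values below are those reported by Lu's team; the helper itself is just
# the standard photon-energy relation.

HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def gap_to_wavelength_nm(optical_gap_ev: float) -> float:
    """Photon wavelength (nm) for a given optical gap (eV)."""
    return HC_EV_NM / optical_gap_ev

for label, gap_ev in [("monolayer", 1.75), ("five layers", 0.8), ("bulk-like", 0.3)]:
    print(f"{label}: {gap_ev} eV -> {gap_to_wavelength_nm(gap_ev):.0f} nm")

# monolayer: 709 nm (red); five layers: 1550 nm (infrared); bulk-like: 4133 nm
# (mid-infrared; the quoted ~3.5 microns corresponds to a gap nearer 0.35 eV,
# within the "around 0.3 eV" rounding).
```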


Abstract of Optical tuning of exciton and trion emissions in monolayer phosphorene

Monolayer phosphorene provides a unique two-dimensional (2D) platform to investigate the fundamental dynamics of excitons and trions (charged excitons) in reduced dimensions. However, owing to its high instability, unambiguous identification of monolayer phosphorene has been elusive. Consequently, many important fundamental properties, such as exciton dynamics, remain underexplored. We report a rapid, noninvasive, and highly accurate approach based on optical interferometry to determine the layer number of phosphorene, and confirm the results with reliable photoluminescence measurements. Furthermore, we successfully probed the dynamics of excitons and trions in monolayer phosphorene by controlling the photo-carrier injection in a relatively low excitation power range. Based on our measured optical gap and the previously measured electronic energy gap, we determined the exciton binding energy to be ~0.3 eV for the monolayer phosphorene on SiO2/Si substrate, which agrees well with theoretical predictions. A huge trion binding energy of ~100 meV was first observed in monolayer phosphorene, which is around five times higher than that in transition metal dichalcogenide (TMD) monolayer semiconductor, such as MoS2. The carrier lifetime of exciton emission in monolayer phosphorene was measured to be ~220 ps, which is comparable to those in other 2D TMD semiconductors. Our results open new avenues for exploring fundamental phenomena and novel optoelectronic applications using monolayer phosphorene.

3D-printing basic electronic components

UC Berkeley engineers created a “smart cap” using 3-D-printed plastic with embedded electronics to wirelessly monitor the freshness of milk (credit: Photo and schematic by Sung-Yueh Wu)

UC Berkeley engineers, in collaboration with colleagues at Taiwan’s National Chiao Tung University, have developed a 3D printing process for creating basic electronic components, such as resistors, inductors, capacitors, and integrated wireless electrical sensing systems.

As a test, they printed a wireless “smart cap” for a milk carton that detected signs of spoilage using embedded sensors.

The findings were published Monday, July 20, in a new open-access journal in the Nature Publishing Group called Microsystems & Nanoengineering.

“Our paper describes the first demonstration of 3-D printing for working basic electrical components, as well as a working wireless sensor,” said senior author Liwei Lin, a professor of mechanical engineering and co-director of the Berkeley Sensor and Actuator Center. “One day, people may simply download 3D-printing files from the Internet with customized shapes and colors and print out useful devices at home.”

Engineers created a range of 3-D-printed electrical components, including an electrical resistor, inductor, capacitor and an LC tank (integrated inductor-capacitor system) (the penny is used for scale), shown here before removal of wax (credit: Photo by Sung-Yueh Wu)

The researchers started by printing polymers and wax. They then removed the wax, leaving hollow tubes into which liquid metal — in their experiments they used silver — was injected and then cured.

The shape and design of the metal determined the function of different electrical components. For instance, thin wires acted as resistors, and flat plates were made into capacitors.

To demonstrate their use, the researchers integrated the electronic components into a plastic milk carton cap to monitor signs of spoilage. The “smart cap” was fitted with a capacitor and an inductor to form a resonant circuit. A quick flip of the carton allowed a bit of milk to get trapped in the cap’s capacitor gap, and the entire carton was then left unopened at room temperature (about 71.6 degrees Fahrenheit) for 36 hours.

3D-printed components (credit: Sung-Yueh Wu et al./Microsystems & Nanoengineering)

The circuit could detect the changes in electrical signals that accompany increased levels of bacteria. The researchers monitored the changes with a wireless radio-frequency probe at the start of the experiment and every 12 hours thereafter, up to 36 hours. The properties of milk change gradually as it degrades, leading to variations in its electrical characteristics.*

Those changes were detected wirelessly using the smart cap, which found that the peak vibration frequency of the room-temperature milk dropped by 4.3 percent after 36 hours. In comparison, a carton of milk kept in refrigeration at 39.2 degrees Fahrenheit saw a relatively minor 0.12 percent shift in frequency over the same time period.

Cheap DIY electronic circuits 

“This 3D-printing technology could eventually make electronic circuits cheap enough to be added to packaging to provide food safety alerts for consumers,” said Lin. “You could imagine a scenario where you can use your cellphone to check the freshness of food while it’s still on the store shelves.”

As 3D printers become cheaper and better, the options for electronics will expand, said Lin, though he does not think people will be printing out their own smartphones or computers anytime soon.

“That would be very difficult because of the extremely small size of modern electronics,” he said. “It might also not be practical in terms of price since current integrated circuits are made by batch fabrication to keep costs low. Instead, I see 3D-printed microelectronic devices as very promising for systems that would benefit from customization.”

Lin said his lab is working on developing this technology for health applications, such as implantable devices with embedded transducers that can monitor blood pressure, muscle strain and drug concentrations.

* As the trapped milk hardened, the dielectric constant increased, increasing the capacitance and thus decreasing the frequency.
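The footnote’s chain (higher dielectric constant, hence higher capacitance, hence lower resonance frequency) is the standard LC-tank relation f = 1/(2π√(LC)). A minimal sketch: the 10 nH and 9 pF values below are illustrative guesses, chosen only because they land near the paper’s reported 0.53 GHz resonance, and are not figures from the paper.

```python
# Sketch of the physics behind the smart cap's readout: an LC tank resonates
# at f = 1 / (2*pi*sqrt(L*C)), so a rise in the dielectric constant of the
# capacitor gap (spoiling milk) raises C and lowers f. Component values are
# illustrative placeholders, not the ones used in the paper.
import math

def resonant_frequency_hz(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of an ideal LC tank."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

L_HENRY = 10e-9      # hypothetical 10 nH inductor
C_FRESH = 9.0e-12    # hypothetical capacitance with fresh milk in the gap

f_fresh = resonant_frequency_hz(L_HENRY, C_FRESH)

# A 4.3% frequency drop implies capacitance rose by 1 / (1 - 0.043)^2, i.e.,
# about 9%, because f scales as 1/sqrt(C).
c_spoiled = C_FRESH / (1.0 - 0.043) ** 2
f_spoiled = resonant_frequency_hz(L_HENRY, c_spoiled)

print(f"fresh:   {f_fresh / 1e6:.1f} MHz")
print(f"spoiled: {f_spoiled / 1e6:.1f} MHz ({100 * (1 - f_spoiled / f_fresh):.1f}% drop)")
```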


Abstract of 3D-printed microelectronics for integrated circuitry and passive wireless sensors

Three-dimensional (3D) additive manufacturing techniques have been utilized to make 3D electrical components, such as resistors, capacitors, and inductors, as well as circuits and passive wireless sensors. Using the fused deposition modeling technology and a multiple-nozzle system with a printing resolution of 30 μm, 3D structures with both supporting and sacrificial structures are constructed. After removing the sacrificial materials, suspensions with silver particles are injected and subsequently solidified to form metallic elements/interconnects. The prototype results show good characteristics of fabricated 3D microelectronics components, including an inductor–capacitor-resonant tank circuitry with a resonance frequency at 0.53 GHz. A 3D “smart cap” with an embedded inductor–capacitor tank as the wireless passive sensor was demonstrated to monitor the quality of liquid food (e.g., milk and juice) wirelessly. The result shows a 4.3% resonance frequency shift from milk stored in the room temperature environment for 36 h. This work establishes an innovative approach to construct arbitrary 3D systems with embedded electrical structures as integrated circuitry for various applications, including the demonstrated passive wireless sensors.

Common chemicals may act together to increase cancer risk, international study finds

Disruptive potential of environmental exposures to mixtures of chemicals (credit: William H. Goodson III et al./Carcinogenesis)

Common environmental chemicals assumed to be safe at low doses may act separately or together to disrupt human tissues in ways that eventually lead to cancer, according to a task force of almost 200 scientists from 28 countries.

In a nearly three-year investigation of the state of knowledge about environmentally influenced cancers, the scientists studied low-dose effects of 85 common chemicals not considered to be carcinogenic to humans.

Common chemicals

The researchers reviewed the actions of these chemicals against a long list of mechanisms that are important for cancer development. Drawing on hundreds of laboratory studies, large databases of cancer information, and models that predict cancer development, they compared the chemicals’ biological activity patterns to 11 known cancer “hallmarks” – distinctive patterns of cellular and genetic disruption associated with early development of tumors.

The chemicals included bisphenol A (BPA), used in plastic food and beverage containers; rotenone, a broad-spectrum insecticide; paraquat, an agricultural herbicide; and triclosan, an antibacterial agent used in soaps and cosmetics.

In their survey, the researchers learned that 50 of the 85 chemicals had been shown to disrupt functioning of cells in ways that correlated with known early patterns of cancer, even at the low, presumably benign levels at which most people are exposed.

For 13 of them, the researchers found evidence of a dose-response threshold — an exposure level below which no toxic effect is seen. For the remaining 22, there was no dose-response information at all.

Synergistic effects over time

“Our findings also suggest these molecules may be acting in synergy to increase cancer activity,” said William Bisson, an assistant professor and cancer researcher at Oregon State University and a team leader on the study. For example, EDTA, a metal-ion-binding compound used in manufacturing and medicine, interferes with the body’s repair of damaged genes.

“EDTA doesn’t cause genetic mutations itself,” said Bisson, “but if you’re exposed to it along with some substance that is mutagenic, it enhances the effect because it disrupts DNA repair, a key layer of cancer defense.”

Bisson said the main purpose of this study was to highlight gaps in knowledge of environmentally influenced cancers and to set forth a research agenda for the next few years. He added that more research is still necessary to assess early exposure and to understand early stages of cancer development.

The study is part of the Halifax Project, sponsored by the Canadian nonprofit organization Getting to Know Cancer. The organization’s mission is to advance scientific knowledge about cancer linked to environmental exposures. The team’s findings are published in an open-access paper in a special issue of the journal Carcinogenesis.

Traditional risk assessment has historically focused on a quest for single chemicals and single modes of action — approaches that may underestimate cancer risk, said Bisson, an expert on computational chemical genomics (the modeling of biochemical molecular interactions in cancer processes). This study takes a different tack, examining the interplay over time of independent molecular processes triggered by low-dose exposures to chemicals.

“Cancer is a disease of diseases,” said Bisson. “It follows multi-step development patterns, and in most cases it has a long latency period. It has to be tackled from an angle that considers the complexity of these patterns.

“A better understanding of what’s driving things to the point where they get uncontrollable will be key for the development of effective strategies for prevention and early detection.”


Abstract of Assessing the carcinogenic potential of low-dose exposures to chemical mixtures in the environment: the challenge ahead

Lifestyle factors are responsible for a considerable portion of cancer incidence worldwide, but credible estimates from the World Health Organization and the International Agency for Research on Cancer (IARC) suggest that the fraction of cancers attributable to toxic environmental exposures is between 7% and 19%. To explore the hypothesis that low-dose exposures to mixtures of chemicals in the environment may be combining to contribute to environmental carcinogenesis, we reviewed 11 hallmark phenotypes of cancer, multiple priority target sites for disruption in each area and prototypical chemical disruptors for all targets, this included dose-response characterizations, evidence of low-dose effects and cross-hallmark effects for all targets and chemicals. In total, 85 examples of chemicals were reviewed for actions on key pathways/mechanisms related to carcinogenesis. Only 15% (13/85) were found to have evidence of a dose-response threshold, whereas 59% (50/85) exerted low-dose effects. No dose-response information was found for the remaining 26% (22/85). Our analysis suggests that the cumulative effects of individual (non-carcinogenic) chemicals acting on different pathways, and a variety of related systems, organs, tissues and cells could plausibly conspire to produce carcinogenic synergies. Additional basic research on carcinogenesis and research focused on low-dose effects of chemical mixtures needs to be rigorously pursued before the merits of this hypothesis can be further advanced. However, the structure of the World Health Organization International Programme on Chemical Safety ‘Mode of Action’ framework should be revisited as it has inherent weaknesses that are not fully aligned with our current understanding of cancer biology.

Deep Genomics launches, uniting deep learning and genome biology

“Deep learning” reveals the genetic origins of disease. A computational system mimics the biology of RNA splicing by correlating DNA elements with splicing levels in healthy human tissues. The system can scan DNA and identify damaging genetic variants, including those deep within introns. This procedure has led to insights into the genetics of autism, cancers, and spinal muscular atrophy. (credit: Hui Y. Xiong et al./Science)

Deep Genomics, a University of Toronto spinoff, launched today (July 22), combining deep learning and artificial intelligence with the study of the human genome. The company is building on more than a decade of research and expertise in both fields.

Deep learning allows Deep Genomics to predict the consequences of genomic alterations on various cell mechanisms, potentially enabling life-changing decisions such as personalized medicine treatments, the researchers say.

“Our vision is to change the course of genomic medicine and help save lives by determining smarter treatment options,” says Brendan Frey, the company’s president and CEO, a Fellow of the Canadian Institute for Advanced Research, and a Professor at the University of Toronto.

SPIDEX, a Google for DNA mutation effects

Professor Brendan Frey (center-right) and colleagues at the University of Toronto Faculty of Applied Science & Engineering (credit: Roberta Baker/U of T Engineering)

The scientific community has spent decades searching for mutations within specific genes that can be connected to disease, such as the BRCA-1 and BRCA-2 genes for breast cancer. But there is a vast number of mutations and combinations of mutations that have never been observed or studied, posing a challenge for diagnostics and therapeutics today.

“We envision a future where computers are trusted to predict the outcome of laboratory experiments and treatments, long before anyone picks up a test tube,” says Frey. “Our first step will be to open up a genome-wide database of over 300 million potentially disease-causing variants, most of which are in regions of the genome that can’t be examined using other methods.”

Deep Genomics’ first product, called SPIDEX, provides information about how these DNA mutations may alter splicing in the cell, a process that is crucial for normal development. It also connects the dots between a variant or mutation of unknown significance and a variant that has been linked to a disease to determine its level of danger.

Because errant splicing is behind many diseases and disorders, including cancers and autism spectrum disorder, SPIDEX has immediate and practical importance for genetic testing and pharmaceutical development. The science validating the SPIDEX tool was described in the January 9, 2015 issue of the journal Science.

Labs will send the mutations they’ve collected to Deep Genomics, and the company will use its proprietary deep learning system, which includes SPIDEX, to “read” the genome and assess how likely each mutation is to cause a problem.
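As a conceptual illustration of that read-and-assess step (a hypothetical interface; neither the article nor the abstract describes SPIDEX’s actual API), a splicing-based variant scorer can be framed as the change in predicted splicing between the reference and mutated sequences:

```python
# Conceptual sketch of a splicing-based variant scorer: a trained model
# predicts the "percent spliced in" (psi) of an exon for the reference and
# the mutated sequence, and the variant's score is the change. `predict_psi`
# is a hypothetical stand-in for the deep model; SPIDEX's real interface and
# internals are not described in the article.

def predict_psi(sequence: str) -> float:
    """Hypothetical model call: fraction of transcripts that include the exon."""
    raise NotImplementedError("stand-in for the trained splicing model")

def score_variant(ref_seq: str, pos: int, alt_base: str) -> float:
    """Delta-psi: how strongly a single-base change shifts predicted splicing."""
    mut_seq = ref_seq[:pos] + alt_base + ref_seq[pos + 1:]
    return predict_psi(mut_seq) - predict_psi(ref_seq)

# A large |delta-psi| flags a variant as potentially damaging even when it
# lies deep within an intron, far from any annotated splice site.
```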

The company plans to further grow its team of machine learning, genome biology, and computational biology experts, and continue to invent new deep learning technologies and work with diagnosticians and biologists to understand the many complex ways that cells interpret DNA.

The company’s scientific advisory board includes Yann LeCun, Director, Facebook AI Research; Stephen Scherer, Director, The Center for Applied Genomics; and Jordan Lerner-Ellis, Director, Molecular Diagnostics at Mount Sinai Hospital.

More information: www.deepgenomics.com.


Abstract of The human splicing code reveals new insights into the genetic determinants of disease

To facilitate precision medicine and whole-genome annotation, we developed a machine-learning technique that scores how strongly genetic variants affect RNA splicing, whose alteration contributes to many diseases. Analysis of more than 650,000 intronic and exonic variants revealed widespread patterns of mutation-driven aberrant splicing. Intronic disease mutations that are more than 30 nucleotides from any splice site alter splicing nine times as often as common variants, and missense exonic disease mutations that have the least impact on protein function are five times as likely as others to alter splicing. We detected tens of thousands of disease-causing mutations, including those involved in cancers and spinal muscular atrophy. Examination of intronic and exonic variants found using whole-genome sequencing of individuals with autism revealed misspliced genes with neurodevelopmental phenotypes. Our approach provides evidence for causal variants and should enable new discoveries in precision medicine.

Korean researchers grow wafer-scale graphene on a silicon substrate

Wafer-scale (4 inch in diameter) synthesis of multi-layer graphene using high-temperature carbon ion implantation on nickel/SiO2/silicon (credit: J. Kim/Korea University, Korea)

Taking graphene a step closer to realistic commercial applications in silicon microelectronics, Korea University researchers have developed a simple microelectronics-compatible method for growing multi-layer graphene on a high-quality, wafer-scale (four inches in diameter) silicon substrate.

The method is based on the ion implantation technique — a process in which ions are accelerated under an electrical field and smashed into a semiconductor. The impacting ions change the physical, chemical, or electrical properties of the semiconductor.

Because of its high conductivity, “graphene is a potential contact electrode and an interconnection material linking semiconductor devices to form the desired electrical circuits,” explained Jihyun Kim, the team leader and a professor in the Department of Chemical and Biological Engineering at Korea University.

However, “to deposit large-area graphene that is free of wrinkles, tears, and residues on silicon wafers requires low temperatures. That can’t be achieved with conventional chemical vapor deposition, which requires a high growth temperature — above 1,000 degrees Celsius.” Such high temperatures can cause strains, metal spiking, cracks, wrinkles, and contamination from the diffusion of dopants.

“Our synthesis method is controllable and scalable, allowing us to obtain graphene as large as the size of the silicon wafer,” Kim said. The researchers’ next step is to further lower the temperature in the synthesis process and to control the thickness of the graphene for manufacturing production.

The research is described in an open-access paper published this week in the journal Applied Physics Letters.


Abstract of Wafer-scale synthesis of multi-layer graphene by high-temperature carbon ion implantation

We report on the synthesis of wafer-scale (4 in. in diameter) high-quality multi-layer graphene using high-temperature carbon ion implantation on thin Ni films on a substrate of SiO2/Si. Carbon ions were bombarded at 20 keV and a dose of 1 × 10^15 cm^−2 onto the surface of the Ni/SiO2/Si substrate at a temperature of 500 °C. This was followed by high-temperature activation annealing (600–900 °C) to form a sp2-bonded honeycomb structure. The effects of post-implantation activation annealing conditions were systematically investigated by micro-Raman spectroscopy and transmission electron microscopy. Carbon ion implantation at elevated temperatures allowed a lower activation annealing temperature for fabricating large-area graphene. Our results indicate that carbon-ion implantation provides a facile and direct route for integrating graphene with Si microelectronics.

Deep neural network program recognizes sketches more accurately than a human

The Sketch-a-Net program successfully identified a seagull, pigeon, flying bird and standing bird better than humans (credit: QMUL, Mathias Eitz, James Hays and Marc Alexa)

The first computer program that can recognize hand-drawn sketches better than humans has been developed by researchers from Queen Mary University of London.

Known as Sketch-a-Net, the program correctly identified the subject of sketches 74.9 per cent of the time, compared with humans, who managed a success rate of only 73.1 per cent.

As sketching becomes more relevant with the increasing use of touchscreens, the technology could lead to new ways to interact with computers. Touchscreens could understand what you are drawing, enabling you to retrieve a specific image by drawing it with your fingers, which is more natural than keyword searches for finding items such as furniture or fashion accessories.

The improvement could also aid police forensics when an artist’s impression of a criminal needs to be matched to a mugshot or CCTV database.

The research also showed that the program performed better at determining finer details in sketches. For example, it was able to successfully distinguish “seagull,” “flying-bird,” “standing-bird” and “pigeon” with 42.5 per cent accuracy compared to humans, who only achieved 24.8 per cent.

Sketch-a-Net is a “deep neural network” program, designed to emulate the processing of the human brain. It is particularly successful because it accommodates the unique characteristics of sketches, notably the order in which the strokes were drawn. This information was previously ignored but is especially important for understanding drawings on touchscreens.
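A minimal PyTorch sketch of the stroke-order idea: instead of a single grayscale channel, the input stacks several binary channels, each rendering the strokes drawn up to a different point in time, so ordering is visible to the network. The layer sizes below are generic placeholders, not the published Sketch-a-Net architecture (the TU-Berlin benchmark used in this line of work has 250 sketch categories):

```python
# Minimal PyTorch sketch of the stroke-order idea: the input stacks several
# binary channels, each rendering the strokes drawn up to a different point
# in time, so the network can see ordering. The layers are generic
# placeholders, not the published Sketch-a-Net architecture.
import torch
import torch.nn as nn

N_ORDER_CHANNELS = 3   # e.g., first third, first two thirds, full sketch
N_CLASSES = 250        # the TU-Berlin sketch benchmark has 250 categories

model = nn.Sequential(
    nn.Conv2d(N_ORDER_CHANNELS, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, N_CLASSES),
)

# One fake batch: 8 sketches rasterized at 128x128 with cumulative-stroke channels.
x = torch.rand(8, N_ORDER_CHANNELS, 128, 128)
logits = model(x)     # shape: (8, 250)
print(logits.shape)
```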


Abstract of Sketch-a-Net that Beats Humans

We propose a multi-scale multi-channel deep neural network framework that, for the first time, yields sketch recognition performance surpassing that of humans. Our superior performance is a result of explicitly embedding the unique characteristics of sketches in our model: (i) a network architecture designed for sketch rather than natural photo statistics, (ii) a multi-channel generalisation that encodes sequential ordering in the sketching process, and (iii) a multi-scale network ensemble with joint Bayesian fusion that accounts for the different levels of abstraction exhibited in free-hand sketches. We show that state-of-the-art deep networks specifically engineered for photos of natural objects fail to perform well on sketch recognition, regardless whether they are trained using photo or sketch. Our network on the other hand not only delivers the best performance on the largest human sketch dataset to date, but also is small in size making efficient training possible using just CPUs.

Metal foams found to excel in shielding X-rays, gamma rays, neutron radiation

Lightweight composite metal foams like this one have been found effective at blocking X-rays, gamma rays and neutron radiation, and are capable of absorbing the energy of high impact collisions — holding promise for use in nuclear safety, space exploration, and medical technology applications (credit: Afsaneh Rabiei, North Carolina State University)

North Carolina State University researchers have found that lightweight composite metal foams they had developed are effective at blocking X-rays, gamma rays, and neutron radiation, and are capable of absorbing the energy of high-impact collisions. The finding holds promise for use in nuclear power plants, space exploration, and CT-scanner shielding.

“This work means there’s an opportunity to use composite metal foam to develop safer systems for transporting nuclear waste, more efficient designs for spacecraft and nuclear structures, and new shielding for use in CT scanners,” says Afsaneh Rabiei, a professor of mechanical and aerospace engineering at NC State.

Rabiei first developed the strong, lightweight metal foam, made of steel, tungsten, and vanadium, for use in transportation and military applications. But she wanted to determine whether the foam could be used for nuclear or space exploration applications: could it provide structural support and protect against high impacts while also shielding against various forms of radiation?

So she and her colleagues conducted multiple tests to see how effective it was at blocking X-rays, gamma rays, and neutron radiation. She then compared the material’s performance to the performance of bulk materials that are currently used in shielding applications. The comparison was made using samples of the same “areal” density – meaning that each sample had the same weight, but varied in volume.
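The equal-areal-density comparison has a simple physical basis: photon transmission through a shield is T = exp(−(μ/ρ) · ρt), where ρt is the areal density (g/cm²) and μ/ρ is the material’s mass attenuation coefficient (cm²/g), so at fixed mass per unit area only μ/ρ matters. A rough sketch with approximate, illustrative coefficients for ~100 keV photons (not values from the paper):

```python
# Why equal areal density makes shielding comparisons fair: for photons,
# transmission is T = exp(-(mu/rho) * areal_density), so at fixed mass per
# unit area only the material's mass attenuation coefficient matters.
# Coefficients below are rough illustrative values for ~100 keV photons,
# not measurements from the paper.
import math

MU_OVER_RHO_CM2_PER_G = {   # approximate mass attenuation coefficients
    "lead": 5.5,
    "steel (iron)": 0.37,
}

AREAL_DENSITY_G_PER_CM2 = 1.0   # same mass per unit area for every sample

for material, mu_rho in MU_OVER_RHO_CM2_PER_G.items():
    transmission = math.exp(-mu_rho * AREAL_DENSITY_G_PER_CM2)
    print(f"{material}: {100 * transmission:.1f}% of photons transmitted")
```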

Better than lead and non-toxic

The researchers found that the high-Z foam was comparable to bulk materials at blocking high-energy gamma rays, but was much better than bulk materials — even bulk steel — at blocking low-energy gamma rays; it outperformed other materials at blocking neutron radiation and was better than most materials at blocking X-rays. It was not quite as effective as lead, but it has the advantages of being lightweight and more environmentally friendly.

“However, we are working to modify the composition of the metal foam to be even more effective than lead at blocking X-rays, and our early results are promising,” Rabiei says. “And our foams have the advantage of being non-toxic, which means that they are easier to manufacture and recycle. In addition, the extraordinary mechanical and thermal properties of composite metal foams, and their energy absorption capabilities, make the material a good candidate for various nuclear structural applications.”

The research paper was published in Radiation Physics and Chemistry. It was supported by the DOE Office of Nuclear Energy under its Nuclear Energy University Program.


Abstract of Attenuation efficiency of X-ray and comparison to gamma ray and neutrons in composite metal foams

Steel-steel composite metal foams (S-S CMFs) and Aluminum-steel composite metal foams (Al-S CMFs) with various sphere sizes and matrix materials were manufactured and investigated for nuclear and radiation environments applications. 316 L stainless steel, high-speed T15 steel and aluminum materials were used as the matrix material together with 2, 4 and 5.2 mm steel hollow spheres to manufacture various types of composite metal foams (CMFs). High-speed T15 steel is selected due to its high tungsten and vanadium concentration (both high-Z elements) to further improve the shielding efficiency of CMFs. This new type of S-S CMF is called High-Z steel-steel composite metal foam (HZ S-S CMF). Radiation shielding efficiency of all types of CMFs was explored for the attenuation of X-ray, gamma ray and neutron. The experimental results were compared with pure lead and Aluminum A356, and verified theoretically through XCOM and Monte Carlo Z-particle Transport Code (MCNP). It was observed that the radiation shielding effectiveness of CMFs is relatively independent of sphere sizes as long as the ratio of sphere-wall thickness to its outer-diameter stays constant. However, the smaller spheres seem to be more efficient in general due to the fine fluctuation in the gray value profile of their 2D Micro-CT images. S-S CMFs and Al-S CMFs are respectively 275% and 145% more effective for X-ray attenuation than Aluminum A356. Compared to pure lead, CMFs show adequate attenuation with additional advantages of being lightweight and more environmentally friendly. The mechanical performance of HZ S-S CMFs under quasi-static compression was compared to that of other classes of S-S CMF. It is observed that the addition of high-Z elements to the matrix of CMFs improved their shielding against X-rays, low energy gamma rays and neutrons, while maintained their low density, high mechanical properties and high-energy absorption capability.

Russian billionaire, Hawking announce $100 million search for ET

Green Bank Telescope (credit: Geremia/Wikimedia Commons)

Russian billionaire Yuri Milner, Stephen Hawking, Martin Rees, Frank Drake and others announced at The Royal Society today $100 million funding for Breakthrough Listen — the “most powerful, comprehensive, and intensive scientific search ever undertaken for signs of intelligent life beyond Earth.”

They also announced $1 million prize funding for Breakthrough Message, a competition to generate messages representing humanity and planet Earth.

“It’s time to commit to finding the answer, to search for life beyond Earth,” said Hawking. “We are alive, we are intelligent, we must know … if we are alone in the dark.”

The search will be done at two of the largest radio telescopes: the 100-meter Robert C. Byrd Green Bank Telescope in West Virginia, the world’s largest steerable radio telescope, and the 64-meter Parkes Telescope in New South Wales, Australia. The Automated Planet Finder Telescope at Lick Observatory in California will undertake the world’s deepest and broadest search for optical laser transmissions.

More sensitive, faster, wider spectrum, more sky coverage

The Breakthrough Listen initiative will be 50 times more sensitive than previous programs dedicated to SETI research, the scientists say. It will cover ten times more of the sky than previous programs and will scan at least five times more of the radio spectrum, and 100 times faster.  It will survey the one million closest stars to Earth and the 100 closest galaxies.

The program will generate what may be the largest amount of scientific data ever made available to the public, at tens of gigahertz of bandwidth, the scientists said, and all data and software will be open-source and available to the public. The initiative will also be joining and supporting SETI@home, UC Berkeley’s distributed computing platform, with 9 million volunteers donating their spare computing power to search astronomical data for signs of life.

The second initiative, Breakthrough Message, will be an international competition to create digital messages that represent humanity and planet Earth, with prizes totaling $1,000,000. It will not be a commitment to send messages.

Other leaders for the two initiatives are astronomer Pete Worden, Chairman, Breakthrough Prize Foundation and former Director, NASA Ames Research Center; professor of astronomy Geoff Marcy, UC Berkeley; writer/producer Ann Druyan, Creative Director of the Interstellar Message, NASA Voyager; SETI@home project chief scientist Dan Werthimer; and Andrew Siemion, Director, Berkeley SETI Research Center.

More information: Breakthrough Initiatives.


Breakthrough Life In The Universe Initiatives Press Conference

Brain-inspired algorithms may make for optimized computational networks

Salk and Carnegie Mellon researchers developed a new model for building efficient networks by studying the rate at which the brain prunes back some of its connections during development. In this model, nodes (such as neurons or sensors) make too many connections (left) before pruning back to connections that are most relevant (right). The team applied their synaptic pruning-based algorithm to air flight patterns and found it was able to create routes to allow passengers to reach their destinations efficiently. (credit: Salk Institute and Carnegie Mellon University)

The developing brain prunes (eliminates) unneeded connections between neurons during early childhood. Now researchers from the Salk Institute for Biological Studies and Carnegie Mellon University have determined the rate at which that happens, and the implications of that finding for computational networks.


Neural networks in the brain are refined through a process called pruning. At birth and throughout early childhood, the brain’s neurons make a vast number of connections — many more than the brain actually needs to function. So as the brain matures and learns, it begins to quickly cut away connections that aren’t being used. When the brain reaches adulthood, it has about 50 to 60 percent fewer synaptic connections than it had at its peak in childhood. Understanding how the network of neurons in the brain organizes to form its adult structure is key to understanding how the brain learns and functions.


“By thinking computationally about how the brain develops, we questioned how rates of synapse pruning may affect network topology and function,” says Saket Navlakha, assistant professor at the Salk Institute’s Center for Integrative Biology and a former postdoctoral researcher in Carnegie Mellon’s Machine Learning Department. “We have used the resulting insights to develop new algorithms for constructing adaptive and robust networks in other domains.” The findings were recently published in an open-access paper in PLOS Computational Biology.

But the process the brain uses is very different from the one network engineers conventionally use to find an optimal network structure. Computer science and engineering networks initially contain a small number of connections, then add more connections as needed.

An improved computer-network algorithm based on brain pruning

“Engineered networks are built by adding connections rather than removing them. You would think that developing a network using a pruning process would be wasteful,” says Ziv Bar-Joseph, associate professor in Carnegie Mellon’s Machine Learning and Computational Biology departments. “But as we showed, there are cases where such a process can prove beneficial for engineering as well.”

The researchers first determined key aspects of the pruning process by counting the number of synapses present in a mouse model’s somatosensory cortex over time. After counting synapses in more than 10,000 electron microscopy images, they found that synapses were rapidly pruned early in development, and then as time progressed, the pruning rate slowed.

The results of these experiments allowed the team to develop an algorithm for designing computational networks based on the brain’s pruning approach. Using simulations and theoretical analysis, they found that the neuroscience-based algorithm produced computer networks that were much more efficient than those produced by current engineering methods: the flow of information was more direct, and the networks provided multiple paths for information to reach the same endpoint, minimizing the risk of failure.
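A toy version of that decreasing-rate pruning strategy (an illustrative sketch, not the paper’s simulation code): start hyper-connected, measure which edges simulated traffic actually uses, and prune the least-used edges in rounds of shrinking size.

```python
# Toy decreasing-rate pruning: over-connect a random network, then remove the
# least-used edges in rounds whose sizes shrink over time (aggressive early,
# gentle late), mirroring the pruning-rate pattern measured in the brain.
import random
from collections import Counter

random.seed(0)
N_NODES, INITIAL_DEGREE = 30, 10

# Hyper-connected start: each node draws many random partners (deduplicated,
# self-loops dropped).
edges = {tuple(sorted((u, random.randrange(N_NODES))))
         for u in range(N_NODES) for _ in range(INITIAL_DEGREE)}
edges = {e for e in edges if e[0] != e[1]}

def usage_counts(edges):
    """Count edge usage over random one- or two-hop routing demands (toy traffic)."""
    adj = {n: set() for n in range(N_NODES)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    counts = Counter()
    for _ in range(2000):
        src, dst = random.sample(range(N_NODES), 2)
        if dst in adj[src]:                      # direct hop
            counts[tuple(sorted((src, dst)))] += 1
        else:                                    # relay through common neighbors
            for mid in adj[src] & adj[dst]:
                counts[tuple(sorted((src, mid)))] += 1
                counts[tuple(sorted((mid, dst)))] += 1
    return counts

# Decreasing-rate pruning: aggressive early rounds, gentle later ones.
for rate in (0.30, 0.15, 0.07):
    counts = usage_counts(edges)
    n_remove = int(rate * len(edges))
    edges -= set(sorted(edges, key=lambda e: counts[e])[:n_remove])
    print(f"pruned {n_remove} least-used edges ({rate:.0%} rate); {len(edges)} remain")
```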

Optimizing airline routes as a test case

Delta U.S. routes (not the focus of this study) (credit: David Galvin/University of Notre Dame)

“We took this high-level algorithm that explains how neural structures are built during development and used that to inspire an algorithm for an engineered network,” says Alison Barth, professor in Carnegie Mellon’s Department of Biological Sciences and member of the university’s BrainHub initiative. “It turns out that this neuroscience-based approach could offer something new for computer scientists and engineers to think about as they build networks.”

Improving airline efficiency and robustness using pruning algorithms. Based on actual data of travel frequency among 122 popular cities from the 3rd quarter of 2013, researchers derived a comparison of efficiency (travel time in terms of number of hops) and robustness (number of alternative routes with the same number of hops) using different algorithms. Decreasing-rate pruning produced more efficient networks with similar robustness. (credit: Saket Navlakha et al./PLOS Computational Biology)

As a test of how the algorithm could be used outside of neuroscience, Navlakha applied the algorithm to flight data from the U.S. Department of Transportation. He found that the synaptic pruning-based algorithm created the most effective routes to allow passengers to reach their destinations.

“We realize that it wouldn’t be cost effective to apply this to networks that require significant infrastructure, like railways or pipelines,” Navlakha said. “But for those that don’t, like wireless networks and sensor networks, this could be a valuable adaptive method to guide the formation of networks.”


Abstract of Decreasing-rate Pruning Optimizes the Construction of Efficient and Robust Distributed Networks

Robust, efficient, and low-cost networks are advantageous in both biological and engineered systems. During neural network development in the brain, synapses are massively over-produced and then pruned-back over time. This strategy is not commonly used when designing engineered networks, since adding connections that will soon be removed is considered wasteful. Here, we use this process as inspiration for a new network design algorithm, which also led to a new experimental hypothesis. In particular, we show that for large distributed routing networks, network function is markedly enhanced by hyper-connectivity followed by aggressive pruning and that the global rate of pruning, a developmental parameter not previously studied by experimentalists, plays a critical role in optimizing network structure. We first used high-throughput image analysis techniques to quantify the rate of pruning in the mammalian neocortex across a broad developmental time window and found that the rate is decreasing over time. Based on these results, we analyzed a model of computational routing networks and show using both theoretical analysis and simulations that decreasing rates lead to more robust and efficient networks compared to other rates. We also present an application of this strategy to improve the distributed design of airline networks. This inspiration from neural network formation suggests effective ways to design distributed networks across several domains.

Can your phone really know you’re depressed?

StudentLife app, sensing, and analytics system architecture (credit: Rui Wang et al.)

Northwestern scientists believe an open-access Android cell phone app called Purple Robot can detect depression simply by tracking the number of minutes you use the phone and your daily geographical locations.

The more time you spend using your phone, the more likely you are depressed, they found in a small Northwestern Medicine study published yesterday (July 15) in the Journal of Medical Internet Research. The average daily usage for depressed individuals was about 68 minutes, while for non-depressed individuals it was about 17 minutes.

Another factor was your location. Spending most of your time at home, and most of your time in fewer locations, as measured by GPS tracking, is also linked to depression.

In addition, having a less regular day-to-day schedule (leaving your house and going to work at different times each day, for example) is also linked to depression.

Based on those three factors, the researchers claim they could identify which of the 28 individuals they recruited from Craigslist had depressive symptoms — as measured by a standardized depression questionnaire called the PHQ-9 — with 87 percent accuracy.

Example phone usage data from a participant. Each row is a day, and the black bars show the extent of time during which the phone has been in use. The bars on the right side show the overall phone usage duration for each day. (credit: Sohrob Saeb et al./Journal of Medical Internet Research)

“The significance of this is we can detect if a person has depressive symptoms and the severity of those symptoms without asking them any questions,” said senior author David Mohr, director of the Center for Behavioral Intervention Technologies at Northwestern University Feinberg School of Medicine. “We now have an objective measure of behavior related to depression. And we’re detecting it passively. Phones can provide data unobtrusively and with no effort on the part of the user.”

Better than questionnaires

The smartphone data was more reliable in detecting depression than daily questions participants answered about how sad they were feeling on a scale of 1 to 10. Those answers may be rote and often not reliable, said lead author Sohrob Saeb, a postdoctoral fellow and computer scientist in preventive medicine at Feinberg.

“The data showing depressed people tended not to go many places reflects the loss of motivation seen in depression,” said Mohr, who is a clinical psychologist and professor of preventive medicine at Feinberg. “When people are depressed, they tend to withdraw and don’t have the motivation or energy to go out and do things.”

The research could ultimately lead to monitoring people at risk of depression and enabling health care providers to intervene more quickly, they suggest.

While the phone usage data didn’t identify how people were using their phones, Mohr suspects people who spent the most time on them were surfing the web or playing games, rather than talking to friends. “People are likely, when on their phones, to avoid thinking about things that are troubling, painful feelings or difficult relationships,” Mohr said. “It’s an avoidance behavior we see in depression.”

That assumption seems questionable; non-depressed people often spend time on their phones texting, checking Facebook, reading email, etc.

Saeb analyzed the GPS locations and phone usage of 28 individuals (20 females and eight males, average age of 29) over two weeks. The sensor tracked GPS locations every five minutes.

To determine the relationship among phone usage, geographical location, and depression, the subjects took a widely used standardized questionnaire measuring depression, the PHQ-9, at the beginning of the two-week study. The PHQ-9 asks about symptoms used to diagnose depression, such as sadness, loss of pleasure, hopelessness, disturbances in sleep and appetite, and difficulty concentrating. Saeb then developed algorithms using the GPS and phone usage data collected from the phones, and correlated the output of those algorithms with the subjects’ depression test results.

Of the participants, 14 did not have any signs of depression and 14 had symptoms ranging from mild to severe depression.
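In code, that correlation step might look like the following minimal sketch. The feature definitions are simplified stand-ins for the paper’s (location variance from GPS, total usage minutes), and the arrays are synthetic placeholder data, not the study’s:

```python
# Minimal sketch of the correlation analysis: extract one behavioral feature
# per participant and correlate it with the PHQ-9 score. Feature definitions
# are simplified stand-ins, and all arrays are synthetic placeholders rather
# than the study's data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n = 28  # participants, matching the study's analyzed sample size

phq9 = rng.integers(0, 20, size=n).astype(float)        # depression scores
usage_minutes = 15 + 3.0 * phq9 + rng.normal(0, 10, n)  # synthetic usage data
location_variance = 5 - 0.2 * phq9 + rng.normal(0, 1, n)

for name, feature in [("usage duration", usage_minutes),
                      ("location variance", location_variance)]:
    r, p = pearsonr(feature, phq9)
    print(f"{name}: r = {r:+.2f}, p = {p:.3f}")

# The paper reports r = .54 for usage duration and r = -.58 for location
# variance; a classifier on such features separated PHQ-9 >= 5 from < 5.
```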

The goal of the research is to passively detect depression and different levels of emotional states related to depression, Saeb said. The information ultimately could be used to monitor people who are at risk of depression to, perhaps, offer them interventions if the sensor detected depression or to deliver the information to their clinicians. Future Northwestern research will look at whether getting people to change those behaviors linked to depression improves their mood.

“We will see if we can reduce symptoms of depression by encouraging people to visit more locations throughout the day, have a more regular routine, spend more time in a variety of places or reduce mobile phone use,” Saeb said.

In addition to studies that use mobile phone sensor data to better understand depression, Mohr’s team also is running clinical trials to treat depression and anxiety using evidence-based interventions.

Contact ehealth@northwestern.edu or 855-682-2487 to learn how to join one of their paid research studies, or visit http://cbitshealth.northwestern.edu/.

This research was funded by research grants from the National Institute of Mental Health of the National Institutes of Health.


Abstract of Mobile Phone Sensor Correlates of Depressive Symptom Severity in Daily-Life Behavior: An Exploratory Study

Background: Depression is a common, burdensome, often recurring mental health disorder that frequently goes undetected and untreated. Mobile phones are ubiquitous and have an increasingly large complement of sensors that can potentially be useful in monitoring behavioral patterns that might be indicative of depressive symptoms.

Objective: The objective of this study was to explore the detection of daily-life behavioral markers using mobile phone global positioning systems (GPS) and usage sensors, and their use in identifying depressive symptom severity.

Methods: A total of 40 adult participants were recruited from the general community to carry a mobile phone with a sensor data acquisition app (Purple Robot) for 2 weeks. Of these participants, 28 had sufficient sensor data received to conduct analysis. At the beginning of the 2-week period, participants completed a self-reported depression survey (PHQ-9). Behavioral features were developed and extracted from GPS location and phone usage data.

Results: A number of features from GPS data were related to depressive symptom severity, including circadian movement (regularity in 24-hour rhythm; r=-.63, P=.005), normalized entropy (mobility between favorite locations; r=-.58, P=.012), and location variance (GPS mobility independent of location; r=-.58, P=.012). Phone usage features, usage duration, and usage frequency were also correlated (r=.54, P=.011, and r=.52, P=.015, respectively). Using the normalized entropy feature and a classifier that distinguished participants with depressive symptoms (PHQ-9 score ≥5) from those without (PHQ-9 score <5), we achieved an accuracy of 86.5%. Furthermore, a regression model that used the same feature to estimate the participants’ PHQ-9 scores obtained an average error of 23.5%.

Conclusions: Features extracted from mobile phone sensor data, including GPS and phone usage, provided behavioral markers that were strongly related to depressive symptom severity. While these findings must be replicated in a larger study among participants with confirmed clinical symptoms, they suggest that phone sensors offer numerous clinical opportunities, including continuous monitoring of at-risk populations with little patient burden and interventions that can provide just-in-time outreach.