New nanomaterial, quantum encryption system could be ultimate defenses against hackers

New physically unclonable nanomaterial (credit: Abdullah Alharbi et al./ACS Nano)

Recent advances in quantum computers may soon give hackers access to machines powerful enough to crack even the toughest of standard internet security codes. With these codes broken, all of our online data — from medical records to bank transactions — could be vulnerable to attack.

Now, a new low-cost nanomaterial developed by researchers at the New York University Tandon School of Engineering can be tuned to act as a secure authentication key for encrypting computer hardware and data. Because the layered molybdenum disulfide (MoS2) nanomaterial cannot be physically cloned (duplicated), it could replace programmed keys, which can be hacked.

In a paper published in the journal ACS Nano, the researchers explain that the new nanomaterial has the highest possible level of structural randomness, making it physically unclonable. It achieves this with randomly occurring regions that alternately emit or do not emit light. When exposed to light, this pattern can be used to create a one-of-a-kind binary cryptographic authentication key that could secure hardware components at minimal cost.
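The key-generation idea can be illustrated with a toy sketch (this is not the authors' code; the pixel values, array size ordering, and threshold choice here are invented for illustration): measure the photoemission of each pixel, then emit a 1 or 0 depending on whether it clears a threshold.

```python
# Toy illustration (not the published method's actual code): converting a
# random photoemission map into a binary cryptographic key by thresholding,
# in the spirit of the ACS Nano paper. In the real device the randomness
# comes from the material's multilayer islands; here it is simulated.
import random

random.seed(42)  # simulation only; a real key's entropy comes from the film

# Simulated photoemission intensities for a 2048-pixel array
intensities = [random.random() for _ in range(2048)]

def make_key(values, threshold=0.5):
    """Emit '1' for pixels brighter than the threshold, else '0'."""
    return "".join("1" if v > threshold else "0" for v in values)

key = make_key(intensities)
print(len(key))    # length of the resulting binary key
print(key[:16])    # first 16 bits
```

As the abstract notes, the choice of threshold matters: set it too high or too low and the key becomes mostly 0s or mostly 1s, reducing its randomness.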

The research team envisions a future in which similar nanomaterials can be inexpensively produced at scale and applied to a chip or other hardware component. “No metal contacts are required, and production could take place independently of the chip fabrication process,” according to Davood Shahrjerdi, Assistant Professor of Electrical and Computer Engineering. “It’s maximum security with minimal investment.”

The National Science Foundation and the U.S. Army Research Office supported the research.

A high-speed quantum encryption system to secure the future internet

Schematic of the experimental quantum key distribution setup (credit: Nurul T. Islam et al./Science Advances)

Another approach to the hacker threat is being developed by scientists at Duke University, The Ohio State University and Oak Ridge National Laboratory. It would use the properties that drive quantum computers to create theoretically hack-proof forms of quantum data encryption.

Called quantum key distribution (QKD), it takes advantage of one of the fundamental properties of quantum mechanics: Measuring tiny bits of matter like electrons or photons automatically changes their properties, which would immediately alert both parties to the existence of a security breach. However, current QKD systems can only transmit keys at relatively low rates — up to hundreds of kilobits per second — which are too slow for most practical uses on the internet.
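Why measurement reveals an eavesdropper can be seen in a toy simulation of the textbook BB84 qubit protocol (the Duke/OSU/ORNL system actually uses higher-dimensional time-bin qudits; this simplified sketch, with invented round counts, only illustrates the general principle): an interceptor who measures photons in randomly chosen bases disturbs them, raising the error rate the legitimate parties observe.

```python
# Toy BB84-style sketch (illustrative only): measuring a photon in the
# wrong basis randomizes its bit, so an intercept-resend eavesdropper
# leaves a detectable ~25% error rate in the sifted key.
import random

random.seed(1)
N = 4000  # number of transmitted photons (arbitrary for the demo)

def run(eavesdrop):
    errors = matches = 0
    for _ in range(N):
        bit, basis_a = random.randint(0, 1), random.randint(0, 1)
        state = (bit, basis_a)
        if eavesdrop:
            basis_e = random.randint(0, 1)
            # wrong-basis measurement yields a random result
            bit_e = state[0] if basis_e == state[1] else random.randint(0, 1)
            state = (bit_e, basis_e)  # photon is re-sent in Eve's basis
        basis_b = random.randint(0, 1)
        bit_b = state[0] if basis_b == state[1] else random.randint(0, 1)
        if basis_a == basis_b:  # keep only matching-basis rounds (sifting)
            matches += 1
            errors += bit_b != bit
    return errors / matches

print(f"error rate without eavesdropper: {run(False):.2f}")
print(f"error rate with eavesdropper:    {run(True):.2f}")   # roughly 0.25
```

Comparing a sample of the sifted key over a public channel exposes this elevated error rate, which is what "immediately alerts both parties" to a breach.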

The new experimental QKD system is capable of creating and distributing encryption codes at megabit-per-second rates — five to 10 times faster than existing methods and on a par with current internet speeds when running several systems in parallel. In an online open-access article in Science Advances, the researchers show that the technique is secure from common attacks, even in the face of equipment flaws that could open up leaks.

This research was supported by the Office of Naval Research, the Defense Advanced Research Projects Agency, and Oak Ridge National Laboratory.


Abstract of Physically Unclonable Cryptographic Primitives by Chemical Vapor Deposition of Layered MoS2

Physically unclonable cryptographic primitives are promising for securing the rapidly growing number of electronic devices. Here, we introduce physically unclonable primitives from layered molybdenum disulfide (MoS2) by leveraging the natural randomness of their island growth during chemical vapor deposition (CVD). We synthesize a MoS2 monolayer film covered with speckles of multilayer islands, where the growth process is engineered for an optimal speckle density. Using the Clark–Evans test, we confirm that the distribution of islands on the film exhibits complete spatial randomness, hence indicating the growth of multilayer speckles is a spatial Poisson process. Such a property is highly desirable for constructing unpredictable cryptographic primitives. The security primitive is an array of 2048 pixels fabricated from this film. The complex structure of the pixels makes the physical duplication of the array impossible (i.e., physically unclonable). A unique optical response is generated by applying an optical stimulus to the structure. The basis for this unique response is the dependence of the photoemission on the number of MoS2 layers, which by design is random throughout the film. Using a threshold value for the photoemission, we convert the optical response into binary cryptographic keys. We show that the proper selection of this threshold is crucial for maximizing combination randomness and that the optimal value of the threshold is linked directly to the growth process. This study reveals an opportunity for generating robust and versatile security primitives from layered transition metal dichalcogenides.
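The Clark–Evans test mentioned in the abstract can be sketched in a few lines (this is a generic textbook version with invented point data, not the authors' analysis): compute the ratio R of the observed mean nearest-neighbor distance to the value expected for a Poisson pattern of the same density. R near 1 indicates complete spatial randomness; R well below 1 indicates clustering; R well above 1 indicates regularity.

```python
# Minimal sketch of the Clark-Evans nearest-neighbor test for complete
# spatial randomness (no edge correction; illustrative only).
import math
import random

random.seed(7)

def clark_evans_R(points, area):
    n = len(points)
    nearest = []
    for i, (x1, y1) in enumerate(points):
        d = min(math.hypot(x1 - x2, y1 - y2)
                for j, (x2, y2) in enumerate(points) if j != i)
        nearest.append(d)
    observed = sum(nearest) / n
    expected = 0.5 / math.sqrt(n / area)  # Poisson (random) expectation
    return observed / expected

# Uniformly random "islands" on a unit square should give R close to 1
pts = [(random.random(), random.random()) for _ in range(400)]
print(round(clark_evans_R(pts, area=1.0), 2))
```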


Abstract of Provably secure and high-rate quantum key distribution with time-bin qudits

The security of conventional cryptography systems is threatened in the forthcoming era of quantum computers. Quantum key distribution (QKD) features fundamentally proven security and offers a promising option for quantum-proof cryptography solutions. Although prototype QKD systems over optical fiber have been demonstrated over the years, the key generation rates remain several orders of magnitude lower than current classical communication systems. In an effort toward a commercially viable QKD system with improved key generation rates, we developed a discrete-variable QKD system based on time-bin quantum photonic states that can generate provably secure cryptographic keys at megabit-per-second rates over metropolitan distances. We use high-dimensional quantum states that transmit more than one secret bit per received photon, alleviating detector saturation effects in the superconducting nanowire single-photon detectors used in our system that feature very high detection efficiency (of more than 70%) and low timing jitter (of less than 40 ps). Our system is constructed using commercial off-the-shelf components, and the adopted protocol can be readily extended to free-space quantum channels. The security analysis adopted to distill the keys ensures that the demonstrated protocol is robust against coherent attacks, finite-size effects, and a broad class of experimental imperfections identified in our system.

Space dust may transport life between worlds

Imagine what this amazingly resilient microscopic (0.2 to 0.7 millimeter) milnesium tardigradum animal could evolve into on another planet. (credit: Wikipedia)

Life on our planet might have originated from biological particles brought to Earth in streams of space dust, according to a study published in the journal Astrobiology.

A huge amount of space dust (~10,000 kilograms — about the weight of two elephants) enters our atmosphere every day — possibly delivering organisms from far-off worlds, according to Professor Arjun Berera from the University of Edinburgh School of Physics and Astronomy, who led the study.

The dust streams could also collide with bacteria and other biological particles at 150 km or higher above Earth’s surface with enough energy to knock them into space, carrying Earth-based organisms to other planets and perhaps beyond.

The finding suggests that large asteroid impacts may not be the sole mechanism by which life could transfer between planets, as previously thought.

“The streaming of fast space dust is found throughout planetary systems and could be a common factor in proliferating life,” said Berera. Some bacteria, plants, and even microscopic animals called tardigrades* are known to be able to survive in space, so it is possible that such organisms — if present in Earth’s upper atmosphere — might collide with fast-moving space dust and withstand a journey to another planet.**

The study was partly funded by the U.K. Science and Technology Facilities Council.

* “Some tardigrades can withstand extremely cold temperatures down to 1 K (−458 °F; −272 °C) (close to absolute zero), while others can withstand extremely hot temperatures up to 420 K (300 °F; 150 °C)[12] for several minutes, pressures about six times greater than those found in the deepest ocean trenches, ionizing radiation at doses hundreds of times higher than the lethal dose for a human, and the vacuum of outer space. They can go without food or water for more than 30 years, drying out to the point where they are 3% or less water, only to rehydrate, forage, and reproduce.” — Wikipedia

** “Over the lifespan of the Earth of four billion years, particles emerging from Earth by this manner in principle could have traveled out as far as tens of kiloparsecs [one kiloparsec = 3,260 light years; our galaxy is about 100,000 light-years across]. This material horizon, as could be called the maximum distance on pure kinematic grounds that a material particle from Earth could travel outward based on natural processes, would cover most of our Galactic disk [the "Milky Way"], and interestingly would be far enough out to reach the Earth-like or potentially habitable planets that have been identified.” — Arjun Berera/Astrobiology


Abstract of Space Dust Collisions as a Planetary Escape Mechanism

It is observed that hypervelocity space dust, which is continuously bombarding Earth, creates immense momentum flows in the atmosphere. Some of this fast space dust inevitably will interact with the atmospheric system, transferring energy and moving particles around, with various possible consequences. This paper examines, with supporting estimates, the possibility that by way of collisions the Earth-grazing component of space dust can facilitate planetary escape of atmospheric particles, whether they are atoms and molecules that form the atmosphere or larger-sized particles. An interesting outcome of this collision scenario is that a variety of particles that contain telltale signs of Earth’s organic story, including microbial life and life-essential molecules, may be “afloat” in Earth’s atmosphere. The present study assesses the capability of this space dust collision mechanism to propel some of these biological constituents into space. Key Words: Hypervelocity space dust—Collision—Planetary escape—Atmospheric constituents—Microbial life. Astrobiology 17, xxx–xxx.

Using microrobots to diagnose and treat illness in remote areas of the body

Spirulina algae coated with magnetic particles to form a microrobot. Devices such as these could be developed to diagnose and treat illness in hard-to-reach parts of the body. (credit: Yan et al./Science Robotics)

Imagine a swarm of remote-controlled microrobots, each a few micrometers in length (small enough to travel through blood vessels), unleashed into your body to swim through your intestinal tract or blood vessels, for example. The goal: to diagnose and treat illness in hard-to-reach areas of the body.

An international team of researchers, led by the Chinese University of Hong Kong, is now experimenting with this idea (starting with rats) — using microscopic Spirulina algae coated with biocompatible magnetic nanoparticles to form the microswimmers.

Schematic of dip-coating S. platensis algae in a suspension of magnetite nanoparticles and growing microrobots. The time taken for the robots to function and biodegrade within the body could be tailored by adjusting the thickness of the coating. (credit: Xiaohui Yan et al./Science Robotics)

There are two methods being studied: (1) track the microswimmers in tissue close to the skin’s surface by imaging the algae’s natural luminescence; and (2) track them in hard-to-reach deeper tissue by coating them with magnetite (Fe3O4) to make them detectable with magnetic resonance imaging (MRI). The devices could also sense chemical changes linked to the onset of illness.

In lab tests, during degradation, the microswimmers were able to release potent compounds from the algae core that selectively attacked cancer cells while leaving healthy cells unharmed. Further research could show whether this might have potential as a treatment, the researchers say.

The study, published in an open-access paper in Science Robotics, was carried out in collaboration with the Universities of Edinburgh and Manchester and was supported by the Research Grants Council of Hong Kong.


Abstract of Multifunctional biohybrid magnetite microrobots for imaging-guided therapy

Magnetic microrobots and nanorobots can be remotely controlled to propel in complex biological fluids with high precision by using magnetic fields. Their potential for controlled navigation in hard-to-reach cavities of the human body makes them promising miniaturized robotic tools to diagnose and treat diseases in a minimally invasive manner. However, critical issues, such as motion tracking, biocompatibility, biodegradation, and diagnostic/therapeutic effects, need to be resolved to allow preclinical in vivo development and clinical trials. We report biohybrid magnetic robots endowed with multifunctional capabilities by integrating desired structural and functional attributes from a biological matrix and an engineered coating. Helical microswimmers were fabricated from Spirulina microalgae via a facile dip-coating process in magnetite (Fe3O4) suspensions, rendering them superparamagnetic and equipping them with robust navigation capability in various biofluids. The innate properties of the microalgae allowed in vivo fluorescence imaging and remote diagnostic sensing without the need for any surface modification. Furthermore, in vivo magnetic resonance imaging tracked a swarm of microswimmers inside rodent stomachs, a deep organ where fluorescence-based imaging ceased to work because of its penetration limitation. Meanwhile, the microswimmers were able to degrade and exhibited selective cytotoxicity to cancer cell lines, subject to the thickness of the Fe3O4 coating, which could be tailored via the dip-coating process. The biohybrid microrobots reported herein represent a microrobotic platform that could be further developed for in vivo imaging-guided therapy and a proof of concept for the engineering of multifunctional microrobotic and nanorobotic devices.

Take a fantastic 3D voyage through the brain with immersive VR system


Wyss Center for Bio and Neuroengineering/Lüscher lab (UNIGE) | Brain circuits related to natural reward

What happens when you combine access to unprecedented huge amounts of anatomical data of brain structures with the ability to display billions of voxels (3D pixels) in real time, using high-speed graphics cards?

Answer: An awesome new immersive virtual reality (VR) experience for visualizing and interacting with up to 10 terabytes (trillions of bytes) of anatomical brain data.

Developed by researchers from the Wyss Center for Bio and Neuroengineering and the University of Geneva, the system is intended to allow neuroscientists to highlight, select, slice, and zoom in, down to individual neurons at the micrometer (cellular) level.

This 2-D image of a mouse brain injected with a fluorescent retrograde virus in the brain stem, captured with a lightsheet microscope, represents the kind of rich, detailed visual data that can be explored with the new VR system. (credit: Courtine Lab/EPFL/Leonie Asboth, Elodie Rey)

The new VR system grew out of a problem with using the Wyss Center’s lightsheet microscope (one of only three in the world): how can you navigate and make sense of the immense volume of neuroanatomical data?

“The system provides a practical solution to experience, analyze and quickly understand these exquisite, high-resolution images,” said Stéphane Pages, PhD, Staff Scientist at the Wyss Center and Senior Research Associate at the University of Geneva, senior author of a dynamic poster presented November 15 at the annual meeting of the Society for Neuroscience 2017.

For example, using “mini-brains,” researchers will be able to see how new microelectrode probes behave in brain tissue, and how tissue reacts to them.

Journey to the center of the cell: VR movies

A team of researchers in Australia has taken the next step: allowing scientists, students, and members of the public to explore these kinds of images — even interact with cells and manipulate models of molecules.

As described in a paper published in the journal Traffic, the researchers built a 3D virtual model of a cell, combining lightsheet microscope images (for super-resolution, real-time, single-molecule detection of fluorescent proteins in cells and tissues) with scanning electron microscope imaging data (for a more complete view of the cellular architecture).

To demonstrate this, they created VR movies (shown below) of the surface of a breast cancer cell. The movies can be played on a Samsung Gear VR or Google cardboard device or using the built-in YouTube 360 player with Chrome, Firefox, MS Edge, or Opera browsers. The movies will also play on a conventional smartphone (but without 3D immersion).

UNSW 3D Visualisation Aesthetics Lab | The cell “paddock” view puts the user on the surface of the cell and demonstrates different mechanisms by which nanoparticles can be internalized into cells.

UNSW 3D Visualisation Aesthetics Lab | The cell “cathedral” view takes the user inside the cell and allows them to explore key cellular compartments, including the mitochondria (red), lysosomes (green), early endosomes (light blue), and the nucleus (purple).


Abstract of Analyzing volumetric anatomical data with immersive virtual reality tools

Recent advances in high-resolution 3D imaging techniques allow researchers to access unprecedented amounts of anatomical data of brain structures. In parallel, the computational power of commodity graphics cards has made rendering billions of voxels in real-time possible. Combining these technologies in an immersive virtual reality system creates a novel tool wherein observers can physically interact with the data. We present here the possibilities and demonstrate the value of this approach for reconstructing neuroanatomical data. We use a custom-built digitally scanned light-sheet microscope (adapted from Tomer et al., Cell, 2015), to image rodent clarified whole brains and spinal cords in which various subpopulations of neurons are fluorescently labeled. Improvements of existing microscope designs allow us to achieve an in-plane submicronic resolution in tissue that is immersed in a variety of media (e.g. organic solvents, Histodenz). In addition, our setup allows fast switching between different objectives and thus changes of image resolution within seconds. Here we show how the large amount of data generated by this approach can be rapidly reconstructed in a virtual reality environment for further analyses. Direct rendering of raw 3D volumetric data is achieved by voxel-based algorithms (e.g. ray marching), thus avoiding the classical step of data segmentation and meshing along with its inevitable artifacts. Visualization in a virtual reality headset together with interactive hand-held pointers allows the user to interact rapidly and flexibly with the data (highlighting, selecting, slicing, zooming etc.). This natural interface can be combined with semi-automatic data analysis tools to accelerate and simplify the identification of relevant anatomical structures that are otherwise difficult to recognize using screen-based visualization.
Practical examples of this approach are presented from several research projects using the lightsheet microscope, as well as other imaging techniques (e.g., EM and 2-photon).


Abstract of Journey to the centre of the cell: Virtual reality immersion into scientific data

Visualization of scientific data is crucial not only for scientific discovery but also to communicate science and medicine to both experts and a general audience. Until recently, we have been limited to visualizing the three-dimensional (3D) world of biology in 2 dimensions. Renderings of 3D cells are still traditionally displayed using two-dimensional (2D) media, such as on a computer screen or paper. However, the advent of consumer grade virtual reality (VR) headsets such as Oculus Rift and HTC Vive means it is now possible to visualize and interact with scientific data in a 3D virtual world. In addition, new microscopic methods provide an unprecedented opportunity to obtain new 3D data sets. In this perspective article, we highlight how we have used cutting edge imaging techniques to build a 3D virtual model of a cell from serial block-face scanning electron microscope (SBEM) imaging data. This model allows scientists, students and members of the public to explore and interact with a “real” cell. Early testing of this immersive environment indicates a significant improvement in students’ understanding of cellular processes and points to a new future of learning and public engagement. In addition, we speculate that VR can become a new tool for researchers studying cellular architecture and processes by populating VR models with molecular data.

Disturbing video depicts near-future ubiquitous lethal autonomous weapons


Campaign to Stop Killer Robots | Slaughterbots

In response to growing concerns about autonomous weapons, the Campaign to Stop Killer Robots, a coalition of AI researchers and advocacy organizations, has released a fictional video that depicts a disturbing future in which lethal autonomous weapons have become cheap and ubiquitous worldwide.

UC Berkeley AI researcher Stuart Russell presented the video at the United Nations Convention on Certain Conventional Weapons in Geneva, hosted by the Campaign to Stop Killer Robots earlier this week. Russell, in an appearance at the end of the video, warns that the technology described in the film already exists* and that the window to act is closing fast.

Support for a ban against autonomous weapons has been mounting. On Nov. 2, more than 200 Canadian scientists and more than 100 Australian scientists in academia and industry penned open letters to Prime Ministers Justin Trudeau and Malcolm Turnbull, respectively, urging them to support the ban.

Earlier this summer, more than 130 leaders of AI companies signed a letter in support of this week’s discussions. These letters follow a 2015 open letter released by the Future of Life Institute and signed by more than 20,000 AI/robotics researchers and others, including Elon Musk and Stephen Hawking.

“Many of the world’s leading AI researchers worry that if these autonomous weapons are ever developed, they could dramatically lower the threshold for armed conflict, ease and cheapen the taking of human life, empower terrorists, and create global instability,” according to an article published by the Future of Life Institute, which funded the video. “The U.S. and other nations have used drones and semi-automated systems to carry out attacks for several years now, but fully removing a human from the loop is at odds with international humanitarian and human rights law.”

“The Campaign to Stop Killer Robots is not trying to stifle innovation in artificial intelligence and robotics and it does not wish to ban autonomous systems in the civilian or military world,” explained Noel Sharkey of the International Committee for Robot Arms Control. “Rather, we see an urgent need to prevent automation of the critical functions for selecting targets and applying violent force without human deliberation and to ensure meaningful human control for every attack.”


* As suggested in this U.S. Department of Defense video:


Perdix Drone Swarm – Fighters Release Hive-mind-controlled Weapon UAVs in Air | U.S. Naval Air Systems Command

How to open the blood-brain-barrier with precision for safer drug delivery

Schematic representation of the feedback-controlled focused ultrasound drug delivery system. Serving as the acoustic indicator of drug-delivery dosage, the microbubble emission signal was sensed and compared with the expected value. The difference was used as feedback to the ultrasound transducer for controlling the level of the ultrasound transmission. The ultrasound transducer and sensor were located outside the rat skull. The microbubbles were generated in the bloodstream at the target location in the brain. (credit: Tao Sun/Brigham and Women’s Hospital; adapted by KurzweilAI)

Researchers at Brigham and Women’s Hospital have developed a safer way to use focused ultrasound to temporarily open the blood-brain barrier* to allow for delivering vital drugs for treating glioma brain tumors — an alternative to invasive incision or radiation.

Focused ultrasound drug delivery to the brain uses “cavitation” (driving injected microbubbles into oscillation) to temporarily open the blood-brain barrier. The problem with this method has been that if these bubbles destabilize and collapse, they can damage critical vasculature in the brain.

To create a finer degree of control over the microbubbles and improve safety, the researchers placed a sensor outside of the rat brain to listen to ultrasound echoes bouncing off the microbubbles, as an indication of how stable the bubbles were.** That data was used to modify the ultrasound intensity, stabilizing the microbubbles to maintain safe ultrasound exposure.
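The feedback principle can be sketched with a simple proportional control loop (an illustrative stand-in, not the published controller; the linear "plant" model, gain, and target values below are all invented): compare the sensed microbubble emission with a target value, and nudge the transmit level up or down to close the gap.

```python
# Illustrative closed-loop sketch: adjust the ultrasound transmit level
# until the sensed microbubble emission matches a target value. The real
# system used a more sophisticated controller and a physical sensor; here
# both are replaced by toy functions.

def emission(pressure):
    """Toy stand-in for the acoustic emission sensed from microbubbles."""
    return 2.0 * pressure  # assumed linear response (invented)

def control(target, steps=50, gain=0.2):
    pressure = 0.1  # start at a conservative transmit level
    for _ in range(steps):
        error = target - emission(pressure)
        pressure += gain * error  # raise or lower the transmit level
    return pressure

p = control(target=1.0)
print(round(emission(p), 3))  # settles at the target emission level
```

Holding the emission at a setpoint in this way is what keeps the bubbles oscillating stably instead of collapsing.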

The team tested the approach in both healthy rats and in an animal model of glioma brain cancer. Further research will be needed to adapt the technique for humans, but the approach could offer improved safety and efficacy control for human clinical trials, which are now underway in Canada.

The research, published this week in the journal Proceedings of the National Academy of Sciences, was supported by the National Institutes of Health.

* The blood-brain barrier is an impassable obstacle for 98% of drugs, which it treats as pathogens, blocking them from passing from the patient’s bloodstream into the brain. Using focused ultrasound, drugs can be administered via an intravenous injection of innocuous lipid-coated gas microbubbles.

** For the ultrasound transducer, the researchers combined two spherically curved transducers (operating at a resonant frequency at 274.3 kHz) to double the effective aperture size and provide significantly improved focusing in the axial direction.


Abstract of Closed-loop control of targeted ultrasound drug delivery across the blood–brain/tumor barriers in a rat glioma model

Cavitation-facilitated microbubble-mediated focused ultrasound therapy is a promising method of drug delivery across the blood–brain barrier (BBB) for treating many neurological disorders. Unlike ultrasound thermal therapies, during which magnetic resonance thermometry can serve as a reliable treatment control modality, real-time control of modulated BBB disruption with undetectable vascular damage remains a challenge. Here a closed-loop cavitation controlling paradigm that sustains stable cavitation while suppressing inertial cavitation behavior was designed and validated using a dual-transducer system operating at the clinically relevant ultrasound frequency of 274.3 kHz. Tests in the normal brain and in the F98 glioma model in vivo demonstrated that this controller enables reliable and damage-free delivery of a predetermined amount of the chemotherapeutic drug (liposomal doxorubicin) into the brain. The maximum concentration level of delivered doxorubicin exceeded levels previously shown (using uncontrolled sonication) to induce tumor regression and improve survival in rat glioma. These results confirmed the ability of the controller to modulate the drug delivery dosage within a therapeutically effective range, while improving safety control. It can be readily implemented clinically and potentially applied to other cavitation-enhanced ultrasound therapies.

Consumer Technology Association inducts Ray Kurzweil, 11 other visionaries into the 2017 Consumer Technology Hall of Fame

Gary Shapiro (left) and Ray Kurzweil (right) (credit: CTA)

The Consumer Technology Association (CTA) inducted Ray Kurzweil and 11 other industry leaders into the Consumer Technology (CT) Hall of Fame at its 19th annual awards dinner, held Nov. 7, 2017 at the Rainbow Room, atop 30 Rockefeller Center in Manhattan.

CTA, formerly Consumer Electronics Association (CEA), created the Hall of Fame in 2000 to honor industry visionaries and pioneers.

A noted inventor, author, and futurist, Ray Kurzweil was the principal inventor of the first CCD flatbed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition. He has written five national best-selling books, including New York Times best sellers The Singularity Is Near (2005) and How to Create a Mind (2012).

This year’s honorees also include Mike Lazaridis, founder of BlackBerry, which created the first smartphone; Mitch Mohr, founder of Celluphone; and Charles Tandy, legendary retailer. Also honored: the team that developed the MPEG video-file-compression technique — Leonardo Chiariglione, PhD, and Hiroshi Yasuda, PhD; and the team responsible for developing a breakthrough circuit that enabled high-power sound amplification with low distortion — McIntosh Labs founder Frank McIntosh and McIntosh president Gordon Gow.

Gary Shapiro, president and CEO of CTA, praised the inductees for their contributions to the growth of the $321 billion U.S. consumer technology industry.

Kurzweil: “A bright future”

Concluding the evening, Kurzweil gave a few predictions on where he sees the industry heading: “Technology is accelerating, it’s growing exponentially. Technology is also miniaturizing. We will have devices that are as powerful as our cell phones today that are the size of blood cells in the 2030s, and they will go through our bloodstream, keeping us healthy.

“Technology has been making life better. Over the next decade with biotechnology, we will get little devices that are robotic, intelligent and can augment our immune system. I think the future is going to be dramatically better.

“Despite the progress that I’ve alluded to — there’s still a lot of human suffering — it is the advance of these exponential technologies that is going to help us overcome age-old afflictions like disease, poverty, and environmental degradation. If we keep our focus on both the promise and the peril, we’ll have a very bright future.”

With the 2017 class, the CT Hall of Fame grows to 246 inventors, engineers, retailers, journalists, and entrepreneurs who conceived, promoted, and/or wrote about the innovative technologies, products and services that connect and improve the lives of consumers around the world. The Hall of Fame inductees have been selected by a group of media and industry professionals, who judge the nominations submitted by manufacturers, retailers and industry journalists.

Complete profiles of the honorees will be included in the forthcoming November issue of It Is Innovation (i3) magazine.


Nearly every job is becoming more digital — Brookings study

The shares of U.S. jobs that require substantial digital knowledge rose rapidly between 2002 and 2016 — mostly due to large changes in the digital content of existing occupations. (source: Brookings analysis of O*Net, OES, and Moody’s data)

Digital technology is disrupting the American workforce, but in vastly uneven ways, according to a new analysis of 545 occupations in a report published today by the Brookings Metropolitan Policy Program.

The report, “Digitalization and the American workforce,” provides a detailed analysis of changes since 2001 in the digital content of 545 occupations that represent 90 percent of the workforce in all industries. It suggests that acquiring digital skills is now a prerequisite for economic success for American workers, industries, and metropolitan areas.

In recent decades, the diffusion of digital technology into nearly every business and workplace, also known as “digitalization,” has been remaking the U.S. economy and the world of work. The “digitalization of everything” has increased the potential of individuals, firms, and society, but has also contributed to troublesome impacts and inequalities, such as worker pay disparities across many demographics, and the divergence of metropolitan economic outcomes.

Mean digital scores and share of jobs in high digital skill occupations in 100 largest U.S. metro areas, 2016 (source: Brookings analysis of O*Net, OES, and Moody’s data)

While the digital content of virtually all jobs has been increasing (the average digital score across all occupations rose 57 percent from 2002 to 2016), occupations in the middle and lower end of the digital skill spectrum have increased their digital scores most dramatically. Workers, industries, and metropolitan areas benefit from increased digital skills via enhanced wage growth, higher productivity and pay, and reduced risk of automation.

The report offers recommendations for improving digital education and training while mitigating its potentially harmful effects, such as worker pay disparities and the divergence of metropolitan area economic outcomes.

“We definitely need more coders and high-end IT professionals, but it’s just as important that many more people learn the basic tech skills that are needed in virtually every job,” said Mark Muro, a senior fellow at Brookings and the report’s senior author. “Not everybody needs to go to a coding boot camp, but they probably do need to know Excel and basic office productivity software and enterprise platforms.”

Key findings of the report

(credit: Brookings Metropolitan Policy Program)

Wages: The mean annual wage for workers in high-level digital occupations reached $72,896 in 2016, reflecting 0.8 percent annual wage growth since 2010. Workers in middle-level digital jobs earned $48,274 on average (0.3 percent annual wage growth since 2010), and workers in low-level digital occupations earned $30,393 on average (0.2 percent annual wage decline since 2010).

Uneven job growth: While job growth has been rapid in high- and low-digital occupations, middle-digital occupations (which can be more readily automated), such as office-administrative and education jobs, have seen much slower job growth.

Automation: Nearly 60 percent of tasks performed in low-digital occupations appear susceptible to automation, compared to only around 30 percent of tasks in highly digital occupations.

Gender: Women, with slightly higher aggregate digital scores (48) than men (45), represent about three quarters of the workforce in many of the largest medium-digital occupational groups, such as health care, office administration, and education. But men continue to dominate the highest-level digital occupations, as well as lower digital occupations such as transportation, construction, natural resources, and building and grounds occupations.

Race/ethnicity: Whites and Asians remain over-represented in high-level digital occupations such as engineering, management and math professions; blacks are over-represented in medium-digital occupations such as office and administrative support, community and social service, as well as low-level digital jobs; and Hispanics are significantly underrepresented in high-level digital technical, business and finance occupational groups.

Regional disparities: The most digitalized metros include Washington, Seattle, San Francisco and Boston; fast followers such as Austin and Denver; and university towns such as Madison and Raleigh. Locations with low digital scores include Las Vegas and several metros in California, including Riverside, Fresno, Stockton and Bakersfield.
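The wage figures in the findings above imply 2010 baselines through compound annual growth: wage_2016 = wage_2010 × (1 + r)^6. The sketch below back-computes those baselines; the 2016 wages and annual rates are from the report, but the derived 2010 figures are our own arithmetic, not numbers the report states.

```python
# Back-compute implied 2010 mean wages from the report's 2016 wages and
# annual growth rates, via wage_2010 = wage_2016 / (1 + r)**6.
# The derived 2010 figures are illustrative arithmetic, not report data.

YEARS = 2016 - 2010

groups = {                       # (mean 2016 wage, annual growth rate)
    "high-digital":   (72896, 0.008),
    "middle-digital": (48274, 0.003),
    "low-digital":    (30393, -0.002),  # a decline, so the rate is negative
}

for name, (wage_2016, rate) in groups.items():
    wage_2010 = wage_2016 / (1 + rate) ** YEARS
    print(f"{name}: 2016 ${wage_2016:,} -> implied 2010 ${wage_2010:,.0f}")
```

Note that a negative rate makes the implied 2010 wage higher than the 2016 wage, which is what the report's "annual wage decline" for low-digital occupations means in practice.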

Mapping connections of single neurons using a holographic light beam

Controlling single neurons using optogenetics (credit: the researchers)

Researchers at MIT and Paris Descartes University have developed a technique for precisely mapping connections of individual neurons for the first time by triggering them with holographic laser light.

The technique is based on optogenetics (using light to stimulate or silence light-sensitive genetically modified protein molecules called “opsins” that are embedded in specific neurons). Current optogenetics techniques can’t isolate individual neurons (and their connections) because the light strikes a relatively large area — stimulating axons and dendrites of other neurons simultaneously (and these neurons may have different functions, even when nearby).

The new technique stimulates only the soma (body) of the neuron, not its connections. To achieve that, the researchers combined two new advances: an optimized holographic light-shaping microscope* and a localized, more powerful opsin protein called CoChR.

Two-photon computer-generated holography (CGH) was used to create three-dimensional sculptures of light that envelop only a target cell, using a conventional pulsed laser coupled with a widefield epifluorescence imaging system. (credit: Or A. Shemesh et al./Nature Neuroscience)

The researchers fused CoChR, an opsin protein that generates a very strong electric current in response to light, to a small protein that directs the opsin into the cell bodies of neurons and away from the axons and dendrites that extend from the cell body, forming “somatic channelrhodopsin” (soCoChR). This new opsin enabled photostimulation of individual cells (regions of stimulation are highlighted by magenta circles) in mouse cortical brain slices with single-cell resolution and less than 1-millisecond temporal precision, achieving connectivity mapping on intact cortical circuits without crosstalk between neurons. (credit: Or A. Shemesh et al./Nature Neuroscience)

In the new study, by combining this approach with new “somatic channelrhodopsin” opsins that cluster in the cell body, the researchers showed they could stimulate individual neurons with not only precise spatial control but also great control over the timing of the stimulation. When they target a specific neuron, it responds consistently every time, with variability that is less than one millisecond, even when the cell is stimulated many times in a row.

“For the first time ever, we can bring the precision of single-cell control toward the natural timescales of neural computation,” says Ed Boyden, an associate professor of brain and cognitive sciences and biological engineering at MIT, and a member of MIT’s Media Lab and McGovern Institute for Brain Research. Boyden is co-senior author with Valentina Emiliani, a research director at France’s National Center for Scientific Research (CNRS) and director of the Neurophotonics Laboratory at Paris Descartes University, of a study that appears in the Nov. 13 issue of Nature Neuroscience.

Mapping neural connections in real time

Using this technique, the researchers were able to stimulate single neurons in brain slices and then measure the responses from cells that are connected to that cell. This may pave the way for more precise diagramming of the connections of the brain, and analyzing how those connections change in real time as the brain performs a task or learns a new skill.

Optogenetics was co-developed in 2005 by Ed Boyden (credit: MIT)

One possible experiment, Boyden says, would be to stimulate neurons connected to each other to try to figure out if one is controlling the others or if they are all receiving input from a far-off controller.

“It’s an open question,” he says. “Is a given function being driven from afar, or is there a local circuit that governs the dynamics and spells out the exact chain of command within a circuit? If you can catch that chain of command in action and then use this technology to prove that that’s actually a causal link of events, that could help you explain how a sensation, or movement, or decision occurs.”

As a step toward that type of study, the researchers now plan to extend this approach into living animals. They are also working on improving their targeting molecules and developing high-current opsins that can silence neuron activity.

The research was funded by the National Institutes of Health, France’s National Research Agency, the Simons Foundation for the Social Brain, the Human Frontiers Science Program, John Doerr, the Open Philanthropy Project, the Howard Hughes Medical Institute, and the Defense Advanced Research Projects Agency.

* Traditional holography is based on reproducing, with light, the shape of a specific object, in the absence of that original object. This is achieved by creating an “interferogram” that contains the information needed to reconstruct an object that was previously illuminated by a reference beam. In computer-generated holography, the interferogram is calculated by a computer without the need for an original object. Combined with two-photon excitation, CGH can be used to refocus laser light to precisely illuminate a cell or a defined group of cells in the brain.
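The computational step the footnote describes is commonly done with iterative phase-retrieval methods such as the Gerchberg-Saxton algorithm. The sketch below is a minimal, generic illustration of that principle: it computes a phase-only mask whose far-field intensity approximates a target spot pattern. It is not the specific two-photon CGH pipeline used in the study.

```python
# Minimal Gerchberg-Saxton sketch of computer-generated holography:
# iterate between the hologram (SLM) plane, where only phase can be set,
# and the focal plane, where the target amplitude is imposed.
import numpy as np

rng = np.random.default_rng(0)

# Target amplitude: a few bright "cells" to illuminate in the focal plane.
n = 32
target = np.zeros((n, n))
target[8, 8] = target[16, 24] = target[24, 12] = 1.0

# Start from a random phase guess, propagated back to the hologram plane.
field = np.fft.ifft2(target * np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n))))

for _ in range(100):
    slm = np.exp(1j * np.angle(field))             # phase-only constraint
    focal = np.fft.fft2(slm)                       # propagate to focal plane
    focal = target * np.exp(1j * np.angle(focal))  # impose target amplitude
    field = np.fft.ifft2(focal)                    # propagate back

phase_mask = np.angle(field)                       # the computed hologram
recon = np.abs(np.fft.fft2(np.exp(1j * phase_mask)))
```

After a few dozen iterations, the reconstructed focal-plane intensity concentrates on the target spots; in a real system, the phase mask would be displayed on a spatial light modulator to sculpt the laser light.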


Abstract of Temporally precise single-cell-resolution optogenetics

Optogenetic control of individual neurons with high temporal precision within intact mammalian brain circuitry would enable powerful explorations of how neural circuits operate. Two-photon computer-generated holography enables precise sculpting of light and could in principle enable simultaneous illumination of many neurons in a network, with the requisite temporal precision to simulate accurate neural codes. We designed a high-efficacy soma-targeted opsin, finding that fusing the N-terminal 150 residues of kainate receptor subunit 2 (KA2) to the recently discovered high-photocurrent channelrhodopsin CoChR restricted expression of this opsin primarily to the cell body of mammalian cortical neurons. In combination with two-photon holographic stimulation, we found that this somatic CoChR (soCoChR) enabled photostimulation of individual cells in mouse cortical brain slices with single-cell resolution and <1-ms temporal precision. We used soCoChR to perform connectivity mapping on intact cortical circuits.

New method 3D-prints fully functional electronic circuits

(Left) Conductive and polymeric inks were simultaneously inkjet-printed and solidified in a single process using UV irradiation. (Right) Microcontroller, batteries, and motors were then manually embedded in the system, creating a functioning miniature car. (credit: Ehab Saleh et al./University of Nottingham)

Researchers at the University of Nottingham have developed a method for rapidly 3D-printing fully functional electronic circuits such as antennas, medical devices, and solar-energy-collecting structures.

Unlike circuits from conventional 3D printers, these circuits can contain both electrically conductive metallic inks (like the silver wires in the photo above) and insulating polymeric inks (like the yellow and orange support structure). UV light is used to rapidly solidify the inks.

The “multifunctional additive manufacturing” (MFAM) method combines 3D printing, which is based on layer-by-layer deposition of materials to create 3D devices, with 2D-printed electronics. It prints both conductors and insulators in a single step, expanding the range of functions in electronics (but not integrated circuits and other complex devices).

A schematic diagram showing how UV irradiation heats and solidifies conductive and dielectric inks to form the letter N and silver tracks that connect a green LED to a power source. (credit: University of Nottingham)

The researchers discovered that silver nanoparticles in conductive inks are capable of absorbing UV light efficiently. The absorbed UV energy is converted into heat, which evaporates the solvents of the conductive ink and fuses the silver nanoparticles. This process affects only the conductive ink so it doesn’t damage any adjacent printed polymers.

For example, the method could create a wristband that includes a pressure sensor, wireless communication circuitry, and a capacitor (functioning as a battery), customized for the wearer — all in a single process.

The new method overcomes some of the challenges in manufacturing fully functional devices that contain plastic and metal components in complex structures, where different methods are required to solidify each material. The method speeds up the solidification process of the conductive inks to less than a minute per layer. Previously, this process took much longer to be completed using conventional heat sources such as ovens and hot plates, making it impractical when hundreds of layers are needed to form an object.


Abstract of 3D Inkjet Printing of Electronics Using UV Conversion

The production of electronic circuits and devices is limited by current manufacturing methods that limit both the form and potentially the performance of these systems. Additive manufacturing (AM) is a technology that has been shown to provide cross-sectoral manufacturing industries with significant geometrical freedom. A research domain known as multifunctional AM (MFAM), in its infancy, looks to couple the positive attributes of AM with applications in the electronics sector, which could have a significant impact on the development of new products; however, there are significant hurdles to overcome. This paper reports on the single-step MFAM of 3D electronic circuitry within a polymeric structure using a combination of conductive and nonconductive materials within a single material jetting-based AM system. The basis of this breakthrough is a study of the optical absorption regions of a silver nanoparticle (AgNP) conductive ink, which leads to a novel method to rapidly process and sinter AgNP inks in ambient conditions using simple UV radiation contemporaneously with UV-curing of deposited polymeric structures.