Integrated circuits printed directly onto fabric for the first time

A sample integrated circuit printed on fabric. (credit: Felice Torrisi)

Researchers at the University of Cambridge, working with colleagues in Italy and China, have incorporated washable, stretchable, and breathable integrated electronic circuits into fabric for the first time — opening up new possibilities for smart textiles and wearable textile electronic devices.

The circuits were made with cheap, safe, and environmentally friendly inks, and printed using conventional inkjet-printing techniques.

The new method directly prints graphene inks and other two-dimensional materials on fabric to produce integrated electronic circuits that are comfortable to wear and can survive up to 20 cycles in a typical washing machine. The technology opens up new applications of smart fabrics ranging from personal health to wearable computing, military garments, fashion, and wearable energy harvesting and storage.

(Left) Final step in fabrication of an inkjet-printed field effect transistor (FET) heterostructure on textile. (Right) Side-view schematic and photo. (credit for images: Tian Carey et al./Nature Communications; composite: KurzweilAI)

Based on earlier work on the formulation of graphene inks for printed electronics, the team designed new low-boiling-point inks, allowing them to be directly printed onto polyester fabric. They also found that roughness of the fabric improved the performance of the printed devices. The versatility of this process also allowed the researchers to design all-printed integrated electronic circuits combining active and passive components.

Non-toxic, flexible, low-power, scalable

Most wearable electronic devices that are currently available rely on rigid electronic components mounted on plastic, rubber or textiles. These have limited compatibility with the skin, are damaged when washed, and are uncomfortable to wear because they are not breathable.

“Other inks for printed electronics normally require toxic solvents and are not suitable to be worn, whereas our inks are both cheap, safe and environmentally friendly, and can be combined to create electronic circuits by simply printing different two-dimensional materials on the fabric,” said Felice Torrisi, PhD, of the Cambridge Graphene Centre, senior author of a paper describing the research in the open-access journal Nature Communications.

The process is scalable, and according to the researchers, there are no fundamental obstacles to the technological development of wearable electronic devices, in terms of either complexity or performance. The printed components are flexible, washable, and require low power — essential requirements for applications in wearable electronics.

The teams at the Cambridge Graphene Centre and Politecnico di Milano are also involved in the Graphene Flagship, an EC-funded, pan-European project dedicated to bringing graphene and related-materials (GRM) technologies to commercial applications.

The research was supported by grants from the Graphene Flagship, the European Research Council’s Synergy Grant, the Engineering and Physical Sciences Research Council, the Newton Trust, the International Research Fellowship of the National Natural Science Foundation of China, and the Ministry of Science and Technology of China. The technology is being commercialized by Cambridge Enterprise, the University’s commercialization arm.


Abstract of Fully inkjet-printed two-dimensional material field-effect heterojunctions for wearable and textile electronics

Fully printed wearable electronics based on two-dimensional (2D) material heterojunction structures also known as heterostructures, such as field-effect transistors, require robust and reproducible printed multi-layer stacks consisting of active channel, dielectric and conductive contact layers. Solution processing of graphite and other layered materials provides low-cost inks enabling printed electronic devices, for example by inkjet printing. However, the limited quality of the 2D-material inks, the complexity of the layered arrangement, and the lack of a dielectric 2D-material ink able to operate at room temperature, under strain and after several washing cycles has impeded the fabrication of electronic devices on textile with fully printed 2D heterostructures. Here we demonstrate fully inkjet-printed 2D-material active heterostructures with graphene and hexagonal-boron nitride (h-BN) inks, and use them to fabricate all inkjet-printed flexible and washable field-effect transistors on textile, reaching a field-effect mobility of ~91 cm2 V−1 s−1, at low voltage (<5 V). This enables fully inkjet-printed electronic circuits, such as reprogrammable volatile memory cells, complementary inverters and OR logic gates.

New silicon ‘Neuropixels’ probes record activity of hundreds of neurons simultaneously

A section of a Neuropixels probe (credit: Howard Hughes Medical Institute)

In a $5.5 million international collaboration, researchers and engineers have developed powerful new “Neuropixels” brain probes that can simultaneously monitor the neural activity of hundreds of neurons at several layers of a rodent’s brain for the first time.

Described in a paper published today (November 8, 2017) in Nature, Neuropixels probes represent a significant advance in neuroscience measurement technology, and will allow for the most precise understanding yet of how large networks of nerve cells coordinate to give rise to behavior and cognition, according to the researchers.

Illustrating the ability to record several layers of brain regions simultaneously, 741 electrodes in two Neuropixels probes recorded signals from five major brain structures in 13 awake head-fixed mice. The number of putative single neurons from each structure is shown in parentheses. (Approximate probe locations are shown overlaid on the Allen Mouse Brain Atlas at the left.) (credit: James J. Jun et al./Nature)

Neuropixels probes are similar to electrophysiology probes that neuroscientists have used for decades to detect extracellular electrical activity in the brains of living animals — but they incorporate two critical advances:

  • The new probes have a cross-section of just 70 x 20 micrometers (thinner than a human hair) but are about as long as a mouse brain (one centimeter), so they can pass through and collect data from multiple brain regions at the same time. The 960 recording electrodes (electrical sensors) are densely packed along the probe’s length, capturing hundreds of well-resolved single-neuron signal traces and making it easier for researchers to pinpoint the cellular sources of brain activity.
  • Each of the new probes incorporates a nearly complete recording system — reducing hardware size and cost and eliminating hundreds of bulky output wires.

These features will allow researchers to collect more meaningful data in a single experiment than current technologies (which can measure the activity of individual neurons within a specific spot in the brain or can reveal larger, regional patterns of activity, but not both simultaneously).

To develop the probes, researchers at HHMI’s Janelia Research Campus, the Allen Institute for Brain Science, and University College London worked with engineers at imec, an international nanoelectronics research center in Leuven, Belgium, with grant funding from the Gatsby Charitable Foundation and Wellcome.

Recordings from 127 neurons in entorhinal and medial prefrontal cortices using chronic implants in unrestrained rats. (a) Schematic representation of the implant in the entorhinal cortex. MEnt, medial entorhinal cortex; PrS, presubiculum; S, subiculum; V1B, primary visual cortex, binocular area; V2L, secondary visual cortex, lateral area. (b) Filtered voltage traces from 130 channels spanning 1.3 mm of the shank. (credit: James J. Jun et al./Nature)

Accelerating neuroscience research

With nearly 1,000 electrical sensors positioned along a probe thinner than a human hair but long enough to access many regions of a rodent’s brain simultaneously, the new technology could greatly accelerate neuroscience research, says Timothy Harris, senior fellow at Janelia, leader of the Neuropixels collaboration. “You can [detect the activity of] large numbers of neurons from multiple brain regions with higher fidelity and much less difficulty,” he says.

There are currently more than 400 prototype Neuropixels probes in testing at research centers worldwide.

“At the Allen Institute for Brain Science, one of our chief goals is to decipher the cellular-level code used by the brain,” says Christof Koch, President and Chief Scientist at the Allen Institute for Brain Science. “The Neuropixels probes represent a significant leap forward in measurement technology and will allow for the most precise understanding yet of how large coalitions of nerve cells coordinate to give rise to behavior and cognition.”

Neuropixels probes are expected to be available for purchase by research laboratories by mid-2018.

Scientists from the consortium will present data collected using prototype Neuropixels probes at the Annual Meeting of the Society for Neuroscience in Washington, DC, November 11–15, 2017.

Scientists decipher mechanisms in cells for extending human longevity

Aging cells periodically switch their chromatin state. The image illustrates the “on” and “off” patterns in individual cells. (credit: UC San Diego)

A team of scientists at the University of California San Diego led by biologist Nan Hao have combined engineering, computer science, and biology technologies to decode the molecular processes in cells that influence aging.

Protecting DNA from damage

As cells age, damage in their DNA accumulates over time, leading to decay in normal functioning — eventually resulting in death. But a natural biochemical process known as “chromatin silencing” helps protect DNA from damage by converting specific regions of DNA from a loose, open state into a closed one, thus shielding DNA regions. (Chromatin is a complex of macromolecules found in cells, consisting of DNA, protein, and RNA.)

Among the molecules that promote silencing is a family of proteins — broadly conserved from bacteria to humans — known as sirtuins. In recent years, chemical activators of sirtuins have received much attention and are being marketed as nutraceuticals (such as resveratrol and more recently, NMN, as discussed on KurzweilAI) to aid chromatin silencing in the hopes of slowing the aging process.

To silence or not to silence? It’s all about the dynamics.

However, scientists have also found that such chromatin silencing also stops the protected DNA regions from expressing RNAs and proteins that carry out biological functions, so excessive silencing could derail normal cell physiology.

To learn more, the UC San Diego scientists turned to cutting-edge computational and experimental approaches in yeast, as described in an open-access study published in Proceedings of the National Academy of Sciences. That allowed the researchers to track chromatin silencing in unprecedented detail through generations during aging.

Here’s the puzzle: They found that a complete loss of silencing leads to cell aging and death. But continuous chromatin silencing also leads cells to a shortened lifespan, they found. OK, so is chromatin silencing or not silencing the answer to delay aging? The answer derived from the new study: Both.

According to the researchers, nature has developed a clever way to solve this dilemma. “Instead of staying in the silencing or silencing loss state, cells switch their DNA between the open (silencing loss) and closed (silencing) states periodically during aging,” said Hao. “In this way, cells can avoid a prolonged duration in either state, which is detrimental, and maintain a time-based balance important for their function and longevity.”
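As a purely illustrative toy model (our own sketch, not the study’s actual computational model), the intuition that periodic switching beats being stuck in either state can be captured in a few lines of Python: assume one kind of harm accumulates while chromatin is open and a different kind while it is silenced, and compare a cell that alternates between states with cells stuck in one state.

```python
# Toy model (illustrative only): two kinds of "harm" accumulate, one in each chromatin state.
# A cell "dies" when either kind of harm crosses a threshold.
def lifespan(policy, open_rate=1.0, silenced_rate=1.0, threshold=50, period=5):
    harm_open = harm_silenced = 0.0
    t = 0
    while harm_open < threshold and harm_silenced < threshold:
        if policy == "always_open":
            state = "open"
        elif policy == "always_silenced":
            state = "silenced"
        else:  # "switching": alternate states every `period` steps
            state = "open" if (t // period) % 2 == 0 else "silenced"
        if state == "open":
            harm_open += open_rate          # e.g., DNA damage while unprotected
        else:
            harm_silenced += silenced_rate  # e.g., loss of needed gene expression while silenced
        t += 1
    return t

for policy in ["always_open", "always_silenced", "switching"]:
    print(policy, lifespan(policy))
# In this toy model the alternating cell lasts roughly twice as long as either "stuck" cell,
# mirroring the idea that avoiding a prolonged stay in either state extends lifespan.
```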

What about nutraceuticals?

So are nutraceuticals to aid chromatin silencing still advised? According to a statement provided to KurzweilAI, “since the study focused on yeast aging, much more investigation is needed to inform any questions about chromatin silencing and nutraceuticals for human benefit, which is a much more complex issue requiring more intricate studies.”

“When cells grow old, they lose their ability to maintain this periodic switching, resulting in aged phenotypes and eventually death,” explained Hao. “The implication here is that if we can somehow help cells to reinforce switching, especially as they age, we can slow their aging. And this possibility is what we are currently pursuing.

“I believe this collaboration will produce in the near future many new insights that will transform our understanding in the basic biology of aging and will lead to new strategies to promote longevity in humans.”

The research was supported by the National Science Foundation, University of California Cancer Research Coordinating Committee (L.P.); Department of Defense, Air Force Office of Scientific Research, National Defense Science and Engineering; Human Frontier Science Program; and the San Diego Center for Systems Biology National Institutes of Health.


Nan Hao | This time-lapse movie tracks the replicative aging of individual yeast cells throughout their entire life spans.


Nan Hao | periodic switching during aging


Abstract of Multigenerational silencing dynamics control cell aging

Cellular aging plays an important role in many diseases, such as cancers, metabolic syndromes, and neurodegenerative disorders. There has been steady progress in identifying aging-related factors such as reactive oxygen species and genomic instability, yet an emerging challenge is to reconcile the contributions of these factors with the fact that genetically identical cells can age at significantly different rates. Such complexity requires single-cell analyses designed to unravel the interplay of aging dynamics and cell-to-cell variability. Here we use microfluidic technologies to track the replicative aging of single yeast cells and reveal that the temporal patterns of heterochromatin silencing loss regulate cellular life span. We found that cells show sporadic waves of silencing loss in the heterochromatic ribosomal DNA during the early phases of aging, followed by sustained loss of silencing preceding cell death. Isogenic cells have different lengths of the early intermittent silencing phase that largely determine their final life spans. Combining computational modeling and experimental approaches, we found that the intermittent silencing dynamics is important for longevity and is dependent on the conserved Sir2 deacetylase, whereas either sustained silencing or sustained loss of silencing shortens life span. These findings reveal that the temporal patterns of a key molecular process can directly influence cellular aging, and thus could provide guidance for the design of temporally controlled strategies to extend life span.

New magnetism-control method could lead to ultrafast, energy-efficient computer memory

A cobalt layer on top of a gadolinium-iron alloy allows for switching memory with a single laser pulse in just 7 picoseconds. The discovery may lead to a computing processor with high-speed, non-volatile memory right on the chip. (credit: Jon Gorchon et al./Applied Physics Letters)

Researchers at UC Berkeley and UC Riverside have developed an ultrafast new method for electrically controlling magnetism in certain metals — a breakthrough that could lead to more energy-efficient computer memory and processing technologies.

“The development of a non-volatile memory that is as fast as charge-based random-access memories could dramatically improve performance and energy efficiency of computing devices,” says Berkeley electrical engineering and computer sciences (EECS) professor Jeffrey Bokor, coauthor of a paper on the research in the open-access journal Science Advances. “That motivated us to look for new ways to control magnetism in materials at much higher speeds than in today’s MRAM.”


Background: RAM vs. MRAM memory

Computers use different kinds of memory technologies to store data. Long-term memory, typically a hard disk or flash drive, needs to be dense in order to store as much data as possible but is slow. The central processing unit (CPU) — the hardware that enables computers to compute — requires fast memory to keep up with the CPU’s calculations, so the memory is only used for short-term storage of information (while operations are executed).

Random access memory (RAM) is one example of such short-term memory. Most current RAM technologies are based on charge (electron) retention and can be written at rates of billions of bits per second (roughly one bit per nanosecond). The downside of these charge-based technologies is that they are volatile: they require constant power or they lose the data.

In recent years, “spintronics” magnetic alternatives to RAM, known as Magnetic Random Access Memory (MRAM), have reached the market. The advantage of using magnets is that they retain information even when the memory and CPU are powered off, allowing for energy savings. But that efficiency comes at the expense of speed: writing a single bit of information takes on the order of hundreds of picoseconds. (For comparison, silicon field-effect transistors have switching delays of less than 5 picoseconds.)
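As a rough back-of-the-envelope comparison (using only the order-of-magnitude figures quoted in this article, not device datasheets), a few lines of Python make the speed gap concrete:

```python
# Order-of-magnitude per-bit write/switching times quoted in the article (seconds).
# Illustrative figures only, not measured device specifications.
times = {
    "charge RAM":            1e-9,     # billions of bits per second ~ 1 ns per bit
    "MRAM":                  300e-12,  # "hundreds of picoseconds" per bit
    "silicon FET switching": 5e-12,    # switching delay of less than ~5 ps
    "GdFe electrical pulse": 10e-12,   # ~10-ps switching reported below
}

for name, t in times.items():
    print(f"{name:22s}: {t * 1e12:6.1f} ps per bit, "
          f"{times['MRAM'] / t:6.1f}x the speed of MRAM")
```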


The researchers found a magnetic alloy made up of gadolinium and iron that could accomplish those higher speeds — switching the direction of the magnetism with a series of electrical pulses of about 10 picoseconds (one picosecond is 1,000 times shorter than one nanosecond) — more than 10 times faster than MRAM.*

A faster version, using an energy-efficient optical pulse

In a second study, published in Applied Physics Letters, the researchers were able to further improve the performance by stacking a single-element magnetic metal such as cobalt on top of the gadolinium-iron alloy, allowing for switching with a single laser pulse in just 7 picoseconds. Because only a single pulse is needed, the approach is also more energy-efficient. The result points toward a computing processor with high-speed, non-volatile memory right on the chip, functionally similar to an IBM Research “in-memory” computing architecture profiled in a recent KurzweilAI article.

“Together, these two discoveries provide a route toward ultrafast magnetic memories that enable a new generation of high-performance, low-power computing processors with high-speed, non-volatile memories right on chip,” Bokor says.

The research was supported by grants from the National Science Foundation and the U.S. Department of Energy.

* The electrical pulse temporarily increases the energy of the iron atom’s electrons, causing the magnetism in the iron and gadolinium atoms to exert torque on one another, and eventually leads to a reorientation of the metal’s magnetic poles. It’s a completely new way of using electrical currents to control magnets, according to the researchers.


Abstract of Ultrafast magnetization reversal by picosecond electrical pulses

The field of spintronics involves the study of both spin and charge transport in solid-state devices. Ultrafast magnetism involves the use of femtosecond laser pulses to manipulate magnetic order on subpicosecond time scales. We unite these phenomena by using picosecond charge current pulses to rapidly excite conduction electrons in magnetic metals. We observe deterministic, repeatable ultrafast reversal of the magnetization of a GdFeCo thin film with a single sub–10-ps electrical pulse. The magnetization reverses in ~10 ps, which is more than one order of magnitude faster than any other electrically controlled magnetic switching, and demonstrates a fundamentally new electrical switching mechanism that does not require spin-polarized currents or spin-transfer/orbit torques. The energy density required for switching is low, projecting to only 4 fJ needed to switch a (20 nm)3 cell. This discovery introduces a new field of research into ultrafast charge current–driven spintronic phenomena and devices.


Abstract of Single shot ultrafast all optical magnetization switching of ferromagnetic Co/Pt multilayers

A single femtosecond optical pulse can fully reverse the magnetization of a film within picoseconds. Such fast operation hugely increases the range of application of magnetic devices. However, so far, this type of ultrafast switching has been restricted to ferrimagnetic GdFeCo films. In contrast, all optical switching of ferromagnetic films require multiple pulses, thereby being slower and less energy efficient. Here, we demonstrate magnetization switching induced by a single laser pulse in various ferromagnetic Co/Pt multilayers grown on GdFeCo, by exploiting the exchange coupling between the two magnetic films. Table-top depth-sensitive time-resolved magneto-optical experiments show that the Co/Pt magnetization switches within 7 ps. This coupling approach will allow ultrafast control of a variety of magnetic films, which is critical for applications.

talk | Future of Artificial Intelligence and its Impact on Society


organization: Council on Foreign Relations | link
event: Annual Term Member Conference | link
talk title: The Future of Artificial Intelligence & Its Impact on Society | link

host: Nicholas Thompson — Editor in Chief at Wired
speaker: Ray Kurzweil — leading inventor, futurist, best selling author

presentation date: November 3, 2017

about from CFR | In a wide-ranging discussion at a Council on Foreign Relations event, Ray Kurzweil covers issues from how AI will enhance us — expanding our intelligence while improving our lives and even our spirituality — to ethical issues such as dealing with technological risks.

Each year CFR * organizes more than 100 on-the-record events, conference calls, and podcasts in which senior government officials, global leaders, business executives, and prominent thinkers discuss pressing international issues.

* CFR is the Council on Foreign Relations


on the web | essentials

Council on Foreign Relations | main
Council on Foreign Relations | events: main
Council on Foreign Relations | events: The Future of Artificial Intelligence & Its Impact on Society

Council on Foreign Relations | YouTube channel: main
Council on Foreign Relations | YouTube channel: The Future of Artificial Intelligence & Its Impact on Society — video

Wikipedia | Council on Foreign Relations


transcript: full talk

organizer: Council on Foreign Relations
event: Annual Term Member Conference

talk title: The Future of Artificial Intelligence & Its Impact on Society

speaker: Ray Kurzweil — leading inventor, futurist, best selling author
host: Nicholas Thompson — Editor in Chief at Wired

THOMPSON: All right. Hello, everybody. Welcome to the closing session of the Council on Foreign Relations 22nd annual Term Member Conference with Ray Kurzweil.

I’m Nicholas Thompson. I will be presiding over today’s session.

I’d also like to thank Andrew Gundlach and the Anna-Maria and Stephen Kellen Foundation for their generous support of the CFR Term Member Program. I was a term member a couple years ago. I love this program. What a great event. I’m so glad to be here. I’m also glad to be here with Ray. All right, I’m going to read Ray’s biography, and then I’m going to dig into some questions about how the world is changing, and he will blow your mind.

So Ray Kurzweil is one of the world’s leading inventors, thinkers, and futurists, with a 30-year track record of accurate predictions. It’s true. If you look at his early books, they’re 96% and 98% accurate. Called “the restless genius” by The Wall Street Journal and “the ultimate thinking machine” by Forbes magazine, he was selected as one of the top entrepreneurs by Inc. magazine, which described him as the rightful heir to Thomas Edison. PBS selected him as one of the 16 revolutionaries who made America. Reading his bio, I was upset that he quotes all of my competitors, so I’m going to add a quote from Wired magazine, which is “His mission is bolder than any voyage to space” — Wired magazine.

Ray was the principal inventor of the first CCD flat-bed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition.

Among Ray’s many honors, he received a Grammy Award for outstanding achievements in music technology, he is the recipient of the National Medal of Technology, was inducted into the National Inventors Hall of Fame, holds 21 honorary doctorates, and has received honors from three U.S. presidents. Amazing.

Ray has written five national bestselling books, which you should all buy immediately if you’re on your phones, including the New York Times bestsellers “The Singularity Is Near” and “How to Create a Mind.” He is co-founder and chancellor of Singularity University — and a director of engineering at Google, heading up a team developing machine intelligence and natural language understanding.

He also, as I learned walking here, is the father of one of my sister’s friends from grade school, who referred to him as the cool dad with the electric pianos — welcome, Ray.

KURZWEIL: Great to be here.

THOMPSON: All right. So some of you are probably familiar with his work. Some of you may not be. But let’s begin, Ray, by talking about the law of accelerating returns, what that means for technology. Lay out a framework for what’s about to happen, and then we’ll dig into how foreign policy is going to be turned on its head.

KURZWEIL: Sure. Well, that’s the basis of my futurism. In 1981 I realized that the key to being successful as an inventor was timing. The inventors whose names you recognize, like Thomas Edison or my new boss — my first boss, Larry Page, were in the right place with the right idea at the right time. And timing turns out to be important for everything from writing magazine articles to making investments to romance. You got to be at the right place at the right time.

I started with the common wisdom that you cannot predict the future, and I made a very surprising discovery. There’s actually one thing about the future that’s remarkably predictable, and that is that the price, performance, and capacity not of everything, not of every technology, but of every information technology follows a very predictable path. And that path is exponential, not linear. So that’s the law of accelerating returns. But it bears a little explanation.

I had the price/performance of computing — calculations per second per constant dollar — going back to the 1890 Census through 1980 on a logarithmic scale, where a straight line is exponential growth. It was a gradual, second level of exponential, but it was a very smooth curve. And you could not see World War I or World War II or the Great Depression or the Cold War on that curve. So I projected it out to 2050. We’re now 36 years later. It’s exactly where it should be. So this is not just looking backward now and over-fitting to past data, but this has been a forward-looking progression that started in 1981. And it’s true of many different measures of information technology.

And the progression is not linear. It’s exponential. And our brains are linear. If you wonder why we have a brain, it’s to predict the future. But the kind of challenges we had, you know, 50,000 years ago when our brains were evolving were linear ones. We’d look up and say: OK, that animal’s going that way, I’m coming up the path this way, we’re going to meet at that rock. That’s not a good idea. I’m going to take a different path. That was good for survival. That became hardwired in our brains. We didn’t expect that animal to speed up as it went along. We made a linear projection.

The primary difference between myself and my critics, and many of them are coming around, is we look at the same world. They apply their linear intuition. For example, halfway through the Genome Project, 1 percent of the genome had been collected after seven years. So mainstream critics said: I told you this wasn’t going to work. Here you are, seven years, 1 percent. It’s going to take 700 years, just like we said. My reaction at the time was, well, we finished 1 percent, we’re almost done, because 1 percent is only seven doublings from 100 percent, and it had been doubling every year. Indeed, that continued. The project was finished seven years later. That’s continued since the end of the Genome Project. That first genome cost a billion dollars. We’re now down to $1,000. And every other aspect of what we call biotechnology — understanding this data, modeling, simulating it, and, most importantly, reprogramming it — is progressing exponentially.
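A quick sketch of that doubling arithmetic (an illustrative aside, not part of the talk): starting from 1 percent and doubling annually, seven doublings are enough to pass 100 percent.

```python
# Starting at 1% complete, with the amount of sequenced genome doubling every year:
pct, years = 1.0, 0
while pct < 100.0:
    pct *= 2
    years += 1
print(years, pct)  # -> 7 doublings: 1 -> 2 -> 4 -> 8 -> 16 -> 32 -> 64 -> 128 (percent)
```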

And I’ll mention just one implication of the law of accelerating returns, because it has many ripple effects and it’s really behind this remarkable digital revolution we see: the 50 percent deflation rate in information technologies. So I can get the same computation, communication, genetic sequencing, brain data as I could a year ago for half the price today. That’s why you can buy an iPhone or an Android phone that’s twice as good as the one two years ago for half the price. You put some of the improved price/performance into price and some of it into performance. So you asked me just a few minutes ago a question that I was also asked by Christine Lagarde, head of the IMF, at her annual meeting recently: How come we don’t see this in the productivity statistics? And that’s because we factor it out. We put it in the numerator and the denominator.

So when this girl in Africa buys a smartphone for $75, it counts as $75 of economic activity, despite the fact that it’s literally a trillion dollars of computation circa 1960, a billion dollars circa 1980. It’s got millions of dollars of free information apps, just one of which is an encyclopedia far better than the one I saved up for years as a teenager to buy. All that counts for zero in economic activity because it’s free. So we really don’t count the value of these products. And people who compile these statistics say, well, we take into consideration quality improvement in products, but really using models that use the old linear assumption.

So then Christine said, yes, it’s true the digital world’s amazing. We can do all these remarkable things. But you can’t eat information technology. You can’t wear it. You can’t live in it. And that’s — and my next point is all of that’s going to change. We’ll be able to print out clothing using 3-D printers. Not today. We’re kind of in the hype phase of 3-D printing, but in the early 2020s we’ll be able to print out clothing.

There’ll be lots of cool open-source designs you can download for free. We’ll still have a fashion industry, just like we still have a music and movie and book industry. Coexistence of free, open-source products — which are a great leveler — and proprietary products. We’ll print — we’ll be able to create food very inexpensively using 3-D — vertical agriculture, using hydroponic plants for fruits and vegetables, in-vitro cloning of muscle tissue for meat. The first hamburger to be produced this way has already been consumed. It was expensive. It was a few hundred thousand dollars, but — but it was very good. (Laughter.)

THOMPSON: A free side — that’s like, what it costs.

KURZWEIL: But that’s research costs. So it’s a long discussion, but all of these different resources are going to become information technologies. A building was put together recently, as a demo, using little modules snapped together, Lego style, printed on a 3-D printer in Asia. Put together a three-story office building in a few days. That’ll be the nature of construction in the 2020s. 3-D printers will print out the physical things we need.

She said, OK, but we’re getting very crowded. Land is not expanding. That’s not an information technology. And I said, actually there’s lots of land. We just have decided to crowd ourselves together so we can work and play together. Cities were an early invention. We’re already spreading out with even the crude virtual and augmented reality we have today. Try taking a train trip anywhere in the world, and you’ll see that 97% of the land is unused.

Daydreaming means you’re smart and creative

MRI scan showing regions of the default mode network, involved in daydreaming (credit: C C)

Daydreaming during meetings or class might actually be a sign that you’re smart and creative, according to a Georgia Institute of Technology study.

“People with efficient brains may have too much brain capacity to stop their minds from wandering,” said Eric Schumacher, an associate psychology professor who co-authored a research paper published in the journal Neuropsychologia.

Participants were instructed to focus on a stationary fixation point for five minutes in an MRI machine. Functional MRI (fMRI), which measures changes in brain oxygen levels as a proxy for neural activity, was used.

The researchers examined that data to find correlated brain patterns (parts of the brain that worked together) between the daydreaming “default mode network” (DMN)* and two other brain networks. The team compared that data with tests of the participants that measured their intellectual and creative ability and with a questionnaire about how much a participant’s mind wandered in daily life.**
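A minimal sketch of the kind of analysis described here: compute within- and between-network resting-state connectivity per participant, then correlate it with a trait mind-wandering score. The data below are random stand-ins, not the study’s dataset, and the ROI counts and score range are arbitrary assumptions:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_subjects, n_timepoints = 30, 150   # stand-in values, not the study's sample
dmn_rois, fpcn_rois = 4, 4           # a few regions of interest per network

def mean_connectivity(ts_a, ts_b=None):
    """Average pairwise Pearson correlation within one ROI set, or between two sets."""
    if ts_b is None:
        c = np.corrcoef(ts_a)                    # within-network connectivity
        return c[np.triu_indices_from(c, k=1)].mean()
    c = np.corrcoef(ts_a, ts_b)                  # between-network block
    return c[:ts_a.shape[0], ts_a.shape[0]:].mean()

dmn_conn, dmn_fpcn_conn, mw_scores = [], [], []
for s in range(n_subjects):
    dmn = rng.standard_normal((dmn_rois, n_timepoints))    # resting-state time series
    fpcn = rng.standard_normal((fpcn_rois, n_timepoints))
    dmn_conn.append(mean_connectivity(dmn))
    dmn_fpcn_conn.append(mean_connectivity(dmn, fpcn))
    mw_scores.append(rng.uniform(1, 6))                    # stand-in questionnaire score

# The study's key question, in miniature: does connectivity track trait mind wandering?
r, p = pearsonr(dmn_conn, mw_scores)
print(f"within-DMN connectivity vs. mind wandering: r={r:.2f}, p={p:.2f}")
r, p = pearsonr(dmn_fpcn_conn, mw_scores)
print(f"DMN-FPCN connectivity vs. mind wandering:   r={r:.2f}, p={p:.2f}")
```

With real data, positive correlations here would correspond to the paper’s reported association between trait mind wandering and DMN (and DMN-FPCN) connectivity.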

Is your brain efficient?

The scientists found a correlation between mind wandering and fluid intelligence, creative ability, and more efficient brain systems.

How can you tell if your brain is efficient? One clue is that you can zone in and out of conversations or tasks when appropriate, then naturally tune back in without missing important points or steps, according to Schumacher. He says higher efficiency also means more capacity to think and the ability to mind-wander when performing easy tasks.

“Our findings remind me of the absent-minded professor — someone who’s brilliant, but off in his or her own world, sometimes oblivious to their own surroundings,” said Schumacher. “Or school children who are too intellectually advanced for their classes. While it may take five minutes for their friends to learn something new, they figure it out in a minute, then check out and start daydreaming.”

The study also involved researchers at the University of New Mexico, U.S. Army Research Laboratory, University of Pennsylvania, and Charles River Analytics. It was funded by the National Science Foundation and based on work supported by the U.S. Intelligence Advanced Research Projects Activity (IARPA).

Performing on autopilot

However, recent research at the University of Cambridge published in the Proceedings of the National Academy of Sciences showed that daydreaming also plays an important role in allowing us to switch to “autopilot” once we are familiar with a task.

“Rather than waiting passively for things to happen to us, we are constantly trying to predict the environment around us,” says Deniz Vatansever, who carried out the study as part of his PhD at the University of Cambridge and who is now based at the University of York.

“Our evidence suggests it is the default mode network that enables us to do this. It is essentially like an autopilot that helps us make fast decisions when we know what the rules of the environment are. So for example, when you’re driving to work in the morning along a familiar route, the default mode network will be active, enabling us to perform our task without having to invest lots of time and energy into every decision.”

In the study, 28 volunteers took part in a task while lying inside a magnetic resonance imaging (MRI) scanner. Functional MRI (fMRI) was also used.

This new study supports an idea expounded upon by Daniel Kahneman, winner of the 2002 Nobel Memorial Prize in Economics, in his book Thinking, Fast and Slow: that there are two systems that help us make decisions — a rational system that helps us reach calculated decisions, and a fast system that allows us to make intuitive decisions. The new research suggests this latter system may be linked with the DMN.

The researchers believe their findings have relevance to brain injury, particularly following traumatic brain injury, where problems with memory and impulsivity can substantially compromise social reintegration. They say the findings may also have relevance for mental health disorders, such as addiction, depression and obsessive compulsive disorder — where particular thought patterns drive repeated behaviors — and with the mechanisms of anesthetic agents and other drugs on the brain.

* In 2001, scientists at the Washington University School of Medicine found that a collection of brain regions appeared to be more active during such states of rest. This network was named the “default mode network.” While it has since been linked to, among other things, daydreaming, thinking about the past, planning for the future, and creativity, its precise function is unclear.

** Specifically, the researchers examined the extent to which the default mode network (DMN), along with the dorsal attention network (DAN) and frontoparietal control network (FPCN), correlate with the tendency to mind wandering in daily life, based on a five-minute resting state fMRI scan. They also used measures of executive function, fluid intelligence (Ravens), and creativity (Remote Associates Task).


Abstract of Functional connectivity within and between intrinsic brain networks correlates with trait mind wandering

Individual differences across a variety of cognitive processes are functionally associated with individual differences in intrinsic networks such as the default mode network (DMN). The extent to which these networks correlate or anticorrelate has been associated with performance in a variety of circumstances. Despite the established role of the DMN in mind wandering processes, little research has investigated how large-scale brain networks at rest relate to mind wandering tendencies outside the laboratory. Here we examine the extent to which the DMN, along with the dorsal attention network (DAN) and frontoparietal control network (FPCN) correlate with the tendency to mind wander in daily life. Participants completed the Mind Wandering Questionnaire and a 5-min resting state fMRI scan. In addition, participants completed measures of executive function, fluid intelligence, and creativity. We observed significant positive correlations between trait mind wandering and 1) increased DMN connectivity at rest and 2) increased connectivity between the DMN and FPCN at rest. Lastly, we found significant positive correlations between trait mind wandering and fluid intelligence (Ravens) and creativity (Remote Associates Task). We interpret these findings within the context of current theories of mind wandering and executive function and discuss the possibility that certain instances of mind wandering may not be inherently harmful. Due to the controversial nature of global signal regression (GSReg) in functional connectivity analyses, we performed our analyses with and without GSReg and contrast the results from each set of analyses.


Abstract of Default mode contributions to automated information processing

Concurrent with mental processes that require rigorous computation and control, a series of automated decisions and actions govern our daily lives, providing efficient and adaptive responses to environmental demands. Using a cognitive flexibility task, we show that a set of brain regions collectively known as the default mode network plays a crucial role in such “autopilot” behavior, i.e., when rapidly selecting appropriate responses under predictable behavioral contexts. While applying learned rules, the default mode network shows both greater activity and connectivity. Furthermore, functional interactions between this network and hippocampal and parahippocampal areas as well as primary visual cortex correlate with the speed of accurate responses. These findings indicate a memory-based “autopilot role” for the default mode network, which may have important implications for our current understanding of healthy and adaptive brain processing.

A tool to debug ‘black box’ deep-learning neural networks

Oops! A new debugging tool called DeepXplore generates real-world test images meant to expose logic errors in deep neural networks. The darkened photo at right tricked one set of neurons into telling the car to turn into the guardrail. After catching the mistake, the tool retrains the network to fix the bug. (credit: Columbia Engineering)

Researchers at Columbia and Lehigh universities have developed a method for error-checking the reasoning of the thousands or millions of neurons in unsupervised (self-taught) deep-learning neural networks, such as those used in self-driving cars.

Their tool, DeepXplore, feeds confusing, real-world inputs into the network to expose rare instances of flawed reasoning, such as the incident last year when Tesla’s autonomous car collided with a truck it mistook for a cloud, killing the driver. Deep learning systems don’t explain how they make their decisions, which makes them hard to trust.

Modeled after the human brain, deep learning uses layers of artificial neurons that process and consolidate information. This results in a set of rules to solve complex problems, from recognizing friends’ faces online to translating email written in Chinese. The technology has achieved impressive feats of intelligence, but as more tasks become automated this way, concerns about safety, security, and ethics are growing.

Finding bugs by generating test images

Debugging the neural networks in self-driving cars is an especially slow and tedious process, with no way to measure how thoroughly logic within the network has been checked for errors. Current limited approaches include randomly feeding manually generated test images into the network until one triggers a wrong decision (telling the car to veer into the guardrail, for example); and “adversarial testing,” which automatically generates test images that it alters incrementally until one image tricks the system.

The new DeepXplore solution — presented Oct. 29, 2017 in an open-access paper at ACM’s Symposium on Operating Systems Principles in Shanghai — can find a wider variety of bugs than random or adversarial testing by using the network itself to generate test images likely to cause neuron clusters to make conflicting decisions, according to the researchers.

To simulate real-world conditions, photos are lightened and darkened, and made to mimic the effect of dust on a camera lens, or a person or object blocking the camera’s view. A photo of the road may be darkened just enough, for example, to cause one set of neurons to tell the car to turn left, and two other sets of neurons to tell it to go right.

After inferring that the first set misclassified the photo, DeepXplore automatically retrains the network to recognize the darker image and fix the bug. Using optimization techniques, researchers have designed DeepXplore to trigger as many conflicting decisions with its test images as it can while maximizing the number of neurons activated.
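Below is a simplified, hypothetical sketch of the differential-testing idea described above (not the actual DeepXplore code). Two independently trained “steering” models, here tiny random-weight stand-ins, are fed progressively darkened versions of an image; a disagreement flags a likely corner case, and a set of activated neurons tracks coverage:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for two independently trained steering networks (random weights, hypothetical).
# Each maps a flattened image to logits over {left, straight, right} via one hidden ReLU layer.
def make_model(seed, n_in=64, n_hidden=32, n_out=3):
    r = np.random.default_rng(seed)
    w1, w2 = r.standard_normal((n_in, n_hidden)), r.standard_normal((n_hidden, n_out))
    def model(x):
        hidden = np.maximum(0, x @ w1)           # hidden-layer neuron activations
        return hidden, hidden @ w2               # (activations, output logits)
    return model

model_a, model_b = make_model(1), make_model(2)
covered = set()                                  # neuron-coverage bookkeeping for model_a

image = rng.random(64)                           # stand-in "road photo", pixels in [0, 1]
for step in range(20):
    darker = image * (1.0 - 0.05 * step)         # darken the photo a little more each step
    acts_a, logits_a = model_a(darker)
    _, logits_b = model_b(darker)
    covered.update(np.flatnonzero(acts_a > 0).tolist())
    if logits_a.argmax() != logits_b.argmax():   # the models disagree -> likely corner case
        print(f"step {step}: models disagree ({logits_a.argmax()} vs {logits_b.argmax()}); "
              f"neuron coverage so far: {len(covered)}/32")
        break
```

The real system replaces this naive step-wise darkening with gradient-based search that jointly maximizes model disagreement and neuron coverage, as described in the abstract below.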

“You can think of our testing process as reverse-engineering the learning process to understand its logic,” said co-developer Suman Jana, a computer scientist at Columbia Engineering and a member of the Data Science Institute. “This gives you some visibility into what the system is doing and where it’s going wrong.”

Testing their software on 15 state-of-the-art neural networks, including Nvidia’s Dave 2 network for self-driving cars, the researchers uncovered thousands of bugs missed by previous techniques. They report activating up to 100 percent of network neurons — 30 percent more on average than either random or adversarial testing — and bringing overall accuracy up to 99 percent in some networks, a 3 percent improvement on average.*

The ultimate goal: certifying a neural network is bug-free

Still, a high level of assurance is needed before regulators and the public are ready to embrace robot cars and other safety-critical technology like autonomous air-traffic control systems. One limitation of DeepXplore is that it can’t certify that a neural network is bug-free. That requires isolating and testing the exact rules the network has learned.

A new tool developed at Stanford University, called ReluPlex, uses the power of mathematical proofs to do this for small networks. Costly in computing time, but offering strong guarantees, this small-scale verification technique complements DeepXplore’s full-scale testing approach, said ReluPlex co-developer Clark Barrett, a computer scientist at Stanford.

“Testing techniques use efficient and clever heuristics to find problems in a system, and it seems that the techniques in this paper are particularly good,” he said. “However, a testing technique can never guarantee that all the bugs have been found, or similarly, if it can’t find any bugs, that there are, in fact, no bugs.”

DeepXplore has applications beyond self-driving cars. It can find malware disguised as benign code in anti-virus software, and uncover discriminatory assumptions baked into predictive policing and criminal sentencing software, for example.

The team has made their open-source software public for other researchers to use, and launched a website to let people upload their own data to see how the testing process works.

* The team evaluated DeepXplore on real-world datasets including Udacity self-driving car challenge data, image data from ImageNet and MNIST, Android malware data from Drebin, and PDF malware data from Contagio/VirusTotal, as well as production-quality deep neural networks trained on these datasets, such as networks ranked at the top of the Udacity self-driving car challenge. Their results show that DeepXplore found thousands of incorrect corner-case behaviors (e.g., self-driving cars crashing into guard rails) in 15 state-of-the-art deep learning models with a total of 132,057 neurons, trained on five popular datasets containing around 162 GB of data.


Abstract of DeepXplore: Automated Whitebox Testing of Deep Learning Systems

Deep learning (DL) systems are increasingly deployed in safety- and security-critical domains including self-driving cars and malware detection, where the correctness and predictability of a system’s behavior for corner case inputs are of great importance. Existing DL testing depends heavily on manually labeled data and therefore often fails to expose erroneous behaviors for rare inputs.

We design, implement, and evaluate DeepXplore, the first whitebox framework for systematically testing real-world DL systems. First, we introduce neuron coverage for systematically measuring the parts of a DL system exercised by test inputs. Next, we leverage multiple DL systems with similar functionality as cross-referencing oracles to avoid manual checking. Finally, we demonstrate how finding inputs for DL systems that both trigger many differential behaviors and achieve high neuron coverage can be represented as a joint optimization problem and solved efficiently using gradient-based search techniques.

DeepXplore efficiently finds thousands of incorrect corner case behaviors (e.g., self-driving cars crashing into guard rails and malware masquerading as benign software) in state-of-the-art DL models with thousands of neurons trained on five popular datasets including ImageNet and Udacity self-driving challenge data. For all tested DL models, on average, DeepXplore generated one test input demonstrating incorrect behavior within one second while running only on a commodity laptop. We further show that the test inputs generated by DeepXplore can also be used to retrain the corresponding DL model to improve the model’s accuracy by up to 3%.

Researchers watch video images people are seeing, decoded from their fMRI brain scans in near-real-time

Purdue Engineering researchers have developed a system that can show what people are seeing in real-world videos, decoded from their fMRI brain scans — an advanced new form of “mind-reading” technology that could lead to new insights in brain function and to advanced AI systems.

The research builds on previous pioneering research at UC Berkeley’s Gallant Lab, which created a computer program in 2011 that translated fMRI brain-wave patterns into images that loosely mirrored a series of images being viewed.

The new system also decodes moving images that subjects see in videos and does it in near-real-time. But the researchers were also able to determine the subjects’ interpretations of the images they saw — for example, interpreting an image as a person or thing — and could even reconstruct a version of the original images that the subjects saw.

Deep-learning AI system for watching what the brain sees

Watching in near-real-time what the brain sees. Visual information generated by a video (a) is processed in a cascade from the retina through the thalamus (LGN area) to several levels of the visual cortex (b), detected from fMRI activity patterns (c) and recorded. A powerful deep-learning technique (d) then models this detected cortical visual processing. Called a convolutional neural network (CNN), this model transforms every video frame into multiple layers of features, ranging from orientations and colors (the first visual layer) to high-level object categories (face, bird, etc.) in semantic (meaning) space (the eighth layer). The trained CNN model can then be used to reverse this process, reconstructing the original videos — even creating new videos that the CNN model had never watched. (credit: Haiguang Wen et al./Cerebral Cortex)

The researchers acquired 11.5 hours of fMRI data from each of three women subjects watching 972 video clips, including clips showing people or animals in action and nature scenes.

To decode the fMRI images, the researchers pioneered the use of a deep-learning technique called a convolutional neural network (CNN). The trained CNN model was able to accurately decode the fMRI blood-flow data to identify specific image categories (such as the face, bird, ship, and scene examples in the above figure). The researchers could compare (in near-real-time) the viewed video images side-by-side with the computer’s visual interpretation of what the person’s brain saw.
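As a rough illustration of the decoding idea (random stand-in data; a simple linear ridge decoder rather than the authors’ exact pipeline), one can learn a mapping from fMRI voxel patterns to a CNN feature space and then classify a new brain pattern by the nearest category in that space. All sizes and names below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in dimensions: voxels recorded, CNN semantic-layer features, image categories.
n_train, n_voxels, n_features, n_categories = 200, 500, 64, 4

# Pretend each category (e.g., face, bird, ship, scene) has a mean CNN feature vector,
# and each training clip's features are a noisy copy of its category mean.
category_means = rng.standard_normal((n_categories, n_features))
labels = rng.integers(0, n_categories, n_train)
cnn_features = category_means[labels] + 0.3 * rng.standard_normal((n_train, n_features))

# Simulate fMRI responses as a noisy linear transform of the CNN features.
true_map = 0.1 * rng.standard_normal((n_voxels, n_features))
fmri = cnn_features @ true_map.T + 0.5 * rng.standard_normal((n_train, n_voxels))

# Fit a ridge-regression decoder, W = (X'X + lam*I)^-1 X'Y, from voxels to CNN features.
lam = 10.0
W = np.linalg.solve(fmri.T @ fmri + lam * np.eye(n_voxels), fmri.T @ cnn_features)

# Decode a new brain pattern and pick the closest category in feature space.
test_label = 2
test_fmri = category_means[test_label] @ true_map.T + 0.5 * rng.standard_normal(n_voxels)
decoded = test_fmri @ W
predicted = np.argmin(((category_means - decoded) ** 2).sum(axis=1))
print("true category:", test_label, "decoded category:", predicted)
```

The published work goes much further, reconstructing the video frames themselves and decoding across subjects, but the same mapping between voxel space and CNN feature space underlies it.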

Reconstruction of a dynamic visual experience in the experiment. The top row shows the example movie frames seen by one subject; the bottom row shows the reconstruction of those frames based on the subject’s cortical fMRI responses to the movie. (credit: Haiguang Wen et al./ Cerebral Cortex)

The researchers were also able to figure out how certain locations in the visual cortex were associated with specific information a person was seeing.

Decoding how the visual cortex works

CNNs have been used to recognize faces and objects, and to study how the brain processes static images and other visual stimuli. But the new findings represent the first time CNNs have been used to see how the brain processes videos of natural scenes. This is “a step toward decoding the brain while people are trying to make sense of complex and dynamic visual surroundings,” said doctoral student Haiguang Wen.

Wen was first author of a paper describing the research, appearing online Oct. 20 in the journal Cerebral Cortex.

“Neuroscience is trying to map which parts of the brain are responsible for specific functionality,” Wen explained. “This is a landmark goal of neuroscience. I think what we report in this paper moves us closer to achieving that goal. Using our technique, you may visualize the specific information represented by any brain location, and screen through all the locations in the brain’s visual cortex. By doing that, you can see how the brain divides a visual scene into pieces, and re-assembles the pieces into a full understanding of the visual scene.”

The researchers also were able to use models trained with data from one human subject to predict and decode the brain activity of a different human subject, a process called “cross-subject encoding and decoding.” This finding is important because it demonstrates the potential for broad applications of such models to study brain function, including people with visual deficits.

The research has been funded by the National Institute of Mental Health. The work is affiliated with the Purdue Institute for Integrative Neuroscience. Data reported in this paper are also publicly available at the Laboratory of Integrated Brain Imaging website.

UPDATE Oct. 28, 2017 — Additional figure added, comparing the original images and those reconstructed from the subject’s cortical fMRI responses to the movie; subhead revised to clarify the CNN function. Two references also added.


Abstract of Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision

Convolutional neural network (CNN) driven by image recognition has been shown to be able to explain cortical responses to static pictures at ventral-stream areas. Here, we further showed that such CNN could reliably predict and decode functional magnetic resonance imaging data from humans watching natural movies, despite its lack of any mechanism to account for temporal dynamics or feedback processing. Using separate data, encoding and decoding models were developed and evaluated for describing the bi-directional relationships between the CNN and the brain. Through the encoding models, the CNN-predicted areas covered not only the ventral stream, but also the dorsal stream, albeit to a lesser degree; single-voxel response was visualized as the specific pixel pattern that drove the response, revealing the distinct representation of individual cortical location; cortical activation was synthesized from natural images with high-throughput to map category representation, contrast, and selectivity. Through the decoding models, fMRI signals were directly decoded to estimate the feature representations in both visual and semantic spaces, for direct visual reconstruction and semantic categorization, respectively. These results corroborate, generalize, and extend previous findings, and highlight the value of using deep learning, as an all-in-one model of the visual cortex, to understand and decode natural vision.

IBM scientists say radical new ‘in-memory’ computing architecture will speed up computers by 200 times

(Left) Schematic of conventional von Neumann computer architecture, where the memory and computing units are physically separated. To perform a computational operation and to store the result in the same memory location, data is shuttled back and forth between the memory and the processing unit. (Right) An alternative architecture where the computational operation is performed in the same memory location. (credit: IBM Research)

IBM Research announced Tuesday (Oct. 24, 2017) that its scientists have developed the first “in-memory computing” or “computational memory” computer system architecture, which is expected to yield 200x improvements in computer speed and energy efficiency — enabling ultra-dense, low-power, massively parallel computing systems.

Their concept is to use one device (such as phase change memory or PCM*) for both storing and processing information. That design would replace the conventional “von Neumann” computer architecture, used in standard desktop computers, laptops, and cellphones, which splits computation and memory into two different devices. That requires moving data back and forth between memory and the computing unit, making them slower and less energy-efficient.

The researchers used PCM devices made from a germanium antimony telluride alloy, which is stacked and sandwiched between two electrodes. When the scientists apply a tiny electric current to the material, they heat it, which alters its state from amorphous (with a disordered atomic arrangement) to crystalline (with an ordered atomic configuration). The IBM researchers have used the crystallization dynamics to perform computation in memory. (credit: IBM Research)

Especially useful in AI applications

The researchers believe this new prototype technology will enable ultra-dense, low-power, and massively parallel computing systems that are especially useful for AI applications. The researchers tested the new architecture using an unsupervised machine-learning algorithm running on one million phase change memory (PCM) devices, successfully finding temporal correlations in unknown data streams.

“This is an important step forward in our research of the physics of AI, which explores new hardware materials, devices and architectures,” says Evangelos Eleftheriou, PhD, an IBM Fellow and co-author of an open-access paper in the peer-reviewed journal Nature Communications. “As the CMOS scaling laws break down because of technological limits, a radical departure from the processor-memory dichotomy is needed to circumvent the limitations of today’s computers.”

“Memory has so far been viewed as a place where we merely store information,” said Abu Sebastian, PhD, exploratory memory and cognitive technologies scientist at IBM Research and lead author of the paper. “But in this work, we conclusively show how we can exploit the physics of these memory devices to also perform a rather high-level computational primitive. The result of the computation is also stored in the memory devices, and in this sense the concept is loosely inspired by how the brain computes.” Sebastian also leads a European Research Council-funded project on this topic.

* To demonstrate the technology, the authors chose two time-based examples and compared their results with traditional machine-learning methods such as k-means clustering:

  • Simulated Data: one million binary (0 or 1) random processes arranged on a 2D grid following a 1000 x 1000-pixel, black-and-white profile drawing of the famed British mathematician Alan Turing. The IBM scientists made all the pixels blink on and off at the same rate, but the black pixels turned on and off in a weakly correlated manner: when one black pixel blinks, there is a slightly higher probability that another black pixel will also blink. The random processes were assigned to a million PCM devices, and a simple learning algorithm was implemented. With each blink, the PCM array learned, and the devices corresponding to the correlated processes drifted to a high-conductance state, so the conductance map of the PCM array recreated the drawing of Alan Turing. (A simplified software sketch of this correlation-detection scheme follows this list.)
  • Real-World Data: actual rainfall data, collected over a period of six months from 270 weather stations across the USA in one-hour intervals. If it rained within the hour, the reading was labelled “1”; if it didn’t, “0”. Classical k-means clustering and the in-memory computing approach agreed on the classification of 245 of the 270 weather stations. The in-memory approach classified 12 stations as uncorrelated that k-means had marked as correlated, and 13 stations as correlated that k-means had marked as uncorrelated.
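As a rough software analogue of the correlation-detection scheme described above, the following Python sketch simulates many binary processes, a small group of which tends to blink together, and pulses a software stand-in for each PCM cell only when its process fires during a burst of collective activity. The group sizes, probabilities, threshold, and update rule are assumptions chosen for clarity; they are not the exact algorithm reported in the Nature Communications paper.

# Rough software analogue (assumptions, not the exact IBM update rule):
# correlated processes tend to blink together, so a cell that is pulsed only
# when its process fires alongside an unusually large number of others
# accumulates conductance faster than cells tied to independent processes.

import random

N_CORR, N_UNCORR, T = 50, 950, 2000   # correlated / uncorrelated processes, time steps
P = 0.05                              # baseline blink probability per step
conductance = [0.0] * (N_CORR + N_UNCORR)

for _ in range(T):
    common = random.random() < P      # shared driver for the correlated group
    x = [1 if (common or random.random() < 0.2 * P) else 0 for _ in range(N_CORR)]
    x += [1 if random.random() < P else 0 for _ in range(N_UNCORR)]

    total = sum(x)                    # instantaneous activity across all processes
    if total > 1.5 * P * len(x):      # "many processes blinking at once"
        for i, xi in enumerate(x):
            if xi:                    # pulse only the cells whose process blinked
                conductance[i] += 1.0 # stand-in for one partial crystallization step

avg_corr = sum(conductance[:N_CORR]) / N_CORR
avg_uncorr = sum(conductance[N_CORR:]) / N_UNCORR
print(f"correlated avg: {avg_corr:.1f}   uncorrelated avg: {avg_uncorr:.1f}")

Cells assigned to the correlated processes end up with markedly higher conductance than the rest, which is the kind of separation the IBM experiment reads out as a conductance map.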


Abstract of Temporal correlation detection using computational phase-change memory

Conventional computers based on the von Neumann architecture perform computation by repeatedly transferring data between their physically separated processing and memory units. As computation becomes increasingly data centric and the scalability limits in terms of performance and power are being reached, alternative computing paradigms with collocated computation and storage are actively being sought. A fascinating such approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. We present an experimental demonstration using one million phase change memory devices organized to perform a high-level computational primitive by exploiting the crystallization dynamics. Its result is imprinted in the conductance states of the memory devices. The results of using such a computational memory for processing real-world data sets show that this co-existence of computation and storage at the nanometer scale could enable ultra-dense, low-power, and massively-parallel computing systems.

This voice-authentication wearable could block voice-assistant or bank spoofing

“Alexa, turn off my security system.” (credit: Amazon)

University of Michigan (U-M) scientists have developed a voice-authentication system that reduces the risk of being spoofed when you use your voice as a biometric to log into secure services or to command a voice assistant (such as Amazon Echo or Google Home).

A hilarious example of spoofing a voice assistant happened when a Google commercial aired during the 2017 Super Bowl. As actors voiced “OK Google” commands on TV, viewers’ Google Home devices obediently began to play whale noises, flip lights on, and take other actions.

More seriously, an adversary could bypass current voice-as-biometric authentication mechanisms, such as Nuance’s “FreeSpeech” customer authentication platform (used in call centers and banks), by simply impersonating the user’s voice (possibly with Adobe Voco software), the U-M scientists point out.*

The VAuth system

VAuth system (credit: Kassem Fawaz/ACM Mobicom 2017)

The U-M VAuth (continuous voice authentication, pronounced “vee-auth”) system aims to make that a lot more difficult. It uses a tiny wearable device (which could be built into a necklace, earbud/earphones/headset, or eyeglasses) containing an accelerometer (or a special microphone) that detects and measures vibrations on the skin of a person’s face, throat, or chest.

VAuth prototype features accelerometer chip for detecting body voice vibrations and Bluetooth transmitter (credit: Huan Feng et al./ACM)

The team has built a prototype using an off-the-shelf accelerometer and a Bluetooth transmitter, which sends the vibration signal to a real-time matching engine in a device (such as Google Home). The engine matches these vibrations with the sound of that person’s voice to create a unique, secure signature that holds for the entire session (not just at the beginning). The team has also developed matching algorithms and software for Google Now.
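The matching step can be pictured as a correlation test between the two channels. The sketch below is a hedged approximation, not VAuth's published pipeline: it compares short-time energy envelopes of the accelerometer and microphone signals and accepts a command only when the two envelopes track each other. The frame size, threshold, and toy signals are illustrative assumptions.

# Hedged sketch of the matching idea (not VAuth's published pipeline):
# treat the accelerometer stream and the microphone stream as two signals,
# compare their short-time energy envelopes, and accept the command only if
# the normalized correlation between the envelopes clears a threshold.

import numpy as np

def energy_envelope(signal, frame=160):
    """Short-time energy per frame (about 10 ms at a 16 kHz sampling rate)."""
    n = len(signal) // frame
    frames = np.reshape(signal[: n * frame], (n, frame))
    return np.sqrt((frames ** 2).mean(axis=1))

def matches(vibration, audio, threshold=0.7):
    """True if the body-vibration and microphone envelopes track each other."""
    ev, ea = energy_envelope(vibration), energy_envelope(audio)
    m = min(len(ev), len(ea))
    ev, ea = ev[:m] - ev[:m].mean(), ea[:m] - ea[:m].mean()
    denom = np.linalg.norm(ev) * np.linalg.norm(ea)
    corr = float(ev @ ea) / denom if denom else 0.0
    return corr >= threshold

# Toy usage: a command spoken by the wearer produces correlated envelopes,
# while an injected or replayed command leaves the accelerometer quiet.
t = np.linspace(0, 1, 16000)
speech = np.sin(2 * np.pi * 200 * t) * np.sin(2 * np.pi * 3 * t) ** 2   # modulated tone
vibration_owner = 0.3 * speech + 0.01 * np.random.randn(t.size)         # tracks the speech
vibration_other = 0.01 * np.random.randn(t.size)                        # no body vibration
print(matches(vibration_owner, speech))   # expected: True
print(matches(vibration_other, speech))   # expected: False

A real matching engine would also have to segment speech, tolerate differences in sensor placement, and run within the latency budget of the voice assistant; the point here is only the second-channel check that the spoken command and the wearer's body vibrations agree.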

Security holes in voice authentication systems

“Increasingly, voice is being used as a security feature but it actually has huge holes in it,” said Kang Shin, the Kevin and Nancy O’Connor Professor of Computer Science and professor of electrical engineering and computer science at U-M. “If a system is using only your voice signature, it can be very dangerous. We believe you have to have a second channel to authenticate the owner of the voice.”

VAuth requires no training and is immune to voice changes over time and across conditions such as sickness (a sore throat) or tiredness. Voice biometrics, by contrast, require training from each individual who will use them, a major limitation, the team says.

The team tested VAuth with 18 users and 30 voice commands. It achieved 97-percent detection accuracy and a false-positive rate below 0.1 percent, regardless of its position on the body or the user’s language, accent, or even mobility. The researchers say it also successfully thwarts practical attacks such as replay attacks, mangled-voice attacks, and impersonation attacks.

A study on VAuth was presented Oct. 19 at the International Conference on Mobile Computing and Networking (MobiCom 2017) in Snowbird, Utah, and is available for open-access download.

The work was supported by the National Science Foundation. The researchers have applied for a patent and are seeking commercialization partners to help bring the technology to market.

* As explained in this KurzweilAI article, Adobe Voco technology (aka “Photoshop for voice”) makes it easy to add or replace a word in an audio recording of a human voice by simply editing a text transcript of the recording. New words are automatically synthesized in the speaker’s voice, even if they don’t appear anywhere else in the recording.


Abstract of Continuous Authentication for Voice Assistants

Voice has become an increasingly popular User Interaction (UI) channel, mainly contributing to the current trend of wearables, smart vehicles, and home automation systems. Voice assistants such as Alexa, Siri, and Google Now, have become our everyday fixtures, especially when/where touch interfaces are inconvenient or even dangerous to use, such as driving or exercising. The open nature of the voice channel makes voice assistants difficult to secure, and hence exposed to various threats as demonstrated by security researchers. To defend against these threats, we present VAuth, the first system that provides continuous authentication for voice assistants. VAuth is designed to fit in widely-adopted wearable devices, such as eyeglasses, earphones/buds and necklaces, where it collects the body-surface vibrations of the user and matches it with the speech signal received by the voice assistant’s microphone. VAuth guarantees the voice assistant to execute only the commands that originate from the voice of the owner. We have evaluated VAuth with 18 users and 30 voice commands and find it to achieve 97% detection accuracy and less than 0.1% false positive rate, regardless of VAuth’s position on the body and the user’s language, accent or mobility. VAuth successfully thwarts various practical attacks, such as replay attacks, mangled voice attacks, or impersonation attacks. It also incurs low energy and latency overheads and is compatible with most voice assistants.