round-up | New high-resolution virtual-reality and augmented-reality headsets

Oculus Go (credit: Oculus)

Oculus announced Monday (April 30) that its much-anticipated Oculus Go virtual-reality headset is now available, priced at $199 (32GB storage version).

Oculus Go is a standalone headset (no need for a tethered PC, as with Oculus Rift, or for inserting a phone, as with Gear VR and other consumer VR devices). It features a high-resolution 2560 x 1440 pixels (538 ppi) display, hand controller, and built-in surround sound and microphone. (Ars Technica has an excellent review, including technical details.)

Anshar Online (credit: OZWE Games/Ars Technica)

More than 1,000 games and experiences for the Oculus Go are already available. Notable: the futuristic, Ready Player One-style Anshar Online multiplayer space shooter, and Oculus Rooms, a social lounge where friends (as avatars) can watch TV or movies together on a 180-inch virtual screen and play Hasbro board games — redefining the solitary VR experience.

Oculus Rooms (credit: Oculus)

Next-gen Oculus VR headsets

Today (May 2) at F8, Facebook’s developer conference, the company revealed a prototype Oculus VR headset called “Half Dome.” It widens the field of view from Oculus Go’s 100 degrees to 140 degrees, letting wearers see more of the visual world in their periphery. Its variable-focus displays move up and down depending on what you’re focusing on, making objects close to the wearer’s eyes appear much sharper and crisper, Engadget reports.

Prototype Half Dome VR headset (credit: Facebook)

Apple’s super-high-resolution wireless VR-AR headset

Meanwhile, Apple is working on a powerful wireless headset for AR and VR, targeted for a 2020 release, CNET reported Friday, based on sources (Apple has not commented).* Code-named T288, the headset would feature two super-high-resolution 8K** displays (one for each eye; four times the resolution of today’s 4K TVs), making VR and AR images look highly lifelike — far beyond any devices currently on the market.

To handle that extremely high resolution (and the associated data rate) while connecting to an external processor box, T288 would use a high-speed, short-range 60 GHz wireless technology called WiGig, capable of data transfer rates up to 7 Gbit/s. The headset would also use a smaller, faster, more power-efficient processor, built on a 5-nanometer process node and designed by Apple.
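
For a sense of scale, here is a rough, back-of-the-envelope calculation of why even a 7 Gbit/s WiGig link would require heavy compression (or foveated rendering) to carry two uncompressed 8K streams. The refresh rate and bit depth below are illustrative assumptions, not reported specs:

```python
# Back-of-the-envelope arithmetic (refresh rate and bit depth are assumptions,
# not reported T288 specs): raw bandwidth of two 8K streams vs. a 7 Gbit/s link.
width, height = 7680, 4320            # 8K UHD pixels per eye
refresh_hz = 90                       # assumed VR refresh rate
bits_per_pixel = 24                   # assumed 8-bit RGB color

raw_bits_per_sec = 2 * width * height * refresh_hz * bits_per_pixel
print(f"Raw dual-8K stream: {raw_bits_per_sec / 1e9:.0f} Gbit/s")   # ~143 Gbit/s

wigig_bits_per_sec = 7e9
print(f"Compression factor needed: ~{raw_bits_per_sec / wigig_bits_per_sec:.0f}x")
```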

A patent application for VR/AR experiences in vehicles

(credit: Apple/USPTO)

In related news, Apple has applied for a patent for a system using virtual reality (VR) and augmented reality (AR) to help alleviate both motion sickness and boredom in a moving vehicle.

The patent application covers three cases: experiences and environments using VR, AR, and mixed reality (VR + AR + the real world). Apple claims that the design will reduce nausea from vehicle movement (or from perceived movement in a virtual-reality experience).

It also claims to provide (as entertainment) motion or acceleration effects via a vehicle seat, vehicle movement, or on-board vehicle systems (such as audio effects, a heating/cooling fan blowing “wind,” or moving/shaking the seat).

Jurassic World: Blue (credit: Universal, Felix & Paul)

For example, imagine riding in a car on a road that morphs into a prehistoric theme park (such as the Jurassic World: Blue VR experience from Universal Pictures and Felix & Paul Studios, which was announced Monday to coincide with the release of Oculus Go).

* At its June 2017 Worldwide Developers Conference (WWDC), Apple unveiled ARKit to enable developers to create augmented-reality apps for Apple products. Apple will hold its 29th WWDC on June 4–8 in San Jose.

** “8K resolution (8K UHD) is the current highest ultra high definition television (UHD TV) resolution in digital television and digital cinematography,” according to Wikipedia. “8K refers to the horizontal resolution, 7,680 pixels, forming the total image dimensions (7680×4320), otherwise known as 4320p. … It has four times as many pixels [as 4K], or 16 times as many pixels as Full HD. … 8K is speculated to become a mainstream consumer display resolution around 2023.”

New sensors monitor brain activity and blood flow deeper in the brain with high sensitivity and high speed

Magnetic calcium-responsive nanoparticles (dark centers are magnetic cores) respond within seconds to calcium ion changes by clustering (Ca+ ions, right) or expanding (Ca- ions, left), creating a magnetic contrast change that can be detected with MRI, indicating brain activation. (High levels of calcium outside the neurons correlate with low neuron activity; when calcium concentrations drop, it means neurons in that area are firing electrical impulses.) Blue: C2AB “molecular glue” (credit: The researchers)

Calcium-based MRI sensor enables deep brain imaging

MIT neuroscientists have developed a new magnetic resonance imaging (MRI) sensor that allows them to monitor neural activity deep within the brain by tracking calcium ions.

Calcium ions are directly linked to neuronal firing, so tracking them provides a more direct, higher-resolution readout than the changes in blood flow detected by functional MRI (fMRI), which offer only an indirect indication of neural activity. The new sensor can also monitor large areas of the brain, unlike fluorescent molecules used to label calcium and image it with traditional microscopy, which is limited to small regions of the brain.

A calcium-based MRI sensor could allow researchers to link specific brain functions directly to specific neuron activity, and to determine how distant brain regions communicate with each other during particular tasks. The research is described in a paper in the April 30 issue of Nature Nanotechnology. Source: MIT

New technique for measuring blood flow in the brain uses laser light shined into the head (“sample arm” path) through the skull. The return signal is boosted by a reference light beam and returned to a detector camera chip. (credit: Srinivasan lab, UC Davis)

Measuring deep-tissue blood flow at high speed

Biomedical engineers at the University of California, Davis, have developed a more-effective, lower-cost technique for measuring deep tissue blood flow in the brain at high speed. It could be especially useful for patients with stroke or traumatic brain injury.

The technique, called “interferometric diffusing wave spectroscopy” (iDWS), replaces about 20 photon-counting detectors in diffusing wave spectroscopy (DWS) devices (which cost a few thousand dollars each) with a single low-cost CMOS-based digital-camera chip.

The NIH-funded work is described in an open-access paper published April 26 in the journal Optica. Source: UC Davis

round-up | Three radical new user interfaces

Holodeck-style holograms could revolutionize videoconferencing

A “truly holographic” videoconferencing system has been developed by researchers at Queen’s University in Kingston, Ontario. With TeleHuman 2, objects appear as stereoscopic images, as if inside a pod (not as two-dimensional video projected on a flat piece of glass). Multiple users can walk around and view the objects from all sides simultaneously — as in Star Trek’s Holodeck.

Teleporting for distance meetings. TeleHuman 2 “teleports” people live — allowing for meetings at a distance. No headset or 3D glasses required.

The researchers presented the system in an open-access paper at CHI 2018, the ACM CHI Conference on Human Factors in Computing Systems in Montreal on April 25.

(Left) Remote capture room with stereo 2K cameras, multiple surround microphones, and displays. (Right) TeleHuman 2 display and projector (credit: Human Media Lab)

Interactive smart wall acts as giant touch screen, senses electromagnetic activity in room

Researchers at Carnegie Mellon University and Disney Research have devised a system called Wall++ for creating interactive “smart walls” that sense human touch, gestures, and signals from appliances.

By using masking tape and nickel-based conductive paint, a user would create a pattern of capacitive-sensing electrodes on the wall of a room (or a building) and then paint it over. The electrodes would be connected to sensors.

Wall++ (credit: Carnegie Mellon University)

Acting as a sort of huge tablet, the wall could support touch-tracking or motion-sensing uses such as dimming or turning lights on/off, controlling speaker volume, serving as a smart thermostat, playing full-body video games, or creating a huge digital whiteboard.

A passive electromagnetic sensing mode could also allow for detecting devices that are on or off (by noise signature). And a small, signal-emitting wristband could enable user localization and identification for collaborative gaming or teaching, for example.
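
As a loose illustration of the touch-tracking idea (this is not the researchers’ code, and the electrode grid, readings, and threshold below are hypothetical), a touch position could be estimated as the weighted centroid of capacitance changes across the wall’s electrode grid:

```python
import numpy as np

# Illustrative sketch (not the Wall++ implementation): estimate a touch position
# as the weighted centroid of capacitance changes across a grid of wall electrodes.
def locate_touch(baseline, reading, threshold=5.0):
    """baseline, reading: 2D arrays of per-electrode capacitance values."""
    delta = reading - baseline              # a touch raises capacitance locally
    delta[delta < threshold] = 0.0          # ignore electrodes below the noise floor
    if delta.sum() == 0:
        return None                         # no touch detected
    rows, cols = np.indices(delta.shape)
    y = (rows * delta).sum() / delta.sum()  # weighted centroid, row direction
    x = (cols * delta).sum() / delta.sum()  # weighted centroid, column direction
    return x, y

# Hypothetical example: a touch near electrode (row 2, column 3) on a 4 x 6 grid
baseline = np.zeros((4, 6))
reading = baseline.copy()
reading[2, 3] = 40.0
reading[2, 2] = 15.0
print(locate_touch(baseline, reading))      # approximately (2.7, 2.0)
```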

The researchers also presented an open-access paper at CHI 2018.


A smart-watch screen on your skin

LumiWatch, another interactive interface out of Carnegie Mellon, projects a smart-watch touch screen onto your skin. It addresses the tiny-interface bottleneck of smart watches — providing more than five times the interactive surface area for common touchscreen operations, such as tapping and swiping. It was also presented in an open-access paper at CHI 2018.

New immunotherapy treatment for lung cancer dramatically improves survival, researchers report

(credit: Merck)

An immunotherapy treatment — one that boosts the immune system — has improved survival in people newly diagnosed with the most common form of lung cancer (advanced non–small-cell lung cancer), according to an open-access study published in the New England Journal of Medicine.

The study results were presented last Monday, April 16, at the annual American Association for Cancer Research conference in Chicago.

Cutting the risk of dying in half. The new study, led by thoracic medical oncologist Leena Gandhi, MD, PhD, associate professor of medicine and director of the thoracic medical oncology program at NYU’s Perlmutter Cancer Center, shows that treating lung cancer by a combination of immunotherapy with Merck’s Keytruda (aka pembrolizumab) and chemotherapy is more effective than chemotherapy alone, according to a statement by NYU Langone Health.

The combination cut in half the risk of dying or having the cancer worsen, compared to chemo alone, after nearly one year, the Associated Press reported in The New York Times. “The results are expected to quickly set a new standard of care for about 70,000 patients each year in the United States whose lung cancer has already spread by the time it’s found,” the AP stated.

“Another study found that an immunotherapy combo — the Bristol-Myers Squibb drugs Opdivo and Yervoy — worked better than chemo for delaying the time until cancer worsened in advanced lung cancer patients whose tumors have many gene flaws, as nearly half do. But the benefit lasted less than two months on average and it’s too soon to know if the combo improves overall survival, as Keytruda did.”

Micrograph of a squamous carcinoma, a type of non-small-cell lung cancer (credit: Wikipedia)

Removing a cloak. All three of these “checkpoint inhibitor” treatments remove a “cloak” that some cancer cells have that hides the cancer cells from the immune system.

These immunotherapy treatments — which are administered through IVs and cost about $12,500 a month — worked for only about half of patients. But that’s far better than chemo alone has done in the past, notes the AP.

The American Cancer Society estimates that in 2018, there will be about 234,030 new cases of lung cancer in the U.S. and about 154,050 deaths from lung cancer.

Augmented-reality system lets doctors see medical images projected on patients’ skin

Projected medical image (credit: University of Alberta)

New technology is bringing the power of augmented reality into clinical practice. The system, called ProjectDR, shows clinicians 3D medical images such as CT scans and MRI data, projected directly on a patient’s skin.

The technology uses motion capture, similar to how it’s done in movies. Infrared cameras track markers (invisible to the human eye) on the patient’s body, allowing the system to follow the body’s orientation so the projected images move as the patient does.
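
As a general illustration of that registration step (this is not the ProjectDR code; the marker layout and values are hypothetical), a rigid rotation and translation can be estimated from tracked marker positions and then applied to keep the projected image aligned with the patient:

```python
import numpy as np

# Illustrative sketch (not the ProjectDR code): estimate the rigid transform
# (rotation R, translation t) that maps reference marker positions to their
# currently tracked positions, so a projected image can follow the patient.
def rigid_transform(ref, cur):
    """ref, cur: (N, 3) arrays of corresponding 3D marker positions."""
    ref_c, cur_c = ref.mean(axis=0), cur.mean(axis=0)
    H = (ref - ref_c).T @ (cur - cur_c)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                             # optimal rotation (Kabsch method)
    if np.linalg.det(R) < 0:                   # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cur_c - R @ ref_c
    return R, t

# Hypothetical example: markers rotated 90 degrees about z and shifted slightly
ref = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
cur = ref @ Rz.T + np.array([0.1, 0.2, 0.0])
R, t = rigid_transform(ref, cur)
print(np.allclose(R, Rz), t)                   # True, t is about [0.1, 0.2, 0.0]
```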

Applications include teaching, physiotherapy, laparoscopic surgery, and surgical planning.

ProjectDR can also present segmented images — for example, only the lungs or only the blood vessels — depending on what a clinician is interested in seeing.

The researchers plan to test ProjectDR in an operating room to simulate surgery, according to Pierre Boulanger, PhD, a professor in the Department of Computing Science at the University of Alberta, Canada. “We are also doing pilot studies to test the usability of the system for teaching chiropractic and physical therapy procedures,” he said.

They next plan to conduct real surgical pilot studies.


UAlbertaScience | ProjectDR demonstration video

New microscope captures awesome animated 3D movies of cells at high resolution and speed

HHMI Howard Hughes Medical Institute | An immune cell explores a zebrafish’s inner ear

By combining two state-of-the-art imaging technologies, Howard Hughes Medical Institute Janelia Research Campus scientists, led by 2014 chemistry Nobel laureate physicist Eric Betzig, have imaged living cells in unprecedented 3D detail and speed, they report in an April 19, 2018 open-access paper in the journal Science.

In stunning videos of animated worlds, cancer cells crawl, spinal nerve circuits rewire, and we travel down through the endomembrane mesh of a zebrafish eye.

Microscope flaws. The new adaptive optics/lattice light sheet microscopy (AO-LLSM) system addresses two fundamental flaws with traditional microscopes. They’re too slow to study natural three-dimensional (3D) cellular processes in real time and in detail (the sharpest views have been limited to isolated cells immobilized on glass slides).

And the bright light required for imaging causes photobleaching and other cellular damage. These microscopes bathe cells with light thousands to millions of times more intense than the desert sun, says Betzig — damaging or killing the organism being studied.

Merging adaptive optics and rapid scanning. To meet these challenges, Betzig and his team created a microscopy system that merges two technologies: Aberration-correcting adaptive-optics technology used by astronomers to provide clear views of distant celestial objects through Earth’s turbulent atmosphere; and non-invasive lattice light sheet microscopy, which rapidly and repeatedly sweeps an ultra-thin sheet of light through the cell (avoiding light damage) while acquiring a series of 2D images and building a high-resolution 3D movie of subcellular dynamics.

Zebrafish embryo spinal cord neural circuit development (credit: HHMI Howard Hughes Medical Institute)

The combination allows for the study of 3D subcellular processes in their native multicellular environments at high spatiotemporal (space and time) resolution.

Desk version. Currently, the new microscope fills a 10-foot-long table. “It’s a bit of a Frankenstein’s monster right now,” says Betzig. His team is working on a next-generation version that should fit on a small desk at a cost within the reach of individual labs. The first such instrument will go to Janelia’s Advanced Imaging Center, where scientists from around the world can apply to use it. Plans that scientists can use to create their own microscopes will also be made freely available.

Ultimately, Betzig hopes that the adaptive optical version of the lattice microscope will be commercialized, as was the base lattice instrument before it. That could bring adaptive optics into the mainstream.

Movie Gallery: Lattice Light Sheet Microscopy with Adaptive Optics 

Endocytosis in a human stem cell derived organoid. Clathrin-mediated endocytosis in vivo. Clathrin localization in muscle fibers.

How deep learning is about to transform biomedical science

Human induced pluripotent stem cell neurons imaged in phase contrast (gray pixels, left) — currently processed manually with fluorescent labels (color pixels) to make them visible. That’s about to radically change. (credit: Google)

Researchers at Google, Harvard University, and Gladstone Institutes have developed and tested new deep-learning algorithms that can identify details in terabytes of bioimages, replacing slow, less-accurate manual labeling methods.

Deep learning is a type of machine learning that can analyze data, recognize patterns, and make predictions. A new deep-learning approach to biological images, which the researchers call “in silico labeling” (ISL), can automatically find and predict features in images of “unlabeled” cells (cells that have not been manually identified by using fluorescent chemicals).

The new deep-learning network can identify whether a cell is alive or dead, and get the answer right 98 percent of the time (humans can typically only identify a dead cell with 80 percent accuracy) — without requiring invasive fluorescent chemicals, which make it difficult to track tissues over time. The deep-learning network can also predict detailed features such as nuclei and cell type (such as neural or breast cancer tissue).

The deep-learning algorithms are expected to make it possible to handle the enormous 3–5 terabytes of data per day generated by Gladstone Institutes’ fully automated robotic microscope, which can track individual cells for up to several months.

The research was published in the April 12, 2018 issue of the journal Cell.

How to train a deep-learning neural network to predict the identity of cell features in microscope images


Using fluorescent labels with unlabeled images to train a deep neural network to bring out image detail. (Left) An unlabeled phase-contrast microscope transmitted-light image of rat cortex — the center image from the z-stack (vertical stack) of unlabeled images. (Right three images) Labeled images created with three different fluorescent labels, revealing invisible details of cell nuclei (blue), dendrites (green), and axons (red). The numbered outsets at the bottom show magnified views of marked subregions of images. (credit: Finkbeiner Lab)

To explore the new deep-learning approach, Steven Finkbeiner, MD, PhD, the director of the Center for Systems and Therapeutics at Gladstone Institutes in San Francisco, teamed up with computer scientists at Google.

“We trained the [deep learning] neural network by showing it two sets of matching images of the same cells: one unlabeled [such as the black-and-white “phase contrast” microscope image shown in the illustration] and one with fluorescent labels [such as the three colored images shown above],” explained Eric Christiansen, a software engineer at Google Accelerated Science and the study’s first author. “We repeated this process millions of times. Then, when we presented the network with an unlabeled image it had never seen, it could accurately predict where the fluorescent labels belong.” (Fluorescent labels are created by adding chemicals to tissue samples to help visualize details.)

The study used three cell types: human motor neurons derived from induced pluripotent stem cells, rat cortical cultures, and human breast cancer cells. For instance, the deep-learning neural network can identify a physical neuron within a mix of cells in a dish. It can go one step further and predict whether an extension of that neuron is an axon or dendrite (two different but similar-looking elements of the neural cell).

For this study, Google used TensorFlow, an open-source machine learning framework for deep learning originally developed by Google AI engineers. The code for this study, which is open-source on Github, is the result of a collaboration between Google Accelerated Science and two external labs: the Lee Rubin lab at Harvard and the Steven Finkbeiner lab at Gladstone.
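
As a highly simplified sketch of that training setup (this is not the published ISL architecture; the slice count, label channels, and layer sizes are assumptions), a small TensorFlow model could be trained on paired transmitted-light and fluorescence images and then used to predict label channels for new unlabeled images:

```python
import tensorflow as tf

# Minimal sketch (not the published ISL architecture): train a small
# convolutional network to map a z-stack of unlabeled transmitted-light
# images to fluorescent-label channels, using paired images as supervision.
Z_SLICES = 13        # assumed number of transmitted-light z-stack slices
LABELS = 3           # e.g., nuclei, dendrite, and axon channels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, None, Z_SLICES)),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(LABELS, 1, padding="same", activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")  # pixelwise regression of label intensity

# transmitted: (batch, H, W, Z_SLICES) unlabeled image stacks
# fluorescent: (batch, H, W, LABELS) matching fluorescence images
# model.fit(transmitted, fluorescent, epochs=10)
# predicted_labels = model.predict(new_transmitted_stack)
```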

Animation showing the same cells in transmitted light (black and white) and fluorescence (colored) imaging, along with predicted fluorescence labels from the in silico labeling model. Outset 2 shows the model predicts the correct labels despite the artifact in the transmitted-light input image. Outset 3 shows the model infers these processes are axons, possibly because of their distance from the nearest cells. Outset 4 shows the model sees the hard-to-see cell at the top, and correctly identifies the object at the left as DNA-free cell debris. (credit: Google)

Transforming biomedical research

“This is going to be transformative,” said Finkbeiner, who is also a professor of neurology and physiology at UC San Francisco. “Deep learning is going to fundamentally change the way we conduct biomedical science in the future, not only by accelerating discovery, but also by helping find treatments to address major unmet medical needs.”

In his laboratory, Finkbeiner is trying to find new ways to diagnose and treat neurodegenerative disorders, such as Alzheimer’s disease, Parkinson’s disease, and amyotrophic lateral sclerosis (ALS). “We still don’t understand the exact cause of the disease for 90 percent of these patients,” said Finkbeiner. “What’s more, we don’t even know if all patients have the same cause, or if we could classify the diseases into different types. Deep learning tools could help us find answers to these questions, which have huge implications on everything from how we study the disease to the way we conduct clinical trials.”

Without knowing the classifications of a disease, a drug could be tested on the wrong group of patients and seem ineffective, when it could actually work for different patients. With induced pluripotent stem cell technology, scientists could match patients’ own cells with their clinical information, and the deep network could find relationships between the two datasets to predict connections. This could help identify a subgroup of patients with similar cell features and match them to the appropriate therapy, Finkbeiner suggests.

The research was funded by Google, the National Institute of Neurological Disorders and Stroke of the National Institutes of Health, the Taube/Koret Center for Neurodegenerative Disease Research at Gladstone, the ALS Association’s Neuro Collaborative, and The Michael J. Fox Foundation for Parkinson’s Research.


Abstract of In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images

Microscopy is a central method in life sciences. Many popular methods, such as antibody labeling, are used to add physical fluorescent labels to specific cellular constituents. However, these approaches have significant drawbacks, including inconsistency; limitations in the number of simultaneous labels because of spectral overlap; and necessary perturbations of the experiment, such as fixing the cells, to generate the measurement. Here, we show that a computational machine-learning approach, which we call “in silico labeling” (ISL), reliably predicts some fluorescent labels from transmitted-light images of unlabeled fixed or live biological samples. ISL predicts a range of labels, such as those for nuclei, cell type (e.g., neural), and cell state (e.g., cell death). Because prediction happens in silico, the method is consistent, is not limited by spectral overlap, and does not disturb the experiment. ISL generates biological measurements that would otherwise be problematic or impossible to acquire.

A future ultraminiature computer the size of a pinhead?

Thin-film MRAM surface structure comprising one-monolayer iron (Fe) deposited on a boron, gallium, aluminum, or indium nitride substrate. (credit: Jie-Xiang Yu and Jiadong Zang/Science Advances)

University of New Hampshire researchers have discovered a combination of materials that they say would allow for smaller, safer magnetic random access memory (MRAM) storage — ultimately leading to ultraminiature computers.

Unlike conventional RAM (random access memory) chip technologies such as SRAM and DRAM, MRAM stores data in magnetic storage elements, instead of as energy-expending electric charge or current flows. MRAM is also nonvolatile memory (the data is preserved when the power is turned off). The elements are formed from two ferromagnetic plates, each of which can hold a magnetization, separated by a thin insulating layer.

In their study, published March 30, 2018 in the open-access journal Science Advances, the researchers describe a new design* comprising ultrathin films, known as Fe (iron) monolayers, grown on a substrate made up of non-magnetic substances — boron, gallium, aluminum, or indium nitride.

Ultrahigh storage density

The new design has an estimated 10-year data retention at room temperature. It can “ultimately lead to nanomagnetism and promote revolutionary ultrahigh storage density in the future,” said Jiadong Zang, an assistant professor of physics and senior author. “It opens the door to possibilities for much smaller computers for everything from basic data storage to traveling on space missions. Imagine launching a rocket with a computer the size of a pin head — it not only saves space but also a lot of fuel.”

MRAM is already challenging flash memory in a number of applications where persistent or nonvolatile memory (such as flash) is currently being used, and it’s also taking on RAM chips “in applications such as AI, IoT, 5G, and data centers,” according to a recent article in Electronic Design.**

* A provisional patent application has been filed by UNHInnovation. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences.

** More broadly, MRAM applications include consumer electronics, robotics, automotive, enterprise storage, and aerospace & defense, according to a market analysis and 2018–2023 forecast by Market Desk.


Abstract of Giant perpendicular magnetic anisotropy in Fe/III-V nitride thin films

Large perpendicular magnetic anisotropy (PMA) in transition metal thin films provides a pathway for enabling the intriguing physics of nanomagnetism and developing broad spintronics applications. After decades of searches for promising materials, the energy scale of PMA of transition metal thin films, unfortunately, remains only about 1 meV. This limitation has become a major bottleneck in the development of ultradense storage and memory devices. We discovered unprecedented PMA in Fe thin-film growth on the N-terminated surface of III-V nitrides from first-principles calculations. PMA ranges from 24.1 meV/u.c. in Fe/BN to 53.7 meV/u.c. in Fe/InN. Symmetry-protected degeneracy between x²−y² and xy orbitals and its lift by the spin-orbit coupling play a dominant role. As a consequence, PMA in Fe/III-V nitride thin films is dominated by first-order perturbation of the spin-orbit coupling, instead of second-order in conventional transition metal/oxide thin films. This game-changing scenario would also open a new field of magnetism on transition metal/nitride interfaces.

Google announces new ‘Talk to Books’ semantic-search feature

Google announced today, April 13, 2018, a new experimental publicly available technology called Talk to Books, which lets you ask questions in plain-English sentences to discover relevant information from more than 100,000 books, comprising 600 million sentences.

For example, if you ask, “Can AIs have consciousness?,” Talk to Books returns a list of books that include information on that specific question.

The new feature was developed by a team at Google Research headed by Ray Kurzweil, a Google director of engineering. As Kurzweil and associates note in a Google Research Blog post today, “With Talk to Books, we’re combining two powerful ideas: semantic search and a new way to discover books.”

Experiments in understanding language

Semantic search is based on searching meaning, rather than on keywords or phrases. Developed with machine learning, it uses “natural language understanding” of words and phrases. Semantic search is explained further on Google’s new “Semantic Experiences” page, which includes a link to Semantris, a set of word-association games that lets you explore how Google’s AI has learned to predict which words are semantically related.

The new semantic-search feature is based on research by Kurzweil and his team in developing an enhanced version of Google’s “smart reply” feature (which suggests short responses to your Gmail messages), as explained in an arXiv paper by the team.

That research is further described in a March 29, 2018 arXiv paper. Also released is a version of the underlying technology that will enable developers to use these new semantic-search tools — including a universal sentence encoder — in their own applications, similar to Talk to Books.
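
For developers, a minimal sketch of that kind of semantic matching using the released Universal Sentence Encoder (via TensorFlow Hub) might look like the following; the module URL/version and the example sentences are assumptions, not part of Google’s announcement:

```python
import numpy as np
import tensorflow_hub as hub

# Minimal sketch of semantic matching with the released Universal Sentence
# Encoder (the module URL/version below is an assumption, not from the article).
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

query = "Can AIs have consciousness?"
sentences = [
    "Some philosophers argue that machine consciousness is possible in principle.",
    "The recipe calls for two cups of flour and a pinch of salt.",
    "Neural networks can be trained to recognize objects in images.",
]

vectors = embed([query] + sentences).numpy()   # one 512-dimensional vector per sentence
scores = vectors[1:] @ vectors[0]              # dot-product similarity to the query
for score, sentence in sorted(zip(scores, sentences), reverse=True):
    print(f"{score:.2f}  {sentence}")          # most semantically related first
```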

Intelligence-augmentation device lets users ‘speak silently’ with a computer by just thinking

MIT Media Lab researcher Arnav Kapur demonstrates the AlterEgo device. It picks up neuromuscular facial signals generated by his thoughts; a bone-conduction headphone lets him privately hear responses from his personal devices. (credit: Lorrie Lejeune/MIT)

MIT researchers have invented a system that allows someone to communicate silently and privately with a computer or the internet by simply thinking — without any discernible facial movement.

The AlterEgo system consists of a wearable device with electrodes that pick up otherwise undetectable neuromuscular subvocalizations — saying words “in your head” in natural language. The signals are fed to a neural network that is trained to identify subvocalized words from these signals. Bone-conduction headphones also transmit vibrations through the bones of the face to the inner ear to convey information to the user — privately and without interrupting a conversation. The device connects wirelessly to any external computing device via Bluetooth.
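
As a loose illustration of that classification step (this is not the AlterEgo model; the channel count, window length, and vocabulary size are assumptions), a small convolutional network could map windows of multichannel electrode signals to words from a fixed vocabulary:

```python
import tensorflow as tf

# Illustrative sketch (not the AlterEgo model): a small 1D convolutional
# classifier that maps windows of multichannel electrode signals to words
# from a fixed vocabulary. All sizes below are assumptions.
CHANNELS = 7          # assumed number of electrodes
WINDOW = 500          # assumed samples per word-length window
VOCAB = 20            # assumed number of target words/commands

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.Conv1D(32, 9, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(64, 9, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(VOCAB, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# signals: (num_examples, WINDOW, CHANNELS) arrays; labels: integer word indices
# model.fit(signals, labels, epochs=20, validation_split=0.2)
```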

A silent, discreet, bidirectional conversation with machines. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?,” says Arnav Kapur, a graduate student at the MIT Media Lab who led the development of the new system. Kapur is first author on an open-access paper on the research presented in March at the IUI ’18 23rd International Conference on Intelligent User Interfaces.

In one of the researchers’ experiments, subjects used the system to silently report opponents’ moves in a chess game and silently receive recommended moves from a chess-playing computer program. In another experiment, subjects were able to undetectably answer difficult computational questions (such as the square roots of large numbers) or recall obscure facts. The researchers achieved 92% median word accuracy, which is expected to improve. “I think we’ll achieve full conversation someday,” Kapur said.

Non-disruptive. “We basically can’t live without our cellphones, our digital devices,” says Pattie Maes, a professor of media arts and sciences and Kapur’s thesis advisor. “But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself.

“So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”*


Even the tiniest signal to her jaw or larynx might be interpreted as a command. Keeping one hand on the sensitivity knob, she concentrated to erase mistakes the machine kept interpreting as nascent words.

            Few people used subvocals, for the same reason few ever became street jugglers. Not many could operate the delicate systems without tipping into chaos. Any normal mind kept intruding with apparent irrelevancies, many ascending to the level of muttered or almost-spoken words the outer consciousness hardly noticed, but which the device manifested visibly and in sound.
            Tunes that pop into your head… stray associations you generally ignore… memories that wink in and out… impulses to action… often rising to tickle the larynx, the tongue, stopping just short of sound…
            As she thought each of those words, lines of text appeared on the right, as if a stenographer were taking dictation from her subvocalized thoughts. Meanwhile, at the left-hand periphery, an extrapolation subroutine crafted little simulations.  A tiny man with a violin. A face that smiled and closed one eye… It was well this device only read the outermost, superficial nervous activity, associated with the speech centers.
            When invented, the sub-vocal had been hailed as a boon to pilots — until high-performance jets began plowing into the ground. We experience ten thousand impulses for every one we allow to become action. Accelerating the choice and decision process did more than speed reaction time. It also shortcut judgment.
            Even as a computer input device, it was too sensitive for most people.  Few wanted extra speed if it also meant the slightest sub-surface reaction could become embarrassingly real, in amplified speech or writing.

            If they ever really developed a true brain to computer interface, the chaos would be even worse.

— From EARTH (1989) chapter 35 by David Brin (with permission)


IoT control. In the conference paper, the researchers suggest that an “internet of things” (IoT) controller “could enable a user to control home appliances and devices (switch on/off home lighting, television control, HVAC systems etc.) through internal speech, without any observable action.” Or schedule an Uber pickup.

Peripheral devices could also be directly interfaced with the system. “For instance, lapel cameras and smart glasses could directly communicate with the device and provide contextual information to and from the device. … The device also augments how people share and converse. In a meeting, the device could be used as a back-channel to silently communicate with another person.”

Applications of the technology could also include high-noise environments, like the flight deck of an aircraft carrier, or even places with a lot of machinery, like a power plant or a printing press, suggests Thad Starner, a professor in Georgia Tech’s College of Computing. “There’s a lot of places where it’s not a noisy environment but a silent environment. A lot of time, special-ops folks have hand gestures, but you can’t always see those. Wouldn’t it be great to have silent-speech for communication between these folks? The last one is people who have disabilities where they can’t vocalize normally.”

* Or users could, conceivably, simply zone out — checking texts, email messages, and Twitter (all converted to voice) during boring meetings, or even reply, using mentally selected “smart reply”-type options.