A sneak peek at radical future user interfaces for phones, computers, and VR

Grabity: a wearable haptic interface for simulating weight and grasping in VR (credit: UIST 2017)

Drawing in air, touchless control of virtual objects, and a modular mobile phone with snap-in sections (for lending to friends, family members, or even strangers) are among the innovative user-interface concepts to be introduced at the 30th ACM User Interface Software and Technology Symposium (UIST 2017) on October 22–25 in Quebec City, Canada.

Here are three concepts to be presented, developed by researchers at Dartmouth College’s human-computer interface lab.

RetroShape: tactile watch feedback

Dartmouth’s RetroShape concept would add a shape-deforming tactile feedback system to the back of a future watch, allowing you to both see and feel virtual objects, such as a bouncing ball or exploding asteroid. Each pixel on RetroShape’s screen has a corresponding “taxel” (tactile pixel) on the back of the watch, using 16 independently moving pins.
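The pixel-to-taxel mapping amounts to downsampling a small depth image to a 4×4 grid of pin heights. Below is a minimal sketch of one plausible mapping (average pooling); the function name and the pooling scheme are illustrative assumptions, not Dartmouth’s published implementation.

```python
import numpy as np

def pixels_to_taxels(depth_map: np.ndarray, pins: int = 4) -> np.ndarray:
    """Downsample a per-pixel depth map (0.0 = flat, 1.0 = fully raised)
    to a pins x pins grid of pin heights by average pooling.
    Hypothetical illustration only, not RetroShape's actual mapping."""
    h, w = depth_map.shape
    bh, bw = h // pins, w // pins
    return depth_map[:bh * pins, :bw * pins] \
        .reshape(pins, bh, pins, bw).mean(axis=(1, 3))  # one height per pin

# Example: a "ball" bulging in the center of a 64x64 screen region
yy, xx = np.mgrid[0:64, 0:64]
ball = np.clip(1.0 - np.hypot(yy - 32, xx - 32) / 20.0, 0.0, 1.0)
print(pixels_to_taxels(ball).round(2))  # 4x4 = 16 pin heights
```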


UIST 2017 | RetroShape: Leveraging Rear-Surface Shape Displays for 2.5D Interaction on Smartwatches

Frictio smart ring

Current smart-ring designs let users control other devices. Frictio instead uses controlled rotation to provide silent haptic alerts and other feedback.


UIST 2017 — Frictio: Passive Kinesthetic Force Feedback for Smart Ring Output

Pyro: fingertip control

Pyro is a covert gesture-recognition concept, based on moving the thumb tip against the index finger — a natural, fast, and unobtrusive way to interact with a computer or other devices. It uses an energy-efficient thermal infrared sensor to detect micro control gestures, based on patterns of heat radiating from the fingers.


UIST 2017 — Pyro: Thumb-Tip Gesture Recognition Using Pyroelectric Infrared Sensing

Highlights from other presentations at UIST 2017:


UIST 2017 Technical Papers Preview

AlphaGo Zero trains itself to be most powerful Go player in the world

(credit: DeepMind)

DeepMind has just announced AlphaGo Zero, an evolution of AlphaGo, the first computer program to defeat a world champion at the ancient Chinese game of Go. Zero is even more powerful and is now arguably the strongest Go player in history, according to the company.

While previous versions of AlphaGo initially trained on thousands of human amateur and professional games to learn how to play Go, AlphaGo Zero skips this step. It learns to play from scratch, simply by playing games against itself, starting from completely random play.

(credit: DeepMind)

It defeated AlphaGo Lee, the previously published champion-defeating version, by 100 games to 0 after just 3 days of self-play training, and within 40 days it had surpassed all previous versions of AlphaGo.

The achievement is described in the journal Nature today (Oct. 18, 2017).
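The self-play scheme described above can be illustrated with a toy: a lookup table stands in for the deep network, a trivial counting game stands in for Go, and epsilon-greedy move choice stands in for Monte Carlo tree search. Everything here (ToyGame, self_play_train, the update rule) is an invented simplification, not DeepMind’s code; what it shares with AlphaGo Zero is the loop of playing against itself from random play and labeling every visited position with the eventual winner.

```python
import random
from collections import defaultdict

class ToyGame:
    """Players alternately add 1 or 2 to a running total; reaching 10 wins."""
    def __init__(self):
        self.total, self.player = 0, 1
    def moves(self):
        return [m for m in (1, 2) if self.total + m <= 10]
    def play(self, m):
        self.total += m
        if self.total < 10:
            self.player = -self.player
    def is_over(self):
        return self.total >= 10
    def winner(self):
        return self.player  # the player who just reached 10 wins

def self_play_train(iterations=5000, lr=0.1, epsilon=0.1):
    value = defaultdict(float)  # stands in for the policy/value network
    for _ in range(iterations):
        game, history = ToyGame(), []
        while not game.is_over():
            moves = game.moves()
            if random.random() < epsilon:   # explore
                m = random.choice(moves)
            else:                           # exploit current knowledge
                m = max(moves, key=lambda mv: value[(game.total, game.player, mv)])
            history.append((game.total, game.player, m))
            game.play(m)
        z = game.winner()  # +1 or -1
        for total, player, m in history:    # label every position with the outcome
            key = (total, player, m)
            value[key] += lr * (z * player - value[key])
    return value

value = self_play_train()
print(max((1, 2), key=lambda mv: value[(0, 1, mv)]))  # learned opening move
```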


DeepMind | AlphaGo Zero: Starting from scratch


Abstract of Mastering the game of Go without human knowledge

A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.

Leading brain-training game improves memory and attention better than competing method

EEGs taken before and after the training showed that the biggest changes occurred in the brains of the group that trained using the “dual n-back” method (right). (credit: Kara J. Blacker/JHU)

A leading brain-training game called “dual n-back” was significantly better in improving memory and attention than a competing “complex span” game, Johns Hopkins University researchers found in a recent experiment.*

These results, published Monday Oct. 16, 2017 in an open-access paper in the Journal of Cognitive Enhancement, suggest it’s possible to train the brain like other body parts — with targeted workouts to improve the cognitive skills needed when tasks are new and you can’t just rely on old knowledge and habits, says co-author Susan Courtney, a Johns Hopkins neuroscientist and professor of psychological and brain sciences.


Johns Hopkins University | The Best Way to Train Your Brain: A Game

The dual n-back game is a memory sequence test in which you must remember a constantly updating sequence of visual and auditory stimuli. As shown in a simplified version in the video above, participants saw squares flashing on a grid while hearing letters. But in the experiment, the subjects also had to remember if the square they just saw and the letter they heard were both the same as one round back.

As the test got harder, they had to recall squares and letters two, three, and four rounds back. The subjects also showed significant changes in brain activity in the prefrontal cortex, the critical region responsible for higher learning.

With the easier complex span game, there’s a distraction between items, but participants don’t need to continually update the previous items in their mind.

(You can try an online version of the dual n-back test/game here and of the digit-span test here. The training programs Johns Hopkins compared are tools scientists rely on to test the brain’s working memory, not the commercial products sold to consumers.)
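The scoring logic of the dual n-back task is simple to state in code. The sketch below generates a random session and computes, for each round, whether the position and the letter match those from n rounds back; it illustrates the task rules described above, not the software used in the study.

```python
import random

def dual_n_back_session(n=2, rounds=20, grid=9, letters="CHKLQRST"):
    """Simulate a dual n-back session: each round, the player must say
    whether the square's position and/or the spoken letter matches the
    one from n rounds back. Returns stimuli and the correct answers."""
    positions = [random.randrange(grid) for _ in range(rounds)]
    sounds = [random.choice(letters) for _ in range(rounds)]
    answers = []
    for i in range(rounds):
        if i < n:
            answers.append((False, False))  # nothing to compare against yet
        else:
            answers.append((positions[i] == positions[i - n],
                            sounds[i] == sounds[i - n]))
    return positions, sounds, answers

positions, sounds, answers = dual_n_back_session(n=2)
for i, (pos_match, sound_match) in enumerate(answers):
    print(i, positions[i], sounds[i], pos_match, sound_match)
```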

30 percent improvement in working memory

The researchers found that the group that practiced the dual n-back exercise showed a 30 percent improvement in their working memory — nearly double the gains in the group using complex span. “The findings suggest that [the dual n-back] task is changing something about the brain,” Courtney said. “There’s something about sequencing and updating that really taps into the things that only the pre-frontal cortex can do, the real-world problem-solving tasks.”

The next step, the researchers say, is to figure out why dual n-back is so good at improving working memory, then figure out how to make it even more effective so that it can become a marketable or even clinically useful brain-training program.

* Scientists trying to determine if brain exercises make people smarter have had mixed results. Johns Hopkins researchers suspected the problem wasn’t the idea of brain training, but the type of exercise researchers chose to test it. They decided to compare directly the leading types of exercises and measure people’s brain activity before and after training; that had never been attempted before, according to lead author Kara J. Blacker, a former Johns Hopkins post-doctoral fellow in psychological and brain sciences, now a researcher at the Henry M. Jackson Foundation for Advancement of Military Medicine, Inc. For the experiment, the team assembled three groups of participants, all young adults. Everyone took an initial battery of cognitive tests to determine baseline working memory, attention, and intelligence. Everyone also got an electroencephalogram, or EEG, to measure brain activity. Then, everyone was sent home to practice a computer task for a month. One group used one leading brain exercise while the second group used the other. The third group practiced on a control task. Everyone trained five days a week for 30 minutes, then returned to the lab for another round of tests to see if anything about their brain or cognitive abilities had changed.


Abstract of N-back Versus Complex Span Working Memory Training

Working memory (WM) is the ability to maintain and manipulate task-relevant information in the absence of sensory input. While its improvement through training is of great interest, the degree to which WM training transfers to untrained WM tasks (near transfer) and other untrained cognitive skills (far transfer) remains debated and the mechanism(s) underlying transfer are unclear. Here we hypothesized that a critical feature of dual n-back training is its reliance on maintaining relational information in WM. In experiment 1, using an individual differences approach, we found evidence that performance on an n-back task was predicted by performance on a measure of relational WM (i.e., WM for vertical spatial relationships independent of absolute spatial locations), whereas the same was not true for a complex span WM task. In experiment 2, we tested the idea that reliance on relational WM is critical to produce transfer from n-back but not complex span task training. Participants completed adaptive training on either a dual n-back task, a symmetry span task, or on a non-WM active control task. We found evidence of near transfer for the dual n-back group; however, far transfer to a measure of fluid intelligence did not emerge. Recording EEG during a separate WM transfer task, we examined group-specific, training-related changes in alpha power, which are proposed to be sensitive to WM demands and top-down modulation of WM. Results indicated that the dual n-back group showed significantly greater frontal alpha power after training compared to before training, more so than both other groups. However, we found no evidence of improvement on measures of relational WM for the dual n-back group, suggesting that near transfer may not be dependent on relational WM. These results suggest that dual n-back and complex span task training may differ in their effectiveness to elicit near transfer as well as in the underlying neural changes they facilitate.

Scientists report first detection of gravitational waves produced by colliding neutron stars

Astronomers detect gravitational waves and a gamma-ray burst from two colliding neutron stars. (credit: National Science Foundation/LIGO/Sonoma State University/A. Simonnet)

Scientists reported today (Oct. 16, 2017) the first simultaneous detection of both gravitational waves and light — an astounding collision of two neutron stars.

The discovery was made nearly simultaneously by three gravitational-wave detectors, followed by observations by some 70 ground- and space-based light observatories.

Neutron stars are the smallest, densest stars known to exist and are formed when massive stars explode in supernovas.


MIT | Neutron Stars Collide

As these neutron stars spiraled together, they emitted gravitational waves that were detectable for about 100 seconds. When they collided, a flash of light in the form of gamma rays was emitted and seen on Earth about two seconds after the gravitational waves. In the days and weeks following the smashup, other forms of light, or electromagnetic radiation — including X-ray, ultraviolet, optical, infrared, and radio waves — were detected.

The two stars were estimated at around 1.1 to 1.6 times the mass of the sun, within the mass range of neutron stars. A neutron star is about 20 kilometers, or 12 miles, in diameter and is so dense that a teaspoon of neutron star material has a mass of about a billion tons.
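That teaspoon figure is easy to sanity-check from the numbers in the article. The sketch below assumes a 1.4-solar-mass star (mid-range of the estimates above), the quoted 20-km diameter, and a 5-milliliter teaspoon; the result is order-of-magnitude only.

```python
from math import pi

M_SUN = 1.989e30            # kg
mass = 1.4 * M_SUN          # kg, a typical neutron star
radius = 10e3               # m (20 km diameter)
volume = (4 / 3) * pi * radius**3
density = mass / volume     # ~6.6e17 kg/m^3

teaspoon = 5e-6             # m^3, about 5 milliliters
mass_tons = density * teaspoon / 1000   # metric tons
print(f"density = {density:.1e} kg/m^3, teaspoon = {mass_tons:.1e} tons")
# -> roughly 3e9 tons, i.e. a few billion tons
```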

The initial gamma-ray measurements, combined with the gravitational-wave detection, provide confirmation for Einstein’s general theory of relativity, which predicts that gravitational waves should travel at the speed of light. The observations also reveal signatures of recently synthesized material, including gold and platinum, solving a decades-long mystery of where about half of all elements heavier than iron are produced.


Georgia Tech | The Collision of Two Neutron Stars (audible frequencies start at ~25 seconds)

“This detection has genuinely opened the doors to a new way of doing astrophysics,” said Laura Cadonati, professor of physics at Georgia Tech and deputy spokesperson for the LIGO Scientific Collaboration. “I expect it will be remembered as one of the most studied astrophysical events in history.”

In the weeks and months ahead, telescopes around the world will continue to observe the afterglow of the neutron star merger and gather further evidence about various stages of the merger, its interaction with its surroundings, and the processes that produce the heaviest elements in the universe.

The research was published today in Physical Review Letters and in an open-access paper in The Astrophysical Journal Letters.

Timeline

KurzweilAI has assembled this timeline of the observations from various reports:

  • About 130 million years ago: Two neutron stars are in their final moments of orbiting each other, separated only by about 300 kilometers (200 miles) and gathering speed while closing the distance between them. As the stars spiral faster and closer together, they stretch and distort the surrounding space-time, giving off energy in the form of powerful gravitational waves, before smashing into each other. At the moment of collision, the bulk of the two neutron stars merge into one ultradense object, emitting a “fireball” of gamma rays.
  • Aug. 17, 2017, 12:41:04 ET: The Virgo detector in Pisa, Italy picks up a new strong “chirp” gravitational wave signal, designated GW170817. The LIGO detector in Livingston, Louisiana detects the signal just 22 milliseconds later, then the twin LIGO detector in Hanford, Washington, 3 milliseconds after that. Based on the signal duration (about 100 seconds) and the signal frequencies, scientists at the three facilities conclude it’s likely from neutron stars — not from more massive black holes (as in the three previous gravitational-wave detections). And based on the signal strengths and timing between the three detectors, scientists are able to precisely triangulate the position in the sky. (This is the most precise gravitational-wave detection so far; a sketch of the timing geometry appears after this timeline.)
  • 1.7 seconds later: NASA’s Fermi Gamma-ray Space Telescope and the European INTEGRAL satellite detect a gamma-ray burst (GRB) lasting nearly 2 seconds from the same general direction of the sky. Both the Fermi and LIGO teams quickly alert astronomers around the world to search for an afterglow.
  • Hours later: Armed with these precise coordinates, a handful of observatories around the world start searching the region of the sky where the signal was thought to originate. A new point of light, resembling a new star, is found by optical telescopes first. Known as a “kilonova,” this phenomenon occurs when material left over from the neutron star collision, glowing with light, is blown out of the immediate region and far out into space.
  • Days and weeks following: About 70 observatories on the ground and in space observe the event at various longer wavelengths (starting at gamma and then X-ray, ultraviolet, optical, infrared, and ending up at radio wave frequencies).
  • In the weeks and months ahead: Telescopes around the world will continue to observe the radio-wave afterglow of the neutron star merger and gather further evidence about various stages of the merger, its interaction with its surroundings, and the processes that produce the heaviest elements in the universe.
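To see roughly how the millisecond timing differences in the timeline constrain the sky position: for a single detector pair with baseline vector d, an arrival-time delay Δt confines the source to a cone around the baseline with cos θ = cΔt/|d|, and a second pair narrows this to the intersection of two cones. The sketch below uses an illustrative ~3,000-km Hanford–Livingston baseline and the 3-ms delay from the timeline; the coordinates are placeholders, not survey-grade detector positions.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def cone_angle(baseline_m, delay_s):
    """Angle (degrees) between the source direction and the baseline,
    implied by an arrival-time delay between two detectors."""
    cos_theta = C * delay_s / np.linalg.norm(baseline_m)
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Illustrative ~3000 km Hanford-Livingston baseline; GW170817 reached
# Livingston 3 ms before Hanford (see the timeline above).
baseline = np.array([3.0e6, 0.0, 0.0])  # placeholder coordinates, meters
print(f"source lies ~{cone_angle(baseline, 3e-3):.0f} degrees off the baseline")
```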

“Multimessenger” astronomy

Caltech’s David H. Reitze, executive director of the LIGO Laboratory, puts the observations in context: “This detection opens the window of a long-awaited ‘multimessenger’ astronomy. It’s the first time that we’ve observed a cataclysmic astrophysical event in both gravitational waves and electromagnetic waves — our cosmic messengers. Gravitational-wave astronomy offers new opportunities to understand the properties of neutron stars in ways that just can’t be achieved with electromagnetic astronomy alone.”


Caltech | Variety of Gravitational Waves and a Chirp (audible sound for GW170817 starts ~30 seconds)

Using ‘cooperative perception’ between intelligent vehicles to reduce risks

Networked intelligent vehicles (credit: EPFL)

Researchers at École polytechnique fédérale de Lausanne (EPFL) have combined data from two autonomous cars to create a wider field of view, extended situational awareness, and greater safety.

Autonomous vehicles get their intelligence from cameras, radar, light detection and ranging (LIDAR) sensors, and navigation and mapping systems. But there are ways to make them even smarter. Researchers at EPFL are working to improve the reliability and fault tolerance of these systems by sharing data between vehicles. For example, this can extend the field of view of a car that is behind another car.

Using simulators and road tests, the team has developed a flexible software framework for networking intelligent vehicles so that they can interact.

Cooperative perception

“Today, intelligent vehicle development is focused on two main issues: the level of autonomy and the level of cooperation,” says Alcherio Martinoli, who heads EPFL’s Distributed Intelligent Systems and Algorithms Laboratory (DISAL). As part of his PhD thesis, Milos Vasic has developed cooperative perception algorithms, which extend an intelligent vehicle’s situational awareness by fusing data from onboard sensors with data provided by cooperative vehicles nearby.

Milos Vasic, PhD, and Alcherio Martinoli made two regular cars intelligent using off-the-shelf equipment. (credit: Alain Herzog/EPFL)

The researchers used cooperative perception algorithms as the basis for the software framework. Cooperative perception means that an intelligent vehicle can combine its own data with that of another vehicle to help make driving decisions.

They developed an assistance system that assesses the risk of passing, for example. The risk assessment factors in the probability of an oncoming car in the opposite lane as well as kinematic conditions such as driving speeds, the distance required to overtake, and the distance to the oncoming car.
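A back-of-the-envelope version of such a risk assessment might compare the time needed to complete the pass against the time until a possible oncoming car arrives, weighted by its estimated probability. The sketch below is an invented illustration of that idea (the function name, margins, and risk formula are all assumptions), not EPFL’s actual algorithm.

```python
def passing_risk(v_self, v_lead, v_oncoming, gap_to_lead,
                 oncoming_distance, p_oncoming, pass_margin=30.0):
    """Rough risk score in [0, 1] for an overtaking maneuver.
    Speeds in m/s, distances in meters."""
    relative_speed = v_self - v_lead
    if relative_speed <= 0:
        return 1.0  # cannot complete the pass at all
    # Time to pull ahead of the lead car and merge back into the lane.
    overtake_time = (gap_to_lead + pass_margin) / relative_speed
    # Time until a possible oncoming car reaches us (closing speed).
    oncoming_time = oncoming_distance / (v_self + v_oncoming)
    # Weight the timing tightness by the probability of an oncoming car.
    return p_oncoming * min(1.0, overtake_time / oncoming_time)

# Lead car 20 m ahead at 22 m/s; we drive 30 m/s. The cooperating car
# ahead reports an oncoming vehicle 400 m away at 25 m/s with p = 0.8.
risk = passing_risk(30.0, 22.0, 25.0, 20.0, 400.0, 0.8)
print(f"risk = {risk:.2f}")  # pass only if below some threshold, e.g. 0.5
```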

Difficulties in fusing data

The team retrofitted two Citroen C-Zero electric cars with a Mobileye camera, an accurate localization system, a router to enable Wi-Fi communication, a computer to run the software, and an external battery to power everything. “These were not autonomous vehicles,” says Martinoli, “but we made them intelligent using off-the-shelf equipment.”

One of the difficulties in fusing data from the two vehicles involved relative localization: the cars needed to know precisely where they were in relation to each other, as well as to objects in the vicinity.

For example, if a single pedestrian does not appear to both cars to be in the same exact spot, there is a risk that, together, they will see two figures instead of one. By using other signals, particularly those provided by the LIDAR sensors and cameras, the researchers were able to correct flaws in the navigation system and adjust their algorithms accordingly. This exercise was even more challenging because the data had to be processed in real time while the vehicles were in motion.

Although the tests involved only two vehicles, the longer-term goal is to create a network between multiple vehicles as well as with the roadway infrastructure.

In addition to driving safety and comfort, cooperative networks of this sort could eventually be used to optimize a vehicle’s trajectory, save energy, and improve traffic flows.

Of course, determining liability in case of an accident becomes more complicated when vehicles cooperate. “The answers to these issues will play a key role in determining whether autonomous vehicles are accepted,” says Martinoli.


École polytechnique fédérale de Lausanne (EPFL) | Networked intelligent vehicles

Controlled by a synthetic gene circuit, self-assembling bacteria build working electronic sensors

Bacteria create a functioning 3D pressure-sensor device. A gene circuit (left) triggers the production of an engineered protein that enables pattern-forming bacteria on growth membranes (center) to assemble gold nanoparticles into a hybrid organic-inorganic dome structure whose size and shape can be controlled by altering the growth environment. In this proof-of-concept demonstration, the gold structure serves as a functioning pressure switch (right) that responds to touch. (credit: Yangxiaolu Cao et al./Nature Biotechnology)

Using a synthetic gene circuit, Duke University researchers have programmed self-assembling bacteria to build useful electronic devices — a first.

Other experiments have successfully grown materials using bacterial processes (for example, MIT engineers have coaxed bacterial cells to produce biofilms that can incorporate nonliving materials, such as gold nanoparticles and quantum dots). However, they have relied entirely on external control over where the bacteria grow and they have been limited to two dimensions.

In the new study, the researchers demonstrated the production of a composite structure by programming the cells themselves and controlling their access to nutrients, but still leaving the bacteria free to grow in three dimensions.*

As a demonstration, the bacteria were programmed to assemble into a finger-pressure sensor.

To create the pressure sensor, two identical arrays of domes were grown on a membrane (left) on two substrate surfaces. The two substrates were then sandwiched together (center) so that each dome was positioned directly above its counterpart on the other substrate. A battery was connected to the domes by copper wiring. When pressure was applied (right) to the sandwich, the domes pressed into one another, causing a deformation that increased conductivity and thus current (as shown by the arrow on the ammeter). (credit: Yangxiaolu Cao et al./Nature Biotechnology)

Inspired by nature, but going beyond it

“This technology allows us to grow a functional device from a single cell,” said Lingchong You, the Paul Ruffin Scarborough Associate Professor of Engineering at Duke. “Fundamentally, it is no different from programming a cell to grow an entire tree.”

Nature is full of examples of life combining organic and inorganic compounds to make better materials. Mollusks grow shells consisting of calcium carbonate interlaced with a small amount of organic components, resulting in a microstructure three times tougher than calcium carbonate alone. Our own bones are a mix of organic collagen and inorganic minerals made up of various salts.

Harnessing such construction abilities in bacteria would have many advantages over current manufacturing processes. In nature, biological fabrication uses raw materials and energy very efficiently. In this synthetic system, for example, tweaking growth instructions to create different shapes and patterns could theoretically be much cheaper and faster than casting the new dies or molds needed for traditional manufacturing.

“Nature is a master of fabricating structured materials consisting of living and non-living components,” said You. “But it is extraordinarily difficult to program nature to create self-organized patterns. This work, however, is a proof-of-principle that it is not impossible.”

Self-healing materials

According to the researchers, in addition to creating circuits from bacteria, if the bacteria are kept alive, it may be possible to create materials that could heal themselves and respond to environmental changes.

“Another aspect we’re interested in pursuing is how to generate much more complex patterns,” said You. “Bacteria can create complex branching patterns, we just don’t know how to make them do that ourselves — yet.”

It’s a “very exciting work,” Timothy Lu, a synthetic biologist at MIT, who was not involved in the research, told The Register. “I think this represents a major step forward in the field of living materials.” Lu believes self-assembling materials “could create new manufacturing processes that may use less energy or be better for the environment than the ones today,” the article said. “But ‘the design rules for enabling bottoms-up assembly of novel materials are still not well understood,’ he cautioned.”

The study appeared online on October 9, 2017 in Nature Biotechnology. This study was supported by the Office of Naval Research, the National Science Foundation, the Army Research Office, the National Institutes of Health, the Swiss National Science Foundation, and a David and Lucile Packard Fellowship.

* The gene circuit is like a biological package of instructions that researchers embed into a bacterium’s DNA. The directions first tell the bacteria to produce a protein called T7 RNA polymerase (T7RNAP), which then activates its own expression in a positive feedback loop. It also produces a small molecule called AHL that can diffuse into the environment like a messenger. As the cells multiply and grow outward, the concentration of the small messenger molecule hits a critical concentration threshold, triggering the production of two more proteins called T7 lysozyme and curli. The former inhibits the production of T7RNAP while the latter acts as sort of biological Velcro, which grabs onto gold nanoparticles supplied by the researchers, forming a dome shell (the structure of the sensor). The researchers were able to alter the size and shape of the dome by controlling the properties of the porous membrane it grows on. For example, changing the size of the pores or how much the membrane repels water affects how many nutrients are passed to the cells, altering their growth pattern.
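The qualitative behavior of that loop (self-amplifying T7RNAP, accumulating AHL, and a threshold that switches on lysozyme and curli) can be caricatured with a few coupled rate equations. The toy simulation below uses invented rates and units purely to illustrate the described dynamics; it is not the authors’ model.

```python
dt, steps = 0.01, 5000
t7, ahl, lys, curli = 0.1, 0.0, 0.0, 0.0
AHL_THRESHOLD = 1.0  # invented threshold concentration

for _ in range(steps):
    # T7RNAP drives its own expression (positive feedback), is
    # inhibited by lysozyme, and decays.
    d_t7 = t7 / (1.0 + 5.0 * lys) - 0.5 * t7
    # AHL is produced alongside T7RNAP activity and diffuses away.
    d_ahl = 0.3 * t7 - 0.05 * ahl
    # Above the threshold, AHL switches on lysozyme and curli.
    on = 1.0 if ahl > AHL_THRESHOLD else 0.0
    d_lys = 0.4 * on - 0.1 * lys
    d_curli = 0.4 * on  # curli accumulates (grabs gold nanoparticles)
    t7 += dt * d_t7
    ahl += dt * d_ahl
    lys += dt * d_lys
    curli += dt * d_curli

print(f"T7RNAP={t7:.2f}  AHL={ahl:.2f}  lysozyme={lys:.2f}  curli={curli:.1f}")
```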


Abstract of Programmed assembly of pressure sensors using pattern-forming bacteria

Conventional methods for material fabrication often require harsh reaction conditions, have low energy efficiency, and can cause a negative impact on the environment and human health. In contrast, structured materials with well-defined physical and chemical properties emerge spontaneously in diverse biological systems. However, these natural processes are not readily programmable. By taking a synthetic-biology approach, we demonstrate here the programmable, three-dimensional (3D) material fabrication using pattern-forming bacteria growing on top of permeable membranes as the structural scaffold. We equip the bacteria with an engineered protein that enables the assembly of gold nanoparticles into a hybrid organic-inorganic dome structure. The resulting hybrid structure functions as a pressure sensor that responds to touch. We show that the response dynamics are determined by the geometry of the structure, which is programmable by the membrane properties and the extent of circuit activation. Taking advantage of this property, we demonstrate signal sensing and processing using one or multiple bacterially assembled structures. Our work provides the first demonstration of using engineered cells to generate functional hybrid materials with programmable architecture.

3D ‘body-on-a-chip’ project aims to accelerate drug testing, reduce costs

Scientists created miniature models (“organoids”) of heart, liver, and lung in dishes and combined them into an integrated “body-on-a-chip” system fed with nutrient-rich fluid, mimicking blood. (credit: Wake Forest Baptist Medical Center)

A team of scientists at Wake Forest Institute for Regenerative Medicine and nine other institutions has engineered miniature 3D human hearts, lungs, and livers to achieve more realistic testing of how the human body responds to new drugs.

The “body-on-a-chip” project, funded by the Defense Threat Reduction Agency, aims to help reduce the estimated $2 billion cost and 90 percent failure rate that pharmaceutical companies face when developing new medications. The research is described in an open-access paper in Scientific Reports, published by Nature.

Using the same expertise they’ve employed to build new organs for patients, the researchers connected together micro-sized 3D liver, heart, and lung organs-on-a-chip (or “organoids”) on a single platform to monitor their function. They selected heart and liver for the system because toxicity to these organs is a major reason for drug candidate failures and drug recalls. And lungs were selected because they’re the point of entry for toxic particles and for aerosol drugs such as asthma inhalers.

The integrated three-tissue organ-on-a-chip platform combines liver, heart, and lung organoids. (Top) Liver and cardiac modules are created by bioprinting spherical organoids using customized bioinks, resulting in 3D hydrogel constructs (upper left) that are placed into the microreactor devices. (Bottom) Lung modules are formed by creating layers of cells over porous membranes within microfluidic devices. TEER (trans-endothelial [or epithelial] electrical resistance) sensors allow for monitoring tissue barrier function integrity over time. The three organoids are placed in a sealed, monitored system with a real-time camera. A nutrient-filled liquid that circulates through the system keeps the organoids alive and is used to introduce potential drug therapies into the system. (credit: Aleksander Skardal et al./Scientific Reports)

Why current drug testing fails

Drug compounds are currently screened in the lab using human cells and then tested in animals. But these methods don’t adequately replicate how drugs affect human organs. “If you screen a drug in livers only, for example, you’re never going to see a potential side effect to other organs,” said Aleks Skardal, Ph.D., assistant professor at Wake Forest Institute for Regenerative Medicine and lead author of the paper.

In many cases during testing of new drug candidates — and sometimes even after the drugs have been approved for use — drugs also have unexpected toxic effects in tissues not directly targeted by the drugs themselves, he explained. “By using a multi-tissue organ-on-a-chip system, you can hopefully identify toxic side effects early in the drug development process, which could save lives as well as millions of dollars.”

“There is an urgent need for improved systems to accurately predict the effects of drugs, chemicals and biological agents on the human body,” said Anthony Atala, M.D., director of the institute and senior researcher on the multi-institution study. “The data show a significant toxic response to the drug as well as mitigation by the treatment, accurately reflecting the responses seen in human patients.”

Advanced drug screening, personalized medicine

The scientists conducted multiple scenarios to ensure that the body-on-a-chip system mimics a multi-organ response.

For example, they introduced a drug used to treat cancer into the system. Known to cause scarring of the lungs, the drug also unexpectedly affected the system’s heart. (A control experiment using only the heart failed to show a response.) The scientists theorize that the drug caused inflammatory proteins from the lung to be circulated throughout the system. As a result, the heart increased beats and then later stopped altogether, indicating a toxic side effect.

“This was completely unexpected, but it’s the type of side effect that can be discovered with this system in the drug development pipeline,” Skardal noted.

Test of “liver on a chip” response to two drugs to demonstrate clinical relevance. Liver construct toxicity response was assessed following exposure to acetaminophen (APAP) and the clinically-used APAP countermeasure N-acetyl-L-cysteine (NAC). Liver constructs in the fluidic system (left) were treated with no drug (b), 1 mM APAP (c), and 10 mM APAP (d) — showing progressive loss of function and cell death, compared to 10 mM APAP +20 mM NAC (e), which mitigated those negative effects. The data shows both a significant cytotoxic (cell-damage) response to APAP as well as its mitigation by NAC treatment — accurately reflecting the clinical responses seen in human patients. (credit: Aleksander Skardal et al./Scientific Reports)

The scientists are now working to increase the speed of the system for large-scale screening and to add additional organs.

“Eventually, we expect to demonstrate the utility of a body-on-a-chip system containing many of the key functional organs in the human body,” said Atala. “This system has the potential for advanced drug screening and also to be used in personalized medicine — to help predict an individual patient’s response to treatment.”

Several patent applications comprising the technology described in the paper have been filed.

The international collaboration included researchers at Wake Forest Institute for Regenerative Medicine at the Wake Forest School of Medicine, Harvard-MIT Division of Health Sciences and Technology, Wyss Institute for Biologically Inspired Engineering at Harvard University, Biomaterials Innovation Research Center at Harvard Medical School, Bloomberg School of Public Health at Johns Hopkins University, Virginia Tech-Wake Forest School of Biomedical Engineering and Sciences, Brigham and Women’s Hospital, University of Konstanz, Konkuk University (Seoul), and King Abdulaziz University.


Abstract of Multi-tissue interactions in an integrated three-tissue organ-on-a-chip platform

Many drugs have progressed through preclinical and clinical trials and have been available – for years in some cases – before being recalled by the FDA for unanticipated toxicity in humans. One reason for such poor translation from drug candidate to successful use is a lack of model systems that accurately recapitulate normal tissue function of human organs and their response to drug compounds. Moreover, tissues in the body do not exist in isolation, but reside in a highly integrated and dynamically interactive environment, in which actions in one tissue can affect other downstream tissues. Few engineered model systems, including the growing variety of organoid and organ-on-a-chip platforms, have so far reflected the interactive nature of the human body. To address this challenge, we have developed an assortment of bioengineered tissue organoids and tissue constructs that are integrated in a closed circulatory perfusion system, facilitating inter-organ responses. We describe a three-tissue organ-on-a-chip system, comprised of liver, heart, and lung, and highlight examples of inter-organ responses to drug administration. We observe drug responses that depend on inter-tissue interaction, illustrating the value of multiple tissue integration for in vitro study of both the efficacy of and side effects associated with candidate drugs.

Teleoperating robots with virtual reality: getting inside a robot’s head

A new VR system from MIT’s Computer Science and Artificial Intelligence Laboratory could make it easy for factory workers to telecommute. (credit: Jason Dorfman, MIT CSAIL)

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a virtual-reality (VR) system that lets you teleoperate a robot using an Oculus Rift or HTC Vive VR headset.

CSAIL’s “Homunculus Model” system (the classic notion of a small human sitting inside the brain and controlling the actions of the body) embeds you in a VR control room with multiple sensor displays, making it feel like you’re inside the robot’s head. By using gestures, you can control the robot’s matching movements to perform various tasks.

The system can be connected either via a wired local network or via a wireless network connection over the Internet. (The team demonstrated that the system could pilot a robot from hundreds of miles away, testing it on a hotel’s wireless network in Washington, DC to control Baxter at MIT.)

According to CSAIL postdoctoral associate Jeffrey Lipton, lead author on an open-access arXiv paper about the system (presented this week at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in Vancouver), “By teleoperating robots from home, blue-collar workers would be able to telecommute and benefit from the IT revolution just as white-collar workers do now.”

Jobs for video-gamers too

The researchers imagine that such a system could even help employ jobless video-gamers by “game-ifying” manufacturing positions. (Users with gaming experience had the most ease with the system, the researchers found in tests.)

Homunculus Model system. A Baxter robot (left) is outfitted with a stereo camera rig and various end-effector devices. A virtual control room (user’s view, center), generated on an Oculus Rift CV1 headset (right), allows the user to feel like they are inside Baxter’s head while operating it. Using VR device controllers, including Razer Hydra hand trackers used for inputs (right), users can interact with controls that appear in the virtual space — opening and closing the hand grippers to pick up, move, and retrieve items. A user can plan movements based on the distance between the arm’s location marker and their hand while looking at the live display of the arm. (credit: Jeffrey I. Lipton et al./arXiv).

To make these movements possible, the human’s space is mapped into the virtual space, and the virtual space is then mapped into the robot space to provide a sense of co-location.
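In practice, such frame-to-frame mappings are typically expressed as chained homogeneous transforms. The sketch below shows the two-stage composition the article describes (human frame into the virtual control room, virtual room into the robot base frame) with illustrative numbers; the transforms are placeholders, not CSAIL’s calibration.

```python
import numpy as np

def transform(rot_z_deg, translation):
    """4x4 homogeneous transform: rotation about z, then translation."""
    a = np.radians(rot_z_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    T[:3, 3] = translation
    return T

H_TO_V = transform(0.0, [0.0, 0.0, -1.2])   # human frame -> virtual room
V_TO_R = transform(90.0, [0.6, 0.0, 0.3])   # virtual room -> robot base

hand = np.array([0.2, -0.1, 1.5, 1.0])      # user's hand, homogeneous coords
target = V_TO_R @ H_TO_V @ hand             # gripper goal in the robot frame
print(target[:3])
```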

The team demonstrated the Homunculus Model system using the Baxter humanoid robot from Rethink Robotics, but the approach could work on other robot platforms, the researchers said.

In tests involving pick-and-place, assembly, and manufacturing tasks (such as “pick an item and stack it for assembly”) comparing the Homunculus Model system with existing state-of-the-art automated remote control, CSAIL’s system achieved a 100% success rate, versus 66% for the automated systems. It also grasped objects successfully 95 percent of the time and performed tasks 57 percent faster.*

“This contribution represents a major milestone in the effort to connect the user with the robot’s space in an intuitive, natural, and effective manner,” says Oussama Khatib, a computer science professor at Stanford University who was not involved in the paper.

The team plans to eventually focus on making the system more scalable, with many users and different types of robots that are compatible with current automation technologies.

* The Homunculus Model system avoids a delay problem with existing systems, which reconstruct the 3D scene on a GPU or CPU, introducing latency. In the Homunculus Model, 3D reconstruction from the stereo HD cameras is instead done by the human’s visual cortex, so the user constantly receives visual feedback from the virtual world with minimal latency (delay). This also avoids user fatigue and nausea caused by motion sickness (known as simulator sickness) generated by “unexpected incongruities, such as delays or relative motions, between proprioception and vision [that] can lead to the nausea,” the researchers explain in the paper.


MIT CSAIL | Operating Robots with Virtual Reality


Abstract of Baxter’s Homunculus: Virtual Reality Spaces for Teleoperation in Manufacturing

Expensive specialized systems have hampered development of telerobotic systems for manufacturing systems. In this paper we demonstrate a telerobotic system which can reduce the cost of such system by leveraging commercial virtual reality(VR) technology and integrating it with existing robotics control software. The system runs on a commercial gaming engine using off the shelf VR hardware. This system can be deployed on multiple network architectures from a wired local network to a wireless network connection over the Internet. The system is based on the homunculus model of mind wherein we embed the user in a virtual reality control room. The control room allows for multiple sensor display, dynamic mapping between the user and robot, does not require the production of duals for the robot, or its environment. The control room is mapped to a space inside the robot to provide a sense of co-location within the robot. We compared our system with state of the art automation algorithms for assembly tasks, showing a 100% success rate for our system compared with a 66% success rate for automated systems. We demonstrate that our system can be used for pick and place, assembly, and manufacturing tasks.

Fast-moving spinning magnetized nanoparticles could lead to ultra-high-speed, high-density data storage

Artist’s impression of skyrmion data storage (credit: Moritz Eisebitt)

An international team led by MIT associate professor of materials science and engineering Geoffrey Beach has demonstrated a practical way to use “skyrmions” to create a radical new high-speed, high-density data-storage method that could one day replace disk drives — and even replace high-speed RAM memory.

Rather than reading and writing data one bit at a time by changing the orientation of magnetized nanoparticles on a surface, skyrmions could store data using only a tiny area of a magnetic surface — perhaps just a few atoms across — and for long periods of time, without the need for further energy input (unlike disk drives and RAM).

Beach and associates conceive of skyrmions as tiny spin-generating eddies of magnetism, created by sub-nanosecond spin-orbit torque pulses and controlled by electric fields — replacing the magnetic-disk system of reading and writing data one bit at a time. In experiments, skyrmions have been generated on thin metallic films made of asymmetric multilayers of non-magnetic heavy metals and transition-metal ferromagnetic layers — exploiting a defect, such as a constriction in the magnetic track.*

Skyrmions are also highly stable to external magnetic and mechanical perturbations, unlike the individual magnetic poles in a conventional magnetic storage device — allowing for vastly more data to be written onto a surface of a given size.

A practical data-storage system

Google data center (credit: Google Inc.)

Beach has recently collaborated with researchers at MIT and others in Germany** to demonstrate experimentally for the first time that it’s possible to create skyrmions in specific locations, which is needed for a data-storage system. The new findings were reported October 2, 2017 in the journal Nature Nanotechnology.

Conventional magnetic systems are now reaching speed and density limits set by the basic physics of their existing materials. The new system, once perfected, could provide a way to continue that progress toward ever-denser data storage, Beach says.

However, the researchers note that to create a commercialized system will require an efficient, reliable way to create skyrmions when and where they were needed, along with a way to read out the data (which now requires sophisticated, expensive X-ray magnetic spectroscopy). The team is now pursuing possible strategies to accomplish that.***

* The system focuses on the boundary region between atoms whose magnetic poles are pointing in one direction and those with poles pointing the other way. This boundary region can move back and forth within the magnetic material, Beach says. What he and his team found four years ago was that these boundary regions could be controlled by placing a second sheet of nonmagnetic heavy metal very close to the magnetic layer. The nonmagnetic layer can then influence the magnetic one, with electric fields in the nonmagnetic layer pushing around the magnetic domains in the magnetic layer. Skyrmions are little swirls of magnetic orientation within these layers. The key to being able to create skyrmions at will in particular locations lies in material defects. By introducing a particular kind of defect in the magnetic layer, the skyrmions become pinned to specific locations on the surface, the team found. Those surfaces with intentional defects can then be used as a controllable writing surface for data encoded in the skyrmions.

** The team also includes researchers at the Max Born Institute and the Institute of Optics and Atomic Physics, both in Berlin; the Institute for Laser Technologies in Medicine and Metrology at the University of Ulm, in Germany; and the Deutsches Elektronen-Synchrotron (DESY), in Hamburg. The work was supported by the U.S. Department of Energy and the German Science Foundation.

*** The researchers believe an alternative way of reading the data is possible, using an additional metal layer added to the other layers. By creating a particular texture on this added layer, it may be possible to detect differences in the layer’s electrical resistance depending on whether a skyrmion is present or not in the adjacent layer.


Abstract of Field-free deterministic ultrafast creation of magnetic skyrmions by spin–orbit torques

Magnetic skyrmions are stabilized by a combination of external magnetic fields, stray field energies, higher-order exchange interactions and the Dzyaloshinskii–Moriya interaction (DMI). The last favours homochiral skyrmions, whose motion is driven by spin–orbit torques and is deterministic, which makes systems with a large DMI relevant for applications. Asymmetric multilayers of non-magnetic heavy metals with strong spin–orbit interactions and transition-metal ferromagnetic layers provide a large and tunable DMI. Also, the non-magnetic heavy metal layer can inject a vertical spin current with transverse spin polarization into the ferromagnetic layer via the spin Hall effect. This leads to torques that can be used to switch the magnetization completely in out-of-plane magnetized ferromagnetic elements, but the switching is deterministic only in the presence of a symmetry-breaking in-plane field. Although spin–orbit torques led to domain nucleation in continuous films and to stochastic nucleation of skyrmions in magnetic tracks, no practical means to create individual skyrmions controllably in an integrated device design at a selected position has been reported yet. Here we demonstrate that sub-nanosecond spin–orbit torque pulses can generate single skyrmions at custom-defined positions in a magnetic racetrack deterministically using the same current path as used for the shifting operation. The effect of the DMI implies that no external in-plane magnetic fields are needed for this aim. This implementation exploits a defect, such as a constriction in the magnetic track, that can serve as a skyrmion generator. The concept is applicable to any track geometry, including three-dimensional designs.