MIT’s Cheetah 3 blind robot can climb a staircase littered with debris, leap, and gallop across rough terrain

MIT’s Cheetah 3 robot, an upgrade to the Cheetah 2, can now leap and gallop across rough terrain, climb a staircase littered with debris, and quickly recover its balance when suddenly yanked or shoved, all while essentially blind.

The 90-pound robot is intentionally designed to do all this without relying on cameras or any external environmental sensors. The idea is to allow it to “feel” its way through its surroundings via “blind locomotion” (like making your way across a pitch-black room), eliminating visual distractions that would slow the robot down.

“Vision can be noisy, slightly inaccurate, and sometimes not available, and if you rely too much on vision, your robot has to be very accurate in position and eventually will be slow,” said the robot’s designer, Sangbae Kim, associate professor of mechanical engineering at MIT. “So we want the robot to rely more on tactile information. That way, it can handle unexpected obstacles while moving fast.”

Faster, more nimble, more cat-like

Warning: Cheetah 3 can jump on your desk (credit: MIT)

Cheetah 3 has an expanded range of motion compared to its predecessor, Cheetah 2, allowing the robot to stretch backwards and forwards and twist from side to side, much like a cat limbering up to pounce. Cheetah 3 can blindly make its way up staircases and through unstructured terrain, and can quickly recover its balance in the face of unexpected forces, thanks to two new algorithms developed by Kim’s team: a contact-detection algorithm and a model-predictive control algorithm.

The contact detection algorithm helps the robot determine the best time for a given leg to switch from swinging in the air to stepping on the ground. For example, if the robot steps on a light twig versus a hard, heavy rock, how it reacts — and whether it continues to carry through with a step, or pulls back and swings its leg instead — can make or break its balance.
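The published algorithm isn’t reproduced here, so the snippet below is only a toy sketch of the decision it describes: fuse a gait-phase timing prior with an estimated foot force to decide whether the leg should commit to the step or pull back and keep swinging. The function names and thresholds are hypothetical.

```python
# Toy sketch (not MIT's published algorithm): decide whether a leg should commit
# to stance by fusing a gait-phase prior with an estimated foot force.
# The names and thresholds here are illustrative assumptions.

def stance_probability(gait_phase: float, est_foot_force: float,
                       expected_touchdown: float = 0.5,
                       force_scale: float = 40.0) -> float:
    """Combine a timing prior with a proprioceptive force cue (returns 0..1)."""
    # Timing prior: ramps up as the leg approaches its scheduled touchdown phase.
    timing_prior = min(1.0, max(0.0, (gait_phase - expected_touchdown) / 0.1 + 0.5))
    # Force cue: saturating function of the estimated ground-reaction force (N).
    force_cue = min(1.0, max(0.0, est_foot_force / force_scale))
    # Simple fusion: require both cues to agree before committing to stance.
    return timing_prior * force_cue

def should_commit_to_step(gait_phase, est_foot_force, threshold=0.4):
    """True -> load the leg; False -> pull back and keep swinging."""
    return stance_probability(gait_phase, est_foot_force) > threshold

# Stepping on a light twig produces little force, so the leg keeps swinging;
# a firm surface produces enough force to commit to the step.
print(should_commit_to_step(gait_phase=0.55, est_foot_force=5.0))   # False (twig)
print(should_commit_to_step(gait_phase=0.55, est_foot_force=60.0))  # True (rock)
```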

The researchers tested the algorithm in experiments with the Cheetah 3 trotting on a laboratory treadmill and climbing on a staircase. Both surfaces were littered with random objects such as wooden blocks and rolls of tape.

The robot’s blind locomotion also relies in part on the model-predictive control algorithm, which predicts how much force a given leg should apply once it has committed to a step. The algorithm calculates the likely positions of the robot’s body and legs a half-second into the future, should a certain force be applied by any given leg as it makes contact with the ground.
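As a rough illustration of that prediction step (not the Cheetah 3 controller itself), the toy sketch below forward-simulates a one-dimensional body over a half-second horizon for a set of candidate leg forces and picks the force whose predicted trajectory best tracks a target height. The mass, time step, and candidate range are assumptions.

```python
# Toy model-predictive sketch (illustrative only, not the Cheetah 3 controller):
# pick the vertical leg force that keeps a 1-D "body" closest to a target height
# over a 0.5-second prediction horizon, simulated with simple point-mass physics.
import numpy as np

MASS, G, DT, HORIZON = 41.0, 9.81, 0.05, 0.5   # ~90 lb body, 0.5 s look-ahead

def predict_height(z, vz, force, steps=int(round(HORIZON / DT))):
    """Forward-simulate the body height if `force` (N) is applied at touchdown."""
    heights = []
    for _ in range(steps):
        az = force / MASS - G          # net vertical acceleration
        vz += az * DT
        z += vz * DT
        heights.append(z)
    return np.array(heights)

def choose_ground_force(z, vz, z_target=0.45, candidates=np.linspace(0, 800, 81)):
    """Return the candidate force whose predicted trajectory tracks the target best."""
    costs = [np.mean((predict_height(z, vz, f) - z_target) ** 2) for f in candidates]
    return candidates[int(np.argmin(costs))]

# Body slightly low and sinking: the chosen force ends up a bit above body weight.
print(choose_ground_force(z=0.42, vz=-0.1))
```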

Cameras to be activated later

The team has already added cameras to the robot to give it visual feedback about its surroundings. This will help in mapping the general environment and will give the robot a visual heads-up on larger obstacles such as doors and walls. But for now, the team is working to further improve the robot’s blind locomotion.

“We want a very good controller without vision first,” Kim says. “And when we do add vision, even if it might give you the wrong information, the leg should be able to handle [obstacles]. Because what if it steps on something that a camera can’t see? What will it do? That’s where blind locomotion can help. We don’t want to trust our vision too much.”

Within the next few years, Kim envisions the robot carrying out tasks that would otherwise be too dangerous or inaccessible for humans to take on.

This research was supported, in part, by Naver, Toyota Research Institute, Foxconn, and Air Force Office of Scientific Research.

Source: MIT.

Discovering new drugs and materials by ‘touching’ molecules in virtual reality

To figure out how to block a bacterium’s attempt to develop resistance to antibiotics, a researcher grabs a simulated ligand (binding molecule) — a type of penicillin called benzylpenicillin (red) — and interactively guides it to dock within a larger enzyme molecule (blue-orange) called β-lactamase, which bacteria produce to disable penicillin (making them resistant to the β-lactam class of antibiotics). (credit: University of Bristol)

University of Bristol researchers (a joint team of computer scientists and chemists), working with developers at Bristol-based start-up Interactive Scientific and with Oracle Corporation, have designed and tested a new cloud-based virtual reality (VR) system that lets researchers reach out and “touch” molecules as they move — folding them, knotting them, plucking them, and changing their shape to test how the molecules interact. The system, called Nano Simbox, is the proprietary technology of Interactive Scientific, which collaborated with the University of Bristol on the testing. Used with an HTC Vive virtual-reality headset, the system could help in creating new drugs and materials and in improving the teaching of chemistry.

More broadly, the goal is to accelerate progress in nanoscale molecular engineering areas that include conformational mapping, drug development, synthetic biology, and catalyst design.

Real-time collaboration via the cloud

Two users passing a fullerene (C60) molecule back and forth in real time over a cloud-based network. Each researcher wears a VR head-mounted display (HMD) and holds two small wireless controllers that function as atomic “tweezers” to manipulate the real-time molecular dynamics simulation of the C60 molecule. Each user’s position is determined using a real-time optical tracking system composed of synchronized infrared light sources, running locally on a GPU-accelerated computer. (credit: University of Bristol)

The multi-user system, developed by a team led by University of Bristol chemists and computer scientists, uses an “interactive molecular dynamics virtual reality” (iMD VR) app that allows users to visualize and sample (with atomic-level precision) the structures and dynamics of complex molecular structures “on the fly” and to interact with other users in the same virtual environment.

Because each VR client has access to global position data for all other users, any user can see through their headset a co-located visual representation of all other users at the same time. So far, the system has allowed as many as six users to be co-located in the same room within the same simulation.
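The sketch below is a schematic stand-in for that position-sharing loop, assuming a simple publish/snapshot server; the class and field names are hypothetical, and a real deployment would use a low-latency network transport rather than an in-memory dictionary.

```python
# Schematic sketch of the multi-user sync idea described above: each client
# publishes its headset/controller poses to a shared session, and every client
# then downloads the other users' poses to render co-located avatars.
# Names are hypothetical; this is not the Nano Simbox implementation.
from dataclasses import dataclass, field

@dataclass
class UserPose:
    head: tuple          # (x, y, z) headset position
    controllers: tuple   # two wireless controllers acting as atomic "tweezers"

@dataclass
class CloudSession:
    poses: dict = field(default_factory=dict)   # user_id -> latest UserPose

    def publish(self, user_id: str, pose: UserPose):
        self.poses[user_id] = pose               # client uploads its latest pose

    def snapshot(self, requesting_user: str):
        # Each client downloads everyone else's pose to draw their avatars.
        return {uid: p for uid, p in self.poses.items() if uid != requesting_user}

session = CloudSession()
session.publish("alice", UserPose(head=(0.0, 1.7, 0.0),
                                  controllers=((0.2, 1.2, 0.3), (-0.2, 1.2, 0.3))))
session.publish("bob", UserPose(head=(1.5, 1.6, 0.0),
                                controllers=((1.3, 1.1, 0.2), (1.7, 1.1, 0.2))))
print(session.snapshot("alice"))   # Alice sees Bob's pose, so she can render his avatar
```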

Testing on challenging molecular tasks

The team designed a series of molecular tasks for testing, comparing traditional mouse, keyboard, and touchscreen interfaces with virtual reality. The tasks included threading a small molecule through a nanotube, changing the screw-sense of a small organic helix, and tying a small string-like protein into a simple knot, all representative of dynamic molecular problems such as binding a drug to its target, protein folding, and chemical reactions. The researchers found that for complex 3D tasks, VR offers a significant advantage over current methods: for example, participants were ten times more likely to succeed in difficult tasks such as molecular knot tying.

Anyone can try out the tasks described in the open-access paper by downloading the software and launching their own cloud-hosted session.


David Glowacki | This video, made by University of Bristol PhD student Helen M. Deeks, shows the actions she took using a wireless set of “atomic tweezers” (using the HTC Vive) to interactively dock a single benzylpenicillin drug molecule into the active site of the β-lactamase enzyme. 


David Glowacki | The video shows the cloud-mounted virtual reality framework, with several different views overlaid to give a sense of how the interaction works. The video outlines the four different parts of the user studies: (1) manipulation of buckminsterfullerene, enabling users to familiarize themselves with the interactive controls; (2) threading a methane molecule through a nanotube; (3) changing the screw-sense of a helicene molecule; and (4) tying a trefoil knot in 17-alanine.

Ref.: Science Advances (open access). Source: University of Bristol.

There’s no known upper limit to human longevity, study suggests

Chiyo Miyako of Japan is the world’s oldest verified living person at 117 years, as of June 29, 2018, according to the Gerontology Research Group. She credits eating eel, drinking red wine, and never smoking for her longevity, and enjoys calligraphy. (credit: Medical Review Co., Ltd.)

Human death risk increases exponentially from 65 up to about age 80. At that point, the range of risks starts to increase. But by age 105, the death risk actually levels off — suggesting there’s no known upper limit for human lifespan.*

That’s the conclusion of a controversial study by an international team of scientists, published Thursday, June 28 in the journal Science.

“The increasing number of exceptionally long-lived people and the fact that their mortality beyond 105 is seen to be declining across cohorts — lowering the mortality plateau or postponing the age when it appears — strongly suggest that longevity is continuing to increase over time and that a limit, if any, has not been reached,” the researchers wrote.

High-quality data

Logarithmic plot of the risk of death (“hazard”) from ages 65 to 115 (on a logarithmic plot, an exponential appears as a straight diagonal line). For ages up to 105, the data are from the Human Mortality Database (HMD). Note that starting at age 80, the range of estimated risks of death (blue bars) starts to widen, and the data begin to depart from the steady exponential increase of the traditional “Gompertz” model (black line). By age 105, based on data from the new Italian ISTAT model, the risk of death hits a plateau (stops increasing, shown as a dashed black line on an orange background), and the odds of someone dying from one birthday to the next are roughly 50:50.** (credit: E. Barbi et al., Science)

The new study was based on “high-quality data from Italians aged 105 and older, collected by the Italian National Institute of Statistics (ISTAT).” That data provided “accuracy and precision that were not possible before,” the researchers say.

Previous data for supercentenarians (age 110 or older) in the Max Planck Institute for Demographic Research International Database on Longevity (IDL) were “problematic in terms of age reporting [due to] sparse data pooled from 11 countries,” according to the authors of the Science paper.

Instead, ISTAT “collected and validated the individual survival trajectory” of all inhabitants of Italy aged 105 and older in the period from 1 January 2009 to 31 December 2015, including birth certificates, suggesting that “misreporting is believed to be minimal in these data.”

Ref.: Science.

* The current record for the longest human life span was set in 1997 when French woman Jeanne Calment died at the age of 122.

** This chart shows yearly hazards (probability of death) on a logarithmic scale for the cohort (group of subjects with a matching characteristic) of Italian women born in 1904. The straight-line prediction (black) is based on fitting a Gompertz model to ages 65 to 80. Confidence intervals (blue) — the range of death probabilities — were derived from Human Mortality Database (HMD) data for ages up to 105, and from ISTAT data beyond age 105. Note the wider intervals and increased divergence from the straight-line prediction (black) after age 80, and the estimated plateau in the probability of death beyond age 105 (black dashed line on an orange background), based on the model parameters, which were in turn based on the full ISTAT database.
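To make the contrast concrete, here is a small numeric sketch with assumed parameter values (not the paper’s fitted ones): a Gompertz hazard extrapolated from ages 65 to 80 keeps rising exponentially, while the reported plateau holds the yearly death probability near 50 percent beyond age 105.

```python
# Illustrative numbers only (parameters are assumptions, not the paper's fit):
# compare an exponentially rising Gompertz hazard with a plateau in which the
# yearly odds of dying stay roughly 50:50 beyond age 105.
import math

a, b = 0.01, 0.11          # assumed Gompertz parameters "fitted" to ages 65-80

def gompertz_hazard(age):
    """Exponentially increasing hazard rate, anchored at age 65."""
    return a * math.exp(b * (age - 65))

def yearly_death_prob(hazard):
    """Convert a constant hazard rate into the probability of dying within a year."""
    return 1 - math.exp(-hazard)

for age in (80, 95, 105, 110):
    extrapolated = yearly_death_prob(gompertz_hazard(age))   # straight-line prediction
    plateau = 0.5 if age >= 105 else None                    # plateau reported beyond 105
    print(age, round(extrapolated, 2), plateau)
```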

New material eliminates need for motors or actuators in future robots, other devices

A “mini arm” made up of two hinges of actuating nickel hydroxide-oxyhydroxide material (left) can lift an object 50 times its own weight when triggered (right) by light or electricity. (credit: University of Hong Kong)

University of Hong Kong researchers have invented a radical new lightweight material that could replace traditional bulky, heavy motors or actuators in robots, medical devices, prosthetic muscles, exoskeletons, microrobots, and other types of devices.

The new actuating material — nickel hydroxide-oxyhydroxide — can be instantly triggered and wirelessly powered by low-intensity visible light, or powered by electricity at relatively low intensity. It can exert a force of up to 3,000 times its own weight — producing stress and speed comparable to mammalian skeletal muscles, according to the researchers.

The material is also responsive to heat and humidity changes, which could allow autonomous machines to harness tiny energy changes in the environment.

The major component is nickel, so the material cost is low, and fabrication uses a simple electrodeposition process, allowing for scaling up and manufacture in industry.

Developing actuating materials was identified as the leading grand challenge in “The grand challenges of Science Robotics,” which aims to “deeply root robotics research in science while developing novel robotic platforms that will enable new scientific discoveries.”

Using a light blocker (top), a mini walking bot (bottom) with the “front leg” bent and straightened alternately can walk towards a light source. (credit: University of Hong Kong)


University of Hong Kong | Future Robots need No Motors

Ref.: Science Robotics. Source: University of Hong Kong.

How robots aided by deep learning could help autism therapists


MIT Media Lab  (no sound) | Intro: Personalized Machine Learning for Robot Perception of Affect and Engagement in Autism Therapy. This is an example of a therapy session augmented with SoftBank Robotics’ humanoid robot NAO and deep-learning software. The 35 children with autism who participated in this study ranged in age from 3 to 13. They reacted in various ways to the robots during their 35-minute sessions — from looking bored and sleepy in some cases to jumping around the room with excitement, clapping their hands, and laughing or touching the robot.

Robots armed with personalized “deep learning” software could help therapists interpret behavior and personalize therapy of autistic children, while making the therapy more engaging and natural. That’s the conclusion of a study by an international team of researchers at MIT Media Lab, Chubu University, Imperial College London, and University of Augsburg.*

Children with autism-spectrum conditions often have trouble recognizing the emotional states of people around them — distinguishing a happy face from a fearful face, for instance. So some therapists use a kid-friendly robot to demonstrate those emotions and to engage the children in imitating the emotions and responding to them in appropriate ways.

Personalized autism therapy

But the MIT research team realized that deep learning would help the therapy robots perceive the children’s behavior more naturally, they report in a Science Robotics paper.

Personalization is especially important in autism therapy, according to the paper’s senior author, Rosalind Picard, PhD, a professor at MIT who leads research in affective computing: “If you have met one person with autism, you have met one person with autism,” she said, citing a famous adage.


“Computers will have emotional intelligence by 2029”… by which time, machines will “be funny, get the joke, and understand human emotion.” — Ray Kurzweil


“The challenge of using AI [artificial intelligence] that works in autism is particularly vexing, because the usual AI methods require a lot of data that are similar for each category that is learned,” says Picard, in explaining the need for deep learning. “In autism, where heterogeneity reigns, the normal AI approaches fail.”

How personalized robot-assisted therapy for autism would work

Robot-assisted therapy** for autism often works something like this: A human therapist shows a child photos or flash cards of different faces meant to represent different emotions, to teach them how to recognize expressions of fear, sadness, or joy. The therapist then programs the robot to show these same emotions to the child, and observes the child as she or he engages with the robot. The child’s behavior provides valuable feedback that the robot and therapist need to go forward with the lesson.

“Therapists say that engaging the child for even a few seconds can be a big challenge for them. [But] robots attract the attention of the child,” says lead author Ognjen Rudovic, PhD, a postdoctoral fellow at the MIT Media Lab. “Also, humans change their expressions in many different ways, but the robots always do it in the same way, and this is less frustrating for the child because the child learns in a very structured way how the expressions will be shown.”


SoftBank Robotics | The researchers used NAO humanoid robots in this study. Almost two feet tall and resembling an armored superhero or a droid, NAO conveys different emotions by changing the color of its eyes, the motion of its limbs, and the tone of its voice.

However, this type of therapy would work best if the robot could also smoothly interpret the child’s own behavior — such as whether the child is excited or paying attention — during the therapy, according to the researchers. To test this idea, researchers at the MIT Media Lab and Chubu University developed a personalized deep-learning network that helps robots estimate the engagement and interest of each child during these interactions.**

The researchers built a personalized framework that could learn from data collected on each individual child. They captured video of each child’s facial expressions, head and body movements, poses, and gestures; audio recordings; and data on heart rate, body temperature, and skin-sweat response from a monitor on the child’s wrist.

Most of the children in the study reacted to the robot “not just as a toy but related to NAO respectfully, as if it was a real person,” said Rudovic, especially during storytelling, where the therapists asked how NAO would feel if the children took the robot for an ice cream treat.

In the study, the researchers found that the robots’ perception of the children’s responses agreed with assessments by human experts, with a correlation score of 60 percent.*** (It can be challenging for human observers to reach high levels of agreement about a child’s engagement and behavior; their correlation scores are usually between 50 and 55 percent, according to the researchers.)

Ref.: Science Robotics (open-access). Source: MIT

* The study was funded by grants from the Japanese Ministry of Education, Culture, Sports, Science and Technology; Chubu University; and the European Union’s HORIZON 2020 grant (EngageME).

** A deep-learning system uses multiple, hierarchical layers of data processing to improve its performance on a task, with each successive layer amounting to a slightly more abstract representation of the original raw data. Deep learning has been used in automatic speech- and object-recognition programs, making it well suited to making sense of the multiple features of the face, body, and voice that go into understanding a more abstract concept such as a child’s engagement.

Overview of the key stages (sensing, perception, and interaction) during robot-assisted autism therapy.
Data from three modalities (audio, visual, and autonomic physiology) were recorded using unobtrusive audiovisual sensors and sensors worn on the child’s wrist, providing the child’s heart-rate, skin-conductance (EDA), body temperature, and accelerometer data. The focus of this work is the robot perception, for which we designed the personalized deep learning framework that can automatically estimate levels of the child’s affective states and engagement. These can then be used to optimize the child-robot interaction and monitor the therapy progress (see Interpretability and utility). The images were obtained by using Softbank Robotics software for the NAO robot. (credit: Ognjen Rudovic et al./Science Robotics)

“In the case of facial expressions, for instance, what parts of the face are the most important for estimation of engagement?” Rudovic says. “Deep learning allows the robot to directly extract the most important information from that data without the need for humans to manually craft those features.”

The robots’ personalized deep-learning networks were built from layers of these video, audio, and physiological data, plus information about each child’s autism diagnosis and abilities, their culture, and their gender. The researchers then compared their estimates of the children’s behavior with estimates from five human experts, who coded the children’s video and audio recordings on a continuous scale to determine how pleased or upset, how interested, and how engaged the child seemed during the session.
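As a rough sketch of how such a personalized, multimodal network might be wired up (an illustration, not the published architecture), the PyTorch snippet below encodes each modality separately, fuses them with demographic features in a shared trunk, and gives each child a small personalized output head; all feature sizes and layer widths are assumptions.

```python
# Schematic sketch (PyTorch) of the personalization idea described above -- not the
# published architecture. Per-modality encoders feed a shared trunk, and a small
# per-child head adapts the continuous estimates (e.g., valence, arousal, engagement).
import torch
import torch.nn as nn

class PersonalizedAffectNet(nn.Module):
    def __init__(self, n_children, video_dim=128, audio_dim=64, physio_dim=16, demo_dim=4):
        super().__init__()
        self.video = nn.Sequential(nn.Linear(video_dim, 32), nn.ReLU())
        self.audio = nn.Sequential(nn.Linear(audio_dim, 32), nn.ReLU())
        self.physio = nn.Sequential(nn.Linear(physio_dim, 16), nn.ReLU())
        self.trunk = nn.Sequential(nn.Linear(32 + 32 + 16 + demo_dim, 64), nn.ReLU())
        # One small head per child: the "personalized" layers.
        self.heads = nn.ModuleList([nn.Linear(64, 3) for _ in range(n_children)])

    def forward(self, video, audio, physio, demographics, child_id):
        fused = torch.cat([self.video(video), self.audio(audio),
                           self.physio(physio), demographics], dim=-1)
        shared = self.trunk(fused)
        # Per-child estimates of three affect/engagement dimensions.
        return self.heads[child_id](shared)

model = PersonalizedAffectNet(n_children=35)   # 35 children participated in the study
out = model(torch.randn(1, 128), torch.randn(1, 64), torch.randn(1, 16),
            torch.randn(1, 4), child_id=0)
print(out.shape)   # torch.Size([1, 3])
```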

*** Trained on these personalized data coded by the humans, and tested on data not used in training or tuning the models, the networks significantly improved the robot’s automatic estimation of the child’s behavior for most of the children in the study, beyond what would be estimated if the network combined all the children’s data in a “one-size-fits-all” approach, the researchers found. Rudovic and colleagues were also able to probe how the deep learning network made its estimations, which uncovered some interesting cultural differences between the children. “For instance, children from Japan showed more body movements during episodes of high engagement, while in Serbs large body movements were associated with disengagement episodes,” Rudovic notes.

Spotting fake images with AI

This tampered image (left) can be detected by noting visual artifacts (red rectangle, showing the unnaturally high contrast along the baseball player’s edges), compared to authentic regions (the parking lot background); and by noting noise pattern inconsistencies between the tampered regions and the background (as seen in “Noise” image). The “ground-truth” image is the outline of the added (fake) image used in the experiment. (credit: Adobe)

Thanks to user-friendly image editing software like Adobe Photoshop, it’s becoming increasingly difficult and time-consuming to spot some deceptive image manipulations.

Now, funded by DARPA, researchers at Adobe and the University of Maryland, College Park have turned to AI to detect the more subtle methods now used in doctoring images.

Forensic AI

What used to take an image-forensic expert several hours to do can now be done in seconds with AI, says Vlad Morariu, PhD, a senior research scientist at Adobe. “Using tens of thousands of examples of known, manipulated images, we successfully trained a deep learning neural network* to recognize image manipulation in each image,” he explains.

“We focused on three common tampering techniques — splicing, where parts of two different images are combined; copy-move, where objects in a photograph are moved or cloned from one place to another; and removal, where an object is removed from a photograph, and filled in,” he notes.

The neural network looks for two things: changes to the red, green and blue color values of pixels; and inconsistencies in the random variations of color and brightness generated by a camera’s sensor or by later software manipulations, such as Gaussian smoothing.

Ref.: 2018 Computer Vision and Pattern Recognition Proceedings (open access). Source: Adobe Blog.

* The researchers used a “two-stream Faster R-CNN” (a type of convolutional neural network) that they trained end-to-end to detect the tampered regions in a manipulated image. The two streams are RGB (red-green-blue — the millions of different colors), used to find tampering artifacts like strong contrast differences and unnatural tampered boundaries; and noise — inconsistency of noise patterns between authentic and tampered regions. In the example above, note that the baseball player’s image is lighter, in addition to more subtle differences that the algorithm can detect (even which tampering technique was used). These two features are then fused to identify the spatial co-occurrence of the two modalities (RGB and noise).
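The trained two-stream network itself isn’t reproduced here, but the snippet below illustrates the intuition behind the noise stream: high-pass filter the image to suppress scene content, then compare the residual noise level inside a suspect region with the rest of the image. The filter and thresholds are simplified assumptions, not Adobe’s model.

```python
# Illustrative sketch of the "noise stream" intuition (not Adobe's trained network):
# high-pass filter the image to suppress content, then compare local noise energy
# between a suspect region and the rest of the image. Thresholds are assumptions.
import numpy as np

def noise_residual(gray):
    """Simple high-pass residual: each pixel minus the mean of its 4 neighbors."""
    up, down = np.roll(gray, -1, axis=0), np.roll(gray, 1, axis=0)
    left, right = np.roll(gray, -1, axis=1), np.roll(gray, 1, axis=1)
    return gray - (up + down + left + right) / 4.0

def region_noise_level(gray, mask):
    """Standard deviation of the residual inside a boolean region mask."""
    return noise_residual(gray.astype(float))[mask].std()

def looks_inconsistent(gray, suspect_mask, ratio_threshold=1.5):
    """Flag the region if its noise level differs strongly from the background."""
    suspect = region_noise_level(gray, suspect_mask)
    background = region_noise_level(gray, ~suspect_mask)
    ratio = max(suspect, background) / max(min(suspect, background), 1e-6)
    return ratio > ratio_threshold

# Synthetic example: a spliced-in patch with different noise statistics gets flagged.
rng = np.random.default_rng(0)
img = 128 + rng.normal(0, 2.0, size=(64, 64))                 # "authentic" sensor noise
img[20:40, 20:40] = 128 + rng.normal(0, 8.0, size=(20, 20))   # pasted region, noisier
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True
print(looks_inconsistent(img, mask))   # True
```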

New wearable, high-precision brain scanner allows patients to move around

(Left) Current stationary MEG scanner. (Right) New wearable scanner allows patients to move around, even play ping pong. (credit: National Institute of Mental Health and University of Nottingham)

A radical new wearable magnetoencephalography (MEG) brain scanner under development at the University of Nottingham allows a patient to move around, instead of having to sit or lie still inside a massive scanner.

Currently, MEG scanners* weigh around 500 kilograms (about 1,100 pounds) because they require bulky superconducting sensors refrigerated in a liquid-helium dewar at -269°C. Patients must keep still — even a 5mm movement can ruin images of brain activity. That immobility severely limits the range of brain activities and experiences that can be studied, and makes the scanner unsuitable for children and many patients.

Natural movement, increased sensitivity

The new wearable, compact, non-invasive MEG technology** now makes it possible for patients to move around, which could revolutionize diagnosis and treatment of neurological disorders, say the researchers. Using compact, scalp-mounted sensors, it allows for natural movements in the real world, such as head nodding, stretching, drinking, and even playing ping pong.***

The new design also provides a fourfold increase in sensitivity in adults, and a 15- to 20-fold increase in infants, compared with existing MEG systems, according to the researchers. Brain events such as an epileptic seizure could be captured, and movement disorders such as Parkinson’s disease could be studied in more precise detail. Young patients could now also have a brain scan, even if they need to fidget.

The new system also supports better targeting, diagnosis, and treatment of mental health and neurological conditions, and wider ranges of social, environmental, and physical conditions.

“In a few more years, we could be imaging brain function while people do, quite literally, anything [using virtual-reality-based environments],” said Matt Brookes, Ph.D., director of the Sir Peter Mansfield Imaging Centre at the University of Nottingham.

The scientists next plan to design a bike-helmet-size scanner, offering more freedom of movement and a generic fit and allowing for the technology to be applied to a wider range of head sizes.

Ref.: Nature. Source: University of Nottingham

* MEG scanners are unique in allowing for precise whole-brain coverage and high resolution in both space and time, compared to EEG and MRI.

** Instead of a superconducting quantum interference device (SQUID), the new system is based on an array of optically pumped magnetometers (OPMs) — magnetic field sensors that rely on the atomic properties of alkali metals.

*** As is the case with current MEG scanners, special walls are required to block the Earth’s magnetic field. That constrains patient movement to a small room.

 

How to supervise a robot with your mind and hand gestures

A user supervises and controls an autonomous robot using brain signals to detect mistakes and muscle signals to redirect a robot in a task to move a power drill to one of three possible targets on the body of a mock airplane. (credit: MIT)

Getting robots to do things isn’t easy. Usually, scientists have to either explicitly program them, or else train them to understand human language. Both options are a lot of work.

Now a new system developed by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Vienna University of Technology, and Boston University takes a simpler approach: It uses a human’s brainwaves and hand gestures to instantly correct robot mistakes.

Plug and play

Instead of trying to mentally guide the robot (which would require a complex, error-prone system and extensive operator training), the system identifies robot errors in real time by detecting a specific type of electroencephalogram (EEG) signal called “error-related potentials,” using a brain-computer interface (BCI) cap. These potentials (voltage spikes) are unconsciously produced in the brain when people notice mistakes — no user training required.

If an error-related potential signal is detected, the system automatically stops. That allows the supervisor to correct the robot by simply flicking a wrist — generating an electromyogram (EMG) signal that is detected by a muscle sensor in the supervisor’s arm to provide specific instructions to the robot.*
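Schematically, the supervisory loop looks something like the sketch below (illustrative only; the classifier functions are hypothetical stand-ins for the real EEG and EMG signal-processing stages described in the paper).

```python
# Schematic sketch of the supervisory loop described above (illustrative only):
# an EEG classifier watches for an error-related potential; if one fires, the robot
# halts and an EMG gesture (wrist flick left/right) redirects it.

def errp_detected(eeg_window) -> bool:
    """Stand-in for the real-time error-related-potential classifier."""
    return max(eeg_window) > 40e-6       # hypothetical amplitude threshold (volts)

def classify_gesture(emg_window) -> str:
    """Stand-in for the EMG gesture classifier: 'left', 'right', or 'none'."""
    energy = sum(x * x for x in emg_window)
    return "none" if energy < 1e-6 else ("left" if emg_window[0] < 0 else "right")

def supervise_step(robot_target, eeg_window, emg_window):
    """One pass of the supervisory loop for a single robot action."""
    if errp_detected(eeg_window):        # supervisor noticed a mistake
        gesture = classify_gesture(emg_window)
        if gesture != "none":
            return f"halt and retarget {gesture}"   # robot scrolls to a new target
        return "halt and wait for gesture"
    return f"continue toward {robot_target}"        # no error signal: keep going

print(supervise_step("target B", eeg_window=[10e-6, 55e-6], emg_window=[-0.02, 0.01]))
# -> "halt and retarget left"
```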

To develop the system, the researchers used “Baxter,” a popular humanoid robot from Rethink Robotics, shown here folding a shirt. (credit: Rethink Robotics)

Remarkably, the “plug and play” system works without requiring supervisors to be trained. So organizations could easily deploy it in real-world use in manufacturing and other areas. Supervisors can even manage teams of robots.**

For the project, the team used “Baxter,” a humanoid robot from Rethink Robotics. With human supervision, the robot went from choosing the correct target 70 percent of the time to more than 97 percent of the time in a multi-target selection task for a mock drilling operation.

“This work combining EEG and EMG feedback enables natural human-robot interactions for a broader set of applications than we’ve been able to do before using only EEG feedback,” says CSAIL Director Daniela Rus, who supervised the work. “By including muscle feedback, we can use gestures to command the robot spatially, with much more nuance and specificity.”

“A more natural and intuitive extension of us”

The team says that they could imagine the system one day being useful for the elderly, or workers with language disorders or limited mobility.

“We’d like to move away from a world where people have to adapt to the constraints of machines,” says Rus. “Approaches like this show that it’s very much possible to develop robotic systems that are a more natural and intuitive extension of us.”

Hmm … so could this system help Tesla speed up its lagging Model 3 production?

A paper will be presented at the Robotics: Science and Systems (RSS) conference in Pittsburgh, Pennsylvania next week.

Ref.: Robotics: Science and Systems Proceedings (forthcoming). Source: MIT.

* EEG and EMG both have some individual shortcomings: EEG signals are not always reliably detectable, while EMG signals can sometimes be difficult to map to motions that are any more specific than “move left or right.” Merging the two, however, allows for more robust bio-sensing and makes it possible for the system to work on new users without training.

** The “plug and play” supervisory control system

“If an [error-related potential] or a gesture is detected, the robot halts and requests assistance. The human then gestures to the left or right to naturally scroll through possible targets. Once the correct target is selected, the robot resumes autonomous operation. … The system includes an experiment controller and the Baxter robot as well as EMG and EEG data acquisition and classification systems. A mechanical contact switch on the robot’s arm detects initiation of robot arm motion. A human supervisor closes the loop.” —  Joseph DelPreto et al. Plug-and-Play Supervisory Control Using Muscle and Brain Signals for Real-Time Gesture and Error Detection. Robotics: Science and Systems Proceedings (forthcoming). (credit: MIT)

Are virtual reality and augmented reality the future of education?

People learn better through virtual, immersive environments than through more traditional platforms like a two-dimensional desktop computer. That sounds intuitively right, but researchers at the College of Computer, Mathematical, and Natural Sciences at the University of Maryland (UMD) have now supported the idea with evidence.

The researchers conducted an experiment using the “memory palace” method, in which people recall an object or item by mentally placing it in an imaginary physical location, like a building or town. This “spatial mnemonic encoding” method has been used since classical times, taking advantage of the human brain’s ability to spatially organize thoughts and memories.*

Giulio Camillo’s Theater of Memory (1511 AD)

The researchers compared subjects’ recall accuracy when using a VR head-mounted display versus using a desktop computer. The results showed an 8.8 percent overall improvement in recall accuracy with the VR headsets.**

The two virtual-memory-palace scenes used in the experiment (credit: Eric Krokos et al./Virtual Reality)

Replacing rote memorization

“This data is exciting in that it suggests that immersive environments could offer new pathways for improved outcomes in education and high-proficiency training,” says Amitabh Varshney, PhD, UMD professor of computer science. Varshney leads several major research efforts on the UMD campus involving virtual and augmented reality (AR), including close collaboration with health care professionals interested in developing AR-based diagnostic tools for emergency medicine and in VR training for surgical residents.

“As Marshall McLuhan pointed out years ago, modern 3-D gaming and particularly virtual-reality explorations, like movies, are ‘hotter’ than TV, while still requiring participation,” said Western Michigan University educational psychologist Warren E. Lacefield, Ph.D. “That greatly enhances the ‘message,’ as this formal memory study illustrates with statistical significance.”

Inside the Augmentarium***, a revolutionary virtual and augmented reality facility at UMD (credit: UMD)

Learning by moving around in VR

Other research suggests the possibility that “a spatial virtual memory palace — experienced in an immersive virtual environment — could enhance learning and recall by leveraging a person’s overall sense of body position, movement and acceleration,” says UMD researcher and co-author Catherine Plaisant. A good example is the addictive AR game Pokémon GO, which offers the kind of fun, hyperfast learning one doesn’t experience in conventional education.

While walking (or running around) in VR sounds like fun, it’s currently a challenge (ouch!). But computer scientists from Stony Brook University, NVIDIA and Adobe have developed a clever way to allow VR users to walk around freely in a virtual world while avoiding obstacles. The trick: the system makes imperceptible small adjustments in your VR-based visual reality during your blinks and saccades (quick eye movements) — while you’re (unknowingly) totally blind.

Eye fixations (in green) and saccades (in red). A blink fully suppresses visual perception. (credit: Eike Langbehn)

In other words, it messes with your mind. For example, a short step in your living room could make you believe you’re taking a large stride in a giant desert or planet — without knocking over the TV.
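As a toy illustration of that trick (not the Stony Brook/NVIDIA/Adobe system itself), the sketch below applies a small, hidden camera rotation only while the eye tracker reports a blink or a fast saccade; the velocity threshold and per-saccade rotation budget are assumed values.

```python
# Toy sketch of the redirection idea (not the published system): while the eye
# tracker reports a saccade or blink, visual perception is suppressed, so the
# renderer can rotate the virtual camera by a tiny, imperceptible amount to
# steer the user's physical path. Thresholds and gains here are assumptions.

SACCADE_VELOCITY_DEG_PER_S = 180.0   # assumed saccade-detection threshold
MAX_OFFSET_PER_SACCADE_DEG = 0.5     # assumed imperceptible rotation budget

def redirect_camera(camera_yaw_deg, eye_velocity_deg_per_s, is_blinking, needed_correction_deg):
    """Apply a small hidden yaw offset only while vision is suppressed."""
    suppressed = is_blinking or eye_velocity_deg_per_s > SACCADE_VELOCITY_DEG_PER_S
    if not suppressed:
        return camera_yaw_deg, needed_correction_deg     # do nothing the user could notice
    step = max(-MAX_OFFSET_PER_SACCADE_DEG,
               min(MAX_OFFSET_PER_SACCADE_DEG, needed_correction_deg))
    return camera_yaw_deg + step, needed_correction_deg - step

# Over many saccades, small hidden offsets add up to a large redirection.
yaw, remaining = 0.0, 12.0   # degrees of total redirection needed to avoid a wall
for _ in range(30):
    yaw, remaining = redirect_camera(yaw, eye_velocity_deg_per_s=300.0,
                                     is_blinking=False, needed_correction_deg=remaining)
print(round(yaw, 1), round(remaining, 1))   # 12.0 0.0 after enough saccades
```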

That system will be presented at SIGGRAPH August 12–16 in Vancouver. Watch out for the VR walkers.

Ref.: Virtual Reality (open access), ACM Transactions on Graphics (forthcoming). Source: University of Maryland, SIGGRAPH, Stony Brook University, NVIDIA, Adobe.

* A remarkable example, described here, is a Russian who was thought to have a literally limitless memory.

** For the study, the UMD researchers recruited 40 volunteers—mostly UMD students unfamiliar with virtual reality. The researchers split the participants into two groups: one viewed information first via a VR head-mounted display and then on a desktop; the other did the opposite. Both groups received printouts of well-known faces—including Abraham Lincoln, the Dalai Lama, Arnold Schwarzenegger and Marilyn Monroe—and familiarized themselves with the images. Next, the researchers showed the participants the faces using the memory palace format with two imaginary locations: an interior room of an ornate palace and an external view of a medieval town. Both of the study groups navigated each memory palace for five minutes. Desktop participants used a mouse to change their viewpoint, while VR users turned their heads from side to side and looked up and down.

Next, Krokos asked the users to memorize the location of each of the faces shown. Half the faces were positioned in different locations within the interior setting—Oprah Winfrey appeared at the top of a grand staircase; Stephen Hawking was a few steps down, followed by Shrek. On the ground floor, Napoleon Bonaparte’s face sat above a majestic wooden table, while the Rev. Martin Luther King Jr. was positioned in the center of the room.

Similarly, for the medieval town setting, users viewed images that included Hillary Clinton’s face on the left side of a building, with Mickey Mouse and Batman placed at varying heights on nearby structures.

Then, the scene went blank, and after a two-minute break, each memory palace reappeared with numbered boxes where the faces had been. The research participants were then asked to recall which face had been in each location where a number was now displayed. The key, say the researchers, was for participants to identify each face by its physical location and its relation to surrounding structures and faces—and also the location of the image relative to the user’s own body.

The results showed an 8.8 percent improvement overall in recall accuracy using the VR headsets, a statistically significant number according to the research team. Many of the participants said the immersive “presence” while using VR allowed them to focus better. This was reflected in the research results: 40 percent of the participants scored at least 10 percent higher in recall ability using VR over the desktop display.

*** The Augmentarium is a visualization testbed lab on the UMD campus that launched in 2015 with funding from the National Science Foundation. The researchers describe it as “a revolutionary virtual and augmented reality (VR and AR) facility that brings together a unique assembly of projection displays, augmented reality visors, GPU clusters, human vision and human-computer interaction technologies, to study and facilitate visual augmentation of human intelligence and amplify situational awareness.”

Research suggests that humans could one day regrow limbs

Planaria flatworm (credit: Holger Brandl et al./Wikipedia)

In the June 14, 2018, issue of the journal Cell, researchers at Stowers Institute for Medical Research published a landmark study whose findings have important implications for advancing the study of stem cell biology and regenerative medicine, according to the researchers.*

Over a century ago, scientists traced regenerative powers in a flatworm known as planaria to a special population of planaria adult stem cells called neoblasts (a type of adult pluripotent stem cell — meaning a cell that can transform into any type of cell). Scientists believe these neoblasts hold the secret to regeneration. But until recently, scientists lacked the tools necessary to identify exactly which of the individual types of neoblasts were actually capable of regeneration.

However, with a special technique that combined genomics, single-cell analysis, and imaging, the scientists were able to identify 12 different subgroups of neoblasts. The problem was to find the specific neoblasts that were pluripotent (able to create any kind of cell, instead of becoming specific cells, like muscle or skin). By further analyzing the 12 neoblast markers (genetic signatures), they narrowed it down to one specific subgroup, called Nb2.

Planarian flatworm adult stem cells known as neoblasts can be clustered based on their gene expression profiles (left panel). A neoblast subpopulation termed Nb2 expresses the cell membrane protein TSPAN-1 (center panel, a representative Nb2 cell with TSPAN-1 protein shown in green and DNA in blue). Nb2 neoblasts were found to be able to repopulate stem cell-depleted animals (right panel, representative animals at different time points after Nb2 single-cell transplants). (credit: Stowers Institute for Medical Research)

To see if the Nb2 type of neoblast was truly capable of regeneration, the researchers irradiated a group of planaria to deplete their stem cells and then transplanted Nb2 cells into them. They found that the Nb2 cells were in fact able to repopulate the animals.

“We have enriched for a pluripotent stem cell population, which opens the door to a number of experiments that were not possible before,” says senior author Alejandro Sánchez Alvarado, Ph.D. “The fact that the marker we discovered is expressed not only in planarians but also in humans suggests that there are some conserved mechanisms that we can exploit.

“I expect those first principles will be broadly applicable to any organism that ever relied on stem cells to become what they are today. And that essentially is everybody.”

Ref.: Cell. Source: Stowers Institute for Medical Research

* The work was funded by the Stowers Institute for Medical Research, the Howard Hughes Medical Institute, and the National Institute of General Medical Sciences of the National Institutes of Health.