Disney Research’s ‘Magic Bench’ makes augmented reality a headset-free group experience

Magic Bench (credit: Disney Research)

Disney Research has created the first shared, combined augmented/mixed-reality experience, replacing first-person head-mounted displays or handheld devices with a mirrored image on a large screen — allowing people to share the magical experience as a group.

Sit on Disney Research’s Magic Bench and you may see an elephant hand you a glowing orb, hear its voice, and feel it sit down next to you, for example. Or you might get rained on and find yourself underwater.

How it works

Flowchart of the Magic Bench installation (credit: Disney Research)

People seated on the Magic Bench can see themselves on a large video display in front of them. The scene is reconstructed using a combined depth sensor/video camera (Microsoft Kinect) to image participants, bench, and surroundings. An image of the participants is projected on a large screen, allowing them to occupy the same 3D space as a computer-generated character or object. The system can also infer participants’ gaze.*

Speakers and haptic actuators built into the bench add to the experience (by vibrating the bench when the elephant sits down, in this example).

The research team will present and demonstrate the Magic Bench at SIGGRAPH 2017, the Computer Graphics and Interactive Techniques Conference, which began Sunday, July 30 in Los Angeles.

* To eliminate depth shadows that occur in areas where the depth sensor has no corresponding line of sight with the color camera, a modified algorithm creates a 2D backdrop, according to the researchers. The 3D and 2D reconstructions are positioned in virtual space and populated with 3D characters and effects in such a way that the resulting real-time rendering is a seamless composite, fully capable of interacting with virtual physics, light, and shadows.
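
To make the compositing step concrete, here is a minimal sketch in Python of the general idea, not Disney's code: pixels with valid depth readings are back-projected into 3D, depth-shadow pixels are kept as a flat 2D backdrop, and a stand-in render engine ("cg_renderer," a hypothetical object, as are the camera intrinsics) draws the virtual characters into the same space.

```python
# Minimal sketch (not Disney's code) of the compositing idea described above.
import numpy as np

def lift_to_3d(depth, color, fx, fy, cx, cy):
    """Back-project pixels with valid depth into camera-space 3D points."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                      # 0 = depth shadow (no reading)
    z = depth[valid]
    x = (us[valid] - cx) * z / fx
    y = (vs[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=-1), color[valid], valid

def composite_frame(depth, color, cg_renderer, backdrop_depth_m=4.0):
    """One mirror-image frame: real 3D points + flat 2D backdrop + CG layer."""
    h, w = depth.shape
    # fx/fy/cx/cy are illustrative Kinect-like intrinsics, not calibrated values.
    points, colors, valid = lift_to_3d(depth, color, fx=525.0, fy=525.0,
                                       cx=w / 2.0, cy=h / 2.0)
    # Pixels the depth sensor could not see are kept as a flat backdrop plane
    # behind the participants, so the final render has no holes.
    backdrop_mask = ~valid
    # cg_renderer stands in for the engine that draws the elephant, orbs, rain,
    # etc., depth-tested against the reconstructed points.
    return cg_renderer.render(real_points=points, real_colors=colors,
                              backdrop=(backdrop_mask, color, backdrop_depth_m))
```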


DisneyResearchHub | Magic Bench


Abstract of Magic Bench

Mixed Reality (MR) and Augmented Reality (AR) create exciting opportunities to engage users in immersive experiences, resulting in natural human-computer interaction. Many MR interactions are generated around a first-person Point of View (POV). In these cases, the user directs to the environment, which is digitally displayed either through a head-mounted display or a handheld computing device. One drawback of such conventional AR/MR platforms is that the experience is user-specific. Moreover, these platforms require the user to wear and/or hold an expensive device, which can be cumbersome and alter interaction techniques. We create a solution for multi-user interactions in AR/MR, where a group can share the same augmented environment with any computer generated (CG) asset and interact in a shared story sequence through a third-person POV. Our approach is to instrument the environment leaving the user unburdened of any equipment, creating a seamless walk-up-and-play experience. We demonstrate this technology in a series of vignettes featuring humanoid animals. Participants can not only see and hear these characters, they can also feel them on the bench through haptic feedback. Many of the characters also interact with users directly, either through speech or touch. In one vignette an elephant hands a participant a glowing orb. This demonstrates HCI in its simplest form: a person walks up to a computer, and the computer hands the person an object.

Alphabet’s X announces Glass Enterprise Edition, a hands-free device for hands-on workers

Glass Enterprise Edition (credit: X)

Alphabet’s X announced today Glass Enterprise Edition (EE) — an augmented-reality device targeted mainly to hands-on workers.

Glass EE is an improved version of the “Explorer Edition” — the experimental 2013 version of the original Glass product.

On the left is an assembly engine manual that GE’s mechanics used to consult. Now they use Glass Enterprise Edition on the right. (credit: X)

In January 2015, the Enterprise team at X quietly began shipping the Enterprise Edition to corporate solution partners like GE and DHL.

Now, more than 50 businesses, including AGCO, Dignity Health, NSF International, Sutter Health, The Boeing Company, and Volkswagen, have been using Glass to complete their work faster and more easily than before, the X blog reports.

Workers can access training videos, images annotated with instructions, or quality-assurance checklists, for example, or invite others to “see what you see” through a live video stream in order to collaborate and troubleshoot in real time.

AGCO workers use Glass to see assembly instructions, make reports and get remote video support. (credit: X)

Glass EE enables workers to scan a machine’s serial number to instantly bring up a manual, photo, or video they may need to build a tractor. (credit: AGCO)

Significant improvements

The new “Glass 2.0” design makes significant improvements over the original Glass, according to Jay Kothari, project lead on the Glass enterprise team, as reported by Wired. It’s accessible for those who wear prescription lenses. A release switch allows the “Glass Pod” electronics module to be removed from the frame for use with safety glasses on the factory floor. EE also has faster WiFi, faster processing, extended battery life, an 8-megapixel camera (up from 5), and a (much-requested) red light to indicate that recording is in progress.

Using Glass with Augmedix, doctors and nurses at Dignity Health can focus on patient care rather than record keeping. (credit: X)

But uses are not limited to factories. EE exclusive distributor Glass Partners also offers Glass devices, specialized software solutions, and ongoing support for applications such as Augmedix, a documentation-automation platform powered by human experts and software that frees physicians from computer work (“Glass has brought the joys of medicine back to my doctors,” says Albert Chan, M.D., of Sutter Health), and swyMed, which gives medical care teams the ability to reliably connect to doctors for real-time telemedicine.

And there are even (carefully targeted) uses for non-workers: Aira provides blind and low-vision people with instant access to information.

A recent Forrester Research report predicts that by 2025, nearly 14.4 million U.S. workers will wear smart glasses.


sutterhealth | Smart Glass Transforms Doctor’s Office Visits, Improves Satisfaction


VR glove powered by soft robotics provides missing sense of touch

Prototype of haptic VR glove, using soft robotic “muscles” to provide realistic tactile feedback for VR experiences (credit: Jacobs School of Engineering/UC San Diego)

Engineers at UC San Diego have designed a light, flexible glove with soft robotic muscles that provide realistic tactile feedback for virtual reality (VR) experiences.

Currently, VR tactile-feedback user interfaces are bulky, uncomfortable to wear and clumsy, and they simply vibrate when a user touches a virtual surface or object.

“This is a first prototype, but it is surprisingly effective,” said Michael Tolley, a mechanical engineering professor at the Jacobs School of Engineering at UC San Diego and a senior author of a paper presented at the Electronic Imaging, Engineering Reality for Virtual Reality conference in Burlingame, California, and published May 31, 2017, in Advanced Engineering Materials.

The key soft-robotic component of the new glove is a version of the “McKibben muscle” (a pneumatic, or air-based, actuator invented in the 1950s by the physician Joseph L. McKibben for use in prosthetic limbs), using soft latex chambers covered with braided fibers. When the user moves their fingers, the muscles respond like springs to apply tactile feedback. A custom fluidic control board controls the muscles by inflating and deflating them.*

Prototype haptic VR glove system. A computer generates an image of a virtual world (in this case, a piano keyboard with a river and trees in the background) and sends it to the VR device (such as an Oculus Rift). A Leap Motion depth camera (on the table) detects the position and movement of the user’s hands and sends the data to the computer. The computer sends an image of the user’s hands superimposed over the keyboard (in the demo case) to the VR display and to a custom fluidic control board. The board then feeds back a signal to soft robotic components in the glove to individually inflate or deflate fingers, mimicking the user’s applied forces.
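
The caption above describes a closed loop: hand tracking, contact detection in the virtual scene, and pressure commands to the fluidic board. A rough sketch of that loop is shown below; it is not the UC San Diego implementation, and the "board" and "virtual_keys" interfaces are hypothetical stand-ins.

```python
# Illustrative sketch of the feedback loop described in the caption above:
# hand tracking -> contact detection -> inflate/deflate commands per finger.
from dataclasses import dataclass

@dataclass
class FingerActuator:
    channel: int           # valve/pump channel on the fluidic control board
    pressure: float = 0.0  # last commanded pressure (arbitrary units)

def update_haptics(fingertips, virtual_keys, board, actuators, gain=1.0):
    """fingertips: dict of finger name -> (x, y, z) from the hand tracker.
    virtual_keys: objects with a penetration_depth(point) method (hypothetical).
    board: interface to the fluidic control board (hypothetical)."""
    for name, tip in fingertips.items():
        # How far this virtual fingertip has pressed into the nearest key.
        depth = max(key.penetration_depth(tip) for key in virtual_keys)
        target = gain * max(depth, 0.0)          # deeper press -> more pressure
        actuator = actuators[name]
        actuator.pressure = target
        board.set_pressure(actuator.channel, target)  # inflate or deflate the muscle
```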

The engineers conducted an informal pilot study of 15 users, including two VR interface experts. The demo allowed them to play the piano in VR. They all agreed that the gloves increased the immersive experience, which they described as “mesmerizing” and “amazing.”

VR headset image of a piano, showing user’s finger actions (credit: Jacobs School of Engineering/UC San Diego)

The engineers say they’re working on making the glove cheaper, less bulky, and more portable. They would also like to bypass the Leap Motion device altogether to make the system more self-contained and compact. “Our final goal is to create a device that provides a richer experience in VR,” Tolley said. “But you could imagine it being used for surgery and video games, among other applications.”

* The researchers 3D-printed a mold to make the gloves’ soft exoskeleton. This will make the devices easier to manufacture and suitable for mass production, they said. Researchers used silicone rubber for the exoskeleton, with Velcro straps embedded at the joints.


JacobsSchoolNews | A glove powered by soft robotics to interact with virtual reality environments


Abstract of Soft Robotics: Review of Fluid-Driven Intrinsically Soft Devices; Manufacturing, Sensing, Control, and Applications in Human-Robot Interaction

The emerging field of soft robotics makes use of many classes of materials including metals, low glass transition temperature (Tg) plastics, and high Tg elastomers. Dependent on the specific design, all of these materials may result in extrinsically soft robots. Organic elastomers, however, have elastic moduli ranging from tens of megapascals down to kilopascals; robots composed of such materials are intrinsically soft − they are always compliant independent of their shape. This class of soft machines has been used to reduce control complexity and manufacturing cost of robots, while enabling sophisticated and novel functionalities often in direct contact with humans. This review focuses on a particular type of intrinsically soft, elastomeric robot − those powered via fluidic pressurization.

A deep-learning tool that lets you clone an artistic style onto a photo

The Deep Photo Style Transfer tool lets you add artistic style and other elements from a reference photo onto your photo. (credit: Cornell University)

“Deep Photo Style Transfer” is a cool new artificial-intelligence image-editing software tool that lets you transfer a style from another (“reference”) photo onto your own photo, as shown in the above examples.

An open-access arXiv paper by Cornell University computer scientists and Adobe collaborators explains that the tool can transpose the look of one photo (such as the time of day, weather, season, and artistic effects) onto your photo, giving it a stylized, painterly quality while keeping it photorealistic.

The algorithm also handles extreme mismatch of forms, such as transferring a fireball to a perfume bottle. (credit: Fujun Luan et al.)

“What motivated us is the idea that style could be imprinted on a photograph, but it is still intrinsically the same photo,” said Cornell computer science professor Kavita Bala. “This turned out to be incredibly hard. The key insight finally was about preserving boundaries and edges while still transferring the style.”

To do that, the researchers created deep-learning software that can add a neural network layer that pays close attention to edges within the image, like the border between a tree and a lake.
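
Schematically, the method adds a photorealism term to the usual content and style objectives of neural style transfer. The toy sketch below is not the authors' implementation: it uses raw pixels instead of deep features and a simple gradient-matching penalty in place of the paper's locally affine color constraint, just to show how the combined, differentiable loss is weighted.

```python
# Toy stand-in for the objective described in the abstract below.
import torch

def gram(img):
    """Channel-correlation (Gram) matrix used as a simple style descriptor."""
    c = img.shape[0]
    f = img.reshape(c, -1)
    return f @ f.t() / f.shape[1]

def total_loss(output, content, style, w_content=1.0, w_style=10.0, w_photo=100.0):
    # Content term: stay close to the input photo (the real method compares
    # deep network features, not raw pixels).
    l_content = torch.mean((output - content) ** 2)
    # Style term: match channel correlations of the reference photo.
    l_style = torch.mean((gram(output) - gram(style)) ** 2)
    # Photorealism stand-in: keep the output's local structure (pixel
    # differences) close to the content photo's, a crude substitute for the
    # paper's locally affine color-space constraint that preserves edges.
    dx = lambda im: im[:, :, 1:] - im[:, :, :-1]
    dy = lambda im: im[:, 1:, :] - im[:, :-1, :]
    l_photo = torch.mean((dx(output) - dx(content)) ** 2) + \
              torch.mean((dy(output) - dy(content)) ** 2)
    return w_content * l_content + w_style * l_style + w_photo * l_photo

# Toy usage: 3 x H x W images with values in [0, 1]
out = torch.rand(3, 64, 64, requires_grad=True)
loss = total_loss(out, torch.rand(3, 64, 64), torch.rand(3, 64, 64))
loss.backward()  # the whole objective is differentiable end to end
```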

The software is still in the research stage.

Bala, Cornell doctoral student Fujun Luan, and Adobe collaborators Sylvain Paris and Eli Shechtman will present their paper at the Conference on Computer Vision and Pattern Recognition on July 21–26 in Honolulu.

This research is supported by a Google Faculty Research Award and NSF awards.


Abstract of Deep Photo Style Transfer

This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.


Virtual-reality therapy found effective for treating phobias and PTSD

A soldier using “Bravemind” VR therapy (credit: USC Institute for Creative Technologies)

Virtual reality (VR) technology can be an effective part of treatment for phobias, post-traumatic stress disorder (PTSD) in combat veterans, and other mental health conditions, according to an open-access research review in the May/June issue of the Harvard Review of Psychiatry.

“VR-based exposure therapy” (VRE) has been found effective for treating panic disorder, schizophrenia, acute and chronic pain, addictions (including smoking), social anxiety disorder, claustrophobia, agoraphobia (fear of open spaces), eating disorders, “generalized anxiety disorder” (where daily functioning becomes difficult), and obsessive-compulsive disorder.

iPhone VR Therapy System, including apps (lower right) (credit: Virtually Better, Inc.)

VR allows providers to “create computer-generated environments in a controlled setting, which can be used to create a sense of presence and immersion in the feared environment for individuals suffering from anxiety disorders,” says lead author Jessica L. Maples-Keller, PhD, of the University of Georgia.

One dramatic example is progressive exposure to frightening situations in patients with specific phobias, such as fear of flying. This typically includes eight steps, from walking through an airport terminal to flying during a thunderstorm with turbulence, including specific stimuli linked to these symptoms (such as the sound of the cabin door closing). The patient can virtually experience repeated takeoffs and landings without going on an actual flight.

VR can simulate exposures that would be costly or impractical to recreate in real life, such as combat conditions, and lets providers control the “dose” and specific aspects of the exposure environment.

“A VR system will typically include a head-mounted display and a platform (for the patients) and a computer with two monitors — one for the provider’s interface in which he or she constructs the exposure in real time, and another for the provider’s view of the patient’s position in the VR environment,” the researchers note.

However, research so far on VR applications has had limitations, including small numbers of patients and lack of comparison groups; and mental health care providers will need specific training, the authors warn.

The senior author of the paper, Barbara O. Rothbaum, PhD, disclosed one advisory board payment from Genentech and equity in Virtually Better, Inc., which creates virtual reality products.


Abstract of The Use of Virtual Reality Technology in the Treatment of Anxiety and Other Psychiatric Disorders

Virtual reality (VR) allows users to experience a sense of presence in a computer-generated, three-dimensional environment. Sensory information is delivered through a head-mounted display and specialized interface devices. These devices track head movements so that the movements and images change in a natural way with head motion, allowing for a sense of immersion. VR, which allows for controlled delivery of sensory stimulation via the therapist, is a convenient and cost-effective treatment. This review focuses on the available literature regarding the effectiveness of incorporating VR within the treatment of various psychiatric disorders, with particular attention to exposure-based intervention for anxiety disorders. A systematic literature search was conducted in order to identify studies implementing VR-based treatment for anxiety or other psychiatric disorders. This article reviews the history of the development of VR-based technology and its use within psychiatric treatment, the empirical evidence for VR-based treatment, and the benefits for using VR for psychiatric research and treatment. It also presents recommendations for how to incorporate VR into psychiatric care and discusses future directions for VR-based treatment and clinical research.

Precision typing on a smartwatch with finger gestures

The “WatchSense” prototype uses a small depth camera attached to the arm, mimicking a depth camera on a smartwatch. It could make typing easy; in a music program, for example, volume could be increased simply by raising a finger. (credit: Srinath Sridhar et al.)

If you wear a smartwatch, you know how limiting it is to type on or otherwise operate it. Now European researchers have developed an input method that uses a depth camera (similar to the Kinect game controller) to track fingertip touch and location on the back of the hand or in mid-air, allowing for precision control.

The researchers have created a prototype called “WatchSense,” worn on the user’s arm. It captures the movements of the thumb and index finger on the back of the hand or in the space above it. It would also work with smartphones, smart TVs, and virtual-reality or augmented-reality devices, explains Srinath Sridhar, a researcher in the Graphics, Vision and Video group at the Max Planck Institute for Informatics.

KurzweilAI has covered a variety of attempts to use depth cameras for controlling devices, but developers have been plagued with the lack of precise control with current camera devices and software.

The new software, based on machine learning, recognizes the exact positions of the thumb and index finger in the 3D image from the depth sensor, says Sridhar. It identifies specific fingers and deals with the unevenness of the back of the hand and the fact that fingers can occlude each other when they are moved.
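
As a rough illustration of that per-frame logic (classify depth pixels by finger, locate each fingertip, and decide touch versus hover), here is a sketch that is not the WatchSense implementation; the pixel classifier and the thresholds are hypothetical stand-ins.

```python
# Rough per-frame sketch: label depth pixels, find fingertips, detect touch.
import numpy as np

THUMB, INDEX, HAND, BACKGROUND = 0, 1, 2, 3   # per-pixel class labels

def detect_fingertips(depth_mm, pixel_classifier, touch_threshold_mm=10.0):
    """depth_mm: 2D depth image in millimeters.
    pixel_classifier: hypothetical trained model returning a label per pixel."""
    labels = pixel_classifier.predict(depth_mm)
    tips = {}
    # Rough depth of the back-of-hand surface, used to decide touch vs. hover.
    hand_depth = np.median(depth_mm[labels == HAND])
    for label, name in [(THUMB, "thumb"), (INDEX, "index")]:
        ys, xs = np.nonzero(labels == label)
        if xs.size == 0:
            continue                      # finger occluded in this frame
        # Heuristic: treat the finger pixel closest to the camera as the tip.
        i = np.argmin(depth_mm[ys, xs])
        u, v, z = int(xs[i]), int(ys[i]), float(depth_mm[ys[i], xs[i]])
        tips[name] = {
            "pixel": (u, v),
            "depth_mm": z,
            "touching": abs(hand_depth - z) < touch_threshold_mm,
        }
    return tips
```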

A smartwatch (or other device) could have an embedded depth sensor on its side, aimed at the back of the hand and the space above it, allowing for easy typing and control. (credit: Srinath Sridhar et al.)

“The currently available depth sensors do not fit inside a smartwatch, but from the trend it’s clear that in the near future, smaller depth sensors will be integrated into smartwatches,” Sridhar says.

The researchers, who include Christian Theobalt, head of the Graphics, Vision and Video group at MPI; Anders Markussen and Sebastian Boring at the University of Copenhagen; and Antti Oulasvirta at Aalto University in Finland, will present WatchSense at the ACM CHI Conference on Human Factors in Computing Systems in Denver (May 6–11, 2017). Their open-access paper is also available.


Srinath Sridhar et al. | WatchSense: On- and Above-Skin Input Sensing through a Wearable Depth Sensor


Abstract of WatchSense: On- and Above-Skin Input Sensing through a Wearable Depth Sensor

This paper contributes a novel sensing approach to support on- and above-skin finger input for interaction on the move. WatchSense uses a depth sensor embedded in a wearable device to expand the input space to neighboring areas of skin and the space above it. Our approach addresses challenging camera-based tracking conditions, such as oblique viewing angles and occlusions. It can accurately detect fingertips, their locations, and whether they are touching the skin or hovering above it. It extends previous work that supported either mid-air or multitouch input by simultaneously supporting both. We demonstrate feasibility with a compact, wearable prototype attached to a user’s forearm (simulating an integrated depth sensor). Our prototype—which runs in real-time on consumer mobile devices—enables a 3D input space on the back of the hand. We evaluated the accuracy and robustness of the approach in a user study. We also show how WatchSense increases the expressiveness of input by interweaving mid-air and multitouch for several interactive applications.

‘Strange Beasts’: Is this the future of augmented reality?

(credit: Magali Barbe)

“Strange Beasts,” a five-minute short science-fiction movie produced by Magali Barbe, takes the form of an augmented-reality-game promo. In it, Victor Weber, founder of Strange Beasts, says the game “allows players to create, customize, and grow your very own creature.”

Supervision (credit: Magali Barbe)

Weber explains that this is made possible by “nanoretinal technology” that “superimposes computer-graphics-composed imagery over real world objects by projecting a digital light field directly into your eye.” The imagery is reminiscent of Magic Leap promos — but using surgically implanted “supervision” displays.

The movie’s surprise ending raises disturbing questions about where augmented reality may one day take us.

Reboot of The Matrix in the works

(credit: Warner Bros.)

Warner Bros. is in the early stages of developing a relaunch of The Matrix, The Hollywood Reporter revealed today (March 14, Pi day, appropriately).

The Matrix, the iconic 1999 sci-fi movie, “is considered one of the most original films in cinematic history,” says THR.

The film “depicts a dystopian future in which reality as perceived by most humans is actually a simulated reality called ‘the Matrix,’ created by sentient machines to subdue the human population, while their bodies’ heat and electrical activity are used as an energy source,” Wikipedia notes. “Computer programmer ‘Neo’ learns this truth and is drawn into a rebellion against the machines, which involves other people who have been freed from the ‘dream world.’”

Keanu Reeves said he would be open to returning for another installment of the franchise if the Wachowskis were involved, according to THR (they are not currently involved).

Interestingly, Carrie-Anne Moss, who played Trinity in the film series, now stars in HUMANS as a scientist developing the technology to upload a person’s consciousness into a synth body.

Terasem Colloquium in Second Life

The 2016 Terasem Annual Colloquium on the Law of Futuristic Persons will take place in Second Life, in the “Terasem sim,” on Saturday, Dec. 10, 2016, at noon EST. The main themes: “Legal Aspects of Futuristic Persons: Cyber-Humans” and “A Tribute to the ‘Father of Artificial Intelligence,’ Marvin Minsky, PhD.”

Each year on December 10th, International Human Rights Day, Terasem conducts a Colloquium on the Law of Futuristic Persons. The event seeks to provide the public with informed perspectives regarding the legal rights and obligations of “futuristic persons” via VR events with expert presentations and discussions. “Terasem hopes to facilitate development of a body of law covering the rights and obligations of entities that transcend, and yet encompass, conventional conceptions of humanness,” according to Terasem Movement, Inc.

12:10–12:30PM — How Marvin Minsky Inspired Me To Have a Mindclone Living on an O’Neill Space Habitat
Martine Rothblatt, JD, PhD
Co-Founder, Terasem Movement, Inc.
Space Coast, FL
Avatar name: Vitology Destiny

12:30–12:50PM — Formal Interaction

12:50–1:10PM — The Emerging Law of Cyborgs
Woodrow “Woody” Barfield, PhD, JD, LLM
Author: Cyber-Humans: Our Future with Machines
Chapel Hill, NC
Avatar name: WoodyBarfield

1:10–1:30PM — Formal Interaction

1:30–1:50PM — Cyborgs and Family Law Challenges
Rich Lee
Human Enhancement & Augmentation
St. George, UT
Avatar name: RichLee78

1:50–2:10PM — Formal Interaction

2:10–2:30PM — Synthetic Brain Simulations and Mens Rea*
Stephen Thaler, PhD.
President & CEO, Imagination Engines, Inc.
St. Charles, MO
Avatar name: SteveThaler

* Mens Rea refers to criminal intent. Moreover, it is the state of mind indicating culpability which is required by statute as an element of a crime. — Cornell University Legal Information Institute


Disney Research wants to make VR haptics as immersive as visuals

The framework of the VR360HD app. The game engine renders animated audiovisual and haptic media defined by triggers and user behaviors and associates them with a VR media stream. Haptics is played back on a passive user sitting on or wearing a haptic device. (credit: Disney Research)

Disney Research has developed a 360-degree virtual-reality app that enables users to enhance their experience by adding customized haptic (body sensations) effects that can be triggered by user movements, biofeedback, or timelines.

A team led by Ali Israr, senior research engineer at Disney Research, has demonstrated the haptic plugin using a unique chair to provide full body sensations and a library of “feel effects” that enabled users to select and customize sensations such as falling rain, a beating heart, or a cat walking.

Beyond buzz

“Our team is working to make VR haptic sensations just as rich as the 360-degree visual media now available,” said Israr.

“Current VR systems provide ‘buzz-like’ haptic sensations through hand controllers,” he said. “But technology exists for much richer sensations. We’ve created a framework that would enable users to select from a wide range of meaningful sensations that can be adjusted to complement the visual scene and to play them through a variety of haptic feedback devices.”

The haptic playback and authoring plugin developed by the researchers connects a VR game engine to a custom haptic device. It allows users to create, personalize, and associate haptic feedback with events triggered in the VR game engine.

Whole lotta shakin’ goin’ on

The haptic app, called VR360HD, was developed and tested using a consumer headset and Disney Research’s haptic chair. The chair features a grid of six vibrotactile actuators in its back and two subwoofers, or “shakers,” in the seat and back. The grid produces localized moving sensations in the back, while the subwoofers shake two different regions of the body and can create a sensation of motion.

Users were able to select from a library of feel effects, also assembled and tested by Disney Research. These feel effects are identified with common terms such as rain, pulsing, or rumbling, and can be adjusted so that people can distinguish, for instance, between a light sprinkle and a heavy downpour.
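
A minimal sketch of how such an authoring plugin might work is shown below. It is not Disney's plugin, and the device interface and trigger names are hypothetical, but it illustrates the core idea: a library of named feel effects with tunable intensity (a light sprinkle versus a heavy downpour), bound to engine triggers and played back on an actuator grid like the chair's.

```python
# Minimal sketch of binding parameterized "feel effects" to engine triggers.
class FeelEffect:
    """A named haptic pattern with a tunable intensity (e.g., light sprinkle
    vs. heavy downpour)."""
    def __init__(self, name, pattern, intensity=0.5):
        self.name = name
        self.pattern = pattern        # list of (actuator_id, level, duration_s)
        self.intensity = intensity    # 0..1 scaling applied at playback

    def play(self, device):
        for actuator_id, level, duration_s in self.pattern:
            device.vibrate(actuator_id, level * self.intensity, duration_s)

class HapticBindings:
    """Associates engine triggers (timeline marks, user movements, biofeedback
    events) with feel effects and plays them on a haptic device."""
    def __init__(self, device):
        self.device = device
        self.bindings = {}            # trigger name -> FeelEffect

    def bind(self, trigger_name, effect):
        self.bindings[trigger_name] = effect

    def on_trigger(self, trigger_name):   # called from the engine's event system
        effect = self.bindings.get(trigger_name)
        if effect is not None:
            effect.play(self.device)

# Example: a "rain" effect sweeping across a six-actuator back grid, bound to
# a hypothetical timeline trigger named "storm_starts".
rain = FeelEffect("rain", pattern=[(i, 1.0, 0.15) for i in range(6)], intensity=0.3)
# bindings = HapticBindings(device=my_haptic_chair)   # device API is hypothetical
# bindings.bind("storm_starts", rain)
```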

“Combining creativity and innovation, this research continues Disney’s rich legacy of inventing new ways to tell great stories and leveraging technology required to build the future of entertainment,” the researchers note.

The researchers shook up the ACM Symposium on Virtual Reality Software and Technology with their VR360HD player on Nov. 2–4 in Munich.


Abstract of VR360HD: A VR360° Player with Enhanced Haptic Feedback

We present a VR360° video player with haptic feedback playback. The VR360HD application enhances VR viewing experience by triggering customized haptic effects associated with user’s activities, biofeedback, network messages and customizable timeline triggers incorporated in the VR media. The app is developed in the Unity3D game engine and tested using a GearVR headset, therefore allowing users to add animations to VR gameplay and to the VR360° streams. A custom haptic plugin allows users to author and associate animated haptic effects to the triggers, and playback these effects on a custom haptic hardware, the Haptic Chair. We show that the VR360HD app creates rich tactile effects and can be easily adapted to other media types.