Discovering new drugs and materials by ‘touching’ molecules in virtual reality

To figure out how to block bacteria's efforts to become resistant to antibiotics, a researcher grabs a simulated ligand (binding molecule) — a type of penicillin called benzylpenicillin (red) — and interactively guides it to dock within a larger enzyme molecule (blue-orange) called β-lactamase, which bacteria produce to disable penicillin, making them resistant to the β-lactam class of antibiotics. (credit: University of Bristol)

University of Bristol researchers, in collaboration with developers at Bristol-based start-up Interactive Scientific and Oracle Corporation, have designed and tested a new cloud-based virtual reality (VR) system intended to allow researchers to reach out and “touch” molecules as they move — folding them, knotting them, plucking them, and changing their shape to test how the molecules interact. The cloud-based VR system, called Nano Simbox, is the proprietary technology of Interactive Scientific, which collaborated with a joint University of Bristol team of chemistry and computer science researchers on the testing. Used with an HTC Vive virtual-reality headset, it could help in creating new drugs and materials and in improving the teaching of chemistry.

More broadly, the goal is to accelerate progress in nanoscale molecular engineering areas that include conformational mapping, drug development, synthetic biology, and catalyst design.

Real-time collaboration via the cloud

Two users passing a fullerene (C60) molecule back and forth in real time over a cloud-based network. The researchers are each wearing a VR head-mounted display (HMD) and holding two small wireless controllers that function as atomic “tweezers” to manipulate the real-time molecular dynamics simulation of the C60 molecule. Each user’s position is determined using a real-time optical tracking system composed of synchronized infrared light sources, running locally on a GPU-accelerated computer. (credit: University of Bristol)

The multi-user system, developed by a team led by University of Bristol chemists and computer scientists, uses an “interactive molecular dynamics virtual reality” (iMD VR) app that allows users to visualize and sample (with atomic-level precision) the structures and dynamics of complex molecular structures “on the fly” and to interact with other users in the same virtual environment.

Because each VR client has access to the global position data of all other users, any user can see, through his or her headset, a co-located visual representation of every other user in real time. So far, the system has allowed as many as six users to share the same simulation while co-located in the same room.
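The article describes this sharing of pose data only at a high level. Below is a minimal sketch, not the Nano Simbox protocol, of how a cloud-hosted session might keep every client supplied with every other user's headset and controller positions so each headset can render co-located avatars; the class and message names are illustrative assumptions.

```python
# A minimal sketch (not the Nano Simbox protocol) of how a cloud-hosted session
# might share pose data so every client can render every other user.
# The names and message layout here are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]
Quat = Tuple[float, float, float, float]

@dataclass
class UserPose:
    head_position: Vec3
    head_orientation: Quat
    controllers: Dict[str, Vec3] = field(default_factory=dict)  # "left"/"right" tweezers

class SessionState:
    """Cloud-side state: the latest pose reported by each connected client."""
    def __init__(self) -> None:
        self.poses: Dict[str, UserPose] = {}

    def update(self, user_id: str, pose: UserPose) -> None:
        self.poses[user_id] = pose            # overwrite with the newest sample

    def snapshot_for(self, user_id: str) -> Dict[str, UserPose]:
        # Each client receives everyone else's pose, so its headset can draw
        # co-located avatars of the other users in the shared simulation.
        return {uid: p for uid, p in self.poses.items() if uid != user_id}

# Usage: two users in the same session
session = SessionState()
session.update("alice", UserPose((0, 1.7, 0), (0, 0, 0, 1), {"right": (0.3, 1.2, 0.4)}))
session.update("bob",   UserPose((1, 1.6, 2), (0, 0.7, 0, 0.7)))
print(session.snapshot_for("alice"))   # alice sees bob's pose
```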

Testing on challenging molecular tasks

The team designed a series of molecular tasks for testing, comparing traditional mouse, keyboard, and touchscreen interfaces with virtual reality. The tasks included threading a small molecule through a nanotube, changing the screw-sense of a small organic helix, and tying a small string-like protein into a simple knot, representative of dynamic molecular problems such as binding a drug to its target, protein folding, and chemical reactions. The researchers found that for complex 3D tasks, VR offers a significant advantage over current methods: for example, participants were ten times more likely to succeed at difficult tasks such as molecular knot tying.

Anyone can try out the tasks described in the open-access paper by downloading the software and launching their own cloud-hosted session.


David Glowacki | This video, made by University of Bristol PhD student Helen M. Deeks, shows the actions she took, using a wireless set of “atomic tweezers” with the HTC Vive, to interactively dock a single benzylpenicillin drug molecule into the active site of the β-lactamase enzyme.


David Glowacki | The video shows the cloud-mounted virtual reality framework, with several different views overlaid to give a sense of how the interaction works. The video outlines the four different parts of the user studies: (1) manipulation of buckminsterfullerene, enabling users to familiarize themselves with the interactive controls; (2) threading a methane molecule through a nanotube; (3) changing the screw-sense of a helicene molecule; and (4) tying a trefoil knot in 17-Alanine.

Ref: Science Advances (open-access). Source: University of Bristol.

Spotting fake images with AI

This tampered image (left) can be detected by noting visual artifacts (red rectangle, showing the unnaturally high contrast along the baseball player’s edges), compared to authentic regions (the parking lot background); and by noting noise pattern inconsistencies between the tampered regions and the background (as seen in “Noise” image). The “ground-truth” image is the outline of the added (fake) image used in the experiment. (credit: Adobe)

Thanks to user-friendly image editing software like Adobe Photoshop, it’s becoming increasingly difficult and time-consuming to spot some deceptive image manipulations.

Now, funded by DARPA, researchers at Adobe and the University of Maryland, College Park have turned to AI to detect the more subtle methods now used in doctoring images.

Forensic AI

What used to take an image-forensic expert several hours to do can now be done in seconds with AI, says Vlad Morariu, PhD, a senior research scientist at Adobe. “Using tens of thousands of examples of known, manipulated images, we successfully trained a deep learning neural network* to recognize image manipulation in each image,” he explains.

“We focused on three common tampering techniques — splicing, where parts of two different images are combined; copy-move, where objects in a photograph are moved or cloned from one place to another; and removal, where an object is removed from a photograph, and filled-in,” he notes.

The neural network looks for two things: changes to the red, green and blue color values of pixels; and inconsistencies in the random variations of color and brightness generated by a camera’s sensor or by later software manipulations, such as Gaussian smoothing.

Ref: 2018 Computer Vision and Pattern Recognition Proceedings (open access). Source: Adobe Blog.

* The researchers used a “two-stream Faster R-CNN” (a type of convolutional neural network) trained end-to-end to detect the tampered regions in a manipulated image. The RGB (red-green-blue color) stream finds tampering artifacts such as strong contrast differences and unnatural boundaries; the noise stream finds inconsistencies in noise patterns between authentic and tampered regions (in the example above, the baseball player’s region is noticeably lighter, along with subtler differences the algorithm can detect — even which tampering technique was used). The features from the two streams are then fused to identify where the RGB and noise evidence co-occurs spatially.
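As a rough illustration of the noise-consistency cue described above (not Adobe's trained two-stream network), the sketch below extracts a crude high-pass noise residual and flags a region whose noise level differs sharply from the rest of the image; the filter and threshold are arbitrary assumptions.

```python
# Minimal sketch of the noise-consistency cue -- not Adobe's two-stream network,
# just an illustration of comparing sensor-noise statistics between a suspect
# region and the rest of the image. The filter and threshold are assumptions.
import numpy as np

def noise_residual(gray: np.ndarray) -> np.ndarray:
    """Crude high-pass filter: what's left after subtracting a 3x3 local average."""
    k = 3
    padded = np.pad(gray, k // 2, mode="edge")
    smoothed = np.zeros_like(gray, dtype=float)
    for dy in range(k):
        for dx in range(k):
            smoothed += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    return gray - smoothed / (k * k)

def looks_spliced(gray: np.ndarray, region: tuple, ratio_threshold: float = 2.0) -> bool:
    """Flag a region whose noise level differs sharply from the background."""
    y0, y1, x0, x1 = region
    residual = noise_residual(gray.astype(float))
    inside = residual[y0:y1, x0:x1].std()
    mask = np.ones(gray.shape, dtype=bool)
    mask[y0:y1, x0:x1] = False
    outside = residual[mask].std()
    ratio = max(inside, outside) / (min(inside, outside) + 1e-9)
    return ratio > ratio_threshold

# Synthetic example: paste a much smoother patch into a noisy image.
rng = np.random.default_rng(0)
image = rng.normal(128, 8, size=(64, 64))
image[20:40, 20:40] = rng.normal(128, 1, size=(20, 20))   # "spliced" patch
print(looks_spliced(image, (20, 40, 20, 40)))              # True
```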

Are virtual reality and augmented reality the future of education?

People learn better in immersive virtual environments than on more traditional platforms, such as a two-dimensional desktop computer. That sounds intuitively right, but researchers at the College of Computer, Mathematical, and Natural Sciences at the University of Maryland (UMD) have now supported the idea with evidence.

The researchers conducted an experiment using the “memory palace” method, in which people mentally place items they want to remember in an imaginary physical location, such as a building or town. This “spatial mnemonic encoding” method has been used since classical times, taking advantage of the human brain’s ability to spatially organize thoughts and memories.*

Giulio Camillo’s Theater of Memory (1511 AD)

The researchers compared subjects’ recall accuracy when using a VR head-mounted display vs. using a desktop computer. The results showed an 8.8 percent overall improvement in recall accuracy with the VR headsets.**

The two virtual-memory-palace scenes used in the experiment (credit: Eric Krokos et al./Virtual Reality)

Replacing rote memorization

“This data is exciting in that it suggests that immersive environments could offer new pathways for improved outcomes in education and high-proficiency training,” says Amitabh Varshney, PhD, UMD professor of computer science. Varshney leads several major research efforts on the UMD campus involving virtual and augmented reality (AR), including close collaboration with health care professionals interested in developing AR-based diagnostic tools for emergency medicine and in VR training for surgical residents.

“As Marshall McLuhan pointed out years ago, modern 3-D gaming and particularly virtual-reality explorations, like movies, are ‘hotter’ than TV, while still requiring participation,” said Western Michigan University educational psychologist Warren E. Lacefield, Ph.D. “That greatly enhances the ‘message,’ as this formal memory study illustrates with statistical significance.”

Inside the Augmentarium***, a revolutionary virtual and augmented reality facility at UMD (credit: UMD)

Learning by moving around in VR

Other research suggests the possibility that “a spatial virtual memory palace — experienced in an immersive virtual environment — could enhance learning and recall by leveraging a person’s overall sense of body position, movement and acceleration,” says UMD researcher and co-author Catherine Plaisant. A good example is the addictive AR game Pokémon GO, which offers the kind of fun, fast-paced learning one rarely experiences in conventional education.

While walking (or running) around in VR sounds like fun, it’s currently a challenge (ouch!). But computer scientists from Stony Brook University, NVIDIA, and Adobe have developed a clever way to allow VR users to walk around freely in a virtual world while avoiding obstacles. The trick: the system makes small, imperceptible adjustments to the virtual scene during your blinks and saccades (quick eye movements) — while you’re (unknowingly) effectively blind.

Eye fixations (in green) and saccades (in red). A blink fully suppresses visual perception. (credit: Eike Langbehn)

In other words, it messes with your mind. For example, a short step in your living room could make you believe you’re taking a large stride in a giant desert or planet — without knocking over the TV.

That system will be presented at SIGGRAPH August 12–16 in Vancouver. Watch out for the VR walkers.
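For a sense of how such a system might work, here is a minimal sketch of the redirected-walking idea: apply a small, capped camera rotation only while the eye tracker reports a blink or a fast saccade. The eye-state fields, threshold, and gain values are assumptions for illustration, not the authors' parameters.

```python
# Minimal sketch of redirected walking during visual suppression -- an
# illustration of the idea described above, not the Stony Brook/NVIDIA/Adobe
# implementation. The thresholds and the eye-state fields are assumptions.
from dataclasses import dataclass

@dataclass
class EyeState:
    blinking: bool
    saccade_speed_deg_per_s: float   # angular speed reported by an eye tracker

SACCADE_THRESHOLD = 180.0    # deg/s above which perception is suppressed (assumed)
MAX_HIDDEN_ROTATION = 2.0    # max extra camera yaw (degrees) injected at one time

def hidden_camera_rotation(eye: EyeState, desired_offset_deg: float) -> float:
    """Return how much of the desired scene rotation can be applied right now.

    While the user can see (no blink, no fast saccade), apply nothing; during
    suppression, apply a small, capped correction they won't notice."""
    suppressed = eye.blinking or eye.saccade_speed_deg_per_s > SACCADE_THRESHOLD
    if not suppressed:
        return 0.0
    return max(-MAX_HIDDEN_ROTATION, min(MAX_HIDDEN_ROTATION, desired_offset_deg))

# Example: we still need to steer the user 7 degrees away from the couch.
remaining = 7.0
for eye in [EyeState(False, 20), EyeState(False, 400), EyeState(True, 0), EyeState(False, 15)]:
    applied = hidden_camera_rotation(eye, remaining)
    remaining -= applied
    print(f"applied {applied:+.1f} deg, {remaining:.1f} deg still to inject")
```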

Ref. Virtual Reality (open access), ACM Transactions on Graphics (forthcoming). Source: University of Maryland, SIGGRAPH, Stony Brook University, NVIDIA, Adobe.

* A remarkable example is a Russian man who was thought to have a literally limitless memory.

** For the study, the UMD researchers recruited 40 volunteers—mostly UMD students unfamiliar with virtual reality. The researchers split the participants into two groups: one viewed information first via a VR head-mounted display and then on a desktop; the other did the opposite. Both groups received printouts of well-known faces—including Abraham Lincoln, the Dalai Lama, Arnold Schwarzenegger and Marilyn Monroe—and familiarized themselves with the images. Next, the researchers showed the participants the faces using the memory palace format with two imaginary locations: an interior room of an ornate palace and an external view of a medieval town. Both of the study groups navigated each memory palace for five minutes. Desktop participants used a mouse to change their viewpoint, while VR users turned their heads from side to side and looked up and down.

Next, Krokos asked the users to memorize the location of each of the faces shown. Half the faces were positioned in different locations within the interior setting—Oprah Winfrey appeared at the top of a grand staircase; Stephen Hawking was a few steps down, followed by Shrek. On the ground floor, Napoleon Bonaparte’s face sat above a majestic wooden table, while the Rev. Martin Luther King Jr. was positioned in the center of the room.

Similarly, for the medieval town setting, users viewed images that included Hillary Clinton’s face on the left side of a building, with Mickey Mouse and Batman placed at varying heights on nearby structures.

Then, the scene went blank, and after a two-minute break, each memory palace reappeared with numbered boxes where the faces had been. The research participants were then asked to recall which face had been in each location where a number was now displayed. The key, say the researchers, was for participants to identify each face by its physical location and its relation to surrounding structures and faces—and also the location of the image relative to the user’s own body.

The results showed an 8.8 percent improvement overall in recall accuracy using the VR headsets, a statistically significant number according to the research team. Many of the participants said the immersive “presence” while using VR allowed them to focus better. This was reflected in the research results: 40 percent of the participants scored at least 10 percent higher in recall ability using VR over the desktop display.

*** The Augmentarium is a visualization testbed lab on the UMD campus that launched in 2015 with funding from the National Science Foundation. The researchers describe it as “a revolutionary virtual and augmented reality (VR and AR) facility that brings together a unique assembly of projection displays, augmented reality visors, GPU clusters, human vision and human-computer interaction technologies, to study and facilitate visual augmentation of human intelligence and amplify situational awareness.”

round-up | New high-resolution virtual-reality and augmented-reality headsets

Oculus Go (credit: Oculus)

Oculus announced Monday (April 30) that its much-anticipated Oculus Go virtual-reality headset is now available, priced at $199 (32GB storage version).

Oculus Go is a standalone headset (no need for a tethered PC, as with Oculus Rift, or for inserting a phone, as with Gear VR and other consumer VR devices). It features a high-resolution 2560 x 1440 pixels (538 ppi) display, hand controller, and built-in surround sound and microphone. (Ars Technica has an excellent review, including technical details.)

Anshar Online (credit: OZWE Games/Ars Technica)

More than 1,000 games and experiences for the Oculus Go are already available. Notable examples: the futuristic, Ready Player One-style multiplayer space shooter Anshar Online, and Oculus Rooms, a social lounge where friends (as avatars) can watch TV or movies together on a 180-inch virtual screen and play Hasbro board games — redefining the solitary VR experience.

Oculus Rooms (credit: Oculus)

Next-gen Oculus VR headsets

Today (May 2) at F8, Facebook’s developer conference, the company revealed a prototype Oculus VR headset called “Half Dome.” Its field of view goes beyond Oculus Go’s 100 degrees to 140 degrees, letting you see more of the visual world in your periphery. Its variable-focus displays move up and down depending on what you’re focusing on, making objects that are close to the wearer’s eyes appear much sharper and crisper, Engadget reports.

Prototype Half Dome VR headset (credit: Facebook)

Apple’s super-high-resolution wireless VR-AR headset

Meanwhile, Apple is working on a powerful wireless headset for AR and VR, planned for a 2020 release, CNET reported Friday, based on sources (Apple has not commented).* Code-named T288, the headset would feature two super-high-resolution 8K** displays, one for each eye (four times the resolution of today’s 4K TVs), making the VR and AR images look highly lifelike — way beyond any devices currently available on the market.

To handle that extremely high resolution (and associated data rate) while connecting to an external processor box, T288 will use high-speed, short-range 60GHz wireless technology called WiGig, capable of data transfer rates up to 7 Gbit/s. The headset would also use a smaller, faster, more-power-efficient 5-nanometer-node processor, to be designed by Apple.
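A rough back-of-the-envelope calculation shows why such a fast link (plus aggressive compression or foveated rendering) would be needed; the frame rate and bit depth below are assumptions, since Apple has published no specifications.

```python
# Rough back-of-the-envelope check on the data rate for dual 8K displays.
# The 90 fps refresh rate and 24 bits per pixel are assumptions for illustration;
# Apple has not published specifications.
width, height = 7680, 4320        # 8K UHD per eye
eyes = 2
bits_per_pixel = 24               # assumed RGB, 8 bits per channel
fps = 90                          # assumed VR refresh rate

raw_bits_per_second = width * height * eyes * bits_per_pixel * fps
print(f"Uncompressed: {raw_bits_per_second / 1e9:.0f} Gbit/s")   # ~143 Gbit/s

wigig_capacity = 7e9              # WiGig peak, ~7 Gbit/s
print(f"Compression/foveation factor needed: ~{raw_bits_per_second / wigig_capacity:.0f}x")
```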

A patent application for VR/AR experiences in vehicles

(credit: Apple/USPTO)

In related news, Apple has applied for a patent for a system using virtual reality (VR) and augmented reality (AR) to help alleviate both motion sickness and boredom in a moving vehicle.

The patent application covers three cases: experiences and environments using VR, AR, and mixed reality (VR + AR + the real world). Apple claims that the design will reduce nausea from vehicle movement (or from perceived movement in a virtual-reality experience).

It also claims to provide (as entertainment) motion or acceleration effects via a vehicle seat, vehicle movement, or vehicle on-board systems (such as audio effects and using a heating/cooling fan to blow “wind” or moving/shaking the seat).

Jurassic World: Blue (credit: Universal, Felix & Paul)

For example, imagine riding in a car on a road that morphs into a prehistoric theme park (such as the Jurassic World: Blue VR experience from Universal Pictures and Felix & Paul Studios, which was announced Monday to coincide with the release of Oculus Go).

* At its June 2017 Worldwide Developers Conference (WWDC), Apple unveiled ARKit to enable developers to create augmented-reality apps for Apple products. Apple will hold its 29th WWDC on June 4–8 in San Jose.

** “8K resolution (8K UHD), is the current highest ultra high definition television (UHD TV) resolution in digital television and digital cinematography,” according to Wikipedia. “8K refers to the horizontal resolution, 7,680 pixels, forming the total image dimensions (7680×4320), otherwise known as 4320p. … It has four times as many pixels [as 4K], or 16 times as many pixels as Full HD. … 8K is speculated to become a mainstream consumer display resolution around 2023.” 

round-up | Three radical new user interfaces

Holodeck-style holograms could revolutionize videoconferencing

A “truly holographic” videoconferencing system has been developed by researchers at Queen’s University in Kingston, Ontario. With TeleHuman 2, objects appear as stereoscopic images, as if inside a pod (not as a two-dimensional video projected on a flat piece of glass). Multiple users can walk around and view the objects from all sides simultaneously — as in Star Trek’s Holodeck.

Teleporting for distance meetings. TeleHuman 2 “teleports” people live — allowing for meetings at a distance. No headset or 3D glasses required.

The researchers presented the system in an open-access paper at CHI 2018, the ACM CHI Conference on Human Factors in Computing Systems in Montreal on April 25.

(Left) Remote capture room with stereo 2K cameras, multiple surround microphones, and displays. (Right) TeleHuman 2 display and projector (credit: Human Media Lab)


Interactive smart wall acts as giant touch screen, senses electromagnetic activity in room

Researchers at Carnegie Mellon University and Disney Research have devised a system called Wall++ for creating interactive “smart walls” that sense human touch, gestures, and signals from appliances.

Using masking tape and nickel-based conductive paint, a user would create a pattern of capacitive-sensing electrodes on the wall of a room (or a building) and then paint over it. The electrodes would then be connected to sensing hardware.

Wall++ (credit: Carnegie Mellon University)

Acting as a sort of huge tablet, the wall could then be used for touch-tracking or motion-sensing applications such as dimming or turning lights on and off, controlling speaker volume, serving as a smart thermostat, playing full-body video games, or creating a huge digital whiteboard.

A passive electromagnetic sensing mode could also allow for detecting devices that are on or off (by noise signature). And a small, signal-emitting wristband could enable user localization and identification for collaborative gaming or teaching, for example.

The researchers also presented an open-access paper at CHI 2018.
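As a rough illustration of the capacitive-sensing idea (not the Wall++ signal-processing pipeline), the sketch below estimates a touch position from a grid of electrode readings by taking a thresholded centroid; the grid size, baseline values, and threshold are assumptions.

```python
# Minimal sketch of locating a touch on a grid of capacitive electrodes --
# an illustration of the sensing idea, not the Wall++ signal pipeline.
# The grid size, baseline values, and threshold are assumptions.
import numpy as np

def touch_position(readings: np.ndarray, baseline: np.ndarray, threshold: float = 5.0):
    """Return the (row, col) centroid of the touch, or None if nothing is touching.

    `readings` and `baseline` are capacitance samples for each electrode pad;
    a finger near the wall raises the reading above the no-touch baseline."""
    delta = readings - baseline
    delta[delta < threshold] = 0.0
    if delta.sum() == 0:
        return None
    rows, cols = np.indices(delta.shape)
    return (float((rows * delta).sum() / delta.sum()),
            float((cols * delta).sum() / delta.sum()))

# Example: a 4x6 electrode grid with a touch centered near pad (1, 4)
baseline = np.full((4, 6), 100.0)
readings = baseline.copy()
readings[1, 4] += 30
readings[1, 3] += 10
readings[2, 4] += 10
print(touch_position(readings, baseline))   # roughly (1.2, 3.8)
```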


A smart-watch screen on your skin

LumiWatch, another interactive interface out of Carnegie Mellon, projects a smart-watch touch screen onto your skin. It solves the tiny-interface bottleneck with smart watches — providing more than five times the interactive surface area for common touchscreen operations, such as tapping and swiping. It was also presented in an open-access paper at CHI 2018.

Augmented-reality system lets doctors see medical images projected on patients’ skin

Projected medical image (credit: University of Alberta)

New technology is bringing the power of augmented reality into clinical practice. The system, called ProjectDR, shows clinicians 3D medical images such as CT scans and MRI data, projected directly on a patient’s skin.

The technology uses motion capture, similar to how it’s done in movies: infrared cameras track markers on the patient’s body (invisible to human vision). That lets the system follow the body’s position and orientation, so the projected images move as the patient does.
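A minimal sketch of that idea, assuming standard rigid-body math and made-up calibration values (this is not the ProjectDR code): compose the tracked marker pose with a fixed image-to-marker calibration each frame, so the projector can redraw the scan wherever the patient has moved.

```python
# Minimal sketch of keeping a projected image registered to a tracked body --
# an illustration of the idea behind ProjectDR, not the actual system.
# The calibration transform and marker poses are made-up example values.
import numpy as np

def pose_matrix(rotation_deg: float, translation_xyz) -> np.ndarray:
    """Homogeneous transform: rotation about the vertical (z) axis plus a translation."""
    t = np.radians(rotation_deg)
    m = np.eye(4)
    m[:3, :3] = [[np.cos(t), -np.sin(t), 0],
                 [np.sin(t),  np.cos(t), 0],
                 [0,          0,         1]]
    m[:3, 3] = translation_xyz
    return m

# Offline calibration: where the CT/MRI volume sits relative to the patient markers.
image_to_markers = pose_matrix(0.0, (0.05, -0.10, 0.0))

def image_pose_in_room(markers_in_room: np.ndarray) -> np.ndarray:
    """Each frame, compose the tracked marker pose with the calibration so the
    projector can redraw the scan in the right place as the patient moves."""
    return markers_in_room @ image_to_markers

# Example: the patient rotates 10 degrees and shifts 2 cm along x.
markers_now = pose_matrix(10.0, (0.02, 0.0, 0.0))
print(image_pose_in_room(markers_now).round(3))
```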

Applications include teaching, physiotherapy, laparoscopic surgery, and surgical planning.

ProjectDR can also present segmented images — for example, only the lungs or only the blood vessels — depending on what a clinician is interested in seeing.

The researchers plan to test ProjectDR in an operating room to simulate surgery, according to Pierre Boulanger, PhD, a professor in the Department of Computing Science at the University of Alberta, Canada. “We are also doing pilot studies to test the usability of the system for teaching chiropractic and physical therapy procedures,” he said.

They next plan to conduct real surgical pilot studies.


UAlbertaScience | ProjectDR demonstration video


The Princess Leia project: ‘volumetric’ 3D images that float in ‘thin air’

Inspired by the iconic Star Wars scene with Princess Leia in distress, Brigham Young University engineers and physicists have created the “Princess Leia project” — a new technology for creating 3D “volumetric images” that float in the air and that you can walk all around and see from almost any angle.*

“Our group has a mission to take the 3D displays of science fiction and make them real,” said electrical and computer engineering professor and holography expert Daniel Smalley, lead author of a Jan. 25 Nature paper on the discovery.

The image of Princess Leia portrayed in the movie is actually not a hologram, he explains. A holographic display scatters light only at a 2D surface, so you have to be looking within a limited range of angles to see the image, which is also normally static. A moving volumetric display, by contrast, can be seen from any angle, and you can even reach your hand into it. Examples include the 3D displays Tony Stark interacts with in Iron Man and the massive image-projecting table in Avatar.*

How to create a 3D volumetric image from a single moving particle

BYU student Erich Nygaard, depicted as a moving 3D image, mimics the Princess Leia projection in the iconic Star Wars scene (“Help me, Obi-Wan Kenobi, you’re my only hope”). (credit: Dan Smalley Lab)

The team’s free-space volumetric display technology, called the “Optical Trap Display,” is based on photophoretic** optical trapping (controlled by a laser beam) of a rapidly moving particle (in this case, a particle of cellulose, a plant fiber). The technique takes advantage of human persistence of vision: at more than 10 images per second, we don’t see a moving point of light, just the pattern it traces in space — the same phenomenon that makes movies and video work.

As the laser beam moves the trapped particle around, three more laser beams illuminate the particle with RGB (red-green-blue) light. The resulting fast-moving dot traces out a color image in three dimensions (you can see the vertical scan lines in one vertical slice of the Princess Leia image above), producing a full-color, volumetric (3D) still image in air with 10-micrometer resolution, which allows for fine detail. The technology also features low apparent speckle (the annoying specks seen in holograms).***
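To see roughly what persistence of vision demands of the particle, here is a small worked calculation; the scan-path length and refresh rate are illustrative assumptions, not figures from the paper.

```python
# Rough feasibility arithmetic for tracing a volumetric image with one particle.
# The path length and refresh rate below are illustrative assumptions, not
# figures from the Nature paper.
image_path_length_m = 0.10   # assume the full scan path through a small image is ~10 cm
refresh_hz = 10              # persistence of vision needs the whole path redrawn >10x/s

required_particle_speed = image_path_length_m * refresh_hz
print(f"Particle must travel ~{required_particle_speed:.1f} m/s "
      f"to redraw a {image_path_length_m * 100:.0f} cm path {refresh_hz} times per second")
```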

Applications in the real (and virtual) world

So far, Smalley and his student researchers have 3D light-printed a butterfly, a prism, the stretch-Y BYU logo, rings that wrap around an arm, and an individual in a lab coat crouched in a position similar to Princess Leia as she begins her projected message. The images in this proof-of-concept prototype are still in the range of millimeters. But in the Nature paper, the researchers say they anticipate that the device “can readily be scaled using parallelism and [they] consider this platform to be a viable method for creating 3D images that share the same space as the user, as physical objects would.”

What about augmented and virtual-reality uses? “While I think this technology is not really AR or VR but just ‘R,’ there are a lot of interesting ways volumetric images can enhance and augment the world around us,” Smalley told KurzweilAI in an email. “A very-near-term application could be the use of levitated particles as ‘streamers’ to show the expected flow of air over actual physical objects. That is, instead of looking at a computer screen to see fluid flow over a turbine blade, you could set a volumetric projector next to the actual turbine blade and see particles form ribbons to show the expected fluid flow juxtaposed on the real object.

“In a scaled-up version of the display, a projector could place a superimposed image of a part on an engine, showing a technician the exact location and orientation of that part. An even more refined version could create a magic portal in your home where you could see the size of shoes you just ordered and set your foot inside to (visually) check the fit. Other applications would include sparse telepresence, satellite tracking, command and control surveillance, surgical planning, tissue tagging, catheter guidance, and other medical visualization applications.”

How soon? “I won’t make a prediction on exact timing but if we make as much progress in the next four years as we did in the last four years (a big ‘if’), then we would have a display of usable size by the end of that period. We have had a number of interested parties from a variety of fields. We are open to an exclusive agreement, given the right partner.”

* Smalley says he has long dreamed of building the kind of 3D holograms that pepper science-fiction films. But watching inventor Tony Stark thrust his hands through ghostly 3D body armor in the 2008 film Iron Man, Smalley realized that he could never achieve that using holography, the current standard for high-tech 3D display, because Stark’s hand would block the hologram’s light source. “That irritated me,” he says. He immediately tried to work out how to get around that.

** “Photophoresis denotes the phenomenon that small particles suspended in gas (aerosols) or liquids (hydrocolloids) start to migrate when illuminated by a sufficiently intense beam of light.” — Wikipedia

*** Previous researchers have created volumetric imagery, but the Smalley team says it’s the first to use optical trapping and color effectively. “Among volumetric systems, we are aware of only three such displays that have been successfully demonstrated in free space: induced plasma displays, modified air displays, and acoustic levitation displays. Plasma displays have yet to demonstrate RGB color or occlusion in free space. Modified air displays and acoustic levitation displays rely on mechanisms that are too coarse or too inertial to compete directly with holography at present.” — D.E. Smalley et al./Nature


Nature video | Pictures in the air: 3D printing with light


Abstract of A photophoretic-trap volumetric display

Free-space volumetric displays, or displays that create luminous image points in space, are the technology that most closely resembles the three-dimensional displays of popular fiction. Such displays are capable of producing images in ‘thin air’ that are visible from almost any direction and are not subject to clipping. Clipping restricts the utility of all three-dimensional displays that modulate light at a two-dimensional surface with an edge boundary; these include holographic displays, nanophotonic arrays, plasmonic displays, lenticular or lenslet displays and all technologies in which the light scattering surface and the image point are physically separate. Here we present a free-space volumetric display based on photophoretic optical trapping that produces full-colour graphics in free space with ten-micrometre image points using persistence of vision. This display works by first isolating a cellulose particle in a photophoretic trap created by spherical and astigmatic aberrations. The trap and particle are then scanned through a display volume while being illuminated with red, green and blue light. The result is a three-dimensional image in free space with a large colour gamut, fine detail and low apparent speckle. This platform, named the Optical Trap Display, is capable of producing image geometries that are currently unobtainable with holographic and light-field technologies, such as long-throw projections, tall sandtables and ‘wrap-around’ displays.

Take a fantastic 3D voyage through the brain with immersive VR system


Wyss Center for Bio and Neuroengineering/Lüscher lab (UNIGE) | Brain circuits related to natural reward

What happens when you combine access to unprecedented amounts of anatomical data on brain structures with the ability to display billions of voxels (3D pixels) in real time, using high-speed graphics cards?

Answer: An awesome new immersive virtual reality (VR) experience for visualizing and interacting with up to 10 terabytes (trillions of bytes) of anatomical brain data.

Developed by researchers from the Wyss Center for Bio and Neuroengineering and the University of Geneva, the system is intended to allow neuroscientists to highlight, select, slice, and zoom, all the way down to individual neurons at the micrometer cellular level.

This 2D image of a mouse brain injected with a fluorescent retrograde virus in the brain stem — captured with a lightsheet microscope — represents the kind of rich, detailed visual data that can be explored with the new VR system. (credit: Courtine Lab/EPFL/Leonie Asboth, Elodie Rey)

The new VR system grew out of a problem with using the Wyss Center’s lightsheet microscope (one of only three in the world): how can you navigate and make sense of the immense volume of neuroanatomical data it produces?

“The system provides a practical solution to experience, analyze and quickly understand these exquisite, high-resolution images,” said Stéphane Pages, PhD, Staff Scientist at the Wyss Center and Senior Research Associate at the University of Geneva, senior author of a dynamic poster presented November 15 at the annual meeting of the Society for Neuroscience 2017.

For example, using “mini-brains,” researchers will be able to see how new microelectrode probes behave in brain tissue, and how tissue reacts to them.
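The poster abstract at the end of this article notes that the raw volumes are rendered directly with voxel-based ray marching rather than being segmented and meshed first. As a toy illustration of that rendering idea (a CPU sketch, not the Wyss Center's GPU implementation), the code below marches one ray per pixel through a random stand-in volume and keeps the brightest voxel along each ray.

```python
# Toy CPU sketch of voxel ray marching (maximum-intensity projection) -- the
# rendering idea named in the poster abstract below, not the Wyss Center's GPU code.
# The volume here is a random stand-in for lightsheet data.
import numpy as np

def ray_march(volume: np.ndarray, origin, direction, step: float = 0.5, n_steps: int = 400):
    """Walk a ray through the volume and keep the brightest voxel encountered."""
    pos = np.array(origin, dtype=float)
    d = np.array(direction, dtype=float)
    d /= np.linalg.norm(d)
    brightest = 0.0
    for _ in range(n_steps):
        i, j, k = pos.astype(int)
        if all(0 <= c < s for c, s in zip((i, j, k), volume.shape)):
            brightest = max(brightest, float(volume[i, j, k]))
        pos += step * d
    return brightest

# Render a tiny 32x32 image of a 64^3 random volume, one ray per pixel.
rng = np.random.default_rng(1)
volume = rng.random((64, 64, 64)).astype(np.float32)
image = np.array([[ray_march(volume, (x * 2, y * 2, -10.0), (0, 0, 1))
                   for x in range(32)] for y in range(32)])
print(image.shape, image.max())
```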

Journey to the center of the cell: VR movies

A team of researchers in Australia has taken the next step: allowing scientists, students, and members of the public to explore these kinds of images — even interact with cells and manipulate models of molecules.

As described in a paper published in the journal Traffic, the researchers built a 3D virtual model of a cell, combining lightsheet microscope images (for super-resolution, real-time, single-molecule detection of fluorescent proteins in cells and tissues) with scanning electron microscope imaging data (for a more complete view of the cellular architecture).

To demonstrate this, they created VR movies (shown below) of the surface of a breast cancer cell. The movies can be played on a Samsung Gear VR or Google Cardboard device, or using the built-in YouTube 360 player in Chrome, Firefox, MS Edge, or Opera browsers. The movies will also play on a conventional smartphone (but without 3D immersion).

UNSW 3D Visualisation Aesthetics Lab | The cell “paddock” view puts the user on the surface of the cell and demonstrates different mechanisms by which nanoparticles can be internalized into cells.

UNSW 3D Visualisation Aesthetics Lab | The cell “cathedral” view takes the user inside the cell and allows them to explore key cellular compartments, including the mitochondria (red), lysosomes (green), early endosomes (light blue), and the nucleus (purple).


Abstract of Analyzing volumetric anatomical data with immersive virtual reality tools

Recent advances in high-resolution 3D imaging techniques allow researchers to access unprecedented amounts of anatomical data of brain structures. In parallel, the computational power of commodity graphics cards has made rendering billions of voxels in real-time possible. Combining these technologies in an immersive virtual reality system creates a novel tool wherein observers can physically interact with the data. We present here the possibilities and demonstrate the value of this approach for reconstructing neuroanatomical data. We use a custom built digitally scanned light-sheet microscope (adapted from Tomer et al., Cell, 2015), to image rodent clarified whole brains and spinal cords in which various subpopulations of neurons are fluorescently labeled. Improvements of existing microscope designs allow us to achieve an in-plane submicronic resolution in tissue that is immersed in a variety of media (e. g. organic solvents, Histodenz). In addition, our setup allows fast switching between different objectives and thus changes image resolution within seconds. Here we show how the large amount of data generated by this approach can be rapidly reconstructed in a virtual reality environment for further analyses. Direct rendering of raw 3D volumetric data is achieved by voxel-based algorithms (e.g. ray marching), thus avoiding the classical step of data segmentation and meshing along with its inevitable artifacts. Visualization in a virtual reality headset together with interactive hand-held pointers allows the user with to interact rapidly and flexibly with the data (highlighting, selecting, slicing, zooming etc.). This natural interface can be combined with semi-automatic data analysis tools to accelerate and simplify the identification of relevant anatomical structures that are otherwise difficult to recognize using screen-based visualization. Practical examples of this approach are presented from several research projects using the lightsheet microscope, as well as other imaging techniques (e.g., EM and 2-photon).


Abstract of Journey to the centre of the cell: Virtual reality immersion into scientific data

Visualization of scientific data is crucial not only for scientific discovery but also to communicate science and medicine to both experts and a general audience. Until recently, we have been limited to visualizing the three-dimensional (3D) world of biology in 2 dimensions. Renderings of 3D cells are still traditionally displayed using two-dimensional (2D) media, such as on a computer screen or paper. However, the advent of consumer grade virtual reality (VR) headsets such as Oculus Rift and HTC Vive means it is now possible to visualize and interact with scientific data in a 3D virtual world. In addition, new microscopic methods provide an unprecedented opportunity to obtain new 3D data sets. In this perspective article, we highlight how we have used cutting edge imaging techniques to build a 3D virtual model of a cell from serial block-face scanning electron microscope (SBEM) imaging data. This model allows scientists, students and members of the public to explore and interact with a “real” cell. Early testing of this immersive environment indicates a significant improvement in students’ understanding of cellular processes and points to a new future of learning and public engagement. In addition, we speculate that VR can become a new tool for researchers studying cellular architecture and processes by populating VR models with molecular data.

A sneak peek at radical future user interfaces for phones, computers, and VR

Grabity: a wearable haptic interface for simulating weight and grasping in VR (credit: UIST 2017)

Drawing in air, touchless control of virtual objects, and a modular mobile phone with snap-in sections (for lending to friends, family members, or even strangers) are among the innovative user-interface concepts to be introduced at the 30th ACM User Interface Software and Technology Symposium (UIST 2017) on October 22–25 in Quebec City, Canada.

Here are three concepts to be presented, developed by researchers at Dartmouth College’s human-computer interface lab.

RetroShape: tactile watch feedback

Dartmouth’s RetroShape concept would add a shape-deforming tactile feedback system to the back of a future watch, allowing you to both see and feel virtual objects, such as a bouncing ball or exploding asteroid. Each pixel on RetroShape’s screen has a corresponding “taxel” (tactile pixel) on the back of the watch, using 16 independently moving pins.
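As a rough sketch of the pixel-to-taxel mapping described above (assuming the 16 pins form a 4 x 4 grid, with a made-up mechanical travel range), the code below averages the on-screen height map over each pin's footprint.

```python
# Minimal sketch of driving RetroShape-style "taxels": downsample a per-pixel
# height map of the on-screen object to the pin grid (16 pins, per the
# description above; the 4x4 layout and pin travel are assumed values).
import numpy as np

PIN_GRID = (4, 4)
MAX_PIN_TRAVEL_MM = 2.0          # assumed mechanical range of each pin

def pin_heights(height_map: np.ndarray) -> np.ndarray:
    """Average the pixels that fall over each pin and scale to the pin travel."""
    rows = np.array_split(height_map, PIN_GRID[0], axis=0)
    blocks = [np.array_split(r, PIN_GRID[1], axis=1) for r in rows]
    averaged = np.array([[b.mean() for b in row] for row in blocks])
    return averaged / height_map.max() * MAX_PIN_TRAVEL_MM

# Example: a "ball" bulging in the middle of a 64x64 screen region.
y, x = np.mgrid[0:64, 0:64]
ball = np.clip(1 - ((x - 32) ** 2 + (y - 32) ** 2) / 900.0, 0, None)
print(pin_heights(ball).round(2))    # center pins extend furthest
```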


UIST 2017 | RetroShape: Leveraging Rear-Surface Shape Displays for 2.5D Interaction on Smartwatches

Frictio smart ring

Current smart-ring designs let users control other devices. Frictio instead uses controlled rotation of the ring to provide silent haptic alerts and other feedback.


UIST 2017 — Frictio: Passive Kinesthetic Force Feedback for Smart Ring Output

Pyro: fingertip control

Pyro is a covert gesture-recognition concept based on moving the thumb tip against the index finger — a natural, fast, and unobtrusive way to interact with a computer or other devices. It uses an energy-efficient thermal infrared sensor to detect micro control gestures, based on patterns of heat radiating from the fingers.


UIST 2017 — Pyro: Thumb-Tip Gesture Recognition Using Pyroelectric Infrared Sensing

Highlights from other presentations at UIST 2017:


UIST 2017 Technical Papers Preview

Teleoperating robots with virtual reality: getting inside a robot’s head

A new VR system from MIT’s Computer Science and Artificial Intelligence Laboratory could make it easy for factory workers to telecommute. (credit: Jason Dorfman, MIT CSAIL)

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a virtual-reality (VR) system that lets you teleoperate a robot using an Oculus Rift or HTC Vive VR headset.

CSAIL’s “Homunculus Model” system (the classic notion of a small human sitting inside the brain and controlling the actions of the body) embeds you in a VR control room with multiple sensor displays, making it feel like you’re inside the robot’s head. By using gestures, you can control the robot’s matching movements to perform various tasks.

The system can be connected either via a wired local network or via a wireless network connection over the Internet. (The team demonstrated that the system could pilot a robot from hundreds of miles away, testing it on a hotel’s wireless network in Washington, DC to control Baxter at MIT.)

According to CSAIL postdoctoral associate Jeffrey Lipton, lead author on an open-access arXiv paper about the system (presented this week at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in Vancouver), “By teleoperating robots from home, blue-collar workers would be able to telecommute and benefit from the IT revolution just as white-collar workers do now.”

Jobs for video-gamers too

The researchers imagine that such a system could even help employ jobless video-gamers by “game-ifying” manufacturing positions. (Users with gaming experience had the most ease with the system, the researchers found in tests.)

Homunculus Model system. A Baxter robot (left) is outfitted with a stereo camera rig and various end-effector devices. A virtual control room (user’s view, center), generated on an Oculus Rift CV1 headset (right), allows the user to feel like they are inside Baxter’s head while operating it. Using VR device controllers, including Razer Hydra hand trackers used for inputs (right), users can interact with controls that appear in the virtual space — opening and closing the hand grippers to pick up, move, and retrieve items. A user can plan movements based on the distance between the arm’s location marker and their hand while looking at the live display of the arm. (credit: Jeffrey I. Lipton et al./arXiv).

To make these movements possible, the human’s space is mapped into the virtual space, and the virtual space is then mapped into the robot space to provide a sense of co-location.
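A minimal sketch of that chain of mappings, using made-up example transforms rather than CSAIL's calibration: the user's hand position is first expressed in the virtual control room, then mapped into the robot's coordinate frame to give an end-effector target.

```python
# Minimal sketch of the space-mapping idea described above: the user's hand pose
# is expressed in the virtual control room, then mapped into the robot's frame.
# The transforms below are made-up example values, not CSAIL's calibration.
import numpy as np

def transform(translation, yaw_deg=0.0) -> np.ndarray:
    """4x4 homogeneous transform: yaw about the vertical (z) axis plus a translation."""
    t = np.radians(yaw_deg)
    m = np.eye(4)
    m[:2, :2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
    m[:3, 3] = translation
    return m

human_to_virtual = transform((0.0, 0.0, 1.2))                # place the user inside the control room
virtual_to_robot = transform((0.5, 0.0, -0.3), yaw_deg=90)   # control room sits "inside" the robot's head

def robot_target_from_hand(hand_in_human_frame) -> np.ndarray:
    """Compose the two mappings to get the end-effector goal in robot coordinates."""
    p = np.append(np.asarray(hand_in_human_frame, dtype=float), 1.0)
    return (virtual_to_robot @ human_to_virtual @ p)[:3]

print(robot_target_from_hand((0.3, -0.1, 0.9)).round(3))
```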

The team demonstrated the Homunculus Model system using the Baxter humanoid robot from Rethink Robotics, but the approach could work on other robot platforms, the researchers said.

In tests involving pick-and-place, assembly, and manufacturing tasks (such as “pick an item and stack it for assembly”), CSAIL’s Homunculus Model system had a 100% success rate, compared with a 66% success rate for existing state-of-the-art automated remote-control systems. The CSAIL system also grasped objects successfully 95 percent of the time and was 57 percent faster at doing tasks.*

“This contribution represents a major milestone in the effort to connect the user with the robot’s space in an intuitive, natural, and effective manner,” says Oussama Khatib, a computer science professor at Stanford University who was not involved in the paper.

The team plans to eventually focus on making the system more scalable, with many users and different types of robots that are compatible with current automation technologies.

* The Homunculus Model system addresses a latency problem with existing systems, which reconstruct the 3D scene on a GPU or CPU, introducing delay. Here, 3D reconstruction from the stereo HD cameras is instead done by the human’s visual cortex, so the user constantly receives visual feedback from the virtual world with minimal latency. This also avoids user fatigue and the nausea of motion sickness (known as simulator sickness) generated by “unexpected incongruities, such as delays or relative motions, between proprioception and vision [that] can lead to the nausea,” the researchers explain in the paper.


MITCSAIL | Operating Robots with Virtual Reality


Abstract of Baxter’s Homunculus: Virtual Reality Spaces for Teleoperation in Manufacturing

Expensive specialized systems have hampered development of telerobotic systems for manufacturing systems. In this paper we demonstrate a telerobotic system which can reduce the cost of such system by leveraging commercial virtual reality(VR) technology and integrating it with existing robotics control software. The system runs on a commercial gaming engine using off the shelf VR hardware. This system can be deployed on multiple network architectures from a wired local network to a wireless network connection over the Internet. The system is based on the homunculus model of mind wherein we embed the user in a virtual reality control room. The control room allows for multiple sensor display, dynamic mapping between the user and robot, does not require the production of duals for the robot, or its environment. The control room is mapped to a space inside the robot to provide a sense of co-location within the robot. We compared our system with state of the art automation algorithms for assembly tasks, showing a 100% success rate for our system compared with a 66% success rate for automated systems. We demonstrate that our system can be used for pick and place, assembly, and manufacturing tasks.