“ANA AVATAR” selected as a top-prize concept at XPRIZE Visioneers 2016 Summit

(credit: ANA)

A concept for remote-controlled “avatar” humanoid robots, presented by ANA, Japan’s largest airline, was named one of the three “top prize concept” finalists at XPRIZE’s recent inaugural Visioneers event.

The ANA AVATAR Team, led by scientist Harry Kloor, PhD, presented an ambitious vision of a future in which human pilots would hear, see, talk, touch, and feel as if a humanoid robotic body were their own — instantaneously across the globe and beyond. Ray Kurzweil is Co-Prize Developer.

The avatar concept, an extension of the avatar portrayed in the movie Avatar, involves haptics, motion capture, AR, VR, artificial intelligence, and deep-learning technologies.

“The ANA team presented a compelling AVATAR XPRIZE concept,” said Dr. Peter H. Diamandis, executive chairman and founder of XPRIZE. “The Avatar XPRIZE will initiate a revolution in the way we work, travel, explore and interact with each other.”

“The top three Visioneers teams, with prize concepts in the areas of cancer, avatars, and ALS, have been certified by XPRIZE as ‘ready to launch’ and we look forward to working with the teams to finalize the prize designs, secure the necessary funding, and launch one or all of these world-changing competitions,” Marcus Shingles, CEO of XPRIZE, told KurzweilAI.

Other top concepts were presented by Deloitte and Caterpillar for conquering cancer and ALS, respectively. The Visioneers Summit is intended to help XPRIZE evolve its strategic approach and advance its prize designs.


How to watch the US presidential debates in VR

NBC has teamed with AltSpaceVR to stream the U.S. presidential debate Monday night Sept. 26 live in virtual reality for HTC Vive, Oculus Rift, and Samsung Gear VR devices.

Or as late-night comic Jimmy Fallon put it, “If you’re wearing a VR headset, it will be like the candidates are lying right to your face.”

You’ll be watching the debate on a virtual screen in NBC’s “Virtual Democracy Plaza.” AltSpaceVR will also stream three other debates and Election Night on Nov. 8, as well as other VR events. You can also host your own debate watch party and make it public or friends-only.

NBC plans to host related VR events running up to the elections, including watch parties for debates, Q&A sessions with political experts, and political comedy shows.

To participate, download the AltSpaceVR app for Vive, Rift, or Gear VR; it’s also available in 2D mode for PC, Mac, Netflix, YouTube, and Twitch.

The debates will also be livestreamed on YouTube, by Twitter (partnering with Bloomberg), and by Facebook (partnering with ABC News).

New movie display allows for glasses-free 3-D for a theater audience

A new prototype display could show 3-D movies to any seat in a theater — no eyewear required (credit: Christine Daniloff / MIT)

A new “Cinema 3D” display lets audiences watch 3-D films in a movie theater without cumbersome glasses.

Developed by a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Weizmann Institute of Science, the prototype display uses a special array of lenses and mirrors to enable viewers to watch a 3-D movie from any seat in a theater.

Left: the standard approach to the design of automultiscopic 3D displays, which attempts to cover the angular range of all viewer positions, but involves poor spatial/angular resolution or a restricted range of screen distances. Right: an automultiscopic 3D display architecture that only presents a narrow range of angular images across the small set of viewing positions of a single seat, and replicates that narrow-angle content to all seats in the cinema and at all screen distances. (credit: Netalee Efrat et al./ACM Transactions on Graphics)

Glasses-free 3-D already exists: Traditional methods for TV sets use a series of slits in front of the screen (a “parallax barrier”) that allows each eye to see a different set of pixels, creating a simulated sense of depth.

But because a parallax barrier has to be at a fixed distance from the viewer, this approach isn’t practical for larger spaces like theaters, where viewers sit at different angles and distances. Other methods, including one from the MIT Media Lab, involve developing completely new physical projectors that cover the entire angular range of the audience. However, this often comes at the cost of lower image resolution.
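
To see why this ties the design to a single viewing distance, consider the standard two-view parallax-barrier geometry (a textbook relation, not taken from the Cinema 3D paper; the symbols are introduced here only for illustration). With pixel pitch $p$, interocular distance $e$, and design viewing distance $d$, similar triangles give the barrier-to-panel gap $g$ and the slit pitch $b$:

$$ g = \frac{p\,d}{e}, \qquad b = \frac{2\,p\,d}{d + g} $$

Both quantities depend on $d$, so a barrier tuned for one row of seats is mistuned for every other row.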

The key insight with Cinema 3D is that people in movie theaters move their heads only over a very small range of angles, limited by the width of their seat. So it’s enough to display images to a narrow range of angles and replicate that to all seats in the theater, using a series of mirrors and lenses. (The team’s prototype requires 50 sets of mirrors and lenses, which is currently expensive and impractical, the researchers say.)

The team presented Cinema 3D in an open-access paper at last week’s SIGGRAPH computer-graphics conference in Anaheim, California. The work was funded by the Israel Science Foundation and the European Research Council.


MIT CSAIL | Cinema 3D: A movie screen for glasses-free 3D


Abstract of Cinema 3D: large scale automultiscopic display

While 3D movies are gaining popularity, viewers in a 3D cinema still need to wear cumbersome glasses in order to enjoy them. Automultiscopic displays provide a better alternative to the display of 3D content, as they present multiple angular images of the same scene without the need for special eyewear. However, automultiscopic displays cannot be directly implemented in a wide cinema setting due to variants of two main problems: (i) The range of angles at which the screen is observed in a large cinema is usually very wide, and there is an unavoidable tradeoff between the range of angular images supported by the display and its spatial or angular resolutions. (ii) Parallax is usually observed only when a viewer is positioned at a limited range of distances from the screen. This work proposes a new display concept, which supports automultiscopic content in a wide cinema setting. It builds on the typical structure of cinemas, such as the fixed seat positions and the fact that different rows are located on a slope at different heights. Rather than attempting to display many angular images spanning the full range of viewing angles in a wide cinema, our design only displays the narrow angular range observed within the limited width of a single seat. The same narrow range content is then replicated to all rows and seats in the cinema. To achieve this, it uses an optical construction based on two sets of parallax barriers, or lenslets, placed in front of a standard screen. This paper derives the geometry of such a display, analyzes its limitations, and demonstrates a proof-of-concept prototype.

New tech could have helped police locate shooters in Dallas

Potential shooter location in Dallas (credit: Fox News)

JULY 8, 3:56 AM EDT — Livestreamed data from multiple users with cell phones and other devices could be used to help police locate shooters in a situation like the one going on right now in Dallas, says Jon Fisher, CEO of San Francisco-based CrowdOptic.

Here’s how it would work: you view (or record a video of) a shooter with your phone. Your location and the direction you are facing are immediately available on your device and could be coordinated with data from other people at the scene to triangulate the position of the shooter.
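
As a rough illustration of the idea (a minimal sketch, not CrowdOptic’s actual algorithm; the coordinates and threshold below are hypothetical), two observers’ positions and compass headings are enough to locate a target by intersecting the two bearing lines:

```python
import numpy as np

def bearing_to_unit(bearing_deg):
    """Convert a compass bearing (degrees clockwise from north) to a unit
    direction vector in a local east/north meter grid."""
    rad = np.radians(bearing_deg)
    return np.array([np.sin(rad), np.cos(rad)])  # (east, north)

def triangulate(p1, bearing1, p2, bearing2):
    """Intersect the two bearing rays cast from observers at p1 and p2.
    Returns the estimated target position, or None if the bearings are
    (nearly) parallel."""
    d1, d2 = bearing_to_unit(bearing1), bearing_to_unit(bearing2)
    A = np.column_stack((d1, -d2))          # solve p1 + t1*d1 == p2 + t2*d2
    if abs(np.linalg.det(A)) < 1e-9:
        return None
    t1, _ = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t1 * d1

# Hypothetical example: two phones 120 m apart, both aimed at the same rooftop.
print(triangulate((0.0, 0.0), 45.0, (120.0, 0.0), 315.0))  # ~[60. 60.]
```

In practice, many devices with noisy headings would be combined, so a least-squares fit over all bearing lines (rather than a single two-ray intersection) would be the natural extension.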

A CrowdOptic “cluster” with multiple people focused on the same object (credit: CrowdOptic)

This technology, called the “CrowdOptic Interactive Streaming platform,” is already in place, using Google Glass livestreaming, in several organizations, including UCSF Medical Center, the Denver Broncos, and Futton, Inc. (working with Chinese traffic police).

Fisher told KurzweilAI his company’s software is also integrated with Cisco Jabber livestreaming video and conferencing products (and soon Spark), and with Sony SmartEyeglass, and that iOS and Android apps are planned.

CrowdOptic also has a product called CrowdOptic Eye, a “powerful, low-bandwidth live streaming device designed to … broadcast live video and two-way audio from virtually anywhere.”

“We’re talking about phones now, but think about all other devices, such as drones, that will be delivering these feeds to CNN and possibly local police,” he said.

ADDED July 11:

“When all attempts to negotiate with the suspect, Micah Johnson, failed under the exchange of gunfire, the Department utilized the mechanical tactical robot, as a last resort, to deliver an explosion device to save the lives of officers and citizens. The robot used was the Remotec, Model  F-5, claw and arm extension with an explosive device of C4 plus ‘Det’ cord.  Approximate weight of total charge was one pound.” — Statement July 9, 2016 by Dallas police chief David O. Brown

The Dallas police department’s decision to use a robot to kill the shooter on Thursday, July 7, raises questions. For example: why wasn’t a non-lethal method used with the robot, such as a tranquilizer dart, which also might have given police an opportunity to acquire more information, including the location of claimed bombs and cohorts possibly associated with the crime?

How to bring the entire web to VR

Google is working on new features to bring the web to VR, according to Google Happiness Evangelist François Beaufort.

To help web developers embed VR content in their web pages, the Google Chromium team has been working towards WebVR support in Chromium (programmers: see Chromium Code Reviews), Beaufort said. That means you can now use Cardboard- or Daydream-ready VR viewers to see pages with compliant VR content while browsing the web with Chrome.

(credit: Google)

“The team is just getting started on making the web work well for VR so stay tuned, there’s more to come!” he said.

Google previously launched VR view, which enables developers to embed immersive content on Android, iOS, and the web. Users can view it on their phone, with a Cardboard viewer, or with a Chrome browser on their desktop computer.

For native apps, programmers can embed a VR view in an app or web page by grabbing the latest Cardboard SDK for Android or iOS and adding a few lines of code.

On the web, embedding a VR view is as simple as adding an iframe to your site, as KurzweilAI did for the 360-degree view shown at the top of this page, using iframe code copied from the HTML in “Introducing VR view: embed immersive content into your apps and websites” on the Google Developers Blog. (A Chrome browser is required. In addition to a VR viewer, you can use either the mouse or the four arrow keys to explore the image in 360 degrees.)

An augmented-reality head-up display in a diver’s helmet

Prototype of the Divers Augmented Vision Display (DAVD) in a dive helmet (credit: Richard Manley/U.S. Navy photo)

The U.S. Navy’s Naval Surface Warfare Center has developed what may be the first underwater augmented-reality head-up display (HUD) built into a diving helmet.

The Divers Augmented Vision Display (DAVD) gives divers a real-time, high-res visual display of everything from sector sonar (real-time topside view of the diver’s location and dive site), text messages, diagrams, and photographs to augmented-reality videos. Having real-time visual data enables them to be more effective and safe in their missions — providing expanded situational awareness and increased accuracy in navigating to a ship, downed aircraft, or other objects of interest.

Lab simulation view of an augmented-reality image of an airplane through the Divers Augmented Vision Display (DAVD) (credit: Richard Manley/U.S. Navy photo)

Lab simulation view of a sector sonar image with navigation aids through the Divers Augmented Vision Display (DAVD) (credit: Richard Manley/U.S. Navy photo)

The Naval Sea Systems Command is now developing the next generation of the DAVD, with enhanced sensors such as miniaturized sonar and video systems to enable divers to see in higher resolution up close, even when water visibility is near zero.


U.S. Navy | Diver Augmented Vision Display (DAVD)

Two inventions deal with virtual-reality sickness

Single-eye view of a virtual environment before (left) and after (right) a dynamic field-of-view modification that subtly restricts the size of the image during image motion to reduce motion sickness (credit: Ajoy Fernandes and Steve Feiner/Columbia Engineering)

Columbia Engineering researchers announced earlier this week that they have developed a simple way to reduce VR motion sickness that can be applied to existing consumer VR devices, such as Oculus Rift, HTC Vive, Sony PlayStation VR, Gear VR, and Google Cardboard devices.

The trick is to subtly change the field of view (FOV), or how much of an image you can see, during visually perceived motion. In an experiment conducted by Computer Science Professor Steven K. Feiner and student Ajoy Fernandes, most of the participants were not even aware of the intervention.

What causes VR sickness is the clash between the visual motion cues that users see and the physical motion cues that they receive from their inner ears’ vestibular system, which provides our sense of motion, equilibrium, and spatial orientation. When the visual and vestibular cues conflict, users can feel quite uncomfortable, even nauseated.

Decreasing the field of view can decrease these symptoms, but can also decrease the user’s sense of presence (reality) in the virtual environment, making the experience less compelling. So the researchers worked on subtly decreasing FOV in situations when a larger FOV would be likely to cause VR sickness (when the mismatch between physical and virtual motion increases) and restoring the FOV when VR sickness is less likely to occur (when the mismatch decreases).


Columbia University | Combating VR Sickness through Subtle Dynamic Field-Of-View Modification

They developed software that functions as a pair of “dynamic FOV restrictors” that can partially obscure each eye’s view with a virtual soft-edged cutout. They then determined how much the user’s field of view should be reduced, and the speed with which it should be reduced and then restored, and tested the system in an experiment.
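
Here is a minimal sketch of the general strategy (not the authors’ implementation; the field-of-view limits, rates, and mismatch threshold below are hypothetical tuning values): each frame, estimate the mismatch between virtual and physical motion, then ease the rendered FOV toward a narrower limit when the mismatch is high and back toward the full FOV when it is low.

```python
import numpy as np

FULL_FOV_DEG = 110.0   # headset's full field of view (illustrative value)
MIN_FOV_DEG = 80.0     # narrowest the soft-edged restrictor will go
NARROW_RATE = 40.0     # degrees/s while restricting (hypothetical tuning)
RESTORE_RATE = 10.0    # degrees/s while restoring (slower, to stay subtle)

def update_fov(current_fov, virtual_speed, physical_speed, dt):
    """Ease the rendered FOV according to the visual-vestibular mismatch.

    virtual_speed: camera speed through the virtual environment (m/s)
    physical_speed: head speed reported by the tracker (m/s)
    """
    mismatch = abs(virtual_speed - physical_speed)
    if mismatch > 0.1:   # moving virtually but not physically: restrict
        target, rate = MIN_FOV_DEG, NARROW_RATE
    else:                # motions agree: gradually restore the full view
        target, rate = FULL_FOV_DEG, RESTORE_RATE
    step = np.clip(target - current_fov, -rate * dt, rate * dt)
    return current_fov + step

# Example: simulate 2 seconds of virtual motion while the user sits still.
fov = FULL_FOV_DEG
for _ in range(120):   # 120 frames at 60 Hz
    fov = update_fov(fov, virtual_speed=3.0, physical_speed=0.0, dt=1 / 60)
print(round(fov, 1))   # eased down toward MIN_FOV_DEG
```

The asymmetric rates reflect the paper’s emphasis on subtlety: narrowing quickly enough to help, but restoring slowly enough that users rarely notice the change.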

Most of the experiment participants who used the restrictors did not notice them, and all those who did notice them said they would prefer to have them in future VR experiences.

The study was presented at IEEE 3DUI 2016 (IEEE Symposium on 3D User Interfaces) on March 20, where it won the Best Paper Award.

Galvanic Vestibular Stimulation

A different, more ambitious approach was announced in March by vMocion, LLC, an entertainment technology company, based on the Mayo Clinic‘s patented Galvanic Vestibular Stimulation (GVS) technology*, which electrically stimulates the vestibular system. vMocion’s new 3v Platform (virtual, vestibular, and visual) was actually developed to add a “magical” sensation of motion to existing gaming, movies, amusement parks, and other entertainment environments.

The 3v system can generate roll, pitch, and yaw sensations (credit: Wikipedia)

But it turns out GVS also works to reduce VR motion sickness. vMocion says it will license the 3v Platform to VR and other media and entertainment companies. The system’s software can be integrated into existing operating systems and added to existing devices such as head-mounted displays, as well as smartphones, 3-D glasses, and TVs, says Bradley Hillstrom Jr., CEO of vMocion.


vMocion | Animation of Mayo Clinic’s Galvanic Vestibular Stimulation (GVS) Technology

Integrating into VR headsets

“vMocion is already in talks with companies in the gaming and entertainment industries,” Hillstrom told KurzweilAI, “and we hope to work with systems integrators and other strategic partners who can bring this technology directly to consumers very soon.” Hillstrom said the technology can be integrated into existing headsets and other devices.

Samsung has announced plans to sell a system using GVS, called Entrim 4D, although it’s not clear from the video (showing a Gear VR device) how it connects to the front and rear electrodes (apparently needed for pitch sensations).


Samsung | Entrim 4D


Mayo Clinic | The Story Behind Mayo Clinic’s GVS Technology & vMocion’s 3v Platform

* The technology grew out of decade-long medical research by Mayo Clinic’s Aerospace Medicine and Vestibular Research Laboratory (AMVRL) team, which consists of experts in aerospace medicine, internal medicine, and computational science, as well as neurovestibular specialists, in collaboration with Vivonics, Inc., a biomedical engineering company. The technology is based on work supported by grants from the U.S. Army and U.S. Navy.


Abstract of Combating VR sickness through subtle dynamic field-of-view modification

Virtual Reality (VR) sickness can cause intense discomfort, shorten the duration of a VR experience, and create an aversion to further use of VR. High-quality tracking systems can minimize the mismatch between a user’s visual perception of the virtual environment (VE) and the response of their vestibular system, diminishing VR sickness for moving users. However, this does not help users who do not or cannot move physically the way they move virtually, because of preference or physical limitations such as a disability. It has been noted that decreasing field of view (FOV) tends to decrease VR sickness, though at the expense of sense of presence. To address this tradeoff, we explore the effect of dynamically, yet subtly, changing a physically stationary person’s FOV in response to visually perceived motion as they virtually traverse a VE. We report the results of a two-session, multi-day study with 30 participants. Each participant was seated in a stationary chair, wearing a stereoscopic head-worn display, and used control and FOV-modifying conditions in the same VE. Our data suggests that by strategically and automatically manipulating FOV during a VR session, we can reduce the degree of VR sickness perceived by participants and help them adapt to VR, without decreasing their subjective level of presence, and minimizing their awareness of the intervention.

‘On-the-fly’ 3-D printing system prints what you design, as you design it

This wire frame prototype of a toy aircraft was printed in just 10 minutes, including testing for correct fit, and modified during printing to create the cockpit. The file was updated in the process, and could be used to print a finished model. (credit: Cornell University)

Cornell researchers have developed an interactive prototyping system that prints a wire frame of your design as you design it. You can pause anywhere in the process to test or measure and make needed changes, which will be added to the physical model still in the printer.

In conventional 3-D printing, a nozzle scans across a stage depositing drops of plastic, rising slightly after each pass to build an object in a series of layers. With the On-the-Fly-Print system, the nozzle instead extrudes a rope of quick-hardening plastic to create a wire frame that represents the surface of the solid object described in a computer-aided design (CAD) file and allows the designer to make refinements while printing is in progress.

Wireframe test models printed with On-The-Fly Print system (credit: Cornell University)

The printer’s stage can be rotated so that any face of the model faces up; an airplane fuselage, for example, can be turned on its side to add a wing. There is also a cutter to remove parts of the model, say, to give the airplane a cockpit, and the nozzle can reach through the wire mesh to make changes inside. Rotating the stage also adds yaw and pitch, for five degrees of freedom in total.
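
As an illustration of the reorientation step (a sketch, not the researchers’ code; the face normal in the example is hypothetical), the rotation that brings a chosen face’s outward normal to point straight up can be computed with Rodrigues’ formula:

```python
import numpy as np

def rotation_aligning(a, b):
    """Return the 3x3 rotation matrix that rotates unit vector a onto unit
    vector b (Rodrigues' rotation formula)."""
    a = np.asarray(a, float) / np.linalg.norm(a)
    b = np.asarray(b, float) / np.linalg.norm(b)
    v = np.cross(a, b)            # rotation axis (unnormalized)
    c = np.dot(a, b)              # cosine of the rotation angle
    if np.isclose(c, -1.0):       # 180-degree case: pick any perpendicular axis
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

# Hypothetical example: the side of a fuselage has outward normal +X; rotate
# the stage so that this face points up (+Z) before adding a wing.
R = rotation_aligning([1.0, 0.0, 0.0], [0.0, 0.0, 1.0])
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 3))  # -> [0. 0. 1.]
```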

The researchers described the On-the-Fly-Print system in a paper presented at the 2016 ACM CHI Conference on Human Factors in Computing Systems. The work was supported in part by the National Science Foundation and by Autodesk.


Huaishu Peng | On-the-Fly Print: Incremental Printing While Modelling

Harold Cohen: in memoriam

TCM#2 (1995) 45×58 — Dye on paper, painted with the AARON painting machine (credit: Howard Cohen)

By Paul Cohen

Harold Cohen, artist and pioneer in the field of computer-generated art, died on April 27, 2016 at the age of 87. Cohen is the author of AARON, perhaps the longest-lived and certainly the most creative artificial intelligence program in daily use.

Cohen viewed AARON as his collaborator. At times during their decades-long relationship, AARON was quite autonomous, responsible for the composition, coloring and other aspects of a work; more recently, AARON served Cohen by making drawings that Cohen would develop into paintings. Cohen’s death is the end of a lengthy partnership between an artist and an artificial intelligence.

Cohen grew up in England. He studied painting at the Slade School of Fine Art in London, and later taught at the Slade as well as Camberwell, Nottingham, and other art schools. He represented Great Britain at major international festivals during the 1960s, including the Venice Biennale, Documenta 3, and the Paris Biennale. He showed widely and successfully at the Robert Fraser Gallery, the Alan Stone Gallery, the Whitechapel Gallery, the Arnolfini Gallery, the Victoria and Albert Museum, and many other notable venues in England and Europe.

Then, in 1968, he left London for a one-year visiting faculty appointment in the Art Department at the University of California, San Diego. One year became many: Cohen became Department Chair, then Director of the Center for Research in Computing and the Arts at UCSD, and eventually retired emeritus in 1994.

The Last Machine Age (2015) 35.75×65.75 — Pigment ink on canvas was finger painted using Harold Cohen’s finger painting system (credit: Howard Cohen)

A scientist and engineer of art

Leaving the familiar, rewarding London scene presaged a career of restless invention. By 1971, Cohen had taught himself to program a computer and exhibited computer-generated art at the Fall Joint Computer Conference. The following year, he exhibited not only a program but also a drawing machine at the Los Angeles County Museum. A skilled engineer, Cohen built many display devices: flatbed plotters, a robotic “turtle” that roamed and drew on huge sheets of paper, even a painting robot that mixed its own colors.

These machines and the museum-goers’ experiences were always important to Cohen, whose fundamental question was, “What makes images evocative?”  The distinguished computer scientist and engineer Gordon Bell notes that “Harold was really a scientist and engineer of art.”

Indeed, AARON was a thoroughly empirical project: Cohen studied how children draw, he tracked down the petroglyphs of California’s Native Americans, he interviewed viewers, and he experimented with algorithms to discover the characteristics of images that make them seem to stand for something. Although AARON went through an overtly representational phase, in which images were recognizably of people or potted plants, Cohen and AARON returned to abstraction and evocation and to methods for making images that produce cascades of almost-recognition and associations in the minds of viewers.

Harold Cohen and AARON: Ray Kurzweil interviews Harold Cohen about AARON (credit: Computer History Museum/Kurzweil Foundation)

“Harold Cohen is one of those rare individuals in the Arts who performs at the highest levels both in the art world and the scientific world,” said Professor Edward Feigenbaum of Stanford University’s Artificial Intelligence Laboratory, where Cohen was exposed to the ideas and techniques of artificial intelligence. “All discussions of creativity by computer invariably cite Cohen’s work,” said Feigenbaum.

Cohen had no patience for the “is it art?” question. He showed AARON’s work in the world’s galleries, museums, and science centers — the Tate, the Stedelijk, the San Francisco Museum of Art, Documenta, the Boston Computer Museum, the Ontario Science Center, and many others. His audiences might have been drawn in by curiosity and the novelty of computer-generated art, but they would soon ask, “How can a machine make such marvelous pictures? How does it work?” These were the very questions that Cohen asked himself throughout his career.

AARON’s images and Cohen’s essays and videos can be viewed at www.aaronshome.com.

Cohen is survived by his partner Hiromi Ito; by his brother Bernard Cohen; by Paul Cohen, Jenny Foord and Zana Itoh Cohen; by Sara Nishi, Kanoko Nishi-Smith, and Uta and Oscar Nishi-Smith; by Becky Cohen; and by Allegra Cohen, Jacob and Abigail Foord, and Harley and Naomi Kuych-Cohen.


ACM SIGGRAPH Awards | Harold Cohen, Distinguished Artist Award for Lifetime Achievement

NYU Holodeck to be model for year 2041 cyberlearning

NYU-X Holodeck (credit: Winslow Burleson and Armanda Lewis)

In an open-access paper in the International Journal of Artificial Intelligence in Education, Winslow Burleson, PhD, MSE, associate professor at New York University’s Rory Meyers College of Nursing, suggests that advanced cyberlearning environments involving VR and AI innovations are needed to solve society’s “wicked challenges”* (entrenched and seemingly intractable societal problems).

Burleson and co-author Armanda Lewis imagine such technology in a year-2041 Holodeck, which Burleson’s NYU-X Lab is currently developing in prototype form, in collaboration with colleagues at NYU Courant, Tandon, Steinhardt, and Tisch.

“The Holodeck will support a broad range of transdisciplinary collaborations, integrated education, research, and innovation by providing a networked software/hardware infrastructure that can synthesize visual, audio, physical, social, and societal components,” said Burleson.

It’s intended as a model for the future of cyberlearning experience, integrating visual, audio, and physical (haptics, objects, real-time fabrication) components, with shared computation, integrated distributed data, immersive visualization, and social interaction to make possible large-scale synthesis of learning, research, and innovation.


This reminds me of the book Education and Ecstasy, written in 1968 by George B. Leonard, a respected editor for LOOK magazine and, in many respects, a pioneer in what has become the transhumanism movement. That book laid out the justification and promise of advanced educational technology in the classroom for an entire generation. Other writers, such as Harry S. Broudy in The Real World of the Public Schools (1972), followed, arguing that we cannot afford “master teachers” in every classroom, but still need to do far better, both then and now.

Today, theories and models of automated planning using computers in complex situations are advanced, and “wicked” social simulations can demonstrate the “potholes” in proposed action scenarios. Virtual realities, holodecks, interactive games, and robotic and/or AI assistants offer “sandboxes” for learning and for sharing that learning with others. Leonard’s vision, proposed in 1968 for the year 2000, has not yet been realized. However, by 2041, according to these authors, it just might be.

— Warren E. Lacefield, Ph.D. President/CEO Academic Software, Inc.; Associate Professor (retired), Evaluation, Measurement, and Research Program, Department of Educational Leadership, Research, and Technology, Western Michigan University (aka “Asiwel” on KurzweilAI)


Key aspects of the Holodeck: personal stories and interactive experiences that make it a rich environment; open streaming content that makes it real and compelling; and contributions that personalize the learning experience. The goal is to create a networked infrastructure and communication environment where “wicked challenges” can be iteratively explored and re-solved, utilizing visual, acoustic, and physical sensory feedback, human dynamics, and social collaboration.

Burleson and Lewis envision that in 2041, learning is unlimited — each individual can create a teacher, team, community, world, galaxy or universe of their own.

* In the late 1960s, urban planners Horst Rittel and Melvin Webber began formulating the concept of “wicked problems” or “wicked challenges”: problems so vexing in the realm of social and organizational planning that they could not be successfully ameliorated with traditional linear, analytical, systems-engineering types of approaches.

These “wicked challenges” are poorly defined, abstruse, and connected to strong moral, political, and professional issues. Some examples might include: “How should we deal with crime and violence in our schools?” “How should we wage the ‘War on Terror’?” or “What is a good national immigration policy?”

“Wicked problems,” by their very nature, are strongly stakeholder-dependent; there is often little consensus even about what the problem is, let alone how to deal with it. And the challenges themselves are ever-shifting sets of inherently complex, interacting issues evolving in a dynamic social context. Often, new forms of “wicked challenges” emerge as a result of trying to understand and treat just one challenge in isolation.


Abstract of Optimists’ Creed: Brave New Cyberlearning, Evolving Utopias (Circa 2041)

This essay imagines the role that artificial intelligence innovations play in the integrated living, learning and research environments of 2041. Here, in 2041, in the context of increasingly complex wicked challenges, whose solutions by their very nature continue to evade even the most capable experts, society and technology have co-evolved to embrace cyberlearning as an essential tool for envisioning and refining utopias–non-existent societies described in considerable detail. Our society appreciates that evolving these utopias is critical to creating and resolving wicked challenges and to better understanding how to create a world in which we are actively “learning to be” – deeply engaged in intrinsically motivating experiences that empower each of us to reach our full potential. Since 2015, Artificial Intelligence in Education (AIED) has transitioned from what was primarily a research endeavour, with educational impact involving millions of user/learners, to serving, now, as a core contributor to democratizing learning (Dewey 2004) and active citizenship for all (billions of learners throughout their lives). An expansive experiential super computing cyberlearning environment, we affectionately call the “Holodeck,” supports transdisciplinary collaboration and integrated education, research, and innovation, providing a networked software/hardware infrastructure that synthesizes visual, audio, physical, social, and societal components. The Holodeck’s large-scale integration of learning, research, and innovation, through real-world problem solving and teaching others what you have learned, effectively creates a global meritocratic network with the potential to resolve society’s wicked challenges while empowering every citizen to realize her or his full potential.