Intelligence-augmentation device lets users ‘speak silently’ with a computer by just thinking

MIT Media Lab researcher Arnav Kapur demonstrates the AlterEgo device. It picks up neuromuscular facial signals generated by his thoughts; a bone-conduction headphone lets him privately hear responses from his personal devices. (credit: Lorrie Lejeune/MIT)

MIT researchers have invented a system that allows someone to communicate silently and privately with a computer or the internet by simply saying words internally — without audible speech or any visible facial movement.

The AlterEgo system consists of a wearable device with electrodes that pick up the otherwise undetectable neuromuscular signals triggered by subvocalizing — saying words “in your head” in natural language. The signals are fed to a neural network trained to identify subvocalized words from these signals. Bone-conduction headphones transmit vibrations through the bones of the face to the inner ear, conveying information to the user privately and without interrupting a conversation. The device connects wirelessly to any external computing device via Bluetooth.
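
The paper doesn’t spell out an implementation, but the core recognition step (mapping short windows of multi-channel electrode signal to words in a small vocabulary) can be sketched roughly as follows; the channel count, window length, and network architecture here are illustrative assumptions, not the authors’ design:

```python
# Illustrative sketch only, not the authors' code. Assumes 7 electrode
# channels sampled at 250 Hz, ~1 s windows, and a small closed vocabulary,
# loosely following the setup described in the IUI '18 paper.
import torch
import torch.nn as nn

class SubvocalClassifier(nn.Module):
    def __init__(self, n_channels=7, n_words=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, n_words),           # one logit per vocabulary word
        )

    def forward(self, x):                     # x: (batch, channels, samples)
        return self.net(x)

model = SubvocalClassifier()
window = torch.randn(1, 7, 250)               # one second of fake signal
word_logits = model(window)
print(word_logits.argmax(dim=1))              # index of the predicted word
```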

A silent, discreet, bidirectional conversation with machines. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?” says Arnav Kapur, a graduate student at the MIT Media Lab who led the development of the new system. Kapur is first author on an open-access paper on the research, presented in March at IUI ’18, the 23rd International Conference on Intelligent User Interfaces.

In one of the researchers’ experiments, subjects used the system to silently report opponents’ moves in a chess game and silently receive recommended moves from a chess-playing computer program. In another experiment, subjects were able to covertly answer difficult computational questions, such as the square roots of large numbers, and recall obscure facts. The researchers achieved a median word accuracy of 92 percent, which they expect to improve. “I think we’ll achieve full conversation someday,” Kapur said.

Non-disruptive. “We basically can’t live without our cellphones, our digital devices,” says Pattie Maes, a professor of media arts and sciences and Kapur’s thesis advisor. “But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself.

“So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”*


Even the tiniest signal to her jaw or larynx might be interpreted as a command. Keeping one hand on the sensitivity knob, she concentrated to erase mistakes the machine kept interpreting as nascent words.

Few people used subvocals, for the same reason few ever became street jugglers. Not many could operate the delicate systems without tipping into chaos. Any normal mind kept intruding with apparent irrelevancies, many ascending to the level of muttered or almost-spoken words the outer consciousness hardly noticed, but which the device manifested visibly and in sound.

Tunes that pop into your head… stray associations you generally ignore… memories that wink in and out… impulses to action… often rising to tickle the larynx, the tongue, stopping just short of sound…

As she thought each of those words, lines of text appeared on the right, as if a stenographer were taking dictation from her subvocalized thoughts. Meanwhile, at the left-hand periphery, an extrapolation subroutine crafted little simulations. A tiny man with a violin. A face that smiled and closed one eye… It was well this device only read the outermost, superficial nervous activity, associated with the speech centers.

When invented, the sub-vocal had been hailed as a boon to pilots — until high-performance jets began plowing into the ground. We experience ten thousand impulses for every one we allow to become action. Accelerating the choice and decision process did more than speed reaction time. It also shortcut judgment.

Even as a computer input device, it was too sensitive for most people. Few wanted extra speed if it also meant the slightest sub-surface reaction could become embarrassingly real, in amplified speech or writing.

If they ever really developed a true brain to computer interface, the chaos would be even worse.

— From EARTH (1989), chapter 35, by David Brin (with permission)


IoT control. In the conference paper, the researchers suggest that an “internet of things” (IoT) controller “could enable a user to control home appliances and devices (switch on/off home lighting, television control, HVAC systems etc.) through internal speech, without any observable action.” Or schedule an Uber pickup.
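
As a rough illustration of how such a controller could work, a recognizer’s output phrases might simply be mapped to home-automation messages, shown here over MQTT (a protocol commonly used for smart-home devices); the topic names and command vocabulary are hypothetical:

```python
# Hypothetical glue between a silent-speech recognizer and MQTT-controlled
# home devices (paho-mqtt 1.x style client). Topics and phrases are made up.
import paho.mqtt.client as mqtt

COMMANDS = {
    "lights on":  ("home/livingroom/light", "ON"),
    "lights off": ("home/livingroom/light", "OFF"),
    "tv off":     ("home/livingroom/tv", "OFF"),
}

client = mqtt.Client()
client.connect("localhost", 1883)

def on_subvocal_phrase(phrase: str) -> None:
    """Called by the recognizer with the decoded internal-speech phrase."""
    if phrase in COMMANDS:
        topic, payload = COMMANDS[phrase]
        client.publish(topic, payload)

on_subvocal_phrase("lights on")   # switches on the light, no audible speech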

Peripheral devices could also be directly interfaced with the system. “For instance, lapel cameras and smart glasses could directly communicate with the device and provide contextual information to and from the device. … The device also augments how people share and converse. In a meeting, the device could be used as a back-channel to silently communicate with another person.”

Applications of the technology could also include high-noise environments, like the flight deck of an aircraft carrier, or even places with a lot of machinery, like a power plant or a printing press, suggests Thad Starner, a professor in Georgia Tech’s College of Computing. “There’s a lot of places where it’s not a noisy environment but a silent environment. A lot of time, special-ops folks have hand gestures, but you can’t always see those. Wouldn’t it be great to have silent-speech for communication between these folks? The last one is people who have disabilities where they can’t vocalize normally.”

* Or users could, conceivably, simply zone out — checking texts, email messages, and Twitter (all converted to voice) during boring meetings — or even reply, using mentally selected “smart reply”-type options.

AI algorithm with ‘social skills’ teaches humans how to collaborate

(credit: Iyad Rahwan)

An international team has developed an AI algorithm with social skills that outperforms humans at cooperating with people and machines in a variety of two-player games.

The researchers, led by Iyad Rahwan, PhD, an MIT Associate Professor of Media Arts and Sciences, tested humans and the algorithm, called S# (“S sharp”), in three types of interactions: machine-machine, human-machine, and human-human. In most instances, machines programmed with S# outperformed humans in finding compromises that benefit both parties.

“Two humans, if they were honest with each other and loyal, would have done as well as two machines,” said lead author BYU computer science professor Jacob Crandall. “As it is, about half of the humans lied at some point. So essentially, this particular algorithm is learning that moral characteristics are better [since it’s programmed to not lie] and it also learns to maintain cooperation once it emerges.”

“The end goal is that we understand the mathematics behind cooperation with people and what attributes artificial intelligence needs to develop social skills,” said Crandall. “AI needs to be able to respond to us and articulate what it’s doing. It has to be able to interact with other people.”

How casual talk by AI helps humans be more cooperative

One important finding: colloquial phrases (called “cheap talk” in the study) doubled the amount of cooperation. In tests, if human participants cooperated with the machine, the machine might respond with “Sweet. We are getting rich!” or “I accept your last proposal.” If the participants tried to betray the machine or back out of a deal, they might be met with a trash-talking “Curse you!”, “You will pay for that!” or even an “In your face!”
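
For intuition, here is a toy sketch of the cheap-talk mechanic: a repeated-game strategy (plain tit-for-tat, standing in for the far more sophisticated S#) that attaches canned phrases from the study to its moves:

```python
# Toy illustration of "cheap talk" attached to a repeated-game strategy.
# This is tit-for-tat with canned phrases, not the actual S# algorithm.
class TalkingAgent:
    def __init__(self):
        self.partner_last = "C"            # assume cooperation to start

    def act(self):
        return self.partner_last           # tit-for-tat: mirror the partner

    def speak(self, partner_move):
        self.partner_last = partner_move
        if partner_move == "C":
            return "Sweet. We are getting rich!"
        return "Curse you! You will pay for that!"

agent = TalkingAgent()
for partner_move in ["C", "C", "D", "C"]:  # C = cooperate, D = defect
    my_move = agent.act()
    print(my_move, "|", agent.speak(partner_move))
```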

And when machines used cheap talk, their human counterparts were often unable to tell whether they were playing a human or machine — a sort of mini “Turing test.”

The research findings, Crandall hopes, could have long-term implications for human relationships. “In society, relationships break down all the time,” he said. “People that were friends for years all of a sudden become enemies. Because the machine is often actually better at reaching these compromises than we are, it can potentially teach us how to do this better.”

The research is described in an open-access paper in Nature Communications.

A human-machine collaborative chatbot system 

An actual conversation on Evorus, combining multiple chatbots and workers. (credit: T. Huang et al.)

In a related study, Carnegie Mellon University (CMU) researchers have created a new collaborative chatbot called Evorus that goes beyond Siri, Alexa, and Cortana by adding humans in the loop.

Evorus combines a chatbot called Chorus with inputs by paid crowd workers at Amazon Mechanical Turk, who answer questions from users and vote on the best answer. Evorus keeps track of the questions asked and answered and, over time, begins to suggest these answers for subsequent questions. It can also use multiple chatbots, such as vote bots, Yelp Bot (restaurants) and Weather Bot to provide enhanced information.
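
A much-simplified sketch of that selection loop, with candidate answers from bots and crowd workers chosen by weighted upvotes (the weights and acceptance threshold are invented for illustration):

```python
# Simplified sketch of Evorus-style response selection: candidate answers
# come from several bots and from crowd workers, and upvotes decide which
# one is sent. Weights and the acceptance threshold are invented here.
def select_response(candidates, votes, threshold=2.0):
    """candidates: {source: answer}; votes: list of (voter_weight, source)."""
    scores = {source: 0.0 for source in candidates}
    for weight, source in votes:
        scores[source] += weight
    best = max(scores, key=scores.get)
    return candidates[best] if scores[best] >= threshold else None

candidates = {
    "weather_bot": "It is 54F and raining in Pittsburgh.",
    "crowd_worker_17": "Rainy today, take an umbrella!",
}
votes = [(1.0, "crowd_worker_17"), (1.0, "crowd_worker_17"),
         (0.5, "weather_bot")]
print(select_response(candidates, votes))  # crowd answer wins with score 2.0
```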

Humans are simultaneously training the system’s AI, making it gradually less dependent on people, says Jeff Bigham, associate professor in the CMU Human-Computer Interaction Institute.

The hope is that as the system grows, the AI will be able to handle an increasing percentage of questions, while the number of crowd workers necessary to respond to “long tail” questions will remain relatively constant.

Keeping humans in the loop also reduces the risk that malicious users will manipulate the conversational agent inappropriately, as occurred when Microsoft briefly deployed its Tay chatbot in 2016, noted co-developer Ting-Hao Huang, a Ph.D. student in the Language Technologies Institute (LTI).

The preliminary system is available for download and use by anyone willing to be part of the research effort. It is deployed via Google Hangouts, which allows for voice input as well as access from computers, phones, and smartwatches. The software architecture can also accept automated question-answering components developed by third parties.

An open-access research paper on Evorus, available online, will be presented at CHI 2018, the Conference on Human Factors in Computing Systems, in Montreal, April 21–26, 2018.


Abstract of Cooperating with machines

Since Alan Turing envisioned artificial intelligence, technical progress has often been measured by the ability to defeat humans in zero-sum encounters (e.g., Chess, Poker, or Go). Less attention has been given to scenarios in which human–machine cooperation is beneficial but non-trivial, such as scenarios in which human and machine preferences are neither fully aligned nor fully in conflict. Cooperation does not require sheer computational power, but instead is facilitated by intuition, cultural norms, emotions, signals, and pre-evolved dispositions. Here, we develop an algorithm that combines a state-of-the-art reinforcement-learning algorithm with mechanisms for signaling. We show that this algorithm can cooperate with people and other algorithms at levels that rival human cooperation in a variety of two-player repeated stochastic games. These results indicate that general human–machine cooperation is achievable using a non-trivial, but ultimately simple, set of algorithmic mechanisms.


Abstract of A Crowd-powered Conversational Assistant Built to Automate Itself Over Time

Crowd-powered conversational assistants have been shown to be more robust than automated systems, but do so at the cost of higher response latency and monetary costs. A promising direction is to combine the two approaches for high quality, low latency, and low cost solutions. In this paper, we introduce Evorus, a crowd-powered conversational assistant built to automate itself over time by (i) allowing new chatbots to be easily integrated to automate more scenarios, (ii) reusing prior crowd answers, and (iii) learning to automatically approve response candidates. Our 5-month-long deployment with 80 participants and 281 conversations shows that Evorus can automate itself without compromising conversation quality. Crowd-AI architectures have long been proposed as a way to reduce cost and latency for crowd-powered systems; Evorus demonstrates how automation can be introduced successfully in a deployed system. Its architecture allows future researchers to make further innovation on the underlying automated components in the context of a deployed open domain dialog system.

This voice-authentication wearable could block voice-assistant or bank spoofing

“Alexa, turn off my security system.” (credit: Amazon)

University of Michigan (U-M) scientists have developed a voice-authentication system for reducing the risk of being spoofed when you use a biometric system to log into secure services or a voice assistant (such as Amazon Echo and Google Home).

A hilarious example of voice-assistant spoofing occurred during the 2017 Super Bowl, when actors in a Google commercial voiced “OK Google” commands on TV and viewers’ Google Home devices obediently began to play whale noises, flip lights on, and take other actions.

More seriously, the U-M scientists point out, an adversary could possibly bypass current voice-as-biometric authentication mechanisms, such as Nuance’s “FreeSpeech” customer-authentication platform (used in call centers and banks), simply by impersonating the user’s voice (possibly by using Adobe Voco software).*

The VAuth system

VAuth system (credit: Kassem Fawaz/ACM Mobicom 2017)

The U-M VAuth (continuous voice authentication, pronounced “vee-auth”) system aims to make that a lot more difficult. It uses a tiny wearable device (which could be built into a necklace, earbuds/earphones/headset, or eyeglasses) containing an accelerometer (or a special microphone) that detects and measures vibrations on the skin of a person’s face, throat, or chest.

VAuth prototype features accelerometer chip for detecting body voice vibrations and Bluetooth transmitter (credit: Huan Feng et al./ACM)

The team has built a prototype using an off-the-shelf accelerometer and a Bluetooth transmitter, which sends the vibration signal to a real-time matching engine in a device (such as Google Home). It matches these vibrations with the sound of that person’s voice to create a unique, secure signature that is constant during an entire session (not just at the beginning). The team has also developed matching algorithms and software for Google Now.
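
At its core, the matching engine asks whether the body-vibration signal is consistent with the audio the assistant heard. Here is a bare-bones sketch of that check using normalized cross-correlation; the real system is more elaborate (per-segment matching, for instance), and the threshold is invented:

```python
# Bare-bones sketch of the idea behind VAuth's matching engine: accept a
# voice command only if the wearable's vibration signal is strongly
# correlated with the microphone audio. Threshold and signals are made up.
import numpy as np

def normalized_xcorr_peak(accel, audio):
    accel = (accel - accel.mean()) / (accel.std() + 1e-9)
    audio = (audio - audio.mean()) / (audio.std() + 1e-9)
    xcorr = np.correlate(accel, audio, mode="full") / len(accel)
    return np.abs(xcorr).max()

def authenticate(accel, audio, threshold=0.4):
    return normalized_xcorr_peak(accel, audio) >= threshold

t = np.linspace(0, 1, 2000)
speech = np.sin(2 * np.pi * 180 * t) * np.sin(2 * np.pi * 3 * t)
accel = speech + 0.2 * np.random.randn(t.size)   # same voice, body-conducted
replay = np.random.randn(t.size)                  # attacker's injected audio
print(authenticate(accel, speech))   # True: vibrations match the audio
print(authenticate(accel, replay))   # False: no matching body vibrations
```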

Security holes in voice authentication systems

“Increasingly, voice is being used as a security feature but it actually has huge holes in it,” said Kang Shin, the Kevin and Nancy O’Connor Professor of Computer Science and professor of electrical engineering and computer science at U-M. “If a system is using only your voice signature, it can be very dangerous. We believe you have to have a second channel to authenticate the owner of the voice.”

VAuth doesn’t require training and is also immune to voice changes over time and different situations, such as sickness (a sore throat) or tiredness — a major limitation of voice biometrics, which require training from each individual who will use them, says the team.

The team tested VAuth with 18 users and 30 voice commands. It achieved 97 percent detection accuracy and a false-positive rate of less than 0.1 percent, regardless of its position on the body and the user’s language, accent, or even mobility. The researchers say it also successfully thwarts various practical attacks, such as replay attacks, mangled-voice attacks, and impersonation attacks.

A study on VAuth was presented Oct. 19 at the International Conference on Mobile Computing and Networking (MobiCom 2017) in Snowbird, Utah, and is available for open-access download.

The work was supported by the National Science Foundation. The researchers have applied for a patent and are seeking commercialization partners to help bring the technology to market.

* As explained in this KurzweilAI article, Adobe Voco technology (aka “Photoshop for voice”) makes it easy to add or replace a word in an audio recording of a human voice by simply editing a text transcript of the recording. New words are automatically synthesized in the speaker’s voice — even if they don’t appear anywhere else in the recording.


Abstract of Continuous Authentication for Voice Assistants

Voice has become an increasingly popular User Interaction (UI) channel, mainly contributing to the current trend of wearables, smart vehicles, and home automation systems. Voice assistants such as Alexa, Siri, and Google Now, have become our everyday fixtures, especially when/where touch interfaces are inconvenient or even dangerous to use, such as driving or exercising. The open nature of the voice channel makes voice assistants difficult to secure, and hence exposed to various threats as demonstrated by security researchers. To defend against these threats, we present VAuth, the first system that provides continuous authentication for voice assistants. VAuth is designed to fit in widely-adopted wearable devices, such as eyeglasses, earphones/buds and necklaces, where it collects the body-surface vibrations of the user and matches it with the speech signal received by the voice assistant’s microphone. VAuth guarantees the voice assistant to execute only the commands that originate from the voice of the owner. We have evaluated VAuth with 18 users and 30 voice commands and find it to achieve 97% detection accuracy and less than 0.1% false positive rate, regardless of VAuth’s position on the body and the user’s language, accent or mobility. VAuth successfully thwarts various practical attacks, such as replay attacks, mangled voice attacks, or impersonation attacks. It also incurs low energy and latency overheads and is compatible with most voice assistants.

‘Fog computing’ could improve communications during natural disasters

Hurricane Irma at peak intensity near the U.S. Virgin Islands on September 6, 2017 (credit: NOAA)

Researchers at the Georgia Institute of Technology have developed a system that uses edge computing (also known as fog computing) to deal with the loss of internet access in natural disasters such as hurricanes, tornadoes, and floods.

The idea is to create an ad hoc decentralized network that uses computing power built into mobile phones, routers, and other hardware to provide actionable data to emergency managers and first responders.

In a flooded area, for example, search and rescue personnel could continuously ping enabled phones, surveillance cameras, and “internet of things” devices in an area to determine their exact locations. That data could then be used to create density maps of people to prioritize and guide emergency response teams.
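
The density-map step itself is straightforward: bin the pinged device locations into a coarse grid and count devices per cell, as in this minimal sketch (coordinates and grid size are illustrative):

```python
# Illustrative density map from pinged device locations: bin lat/lon fixes
# into a coarse grid so responders can see where people are concentrated.
import numpy as np

def density_grid(lats, lons, bounds, n_bins=10):
    """bounds = (lat_min, lat_max, lon_min, lon_max)."""
    lat_min, lat_max, lon_min, lon_max = bounds
    grid, _, _ = np.histogram2d(
        lats, lons, bins=n_bins,
        range=[[lat_min, lat_max], [lon_min, lon_max]])
    return grid                      # grid[i, j] = device count in cell

# Fake pings clustered in one corner of a flooded district
rng = np.random.default_rng(0)
lats = 29.75 + 0.01 * rng.random(200)
lons = -95.37 + 0.01 * rng.random(200)
grid = density_grid(lats, lons, (29.74, 29.77, -95.38, -95.35))
print(grid.max(), "devices in the densest cell")
```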

Situational awareness for first responders

“We believe fog computing can become a potent enabler of decentralized, local social sensing services that can operate when internet connectivity is constrained,” said Kishore Ramachandran, PhD, computer science professor at Georgia Tech and senior author of a paper presented in April this year at the 2nd International Workshop on Social Sensing*.

“This capability will provide first responders and others with the level of situational awareness they need to make effective decisions in emergency situations.”

The team has proposed a generic software architecture for social sensing applications that is capable of exploiting the fog-enabled devices. The design has three components: a central management function that resides in the cloud, a data processing element placed in the fog infrastructure, and a sensing component on the user’s device.

Beyond emergency response during natural disasters, the team believes its proposed fog architecture can also benefit communities with limited or no internet access — for public transportation management, job recruitment, and housing, for example.

To monitor far-flung devices in areas with no internet access, a bus or other vehicle could be outfitted with fog-enabled sensing capabilities, the team suggests. As it travels in remote areas, it would collect data from sensing devices. Once in range of internet connectivity, the “data mule” bus would upload that information to centralized cloud-based platforms.
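
This “data mule” pattern is plain store-and-forward, as in this minimal sketch (the connectivity check and upload call are placeholders, not a real API):

```python
# Minimal store-and-forward "data mule" loop: buffer sensor readings while
# offline, flush to the cloud when connectivity appears.
from collections import deque

buffer = deque()

def collect(reading):
    buffer.append(reading)            # always works, even with no internet

def try_flush(has_internet, upload):
    while has_internet and buffer:
        upload(buffer.popleft())      # oldest readings first

collect({"sensor": "river_gauge_3", "level_m": 4.2})
collect({"sensor": "cam_17", "people_count": 12})
try_flush(has_internet=True, upload=print)   # prints both buffered readings
```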

* “Social sensing has emerged as a new paradigm for collecting sensory measurements by means of “crowd-sourcing” sensory data collection tasks to a human population. Humans can act as sensor carriers (e.g., carrying GPS devices that share location data), sensor operators (e.g., taking pictures with smart phones), or as sensors themselves (e.g., sharing their observations on Twitter). The proliferation of sensors in the possession of the average individual, together with the popularity of social networks that allow massive information dissemination, heralds an era of social sensing that brings about new research challenges and opportunities in this emerging field.” — SocialSens2017

Facebook’s internet-beaming drone completes first test flight

(credit: Facebook)

Facebook Connectivity Lab announced today the first full-scale test flight of Aquila — a solar-powered unmanned airplane/drone designed to bring affordable internet access to some of the 1.6 billion people living in remote locations with no access to mobile broadband networks.

When complete, Aquila will be able to circle a region up to 60 miles in diameter for up to 90 days at a time, beaming internet connectivity down to people in that area from an altitude of more than 60,000 feet. It will be part of a future fleet of drones.



Facebook’s Secret Conversations

(credit: Facebook)

Facebook began today (Friday, July 8) rolling out a new beta-version feature for Messenger called “Secret Conversations,” allowing for “one-to-one secret conversations … that will be end-to-end encrypted and which can only be read on one device of the person you’re communicating with.”

Facebook suggests the feature will be useful for discussing an illness or sending financial information (as in the pictures above). You can choose to set a timer to control how long each message you send remains visible within the conversation. (Rich content such as GIFs and videos, as well as payments, is not supported.)

The technology, described in a technical whitepaper (open access), is based on the Signal Protocol developed by Open Whisper Systems, which is also used in Open Whisper Systems’ own Signal messaging app (Chrome, iOS, Android),  WhatsApp, and Google’s Allo (not yet launched).

Unlike WhatsApp and iMessage, which automatically encrypt every message, Secret Conversations works only from a single device and is opt-in, which “will likely rankle many privacy advocates,” says Wired.

But not as much as all of these encrypted services rankle law enforcement agencies, since the feature hampers surveillance capabilities, it adds.

How to bring the entire web to VR

Google is working on new features to bring the web to VR, according to Google happiness evangelist François Beaufort.

To help web developers embed VR content in their web pages, the Google Chromium team has been working towards WebVR support in Chromium (programmers: see Chromium Code Reviews), Beaufort said. That means you can now use Cardboard- or Daydream-ready VR viewers to see pages with compliant VR content while browsing the web with Chrome.

(credit: Google)

“The team is just getting started on making the web work well for VR so stay tuned, there’s more to come!” he said.

Google previously launched VR view, which enables developers to embed immersive content on Android, iOS, and the web. Users can view it on their phone, with a Cardboard viewer, or with a Chrome browser on their desktop computer.

For native apps, programmers can embed a VR view in an app or web page by grabbing the latest Cardboard SDK for Android or iOS and adding a few lines of code.

On the web, embedding a VR view is as simple as adding an iframe on your site, as KurzweilAI did in the 360-degree view shown at the top of this page, using iframe code copied from the HTML on Introducing VR view: embed immersive content into your apps and websites on the Google Developers Blog. (Chrome browser is required. In addition to a VR viewer, you can use either the mouse or the four arrow keys to explore the image in 360 degrees.)

Your smartphone and tablet may be making you ADHD-like

(credit: KurzweilAI)

Smartphones and other digital technology may be causing ADHD-like symptoms, according to an open-access study published in the proceedings of ACM CHI ’16, the Human-Computer Interaction conference of the Association for Computing Machinery, ongoing in San Jose.

In a two-week experimental study, University of Virginia and University of British Columbia researchers showed that when students kept their phones on ring or vibrate and with notification alerts on, they reported more symptoms of inattention and hyperactivity than when they kept their phones on silent.

The results suggest that even people who have not been diagnosed with ADHD may experience some of the disorder’s symptoms, including distraction, fidgeting, having trouble sitting still, difficulty doing quiet tasks and activities, restlessness, and difficulty focusing and getting bored easily when trying to focus, the researchers said.

“We found the first experimental evidence that smartphone interruptions can cause greater inattention and hyperactivity — symptoms of attention deficit hyperactivity disorder — even in people drawn from a nonclinical population,” said Kostadin Kushlev, a psychology research scientist at the University of Virginia who led the study with colleagues at the University of British Columbia.

In the study, 221 students at the University of British Columbia, drawn from the general student population, were assigned for one week to maximize phone interruptions by keeping notification alerts on and their phones within easy reach.

Indirect effects of manipulating smartphone interruptions on psychological well-being via inattention symptoms. Numbers are unstandardized regression coefficients. (credit: Kostadin Kushlev et al./CHI 2016)

During another week participants were assigned to minimize phone interruptions by keeping alerts off and their phones away.

At the end of each week, participants completed questionnaires assessing inattention and hyperactivity. Unsurprisingly, the results showed that the participants experienced significantly higher levels of inattention and hyperactivity when alerts were turned on.
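
The figure above reports an indirect (mediation) effect: alerts raise inattention, and inattention in turn predicts lower well-being. Here is a simplified product-of-coefficients illustration on fabricated data; the published analysis is more complete (for one thing, it controls for the treatment when estimating the second path):

```python
# Synthetic illustration of the mediation analysis in the figure above:
# indirect effect = (alerts -> inattention slope a) x (inattention ->
# well-being slope b). All data here are fabricated.
import numpy as np

rng = np.random.default_rng(1)
alerts_on = rng.integers(0, 2, 200).astype(float)      # 0 = off, 1 = on
inattention = 0.8 * alerts_on + rng.normal(0, 1, 200)  # path a
wellbeing = -0.5 * inattention + rng.normal(0, 1, 200) # path b

def slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = slope(alerts_on, inattention)
b = slope(inattention, wellbeing)   # full model would also control for alerts
print(f"indirect effect a*b = {a * b:.2f}")            # roughly -0.4
```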

Digital mobile users focus more on concrete details than the big picture

Using digital platforms such as tablets and laptops for reading may also make you more inclined to focus on concrete details rather than interpreting information more contemplatively or abstractly (seeing the big picture), according to another open-access study published in ACM CHI ’16 proceedings.

Researchers at Dartmouth’s Tiltfactor lab and the Human-Computer Interaction Institute at Carnegie Mellon University conducted four studies with a total of 300 participants. Participants were tested by reading a short story and a table of information about fictitious Japanese car models.

The studies revealed that individuals who completed the same information processing task on a digital mobile device (a tablet or laptop computer) versus a non-digital platform (a physical printout) exhibited a lower level of “construal” (abstract) thinking. However, the researchers also found that engaging the subjects in a more abstract mindset prior to an information processing task on a digital platform appeared to help facilitate a better performance on tasks that require abstract thinking.

Coping with digital overload

Given the widespread acceptance of digital devices, as evidenced by millions of apps, ubiquitous smartphones, and the distribution of iPads in schools, surprisingly few studies exist about how digital tools affect us, the researchers noted.

“The ever-increasing demands of multitasking, divided attention, and information overload that individuals encounter in their use of digital technologies may cause them to ‘retreat’ to the less cognitively demanding lower end of the concrete-abstract continuum,” according to the authors. They also say the new research suggests that “this tendency may be so well-ingrained that it generalizes to contexts in which those resource demands are not immediately present.”

Their recommendation for human-computer interaction designers and researchers: “Consider strategies for encouraging users to see the ‘forest’ as well as the ‘trees’ when interacting with digital platforms.”

Jony Ive, are you listening?


Abstract of “Silence your phones”: Smartphone notifications increase inattention and hyperactivity symptoms

As smartphones increasingly pervade our daily lives, people are ever more interrupted by alerts and notifications. Using both correlational and experimental methods, we explored whether such interruptions might be causing inattention and hyperactivity — symptoms associated with Attention Deficit Hyperactivity Disorder (ADHD) — even in people not clinically diagnosed with ADHD. We recruited a sample of 221 participants from the general population. For one week, participants were assigned to maximize phone interruptions by keeping notification alerts on and their phones within their reach/sight. During another week, participants were assigned to minimize phone interruptions by keeping alerts off and their phones away. Participants reported higher levels of inattention and hyperactivity when alerts were on than when alerts were off. Higher levels of inattention in turn predicted lower productivity and psychological well-being. These findings highlight some of the costs of ubiquitous connectivity and suggest how people can reduce these costs simply by adjusting existing phone settings.


Abstract of High-Low Split: Divergent Cognitive Construal Levels Triggered by Digital and Non-digital Platforms

The present research investigated whether digital and non-digital platforms activate differing default levels of cognitive construal. Two initial randomized experiments revealed that individuals who completed the same information processing task on a digital mobile device (a tablet or laptop computer) versus a non-digital platform (a physical print-out) exhibited a lower level of construal, one prioritizing immediate, concrete details over abstract, decontextualized interpretations. This pattern emerged both in digital platform participants’ greater preference for concrete versus abstract descriptions of behaviors as well as superior performance on detail-focused items (and inferior performance on inference-focused items) on a reading comprehension assessment. A pair of final studies found that the likelihood of correctly solving a problem-solving task requiring higher-level “gist” processing was: (1) higher for participants who processed the information for the task on a non-digital versus digital platform and (2) heightened for digital platform participants who had first completed an activity activating an abstract mindset, compared to (equivalent) performance levels exhibited by participants who had either completed no prior activity or completed an activity activating a concrete mindset.


What happens when drones and people sync their vision?

Multiple recon drones in the sky all suddenly aim their cameras at a person of interest on the ground, synced to what observers on the ground are seeing…

That could be a reality soon, thanks to an agreement just announced by the mysterious SICdrone, an unmanned aircraft system manufacturer, and CrowdOptic, an “interactive streaming platform that connects the world through smart devices.”

A CrowdOptic “cluster” — multiple people focused on the same object.  (credit: CrowdOptic)

CrowdOptic’s technology lets a “cluster” (multiple people or objects) point their cameras or smartphones at the same thing (say, at a concert or sporting event), with different views, allowing for group chat or sharing content.
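
One way to detect such a cluster geometrically (not necessarily CrowdOptic’s patented method) is to treat each device as a ray, a position plus a compass bearing, and solve for the point that best intersects all the rays; all numbers below are made up:

```python
# Geometric sketch of focus clustering: given device positions and unit
# viewing directions in the plane, find the point minimizing the summed
# squared distance to all viewing rays (least squares).
import numpy as np

def focus_point(positions, directions):
    """positions: (n, 2); directions: (n, 2) unit vectors."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(positions, directions):
        proj = np.eye(2) - np.outer(d, d)   # projector orthogonal to the ray
        A += proj
        b += proj @ p
    return np.linalg.solve(A, b)

positions = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, -8.0]])
target = np.array([5.0, 5.0])                 # what everyone is looking at
directions = target - positions
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
print(focus_point(positions, directions))     # recovers ~ [5. 5.]
```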

Drone air control

For SICdrone, the idea is to use CrowdOptic tech to automatically orchestrate the drones’ onboard cameras to track and capture multiple camera angles (and views) of a single point of interest.* Beyond that, this tech could provide vital flight-navigation systems to coordinate multiple drones without having them conflict (or crash), says CrowdOptic CEO Jon Fisher.

This disruptive innovation might become essential (and mandated by law?) as Amazon, Flirtey, and others compete to dominate drone delivery. It could also possibly help with the growing concern about drone risk to airplanes.**

Other current (and possible) uses of CrowdOptic tech include first response, news and sports reporting, advertising analytics (seeing what people focus on), linking up augmented-reality and VR headset users, and “social TV” (live attendees — using the Periscope app, for example — providing the most interesting video to people watching at home), Fisher explained to KurzweilAI.

* This uses several CrowdOptic patents (U.S. Patents 8,527,340, 9,020,832, and 9,264,474).

** Drone Comes Within 200 Feet Of Passenger Jet Coming In To Land At LAX

Can human-machine superintelligence solve the world’s most dire problems?


Human Computation Institute | Dr. Pietro Michelucci

“Human computation” — combining human and computer intelligence in crowd-powered systems — might be what we need to solve the “wicked” problems of the world, such as climate change and geopolitical conflict, say researchers from the Human Computation Institute (HCI) and Cornell University.

In an article published in the journal Science, the authors present a new vision of human computation that takes on hard problems that until recently have remained out of reach.

Humans surpass machines at many things, ranging from visual pattern recognition to creative abstraction. And with the help of computers, these cognitive abilities can be effectively combined into multidimensional collaborative networks that achieve what traditional problem-solving cannot, the authors say.

Microtasking

Microtasking: Crowdsourcing breaks large tasks down into microtasks, which can be things at which humans excel, like classifying images. The microtasks are delivered to a large crowd via a user-friendly interface, and the data are aggregated for further processing. (credit: Pietro Michelucci and Janis L. Dickinson/Science)

Most of today’s human-computation systems rely on “microtasking” — sending “micro-tasks” to many individuals and then stitching together the results. For example, 165,000 volunteers in EyeWire have analyzed thousands of images online to help build the world’s most complete map of retinal neurons.

Another example is reCAPTCHA, a Web widget used by 100 million people a day when they transcribe distorted text into a box to prove they are human.

“Microtasking is well suited to problems that can be addressed by repeatedly applying the same simple process to each part of a larger data set, such as stitching together photographs contributed by residents to decide where to drop water during a forest fire,” the authors note.
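
The aggregation step in microtasking is often no more than majority voting over redundant answers, as in this minimal sketch:

```python
# Minimal microtask aggregation: give each image to several volunteers and
# keep the majority label, as microtasking platforms commonly do.
from collections import Counter

def aggregate(labels_by_item):
    """labels_by_item: {item_id: [label, label, ...]} -> consensus labels."""
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in labels_by_item.items()}

labels = {
    "img_001": ["neuron", "neuron", "background"],
    "img_002": ["background", "background", "background"],
}
print(aggregate(labels))   # {'img_001': 'neuron', 'img_002': 'background'}
```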

But this microtasking approach alone cannot address the tough challenges we face today, say the authors. “A radically new approach is needed to solve ‘wicked problems’ — those, such as climate change, disease, and geopolitical conflict, that involve many constantly changing, interacting systems, and whose solutions have unforeseen consequences and non-obvious secondary effects, such as political exploitation of a pandemic crisis.”

New human-computation technologies

New human-computation technologies: In creating problem-solving ecosystems, researchers are beginning to explore how to combine the cognitive processing of many human contributors with machine-based computing to build faithful models of the complex, interdependent systems that underlie the world’s most challenging problems. (credit: Pietro Michelucci and Janis L. Dickinson/Science)

The authors say new human computation technologies can help build flexible collaborative environments. Recent techniques provide real-time access to crowd-based inputs, where individual contributions can be processed by a computer and sent to the next person for improvement or analysis of a different kind.

This idea is already taking shape in several human-computation projects:

  • YardMap.org, launched by Cornell in 2012, maps global conservation efforts. It allows participants to interact and build on each other’s work — something that crowdsourcing alone cannot achieve.
  • WeCureAlz.com accelerates Cornell-based Alzheimer’s disease research by combining two successful microtasking systems into an interactive analytic pipeline that builds blood-flow models of mouse brains. The stardust@home system, which was used to search for comet dust in one million images of aerogel, is being adapted to identify stalled blood vessels, which will then be pinpointed in the brain by a modified version of the EyeWire system.

“By enabling members of the general public to play some simple online game, we expect to reduce the time to treatment discovery from decades to just a few years,” says HCI director and lead author Pietro Michelucci, PhD. “This gives an opportunity for anyone, including the tech-savvy generation of caregivers and early stage AD patients, to take the matter into their own hands.”


Abstract of The power of crowds

Human computation, a term introduced by Luis von Ahn, refers to distributed systems that combine the strengths of humans and computers to accomplish tasks that neither can do alone. The seminal example is reCAPTCHA, a Web widget used by 100 million people a day when they transcribe distorted text into a box to prove they are human. This free cognitive labor provides users with access to Web content and keeps websites safe from spam attacks, while feeding into a massive, crowd-powered transcription engine that has digitized 13 million articles from The New York Times archives. But perhaps the best known example of human computation is Wikipedia. Despite initial concerns about accuracy, it has become the key resource for all kinds of basic information. Information science has begun to build on these early successes, demonstrating the potential to evolve human computation systems that can model and address wicked problems (those that defy traditional problem-solving methods) at the intersection of economic, environmental, and sociopolitical systems.