AI algorithm with ‘social skills’ teaches humans how to collaborate

(credit: Iyad Rahwan)

An international team has developed an AI algorithm with social skills that outperforms humans at cooperating with both people and machines in a variety of two-player games.

The researchers, led by Iyad Rahwan, PhD, an MIT Associate Professor of Media Arts and Sciences, tested humans and the algorithm, called S# (“S sharp”), in three types of interactions: machine-machine, human-machine, and human-human. In most instances, machines programmed with S# outperformed humans in finding compromises that benefit both parties.

“Two humans, if they were honest with each other and loyal, would have done as well as two machines,” said lead author BYU computer science professor Jacob Crandall. “As it is, about half of the humans lied at some point. So essentially, this particular algorithm is learning that moral characteristics are better [since it’s programmed to not lie] and it also learns to maintain cooperation once it emerges.”

“The end goal is that we understand the mathematics behind cooperation with people and what attributes artificial intelligence needs to develop social skills,” said Crandall. “AI needs to be able to respond to us and articulate what it’s doing. It has to be able to interact with other people.”

How casual talk by AI helps humans be more cooperative

One important finding: colloquial phrases (called “cheap talk” in the study) doubled the amount of cooperation. In tests, if human participants cooperated with the machine, the machine might respond with a “Sweet. We are getting rich!” or “I accept your last proposal.” If the participants tried to betray the machine or back out of a deal with it, they might be met with a trash-talking “Curse you!”, “You will pay for that!” or even an “In your face!”
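To make the signaling idea concrete, here is a minimal, hypothetical sketch of an agent that emits one of the phrases quoted above in response to game events; the event names and structure are assumptions for illustration, not the S# implementation.

```python
# Hypothetical sketch of event-triggered "cheap talk" (the event names and
# structure are assumptions; this is not the S# implementation): the agent
# looks at what happened on the last round and emits one of the canned
# phrases quoted above.
import random

CHEAP_TALK = {
    "mutual_cooperation": ["Sweet. We are getting rich!", "I accept your last proposal."],
    "partner_betrayal":   ["Curse you!", "You will pay for that!"],
    "successful_payback": ["In your face!"],
}

def speak(event):
    """Return a cheap-talk phrase for a recognized game event, else stay silent."""
    phrases = CHEAP_TALK.get(event)
    return random.choice(phrases) if phrases else None

# Example: the human partner backed out of an agreed deal on the last round.
print(speak("partner_betrayal"))   # e.g. "Curse you!"
```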

And when machines used cheap talk, their human counterparts were often unable to tell whether they were playing a human or machine — a sort of mini “Turing test.”

The research findings, Crandall hopes, could have long-term implications for human relationships. “In society, relationships break down all the time,” he said. “People that were friends for years all of a sudden become enemies. Because the machine is often actually better at reaching these compromises than we are, it can potentially teach us how to do this better.”

The research is described in an open-access paper in Nature Communications.

A human-machine collaborative chatbot system 

An actual conversation on Evorus, combining multiple chatbots and workers. (credit: T. Huang et al.)

In a related study, Carnegie Mellon University (CMU) researchers have created a new collaborative chatbot called Evorus that goes beyond Siri, Alexa, and Cortana by adding humans in the loop.

Evorus combines a chatbot called Chorus with inputs by paid crowd workers at Amazon Mechanical Turk, who answer questions from users and vote on the best answer. Evorus keeps track of the questions asked and answered and, over time, begins to suggest these answers for subsequent questions. It can also use multiple chatbots, such as a vote bot, Yelp Bot (restaurants), and Weather Bot, to provide enhanced information.
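Based on that description, here is a hypothetical Python sketch of the overall flow; the class and bot names are invented for illustration and are not CMU's code.

```python
# Hypothetical sketch of the Evorus flow described above (structure and names
# are assumptions, not CMU's code): candidate answers come from automated
# chatbots, from reused past crowd answers, and from crowd workers; votes
# decide which candidate is sent to the user.
from collections import defaultdict

class CrowdAssistedAgent:
    def __init__(self, chatbots):
        self.chatbots = chatbots                  # callables: question -> answer or None
        self.past_answers = defaultdict(list)     # prior crowd answers, keyed by keyword

    def candidates(self, question, crowd_answers):
        cands = [bot(question) for bot in self.chatbots]
        for word in question.lower().split():
            cands.extend(self.past_answers[word])  # reuse answers to similar questions
        cands.extend(crowd_answers)
        return [c for c in cands if c]

    def respond(self, question, crowd_answers, votes):
        cands = self.candidates(question, crowd_answers)
        best = max(cands, key=lambda c: votes.get(c, 0))   # highest-voted answer wins
        for answer in crowd_answers:                       # remember crowd answers for reuse
            for word in question.lower().split():
                self.past_answers[word].append(answer)
        return best

# Toy usage with a stand-in weather bot (hypothetical; Evorus integrates Yelp,
# weather, and other bots alongside Mechanical Turk workers).
weather_bot = lambda q: "65F and sunny right now." if "weather" in q.lower() else None
agent = CrowdAssistedAgent([weather_bot])
crowd = ["Looks like rain later today."]
print(agent.respond("what's the weather like?", crowd, votes={"65F and sunny right now.": 2}))
```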

Humans are simultaneously training the system’s AI, making it gradually less dependent on people, says Jeff Bigham, associate professor in the CMU Human-Computer Interaction Institute.

The hope is that as the system grows, the AI will be able to handle an increasing percentage of questions, while the number of crowd workers necessary to respond to “long tail” questions will remain relatively constant.

Keeping humans in the loop also reduces the risk that malicious users will manipulate the conversational agent inappropriately, as occurred when Microsoft briefly deployed its Tay chatbot in 2016, noted co-developer Ting-Hao Huang, a Ph.D. student in the Language Technologies Institute (LTI).

The preliminary system is available for download and use by anyone willing to be part of the research effort. It is deployed via Google Hangouts, which allows for voice input as well as access from computers, phones, and smartwatches. The software architecture can also accept automated question-answering components developed by third parties.

An open-access research paper on Evorus, available online, will be presented at CHI 2018, the Conference on Human Factors in Computing Systems, in Montreal, April 21–26, 2018.


Abstract of Cooperating with machines

Since Alan Turing envisioned artificial intelligence, technical progress has often been measured by the ability to defeat humans in zero-sum encounters (e.g., Chess, Poker, or Go). Less attention has been given to scenarios in which human–machine cooperation is beneficial but non-trivial, such as scenarios in which human and machine preferences are neither fully aligned nor fully in conflict. Cooperation does not require sheer computational power, but instead is facilitated by intuition, cultural norms, emotions, signals, and pre-evolved dispositions. Here, we develop an algorithm that combines a state-of-the-art reinforcement-learning algorithm with mechanisms for signaling. We show that this algorithm can cooperate with people and other algorithms at levels that rival human cooperation in a variety of two-player repeated stochastic games. These results indicate that general human–machine cooperation is achievable using a non-trivial, but ultimately simple, set of algorithmic mechanisms.


Abstract of A Crowd-powered Conversational Assistant Built to Automate Itself Over Time

Crowd-powered conversational assistants have been shown to be more robust than automated systems, but do so at the cost of higher response latency and monetary costs. A promising direction is to combine the two approaches for high quality, low latency, and low cost solutions. In this paper, we introduce Evorus, a crowd-powered conversational assistant built to automate itself over time by (i) allowing new chatbots to be easily integrated to automate more scenarios, (ii) reusing prior crowd answers, and (iii) learning to automatically approve response candidates. Our 5-month-long deployment with 80 participants and 281 conversations shows that Evorus can automate itself without compromising conversation quality. Crowd-AI architectures have long been proposed as a way to reduce cost and latency for crowd-powered systems; Evorus demonstrates how automation can be introduced successfully in a deployed system. Its architecture allows future researchers to make further innovation on the underlying automated components in the context of a deployed open domain dialog system.

A tool to debug ‘black box’ deep-learning neural networks

Oops! A new debugging tool called DeepXplore generates real-world test images meant to expose logic errors in deep neural networks. The darkened photo at right tricked one set of neurons into telling the car to turn into the guardrail. After catching the mistake, the tool retrains the network to fix the bug. (credit: Columbia Engineering)

Researchers at Columbia and Lehigh universities have developed a method for error-checking the reasoning of the thousands or millions of neurons in unsupervised (self-taught) deep-learning neural networks, such as those used in self-driving cars.

Their tool, DeepXplore, feeds confusing, real-world inputs into the network to expose rare instances of flawed reasoning, such as the 2016 incident in which a Tesla on Autopilot collided with a truck whose white trailer it failed to distinguish from the bright sky, killing the driver. Deep learning systems don’t explain how they make their decisions, which makes them hard to trust.

Modeled after the human brain, deep learning uses layers of artificial neurons that process and consolidate information. This results in a set of rules to solve complex problems, from recognizing friends’ faces online to translating email written in Chinese. The technology has achieved impressive feats of intelligence, but as more tasks become automated this way, concerns about safety, security, and ethics are growing.

Finding bugs by generating test images

Debugging the neural networks in self-driving cars is an especially slow and tedious process, with no way to measure how thoroughly logic within the network has been checked for errors. Current limited approaches include randomly feeding manually generated test images into the network until one triggers a wrong decision (telling the car to veer into the guardrail, for example); and “adversarial testing,” which automatically generates test images that it alters incrementally until one image tricks the system.

The new DeepXplore solution — presented Oct. 29, 2017 in an open-access paper at ACM’s Symposium on Operating Systems Principles in Shanghai — can find a wider variety of bugs than random or adversarial testing by using the network itself to generate test images likely to cause neuron clusters to make conflicting decisions, according to the researchers.

To simulate real-world conditions, photos are lightened and darkened, and made to mimic the effect of dust on a camera lens, or a person or object blocking the camera’s view. A photo of the road may be darkened just enough, for example, to cause one set of neurons to tell the car to turn left, and two other sets of neurons to tell it to go right.

After inferring that the first set misclassified the photo, DeepXplore automatically retrains the network to recognize the darker image and fix the bug. Using optimization techniques, researchers have designed DeepXplore to trigger as many conflicting decisions with its test images as it can while maximizing the number of neurons activated.
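To make the joint-optimization idea concrete, here is a small, illustrative PyTorch sketch with toy untrained models and made-up parameters (not the authors' implementation): gradient ascent on the input encourages two networks to disagree on a class while also activating a chosen, previously quiet neuron.

```python
# Illustrative sketch of DeepXplore-style joint optimization (toy, untrained
# models and made-up parameters; not the authors' implementation). Gradient
# ascent on the input pushes two models toward disagreeing on a class while
# also pushing one chosen hidden neuron above zero ("neuron coverage").
import torch
import torch.nn as nn

torch.manual_seed(0)
model_a = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
model_b = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
hidden_a = model_a[:3]          # hidden-layer activations of model_a (post-ReLU)

x = torch.rand(1, 1, 28, 28, requires_grad=True)   # seed "image"
target_class = 3                # an arbitrary class used to split the models' votes
neuron_idx = 5                  # a hidden neuron we want the test input to activate
lam, step = 0.5, 0.05           # coverage weight and gradient-ascent step size

for _ in range(100):
    out_a, out_b = model_a(x), model_b(x)
    # Differential-behavior term: raise model_b's score for the class while
    # lowering model_a's, encouraging the two models to disagree.
    diff_term = out_b[0, target_class] - out_a[0, target_class]
    # Neuron-coverage term: push the chosen hidden neuron's activation up.
    coverage_term = hidden_a(x)[0, neuron_idx]
    objective = diff_term + lam * coverage_term
    objective.backward()
    with torch.no_grad():
        x += step * x.grad.sign()   # gradient ascent on the input itself
        x.clamp_(0.0, 1.0)          # crude stand-in for realistic-image constraints
    x.grad = None

print("model_a says:", model_a(x).argmax().item(), "| model_b says:", model_b(x).argmax().item())
```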

“You can think of our testing process as reverse-engineering the learning process to understand its logic,” said co-developer Suman Jana, a computer scientist at Columbia Engineering and a member of the Data Science Institute. “This gives you some visibility into what the system is doing and where it’s going wrong.”

Testing their software on 15 state-of-the-art neural networks, including Nvidia’s Dave 2 network for self-driving cars, the researchers uncovered thousands of bugs missed by previous techniques. They report activating up to 100 percent of network neurons — 30 percent more on average than either random or adversarial testing — and bringing overall accuracy up to 99 percent in some networks, a 3 percent improvement on average.*

The ultimate goal: certifying a neural network is bug-free

Still, a high level of assurance is needed before regulators and the public are ready to embrace robot cars and other safety-critical technology like autonomous air-traffic control systems. One limitation of DeepXplore is that it can’t certify that a neural network is bug-free. That requires isolating and testing the exact rules the network has learned.

A new tool developed at Stanford University, called ReluPlex, uses the power of mathematical proofs to do this for small networks. Costly in computing time, but offering strong guarantees, this small-scale verification technique complements DeepXplore’s full-scale testing approach, said ReluPlex co-developer Clark Barrett, a computer scientist at Stanford.

“Testing techniques use efficient and clever heuristics to find problems in a system, and it seems that the techniques in this paper are particularly good,” he said. “However, a testing technique can never guarantee that all the bugs have been found, or similarly, if it can’t find any bugs, that there are, in fact, no bugs.”

DeepXplore has applications beyond self-driving cars. It can find malware disguised as benign code in anti-virus software, and uncover discriminatory assumptions baked into predictive policing and criminal sentencing software, for example.

The team has made their open-source software public for other researchers to use, and launched a website to let people upload their own data to see how the testing process works.

* The team evaluated DeepXplore on real-world datasets including Udacity self-driving car challenge data, image data from ImageNet and MNIST, Android malware data from Drebin, and PDF malware data from Contagio/VirusTotal, as well as production-quality deep neural networks trained on these datasets, such as those ranked at the top of the Udacity self-driving car challenge. Their results show that DeepXplore found thousands of incorrect corner-case behaviors (e.g., self-driving cars crashing into guard rails) in 15 state-of-the-art deep learning models with a total of 132,057 neurons, trained on five popular datasets containing around 162 GB of data.


Abstract of DeepXplore: Automated Whitebox Testing of Deep Learning Systems

Deep learning (DL) systems are increasingly deployed in safety- and security-critical domains including self-driving cars and malware detection, where the correctness and predictability of a system’s behavior for corner case inputs are of great importance. Existing DL testing depends heavily on manually labeled data and therefore often fails to expose erroneous behaviors for rare inputs.

We design, implement, and evaluate DeepXplore, the first whitebox framework for systematically testing real-world DL systems. First, we introduce neuron coverage for systematically measuring the parts of a DL system exercised by test inputs. Next, we leverage multiple DL systems with similar functionality as cross-referencing oracles to avoid manual checking. Finally, we demonstrate how finding inputs for DL systems that both trigger many differential behaviors and achieve high neuron coverage can be represented as a joint optimization problem and solved efficiently using gradient-based search techniques.

DeepXplore efficiently finds thousands of incorrect corner case behaviors (e.g., self-driving cars crashing into guard rails and malware masquerading as benign software) in state-of-the-art DL models with thousands of neurons trained on five popular datasets including ImageNet and Udacity self-driving challenge data. For all tested DL models, on average, DeepXplore generated one test input demonstrating incorrect behavior within one second while running only on a commodity laptop. We further show that the test inputs generated by DeepXplore can also be used to retrain the corresponding DL model to improve the model’s accuracy by up to 3%.

This voice-authentication wearable could block voice-assistant or bank spoofing

“Alexa, turn off my security system.” (credit: Amazon)

University of Michigan (U-M) scientists have developed a voice-authentication system for reducing the risk of being spoofed when you use a biometric system to log into secure services or a voice assistant (such as Amazon Echo and Google Home).

A hilarious example of spoofing a voice assistant happened during Google’s 2017 Super Bowl commercial. When actors voiced “OK Google” commands on TV, viewers’ Google Home devices obediently began to play whale noises, flip lights on, and take other actions.

More seriously, an adversary could potentially bypass current voice-as-biometric authentication mechanisms, such as Nuance’s “FreeSpeech” customer authentication platform (used in call centers and banks), simply by impersonating the user’s voice (possibly with Adobe Voco software), the U-M scientists also point out.*

The VAuth system

VAuth system (credit: Kassem Fawaz/ACM Mobicom 2017)

The U-M VAuth (continuous voice authentication, pronounced “vee-auth”) system aims to make that a lot more difficult. It uses a tiny wearable device (which could be built in to a necklace, earbud/earphones/headset, or eyeglasses) containing an accelerometer (or a special microphone) that detects and measures vibrations on the skin of a person’s face, throat, or chest.

VAuth prototype features accelerometer chip for detecting body voice vibrations and Bluetooth transmitter (credit: Huan Feng et al./ACM)

The team has built a prototype using an off-the-shelf accelerometer and a Bluetooth transmitter, which sends the vibration signal to a real-time matching engine in a device (such as Google Home). The engine matches these vibrations with the sound of that person’s voice to create a unique, secure signature that is verified continuously throughout a session (not just at the beginning). The team has also developed matching algorithms and software for Google Now.
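Based on that description, here is a minimal Python sketch of how such a matching engine might decide whether a command came from the wearer; the envelope-correlation test and threshold are assumptions for illustration, not the authors' algorithm.

```python
# Illustrative sketch of a VAuth-style check (an assumption based on the
# description above, not the authors' matching algorithm): accept a voice
# command only if the energy envelope of the wearable's vibration signal
# tracks the envelope of the audio the assistant heard.
import numpy as np

def envelope(signal, fs, frame_ms=20):
    """Short-time RMS energy envelope of a 1-D signal."""
    frame = max(1, int(fs * frame_ms / 1000))
    n = len(signal) // frame
    return np.array([np.sqrt(np.mean(signal[i * frame:(i + 1) * frame] ** 2)) for i in range(n)])

def command_matches(vibration, audio, fs, threshold=0.7):
    env_v, env_a = envelope(vibration, fs), envelope(audio, fs)
    m = min(len(env_v), len(env_a))
    # A genuine command spoken by the wearer should make both envelopes rise
    # and fall together; a replayed or injected command should not.
    corr = np.corrcoef(env_v[:m], env_a[:m])[0, 1]
    return corr >= threshold

# Toy demo with synthetic signals (1 s at 8 kHz).
fs = 8000
t = np.linspace(0, 1, fs, endpoint=False)
rng = np.random.default_rng(0)
vibration = np.sin(2 * np.pi * 3 * t) * rng.standard_normal(fs) * 0.5   # wearer speaks
genuine_audio = vibration + 0.1 * rng.standard_normal(fs)               # same speech at the mic
spoofed_audio = rng.standard_normal(fs)                                 # someone else's command
print(command_matches(vibration, genuine_audio, fs))   # True  -> execute command
print(command_matches(vibration, spoofed_audio, fs))   # False -> reject command
```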

Security holes in voice authentication systems

“Increasingly, voice is being used as a security feature but it actually has huge holes in it,” said Kang Shin, the Kevin and Nancy O’Connor Professor of Computer Science and professor of electrical engineering and computer science at U-M. “If a system is using only your voice signature, it can be very dangerous. We believe you have to have a second channel to authenticate the owner of the voice.”

VAuth doesn’t require training and is also immune to voice changes over time and different situations, such as sickness (a sore throat) or tiredness — a major limitation of voice biometrics, which require training from each individual who will use them, says the team.

The team tested VAuth with 18 users and 30 voice commands. It achieved a 97-percent detection accuracy and less than 0.1 percent false positive rate, regardless of its position on the body and the user’s language, accent or even mobility. The researchers say it also successfully thwarts various practical attacks, such as replay attacks, mangled voice attacks, or impersonation attacks.

A study on VAuth was presented Oct. 19 at the International Conference on Mobile Computing and Networking, MobiCom 2017, in Snowbird, Utah and is available for open-access download.

The work was supported by the National Science Foundation. The researchers have applied for a patent and are seeking commercialization partners to help bring the technology to market.

* As explained in this KurzweilAI article, Adobe Voco technology (aka “Photoshop for voice”) makes it easy to add or replace a word in an audio recording of a human voice by simply editing a text transcript of the recording. New words are automatically synthesized in the speaker’s voice — even if they don’t appear anywhere else in the recording.


Abstract of Continuous Authentication for Voice Assistants

Voice has become an increasingly popular User Interaction (UI) channel, mainly contributing to the current trend of wearables, smart vehicles, and home automation systems. Voice assistants such as Alexa, Siri, and Google Now, have become our everyday fixtures, especially when/where touch interfaces are inconvenient or even dangerous to use, such as driving or exercising. The open nature of the voice channel makes voice assistants difficult to secure, and hence exposed to various threats as demonstrated by security researchers. To defend against these threats, we present VAuth, the first system that provides continuous authentication for voice assistants. VAuth is designed to fit in widely-adopted wearable devices, such as eyeglasses, earphones/buds and necklaces, where it collects the body-surface vibrations of the user and matches it with the speech signal received by the voice assistant’s microphone. VAuth guarantees the voice assistant to execute only the commands that originate from the voice of the owner. We have evaluated VAuth with 18 users and 30 voice commands and find it to achieve 97% detection accuracy and less than 0.1% false positive rate, regardless of VAuth’s position on the body and the user’s language, accent or mobility. VAuth successfully thwarts various practical attacks, such as replay attacks, mangled voice attacks, or impersonation attacks. It also incurs low energy and latency overheads and is compatible with most voice assistants.

How to turn audio clips into realistic lip-synced video


UW (University of Washington) | UW researchers create realistic video from audio files alone

University of Washington researchers at the UW Graphics and Image Laboratory have developed new algorithms that turn audio clips of a person into a realistic, lip-synced video of that person, starting with an existing video of them speaking on a different topic.

As detailed in a paper to be presented Aug. 2 at SIGGRAPH 2017, the team successfully generated a highly realistic video of former president Barack Obama talking about terrorism, fatherhood, job creation and other topics, using audio clips of those speeches along with existing weekly video addresses in which he originally spoke on different topics.

Realistic audio-to-video conversion has practical applications like improving video conferencing for meetings (streaming audio over the internet takes up far less bandwidth than video, reducing video glitches), or holding a conversation with a historical figure in virtual reality, said Ira Kemelmacher-Shlizerman, an assistant professor at the UW’s Paul G. Allen School of Computer Science & Engineering.


Supasorn Suwajanakorn | Teaser — Synthesizing Obama: Learning Lip Sync from Audio

This beats previous audio-to-video conversion processes, which involved filming multiple people in a studio saying the same sentences over and over to capture how particular sounds correlate with different mouth shapes, an approach that is expensive, tedious, and time-consuming. The new machine learning tool may also help overcome the “uncanny valley” problem, which has dogged efforts to create realistic video from audio.

How to do it

A neural network first converts the sounds from an audio file into basic mouth shapes. Then the system grafts and blends those mouth shapes onto an existing target video and adjusts the timing to create a realistic, lip-synced video of the person delivering the new speech. (credit: University of Washington)

1. Find or record a video of the person (or use video chat tools like Skype to create a new video) for the neural network to learn from. There are millions of hours of video that already exist from interviews, video chats, movies, television programs and other sources, the researchers note. (Obama was chosen because there were hours of presidential videos in the public domain.)

2. Train the neural network to watch videos of the person and translate different audio sounds into basic mouth shapes.

3. The system then uses the audio of an individual’s speech to generate realistic mouth shapes, which are then grafted onto and blended with the head of that person in the target video. A small time shift lets the neural network anticipate what the person is going to say next. (A code sketch of steps 2–3 follows this list.)

4. Currently, the neural network is designed to learn on one individual at a time, meaning that Obama’s voice — speaking words he actually uttered — is the only information used to “drive” the synthesized video. Future steps, however, include helping the algorithms generalize across situations to recognize a person’s voice and speech patterns with less data, with only an hour of video to learn from, for instance, instead of 14 hours.
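To make steps 2 and 3 concrete, here is a minimal, illustrative PyTorch sketch; the recurrent model, feature counts, and time-shift value are stand-ins, not the UW network.

```python
# Minimal, illustrative sketch of steps 2-3 (a stand-in recurrent model with
# made-up feature and parameter counts; not the UW network): a recurrent
# network maps a sequence of audio features to per-frame mouth-shape
# parameters, with a small time shift so it can "hear" a little future audio.
import torch
import torch.nn as nn

class AudioToMouth(nn.Module):
    def __init__(self, n_audio_features=13, n_mouth_params=18, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(n_audio_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_mouth_params)

    def forward(self, audio_frames):              # (batch, time, n_audio_features)
        hidden_states, _ = self.rnn(audio_frames)
        return self.head(hidden_states)            # (batch, time, n_mouth_params)

model = AudioToMouth()
audio = torch.randn(1, 200, 13)                    # ~200 frames of audio features

# Anticipatory time shift (hypothetical lag of 5 frames): the prediction made
# after seeing audio frame t + 5 is used for video frame t, so the network has
# already "heard" what comes next when shaping the mouth.
lag = 5
mouth_params = model(audio)[:, lag:, :]
print(mouth_params.shape)                          # torch.Size([1, 195, 18])
```

In the full pipeline, these per-frame parameters would drive mouth-texture synthesis and pose-matched compositing into the target video, as described in the paper’s abstract below.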

Fakes of fakes

So the obvious question is: can you drive the video with someone else’s voice (assuming enough source videos)? The researchers said they decided against going down that path, but they didn’t say it was impossible.

Even more pernicious: the words of the person in the original video (not just the voice) could be faked using Princeton/Adobe’s “VoCo” software (when available) — simply by editing a text transcript of their voice recording — or the fake voice itself could be modified.

Or Disney Research’s FaceDirector could be used to edit recorded substitute facial expressions (along with the fake voice) into the video.

However, by reversing the process — feeding video into the neural network instead of just audio — one could also potentially develop algorithms that could detect whether a video is real or manufactured, the researchers note.

The research was funded by Samsung, Google, Facebook, Intel, and the UW Animation Research Labs. You can contact the research team at audiolipsync@cs.washington.edu.


Abstract of Synthesizing Obama: Learning Lip Sync from Audio

Given audio of President Barack Obama, we synthesize a high quality video of him speaking with accurate lip sync, composited into a target video clip. Trained on many hours of his weekly address footage, a recurrent neural network learns the mapping from raw audio features to mouth shapes. Given the mouth shape at each time instant, we synthesize high quality mouth texture, and composite it with proper 3D pose matching to change what he appears to be saying in a target video to match the input audio track. Our approach produces photorealistic results.

In a neurotechnology future, human-rights laws will need to be revisited

New forms of brainwashing include transcranial magnetic stimulation (TMS) to neuromodulate the brain regions responsible for social prejudice and political and religious beliefs, say researchers. (credit: U.S. National Library of Medicine)

New human rights laws to prepare for rapid current advances in neurotechnology that may put “freedom of mind” at risk have been proposed in the open access journal Life Sciences, Society and Policy.

Four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy, the authors of the study suggest: The right to cognitive liberty, the right to mental privacy, the right to mental integrity, and the right to psychological continuity.

Advances in neural engineering, brain imaging, and neurotechnology put freedom of the mind at risk, says Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel. “Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Potential misuses

Sophisticated brain imaging and the development of brain-computer interfaces have moved away from a clinical setting into the consumer domain. There’s a risk that the technology could be misused and create unprecedented threats to personal freedom. For example:

  • Uses in criminal court as a tool for assessing criminal responsibility or even the risk of re-offending.*
  • Consumer companies using brain imaging for “neuromarketing” to understand consumer behavior and elicit desired responses from customers.
  • “Brain decoders” that can turn a person’s brain imaging data into images, text or sound.**
  • Hacking, allowing a third-party to eavesdrop on someone’s mind.***

International human rights laws currently make no specific mention of neuroscience. But as with the genetic revolution, the on-going neurorevolution will require consideration of human-rights laws and even the creation of new ones, the authors suggest.

* “A possibly game-changing use of neurotechnology in the legal field has been illustrated by Aharoni et al. (2013). In this study, researchers followed a group of 96 male prisoners at prison release. Using fMRI, prisoners’ brains were scanned during the performance of computer tasks in which they had to make quick decisions and inhibit impulsive reactions. The researchers followed the ex-convicts for 4 years to see how they behaved. The study results indicate that those individuals showing low activity in a brain region associated with decision-making and action (the Anterior Cingulate Cortex, ACC) are more likely to commit crimes again within 4 years of release (Aharoni et al. 2013). According to the study, the risk of recidivism is more than double in individuals showing low activity in that region of the brain than in individuals with high activity in that region. Their results suggest a “potential neurocognitive biomarker for persistent antisocial behavior”. In other words, brain scans can theoretically help determine whether certain convicted persons are at an increased risk of reoffending if released.” — Marcello Ienca and Roberto Andorno/Life Sciences, Society and Policy

** NASA and Jaguar are jointly developing a technology called Mind Sense, which will measure brainwaves to monitor the driver’s concentration in the car (Biondi and Skrypchuk 2017). If brain activity indicates poor concentration, then the steering wheel or pedals could vibrate to raise the driver’s awareness of the danger. This technology could help reduce the number of accidents caused by drivers who are stressed or distracted. However, it also theoretically opens the possibility for third parties to use brain decoders to eavesdrop on people’s states of mind. — Marcello Ienca and Roberto Andorno/Life Sciences, Society and Policy

*** Criminally motivated actors could selectively erase memories from their victims’ brains to prevent being identified by them later on or simply to cause them harm. In a longer-term scenario, such tools could be used by surveillance and security agencies to selectively erase dangerous or inconvenient memories from people’s brains, as portrayed in the movie Men in Black with the so-called neuralyzer. — Marcello Ienca and Roberto Andorno/Life Sciences, Society and Policy


Abstract of Towards new human rights in the age of neuroscience and neurotechnology

Rapid advancements in human neuroscience and neurotechnology open unprecedented possibilities for accessing, collecting, sharing and manipulating information from the human brain. Such applications raise important challenges to human rights principles that need to be addressed to prevent unintended consequences. This paper assesses the implications of emerging neurotechnology applications in the context of the human rights framework and suggests that existing human rights may not be sufficient to respond to these emerging issues. After analysing the relationship between neuroscience and human rights, we identify four new rights that may become of great relevance in the coming decades: the right to cognitive liberty, the right to mental privacy, the right to mental integrity, and the right to psychological continuity.

Elon Musk wants to enhance us as superhuman cyborgs to deal with superintelligent AI

(credit: Neuralink Corp.)

It’s the year 2021. A quadriplegic patient has just had one million “neural lace” microparticles injected into her brain, making her the world’s first human with an internet communication system using a wireless implanted brain-mind interface — and the first superhuman cyborg. …

No, this is not a science-fiction movie plot. It’s the actual first public step — just four years from now — in Tesla CEO Elon Musk’s business plan for his latest new venture, Neuralink. It’s now explained for the first time on Tim Urban’s WaitButWhy blog.

Dealing with the superintelligence existential risk

Such a system would allow for radically improved communication between people, Musk believes. But for Musk, the big concern is AI safety. “AI is obviously going to surpass human intelligence by a lot,” he says. “There’s some risk at that point that something bad happens, something that we can’t control, that humanity can’t control after that point — either a small group of people monopolize AI power, or the AI goes rogue, or something like that.”

“This is what keeps Elon up at night,” says Urban. “He sees it as only a matter of time before superintelligent AI rises up on this planet — and when that happens, he believes that it’s critical that we don’t end up as part of ‘everyone else.’ That’s why, in a future world made up of AI and everyone else, he thinks we have only one good option: To be AI.”

Neural dust: an ultrasonic, low power solution for chronic brain-machine interfaces (credit: Swarm Lab/UC Berkeley)

To achieve this, Neuralink CEO Musk has met with more than 1,000 people, initially narrowing the field to eight experts, such as Paul Merolla, who spent the last seven years as the lead chip designer at IBM on its DARPA-funded SyNAPSE program, designing neuromorphic (brain-inspired) chips with 5.4 billion transistors, 1 million neurons, and 256 million synapses each, and Dongjin (DJ) Seo, who while at UC Berkeley designed “neural dust,” an ultrasonic backscatter system for powering and communicating with implanted bioelectronics that record brain activity.*

Mesh electronics being injected through sub-100 micrometer inner diameter glass needle into aqueous solution (credit: Lieber Research Group, Harvard University)

Becoming one with AI — a good thing?

Neuralink’s goal is to create a “digital tertiary layer” to augment the brain’s current cortex and limbic layers — a radical high-bandwidth, long-lasting, biocompatible, bidirectionally communicative, non-invasively implanted system made up of micron-size (millionth of a meter) particles communicating wirelessly via the cloud and internet to achieve super-fast communication speed and increased bandwidth (carrying more information).

“We’re going to have the choice of either being left behind and being effectively useless or like a pet — you know, like a house cat or something — or eventually figuring out some way to be symbiotic and merge with AI. … A house cat’s a good outcome, by the way.”

Thin, flexible electrodes mounted on top of a biodegradable silk substrate could provide a better brain-machine interface, as shown in this model. (credit: University of Illinois at Urbana-Champaign)

But machine intelligence is already vastly superior to human intelligence in specific areas (such as Google DeepMind’s AlphaGo) and often inexplicable. So how do we know superintelligence has the best interests of humanity in mind?

“Just an engineering problem”

Musk’s answer: “If we achieve tight symbiosis, the AI wouldn’t be ‘other’  — it would be you and with a relationship to your cortex analogous to the relationship your cortex has with your limbic system.” OK, but then how does an inferior intelligence know when it’s achieved full symbiosis with a superior one — or when AI goes rogue?

Brain-to-brain (B2B) internet communication system: EEG signals representing two words were encoded into binary strings (left) by the sender (emitter) and sent via the internet to a receiver. The signal was then encoded as a series of transcranial magnetic stimulation-generated phosphenes detected by the visual occipital cortex, which the receiver then translated to words (credit: Carles Grau et al./PLoS ONE)

And what about experts in neuroethics, psychology, law? Musk says it’s just “an engineering problem. … If we can just use engineering to get neurons to talk to computers, we’ll have done our job, and machine learning can do much of the rest.”

However, it’s not clear how we could be assured our brains aren’t hacked, spied on, and controlled by a repressive government or by other humans — especially those with a more recently updated software version or covert cyborg hardware improvements.

NIRS/EEG brain-computer interface system using non-invasive near-infrared light for sensing “yes” or “no” thoughts, shown on a model (credit: Wyss Center for Bio and Neuroengineering)

In addition, the devices mentioned in WaitButWhy all require some form of neurosurgery, unlike Facebook’s research project to use non-invasive near-infrared light, as shown in this experiment, for example.** And getting implants for non-medical use approved by the FDA will be a challenge, to grossly understate it.

“I think we are about 8 to 10 years away from this being usable by people with no disability,” says Musk, optimistically. However, Musk does not lay out a technology roadmap for going further, as MIT Technology Review notes.

Nonetheless, Neuralink sounds awesome — it should lead to some exciting neuroscience breakthroughs. And Neuralink now has 16 San Francisco job listings here.

* Other experts: Vanessa Tolosa, Lawrence Livermore National Laboratory, one of the world’s foremost researchers on biocompatible materials; Max Hodak, who worked on the development of some groundbreaking BMI technology at Miguel Nicolelis’s lab at Duke University; Ben Rapoport, Neuralink’s neurosurgery expert, with a Ph.D. in Electrical Engineering and Computer Science from MIT; Tim Hanson, UC Berkeley post-doc and expert in flexible electrodes for stable, minimally invasive neural recording; Flip Sabes, professor at the UCSF School of Medicine and expert in cortical physiology, computational and theoretical modeling, and human psychophysics and physiology; and Tim Gardner, Associate Professor of Biology at Boston University, whose lab works on implanting BMIs in birds, to study “how complex songs are assembled from elementary neural units” and learn about “the relationships between patterns of neural activity on different time scales.”

** This binary experiment and the binary Brain-to-brain (B2B) internet communication system mentioned above are the equivalents of the first binary (dot–dash) telegraph message, sent May 24, 1844: ”What hath God wrought?”

Global night-time lights provide unfiltered data on human activities and socio-economic factors

Night-time lights seen from space correlate to everything from electricity consumption and CO2 emissions, to gross domestic product, population and poverty. (credit: NASA)

Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Environmental Defense Fund (EDF) have developed an online tool that incorporates 21 years of night-time lights data to understand and compare changes in human activities in countries around the world.

The research is published in PLOS One.

The tool compares the brightness of a country’s night-time lights since 1992 with the corresponding electricity consumption, GDP, population, poverty, and emissions of CO2, CH4, N2O, and F-gases, without relying on national statistics, which are often collected with differing methodologies and motivations.

Consistent with previous research, the team found the highest correlations between night-time lights and GDP, electricity consumption, and CO2 emissions. Correlations with population, N2O emissions, and CH4 emissions were slightly less pronounced, and, as expected, there was an inverse correlation between the brightness of lights and poverty.
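As a rough illustration of the kind of comparison the tool makes, the following Python sketch correlates night-time brightness with a few indicators; the numbers and column names are made up and are not the SEAS/EDF dataset.

```python
# Illustrative sketch of the kind of comparison the tool makes (made-up numbers
# and column names, not the SEAS/EDF dataset): correlate summed night-time
# brightness for country-years with socio-economic indicators.
import pandas as pd

data = pd.DataFrame({
    "lights":      [120, 135, 150, 80, 95, 110],    # summed DMSP brightness (made up)
    "electricity": [300, 330, 365, 150, 180, 210],  # TWh consumed (made up)
    "gdp":         [1.2, 1.3, 1.5, 0.5, 0.6, 0.7],  # trillion USD (made up)
    "poverty":     [18, 16, 14, 40, 36, 33],        # % below poverty line (made up)
})

# Pearson correlation of each indicator with night-time lights: GDP and
# electricity correlate positively, poverty inversely, echoing the findings above.
print(data.corr()["lights"].drop("lights"))
```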

“This is the most comprehensive tool to date to look at the relationship between night-time lights and a series of socio-economic indicators,” said Gernot Wagner, a research associate at SEAS and coauthor of the paper.

The data source is the Defense Meteorological Satellite Program (DMSP) dataset, providing 21 years worth of night-time data. The researchers also use Google Earth Engine (GEE), a platform recently made available to researchers that allows them to explore more comprehensive global aggregate relationships at national scales between DMSP and a series of economic and environmental variables.


Abstract of Night-time lights: A global, long term look at links to socio-economic trends

We use a parallelized spatial analytics platform to process the twenty-one year totality of the longest-running time series of night-time lights data—the Defense Meteorological Satellite Program (DMSP) dataset—surpassing the narrower scope of prior studies to assess changes in area lit of countries globally. Doing so allows a retrospective look at the global, long-term relationships between night-time lights and a series of socio-economic indicators. We find the strongest correlations with electricity consumption, CO2 emissions, and GDP, followed by population, CH4 emissions, N2O emissions, poverty (inverse) and F-gas emissions. Relating area lit to electricity consumption shows that while a basic linear model provides a good statistical fit, regional and temporal trends are found to have a significant impact.

Brain-imaging headband measures how our minds mirror a speaker when we communicate

A cartoon image of brain “coupling” during communication (credit: Drexel University)

Drexel University biomedical engineers and Princeton University psychologists have used a wearable brain-imaging device called functional near-infrared spectroscopy (fNIRS) to measure brain synchronization when humans interact. fNIRS uses light to measure neural activity in the cortex of the brain (based on blood-oxygenation changes) during real-life situations and can be worn like a headband.

(KurzweilAI recently covered research with a fNIRS brain-computer interface that allows completely locked-in patients to communicate.)

A fNIRS headband (credit: Wyss Center for Bio and Neuroengineering)

Mirroring the speaker’s brain activity

The researchers found that a listener’s brain activity (in brain areas associated with speech comprehension) mirrors the speaker’s brain when he or she is telling a story about a real-life experience, with about a five-second delay. They also found that higher coupling is associated with better understanding.
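As a rough illustration of what such “coupling” means in practice, here is a minimal Python sketch that finds the delay at which a listener’s signal best correlates with a speaker’s; the synthetic signals and sampling rate are assumptions, not the authors’ analysis pipeline.

```python
# Minimal sketch of a brain-to-brain "coupling" measure (synthetic signals and
# a nominal 1 Hz sampling rate are assumptions; this is not the authors'
# analysis pipeline): correlate the listener's fNIRS time series against the
# speaker's at a range of lags and report the best-matching delay.
import numpy as np

def coupling(speaker, listener, fs=1.0, max_lag_s=10):
    """Return (lag_seconds, correlation) for the lag with the strongest correlation."""
    best = (0.0, -1.0)
    for lag in range(int(max_lag_s * fs) + 1):
        s = speaker[: len(speaker) - lag] if lag else speaker
        l = listener[lag:]
        r = np.corrcoef(s, l)[0, 1]
        if r > best[1]:
            best = (lag / fs, r)
    return best

# Toy demo: the listener's signal echoes the speaker's with a 5-sample delay.
rng = np.random.default_rng(0)
speaker = rng.standard_normal(300)
listener = np.concatenate([rng.standard_normal(5), speaker[:-5]]) + 0.2 * rng.standard_normal(300)
print(coupling(speaker, listener))   # best lag near 5 s, correlation close to 1
```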

The researchers believe the system can be used to offer important information about how to better communicate in many different environments, such as how people learn in classrooms and how to improve business meetings and doctor-patient communication. They also mentioned uses in analyzing political rallies and how people handle cable news.

“We now have a tool that can give us richer information about the brain during everyday tasks — such as person-to-person communication — that we could not receive in artificial lab settings or from single brain studies,” said Hasan Ayaz, PhD, an associate research professor in Drexel’s School of Biomedical Engineering, Science and Health Systems, who led the research team.

Traditional brain imaging methods like fMRI have limitations. In particular, fMRI requires subjects to lie motionless in a noisy scanning environment, which makes it impossible to simultaneously scan the brains of multiple individuals who are speaking face-to-face. That is why the Drexel researchers turned to a portable fNIRS system, which can probe the brain-to-brain coupling question in natural settings.

For their study, a native English speaker and two native Turkish speakers told an unrehearsed, real-life story in their native language. Their stories were recorded and their brains were scanned using fNIRS. Fifteen English speakers then listened to the recordings, in addition to a story that was recorded at a live storytelling event.

The researchers targeted the prefrontal and parietal areas of the brain, which include cognitive and higher order areas that are involved in a person’s capacity to discern beliefs, desires, and goals of others. They hypothesized that a listener’s brain activity would correlate with the speaker’s only when listening to a story they understood (the English version). A second objective of the study was to compare the fNIRS results with data from a similar study that had used fMRI to compare the two methods.

They found that, based on fNIRS measurements of blood oxygenation and deoxygenation in the test subjects’ brains, the listeners’ brain activity matched only the English speaker’s.* These results also correlated with the previous fMRI study.

The researchers believe the new research supports fNIRS as a viable future tool to study brain-to-brain coupling during social interaction. One can also imagine possible invasive uses in areas such as law enforcement and military interrogation.

The research was published in open-access Scientific Reports on Monday, Feb. 27.

* “During brain-to-brain coupling, activity in areas of prefrontal [in the speaker] and parietal cortex [in the listeners] previously reported to be involved in sentence comprehension were robustly correlated across subjects, as revealed in the inter-subject correlation analysis. As these are task-related (active listening) activation periods (not resting, etc.), the correlations reflect modulation of these regions by the time-varying content of the narratives, and comprise linguistic, conceptual and affective processing.” — Yichuan Liu et al./Scientific Reports)


Abstract of Measuring speaker–listener neural coupling with functional near infrared spectroscopy

The present study investigates brain-to-brain coupling, defined as inter-subject correlations in the hemodynamic response, during natural verbal communication. We used functional near-infrared spectroscopy (fNIRS) to record brain activity of 3 speakers telling stories and 15 listeners comprehending audio recordings of these stories. Listeners’ brain activity was significantly correlated with speakers’ with a delay. This between-brain correlation disappeared when verbal communication failed. We further compared the fNIRS and functional Magnetic Resonance Imaging (fMRI) recordings of listeners comprehending the same story and found a significant relationship between the fNIRS oxygenated-hemoglobin concentration changes and the fMRI BOLD in brain areas associated with speech comprehension. This correlation between fNIRS and fMRI was only present when data from the same story were compared between the two modalities and vanished when data from different stories were compared; this cross-modality consistency further highlights the reliability of the spatiotemporal brain activation pattern as a measure of story comprehension. Our findings suggest that fNIRS can be used for investigating brain-to-brain coupling during verbal communication in natural settings.

Trump considering libertarian reformer to head FDA

The Seasteading Institute wants to create new societies at sea, away from FDA (and other government) regulations. (credit: Seasteading Institute)

President-elect Donald Trump’s transition team is considering libertarian Silicon Valley investor Jim O’Neill, a Peter Thiel associate, to head the Food and Drug Administration, Bloomberg Politics has reported.

O’Neill, the Managing Director of Mithril Capital Management LLC, doesn’t have a medical background, but served in the George W. Bush administration as principal associate deputy secretary at the Department of Health and Human Services. He’s also a board member of the Seasteading Institute, a Thiel-backed venture to create new societies at sea, away from existing governments.

“We should reform FDA so there is approving drugs after their sponsors have demonstrated safety — and let people start using them, at their own risk, but not much risk of safety,” O’Neill said in a speech at the August 2014 Rejuvenation Biotechnology conference. “O’Neill also advocated anti-aging medicine in that speech, saying he believed it was scientifically possible to develop treatments that would reverse aging,” said Bloomberg.

O’Neill’s prospective nomination could also bring about “significant changes to medical cannabis policy and potentially address the regulations that have prevented medical cannabis research,” Mike Liszewski, the director of government affairs at Americans for Safe Access, told ATTN:.

Scott Gottlieb, M.D., a former FDA official and now at the American Enterprise Institute (AEI), is also reportedly under consideration, according to The Hill.

In a recent related announcement, Trump has selected Rep. Tom Price, M.D. (R., Ga.), a leader in the efforts to replace ObamaCare, to be his secretary of Health and Human Services. “His most frequent objection to [the Affordable Care Act] is that it interferes with the ability of patients and doctors to make medical decisions,” The New York Times notes. Price also proposes to deregulate the market for medical services, according to the AEI.

 

Terasem Colloquium in Second Life

The 2016 Terasem Annual Colloquium on the Law of Futuristic Persons will take place in Second Life in “Terasem sim” on Saturday, Dec. 10, 2016 at noon EDT. The main themes: “Legal Aspects of Futuristic Persons: Cyber-Humans” and “A Tribute to the ‘Father of Artificial Intelligence,’ Marvin Minsky, PhD.”

Each year on December 10th, International Human Rights Day, Terasem conducts a Colloquium on the Law of Futuristic Persons. The event seeks to provide the public with informed perspectives regarding the legal rights and obligations of “futuristic persons” via VR events with expert presentations and discussions. “Terasem hopes to facilitate development of a body of law covering the rights and obligations of entities that transcend, and yet encompass, conventional conceptions of humanness,” according to Terasem Movement, Inc.

12:10–12:30PM — How Marvin Minsky Inspired Me To Have a Mindclone Living on An O’Neill Space Habitat
Martine Rothblatt, JD, PhD
Co-Founder, Terasem Movement, Inc.
Space Coast, FL
Avatar name: Vitology Destiny

12:30–12:50PM — Formal Interaction

12:50–1:10PM — The Emerging Law of Cyborgs
Woodrow “Woody” Barfield, PhD, JD, LLM
Author: Cyber-Humans: Our Future with Machines
Chapel Hill, NC
Avatar name: WoodyBarfield

1:10–1:30PM — Formal Interaction

1:30–1:50PM — Cyborgs and Family Law Challenges
Rich Lee
Human Enhancement & Augmentation
St. George, UT
Avatar name: RichLee78

1:50–2:10PM — Formal Interaction

2:10–2:30PM — Synthetic Brain Simulations and Mens Rea*
Stephen Thaler, PhD.
President & CEO, Imagination-engines, Inc.
St. Charles, MO
Avatar name: SteveThaler

* Mens Rea refers to criminal intent. Moreover, it is the state of mind indicating culpability which is required by statute as an element of a crime. — Cornell University Legal Information Institute