Beneficial AI conference develops ‘Asilomar AI principles’ to guide future AI research

Beneficial AI conference (credit: Future of Life Institute)

At the Beneficial AI 2017 conference, held January 5–8 at a conference center in Asilomar, California — a sequel to the 2015 AI Safety conference in Puerto Rico — the Future of Life Institute (FLI) brought together more than 100 AI researchers from academia and industry, along with thought leaders in economics, law, ethics, and philosophy, to formulate principles of beneficial AI.

FLI hosted a two-day workshop for its grant recipients, followed by a 2.5-day conference, in which people from various AI-related fields hashed out opportunities and challenges related to the future of AI and steps we can take to ensure that the resulting technology is beneficial.

Beneficial AI conference participants (credit: Future of Life Institute)

The result was the 23 Asilomar AI Principles, intended to suggest AI research guidelines, such as “The goal of AI research should be to create not undirected intelligence, but beneficial intelligence” and “An arms race in lethal autonomous weapons should be avoided”; to identify ethics and values, such as safety and transparency; and to address longer-term issues — notably, “Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.”

To date, 2,515 AI researchers and others have signed the Principles. The process is described here.

The conference location has historic significance. In 2009, the Association for the Advancement of Artificial Intelligence held the Asilomar Meeting on Long-Term AI Futures to address similar concerns. And in 1975, the Asilomar Conference on Recombinant DNA was held to discuss potential biohazards and regulation of emerging biotechnology.

The non-profit Future of Life Institute was founded in March 2014 by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, DeepMind research scientist Viktoriya Krakovna, Boston University Ph.D. candidate in Developmental Sciences Meia Chita-Tegmark, and UCSC physicist Anthony Aguirre. Its mission is “to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.”

FLI’s scientific advisory board includes physicist Stephen Hawking, SpaceX CEO Elon Musk, Astronomer Royal Martin Rees, and UC Berkeley Professor of Computer Science/Smith-Zadeh Professor in Engineering Stuart Russell.


Future of Life Institute | Superintelligence: Science or Fiction? | Elon Musk & Other Great Minds

Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss with Max Tegmark (moderator) what likely outcomes might be if we succeed in building human-level AGI [artificial general intelligence] (and beyond), and also what we would like to happen.

 

‘Bits & Watts’: integrating inexpensive energy sources into the electric grid

Bits & Watts initiative (credit: SLAC National Accelerator Laboratory)

Stanford University and DOE’s SLAC National Accelerator Laboratory today launched an initiative called “Bits & Watts,” aimed at integrating low-carbon, inexpensive energy sources, such as wind and solar, into the electric grid.

The interdisciplinary initiative hopes to develop “smart” technology that will bring the grid into the 21st century while delivering reliable, efficient, affordable power to homes and businesses.

That means you’ll be able to feed extra power from a home solar collector, for instance, into the grid — without throwing it off balance and triggering potential outages.

The three U.S. power grids (credit: Microsoft Encarta Encyclopedia)

A significant challenge. For starters, the U.S. electric grid is actually two giant, continent-spanning networks, plus a third, smaller network in Texas, that connect power sources and consumers via transmission lines. Each network runs like a single machine, with all its parts humming along at the same frequency, and its operators try to avoid unexpected surges and drops in power that could set off a chain reaction of disruptions, wreck equipment, or even hurt people.

Remember the Northeast blackout of 2003, the second largest in history? It knocked out power for an estimated 45 million people in eight U.S. states and 10 million people in the Canadian province of Ontario, some for nearly a week.

“The first challenge was to bring down the cost of wind, solar and other forms of distributed power. The next challenge is to create an integrated system. We must develop the right technologies, financial incentives and investment atmosphere to take full advantage of the lowering costs of clean energy.” — Steven Chu, a Stanford professor, Nobel laureate, former U.S. Energy Secretary, and one of the founding researchers of Bits & Watts. (credit: U.S. Department of Energy)

“Today’s electric grid is … an incredibly complex and finely balanced ecosystem that’s designed to handle power flows in only one direction — from centralized power plants to the consumer,” explained Arun Majumdar, a Stanford professor of mechanical engineering who co-directs both Bits & Watts and the university’s Precourt Institute for Energy, which oversees the initiative.

“As we incorporate more low-carbon, highly variable sources like wind and solar — including energy generated, stored and injected back into the grid by individual consumers — we’ll need a whole new set of tools, from computing and communications to controls and data sciences, to keep the grid stable, efficient and secure and provide affordable electricity.”

Coordination and integration of transmission and distribution systems  (credit: SLAC National Accelerator Laboratory)

The initiative also plans to develop market structures, regulatory frameworks, business models and pricing mechanisms that are crucial for making the grid run smoothly, working with industry and policymakers to identify and solve problems that stand in the way of grid modernization.

(Three bigger grid problems the Stanford announcement today didn’t mention: a geomagnetic solar storm-induced Carrington event, an EMP attack, and a grid cyber attack.)

Simulating the Grid in the Lab

Sila Kiliccote, head of SLAC’s GISMo (Grid Integration, Systems and Mobility) lab, and Stanford graduate student Gustavo Cezar look at a computer dashboard showing how appliances, batteries, lighting and other systems in a “home hub” network could be turned on and off in response to energy prices, consumer preferences and demands on the grid. The lab is part of the Bits & Watts initiative. (credit: SLAC National Accelerator Laboratory)

Researchers will develop ways to use digital sensors and controls to collect data from millions of sources — rooftop solar panels, electric car charging stations, wind farms, factory operations, household appliances and thermostats — and provide the real-time feedback grid operators need to seamlessly incorporate variable sources of energy and automatically adjust power distribution to customers.

All of the grid-related software developed by Bits & Watts will be open source, so it can be rapidly adopted by industry and policymakers and used by other researchers.

The initiative includes research projects that will:

  • Simulate the entire smart grid, from central power plants to networked home appliances (Virtual Megagrid).
  • Analyze data on electricity use, weather, geography, demographic patterns, and other factors to get a clear understanding of customer behavior via an easy-to-understand graphical interface (VISDOM).
  • Develop a “home hub” system that controls and monitors a home’s appliances, heating and cooling, and other electrical demands and can switch them on and off in response to fluctuating electricity prices, demands on the power grid, and the customer’s needs (Powernet) — see the control-loop sketch after this list.
  • Gather vast and growing sources of data from buildings, rooftop solar modules, electric vehicles, utility equipment, energy markets and so on, and analyze it in real time to dramatically improve the operation and planning of the electricity grid (VADER). This project will incorporate new data science tools such as machine learning, and validate those tools using data from utilities and industry.
  • Create a unique data depository for the electricity ecosystem (DataCommons).
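To make the Powernet idea concrete, here is a minimal sketch of the kind of price-responsive control loop a home hub might run. The device names, power ratings, price threshold, and grid-stress signal are illustrative assumptions, not project details.

```python
# Hypothetical price-responsive "home hub" scheduler (illustrative only).
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    power_kw: float
    deferrable: bool        # can this load wait for cheaper power?
    must_run: bool = False  # e.g., refrigerator

def plan(devices, price_per_kwh, price_threshold, grid_stress):
    """Return an on/off decision for each device for the next interval."""
    decisions = {}
    for d in devices:
        if d.must_run:
            decisions[d.name] = True
        elif d.deferrable and (price_per_kwh > price_threshold or grid_stress):
            decisions[d.name] = False   # defer until prices or grid stress drop
        else:
            decisions[d.name] = True
    return decisions

if __name__ == "__main__":
    home = [
        Device("refrigerator", 0.15, deferrable=False, must_run=True),
        Device("ev_charger", 7.2, deferrable=True),
        Device("water_heater", 4.5, deferrable=True),
        Device("lighting", 0.3, deferrable=False),
    ]
    # Hypothetical real-time price ($/kWh) and grid-operator stress signal
    decisions = plan(home, price_per_kwh=0.42, price_threshold=0.25, grid_stress=False)
    shed_kw = sum(d.power_kw for d in home if not decisions[d.name])
    print(decisions, f"-- {shed_kw:.1f} kW deferred until prices fall")
```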

Through the Grid Modernization Initiative, initial Bits & Watts projects are being funded for a combined $8.6 million from two DOE programs, the Advanced Research Projects Agency-Energy (ARPA-E) and the Grid Modernization Laboratory Consortium; $2.2 million from the California Energy Commission; and $1.6 million per year from industrial members, including China State Grid, PG&E (Pacific Gas & Electric), innogy SE (formerly RWE), Schneider Electric and Meidensha Corp.

 

Mars-bound astronauts face brain damage from galactic cosmic ray exposure, says NASA-funded study

An (unshielded) view of Mars (credit: SpaceX)

A NASA-funded study of rodents exposed to highly energetic charged particles — similar to the galactic cosmic rays that will bombard astronauts during extended spaceflights — found that the rodents developed long-term memory deficits, anxiety, depression, and impaired decision-making (not to mention long-term cancer risk).

The study by University of California, Irvine (UCI) scientists* appeared Oct. 10 in Nature’s open-access Scientific Reports. It follows a study published last year in the May issue of the open-access journal Science Advances, which showed somewhat shorter-term brain effects of galactic cosmic rays.

The rodents were subjected to charged-particle irradiation (ionized oxygen and titanium nuclei) at the NASA Space Radiation Laboratory at New York’s Brookhaven National Laboratory.

Digital imaging revealed a reduction of dendrites (green) and spines (red) on neurons of  irradiated rodents, disrupting the transmission of signals among brain cells and thus impairing the brain’s neural network. Left: dendrites in unirradiated brains. Center: dendrites exposed to 0.05 Gy** ionized oxygen. Right: dendrites exposed to 0.30 Gy ionized oxygen. (credit: Vipan K. Parihar et al./Scientific Reports)

Six months after exposure, the researchers still found significant levels of brain inflammation and damage to neurons,  poor performance on behavioral tasks designed to test learning and memory, and reduced “fear extinction” (an active process in which the brain suppresses prior unpleasant and stressful associations) — leading to elevated anxiety.

Similar types of more severe cognitive dysfunction (“chemo brain”) are common in brain cancer patients who have received high-dose, photon-based radiation treatments.

“The space environment poses unique hazards to astronauts,” said Charles Limoli, a professor of radiation oncology in UCI’s School of Medicine. “Exposure to these particles can lead to a range of potential central nervous system complications that can occur during and persist long after actual space travel. Many of these adverse consequences to cognition may continue and progress throughout life.”

NASA health hazards advisory

“During a 360-day round trip [to Mars], an astronaut would receive a dose of about 662 millisieverts (0.662 Gy) [twice the highest amount of radiation used in the UCI experiment with rodents] according to data from the Radiation Assessment Detector (RAD) … piggybacking on Curiosity,” said Cary Zeitlin, PhD, a principal scientist in the Southwest Research Institute’s Space Science and Engineering Division and lead author of an article published in the journal Science in 2013. “In terms of accumulated dose, it’s like getting a whole-body CT scan once every five or six days [for a year],” he said in a NASA press release. There’s also the risk of increased radiation during periodic solar storms.
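A quick back-of-the-envelope check of those figures (the roughly 10 mSv effective dose assumed for one whole-body CT scan is a typical textbook value, not from the article):

```python
# Sanity-check the quoted Mars transit dose against CT-scan equivalents.
ROUND_TRIP_DOSE_mSv = 662   # RAD-based estimate quoted for a 360-day round trip
TRIP_DAYS = 360
CT_DOSE_mSv = 10            # assumed effective dose of one whole-body CT scan

daily_dose = ROUND_TRIP_DOSE_mSv / TRIP_DAYS        # ~1.8 mSv per day in transit
days_per_ct = CT_DOSE_mSv / daily_dose              # ~5.4 days per "CT scan"
print(f"{daily_dose:.2f} mSv/day -> one CT-equivalent every {days_per_ct:.1f} days")
```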

In addition, as dramatized in the movie The Martian (and explained in this analysis), there’s also a radiation risk on the surface of Mars, although it is lower than in space, thanks to the atmosphere and to the planet itself blocking solar radiation at night.

In October 2015, the NASA Office of Inspector General issued a health hazards report related to space exploration, including a human mission to Mars.

“There’s going to be some risk of radiation, but it’s not deadly,” claimed SpaceX CEO Elon Musk Sept. 27 in an announcement of plans to establish a permanent, self-sustaining civilization of a million people on Mars (with an initial flight as soon as 2024). “There will be some slightly increased risk of cancer, but I think it’s relatively minor. … Are you prepared to die? If that’s OK, you’re a candidate for going.”

Sightseers expose themselves to galactic cosmic radiation on Europa, a moon of Jupiter, shown in the background (credit: SpaceX)

Not to be one-upped by Musk, President Obama said in an op-ed on the CNN blog on Oct. 11 (perhaps channeling JFK) that “we have set a clear goal vital to the next chapter of America’s story in space: sending humans to Mars by the 2030s and returning them safely to Earth, with the ultimate ambition to one day remain there for an extended time.”

In a follow-up explainer, NASA Administrator Charles Bolden and John Holdren, Director of the White House Office of Science and Technology Policy, announced that in August, NASA selected six companies (under the  Next Space Technologies for Exploration Partnerships-2 (NextSTEP-2) program) to produce ground prototypes for deep space habitat modules. No mention of plans for avoiding astronaut brain damage, and the NextSTEP-2 illustrations don’t appear to address that either.

Concept image of Sierra Nevada Corporation’s habitation prototype, based on its Dream Chaser cargo module. No multi-ton shielding is apparent. (credit: Sierra Nevada)

Hitchhiking on an asteroid

So what are the solutions (if any)? Material shielding can be effective against galactic cosmic rays, but it’s expensive and impractical for space travel. For instance, a NASA design study for a large space station envisioned four metric tons of shielding per square meter to drop radiation exposure to 2.5 millisieverts (mSv), or 0.0025 Gy, annually. For comparison, the annual global average dose from natural background radiation is 2.4 mSv (3.6 mSv in the U.S., including X-rays), according to a 2008 United Nations report.
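To see why that is impractical, here is a rough illustration using the cited areal density and a hypothetical cylindrical crew module (the 4 m × 10 m dimensions are assumptions made only for the arithmetic):

```python
# Rough shielding-mass estimate for a hypothetical cylindrical crew module.
import math

AREAL_DENSITY_T_PER_M2 = 4.0      # from the NASA design study cited above
radius_m, length_m = 2.0, 10.0    # assumed module dimensions (4 m dia. x 10 m)

surface_area = 2 * math.pi * radius_m * (length_m + radius_m)  # walls + end caps
shield_mass_t = AREAL_DENSITY_T_PER_M2 * surface_area
print(f"~{surface_area:.0f} m^2 of hull -> ~{shield_mass_t:.0f} metric tons of shielding")
```

That works out to roughly 600 metric tons — several times more than the heaviest rockets ever flown could place in low Earth orbit in a single launch, which is why passive mass shielding is generally ruled out for crewed Mars transits.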

Various alternative shielding schemes have been proposed. NASA scientist Geoffrey A. Landis suggested in a 1991 paper the use of magnetic deflection of charged radiation particles (imitating the Earth’s magnetosphere***). Improvements in superconductors since 1991 may make this approach more practical today, and possibly more so in the future.

In a 2011 paper in Acta Astronautica, Gregory Matloff of New York City College of Technology suggested that a Mars-bound spacecraft could tunnel into an asteroid for shielding, as long as the asteroid is at least 33 feet wide (if the asteroid were especially iron-rich, the necessary width would be smaller), National Geographic reported.

The calculated orbit of (357024) 1999 YR14 (credit: Lowell Observatory Near-Earth-Object Search)

“There are five known asteroids that fit the criteria and will pass from Earth to Mars before the year 2100. … The asteroids 1999YR14 and 2007EE26, for example, will both pass Earth in 2086, and they’ll make the journey to Mars in less than a year,” he said. Downside: it would be five years before either asteroid would swing around Mars as it heads back toward Earth.

Meanwhile, future preventive treatments may help. Limoli’s group is working on pharmacological strategies involving compounds that scavenge free radicals and protect neurotransmission.

* An Eastern Virginia Medical School researcher also contributed to the study.

** The Scientific Reports paper gives these values in centigray (cGy), one hundredth (0.01) of the gray (Gy), the SI derived unit of absorbed dose (energy absorbed per unit mass). Such doses are usually associated with ionizing radiation such as gamma rays or X-rays.

*** Astronauts working for extended periods on the International Space Station do not face the same level of bombardment with galactic cosmic rays because they are still within the Earth’s protective magnetosphere. Astronauts on Apollo and Skylab missions received on average 1.2 mSv (0.0012 Gy) per day and 1.4 mSv (0.0014 Gy) per day respectively, according to a NASA study.


Abstract of Cosmic radiation exposure and persistent cognitive dysfunction

The Mars mission will result in an inevitable exposure to cosmic radiation that has been shown to cause cognitive impairments in rodent models, and possibly in astronauts engaged in deep space travel. Of particular concern is the potential for cosmic radiation exposure to compromise critical decision making during normal operations or under emergency conditions in deep space. Rodents exposed to cosmic radiation exhibit persistent hippocampal and cortical based performance decrements using six independent behavioral tasks administered between separate cohorts 12 and 24 weeks after irradiation. Radiation-induced impairments in spatial, episodic and recognition memory were temporally coincident with deficits in executive function and reduced rates of fear extinction and elevated anxiety. Irradiation caused significant reductions in dendritic complexity, spine density and altered spine morphology along medial prefrontal cortical neurons known to mediate neurotransmission interrogated by our behavioral tasks. Cosmic radiation also disrupted synaptic integrity and increased neuroinflammation that persisted more than 6 months after exposure. Behavioral deficits for individual animals correlated significantly with reduced spine density and increased synaptic puncta, providing quantitative measures of risk for developing cognitive impairment. Our data provide additional evidence that deep space travel poses a real and unique threat to the integrity of neural circuits in the brain.


Abstract of What happens to your brain on the way to Mars

As NASA prepares for the first manned spaceflight to Mars, questions have surfaced concerning the potential for increased risks associated with exposure to the spectrum of highly energetic nuclei that comprise galactic cosmic rays. Animal models have revealed an unexpected sensitivity of mature neurons in the brain to charged particles found in space. Astronaut autonomy during long-term space travel is particularly critical as is the need to properly manage planned and unanticipated events, activities that could be compromised by accumulating particle traversals through the brain. Using mice subjected to space-relevant fluences of charged particles, we show significant cortical- and hippocampal-based performance decrements 6 weeks after acute exposure. Animals manifesting cognitive decrements exhibited marked and persistent radiation-induced reductions in dendritic complexity and spine density along medial prefrontal cortical neurons known to mediate neurotransmission specifically interrogated by our behavioral tasks. Significant increases in postsynaptic density protein 95 (PSD-95) revealed major radiation-induced alterations in synaptic integrity. Impaired behavioral performance of individual animals correlated significantly with reduced spine density and trended with increased synaptic puncta, thereby providing quantitative measures of risk for developing cognitive decrements. Our data indicate an unexpected and unique susceptibility of the central nervous system to space radiation exposure, and argue that the underlying radiation sensitivity of delicate neuronal structure may well predispose astronauts to unintended mission-critical performance decrements and/or longer-term neurocognitive sequelae.

Elon Musk unveils plans for Mars civilization

(credit: SpaceX)

In a talk on Tuesday at the International Astronautical Congress in Guadalajara, Mexico, SpaceX CEO Elon Musk laid out engineering details of his plan to establish a permanent, self-sustaining civilization of a million people on Mars, with an initial flight as soon as 2024.

SpaceX is designing a massive reusable Interplanetary Transport System spacecraft with cabins. The trip would initially cost $500,000 per person, with a long-term goal of 100 passengers per trip.

Musk plans to make humanity a “multiplanetary species” to ensure survival in case of a calamity like an asteroid strike. “This is really about minimizing existential risk and having a tremendous sense of adventure,” he said.

Artist’s impression of Interplanetary Transport System on Europa (note humans for scale) (credit: SpaceX)

The new rocket could also be used for other interplanetary trips to places like Europa, the icy moon of Jupiter.

(credit: SpaceX)


SpaceX | SpaceX Interplanetary Transport System


SpaceX | Making Humans a Multiplanetary Species

 

Someone is learning how to take down the Internet

Submarine cables map (credit: TeleGeography)

“Over the past year or two, someone has been probing the defenses of the companies that run critical pieces of the Internet,” according to a blog post by security expert Bruce Schneier.

“These probes take the form of precisely calibrated attacks designed to determine exactly how well these companies can defend themselves, and what would be required to take them down. It feels like a nation’s military cybercommand trying to calibrate its weaponry in the case of cyberwar.”

Schneier said major companies that provide the basic infrastructure that makes the Internet work [presumably, ones such as Cisco] have seen an increase in distributed denial of service (DDoS) attacks against them, and the attacks are significantly larger, last longer, and are more sophisticated.

“They look like probing — being forced to demonstrate their defense capabilities for the attacker.” This is similar to flying reconnaissance planes over a country to detect capabilities by making the enemy turn on air-defense radars.

Who might do this? “The size and scale of these probes — and especially their persistence — point to state actors. … China or Russia would be my first guesses.”


DARPA’s plan for total surveillance of low-flying drones over cities

An artist’s concept of Aerial Dragnet system: several UAS carrying sensors form a network that provides wide-area surveillance of all low-flying UAS in an urban setting (credit: DARPA)

DARPA’s recently announced Aerial Dragnet program is seeking innovative technologies to “provide persistent, wide-area surveillance of all unmanned aerial systems (UAS), such as quadcopters, operating below 1,000 feet in a large city.”

UAS devices can be adapted for terrorist or military purposes, so U.S. forces will “increasingly be challenged by the need to quickly detect and identify such craft — especially in urban areas, where sight lines are limited and many objects may be moving at similar speeds,” DARPA said.

While Aerial Dragnet’s focus is on protecting military troops operating in urban settings overseas, the system could ultimately find civilian application to help protect U.S. metropolitan areas from UAS-enabled terrorist threats, DARPA said.

AI-controlled armed, autonomous UAVs may take over when things start to happen faster than human thought in future wars. From Call of Duty Black Ops 2. (credit: Activision Publishing)

DARPA envisions a network of surveillance nodes, each providing coverage of a neighborhood-sized urban area, perhaps mounted on tethered or long-endurance UAS. Because the sensors could look over and between buildings, the surveillance nodes would maintain UAS tracks even when the craft disappear from sight around corners or behind objects.
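As a rough illustration of that tracking problem (not DARPA’s design), a node might coast a track through a brief occlusion with a constant-velocity prediction and fold new detections back in when the drone reappears:

```python
# Toy track maintenance through occlusion: dead reckoning plus a fixed-gain update.

def coast(track, dt):
    """Predict position when no detection arrives (constant-velocity model)."""
    x, y, vx, vy = track
    return (x + vx * dt, y + vy * dt, vx, vy)

def update(track, measurement, dt, gain=0.5):
    """Blend the prediction with a new detection (a crude fixed-gain filter)."""
    px, py, vx, vy = coast(track, dt)
    mx, my = measurement
    nx, ny = px + gain * (mx - px), py + gain * (my - py)
    return (nx, ny, (nx - track[0]) / dt, (ny - track[1]) / dt)

track = (0.0, 0.0, 10.0, 2.0)  # x, y (m) and vx, vy (m/s) of a tracked drone
for t, det in enumerate([(9.5, 2.2), None, None, (40.1, 8.3)]):  # None = occluded
    track = update(track, det, dt=1.0) if det else coast(track, dt=1.0)
    print(f"t={t + 1}s  estimated position ({track[0]:.1f}, {track[1]:.1f})")
```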

The Aerial Dragnet program seeks teams with expertise in sensors, signal processing, and — interestingly — “networked autonomy.” A Broad Agency Announcement (BAA) solicitation detailing the goals and technical details of the program is available here.

ARGUS view from 20,000 feet (credit: DARPA)

Aerial Dragnet could conceivably link with ARGUS-IS — a 1.8-gigapixel video surveillance platform that can resolve details as small as six inches from an altitude of 20,000 feet (probably the highest-resolution camera in the world).

It could also tie in with a drone traffic management system being developed at NASA Ames Research Center, called UAS Traffic Management (UTM). Designed to enable safe low-altitude civilian UAS operations, it would provide [drone] pilots the information needed to maintain separation from other aircraft by reserving areas for specific routes, taking into account restricted airspace and adverse weather conditions.

The dynamic drone scene may get even more interesting on Monday Sept. 19, when GoPro plans to announce the much-anticipated high-maneuverability Karma camera drone and Hero 5.


GoPro: Karma Is Out There


Drone Compilations: Top 5 Drone Inventions of 2016

AI beats top U.S. Air Force tactical air combat experts in combat simulation

Retired U.S. Air Force Colonel Gene Lee, in a flight simulator, takes part in simulated air combat versus artificial intelligence technology developed by a team from industry, the U.S. Air Force, and University of Cincinnati. (credit: Lisa Ventre, University of Cincinnati Distribution A: Approved for public release; distribution unlimited. 88ABW Cleared 05/02/2016; 88ABW-2016-2270)

The U.S. Air Force got a wakeup call recently when AI software called ALPHA — running on a tiny $35 Raspberry Pi computer — repeatedly defeated retired U.S. Air Force Colonel Gene Lee, a top aerial combat instructor and Air Battle Manager, and other expert air-combat tacticians at the U.S. Air Force Research Lab (AFRL) in Dayton, Ohio. The contest was conducted in a high-fidelity air combat simulator.

According to Lee, who has considerable fighter-aircraft expertise (and has been flying in simulators against AI opponents since the early 1980s), ALPHA is “the most aggressive, responsive, dynamic and credible AI I’ve seen to date.” In fact, he was shot out of the air every time during protracted engagements in the simulator, he said.

ALPHA’s secret? Custom “genetic fuzzy” algorithms designed for simulated air-combat missions, according to an open-access, unclassified paper published in the Journal of Defense Management. The paper was authored by a team of industry, Air Force, and University of Cincinnati researchers, including the AFRL Branch Chief.

ALPHA, which now runs on a standard consumer-grade PC, was developed by Psibernetix, Inc., an AFRL contractor founded by University of Cincinnati College of Engineering and Applied Science 2015 doctoral graduate Nick Ernest*, president and CEO of the firm, and a team of former Air Force aerial combat experts, including Lee.

“AI wingmen”

Today’s fighters close in on each other at speeds in excess of 1,500 miles per hour while flying at altitudes above 40,000 feet. The cost for a mistake is very high. Microseconds matter, but an average human visual reaction time is 0.15 to 0.30 seconds, and “an even longer time to think of optimal plans and coordinate them with friendly forces,” the researchers note in the paper.

Side view during active combat in simulator between two Blue (human-controlled) fighters vs. four Red (AI) fighters with >150 inputs but handicapped data sources. All Reds have successfully evaded missiles; one Blue has been destroyed. Blue AWACS [Airborne early warning and control system aircraft] shown in distance. (credit: Nicholas Ernest et al./Journal of Defense Management)

In fact, ALPHA works 250 times faster than humans, the researchers say. Nonetheless, ALPHA’s future role will stop short of fully autonomous combat.

According to the AFRL team, ALPHA will first be tested on “Unmanned Combat Aerial Vehicles (UCAV),” where ALPHA will be organizing data and creating a complete mapping of a combat scenario, such as a flight of four fighter aircraft — which it can do in less than a millisecond.

The AFRL team sees these UCAVs as “AI wingmen” capable of engaging in air combat when teamed with manned aircraft.

The UCAVs will include an onboard battle management system able to process situational awareness, determine reactions, select tactics, and manage weapons — simultaneously evading dozens of hostile missiles, taking accurate shots at multiple targets, coordinating actions of squad mates, and recording and learning from observations of enemy tactics and capabilities.

Genetic fuzzy systems

The researchers based the design of ALPHA on a “genetic fuzzy tree” (GFT) — a subtype of “fuzzy logic” algorithms. The GFT is described in another open-access paper in Journal of Defense Management by Ernest and University of Cincinnati aerospace professor Kelly Cohen.

“Genetic fuzzy systems have been shown to have high performance, and a problem with four or five inputs can be solved handily,” said Cohen. “However, boost that to a hundred inputs, and no computing system on planet Earth could currently solve the processing challenge involved — unless that challenge and all those inputs are broken down into a cascade of sub-decisions.

“Most AI programming uses numeric-based control and provides very precise parameters for operations,” he said. In contrast, the AI algorithms that Ernest and his team developed are language-based, with if/then scenarios and rules able to encompass hundreds to thousands of variables. This language-based control, or fuzzy logic, can be verified and validated, Cohen says.

The “genetic” part of the “genetic fuzzy tree” system started with numerous automatically generated versions of ALPHA that proved themselves against a manually tuned version of ALPHA. The successful strings of code were then “bred” with each other, favoring the stronger, higher-performing versions.

In other words, only the best-performing code was used in subsequent generations. Eventually, one version of ALPHA rises to the top in terms of performance, and that’s the one that is utilized.
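A minimal, self-contained sketch of that “genetic fuzzy” recipe on a toy evasion problem follows. The rules, membership functions, scenarios, and fitness measure are invented purely for illustration and are not part of the actual ALPHA code.

```python
# Toy genetic fuzzy controller: fuzzy if/then rules whose membership-function
# breakpoints are encoded as a chromosome and tuned by a simple genetic algorithm.
import random

def tri(x, a, b, c):
    """Triangular membership: degree to which x is 'around b' between a and c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def controller(range_km, closing_rate, genes):
    """Rules: IF target is CLOSE and closing FAST THEN evade hard, ELSE cruise.
    `genes` holds the breakpoints of the CLOSE/FAR membership functions."""
    close = tri(range_km, -1.0, 0.0, genes[0])                    # "CLOSE"
    far = tri(range_km, genes[0], genes[1], genes[1] + 50)        # "FAR"
    fast = max(0.0, min(1.0, closing_rate / 300.0))               # "CLOSING FAST"
    w = close * fast
    return (w * 1.0 + far * 0.2) / (w + far + 1e-9)               # blended output

def fitness(genes, scenarios):
    """Reward evading when close-and-fast, cruising otherwise (toy objective)."""
    return -sum(abs(controller(rng, rate, genes) - (1.0 if evade else 0.2))
                for rng, rate, evade in scenarios)

def evolve(scenarios, pop_size=30, generations=40):
    pop = [[random.uniform(1, 20), random.uniform(20, 80)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, scenarios), reverse=True)
        parents = pop[: pop_size // 3]                # keep the fittest third
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)          # crossover + mutation
            child = sorted(max(0.5, random.choice(p) + random.gauss(0, 1))
                           for p in zip(a, b))
            if child[1] - child[0] < 0.5:
                child[1] = child[0] + 0.5
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda g: fitness(g, scenarios))

if __name__ == "__main__":
    # (range_km, closing_rate_m_s, should_evade) -- made-up training scenarios
    scenarios = [(2, 250, True), (5, 280, True), (40, 100, False), (60, 20, False)]
    best = evolve(scenarios)
    print("evolved breakpoints:", [round(g, 1) for g in best],
          " evade output at 3 km / 260 m/s:", round(controller(3, 260, best), 2))
```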

“In terms of emulating human reasoning, I feel this is to unmanned aerial vehicles as the IBM Deep Blue vs. Kasparov was to chess,” said Cohen. Or as Alpha Go was to Go.

* Support for Ernest’s doctoral research was provided  by the Dayton Area Graduate Studies Institute and the U.S. Air Force Research Laboratory.


UC | Flying in Simulator


Abstract of Genetic Fuzzy based Artificial Intelligence for Unmanned Combat Aerial Vehicle Control in Simulated Air Combat Missions

Breakthroughs in genetic fuzzy systems, most notably the development of the Genetic Fuzzy Tree methodology, have allowed fuzzy logic based Artificial Intelligences to be developed that can be applied to incredibly complex problems. The ability to have extreme performance and computational efficiency as well as to be robust to uncertainties and randomness, adaptable to changing scenarios, verified and validated to follow safety specifications and operating doctrines via formal methods, and easily designed and implemented are just some of the strengths that this type of control brings. Within this white paper, the authors introduce ALPHA, an Artificial Intelligence that controls flights of Unmanned Combat Aerial Vehicles in aerial combat missions within an extreme-fidelity simulation environment. To this day, this represents the most complex application of a fuzzy-logic based Artificial Intelligence to an Unmanned Combat Aerial Vehicle control problem. While development is on-going, the version of ALPHA presented within was assessed by Colonel (retired) Gene Lee who described ALPHA as “the most aggressive, responsive, dynamic and credible AI (he’s) seen-to-date.” The quality of these preliminary results in a problem that is not only complex and rife with uncertainties but also contains an intelligent and unrestricted hostile force has significant implications for this type of Artificial Intelligence. This work adds immensely to the body of evidence that this methodology is an ideal solution to a very wide array of problems.

You can now be identified by your ‘brainprint’ with 100% accuracy

(credit: Jonathan Cohen/Binghamton University)

Binghamton University researchers have developed a biometric identification method called Cognitive Event-RElated Biometric REcognition (CEREBRE) for identifying an individual’s unique “brainprint.” They recorded the brain activity of 50 subjects wearing an electroencephalograph (EEG) headset while looking at selected images from a set of 500 images.

The researchers found that participants’ brains reacted uniquely to each image — enough so that a computer system that analyzed the different reactions was able to identify each volunteer’s “brainprint” with 100 percent accuracy.

In their original brainprint study in 2015, published in Neurocomputing (see ‘Brainprints’ could replace passwords), the research team was able to identify one person out of a group of 32 by that person’s responses, with 97 percent accuracy. That study only used words. Switching to images made a huge difference.

High-security sites

It’s only a three-point difference, but going from 97 to 100 percent makes possible a reliable system for high-security situations, such as “ensuring the person going into the Pentagon or the nuclear launch bay is the right person,” said Assistant Professor of Psychology Sarah Laszlo. “You don’t want to be 97 percent accurate for that, you want to be 100 percent accurate.”

Laszlo says brain biometrics are appealing because they can be cancelled (meaning the person can simply do another EEG session) and cannot be imitated or stolen by malicious means, the way a finger or retina can (as in the movie Minority Report).

“If someone’s fingerprint is stolen, that person can’t just grow a new finger to replace the compromised fingerprint — the fingerprint for that person is compromised forever. Fingerprints are ‘non-cancellable.’ Brainprints, on the other hand, are potentially cancellable. So, in the unlikely event that attackers were actually able to steal a brainprint from an authorized user, the authorized user could then ‘reset’ their brainprint,” Laszlo explained.

Analyzing “event-related potential” brain signals 

Reference ERPs (originally recorded from the subject) and challenge ERPs (the ERP response detected from the subject being tested, which must match the reference ERP to verify it’s the same person) from two representative participants in the experiment, in response to viewing a “black and white foods” image, measured over the midline occipital (Oz) EEG channel. Notice that even by eye, it is possible to determine which challenge ERP corresponds to which reference ERP. (credit: Maria V. Ruiz-Blondet et al./IEEE Trans. Inf. Forensics Security)

The researchers found in their original study that the key to detecting differences in brain signals was to analyze “event-related potential” (ERP) brain signals recorded from each subject. ERPs are brain signals that are triggered by specific events (such as seeing a photo). Unlike EEG signals, ERPs are unique and occur over a period of a few milliseconds.


How ERPs are identified

The researchers used six types of stimuli in the CEREBRE protocol: sine gratings, low frequency words, color versions of black and white images, black and white foods, black and white celebrity faces, and color foods. For the foods and celebrity faces, they used ten tokens of each stimulus type (e.g., 10 different foods).

As the authors note in a new paper in The IEEE Transactions on Information Forensics and Security, “We … predict that, while ERPs elicited in response to single categories of stimulation (e.g., foods) will be somewhat identifiable, combinations of ERPs elicited in response to multiple categories of stimulation will be even more identifiable.

“This prediction is supported by the likelihood that each category of stimulation will draw upon differing (though overlapping) brain systems. For example, if the sine gratings call primarily upon the primary visual cortex, and the foods call primarily on the ventral midbrain, then considering both responses together for biometric identification provides multiple, independent, pieces of information about the user’s functional brain organization — each of which can contribute unique variability to the overall biometric solution.”
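A minimal sketch of that matching logic, using synthetic data in place of real ERPs (the correlate-and-sum scoring here is an illustrative stand-in for the classifiers actually used in CEREBRE):

```python
# Toy ERP "brainprint" matcher: score a challenge session against each enrolled
# user's reference ERPs, per stimulus category, and sum the correlations.
import numpy as np

rng = np.random.default_rng(0)
CATEGORIES = ["sine_grating", "food", "celebrity_face"]  # subset, for illustration
N_USERS, N_SAMPLES = 5, 200                              # 200-sample ERP epochs

# Enrollment: a stable, user-specific reference ERP per category (synthetic here)
references = {u: {c: rng.normal(size=N_SAMPLES) for c in CATEGORIES}
              for u in range(N_USERS)}

def correlate(a, b):
    """Pearson correlation between two ERP waveforms."""
    return float(np.corrcoef(a, b)[0, 1])

def identify(challenge):
    """challenge: dict mapping category -> ERP waveform from the unknown person."""
    scores = {u: sum(correlate(challenge[c], refs[c]) for c in CATEGORIES)
              for u, refs in references.items()}
    return max(scores, key=scores.get), scores

# Simulate user 3 returning: their reference ERPs plus measurement noise
challenge = {c: references[3][c] + 0.3 * rng.normal(size=N_SAMPLES) for c in CATEGORIES}
best, scores = identify(challenge)
print("identified user:", best, {u: round(s, 2) for u, s in scores.items()})
```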


Andrew Hatling/Binghamton University | The New Biometric — Brainprint


Abstract of CEREBRE: A Novel Method for Very High Accuracy Event-Related Potential Biometric Identification

The vast majority of existing work on brain biometrics has been conducted on the ongoing electroencephalogram. Here, we argue that the averaged event-related potential (ERP) may provide the potential for more accurate biometric identification, as its elicitation allows for some control over the cognitive state of the user to be obtained through the design of the challenge protocol. We describe the Cognitive Event-RElated Biometric REcognition (CEREBRE) protocol, an ERP biometric protocol designed to elicit individually unique responses from multiple functional brain systems (e.g., the primary visual, facial recognition, and gustatory/appetitive systems). Results indicate that there are multiple configurations of data collected with the CEREBRE protocol that all allow 100% identification accuracy in a pool of 50 users. We take this result as the evidence that ERP biometrics are a feasible method of user identification and worthy of further research.

How to detect radioactive material remotely

Researchers have proposed a new way to detect radioactive material using two co-located laser beams that interact with elevated levels of oxygen ions near a gamma-ray emitting source (credit: Joshua Isaacs, et al./University of Maryland)

University of Maryland researchers have proposed a new technique, based on elevated ion density near a source, to remotely detect the radioactive materials* in dirty bombs or other sources from up to a few hundred meters away. The technique might be used to screen vehicles, suspicious packages, or cargo.

The researchers calculate that a low-power laser aimed near the radioactive material could free electrons from the oxygen ions. A second, high-power laser could energize the electrons and start a cascading breakdown of the air. When the breakdown process reaches a certain critical point, the high-power laser light is reflected back. The more radioactive material in the vicinity, the more quickly the critical point is reached.
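A toy model shows why the breakdown delay encodes the amount of nearby radioactive material: the avalanche grows exponentially from the photo-ionized seed electrons, so a denser seed saturates sooner. The growth rate and densities below are placeholders, not values from the paper.

```python
# Toy avalanche-breakdown timing: more seed electrons -> earlier breakdown.
import math

IONIZATION_RATE = 5e9   # assumed avalanche growth rate in the laser focus, 1/s
N_CRITICAL = 1e19       # assumed electron density (per m^3) at which air breaks down

def breakdown_delay(seed_density):
    """Time for n(t) = n0 * exp(rate * t) to reach the critical density."""
    return math.log(N_CRITICAL / seed_density) / IONIZATION_RATE

for label, n0 in [("background air", 1e9), ("near a weak source", 1e11),
                  ("near a strong source", 1e13)]:
    print(f"{label:>22}: breakdown after {breakdown_delay(n0) * 1e9:.1f} ns")
```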

“We calculate we could easily detect 10 milligrams [of cobalt-60] with a laser aimed within half a meter from an unshielded source, which is a fraction of what might go into a dirty bomb,” said Joshua Isaacs, first author on the paper and a graduate student working with University of Maryland physics and engineering professors Phillip Sprangle and Howard Milchberg. Lead could shield radioactive substances, but most ordinary materials like walls or glass do not stop gamma rays.


In 2004 British national Dhiren Barot was arrested for conspiring to commit a public nuisance by the use of radioactive materials, among other charges. Authorities claimed that Barot had researched the production of “dirty bombs,” and planned to detonate them in New York City, Washington DC, and other cities. A dirty bomb combines conventional explosives with radioactive material. Although Barot did not build the bombs, national security experts believe terrorists continue to be interested in such devices for terror plots.


The lasers themselves could be located up to a few hundred meters away from the radioactive source, Isaacs said, as long as line-of-sight was maintained and the air was not too turbulent or polluted with aerosols. He estimated that the entire device, when built, could be transported by truck through city streets or past shipping containers in ports. It could also help police or security officials detect radiation without being too close to a potentially dangerous gamma ray emitter.

The proposed remote radiation detection method has advantages over two other approaches. Terahertz radiation, proposed as a way to break down air in the vicinity of radioactive materials, requires complicated and costly equipment. A high-power infrared laser can strip electrons and break down the air, but that method requires the detector to be located opposite the laser, making it impractical as a mobile device.

The new method is described in a paper in the journal Physics of Plasmas, from AIP Publishing.

* Radioactive materials are routinely used at hospitals for diagnosing and treating diseases, at construction sites for inspecting welding seams, and in research facilities. Cobalt-60, for example, is used to sterilize medical equipment, produce radiation for cancer treatment, and preserve food, among many other applications. In 2013, thieves in Mexico stole a shipment of cobalt-60 pellets used in hospital radiotherapy machines, although the shipment was later recovered intact.

Cobalt-60 and many other radioactive elements emit highly energetic gamma rays when they decay. The gamma rays strip electrons from the molecules in the surrounding air, and the resulting free electrons lose energy and readily attach to oxygen molecules to create elevated levels of negatively charged oxygen ions around the radioactive materials.


Abstract of Remote Monostatic Detection of Radioactive Materials by Laser-induced Breakdown

This paper analyzes and evaluates a concept for remotely detecting the presence of radioactivity using electromagnetic signatures. The detection concept is based on the use of laser beams and the resulting electromagnetic signatures near the radioactive material. Free electrons, generated from ionizing radiation associated with the radioactive material, cascade down to low energies and attach to molecular oxygen. The resulting ion density depends on the level of radioactivity and can be readily photo-ionized by a low-intensity laser beam. This process provides a controllable source of seed electrons for the further collisional ionization (breakdown) of the air using a high-power, focused, CO2 laser pulse. When the air breakdown process saturates, the ionizing CO2 radiation reflects off the plasma region and can be detected. The time required for this to occur is a function of the level of radioactivity. This monostatic detection arrangement has the advantage that both the photo-ionizing and avalanche laser beams as well as the detector can be co-located.

New ‘machine unlearning’ technique deletes unwanted data

The novel approach to making systems forget data is called “machine unlearning” by the two researchers who are pioneering the concept. Instead of making a model directly depend on each training data sample (left), they convert the learning algorithm into a summation form (right) – a process that is much easier and faster than retraining the system from scratch. (credit: Yinzhi Cao and Junfeng Yang)

Machine learning systems are becoming ubiquitous, but what about false or damaging information about you (and others) that these systems have learned? Is it even possible for that information ever to be corrected? There are some heavy security and privacy questions here. Ever Google yourself?

Some background: machine-learning software programs calculate predictive relationships from massive amounts of data. The systems identify these predictive relationships using advanced algorithms — a set of rules for solving math problems — and “training data.” This data is then used to construct the models and features that enable a system to predict things, like the probability of rain next week or when the Zika virus will arrive in your town.

This intricate process means that a piece of raw data often goes through a series of computations in a system. The computations and information derived by the system from that data together form a complex propagation network called the data’s “lineage” (a term coined by Yinzhi Cao, a Lehigh University assistant professor of computer science and engineering, and his colleague, Junfeng Yang of Columbia University).

“Effective forgetting systems must be able to let users specify the data to forget with different levels of granularity,” said Cao. “These systems must remove the data and undo its effects so that all future operations run as if the data never existed.”

Widely used learning systems such as Google Search are, for the most part, only able to forget a user’s raw data upon request — not the data’s lineage (what the user’s data connects to). However, in October 2014, Google removed more than 170,000 links to comply with the European “right to be forgotten” ruling, which affirmed users’ right to control what appears when their names are searched. In July 2015, Google said it had received more than a quarter-million such requests.

How “machine unlearning” works

Now the two researchers say they have developed a way to forget faster and more effectively. Their concept, called “machine unlearning,” led to a four-year, $1.2 million National Science Foundation grant to develop the approach.

Building on work that was presented at a 2015 IEEE Symposium and then published, Cao and Yang’s “machine unlearning” method is based on the assumption that most learning systems can be converted into a form that can be updated incrementally without costly retraining from scratch.

Their approach introduces a layer of a small number of summations between the learning algorithm and the training data, so the two no longer depend directly on each other. The learning algorithms then depend only on the summations, not on the individual data items.

Using this method, unlearning a piece of data and its lineage no longer requires rebuilding the models and features that predict relationships between pieces of data. Simply recomputing a small number of summations would remove the data and its lineage completely — and much faster than through retraining the system from scratch, the researchers claim.
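A minimal sketch of the summation-form idea, using a toy naive-Bayes-style spam filter (illustrative only; not Cao and Yang’s code). The model depends only on a handful of running sums, so forgetting a training sample is just a subtraction:

```python
# Toy summation-form learner: the model is a set of running counts, so a sample
# can be "unlearned" by subtracting its contributions instead of retraining.
from collections import defaultdict

class SummationSpamFilter:
    def __init__(self):
        self.class_counts = defaultdict(int)                    # sums over samples
        self.word_counts = {c: defaultdict(int) for c in ("spam", "ham")}

    def learn(self, words, label, sign=+1):
        self.class_counts[label] += sign
        for w in words:
            self.word_counts[label][w] += sign

    def unlearn(self, words, label):
        """Forget one training sample by subtracting its summation contributions."""
        self.learn(words, label, sign=-1)

    def spam_score(self, words):
        total = sum(self.class_counts.values()) or 1
        def likelihood(label):
            p = self.class_counts[label] / total
            for w in words:                                     # Laplace-smoothed
                p *= (self.word_counts[label][w] + 1) / (self.class_counts[label] + 2)
            return p
        s, h = likelihood("spam"), likelihood("ham")
        return s / (s + h) if (s + h) else 0.5

f = SummationSpamFilter()
f.learn(["cheap", "pills"], "spam")
f.learn(["meeting", "notes"], "ham")
print("before unlearning:", round(f.spam_score(["cheap", "pills"]), 2))
f.unlearn(["cheap", "pills"], "spam")            # data and its lineage removed
print("after unlearning: ", round(f.spam_score(["cheap", "pills"]), 2))
```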

Verification?

Cao and Yang tested their unlearning approach on four diverse, real-world systems: LensKit, an open-source recommendation system; Zozzle, a closed-source JavaScript malware detector; an open-source OSN spam filter; and PJScan, an open-source PDF malware detector.

Cao and Yang are now adapting the technique to other systems and creating verifiable machine unlearning to statistically test whether unlearning has indeed repaired a system or completely wiped out unwanted data.

“We foresee easy adoption of forgetting systems because they benefit both users and service providers,” they said. “With the flexibility to request that systems forget data, users have more control over their data, so they are more willing to share data with the systems.”

The researchers envision “forgetting systems playing a crucial role in emerging data markets where users trade data for money, services, or other data, because the mechanism of forgetting enables a user to cleanly cancel a data transaction or rent out the use rights of her data without giving up the ownership.”


editor’s comments: I’d like to see case studies and critical reviews of this software by independent security and privacy experts. Yes, I’m paranoid but… etc. Your suggestions? To be continued…


Abstract of Towards Making Systems Forget with Machine Unlearning

Today’s systems produce a rapidly exploding amount of data, and the data further derives more data, forming a complex data propagation network that we call the data’s lineage. There are many reasons that users want systems to forget certain data including its lineage. From a privacy perspective, users who become concerned with new privacy risks of a system often want the system to forget their data and lineage. From a security perspective, if an attacker pollutes an anomaly detector by injecting manually crafted data into the training data set, the detector must forget the injected data to regain security. From a usability perspective, a user can remove noise and incorrect entries so that a recommendation engine gives useful recommendations. Therefore, we envision forgetting systems, capable of forgetting certain data and their lineages, completely and quickly. This paper focuses on making learning systems forget, the process of which we call machine unlearning, or simply unlearning. We present a general, efficient unlearning approach by transforming learning algorithms used by a system into a summation form. To forget a training data sample, our approach simply updates a small number of summations — asymptotically faster than retraining from scratch. Our approach is general, because the summation form is from the statistical query learning in which many machine learning algorithms can be implemented. Our approach also applies to all stages of machine learning, including feature selection and modeling. Our evaluation, on four diverse learning systems and real-world workloads, shows that our approach is general, effective, fast, and easy to use.