FAA to team with local, state, and tribal governments and companies to develop safe drone operations

Future drone war as portrayed in “Call of Duty Black Ops 2” (credit: Activision Publishing)

U.S. Secretary of Transportation Elaine L. Chao announced today (May 9, 2018) that 10 state, local, and tribal governments have been selected* as participants in the U.S. Department of Transportation’s Unmanned Aircraft Systems (UAS) Integration Pilot Program.

The goal of the program is to set up partnerships between the FAA and local, state, and tribal governments, which will in turn partner with private-sector participants to safely explore the further integration of drone operations.

“Data gathered from these pilot projects will form the basis of a new regulatory framework to safely integrate drones into our national airspace,” said Chao. Over the next two and a half years, the team will collect drone data involving night operations, flights over people and beyond the pilot’s line of sight, package delivery, detect-and-avoid technologies and the reliability and security of data links between pilot and aircraft.


North Carolina has been selected to test medical delivery with Zipline’s drones, which have been tested in more than 4,000 flights in Rwanda, according to MIT Technology Review.

At least 200 companies were approved to partner in the program, including Airbus, Intel, Qualcomm, Boeing, Ford Motor Co., Uber Technologies Inc., and FedEx (but not Amazon).

“At Memphis International Airport, drones may soon be inspecting planes and delivering airplane parts for FedEx Corp.,” reports Bloomberg. “In Virginia, drones operated by Alphabet’s Project Wing will be used to deliver goods to various communities and then researchers will get feedback from local residents. The data can be used to help develop regulations allowing widespread and routine deliveries sometime in the future.”


The city of Reno, Nevada is partnered with Nevada-based Flirtey, a company that has experimented with delivering defibrillators by drone.

“In less than a decade, the potential economic benefit of integrating [unmanned aircraft systems] in the nation’s airspace is estimated at $82 billion and could create 100,000 jobs,” the announcement said. “Fields that could see immediate opportunities from the program include commerce, photography, emergency management, public safety, precision agriculture and infrastructure inspections.”

Criminals and terrorists already see immediate opportunities

But could making drones more accessible and ubiquitous have unintended consequences?

Consider these news reports:

  • A small 2-foot-long quadcopter — a drone with four propellers — crashed onto the White House grounds on January 26, 2015. The event raises some troubling questions about the possibility that terrorists using armed drones could one day attack the White House or other tightly guarded U.S. government locations. — CNN
  • ISIS flew over 300 drone missions in one month during the battle for Mosul, said Peter Singer, a senior fellow and strategist at the New America Foundation, during a November 2017 presentation. About one-third of those flights were armed strike missions. — C4ISRNET
  • ISIS released a propaganda video in 2017 showing them (allegedly) dropping a bomb on a Syrian army ammunition depot. — Vocativ
  • Footage obtained by the BBC shows a drone delivering drugs and mobile phones to London prisoners in April 2016. — BBC

“Last month the FAA said reports of drone-safety incidents, including flying improperly or getting too close to other aircraft, now average about 250 a month, up more than 50 percent from a year earlier,” according to a Nov. 2017 article by Bloomberg. “The reports include near-collisions described by pilots on airliners, law-enforcement helicopters or aerial tankers fighting wildfires.”

Worse, last winter, a criminal gang used a drone swarm to obstruct an FBI hostage raid, Defense One reported on May 3, 2018. The gang buzzed the hostage rescue team and fed video to the group’s other members via YouTube, according to Joe Mazel, the head of the FBI’s operational technology law unit.

“Some criminal organizations have begun to use drones as part of witness intimidation schemes: they continuously surveil police departments and precincts in order to see ‘who is going in and out of the facility and who might be co-operating with police,’ he revealed. … Drones are also playing a greater role in robberies and the like,” the article points out. “Beyond the well-documented incidence of house break-ins, criminal crews are using them to observe bigger target facilities, spot security gaps, and determine patterns of life: where the security guards go and when.”

“In Australia, criminal groups have begun using drones as part of elaborate smuggling schemes,” Mazel said. And Andrew Scharnweber, associate chief of U.S. Customs and Border Protection, “described how criminal networks were using drones to watch Border Patrol officers, identify their gaps in coverage, and exploit them. Cartels are able to move small amounts of high-value narcotics across the border via drones with ‘little or no fear of arrest,’ he said.”

Congressional bill H.R. 4, the FAA Reauthorization Act of 2018, attempts to address these problems: it would make it illegal to “weaponize” consumer drones and would require drones that fly beyond their operators’ line of sight to broadcast an identity code, allowing law enforcement to track them and connect them to a real person, the article noted.
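The bill does not specify what such an identity broadcast would look like. As a purely illustrative sketch (every field name here is hypothetical, not from the bill), a remote-ID beacon might pair a registration code with position and time, so that an observed broadcast can be mapped back to a registered owner:

```python
# Hypothetical sketch of a remote-ID broadcast payload. H.R. 4 does not
# define a message format; all field names here are illustrative only.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class RemoteIDBeacon:
    drone_id: str     # registration code tied to a real person
    lat: float        # current latitude
    lon: float        # current longitude
    alt_m: float      # altitude in meters
    timestamp: float  # Unix time of this broadcast

    def to_broadcast(self) -> bytes:
        # Serialize for periodic over-the-air transmission.
        return json.dumps(asdict(self)).encode("utf-8")

beacon = RemoteIDBeacon("FA-1234567", 38.8977, -77.0365, 120.0, time.time())
print(beacon.to_broadcast())
```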

How terrorists could use AI-enhanced autonomous drones


The Campaign to Stop Killer Robots, a coalition of AI researchers and advocacy organizations, released this fictional video to depict a disturbing future in which lethal autonomous weapons have become cheap and ubiquitous worldwide.

But the threat could soon get worse: next-generation drones might use AI-enabled swarming to become even more powerful and deadly, and terrorists might turn to self-driving vehicles for their next car bombs or assassinations, Defense One warned in another article on May 3, 2018.

“Max Tegmark’s book Life 3.0 notes the concern of UC Berkeley computer scientist Stuart Russell, who worries that the biggest winners from an AI arms race would be ‘small rogue states and non-state actors such as terrorists’ who can access these weapons through the black market,” the article notes.

“Tegmark writes that mass-produced, small AI-powered killer drones ‘are likely to cost little more than a smartphone.’ Would-be assassins could simply ‘upload their target’s photo and address into the killer drone: it can then fly to the destination, identify and eliminate the person, and self-destruct to ensure that nobody knows who was responsible.’”

* The 10 selectees are:

  • Choctaw Nation of Oklahoma, Durant, OK
  • City of San Diego, CA
  • Virginia Tech – Center for Innovative Technology, Herndon, VA
  • Kansas Department of Transportation, Topeka, KS
  • Lee County Mosquito Control District, Ft. Myers, FL
  • Memphis-Shelby County Airport Authority, Memphis, TN
  • North Carolina Department of Transportation, Raleigh, NC
  • North Dakota Department of Transportation, Bismarck, ND
  • City of Reno, NV
  • University of Alaska-Fairbanks, Fairbanks, AK

round-up | Hawking’s radical instant-universe-as-hologram theory and the scary future of information warfare

A timeline of the Universe based on the cosmic inflation theory (credit: WMAP science team/NASA)

Stephen Hawking’s final cosmology theory says the universe was created instantly (no inflation, no singularity) and it’s a hologram

There was no singularity just after the big bang (and thus, no eternal inflation) — the universe was created instantly. And there were only three dimensions. So there’s only one finite universe, not a fractal or a multiverse — and we’re living in a projected hologram. That’s what Hawking and co-author Thomas Hertog (a theoretical physicist at the Catholic University of Leuven) have concluded — contradicting Hawking’s former big-bang singularity theory (with time as a dimension).

Problem: So how does time finally emerge? “There’s a lot of work to be done,” admits Hertog. Citation (open access): Journal of High Energy Physics, May 2, 2018. Source (open access): Science, May 2, 2018


Movies capture the dynamics of an RNA molecule from the HIV-1 virus. (photo credit: Yu Xu et al.)

Molecular movies of RNA guide drug discovery — a new paradigm for drug discovery

Duke University scientists have invented a technique that combines nuclear magnetic resonance imaging and computationally generated movies to capture the rapidly changing states of an RNA molecule.

It could lead to new drug targets and allow for screening millions of potential drug candidates. So far, the technique has predicted 78 compounds (and their preferred molecular shapes) with anti-HIV activity, out of 100,000 candidate compounds. Citation: Nature Structural and Molecular Biology, May 4, 2018. Source: Duke University, May 4, 2018.


Chromium tri-iodide magnetic layers between graphene conductors. By using four layers, the storage density could be multiplied. (credit: Tiancheng Song)

Atomically thin magnetic memory

University of Washington scientists have developed the first 2D (in a flat plane) atomically thin magnetic memory — encoding information using magnets that are just a few layers of atoms in thickness — a miniaturized, high-efficiency alternative to current disk-drive materials.

In an experiment, the researchers sandwiched two atomic layers of chromium tri-iodide (CrI3) — acting as memory bits — between graphene contacts and measured the on/off electron flow through the atomic layers.

The U.S. Dept. of Energy-funded research could dramatically increase future data-storage density while reducing energy consumption by orders of magnitude. Citation: Science, May 3, 2018. Source: University of Washington, May 3, 2018.


Definitions of artificial intelligence (credit: House of Lords Select Committee on Artificial Intelligence)

A Magna Carta for the AI age

A report by the House of Lords Select Committee on Artificial Intelligence in the U.K. lays out “an overall charter for AI that can frame practical interventions by governments and other public agencies.”

The key elements: AI should:

  • Be developed for the common good.
  • Operate on principles of intelligibility and fairness: users must be able to easily understand the terms under which their personal data will be used.
  • Respect rights to privacy.
  • Be grounded in far-reaching changes to education: teaching needs reform to utilize digital resources, and students must learn not only digital skills but also how to develop a critical perspective online.
  • Never be given the autonomous power to hurt, destroy or deceive human beings.

Source: The Washington Post, May 2, 2018.


(credit: CB Insights)

The future of information warfare

Memes and social networks have become weaponized, but many governments seem ill-equipped to understand the new reality of information warfare.

The weapons include:

  • Computational propaganda: digitizing the manipulation of public opinion
  • Advanced digital deception technologies
  • Malicious AI impersonating and manipulating people
  • AI-generated fake video and audio

The counter-weapons include:

  • Spotting AI-generated people
  • Uncovering hidden metadata to authenticate images and videos
  • Blockchain for tracing digital content back to the source
  • Detecting image and video manipulation at scale

Source (open-access): CB Insights Research Brief, May 3, 2018.
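Most of those counter-weapons rest on a simple primitive: fingerprinting content so later manipulation is detectable. Here is a minimal sketch (not any specific product’s method) of hashing a media file at capture time and re-checking it later; real provenance systems add digital signatures and richer metadata:

```python
# Minimal sketch of content fingerprinting, the primitive behind
# metadata authentication and blockchain-based provenance schemes.
# Illustrative only; real systems add signatures and metadata.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file's bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, published_digest: str) -> bool:
    """True if the file still matches its published fingerprint."""
    return fingerprint(path) == published_digest

# digest = fingerprint("video.mp4")  # record at capture time (e.g., on a ledger)
# verify("video.mp4", digest)        # later: any manipulation changes the hash
```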

The Doomsday Clock is now two minutes before midnight

(credit: Bulletin of the Atomic Scientists)

Citing growing nuclear risks and unchecked climate dangers, the Bulletin of the Atomic Scientists announced today (Jan. 25) that the Doomsday Clock now stands at two minutes to midnight, the symbolic point of annihilation. That is the closest the Clock has been since 1953, at the height of the Cold War.

“In 2017, world leaders failed to respond effectively to the looming threats of nuclear war and climate change, making the world security situation more dangerous than it was a year ago — and as dangerous as it has been since World War II,” according to the Atomic Scientists’ Science and Security Board in consultation with the Board of Sponsors, which includes 15 Nobel Laureates.


“This is a dangerous time, but the danger is of our own making. Humankind has invented the implements of apocalypse; so can it invent the methods of controlling and eventually eliminating them. This year, leaders and citizens of the world can move the Doomsday Clock and the world away from the metaphorical midnight of global catastrophe by taking common-sense action.” — Lawrence Krauss, director of the Origins Project at Arizona State University, Foundation Professor at School of Earth and Space Exploration and Physics Department, Arizona State University, and chair, Bulletin of the Atomic Scientists’ Board of Sponsors.


The increased risks driving the decision to move the clock include:

Nuclear. Hyperbolic rhetoric and provocative actions from North Korea and the U.S. have increased the possibility of nuclear war by accident or miscalculation. Other risk factors include U.S.-Russian military entanglements, tensions in the South China Sea, escalating rhetoric between Pakistan and India, and uncertainty about continued U.S. support for the Iran nuclear deal.

Decline of U.S. leadership and a related demise of diplomacy under the Trump Administration. “In 2017, the United States backed away from its longstanding leadership role in the world, reducing its commitment to seek common ground and undermining the overall effort toward solving pressing global governance challenges. Neither allies nor adversaries have been able to reliably predict U.S. actions or understand when U.S. pronouncements are real and when they are mere rhetoric. International diplomacy has been reduced to name-calling, giving it a surrealistic sense of unreality that makes the world security situation ever more threatening.”

Climate change. “The nations of the world will have to significantly decrease their greenhouse gas emissions to keep climate risks manageable, and so far, the global response has fallen far short of meeting this challenge.”

How to #RewindtheDoomsdayClock

According to Bulletin of the Atomic Scientists:

* U.S. President Donald Trump should refrain from provocative rhetoric regarding North Korea, recognizing the impossibility of predicting North Korean reactions. The U.S. and North Korean governments should open multiple channels of communication.

* The world community should pursue, as a short-term goal, the cessation of North Korea’s nuclear weapon and ballistic missile tests. North Korea is the only country to violate the norm against nuclear testing in 20 years.

* The Trump administration should abide by the terms of the Joint Comprehensive Plan of Action for Iran’s nuclear program unless credible evidence emerges that Iran is not complying with the agreement or Iran agrees to an alternative approach that meets U.S. national security needs.

* The United States and Russia should discuss and adopt measures to prevent peacetime military incidents along the borders of NATO.

* U.S. and Russian leaders should return to the negotiating table to resolve differences over the INF treaty, to seek further reductions in nuclear arms, to discuss a lowering of the alert status of the nuclear arsenals of both countries, to limit nuclear modernization programs that threaten to create a new nuclear arms race, and to ensure that new tactical or low-yield nuclear weapons are not built, and existing tactical weapons are never used on the battlefield.

* U.S. citizens should demand, in all legal ways, climate action from their government. Climate change is a real and serious threat to humanity.

* Governments around the world should redouble their efforts to reduce greenhouse gas emissions so they go well beyond the initial, inadequate pledges under the Paris Agreement.

* The international community should establish new protocols to discourage and penalize the misuse of information technology to undermine public trust in political institutions, in the media, in science, and in the existence of objective reality itself.

Worldwide deployments of nuclear weapons, 2017

“As of mid-2017, there are nearly 15,000 nuclear weapons in the world, located at some 107 sites in 14 countries. Roughly 9400 of these weapons are in military arsenals; the remaining weapons are retired and awaiting dismantlement. Nearly 4000 are operationally available, and some 1800 are on high alert and ready for use on short notice.

“By far, the largest concentrations of nuclear weapons reside in Russia and the United States, which possess 93 percent of the total global inventory. In addition to the seven other countries with nuclear weapon stockpiles (Britain, France, China, Israel, India, Pakistan, and North Korea), five nonnuclear NATO allies (Belgium, Germany, Italy, the Netherlands, and Turkey) host about 150 US nuclear bombs at six air bases.”

Hans M. Kristensen & Robert S. Norris, “Worldwide deployments of nuclear weapons, 2017,” Bulletin of the Atomic Scientists, pp. 289–297, published online Aug. 31, 2017.

DARPA-funded ‘unhackable’ computer could avoid future flaws like Spectre and Meltdown

(credit: University of Michigan)

A University of Michigan (U-M) team has announced plans to develop an “unhackable” computer, funded by a new $3.6 million grant from the Defense Advanced Research Projects Agency (DARPA).

The goal of the project, called MORPHEUS, is to design computers that avoid the vulnerabilities of most current microprocessors, such as the Spectre and Meltdown flaws announced last week.*

The $50 million DARPA System Security Integrated Through Hardware and Firmware (SSITH) program aims to build security right into chips’ microarchitecture, instead of relying on software patches.*

The U-M grant is one of nine that DARPA has recently funded through SSITH.

Future-proofing

The idea is to protect against future threats that have yet to be identified. “Instead of relying on software Band-Aids to hardware-based security issues, we are aiming to remove those hardware vulnerabilities in ways that will disarm a large proportion of today’s software attacks,” said Linton Salmon, manager of DARPA’s System Security Integrated Through Hardware and Firmware program.

Under MORPHEUS, the location of passwords would constantly change, for example. And even if an attacker were quick enough to locate the data, secondary defenses in the form of encryption and domain enforcement would throw up additional roadblocks.
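MORPHEUS itself works at the hardware level, and its internals aren’t public, so the following is only a toy software analogy of the moving-target idea the article describes: a secret that is periodically relocated and re-encrypted with a fresh key, so any snapshot an attacker captures quickly goes stale. All names here are hypothetical:

```python
# Toy software analogy of a moving-target defense (MORPHEUS itself does
# this in hardware; this sketch is illustrative only). The secret is
# periodically moved to a new random "address" and re-encrypted with a
# fresh one-time key, so a stolen snapshot quickly becomes useless.
import secrets

class MovingSecret:
    def __init__(self, secret: bytes):
        self._memory = {}  # stand-in for an address space
        self._store(secret)

    def _store(self, plaintext: bytes):
        self._key = secrets.token_bytes(len(plaintext))        # fresh key
        ciphertext = bytes(a ^ b for a, b in zip(plaintext, self._key))
        self._addr = secrets.token_hex(8)                      # new location
        self._memory = {self._addr: ciphertext}

    def read(self) -> bytes:
        ciphertext = self._memory[self._addr]
        return bytes(a ^ b for a, b in zip(ciphertext, self._key))

    def churn(self):
        """Relocate and re-key the secret (run this on a timer)."""
        self._store(self.read())

s = MovingSecret(b"hunter2")
s.churn()  # an attacker's earlier snapshot of location/key is now stale
assert s.read() == b"hunter2"
```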

More than 40 percent of the “software doors” that hackers have available to them today would be closed if researchers could eliminate seven classes of hardware weaknesses**, according to DARPA.

DARPA is aiming to render these attacks impossible within five years. “If developed, MORPHEUS could do it now,” said Todd Austin, U-M professor of computer science and engineering, who leads the project. Researchers at the University of Texas and Princeton University are also working with U-M.

* Apple released today (Jan. 8) iOS 11.2.2 and macOS 10.13.2 updates with a Spectre fix for Safari and WebKit, according to Macworld. Threatpost has an update (as of Jan. 7) on efforts by Intel and others in dealing with the Meltdown and Spectre processor vulnerabilities.

** Permissions and privileges, buffer errors, resource management, information leakage, numeric errors, crypto errors, and code injection.
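Of those seven classes, code injection is the easiest to show in a few lines. The Python sketch below illustrates the software version of the pattern (SSITH targets hardware-level analogues of these weakness classes, so this is conceptual only):

```python
# Conceptual illustration of the "code injection" weakness class.
def lookup_unsafe(user_input: str) -> object:
    # BAD: attacker-controlled text is evaluated as code; input like
    # "__import__('os').system('...')" would execute arbitrary commands.
    return eval(user_input)

def lookup_safe(user_input: str, table: dict) -> str:
    # GOOD: input is treated purely as data, never as code.
    return table.get(user_input, "unknown")

print(lookup_safe("meltdown", {"meltdown": "CVE-2017-5754"}))
```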

UPDATE 1/9/2018: BLUE-SCREEN ALERT: Read this if you have a Windows computer with an AMD processor: Microsoft announced today it has temporarily paused sending some Windows operating system updates (intended to protect against Spectre and Meltdown chipset vulnerabilities) to devices that have impacted AMD processors. “Microsoft has received reports of some AMD devices getting into an unbootable state after installation of recent Windows operating system security updates.”

Disturbing video depicts near-future ubiquitous lethal autonomous weapons


Campaign to Stop Killer Robots | Slaughterbots

In response to growing concerns about autonomous weapons, the Campaign to Stop Killer Robots, a coalition of AI researchers and advocacy organizations, has released a fictional video that depicts a disturbing future in which lethal autonomous weapons have become cheap and ubiquitous worldwide.

UC Berkeley AI researcher Stuart Russell presented the video at the United Nations Convention on Certain Conventional Weapons in Geneva, hosted by the Campaign to Stop Killer Robots earlier this week. Russell, in an appearance at the end of the video, warns that the technology described in the film already exists* and that the window to act is closing fast.

Support for a ban on autonomous weapons has been mounting. On Nov. 2, more than 200 Canadian scientists and more than 100 Australian scientists in academia and industry penned open letters to Canadian Prime Minister Justin Trudeau and Australian Prime Minister Malcolm Turnbull, urging them to support the ban.

Earlier this summer, more than 130 leaders of AI companies signed a letter in support of this week’s discussions. These letters follow a 2015 open letter released by the Future of Life Institute and signed by more than 20,000 AI/robotics researchers and others, including Elon Musk and Stephen Hawking.

“Many of the world’s leading AI researchers worry that if these autonomous weapons are ever developed, they could dramatically lower the threshold for armed conflict, ease and cheapen the taking of human life, empower terrorists, and create global instability,” according to an article published by the Future of Life Institute, which funded the video. “The U.S. and other nations have used drones and semi-automated systems to carry out attacks for several years now, but fully removing a human from the loop is at odds with international humanitarian and human rights law.”

“The Campaign to Stop Killer Robots is not trying to stifle innovation in artificial intelligence and robotics, and it does not wish to ban autonomous systems in the civilian or military world,” explained Noel Sharkey of the International Committee for Robot Arms Control. “Rather, we see an urgent need to prevent automation of the critical functions for selecting targets and applying violent force without human deliberation, and to ensure meaningful human control for every attack.”


* As suggested in this U.S. Department of Defense video:


Perdix Drone Swarm – Fighters Release Hive-mind-controlled Weapon UAVs in Air | U.S. Naval Air Systems Command

Why futurist Ray Kurzweil isn’t worried about technology stealing your job — Fortune

1985: Ray Kurzweil looks on as Stevie Wonder experiences the Kurzweil 250, the first synthesizer to accurately reproduce the sounds of the piano — replacing piano-maker jobs but adding many more jobs for musicians (credit: Kurzweil Music Systems)

Last week, Fortune magazine asked Ray Kurzweil to comment on some often-expressed questions about the future.

Does AI pose an existential threat to humanity?

Kurzweil sees the future as nuanced, notes writer Michal Lev-Ram. “A lot of people think things are getting worse, partly because that’s actually an evolutionary adaptation,” Kurzweil said. “It’s very important for your survival to be sensitive to bad news. … I think if you look at history, though, we’re being helped [by new technology] more than we’re being hurt.”

How will artificial intelligence and other technologies impact jobs?

“We have already eliminated all jobs several times in human history,” said Kurzweil, pointing out that “for every job we eliminate, we’re going to create more jobs at the top of the skill ladder. … You can’t describe the new jobs, because they’re in industries and concepts that don’t exist yet.”

Why are we so bad at predicting certain things? For example, Donald Trump winning the presidency?

Kurzweil: “He’s not technology.”

Read the Fortune article here.


‘Fog computing’ could improve communications during natural disasters

Hurricane Irma at peak intensity near the U.S. Virgin Islands on September 6, 2017 (credit: NOAA)

Researchers at the Georgia Institute of Technology have developed a system that uses edge computing (also known as fog computing) to deal with the loss of internet access in natural disasters such as hurricanes, tornados, and floods.

The idea is to create an ad hoc decentralized network that uses computing power built into mobile phones, routers, and other hardware to provide actionable data to emergency managers and first responders.

In a flooded area, for example, search and rescue personnel could continuously ping enabled phones, surveillance cameras, and “internet of things” devices in an area to determine their exact locations. That data could then be used to create density maps of people to prioritize and guide emergency response teams.
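A minimal sketch of that density-map step might look like the following (the data, grid size, and function names are hypothetical): bin location pings from nearby devices into coarse grid cells and rank the cells by count.

```python
# Minimal sketch of the density-map step: bin location pings from
# nearby devices into a coarse grid. Data and cell size are hypothetical.
from collections import Counter

def density_map(pings, cell_deg=0.001):  # ~110 m cells of latitude
    """pings: iterable of (device_id, lat, lon) tuples."""
    cells = Counter()
    for _, lat, lon in pings:
        cells[(round(lat / cell_deg), round(lon / cell_deg))] += 1
    return cells

pings = [("phone-1", 29.7604, -95.3698),
         ("phone-2", 29.7605, -95.3699),
         ("cam-7",   29.7700, -95.3600)]
for cell, count in density_map(pings).most_common():
    print(cell, count)  # highest-count cells are prioritized for rescue
```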

Situational awareness for first responders

“We believe fog computing can become a potent enabler of decentralized, local social sensing services that can operate when internet connectivity is constrained,” said Kishore Ramachandran, PhD, computer science professor at Georgia Tech and senior author of a paper presented in April this year at the 2nd International Workshop on Social Sensing*.

“This capability will provide first responders and others with the level of situational awareness they need to make effective decisions in emergency situations.”

The team has proposed a generic software architecture for social sensing applications that is capable of exploiting the fog-enabled devices. The design has three components: a central management function that resides in the cloud, a data processing element placed in the fog infrastructure, and a sensing component on the user’s device.

Beyond emergency response during natural disasters, the team believes its proposed fog architecture can also benefit communities with limited or no internet access — for public transportation management, job recruitment, and housing, for example.

To monitor far-flung devices in areas with no internet access, a bus or other vehicle could be outfitted with fog-enabled sensing capabilities, the team suggests. As it travels in remote areas, it would collect data from sensing devices. Once in range of internet connectivity, the “data mule” bus would upload that information to centralized cloud-based platforms.
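In code, the “data mule” behavior is a classic store-and-forward pattern. Here is a brief sketch under the same assumptions (the connectivity check and upload callback are hypothetical stand-ins):

```python
# Sketch of the "data mule" store-and-forward pattern: buffer sensor
# readings while offline, flush to the cloud when connectivity returns.
# `has_connectivity` and `upload` are hypothetical stand-ins.
class DataMule:
    def __init__(self):
        self.buffer = []

    def collect(self, reading: dict):
        """Called as the bus passes each remote sensing device."""
        self.buffer.append(reading)

    def try_flush(self, has_connectivity, upload):
        """Upload and clear the buffer once back in internet range."""
        if has_connectivity() and self.buffer:
            upload(self.buffer)
            self.buffer = []

mule = DataMule()
mule.collect({"sensor": "water-level-3", "value": 1.8})
mule.try_flush(lambda: True,
               lambda batch: print("uploaded", len(batch), "readings"))
```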

* “Social sensing has emerged as a new paradigm for collecting sensory measurements by means of ‘crowd-sourcing’ sensory data collection tasks to a human population. Humans can act as sensor carriers (e.g., carrying GPS devices that share location data), sensor operators (e.g., taking pictures with smart phones), or as sensors themselves (e.g., sharing their observations on Twitter). The proliferation of sensors in the possession of the average individual, together with the popularity of social networks that allow massive information dissemination, heralds an era of social sensing that brings about new research challenges and opportunities in this emerging field.” — SocialSens2017

Leading AI country will be ‘ruler of the world,’ says Putin

DoD autonomous drone swarms concept (credit: U.S. Dept. of Defense)

Russian President Vladimir Putin warned Friday (Sept. 1, 2017) that the country that becomes the leader in developing artificial intelligence will be “the ruler of the world,” reports the Associated Press.

AI development “raises colossal opportunities and threats that are difficult to predict now,” Putin said in a lecture to students, warning that “it would be strongly undesirable if someone wins a monopolist position.”

Future wars will be fought by autonomous drones, Putin suggested, and “when one party’s drones are destroyed by drones of another, it will have no other choice but to surrender.”

U.N. urged to address lethal autonomous weapons

AI experts worldwide are also concerned. On August 20, 116 founders of robotics and artificial intelligence companies from 26 countries, including Elon Musk* and Google DeepMind’s Mustafa Suleyman, signed an open letter asking the United Nations to “urgently address the challenge of lethal autonomous weapons (often called ‘killer robots’) and ban their use internationally.”

“Lethal autonomous weapons threaten to become the third revolution in warfare,” the letter states. “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

Unfortunately, the box may have already been opened. Three examples:

Russia. In 2014, Dmitry Andreyev of the Russian Strategic Missile Forces announced that mobile robots would be standing guard over five ballistic missile installations, New Scientist reported. Armed with a heavy machine gun, this “mobile robotic complex … can detect and destroy targets, without human involvement.”

Uran-9 unmanned combat ground vehicle (credit: Vitaly V. Kuzmin/CC)

In 2016, Russian military equipment manufacturer JSC 766 UPTK announced what appears to be the commercial version: the Uran-9 multipurpose unmanned ground combat vehicle. “In autonomous mode, the vehicle can automatically identify, detect, track and defend [against] enemy targets based on the pre-programmed path set by the operator,” the company said.

United States. In a 2016 report, the U.S. Department of Defense advocated self-organizing “autonomous unmanned” (UA) swarms of small drones that would assist frontline troops in real time by surveillance, jamming/spoofing enemy electronics, and autonomously firing against the enemy.

The authors warned that “autonomy — fueled by advances in artificial intelligence — has attained a ‘tipping point’ in value. Autonomous capabilities are increasingly ubiquitous and are readily available to allies and adversaries alike.” The report advised that the Department of Defense “must take immediate action to accelerate its exploitation of autonomy while also preparing to counter autonomy employed by adversaries.”**

South Korea. Initially designed for the DMZ, the Super aEgis II, a robot-sentry machine gun from Dodaam Systems, can identify, track, and automatically destroy a human target 3 kilometers away, assuming that capability is turned on.

* “China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo.” — Elon Musk tweet 2:33 AM – 4 Sep 2017

** While it doesn’t use AI, the U.S. Navy’s computer-controlled, radar-guided Phalanx gun system can automatically detect, track, evaluate, and fire at incoming missiles and aircraft that it judges to be a threat.

UPDATE Sept. 5, 2017: Added Musk tweet in footnote

Elon Musk wants to enhance us as superhuman cyborgs to deal with superintelligent AI

(credit: Neuralink Corp.)

It’s the year 2021. A quadriplegic patient has just had one million “neural lace” microparticles injected into her brain, making her the world’s first human with an internet communication system using a wireless implanted brain-machine interface — and the first superhuman cyborg. …

No, this is not a science-fiction movie plot. It’s the actual first public step — just four years from now — in Tesla CEO Elon Musk’s business plan for his latest new venture, Neuralink. It’s now explained for the first time on Tim Urban’s WaitButWhy blog.

Dealing with the superintelligence existential risk

Such a system would allow for radically improved communication between people, Musk believes. But for Musk, the big concern is AI safety. “AI is obviously going to surpass human intelligence by a lot,” he says. “There’s some risk at that point that something bad happens, something that we can’t control, that humanity can’t control after that point — either a small group of people monopolize AI power, or the AI goes rogue, or something like that.”

“This is what keeps Elon up at night,” says Urban. “He sees it as only a matter of time before superintelligent AI rises up on this planet — and when that happens, he believes that it’s critical that we don’t end up as part of ‘everyone else.’ That’s why, in a future world made up of AI and everyone else, he thinks we have only one good option: To be AI.”

Neural dust: an ultrasonic, low power solution for chronic brain-machine interfaces (credit: Swarm Lab/UC Berkeley)

To achieve this, Neuralink CEO Musk has met with more than 1,000 people, initially narrowing the field to eight experts, such as Paul Merolla, who spent the last seven years as the lead chip designer at IBM on its DARPA-funded SyNAPSE program to design neuromorphic (brain-inspired) chips with 5.4 billion transistors (each with 1 million neurons and 256 million synapses), and Dongjin (DJ) Seo, who while at UC Berkeley designed “neural dust,” an ultrasonic backscatter system for powering and communicating with implanted bioelectronics to record brain activity.*

Mesh electronics being injected through sub-100 micrometer inner diameter glass needle into aqueous solution (credit: Lieber Research Group, Harvard University)

Becoming one with AI — a good thing?

Neuralink’s goal is to create a “digital tertiary layer” to augment the brain’s current cortex and limbic layers — a radical high-bandwidth, long-lasting, biocompatible, bidirectional, non-invasively implanted communication system made up of micron-size (millionth of a meter) particles communicating wirelessly via the cloud and internet to achieve super-fast communication speed and increased bandwidth (carrying more information).

“We’re going to have the choice of either being left behind and being effectively useless or like a pet — you know, like a house cat or something — or eventually figuring out some way to be symbiotic and merge with AI. … A house cat’s a good outcome, by the way,” says Musk.

Thin, flexible electrodes mounted on top of a biodegradable silk substrate could provide a better brain-machine interface, as shown in this model. (credit: University of Illinois at Urbana-Champaign)

But machine intelligence is already vastly superior to human intelligence in specific areas (such as Google DeepMind’s AlphaGo), and its reasoning is often inexplicable. So how do we know superintelligence would have the best interests of humanity in mind?

“Just an engineering problem”

Musk’s answer: “If we achieve tight symbiosis, the AI wouldn’t be ‘other’ — it would be you, with a relationship to your cortex analogous to the relationship your cortex has with your limbic system.” OK, but then how does an inferior intelligence know when it has achieved full symbiosis with a superior one — or when the AI goes rogue?

Brain-to-brain (B2B) internet communication system: EEG signals representing two words were encoded into binary strings (left) by the sender (emitter) and sent via the internet to a receiver. The signal was then encoded as a series of transcranial magnetic stimulation-generated phosphenes detected by the visual occipital cortex, which the receiver then translated to words (credit: Carles Grau et al./PLoS ONE)

And what about experts in neuroethics, psychology, law? Musk says it’s just “an engineering problem. … If we can just use engineering to get neurons to talk to computers, we’ll have done our job, and machine learning can do much of the rest.”

However, it’s not clear how we could be assured our brains aren’t hacked, spied on, and controlled by a repressive government or by other humans — especially those with a more recently updated software version or covert cyborg hardware improvements.

NIRS/EEG brain-computer interface system using non-invasive near-infrared light for sensing “yes” or “no” thoughts, shown on a model (credit: Wyss Center for Bio and Neuroengineering)

In addition, the devices mentioned in WaitButWhy all require some form of neurosurgery, unlike Facebook’s research project to use non-invasive near-infrared light, as shown in this experiment, for example.** And getting implants for non-medical use approved by the FDA will be a challenge, to grossly understate it.

“I think we are about 8 to 10 years away from this being usable by people with no disability,” says Musk, optimistically. However, Musk does not lay out a technology roadmap for going further, as MIT Technology Review notes.

Nonetheless, Neuralink sounds awesome — it should lead to some exciting neuroscience breakthroughs. And Neuralink now has 16 San Francisco job listings here.

* Other experts: Vanessa Tolosa, Lawrence Livermore National Laboratory, one of the world’s foremost researchers on biocompatible materials; Max Hodak, who worked on the development of some groundbreaking BMI technology at Miguel Nicolelis’s lab at Duke University; Ben Rapoport, Neuralink’s neurosurgery expert, with a Ph.D. in electrical engineering and computer science from MIT; Tim Hanson, UC Berkeley post-doc and expert in flexible electrodes for stable, minimally invasive neural recording; Flip Sabes, professor at the UCSF School of Medicine and expert in cortical physiology, computational and theoretical modeling, and human psychophysics and physiology; and Tim Gardner, associate professor of biology at Boston University, whose lab works on implanting BMIs in birds to study “how complex songs are assembled from elementary neural units” and learn about “the relationships between patterns of neural activity on different time scales.”

** This binary experiment and the binary brain-to-brain (B2B) internet communication system mentioned above are the equivalents of the first binary (dot–dash) telegraph message, sent May 24, 1844: “What hath God wrought?”