The first autonomous soft robot powered only by a chemical reaction

Harvard’s “octobot” is powered by a chemical reaction and controlled with a soft logic board: a reaction inside the bot transforms a small amount of liquid fuel (hydrogen peroxide) into a large amount of oxygen gas, which flows into the octobot’s arms and inflates them like balloons, while a microfluidic logic circuit — a soft analogue of a simple electronic oscillator — controls when the decomposition occurs. Octopuses have long been a source of inspiration in soft robotics: these curious creatures can perform incredible feats of strength and dexterity with no internal skeleton. (SD card shown for scale only.) (credit: Lori Sanders)

The first autonomous, untethered, entirely soft 3-D-printed robot (powered only by a chemical reaction) has been demonstrated by a team of Harvard University researchers and described in the journal Nature.

Nicknamed “octobot,” the bot combines soft lithography, molding, and 3-D printing.

“One longstanding vision for the field of soft robotics has been to create robots that are entirely soft, but the struggle has always been in replacing rigid components like batteries and electronic controls with analogous soft systems and then putting it all together,” said Harvard professor Robert Wood. “This research demonstrates that we can easily manufacture the key components of a simple, entirely soft robot, which lays the foundation for more complex designs.”

Powered by hydrogen peroxide

Octobot structure. A system of check valves and switch valves within the soft controller regulates fluid flow into and through the system. The reaction chambers convert the hydrogen peroxide to oxygen, which then inflates the bot arms. The 500-micrometer-high “VERITAS” letters are patterned into the soft controller as an indication of scale. (credit: Michael Wehner et al./Nature)

Harvard’s octobot is pneumatic-based — powered by gas under pressure. A reaction inside the bot transforms a small amount of liquid fuel (hydrogen peroxide) into a large amount of gas, which flows into the octobot’s arms and inflates them like balloons. To control the reaction, the team used a microfluidic logic circuit based on pioneering work by co-author and chemist George Whitesides.

Octobot mechanical schematic (top) and electronic analogue (bottom). Check valves, fuel tanks, oscillator, reaction chambers, actuators and vent orifices are analogous to diodes, supply capacitors, electrical oscillator, amplifiers, capacitors and pull-down resistors, respectively. (credit: Michael Wehner et al./Nature)

The circuit, a soft analogue of a simple electronic oscillator, controls when hydrogen peroxide decomposes to gas in the octobot, triggering actuators.
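To make the oscillator analogy concrete, here is a minimal Python sketch of the behavior described above, treating each arm group as an RC-like element that “charges” while its reaction chamber generates gas and “discharges” through its vent orifice. All constants (time constants, supply pressure, toggle period) are illustrative round numbers, not values from the Nature paper.

```python
# Toy simulation of the octobot's electronic-analogue oscillator: fuel is
# routed alternately to two sets of arms; the fueled group's pressure rises
# toward the supply level, while the idle group vents through its orifice
# (the "pull-down resistor"). All constants are illustrative.

DT = 0.01           # integration step (s)
TAU_CHARGE = 1.0    # charging time constant while fuel decomposes (s)
TAU_VENT = 3.0      # venting time constant through the orifice (s)
P_SUPPLY = 1.0      # normalized pressure from gas generation
HALF_PERIOD = 5.0   # oscillator toggles the fueled half every 5 s

p = [0.0, 0.0]      # normalized pressures in the two actuator groups
for n in range(int(20.0 / DT)):
    t = n * DT
    active = int(t / HALF_PERIOD) % 2       # which half receives fuel now
    for i in (0, 1):
        target = P_SUPPLY if i == active else 0.0
        tau = TAU_CHARGE if i == active else TAU_VENT
        p[i] += DT * (target - p[i]) / tau  # first-order RC-style response
    if n % 250 == 0:
        print(f"t={t:5.2f}s  arms1={p[0]:.2f}  arms2={p[1]:.2f}")
```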

The proof-of-concept octobot design could pave the way for a new generation of such machines, which could help revolutionize how humans interact with machines, the researchers suggest. They hope their approach for creating autonomous soft robots inspires roboticists, material scientists, and researchers focused on advanced manufacturing.

Next, the Harvard team hopes to design an octobot that can crawl, swim, and interact with its environment.

Robert Wood, the Charles River Professor of Engineering and Applied Sciences, and Jennifer A. Lewis, the Hansjorg Wyss Professor of Biologically Inspired Engineering, at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), led the research. Lewis and Wood are also core faculty members of the Wyss Institute for Biologically Inspired Engineering at Harvard University. George Whitesides is the Woodford L. and Ann A. Flowers University Professor and a core faculty member of the Wyss.

The research was supported by the National Science Foundation through the Materials Research Science and Engineering Center at Harvard and by the Wyss Institute.


Harvard University | Introducing the Octobot


Harvard University | Powering the Octobot: A chemical reaction


Abstract of An integrated design and fabrication strategy for entirely soft, autonomous robots

Soft robots possess many attributes that are difficult, if not impossible, to achieve with conventional robots composed of rigid materials. Yet, despite recent advances, soft robots must still be tethered to hard robotic control systems and power sources. New strategies for creating completely soft robots, including soft analogues of these crucial components, are needed to realize their full potential. Here we report the untethered operation of a robot composed solely of soft materials. The robot is controlled with microfluidic logic that autonomously regulates fluid flow and, hence, catalytic decomposition of an on-board monopropellant fuel supply. Gas generated from the fuel decomposition inflates fluidic networks downstream of the reaction sites, resulting in actuation. The body and microfluidic logic of the robot are fabricated using moulding and soft lithography, respectively, and the pneumatic actuator networks, on-board fuel reservoirs and catalytic reaction chambers needed for movement are patterned within the body via a multi-material, embedded 3D printing technique. The fluidic and elastomeric architectures required for function span several orders of magnitude from the microscale to the macroscale. Our integrated design and rapid fabrication approach enables the programmable assembly of multiple materials within this architecture, laying the foundation for completely soft, autonomous robots.

Harvard, Caltech design mechanical signaling, diodes, logic gates for soft robots

The Harvard/Caltech system for transmitting a mechanical signal consists of a series of bistable elements — each with two stable states (the vertical beam, d, shown here) — connected by soft coupling elements (wiggly lines). (Top) When a beam is displaced (by amount x), it stores energy. (Bottom) When it snaps back, it releases that stored energy into the coupling element on the right, and the signal continues down the line, like dominos. (Scale bars represent 5 mm.) (credit: Jordan R. Raney/PNAS)

A new way to send mechanical signals through soft robots and other autonomous soft systems has been developed by researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), in collaboration with colleagues at the California Institute of Technology, and is described in the journal Proceedings of the National Academy of Sciences.

Soft autonomous systems, like the human body, can perform delicate movements that are safe around humans, unlike rigid, wire-driven mechanical actuators. The problem is that a mechanical signal sent through a soft material — to make a robot “muscle” move, for example — quickly becomes dissipated (weakened) and dispersed (scattered).

Think of tapping on a solid wall to communicate in Morse code with someone in the next room vs. tapping out a muffled message on a wall covered with thick, soft foam.

Transmitting signals through soft materials

The researchers solved this problem by using “bistable beams” (structures that function in two distinct states) to store and release elastic energy along the path of a wave.

This new system consists of a chain of bistable elastomeric (rubber-like) beam structures connected by elastomeric linear springs. Each deformed (bent) beam stores elastic energy; when the signal arrives, it snaps the beam back into place, releasing that stored energy into the next coupling element and sending the signal downstream, like a line of dominos. This stored-energy boost prevents the signal from dissipating as it travels.
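Here is a minimal numerical sketch of that domino picture: a chain of masses, each sitting in an asymmetric double-well potential (two stable states, one of them higher-energy), coupled by linear springs and damped. Snapping the first element launches a transition wave that feeds on the stored energy as it travels. All parameters are illustrative, not fitted to the PNAS experiments; depending on the constants, the wave may run the whole chain or stall partway.

```python
# Toy 1-D model of the bistable-beam chain: an on-site double-well force
# with stable states at u=0 (higher energy) and u=1 (lower energy), linear
# neighbor springs, and damping. Snapping element 0 starts the wave.

N = 60             # number of bistable elements in the chain
K = 0.5            # coupling spring stiffness
C = 0.2            # damping (soft polymers are highly dissipative)
A = 0.15           # barrier position; the u=1 well is deeper than u=0
DT = 0.05

u = [0.0] * N      # every element starts in its higher-energy state
v = [0.0] * N
u[0] = 1.0         # trigger: snap the first element into the deep well

for step in range(4000):
    for i in range(N):
        left = u[i - 1] if i > 0 else u[i]
        right = u[i + 1] if i < N - 1 else u[i]
        snap = -u[i] * (u[i] - A) * (u[i] - 1.0)   # bistable on-site force
        spring = K * (left + right - 2.0 * u[i])   # neighbor coupling
        v[i] += DT * (snap + spring - C * v[i])
        u[i] += DT * v[i]
    if step % 1000 == 0:
        flipped = sum(1 for x in u if x > 0.5)
        print(f"step {step}: {flipped}/{N} elements flipped")
```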

“This design solves two fundamental problems in transmitting information through materials,” said Katia Bertoldi, the John L. Loeb Associate Professor of the Natural Sciences at SEAS and senior author of the paper. “It not only overcomes dissipation, but it also eliminates dispersive [spreading out] effects, so that the signal propagates without distortion. As such, we maintain signal strength and clarity from start to end.” The team used advanced 3D printing techniques to fabricate the system.

Soft diodes and logic gates

(A) A bifurcated (split into two) signal chain demonstrating tunable logic in a soft mechanical system. The distance d(out) determines the logical behavior, producing either an AND or an OR gate from the same system. (B) When d(out) is small (in this case, 16.7 mm), the energy barrier is higher, so both input signals must be present for the wave to propagate through the output — a logical AND gate. (C) Increasing d(out) (to 18.6 mm in this case) lowers the energy barrier, producing a logical OR gate, in which case either input signal (or both) has sufficient energy to trigger an output signal. (credit: Jordan R. Raney/PNAS)

The team also took the system a step further, designing and 3D-printing soft diodes and logic gates (the basic computational elements normally found in computer chips) using this same signal-transmission design. A gate can be controlled to act either as an AND gate (both inputs must be present for the gate to fire) or as an OR gate (either input alone, or both together, will trigger it).
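A toy model makes the tunable-gate idea concrete: treat the output as firing whenever the summed energy of the incoming waves clears a barrier set by the spacing d(out). The linear barrier-versus-spacing map below is an assumption chosen to reproduce the AND/OR switch; only the two d(out) values come from the paper.

```python
# Toy model of the tunable mechanical gate: wider spacing -> softer
# coupling -> lower energy barrier at the junction (assumed linear map).

def gate_output(in_a: int, in_b: int, d_out_mm: float) -> bool:
    """True if the combined input-wave energy clears the output barrier."""
    PULSE_ENERGY = 1.0                  # energy delivered per input wave
    barrier = 8.5 - 0.42 * d_out_mm     # assumed map; not from the paper
    return PULSE_ENERGY * (in_a + in_b) > barrier

for d_out in (16.7, 18.6):              # the two spacings quoted above
    truth = [gate_output(a, b, d_out) for a in (0, 1) for b in (0, 1)]
    kind = "AND" if truth == [False, False, False, True] else "OR"
    print(f"d_out = {d_out} mm  ->  {kind} gate, truth table {truth}")
```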

This research was supported by the National Science Foundation and the Harvard University Materials Research Science and Engineering Center (MRSEC).


Abstract of Stable propagation of mechanical signals in soft media using stored elastic energy

Soft structures with rationally designed architectures capable of large, nonlinear deformation present opportunities for unprecedented, highly-tunable devices and machines. However, the highly-dissipative nature of soft materials intrinsically limits or prevents certain functions, such as the propagation of mechanical signals. Here, we present an architected soft system comprised of elastomeric bistable beam elements connected by elastomeric linear springs. The dissipative nature of the polymer readily damps linear waves, preventing propagation of any mechanical signal beyond a short distance, as expected. However, the unique architecture of the system enables propagation of stable, nonlinear solitary transition waves with constant, controllable velocity and pulse geometry over arbitrary distances. Since the high damping of the material removes all other linear, small amplitude excitations, the desired pulse propagates with high fidelity and controllability. This phenomenon can be used to control signals, as demonstrated by the design of soft mechanical diodes and logic gates.

IBM scientists emulate neurons with phase-change technology

A prototype chip with large arrays of phase-change devices that store the state of artificial neuronal populations in their atomic configuration. The devices are accessed via an array of probes in this prototype to allow for characterization and testing. The tiny squares are contact pads used to access the nanometer-scale phase-change cells (inset). Each set of probes can access a population of 100 cells. There are thousands to millions of these cells on one chip, and IBM accesses them (in this particular photograph) by means of the sharp needles (probe card). (credit: IBM Research)

Scientists at IBM Research in Zurich have developed artificial neurons that emulate how neurons spike (fire). The goal is to create energy-efficient, high-speed, ultra-dense integrated neuromorphic (brain-like) technologies for applications in cognitive computing, such as unsupervised learning for detecting and analyzing patterns.

Applications could include internet of things sensors that collect and analyze volumes of weather data for faster forecasts and detecting patterns in financial transactions, for example.

The results of this research appeared today (Aug. 3) as a cover story in the journal Nature Nanotechnology.

Emulating neuron spiking

General pattern of a neural spike (action potential). When triggered by a sufficient stimulus, a neuron fires, generating a rapid action potential (a voltage spike) that sends a signal across a synapse. (credit: Chris 73/Diberri CC)

IBM’s new neuron-like spiking mechanism is based on a recent IBM breakthrough in phase-change materials. Phase-change materials are used for storing and processing digital data in re-writable Blu-ray discs, for example. IBM’s newly developed phase-change materials are instead used for storing and processing analog data — like the synapses and neurons in our biological brains.

The new phase-change materials also overcome a bottleneck in conventional computing, where separate memory and logic units slow down computation. In the new artificial neurons, these functions are combined — just as they are in a biological neuron.

In biological neurons, a thin lipid-bilayer membrane separates the electrical charges inside the cell from those outside it. The membrane potential is altered by the arrival of excitatory and inhibitory postsynaptic potentials through the dendrites of the neuron; upon sufficient excitation, an action potential, or spike, is generated. IBM’s new germanium-antimony-tellurium (GeSbTe, or GST) phase-change material emulates this process: it has two stable states, an amorphous one (without a clearly defined structure) and a crystalline one (with a structure), and the cell’s phase configuration plays the role of the membrane potential. (credit: Tomas Tuma et al./Nature Nanotechnology)
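The following sketch captures that integrate-and-fire behavior in a few lines: the “membrane potential” is the cell’s crystalline fraction, each input pulse crystallizes it a little further, and the melt-quench reset lands in a slightly different state each time — the source of the stochasticity. The constants are illustrative, not device parameters from the paper.

```python
# Minimal stochastic phase-change neuron: the state variable stands in for
# the crystalline fraction of a GST cell (0 = fully amorphous).

import random

random.seed(1)

class PhaseChangeNeuron:
    def __init__(self, threshold=1.0, drift=0.02):
        self.state = 0.0          # crystalline fraction (membrane potential)
        self.threshold = threshold
        self.drift = drift        # crystal growth per input pulse

    def step(self, pulses: int) -> bool:
        """Integrate `pulses` postsynaptic inputs; return True on a spike."""
        self.state += self.drift * pulses
        if self.state >= self.threshold:
            # Melt-quench reset: the re-amorphized structure varies randomly,
            # so the next integration starts from a slightly different point.
            self.state = random.gauss(0.0, 0.05)
            return True
        return False

neuron = PhaseChangeNeuron()
spikes = [t for t in range(500) if neuron.step(pulses=random.randint(0, 3))]
print(f"{len(spikes)} spikes; first few at t = {spikes[:5]}")
```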

Alternative to von-Neumann-based algorithms

In addition, previous artificial neurons have typically been built with CMOS-based circuits, the standard transistor technology in our computers. The new phase-change technology can reproduce similar functionality at reduced power consumption. The artificial neurons also function at nanometer-scale dimensions and feature native stochasticity (intrinsic randomness, like biological neurons).

“Populations of stochastic phase-change neurons, combined with other nanoscale computational elements such as artificial synapses, could be a key enabler for the creation of a new generation of extremely dense neuromorphic computing systems,” said Tomas Tuma, a co-author of the paper.

“The relatively complex computational tasks, such as Bayesian inference, that stochastic neuronal populations can perform with collocated processing and storage render them attractive as a possible alternative to von-Neumann-based algorithms in future cognitive computers,” the IBM scientists state in the paper.

IBM scientists have organized hundreds of these artificial neurons into populations and used them to represent fast and complex signals. These artificial neurons have been shown to sustain billions of switching cycles, which would correspond to multiple years of operation at an update frequency of 100 Hz. The energy required for each neuron update was less than five picojoules, and the average power was less than 120 microwatts — for comparison, a 60-watt lightbulb consumes 60 million microwatts.
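A quick back-of-the-envelope check of those figures (assuming “billions of cycles” means a round 10 billion):

```python
# Sanity-check the quoted endurance and energy numbers.
SECONDS_PER_YEAR = 3600 * 24 * 365
UPDATE_HZ = 100

cycles = 10e9                  # "billions of cycles" -- assume 10 billion
years = cycles / UPDATE_HZ / SECONDS_PER_YEAR
print(f"{cycles:.0e} updates at {UPDATE_HZ} Hz ~ {years:.1f} years")   # ~3.2

energy_per_update = 5e-12      # < 5 picojoules per update
watts = energy_per_update * UPDATE_HZ
print(f"per-neuron power at {UPDATE_HZ} Hz ~ {watts * 1e9:.1f} nW")    # ~0.5
```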


IBM Research | All-memristive neuromorphic computing with level-tuned neurons


Abstract of Stochastic phase-change neurons

Artificial neuromorphic systems based on populations of spiking neurons are an indispensable tool in understanding the human brain and in constructing neuromimetic computational systems. To reach areal and power efficiencies comparable to those seen in biological systems, electroionics-based and phase-change-based memristive devices have been explored as nanoscale counterparts of synapses. However, progress on scalable realizations of neurons has so far been limited. Here, we show that chalcogenide-based phase-change materials can be used to create an artificial neuron in which the membrane potential is represented by the phase configuration of the nanoscale phase-change device. By exploiting the physics of reversible amorphous-to-crystal phase transitions, we show that the temporal integration of postsynaptic potentials can be achieved on a nanosecond timescale. Moreover, we show that this is inherently stochastic because of the melt-quench-induced reconfiguration of the atomic structure occurring when the neuron is reset. We demonstrate the use of these phase-change neurons, and their populations, in the detection of temporal correlations in parallel data streams and in sub-Nyquist representation of high-bandwidth signals.

Flirtey drone delivers Reno 7-Eleven Slurpees in first commercial drone delivery to a residence

Flirtey 7-Eleven delivery (credit: Flirtey)

Drone delivery service Flirtey completed the first FAA-approved autonomous drone delivery to a customer’s residence on July 22, ferrying sandwiches and Slurpees from a 7-Eleven in Reno, Nevada.

The two companies plan to expand drone delivery tests in Reno and expect drone packages to include “everyday essentials” such as batteries and sunscreen in the future, according to 7‑Eleven EVP Jesus H. Delgado-Jenkins.

Flirtey previously conducted the first FAA-approved drone delivery last July, a series of urgent medical deliveries to a rural healthcare clinic. The company also completed the first fully autonomous, FAA-approved urban drone delivery in the U.S. on March 25 to an uninhabited residential setting in Hawthorne, Nevada. The package included bottled water, emergency food, and a first aid kit.

And in June, Flirtey performed the first drone delivery of stool, blood, and urine samples from land to a medical testing facility on a barge in New Jersey’s Delaware Bay. Johns Hopkins University researchers on the barge sent water-purification tablets, insulin, and a first-aid kit back to shore.

Amazon Prime Air testing moves to the UK

Amazon Prime Air’s new drone design, now being tested in the UK (credit: Amazon)

Meanwhile, hampered by the FAA’s strict new requirement* that commercially operated drones must fly within the operator’s line of sight at all times, Amazon.com Inc. announced on Monday, July 25, a partnership with the UK Government to make parcel delivery (packages up to 5 pounds, delivered to customers in 30 minutes or less) by its planned Prime Air service a reality.

Supported by the UK Civil Aviation Authority (CAA), Amazon now has UK permissions to “explore beyond line of sight operations in rural and suburban areas, test sensor performance to make sure the drones can identify and avoid obstacles, and [for] flights where one person operates multiple highly-automated drones.”

* According to the FAA’s June 21 announcement of the first operational rules for unmanned aircraft drones weighing less than 55 pounds that are conducting non-hobbyist operations, pilots must “keep an unmanned aircraft within visual line of sight. Operations are allowed during daylight and during twilight if the drone has anti-collision lights. The new regulations also address height and speed restrictions and other operational limits, such as prohibiting flights over unprotected people on the ground who aren’t directly participating in the UAS operation.”

The FAA notes that “according to industry estimates, the [new] rule could generate more than $82 billion for the U.S. economy and create more than 100,000 new jobs over the next 10 years.”


Flirtey | Flirtey making history with the first U.S. drone delivery (demo)

Musk’s new master plan for Tesla

Tesla Autopilot (credit: Tesla Motors)

Elon Musk revealed his new master plan for Tesla today (July 20) in a blog post published on Tesla’s website:

  • Create stunning solar roofs with seamlessly integrated battery storage.
  • Expand the electric vehicle product line to address all major segments.
  • Develop a self-driving capability that is 10X safer than manual via massive fleet learning.
  • Enable your car to make money for you when you aren’t using it.

Increasing safety: “morally reprehensible to delay”

In the context of the recent Autopilot problem, Musk clarified why Tesla is deploying partial autonomy now, rather than waiting until some point in the future: “When used correctly, it is already significantly safer than a person driving by themselves and it would therefore be morally reprehensible to delay release simply for fear of bad press or some mercantile calculation of legal liability.

“According to the recently released 2015 NHTSA report, automotive fatalities increased by 8% to one death every 89 million miles. Autopilot miles will soon exceed twice that number and the system gets better every day. It would no more make sense to disable Tesla’s Autopilot, as some have called for, than it would to disable autopilot in aircraft, after which our system is named.”

Another way to increase safety, he says, is new heavy-duty trucks and high passenger-density urban transport, both planned for unveiling next year. “With the advent of autonomy, it will probably make sense to shrink the size of buses and transition the role of bus driver to that of fleet manager. … Traffic congestion would improve due to increased passenger areal density by eliminating the center aisle and putting seats where there are currently entryways, and matching acceleration and braking to other vehicles, thus avoiding the inertial impedance to smooth traffic flow of traditional heavy buses. It would also take people all the way to their destination.”

Lowering the cost of an autonomous car

Musk said that when true self-driving is approved by regulators, “it will mean that you will be able to summon your Tesla from pretty much anywhere. Once it picks you up, you will be able to sleep, read, or do anything else en route to your destination.

“You will also be able to add your car to the Tesla shared fleet just by tapping a button on the Tesla phone app and have it generate income for you while you’re at work or on vacation, significantly offsetting and at times potentially exceeding the monthly loan or lease cost. This dramatically lowers the true cost of ownership to the point where almost anyone could own a Tesla. Since most cars are only in use by their owner for 5% to 10% of the day, the fundamental economic utility of a true self-driving car is likely to be several times that of a car which is not.”
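A toy calculation, using assumed round numbers rather than any Tesla figures, shows why that sharing arithmetic can work: a car that sits idle most of the day needs only a modest net hourly rate to cover a loan payment.

```python
# Toy illustration of the shared-fleet economics described above.
# Every number here is an assumption for illustration only.

HOURS_SHARED_PER_DAY = 6     # assumed: hours/day the car is lent out
NET_RATE_PER_HOUR = 10.0     # assumed: net $/h earned while shared
MONTHLY_PAYMENT = 900.0      # assumed: loan/lease payment ($/month)

income = HOURS_SHARED_PER_DAY * NET_RATE_PER_HOUR * 30
print(f"fleet income ${income:.0f}/mo covers "
      f"{100 * income / MONTHLY_PAYMENT:.0f}% of a ${MONTHLY_PAYMENT:.0f} payment")
```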

Musk said that in cities where demand exceeds the supply of customer-owned cars, “Tesla will operate its own fleet, ensuring you can always hail a ride from us no matter where you are.”

Why partially automated crash-avoidance technologies should be deployed in ‘light-duty vehicles’

Crash avoidance technologies now available in non-luxury vehicles include Lane Departure Warning (LDW), Forward Collision Warning (FCW), and Blind Spot Monitoring (BSM). (credit: Corey D. Harper et al./Accident Analysis and Prevention)

U.S. National Highway Traffic Safety Administration chief Mark Rosekind said at a conference today (July 20) that the government “will not abandon efforts to speed the development of self-driving cars … to reduce the 94 percent of car crashes attributed to human error, despite a fatal accident involving a Tesla Model S operating on an autopilot system,” Reuters reports. But autonomous vehicles must be “much safer” than human drivers before they are deployed on U.S. roads, he added.

However, in a study published in the journal Accident Analysis and Prevention, Carnegie Mellon College of Engineering researchers conclude that already-available partially automated vehicle crash-avoidance technologies are a practical interim solution.

These technologies — which include forward collision warning and avoidance, lane departure warning, blind spot monitoring, and partially autonomous braking or controls — are already available in non-luxury vehicles such as the Honda Accord and Mazda CX-9. If these technologies were deployed in all “light-duty vehicles,” they could prevent or reduce the severity of up to 1.3 million crashes a year, including 10,100 fatal wrecks, according to the study.

“While there is much discussion about driverless vehicles, we have demonstrated that even with partial automation, there are financial and safety benefits,” says Chris T. Hendrickson, director of the Carnegie Mellon Traffic21 Institute.

When the team compared the price of equipping cars with safety technology to the expected annual reduction in the costs of crashes (based on government and insurance industry data), they discovered a net cost benefit (in addition to life-saving benefits) in two scenarios:

  • In the perfect-world scenario in which all relevant crashes are avoided with these technologies, there is an annual benefit of $202 billion or $861 per car.
  • On the more conservative side, when only observed crash reductions in vehicles equipped with blind spot monitoring, lane departure warning, and forward collision crash-avoidance systems are considered, there is still an annual positive net benefit of $4 billion, or $20 a vehicle (and lower prices could lead to larger net benefits over time) — see the arithmetic sketch below.
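As promised above, a quick consistency check on the quoted figures: dividing each scenario’s total annual benefit by its per-vehicle benefit recovers the U.S. light-duty fleet size the study implicitly assumes (roughly 200–235 million vehicles).

```python
# Check the per-vehicle figures against the scenario totals quoted above.
scenarios = {
    "all relevant crashes avoided":   (202e9, 861.0),
    "observed crash reductions only": (4e9, 20.0),
}
for name, (total, per_car) in scenarios.items():
    fleet = total / per_car / 1e6      # implied fleet size, millions
    print(f"{name}: ${total / 1e9:.0f}B total / ${per_car:.0f} per car "
          f"-> ~{fleet:.0f}M light-duty vehicles")
```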

Carnegie Mellon’s Technologies for Safe and Efficient Transportation (T-SET) University Transportation Center, the National Science Foundation, and the Hillman Foundation funded the project.


Abstract of Cost and benefit estimates of partially-automated vehicle collision avoidance technologies

Many light-duty vehicle crashes occur due to human error and distracted driving. Partially-automated crash avoidance features offer the potential to reduce the frequency and severity of vehicle crashes that occur due to distracted driving and/or human error by assisting in maintaining control of the vehicle or issuing alerts if a potentially dangerous situation is detected. This paper evaluates the benefits and costs of fleet-wide deployment of blind spot monitoring, lane departure warning, and forward collision warning crash avoidance systems within the US light-duty vehicle fleet. The three crash avoidance technologies could collectively prevent or reduce the severity of as many as 1.3 million U.S. crashes a year including 133,000 injury crashes and 10,100 fatal crashes. For this paper we made two estimates of potential benefits in the United States: (1) the upper bound fleet-wide technology diffusion benefits by assuming all relevant crashes are avoided and (2) the lower bound fleet-wide benefits of the three technologies based on observed insurance data. The latter represents a lower bound as technology is improved over time and cost reduced with scale economies and technology improvement. All three technologies could collectively provide a lower bound annual benefit of about $18 billion if equipped on all light-duty vehicles. With 2015 pricing of safety options, the total annual costs to equip all light-duty vehicles with the three technologies would be about $13 billion, resulting in an annual net benefit of about $4 billion or a $20 per vehicle net benefit. By assuming all relevant crashes are avoided, the total upper bound annual net benefit from all three technologies combined is about $202 billion or an $861 per vehicle net benefit, at current technology costs. The technologies we are exploring in this paper represent an early form of vehicle automation and a positive net benefit suggests the fleet-wide adoption of these technologies would be beneficial from an economic and social perspective.

Robot mimics vertebrate motion

Pleurobot (credit: EPFL)

École polytechnique fédérale de Lausanne (EPFL) scientists have invented a new robot called “Pleurobot” that mimics the way salamanders walk and swim with unprecedented detail.

Aside from being cool (and a likely future Disney attraction), the researchers believe designing the robot will provide a new tool for understanding the evolution of vertebrate locomotion. That could lead to better understanding of how the spinal cord controls the body’s locomotion, which may help develop future therapies and neuroprosthetic devices for paraplegic patients and amputees.

Pleurobot mimics a salamander. Neurobiologists say electrical stimulation of the spinal cord is what determines whether the salamander walks, crawls, or swims: at the lowest level of stimulation, the salamander walks; with higher stimulation, its pace increases; and beyond some threshold, the salamander begins to swim. (credit: EPFL)

Mimicking the salamander’s 3D locomotion requires exceptional precision. The Biorobotics Laboratory scientists started by shooting detailed x-ray videos of the salamander species Pleurodeles waltl from the top and the side, tracking up to 64 points along its skeleton while it performed different types of motion in water and on the ground.

Auke Ijspeert and his team at EPFL then 3D-printed bones and motorized joints, and even created a “nervous system” using electronic circuitry, allowing the Pleurobot to walk, crawl, and even swim underwater.*
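The team’s paper describes “imposing raw kinematic data, extracted from X-ray videos, to the robot’s joints.” Here is an illustrative sketch of that conversion step — turning tracked skeletal points into relative bend angles for the spine motors. The tracking frame below is synthetic and there is no real motor API here; only the 11-segment spine count comes from the article.

```python
# Illustrative replay step: convert an ordered list of 2-D tracked points
# along the spine into relative bend angles between consecutive segments.
# The data frame is hypothetical; angles would be streamed to the servos.

import math

NUM_SPINE_SEGMENTS = 11   # Pleurobot's actuated spine segments (article text)

def segment_angles(points):
    """Relative bend angles between consecutive spine segments."""
    headings = [math.atan2(y2 - y1, x2 - x1)
                for (x1, y1), (x2, y2) in zip(points, points[1:])]
    return [b - a for a, b in zip(headings, headings[1:])]

# One hypothetical top-view tracking frame (a gentle S-curve):
frame = [(i, 0.3 * math.sin(i / 2.0)) for i in range(NUM_SPINE_SEGMENTS + 1)]
angles = segment_angles(frame)
print(["%.3f" % a for a in angles])   # joint commands for this frame
```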

Ijspeert thinks that the design methodology used for the Pleurobot can help develop other types of “biorobots,” which could become important tools in neuroscience and biomechanics.

The research, described in the Royal Society journal Interface, received funding from the Swiss National Center of Competence in Research (NCCR) in Robotics and from the Swiss National Science Foundation.


École polytechnique fédérale de Lausanne | A new robot mimics vertebrate motion

* In the design process, the researchers identified the minimum number of motorized segments required, as well as the optimal placement along the robot’s body, to replicate many of the salamander’s types of movement. That made it possible to construct Pleurobot with fewer bones and joints than the real-life creature — only 27 motors and 11 segments along its spine (the real animal has 40 vertebrae and multiple joints, some of which can even rotate freely and move side-to-side or up and down). 


Abstract of From cineradiography to biorobots: an approach for designing robots to emulate and study animal locomotion

Robots are increasingly used as scientific tools to investigate animal locomotion. However, designing a robot that properly emulates the kinematic and dynamic properties of an animal is difficult because of the complexity of musculoskeletal systems and the limitations of current robotics technology. Here, we propose a design process that combines high-speed cineradiography, optimization, dynamic scaling, three-dimensional printing, high-end servomotors and a tailored dry-suit to construct Pleurobot: a salamander-like robot that closely mimics its biological counterpart, Pleurodeles waltl. Our previous robots helped us test and confirm hypotheses on the interaction between the locomotor neuronal networks of the limbs and the spine to generate basic swimming and walking gaits. With Pleurobot, we demonstrate a design process that will enable studies of richer motor skills in salamanders. In particular, we are interested in how these richer motor skills can be obtained by extending our spinal cord models with the addition of more descending pathways and more detailed limb central pattern generator networks. Pleurobot is a dynamically scaled amphibious salamander robot with a large number of actuated degrees of freedom (DOFs: 27 in total). Because of our design process, the robot can capture most of the animal’s DOFs and range of motion, especially at the limbs. We demonstrate the robot’s abilities by imposing raw kinematic data, extracted from X-ray videos, to the robot’s joints for basic locomotor behaviours in water and on land. The robot closely matches the behaviour of the animal in terms of relative forward speeds and lateral displacements. Ground reaction forces during walking also resemble those of the animal. Based on our results, we anticipate that future studies on richer motor skills in salamanders will highly benefit from Pleurobot’s design.

First self-driving vehicle death

Tesla Model S (credit: Tesla)

A Tesla Model S car was involved in a fatal crash yesterday, Tesla Motors announced today, June 30, on its blog.

“Joshua Brown, a 40-year-old Ohio owner of a Tesla Model S, died when his electric car drove under the trailer of an 18-wheel truck on a highway in Williston, Fla.,” The Wall Street Journal reports, based on local news reports.

“The vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S,” according to the blog. “Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S.”

The blog also noted that whenever autopilot is engaged, the car reminds the driver to “Always keep your hands on the wheel. Be prepared to take over at any time” and “makes frequent checks to ensure that the driver’s hands remain on the wheel and provides visual and audible alerts if hands-on is not detected. It then gradually slows down the car until hands-on is detected again.”

Tesla said it reported the accident to the National Highway Traffic Safety Administration immediately after learning about it yesterday.

Brown maintained a YouTube channel focused on the Tesla 7.0 Autopilot.

AI beats top U.S. Air Force tactical air combat experts in combat simulation

Retired U.S. Air Force Colonel Gene Lee, in a flight simulator, takes part in simulated air combat versus artificial intelligence technology developed by a team from industry, the U.S. Air Force, and the University of Cincinnati. (credit: Lisa Ventre, University of Cincinnati. Distribution A: approved for public release; distribution unlimited. 88ABW cleared 05/02/2016; 88ABW-2016-2270)

The U.S. Air Force got a wakeup call recently when AI software called ALPHA — running on a tiny $35 Raspberry Pi computer — repeatedly defeated retired U.S. Air Force Colonel Gene Lee, a top aerial combat instructor and Air Battle Manager, and other expert air-combat tacticians at the U.S. Air Force Research Lab (AFRL) in Dayton, Ohio. The contest was conducted in a high-fidelity air combat simulator.

According to Lee, who has considerable fighter-aircraft expertise (and has been flying in simulators against AI opponents since the early 1980s), ALPHA is “the most aggressive, responsive, dynamic and credible AI I’ve seen to date.” In fact, he was shot out of the air every time during protracted engagements in the simulator, he said.

ALPHA’s secret? Custom “genetic fuzzy” algorithms designed for simulated air-combat missions, according to an open-access, unclassified paper published in the Journal of Defense Management. The paper was authored by a team of industry, Air Force, and University of Cincinnati researchers, including the AFRL Branch Chief.

ALPHA, which now runs on a standard consumer-grade PC, was developed by Psibernetix, Inc., an AFRL contractor founded by University of Cincinnati College of Engineering and Applied Science 2015 doctoral graduate Nick Ernest*, president and CEO of the firm, and a team of former Air Force aerial combat experts, including Lee.

“AI wingmen”

Today’s fighters close in on each other at speeds in excess of 1,500 miles per hour while flying at altitudes above 40,000 feet. The cost for a mistake is very high. Microseconds matter, but an average human visual reaction time is 0.15 to 0.30 seconds, and “an even longer time to think of optimal plans and coordinate them with friendly forces,” the researchers note in the paper.

Side view during active combat in simulator between two Blue (human-controlled) fighters vs. four Red (AI) fighters with >150 inputs but handicapped data sources. All Reds have successfully evaded missiles; one Blue has been destroyed. Blue AWACS [Airborne early warning and control system aircraft] shown in distance. (credit: Nicholas Ernest et al./Journal of Defense Management)

In fact, ALPHA works 250 times faster than humans, the researchers say. Nonetheless, ALPHA’s future role will stop short of fully autonomous combat.

According to the AFRL team, ALPHA will first be tested on “Unmanned Combat Aerial Vehicles (UCAV),” where ALPHA will be organizing data and creating a complete mapping of a combat scenario, such as a flight of four fighter aircraft — which it can do in less than a millisecond.

The AFRL team sees UCAVs as “AI wingmen” capable of engaging in air combat when teamed with manned aircraft.

The UCAVs will include an onboard battle management system able to process situational awareness, determine reactions, select tactics, and manage weapons — simultaneously evading dozens of hostile missiles, taking accurate shots at multiple targets, coordinating actions of squad mates, and recording and learning from observations of enemy tactics and capabilities.

Genetic fuzzy systems

The researchers based the design of ALPHA on a “genetic fuzzy tree” (GFT) — a subtype of “fuzzy logic” algorithms. The GFT is described in another open-access paper in Journal of Defense Management by Ernest and University of Cincinnati aerospace professor Kelly Cohen.

“Genetic fuzzy systems have been shown to have high performance, and a problem with four or five inputs can be solved handily,” said Cohen. “However, boost that to a hundred inputs, and no computing system on planet Earth could currently solve the processing challenge involved — unless that challenge and all those inputs are broken down into a cascade of sub-decisions.

“Most AI programming uses numeric-based control and provides very precise parameters for operations,” he said. In contrast, the AI algorithms that Ernest and his team developed are language-based, with if/then scenarios and rules able to encompass hundreds to thousands of variables. This language-based control, or fuzzy logic, can be verified and validated, Cohen says.

The “genetic” part of the “genetic fuzzy tree” system started with numerous automatically generated versions of ALPHA that proved themselves against a manually tuned version of ALPHA. The successful strings of code were then “bred” with each other, favoring the strongest (highest-performing) versions.

In other words, only the best-performing code was used in subsequent generations. Eventually, one version of ALPHA rises to the top in terms of performance, and that’s the one that is utilized.
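In sketch form, that breeding loop looks like a standard genetic algorithm over rule parameters. The genome layout and fitness function below are stand-ins (ALPHA’s real fitness comes from flying simulated engagements), but the select–breed–mutate structure is the part the passage describes.

```python
# Minimal genetic-algorithm sketch of the "breeding" described above:
# score candidate controllers, keep the strongest, cross and mutate them.

import random

random.seed(0)
GENOME_LEN = 8                      # e.g. thresholds in if/then fuzzy rules

def fitness(genome):
    # Stand-in for "fly simulated engagements and score the outcome".
    return -sum((g - 0.5) ** 2 for g in genome)

def breed(a, b):
    cut = random.randrange(1, GENOME_LEN)          # single-point crossover
    child = a[:cut] + b[cut:]
    i = random.randrange(GENOME_LEN)               # point mutation
    child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
    return child

pop = [[random.random() for _ in range(GENOME_LEN)] for _ in range(40)]
for gen in range(50):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                               # favor top performers
    pop = elite + [breed(random.choice(elite), random.choice(elite))
                   for _ in range(30)]
best = max(pop, key=fitness)
print(f"best fitness after 50 generations: {fitness(best):.4f}")
```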

“In terms of emulating human reasoning, I feel this is to unmanned aerial vehicles as the IBM Deep Blue vs. Kasparov was to chess,” said Cohen. Or as AlphaGo was to Go.

* Support for Ernest’s doctoral research was provided by the Dayton Area Graduate Studies Institute and the U.S. Air Force Research Laboratory.


UC | Flying in Simulator


Abstract of Genetic Fuzzy based Artificial Intelligence for Unmanned Combat Aerial Vehicle Control in Simulated Air Combat Missions

Breakthroughs in genetic fuzzy systems, most notably the development of the Genetic Fuzzy Tree methodology, have allowed fuzzy logic based Artificial Intelligences to be developed that can be applied to incredibly complex problems. The ability to have extreme performance and computational efficiency as well as to be robust to uncertainties and randomness, adaptable to changing scenarios, verified and validated to follow safety specifications and operating doctrines via formal methods, and easily designed and implemented are just some of the strengths that this type of control brings. Within this white paper, the authors introduce ALPHA, an Artificial Intelligence that controls flights of Unmanned Combat Aerial Vehicles in aerial combat missions within an extreme-fidelity simulation environment. To this day, this represents the most complex application of a fuzzy-logic based Artificial Intelligence to an Unmanned Combat Aerial Vehicle control problem. While development is on-going, the version of ALPHA presented within was assessed by Colonel (retired) Gene Lee who described ALPHA as “the most aggressive, responsive, dynamic and credible AI (he’s) seen-to-date.” The quality of these preliminary results in a problem that is not only complex and rife with uncertainties but also contains an intelligent and unrestricted hostile force has significant implications for this type of Artificial Intelligence. This work adds immensely to the body of evidence that this methodology is an ideal solution to a very wide array of problems.

Boston Dynamics | Introducing SpotMini

SpotMini is a new smaller version of the Spot robot, weighing 55 lbs dripping wet (65 lbs if you include its arm). SpotMini is all-electric (no hydraulics) and runs for about 90 minutes on a charge, depending on what it is doing. SpotMini is one of the quietest robots we have ever built. It has a variety of sensors, including depth cameras, a solid state gyro (IMU) and proprioception sensors in the limbs. These sensors help with navigation and mobile manipulation. SpotMini performs some tasks autonomously, but often uses a human for high-level guidance. For more information about SpotMini visit our website at www.BostonDynamics.com

— Boston Dynamics