This microrobot could be a model for a future dual aerial-aquatic vehicle

The Harvard RoboBee concept (credit: Harvard Microrobotics Lab)

In 1939, Russian engineer Boris Ushakov proposed a “flying submarine” — a cool James Bond-style vehicle that could seamlessly transition from air to water and back again. Ever since, engineers have been trying to design one, with little success. The biggest challenge: aerial vehicles require large airfoils like wings or sails to generate lift, while underwater vehicles need to minimize surface area to reduce drag.

Engineers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) decided to take on that challenge with a new version of their RoboBee microrobot (see “A robotic insect makes first controlled test flight”), taking a cue from puffins. These birds with flamboyant beaks use flapping motions that are similar in air and water.


Harvard University | RoboBee: From Aerial to Aquatic

But to make that actually work, the team had to first solve four thorny problems:

Surface tension. The RoboBee is so small and lightweight that it cannot break the surface tension of the water. To overcome this hurdle, the RoboBee hovers over the water at an angle, momentarily switches off its wings, and then crashes unceremoniously into the water to make itself sink.

Water’s increased density. Water is about 1,000 times denser than air, enough to snap the wings off the RoboBee. Solution: the team lowered the wing speed from 120 flaps per second to nine but kept the flapping mechanisms and hinge design the same. A swimming RoboBee changes direction by adjusting the stroke angle of its wings, the same way it does in air. (A rough scaling sketch follows this list.)

Shorting out. Like the flying version, it’s tethered to a power source. Solution: use deionized water and coat the electrical connections with glue.

Moving from water to air. Problem: it can’t generate enough lift without snapping one of its wings. The researchers say they’re working on that next.
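A rough back-of-the-envelope check (not from the paper): the force a flapping wing generates scales roughly with fluid density times the square of wing-tip speed, so keeping the force comparable in a fluid about 1,000 times denser means cutting the flap rate by roughly the square root of that ratio. A minimal Python sketch, with that scaling law as its only assumption:

```python
import math

# Rough scaling sketch (not from the paper). Fluid force on a flapping wing
# scales roughly as density * (flap_rate * wing_length)^2, so holding the
# force constant across fluids suggests:
#   f_water ~ f_air * sqrt(rho_air / rho_water)

f_air = 120.0           # flaps per second in air (reported above)
density_ratio = 1000.0  # water is roughly 1,000 times denser than air

f_water_estimate = f_air * math.sqrt(1.0 / density_ratio)
print(f"estimated flap rate in water: {f_water_estimate:.1f} flaps/s")  # ~3.8
# The team settled on 9 flaps per second: the same order of magnitude, with
# the gap absorbed by hinge, drag, and lift-coefficient details that this
# crude scaling ignores.
```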

“We believe the RoboBee has the potential to become the world’s first successful dual aerial, aquatic insect-scale vehicle,” the researchers claim in a paper presented at the International Conference on Intelligent Robots and Systems in Germany. The research was funded by the National Science Foundation and the Wyss Institute for Biologically Inspired Engineering.

Hmm, maybe we’ll see a vehicle based on the RoboBee in a future Bond film?

Artificial ‘skin’ system transmits the pressure of touch

“Gimmie five”: Model robotic hand with artificial mechanoreceptors (credit: Bao Research Group, Stanford University)

Researchers have created a sensory system that mimics the ability of human skin to feel pressure and have transmitted the digital signals from the system’s sensors to the brain cells of mice. These new developments, reported in the October 16 issue of Science, could one day allow people living with prosthetics to feel sensation in their artificial limbs.

Artificial mechanoreceptors mounted on the fingers of a model robotic hand (credit: Bao Research Group, Stanford University)

The system consists of printed plastic circuits, designed to be placed on robotic fingertips. Digital signals transmitted by the system would increase as the fingertips came closer to an object, with the signal strength growing as the fingertips gripped the object tighter.

How to simulate human fingertip sensations

To simulate this human sensation of pressure, Zhenan Bao of Stanford University and her colleagues developed a number of key components that collectively allow the system to function.

As our fingers first touch an object, how we physically “feel” it depends partially on the mechanical strain that the object exerts on our skin. So the research team used a sensor with a specialized circuit that translates pressure into digital signals.

To allow the sensory system to feel the same range of pressure that human fingertips can, the team needed a highly sensitive sensor. They used carbon nanotubes in formations that are highly effective at detecting the electrical fields of inanimate objects.
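The paper’s abstract (reproduced below) specifies the behavior the circuit has to reproduce: an output pulse frequency between 0 and 200 Hz with a sublinear, saturating response to increasing force. Here is a minimal sketch of such a mapping; the 200 Hz ceiling comes from the abstract, while the half-saturation pressure P_HALF is purely an illustrative assumption:

```python
# Minimal sketch of a sublinear pressure-to-frequency mapping of the kind the
# paper's abstract describes (0-200 Hz output, saturating response). The
# 200 Hz ceiling is from the abstract; the half-saturation pressure P_HALF is
# an illustrative assumption, not a value from the paper.

F_MAX = 200.0   # maximum output pulse frequency, Hz
P_HALF = 15.0   # pressure (kPa) at which output reaches F_MAX / 2 (assumed)

def spike_frequency(pressure_kpa: float) -> float:
    """Map applied pressure to an output pulse frequency, saturating at F_MAX."""
    p = max(pressure_kpa, 0.0)
    return F_MAX * p / (p + P_HALF)

for p in (0, 5, 15, 50, 100):
    print(f"{p:>3} kPa -> {spike_frequency(p):6.1f} Hz")
```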

Stretchable skin with flexible artificial mechanoreceptors (credit: Bao Research Group, Stanford University)

Bao noted that the printed circuits of the new sensory system would make it easy to produce in large quantities. “We would like to make the circuits with stretchable materials in the future, to truly mimic skin,” Bao said. “Other sensations, like temperature sensing, would be very interesting to combine with touch sensing.”


Abstract of A skin-inspired organic digital mechanoreceptor

Human skin relies on cutaneous receptors that output digital signals for tactile sensing in which the intensity of stimulation is converted to a series of voltage pulses. We present a power-efficient skin-inspired mechanoreceptor with a flexible organic transistor circuit that transduces pressure into digital frequency signals directly. The output frequency ranges between 0 and 200 hertz, with a sublinear response to increasing force stimuli that mimics slow-adapting skin mechanoreceptors. The output of the sensors was further used to stimulate optogenetically engineered mouse somatosensory neurons of mouse cortex in vitro, achieving stimulated pulses in accordance with pressure levels. This work represents a step toward the design and use of large-area organic electronic skins with neural-integrated touch feedback for replacement limbs.

Tesla Motors to introduce new self-driving features Thursday

Tesla Model S (credit: Tesla)

Tesla Motors will introduce on Thursday (October 15, 2015) an advanced “beta test” set of autonomous driving features, The Wall Street Journal reports.

The software allows hands- and feet-free driving in everything from stop-and-go traffic to highway speeds, and enables the car to park itself, the Journal says. It will be available for 50,000 newer Model S cars worldwide via software download.

However, to stay within licensing regulations, the software (at least in its current version) requires the driver to grab the steering wheel every 10 seconds or so to keep the vehicle from slowing down.

“Over time, long term, you won’t have to keep your hands on the wheel — we explicitly describe this as beta,” said Tesla Motors CEO Elon Musk at a press event. Notably, unlike other car makers, Tesla Motors is pushing the new features via an over-the-air software update.

This deep-learning method could predict your daily activities from your lifelogging camera images

Example images from dataset of 40,000 egocentric images with their respective labels. The classes are representative of the number of images per class for the dataset. (credit: Daniel Castro et al./Georgia Institute of Technology)

Georgia Institute of Technology researchers have developed a deep-learning method that uses a wearable smartphone camera to track a person’s activities during a day. It could lead to more powerful Siri-like personal assistant apps and tools for improving health.

In the research, the camera took more than 40,000 pictures (one every 30 to 60 seconds) over a six-month period. The researchers taught a computer to categorize these pictures across 19 activity classes, including cooking, eating, watching TV, working, spending time with family, and driving. The test subject wearing the camera could review and annotate the photos at the end of each day (deleting any as needed for privacy) to ensure that they were correctly categorized.

Wearable smartphone camera used in the research (credit: Daniel Castro et al.)

It knows what you are going to do next

The method was then able to determine with 83 percent accuracy what activity a person was doing at a given time, based only on the images.

The researchers believe they have gathered the largest annotated dataset of first-person images used to demonstrate that deep-learning can “understand” human behavior and the habits of a specific person.
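The paper’s abstract (below) describes the classifier as a convolutional neural network combined with a “late fusion ensemble” that folds in contextual information such as time and day of week. The following PyTorch sketch illustrates that general idea only; it is not the authors’ architecture, and the layer sizes and context features are assumptions chosen for illustration:

```python
# Minimal sketch (not the authors' code) of a late-fusion classifier:
# a small CNN encodes the egocentric image, and its logits are fused with
# contextual features (hour of day, day of week) before the final prediction.
import torch
import torch.nn as nn

NUM_CLASSES = 19  # activity classes in the Georgia Tech dataset

class LateFusionActivityClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(            # toy image encoder (illustrative sizes)
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, NUM_CLASSES),
        )
        # context: hour of day (one-hot, 24) + day of week (one-hot, 7)
        self.context = nn.Linear(24 + 7, NUM_CLASSES)
        self.fusion = nn.Linear(2 * NUM_CLASSES, NUM_CLASSES)

    def forward(self, image, context_features):
        image_logits = self.cnn(image)
        context_logits = self.context(context_features)
        return self.fusion(torch.cat([image_logits, context_logits], dim=1))

model = LateFusionActivityClassifier()
logits = model(torch.randn(1, 3, 128, 128), torch.randn(1, 31))
print(logits.shape)  # torch.Size([1, 19])
```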

The researchers believe that within the next decade we will have ubiquitous devices that can improve our personal choices throughout the day.*

“Imagine if a device could learn what I would be doing next — ideally predict it — and recommend an alternative?” says Daniel Castro, a Ph.D. candidate in Computer Science and a lead researcher on the project, who helped present the method earlier this month at UbiComp 2015 in Osaka, Japan. “Once it builds your own schedule by knowing what you are doing, it might tell you there is a traffic delay and you should leave sooner or take a different route.”

That could be based on a future version of a smartphone app like Waze, which allows drivers to share real-time traffic and road info. In possibly related news, Apple Inc. recently acquired Perceptio, a startup developing image-recognition systems for smartphones, using deep learning.

The open-access research, which was conducted in the School of Interactive Computing and the Institute for Robotics and Intelligent Machines at Georgia Tech, can be found here.

* Or not. “As more consumers purchase wearable tech, they unknowingly expose themselves to both potential security breaches and ways that their data may be legally used by companies without the consumer ever knowing,” TechRepublic notes.


Abstract of Predicting Daily Activities From Egocentric Images Using Deep Learning

We present a method to analyze images taken from a passive egocentric wearable camera along with the contextual information, such as time and day of week, to learn and predict everyday activities of an individual. We collected a dataset of 40,103 egocentric images over a 6 month period with 19 activity classes and demonstrate the benefit of state-of-the-art deep learning techniques for learning and predicting daily activities. Classification is conducted using a Convolutional Neural Network (CNN) with a classification method we introduce called a late fusion ensemble. This late fusion ensemble incorporates relevant contextual information and increases our classification accuracy. Our technique achieves an overall accuracy of 83.07% in predicting a person’s activity across the 19 activity classes. We also demonstrate some promising results from two additional users by fine-tuning the classifier with one day of training data.

Stephen Hawking on AI

Stephen Hawking on Last Week Tonight with John Oliver (credit: HBO)

Reddit published Stephen Hawking’s answers to questions in an “Ask me anything” (AMA) event on Thursday (Oct. 8).

Most of the answers focused on his concerns about the future of AI and its role in our future. Here are some of the most interesting ones. The full list is in this Wired article. (His answers to John Oliver below are funnier.)

The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.

There’s no consensus among AI researchers about how long it will take to build human-level AI and beyond, so please don’t trust anyone who claims to know for sure that it will happen in your lifetime or that it won’t happen in your lifetime. When it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right. We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence. It might take decades to figure out how to do this, so let’s start researching this today rather than the night before the first strong AI is switched on.

An AI that has been designed rather than evolved can in principle have any drives or goals. However, as emphasized by Steve Omohundro, an extremely intelligent future AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal. This can cause problems for humans whose resources get taken away.

If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.

Forbes offers a different opinion on the last answer.


HBO | Last Week Tonight with John Oliver: Stephen Hawking Interview

A realistic bio-inspired robotic finger

Heating and cooling a 3D-printed shape memory alloy to operate a robotic finger (credit: Florida Atlantic University/Bioinspiration & Biomimetics)

A realistic 3D-printed robotic finger using a shape memory alloy (SMA) and a unique thermal training technique has been developed by Florida Atlantic University assistant professor Erik Engeberg, Ph.D.

“We have been able to thermomechanically train our robotic finger to mimic the motions of a human finger, like flexion and extension,” said Engeberg. “Because of its light weight, dexterity and strength, our robotic design offers tremendous advantages over traditional mechanisms, and could ultimately be adapted for use as a prosthetic device, such as on a prosthetic hand.”

Most robotic parts used today are rigid, have a limited range of motion and don’t look lifelike.

In the study, described in an open-access article in the journal Bioinspiration & Biomimetics, Engeberg and his team used a resistive heating process known as Joule heating, in which an electric current passing through a conductor generates heat.

How to create a robotic finger

  • The researchers first downloaded a 3-D computer-aided design (CAD) model of a human finger from the Autodesk 123D website (under a Creative Commons license).
  • With a 3-D printer, they created the inner and outer molds that housed a flexor and extensor actuator and a position sensor. The extensor actuator takes a straight shape when it’s heated and the flexor actuator takes a curved shape when heated.
  • They used SMA plates and a multi-stage casting process to assemble the finger.
  • Electric current from a power source at the base of the finger flows through each SMA actuator; this cycle of heating and cooling is what operates the robotic finger.

Results from the study showed rapid flexing and extending motions of the finger and its ability to recover its trained shape accurately and completely, confirming the biomechanical basis of its trained shape.
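The paper’s abstract (below) describes a nonlinear antagonistic controller with an outer position loop driving inner current loops. The following is a heavily simplified sketch of the antagonistic idea rather than the authors’ controller: the sign of the position error decides which actuator to heat, and the gain and current limit are assumed values:

```python
# Heavily simplified sketch (not the authors' controller) of antagonistic SMA
# actuation: the sign of the position error decides which actuator, flexor or
# extensor, receives heating current while the other one cools. The gain and
# current limit are illustrative assumptions.

K_P = 2.0     # proportional gain (assumed)
I_MAX = 1.5   # per-actuator current limit in amps (assumed)

def antagonistic_currents(target_angle: float, measured_angle: float):
    """Return (flexor_current, extensor_current) for one control step."""
    error = target_angle - measured_angle            # positive means "flex more"
    command = max(-I_MAX, min(I_MAX, K_P * error))
    if command >= 0.0:
        return command, 0.0   # heat the flexor, let the extensor cool
    return 0.0, -command      # heat the extensor, let the flexor cool

print(antagonistic_currents(target_angle=1.0, measured_angle=0.2))  # (1.5, 0.0)
print(antagonistic_currents(target_angle=0.0, measured_angle=0.8))  # (0.0, 1.5)
```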

Initial use in underwater robotics

“Because SMAs require a heating process and cooling process, there are challenges with this technology, such as the lengthy amount of time it takes for them to cool and return to their natural shape, even with forced air convection,” said Engeberg. So the team applied the technology to underwater robotics, where the surrounding water provides a rapid-cooling environment.

Engeberg used thermal insulators at the fingertip, which were kept open to facilitate water flow inside the finger. As the finger flexed and extended, water flowed through the inner cavity within each insulator to cool the actuators.

“Because our robotic finger consistently recovered its thermomechanically trained shape better than other similar technologies, our underwater experiments clearly demonstrated that the water cooling component greatly increased the operational speed of the finger,” said Engeberg.

Undersea applications using Engeberg’s new technology could help to address some of the difficulties and challenges humans encounter while working in ocean depths.


FAU – BioRobotics Lab | Bottle Pick and Drop Demo UR10 and Shadow Hand


FAU – BioRobotics Lab | Simultaneous Grasp Synergies Controlled by EMG


FAU – BioRobotics Lab | Shadow Hand and UR10 – Grab Bottle, Pour Liquid


Abstract of Anthropomorphic finger antagonistically actuated by SMA plates

Most robotic applications that contain shape memory alloy (SMA) actuators use the SMA in a linear or spring shape. In contrast, a novel robotic finger was designed in this paper using SMA plates that were thermomechanically trained to take the shape of a flexed human finger when Joule heated. This flexor actuator was placed in parallel with an extensor actuator that was designed to straighten when Joule heated. Thus, alternately heating and cooling the flexor and extensor actuators caused the finger to flex and extend. Three different NiTi based SMA plates were evaluated for their ability to apply forces to a rigid and compliant object. The best of these three SMAs was able to apply a maximum fingertip force of 9.01 N on average. A 3D CAD model of a human finger was used to create a solid model for the mold of the finger covering skin. Using a 3D printer, inner and outer molds were fabricated to house the actuators and a position sensor, which were assembled using a multi-stage casting process. Next, a nonlinear antagonistic controller was developed using an outer position control loop with two inner MOSFET current control loops. Sine and square wave tracking experiments demonstrated minimal errors within the operational bounds of the finger. The ability of the finger to recover from unexpected disturbances was also shown along with the frequency response up to 7 rad s⁻¹. The closed loop bandwidth of the system was 6.4 rad s⁻¹ when operated intermittently and 1.8 rad s⁻¹ when operated continuously.

Smart robot accelerates cancer treatment research by finding optimal treatment combinations

Iterative search for anti-cancer drug combinations. The procedure starts by generating an initial generation (population) of drug combinations randomly or guided by biological prior knowledge and assumptions. In each iteration the aim is to propose a new generation of drug combinations based on the results obtained so far. The procedure iterates through a number of generations until a stop criterion for a predefined fitness function is satisfied. (credit: M. Kashif et al./Scientific Reports)

A new smart research system developed at Uppsala University accelerates research on cancer treatments by finding optimal treatment drug combinations. It was developed by a research group led by Mats Gustafsson, Professor of Medical Bioinformatics.

The “lab robot” system plans and conducts experiments with many substances, and draws its own conclusions from the results. The idea is to gradually refine combinations of substances so that they kill cancer cells without harming healthy cells.

Instead of just combining a couple of substances at a time, the new lab robot can handle about a dozen drugs simultaneously. The future aim is to handle many more, preferably hundreds.
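The figure caption above outlines the loop: propose a generation of drug combinations, score each one with a predefined fitness function (here, a therapeutic index), and use the best results to build the next generation until a stop criterion is met. Below is a minimal sketch of that kind of iterative search, not the Uppsala pipeline itself; the fitness function is a simulated stand-in for wet-lab assays, and all parameters are illustrative:

```python
import random

# Minimal sketch (not the Uppsala pipeline) of the iterative search described
# above: propose a generation of drug combinations, score each with a
# therapeutic-index-style fitness, and breed the best into the next generation.
# The fitness function is a simulated stand-in for wet-lab assays, and every
# parameter is an illustrative assumption.

DRUGS = list(range(12))   # about a dozen drugs, as in the article
COMBO_SIZE = 3            # drugs per combination (assumed)
POP_SIZE = 20
GENERATIONS = 15

def fitness(combo):
    # Stand-in for the therapeutic index: cytotoxicity in the cancer models
    # minus toxicity in the normal-cell models.
    rng = random.Random(hash(combo))   # deterministic score per combination
    return rng.uniform(-1.0, 1.0)

def mutate(combo):
    new = set(combo)
    new.discard(random.choice(list(new)))   # drop one drug
    while len(new) < COMBO_SIZE:
        new.add(random.choice(DRUGS))       # add a random replacement
    return frozenset(new)

population = [frozenset(random.sample(DRUGS, COMBO_SIZE)) for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    elite = population[: POP_SIZE // 4]     # keep the best quarter
    offspring = [mutate(random.choice(elite)) for _ in range(POP_SIZE - len(elite))]
    population = elite + offspring

best = max(population, key=fitness)
print("best combination:", sorted(best), "fitness:", round(fitness(best), 3))
```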

There are a few such laboratories in the world with this type of lab robot, but researchers “have only used the systems to look for combinations that kill the cancer cells, not taking the side effects into account,” says Gustafsson.

The next step: make the robot system more automated and smarter. The scientists also want to build more knowledge into the guiding algorithm of the robot, such as prior knowledge about drug targets and disease pathways.

In patients whose cancer returns multiple times, the cancer cells sometimes develop resistance to the pharmacotherapy used. The new robot system may also become important in efforts to find new drug compounds that make these resistant cells sensitive again.

The research is described in an open-access article published Tuesday (Sept. 22, 2015) in Scientific Reports.


Abstract of In vitro discovery of promising anti-cancer drug combinations using iterative maximisation of a therapeutic index

In vitro-based search for promising anti-cancer drug combinations may provide important leads to improved cancer therapies. Currently there are no integrated computational-experimental methods specifically designed to search for combinations, maximizing a predefined therapeutic index (TI) defined in terms of appropriate model systems. Here, such a pipeline is presented allowing the search for optimal combinations among an arbitrary number of drugs while also taking experimental variability into account. The TI optimized is the cytotoxicity difference (in vitro) between a target model and an adverse side effect model. Focusing on colorectal carcinoma (CRC), the pipeline provided several combinations that are effective in six different CRC models with limited cytotoxicity in normal cell models. Herein we describe the identification of the combination (Trichostatin A, Afungin, 17-AAG) and present results from subsequent characterisations, including efficacy in primary cultures of tumour cells from CRC patients. We hypothesize that its effect derives from potentiation of the proteotoxic action of 17-AAG by Trichostatin A and Afungin. The discovered drug combinations against CRC are significant findings themselves and also indicate that the proposed strategy has great potential for suggesting drug combination treatments suitable for other cancer types as well as for other complex diseases.

AI system solves SAT geometry questions as well as average American 11th-grade student

Examples of questions (left column) and interpretations (right column) derived by GEOS (credit: Minjoon Seo et al./Proceedings of EMNLP)

An AI system that can solve SAT geometry questions as well as the average American 11th-grade student has been developed by researchers at the Allen Institute for Artificial Intelligence (AI2) and University of Washington.

This system, called GeoS, uses a combination of computer vision to interpret diagrams, natural language processing to read and understand text, and a geometric solver, achieving 49 percent accuracy on official SAT test questions.

If these results were extrapolated to the entire math section of the SAT, the computer would achieve a score of roughly 500 (out of 800), the average test score for 2015.

These results, presented at the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP) in Lisbon, Portugal, were achieved by GeoS solving unaltered SAT questions that it had never seen before and that required an understanding of implicit relationships, ambiguous references, and the relationships between diagrams and natural-language text.

The best-known current test of an AI’s intelligence is the Turing test, which involves fooling a human in a blind conversation. “Unlike the Turing Test, standardized tests such as the SAT provide us today with a way to measure a machine’s ability to reason and to compare its abilities with that of a human,” said Oren Etzioni, CEO of AI2. “Much of what we understand from text and graphics is not explicitly stated, and requires far more knowledge than we appreciate.”

How GeoS Works

GeoS is the first end-to-end system that solves SAT plane-geometry problems. It first interprets a question by using the diagram and text in concert to generate the best possible logical expressions of the problem, then sends those expressions to a geometric solver. Finally, it compares the solver’s answer to the question’s multiple-choice options.

This process is complicated by the fact that SAT questions contain many unstated assumptions. In the top example of the SAT problem above, for instance, one such assumption is that lines BD and AC intersect at E.

GeoS had a 96 percent accuracy rate on the questions it was confident enough to answer. AI2 researchers say they aim to solve the full set of SAT math questions within the next three years.
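Putting those pieces together, the control flow looks roughly like the toy sketch below: interpret the text and diagram into a logical form with a confidence score, hand that form to a solver, pick the closest multiple-choice option, and abstain when confidence is low. The interpreter and solver here are trivial stand-ins (a triangle angle-sum “problem”), not AI2’s implementation:

```python
# Toy sketch (not AI2's implementation) of the control flow described above:
# interpret text + diagram into a logical form with a confidence score, hand
# that form to a geometric solver, pick the closest multiple-choice option,
# and abstain when confidence is low. The interpreter and solver here are
# trivial stand-ins that only know about the angles of a triangle.

CONFIDENCE_THRESHOLD = 0.8   # answer only when the interpretation is confident

def interpret(question_text, diagram):
    """Stand-in for text/diagram parsing: return (logical_form, confidence)."""
    # A real system derives literals such as IsTriangle(ABC), Equals(angleA, 40).
    return {"angle_A": 40.0, "angle_B": 75.0, "unknown": "angle_C"}, 0.9

def solve(logical_form):
    """Stand-in geometric solver: the angles of a triangle sum to 180 degrees."""
    return 180.0 - logical_form["angle_A"] - logical_form["angle_B"]

def answer(question_text, diagram, choices):
    logical_form, confidence = interpret(question_text, diagram)
    if confidence < CONFIDENCE_THRESHOLD:
        return None                       # abstain rather than guess
    value = solve(logical_form)
    return min(choices, key=lambda c: abs(c - value))

print(answer("In triangle ABC, angle A = 40 and angle B = 75. Find angle C.",
             diagram=None, choices=[55, 65, 75, 85]))   # -> 65
```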

An open-access paper outlining the research, “Solving Geometry Problems: Combining Text and Diagram Interpretation,” and a demonstration of the system’s problem-solving are available. All data sets and software are also available for other researchers to use.

The researchers say they are also building systems that can tackle science tests, which require a knowledge base that includes elements of the unstated, common-sense knowledge that humans generate over their lives. This Aristo project is described here.


Abstract of Solving geometry problems: Combining text and diagram interpretation

This paper introduces GeoS, the first automated system to solve unaltered SAT geometry questions by combining text understanding and diagram interpretation. We model the problem of understanding geometry questions as submodular optimization, and identify a formal problem description likely to be compatible with both the question text and diagram. GeoS then feeds the description to a geometric solver that attempts to determine the correct answer. In our experiments, GeoS achieves a 49% score on official SAT questions, and a score of 61% on practice questions. Finally, we show that by integrating textual and visual information, GeoS boosts the accuracy of dependency and semantic parsing of the question text.

New laser design could dramatically shrink autonomous-vehicle 3-D laser-ranging systems

This self-sweeping laser couples an optical field with the mechanical motion of a high-contrast grating (HCG) mirror. The HCG mirror is supported by mechanical springs connected to layers of semiconductor material. The red layer represents the laser’s gain (for light amplification), and the blue layers form the system’s second mirror. The force of the light causes the top mirror to vibrate at high speed. The vibration allows the laser to automatically change color as it scans. (credit: Weijian Yang)

UC Berkeley engineers have invented a new laser-ranging system that can reduce the power consumption, size, weight and cost of LIDAR (light detection and ranging, aka “light radar”), which is used in self-driving vehicles* to determine the distance to an object, and in real-time image capture for 3D videos.

“The advance could shrink components that now take up the space of a shoebox down to something compact and lightweight enough for smartphones or small UAVs [unmanned aerial vehicles],” said Connie Chang-Hasnain, a professor of electrical engineering and computer sciences at UC Berkeley.

Google self-driving cars use LIDAR (shown on top) to determine the distance of objects around them (credit: Google)

A system called optical coherence tomography (OCT) used in 3D medical imaging (especially for the retina) would also benefit.

A miniaturized 3-D laser-mirror system

The team used a novel approach to automate the way the light source changes its wavelength as it sweeps the surrounding landscape, as reported in an open-access paper in the journal Scientific Reports published Thursday, Sept. 3.

In both applications, the laser must continuously change (sweep) its frequency so that the system can measure the difference between the outgoing light and the incoming, reflected light, from which it calculates distance. To change the frequency, at least one of the two mirrors in the laser cavity must move precisely.
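This is the principle behind frequency-swept (FMCW) ranging: with a linear frequency sweep of bandwidth B over time T, the beat frequency between the outgoing and reflected light is proportional to the round-trip delay, giving a range of c · f_beat · T / (2B). A quick numerical sketch with illustrative numbers (not values from the paper):

```python
# Back-of-the-envelope sketch of frequency-swept (FMCW) ranging, the principle
# described above: a linear sweep of bandwidth B over time T turns round-trip
# delay into a beat frequency between outgoing and reflected light, so
#   range = c * f_beat * T / (2 * B)
# The numbers below are illustrative assumptions, not values from the paper.

C = 3.0e8   # speed of light, m/s

def fmcw_range(f_beat_hz: float, sweep_bandwidth_hz: float, sweep_time_s: float) -> float:
    return C * f_beat_hz * sweep_time_s / (2.0 * sweep_bandwidth_hz)

# Example: a 1 GHz sweep over 10 microseconds with a measured 100 kHz beat.
print(f"{fmcw_range(100e3, 1e9, 10e-6):.2f} m")   # -> 0.15 m
```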

“The mechanisms needed to control the mirrors are a part of what makes current LIDAR and OCT systems bulky, power-hungry, slow and complex,” study lead author Weijian Yang explained. “The faster the system must perform — such as in self-driving vehicles that must avoid collisions — the more power it needs.”

The novelty of the new design is that the researchers integrated the movable mirror into the semiconductor laser itself, letting the force of the laser light drive the mirror’s vibration and sweep the wavelength automatically. That means the laser can be as small as a few hundred micrometers square and powered by an AA battery.

The study authors said the next stage of the research will be to incorporate this new laser design in current LIDAR or OCT systems and demonstrate its application in 3-D video imaging.

A U.S. Department of Defense National Security Science and Engineering Faculty Fellowship helped support this work.

* Google’s robotic cars have about $150,000 in equipment including a $70,000 LIDAR system. The range finder mounted on the top is a Velodyne 64-beam laser. This laser allows the vehicle to generate a detailed 3D map of its environment. The car then takes these generated maps and combines them with high-resolution maps of the world, producing different types of data models that allow it to drive itself. — “Google driverless car,” Wikipedia


Abstract of Laser optomechanics

Cavity optomechanics explores the interaction between optical field and mechanical motion. So far, this interaction has relied on the detuning between a passive optical resonator and an external pump laser. Here, we report a new scheme with mutual coupling between a mechanical oscillator supporting the mirror of a laser and the optical field generated by the laser itself. The optically active cavity greatly enhances the light-matter energy transfer. In this work, we use an electrically-pumped vertical-cavity surface-emitting laser (VCSEL) with an ultra-light-weight (130 pg) high-contrast-grating (HCG) mirror, whose reflectivity spectrum is designed to facilitate strong optomechanical coupling, to demonstrate optomechanically-induced regenerative oscillation of the laser optomechanical cavity. We observe >550 nm self-oscillation amplitude of the micromechanical oscillator, two to three orders of magnitude larger than typical, and correspondingly a 23 nm laser wavelength sweep. In addition to its immediate applications as a high-speed wavelength-swept source, this scheme also offers a new approach for integrated on-chip sensors.

Toyota invests $50 million in intelligent vehicle technology at Stanford, MIT AI research centers

MIT’s iconic Stata Center, which houses the Computer Science and Artificial Intelligence Laboratory (credit: MIT)

Toyota Motor Corporation (TMC) announced today (Fri. Sept. 4) that it will be investing approximately $50 million over the next five years to establish joint research centers at the Stanford Artificial Intelligence Lab (SAIL) and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

Toyota also said Dr. Gill Pratt, former Program Manager at DARPA and leader of its recent Robotics Challenge, has joined Toyota to direct and accelerate these research activities and their application to intelligent vehicles and robotics.

Rather than fully autonomous vehicles (as in Google’s research), the program will initially focus on the acceleration of intelligent vehicle technology to help eliminate traffic casualties*, with the ultimate goal of helping improve quality of life through enhanced mobility and robotics, according to Kiyotaka Ise, who heads R&D at Toyota.

Specific research areas will include “improving the ability of intelligent vehicle technologies to recognize objects around the vehicle in diverse environments, provide elevated judgment of surrounding conditions, and safely collaborate with vehicle occupants, other vehicles, and pedestrians,” Pratt added. “The joint research will also look at applications of the same technology to human-interactive robotics and information service.”

“[The car] must ensure that it does no harm, not only some of the time, but almost all of the time,” said Pratt.

MIT research

Research at MIT, led by CSAIL director Professor Daniela Rus, will “develop advanced architectures that allow cars to better perceive and navigate their surroundings,” eventually developing a vehicle “incapable of getting into a collision.”

CSAIL researchers plan to explore an approach in which the human driver pays attention at all times, with an autonomous system that is there to jump in to save the driver in the event of an unavoidable accident. That will involve areas from computer vision and perception to planning and control to decision-making.

Rus envisions creating a system that could “prevent collisions and also provide drivers with assistance navigating tricky situations; support a tired driver by watching for unexpected dangers and diversions; and even offer helpful tips such as letting the driver know she is out of milk at home and planning a new route home that allows the driver to swing by the grocery store.”

Research at the new center will also include building new tools for collecting and analyzing navigation data, with the goal of learning from human driving; creating perception and decision-making systems for safe navigation; developing predictive models that can anticipate the behavior of humans, vehicles, and the larger environment; inventing state-of-the-art tools to handle congestion and high-speed driving in challenging situations including adverse weather; improving machine-vision algorithms used to detect and classify objects; and creating more intelligent user interfaces.

Stanford research

Led by Associate Professor Fei-Fei Li, the new SAIL-Toyota Center for AI Research will focus on teaching computers to see and make critical decisions about how to interact with the world.

Early on, the new effort will focus on AI-assisted driving to avoid automobile-related accidents. Li, a world-renowned expert in computer vision, said that Stanford will tackle the problem by addressing four main challenges of making a computer think like a person: perception, learning, reasoning, and interaction.

Stanford’s computer scientists will train computers to recognize objects and speech as well as data, and then use machine learning and statistical modeling to extract the meaningful data points — for instance, a swerving car versus a parked one. Other researchers will teach the AI platform to look at this critical data set and plot the safest driving action.

The first cars with AI technology will work as partners with the driver to make safe decisions, Li said, so devising ways to carefully and comfortably share control between the human and the computer will be instrumental in this technology gaining the public’s trust.

* The World Health Organization estimates that 3,400 people die each day from traffic-related accidents.