Deep neural network program recognizes sketches more accurately than a human

The Sketch-a-Net program successfully identified a seagull, pigeon, flying bird and standing bird better than humans (credit: QMUL, Mathias Eitz, James Hays and Marc Alexa)

The first computer program that can recognize hand-drawn sketches better than humans has been developed by researchers from Queen Mary University of London.

Known as Sketch-a-Net, the program correctly identified the subject of sketches 74.9 per cent of the time, compared with humans, who managed a success rate of only 73.1 per cent.

As sketching becomes more relevant with the increasing use of touchscreens, the program could lead to new ways of interacting with computers. A touchscreen could understand what you are drawing, enabling you to retrieve a specific image by sketching it with your fingers, which is more natural than a keyword search for items such as furniture or fashion accessories.

The improvement could also aid police forensics when an artist’s impression of a criminal needs to be matched to a mugshot or CCTV database.

The research also showed that the program was better at distinguishing finer details in sketches. For example, it successfully distinguished “seagull,” “flying-bird,” “standing-bird” and “pigeon” with 42.5 per cent accuracy, compared with humans, who achieved only 24.8 per cent.

Sketch-a-Net is a “deep neural network” program, designed to emulate the processing of the human brain. It is particularly successful because it accommodates the unique characteristics of sketches, notably the order in which the strokes were drawn. This information was previously ignored but is especially important for understanding drawings made on touchscreens.
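For illustration only, the short sketch below shows one way stroke order can be fed to a convolutional network: the drawing is rendered several times, each time up to a different fraction of its strokes, and the renderings are stacked as input channels. It is written in PyTorch with made-up layer sizes and class count, and is not the published Sketch-a-Net architecture.

# Minimal sketch (PyTorch): encoding stroke order as input channels.
# Layer sizes and class count are illustrative, not the published Sketch-a-Net.
import torch
import torch.nn as nn

class StrokeOrderCNN(nn.Module):
    def __init__(self, num_classes=250, num_channels=3):
        # Each input channel holds the sketch rendered up to a different
        # fraction of its strokes (e.g., first third, first two thirds, all),
        # so the network can exploit drawing order.
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(num_channels, 64, kernel_size=15, stride=3), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(64, 128, kernel_size=5), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(128, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, x):                  # x: (batch, num_channels, 225, 225)
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = StrokeOrderCNN()
dummy = torch.zeros(1, 3, 225, 225)        # three stroke-order channels
print(model(dummy).shape)                  # torch.Size([1, 250])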


Abstract of Sketch-a-Net that Beats Humans

We propose a multi-scale multi-channel deep neural network framework that, for the first time, yields sketch recognition performance surpassing that of humans. Our superior performance is a result of explicitly embedding the unique characteristics of sketches in our model: (i) a network architecture designed for sketch rather than natural photo statistics, (ii) a multi-channel generalisation that encodes sequential ordering in the sketching process, and (iii) a multi-scale network ensemble with joint Bayesian fusion that accounts for the different levels of abstraction exhibited in free-hand sketches. We show that state-of-the-art deep networks specifically engineered for photos of natural objects fail to perform well on sketch recognition, regardless whether they are trained using photo or sketch. Our network on the other hand not only delivers the best performance on the largest human sketch dataset to date, but also is small in size making efficient training possible using just CPUs.

3-D-printed robot is hard inside, soft outside, and capable of jumping without hurting itself

Left: the rigid top fractures on landing, while the top made of nine layers going from rigid to flexible remains intact (credit: Jacobs School of Engineering/UC San Diego, Harvard University)

Engineers at Harvard University and the University of California, San Diego, have created the first robot with a 3D-printed body that transitions from a rigid core to a soft exterior. The robot is capable of more than 30 untethered jumps at a time and is powered by a mix of butane and oxygen.

The researchers describe the robot’s design, manufacturing and testing in the July 10 issue of the journal Science. Michael Tolley, an assistant professor of mechanical engineering at UC San Diego, and one of the paper’s co-lead authors, believes bringing together soft and rigid materials will help create a “new generation of fast, agile robots that are more robust and adaptable than their predecessors and can safely work side by side with humans.” And maybe help prevent (or cushion) those “that’s gotta hurt” falls experienced by some robots participating in the recent DARPA Robotics Challenge.

The idea of blending soft and hard materials into the robot’s body came from nature, Tolley said. For example, certain species of mussels have a foot that starts out soft and then becomes rigid at the point where it makes contact with rocks. “In nature, complexity has a very low cost,” Tolley said. “Using new manufacturing techniques like 3D printing, we’re trying to translate this to robotics.”

Soft robots tend to be slow, especially when accomplishing tasks without being tethered to power sources and other electronics, said Tolley, who recently co-authored a research review on soft robotics for Nature (Rus, Tolley, v. 521, pp. 467-475). Adding rigid components should help, without compromising the safety of the humans who would work with them.

In the case of the robot described in Science, rigid layers also make for a better interface with the device’s electronic brains and power sources. The soft layers make it less vulnerable to damage when it lands after jumping.

How it works

The robot is made of two nested hemispheres. The top hemisphere is like a half shell, 3D-printed in one piece with nine different layers of stiffness, creating a structure that goes from rubber-like flexibility on the exterior to full rigidity near the core. The researchers tried several versions of the design and concluded that a fully rigid top would make for higher jumps, but a more flexible top was more likely to survive impacts on landing, allowing the robot to be reused. They decided to go with the more flexible design.

The bottom half of the robot is flexible and includes a small chamber where oxygen and butane are injected before it jumps. After the gases are ignited, this half behaves very much like a basketball that gets inflated almost instantaneously, propelling the robot into a jump. When the chemical charge is exhausted, the bottom hemisphere goes back to its original shape.

The two hemispheres surround a rigid core module that houses a custom circuit board, high-voltage power source, battery, miniature air compressor, butane fuel cell and other components. In a series of tests, the robot jumped two and a half feet (0.75 m) high and half a foot (0.15 m) laterally. Over the course of the experiments, the robot jumped more than 100 times and survived an additional 35 falls from a height of almost four feet.
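As a rough sanity check (not from the paper), basic projectile physics gives the vertical launch speed implied by the reported 0.75-meter jump:

# Back-of-the-envelope estimate (not from the paper): launch speed implied
# by the reported 0.75 m vertical jump, ignoring air resistance.
import math

g = 9.81          # gravitational acceleration, m/s^2
height = 0.75     # reported jump height, m

v_launch = math.sqrt(2 * g * height)        # from v^2 = 2*g*h at takeoff
print(f"Required vertical launch speed: {v_launch:.1f} m/s")   # ~3.8 m/s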


Jacobs School of Engineering/UC San Diego, Harvard University | 3D-printed robot is hard at heart, soft on the outside


Wyss Institute/Harvard University | 3D Printed Soft Jumping Robot


Abstract of A 3D-printed, functionally graded soft robot powered by combustion

Roboticists have begun to design biologically inspired robots with soft or partially soft bodies, which have the potential to be more robust and adaptable, and safer for human interaction, than traditional rigid robots. However, key challenges in the design and manufacture of soft robots include the complex fabrication processes and the interfacing of soft and rigid components. We used multimaterial three-dimensional (3D) printing to manufacture a combustion-powered robot whose body transitions from a rigid core to a soft exterior. This stiffness gradient, spanning three orders of magnitude in modulus, enables reliable interfacing between rigid driving components (controller, battery, etc.) and the primarily soft body, and also enhances performance. Powered by the combustion of butane and oxygen, this robot is able to perform untethered jumping.

AI algorithm learns to ‘see’ features in galaxy images

Hubble Space Telescope image of the cluster of galaxies MACS0416.1-2403, one of the Hubble “Frontier Fields” images. Bright yellow “elliptical” galaxies can be seen, surrounded by numerous blue spiral and amorphous (star-forming) galaxies. This image forms the test data that the machine learning algorithm is applied to, having not previously “seen” the image. (credit: NASA/ESA/J. Geach/A. Hocking)

A team of astronomers and computer scientists at the University of Hertfordshire has taught a machine to “see” astronomical images, using data from the Hubble Space Telescope Frontier Fields, a set of images of distant galaxy clusters containing several different types of galaxies.

The technique, which uses a form of AI called unsupervised machine learning, allows galaxies to be automatically classified at high speed, something previously done by thousands of human volunteers in projects like Galaxy Zoo.
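The article does not detail the algorithm itself, but the general idea of unsupervised clustering of image patches can be sketched in a few lines of Python; the patch statistics and cluster count below are illustrative assumptions, not the team’s method.

# Minimal sketch of unsupervised image clustering (not the team's exact method):
# split an image into patches, describe each patch with simple statistics,
# and let k-means group similar patches without any labels.
import numpy as np
from sklearn.cluster import KMeans

def patch_features(image, patch=16):
    """Mean and standard deviation of each non-overlapping patch as a crude feature vector."""
    rows, cols = image.shape[0] // patch, image.shape[1] // patch
    feats = []
    for i in range(rows):
        for j in range(cols):
            p = image[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            feats.append([p.mean(), p.std()])
    return np.array(feats)

rng = np.random.default_rng(0)
image = rng.random((256, 256))             # stand-in for a telescope image
X = patch_features(image)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels[:10])   # cluster id per patch, e.g. sky vs. different galaxy types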

Image highlighting parts of the MACS0416.1-2403 cluster image that the algorithm has identified as “star-forming” galaxies (credit: NASA/ESA/J. Geach/A. Hocking)

“We have not told the machine what to look for in the images, but instead taught it how to ‘see,’” said graduate student Alex Hocking.

“Our aim is to deploy this tool on the next generation of giant imaging surveys where no human, or even group of humans, could closely inspect every piece of data. But this algorithm has a huge number of applications far beyond astronomy, and investigating these applications will be our next step,” said University of Hertfordshire Royal Society University Research Fellow James Geach, PhD.

The scientists are now looking for collaborators to make use of the technique in applications like medicine, where it could for example help doctors to spot tumors, and in security, to find suspicious items in airport scans.

South Korean Team Kaist wins DARPA Robotics Challenge

DRC-Hubo robot turns valve 360 degrees in DARPA Robotics Challenge Final (credit: DARPA)

First place in the DARPA Robotics Challenge Finals this past weekend in Pomona, California went to Team Kaist of South Korea for its DRC-Hubo robot, winning $2 million in prize money.

Team IHMC Robotics of Pensacola, Fla., with its Running Man (Atlas) robot, came in second ($1 million prize), followed by Tartan Rescue of Pittsburgh with its CHIMP robot ($500,000 prize).

DRC-Hubo, Running Man, and CHIMP (credit: DARPA)

The DARPA Robotics Challenge, with three increasingly demanding competitions over two years, was launched in response to a humanitarian need that became glaringly clear during the nuclear disaster at Fukushima, Japan, in 2011, DARPA said.

The goal was to “accelerate progress in robotics and hasten the day when robots have sufficient dexterity and robustness to enter areas too dangerous for humans and mitigate the impacts of natural or man-made disasters.”

The difficult course of eight tasks simulated Fukushima-like conditions, such as driving alone, walking through rubble, tripping circuit breakers, turning valves, and climbing stairs.

Representing some of the most advanced robotics research and development organizations in the world, a dozen teams from the United States and another eleven from Japan, Germany, Italy, Republic of Korea and Hong Kong competed.

DARPA | DARPA Robotics Challenge 2015 Proving the Possible


DARPA | A Celebration of Risk (a.k.a., Robots Take a Spill)

More DARPA Robotics Challenge videos

Planarian regeneration model discovered by AI algorithm

Head-trunk-tail planarian regeneration results from experiments (credit: Daniel Lobo and Michael Levin/PLOS Computational Biology)

An artificial intelligence system has for the first time reverse-engineered the regeneration mechanism of planaria — the small worms whose extraordinary power to regrow body parts has made them a research model in human regenerative medicine.

The discovery by Tufts University biologists presents the first model of regeneration discovered by a non-human intelligence and the first comprehensive model of planarian regeneration, which had eluded human scientists for more than 100 years. The work, published in the June 4 issue of PLOS Computational Biology (open access), demonstrates how “robot science” can help human scientists in the future.

To bioengineer complex organs, scientists need to understand the mechanisms by which those shapes are normally produced by the living organism.

However, there’s a significant knowledge gap between the molecular genetic components needed to produce a particular organism shape and understanding how to generate that particular complex shape in the correct size, shape and orientation, said the paper’s senior author, Michael Levin, Ph.D., Vannevar Bush professor of biology and director of the Tufts Center for Regenerative and Developmental Biology.

“Most regenerative models today derived from genetic experiments are arrow diagrams, showing which gene regulates which other gene. That’s fine, but it doesn’t tell you what the ultimate shape will be. You cannot tell if the outcome of many genetic pathway models will look like a tree, an octopus or a human,” said Levin.

“Most models show some necessary components for the process to happen, but not what dynamics are sufficient to produce the shape, step by step. What we need are algorithmic or constructive models, which you could follow precisely and there would be no mystery or uncertainty. You follow the recipe and out comes the shape.”

Such models are required to know what triggers could be applied to such a system to cause regeneration of particular components, or other desired changes in shape. However, no such tools yet exist for mining the fast-growing mountain of published experimental data in regeneration and developmental biology, said the paper’s first author, Daniel Lobo, Ph.D., post-doctoral fellow in the Levin lab.

An evolutionary computation algorithm

To address this challenge, Lobo and Levin developed an algorithm that could be used to produce regulatory networks able to “evolve” to accurately predict the results of published laboratory experiments that the researchers entered into a database.

“Our goal was to identify a regulatory network that could be executed in every cell in a virtual worm so that the head-tail patterning outcomes of simulated experiments would match the published data,” Lobo said.

The algorithm generated networks by randomly combining previous networks and performing random changes, additions and deletions. Each candidate network was tested in a virtual worm, under simulated experiments. The algorithm compared the resulting shape from the simulation with real published data in the database.

As the evolution proceeded, the new networks gradually explained more of the experiments in the database, which comprised most of the known planarian experimental literature on head-versus-tail regeneration.
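A highly simplified, runnable sketch of this kind of evolutionary loop is shown below; the candidate “network” is just a vector of interaction signs, and the fitness function stands in for the paper’s planarian simulator and experiment database.

# Highly simplified sketch of the evolutionary loop described above.
# A candidate "network" is a vector of signed interactions; the fitness
# function is a stand-in for the planarian simulator and experiment database.
import random

random.seed(0)
NUM_LINKS = 9                      # toy number of regulatory interactions
TARGET = [random.choice([-1, 0, 1]) for _ in range(NUM_LINKS)]   # stand-in "true" network

def fitness(net):
    # Stand-in for "fraction of simulated experiments matching published data".
    return sum(a == b for a, b in zip(net, TARGET)) / NUM_LINKS

def mutate(net):
    child = list(net)
    child[random.randrange(NUM_LINKS)] = random.choice([-1, 0, 1])   # random change
    return child

def crossover(a, b):
    cut = random.randrange(1, NUM_LINKS)
    return a[:cut] + b[cut:]                                         # combine parents

population = [[random.choice([-1, 0, 1]) for _ in range(NUM_LINKS)] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                        # keep the best candidates
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children
    if fitness(population[0]) == 1.0:
        break

print(f"Best network explains {fitness(population[0]):.0%} of the toy 'experiments' "
      f"after {generation + 1} generations")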

Regenerative model discovered by AI

Regulatory network found by the automated system, explaining the combined phenotypic (forms and characteristics) experimental data of the key publications of head-trunk-tail planarian regeneration (credit: Daniel Lobo and Michael Levin/PLOS Computational Biology)

The researchers ultimately applied the algorithm to a combined dataset of 16 key planarian regeneration experiments to determine whether the approach could identify a comprehensive regulatory network of planarian regeneration.

After 42 hours, the algorithm returned the discovered regulatory network, which correctly predicted all 16 experiments in the dataset. The network comprised seven known regulatory molecules as well as two proteins that had not yet been identified in existing papers on planarian regeneration.

“This represents the most comprehensive model of planarian regeneration found to date. It is the only known model that mechanistically explains head-tail polarity determination in planaria under many different functional experiments and is the first regenerative model discovered by artificial intelligence,” said Levin.

The paper represents a successful application of the growing field of “robot science.”

“While the artificial intelligence in this project did have to do a whole lot of computations, the outcome is a theory of what the worm is doing, and coming up with theories of what’s going on in nature is pretty much the most creative, intuitive aspect of the scientist’s job,” Levin said.

“One of the most remarkable aspects of the project was that the model it found was not a hopelessly-tangled network that no human could actually understand, but a reasonably simple model that people can readily comprehend. All this suggests to me that artificial intelligence can help with every aspect of science, not only data mining, but also inference of meaning of the data.”

This work was supported with funding from the National Science Foundation, National Institutes of Health, USAMRMC, and the Mathers Foundation.

Robot servants push the boundaries in HUMANS

(credit: AMC)

AMC today announced HUMANS, an eight-part science-fiction TV thriller set in a parallel present and featuring sophisticated, lifelike robot servants and caregivers called Synths (personal synthetics).

The show explores conflicts as the lines between humans and machines become increasingly blurred.

The upgrade you’ve been waiting for is here (credit: AMC)

The series is set to premiere on AMC June 28 with HUMANS 101: The Hawkins family buys a Synth, Anita. But are they in danger from this machine and the young man Leo who seems desperate to find her?

The show features Oscar-winning actor William Hurt, Katherine Parkinson (The IT Crowd), Colin Morgan (Merlin), and Gemma Chan (Secret Diary of a Call Girl).


AMC

More HUMANS trailers

Emulating animals, these robots can recover from damage in two minutes

Researchers in France and the U.S. have developed a new technology that enables robots to recover from an injury in less than two minutes, similar to the way injured animals adapt. Such autonomous mobile robots would be useful in remote or hostile environments such as disaster areas, space, and deep oceans.

The video above shows a six-legged robot that adapts to keep walking even if two of its legs are broken. It also shows a robotic arm that learned how to correctly place an object even with several broken motors.

“When injured, animals do not start learning from scratch,” says Jean-Baptiste Mouret from Pierre and Marie Curie University. “Instead, they have intuitions about different ways to behave. These intuitions allow them to intelligently select a few, different behaviors to try out and, after these tests, they choose one that works in spite of the injury. We made robots that can do the same.”

The researchers developed an “Intelligent Trial and Error” algorithm that allows robots to emulate animals: the robots conduct experiments to rapidly discover a compensatory behavior that works despite the damage.

“For example, if walking, mostly on its hind legs, does not work well, it will next try walking mostly on its front legs,” explains Antoine Cully, lead author of a May 28 cover article on this research in the journal Nature. “What’s surprising is how quickly it can learn a new way to walk. It’s amazing to watch a robot go from crippled and flailing around to efficiently limping away in about two minutes.”


Abstract of Robots that can adapt like animals

Robots have transformed many industries, most notably manufacturing [1], and have the power to deliver tremendous benefits to society, such as in search and rescue [2], disaster response [3], health care [4] and transportation [5]. They are also invaluable tools for scientific exploration in environments inaccessible to humans, from distant planets [6] to deep oceans [7]. A major obstacle to their widespread adoption in more complex environments outside factories is their fragility [6, 8]. Whereas animals can quickly adapt to injuries, current robots cannot ‘think outside the box’ to find a compensatory behaviour when they are damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes [9], and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots [6, 8]. A promising approach to reducing robot fragility involves having robots learn appropriate behaviours in response to damage [10, 11], but current techniques are slow even with small, constrained search spaces [12]. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes in large search spaces without requiring self-diagnosis or pre-specified contingency plans. Before the robot is deployed, it uses a novel technique to create a detailed map of the space of high-performing behaviours. This map represents the robot’s prior knowledge about what behaviours it can perform and their value. When the robot is damaged, it uses this prior knowledge to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a behaviour that compensates for the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new algorithm will enable more robust, effective, autonomous robots, and may shed light on the principles that animals use to adapt to injury.
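The two-phase idea in the abstract, a precomputed map of behaviours and their expected performance followed by trial and error once the robot is damaged, can be caricatured in a few lines of Python; the gaits, speeds and stopping rule below are invented for illustration and are not the published algorithm.

# Toy sketch of the two-phase idea: a precomputed map of behaviours with their
# expected (undamaged) performance, then trial and error after damage.
# The behaviours and numbers are made up; the real algorithm uses a large
# behaviour-performance map and Bayesian optimisation.
import random

random.seed(1)

# Phase 1 (before deployment): map of candidate gaits -> expected speed (m/s).
behaviour_map = {f"gait_{i:02d}": round(random.uniform(0.1, 0.4), 2) for i in range(40)}

def try_on_damaged_robot(gait):
    # Stand-in for running the gait on the real, damaged robot and measuring speed.
    penalty = random.uniform(0.0, 0.3)
    return max(0.0, behaviour_map[gait] - penalty)

# Phase 2 (after damage): test the most promising untried behaviours first.
best_gait, best_speed, trials = None, 0.0, 0
for gait in sorted(behaviour_map, key=behaviour_map.get, reverse=True):
    measured = try_on_damaged_robot(gait)
    trials += 1
    if measured > best_speed:
        best_gait, best_speed = gait, measured
    if best_speed > 0.25 or trials >= 8:    # stop once a good-enough gait is found
        break

print(f"Selected {best_gait} at {best_speed:.2f} m/s after {trials} trials")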

MIT cheetah robot now jumps over obstacles autonomously


Massachusetts Institute of Technology | MIT cheetah robot lands the running jump

The MIT researchers who built a robotic cheetah have now trained it to see and jump over hurdles as it runs — making it the first four-legged robot to run and jump over obstacles autonomously.

The robot estimates an obstacle’s height and distance, gauges the best distance from which to jump, and adjusts its stride to land just short of the obstacle, before exerting enough force to push up and over. Based on the obstacle’s height, the robot then applies a certain amount of force to land safely, before resuming its initial pace.
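As a hypothetical illustration of the stride-adjustment arithmetic (not MIT’s actual planner), one could choose a stride length close to the nominal one so that a whole number of strides lands the robot just short of the obstacle:

# Rough, hypothetical sketch of the stride-adjustment idea (not MIT's planner):
# pick a stride length close to the nominal one so that an integer number of
# strides lands the robot just short of the detected obstacle.
def plan_approach(distance_to_obstacle, nominal_stride=0.35, takeoff_gap=0.25):
    """Return (num_strides, stride_length) so the last footfall lands
    takeoff_gap metres before the obstacle. All distances in metres."""
    run_up = distance_to_obstacle - takeoff_gap
    num_strides = max(1, round(run_up / nominal_stride))
    return num_strides, run_up / num_strides

strides, length = plan_approach(distance_to_obstacle=3.0)
print(f"{strides} strides of {length:.2f} m, then jump")   # 8 strides of 0.34 m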

In experiments on a treadmill and an indoor track, the cheetah robot successfully cleared obstacles up to 18 inches tall — more than half of the robot’s own height — while maintaining an average running speed of 5 miles per hour.

“A running jump is a truly dynamic behavior,” says Sangbae Kim, an assistant professor of mechanical engineering at MIT. “You have to manage balance and energy, and be able to handle impact after landing. Our robot is specifically designed for those highly dynamic behaviors.”

Onboard LIDAR + path-planning algorithm → autonomous control

As KurzweilAI reported last September, the engineers previously demonstrated that the robotic cheetah was able to run untethered, performing “blind,” without the use of cameras or other vision systems.

Now, the robot can “see,” with the use of onboard LIDAR, a visual system that uses reflections from a laser to map terrain (also used in autonomous vehicles). The team developed a three-part algorithm to plan out the robot’s path, based on LIDAR data. Both the vision and path-planning systems are onboard the robot, giving it complete autonomous control.

The team tested the cheetah’s jumping ability first on a treadmill, then on a track. On the treadmill, the robot ran tethered in place, as researchers placed obstacles of varying heights on the belt. After multiple runs, the robot successfully cleared about 70 percent of the hurdles.

In comparison, tests on an indoor track proved much easier, as the robot had more space and time in which to see, approach, and clear obstacles. In these runs, the robot successfully cleared about 90 percent of obstacles.

The team is now working on getting the MIT cheetah to jump over hurdles while running on softer terrain, like a grassy field.

This research was funded in part by the Defense Advanced Research Projects Agency.

Disney researchers develop 2-legged robot that walks like an animated character

Robot mimics character’s movements (credit: Disney Research)

Disney researchers have found a way for a robot to mimic an animated character’s walk, bringing a cartoon (or other) character to life in the real world.

Beginning with an animation of a diminutive, peanut-shaped character that walks with a rolling, somewhat bow-legged gait, Katsu Yamane and his team at Disney Research Pittsburgh analyzed the character’s motion to design a robotic frame that could duplicate the walking motion using 3D-printed links and servo motors, while also fitting inside the character’s skin. They then created control software that could keep the robot balanced while duplicating the character’s gait as closely as possible.

“The biggest challenge is that designers don’t necessarily consider physics when they create an animated character,” said Yamane, senior research scientist. Roboticists, however, wrestle with physical constraints throughout the process of creating a real-life version of the character.

“It’s important that, despite physical limitations, we do not sacrifice style or the quality of motion,” Yamane said. The robots will need to not only look like the characters, but move in the way people are accustomed to seeing those characters move.
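One small piece of such a pipeline, resampling an animation’s joint-angle keyframes to a servo update rate and clamping them to joint limits, might look like the hypothetical sketch below; the rates and limits are assumptions, not Disney’s software.

# Hypothetical sketch of one small retargeting step: resample joint-angle
# keyframes to the servo update rate and clamp to the servo's physical range.
# Rates and limits are assumptions, not Disney's code.
import numpy as np

def retarget_joint(keyframe_times, keyframe_angles, rate_hz=50.0,
                   servo_min=-1.2, servo_max=1.2):
    """Interpolate keyframed angles (radians) onto a fixed servo timeline."""
    t = np.arange(keyframe_times[0], keyframe_times[-1], 1.0 / rate_hz)
    angles = np.interp(t, keyframe_times, keyframe_angles)
    return t, np.clip(angles, servo_min, servo_max)   # respect joint limits

# One hip joint over a 2-second walk cycle (illustrative keyframes).
times, cmds = retarget_joint([0.0, 0.5, 1.0, 1.5, 2.0],
                             [0.0, 0.6, 0.0, -0.6, 0.0])
print(len(cmds), "servo commands at 50 Hz")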

(credit: Disney Research)

The researchers are describing the techniques and technologies they used to create the bipedal robot at the IEEE International Conference on Robotics and Automation, ICRA 2015, May 26–30 in Seattle.


DisneyResearchHub | Development of a Bipedal Robot that Walks Like an Animation Character

Intelligent handheld robots could make it easier for people to learn new skills

An intelligent handheld robot assisting a user in placing correct colored tiles (credit: University of Bristol)

What if your handheld tools knew what needs to be done and were even able to guide and help you complete jobs that require skills? University of Bristol researchers are finding out by building and testing intelligent handheld robots.

Think of them as smart power tools that “know” what they’re doing — and could even help you use them.

The robot tools would have three levels of autonomy, said Walterio Mayol-Cuevas, Reader in Robotics, Computer Vision and Mobile Systems: “No autonomy, semi-autonomous — the robot advises the user but does not act, and fully autonomous — the robot advises and acts even by correcting or refusing to perform incorrect user actions.”
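Those three levels suggest a simple gating policy for the tool’s controller; the enum and decision logic below are a hypothetical illustration, not the Bristol team’s software.

# Hypothetical sketch of gating a handheld tool's actions by autonomy level
# (illustrative only; not the Bristol team's software).
from enum import Enum

class Autonomy(Enum):
    NONE = 0            # tool does nothing on its own
    SEMI = 1            # tool advises the user but does not act
    FULL = 2            # tool advises and acts, refusing incorrect actions

def handle_user_action(level, action_is_correct):
    if level is Autonomy.NONE:
        return "execute as commanded"
    if level is Autonomy.SEMI:
        return "execute as commanded" if action_is_correct else "advise user"
    # FULL autonomy: carry out correct actions, block and correct wrong ones.
    return "execute as commanded" if action_is_correct else "refuse and correct"

print(handle_user_action(Autonomy.FULL, action_is_correct=False))  # refuse and correct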

The Bristol team has experimented with tasks such as picking and dropping different objects to form tile patterns and aiming in 3D for simulated painting.

The robot designs are open source and available on the university’s HandheldRobotics page.


HandheldRobotics | The Design and Evaluation of a Cooperative Handheld Robot