Schematic of a new kind of 3D printer that can print touch sensors directly on a model hand. (credit: Shuang-Zhuang Guo and Michael McAlpine/Advanced Materials)
Engineering researchers at the University of Minnesota have developed a process for 3D-printing stretchable, flexible, and sensitive electronic sensory devices that could give robots or prosthetic hands — or even real skin — the ability to mechanically sense their environment.
One major use would be to give surgeons the ability to feel during minimally invasive surgeries instead of using cameras, or to increase the sensitivity of surgical robots. The process could also make it easier for robots to walk and interact with their environment.
Printing electronics directly on human skin could be used for pulse monitoring, energy harvesting (of movements), detection of finger motions (on a keyboard or other devices), or chemical sensing (for example, by soldiers in the field to detect dangerous chemicals or explosives). Or imagine a future computer mouse built into your fingertip, with haptic touch on any surface.
“While we haven’t printed on human skin yet, we were able to print on the curved surface of a model hand using our technique,” said Michael McAlpine, a University of Minnesota mechanical engineering associate professor and lead researcher on the study.* “We also interfaced a printed device with the skin and were surprised that the device was so sensitive that it could detect your pulse in real time.”
The researchers also visualize use in “bionic organs.”
A unique skin-compatible 3D-printing process
(left) Schematic of the tactile sensor. (center) Top view. (right) Optical image showing the conformally printed 3D tactile sensor on a fingertip. Scale bar = 4 mm. (credit: Shuang-Zhuang Guo et al./Advanced Materials)
McAlpine and his team made the sensing fabric with a one-of-a-kind 3D printer they built in the lab. The multifunctional printer has four nozzles to print the various specialized “inks” that make up the layers of the device — a base layer of silicone**, top and bottom electrodes made of a silver-based piezoresistive conducting ink, a coil-shaped pressure sensor, and a supporting layer that holds the top layer in place while it sets (later washed away in the final manufacturing process).
Surprisingly, all of the layers of “inks” used in the flexible sensors can set at room temperature. Conventional 3D printing using liquid plastic is too hot and too rigid to use on the skin. The sensors can stretch up to three times their original size.
The researchers say the next step is to move toward semiconductor inks and printing on a real surface. “The manufacturing is built right into the process, so it is ready to go now,” McAlpine said.
The research was published online in the journal Advanced Materials. It was funded by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health.
* McAlpine integrated electronics and novel 3D-printed nanomaterials to create a “bionic ear” in 2013.
** The silicone rubber has a low modulus of elasticity of 150 kPa, similar to that of skin, and lower hardness (Shore A 10) than that of human skin, according to the Advanced Materials paper.
College of Science and Engineering, UMN | 3D Printed Stretchable Tactile Sensors
Abstract of 3D Printed Stretchable Tactile Sensors
The development of methods for the 3D printing of multifunctional devices could impact areas ranging from wearable electronics and energy harvesting devices to smart prosthetics and human–machine interfaces. Recently, the development of stretchable electronic devices has accelerated, concomitant with advances in functional materials and fabrication processes. In particular, novel strategies have been developed to enable the intimate biointegration of wearable electronic devices with human skin in ways that bypass the mechanical and thermal restrictions of traditional microfabrication technologies. Here, a multimaterial, multiscale, and multifunctional 3D printing approach is employed to fabricate 3D tactile sensors under ambient conditions conformally onto freeform surfaces. The customized sensor is demonstrated with the capabilities of detecting and differentiating human movements, including pulse monitoring and finger motions. The custom 3D printing of functional materials and devices opens new routes for the biointegration of various sensors in wearable electronics systems, and toward advanced bionic skin applications.
Last week, KurzweilAI reported that Google is rolling out an enhanced version of its “smart reply” machine-learning email software to “over 1 billion Android and iOS users of Gmail” — quoting Google CEO Sundar Pichai.
We noted that the new smart-reply version is now able to handle challenging sentences like “That interesting person at the cafe we like gave me a glance,” as Google research scientist Brian Strope and engineering director Ray Kurzweil noted in a Google Research blog post.
But “given enough examples of language, a machine learning approach can discover many of these subtle distinctions,” they wrote.
How does it work? “The content of language is deeply hierarchical, reflected in the structure of language itself, going from letters to words to phrases to sentences to paragraphs to sections to chapters to books to authors to libraries, etc.,” they explained.
So a hierarchical approach to learning “is well suited to the hierarchical nature of language. We have found that this approach works well for suggesting possible responses to emails. We use a hierarchy of modules, each of which considers features that correspond to sequences at different temporal scales, similar to how we understand speech and language.”*
Simplifying communication
“With Smart Reply, Google is assuming users want to offload the burdensome task of communicating with one another to our more efficient counterparts,” says Wired writer Liz Stinson.
“It’s not wrong. The company says the machine-generated replies already account for 12 percent of emails sent; expect that number to boom once everyone with the Gmail app can send one-tap responses.
“In the short term, that might mean more stilted conversations in your inbox. In the long term, the growing number of people who use these canned responses is only going to benefit Google, whose AI grows smarter with every email sent.”
Another challenge is that our emails, particularly from mobile devices, “tend to be riddled with idioms [such as urban lingo] that make no actual sense,” suggests Washington Post writer Hayley Tsukayama. “Things change depending on context: Something ‘wicked’ could be good or very bad, for example. Not to mention, sarcasm is a thing.
“Which is all to warn you that you may still get a wildly random and even potentially inappropriate suggestion — I once got an ‘Oh no!’ suggestion to a friend’s self-deprecating pregnancy announcement, for example. If the email only calls for a one- or two-sentence response, you’ll probably find Smart Reply useful. If it requires any nuance, though, it’s still best to use your own human judgment.”
* The initial release of Smart Reply encoded input emails word-by-word with a long short-term memory (LSTM) recurrent neural network, and then decoded potential replies with yet another word-level LSTM. While this type of modeling is very effective in many contexts, even with Google infrastructure, it’s an approach that requires substantial computation resources. Instead of working word-by-word, we found an effective and highly efficient path by processing the problem more all-at-once, by comparing a simple hierarchy of vector representations of multiple features corresponding to longer time spans. — Brian Strope and Ray Kurzweil, Google Research Blog.
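To make the hierarchical idea more concrete, here is a minimal sketch in Python of the general shape of such a system: an email is encoded by modules operating at increasing scales (words, then sentences, then the whole message), and a fixed set of candidate replies is ranked by similarity in the resulting vector space. This is not Google’s code; the hash-seeded word vectors, the averaging “modules,” the cosine ranking, and the candidate list are all illustrative stand-ins for components the real system learns from data.

```python
# Illustrative sketch only -- not Google's Smart Reply implementation.
# Encodes an email at increasing scales (words -> sentences -> message),
# then ranks a fixed set of candidate replies by cosine similarity.
import zlib
import numpy as np

DIM = 64

def word_vec(word: str) -> np.ndarray:
    """Deterministic pseudo-embedding (stand-in for a learned word embedding)."""
    rng = np.random.default_rng(zlib.crc32(word.lower().encode("utf-8")))
    return rng.standard_normal(DIM)

def encode_sentence(sentence: str) -> np.ndarray:
    """Word-scale module: combine word vectors into a sentence vector."""
    words = sentence.lower().split()
    return np.mean([word_vec(w) for w in words], axis=0) if words else np.zeros(DIM)

def encode_message(message: str) -> np.ndarray:
    """Sentence-scale module: combine sentence vectors into a message vector."""
    sentences = [s for s in message.replace("?", ".").split(".") if s.strip()]
    return np.mean([encode_sentence(s) for s in sentences], axis=0) if sentences else np.zeros(DIM)

def rank_replies(email: str, candidates: list[str], top_k: int = 3) -> list[str]:
    """Score candidate replies against the email vector and keep the best few."""
    e = encode_message(email)
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return sorted(candidates, key=lambda c: cos(e, encode_message(c)), reverse=True)[:top_k]

if __name__ == "__main__":
    email = "Are you free for lunch tomorrow? The cafe we like just reopened."
    candidates = ["Sounds great, see you then!", "Sorry, I can't make it.",
                  "Congratulations!", "What time works for you?"]
    print(rank_replies(email, candidates))
```

In the production system, the representations and the reply set are learned rather than hand-built, and the ranking feeds the up-to-three suggestions shown in Gmail.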
The game results show that placing slightly “noisy” bots in a central location (high-degree nodes) improves human coordination by reducing same-color neighbor nodes (the goal of the game). Square nodes show the bots and round nodes show human players; thick red lines show color conflicts, which are reduced with bot participation (right). (credit: Hirokazu Shirado and Nicholas A. Christakis/Nature)
It’s not about artificial intelligence (AI) taking over — it’s about AI improving human performance, a new study by Yale University researchers has shown.
“Much of the current conversation about artificial intelligence has to do with whether AI is a substitute for human beings. We believe the conversation should be about AI as a complement to human beings,” said Nicholas Christakis, co-director of the Yale Institute for Network Science (YINS) and senior author of the study.*
AI doesn’t even have to be super-sophisticated to make a difference in people’s lives; even “dumb AI” can help human groups, the study found. It appears in the May 18, 2017 issue of the journal Nature.
How bots can boost human performance
In a series of experiments using teams of human players and autonomous software agents (“bots”), the bots boosted the performance of human groups and the individual players, the researchers found.
The experiment involved an online color-coordination game in which groups of people worked toward a collective goal: every node choosing a color different from all of its neighboring nodes. Subjects were paid a US$2 show-up fee and a declining bonus of up to US$3 depending on how quickly the group reached a global solution (one in which every player had chosen a color different from all of their connected neighbors). If a group did not reach a global solution within 5 minutes, the game was stopped and the subjects earned no bonus.
The human players also interacted with anonymous bots that were programmed with three levels of behavioral randomness — meaning the AI bots sometimes deliberately made mistakes (introduced “noise”). In addition, sometimes the bots were placed in different parts of the social network to try different strategies.
The result: The bots reduced the median time for groups to solve problems by 55.6%. The experiment also showed a cascade effect: People whose performance improved when working with the bots then influenced other human players to raise their game. More than 4,000 people participated in the experiment, which used Yale-developed software called breadboard.
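For readers who want to see the mechanics, the following is a toy re-creation, not the Yale team’s breadboard software, of a networked color-coordination game: every node must end up with a color different from all of its neighbors, ordinary players greedily pick the least-conflicting color, and “noisy bots” occasionally choose a color at random. The graph construction, the 10% noise level, and the hub-placement heuristic are illustrative assumptions rather than the paper’s parameters.

```python
# Toy simulation of a networked color-coordination game with noisy bots.
# Not the experiment's software; parameters are illustrative only.
import random

def make_solvable_graph(n=20, extra_edges=10, seed=1):
    """Random graph with a planted 3-coloring, so a conflict-free solution exists."""
    random.seed(seed)
    group = {v: v % 3 for v in range(n)}          # hidden valid coloring
    nbrs = {v: set() for v in range(n)}
    edges = 0
    while edges < n + extra_edges:
        a, b = random.sample(range(n), 2)
        if group[a] != group[b] and b not in nbrs[a]:
            nbrs[a].add(b); nbrs[b].add(a); edges += 1
    return nbrs

def conflicts(colors, nbrs):
    return sum(colors[a] == colors[b] for a in nbrs for b in nbrs[a] if a < b)

def play(nbrs, bot_nodes=(), noise=0.1, n_colors=3, max_steps=5000, seed=2):
    """Asynchronous play: random node updates each step; return steps to solve."""
    random.seed(seed)
    colors = {v: random.randrange(n_colors) for v in nbrs}
    for step in range(max_steps):
        if conflicts(colors, nbrs) == 0:
            return step
        v = random.choice(list(nbrs))
        if v in bot_nodes and random.random() < noise:
            colors[v] = random.randrange(n_colors)        # deliberate "mistake"
        else:
            # greedy best response: least-used color among neighbors
            counts = [sum(colors[u] == c for u in nbrs[v]) for c in range(n_colors)]
            colors[v] = counts.index(min(counts))
    return None                                           # unsolved within budget

if __name__ == "__main__":
    g = make_solvable_graph()
    hubs = sorted(g, key=lambda v: len(g[v]), reverse=True)[:3]   # central nodes
    print("humans only, steps to solve:", play(g))
    print("with 3 noisy bots at hubs:  ", play(g, bot_nodes=set(hubs)))
```

The point of the noise is that a purely greedy group can get stuck in a locally “good enough” configuration; an occasional random move by a well-placed bot can unfreeze it.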
The findings have implications for a variety of situations in which people interact with AI technology, according to the researchers. Examples include human drivers who share roadways with autonomous cars and operations in which human soldiers work in tandem with AI.
“There are many ways in which the future is going to be like this,” Christakis said. “The bots can help humans to help themselves.”
Practical business AI tools
One example: Salesforce CEO Marc Benioff uses a bot called Einstein to help him run his company, Business Insider reported Thursday (May 18, 2017).
“Powered by advanced machine learning, deep learning, predictive analytics, natural language processing and smart data discovery, Einstein’s models will be automatically customised for every single customer,” according to the Salesforce blog. “It will learn, self-tune and get smarter with every interaction and additional piece of data. And most importantly, Einstein’s intelligence will be embedded within the context of business, automatically discovering relevant insights, predicting future behavior, proactively recommending best next actions and even automating tasks.”
Benioff says he also uses a version called Einstein Guidance for forecasting and modeling. It even helps end internal politics at executive meetings, calling out under-performing executives.
“AI is the next platform. All future apps for all companies will be built on AI,” Benioff predicts.
* Christakis is a professor of sociology, ecology & evolutionary biology, biomedical engineering, and medicine at Yale. Grants from the Robert Wood Johnson Foundation and the National Institute of Social Sciences supported the research.
Abstract of Locally noisy autonomous agents improve global human coordination in network experiments
Coordination in groups faces a sub-optimization problem and theory suggests that some randomness may help to achieve global optima. Here we performed experiments involving a networked colour coordination game in which groups of humans interacted with autonomous software agents (known as bots). Subjects (n = 4,000) were embedded in networks (n = 230) of 20 nodes, to which we sometimes added 3 bots. The bots were programmed with varying levels of behavioural randomness and different geodesic locations. We show that bots acting with small levels of random noise and placed in central locations meaningfully improve the collective performance of human groups, accelerating the median solution time by 55.6%. This is especially the case when the coordination problem is hard. Behavioural randomness worked not only by making the task of humans to whom the bots were connected easier, but also by affecting the gameplay of the humans among themselves and hence creating further cascades of benefit in global coordination in these heterogeneous systems.
A smarter version of Smart Reply (credit: Google Research)
Google is rolling out an enhanced version of its “smart reply” machine-learning email software to “over 1 billion Android and iOS users of Gmail,” Google CEO Sundar Pichai said today (May 17, 2017) in a keynote at the annual Google I/O conference.
Smart Reply suggests up to three replies to an email message — saving you typing time, or giving you time to think through a better reply. Smart Reply was previously only available to users of Google Inbox (an app that helps Gmail users organize their email messages and reply efficiently).
Hierarchical model
Developed by a team headed by Ray Kurzweil, a Google director of engineering, “the new version of Smart Reply increases the percentage of usable suggestions and is much more algorithmically efficient than the original system,” said Kurzweil in a Google Research blog post with research colleague Brian Strope today. “And that efficiency now makes it feasible for us to provide Smart Reply for Gmail.”
For example, a sentence like “That interesting person at the cafe we like gave me a glance” is difficult to interpret. Was it a positive or negative gesture? But “given enough examples of language, a machine learning approach can discover many of these subtle distinctions,” they write.
The Moogfest four-day festival in Durham, North Carolina next weekend (May 18–21) explores the future of technology, art, and music. Here are some of the sessions that may be especially interesting to KurzweilAI readers. Full #Moogfest2017 Program Lineup.
Culture and Technology
(credit: Google)
The Magenta by Google Brain team will bring its work to life through an interactive demo plus workshops on the creation of art and music through artificial intelligence.
Magenta is a Google Brain project that asks and answers the questions, “Can we use machine learning to create compelling art and music? If so, how? If not, why not?” It is first a research project to advance the state of the art and creativity in music, video, image, and text generation; second, Magenta is building a community of artists, coders, and machine learning researchers.
The interactive demo will go through an improvisation along with the machine learning models, much like the AI Jam Session. The workshop will cover how to use the open source library to build and train models and interact with them via MIDI.
TEDx Talks | Music and Art Generation using Machine Learning | Curtis Hawthorne | TEDxMountainViewHighSchool
Miguel Nicolelis (credit: Duke University)
Miguel A. L. Nicolelis, MD, PhD will discuss state-of-the-art research on brain-machine interfaces, which make it possible for the brains of primates to interact directly and in a bi-directional way with mechanical, computational and virtual devices. He will review a series of recent experiments using real-time computational models to investigate how ensembles of neurons encode motor information. These experiments have revealed that brain-machine interfaces can be used not only to study fundamental aspects of neural ensemble physiology, but they can also serve as an experimental paradigm aimed at testing the design of novel neuroprosthetic devices.
He will also explore research that raises the hypothesis that the properties of a robot arm, or other neurally controlled tools, can be assimilated by brain representations as if they were extensions of the subject’s own body.
Theme: Transhumanism
Dervishes at Royal Opera House with Matthew Herbert (credit: ?)
Andy Cavatorta (MIT Media Lab) will present a conversation and workshop on a range of topics including the four-century history of music and performance at the forefront of technology. Known as the inventor of Bjork’s Gravity Harp, he has collaborated on numerous projects to create instruments using new technologies that coerce expressive music out of fire, glass, gravity, tiny vortices, underwater acoustics, and more. His instruments explore technologically mediated emotion and opportunities to express the previously inexpressible.
Theme: Instrument Design
Berklee College of Music
Michael Bierylo (credit: Moogfest)
Michael Bierylo will present his Modular Synthesizer Ensemble alongside the Csound workshops from fellow Berklee Professor Richard Boulanger.
Csound is a sound and music computing system originally developed at the MIT Media Lab. It is most accurately described as a compiler: software that takes textual instructions in the form of source code and converts them into object code, a stream of numbers representing audio. Although it has a strong tradition as a tool for composing electro-acoustic pieces, it is used by composers and musicians for any kind of music that can be made with the help of a computer. It was traditionally used in a non-interactive, score-driven context, but nowadays it is mostly used in real time.
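As a rough illustration of that “source code in, stream of audio samples out” idea (and not Csound itself), the short Python sketch below compiles an invented one-note-per-line score format into 16-bit samples and writes them to a WAV file; the score syntax, envelope, and sample rate are arbitrary choices for the example.

```python
# Toy analogue of the compile-a-score-into-samples idea -- not Csound.
# Score format (invented for this example): one note per line, "<frequency_hz> <duration_s>".
import math, struct, wave

SAMPLE_RATE = 44100

def render(score_text: str, filename: str = "out.wav") -> None:
    samples = []
    for line in score_text.strip().splitlines():
        freq, dur = map(float, line.split())
        n = int(dur * SAMPLE_RATE)
        for i in range(n):
            # plain sine oscillator with a short linear fade-out to avoid clicks
            env = min(1.0, (n - i) / (0.05 * SAMPLE_RATE))
            samples.append(0.5 * env * math.sin(2 * math.pi * freq * i / SAMPLE_RATE))
    with wave.open(filename, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)               # 16-bit samples
        f.setframerate(SAMPLE_RATE)
        f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

if __name__ == "__main__":
    render("440 0.5\n554.37 0.5\n659.25 1.0")   # a short A-major arpeggio
```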
Michael Bierylo serves as the Chair of the Electronic Production and Design Department, which offers students the opportunity to combine performance, composition, and orchestration with computer, synthesis, and multimedia technology in order to explore the limitless possibilities of musical expression.
Berklee College of Music | Electronic Production and Design (EPD) at Berklee College of Music
Chris Ianuzzi (credit: William Murray)
Chris Ianuzzi, a synthesist of Ciani-Musica and past collaborator with pioneers such as Vangelis and Peter Baumann, will present a daytime performance and sound-exploration workshops with the B11 brain interface and a NeuroSky brainwave-sensing headset.
Theme: Hacking Systems
Argus Project (credit: Moogfest)
The Argus Project, from Gan Golan and Ron Morrison of NEW INC, is a wearable sculpture, video installation, and counter-surveillance training that directly intersects the public debate over police accountability. According to ancient Greek myth, Argus Panoptes was a giant with 100 eyes who served as an eternal watchman, both for – and against – the gods.
By embedding an array of camera “eyes” into a full body suit of tactical armor, the Argus exo-suit creates a “force field of accountability” around the bodies of those targeted. While some see filming the police as a confrontational or subversive act, it is, in fact, a deeply democratic one. The act of bearing witness to the actions of the state – and showing them to the world – strengthens our society and institutions. The Argus Project is not so much about an individual hero as about the Citizen Body as a whole. In between one of the music acts, a presentation about the project will be part of the Protest Stage.
Argus Exo Suit Design (credit: Argus Project)
Theme: Protest
Found Sound Nation (credit: Moogfest)
Democracy’s Exquisite Corpse from Found Sound Nation and Moogfest, an immersive installation housed within a completely customized geodesic dome, is a multi-person instrument and music-based round-table discussion. Artists, activists, innovators, festival attendees and community engage in a deeply interactive exploration of sound as a living ecosystem and primal form of communication.
Within the dome, there are 9 unique stations, each with their own distinct set of analog or digital sound-making devices. Each person’s set of devices is chained to the person sitting next to them, so that everybody’s musical actions and choices affect the person next to them, and thus affect everyone else at the table. This instrument is a unique experiment in how technology and the instinctive language of sound can play a role in the shaping of a truly collective unconscious.
Theme: Protest
(credit: Land Marking)
Land Marking, from Halsey Burgund and Joe Zibkow of MIT Open Doc Lab, is a mobile-based music/activist project that augments the physical landscape of protest events with a layer of location-based audio contributed by event participants in real time. The project captures the audioscape and personal experiences of temporary, but extremely important, expressions of discontent and desire for change.
Land Marking will be teaming up with the Protest Stage to allow Moogfest attendees to contribute their thoughts on protests and tune into an evolving mix of commentary and field recordings from others throughout downtown Durham. Land Marking is available on select apps.
Theme: Protest
Taeyoon Choi (credit: Moogfest)
Taeyoon Choi, an artist and educator based in New York and Seoul, will be leading a Sign Making Workshop as one of the Future Thought leaders on the Protest Stage. His art practice involves performance, electronics, drawings, and storytelling that often leads to interventions in public spaces.
Taeyoon will also participate in the Handmade Computer workshop to build a 1 Bit Computer, which demonstrates how binary numbers and boolean logic can be configured to create more complex components. On their own these components aren’t capable of computing anything particularly useful, but a computer is said to be Turing complete if it includes all of them, at which point it has the extraordinary ability to carry out any possible computation. He has participated in numerous workshops at festivals around the world, from Korea to Scotland, but primarily at the School for Poetic Computation (SFPC) — an artist-run school he co-founded in NYC. Taeyoon Choi’s Handmade Computer projects.
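As a software analogue of that premise (not the workshop’s hardware kit), the sketch below builds NOT, AND, OR, and XOR from a single NAND gate and composes them into a 1-bit full adder, the kind of “more complex component” the passage refers to; the function names and the exhaustive check are just for illustration.

```python
# Illustrative sketch: simple boolean building blocks composing into a useful component.
def NAND(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def NOT(a):     return NAND(a, a)
def AND(a, b):  return NOT(NAND(a, b))
def OR(a, b):   return NAND(NOT(a), NOT(b))
def XOR(a, b):  return AND(OR(a, b), NAND(a, b))

def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """Add three bits; return (sum_bit, carry_out)."""
    s1 = XOR(a, b)
    return XOR(s1, carry_in), OR(AND(a, b), AND(s1, carry_in))

if __name__ == "__main__":
    # exhaustively check the adder against ordinary integer addition
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                s, cout = full_adder(a, b, c)
                assert 2 * cout + s == a + b + c
    print("1-bit full adder built from NAND gates works.")
```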
Theme: Protest
(credit: Moogfest)
irlbb, from Vivan Thi Tang, connects individuals after IRL (in real life) interactions and creates community that would otherwise have been missed. With a customized beta of the app for Moogfest 2017, irlbb presents a unique engagement opportunity.
Theme: Protest
Ryan Shaw and Michael Clamann (credit: Duke University)
Duke professors Ryan Shaw and Michael Clamann will lead a daily science pub talk series on topics that include future medicine, humans and autonomy, and quantum physics.
Ryan is a pioneer in mobile health — the collection and dissemination of information using mobile and wireless devices for healthcare — working with faculty at Duke’s Schools of Nursing, Medicine, and Engineering to integrate mobile technologies into first-generation care delivery systems. These technologies afford researchers, clinicians, and patients a rich stream of real-time information about individuals’ biophysical and behavioral health in everyday environments.
Michael Clamann is a Senior Research Scientist in the Humans and Autonomy Lab (HAL) within the Robotics Program at Duke University, an Associate Director at UNC’s Collaborative Sciences Center for Road Safety, and the Lead Editor for Robotics and Artificial Intelligence for Duke’s SciPol science policy tracking website. In his research, he works to better understand the complex interactions between robots and people and how they influence system effectiveness and safety.
Theme: Hacking Systems
Dave Smith (credit: Moogfest)
Dave Smith, the iconic instrument innovator and Grammy-winner, will lead Moogfest’s Instruments Innovators program and host a headlining conversation with a leading artist to be revealed in next week’s program release. He will also host a masterclass.
As the original founder of Sequential Circuits in the mid-70s, Dave designed the Prophet-5 — the world’s first fully-programmable polyphonic synth and the first musical instrument with an embedded microprocessor. From the late 1980s through the early 2000s he worked to develop next-level synths with the likes of the Audio Engineering Society, Yamaha, Korg, and Seer Systems (for Intel). Realizing the limitations of software, Dave returned to hardware and started Dave Smith Instruments (DSI), which released the Evolver hybrid analog/digital synthesizer in 2002. Since then the DSI product lineup has grown to include the Prophet-6, OB-6, Pro 2, Prophet 12, and Prophet ’08 synthesizers, as well as the Tempest drum machine, co-designed with friend and fellow electronic instrument designer Roger Linn.
Theme: Future Thought
Dave Rossum, Gerhard Behles, and Lars Larsen (credit: Moogfest)
E-mu Systems founder Dave Rossum, Ableton CEO Gerhard Behles, and LZX founder Lars Larsen will take part in conversations as part of the Instruments Innovators program.
Driven by the creative and technological vision of electronic music pioneer Dave Rossum, Rossum Electro-Music creates uniquely powerful tools for electronic music production and is the culmination of Dave’s 45 years designing industry-defining instruments and transformative technologies. Starting with his co-founding of E-mu Systems, Dave provided the technological leadership that resulted in what many consider the premier professional modular synthesizer system — the E-mu Modular System — which became an instrument of choice for numerous recording studios, educational institutions, and artists as diverse as Frank Zappa, Leon Russell, and Hans Zimmer. In the following years, he worked on developing the Emulator keyboards and racks (i.e., the Emulator II), Emax samplers, the legendary SP-12 and SP-1200 sampling drum machines, the Proteus sound modules, and the Morpheus Z-Plane Synthesizer.
Gerhard Behles co-founded Ableton in 1999 with Robert Henke and Bernd Roggendorf. Prior to this he had been part of electronic music act “Monolake” alongside Robert Henke, but his interest in how technology drives the way music is made diverted his energy towards developing music software. He was fascinated by how dub pioneers such as King Tubby ‘played’ the recording studio, and began to shape this concept into a music instrument that became Ableton Live.
LZX Industries was born in 2008 out of the Synth DIY scene when Lars Larsen of Denton, Texas and Ed Leckie of Sydney, Australia began collaborating on the development of a modular video synthesizer. At that time, analog video synthesizers were inaccessible to artists outside of a handful of studios and universities. It was their continuing mission to design creative video instruments that (1) stay within the financial means of the artists who wish to use them, (2) honor and preserve the legacy of 20th century toolmakers, and (3) expand the boundaries of possibility. Since 2015, LZX Industries has focused on the research and development of new instruments, user support, and community building.
Science
ATLAS detector (credit: Kaushik De, Brookhaven National Laboratory)
The program will include a “Virtual Visit” to the Large Hadron Collider — the world’s largest and most powerful particle accelerator — via a live video session, a ½ day workshop analyzing and understanding LHC data, and a “Science Fiction versus Science Fact” live debate.
The ATLAS experiment is designed to exploit the full discovery potential and the huge range of physics opportunities that the LHC provides. Physicists test the predictions of the Standard Model, which encapsulates our current understanding of what the building blocks of matter are and how they interact – resulting in such discoveries as the Higgs boson. By pushing the frontiers of knowledge, it seeks to answer fundamental questions such as: What are the basic building blocks of matter? What are the fundamental forces of nature? Could there be a greater underlying symmetry to our universe?
“Atlas Boogie” (referencing the Higgs boson):
ATLAS Experiment | The ATLAS Boogie
(credit: Kate Shaw)
Kate Shaw (ATLAS @ CERN), PhD, in her keynote, titled “Exploring the Universe and Impacting Society Worldwide with the Large Hadron Collider (LHC) at CERN,” will dive into the present-day and future impacts of the LHC on society. She will also share findings from the work she has done promoting particle physics in developing countries through her Physics without Frontiers program.
Theme: Future Thought
Arecibo (credit: Joe Davis/MIT)
In his keynote, Joe Davis (MIT) will trace the history of several projects centered on ideas about extraterrestrial communications that have given rise to new scientific techniques and inspired new forms of artistic practice. He will present his “swansong” — an interstellar message that is intended explicitly for human beings rather than for aliens.
Theme: Future Thought
Immortality bus (credit: Zoltan Istvan)
Zoltan Istvan (Immortality Bus), the former U.S. Presidential candidate for the Transhumanist party and leader of the Transhumanist movement, will explore the path to immortality through science with the purpose of using science and technology to radically enhance the human being and human experience. His futurist work has reached over 100 million people–some of it due to the Immortality Bus which he recently drove across America with embedded journalists aboard. The bus is shaped and looks like a giant coffin to raise life extension awareness.
Zoltan Istvan | 1-min Highlight Video for Zoltan Istvan Transhumanism Documentary IMMORTALITY OR BUST
Theme: Transhumanism/Biotechnology
(credit: Moogfest)
Marc Fleury and members of the Church of Space — Park Krausen, Ingmar Koch, and Christ of Veillon — return to Moogfest for a second year to present an expanded and varied program with daily explorations in modern physics with music and the occult, Illuminati performances, theatrical rituals to ERIS, and a Sunday Mass in their own dedicated “Church” venue.
Which of these presentation methods makes the robot look most real: live, VR, 3D TV, or 2D TV? (credit: Constanze Schreiner/University of Koblenz-Landau, Martina Mara/Ars Electronica Futurelab, and Markus Appel/University of Wurzburg)
How do you make humanoid robots look least creepy? With the increasing use of industrial (and soon, service) robots, it’s a good question.
Researchers at the University of Koblenz-Landau, University of Wurzburg, and Ars Electronica Futurelab decided to find out with an experiment. They created a skit with a human actor and the Roboy robot, and presented scripted human-robot interactions (HRIs), using four types of presentations: live, virtual reality (VR), 3D TV, and 2D TV. Participants saw Roboy assisting the human in organizing appointments, conducting web searches, and finding a birthday present for the human’s mother.
People who watched live interactions with the robot were most likely to consider the robot as real, followed by viewing the same interaction via VR. Robots presented in VR also scored high in human likeness, but lower than in the live presentation.
The Deep Photo Style Transfer tool lets you add artistic style and other elements from a reference photo onto your photo. (credit: Cornell University)
“Deep Photo Style Transfer” is a cool new artificial-intelligence image-editing software tool that lets you transfer a style from another (“reference”) photo onto your own photo, as shown in the above examples.
An open-access arXiv paper by Cornell University computer scientists and Adobe collaborators explains that the tool can transpose the look of one photo (such as the time of day, weather, season, and artistic effects) onto your photo, giving it a stylized look reminiscent of a painting while keeping it photorealistic.
The algorithm also handles extreme mismatch of forms, such as transferring a fireball to a perfume bottle. (credit: Fujun Luan et al.)
“What motivated us is the idea that style could be imprinted on a photograph, but it is still intrinsically the same photo,” said Cornell computer science professor Kavita Bala. “This turned out to be incredibly hard. The key insight finally was about preserving boundaries and edges while still transferring the style.”
To do that, the researchers created deep-learning software that can add a neural network layer that pays close attention to edges within the image, like the border between a tree and a lake.
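The sketch below shows, in rough outline, how that kind of objective is typically assembled: a content term, a Gram-matrix style term, and a photorealism term that discourages new edges where the input photo is smooth. It is only a sketch under assumptions: the paper’s actual regularizer is a locally affine (Matting Laplacian) constraint in color space, the feature maps here are presumed to come from a pretrained network such as VGG, and the weights are arbitrary.

```python
# Illustrative loss structure for photo-style transfer -- not the paper's exact method.
# Feature maps (out_feats, content_feats, style_feats) are assumed to come from a
# pretrained CNN; the edge-weighted smoothness term stands in for the paper's
# locally affine photorealism constraint.
import torch

def gram(feat):                        # feat: (C, H, W) feature map
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.t() / (c * h * w)

def content_loss(out_feat, content_feat):
    return torch.mean((out_feat - content_feat) ** 2)

def style_loss(out_feats, style_feats):
    return sum(torch.mean((gram(o) - gram(s)) ** 2) for o, s in zip(out_feats, style_feats))

def edge_preserving_loss(output_img, input_img):
    """Penalize new gradients in the output where the input photo was smooth."""
    def grads(img):                    # img: (3, H, W)
        return img[:, :, 1:] - img[:, :, :-1], img[:, 1:, :] - img[:, :-1, :]
    ox, oy = grads(output_img)
    ix, iy = grads(input_img)
    wx, wy = torch.exp(-10 * ix.abs()), torch.exp(-10 * iy.abs())
    return torch.mean(wx * ox.abs()) + torch.mean(wy * oy.abs())

def total_loss(output_img, input_img, out_feats, content_feats, style_feats,
               w_content=1.0, w_style=100.0, w_photo=1000.0):
    return (w_content * content_loss(out_feats[-1], content_feats[-1])
            + w_style * style_loss(out_feats, style_feats)
            + w_photo * edge_preserving_loss(output_img, input_img))
```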
This research is supported by a Google Faculty Research Award and NSF awards.
Abstract of Deep Photo Style Transfer
This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.
British biomedical engineers have developed a new generation of intelligent prosthetic limbs that allows the wearer to reach for objects automatically, without thinking — just like a real hand.
The hand’s camera takes a picture of the object in front of it, assesses its shape and size, picks the most appropriate grasp, and triggers a series of movements in the hand — all within milliseconds.
A deep learning-based artificial vision and grasp system
Biomedical engineers at Newcastle University and associates developed a convolutional neural network (CNN), trained it with images of more than 500 graspable objects, and taught it to recognize the grip needed for different types of objects.
Object recognition (top) vs. grasp recognition (bottom) (credit: Ghazal Ghazaei/Journal of Neural Engineering)
Grouping objects by size, shape and orientation, according to the type of grasp that would be needed to pick them up, the team programmed the hand to perform four different grasps: palm wrist neutral (such as when you pick up a cup); palm wrist pronated (such as picking up the TV remote); tripod (thumb and two fingers), and pinch (thumb and first finger).
“We would show the computer a picture of, for example, a stick,” explains lead author Ghazal Ghazaei. “But not just one picture; many images of the same stick from different angles and orientations, even in different light and against different backgrounds, and eventually the computer learns what grasp it needs to pick that stick up.”
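As a minimal sketch of that pipeline (not the Newcastle group’s actual network), the PyTorch snippet below defines a small convolutional classifier that maps a camera image to one of the four grasp classes named above; the layer sizes, the 64×64 input, and the random test image are illustrative assumptions.

```python
# Illustrative sketch of an image-to-grasp-class classifier -- not the published model.
import torch
import torch.nn as nn

GRASPS = ["palmar wrist neutral", "palmar wrist pronated", "tripod", "pinch"]

class GraspNet(nn.Module):
    def __init__(self, n_classes=len(GRASPS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(64 * 8 * 8, n_classes)

    def forward(self, x):                       # x: (batch, 3, 64, 64)
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    net = GraspNet()
    image = torch.rand(1, 3, 64, 64)            # stand-in for a camera frame
    grasp = GRASPS[net(image).argmax(dim=1).item()]
    print("selected grasp:", grasp)             # would trigger the hand's grasp routine
```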
A block diagram representation of the method (credit: Ghazal Ghazaei/Journal of Neural Engineering)
Current prosthetic hands are controlled directly via the user’s myoelectric signals (electrical activity of the muscles recorded from the skin surface of the stump). That takes learning, practice, concentration and, crucially, time.
A small number of amputees have already trialed the new technology. After training, subjects successfully picked up and moved the target objects with an overall success of up to 88%. Now the Newcastle University team is working with experts at Newcastle upon Tyne Hospitals NHS Foundation Trust to offer the “hands with eyes” to patients at Newcastle’s Freeman Hospital.
A future bionic hand
The work is part of a larger research project to develop a bionic hand that can sense pressure and temperature and transmit the information back to the brain.
Led by Newcastle University and involving experts from the universities of Leeds, Essex, Keele, Southampton and Imperial College London, the aim is to develop novel electronic devices that connect neural networks to the forearm to allow two-way communications with the brain.
Abstract of Deep learning-based artificial vision for grasp classification in myoelectric hands
Objective. Computer vision-based assistive technology solutions can revolutionise the quality of care for people with sensorimotor disorders. The goal of this work was to enable trans-radial amputees to use a simple, yet efficient, computer vision system to grasp and move common household objects with a two-channel myoelectric prosthetic hand. Approach. We developed a deep learning-based artificial vision system to augment the grasp functionality of a commercial prosthesis. Our main conceptual novelty is that we classify objects with regards to the grasp pattern without explicitly identifying them or measuring their dimensions. A convolutional neural network (CNN) structure was trained with images of over 500 graspable objects. For each object, 72 images, at intervals, were available. Objects were categorised into four grasp classes, namely: pinch, tripod, palmar wrist neutral and palmar wrist pronated. The CNN setting was first tuned and tested offline and then in realtime with objects or object views that were not included in the training set. Main results. The classification accuracy in the offline tests reached for the seen and for the novel objects; reflecting the generalisability of grasp classification. We then implemented the proposed framework in realtime on a standard laptop computer and achieved an overall score of in classifying a set of novel as well as seen but randomly-rotated objects. Finally, the system was tested with two trans-radial amputee volunteers controlling an i-limb Ultra™ prosthetic hand and a motion control™ prosthetic wrist, augmented with a webcam. After training, subjects successfully picked up and moved the target objects with an overall success of up to 88%. In addition, we show that with training, subjects’ performance improved in terms of time required to accomplish a block of 24 trials despite a decreasing level of visual feedback. Significance. The proposed design constitutes a substantial conceptual improvement for the control of multi-functional prosthetic hands. We show for the first time that deep-learning based computer vision systems can enhance the grip functionality of myoelectric hands considerably.
Architectural-scale dome section case study for 3-D printing system (top view). For initial tests, the system fabricated the foam-insulation framework used to form a finished concrete structure. As a proof of concept, the researchers used a prototype to build the basic structure of the walls of a 50-foot-diameter, 12-foot-high dome — a project that was completed in less than 14 hours of “printing” time. (credit: Steven Keating, Julian Leland, Levi Cai, and Neri Oxman/Mediated Matter Group)
MIT researchers have designed a “Digital Construction Platform” system that can 3-D print the basic structure of an entire building. It could enable faster, cheaper, more adaptable building construction — replacing traditional fabrication technologies that are dangerous, slow, and energy-intensive in the annual $8.5 trillion construction industry.
The Digital Construction Platform system consists of a tracked vehicle that carries a large, industrial robotic arm, which has a smaller, precision-motion robotic arm (orange) at its end. This highly controllable arm can be used to direct any conventional (or unconventional) construction nozzle, such as those used for pouring concrete or spraying insulation material. The nozzles can be adapted to vary the density of the material being poured, and even to mix different materials as it goes along. The system is equipped with a scoop that could be used to both prepare the building surface and acquire local materials, such as dirt for a rammed-earth building, for the construction itself. The whole system could be operated electrically, even powered by solar panels, as shown here. The system can also create complex shapes and overhangs, which the team demonstrated by including a wide, built-in bench in their prototype dome. (credit: Steven J. Keating et al./Science Robotics)
Described in an open-access paper in the journal Science Robotics, this free-moving system is intended to be self-sufficient and can construct an object of almost any size. It could enable the design and construction of new kinds of buildings that would not be feasible with traditional building methods.
A building could be completely customized to the needs of a particular site and the desires of its maker. Even the internal structure could be modified in new ways — different materials could be incorporated as the process goes along, and material density could be varied to provide optimum combinations of strength, insulation, or other properties.
Rendering showing use of the Digital Construction Platform in an urban environment, including robotic chain welding fabrication — a building as an organism, computationally grown, additively manufactured, and possibly biologically augmented. In the future, the supporting pillars of such a building could be placed in optimal locations based on ground-penetrating radar analysis of the site, and walls could have varying thickness depending on their orientation. For example, a building could have thicker, more insulated walls on its north side in cold climates, or walls that taper from bottom to top as their load-bearing requirements decrease, or curves that help the structure withstand winds. (credit: Steven J. Keating et al./Science Robotics)
The researchers showed that the system can be easily adapted to existing building sites and equipment, and that it will fit existing building codes without requiring whole new evaluations. Such systems could be deployed to remote regions, for example in the developing world, or to areas for disaster relief after a major storm or earthquake, to provide durable shelter rapidly.
Keating says the team’s analysis shows that such construction methods could produce a structure faster and less expensively than present methods can, and would also be much safer by reducing hands-on work*. In addition, because shapes and thicknesses can be optimized for what is needed structurally, rather than having to match what’s available in premade lumber and other materials, the total amount of material needed could be reduced.
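A rough back-of-the-envelope sketch helps put the reported print time in perspective. Using the dome dimensions from the paper’s abstract (14.6 m diameter, 3.7 m tall, the roughly 50-foot by 12-foot structure in the caption above) together with assumed values for layer height, bead count, and nozzle speed, none of which come from the paper, the estimated printed path length lands in the same general range as the under-13.5-hour fabrication time the team reports.

```python
# Back-of-envelope estimate of the dome formwork printing workload.
# Dome dimensions are from the paper's abstract; layer height, bead count,
# and nozzle speed are illustrative guesses, not figures from the paper.
import math

DIAMETER_M = 14.6
HEIGHT_M = 3.7
LAYER_HEIGHT_M = 0.025        # assumed foam bead height per pass
BEADS_PER_LAYER = 2           # assumed inner + outer formwork walls
NOZZLE_SPEED_M_S = 0.3        # assumed average travel speed

layers = math.ceil(HEIGHT_M / LAYER_HEIGHT_M)
path_per_layer_m = BEADS_PER_LAYER * math.pi * DIAMETER_M   # ignores the dome's taper
total_path_m = layers * path_per_layer_m
hours = total_path_m / NOZZLE_SPEED_M_S / 3600

print(f"{layers} layers, ~{total_path_m:,.0f} m of bead, ~{hours:.1f} h at "
      f"{NOZZLE_SPEED_M_S} m/s (excluding moves between layers)")
```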
For initial tests, the system fabricated a foam-insulation framework. In this construction method, polyurethane foam molds are filled with concrete, similar to traditional commercial insulated-concrete formwork techniques. Any needed wiring and plumbing can be inserted into the mold before the concrete is poured, providing a finished wall structure all at once. It can even incorporate data about the site collected during the process, using built-in sensors for temperature, light, and other parameters to make adjustments to the structure as it is built. (credit: Steven J. Keating et al./Science Robotics)
The ultimate vision is “in the future, to have something totally autonomous, that you could send to the moon or Mars or Antarctica, and it would just go out and make these buildings for years,” says Keating, who led the development of the system as his doctoral thesis work. Meanwhile, “with this process, we can replace one of the key parts of making a building, right now,” he says.
Automated ice structure fabrication in polar environment with power sourced through rollable photovoltaic panels and materials gathered locally. (credit: Steven J. Keating et al./Science Robotics)
Fabrication with local sand to create fractal structures for future immersion in the ocean to support coral reef regrowth. Power sourced via deployable rollable photovoltaics. (credit: Steven J. Keating et al./Science Robotics)
* The International Labour Organization estimated in 2005 that more than 50,000 people die globally in the construction industry per year, accounting for 17% of workplace accident fatalities.
Abstract of Toward site-specific and self-sufficient robotic fabrication on architectural scales
Contemporary construction techniques are slow, labor-intensive, dangerous, expensive, and constrained to primarily rectilinear forms, often resulting in homogenous structures built using materials sourced from centralized factories. To begin to address these issues, we present the Digital Construction Platform (DCP), an automated construction system capable of customized on-site fabrication of architectural-scale structures using real-time environmental data for process control. The system consists of a compound arm system composed of hydraulic and electric robotic arms carried on a tracked mobile platform. An additive manufacturing technique for constructing insulated formwork with gradient properties from dynamic mixing was developed and implemented with the DCP. As a case study, a 14.6-m-diameter, 3.7-m-tall open dome formwork structure was successfully additively manufactured on site with a fabrication time under 13.5 hours. The DCP system was characterized and evaluated in comparison with traditional construction techniques and existing large-scale digital construction research projects. Benefits in safety, quality, customization, speed, cost, and functionality were identified and reported upon. Early exploratory steps toward self-sufficiency—including photovoltaic charging and the sourcing and use of local materials—are discussed along with proposed future applications for autonomous construction.
“Hey Siri, what’s the name of that person I met yesterday?” (credit: Apple Inc.)
Instead of replacing humans with robots, artificial intelligence should be used more for augmenting human memory and other human weaknesses, Apple Inc. executive Tom Gruber suggested at the TED 2017 conference yesterday (April 25, 2017).
Thanks to the internet and our smartphones, much of our personal data is already being captured, notes Gruber, who was one of the inventors of the voice-controlled intelligent assistant Siri. Future AI memory enhancement could be especially life-changing for those with Alzheimer’s or dementia, he suggested.
Limitless
“Superintelligence should give us super-human abilities,” he said. “As machines get smarter, so do we. Artificial intelligence can enable partnerships where each human on the team is doing what they do best. Instead of asking how smart we can make our machines, let’s ask how smart our machines can make us.
“I can’t say when or what form factors are involved, but I think it is inevitable,” he said. “What if you could have a memory that was as good as computer memory and is about your life? What if you could remember every person you ever met? How to pronounce their name? Their family details? Their favorite sports? The last conversation you had with them?”
Gruber’s ideas mesh with a prediction by Ray Kurzweil: “Once we have achieved complete models of human intelligence, machines will be capable of combining the flexible, subtle human levels of pattern recognition with the natural advantages of machine intelligence, in speed, memory capacity, and, most importantly, the ability to quickly share knowledge and skills.”
But trusting machines also raises security concerns, Gruber warned. “We get to choose what is and is not recalled,” he said. “It’s absolutely essential that this be kept very secure.”