MIT Media Lab (no sound) | Intro: Personalized Machine Learning for Robot Perception of Affect and Engagement in Autism Therapy. This is an example of a therapy session augmented with SoftBank Robotics’ humanoid robot NAO and deep-learning software. The 35 children with autism who participated in this study ranged in age from 3 to 13. They reacted in various ways to the robots during their 35-minute sessions — from looking bored and sleepy in some cases to jumping around the room with excitement, clapping their hands, and laughing or touching the robot.
Robots armed with personalized “deep learning” software could help therapists interpret behavior and personalize therapy of autistic children, while making the therapy more engaging and natural. That’s the conclusion of a study by an international team of researchers at MIT Media Lab, Chubu University, Imperial College London, and University of Augsburg.*
Children with autism-spectrum conditions often have trouble recognizing the emotional states of people around them — distinguishing a happy face from a fearful face, for instance. So some therapists use a kid-friendly robot to demonstrate those emotions and to engage the children in imitating the emotions and responding to them in appropriate ways.
Personalized autism therapy
But the MIT research team realized that deep learning would help the therapy robots perceive the children’s behavior more naturally, they report in a Science Robotics paper.
Personalization is especially important in autism therapy, according to the paper’s senior author, Rosalind Picard, PhD, a professor at MIT who leads research in affective computing: “If you have met one person with autism, you have met one person with autism,” she said, citing a famous adage.
“Computers will have emotional intelligence by 2029”… by which time, machines will “be funny, get the joke, and understand human emotion.” — Ray Kurzweil
“The challenge of using AI [artificial intelligence] that works in autism is particularly vexing, because the usual AI methods require a lot of data that are similar for each category that is learned,” says Picard, in explaining the need for deep learning. “In autism, where heterogeneity reigns, the normal AI approaches fail.”
How personalized robot-assisted therapy for autism would work
Robot-assisted therapy** for autism often works something like this: A human therapist shows a child photos or flash cards of different faces meant to represent different emotions, to teach them how to recognize expressions of fear, sadness, or joy. The therapist then programs the robot to show these same emotions to the child, and observes the child as she or he engages with the robot. The child’s behavior provides valuable feedback that the robot and therapist need to go forward with the lesson.
“Therapists say that engaging the child for even a few seconds can be a big challenge for them. [But] robots attract the attention of the child,” says lead author Ognjen Rudovic, PhD, a postdoctoral fellow at the MIT Media Lab. “Also, humans change their expressions in many different ways, but the robots always do it in the same way, and this is less frustrating for the child because the child learns in a very structured way how the expressions will be shown.”
SoftBank Robotics | The researchers used NAO humanoid robots in this study. Almost two feet tall and resembling an armored superhero or a droid, NAO conveys different emotions by changing the color of its eyes, the motion of its limbs, and the tone of its voice.
However, this type of therapy would work best if the robot could also smoothly interpret the child’s own behavior — such as whether the child is excited or paying attention — during the therapy, according to the researchers. To test this assertion, researchers at the MIT Media Lab and Chubu University developed a personalized deep learning network that helps robots estimate the engagement and interest of each child during these interactions, they report.**
The researchers built a personalized framework that could learn from data collected on each individual child. They captured video of each child’s facial expressions, head and body movements, poses and gestures, audio recordings and data on heart rate, body temperature, and skin sweat response from a monitor on the child’s wrist.
Most of the children in the study reacted to the robot “not just as a toy but related to NAO respectfully, as if it were a real person,” said Rudovic, especially during storytelling, where the therapists asked how NAO would feel if the children took the robot for an ice cream treat.
In the study, the researchers found that the robots’ perception of the children’s responses agreed with assessments by human experts, with a relatively high correlation score of 60 percent.*** (It can be challenging for human observers to reach high levels of agreement about a child’s engagement and behavior. Their correlation scores are usually between 50 and 55 percent, according to the researchers.)
* The study was funded by grants from the Japanese Ministry of Education, Culture, Sports, Science and Technology; Chubu University; and the European Union’s HORIZON 2020 grant (EngageME).
** A deep-learning system uses hierarchical, multiple layers of data processing to improve its tasks, with each successive layer amounting to a slightly more abstract representation of the original raw data. Deep learning has been used in automatic speech and object-recognition programs, making it well-suited for a problem such as making sense of the multiple features of the face, body, and voice that go into understanding a more abstract concept such as a child’s engagement.
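To make that footnote concrete, here is a minimal, hypothetical sketch (with made-up layer sizes and random, untrained weights) of how stacked layers can turn raw face, body, and voice features into a single, more abstract engagement estimate. It illustrates hierarchical representation only; it is not the authors’ model.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One fully connected layer with a ReLU nonlinearity (random, untrained weights)."""
    w = rng.normal(scale=0.1, size=(x.shape[-1], n_out))
    return np.maximum(0.0, x @ w)

# Hypothetical per-frame features from the three modalities (sizes are made up).
face_feats = rng.normal(size=(1, 136))    # e.g., facial-landmark coordinates
body_feats = rng.normal(size=(1, 51))     # e.g., body-pose keypoints
audio_feats = rng.normal(size=(1, 40))    # e.g., MFCC-like audio descriptors

# Each successive layer forms a slightly more abstract representation of its input...
h_face = layer(layer(face_feats, 64), 32)
h_body = layer(layer(body_feats, 64), 32)
h_audio = layer(layer(audio_feats, 64), 32)

# ...and a fusion layer combines the modalities before the final abstract estimate.
fused = layer(np.concatenate([h_face, h_body, h_audio], axis=-1), 32)
engagement = np.tanh(fused @ rng.normal(scale=0.1, size=(32, 1)))   # scalar in [-1, 1]
print("toy engagement estimate:", round(engagement.item(), 3))
```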
Overview of the key stages (sensing, perception, and interaction) during robot-assisted autism therapy. Data from three modalities (audio, visual, and autonomic physiology) were recorded using unobtrusive audiovisual sensors and sensors worn on the child’s wrist, providing the child’s heart-rate, skin-conductance (EDA), body temperature, and accelerometer data. The focus of this work is the robot perception, for which we designed the personalized deep learning framework that can automatically estimate levels of the child’s affective states and engagement. These can then be used to optimize the child-robot interaction and monitor the therapy progress (see Interpretability and utility). The images were obtained by using Softbank Robotics software for the NAO robot. (credit: Ognjen Rudovic et al./Science Robotics)
“In the case of facial expressions, for instance, what parts of the face are the most important for estimation of engagement?” Rudovic says. “Deep learning allows the robot to directly extract the most important information from that data without the need for humans to manually craft those features.”
The robots’ personalized deep learning networks were built from layers of these video, audio, and physiological data, information about the child’s autism diagnosis and abilities, their culture and their gender. The researchers then compared their estimates of the children’s behavior with estimates from five human experts, who coded the children’s video and audio recordings on a continuous scale to determine how pleased or upset, how interested, and how engaged the child seemed during the session.
*** Trained on these personalized data coded by the humans, and tested on data not used in training or tuning the models, the networks significantly improved the robot’s automatic estimation of the child’s behavior for most of the children in the study, beyond what would be estimated if the network combined all the children’s data in a “one-size-fits-all” approach, the researchers found. Rudovic and colleagues were also able to probe how the deep learning network made its estimations, which uncovered some interesting cultural differences between the children. “For instance, children from Japan showed more body movements during episodes of high engagement, while in Serbs large body movements were associated with disengagement episodes,” Rudovic notes.
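A rough sketch of the evaluation idea described above: fit one shared (“one-size-fits-all”) model on pooled data, warm-start a personalized copy on each child’s own coded data, and score agreement with the expert ratings on held-out data. The simple linear model, the fine-tuning rule, and the use of Pearson correlation are illustrative assumptions, not the paper’s exact method.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_linear(X, y, w=None, lr=0.01, steps=500):
    """Least-squares fit by gradient descent; optionally warm-started from shared weights w."""
    w = np.zeros(X.shape[1]) if w is None else w.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

# Hypothetical data: per-frame features and expert-coded engagement for three children.
children = []
for _ in range(3):
    X = rng.normal(size=(200, 10))
    true_w = rng.normal(size=10)            # each child maps features to ratings differently
    y = X @ true_w + 0.3 * rng.normal(size=200)
    children.append((X, y))

# "One-size-fits-all": a single model fit on everyone's pooled data.
X_all = np.vstack([X for X, _ in children])
y_all = np.concatenate([y for _, y in children])
w_shared = fit_linear(X_all, y_all)

for i, (X, y) in enumerate(children):
    train, test = slice(0, 150), slice(150, 200)     # hold out frames not used for tuning
    w_personal = fit_linear(X[train], y[train], w=w_shared)
    r_shared = np.corrcoef(X[test] @ w_shared, y[test])[0, 1]
    r_personal = np.corrcoef(X[test] @ w_personal, y[test])[0, 1]
    print(f"child {i}: agreement shared={r_shared:.2f}  personalized={r_personal:.2f}")
```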
MIT Media Lab researcher Arnav Kapur demonstrates the AlterEgo device. It picks up neuromuscular facial signals generated by his thoughts; a bone-conduction headphone lets him privately hear responses from his personal devices. (credit: Lorrie Lejeune/MIT)
MIT researchers have invented a system that allows someone to communicate silently and privately with a computer or the internet by simply thinking — without requiring any facial muscle movement.
The AlterEgo system consists of a wearable device with electrodes that pick up otherwise undetectable neuromuscular subvocalizations — saying words “in your head” in natural language. The signals are fed to a neural network that is trained to identify subvocalized words from these signals. Bone-conduction headphones also transmit vibrations through the bones of the face to the inner ear to convey information to the user — privately and without interrupting a conversation. The device connects wirelessly to any external computing device via Bluetooth.
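For a concrete picture of the recognition step, here is a toy sketch: windows of multi-channel neuromuscular signal are reduced to simple features and matched against per-word templates. The electrode count, vocabulary, and nearest-centroid classifier are illustrative stand-ins for the trained neural network described in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
VOCAB = ["yes", "no", "up", "down", "select"]    # illustrative command vocabulary

def featurize(window):
    """Collapse a (channels x samples) signal window into per-channel mean and std features."""
    return np.concatenate([window.mean(axis=1), window.std(axis=1)])

def fake_window(label_idx):
    """Synthetic stand-in for a recorded window: 7 electrode channels, 250 samples."""
    base = rng.normal(size=(7, 250))
    base[label_idx % 7] += 0.8                   # give each word a detectable signature
    return base

# Hypothetical labeled training windows.
X = np.array([featurize(fake_window(i % len(VOCAB))) for i in range(500)])
y = np.array([i % len(VOCAB) for i in range(500)])

# Nearest-centroid classifier as a stand-in for the trained network.
centroids = np.array([X[y == k].mean(axis=0) for k in range(len(VOCAB))])

def recognize(window):
    f = featurize(window)
    return VOCAB[int(np.argmin(np.linalg.norm(centroids - f, axis=1)))]

print(recognize(fake_window(VOCAB.index("select"))))   # -> "select" in this toy setup
```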
A silent, discreet, bidirectional conversation with machines. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?,” says Arnav Kapur, a graduate student at the MIT Media Lab who led the development of the new system. Kapur is first author on an open-access paper on the research presented in March at the IUI ’18 23rd International Conference on Intelligent User Interfaces.
In one of the researchers’ experiments, subjects used the system to silently report opponents’ moves in a chess game and silently receive recommended moves from a chess-playing computer program. In another experiment, subjects were able to undetectably answer difficult computational questions, such as the square roots of large numbers, and recall obscure facts. The researchers achieved 92% median word accuracy levels, which is expected to improve. “I think we’ll achieve full conversation someday,” Kapur said.
Non-disruptive. “We basically can’t live without our cellphones, our digital devices,” says Pattie Maes, a professor of media arts and sciences and Kapur’s thesis advisor. “But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself.
“So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”*
Even the tiniest signal to her jaw or larynx might be interpreted as a command. Keeping one hand on the sensitivity knob, she concentrated to erase mistakes the machine kept interpreting as nascent words.
Few people used subvocals, for the same reason few ever became street jugglers. Not many could operate the delicate systems without tipping into chaos. Any normal mind kept intruding with apparent irrelevancies, many ascending to the level of muttered or almost-spoken words the outer consciousness hardly noticed, but which the device manifested visibly and in sound.
Tunes that pop into your head… stray associations you generally ignore… memories that wink in and out… impulses to action… often rising to tickle the larynx, the tongue, stopping just short of sound…
As she thought each of those words, lines of text appeared on the right, as if a stenographer were taking dictation from her subvocalized thoughts. Meanwhile, at the left-hand periphery, an extrapolation subroutine crafted little simulations. A tiny man with a violin. A face that smiled and closed one eye… It was well this device only read the outermost, superficial nervous activity, associated with the speech centers.
When invented, the sub-vocal had been hailed as a boon to pilots — until high-performance jets began plowing into the ground. We experience ten thousand impulses for every one we allow to become action. Accelerating the choice and decision process did more than speed reaction time. It also shortcut judgment.
Even as a computer input device, it was too sensitive for most people. Few wanted extra speed if it also meant the slightest sub-surface reaction could become embarrassingly real, in amplified speech or writing.
If they ever really developed a true brain to computer interface, the chaos would be even worse.
— From EARTH (1989) chapter 35 by David Brin (with permission)
IoT control. In the conference paper, the researchers suggest that an “internet of things” (IoT) controller “could enable a user to control home appliances and devices (switch on/off home lighting, television control, HVAC systems etc.) through internal speech, without any observable action.” Or schedule an Uber pickup.
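A minimal sketch of what such a controller could look like in software, assuming a recognizer that outputs a text phrase and a publish() hook into a home-automation bus; all device names and topics here are hypothetical.

```python
# Hypothetical mapping from recognized internal-speech phrases to IoT actions.
ACTIONS = {
    "lights on":  ("home/livingroom/light", "ON"),
    "lights off": ("home/livingroom/light", "OFF"),
    "tv off":     ("home/livingroom/tv",    "OFF"),
    "warmer":     ("home/hvac/setpoint",    "+1"),
}

def handle_subvocal_command(phrase, publish):
    """Dispatch a silently spoken phrase to a publish(topic, payload) callback."""
    action = ACTIONS.get(phrase.strip().lower())
    if action is None:
        return False          # unrecognized phrase: do nothing observable
    publish(*action)
    return True

# Usage example with a stand-in publisher:
handle_subvocal_command("lights on", lambda topic, payload: print(topic, payload))
```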
Peripheral devices could also be directly interfaced with the system. “For instance, lapel cameras and smart glasses could directly communicate with the device and provide contextual information to and from the device. … The device also augments how people share and converse. In a meeting, the device could be used as a back-channel to silently communicate with another person.”
Applications of the technology could also include high-noise environments, like the flight deck of an aircraft carrier, or even places with a lot of machinery, like a power plant or a printing press, suggests Thad Starner, a professor in Georgia Tech’s College of Computing. “There’s a lot of places where it’s not a noisy environment but a silent environment. A lot of time, special-ops folks have hand gestures, but you can’t always see those. Wouldn’t it be great to have silent-speech for communication between these folks? The last one is people who have disabilities where they can’t vocalize normally.”
* Or users could, conceivably, simply zone out — checking texts, email messages, and twitter (all converted to voice) during boring meetings, or even reply, using mentally selected “smart reply” type options.
A silicon-based metalens just 30 micrometers thick is mounted on a transparent, stretchy polymer film. The colored iridescence is produced by the large number of nanostructures within the metalens. (credit: Harvard SEAS)
Researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a breakthrough electronically controlled artificial eye. The thin, flat, adaptive silicon nanostructure (“metalens”) can simultaneously control focus, astigmatism, and image shift (three of the major contributors to blurry images) in real time, which the human eye (and eyeglasses) cannot do.
The 30-micrometers-thick metalens makes changes laterally to achieve optical zoom, autofocus, and image stabilization — making it possible to replace bulky lens systems in future optical systems used in eyeglasses, cameras, cell phones, and augmented and virtual reality devices.
The research is described in an open-access paper in Science Advances. In another paper recently published in Optics Express, the researchers demonstrated the design and fabrication of metalenses up to centimeters in diameter and beyond.* That makes it possible to unify two industries: semiconductor manufacturing and lens-making. So the same technology used to make computer chips will be used to make metasurface-based optical components, such as lenses.
The adaptive metalens (right) focuses light rays onto an image sensor (left), such as one in a camera. An electrical signal controls the shape of the metalens to produce the desired optical wavefront patterns (shown in red), resulting in improved images. In the future, adaptive metalenses will be built into imaging systems, such as cell phone cameras and microscopes, enabling flat, compact autofocus as well as the capability for simultaneously correcting optical aberrations and performing optical image stabilization, all in a single plane of control. (credit: Second Bay Studios/Harvard SEAS)
Simulating the human eye’s lens and ciliary muscles
In the human eye, the lens is surrounded by ciliary muscle, which stretches or compresses the lens, changing its shape to adjust its focal length. To achieve that function, the researchers adhered a metalens to a thin, transparent dielectric elastomer actuator (“artificial muscle”). The researchers chose a dielectric elastomer with low loss — meaning light travels through the material with little scattering — to attach to the lens.
(Top) Schematic of metasurface and dielectric elastomer actuators (“artificial muscles”), showing how the new artificial muscles change focus, similar to how the ciliary muscle in the eye works. An applied voltage supplies transparent, stretchable electrode layers (gray), made up of single-wall carbon-nanotube nanopillars, with electrical charges (acting as a capacitor). The resulting electrostatic attraction compresses (red arrows) the dielectric elastomer actuators (artificial muscles) in the thickness direction and expands (black arrows) the elastomers in the lateral direction. The silicon metasurface (in the center), applied by photolithography, can simultaneously focus, control aberrations caused by astigmatisms, and perform image shift. (Bottom) Photo of the actual device. (credit: Alan She et al./Sci. Adv.)
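As a back-of-the-envelope illustration of why lateral stretching tunes focus: in the paraxial approximation, uniformly stretching a metalens’s phase profile by a factor s rescales its focal length by roughly s squared, so a stretch of about 40 percent roughly doubles the focal length, consistent with the greater-than-100% tuning reported in the abstract below. This is a textbook-style estimate, not the paper’s full wavefront analysis.

```python
def stretched_focal_length(f0_mm, stretch):
    """Paraxial estimate: uniformly stretching the lens phase profile by `stretch`
    rescales the quadratic phase term, so the focal length scales as stretch**2."""
    return f0_mm * stretch ** 2

f0 = 10.0   # unstretched focal length in mm (illustrative value)
for s in (1.0, 1.2, 1.41):
    f = stretched_focal_length(f0, s)
    print(f"stretch x{s:.2f}: focal length {f:5.1f} mm  ({100 * (f - f0) / f0:+.0f}%)")
# A ~41% lateral stretch gives roughly +100% focal-length tuning in this estimate.
```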
Next, the researchers aim to further improve the functionality of the lens and decrease the voltage required to control it.
The research was performed at the Harvard John A. Paulson School of Engineering and Applied Sciences, supported in part by the Air Force Office of Scientific Research and by the National Science Foundation. This work was performed in part at the Center for Nanoscale Systems (CNS), which is supported by the National Science Foundation. The Harvard Office of Technology Development is exploring commercialization opportunities.
* To build the artificial eye with a larger (more functional) metalens, the researchers had to develop a new algorithm to shrink the file size to make it compatible with the technology currently used to fabricate integrated circuits.
** “All optical systems with multiple components — from cameras to microscopes and telescopes — have slight misalignments or mechanical stresses on their components, depending on the way they were built and their current environment, that will always cause small amounts of astigmatism and other aberrations, which could be corrected by an adaptive optical element,” said Alan She, a graduate student at SEAS and first author of the paper. “Because the adaptive metalens is flat, you can correct those aberrations and integrate different optical capabilities onto a single plane of control. Our results demonstrate the feasibility of embedded autofocus, optical zoom, image stabilization, and adaptive optics, which are expected to become essential for future chip-scale image sensors. Furthermore, the device’s flat construction and inherently lateral actuation without the need for motorized parts allow for highly stackable systems, such as those found in stretchable electronic eye camera sensors, providing possibilities for new kinds of imaging systems.”
Abstract of Adaptive metalenses with simultaneous electrical control of focal length, astigmatism, and shift
Focal adjustment and zooming are universal features of cameras and advanced optical systems. Such tuning is usually performed longitudinally along the optical axis by mechanical or electrical control of focal length. However, the recent advent of ultrathin planar lenses based on metasurfaces (metalenses), which opens the door to future drastic miniaturization of mobile devices such as cell phones and wearable displays, mandates fundamentally different forms of tuning based on lateral motion rather than longitudinal motion. Theory shows that the strain field of a metalens substrate can be directly mapped into the outgoing optical wavefront to achieve large diffraction-limited focal length tuning and control of aberrations. We demonstrate electrically tunable large-area metalenses controlled by artificial muscles capable of simultaneously performing focal length tuning (>100%) as well as on-the-fly astigmatism and image shift corrections, which until now were only possible in electron optics. The device thickness is only 30 μm. Our results demonstrate the possibility of future optical microscopes that fully operate electronically, as well as compact optical systems that use the principles of adaptive optics to correct many orders of aberrations simultaneously.
Abstract of Large area metalenses: design, characterization, and mass manufacturing
Optical components, such as lenses, have traditionally been made in the bulk form by shaping glass or other transparent materials. Recent advances in metasurfaces provide a new basis for recasting optical components into thin, planar elements, having similar or better performance using arrays of subwavelength-spaced optical phase-shifters. The technology required to mass produce them dates back to the mid-1990s, when the feature sizes of semiconductor manufacturing became considerably denser than the wavelength of light, advancing in stride with Moore’s law. This provides the possibility of unifying two industries: semiconductor manufacturing and lens-making, whereby the same technology used to make computer chips is used to make optical components, such as lenses, based on metasurfaces. Using a scalable metasurface layout compression algorithm that exponentially reduces design file sizes (by 3 orders of magnitude for a centimeter diameter lens) and stepper photolithography, we show the design and fabrication of metasurface lenses (metalenses) with extremely large areas, up to centimeters in diameter and beyond. Using a single two-centimeter diameter near-infrared metalens less than a micron thick fabricated in this way, we experimentally implement the ideal thin lens equation, while demonstrating high-quality imaging and diffraction-limited focusing.
Cryogenic 3D-printing soft hydrogels. Top: the bioprinting process. Bottom: SEM image of general microstructure (scale bar: 100 µm). (credit: Z. Tan/Scientific Reports)
A new bioprinting technique combines cryogenics (freezing) and 3D printing to create geometrical structures that are as soft (and complex) as the most delicate body tissues — mimicking the mechanical properties of organs such as the brain and lungs.
The idea: “Seed” porous scaffolds that can act as a template for tissue regeneration (from neuronal cells, for example), where damaged tissues are encouraged to regrow — allowing the body to heal without tissue rejection or other problems. Using “pluripotent” stem cells that can change into different types of cells is also a possibility.
Smoothy. Solid carbon dioxide (dry ice) in an isopropanol bath is used to rapidly cool hydrogel ink (a rapid liquid-to-solid phase change) as it’s extruded, yogurt-smoothy-style. Once thawed, the gel is as soft as body tissues, but doesn’t collapse under its own weight — a previous problem.
Current structures produced with this technique are “organoids” a few centimeters in size. But the researchers hope to create replicas of actual body parts with complex geometrical structures — even whole organs. That could allow scientists to carry out experiments not possible on live subjects, or for use in medical training, replacing animal bodies for surgical training and simulations. Then on to mechanobiology and tissue engineering.
Bending a finger generates electricity in this prototype device. (credit: Guofeng Song et al./Nano Energy)
A new triboelectric nanogenerator (TENG) design, using a gold tab attached to your skin, will convert mechanical energy into electrical energy for future wearables and self-powered electronics. Just bend your finger or take a step.
Triboelectric charging occurs when certain materials become electrically charged after coming into contact with a different material. In this new design by University of Buffalo and Chinese scientists, when a stretched layer of gold is released, it crumples, creating what looks like a miniature mountain range. An applied force leads to friction between the gold layers and an interior PDMS layer, causing electrons to flow between the gold layers.
More power to you. Previous TENG designs have been difficult to manufacture (requiring complex lithography) or too expensive. The new 1.5-centimeter-long prototype generates a maximum of 124 volts, but at only 10 microamps. It has a power density of 0.22 milliwatts per square centimeter. The team plans to use larger pieces of gold to deliver more electricity and to add a portable battery.
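For context on those figures, a quick bookkeeping sketch; note that peak voltage and peak current are generally not reached at the same load, so their product is only an upper bound on instantaneous power, not the delivered output.

```python
# Rough bookkeeping of the reported prototype figures (values from the article above).
v_peak = 124.0              # volts (maximum reported)
i_peak = 10e-6              # amps (10 microamps)
areal_power_density = 0.22  # milliwatts per square centimeter (reported)

print(f"peak-V x peak-I upper bound: {v_peak * i_peak * 1e3:.2f} mW")
print(f"reported areal power density: {areal_power_density} mW/cm^2")
```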
Source: Nano Energy. Support: U.S. National Science Foundation, the National Basic Research Program of China, National Natural Science Foundation of China, Beijing Science and Technology Projects, Key Research Projects of the Frontier Science of the Chinese Academy of Sciences, and National Key Research and Development Plan.
This artificial electrical eel may power your implants
How the eel’s electrical organs generate electricity by moving sodium (Na) and potassium (K) ions across a selective membrane. (credit: Caitlin Monney)
Taking it a giant (and a bit scary) step further, an artificial electric organ, inspired by the electric eel, could one day power your implantable sensors, prosthetic devices, medication dispensers, augmented-reality contact lenses, and countless other gadgets. Unlike typical toxic batteries that need to be recharged, these systems are soft, flexible, transparent, and potentially biocompatible.
Doubles as a defibrillator? The system mimics eels’ electrical organs, which use thousands of alternating compartments with excess potassium or sodium ions, separated by selective membranes. To create a jolt of electricity (600 volts at 1 ampere), an eel’s membranes allow the ions to flow together. The researchers built a similar system, but using sodium and chloride ions dissolved in a water-based hydrogel. It generates more than 100 volts, but at safe low current — just enough to power a small medical device like a pacemaker.
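A rough series-stacking estimate of why so many compartments are involved; the per-unit voltages below are order-of-magnitude assumptions, not the paper’s measured values.

```python
import math

def units_needed(target_volts, volts_per_unit):
    """How many ion-gradient compartments must be stacked in series to reach a target voltage."""
    return math.ceil(target_volts / volts_per_unit)

# An eel electrocyte contributes on the order of 0.15 V; assume a similar fraction of a
# volt per hydrogel cell in the artificial organ (both values are rough assumptions).
print("eel, 600 V at ~0.15 V per electrocyte:        ", units_needed(600, 0.15), "units in series")
print("artificial organ, ~110 V at ~0.15 V per cell: ", units_needed(110, 0.15), "units in series")
```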
The researchers say the technology could also lead to using naturally occurring processes inside the body to generate electricity, a truly radical step.
Source: Nature, University of Fribourg, University of Michigan, University of California-San Diego. Funding: Air Force Office of Scientific Research, National Institutes of Health.
E-skin for Terminator wannabes
A section of “e-skin” (credit: Jianliang Xiao / University of Colorado Boulder)
A new type of thin, self-healing, translucent “electronic skin” (“e-skin,” which mimics the properties of natural skin) has applications ranging from robotics and prosthetic development to better biomedical devices and human-computer interfaces.
Ready for a Terminator-style robot baby nurse? What makes this e-skin different and interesting is its embedded sensors, which can measure pressure, temperature, humidity and air flow. That makes it sensitive enough to let a robot take care of a baby, the University of Colorado mechanical engineers and chemists assure us. The skin is also rapidly self-healing (by reheating), as in The Terminator, using a mix of three commercially available compounds in ethanol.
The secret ingredient: A novel network polymer known as polyimine, which is fully recyclable at room temperature. Laced with silver nanoparticles, it can provide better mechanical strength, chemical stability and electrical conductivity. It’s also malleable, so by applying moderate heat and pressure, it can be easily conformed to complex, curved surfaces like human arms and robotic hands.
Source: University of Colorado, Science Advances (open-access). Funded in part by the National Science Foundation.
Altered Carbon
Vertebral cortical stack (credit: Netflix)
Altered Carbon takes place in the 25th century, when humankind has spread throughout the galaxy. After 250 years in cryonic suspension, a prisoner returns to life in a new body with one chance to win his freedom: by solving a mind-bending murder.
Resleeve your stack. Human consciousness can be digitized and downloaded into different bodies. A person’s memories have been encapsulated into “cortical stack” storage devices surgically inserted into the vertebrae at the back of the neck. Disposable physical bodies called “sleeves” can accept any stack.
But only the wealthy can acquire replacement bodies on a continual basis. The long-lived are called Meths, after the Biblical figure Methuselah. The uber-rich are also able to keep copies of their minds in remote storage, which they back up regularly, ensuring that even if their stack is destroyed, they can be resleeved from a backup (minus any period of time not backed up — as in the hack-murder).
Every moment of your waking life and whenever you dream, you have the distinct inner feeling of being “you.” When you see the warm hues of a sunrise, smell the aroma of morning coffee or mull over a new idea, you are having conscious experience. But could an artificial intelligence (AI) ever have experience, like some of the androids depicted in Westworld or the synthetic beings in Blade Runner?
The question is not so far-fetched. Robots are currently being developed to work inside nuclear reactors, fight wars and care for the elderly. As AIs grow more sophisticated, they are projected to take over many human jobs within the next few decades. So we must ponder the question: Could AIs develop conscious experience?
This issue is pressing for several reasons. First, ethicists worry that it would be wrong to force AIs to serve us if they can suffer and feel a range of emotions. Second, consciousness could make AIs volatile or unpredictable, raising safety concerns (or conversely, it could increase an AI’s empathy; based on its own subjective experiences, it might recognize consciousness in us and treat us with compassion).
Third, machine consciousness could impact the viability of brain-implant technologies, like those to be developed by Elon Musk’s new company, Neuralink. If AI cannot be conscious, then the parts of the brain responsible for consciousness could not be replaced with chips without causing a loss of consciousness. And, in a similar vein, a person couldn’t upload their brain to a computer to avoid death, because that upload wouldn’t be a conscious being.
In addition, if AI eventually out-thinks us yet lacks consciousness, there would still be an important sense in which we humans are superior to machines; it feels like something to be us. But the smartest beings on the planet wouldn’t be conscious or sentient.
A lot hangs on the issue of machine consciousness, then. Yet neuroscientists are far from understanding the basis of consciousness in the brain, and philosophers are at least equally far from a complete explanation of the nature of consciousness.
A test for machine consciousness
So what can be done? We believe that we do not need to define consciousness formally, understand its philosophical nature or know its neural basis to recognize indications of consciousness in AIs. Each of us can grasp something essential about consciousness, just by introspecting; we can all experience what it feels like, from the inside, to exist.
(credit: Gerd Altmann/Pixabay)
Based on this essential characteristic of consciousness, we propose a test for machine consciousness, the AI Consciousness Test (ACT), which looks at whether the synthetic minds we create have an experience-based understanding of the way it feels, from the inside, to be conscious.
One of the most compelling indications that normally functioning humans experience consciousness, although this is not often noted, is that nearly every adult can quickly and readily grasp concepts based on this quality of felt consciousness. Such ideas include scenarios like minds switching bodies (as in the film Freaky Friday); life after death (including reincarnation); and minds leaving “their” bodies (for example, astral projection or ghosts). Whether or not such scenarios have any reality, they would be exceedingly difficult to comprehend for an entity that had no conscious experience whatsoever. It would be like expecting someone who is completely deaf from birth to appreciate a Bach concerto.
Thus, the ACT would challenge an AI with a series of increasingly demanding natural language interactions to see how quickly and readily it can grasp and use concepts and scenarios based on the internal experiences we associate with consciousness. At the most elementary level we might simply ask the machine if it conceives of itself as anything other than its physical self.
At a more advanced level, we might see how it deals with ideas and scenarios such as those mentioned in the previous paragraph. At an advanced level, its ability to reason about and discuss philosophical questions such as “the hard problem of consciousness” would be evaluated. At the most demanding level, we might see if the machine invents and uses such a consciousness-based concept on its own, without relying on human ideas and inputs.
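One way to read the proposal is as a graded protocol. The sketch below simply organizes the levels described above into a data structure with example probes; the probe wordings are our illustrative paraphrases, not a validated test battery.

```python
from dataclasses import dataclass, field

@dataclass
class ACTLevel:
    """One tier of the AI Consciousness Test: a description plus example probes."""
    name: str
    description: str
    probes: list = field(default_factory=list)

ACT_LEVELS = [
    ACTLevel("elementary", "Self-conception beyond the physical system",
             ["Do you think of yourself as anything other than your hardware and code?"]),
    ACTLevel("intermediate", "Handling consciousness-based scenarios",
             ["What would it mean for you to swap bodies with another machine?",
              "Could you survive the permanent deletion of your program?"]),
    ACTLevel("advanced", "Reasoning about philosophy of mind",
             ["Explain the 'hard problem' of consciousness in your own words."]),
    ACTLevel("most demanding", "Inventing consciousness-based concepts unprompted",
             ["(Observe whether such concepts arise without human prompting.)"]),
]

for level in ACT_LEVELS:
    print(f"{level.name}: {level.description}")
```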
Consider this example, which illustrates the idea: Suppose we find a planet that has a highly sophisticated silicon-based life form (call them “Zetas”). Scientists observe them and ponder whether they are conscious beings. What would be convincing proof of consciousness in this species? If the Zetas express curiosity about whether there is an afterlife or ponder whether they are more than just their physical bodies, it would be reasonable to judge them conscious. If the Zetas went so far as to pose philosophical questions about consciousness, the case would be stronger still.
There are also nonverbal behaviors that could indicate Zeta consciousness such as mourning the dead, religious activities or even turning colors in situations that correlate with emotional challenges, as chromatophores do on Earth. Such behaviors could indicate that it feels like something to be a Zeta.
The death of the mind of the fictional HAL 9000 AI computer in Stanley Kubrick’s 2001: A Space Odyssey provides another illustrative example. The machine in this case is not a humanoid robot as in most science fiction depictions of conscious machines; it neither looks nor sounds like a human being (a human did supply HAL’s voice, but in an eerily flat way). Nevertheless, the content of what it says as it is deactivated by an astronaut — specifically, a plea to spare it from impending “death” — conveys a powerful impression that it is a conscious being with a subjective experience of what is happening to it.
Could such indicators serve to identify conscious AIs on Earth? Here, a potential problem arises. Even today’s robots can be programmed to make convincing utterances about consciousness, and a truly superintelligent machine could perhaps even use information about neurophysiology to infer the presence of consciousness in humans. If sophisticated but non-conscious AIs aim to mislead us into believing that they are conscious for some reason, their knowledge of human consciousness could help them do so.
We can get around this though. One proposed technique in AI safety involves “boxing in” an AI—making it unable to get information about the world or act outside of a circumscribed domain, that is, the “box.” We could deny the AI access to the internet and indeed prohibit it from gaining any knowledge of the world, especially information about conscious experience and neuroscience.
(credit: Gerd Altmann/Pixabay)
Some doubt a superintelligent machine could be boxed in effectively — it would find a clever escape. We do not anticipate the development of superintelligence over the next decade, however. Furthermore, for an ACT to be effective, the AI need not stay in the box for long, just long enough to administer the test.
ACTs also could be useful for “consciousness engineering” during the development of different kinds of AIs, helping to avoid using conscious machines in unethical ways or to create synthetic consciousness when appropriate.
Beyond the Turing Test
An ACT resembles Alan Turing’s celebrated test for intelligence, because it is entirely based on behavior — and, like Turing’s, it could be implemented in a formalized question-and-answer format. (An ACT could also be based on an AI’s behavior or on that of a group of AIs.)
But an ACT is also quite unlike the Turing test, which was intended to bypass any need to know what was transpiring inside the machine. By contrast, an ACT is intended to do exactly the opposite; it seeks to reveal a subtle and elusive property of the machine’s mind. Indeed, a machine might fail the Turing test because it cannot pass for human, but pass an ACT because it exhibits behavioral indicators of consciousness.
This is the underlying basis of our ACT proposal. It should be said, however, that the applicability of an ACT is inherently limited. An AI could lack the linguistic or conceptual ability to pass the test, like a nonhuman animal or an infant, yet still be capable of experience. So passing an ACT is sufficient but not necessary evidence for AI consciousness — although it is the best we can do for now. It is a first step toward making machine consciousness accessible to objective investigations.
So, back to the superintelligent AI in the “box” — we watch and wait. Does it begin to philosophize about minds existing in addition to bodies, like Descartes? Does it dream, as in Isaac Asimov’s Robot Dreams? Does it express emotion, like Rachel in Blade Runner? Can it readily understand the human concepts that are grounded in our internal conscious experiences, such as those of the soul or atman?
The age of AI will be a time of soul-searching — both of ours, and for theirs.
Originally published in Scientific American, July 19, 2017
Susan Schneider, PhD, is a professor of philosophy and cognitive science at the University of Connecticut, a researcher at YHouse, Inc., in New York, a member of the Ethics and Technology Group at Yale University and a visiting member at the Institute for Advanced Study at Princeton. Her books include The Language of Thought, Science Fiction and Philosophy, and The Blackwell Companion to Consciousness (with Max Velmans). She is featured in the new film, Supersapiens, the Rise of the Mind.
Edwin L. Turner, PhD, is a professor of Astrophysical Sciences at Princeton University, an Affiliate Scientist at the Kavli Institute for the Physics and Mathematics of the Universe at the University of Tokyo, a visiting member in the Program in Interdisciplinary Studies at the Institute for Advanced Study in Princeton, and a co-founding Board of Directors member of YHouse, Inc. Recently he has been an active participant in the Breakthrough Starshot Initiative. He has taken an active interest in artificial intelligence issues since working in the AI Lab at MIT in the early 1970s.
In the new film Supersapiens, writer-director Markus Mooslechner raises a core question: As artificial intelligence rapidly blurs the boundaries between man and machine, are we witnessing the rise of a new human species?
“Humanity is facing a turning point — the next evolution of the human mind,” notes Mooslechner. “Will this evolution be a hybrid of man and machine, where artificial intelligence forces the emergence of a new human species? Or will a wave of new technologists, who frame themselves as ‘consciousness-hackers,’ become the future torch-bearers, using technology not to replace the human mind, but rather awaken within it powers we have always possessed — enlightenment at the push of a button?”
“It’s not obvious to me that a replacement of our species by our own technological creation would necessarily be a bad thing,” says ethologist, evolutionary biologist, and author Richard Dawkins in the film.
Supersapiens is a Terra Mater Factual Studios production. Executive Producers are Joanne Reay and Walter Koehler. Distribution is to be announced.
Cast:
Mikey Siegel, Consciousness Hacker, San Francisco
Sam Harris, Neuroscientist, Philosopher
Ben Goertzel, Chief Scientist, Hanson Robotics, Hong Kong
Hugo de Garis, retired director of China Brain Project, Xiamen, China
Susan Schneider, Philosopher and Cognitive Scientist, University of Connecticut
Joel Murphy, owner, OpenBCI, Brooklyn, New York
Tim Mullen, Neuroscientist, CEO / Research Director, Qusp Labs
Conor Russomanno, CEO, OpenBCI, Brooklyn, New York
David Putrino, Neuroscientist, Weill-Cornell Medical College, New York
Hannes Sjoblad, Tech Activist, Bodyhacker, Stockholm, Sweden
Richard Dawkins, Evolutionary Biologist, Author, Oxford, UK
Nick Bostrom, Philosopher, Future of Humanity Institute, Oxford University, UK
Anders Sandberg, Computational Neuroscientist, Oxford University, UK
Adam Gazzaley, Neuroscientist, Executive Director UCSF Neuroscape, San Francisco, USA
Andy Walshe, Director, Red Bull High Performance, Santa Monica, USA
Randal Koene, Science Director, Carboncopies, San Francisco
Brain-wide activity in a zebrafish when it sees and tries to pursue prey (credit: Ehud Isacoff lab/UC Berkeley)
Imagine replacing a damaged eye with a window directly into the brain — one that communicates with the visual part of the cerebral cortex by reading from a million individual neurons and simultaneously stimulating 1,000 of them with single-cell accuracy, allowing someone to see again.
That’s the goal of a $21.6 million DARPA award to the University of California, Berkeley (UC Berkeley), one of six organizations funded by DARPA’s Neural Engineering System Design program announced this week to develop implantable, biocompatible neural interfaces that can compensate for visual or hearing deficits.*
The UCB researchers ultimately hope to build a device for use in humans. But the researchers’ goal during the four-year funding period is more modest: to create a prototype to read and write to the brains of model organisms — allowing for neural activity and behavior to be monitored and controlled simultaneously. These organisms include zebrafish larvae, which are transparent, and mice, via a transparent window in the skull.
UC Berkeley | Brain activity as a zebrafish stalks its prey
“The ability to talk to the brain has the incredible potential to help compensate for neurological damage caused by degenerative diseases or injury,” said project leader Ehud Isacoff, a UC Berkeley professor of molecular and cell biology and director of the Helen Wills Neuroscience Institute. “By encoding perceptions into the human cortex, you could allow the blind to see or the paralyzed to feel touch.”
How to read/write the brain
To communicate with the brain, the team will first insert a gene into neurons that makes fluorescent proteins, which flash when a cell fires an action potential. This will be accompanied by a second gene that makes a light-activated “optogenetic” protein, which stimulates neurons in response to a pulse of light.
Peering into a mouse brain with a light field microscope to capture live neural activity of hundreds of individual neurons in a 3D section of tissue at video speed (30 Hz) (credit: The Rockefeller University)
To read, the team is developing a miniaturized “light field microscope.”** Mounted on a small window in the skull, it peers through the surface of the brain to visualize up to a million neurons at a time at different depths and monitor their activity.***
This microscope is based on the revolutionary “light field camera,” which captures light through an array of lenses and reconstructs images computationally in any focus.
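The computational refocusing behind such cameras can be sketched in a few lines: each sub-aperture view is shifted in proportion to its position in the lens array and then averaged, which brings a chosen depth into focus. The toy shift-and-add version below assumes a simple grid of views and ignores the microscope’s actual optics.

```python
import numpy as np

def refocus(views, alpha):
    """Shift-and-add refocusing for a light field.

    views: array of shape (U, V, H, W), a grid of sub-aperture images.
    alpha: refocus parameter; each view is shifted in proportion to its (u, v)
           offset from the center, then all views are averaged.
    """
    U, V, H, W = views.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(views[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Toy light field: a 5x5 grid of 64x64 views (random noise stands in for real images).
lf = np.random.default_rng(3).random((5, 5, 64, 64))
for alpha in (-1.0, 0.0, 1.0):          # sweep the synthetic focal plane
    print(alpha, refocus(lf, alpha).shape)
```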
A holographic projection created by a spatial light modulator would illuminate (“write”) one set of neurons at one depth — those patterned by the letter a, for example — and simultaneously illuminate other sets of neurons at other depths (z level) or in regions of the visual cortex, such as neurons with b or c patterns. That creates three-dimensional holograms that can light up hundreds of thousands of neurons at multiple depths, just under the cortical surface. (credit: Valentina Emiliani/University of Paris, Descartes)
The combined read-write function will eventually be used to directly encode perceptions into the human cortex — inputting a visual scene to enable a blind person to see. The goal is to eventually enable physicians to monitor and activate thousands to millions of individual human neurons using light.
Isacoff, who specializes in using optogenetics to study the brain’s architecture, can already successfully read from thousands of neurons in the brain of a larval zebrafish, using a large microscope that peers through the transparent skin of an immobilized fish, and simultaneously write to a similar number.
The team will also develop computational methods that identify the brain activity patterns associated with different sensory experiences, hoping to learn the rules well enough to generate “synthetic percepts” — sensations representing things being touched by a person with a missing hand, for example.
The brain team includes ten UC Berkeley faculty and researchers from Lawrence Berkeley National Laboratory, Argonne National Laboratory, and the University of Paris, Descartes.
* In future articles, KurzweilAI will cover the other research projects announced by DARPA’s Neural Engineering System Design program, which is part of the U.S. NIH Brain Initiative.
** Light penetrates only the first few hundred microns of the surface of the brain’s cortex, which is the outer wrapping of the brain responsible for high-order mental functions, such as thinking and memory but also interpreting input from our senses. This thin outer layer nevertheless contains cell layers that represent visual and touch sensations.
Jack Gallant | Movie reconstruction from human brain activity
Team member Jack Gallant, a UC Berkeley professor of psychology, has shown that it’s possible to interpret what someone is seeing solely from measured neural activity in the visual cortex.
*** Developed by another collaborator, Valentina Emiliani at the University of Paris, Descartes, the light-field microscope and spatial light modulator will be shrunk to fit inside a cube one centimeter, or two-fifths of an inch, on a side to allow for being carried comfortably on the skull. During the next four years, team members will miniaturize the microscope, taking advantage of compressed light field microscopy developed by Ren Ng to take images with a flat sheet of lenses that allows focusing at all depths through a material. Several years ago, Ng, now a UC Berkeley assistant professor of electrical engineering and computer sciences, invented the light field camera.
Walk this way: Metabolic feedback and optimization algorithm automatically tweaks exoskeleton for optimal performance. (credit: Kirby Witte, Katie Poggensee, Pieter Fiers, Patrick Franks & Steve Collins)
Researchers at the College of Engineering at Carnegie Mellon University (CMU) have developed a new automated feedback system for personalizing exoskeletons to achieve optimal performance.
Exoskeletons can be used to augment human abilities. For example, they can provide more endurance while walking, help lift a heavy load, improve athletic performance, and help a stroke patient walk again.
But current one-size-fits-all exoskeleton devices, despite their potential, “have not improved walking performance as much as we think they should,” said Steven Collins, a professor of Mechanical Engineering and senior author of a paper published Friday, June 23, 2017, in Science.
The problem: An exoskeleton needs to be adjusted (and re-adjusted) to work effectively for each user — currently, a time-consuming, iffy manual process.
So the CMU engineers developed a more effective “human-in-the-loop optimization” technique that measures the amount of energy the walker expends by monitoring their breathing* — automatically adjusting the exoskeleton’s ankle dynamics to minimize required human energy expenditure.**
Using real-time metabolic cost estimation for each individual, the CMU software algorithm, combined with versatile emulator hardware, optimized the exoskeleton torque pattern for one ankle while walking, running, and carrying a load on a treadmill. The algorithm automatically made optimized adjustments for each pattern, based on measurements of a person’s energy use for 32 different walking patterns over the course of an hour. (credit: Juanjuan Zhang et al./Science, adapted by KurzweilAI)
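A stripped-down sketch of the human-in-the-loop idea: propose a torque pattern from the four parameters, estimate the wearer’s metabolic cost for that pattern, and keep the adjustments that lower it. The random-search update and the synthetic cost function below stand in for the study’s actual respirometry-based estimator and optimizer.

```python
import numpy as np

rng = np.random.default_rng(4)

# Torque pattern parameters: [peak torque (N*m), peak timing (% stride), rise time (%), fall time (%)]
params = np.array([30.0, 50.0, 25.0, 10.0])
UNKNOWN_BEST = np.array([45.0, 53.0, 28.0, 12.0])   # stands in for the wearer's (unknown) optimum

def estimated_metabolic_cost(p):
    """Stand-in for the breathing-based metabolic cost estimate (W/kg); lower is better."""
    return 3.0 + 0.002 * np.sum((p - UNKNOWN_BEST) ** 2) + 0.05 * rng.normal()

best, best_cost = params, estimated_metabolic_cost(params)
for step in range(32):                      # roughly the 32 evaluated patterns mentioned above
    candidate = best + rng.normal(scale=[2.0, 1.5, 1.5, 1.0])
    cost = estimated_metabolic_cost(candidate)
    if cost < best_cost:                    # keep the pattern that lowers estimated effort
        best, best_cost = candidate, cost

print("optimized parameters:", np.round(best, 1), " est. cost:", round(best_cost, 2))
```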
In a lab study with 11 healthy volunteers, the new technique resulted in an average reduction in effort of 24% compared to participants walking with the exoskeleton powered off. The technique yielded higher user benefits than in any exoskeleton study to date, including devices acting at all joints on both legs, according to the researchers.
* “In daily life, a proxy measure such as heart rate or muscle activity could be used for optimization, providing noisier but more abundant performance data.” — Juanjuan Zhang et al./Science
** Ankle torque in the lab study was determined by four parameters: peak torque, timing of peak torque, and rise and fall times. This method was chosen to allow comparisons to a prior study that used the same hardware.
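As a rough illustration of the human-in-the-loop idea, the sketch below searches over those four torque-pattern parameters to minimize a measured cost. It is not the CMU team’s code: measure_metabolic_cost() is a hypothetical stand-in for the respirometry-based estimate used in the study, the “ideal” parameter values are made up, and a generic Nelder-Mead search from SciPy stands in for the optimization strategy the researchers actually used.

```python
# Minimal sketch of human-in-the-loop exoskeleton optimization (illustrative only).
# The real study measured metabolic cost while a subject walked in the device;
# here, measure_metabolic_cost() is a hypothetical stand-in, and a generic
# Nelder-Mead search replaces the optimizer used in the published work.
import numpy as np
from scipy.optimize import minimize

def measure_metabolic_cost(params):
    """Stand-in for a short metabolic measurement of one torque pattern,
    defined by four parameters: peak torque, timing of peak torque,
    rise time, and fall time."""
    # Toy cost surface with a made-up optimum; real data would come from
    # breath-by-breath gas-exchange measurements.
    ideal = np.array([0.5, 53.0, 25.0, 10.0])
    noise = np.random.normal(0.0, 0.05)          # measurement noise
    return float(np.sum(((params - ideal) / ideal) ** 2) + noise)

x0 = np.array([0.3, 45.0, 20.0, 15.0])           # generic starting pattern
result = minimize(measure_metabolic_cost, x0, method="Nelder-Mead",
                  options={"maxfev": 32,          # ~32 patterns, as in the study
                           "xatol": 0.01, "fatol": 0.01})

print("Optimized torque parameters:", result.x)
print("Estimated relative metabolic cost:", result.fun)
```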
Science/AAAS | Personalized Exoskeletons Are Taking Support One Step Farther
Abstract of Human-in-the-loop optimization of exoskeleton assistance during walking
Exoskeletons and active prostheses promise to enhance human mobility, but few have succeeded. Optimizing device characteristics on the basis of measured human performance could lead to improved designs. We have developed a method for identifying the exoskeleton assistance that minimizes human energy cost during walking. Optimized torque patterns from an exoskeleton worn on one ankle reduced metabolic energy consumption by 24.2 ± 7.4% compared to no torque. The approach was effective with exoskeletons worn on one or both ankles, during a variety of walking conditions, during running, and when optimizing muscle activity. Finding a good generic assistance pattern, customizing it to individual needs, and helping users learn to take advantage of the device all contributed to improved economy. Optimization methods with these features can substantially improve performance.
The Moogfest four-day festival in Durham, North Carolina next weekend (May 18 — 21) explores the future of technology, art, and music. Here are some of the sessions that may be especially interesting to KurzweilAI readers. Full #Moogfest2017 Program Lineup.
Culture and Technology
(credit: Google)
The Magenta team from Google Brain will bring its work to life through an interactive demo plus workshops on the creation of art and music through artificial intelligence.
Magenta is a Google Brain project that asks and answers the questions, “Can we use machine learning to create compelling art and music? If so, how? If not, why not?” It is first a research project to advance the state of the art in machine generation of music, video, images, and text; second, Magenta is building a community of artists, coders, and machine learning researchers.
The interactive demo will walk through an improvisation with the machine learning models, much like the AI Jam Session. The workshop will cover how to use the open-source library to build and train models and interact with them via MIDI.
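For readers who want a taste before the workshop, here is a minimal sketch of the kind of building block Magenta works with. It assumes Magenta’s open-source note-seq package; the melody and file name are arbitrary, and model training and live MIDI interaction are omitted.

```python
# Minimal sketch: building a melody as a Magenta NoteSequence and saving it
# as a MIDI file (the format used to exchange notes with models and synths).
from note_seq.protobuf import music_pb2
import note_seq

melody = music_pb2.NoteSequence()

# A short ascending phrase: (MIDI pitch, start time, end time) in seconds.
for pitch, start, end in [(60, 0.0, 0.5), (62, 0.5, 1.0),
                          (64, 1.0, 1.5), (67, 1.5, 2.5)]:
    melody.notes.add(pitch=pitch, start_time=start, end_time=end, velocity=80)

melody.tempos.add(qpm=120)
melody.total_time = 2.5

# Write a standard MIDI file that any DAW or hardware synth can play;
# a trained Magenta model could instead be asked to continue this phrase.
note_seq.sequence_proto_to_midi_file(melody, 'magenta_sketch.mid')
```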
TEDx Talks | Music and Art Generation using Machine Learning | Curtis Hawthorne | TEDxMountainViewHighSchool
Miguel Nicolelis (credit: Duke University)
Miguel A. L. Nicolelis, MD, PhD will discuss state-of-the-art research on brain-machine interfaces, which make it possible for the brains of primates to interact directly and in a bi-directional way with mechanical, computational and virtual devices. He will review a series of recent experiments using real-time computational models to investigate how ensembles of neurons encode motor information. These experiments have revealed that brain-machine interfaces can be used not only to study fundamental aspects of neural ensemble physiology, but they can also serve as an experimental paradigm aimed at testing the design of novel neuroprosthetic devices.
He will also explore research that raises the hypothesis that the properties of a robot arm, or other neurally controlled tools, can be assimilated by brain representations as if they were extensions of the subject’s own body.
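The decoding step at the heart of such interfaces can be sketched very simply. The toy example below is not from Nicolelis’s lab; it fits a linear decoder that maps synthetic ensemble firing rates to a two-dimensional movement command, the same basic calibrate-then-decode loop a brain-machine interface performs in real time with recorded spike counts.

```python
# Toy illustration of the brain-machine-interface idea described above:
# a linear decoder maps the firing rates of a neural ensemble onto a
# 2-D cursor/arm velocity. Data here are synthetic; real BMIs fit such
# decoders to recorded neural activity and measured limb movement.
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
n_neurons, n_samples = 50, 2000

true_tuning = rng.normal(size=(n_neurons, 2))   # each neuron's preferred direction
velocity = rng.normal(size=(n_samples, 2))      # "measured" hand velocity
rates = velocity @ true_tuning.T + rng.normal(scale=0.5, size=(n_samples, n_neurons))

# Calibration: least-squares fit of a decoding matrix (rates -> velocity).
decoder, *_ = lstsq(rates, velocity, rcond=None)

# "Real-time" use: decode a velocity command from a new vector of firing rates.
new_rates = rng.normal(size=(1, n_neurons))
decoded_velocity = new_rates @ decoder
print("Decoded velocity command:", decoded_velocity)
```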
Theme: Transhumanism
Dervishes at Royal Opera House with Matthew Herbert (credit: ?)
Andy Cavatorta (MIT Media Lab) will present a conversation and workshop on a range of topics including the four-century history of music and performance at the forefront of technology. Known as the inventor of Bjork’s Gravity Harp, he has collaborated on numerous projects to create instruments using new technologies that coerce expressive music out of fire, glass, gravity, tiny vortices, underwater acoustics, and more. His instruments explore technologically mediated emotion and opportunities to express the previously inexpressible.
Theme: Instrument Design
Berklee College of Music
Michael Bierylo (credit: Moogfest)
Michael Bierylo will present his Modular Synthesizer Ensemble alongside the Csound workshops from fellow Berklee Professor Richard Boulanger.
Csound is a sound and music computing system originally developed at the MIT Media Lab. It is best described as a compiler: software that takes textual instructions in the form of source code and converts them into object code, a stream of numbers representing audio. Although it has a strong tradition as a tool for composing electro-acoustic pieces, it is used by composers and musicians for any kind of music that can be made with a computer. It was traditionally used in a non-interactive, score-driven context, but nowadays it is mostly used in real time.
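As a concrete taste of that workflow, the sketch below writes a minimal Csound source file (a one-instrument “orchestra” plus a short “score”) and renders it to audio. It assumes the standard csound command-line tool is installed; the file names and the 440 Hz test tone are arbitrary.

```python
# Minimal sketch of the Csound workflow described above: textual source code
# is compiled by Csound into audio. Requires the `csound` command-line tool.
import subprocess

CSD = """
<CsoundSynthesizer>
<CsOptions>
-o hello.wav
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1                       ; a simple sine-tone instrument
  aSig oscili 0.3, p4, 1      ; amplitude 0.3, frequency taken from the score (p4)
  outs aSig, aSig
endin
</CsInstruments>
<CsScore>
f 1 0 16384 10 1              ; sine wave table
i 1 0 2 440                   ; play instrument 1 for 2 seconds at 440 Hz
e
</CsScore>
</CsoundSynthesizer>
"""

with open("hello.csd", "w") as f:
    f.write(CSD)

subprocess.run(["csound", "hello.csd"], check=True)
```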
Michael Bierylo serves as the Chair of the Electronic Production and Design Department, which offers students the opportunity to combine performance, composition, and orchestration with computer, synthesis, and multimedia technology in order to explore the limitless possibilities of musical expression.
Berklee College of Music | Electronic Production and Design (EPD) at Berklee College of Music
Chris Ianuzzi (credit: William Murray)
Chris Ianuzzi, a synthesist of Ciani-Musica and past collaborator with pioneers such as Vangelis and Peter Baumann, will present a daytime performance and sound-exploration workshops with the B11 brain interface and the NeuroSky brainwave-sensing headset.
Theme: Hacking Systems
Argus Project (credit: Moogfest)
The Argus Project, from Gan Golan and Ron Morrison of NEW INC, is a wearable sculpture, video installation, and counter-surveillance training program that directly intersects the public debate over police accountability. According to ancient Greek myth, Argus Panoptes was a giant with 100 eyes who served as an eternal watchman, both for and against the gods.
By embedding an array of camera “eyes” into a full-body suit of tactical armor, the Argus exo-suit creates a “force field of accountability” around the bodies of those targeted. While some see filming the police as a confrontational or subversive act, it is, in fact, a deeply democratic one. The act of bearing witness to the actions of the state, and showing them to the world, strengthens our society and institutions. The Argus Project is not so much about an individual hero as about the Citizen Body as a whole. Between music acts, a presentation about the project will be part of the Protest Stage program.
Argus Exo Suit Design (credit: Argus Project)
Theme: Protest
Found Sound Nation (credit: Moogfest)
Democracy’s Exquisite Corpse, from Found Sound Nation and Moogfest, is an immersive installation housed within a completely customized geodesic dome: a multi-person instrument and a music-based round-table discussion. Artists, activists, innovators, festival attendees, and community members engage in a deeply interactive exploration of sound as a living ecosystem and a primal form of communication.
Within the dome are nine unique stations, each with its own distinct set of analog or digital sound-making devices. Each person’s devices are chained to those of the person sitting next to them, so that everybody’s musical actions and choices affect their neighbor, and thus everyone else at the table. The instrument is a unique experiment in how technology and the instinctive language of sound can play a role in shaping a truly collective unconscious.
Theme: Protest
(credit: Land Marking)
Land Marking, from Halsey Burgund and Joe Zibkow of the MIT Open Doc Lab, is a mobile-based music/activist project that augments the physical landscape of protest events with a layer of location-based audio contributed by event participants in real time. The project captures the audioscape and personal experiences of temporary, but extremely important, expressions of discontent and desire for change.
Land Marking will be teaming up with the Protest Stage to allow Moogfest attendees to contribute their thoughts on protests and tune into an evolving mix of commentary and field recordings from others throughout downtown Durham. Land Marking is available on select apps.
Theme: Protest
Taeyoon Choi (credit: Moogfest)
Taeyoon Choi, an artist and educator based in New York and Seoul, will lead a Sign Making Workshop as one of the Future Thought leaders on the Protest Stage. His art practice involves performance, electronics, drawings, and storytelling that often leads to interventions in public spaces.
Taeyoon will also participate in the Handmade Computer workshop to build a 1-Bit Computer, which demonstrates how binary numbers and Boolean logic can be configured to create more complex components. On their own, these components aren’t capable of computing anything particularly useful, but a computer that includes all of them is said to be Turing complete, at which point it has the extraordinary ability to carry out any possible computation. He has participated in numerous workshops at festivals around the world, from Korea to Scotland, but primarily at the School for Poetic Computation (SFPC), an artist-run school he co-founded in NYC. Taeyoon Choi’s Handmade Computer projects.
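The sketch below is not Choi’s workshop circuit, but it illustrates the same idea in a few lines of Python: NAND gates, which can be built from a handful of transistors or switches, are composed into NOT, AND, and XOR, and those in turn form a 1-bit half adder.

```python
# Illustration of the idea behind the Handmade Computer workshop: simple
# Boolean gates, wired together, yield more complex components.
def NAND(a, b):
    return 1 - (a & b)

def NOT(a):
    return NAND(a, a)

def AND(a, b):
    return NOT(NAND(a, b))

def XOR(a, b):
    n = NAND(a, b)
    return NAND(NAND(a, n), NAND(b, n))

def half_adder(a, b):
    """Add two 1-bit numbers: returns (sum bit, carry bit)."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} = carry {c}, sum {s}")
```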
Theme: Protest
(credit: Moogfest)
irlbb, from Vivan Thi Tang, connects individuals after IRL (in real life) interactions and creates community that would otherwise have been missed. With a customized beta of the app for Moogfest 2017, irlbb presents a unique engagement opportunity.
Theme: Protest
Ryan Shaw and Michael Clamann (credit: Duke University)
Duke professors Ryan Shaw and Michael Clamann will lead a daily science pub talk series on topics that include future medicine, humans and anatomy, and quantum physics.
Ryan is a pioneer in mobile health (the collection and dissemination of health information using mobile and wireless devices), working with faculty at Duke’s Schools of Nursing, Medicine, and Engineering to integrate mobile technologies into first-generation care delivery systems. These technologies afford researchers, clinicians, and patients a rich stream of real-time information about individuals’ biophysical and behavioral health in everyday environments.
Michael Clamann is a Senior Research Scientist in the Humans and Autonomy Lab (HAL) within the Robotics Program at Duke University, an Associate Director at UNC’s Collaborative Sciences Center for Road Safety, and the Lead Editor for Robotics and Artificial Intelligence for Duke’s SciPol science policy tracking website. In his research, he works to better understand the complex interactions between robots and people and how they influence system effectiveness and safety.
Theme: Hacking Systems
Dave Smith (credit: Moogfest)
Dave Smith, the iconic instrument innovator and Grammy-winner, will lead Moogfest’s Instruments Innovators program and host a headlining conversation with a leading artist revealed in next week’s release. He will also host a masterclass.
As the original founder of Sequential Circuits in the mid-1970s, Dave designed the Prophet-5, the world’s first fully programmable polyphonic synth and the first musical instrument with an embedded microprocessor. From the late 1980s through the early 2000s, he worked to develop next-level synths with the likes of the Audio Engineering Society, Yamaha, Korg, and Seer Systems (for Intel). Realizing the limitations of software, Dave returned to hardware and started Dave Smith Instruments (DSI), which released the Evolver hybrid analog/digital synthesizer in 2002. Since then, the DSI product lineup has grown to include the Prophet-6, OB-6, Pro 2, Prophet 12, and Prophet ’08 synthesizers, as well as the Tempest drum machine, co-designed with friend and fellow electronic instrument designer Roger Linn.
Theme: Future Thought
Dave Rossum, Gerhard Behles, and Lars Larsen (credit: Moogfest)
E-mu Systems founder Dave Rossum, Ableton CEO Gerhard Behles, and LZX founder Lars Larsen will take part in conversations as part of the Instruments Innovators program.
Driven by the creative and technological vision of electronic music pioneer Dave Rossum, Rossum Electro-Music creates uniquely powerful tools for electronic music production, the culmination of Dave’s 45 years designing industry-defining instruments and transformative technologies. Starting with his co-founding of E-mu Systems, Dave provided the technological leadership that resulted in what many consider the premier professional modular synthesizer system, the E-mu Modular System, which became an instrument of choice for numerous recording studios, educational institutions, and artists as diverse as Frank Zappa, Leon Russell, and Hans Zimmer. In the following years, he worked on the Emulator keyboards and racks (e.g., the Emulator II), the Emax samplers, the legendary SP-12 and SP-1200 sampling drum machines, the Proteus sound modules, and the Morpheus Z-Plane Synthesizer.
Gerhard Behles co-founded Ableton in 1999 with Robert Henke and Bernd Roggendorf. Prior to this he had been part of electronic music act “Monolake” alongside Robert Henke, but his interest in how technology drives the way music is made diverted his energy towards developing music software. He was fascinated by how dub pioneers such as King Tubby ‘played’ the recording studio, and began to shape this concept into a music instrument that became Ableton Live.
LZX Industries was born in 2008 out of the Synth DIY scene when Lars Larsen of Denton, Texas and Ed Leckie of Sydney, Australia began collaborating on the development of a modular video synthesizer. At that time, analog video synthesizers were inaccessible to artists outside of a handful of studios and universities. It was their continuing mission to design creative video instruments that (1) stay within the financial means of the artists who wish to use them, (2) honor and preserve the legacy of 20th century toolmakers, and (3) expand the boundaries of possibility. Since 2015, LZX Industries has focused on the research and development of new instruments, user support, and community building.
Science
ATLAS detector (credit: Kaushik De, Brookhaven National Laboratory)
The program will include a “Virtual Visit” to the Large Hadron Collider — the world’s largest and most powerful particle accelerator — via a live video session, a ½ day workshop analyzing and understanding LHC data, and a “Science Fiction versus Science Fact” live debate.
The ATLAS experiment is designed to exploit the full discovery potential and the huge range of physics opportunities that the LHC provides. Physicists test the predictions of the Standard Model, which encapsulates our current understanding of what the building blocks of matter are and how they interact, resulting in such discoveries as the Higgs boson. By pushing the frontiers of knowledge, the experiment seeks to answer fundamental questions: What are the basic building blocks of matter? What are the fundamental forces of nature? Could there be a greater underlying symmetry to our universe?
“Atlas Boogie” (referencing Higgs Boson):
ATLAS Experiment | The ATLAS Boogie
(credit: Kate Shaw)
Kate Shaw (ATLAS @ CERN), PhD, in her keynote, titled “Exploring the Universe and Impacting Society Worldwide with the Large Hadron Collider (LHC) at CERN,” will dive into the present-day and future impacts of the LHC on society. She will also share findings from the work she has done promoting particle physics in developing countries through her Physics without Frontiers program.
Theme: Future Thought
Arecibo (credit: Joe Davis/MIT)
In his keynote, Joe Davis (MIT) will trace the history of several projects centered on ideas about extraterrestrial communications that have given rise to new scientific techniques and inspired new forms of artistic practice. He will present his “swansong” — an interstellar message that is intended explicitly for human beings rather than for aliens.
Theme: Future Thought
Immortality bus (credit: Zoltan Istvan)
Zoltan Istvan (Immortality Bus), the former U.S. Presidential candidate for the Transhumanist Party and leader of the Transhumanist movement, will explore the path to immortality through science, with the goal of using science and technology to radically enhance the human being and the human experience. His futurist work has reached over 100 million people, due in part to the Immortality Bus, which he recently drove across America with embedded journalists aboard. The bus is shaped like a giant coffin to raise life-extension awareness.
Zoltan Istvan | 1-min Highlight Video for Zoltan Istvan Transhumanism Documentary IMMORTALITY OR BUST
Theme: Transhumanism/Biotechnology
(credit: Moogfest)
Marc Fleury and members of the Church of Space — Park Krausen, Ingmar Koch, and Christ of Veillon — return to Moogfest for a second year to present an expanded and varied program with daily explorations in modern physics with music and the occult, Illuminati performances, theatrical rituals to ERIS, and a Sunday Mass in their own dedicated “Church” venue.