Deep learning-based bionic hand grasps objects automatically

British biomedical engineers have developed a new generation of intelligent prosthetic limbs that allow the wearer to reach for objects automatically, without thinking — just like a real hand.

The hand’s camera takes a picture of the object in front of it, assesses its shape and size, picks the most appropriate grasp, and triggers a series of movements in the hand — all within milliseconds.

The findings were published Wednesday, May 3, in an open-access paper in the Journal of Neural Engineering.

A deep learning-based artificial vision and grasp system

Biomedical engineers at Newcastle University and associates developed a convolutional neural network (CNN), trained it with images of more than 500 graspable objects, and taught it to recognize the grip needed for different types of objects.

Object recognition (top) vs. grasp recognition (bottom) (credit: Ghazal Ghazaei/Journal of Neural Engineering)

Grouping objects by size, shape and orientation, according to the type of grasp that would be needed to pick them up, the team programmed the hand to perform four different grasps: palm wrist neutral (such as when you pick up a cup); palm wrist pronated (such as picking up the TV remote); tripod (thumb and two fingers); and pinch (thumb and first finger).

“We would show the computer a picture of, for example, a stick,” explains lead author Ghazal Ghazaei. “But not just one picture; many images of the same stick from different angles and orientations, even in different light and against different backgrounds, and eventually the computer learns what grasp it needs to pick that stick up.”
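The team’s code isn’t reproduced here, but the core idea — a convolutional classifier that maps a camera image straight to one of the four grasp classes rather than identifying the object — can be sketched in a few lines. The architecture, layer sizes and 128×128 input below are illustrative assumptions, not the authors’ network (PyTorch):

```python
import torch
import torch.nn as nn

GRASPS = ["pinch", "tripod", "palm_wrist_neutral", "palm_wrist_pronated"]

class GraspNet(nn.Module):
    """Toy convolutional classifier: camera image -> one of four grasp types."""

    def __init__(self, num_classes: int = len(GRASPS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # assumes 128x128 input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)              # (N, 32, 16, 16) for 128x128 RGB input
        return self.classifier(x.flatten(1))

# One synthetic frame standing in for the prosthesis webcam image.
model = GraspNet().eval()
frame = torch.rand(1, 3, 128, 128)
with torch.no_grad():
    grasp = GRASPS[model(frame).argmax(dim=1).item()]
print("selected grasp:", grasp)
```

In the real system, the predicted class would then trigger the corresponding pre-programmed hand movement.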

A block diagram representation of the method (credit: Ghazal Ghazaei/Journal of Neural Engineering)

Current prosthetic hands are controlled directly via the user’s myoelectric signals (electrical activity of the muscles recorded from the skin surface of the stump). That takes learning, practice, concentration and, crucially, time.

A small number of amputees have already trialed the new technology. After training, subjects successfully picked up and moved the target objects with an overall success rate of up to 88%. Now the Newcastle University team is working with experts at Newcastle upon Tyne Hospitals NHS Foundation Trust to offer the “hands with eyes” to patients at Newcastle’s Freeman Hospital.

A future bionic hand

The work is part of a larger research project to develop a bionic hand that can sense pressure and temperature and transmit the information back to the brain.

Led by Newcastle University and involving experts from the universities of Leeds, Essex, Keele, Southampton and Imperial College London, the aim is to develop novel electronic devices that connect to the forearm’s neural networks to allow two-way communications with the brain.

The research is funded by the Engineering and Physical Sciences Research Council (EPSRC).


Abstract of Deep learning-based artificial vision for grasp classification in myoelectric hands

Objective. Computer vision-based assistive technology solutions can revolutionise the quality of care for people with sensorimotor disorders. The goal of this work was to enable trans-radial amputees to use a simple, yet efficient, computer vision system to grasp and move common household objects with a two-channel myoelectric prosthetic hand. Approach. We developed a deep learning-based artificial vision system to augment the grasp functionality of a commercial prosthesis. Our main conceptual novelty is that we classify objects with regards to the grasp pattern without explicitly identifying them or measuring their dimensions. A convolutional neural network (CNN) structure was trained with images of over 500 graspable objects. For each object, 72 images, at 5° intervals, were available. Objects were categorised into four grasp classes, namely: pinch, tripod, palmar wrist neutral and palmar wrist pronated. The CNN setting was first tuned and tested offline and then in realtime with objects or object views that were not included in the training set. Main results. The classification accuracy in the offline tests reached 85% for the seen and 75% for the novel objects; reflecting the generalisability of grasp classification. We then implemented the proposed framework in realtime on a standard laptop computer and achieved an overall score of 84% in classifying a set of novel as well as seen but randomly-rotated objects. Finally, the system was tested with two trans-radial amputee volunteers controlling an i-limb Ultra™ prosthetic hand and a motion control™ prosthetic wrist; augmented with a webcam. After training, subjects successfully picked up and moved the target objects with an overall success of up to 88%. In addition, we show that with training, subjects’ performance improved in terms of time required to accomplish a block of 24 trials despite a decreasing level of visual feedback. Significance. The proposed design constitutes a substantial conceptual improvement for the control of multi-functional prosthetic hands. We show for the first time that deep-learning based computer vision systems can enhance the grip functionality of myoelectric hands considerably.

 

Elon Musk wants to enhance us as superhuman cyborgs to deal with superintelligent AI

(credit: Neuralink Corp.)

It’s the year 2021. A quadriplegic patient has just had one million “neural lace” microparticles injected into her brain, the world’s first human with an internet communication system using a wireless implanted brain-mind interface — and empowering her as the first superhuman cyborg. …

No, this is not a science-fiction movie plot. It’s the actual first public step — just four years from now — in Tesla CEO Elon Musk’s business plan for his latest new venture, Neuralink. It’s now explained for the first time on Tim Urban’s WaitButWhy blog.

Dealing with the superintelligence existential risk

Such a system would allow for radically improved communication between people, Musk believes. But for Musk, the big concern is AI safety. “AI is obviously going to surpass human intelligence by a lot,” he says. “There’s some risk at that point that something bad happens, something that we can’t control, that humanity can’t control after that point — either a small group of people monopolize AI power, or the AI goes rogue, or something like that.”

“This is what keeps Elon up at night,” says Urban. “He sees it as only a matter of time before superintelligent AI rises up on this planet — and when that happens, he believes that it’s critical that we don’t end up as part of ‘everyone else.’ That’s why, in a future world made up of AI and everyone else, he thinks we have only one good option: To be AI.”

Neural dust: an ultrasonic, low power solution for chronic brain-machine interfaces (credit: Swarm Lab/UC Berkeley)

To achieve this, Neuralink CEO Musk has met with more than 1,000 people, narrowing the field initially to eight experts, such as Paul Merolla, who spent the last seven years as the lead chip designer at IBM on its DARPA-funded SyNAPSE program, designing neuromorphic (brain-inspired) chips with 5.4 billion transistors, 1 million neurons, and 256 million synapses; and Dongjin (DJ) Seo, who while at UC Berkeley designed “neural dust,” an ultrasonic backscatter system for powering and communicating with implanted bioelectronics to record brain activity.*

Mesh electronics being injected through sub-100 micrometer inner diameter glass needle into aqueous solution (credit: Lieber Research Group, Harvard University)

Becoming one with AI — a good thing?

Neuralink’s goal is to create a “digital tertiary layer” to augment the brain’s current cortex and limbic layers — a radical high-bandwidth, long-lasting, biocompatible, bidirectional, non-invasively implanted communication system made up of micron-size (millionth of a meter) particles communicating wirelessly via the cloud and internet to achieve super-fast communication speed and increased bandwidth (carrying more information).

“We’re going to have the choice of either being left behind and being effectively useless or like a pet — you know, like a house cat or something — or eventually figuring out some way to be symbiotic and merge with AI. … A house cat’s a good outcome, by the way.”

Thin, flexible electrodes mounted on top of a biodegradable silk substrate could provide a better brain-machine interface, as shown in this model. (credit: University of Illinois at Urbana-Champaign)

But machine intelligence is already vastly superior to human intelligence in specific areas (such as Google’s AlphaGo) and often inexplicable. So how do we know superintelligence has the best interests of humanity in mind?

“Just an engineering problem”

Musk’s answer: “If we achieve tight symbiosis, the AI wouldn’t be ‘other’ — it would be you, with a relationship to your cortex analogous to the relationship your cortex has with your limbic system.” OK, but then how does an inferior intelligence know when it’s achieved full symbiosis with a superior one — or when AI goes rogue?

Brain-to-brain (B2B) internet communication system: EEG signals representing two words were encoded into binary strings (left) by the sender (emitter) and sent via the internet to a receiver. The signal was then encoded as a series of transcranial magnetic stimulation-generated phosphenes detected by the visual occipital cortex, which the receiver then translated to words (credit: Carles Grau et al./PLoS ONE)

And what about experts in neuroethics, psychology, law? Musk says it’s just “an engineering problem. … If we can just use engineering to get neurons to talk to computers, we’ll have done our job, and machine learning can do much of the rest.”

However, it’s not clear how we could be assured our brains aren’t hacked, spied on, and controlled by a repressive government or by other humans — especially those with a more recently updated software version or covert cyborg hardware improvements.

NIRS/EEG brain-computer interface system using non-invasive near-infrared light for sensing “yes” or “no” thoughts, shown on a model (credit: Wyss Center for Bio and Neuroengineering)

In addition, the devices mentioned in WaitButWhy all require some form of neurosurgery, unlike Facebook’s research project to use non-invasive near-infrared light, as shown in this experiment, for example.** And getting implants for non-medical use approved by the FDA will be a challenge, to grossly understate it.

“I think we are about 8 to 10 years away from this being usable by people with no disability,” says Musk, optimistically. However, Musk does not lay out a technology roadmap for going further, as MIT Technology Review notes.

Nonetheless, Neuralink sounds awesome — it should lead to some exciting neuroscience breakthroughs. And Neuralink now has 16 San Francisco job listings here.

* Other experts: Vanessa Tolosa, Lawrence Livermore National Laboratory, one of the world’s foremost researchers on biocompatible materials; Max Hodak, who worked on the development of some groundbreaking BMI technology at Miguel Nicolelis’s lab at Duke University; Ben Rapoport, Neuralink’s neurosurgery expert, with a Ph.D. in Electrical Engineering and Computer Science from MIT; Tim Hanson, UC Berkeley post-doc and expert in flexible electrodes for stable, minimally invasive neural recording; Flip Sabes, professor at the UCSF School of Medicine and expert in cortical physiology, computational and theoretical modeling, and human psychophysics and physiology; and Tim Gardner, Associate Professor of Biology at Boston University, whose lab works on implanting BMIs in birds, to study “how complex songs are assembled from elementary neural units” and learn about “the relationships between patterns of neural activity on different time scales.”

** This binary experiment and the binary brain-to-brain (B2B) internet communication system mentioned above are the equivalents of the first binary (dot–dash) telegraph message, sent May 24, 1844: “What hath God wrought?”

These may be the last glasses you will ever need to buy

Early prototype of “smart glasses” with liquid-based lenses that can automatically adjust the focus on what a person is seeing, whether it’s far away or close up. The battery-powered frames can automatically adjust the focal length. Researchers expect to have smaller, lighter frames and packaged technology within three years. (credit: Dan Hixson/University of Utah College of Engineering)

Don’t throw away your bifocals or multiple glasses yet, but those days might soon be over. A team led by University of Utah engineers has created “smart glasses” with liquid-based lenses that can automatically adjust the focus on what you’re seeing, at any distance.

They’ve created eyeglass lenses made of glycerin, a thick colorless liquid, enclosed by flexible rubber-like membranes in the front and back. The rear membrane in each lens is connected to a series of three mechanical actuators that push the membrane back and forth like a transparent piston, changing the curvature of the liquid lens and therefore the focal length between the lens and the eye.

Simplified schematic of soft-membrane liquid lens (excluding actuators). The lens optical power is adjusted by vertically displacing the fluid with a transparent piston, deflecting the top membrane and changing its curvature. (credit: Nazmul Hasan et al./Optics Express)

In the bridge of the glasses is a distance meter that measures the distance from the glasses to an object via pulses of near-infrared light. When the wearer looks at an object, the meter instantly measures the distance and tells the actuators how to curve the lenses. If the user then sees another object that’s closer, the distance meter readjusts and tells the actuators to reshape the lens for farsightedness.

The lenses can change focus from one object to another in 14 milliseconds (faster than human reaction time). A rechargeable battery in the frames could last more than 24 hours per charge, according to electrical and computer engineering professor Carlos Mastrangelo, senior author of an open-access paper in a special edition of the journal Optics Express.

Before putting them on for the first time, users would input their eyeglasses prescription into an accompanying smartphone app, which then calibrates the lenses automatically via Bluetooth. Users only need to do that once, except for when their prescription changes over time. Theoretically, eyeglass wearers will never have to buy another pair again since these glasses would constantly adjust to their eyesight.
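Neither the article nor the abstract spells out the control law, but the optics behind it are straightforward: focusing on an object at distance d requires roughly 1/d diopters of power on top of the wearer’s distance prescription (assuming the wearer’s own accommodation contributes little). Here is a minimal sketch of that loop, with hypothetical function names standing in for the rangefinder and actuator interfaces:

```python
def required_lens_power(object_distance_m: float, distance_rx_diopters: float) -> float:
    """Optical power (diopters) the adaptive lens should supply to focus at the
    measured distance: the wearer's distance prescription plus the ~1/d of
    near-focus power a fully accommodating eye would otherwise provide."""
    d = max(object_distance_m, 0.25)          # clamp at a 25 cm near point
    return distance_rx_diopters + 1.0 / d

def control_step(read_rangefinder, set_lens_power, distance_rx_diopters: float) -> None:
    """One pass of the (hypothetical) control loop: read the infrared rangefinder
    in the bridge of the glasses, then command the piston actuators."""
    d = read_rangefinder()                    # metres to the object being viewed
    set_lens_power(required_lens_power(d, distance_rx_diopters))

# Example: a -2.0 D myope looking at a book 0.5 m away needs about 0.0 D here,
# i.e. the lens relaxes from the -2.0 D distance setting by the 2.0 D of near focus.
print(required_lens_power(0.5, -2.0))
```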

A startup company, Sharpeyes LLC, has been created to commercialize the glasses. The project was funded with a grant from the National Institutes of Health and the National Institute of Biomedical Imaging and Bioengineering.


University of Utah | Smart glasses that automatically focus on whatever you look at


Abstract of Tunable-focus lens for adaptive eyeglasses

We demonstrate the implementation of a compact tunable-focus liquid lens suitable for adaptive eyeglass application. The lens has an aperture diameter of 32 mm, optical power range of 5.6 diopter, and electrical power consumption less than 20 mW. The lens inclusive of its piezoelectric actuation mechanism is 8.4 mm thick and weighs 14.4 gm. The measured lens RMS wavefront aberration error was between 0.73 µm and 0.956 µm.

Terasem Colloquium in Second Life

The 2016 Terasem Annual Colloquium on the Law of Futuristic Persons will take place in Second Life in “Terasem sim” on Saturday, Dec. 10, 2016, at noon EDT. The main themes: “Legal Aspects of Futuristic Persons: Cyber-Humans” and “A Tribute to the ‘Father of Artificial Intelligence,’ Marvin Minsky, PhD.”

Each year on December 10th, International Human Rights Day, Terasem conducts a Colloquium on the Law of Futuristic Persons. The event seeks to provide the public with informed perspectives regarding the legal rights and obligations of “futuristic persons” via VR events with expert presentations and discussions. Terasem hopes to “facilitate development of a body of law covering the rights and obligations of entities that transcend, and yet encompass, conventional conceptions of humanness,” according to Terasem Movement, Inc.

12:10–12:30PM — How Marvin Minsky Inspired Me To Have a Mindclone Living on An O’Neill Space Habitat
Martine Rothblatt, JD, PhD
Co-Founder, Terasem Movement, Inc.
Space Coast, FL
Avatar name: Vitology Destiny

12:30–12:50PM — Formal Interaction

12:50–1:10PM — The Emerging Law of Cyborgs
Woodrow “Woody” Barfield, PhD, JD, LLM
Author: Cyber-Humans: Our Future with Machines
Chapel Hill, NC
Avatar name: WoodyBarfield

1:10–1:30PM — Formal Interaction

1:30–1:50PM — Cyborgs and Family Law Challenges
Rich Lee
Human Enhancement & Augmentation
St. George, UT
Avatar name: RichLee78

1:50–2:10PM — Formal Interaction

2:10–2:30PM — Synthetic Brain Simulations and Mens Rea*
Stephen Thaler, PhD.
President & CEO, Imagination Engines, Inc.
St. Charles, MO
Avatar name: SteveThaler

* Mens Rea refers to criminal intent. Moreover, it is the state of mind indicating culpability which is required by statute as an element of a crime. — Cornell University Legal Information Institute

 

Americans worried about gene editing, brain chip implants, and synthetic blood

(iStock Photo)

Many in the general U.S. public are concerned about technologies to make people’s minds sharper and their bodies stronger and healthier than ever before, according to a new Pew Research Center survey of more than 4,700 U.S. adults.

The survey covers broad public reaction to scientific advances and examines public attitudes about the potential use of three specific emerging technologies for human enhancement.

The nationally representative survey centered on public views about gene editing that might give babies a lifetime with much reduced risk of serious disease, implantation of brain chips that potentially could give people a much improved ability to concentrate and process information, and transfusions of synthetic blood that might give people much greater speed, strength, and stamina.

A majority of Americans would be “very” or “somewhat” worried about gene editing (68%); brain chips (69%); and synthetic blood (63%), while no more than half say they would be enthusiastic about each of these developments.

Among the key data:

  • More say they would not want enhancements of their brains and their blood (66% and 63%, respectively) than say they would want them (32% and 35%). U.S. adults are closely split on the question of whether they would want gene editing to help prevent diseases for their babies (48% would, 50% would not).
  • Majorities say these enhancements could exacerbate the divide between haves and have-nots. For instance, 73% believe inequality will increase if brain chips become available because initially they will be obtainable only by the wealthy. At least seven-in-ten predict each of these technologies will become available before they have been fully tested or understood.
  • Substantial shares say they are not sure whether these interventions are morally acceptable. But among those who express an opinion, more people say brain and blood enhancements would be morally unacceptable than say they are acceptable.
  • More adults say the downsides of brain and blood enhancements would outweigh the benefits for society than vice versa. Americans are a bit more positive about the impact of gene editing to reduce disease; 36% think it will have more benefits than downsides, while 28% think it will have more downsides than benefits.
  • Opinion is closely divided when it comes to the fundamental question of whether these potential developments are “meddling with nature” and cross a line that should not be crossed, or whether they are “no different” from other ways that humans have tried to better themselves over time. For example, 49% of adults say transfusions with synthetic blood for much improved physical abilities would be “meddling with nature,” while a roughly equal share (48%) say this idea is no different than other ways humans have tried to better themselves.

The survey data reveal several patterns surrounding Americans’ views about these ideas:

  • People’s views about these human enhancements are strongly linked with their religiosity.
  • People are less accepting of enhancements that produce extreme changes in human abilities. And, if an enhancement is permanent and cannot be undone, people are less inclined to support it.
  • Women tend to be more wary than men about these potential enhancements from cutting-edge technologies.

The survey also finds some similarities between what Americans think about these three potential, future enhancements and their attitudes toward the kinds of enhancements already widely available today. As a point of comparison, this study examined public thinking about a handful of current enhancements, including elective cosmetic surgery, laser eye surgery, skin or lip injections, cosmetic dental procedures to improve one’s smile, hair replacement surgery and contraceptive surgery.

  • 61% of Americans say people are too quick to undergo cosmetic procedures to change their appearance in ways that are not really important, while 36% say “it’s understandable that more people undergo cosmetic procedures these days because it’s a competitive world and people who look more attractive tend to have an advantage.”
  • When it comes to views about elective cosmetic surgery, in particular, 34% say elective cosmetic surgery is “taking technology too far,” while 62% say it is an “appropriate use of technology.” Some 54% of U.S. adults say elective cosmetic surgery leads to about equal benefits and downsides for society, while 26% express the belief that there are more downsides than benefits, and just 16% say society receives more benefits than downsides from cosmetic surgery.

The survey data is drawn from a nationally representative survey of 4,726 U.S. adults conducted by Pew Research Center online and by mail from March 2-28, 2016.

Pew Research Center is a nonpartisan “fact tank” that informs the public about the issues, attitudes and trends shaping America and the world. It does not take policy positions. The center is a subsidiary of The Pew Charitable Trusts, its primary funder.

Robot mimics vertebrate motion

Pleurobot (credit: EPFL)

École polytechnique fédérale de Lausanne (EPFL) scientists have invented a new robot called “Pleurobot” that mimics the way salamanders walk and swim with unprecedented detail.

Aside from being cool (and a likely future Disney attraction), the researchers believe designing the robot will provide a new tool for understanding the evolution of vertebrate locomotion. That could lead to better understanding of how the spinal cord controls the body’s locomotion, which may help develop future therapies and neuroprosthetic devices for paraplegic patients and amputees.

Pleurobot mimics a salamander. Neurobiologists say electrical stimulation of the spinal cord is what determines whether the salamander walks, crawls or swims: at the lowest level of stimulation, the salamander walks; with higher stimulation, its pace increases; and beyond some threshold, the salamander begins to swim. (credit: EPFL)

Reproducing the salamander’s locomotion in 3D requires exceptional precision. The Biorobotics Laboratory scientists started by shooting detailed x-ray videos of the salamander species Pleurodeles waltl from the top and the side, tracking up to 64 points along its skeleton while it performed different types of motion in water and on the ground.

Auke Ijspeert and his team at EPFL then 3D-printed bones and motorized joints, and even created a “nervous system” using electronic circuitry, allowing the Pleurobot to walk, crawl, and even swim underwater.*
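The lab’s published controllers are central pattern generator models; their equations aren’t given here, but the stimulation-dependent behavior described in the caption above can be sketched as a simple drive-to-gait mapping. The thresholds and the rest/walk/swim labels below are illustrative, not the EPFL model:

```python
def gait_for_drive(drive: float, walk_threshold: float = 1.0, swim_threshold: float = 3.0) -> str:
    """Map a dimensionless 'spinal stimulation' level to a locomotor mode,
    mimicking the walk -> faster walk -> swim progression described above."""
    if drive < walk_threshold:
        return "rest"
    if drive < swim_threshold:
        return "walk"   # in a real CPG model, stepping frequency rises with drive
    return "swim"

for drive in (0.5, 1.5, 2.5, 3.5):
    print(f"drive={drive}: {gait_for_drive(drive)}")
```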

Ijspeert thinks that the design methodology used for the Pleurobot can help develop other types of “biorobots,” which could become important tools in neuroscience and biomechanics.

The research, described in the Royal Society journal Interface, received funding from the Swiss National Center of Competence in Research (NCCR) in Robotics and from the Swiss National Science Foundation.


École polytechnique fédérale de Lausanne | A new robot mimics vertebrate motion

* In the design process, the researchers identified the minimum number of motorized segments required, as well as the optimal placement along the robot’s body, to replicate many of the salamander’s types of movement. That made it possible to construct Pleurobot with fewer bones and joints than the real-life creature — only 27 motors and 11 segments along its spine (the real animal has 40 vertebrae and multiple joints, some of which can even rotate freely and move side-to-side or up and down). 


Abstract of From cineradiography to biorobots: an approach for designing robots to emulate and study animal locomotion

Robots are increasingly used as scientific tools to investigate animal locomotion. However, designing a robot that properly emulates the kinematic and dynamic properties of an animal is difficult because of the complexity of musculoskeletal systems and the limitations of current robotics technology. Here, we propose a design process that combines high-speed cineradiography, optimization, dynamic scaling, three-dimensional printing, high-end servomotors and a tailored dry-suit to construct Pleurobot: a salamander-like robot that closely mimics its biological counterpart, Pleurodeles waltl. Our previous robots helped us test and confirm hypotheses on the interaction between the locomotor neuronal networks of the limbs and the spine to generate basic swimming and walking gaits. With Pleurobot, we demonstrate a design process that will enable studies of richer motor skills in salamanders. In particular, we are interested in how these richer motor skills can be obtained by extending our spinal cord models with the addition of more descending pathways and more detailed limb central pattern generator networks. Pleurobot is a dynamically scaled amphibious salamander robot with a large number of actuated degrees of freedom (DOFs: 27 in total). Because of our design process, the robot can capture most of the animal’s DOFs and range of motion, especially at the limbs. We demonstrate the robot’s abilities by imposing raw kinematic data, extracted from X-ray videos, to the robot’s joints for basic locomotor behaviours in water and on land. The robot closely matches the behaviour of the animal in terms of relative forward speeds and lateral displacements. Ground reaction forces during walking also resemble those of the animal. Based on our results, we anticipate that future studies on richer motor skills in salamanders will highly benefit from Pleurobot’s design.

Moogfest 2016: the synthesis of future music, technology, and art

Moogfest 2016, a four-day, mind-expanding festival on the synthesis of technology, art, and music, will happen this coming week (Thursday, May 19 to Sunday, May 22) near Duke University in Durham, North Carolina, with more than 300 musical performances, workshops, conversations, masterclasses, film screenings, live scores, sound installations, multiple interactive art experiences, and “The Future of Creativity” keynotes by visionary futurist Martine Rothblatt, PhD. and virtual reality pioneer and author Jaron Lanier.

Cyborg activist Neil Harbisson is the first person in the world with an antenna implanted in his skull, allowing him to hear the frequencies of colors (including infrared and ultraviolet) via bone conduction and receive phone calls. (credit: N. Harbisson)

By day, Moogfest unfolds in venues throughout downtown Durham in spaces that range from intimate galleries and experimental art installations to grand theaters as a platform for geeky exploration and experimentation in sessions and workshops, featuring more than 250 innovators in music, art, and technology, including avant-garde pioneers such as cyborg Neil Harbisson, technoshaman paleo-ecologist/multimedia performer Michael Garfield on “Technoshamanism: A Very Psychedelic Century,” sonifying plants with Data Garden, the Google Magenta (Deep Dream Generator) on training neural networks to generate music, Onyx Ashanti showing how to program music with your mind, Google Doodle’s Ryan Germick, and cyborg artist Moon Ribas, whose cybernetic implants in her arms perceive the movement of real-time earthquakes.

Modular Marketplace 2014 (credit: PatrickPKPR)

Among the fun experimental venues will be the musical Rube Goldberg workshop, the Global Synthesizer Project (an interactive electronic musical instrument installation where users can synthesize environmental sounds from around the world), THETA (a guided meditation virtual reality spa), WiFi Whisperer (an art installation that visually displays the signals around us), the Musical Playground, and Modular Marketplace, an interactive exhibition showcasing the latest and greatest from a lineup of Moog Music and other innovative instrument makers, where the public can engage with new musical devices and their designers; it is free and open to the public at the American Tobacco Campus, 318 Blackwell Street, 10am–6pm, May 19–22.


INSTRUMENT 1 from Artiphon will make its public debut at Moogfest 2016. It allows users of any skill or style to strum a guitar, tap a piano, bow a violin, or loop a drum beat — all on a single interface. By connecting to iOS devices, Macs and PCs, this portable musical tool can make any sound imaginable.

In addition, noted MIT Media Lab opera composer/inventor Tod Machover will demonstrate his Hyperinstruments: responsive stage technologies that go beyond multimedia, large-scale collaborative systems that enable entire cities to create symphonies together, and musical tools that promote wellbeing, diagnose disease, and allow for customizing compositions.

Music of the future

By night, Moogfest will present cutting-edge music in venues throughout the city. Performing artists include pioneers in electronic music like Laurie Anderson and legendary synth pioneer Suzanne Ciani, alongside pop and avant-garde experimentalists of today, including Grimes, Explosions in the Sky, Oneohtrix Point Never, Alessandro Cortini, Daniel Lanois, Tim Hecker, Arthur Russell Instrumentals, Rival Consoles, and Dawn of Midi.

Durham’s historic Armory is transformed into a dark and body-thumping dance club to host the best of electronica, house, disco and techno. Godfathers of the genre include The Orb, DJ Harvey, and Robert Hood alongside inspiring new acts such as Bicep (debuting their live show), The Black Madonna and a Ryan Hemsworth curated night including Jlin, Qrion and UVBoi.

“The liberation of LGBTQ+ people is wired into the original components of electronic music culture…” — Artists’ statement here

Local favorite Pinhook features a wide range of experimental sounds: heavy techno from Kyle Hall, Paula Temple and Karen Gwyer, live experimentation from Via App, Patricia, M. Geddes Gengras and Julia Holter, jaggedly rhythmic futurists Rabit and Lotic, and the avant-garde doom metal of The Body.

Moogfest’s largest venue, Motorco Park, is a mix of future-forward electro-pop and R&B with performances by ODESZA, Blood Orange, critically-acclaimed emerging artist DAWN (Dawn Richard) playing her first NC show, the kickoff of Miike Snow’s U.S. tour, Gary Numan, Silver Apples, Mykki Blanco and newly announced The Range, as well as a distinguished hip hop lineup that includes GZA, Skepta, Tory Lanez, Daye Jack, Denzel Curry, Lunice and local artists King Mez, Professor Toon and Well$.

Full Schedule: https://moogfest.sched.org

Robert Moog (credit: Moogarchives.com)

Since 2004, Moogfest has brought together artists, futurist thinkers, inventors, entrepreneurs, designers, engineers, scientists, and musicians. Moogfest is a tribute to Dr. Robert “Bob” Moog and the profound influence his inventions have had on how we hear the world. Over the last sixty years, Bob Moog and Moog Music have pioneered the analog synthesizer and other technology tools for artists. He was vice president for new product research at Kurzweil Music Systems from 1984 to 1988.

Less-distracting haptic feedback could make car navigation safer than GPS audio and displays

Vibrotactile actuators in prototype smart glasses (credit: Joseph Szczerba et al./Proceedings of the Human Factors and Ergonomics Society)

Human factors/ergonomics researchers at General Motors and an affiliate have conducted a study of a new turn-by-turn automotive navigation system that delivers haptic cues (vibrations) to the temples to tell drivers about upcoming turns (which direction and when to turn), instead of relying on distracting voice prompts or video displays.

They modified a prototype smart-glasses device with motors in two actuators (on the right and left sides of the head) that buzz to indicate a right or left turn; the number of buzzes (one at 800 feet away, two at 400 feet, and three at 100 feet) indicates how far away the turn is.
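As a rough illustration of that cueing scheme (not the authors’ implementation), the mapping from an upcoming turn to a vibration pattern could look like this, treating the three reported cue distances as bands:

```python
def buzz_pattern(turn_direction: str, distance_ft: float) -> tuple[str, int]:
    """Return (actuator side, number of buzzes) for an upcoming turn.
    The study issued discrete cues at ~800, ~400 and ~100 feet; treating those
    distances as bands is a simplification for illustration."""
    side = "left" if turn_direction == "left" else "right"
    if distance_ft <= 100:
        return side, 3
    if distance_ft <= 400:
        return side, 2
    if distance_ft <= 800:
        return side, 1
    return side, 0   # too far away to cue yet

print(buzz_pattern("left", 350))   # -> ('left', 2)
```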

National Advanced Driving Simulator (NADS) MiniSim software (credit: NADS)

Using a driving simulator, each participant drove three city routes using a visual-only, visual-plus-voice, and visual-plus-haptic navigation system. For all three system modalities, the participants were also presented with graphical icons for turn-by-turn directions and distance.

Sample turn-by-turn direction icon (credit: Joseph Szczerba et al./Proceedings of the Human Factors and Ergonomics Society)

The researchers found that effort, mental workload, and overall workload were lowest with the prototype haptic system. Drivers didn’t have to listen for voice instructions or take their eyes off the road to look at a visual display. Drivers also preferred the haptic system because it didn’t distract from conversation or audio entertainment.

The results indicate that haptic smart-glasses paired with a simplified icon-based visual display may give drivers accurate directional assistance with less effort.

Mazda 2015 with GPS audio and video display (credit: Landmark MAZDA)

As noted in “Up to 27 seconds of inattention after talking to your car or smartphone,” two studies by University of Utah researchers for the AAA Foundation for Traffic Safety found that a driver traveling only 25 mph continues to be distracted for up to 27 seconds after disconnecting from highly distracting phone and car voice-command systems. The 27 seconds means a driver traveling 25 mph would cover the length of three football fields before regaining full attention.

According to the Multiple Resource Theory developed by Christopher D. Wickens in Theoretical Issues in Ergonomics Science (open access), multiple tasks (such as use of navigation systems while driving) performed via the same channel can result in excessive demand that may increase cognitive workload (and risks of an accident).

The new human factors/ergonomics haptics research was conducted by Joseph Szczerba and Roy Mathieu from General Motors Global R&D and Roger Hersberger from RLH Systems LLC. It was described in a paper in Proceedings of the Human Factors and Ergonomics Society September 2015.


Abstract of A Wearable Vibrotactile Display for Automotive Route Guidance: Evaluating Usability, Workload, Performance and Preference

Automotive navigation systems typically provide distance and directional information of an ensuing maneuver by means of visual indicators and audible instructions. These systems, however, use the same human perception channels that are required to perform the primary task of driving, and may consequently increase cognitive workload. A vibrotactile display was designed as an alternative to voice instruction and implemented in a consumer wearable device (smart-glasses). Using a driving simulator, the prototype system was compared to conventional navigation systems by assessing usability, workload, performance and preference. Results indicated that the use of haptic feedback in smart-glasses can improve secondary task performance over the conventional visual/auditory navigation system. Additionally, users preferred the haptic system over the other conventional systems. This study indicates that existing technologies found in consumer wearable devices may be leveraged to enhance the user-interface of vehicle navigation systems.

Could humans ever regenerate limbs?

Just lopped off your ring finger slicing carrots (some time in the future)? No problem. Just speed-read this article while you’re waiting for the dronebulance. …

“Epimorphic regeneration” — growing digits, maybe even limbs, with full 3D structure and functionality — may one day be possible. So say scientists at Tulane University, the University of Washington, and the University of Pittsburgh, writing in a review article just published in Tissue Engineering, Part B, Reviews (open access until March 8).

The process of amphibian epimorphic regeneration may offer hints for humans. After amputation, the wound heals to form an epidermal layer, the underlying tissues undergo matrix remodeling, and cells in the region secrete soluble factors. A heterogeneous cell mass, or blastema, forms from the proliferation and migration of cells from the adjacent tissues. The blastema then gives rise to the various new tissues that are spatially patterned to reconstruct the original limb structure. (credit: Lina M. Quijano et al./Tissue Engineering Part B)

Epimorphic regeneration occurs in certain animals, such as salamanders and frogs, which are able to regenerate limbs, tails, jaws, and even eye lenses; and in deer antlers and mouse ears.

Turns out there are also rare cases of children and young adults who have had tips of digits regenerated. And there are specific “steps of epimorphic regeneration to promote the partial or complete restoration of a biological digit or limb after amputations,” the scientists believe.

Epimorphic regeneration in the murine (type of mouse) digit is level-specific and provides an opportunity for comparative studies of mammalian epimorphic regeneration. Transection through the P2 element results in the frequent outcome of fibrotic scar tissue formation. Transection through the more distal P3 element instead results in the regeneration of missing tissue. (credit: Lina M. Quijano et al./Tissue Engineering Part B)

Some of those steps are suggested by what’s possible in mice, where the digit tip has been found capable of regrowing multiple structures, including bone, after amputation.

What about humans?

The highly ambitious goal of epimorphic regeneration for humans would require the regrowth of multiple tissues that have been assembled in the proper conformation and patterns to create a fully functional limb, according to the authors.

Epimorphic regeneration has been observed in distal finger tips of children and young adults. Converting such random events into designed clinical outcomes will require altering the default postamputation progression. It may include the transplantation of cells, scaffolds, and/or soluble factors, as well as controlling microenvironmental aspects, such as oxygen concentration, tissue hydration, mechanical, and electrical cues. (credit: Lina M. Quijano et al./Tissue Engineering Part B)

They note that “it may be possible to suture an engineered epithelial layer, much like the present skin grafts, across the injury site. In-depth understanding of the proper soluble factor communication necessary, however, could lead to a more direct approach of delivering growth factors to the region, leveraging drug delivery paradigms that create spatiotemporal gradients. These interventions are intended to mimic the signals that induce a stable cell mass that functions as a blastema [a mass of cells capable of growth and regeneration into organs or body parts].

“Initial studies in mice have already shown the promise of introducing solubilized [extra-cellular matrices, bone morphogenetic proteins, and matrix metalloproteinases, generated by immune cells] in promoting recruitment/mobilization of endogenous cells to proliferate at the transected bone front. Furthermore, the injury response may also be influenced with external bioreactors … that can control parameters, such as hydration, pH, oxygen concentration, and electrical stimulation.”

The research was supported by the National Institutes of Health and the Fulbright Scholars Program.


Abstract of Looking Ahead to Engineering Epimorphic Regeneration of a Human Digit or Limb

Approximately 2 million people have had limb amputations in the United States due to disease or injury, with more than 185,000 new amputations every year. The ability to promote epimorphic regeneration, or the regrowth of a biologically based digit or limb, would radically change the prognosis for amputees. This ambitious goal includes the regrowth of a large number of tissues that need to be properly assembled and patterned to create a fully functional structure. We have yet to even identify, let alone address, all the obstacles along the extended progression that limit epimorphic regeneration in humans. This review aims to present introductory fundamentals in epimorphic regeneration to facilitate design and conduct of research from a tissue engineering and regenerative medicine perspective. We describe the clinical scenario of human digit healing, featuring published reports of regenerative potential. We then broadly delineate the processes of epimorphic regeneration in nonmammalian systems and describe a few mammalian regeneration models. We give particular focus to the murine digit tip, which allows for comparative studies of regeneration-competent and regeneration-incompetent outcomes in the same animal. Finally, we describe a few forward-thinking opportunities for promoting epimorphic regeneration in humans.

Artificial ‘skin’ system transmits the pressure of touch

“Gimmie five”: Model robotic hand with artificial mechanoreceptors (credit: Bao Research Group, Stanford University)

Researchers have created a sensory system that mimics the ability of human skin to feel pressure and have transmitted the digital signals from the system’s sensors to the brain cells of mice. These new developments, reported in the October 16 issue of Science, could one day allow people living with prosthetics to feel sensation in their artificial limbs.

Artificial mechanoreceptors mounted on the fingers of a model robotic hand (credit: Bao Research Group, Stanford University)

The system consists of printed plastic circuits, designed to be placed on robotic fingertips. Digital signals transmitted by the system would increase as the fingertips came closer to an object, with the signal strength growing as the fingertips gripped the object tighter.

How to simulate human fingertip sensations

To simulate this human sensation of pressure, Zhenan Bao of Stanford University and her colleagues developed a number of key components that collectively allow the system to function.

As our fingers first touch an object, how we physically “feel” it depends partially on the mechanical strain that the object exerts on our skin. So the research team used a sensor with a specialized circuit that translates pressure into digital signals.

To allow the sensory system to feel the same range of pressure that human fingertips can, the team needed a highly sensitive sensor. They used carbon nanotubes in formations that are highly effective at detecting the electrical fields of inanimate objects.
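The abstract below reports an output frequency of 0 to 200 Hz with a sublinear response to increasing force. A toy version of that pressure-to-pulse-rate conversion might use a saturating curve like the following; the half-saturation constant and the exact functional form are illustrative assumptions, not the published circuit behavior:

```python
def pulse_frequency_hz(pressure_kpa: float, f_max: float = 200.0, p_half: float = 25.0) -> float:
    """Sublinear pressure-to-pulse-rate mapping that saturates at f_max (200 Hz),
    loosely mimicking a slow-adapting mechanoreceptor. p_half (the pressure at
    which the output reaches half of f_max) is an illustrative constant."""
    p = max(pressure_kpa, 0.0)
    return f_max * p / (p + p_half)

for p in (0, 5, 25, 100):    # light touch to firm grip (illustrative pressures, kPa)
    print(f"{p} kPa -> {pulse_frequency_hz(p):.1f} Hz")
```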

Stretchable skin with flexible artificial mechanoreceptors (credit: Bao Research Group, Stanford University)

Bao noted that the printed circuits of the new sensory system would make it easy to produce in large quantities. “We would like to make the circuits with stretchable materials in the future, to truly mimic skin,” Bao said. “Other sensations, like temperature sensing, would be very interesting to combine with touch sensing.”


Abstract of A skin-inspired organic digital mechanoreceptor

Human skin relies on cutaneous receptors that output digital signals for tactile sensing in which the intensity of stimulation is converted to a series of voltage pulses. We present a power-efficient skin-inspired mechanoreceptor with a flexible organic transistor circuit that transduces pressure into digital frequency signals directly. The output frequency ranges between 0 and 200 hertz, with a sublinear response to increasing force stimuli that mimics slow-adapting skin mechanoreceptors. The output of the sensors was further used to stimulate optogenetically engineered mouse somatosensory neurons of mouse cortex in vitro, achieving stimulated pulses in accordance with pressure levels. This work represents a step toward the design and use of large-area organic electronic skins with neural-integrated touch feedback for replacement limbs.