Playing a musical instrument could help restore brain health, research suggests

Tibetan singing bowl (credit: Baycrest Health Sciences)

A study involving playing a musical instrument, by neuroscientists at the Toronto-based Baycrest Rotman Research Institute and Stanford University, suggests ways to improve brain-rehabilitation methods.

In the study, published in the Journal of Neuroscience on May 24, 2017, the researchers asked young adults to listen to sounds from an unfamiliar musical instrument (a Tibetan singing bowl). Half of the subjects (the experimental group) were then asked to recreate the same sounds and rhythm by striking the bowl; the other half (the control group) were instead asked to recreate the sound by simply pressing a key on a computer keypad.

After listening to the sounds they created, subjects in the experimental group showed increased auditory-evoked P2 (P200) brain waves. This was significant because the P2 increase “occurred immediately, while in previous learning-by-listening studies, P2 increases occurred on a later day,” the researchers explained in the paper. The experimental group also had increased responsiveness of brain beta-wave oscillations and enhanced connectivity between auditory and sensorimotor cortices (areas) in the brain.

The brain changes were measured using magnetoencephalographic (MEG) recording, which is similar to EEG, but uses highly sensitive magnetic sensors.
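
For context on how an evoked response such as the P2 is extracted: whether recorded with EEG or MEG, the component emerges by averaging many trials time-locked to sound onset, so unrelated background activity cancels out. The snippet below is a toy illustration with synthetic data, not the study's analysis pipeline.

```python
# Minimal sketch (synthetic data; not the study's pipeline): an auditory evoked
# response such as the P2 is revealed by averaging many trials time-locked to
# the sound, so random background activity averages toward zero.
import numpy as np

fs = 1000                                       # sampling rate, Hz
t = np.arange(-0.1, 0.5, 1 / fs)                # epoch: -100 ms to +500 ms
p2 = 1.5 * np.exp(-((t - 0.2) ** 2) / (2 * 0.02 ** 2))  # idealized P2 peaking ~200 ms

rng = np.random.default_rng(0)
trials = p2 + rng.normal(scale=3.0, size=(200, t.size))  # 200 noisy single trials

evoked = trials.mean(axis=0)                    # averaging reveals the evoked wave

post = t > 0.1                                  # search window after 100 ms
peak_time_ms = t[post][np.argmax(evoked[post])] * 1000
print(f"P2-like peak at ~{peak_time_ms:.0f} ms, amplitude {evoked[post].max():.2f}")
```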

Immediate beneficial effects on the brain

“The results … provide a neurophysiological basis for the application of music making in motor rehabilitation [increasing the ability to move arms and legs] training,” the authors state in the paper. The findings support senior author Bernhard Ross’s research on using musical training to help stroke survivors rehabilitate motor movement in their upper bodies. Baycrest scientists also have a history of breakthroughs in understanding how a person’s musical background impacts their listening abilities and cognitive function as they age.

“This study was the first time we saw direct changes in the brain after one session, demonstrating that the action of creating music leads to a strong change in brain activity,” said Bernhard Ross, PhD, senior scientist at the Rotman Research Institute and senior author of the study.

“Music has been known to have beneficial effects on the brain, but there has been limited understanding into what about music makes a difference,” he added. “This is the first study demonstrating that learning the fine movement needed to reproduce a sound on an instrument changes the brain’s perception of sound in a way that is not seen when listening to music.”

The study’s next steps involve analyzing recovery by stroke patients with musical training compared to physiotherapy, and the impact of musical training on the brains of older adults. With additional funding, the study could explore developing musical training rehabilitation programs for other conditions that impact motor function, such as traumatic brain injury, and lead to hearing aids of the future, the researchers say.

The study received support from the Canadian Institutes of Health Research.


Abstract of Sound-making actions lead to immediate plastic changes of neuromagnetic evoked responses and induced beta-band oscillations during perception

Auditory and sensorimotor brain areas interact during the action-perception cycle of sound making. Neurophysiological evidence of a feedforward model of the action and its outcome has been associated with attenuation of the N1 wave of auditory evoked responses elicited by self-generated sounds, such as vocalization or playing a musical instrument. Moreover, neural oscillations at beta-band frequencies have been related to predicting the sound outcome after action initiation. We hypothesized that a newly learned action-perception association would immediately modify interpretation of the sound during subsequent listening. Nineteen healthy young adults (seven female, twelve male) participated in three magnetoencephalography (MEG) recordings while first passively listening to recorded sounds of a bell ringing, then actively playing the bell with a mallet, and then again listening to recorded sounds. Auditory cortex activity showed characteristic P1-N1-P2 waves. The N1 was attenuated during sound making, while P2 responses were unchanged. In contrast, P2 became larger when listening after sound making compared to the initial naïve listening. The P2 increase occurred immediately, while in previous learning-by-listening studies P2 increases occurred on a later day. Also, reactivity of beta-band oscillations as well as theta coherence between auditory and sensorimotor cortices was stronger in the second listening block. These changes were significantly larger than those observed in control participants (eight female, five male), who triggered recorded sounds by a keypress. We propose that P2 characterizes familiarity with sound objects, whereas beta-band oscillation signifies involvement of the action-perception cycle, and both measures objectively indicate functional neuroplasticity in auditory perceptual learning.

‘Wearable’ PET brain scanner enables studies of moving patients

Julie Brefczynski-Lewis, a neuroscientist at West Virginia University, places a helmet-like PET scanner on a research subject. The mobile scanner enables studies of human interaction, movement disorders, and more. (credit: West Virginia University)

Two scientists have developed a miniaturized positron emission tomography (PET) brain scanner that can be “worn” like a helmet.

The new Ambulatory Microdose Positron Emission Tomography (AMPET) scanner allows research subjects to stand and move around as the device scans, instead of having to lie completely still or be administered anesthesia — restrictions that make it impossible to find associations between movement and brain activity.

Conventional positron emission tomography (PET) scanners immobilize patients (credit: Jens Maus/CC)

The AMPET scanner was developed by Julie Brefczynski-Lewis, a neuroscientist at West Virginia University (WVU), and Stan Majewski, a physicist at WVU and now at the University of Virginia. It could make possible new psychological and clinical studies on how the brain functions when affected by diseases from epilepsy to addiction, and during ordinary and dysfunctional social interactions.

Helmet support prototype with weighted helmet, allowing for freedom of movement. The counterbalance currently supports up to 10 kg but can be upgraded. Digitizing electronics will be mounted to the support above the patient. (credit: Samantha Melroy et al./Sensors)

Because AMPET sits so close to the brain, it can also “catch” more of the photons stemming from the radiotracers used in PET than larger scanners can. That means researchers can administer a lower dose of radioactive material and still get a good biological snapshot. Catching more signals also allows AMPET to create higher resolution images than regular PET.

The AMPET idea was sparked by the Rat Conscious Animal PET (RatCAP) scanner for studying rats at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory. The scanner is a 250-gram ring that fits around the head of a rat, suspended by springs to support its weight and let the rat scurry about as the device scans. (credit: Brookhaven Lab)

The researchers plan to build a laboratory-ready version next.

Seeing more deeply into the brain

A patient or animal about to undergo a PET scan is injected with a low dose of a radiotracer — a radioactive form of a molecule that is regularly used in the body. These molecules emit anti-matter particles called positrons, which travel only a tiny distance through the body. As soon as one of these positrons meets an electron in biological tissue, the pair annihilates, converting its mass to energy. This energy takes the form of two high-energy light rays, called gamma photons, that shoot off in opposite directions. PET machines detect these photons and track their paths backward to their point of origin — the tracer molecule. By measuring levels of the tracer, for instance, doctors can map areas of high metabolic activity. Mapping of different tracers provides insight into different aspects of a patient’s health. (credit: Brookhaven Lab)
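
The back-tracing step described above can be illustrated with a toy reconstruction: each coincident photon pair defines a line of response (LOR), and summing many such lines over a pixel grid (simple backprojection) piles up counts near the tracer's location. This is a minimal, hypothetical sketch for illustration only; it is unrelated to the AMPET hardware or software.

```python
# Minimal sketch (illustrative only): simple 2-D backprojection of PET
# coincidence events. Each event is a pair of detector hits; the annihilation
# lies somewhere on the line of response joining them, so we add a count to
# every pixel that line crosses. Grid size and events are hypothetical.
import numpy as np

GRID = 128                      # reconstruction grid (pixels per side)
image = np.zeros((GRID, GRID))  # accumulated backprojection counts

def backproject(hit_a, hit_b, n_samples=256):
    """Add one line of response between two detector positions (pixel coords)."""
    ts = np.linspace(0.0, 1.0, n_samples)
    xs = hit_a[0] + ts * (hit_b[0] - hit_a[0])
    ys = hit_a[1] + ts * (hit_b[1] - hit_a[1])
    for x, y in zip(xs.astype(int), ys.astype(int)):
        if 0 <= x < GRID and 0 <= y < GRID:
            image[y, x] += 1

# A few coincidence events whose lines all pass near pixel (64, 64):
# the backprojected counts pile up around the true tracer location.
events = [((0, 64), (127, 64)), ((64, 0), (64, 127)), ((0, 0), (127, 127))]
for a, b in events:
    backproject(a, b)

print("hottest pixel (row, col):", np.unravel_index(image.argmax(), image.shape))
```

With only three events the hot spot already lands at the intersection of the lines; a real scanner accumulates millions of such events to form an image.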

PET scans allow researchers to see farther into the body than other imaging tools. This lets AMPET reach deep neural structures while the research subjects are upright and moving. “A lot of the important things that are going on with emotion, memory, and behavior are way deep in the center of the brain: the basal ganglia, hippocampus, amygdala,” Brefczynski-Lewis notes.

“Currently we are doing tests to validate the use of virtual reality environments in future experiments,” she said. In this virtual reality, volunteers would read from a script designed to make the subject angry, for example, as his or her brain is scanned.

In the medical sphere, the scanning helmet could help explain what happens during drug treatments. It could also shed light on movement disorders and conditions such as epilepsy, watching what happens in the brain during a seizure, or be used to study the sub-population of Parkinson’s patients who have great difficulty walking but can ride a bicycle.

The RatCAP project at Brookhaven was funded by the DOE Office of Science. Brookhaven Lab physicists use technology similar to PET scanners at the Relativistic Heavy Ion Collider (RHIC), a DOE Office of Science User Facility for nuclear physics research, where they must track the particles that fly out of near-light-speed collisions of charged nuclei. PET research at the Lab dates back to the early 1960s and includes the creation of the first single-plane scanner as well as various tracer molecules.


Abstract of Development and Design of Next-Generation Head-Mounted Ambulatory Microdose Positron-Emission Tomography (AM-PET) System

Several applications exist for a whole brain positron-emission tomography (PET) brain imager designed as a portable unit that can be worn on a patient’s head. Enabled by improvements in detector technology, a lightweight, high performance device would allow PET brain imaging in different environments and during behavioral tasks. Such a wearable system that allows the subjects to move their heads and walk—the Ambulatory Microdose PET (AM-PET)—is currently under development. This imager will be helpful for testing subjects performing selected activities such as gestures, virtual reality activities and walking. The need for this type of lightweight mobile device has led to the construction of a proof of concept portable head-worn unit that uses twelve silicon photomultiplier (SiPM) PET module sensors built into a small ring which fits around the head. This paper is focused on the engineering design of mechanical support aspects of the AM-PET project, both of the current device as well as of the coming next-generation devices. The goal of this work is to optimize design of the scanner and its mechanics to improve comfort for the subject by reducing the effect of weight, and to enable diversification of its applications amongst different research activities.

Best of MOOGFEST 2017

The Moogfest four-day festival in Durham, North Carolina next weekend (May 18–21) explores the future of technology, art, and music. Here are some of the sessions that may be especially interesting to KurzweilAI readers. Full #Moogfest2017 Program Lineup.

Culture and Technology

(credit: Google)

The Magenta team from Google Brain will bring its work to life through an interactive demo plus workshops on the creation of art and music through artificial intelligence.

Magenta is a Google Brain project that asks and answers the questions, “Can we use machine learning to create compelling art and music? If so, how? If not, why not?” It is first a research project to advance the state of the art and creativity in music, video, image, and text generation, and second an effort to build a community of artists, coders, and machine-learning researchers.

The interactive demo will go through an improvisation along with the machine-learning models, much like the AI Jam Session. The workshop will cover how to use the open-source library to build and train models and interact with them via MIDI.

Technical reference: Magenta: Music and Art Generation with Machine Intelligence


TEDx Talks | Music and Art Generation using Machine Learning | Curtis Hawthorne | TEDxMountainViewHighSchool


Miguel Nicolelis (credit: Duke University)

Miguel A. L. Nicolelis, MD, PhD will discuss state-of-the-art research on brain-machine interfaces, which make it possible for the brains of primates to interact directly and in a bi-directional way with mechanical, computational and virtual devices. He will review a series of recent experiments using real-time computational models to investigate how ensembles of neurons encode motor information. These experiments have revealed that brain-machine interfaces can be used not only to study fundamental aspects of neural ensemble physiology, but they can also serve as an experimental paradigm aimed at testing the design of novel neuroprosthetic devices.

He will also explore research that raises the hypothesis that the properties of a robot arm, or other neurally controlled tools, can be assimilated by brain representations as if they were extensions of the subject’s own body.

Theme: Transhumanism


Dervishes at Royal Opera House with Matthew Herbert (credit: ?)

Andy Cavatorta (MIT Media Lab) will present a conversation and workshop on a range of topics including the four-century history of music and performance at the forefront of technology. Known as the inventor of Bjork’s Gravity Harp, he has collaborated on numerous projects to create instruments using new technologies that coerce expressive music out of fire, glass, gravity, tiny vortices, underwater acoustics, and more. His instruments explore technologically mediated emotion and opportunities to express the previously inexpressible.

Theme: Instrument Design


Berklee College of Music

Michael Bierylo (credit: Moogfest)

Michael Bierylo will present his Modular Synthesizer Ensemble alongside the Csound workshops from fellow Berklee Professor Richard Boulanger.

Csound is a sound and music computing system originally developed at the MIT Media Lab. It can most accurately be described as a compiler: software that takes textual instructions in the form of source code and converts them into object code, a stream of numbers representing audio. Although it has a strong tradition as a tool for composing electro-acoustic pieces, it is used by composers and musicians for any kind of music that can be made with the help of a computer. It has traditionally been used in a non-interactive, score-driven context, but nowadays it is mostly used in real time.
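
As a toy illustration of that compiler idea (this is not Csound itself, just a sketch of the concept), a few lines of a textual "score" can be rendered into a stream of numbers representing audio:

```python
# Minimal sketch of the idea described above (not Csound): a tiny text "score"
# is compiled into a stream of numbers representing audio samples.
import numpy as np

SAMPLE_RATE = 44100

# Hypothetical score format: one note per line -> "start_sec dur_sec freq_hz amplitude"
score = """
0.0 0.5 440 0.3
0.5 0.5 660 0.3
"""

def render(score_text, total_seconds=1.0):
    out = np.zeros(int(SAMPLE_RATE * total_seconds))
    for line in score_text.strip().splitlines():
        start, dur, freq, amp = map(float, line.split())
        n = int(dur * SAMPLE_RATE)
        t = np.arange(n) / SAMPLE_RATE
        i = int(start * SAMPLE_RATE)
        out[i:i + n] += amp * np.sin(2 * np.pi * freq * t)   # simple sine "instrument"
    return out

samples = render(score)
print(samples[:5])   # the "object code": raw audio sample values
```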

Michael Bierylo serves as the Chair of the Electronic Production and Design Department, which offers students the opportunity to combine performance, composition, and orchestration with computer, synthesis, and multimedia technology in order to explore the limitless possibilities of musical expression.


Berklee College of Music | Electronic Production and Design (EPD) at Berklee College of Music


Chris Ianuzzi (credit: William Murray)

Chris Ianuzzi, a synthesist of Ciani-Musica and past collaborator with pioneers such as Vangelis and Peter Baumann, will present a daytime performance and sound-exploration workshops with the B11 brain interface and the NeuroSky brainwave-sensing headset.

Theme: Hacking Systems


Argus Project (credit: Moogfest)

The Argus Project from Gan Golan and Ron Morrison of NEW INC is a wearable sculpture, video installation and counter-surveillance training, which directly intersects the public debate over police accountability. According to ancient Greek myth, Argus Panoptes was a giant with 100 eyes who served as an eternal watchman, both for – and against – the gods.

By embedding an array of camera “eyes” into a full body suit of tactical armor, the Argus exo-suit creates a “force field of accountability” around the bodies of those targeted. While some see filming the police as a confrontational or subversive act, it is, in fact, a deeply democratic one. The act of bearing witness to the actions of the state — and showing them to the world — strengthens our society and institutions. The Argus Project is not so much about an individual hero as about the Citizen Body as a whole. In between music acts, a presentation about the project will be part of the Protest Stage.

Argus Exo Suit Design (credit: Argus Project)

Theme: Protest


Found Sound Nation (credit: Moogfest)

Democracy’s Exquisite Corpse from Found Sound Nation and Moogfest, an immersive installation housed within a completely customized geodesic dome, is a multi-person instrument and music-based round-table discussion. Artists, activists, innovators, festival attendees, and community members engage in a deeply interactive exploration of sound as a living ecosystem and primal form of communication.

Within the dome are 9 unique stations, each with its own distinct set of analog or digital sound-making devices. Each person’s set of devices is chained to the person sitting next to them, so that everybody’s musical actions and choices affect the person next to them, and thus affect everyone else at the table. This instrument is a unique experiment in how technology and the instinctive language of sound can play a role in the shaping of a truly collective unconscious.

Theme: Protest


(credit: Land Marking)

Land Marking, from Halsey Burgund and Joe Zibkow of MIT Open Doc Lab, is a mobile-based music/activist project that augments the physical landscape of protest events with a layer of location-based audio contributed by event participants in real-time. The project captures the audioscape and personal experiences of temporary, but extremely important, expressions of discontent and desire for change.

Land Marking will be teaming up with the Protest Stage to allow Moogfest attendees to contribute their thoughts on protests and tune into an evolving mix of commentary and field recordings from others throughout downtown Durham. Land Marking is available on select apps.

Theme: Protest


Taeyoon Choi (credit: Moogfest)

Taeyoon Choi, an artist and educator based in New York and Seoul, will be leading a Sign Making Workshop as one of the Future Thought leaders on the Protest Stage. His art practice involves performance, electronics, drawings, and storytelling that often lead to interventions in public spaces.

Taeyoon will also participate in the Handmade Computer workshop to build a 1 Bit Computer, which demonstrates how binary numbers and boolean logic can be configured to create more complex components. On their own these components aren’t capable of computing anything particularly useful, but a computer is said to be Turing complete if it includes all of them, at which point it has the extraordinary ability to carry out any possible computation. He has participated in numerous workshops at festivals around the world, from Korea to Scotland, but primarily at the School for Poetic Computation (SFPC) — an artist-run school co-founded by Taeyoon in NYC. Taeyoon Choi’s Handmade Computer projects.
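
For a concrete sense of how simple components compose into something that computes, here is a minimal sketch (not taken from the workshop materials): a single NAND gate is enough to build the other Boolean gates, then a full adder, and then a ripple-carry adder for binary numbers.

```python
# Minimal sketch (illustrative, not from the workshop): building up from a
# single Boolean gate to an adder, in the spirit of the 1-bit-computer idea
# that simple components compose into more complex ones.

def nand(a, b):           # one primitive gate
    return 1 - (a & b)

def not_(a): return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b): return nand(not_(a), not_(b))
def xor(a, b): return and_(or_(a, b), nand(a, b))

def full_adder(a, b, carry_in):
    """Add three bits; return (sum_bit, carry_out)."""
    s1 = xor(a, b)
    return xor(s1, carry_in), or_(and_(a, b), and_(s1, carry_in))

def add(x_bits, y_bits):
    """Ripple-carry addition of two little-endian bit lists."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 3 (bits [1, 1, 0]) + 5 (bits [1, 0, 1]) = 8 -> prints [0, 0, 0, 1]
print(add([1, 1, 0], [1, 0, 1]))
```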

Theme: Protest


(credit: Moogfest)

irlbb, from Vivan Thi Tang, connects individuals after IRL (in real life) interactions and creates community that otherwise would have been missed. With a customized beta of the app for Moogfest 2017, irlbb presents a unique engagement opportunity.

Theme: Protest


Ryan Shaw and Michael Clamann (credit: Duke University)

Duke Professors Ryan Shaw and Michael Clamann will lead a daily science pub talk series on topics that include future medicine, humans and autonomy, and quantum physics.

Ryan is a pioneer in mobile health — the collection and dissemination of information using mobile and wireless devices for healthcare — working with faculty at Duke’s Schools of Nursing, Medicine, and Engineering to integrate mobile technologies into first-generation care delivery systems. These technologies afford researchers, clinicians, and patients a rich stream of real-time information about individuals’ biophysical and behavioral health in everyday environments.

Michael Clamann is a Senior Research Scientist in the Humans and Autonomy Lab (HAL) within the Robotics Program at Duke University, an Associate Director at UNC’s Collaborative Sciences Center for Road Safety, and the Lead Editor for Robotics and Artificial Intelligence for Duke’s SciPol science policy tracking website. In his research, he works to better understand the complex interactions between robots and people and how they influence system effectiveness and safety.

Theme: Hacking Systems


Dave Smith (credit: Moogfest)

Dave Smith, the iconic instrument innovator and Grammy-winner, will lead Moogfest’s Instruments Innovators program and host a headlining conversation with a leading artist revealed in next week’s release. He will also host a masterclass.

As the original founder of Sequential Circuits in the mid-’70s, Dave designed the Prophet-5 — the world’s first fully programmable polyphonic synth and the first musical instrument with an embedded microprocessor. From the late 1980s through the early 2000s he worked to develop next-level synths with the likes of the Audio Engineering Society, Yamaha, Korg, and Seer Systems (for Intel). Realizing the limitations of software, Dave returned to hardware and started Dave Smith Instruments (DSI), which released the Evolver hybrid analog/digital synthesizer in 2002. Since then the DSI product lineup has grown to include the Prophet-6, OB-6, Pro 2, Prophet 12, and Prophet ’08 synthesizers, as well as the Tempest drum machine, co-designed with friend and fellow electronic instrument designer Roger Linn.

Theme: Future Thought


Dave Rossum, Gerhard Behles, and Lars Larsen (credit: Moogfest)

E-mu Systems founder Dave Rossum, Ableton CEO Gerhard Behles, and LZX founder Lars Larsen will take part in conversations as part of the Instruments Innovators program.

Driven by the creative and technological vision of electronic music pioneer Dave Rossum, Rossum Electro-Music creates uniquely powerful tools for electronic music production and is the culmination of Dave’s 45 years designing industry-defining instruments and transformative technologies. Starting with his co-founding of E-mu Systems, Dave provided the technological leadership that resulted in what many consider the premier professional modular synthesizer system, the E-mu Modular System, which became an instrument of choice for numerous recording studios, educational institutions, and artists as diverse as Frank Zappa, Leon Russell, and Hans Zimmer. In the following years, he worked on developing the Emulator keyboards and racks (such as the Emulator II), Emax samplers, the legendary SP-12 and SP-1200 sampling drum machines, the Proteus sound modules, and the Morpheus Z-Plane Synthesizer.

Gerhard Behles co-founded Ableton in 1999 with Robert Henke and Bernd Roggendorf. Prior to this he had been part of electronic music act “Monolake” alongside Robert Henke, but his interest in how technology drives the way music is made diverted his energy towards developing music software. He was fascinated by how dub pioneers such as King Tubby ‘played’ the recording studio, and began to shape this concept into a music instrument that became Ableton Live.

LZX Industries was born in 2008 out of the Synth DIY scene when Lars Larsen of Denton, Texas and Ed Leckie of Sydney, Australia began collaborating on the development of a modular video synthesizer. At that time, analog video synthesizers were inaccessible to artists outside of a handful of studios and universities. It was their continuing mission to design creative video instruments that (1) stay within the financial means of the artists who wish to use them, (2) honor and preserve the legacy of 20th century toolmakers, and (3) expand the boundaries of possibility. Since 2015, LZX Industries has focused on the research and development of new instruments, user support, and community building.


Science

ATLAS detector (credit: Kaushik De, Brookhaven National Laboratory)

ATLAS @ CERN. The full ATLAS @ CERN program will be led by Duke University Professors Mark Kruse and Katherine Hayles along with ATLAS @ CERN physicist Steven Goldfarb.

The program will include a “Virtual Visit” to the Large Hadron Collider — the world’s largest and most powerful particle accelerator — via a live video session, a half-day workshop analyzing and understanding LHC data, and a “Science Fiction versus Science Fact” live debate.

The ATLAS experiment is designed to exploit the full discovery potential and the huge range of physics opportunities that the LHC provides. Physicists test the predictions of the Standard Model, which encapsulates our current understanding of what the building blocks of matter are and how they interact – resulting in such discoveries as the Higgs boson. By pushing the frontiers of knowledge, it seeks to answer fundamental questions such as: What are the basic building blocks of matter? What are the fundamental forces of nature? Could there be a greater underlying symmetry to our universe?

“Atlas Boogie” (referencing the Higgs boson):

ATLAS Experiment | The ATLAS Boogie

(credit: Kate Shaw)

Kate Shaw (ATLAS @ CERN), PhD, in her keynote, titled “Exploring the Universe and Impacting Society Worldwide with the Large Hadron Collider (LHC) at CERN,” will dive into the present-day and future impacts of the LHC on society. She will also share findings from the work she has done promoting particle physics in developing countries through her Physics without Frontiers program.


Theme: Future Thought


Arecibo (credit: Joe Davis/MIT)

In his keynote, Joe Davis (MIT) will trace the history of several projects centered on ideas about extraterrestrial communications that have given rise to new scientific techniques and inspired new forms of artistic practice. He will present his “swansong” — an interstellar message that is intended explicitly for human beings rather than for aliens.

Theme: Future Thought


Immortality bus (credit: Zoltan Istvan)

Zoltan Istvan (Immortality Bus), the former U.S. Presidential candidate for the Transhumanist Party and leader of the Transhumanist movement, will explore the path to immortality through science, with the purpose of using science and technology to radically enhance the human being and the human experience. His futurist work has reached over 100 million people, some of it due to the Immortality Bus, which he recently drove across America with embedded journalists aboard. The bus is shaped and looks like a giant coffin, to raise life-extension awareness.


Zoltan Istvan | 1-min Highlight Video for Zoltan Istvan Transhumanism Documentary IMMORTALITY OR BUST

Theme: Transhumanism/Biotechnology


(credit: Moogfest)

Marc Fleury and members of the Church of Space — Park Krausen, Ingmar Koch, and Christ of Veillon — return to Moogfest for a second year to present an expanded and varied program with daily explorations in modern physics with music and the occult, Illuminati performances, theatrical rituals to ERIS, and a Sunday Mass in their own dedicated “Church” venue.

Theme: Techno-Shamanism

#Moogfest2017

Virtual-reality therapy found effective for treating phobias and PTSD

A soldier using “Bravemind” VR therapy (credit: USC Institute for Creative Technologies)

Virtual reality (VR) technology can be an effective part of treatment for phobias, post-traumatic stress disorder (PTSD) in combat veterans, and other mental health conditions, according to an open-access research review in the May/June issue of the Harvard Review of Psychiatry.

“VR-based exposure therapy” (VRE) has been found effective for treating panic disorder, acute and chronic pain, addictions (including smoking), social anxiety disorder, claustrophobia, agoraphobia (fear of open spaces), eating disorders, “generalized anxiety disorder” (where daily functioning becomes difficult), obsessive-compulsive disorder, and even schizophrenia.

iPhone VR Therapy System, including apps (lower right) (credit: Virtually Better, Inc.)

VR allows providers to “create computer-generated environments in a controlled setting, which can be used to create a sense of presence and immersion in the feared environment for individuals suffering from anxiety disorders,” says lead author Jessica L. Maples-Keller, PhD, of the University of Georgia.

One dramatic example is progressive exposure to frightening situations in patients with specific phobias, such as fear of flying. This typically includes eight steps, from walking through an airport terminal to flying during a thunderstorm with turbulence, including specific stimuli linked to these symptoms (such as the sound of the cabin door closing). The patient can virtually experience repeated takeoffs and landings without going on an actual flight.

VR can simulate exposures that would be costly or impractical to recreate in real life, such as combat conditions, and lets the provider control the “dose” and specific aspects of the exposure environment.

“A VR system will typically include a head-mounted display and a platform (for the patients) and a computer with two monitors — one for the provider’s interface in which he or she constructs the exposure in real time, and another for the provider’s view of the patient’s position in the VR environment,” the researchers note.

However, research so far on VR applications has had limitations, including small numbers of patients and lack of comparison groups; and mental health care providers will need specific training, the authors warn.

The senior author of the paper, Barbara O. Rothbaum, PhD, disclosed one advisory board payment from Genentech and equity in Virtually Better, Inc., which creates virtual reality products.


Abstract of The Use of Virtual Reality Technology in the Treatment of Anxiety and Other Psychiatric Disorders

Virtual reality (VR) allows users to experience a sense of presence in a computer-generated, three-dimensional environment. Sensory information is delivered through a head-mounted display and specialized interface devices. These devices track head movements so that the movements and images change in a natural way with head motion, allowing for a sense of immersion. VR, which allows for controlled delivery of sensory stimulation via the therapist, is a convenient and cost-effective treatment. This review focuses on the available literature regarding the effectiveness of incorporating VR within the treatment of various psychiatric disorders, with particular attention to exposure-based intervention for anxiety disorders. A systematic literature search was conducted in order to identify studies implementing VR-based treatment for anxiety or other psychiatric disorders. This article reviews the history of the development of VR-based technology and its use within psychiatric treatment, the empirical evidence for VR-based treatment, and the benefits for using VR for psychiatric research and treatment. It also presents recommendations for how to incorporate VR into psychiatric care and discusses future directions for VR-based treatment and clinical research.

Deep learning-based bionic hand grasps objects automatically

British biomedical engineers have developed a new generation of intelligent prosthetic limbs that allows the wearer to reach for objects automatically, without thinking — just like a real hand.

The hand’s camera takes a picture of the object in front of it, assesses its shape and size, picks the most appropriate grasp, and triggers a series of movements in the hand — all within milliseconds.

The research finding was published Wednesday May 3 in an open-access paper in the Journal of Neural Engineering.

A deep learning-based artificial vision and grasp system

Biomedical engineers at Newcastle University and associates developed a convolutional neural network (CNN), trained it with images of more than 500 graspable objects, and taught it to recognize the grip needed for different types of objects.

Object recognition (top) vs. grasp recognition (bottom) (credit: Ghazal Ghazaei/Journal of Neural Engineering)

Grouping objects by size, shape and orientation, according to the type of grasp that would be needed to pick them up, the team programmed the hand to perform four different grasps: palm wrist neutral (such as when you pick up a cup); palm wrist pronated (such as picking up the TV remote); tripod (thumb and two fingers), and pinch (thumb and first finger).

“We would show the computer a picture of, for example, a stick,” explains lead author Ghazal Ghazaei. “But not just one picture; many images of the same stick from different angles and orientations, even in different light and against different backgrounds, and eventually the computer learns what grasp it needs to pick that stick up.”
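
The training setup the team describes (many labeled views per object, four grasp classes) maps onto a standard image-classification workflow. The sketch below assumes PyTorch and is not the authors' code; the network architecture, image size, and dummy data are placeholders.

```python
# Minimal sketch (assuming PyTorch; not the authors' implementation) of a
# four-class grasp classifier: object images in, one of four grasp types out.
import torch
import torch.nn as nn

GRASPS = ["pinch", "tripod", "palmar_wrist_neutral", "palmar_wrist_pronated"]

class GraspNet(nn.Module):
    def __init__(self, n_classes=len(GRASPS)):
        super().__init__()
        self.features = nn.Sequential(          # small convolutional stack
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                        # x: (batch, 3, H, W)
        return self.classifier(self.features(x).flatten(1))

model = GraspNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One hypothetical training step on a dummy batch of 8 RGB images (96x96)
images = torch.randn(8, 3, 96, 96)
labels = torch.randint(0, len(GRASPS), (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print("predicted grasp:", GRASPS[model(images[:1]).argmax(dim=1).item()])
```

In practice the network would be trained on the real labeled object views and its predicted class would select which of the four pre-programmed grips the hand performs.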

A block diagram representation of the method (credit: Ghazal Ghazaei/Journal of Neural Engineering)

Current prosthetic hands are controlled directly via the user’s myoelectric signals (electrical activity of the muscles recorded from the skin surface of the stump). That takes learning, practice, concentration and, crucially, time.

A small number of amputees have already trialed the new technology. After training, subjects successfully picked up and moved the target objects with an overall success of up to 88%. Now the Newcastle University team is working with experts at Newcastle upon Tyne Hospitals NHS Foundation Trust to offer the “hands with eyes” to patients at Newcastle’s Freeman Hospital.

A future bionic hand

The work is part of a larger research project to develop a bionic hand that can sense pressure and temperature and transmit the information back to the brain.

Led by Newcastle University and involving experts from the universities of Leeds, Essex, Keele, Southampton and Imperial College London, the aim is to develop novel electronic devices that connect neural networks to the forearm to allow two-way communications with the brain.

The research is funded by the Engineering and Physical Sciences Research Council (EPSRC).


Abstract of Deep learning-based artificial vision for grasp classification in myoelectric hands

Objective. Computer vision-based assistive technology solutions can revolutionise the quality of care for people with sensorimotor disorders. The goal of this work was to enable trans-radial amputees to use a simple, yet efficient, computer vision system to grasp and move common household objects with a two-channel myoelectric prosthetic hand. Approach. We developed a deep learning-based artificial vision system to augment the grasp functionality of a commercial prosthesis. Our main conceptual novelty is that we classify objects with regards to the grasp pattern without explicitly identifying them or measuring their dimensions. A convolutional neural network (CNN) structure was trained with images of over 500 graspable objects. For each object, 72 images, at 5° intervals, were available. Objects were categorised into four grasp classes, namely: pinch, tripod, palmar wrist neutral and palmar wrist pronated. The CNN setting was first tuned and tested offline and then in realtime with objects or object views that were not included in the training set. Main results. The classification accuracy in the offline tests reached 85% for the seen and 75% for the novel objects; reflecting the generalisability of grasp classification. We then implemented the proposed framework in realtime on a standard laptop computer and achieved an overall score of 84% in classifying a set of novel as well as seen but randomly-rotated objects. Finally, the system was tested with two trans-radial amputee volunteers controlling an i-limb Ultra™ prosthetic hand and a Motion Control™ prosthetic wrist; augmented with a webcam. After training, subjects successfully picked up and moved the target objects with an overall success of up to 88%. In addition, we show that with training, subjects’ performance improved in terms of time required to accomplish a block of 24 trials despite a decreasing level of visual feedback. Significance. The proposed design constitutes a substantial conceptual improvement for the control of multi-functional prosthetic hands. We show for the first time that deep-learning based computer vision systems can enhance the grip functionality of myoelectric hands considerably.

 

New nuclear magnetic resonance technique offers ‘molecular window’ for live disease diagnosis

New nuclear magnetic resonance (NMR) system for molecular diagnosis (credit: University of Toronto Scarborough)

University of Toronto Scarborough researchers have developed a new “molecular window” technology based on nuclear magnetic resonance (NMR) that can look inside a living system to get a high-resolution profile of which specific molecules are present, and extract a full metabolic profile.

“Getting a sense of which molecules are in a tissue sample is important if you want to know if it’s cancerous, or if you want to know if certain environmental contaminants are harming cells inside the body,” says Professor Andre Simpson, who led research in developing the new technique.*

An NMR spectrometer generates a powerful magnetic field that causes atomic nuclei to absorb and re-emit energy in distinct patterns, revealing a unique molecular signature — in this example: the chemical ethanol. (credit: adapted from the Bruker BioSpin “How NMR Works” video at www.theresonance.com/nmr-know-how)

Simpson says there’s great medical potential for this new technique, since it can be adapted to work on existing magnetic resonance imaging (MRI) systems found in hospitals. “It could have implications for disease diagnosis and a deeper understanding of how important biological processes work,” by targeting specific biomarker molecules that are unique to specific diseased tissue.

The new approach could detect these signatures without resorting to surgery and could determine, for example, whether a growth is cancerous or benign directly from the MRI alone.

The technique could also provide highly detailed information on how the brain works, revealing the actual chemicals involved in a particular response. “It could mark an important step in unraveling the biochemistry of the brain,” says Simpson.

Overcoming magnetic distortion

Until now, traditional NMR techniques haven’t been able to provide high-resolution profiles of living organisms because of magnetic distortions from the tissue itself.  Simpson and his team were able to overcome this problem by creating tiny communication channels based on “long-range dipole interactions” between molecules.

The next step for the research is to test it on human tissue samples, says Simpson. Since the technique detects all cellular metabolites (substances such as glucose) equally, there’s also potential for non-targeted discovery.

“Since you can see metabolites in a sample that you weren’t able to see before, you can now identify molecules that may indicate there’s a problem,” he explains. “You can then determine whether you need further testing or surgery. So the potential for this technique is truly exciting.”

The research results are published in the journal Angewandte Chemie.

* Simpson has been working on perfecting the technique for more than three years with colleagues at Bruker BioSpin, a scientific instruments company that specializes in developing NMR technology. The technique, called “in-phase intermolecular single quantum coherence” (IP-iSQC), is based on some unexpected scientific concepts that were discovered in 1995, which at the time were described as impossible and “crazed” by many researchers. The technique developed by Simpson and his team builds upon these early discoveries. The work was supported by Mark Krembil of the Krembil Foundation and the Natural Sciences and Engineering Research Council of Canada (NSERC).


Abstract of In-Phase Ultra High-Resolution In Vivo NMR

Although current NMR techniques allow organisms to be studied in vivo, magnetic susceptibility distortions, which arise from inhomogeneous distributions of chemical moieties, prevent the acquisition of high-resolution NMR spectra. Intermolecular single quantum coherence (iSQC) is a technique that breaks the sample’s spatial isotropy to form long range dipolar couplings, which can be exploited to extract chemical shift information free of perturbations. While this approach holds vast potential, present practical limitations include radiation damping, relaxation losses, and non-phase sensitive data. Herein, these drawbacks are addressed, and a new technique termed in-phase iSQC (IP-iSQC) is introduced. When applied to a living system, high-resolution NMR spectra, nearly identical to a buffer extract, are obtained. The ability to look inside an organism and extract a high-resolution metabolic profile is profound and should find applications in fields in which metabolism or in vivo processes are of interest.

Quadriplegia patient uses brain-computer interface to move his arm by just thinking

Bill Kochevar, who was paralyzed below his shoulders in a bicycling accident eight years ago, is the first person with quadriplegia to have arm and hand movements restored without robot help (credit: Case Western Reserve University/Cleveland FES Center)

A research team led by Case Western Reserve University has developed the first implanted brain-recording and muscle-stimulating system to restore arm and hand movements for quadriplegic patients.*

In a proof-of-concept experiment, the system included a brain-computer interface with recording electrodes implanted under Kochevar’s skull, and a functional electrical stimulation (FES) system that activated his arm and hand — reconnecting his brain to his paralyzed muscles.

The research was part of the ongoing BrainGate2 pilot clinical trial being conducted by a consortium of academic and other institutions to assess the safety and feasibility of the implanted brain-computer interface (BCI) system in people with paralysis. Previous BrainGate designs required a robot arm.

In 2012 research, Jan Scheuermann, who has quadriplegia, was able to feed herself using a brain-machine interface and a computer-driven robot arm (credit: UPMC)

Kochevar’s eight years of muscle atrophy first required rehabilitation. The researchers exercised Kochevar’s arm and hand with cyclical electrical-stimulation patterns. Over 45 weeks, his strength, range of motion, and endurance improved. As he practiced movements, the researchers adjusted stimulation patterns to further his abilities.

To prepare him to use his arm again, Kochevar learned how to use his own brain signals to move a virtual-reality arm on a computer screen. The team then implanted the FES system’s 36 electrodes that animate muscles in the upper and lower arm, allowing him to move the actual arm.

Kochevar can now make each joint in his right arm move individually. Or, just by thinking about a task such as feeding himself or getting a drink, the muscles are activated in a coordinated fashion.

Neural activity (generated when Kochevar imagines movement of his arm and hand) is recorded from two 96-channel microelectrode arrays implanted in the motor cortex, on the surface of the brain. The implanted brain-computer interface translates the recorded brain signals into specific command signals that determine the amount of stimulation to be applied to each functional electrical stimulation (FES) electrode in the hand, wrist, arm, elbow and shoulder, and to a mobile arm support. (credit: A Bolu Ajiboye et al./The Lancet)
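
As a rough illustration of the decoding step in the caption (a hypothetical sketch, not the study's algorithm), a decoder can be as simple as a linear read-out that maps a vector of firing rates to stimulation commands. The weight matrix here is random; in practice it would be fit to recorded neural data during calibration.

```python
# Minimal sketch (illustrative only; not the study's decoder): a linear
# decoder maps a vector of neural firing rates to intended movement commands,
# which are then scaled into stimulation levels for the FES electrodes.
import numpy as np

N_CHANNELS = 192        # two 96-channel arrays, per the caption
N_COMMANDS = 5          # hypothetical: hand, wrist, elbow, shoulder, arm support

rng = np.random.default_rng(1)
W = rng.normal(scale=0.05, size=(N_COMMANDS, N_CHANNELS))  # decoder weights (would be fit to data)

def decode(firing_rates):
    """Firing rates (spikes/s per channel) -> stimulation commands in [0, 1]."""
    command = W @ firing_rates                 # linear read-out
    return np.clip(command, 0.0, 1.0)          # clamp to a valid stimulation range

rates = rng.poisson(lam=10, size=N_CHANNELS).astype(float)  # one time bin of activity
print(decode(rates))
```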

“Our research is at an early stage, but we believe that this neuro-prosthesis could offer individuals with paralysis the possibility of regaining arm and hand functions to perform day-to-day activities, offering them greater independence,” said lead author Dr Bolu Ajiboye, Case Western Reserve University. “So far, it has helped a man with tetraplegia to reach and grasp, meaning he could feed himself and drink. With further development, we believe the technology could give more accurate control, allowing a wider range of actions, which could begin to transform the lives of people living with paralysis.”

Work is underway to make the brain implant wireless, and the investigators are improving decoding and stimulation patterns needed to make movements more precise. Fully implantable FES systems have already been developed and are also being tested in separate clinical research.

A study of the work was published in The Lancet on March 28, 2017.

Writing in a linked Comment to The Lancet, Steve Perlmutter, M.D., University of Washington, said: “The goal is futuristic: a paralysed individual thinks about moving her arm as if her brain and muscles were not disconnected, and implanted technology seamlessly executes the desired movement… This study is groundbreaking as the first report of a person executing functional, multi-joint movements of a paralysed limb with a motor neuro-prosthesis. However, this treatment is not nearly ready for use outside the lab. The movements were rough and slow and required continuous visual feedback, as is the case for most available brain-machine interfaces, and had restricted range due to the use of a motorised device to assist shoulder movements… Thus, the study is a proof-of-principle demonstration of what is possible, rather than a fundamental advance in neuro-prosthetic concepts or technology. But it is an exciting demonstration nonetheless, and the future of motor neuro-prosthetics to overcome paralysis is brighter.”

* The study was funded by the US National Institutes of Health and the US Department of Veterans Affairs. It was conducted by scientists from Case Western Reserve University, Department of Veterans Affairs Medical Center, University Hospitals Cleveland Medical Center, MetroHealth Medical Center, Brown University, Massachusetts General Hospital, Harvard Medical School, Wyss Center for Bio and Neuroengineering. The investigational BrainGate technology was initially developed in the Brown University laboratory of John Donoghue, now the founding director of the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland. The implanted recording electrodes are known as the Utah array, originally designed by Richard Normann, Emeritus Distinguished Professor of Bioengineering at the University of Utah. The report in Lancet is the result of a long-running collaboration between Kirsch, Ajiboye and the multi-institutional BrainGate consortium. Leigh Hochberg, a neurologist and neuroengineer at Massachusetts General Hospital, Brown University and the VA RR&D Center for Neurorestoration and Neurotechnology in Providence, Rhode Island, directs the pilot clinical trial of the BrainGate system and is a study co-author.


Case | Man with quadriplegia employs injury bridging technologies to move again – just by thinking


Abstract of Restoration of reaching and grasping movements through brain-controlled muscle stimulation in a person with tetraplegia: a proof-of-concept demonstration

Background: People with chronic tetraplegia, due to high-cervical spinal cord injury, can regain limb movements through coordinated electrical stimulation of peripheral muscles and nerves, known as functional electrical stimulation (FES). Users typically command FES systems through other preserved, but unrelated and limited in number, volitional movements (eg, facial muscle activity, head movements, shoulder shrugs). We report the findings of an individual with traumatic high-cervical spinal cord injury who coordinated reaching and grasping movements using his own paralysed arm and hand, reanimated through implanted FES, and commanded using his own cortical signals through an intracortical brain–computer interface (iBCI).

Methods: We recruited a participant into the BrainGate2 clinical trial, an ongoing study that obtains safety information regarding an intracortical neural interface device, and investigates the feasibility of people with tetraplegia controlling assistive devices using their cortical signals. Surgical procedures were performed at University Hospitals Cleveland Medical Center (Cleveland, OH, USA). Study procedures and data analyses were performed at Case Western Reserve University (Cleveland, OH, USA) and the US Department of Veterans Affairs, Louis Stokes Cleveland Veterans Affairs Medical Center (Cleveland, OH, USA). The study participant was a 53-year-old man with a spinal cord injury (cervical level 4, American Spinal Injury Association Impairment Scale category A). He received two intracortical microelectrode arrays in the hand area of his motor cortex, and 4 months and 9 months later received a total of 36 implanted percutaneous electrodes in his right upper and lower arm to electrically stimulate his hand, elbow, and shoulder muscles. The participant used a motorised mobile arm support for gravitational assistance and to provide humeral abduction and adduction under cortical control. We assessed the participant’s ability to cortically command his paralysed arm to perform simple single-joint arm and hand movements and functionally meaningful multi-joint movements. We compared iBCI control of his paralysed arm with that of a virtual three-dimensional arm. This study is registered with ClinicalTrials.gov, number NCT00912041.

Findings: The intracortical implant occurred on Dec 1, 2014, and we are continuing to study the participant. The last session included in this report was Nov 7, 2016. The point-to-point target acquisition sessions began on Oct 8, 2015 (311 days after implant). The participant successfully cortically commanded single-joint and coordinated multi-joint arm movements for point-to-point target acquisitions (80–100% accuracy), using first a virtual arm and second his own arm animated by FES. Using his paralysed arm, the participant volitionally performed self-paced reaches to drink a mug of coffee (successfully completing 11 of 12 attempts within a single session 463 days after implant) and feed himself (717 days after implant).

Interpretation: To our knowledge, this is the first report of a combined implanted FES+iBCI neuroprosthesis for restoring both reaching and grasping movements to people with chronic tetraplegia due to spinal cord injury, and represents a major advance, with a clear translational path, for clinically viable neuroprostheses for restoration of reaching and grasping after paralysis.

Funding: National Institutes of Health, Department of Veterans Affairs.

In a neurotechnology future, human-rights laws will need to be revisited

New forms of brainwashing include transcranial magnetic stimulation (TMS) to neuromodulate the brain regions responsible for social prejudice and political and religious beliefs, say researchers. (credit: U.S. National Library of Medicine)

New human rights laws to prepare for rapid current advances in neurotechnology that may put “freedom of mind” at risk have been proposed in the open access journal Life Sciences, Society and Policy.

Four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy, the authors of the study suggest: The right to cognitive liberty, the right to mental privacy, the right to mental integrity, and the right to psychological continuity.

Advances in neural engineering, brain imaging, and neurotechnology put freedom of the mind at risk, says Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel. “Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Potential misuses

Sophisticated brain imaging and the development of brain-computer interfaces have moved away from a clinical setting into the consumer domain. There’s a risk that the technology could be misused and create unprecedented threats to personal freedom. For example:

  • Uses in criminal court as a tool for assessing criminal responsibility or even the risk of re-offending.*
  • Consumer companies using brain imaging for “neuromarketing” to understand consumer behavior and elicit desired responses from customers.
  • “Brain decoders” that can turn a person’s brain imaging data into images, text or sound.**
  • Hacking, allowing a third-party to eavesdrop on someone’s mind.***

International human rights laws currently make no specific mention of neuroscience. But as with the genetic revolution, the on-going neurorevolution will require consideration of human-rights laws and even the creation of new ones, the authors suggest.

* “A possibly game-changing use of neurotechnology in the legal field has been illustrated by Aharoni et al. (2013). In this study, researchers followed a group of 96 male prisoners at prison release. Using fMRI, prisoners’ brains were scanned during the performance of computer tasks in which they had to make quick decisions and inhibit impulsive reactions. The researchers followed the ex-convicts for 4 years to see how they behaved. The study results indicate that those individuals showing low activity in a brain region associated with decision-making and action (the Anterior Cingulate Cortex, ACC) are more likely to commit crimes again within 4 years of release (Aharoni et al. 2013). According to the study, the risk of recidivism is more than double in individuals showing low activity in that region of the brain than in individuals with high activity in that region. Their results suggest a “potential neurocognitive biomarker for persistent antisocial behavior”. In other words, brain scans can theoretically help determine whether certain convicted persons are at an increased risk of reoffending if released.” — Marcello Ienca and Roberto Andorno/Life Sciences, Society and Policy

** NASA and Jaguar are jointly developing a technology called Mind Sense, which will measure brainwaves to monitor the driver’s concentration in the car (Biondi and Skrypchuk 2017). If brain activity indicates poor concentration, then the steering wheel or pedals could vibrate to raise the driver’s awareness of the danger. This technology could help reduce the number of accidents caused by drivers who are stressed or distracted. However, it also theoretically opens the possibility for third parties to use brain decoders to eavesdrop on people’s states of mind. — Marcello Ienca and Roberto Andorno/Life Sciences, Society and Policy

*** Criminally motivated actors could selectively erase memories from their victims’ brains to prevent being identified by them later on, or simply to cause them harm. In a long-term scenario, such tools could be used by surveillance and security agencies to selectively erase dangerous or inconvenient memories from people’s brains, as portrayed in the movie Men in Black with the so-called neuralyzer. — Marcello Ienca and Roberto Andorno/Life Sciences, Society and Policy


Abstract of Towards new human rights in the age of neuroscience and neurotechnology

Rapid advancements in human neuroscience and neurotechnology open unprecedented possibilities for accessing, collecting, sharing and manipulating information from the human brain. Such applications raise important challenges to human rights principles that need to be addressed to prevent unintended consequences. This paper assesses the implications of emerging neurotechnology applications in the context of the human rights framework and suggests that existing human rights may not be sufficient to respond to these emerging issues. After analysing the relationship between neuroscience and human rights, we identify four new rights that may become of great relevance in the coming decades: the right to cognitive liberty, the right to mental privacy, the right to mental integrity, and the right to psychological continuity.

What if you could type directly from your brain at 100 words per minute?

(credit: Facebook)

Regina Dugan, PhD, Facebook VP of Engineering and head of Building8, revealed today (April 19, 2017) at the Facebook F8 2017 conference a plan to develop a non-invasive brain-computer interface that will let you type at 100 wpm — by decoding neural activity devoted to speech.

Dugan previously headed Google’s Advanced Technology and Projects Group, and before that, was Director of the Defense Advanced Research Projects Agency (DARPA).

She explained in a Facebook post that over the next two years, her team will be building systems that demonstrate “a non-invasive system that could one day become a speech prosthetic for people with communication disorders or a new means for input to AR [augmented reality].”

Dugan said that “even something as simple as a ‘yes/no’ brain click … would be transformative.” That simple level has been achieved by using functional near-infrared spectroscopy (fNIRS) to measure changes in blood oxygen levels in the frontal lobes of the brain, as KurzweilAI recently reported. (Near-infrared light can penetrate the skull and partially into the brain.)
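For illustration, here is a minimal sketch of how such a binary “brain click” might be read out of an fNIRS signal, assuming a single-channel trace of oxygenated-hemoglobin (HbO2) changes sampled at about 10 Hz. The window lengths and the threshold are invented for the example, not taken from the published work.

```python
import numpy as np

def detect_click(hbo2, fs=10.0, baseline_s=5.0, response_s=10.0, threshold=0.2):
    """hbo2: 1-D array of HbO2 concentration changes (arbitrary units)."""
    b = int(baseline_s * fs)              # samples in the baseline period
    r = int(response_s * fs)              # samples in the response window
    baseline = hbo2[:b].mean()
    response = hbo2[b:b + r].mean()
    # A sustained rise in HbO2 above baseline counts as a "yes" click.
    return "yes" if (response - baseline) > threshold else "no"
```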

Dugan agrees that optical imaging is the best place to start, but her Building8 team plans to go well beyond that research — sampling hundreds of times per second and with millimeter precision. The research team began working on the brain-typing project six months ago, and she now has a team of more than 60 researchers who specialize in optical neural imaging systems that push the limits of spatial resolution, and in machine-learning methods for decoding speech and language.
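As a rough illustration of the decoding half of that effort, the toy sketch below classifies one of a few imagined words from a vector of per-channel hemodynamic features using scikit-learn. The 64-channel feature layout, the word list, and the logistic-regression classifier are assumptions made for the example, not Facebook’s pipeline, and the training data here is random noise, so it only shows the shape of such a decoder.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
words = ["yes", "no", "more", "stop"]

# Fake training data: 200 trials x 64 optical channels of response features.
X = rng.normal(size=(200, 64))
y = rng.integers(len(words), size=200)

decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
decoder.fit(X, y)
print(words[decoder.predict(X[:1])[0]])  # decoded word for the first trial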

The research is headed by Mark Chevillet, previously an adjunct professor of neuroscience at Johns Hopkins University.

Besides replacing smartphones, the system would be a powerful speech prosthetic, she noted — allowing paralyzed patients to “speak” at normal speed.

(credit: Facebook)

Dugan revealed one specific method the researchers are currently working on to achieve that: a ballistic filter for creating quasi-ballistic photons (avoiding diffusion) — creating a narrow beam for precise targeting — combined with a new method of detecting blood-oxygen levels.
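For context, blood-oxygen changes are commonly recovered from optical measurements with the modified Beer-Lambert law. The sketch below shows that standard calculation for two wavelengths; the extinction coefficients and path-length factors are placeholder values for illustration, and this is not Facebook’s (undisclosed) detection method.

```python
import numpy as np

def hemoglobin_changes(delta_od, source_detector_cm=3.0, dpf=6.0):
    """delta_od: optical-density changes at 760 nm and 850 nm (length-2)."""
    # Rows = wavelengths (760 nm, 850 nm); columns = [HbO2, HbR] extinction
    # coefficients -- placeholder values, so results are in arbitrary units.
    eps = np.array([[1.4, 3.8],
                    [2.5, 1.8]])
    path = source_detector_cm * dpf      # effective optical path length
    # Modified Beer-Lambert law: delta_od = (eps @ [dHbO2, dHbR]) * path
    d_hbo2, d_hbr = np.linalg.solve(eps * path, np.asarray(delta_od, dtype=float))
    return d_hbo2, d_hbr

# Example: a larger rise at 850 nm than at 760 nm reads as an HbO2 increase.
print(hemoglobin_changes([0.01, 0.02]))
```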

Neural activity (in green) and associated blood oxygenation level dependent (BOLD) waveform (credit: Facebook)

Dugan also described a system that may one day allow hearing-impaired people to hear directly via vibrotactile sensors embedded in the skin. “In the 19th century, Braille taught us that we could interpret small bumps on a surface as language,” she said. “Since then, many techniques have emerged that illustrate our brain’s ability to reconstruct language from components.” Today, she demonstrated “an artificial cochlea of sorts and the beginnings of a new ‘haptic vocabulary’.”

A Facebook engineer with acoustic sensors implanted in her arm has learned to feel the acoustic shapes corresponding to words (credit: Facebook)
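One way to picture the “artificial cochlea” idea is to split incoming audio into frequency bands and drive one skin actuator per band. The sketch below does exactly that; the band edges, actuator count, and normalization are chosen purely for illustration and are not Facebook’s design.

```python
import numpy as np

def audio_to_actuators(frame, fs=16000, n_actuators=16):
    """frame: 1-D array of audio samples; returns drive levels in [0, 1]."""
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    power = np.abs(np.fft.rfft(frame)) ** 2
    # Log-spaced band edges from 100 Hz to Nyquist, one band per actuator.
    edges = np.logspace(np.log10(100), np.log10(fs / 2), n_actuators + 1)
    levels = np.array([power[(freqs >= lo) & (freqs < hi)].sum()
                       for lo, hi in zip(edges[:-1], edges[1:])])
    return levels / (levels.max() + 1e-12)   # normalize to [0, 1]
```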

Dugan’s presentation can be viewed in the F8 2017 Keynote Day 2 video (starting at 1:08:10).

Neuron-recording nanowires could help screen drugs for neurological diseases

Colorized scanning electron microscopy (SEM) image of a neuron (orange) interfaced with the nanowire array (green). (credit: Integrated Electronics and Biointerfaces Laboratory, UC San Diego)

A research team* led by engineers at the University of California San Diego has developed nanowire technology that can non-destructively record the electrical activity of neurons in fine detail.

The new technology, published April 10, 2017 in Nano Letters, could one day serve as a platform to screen drugs for neurological diseases and help researchers better understand how single cells communicate in large neuronal networks.

A brain implant

The researchers currently create the neurons in vitro (in the lab) from human induced pluripotent stem cells. But the ultimate goal is to “translate this technology to a device that can be implanted in the brain,” said Shadi Dayeh, PhD, an electrical engineering professor at the UC San Diego Jacobs School of Engineering and the team’s lead investigator.

The technology can uncover details about a neuron’s health, activity, and response to drugs by measuring ion channel currents and changes in the neuron’s intracellular voltage (generated by the difference in ion concentration between the inside and outside of the cell).
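That parenthetical is the textbook Nernst relationship. A quick worked example (not part of the paper’s methods) shows how an ion concentration difference across the membrane translates into a voltage on the order of tens of millivolts.

```python
import math

def nernst_potential_mV(c_out_mM, c_in_mM, z=1, temp_c=37.0):
    """Equilibrium membrane potential for an ion with valence z."""
    R, F = 8.314, 96485.0                 # J/(mol*K), C/mol
    T = temp_c + 273.15
    return 1000.0 * (R * T) / (z * F) * math.log(c_out_mM / c_in_mM)

# Typical mammalian K+ concentrations (~5 mM outside, ~140 mM inside)
# give roughly -89 mV, close to a neuron's resting potential.
print(round(nernst_potential_mV(5, 140), 1))
```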

The researchers cite five key innovations of this new nanowire-to-neuron technology:

  • It’s nondestructive (unlike current methods, which can break the cell membrane and eventually kill the cell).
  • It can simultaneously measure voltage changes in multiple neurons and in the future could bridge or repair neurons.**
  • It can isolate the electrical signal measured by each individual nanowire, with high sensitivity and high signal-to-noise ratios (a per-channel estimate is sketched after this list). Existing techniques are not scalable to 2D and 3D tissue-like structures cultured in vitro, according to Dayeh.
  • It can also be used for heart-on-chip drug screening for cardiac diseases.
  • The nanowires can integrate with CMOS (computer chip) electronics.***
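As referenced in the third bullet, here is a minimal sketch of a per-nanowire signal-to-noise estimate, assuming each nanowire yields one voltage trace. The array shape and the MAD-based noise estimate are illustrative choices, not the paper’s analysis pipeline.

```python
import numpy as np

def per_channel_snr(traces):
    """traces: array of shape (n_nanowires, n_samples), in millivolts."""
    snrs = []
    for v in traces:
        centered = v - np.median(v)
        noise = 1.4826 * np.median(np.abs(centered))   # robust std via MAD
        signal = np.max(np.abs(centered))               # largest deflection
        snrs.append(signal / (noise + 1e-12))
    return np.array(snrs)
```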

A colorized scanning electron microscopy (SEM) image of the silicon-nickel-titanium nanowire array. The nanowires are densely packed on a small chip that is compatible with CMOS chips. The nanowires poke inside cells without damaging them, and are sensitive enough to measure small voltage changes (millivolt or less). (credit: Integrated Electronics and Biointerfaces Laboratory, UC San Diego)

* The project was a collaborative effort between researchers at UC San Diego, the Conrad Prebys Center for Chemical Genomics at the Sanford Burnham Medical Research Institute, Nanyang Technological University in Singapore, and Sandia National Laboratories. This work was supported by the National Science Foundation, the Center for Brain Activity Mapping at UC San Diego, Qualcomm Institute at UC San Diego, Los Alamos National Laboratory, the National Institutes of Health, the March of Dimes, and UC San Diego Frontiers of Innovation Scholar Program. Dayeh’s laboratory holds several pending patent applications for this technology.

** “Highly parallel in vitro drug screening experiments can be performed using the human-relevant iPSC cell line and without the need of the laborious patch-clamp … which is destructive and unscalable to large neuronal densities and to long recording times, or planar multielectrode arrays that enable long-term recordings but can just measure extracellular potentials and lack the sensitivity to subthreshold potentials. … In vivo targeted modulation of individual neural circuits or even single cells within a network becomes possible, and implications for bridging or repairing networks in neurologically affected regions become within reach.” — Ren Liu et al./Nano Letters

*** The researchers invented a new wafer-bonding approach to fuse the silicon nanowires to the nickel electrodes. Their approach involved a process called silicidation, a reaction that binds two solids (silicon and another metal) together without melting either material. This process prevents the nickel electrodes from liquefying, spreading out, and shorting adjacent electrode leads. Silicidation is usually used to make contacts to transistors, but this is the first time it has been used for patterned wafer bonding, Dayeh said. “And since this process is used in semiconductor device fabrication, we can integrate versions of these nanowires with CMOS electronics, but it still needs further optimization for brain-on-chip drug screening.”


Abstract of High Density Individually Addressable Nanowire Arrays Record Intracellular Activity from Primary Rodent and Human Stem Cell Derived Neurons

We report a new hybrid integration scheme that offers for the first time a nanowire-on-lead approach, which enables independent electrical addressability, is scalable, and has superior spatial resolution in vertical nanowire arrays. The fabrication of these nanowire arrays is demonstrated to be scalable down to submicrometer site-to-site spacing and can be combined with standard integrated circuit fabrication technologies. We utilize these arrays to perform electrophysiological recordings from mouse and rat primary neurons and human induced pluripotent stem cell (hiPSC)-derived neurons, which revealed high signal-to-noise ratios and sensitivity to subthreshold postsynaptic potentials (PSPs). We measured electrical activity from rodent neurons from 8 days in vitro (DIV) to 14 DIV and from hiPSC-derived neurons at 6 weeks in vitro post culture with signal amplitudes up to 99 mV. Overall, our platform paves the way for longitudinal electrophysiological experiments on synaptic activity in human iPSC based disease models of neuronal networks, critical for understanding the mechanisms of neurological diseases and for developing drugs to treat them.