Crystal ‘domain walls’ may lead to tinier electronic devices

Abstract art? No, nanoscale crystal sheets with moveable conductive “domain walls” that can modify a circuit’s electronic properties (credit: Queen’s University Belfast)

Queen’s University Belfast physicists have discovered a radical new way to modify the conductivity (ease of electron flow) of electronic circuits — reducing the size of future devices.

The two latest KurzweilAI articles on graphene cited faster/lower-power performance and device-compatibility features. This new research takes another approach: altering the properties of a crystal to eliminate the need for multiple circuits in devices.

Reconfigurable nanocircuitry

To do that, the scientists used “ferroelectric copper-chlorine boracite” crystal sheets, which are almost as thin as graphene. The researchers discovered that squeezing the crystal sheets with a sharp needle at a precise location causes a jigsaw-puzzle-like pattern of “domain walls” to develop around the contact point.

Then, using externally applied electric fields, these writable, erasable domain walls can be repeatedly moved around in the crystal to create a variety of new electronic properties. They can appear, disappear, or move around within the crystal, all without permanently altering the crystal itself.

Eliminating the need for multiple circuits may reduce the size of future computers and other devices, according to the researchers.

The team’s findings have been published in an open-access paper in Nature Communications.


Abstract of Injection and controlled motion of conducting domain walls in improper ferroelectric Cu-Cl boracite

Ferroelectric domain walls constitute a completely new class of sheet-like functional material. Moreover, since domain walls are generally writable, erasable and mobile, they could be useful in functionally agile devices: for example, creating and moving conducting walls could make or break electrical connections in new forms of reconfigurable nanocircuitry. However, significant challenges exist: site-specific injection and annihilation of planar walls, which show robust conductivity, has not been easy to achieve. Here, we report the observation, mechanical writing and controlled movement of charged conducting domain walls in the improper-ferroelectric Cu3B7O13Cl. Walls are straight, tens of microns long and exist as a consequence of elastic compatibility conditions between specific domain pairs. We show that site-specific injection of conducting walls of up to hundreds of microns in length can be achieved through locally applied point-stress and, once created, that they can be moved and repositioned using applied electric fields.

New chemical method could revolutionize graphene use in electronics

Adding a molecular structure containing carbon, chromium, and oxygen atoms retains graphene’s superior conductive properties. The metal atoms (silver, in this experiment) to be bonded are then added to the oxygen atoms on top. (credit: Songwei Che et al./Nano Letters)

University of Illinois at Chicago scientists have solved a fundamental problem that has held back the use of wonder material graphene in a wide variety of electronics applications.

When graphene is bonded (attached) to metal atoms (such as molybdenum) in devices such as solar cells, graphene’s superior conduction properties degrade.

The solution: Instead of adding molecules directly to the individual carbon atoms of graphene, the new method first adds a sort of buffer (consisting of chromium, carbon, and oxygen atoms) to the graphene, and then adds the metal atoms to this buffer material instead. That enables the graphene to retain its unique properties of electrical conduction.

In an experiment, the researchers successfully added silver nanoparticles to graphene with this method, boosting the power-conversion efficiency of graphene-based solar cells about 11-fold, said Vikas Berry, associate professor and department head of chemical engineering and senior author of a paper on the research, published in Nano Letters.

Researchers at Indian Institute of Technology and Clemson University were also involved in the study. The research was funded by the National Science Foundation.


Abstract of Retained Carrier-Mobility and Enhanced Plasmonic-Photovoltaics of Graphene via ring-centered η6 Functionalization and Nanointerfacing

Binding graphene with auxiliary nanoparticles for plasmonics, photovoltaics, and/or optoelectronics, while retaining the trigonal-planar bonding of sp2 hybridized carbons to maintain its carrier-mobility, has remained a challenge. The conventional nanoparticle-incorporation route for graphene is to create nucleation/attachment sites via “carbon-centered” covalent functionalization, which changes the local hybridization of carbon atoms from trigonal-planar sp2 to tetrahedral sp3. This disrupts the lattice planarity of graphene, thus dramatically deteriorating its mobility and innate superior properties. Here, we show large-area, vapor-phase, “ring-centered” hexahapto (η6) functionalization of graphene to create nucleation-sites for silver nanoparticles (AgNPs) without disrupting its sp2 character. This is achieved by the grafting of chromium tricarbonyl [Cr(CO)3] with all six carbon atoms (sigma-bonding) in the benzenoid ring on graphene to form an (η6-graphene)Cr(CO)3 complex. This nondestructive functionalization preserves the lattice continuum with a retention in charge carrier mobility (9% increase at 10 K); with AgNPs attached on graphene/n-Si solar cells, we report an ∼11-fold plasmonic-enhancement in the power conversion efficiency (1.24%).

Graphene-based computer would be 1,000 times faster than silicon-based, use 1/100th the power

How a graphene-based transistor would work. A graphene nanoribbon (GNR) is created by unzipping (opening up) a portion of a carbon nanotube (CNT) (the flat area, shown with pink arrows above it). The GNR switching is controlled by two surrounding parallel CNTs. The magnitudes and relative directions of the control current, ICTRL (blue arrows), in the CNTs determine the rotation direction of the magnetic fields, B (green). The magnetic fields then control the GNR magnetization (based on the recent discovery of negative magnetoresistance), which causes the GNR to switch from resistive (no current) to conductive, resulting in current flow, IGNR (pink arrows) — in other words, causing the GNR to act as a transistor gate. The magnitude of the current flow through the GNR functions as the binary gate output — with binary 1 representing the current flow of the conductive state and binary 0 representing no current (the resistive state). (credit: Joseph S. Friedman et al./Nature Communications)

A future graphene-based transistor using spintronics could lead to tinier computers that are a thousand times faster and use a hundredth of the power of silicon-based computers.

The radical transistor concept, created by a team of researchers at Northwestern University, The University of Texas at Dallas, University of Illinois at Urbana-Champaign, and University of Central Florida, is explained this month in an open-access paper in the journal Nature Communications.

Transistors act as on and off switches. A series of transistors in different arrangements act as logic gates, allowing microprocessors to solve complex arithmetic and logic problems. But the speed of computer microprocessors that rely on silicon transistors has been relatively stagnant since around 2005, with clock speeds mostly in the 3 to 4 gigahertz range.

Clock speeds approaching the terahertz range

The researchers discovered that by applying a magnetic field to a graphene ribbon (created by unzipping a carbon nanotube), they could change the resistance of current flowing through the ribbon. The magnetic field — controlled by increasing or decreasing the current through adjacent carbon nanotubes — increased or decreased the flow of current.

A cascading series of graphene transistor-based logic circuits could produce a massive jump, with clock speeds approaching the terahertz range — a thousand times faster.* They would also be smaller and substantially more efficient, allowing device-makers to shrink technology and squeeze in more functionality, according to Ryan M. Gelfand, an assistant professor in The College of Optics & Photonics at the University of Central Florida.
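For readers who want a concrete picture of the cascading idea, here is a toy threshold model (my illustration, not the researchers' device physics): a graphene-nanoribbon switch is treated as conductive when the net magnetic field from its control-nanotube currents is strong enough, and resistive otherwise, and one switch's output current drives the next switch's control nanotube directly, with no amplification stage in between. The threshold value and current units are invented purely for illustration.

```python
# Toy sketch (not from the Nature Communications paper): threshold model of the
# GNR switching and direct gate-to-gate cascading described above.
def gnr_switch(control_currents, threshold=1.0):
    # Field strength is taken as proportional to the net (signed) control current;
    # above the switching threshold the nanoribbon conducts one unit of current.
    net_field = sum(control_currents)
    return 1.0 if abs(net_field) >= threshold else 0.0

stage1 = gnr_switch([0.6, 0.6])      # aligned control currents -> conductive (1.0)
stage2 = gnr_switch([stage1, -0.6])  # stage 1's output opposed by another current -> resistive (0.0)
print(stage1, stage2)
```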

The researchers hope to inspire the fabrication of these cascaded logic circuits to stimulate a future transformative generation of energy-efficient computing.

* Unlike other spintronic logic proposals, these new logic gates can be cascaded directly through the carbon materials without requiring intermediate circuits and amplification between gates. That would result in compact circuits with reduced area that are far more efficient than with CMOS switching, which is limited by charge transfer and accumulation from RLC (resistance-inductance-capacitance) interconnect delays.


Abstract of Cascaded spintronic logic with low-dimensional carbon

Remarkable breakthroughs have established the functionality of graphene and carbon nanotube transistors as replacements to silicon in conventional computing structures, and numerous spintronic logic gates have been presented. However, an efficient cascaded logic structure that exploits electron spin has not yet been demonstrated. In this work, we introduce and analyse a cascaded spintronic computing system composed solely of low-dimensional carbon materials. We propose a spintronic switch based on the recent discovery of negative magnetoresistance in graphene nanoribbons, and demonstrate its feasibility through tight-binding calculations of the band structure. Covalently connected carbon nanotubes create magnetic fields through graphene nanoribbons, cascading logic gates through incoherent spintronic switching. The exceptional material properties of carbon materials permit Terahertz operation and two orders of magnitude decrease in power-delay product compared to cutting-edge microprocessors. We hope to inspire the fabrication of these cascaded logic circuits to stimulate a transformative generation of energy-efficient computing.

High-speed light-based systems could replace supercomputers for certain ‘deep learning’ calculations

(a) Optical micrograph of an experimentally fabricated on-chip optical interference unit; the physical region where the optical neural network program exists is highlighted in gray. The programmable nanophotonic processor works like a field-programmable gate array (FPGA) integrated circuit — an array of interconnected waveguides that allows the light beams to be modified as needed for a specific deep-learning matrix computation. (b) Schematic illustration of the optical neural network program, which performs matrix multiplication and amplification fully optically. (credit: Yichen Shen et al./Nature Photonics)

A team of researchers at MIT and elsewhere has developed a new approach to deep learning systems — using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep-learning computations.

Deep-learning systems are based on artificial neural networks that mimic the way the brain learns from an accumulation of examples. They can enable technologies such as face- and voice-recognition software, or scour vast amounts of medical data to find patterns that could be useful diagnostically, for example.

But the computations these systems carry out are highly complex and demanding, even for supercomputers. Traditional computer architectures are not very efficient for calculations needed for neural-network tasks that involve repeated multiplications of matrices (arrays of numbers). These can be computationally intensive for conventional CPUs or even GPUs.

Programmable nanophotonic processor

Instead, the new approach uses an optical device that the researchers call a “programmable nanophotonic processor.” Multiple light beams are directed in such a way that their waves interact with each other, producing interference patterns that “compute” the intended operation.

The optical chips using this architecture could, in principle, carry out dense matrix multiplications (the most power-hungry and time-consuming part of AI algorithms) for learning tasks much faster than conventional electronic chips. The researchers expect a computational speed enhancement of at least two orders of magnitude over the state of the art, and three orders of magnitude in power efficiency.
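For orientation, the workload in question reduces to matrix-vector multiplication, which is exactly what the photonic mesh evaluates with interference. The sketch below (illustrative only, not the MIT group's code) shows that linear algebra, and assumes the common recipe of factoring a weight matrix by singular value decomposition into two unitary parts and a diagonal scaling; mapping those factors onto interferometer phase settings and optical attenuation/gain is an assumption of this sketch, not a detail given in the article.

```python
# Illustrative sketch only: the linear algebra a programmable nanophotonic
# processor is meant to carry out optically.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 4))   # a small neural-network weight matrix
x = rng.normal(size=4)        # an input vector (e.g., audio features)

# One conventional (electronic) layer: matrix-vector multiply, then a nonlinearity.
layer_out = np.maximum(W @ x, 0)

# SVD splits W into two unitary matrices and a diagonal scaling: W = U @ diag(s) @ Vt.
# In this sketch, the unitary factors stand in for interferometer settings and
# diag(s) for optical attenuation/amplification.
U, s, Vt = np.linalg.svd(W)
optical_linear = U @ (s * (Vt @ x))   # same product, evaluated factor by factor

print(np.allclose(W @ x, optical_linear))   # True: the factorization is exact
print(layer_out)                            # the layer output that would be fed forward
```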

“This chip, once you tune it, can carry out matrix multiplication with, in principle, zero energy, almost instantly,” says Marin Soljacic, one of the MIT researchers on the team.

To demonstrate the concept, the team set the programmable nanophotonic processor to implement a neural network that recognizes four basic vowel sounds. Even with the prototype system, they were able to achieve a 77 percent accuracy level, compared to about 90 percent for conventional systems. There are “no substantial obstacles” to scaling up the system for greater accuracy, according to Soljacic.

The team says it will still take a lot more time and effort to make this system useful. However, once the system is scaled up and fully functioning, the low-power system should find many uses, especially for situations where power is limited, such as in self-driving cars, drones, and mobile consumer devices. Other uses include signal processing for data transmission and computer centers.

The research was published Monday (June 12, 2017) in a paper in the journal Nature Photonics (open-access version available on arXiv).

The team also included researchers at Elenion Technologies of New York and the Université de Sherbrooke in Quebec. The work was supported by the U.S. Army Research Office through the Institute for Soldier Nanotechnologies, the National Science Foundation, and the Air Force Office of Scientific Research.


Abstract of Deep learning with coherent nanophotonic circuits

Artificial neural networks are computational network models inspired by signal processing in the brain. These models have dramatically improved performance for many machine-learning tasks, including speech and image recognition. However, today’s computing hardware is inefficient at implementing neural networks, in large part because much of it was designed for von Neumann computing schemes. Significant effort has been made towards developing electronic architectures tuned to implement artificial neural networks that exhibit improved computational speed and accuracy. Here, we propose a new architecture for a fully optical neural network that, in principle, could offer an enhancement in computational speed and power efficiency over state-of-the-art electronics for conventional inference tasks. We experimentally demonstrate the essential part of the concept using a programmable nanophotonic processor featuring a cascaded array of 56 programmable Mach–Zehnder interferometers in a silicon photonic integrated circuit and show its utility for vowel recognition.

A noninvasive method for deep-brain stimulation for brain disorders

External electrical waves excite an area in the mouse hippocampus, shown in bright green. (credit: Nir Grossman, Ph.D., Suhasa B. Kodandaramaiah, Ph.D., and Andrii Rudenko, Ph.D.)

MIT researchers and associates have come up with a breakthrough method of remotely stimulating regions deep within the brain, replacing the invasive surgery now required for implanting electrodes for Parkinson’s and other brain disorders.

The new method could make deep-brain stimulation for brain disorders less expensive, more accessible to patients, and less risky (avoiding brain hemorrhage and infection).

Working with mice, the researchers applied two high-frequency electrical currents at slightly different frequencies (E1 and E2 in the diagram below), attaching electrodes (similar to those used with an EEG machine) to the surface of the skull.

A new noninvasive method for deep-brain stimulation (credit: Grossman et al./Cell)

At these high frequencies, the currents have no effect on brain tissue. But where the currents converge deep in the brain, they interfere with one another in such a way that they generate a low-frequency current (corresponding to the red envelope in the diagram) inside neurons, thus stimulating neural electrical activity.

The researchers named this method “temporal interference stimulation” (interference between the two currents at slightly different frequencies generates an envelope at the difference frequency).* For the experimental setup shown in the diagram above, the E1 current was 1 kHz (1,000 Hz), which mixed with a 1.04 kHz E2 current. That generated a current with a 40 Hz “delta f” difference frequency — a frequency that can stimulate neural activity in the brain. (The researchers found no harmful effects in any part of the mouse brain.)
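The arithmetic behind that difference frequency is just the beat-frequency identity noted in the footnote below. Here is a brief numerical check (illustrative only, not the researchers' stimulation code) showing that summing 1.00 kHz and 1.04 kHz sinusoids yields a ~1.02 kHz carrier whose envelope repeats 40 times per second.

```python
# Illustrative sketch: the beat-frequency arithmetic of temporal interference.
import numpy as np

fs = 100_000                       # sampling rate in Hz (illustration only)
t = np.arange(0, 0.2, 1 / fs)      # 200 ms of signal
e1 = np.sin(2 * np.pi * 1000 * t)  # E1: 1.00 kHz
e2 = np.sin(2 * np.pi * 1040 * t)  # E2: 1.04 kHz
summed = e1 + e2

# sin(a) + sin(b) = 2 cos((a-b)/2) sin((a+b)/2): the sum is a 1020 Hz carrier
# whose amplitude envelope |2 cos(2*pi*20*t)| repeats 40 times per second.
carrier = np.sin(2 * np.pi * 1020 * t)
envelope = 2 * np.cos(2 * np.pi * 20 * t)
print(np.allclose(summed, envelope * carrier))   # True

# Count envelope nulls to confirm the 40 Hz modulation: 40 Hz * 0.2 s = 8 cycles.
env = np.abs(envelope)
nulls = np.sum((env[1:-1] < env[:-2]) & (env[1:-1] < env[2:]))
print("envelope cycles in 0.2 s:", nulls)        # 8
```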

“Traditional deep-brain stimulation requires opening the skull and implanting an electrode, which can have complications,” explains Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, and the senior author of the study, which appears in the June 1, 2017 issue of the journal Cell. Also, “only a small number of people can do this kind of neurosurgery.”

Custom-designed, targeted deep-brain stimulation

If this new method is perfected and clinically tested, neurologists could control the size and location of the exact tissue that receives the electrical stimulation for each patient, by selecting the frequency of the currents and the number and location of the electrodes, according to the researchers.

Neurologists could also steer the location of deep-brain stimulation in real time, without moving the electrodes, by simply altering the currents. In this way, deep targets could be stimulated for conditions such as Parkinson’s, epilepsy, depression, and obsessive-compulsive disorder — without affecting surrounding brain structures.

The researchers are also exploring the possibility of using this method to experimentally treat other brain conditions, such as autism, and for basic science investigations.

Co-author Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory, and researchers in her lab tested this technique in mice and found that they could stimulate small regions deep within the brain, including the hippocampus. But they were also able to shift the site of stimulation, allowing them to activate different parts of the motor cortex and prompt the mice to move their limbs, ears, or whiskers.

“We showed that we can very precisely target a brain region to elicit not just neuronal activation but behavioral responses,” says Tsai.

Last year, Tsai showed (open access) that using light to visually induce brain waves of a particular frequency could substantially reduce the beta amyloid plaques seen in Alzheimer’s disease, in the brains of mice. She now plans to explore whether this new type of electrical stimulation could offer a new way to generate the same type of beneficial brain waves.

This new method is also an alternative to other brain-stimulation methods.

Transcranial magnetic stimulation (TMS), which is FDA-approved for treating depression and for studying the basic science of cognition, emotion, sensation, and movement, can stimulate deep brain structures, but can also strongly stimulate the overlying surface regions, according to the researchers.

Transcranial ultrasound, and expression of heat-sensitive receptors combined with injection of thermomagnetic nanoparticles, have also been proposed, “but the unknown mechanism of action … and the need to genetically manipulate the brain, respectively, may limit their immediate use in humans,” the researchers note in the paper.

The MIT researchers collaborated with investigators at Beth Israel Deaconess Medical Center (BIDMC), the IT’IS Foundation, Harvard Medical School, and ETH Zurich.

The research was funded in part by the Wellcome Trust, a National Institutes of Health Director’s Pioneer Award, an NIH Director’s Transformative Research Award, the New York Stem Cell Foundation Robertson Investigator Award, the MIT Center for Brains, Minds, and Machines, Jeremy and Joyce Wertheimer, Google, a National Science Foundation Career Award, the MIT Synthetic Intelligence Project, and Harvard Catalyst: The Harvard Clinical and Translational Science Center.

* Similar to a radio-frequency or audio “beat frequency.”


Abstract of Noninvasive Deep Brain Stimulation via Temporally Interfering Electric Fields

We report a noninvasive strategy for electrically stimulating neurons at depth. By delivering to the brain multiple electric fields at frequencies too high to recruit neural firing, but which differ by a frequency within the dynamic range of neural firing, we can electrically stimulate neurons throughout a region where interference between the multiple fields results in a prominent electric field envelope modulated at the difference frequency. We validated this temporal interference (TI) concept via modeling and physics experiments, and verified that neurons in the living mouse brain could follow the electric field envelope. We demonstrate the utility of TI stimulation by stimulating neurons in the hippocampus of living mice without recruiting neurons of the overlying cortex. Finally, we show that by altering the currents delivered to a set of immobile electrodes, we can steerably evoke different motor patterns in living mice.

Researchers decipher how faces are encoded in the brain

This figure shows eight different real faces that were presented to a monkey, together with reconstructions made by analyzing electrical activity from 205 neurons recorded while the monkey was viewing the faces. (credit: Doris Tsao)

In a paper published (open access) June 1 in the journal Cell, researchers report that they have cracked the code for facial identity in the primate brain.

“We’ve discovered that this code is extremely simple,” says senior author Doris Tsao, a professor of biology and biological engineering at the California Institute of Technology. “We can now reconstruct a face that a monkey is seeing by monitoring the electrical activity of only 205 neurons in the monkey’s brain. One can imagine applications in forensics where one could reconstruct the face of a criminal by analyzing a witness’s brain activity.”

The researchers previously identified the six “face patches” — general areas of the primate and human brain that are responsible for identifying faces — all located in the inferior temporal (IT) cortex. They also found that these areas are packed with specific nerve cells that fire action potentials much more strongly when seeing faces than when seeing other objects. They called these neurons “face cells.”

Previously, some experts in the field believed that each face cell (a.k.a. “grandmother cell”) in the brain represents a specific face, but this presented a paradox, says Tsao, who is also a Howard Hughes Medical Institute investigator. “You could potentially recognize 6 billion people, but you don’t have 6 billion face cells in the IT cortex. There had to be some other solution.”

Instead, they found that rather than representing a specific identity, each face cell represents a specific axis within a multidimensional space, which they call the “face space.” These axes can combine in different ways to create every possible face. In other words, there is no “Jennifer Aniston” neuron.

The clinching piece of evidence: the researchers could create a large set of faces that looked extremely different, but which all caused the cell to fire in exactly the same way. “This was completely shocking to us — we had always thought face cells were more complex. But it turns out each face cell is just measuring distance along a single axis of face space, and is blind to other features,” Tsao says.

AI applications

“The way the brain processes this kind of information doesn’t have to be a black box,” explains first author Le Chang. “Although there are many steps of computations between the image we see and the responses of face cells, the code of these face cells turned out to be quite simple once we found the proper axes. This work suggests that other objects could be encoded with similarly simple coordinate systems.”

The research also has artificial intelligence applications. “This could inspire new machine learning algorithms for recognizing faces,” Tsao adds. “In addition, our approach could be used to figure out how units in deep networks encode other things, such as objects and sentences.”

This research was supported by the National Institutes of Health, the Howard Hughes Medical Institute, the Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech, and the Swartz Foundation.

* The researchers started by creating a 50-dimensional space that could represent all faces. They assigned 25 dimensions to shape (such as the distance between the eyes or the width of the hairline) and 25 dimensions to non-shape appearance features, such as skin tone and texture.

Using macaque monkeys as a model system, the researchers inserted electrodes into the monkeys’ brains to record signals from individual face cells within the face patches. They found that each face cell fired in proportion to the projection of a face onto a single axis in the 50-dimensional face space. Knowing these axes, the researchers then developed an algorithm that could decode additional faces from neural responses.

In other words, they could now show the monkey an arbitrary new face, and recreate the face that the monkey was seeing from the electrical activity of face cells in the animal’s brain. When placed side by side, the photos that the monkeys were shown and the faces that were recreated using the algorithm were nearly identical. Face cells from only two of the face patches (106 cells in one patch and 99 cells in another) were enough to reconstruct the faces. “People always say a picture is worth a thousand words,” Tsao says. “But I like to say that a picture of a face is worth about 200 neurons.”
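To make that description concrete, here is a minimal sketch (illustrative only, not the authors' analysis code) of the linear face code described above: each cell's firing rate is modeled as the projection of a 50-dimensional face vector onto that cell's axis, and the face is decoded back from a 205-cell population response by least squares. The axes and noise level here are made up for illustration.

```python
# Illustrative sketch of the linear "face space" code described above.
import numpy as np

rng = np.random.default_rng(0)
n_dims, n_cells = 50, 205   # 50-dimensional face space, 205 recorded cells

axes = rng.normal(size=(n_cells, n_dims))   # each cell's preferred axis (made up here)
face = rng.normal(size=n_dims)              # a face as a 50-dimensional feature vector

# Encoding: each cell fires in proportion to the face's projection onto its axis,
# plus a little noise standing in for neural variability.
rates = axes @ face + 0.1 * rng.normal(size=n_cells)

# Decoding: recover the face from the population response by least squares.
decoded, *_ = np.linalg.lstsq(axes, rates, rcond=None)

print("reconstruction correlation:", round(np.corrcoef(face, decoded)[0, 1], 3))
```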


Caltech | Researchers decipher the enigma of how faces are encoded


Abstract of The Code for Facial Identity in the Primate Brain

Primates recognize complex objects such as faces with remarkable speed and reliability. Here, we reveal the brain’s code for facial identity. Experiments in macaques demonstrate an extraordinarily simple transformation between faces and responses of cells in face patches. By formatting faces as points in a high-dimensional linear space, we discovered that each face cell’s firing rate is proportional to the projection of an incoming face stimulus onto a single axis in this space, allowing a face cell ensemble to encode the location of any face in the space. Using this code, we could precisely decode faces from neural population responses and predict neural firing rates to faces. Furthermore, this code disavows the long-standing assumption that face cells encode specific facial identities, confirmed by engineering faces with drastically different appearance that elicited identical responses in single face cells. Our work suggests that other objects could be encoded by analogous metric coordinate systems.

CogX London 2017

The emerging world of AI is expected to create incredible opportunities and cause unprecedented disruption across all industries. Healthcare, financial services, education, the future of work and more will be shaped by the proliferation of AI.

However, market complexity and fragmentation, coupled with the incredible pace of change (1,500 new companies in 2016 alone), bring about great challenges: how should we navigate this complex landscape?

The AI Innovation Exchange and Annual Awards is bringing together thought leaders across more than 20 industries and domains to address the pressing issues and move the conversation forward. In a combination of keynotes, expert panels, and results-driven workshops, we will agree on the open issues and publish papers from the breakout sessions’ findings. An interactive trade expo will showcase the current state of the art in the field, and at our gala dinner we will present 10 categories of awards celebrating innovation in AI. We hope that CogX will bring clarity to the marketplace, celebrate innovation, and facilitate strategies for the future.

The whole event comprises: 1/2 Day Exec AI Bootcamp, 2 Day Exchange across 18 topics, Awards dinner, AI Expo, Breakout sessions and working groups & VIP events.

—Event Producer

33 blood-cancer patients have dramatic clinical remission with new T-cell therapy

Image of a group of killer T cells (green and red) surrounding a cancer cell (blue, center)  (credit: NIH)

Chinese doctors have reported success with a new type of immunotherapy for multiple myeloma*, a blood cancer: 33 out of 35 patients in a clinical trial had clinical remission within two months.

The researchers used a type of T cell called “chimeric antigen receptor (CAR) T.”** In a phase I clinical trial in China, the patient’s own T cells were collected, genetically reprogrammed in a lab, and injected back into the patient. The reprogramming involved inserting an artificially designed gene into the T-cell genome, which helped the genetically reprogrammed cells find and destroy cancer cells throughout the body.

The study was presented Monday (June 5, 2017) at the American Society of Clinical Oncology (ASCO) conference in Chicago.

“Although recent advances in chemotherapy have prolonged life expectancy in multiple myeloma, this cancer remains incurable,” said study author Wanhong Zhao, MD, PhD, an associate director of hematology at The Second Affiliated Hospital of Xi’an Jiaotong University in Xi’an, China. “It appears that with this novel immunotherapy there may be a chance for cure in multiple myeloma, but we will need to follow patients much longer to confirm that.”***

U.S. clinical trial planned

“While it’s still early, these data are a strong sign that CAR T-cell therapy can send multiple myeloma into remission,” said ASCO expert Michael S. Sabel, MD, FACS. “It’s rare to see such high response rates, especially for a hard-to-treat cancer. This serves as proof that immunotherapy and precision medicine research pays off. We hope that future research builds on this success in multiple myeloma and other cancers.”

The researchers plan to enroll a total of 100 patients in this continuing clinical trial at four participating hospitals in China. “In early 2018 we also plan to launch a similar clinical trial in the United States. Looking ahead, we would also like to explore whether BCMA CAR T-cell therapy benefits patients who are newly diagnosed with multiple myeloma,” said Zhao.

This study was funded by Legend Biotech Co.

* Multiple myeloma is a cancer of plasma cells, which make antibodies to fight infections. Abnormal plasma cells can crowd out or suppress the growth of other cells in the bone marrow. This suppression may result in anemia, excessive bleeding, and a decreased ability to fight infection. Multiple myeloma is a relatively uncommon cancer. This year, an estimated 30,300 people [Ref. 2] in the United States will be diagnosed with multiple myeloma, and 114,250 [Ref. 3] were diagnosed with this cancer worldwide in 2012. In the United States, only about half of patients survive five years after being diagnosed with multiple myeloma. — American Society of Clinical Oncology

** Over the past few years, CAR T-cell therapy targeting a B-cell biomarker called CD19 proved very effective in initial trials for acute lymphoblastic leukemia (ALL) and some types of lymphoma, but until now, there has been little success with CAR T-cell therapies targeting other biomarkers in other types of cancer. This is one of the first clinical trials of CAR T cells targeting BCMA, which was discovered to play a role in progression of multiple myeloma in 2004. — American Society of Clinical Oncology

*** To date, 19 patients have been followed for more than four months, a pre-set time for full efficacy assessment by the International Myeloma Working Group (IMWG) consensus. Of the 19 patients, 14 have reached stringent complete response (sCR) criteria, one patient has reached partial response, and four patients have achieved very good partial remission (VgPR) criteria. There has been only a single case of disease progression from VgPR; an extramedullary lesion of the VgPR patient reappeared three months after disappearing on CT scans. There has not been a single case of relapse among patients who reached sCR criteria. The five patients who have been followed for over a year (12–14 months) all remain in sCR status and are free of minimal residual disease as well (have no detectable cancer cells in the bone marrow). Cytokine release syndrome (CRS), a common and potentially dangerous side effect of CAR T-cell therapy, occurred in 85% of patients, but it was only transient; in the majority of patients, symptoms were mild and manageable. CRS is associated with symptoms such as fever, low blood pressure, difficulty breathing, and problems with multiple organs. Only two patients in this study experienced severe CRS (grade 3), but both recovered upon receiving tocilizumab (Actemra, an inflammation-reducing treatment commonly used to manage CRS in clinical trials of CAR T-cell therapy). No patients experienced neurologic side effects, another common and serious complication of CAR T-cell therapy. — American Society of Clinical Oncology


Abstract of Durable remissions with BCMA-specific chimeric antigen receptor (CAR)-modified T cells in patients with refractory/relapsed multiple myeloma.

Background: Chimeric antigen receptor engineered T cell (CAR-T) is a novel immunotherapeutic approach for cancer treatment and has been clinically validated in the treatment of acute lymphoblastic leukemia (ALL). Here we report an encouraging breakthrough of treating multiple myeloma (MM) using a CAR-T designated LCAR-B38M CAR-T, which targets principally BCMA. Methods: A single arm clinical trial was conducted to assess safety and efficacy of this approach. A total of 19 patients with refractory/relapsed multiple myeloma were included in the trial. The median number of infused cells was 4.7 (0.6 ~ 7.0) × 10e6/ kg. The median follow-up times was 208 (62 ~ 321) days. Results: Among the 19 patients who completed the infusion, 7 patients were monitored for a period of more than 6 months. Six out of the 7 achieved complete remission (CR) and minimal residual disease (MRD)-negative status. The 12 patients who were followed up for less than 6 months met near CR criteria of modified EBMT criteria for various degrees of positive immunofixation. All these effects were observed with a progressive decrease of M-protein and thus expected to eventually meet CR criteria. In the most recent follow-up examination, all 18 survived patients were determined to be free of myeloma-related biochemical and hematologic abnormalities. One of the most common adverse event of CAR-T therapy is acute cytokine release syndrome (CRS). This was observed in 14 (74%) patients who received treatment. Among these 14 patients there were 9 cases of grade 1, 2 cases of grade 2, 1 case of grade 3, and 1 case of grade 4 patient who recovered after treatments. Conclusions: A 100% objective response rate (ORR) to LCAR-B38M CAR-T cells was observed in refractory/relapsed myeloma patients. 18 out of 19 (95%) patients reached CR or near CR status without a single event of relapse in a median follow-up of 6 months. The majority (14) of the patients experienced mild or manageable CRS, and the rest (5) were even free of diagnosable CRS. Based on the encouraging safety and efficacy outcomes, we believe that our LCAR-B38M CAR-T cell therapy is an innovative and highly effective treatment for multiple myeloma.

How to design and build your own robot

Two robots — a robot calligrapher and a puppy — produced using the interactive design tool to select off-the-shelf components and 3D-printed parts (credit: Carnegie Mellon University)

Carnegie Mellon University (CMU) Robotics Institute researchers have developed a simplified interactive design tool that lets you design and make your own customized legged or wheeled robot, using a mix of 3D-printed parts and off-the-shelf components.

The current process of creating new robotic systems is challenging, time-consuming, and resource-intensive. So the CMU researchers have created a visual design tool with a simple drag-and-drop interface that lets you choose from a library of standard building blocks (such as actuators and mounting brackets that are either off-the-shelf/mass-produced or can be 3D-printed) that you can combine to create complex functioning robotic systems.

(a) The design interface consists of two workspaces. The left workspace allows for designing the robot. It displays a list of various modules at the top. The leftmost menu provides various functions that allow users to define preferences for the search process visualization and for physical simulation. The right workspace (showing the robot design on a plane) runs a physics simulation of the robot for testing. (b) When you select a new module from the modules list, the system automatically makes visual suggestions (shown in red) about possible connections for this module that are relevant to the current design. (credit: Carnegie Mellon University)

An iterative design process lets you experiment by changing the number and location of actuators and adjusting the physical dimensions of your robot. An auto-completion feature can automatically generate assemblies of components by searching through possible component arrangements. It even suggests components that are compatible with each other, points out where actuators should go, and automatically generates 3D-printable structural components to connect those actuators.

Automated design process. (a) Start with a guiding mesh for the robot you want to make and select the orientations of its motors, using the drag and drop interface. (b) The system then searches for possible designs that connect a given pair of motors in user-defined locations, according to user-defined preferences. You can reject the solution and re-do the search with different preferences anytime. A proposed search solution connecting the root motor to the target motor (highlighted in dark red) is shown in light blue. Repeat this process for each pair of motors. (c) Since the legs are symmetric in this case, you would only need to use the search process for two legs. The interface lets you create the other pair of legs by simple editing operations. Finally, attach end-effectors of your choice and create a body plate to complete your awesome robot design. (d) shows the final design (with and without the guiding mesh). The dinosaur head mesh was manually added after this particular design, for aesthetic appeal. (credit: Carnegie Mellon University)

The research team, headed by Stelian Coros, CMU Robotics Institute assistant professor of robotics, designed a number of robots with the tool and verified its feasibility by fabricating two test robots (shown above) — a wheeled robot with a manipulator arm that can hold a pen for drawing, and a four-legged “puppy” robot that can walk forward or sideways. “Our work aims to make robotics more accessible to casual users,” says Coros.

Robotics Ph.D. student Ruta Desai presented a report on the design tool at the IEEE International Conference on Robotics and Automation (ICRA 2017) May 29–June 3 in Singapore. No date for the availability of this tool has been announced.

This work was supported by the National Science Foundation.


Ruta Desai | Computational Abstractions for Interactive Design of Robotic Devices (ICRA 2017)


Abstract of Computational Abstractions for Interactive Design of Robotic Devices

We present a computational design system that allows novices and experts alike to easily create custom robotic devices using modular electromechanical components. The core of our work consists of a design abstraction that models the way in which these components can be combined to form complex robotic systems. We use this abstraction to develop a visual design environment that enables an intuitive exploration of the space of robots that can be created using a given set of actuators, mounting brackets and 3d-printable components. Our computational system also provides support for design auto-completion operations, which further simplifies the task of creating robotic devices. Once robot designs are finished, they can be tested in physical simulations and iteratively improved until they meet the individual needs of their users. We demonstrate the versatility of our computational design system by creating an assortment of legged and wheeled robotic devices. To test the physical feasibility of our designs, we fabricate a wheeled device equipped with a 5-DOF arm and a quadrupedal robot.

Playing a musical instrument could help restore brain health, research suggests

Tibetan singing bowl (credit: Baycrest Health Sciences)

A study involving playing a musical instrument, by neuroscientists at the Toronto-based Baycrest Rotman Research Institute and Stanford University, suggests ways to improve brain rehabilitation methods.

In the study, published in the Journal of Neuroscience on May 24, 2017, the researchers asked young adults to listen to sounds from an unfamiliar musical instrument (a Tibetan singing bowl). Half of the subjects (the experimental group) were then asked to recreate the same sounds and rhythm by striking the bowl; the other half (the control group) were instead asked to recreate the sound by simply pressing a key on a computer keypad.

After listening to the sounds they created, subjects in the experimental group showed increased auditory-evoked P2 (P200) brain waves. This was significant because the P2 increase “occurred immediately, while in previous learning-by-listening studies, P2 increases occurred on a later day,” the researchers explained in the paper. The experimental group also had increased responsiveness of brain beta-wave oscillations and enhanced connectivity between auditory and sensorimotor cortices (areas) in the brain.

The brain changes were measured using magnetoencephalographic (MEG) recording, which is similar to EEG, but uses highly sensitive magnetic sensors.

Immediate beneficial effects on the brain

“The results … provide a neurophysiological basis for the application of music making in motor rehabilitation [increasing the ability to move arms and legs] training,” the authors state in the paper. The findings support Bernhard Ross’s research on using musical training to help stroke survivors rehabilitate motor movement in their upper bodies. Baycrest scientists also have a history of breakthroughs in understanding how a person’s musical background impacts their listening abilities and cognitive function as they age.

“This study was the first time we saw direct changes in the brain after one session, demonstrating that the action of creating music leads to a strong change in brain activity,” said Ross, PhD, a senior scientist at the Rotman Research Institute and senior author of the study.

“Music has been known to have beneficial effects on the brain, but there has been limited understanding into what about music makes a difference,” he added. “This is the first study demonstrating that learning the fine movement needed to reproduce a sound on an instrument changes the brain’s perception of sound in a way that is not seen when listening to music.”

The study’s next steps involve analyzing recovery by stroke patients with musical training compared to physiotherapy, and the impact of musical training on the brains of older adults. With additional funding, the study could explore developing musical training rehabilitation programs for other conditions that impact motor function, such as traumatic brain injury, and lead to hearing aids of the future, the researchers say.

The study received support from the Canadian Institutes of Health Research.


Abstract of Sound-making actions lead to immediate plastic changes of neuromagnetic evoked responses and induced beta-band oscillations during perception

Auditory and sensorimotor brain areas interact during the action-perception cycle of sound making. Neurophysiological evidence of a feedforward model of the action and its outcome has been associated with attenuation of the N1 wave of auditory evoked responses elicited by self-generated sounds, such as vocalization or playing a musical instrument. Moreover, neural oscillations at beta-band frequencies have been related to predicting the sound outcome after action initiation. We hypothesized that a newly learned action-perception association would immediately modify interpretation of the sound during subsequent listening. Nineteen healthy young adults (seven female, twelve male) participated in three magnetoencephalography (MEG) recordings while first passively listening to recorded sounds of a bell ringing, then actively playing the bell with a mallet, and then again listening to recorded sounds. Auditory cortex activity showed characteristic P1-N1-P2 waves. The N1 was attenuated during sound making, while P2 responses were unchanged. In contrast, P2 became larger when listening after sound making compared to the initial naïve listening. The P2 increase occurred immediately, while in previous learning-by-listening studies P2 increases occurred on a later day. Also, reactivity of beta-band oscillations as well as theta coherence between auditory and sensorimotor cortices was stronger in the second listening block. These changes were significantly larger than those observed in control participants (eight female, five male), who triggered recorded sounds by a keypress. We propose that P2 characterizes familiarity with sound objects, whereas beta-band oscillation signifies involvement of the action-perception cycle, and both measures objectively indicate functional neuroplasticity in auditory perceptual learning.