Nanomaterials that mimic nerve impulses (credit: Osaka University)
A combination of nanomaterials that can mimic nerve impulses (“spikes”) in the brain has been discovered by researchers at Kyushu Institute of Technology and Osaka University in Japan.
Current “neuromorphic” (brain-like) chips (such as IBM’s neurosynaptic TrueNorth) and circuits (such as those based on the NVIDIA GPGPU, or general-purpose graphics processing unit) are devices based on complex circuits that emulate only one part of the brain’s mechanisms: the learning ability of synapses (which connect neurons together).
(Left) Schematic of the SWNT/POM complex network, showing single-wall nanotubes and polyoxometalate (POM) molecules, with gold contacts. (Right) Conductive atomic force microscope image of a molecular neuromorphic network device. (Inset) Molecular structure of polyoxometalate (POM) molecules. (credit: Hirofumi Tanaka et al./Nature Communications)
The researchers have now developed a way to simulate a large-scale spiking neural network. They created an SWNT/POM molecular neuromorphic device consisting of a dense, complex network of spiking molecules. The new nanomaterial comprises polyoxometalate (POM) molecules adsorbed onto single-wall carbon nanotubes (SWNTs).
Unlike ordinary organic molecules, POM consists of metal and oxygen atoms that form a three-dimensional framework able to store charge within a single molecule. The new nanomaterial emits spikes and can transmit them via synapse-like connections, much as neurons transmit spikes to one another across synapses.
The researchers also demonstrated that this molecular model could be used as a component of reservoir computing devices, which are anticipated as next-generation neural network devices.
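For readers unfamiliar with reservoir computing: the idea is to drive a fixed, randomly connected dynamical system (the “reservoir,” here played by the spiking molecular network) with an input signal and train only a simple linear readout on the reservoir’s internal states. Below is a minimal echo-state-network sketch in Python/NumPy; it illustrates the general technique under assumed toy parameters and is not a model of the SWNT/POM device.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: its weights are never trained, only the readout is.
N_IN, N_RES = 1, 200
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.normal(0.0, 1.0, (N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius < 1 for stable dynamics

def run_reservoir(u):
    """Drive the reservoir with a 1-D input sequence u and collect its states."""
    x = np.zeros(N_RES)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Toy task: reproduce the input signal delayed by 5 time steps.
u = rng.uniform(-1, 1, 1000)
target = np.roll(u, 5)
X = run_reservoir(u)
W_out, *_ = np.linalg.lstsq(X[50:], target[50:], rcond=None)  # train the linear readout only
prediction = X @ W_out
```

Training only the readout is what makes a physical medium with rich but untrainable dynamics, such as a molecular network, usable as a computing substrate.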
MIT Media Lab (no sound) | Intro: Personalized Machine Learning for Robot Perception of Affect and Engagement in Autism Therapy. This is an example of a therapy session augmented with SoftBank Robotics’ humanoid robot NAO and deep-learning software. The 35 children with autism who participated in this study ranged in age from 3 to 13. They reacted in various ways to the robots during their 35-minute sessions — from looking bored and sleepy in some cases to jumping around the room with excitement, clapping their hands, and laughing or touching the robot.
Robots equipped with personalized “deep learning” software could help therapists interpret behavior and tailor therapy for children with autism, while making the therapy more engaging and natural. That’s the conclusion of a study by an international team of researchers at MIT Media Lab, Chubu University, Imperial College London, and University of Augsburg.*
Children with autism-spectrum conditions often have trouble recognizing the emotional states of people around them — distinguishing a happy face from a fearful face, for instance. So some therapists use a kid-friendly robot to demonstrate those emotions and to engage the children in imitating the emotions and responding to them in appropriate ways.
Personalized autism therapy
But the MIT research team realized that deep learning would help the therapy robots perceive the children’s behavior more naturally, they report in a Science Robotics paper.
Personalization is especially important in autism therapy, according to the paper’s senior author, Rosalind Picard, PhD, a professor at MIT who leads research in affective computing: “If you have met one person with autism, you have met one person with autism,” she said, citing a famous adage.
“Computers will have emotional intelligence by 2029”… by which time, machines will “be funny, get the joke, and understand human emotion.” — Ray Kurzweil
“The challenge of using AI [artificial intelligence] that works in autism is particularly vexing, because the usual AI methods require a lot of data that are similar for each category that is learned,” says Picard, in explaining the need for deep learning. “In autism, where heterogeneity reigns, the normal AI approaches fail.”
How personalized robot-assisted therapy for autism would work
Robot-assisted therapy for autism often works something like this: A human therapist shows a child photos or flash cards of different faces meant to represent different emotions, to teach them how to recognize expressions of fear, sadness, or joy. The therapist then programs the robot to show these same emotions to the child, and observes the child as she or he engages with the robot. The child’s behavior provides valuable feedback that the robot and therapist need to go forward with the lesson.
“Therapists say that engaging the child for even a few seconds can be a big challenge for them. [But] robots attract the attention of the child,” says lead author Ognjen Rudovic, PhD, a postdoctoral fellow at the MIT Media Lab. “Also, humans change their expressions in many different ways, but the robots always do it in the same way, and this is less frustrating for the child because the child learns in a very structured way how the expressions will be shown.”
SoftBank Robotics | The researchers used NAO humanoid robots in this study. Almost two feet tall and resembling an armored superhero or a droid, NAO conveys different emotions by changing the color of its eyes, the motion of its limbs, and the tone of its voice.
However, this type of therapy would work best if the robot could also smoothly interpret the child’s own behavior — such as whether the child is excited or paying attention — during the therapy, according to the researchers. To test this assertion, researchers at the MIT Media Lab and Chubu University developed a personalized deep learning network that helps robots estimate the engagement and interest of each child during these interactions, they report.**
The researchers built a personalized framework that could learn from data collected on each individual child. They captured video of each child’s facial expressions, head and body movements, poses, and gestures; made audio recordings; and collected data on heart rate, body temperature, and skin-sweat response from a monitor on the child’s wrist.
Most of the children in the study reacted to the robot “not just as a toy but related to NAO respectfully, as if it was a real person,” said Rudovic, especially during storytelling, where the therapists asked how NAO would feel if the children took the robot for an ice cream treat.
In the study, the robots’ perception of the children’s responses agreed with assessments by human experts, with a relatively high correlation score of 60 percent, the researchers report.*** (It can be challenging for human observers to reach high levels of agreement about a child’s engagement and behavior. Their correlation scores are usually between 50 and 55 percent, according to the researchers.)
* The study was funded by grants from the Japanese Ministry of Education, Culture, Sports, Science and Technology; Chubu University; and the European Union’s HORIZON 2020 grant (EngageME).
** A deep-learning system uses multiple, hierarchical layers of data processing to improve its performance on a task, with each successive layer amounting to a slightly more abstract representation of the original raw data. Deep learning has been used in automatic speech- and object-recognition programs, making it well-suited to a problem such as making sense of the multiple features of the face, body, and voice that go into understanding a more abstract concept such as a child’s engagement.
Overview of the key stages (sensing, perception, and interaction) during robot-assisted autism therapy. Data from three modalities (audio, visual, and autonomic physiology) were recorded using unobtrusive audiovisual sensors and sensors worn on the child’s wrist, providing the child’s heart-rate, skin-conductance (EDA), body temperature, and accelerometer data. The focus of this work is the robot perception, for which we designed the personalized deep learning framework that can automatically estimate levels of the child’s affective states and engagement. These can then be used to optimize the child-robot interaction and monitor the therapy progress (see Interpretability and utility). The images were obtained by using Softbank Robotics software for the NAO robot. (credit: Ognjen Rudovic et al./Science Robotics)
“In the case of facial expressions, for instance, what parts of the face are the most important for estimation of engagement?” Rudovic says. “Deep learning allows the robot to directly extract the most important information from that data without the need for humans to manually craft those features.”
The robots’ personalized deep learning networks were built from layers of these video, audio, and physiological data, plus information about each child’s autism diagnosis and abilities, culture, and gender. The researchers then compared their estimates of the children’s behavior with estimates from five human experts, who coded the children’s video and audio recordings on a continuous scale to determine how pleased or upset, how interested, and how engaged the child seemed during the session.
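As a rough illustration of what such a multimodal network can look like, here is a minimal sketch in PyTorch; the modality names, feature sizes, and layer widths are all assumptions chosen for illustration, not the architecture from the Science Robotics paper.

```python
import torch
import torch.nn as nn

class EngagementNet(nn.Module):
    """Toy multimodal network: one small encoder per modality, fused into one continuous score."""
    def __init__(self, dims=None):
        super().__init__()
        # Hypothetical per-modality feature sizes (extracted face/body/audio/physiology features).
        dims = dims or {"face": 128, "body": 64, "audio": 40, "physio": 8}
        # Each successive layer forms a slightly more abstract representation of its modality.
        self.encoders = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 16), nn.ReLU())
            for name, d in dims.items()
        })
        # Fusion layers map the concatenated per-modality abstractions to one engagement estimate.
        self.head = nn.Sequential(nn.Linear(16 * len(dims), 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, inputs):
        feats = [enc(inputs[name]) for name, enc in self.encoders.items()]
        return self.head(torch.cat(feats, dim=-1))

# Usage with one batch of made-up per-frame features from each modality.
net = EngagementNet()
batch = {"face": torch.randn(4, 128), "body": torch.randn(4, 64),
         "audio": torch.randn(4, 40), "physio": torch.randn(4, 8)}
print(net(batch).shape)  # torch.Size([4, 1]) -- one continuous estimate per frame
```

Personalization, in a sketch like this, would amount to fine-tuning such a network on each child’s own data after pretraining on the pooled data.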
*** Trained on these personalized data coded by the humans, and tested on data not used in training or tuning the models, the networks significantly improved the robot’s automatic estimation of the child’s behavior for most of the children in the study, beyond what would be estimated if the network combined all the children’s data in a “one-size-fits-all” approach, the researchers found. Rudovic and colleagues were also able to probe how the deep learning network made its estimations, which uncovered some interesting cultural differences between the children. “For instance, children from Japan showed more body movements during episodes of high engagement, while in Serbs large body movements were associated with disengagement episodes,” Rudovic notes.
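The 50 to 60 percent figures quoted above are agreement (correlation) scores between continuous model estimates and the experts’ continuous codings. Here is a minimal sketch of how such a comparison might be computed, using made-up numbers in place of a real held-out session:

```python
import numpy as np

def agreement_percent(estimates, expert_codes):
    """Pearson correlation between model estimates and continuous expert codings, as a percentage."""
    return 100 * np.corrcoef(estimates, expert_codes)[0, 1]

# Hypothetical held-out session for one child: expert codings vs. two models.
rng = np.random.default_rng(1)
expert = rng.uniform(-1, 1, 300)                 # continuous engagement codes over time
personalized = expert + rng.normal(0, 0.9, 300)  # model tuned on this child's own data
pooled = expert + rng.normal(0, 1.6, 300)        # "one-size-fits-all" model trained on all children

print(f"personalized model agreement: {agreement_percent(personalized, expert):.0f}%")
print(f"pooled model agreement:       {agreement_percent(pooled, expert):.0f}%")
```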
(Left) Current stationary MEG scanner. (Right) New wearable scanner allows patients to move around, even play ping pong. (credit: National Institute of Mental Health and University of Nottingham)
A radical new wearable magnetoencephalography (MEG) brain scanner under development at the University of Nottingham allows a patient to move around, instead of having to sit or lie still inside a massive scanner.
Currently, MEG scanners* weigh around 500 kilograms (about 1,100 pounds) because they require bulky superconducting sensors refrigerated in a liquid-helium dewar at -269°C. Patients must keep still — even a 5 mm movement can ruin images of brain activity. That immobility severely limits the range of brain activities and experiences that can be studied, and makes the scanner unsuitable for children and many patients.
Natural movement, increased sensitivity
The new wearable, compact, non-invasive MEG technology** now makes it possible for patients to move around, which could revolutionize diagnosis and treatment of neurological disorders, say the researchers. Using compact, scalp-mounted sensors, it allows for natural movements in the real world, such as head nodding, stretching, drinking, and even playing ping pong.***
The new design also provides a four times increase in sensitivity in adults and a 15 to 20 times increase in infants, compared to existing MEG systems, according to the researchers. Brain events such as an epileptic seizure could be captured and movement disorders such as Parkinson’s disease could be studied in more precise detail. Now young patients could also have a brain scan, even if they need to fidget.
The new system also supports better targeting, diagnosis, and treatment of mental-health and neurological conditions, and allows brain function to be studied across wider ranges of social, environmental, and physical conditions.
“In a few more years, we could be imaging brain function while people do, quite literally, anything [using virtual-reality-based environments],” said Matt Brookes, Ph.D., director of the Sir Peter Mansfield Imaging Centre at the University of Nottingham.
The scientists next plan to design a bike-helmet-size scanner, offering more freedom of movement and a generic fit and allowing for the technology to be applied to a wider range of head sizes.
* MEG scanners are unique in allowing for precise whole-brain coverage and high resolution in both space and time, compared to EEG and MRI.
** Instead of a superconducting quantum interference device (SQUID), the new system is based on an array of optically pumped magnetometers (OPMs) — magnetic field sensors that rely on the atomic properties of alkali metals.
*** As is the case with current MEG scanners, special walls are required to block the Earth’s magnetic field. That constrains patient movement to a small room.
A user supervises and controls an autonomous robot using brain signals to detect mistakes and muscle signals to redirect a robot in a task to move a power drill to one of three possible targets on the body of a mock airplane. (credit: MIT)
Getting robots to do things isn’t easy. Usually, scientists have to either explicitly program them, or else train them to understand human language. Both options are a lot of work.
Now a new system developed by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Vienna University of Technology, and Boston University takes a simpler approach: It uses a human’s brainwaves and hand gestures to instantly correct robot mistakes.
Plug and play
Instead of trying to mentally guide the robot (which would require a complex, error-prone system and extensive operator training), the system identifies robot errors in real time by detecting a specific type of electroencephalogram (EEG) signal called “error-related potentials,” using a brain-computer interface (BCI) cap. These potentials (voltage spikes) are unconsciously produced in the brain when people notice mistakes — no user training required.
If an error-related potential signal is detected, the system automatically stops. That allows the supervisor to correct the robot by simply flicking a wrist — generating an electromyogram (EMG) signal that is detected by a muscle sensor in the supervisor’s arm to provide specific instructions to the robot.*
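A rough sketch of that supervisory flow is below. Everything here is hypothetical scaffolding (the robot interface, stream objects, and classifier stubs are invented names), intended only to show the halt-on-error, redirect-by-gesture logic described above.

```python
import time

# Hypothetical decoder stubs: in a real system these would wrap trained classifiers
# for EEG error-related potentials and EMG wrist gestures.
def errp_detected(eeg_window) -> bool:
    ...  # True when an error-related potential is classified in the window

def classify_gesture(emg_window) -> str:
    ...  # returns "left", "right", or "none"

def supervisory_loop(robot, eeg_stream, emg_stream, targets):
    """Toy version of the EEG + EMG supervisory flow: halt on a detected error,
    then let wrist gestures scroll through candidate targets."""
    selected = 0
    robot.move_toward(targets[selected])
    while not robot.done():
        if errp_detected(eeg_stream.latest(seconds=0.8)):  # brain signals "that looks wrong"
            robot.halt()
            while not robot.confirmed():                   # e.g., confirm by holding still
                gesture = classify_gesture(emg_stream.latest(seconds=0.5))
                if gesture == "left":
                    selected = (selected - 1) % len(targets)
                elif gesture == "right":
                    selected = (selected + 1) % len(targets)
                time.sleep(0.05)
            robot.move_toward(targets[selected])
```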
To develop the system, the researchers used “Baxter,” a popular humanoid robot from Rethink Robotics, shown here folding a shirt. (credit: Rethink Robotics)
Remarkably, the “plug and play” system works without requiring supervisors to be trained. So organizations could easily deploy it in real-world use in manufacturing and other areas. Supervisors can even manage teams of robots.**
For the project, the team used “Baxter,” a humanoid robot from Rethink Robotics. With human supervision, the robot went from choosing the correct target 70 percent of the time to more than 97 percent of the time in a multi-target selection task for a mock drilling operation.
“This work combining EEG and EMG feedback enables natural human-robot interactions for a broader set of applications than we’ve been able to do before using only EEG feedback,” says CSAIL Director Daniela Rus, who supervised the work. “By including muscle feedback, we can use gestures to command the robot spatially, with much more nuance and specificity.”
“A more natural and intuitive extension of us”
The team says that they could imagine the system one day being useful for the elderly, or workers with language disorders or limited mobility.
“We’d like to move away from a world where people have to adapt to the constraints of machines,” says Rus. “Approaches like this show that it’s very much possible to develop robotic systems that are a more natural and intuitive extension of us.”
* EEG and EMG both have some individual shortcomings: EEG signals are not always reliably detectable, while EMG signals can sometimes be difficult to map to motions that are any more specific than “move left or right.” Merging the two, however, allows for more robust bio-sensing and makes it possible for the system to work on new users without training.
** The “plug and play” supervisory control system
“If an [error-related potential] or a gesture is detected, the robot halts and requests assistance. The human then gestures to the left or right to naturally scroll through possible targets. Once the correct target is selected, the robot resumes autonomous operation. … The system includes an experiment controller and the Baxter robot as well as EMG and EEG data acquisition and classification systems. A mechanical contact switch on the robot’s arm detects initiation of robot arm motion. A human supervisor closes the loop.” — Joseph DelPreto et al. Plug-and-Play Supervisory Control Using Muscle and Brain Signals for Real-Time Gesture and Error Detection. Robotics: Science and Systems Proceedings (forthcoming). (credit: MIT)
American and Korean researchers are creating an artificial nerve system for robots and humans. (credit: Kevin Craft)
Researchers at Stanford University and Seoul National University have developed an artificial sensory nerve system that’s a step toward artificial skin for prosthetic limbs, restoring sensation to amputees, and giving robots human-like reflexes.*
(Biological model) Pressures applied to afferent (sensory) mechanoreceptors (pressure sensors, in this case) in the finger change the receptor potential (voltage) of each mechanoreceptor. The receptor potential changes combine and initiate action potentials in the nerve fiber, connected to a heminode in the chest. The nerve fiber forms synapses with interneurons in the spinal cord. Action potentials from multiple nerve fibers combine through the synapses and contribute to information processing (via postsynaptic potentials). (credit: Yeongin Kim (Stanford University), Alex Chortos (Stanford University), Wentao Xu (Seoul National University), Zhenan Bao (Stanford University), Tae-Woo Lee (Seoul National University))
(Artificial model) Illustration of a corresponding artificial afferent nerve system made of pressure sensors, an organic ring oscillator (simulates a neuron), and a transistor that simulates a synapse. (Only one ring oscillator connected to a synaptic transistor is shown here for simplicity.) Colors of parts match corresponding colors in the biological version. (credit: Yeongin Kim (Stanford University), Alex Chortos (Stanford University), Wentao Xu (Seoul National University), Zhenan Bao (Stanford University), Tae-Woo Lee (Seoul National University))
(Photo) Artificial sensor, artificial neuron, and artificial synapse. (credit: Yeongin Kim (Stanford University), Alex Chortos (Stanford University), Wentao Xu (Seoul National University), Zhenan Bao (Stanford University), Tae-Woo Lee (Seoul National University))
Experiments with the artificial nerve circuit
In a demonstration experiment, the researchers used the artificial nerve circuit to activate the twitch reflex in the knee of a cockroach.
A cockroach (A) with an attached artificial mechanosensory nerve was used in this experiment. The artificial afferent nerve (B) was connected to the biological motor (movement) nerves of a detached insect leg (B, lower right) to demonstrate a hybrid reflex arc (such as a knee reflex). Applied pressure caused a reflex movement of the leg. A force gauge (C) was used to measure the force of the reflex movements of the detached insect leg. (credit: Yeongin Kim (Stanford University), Alex Chortos (Stanford University), Wentao Xu (Seoul National University), Zhenan Bao (Stanford University), Tae-Woo Lee (Seoul National University))
The researchers did another experiment that showed how the artificial nerve system could be used to identify letters in the Braille alphabet.
Improving robot and human sensory abilities
iCub robot (credit: D. Farina/Istituto Italiano Di Tecnologia)
The researchers “used a knee reflex as an example of how more-advanced artificial nerve circuits might one day be part of an artificial skin that would give prosthetic devices or robots both senses and reflexes,” noted Chiara Bartolozzi, Ph.D., of Istituto Italiano Di Tecnologia, writing in a Science commentary on the research.
Tactile information from artificial tactile systems “can improve the interaction of a robot with objects,” says Bartolozzi, who is involved in research with the iCub robot.
“In this scenario, objects can be better recognized because touch complements the information gathered from vision about the shape of occluded or badly illuminated regions of the object, such as its texture or hardness. Tactile information also allows objects to be better manipulated — for example, by exploiting contact and slip detection to maintain a stable but gentle grasp of fragile or soft objects (see the photo). …
“Information about shape, softness, slip, and contact forces also greatly improves the usability of upper-limb prosthetics in fine manipulation. … The advantage of the technology devised by Kim et al. is the possibility of covering at a reasonable cost larger surfaces, such as fingers, palms, and the rest of the prosthetic device.
“Safety is enhanced when sensing contacts inform the wearer that the limb is encountering obstacles. The acceptability of the artificial hand by the wearer is also improved because the limb is perceived as part of the body, rather than as an external device. Lower-limb prostheses can take advantage of the same technology, which can also provide feedback about the distribution of the forces at the foot while walking.”
Next research steps
The researchers plan next to create artificial skin coverings for prosthetic devices, which will require new devices to detect heat and other sensations, the ability to embed them into flexible circuits, and then a way to interface all of this to the brain. They also hope to create low-power, artificial sensor nets to cover robots. The idea is to make them more agile by providing some of the same feedback that humans derive from their skin.
“We take skin for granted but it’s a complex sensing, signaling and decision-making system,” said Zhenan Bao, Ph.D., a Stanford professor of chemical engineering and one of the senior authors. “This artificial sensory nerve system is a step toward making skin-like sensory neural networks for all sorts of applications.”
This milestone is part of Bao’s quest to mimic how skin can stretch, repair itself, and, most remarkably, act like a smart sensory network that knows not only how to transmit pleasant sensations to the brain, but also when to order the muscles to react reflexively to make prompt decisions.
The synaptic transistor is the brainchild of Tae-Woo Lee of Seoul National University, who spent his sabbatical year in Bao’s Stanford lab to initiate the collaborative work.
* This work was funded by the Ministry of Science and ICT, Korea; by Seoul National University (SNU); by Samsung Electronics; by the National Nanotechnology Coordinated Infrastructure; and by the Stanford Nano Shared Facilities (SNSF). Patents related to this work are planned.
These four marker proteins (top row) are involved in controlling entry of molecules into the brain via the blood brain barrier. Here, the scientists illustrate one form of damage to the blood brain barrier in ischemic stroke conditions, as revealed by changes (bottom row) in these markers. (credit: WFIRM)
Wake Forest Institute for Regenerative Medicine (WFIRM) scientists have developed a 3-D brain organoid (tiny artificial organ) that could have potential applications in drug discovery and disease modeling.
The scientists say this is the first engineered tissue-equivalent to closely resemble normal human brain anatomy — containing all six major cell types found in normal brain tissue, including neurons and immune cells.
The advanced 3-D organoids promote the formation of a fully cell-based, natural, and functional version of the blood brain barrier (a semipermeable membrane that separates the circulating blood from the brain, protecting it from foreign substances that could cause injury).
The new artificial organ model can help improve understanding of disease mechanisms at the blood brain barrier (BBB), the passage of drugs through the barrier, and the effects of drugs once they cross the barrier.
Faster drug discovery and screening
The shortage of effective therapies and the low success rate of investigational drugs are (in part) due to the fact that we do not have human-like tissue models for testing, according to senior author Anthony Atala, M.D., director of WFIRM. “The development of tissue-engineered 3D brain tissue equivalents such as these can help advance the science toward better treatments and improve patients’ lives,” he said.
The development of the model opens the door to speedier drug discovery and screening. This applies both to diseases like HIV, where pathogens hide in the brain, and to disease modeling of neurological conditions such as Alzheimer’s disease, multiple sclerosis, and Parkinson’s disease, with the goal of better understanding their pathways and progression.
“To date, most in vitro [lab] BBB models [only] utilize endothelial cells, pericytes and astrocytes,” the researchers note in a paper. “We report a 3D spheroid model of the BBB comprising all major cell types, including neurons, microglia, and oligodendrocytes, to recapitulate more closely normal human brain tissue.”
So far, the researchers have used the brain organoids to measure the effects of (mimicked) strokes on impairment of the blood brain barrier, and have successfully tested permeability (ability of molecules to pass through the BBB) of large and small molecules.
Noninvasive brain–computer interface (BCI) systems can restore functions lost to disability — allowing for spontaneous, direct brain control of external devices without the risks associated with surgical implantation of neural interfaces. But as machine-learning algorithms have become faster and more powerful, researchers have mostly focused on increasing performance by optimizing pattern-recognition algorithms.
But what about letting patients actively participate with AI in improving performance?
To test that idea, researchers at the École Polytechnique Fédérale de Lausanne (EPFL), based in Geneva, Switzerland, conducted a “mutual learning” experiment between computer and human, with two severely impaired (tetraplegic) participants with chronic spinal cord injury. The goal: win a live virtual racing game at an international event.
Controlling a racing-game avatar using a BCI
A computer graphical user interface for the race track in the Cybathlon 2016 “Brain Runners” game. “Pilots” (participants) had to deliver (by thinking) the proper command in each color pad (cyan, magenta, yellow) to accelerate their own avatar in the race. (credit: Serafeim Perdikis and Robert Leeb)
The participants were trained to improve control of an avatar (a person-substitute shown on a computer screen) in a virtual racing game. The experiment used a brain-computer interface (BCI), which uses electrodes on the head to pick up control signals from a person’s brain.
Each participant (called a “pilot”) controlled an on-screen avatar in a three-part race. This required mastery of separate commands for spinning, jumping, sliding, and walking without stumbling.
After training for several months, on Oct. 8, 2016, the two pilots participated (on the “Brain Tweakers” team) in Cybathlon in Zurich, Switzerland — the first international para-Olympics for disabled individuals in control of bionic assistive technology.*
The BCI-based race consisted of four brain-controlled avatars competing in a virtual racing game called “Brain Runners.” To accelerate their avatars, the pilots had to issue up to three mental commands (or intentionally idle) on corresponding color-coded track segments.
Maximizing BCI performance by humanizing mutual learning
The researchers believe that with the mutual-learning approach, they have “maximized the chances for human learning by infrequent recalibration of the computer, leaving time for the human to better learn how to control the sensorimotor rhythms that would most efficiently evoke the desired avatar movement. Our results showcase strong and continuous learning effects at all targeted levels — machine, subject, and application — with both [participants] over a longitudinal study lasting several months,” the researchers conclude.
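One way to picture the machine side of that mutual-learning strategy is a training schedule in which the decoder is deliberately left frozen for long stretches, so the human does most of the adapting. The sketch below is purely illustrative; the decoder object and recalibration interval are assumptions, not the EPFL team’s protocol.

```python
RECALIBRATE_EVERY = 10  # sessions between decoder updates (illustrative value)

def run_training_program(sessions, decoder, fit_decoder):
    """Alternate long stretches of human practice with a fixed decoder
    and occasional machine-side recalibration."""
    accuracy_history = []
    for i, session_data in enumerate(sessions, start=1):
        accuracy_history.append(decoder.evaluate(session_data))  # human practices, decoder frozen
        if i % RECALIBRATE_EVERY == 0:
            decoder = fit_decoder(session_data)                  # infrequent recalibration
    return accuracy_history
```

Keeping the decoder stable between recalibrations gives the pilot a consistent mapping from sensorimotor rhythms to avatar commands to learn against.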
Reference (open access): PLoS Biology, May 10, 2018
* At Cybathlon, each team comprised a pilot together with scientists and technology providers of the functional and assistive devices used, which can be prototypes developed by research labs or companies, or commercially available products. That also makes Cybathlon a competition between companies and research laboratories. The next Cybathlon will be held in Zurich in 2020.
Visualizing the brain: Here, tissue from a human dentate gyrus (a part of the brain’s hippocampus that is involved in the formation of new memories) was imaged transparently in 3D and color-coded to reveal the distribution and types of nerve cells. (credit: The University of Hong Kong)
Visualizing human brain tissue in vibrant transparent colors
Neuroscientists from The University of Hong Kong (HKU) and Imperial College London have developed a new method called “OPTIClear” for 3D transparent color visualization (at the microscopic level) of complex human brain circuits.
To understand how the brain works, neuroscientists map how neurons (nerve cells) are wired to form circuits in both healthy and disease states. To do that, the scientists typically cut brain tissues into thin slices. Then they trace the entangled fibers across those slices — a complex, laborious process.
Making human tissues transparent. OPTIClear replaces that process by “clearing” (making tissues transparent) and using fluorescent staining to identify different types of neurons. In one study of more than 3,000 large neurons in the human basal forebrain, the researchers were able to reduce the time needed to visualize neurons, glial cells, and blood vessels in exquisite 3D detail from about three weeks to five days. Previous clearing methods (such as CLARITY) have been limited to rodent tissue.
Watching millions of brain cells in a moving animal for the first time
Neurons in the hippocampus flash on and off as a mouse walks around with tiny camera lenses on its head. (credit: The Rockefeller University)
It’s a neuroscientist’s dream: being able to track the millions of interactions among brain cells in animals that move about freely — allowing for studying brain disorders. Now a new invention, developed at The Rockefeller University and reported today, is expected to give researchers a dynamic tool to do just that, eventually in humans.
The new tool can track neurons located at different depths within a volume of brain tissue in a freely moving rodent, or record the interplay among neurons when two animals meet and interact socially.
Microlens array for 3D recording. The technology consists of a tiny microscope attached to a mouse’s head, with a group of lenses called a “microlens array.” These lenses enable the microscope to capture images from multiple angles and depths on a sensor chip, producing a three-dimensional record of neurons blinking on and off as they communicate with each other through electrochemical impulses. (The mouse neurons are genetically modified to light up when they become activated.) A cable attached to the top of the microscope transmits the data for recording.
One challenge: Brain tissue is opaque, making light scatter, which makes it difficult to pinpoint the source of each neuronal light flash. The researchers’ solution: a new computer algorithm (program), known as SID, that extracts additional information from the scattered emission light.
Illustration: An astrocyte (green) interacts with a synapse (red), producing an optical signal (yellow). (credit: UCLA/Khakh lab)
Researchers at the David Geffen School of Medicine at UCLA can now peer deep inside a mouse’s brain to watch how star-shaped astrocytes (support glial cells in the brain) interact with synapses (the junctions between neurons) to signal each other and convey messages.
The method uses different colors of light passed through a lens to magnify objects that are invisible to the naked eye, at far smaller scales than earlier techniques could resolve. That enables researchers to observe, for example, how brain damage alters the way astrocytes interact with neurons, and to develop strategies to address these changes.
Astrocytes are believed to play a key role in neurological disorders like Lou Gehrig’s, Alzheimer’s, and Huntington’s disease.
Reference: Neuron. Source: UCLA Khakh lab April 4, 2018.
A timeline of the Universe based on the cosmic inflation theory (credit: WMAP science team/NASA)
Stephen Hawking’s final cosmology theory says the universe was created instantly (no inflation, no singularity) and it’s a hologram
There was no singularity just after the big bang (and thus, no eternal inflation) — the universe was created instantly. And there were only three dimensions. So there’s only one finite universe, not a fractal or a multiverse — and we’re living in a projected hologram. That’s what Hawking and co-author Thomas Hertog (a theoretical physicist at the Catholic University of Leuven) have concluded — contradicting Hawking’s former big-bang singularity theory (with time as a dimension).
Problem: So how does time finally emerge? “There’s a lot of work to be done,” admits Hertog. Citation (open access): Journal of High Energy Physics, May 2, 2018. Source (open access): Science, May 2, 2018
Movies capture the dynamics of an RNA molecule from the HIV-1 virus. (photo credit: Yu Xu et al.)
Molecular movies of RNA — a new paradigm for drug discovery
Duke University scientists have invented a technique that combines nuclear magnetic resonance (NMR) data and computationally generated movies to capture the rapidly changing states of an RNA molecule.
It could lead to new drug targets and allow for screening millions of potential drug candidates. So far, the technique has predicted 78 compounds (and their preferred molecular shapes) with anti-HIV activity, out of 100,000 candidate compounds. Citation: Nature Structural and Molecular Biology, May 4, 2018. Source: Duke University, May 4, 2018.
Chromium tri-iodide magnetic layers between graphene conductors. By using four layers, the storage density could be multiplied. (credit: Tiancheng Song)
Atomically thin magnetic memory
University of Washington scientists have developed the first 2D (in a flat plane) atomically thin magnetic memory — encoding information using magnets that are just a few layers of atoms in thickness — a miniaturized, high-efficiency alternative to current disk-drive materials.
In an experiment, the researchers sandwiched two atomic layers of chromium tri-iodide (CrI3) — acting as memory bits — between graphene contacts and measured the on/off electron flow through the atomic layers.
The U.S. Dept. of Energy-funded research could dramatically increase future data-storage density while reducing energy consumption by orders of magnitude. Citation: Science, May 3, 2018. Source: University of Washington, May 3, 2018.
Definitions of artificial intelligence (credit: House of Lords Select Committee on Artificial Intelligence)
A Magna Carta for the AI age
A report by the House of Lords Select Committee on Artificial Intelligence in the U.K. lays out “an overall charter for AI that can frame practical interventions by governments and other public agencies.”
The key elements: AI should be developed for the common good; operate on principles of intelligibility and fairness (users must be able to easily understand the terms under which their personal data will be used); respect rights to privacy; be grounded in far-reaching changes to education (teaching needs reform to utilize digital resources, and students must learn not only digital skills but also how to develop a critical perspective online); and never be given the autonomous power to hurt, destroy, or deceive human beings.
Memes and social networks have become weaponized, but many governments seem ill-equipped to understand the new reality of information warfare.
The weapons include: Computational propaganda: digitizing the manipulation of public opinion; advanced digital deception technologies; malicious AI impersonating and manipulating people; and AI-generated fake video and audio. Counter-weapons include: Spotting AI-generated people; uncovering hidden metadata to authenticate images and videos; blockchain for tracing digital content back to the source; and detecting image and video manipulation at scale.
Magnetic calcium-responsive nanoparticles (dark centers are magnetic cores) respond within seconds to calcium ion changes by clustering (Ca+ ions, right) or expanding (Ca- ions, left), creating a magnetic contrast change that can be detected with MRI, indicating brain activation. (High levels of calcium outside the neurons correlate with low neuron activity; when calcium concentrations drop, it means neurons in that area are firing electrical impulses.) Blue: C2AB “molecular glue” (credit: The researchers)
Calcium-based MRI sensor enables deep brain imaging
MIT neuroscientists have developed a new magnetic resonance imaging (MRI) sensor that allows them to monitor neural activity deep within the brain by tracking calcium ions.
Calcium ions are directly linked to neuronal firing, providing a high-resolution signal — unlike the changes in blood flow detected by functional MRI (fMRI), which provide only an indirect indication of neural activity. The new sensor can also monitor large areas of the brain, unlike fluorescent molecules used to label calcium and image it with traditional microscopy, which are limited to small regions.
A calcium-based MRI sensor could allow researchers to link specific brain functions directly to specific neuron activity, and to determine how distant brain regions communicate with each other during particular tasks. The research is described in a paper in the April 30 issue of Nature Nanotechnology. Source: MIT
New technique for measuring blood flow in the brain uses laser light shined into the head (“sample arm” path) through the skull. The return signal is boosted by a reference light beam and returned to a detector camera chip. (credit: Srinivasan lab, UC Davis)
Measuring deep-tissue blood flow at high speed
Biomedical engineers at the University of California, Davis, have developed a more-effective, lower-cost technique for measuring deep tissue blood flow in the brain at high speed. It could be especially useful for patients with stroke or traumatic brain injury.
The technique, called “interferometric diffusing wave spectroscopy” (iDWS), replaces about 20 photon-counting detectors in diffusing wave spectroscopy (DWS) devices (which cost a few thousand dollars each) with a single low-cost CMOS-based digital-camera chip.
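Diffusing-wave spectroscopy infers blood flow from how quickly the detected speckle intensity decorrelates; with a camera chip, many pixels can be averaged to recover that autocorrelation instead of using many photon-counting channels. Below is a minimal sketch of the basic autocorrelation calculation with stand-in data; it is not the authors’ iDWS pipeline, which additionally boosts the weak signal by interference with a reference beam.

```python
import numpy as np

def intensity_autocorrelation(I, max_lag):
    """Normalized intensity autocorrelation g2(tau) of one pixel's time series.
    In diffuse optical flowmetry, a faster decay of g2 indicates faster blood flow."""
    I = np.asarray(I, dtype=float)
    mean_sq = I.mean() ** 2
    return np.array([np.mean(I[:len(I) - lag] * I[lag:]) / mean_sq
                     for lag in range(1, max_lag + 1)])

# Stand-in data: rows = camera pixels (speckles), columns = time samples.
rng = np.random.default_rng(2)
frames = rng.gamma(shape=1.0, scale=1.0, size=(64, 5000))

# Averaging g2 over many cheap camera pixels substitutes for many expensive detectors.
g2 = np.mean([intensity_autocorrelation(pixel, max_lag=50) for pixel in frames], axis=0)
```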
The NIH-funded work is described in an open-access paper published April 26 in the journal Optica. Source: UC Davis