IBM researchers use analog memory to train deep neural networks faster and more efficiently

Crossbar arrays of non-volatile memories can accelerate the training of neural networks by performing computation at the actual location of the data. (credit: IBM Research)

Imagine advanced artificial intelligence (AI) running on your smartphone — instantly presenting the information that’s relevant to you in real time. Or a supercomputer that requires hundreds of times less energy.

The IBM Research AI team has demonstrated a new approach that they believe is a major step toward those scenarios.

Deep neural networks normally require fast, powerful graphics processing unit (GPU) hardware accelerators to support the needed high speed and computational accuracy, such as the GPU devices used in the just-announced Summit supercomputer. But GPUs are highly energy-intensive, making their use expensive and limiting their future growth, the researchers explain in a recent paper published in Nature.

Analog memory replaces software, overcoming the “von Neumann bottleneck”

Instead, the IBM researchers used large arrays of non-volatile analog memory devices (which use continuously variable signals rather than binary 0s and 1s) to perform computations. Those arrays allowed the researchers to create, in hardware, the same scale and precision of AI calculations that are achieved by more energy-intensive systems in software, but running hundreds of times faster and at hundreds of times lower power — without sacrificing the ability to create deep learning systems.*

The trick was to replace conventional von Neumann architecture, which is “constrained by the time and energy spent moving data back and forth between the memory and the processor (the ‘von Neumann bottleneck’),” the researchers explain in the paper. “By contrast, in a non-von Neumann scheme, computing is done at the location of the data [in memory], with the strengths of the synaptic connections (the ‘weights’) stored and adjusted directly in memory.

“Delivering the future of AI will require vastly expanding the scale of AI calculations,” they note. “Instead of shipping digital data on long journeys between digital memory chips and processing chips, we can perform all of the computation inside the analog memory chip. We believe this is a major step on the path to the kind of hardware accelerators necessary for the next AI breakthroughs.”**

Given these encouraging results, the IBM researchers have already started exploring the design of prototype hardware accelerator chips, as part of an IBM Research Frontiers Institute project, they said.

Ref.: Nature. Source: IBM Research

* “From these early design efforts, we were able to provide, as part of our Nature paper, initial estimates for the potential of such [non-volatile memory]-based chips for training fully-connected layers, in terms of the computational energy efficiency (28,065 GOP/sec/W) and throughput-per-area (3.6 TOP/sec/mm²). These values exceed the specifications of today’s GPUs by two orders of magnitude. Furthermore, fully-connected layers are a type of neural network layer for which actual GPU performance frequently falls well below the rated specifications. … Analog non-volatile memories can efficiently accelerate [the algorithms] at the heart of many recent AI advances. These memories allow the “multiply-accumulate” operations used throughout these algorithms to be parallelized in the analog domain, at the location of weight data, using underlying physics. Instead of large circuits to multiply and add digital numbers together, we simply pass a small current through a resistor into a wire, and then connect many such wires together to let the currents build up. This lets us perform many calculations at the same time, rather than one after the other.
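A rough way to picture the analog multiply-accumulate described above is Ohm’s law plus Kirchhoff’s current law on a crossbar: each weight is stored as a conductance, each input is applied as a voltage, and each column wire sums the resulting currents. The Python sketch below is illustrative only, with made-up layer sizes, conductance ranges, and a crude variability term; it is not IBM’s hardware or code.

```python
import numpy as np

# Idealized analog crossbar: weights stored as conductances G (siemens),
# inputs applied as voltages V (volts). Each column wire sums the currents
# I = V * G flowing through its devices (Kirchhoff's current law), so the
# column currents equal the vector-matrix product V @ G, computed "in memory".

rng = np.random.default_rng(0)

n_inputs, n_outputs = 784, 250                              # hypothetical fully connected layer
G = rng.uniform(1e-6, 1e-4, size=(n_inputs, n_outputs))    # device conductances
V = rng.uniform(0.0, 0.2, size=n_inputs)                    # input voltages

# Ideal analog result: every multiply-accumulate happens in parallel.
I_columns = V @ G                                            # column output currents (amperes)

# Real devices are imperfect; add a crude device-to-device variability term.
read_noise = rng.normal(0.0, 0.02, size=I_columns.shape) * I_columns
I_measured = I_columns + read_noise

print("ideal column currents:   ", I_columns[:3])
print("with ~2% device variation:", I_measured[:3])
```

In digital hardware each of those 784 × 250 multiplications would be a separate operation; on a crossbar they happen simultaneously as physics, which is where the claimed speed and energy gains come from.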

** “By combining long-term storage in phase-change memory (PCM) devices, near-linear update of conventional complementary metal-oxide semiconductor (CMOS) capacitors and novel techniques for cancelling out device-to-device variability, we finessed these imperfections and achieved software-equivalent DNN accuracies on a variety of different networks. These experiments used a mixed hardware-software approach, combining software simulations of system elements that are easy to model accurately (such as CMOS devices) together with full hardware implementation of the PCM devices.  It was essential to use real analog memory devices for every weight in our neural networks, because modeling approaches for such novel devices frequently fail to capture the full range of device-to-device variability they can exhibit.”

Summit supercomputer is world’s fastest

(credit: Oak Ridge National Laboratory)

Summit, the world’s most powerful supercomputer with a peak performance of 200,000 trillion calculations per second (200 petaflops*), was announced June 8 by the U.S. Department of Energy’s Oak Ridge National Laboratory (ORNL).

The previous leading supercomputer was China’s Sunway TaihuLight, with 125 petaflops peak performance.**

Summit will enable researchers to apply techniques like machine learning and deep learning to problems in human health such as genetics and cancer, high-energy physics (such as astrophysics and fusion energy), discovery of new materials, climate modeling, and other scientific discoveries that were previously impractical or impossible, according to ORNL.

“It’s at least a hundred times more computation than we’ve been able to do on earlier machines,” said ORNL computational astrophysicist Bronson Messer.

Summit supercomputer chips (credit: ORNL)

Summit’s IBM system has more than 10 petabytes (10,000 trillion bytes) of memory and 4,608 servers — each containing two 22-core IBM Power9 processors and six NVIDIA Tesla V100 graphics processing unit (GPU) accelerators. (“For IBM, Summit represents a great opportunity to showcase its Power9-GPU AC922 server to other potential HPC and enterprise customers,” notes Michael Feldman, Managing Editor of Top 500 News.)

Exascale next

Summit will be eight times more powerful than ORNL’s previous top-ranked system, Titan. For certain scientific applications, Summit will also be capable of more than three billion billion mixed-precision calculations per second, or 3.3 exaops.

Summit is a step closer to the U.S. goal of creating an exascale (1 exaflop* or 1,000 petaflops) supercomputing system by 2021. (However, China has multiple exaflop projects expected to be running a year or more before the U.S. has a system at that level, according to EE Times.)

Summit is part of the Oak Ridge Leadership Computing Facility at DOE’s Office of Science.

(credit: ORNL)

* A petaflop is 10^15 (1,000 trillion) floating point operations per second (“floating point” refers to the large number of decimal-point locations required for the wide range of numbers used in scientific calculations, including very small numbers and very large numbers). An exaflop is 10^18 floating point operations per second.
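For concreteness, the figures quoted above work out as follows (simple unit conversion of the numbers already given, noting that the exaops figure is for mixed-precision operations and so is not directly comparable to the petaflops peak):

```latex
\begin{aligned}
1~\text{petaflop} &= 10^{15}~\text{flop/s}, \qquad
1~\text{exaflop} = 10^{18}~\text{flop/s} = 1{,}000~\text{petaflops},\\
200~\text{petaflops} &= 2 \times 10^{17}~\text{flop/s}, \qquad
3.3~\text{exaops} = 3.3 \times 10^{18}~\text{ops/s}.
\end{aligned}
```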

** The “peak” rating refers to a supercomputer’s theoretical maximum performance. A more meaningful measure is “Rmax” — a score that describes a supercomputer’s maximal measured performance on a Linpack benchmark. Rmax for the Summit has not yet been announced.

roundup | AI powers cars, photos, phones, and people

(credit: BDD Industry Consortium)

Huge self-driving-car video dataset may help reduce accidents

Berkeley Deep Drive, the largest-ever self-driving car dataset, has been released by the BDD Industry Consortium for free public download. It features 100,000 HD driving videos with labeled objects, GPS, and other data, making it 800 times larger than Baidu’s Apollo dataset. The goal is to apply computer vision research, including deep reinforcement learning for object tracking, to the automotive field.

Berkeley researchers plan to add to the dataset, including panorama and stereo videos, LiDAR, and radar. Ref.: arXiv. Source: BDD Industry Consortium.


A “privacy filter” that disrupts facial-recognition algorithms. A “difference” filter alters very specific pixels in the image, making subtle changes (such as in the corner of the eyes). (credit: Avishek Bose)

A “privacy filter” for photos

University of Toronto engineering researchers have created an artificial intelligence (AI) algorithm (computer program) to disrupt facial recognition systems and protect privacy. It uses a deep-learning technique called “adversarial training,” which pits two algorithms against each other — one to identify faces, and the second to disrupt the facial recognition task of the first.

The algorithm also disrupts image-based search, feature identification, emotion and ethnicity estimation, and other face-based attributes that can be extracted automatically. It will be available as an app or website. Ref.: Github. Source: University of Toronto.
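The adversarial idea described above can be illustrated with a deliberately tiny toy model: nudge an input slightly in the direction that most degrades a classifier’s output, while keeping the change visually subtle. The sketch below uses a hypothetical logistic-regression “attribute detector” with random weights and a gradient-sign perturbation; it is a minimal illustration of adversarial perturbation in general, not the University of Toronto algorithm, which trains a second network to generate the perturbations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a face-attribute classifier: logistic regression on a
# flattened 8x8 "image". The weights are random here, purely for illustration.
w = rng.normal(size=64)
b = 0.0

def predict(x):
    """Probability the toy classifier assigns to the attribute being present."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = rng.uniform(0.0, 1.0, size=64)        # a fake input image
p_before = predict(x)

# For this linear model the gradient of the logit with respect to the input
# is simply w, so stepping each pixel against sign(w) lowers the score
# (a fast-gradient-sign-style perturbation).
epsilon = 0.03                             # small step keeps the change subtle
x_adv = np.clip(x - epsilon * np.sign(w), 0.0, 1.0)

p_after = predict(x_adv)
print(f"score before: {p_before:.3f}, after perturbation: {p_after:.3f}")
print(f"max pixel change: {np.abs(x_adv - x).max():.3f}")
```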


Developers of the more than 2 million iOS apps will be able to hook into Siri’s new Suggestions feature, with help from a new “Create ML” tool. (credit: TechCrunch)

A smarter Siri

“Apple is turning its iPhone into a highly personalized device, powered by its [improved] Siri AI,” says TechCrunch, reporting on the just-concluded Apple Worldwide Developers Conference. With the new “Suggestions” feature — to be available with Apple’s iOS 12 mobile operating system (in autumn 2018) — Siri will offer suggestions to users, such as texting someone that you’re running late to a meeting.

The Photos app will also get smarter, with a new tab that will “prompt users to share photos taken with other people, thanks to facial recognition and machine learning,” for example, says TechCrunch. Along with Core ML (announced last year), a new tool called “Create ML” should help Apple developers build machine learning models, reports Wired.


(credit: Loughborough University)

AI detects illnesses in human breath

Researchers at Loughborough University in the U.K. have developed deep-learning networks that can detect illness-revealing chemical compounds in breath samples, with potentially wide applications in medicine, forensics, environmental analysis, and other fields.

The new process is cheaper and more reliable, taking only minutes to autonomously analyze a breath sample that previously took a human expert hours using gas-chromatography mass-spectrometry (GC-MS). The initial study focused on recognizing a group of chemicals called aldehydes, which are often associated with fragrances but also with human stress conditions and illnesses. Source: The Conversation.

Overcoming transistor miniaturization limits due to ‘quantum tunneling’

An illustration of a single-molecule device that blocks leakage current in a transistor (yellow: gold transistor electrodes) (credit: Haixing Li/Columbia Engineering)

A team of researchers at Columbia Engineering and collaborating institutions* has synthesized a molecule that could overcome a major physical limit to miniaturizing computer transistors below about 3 nanometers: “leakage current.”

Leakage current between two metal transistor electrodes results when the gap between the electrodes narrows to the point that electrons are no longer contained by their barriers — a phenomenon known as quantum tunneling.
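As a rough textbook picture (a standard WKB-style approximation, not taken from the Columbia paper), the probability of an electron tunneling through a barrier falls off exponentially with the barrier width d and height φ, which is why leakage grows so quickly once gaps shrink to a few nanometers:

```latex
T \;\propto\; \exp\!\left( -\,\frac{2 d \sqrt{2 m \varphi}}{\hbar} \right)
```

Here m is the electron mass and ħ is the reduced Planck constant.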

The researchers synthesized the first molecule** capable of insulating (preventing electron flow) at the nanometer scale more effectively than a vacuum barrier (the traditional approach). The molecule bridges the nanometer gap between two metal electrodes.

Constructive interference (left) between two waves increases the resulting wave; destructive interference (right) decreases the resulting wave. (credit: Wikipedia)

The silicon-based molecule design uses “destructive quantum interference,” which occurs when the peaks and valleys of two waves are exactly out of phase, cancelling the oscillation.
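In the simplest picture (an illustration of the general principle, not the molecular-orbital calculation in the paper), two transmission pathways of equal amplitude that arrive exactly out of phase cancel each other:

```latex
\psi_{\text{total}} \;=\; A e^{i\phi} \;+\; A e^{i(\phi + \pi)} \;=\; A e^{i\phi}\,(1 - 1) \;=\; 0
```

so electron transmission, and with it the leakage current, is suppressed.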

“We’ve reached the point where it’s critical for researchers to develop creative solutions for redesigning insulators. Our molecular strategy represents a new design principle for classic devices, with the potential to support continued miniaturization in the near term,” said Columbia Engineering physicist Latha Venkataraman, Ph.D.

The research bucks the trend of most research in transistor miniaturization, which aims to create highly conducting contact electrodes, typically using carbon nanotubes (see “Method to replace silicon with carbon nanotubes developed by IBM Research”).

* Other researchers on the team were from Columbia University Department of Chemistry, Shanghai Normal University, and the University of Copenhagen.

** The molecule is bicyclo[2.2.2]octasilane.

Ultrasound-powered nanorobots clear bacteria and toxins from blood

MRSA bacterium captured by a hybrid cell membrane-coated nanorobot (colored scanning electron microscope image and black and white image below) (credit: Esteban-Fernández de Ávila/Science Robotics)

Engineers at the University of California San Diego have developed tiny ultrasound-powered nanorobots that can swim through blood, removing harmful bacteria and the toxins they produce.

These proof-of-concept nanorobots could one day offer a safe and efficient way to detoxify and decontaminate biological threat agents — providing a fast alternative to the multiple, broad-spectrum antibiotics currently used to treat life-threatening pathogens like MRSA bacteria (an antibiotic-resistant staph strain). MRSA is considered a serious worldwide threat to public health.

The MRSA superbug (in yellow) is resistant to antibiotics and can lead to death (credit: National Institute of Allergy and Infectious Diseases)

Antimicrobial resistance (AMR) threatens the effective prevention and treatment of an ever-increasing range of infections caused by bacteria, parasites, viruses and fungi, according to the World Health Organization — an increasingly serious threat to global public health.

Trapping pathogens

The researchers coated gold nanowires with a hybrid of red blood cell membranes and platelets (tiny blood cells that help your body form clots to stop bleeding).*

  • The platelets cloak the nanowires and attract bacterial pathogens, which become bound to the nanorobots.
  • The red blood cells then absorb and neutralize the toxins produced by these bacteria.

Gold nanorobots coated in hybrid platelet/red blood cell membranes (colored scanning electron microscope image). (credit: Esteban-Fernández de Ávila/Science Robotics)

The interior gold nanowire body of the nanorobots responds to ultrasound, causing the nanorobots to swim around rapidly (no chemical fuel required) — mimicking the movement of natural motile cells (such as red blood cells). This mobility helps the nanorobots efficiently mix with their targets (bacteria and toxins) in blood and speed up detoxification.

The coating also protects the nanorobots from a process known as biofouling — when proteins collect onto the surface of foreign objects and prevent them from operating normally.

The nanorobots are just over one micrometer** (1,000 nanometers) long (for comparison, red blood cells have a diameter of 6 to 8 micrometers). The nanorobots can travel up to 35 micrometers per second in blood when powered by ultrasound.
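To put that speed in perspective (simple arithmetic on the figures above, taking the length as roughly 1 micrometer):

```latex
\frac{35~\mu\text{m/s}}{\sim 1~\mu\text{m (body length)}} \;\approx\; 35~\text{body lengths per second}
```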

In tests, the researchers used the nanorobots to treat blood samples contaminated with MRSA and their toxins. After five minutes, these blood samples had three times fewer bacteria and toxins than untreated samples.

Broad-spectrum detoxification

Future work includes tests in mice, making the nanorobots out of biodegradable materials instead of gold, and testing the nanorobots for drug delivery as well.

The ultimate research goal is not to use the nanorobots specifically for treating MRSA infections, but more generally for detoxifying biological fluids — “an important step toward the creation of a broad-spectrum detoxification robotic platform,” as the researchers note in a paper.

* The researchers created the nanorobots in three steps:

1. They created the hybrid coating by first separating entire membranes from platelets and red blood cells.

2. They applied ultrasound (high-frequency sound waves) to fuse the membranes together. (Since the membranes were taken from actual cells, they contain all their original-cell surface protein functions, including the ability of platelets to attract bacteria.)

3. They coated these hybrid membranes onto gold nanowires.

** A micrometer is one millionth of a meter, or one thousandth of a millimeter.

This work was supported by the Defense Threat Reduction Agency Joint Science and Technology Office for Chemical and Biological Defense.

Reference: Science Robotics. Source: UC San Diego.

Artificial sensory neurons may give future prosthetic devices and robots a subtle sense of touch

American and Korean researchers are creating an artificial nerve system for robots and humans. (credit: Kevin Craft)

Researchers at Stanford University and Seoul National University have developed an artificial sensory nerve system that’s a step toward artificial skin for prosthetic limbs, restoring sensation to amputees, and giving robots human-like reflexes.*

Their rudimentary artificial nerve circuit integrates three previously developed components: a touch-pressure sensor, a flexible electronic neuron, and an artificial synaptic transistor modeled on human synapses.

Here’s how the artificial nerve circuit works:

(Biological model) Pressures applied to afferent (sensory) mechanoreceptors (pressure sensors, in this case) in the finger change the receptor potential (voltage) of each mechanoreceptor. The receptor potential changes combine and initiate action potentials in the nerve fiber, connected to a heminode in the chest. The nerve fiber forms synapses with interneurons in the spinal cord. Action potentials from multiple nerve fibers combine through the synapses and contribute to information processing (via postsynaptic potentials). (credit: Yeongin Kim (Stanford University), Alex Chortos (Stanford University), Wentao Xu (Seoul National University), Zhenan Bao (Stanford University), Tae-Woo Lee (Seoul National University))

(Artificial model) Illustration of a corresponding artificial afferent nerve system made of pressure sensors, an organic ring oscillator (simulates a neuron), and a transistor that simulates a synapse. (Only one ring oscillator connected to a synaptic transistor is shown here for simplicity.) Colors of parts match corresponding colors in the biological version. (credit: Yeongin Kim (Stanford University), Alex Chortos (Stanford University), Wentao Xu (Seoul National University), Zhenan Bao (Stanford University), Tae-Woo Lee (Seoul National University))

(Photo) Artificial sensor, artificial neuron, and artificial synapse. (credit: Yeongin Kim (Stanford University), Alex Chortos (Stanford University), Wentao Xu (Seoul National University), Zhenan Bao (Stanford University), Tae-Woo Lee (Seoul National University))
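A very rough software analogue of the chain described above (pressure sensor → ring-oscillator “neuron” → synaptic element) is sketched below. All numbers, and the simple mappings from pressure to voltage and from voltage to firing rate, are invented for illustration; the real device is an analog organic circuit, not code.

```python
import numpy as np

def receptor_potential(pressure_kpa):
    """Toy pressure sensor: output voltage rises and saturates with pressure."""
    return 1.0 - np.exp(-pressure_kpa / 20.0)        # volts, made-up scale

def oscillator_rate(voltage):
    """Toy ring-oscillator 'neuron': firing rate grows with input voltage."""
    return 200.0 * voltage                            # spikes per second

def synapse_output(rates, weights):
    """Toy synaptic transistor: weighted sum of incoming firing rates,
    standing in for the combined postsynaptic potential."""
    return float(np.dot(rates, weights))

# Two pressure sensors converging on one synaptic element, mimicking the
# multi-fiber summation described in the biological model above.
pressures = np.array([5.0, 40.0])                     # kPa applied to each sensor
weights = np.array([0.6, 0.4])                        # relative synaptic strengths

rates = oscillator_rate(receptor_potential(pressures))
print("firing rates (Hz):", np.round(rates, 1))
print("combined synaptic output:", round(synapse_output(rates, weights), 1))
```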

Experiments with the artificial nerve circuit

In a demonstration experiment, the researchers used the artificial nerve circuit to activate the twitch reflex in the knee of a cockroach.

A cockroach (A) with an attached artificial mechanosensory nerve was used in this experiment. The artificial afferent nerve (B) was connected to the biological motor (movement) nerves of a detached insect leg (B, lower right) to demonstrate a hybrid reflex arc (such as a knee reflex). Applied pressure caused a reflex movement of the leg. A force gauge (C) was used to measure the force of the reflex movements of the detached insect leg. (credit: Yeongin Kim (Stanford University), Alex Chortos (Stanford University), Wentao Xu (Seoul National University), Zhenan Bao (Stanford University), Tae-Woo Lee (Seoul National University))

The researchers did another experiment that showed how the artificial nerve system could be used to identify letters in the Braille alphabet.

Improving robot and human sensory abilities

iCub robot (credit: D. Farina/Istituto Italiano Di Tecnologia)

The researchers “used a knee reflex as an example of how more-advanced artificial nerve circuits might one day be part of an artificial skin that would give prosthetic devices or robots both senses and reflexes,” noted Chiara Bartolozzi, Ph.D., of Istituto Italiano Di Tecnologia, writing in a Science commentary on the research.

Tactile information from artificial tactile systems “can improve the interaction of a robot with objects,” says Bartolozzi, who is involved in research with the iCub robot.

“In this scenario, objects can be better recognized because touch complements the information gathered from vision about the shape of occluded or badly illuminated regions of the object, such as its texture or hardness. Tactile information also allows objects to be better manipulated — for example, by exploiting contact and slip detection to maintain a stable but gentle grasp of fragile or soft objects (see the photo). …

“Information about shape, softness, slip, and contact forces also greatly improves the usability of upper-limb prosthetics in fine manipulation. … The advantage of the technology devised by Kim et al. is the possibility of covering at a reasonable cost larger surfaces, such as fingers, palms, and the rest of the prosthetic device.

“Safety is enhanced when sensing contacts inform the wearer that the limb is encountering obstacles. The acceptability of the artificial hand by the wearer is also improved because the limb is perceived as part of the body, rather than as an external device. Lower-limb prostheses can take advantage of the same technology, which can also provide feedback about the distribution of the forces at the foot while walking.”

Next research steps

The researchers plan next to create artificial skin coverings for prosthetic devices, which will require new devices to detect heat and other sensations, the ability to embed them into flexible circuits, and then a way to interface all of this to the brain. They also hope to create low-power, artificial sensor nets to cover robots. The idea is to make them more agile by providing some of the same feedback that humans derive from their skin.

“We take skin for granted but it’s a complex sensing, signaling and decision-making system,” said Zhenan Bao, Ph.D., a Stanford professor of chemical engineering and one of the senior authors. “This artificial sensory nerve system is a step toward making skin-like sensory neural networks for all sorts of applications.”

This milestone is part of Bao’s quest to mimic how skin can stretch, repair itself, and, most remarkably, act like a smart sensory network that knows not only how to transmit pleasant sensations to the brain, but also when to order the muscles to react reflexively to make prompt decisions.

The synaptic transistor is the brainchild of Tae-Woo Lee of Seoul National University, who spent his sabbatical year in Bao’s Stanford lab to initiate the collaborative work.

Reference: Science May 31. Source: Stanford University and Seoul National University.

* This work was funded by the Ministry of Science and ICT, Korea; by Seoul National University (SNU); by Samsung Electronics; by the National Nanotechnology Coordinated Infrastructure; and by the Stanford Nano Shared Facilities (SNSF). Patents related to this work are planned.

New noninvasive technique could be alternative to laser eye surgery

(Left) Corneal shape before (top) and after (bottom) the treatment. (Right) Simulated effects on vision. (credit: Sinisa Vukelic/Columbia Engineering)

Columbia Engineering researcher Sinisa Vukelic, Ph.D., has developed a new non-invasive approach for permanently correcting myopia (nearsightedness), replacing glasses and invasive corneal refractive surgery.* The non-surgical method uses a “femtosecond oscillator” — an ultrafast laser that delivers pulses of very low energy at high repetition rate to modify the tissue’s shape.

The method has fewer side effects and limitations than those seen in refractive surgeries, according to Vukelic. For instance, patients with thin corneas, dry eyes, and other abnormalities cannot undergo refractive surgery.** The study could lead to treatment for myopia, hyperopia, astigmatism, and irregular astigmatism. So far, it’s shown promise in preclinical models.

“If we carefully tailor these changes, we can adjust the corneal curvature and thus change the refractive power of the eye,” says Vukelic. “This is a fundamental departure from the mainstream ultrafast laser treatment [such as LASIK], which … relies on the optical breakdown of the target materials and subsequent cavitation bubble formation.”

Personalized treatments and use on other collagen-rich tissues

Vukelic’s group plans to start clinical trials by the end of the year. They hope to predict corneal effects, for example, how the cornea might deform if a small circle or an ellipse of tissue is treated. That would make it possible to personalize the treatment.

“What’s especially exciting is that our technique is not limited to ocular media — it can be used on other collagen-rich tissues,” Vukelic adds. “We’ve also been working with Professor Gerard Ateshian’s lab to treat early osteoarthritis, and the preliminary results are very, very encouraging. We think our non-invasive approach has the potential to open avenues to treat or repair collagenous tissue without causing tissue damage.”

* Nearsightedness, or myopia, is an increasing problem around the world. There are now twice as many people in the U.S. and Europe with this condition as there were 50 years ago, the researchers note. In East Asia, 70 to 90 percent of teenagers and young adults are nearsighted. By some estimates, about 2.5 billion people across the globe may be affected by myopia by 2020. Eye glasses and contact lenses are simple solutions; a more permanent one is corneal refractive surgery. But, while vision correction surgery has a relatively high success rate, it is an invasive procedure, subject to post-surgical complications and, in rare cases, permanent vision loss. In addition, laser-assisted vision correction surgeries such as laser in situ keratomileusis (LASIK) and photorefractive keratectomy (PRK) still use ablative technology, which can thin and in some cases weaken the cornea.

** Vukelic’s approach uses low-density plasma, which causes ionization of water molecules within the cornea. This ionization creates a reactive oxygen species (a type of unstable molecule that contains oxygen and that easily reacts with other molecules in a cell), which in turn interacts with the collagen fibrils to form chemical bonds, or crosslinks. This selective introduction of crosslinks induces changes in the mechanical properties of the treated corneal tissue. This ultimately results in changes in the overall macrostructure of the cornea, but avoids optical breakdown of the corneal tissue. Because the process is photochemical, it does not disrupt tissue and the induced changes remain stable.

Reference: Nature Photonics. Source: Columbia Engineering.

First 3D-printed human corneas

3D-printing a human cornea (credit: Newcastle University)

Scientists at Newcastle University have created a proof-of-concept process to achieve the first 3D-printed human corneas (the cornea, the outermost layer of the human eye, has an important role in focusing vision).*

Stem cells (human corneal stromal cells) from a healthy donor’s cornea were mixed together with alginate and collagen** to create a “bio-ink” solution. Using a simple low-cost 3D bio-printer, the bio-ink was successfully extruded in concentric circles to form the shape of a human cornea in less than 10 minutes.

They also demonstrated that they could build a cornea to match a patient’s unique specifications, based on a scan of the patient’s eye.
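To make the “concentric circles” step concrete, here is a minimal geometric sketch of how a bio-printer toolpath for a dome-shaped cornea could be generated. The radius, height, and ring counts are placeholders, and this is an illustration of the geometry only, not the Newcastle group’s printer software; a patient-specific version would scale these parameters from the eye scan.

```python
import numpy as np

def cornea_toolpath(max_radius_mm=5.5, dome_height_mm=2.5,
                    n_rings=20, points_per_ring=90):
    """Generate (x, y, z) nozzle positions along concentric circles that
    step inward and upward to approximate a dome-shaped cornea."""
    points = []
    for i in range(n_rings):
        r = max_radius_mm * (1.0 - i / n_rings)            # shrink each ring
        # Paraboloid-style height profile: the nozzle rises toward the center.
        z = dome_height_mm * (1.0 - (r / max_radius_mm) ** 2)
        theta = np.linspace(0.0, 2.0 * np.pi, points_per_ring, endpoint=False)
        for t in theta:
            points.append((r * np.cos(t), r * np.sin(t), z))
    return np.array(points)

path = cornea_toolpath()
print(path.shape)          # (n_rings * points_per_ring, 3)
print(path[:3])            # first few nozzle positions, in mm
```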

The technique could be used in the future to ensure an unlimited supply of corneas, but several years of testing will be needed before they could be used in transplants, according to the scientists.

* There is a significant shortage of corneas available to transplant, with 10 million people worldwide requiring surgery to prevent corneal blindness as a result of diseases such as trachoma, an infectious eye disorder. In addition, almost 5 million people suffer total blindness due to corneal scarring caused by burns, lacerations, abrasion or disease.

** This mixture keeps the stem cells alive, and it’s stiff enough to hold its shape but soft enough to be squeezed out of the nozzle of a 3D printer.

Reference: Experimental Eye Research. Source: Newcastle University

Teaching robots to do household chores

MIT’s Sims-inspired “VirtualHome” system aims to teach artificial agents a range of chores, such as setting the table and making coffee. (credit: MIT CSAIL)

Computer scientists at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the University of Toronto* have created a Sims-inspired “VirtualHome” system that can simulate detailed household tasks.

The idea is to allow “artificial agents” to execute tasks — opening up the possibility of one day teaching robots to do such tasks.

Using crowdsourcing, the researchers created videos that simulate detailed household activities and sub-tasks in eight different scenes, including a living room, kitchen, dining room, bedroom, and home office.  A simple model can generate a program from either a video or a textual description, allowing robots to be programmed by naive users, either via natural language or video demonstration.

“Hey, Jeeves — get me a glass of milk” would require several subtasks — the first five are shown here. (credit: MIT CSAIL)

The researchers have trained the system using nearly 3,000 programs for various activities, which are further broken down into subtasks for the computer to understand. A simple task like “making coffee,” for example, would also include the necessary step, “grabbing a cup.”
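As an illustration of how such task programs can be represented (a simplified, hypothetical format, not the actual VirtualHome program syntax), each activity can be stored as an ordered list of atomic (action, object) steps, with implicit steps like “grab cup” made explicit:

```python
from typing import Dict, List, Tuple

# Hypothetical simplified representation of a household-task "program":
# an ordered list of (action, object) steps that an agent executes in turn.
Program = List[Tuple[str, str]]

PROGRAMS: Dict[str, Program] = {
    "make coffee": [
        ("walk_to", "kitchen"),
        ("grab", "cup"),                  # the implicit step made explicit
        ("put", "cup_under_machine"),
        ("switch_on", "coffee_machine"),
        ("wait_for", "coffee_machine"),
        ("grab", "cup"),
    ],
    "set the table": [
        ("walk_to", "kitchen"),
        ("grab", "plates"),
        ("walk_to", "dining_room"),
        ("put", "plates_on_table"),
    ],
}

def execute(task: str) -> None:
    """Print each subtask in order, standing in for an agent acting them out."""
    for action, obj in PROGRAMS[task]:
        print(f"{action}({obj})")

execute("make coffee")
```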

Next: Anticipating personalized wants and needs

The end result is a large database of household tasks described using natural language. Companies like Amazon that are working to develop Alexa-like robotic systems at home could in the future use such data to train their models to do more complex tasks.

Robots could eventually be trained to anticipate personalized wants and needs, which could be especially helpful as assistive technology for the elderly, or for those with limited mobility.

The team hopes to train the robots using actual videos instead of Sims-style simulation videos. That would enable a robot to learn directly by simply watching a YouTube video. The team is also working on a reward-learning system in which the robot gets positive feedback when it does tasks correctly.

The project** will be presented at the Computer Vision and Pattern Recognition (CVPR) conference in Salt Lake City, June 18–22, 2018.

Reference: CVPR paper (open-access). Source: MIT CSAIL.

* Researchers from McGill University and the University of Ljubljana were also involved.

** This project was partially supported by a “La Caixa” fellowship, Canada’s National Sciences and Engineering Research Council Strategic Partnership Network on Machine Learning Hardware Acceleration (NSERC COHESA), Samsung, the Defense Advanced Research Projects Agency (DARPA) and the Intelligence Advanced Research Projects Activity (IARPA).

Advanced brain organoid could model strokes, screen drugs

These four marker proteins (top row) are involved in controlling entry of molecules into the brain via the blood brain barrier. Here, the scientists illustrate one form of damage to the blood brain barrier in ischemic stroke conditions, as revealed by changes (bottom row) in these markers. (credit: WFIRM)

Wake Forest Institute for Regenerative Medicine (WFIRM) scientists have developed a 3-D brain organoid (a tiny artificial organ) that could have potential applications in drug discovery and disease modeling.

The scientists say this is the first engineered tissue-equivalent to closely resemble normal human brain anatomy, containing all six major cell types found in normal brain tissue, including neurons and immune cells.

The advanced 3-D organoids promote the formation of a fully cell-based, natural, and functional version of the blood brain barrier (a semipermeable membrane that separates the circulating blood from the brain, protecting it from foreign substances that could cause injury).

The new artificial organ model can help improve understanding of disease mechanisms at the blood brain barrier (BBB), the passage of drugs through the barrier, and the effects of drugs once they cross the barrier.

Faster drug discovery and screening

The shortage of effective therapies and the low success rate of investigational drugs are (in part) due to the fact that we do not have human-like tissue models for testing, according to senior author Anthony Atala, M.D., director of WFIRM. “The development of tissue-engineered 3D brain tissue equivalents such as these can help advance the science toward better treatments and improve patients’ lives,” he said.

The development of the model opens the door to speedier drug discovery and screening. This applies both to diseases like HIV, in which pathogens hide in the brain, and to disease modeling of neurological conditions such as Alzheimer’s disease, multiple sclerosis, and Parkinson’s disease. The goal is to better understand their pathways and progression.

“To date, most in vitro [lab] BBB models [only] utilize endothelial cells, pericytes and astrocytes,” the researchers note in a paper. “We report a 3D spheroid model of the BBB comprising all major cell types, including neurons, microglia, and oligodendrocytes, to recapitulate more closely normal human brain tissue.”

So far, the researchers have used the brain organoids to measure the effects of (mimicked) strokes on impairment of the blood brain barrier, and have successfully tested permeability (ability of molecules to pass through the BBB) of large and small molecules.

Reference: Nature Scientific Reports (open access). Source: Wake Forest Institute for Regenerative Medicine.