There’s no known upper limit to human longevity, study suggests

Chiyo Miyako of Japan is the world’s oldest verified living person at 117 years, as of June 29, 2018, according to the Gerontology Research Group. She credits eating eel, drinking red wine, and never smoking for her longevity, and enjoys calligraphy. (credit: Medical Review Co., Ltd.)

Human death risk increases exponentially from age 65 up to about age 80. At that point, the spread of individual death risks starts to widen. But by age 105, the death risk levels off, suggesting there's no known upper limit for human lifespan.*

That’s the conclusion of a controversial study by an international team of scientists, published Thursday, June 28 in the journal Science.

“The increasing number of exceptionally long-lived people and the fact that their mortality beyond 105 is seen to be declining across cohorts — lowering the mortality plateau or postponing the age when it appears — strongly suggest that longevity is continuing to increase over time and that a limit, if any, has not been reached,” the researchers wrote.

High-quality data

Logarithmic plot of the exponential risk of death ("hazard") from ages 65 to 115 (on a logarithmic plot, an exponential appears as a straight diagonal line). For ages up to 105, the data come from the Human Mortality Database (HMD). Note that starting at age 80, the range of death risks (blue bars) begins to widen (people live to different ages; some live longer); the risk no longer follows a single fixed curve, as in the traditional "Gompertz" model (black line). By age 105, however, based on data from the new Italian ISTAT model, the risk of death hits a plateau (stops increasing, shown as the dashed black line on an orange background), and the odds of someone dying between one birthday and the next are roughly 50:50.** (credit: E. Barbi et al., Science)

The new study was based on “high-quality data from Italians aged 105 and older, collected by the Italian National Institute of Statistics (ISTAT).” That data provided “accuracy and precision that were not possible before,” the researchers say.

Previous data for supercentenarians (age 110 or older) in the Max Planck Institute for Demographic Research International Database on Longevity (IDL) were “problematic in terms of age reporting [due to] sparse data pooled from 11 countries,” according to the authors of the Science paper.

Instead, ISTAT “collected and validated the individual survival trajectory” of all inhabitants of Italy aged 105 and older in the period from 1 January 2009 to 31 December 2015, including birth certificates, suggesting that “misreporting is believed to be minimal in these data.”

Ref.: Science.

* The current record for the longest human life span was set in 1997 when French woman Jeanne Calment died at the age of 122.

** This chart shows yearly hazards (probability of death) on a logarithmic scale for the cohort (group of subjects with a matching characteristic) of Italian women born in 1904. The straight-line prediction (black) is based on fitting a Gompertz model to ages 65 to 80. Confidence intervals (blue), the range of death probabilities, were derived from Human Mortality Database (HMD) data for ages up to 105 and from ISTAT data beyond age 105. Note the wider intervals and increased divergence from the straight-line prediction (black) after age 80, and the estimated plateau in the probability of death beyond age 105 (black dashed line on an orange background), based on model parameters that were in turn fitted to the full ISTAT database.
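To make the two models in the chart concrete, here is a minimal numerical sketch comparing a standard Gompertz hazard (with made-up, illustrative parameters rather than the study's fitted values) to a hazard that plateaus near 50 percent, as the footnote above describes for ages beyond 105.

```python
import numpy as np

# Illustrative sketch of the two mortality models described above. The
# parameters are invented for demonstration; they are not the study's values.

def gompertz_hazard(age, a=0.01, b=0.10):
    """Gompertz model: yearly death risk grows exponentially with age."""
    return a * np.exp(b * (age - 65))

def plateau_hazard(age, plateau=0.5):
    """Plateau model: beyond roughly age 105, the yearly risk stops rising."""
    return np.minimum(gompertz_hazard(age), plateau)

for age in (70, 90, 105, 115):
    print(f"age {age}: Gompertz {gompertz_hazard(age):.2f}, "
          f"with plateau {plateau_hazard(age):.2f}")
```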

roundup | AI powers cars, photos, phones, and people

(credit: BDD Industry Consortium)

Huge self-driving-car video dataset may help reduce accidents

Berkeley Deep Drive, the largest-ever self-driving-car dataset, has been released by the BDD Industry Consortium for free public download. It features 100,000 HD driving videos with labeled objects, plus GPS and other sensor data, and is 800 times larger than Baidu's Apollo dataset. The goal: apply computer-vision research, including deep reinforcement learning for object tracking, to the automotive field.
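As a trivial illustration of working with such labels, the sketch below counts object categories in a label file. The file name and JSON schema are hypothetical simplifications for illustration, not the dataset's actual format.

```python
import json
from collections import Counter

# Minimal sketch of scanning per-frame object labels from a driving dataset.
# The schema assumed here (a list of frames, each with an "objects" list) is
# a made-up simplification, not the dataset's real layout.

def count_object_categories(label_path):
    with open(label_path) as f:
        frames = json.load(f)                       # assumed: list of frames
    counts = Counter()
    for frame in frames:
        for obj in frame.get("objects", []):        # assumed: per-frame objects
            counts[obj.get("category", "unknown")] += 1
    return counts

# Hypothetical usage:
# print(count_object_categories("bdd_labels.json").most_common(5))
```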

Berkeley researchers plan to add to the dataset, including panorama and stereo videos, LiDAR, and radar. Ref.: arXiv. Source: BDD Industry Consortium.


A “privacy filter” that disrupts facial-recognition algorithms. A “difference” filter alters very specific pixels in the image, making subtle changes (such as at the corners of the eyes). (credit: Avishek Bose)

A “privacy filter” for photos

University of Toronto engineering researchers have created an artificial intelligence (AI) algorithm (computer program) to disrupt facial recognition systems and protect privacy. It uses a deep-learning technique called “adversarial training,” which pits two algorithms against each other — one to identify faces, and the second to disrupt the facial recognition task of the first.
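For intuition, the sketch below shows a much simpler, single-step version of the idea: nudge each pixel along the gradient that lowers a face detector's score. It is not the Toronto team's code; the published system trains a second "disruptor" network adversarially, and the toy detector here is only a stand-in so the example runs.

```python
import torch
import torch.nn as nn

# Hedged sketch (not the Toronto system): a single gradient-sign step that
# perturbs an image to lower a face detector's confidence, illustrating the
# underlying idea of optimizing pixels against a recognition network.

def disrupt(image, face_detector, eps=2.0 / 255):
    """image: (1, 3, H, W) tensor in [0, 1]; face_detector: outputs a face score."""
    image = image.clone().requires_grad_(True)
    score = face_detector(image).sum()        # confidence that a face is present
    score.backward()
    # Move each pixel slightly in the direction that *decreases* the score.
    perturbed = image - eps * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Toy stand-in detector, just to make the sketch self-contained and runnable:
toy_detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))
protected = disrupt(torch.rand(1, 3, 64, 64), toy_detector)
```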

The algorithm also disrupts image-based search, feature identification, and estimation of emotion, ethnicity, and other face-based attributes that can be extracted automatically. It will be available as an app or website. Ref.: Github. Source: University of Toronto.


Developers of the more than 2 million iOS apps will be able to hook into Siri’s new Suggestions feature, with help from a new “Create ML” tool. (credit: TechCrunch)

A smarter Siri

“Apple is turning its iPhone into a highly personalized device, powered by its [improved] Siri AI,” says TechCrunch, reporting on the just-concluded Apple Worldwide Developers Conference. With the new “Suggestions” feature — to be available with Apple’s iOS 12 mobile operating system (in autumn 2018) — Siri will offer suggestions to users, such as texting someone that you’re running late to a meeting.

The Photos app will also get smarter, with a new tab that will “prompt users to share photos taken with other people, thanks to facial recognition and machine learning,” for example, says TechCrunch. Along with Core ML (announced last year), a new tool called “Create ML” should help Apple developers build machine learning models, reports Wired.


(credit: Loughborough University)

AI detects illnesses in human breath

Researchers at Loughborough University in the U.K. have developed deep-learning networks that can detect illness-revealing chemical compounds in breath samples, with potentially wide applications in medicine, forensics, environmental analysis, and other fields.

The new process is cheaper and more reliable, taking only minutes to autonomously analyze a breath sample that previously took a human expert hours to analyze with gas chromatography-mass spectrometry (GC-MS). The initial study focused on recognizing a group of chemicals called aldehydes, which are often associated with fragrances but also with human stress conditions and illnesses. Source: The Conversation.
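For a sense of what such a classifier can look like, here is a minimal 1-D convolutional network that labels a GC-MS intensity trace as containing or not containing a target aldehyde. The architecture, input length, and class labels are assumptions for illustration, not the Loughborough model.

```python
import torch
import torch.nn as nn

# Minimal sketch (not the published model): a small 1-D CNN that takes a
# GC-MS intensity trace and predicts whether a target compound is present.

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=9, padding=4),  # slide filters over the trace
    nn.ReLU(),
    nn.MaxPool1d(4),
    nn.Conv1d(16, 32, kernel_size=9, padding=4),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),                     # summarize the whole trace
    nn.Flatten(),
    nn.Linear(32, 2),                            # present / absent
)

trace = torch.randn(8, 1, 2048)   # batch of 8 dummy breath-sample traces
logits = model(trace)             # shape: (8, 2)
```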

New noninvasive technique could be alternative to laser eye surgery

(Left) Corneal shape before (top) and after (bottom) the treatment. (Right) Simulated effects on vision. (credit: Sinisa Vukelic/Columbia Engineering)

Columbia Engineering researcher Sinisa Vukelic, Ph.D., has developed a new non-invasive approach for permanently correcting myopia (nearsightedness), potentially replacing glasses and invasive corneal refractive surgery.* The non-surgical method uses a “femtosecond oscillator,” an ultrafast laser that delivers very-low-energy pulses at a high repetition rate, to modify the tissue’s shape.

The method has fewer side effects and limitations than those seen in refractive surgeries, according to Vukelic. For instance, patients with thin corneas, dry eyes, and other abnormalities cannot undergo refractive surgery.** The study could lead to treatment for myopia, hyperopia, astigmatism, and irregular astigmatism. So far, it’s shown promise in preclinical models.

“If we carefully tailor these changes, we can adjust the corneal curvature and thus change the refractive power of the eye,” says Vukelic. “This is a fundamental departure from the mainstream ultrafast laser treatment [such as LASIK], [which] relies on the optical breakdown of the target materials and subsequent cavitation bubble formation.”

Personalized treatments and use on other collagen-rich tissues

Vukelic’s group plans to start clinical trials by the end of the year. They hope to predict corneal effects, for example, how the cornea might deform if a small circle or an ellipse were treated. That would make it possible to personalize the treatment.

“What’s especially exciting is that our technique is not limited to ocular media — it can be used on other collagen-rich tissues,” Vukelic adds. “We’ve also been working with Professor Gerard Ateshian’s lab to treat early osteoarthritis, and the preliminary results are very, very encouraging. We think our non-invasive approach has the potential to open avenues to treat or repair collagenous tissue without causing tissue damage.”

* Nearsightedness, or myopia, is an increasing problem around the world. There are now twice as many people in the U.S. and Europe with the condition as there were 50 years ago, the researchers note. In East Asia, 70 to 90 percent of teenagers and young adults are nearsighted. By some estimates, about 2.5 billion people across the globe may be affected by myopia by 2020. Eyeglasses and contact lenses are simple solutions; a more permanent one is corneal refractive surgery. But, while vision-correction surgery has a relatively high success rate, it is an invasive procedure, subject to post-surgical complications and, in rare cases, permanent vision loss. In addition, laser-assisted vision-correction surgeries such as laser in situ keratomileusis (LASIK) and photorefractive keratectomy (PRK) still use ablative technology, which can thin and in some cases weaken the cornea.

** Vukelic’s approach uses low-density plasma, which causes ionization of water molecules within the cornea. This ionization creates a reactive oxygen species (a type of unstable molecule that contains oxygen and that easily reacts with other molecules in a cell), which in turn interacts with the collagen fibrils to form chemical bonds, or crosslinks. This selective introduction of crosslinks induces changes in the mechanical properties of the treated corneal tissue. This ultimately results in changes in the overall macrostructure of the cornea, but avoids optical breakdown of the corneal tissue. Because the process is photochemical, it does not disrupt tissue and the induced changes remain stable.

Reference: Nature Photonics. Source: Columbia Engineering.


First 3D-printed human corneas

3D-printing a human cornea (credit: Newcastle University)

Scientists at Newcastle University have created a proof-of-concept process to achieve the first 3D-printed human corneas (the cornea, the outermost layer of the human eye, has an important role in focusing vision).*

Stem cells (human corneal stromal cells) from a healthy donor’s cornea were mixed with alginate and collagen** to create a “bio-ink” solution. Using a simple, low-cost 3D bio-printer, the researchers successfully extruded the bio-ink in concentric circles to form the shape of a human cornea in less than 10 minutes.
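As a rough illustration of a concentric-circle extrusion path like the one described, the sketch below generates ring coordinates for a cornea-sized disc. The radii, spacing, and point counts are arbitrary placeholders, not the Newcastle group's actual print parameters.

```python
import numpy as np

# Toy sketch of a concentric-circle extrusion path. All dimensions are
# invented for illustration, not the published printing settings.

def concentric_circle_path(outer_radius_mm=6.0, ring_spacing_mm=0.4, points_per_ring=200):
    rings = []
    for radius in np.arange(outer_radius_mm, 0.0, -ring_spacing_mm):
        theta = np.linspace(0.0, 2.0 * np.pi, points_per_ring, endpoint=False)
        ring = np.column_stack((radius * np.cos(theta), radius * np.sin(theta)))
        rings.append(ring)            # one full circle of (x, y) extrusion points
    return rings                      # outermost ring first

path = concentric_circle_path()
print(f"{len(path)} rings, {sum(len(r) for r in path)} extrusion points")
```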

They also demonstrated that they could build a cornea to match a patient’s unique specifications, based on a scan of the patient’s eye.

The technique could be used in the future to ensure an unlimited supply of corneas, but several years of testing will be needed before they could be used in transplants, according to the scientists.

* There is a significant shortage of corneas available to transplant, with 10 million people worldwide requiring surgery to prevent corneal blindness as a result of diseases such as trachoma, an infectious eye disorder. In addition, almost 5 million people suffer total blindness due to corneal scarring caused by burns, lacerations, abrasion or disease.

** This mixture keeps the stem cells alive, and it’s stiff enough to hold its shape but soft enough to be squeezed out of the nozzle of a 3D printer.

Reference: Experimental Eye Research. Source: Newcastle University.

 

Advanced brain organoid could model strokes, screen drugs

These four marker proteins (top row) are involved in controlling entry of molecules into the brain via the blood brain barrier. Here, the scientists illustrate one form of damage to the blood brain barrier in ischemic stroke conditions, as revealed by changes (bottom row) in these markers. (credit: WFIRM)

Wake Forest Institute for Regenerative Medicine (WFIRM) scientists have developed a 3-D brain organoid (tiny artificial organ) that could have potential applications in drug discovery and disease modeling.

The scientists say this is the first engineered tissue equivalent to closely resemble normal human brain anatomy, containing all six major cell types found in normal brain tissue, including neurons and immune cells.

The advanced 3-D organoids promote the formation of a fully cell-based, natural, and functional version of the blood brain barrier (a semipermeable membrane that separates the circulating blood from the brain, protecting it from foreign substances that could cause injury).

The new artificial organ model can help improve understanding of disease mechanisms at the blood brain barrier (BBB), the passage of drugs through the barrier, and the effects of drugs once they cross the barrier.

Faster drug discovery and screening

The shortage of effective therapies and the low success rate of investigational drugs are (in part) due to the fact that we do not have human-like tissue models for testing, according to senior author Anthony Atala, M.D., director of WFIRM. “The development of tissue-engineered 3D brain tissue equivalents such as these can help advance the science toward better treatments and improve patients’ lives,” he said.

The development of the model opens the door to speedier drug discovery and screening, both for diseases like HIV, where pathogens hide in the brain, and for modeling neurological conditions such as Alzheimer’s disease, multiple sclerosis, and Parkinson’s disease. The goal is to better understand their pathways and progression.

“To date, most in vitro [lab] BBB models [only] utilize endothelial cells, pericytes and astrocytes,” the researchers note in a paper. “We report a 3D spheroid model of the BBB comprising all major cell types, including neurons, microglia, and oligodendrocytes, to recapitulate more closely normal human brain tissue.”

So far, the researchers have used the brain organoids to measure the effects of (mimicked) strokes on impairment of the blood brain barrier, and have successfully tested permeability (ability of molecules to pass through the BBB) of large and small molecules.
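For context, barrier permeability is commonly summarized as an apparent permeability coefficient, P_app = (dQ/dt) / (A × C0). The sketch below shows that standard calculation; whether WFIRM used exactly this formula is an assumption, and the numbers are placeholders.

```python
# Standard apparent-permeability calculation often used for barrier models:
# P_app = (dQ/dt) / (A * C0). Example values below are invented placeholders.

def apparent_permeability(flux_ug_per_s, area_cm2, donor_conc_ug_per_ml):
    """Return P_app in cm/s.

    flux_ug_per_s: rate of appearance of the molecule past the barrier (dQ/dt)
    area_cm2: barrier surface area (A)
    donor_conc_ug_per_ml: starting concentration on the donor side (C0)
    """
    donor_conc_ug_per_cm3 = donor_conc_ug_per_ml   # 1 mL equals 1 cm^3
    return flux_ug_per_s / (area_cm2 * donor_conc_ug_per_cm3)

# Placeholder example: a small molecule crossing faster than a large one.
print(apparent_permeability(2e-4, 0.33, 100.0))    # ~6.1e-6 cm/s
print(apparent_permeability(5e-6, 0.33, 100.0))    # ~1.5e-7 cm/s
```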

Reference: Nature Scientific Reports (open access). Source: Wake Forest Institute for Regenerative Medicine.

Augmented-reality system lets doctors see medical images projected on patients’ skin

Projected medical image (credit: University of Alberta)

New technology is bringing the power of augmented reality into clinical practice. The system, called ProjectDR, shows clinicians 3D medical images such as CT scans and MRI data, projected directly on a patient’s skin.

The technology uses motion capture, similar to the way it’s done in movies. Infrared cameras track markers on the patient’s body that are invisible to the human eye, letting the system follow the body’s position and orientation so that the projected images move as the patient does.
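One common way such a system can keep the projection registered is to recover a rigid transform from the tracked marker positions. The sketch below uses the standard Kabsch algorithm as an illustration; it is a generic method, not necessarily ProjectDR's actual implementation.

```python
import numpy as np

# Generic rigid-registration sketch: given marker positions from a reference
# scan and their currently tracked positions, recover rotation R and
# translation t (Kabsch algorithm) so a projected image can be re-posed.

def rigid_transform(reference, tracked):
    """reference, tracked: (N, 3) arrays of matching marker positions."""
    ref_center, trk_center = reference.mean(axis=0), tracked.mean(axis=0)
    H = (reference - ref_center).T @ (tracked - trk_center)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = trk_center - R @ ref_center
    return R, t

ref = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], dtype=float)
R, t = rigid_transform(ref, ref + np.array([5.0, -2.0, 1.0]))  # pure translation
print(np.round(R, 3), np.round(t, 3))   # R ~ identity, t ~ (5, -2, 1)
```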

Applications include teaching, physiotherapy, laparoscopic surgery, and surgical planning.

ProjectDR can also present segmented images — for example, only the lungs or only the blood vessels — depending on what a clinician is interested in seeing.

The researchers plan to test ProjectDR in an operating room to simulate surgery, according to Pierre Boulanger, PhD, a professor in the Department of Computing Science at the University of Alberta, Canada. “We are also doing pilot studies to test the usability of the system for teaching chiropractic and physical therapy procedures,” he said.

They next plan to conduct real surgical pilot studies.


UAlbertaScience | ProjectDR demonstration video

 

How deep learning is about to transform biomedical science

Human induced pluripotent stem cell neurons imaged in phase contrast (gray pixels, left) — currently processed manually with fluorescent labels (color pixels) to make them visible. That’s about to radically change. (credit: Google)

Researchers at Google, Harvard University, and Gladstone Institutes have developed and tested new deep-learning algorithms that can identify details in terabytes of bioimages, replacing slow, less-accurate manual labeling methods.

Deep learning is a type of machine learning that can analyze data, recognize patterns, and make predictions. A new deep-learning approach to biological images, which the researchers call “in silico labeling” (ISL), can automatically find and predict features in images of “unlabeled” cells (cells that have not been manually identified by using fluorescent chemicals).

The new deep-learning network can identify whether a cell is alive or dead, and get the answer right 98 percent of the time (humans can typically only identify a dead cell with 80 percent accuracy) — without requiring invasive fluorescent chemicals, which make it difficult to track tissues over time. The deep-learning network can also predict detailed features such as nuclei and cell type (such as neural or breast cancer tissue).

The deep-learning algorithms are expected to make it possible to handle the enormous 3–5 terabytes of data per day generated by Gladstone Institutes’ fully automated robotic microscope, which can track individual cells for up to several months.

The research was published in the April 12, 2018 issue of the journal Cell.

How to train a deep-learning neural network to predict the identity of cell features in microscope images


Using fluorescent labels with unlabeled images to train a deep neural network to bring out image detail. (Left) An unlabeled phase-contrast microscope transmitted-light image of rat cortex — the center image from the z-stack (vertical stack) of unlabeled images. (Right three images) Labeled images created with three different fluorescent labels, revealing invisible details of cell nuclei (blue), dendrites (green), and axons (red). The numbered outsets at the bottom show magnified views of marked subregions of images. (credit: Finkbeiner Lab)

To explore the new deep-learning approach, Steven Finkbeiner, MD, PhD, the director of the Center for Systems and Therapeutics at Gladstone Institutes in San Francisco, teamed up with computer scientists at Google.

“We trained the [deep learning] neural network by showing it two sets of matching images of the same cells: one unlabeled [such as the black and white "phase contrast" microscope image shown in the illustration] and one with fluorescent labels [such as the three colored images shown above],” explained Eric Christiansen, a software engineer at Google Accelerated Science and the study’s first author. “We repeated this process millions of times. Then, when we presented the network with an unlabeled image it had never seen, it could accurately predict where the fluorescent labels belong.” (Fluorescent labels are created by adding chemicals to tissue samples to help visualize details.)
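The sketch below illustrates that training recipe in miniature: an image-to-image network is fit to predict fluorescence channels from a transmitted-light input by minimizing the pixelwise error against the real fluorescent labels. The network, image sizes, and dummy data are stand-ins, far smaller and simpler than the actual in silico labeling model (which is multi-scale and fed a z-stack of images).

```python
import torch
import torch.nn as nn

# Highly simplified sketch of the training loop described above (not the
# published architecture): predict fluorescence channels from transmitted light.

model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),   # 3 predicted fluorescence channels
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy stand-ins for one batch of matched images; real training uses millions
# of matched patches from actual microscope data.
transmitted = torch.rand(4, 1, 128, 128)
fluorescent = torch.rand(4, 3, 128, 128)

for step in range(3):                             # real training runs far longer
    predicted = model(transmitted)
    loss = nn.functional.mse_loss(predicted, fluorescent)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```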

The study used three cell types: human motor neurons derived from induced pluripotent stem cells, rat cortical cultures, and human breast cancer cells. For instance, the deep-learning neural network can identify a physical neuron within a mix of cells in a dish. It can go one step further and predict whether an extension of that neuron is an axon or dendrite (two different but similar-looking elements of the neural cell).

For this study, Google used TensorFlow, an open-source machine learning framework for deep learning originally developed by Google AI engineers. The code for this study, which is open-source on Github, is the result of a collaboration between Google Accelerated Science and two external labs: the Lee Rubin lab at Harvard and the Steven Finkbeiner lab at Gladstone.

Animation showing the same cells in transmitted light (black and white) and fluorescence (colored) imaging, along with predicted fluorescence labels from the in silico labeling model. Outset 2 shows the model predicts the correct labels despite the artifact in the transmitted-light input image. Outset 3 shows the model infers these processes are axons, possibly because of their distance from the nearest cells. Outset 4 shows the model sees the hard-to-see cell at the top, and correctly identifies the object at the left as DNA-free cell debris. (credit: Google)

Transforming biomedical research

“This is going to be transformative,” said Finkbeiner, who is also a professor of neurology and physiology at UC San Francisco. “Deep learning is going to fundamentally change the way we conduct biomedical science in the future, not only by accelerating discovery, but also by helping find treatments to address major unmet medical needs.”

In his laboratory, Finkbeiner is trying to find new ways to diagnose and treat neurodegenerative disorders, such as Alzheimer’s disease, Parkinson’s disease, and amyotrophic lateral sclerosis (ALS). “We still don’t understand the exact cause of the disease for 90 percent of these patients,” said Finkbeiner. “What’s more, we don’t even know if all patients have the same cause, or if we could classify the diseases into different types. Deep learning tools could help us find answers to these questions, which have huge implications on everything from how we study the disease to the way we conduct clinical trials.”

Without knowing the classifications of a disease, a drug could be tested on the wrong group of patients and seem ineffective, when it could actually work for different patients. With induced pluripotent stem cell technology, scientists could match patients’ own cells with their clinical information, and the deep network could find relationships between the two datasets to predict connections. This could help identify a subgroup of patients with similar cell features and match them to the appropriate therapy, Finkbeiner suggests.

The research was funded by Google, the National Institute of Neurological Disorders and Stroke of the National Institutes of Health, the Taube/Koret Center for Neurodegenerative Disease Research at Gladstone, the ALS Association’s Neuro Collaborative, and The Michael J. Fox Foundation for Parkinson’s Research.


Abstract of In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images

Microscopy is a central method in life sciences. Many popular methods, such as antibody labeling, are used to add physical fluorescent labels to specific cellular constituents. However, these approaches have significant drawbacks, including inconsistency; limitations in the number of simultaneous labels because of spectral overlap; and necessary perturbations of the experiment, such as fixing the cells, to generate the measurement. Here, we show that a computational machine-learning approach, which we call “in silico labeling” (ISL), reliably predicts some fluorescent labels from transmitted-light images of unlabeled fixed or live biological samples. ISL predicts a range of labels, such as those for nuclei, cell type (e.g., neural), and cell state (e.g., cell death). Because prediction happens in silico, the method is consistent, is not limited by spectral overlap, and does not disturb the experiment. ISL generates biological measurements that would otherwise be problematic or impossible to acquire.

roundup | Five important biomedical technology breakthroughs

Printing your own bioprinter

PrintrBot Simple Metal modified with the LVE for FRESH printing. (credit: Adam Feinberg/HardwareX)

Now you can build your own low-cost 3-D bioprinter by modifying a standard commercial desktop 3-D printer for under $500 — thanks to an open-source “LVE 3-D” design developed by Carnegie Mellon University (CMU) researchers. CMU provides detailed instructional videos.

You can print artificial human tissue scaffolds on a larger scale (an entire human heart) and at higher resolution and quality, the researchers say. Most 3-D bioprinters start between $10K and $20K, and high-end commercial bioprinters cost up to $200,000; they are typically proprietary, closed-source machines that are difficult to modify.

CMU Associate Professor Adam Feinberg says his lab “aims to produce open-source biomedical research that other researchers can expand upon … to seed innovation widely [and] encourage the rapid development of biomedical technologies to save lives.”— CMU, video, HardwareX (open-access)

AI-enhanced medical imaging technique reduces radiation doses, MRI times

AUTOMAP yields higher-quality medical images from less data, reducing radiation doses for CT and PET and shortening scan times for MRI. Shown here: MRI images reconstructed from the same data with conventional approaches (left) and AUTOMAP (right). (credit: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital)

Massachusetts General Hospital (MGH) researchers have developed a machine-learning-based technique that enables clinicians to acquire higher-quality images without the increased radiation dose that comes from acquiring additional computed tomography (CT) or positron emission tomography (PET) data, or the uncomfortably long scan times needed for magnetic resonance imaging (MRI).

The new AUTOMAP (automated transform by manifold approximation) deep-learning technique spares radiologists from having to manually tweak settings to overcome imperfections in raw data.

The technique could also help radiologists make real-time decisions about imaging protocols while the patient is in the scanner (image reconstruction time is just tens of milliseconds), thanks to AI algorithms running on graphical processing units (GPUs). — MGH Athinoula A. Martinos Center for Biomedical Imaging, Nature.
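As a rough illustration of the idea, the sketch below maps toy raw sensor data (e.g., k-space for MRI) directly to an image with learned fully connected layers followed by a convolutional clean-up. The sizes and layers are placeholders, not the published AUTOMAP architecture.

```python
import torch
import torch.nn as nn

# Toy sketch of learned image reconstruction: map raw sensor-domain data
# straight to image space. Dimensions and layers are illustrative only.

n = 32                                 # toy image size
sensor_dim, image_dim = 2 * n * n, n * n   # raw data has real + imaginary parts

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(sensor_dim, image_dim), nn.Tanh(),   # learned "transform"
    nn.Linear(image_dim, image_dim), nn.Tanh(),
    nn.Unflatten(1, (1, n, n)),
    nn.Conv2d(1, 1, kernel_size=3, padding=1),     # convolutional refinement
)

kspace = torch.randn(4, 2, n, n)   # dummy raw data: (batch, real/imag, n, n)
image = model(kspace)              # reconstructed images: (4, 1, 32, 32)
```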

“Nanotweezers” trap cells and nanoparticles with a laser beam

Trapping a nanoparticle with laser beams (credit: Linhan Lin et al./Nature Photonics)

A new tool called “opto-thermoelectric nanotweezers” (OTENT) allows bioscientists to use light to manipulate biological cells and molecules at single-molecule resolution. The goal is to make nanomedicine discoveries for early disease diagnosis.

Optically heating a plasmonic substrate generates a light-directed thermoelectric field through the spatial separation of dissolved ions within the heating laser spot, allowing nanoparticles of a wide range of materials, sizes, and shapes to be manipulated. — University of Texas at Austin, Nature Photonics.


Nanometer-scale MRI opens the door to viewing virus nanoparticles and proteins

MRFM imaging device under microwave irradiation (inset: ultra-sensitive mechanical oscillator, a silicon nanowire, with the sample coated at the tip). (credit: R. Annabestani et al./Physical Review X)

A new technique allows for magnetic resonance imaging (MRI) at the unprecedented resolution of 2 nanometers — 10,000 times smaller than current millimeter resolution.

It promises to open the door to major advances in understanding virus particles, proteins that cause diseases like Parkinson’s and Alzheimer’s, and discovery of new materials, say researchers at the University of Waterloo Institute for Quantum Computing.

The breakthrough technique combines magnetic resonance force microscopy (MRFM) with the ability to precisely control atomic spins. “We now have unprecedented access to understanding complex biomolecules,” says Waterloo physicist Raffi Budakian. — Waterloo, Physical Review X, arXiv (open-access)

Skin-implantable biosensor relays real-time personal health data to a cell phone, lasts for years

Left: Infrared light (arrow shows excitation light) causes a biosensor (blue) under the skin to fluoresce at a level determined by the chemical of interest (center). Right: A detector (arrow) receives and analyzes the signals from the biosensor and transmits data to a computer or phone. (credit: Profusa)

A technology for continuous monitoring of glucose, lactate, oxygen, carbon dioxide, and other molecules — using tiny biosensors that are placed under the skin with a single injection — has been developed by DARPA/NIH-supported Profusa. Using a flexible, biocompatible hydrogel fiber about 5 mm long and 0.5 mm wide, the biosensors can continuously measure body chemistries.

An external device sends light through the skin to the biosensor’s dye phosphor, which then emits light in proportion to the concentration of the analyte of interest (such as glucose). A detector then wirelessly transmits the brightness measurement to a computer or cell phone to record the change. Data can be shared securely with healthcare providers via digital networks.
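A minimal sketch of that read-out step, converting detector brightness into a concentration via a calibration curve, is shown below. The calibration points are invented placeholders, not Profusa data, and the real processing pipeline is not described in the source.

```python
import numpy as np

# Sketch of converting a normalized brightness reading into an analyte
# concentration using an assumed monotonic calibration curve (placeholder data).

calib_brightness = np.array([0.10, 0.25, 0.45, 0.70, 0.90])   # normalized signal
calib_glucose_mgdl = np.array([50, 90, 140, 200, 260])         # known concentrations

def brightness_to_glucose(brightness):
    """Interpolate along the calibration curve to estimate concentration."""
    return float(np.interp(brightness, calib_brightness, calib_glucose_mgdl))

print(brightness_to_glucose(0.55))   # ~164 mg/dL with these placeholder points
```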

To date, the injected biosensors have functioned for as long as four years. For example, tracking the rise and fall of oxygen levels around muscle with these sensors produces an “oxygen signature” that may reveal a person’s fitness level.

Until now, local inflammation and scar tissue from the “foreign body response” (triggered by a sensing electrode wire that penetrates the skin) have prevented development of in-body sensors capable of continuous, long-term monitoring of body chemistry. The Lumee Oxygen Platform, the first medical application of the biosensor technology, was approved in 2017 for sale in Europe; it is designed for patients undergoing treatment for chronic limb ischemia, helping to avoid amputations. — Profusa

DARPA-funded ‘body on a chip’ microfluidic system could revolutionize drug evaluation

To measure the effects of drugs on different parts of the body, MIT’s new “Physiome-on-a-chip” microfluidic platform can connect 3D tissues from up to 10 “organs on chips” — allowing researchers to accurately replicate human-organ interactions for weeks at a time. (credit: Felice Frankel)

MIT bioengineers have developed a new microfluidic platform technology that could be used to evaluate new drugs and detect possible side effects before the drugs are tested in humans.

The microfluidic platform can connect 3D tissues from up to 10 organs. Intended as a replacement for animal testing, it can accurately replicate human-organ interactions for weeks at a time and allows researchers to measure the effects of drugs on different parts of the body, according to the engineers. For example, the system could reveal whether a drug that is intended to treat one organ will have adverse effects on another.

Physiome on a chip. The new technology was originally funded in 2012 by the Defense Advanced Research Projects Agency (DARPA) Microphysiological Systems (MPS) program (see “DARPA and NIH to fund ‘human body on a chip’ research”). The goal of the $32 million program was to model potential drug effects more accurately and rapidly.

Linda Griffith, PhD, the MIT School of Engineering Professor of Teaching Innovation, a professor of biological engineering and mechanical engineering, and her colleagues decided to pursue a technology that they call a “physiome on a chip.” Griffith is one of the senior authors of a paper on the study, which appears in the open-access Nature journal Scientific Reports.

Schematic overview of “Physiome-on-a-chip” approach, using bioengineered devices that nurture many interconnected 3D “microphysiological systems” (MPSs), aka “organs-on-chips.” MPSs are in vitro (lab) models that capture facets of in vivo (live) organ function. They represent specified functional behaviors of each organ of interest, such as gut, brain, and liver, as shown here. MPSs are designed to capture essential features of in vivo physiology, based on quantitative systems models tailored for individual applications, such as drug fate or disease modeling. (illustration credit: Victor O. Leshyk)

To achieve this, the researchers needed to develop a platform that would allow tissues to grow and interact with each other. They also needed to develop engineered tissue that would accurately mimic the functions of human organs. Before this project was launched, no one had succeeded in connecting more than a few different tissue types on a platform. And most researchers working on this kind of chip were working with closed microfluidic systems, which allow fluid to flow in and out but do not offer an easy way to manipulate what is happening inside the chip. These systems also require awkward external pumps.

The MIT team decided to create an open system, making it easier to manipulate the system and remove samples for analysis. Their system was adapted from technology they previously developed and commercialized through U.K.-based CN BioInnovations*. It also incorporates several on-board pumps that can control the flow of liquid between the “organs,” replicating the circulation of blood, immune cells, and proteins through the human body. The pumps also allow larger tissues (for example, tumors within an organ) to be evaluated.

Complex interactions with 1 or 2 million cells in 10 organs. The researchers created three versions of their system, linking up to 10 organ types: liver, lung, gut, endometrium, brain, heart, pancreas, kidney, skin, and skeletal muscle. Each “organ” consists of clusters of 1 million to 2 million cells.

MPS platform and flow partitioning. (Left) Exploded rendering of the 7-MPS platform. Rigid plates of polysulfone (yellow) and acrylic (clear) sandwich an elastomeric (rubbery) membrane to form a pumping manifold with integrated fluid channels. Channels interface to the top side of the polysulfone plate (yellow) to deliver fluid to each MPS compartment (such as brain) in a defined manner. Fluid return occurs passively via spillway channels machined into the top plate. (Right) Flow partitioning mirrors physiological cardiac output for 7-way platforms. A 10-MPS platform adds muscle, skin, and kidney flow. (credit: Collin D. Edington et al./Scientific Reports)

These tissues don’t replicate the entire organ, but they do perform many of its important functions. Significantly, most of the tissues come directly from patient samples rather than from cell lines that have been developed for lab use. These “primary cells” are more difficult to work with, but offer a more representative model of organ function, Griffith says.

Using this system, the researchers showed that they could deliver a drug to the gastrointestinal tissue, mimicking oral ingestion of a drug, and then observe as the drug was transported to other tissues and metabolized. They could measure where the drugs went, the effects of the drugs on different tissues, and how the drugs were broken down.
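To illustrate the kind of transport-and-metabolism measurement just described, here is a toy first-order compartment simulation that follows a dose from a "gut" compartment into a "liver" compartment where it is cleared. The compartments and rate constants are invented for illustration and are not taken from the MIT platform.

```python
# Toy two-compartment sketch: a drug dose delivered to the gut transfers to
# the liver and is metabolized, using simple first-order rates (placeholders).

k_gut_to_liver = 0.8      # per hour
k_liver_metabolism = 0.5  # per hour

dt, hours = 0.01, 12.0
gut, liver, metabolized = 100.0, 0.0, 0.0   # micrograms

for _ in range(int(hours / dt)):            # simple Euler time-stepping
    transfer = k_gut_to_liver * gut * dt
    cleared = k_liver_metabolism * liver * dt
    gut -= transfer
    liver += transfer - cleared
    metabolized += cleared

print(f"after {hours:.0f} h: gut {gut:.1f} ug, liver {liver:.1f} ug, "
      f"metabolized {metabolized:.1f} ug")
```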

Replacing animal testing. The new microfluidic platform technology is superior to animal testing, according to Griffith:

  • Preclinical testing in animals can offer information about a drug’s safety and effectiveness before human testing begins, but those tests may not reveal potential side effects.
  • Drugs that work in animals often fail in human trials.
  • “Some of these effects are really hard to predict from animal models because the situations that lead to them are idiosyncratic. With our chip, you can distribute a drug and then look for the effects on other tissues and measure the exposure and how it is metabolized.”
  • These chips could also be used to evaluate antibody drugs and other immunotherapies, which are difficult to test thoroughly in animals because the treatments are only designed to interact with the human immune system.

Modeling Parkinson’s and metastasizing tumors. Griffith believes that the most immediate applications for this technology involve modeling two to four specific organs. Her lab is now developing a model system for Parkinson’s disease that includes brain, liver, and gastrointestinal tissue, which she plans to use to investigate the hypothesis that bacteria found in the gut can influence the development of Parkinson’s disease. Other applications the lab is investigating include modeling tumors that metastasize to other parts of the body.

“An advantage of our platform is that we can scale it up or down and accommodate a lot of different configurations,” Griffith says. “I think the field is going to go through a transition where we start to get more information out of a three-organ or four-organ system, and it will start to become cost-competitive because the information you’re getting is so much more valuable.”

The research was funded by the U.S. Army Research Office and DARPA.

* Co-author David Hughes is an employee of CN BioInnovations, the commercial vendor for the Liverchip. Linda Griffith and Steve Tannenbaum receive patent royalties from the Liverchip.

Abstract of Interconnected Microphysiological Systems for Quantitative Biology and Pharmacology Studies

Microphysiological systems (MPSs) are in vitro models that capture facets of in vivo organ function through use of specialized culture microenvironments, including 3D matrices and microperfusion. Here, we report an approach to co-culture multiple different MPSs linked together physiologically on re-useable, open-system microfluidic platforms that are compatible with the quantitative study of a range of compounds, including lipophilic drugs. We describe three different platform designs – “4-way”, “7-way”, and “10-way” – each accommodating a mixing chamber and up to 4, 7, or 10 MPSs. Platforms accommodate multiple different MPS flow configurations, each with internal re-circulation to enhance molecular exchange, and feature on-board pneumatically-driven pumps with independently programmable flow rates to provide precise control over both intra- and inter-MPS flow partitioning and drug distribution. We first developed a 4-MPS system, showing accurate prediction of secreted liver protein distribution and 2-week maintenance of phenotypic markers. We then developed 7-MPS and 10-MPS platforms, demonstrating reliable, robust operation and maintenance of MPS phenotypic function for 3 weeks (7-way) and 4 weeks (10-way) of continuous interaction, as well as PK analysis of diclofenac metabolism. This study illustrates several generalizable design and operational principles for implementing multi-MPS “physiome-on-a-chip” approaches in drug discovery.

Two new wearable sensors may replace traditional medical diagnostic devices

Throat-motion sensor monitors stroke effects more effectively

A radical new type of stretchable, wearable sensor that measures vocal-cord movements could be a “game changer” for stroke rehabilitation, according to Northwestern University scientists. The sensors can also measure swallowing ability (which may be affected by stroke), heart function, muscle activity, and sleep quality. Developed in the lab of engineering professor John A. Rogers, Ph.D., in partnership with Shirley Ryan AbilityLab in Chicago, the new sensors have been deployed to tens of patients.

“One of the biggest problems we face with stroke patients is that their gains tend to drop off when they leave the hospital,” said Arun Jayaraman, Ph.D., research scientist at the Shirley Ryan AbilityLab and a wearable-technology expert. “With the home monitoring enabled by these sensors, we can intervene at the right time, which could lead to better, faster recoveries for patients.”

(credit: Elliott Abel/ Shirley Ryan AbilityLab)

Monitoring movements, not sounds. The new band-aid-like stretchable throat sensor (two are applied) measures speech patterns by detecting throat movements to improve diagnosis and treatment of aphasia, a communication disorder associated with stroke.

Speech-language pathologists currently use microphones to monitor patients’ speech functions, which can’t distinguish between patients’ voices and ambient noise.

(credit: Elliott Abel/ Shirley Ryan AbilityLab)

Full-body kinematics. AbilityLab also uses similar electronic biosensors (developed in Rogers’ lab) on the legs, arms and chest to monitor stroke patients’ recovery progress. The sensors stream data wirelessly to clinicians’ phones and computers, providing a quantitative, full-body picture of patients’ advanced physical and physiological responses in real time.

Patients can wear them even after they leave the hospital, allowing doctors to understand how their patients are functioning in the real world.

 

(credit: Elliott Abel/ Shirley Ryan AbilityLab)

Mobile displays. Data from the sensors will be presented in a simple iPad-like display that is easy for both clinicians and patients to understand. It will send alerts when patients are under-performing on a certain metric and allow them to set and track progress toward their goals. A smartphone app can also help patients make corrections.

The researchers plan to test the sensors on patients with other conditions, such as Parkinson’s disease.

 

(credit: Elliott Abel/ Shirley Ryan AbilityLab)

Body-chemicals sensor. Another patch developed by the Rogers Lab does colorimetric analysis — determining the concentration of a chemical — for measuring sweat rate/loss and electrolyte loss. The Rogers Lab has a contract with Gatorade, and is testing this technology with the U.S. Air Force, the Seattle Mariners, and other unnamed sports teams.

Phone apps will also be available to capture precise colors and extract the data algorithmically.

A wearable electrocardiogram

Electrocardiogram on a prototype skin sensor (credit: 2018 Takao Someya Research Group)

Wearing your heart on your sleeve. Imagine looking at an electrocardiogram displayed on your wrist, using a simple skin sensor (replacing the usual complex array of EKG body electrodes) linked wirelessly to a smartphone or the cloud.

That’s the concept for a new wearable device developed by a team headed by Professor Takao Someya at the University of Tokyo’s Graduate School of Engineering and Dai Nippon Printing (DNP). It’s designed to provide continuous, non-invasive health monitoring.

 

The soft, flexible skin display is about 1 millimeter thick. (credit: 2018 Takao Someya Research Group.)

Stretchable nanomesh. The device uses a lightweight sensor made from a nanomesh electrode and a display made from a 16 x 24 array of micro LEDs and stretchable wiring, mounted on a rubber sheet. It’s stretchable by up to 45 percent of its original length and can be worn on the skin continuously for a week without causing inflammation.

The sensor can also measure temperature, pressure, and the electrical properties of muscle, and can display messages on skin.
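As a toy illustration of driving such a display, the sketch below rasterizes a short ECG segment onto a 16 x 24 frame, lighting one LED per column. The waveform and the mapping are invented placeholders; the actual display-driving code is not described in the source.

```python
import numpy as np

# Toy sketch: map an ECG segment onto a 16-row by 24-column on/off frame,
# one lit LED per column, with higher signal shown as a higher row.

ROWS, COLS = 16, 24

def ecg_to_frame(samples):
    """samples: 1-D ECG segment; returns a (16, 24) array of 0s and 1s."""
    cols = np.array_split(np.asarray(samples, dtype=float), COLS)
    frame = np.zeros((ROWS, COLS), dtype=int)
    lo, hi = min(map(np.min, cols)), max(map(np.max, cols))
    for c, chunk in enumerate(cols):
        level = (chunk.mean() - lo) / (hi - lo + 1e-9)        # normalize to 0..1
        frame[ROWS - 1 - int(level * (ROWS - 1)), c] = 1
    return frame

t = np.linspace(0, 1, 240)
fake_ecg = np.exp(-((t - 0.5) ** 2) / 0.002)   # a single spike standing in for the R wave
print(ecg_to_frame(fake_ecg))
```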

DNP hopes to bring the integrated skin display to market within three years.