New microscope captures awesome animated 3D movies of cells at high resolution and speed

HHMI Howard Hughes Medical Institute | An immune cell explores a zebrafish’s inner ear

By combining two state-of-the-art imaging technologies, scientists at the Howard Hughes Medical Institute’s Janelia Research Campus, led by physicist Eric Betzig, a 2014 Nobel laureate in chemistry, have imaged living cells with unprecedented 3D detail and speed, they report in an open-access paper published April 19, 2018 in the journal Science.

In stunning videos of animated worlds, cancer cells crawl, spinal nerve circuits rewire, and we travel down through the endomembrane mesh of a zebrafish eye.

Microscope flaws. The new adaptive optics/lattice light sheet microscopy (AO-LLSM) system addresses two fundamental flaws of traditional microscopes. First, they’re too slow to study natural three-dimensional (3D) cellular processes in real time and in detail (the sharpest views have been limited to isolated cells immobilized on glass slides).

Second, the bright light required for imaging causes photobleaching and other cellular damage. These microscopes bathe cells with light thousands to millions of times more intense than the desert sun, says Betzig — damaging or killing the organism being studied.

Merging adaptive optics and rapid scanning. To meet these challenges, Betzig and his team created a microscopy system that merges two technologies: Aberration-correcting adaptive-optics technology used by astronomers to provide clear views of distant celestial objects through Earth’s turbulent atmosphere; and non-invasive lattice light sheet microscopy, which rapidly and repeatedly sweeps an ultra-thin sheet of light through the cell (avoiding light damage) while acquiring a series of 2D images and building a high-resolution 3D movie of subcellular dynamics.

Zebrafish embryo spinal cord neural circuit development (credit: HHMI Howard Hughes Medical Institute)

The combination allows for the study of 3D subcellular processes in their native multicellular environments at high spatiotemporal (space and time) resolution.

Desk version. Currently, the new microscope fills a 10-foot-long table. “It’s a bit of a Frankenstein’s monster right now,” says Betzig. His team is working on a next-generation version that should fit on a small desk at a cost within the reach of individual labs. The first such instrument will go to Janelia’s Advanced Imaging Center, where scientists from around the world can apply to use it. Plans that scientists can use to create their own microscopes will also be made freely available.

Ultimately, Betzig hopes that the adaptive optical version of the lattice microscope will be commercialized, as was the base lattice instrument before it. That could bring adaptive optics into the mainstream.

Movie Gallery: Lattice Light Sheet Microscopy with Adaptive Optics 

Endocytosis in a human stem cell derived organoid. Clathrin-mediated endocytosis in vivo. Clathrin localization in muscle fibers.

‘Minimalist machine learning’ algorithm analyzes complex microscopy and other images from very little data

(a) Raw microscopy image of a slice of mouse lymphoblastoid cells. (b) Reconstructed image using time-consuming manual segmentation — note missing data (arrow). (c) Equivalent output of the new “Mixed-Scale Dense Convolution Neural Network” algorithm with 100 layers. (credit: Data from A. Ekman and C. Larabell, National Center for X-ray Tomography.)

Mathematicians at Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a radical new approach to machine learning: a new type of highly efficient “deep convolutional neural network” that can automatically analyze complex experimental scientific images from limited data.*

As experimental facilities generate higher-resolution images at higher speeds, scientists struggle to manage and analyze the resulting data, which is often done painstakingly by hand.

For example, biologists record cell images and painstakingly outline the borders and structure by hand. One person may spend weeks coming up with a single fully three-dimensional image of a cellular structure. Or materials scientists use tomographic reconstruction to peer inside rocks and materials, and then manually label different regions, identifying cracks, fractures, and voids by hand. Contrasts between different yet important structures are often very small and “noise” in the data can mask features and confuse the best of algorithms.

To meet this challenge, mathematicians Daniël Pelt and James Sethian at Berkeley Lab’s Center for Advanced Mathematics for Energy Research Applications (CAMERA)** attacked the problem of machine learning from very limited amounts of data — to do “more with less.”

Their goal was to figure out how to build an efficient set of mathematical “operators” that could greatly reduce the number of required parameters.

“Mixed-Scale Dense” network learns quickly with far fewer images

Many applications of machine learning to imaging problems use deep convolutional neural networks (DCNNs), in which the input image and intermediate images are convolved in a large number of successive layers, allowing the network to learn highly nonlinear features. To train deeper and more powerful networks, additional layer types and connections are often required. DCNNs typically use a large number of intermediate images and trainable parameters, often more than 100 million, to achieve results for difficult problems.

The new method the mathematicians developed, the “Mixed-Scale Dense Convolution Neural Network” (MS-D), avoids many of these complications. It “learns” from far fewer images than the tens or hundreds of thousands of labeled examples required by typical machine-learning methods, and it trains much more quickly, according to Pelt and Sethian.

(Top) A schematic representation of a two-layer CNN architecture. (Middle) A schematic representation of a common DCNN architecture with scaling operations; downward arrows represent downscaling operations, upward arrows represent upscaling operations and dashed arrows represent skipped connections. (Bottom) Schematic representation of an MS-D network; colored lines represent 3×3 dilated convolutions, with each color corresponding to a different dilation. (credit: Daniël Pelt and James Sethian/PNAS, composited by KurzweilAI)

The “Mixed-Scale Dense” network architecture calculates “dilated convolutions” — a substitute for complex scaling operations. To capture features at various spatial ranges, it employs multiple scales within a single layer and densely connects all intermediate images. The new algorithm achieves accurate results with few intermediate images and parameters, eliminating the need both to tune hyperparameters and to add extra layers or connections to enable training, according to the researchers.***
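To make the idea concrete, here is a minimal sketch of a mixed-scale dense layer stack, assuming PyTorch; the depth, width, dilation schedule, and segmentation head are illustrative placeholders, not the configuration published by Pelt and Sethian. Each layer applies a 3×3 dilated convolution to the concatenation of the input and all previous feature maps, so multiple spatial scales are mixed without any downscaling or upscaling.

```python
import torch
import torch.nn as nn

class MixedScaleDense(nn.Module):
    """Toy mixed-scale dense network: each layer sees all previous
    feature maps and uses a cycling dilation rate (no down/upscaling)."""
    def __init__(self, in_channels=1, out_channels=2, depth=20, width=1, max_dilation=10):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for i in range(depth):
            dilation = (i % max_dilation) + 1            # dilations cycle 1..max_dilation
            self.layers.append(nn.Conv2d(channels, width, kernel_size=3,
                                         padding=dilation, dilation=dilation))
            channels += width                            # dense connectivity: channels accumulate
        # final 1x1 convolution maps all accumulated feature maps to the output
        self.head = nn.Conv2d(channels, out_channels, kernel_size=1)

    def forward(self, x):
        features = x
        for layer in self.layers:
            new = torch.relu(layer(features))
            features = torch.cat([features, new], dim=1)  # densely connect all intermediate images
        return self.head(features)

# Example: segment a single-channel 256x256 image into 2 classes
net = MixedScaleDense()
logits = net(torch.randn(1, 1, 256, 256))
print(logits.shape)  # torch.Size([1, 2, 256, 256])
```

Because the channel count grows by only a few feature maps per layer, the total number of trainable parameters stays small compared with typical DCNNs, which is what makes training from a handful of labeled images plausible.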

“In many scientific applications, tremendous manual labor is required to annotate and tag images — it can take weeks to produce a handful of carefully delineated images,” said Sethian, who is also a mathematics professor at the University of California, Berkeley. “Our goal was to develop a technique that learns from a very small data set.”

Details of the algorithm were published Dec. 26, 2017 in a paper in the Proceedings of the National Academy of Sciences.

Radically transforming our ability to understand disease

The MS-D approach is already being used to extract biological structure from cell images, and is expected to provide a major new computational tool to analyze data across a wide range of research areas. In one project, the MS-D method needed data from only seven cells to determine the cell structure.

“The breakthrough resulted from realizing that the usual downscaling and upscaling that capture features at various image scales could be replaced by mathematical convolutions handling multiple scales within a single layer,” said Pelt, who is also a member of the Computational Imaging Group at the Centrum Wiskunde & Informatica, the national research institute for mathematics and computer science in the Netherlands.

“In our laboratory, we are working to understand how cell structure and morphology influences or controls cell behavior. We spend countless hours hand-segmenting cells in order to extract structure, and identify, for example, differences between healthy vs. diseased cells,” said Carolyn Larabell, Director of the National Center for X-ray Tomography and Professor at the University of California San Francisco School of Medicine.

“This new approach has the potential to radically transform our ability to understand disease, and is a key tool in our new Chan-Zuckerberg-sponsored project to establish a Human Cell Atlas, a global collaboration to map and characterize all cells in a healthy human body.”

To make the algorithm accessible to a wide set of researchers, a Berkeley team built a web portal, “Segmenting Labeled Image Data Engine (SlideCAM),” as part of the CAMERA suite of tools for DOE experimental facilities.

High-resolution science from low-resolution data

A different challenge is to produce high-resolution images from low-resolution input. If you’ve ever tried to enlarge a small photo and found it only gets worse as it gets bigger, this may sound close to impossible.

(a) Tomographic images of a fiber-reinforced mini-composite, reconstructed using 1024 projections. Noisy images (b) of the same object were obtained by reconstructing using only 128 projections, and were used as input to an MS-D network (c). A small region indicated by a red square is shown enlarged in the bottom-right corner of each image. (credit: Daniël Pelt and James A. Sethian/PNAS)

As an example, imagine trying to de-noise tomographic reconstructions of a fiber-reinforced mini-composite material. In an experiment described in the paper, images were reconstructed using 1,024 acquired X-ray projections to obtain images with relatively low amounts of noise. Noisy images of the same object were then obtained by reconstructing using only 128 projections. Training inputs to the Mixed-Scale Dense network were the noisy images, with corresponding noiseless images used as target output during training. The trained network was then able to effectively take noisy input data and reconstruct higher resolution images.
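A hedged sketch of how such a network could be trained for the denoising task described above (the function and variable names are illustrative, not from the paper; the study paired reconstructions from 128 projections as inputs with reconstructions from 1,024 projections as targets):

```python
import torch
import torch.nn as nn

# noisy_images: reconstructions from few projections (e.g., 128)   -> network input
# clean_images: reconstructions from many projections (e.g., 1,024) -> training target
# net: an image-to-image model such as the MixedScaleDense sketch above,
#      configured with out_channels=1 for regression instead of segmentation
def train_denoiser(net, noisy_images, clean_images, epochs=100, lr=1e-3):
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()                      # pixelwise regression loss
    for _ in range(epochs):
        optimizer.zero_grad()
        prediction = net(noisy_images)          # denoised estimate
        loss = loss_fn(prediction, clean_images)
        loss.backward()
        optimizer.step()
    return net
```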

Pelt and Sethian are now applying their approach to other new areas, such as real-time analysis of images coming out of synchrotron light sources, biological reconstruction of cells, and brain mapping.

* Inspired by the brain, convolutional neural networks are computer algorithms that have been successfully used in analyzing visual imagery. “Deep convolutional neural networks (DCNNs) use a network architecture similar to standard convolutional neural networks, but consist of a larger number of layers, which enables them to model more complicated functions. In addition, DCNNs often include downscaling and upscaling operations between layers, decreasing and increasing the dimensions of feature maps to capture features at different image scales.” — Daniël Pelt and James A. Sethian/PNAS

** In 2014, Sethian established CAMERA at the Department of Energy’s (DOE) Lawrence Berkeley National Laboratory as an integrated, cross-disciplinary center to develop and deliver fundamental new mathematics required to capitalize on experimental investigations at DOE Office of Science user facilities. CAMERA is part of the lab’s Computational Research Division. It is supported by the offices of Advanced Scientific Computing Research and Basic Energy Sciences in the Department of Energy’s Office of Science. The single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time.

*** “By combining dilated convolutions and dense connections, the MS-D network architecture can achieve accurate results with significantly fewer feature maps and trainable parameters than existing architectures, enabling accurate training with relatively small training sets. MS-D networks are able to automatically adapt by learning which combination of dilations to use, allowing identical MS-D networks to be applied to a wide range of different problems.” — Daniël Pelt and James A. Sethian/PNAS


Abstract of A mixed-scale dense convolutional neural network for image analysis

Deep convolutional neural networks have been successfully applied to many image-processing problems in recent works. Popular network architectures often add additional operations and connections to the standard architecture to enable training deeper networks. To achieve accurate results in practice, a large number of trainable parameters are often required. Here, we introduce a network architecture based on using dilated convolutions to capture features at different image scales and densely connecting all feature maps with each other. The resulting architecture is able to achieve accurate results with relatively few parameters and consists of a single set of operations, making it easier to implement, train, and apply in practice, and automatically adapts to different problems. We compare results of the proposed network architecture with popular existing architectures for several segmentation problems, showing that the proposed architecture is able to achieve accurate results with fewer parameters, with a reduced risk of overfitting the training data.

Cancer ‘vaccine’ eliminates all traces of cancer in mice

Effects of in situ vaccination with CpG and anti-OX40 agents. Left: Mice genetically engineered to spontaneously develop breast cancers in all 10 of their mammary pads were injected into the first arising tumor (black arrow) with either a vehicle (inactive fluid) (left) or with CpG and anti-OX40 (right). Pictures were taken on day 80. (credit: Idit Sagiv-Barfi et al./ Sci. Transl. Med.)

Injecting minute amounts of two immune-stimulating agents directly into solid tumors in mice was able to eliminate all traces of cancer in the animals — including distant, untreated metastases (spreading cancer locations), according to a study by Stanford University School of Medicine researchers.

The researchers believe this new “in situ vaccination” method could serve as a rapid and relatively inexpensive cancer therapy — one that is unlikely to cause the adverse side effects often seen with bodywide immune stimulation.

The approach works for many different types of cancers, including those that arise spontaneously, the study found.

“When we use these two agents together, we see the elimination of tumors all over the body,” said Ronald Levy*, MD, professor of oncology and senior author of the study, which was published Jan. 31 in Science Translational Medicine. “This approach bypasses the need to identify tumor-specific immune targets and doesn’t require wholesale activation of the immune system or customization of a patient’s immune cells.”

Many current immunotherapy approaches have been successful, but they each have downsides — from difficult-to-handle side effects to high-cost and lengthy preparation or treatment times.** “Our approach uses a one-time application of very small amounts of two agents to stimulate the immune cells only within the tumor itself,” Levy said. “In the mice, we saw amazing, bodywide effects, including the elimination of tumors all over the animal.”

Cancer-destroying T cells that target other tumors in the body

Levy’s method reactivates cancer-specific T cells (a type of white blood cell) by injecting microgram (one-millionth of a gram) amounts of the two agents directly into the tumor site.*** Because the two agents are injected directly into the tumor, only T cells that have infiltrated the tumor are activated. In effect, these T cells are “prescreened” by the body to recognize only cancer-specific proteins.

Some of these tumor-specific, activated T cells then leave the original tumor to find and destroy other identical tumors throughout the body.


“I don’t think there’s a limit to the type of tumor we could potentially treat, as long as it has been infiltrated by the immune system.” — Ronald Levy, MD.


The approach worked “startlingly well” in laboratory mice with transplanted mouse lymphoma tumors in two sites on their bodies, the researchers say. Injecting one tumor site with the two agents caused the regression not just of the treated tumor, but also of the second, untreated tumor. In this way, 87 of 90 mice were cured of the cancer. Although the cancer recurred in three of the mice, the tumors again regressed after a second treatment. The researchers saw similar results in mice bearing breast, colon and melanoma tumors.

Mice genetically engineered to spontaneously develop breast cancers in all 10 of their mammary pads also responded to the treatment. Treating the first tumor that arose often prevented the occurrence of future tumors and significantly increased the animals’ life span, the researchers found.

Finally, researchers explored the specificity of the T cells. They transplanted two types of tumors into the mice. They transplanted the same lymphoma cancer cells in two locations, and transplanted a colon cancer cell line in a third location. Treatment of one of the lymphoma sites caused the regression of both lymphoma tumors but did not affect the growth of the colon cancer cells.

“This is a very targeted approach,” Levy said. “Only the tumor that shares the protein targets displayed by the treated site is affected. We’re attacking specific targets without having to identify exactly what proteins the T cells are recognizing.”

Lymphoma clinical trial

The current clinical trial is expected to recruit about 15 patients with low-grade lymphoma. If successful, Levy believes the treatment could be useful for many tumor types. He envisions a future in which clinicians inject the two agents into solid tumors in humans prior to surgical removal of the cancer. This would prevent recurrence of cancer due to unidentified metastases or lingering cancer cells, or even head off the development of future tumors driven by mutations in genes such as BRCA1 and BRCA2.

* Levy, who holds the Robert K. and Helen K. Summy Professorship in the School of Medicine, is also a member of the Stanford Cancer Institute and Stanford Bio-X. Levy is a pioneer in the field of cancer immunotherapy, in which researchers try to harness the immune system to combat cancer. Research in his laboratory previously led to the development of rituximab, one of the first monoclonal antibodies approved for use as an anticancer treatment in humans. Professor of radiology Sanjiv Gambhir, MD, PhD, a co-author of the paper, is the founder and equity holder in CellSight Inc., which develops and translates multimodality strategies to image cell trafficking and transplantation. The research was supported by the National Institutes of Health, the Leukemia and Lymphoma Society, the Boaz and Varda Dotan Foundation, and the Phil N. Allen Foundation. Stanford’s Department of Medicine also supported the work.

** Some immunotherapy approaches rely on stimulating the immune system throughout the body. Others target naturally occurring checkpoints that limit the anti-cancer activity of immune cells. Still others, like the CAR T-cell therapy recently approved to treat some types of leukemia and lymphomas, require a patient’s immune cells to be removed from the body and genetically engineered to attack the tumor cells. Immune cells like T cells recognize the abnormal proteins often present on cancer cells and infiltrate to attack the tumor. However, as the tumor grows, it often devises ways to suppress the activity of the T cells.

*** One agent, CpG, a short stretch of DNA called a CpG oligonucleotide that induces an immune response, works with other nearby immune cells to amplify the expression of an activating receptor called OX40 on the surface of the T cells. The other agent, an antibody that binds to OX40, activates the T cells to lead the charge against the cancer cells.


Abstract of Eradication of spontaneous malignancy by local immunotherapy

It has recently become apparent that the immune system can cure cancer. In some of these strategies, the antigen targets are preidentified and therapies are custom-made against these targets. In others, antibodies are used to remove the brakes of the immune system, allowing preexisting T cells to attack cancer cells. We have used another noncustomized approach called in situ vaccination. Immunoenhancing agents are injected locally into one site of tumor, thereby triggering a T cell immune response locally that then attacks cancer throughout the body. We have used a screening strategy in which the same syngeneic tumor is implanted at two separate sites in the body. One tumor is then injected with the test agents, and the resulting immune response is detected by the regression of the distant, untreated tumor. Using this assay, the combination of unmethylated CG–enriched oligodeoxynucleotide (CpG)—a Toll-like receptor 9 (TLR9) ligand—and anti-OX40 antibody provided the most impressive results. TLRs are components of the innate immune system that recognize molecular patterns on pathogens. Low doses of CpG injected into a tumor induce the expression of OX40 on CD4+T cells in the microenvironment in mouse or human tumors. An agonistic anti-OX40 antibody can then trigger a T cell immune response, which is specific to the antigens of the injected tumor. Remarkably, this combination of a TLR ligand and an anti-OX40 antibody can cure multiple types of cancer and prevent spontaneous genetically driven cancers.

How to grow functioning human muscles from stem cells

A cross section of a muscle fiber grown from induced pluripotent stem cells, showing muscle cells (green), cell nuclei (blue), and the surrounding support matrix for the cells (credit: Duke University)

Biomedical engineers at Duke University have grown the first functioning human skeletal muscle from human induced pluripotent stem cells (iPSCs). (Pluripotent stem cells are important in regenerative medicine because they can generate any type of cell in the body and can propagate indefinitely; the induced version can be generated from adult cells instead of embryos.)

The engineers say the new technique is promising for cellular therapies, drug discovery, and studying rare diseases. “When a child’s muscles are already withering away from something like Duchenne muscular dystrophy, it would not be ethical to take muscle samples from them and do further damage,” explained Nenad Bursac, professor of biomedical engineering at Duke University and senior author of an open-access paper on the research published Tuesday, January 9, in Nature Communications.


How to grow a muscle

In the study, the researchers started with human induced pluripotent stem cells. These are cells taken from adult non-muscle tissues, such as skin or blood, and reprogrammed to revert to a primordial state. The pluripotent stem cells are then grown while being flooded with a molecule called Pax7 — which signals the cells to start becoming muscle.

After two to four weeks of 3-D culture, the resulting muscle cells form muscle fibers that contract and react to external stimuli such as electrical pulses and biochemical signals — mimicking neuronal inputs just like native muscle tissue. The researchers also implanted the newly grown muscle fibers into adult mice. The muscles survived and functioned for at least three weeks, while progressively integrating into the native tissue through vascularization (growing blood vessels).

A stained cross section of the new muscle fibers, showing muscle cells (red), receptors for neuronal input (green), and cell nuclei (blue) (credit: Duke University)

Once the cells were well on their way to becoming muscle, the researchers stopped providing the Pax7 signaling molecule and started giving the cells the support and nourishment they needed to fully mature. (At this point in the research, the resulting muscle is not as strong as native muscle tissue, and also falls short of the muscle grown in a previous study*, which started from muscle biopsies.)

However, the pluripotent stem cell-derived muscle fibers develop reservoirs of “satellite-like cells” that are necessary for normal adult muscles to repair damage, whereas the muscle from the previous study had far fewer of these cells. The stem cell method is also capable of growing many more cells from a smaller starting batch than the previous biopsy method.

“With this technique, we can just take a small sample of non-muscle tissue, like skin or blood, revert the obtained cells to a pluripotent state, and eventually grow an endless amount of functioning muscle fibers to test,” said Bursac.

The researchers could also, in theory, fix genetic malfunctions in the induced pluripotent stem cells derived from a patient, he added. Then they could grow small patches of completely healthy muscle. This could not heal or replace an entire body’s worth of diseased muscle, but it could be used in tandem with more widely targeted genetic therapies or to heal more localized problems.


The researchers are now refining their technique to grow more robust muscles and beginning work to develop new models of rare muscle diseases. This work was supported by the National Institutes of Health.


Duke Engineering | Human Muscle Grown from Skin Cells

Muscles for future microscale robot exoskeletons

Meanwhile, physicists at Cornell University are exploring ways to create muscles for future microscale robot exoskeletons — rapidly changing their shape upon sensing chemical or thermal changes in their environment. The new designs are compatible with semiconductor manufacturing, making them useful for future microscale robotics.

The microscale robot exoskeleton muscles move using a motor called a bimorph. (A bimorph is an assembly of two materials — in this case, graphene and glass — that bends when driven by a stimulus like heat, a chemical reaction or an applied voltage.) The shape change happens because, in the case of heat, two materials with different thermal responses expand by different amounts over the same temperature change. The bimorph bends to relieve some of this strain, allowing one layer to stretch out longer than the other. By adding rigid flat panels that cannot be bent by bimorphs, the researchers localize bending to take place only in specific places, creating folds. With this concept, they are able to make a variety of folding structures ranging from tetrahedra (triangular pyramids) to cubes. The bimorphs also fold in response to chemical stimuli by driving large ions into the glass, causing it to expand. (credit: Marc Z. Miskin et al./PNAS)
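For a rough sense of scale, the textbook Timoshenko formula for a two-layer strip (simplified here to equal layer thicknesses and moduli, an illustrative approximation rather than the Cornell team’s model) gives a curvature of about 1.5 times the differential strain divided by the total thickness, so nanometer-thick bimorphs with sub-percent strain mismatches curl to micrometer radii:

```python
# Back-of-the-envelope bimorph curvature (Timoshenko bimetal strip,
# simplified to equal layer thickness and equal elastic modulus):
#   curvature ≈ 1.5 * (differential strain) / (total thickness)
# The values below are illustrative, not taken from the Cornell paper.

strain_mismatch = 1e-3      # 0.1% differential strain between the two layers
total_thickness = 10e-9     # 10 nm total bimorph thickness

curvature = 1.5 * strain_mismatch / total_thickness   # in 1/m
radius_of_curvature = 1 / curvature

print(f"radius of curvature ≈ {radius_of_curvature * 1e6:.1f} micrometers")
# ≈ 6.7 micrometers: nanometer-thin sheets fold at cellular length scales
```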

Their work is outlined in a paper published Jan. 2 in Proceedings of the National Academy of Sciences.

* The advance builds on work published in 2015, when the Duke engineers grew the first functioning human muscle tissue from cells obtained from muscle biopsies. In that research, Bursac and his team started with small samples of human cells obtained from muscle biopsies, called “myoblasts,” that had already progressed beyond the stem cell stage but hadn’t yet become mature muscle fibers. The engineers grew these myoblasts by many folds and then put them into a supportive 3-D scaffolding filled with a nourishing gel that allowed them to form aligned and functioning human muscle fibers.


Abstract of Engineering human pluripotent stem cells into a functional skeletal muscle tissue

The generation of functional skeletal muscle tissues from human pluripotent stem cells (hPSCs) has not been reported. Here, we derive induced myogenic progenitor cells (iMPCs) via transient overexpression of Pax7 in paraxial mesoderm cells differentiated from hPSCs. In 2D culture, iMPCs readily differentiate into spontaneously contracting multinucleated myotubes and a pool of satellite-like cells endogenously expressing Pax7. Under optimized 3D culture conditions, iMPCs derived from multiple hPSC lines reproducibly form functional skeletal muscle tissues (iSKM bundles) containing aligned multi-nucleated myotubes that exhibit positive force–frequency relationship and robust calcium transients in response to electrical or acetylcholine stimulation. During 1-month culture, the iSKM bundles undergo increased structural and molecular maturation, hypertrophy, and force generation. When implanted into dorsal window chamber or hindlimb muscle in immunocompromised mice, the iSKM bundles survive, progressively vascularize, and maintain functionality. iSKM bundles hold promise as a microphysiological platform for human muscle disease modeling and drug development.


Abstract of Graphene-based bimorphs for micron-sized, autonomous origami machines

Origami-inspired fabrication presents an attractive platform for miniaturizing machines: thinner layers of folding material lead to smaller devices, provided that key functional aspects, such as conductivity, stiffness, and flexibility, are preserved. Here, we show origami fabrication at its ultimate limit by using 2D atomic membranes as a folding material. As a prototype, we bond graphene sheets to nanometer-thick layers of glass to make ultrathin bimorph actuators that bend to micrometer radii of curvature in response to small strain differentials. These strains are two orders of magnitude lower than the fracture threshold for the device, thus maintaining conductivity across the structure.

Researchers hack cell biology to create complex shapes that form living tissue

This image shows the shapes made of living tissue, engineered by the researchers. By patterning mechanically active mouse or human cells to thin layers of extracellular fibers, the researchers could create bowls, coils, and ripple shapes. (credit: Alex Hughes)

Many of the complex folded and curved shapes that form human tissues can now be programmatically recreated with very simple instructions, UC San Francisco (UCSF) bioengineers report December 28 in the journal Developmental Cell.

The researchers used 3D cell-patterning to arrange mechanically active mouse and human embryonic cells onto thin layers of extracellular matrix fibers (a structural material, produced by our cells, that makes up connective tissue), creating bowls, coils, and ripples out of living tissue. A web of these fibers then folded itself up in predictable ways, mimicking developmental processes in natural human body tissue.

Beyond 3D-printing and molds

As KurzweilAI has reported, labs have already used modified 3D printers to pioneer 3D shapes for tissue engineering (such as this research in creating an ear and jawbone structure). They have also used micro-molding for creating variously shaped objects using plastic material in a mold (frame). But the final product often misses key structural features of normal tissues.

Engineered tissue curvature using DNA-programmed assembly of cells (credit: Alex J. Hughes et al./ Developmental Cell)

The UCSF lab approach instead used a precision 3D cell-patterning technology called DNA-programmed assembly of cells (DPAC). It provides an initial template (pattern) for tissue to later develop in vitro (in a test tube or other lab container). That tissue automatically folds itself into complex shapes in ways that replicate how in vivo (body) tissues normally assemble themselves hierarchically during development.

“This approach could significantly improve the structure, maturation, and vascularization of tissues in organoids” (miniature models of human parts, such as brains, used for drug testing) “and 3D-printed tissues in general,” the researchers note in the paper.

“We believe these efforts have important implications for the engineering of in vitro models of disease, for regenerative medicine, and for future applications of living active materials such as in soft robotics. … These mechanisms can be integrated with top-down patterning technologies such as optogenetics, micromolding, and printing approaches that control cellular and [extracellular matrix] tissue composition at specific locations.”

This work was funded by a Jane Coffin Childs postdoctoral fellowship, the National Institutes of Health, the Department of Defense Breast Cancer Research Program, the NIH Common Fund, the Chan-Zuckerberg Biohub Investigator Program, the National Science Foundation, the UCSF Program in Breakthrough Biomedical Research, and the UCSF Center for Cellular Construction.


Abstract of Engineered Tissue Folding by Mechanical Compaction of the Mesenchyme

Many tissues fold into complex shapes during development. Controlling this process in vitro would represent an important advance for tissue engineering. We use embryonic tissue explants, finite element modeling, and 3D cell-patterning techniques to show that mechanical compaction of the extracellular matrix during mesenchymal condensation is sufficient to drive tissue folding along programmed trajectories. The process requires cell contractility, generates strains at tissue interfaces, and causes patterns of collagen alignment around and between condensates. Aligned collagen fibers support elevated tensions that promote the folding of interfaces along paths that can be predicted by modeling. We demonstrate the robustness and versatility of this strategy for sculpting tissue interfaces by directing the morphogenesis of a variety of folded tissue forms from patterns of mesenchymal condensates. These studies provide insight into the active mechanical properties of the embryonic mesenchyme and establish engineering strategies for more robustly directing tissue morphogenesis ex vivo.

How to program DNA like we do computers

A programmable chemical oscillator made from DNA (credit: Ella Maru Studio and Cody Geary)

Researchers at The University of Texas at Austin have programmed DNA molecules to follow specific instructions to create sophisticated molecular machines that could be capable of communication, signal processing, problem-solving, decision-making, and control of motion in living cells — the kind of computation previously only possible with electronic circuits.

Future applications may include health care, advanced materials, and nanotechnology.

As a demonstration, the researchers constructed a first-of-its-kind chemical oscillator that uses only DNA components — no proteins, enzymes or other cellular components — to create a classic chemical reaction network (CRN) called a “rock-paper-scissors oscillator.” The goal was to show that DNA alone is capable of precise, complex behavior.
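For intuition about the target behavior, the idealized rock-paper-scissors reaction scheme (A+B→2B, B+C→2C, C+A→2A) can be simulated directly with mass-action differential equations. This is only a sketch of the abstract chemical reaction network, not of the DNA strand-displacement implementation the researchers built:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Idealized rock-paper-scissors CRN: A+B -> 2B, B+C -> 2C, C+A -> 2A
# Mass-action kinetics with rate constant k; the concentrations cycle out of phase.
def rps(t, y, k=1.0):
    a, b, c = y
    return [k * (c * a - a * b),   # A produced from C+A, consumed by A+B
            k * (a * b - b * c),   # B produced from A+B, consumed by B+C
            k * (b * c - c * a)]   # C produced from B+C, consumed by C+A

sol = solve_ivp(rps, t_span=(0, 50), y0=[1.2, 0.8, 1.0],
                t_eval=np.linspace(0, 50, 500))
print(sol.y[:, -1])   # the three "species" keep oscillating, none wins permanently
```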

A systematic pipeline for programming DNA-only dynamical systems and the implementation of a chemical oscillator (credit: Niranjan Srinivas et al./Science)

Chemical oscillators have long been studied by engineers and scientists. For example, the researchers who discovered the chemical oscillator that controls the human circadian rhythm — responsible for our bodies’ day and night rhythm — earned the 2017 Nobel Prize in physiology or medicine.

“As engineers, we are very good at building sophisticated electronics, but biology uses complex chemical reactions inside cells to do many of the same kinds of things, like making decisions,” said David Soloveichik, an assistant professor in the Cockrell School’s Department of Electrical and Computer Engineering and senior author of a paper in the journal Science.

“Eventually, we want to be able to interact with the chemical circuits of a cell, or fix malfunctioning circuits or even reprogram them for greater control. But in the near term, our DNA circuits could be used to program the behavior of cell-free chemical systems that synthesize complex molecules, diagnose complex chemical signatures, and respond to their environments.”

The team’s research was conducted as part of the National Science Foundation’s (NSF) Molecular Programming Project and funded by the NSF, the Office of Naval Research, the National Institutes of Health, and the Gordon and Betty Moore Foundation.


Programming a Chemical Oscillator


Abstract of Enzyme-free nucleic acid dynamical systems

An important goal of synthetic biology is to create biochemical control systems with the desired characteristics from scratch. Srinivas et al. describe the creation of a biochemical oscillator that requires no enzymes or evolved components, but rather is implemented through DNA molecules designed to function in strand displacement cascades. Furthermore, they created a compiler that could translate a formal chemical reaction network into the necessary DNA sequences that could function together to provide a specified dynamic behavior.

 

 

Space dust may transport life between worlds

Imagine what this amazingly resilient microscopic (0.2 to 0.7 millimeter) Milnesium tardigradum animal could evolve into on another planet. (credit: Wikipedia)

Life on our planet might have originated from biological particles brought to Earth in streams of space dust, according to a study published in the journal Astrobiology.

A huge amount of space dust (~10,000 kilograms — about the weight of two elephants) enters our atmosphere every day — possibly delivering organisms from far-off worlds, according to Professor Arjun Berera from the University of Edinburgh School of Physics and Astronomy, who led the study.

The dust streams could also collide with bacteria and other biological particles at 150 km or higher above Earth’s surface with enough energy to knock them into space, carrying Earth-based organisms to other planets and perhaps beyond.

The finding suggests that large asteroid impacts may not be the sole mechanism by which life could transfer between planets, as previously thought.

“The streaming of fast space dust is found throughout planetary systems and could be a common factor in proliferating life,” said Berera. Some bacteria, plants, and even microscopic animals called tardigrades* are known to be able to survive in space, so it is possible that such organisms — if present in Earth’s upper atmosphere — might collide with fast-moving space dust and withstand a journey to another planet.**

The study was partly funded by the U.K. Science and Technology Facilities Council.

* “Some tardigrades can withstand extremely cold temperatures down to 1 K (−458 °F; −272 °C) (close to absolute zero), while others can withstand extremely hot temperatures up to 420 K (300 °F; 150 °C) for several minutes, pressures about six times greater than those found in the deepest ocean trenches, ionizing radiation at doses hundreds of times higher than the lethal dose for a human, and the vacuum of outer space. They can go without food or water for more than 30 years, drying out to the point where they are 3% or less water, only to rehydrate, forage, and reproduce.” — Wikipedia

** “Over the lifespan of the Earth of four billion years, particles emerging from Earth by this manner in principle could have traveled out as far as tens of kiloparsecs [one kiloparsec = 3,260 light years; our galaxy is about 100,000 light-years across]. This material horizon, as could be called the maximum distance on pure kinematic grounds that a material particle from Earth could travel outward based on natural processes, would cover most of our Galactic disk [the "Milky Way"], and interestingly would be far enough out to reach the Earth-like or potentially habitable planets that have been identified.” — Arjun Berera/Astrobiology


Abstract of Space Dust Collisions as a Planetary Escape Mechanism

It is observed that hypervelocity space dust, which is continuously bombarding Earth, creates immense momentum flows in the atmosphere. Some of this fast space dust inevitably will interact with the atmospheric system, transferring energy and moving particles around, with various possible consequences. This paper examines, with supporting estimates, the possibility that by way of collisions the Earth-grazing component of space dust can facilitate planetary escape of atmospheric particles, whether they are atoms and molecules that form the atmosphere or larger-sized particles. An interesting outcome of this collision scenario is that a variety of particles that contain telltale signs of Earth’s organic story, including microbial life and life-essential molecules, may be “afloat” in Earth’s atmosphere. The present study assesses the capability of this space dust collision mechanism to propel some of these biological constituents into space. Key Words: Hypervelocity space dust—Collision—Planetary escape—Atmospheric constituents—Microbial life. Astrobiology 17, xxx–xxx.

Scientists decipher mechanisms in cells for extending human longevity

Aging cells periodically switch their chromatin state. The image illustrates the “on” and “off” patterns in individual cells. (credit: UC San Diego)

A team of scientists at the University of California San Diego led by biologist Nan Hao have combined engineering, computer science, and biology technologies to decode the molecular processes in cells that influence aging.

Protecting DNA from damage

As cells age, damage in their DNA accumulates over time, leading to decay in normal functioning — eventually resulting in death. But a natural biochemical process known as “chromatin silencing” helps protect DNA from damage by converting specific regions of DNA from a loose, open state into a closed one, thus shielding DNA regions. (Chromatin is a complex of macromolecules found in cells, consisting of DNA, protein, and RNA.)

Among the molecules that promote silencing is a family of proteins — broadly conserved from bacteria to humans — known as sirtuins. In recent years, chemical activators of sirtuins have received much attention and are being marketed as nutraceuticals (such as resveratrol and more recently, NMN, as discussed on KurzweilAI) to aid chromatin silencing in the hopes of slowing the aging process.

To silence or not to silence? It’s all about the dynamics.

However, scientists have also found that chromatin silencing stops the protected DNA regions from expressing the RNAs and proteins that carry out biological functions, so excessive silencing could derail normal cell physiology.

To learn more, the UC San Diego scientists turned to cutting-edge computational and experimental approaches in yeast, as described in an open-access study published in Proceedings of the National Academy of Sciences. That allowed the researchers to track chromatin silencing in unprecedented detail through generations during aging.

Here’s the puzzle: the researchers found that a complete loss of silencing leads to cell aging and death, but continuous chromatin silencing also shortens lifespan. So is silencing or not silencing the answer to delaying aging? The answer from the new study: both.

According to the researchers, nature has developed a clever way to solve this dilemma. “Instead of staying in the silencing or silencing loss state, cells switch their DNA between the open (silencing loss) and closed (silencing) states periodically during aging,” said Hao. “In this way, cells can avoid a prolonged duration in either state, which is detrimental, and maintain a time-based balance important for their function and longevity.”

What about nutraceuticals?

So are nutraceuticals to aid chromatin silencing still advised? According to a statement provided to KurzweilAI, “since the study focused on yeast aging, much more investigation is needed to inform any questions about chromatin silencing and nutraceuticals for human benefit, which is a much more complex issue requiring more intricate studies.”

“When cells grow old, they lose their ability to maintain this periodic switching, resulting in aged phenotypes and eventually death,” explained Hao. “The implication here is that if we can somehow help cells to reinforce switching, especially as they age, we can slow their aging. And this possibility is what we are currently pursuing.

“I believe this collaboration will produce in the near future many new insights that will transform our understanding in the basic biology of aging and will lead to new strategies to promote longevity in humans.”

The research was supported by the National Science Foundation, University of California Cancer Research Coordinating Committee (L.P.); Department of Defense, Air Force Office of Scientific Research, National Defense Science and Engineering; Human Frontier Science Program; and the San Diego Center for Systems Biology National Institutes of Health.


Nan Hao | This time-lapse movie tracks the replicative aging of individual yeast cells throughout their entire life spans.


Nan Hao | periodic switching during aging


Abstract of Multigenerational silencing dynamics control cell aging

Cellular aging plays an important role in many diseases, such as cancers, metabolic syndromes, and neurodegenerative disorders. There has been steady progress in identifying aging-related factors such as reactive oxygen species and genomic instability, yet an emerging challenge is to reconcile the contributions of these factors with the fact that genetically identical cells can age at significantly different rates. Such complexity requires single-cell analyses designed to unravel the interplay of aging dynamics and cell-to-cell variability. Here we use microfluidic technologies to track the replicative aging of single yeast cells and reveal that the temporal patterns of heterochromatin silencing loss regulate cellular life span. We found that cells show sporadic waves of silencing loss in the heterochromatic ribosomal DNA during the early phases of aging, followed by sustained loss of silencing preceding cell death. Isogenic cells have different lengths of the early intermittent silencing phase that largely determine their final life spans. Combining computational modeling and experimental approaches, we found that the intermittent silencing dynamics is important for longevity and is dependent on the conserved Sir2 deacetylase, whereas either sustained silencing or sustained loss of silencing shortens life span. These findings reveal that the temporal patterns of a key molecular process can directly influence cellular aging, and thus could provide guidance for the design of temporally controlled strategies to extend life span.

Will AI enable the third stage of life?

In his new book Life 3.0: Being Human in the Age of Artificial Intelligence, MIT physicist and AI researcher Max Tegmark explores the future of technology, life, and intelligence.

The question of how to define life is notoriously controversial. Competing definitions abound, some of which include highly specific requirements such as being composed of cells, which might disqualify both future intelligent machines and extraterrestrial civilizations. Since we don’t want to limit our thinking about the future of life to the species we’ve encountered so far, let’s instead define life very broadly, simply as a process that can retain its complexity and replicate.

What’s replicated isn’t matter (made of atoms) but information (made of bits) specifying how the atoms are arranged. When a bacterium makes a copy of its DNA, no new atoms are created, but a new set of atoms are arranged in the same pattern as the original, thereby copying the information.

In other words, we can think of life as a self-replicating information-processing system whose information (software) determines both its behavior and the blueprints for its hardware.

Like our Universe itself, life gradually grew more complex and interesting, and as I’ll now explain, I find it helpful to classify life forms into three levels of sophistication: Life 1.0, 2.0 and 3.0.

It’s still an open question how, when and where life first appeared in our Universe, but there is strong evidence that here on Earth life first appeared about 4 billion years ago.

Before long, our planet was teeming with a diverse panoply of life forms. The most successful ones, which soon outcompeted the rest, were able to react to their environment in some way.

Specifically, they were what computer scientists call “intelligent agents”: entities that collect information about their environment from sensors and then process this information to decide how to act back on their environment. This can include highly complex information processing, such as when you use information from your eyes and ears to decide what to say in a conversation. But it can also involve hardware and software that’s quite simple.

For example, many bacteria have a sensor measuring the sugar concentration in the liquid around them and can swim using propeller-shaped structures called flagella. The hardware linking the sensor to the flagella might implement the following simple but useful algorithm: “If my sugar concentration sensor reports a lower value than a couple of seconds ago, then reverse the rotation of my flagella so that I change direction.”
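That rule is simple enough to write out explicitly. Here is a minimal sketch of the hard-coded algorithm Tegmark describes, with hypothetical stand-in functions for the bacterium’s “hardware”:

```python
# Minimal sketch of the hard-coded chemotaxis rule described above.
# read_sugar_sensor() and reverse_flagella() are hypothetical stand-ins
# for the bacterium's sensor and propulsion hardware.

def chemotaxis_step(previous_reading, read_sugar_sensor, reverse_flagella):
    current_reading = read_sugar_sensor()
    if current_reading < previous_reading:    # sugar concentration is falling
        reverse_flagella()                    # so change swimming direction
    return current_reading                    # remembered for the next comparison
```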

You’ve learned how to speak and countless other skills. Bacteria, on the other hand, aren’t great learners. Their DNA specifies not only the design of their hardware, such as sugar sensors and flagella, but also the design of their software. They never learn to swim toward sugar; instead, that algorithm was hard-coded into their DNA from the start.

There was of course a learning process of sorts, but it didn’t take place during the lifetime of that particular bacterium. Rather, it occurred during the preceding evolution of that species of bacteria, through a slow trial-and-error process spanning many generations, where natural selection favored those random DNA mutations that improved sugar consumption. Some of these mutations helped by improving the design of flagella and other hardware, while other mutations improved the bacterial information-processing system that implements the sugar-finding algorithm and other software.


“Tegmark’s new book is a deeply thoughtful guide to the most important conversation of our time, about how to create a benevolent future civilization as we merge our biological thinking with an even greater intelligence of our own creation.” — Ray Kurzweil, Inventor, Author and Futurist, author of The Singularity Is Near and How to Create a Mind


Such bacteria are an example of what I’ll call “Life 1.0”: life where both the hardware and software are evolved rather than designed. You and I, on the other hand, are examples of “Life 2.0”: life whose hardware is evolved, but whose software is largely designed. By your software, I mean all the algorithms and knowledge that you use to process the information from your senses and decide what to do—everything from the ability to recognize your friends when you see them to your ability to walk, read, write, calculate, sing and tell jokes.

You weren’t able to perform any of those tasks when you were born, so all this software got programmed into your brain later through the process we call learning. Whereas your childhood curriculum is largely designed by your family and teachers, who decide what you should learn, you gradually gain more power to design your own software.

Perhaps your school allows you to select a foreign language: Do you want to install a software module into your brain that enables you to speak French, or one that enables you to speak Spanish? Do you want to learn to play tennis or chess? Do you want to study to become a chef, a lawyer or a pharmacist? Do you want to learn more about artificial intelligence (AI) and the future of life by reading a book about it?

This ability of Life 2.0 to design its software enables it to be much smarter than Life 1.0. High intelligence requires both lots of hardware (made of atoms) and lots of software (made of bits). The fact that most of our human hardware is added after birth (through growth) is useful, since our ultimate size isn’t limited by the width of our mom’s birth canal. In the same way, the fact that most of our human software is added after birth (through learning) is useful, since our ultimate intelligence isn’t limited by how much information can be transmitted to us at conception via our DNA, 1.0-style.

I weigh about twenty-five times more than when I was born, and the synaptic connections that link the neurons in my brain can store about a hundred thousand times more information than the DNA that I was born with. Your synapses store all your knowledge and skills as roughly 100 terabytes’ worth of information, while your DNA stores merely about a gigabyte, barely enough to store a single movie download. So it’s physically impossible for an infant to be born speaking perfect English and ready to ace her college entrance exams: there’s no way the information could have been preloaded into her brain, since the main information module she got from her parents (her DNA) lacks sufficient information-storage capacity.

The ability to design its software enables Life 2.0 to be not only smarter than Life 1.0, but also more flexible. If the environment changes, 1.0 can only adapt by slowly evolving over many generations. Life 2.0, on the other hand, can adapt almost instantly, via a software update. For example, bacteria frequently encountering antibiotics may evolve drug resistance over many generations, but an individual bacterium won’t change its behavior at all; in contrast, a girl learning that she has a peanut allergy will immediately change her behavior to start avoiding peanuts.

This flexibility gives Life 2.0 an even greater edge at the population level: even though the information in our human DNA hasn’t evolved dramatically over the past fifty thousand years, the information collectively stored in our brains, books and computers has exploded. By installing a software module enabling us to communicate through sophisticated spoken language, we ensured that the most useful information stored in one person’s brain could get copied to other brains, potentially surviving even after the original brain died.

By installing a software module enabling us to read and write, we became able to store and share vastly more information than people could memorize. By developing brain software capable of producing technology (i.e., by studying science and engineering), we enabled much of the world’s information to be accessed by many of the world’s humans with just a few clicks.

This flexibility has enabled Life 2.0 to dominate Earth. Freed from its genetic shackles, humanity’s combined knowledge has kept growing at an accelerating pace as each breakthrough enabled the next: language, writing, the printing press, modern science, computers, the internet, etc. This ever-faster cultural evolution of our shared software has emerged as the dominant force shaping our human future, rendering our glacially slow biological evolution almost irrelevant.

Yet despite the most powerful technologies we have today, all life forms we know of remain fundamentally limited by their biological hardware. None can live for a million years, memorize all of Wikipedia, understand all known science or enjoy spaceflight without a spacecraft. None can transform our largely lifeless cosmos into a diverse biosphere that will flourish for billions or trillions of years, enabling our Universe to finally fulfill its potential and wake up fully. All this requires life to undergo a final upgrade, to Life 3.0, which can design not only its software but also its hardware. In other words, Life 3.0 is the master of its own destiny, finally fully free from its evolutionary shackles.

The boundaries between the three stages of life are slightly fuzzy. If bacteria are Life 1.0 and humans are Life 2.0, then you might classify mice as 1.1: they can learn many things, but not enough to develop language or invent the internet. Moreover, because they lack language, what they learn gets largely lost when they die, not passed on to the next generation. Similarly, you might argue that today’s humans should count as Life 2.1: we can perform minor hardware upgrades such as implanting artificial teeth, knees and pacemakers, but nothing as dramatic as getting ten times taller or acquiring a thousand times bigger brain.

In summary, we can divide the development of life into three stages, distinguished by life’s ability to design itself:

• Life 1.0 (biological stage): evolves its hardware and software

• Life 2.0 (cultural stage): evolves its hardware, designs much of its software

• Life 3.0 (technological stage): designs its hardware and software

After 13.8 billion years of cosmic evolution, development has accelerated dramatically here on Earth: Life 1.0 arrived about 4 billion years ago, Life 2.0 (we humans) arrived about a hundred millennia ago, and many AI researchers think that Life 3.0 may arrive during the coming century, perhaps even during our lifetime, spawned by progress in AI. What will happen, and what will this mean for us? That’s the topic of this book.

From the book Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark, © 2017 by Max Tegmark. Published by arrangement with Alfred A. Knopf, an imprint of The Knopf Doubleday Publishing Group, a division of Penguin Random House LLC.

A breakthrough new method for 3D-printing living tissues

The 3D droplet bioprinter, developed by the Bayley Research Group at Oxford, producing millimeter-sized tissues (credit: Sam Olof/ Alexander Graham)

Scientists at the University of Oxford have developed a radical new method of 3D-printing laboratory-grown cells that can form complex living tissues and cartilage to potentially support, repair, or augment diseased and damaged areas of the body.

Printing high-resolution living tissues is currently difficult because the cells often move within printed structures and can collapse on themselves. So the team devised a new way to produce tissues in protective nanoliter droplets wrapped in a lipid (oil-compatible) coating that is assembled, layer-by-layer, into living cellular structures.

3D-printing cellular constructs. (left) Schematic of cell printing. The dispensing nozzle ejects cell-containing bioink droplets into a lipid-containing oil. The droplets are positioned by the programmed movement of the oil container. The droplets cohere through the formation of droplet interface lipid bilayers. (center) A related micrograph of a patterned cell junction, containing two cell types, printed as successive layers of 130-micrometer droplets ejected from two glass nozzles. (right) A confocal fluorescence micrograph of about 700 printed human embryonic kidney cells under oil at a density of 40 million cells per milliliter (scale bar = 150 micrometers). (credit: Alexander D. Graham et al./Scientific Reports)

This new method improves the survival rate of the individual cells and allows for building each tissue one drop at a time to mimic the behaviors and functions of the human body. The patterned cellular constructs, once fully grown, can mimic or potentially enhance natural tissues.

“We were aiming to fabricate three-dimensional living tissues that could display the basic behaviors and physiology found in natural organisms,” explained Alexander Graham, PhD, lead author and 3D Bioprinting Scientist at OxSyBio (Oxford Synthetic Biology).*

“To date, there are limited examples of printed tissues [that] have the complex cellular architecture of native tissues. Hence, we focused on designing a high-resolution cell printing platform, from relatively inexpensive components, that could be used to reproducibly produce artificial tissues with appropriate complexity from a range of cells, including stem cells.”

A confocal micrograph of an artificial tissue containing two populations of human embryonic kidney cells (HEK-293T) printed in the form of an arborized structure within a cube (credit: Sam Olof/Alexander Graham)

The researchers hope that with further development, the materials could have a wide impact on healthcare worldwide and bypass clinical animal testing. The scientists plan to develop new complementary printing techniques that allow for a wider range of living and hybrid materials, producing tissues at industrial scale.

“We believe it will be possible to create personalized treatments by using cells sourced from patients to mimic or enhance natural tissue function,” said Sam Olof, PhD, Chief Technology Officer at OxSyBio. “In the future, 3D bio-printed tissues may also be used for diagnostic applications — for example, for drug or toxin screening.”

The study results were published August 1 in the open-access journal Scientific Reports.


Abstract of High-Resolution Patterned Cellular Constructs by Droplet-Based 3D Printing

Bioprinting is an emerging technique for the fabrication of living tissues that allows cells to be arranged in predetermined three-dimensional (3D) architectures. However, to date, there are limited examples of bioprinted constructs containing multiple cell types patterned at high-resolution. Here we present a low-cost process that employs 3D printing of aqueous droplets containing mammalian cells to produce robust, patterned constructs in oil, which were reproducibly transferred to culture medium. Human embryonic kidney (HEK) cells and ovine mesenchymal stem cells (oMSCs) were printed at tissue-relevant densities (10⁷ cells mL⁻¹) and a high droplet resolution of 1 nL. High-resolution 3D geometries were printed with features of ≤200 μm; these included an arborised cell junction, a diagonal-plane junction and an osteochondral interface. The printed cells showed high viability (90% on average) and HEK cells within the printed structures were shown to proliferate under culture conditions. Significantly, a five-week tissue engineering study demonstrated that printed oMSCs could be differentiated down the chondrogenic lineage to generate cartilage-like structures containing type II collagen.