IBM’s Watson shown to enhance human-computer co-creativity, support biologically inspired design

Using Watson for enhancing human-computer co-creativity (credit: Georgia Tech)

Georgia Institute of Technology researchers, working with student teams, trained a cloud-based version of IBM’s Watson called the Watson Engagement Advisor to provide answers to questions about biologically inspired design (biomimetics), a design paradigm that uses biological systems as analogues for inventing technological systems.

Ashok Goel, a professor at Georgia Tech’s School of Interactive Computing, conducts research on computational creativity. In an experiment, he used this version of Watson as an “intelligent research assistant” to support teaching about biologically inspired design and computational creativity in the Georgia Tech CS4803/8803 class on Computational Creativity in Spring 2015. Goel found that Watson’s ability to retrieve natural-language information could allow a novice to quickly “train up” on complex topics and better determine whether an idea or hypothesis is worth pursuing.

An intelligent research assistant

In the form of a class project, the students fed Watson several hundred biology articles from Biologue, an interactive biology repository, and 1,200 question-answer pairs. The teams then posed questions to Watson about the research it had “learned” regarding big design challenges in areas such as engineering, architecture, systems, and computing.
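Watson’s internal pipeline is proprietary, but the basic idea behind answering questions from a trained corpus — ranking stored passages against a query — can be sketched with a simple bag-of-words retriever. The corpus snippets below are hypothetical stand-ins for Biologue articles, and TF-IDF cosine similarity is only a stand-in for Watson’s far more sophisticated scoring:

```python
import math
from collections import Counter

# Toy corpus standing in for the Biologue articles (hypothetical snippets).
corpus = [
    "seagulls filter salt from seawater through special glands",
    "desert plants use fibrous insulation to regulate temperature",
    "gecko feet adhere to surfaces via van der waals forces",
]

# Inverse document frequency over the corpus: rarer words weigh more.
docs = [d.split() for d in corpus]
vocab = {w for d in docs for w in d}
idf = {w: math.log(len(docs) / sum(w in d for d in docs)) for w in vocab}

def vectorize(text, idf):
    # TF-IDF vector as a sparse dict; unknown words get weight 0.
    tf = Counter(text.split())
    return {w: c * idf.get(w, 0.0) for w, c in tf.items()}

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer(question, top_k=1):
    # Rank every stored passage against the question; return the best matches.
    qv = vectorize(question.lower(), idf)
    scored = sorted(((cosine(qv, vectorize(d, idf)), d) for d in corpus),
                    reverse=True)
    return scored[:top_k]

score, passage = answer("how do animals filter salt from seawater")[0]
print(passage)  # the seagull passage ranks highest
```

In this toy version, "train up" amounts to adding passages to `corpus`; the 1,200 question-answer pairs the students supplied would, in a real system, be used to tune the scoring rather than being looked up directly.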

Examples of questions:

“How do you make a better desalination process for consuming sea water?” (Animals have a variety of answers for this, such as how seagulls filter out seawater salt through special glands.)

“How can manufacturers develop better solar cells for long-term space travel?” One answer: Replicate how plants in harsh climates use high-temperature fibrous insulation material to regulate temperature.

Watson effectively acted as an intelligent sounding board to steer students through what would otherwise be a daunting task of parsing a wide volume of research that may fall outside their expertise.

This version of Watson also prompts users with alternate ways to ask questions for better results. Those results are packaged as a “treetop” where each answer is a “leaf” that varies in size based on its weighted importance. This was intended to allow the average user to navigate results more easily on a given topic.
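The “leaf sized by weighted importance” idea maps directly onto a small sizing rule. This sketch (the answer labels and confidence values are made up for illustration) sizes each leaf so that its drawn area, not its radius, is proportional to the answer’s weight, which keeps visual prominence linear in importance:

```python
import math

# Hypothetical answer confidences returned for one query.
answers = {"salt glands": 0.62, "reverse osmosis membranes": 0.21,
           "mangrove roots": 0.17}

def leaf_radii(weights, max_radius=40.0):
    # Area proportional to weight => radius proportional to sqrt(weight).
    total = sum(weights.values())
    return {k: max_radius * math.sqrt(w / total) for k, w in weights.items()}

for name, r in leaf_radii(answers).items():
    print(f"{name}: radius {r:.1f}px")
```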

Results from training the Watson AI system were packaged as a “treetop” where each answer is a “leaf” that varies in size based on its weighted importance. Each leaf is the starting point for a Q&A with Watson. (credit: Georgia Tech)

“Imagine if you could ask Google a complicated question and it immediately responded with your answer — not just a list of links to open manually,” says Goel. “That’s what we did with Watson. Researchers are provided a quickly digestible visual map of the concepts relevant to the query and the degree to which they are relevant. We were able to add more semantic and contextual meaning to Watson to give some notion of a conversation with the AI.”

Georgia Tech’s Watson Engagement Advisor (credit: Georgia Tech)

Goel believes this approach to using Watson could assist professionals in a variety of fields by allowing them to ask questions and receive answers as quickly as in a natural conversation. He plans to investigate other areas with Watson such as online learning and healthcare.

The work was presented at the Association for the Advancement of Artificial Intelligence (AAAI) 2015 Fall Symposium on Cognitive Assistance in Government, Nov. 12–14, in Arlington, Va., and was published in the Proceedings of the AAAI 2015 Fall Symposium on Cognitive Assistance (open access).


Abstract of Using Watson for Enhancing Human-Computer Co-Creativity

We describe an experiment in using IBM’s Watson cognitive system to teach about human-computer co-creativity in a Georgia Tech Spring 2015 class on computational creativity. The project-based class used Watson to support biologically inspired design, a design paradigm that uses biological systems as analogues for inventing technological systems. The twenty-four students in the class self-organized into six teams of four students each, and developed semester-long projects that built on Watson to support biologically inspired design. In this paper, we describe this experiment in using Watson to teach about human-computer co-creativity, present one project in detail, and summarize the remaining five projects. We also draw lessons on building on Watson for (i) supporting biologically inspired design, and (ii) enhancing human-computer co-creativity.

Beyond telomerase: another enzyme discovered critical to maintaining telomere length

ATM inhibition shortens telomeres and ATM activation elongates telomeres. (credit: Stella Suyong Lee et al./Cell Reports)

Johns Hopkins researchers report they have uncovered the role of another enzyme crucial to maintaining telomere length, in addition to telomerase, the enzyme discovered in 1984.

The researchers say the new test they used to find the enzyme should speed discovery of other proteins and processes that determine telomere length. Shortened telomeres have been implicated in aging and in diseases as diverse as lung and bone marrow disorders, while overly long telomeres are linked to cancer.

Their results appear in an open-access paper in the Nov. 24 issue of Cell Reports.

“We’ve known for a long time that telomerase doesn’t tell the whole story of why chromosomes’ telomeres are a given length, but with the tools we had, it was difficult to figure out which proteins were responsible for getting telomerase to do its work,” says Carol Greider, Ph.D., the Daniel Nathans Professor and Director of Molecular Biology and Genetics in the Johns Hopkins Institute for Basic Biomedical Sciences. Greider won the 2009 Nobel Prize in Physiology or Medicine for the discovery of telomerase.

Figuring out exactly what’s needed to lengthen telomeres has broad health implications, Greider notes. Telomeres naturally shorten each time DNA is copied in preparation for cell division, so cells need a well-tuned process to keep adding the right number of building blocks back onto telomeres over an organism’s lifetime.
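The balance the article describes — steady loss at each division offset by telomerase additions — can be illustrated with a toy simulation. All parameter values below are illustrative assumptions (loss and elongation amounts per division, probability of telomerase acting), not measurements from the study:

```python
import random

random.seed(0)

# Illustrative parameters only -- not measured values from the study.
LOSS_PER_DIVISION = 75   # base pairs lost to the end-replication problem
ELONGATION = 80          # base pairs telomerase adds when it acts
START_BP = 10_000        # starting telomere length

def simulate(telomere_bp=START_BP, divisions=200, p_telomerase=0.9):
    for _ in range(divisions):
        telomere_bp -= LOSS_PER_DIVISION        # shortening on every copy
        if random.random() < p_telomerase:      # telomerase sometimes engages
            telomere_bp += ELONGATION
        if telomere_bp <= 0:                    # critically short telomere
            return 0
    return telomere_bp

print(simulate(p_telomerase=0.9))  # length stays roughly maintained
print(simulate(p_telomerase=0.0))  # without telomerase, steady shortening
```

With telomerase active most of the time the length hovers near its starting value; with telomerase blocked (as in the ATM experiments, though by a different mechanism) the telomere erodes toward zero — which is why the old assay needed months of divisions before a length difference became measurable.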

But until now, researchers have been saddled with a limiting and time-consuming test for whether a given protein is involved in maintaining telomere length: it first requires blocking a suspected protein’s action in lab-grown cells, then getting the cells to grow and divide for about three months so that detectable differences in telomere length can emerge. Besides being slow, the test could not be used at all for proteins whose loss would kill the cells before the three-month mark.

ATM kinase found needed to lengthen telomeres

Telomeres glow at the ends of chromosomes (credit: Hesed Padilla-Nash and Thomas Ried/NIH)

For their trial run of the new test, dubbed “addition of de novo initiated telomeres (ADDIT),” Greider’s group examined an enzyme called ATM kinase. “ATM kinase was known to be involved in DNA repair, but there were conflicting reports about whether it had a role in telomere lengthening,” says Greider.

Her team blocked the enzyme in lab-grown mouse cells, and used ADDIT to find that it was indeed needed to lengthen telomeres. They verified the result using the old, three-month-long telomere test, and got the same result.

The team also found that in normal mouse cells, a drug that blocks an enzyme called PARP1 would activate ATM kinase and spur telomere lengthening. This finding offers a proof of principle for drug-based telomere elongation to treat short-telomere diseases, such as bone marrow failure, Greider says — but she cautions that the PARP1 inhibitor drug itself doesn’t have the same telomere-elongating effect in human cells as it does in mouse cells.

Greider’s group plans to use ADDIT to find out more about the telomere-lengthening biochemical pathway that ATM kinase is a part of, as well as other pathways that help determine telomere length.

“The potential applications are very exciting,” says graduate student Stella Suyong Lee, who conducted the research, which took nearly five years. “Ultimately ADDIT can help us understand how cells strike a balance between aging and the uncontrolled cell growth of cancer, which is very intriguing.”


Abstract of ATM Kinase Is Required for Telomere Elongation in Mouse and Human Cells

Short telomeres induce a DNA damage response, senescence, and apoptosis, thus maintaining telomere length equilibrium is essential for cell viability. Telomerase addition of telomere repeats is tightly regulated in cells. To probe pathways that regulate telomere addition, we developed the ADDIT assay to measure new telomere addition at a single telomere in vivo. Sequence analysis showed telomerase-specific addition of repeats onto a new telomere occurred in just 48 hr. Using the ADDIT assay, we found that ATM is required for addition of new repeats onto telomeres in mouse cells. Evaluation of bulk telomeres, in both human and mouse cells, showed that blocking ATM inhibited telomere elongation. Finally, the activation of ATM through the inhibition of PARP1 resulted in increased telomere elongation, supporting the central role of the ATM pathway in regulating telomere addition. Understanding this role of ATM may yield new areas for possible therapeutic intervention in telomere-mediated disease.

Multi-layer nanoparticles glow when exposed to invisible near-infrared light

An artist’s rendering shows the layers of a new, onion-like nanoparticle whose specially crafted layers enable it to efficiently convert invisible near-infrared light to higher-energy blue and UV light (credit: Kaiheng Wei)

A new onion-like nanoparticle developed at the University at Buffalo, State University of New York, could open new frontiers in bioimaging, solar-energy harvesting, and light-based security techniques.

The particle’s innovation lies in its layers: a coating of organic dye, a neodymium-containing shell, and a core that incorporates ytterbium and thulium. Together, these strata convert invisible near-infrared light to higher-energy blue and UV light with record-high efficiency.

A transmission electron microscopy image of the new nanoparticles, which convert invisible near-infrared light to higher-energy blue and UV light with high efficiency. Each particle is about 50 nanometers in diameter. (credit: Institute for Lasers, Photonics and Biophotonics, University at Buffalo)

Light-emitting nanoparticles placed by a surgeon inside the body could provide high-contrast images of areas of interest. Nanoparticle-infused inks could also be incorporated into currency designs, using ink that is invisible to the naked eye but glows blue when hit by a low-energy near-infrared laser pulse — which would be very difficult for counterfeiters to reproduce.

The researchers say the nanoparticle is about 100 times more efficient at “upconverting” [increasing the frequency of] light than similar nanoparticles.

Peeling back the layers

Energy-cascaded upconversion (credit: Guanying Chen et al./Nano Letters)

Converting low-energy light to light of higher energies is difficult: it involves capturing two or more photons from a low-energy light source and combining their energy to form a single, higher-energy photon. Each of the three layers of this onionesque nanoparticle fulfills a unique function:

  • The outermost layer is a coating of organic dye. This dye is adept at absorbing photons from low-energy near-infrared light sources. It acts as an “antenna” for the nanoparticle, harvesting light and transferring energy inside, Ohulchanskyy says.
  • The next layer is a neodymium-containing shell. This layer acts as a bridge, transferring energy from the dye to the particle’s light-emitting core.*
  • Inside the light-emitting core, ytterbium and thulium ions work in concert. The ytterbium ions draw energy into the core and pass the energy on to the thulium ions, which have special properties that enable them to absorb the energy of three, four or five photons at once, and then emit a single higher-energy photon of blue and UV light.
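The payoff of combining photons can be illustrated with simple photon-energy arithmetic using the Planck relation E = hc/λ. The numbers below are a lossless upper bound of my own construction, not values from the paper: combining n photons at 800 nm could at best yield a photon at 800/n nm; real cascades emit at longer (lower-energy) wavelengths because some energy is dissipated at each transfer step.

```python
# Planck relation: E = h*c / wavelength. Combining n photons multiplies the
# energy, so the shortest possible emission wavelength is lambda_in / n.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy_ev(wavelength_nm):
    # Energy of a single photon, converted from joules to electron-volts.
    return H * C / (wavelength_nm * 1e-9) / 1.602e-19

def ideal_upconverted_wavelength(wavelength_nm, n_photons):
    # Lossless limit: n combined photons => wavelength divided by n.
    return wavelength_nm / n_photons

for n in (3, 4, 5):
    lam = ideal_upconverted_wavelength(800, n)
    print(f"{n} photons at 800 nm -> at most {lam:.0f} nm "
          f"({photon_energy_ev(lam):.2f} eV)")
```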

The research was published online in Nano Letters on Oct. 21. It was led by the Institute for Lasers, Photonics, and Biophotonics at UB, and the Harbin Institute of Technology in China, with contributions from the Royal Institute of Technology in Sweden, Tomsk State University in Russia, and the University of Massachusetts Medical School.

* The neodymium-containing layer is necessary for transferring energy efficiently from dye to core. When molecules or ions in a material absorb a photon, they enter an “excited” state from which they can transfer energy to other molecules or ions. The most efficient transfer occurs between molecules or ions whose excited states require a similar amount of energy to obtain, but the dye and ytterbium ions have excited states with very different energies. So the team added neodymium — whose excited state lies between those of the dye and the ytterbium ions — to act as a bridge between the two, creating a “staircase” for the energy to travel down to reach the emitting thulium ions.


Abstract of Energy-Cascaded Upconversion in an Organic Dye-Sensitized Core/Shell Fluoride Nanocrystal

Lanthanide-doped upconversion nanoparticles hold promises for bioimaging, solar cells, and volumetric displays. However, their emission brightness and excitation wavelength range are limited by the weak and narrowband absorption of lanthanide ions. Here, we introduce a concept of multistep cascade energy transfer, from broadly infrared-harvesting organic dyes to sensitizer ions in the shell of an epitaxially designed core/shell inorganic nanostructure, with a sequential nonradiative energy transfer to upconverting ion pairs in the core. We show that this concept, when implemented in a core–shell architecture with suppressed surface-related luminescence quenching, yields multiphoton (three-, four-, and five-photon) upconversion quantum efficiency as high as 19% (upconversion energy conversion efficiency of 9.3%, upconversion quantum yield of 4.8%), which is about ∼100 times higher than typically reported efficiency of upconversion at 800 nm in lanthanide-based nanostructures, along with a broad spectral range (over 150 nm) of infrared excitation and a large absorption cross-section of 1.47 × 10–14 cm2 per single nanoparticle. These features enable unprecedented three-photon upconversion (visible by naked eye as blue light) of an incoherent infrared light excitation with a power density comparable to that of solar irradiation at the Earth surface, having implications for broad applications of these organic–inorganic core/shell nanostructures with energy-cascaded upconversion.

New technology senses colors in the infrared spectrum

A scanning electron microscope image showing a surface coated with silver nanocubes on a gold surface to image near-infrared light (credit: Maiken Mikkelsen and Gleb Akselrod, Duke University)

Duke University scientists have invented a technology that can identify and image different wavelengths of the infrared spectrum.

The fabrication technique for the system is easily scalable, can be applied to any surface geometry, and costs much less than current light-absorption technologies, according to the researchers. Once adopted, the technique would allow advanced infrared imaging systems to be produced faster and cheaper than today’s counterparts and with higher sensitivity.

The visible-light spectrum, with wavelengths of about 400 to 700 nanometers. The near-infrared region starts just above the red region. (credit: Wikipedia)

Using nanoparticles to absorb specific wavelengths

The technology relies on a physics phenomenon called plasmonics to absorb or reflect specific wavelengths, similar to how stained-glass windows are created (see “Crafting color coatings from nanometer-thick layers of gold and germanium”).

A curved object covered with a coating of 100-nanometer silver cubes that absorbs all red light, leaving the object with a green tint (credit: Maiken Mikkelsen and Gleb Akselrod, Duke University)

The researchers first coat a surface with a thin film of gold through a common process like evaporation. They then put down a few-nanometer-thin layer of polymer, followed by a coating of silver cubes, each one about 100 nanometers (billionths of a meter) in size.

When light strikes the new engineered surface, a specific color gets trapped on the surface of the nanocubes in packets of energy called plasmons, and eventually dissipates into heat. By controlling the thickness of the polymer film and the size and number of silver nanocubes, the coating can be tuned to absorb different wavelengths of light from the visible spectrum (at 650 nm) to the near infrared (up to 1420 nm).

“By borrowing well-known techniques from chemistry and employing them in new ways, we were able to obtain significantly better resolution than with a million-dollar state-of-the-art electron beam lithography system,” said Maiken H. Mikkelsen, the Nortel Networks Assistant Professor of Electrical & Computer Engineering and Physics at Duke University. “This allowed us to create a coating that can fine-tune the absorption spectra with a level of control that hasn’t been possible previously.”

Coating photodetectors to absorb specific wavelengths of near-infrared light would allow novel and cheap cameras to be made that could capture and discriminate different wavelengths.

A better “tricorder” camera

The researchers note in the paper that “plasmonic resonances could be moved deeper into the infrared spectrum by using larger colloidal nanoparticles with larger metallic facets,” including wavelengths in the thermal infrared spectrum.

Colors shown on current thermal infrared-imaging displays are actually based on a simple pseudocolor scheme in which the color displayed is arbitrary — it only represents the amount of thermal radiation (infrared light) that the camera captures, not its wavelength in the electromagnetic spectrum.

That means the technology could be used in a variety of other applications, such as masking the heat signatures of objects such as stealth aircraft (see “Plasmonic cloaking”) or creating a sophisticated “tricorder” camera that shows a person’s temperature at different mid-infrared (thermal) wavelengths (see “Graphene could take night-vision thermal imagers beyond ‘Predator’”).

The study was published online Nov. 9 in the journal Advanced Materials and was supported by the Air Force Office of Scientific Research.


Abstract of Large-Area Metasurface Perfect Absorbers from Visible to Near-Infrared

An absorptive metasurface based on film-coupled colloidal silver nanocubes is demonstrated. The metasurfaces are fabricated using simple dip-coating methods and can be deposited over large areas and on arbitrarily shaped objects. The surfaces show nearly complete absorption, good off-angle performance, and the resonance can be tuned from the visible to the near-infrared.

‘Golden window’ wavelength range for optimal deep-brain near-infrared imaging determined

Rayleigh scattering causes the reddening of the sun at sunset — an example of how longer wavelengths (yellow and red compared to blue in blue sky) penetrate matter (dust at sunset) better. (credit: Wikipedia/CC)

Researchers at The City College of New York (CCNY) have determined the optimal wavelengths for bioimaging of the brain at longer near-infrared wavelengths, which permit deeper imaging.

Near-infrared (NIR) radiation has been used for one- and two-photon fluorescence imaging at near-infrared wavelengths of 650–950 nm (nanometers) for deep brain imaging, but it is limited in penetration depth. (The CCNY researchers dubbed this Window I, also known as the therapeutic window.)

Longer infrared wavelengths penetrate deeper, but imaging with them is limited by Rayleigh and Mie scattering, which blur images, and by absorption, which reduces the number of available photons (brightness). These longer wavelengths have also been largely overlooked because of the lack of suitable semiconductor imaging detectors and femtosecond laser sources.

The new CCNY study, led by biomedical engineer Lingyan Shi, studied three new optical windows in the near-infrared (NIR) region, in addition to Window I, for high-resolution deep brain imaging.

Their study built on a prior CCNY study* in 2014 that used detectors based on indium gallium arsenide (InGaAs) or indium antimonide (InSb) and a femtosecond IMRA fiber laser** as the excitation source to image rat brain tissue in window II (1,100–1,350 nm), window III (1,600–1,870 nm), and window IV (centered at 2,200 nm).

Absorbance for four windows at four brain-tissue thicknesses (credit: Lingyan Shi et al./J. Biophotonics)

The new CCNY research investigated the optimal wavelength band and the optical properties of brain tissue in the NIR, including the total attenuation coefficient (μt), absorption coefficient (μa), reduced scattering coefficient (μs′), and scattering anisotropy coefficient (g), in these optical windows. The purpose of the study was to determine an optimal optical window in the NIR range of 650 nm to 2,500 nm that reduces scattering, achieves optimal absorption, and reduces noise for deep-brain tissue imaging.
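The study’s transmittance-versus-thickness measurements follow, in the simplest single-pass picture, the Beer-Lambert law for unscattered (“ballistic”) photons: T = exp(−μt·d). The sketch below uses that law with placeholder attenuation coefficients of my own invention — not the paper’s measured values — chosen only so that window III comes out most transmissive, matching the reported ordering:

```python
import math

# Illustrative attenuation coefficients (per mm) -- placeholders, not the
# measured values from the study. Lower mu_t => deeper light penetration.
MU_T = {
    "I (650-950 nm)":    2.2,
    "II (1100-1350 nm)": 1.5,
    "III (1600-1870 nm)": 1.1,   # the "Golden Window"
    "IV (~2200 nm)":     1.4,
}

def ballistic_transmittance(mu_t_per_mm, thickness_mm):
    # Beer-Lambert law for unscattered photons: T = exp(-mu_t * d).
    return math.exp(-mu_t_per_mm * thickness_mm)

for window, mu in MU_T.items():
    T = ballistic_transmittance(mu, 1.0)   # through 1 mm of tissue
    print(f"Window {window}: T = {100 * T:.1f}%")
```

The exponential form is why the measured differences between windows grow rapidly with tissue thickness, and why the ordering III > II > IV > I matters most for the deepest targets.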

Golden Window: optimal wavelength range

Peak transmittance T (%) measured with each optical window for brain tissues with four thicknesses. Window III had the highest transmittance percentage for each of the thicknesses, followed by windows II and IV. (credit: Lingyan Shi et al./J. Biophotonics)

The researchers found that the “Golden Window” (1600 nm to 1870 nm) is an optimal wavelength range for light penetration in brain tissue, followed by Windows II and IV.

“This is a first for brain imaging and proved theoretically and experimentally that deep imaging of the brain is possible using light at longer wavelengths. It demonstrates these windows’ potential for deeper brain tissue imaging due to the reduction of scattering that causes blurring,” said Shi, a research associate in City College’s Institute for Ultrafast Spectroscopy and Lasers, and the biology department.

Published in the Journal of Biophotonics, her study sheds light on the development of the next generation of microscopy imaging techniques, in which the “Golden Window” may be utilized for high-resolution deeper brain imaging. The next step in the research is in vivo imaging in mice using Golden Window wavelengths.

Shi’s team included Distinguished Professor of Physics Robert R. Alfano and Adrian Rodriguez-Contreras, an assistant professor of biology. Shi earned a Ph.D. in biomedical engineering from CCNY’s Grove School of Engineering in 2014.

* L. A. Sordillo, Y. Pu, S. Pratavieira, Y. Budansky, and R. R. Alfano, J. Biomed. Opt. 19, 056004 (2014) [link].

** Excitation wavelength 1680 nm, power > 200 mW, pulse width 100 fs, and 50 MHz repetition rate.


Abstract of Transmission in near-infrared optical windows for deep brain imaging

Near-infrared (NIR) radiation has been employed using one- and two-photon excitation of fluorescence imaging at wavelengths 650–950 nm (optical window I) for deep brain imaging; however, longer wavelengths in NIR have been overlooked due to a lack of suitable NIR-low band gap semiconductor imaging detectors and/or femtosecond laser sources. This research introduces three new optical windows in NIR and demonstrates their potential for deep brain tissue imaging. The transmittances are measured in rat brain tissue in the second (II, 1,100–1,350 nm), third (III, 1,600–1,870 nm), and fourth (IV, centered at 2,200 nm) NIR optical tissue windows. The relationship between transmission and tissue thickness is measured and compared with the theory. Due to a reduction in scattering and minimal absorption, window III is shown to be the best for deep brain imaging, and windows II and IV show similar but better potential for deep imaging than window I.

New electron microscopy method sculpts 3-D structures with one-nanometer features

ORNL researchers used a new scanning transmission electron microscopy technique to sculpt 3-D nanoscale features in a complex oxide material. (credit: Department of Energy’s Oak Ridge National Laboratory)

Oak Ridge National Laboratory researchers have developed a way to build precision-sculpted 3-D strontium titanate nanostructures as small as one nanometer, using scanning transmission electron microscopes, which are normally used for imaging.

The technique could find uses in fabricating structures for functional nanoscale devices such as microchips. The structures grow epitaxially (in perfect crystalline alignment), which ensures that the same electrical and mechanical properties extend throughout the whole material, with more pronounced control over properties. The process can also be observed with atomic resolution.

The use of a scanning transmission electron microscope, which passes an electron beam through a bulk material, sets the approach apart from lithography techniques, which only pattern or manipulate a material’s surface. Combined with electron beam path control, the process could lead to large-scale implementation of bulk atomic-level fabrication as a new enabling tool of nanoscience and technology, providing a bottom-up, atomic-level complement to 3D printing, the researchers say.

“We’re using fine control of the beam to build something inside the solid itself,” said ORNL’s Stephen Jesse. “We’re making transformations that are buried deep within the structure. It would be like tunneling inside a mountain to build a house.”

The technique also offers a shortcut to researchers interested in studying how materials’ characteristics change with thickness. Instead of imaging multiple samples of varying widths, scientists could use the microscopy method to add layers to the sample and simultaneously observe what happens.

The study is published in the journal Small.


Abstract of Atomic-Level Sculpting of Crystalline Oxides: Toward Bulk Nanofabrication with Single Atomic Plane Precision

The atomic-level sculpting of 3D crystalline oxide nanostructures from metastable amorphous films in a scanning transmission electron microscope (STEM) is demonstrated. Strontium titanate nanostructures grow epitaxially from the crystalline substrate following the beam path. This method can be used for fabricating crystalline structures as small as 1–2 nm and the process can be observed in situ with atomic resolution. The fabrication of arbitrary shape structures via control of the position and scan speed of the electron beam is further demonstrated. Combined with broad availability of the atomic resolved electron microscopy platforms, these observations suggest the feasibility of large scale implementation of bulk atomic-level fabrication as a new enabling tool of nanoscience and technology, providing a bottom-up, atomic-level complement to 3D printing.

Blood-brain barrier opened non-invasively for the first time in humans, using focused ultrasound

Opening up the blood-brain barrier to deliver drugs (credit: Focused Ultrasound Foundation)

The blood-brain barrier has been non-invasively opened in a human patient for the first time. A team at Sunnybrook Health Sciences Centre in Toronto used focused ultrasound to temporarily open the blood-brain barrier (BBB), allowing for effective delivery of chemotherapy into a patient’s malignant brain tumor.

The team infused the chemotherapy agent doxorubicin, along with tiny gas-filled bubbles, into the bloodstream of a patient with a brain tumor. They then applied focused ultrasound to areas in the tumor and surrounding brain, causing the bubbles to vibrate, loosening the tight junctions of the cells comprising the BBB, and allowing high concentrations of the chemotherapy to enter targeted tissues.

This patient treatment is part of a pilot study of up to 10 patients to establish the feasibility, safety, and preliminary efficacy of focused ultrasound for temporarily opening the blood-brain barrier to deliver chemotherapy to brain tumors. The Focused Ultrasound Foundation is funding the trial through its Cornelia Flagg Keller Memorial Fund for Brain Research. Based on pre-clinical studies, a pilot clinical trial using focused ultrasound to treat Alzheimer’s disease is also being organized.

Dr. Kullervo Hynynen, senior scientist at the Sunnybrook Research Institute, has been performing similar pre-clinical studies for about a decade. In 2012, his team was able to bypass the BBB of a rat model non-invasively (see Bypassing the blood-brain barrier with MRI and ultrasound).

Previous methods were invasive, requiring an operation, such as implanting a mucosal graft in the nose (see A drug-delivery technique to bypass the blood-brain barrier and Researchers bypass the blood-brain barrier, widening treatment options for neurodegenerative and central nervous system disease) or inserting needle electrodes into the diseased tissue and applying multiple bursts of pulsed electric energy (see Blood-brain-barrier disruption with high-frequency pulsed electric fields).

Fighting disease

The researchers suggest that focused ultrasound could also be used to deliver other types of drugs, DNA-loaded nanoparticles, viral vectors, and antibodies to the brain to treat a range of neurological conditions, including various types of brain tumors, Parkinson’s, Alzheimer’s and some psychiatric diseases.

For example, the temporary opening of the blood-brain barrier appears to facilitate the brain’s clearance of a key pathologic protein related to Alzheimer’s and improves cognitive function, the researchers found. And a recent study at the Queensland Brain Institute in Australia demonstrated that opening the blood-brain barrier with focused ultrasound reduced brain plaques and improved memory in a mouse model of Alzheimer’s disease.


Focused Ultrasound Foundation


Abstract of Scanning ultrasound removes amyloid-β and restores memory in an Alzheimer’s disease mouse model

Transgenic mice with increased amyloid-β (Aβ) production show several aspects of Alzheimer’s disease, including Aβ deposition and memory impairment. By repeatedly treating these Aβ-forming mice with scanning ultrasound, Leinenga and Götz now demonstrate that Aβ is removed and memory is restored as revealed by improvement in three memory tasks. These improvements were achieved without the use of any therapeutic agent, and the scanning ultrasound treatment did not induce any apparent damage to the mouse brain. The authors then showed that scanning ultrasound activated resident microglial cells that took up Aβ into their lysosomes. These findings suggest that repeated scanning ultrasound may be a noninvasive method with potential for treating Alzheimer’s disease.

Disney Research-CMU design tool helps novices design 3-D-printable robotic creatures

Digital designs for robotic creatures are shown on the left and the physical prototypes produced via 3-D printing are on the right (credit: Disney Research, Carnegie Mellon University)

Now you can design and build your own customized walking robot using a 3-D printer and off-the-shelf servo motors, with the help of a new DIY design tool developed by Disney Research and Carnegie Mellon University.

You can specify the shape, size, and number of legs for your robotic creature, using intuitive editing tools to interactively explore design alternatives. The system takes over much of the non-intuitive, tedious task of planning the robot’s motion, ensuring that your design can move the way you want without falling down. You can also alter your creature’s gait as desired.

Six robotic creatures designed with the Disney Research-CMU interactive design system: one biped, four quadrupeds, and one five-legged robot (credit: Disney Research, Carnegie Mellon University)

“Progress in rapid manufacturing technology is making it easier and easier to build customized robots, but designing a functioning robot remains a difficult challenge that requires an experienced engineer,” said Markus Gross, vice president of research for Disney Research. “Our new design system can bridge this gap and should be of great interest to technology enthusiasts and the maker community at large.”

The research team presented the system at SIGGRAPH Asia 2015, the ACM Conference on Computer Graphics and Interactive Techniques, in Kobe, Japan.

Design viewports

The design interface features two viewports: one that lets you edit the robot’s structure and motion and a second that displays how those changes would likely alter the robot’s behavior.

You can load a skeletal description of the robot, and the system then creates an initial geometry and places a motor at each joint position. You can then edit the robot’s structure, adding or removing motors, or adjusting their position and orientation.

The researchers have developed an efficient optimization method that uses an approximate dynamics model to generate stable walking motions for robots with varying numbers of legs. In contrast to conventional methods that can require several minutes of computation time to generate motions, the process takes just a few seconds, enhancing the interactive nature of the design tool.
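The paper’s optimization method is more sophisticated than this, but a minimal sketch of the kind of stability reasoning involved is a static check: does the robot’s center of mass, projected onto the ground, fall inside the support polygon formed by its stance feet? (This toy example is illustrative only; it is not the researchers’ actual algorithm, and the foot positions are invented.)

```python
# Toy static-stability check (not the Disney/CMU method): a legged robot is
# statically stable if its center of mass projects inside the convex support
# polygon formed by the feet currently on the ground.

def inside_convex_polygon(point, polygon):
    """Return True if `point` lies inside a counterclockwise convex polygon."""
    px, py = point
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # For CCW winding, the cross product must be non-negative at every edge.
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0:
            return False
    return True

# Four stance feet of a hypothetical quadruped, listed counterclockwise (meters).
feet = [(-0.1, -0.15), (0.1, -0.15), (0.1, 0.15), (-0.1, 0.15)]

print(inside_convex_polygon((0.0, 0.0), feet))   # True: COM over the support
print(inside_convex_polygon((0.3, 0.0), feet))   # False: robot would tip over
```

A full gait planner must also handle the dynamic case, where feet lift and the support polygon changes each step, which is why the approximate dynamics model described above matters.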

3-D printer-ready designs

Once the design process is complete, the system automatically generates 3-D geometry for all body parts, including connectors for the motors, which can then be sent to a 3-D printer for fabrication.

“In a test of creating two four-legged robots, it took only minutes to design these creatures, but hours to assemble them and days to produce parts on 3-D printers,” said Bernhard Thomaszewski, a research scientist at Disney Research. “It is both expensive and time-consuming to build a prototype — which underscores the importance of a design system [that] produces a final design without the need for building multiple physical iterations.”

The research team also included roboticists at ETH Zurich. For more information, see Interactive Design of 3D-Printable Robotic Creatures (open access) and visit the project website.


Disney Research | Interactive Design of 3D Printable Robotic Creatures

New ‘tricorder’ technology might be able to ‘hear’ tumors

Capacitive micromachined ultrasonic detectors used in the experiments, with a detail view of the front and back of one device (credit: Hao Nan et al./Applied Physics Letters)

Stanford electrical engineers have developed technology that uses a combination of microwaves and ultrasound to safely find buried plastic explosives and spot fast-growing tumors, creating a detector similar to the legendary Star Trek tricorder.

The work, led by Assistant Professor Amin Arbabian and Research Professor Pierre Khuri-Yakub, grows out of DARPA research designed to detect buried plastic explosives, but the researchers said the technology could also provide a new way to detect early-stage cancers.

The new work was spurred by a challenge posed by the Defense Advanced Research Projects Agency (DARPA), which sought a system to detect plastic explosives (improvised explosive devices, or IEDs) buried underground, which are currently invisible to metal detectors. The detection device could not touch the surface in question, so as not to trigger an explosion.

The engineers developed a system based on the principle that all materials expand and contract when heated, but not at identical rates. In a potential battlefield application, the microwaves would heat the suspect area, causing the muddy ground to absorb energy and expand, and thus squeeze the plastic. Pulsing the microwaves would then generate a series of ultrasound pressure waves that could be detected and interpreted to disclose the presence of buried plastic explosives.

Touchless ultrasound detection

Sound waves propagate differently in solids than in air, with a drastic transmission loss occurring when sound jumps from the solid to the air. The Stanford team compensated for this loss by building highly sensitive capacitive micromachined ultrasonic transducers (CMUTs) that can discern the weak ultrasound signals that jump from the solid, through the air, to the detector.
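The scale of that loss can be estimated from textbook acoustics: at normal incidence, the fraction of acoustic intensity transmitted across a boundary depends on the acoustic impedance mismatch between the two media. The impedance values below are standard assumed figures, not numbers from the Stanford paper.

```python
import math

# Back-of-the-envelope estimate of solid-to-air transmission loss.
# Normal-incidence intensity transmission: T = 4*Z1*Z2 / (Z1 + Z2)^2,
# where Z is acoustic impedance (assumed textbook values below).

Z_tissue = 1.48e6   # Pa*s/m, roughly water / soft tissue
Z_air    = 415.0    # Pa*s/m, air at room temperature

T = 4 * Z_tissue * Z_air / (Z_tissue + Z_air) ** 2
loss_db = 10 * math.log10(T)

print(f"Transmitted intensity fraction: {T:.2e}")   # ~1.1e-3
print(f"Transmission loss: {loss_db:.1f} dB")       # ~ -30 dB
```

Only about a tenth of a percent of the acoustic energy crosses the interface, which is why the CMUTs need such extreme sensitivity.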

Solving the technical challenges of detecting ultrasound after it left the ground gave the Stanford researchers the experience to take aim at their ultimate goal: Using the device in medical applications without touching the skin.

Schematic of the non-contact thermoacoustic detection setup. H is the thickness of the surrounding packaging material (set to between 1 and 3 cm of water or agarose), corresponding to the surrounding flesh-like tissue. T is the thickness of the embedded target (Rexolite, in this case, set to 4-mm layers with a target area of 4 square cm), corresponding to a tumor. In microwave-induced thermoacoustic imaging, the target absorbs a portion of the microwave electromagnetic energy (from the microwave signal generator) based on the target tissue’s dielectric properties, producing an ultrasonic wave that is then detected by the airborne capacitive micromachined ultrasonic transducers (CMUTs). The corresponding data is then captured for use in reconstructing the target image. (credit: Hao Nan et al./Applied Physics Letters)

Arbabian’s team used brief microwave pulses to heat a flesh-like material that had been implanted with a sample “target.” With the device held at a standoff distance of 30 cm, the material was heated by a mere thousandth of a degree, well within safety limits. Yet even that slight heating caused the material to expand and contract, which, in turn, created ultrasound waves that the Stanford team was able to detect to disclose the location of the 4-square-centimeter embedded target, all without touching the “flesh” — just like the Star Trek tricorder.
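It may seem surprising that a thousandth of a degree produces a detectable signal, but a rough order-of-magnitude estimate shows why it works. Under the standard thermoacoustic idealization (heating fast compared with acoustic relaxation), a rapid temperature rise dT launches an initial pressure of roughly p0 ≈ ρ·c²·β·dT. The water-like material properties below are assumed values for illustration, not the Stanford team’s numbers.

```python
# Rough thermoacoustic estimate (assumed water-like properties, not figures
# from the Stanford experiment): initial pressure p0 = rho * c^2 * beta * dT.

rho  = 1000.0    # kg/m^3, density of water
c    = 1480.0    # m/s, speed of sound in water
beta = 2.07e-4   # 1/K, thermal expansion coefficient of water near 20 C
dT   = 1e-3      # K, the "mere thousandth of a degree" of heating

p0 = rho * c**2 * beta * dT
print(f"Initial acoustic pressure: {p0:.0f} Pa")  # ~450 Pa
```

Hundreds of pascals is a strong acoustic signal at the source; the hard part, as the researchers note, is the roughly 30 dB lost when the wave crosses from the material into air.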

Prior medical research showed that tumors grow additional blood vessels to nourish their cancerous growth. Like wet ground, blood vessels absorb heat differently than surrounding tissue, so tumors should show up as ultrasound hotspots.

“We think we could develop instrumentation sufficiently sensitive to disclose the presence of tumors, and perhaps other health anomalies, much earlier than current detection systems, non-intrusively and with a handheld portable device,” Arbabian said.

The researchers believe that their microwave and ultrasound detection system will be practical and widely available within 10 to 15 years. It would be more portable and less expensive than other medical imaging devices such as MRI or CT, and safer than X-rays.

The experiments are detailed in Applied Physics Letters and were presented at the International Ultrasonics Symposium in Taipei, Taiwan.


Stanford University | Stanford Engineers Test Tricorder-Like Detector


Abstract of Non-contact thermoacoustic detection of embedded targets using airborne-capacitive micromachined ultrasonic transducers

A radio frequency (RF)/ultrasound hybrid imaging system using airborne capacitive micromachined ultrasonic transducers (CMUTs) is proposed for the remote detection of embedded objects in highly dispersive media (e.g., water, soil, and tissue). RF excitation provides permittivity contrast, and ultra-sensitive airborne-ultrasound detection measures thermoacoustic-generated acoustic waves that initiate at the boundaries of the embedded target, go through the medium-air interface, and finally reach the transducer. Vented wideband CMUTs interface to 0.18 μm CMOS low-noise amplifiers to provide displacement detection sensitivity of 1.3 pm at the transducer surface. The carefully designed vented CMUT structure provides a fractional bandwidth of 3.5% utilizing the squeeze-film damping of the air in the cavity.

Google open-sources its TensorFlow machine learning system

Google announced today that it will make its new second-generation “TensorFlow” machine-learning system open source.

That means programmers can now achieve some of what Google engineers have done, using TensorFlow — from speech recognition in the Google app, to Smart Reply in Inbox, to search in Google Photos, to reading a sign in a foreign language using Google Translate.

Google says TensorFlow is a highly scalable machine learning system — it can run on a single smartphone or across thousands of computers in datacenters. The idea is to accelerate research on machine learning, “or wherever researchers are trying to make sense of very complex data — everything from protein folding to crunching astronomy data.”
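At its core, TensorFlow expresses a computation as a dataflow graph: nodes are operations, edges carry tensor values, and the graph can be evaluated on whatever hardware is available. A toy sketch of that idea in plain Python (this is illustrative only and not TensorFlow’s actual API):

```python
# Toy sketch of the dataflow-graph idea behind TensorFlow (plain Python,
# not TensorFlow's real API): nodes are operations, edges carry values,
# and nothing is computed until the graph is evaluated.

class Node:
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def eval(self):
        # Recursively evaluate upstream nodes, then apply this node's op.
        args = [n.eval() for n in self.inputs]
        return self.op(*args)

def const(v):
    return Node(lambda: v)

def add(a, b):
    return Node(lambda x, y: x + y, a, b)

def mul(a, b):
    return Node(lambda x, y: x * y, a, b)

# Build the graph for (2 * 3) + 4, then run it.
graph = add(mul(const(2), const(3)), const(4))
print(graph.eval())  # 10
```

Separating graph construction from execution is what lets the same program be scheduled on a phone, a GPU, or thousands of datacenter machines.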

This blog post by Jeff Dean, Senior Google Fellow, and Rajat Monga, Technical Lead, provides a technical overview. “Our deep learning researchers all use TensorFlow in their experiments. Our engineers use it to infuse Google Search with signals derived from deep neural networks, and to power the magic features of tomorrow,” they note.