New electronic stethoscope and app diagnose lung conditions

1. Electronic stethoscope records patient’s breathing. 2. Lung sounds are sent to a phone or tablet and analyzed by an app. 3. Medical professionals can listen and see the results in real time from any location to diagnose the patient. (credit: Hiroshima University)

The traditional stethoscope has just been superseded by an electronic stethoscope and an app called Respiratory Sounds Visualizer, which can automatically classify lung sounds into five common diagnostic categories.* The system was developed by three physician researchers at Hiroshima University and Fukushima Medical University in collaboration with Pioneer Corporation.

Respiratory specialists recorded and classified the lung sounds of 878 patients, then used these classified recordings as templates to derive a mathematical formula that evaluates the length, frequency, and intensity of lung sounds. The resulting app can recognize the sound patterns consistent with five different respiratory diagnoses.

How the Respiratory Sounds Visualizer app works

Based on an analysis of the characteristics of respiratory sounds, the Respiratory Sounds Visualizer app generates this diagnostic chart. The total area in red represents the overall volume of sound, and the proportion of red around each line from the center to each vertex represents the proportion of the overall sound that each respiratory sound contributes. (credit: Shinichiro Ohshimo et al./Annals of Internal Medicine)

The app analyzes the lung sounds and maps them on a five-sided chart. Each of the five axes represents one of the five types of lung sounds. Doctors and patients can see the likely diagnosis based on the length of the axis covered in red.
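As a rough illustration, here is how such a five-axis chart could be drawn with matplotlib, assuming the app's analysis yields a score between 0 and 1 for each sound category (the scores below are illustrative placeholders, not the app's actual output):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical per-category scores (0..1) from the sound analysis;
# the real app's formula evaluates length, frequency, and intensity.
labels = ["normal", "wheeze", "rhonchus", "coarse crackle", "fine crackle"]
scores = [0.15, 0.70, 0.20, 0.10, 0.05]

angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
angles += angles[:1]           # repeat the first point to close the polygon
values = scores + scores[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values, color="red")
ax.fill(angles, values, color="red", alpha=0.4)  # red area ~ sound contribution
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_ylim(0, 1)
plt.show()
```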

A doctor working in less-than-ideal circumstances, such as a noisy emergency room or field hospital, could rely on the computer program to “hear” what might otherwise be missed, and the new system could also help doctors in training learn to recognize lung sounds.

The results from the computer program are simple to interpret and can be saved and shared electronically. In the future, this convenience may allow patients to track and record their own lung function when managing chronic conditions such as chronic obstructive pulmonary disease (COPD) or cystic fibrosis.

“We plan to use the electronic stethoscope and Respiratory Sounds Visualizer with our own patients after further improving [the mathematical calculations]. We will also release the computer program as a downloadable application to the public in the near future,” said Shinichiro Ohshimo, MD, PhD, an emergency physician in the Department of Emergency and Critical Care Medicine at Hiroshima University Hospital and one of the researchers involved in developing the technology.

* Despite advances in technology, respiratory physiology still depends primarily on chest auscultation, [which is] subjective and requires sufficient training. In addition, identification of the five respiratory sounds specified by the International Lung Sounds Association is difficult because their frequencies overlap: the frequency of normal respiratory sound is 100 to 1000 Hz, wheeze is 100 to 5000 Hz, rhonchus is 150 Hz, coarse crackle is 350 Hz, and fine crackle is 650 Hz. — Shinichiro Ohshimo et al./Annals of Internal Medicine.
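To make the overlap concrete, here is a toy lookup using only the frequencies quoted in the footnote above (a deliberately naive sketch; the actual app also weighs the length and intensity of the sounds):

```python
# Frequency ranges quoted from Ohshimo et al. (Hz)
BANDS = {
    "normal": (100, 1000),
    "wheeze": (100, 5000),
}
# Characteristic frequencies, treated here as +/-50 Hz bands for the demo
CHARACTERISTIC = {"rhonchus": 150, "coarse crackle": 350, "fine crackle": 650}

def candidates(freq_hz, tol=50):
    """All sound types consistent with a single dominant frequency."""
    hits = [name for name, (lo, hi) in BANDS.items() if lo <= freq_hz <= hi]
    hits += [name for name, f in CHARACTERISTIC.items() if abs(freq_hz - f) <= tol]
    return hits

print(candidates(350))  # ['normal', 'wheeze', 'coarse crackle']: ambiguous
```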

Could ‘smart skin’ made of recyclable materials transform medicine and robotics?

Capacitive-based disposable pH sensor. The silver pen could be replaced with aluminum foil. (credit: Joanna M. Nassar et al./Advanced Materials Technologies)

Here’s a challenge: using only low-cost materials available in your house (such as aluminum foil, pencil, scotch tape, sticky-notes, napkins, and sponges), build sensitive sensors (“smart skin”) for detecting temperature, humidity, pH, pressure, touch, flow, motion, and proximity (at a distance of 13 cm). Your sensors must show reliable and consistent results and be capable of connecting to low-cost, tiny computers such as Arduino and Raspberry Pi devices.

The goal here is to replace expensive manufacturing processes for creating paper-based sensors with a simple recyclable 3D stacked 6 × 6 “paper skin” array for simultaneous sensing, made solely from household resources, according to Muhammad Mustafa Hussain, senior author of an Advanced Materials Technologies journal open-access paper and professor at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia.

How to create a temperature sensor

Schematic of temperature sensors using aluminum foil or silver ink pen (credit: Joanna M. Nassar/Advanced Materials Technologies)

Creating a highly sensitive temperature sensor requires just two things: a Post-it note and a piece of aluminum foil (a silver ink pen would be more sensitive). A change in temperature changes the resistance of the aluminum strip. To measure the resistance change, connect the sensor to a highly sensitive ohmmeter built with an Arduino Uno, for example. (The Arduino's output could then trigger an alarm.)

Arduino Uno and ohmmeter circuit. The sensor would replace the “resistor to be measured” in the schematic. The bottom resistor value would depend on the sensor resistance range. (credit: Adafruit and Learning About Electronics)
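A minimal sketch of the math behind such an Arduino-based ohmmeter: the sensor and a known resistor form a voltage divider, and the unknown resistance follows from the ADC reading. All component values here are illustrative assumptions, and a real Arduino sketch would be written in C:

```python
V_IN = 5.0        # Arduino Uno supply voltage (V)
R_KNOWN = 1000.0  # known bottom resistor (ohms); chosen near the sensor's range

def sensor_resistance(adc_reading, adc_max=1023):
    """Unknown (sensor) resistance from a 10-bit ADC reading across R_KNOWN."""
    v_out = V_IN * adc_reading / adc_max     # voltage across the known resistor
    return R_KNOWN * (V_IN - v_out) / v_out  # divider: R = R_k * (Vin - Vout) / Vout

# As the foil strip warms, its resistance changes slightly; an alarm could
# trigger when the reading crosses a calibrated threshold.
print(sensor_resistance(512))  # ~R_KNOWN when the two resistances match
```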

Two designs for a simple pressure sensor

Two designs for a pressure sensor using a parallel-plate structure: (top) Microfiber wipe and sponge; (bottom) more sensitive air-gap structure with sponge. As applied pressure increases, the dielectric thickness decreases, increasing the output capacitance. To measure it, the aluminum foil is connected to a resistor–capacitor circuit (RC circuit), which is connected to an Arduino or Raspberry Pi device to calculate associated pressure change. (credit: Joanna M. Nassar et al./Advanced Materials Technologies)
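The physics behind both designs is the parallel-plate capacitor formula: pressing the sponge thins the dielectric and raises the capacitance. A small sketch with illustrative, assumed dimensions (not values from the paper):

```python
EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def plate_capacitance(area_m2, gap_m, eps_r):
    """Parallel-plate capacitance: C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

# Assumed numbers: a 1 cm^2 foil plate pair with a 1 mm sponge dielectric
# (relative permittivity ~1.5, a guess for an air-filled sponge).
c0 = plate_capacitance(1e-4, 1.0e-3, 1.5)
c1 = plate_capacitance(1e-4, 0.8e-3, 1.5)  # sponge compressed 20% by pressure
print(c0, c1)  # capacitance rises as the gap shrinks (~1.3 pF -> ~1.7 pF)

# An Arduino can estimate C from the RC charge time t = -R*C*ln(1 - V/Vin);
# with R known, a longer time-to-threshold means a higher capacitance.
```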

The simple fabrication process and low-cost materials used “make this flexible platform the lowest cost and accessible to anyone, without affecting performance in terms of response and sensitivity,” Hussain says.

“Democratization of electronics will be key in the future for its continued growth. … This is the first time a [single] platform shows multi-sensory functionalities close to that of natural skin.”


Abstract of Paper Skin Multisensory Platform for Simultaneous Environmental Monitoring

Human skin and hair can simultaneously feel pressure, temperature, humidity, strain, and flow—great inspirations for applications such as artificial skins for burn and acid victims, robotics, and vehicular technology. Previous efforts in this direction use sophisticated materials or processes. Chemically functionalized, inkjet printed or vacuum-technology-processed papers albeit cheap have shown limited functionalities. Thus, performance and/or functionalities per cost have been limited. Here, a scalable “garage” fabrication approach is shown using off-the-shelf inexpensive household elements such as aluminum foil, scotch tapes, sticky-notes, napkins, and sponges to build “paper skin” with simultaneous real-time sensing capability of pressure, temperature, humidity, proximity, pH, and flow. Enabling the basic principles of porosity, adsorption, and dimensions of these materials, a fully functioning distributed sensor network platform is reported, which, for the first time, can sense the vitals of its carrier (body temperature, blood pressure, heart rate, and skin hydration) and the surrounding environment.

Less-distracting haptic feedback could make car navigation safer than GPS audio and displays

Vibrotactile actuators in prototype smart glasses (credit: Joseph Szczerba et al./Proceedings of the Human Factors and Ergonomics Society)

Human factors/ergonomics researchers at General Motors and an affiliate have studied a new turn-by-turn automotive navigation system that delivers haptic cues (vibrations) to the temples to tell drivers about upcoming turns (which direction and when to turn), instead of relying on distracting voice prompts or video displays.

They modified a prototype smart-glasses device with two vibrotactile actuators (one on each side of the head) that buzz to indicate a right or left turn; the number of buzzes encodes the distance to the turn (one buzz at 800 feet away, two at 400 feet, and three at 100 feet).
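The encoding is simple enough to capture in a few lines. Here is a hypothetical sketch of the distance-to-buzz mapping described above (the threshold behavior between the stated distances is my assumption; the paper specifies only the three cue points):

```python
def haptic_cue(turn_direction, distance_ft):
    """Map an upcoming turn to (which actuator, buzz count), following the
    encoding above: 1 buzz at 800 ft, 2 at 400 ft, 3 at 100 ft."""
    side = "right" if turn_direction == "right" else "left"
    if distance_ft <= 100:
        return side, 3
    if distance_ft <= 400:
        return side, 2
    if distance_ft <= 800:
        return side, 1
    return side, 0  # too far away: no cue yet

print(haptic_cue("left", 350))  # ('left', 2)
```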

National Advanced Driving Simulator (NADS) MiniSim software (credit: NADS)

Using a driving simulator, each participant drove three city routes using a visual-only, visual-plus-voice, and visual-plus-haptic navigation system. For all three system modalities, the participants were also presented with graphical icons for turn-by-turn directions and distance.

Sample turn-by-turn direction icon (credit: Joseph Szczerba et al./Proceedings of the Human Factors and Ergonomics Society)

The researchers found that effort, mental workload, and overall workload were lowest with the prototype haptic system. Drivers didn’t have to listen for voice instructions or take their eyes off the road to look at a visual display. Drivers also preferred the haptic system because it didn’t distract from conversation or audio entertainment.

The results indicate that haptic smart-glasses paired with a simplified icon-based visual display may give drivers accurate directional assistance with less effort.

Mazda 2015 with GPS audio and video display (credit: Landmark MAZDA)

As noted in “Up to 27 seconds of inattention after talking to your car or smartphone,” two studies by University of Utah researchers for the AAA Foundation for Traffic Safety found that a driver traveling only 25 mph continues to be distracted for up to 27 seconds after disconnecting from highly distracting phone and car voice-command systems. At that speed, 27 seconds covers the length of three football fields before the driver regains full attention.

According to the Multiple Resource Theory developed by Christopher D. Wickens in Theoretical Issues in Ergonomics Science (open access), multiple tasks (such as using a navigation system while driving) performed via the same perceptual channel can create excessive demand that increases cognitive workload (and the risk of an accident).

The new human factors/ergonomics haptics research was conducted by Joseph Szczerba and Roy Mathieu of General Motors Global R&D and Roger Hersberger of RLH Systems LLC. It was described in a paper in the September 2015 Proceedings of the Human Factors and Ergonomics Society.


Abstract of A Wearable Vibrotactile Display for Automotive Route Guidance: Evaluating Usability, Workload, Performance and Preference

Automotive navigation systems typically provide distance and directional information of an ensuing maneuver by means of visual indicators and audible instructions. These systems, however, use the same human perception channels that are required to perform the primary task of driving, and may consequently increase cognitive workload. A vibrotactile display was designed as an alternative to voice instruction and implemented in a consumer wearable device (smart-glasses). Using a driving simulator, the prototype system was compared to conventional navigation systems by assessing usability, workload, performance and preference. Results indicated that the use of haptic feedback in smart-glasses can improve secondary task performance over the conventional visual/auditory navigation system. Additionally, users preferred the haptic system over the other conventional systems. This study indicates that existing technologies found in consumer wearable devices may be leveraged to enhance the user-interface of vehicle navigation systems.

Powering brain implants without wires, using a thin-film wireless power transmission system

Schematic of proposed architecture of an implantable wireless-powered neural interface system that can provide power to implanted devices. Adding a transmitter chip could allow for neural signals to be transmitted via the antenna for external processing. (credit: Toyohashi University Of Technology)

A research team at Toyohashi University of Technology in Japan has fabricated an implantable wireless power transmission (WPT) device to deliver power to an implanted neural interface system, such as a brain-computer interface (BCI) device.

Described in an open-access paper in the journal Sensors, the system avoids connecting an implanted device to an external power source via wires through a hole in the skull, which risks infection and leakage of cerebrospinal fluid during long-term measurement. The system also allows subjects to move freely, permitting more natural behavior in experiments.

Photographs of fabricated flexible antenna and bonded CMOS rectifier chip with RF transformer (credit: Kenji Okabe et al./Sensors)

The researchers used a wafer-level packaging technique to integrate a silicon large-scale integration (LSI) chip in a thin (5 micrometers), flexible parylene film, using flip-chip (face-down) bonding to the film. The system includes a thin-film antenna and a rectifier to convert a radio-frequency signal to DC voltage (similar to how an RFID chip works). The entire system measures 27 mm × 5 mm, and the flexible film can conform to the surface of the brain.
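For a rough sense of scale: the paper reports an end-to-end efficiency of 0.497% at 3 cm antenna separation (see the abstract below). A back-of-envelope sketch, with the 1 W transmitter power an assumed figure used purely for illustration:

```python
# Back-of-envelope check of the reported link (a sketch, not the authors' model)
efficiency = 0.00497  # 0.497% end-to-end efficiency at 3 cm, from the paper
p_tx_w = 1.0          # assumed transmitter output power (W), illustrative

p_dc_mw = efficiency * p_tx_w * 1e3
print(f"{p_dc_mw:.2f} mW delivered")  # about 5 mW; whether that suffices
# depends on the power budget of the implanted recording LSI.
```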

Coventry University prof. Kevin Warwick turns on a light with a double-click of his finger, which triggers an implant in his arm (wired to a computer connected to the light). Adding an RF transmitter chip (and associated processing) to the Toyohashi system could similarly allow for controlling devices, but without wires. (credit: Kevin Warwick/element14)

The researchers plan to integrate additional functions, including amplifiers, analog-to-digital converters, signal processors, and a radio-frequency circuit for transmitting (and receiving) data.

Tethered BrainGate brain-computer interface for paralyzed patients (credit: Brown University)

Such a system could perform some of the functions of the BrainGate system, which allows paralyzed patients to communicate (see “People with paralysis control robotic arms using brain-computer interface“).

This work was partially supported by Grants-in-Aid for Scientific Research (Young Scientists) from the Japan Society for the Promotion of Science.


element14 | Kevin Warwick’s BrainGate Implant


Abstract of Co-Design Method and Wafer-Level Packaging Technique of Thin-Film Flexible Antenna and Silicon CMOS Rectifier Chips for Wireless-Powered Neural Interface Systems

In this paper, a co-design method and a wafer-level packaging technique of a flexible antenna and a CMOS rectifier chip for use in a small-sized implantable system on the brain surface are proposed. The proposed co-design method optimizes the system architecture, and can help avoid the use of external matching components, resulting in the realization of a small-size system. In addition, the technique employed to assemble a silicon large-scale integration (LSI) chip on the very thin parylene film (5 μm) enables the integration of the rectifier circuits and the flexible antenna (rectenna). In the demonstration of wireless power transmission (WPT), the fabricated flexible rectenna achieved a maximum efficiency of 0.497% with a distance of 3 cm between antennas. In addition, WPT with radio waves allows a misalignment of 185% against antenna size, implying that the misalignment has a less effect on the WPT characteristics compared with electromagnetic induction.

NASA engineers to build first integrated-photonics modem

NASA laser expert Mike Krainak and his team plan to replace portions of this fiber-optic receiver with an integrated-photonic circuit (its size will be similar to the chip he is holding) and will test the advanced modem on the International Space Station. (credit: W. Hrybyk/NASA)

A NASA team plans to build the first integrated-photonics modem, using an emerging, potentially revolutionary technology that could transform everything from telecommunications and medical imaging to advanced manufacturing and national defense.

The cell-phone-sized device incorporates optics-based functions, such as lasers, switches, and fiber-optic wires, onto a microchip similar to the integrated circuits found in all electronics hardware.

The device will be tested aboard the International Space Station beginning in 2020 as part of NASA’s multi-year Laser Communications Relay Demonstration (LCRD). The Integrated LCRD LEO (Low-Earth Orbit) User Modem and Amplifier (ILLUMA) will serve as a low-Earth-orbit terminal for NASA’s LCRD, demonstrating another capability for high-speed, laser-based communications.

Space communications requires 100 times higher data rates

Since its inception in 1958, NASA has relied exclusively on radio-frequency (RF) communications. Today, with missions demanding higher data rates than ever before, the need for LCRD has become more critical, said Don Cornwell, director of NASA’s Advanced Communication and Navigation Division within the Space Communications and Navigation Program, which is funding the modem’s development.

LCRD, expected to begin operations in 2019, promises to transform the way NASA sends and receives data, video and other information. It will use lasers to encode and transmit data at rates 10 to 100 times faster than today’s communications equipment, requiring significantly less mass and power.

Such a leap in technology could deliver video and high-resolution measurements from spacecraft orbiting planets across the solar system — permitting researchers to make detailed studies of conditions on other worlds, much as scientists today track hurricanes and other climate and environmental changes here on Earth.

A payload aboard the Lunar Atmosphere and Dust Environment Explorer (LADEE) demonstrated record-breaking download and upload speeds to and from lunar orbit at 622 megabits per second (Mbps) and 20 Mbps, respectively, in 2013 (see “NASA laser communication system sets record with data transmissions to and from Moon“).

LCRD, however, is designed to be an operational system after an initial two-year demonstration period. It involves a hosted payload and two specially equipped ground stations. The mission will dedicate the first two years to demonstrating a fully operational system, from geosynchronous orbit to ground stations. NASA then plans to use ILLUMA to test communications between geosynchronous and low-Earth-orbit spacecraft, Cornwell said.

Laser Communications Relay Demonstration (LCRD), artist’s illustration (credit: NASA)

Integrated photonics: transforming light-based technologies

ILLUMA incorporates an emerging technology — integrated photonics — that is expected to transform any technology that employs light. This includes everything from Internet communications over fiber optic cable to spectrometers, chemical detectors, and surveillance systems.

“Integrated photonics are like an integrated circuit, except they use light rather than electrons to perform a wide variety of optical functions,” Cornwell said. Recent developments in nanostructures, metamaterials, and silicon technologies have expanded the range of applications for these highly integrated optical chips. Furthermore, they could be lithographically printed en masse — just like electronic circuitry today — further driving down the costs of photonic devices.

“The technology will simplify optical system design,” said Mike Krainak, who is leading the modem’s development at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “It will reduce the size and power consumption of optical devices, and improve reliability, all while enabling new functions from a lower-cost system.”

Krainak also serves as NASA’s representative on the country’s first consortium to advance integrated photonics. Funded by the U.S. Department of Defense, the non-profit American Institute for Manufacturing Integrated Photonics, with headquarters in Rochester, New York, brings together the nation’s leading technological talent to establish global leadership in integrated photonics. Its primary goal is developing low-cost, high-volume, manufacturing methods to merge electronic integrated circuits with integrated photonic devices.

NASA’s Space Technology Mission Directorate (STMD) also appointed Krainak as the integrated photonics lead for its Space Technology Research Grants Program, which supports early-stage innovations. The program recently announced a number of research awards under this technology area (see related story).

Photonics on a chip

Under the NASA project, Krainak and his team will reduce the size of the terminal, now about the size of two toaster ovens — a challenge made easier because all light-related functions will be squeezed onto a microchip. Although the modem is expected to use some optic fiber, ILLUMA is the first step in building and demonstrating an integrated photonics circuit that ultimately will embed these functions onto a chip, he said.

ILLUMA will flight-qualify the technology, as well as demonstrate a key capability for future spacecraft. In addition to communicating to ground stations, future satellites will require the ability to communicate with one another, he said.

“What we want to do is provide a faster exchange of data to the scientific community. Modems have to be inexpensive. They have to be small. We also have to keep their weight down,” Krainak said. The goal is to develop and demonstrate the technology and then make it available to industry and other government agencies, creating an economy of scale that will further drive down costs. “This is the payoff,” he said.

Although integrated photonics promises to revolutionize space-based science and interplanetary communications, its impact on terrestrial uses is equally profound, Krainak added. One such use is in data centers: costly, very large facilities housing servers connected by fiber-optic cable to store, manage, and distribute data.

Integrated photonics promises to dramatically reduce the need for and size of these behemoths — particularly since the optic hardware needed to operate these facilities will be printed onto a chip, much like electronic circuitry today. In addition to driving down costs, the technology promises faster computing power.

“Google, Facebook, they’re all starting to look at this technology,” Krainak said. “As integrated photonics progresses to be more cost-effective than fiber optics, it will be used. Everything is headed this way.”


NASA | Laser Comm: That’s a Bright Idea

Scientists decode brain signals to recognize images in real time

Using electrodes implanted in the temporal lobes of seven awake epilepsy patients, University of Washington scientists have decoded brain signals (representing images) at nearly the speed of perception for the first time* — enabling them to predict in real time which images of faces and houses the patients were viewing, and when, with better than 95 percent accuracy.

Multi-electrode placements on the temporal-lobe surface (credit: K.J. Miller et al./PLoS Comput Biol)

The research, published Jan. 28 in the open-access journal PLOS Computational Biology, may lead to an effective way to help locked-in patients (those who are paralyzed or have had a stroke) communicate, the scientists suggest.

Predicting what someone is seeing in real time

“We were trying to understand, first, how the human brain perceives objects in the temporal lobe, and second, how one could use a computer to extract and predict what someone is seeing in real time,” explained University of Washington computational neuroscientist Rajesh Rao. He is a UW professor of computer science and engineering and directs the National Science Foundation’s Center for Sensorimotor Neural Engineering, headquartered at UW.

The study involved patients receiving care at Harborview Medical Center in Seattle. Each had been experiencing epileptic seizures not relieved by medication, so each had undergone surgery in which their brains’ temporal lobes were implanted (temporarily, for about a week) with electrodes to try to locate the seizures’ focal points.

Temporal lobes process sensory input and are a common site of epileptic seizures. Situated behind mammals’ eyes and ears, the lobes are also involved in Alzheimer’s and dementias and appear somewhat more vulnerable than other brain structures to head traumas, said UW Medicine neurosurgeon Jeff Ojemann.

Recording digital signatures of images in real time

In the experiment, signals from electrocorticographic (ECoG) electrodes at multiple temporal-lobe locations were processed by powerful computational software that extracted two characteristic properties of the brain signals: “event-related potentials” (voltages from hundreds of thousands of neurons activated by an image) and “broadband spectral changes” (power measurements across a wide range of frequencies).

Averaged broadband power at two multi-electrode locations (1 and 4) following presentation of different images; note that responses to people are stronger than to houses. (credit: K.J. Miller et al./PLoS Comput Biol)

Target image (credit: K.J. Miller et al./PLoS Comput Biol)

The subjects, watching a computer monitor, were shown a random sequence of pictures: brief (400-millisecond) flashes of images of human faces and houses, interspersed with blank gray screens. Their task was to watch for an image of an upside-down house and verbally report this target, which appeared once during each of a subject’s three runs (3 of the 300 stimuli each subject saw). Patients identified the target with less than 3 percent error across all 21 experimental runs (seven subjects, three runs each).

The computational software sampled and digitized the brain signals 1,000 times per second to extract their characteristics. The software also analyzed the data to determine which combination of electrode locations and signal types correlated best with what each subject actually saw.

By training an algorithm on the subjects’ responses to the (known) first two-thirds of the images, the researchers could examine the brain signals representing the final third of the images, whose labels were unknown to them, and predict with 96 percent accuracy whether and when the subjects were seeing a house, a face, or a gray screen, with a timing error of only about 20 milliseconds.

This accuracy was attained only when event-related potentials and broadband changes were combined for prediction, which suggests they carry complementary information.
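A minimal sketch of this train-then-predict scheme (not the authors’ actual template-projection code, and with random placeholder data standing in for real ECoG features):

```python
import numpy as np

# Label the first two-thirds of trials, build per-class feature templates,
# then classify held-out trials by correlation with each template.
rng = np.random.default_rng(0)
n_trials, n_features = 300, 64          # e.g., ERP + broadband features per trial
X = rng.normal(size=(n_trials, n_features))  # placeholder "brain signal" features
y = rng.integers(0, 3, size=n_trials)        # 0 = face, 1 = house, 2 = gray screen

split = 2 * n_trials // 3
templates = {c: X[:split][y[:split] == c].mean(axis=0) for c in (0, 1, 2)}

def classify(trial):
    # Pick the class whose template correlates best with this trial's features.
    return max(templates, key=lambda c: np.corrcoef(trial, templates[c])[0, 1])

pred = [classify(t) for t in X[split:]]
accuracy = np.mean(pred == y[split:])  # random features give chance level here
print(f"accuracy: {accuracy:.2f}")
```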

Steppingstone to real-time brain mapping

“Traditionally scientists have looked at single neurons,” Rao said. “Our study gives a more global picture, at the level of very large networks of neurons, of how a person who is awake and paying attention perceives a complex visual object.”

The scientists’ technique, he said, is a steppingstone for brain mapping, in that it could be used to identify in real time which locations of the brain are sensitive to particular types of information.

“The computational tools that we developed can be applied to studies of motor function, studies of epilepsy, studies of memory. The math behind it, as applied to the biological, is fundamental to learning,” Ojemann added.

Lead author of the study is Kai Miller, a neurosurgery resident and physicist at Stanford University who obtained his M.D. and Ph.D. at the UW. Other collaborators were Dora Hermes, a Stanford postdoctoral fellow in neuroscience, and Gerwin Schalk, a neuroscientist at the Wadsworth Institute in New York.

This work was supported by National Aeronautics and Space Administration Graduate Student Research Program, the National Institutes of Health, the National Science Foundation, and the U.S. Army.

* In previous studies, such as these three covered on KurzweilAI, brain images were reconstructed after they were viewed, not in real time: Study matches brain scans with topics of thoughts, Neuroscape Lab visualizes live brain functions using dramatic images, How to make movies of what the brain sees.


Abstract of Spontaneous Decoding of the Timing and Content of Human Object Perception from Cortical Surface Recordings Reveals Complementary Information in the Event-Related Potential and Broadband Spectral Change

The link between object perception and neural activity in visual cortical areas is a problem of fundamental importance in neuroscience. Here we show that electrical potentials from the ventral temporal cortical surface in humans contain sufficient information for spontaneous and near-instantaneous identification of a subject’s perceptual state. Electrocorticographic (ECoG) arrays were placed on the subtemporal cortical surface of seven epilepsy patients. Grayscale images of faces and houses were displayed rapidly in random sequence. We developed a template projection approach to decode the continuous ECoG data stream spontaneously, predicting the occurrence, timing and type of visual stimulus. In this setting, we evaluated the independent and joint use of two well-studied features of brain signals, broadband changes in the frequency power spectrum of the potential and deflections in the raw potential trace (event-related potential; ERP). Our ability to predict both the timing of stimulus onset and the type of image was best when we used a combination of both the broadband response and ERP, suggesting that they capture different and complementary aspects of the subject’s perceptual state. Specifically, we were able to predict the timing and type of 96% of all stimuli, with less than 5% false positive rate and a ~20ms error in timing.

Detecting heartbeats remotely with millimeter-wave radar

Japanese researchers have developed a way to measure heartbeats remotely in real time with as much accuracy as electrocardiographs. (credit: Kyoto University)

A radar system that measures heartbeats remotely in real time and with as much accuracy as electrocardiographs has been developed by researchers at the Kyoto University Center of Innovation and Panasonic Corporation.

The results were published in an open-access paper in the journal IEEE Transactions on Biomedical Engineering.

The researchers say this new approach will allow for developing long-term monitoring and “casual sensing” — taking measurements as people go about their daily activities, without having to attach uncomfortable electrodes or sensors to the patient’s body. People will also be able to monitor their cardio health status themselves.

The remote sensing system transmits and receives radar signals in the 24 GHz band* using “ultra-wideband” (UWB) modulation, which is known for its ability to detect specific signals in a noisy radio-frequency environment. A unique signal-analysis algorithm extracts data from radar signals reflected from the body to detect heart rate and heart-rate variability (beat-to-beat changes in the interbeat interval). The researchers suggest future versions could also detect breathing, body movement, and other data from the body.
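Once individual beats have been detected, deriving heart rate and heart-rate variability is straightforward. A sketch with illustrative beat times (the hard part, the feature-based beat detection itself, is the paper’s contribution and is not shown):

```python
import numpy as np

# Illustrative beat timestamps (s), as would come from the radar pipeline
beat_times_s = np.array([0.00, 0.82, 1.66, 2.47, 3.31, 4.12])

ibi = np.diff(beat_times_s)                # interbeat intervals (s)
heart_rate_bpm = 60.0 / ibi.mean()
rmssd_ms = np.sqrt(np.mean(np.diff(ibi) ** 2)) * 1e3  # a common HRV measure

print(f"HR ~ {heart_rate_bpm:.0f} bpm, RMSSD ~ {rmssd_ms:.0f} ms")
```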

* The 24 GHz radar band is also used for new automotive radar sensors in the U.S. and Europe for collision detection, obstacle detection, blind-spot monitoring, and automatic cruise control for cars, including self-driving cars. Speculation: it may be possible to use the new UWB-based technology on semi-automated and fully automated vehicles for detecting and tracking humans and pets in the road in real time, based on heartbeat and other characteristics, and compensating for a subject’s movement. The remote feature also suggests possible future covert use in remote polygraph, airport screening, espionage, and other applications. The authors have been asked to technically comment on these uses.


Abstract of Feature-based Correlation and Topological Similarity for Interbeat Interval Estimation using Ultra-Wideband Radar

The objectives of this paper are to propose a method that can accurately estimate the human heart rate using an ultra-wideband radar system, and to determine the performance of the proposed method through measurements. The proposed method uses the feature points of a radar signal to estimate the heart rate efficiently and accurately. Fourier- and periodicity-based methods are inappropriate for estimation of instantaneous heart rates in real time because heartbeat waveforms are highly variable, even within the beat-to-beat interval. We define six radar waveform features that enable correlation processing to be performed quickly and accurately. In addition, we propose a feature topology signal that is generated from a feature sequence without using amplitude information. This feature topology signal is used to find unreliable feature points, and thus to suppress inaccurate heart rate estimates. Measurements were taken using ultra-wideband radar, while simultaneously performing electro-cardiography measurements in an experiment that was conducted on nine participants. The proposed method achieved an average root-mean-square error in the interbeat interval of 7.17 ms for the nine participants. The results demonstrate the effectiveness and accuracy of the proposed method. The significance of this work for biomedical research is that the proposed method will be useful in the realization of a remote vital signs monitoring system that enables accurate estimation of heart-rate variability, which has been used in various clinical settings for the treatment of conditions such as diabetes and arterial hypertension.

Tiny electronic implants monitor brain injury, then melt away

Artist’s rendering of bioresorbable implanted brain sensor (top left) connected via biodegradable wires to external wireless transmitter (ring, top right) for monitoring a rat’s brain (red) (credit: Graphic by Julie McMahon)

Researchers at University of Illinois at Urbana-Champaign and Washington University School of Medicine in St. Louis have developed a new class of small, thin electronic sensors that can monitor temperature and pressure within the skull — crucial health parameters after a brain injury or surgery* — then melt away when they are no longer needed, eliminating the need for additional surgery to remove the monitors and reducing the risk of infection and hemorrhage.

Similar sensors could be adapted for postoperative monitoring in other body systems as well, the researchers say.

John A. Rogers, a professor of materials science and engineering at the U. of I. at Urbana-Champaign, and Wilson Ray, a professor of neurological surgery at Washington University, published their work in the journal Nature.

After a traumatic brain injury or brain surgery, it’s crucial to monitor the patient for swelling and pressure on the brain. Current monitoring technology is bulky and invasive, Rogers said, and the wires restrict the patient’s movement and hamper physical therapy during recovery. Because they require continuous, hard-wired access into the head, such implants also carry risks of allergic reactions, infection, and hemorrhage, and could even exacerbate the inflammation they are meant to monitor.

Bioresorbable materials

“If you simply could throw out all the conventional hardware and replace it with very tiny, fully implantable sensors capable of the same function, constructed out of bioresorbable materials in a way that also eliminates or greatly miniaturizes the wires, then you could remove a lot of the risk and achieve better patient outcomes,” Rogers said. “We were able to demonstrate all of these key features in animal models, with a measurement precision that’s just as good as that of conventional devices.”

Schematic illustration of a biodegradable pressure sensor. The inset shows the location of the silicon-nanomembrane (Si-NM) strain gauge. (credit: Seung-Kyun Kang et al./Nature)

The new devices incorporate dissolvable silicon technology developed by Rogers’ group at the U. of I. The sensors, smaller than a grain of rice, are built on extremely thin sheets of nanoporous silicon — which are naturally biodegradable. The sheets are configured to function normally for a few weeks, then dissolve away, completely and harmlessly, in the body’s own fluids (via hydrolysis and/or metabolic action).

The silicon platforms are also sensitive to clinically relevant pressure levels in the intracranial fluid surrounding the brain.
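As a sketch of how such a strain-gauge reading could be turned into a pressure value, here is a hypothetical two-point linear calibration; the resistance values and the linearity assumption are mine, not the paper’s:

```python
# Hypothetical calibration for a silicon-nanomembrane strain-gauge pressure
# sensor: assume the gauge resistance varies linearly with intracranial
# pressure over the clinical range (an assumption for illustration only).
R0 = 1000.0       # gauge resistance at 0 mmHg (ohms), assumed
R_AT_20 = 1002.0  # gauge resistance at 20 mmHg (ohms), assumed calibration point

def pressure_mmHg(resistance_ohm):
    slope = (R_AT_20 - R0) / 20.0  # ohms per mmHg from the two-point calibration
    return (resistance_ohm - R0) / slope

print(pressure_mmHg(1001.0))  # 10.0 mmHg
```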

The researchers added a tiny temperature sensor and connected it to a wireless transmitter roughly the size of a postage stamp, implanted under the skin but on top of the skull.

Tiny pressure and temperature sensor (bottom right) connects via bioresorbable molybdenum wires (2 micrometers in diameter) to a penny-size wireless transmitter externally mounted on top of the skull. (image credit: John A. Rogers)

The Illinois group worked with clinical experts in traumatic brain injury at Washington University to implant the sensors in rats, testing for performance and biocompatibility. They found that the temperature and pressure readings from the dissolvable sensors matched conventional monitoring devices for accuracy.

“The ultimate strategy is to have a device that you can place in the brain — or in other organs in the body — that is entirely implanted, intimately connected with the organ you want to monitor and can transmit signals wirelessly to provide information on the health of that organ, allowing doctors to intervene if necessary to prevent bigger problems,” said Rory Murphy, a neurosurgeon at Washington University and co-author of the paper.

“After the critical period that you actually want to monitor, it will dissolve away and disappear.”

Embedding drug-delivery and electrical-stimulator devices

The researchers are moving toward human trials for this technology, as well as extending its functionality for other biomedical applications.

“We have established a range of device variations, materials, and measurement capabilities for sensing in other clinical contexts,” Rogers said. “In the near future, we believe that it will be possible to embed therapeutic function, such as electrical stimulation or drug delivery, into the same systems while retaining the essential bioresorbable character.”

The National Institutes of Health, the Defense Advanced Research Projects Agency and the Howard Hughes Medical Institute supported this work. Rogers and Braun are affiliated with the Beckman Institute for Advanced Science and Technology at the U. of I.

* About 50,000 people die of traumatic brain injuries annually in the U.S. When patients with such injuries arrive in the hospital, doctors must be able to accurately measure intracranial pressure, because an increase in pressure can lead to further brain injury, and there is no way to reliably estimate pressure levels from brain scans or clinical features.


Abstract of Bioresorbable silicon electronic sensors for the brain

Many procedures in modern clinical medicine rely on the use of electronic implants in treating conditions that range from acute coronary events to traumatic injury. However, standard permanent electronic hardware acts as a nidus for infection: bacteria form biofilms along percutaneous wires, or seed haematogenously, with the potential to migrate within the body and to provoke immune-mediated pathological tissue reactions. The associated surgical retrieval procedures, meanwhile, subject patients to the distress associated with re-operation and expose them to additional complications. Here, we report materials, device architectures, integration strategies, and in vivo demonstrations in rats of implantable, multifunctional silicon sensors for the brain, for which all of the constituent materials naturally resorb via hydrolysis and/or metabolic action, eliminating the need for extraction. Continuous monitoring of intracranial pressure and temperature illustrates functionality essential to the treatment of traumatic brain injury; the measurement performance of our resorbable devices compares favourably with that of non-resorbable clinical standards. In our experiments, insulated percutaneous wires connect to an externally mounted, miniaturized wireless potentiostat for data transmission. In a separate set-up, we connect a sensor to an implanted (but only partially resorbable) data-communication system, proving the principle that there is no need for any percutaneous wiring. The devices can be adapted to sense fluid flow, motion, pH or thermal characteristics, in formats that are compatible with the body’s abdomen and extremities, as well as the deep brain, suggesting that the sensors might meet many needs in clinical medicine.

A self-assembling molecular nanoswitch

Molecular nanoswitch: calculated adsorption geometry of porphine adsorbed at a copper bridge site (credit: Moritz Müller et al./J. Chem. Phys.)

Technical University of Munich (TUM) researchers have simulated a self-assembling molecular nanoswitch in a supercomputer study.

As with other current research in bottom-up self-assembly nanoscale techniques, the goal is to further miniaturize electronic devices, overcoming the physical limits of currently used top-down procedures such as photolithography.

The new TUM research focuses on porphine (C20H14N4, the simplest form of porphyrin* organic molecules), interacting on copper and silver surfaces to form a single-porphyrin switch that occupies a surface area of only one square nanometer (porphine itself is much smaller). Porphyrins have potential applications in molecular memory devices, photovoltaics, gas sensors, light emission, and catalysis, the researchers note.

Structure of a porphine molecule (credit: Moritz  Müller et al./J. Chem. Phys.)

In their simulation, the researchers placed porphine molecules on a copper or silver slab. After finding the optimal geometry in which the molecules adsorb on the surface, they altered the size of the metal slab to increase or decrease the distance between molecules, simulating different molecular coverages.
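A sketch of this kind of setup using the ASE toolkit (an assumption for illustration; the authors used their own dispersion-corrected DFT workflow, and benzene stands in here for porphine, which would be read from a structure file):

```python
from ase.build import fcc111, molecule, add_adsorbate

# Cu(111) slab with vacuum above; enlarging the lateral size (6 x 6 here)
# increases the spacing between periodic images of the molecule, i.e.,
# lowers the simulated coverage.
slab = fcc111("Cu", size=(6, 6, 4), vacuum=15.0)

# Benzene as a runnable stand-in; the study used porphine (C20H14N4).
adsorbate = molecule("C6H6")

# Place the molecule flat above the surface; the height and lateral site
# are what the geometry optimization would refine.
add_adsorbate(slab, adsorbate, height=2.5, position=(5.0, 5.0))

# A dispersion-corrected DFT calculator would then be attached to the slab
# to relax the geometry and extract adsorption and interaction energies.
```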

The researchers found that weak, long-range van der Waals interactions (attractive or repulsive forces between molecules or atomic groups that do not arise from covalent bonds or electrostatic forces) made the largest contribution to the molecule-surface interaction.

The study was published last week in The Journal of Chemical Physics.

* Porphyrins are a group of ringed chemical compounds that notably include heme — responsible for transporting oxygen and carbon dioxide in the bloodstream — and chlorophyll. Porphyrins are studied in the lab for their potential uses as sensors, light-sensitive dyes in organic solar cells, and molecular magnets. The close-packed single-crystal surfaces of copper and silver are widely used as substrates in surface science because their densely packed surfaces give molecules a smooth adsorption environment. Additionally, copper and silver each react differently with porphyrins: the molecules adsorb more strongly on copper, whereas silver does a better job of keeping the electronic structure of the molecule intact — allowing the researchers to monitor a variety of competing effects for future applications.


Abstract of Interfacial charge rearrangement and intermolecular interactions: Density-functional theory study of free-base porphine adsorbed on Ag(111) and Cu(111)

We employ dispersion-corrected density-functional theory to study the adsorption of tetrapyrrole 2H-porphine (2H-P) at Cu(111) and Ag(111). Various contributions to adsorbate-substrate and adsorbate-adsorbate interactions are systematically extracted to analyze the self-assembly behavior of this basic building block to porphyrin-based metal-organic nanostructures. This analysis reveals a surprising importance of substrate-mediated van der Waals interactions between 2H-P molecules, in contrast to negligible direct dispersive interactions. The resulting net repulsive interactions rationalize the experimentally observed tendency for single molecule adsorption.

Microbots individually controlled using magnetic fields

This image shows how two microbots can be independently controlled when operating within a group. (Purdue University image/David Cappelleri)

Purdue University researchers have developed a method to use magnetic fields to independently control individual microrobots operating within groups.

The design allows each microbot to work independently while operating in a group, similar to how ants work. Until now, it was generally only possible to control groups of microbots to move in unison, said David Cappelleri, an assistant professor of mechanical engineering at Purdue.

The magnetic coils were made by printing a copper pattern with the same technology used to manufacture printed circuit boards. (credit: Sagar Chowdhury et al./Micromachines)

The solution: an array of tiny coils that generate attractive or repulsive magnetic forces to move the microbots, which are magnetic disks that slide across the surface. (The microbots are currently about 2 millimeters in diameter, but the researchers plan to create versions around 250 micrometers, or 0.25 millimeter, in diameter.)
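The abstract below describes an optimization routine that finds the coil currents producing the forces each robot needs. A minimal sketch of that idea, assuming (hypothetically) a linear force model F = A·I with a placeholder matrix A standing in for a precomputed magnetic model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_coils, n_force_components = 64, 4  # 64-coil array; (Fx, Fy) for two robots
A = rng.normal(size=(n_force_components, n_coils))  # placeholder field model

# Desired forces (N): push the two robots in opposite directions.
f_desired = np.array([1e-6, 0.0, -1e-6, 5e-7])

# Least-squares / minimum-norm current vector achieving the desired forces;
# the real controller re-solves this at every control step.
currents, *_ = np.linalg.lstsq(A, f_desired, rcond=None)
print(np.allclose(A @ currents, f_desired))  # True: forces realized exactly
```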

Applications could include microelectromechanical systems for additive manufacturing, cell sorting, cell manipulation, and cancer cell detection in a biopsy (cancer cells have different stiffness characteristics than non-cancer cells), Cappelleri said.

The National Science Foundation funded the research, which was described in an open-access paper appearing this month in the journal Micromachines.


Purdue University | Localized Magnetic Field Control for Microrobots


Abstract of Towards Independent Control of Multiple Magnetic Mobile Microrobots

In this paper, we have developed an approach for independent autonomous navigation of multiple microrobots under the influence of magnetic fields and validated it experimentally. We first developed a heuristics based planning algorithm for generating collision-free trajectories for the microrobots that are suitable to be executed by an available magnetic field. Second, we have modeled the dynamics of the microrobots to develop a controller for determining the forces that need to be generated for the navigation of the robots along the trajectories at a suitable control frequency. Next, an optimization routine is developed to determine the input currents to the electromagnetic coils that can generate the required forces for the navigation of the robots at the controller frequency. We then validated our approach by simulating an electromagnetic system that contains an array of sixty-four magnetic microcoils designed for generating local magnetic fields suitable for simultaneous independent actuation of multiple microrobots. Finally, we prototyped an mm-scale version of the system and present experimental results showing the validity of our approach.