(Left) Corneal shape before (top) and after (bottom) the treatment. (Right) Simulated effects on vision. (credit: Sinisa Vukelic/Columbia Engineering)
Columbia Engineering researcher Sinisa Vukelic, Ph.D., has developed a new non-invasive approach for permanently correcting myopia (nearsightedness), potentially replacing eyeglasses and invasive corneal refractive surgery.* The non-surgical method uses a “femtosecond oscillator” — an ultrafast laser that delivers pulses of very low energy at a high repetition rate — to modify the shape of the corneal tissue.
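To give a rough sense of what “very low energy at a high repetition rate” means in practice, here is a back-of-envelope sketch. The values are illustrative numbers typical of femtosecond oscillators, not the parameters used in Vukelic’s study:

```python
# Back-of-envelope sketch (illustrative values only -- not the study's actual
# laser parameters): how a femtosecond oscillator's low pulse energy and high
# repetition rate combine into average power and per-pulse peak power.

pulse_energy_j = 1e-9        # assume ~1 nJ per pulse (typical oscillator scale)
rep_rate_hz = 80e6           # assume ~80 MHz repetition rate
pulse_duration_s = 100e-15   # assume ~100 fs pulse duration

average_power_w = pulse_energy_j * rep_rate_hz       # ~0.08 W delivered on average
peak_power_w = pulse_energy_j / pulse_duration_s     # ~10 kW during each brief pulse

print(f"Average power: {average_power_w:.2f} W")
print(f"Peak power per pulse: {peak_power_w / 1e3:.0f} kW")
```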
The method has fewer side effects and limitations than those seen in refractive surgeries, according to Vukelic. For instance, patients with thin corneas, dry eyes, and other abnormalities cannot undergo refractive surgery.** The study could lead to treatment for myopia, hyperopia, astigmatism, and irregular astigmatism. So far, it’s shown promise in preclinical models.
“If we carefully tailor these changes, we can adjust the corneal curvature and thus change the refractive power of the eye,” says Vukelic. “This is a fundamental departure from the mainstream ultrafast laser treatment [such as LASIK] … and relies on the optical breakdown of the target materials and subsequent cavitation bubble formation.”
Personalized treatments and use on other collagen-rich tissues
Vukelic’s group plans to start clinical trials by the end of the year. They hope to predict how the cornea might deform if, for example, a small circle or an ellipse of tissue were treated. That would make it possible to personalize the treatment.
“What’s especially exciting is that our technique is not limited to ocular media — it can be used on other collagen-rich tissues,” Vukelic adds. “We’ve also been working with Professor Gerard Ateshian’s lab to treat early osteoarthritis, and the preliminary results are very, very encouraging. We think our non-invasive approach has the potential to open avenues to treat or repair collagenous tissue without causing tissue damage.”
* Nearsightedness, or myopia, is an increasing problem around the world. There are now twice as many people in the U.S. and Europe with this condition as there were 50 years ago, the researchers note. In East Asia, 70 to 90 percent of teenagers and young adults are nearsighted. By some estimates, about 2.5 billion people across the globe may be affected by myopia by 2020. Eyeglasses and contact lenses are simple solutions; a more permanent one is corneal refractive surgery. But, while vision-correction surgery has a relatively high success rate, it is an invasive procedure, subject to post-surgical complications and, in rare cases, permanent vision loss. In addition, laser-assisted vision-correction surgeries such as laser in situ keratomileusis (LASIK) and photorefractive keratectomy (PRK) still use ablative technology, which can thin and in some cases weaken the cornea.
** Vukelic’s approach uses low-density plasma, which causes ionization of water molecules within the cornea. This ionization creates a reactive oxygen species (a type of unstable molecule that contains oxygen and that easily reacts with other molecules in a cell), which in turn interacts with the collagen fibrils to form chemical bonds, or crosslinks. This selective introduction of crosslinks induces changes in the mechanical properties of the treated corneal tissue. This ultimately results in changes in the overall macrostructure of the cornea, but avoids optical breakdown of the corneal tissue. Because the process is photochemical, it does not disrupt tissue and the induced changes remain stable.
A timeline of the Universe based on the cosmic inflation theory (credit: WMAP science team/NASA)
Stephen Hawking’s final cosmology theory says the universe was created instantly (no inflation, no singularity) and it’s a hologram
There was no singularity just after the big bang (and thus, no eternal inflation) — the universe was created instantly. And there were only three dimensions. So there’s only one finite universe, not a fractal or a multiverse — and we’re living in a projected hologram. That’s what Hawking and co-author Thomas Hertog (a theoretical physicist at the Catholic University of Leuven) have concluded — contradicting Hawking’s former big-bang singularity theory (with time as a dimension).
Problem: So how does time finally emerge? “There’s a lot of work to be done,” admits Hertog. Citation (open access): Journal of High Energy Physics, May 2, 2018. Source (open access): Science, May 2, 2018
Movies capture the dynamics of an RNA molecule from the HIV-1 virus. (photo credit: Yu Xu et al.)
Molecular movies of RNA guide drug discovery — a new paradigm for drug discovery
Duke University scientists have invented a technique that combines nuclear magnetic resonance imaging and computationally generated movies to capture the rapidly changing states of an RNA molecule.
It could lead to new drug targets and allow for screening millions of potential drug candidates. So far, the technique has predicted 78 compounds (and their preferred molecular shapes) with anti-HIV activity, out of 100,000 candidate compounds. Citation: Nature Structural and Molecular Biology, May 4, 2018. Source: Duke University, May 4, 2018.
Chromium tri-iodide magnetic layers between graphene conductors. By using four layers, the storage density could be multiplied. (credit: Tiancheng Song)
Atomically thin magnetic memory
University of Washington scientists have developed the first 2D (in a flat plane) atomically thin magnetic memory — encoding information using magnets that are just a few layers of atoms in thickness — a miniaturized, high-efficiency alternative to current disk-drive materials.
In an experiment, the researchers sandwiched two atomic layers of chromium tri-iodide (CrI3) — acting as memory bits — between graphene contacts and measured the on/off electron flow through the atomic layers.
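The readout principle lends itself to a minimal sketch (with hypothetical current values, not the measurements reported in the paper): the magnetic state of the CrI3 layers changes how easily electrons tunnel between the graphene contacts, so a bit is read by comparing the measured current against a threshold.

```python
# Minimal sketch of reading a magnetic tunnel-junction memory bit
# (illustrative numbers only -- not values reported in the Science paper).

def read_bit(current_a: float, threshold_a: float) -> int:
    """Return 1 for the high-conductance ("on") state, 0 for "off"."""
    return 1 if current_a >= threshold_a else 0

i_on = 50e-9      # assumed current when the CrI3 layers let electrons through easily
i_off = 5e-9      # assumed current when the layers block tunneling
threshold = 20e-9

on_off_ratio = i_on / i_off   # figure of merit for the memory element
print(f"On/off ratio: {on_off_ratio:.0f}x")
print(f"Read states: on -> {read_bit(i_on, threshold)}, off -> {read_bit(i_off, threshold)}")
```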
The U.S. Dept. of Energy-funded research could dramatically increase future data-storage density while reducing energy consumption by orders of magnitude. Citation: Science, May 3, 2018. Source: University of Washington, May 3, 2018.
Definitions of artificial intelligence (credit: House of Lords Select Committee on Artificial Intelligence)
A Magna Carta for the AI age
A report by the House of Lords Select Committee on Artificial Intelligence in the U.K. lays out “an overall charter for AI that can frame practical interventions by governments and other public agencies.”
The key elements: AI should be developed for the common good; operate on principles of intelligibility and fairness (users must be able to easily understand the terms under which their personal data will be used); respect rights to privacy; be grounded in far-reaching changes to education (teaching needs reform to make use of digital resources, and students must learn not only digital skills but also how to develop a critical perspective online); and never be given the autonomous power to hurt, destroy, or deceive human beings.
Memes and social networks have become weaponized, but many governments seem ill-equipped to understand the new reality of information warfare.
The weapons include computational propaganda (digitizing the manipulation of public opinion), advanced digital deception technologies, malicious AI impersonating and manipulating people, and AI-generated fake video and audio. Counter-weapons include spotting AI-generated people, uncovering hidden metadata to authenticate images and videos, blockchain for tracing digital content back to its source, and detecting image and video manipulation at scale.
HHMI Howard Hughes Medical Institute | An immune cell explores a zebrafish’s inner ear
By combining two state-of-the-art imaging technologies, Howard Hughes Medical Institute Janelia Research Campus scientists, led by 2014 chemistry Nobel laureate physicist Eric Betzig, have imaged living cells at unprecedented 3D detail and speed, the scientists report on April 19, 2018 in an open-access paper in the journal Science.
In stunning videos of animated worlds, cancer cells crawl, spinal nerve circuits rewire, and we travel down through the endomembrane mesh of a zebrafish eye.
Microscope flaws. The new adaptive optics/lattice light sheet microscopy (AO-LLSM) system addresses two fundamental flaws of traditional microscopes. First, they’re too slow to study natural three-dimensional (3D) cellular processes in real time and in detail (the sharpest views have been limited to isolated cells immobilized on glass slides).
Second, the bright light required for imaging causes photobleaching and other cellular damage. These microscopes bathe cells in light thousands to millions of times more intense than the desert sun, says Betzig — damaging or killing the organism being studied.
Merging adaptive optics and rapid scanning. To meet these challenges, Betzig and his team created a microscopy system that merges two technologies: Aberration-correcting adaptive-optics technology used by astronomers to provide clear views of distant celestial objects through Earth’s turbulent atmosphere; and non-invasive lattice light sheet microscopy, which rapidly and repeatedly sweeps an ultra-thin sheet of light through the cell (avoiding light damage) while acquiring a series of 2D images and building a high-resolution 3D movie of subcellular dynamics.
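The adaptive-optics step can be illustrated with a toy numerical sketch — not Janelia’s actual correction pipeline — in which a sample-induced wavefront distortion is estimated and a corrective element applies the opposite (conjugate) phase:

```python
import numpy as np

# Toy model of adaptive-optics wavefront correction (not Janelia's actual
# pipeline): a sample-induced wavefront distortion is estimated, and a
# corrective element (deformable mirror or spatial light modulator) applies
# the opposite (conjugate) phase.

n = 64
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]  # pupil coordinates

# Assume a smooth aberration made of defocus- and astigmatism-like terms (radians).
aberration = 1.5 * (x**2 + y**2) + 0.8 * (x**2 - y**2) + 0.5 * x * y

# The corrector is driven by a slightly imperfect measurement of that phase.
rng = np.random.default_rng(0)
measurement = aberration + rng.normal(scale=0.02, size=aberration.shape)
residual = aberration - measurement   # phase error left after correction

print(f"RMS wavefront error before correction: {aberration.std():.3f} rad")
print(f"RMS wavefront error after correction:  {residual.std():.3f} rad")
```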
Zebrafish embryo spinal cord neural circuit development (credit: HHMI Howard Hughes Medical Institute)
The combination allows for the study of 3D subcellular processes in their native multicellular environments at high spatiotemporal (space and time) resolution.
Desk version. Currently, the new microscope fills a 10-foot-long table. “It’s a bit of a Frankenstein’s monster right now,” says Betzig. His team is working on a next-generation version that should fit on a small desk at a cost within the reach of individual labs. The first such instrument will go to Janelia’s Advanced Imaging Center, where scientists from around the world can apply to use it. Plans that scientists can use to create their own microscopes will also be made freely available.
Ultimately, Betzig hopes that the adaptive optical version of the lattice microscope will be commercialized, as was the base lattice instrument before it. That could bring adaptive optics into the mainstream.
A silicon-based metalens just 30 micrometers thick is mounted on a transparent, stretchy polymer film. The colored iridescence is produced by the large number of nanostructures within the metalens. (credit: Harvard SEAS)
Researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a breakthrough electronically controlled artificial eye. The thin, flat, adaptive silicon nanostructure (“metalens”) can simultaneously control focus, astigmatism, and image shift (three of the major contributors to blurry images) in real time, which the human eye (and eyeglasses) cannot do.
The 30-micrometers-thick metalens makes changes laterally to achieve optical zoom, autofocus, and image stabilization — making it possible to replace bulky lens systems in future optical systems used in eyeglasses, cameras, cell phones, and augmented and virtual reality devices.
The research is described in an open-access paper in Science Advances. In another paper recently published in Optics Express, the researchers demonstrated the design and fabrication of metalenses up to centimeters or more in diameter.* That makes it possible to unify two industries: semiconductor manufacturing and lens-making. So the same technology used to make computer chips will be used to make metasurface-based optical components, such as lenses.
The adaptive metalens (right) focuses light rays onto an image sensor (left), such as one in a camera. An electrical signal controls the shape of the metalens to produce the desired optical wavefront patterns (shown in red), resulting in improved images. In the future, adaptive metalenses will be built into imaging systems, such as cell phone cameras and microscopes, enabling flat, compact autofocus as well as the capability for simultaneously correcting optical aberrations and performing optical image stabilization, all in a single plane of control. (credit: Second Bay Studios/Harvard SEAS)
Simulating the human eye’s lens and ciliary muscles
In the human eye, the lens is surrounded by ciliary muscle, which stretches or compresses the lens, changing its shape to adjust its focal length. To achieve that function, the researchers adhered a metalens to a thin, transparent dielectric elastomer actuator (“artificial muscle”). The researchers chose a dielectric elastomer with low loss — meaning light travels through the material with little scattering — to attach to the lens.
(Top) Schematic of the metasurface and dielectric elastomer actuators (“artificial muscles”), showing how the artificial muscles change focus, similar to how the ciliary muscle in the eye works. An applied voltage supplies the transparent, stretchable electrode layers (gray), made up of single-wall carbon-nanotube nanopillars, with electrical charges (the structure acts as a capacitor). The resulting electrostatic attraction compresses the dielectric elastomer actuators in the thickness direction (red arrows) and expands them in the lateral direction (black arrows). The silicon metasurface (in the center), applied by photolithography, can simultaneously focus, control aberrations caused by astigmatism, and perform image shift. (Bottom) Photo of the actual device. (credit: Alan She et al./Sci. Adv.)
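The actuation principle can be sketched with back-of-envelope numbers (assumed values, not the parameters reported in the paper): the applied voltage produces an electrostatic (Maxwell) pressure that squeezes the nearly incompressible elastomer, and the film’s thinning shows up as lateral stretch of the attached metasurface.

```python
# Back-of-envelope sketch of how a dielectric elastomer actuator converts
# voltage into lateral stretch (illustrative values -- not the parameters
# reported in the Science Advances paper).

EPS0 = 8.854e-12   # vacuum permittivity, F/m

voltage_v = 3000.0       # assumed drive voltage
thickness_m = 50e-6      # assumed elastomer thickness
eps_r = 3.0              # assumed relative permittivity of the elastomer
youngs_modulus_pa = 1e6  # assumed elastic modulus (~1 MPa, typical of elastomers)

# Electrostatic (Maxwell) pressure squeezing the film between the electrodes:
maxwell_pressure_pa = EPS0 * eps_r * (voltage_v / thickness_m) ** 2

# Small-strain estimate: the film thins by p/Y, and because the elastomer is
# nearly incompressible, its area grows by roughly the same fraction.
thickness_strain = -maxwell_pressure_pa / youngs_modulus_pa
area_strain = -thickness_strain

print(f"Maxwell pressure: {maxwell_pressure_pa / 1e3:.1f} kPa")
print(f"Thickness strain: {thickness_strain:+.1%}, lateral area strain: {area_strain:+.1%}")
```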
Next, the researchers aim to further improve the functionality of the lens and decrease the voltage required to control it.
The research was performed at the Harvard John A. Paulson School of Engineering and Applied Sciences, supported in part by the Air Force Office of Scientific Research and by the National Science Foundation. This work was performed in part at the Center for Nanoscale Systems (CNS), which is supported by the National Science Foundation. The Harvard Office of Technology Development is exploring commercialization opportunities.
* To build the artificial eye with a larger (more functional) metalens, the researchers had to develop a new algorithm to shrink the file size to make it compatible with the technology currently used to fabricate integrated circuits.
** “All optical systems with multiple components — from cameras to microscopes and telescopes — have slight misalignments or mechanical stresses on their components, depending on the way they were built and their current environment, that will always cause small amounts of astigmatism and other aberrations, which could be corrected by an adaptive optical element,” said Alan She, a graduate student at SEAS and first author of the paper. “Because the adaptive metalens is flat, you can correct those aberrations and integrate different optical capabilities onto a single plane of control. Our results demonstrate the feasibility of embedded autofocus, optical zoom, image stabilization, and adaptive optics, which are expected to become essential for future chip-scale image sensors. … Furthermore, the device’s flat construction and inherently lateral actuation, without the need for motorized parts, allow for highly stackable systems such as those found in stretchable electronic eye camera sensors, providing possibilities for new kinds of imaging systems.”
Abstract of Adaptive metalenses with simultaneous electrical control of focal length, astigmatism, and shift
Focal adjustment and zooming are universal features of cameras and advanced optical systems. Such tuning is usually performed longitudinally along the optical axis by mechanical or electrical control of focal length. However, the recent advent of ultrathin planar lenses based on metasurfaces (metalenses), which opens the door to future drastic miniaturization of mobile devices such as cell phones and wearable displays, mandates fundamentally different forms of tuning based on lateral motion rather than longitudinal motion. Theory shows that the strain field of a metalens substrate can be directly mapped into the outgoing optical wavefront to achieve large diffraction-limited focal length tuning and control of aberrations. We demonstrate electrically tunable large-area metalenses controlled by artificial muscles capable of simultaneously performing focal length tuning (>100%) as well as on-the-fly astigmatism and image shift corrections, which until now were only possible in electron optics. The device thickness is only 30 μm. Our results demonstrate the possibility of future optical microscopes that fully operate electronically, as well as compact optical systems that use the principles of adaptive optics to correct many orders of aberrations simultaneously.
Abstract of Large area metalenses: design, characterization, and mass manufacturing
Optical components, such as lenses, have traditionally been made in the bulk form by shaping glass or other transparent materials. Recent advances in metasurfaces provide a new basis for recasting optical components into thin, planar elements, having similar or better performance using arrays of subwavelength-spaced optical phase-shifters. The technology required to mass produce them dates back to the mid-1990s, when the feature sizes of semiconductor manufacturing became considerably denser than the wavelength of light, advancing in stride with Moore’s law. This provides the possibility of unifying two industries: semiconductor manufacturing and lens-making, whereby the same technology used to make computer chips is used to make optical components, such as lenses, based on metasurfaces. Using a scalable metasurface layout compression algorithm that exponentially reduces design file sizes (by 3 orders of magnitude for a centimeter diameter lens) and stepper photolithography, we show the design and fabrication of metasurface lenses (metalenses) with extremely large areas, up to centimeters in diameter and beyond. Using a single two-centimeter diameter near-infrared metalens less than a micron thick fabricated in this way, we experimentally implement the ideal thin lens equation, while demonstrating high-quality imaging and diffraction-limited focusing.
Inspired by the iconic Star Wars scene with Princess Leia in distress, Brigham Young University engineers and physicists have created the “Princess Leia project” — a new technology for creating 3D “volumetric images” that float in the air and that you can walk all around and see from almost any angle.*
“Our group has a mission to take the 3D displays of science fiction and make them real,” said electrical and computer engineering professor and holography expert Daniel Smalley, lead author of a Jan. 25 Nature paper on the discovery.
The image of Princess Leia portrayed in the movie is actually not a hologram, he explains. A holographic display scatters light only from a 2D surface, so you have to be looking within a limited range of angles to see the image, which is also normally static. A volumetric display, by contrast, can be seen from any angle, and you can even reach your hand into it. Examples include the 3D displays Tony Stark interacts with in Iron Man and the massive image-projecting table in Avatar.*
How to create a 3D volumetric image from a single moving particle
BYU student Erich Nygaard, depicted as a moving 3D image, mimics the Princess Leia projection in the iconic Star Wars scene (“Help me, Obi-Wan Kenobi, you’re my only hope”). (credit: Dan Smalley Lab)
The team’s free-space volumetric display technology, called “Optical Trap Display,” is based on photophoretic** optical trapping (controlled by a laser beam) of a rapidly moving particle (of a plant fiber called cellulose in this case). This technique takes advantage of human persistence of vision (at more than 10 images per second we don’t see a moving point of light, just the pattern it traces in space — the same phenomenon that makes movies and video work).
As the laser beam moves the trapped particle around, three more laser beams illuminate the particle with RGB (red-green-blue) light. The resulting fast-moving dot traces out a color image in three dimensions (you can see the vertical scan lines in one vertical slice in the Princess Leia image above) — producing a full-color, volumetric (3D) still image in air with 10-micrometer resolution, which allows for fine detail. The technology also features low apparent speckle (the annoying specks seen in holograms).***
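A rough feel for the scanning requirement (illustrative numbers, not the paper’s specifications): to exploit persistence of vision, the trapped particle must retrace the entire image path at least about 10 times per second, which sets how fast it has to move for an image of a given size.

```python
# Rough scanning-rate estimate for a single-particle volumetric display
# (illustrative numbers only -- not the specifications in the Nature paper).

refresh_hz = 10.0          # minimum redraw rate for persistence of vision
image_points = 100_000     # assumed number of image points drawn per frame
point_spacing_m = 10e-6    # 10-micrometer image points

path_length_m = image_points * point_spacing_m      # ~1 m of trajectory per frame
particle_speed_m_s = path_length_m * refresh_hz     # speed needed to redraw in time
points_per_second = image_points * refresh_hz

print(f"Path length per frame: {path_length_m:.2f} m")
print(f"Required particle speed: {particle_speed_m_s:.1f} m/s")
print(f"Image points drawn per second: {points_per_second:,.0f}")
```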
Applications in the real (and virtual) world
So far, Smalley and his student researchers have 3D light-printed a butterfly, a prism, the stretch-Y BYU logo, rings that wrap around an arm, and an individual in a lab coat crouched in a position similar to Princess Leia as she begins her projected message. The images in this proof-of-concept prototype are still in the range of millimeters. But in the Nature paper, the researchers say they anticipate that the device “can readily be scaled using parallelism and [they] consider this platform to be a viable method for creating 3D images that share the same space as the user, as physical objects would.”
What about augmented and virtual-reality uses? “While I think this technology is not really AR or VR but just ‘R,’ there are a lot of interesting ways volumetric images can enhance and augment the world around us,” Smalley told KurzweilAI in an email. “A very-near-term application could be the use of levitated particles as ‘streamers’ to show the expected flow of air over actual physical objects. That is, instead of looking at a computer screen to see fluid flow over a turbine blade, you could set a volumetric projector next to the actual turbine blade and see particles form ribbons to show expected fluid flow juxtaposed on the real object.
“In a scaled-up version of the display, a projector could place a superimposed image of a part on an engine, showing a technician the exact location and orientation of that part. An even more refined version could create a magic portal in your home where you could see the size of shoes you just ordered and set your foot inside to (visually) check the fit. Other applications would include sparse telepresence, satellite tracking, command and control surveillance, surgical planning, tissue tagging, catheter guidance, and other medical visualization applications.”
How soon? “I won’t make a prediction on exact timing but if we make as much progress in the next four years as we did in the last four years (a big ‘if’), then we would have a display of usable size by the end of that period. We have had a number of interested parties from a variety of fields. We are open to an exclusive agreement, given the right partner.”
* Smalley says he has long dreamed of building the kind of 3D holograms that pepper science-fiction films. But watching inventor Tony Stark thrust his hands through ghostly 3D body armor in the 2008 film Iron Man, Smalley realized that he could never achieve that using holography, the current standard for high-tech 3D display, because Stark’s hand would block the hologram’s light source. “That irritated me,” he says. He immediately tried to work out how to get around that.
** “Photophoresis denotes the phenomenon that small particles suspended in gas (aerosols) or liquids (hydrocolloids) start to migrate when illuminated by a sufficiently intense beam of light.” — Wikipedia
*** Previous researchers have created volumetric imagery, but the Smalley team says it’s the first to use optical trapping and color effectively. “Among volumetric systems, we are aware of only three such displays that have been successfully demonstrated in free space: induced plasma displays, modified air displays, and acoustic levitation displays. Plasma displays have yet to demonstrate RGB color or occlusion in free space. Modified air displays and acoustic levitation displays rely on mechanisms that are too coarse or too inertial to compete directly with holography at present.” — D.E. Smalley et al./Nature
Nature video | Pictures in the air: 3D printing with light
Abstract of A photophoretic-trap volumetric display
Free-space volumetric displays, or displays that create luminous image points in space, are the technology that most closely resembles the three-dimensional displays of popular fiction. Such displays are capable of producing images in ‘thin air’ that are visible from almost any direction and are not subject to clipping. Clipping restricts the utility of all three-dimensional displays that modulate light at a two-dimensional surface with an edge boundary; these include holographic displays, nanophotonic arrays, plasmonic displays, lenticular or lenslet displays and all technologies in which the light scattering surface and the image point are physically separate. Here we present a free-space volumetric display based on photophoretic optical trapping that produces full-colour graphics in free space with ten-micrometre image points using persistence of vision. This display works by first isolating a cellulose particle in a photophoretic trap created by spherical and astigmatic aberrations. The trap and particle are then scanned through a display volume while being illuminated with red, green and blue light. The result is a three-dimensional image in free space with a large colour gamut, fine detail and low apparent speckle. This platform, named the Optical Trap Display, is capable of producing image geometries that are currently unobtainable with holographic and light-field technologies, such as long-throw projections, tall sandtables and ‘wrap-around’ displays.
Imagine what this amazingly resilient microscopic (0.2 to 0.7 millimeter) Milnesium tardigradum animal could evolve into on another planet. (credit: Wikipedia)
Life on our planet might have originated from biological particles brought to Earth in streams of space dust, according to a study published in the journal Astrobiology.
A huge amount of space dust (~10,000 kilograms — about the weight of two elephants) enters our atmosphere every day — possibly delivering organisms from far-off worlds, according to Professor Arjun Berera from the University of Edinburgh School of Physics and Astronomy, who led the study.
The dust streams could also collide with bacteria and other biological particles at 150 km or higher above Earth’s surface with enough energy to knock them into space, carrying Earth-based organisms to other planets and perhaps beyond.
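The underlying kinematics can be sketched with a toy one-dimensional elastic collision (illustrative masses and speeds, far cruder than the paper’s estimates): a heavy, fast dust grain striking a much lighter microorganism can kick it to nearly twice the grain’s speed, comfortably above Earth’s escape velocity of roughly 11 km/s.

```python
# Toy elastic-collision estimate of whether fast space dust could knock a
# microorganism to escape velocity (illustrative masses and speeds only --
# the Astrobiology paper's estimates are far more detailed).

EARTH_ESCAPE_SPEED_M_S = 11_200  # approximate escape speed from Earth

dust_mass_kg = 1e-9        # assumed mass of a ~100-micrometer dust grain
microbe_mass_kg = 1e-15    # assumed mass of a bacterium-sized particle
dust_speed_m_s = 30_000    # assumed grazing dust speed (tens of km/s is typical)

# 1D elastic collision: speed given to an initially stationary target.
microbe_speed_m_s = 2 * dust_mass_kg * dust_speed_m_s / (dust_mass_kg + microbe_mass_kg)

print(f"Post-collision microbe speed: {microbe_speed_m_s / 1000:.1f} km/s")
print(f"Exceeds Earth escape speed: {microbe_speed_m_s > EARTH_ESCAPE_SPEED_M_S}")
```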
The finding suggests that large asteroid impacts may not be the sole mechanism by which life could transfer between planets, as previously thought.
“The streaming of fast space dust is found throughout planetary systems and could be a common factor in proliferating life,” said Berera. Some bacteria, plants, and even microscopic animals called tardigrades* are known to be able to survive in space, so it is possible that such organisms — if present in Earth’s upper atmosphere — might collide with fast-moving space dust and withstand a journey to another planet.**
The study was partly funded by the U.K. Science and Technology Facilities Council.
* “Some tardigrades can withstand extremely cold temperatures down to 1 K (−458 °F; −272 °C) (close to absolute zero), while others can withstand extremely hot temperatures up to 420 K (300 °F; 150 °C)[12] for several minutes, pressures about six times greater than those found in the deepest ocean trenches, ionizing radiation at doses hundreds of times higher than the lethal dose for a human, and the vacuum of outer space. They can go without food or water for more than 30 years, drying out to the point where they are 3% or less water, only to rehydrate, forage, and reproduce.” — Wikipedia
** “Over the lifespan of the Earth of four billion years, particles emerging from Earth by this manner in principle could have traveled out as far as tens of kiloparsecs [one kiloparsec = 3,260 light years; our galaxy is about 100,000 light-years across]. This material horizon, as could be called the maximum distance on pure kinematic grounds that a material particle from Earth could travel outward based on natural processes, would cover most of our Galactic disk [the "Milky Way"], and interestingly would be far enough out to reach the Earth-like or potentially habitable planets that have been identified.” — Arjun Berera/Astrobiology
Abstract of Space Dust Collisions as a Planetary Escape Mechanism
It is observed that hypervelocity space dust, which is continuously bombarding Earth, creates immense momentum flows in the atmosphere. Some of this fast space dust inevitably will interact with the atmospheric system, transferring energy and moving particles around, with various possible consequences. This paper examines, with supporting estimates, the possibility that by way of collisions the Earth-grazing component of space dust can facilitate planetary escape of atmospheric particles, whether they are atoms and molecules that form the atmosphere or larger-sized particles. An interesting outcome of this collision scenario is that a variety of particles that contain telltale signs of Earth’s organic story, including microbial life and life-essential molecules, may be “afloat” in Earth’s atmosphere. The present study assesses the capability of this space dust collision mechanism to propel some of these biological constituents into space. Key Words: Hypervelocity space dust—Collision—Planetary escape—Atmospheric constituents—Microbial life. Astrobiology 17, xxx–xxx.
Astronomers detect gravitational waves and a gamma-ray burst from two colliding neutron stars. (credit: National Science Foundation/LIGO/Sonoma State University/A. Simonnet)
Scientists reported today (Oct. 16, 2017) the first simultaneous detection of both gravitational waves and light — an astounding collision of two neutron stars.
The discovery was made nearly simultaneously by three gravitational-wave detectors, followed by observations by some 70 ground- and space-based light observatories.
Neutron stars are the smallest, densest stars known to exist and are formed when massive stars explode in supernovas.
MIT | Neutron Stars Collide
As these neutron stars spiraled together, they emitted gravitational waves that were detectable for about 100 seconds. When they collided, a flash of light in the form of gamma rays was emitted and seen on Earth about two seconds after the gravitational waves. In the days and weeks following the smashup, other forms of light, or electromagnetic radiation — including X-ray, ultraviolet, optical, infrared, and radio waves — were detected.
The stars were estimated to be in a range from around 1.1 to 1.6 times the mass of the sun, in the mass range of neutron stars. A neutron star is about 20 kilometers, or 12 miles, in diameter and is so dense that a teaspoon of neutron star material has a mass of about a billion tons.
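That “teaspoon” figure follows from straightforward arithmetic, sketched here with round numbers (roughly 1.4 solar masses packed into a 20-kilometer-diameter sphere):

```python
import math

# Order-of-magnitude check of the "teaspoon of neutron star" claim,
# using round illustrative values.

SOLAR_MASS_KG = 1.989e30
star_mass_kg = 1.4 * SOLAR_MASS_KG     # typical neutron-star mass
star_radius_m = 10_000                  # 20 km diameter
teaspoon_volume_m3 = 5e-6               # ~5 milliliters

volume_m3 = (4 / 3) * math.pi * star_radius_m**3
density_kg_m3 = star_mass_kg / volume_m3
teaspoon_mass_tonnes = density_kg_m3 * teaspoon_volume_m3 / 1000

print(f"Density: {density_kg_m3:.2e} kg/m^3")
print(f"Teaspoon mass: {teaspoon_mass_tonnes:.2e} tonnes (order of a billion tons)")
```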
The initial gamma-ray measurements, combined with the gravitational-wave detection, provide confirmation for Einstein’s general theory of relativity, which predicts that gravitational waves should travel at the speed of light. The observations also reveal signatures of recently synthesized material, including gold and platinum, solving a decades-long mystery of where about half of all elements heavier than iron are produced.
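The strength of that confirmation comes from a simple ratio, sketched below with round numbers: a roughly two-second delay accumulated over about 130 million years of travel limits any fractional difference between the speed of gravitational waves and the speed of light to around one part in 10^15.

```python
# Rough bound on how much the speed of gravitational waves can differ from
# the speed of light, given the observed ~2 s gamma-ray delay over a
# ~130-million-light-year journey (round numbers for illustration).

SECONDS_PER_YEAR = 3.156e7

travel_time_s = 130e6 * SECONDS_PER_YEAR   # ~4.1e15 seconds in transit
observed_delay_s = 2.0                     # gamma rays arrived ~2 s after the GWs

fractional_speed_difference = observed_delay_s / travel_time_s
print(f"Fractional speed difference: ~{fractional_speed_difference:.0e}")
```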
Georgia Tech | The Collision of Two Neutron Stars (audible frequencies start at ~25 seconds)
“This detection has genuinely opened the doors to a new way of doing astrophysics,” said Laura Cadonati, professor of physics at Georgia Tech and deputy spokesperson for the LIGO Scientific Collaboration. “I expect it will be remembered as one of the most studied astrophysical events in history.”
In the weeks and months ahead, telescopes around the world will continue to observe the afterglow of the neutron star merger and gather further evidence about various stages of the merger, its interaction with its surroundings, and the processes that produce the heaviest elements in the universe.
The research was published today in Physical Review Letters and in an open-access paper in The Astrophysical Journal Letters.
Timeline
KurzweilAI has assembled this timeline of the observations from various reports:
About 130 million years ago: Two neutron stars are in their final moments of orbiting each other, separated only by about 300 kilometers (200 miles) and gathering speed while closing the distance between them. As the stars spiral faster and closer together, they stretch and distort the surrounding space-time, giving off energy in the form of powerful gravitational waves, before smashing into each other. At the moment of collision, the bulk of the two neutron stars merge into one ultradense object, emitting a “fireball” of gamma rays.
Aug. 17, 2017, 12:41:04 UTC: The Virgo detector near Pisa, Italy picks up a strong new “chirp” gravitational-wave signal, designated GW170817. The LIGO detector in Livingston, Louisiana detects the signal just 22 milliseconds later, then the twin LIGO detector in Hanford, Washington, 3 milliseconds after that. Based on the signal duration (about 100 seconds) and the signal frequencies, scientists at the three facilities conclude it’s likely from neutron stars — not from more massive black holes (as in the previous gravitational-wave detections). And based on the signal strengths and timing between the three detectors, scientists are able to precisely triangulate the position in the sky — the most precisely localized gravitational-wave detection so far. (A triangulation sketch follows the timeline below.)
1.7 seconds later: NASA’s Fermi Gamma-ray Space Telescope and the European INTEGRAL satellite detect a gamma-ray burst (GRB) lasting nearly 2 seconds from the same general direction of the sky. Both the Fermi and LIGO teams quickly alert astronomers around the world to search for an afterglow.
Hours later: Armed with these precise coordinates, a handful of observatories around the world start searching the region of the sky where the signal was thought to originate. Optical telescopes are the first to find a new point of light, resembling a new star. Known as a “kilonova,” it is a phenomenon in which the glowing material left over from the neutron-star collision is blown out of the immediate region and far into space.
Days and weeks following: About 70 observatories on the ground and in space observe the event at progressively longer wavelengths — starting with gamma rays, then X-rays, ultraviolet, optical, and infrared light, and ending with radio waves.
In the weeks and months ahead: Telescopes around the world will continue to observe the radio-wave afterglow of the neutron star merger and gather further evidence about various stages of the merger, its interaction with its surroundings, and the processes that produce the heaviest elements in the universe.
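The triangulation mentioned in the timeline rests on arrival-time differences: a gravitational wave crossing Earth at the speed of light reaches each detector at a slightly different moment, and each measured delay confines the source to a cone on the sky. The sketch below uses approximate detector baselines (round numbers, not the collaboration’s actual analysis):

```python
import math

C_KM_S = 299_792.458  # speed of light, km/s

def cone_angle_deg(delay_s: float, baseline_km: float) -> float:
    """Angle between the detector baseline and the wave's arrival direction.

    A measured arrival-time difference confines the source to a cone:
    cos(theta) = c * delay / baseline.  Intersecting the cones from several
    detector pairs narrows down the sky position.
    """
    cos_theta = max(-1.0, min(1.0, C_KM_S * delay_s / baseline_km))
    return math.degrees(math.acos(cos_theta))

# Approximate straight-line baselines (assumed round numbers, not survey values)
# and the arrival-time differences quoted in the timeline above.
pairs = {
    "Virgo -> LIGO Livingston": (0.022, 8_000),          # 22 ms over ~8,000 km
    "LIGO Livingston -> LIGO Hanford": (0.003, 3_000),   # 3 ms over ~3,000 km
}

for name, (delay, baseline) in pairs.items():
    angle = cone_angle_deg(delay, baseline)
    print(f"{name}: source lies on a cone {angle:.0f} deg from the baseline")
```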
“Multimessenger” astronomy
Caltech’s David H. Reitze, executive director of the LIGO Laboratory puts the observations in context: “This detection opens the window of a long-awaited ‘multimessenger’ astronomy. It’s the first time that we’ve observed a cataclysmic astrophysical event in both gravitational waves and electromagnetic waves — our cosmic messengers. Gravitational-wave astronomy offers new opportunities to understand the properties of neutron stars in ways that just can’t be achieved with electromagnetic astronomy alone.”
caltech | Variety of Gravitational Waves and a Chirp (audible sound for GW170817 starts ~30 seconds)
Green Bank Telescope in West Virginia (credit: Geremia/CC)
Using the Green Bank radio telescope, astronomers at Breakthrough Listen, a $100 million initiative to find signs of intelligent life in the universe, have detected 15 brief but powerful “fast radio bursts” (FRBs). These microwave radio pulses come from a mysterious source known as FRB 121102* in a dwarf galaxy about 3 billion light years from Earth, and were detected at record-high frequencies (4 to 8 GHz), according to the researchers.
This sequence of 14 of the 15 detected fast radio bursts illustrates their dispersed spectrum and extreme variability. The streaks across the colored energy plot are the bursts appearing at different times and different energies because of dispersion caused by 3 billion years of travel through intergalactic space. In the top frequency spectrum, the dispersion has been removed to show the 300 microsecond pulse spike. (credit: Berkeley SETI Research Center)
Andrew Siemion, director of the Berkeley SETI Research Center and of the Breakthrough Listen program, and his team alerted the astronomical community to the high-frequency activity via an Astronomer’s Telegram on Monday evening, Aug. 28.
A schematic illustration of CSIRO’s Parkes radio telescope in Australia receiving a fast radio burst signal in 2014 (credit: Swinburne Astronomy Productions)
First detected in 2007, fast radio bursts are brief, bright pulses of radio emission detected from distant but largely unknown sources.
Breakthrough Starshot’s plan to use powerful laser pulses to propel nano-spacecraft to Proxima Centauri (credit: Breakthrough Initiatives)
Possible explanations for the repeating bursts range from outbursts from magnetars (rotating neutron stars with extremely strong magnetic fields) to directed energy sources — powerful bursts used by extraterrestrial civilizations to power exploratory spacecraft, akin to Breakthrough Starshot’s plan to use powerful laser pulses to propel nano-spacecraft to Earth’s nearest star, Proxima Centauri.
* FRB 121102 was discovered on Nov. 2, 2012 (hence its name) with the Arecibo radio telescope, and in 2015 it was the first fast radio burst seen to repeat. More than 150 high-energy bursts have been observed so far. (The repetition ruled out the possibility that FRBs are caused by one-off catastrophic events.)
FRB 121102: Detection at 4 – 8 GHz band with Breakthrough Listen backend at Green Bank
On Saturday, August 26 at 13:51:44 UTC we initiated observations of the well-known repeating fast radio burst FRB 121102 [Spitler et al., Nature, 531, 7593 202-205, 2016] using the Breakthrough Listen Digital Backend with the C-band receiver at the Green Bank Telescope. We recorded baseband voltage data across 5.4375 GHz of bandwidth, completely covering the C-band receiver’s nominal 4-8 GHz band [MacMahon et al. arXiv:1707.06024v2]. Observations were conducted over ten 30-minute scans, as detailed in Table 1. Immediately after observations, the baseband data were reduced to form high time resolution (300 us integration) Stokes-I products using a GPU-accelerated spectroscopy suite. These reduced products were searched for dispersed pulses consistent with the known dispersion measure of FRB 121102 (557 pc cm^-3); baseband voltage data were preserved. We detected 15 bursts above our detection threshold of 10 sigma in the first two 30-minute scans, denoted 11A-L and 12A-B in Table 2. In Table 2, we include the detection signal-to-noise ratio (SNR) of each burst, along with a very rough estimate of pulse energy density assuming a 12 Jy system equivalent flux density, 300 us pulse width, and uniform 3800 MHz bandwidth. We note the following phenomenological properties of the detected bursts: 1. Bursts show marked changes in spectral extent, with characteristic spectral structure in the 100 MHz – 1 GHz range. 2. Several bursts appear to peak in brightness at frequencies above 6 GHz.
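The de-dispersion step described above corrects for the frequency-dependent delay that intervening plasma imposes on a burst. Here is a minimal sketch of the standard cold-plasma delay formula, evaluated for the dispersion measure quoted above (557 pc cm^-3) across the 4–8 GHz band:

```python
# Standard cold-plasma dispersion delay: a burst arrives later at lower radio
# frequencies.  Sketch using the dispersion measure of FRB 121102 quoted above.

K_DM_MS = 4.149  # dispersion constant in ms, for DM in pc cm^-3 and frequency in GHz

def dispersion_delay_ms(dm: float, freq_ghz: float) -> float:
    """Extra arrival delay (ms) at a given frequency, relative to infinite frequency."""
    return K_DM_MS * dm * freq_ghz ** -2

dm = 557.0  # pc cm^-3, the known dispersion measure of FRB 121102

delay_across_band_ms = dispersion_delay_ms(dm, 4.0) - dispersion_delay_ms(dm, 8.0)
print(f"Burst sweep from 8 GHz down to 4 GHz: ~{delay_across_band_ms:.0f} ms")
# De-dispersion shifts each frequency channel back by its predicted delay so the
# ~300-microsecond pulse lines up across the band before searching for spikes.
```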
Julie Brefczynski-Lewis, a neuroscientist at West Virginia University, places a helmet-like PET scanner on a research subject. The mobile scanner enables studies of human interaction, movement disorders, and more. (credit: West Virginia University)
Two scientists have developed a miniaturized positron emission tomography (PET) brain scanner that can be “worn” like a helmet.
The new Ambulatory Microdose Positron Emission Tomography (AMPET) scanner allows research subjects to stand and move around as the device scans, instead of having to lie completely still — a restriction (sometimes requiring anesthesia) that makes it impossible to find associations between movement and brain activity.
The AMPET scanner was developed by Julie Brefczynski-Lewis, a neuroscientist at West Virginia University (WVU), and Stan Majewski, a physicist at WVU and now at the University of Virginia. It could make possible new psychological and clinical studies on how the brain functions when affected by diseases from epilepsy to addiction, and during ordinary and dysfunctional social interactions.
Helmet support prototype with weighted helmet, allowing for freedom of movement. The counterbalance currently supports up to 10 kg but can be upgraded. Digitizing electronics will be mounted to the support above the patient. (credit: Samantha Melroy et al./Sensors)
Because AMPET sits so close to the brain, it can also “catch” more of the photons stemming from the radiotracers used in PET than larger scanners can. That means researchers can administer a lower dose of radioactive material and still get a good biological snapshot. Catching more signals also allows AMPET to create higher resolution images than regular PET.
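The sensitivity argument is geometric: annihilation photons fly off in all directions, so a detector ring that sits closer to the head subtends a larger solid angle and intercepts more of them. A rough sketch with assumed ring dimensions (not AMPET’s actual geometry):

```python
import math

# Geometric sketch of why a head-worn PET ring captures more annihilation
# photons than a large whole-body ring (assumed dimensions, not AMPET's
# actual geometry).  Model: a point source at the center of a cylindrical
# detector ring of radius r and axial height h.

def captured_fraction(radius_m: float, height_m: float) -> float:
    """Fraction of all emitted photons that strike the ring's inner surface."""
    half_h = height_m / 2
    return half_h / math.sqrt(radius_m**2 + half_h**2)

helmet = captured_fraction(radius_m=0.12, height_m=0.10)       # ~12 cm radius ring
whole_body = captured_fraction(radius_m=0.40, height_m=0.10)   # ~40 cm radius ring

print(f"Helmet ring captures ~{helmet:.0%} of photons")
print(f"Whole-body ring captures ~{whole_body:.0%} of photons")
print(f"Sensitivity gain: ~{helmet / whole_body:.1f}x -> lower tracer dose for the same signal")
```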
The AMPET idea was sparked by the Rat Conscious Animal PET (RatCAP) scanner for studying rats at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory.** The scanner is a 250-gram ring that fits around the head of a rat, suspended by springs to support its weight and let the rat scurry about as the device scans. (credit: Brookhaven Lab)
The researchers plan to build a laboratory-ready version next.
Seeing more deeply into the brain
A patient or animal about to undergo a PET scan is injected with a low dose of a radiotracer — a radioactive form of a molecule that is regularly used in the body. These molecules emit antimatter particles called positrons, which travel only a tiny distance through the body. As soon as one of these positrons meets an electron in biological tissue, the pair annihilates, converting their mass to energy. This energy takes the form of two high-energy light rays, called gamma photons, that shoot off in opposite directions. PET machines detect these photons and track their paths backward to their point of origin — the tracer molecule. By measuring levels of the tracer, for instance, doctors can map areas of high metabolic activity. Mapping of different tracers provides insight into different aspects of a patient’s health. (credit: Brookhaven Lab)
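In code form, the detection idea looks roughly like this — a toy sketch of coincidence detection, not AMPET’s actual reconstruction software: each pair of gamma photons recorded within a narrow time window defines a line of response through the body, and regions crossed by many such lines are regions of high tracer concentration.

```python
# Toy sketch of PET coincidence detection (not AMPET's actual reconstruction):
# two gamma photons detected nearly simultaneously define a "line of response"
# on which the annihilation -- and hence the tracer molecule -- must lie.

from itertools import combinations

# Hypothetical detector hits: (time in nanoseconds, (x, y) detector position in cm)
hits = [
    (100.0, (10.0, 0.0)),
    (100.2, (-10.0, 1.0)),   # within the window of the first hit -> coincidence
    (250.0, (0.0, 10.0)),    # unpaired single, ignored
]

COINCIDENCE_WINDOW_NS = 5.0

lines_of_response = [
    (p1, p2)
    for (t1, p1), (t2, p2) in combinations(hits, 2)
    if abs(t1 - t2) <= COINCIDENCE_WINDOW_NS
]

for p1, p2 in lines_of_response:
    print(f"Annihilation occurred somewhere on the line between {p1} and {p2}")
```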
PET scans allow researchers to see farther into the body than other imaging tools. This lets AMPET reach deep neural structures while the research subjects are upright and moving. “A lot of the important things that are going on with emotion, memory, and behavior are way deep in the center of the brain: the basal ganglia, hippocampus, amygdala,” Brefczynski-Lewis notes.
“Currently we are doing tests to validate the use of virtual reality environments in future experiments,” she said. In this virtual reality, volunteers would read from a script designed to make the subject angry, for example, as his or her brain is scanned.
In the medical sphere, the scanning helmet could help explain what happens in the brain during drug treatments. Or it could shed light on disorders such as epilepsy, watching what happens in the brain during a seizure, or help study the sub-population of Parkinson’s patients who have great difficulty walking but can ride a bicycle.
The RatCAP project at Brookhaven was funded by the DOE Office of Science. RHIC is a DOE Office of Science User Facility for nuclear physics research. Brookhaven Lab physicists use technology similar to PET scanners at the Relativistic Heavy Ion Collider (RHIC), where they must track the particles that fly out of near-light speed collisions of charged nuclei. PET research at the Lab dates back to the early 1960s and includes the creation of the first single-plane scanner as well as various tracer molecules.
Abstract of Development and Design of Next-Generation Head-Mounted Ambulatory Microdose Positron-Emission Tomography (AM-PET) System
Several applications exist for a whole brain positron-emission tomography (PET) brain imager designed as a portable unit that can be worn on a patient’s head. Enabled by improvements in detector technology, a lightweight, high performance device would allow PET brain imaging in different environments and during behavioral tasks. Such a wearable system that allows the subjects to move their heads and walk—the Ambulatory Microdose PET (AM-PET)—is currently under development. This imager will be helpful for testing subjects performing selected activities such as gestures, virtual reality activities and walking. The need for this type of lightweight mobile device has led to the construction of a proof of concept portable head-worn unit that uses twelve silicon photomultiplier (SiPM) PET module sensors built into a small ring which fits around the head. This paper is focused on the engineering design of mechanical support aspects of the AM-PET project, both of the current device as well as of the coming next-generation devices. The goal of this work is to optimize design of the scanner and its mechanics to improve comfort for the subject by reducing the effect of weight, and to enable diversification of its applications amongst different research activities.