‘Eternal 5D’ data storage could reliably record the history of humankind

Documents captured in nanostructured glass, expected to last billions of years (credit: University of Southampton)

Scientists at the University of Southampton Optoelectronics Research Centre (ORC) have developed the first digital data storage system capable of creating archives that can survive for billions of years.

Using nanostructured glass, the system has 360 TB per disc capacity, thermal stability up to 1,000°C, and virtually unlimited lifetime at room temperature (or 13.8 billion years at 190°C).

As a “highly stable and safe form of portable memory,” the technology opens up a new era of “eternal” data archiving that could be essential for coping with the accelerating amount of information being created and stored, the scientists say.* The system could be especially useful for organizations with large archives, such as national archives, museums, and libraries.

Superman memory crystal

5D optical storage writing setup. FSL: femtosecond laser; SLM: spatial light modulator; FL1 and FL2: Fourier lens; HPM: half-wave plate matrix; AP: aperture; WIO: water immersion objective. Inset: Linearly polarized light (white arrows) with different intensity levels propagates simultaneously through each half-wave plate segment with a different slow-axis orientation (black arrows). The colors of the rectangles indicate four different intensity levels. (credit: University of Southampton)

The recording system uses an ultrafast laser to produce extremely short (femtosecond) and intense pulses of light. The file is written in three layers of nanostructured dots separated by five micrometers (a micrometer is one millionth of a meter) in fused quartz, dubbed a “Superman memory crystal” (a reference to the memory crystals used in the Superman films). The self-assembled nanostructures change the way light travels through the glass, modifying the polarization of the light, which can then be read by an optical microscope combined with a polarizer, similar to those found in Polaroid sunglasses.

The recording method is described as “5D” because the information is encoded in five dimensions: the three-dimensional position of each nanostructure plus its size and orientation.
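
As a rough conceptual sketch (not the ORC’s actual encoding scheme), each written voxel can be thought of as carrying a few bits in its orientation and intensity level in addition to occupying a position on a 3D write grid. The Python toy below packs bits this way; the eight orientation and four retardance levels are illustrative assumptions, not parameters from the research.

```python
from dataclasses import dataclass

# Toy illustration of "5D" encoding: each voxel stores bits in its slow-axis
# orientation and retardance (intensity) level, plus a 3D position on a write
# grid. The 8 orientation and 4 retardance levels are assumptions made for
# this example only.

@dataclass
class Voxel:
    x: int
    y: int
    z: int
    orientation: int  # 0-7 -> 3 bits (slow-axis angle bin)
    retardance: int   # 0-3 -> 2 bits (strength bin)

def encode(bits, grid_width=100):
    """Pack a bit string into voxels, 5 bits per voxel."""
    voxels = []
    for i in range(0, len(bits), 5):
        value = int(bits[i:i + 5].ljust(5, "0"), 2)
        n = len(voxels)
        voxels.append(Voxel(x=n % grid_width,
                            y=(n // grid_width) % grid_width,
                            z=n // (grid_width * grid_width),
                            orientation=value >> 2,
                            retardance=value & 0b11))
    return voxels

def decode(voxels):
    return "".join(f"{(v.orientation << 2) | v.retardance:05b}" for v in voxels)

data = "0100100001101001"                     # "Hi" as ASCII bits
assert decode(encode(data)).startswith(data)  # round-trip check
```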

So far, the researchers have saved major documents from human history as digital copies, including the Universal Declaration of Human Rights (UDHR), Newton’s Opticks, Magna Carta, and the King James Bible. A copy of the UDHR encoded in 5D data storage was recently presented to UNESCO by the ORC at the International Year of Light (IYL) closing ceremony in Mexico.

The team is now looking for industry partners to further develop and commercialize this technology.

The researchers will present their work at the photonics industry’s SPIE OPTO conference in San Francisco on Wednesday, Feb. 17.

* In 2008, the International Data Corporation [found] that the total capacity of data stored is increasing by around 60% each year. As a result, more than 39,000 exabytes of data will be generated by 2020. This amount of data will cause a series of problems, and one of the main ones will be power consumption. In 2010, 1.5% of total U.S. electricity consumption went to U.S. data centers. According to a report by the Natural Resources Defense Council, the power consumption of all data centers in the U.S. will reach roughly 140 billion kilowatt-hours per year by 2020. This amount of electricity is equivalent to that generated by roughly thirteen Heysham 2 nuclear power stations (one of the biggest stations in the UK, net 1240 MWe).

Most of these data centers are built on hard-disk drives (HDDs), with only a few based on optical discs. HDD is the most popular solution for digital data storage, according to the International Data Corporation. However, HDD is not an energy-efficient option for data archiving (the loading energy consumption is around 0.04 W/GB), and it is an unsatisfactory candidate for long-term storage because of the short lifetime of the hardware, which requires transferring data every two years to avoid any loss.

— Jingyu Zhang et al. Eternal 5D data storage by ultrafast laser writing in glass. Proceedings of the SPIE OPTO 2016
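
A quick back-of-the-envelope check of the Heysham 2 comparison in the footnote (the arithmetic below is an editorial check, not from the paper): a 1,240 MWe station running continuously for a year produces about 10.9 billion kWh, so 140 billion kWh per year corresponds to roughly 13 such stations.

```python
# Back-of-the-envelope check of the Heysham 2 comparison in the footnote above.
station_mwe = 1240                      # Heysham 2 net output (MWe)
hours_per_year = 365 * 24
kwh_per_station_year = station_mwe * 1000 * hours_per_year   # ~10.9 billion kWh/year
projected_data_center_kwh = 140e9       # projected U.S. data-center use by 2020
print(projected_data_center_kwh / kwh_per_station_year)      # ~12.9, i.e. roughly 13
```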


Abstract of Eternal 5D data storage by ultrafast laser writing in glass

Femtosecond laser writing in transparent materials has attracted considerable interest due to new science and a wide range of applications from laser surgery, 3D integrated optics and optofluidics to geometrical phase optics and ultra-stable optical data storage. A decade ago it was discovered that under certain irradiation conditions self-organized subwavelength structures with record small features of 20 nm could be created in the volume of silica glass. On the macroscopic scale the self-assembled nanostructure behaves as a uniaxial optical crystal with negative birefringence. The optical anisotropy, which results from the alignment of nano-platelets, referred to as form birefringence, is of the same order of magnitude as positive birefringence in crystalline quartz. The two independent parameters describing birefringence, the slow axis orientation (4th dimension) and the strength of retardance (5th dimension), are explored for the optical encoding of information in addition to three spatial coordinates. The slow axis orientation and the retardance are independently manipulated by the polarization and intensity of the femtosecond laser beam. The data optically encoded into five dimensions is successfully retrieved by quantitative birefringence measurements. The storage allows unprecedented parameters including hundreds of terabytes per disc data capacity and thermal stability up to 1000°C. Even at elevated temperatures of 160°C, the extrapolated decay time of nanogratings is comparable with the age of the Universe – 13.8 billion years. The demonstrated recording of digital documents, which will survive the human race, including eternal copies of the King James Bible and Magna Carta, is a vital step towards an eternal archive.

An 18-inch video display you can roll up like a newspaper

(credit: LG)

LG is creating a buzz at CES with its concept demo of the world’s first display that can be rolled up like a newspaper.

LG says they’re aiming for 4K-quality 55-inch screens (the prototype resolution is 1,200 by 810 pixels), BBC reports.

The trick: switching from LED to thinner, more flexible OLED (organic light-emitting diode) technology, allowing for a 2.57-millimeter-thin display. One limitation: the screen can’t be flattened.

What this design might be useful for in the future is not clear, but experts suggest the technology could soon be used on smartphones and in-car screens that curve around a vehicle’s interior, Daily Mail notes.

LG is also displaying a 55-inch double-sided display that’s as thin as a piece of paper and shows different video images on each side, and two 65-inch “extreme-curve” TVs that bend inwards and outwards.

CNET CES Videos

How to animate a digital model of a person from images collected from the Internet

UW researchers have reconstructed 3-D models of celebrities such as Tom Hanks from large Internet photo collections. The models can also be controlled and animated by photos or videos of another person. (credit: University of Washington)

University of Washington researchers have demonstrated that it’s possible for machine learning algorithms to capture the “persona” and create a digital model of a well-photographed person like Tom Hanks from the vast number of images of them available on the Internet. With enough visual data to mine, the algorithms can also animate the digital model of Tom Hanks to deliver speeches that the real actor never performed.

Tom Hanks has appeared in many acting roles over the years, playing young and old, smart and simple. Yet we always recognize him as Tom Hanks. Why? Is it his appearance? His mannerisms? The way he moves? “One answer to what makes Tom Hanks look like Tom Hanks can be demonstrated with a computer system that imitates what Tom Hanks will do,” said lead author Supasorn Suwajanakorn, a UW graduate student in computer science and engineering.

The technology relies on advances in 3-D face reconstruction, tracking, alignment, multi-texture modeling, and puppeteering that have been developed over the last five years by a research group led by UW assistant professor of computer science and engineering Ira Kemelmacher-Shlizerman. The new results will be presented in an open-access paper at the International Conference on Computer Vision in Chile on Dec. 16.


Supasorn Suwajanakorn | What Makes Tom Hanks Look Like Tom Hanks

The team’s latest advances include the ability to transfer expressions and the way a particular person speaks onto the face of someone else — for instance, mapping former president George W. Bush’s mannerisms onto the faces of other politicians and celebrities.

It’s one step toward a grand goal shared by the UW computer vision researchers: creating fully interactive, three-dimensional digital personas from family photo albums and videos, historic collections or other existing visuals.

As virtual and augmented reality technologies develop, they envision using family photographs and videos to create an interactive model of a relative living overseas or a far-away grandparent, rather than simply Skyping in two dimensions.

“You might one day be able to put on a pair of augmented reality glasses and there is a 3-D model of your mother on the couch,” said senior author Kemelmacher-Shlizerman. “Such technology doesn’t exist yet — the display technology is moving forward really fast — but how do you actually re-create your mother in three dimensions?”

One day the reconstruction technology could be taken a step further, researchers say.

“Imagine being able to have a conversation with anyone you can’t actually get to meet in person — LeBron James, Barack Obama, Charlie Chaplin — and interact with them,” said co-author Steve Seitz, UW professor of computer science and engineering. “We’re trying to get there through a series of research steps. One of the true tests is can you have them say things that they didn’t say but it still feels like them? This paper is demonstrating that ability.”


Supasorn Suwajanakorn | George Bush driving crowd

Existing technologies to create detailed three-dimensional holograms or digital movie characters like Benjamin Button often rely on bringing a person into an elaborate studio. They painstakingly capture every angle of the person and the way they move — something that can’t be done in a living room.

Other approaches still require a person to be scanned by a camera to create basic avatars for video games or other virtual environments. But the UW computer vision experts wanted to digitally reconstruct a person based solely on a random collection of existing images.

Learning in the wild

To reconstruct celebrities like Tom Hanks, Barack Obama and Daniel Craig, the machine learning algorithms mined a minimum of 200 Internet images taken over time in various scenarios and poses — a process known as learning “in the wild.”

“We asked, ‘Can you take Internet photos or your personal photo collection and animate a model without having that person interact with a camera?’” said Kemelmacher-Shlizerman. “Over the years we created algorithms that work with this kind of unconstrained data, which is a big deal.”

Suwajanakorn more recently developed techniques to capture expression-dependent textures — small differences that occur when a person smiles or looks puzzled or moves his or her mouth, for example.

By manipulating the lighting conditions across different photographs, he developed a new approach to densely map the differences from one person’s features and expressions onto another person’s face. That breakthrough enables the team to “control” the digital model with a video of another person, and could potentially enable a host of new animation and virtual reality applications.
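
As a highly simplified illustration of that idea (a toy sketch, not the UW team’s method, which uses dense expression-dependent texture models): once two faces are represented on the same mesh topology, the driver’s per-vertex expression offsets can be added to the target’s neutral face, so the expression transfers while the target’s identity geometry is preserved.

```python
import numpy as np

# Toy sketch of expression transfer between two faces that are already in
# dense vertex correspondence (same mesh topology), the kind of alignment a
# reconstruction pipeline is assumed to provide.

def transfer_expression(driver_neutral, driver_expr, target_neutral):
    """Apply the driver's per-vertex expression offsets to the target's neutral face."""
    offsets = driver_expr - driver_neutral   # how the driver's face deformed
    return target_neutral + offsets          # same deformation on the target identity

# Tiny example: a 3-vertex "face" in 3D.
driver_neutral = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
driver_smile = driver_neutral + np.array([[0.0, 0.1, 0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 0.0]])
target_neutral = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [0.0, 1.1, 0.0]])

print(transfer_expression(driver_neutral, driver_smile, target_neutral))
```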

“How do you map one person’s performance onto someone else’s face without losing their identity?” said Seitz. “That’s one of the more interesting aspects of this work. We’ve shown you can have George Bush’s expressions and mouth and movements, but it still looks like George Clooney.”

Perhaps this could be used to create VR experiences by integrating the animated images in 360-degree sets?


Abstract of What Makes Tom Hanks Look Like Tom Hanks

We reconstruct a controllable model of a person from a large photo collection that captures his or her persona, i.e., physical appearance and behavior. The ability to operate on unstructured photo collections enables modeling a huge number of people, including celebrities and other well photographed people without requiring them to be scanned. Moreover, we show the ability to drive or puppeteer the captured person B using any other video of a different person A. In this scenario, B acts out the role of person A, but retains his/her own personality and character. Our system is based on a novel combination of 3D face reconstruction, tracking, alignment, and multi-texture modeling, applied to the puppeteering problem. We demonstrate convincing results on a large variety of celebrities derived from Internet imagery and video.

AI will replace smartphones within 5 years, Ericsson survey suggests

(credit: Ericsson ConsumerLab)

Artificial intelligence (AI) interfaces will take over, replacing smartphones in five years, according to a survey of more than 5000 smartphone customers in nine countries by Ericsson ConsumerLab in the fifth edition of its annual trend report, 10 Hot Consumer Trends 2016 (and beyond).

Smartphone users believe AI will take over many common activities, such as searching the net, getting travel guidance, and acting as personal assistants. The survey found that 44 percent think an AI system would be as good as a teacher, and one third would like an AI interface to keep them company. A third would rather trust the fidelity of an AI interface than a human for sensitive matters, and 29 percent agree they would feel more comfortable discussing their medical condition with an AI system.

However, many of the users surveyed find smartphones limited.

Impractical. Constantly having a screen in the palm of your hand is not always a practical solution, such as when driving or cooking.

Battery capacity limits. One in three smartphone users wants a 7–8 inch screen, creating a battery drain vs. size and weight issue.

Not wearable. 85 percent of the smartphone users think intelligent wearable electronic assistants will be commonplace within 5 years, reducing the need to always touch a screen. And one in two users believes they will be able to talk directly to household appliances.

VR and 3D better. The smartphone users want movies that play virtually around the viewer, virtual tech support, and VR headsets for sports, and more than 50 percent of consumers think holographic screens will be mainstream within 5 years — capabilities not available in a small handheld device. Half of the smartphone users want a 3D avatar to try on clothes online, and 64 percent would like the ability to see an item’s actual size and form when shopping online. Half of the users want to bypass shopping altogether, with a 3D printer for printing household objects such as spoons, toys and spare parts for appliances; 44 percent even want to print their own food or nutritional supplements.

The 10 hot trends for 2016 and beyond cited in the report

  1. The Lifestyle Network Effect. Four out of five people now experience an effect where the benefits gained from online services increases as more people use them. Globally, one in three consumers already participates in various forms of the sharing economy.
  2. Streaming Natives. Teenagers watch more YouTube video content daily than other age groups. Forty-six percent of 16-19 year-olds spend an hour or more on YouTube every day.
  3. AI Ends The Screen Age. Artificial intelligence will enable interaction with objects without the need for a smartphone screen. One in two smartphone users think smartphones will be a thing of the past within the next five years.
  4. Virtual Gets Real. Consumers want virtual technology for everyday activities such as watching sports and making video calls. Forty-four percent even want to print their own food.
  5. Sensing Homes. Fifty-five percent of smartphone owners believe bricks used to build homes could include sensors that monitor mold, leakage and electricity issues within the next five years. As a result, the concept of smart homes may need to be rethought from the ground up.
  6. Smart Commuters. Commuters want to use their time meaningfully and not feel like passive objects in transit. Eighty-six percent would use personalized commuting services if they were available.
  7. Emergency Chat. Social networks may become the preferred way to contact emergency services. Six out of 10 consumers are also interested in a disaster information app.
  8. Internables. Internal sensors that measure well-being in our bodies may become the new wearables. Eight out of 10 consumers would like to use technology to enhance sensory perceptions and cognitive abilities such as vision, memory and hearing.
  9. Everything Gets Hacked. Most smartphone users believe hacking and viruses will continue to be an issue. As a positive side-effect, one in five say they have greater trust in an organization that was hacked but then solved the problem.
  10. Netizen Journalists. Consumers share more information than ever and believe it increases their influence on society. More than a third believe blowing the whistle on a corrupt company online has greater impact than going to the police.

Source: 10 Hot Consumer Trends 2016. Ericsson ConsumerLab, Information Sharing, 2015. Base: 5,025 iOS/Android smartphone users aged 15-69 in Berlin, Chicago, Johannesburg, London, Mexico City, Moscow, New York, São Paulo, Sydney and Tokyo

Minority Report, Limitless TV shows launch Monday, Tuesday

A sequel to Steven Spielberg’s epic movie, MINORITY REPORT is set in Washington, D.C., 10 years after the demise of Precrime, a law enforcement agency tasked with identifying and eliminating criminals … before their crimes were committed. Now, in 2065, crime-solving is different, and justice leans more on sophisticated and trusted technology than on the instincts of the precogs. Series premiere Sept. 21; Mondays 9/8c.

LIMITLESS, based on the feature film, is a fast-paced drama about Brian Finch, who discovers the brain-boosting power of the mysterious drug NZT and is coerced by the FBI into using his extraordinary cognitive abilities to solve complex cases for them. Sept. 22 series premiere Tuesdays 10/9c

AI authors crowdsourced interactive fiction


GVU Center at Georgia Tech | A new Georgia Tech artificial intelligence system develops interactive stories through crowdsourced data for more robust fiction. Here (in a simplified example), the AI replicates a typical first date to the movies (user choices are in red), complete with loud theater talkers and the arm-over-shoulder movie move.

Georgia Institute of Technology researchers have developed a new artificially intelligent system that crowdsources plots for interactive stories, which are popular in video games and let players choose different branching story options.

“Our open interactive narrative system learns genre models from crowdsourced example stories so that the player can perform different actions and still receive a coherent story experience,” says Mark Riedl, lead investigator and associate professor of interactive computing at Georgia Tech.

With potentially limitless crowdsourced plot points, the system could allow for more creative stories and an easier method for interactive narrative generation. For example, imagine a Star Wars game using online fan fiction, generating paths for a player to take.
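
A toy sketch of the general idea, learning which story events tend to follow which from crowdsourced example stories and then offering those branches to the player, is shown below. It is illustrative only; the actual Scheherazade-IF system learns much richer models from its crowdsourced stories.

```python
from collections import defaultdict

# Toy sketch: learn event transitions from crowdsourced example stories,
# then offer the learned branches to the player. Illustrative only; this is
# not the Scheherazade-IF algorithm.

examples = [
    ["enter_bank", "approach_teller", "hand_note", "collect_money", "flee"],
    ["enter_bank", "pull_gun", "hand_note", "collect_money", "flee"],
    ["enter_bank", "approach_teller", "pull_gun", "collect_money", "flee"],
]

successors = defaultdict(lambda: defaultdict(int))
for story in examples:
    for current, following in zip(story, story[1:]):
        successors[current][following] += 1   # count observed transitions

def next_options(event):
    """Events that followed `event` in the examples, most common first."""
    counts = successors[event]
    return sorted(counts, key=counts.get, reverse=True)

print(next_options("enter_bank"))  # the player chooses among learned branches
```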

Current AI models for games have a limited number of scenarios, no matter what a player chooses. They depend on a dataset already programmed into a model by experts.

Near human-level authoring

The Scheherazade-IF Architecture (credit: Matthew Guzdial et al.)

A test* of the AI system, called Scheherazade IF (Interactive Fiction) — a reference to the fabled Arabic queen and storyteller — showed that it can achieve near human-level authoring, the researchers claim.

“When enough data is available and that data sufficiently covers all aspects of the game experience, the system was able to meet or come close to meeting human performance in creating a playable story,” says Riedl.

The creators say they are seeking to inject more creative scenarios into the system. Right now, the AI plays it safe with the crowdsourced content, producing what one might expect in different genres. But opportunities exist to train Scheherazade (as its namesake implies) to surprise and immerse players in future interactive experiences.

This research could support not only online storytelling for entertainment, but also digital storytelling used in online course education or corporate training.

* The researchers evaluated the AI system by measuring the number of “commonsense” errors (e.g. scenes out of sequence) found by players, as well as players’ subjective experiences for things such as enjoyment and coherence of story.

Three test groups played through two interactive stories — a bank robbery and a date to the movies — to measure performance of three narrative generators: the AI story generator, a human-programmed generator, or a random story generator.

For the bank robbery story, the AI system performed identically to the human-programmed generator in terms of errors reported by players, with a median of three each. The random generator produced a median of 12.5 errors reported.

For the movie date scenario, the median values of errors reported were three (human), five (AI) and 15 (random). This shows the AI system performing at 83.3 percent of the human-programmed generator.

As for the play experience itself, the human and AI generators compared favorably for coherence, player involvement, enjoyment and story recognition.


Abstract of Crowdsourcing Open Interactive Narrative

Interactive narrative is a form of digital interactive experience in which users influence a dramatic storyline through their actions. Artificial intelligence approaches to interactive narrative use a domain model to determine how the narrative should unfold based on user actions. However, domain models for interactive narrative require artificial intelligence and knowledge representation expertise. We present open interactive narrative, the problem of generating an interactive narrative experience about any possible topic. We present an open interactive narrative system — Scheherazade-IF — that learns a domain model from crowdsourced example stories so that the player can perform different actions and still receive a coherent story experience. We report on an evaluation of our system showing near-human level authoring.

Self/Less movie features uploading … to an existing human body

In Self/Less, a science-fiction thriller to be released in the U.S. today, July 10, 2015, Damian Hale, an extremely wealthy aristocrat (Ben Kingsley) dying from cancer, undergoes a $250 million radical medical procedure at a lab called Phoenix Biogenic in Manhattan to have his consciousness transferred into the body of a healthy young man (Ryan Reynolds).

(credit: Hilary Bronwyn Gayle / Gramercy Pictures)

But when he starts to uncover the mystery of the body’s origin — he has flashbacks in a dream of a former life as Mark — he discovers the body was not grown in a laboratory, as promised, and that the “organization” he bought the body from will kill to protect its investment. To make matters worse, he faces the threat of losing control of the body he now possesses and its original owner’s consciousness resurfacing, which will erase his mind in the process.

Curiously, at one point, Mark looks up the scientist who did the transfer on Wikipedia, and finds that he was the “godfather of transhumanism.” “What many summer movie-goers might not realize is that Self/less is loosely based on a real-life project called the 2045 Initiative, which is being spearheaded by Dmitry Itskov, a Russian multi-millionaire,” Ars Technica suggests. But the theme has also been explored in a number of movies, ranging from Metropolis to The Sixth Day, Avatar, and The Age of Ultron.

(credit: 2045 Strategic Social Initiative)

Crowdsourcing neurofeedback data

In front of an audience, the collective neurofeedback of 20 participants was projected on the 360° surface of the semi-transparent dome as artistic video animations, with soundscapes generated from a pre-recorded sound library and improvisations from live musicians (credit: Natasha Kovacevic et al./PLoS ONE/Photo: David Pisarek)

In a large-scale art-science installation called My Virtual Dream in Toronto in 2013, more than 500 adults wearing a Muse wireless electroencephalography (EEG) headband inside a 60-foot geodesic dome participated in an unusual neuroscience experiment.

As they played a collective neurofeedback computer game where they were required to manipulate their mental states of relaxation and concentration, the group’s collective EEG signals triggered a catalog of related artistic imagery displayed on the dome’s 360-degree interior, along with spontaneous musical interpretation by live musicians on stage.
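
For context, relaxation and concentration are commonly tracked in EEG neurofeedback as relative alpha (8–12 Hz) and beta (13–30 Hz) band power, the measure also named in the paper’s abstract below. The sketch below computes those quantities for a synthetic signal; the sampling rate and band edges are assumptions for illustration, not details of the Muse headband’s processing.

```python
import numpy as np
from scipy.signal import welch

def relative_band_power(eeg, fs, band):
    """Fraction of total spectral power that falls inside `band` (Hz)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return psd[in_band].sum() / psd.sum()

fs = 256                                    # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)                 # a 4-second EEG window
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)  # synthetic 10 Hz rhythm
alpha = relative_band_power(eeg, fs, (8, 12))    # relaxation proxy
beta = relative_band_power(eeg, fs, (13, 30))    # concentration proxy
print(f"alpha: {alpha:.2f}  beta: {beta:.2f}")
```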

“What we’ve done is taken the lab to the public. We collaborated with multimedia artists, made this experiment incredibly engaging, attracted highly motivated subjects, which is not easy to do in the traditional lab setting, and collected useful scientific data from their experience.”

Collective neurofeedback: a new kind of neuroscience research

Participant instructions (credit: Natasha Kovacevic et al./PLoS ONE)

Results from the experiment demonstrated the scientific viability of collective neurofeedback as a potential new avenue of neuroscience research that takes into account the individuality, complexity and sociability of the human mind. They also yielded new evidence that neurofeedback learning can have an effect on the brain almost immediately, the researchers say.

Studying brains in a social and multi-sensory environment is closer to real life and may help scientists to approach questions of complex real-life social cognition that otherwise are not accessible in traditional labs that study one person’s cognitive functions at a time.

“In traditional lab settings, the environment is so controlled that you can lose some of the fine points of real-time brain activity that occur in a social life setting,” said Natasha Kovacevic, creative producer of My Virtual Dream and program manager of the Centre for Integrative Brain Dynamics at Baycrest’s Rotman Research Institute.

The massive amount of EEG data collected in one night yielded an interesting and statistically relevant finding: subtle brain activity changes were taking place within approximately one minute of starting the neurofeedback learning exercise — a speed of learning that had not been demonstrated before.

Building the world’s first virtual brain

“These results really open up a whole new domain of neuroscience study that actively engages the public to advance our understanding of the brain,” said Randy McIntosh, director of the Rotman Research Institute and vice-president of Research at Baycrest. He is a senior author on the paper.

The idea for the Nuit Blanche art-science experiment was inspired by Baycrest’s ongoing international project to build the world’s first functional, virtual brain — a research and diagnostic tool that could one day revolutionize brain healthcare.

Baycrest cognitive neuroscientists collaborated with artists and gaming and wearable technology industry partners for over a year to create the My Virtual Dream installation. Partners included the University of Toronto, Scotiabank Nuit Blanche, Muse, and Uken Games.

Plans are underway to take My Virtual Dream to other cities around the world.


Abstract of ‘My Virtual Dream’: Collective Neurofeedback in an Immersive Art Environment

While human brains are specialized for complex and variable real world tasks, most neuroscience studies reduce environmental complexity, which limits the range of behaviours that can be explored. Motivated to overcome this limitation, we conducted a large-scale experiment with electroencephalography (EEG) based brain-computer interface (BCI) technology as part of an immersive multi-media science-art installation. Data from 523 participants were collected in a single night. The exploratory experiment was designed as a collective computer game where players manipulated mental states of relaxation and concentration with neurofeedback targeting modulation of relative spectral power in alpha and beta frequency ranges. Besides validating robust time-of-night effects, gender differences and distinct spectral power patterns for the two mental states, our results also show differences in neurofeedback learning outcome. The unusually large sample size allowed us to detect unprecedented speed of learning changes in the power spectrum (~ 1 min). Moreover, we found that participants’ baseline brain activity predicted subsequent neurofeedback beta training, indicating state-dependent learning. Besides revealing these training effects, which are relevant for BCI applications, our results validate a novel platform engaging art and science and fostering the understanding of brains under natural conditions.

Improving the experience of the audience with digital instruments

Virtual content being displayed on stage and overlapping the instruments and the performers (credit: Florent Berthaut)

University of Bristol researchers have developed a new augmented-reality display that allows audiences to better appreciate digital musical performances.

The research team from the University’s Bristol Interaction and Graphics (BIG) group has been investigating how to improve the audience’s experience during performances with digital musical instruments, which are played by manipulating buttons, mics, and various other controls.

Funded by a Marie Curie grant, the IXMI project, led by Florent Berthaut, aims to show the mechanisms of digital instruments, using 3D virtual content and mixed-reality displays.

Their first creation, Reflets, is a mixed-reality environment that allows virtual content to be displayed anywhere on stage, even overlapping the instruments or the performers. It does not require the audience to wear glasses or to use their smartphones to see the augmentations, which remain consistent from all positions in the audience.

Reflets relies on combining the audience and stage spaces using reflective transparent surfaces and having the audience and performers reveal the virtual content by intersecting it with their bodies or physical props.

The research is being presented at the 15th International Conference on New Interfaces for Musical Expression (NIME) in the U.S. (May 31 to June 3).


BristolIG | Ixmi: Improving the experience of the audience with digital instruments

Robot servants push the boundaries in HUMANS

(credit: AMC)

AMC today announced HUMANS, an eight-part science-fiction TV thriller that takes place in a parallel present featuring sophisticated, lifelike robot servants and caregivers called Synths (personal synthetics).

The show explores conflicts as the lines between humans and machines become increasingly blurred.

The upgrade you’ve been waiting for is here (credit: AMC)

The series is set to premiere on AMC June 28 with HUMANS 101: The Hawkins family buys a Synth, Anita. But are they in danger from this machine and the young man Leo who seems desperate to find her?

The show features Oscar-winning actor William Hurt, Katherine Parkinson (The IT Crowd), Colin Morgan (Merlin), and Gemma Chan (Secret Diary of a Call Girl).


AMC

More HUMANS trailers