How to control information leaks from smartphone apps

A Northeastern University research team has found “extensive” leakage of users’ information — device and user identifiers, locations, and passwords — into network traffic from apps on mobile devices, including iOS, Android, and Windows phones. The researchers have also devised a way to stop the flow.

David Choffnes, an assistant professor in the College of Computer and Information Science, and his colleagues developed a simple, efficient cloud-based system called ReCon. It detects leaks of “personally identifiable information,” alerts users to those breaches, and enables users to control the leaks by specifying what information they want blocked and from whom.

The team’s study followed 31 users of iOS and Android devices who ran ReCon for anywhere from one week to 101 days and monitored their personal leaks through a secure ReCon webpage.

The results were alarming. “Depressingly, even in our small user study we found 165 cases of credentials being leaked in plaintext,” the researchers wrote.

Of the top 100 apps in each operating system’s app store that participants were using, more than 50 percent leaked device identifiers, more than 14 percent leaked actual names or other user identifiers, 14–26 percent leaked locations, and three leaked passwords in plaintext. In addition to those top apps, the study found similar password leaks from 10 additional apps that participants had installed and used.

The password-leaking apps included MapMyRun, the language app Duolingo, and the Indian digital music app Gaana. All three developers have since fixed the leaks. Several other apps continue to send plaintext passwords into traffic, including a popular dating app.

“What’s really troubling is that we even see significant numbers of apps sending your password, in plaintext readable form, when you log in,” says Choffnes. In a public Wi-Fi setting, that means anyone running “some pretty simple software” could nab it.

Screen capture of the ReCon user interface. Users can view how their personally identifiable information is leaked, validate the suspected leaks, and create custom filters to block or modify leaks. (credit: Jingjing Ren et al./arXiv)

Apps that track

Access settings for an iPhone app (credit: KurzweilAI)

Apps, like many other digital products, contain software that tracks our comings, goings, and details of who we are. If you look in the privacy settings on your iPhone, you’ll see this statement:

“As applications request access to your data, they will be added in the categories above.”

Those categories include “Location Services,” “Contacts,” “Calendars,” “Reminders,” “Photos,” “Bluetooth Sharing,” and “Camera.”

Although many users don’t realize it, they have control over that access. “When you install an app on a mobile device, it will ask you for certain permissions that you have to approve or deny before you start using the app,” explains Choffnes. “Because I’m a bit of a privacy nut, I’m even selective about which apps I let know my location.” For a navigation app, he says, fine. For others, it’s not so clear.

One reason that apps track you, of course, is so developers can recover their costs. Many apps are free but come bundled with tracking software, supplied by advertising and analytics networks, that generates revenue when users click on the targeted ads that pop up on their phones.

ReCon

Using ReCon is easy, Choffnes says. Participants install a virtual private network, or VPN, on their devices — an easy six- or seven-step process. The VPN then securely transmits users’ data to the system’s server, which runs the ReCon software, identifying when and what information is being leaked.

To learn the status of their information, participants simply log onto the ReCon secure webpage. There they can find things like a Google map pinpointing which of their apps are zapping their location to other destinations and which apps are releasing their passwords into unencrypted network traffic. They can also tell the system what they want to do about it.

“One of the advantages to our approach is you don’t have to tell us your information, for example, your password, email, or gender,” says Choffnes. “Our system is designed to use cues in the network traffic to figure out what kind of information is being leaked. The software then automatically extracts what it suspects is your personal information. We show those findings to users, and they tell us if we are right or wrong. That permits us to continually adapt our system, improving its accuracy.”
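As a rough illustration of that idea (not ReCon’s actual detector, which relies on machine-learning classifiers trained on labeled network flows), even a simple keyword scan over the key-value pairs of an intercepted HTTP request catches the most blatant leaks; the field names below are hypothetical:

```python
from urllib.parse import parse_qsl

# Keys that commonly carry personally identifiable information (PII).
# Illustrative list only; ReCon learns such cues rather than hard-coding them.
SUSPECT_KEYS = {"password", "passwd", "pwd", "email", "username",
                "lat", "lon", "imei", "deviceid", "androidid"}

def find_suspected_leaks(request_body: str) -> dict:
    """Return the key-value pairs whose keys suggest PII."""
    pairs = dict(parse_qsl(request_body))
    return {k: v for k, v in pairs.items()
            if k.lower().replace("_", "") in SUSPECT_KEYS}

# Example: a login request sent unencrypted over HTTP.
print(find_suspected_leaks("username=alice&password=hunter2&lat=42.34&lon=-71.09"))
# {'username': 'alice', 'password': 'hunter2', 'lat': '42.34', 'lon': '-71.09'}
```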

The team’s evaluative study showed that ReCon identifies leaks with 98 percent accuracy.

“There are other tools that will show you how you’re being tracked but they won’t necessarily let you do anything,” says Choffnes. “And they are mostly focused on tracking behavior and not the actual personal information that’s being sent out. ReCon covers a wide range of information being sent out over the network about you, and automatically detects when your information is leaked without having to know in advance what that information is. You can [also] set policies to change how your information is being released.”
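The blocking and substitution step described in the paper’s abstract can be pictured the same way; the helper below is a hedged sketch with hypothetical names, not ReCon’s real interface:

```python
# Rewrite or block an intercepted request that contains a value the user
# has flagged as personal information (hypothetical helper, not ReCon code).
def apply_policy(body: str, flagged_value: str, action: str = "substitute") -> str:
    if flagged_value not in body:
        return body                      # nothing to change
    if action == "block":
        raise ValueError("request blocked: flagged PII present")
    return body.replace(flagged_value, "REDACTED")

print(apply_policy("user=alice&password=hunter2", "hunter2"))
# user=alice&password=REDACTED
```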

A demo of ReCon is available here.

Choffnes presented the findings in an open-access paper Monday (Nov. 16) at the Data Transparency Lab 2015 Conference, held at the MIT Media Lab.


Abstract of ReCon: Revealing and Controlling Privacy Leaks in Mobile Network Traffic

It is well known that apps running on mobile devices extensively track and leak users’ personally identifiable information (PII); however, these users have little visibility into PII leaked through the network traffic generated by their devices, and have poor control over how, when and where that traffic is sent and handled by third parties. In this paper, we present the design, implementation, and evaluation of ReCon: a cross-platform system that reveals PII leaks and gives users control over them without requiring any special privileges or custom OSes. ReCon leverages machine learning to reveal potential PII leaks by inspecting network traffic, and provides a visualization tool to empower users with the ability to control these leaks via blocking or substitution of PII. We evaluate ReCon’s effectiveness with measurements from controlled experiments using leaks from the 100 most popular iOS, Android, and Windows Phone apps, and via an IRB-approved user study with 31 participants. We show that ReCon is accurate, efficient, and identifies a wider range of PII than previous approaches.

Google open-sources its TensorFlow machine learning system

Google announced today that it will make its new second-generation “TensorFlow” machine-learning system open source.

That means programmers can now achieve some of what Google engineers have done, using TensorFlow — from speech recognition in the Google app, to Smart Reply in Inbox, to search in Google Photos, to reading a sign in a foreign language using Google Translate.

Google says TensorFlow is a highly scalable machine learning system — it can run on a single smartphone or across thousands of computers in datacenters. The idea is to accelerate research on machine learning, “or wherever researchers are trying to make sense of very complex data — everything from protein folding to crunching astronomy data.”

This blog post by Jeff Dean, Senior Google Fellow, and Rajat Monga, Technical Lead, provides a technical overview. “Our deep learning researchers all use TensorFlow in their experiments. Our engineers use it to infuse Google Search with signals derived from deep neural networks, and to power the magic features of tomorrow,” they note.
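For a concrete taste of what the open-sourced library expresses, here is a minimal sketch in Python (written against the current TensorFlow 2 API, which has evolved since the 2015 release described here, so treat the exact calls as present-day rather than original):

```python
import tensorflow as tf

# A tiny dataflow computation: the same code can run on a phone's CPU,
# a GPU, or many machines in a datacenter.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
w = tf.Variable(tf.random.normal([2, 1]))

with tf.GradientTape() as tape:
    y = tf.matmul(x, w)                    # forward pass
    loss = tf.reduce_mean((y - 1.0) ** 2)  # toy squared-error loss

grad = tape.gradient(loss, w)
w.assign_sub(0.1 * grad)                   # one gradient-descent step
print("loss:", float(loss))
```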

Semantic Scholar uses AI to transform scientific search

Example of the top return in a Semantic Scholar search for “quantum computer silicon” constrained to overviews (52 out of 1,397 selected papers since 1989) (credit: AI2)

The Allen Institute for Artificial Intelligence (AI2) launched Monday (Nov. 2) its free Semantic Scholar service, intended to allow scientific researchers to quickly cull through the millions of scientific papers published each year to find those most relevant to their work.

Semantic Scholar leverages AI2’s expertise in data mining, natural-language processing, and computer vision, according to Oren Etzioni, PhD, CEO at AI2. At launch, the system searches more than three million computer science papers, and it will add other scientific categories on an ongoing basis.

With Semantic Scholar, computer scientists can:

  • Home in quickly on what they are looking for, with advanced selection filtering tools. Researchers can filter search results by author, publication, topic, and date published. This gets the researcher to the most relevant result in the fastest way possible, and reduces information overload.
  • Instantly access a paper’s figures and findings. Unique among scholarly search engines, this feature pulls out the graphic results, which are often what a researcher is really looking for.
  • Jump to cited papers and references and see how many researchers have cited each paper, a good way to determine citation influence and usefulness.
  • Be prompted with key phrases within each paper to winnow the search further.

Example of figures and tables extracted from the first document discovered (“Quantum computation and quantum information”) in the search above (credit: AI2)

How Semantic Scholar works

Using machine reading and vision methods, Semantic Scholar crawls the web, finding all PDFs of publicly available scientific papers on computer science topics, extracting both text and diagrams/captions, and indexing it all for future contextual retrieval.

Using natural language processing, the system identifies the top papers, extracts filtering information and topics, and sorts by what type of paper and how influential its citations are. It provides the scientist with a simple user interface (optimized for mobile) that maps to academic researchers’ expectations.

Filters such as topic, date of publication, author and where published are built in. It includes smart, contextual recommendations for further keyword filtering as well. Together, these search and discovery tools provide researchers with a quick way to separate wheat from chaff, and to find relevant papers in areas and topics that previously might not have occurred to them.
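A minimal sketch of that kind of filtering and ranking over an in-memory result list (toy data and field names for illustration, not Semantic Scholar’s internal API):

```python
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    authors: list
    year: int
    topics: list
    citations: int  # citation count as a rough proxy for influence

def filter_results(papers, author=None, since=None, topic=None):
    hits = [p for p in papers
            if (author is None or author in p.authors)
            and (since is None or p.year >= since)
            and (topic is None or topic in p.topics)]
    return sorted(hits, key=lambda p: p.citations, reverse=True)

papers = [
    Paper("Overview paper A", ["Smith"], 1999, ["quantum computing"], 5000),
    Paper("Recent result B", ["Jones"], 2014, ["quantum computing", "silicon"], 120),
]
print([p.title for p in filter_results(papers, topic="silicon", since=2010)])
# ['Recent result B']
```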

Semantic Scholar builds from the foundation of other research-paper search applications such as Google Scholar, adding AI methods to overcome information overload.

“Semantic Scholar is a first step toward AI-based discovery engines that will be able to connect the dots between disparate studies to identify novel hypotheses and suggest experiments that would otherwise be missed,” said Etzioni. “Our goal is to enable researchers to find answers to some of science’s thorniest problems.”

Affordable camera reveals hidden details invisible to the naked eye

HyperFrames taken with HyperCam predicted the relative ripeness of 10 different fruits with 94 percent accuracy, compared with only 62 percent for a typical RGB (visible light) camera (credit: University of Washington)

HyperCam, an affordable “hyperspectral” (sees beyond the visible range) camera technology being developed by the University of Washington and Microsoft Research, may enable consumers of the future to use a cell phone to tell which piece of fruit is perfectly ripe or if a work of art is genuine.

The technology uses both visible and invisible near-infrared light to “see” beneath surfaces and capture unseen details. This type of camera, typically used in industrial applications, can cost from several thousand to tens of thousands of dollars.

In a paper presented at the UbiComp 2015 conference, the team detailed a hardware solution that costs roughly $800, or potentially as little as $50 to add to a mobile phone camera. It illuminates a scene with 17 different wavelengths and generates an image for each. They also developed intelligent software that easily finds “hidden” differences between what the hyperspectral camera captures and what can be seen with the naked eye.

In one test, the team took hyperspectral images of 10 different fruits, from strawberries to mangoes to avocados, over the course of a week. The HyperCam images predicted the relative ripeness of the fruits with 94 percent accuracy, compared with only 62 percent for a typical camera.

The HyperCam system was also able to differentiate between hand images of users with 99 percent accuracy. That can aid in everything from gesture recognition to biometrics to distinguishing between two different people playing the same video game.

“It’s not there yet, but the way this hardware was built you can probably imagine putting it in a mobile phone,” said Shwetak Patel, Washington Research Foundation Endowed Professor of Computer Science & Engineering and Electrical Engineering at the UW.

Compared to an image taken with a normal camera (left), HyperCam images (right) reveal detailed vein and skin texture patterns that are unique to each individual (credit: University of Washington)

How it works

Hyperspectral imaging is used today in everything from satellite imaging and energy monitoring to infrastructure and food safety inspections, but the technology’s high cost has limited its use to industrial or commercial purposes. Near-infrared cameras, for instance, can reveal whether crops are healthy. Thermal infrared cameras can visualize where heat is escaping from leaky windows or an overloaded electrical circuit.

HyperCam is a low-cost hyperspectral camera developed by UW and Microsoft Research that reveals details that are difficult or impossible to see with the naked eye (credit: University of Washington)

One challenge in hyperspectral imaging is sorting through the sheer volume of frames produced. The UW software analyzes the images and finds ones that are most different from what the naked eye sees, essentially zeroing in on ones that the user is likely to find most revealing.
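One way to picture that selection step, sketched with NumPy (the band indices and the difference metric are assumptions for illustration, not the UW team’s algorithm):

```python
import numpy as np

# Hypothetical hyperspectral cube: height x width x 17 wavelength bands
# (17 wavelengths per the article; the data here is random).
cube = np.random.rand(480, 640, 17)

# Crude stand-in for what a normal camera sees: average the bands assumed
# to fall in the visible range.
visible = cube[:, :, 2:8].mean(axis=2)

# Score each band by how much it deviates from the visible estimate and
# surface the most "different" (and likely most revealing) bands.
scores = [np.abs(cube[:, :, b] - visible).mean() for b in range(cube.shape[2])]
top_bands = np.argsort(scores)[::-1][:3]
print("Bands most unlike the visible image:", top_bands)
```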

“It mines all the different possible images and compares it to what a normal camera or the human eye will see and tries to figure out what scenes look most different,” Goel said.

“Next research steps will include making it work better in bright light and making the camera small enough to be incorporated into mobile phones and other devices,” he said.


Mayank Goel | HyperCam: HyperSpectral Imaging for Ubiquitous Computing Applications


Abstract of HyperCam: hyperspectral imaging for ubiquitous computing applications

Emerging uses of imaging technology for consumers cover a wide range of application areas from health to interaction techniques; however, typical cameras primarily transduce light from the visible spectrum into only three overlapping components of the spectrum: red, blue, and green. In contrast, hyperspectral imaging breaks down the electromagnetic spectrum into more narrow components and expands coverage beyond the visible spectrum. While hyperspectral imaging has proven useful as an industrial technology, its use as a sensing approach has been fragmented and largely neglected by the UbiComp community. We explore an approach to make hyperspectral imaging easier and bring it closer to the end-users. HyperCam provides a low-cost implementation of a multispectral camera and a software approach that automatically analyzes the scene and provides a user with an optimal set of images that try to capture the salient information of the scene. We present a number of use-cases that demonstrate HyperCam’s usefulness and effectiveness.


Gartner identifies the top 10 strategic IT technology trends for 2016

Top 10 strategic trends 2016 (credit: Gartner, Inc.)

At the Gartner Symposium/ITxpo today (Oct. 8), Gartner, Inc. highlighted the top 10 technology trends that will be strategic for most organizations in 2016 and will shape digital business opportunities through 2020.

The Device Mesh

The device mesh refers to how people access applications and information or interact with people, social communities, governments and businesses. It includes mobile devices, wearable, consumer and home electronic devices, automotive devices, and environmental devices, such as sensors in the Internet of Things (IoT), allowing for greater cooperative interaction between devices.

Ambient User Experience

The device mesh creates the foundation for a new continuous and ambient user experience. Immersive environments delivering augmented and virtual reality hold significant potential but are only one aspect of the experience. The ambient user experience preserves continuity across boundaries of device mesh, time and space. The experience seamlessly flows across a shifting set of devices — such as sensors, cars, and even factories — and interaction channels blending physical, virtual and electronic environment as the user moves from one place to another.

3D Printing Materials

Advances in 3D printing will drive user demand and a compound annual growth rate of 64.1 percent for enterprise 3D-printer shipments through 2019, which will require a rethinking of assembly line and supply chain processes to exploit 3D printing.

Information of Everything

Everything in the digital mesh produces, uses and transmits information, including sensory and contextual information. “Information of everything” addresses this influx with strategies and technologies to link data from all these different data sources. Advances in semantic tools such as graph databases as well as other emerging data classification and information analysis techniques will bring meaning to the often chaotic deluge of information.

Advanced Machine Learning

In advanced machine learning, deep neural nets (DNNs) move beyond classic computing and information management to create systems that can autonomously learn to perceive the world, making it possible to address key challenges related to the information of everything trend.

DNNs (an advanced form of machine learning particularly applicable to large, complex datasets) are what make smart machines appear “intelligent.” DNNs enable hardware- or software-based machines to learn for themselves all the features in their environment, from the finest details to broad, sweeping, abstract classes of content. This area is evolving quickly, and organizations must assess how they can apply these technologies to gain competitive advantage.

Autonomous Agents and Things

Machine learning gives rise to a spectrum of smart machine implementations — including robots, autonomous vehicles, virtual personal assistants (VPAs) and smart advisors — that act in an autonomous (or at least semiautonomous) manner.

VPAs such as Google Now, Microsoft’s Cortana, and Apple’s Siri are becoming smarter and are precursors to autonomous agents. The emerging notion of assistance feeds into the ambient user experience in which an autonomous agent becomes the main user interface. Instead of interacting with menus, forms and buttons on a smartphone, the user speaks to an app, which is really an intelligent agent.

Adaptive Security Architecture

The complexities of digital business and the algorithmic economy combined with an emerging “hacker industry” significantly increase the threat surface for an organization. Relying on perimeter defense and rule-based security is inadequate, especially as organizations exploit more cloud-based services and open APIs for customers and partners to integrate with their systems. IT leaders must focus on detecting and responding to threats, as well as more traditional blocking and other measures to prevent attacks. Application self-protection, as well as user and entity behavior analytics, will help fulfill the adaptive security architecture.

Advanced System Architecture

The digital mesh and smart machines place intense demands on computing architecture to make them viable for organizations. Providing the required boost are high-powered, ultraefficient neuromorphic (brain-like) architectures fueled by GPUs (graphics processing units) and field-programmable gate arrays (FPGAs). This architecture offers significant gains, such as the ability to run at speeds greater than a teraflop with high energy efficiency.

Mesh App and Service Architecture

Monolithic, linear application designs (e.g., the three-tier architecture) are giving way to a more loosely coupled integrative approach: the apps and services architecture. Enabled by software-defined application services, this new approach enables Web-scale performance, flexibility and agility. Microservice architecture is an emerging pattern for building distributed applications that support agile delivery and scalable deployment, both on-premises and in the cloud. Containers are emerging as a critical technology for enabling agile development and microservice architectures. Bringing mobile and IoT elements into the app and service architecture creates a comprehensive model to address back-end cloud scalability and front-end device mesh experiences. Application teams must create new modern architectures to deliver agile, flexible and dynamic cloud-based applications that span the digital mesh.

Internet of Things Platforms

IoT platforms complement the mesh app and service architecture. The management, security, integration and other technologies and standards of the IoT platform are the base set of capabilities for building, managing, and securing elements in the IoT. The IoT is an integral part of the digital mesh and ambient user experience and the emerging and dynamic world of IoT platforms is what makes them possible.

* Gartner defines a strategic technology trend as one with the potential for significant impact on the organization. Factors that denote significant impact include a high potential for disruption to the business, end users or IT, the need for a major investment, or the risk of being late to adopt. These technologies impact the organization’s long-term plans, programs and initiatives.

First brain-to-brain ‘telepathy’ communication via the Internet

University of Washington graduate student Jose Ceballos wears an electroencephalography (EEG) cap that records brain activity and sends a response to a second participant over the Internet (credit: University of Washington)

The first brain-to-brain telepathy-like communication between two participants via the Internet has been performed by University of Washington researchers.*

The experiment used a question-and-answer game. The goal is for the “inquirer” to determine which object the “respondent” is looking at from a list of possible objects. The inquirer sends a question (e.g., “Does it fly?”) to the respondent, who answers “yes” or “no” by mentally focusing on one of two flashing LED lights attached to the monitor. The respondent is wearing an electroencephalography (EEG) helmet.

When the respondent focuses on the “yes” light, the EEG device generates a signal that is sent to the inquirer via the Internet, activating a magnetic coil positioned behind the inquirer’s head; the coil stimulates the visual cortex and causes the inquirer to see a flash of light (known as a “phosphene”). A “no” signal works the same way, but is not strong enough to activate the coil.
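That yes/no channel can be sketched end to end in a few lines; the flicker frequencies, sampling rate, and phosphene threshold below are illustrative assumptions, not the values used in the UW experiment:

```python
import numpy as np

FS = 250.0                  # assumed EEG sampling rate (Hz)
YES_HZ, NO_HZ = 12.0, 13.0  # assumed LED flicker rates
PHOSPHENE_THRESHOLD = 0.7   # normalized coil intensity needed to see a flash

def decode_answer(eeg: np.ndarray) -> str:
    """Pick the answer whose flicker frequency dominates the EEG spectrum."""
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / FS)
    def power(f):
        return spectrum[np.argmin(np.abs(freqs - f))]
    return "yes" if power(YES_HZ) > power(NO_HZ) else "no"

def coil_intensity(answer: str) -> float:
    # A "yes" drives the coil hard enough to evoke a phosphene;
    # a "no" produces a stimulus below that threshold.
    return 0.9 if answer == "yes" else 0.4

# Simulate 2 s of EEG dominated by the 12 Hz ("yes") flicker plus noise.
t = np.arange(0, 2, 1.0 / FS)
eeg = np.sin(2 * np.pi * YES_HZ * t) + 0.5 * np.random.randn(len(t))
answer = decode_answer(eeg)
print(answer, "-> phosphene" if coil_intensity(answer) > PHOSPHENE_THRESHOLD else "-> no phosphene")
```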

Remote brain-to-brain communication process (credit: A. Stocco et al./PLoS ONE)

The experiment, detailed today in an open access paper in PLoS ONE, is the first to show that two brains can be directly linked to allow one person to guess what’s on another person’s mind. It is “the most complex brain-to-brain experiment, I think, that’s been done to date in humans,” said lead author Andrea Stocco, an assistant professor of psychology and researcher at UW’s Institute for Learning & Brain Sciences.

The experiment was carried out in dark rooms in two UW labs located almost a mile apart and involved five pairs of participants, who played 20 rounds of the question-and-answer game. Each game had eight objects and three questions. The sessions were a random mixture of 10 real games and 10 control games that were structured the same way.*

Participants were able to guess the correct object in 72 percent of the real games, compared with just 18 percent of the control rounds. Incorrect guesses in the real games could be caused by several factors, the most likely being uncertainty about whether a phosphene had appeared.

UW team’s initial experiment in 2013: University of Washington researcher Rajesh Rao, left, plays a computer game with his mind. Across campus, researcher Andrea Stocco, right, wears a magnetic stimulation coil over the left motor cortex region of his brain. Stocco’s right index finger moved involuntarily to hit the “fire” button as part of the first human brain-to-brain interface demonstration. (credit: University of Washington)

The study builds on the UW team’s initial experiment in 2013, which was the first to demonstrate a direct brain-to-brain connection between humans, using noninvasive technology to send a person’s brain signals over the Internet to control the hand motions of another person. Other scientists had previously connected the brains of rats and monkeys, and transmitted brain signals from a human to a rat, using electrodes inserted into animals’ brains.

The new experiment evolved out of research by co-author Rajesh Rao, a UW professor of computer science and engineering, on brain-computer interfaces that enable people to activate devices with their minds. In 2011, Rao began collaborating with Stocco and Prat to determine how to link two human brains together.


University of Washington | Team links two human brains for question-and-answer experiment

“Brain tutoring” next

In 2014, the researchers received a $1 million grant from the W.M. Keck Foundation that allowed them to broaden their experiments to decode more complex interactions and brain processes. They are now exploring the possibility of “brain tutoring,” transferring signals directly from healthy brains to ones that are developmentally impaired or impacted by external factors such as a stroke or accident, or simply to transfer knowledge from teacher to pupil.

The team is also working on transmitting brain states — for example, sending signals from an alert person to a sleepy one, or from a focused student to one who has attention deficit hyperactivity disorder, or ADHD.

“Imagine having someone with ADHD and a neurotypical student,” Prat said. “When the non-ADHD student is paying attention, the ADHD student’s brain gets put into a state of greater attention automatically.”

“Evolution has spent a colossal amount of time to find ways for us and other animals to take information out of our brains and communicate it to other animals in the forms of behavior, speech and so on,” Stocco said. “But it requires a translation. We can only communicate part of whatever our brain processes.

“What we are doing is kind of reversing the process a step at a time by opening up this box and taking signals from the brain and with minimal translation, putting them back in another person’s brain,” he said.

* “Telepathy-like” is KurzweilAI’s wording, meaning that no action by the subject outside of the brain was required in the communication. As noted above, the first experiment (known to KurzweilAI) to demonstrate a direct brain-to-brain connection between humans via the Internet, the UW team’s initial experiment in 2013, used involuntary finger movements on a keyboard. Proponents of “telepathy” or “psychic” experiments using the Internet as a link, if any, might counter this.

The researchers took steps to ensure participants couldn’t use clues other than direct brain communication to complete the game. Inquirers wore earplugs so they couldn’t hear the different sounds produced by the varying stimulation intensities of the “yes” and “no” responses. Since noise travels through the skull bone, the researchers also changed the stimulation intensities slightly from game to game and randomly used three different intensities each for “yes” and “no” answers to further reduce the chance that sound could provide clues.

The researchers also repositioned the coil on the inquirer’s head at the start of each game, but for the control games, added a plastic spacer undetectable to the participant that weakened the magnetic field enough to prevent the generation of phosphenes. Inquirers were not told whether they had correctly identified the items, and only the researcher on the respondent end knew whether each game was real or a control round.

UPDATE Sept. 9, 2015: Footnote expanded to clarify “telepathy-like.”

First all-optical chip memory

Illustration of all-optical data memory: ultra-short light pulses (left) make a bit in the Ge2Sb2Te5 (GST) material change from crystalline to amorphous (or the reverse), and weak light pulses (right) read out the data (credit: C. Rios/Oxford University)

The first all-optical chip memory has been developed by an international team of scientists. It is capable of writing data at speeds approaching a gigahertz and may allow computers to work more rapidly and more efficiently.

The memory is non-volatile, similar to flash memory: it retains data even when the power is removed, and may do so for decades, the researchers believe.

The scientists, from Oxford, Exeter, Karlsruhe and Münster universities, used a “phase-change material,” Ge2Sb2Te5 (GST). Phase-change materials radically change their optical properties depending on their phase state (the arrangement of their atoms), crystalline (regular) or amorphous (irregular); switching between the two states is initiated by ultrashort light pulses. For reading out the data, weak light pulses are used.
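A toy model of that write/read scheme, assuming the eight storage levels reported in the paper (the transmission numbers are invented for illustration, not measured GST properties):

```python
NUM_LEVELS = 8  # the device stores up to 8 levels (3 bits) per cell

class OpticalCell:
    """Toy GST memory cell addressed purely with light pulses."""
    def __init__(self):
        self.level = 0  # 0 = fully crystalline ... 7 = most amorphous

    def write(self, level: int) -> None:
        # A strong ultrashort pulse switches part of the GST between
        # crystalline and amorphous phases, setting the stored level.
        if not 0 <= level < NUM_LEVELS:
            raise ValueError("level out of range")
        self.level = level

    def read(self) -> float:
        # A weak pulse probes the cell: transmission depends on the phase
        # state, so the level can be read without disturbing it.
        return 0.2 + 0.6 * self.level / (NUM_LEVELS - 1)

cell = OpticalCell()
cell.write(5)
print(f"level {cell.level} -> relative transmission {cell.read():.2f}")
```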

Light is ideally suited for ultrafast, high-bandwidth data transfer (via optical-fiber cables), but until now it has not been possible to store large quantities of optical data directly on integrated chips. The memory is also compatible with the latest processors, the researchers note.

Permanent all-optical on-chip memories promise to considerably increase the speed of computers and reduce their energy consumption. Together with all-optical connections, on-chip memories might also reduce latencies (transmission delays, which can make long-distance two-way communication difficult, for example). In addition, energy-intensive conversion of optical signals into electronic signals and vice versa would no longer be required, reducing bulk and cost.

The research is published in Nature Photonics.


Abstract of Integrated all-photonic non-volatile multi-level memory

Implementing on-chip non-volatile photonic memories has been a long-term, yet elusive goal. Photonic data storage would dramatically improve performance in existing computing architectures by reducing the latencies associated with electrical memories and potentially eliminating optoelectronic conversions. Furthermore, multi-level photonic memories with random access would allow for leveraging even greater computational capability. However, photonic memories have thus far been volatile. Here, we demonstrate a robust, non-volatile, all-photonic memory based on phase-change materials. By using optical near-field effects, we realize bit storage of up to eight levels in a single device that readily switches between intermediate states. Our on-chip memory cells feature single-shot readout and switching energies as low as 13.4 pJ at speeds approaching 1 GHz. We show that individual memory elements can be addressed using a wavelength multiplexing scheme. Our multi-level, multi-bit devices provide a pathway towards eliminating the von Neumann bottleneck and portend a new paradigm in all-photonic memory and non-conventional computing.

‘Information sabotage’ on Wikipedia claimed


Research has moved online, with more than 80 percent of U.S. students using Wikipedia for research papers, but controversial science information has egregious errors, claim researchers (credit: Pixabay)

Wikipedia entries on politically controversial scientific topics can be unreliable due to “information sabotage,” according to an open-access paper published today in the journal PLOS One.

The authors (Gene E. Likens* and Adam M. Wilson*) analyzed Wikipedia edit histories for three politically controversial scientific topics (acid rain, evolution, and global warming), and four non-controversial scientific topics (the standard model in physics, heliocentrism, general relativity, and continental drift).

“Egregious errors and a distortion of consensus science”

Using nearly a decade of data, the authors teased out daily edit rates, the mean size of edits (words added, deleted, or edited), and the mean number of page views per day. Across the board, politically controversial scientific topics were edited more heavily and viewed more often.

“Wikipedia’s global warming entry sees 2–3 edits a day, with more than 100 words altered, while the standard model in physics has around 10 words changed every few weeks,” Wilson notes. “The high rate of change observed in politically controversial scientific topics makes it difficult for experts to monitor their accuracy and contribute time-consuming corrections.”
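Anyone can reproduce the flavor of those numbers from an article’s public revision history via the MediaWiki API; the sketch below counts edits per day and the mean size change in bytes (the paper measured words, so bytes are only a rough proxy, and the request parameters shown are the commonly documented ones):

```python
from datetime import datetime
import requests

API = "https://en.wikipedia.org/w/api.php"
params = {
    "action": "query", "format": "json", "prop": "revisions",
    "titles": "Evolution", "rvprop": "timestamp|size", "rvlimit": 500,
}
pages = requests.get(API, params=params).json()["query"]["pages"]
revs = next(iter(pages.values()))["revisions"]

times = [datetime.strptime(r["timestamp"], "%Y-%m-%dT%H:%M:%SZ") for r in revs]
days = max(abs((times[0] - times[-1]).days), 1)
sizes = [r["size"] for r in revs]
deltas = [abs(a - b) for a, b in zip(sizes, sizes[1:])]

print(f"{len(revs) / days:.1f} edits/day, "
      f"{sum(deltas) / len(deltas):.0f} bytes changed per edit on average")
```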

While the edit rate of the acid rain article was less than the edit rate of the evolution and global warming articles, it was significantly higher than the non-controversial topics. “In the scientific community, acid rain is not a controversial topic,” said professor Likens. “Its mechanics have been well understood for decades. Yet, despite having ‘semi-protected’ status to prevent anonymous changes, Wikipedia’s acid rain entry receives near-daily edits, some of which result in egregious errors and a distortion of consensus science.”

Wikipedia’s limitations

Likens adds, “As society turns to Wikipedia for answers, students, educators, and citizens should understand its limitations for researching scientific topics that are politically charged. On entries subject to edit-wars, like acid rain, evolution, and global change, one can obtain — within seconds — diametrically different information on the same topic.”

However, the authors note that as Wikipedia matures, there is evidence that the breadth of its scientific content is increasingly based on source material from established scientific journals. They also note that Wikipedia employs algorithms to help identify and correct blatantly malicious edits, such as profanity. But in their view, it remains to be seen how Wikipedia will manage the dynamic, changing content that typifies politically charged science topics.

To help readers critically evaluate Wikipedia content, Likens and Wilson suggest identifying entries that are known to have significant controversy or edit wars. They also recommend quantifying the reputation of individual editors. In the meantime, users are urged to cast a critical eye on Wikipedia source material, which is found at the bottom of each entry.

Wikipedia editors not impressed

In the Wikipedia “User_talk:Jimbo_Wales” page, several Wikipedia editors questioned the PLOS One authors’ statistical accuracy and conclusions, and noted that the data is three years out of date. “I don’t think this dataset can make any claim about controversial subjects at all,” one editor said. “It simply looks at too few articles, and there are too many explanations.”

“It has long been a source of bewilderment to me that we allow climate change denialists to run riot on Wikipedia,” said another.

* Dr. Gene E. Likens is President Emeritus of the Cary Institute of Ecosystem Studies and a Distinguished Research Professor at the University of Connecticut, Storrs. Likens co-discovered acid rain in North America, and counts among his accolades a National Medal of Science, a Tyler Prize, and elected membership in the National Academy of Sciences. Dr. Adam M. Wilson is a geographer at the University at Buffalo.


Abstract of Content Volatility of Scientific Topics in Wikipedia: A Cautionary Tale

Wikipedia has quickly become one of the most frequently accessed encyclopedic references, despite the ease with which content can be changed and the potential for ‘edit wars’ surrounding controversial topics. Little is known about how this potential for controversy affects the accuracy and stability of information on scientific topics, especially those with associated political controversy. Here we present an analysis of the Wikipedia edit histories for seven scientific articles and show that topics we consider politically but not scientifically “controversial” (such as evolution and global warming) experience more frequent edits with more words changed per day than pages we consider “noncontroversial” (such as the standard model in physics or heliocentrism). For example, over the period we analyzed, the global warming page was edited on average (geometric mean ±SD) 1.9±2.7 times resulting in 110.9±10.3 words changed per day, while the standard model in physics was only edited 0.2±1.4 times resulting in 9.4±5.0 words changed per day. The high rate of change observed in these pages makes it difficult for experts to monitor accuracy and contribute time-consuming corrections, to the possible detriment of scientific accuracy. As our society turns to Wikipedia as a primary source of scientific information, it is vital we read it critically and with the understanding that the content is dynamic and vulnerable to vandalism and other shenanigans.

Electro-optical modulator is 100 times smaller, consumes 100th of the energy

Colorized electron microscope image of a micro-modulator made of gold. In the slit in the center of the picture, light is converted into plasmon polaritons, modulated, and then re-converted into light pulses (credit: Haffner et al. Nature Photonics)

Researchers at ETH Zurich have developed a modulator that is 100 times smaller than conventional modulators, so it can now be integrated into electronic circuits. Transmitting large amounts of data via the Internet requires high-performance electro-optic modulators — devices that convert electrical signals (used in computers and cell phones) into light signals (used in fiber-optic cables).

Today, huge amounts of data are sent incredibly fast through fiber-optic cables as light pulses. For that purpose they first have to be converted from electrical signals, which are used by computers and telephones, into optical signals. Today’s electro-optic modulators are complicated and large compared with electronic devices, which can be as small as a few micrometers.

The plasmon trick

To build the smallest possible modulator they first need to focus a light beam whose intensity they want to modulate into a very small volume. The laws of optics, however, dictate that such a volume cannot be smaller than the wavelength of the light itself. Modern telecommunications use near-infrared laser light with a wavelength of 1500 nanometers (1.5 micrometers), which sets the lower limit for the size of a modulator.

To beat that limit and to make the device even smaller, the light is first turned into surface-plasmon-polaritons. Plasmon-polaritons are a combination of electromagnetic fields and electrons that propagate along a surface of a metal strip. At the end of the strip they are converted back to light once again. The advantage of this detour is that plasmon-polaritons can be confined in a much smaller space than the light they originated from.

Because the modulator is much smaller than conventional devices, it consumes very little energy — only a few thousandths of a watt at a data transmission rate of 70 gigabits per second. This corresponds to about one-hundredth of the energy consumption of commercial models. And that means more data can be transmitted at higher speeds. The device is also cheaper to produce.
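A quick consistency check using the 25 femtojoules-per-bit figure from the abstract below bears out the “few thousandths of a watt” claim:

$$
25\ \text{fJ/bit} \times 70\ \text{Gbit/s} = 25\times10^{-15}\ \tfrac{\text{J}}{\text{bit}} \times 70\times10^{9}\ \tfrac{\text{bit}}{\text{s}} \approx 1.8\times10^{-3}\ \text{W} \approx 1.8\ \text{mW}.
$$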

The research is described in a paper in the journal Nature Photonics.


Abstract of All-plasmonic Mach–Zehnder modulator enabling optical high-speed communication at the microscale

Optical modulators encode electrical signals to the optical domain and thus constitute a key element in high-capacity communication links. Ideally, they should feature operation at the highest speed with the least power consumption on the smallest footprint, and at low cost. Unfortunately, current technologies fall short of these criteria. Recently, plasmonics has emerged as a solution offering compact and fast devices. Yet, practical implementations have turned out to be rather elusive. Here, we introduce a 70 GHz all-plasmonic Mach–Zehnder modulator that fits into a silicon waveguide of 10 μm length. This dramatic reduction in size by more than two orders of magnitude compared with photonic Mach–Zehnder modulators results in a low energy consumption of 25 fJ per bit up to the highest speeds. The technology suggests a cheap co-integration with electronics.