Infrared-light-based Wi-Fi network is 100 times faster

Schematic of a beam of white light being dispersed by a prism into different wavelengths, similar in principle to how a new near-infrared WiFi system works (credit: Lucas V. Barbosa/CC)

A new infrared-light WiFi network can provide more than 40 gigabits per second (Gbps) for each user* — about 100 times faster than current WiFi systems — say researchers at Eindhoven University of Technology (TU/e) in the Netherlands.

The TU/e WiFi design was inspired by experimental systems using ceiling LED lights (such as Oregon State University’s experimental WiFiFO, or WiFi Free space Optic, system), which can increase the total per-user speed of WiFi systems and extend the range to multiple rooms, while avoiding interference from neighboring WiFi systems. (However, WiFiFO is limited to 100 Mbps.)

Experimental Oregon State University system uses LED lighting to boost the bandwidth of Wi-Fi systems and extend range (credit: Thinh Nguyen/Oregon State University)

Near-infrared light

Instead of visible light, the TU/e system uses invisible near-infrared light.** Supplied by a fiber optic cable, a few central “light antennas” (mounted on the ceiling, for instance) each use a pair of “passive diffraction gratings” that radiate light rays of different wavelengths at different angles.

That allows for directing the light beams to specific users. The network tracks the precise location of every wireless device, using a radio signal transmitted in the return direction.***

The TU/e system uses infrared light with a wavelength of 1500 nanometers (a frequency of 200 terahertz, or 40,000 times higher than 5GHz), allowing for significantly increased capacity. The system has so far used the light rays only for downloading; uploads are still done using WiFi radio signals, since much less capacity is usually needed for uploading.
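The wavelength-to-frequency arithmetic behind those figures is easy to check. A quick sketch (the constants are standard physics; the 1500 nm and 5 GHz figures come from the article):

```python
# Check the article's figures: a 1500 nm carrier corresponds to ~200 THz,
# about 40,000 times the frequency of the 5 GHz Wi-Fi band.
C = 299_792_458                      # speed of light, m/s

wavelength_m = 1500e-9               # 1500 nanometers
freq_hz = C / wavelength_m           # ~2.0e14 Hz, i.e. ~200 THz
ratio_vs_5ghz = freq_hz / 5e9        # ~40,000
```

Higher carrier frequency means more raw bandwidth to divide among wavelengths, which is what allows each user a dedicated multi-gigabit light ray.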

The researchers expect it will take five years or more for the new technology to be commercially available. The first devices to be connected will likely be high-data devices like video monitors, laptops, and tablets.

* That speed is 67 times the 600 Mbps max theoretical speed of the current 802.11n WiFi standard, capacity that has to be shared between users, so the effective ratio is about 100 times, according to TU/e researchers. It is also 16 times the roughly 2.5 Gbps real-world performance of the best current (802.11ac) Wi-Fi systems, which is likewise shared (so effectively lower) and which, in addition, uses the 5GHz wireless band, which has limited range. “The theoretical max speed of 802.11ac is eight 160MHz 256-QAM channels, each of which are capable of 866.7Mbps, for a total of 6,933Mbps, or just shy of 7Gbps,” notes Extreme Tech. “In the real world, thanks to channel contention, you probably won’t get more than two or three 160MHz channels, so the max speed comes down to somewhere between 1.7Gbps and 2.5Gbps. Compare this with 802.11n’s max theoretical speed, which is 600Mbps.”
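The ratios quoted in this footnote can be verified in a few lines (figures taken from the article and the Extreme Tech quote):

```python
# Sanity-check the speed comparisons in the footnote.
per_channel_mbps = 866.7                     # one 160 MHz 256-QAM 802.11ac channel
theoretical_ac_mbps = 8 * per_channel_mbps   # ~6,933 Mbps, "just shy of 7Gbps"

tue_gbps = 40                                # TU/e per-user speed
ratio_vs_11n = tue_gbps * 1000 / 600         # vs. 802.11n's 600 Mbps max (~67x)
ratio_vs_11ac = tue_gbps / 2.5               # vs. realistic 2.5 Gbps 802.11ac (16x)
```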

** The TU/e system was designed by Joanne Oh as a doctoral thesis and part of the wider BROWSE project headed up by professor of broadband communication technology Ton Koonen, with funding from the European Research Council, under the auspices of the noted TU/e Institute for Photonic Integration.

*** According to TU/e researchers, a few other groups are investigating network concepts in which infrared-light rays are directed using movable mirrors. The disadvantage is that this requires active control of the mirrors (and therefore energy), and each mirror can handle only one ray of light at a time. The gratings used by Koonen and Oh can cope with many rays of light and, therefore, many devices at the same time.


SpaceX plans global space internet

(credit: SpaceX)

SpaceX has applied to the FCC to launch 11,943 satellites into low-Earth orbit, providing “ubiquitous high-bandwidth (up to 1Gbps per user, once fully deployed) broadband services for consumers and businesses in the U.S. and globally,” according to FCC applications.

Recent meetings with the FCC suggest that the plan now looks like “an increasingly feasible reality — particularly with 5G technologies just a few years away, promising new devices and new demand for data,” Verge reports.

Such a service will be particularly useful to rural areas, which have limited access to internet bandwidth.

Low-Earth orbit (at up to 2,000 kilometers, or 1,200 mi) ensures lower latency (communication delay between Earth and satellite) — making the service usable for voice communications via Skype, for example — compared to geosynchronous orbit (at 35,786 kilometers, or 22,000 miles), offered by Dish Network and other satellite ISP services.* The downside: it takes a lot more satellites to provide the coverage.
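The latency gap follows directly from light-speed propagation delay. A minimal sketch using the altitudes above (this is the physics floor only; measured latencies, such as the 600 ms figure cited below, also include processing and routing overhead):

```python
# Minimum round-trip propagation delay to a satellite directly overhead:
# t = 2 * altitude / c. Altitudes are the figures quoted in the article.
C_KM_S = 299_792.458  # speed of light, km/s

def min_rtt_ms(altitude_km: float) -> float:
    """Light-speed round trip (up and back) to a satellite overhead, in ms."""
    return 2 * altitude_km / C_KM_S * 1000

leo_ms = min_rtt_ms(2000)    # upper edge of low-Earth orbit: ~13 ms
geo_ms = min_rtt_ms(35786)   # geosynchronous orbit: ~239 ms
```

Even before any processing delay, a geosynchronous hop costs roughly a quarter of a second per round trip, which is why interactive voice works poorly over GEO links and why SpaceX's LEO constellation can promise wired-internet-like latencies.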

Boeing, Softbank-backed OneWeb (which hopes to “connect every school to the Internet by 2022”), Telesat, and others** have proposed similar services, possibly bringing the total number of satellites to about 20,000 in low and mid earth orbits in the 2020s, estimates Next Big Future.

* “SpaceX expects latencies between 25 and 35ms, similar to the latencies measured for wired Internet services. Current satellite ISPs have latencies of 600ms or more, according to FCC measurements,” notes Ars Technica.

** Audacy, Karousel, Kepler Communications, LeoSat, O3b, Space Norway, Theia Holdings, and ViaSat, according to Space News. The ITU [the international counterpart of the FCC] has set rules preventing new constellations from interfering with established ground and satellite systems operating in the same frequencies. OneWeb, for example, has said it will basically switch off power as its satellites cross the equator so as not to disturb transmissions from geostationary-orbit satellites directly above that use Ku-band frequencies.

Brain-computer interface advance allows paralyzed people to type almost as fast as some smartphone users

Typing with your mind. You are paralyzed. But now, tiny electrodes have been surgically implanted in your brain to record signals from your motor cortex, the brain region controlling muscle movement. As you think of mousing over to a letter (or clicking to choose it), those electrical brain signals are transmitted via a cable to a computer (replacing your spinal cord and muscles). There, advanced algorithms decode the complex electrical brain signals, converting them instantly into screen actions. (credit: Chethan Pandarinath et al./eLife)

Stanford University researchers have developed a brain-computer interface (BCI) system that can enable people with paralysis* to type (using an on-screen cursor) at speeds and accuracy levels about three times higher than previously reported.

Simply by imagining their own hand movements, one participant was able to type 39 correct characters per minute (about eight words per minute); the other two participants averaged 6.3 and 2.7 words per minute, respectively, all without auto-complete assistance (with auto-complete, rates could be considerably higher).

Those are communication rates that people with arm and hand paralysis would also find useful, the researchers suggest. “We’re approaching the speed at which you can type text on your cellphone,” said Krishna Shenoy, PhD, professor of electrical engineering, a co-senior author of the study, which was published in an open-access paper online Feb. 21 in eLife.

Braingate and beyond

The three study participants used a brain-computer interface called the “BrainGate Neural Interface System.” On KurzweilAI, we first discussed BrainGate in 2011, followed by a 2012 clinical trial that allowed a paralyzed patient to control a robot.

BrainGate in 2012 (credit: Brown University)

The new research, led by Stanford, takes the BrainGate technology significantly further.** Participants can now move a cursor (just by thinking about a hand movement) on a computer screen that displays the letters of the alphabet, and they can “point and click” on letters, computer-mouse-style, to type letters and sentences.

The new BCI uses a tiny silicon chip, just over one-sixth of an inch square, with 100 electrodes that penetrate the brain to about the thickness of a quarter and tap into the electrical activity of individual nerve cells in the motor cortex.

As the participant thinks of a specific hand-to-mouse movement (pointing at or clicking on a letter), neural electrical activity is recorded using 96-channel silicon microelectrode arrays implanted in the hand area of the motor cortex. These signals are then filtered to extract multiunit spiking activity and high-frequency field potentials, then decoded (using two algorithms) to provide “point-and-click” control of a computer cursor.
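As a rough illustration of the shape of that pipeline, here is a toy linear decoder. This is NOT the study's actual algorithms (those are detailed in the eLife paper); the weights, firing rates, and units below are hypothetical, chosen only to show how binned spike counts from a 96-channel array could map to a 2-D cursor velocity:

```python
import numpy as np

# Toy sketch: binned spike counts from 96 channels -> 2-D cursor velocity
# via a linear decoder, integrated into a cursor position over time.
rng = np.random.default_rng(0)

N_CHANNELS = 96                                   # channels in the microelectrode array
W = rng.standard_normal((2, N_CHANNELS)) * 0.01   # hypothetical decoder weights

def decode_step(spike_counts: np.ndarray, position: np.ndarray,
                dt: float = 0.02) -> np.ndarray:
    """One decode cycle: spike counts -> velocity -> updated cursor position."""
    velocity = W @ spike_counts       # (2,) cursor velocity, arbitrary units
    return position + velocity * dt

pos = np.zeros(2)
for _ in range(50):                               # 50 cycles of 20 ms = 1 second
    counts = rng.poisson(5.0, N_CHANNELS)         # simulated multiunit activity
    pos = decode_step(counts, pos)
```

In the real system the decoder is calibrated to each participant and runs in a closed loop, with the participant watching the cursor and adjusting their imagined movements.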

What’s next

The team next plans to adapt the system so that brain-computer interfaces can control commercial computers, phones, and tablets, perhaps extending out to the internet.

Beyond that, Shenoy predicted that a self-calibrating, fully implanted wireless BCI system with no required caregiver assistance and no “cosmetic impact” would be available five to 10 years from now (“closer to five”).

Perhaps a future wireless, noninvasive version could let anyone simply think to select letters, words, ideas, and images — replacing the mouse and finger touch — along the lines of Elon Musk’s neural lace concept?

* Millions of people with paralysis reside in the U.S.

** The study’s results are the culmination of the long-running multi-institutional BrainGate consortium, which includes scientists at Massachusetts General Hospital, Brown University, Case Western Reserve University, and the VA Rehabilitation Research and Development Center for Neurorestoration and Neurotechnology in Providence, Rhode Island. The study was funded by the National Institutes of Health, the Stanford Office of Postdoctoral Affairs, the Craig H. Neilsen Foundation, the Stanford Medical Scientist Training Program, Stanford BioX-NeuroVentures, the Stanford Institute for Neuro-Innovation and Translational Neuroscience, the Stanford Neuroscience Institute, Larry and Pamela Garlick, Samuel and Betsy Reeves, the Howard Hughes Medical Institute, the U.S. Department of Veterans Affairs, the MGH-Dean Institute for Integrated Research on Atrial Fibrillation and Stroke and Massachusetts General Hospital.


Stanford | Stanford researchers develop brain-controlled typing for people with paralysis


Abstract of High performance communication by people with paralysis using an intracortical brain-computer interface

Brain-computer interfaces (BCIs) have the potential to restore communication for people with tetraplegia and anarthria by translating neural activity into control signals for assistive communication devices. While previous pre-clinical and clinical studies have demonstrated promising proofs-of-concept (Serruya et al., 2002; Simeral et al., 2011; Bacher et al., 2015; Nuyujukian et al., 2015; Aflalo et al., 2015; Gilja et al., 2015; Jarosiewicz et al., 2015; Wolpaw et al., 1998; Hwang et al., 2012; Spüler et al., 2012; Leuthardt et al., 2004; Taylor et al., 2002; Schalk et al., 2008; Moran, 2010; Brunner et al., 2011; Wang et al., 2013; Townsend and Platsko, 2016; Vansteensel et al., 2016; Nuyujukian et al., 2016; Carmena et al., 2003; Musallam et al., 2004; Santhanam et al., 2006; Hochberg et al., 2006; Ganguly et al., 2011; O’Doherty et al., 2011; Gilja et al., 2012), the performance of human clinical BCI systems is not yet high enough to support widespread adoption by people with physical limitations of speech. Here we report a high-performance intracortical BCI (iBCI) for communication, which was tested by three clinical trial participants with paralysis. The system leveraged advances in decoder design developed in prior pre-clinical and clinical studies (Gilja et al., 2015; Kao et al., 2016; Gilja et al., 2012). For all three participants, performance exceeded previous iBCIs (Bacher et al., 2015; Jarosiewicz et al., 2015) as measured by typing rate (by a factor of 1.4–4.2) and information throughput (by a factor of 2.2–4.0). This high level of performance demonstrates the potential utility of iBCIs as powerful assistive communication devices for people with limited motor function.

Someone is learning how to take down the Internet

Submarine cables map (credit: Teleography)

“Over the past year or two, someone has been probing the defenses of the companies that run critical pieces of the Internet,” according to a blog post by security expert Bruce Schneier.

“These probes take the form of precisely calibrated attacks designed to determine exactly how well these companies can defend themselves, and what would be required to take them down. It feels like a nation’s military cybercommand trying to calibrate its weaponry in the case of cyberwar.”

Schneier said major companies that provide the basic infrastructure that makes the Internet work [presumably, ones such as Cisco] have seen an increase in distributed denial of service (DDoS) attacks against them, and the attacks are significantly larger, last longer, and are more sophisticated.

“They look like probing — being forced to demonstrate their defense capabilities for the attacker.” This is similar to flying reconnaissance planes over a country to detect capabilities by making the enemy turn on air-defense radars.

Who might do this? “The size and scale of these probes — and especially their persistence — point to state actors. … China or Russia would be my first guesses.”

Beyond Wi-Fi

A nanocrystal-based material converts blue laser emission to white light for combined illumination and high-speed data communication. (credit: KAUST 2016)

Researchers at King Abdullah University of Science and Technology (KAUST) have developed a system that uses high-speed visible light communications (VLC) to replace slower Wi-Fi and Bluetooth, allowing ceiling lights, for example, to provide an internet connection to laptops.

“VLC has many advantages compared with lower frequency communications approaches (including Wi-Fi and Bluetooth), such as energy efficiency, an unregulated communication spectrum, environmental friendliness, greater security, and no RF interferences,” according to KAUST researchers.

However, VLC is currently limited to about 100 megabits/sec. (compared to a maximum speed for the current 802.11n Wi-Fi spec of about 600 megabits/sec.) because it requires light-emitting diodes (LEDs) that produce white light, according to KAUST Professor of Electrical Engineering Boon Ooi.

These are usually fabricated by combining a diode that emits blue light with a phosphor that converts some of this radiation into red and green light. However, this conversion process limits the switching speed.

To deal with that limitation, the researchers created nanocrystals based on cesium lead bromide perovskite combined with a conventional nitride red phosphor, which achieved a data rate of 2 gigabits/sec.* That’s comparable to the newest Wi-Fi spec, 802.11ac, which can deliver about 1.7Gbps to 2.5Gbps, as Extreme Tech reports (but is limited to the 5 GHz Wi-Fi band, which has limited penetration).

Importantly, the white light generated using their perovskite nanostructures was of a quality comparable to present LED technology.

The research is presented in an open-access paper in ACS Photonics.

* When illuminated by blue laser light, the nanocrystals emitted green light while the nitride emitted red light. Together, these combined to create a warm white light. The researchers characterized the optical properties of their material using a technique known as femtosecond transient spectroscopy. They were able to show that the optical processes in cesium lead bromide nanocrystals occur on a time scale of roughly seven nanoseconds. This meant they could modulate the optical emission at a frequency of 491 MHz, 40 times faster than is possible with conventional phosphors, and transmit data at a rate of 2 gigabits/sec.
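A quick back-of-envelope check of those figures (this is only the arithmetic, using the numbers reported above; the exact modulation scheme used to reach 2 Gbit/s is not stated here):

```python
# 2 Gbit/s over a 491 MHz modulation bandwidth implies a spectral
# efficiency of about 4 bits/s/Hz, which would require multi-level
# modulation rather than simple on-off keying.
bandwidth_hz = 491e6
data_rate_bps = 2e9

spectral_efficiency = data_rate_bps / bandwidth_hz   # ~4.1 bits/s/Hz
phosphor_bandwidth_mhz = 491 / 40                    # ~12 MHz: implied bandwidth
                                                     # of a conventional phosphor
```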

Interscatter communication

In interscatter communication, a backscattering device such as a smart contact lens converts Bluetooth transmissions from a device such as a smartwatch to generate Wi-Fi signals that can be read by a phone or tablet. (credit: University of Washington)

University of Washington researchers have introduced another variant on Wi-Fi called “interscatter communication” that may allow devices such as brain implants, contact lenses, credit cards and smaller wearable electronics to talk to everyday devices such as smartphones and watches (to report a medical condition, for example).

The interscatter communication method works by converting Bluetooth signals into Wi-Fi transmissions. Using only reflections, an interscatter device such as a smart contact lens converts Bluetooth signals from a smartwatch, for example, into Wi-Fi transmissions that can be picked up by a smartphone.

Due to their size and location within the body, these smart contact lenses are normally too constrained by power demands to send data using conventional wireless transmissions. That means they so far have not been able to send data using Wi-Fi to smartphones and other mobile devices. Those same requirements also limit emerging technologies such as brain implants that treat Parkinson’s disease, stimulate organs, and may one day even reanimate limbs.

Smart contact lens (credit: University of Washington)

The team of UW electrical engineers and computer scientists has demonstrated for the first time that these types of power-limited devices can “talk” to others using standard Wi-Fi communication. Their system requires no specialized equipment, relying solely on mobile devices users already carry, and generates Wi-Fi signals using 10,000 times less energy than conventional methods.

“Instead of generating Wi-Fi signals on your own, our technology creates Wi-Fi by using Bluetooth transmissions from nearby mobile devices such as smartwatches,” said co-author Vamsi Talla, a recent UW doctoral graduate in electrical engineering who is now a research associate in the Department of Computer Science & Engineering.


University of Washington Computer Science & Engineering | Interscatter

Backscatter communication: piggybacking on Bluetooth

The team’s process relies on a communication technique called backscatter, which allows devices to exchange information simply by reflecting existing signals. Because the new technique enables inter-technology communication by using Bluetooth signals to create Wi-Fi transmissions, the team calls it “interscattering.”

Interscatter communication uses the Bluetooth, Wi-Fi, or ZigBee radios embedded in common mobile devices like smartphones, watches, laptops, tablets, and headsets to serve as both sources and receivers for these reflected signals.

In one example the team demonstrated, a smartwatch transmits a Bluetooth signal to a smart contact lens outfitted with an antenna. To create a blank slate on which new information can be written, the UW team developed an innovative way to transform the Bluetooth transmission into a “single tone” signal that can be further manipulated and transformed. By backscattering that single tone signal, the contact lens can encode data — such as health information it may be collecting — into a standard Wi-Fi packet that can then be read by a smartphone, tablet or laptop.
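The frequency-shifting principle behind that backscatter step can be illustrated with a toy simulation (this is not the paper's hardware design, just the underlying signal-processing idea): switching a reflector on and off at rate f_m while it is illuminated by a single tone at f_c produces mixing products at f_c ± f_m, which is how a single-tone signal can be moved into an adjacent channel.

```python
import numpy as np

# Toy backscatter illustration: reflecting a single tone through an on/off
# square wave at f_m creates sidebands at f_c - f_m and f_c + f_m.
fs = 1000.0        # sample rate (arbitrary units)
f_c = 100.0        # incident single-tone carrier
f_m = 20.0         # reflector switching rate (the frequency shift)

t = np.arange(0, 1.0, 1 / fs)
carrier = np.cos(2 * np.pi * f_c * t)
reflector = np.sign(np.cos(2 * np.pi * f_m * t))   # on/off square wave
reflected = carrier * reflector

spectrum = np.abs(np.fft.rfft(reflected))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peak_freqs = freqs[np.argsort(spectrum)[-2:]]      # two strongest components
# the strongest components sit at f_c - f_m = 80 and f_c + f_m = 120
```

By choosing the switching pattern, the backscattering device controls both where the reflected energy lands and what data it carries, with the heavy lifting (generating the original tone) done by the Bluetooth transmitter.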

Examples of interscatter communication include a) a smart contact lens using Bluetooth signals from a watch to send data to a phone b) an implantable brain interface communicating via a Bluetooth headset and smartphone and c) credit cards communicating by backscattering Bluetooth transmissions from a phone. (credit: University of Washington)

The researchers built three proof-of-concept demonstrations for previously infeasible applications, including a smart contact lens and an implantable neural recording device that can communicate directly with smartphones and watches.

Consuming only tens of microwatts

“Preserving battery life is very important in implanted medical devices, since replacing the battery in a pacemaker or brain stimulator requires surgery and puts patients at potential risk from those complications,” said co-author Joshua Smith, associate professor of electrical engineering and of computer science and engineering. “Interscatter can enable Wi-Fi for these implanted devices while consuming only tens of microwatts of power.”

Beyond implanted devices, the researchers have also shown that their technology can apply to other applications, such as smart credit cards. The team built credit card prototypes that can communicate directly with each other by reflecting Bluetooth signals coming from a smartphone.

This opens up possibilities for smart credit cards that can communicate directly with other cards and enable applications where users can split the bill by just tapping their credit cards together.

The new technique is described in an open-access paper to be presented today, Aug. 22, at the annual conference of the Association for Computing Machinery’s Special Interest Group on Data Communication (SIGCOMM 2016) in Brazil.

The research was funded by the National Science Foundation and Google Faculty Research Awards.


Abstract of Perovskite Nanocrystals as a Color Converter for Visible Light Communication

Visible light communication (VLC) is an emerging technology that uses light-emitting diodes (LEDs) or laser diodes for simultaneous illumination and data communication. This technology is envisioned to be a major part of the solution to the current bottlenecks in data and wireless communication. However, the conventional lighting phosphors that are typically integrated with LEDs have limited modulation bandwidth and thus cannot provide the bandwidth required to realize the potential of VLC. In this work, we present a promising light converter for VLC by designing solution-processed CsPbBr3 perovskite nanocrystals (NCs) with a conventional red phosphor. The fabricated CsPbBr3 NC phosphor-based white light converter exhibits an unprecedented modulation bandwidth of 491 MHz, which is ∼40 times greater than that of conventional phosphors, and the capability to transmit a high data rate of up to 2 Gbit/s. Moreover, this perovskite-enhanced white light source combines ultrafast response characteristics with a high color rendering index of 89 and a correlated color temperature of 3236 K, thereby enabling dual VLC and solid-state lighting functionalities.


Abstract of Inter-Technology Backscatter: Towards Internet Connectivity for Implanted Devices

We introduce inter-technology backscatter, a novel approach that transforms wireless transmissions from one technology to another, on the air. Specifically, we show for the first time that Bluetooth transmissions can be used to create Wi-Fi and ZigBee-compatible signals using backscatter communication. Since Bluetooth, Wi-Fi and ZigBee radios are widely available, this approach enables a backscatter design that works using only commodity devices. We build prototype backscatter hardware using an FPGA and experiment with various Wi-Fi, Bluetooth and ZigBee devices. Our experiments show we can create 2–11 Mbps Wi-Fi standards-compliant signals by backscattering Bluetooth transmissions. To show the generality of our approach, we also demonstrate generation of standards-compliant ZigBee signals by backscattering Bluetooth transmissions. Finally, we build proof-of-concepts for previously infeasible applications including the first contact lens form-factor antenna prototype and an implantable neural recording interface that communicate directly with commodity devices such as smartphones and watches, thus enabling the vision of Internet connected implanted devices.

Facebook’s internet-beaming drone completes first test flight

(credit: Facebook)

Facebook Connectivity Lab announced today the first full-scale test flight of Aquila — a solar-powered unmanned airplane/drone designed to bring affordable internet access to some of the 1.6 billion people living in remote locations with no access to mobile broadband networks.

When complete, Aquila will be able to circle a region up to 60 miles in diameter for up to 90 days at a time, beaming internet connectivity down to people in that region from an altitude of more than 60,000 feet. It will be part of a future fleet of drones.


Facebook

Facebook’s Secret Conversations

(credit: Facebook)

Facebook began today (Friday, July 8) rolling out a new beta-version feature for Messenger called “Secret Conversations,” allowing for “one-to-one secret conversations … that will be end-to-end encrypted and which can only be read on one device of the person you’re communicating with.”

Facebook suggests the feature will be useful for discussing an illness or sending financial information (as in the pictures above). You can choose to set a timer to control the length of time each message you send remains visible within the conversation. (Rich content such as GIFs and videos, as well as payments, is not supported.)

The technology, described in a technical whitepaper (open access), is based on the Signal Protocol developed by Open Whisper Systems, which is also used in Open Whisper Systems’ own Signal messaging app (Chrome, iOS, Android), WhatsApp, and Google’s Allo (not yet launched).
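The core idea behind such protocols is that the two endpoints agree on a shared secret and then "ratchet" their message keys forward so that old keys can be discarded. The sketch below is NOT the Signal Protocol and is not secure (the Diffie-Hellman prime is a toy 64-bit value and the ratchet is drastically simplified); it only illustrates the session-key concept:

```python
import hashlib
import hmac
import secrets

# Toy end-to-end key agreement: classic finite-field Diffie-Hellman,
# then an HMAC-based "ratchet" that derives a fresh key per message.
P = 0xFFFFFFFFFFFFFFC5   # largest 64-bit prime -- far too small for real use
G = 5                    # generator (toy choice)

a = secrets.randbelow(P - 2) + 2          # Alice's private key
b = secrets.randbelow(P - 2) + 2          # Bob's private key
A, B = pow(G, a, P), pow(G, b, P)         # public keys, exchanged in the open

shared_alice = pow(B, a, P)               # both sides compute the same secret
shared_bob = pow(A, b, P)

def ratchet(key: bytes) -> bytes:
    """Derive the next message key; the previous one can then be deleted."""
    return hmac.new(key, b"ratchet", hashlib.sha256).digest()

k0 = hashlib.sha256(shared_alice.to_bytes(8, "big")).digest()
k1 = ratchet(k0)                          # key for message 1
k2 = ratchet(k1)                          # key for message 2
```

Because each message key is derived one-way from the previous one, compromising a device later does not reveal keys (and thus messages) that were already deleted, which is the forward-secrecy property end-to-end messengers advertise.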

Unlike WhatsApp and iMessage, which automatically encrypt every message, Secret Conversations only works from a single device and is opt-in, which “will likely rankle many privacy advocates,” says Wired.

But not as much as all of these encrypted services rankle law enforcement agencies, since the feature hampers surveillance capabilities, it adds.

Cell-phone-radiation study finds associated brain and heart tumors in rodents

Glioma in rat brain (credit: Samuel Samnick et al./European Journal of Nuclear Medicine)

A series of studies over two years with rodents exposed to radio frequency radiation (RFR) found low incidences of malignant gliomas (tumors of glial support cells) in the brain and schwannoma tumors in the heart.*

The studies were performed under the auspices of the U.S. National Toxicology Program (NTP).

Potentially preneoplastic (pre-cancer) lesions were also observed in the brain and heart of male rats exposed to RFR, with higher confidence in the association with neoplastic lesions in the heart than the brain.

No biologically significant effects were observed in the brain or heart of female rats regardless of type of radiation.

The NTP notes that the open-access report is a preview and has not been peer-reviewed.**

In 2011, the WHO/International Agency for Research on Cancer (IARC) classified RFR as possibly carcinogenic to humans, also based on increased risk for glioma.

* The rodents were subjected to whole-body exposure to the two types of RFR modulation currently used in U.S. wireless networks (CDMA and GSM), at frequencies of 900 MHz for rats and 1900 MHz for mice, with a total exposure time of approximately 9 hours a day, 7 days/week. The glioma lesions occurred in 2 to 3 percent of the rats and the schwannomas occurred in 1 to 6 percent of the rats.

** The NTP says further details will be published in the peer-reviewed literature later in 2016. The reports are “limited to select findings of concern in the brain and heart and do not represent a complete reporting of all findings from these studies of cell phone RFR,” which will be “reported together with the current findings in two forthcoming NTP peer-reviewed reports, to be available for peer review and public comment by the end of 2017.”


Abstract of Report of Partial Findings from the National Toxicology Program Carcinogenesis Studies of Cell Phone Radiofrequency Radiation

The U.S. National Toxicology Program (NTP) has carried out extensive rodent toxicology and carcinogenesis studies of radiofrequency radiation (RFR) at frequencies and modulations used in the US telecommunications industry. This report presents partial findings from these studies. The occurrences of two tumor types in male Harlan Sprague Dawley rats exposed to RFR, malignant gliomas in the brain and schwannomas of the heart, were considered of particular interest, and are the subject of this report. The findings in this report were reviewed by expert peer reviewers selected by the NTP and National Institutes of Health (NIH). These reviews and responses to comments are included as appendices to this report, and revisions to the current document have incorporated and addressed these comments. Supplemental information in the form of 4 additional manuscripts has or will soon be submitted for publication. These manuscripts describe in detail the designs and performance of the RFR exposure system, the dosimetry of RFR exposures in rats and mice, the results to a series of pilot studies establishing the ability of the animals to thermoregulate during RFR exposures, and studies of DNA damage.

Capstick M, Kuster N, Kühn S, Berdinas-Torres V, Wilson P, Ladbury J, Koepke G, McCormick D, Gauger J, Melnick R. A radio frequency radiation reverberation chamber exposure system for rodents.

Yijian G, Capstick M, McCormick D, Gauger J, Horn T, Wilson P, Melnick RL and Kuster N. Life time dosimetric assessment for mice and rats exposed to cell phone radiation.

Wyde ME, Horn TL, Capstick M, Ladbury J, Koepke G, Wilson P, Stout MD, Kuster N, Melnick R, Bucher JR, and McCormick D. Pilot studies of the National Toxicology Program’s cell phone radiofrequency radiation reverberation chamber exposure system.

Smith-Roe SL, Wyde ME, Stout MD, Winters J, Hobbs CA, Shepard KG, Green A, Kissling GE, Tice RR, Bucher JR, Witt KL. Evaluation of the genotoxicity of cell phone radiofrequency radiation in male and female rats and mice following subchronic exposure.

British researchers, Google design modular shape-shifting mobile devices

Cubimorph is an interactive device made of a chain of reconfigurable modules that shape-shifts into any shape that can be made out of a chain of cubes, such as transforming from a mobile phone to a game console. (credit: Anne Roudaut et al./Proceedings of the ICRA 2016)

British researchers and Google have independently developed revolutionary concepts for Lego-like modular interactive mobile devices.

The British team’s design, called Cubimorph, is constructed of a chain of cubes. It has touchscreens on each of the six module faces and uses a hinge-mounted turntable mechanism to self-reconfigure in the user’s hand. One example: a mobile phone that can transform into a console when a user launches a game.

Proof-of-concept prototype of Cubimorph (credit: BIG/University of Bristol)

The research team has developed three prototypes demonstrating key aspects — turntable hinges, embedded touchscreens, and miniaturization.


BIG | Cubimorph: Designing Modular Interactive Devices

The modular interactive design is a step toward the vision of programmable matter, where interactive devices change their shape to meet specific user needs.

The research is led by Anne Roudaut, PhD, from the Department of Computer Science at the University of Bristol and co-leader of the BIG (Bristol Interaction Group), in collaboration with academics at Purdue, Lancaster, and Sussex universities.

The research was presented last week at the International Conference on Robotics and Automation (ICRA).

Google’s Ara

Ara (credit: Google)

Ara, launched at Google’s I/O developer conference, uses a frame that contains all the functionality of a smartphone (CPU, GPU, antennas, sensors, battery, and display) plus six flexible slots for easy swapping of modules. “Slide any Ara module into any slot and it just works,” is the concept. Powering this is Greybus, a new bit of software deep in the Android stack that supports instantaneous connections, power efficiency, and data-transfer rates of up to 11.9 Gbps. The Developer Edition will ship in Fall 2016, with a consumer version in 2017.


Google | Ara: What’s next


Abstract of Cubimorph: Designing Modular Interactive Devices

We introduce Cubimorph, a modular interactive device that accommodates touchscreens on each of the six module faces, and that uses a hinge-mounted turntable mechanism to self-reconfigure in the user’s hand. Cubimorph contributes toward the vision of programmable matter where interactive devices reconfigure in any shape that can be made out of a chain of cubes in order to fit a myriad of functionalities, e.g. a mobile phone shifting into a console when a user launches a game. We present a design rationale that exposes user requirements to consider when designing homogeneous modular interactive devices. We present our Cubimorph mechanical design, three prototypes demonstrating key aspects (turntable hinges, embedded touchscreens and miniaturization), and an adaptation of the probabilistic roadmap algorithm for the reconfiguration.
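The “probabilistic roadmap” (PRM) the abstract names is a standard motion-planning family: sample random configurations, link nearby collision-free ones into a graph, then search that graph for a path. Below is a minimal, generic 2D sketch of that idea, a toy planner routing around a disc obstacle; it is not the authors’ cube-chain adaptation, and all parameters are illustrative.

```python
# Minimal generic PRM sketch in 2D: sample points, connect near
# neighbours that pass a collision check, then breadth-first search
# from start to goal. Toy example only, not the Cubimorph planner.
import math
import random
from collections import deque

def collision_free(p, obstacle=(0.5, 0.5), radius=0.2):
    """True if point p lies outside a disc obstacle."""
    return math.dist(p, obstacle) > radius

def prm(start, goal, n_samples=200, k=8, seed=1):
    random.seed(seed)  # fixed seed for a reproducible roadmap
    nodes = [start, goal] + [
        (random.random(), random.random()) for _ in range(n_samples)
    ]
    nodes = [p for p in nodes if collision_free(p)]
    # Connect each node to its k nearest collision-free neighbours.
    edges = {i: [] for i in range(len(nodes))}
    for i, p in enumerate(nodes):
        near = sorted(range(len(nodes)),
                      key=lambda j: math.dist(p, nodes[j]))[1:k + 1]
        for j in near:
            mid = ((p[0] + nodes[j][0]) / 2, (p[1] + nodes[j][1]) / 2)
            if collision_free(mid):  # crude edge check at the midpoint
                edges[i].append(j)
                edges[j].append(i)
    # Breadth-first search from start (index 0) to goal (index 1).
    parent = {0: None}
    queue = deque([0])
    while queue:
        i = queue.popleft()
        if i == 1:  # goal reached; walk parents back to the start
            path = []
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
        for j in edges[i]:
            if j not in parent:
                parent[j] = i
                queue.append(j)
    return None  # roadmap did not connect start and goal

path = prm((0.05, 0.05), (0.95, 0.95))
```

Cubimorph’s adaptation plans in the space of cube-chain configurations rather than 2D points, but the sample-connect-search skeleton is the same.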

Your smartphone and tablet may be making you ADHD-like

(credit: KurzweilAI)

Smartphones and other digital technology may be causing ADHD-like symptoms, according to an open-access study published in the proceedings of ACM CHI ’16, the Human-Computer Interaction conference of the Association for Computing Machinery, ongoing in San Jose.

In a two-week experimental study, University of Virginia and University of British Columbia researchers showed that when students kept their phones on ring or vibrate and with notification alerts on, they reported more symptoms of inattention and hyperactivity than when they kept their phones on silent.

The results suggest that even people who have not been diagnosed with ADHD may experience some of the disorder’s symptoms, including distraction, fidgeting, having trouble sitting still, difficulty doing quiet tasks and activities, restlessness, and difficulty focusing and getting bored easily when trying to focus, the researchers said.

“We found the first experimental evidence that smartphone interruptions can cause greater inattention and hyperactivity — symptoms of attention deficit hyperactivity disorder — even in people drawn from a nonclinical population,” said Kostadin Kushlev, a psychology research scientist at the University of Virginia who led the study with colleagues at the University of British Columbia.

In the study, 221 students at the University of British Columbia, drawn from the general student population, were assigned for one week to maximize phone interruptions by keeping notification alerts on and their phones within easy reach.

Indirect effects of manipulating smartphone interruptions on psychological well-being via inattention symptoms. Numbers are unstandardized regression coefficients. (credit: Kostadin Kushlev et al./CHI 2016)
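The “indirect effects … via inattention symptoms” in the caption is standard mediation logic: the effect of interruptions on well-being is traced through a path from interruptions to inattention (a) multiplied by a path from inattention to well-being (b). A toy sketch of that product-of-paths calculation, with invented numbers rather than the study’s data:

```python
# Mediation sketch with hypothetical data (not the study's numbers):
# indirect effect = a * b, where a and b are regression slopes.
def slope(x, y):
    """OLS slope of y regressed on x (simple regression)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# 0 = alerts off, 1 = alerts on (invented per-person averages)
alerts      = [0, 0, 0, 0, 1, 1, 1, 1]
inattention = [2.1, 1.8, 2.0, 1.9, 3.0, 3.2, 2.9, 3.1]  # mediator
wellbeing   = [4.0, 4.2, 4.1, 4.3, 3.1, 3.0, 3.3, 3.2]  # outcome

a = slope(alerts, inattention)     # interruptions -> inattention
b = slope(inattention, wellbeing)  # inattention -> well-being
indirect = a * b                   # product-of-paths indirect effect
```

A full mediation model would also control for the predictor when estimating the b path (for example with multiple regression); this sketch only shows the product-of-paths idea behind the coefficients in the figure.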

During another week participants were assigned to minimize phone interruptions by keeping alerts off and their phones away.

At the end of each week, participants completed questionnaires assessing inattention and hyperactivity. Unsurprisingly, the results showed that the participants experienced significantly higher levels of inattention and hyperactivity when alerts were turned on.
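Because each participant was measured in both conditions, the alerts-on versus alerts-off comparison is a paired design: the analysis works on each person’s difference score. A minimal sketch with invented ratings (hypothetical data, not the study’s):

```python
# Paired comparison sketch with hypothetical inattention ratings.
import math

on  = [3.1, 2.8, 3.4, 2.9, 3.3, 3.0]  # inattention, alerts on
off = [2.4, 2.5, 2.6, 2.2, 2.8, 2.3]  # same participants, alerts off

diffs = [a - b for a, b in zip(on, off)]   # per-person change
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t = mean_d / (sd_d / math.sqrt(n))         # paired t statistic, df = n - 1
```

With these toy numbers the paired t statistic comes out around 8 on 5 degrees of freedom, far beyond the conventional significance cutoff; the actual study tested 221 participants with standardized inattention and hyperactivity questionnaires.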

Digital mobile users focus more on concrete details than the big picture

Using digital platforms such as tablets and laptops for reading may also make you more inclined to focus on concrete details rather than interpreting information more contemplatively or abstractly (seeing the big picture), according to another open-access study published in ACM CHI ’16 proceedings.

Researchers at Dartmouth’s Tiltfactor lab and the Human-Computer Interaction Institute at Carnegie Mellon University conducted four studies with a total of 300 participants, who completed information-processing tasks based on a short story and a table of information about fictitious Japanese car models.

The studies revealed that individuals who completed the same information processing task on a digital mobile device (a tablet or laptop computer) versus a non-digital platform (a physical printout) exhibited a lower level of “construal” (abstract) thinking. However, the researchers also found that engaging the subjects in a more abstract mindset prior to an information processing task on a digital platform appeared to facilitate better performance on tasks that require abstract thinking.

Coping with digital overload

Given the widespread acceptance of digital devices, as evidenced by millions of apps, ubiquitous smartphones, and the distribution of iPads in schools, surprisingly few studies exist about how digital tools affect us, the researchers noted.

“The ever-increasing demands of multitasking, divided attention, and information overload that individuals encounter in their use of digital technologies may cause them to ‘retreat’ to the less cognitively demanding lower end of the concrete-abstract continuum,” according to the authors. They also say the new research suggests that “this tendency may be so well-ingrained that it generalizes to contexts in which those resource demands are not immediately present.”

Their recommendation for human-computer interaction designers and researchers: “Consider strategies for encouraging users to see the ‘forest’ as well as the ‘trees’ when interacting with digital platforms.”

Jony Ive, are you listening?


Abstract of “Silence your phones”: Smartphone notifications increase inattention and hyperactivity symptoms

As smartphones increasingly pervade our daily lives, people are ever more interrupted by alerts and notifications. Using both correlational and experimental methods, we explored whether such interruptions might be causing inattention and hyperactivity, symptoms associated with Attention Deficit Hyperactivity Disorder (ADHD), even in people not clinically diagnosed with ADHD. We recruited a sample of 221 participants from the general population. For one week, participants were assigned to maximize phone interruptions by keeping notification alerts on and their phones within their reach/sight. During another week, participants were assigned to minimize phone interruptions by keeping alerts off and their phones away. Participants reported higher levels of inattention and hyperactivity when alerts were on than when alerts were off. Higher levels of inattention in turn predicted lower productivity and psychological well-being. These findings highlight some of the costs of ubiquitous connectivity and suggest how people can reduce these costs simply by adjusting existing phone settings.


Abstract of High-Low Split: Divergent Cognitive Construal Levels Triggered by Digital and Non-digital Platforms

The present research investigated whether digital and non-digital platforms activate differing default levels of cognitive construal. Two initial randomized experiments revealed that individuals who completed the same information processing task on a digital mobile device (a tablet or laptop computer) versus a non-digital platform (a physical print-out) exhibited a lower level of construal, one prioritizing immediate, concrete details over abstract, decontextualized interpretations. This pattern emerged both in digital platform participants’ greater preference for concrete versus abstract descriptions of behaviors as well as superior performance on detail-focused items (and inferior performance on inference-focused items) on a reading comprehension assessment. A pair of final studies found that the likelihood of correctly solving a problem-solving task requiring higher-level “gist” processing was: (1) higher for participants who processed the information for the task on a non-digital versus digital platform and (2) heightened for digital platform participants who had first completed an activity activating an abstract mindset, compared to (equivalent) performance levels exhibited by participants who had either completed no prior activity or completed an activity activating a concrete mindset.
