30 May 2018

HoloLens Audio Guides for Blind People

Microsoft’s HoloLens has an impressive ability to quickly sense its surroundings, but limiting it to displaying emails or game characters on them would show a lack of creativity. New research shows that it works quite well as a visual prosthesis for the vision impaired, not relaying actual visual data but guiding them in real time with audio cues and instructions. The researchers, from Caltech and the University of Southern California, first argue that restoring vision is at present simply not a realistic goal, but that replacing the perception portion of vision isn’t necessary to replicate the practical portion. Crunching visual data and producing a map of high-level features like walls, obstacles and doors is one of the core capabilities of the HoloLens, so the team decided to let it do its thing and recreate the environment for the user from these extracted features. They designed the system around sound, naturally. Every major object and feature can tell the user where it is, via voice or sound. Walls, for instance, hiss (presumably white noise, not a snake hiss) as the user approaches them. The user can also scan the scene, with objects announcing themselves from left to right, each from the direction in which it is located. A single object can be selected and will repeat its callout to help the user find it.
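As a rough illustration of how such audio cues might be organized (a sketch under assumed data structures, not the authors' code), a left-to-right scene scan can be thought of as sorting detected objects by the direction they lie in relative to the user's head and announcing each from that direction, while wall proximity drives the loudness of a noise cue:

```python
# Minimal sketch: object names, fields and play_spatialized() are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SceneObject:
    label: str          # e.g. "chair", "door"
    azimuth_deg: float  # angle relative to the user's head, negative = left
    distance_m: float

def play_spatialized(text: str, azimuth_deg: float) -> None:
    # Stand-in for a text-to-speech call rendered from the object's direction.
    print(f"[{azimuth_deg:+6.1f} deg] {text}")

def scan_scene(objects: list[SceneObject]) -> None:
    """Announce each detected object from left to right, each from its own direction."""
    for obj in sorted(objects, key=lambda o: o.azimuth_deg):
        play_spatialized(f"{obj.label}, {obj.distance_m:.1f} meters", obj.azimuth_deg)

def wall_hiss_gain(distance_m: float, max_range_m: float = 3.0) -> float:
    """White-noise loudness that grows as the user approaches a wall."""
    return max(0.0, 1.0 - distance_m / max_range_m)

if __name__ == "__main__":
    scene = [
        SceneObject("door", azimuth_deg=40.0, distance_m=4.2),
        SceneObject("chair", azimuth_deg=-15.0, distance_m=1.8),
        SceneObject("table", azimuth_deg=5.0, distance_m=2.5),
    ]
    scan_scene(scene)                    # chair, then table, then door
    print("wall gain:", wall_hiss_gain(1.0))
```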


The team recruited seven blind people to test it out. They were given a brief intro but no training, and then asked to accomplish a variety of tasks. The users could reliably locate and point to objects from audio cues, find a chair in a room in a fraction of the time they normally would, and avoid obstacles easily. They were then tasked with navigating from the entrance of a building to a room on the second floor by following the headset’s instructions. A 'virtual guide' repeatedly said 'follow me' from an apparent distance of a few feet ahead, while also warning when stairs were coming, where handrails were and when the user had gone off course. All seven users got to their destinations on the first try, and much more quickly than if they had had to proceed unaided. One subject, the paper notes, said 'That was fun! When can I get one?' Microsoft actually looked into something like this years ago, but the hardware just wasn’t there; HoloLens changes that. Even though it is clearly intended for use by sighted people, its capabilities naturally fill the requirements for a visual prosthesis like the one described here. Interestingly, the researchers point out that this type of system was predicted more than 30 years ago, long before it was even close to possible.
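A minimal sketch of what such a 'virtual guide' loop could look like, assuming a pre-planned path and annotated hazards (none of the names or parameters below come from the paper):

```python
# Hypothetical guide loop: speak "follow me" from a point a short distance ahead
# on the path, warn about nearby annotated hazards, and notice when the user drifts.
import math

def next_guide_point(path, user_pos, lead_m=1.2):
    """Pick the path point roughly lead_m ahead of the user's nearest path vertex."""
    nearest = min(range(len(path)), key=lambda i: math.dist(path[i], user_pos))
    walked = 0.0
    for i in range(nearest, len(path) - 1):
        walked += math.dist(path[i], path[i + 1])
        if walked >= lead_m:
            return path[i + 1]
    return path[-1]

def guide_step(path, user_pos, annotations, off_course_m=1.5):
    cues = []
    nearest_vertex = min(path, key=lambda p: math.dist(p, user_pos))
    if math.dist(nearest_vertex, user_pos) > off_course_m:
        cues.append("You are off course, come back toward my voice.")
    target = next_guide_point(path, user_pos)
    cues.append(f"Follow me. (voice rendered at {target})")
    cues += [msg for pos, msg in annotations if math.dist(pos, user_pos) < 2.0]
    return cues

if __name__ == "__main__":
    path = [(0.0, 0.0), (0.0, 2.0), (0.0, 4.0), (2.0, 4.0)]
    annotations = [((0.0, 3.5), "Stairs ahead, handrail on your right.")]
    print(guide_step(path, (0.3, 3.0), annotations))
```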

More information:

27 May 2018

Oculus Hand-Tracking Accuracy

One of Facebook’s underlying goals for VR is to use it as a means of connecting distant people. While today friends can talk and play in a range of social VR applications, including Facebook Spaces, the representation of users in VR is still a caricature at best. Recently, Oculus showed work being done on hand-tracking to bring more intuitive control and accurate avatars into VR. Oculus’ ‘Half Dome’ prototype is a headset with a 140-degree field of view and a varifocal display. A computer-vision-based hand-tracking system, trained with a self-optimizing machine learning algorithm, achieves tracking that is far more accurate than any previous method for a single hand, two hands, and hand-object interactions. Footage of the hand-tracking in action also appeared to show detection of snapping gestures.


The company used a marker-based tracking system to record hand interactions in high fidelity, then condensed the recorded data into 2D imagery and set a convolutional neural network the task of uniquely identifying the positions of the markers across a large set of hand-pose imagery, effectively allowing the system to learn what a hand should look like given an arbitrary set of marker positions. Ostensibly, this trained system can then be fed markerless camera input of a user’s hands and solve for their position. By the measure of a metric Oculus labels ‘Tracking Success Rate’, the company claims to have achieved a rather astounding 100% success rate for single-hand tracking. It claims even bigger leaps compared to other methods for two-handed and hand-object interactions.
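As a hedged illustration of the general recipe (train a convolutional network on marker-derived imagery to regress keypoint positions, then run it on markerless camera input), here is a minimal PyTorch sketch; the architecture, input size and number of keypoints are assumptions, not Oculus' actual model:

```python
import torch
import torch.nn as nn

NUM_KEYPOINTS = 21   # assumed number of hand keypoints
IMG_SIZE = 96        # assumed input resolution

class KeypointRegressor(nn.Module):
    """Toy CNN that maps a grayscale hand image to (x, y) keypoint positions."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, NUM_KEYPOINTS * 2)  # (x, y) per keypoint

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).view(-1, NUM_KEYPOINTS, 2)

if __name__ == "__main__":
    model = KeypointRegressor()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Dummy batch standing in for (image, marker-position) training pairs.
    images = torch.randn(8, 1, IMG_SIZE, IMG_SIZE)
    targets = torch.rand(8, NUM_KEYPOINTS, 2)
    loss = nn.functional.mse_loss(model(images), targets)
    loss.backward()
    optimizer.step()
    print("training loss:", float(loss))
```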

More information:

24 May 2018

Importance of Appearance of Avatars in Games

The gaming experience over the last decade has evolved tremendously, and player-customized avatars, or virtual doppelgangers, are becoming more realistic every day. Past studies have shown women may prefer avatars that don't look like them, but a new study by the USC Institute for Creative Technologies and the University of Illinois at Urbana-Champaign shows no gender difference and no negative effect on players' performance or subjective involvement based on whether a photorealistic avatar looked like them or like their friend.


The study is the latest to examine the benefits of using self-similar avatars in virtual experiences, and builds primarily on a study by Gale Lucas that analyzed players' performance and subjective involvement with a photorealistic self-similar avatar in a maze-running game. That earlier work showed effects based on avatar appearance as well as gender differences in participants' experiences. The new findings reveal how important it is to carefully consider the extent to which high-fidelity self-similar avatars align with the purpose and structure of an interactive experience before development.

More information:

19 May 2018

Using VR and EEG for Language Comprehension

Recently, the validity of combining EEG with VR to study language processing in naturalistic environments has been confirmed. Combining VR and EEG allows strictly controlled experiments to be run in a more naturalistic environment, offering a clearer picture of how we process language. It also makes it possible to correlate physiological signals with a participant's every movement in the designed environment, and the combination has already been used to study driving behavior, spatial navigation, spatial presence and more. Researchers conducted an experiment to validate the combined use of VR and EEG as a tool to study the neurophysiological mechanisms of language processing and comprehension. They set out to demonstrate this validity by showing that the N400 response occurs in a virtual environment much as it does in traditional settings. The N400 is an event-related potential (ERP) component that peaks around 400ms after a critical stimulus; previous studies in traditional settings have found that incongruence between spoken and visual stimuli produces an enhanced N400. The research team therefore set up situations containing mismatches between verbal and visual stimuli and analyzed brainwaves to look for the N400 response. In the experiment, a total of 25 participants were placed in a virtual restaurant with eight tables in a row and a virtual guest seated at each table. The participants were moved from table to table following a pre-programmed procedure.


The materials consisted of 80 objects and 96 sentences (80 experimental sentences and 16 fillers). Both were relevant to the restaurant setting, but only half of the object and sentence pairs were semantically matched. Each participant experienced an equal number of match and mismatch situations and made 12 rounds through the restaurant over the course of the experiment. At the end of the trial, participants were asked two questions to assess whether they had paid attention and how they perceived the virtual agents. EEG was recorded from 59 active electrodes throughout the experiment. Epochs from 100ms before the onset of the critical noun to 1200ms after it were selected, and ERPs were calculated and analyzed per participant and condition in three time windows: the N400 window (350–600ms), an earlier window (250–350ms) and a later window (600–800ms). Finally, repeated-measures analyses of variance (ANOVAs) were performed for the three predetermined time windows, with condition (match, mismatch), region (vertical midline, left anterior, right anterior, left posterior, left interior) and electrode as factors. ERPs were more negative for the mismatch condition than for the match condition in all time windows, and the difference was particularly significant during the N400 window. In other words, the N400 response was observed in line with predictions, supporting the conclusion that VR and EEG combined can be used to study language comprehension.
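The core of the ERP comparison (averaging epochs per condition and comparing mean amplitude in the predefined time windows) can be sketched as follows; the synthetic data, sampling rate and code structure are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

FS = 500                       # assumed sampling rate in Hz
T_MIN, T_MAX = -0.1, 1.2       # epoch from -100 ms to 1200 ms around noun onset
times = np.arange(T_MIN, T_MAX, 1 / FS)
WINDOWS = {"early": (0.25, 0.35), "N400": (0.35, 0.60), "late": (0.60, 0.80)}

def mean_window_amplitude(erp, window):
    """Mean amplitude of an ERP (channels x samples) within a time window."""
    lo, hi = window
    mask = (times >= lo) & (times < hi)
    return erp[:, mask].mean()

rng = np.random.default_rng(0)
# Synthetic epochs standing in for real recordings: trials x 59 channels x samples.
match_epochs = rng.normal(0, 1, (40, 59, times.size))
mismatch_epochs = rng.normal(0, 1, (40, 59, times.size))

for name, window in WINDOWS.items():
    a = mean_window_amplitude(match_epochs.mean(axis=0), window)
    b = mean_window_amplitude(mismatch_epochs.mean(axis=0), window)
    print(f"{name}: match {a:+.3f} uV vs mismatch {b:+.3f} uV")
```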

More information:

16 May 2018

First Wireless Flying Robotic Insect

Insect-sized flying robots could help with time-consuming tasks like surveying crop growth on large farms or sniffing out gas leaks. These robots soar by fluttering tiny wings because they are too small to use propellers, like those seen on their larger drone cousins. Small size is advantageous: These robots are cheap to make and can easily slip into tight places that are inaccessible to big drones. But current flying robo-insects are still tethered to the ground. The electronics they need to power and control their wings are too heavy for these miniature robots to carry. Now, engineers at the University of Washington have for the first time cut the cord and added a brain, allowing their RoboFly to take its first independent flaps. This might be one small flap for a robot, but it's one giant leap for robot-kind. 


RoboFly is slightly heavier than a toothpick and is powered by a laser beam. It uses a tiny onboard circuit that converts the laser energy into enough electricity to operate its wings. The engineering challenge is the flapping: wing flapping is a power-hungry process, and both the power source and the controller that directs the wings are normally too big and bulky to ride aboard a tiny robot. But a flying robot should be able to operate on its own, so the engineers decided to use a narrow, invisible laser beam to power their robot. They pointed the laser beam at a photovoltaic cell, which is attached above RoboFly and converts the laser light into electricity. A circuit boosted the seven volts coming out of the photovoltaic cell up to the 240 volts needed for flight. To give RoboFly control over its own wings, the engineers added a microcontroller to the same circuit.
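As a back-of-the-envelope sketch of that power chain (a laser-lit photovoltaic cell at about seven volts, a boost stage stepping that up to 240 volts, and a microcontroller shaping the wing-drive signal), the following is illustrative only; the flapping frequency and sine drive shape are assumptions, not RoboFly's actual firmware:

```python
import math

V_CELL = 7.0       # volts from the photovoltaic cell (from the article)
V_FLIGHT = 240.0   # volts needed to drive the wings (from the article)
FLAP_HZ = 170.0    # assumed flapping frequency, for illustration

boost_ratio = V_FLIGHT / V_CELL
print(f"boost ratio: ~{boost_ratio:.0f}x step-up")   # roughly 34x

def wing_drive(t_s: float) -> float:
    """Assumed sinusoidal drive voltage the microcontroller could command at time t."""
    return 0.5 * V_FLIGHT * (1 + math.sin(2 * math.pi * FLAP_HZ * t_s))

# Sample one flap period of the commanded waveform.
samples = [wing_drive(i / (FLAP_HZ * 8)) for i in range(8)]
print([f"{v:.0f} V" for v in samples])
```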

More information: