31 August 2018

Frontiers in Robotics and AI Article

This month, HCI Lab researchers and colleagues from iMareCulture published a peer-reviewed paper in Frontiers in Robotics and AI entitled "Impact of Dehazing on Underwater Marker Detection for Augmented Reality". The paper describes the visibility conditions affecting underwater scenes and surveys existing dehazing techniques that improve the quality of underwater images. Four underwater dehazing methods are selected and evaluated for their ability to improve the success of square-marker detection in underwater videos. Two of the reviewed methods, Multi-Scale Fusion and Bright Channel Prior, represent image restoration approaches.


The other two methods evaluated, Automatic Color Enhancement and the Screened Poisson Equation, are image enhancement methods. The evaluation uses a diverse test data set covering different environmental conditions. The results show an increased number of successful marker detections in videos pre-processed by the dehazing algorithms and quantify the performance of each compared method. The Screened Poisson method performs slightly better than the other methods across the tested environments, while Bright Channel Prior and Automatic Color Enhancement show similarly positive results.
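Conceptually, the evaluation pipeline pre-processes each video frame and then runs a square-marker detector, counting how often detection succeeds. Below is a minimal sketch of such a pipeline in Python, assuming OpenCV with the aruco contrib module (pre-4.7 API); CLAHE is used here as a simple stand-in enhancer and is not one of the four methods the paper compares, and the video path is a placeholder.

    # Minimal sketch: enhance each frame, then attempt square-marker detection.
    # CLAHE is a stand-in enhancer, NOT one of the paper's four dehazing methods.
    # Requires opencv-contrib-python (pre-4.7 aruco API assumed).
    import cv2

    def enhance_frame(bgr):
        """Contrast-enhance a frame (stand-in for a dehazing method)."""
        lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        lab = cv2.merge((clahe.apply(l), a, b))
        return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

    def count_detections(video_path, enhance=True):
        """Count frames in which at least one square marker is detected."""
        dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
        params = cv2.aruco.DetectorParameters_create()
        cap = cv2.VideoCapture(video_path)
        hits = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if enhance:
                frame = enhance_frame(frame)
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary, parameters=params)
            if ids is not None:
                hits += 1
        cap.release()
        return hits

Comparing count_detections(path, enhance=True) against enhance=False over the same videos mirrors the paper's measurement of how pre-processing changes detection success.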

More information:

29 August 2018

Video Games Can Boost Empathy

A space-exploring robot crashes on a distant planet. In order to gather the pieces of its damaged spaceship, it needs to build emotional rapport with the local alien inhabitants. The aliens speak a different language, but their facial expressions are remarkably human-like. This fantastical scenario is the premise of a video game developed for middle schoolers by University of Wisconsin-Madison researchers to study whether video games can boost kids' empathy, and to understand how learning such skills can change neural connections in the brain.


Results reveal for the first time that, in as few as two weeks, kids who played a video game designed to train empathy showed greater connectivity in brain networks related to empathy and perspective taking. Some also showed altered neural networks commonly linked to emotion regulation, a crucial skill that this age group is beginning to develop. On average, youth between the ages of 8 and 18 rack up more than 70 minutes of video gameplay daily, according to data from the Kaiser Family Foundation. The research was funded by a grant from the Bill & Melinda Gates Foundation.

More information:

19 August 2018

Water Simulation Captures Small Details

When designers select a method for simulating water and waves, they have to choose either fast computation or realistic effects; state-of-the-art methods are only able to optimize one or the other. Now, a method developed by researchers at the Institute of Science and Technology Austria (IST Austria) and NVIDIA bridges this gap. Their simulation method can reproduce complex interactions with the environment and tiny details over huge areas in real time. Moreover, the basic construction of the method allows graphics designers to easily create artistic effects.


Current water wave simulations are based on one of two available methods. Fourier-based methods are efficient but cannot model complicated interactions, such as water hitting the shore of an island. Numerical methods, on the other hand, can simulate a wide range of such effects, but are much more expensive computationally. Achieving both at once required ingenuity, as well as a deep understanding of the underlying physics; as the researchers put it, "We encoded the waves with different physical parameters than people previously used."
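For context, here is a minimal sketch of the classic Fourier-based approach the article contrasts with (this is not the IST Austria/NVIDIA method): a heightfield is synthesized from random wave components and animated entirely in Fourier space using the deep-water dispersion relation omega(k) = sqrt(g*k).

    # Minimal sketch of a classic Fourier-based water heightfield
    # (Tessendorf-style); illustrates the approach the article contrasts
    # with, NOT the IST Austria / NVIDIA method.
    import numpy as np

    G = 9.81           # gravity (m/s^2)
    N, L = 128, 100.0  # grid resolution and patch size in metres

    # Wave-vector grid matching the FFT layout.
    k1d = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    kx, ky = np.meshgrid(k1d, k1d)
    k = np.hypot(kx, ky)
    omega = np.sqrt(G * k)  # deep-water dispersion: omega(k) = sqrt(g*k)

    # Random initial spectrum (a crude stand-in for e.g. a Phillips spectrum).
    rng = np.random.default_rng(0)
    h0 = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
    h0 = h0 / np.maximum(k, 1e-6) ** 2
    h0[0, 0] = 0.0  # remove the mean (DC) component

    def heightfield(t):
        """Water surface heights at time t, animated purely in Fourier space."""
        ht = h0 * np.exp(1j * omega * t)
        # Taking the real part is a shortcut; a full implementation
        # enforces Hermitian symmetry of the spectrum instead.
        return np.real(np.fft.ifft2(ht))

    surface = heightfield(1.5)  # (N, N) array of heights at t = 1.5 s

Because each time step is a single inverse FFT, this runs fast over large areas, but nothing in the formulation knows about boundaries, which is exactly the limitation described above.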

More information:

11 August 2018

Precision of HoloLens 2.0 Depth Sensing

While the next-generation HoloLens does not have a launch date yet, we now have a better idea of how big a leap the device will take in terms of depth sensor performance. At the recent Conference on Computer Vision and Pattern Recognition, held in Salt Lake City, Utah in June, Microsoft researchers gave a tutorial showing off the new HoloLens Research Mode, which gives developers access to the device's sensor data.


During the tutorial, the researchers showed the audience a preview of the depth sensor feed from Project Kinect for Azure, which Microsoft unveiled earlier this year as the sensor for the next version of HoloLens. The sensor's higher frame rate at long range was also on display: it captured audience members as far as eight rows back, while the resulting point cloud showed details of chairs and people.
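A point-cloud view like the one demonstrated follows from standard pinhole back-projection: each depth pixel is lifted into 3D using the camera intrinsics. Here is a generic sketch of that computation; the intrinsic values are illustrative placeholders, not HoloLens calibration data.

    # Generic pinhole back-projection from a depth image to a 3-D point cloud.
    # The intrinsics below are illustrative placeholders, not HoloLens values.
    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy):
        """Lift an (H, W) depth image in metres to an (N, 3) point cloud.

        For each pixel (u, v) with depth Z:
            X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy.
        """
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        pts = np.stack((x, y, z), axis=-1).reshape(-1, 3)
        return pts[pts[:, 2] > 0]  # drop pixels with no depth reading

    # Example with synthetic data: a 480x640 frame, everything 8 m away.
    cloud = depth_to_point_cloud(np.full((480, 640), 8.0),
                                 fx=525.0, fy=525.0, cx=320.0, cy=240.0)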

More information:

09 August 2018

Guy with Four Arms, Two Controlled in VR

Researchers at Tokyo-based Keio University's Graduate School of Media Design led the development of a robotic-arms-on-a-backpack project, called Fusion, to explore how people may be able to work together to control one person's body. The operator of the robotic arms and hands can pick things up or move the arms and hands of the person wearing the backpack. The mechanical hands can be removed and replaced with straps that go around the backpack-wearer's wrists, allowing the operator to truly remote-control the wearer's arms. The backpack includes a PC that streams data wirelessly between the arm-wearer and the person controlling the limbs in VR.


The PC also connects to a microcontroller, letting it know how to position the robotic arms and hands and how much torque to apply to the joints. The robotic arms, each with seven joints, jut out of the backpack, along with a connected head, of sorts. The head has two cameras that show the remote operator, in VR, a live feed of everything the backpack-wearer is seeing. When the operator moves their head in VR, sensors track that motion and cause the robotic head to move in response. The wearable system is powered by a battery that lasts about an hour and a half. It’s heavy, weighing in at nearly 21 pounds.
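To make the head-tracking loop described above concrete, here is a hypothetical sketch (not Fusion's actual control code): the operator's tracked head orientation is converted to pan/tilt angles and clamped to the robotic head's joint limits before being sent as servo targets.

    # Hypothetical sketch of mapping a VR head pose to robot-head servo
    # targets; illustrative only, NOT the Fusion project's control code.
    import math

    def quaternion_to_yaw_pitch(w, x, y, z):
        """Extract yaw (pan) and pitch (tilt) in radians from a unit quaternion."""
        yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
        pitch = math.asin(max(-1.0, min(1.0, 2.0 * (w * y - z * x))))
        return yaw, pitch

    def clamp(angle, limit):
        """Keep servo commands within the joint's mechanical range."""
        return max(-limit, min(limit, angle))

    def head_command(quat, pan_limit=math.radians(90), tilt_limit=math.radians(45)):
        """Map a tracked VR head pose to clamped pan/tilt servo targets."""
        yaw, pitch = quaternion_to_yaw_pitch(*quat)
        return clamp(yaw, pan_limit), clamp(pitch, tilt_limit)

    # Example: the operator looks 30 degrees to the left; the robot head
    # receives a matching 30-degree pan target.
    q = (math.cos(math.radians(15)), 0.0, 0.0, math.sin(math.radians(15)))
    pan, tilt = head_command(q)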

More information: