17 June 2018

IEEE CGA 2018 Paper

Recently we published a paper in IEEE Computer Graphics and Applications entitled 'Simulation of Underwater Excavation using Dredging Procedures'. The article presents a novel system for simulating underwater excavation techniques using immersive VR.

 
The focus is not on simulating swimming but on excavating underwater while following established archaeological methods and techniques. In particular, dredging procedures were implemented through a realistic, real-time simulation of sand.
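The paper's sand model isn't reproduced here, but the basic idea of dredging against a deformable seabed can be sketched with a toy height-field: a suction nozzle removes material at a point, and over-steep sand columns slump toward their neighbours. All parameters below are illustrative, not the paper's.

```python
import numpy as np

# A minimal sketch (not the paper's implementation): sand as a 2D
# height-field; a dredge nozzle removes material at one point, and
# columns steeper than the angle of repose slump toward neighbours.
# np.roll gives periodic boundaries, which is fine for a toy.

REPOSE_SLOPE = 0.6   # max allowed height difference between neighbours
SUCTION_RATE = 0.2   # sand removed per step at the nozzle (hypothetical units)

def dredge_step(height, nozzle_xy):
    """Remove sand at the nozzle, then relax over-steep slopes once."""
    x, y = nozzle_xy
    height[x, y] = max(0.0, height[x, y] - SUCTION_RATE)
    # Relax: move sand downhill wherever the slope limit is exceeded.
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        src = np.roll(height, (-dx, -dy), axis=(0, 1))   # neighbour heights
        excess = np.maximum(src - height - REPOSE_SLOPE, 0.0) * 0.5
        height += excess                                  # cell gains sand
        height -= np.roll(excess, (dx, dy), axis=(0, 1))  # neighbour loses it
    return height

sand = np.full((64, 64), 5.0)          # flat seabed, 5 units of sand
for _ in range(100):                   # dredging carves a crater at the centre
    sand = dredge_step(sand, (32, 32))
```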

More information:

16 June 2018

AI Detects Pose Through Walls

X-ray vision has long seemed like a far-fetched sci-fi fantasy, but over the last decade a team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) has continually gotten us closer to seeing through walls. Their latest project, 'RF-Pose', uses artificial intelligence (AI) to teach wireless devices to sense people's postures and movement, even from the other side of a wall. The researchers use a neural network to analyze radio signals that bounce off people's bodies, and can then create a dynamic stick figure that walks, stops, sits and moves its limbs as the person performs those actions.
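The published network isn't reproduced here, but the general recipe, assumed from standard pose-estimation practice rather than taken from the paper, is a convolutional network that turns stacks of radio heatmaps into one confidence map per body joint:

```python
import torch
import torch.nn as nn

# A minimal sketch, NOT the RF-Pose architecture: a small CNN mapping
# radio heatmaps (assumed here: two antenna projections stacked over a
# short time window) to per-keypoint confidence maps, the same output
# format used by image-based pose estimators.

N_KEYPOINTS = 14          # assumed skeleton size for the stick figure
RF_CHANNELS = 20          # assumed: 2 projections x 10 time frames

class RFPoseSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(RF_CHANNELS, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            # One confidence map per body keypoint.
            nn.Conv2d(64, N_KEYPOINTS, kernel_size=1),
        )

    def forward(self, rf_frames):
        return self.net(rf_frames)

model = RFPoseSketch()
rf = torch.randn(1, RF_CHANNELS, 64, 64)              # fake RF heatmap stack
heatmaps = model(rf)                                  # (1, 14, 64, 64)
# Each joint's location is the argmax of its confidence map.
flat = heatmaps.flatten(2).argmax(dim=2)              # (1, 14) flat indices
coords = torch.stack((flat // 64, flat % 64), dim=2)  # (row, col) per joint
```

Connecting those per-frame joint locations over time is what produces the dynamic stick figure described above.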


The team says that the system could be used to monitor diseases like Parkinson's and multiple sclerosis (MS), providing a better understanding of disease progression and allowing doctors to adjust medications accordingly. It could also help elderly people live more independently, while providing the added security of monitoring for falls, injuries and changes in activity patterns. The team is currently working with doctors to explore multiple applications in healthcare. Besides healthcare, the team says that RF-Pose could also be used for new classes of video games where players move around the house, or even in search-and-rescue missions to help locate survivors.

More information:

14 June 2018

AI Does Household Chores

Computer scientists have recently been working on teaching machines to do a wider range of tasks around the house. Researchers have now demonstrated 'VirtualHome', a system that can simulate detailed household tasks and then have artificial agents execute them, opening up the possibility of one day teaching robots to do such tasks. The team trained the system using nearly 3,000 programs of various activities, each broken down into subtasks for the computer to understand.


A simple task like making coffee would also include the step of grabbing a cup. The researchers demonstrated VirtualHome in a 3D world inspired by the Sims video game. The AI agent can execute 1,000 of these interactions in the Sims-style world, across eight different scenes including a living room, kitchen, dining room, bedroom, and home office. The project was co-developed by researchers from CSAIL, the University of Toronto, McGill University, and the University of Ljubljana.
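The dataset's exact program format isn't shown in the article, but the underlying idea, an activity as an ordered list of atomic steps an agent executes one by one, can be sketched like this (step names are illustrative, not the dataset's actual vocabulary):

```python
from dataclasses import dataclass

# A minimal sketch of the idea behind VirtualHome-style "programs":
# a household activity is an ordered list of atomic steps that an
# embodied agent can execute in sequence.

@dataclass
class Step:
    action: str   # e.g. "walk", "grab", "switch_on"
    target: str   # object or location the action applies to

make_coffee = [
    Step("walk", "kitchen"),
    Step("grab", "cup"),              # the implicit subtask the text mentions
    Step("put", "cup_under_machine"),
    Step("switch_on", "coffee_machine"),
    Step("wait", "coffee_machine"),
    Step("grab", "cup"),
]

def execute(program):
    """Stand-in for the simulator: an agent runs each step in order."""
    for step in program:
        print(f"[agent] {step.action} -> {step.target}")

execute(make_coffee)
```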

More information:

11 June 2018

Infinite Walking in VR

In the ever-evolving landscape of virtual reality (VR) technology, a number of key hurdles remain. But a team of computer scientists has tackled one of the major challenges in VR that will greatly improve user experience: enabling an immersive virtual experience while being physically limited to one's actual, real-world space. Computer scientists from Stony Brook University, NVIDIA and Adobe have collaborated on a computational framework that gives VR users the perception of infinite walking in the virtual world - while limited to a small physical space. The framework also enables this free-walking experience without causing the dizziness, shakiness, or discomfort typically tied to physical movement in VR. And, users avoid bumping into objects in the physical space while in the VR world.

To do this, researchers focused on manipulating a user's walking direction by exploiting a basic natural phenomenon of the human eye, called saccades. Saccades are quick eye movements that occur when we look at a different point in our field of vision, like when scanning a room or viewing a painting. Saccades occur without our conscious control, generally several times per second. During that time, our brains largely ignore visual input in a phenomenon known as saccadic suppression, leaving us completely oblivious to our temporary blindness and to the motion our eyes just performed.
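Eye trackers commonly catch saccades with a simple angular-velocity threshold, since saccadic eye movements are far faster than smooth pursuit. A minimal sketch of such a detector (the threshold and details are assumptions, not taken from the paper):

```python
import numpy as np

# A minimal sketch of velocity-threshold saccade detection from eye-tracker
# samples. The threshold and the use of raw gaze angles are assumptions;
# real detectors filter noise and often add duration/amplitude checks.

SACCADE_DEG_PER_S = 180.0   # typical threshold; saccades peak far higher

def detect_saccades(gaze_deg, timestamps_s):
    """Return a boolean mask: True where gaze velocity exceeds threshold.

    gaze_deg: (N, 2) array of gaze angles (azimuth, elevation) in degrees.
    timestamps_s: (N,) sample times in seconds.
    """
    dt = np.diff(timestamps_s)
    step = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1)  # degrees moved
    velocity = step / dt                                      # deg/s
    return np.concatenate(([False], velocity > SACCADE_DEG_PER_S))
```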


Using a head- and eye-tracking VR headset, the researchers' new method detects saccadic suppression and redirects users during the resulting temporary blindness. When more redirection is required, researchers attempt to encourage saccades using a tailored version of subtle gaze direction - a method that can dynamically encourage saccades by creating points of contrast in our visual periphery. To date, existing methods addressing infinite walking in VR have limited redirection capabilities or cause undesirable scene distortions; they have also been unable to avoid obstacles in the physical world, like desks and chairs. The team's new method dynamically redirects the user away from these objects. The method runs fast, so it is able to avoid moving objects as well, such as other people in the same room. The researchers ran user studies and simulations to validate their new computational system, including having participants perform game-like search and retrieval tasks. Overall, virtual camera rotation was unnoticeable to users during episodes of saccadic suppression; they could not tell that they were being automatically redirected via camera manipulation. Additionally, in testing the team's method for dynamic path planning in real-time, users were able to walk without running into walls and furniture, or moving objects like fellow VR users.
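Combining the two pieces, the redirection step can be sketched as: whenever the detector above flags a saccade, inject a small, imperceptible yaw rotation into the virtual camera, with its sign chosen by a path planner steering the user away from obstacles. The rotation budget below is illustrative, not the paper's tuned parameter:

```python
# A minimal sketch of saccadic redirected walking: during each detected
# saccade, rotate the virtual scene slightly so the user's physical path
# curves away from walls, furniture, or other people in the room.

MAX_ROTATION_PER_SACCADE_DEG = 0.5   # assumed imperceptible budget

def redirect(camera_yaw_deg, saccade_active, steer_sign):
    """Apply hidden yaw rotation only while the eyes are mid-saccade.

    steer_sign: +1 or -1, chosen by a (hypothetical) path planner that
    steers the user away from physical obstacles.
    """
    if saccade_active:
        camera_yaw_deg += steer_sign * MAX_ROTATION_PER_SACCADE_DEG
    return camera_yaw_deg

# Per-frame usage with the detector above:
#   yaw = redirect(yaw, saccade_mask[frame], planner_sign(user_pos, obstacles))
```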

More information:

10 June 2018

Virtual Movies of Vibrating Molecules

Scientists have shown how an optical chip can simulate the motion of atoms within molecules at the quantum level, which could lead to better ways of creating chemicals for use as pharmaceuticals. An optical chip uses light to process information instead of electricity, and can operate as a quantum computing circuit when using single particles of light, known as photons. Data from the chip allows a frame-by-frame reconstruction of atomic motions to create a virtual movie of a molecule's quantum vibrations, which lies at the heart of the research published today in Nature. These findings are the result of a collaboration between researchers at the University of Bristol, MIT, IUPUI, Nokia Bell Labs, and NTT. As well as paving the way for more efficient pharmaceutical development, the research could prompt new methods of molecular modelling for industrial chemists.


When lasers were invented in the 1960s, experimental chemists had the idea of using them to break apart molecules. However, the vibrations within molecules rapidly redistribute the laser energy before the intended molecular bond is broken. Controlling the behaviour of molecules requires an understanding of how they vibrate at the quantum level. But modelling these dynamics requires massive computational power, beyond what we can expect from coming generations of supercomputers. The Quantum Engineering and Technology Labs at Bristol have pioneered the use of optical chips, controlling single photons of light, as basic circuitry for quantum computers. Quantum computers are expected to be exponentially faster than conventional supercomputers at solving certain problems.
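As a toy illustration of what a 'movie of quantum vibrations' means, and emphatically not the photonic-chip computation itself: a single harmonic vibrational mode prepared in a coherent state is a Gaussian wavepacket whose centre oscillates classically, and each time step gives one frame of the movie.

```python
import numpy as np

# A toy "movie" of one quantum vibrational mode (textbook physics, not the
# photonic-chip simulation): a coherent-state wavepacket in a harmonic mode
# is a Gaussian whose centre follows the classical motion,
#     <x(t)> = x0 * cos(omega * t).
# Dimensionless units: hbar = m = 1.

omega, x0 = 1.0, 2.0                     # mode frequency, initial displacement
x = np.linspace(-6, 6, 400)              # position grid
frames = []
for t in np.linspace(0, 2 * np.pi, 60):  # one vibrational period
    centre = x0 * np.cos(omega * t)
    density = np.exp(-(x - centre) ** 2) / np.sqrt(np.pi)  # |psi(x,t)|^2
    frames.append(density)               # one frame of the movie

# Stacking the frames reconstructs the vibration frame by frame, which is
# the kind of picture the chip's measurement data is used to rebuild.
```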

More information: