20 September 2018

Euronews Futuris iMareCulture Documentary

A few days ago, Euronews Futuris presented a documentary on the iMareCulture project, which I am working on with colleagues from around Europe and beyond. The sunken ruins of ancient cities, the monuments of lost civilisations, may reappear before our eyes thanks to new augmented reality technologies. Two thousand years ago, a now-flooded coastal area near Naples was a fashionable Roman resort, Baiae. Nowadays, you have to dive to see the remains of the luxurious villas. And soon, to make your diving experience even better, you could take your tablet along. The tablet, safely carried in a waterproof case, picks up acoustic signals from underwater beacons.


These signals help two different AR apps to position the diver precisely on a map, guiding them to the most interesting underwater sites, such as a floor mosaic from a submerged Roman villa that would otherwise be hidden from view by sand. The first app is based on acoustic tracking, while the second uses QR codes; in both cases, divers can travel through the virtual city while exploring its submerged ruins. Beyond popularising historical heritage, virtual reality technologies also allowed the researchers to develop a professional simulator that teaches proper excavation techniques at an underwater archaeological site.
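To give a feel for the acoustic positioning step, here is a minimal Python sketch of trilateration: given the known positions of a few seabed beacons and the measured signal travel times, the receiver's position is recovered by least squares. The beacon layout, the timings, and the use of scipy are illustrative assumptions of mine, not details of the actual iMareCulture apps.

```python
# Minimal sketch of acoustic trilateration, the kind of computation an
# underwater AR guide could use to place a diver on a map. Beacon
# coordinates, timings, and the sound speed are illustrative values.
import numpy as np
from scipy.optimize import least_squares

SOUND_SPEED = 1500.0  # approximate speed of sound in seawater, m/s

# Known positions of three seabed beacons (x, y, depth), in metres.
beacons = np.array([
    [0.0,   0.0, -5.0],
    [40.0,  0.0, -6.0],
    [20.0, 35.0, -4.0],
])

# One-way travel times measured by the tablet, in seconds (hypothetical).
travel_times = np.array([0.0213, 0.0187, 0.0240])
ranges = SOUND_SPEED * travel_times  # distance to each beacon

def residuals(p):
    """Difference between predicted and measured beacon distances."""
    return np.linalg.norm(beacons - p, axis=1) - ranges

# Solve for the diver's position by non-linear least squares,
# starting from the centroid of the beacons.
solution = least_squares(residuals, x0=beacons.mean(axis=0))
print("Estimated diver position (x, y, depth):", solution.x.round(2))
```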

More information:

18 September 2018

Editorial Computers & Graphics 2018

A few days ago, the Foreword to the Special Section on Serious Games and Virtual Environments was published by Computers & Graphics (C&G). It features extended and revised versions of selected best technical papers presented at the 9th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games 2017), held 6–8 September 2017 in Athens, Greece. VS-Games 2017 was jointly organized by the National Technical University of Athens (NTUA), Greece, and the Human Computer Interaction Laboratory (HCI Lab), Faculty of Informatics, Masaryk University, Czech Republic, and was awarded technical sponsorship by the Institute of Electrical and Electronics Engineers (IEEE).


The terms Virtual Worlds and Games for Serious Applications cover a broad range of applications, from simulations to computer graphics. VS-Games 2017 addressed some of the significant challenges of these areas, covering both educational and technological issues. Three papers were selected and reviewed based on their relevance to the general field of computer graphics. In particular, topics included eye-tracking, motion capture, multimedia, brain-computer interfaces, and virtual environments. All three papers were reviewed again by three anonymous experts before they could be accepted for publication in Computers & Graphics.

More information:

17 September 2018

Automatically Transforming Deep Fakes in Videos

Researchers at Carnegie Mellon University have devised a way to automatically transform the content of one video into the style of another, making it possible to transfer the facial expressions of comedian John Oliver to those of a cartoon character, or to make a daffodil bloom in much the same way a hibiscus would. Because the data-driven method does not require human intervention, it can rapidly transform large amounts of video, making it a boon to movie production. It can also be used to convert black-and-white films to color and to create content for virtual reality experiences. The technology also has the potential to be used for so-called deep fakes, videos in which a person's image is inserted without permission, making it appear that the person has done or said things that are out of character. Transferring content from one video to the style of another relies on artificial intelligence. In particular, a class of algorithms called generative adversarial networks (GANs) has made it easier for computers to understand how to apply the style of one image to another, particularly when the two images have not been carefully matched.

In a GAN, two models are created: a discriminator that learns to detect what is consistent with the style of one image or video, and a generator that learns how to create images or videos that match a certain style. When the two compete, with the generator trying to trick the discriminator and the discriminator scoring the generator's effectiveness, the system eventually learns how content can be transformed into a certain style. A variant, called cycle-GAN, completes the loop, much like translating English speech into Spanish and then back into English, and then evaluating whether the twice-translated speech still makes sense: if the round trip preserves the meaning, the translation is likely faithful. Using cycle-GAN to analyze the spatial characteristics of images has proven effective in transforming one image into the style of another. That spatial method still leaves something to be desired for video, with unwanted artifacts and imperfections cropping up in the full cycle of translations. To mitigate the problem, the researchers developed a technique, called Recycle-GAN, that incorporates not only spatial but also temporal information.
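As a rough illustration of the losses involved, here is a minimal PyTorch sketch of the cycle and recycle consistency terms. The tiny 1x1-convolution "generators", the toy temporal predictor, and the random frames are placeholders of my own; the real Recycle-GAN uses full image translation networks, a learned temporal predictor, and adversarial losses on top of these terms.

```python
# Toy sketch of cycle consistency (spatial) and recycle consistency
# (temporal) losses, written with PyTorch. All networks are stand-ins.
import torch
import torch.nn as nn

# Placeholder generators: X->Y and Y->X image translators.
G_xy = nn.Conv2d(3, 3, kernel_size=1)
G_yx = nn.Conv2d(3, 3, kernel_size=1)

# Placeholder temporal predictor in domain Y: predicts frame t+1 from
# frames t-1 and t, stacked along the channel axis.
P_y = nn.Conv2d(6, 3, kernel_size=1)

l1 = nn.L1Loss()

def cycle_loss(x):
    """Spatial cycle consistency: X -> Y -> X should return the input."""
    return l1(G_yx(G_xy(x)), x)

def recycle_loss(x_prev, x_curr, x_next):
    """Temporal recycle consistency: translate two frames to Y, predict
    the next Y frame, translate it back, and compare it with the true
    next frame in X."""
    y_prev, y_curr = G_xy(x_prev), G_xy(x_curr)
    y_next_pred = P_y(torch.cat([y_prev, y_curr], dim=1))
    return l1(G_yx(y_next_pred), x_next)

# Three consecutive (batch, channel, height, width) frames, random stand-ins.
frames = [torch.rand(1, 3, 64, 64) for _ in range(3)]
total = cycle_loss(frames[0]) + recycle_loss(*frames)
print("toy cycle+recycle loss:", float(total))
```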

More information:

16 September 2018

Computers & Graphics Article

A few days ago, HCI Lab researchers published a paper in Computers & Graphics entitled "Embodied VR environment facilitates training in motor imagery brain-computer interfaces". Motor imagery (MI) is the predominant control paradigm for today's brain-computer interfaces (BCIs). After sufficient training effort is invested, the accuracy of commands mediated by mental imagery of bodily movements grows to a satisfactory level. However, many issues with MI-BCIs persist, e.g. long and tiresome training, a low bit transfer rate, and BCI illiteracy. This study aims to address the issues with MI-BCI training. In order to facilitate easier and faster learning, an embodied training environment was created.


Participants were placed into a virtual reality scene seen from the first-person view of a human-like avatar, and the mental rehearsal of MI actions was accompanied by corresponding movements performed by the avatar. Leveraging the extension of the sense of ownership, agency, and self-location towards a non-body object has already been proven to help produce stronger MI EEG correlates. In this work, these principles were used to facilitate the MI-BCI training process for the first time. After two training sessions and a final evaluation, the results show significantly higher classification accuracy and score for the group trained in the embodied environment, compared to the control group trained with the standard arrow-based MI-BCI training protocol.
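For readers unfamiliar with how such classification accuracies are obtained, the sketch below shows a generic MI-BCI evaluation in Python: synthetic band-power features for left- versus right-hand imagery, a linear discriminant classifier, and cross-validated accuracy. The data, the feature layout, and the choice of classifier are hypothetical stand-ins of mine, not the pipeline used in the paper.

```python
# Minimal sketch of how MI-BCI classification accuracy is typically
# measured: band-power features per trial, a linear classifier, and
# cross-validation. The synthetic data below stands in for real EEG.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 120, 8   # e.g. mu/beta band power at four electrodes

labels = rng.integers(0, 2, n_trials)   # 0 = left hand, 1 = right hand
X = rng.normal(size=(n_trials, n_features))

# Hypothetical effect: right-hand imagery lowers power at the
# "left-hemisphere" features, mimicking event-related desynchronisation.
X[labels == 1, :4] -= 0.8

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```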

More information:

15 September 2018

New Theory for Phantom Limb Pain

Phantom limb pain is a poorly understood phenomenon in which people who have lost a limb can experience severe pain, seemingly located in that missing part of the body. The condition can be seriously debilitating and can drastically reduce the sufferer's quality of life. But current ideas on its origins cannot explain clinical findings, nor provide a comprehensive theoretical framework for its study and treatment. Researchers at Chalmers University of Technology propose that after an amputation, neural circuitry related to the missing limb loses its role and becomes susceptible to entanglement with other neural networks, in this case the network responsible for pain perception.


Neurons are never completely silent. When not processing a particular job, they might fire at random. This may result in coincidental firing of neurons in that part of the sensorimotor network at the same time as neurons in the network of pain perception. When they fire together, this creates the experience of pain in that part of the body. Through a principle known as Hebb's law (neurons that fire together wire together), neurons in the sensorimotor and pain perception networks become entangled, resulting in phantom limb pain. The new theory also explains why not all amputees suffer from the condition: the randomness, or stochasticity, of the firing means that simultaneous firing may not occur, and the networks may not become linked, in all patients.
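The proposed mechanism is easy to caricature in a few lines of code. The toy simulation below couples a spontaneously firing "orphaned" sensorimotor unit to a pain-perception unit through a Hebbian weight that grows only when the two happen to fire together; the firing rates, learning rate, and decay are illustrative assumptions of mine, not parameters from the paper.

```python
# Toy Hebbian simulation of the proposed entanglement mechanism.
# All numbers are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(42)
steps, rate, eta = 10_000, 0.02, 0.05

w = 0.0  # strength of the sensorimotor -> pain connection
for _ in range(steps):
    limb = rng.random() < rate   # random spontaneous firing of the orphaned unit
    pain = rng.random() < rate   # random spontaneous firing of the pain unit
    w += eta * limb * pain       # Hebb: co-firing strengthens the link
    w *= 0.9999                  # slow passive decay

print(f"final coupling strength: {w:.3f}")
# Re-running with other seeds gives very different final strengths,
# mirroring why only some amputees would develop phantom limb pain.
```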

More information: