27 November 2019

Connect Your Brain to a Smartphone

Neuralink is trying to build an interface that enables someone's brain to control a smartphone or computer, and to make this process as safe and routine as LASIK surgery. So far, Neuralink has experimented only on animals. In these experiments, the company used a surgical robot to embed a tiny probe into a rat's brain, with about 3,100 electrodes distributed across some 100 flexible wires, or threads, each of which is significantly thinner than a human hair.


This device can record the activity of neurons, which could help scientists learn more about how the brain functions, particularly in relation to disease and degenerative disorders. The device has been tested on at least one monkey, which was able to control a computer with its brain. Neuralink's experiments involve embedding a probe into the animal's brain through invasive surgery, using a sewing machine-like robot that drills holes into the skull.

More information:

24 November 2019

VR Interface Allows Long Distance Touch

This new interface is a type of haptic device, a technology that remotely conveys tactile signals. A common example is a video game controller that vibrates when the player's avatar takes a hit. Some researchers think more advanced, wearable versions of such interfaces will become a vital part of making virtual and augmented reality experiences feel as if they are really happening. The researchers developed a vibrating disk, only a couple of millimeters thick, that runs on very little energy. These actuators (a term for devices that give a system physical motion) need so little energy that they can be powered by near-field communication, a wireless method of transferring small amounts of power that is typically used for applications like unlocking a door with an ID card.


The resulting product looks like a lightweight, soft patch of fabric-like material that can flex and twist like a wet suit, maintaining direct contact with the wearer’s skin as their body moves. It consists of thin layers of electronics sandwiched between protective silicone sheets. One layer contains the near-field communication technology that powers the device. This can activate another layer: an array of actuators, each of which can be activated individually and tuned to different vibration frequencies to convey a stronger or weaker sensation. This stack of electronics, slightly thinner than a mouse pad, culminates in a tacky surface that sticks to the skin. Researchers have tested prototype patches of different shapes and sizes to fit on various parts of the body.
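The idea of an array of individually addressable actuators, each tuned to convey a stronger or weaker sensation, can be illustrated with a small sketch. This is not the researchers' actual firmware; the `HapticPatch` class, the grid layout, and the distance-based fade are all hypothetical, chosen only to show how a touch point might be rendered across such an array.

```python
# Hypothetical sketch: driving a grid of haptic actuators like the
# patch described above. Each actuator is addressed individually, with
# amplitude conveying a stronger or weaker sensation.

from dataclasses import dataclass


@dataclass
class Actuator:
    row: int
    col: int
    frequency_hz: float = 0.0   # vibration frequency (0 = off)
    amplitude: float = 0.0      # 0.0 (off) .. 1.0 (strongest)


class HapticPatch:
    """A grid of individually addressable vibrating actuators."""

    def __init__(self, rows: int, cols: int):
        self.grid = [[Actuator(r, c) for c in range(cols)]
                     for r in range(rows)]

    def render_touch(self, center, strength):
        """Map one touch point onto the array, fading with distance."""
        cr, cc = center
        for row in self.grid:
            for a in row:
                dist = abs(a.row - cr) + abs(a.col - cc)  # Manhattan distance
                a.amplitude = max(0.0, strength - 0.3 * dist)
                a.frequency_hz = 200.0 if a.amplitude > 0 else 0.0


patch = HapticPatch(4, 4)
patch.render_touch(center=(1, 1), strength=1.0)
print(patch.grid[1][1].amplitude)  # strongest directly under the touch
```

In a real patch the per-actuator amplitude and frequency would be sent to the hardware layer powered over near-field communication, rather than stored in Python objects.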

More information:

20 November 2019

VRST 2019 Paper

On 14 November 2019, we presented a co-authored paper entitled "A Mobile Augmented Reality Interface for Teaching Folk Dances". The paper was presented at the 25th ACM Symposium on Virtual Reality Software and Technology (VRST 2019), which was held in Sydney, Australia, and sponsored by the EU project Terpsichore. The paper presents a prototype mobile augmented reality interface for assisting the process of learning folk dances.


As a case study, a folk dance was digitized based on recordings from professional dancers. To assess the effectiveness of the technology, it was comparatively evaluated against a large back-projection system under laboratory conditions. Sixteen participants took part in the study; their movements were captured using a motion capture system and then compared with the recordings from the professional dancers. Experimental results indicate that AR has the potential to be used for learning folk dances.
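One way to compare captured learner movements against a professional's recording is a per-frame error over aligned joint angles. The sketch below is illustrative only; the paper's actual metric is not described here, and the frame format (lists of joint angles in degrees) is an assumption.

```python
# Illustrative sketch (not the paper's method): scoring a learner's
# motion-capture frames against a professional dancer's recording.
# Each frame is a list of joint angles; similarity is the mean
# root-mean-square error over time-aligned frames (lower is better).

import math


def rmse_per_frame(learner_frame, reference_frame):
    """RMS difference of joint angles (degrees) for one aligned frame."""
    diffs = [(l - r) ** 2 for l, r in zip(learner_frame, reference_frame)]
    return math.sqrt(sum(diffs) / len(diffs))


def dance_score(learner, reference):
    """Average per-frame RMSE over a sequence of aligned frames."""
    errors = [rmse_per_frame(l, r) for l, r in zip(learner, reference)]
    return sum(errors) / len(errors)


# Two frames of three hypothetical joint angles each.
reference = [[90.0, 45.0, 10.0], [85.0, 50.0, 12.0]]
learner = [[88.0, 47.0, 10.0], [80.0, 52.0, 15.0]]
print(round(dance_score(learner, reference), 2))
```

A production comparison would also need temporal alignment (e.g. dynamic time warping), since a learner rarely keeps the professional's exact tempo.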

More information:

18 November 2019

Apple's Siri Might Interpret Emotions

Apple is developing a way to help interpret a user's requests by adding facial analysis to a future version of Siri or another system. The aim is to cut down the number of times a spoken request is misinterpreted by attempting to analyse emotions. Intelligent software agents can perform actions on behalf of a user. Part of the system entails using facial recognition to identify the user and so provide customized actions, such as retrieving that person's email or playing their personal music playlists. It is also intended, however, to read the user's emotional state.


The system works by capturing audio input through a microphone and one or more images through a camera. Apple notes that expressions can have different meanings, but its method classifies the range of possible meanings according to the Facial Action Coding System (FACS). This is a standard for facial taxonomy, first created in the 1970s, which categorizes every possible facial expression into an extensive reference catalog. Using FACS, Apple's system assigns scores to candidate interpretations to determine which is most likely, and Siri can then react or respond accordingly.
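The scoring step can be sketched in miniature. Apple's actual scoring method is not public, so everything below is hypothetical: the action-unit confidences, the candidate intents, and the weighting formula are invented purely to show how FACS-style evidence could tip an ambiguous request one way or the other.

```python
# Minimal sketch (hypothetical data and formula): weighting candidate
# interpretations of an ambiguous request by how consistent each one is
# with detected FACS action units (AUs).

# Hypothetical detected AUs (AU number -> confidence), e.g.
# AU12 = lip corner puller (smile), AU4 = brow lowerer (frown).
detected_aus = {"AU12": 0.8, "AU4": 0.1}

# Candidate interpretations, each with a prior score from speech
# recognition and the AUs it is consistent with.
candidates = [
    {"intent": "play upbeat music", "prior": 0.5, "aus": {"AU12": 1.0}},
    {"intent": "play calming music", "prior": 0.5, "aus": {"AU4": 1.0}},
]


def score(candidate, aus):
    """Prior weighted up by agreement between expected and detected AUs."""
    agreement = sum(aus.get(au, 0.0) * w
                    for au, w in candidate["aus"].items())
    return candidate["prior"] * (1.0 + agreement)


best = max(candidates, key=lambda c: score(c, detected_aus))
print(best["intent"])  # the smile evidence favors the upbeat request
```

The point is only the shape of the decision: two interpretations with equal speech-recognition priors are separated by the facial evidence.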

More information: