11 November 2019

Identifying Individuals Using Sound

Every sound we hear has a unique signature, shaped by how it was created and by the objects its waves have passed through. A team of South Korean researchers is now exploring whether the unique bioacoustic signatures created as sound waves pass through the human body can be used to identify individuals. The biometric system developed by ETRI uses a transducer to generate vibrations, and thus sound waves, which pass through a given body part. In this case, the body part is a finger, which is easily accessible and convenient. 
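ETRI's implementation is not published, but the excitation-and-measurement idea can be sketched roughly as below. The sample rate, the sweep range, and the way the transmitted response is reduced to a spectral signature are all assumptions made for illustration, not details from the study.

```python
# Minimal sketch of the excitation/measurement idea described above.
# The hardware interface that would actually drive the transducer and
# read the sensor is omitted; only the signal-side steps are shown.
import numpy as np
from scipy.signal import chirp

FS = 48_000          # sample rate in Hz (assumed)
DURATION = 0.5       # seconds of excitation (assumed)

def make_excitation() -> np.ndarray:
    """Generate a frequency sweep to drive the vibration transducer."""
    t = np.linspace(0, DURATION, int(FS * DURATION), endpoint=False)
    return chirp(t, f0=20, f1=20_000, t1=DURATION, method="logarithmic")

def bioacoustic_signature(response: np.ndarray) -> np.ndarray:
    """Reduce the sensed response to a frequency-domain signature.

    Skin, bone, and tissue attenuate different frequencies differently,
    so the magnitude spectrum of the transmitted signal can serve as a
    per-finger signature. Normalizing makes it insensitive to overall gain.
    """
    spectrum = np.abs(np.fft.rfft(response))
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)
```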


After the sound has passed through the skin, bones, and other tissues, a sensor picks up the unique bioacoustic signature. Modeling further improves the system's ability to tease apart the distinct signatures of different individuals. The approach is sensitive enough to distinguish different fingers on the same hand, which means a person must authenticate with the same finger that was originally enrolled. While measuring changes in acoustic vibrations is fairly accurate, it does not yet match the accuracy of fingerprint or iris scans.
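A toy enrollment-and-authentication loop over such signatures could look like the following. The cosine-similarity matcher and the threshold are illustrative stand-ins, not the modeling ETRI actually uses, and the code assumes the unit-normalized signatures from the previous sketch.

```python
# Toy enrollment and authentication over bioacoustic signatures.
# Each user is enrolled with one specific finger; a probe from a
# different finger should fall below the similarity threshold.
import numpy as np

enrolled: dict[str, np.ndarray] = {}   # user id -> enrolled finger signature

def enroll(user_id: str, signature: np.ndarray) -> None:
    enrolled[user_id] = signature

def authenticate(user_id: str, probe: np.ndarray, threshold: float = 0.95) -> bool:
    """Accept only if the probe matches the same finger enrolled earlier."""
    template = enrolled.get(user_id)
    if template is None:
        return False
    similarity = float(np.dot(template, probe))  # both vectors are unit-normalized
    return similarity >= threshold
```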

More information:

08 November 2019

AI Generates Fake Avatars

A new deep learning algorithm can generate high-resolution, photorealistic images of people (faces, hair, outfits, and all) from scratch. The AI-generated models are the most realistic we’ve encountered, and the tech will soon be licensed out to clothing companies and advertising agencies interested in whipping up photogenic models without paying for lights or a catering budget. At the same time, similar algorithms could be misused to undermine public trust in digital media. The algorithm was developed by DataGrid, a tech company housed on the campus of Japan’s Kyoto University, according to a press release. 


The new algorithm is a generative adversarial network (GAN), the kind of AI typically used to churn out new imitations of something that exists in the real world, whether video game levels or images that look like hand-drawn caricatures. DataGrid’s system poses the AI models in front of a nondescript white background and shines realistic-looking light down on them. Each time scientists build a new algorithm that can generate realistic images or deepfakes indistinguishable from real photos, it reads as a fresh warning that AI-generated media could be misused to create manipulative propaganda.
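DataGrid has not published its model, but the adversarial setup a GAN relies on can be illustrated with a minimal PyTorch training step. The toy generator and discriminator and the tiny image size below are placeholders for illustration, not the production architecture that renders the high-resolution models.

```python
# A generic GAN training step: the generator learns to produce images
# that the discriminator cannot tell apart from real photographs.
import torch
import torch.nn as nn

LATENT = 64
IMG = 32 * 32 * 3   # tiny flattened images, for illustration only

G = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Tanh())
D = nn.Sequential(nn.Linear(IMG, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real: torch.Tensor) -> None:
    batch = real.size(0)
    fake = G(torch.randn(batch, LATENT))

    # Discriminator: score real photos as 1, generated images as 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # Generator: push the discriminator to score its fakes as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```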

More information:

03 November 2019

Real-Time Human Thought Reconstruction from Brain Waves Using AI

Researchers from the Russian corporation Neurobotics and the Moscow Institute of Physics and Technology have found a way to visualize a person's brain activity as actual images that mimic what the person observes in real time. This could enable new post-stroke rehabilitation devices controlled by brain signals. To develop devices controlled by the brain and methods for treating cognitive disorders and supporting post-stroke rehabilitation, neurobiologists need to understand how the brain encodes information. A key aspect of this is studying the brain activity of people perceiving visual information, for example, while watching a video. Existing solutions for extracting observed images from brain signals either use functional MRI or analyze the signals picked up via implants directly from neurons; both methods have fairly limited applications in clinical practice and everyday life. The brain-computer interface developed by MIPT and Neurobotics relies on artificial neural networks and electroencephalography (EEG), a technique for recording brain waves via electrodes placed non-invasively on the scalp. By analyzing brain activity, the system reconstructs the images seen by a person undergoing EEG in real time. 
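As a rough illustration of what the non-invasive input looks like, the sketch below reduces one raw EEG window to per-channel band-power features. The channel count, sample rate, and band edges are assumptions for illustration, not details taken from the study.

```python
# Turning one raw EEG window into band-power features, the kind of
# scalp-recorded signal an EEG-based brain-computer interface works from.
import numpy as np
from scipy.signal import welch

FS = 250                      # EEG sample rate in Hz (assumed)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(eeg_window: np.ndarray) -> np.ndarray:
    """eeg_window: array of shape (channels, samples) for one time window."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=FS)    # PSD per channel
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))           # mean power per band
    return np.concatenate(feats)                          # (channels * bands,)
```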


In the first part of the experiment, the neurobiologists asked healthy subjects to watch 20 minutes of 10-second YouTube video fragments. The team selected five arbitrary video categories: abstract shapes, waterfalls, human faces, moving mechanisms, and motor sports. The latter category featured first-person recordings of snowmobile, water scooter, motorcycle, and car races. By analyzing the EEG data, they showed that the brain wave patterns are distinct for each category of videos. In the second phase of the experiment, three random categories were selected from the original five. The researchers developed two neural networks: one for generating random category-specific images from noise, and another for generating similar noise from EEG. The team then trained the networks to operate together in a way that turns the EEG signal into actual images similar to those the test subjects were observing. To test the system's ability to visualize brain activity, the subjects were shown previously unseen videos from the same categories. As they watched, EEGs were recorded and fed to the neural networks, and the system generated convincing images that could be easily categorized in 90 percent of the cases.
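The two-network arrangement can be sketched, very loosely, as an EEG encoder feeding an image decoder: one network maps the EEG features to a latent "noise" vector, the other decodes that vector into an image. The architectures and sizes below are placeholders rather than the MIPT/Neurobotics models.

```python
# Loose sketch of the EEG-to-image pipeline: EEG features -> latent vector
# -> reconstructed image, run per window for real-time operation.
import torch
import torch.nn as nn

LATENT = 128
EEG_FEATS = 64 * 3            # e.g. 64 channels x 3 frequency bands (assumed)
IMG = 64 * 64 * 3

eeg_encoder = nn.Sequential(nn.Linear(EEG_FEATS, 256), nn.ReLU(), nn.Linear(256, LATENT))
image_decoder = nn.Sequential(nn.Linear(LATENT, 512), nn.ReLU(), nn.Linear(512, IMG), nn.Tanh())

def reconstruct(eeg_features: torch.Tensor) -> torch.Tensor:
    """Map a batch of EEG feature vectors to image tensors."""
    latent = eeg_encoder(eeg_features)            # EEG -> category-specific latent code
    return image_decoder(latent).view(-1, 3, 64, 64)
```

In the actual study the two networks were trained jointly so that the images decoded from EEG resembled the video category the subject was watching; the joint training objective is omitted here.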

More information:

02 November 2019

VR Game Beyond Sight and Sound

Five sensory streams are fused together simultaneously in the game to achieve remarkable realism. Called The Lost Foxfire, the 10-minute game engages a player’s senses of vision, audition (hearing), olfaction (smell), somatosensation (touch), and thermoception (the ability to sense the intensity of heat). Besides relying on their vision and hearing, players also need to take cues from their senses of smell and touch to successfully complete the game. Most conventional virtual reality games use headsets and haptic bodysuits to mimic and amplify sensory feedback, for instance, to deliver the sensation of a cool breeze to match the visual scene of the moment. In contrast, the game developed by researchers from the Keio-NUS CUTE Center at the National University of Singapore brings VR multisensory bodysuits to a new level, where players use real-time, life-like sensory feedback to make decisions that directly affect the outcome of the gameplay. 


The entire game system comprises a virtual reality headset paired with a configurable multisensory suit that delivers thermal, wind, and olfactory stimuli to players to assist them in the game. The adjustable suit has five heat modules that let players sense heat on the front, back, and sides of their necks, as well as on their faces. The thermal stimuli can be calibrated and customized to an individual’s tolerance of warmth. When players encounter a fox character in the game, they catch a whiff of the scent of apples, a favorite fruit of foxes. As players get close to fire in the game, they can feel the heat it emits. The team, comprising hardware and product engineers, artists, technology researchers, and designers, took nine months to develop the experimental game, from its conception, coding, and building of special hardware to graphic design and animation. They have filed a patent for the technology behind the configurable multisensory suit.
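As a rough sketch of how such calibration might work, the snippet below maps in-game distance to a heat source onto the five thermal modules while clamping output to a player's comfort level. The module names, the distance falloff, and the set_heat controller callback are hypothetical, not the CUTE Center's API.

```python
# Hypothetical mapping from in-game fire proximity to thermal module output,
# clamped so the stimulus never exceeds the player's calibrated tolerance.
MODULES = ("face", "neck_front", "neck_back", "neck_left", "neck_right")

def heat_level(distance_to_fire: float, tolerance: float) -> float:
    """Closer fire -> more heat, never exceeding the tolerance (0..1)."""
    raw = max(0.0, 1.0 - distance_to_fire / 10.0)   # fades out beyond ~10 m (assumed)
    return min(raw, tolerance)

def update_suit(distances: dict[str, float], tolerance: float, set_heat) -> None:
    """distances: per-module distance estimate; set_heat(module, level) drives the hardware."""
    for module in MODULES:
        set_heat(module, heat_level(distances.get(module, float("inf")), tolerance))
```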

More information: