20 August 2019

Spectacles 3 Camera Glasses

Snap’s new Spectacles 3 don’t look that different from their predecessors. They consist of a designer metal frame with a pair of HD cameras. In exchange for the embarrassment of wearing them, the Spectacles 3 offer the chance to shoot 3D video hands-free and then upload it to the Snapchat app, where it can be edited and have effects applied. And that’s pretty much it. You can’t view the video, or anything else, in the lenses. There are no embedded displays. Still, the new Spectacles foreshadow a device that many of us may wear as our primary personal computing device in about 10 years. Based on what I’ve learned by discussing AR with technologists at companies big and small, here is what such a device might look like and do.


Unlike Snap’s new goggles, future glasses will overlay digital content on the real-world imagery we see through the lenses. We might even wear mixed reality (MR) glasses that can realistically intersperse digital content within the layers of the real world in front of us. The addition of a second camera on the front of the new Spectacles is important because locating digital imagery within reality requires a 3D view of the world: a depth map. The Spectacles derive depth by combining the input of the two HD cameras on the front, much as human binocular vision does. The Spectacles use that depth map to shoot 3D video for later viewing, but the second camera is also a step toward supporting mixed reality experiences in real time.
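
The mechanics behind that depth map are standard stereo vision. Here is a minimal sketch in Python using OpenCV’s block-matching stereo, assuming a calibrated and rectified image pair; the file names, focal length, and baseline below are illustrative placeholders, not Spectacles specifications, and Snap’s actual pipeline is not public.

```python
import cv2
import numpy as np

# Load a rectified left/right pair (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching: for each pixel, find the horizontal shift (disparity)
# of the best-matching patch between the two views.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Depth follows from similar triangles: depth = focal_length * baseline / disparity.
focal_px, baseline_m = 700.0, 0.10  # assumed values
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]
```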

More information:

19 August 2019

VR Helps Parkinson’s Patients to Walk Steadily

USC engineers have teamed with researchers and VR game designers to help Parkinson’s patients walk steadily and with confidence. Symptoms such as stiffness, uncontrollable shaking, and gait and balance problems are the first warning signs. According to the Parkinson’s Outcomes Project, the largest-ever clinical study of Parkinson’s, conducted by the Parkinson’s Foundation, 71 percent of people who have lived with Parkinson’s for at least 10 years are susceptible to falls. The serious injuries caused by falls, particularly in older patients, can lead to disability, social isolation, and even nursing home placement.

Patients roam a virtual modern city, complete with roads, pavements, buildings, and cars, and an optional day/night mode, as they walk on a treadmill. They gain points by avoiding obstacles, such as chairs, paper, and plastic cups, that are randomly generated on the sidewalk. One problem arises, however: the VR environment lacks the dimension of touch, which makes it not only unnatural but also disconcerting when patients walk into an object. Viterbi students have made the VR experience more immersive by introducing a haptic feedback component in addition to audio feedback.
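
To make the loop concrete, here is a minimal sketch of how random obstacle generation, scoring, and collision-triggered haptic and audio cues might fit together. The function names, collision radius, and feedback values are hypothetical, not the Viterbi team’s implementation.

```python
import random

# Obstacle types from the article; positions are metres along the virtual sidewalk.
OBSTACLE_KINDS = ["chair", "paper", "plastic_cup"]

def spawn_obstacle(sidewalk_length_m: float) -> dict:
    """Generate a random obstacle somewhere ahead on the sidewalk."""
    return {"kind": random.choice(OBSTACLE_KINDS),
            "position_m": random.uniform(0.0, sidewalk_length_m)}

def trigger_haptic_pulse(intensity: float, duration_s: float) -> None:
    """Stub: a real system would drive a haptic actuator here."""
    print(f"haptic pulse: intensity={intensity}, duration={duration_s}s")

def update(player_pos_m: float, obstacles: list, score: int,
           collision_radius_m: float = 0.3) -> int:
    """Score avoided obstacles; fire haptic and audio cues on collisions."""
    for obs in list(obstacles):
        if abs(player_pos_m - obs["position_m"]) < collision_radius_m:
            trigger_haptic_pulse(intensity=0.8, duration_s=0.2)
            print(f"audio cue: bumped into a {obs['kind']}")
            obstacles.remove(obs)
        elif obs["position_m"] < player_pos_m:
            score += 1  # walked past without touching it
            obstacles.remove(obs)
    return score

# Example tick: one obstacle field, player 4 m down the sidewalk.
obstacles = [spawn_obstacle(50.0) for _ in range(10)]
print(update(player_pos_m=4.0, obstacles=obstacles, score=0))
```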

More information:

13 August 2019

AI Assesses Violinist’s Bow Movements

In a recent study, members of the Music and Machine Learning Lab of the Music Technology Group (MTG) at the Department of Information and Communication Technologies (DTIC) of UPF apply artificial intelligence to the automatic classification of violin bow gestures according to the performer’s movement. The researchers recorded movement and audio data corresponding to seven representative bow techniques (Détaché, Martelé, Spiccato, Ricochet, Sautillé, Staccato, and Bariolage) performed by a professional violinist. They captured inertial motion data from the performer’s right forearm and synchronized it with the audio recordings.
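
Aligning the two streams is the key preprocessing step. Below is a minimal sketch of one common approach, resampling the inertial signal onto the audio analysis-frame grid by linear interpolation; the sampling rates, hop size, and shared-clock assumption are illustrative, and the study’s actual procedure may differ.

```python
import numpy as np

def sync_imu_to_audio_frames(imu_t, imu_xyz, audio_sr, n_audio_samples, hop=512):
    """Interpolate irregular IMU samples onto the audio analysis-frame timeline.

    Assumes the IMU timestamps (seconds) share a clock with the audio stream,
    e.g. via a common start trigger. imu_xyz has one row per IMU sample.
    """
    n_frames = n_audio_samples // hop
    frame_times = np.arange(n_frames) * hop / audio_sr  # start time of each frame
    synced = np.column_stack([
        np.interp(frame_times, imu_t, imu_xyz[:, axis])
        for axis in range(imu_xyz.shape[1])
    ])
    return frame_times, synced

# Example: a 100 Hz accelerometer stream aligned to 10 s of 44.1 kHz audio.
imu_t = np.arange(0.0, 10.0, 0.01)
imu_xyz = np.random.default_rng(0).normal(size=(imu_t.size, 3))
times, motion_per_frame = sync_imu_to_audio_frames(imu_t, imu_xyz, 44100, 44100 * 10)
```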


The data used in this study are available in an online public repository. After extracting features from the motion and audio data, the researchers trained a system to automatically identify the different bow techniques used in playing the violin. The model can identify the studied techniques with more than 94% accuracy. These results make it possible to apply this work in a practical learning scenario, in which violin students can benefit from real-time feedback provided by the system. This study was conducted within the framework of the TELMI (Technology Enhanced Learning of Musical Instrument Performance) project.
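
As a rough picture of the train-and-evaluate step, here is a minimal sketch using a generic off-the-shelf classifier; the actual model, features, and evaluation protocol in the study may differ, and the random arrays below are placeholders for the published data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

TECHNIQUES = ["Détaché", "Martelé", "Spiccato", "Ricochet",
              "Sautillé", "Staccato", "Bariolage"]

# X: one feature vector per excerpt, concatenating motion and audio descriptors
# (e.g. statistics over the synchronized frames from the previous sketch).
# Random data stands in for the real features available in the repository.
rng = np.random.default_rng(0)
X = rng.normal(size=(700, 40))
y = rng.integers(0, len(TECHNIQUES), size=700)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(f"cross-validated accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2%}")
```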

More information:

10 August 2019

Frontiers in ICT Article 2019

A few days ago, HCI Lab researchers, in collaboration with Konica Minolta, published a paper in Frontiers in ICT entitled ‘An Interactive and Multimodal Virtual Mind Map for Future Workplace’. The paper presents multimodal collaborative VR interfaces that facilitate various types of intelligent ideation/brainstorming (or any other predominantly creative activity). Participants can be located in different environments and work toward a common goal on a particular topic within a limited amount of time.


Users can group (or ungroup) actions (i.e., notes belonging to a specific category) and intuitively interact with them using a combination of different modalities. Ideally, the multimodal interface should allow users to create actions and then post them on the virtual mind map using one or more intuitive methods, such as voice recognition, gesture recognition, and other physiological or neurophysiological sources. Finally, users can access the content and assess it.
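
To illustrate the underlying data model, here is a minimal sketch of actions that carry their input modality and can be grouped or ungrouped on the map. The class and method names are invented for illustration and do not reflect the paper’s implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Action:
    """A note posted on the virtual mind map."""
    text: str
    modality: str               # e.g. "voice", "gesture", "neurophysiological"
    category: Optional[str] = None

@dataclass
class MindMap:
    actions: List[Action] = field(default_factory=list)

    def post(self, text: str, modality: str) -> Action:
        """Create an action from any input modality and post it on the map."""
        action = Action(text=text, modality=modality)
        self.actions.append(action)
        return action

    def group(self, category: str, members: List[Action]) -> None:
        for action in members:
            action.category = category

    def ungroup(self, members: List[Action]) -> None:
        for action in members:
            action.category = None

# Example: a voice-recognized idea grouped with a gesture-created one.
mind_map = MindMap()
a = mind_map.post("shorter stand-ups", modality="voice")
b = mind_map.post("async status board", modality="gesture")
mind_map.group("meetings", [a, b])
```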

More information: