31 December 2018

RFID Tracks Body Movements and Shape Changes

Carnegie Mellon University researchers have found ways to track body movements and detect shape changes using arrays of RFID tags. RFID-embedded clothing could thus be used to control avatars in video games, or to tell you when you should sit up straight. The researchers devised methods for tracking the tags, and thus for monitoring movements and shapes. RFID tags reflect certain radio frequencies, so it would be possible to use multiple antennas to track this backscatter and triangulate the locations of the tags.

Instead, the CMU researchers showed they could use a single, mobile antenna to monitor an array of tags without any prior calibration. Just how this works varies based on whether the tags are being used to track the body's skeletal positions or to track changes in shape. For body-movement tracking, arrays of RFID tags are positioned on either side of the knee, elbow or other joints. By keeping track of the ever-so-slight differences in when the backscattered radio signals from each tag reach the antenna, it is possible to calculate the angle of bend in a joint.
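The final step of that calculation can be sketched in a few lines. This is a minimal illustration only, assuming tag positions have already been recovered from the arrival-time differences (the CMU pipeline works directly on the backscatter measurements); the function name and the coordinates are hypothetical.

```python
import math

def joint_angle(tag_a, tag_joint, tag_b):
    """Bend angle (degrees) at a joint, given three estimated tag positions.

    tag_a and tag_b sit on either side of the joint; each position is an
    (x, y, z) tuple in metres, as recovered from the RFID backscatter.
    """
    v1 = tuple(p - q for p, q in zip(tag_a, tag_joint))
    v2 = tuple(p - q for p, q in zip(tag_b, tag_joint))
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_theta = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_theta))

# Tags collinear along a straight limb -> fully extended joint.
print(joint_angle((0, 0.2, 0), (0, 0, 0), (0, -0.2, 0)))  # 180.0
# Tags at a right angle -> joint bent 90 degrees.
print(joint_angle((0, 0.2, 0), (0, 0, 0), (0.2, 0, 0)))   # 90.0
```

The same geometry applies to any joint: only the tag placement changes.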

More information:

23 December 2018

AI for Better Computer Graphics

At TU Wien (Vienna), neural networks have been developed that make it much easier to create photorealistic pictures of a wide variety of materials. If computer-generated images are to look realistic, different materials have to be rendered differently: the metallic sheen of a coin looks quite different from the dull gloss of a wooden plate or the slightly transparent skin of a grape. Exactly simulating such material effects usually requires a lot of experience and patience: many different parameters need to be adjusted carefully, the computer takes a while to calculate the corresponding image, and the same procedure is repeated until the result is fully satisfactory. New methods have now been developed that make this process much faster and easier.

The AI recognizes the designer's creative intent and autonomously proposes suitable sample images, while a neural network applies the selected material parameters to a sample object in real time. This is a big step forward for very different applications in graphics, from game design and film animation to architectural visualization. For the computer to learn how to display a specific material, different versions of a sample object are displayed, and a person clicks on the image that looks closest to the desired result. After a few practice rounds, the artificial intelligence has learned the physical properties of the desired material. In this way the system acquires parameters that can then be used to insert objects of this material into any image, matching any specific lighting.
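The interaction loop described above, showing candidates and letting a person pick the closest match round after round, can be sketched as a simple preference-based refinement. This is a hypothetical illustration under stated assumptions, not TU Wien's actual method: the `score` callback stands in for the artist clicking on a sample image, and the parameter vector is a made-up stand-in for real material parameters such as gloss or roughness.

```python
import random

def refine_material(score, dim=4, candidates=8, rounds=6, spread=0.5):
    """Narrow in on material parameters via repeated pick-the-best rounds.

    score(params) plays the role of the user's click: it returns higher
    values for renders that look closer to the desired material.
    Parameters are vectors in [0, 1]^dim.
    """
    best = [0.5] * dim  # start from a neutral material
    for _ in range(rounds):
        # Show the current best plus randomly perturbed variants of it.
        pool = [best] + [
            [min(1.0, max(0.0, b + random.uniform(-spread, spread)))
             for b in best]
            for _ in range(candidates - 1)
        ]
        best = max(pool, key=score)  # the "clicked" sample wins the round
        spread *= 0.6                # sample more tightly each round
    return best

# Toy example: the closer to this hidden vector, the better the render "looks".
target = [0.8, 0.2, 0.6, 0.4]
found = refine_material(
    lambda p: -sum((a - b) ** 2 for a, b in zip(p, target))
)
```

Because the previous best is always kept in the candidate pool, the chosen parameters can only improve (or stay equal) from round to round.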

More information:

20 December 2018

AI Dances Like Human

Choreographers and researchers at Google’s Arts & Culture in Paris are using AI to create dance. They are using an AI-driven tool that can generate its own independent choreography based on hundreds of hours of video footage it has been fed. The footage comes both from the choreographer’s archives and from the ten dancers, whose individual styles were captured in solos performed for the technology.

The project looked back on a 25-year history of recorded video and wondered whether technology could do anything to help keep the performances fresh. The purpose is to make use of this massive archive of work in an interesting way. It all comes down to the same question that is crucial in choreography: how to keep creating fresh content.

More information:

18 December 2018

Focals AR Glasses

North, the company behind the Focals AR glasses, has acquired the technology portfolio behind another set of AR glasses, the cancelled Intel Vaunt glasses. The company wouldn’t disclose the terms of the deal, but Intel Capital is a major investor in North and led its last financing round in 2016. Both Focals and Vaunt had the same basic idea: use a tiny laser embedded in the stem of your glasses to project a reflected image directly onto your retina. Unlike other AR and VR efforts, the goal is to create a pair of glasses you’d actually want to wear, something that looks relatively normal and doesn’t weigh too much.

Intel struggled to get its glasses out of the lab and on a path to actually becoming available to consumers. Like so many Intel prototypes, it failed to find the right partner to bring the technology to market. Except, well, it just sort of did, in North’s Focals. Focals have the same basic idea as Vaunt but are actually set to ship to consumers fairly soon. The Canadian company already has a couple of stores where you can select the right style of glasses. But more importantly, you need to get them fitted, North says, because aligning the projector so you can see the image requires that the glasses be adjusted for your face.

More information:

16 December 2018

Relationship Between Eyes and Mental Workload

At nearly breakneck speed, the demands of work productivity in today's society seem to have increased tenfold. Enter multitasking as a way to cope with the insistence that tasks be completed almost immediately. Previous studies on workload and productivity include physical aspects, such as how much a person walks or carries, but they do not take into account a person's state of mind. Now, MU College of Engineering researchers have discovered that a person's eyes may offer a solution. To do this, they compared data from a workload metric developed by NASA for its astronauts with their observations of pupillary response from participants in a lab study. Using a simulated oil and gas refinery plant control room, researchers watched, through motion-capture and eye-tracking technology, as the participants reacted to unexpected changes, such as alarms, while simultaneously monitoring the readings of gauges on two monitors.

During the scenario's simple tasks, the participants' eye-searching behaviors were more predictable. Yet, as the tasks became more complex and unexpected changes occurred, their eye behaviors became more erratic. By applying a formula called the fractal dimension to the data from this lab study, researchers discovered a negative relationship between the fractal dimension of pupil dilation and a person's workload. This showed that pupil dilation could be used to indicate the mental workload of a person in a multitasking environment. Researchers hope this finding can give better insight into how systems should be designed to avoid mentally overloading workers and build a safer working environment. One day this finding could give employers and educators alike a tool to determine the maximum stress level a person can experience before they become fatigued and their performance begins to decline.
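A common way to compute a fractal dimension for a time series such as pupil diameter is Higuchi's method, which measures how the apparent "length" of the signal changes with sampling scale. The sketch below is a generic illustration of that technique, not necessarily the exact formula the MU researchers used, and the pupil-diameter series is made up.

```python
import math

def higuchi_fd(series, kmax=8):
    """Higuchi fractal dimension of a 1-D signal (e.g. pupil diameter over time).

    Returns ~1 for smooth signals and values approaching 2 as the signal
    becomes more erratic.
    """
    n = len(series)
    log_inv_k, log_len = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):  # one coarse-grained subseries per offset m
            n_i = (n - m - 1) // k
            if n_i == 0:
                continue
            dist = sum(abs(series[m + i * k] - series[m + (i - 1) * k])
                       for i in range(1, n_i + 1))
            # Normalise the curve length for this offset and scale.
            lengths.append(dist * (n - 1) / (n_i * k * k))
        log_inv_k.append(math.log(1.0 / k))
        log_len.append(math.log(sum(lengths) / len(lengths)))
    # Least-squares slope of log-length vs log(1/k) is the fractal dimension.
    p = len(log_inv_k)
    mx, my = sum(log_inv_k) / p, sum(log_len) / p
    return (sum((x - mx) * (y - my) for x, y in zip(log_inv_k, log_len))
            / sum((x - mx) ** 2 for x in log_inv_k))

# A steadily dilating pupil (smooth ramp) has dimension ~1;
# erratic dilation during heavy multitasking would score higher.
smooth = [3.0 + 0.001 * t for t in range(500)]
print(round(higuchi_fd(smooth), 2))  # 1.0
```

Under the study's reported negative relationship, a lower dimension for the dilation series would indicate a higher mental workload.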

More information:

13 December 2018

Video Games May Improve Empathy in Middle Schoolers

A fantastical scenario involving a space-exploring robot crashing on a distant planet is the premise of a video game developed for middle schoolers by University of Wisconsin-Madison researchers to study whether video games can boost kids' empathy, and to understand how learning such skills can change neural connections in the brain.

Results reveal for the first time that, in as few as two weeks, kids who played a video game designed to train empathy showed greater connectivity in brain networks related to empathy and perspective taking. Some also showed altered neural networks commonly linked to emotion regulation, a crucial skill that this age group is beginning to develop. Researchers obtained functional magnetic resonance imaging scans from participants in the laboratory and looked at connections among areas of the brain, including those associated with empathy and emotion regulation.

More information: