31 August 2020

Emergence - VR Crowd Simulator

Emergence is a short VR experience from indie studio Universal Everything that, with its unique crowd simulation, raises some interesting questions about life, society and how we connect to it all. It serves up a brief but pregnant message about how we're the protagonists of our own stories. As the little third-person golden avatar, you find that people sometimes flock to you, run from you, mimic you, and even huddle together in undulating tribes, leaving you on the outside looking in.


Sometimes you're the object of red-hot hatred and no one wants to touch you. Sometimes you perceive only those immediately around you, leaving everyone else on the planet to fade away to the literal size of ants. Although there's no true goal or defined endpoint (it's an art piece, after all), it's a nice reminder in a socially distanced world that we're both individuals and a collective, even if we lose sight of that little paradox every once in a while.
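
The behaviours described above (flocking toward the player, fleeing, imitating) are classic outputs of agent-based crowd simulation. As a rough illustration only, and not Universal Everything's actual implementation, a single boids-style update might weight each agent's attraction or repulsion to the player like this:

    import math

    def step_agent(agent, player, attract=True, speed=0.05):
        """Move one crowd agent toward (or away from) the player.

        A minimal boids-style sketch; the real Emergence simulation
        is far richer (cohesion, alignment, tribe formation).
        """
        dx, dy = player[0] - agent[0], player[1] - agent[1]
        dist = math.hypot(dx, dy) or 1e-9      # avoid division by zero
        sign = 1.0 if attract else -1.0        # flock to you, or run from you
        return (agent[0] + sign * speed * dx / dist,
                agent[1] + sign * speed * dy / dist)

    # One simulation tick over a small crowd:
    crowd = [(1.0, 2.0), (-3.0, 0.5), (0.2, -4.0)]
    crowd = [step_agent(a, (0.0, 0.0), attract=True) for a in crowd]

Applied to thousands of agents per frame, simple local rules like these are what produce the undulating, tribe-like formations the piece trades on.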

More information:

https://www.roadtovr.com/emergence-is-a-vr-crowd-simulator/

29 August 2020

Smart Gaming Glove

A team of researchers from the National University of Singapore (NUS) has developed a smart glove, called InfinityGlove, that allows users to mimic a variety of in-game controls using simple hand gestures. While the concept of controlling a game with your hands is not new, the main problems have always been weight and flexibility. The current generation of smart-glove controllers on the market is usually bulky and rigid, as they rely on conventional sensors that put the hard in hardware. The InfinityGlove overcomes these problems by weaving ultra-thin, highly sensitive microfiber sensors into the material of the glove. These sensors are not only lightweight and accurate, but also fulfill the role of wires.


Currently, the prototype weighs about 40 grams and is flexible and comfortable. Each InfinityGlove contains a total of five thread-like sensors, one for each finger. This network of sensors interfaces with the game software to produce accurate 3D positions of a moving hand, and the various gestures made by the user's hand are then mapped to specific inputs found on a regular controller. To date, the team has mapped a total of 11 inputs and commands, enough to let users play games such as Battlefield V.
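
To give a feel for how such a mapping layer might work, here is a minimal Python sketch. The gesture names, thresholds and binary encoding are illustrative assumptions, not the InfinityGlove's actual firmware or its 11 published commands:

    # Hypothetical mapping from five per-finger flex readings
    # (0.0 = straight, 1.0 = fully bent; thumb first) to game commands.
    # Names and thresholds are invented for illustration.
    GESTURES = {
        (1, 1, 1, 1, 1): "FIRE",     # closed fist
        (0, 1, 1, 1, 1): "AIM",      # thumb extended
        (0, 0, 1, 1, 1): "RELOAD",   # thumb and index extended
    }

    def classify(flex, threshold=0.5):
        """Quantize the raw flex values and look up the mapped command."""
        key = tuple(int(v > threshold) for v in flex)
        return GESTURES.get(key, "NONE")

    print(classify([0.9, 0.8, 0.95, 0.9, 0.85]))   # -> FIRE

A real implementation would key off full 3D hand trajectories rather than static postures, but the principle of quantizing sensor readings into a lookup of controller inputs is the same.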

More information:

https://techxplore.com/news/2020-08-smart-gaming-glove.html

21 August 2020

AI Defeats Human Lockheed F-16 Pilot In Virtual Dogfight

An artificial intelligence algorithm defeated a human F-16 fighter pilot in a virtual dogfight sponsored by the Defense Advanced Research Projects Agency (DARPA). After two days of competition, the winning algorithm of DARPA's Air Combat Evolution program took on a human pilot in a Lockheed Martin (LMT) F-16 simulator.

Artificial intelligence teams from Boeing (BA) subsidiary Aurora Flight Sciences, EpiSys Science, Georgia Tech Research Institute, Heron Systems, Lockheed Martin, Perspecta Labs, PhysicsAI, and SoarTech entered, facing off against one another in a round-robin tournament before the winner met the human pilot.

More information:

https://www.investors.com/news/artificial-intelligence-lockheed-martin-f16-pilot-virtual-darpa-dogfight/

19 August 2020

AI Virtual Tennis Player

A team of researchers at Stanford University has created an AI-based player called the Vid2Player that is capable of generating startlingly realistic tennis matches featuring real professional players. Video game companies have put a lot of time and effort into making their games look realistic, but thus far have found it tough going when depicting human beings. The researchers have taken a different approach to the task: rather than building characters from scratch, they use sprites, which are characters based on video of real people. The sprites are then put into action by a computer using artificial intelligence to mimic the way a human being moves while playing tennis. The researchers trained their AI system using video of real tennis professionals performing; the footage also provided the imagery for the sprites themselves. The result is an interactive player that depicts real professional tennis players such as Roger Federer, Serena Williams, Novak Djokovic and Rafael Nadal in action. Perhaps most importantly, the simulated gameplay is virtually indistinguishable from a televised tennis match.

The Vid2Player is capable of replaying actual matches, but because it is interactive, a user can change the course of a match as it unfolds. Users can change how a player reacts when a ball comes over the net, for example, or how a player plays in general. They can decide which part of the opposite side of the court to aim for, or whether to hit a backhand or a forehand. They can also slightly alter the course of a real match by allowing a shot that in reality was out of bounds to land magically inside the line. The system also allows players from different eras to compete, with the AI software adjusting for lighting and clothing when video from multiple matches is used. Because AI software is used to teach the sprites how to play, the sprites actually mimic the most likely actions of the real player. Thus, when running a match between Roger Federer and Rafael Nadal, for example, both players will move the way the real players move and take the shots the real players would take, at the kind of high level that viewers have come to expect.
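
To caricature the core idea (a sprite driven by a player-specific behaviour model, with room for interactive overrides), shot selection can be thought of as sampling from per-player statistics. The probabilities below are invented for illustration and are not taken from the Stanford paper:

    import random

    # Made-up stroke tendencies; Vid2Player learns behaviour like this
    # from annotated broadcast footage of each professional.
    SHOT_MODEL = {
        "Federer": {"forehand": 0.55, "backhand": 0.45},
        "Nadal":   {"forehand": 0.65, "backhand": 0.35},
    }

    def choose_shot(player, user_override=None):
        """Pick a stroke for the sprite: honour an interactive override,
        otherwise sample from the player's learned tendencies."""
        if user_override is not None:     # e.g. the user forces a backhand
            return user_override
        dist = SHOT_MODEL[player]
        return random.choices(list(dist), weights=list(dist.values()))[0]

    print(choose_shot("Federer"))                          # sampled stroke
    print(choose_shot("Nadal", user_override="backhand"))  # user-chosen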

More information:

https://techxplore.com/news/2020-08-ai-player-strikingly-realistic-virtual.html

16 August 2020

Gestures Accompany Virtual Agent's Speech

Virtual assistants and robots are becoming increasingly sophisticated, interactive and human-like. To fully replicate human communication, however, artificial intelligence (AI) agents should not only be able to determine what users are saying and produce adequate responses, they should also mimic humans in the way they speak. Researchers at Carnegie Mellon University (CMU) have recently carried out a study aimed at improving how virtual assistants and robots communicate with humans by generating natural gestures to accompany their speech. Their model, called Mix-StAGE, was trained to produce effective gestures for multiple speakers, learning the unique style characteristics of each speaker and producing gestures that match those characteristics. In addition, the model can generate gestures in one speaker's style for another speaker's speech.

For instance, it could generate gestures that match what speaker A is saying in the gestural style typically used by speaker B. In initial tests, the model performed remarkably well, producing realistic and effective gestures in different styles. Moreover, the researchers found that as they increased the number of speakers used to train Mix-StAGE, its gesture generation accuracy significantly improved. In the future, the model could help to enhance the ways in which virtual assistants and robots communicate with humans. To train Mix-StAGE, the researchers compiled a dataset called Pose-Audio-Transcript-Style (PATS), containing audio recordings of 25 different people speaking, for a total of over 250 hours, with matching gestures. This dataset could soon be used by other research teams to train other gesture generation models.
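
In spirit, the interface looks something like the PyTorch sketch below: a shared audio encoder, a per-speaker style embedding, and a pose decoder, so that speaker A's audio can be decoded under speaker B's style. Layer sizes, shapes and module names here are assumptions for illustration; the published Mix-StAGE architecture is built around a mixture of style-specific generators.

    import torch
    import torch.nn as nn

    class GestureGenerator(nn.Module):
        """Toy gesture generator: shared audio encoder + style embedding.
        All dimensions are illustrative assumptions, not Mix-StAGE's."""
        def __init__(self, n_speakers, audio_dim=128, style_dim=32, pose_dim=104):
            super().__init__()
            self.audio_enc = nn.GRU(audio_dim, 256, batch_first=True)
            self.style = nn.Embedding(n_speakers, style_dim)
            self.decoder = nn.Linear(256 + style_dim, pose_dim)

        def forward(self, audio, speaker_id):
            h, _ = self.audio_enc(audio)                    # (B, T, 256)
            s = self.style(speaker_id)                      # (B, style_dim)
            s = s.unsqueeze(1).expand(-1, h.size(1), -1)    # repeat over time
            return self.decoder(torch.cat([h, s], dim=-1))  # (B, T, pose_dim)

    # Speaker 3's speech rendered in speaker 7's gestural style:
    model = GestureGenerator(n_speakers=25)
    audio = torch.randn(1, 100, 128)           # 100 frames of audio features
    poses = model(audio, torch.tensor([7]))    # (1, 100, 104) pose sequence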

More information:

https://techxplore.com/news/2020-08-mix-stage-gestures-accompany-virtual-agent.html

10 August 2020

Smartwatch Tracks Medication Levels

Engineers at the UCLA Samueli School of Engineering and their colleagues at the Stanford School of Medicine have demonstrated that drug levels inside the body can be tracked in real time using a custom smartwatch that analyzes the chemicals found in sweat. This wearable technology could be incorporated into a more personalized approach to medicine, where the ideal drug and dosage can be tailored to an individual. According to the researchers, current efforts to personalize drug dosages rely heavily on repeated blood draws at the hospital, with the samples then sent out to be analyzed in central labs. These methods are inconvenient, time-consuming, invasive and expensive, which is why they are only performed on a small subset of patients and on rare occasions.

Because of their small molecular size, many different kinds of drugs end up in sweat, where their concentrations closely reflect circulating drug levels. That is why the researchers created a smartwatch equipped with a sensor that analyzes tiny sampled droplets of sweat. The team's experiment tracked the effect of acetaminophen on individuals over a period of a few hours. First, the researchers stimulated the sweat glands on the wrist by applying a small electric current, which allowed them to detect changes in body chemistry without needing subjects to work up a sweat by exercising. As different drugs each have their own unique electrochemical signature, the sensor can be designed to look for the level of a particular medication at any given time.
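
Turning the raw electrochemical signal into a drug level typically goes through a per-drug calibration curve. A minimal sketch, with invented calibration constants rather than the UCLA team's actual numbers:

    # Hypothetical calibration: sensor peak current (nA) measured at
    # known acetaminophen concentrations (uM). Real sensors would be
    # calibrated individually; these values are invented.
    CAL_CURRENT_NA = [2.0, 5.5, 9.0, 12.5]
    CAL_CONC_UM = [0.0, 25.0, 50.0, 75.0]

    def current_to_concentration(i_na):
        """Linearly interpolate a raw reading onto the calibration curve."""
        pts = list(zip(CAL_CURRENT_NA, CAL_CONC_UM))
        for (i0, c0), (i1, c1) in zip(pts, pts[1:]):
            if i0 <= i_na <= i1:
                return c0 + (c1 - c0) * (i_na - i0) / (i1 - i0)
        raise ValueError("reading outside calibrated range")

    print(current_to_concentration(7.2))   # ~37 uM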

More information:

https://techxplore.com/news/2020-08-smartwatch-tracks-medication-personalize-treatments.html

06 August 2020

Body Weight Affects Brain Function

As a person's weight goes up, all regions of the brain go down in activity and blood flow, according to a new brain imaging study in the Journal of Alzheimer's Disease. In one of the largest studies linking obesity with brain dysfunction, scientists analyzed over 35,000 functional neuroimaging scans, acquired using single-photon emission computerized tomography (SPECT), from more than 17,000 individuals to measure blood flow and brain activity. Low cerebral blood flow is the number one brain-imaging predictor that a person will develop Alzheimer's disease. It is also associated with depression, ADHD, bipolar disorder, schizophrenia, traumatic brain injury, addiction, suicide, and other conditions.

Striking patterns of progressively reduced blood flow were found in virtually all regions of the brain across the categories of underweight, normal weight, overweight, obesity, and morbid obesity. These patterns were noted both while participants were in a resting state and while they performed a concentration task. In particular, brain areas known to be vulnerable to Alzheimer's disease (the temporal and parietal lobes, hippocampus, posterior cingulate gyrus, and precuneus) were found to have reduced blood flow along the spectrum of weight classification from normal weight to overweight, obese, and morbidly obese.

More information:

https://medicalxpress.com/news/2020-08-body-weight-alarming-impact-brain.html

05 August 2020

Anxiety and Depression Make Brain Region Bigger

Researchers have found that depression is linked to shrinkage in several areas of the brain, but when depression is paired with anxiety, one area of the brain becomes significantly larger. A new study looked at more than 10,000 people to determine the effects of depression and anxiety on brain volume. The study shows depression has a pronounced impact on the hippocampus, the part of the brain linked to memory and learning, shrinking it. In contrast, the study found that when depression and anxiety occur together, they lead to an increase in the size of the amygdala, the part of the brain linked to emotions.

Depression is among the most debilitating disorders worldwide, and one in six Australians currently experience depression, anxiety, or both. A particularly important finding of this research is that people who had both depression and anxiety showed less shrinkage in many brain areas and even an increase in the amygdala. This indicates that the true effect of depression on the brain has been underestimated because of an opposing effect in the amygdala. Anxiety lowers the apparent effect of depression on brain volume by about three per cent on average, somewhat hiding the true shrinking effects of depression.

More information:

https://www.anu.edu.au/news/all-news/your-brain-gets-bigger-if-you’re-anxious-and-depressed