30 March 2016

Boosting Synaptic Plasticity to Accelerate Learning

The body’s branching network of peripheral nerves connects neurons in the brain and spinal cord to organs, skin, and muscles, regulating a host of biological functions from digestion to sensation to locomotion. But the peripheral nervous system can do even more than that, which is why DARPA already has research programs underway to harness it for a number of functions—as a substitute for drugs to treat diseases and accelerate healing, for example, as well as to control advanced prosthetic limbs and restore tactile sensation to their users. Now, pushing those limits further, DARPA aims to enlist the body’s peripheral nerves to achieve something that has long been considered the brain’s domain alone: facilitating learning. The effort will turn on its head the usual notion that the brain tells the peripheral nervous system what to do.
 

The new program, Targeted Neuroplasticity Training (TNT), seeks to advance the pace and effectiveness of a specific kind of learning—cognitive skills training—through the precise activation of peripheral nerves that can in turn promote and strengthen neuronal connections in the brain. TNT will pursue development of a platform technology to enhance learning of a wide range of cognitive skills, with a goal of reducing the cost and duration of the Defense Department’s extensive training regimen, while improving outcomes. If successful, TNT could accelerate learning and reduce the time needed to train foreign language specialists, intelligence analysts, cryptographers, and others. The program is also notable because, unlike many of DARPA’s previous neuroscience and neurotechnology endeavors, it will aim not just to restore lost function but to advance capabilities beyond normal levels.

More information:

25 March 2016

A UX Designer's Guide to Combat VR Sickness

Nobody wants to use a product that makes them throw up. Actually, this is not entirely true: the roller-coaster is a commercially successful product that's fun and makes you puke at the same time, but it's the exception and not the rule. Just imagine your stomach turning upside down every time you look at your smartphone - you'd probably go back to your good old Nokia 3310 the next day. If we want to see VR going mainstream, we have to address virtual reality sickness. Virtual reality sickness sounds like a new thing, but it's not. Motion sickness, a closely related condition, is as old as humanity. The earliest record comes from Hippocrates, who first described motion sickness caused by sea travel, and even the word nausea comes from "naus", the ancient Greek word for "ship". Similar symptoms have been observed in immersive environments (aviation training simulators) as early as the 1950s. Simulator sickness has now been studied for decades by doctors and the US Army. Without going into graphic details: it's really awful for some people. Virtual reality sickness is something we are only just starting to see, but thanks to the research on simulator sickness we already have the tools and methods to make VR comfortable for the majority of people.


We don't know exactly what causes it, but it's very likely related to sensory conflict. When you start moving in real life, it's not just your brain processing the visual information. You feel the movement in your body, most importantly in your vestibular system, and you usually do a lot of muscle work. If you jump on a plane in VR, your brain receives very conflicting information: your eyes make you think you are flying, but you don't feel the speed in your gut.


Sensory conflict does not even have to be this strong to cause simulator sickness. The human brain is an incredibly fine piece of hardware, and even the slightest latency is noticeable. When you turn your head in real life, the world is already there - the real world does not need to be rendered in real time. Virtual reality is different, and if rendering drops below a certain frame rate (more on this in a minute), you'll start noticing lag, glitches, and nausea. Another cause of VR sickness is forced camera movement. You'd be pretty pissed off if someone suddenly grabbed and turned your head, forcing you to look in a particular direction. This is not a problem on a flat screen, but it's a huge issue in an immersive environment. Lastly, low-quality animation is known to cause discomfort. This is an interesting point, as we know the brain is brilliant at filling perception gaps. Manipulating objects using hand tracking in virtual reality feels natural, even if your arms are very visibly missing. Our imagination fills the void, and if you think back to such an experience you probably won't even remember that you had no arms. The problem (called the "uncanny valley" effect) starts when we see something that should look realistic but doesn't - especially poorly animated human characters. A low-polygon body works well - your imagination kicks in. However, a high-polygon body with poor, unnatural animation is disturbing. This is not related to motion sickness or simulator sickness, and it will not make you feel dizzy; just plain uncomfortable. Virtual reality sickness is not an issue for about 80 percent of people, but it makes the remaining 20 percent mildly to terribly sick. Interestingly, people under 20, women, and Asians are more susceptible to it.
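
To make the frame-rate point concrete, here is a minimal sketch of a frame-time budget check. It is plain Python rather than any particular VR engine, and the 90 fps target and the render_frame stub are assumptions for illustration, not figures from this article.

    import time

    # Illustrative frame-time budget check. Many VR headsets target around
    # 90 frames per second, which leaves roughly 11 ms to render each frame.
    TARGET_FPS = 90
    FRAME_BUDGET = 1.0 / TARGET_FPS  # seconds per frame (~0.011 s)

    def render_frame():
        """Stand-in for the real rendering work of one frame."""
        time.sleep(0.008)  # pretend the scene takes 8 ms to draw

    def run_frames(n_frames=300):
        missed = 0
        for _ in range(n_frames):
            start = time.perf_counter()
            render_frame()
            elapsed = time.perf_counter() - start
            if elapsed > FRAME_BUDGET:
                missed += 1  # this frame blew the budget; users may notice lag
        print(f"{missed} of {n_frames} frames exceeded the "
              f"{FRAME_BUDGET * 1000:.1f} ms budget")

    if __name__ == "__main__":
        run_frames()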

More information:

24 March 2016

Video Games Improve Brain Connections in Multiple Sclerosis Patients

Playing brain-training video games may help improve some cognitive abilities of people with multiple sclerosis (MS) by strengthening neural connections in an important part of their brains, according to a new study. MS is a disease of the central nervous system that results in damage to the protective covering of nerve fibers. Symptoms include weakness, muscle stiffness and difficulty thinking, a phenomenon often referred to as ‘brain fog’. MS affects an estimated 2.5 million people worldwide, according to the Multiple Sclerosis Foundation. Damage to the thalamus, a structure in the middle of the brain that acts as a kind of information hub, and to its connections with other parts of the brain plays an important role in the cognitive dysfunction many MS patients experience. Researchers from the Department of Neurology and Psychiatry at Sapienza University in Rome recently studied the effects of a video game-based cognitive rehabilitation program on the thalamus in patients with MS. They used a collection of video games from the Nintendo Corporation, called Dr. Kawashima's Brain Training, which train the brain using puzzles, word memory and other mental challenges.


Twenty-four MS patients with cognitive impairment were randomly assigned either to take part in an eight-week, home-based rehabilitation program consisting of 30-minute gaming sessions, five days per week, or to a control group. Patients were evaluated with cognitive tests and with 3-Tesla resting-state functional MRI (RS-fMRI) at baseline and after the eight-week period. Functional imaging when the brain is in its resting state, or not focused on a particular task, provides important information on neural connectivity. At follow-up, the 12 patients in the video-game group had significant increases in thalamic functional connectivity in brain areas corresponding to the posterior component of the default mode network, one of the most important brain networks involved in cognition. The results provide an example of the brain's plasticity, or ability to form new connections throughout life. The modifications in functional connectivity shown in the video-game group after training corresponded to significant improvements in test scores assessing sustained attention and executive function, the higher-level cognitive skills that help organize our lives and regulate our behavior. The results suggest that video-game-based brain training is an effective option for improving the cognitive abilities of patients with MS.
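
Resting-state functional connectivity of the kind reported here is commonly quantified as the correlation between the time series of a seed region (the thalamus) and those of other brain regions. The sketch below illustrates that general idea on synthetic data; the region names and signals are invented, and this is not the authors' actual RS-fMRI pipeline.

    import numpy as np

    # Simplified seed-based functional connectivity: correlate a thalamic seed
    # time series with other region time series. Synthetic data, for
    # illustration only; this is not the study's actual analysis.
    rng = np.random.default_rng(0)
    n_timepoints = 200

    # Hypothetical ROI time series (as if extracted from resting-state fMRI).
    thalamus = rng.standard_normal(n_timepoints)
    regions = {
        "posterior_cingulate": 0.6 * thalamus + 0.8 * rng.standard_normal(n_timepoints),
        "precuneus": 0.5 * thalamus + 0.9 * rng.standard_normal(n_timepoints),
        "motor_cortex": rng.standard_normal(n_timepoints),
    }

    # Functional connectivity here is the Pearson correlation with the seed.
    for name, signal in regions.items():
        r = np.corrcoef(thalamus, signal)[0, 1]
        print(f"thalamus - {name}: r = {r:+.2f}")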

More information:

20 March 2016

How Computers and Brains Recognize Images

We do not merely recognize objects; our brain is so good at this task that we can automatically supply the concept of a cup when shown a photo of a curved handle, or identify a face from just an ear or nose. Neurobiologists, computer scientists, and robotics engineers are all interested in understanding how such recognition works (in both human and computer vision systems). New research suggests that there is an atomic unit of recognition, a minimum amount of information an image must contain for recognition to occur. In the field of computer vision, for example, the ability to recognize an object in an image has long been a challenge for computer and artificial intelligence researchers. The researchers wanted to know how well current models of computer vision are able to reproduce the capacities of the human brain. To this end they enlisted thousands of participants from Amazon's Mechanical Turk and had them identify objects in a series of images. The images came in several formats: some were successively cut from larger images, revealing less and less of the original; others had successive reductions in resolution, with accompanying reductions in detail.
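
As a rough illustration of the two manipulations described above - successive cropping and successive reductions in resolution - the following sketch generates degraded variants of an arbitrary image with Pillow. The file path, step counts, and shrink factors are assumptions for illustration, not the researchers' exact procedure.

    from PIL import Image

    def cropped_variants(path, steps=5):
        """Return center crops that reveal less and less of the original."""
        img = Image.open(path)
        w, h = img.size
        variants = []
        for i in range(steps):
            keep = 1.0 - 0.15 * i                  # keep 100%, 85%, 70%, ... of each side
            cw, ch = int(w * keep), int(h * keep)
            left, top = (w - cw) // 2, (h - ch) // 2
            variants.append(img.crop((left, top, left + cw, top + ch)))
        return variants

    def low_resolution_variants(path, steps=5):
        """Return versions with successively reduced resolution and detail."""
        img = Image.open(path)
        w, h = img.size
        variants = []
        for i in range(steps):
            factor = 2 ** i                        # halve the resolution at each step
            small = img.resize((max(1, w // factor), max(1, h // factor)))
            variants.append(small.resize((w, h)))  # scale back up so sizes match
        return variants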


When the scientists compared the scores of the human subjects with those of the computer models, they found that humans were much better at identifying partial or low-resolution images. The comparison suggested that the differences were also qualitative: almost all the human participants were successful at identifying the objects in the various images up to a fairly high loss of detail, after which nearly everyone stumbled at the exact same point. The division was so sharp that the scientists termed it a phase transition. The researchers suggest that the difference between computer and human capabilities lies in the fact that computer algorithms adopt a bottom-up approach that moves from simple features to complex ones. Human brains, on the other hand, work in "bottom-up" and "top-down" modes simultaneously, comparing the elements in an image to a sort of model stored in their memory banks. The findings also suggest there may be something elemental in our brains that is tuned to work with a minimal amount of information, a basic atom of recognition. That elemental quantity may be crucial to our recognition abilities, and incorporating it into current models could improve their sensitivity.
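
One simple way to picture the reported phase transition is to look for the single largest drop in recognition accuracy across degradation levels, as in the toy sketch below; the accuracy values are invented, not the study's data.

    # Toy illustration: locate the sharpest drop ("phase transition") in
    # recognition accuracy across increasing image degradation levels.
    # The accuracy values are invented, not the study's data.
    accuracy = [0.97, 0.95, 0.93, 0.90, 0.35, 0.12, 0.05]  # fraction of correct answers

    drops = [accuracy[i] - accuracy[i + 1] for i in range(len(accuracy) - 1)]
    critical = max(range(len(drops)), key=lambda i: drops[i])

    print(f"Largest accuracy drop ({drops[critical]:.2f}) occurs between "
          f"degradation levels {critical} and {critical + 1}")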

More information:

19 March 2016

Startup Makes VR Intuitive with Eye Tracking

No more fiddling with remote-controller buttons or a mouse. Just look. San Francisco-based startup Fove has developed eye tracking for virtual reality - that kernel of technology many feel is key to the illusion of becoming immersed in a setting. The name Fove comes from the fovea, the part of the eye with the sharpest vision, from "field of view", and from the word's similarity to "love". The company has devised a way to use tiny infrared sensors inside headset goggles to monitor the movements of a wearer's pupils.
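
The sketch below illustrates, in very general terms, how tracked pupil offsets could be turned into a gaze direction and used to pick out the object a wearer is looking at. It is a hypothetical illustration of the concept, not Fove's SDK or algorithm; the function names, angles, and scene objects are all made up.

    import math

    def gaze_direction(pupil_x, pupil_y, max_angle_deg=30.0):
        """Map normalized pupil offsets in [-1, 1] to a unit gaze vector."""
        yaw = math.radians(pupil_x * max_angle_deg)
        pitch = math.radians(pupil_y * max_angle_deg)
        return (math.sin(yaw) * math.cos(pitch),
                math.sin(pitch),
                math.cos(yaw) * math.cos(pitch))

    def looked_at(objects, gaze, threshold=0.98):
        """Return the object whose direction best aligns with the gaze vector."""
        best, best_dot = None, threshold
        for name, direction in objects.items():
            dot = sum(g * d for g, d in zip(gaze, direction))
            if dot > best_dot:
                best, best_dot = name, dot
        return best

    # Example: the user looks slightly to the right, where a menu button sits.
    scene = {"menu_button": (0.26, 0.0, 0.97), "exit_door": (-0.50, 0.0, 0.87)}
    print(looked_at(scene, gaze_direction(0.25, 0.0)))  # -> "menu_button"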


Fove is getting attention from the fledgling virtual reality, or VR, industry and is winning backing from innovative financiers. It has raised about $500,000 through Kickstarter. Virtual reality could revolutionize entertainment such as movies, games and live-streaming of sports. It also has myriad potential business applications, such as giving apartment hunters a virtual look at real estate options and car buyers tours of virtual showrooms.

More information:

13 March 2016

What Games Teach Us About Intelligence

In the coming days, Google DeepMind's AlphaGo program is expected to defeat one of the world's leading professional Go players, Lee Sedol, in a best-of-five unhandicapped Go match. AlphaGo has already won the first two games, and a profound reality is upon us: the walls are crumbling around one of the last major strongholds of superior human intelligence. This looming defeat raises important questions for research on human intelligence. What can we learn from continued advances in gameplay artificial intelligence? What role can games play in measuring continued progress in research on intelligence more generally? Is there an "endgame" for the role of games in AI research?


The lasting importance of games in AI research, beyond serving as a source of well-defined and widely understood challenge problems, is that they provide a unique means of measuring intelligence through task-based comparisons. Intelligence is notoriously difficult to measure, even in humans. Games offer simple and useful comparisons of skills, particularly reasoning skills. Overall, games provide a rich framework for measuring progress in machine reasoning capabilities through competitive comparisons. Computer Go will likely continue to be relevant to AI researchers for quite some time, and it will be exciting to see how the wide range of related challenges is met by the broad AI research community.
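
One standard way to quantify skill through competitive, head-to-head comparisons is a rating system such as Elo, widely used in chess and Go. The sketch below is a generic illustration of that idea, not something taken from this article; the ratings and game results are invented.

    # Generic Elo-style rating update: skill is estimated from head-to-head
    # results, which is one common way to compare players (or programs) on a
    # task. Ratings and results below are invented for illustration.

    def expected_score(rating_a, rating_b):
        """Probability that A beats B under the Elo model."""
        return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

    def update(rating_a, rating_b, score_a, k=32):
        """Return new ratings after one game; score_a is 1 for a win, 0 for a loss."""
        exp_a = expected_score(rating_a, rating_b)
        new_a = rating_a + k * (score_a - exp_a)
        new_b = rating_b + k * ((1 - score_a) - (1 - exp_a))
        return new_a, new_b

    program, human = 2800.0, 2940.0       # made-up starting ratings
    for result in (1, 1):                 # the program wins the first two games
        program, human = update(program, human, result)
    print(f"program: {program:.0f}, human: {human:.0f}")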

More information: