31 May 2015

Robot Performs Brain Surgery on a Fruit Fly

On a small darkened platform, a handful of fruit flies wander aimlessly. There is a brief flash of light and a robotic arm darts downward, precisely targeting a fly’s thorax, a moving target roughly the size of a pinhead. The fly seems unfazed, apparently not noticing that it has been snatched by a high-speed laboratory robot. The system, prototyped by a team of biologists and roboticists at Stanford, makes it possible to automate many aspects of research on Drosophila, one of the most popular experimental animals.
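
This summary doesn't spell out the vision pipeline, but the targeting step presumably comes down to locating fly-sized blobs in a camera frame during that flash of light. A minimal sketch of the idea (hypothetical thresholds and sizes, not the Stanford team's code) might look like this:

```python
# Hedged sketch of a machine-vision targeting step (an assumed approach,
# not the Stanford system's actual code): find small dark fly-sized blobs
# against the brightly lit platform and report centroids for the arm.
import cv2
import numpy as np

def locate_flies(frame_gray):
    # Flies appear as small dark blobs on the illuminated platform.
    _, mask = cv2.threshold(frame_gray, 60, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    targets = []
    for c in contours:
        area = cv2.contourArea(c)
        if 20 < area < 400:              # hypothetical size band for a fly
            m = cv2.moments(c)
            targets.append((m["m10"] / m["m00"],   # centroid x
                            m["m01"] / m["m00"]))  # centroid y
    return targets   # pixel coordinates the arm would be steered toward
```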


Tasks such as determining sex, measuring the size of body parts and even performing micro-brain surgery — long performed by graduate students armed with tweezers — can now be assigned to a robot. In one experiment, the robot exposed a fly running on a tiny trackball to different odors as the researchers recorded its changing path. The robot arm is extremely precise and uses the fly’s legs as shock absorbers to avoid crushing or impaling the insects. The robot is also far more efficient than the previous grad-student-powered methods.

More information:

26 May 2015

Timelapses From Public Photos

A team from Google and the University of Washington has developed a fully automated way to create time-lapse videos of popular tourist landmarks using images from Flickr, Picasa and other photo-sharing sites. Here's how it works: first, the researchers sorted some 86 million photos by geographic location, looking for widely photographed landmarks. Next, the photos of each landmark were ordered by date and warped so that they all shared a common viewpoint. Lastly, each photo was color-corrected to have a similar appearance, resulting in uniform time-lapse videos.
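
As a rough sketch of what the warping and color-correction steps might look like (assumed details using OpenCV, not the authors' actual code):

```python
# Minimal sketch of the per-photo alignment step (assumed details, not the
# authors' pipeline): estimate a homography so the photo shares a reference
# viewpoint, then match its color statistics to the reference frame.
import cv2
import numpy as np

def align_and_correct(photo, reference):
    # 1. Warp: match ORB features and fit a homography with RANSAC.
    g1 = cv2.cvtColor(photo, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(g1, None)
    kp2, des2 = orb.detectAndCompute(g2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    warped = cv2.warpPerspective(photo, H,
                                 (reference.shape[1], reference.shape[0]))

    # 2. Color-correct: shift each channel's mean/std toward the reference
    #    so consecutive frames have a uniform appearance.
    w, r = warped.astype(np.float32), reference.astype(np.float32)
    out = (w - w.mean(axis=(0, 1))) / (w.std(axis=(0, 1)) + 1e-6)
    out = out * r.std(axis=(0, 1)) + r.mean(axis=(0, 1))
    return np.clip(out, 0, 255).astype(np.uint8)
```

Running every photo of a landmark through a function like this, then writing the results out in date order, is essentially what turns a pile of unrelated snapshots into a coherent time-lapse.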


The videos aren't just breathtaking; they're also illuminating. For instance, they show glaciers receding, waterfalls evolving and skyscrapers sprouting, making them useful tools for geologists and builders alike. The science behind the time-lapses is also interesting: the researchers combined techniques in warping, stabilization and color normalization to make it work. Many sequences contain over 1,000 images and took around six hours to render on a single computer. The best part is that, even though it's a fun form of crowd-sourcing, it doesn't require participants to do anything but be tourists.

More information:

23 May 2015

Impact of Video Gaming on the Brain

Video gamers now spend a collective three billion hours per week in front of their screens; in fact, it is estimated that the average young person will have spent some 10,000 hours gaming by the time they are 21. Yet the effects of intense video gaming on the brain are only beginning to be understood. For more than a decade now, research has demonstrated that action video game players display more efficient visual attention abilities. The present study was conducted among a group of adult gamers who spent at least six hours per week on this activity.


However, this study found that gamers rely on the caudate nucleus to a greater degree than non-gamers, and past research has shown that people who rely on caudate nucleus-dependent strategies have reduced grey matter and lower functional brain activity in the hippocampus. This means that people who spend a lot of time playing video games may have reduced hippocampal integrity, which is associated with an increased risk of neurological disorders such as Alzheimer's disease. Because past research has shown that video games have positive effects on attention, it is important for future research to confirm that gaming does not also have a negative effect on the hippocampus. Future work will investigate the direct effects of specific video games on the integrity of the reward system and the hippocampus.

More information:

22 May 2015

APCPP 2015 Workshop Talk

Today I gave an invited talk at a workshop held at Masaryk University, which ran in parallel with the international conference ‘Applying Principles of Cognitive Psychology in Practice 2015’ (APCPP 2015). The title of my presentation was ‘Brain Computer Interfaces for Psychological Experiments Using Virtual Environments’.


My talk consisted of two parts. First, I gave an overview of BCI devices, ranging from cheap consumer headsets to more expensive research-grade systems. The second part covered three case studies: one using the NeuroSky device, one using the Emotiv device and the final one using the Enobio-32 device.

More information:

18 May 2015

Computer Vision BCIs for Faster Mine Detection

Computer scientists at the University of California, San Diego, have combined sophisticated computer vision algorithms with a BCI to find mines in sonar images of the ocean floor. The study shows that the new method speeds up detection considerably compared with the existing approach: visual inspection by a mine-detection expert. Working with the U.S. Navy's Space and Naval Warfare Systems Center Pacific, the researchers used an underwater vehicle equipped with sonar to collect a dataset of 450 sonar images containing 150 inert, bright-orange mines placed in test fields in San Diego Bay. They also trained their computer vision algorithms on a separate set of 975 images of mine-like objects. Six subjects were first shown the complete dataset before it had been screened by the computer vision algorithms. The researchers then ran the images through the mine-detection algorithms they had developed, which flagged the images most likely to contain mines, and showed the results to the subjects, who were outfitted with an EEG system programmed to detect the brain activity that occurs when a subject reacts to a salient feature in an image. Subjects detected mines much faster when the images had already been processed by the algorithms.
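
A minimal sketch of how this kind of rapid serial visual presentation (RSVP) screening might be wired up is given below; the sampling rate, channel count and linear classifier are all made-up placeholders standing in for the per-subject EEG model, not details from the study.

```python
# Hedged sketch of an RSVP-EEG screening step (assumed setup, not the
# authors' code). Image chips are flashed at 5 Hz while EEG is recorded;
# epochs locked to each onset are scored for a target response, and the
# highest-scoring chips are flagged as likely mines.
import numpy as np

FS = 256                     # EEG sampling rate in Hz (assumed)
CHIP_PERIOD = 0.2            # one 100x50 chip shown every 0.2 seconds
EPOCH_LEN = int(0.5 * FS)    # look 0.5 s past onset for the response

def score_chips(eeg, onsets, weights):
    """eeg: (n_channels, n_samples); onsets: sample index of each chip;
    weights: per-subject filter learned during the calibration phase."""
    scores = []
    for t in onsets:
        epoch = eeg[:, t:t + EPOCH_LEN]          # response window
        scores.append(np.sum(weights * epoch))   # linear classifier score
    return np.asarray(scores)

# Simulated session: 3,400 chip onsets; flagged chips go to an analyst.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, FS * 700))
onsets = (np.arange(3400) * CHIP_PERIOD * FS).astype(int)
weights = rng.standard_normal((32, EPOCH_LEN))
flagged = np.argsort(score_chips(eeg, onsets, weights))[-50:]
```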


The algorithms are what's known as a series of classifiers, working in succession to improve speed and accuracy. The classifiers are designed to capture changes in pixel intensity between neighboring regions of an image. At each pass through a classifier, the system aims to retain 99.5 percent of the true positives while letting through only 50 percent of the false positives. As a result, the true-positive rate stays high while false positives are halved with each pass. The researchers took several versions of the dataset generated by the classifiers and showed them to six subjects outfitted with the EEG gear, which had first been calibrated for each subject. It turned out that subjects performed best on the dataset containing the most conservative results generated by the computer vision algorithms. They sifted through a total of 3,400 image chips, each 100 by 50 pixels, shown for only 0.2 seconds apiece, just long enough for the EEG-related algorithms to determine whether a subject's brain signals indicated they had seen something of interest. All subjects performed better than when shown the full set of images without the benefit of pre-screening by computer vision algorithms, and some also performed better than the computer vision algorithms on their own.
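
Those per-stage numbers compound quickly. Here is the back-of-the-envelope arithmetic; the ten-stage depth is an illustration, not a figure from the study:

```python
# Cascade arithmetic (illustrative stage count). Each stage keeps 99.5%
# of true positives and passes 50% of false positives, so the cumulative
# rates after k stages are 0.995**k and 0.5**k respectively.
for k in range(1, 11):
    tpr = 0.995 ** k   # fraction of real mines still flagged
    fpr = 0.5 ** k     # fraction of false alarms still flagged
    print(f"stage {k:2d}: true positives {tpr:.1%}, false positives {fpr:.3%}")

# After ten stages roughly 95% of real mines survive while only ~0.1% of
# false alarms remain for the human EEG screening step.
```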

More information:

16 May 2015

Is the Universe a Hologram?

At first glance, there is not the slightest doubt: to us, the universe looks three dimensional. But one of the most fruitful ideas in theoretical physics over the last two decades challenges this assumption. The holographic principle asserts that a mathematical description of the universe actually requires one fewer dimension than it seems to have. What we perceive as three dimensional may be just the image of two dimensional processes on a huge cosmic horizon. Until now, this principle has only been studied in exotic spaces with negative curvature. This is interesting from a theoretical point of view, but such spaces are quite different from the space in our own universe. Results obtained by scientists at TU Wien (Vienna) now suggest that the holographic principle holds even in a flat spacetime.


If quantum gravity in a flat space allows for a holographic description by a standard quantum theory, then there must be physical quantities that can be calculated in both theories, and the results must agree. In particular, one key feature of quantum mechanics, quantum entanglement, has to appear in the gravitational theory. When quantum particles are entangled, they cannot be described individually: they form a single quantum object. The measure of the amount of entanglement in a quantum system is called the ‘entropy of entanglement’. The researchers showed that this entropy of entanglement takes the same value in flat-space quantum gravity and in a lower-dimensional quantum field theory. This does not yet prove that we are indeed living in a hologram, but it is growing evidence for the validity of the correspondence in our own universe.
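
For readers unfamiliar with the term, here is a small worked example, unrelated to the TU Wien calculation itself, that computes the entropy of entanglement for the simplest entangled system: a two-qubit Bell pair.

```python
# Entropy of entanglement S = -Tr(rho_A ln rho_A) for a maximally
# entangled Bell pair (|00> + |11>)/sqrt(2). The reduced state of one
# qubit is maximally mixed, so S should equal ln 2.
import numpy as np

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell.conj())             # full 2-qubit density matrix

# Partial trace over the second qubit: reshape to (a, b, a', b') indices
# and trace out the b/b' pair, leaving a 2x2 reduced density matrix.
rho_a = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

eigvals = np.linalg.eigvalsh(rho_a)
entropy = -sum(p * np.log(p) for p in eigvals if p > 1e-12)
print(entropy, np.log(2))   # both ~0.693: one full unit of entanglement
```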

More information:

09 May 2015

Realistic Surface Rendering in Computer Games

The surfaces of rendered objects in computer games often look unrealistic. A new method creates much more realistic images by imitating the complex scattering processes beneath the surface. Overturning cars, flying missiles, airplanes speeding across the screen: on modern computers, 3D objects can be calculated in a flash. However, many surfaces still look unnatural. Whether it is skin, stone or wax on the computer screen, all materials look alike, as if every object had been cut out of the same kind of opaque material. Researchers at TU Wien (Vienna), the University of Zaragoza and the video game company Activision Blizzard have developed a new mathematical method that makes surfaces appear much more realistic by taking into account the light scattering that occurs below the surface.


When we hold a hand up against the sun, it looks red along the edges, because light enters the skin, scatters and re-emerges. The appearance of an object is strongly influenced by this scattering of light inside the material, called subsurface scattering, and it is the main reason why different surfaces can look so different: skin does not look like wax, and a plant does not look like a stone surface. Skin is particularly tricky. A face can be rendered in high resolution, with ultra-realistic details down to single pores and tiny impurities, but this does not mean that it looks realistic. When subsurface scattering is not taken into account, even a perfectly modelled face looks as if it had been chiselled out of a dull, opaque, skin-coloured stone.
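
As a toy illustration of the idea (a crude screen-space approximation, not the method the researchers developed), one can blur the lit image with a different radius per colour channel, since red light travels furthest through skin:

```python
# Crude screen-space stand-in for subsurface scattering (illustrative
# only): blur each colour channel of the lit image by its own scattering
# width. Red travels furthest through skin, blue the least, which gives
# the soft reddish glow that makes rendered skin look less like stone.
import numpy as np
from scipy.ndimage import gaussian_filter

def subsurface_blur(irradiance_rgb, widths_px=(8.0, 4.0, 2.0)):
    """irradiance_rgb: float image (H, W, 3); widths_px: per-channel
    blur radius in pixels (hypothetical values, widest for red)."""
    out = np.empty_like(irradiance_rgb)
    for c, sigma in enumerate(widths_px):
        out[..., c] = gaussian_filter(irradiance_rgb[..., c], sigma)
    return out

# A real renderer would add the unscattered specular reflection back on
# top and mask the blur at depth discontinuities so light does not appear
# to leak between unrelated objects.
```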

More information: