28 February 2013

Quantum Algorithm Breakthrough

An international research group led by scientists from the University of Bristol, UK, and the University of Queensland, Australia, has demonstrated, for the first time, a quantum algorithm performing a true calculation. Quantum algorithms could one day enable the design of new materials, pharmaceuticals or clean energy devices.


The team implemented the ‘phase estimation algorithm’, a central quantum algorithm that achieves an exponential speedup over all known classical algorithms. It lies at the heart of quantum computing and is a key subroutine of many other important quantum algorithms, such as Shor’s factoring algorithm and quantum simulations.
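
To make the algorithm concrete, here is a minimal classical simulation of textbook phase estimation in Python. This is an illustrative sketch only, not the photonic implementation reported by the Bristol and Queensland team: the eigenphase is extracted analytically and the inverse quantum Fourier transform is emulated with an FFT.

```python
# Minimal classical simulation of quantum phase estimation (QPE).
# Illustrative sketch only: we estimate the eigenphase phi of a
# unitary U satisfying U|u> = exp(2*pi*i*phi)|u>.
import numpy as np

def qpe(unitary, eigvec, n_bits):
    """Estimate phi using n_bits counting qubits via statevector simulation."""
    n = 2 ** n_bits
    # Extract the eigenphase of U acting on its eigenvector.
    phase = np.angle(eigvec.conj() @ (unitary @ eigvec)) / (2 * np.pi) % 1.0
    # After the controlled-U^(2^k) stage, the counting register holds
    # amplitudes exp(2*pi*i*phase*j)/sqrt(n) for j = 0..n-1.
    amps = np.exp(2j * np.pi * phase * np.arange(n)) / np.sqrt(n)
    # The inverse quantum Fourier transform peaks the register at the
    # integer closest to phase * 2^n_bits; np.fft.fft matches its sign
    # convention up to normalisation.
    probs = np.abs(np.fft.fft(amps) / np.sqrt(n)) ** 2
    return np.argmax(probs) / n

# Example: the T gate with eigenvector |1> has eigenphase 1/8.
T = np.diag([1.0, np.exp(1j * np.pi / 4)])
print(qpe(T, np.array([0.0, 1.0]), 3))  # -> 0.125
```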

More information:

24 February 2013

Real-Time Brain Monitoring

Researchers at Tufts University in Medford, Massachusetts, want to give computers the ability to monitor human brain activity directly in real time. The system utilises a headset that beams infrared light from emitters on a user's forehead into the prefrontal cortex, a part of the brain associated with planning and decision-making. Some of the light is absorbed by oxygenated haemoglobin, some by the deoxygenated version of the molecule, and some is reflected back out. By measuring the amount of light reaching receivers on the forehead, the system can tell when a user is concentrating intently or not mentally engaged. Matching the readings to what a user is looking at on a screen allows the system to determine what is useful information and what is getting in the way. The technique, known as functional near-infrared spectroscopy (fNIRS), is a crude brain imager compared with fMRI, but infrared sensors are cheap and portable and MRI machines are not. The researchers reckon they can glean enough information from their fNIRS rig to turn computers into mind-readers.
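
The underlying measurement can be sketched with the modified Beer-Lambert law, which converts changes in light attenuation at two infrared wavelengths into changes in oxygenated and deoxygenated haemoglobin concentration. The Python sketch below is illustrative only: the extinction coefficients, path length and differential path-length factor are placeholder values, not the Tufts calibration.

```python
# Modified Beer-Lambert law for fNIRS: delta_OD(lambda) =
# (eps_HbO * dHbO + eps_HbR * dHbR) * L * DPF, solved at two wavelengths.
import numpy as np

def hb_changes(delta_od, ext_coeffs, path_len, dpf):
    """Solve the 2x2 system for [dHbO, dHbR] from attenuation changes."""
    scaled = np.asarray(delta_od) / (path_len * dpf)
    return np.linalg.solve(np.asarray(ext_coeffs), scaled)

# Rows: wavelengths (e.g. 760 nm, 850 nm); columns: [HbO, HbR].
# These coefficients are placeholders, not calibrated constants.
E = [[1.4, 3.8],
     [2.5, 1.8]]
d_hbo, d_hbr = hb_changes(delta_od=[0.012, 0.009], ext_coeffs=E,
                          path_len=3.0, dpf=6.0)
print(d_hbo, d_hbr)
```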


As a proof of principle, the system monitored haemoglobin changes while 14 test subjects rated movies listed on the Internet Movie Database. It recorded how each user's brain behaved when rating movies positively and negatively, with greater levels of activity associated with more positive ratings. After this training, the system recommended other movies one at a time, with each suggestion adjusted according to the brain's reaction to the previous one. Not only were its suggestions more acceptable than a random list, but its results also improved the more it was used. The US Federal Aviation Administration is also exploring the technique to help manage the cognitive workloads of air-traffic controllers. The next step is to build a brain interface that can handle more complex interactions, such as filtering emails and the other rivers of information that threaten to overwhelm the modern worker. For now, the set-up can only determine when people are engaged with what they are doing and when they are not.
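
The adaptive loop described above can be sketched as follows. Everything here is a hypothetical stand-in: the genre tags, the ±1 engagement signal and the scoring scheme are illustrative, not the Tufts system's actual design.

```python
# Sketch of a recommender that adjusts each suggestion using the
# brain's (binary) reaction to the previous one. Hypothetical design.
def recommend(movies, read_reaction, rounds=5):
    """movies: dict name -> set of genre tags; read_reaction(name) -> +1/-1."""
    weights = {}        # learned preference per genre tag
    suggested = []
    for _ in range(min(rounds, len(movies))):
        # Rank unseen movies by current genre weights; suggest the best.
        pick = max((m for m in movies if m not in suggested),
                   key=lambda m: sum(weights.get(g, 0) for g in movies[m]))
        suggested.append(pick)
        reaction = read_reaction(pick)  # +1 engaged, -1 disengaged (from fNIRS)
        for g in movies[pick]:          # nudge weights toward liked genres
            weights[g] = weights.get(g, 0) + reaction
    return suggested
```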

More information:

22 February 2013

Virtual Vehicle Vibrations

A User Interface (UI) researcher has designed a program to predict the role posture may play in reducing head and neck injuries. Studies have shown that awkward head-neck postures in whole-body vibration environments can increase discomfort and the risk of injury. The goal of this research is to introduce a computerized human model that can be used to predict human motion in response to whole-body vibration when the human adopts different head-neck postures.


The predicted motion data of the current model can be used to drive more sophisticated computer human models—with muscles and internal tissues—that can predict muscle forces and internal strain and stress between tissues and vertebrae. Significantly, the computer program may reduce the need for actual human subjects to drive test vehicles. One major benefit is the possibility of using it instead of humans in the design/modification loop of equipment in whole-body vibration.
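
As a rough illustration of the kind of lumped-parameter model such programs build on, the sketch below integrates a single mass-spring-damper standing in for the head-neck system under seat vibration. All parameter values are illustrative placeholders, not taken from the study.

```python
# One-degree-of-freedom head-neck response to seat vibration:
# m*x'' = -c*(x' - y') - k*(x - y), with y the seat displacement.
# Parameters are illustrative, not from the study described above.
import numpy as np

def head_response(seat_disp, dt, m=4.5, c=120.0, k=3.0e4):
    """Semi-implicit Euler integration of the base-excited oscillator."""
    y = np.asarray(seat_disp, dtype=float)
    yv = np.gradient(y, dt)          # seat velocity
    x, v = y[0], 0.0                 # head starts at seat position, at rest
    out = np.empty_like(y)
    for i in range(len(y)):
        a = (-c * (v - yv[i]) - k * (x - y[i])) / m
        v += a * dt                  # update velocity first (semi-implicit)
        x += v * dt
        out[i] = x
    return out

# Example: 4 Hz sinusoidal seat vibration with 5 mm amplitude.
t = np.arange(0.0, 2.0, 1e-3)
head = head_response(0.005 * np.sin(2 * np.pi * 4 * t), dt=1e-3)
```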

More information:

10 February 2013

Demos to The Princess Royal

On 8th February 2013, I presented two interactive demonstrations to Her Royal Highness The Princess Royal, who came to formally open the Faculty of Engineering and Computing at Coventry University. The demos were developed by iWARG students Andrei Ermilov and Alina Ene and focused on interaction technologies for computer games.


Andrei demonstrated how computer vision interactions based on Kinect can be used to play computer games, whereas Alina showed how brain-computer interfaces can be used to control and navigate serious games. Both demos lasted approximately five minutes, and The Princess Royal expressed great interest in these technologies.

More information:

07 February 2013

Predicting Success of Online Games

On a first date, couples scrutinize each other's facial expressions for a clue as to whether the date will turn into a long-term relationship. Game publishers and designers might start doing the same thing. By analyzing the movements of gamers' smile and frown muscles in the first 45 minutes of play, Taiwanese researchers have found a way to predict a game's addictiveness. The online gaming industry sees a game that attracts a large number of fanatics and survives more than two years as a success. But that success comes at a cost. According to the researchers, more than 200 online games are released globally each year. The cost of developing a game, jointly brainstormed by dozens of designers, ranges from less than $1 million to as much as $200 million. The humbling fact, however, is that most games survive only four to nine months. It is difficult to evaluate an online game's addictiveness prior to release. The gaming industry's approach is based simply on designers' intuition and experience and on feedback from focus groups, which can be limited and biased. Researchers at the institute and at the electrical engineering department of National Taiwan University aim to help game publishers avoid risky or blind investments. Using archival game data and dozens of electromyography (EMG) experiments, they constructed a forecasting model that predicts a game's ability to retain active players for a long time.


Researchers had to sort out the relationship between the data from laboratory emotion studies and a game's market performance during the first six months after its release. By analyzing the account activity records of 11 games (five role-playing games, four action games, and two first-person shooters), they produced a general addictiveness index. The index takes into account factors such as how quickly players' frequency of participation decreases over the subscription period in which they actually played, and it correlated well with key measures of a game's success (i.e. user focus group responses and the length of time players spent playing a game). The researchers then connected electrodes to 84 gamers, ages 19 to 34. The electrodes measured the electrical potentials generated by two facial muscles: the corrugator supercilii, or frowning muscle, whose motion primarily produces the appearance of suffering and unhappiness, and the zygomaticus major, which is used in smiling and laughing. These facial EMG measurements were taken over 45 minutes as players explored new games for the first time. Each subject played as many as three new games; in total, the researchers gathered 155 hours of facial-expression data from which they could discern positive and negative emotions. Analyzing those two separately and in combination, they were able to predict the games' addictiveness index to within an average of 11 percent.
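
The forecasting idea reduces to a feature-and-regression pipeline, sketched below. The features and the linear model are hypothetical stand-ins, not the researchers' actual pipeline.

```python
# Sketch: summarise each play session's facial EMG into simple features
# and fit a linear model against the games' addictiveness index.
# Features and model form are hypothetical, not the published pipeline.
import numpy as np

def session_features(zygomaticus, corrugator):
    """Mean rectified amplitude per channel, plus their difference."""
    pos = np.mean(np.abs(zygomaticus))  # smiling activity ~ positive emotion
    neg = np.mean(np.abs(corrugator))   # frowning activity ~ negative emotion
    return [pos, neg, pos - neg]

def fit_index(feature_rows, addictiveness):
    """Least-squares fit of index ~ features (with an intercept term)."""
    X = np.column_stack([feature_rows, np.ones(len(feature_rows))])
    coef, *_ = np.linalg.lstsq(X, np.asarray(addictiveness), rcond=None)
    return coef
```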

More information:

06 February 2013

Brain Memory Network

Working with patients with electrodes implanted in their brains, researchers at the University of California, Davis, and The University of Texas Health Science Center at Houston (UTHealth) have shown for the first time that multiple areas of the brain work together simultaneously to recall memories. The unique approach promises new insights into how we remember details of time and place. Researchers placed electrodes directly on the patients' brains, inside the skull, where they remained in place for one to two weeks for monitoring.


Six such patients volunteered for the study while the electrodes were in place. Using a laptop computer, the patients learned to navigate a route through a virtual streetscape, picking up passengers and taking them to specific places. Later, they were asked to recall the routes from memory. Correct memory recall was associated with increased activity across multiple connected brain regions at the same time, rather than activity in one region followed by another. However, the analysis did show that the medial temporal lobe is an important hub of the memory network, confirming earlier studies.
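
One generic way to quantify 'regions active at the same time' is to correlate band-power envelopes across electrode regions and compare recall epochs against baseline. The sketch below is a standard analysis pattern, not the UC Davis and UTHealth team's actual method.

```python
# Mean pairwise correlation of band-power envelopes across regions.
# Generic synchrony measure; not the published analysis pipeline.
import numpy as np

def synchrony(power):
    """power: array of shape (regions, samples); returns mean correlation."""
    r = np.corrcoef(power)              # region-by-region correlation matrix
    return r[np.triu_indices_from(r, k=1)].mean()

# Higher synchrony during correct recall than during baseline would
# indicate simultaneous engagement of the network rather than serial
# activation, e.g.:
# recall_gain = synchrony(power_recall) - synchrony(power_baseline)
```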

More information:

05 February 2013

Virtual Superpowers

Researchers at Stanford recently investigated whether virtual experiences can change real-world behavior by giving people Superman-like flight in the university's Virtual Human Interaction Laboratory (VHIL). While several studies have shown that playing violent videogames can encourage aggressive behavior, the new research suggests that games could be designed to train people to be more empathetic in the real world. To test this hypothesis, the group needed to choose a superhuman ability that could only be simulated in virtual reality, but that people would also subconsciously identify as a ‘do-gooder’ superpower. One at a time, 30 men and 30 women entered the simulator and strapped on a set of goggles that transported them into a digital cityscape. A woman's voice then explained their mission: a diabetic child is stranded somewhere in the city, and you must find him and deliver an insulin injection.


With a whoosh of air, the subjects left the ground -- either controlling their flight by a series of arm motions, like Superman, or as a passenger in a helicopter. As they scoured the city, wall-mounted speakers gave the impression of wind whistling by; powerful speakers in the floor produced vibrations to simulate riding in a helicopter. The experiment was set up so that two minutes into the simulation, no matter the mode of transport, the subject found the sick child. After removing the virtual reality goggles, each person then sat with an experimenter to answer a few questions about the experience. During the interview, the experimenter knocked over a cup of pens, seemingly by accident. The people who had just flown as Superman were quick to lend a hand, beginning to pick up the pens within three seconds. The helicopter group, however, picked up the first pen, on average, after six seconds (one second after the experimenter began picking them up herself).

More information:

04 February 2013

Brain's Vision Secrets

A new study led by scientists at the Universities of York and Bradford has identified the two areas of the brain responsible for our perception of orientation and shape. Using sophisticated imaging equipment at York Neuroimaging Centre (YNiC), the research found that the two neighbouring areas of the cortex -- each about the size of a 5p coin and known as human visual field maps -- process the different types of visual information independently. The scientists, from the Department of Psychology at York and the Bradford School of Optometry & Vision Science, established how the two areas worked by subjecting them to magnetic fields for a short period, which disrupted their normal brain activity.


The research represents an important step forward in understanding how the brain processes visual information. Attention now switches to a further four areas of the extra-striate cortex which are also responsible for visual function but whose specific individual roles are unknown. Researchers used functional magnetic resonance imaging (fMRI) equipment at YNiC to pinpoint the two brain areas, which they subsequently targeted with magnetic fields that temporarily disrupt neural activity. They found that one area had a specialised and causal role in processing orientation while neural activity in the other underpinned the processing of shape defined by differences in curvature.

More information: