29 September 2016

VR Helps Blind Man To See

A patient suffers from a hereditary eye condition known as retinitis pigmentosa, which leaves him debilitatingly near-sighted and requires him to use a blind cane at night or in dark spaces. His afflictions also include diplopia, which causes nearly constant double vision. The technical design of a VR headset turned out to have something of a counteractive effect on the patient’s condition. Headsets are built to provide the illusion of depth through special lenses, but in physical reality the screens they employ sit mere centimeters from the user’s face. This, combined with the dual-screen projection method of the HTC Vive, proved to be the perfect combination of factors to offset the patient’s usual visual impairments and render his vision closer to normal than he had experienced in decades.


While the patient’s mind is still adjusting to the miracle that is unfolding, the demo operator lazily presses a button and triggers the next phase of the experience. Balloons of various sizes and colors begin to rise and swarm around the patient. For those not experiencing a life-altering event, this may have been little more than a charming surprise. But for a man who had trouble telling whether the sky was clear or cloudy, the sudden appearance of crystal-clear colors was enough to make him leap straight into the air with a start. The experience took place over three months ago, but according to Soar it was the culmination of a long-held ambition. For the patient, VR represented something of a final hope that some form of modern electronics would be accessible with his condition.

More information:

24 September 2016

BCI Robotic Exoskeleton Moves Hand

Using the power of thought to control a robot that helps to move a paralysed hand: a project from the ETH Rehabilitation Engineering Laboratory could fundamentally change the therapy and daily lives of stroke patients. One in six people will suffer a stroke in their lifetime. In Switzerland alone, stroke affects 16,000 people every year. Two thirds of those affected suffer from paralysis of the arm. Intensive training can, depending on the extent of damage to the brain, help patients regain a certain degree of control over their arms and hands. This may take the form of classic physio- and occupational therapy, or it may also involve robots. The researchers have developed a number of robotic devices that train hand functions and see this as a good way to support patient therapy. However, both physio- and robot-assisted therapy are usually limited to one or two training sessions a day, and for patients, traveling to and from therapy can also be time-consuming.


Another question that is still not fully understood is how the brain controls limbs that interact with the environment. For example, the robotics experts have developed an exoskeleton that can block the knee for 200 milliseconds while walking and then extend it by 5 degrees. With the help of sensors, the scientists measure the forces involved and use this data to infer how the brain modulates the stiffness of the knee. These findings then flow into applications such as the control of new, active prostheses. If the researchers succeed in establishing an interaction between the brain and the exoskeleton, the result will be a device that is ideally suited to therapy. If, on the other hand, the deficits are permanent, a robotic device could offer long-term support as an alternative to invasive methods, which are also being researched and which envisage, for instance, implanting electrodes in the brain to trigger stimulators in the muscles.
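
As a rough illustration of the measurement idea described above, the sketch below estimates knee stiffness from a brief perturbation as the ratio of the change in joint torque to the change in joint angle. The signal names, the sampling rate and the simple difference-quotient estimate are my own assumptions, not details taken from the ETH study.

import numpy as np

def estimate_knee_stiffness(angle_rad, torque_nm, t, perturb_start, perturb_end):
    """Rough stiffness estimate around a short perturbation window.

    angle_rad, torque_nm : arrays of knee angle (rad) and joint torque (N*m)
    t                    : matching time stamps (s)
    perturb_start/end    : window in which the exoskeleton extends the knee
    Assumes a quasi-static spring model: stiffness ~ delta_torque / delta_angle.
    """
    before = (t >= perturb_start - 0.05) & (t < perturb_start)   # 50 ms baseline
    during = (t >= perturb_start) & (t <= perturb_end)

    d_angle = angle_rad[during].mean() - angle_rad[before].mean()
    d_torque = torque_nm[during].mean() - torque_nm[before].mean()
    return d_torque / d_angle  # N*m per rad

# Synthetic example: a 200 ms, 5-degree extension starting at t = 1.0 s.
t = np.arange(0.0, 2.0, 0.001)
angle = np.where((t >= 1.0) & (t <= 1.2), np.deg2rad(5.0), 0.0)
torque = 60.0 * angle + 0.01 * np.random.randn(t.size)  # "true" stiffness 60 N*m/rad
print(estimate_knee_stiffness(angle, torque, t, 1.0, 1.2))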

More information:

22 September 2016

How the Brain Separates Relevant and Irrelevant Information

Imagine yourself sitting in a noisy café trying to read. To focus on the book at hand, you need to ignore the surrounding chatter and clattering of cups, with your brain filtering out the irrelevant stimuli coming through your ears and ‘gating’ in the relevant ones in your vision—words on a page. New York University researchers offer a new theory, based on a computational model, on how the brain separates relevant from irrelevant information in these and other circumstances. The analysis focuses on inhibitory neurons—the brain’s traffic cops that help ensure proper neurological responses to incoming stimuli by suppressing other neurons and working to balance excitatory neurons, which aim to stimulate neuronal activity. In their analysis, the researchers devised a model that maps out a more complicated role for inhibitory neurons than had previously been suggested.


Of particular interest to the team was a specific subtype of inhibitory neurons that targets the excitatory neurons’ dendrites, the components of a neuron where inputs from other neurons arrive. These dendrite-targeting inhibitory neurons are labeled by a biological marker called somatostatin and can be studied selectively by experimentalists. The researchers proposed that these neurons control not only the overall input to a neuron, but also the inputs from individual pathways, for example the visual or auditory pathways converging onto that neuron. The study’s authors used computational models to show that even with seemingly random connections, these dendrite-targeting neurons can gate individual pathways by aligning with the excitatory inputs of different pathways. They showed that this alignment can be realized through synaptic plasticity, a brain mechanism for learning through experience.
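
The snippet below is a loose, purely illustrative sketch of this kind of gating model rather than the authors’ actual code: two input pathways drive a model neuron, a somatostatin-like inhibitory weight vector acts on each dendritic branch, and a simple learning rule drives the inhibitory weights to align with the excitatory weights of the same branch so that an engaged branch can be cancelled out. All parameter values and the learning rule itself are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

n_in = 50                      # inputs per pathway (visual, auditory)
w_vis = rng.random(n_in)       # excitatory weights, visual pathway
w_aud = rng.random(n_in)       # excitatory weights, auditory pathway
g_vis = np.zeros(n_in)         # SOM-like inhibitory weights on the visual branch
g_aud = np.zeros(n_in)         # SOM-like inhibitory weights on the auditory branch

def response(x_vis, x_aud, gate_out):
    """Rate of the model neuron; gate_out names the pathway to suppress."""
    dend_vis = w_vis @ x_vis - (g_vis @ x_vis if gate_out == "visual" else 0.0)
    dend_aud = w_aud @ x_aud - (g_aud @ x_aud if gate_out == "auditory" else 0.0)
    return max(dend_vis, 0.0) + max(dend_aud, 0.0)   # rectified dendritic branches

# Plasticity: inhibitory weights drift toward the excitatory weights of the same
# branch, so inhibition can cancel (gate out) that pathway's input when engaged.
eta = 0.05
for _ in range(200):
    g_vis += eta * (w_vis - g_vis)
    g_aud += eta * (w_aud - g_aud)

x_vis, x_aud = rng.random(n_in), rng.random(n_in)
print("attend vision :", response(x_vis, x_aud, gate_out="auditory"))
print("attend hearing:", response(x_vis, x_aud, gate_out="visual"))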

More information:

18 September 2016

VS-Games 2016 Paper II

On the 9th of September 2016, I presented a paper I co-authored with my PhD student, Bojan Kerouš. It was presented at the 8th International Conference on Games and Virtual Worlds for Serious Applications (VS-Games 2016). The conference took place in Barcelona, Spain, 7-9 September 2016.

The paper was entitled ‘Brain-Computer Interfaces - A Survey on Interactive Virtual Environments’ and provided an overview of electroencephalography (EEG) based brain-computer interfaces (BCIs) and their present and potential uses in virtual environments, serious games, and computer games.

A draft version of the paper can be downloaded from here.

17 September 2016

Report States that We Already Live in The Matrix

Analysts at Bank of America have reportedly suggested that there is a 20 to 50 per cent chance our world is a Matrix-style virtual reality and that everything we experience is just a simulation. The report, which was issued to clients, also implies that even if our world were an illusion, we would never know it. Bank of America Merrill Lynch backed up the claims by citing comments from leading philosophers, scientists and other thinkers.


It is conceivable that, with advancements in artificial intelligence, virtual reality, and computing power, members of future civilizations could have decided to run a simulation of their ancestors. Bank of America’s report, which looked at the implications of virtual reality, explained that many scientists, philosophers, and business leaders believe there is a 20-50 per cent probability that humans are already living in a computer-simulated virtual world.

More information:

16 September 2016

VS-Games 2016 Paper I

On the 8th of September 2016, I presented a paper I co-authored with colleagues from Utrecht University and Hellenic Open University. It was presented at the 8th International Conference on Games and Virtual Worlds for Serious Applications (VS-Games 2016). The conference took place in Barcelona, Spain, 7-9 September 2016.


The paper was entitled ‘Procedural Modeling in Archaeology - Approximating Ionic Style Columns for Games’ and demonstrated how procedural modeling and computer graphics techniques can be combined to create fast, accurate and realistic models of archaeological building elements that can be used in games and virtual environments.
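
The snippet below is a purely illustrative sketch of what a procedural approximation of a column can look like, not the method from the paper: it builds a crude fluted shaft profile as a surface of revolution with a simple entasis term. The parameter names and formulas are assumptions of mine.

import numpy as np

def ionic_shaft_points(height=9.0, base_radius=0.5, top_radius=0.42,
                       n_flutes=24, flute_depth=0.02,
                       n_rings=60, n_segments=96):
    """Generate vertices of a simplified fluted column shaft.

    The radius tapers from base_radius to top_radius with a slight entasis
    bulge, and a cosine ripple around the circumference approximates the
    flutes. Returns an (n_rings * n_segments, 3) array of XYZ points.
    """
    pts = []
    for i in range(n_rings):
        z = height * i / (n_rings - 1)
        t = z / height
        taper = base_radius + (top_radius - base_radius) * t
        entasis = 0.02 * np.sin(np.pi * t)          # gentle mid-shaft swelling
        for j in range(n_segments):
            theta = 2.0 * np.pi * j / n_segments
            r = taper + entasis - flute_depth * max(0.0, np.cos(n_flutes * theta))
            pts.append((r * np.cos(theta), r * np.sin(theta), z))
    return np.asarray(pts)

points = ionic_shaft_points()
print(points.shape)   # point cloud ready to be triangulated into a mesh

In a full pipeline this point cloud would be triangulated into a mesh and combined with procedurally generated base and capital elements; that part is beyond this sketch.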

A draft version of the paper can be downloaded from here.

13 September 2016

Reach In and Touch Objects in Videos

A new technique called Interactive Dynamic Video (IDV) lets you reach in and 'touch' objects in videos. IDV has many possible uses, from filmmakers producing new kinds of visual effects to architects determining if buildings are structurally sound.


To simulate objects, researchers analyzed video clips to find 'vibration modes' at different frequencies that each represent distinct ways that an object can move. By identifying these modes' shapes, the researchers can begin to predict how these objects will move in new situations.
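
The sketch below illustrates one plausible way to pick out such modes from per-pixel motion signals, using a temporal Fourier transform; the function name, the optical-flow-style input and the frequency-selection heuristic are my own assumptions and not necessarily the authors’ exact algorithm.

import numpy as np

def dominant_vibration_modes(displacements, fps, n_modes=3):
    """Pick out dominant vibration frequencies and their spatial shapes.

    displacements : array of shape (T, H, W) with per-pixel motion over T frames
                    (e.g. one component of optical flow relative to the first frame)
    fps           : frame rate of the video
    Returns a list of (frequency_hz, mode_shape) pairs, where mode_shape is the
    complex spectrum at that frequency for every pixel (H, W).
    """
    spectrum = np.fft.rfft(displacements, axis=0)          # temporal FFT per pixel
    freqs = np.fft.rfftfreq(displacements.shape[0], d=1.0 / fps)
    power = np.abs(spectrum).sum(axis=(1, 2))              # total energy per frequency
    power[0] = 0.0                                         # ignore the DC component
    top = np.argsort(power)[-n_modes:][::-1]
    return [(freqs[k], spectrum[k]) for k in top]

# Synthetic example: a 2 Hz wobble across a 64x64 "object" filmed at 30 fps.
T, H, W = 150, 64, 64
t = np.arange(T) / 30.0
shape = np.outer(np.hanning(H), np.hanning(W))             # made-up spatial mode shape
motion = np.sin(2 * np.pi * 2.0 * t)[:, None, None] * shape
for f, _ in dominant_vibration_modes(motion, fps=30, n_modes=1):
    print(f"mode near {f:.2f} Hz")

Once the dominant modes and their shapes are known, a virtual force could be expressed as a weighted combination of those modes to synthesise the object's response, which is the kind of interaction the technique enables.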

More information: