29 December 2012

FEST12 Debate

On 15 December 2012, I was invited to participate in a debate about the impact of robotics and artificial intelligence on our lives, as part of the Festival of Science and Technology 2012. The event was organised in collaboration with the National Hellenic Research Foundation (NHRF), the Demokritos National Centre for Scientific Research and EPISEY (Institute of Communication & Computer Systems).


The debate involved six academics from various universities and backgrounds, including engineering, computer science, philosophy and theology. Three of them supported the argument and the other three argued against it. About 100 participants voted twice, once before the debate and once afterwards, and the outcome was that the impact on everyday life is positive.

More information:

28 December 2012

NTUA Invited Lecture

On Wednesday 19 December at 14:30, I delivered an invited lecture titled ‘Emerging Technologies for Games and Virtual Environments for Cultural Heritage’. The lecture focused on serious games, augmented reality, procedural modeling and brain-computer interfaces.


This was part of the extra lectures delivered for the students of the 9th semester course “Monument recording”. The event took place in the small Amphitheatre of the Lambadarios Building of the School of Rural & Surveying Engineering of the National Technical University of Athens.

More information:

27 December 2012

The Future of Gaming: IET Seminar

On Wednesday 12 December 2012, Dr Eike Falk Anderson and I gave an invited lecture for the IET Birmingham Christmas Lecture on 'The Future of Gaming'. We outlined the evolution of computer and video games from their first emergence to the variety of modern games, ranging from casual games on mobile devices to the rich virtual environments found in desktop PC and console games.


The discussion focused on the different types of games, how players interact with their games, and the likely future impact of new developments in technology and game mechanics, such as gesture-based user interfaces and brain-computer interfaces, as well as emerging trends and changes in player behaviour, such as social gaming and cloud gaming.

More information:

24 December 2012

STEM Exhibition 2012

The third Phoenix Partner Annual Conference was held on Monday 10 December between 4pm and 6pm in the Engineering & Computing Building at Coventry University, UK. My student Alina Ene and I gave a live demonstration of the RomaNova project to about fifty visitors and staff. RomaNova is a prototype cultural heritage system that uses brain-computer interfaces for navigating and interacting with serious games.


The interactive game is built upon Rome Reborn, one of the most realistic 3D representations of Ancient Rome currently in existence. This representation provides high-fidelity 3D digital models that can be explored in real time. The aim of the game is to control an avatar inside virtual Rome and interact with intelligent agents while learning at the same time. Both navigation and interaction are performed using brain-wave technology.
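The post does not describe RomaNova's actual control scheme, but brain-wave navigation of this kind typically maps EEG band power to discrete commands. A minimal, purely illustrative sketch (the band names, thresholds and commands below are hypothetical, not RomaNova's):

```python
# Toy sketch: map simulated EEG band-power readings to avatar commands.
# All values and names are illustrative; the real RomaNova mapping is
# not described in the post.

def classify_command(alpha_power, beta_power, threshold=0.6):
    """Pick a navigation command from two normalised band powers."""
    if beta_power > threshold and beta_power > alpha_power:
        return "move_forward"   # high beta: concentration
    if alpha_power > threshold:
        return "stop"           # high alpha: relaxation
    return "idle"

print(classify_command(0.2, 0.8))  # move_forward
print(classify_command(0.7, 0.3))  # stop
print(classify_command(0.4, 0.4))  # idle
```

In a real system the band powers would come from a spectral analysis of the headset signal, smoothed over a short window before classification.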

More information:

10 December 2012

IEEETCLT 2012 Paper

A few months ago, I published a paper in the Bulletin of the IEEE Technical Committee on Learning Technology, titled ‘Augmented Reality Interfaces for Assisting Computer Games University Students’. The paper proposes the use of augmented reality (AR) interfaces for building educational applications that can be used in practice to enhance current teaching methods as well as to deliver lecture material.


The interactive AR interface was piloted in the classroom in an undergraduate module of the Bachelor of Science (BSc) degree in Games Technology at Coventry University, UK. An initial evaluation was performed with fifteen students, and qualitative feedback was recorded. Results indicate that AR technology is not only a promising and stimulating tool for learning computer graphics, but can also be highly effective when used in parallel with more traditional teaching methods.

The original paper can be found at:

08 December 2012

Reconfigurable Robots

The device doesn’t look like much: a caterpillar-sized assembly of metal rings and strips resembling something you might find buried in a home-workshop drawer. But the technology behind it, and the long-range possibilities it represents, are quite remarkable. The little device is called a milli-motein — a name melding its millimeter-sized components and a motorized design inspired by proteins, which naturally fold themselves into incredibly complex shapes. This minuscule robot may be a harbinger of future devices that could fold themselves up into almost any shape imaginable. To build the world’s smallest chain robot, the team from MIT’s Center for Bits and Atoms had to invent an entirely new kind of motor: not only small and strong, but also able to hold its position firmly even with power switched off. The researchers met these needs with a new system called an electropermanent motor.

The motor is similar in principle to the giant electromagnets used in scrapyards to lift cars, in which a powerful permanent magnet is paired with a weaker magnet (one whose magnetic field direction can be flipped by an electric current in a coil). The two magnets are designed so that their fields either add or cancel, depending on which way the switchable field points. Thus, the force of the powerful magnet can be turned off at will — such as to release a suspended car — without having to power an enormous electromagnet the whole time. In this new miniature version, a series of permanent magnets paired with electromagnets are arranged in a circle; they drive a steel ring situated around them. The key innovation is that the magnets draw no power in either the ‘on’ or the ‘off’ state, consuming power only while changing state, so the motor uses minimal energy overall.
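The add-or-cancel behaviour described above can be captured in a few lines. A toy model, with field magnitudes chosen purely for illustration (not the MIT design's actual values):

```python
# Toy model of an electropermanent magnet pair: a strong fixed magnet
# plus a weaker switchable magnet whose polarity a brief current pulse
# can flip. The net field is the sum of the two; power is needed only
# during the flip, never to hold either state.

class ElectropermanentMagnet:
    def __init__(self, strong_field=1.0, weak_field=1.0):
        self.strong = strong_field   # permanent magnet (fixed)
        self.weak = weak_field       # switchable magnet (magnitude)
        self.polarity = +1           # +1 aligned (add), -1 opposed (cancel)

    def pulse(self):
        """Flip the switchable magnet; the only power-consuming step."""
        self.polarity = -self.polarity

    @property
    def net_field(self):
        return self.strong + self.polarity * self.weak

m = ElectropermanentMagnet()
print(m.net_field)   # 2.0 -> fields add: the magnet holds its load
m.pulse()
print(m.net_field)   # 0.0 -> fields cancel: the load is released
```

Equal field magnitudes give complete cancellation in the off state, which is why the scrapyard magnet can drop a car without any sustained current.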

More information:

06 December 2012

Spaun Learns And Remembers

Spaun, which stands for Semantic Pointer Architecture Unified Network, is a computer model that can recognize numbers, remember them, figure out numeric sequences, and even write them down with a robotic arm. It’s a major leap in brain simulation, because it’s the first model that can actually emulate behaviors while also modeling the physiology that underlies them. The program consists of 2.5 million simulated neurons organized into subsystems designed to resemble specific brain regions, including the prefrontal cortex, basal ganglia and thalamus. It has a virtual eye and a robotic arm, and can perform a series of tasks, each different from one another.


It’s different from other artificial brains like IBM’s Watson in that it’s designed to mimic behavior, not simply solve for function in the best possible way. IBM wants Watson to do one thing, search, supremely well, and isn’t interested in how it’s done. Other IBM brain simulations, like the massive Blue Brain Project, can mimic brain spatial structure and connectivity, but they can’t mimic how this structure is tied to behavior. Spaun is divided into two main structures, representing the cerebral cortex and the basal ganglia. The neurons are wired together in a physiologically realistic way, and they mimic what researchers think the basal ganglia and cortex are doing during certain tasks.
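Spaun's 2.5 million neurons are spiking models; its building block is the leaky integrate-and-fire (LIF) neuron. A minimal sketch of how one such neuron turns input current into spikes (the parameters here are illustrative, not Spaun's actual values):

```python
# Minimal leaky integrate-and-fire (LIF) neuron: membrane voltage leaks
# toward the input current; on crossing threshold it emits a spike and
# resets. Parameters are illustrative, not taken from Spaun.

def simulate_lif(current, steps=100, dt=0.001, tau=0.02,
                 v_threshold=1.0, v_reset=0.0):
    """Integrate a constant input current; return the spike times (s)."""
    v, spikes = 0.0, []
    for step in range(steps):
        v += (dt / tau) * (current - v)   # leaky integration
        if v >= v_threshold:
            spikes.append(step * dt)
            v = v_reset                   # reset after each spike
    return spikes

print(len(simulate_lif(2.0)))  # a constant drive produces regular spikes
```

Stronger input current makes the voltage climb faster and the neuron fire more often, which is how populations of such units encode continuous quantities.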

More information:

01 December 2012

New Virtual Hair

We all understand what it means to have a bad hair day. Thanks to the work of a researcher at the University of Utah, video game and virtual environment developers may never experience a virtual bad hair day again. The researcher developed Hair Farm, a software plugin for 3ds Max, a 3D modeling, animation, and rendering package used by game developers and visual effects artists. Hair Farm is currently used by production studios and individual artists to create realistic hair for virtual characters in digital media such as video games and movies.


The researcher developed ways to create the geometric characteristics of hair, the complexity of light within the hair, and how it moves. This was achieved through “hair meshing,” which models hair in a process similar to that used for modeling polygonal surfaces. The technique gives virtual artists direct control over the shape of the hair model, letting them model the exact hair shape they desire in a simplified and seamless process. Most hair animation is time-consuming, so the challenge is to simulate simplified physics efficiently and then create an algorithm that generates high-quality results.
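The core idea of hair meshing is that the artist models a coarse mesh and individual strands are generated from it. A much-simplified sketch of that idea, in which a mesh is a stack of cross-section "layers" from root to tip and strands are interpolated between layer vertices (this is an illustration of the concept, not Hair Farm's actual algorithm):

```python
# Simplified hair-mesh sketch: the artist supplies coarse "layers" of 3D
# points from scalp to tip; strands are polylines generated by linear
# interpolation across each layer edge. Hair Farm's real method is far
# more sophisticated; this only illustrates the mesh-to-strands idea.

def lerp(p, q, t):
    """Linearly interpolate between two 3D points."""
    return tuple(a + t * (b - a) for a, b in zip(p, q))

def generate_strands(layers, strands_per_edge=3):
    """layers: list of layers, each an equal-length list of 3D points.
    Returns strand polylines following the mesh from root to tip."""
    n = len(layers[0])
    strands = []
    for i in range(n - 1):                    # each edge within a layer
        for s in range(strands_per_edge):
            t = s / max(strands_per_edge - 1, 1)
            strand = [lerp(layer[i], layer[i + 1], t) for layer in layers]
            strands.append(strand)
    return strands

layers = [
    [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],   # root layer (on the scalp)
    [(0.0, 0.5, 1.0), (1.0, 0.5, 1.0)],   # mid layer
    [(0.0, 1.5, 1.5), (1.0, 1.5, 1.5)],   # tip layer
]
print(len(generate_strands(layers)))  # 3 strands, each with 3 points
```

Because the strands are derived from the mesh, editing the mesh reshapes every strand at once, which is what gives the artist direct, surface-like control over the hairstyle.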

More information: