29 September 2013

Human Robot Getting Closer

A robot that feels, sees and, in particular, thinks and learns like us. UT researchers want to implement the cognitive processes of the human brain in robots. The research should lead to the arrival of the latest version of the iCub robot in Twente. This humanoid robot blurs the boundary between robot and human.


Decades of scientific research into cognitive psychology and the brain have given us knowledge about language, memory, motor skills and perception. We can now use that knowledge in robots, but this research goes even further. The application of cognition in technical systems should also mean that the robot learns from its experiences and the actions it performs.

More information:

28 September 2013

Virtualizer

Head-mounted devices, which display three-dimensional images according to one’s viewing direction and allow users to lose themselves in computer-generated worlds, are already commercially available. However, it has not yet been possible to walk through these virtual realities without at some point running into the very real walls of the room. A team of researchers at the Vienna University of Technology has now built a ‘Virtualizer’, which allows for an almost natural walk through virtual spaces. The user is held in place by a belt in a support frame, and the feet glide across a low-friction surface. Sensors pick up these movements and feed the data into the computer. The team hopes that the Virtualizer will enter the market in 2014. Various ideas have been put forward for the digitalization of human motion. Markers can be attached to the body and then tracked with cameras – this is how motion capture for animated movies is achieved. For this, however, expensive equipment is needed, and the user is confined to a relatively small space.


Prototypes using conveyor belts have not yet yielded satisfactory results. In the Virtualizer’s metal frame, the user is kept in place with a belt. The smooth floor plate contains sensors that pick up every step. Rotations of the body are registered by the belt. The Virtualizer can be used with standard 3D headgear, which picks up the user’s viewing direction and displays 3D pictures accordingly. This is independent of the leg motion, so running in one direction while looking in another becomes possible. Moving through virtual realities using a keyboard or a joystick can lead to a discrepancy between visual perception and other body sensations. The prototype developed at TU Vienna already works very well – only some minor adjustments are still to be made. The Virtualizer has already caused a stir. It is scheduled to enter the market as soon as 2014. The price cannot be determined yet. The product should lead virtual reality out of the research labs and into gamers’ living rooms.
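As a rough illustration of that decoupling between leg motion and viewing direction, the sketch below (hypothetical function names and values, not the Virtualizer’s actual software) combines a sensed foot-slide step with the belt’s yaw reading to move the player, while the headset orientation is deliberately left out of the locomotion update and only affects what is rendered.

```python
import math

def locomotion_vector(step_length, belt_yaw_rad):
    """Convert one foot-slide step plus the belt's yaw reading into a
    displacement in world coordinates. The HMD's viewing direction is
    intentionally not used, so the user can run one way while looking another."""
    dx = step_length * math.cos(belt_yaw_rad)
    dy = step_length * math.sin(belt_yaw_rad)
    return dx, dy

def update_player(position, step_length, belt_yaw_rad):
    """Advance the player's position from a single sensed step."""
    dx, dy = locomotion_vector(step_length, belt_yaw_rad)
    return position[0] + dx, position[1] + dy

# Example: one 0.7 m step while the body faces 90 degrees (towards +y),
# regardless of where the headset is pointed.
print(update_player((0.0, 0.0), 0.7, math.radians(90)))
```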

More information:

26 September 2013

Interior 3D Map of Pisa

Developed by the CSIRO, Australia's national science agency, the Zebedee technology is a handheld 3D mapping system incorporating a laser scanner that sways on a spring to capture millions of detailed measurements of a site as fast as an operator can walk through it. Specialised software then converts the system's laser data into a detailed 3D map.


While the tower's cramped stairs and complex architecture have prevented previous mapping technologies from capturing its interior, Zebedee has enabled the researchers to finally create the first comprehensive 3D map of the entire building. Within 20 minutes researchers were able to use Zebedee to complete an entire scan of the building’s interior.
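The scanner itself only measures ranges and angles in its own frame; a minimal sketch of the kind of conversion the mapping software performs is shown below (simplified 2D geometry with hypothetical pose values, not CSIRO’s actual algorithm, which must also estimate the handheld scanner’s motion).

```python
import math

def scan_to_world(ranges, angles, pose):
    """Project one laser scan into world coordinates.

    ranges, angles: polar readings from the scanner (metres, radians)
    pose: (x, y, heading) of the scanner at the time of the scan; in a
          handheld system this pose must itself be estimated (SLAM).
    """
    px, py, heading = pose
    points = []
    for r, a in zip(ranges, angles):
        wx = px + r * math.cos(heading + a)
        wy = py + r * math.sin(heading + a)
        points.append((wx, wy))
    return points

# Accumulating the points from every scan along the walked trajectory
# yields the point cloud from which the detailed map is built.
cloud = scan_to_world([2.1, 2.0, 1.9], [-0.1, 0.0, 0.1], (0.0, 0.0, math.pi / 2))
print(cloud)
```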

More information:

24 September 2013

Exoskeletons Are Here

Although the exoskeleton was literally conceived as a motorized suit of armor reminiscent of medieval knights, it has come to represent a true technological-biological fusion as the most complicated neuroprosthetic ever imagined. The breadth and scope of sci-fi exoskeletal armor is nicely captured in the sweeping and grand scene near the end of the 2013 Marvel Studios production ‘Iron Man 3’. When we want to produce a movement, complex commands related to motor planning and organization send signals to the motor output areas of the brain. These commands then travel down the spinal cord to the appropriate level: higher up for arm movements and lower down for legs. At that spinal level sit the cells controlling the muscles that need to be activated. From the spinal cord the commands go to the muscles needed to produce the movement. All of this relaying takes time and introduces control delays that would make armored superhero fights difficult.


Because of these delays, the ultimate objective should be to create neuroprosthetics controlled directly by brain commands. This avoids the transmission delays involved in relaying commands downstream in the spinal cord or at the muscle level, but it also currently requires inserting electrodes into the nervous system. Instead, a good starting point for now is to use the commands from the brain that are relayed and detected as electrical activity (electromyography, EMG) in muscle. These EMG signals can be detected quite readily with electrodes placed on the skin over the muscles of interest. The EMG activity is a fairly faithful proxy for what your nervous system is trying to get your muscles to do. It’s a kind of biological ‘wire tapping’, listening in on the commands sent to muscle. Many different neuroprosthetics have been developed that use EMG control signals to guide the activity of the motors in the prosthetic itself.
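A minimal sketch of EMG-based proportional control is given below (hypothetical parameter values; real systems add proper filtering, per-user calibration and safety limits). It rectifies the raw signal, smooths it into an activation envelope and maps that envelope to a motor drive command.

```python
def emg_envelope(samples, alpha=0.1):
    """Rectify raw EMG samples and smooth them with an exponential
    moving average to obtain an activation envelope."""
    env, out = 0.0, []
    for s in samples:
        env = (1 - alpha) * env + alpha * abs(s)
        out.append(env)
    return out

def motor_command(envelope_value, rest_level=0.05, max_level=1.0):
    """Map the envelope to a 0..1 motor drive, ignoring resting noise."""
    drive = (envelope_value - rest_level) / (max_level - rest_level)
    return min(max(drive, 0.0), 1.0)

# Example: a burst of muscle activity translated into motor drive.
raw = [0.0, 0.02, -0.03, 0.4, -0.5, 0.6, -0.55, 0.1, -0.05]
env = emg_envelope(raw)
print([round(motor_command(e), 2) for e in env])
```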

More information:

23 September 2013

VS-Games 2013 Short Paper

On Friday 13th September, I presented a paper I co-authored with my students Athanasios Vourvopoulos and Alina Ene as well as a colleague from the SGI, Dr. Panagiotis Petridis, titled ‘Assessing Brain-Computer Interfaces for Controlling Serious Games’. The paper was presented at the 5th International Conference on Games and Virtual Worlds for Serious Applications (VS-Games 2013), Bournemouth, UK, 11-13 September 2013.


The paper examined how to fully interact with serious games in noisy environments using only non-invasive EEG-based information. Two different EEG-based BCI devices were used, and results indicated that although BCI devices are still in their infancy, they offer the potential to be used as alternative game interfaces after some familiarisation with the device and, in several cases, a certain degree of calibration.
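For illustration only (this is not the pipeline evaluated in the paper), a common way to turn a non-invasive EEG channel into a discrete game command is to compare band power against a threshold obtained from a short per-user calibration session:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Average spectral power of one EEG channel in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return power[mask].mean()

def game_command(signal, fs, threshold):
    """Issue a binary game command from alpha-band (8-12 Hz) power;
    the threshold would come from a calibration session."""
    return "move" if band_power(signal, fs, 8, 12) > threshold else "idle"

# Example with synthetic data: one second of EEG sampled at 256 Hz.
fs = 256
t = np.arange(fs) / fs
eeg = 5 * np.sin(2 * np.pi * 10 * t) + np.random.randn(fs)
print(game_command(eeg, fs, threshold=50.0))
```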

A draft version of the paper can be downloaded from here.

20 September 2013

VS-Games 2013 Full Paper

On Thursday 12th September, I presented a paper co-authored with my PhD student Stuart O'Connor and Dr. Christopher Peters, titled ‘A Study into Gauging the Perceived Realism of Agent Crowd Behaviour within a Virtual Urban City’. The paper was presented at the 5th International Conference on Games and Virtual Worlds for Serious Applications (VS-Games 2013), Bournemouth, UK, 11-13 September 2013.


The paper examined the development of a crowd simulation in a virtual city, and a perceptual experiment to identify features of behaviour that can be linked to perceived realism. The perceptual experimentation methodologies presented can be adapted and potentially utilised to test other types of crowd simulation, for application within computer games or more specific simulations such as urban planning or health and safety scenarios.
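As a hedged sketch of the kind of agent behaviour such a simulation layers together (simple goal-seeking plus separation steering; not the model evaluated in the paper):

```python
def step_agent(pos, goal, neighbours, speed=1.0, sep_radius=2.0):
    """Move an agent one step towards its goal while steering away
    from any neighbours closer than sep_radius."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    dist = max((dx * dx + dy * dy) ** 0.5, 1e-6)
    vx, vy = speed * dx / dist, speed * dy / dist
    for nx, ny in neighbours:
        ox, oy = pos[0] - nx, pos[1] - ny
        d = max((ox * ox + oy * oy) ** 0.5, 1e-6)
        if d < sep_radius:
            vx += (sep_radius - d) * ox / d
            vy += (sep_radius - d) * oy / d
    return pos[0] + vx, pos[1] + vy

# One agent heading to a goal while avoiding a nearby pedestrian.
print(step_agent((0.0, 0.0), (10.0, 0.0), [(1.0, 0.5)]))
```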

A draft version of the paper can be downloaded from here.

15 September 2013

Cloth Imaging

Creating a computer graphic model of a uniform material like woven cloth or finished wood can be done by modeling a small volume, like one yarn crossing, and repeating it over and over, perhaps with minor modifications for color or brightness. But the final rendering step, where the computer creates an image of the model, can require far too much calculation for practical use. Cornell graphics researchers have extended the idea of repetition to make the calculation much simpler and faster. Rendering an image of a patterned silk tablecloth the old way took 404 hours of calculation; the new method cut the time to about one-seventh of that, and with thicker fabrics computation was sped up by a factor of 10 to 12. A computer graphic image begins with a 3D model of the object’s surface. To render an image, the computer must calculate the path of light rays as they are reflected from the surface. Cloth is particularly complicated because light penetrates into the surface and scatters a bit before emerging and traveling to the eye. It is the pattern of this scattering that creates the different highlights on silk, wool or felt. The researchers previously used high-resolution CT scans of real fabric to guide them in building micron-resolution models.


Brute-force rendering computes the path of light through every block individually, adjusting at each step for the fact that blocks of different color and brightness will have different scattering patterns. The new method pre-computes the patterns of a set of example blocks – anywhere from two dozen to more than 100 – representing the various possibilities. These become a database the computer can consult as it processes each block of the full image. For each type of block, the pre-computation shows how light will travel inside the block and pass through the sides to adjacent blocks. In tests, the researchers first rendered images of plain-colored fabrics, showing that the results compared favorably in appearance with the old brute-force method. Then they produced images of patterned tablecloths and pillows. Patterned fabrics require larger databases of example blocks, but the researchers noted that once the database is computed, it can be re-used for numerous different patterns. The method could be employed on other materials besides cloth, the researchers noted, as long as the surface can be represented by a small number of example blocks. They demonstrated with images of finished wood and a coral-like structure.
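A toy sketch of the precompute-then-look-up idea is shown below (hypothetical data structures, not the Cornell implementation): the expensive per-block scattering response is computed once for each example block type, stored in a database, and then reused for every block in the full model.

```python
def precompute_response(block_type):
    """Stand-in for the expensive per-block light-transport computation.
    In the real system this would simulate how light scatters inside one
    example block and exits through its faces."""
    return {"type": block_type, "transfer": len(block_type) / 10.0}

def render(model_blocks, example_types):
    """Render by looking up each block's precomputed response instead of
    recomputing light transport for every block individually."""
    database = {t: precompute_response(t) for t in example_types}  # built once
    return [database[b]["transfer"] for b in model_blocks]         # cheap lookups

# A patterned cloth described as a grid of repeated example blocks.
pattern = ["warp", "weft", "warp", "weft", "warp"]
print(render(pattern, example_types={"warp", "weft"}))
```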

More information:

09 September 2013

Touch Goes Digital

Researchers at the University of California, San Diego report a breakthrough in technology that could pave the way for digital systems to record, store, edit and replay information in a dimension that goes beyond what we can see or hear: touch. Touch was largely bypassed by the digital revolution because it seemed too difficult to replicate what analog haptic devices can produce.


In addition to uses in health and medicine, the communication of touch signals could have far-reaching implications for education, social networking, e-commerce, robotics, gaming, and military applications, among others. The sensors and sensor arrays reported in the research are also fully transparent, which makes them particularly interesting for touch-screen applications in mobile devices.

More information: