28 June 2011

Aristotle University Seminar

On Wednesday, 11th May 2011, I gave an invited talk at the Aristotle University of Thessaloniki, School of Rural & Surveying Engineering, Thessaloniki, Greece. The title of the seminar was ‘Serious Virtual Applications for Cultural Heritage’. A brief abstract is provided below:


Abstract The state-of-the-art in serious game technology is identical to that of entertainment games technology. As a result, the field of serious heritage games relies on advances in computer games, real-time computer graphics, virtual and augmented reality and artificial intelligence. On the other hand, the main strengths of serious gaming applications may be generalised as lying in the areas of communication, visual expression of information, collaboration mechanisms, interactivity and entertainment. This presentation will provide an overview of serious virtual applications, ranging from virtual and augmented reality to serious games and online virtual environments for cultural heritage. Some characteristic examples include a subset of the research conducted at the University of Sussex for the ARCO project, at City University for the LOCUS project, and at Coventry University for the Serious Games Institute (such as the RomaNova project) and iWARG (such as the Herbert and Shakespeare projects). Results from all projects indicate the importance of intuitive and flexible serious virtual applications across a variety of cultural heritage uses.

More information:

http://www.topo.auth.gr/greek/EPIKAIRA/doc/Invitation_Lecture_Liarokapis.pdf

25 June 2011

'Digital Games and Learning' Review

Recently, I reviewed a new book titled ‘Digital Games and Learning’, published on 31st March 2011. Digital Games and Learning was edited by Sara de Freitas (Serious Games Institute, UK) and Paul Maharg (Northumbria University, UK) and presents an interesting selection of papers in the area of games and simulations.


The book is divided into three distinctive sections: theoria (theoretical positions), cultura (cultural perspectives) and praxis (theory into practice), which together aim to address significant issues and challenges in the field. The contributors' central research question is how to understand the paradigm shift from conventional learning environments to learning in games and simulations.

More information:

http://www.continuumbooks.com/books/detail.aspx?BookId=157255&SubjectId=940

21 June 2011

Touch-Screen Steering Wheel

Distance driving can be mind-numbingly boring, but looking away from the road to text or change songs is a life-or-death gamble. Plus, buttons embedded in the wheel only control a fraction of a car's functionality. Now German researchers have a wheel prototype that puts everything within reach -- no glancing needed. The team came up with the idea for a multi-touch steering wheel interface while thinking about driving and mobile technology.


Their prototype is made from 11-millimeter-thick clear acrylic ringed in infrared LEDs. An infrared camera attached to the bottom picks up the reflections made when the surface is touched. A driver can control a radio or navigate a map with simple movements along the surface. Those gestures can be made with the thumbs while still gripping the wheel and looking at the road. To identify intuitive gestures, the researchers conducted a study asking participants what movements they'd make for each of 20 commands.
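The article does not describe the prototype's software, but the sensing principle (edge-mounted infrared LEDs whose light scatters towards the camera wherever a finger presses the acrylic) is a standard multi-touch technique. Below is a minimal sketch of the touch-detection step in Python with OpenCV; the threshold and area values are illustrative guesses, not figures from the prototype.

```python
import cv2

def detect_touches(ir_frame, min_area=30):
    """Find bright touch blobs in a single grayscale infrared camera frame.

    A fingertip pressing the acrylic scatters the edge-mounted IR light
    down towards the camera, showing up as a bright spot. The threshold
    and minimum-area values here are illustrative, not from the paper.
    """
    blurred = cv2.GaussianBlur(ir_frame, (5, 5), 0)
    _, mask = cv2.threshold(blurred, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    touches = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            m = cv2.moments(c)
            touches.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return touches  # list of (x, y) centroids, one per fingertip
```

A gesture recogniser would then track these centroids from frame to frame and match the resulting thumb trajectories against templates for the 20 commands studied.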

More information:

http://news.discovery.com/autos/steering-wheel-interface-driving-110606.html

19 June 2011

University of Derby Seminar

On Wednesday, 23rd February 2011, I gave an invited talk at the University of Derby, School of Computing and Mathematics, UK. The title of the seminar was ‘Interacting with Mixed Reality Interfaces’. A brief abstract is provided below:


Abstract This presentation will provide an overview of mixed reality interfaces, ranging from virtual and augmented reality to serious games and online virtual environments. Some characteristic examples include a subset of the research conducted at the University of Sussex for the ARCO project, City University for the LOCUS project and various smaller projects for the Serious Games Institute and iWARG at Coventry University. Results from all projects indicate the importance of intuitive and flexible computer interfaces for a variety of application domains such as archaeology, navigation and education.

More information:

http://www.derby.ac.uk/computing/disys/seminars

14 June 2011

Virtual Pregnancy

Ever wondered what it feels like to be pregnant? Now even men can find out, thanks to a new dress created by researchers from the Japan Advanced Institute of Science and Technology that simulates the weight, temperature, movement and heartbeat of a fetus. The device can replicate the nine-month process in two minutes, or it can be worn for a longer period to experience what pregnancy feels like day-to-day. To mimic the fetus, it contains a 4-litre bag filled with warm water. Kicking is recreated with a lining of 45 balloons that expand and contract. But wiggling is more complex to reproduce and requires a grid of air actuators that exploit a tactile illusion: when two vibrating sources placed a distance apart are driven at the same time, they trigger a sensation in between the two points.


So by varying the vibrating pairs over time, the simulated fetus seems to squirm. The system also contains an accelerometer and touch sensors to allow for interaction. When the suit is connected to a computer, software displays a 3D model of the fetus that changes to mimic different stages of pregnancy. The fetus on the screen appears to be in a good mood when the wearer strokes their abdomen and makes steady movements, but if the person moves around vigorously, it will trigger more intense motion. The team hopes the system will help men better understand what a woman goes through during pregnancy. It offers a more realistic simulation than existing systems by reproducing the temperature and movement of the fetus.
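The funneling illusion described above lends itself to a very simple control law: the closer the phantom point should be to one actuator, the stronger that actuator vibrates. The article does not give the suit's actual control scheme, so the linear amplitude-panning model sketched below is only an illustration of the principle.

```python
import numpy as np

def phantom_amplitudes(pos, actuator_a=0.0, actuator_b=1.0, intensity=1.0):
    """Drive two actuators so one 'phantom' vibration is felt at pos.

    pos is a coordinate between the two actuators (0..1). This linear
    amplitude-panning model is an assumption for illustration; the
    suit's real control law is not described in the article.
    """
    beta = np.clip((pos - actuator_a) / (actuator_b - actuator_a), 0.0, 1.0)
    return intensity * (1.0 - beta), intensity * beta

# Sweep the phantom point back and forth between two fixed actuators
# so the simulated fetus appears to squirm.
for t in np.linspace(0, 2 * np.pi, 8):
    pos = 0.5 + 0.5 * np.sin(t)
    amp_a, amp_b = phantom_amplitudes(pos)
    print(f"pos={pos:.2f}  amp_A={amp_a:.2f}  amp_B={amp_b:.2f}")
```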

More information:

http://www.newscientist.com/blogs/nstv/2011/06/future-of-virtual-reality-what-pregnancy-feels-like.html

09 June 2011

Leia in Your Living Room

MIT researchers have hacked the Kinect and found that it is a perfect tool for capturing images to project as 3D holograms. In particular, they projected a Star Wars-style hologram using a Microsoft Kinect device. Home holographic video chat may sound like the stuff of Star Wars, but it is closer than we think. Holography, like traditional 3D filmmaking, has the end goal of a more immersive video experience, but the technology is completely different. 3D cameras are conventional, fixed cameras that simply capture two very slightly different streams to be directed to each eye individually; the difference between the two images creates the illusion of depth. If you change your position in front of a 3D movie, the image you see will remain the same: it has depth, but only one perspective. A hologram, on the other hand, is made by capturing the scatter of light bouncing off a scene as data, and then reconstructing that data as a 3D environment.


That allows for much greater immersion: if you change your viewing angle, you'll actually see a different image, just as you can see the front, sides and back of a real-life object by moving around it. Capturing that scatter of light is no easy feat. A standard 3D movie camera captures light bouncing off an object at two different angles, one for each eye, but in the real world light bounces off objects at an infinite number of angles. Holographic video systems use devices that produce so-called diffraction fringes, basically fine patterns of light and dark that can bend the light passing through them in predictable ways. A dense enough array of fringe patterns, each bending light in a different direction, can simulate the effect of light bouncing off a 3D object. The trick is making it live, fast and cheap. This is one of the greatest challenges for MIT's Object-Based Media Group (OBMG): the equipment is currently extremely expensive, and the amount of data massive.
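The article does not spell out how a fringe pattern is computed, but the textbook case is instructive: interfere a spherical wave from a single scene point with a plane reference wave and record the resulting intensity. A real-time system would sum such contributions for every depth sample (for instance, from a Kinect depth map). The sketch below is a generic computer-generated hologram of one point; the parameters are illustrative and not taken from the MIT system.

```python
import numpy as np

def point_hologram(nx=512, ny=512, pitch=8e-6, wavelength=633e-9,
                   point=(0.0, 0.0, 0.05)):
    """Compute the diffraction-fringe pattern for a single scene point.

    Interferes a spherical wave from one 3D point with an on-axis
    plane reference wave and records the intensity. Display pitch,
    wavelength and point position are illustrative assumptions.
    """
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)
    px, py, pz = point
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    k = 2 * np.pi / wavelength
    object_wave = np.exp(1j * k * r) / r   # spherical wave from the point
    reference = 1.0                        # unit-amplitude plane wave
    fringes = np.abs(object_wave + reference) ** 2
    return fringes / fringes.max()         # normalised light/dark pattern
```

Summing one such pattern per captured scene point is exactly why the data volume is massive: every output pixel depends on every point in the scene.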

More information:

http://www.popsci.com/gadgets/article/2011-04/leia-your-living-room-creating-holograph-microsoft-kinect

07 June 2011

Robotic Aids for Visually Impaired

For the visually impaired, navigating city streets or neighborhoods presents constant challenges, and the reality is that a significant number of such people must rely on a rudimentary technology - a simple cane - to find their way through a world filled with obstacles. A group of USC Viterbi School of Engineering researchers is working to change this by developing a robotic, vision-based mobility aid for the visually impaired. A design first shown a year ago is now being further developed. For the USC Viterbi team, the need is clear. Researchers have developed software that sees the world and linked it to a system that provides tactile messages to alert users about objects in their paths.


The system uses Simultaneous Localization and Mapping (SLAM) software to build three-dimensional maps of the environment and to identify a safe path through obstacles. The route information is conveyed to the user through a guide vest that includes four micro-motors, located on the individual's shoulders and waist, that vibrate like cell phones. For example, a vibration on the left shoulder indicates a higher obstacle to the left, such as a low-hanging branch, and the individual can use that information to take a new path. Researchers said that canes have clear limitations with larger objects, from walls to concrete structures, and that the technology will enable users to avoid falls and injuries.
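The article gives one example mapping (left shoulder = higher obstacle to the left) but not the full control logic. The toy sketch below extrapolates that example into a four-way mapping from obstacle side and height to a motor; the thresholds and interface are assumptions made for illustration.

```python
# Hypothetical mapping from obstacle position to the vest's four motors.
MOTORS = {
    ("left", "high"): "left_shoulder_motor",
    ("right", "high"): "right_shoulder_motor",
    ("left", "low"): "left_waist_motor",
    ("right", "low"): "right_waist_motor",
}

def select_motor(obstacle_x, obstacle_height, shoulder_height=1.2):
    """Pick the vest motor to vibrate for an obstacle on the planned path.

    obstacle_x: lateral offset in metres (negative = left of the user).
    obstacle_height: obstacle height above the ground in metres.
    The shoulder-height threshold is a guess, not a value from USC.
    """
    side = "left" if obstacle_x < 0 else "right"
    level = "high" if obstacle_height >= shoulder_height else "low"
    return MOTORS[(side, level)]

print(select_motor(-0.4, 1.8))  # low-hanging branch on the left
# -> left_shoulder_motor
```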

More information:

http://uscnews.usc.edu/science_technology/researchers_create_robotic_aids_for_visually_impaired.html

05 June 2011

Aurasma: Augmented Reality Reader

Aurasma, an augmented reality (AR) application, uses Autonomy's high-powered search technology - hitherto aimed just at commercial clients - to link all sorts of things it sees through a smartphone's camera to other objects. So users can point their phone's camera at a poster for the movie Thor and it will suddenly start playing a trailer. And if users show their camera the London Underground symbol, a cartoon starts to play with an athletics commentary, promoting the Tube as the way to get to the London Olympics.


This works not via barcodes or NFC technology but through visual recognition. Aurasma is building up a bank of images which it recognises and treats as cues to play a video or animate a graphic. It works not just with images on a page but with buildings, landscapes and, soon, we're promised, with people. The app is currently only available on the iPhone 4, but already newspapers are talking about turning display adverts into video ads - which can earn them more. And movie studios are planning sightseeing tours where you see parts of a film played out in the real world.
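Autonomy's recognition engine is proprietary, so the sketch below only illustrates the general idea: match local features extracted from a camera frame against a bank of trigger images. It uses ORB features in OpenCV, and the image file names and match threshold are made up for the example.

```python
import cv2

# Build a small bank of trigger images. In a real service the bank
# would be large and the matching done server-side; file names here
# are invented for the example.
orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

bank = {}
for name in ["thor_poster.png", "underground_roundel.png"]:
    img = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    _, descriptors = orb.detectAndCompute(img, None)
    bank[name] = descriptors

def recognise(frame, min_matches=40):
    """Return the name of the best-matching trigger image, or None."""
    _, desc = orb.detectAndCompute(frame, None)
    if desc is None:
        return None
    best_name, best_count = None, 0
    for name, bank_desc in bank.items():
        matches = matcher.match(desc, bank_desc)
        if len(matches) > best_count:
            best_name, best_count = name, len(matches)
    # Require enough feature matches before triggering the overlay video.
    return best_name if best_count >= min_matches else None
```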

More information:

http://www.bbc.co.uk/news/technology-13558137