30 September 2009

VAST 2009 Article

Last Friday, Eike Anderson, a colleague from the Interactive Worlds Applied Research Group (IWARG), and I presented a paper titled ‘Serious Games in Cultural Heritage’ in the State of the Art Reports session of the 10th International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST09). The conference, one of the most significant in the field, was held in Malta between 22 and 25 September. The paper argued that although the widespread use of gaming for leisure purposes has been well documented, the use of games to support cultural heritage purposes, such as historical teaching and learning, or for enhancing museum visits, has been less well considered. The state of the art in serious games technology is identical to that in entertainment games technology.

As a result, the field of serious heritage games concerns itself with recent advances in computer games, real-time computer graphics, virtual and augmented reality and artificial intelligence. On the other hand, the main strengths of serious gaming applications may be generalised as lying in the areas of communication, visual expression of information, collaboration mechanisms, interactivity and entertainment. The report focuses on the state of the art with respect to the theories, methods and technologies used in serious heritage games. It provides an overview of the existing literature of relevance to the domain, discusses the strengths and weaknesses of the described methods and points out unsolved problems and challenges. In addition, several case studies illustrating the application of methods and technologies used in cultural heritage are presented.

A draft version of the paper can be downloaded from here.

29 September 2009

Monitoring Pedestrian Crossings

A team of researchers from the University of Castilla-La Mancha (UCLM) has developed an intelligent surveillance system able to detect aberrant behaviour by drivers and pedestrians at pedestrian crossings and in other urban settings. The study, published this month in the journal Expert Systems with Applications, could be used to penalise incorrect behaviour. The study focused on a pedestrian crossing in a two-way street, regulated by a traffic light. The authors defined the ‘normal’ behaviour of cars and pedestrians in this setting: they can move when the lights are green, but must stop and not cross the safety lines when the lights are red. The system, working in a similar way to a human monitor, can detect whether the vehicles and pedestrians are moving ‘normally’. If at any point the movement of one of these ‘objects’ is not ‘normal’ (driving through a red light, for example), the programme recognises that the behaviour differs from the established normal framework.
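
To make the idea concrete, the detection rule can be reduced to a few lines of logic. The following Python sketch is purely illustrative, assuming a simple rule-based check over tracked objects; it is not the UCLM system, whose multi-agent architecture is described below.

```python
from dataclasses import dataclass
from enum import Enum

class Light(Enum):
    GREEN = "green"
    RED = "red"

@dataclass
class TrackedObject:
    kind: str               # "vehicle" or "pedestrian"
    moving: bool            # derived from tracking across video frames
    past_safety_line: bool  # has the object crossed its stop line?

def is_normal(obj: TrackedObject, light: Light) -> bool:
    """Normal behaviour: move on green; on red, stop behind the safety line."""
    if light is Light.GREEN:
        return True
    return not obj.moving and not obj.past_safety_line

# A vehicle driving through a red light deviates from the normal framework.
car = TrackedObject(kind="vehicle", moving=True, past_safety_line=True)
print(is_normal(car, Light.RED))  # False -> flag for a human operator
```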

The supporting architecture underlying the model is a multi-agent artificial intelligence system, made up of software agents that carry out the various tasks involved in monitoring the environment. It has been designed according to standards recommended by FIPA (the Foundation for Intelligent Physical Agents), an international committee working to promote the adoption and diffusion of this kind of technology. To demonstrate the effectiveness of the model, its creators have developed a monitoring tool (OCULUS), which analyses images taken from a real setting. To do this, the team members placed a video camera close to their place of work, the Higher School of Information Technology in Ciudad Real. The researchers are continuing their work to fine-tune the system, and believe it will be possible to use it in the future in other situations, for example in analysing behaviour within indoor environments (museums, for example), or in detecting overcrowding.

More information:

http://www.eurekalert.org/pub_releases/2009-09/f-sf-pcc091809.php

27 September 2009

Augmented Reality Markup Language

The nascent field of Mobile Augmented Reality (AR) is on the verge of becoming mainstream. In recent months an explosion in the development of practical AR solutions has given consumers numerous AR applications with which to experience and ‘augment’ their daily lives. With this surge in AR development arises the potential for a multiplication of proprietary methods for aggregating and displaying geographic annotation and location-specific data. Mobilizy proposes creating an augmented reality markup language specification based on the OpenGIS® KML Encoding Standard (OGC KML) with extensions. The impetus for proposing the creation of an open Augmented Reality Markup Language (ARML) specification to The AR Consortium is to help establish and shape a long-term, sustainable framework for displaying geographic annotation and location-specific data within augmented reality browsers.

In addition to proposing the ARML specification to The AR Consortium, Mobilizy will be presenting an overview of the specification at the Emerging Technologies Conference @MIT in Boston and at the Over The Air event held at Imperial College in London. The purpose of establishing an open ARML specification is to ensure that all data created for augmenting the physical world can be universally accessed and viewed on any augmented reality browser. ARML allows individuals and organizations to easily create and style their own AR content (e.g. points of interest) without advanced knowledge of AR, APIs or tools. In this respect the ARML specification is analogous to HTML for the Web, which is used for creating web pages and web sites. Mobilizy has taken an exciting step forward in proposing one of the first specifications for the commercial augmented reality sector.
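
Since ARML builds on OGC KML, an annotated point of interest can be pictured as a KML Placemark carrying extra AR attributes. The Python sketch below generates such a document; the ar: extension namespace and element are assumptions for illustration, not taken from the published specification.

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"     # real OGC KML namespace
ARML_NS = "http://www.openarml.org/arml/1.0"  # hypothetical ARML namespace
ET.register_namespace("", KML_NS)
ET.register_namespace("ar", ARML_NS)

# A point of interest expressed as a KML Placemark with an AR extension element.
kml = ET.Element(f"{{{KML_NS}}}kml")
placemark = ET.SubElement(kml, f"{{{KML_NS}}}Placemark")
ET.SubElement(placemark, f"{{{KML_NS}}}name").text = "Trevi Fountain"
point = ET.SubElement(placemark, f"{{{KML_NS}}}Point")
# KML coordinates are longitude,latitude,altitude
ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = "12.4833,41.9009,0"
# Illustrative AR-specific annotation that an extension namespace could carry.
ET.SubElement(placemark, f"{{{ARML_NS}}}provider").text = "example-content-server"

print(ET.tostring(kml, encoding="unicode"))
```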

More information:

http://www.openarml.org/

http://www.mobilizy.com/enpress-release-mobilizy-proposes-arml

19 September 2009

Digitization of Ancient Rome

The ancient city of Rome was not built in a day. It took nearly a decade to build the Colosseum, and almost a century to construct St. Peter's Basilica. But now the city, including these landmarks, can be digitized in just a matter of hours. A new computer algorithm developed at the University of Washington uses hundreds of thousands of tourist photos to automatically reconstruct an entire city in about a day. The tool is the most recent in a series developed at the UW to harness the increasingly large digital photo collections available on photo-sharing Web sites. The digital Rome was built from 150,000 tourist photos tagged with the word ‘Rome’ or ‘Roma’ that were downloaded from the popular photo-sharing Web site, Flickr. Computers analyzed each image and in 21 hours combined them to create a 3D digital model. With this model a viewer can fly around Rome's landmarks, from the Trevi Fountain to the Pantheon to the inside of the Sistine Chapel. Earlier versions of the UW photo-stitching technology are known as Photo Tourism. That technology was licensed in 2006 to Microsoft, which now offers it as a free tool called Photosynth. With Photosynth and Photo Tourism it is possible to reconstruct individual landmarks.

In addition to Rome, the team recreated the Croatian coastal city of Dubrovnik, processing 60,000 images in less than 23 hours using a cluster of 350 computers, and Venice, Italy, processing 250,000 images in 65 hours using a cluster of 500 computers. Many historians see Venice as a candidate for digital preservation before water does more damage to the city, the researchers said. Previous versions of the Photo Tourism software matched each photo to every other photo in the set. But as the number of photos increases, the number of matches explodes, growing with the square of the number of photos. A set of 250,000 images would take at least a year for 500 computers to process, and a million photos would take more than a decade. The newly developed code works more than a hundred times faster than the previous version: it first identifies images that are likely to match and then concentrates the matching effort on those pairs. The code also uses parallel processing techniques, allowing it to run simultaneously on many computers, or even on remote servers connected through the Internet. This technique could create online maps that offer viewers a virtual-reality experience. The software could build cities for video games automatically, instead of by hand. It also might be used in architecture for digital preservation of cities, or integrated with online maps. The research was supported by the National Science Foundation, the Office of Naval Research and its SPAWAR lab, Microsoft Research, and Google.
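
The scaling argument is easy to verify with a back-of-the-envelope calculation. The Python sketch below contrasts exhaustive pairwise matching with a pruned scheme that considers only a fixed number of likely candidates per image; the candidate count of 40 is an arbitrary illustrative value, not a figure from the UW pipeline.

```python
def exhaustive_pairs(n: int) -> int:
    """Match every image against every other: grows quadratically."""
    return n * (n - 1) // 2

def pruned_pairs(n: int, k: int = 40) -> int:
    """Match each image only against k likely candidates: grows linearly."""
    return n * k

for n in (150_000, 250_000, 1_000_000):
    print(f"{n:>9,} photos: {exhaustive_pairs(n):>19,} exhaustive pairs"
          f" vs {pruned_pairs(n):>12,} pruned pairs")

# For 250,000 photos this is ~31.2 billion pairs versus 10 million,
# a reduction of more than three orders of magnitude.
```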

More information:

http://uwnews.org/article.asp?articleID=51970

13 September 2009

AR Visual Time Machine

A ruined temple, ancient frescos and even a long-dead king have been brought to life by a ‘visual time machine’ developed by European researchers. The Palace of Venaria near Turin, Italy, and Winchester Castle in the United Kingdom have already benefited from the technology, which combines augmented reality (AR) content with location awareness on mobile devices to give visitors to historic and cultural sites a deeper, richer and more enjoyable experience. Other places of interest are also set for a virtual renaissance in the near future, with a commercial version of the system being developed to run on smart phones. Users of the system can look at a historic site and, by taking a photo or viewing it through the camera on their mobile device, access much more information about it. They are even able to visualise, in real time, how it looked at different stages in history. The AR system is one component of a comprehensive mobile information platform for tourists developed in the EU-funded iTacitus project, which also created location-based services and smart itinerary-generating software to help users get the most out of any trip.

Visitors to historic cities provide the iTacitus system with their personal preferences – a love of opera or an interest in Roman history, for example – and the platform automatically suggests places to visit and informs them of events currently taking place. The smart itinerary application ensures that tourists get the most out of each day, dynamically helping them schedule visits and directing them between sites. Once at their destination, be it an archaeological site, museum or famous city street, the AR component helps bring its cultural and historic significance to life by downloading suitable AR content from a central server. At the Palace of Venaria, a UNESCO World Heritage site, the iTacitus system allowed users to see how the frescos on the walls of the Sala di Diana once appeared and to superimpose a long-gone temple onto pictures of the ruins in the colourful gardens on their mobile phone. In Winchester, the system showed visitors the court inside the castle’s Great Hall and even offered an introduction by a virtual King Alfred.
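
As a toy illustration of the suggestion step, consider matching a visitor’s stated interests against tagged sites. Everything in this Python sketch (the sites, the tags, the scoring) is hypothetical; it conveys only the general idea of preference-driven recommendation, not the iTacitus implementation.

```python
# Hypothetical site catalogue: each site tagged with a few interest keywords.
SITES = {
    "Palace of Venaria": {"baroque", "gardens", "frescoes"},
    "Winchester Great Hall": {"medieval", "monarchy", "castles"},
    "Roman Forum": {"roman history", "archaeology"},
}

def suggest(interests: set[str], top_n: int = 2) -> list[str]:
    """Rank sites by how many of the visitor's interests their tags share."""
    ranked = sorted(SITES, key=lambda site: len(SITES[site] & interests),
                    reverse=True)
    return ranked[:top_n]

# A visitor who loves Roman history and gardens gets matching suggestions.
print(suggest({"roman history", "gardens"}))
```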

More information:

http://www.sciencedaily.com/releases/2009/08/090812104219.htm

10 September 2009

Virtual Maps For The Blind

The blind and visually impaired often rely on others to provide cues and information on navigating through their environments. The problem with this method is that it doesn't give them the tools to venture out on their own, says Dr. Orly Lahav of Tel Aviv University's School of Education and Porter School for Environmental Studies. To give navigational ‘sight’ to the blind, researchers from Tel Aviv University have invented a new software tool to help the blind navigate through unfamiliar places. It is connected to an existing joystick, a 3D haptic device that interfaces with the user through the sense of touch. People feel tension beneath their fingertips as a physical sensation through the joystick as they navigate around a virtual environment which they cannot see, only feel: the joystick stiffens when the user meets a virtual wall or barrier. The software can also be programmed to emit sounds - a cappuccino machine firing up in a virtual café, or phones ringing when the explorer walks by a reception desk. Exploring 3D virtual worlds based on maps of real-world environments, the blind are able to ‘feel out’ streets, sidewalks and hallways with the joystick as they move the cursor, like a white cane, across a computer screen they will never see. The new solution gives them the control, confidence and ability to explore new streets before going out alone, making unknown spaces familiar.

In other words, it allows people who can't see to make mental maps in their minds. The software takes physical information from our world and digitizes it for transfer to a computer, with which the user interacts through a mechanical device. The hope is that the blind will be able to explore the virtual environment of a new neighborhood in the comfort of their homes before venturing out into the real world. The tool lets the blind ‘touch’ and ‘hear’ virtual objects and deepens their sense of space, distance and perspective. They can ‘feel’ intersections, buildings, paths and obstacles with the joystick, and even navigate inside a shopping mall or a museum like the Louvre in a virtual environment before they go out to explore on their own. The tool transmits textures to the fingers and can distinguish among surfaces like tiled floors, asphalt, sidewalks and grass. In theory, any unknown space can be virtually pre-explored; the territory just needs to be mapped first, which can be done with existing applications such as GIS. The tool, called BlindAid, was piloted with users at the Carroll Center for the Blind, a rehabilitation center in Newton, Massachusetts.
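
The ‘stiffening’ effect can be pictured as a virtual spring that engages only when the cursor penetrates a mapped obstacle. The Python sketch below is a minimal illustration of that idea, assuming a wall along y = 0 and a made-up stiffness constant; the real BlindAid device and its software are of course more sophisticated.

```python
STIFFNESS = 800.0  # N/m: spring constant of the virtual wall (illustrative)

def wall_force(y: float) -> float:
    """Resistive force pushing the cursor back out of a wall at y = 0.

    In free space (y > 0) no force is applied; once the cursor penetrates
    the wall (y < 0), resistance grows with penetration depth, which the
    user feels as the joystick stiffening beneath their fingertips.
    """
    penetration = max(0.0, -y)
    return STIFFNESS * penetration

# 5 mm into the wall produces 4 N of resistance; in free space, none.
print(wall_force(-0.005))  # 4.0
print(wall_force(0.02))    # 0.0
```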

More information:

http://www.sciencedaily.com/releases/2009/09/090910114152.htm

05 September 2009

VR and Interactive 3D Learning

These are not industry professionals. They are the students of tomorrow, using interactive 3D technology to become fully immersed in a virtual learning environment. In this era of 21st-century teaching tools, the Kentucky Community & Technical College System (KCTCS) is leading the new wave of institutions that fuse interactive 3D models with hands-on simulations, providing multiple opportunities to experiment without risk and enhancing learning for the future workforce. Traditionally, academic institutions have relied on tools such as blackboard outlines, physical demonstrations and videos to facilitate learning. But through computers and projectors, 3D technology allows users to see a person, place or thing as it would appear in real life. This opens the door to a virtual world of possibilities in the classroom, where students can learn about science, anatomy, geography, architecture and astronomy by interacting with the content rather than reading about it in a textbook.

Although KCTCS leadership had been looking to integrate 3D technologies into the classroom for the past seven years, the real push came in the wake of the coal mining tragedies of 2006. That's when KCTCS launched its first virtual project, for the Kentucky Coal Academy, to show the advantages of simulation-based training. A simulation-based training application was developed that takes miners through a daily inspection, walks them through the device's parts and demonstrates how the breathing process works, in addition to the actual donning process. Such innovative units of instruction can be viewed on a laptop, while others use 3D stereographic projection technology, which allows learning objects to appear to pop out into the middle of the room. For some projects, students enter a space called a CAVE, which has screens on the walls that project a realistic environment from the relevant field, such as a hospital room.

More information:

http://www.convergemag.com/edtech/Virtual-Simulations-Take-Learning-to-Higher-Dimension.html

01 September 2009

Virtual 3D Lab Stimulates Learning

Students at a Baltimore County high school this fall will explore the area surrounding Mount St. Helens in a vehicle that can morph from an aircraft to a car to a boat, to learn how the environment has changed since the volcano's 1980 eruption. But they'll do it all without ever leaving their Chesapeake High School classroom: they will be using a 3D Virtual Learning Environment developed by the Johns Hopkins University Applied Physics Laboratory (APL) with the university's Center for Technology Education. Researchers are deploying the environment, which was modeled after a state-of-the-art 3D visualization facility at APL used for projects by the Department of Defense and NASA. The Virtual Learning Environment is the first of its kind in the nation. According to its developers, there is not a lot of research showing that it directly improves student achievement - they have a hunch that it does - but they do know that it improves student involvement, and teacher involvement as well.

Initial results showed that when students have an interest in something, they are more willing and able to learn, and gaming is something that students are interested in. People can learn anything, but they have to be interested in it; there are people who can recite sports statistics for the past 10 years because it's something they're interested in. The team will work to develop other environments, and hopes that eventually students will be able to create their own. The Virtual Learning Environment includes 10 high-definition, 72-inch TV monitors, arranged in two five-screen semicircles, that allow students to interact with what they see on screen using a custom-designed digital switch and touch-panel controller. In an adjoining lab, 30 workstations, each outfitted with three interconnected monitors, will display the same environments, allowing lessons to be delivered and understood on a team or individual basis.

More information:

http://www.eschoolnews.com/news/top-news/index.cfm?i=60314