31 December 2009

Teaching Avatars

James Cameron's latest release, Avatar, has made online virtual worlds such as Second Life (SL) more popular than ever as audiences sit up and take notice of the possibilities of these sites. Users currently visit them to socialise and connect through free voice and text chat, represented by personalised avatars, or computerised self-representations. However, educators are also discovering the academic possibilities of these sites. SL, for example, provides virtual homes for some of the world's most prestigious universities, such as Harvard and Stanford, which have bought virtual land with Linden Dollars.

Although this seems to be something of a trend in the West, it has yet to catch on in the Middle East. Campus Notes spoke to educators in the UAE to gauge how long it will take before students take their seats in a virtual classroom. Researchers at Zayed University have already caught on to the possibilities of teaching in virtual reality. Using OpenSimulator, often called OpenSim, they teach students the basics of 3D concepts and the principles of server building within a virtual world. OpenSim is an open-source server platform that hosts virtual worlds and can be accessed through multiple protocols; as free software, anyone can use it.

More information:


29 December 2009

Understanding Interaction in VR

New cinema blockbuster Avatar leapt to the top of box office charts as soon as it came out — a stunning 3D realisation of an alien world. Our fascination with themes of escape to other fantastic places and the thrill of immersion in virtual environments also attracts millions to assume new identities in online virtual worlds. Now researchers at The University of Nottingham, SRI International in Silicon Valley, California, two Canadian universities — Simon Fraser and York — and online games developer Multiverse are to begin a new three-year international project examining online behaviour in virtual gaming environments.

The Virtual Environment Real User Study (Verus) will explore the relationships between the real-world characteristics of gamers and the individual activities and group dynamics of their avatars in online virtual worlds. Investigating how individuals interact within online environments will have many benefits. Researchers will interview and track the volunteers as they play online in virtual worlds such as Second Life and World of Warcraft, as well as in other virtual environments that have been specially designed for the project.

More information:


27 December 2009

Real-Time Virtual Worlds

A new digital system developed at the University of Illinois allows people in different locations to interact in real time in a shared virtual space. The tele-immersive environment captures, transmits, and displays three-dimensional movement in real time. Unlike the virtual reality people see in video games or in digitally animated films, these virtual environments record real-time actions. It’s a virtual environment that is the product of real-time imaging, not the result of programming 3D CAD models.

Participants do not need to be supplied with any equipment for imaging and 3-D reconstruction; at most they might hold some kind of controller, such as a Wii controller, to change the viewing angle of the data they see. Clusters of visible- and thermal-spectrum digital cameras and large LCD displays surround a defined space. Information is extracted from the digital images, rendered in 3-D in a virtual space, and transmitted via the Internet to the separate geographic sites. Participants at each site can see their own digital clones and their counterparts at the other sites on the LCD screens and can move their bodies in response to the images on the screen.

More information:


20 December 2009

Pompeii in Second Life

The virtual villa is a recreation of the Pompeian Court, a life-size replica of a house in Pompeii which was built inside the Crystal Palace. In 1936, a huge fire destroyed the Palace, which had been a feature of the south London landscape since 1854. Lost with the massive iron and glass superstructure were the displays inside, in particular a series of Fine Arts courts which used reconstruction to show the artistic and architectural achievements of past epochs, from Egypt to the Renaissance.

Amongst them was the Pompeian Court, a life-size replica decorated with paintings traced from the wall frescoes uncovered in the city’s ruins. The virtual model of the Court brings together a digitised collection of the paintings displayed in the Court as well as an archive of the guidebooks and press reviews which described it. Visitors can explore the house alone, join guided tours, meet other visitors, take part in learning activities, or even interact with virtual Victorian and Pompeian inhabitants.

More information:


19 December 2009

PlayStation 3: Crunching Numbers

When it comes to high-performance computing, Sony's PlayStation 3 is not all fun and games. Four years after Sony unveiled its gaming console to the world, some researchers and federal agencies are using PS3s for serious work. For the last year, the U.S. Immigration and Customs Enforcement agency's Cyber Crimes Center in Fairfax, Va., has used a bank of 40 interconnected PS3 consoles to decrypt passwords. It's working to add 40 more units. Through Stanford University's Folding@home project, almost 40,000 PS3s volunteered by their owners during idle time currently contribute to the study of protein folding. More than 880,000 PS3 consoles have participated in the project, researchers said. The U.S. Air Force Research Laboratory in Rome, N.Y., uses a cluster of 336 PS3s for research on urban surveillance and large image processing. Last month, the lab ordered 2,200 more units.

Since the PS3's unveiling in 2005, the console has been touted not only for its amped-up gaming capabilities but also for its ability to generate complex real-time graphics and calculations, thanks to its ground-breaking Cell processor, created by IBM in collaboration with Sony and Toshiba. What particularly caught the attention of researchers was the ability to install the Linux operating system on the PS3, which allows the gaming console to be transformed into a powerful home computer. That opened the door for researchers to use the PS3's power for projects and experiments that required high-performance computing.

The Cell processor, researchers said, is perfect for applications that need a heavy amount of number-crunching and can vastly outperform traditional CPUs. The processor, for example, can perform 100 billion operations per second, while a typical CPU can manage only 5 billion. The PlayStation 3's Cell processor allows video games to simulate physical reality: a character can wear clothing, and the clothing will flap in the wind. It turned out that, with a certain amount of work, it was possible to run scientific applications on the processor.

The real performance edge of the PS3 shows when the computing power of several consoles is joined together. While early experiments tried clustering several consoles, Stanford's Folding@home project was among the first to try something more ambitious. Since 1999, FAH has studied the way proteins fold and misfold in an effort to better understand diseases like Alzheimer's, Huntington's and Parkinson's. Because running simulations requires staggering amounts of computing power, the FAH team appeals to computer owners across the globe to help by leaving their computers on to perform calculations and simulations when they're not using them. The combined computing power coming from the network of volunteers was modest until FAH and Sony developed an application that would allow PS3 owners to contribute their idle consoles to the project.
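Using only the throughput figures quoted above (100 billion operations per second for the Cell versus 5 billion for a typical CPU of the day), a back-of-the-envelope sketch shows why clusters of consoles were attractive; the constants come straight from the article, and real clusters would of course lose some of this to communication overhead:

```python
# Rough throughput comparison using the figures quoted in the article:
# 100 billion ops/s per Cell processor vs. 5 billion ops/s for a
# typical CPU of the time. Overheads are deliberately ignored.

CELL_OPS_PER_SEC = 100e9   # per PS3 Cell processor (quoted figure)
CPU_OPS_PER_SEC = 5e9      # typical desktop CPU (quoted figure)

def cluster_throughput(num_consoles, ops_per_console=CELL_OPS_PER_SEC):
    """Aggregate raw throughput of a cluster, ignoring communication costs."""
    return num_consoles * ops_per_console

def cpu_equivalents(num_consoles):
    """How many typical CPUs would match the cluster's raw throughput."""
    return cluster_throughput(num_consoles) / CPU_OPS_PER_SEC

# The Air Force Research Laboratory cluster of 336 PS3s:
print(cpu_equivalents(336))  # 6720.0 -- roughly 6,700 typical CPUs
```

On these (idealised) numbers, even the 40-console password-cracking bank at the Cyber Crimes Center matches the raw throughput of around 800 contemporary desktop CPUs.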

More information:


16 December 2009

BCI and Gaming Workshop

On Wednesday 13th January 2010, the Serious Games Institute (SGI) is organising another workshop, titled ‘Brain Computer Interface in Gaming’. The workshop is aimed at graduates who want to work in the games industry. Attendees will find out how bio-feedback and brain-computer interface devices can be used in educational and health contexts.

Speakers for the Brain Computer Interface in Gaming session will include Prof. Sara de Freitas (SGI), Simon Bennett (Roll 7), Ian Glasscock (Games for Life), Prof. Kevin Warwick (Reading University) and Prof. Pamela Kato (University Medical Center, Utrecht, the Netherlands).

More information:


08 December 2009

Defence Security With Virtual Worlds

Advances in computerized modeling and prediction of group behavior, together with improvements in video game graphics, are making possible virtual worlds in which defense analysts can explore and predict the results of many different possible military and policy actions, say computer science researchers at the University of Maryland in a commentary published in the November 27 issue of the journal Science.

By interacting with a virtual world environment, defense analysts can understand the repercussions of their proposed recommendations for policy options or military actions. They can propose a policy option and walk skeptical commanders through a virtual world where the commander can literally ‘see’ how things might play out. This process gives the commander a view of the most likely strengths and weaknesses of any particular course of action. Computer scientists now know pretty much how to do this, and have created a ‘pretty good chunk’ of the computing theory and software required to build a virtual Afghanistan, Pakistan or another ‘world’.

Human analysts, with their real world knowledge and experience, will be essential partners in taking us the rest of the way in building these digital worlds and, then, in using them to predict courses of action most likely to build peace and security in Afghanistan and elsewhere. Researchers at the University of Maryland have developed a number of the computing pieces critical to building virtual worlds. These include stochastic opponent modeling agents (SOMA) -- artificial intelligence software that uses data about the past behavior of groups to create rules about the probability of a group taking various actions in different situations; ‘cultural islands’, which provide a virtual world representation of a real-world environment or terrain, populated with characters from that part of the world who behave in accordance with a behavioral model; and the forecasting ‘engines’ CONVEX and CAPE, which focus on predicting behavioral changes in groups and have been validated on historical data.
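The core idea behind rules of the kind SOMA derives -- the probability of a group taking an action in a given situation, estimated from its past behaviour -- can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration with invented data, not the actual SOMA formalism:

```python
from collections import Counter, defaultdict

# Simplified sketch of stochastic opponent modelling: from a log of
# (situation, action) observations for a group, estimate rules of the
# form P(action | situation) by frequency counting. The real SOMA
# system is far richer; all names and data here are illustrative.

def learn_rules(observations):
    """observations: iterable of (situation, action) pairs."""
    counts = defaultdict(Counter)
    for situation, action in observations:
        counts[situation][action] += 1
    rules = {}
    for situation, actions in counts.items():
        total = sum(actions.values())
        rules[situation] = {a: n / total for a, n in actions.items()}
    return rules

# Invented past-behaviour data for a fictitious group.
history = [
    ("funding_cut", "protest"), ("funding_cut", "protest"),
    ("funding_cut", "negotiate"), ("election_year", "negotiate"),
]
rules = learn_rules(history)
print(rules["funding_cut"]["protest"])  # 0.666... -> P(protest | funding_cut)
```

A rule like "given a funding cut, this group protests with probability 2/3" is exactly the kind of statement an analyst could then probe inside a virtual world.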

More information:


06 December 2009

Editable 3D Mash-Up Maps

Armchair explorers who soar over 3D cityscapes on their computer may be used to the idea of maps with an extra dimension. But they are now getting accurate enough to offer much more than a preview of your next holiday destination. Accurate, large-scale 3D maps could soon change the way we design, manage and relate to our urban environments. As part of a project at the Ordnance Survey (OS), the UK government's mapping agency, to demonstrate the potential of 3D mapping, the coastal resort of Bournemouth in southern England has probably become the best-mapped place on the planet. Lasers were fired at the town from the ground and from the air to capture the height of buildings, trees and other features, using a technique called Lidar. Adding information from aerial photos and traditional surveys produced a full-colour 3D map, built up from more than 700 million points. The map is accurate to 4 centimetres in x, y and z; by comparison, 3D structures in Google Earth are accurate to about 15 metres. OS is not the only organisation to be exploiting improvements in the hardware and software needed to capture and model cities in 3D. Detailed digital 2D maps, like those the OS maintains of the UK, already underpin the everyday activities of businesses and governments the world over.

They are annotated and overlaid with everything from the layout of electric cables to data on air pollution. Companies are now building large-scale 3D maps to be used in the same way. Now it's not just buildings, but floors within a building that could be annotated. The new generation of maps can capture details like mailboxes and lamp posts too small to appear in existing city-scale virtual maps. Infoterra, a firm based in Leicester, UK, supplied 3D data used in Google Earth, and will launch its own 3D city-mapping service, Skape, in January 2010. It also uses Lidar to capture the heights of buildings and other features, and uses aerial images taken from a low angle to provide surface detail at a spatial resolution as low as 4.5 centimetres. Competition between Google Earth and Microsoft's Virtual Earth to wow home users with 3D maps is partly responsible for the maturing of large-area 3D maps. But even as this technology goes pro, consumers may still have a role to play. Google's newly launched Building Maker allows any web user to translate an aerial photo in Google Earth into a 3D building. The results are less accurate than a Lidar-based map. But flying planes to get laser data is not cheap, so crowd-sourcing may be necessary outside commercial and urban areas. Future maps may still need help from enthusiasts more interested in eye candy than urban planning.

More information:


28 November 2009

Feeling the Way

For many people, it has become routine to go online to check out a map before traveling to a new place. But for blind people, Google Maps and other visual mapping applications are of little use. Now, a unique device developed at MIT could give the visually impaired the same kind of benefit that sighted people get from online maps. The BlindAid system, developed in MIT’s Touch Lab, allows blind people to ‘feel’ their way around a virtual model of a room or building, familiarizing themselves with it before going there. The director of the Touch Lab is working with the Carroll Center for the Blind in Newton, Mass., to develop and test the device. Preliminary results show that when blind people have the chance to preview a virtual model of a room, they have an easier time navigating their way around the actual room later on. That advantage could be invaluable for the visually impaired. One of the toughest challenges a visually impaired person faces is entering an unfamiliar environment with no human or dog to offer guidance.

The BlindAid system builds on a device called the Phantom, developed at MIT in the early 1990s and commercialized by SensAble Technologies. The Phantom consists of a robotic arm that the user grasps as if holding a stylus. The stylus can create the sensation of touch by exerting a small, precisely controlled force on the fingers of the user. The BlindAid stylus functions much like a blind person’s cane, allowing the user to feel virtual floors, walls, doors and other objects. The stylus is connected to a computer programmed with a three-dimensional map of the room. Whenever a virtual obstacle is encountered, the computer directs the stylus to produce a force against the user’s hand, mimicking the reaction force from a real obstacle. The team has tested the device with about 10 visually impaired subjects at the Carroll Center, a non-profit agency that offers education, training and rehabilitation programs to about 2,000 visually impaired people per year. To use such a system successfully, the visually impaired person must have a well-developed sense of space.
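The obstacle response described above -- push back on the hand when the virtual stylus enters a wall -- is classically modelled as a spring whose force grows with penetration depth. Here is a minimal one-dimensional sketch of that idea; the stiffness constant is a purely illustrative assumption, not a BlindAid or Phantom parameter:

```python
# Minimal sketch of virtual-wall haptic feedback: when the stylus
# penetrates an obstacle, push back with a force proportional to the
# penetration depth (a simple spring model). The actual BlindAid and
# Phantom control laws are more sophisticated; the stiffness value
# here is hypothetical.

STIFFNESS = 800.0  # N/m, illustrative virtual-wall stiffness

def reaction_force(stylus_pos, wall_pos):
    """1-D example: the wall occupies all positions >= wall_pos (metres).

    Returns the force (N) pushing the stylus back out of the wall,
    or 0.0 when the stylus is in free space.
    """
    penetration = stylus_pos - wall_pos
    if penetration <= 0.0:
        return 0.0                    # not touching the wall
    return -STIFFNESS * penetration   # force opposes penetration

print(reaction_force(0.002, 0.0))   # -1.6 N: 2 mm inside the wall
print(reaction_force(-0.01, 0.0))   # 0.0: in free space
```

A real device runs this update loop at around a kilohertz so the rendered wall feels solid rather than spongy.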

More information:


22 November 2009

Games Graduates Workshop

On Wednesday 9th December 2009, the Serious Games Institute (SGI) is organising another workshop, titled ‘Get ahead of the game’. The workshop is aimed at graduates who want to work in the games industry.

Join us at the Serious Games Institute for this workshop and discover the opportunities available to you within this exciting industry. Graduates will also get a chance to make their first steps towards securing a placement in a fun and innovative industry.

More information:


21 November 2009

Rendering Cloaked Objects

Scientists and curiosity seekers who want to know what a partially or completely cloaked object would look like in real life can now get their wish -- virtually. A team of researchers at the Karlsruhe Institute of Technology in Germany has created a new visualization tool that can render a room containing such an object, showing the visual effects of such a cloaking mechanism and its imperfections. To illustrate their new tool, the researchers have published an article in the latest issue of Optics Express, the Optical Society's (OSA) open-access journal, with a series of full-color images. These images show a museum nave with a large bump in the reflecting floor covered by an invisibility device known as the carpet cloak. They reveal that even as an invisibility cloak hides the effect of the bump, the cloak itself is apparent due to surface reflections and imperfections. The researchers call this the "ostrich effect" -- in reference to the bird's mythic penchant for partial invisibility. The software, which is not yet commercially available, is a visualization tool designed specifically to handle complex media, such as metamaterial optical cloaks. Metamaterials are man-made structured composite materials that exhibit optical properties not found in nature. By tailoring these optical properties, these media can guide light so that cloaking and other optical effects can be achieved. In 2006, scientists at Duke University demonstrated in the laboratory that an object made of metamaterials can be partially invisible to particular wavelengths of light (not visible light, but rather microwaves).

A few groups, including one at the University of California, Berkeley, have achieved a microscopically-sized carpet cloak. These and other studies have suggested that the Hollywood fantasy of invisibility may one day be reality. While such invisibility has been achieved so far in the laboratory, it is very limited. It works, but only for a narrow band of light wavelengths. Nobody has ever made an object invisible to the broad range of wavelengths our eyes can see, and doing so remains a challenge. Another challenge has been visualizing a cloaked object. It is very likely that any invisibility cloak would remain partly seen because of imperfections and optical effects. Up to now, nobody has been able to show what this would look like -- even on a computer. The problem is that metamaterials may have optical properties that vary over their length. Rendering a room with such an object in it requires building hundreds of thousands of distinct volume elements that each independently interact with the light in the room. The standard software that scientists and engineers use to simulate light in a room only allows for a few hundred volume elements, which is nowhere close to the complexity needed to handle many metamaterials such as the carpet cloak. So the researchers built the software needed to do just that. Wanting to demonstrate it, they rendered a virtual museum niche with three walls, a ceiling, and a floor. In the middle of the room, they placed the carpet cloak -- leading the observer to perceive a flat reflecting floor, thus cloaking the bump and any object hidden underneath it.

More information:


20 November 2009

Mobile Maps of Noise Pollution

Mobile phones could soon be used to fight noise pollution - an irony that won't be lost on those driven to distraction by mobile phones' ringtones. In a bid to make cities quieter, the European Union requires member states to create noise maps of their urban areas once every five years. Rather than deploying costly sensors all over a city, the maps are often created using computer models that predict how various sources of noise, such as airports and railway stations, affect the areas around them. Researchers at the Sony Computer Science Laboratory in Paris, France, say that those maps are not an accurate reflection of residents' exposure to noise. To get a more precise picture, the team has developed NoiseTube, a downloadable software app which uses people's smartphones to monitor noise pollution. The goal of this project was to turn the mobile phone into an environmental sensor. The app records any sound picked up by the phone's microphone, along with its GPS location.

Users can label the data with extra information, such as the source of the noise, before it is transmitted to NoiseTube's server. There the sample is tagged with the name of the street and city it was recorded in and converted into a format that can be used with Google Earth. Software on the server checks the data against weather information and rejects readings that might have been distorted by high winds, for instance. Locations that have been subjected to sustained levels of noise are labelled as dangerous. The data is then added to a file, which can be downloaded from the NoiseTube website and displayed using Google Earth. Currently the software works on only a handful of Sony Ericsson and Nokia smartphones, as it has to be calibrated by the researchers to work with the microphone on any given model. They are currently working on a method to calibrate microphones automatically.
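The server-side steps described above -- reject wind-distorted data, then flag locations with sustained high noise -- can be sketched as follows. The field names, the 70 dB danger threshold and the wind cut-off are illustrative assumptions, not NoiseTube's actual rules or API:

```python
# Hypothetical sketch of NoiseTube-style server processing: drop
# samples recorded in high wind (possible microphone distortion),
# then label a location dangerous if its average level over recent
# samples exceeds a threshold. Thresholds are assumptions.

DANGER_DB = 70.0     # assumed threshold for a "dangerous" location
MAX_WIND_MS = 10.0   # assumed wind speed above which data is rejected

def clean_samples(samples, wind_speed_ms):
    """Reject all samples recorded under high wind."""
    if wind_speed_ms > MAX_WIND_MS:
        return []
    return samples

def is_dangerous(db_levels):
    """Label a location dangerous if its mean level exceeds the threshold."""
    return sum(db_levels) / len(db_levels) > DANGER_DB

samples = [72.5, 68.0, 75.1]           # dB readings at one street location
kept = clean_samples(samples, wind_speed_ms=3.0)
print(is_dangerous(kept))  # True: mean of ~71.9 dB exceeds the threshold
```

In the real pipeline each kept sample would also carry its GPS fix and street tag before being written out in a Google Earth-compatible format.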

More information:


18 November 2009

Contact Lens Virtual Displays

A contact lens that harvests radio waves to power an LED is paving the way for a new kind of display. The lens is a prototype of a device that could display information beamed from a mobile device. Realising that display size is increasingly a constraint in mobile devices, researchers at the University of Washington, in Seattle, hit on the idea of projecting images into the eye from a contact lens. One of the limitations of current head-up displays is their limited field of view. A contact lens display can have a much wider field of view. Researchers hope to create images that effectively float in front of the user perhaps 50 cm to 1 m away. This involves embedding nanoscale and microscale electronic devices in substrates like paper or plastic. Fitting a contact lens with circuitry is challenging. The polymer cannot withstand the temperatures or chemicals used in large-scale microfabrication. So, some components – the power-harvesting circuitry and the micro light-emitting diode – had to be made separately, encased in a biocompatible material and then placed into crevices carved into the lens.

One obvious problem is powering such a device. The circuitry requires 330 microwatts but doesn't need a battery. Instead, a loop antenna picks up power beamed from a nearby radio source. The team has tested the lens by fitting it to a rabbit. Researchers mention that future versions will be able to harvest power from a user's cell phone, perhaps as it beams information to the lens. They will also have more pixels and an array of microlenses to focus the image so that it appears suspended in front of the wearer's eyes. Despite the limited space available, each component can be integrated into the lens without obscuring the wearer's view, the researchers claim. As to what kinds of images can be viewed on this screen, the possibilities seem endless. Examples include subtitles when conversing with a foreign-language speaker, directions in unfamiliar territory and captioned photographs. The lens could also serve as a head-up display for pilots or gamers. Other researchers in Christchurch, New Zealand, mentioned that this new technology could provide a compelling augmented reality experience.

More information:


16 November 2009

Creating 3D Models with a Webcam

Constructing virtual 3D models usually requires heavy and expensive equipment, or takes lengthy amounts of time. A group of researchers at the Department of Engineering, University of Cambridge, has created a program able to build 3D models of textured objects in real time, using only a standard computer and webcam. This makes 3D modelling accessible to everybody. During the last few years, many methods have been developed to build realistic 3D models of real objects, using various equipment: 2D/3D lasers (in the visible spectrum or other wavelengths), scanners, projectors, cameras, etc. This equipment is usually expensive, complicated or inconvenient to use, and the model is not built in real time. The data (for example laser measurements or photos) must first be acquired before going through the lengthy reconstruction process to form the model.

If the 3D reconstruction is unsatisfactory, the data must be acquired again. The method proposed by the researchers needs only a simple webcam. The object is moved about in front of the webcam and the software reconstructs the object ‘on-line’ while collecting live video. The system uses points detected on the object to estimate its structure from the motion of the camera or the object, and then computes the Delaunay tetrahedralisation of the points (the extension of the 2D Delaunay triangulation to 3D). The points are recorded in a mesh of tetrahedra, within which is embedded the surface mesh of the object. The software then tidies up the final reconstruction, using a probabilistic carving algorithm to remove the invalid tetrahedra and recover the surface mesh, and the object texture is applied to the 3D mesh to obtain a realistic model.
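The tetrahedralisation step is standard enough to demonstrate directly. The sketch below computes a 3D Delaunay tetrahedralisation of a point cloud with SciPy, using random points in place of the tracked surface points; the carving and texturing stages of the Cambridge system are omitted:

```python
import numpy as np
from scipy.spatial import Delaunay

# Minimal sketch of the meshing step: the 3D Delaunay
# tetrahedralisation of a point cloud. Random points stand in for the
# points the Cambridge system tracks on the object; the subsequent
# probabilistic carving and texturing steps are not shown.

rng = np.random.default_rng(0)
points = rng.random((50, 3))     # 50 tracked 3D points (synthetic)

tetra = Delaunay(points)         # in 3D, the simplices are tetrahedra
print(tetra.simplices.shape[1])  # 4 -- each simplex has 4 vertices
```

Each row of `tetra.simplices` indexes the four corner points of one tetrahedron; carving then amounts to deleting tetrahedra judged to lie outside the object, leaving the surface mesh behind.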

More information:


09 November 2009

Art History in 3D

If you don't have the time to travel to Florence, you can still see Michelangelo's statue of David on the Internet, revolving in true-to-life 3D around its own axis. This is a preview of what scientists are developing in the European joint project 3D-COFORM. The project aims to digitize the heritage in museums and provide a virtual archive for works of art from all over the world. Vases, ancient spears and even complete temples will be reproduced three-dimensionally. In a few years' time museum visitors will be able to revolve Roman amphorae through 360 degrees on screen, or take off on a virtual flight around a temple. The virtual collection will be especially useful to researchers seeking comparable works by the same artist, or related anthropological artifacts otherwise forgotten in some remote archive. The digital archive will be intelligent, searching for and linking objects stored in its database. For instance, a search for Greek vases from the sixth century BC with at least two handles will retrieve corresponding objects from collections all over the world.

3D documentation provides a major advance over the current printed catalogs containing pictures of objects, or written descriptions. A set of 3D data presents the object from all angles, providing information of value to conservators, such as the condition of the surface or a particular color. As the statue of David shows, impressive 3D animations of art objects already exist. Researchers are generating 3D models and processing them for the digital archive. They are developing calculation specifications to derive the actual object from the measured data. The software must be able to identify specific structures, such as the arms on a statue or columns on a building, as well as recognizing recurring patterns on vases. A virtual presentation also needs to include a true visual image -- a picture of a temple would not be realistic if the shadows cast by its columns were not properly depicted. The research group in Darmstadt is therefore combining various techniques to simulate light effects.

More information:


04 November 2009

Muscle-Bound Computer Interface

It's a good time to be communicating with computers. No longer are we constrained by the mouse and keyboard--touch screens and gesture-based controllers are becoming increasingly common. A startup called Emotiv Systems even sells a cap that reads brain activity, allowing the wearer to control a computer game with her thoughts. Now, researchers at Microsoft, the University of Washington in Seattle, and the University of Toronto in Canada have come up with another way to interact with computers: a muscle-controlled interface that allows for hands-free, gestural interaction. A band of electrodes attaches to a person's forearm and reads electrical activity from different arm muscles. These signals are then correlated to specific hand gestures, such as touching a finger and thumb together, or gripping an object tighter than normal. The researchers envision using the technology to change songs on an MP3 player while running, or to play a game like Guitar Hero without the usual plastic controller. Muscle-based computer interaction isn't new. In fact, the muscles near an amputated or missing limb are sometimes used to control mechanical prosthetics. But, while researchers have explored muscle-computer interaction for non-disabled users before, the approach has had limited practicality.

Inferring gestures reliably from muscle movement is difficult, so such interfaces have often been restricted to sensing a limited range of gestures or movements. The new muscle-sensing project is going after healthy consumers who want richer input modalities. Researchers had to come up with a system that was inexpensive and unobtrusive and that reliably sensed a range of gestures. The group's most recent interface, presented at the User Interface Software and Technology conference earlier this month in Victoria, British Columbia, uses six electromyography (EMG) sensors and two ground electrodes arranged in a ring around a person's upper right forearm for sensing finger movement, and two sensors on the upper left forearm for recognizing hand squeezes. While these sensors are wired and individually placed, their orientation isn't exact--that is, specific muscles aren't targeted. This means that the results should be similar for a thin EMG armband that an untrained person could slip on without assistance. The research builds on previous work that involved a more expensive EMG system to sense finger gestures when a hand is laid on a flat surface. The sensors cannot accurately interpret muscle activity straight away. Software must be trained to associate the electrical signals with different gestures. The researchers used standard machine-learning algorithms, which improve their accuracy over time.
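The training step described above -- learning to associate electrical signals with gestures -- can be illustrated with a toy classifier. This sketch maps per-sensor signal amplitudes to gesture labels using nearest-centroid classification; the feature values are synthetic and the real system uses richer EMG features and stronger learners:

```python
import math

# Toy sketch of gesture learning from muscle signals: average the
# labelled training windows into one centroid per gesture, then
# classify a new window by its nearest centroid. All data is
# synthetic and illustrative; real EMG pipelines are richer.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labelled_windows):
    """labelled_windows: {gesture_name: [feature_vector, ...]}"""
    return {g: centroid(vs) for g, vs in labelled_windows.items()}

def classify(model, features):
    """Return the gesture whose centroid is closest to the features."""
    return min(model, key=lambda g: math.dist(model[g], features))

# Synthetic per-sensor signal amplitudes for two gestures.
training = {
    "pinch":   [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]],
    "squeeze": [[0.2, 0.9, 0.8], [0.1, 0.8, 0.9]],
}
model = train(training)
print(classify(model, [0.85, 0.15, 0.2]))  # pinch
```

A per-user calibration session would populate `training` with real sensor windows, which is why the software must be trained before the armband can interpret muscle activity.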

More information:


29 October 2009

VR Reduces Tobacco Addiction

Smokers who crushed computer-simulated cigarettes as part of a psychosocial treatment program in a virtual reality environment had significantly reduced nicotine dependence and higher rates of tobacco abstinence than smokers participating in the same program who grasped a computer-simulated ball, according to a study described in the current issue of CyberPsychology and Behavior. Researchers from the GRAP Occupational Psychology Clinic, and the University of Quebec in Gatineau, randomly assigned 91 smokers enrolled in a 12-week anti-smoking support program to one of two treatment groups. In a computer-generated virtual reality environment, one group simulated crushing virtual cigarettes, while the other group grasped virtual balls during 4 weekly sessions.

The findings demonstrate a statistically significant reduction in nicotine addiction among the smokers in the cigarette-crushing group versus those in the ball-grasping group. Also, at week 12 of the program, the smoking abstinence rate was significantly higher for the cigarette-crushing group (15%) compared to the ball-grasping group (2%). Other notable findings include the following: smokers who crushed virtual cigarettes tended to stay in the treatment program longer (average time to drop-out > 8 weeks) than the ball-grasping group (< 6 weeks). At the 6-month follow-up, 39% of the cigarette crushers reported not smoking during the previous week, compared to 20% of the ball graspers.
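For readers curious what "statistically significant" means for the week-12 abstinence rates quoted above (15% versus 2%), a rough reanalysis with a two-proportion z-test is sketched below. The per-group sizes are not reported here, so splitting the 91 participants as 46/45 is an assumption made purely for illustration:

```python
import math
from statistics import NormalDist

# Rough reanalysis of the quoted abstinence rates (15% vs 2%) with a
# two-proportion z-test. The 46/45 group split is an ASSUMPTION; the
# article reports only the total of 91 participants.

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for the difference of two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

z, p = two_proportion_z(0.15, 46, 0.02, 45)
print(round(p, 3))  # roughly 0.027 under the assumed group split
```

Under these assumptions the difference clears the conventional 0.05 threshold, consistent with the significance the study reports.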

More information:


28 October 2009

Immersive Bird’s-Eye View Exhibit

A new virtual environment developed by Texas A&M University researchers, which allows humans to see and hear some of the extreme ranges of vision and hearing that animals have, could help reinvent the way museums teach about the natural world. Such immersive exhibits would allow visitors, for example, the chance to experience birds’ ultraviolet vision or whales’ ultrasonic hearing. Participants at the international Siggraph conference had the opportunity to experience the program, titled “I’m Not There,” by donning 3D glasses and using a Wii controller to navigate through the exhibit.

According to the researchers, the Viz Lab is about the synthesis of art and science, so artistic elements were inserted into the scenes to make them more realistic and interesting. The researchers take ultra- or infrasonic sound and ultraviolet and infrared light and scale them down so humans can sense them. It is still not the way animals experience these signals, but it gives a sense of what they see and hear. A similar show is planned at Agnes Scott College in Atlanta this winter. Meanwhile, the researchers who developed the system are also working on an LCD version.
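
As a rough illustration of the scaling idea, ultrasonic sound can be dropped into the audible range by repeatedly halving its frequency (octave shifts), which preserves the interval structure of the original signal. The 120 kHz value below is an illustrative whale-click component, not a figure from the exhibit:

```python
AUDIBLE_MAX_HZ = 20_000  # upper limit of typical human hearing

def scale_to_audible(freq_hz):
    """Halve an ultrasonic frequency (drop it by octaves) until it falls
    inside the human audible range; octave shifts keep relative pitch
    relationships between components of the original sound intact."""
    octaves = 0
    while freq_hz > AUDIBLE_MAX_HZ:
        freq_hz /= 2
        octaves += 1
    return freq_hz, octaves

# A hypothetical 120 kHz click component becomes audible after 3 octave drops.
shifted, octaves = scale_to_audible(120_000)
```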

More information:


26 October 2009

Virtual World And Reality Meet

Virtual reality is entering a new era of touch-sensitive tiles and immersive animations. Researchers in Barcelona have created a unique space at the cutting edge of digital immersion. They built the experience induction machine as part of the PRESENCCIA project to understand how humans can exist in physical and virtual environments. It may look like a fun diversion, but the experience induction machine is the result of some fundamental scientific research. One of the key challenges was to create a credible virtual environment; to do this, the researchers had to understand how our brains construct our vision of the world. They want to move beyond the simple interface of keyboard, screen and mouse. In Austria, meanwhile, researchers are controlling a virtual reality system using brain-computer interfaces.

These types of systems could one day help people with disabilities. Graz researchers are also developing similar tools. For the brain-computer interface, electrodes are attached to the head to measure brain currents. Once the sensors are in place, the user concentrates on the icon they want as it lights up. The icons flash in a random sequence; the brain reacts to the icon the user is attending to, the computer recognises that response, and in this way external devices can be controlled. Each time the icon flashes the brain reacts, and the computer monitors that reaction and then carries out the command. When we interact with a virtual world on a human scale, on some level we believe it to be real. The researchers believe that these systems are the future of human-computer interaction, and an important step away from current technology.
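
A minimal sketch of this flash-and-average scheme: each icon is flashed in random order, a response is recorded after every flash, and the icon with the strongest mean response is selected. The response values are simulated here--real EEG processing is far more involved:

```python
import random

def detect_target(icons, flash_log):
    """Average the measured brain response over every flash of each icon
    and pick the icon that evoked the strongest mean response."""
    totals = {i: 0.0 for i in icons}
    counts = {i: 0 for i in icons}
    for icon, response in flash_log:
        totals[icon] += response
        counts[icon] += 1
    return max(icons, key=lambda i: totals[i] / counts[i])

# Simulated session: the user attends to "lamp"; its flashes evoke a
# larger (P300-like) response on top of background noise.
random.seed(1)
icons = ["lamp", "tv", "door", "radio"]
log = []
for _ in range(20):                      # 20 random flash rounds
    for icon in random.sample(icons, len(icons)):
        response = random.gauss(1.0, 0.3)
        if icon == "lamp":
            response += 1.5              # attended icon adds an extra component
        log.append((icon, response))
```

Averaging over many flashes is what makes the approach robust: any single response is buried in noise, but the attended icon's extra component survives the mean.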

More information:



20 October 2009

Radio Waves See Through Walls

University of Utah engineers showed that a wireless network of radio transmitters can track people moving behind solid walls. The system could help police, firefighters and others nab intruders, and rescue hostages, fire victims and elderly people who fall in their homes. It also might help retail marketing and border control. By showing the locations of people within a building during hostage situations, fires or other emergencies, radio tomography can help law enforcement and emergency responders to know where they should focus their attention. Their method uses radio tomographic imaging (RTI), which can see, locate and track moving people or objects in an area surrounded by inexpensive radio transceivers that send and receive signals. People don't need to wear radio-transmitting ID tags. The study involved placing a wireless network of 28 inexpensive radio transceivers - called nodes - around a square-shaped portion of the atrium and a similar part of the lawn. In the atrium, each side of the square was almost 14 feet long and had eight nodes spaced 2 feet apart. On the lawn, the square was about 21 feet on each side and nodes were 3 feet apart.

The transceivers were placed on 4-foot-tall stands made of plastic pipe so they would make measurements at human torso level. Radio signal strengths between all nodes were measured as a person walked in each area. Processed radio signal strength data were displayed on a computer screen, producing a bird's-eye-view, blob-like image of the person. A second study detailed a test of an improved method that allows ‘tracking through walls’. That study has been placed on arXiv.org, an online archive for preprints of scientific papers. The study details how variations in radio signal strength within a wireless network of 34 nodes allowed tracking of moving people behind a brick wall. The wireless system used in the experiments was not a Wi-Fi network like those that link home computers, printers and other devices. The researchers used a ZigBee network - the kind often used by wireless home thermostats and other home or factory automation systems.
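
The imaging idea can be sketched in toy form: each link whose line of sight passes near the person reports extra attenuation, and accumulating that attenuation into grid cells close to each shadowed link produces the blob-like image. Everything below (grid size, the 1-unit distance threshold, the dB values) is illustrative, not from the Utah system:

```python
def point_to_segment(px, py, ax, ay, bx, by):
    """Distance from point (px, py) to the link segment A-B."""
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))        # clamp to the segment endpoints
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def rti_image(links, size=10):
    """Accumulate each link's attenuation into grid cells near its line.
    Cells crossed by many shadowed links light up where the person stands."""
    image = [[0.0] * size for _ in range(size)]
    for (ax, ay, bx, by), attenuation in links:
        for gy in range(size):
            for gx in range(size):
                if point_to_segment(gx + 0.5, gy + 0.5, ax, ay, bx, by) < 1.0:
                    image[gy][gx] += attenuation
    return image

# Toy network: two shadowed links cross at roughly (5, 5); a third link is clear.
links = [((0, 5, 10, 5), 6.0),   # horizontal link, attenuated
         ((5, 0, 5, 10), 6.0),   # vertical link, attenuated
         ((0, 0, 10, 0), 0.0)]   # unobstructed link
image = rti_image(links)
```

The brightest cells are where the attenuated links intersect--the toy equivalent of the blob the researchers displayed. The real system inverts the full set of link measurements as a tomography problem rather than simply summing them.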

More information:



18 October 2009

Merging Video with Maps

A novel navigation system under development at Microsoft aims to tweak users' visual memory with carefully chosen video clips of a route. Developed with researchers from the University of Konstanz in Germany, the software creates video using 360-degree panoramic images of the street that are strung together. Such images have already been gathered by several different mapping companies for many roads around the world. The navigation system, called Videomap, adjusts the speed of the video and the picture to highlight key areas along the route. Videomap also provides written directions and a map with a highlighted route. But unlike existing software, such as Google Maps or MapQuest, the system also allows users to watch a video of their drive. The video slows down to highlight turns or speeds up to minimize the total length of the clip. Memorable landmarks are also highlighted, though at present the researchers have to select them from the video manually.

Algorithms also automatically adjust the video to incorporate something researchers call ‘turn anticipation’. Before a right-hand turn, for example, the video will slow down and focus on images on the right-hand side of the street. This smoothes out the video and draws the driver's attention to the turn. Still images of the street at each turn are also embedded in the map and the written directions. The system was tested on 20 users, using images of streets in Austria. The participants were given driving directions using the standard map and text, as well as thumbnails for each intersection. Each participant was allotted five minutes to study the information. The drivers were then shown a video simulation of the drive and asked which way the car should turn at various points along the way. They were then asked to do the same thing for a different route, this time using Videomap directions.
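
The speed scheduling above can be sketched as a simple function of distance to the next turn. The multipliers and the 100 m anticipation window below are assumptions for illustration, not values from the Videomap paper:

```python
def playback_speed(dist_to_turn_m, cruise=8.0, slow=1.0, window_m=100.0):
    """Speed multiplier for the route video: fast-forward on straight
    stretches, easing down linearly to real time as a turn approaches."""
    if dist_to_turn_m >= window_m:
        return cruise
    return slow + (cruise - slow) * (dist_to_turn_m / window_m)

# Speed profile while approaching a turn: 400 m, 100 m, 50 m, and at the turn.
profile = [playback_speed(d) for d in (400, 100, 50, 0)]
```

This captures the two behaviours the article describes: minimising total clip length on straight stretches while slowing down to give the driver time to register each turn.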

More information:

11 October 2009

IEEE VS-GAMES '10 Conference

The 2nd IEEE International Conference on Games and Virtual Worlds for Serious Applications (VS-GAMES 2010) will be held on 25-26 March 2010 in Braga, Portugal. The use of virtual worlds and games for serious applications has emerged as a dominating force in training, education and simulation, thanks to the focus on creating compelling interactive environments at reduced cost by adopting commodity technologies commonly associated with the entertainment industries. The field is informed by theories, methods, applications and the state-of-the-art in a number of areas, based on technological principles and innovation, advances in games design, pedagogic methodologies and the convergence of these fields.

While the serious games community has made it possible to bring together such diverse fields, further academic and industrial collaboration is needed in further defining, formalising and applying the standards and methodologies for the future. VS-GAMES 2010 is the primary conference dedicated to serious games and virtual worlds, presenting state-of-the-art methods and technologies in the multidisciplinary fields outlined above. The aim of this international conference is to encourage an exchange of knowledge and experience in this cross-disciplinary area and its application to all aspects of the use of games and virtual worlds in serious applications. The best technical full papers will be published in a special issue of Elsevier's Computers & Graphics.

More information:


07 October 2009

Mobile AR Issues

The momentum building behind AR has been fuelled by the growing sophistication of cellphones. With the right software, devices like the iPhone can now overlay reviews of local services or navigation information onto scenes from the phone's camera. A good example of what is possible today is the AR features in an app released by Yelp, a website that collects shop and restaurant reviews for cities in North America, Ireland and the UK. The company's app for the iPhone 3GS, released last month, uses the phone's camera to display images of nearby businesses, and then lists links to relevant reviews. Yelp's app relies on the GPS receiver and compass that are built into the handset. Together, these sensors can identify the phone's location and orientation, allowing such apps to call up corresponding information. Other AR apps include virtual Post-it notes that you can leave in specific places, and a navigation system that marks out a route to your destination. Meanwhile, companies are working on games in which characters will appear to move around real environments. However, when the iPhone was tested in downtown San Francisco, the error reported by the GPS sensor was as great as 70 metres, and the compass leapt through 180 degrees as the phone moved past a metal sculpture. Yelp says the app's AR features are a ‘very early iteration’ that the company will improve as it gets feedback.

Some researchers doubt whether high-accuracy GPS systems will ever be small or efficient enough to incorporate into mobile phones. Others suggest that to achieve the sub-metre positioning accuracy that really good AR demands, mobile devices will have to analyse scenes, not just record images. One way to achieve this is to combine laser scans of a city with conventional images to create a three-dimensional computer map. Each building in the map would be represented by a block of the right size and shape, with the camera image of the real building mapped onto it. The phone could use GPS and compass data to get a rough location and orientation reading, then compare the camera image with the model to get a more precise location fix. Various interested companies are building 3D city maps, including Google and Microsoft, but it is doubtful that such maps will achieve truly global coverage. The models will also inevitably lag behind reality, as buildings are knocked down or new ones appear. Such shortcomings have inspired other researchers to consider a ‘crowd-sourced’ solution to speed up data collection. In this approach, software would pull photographs of a location from the internet and stitch the pictures together to create a 3D image of that place. Such images could also have GPS information attached, and even though the coordinates might be slightly inaccurate, combining many photographs of the same place would fine-tune the location information embedded in the resulting composite image.
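
The fine-tuning step at the end can be illustrated very simply: averaging many independent, noisy geotags of the same place yields a much tighter position estimate than any single tag. The coordinates below are hypothetical geotags scattered around a landmark:

```python
def refine_position(geotags):
    """Fuse the noisy GPS tags of many photos of the same place; the mean
    of independent errors is far tighter than any single tag."""
    lats = [lat for lat, lon in geotags]
    lons = [lon for lat, lon in geotags]
    return sum(lats) / len(lats), sum(lons) / len(lons)

# Hypothetical geotags from three photos of a landmark near (48.8584, 2.2945).
tags = [(48.8586, 2.2942), (48.8581, 2.2949), (48.8585, 2.2944)]
lat, lon = refine_position(tags)
```

A production system would weight tags by estimated accuracy and reject outliers, but the principle--errors cancel when combined--is the one the researchers are counting on.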

More information:


06 October 2009

Layar Mobile AR Browser

Layar is a free application on mobile phones which shows what is around the user by displaying real time digital information on top of reality through the camera of the mobile phone. Layar is a global application, available for the T-Mobile G1, HTC Magic and other Android phones in all Android Markets. It also comes pre-installed on the Samsung Galaxy in the Netherlands. Layar is used by holding the phone in front of the user’s perspective (like a camera) and information is displayed on top of the camera display view. For all points of interest which are displayed on the screen, information is shown at the bottom of the screen.

On top of the camera image (displaying reality) Layar adds content layers. Layers are the equivalent of web pages in normal browsers. Just like there are thousands of websites there will be thousands of layers. One can easily switch between layers by selecting another via the menu button, pressing the logobar or by swiping your finger across the screen. Layar combines GPS, camera, and compass to identify surroundings and overlay information on screen, in real time.
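
The GPS-plus-compass overlay test can be sketched as: compute the bearing from the phone to each point of interest, then draw the POI only if that bearing falls within the camera's horizontal field of view around the compass heading. This is a generic formulation of the technique, not Layar's actual code, and the 60-degree field of view is an assumption:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing (degrees) from the phone to a point of interest."""
    d_lon = math.radians(lon2 - lon1)
    lat1, lat2 = math.radians(lat1), math.radians(lat2)
    x = math.sin(d_lon) * math.cos(lat2)
    y = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(d_lon))
    return math.degrees(math.atan2(x, y)) % 360

def in_view(heading, bearing, fov=60.0):
    """Is the POI inside the camera's horizontal field of view?"""
    diff = (bearing - heading + 180) % 360 - 180   # signed angle difference
    return abs(diff) <= fov / 2

# Phone at (52.37, 4.89) facing due east; a POI directly east should be drawn.
b = bearing_deg(52.37, 4.89, 52.37, 4.90)
```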

More information:


30 September 2009

VAST 2009 Article

Last Friday, Eike Anderson - a colleague from the Interactive Worlds Applied Research Group (IWARG) - and I presented a paper titled ‘Serious Games in Cultural Heritage’ at the 10th International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST09), in the State of the Art Reports session. The conference was held in Malta from 22 to 25 September and is one of the most significant conferences in the field. The paper argued that although the widespread use of gaming for leisure purposes has been well documented, the use of games to support cultural heritage purposes, such as historical teaching and learning, or for enhancing museum visits, has been less well considered. The state-of-the-art in serious game technology is identical to the state-of-the-art in entertainment games technology.

As a result the field of serious heritage games concerns itself with recent advances in computer games, real-time computer graphics, virtual and augmented reality and artificial intelligence. On the other hand, the main strengths of serious gaming applications may be generalised as being in the areas of communication, visual expression of information, collaboration mechanisms, interactivity and entertainment. In this report, we will focus on the state-of-the-art with respect to the theories, methods and technologies used in serious heritage games. We provide an overview of existing literature of relevance to the domain, discuss the strengths and weaknesses of the described methods and point out unsolved problems and challenges. In addition, several case studies illustrating the application of methods and technologies used in cultural heritage are presented.

A draft version of the paper can be downloaded from here.

29 September 2009

Monitoring Pedestrian Crossings

A team of researchers from the University of Castilla-La Mancha (UCLM) has developed an intelligent surveillance system able to detect aberrant behaviour by drivers and people on foot crossing pedestrian crossings and in other urban settings. The study, published this month in the journal Expert Systems with Applications, could be used to penalise incorrect behaviour. The study focused on a pedestrian crossing in a two-way street, regulated by a traffic light. The authors defined ‘normal’ behaviour of cars and pedestrians in this setting, in which they can move when the lights are green, but must stop and not cross the safety lines when the lights are red. The system, working in a similar way to a human monitor, can detect whether the vehicles and pedestrians are moving ‘normally’. If at any point any of the movements related to these ‘objects’ is not ‘normal’ (driving through a red light, for example), the programme recognizes that the behaviour differs from the normal framework established.

The supporting architecture underlying the model is a multi-agent artificial intelligence system (made up of software agents that carry out the various tasks involved in monitoring the environment). It has been designed according to standards recommended by the FIPA (Foundation for Intelligent Physical Agents), an international committee working to promote the adoption and diffusion of this kind of technology. In order to prove the effectiveness of the model, its creators have developed a monitoring tool (OCULUS), which analyses images taken from a real setting. In order to do this, the team members placed a video camera close to their place of work, the Higher School of Information Technology in Ciudad Real. The researchers are continuing their work to fine tune the system, and believe it will be possible to use it in future in other situations, for example in analysing behaviour within indoor environments (museums, for example), or in detecting overcrowding.
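
The ‘normal framework’ can be caricatured as a hand-written rule set. The real system's agents monitor far richer behaviour, but the sketch below shows the kind of decision being made; the light is assumed to be the one governing the object in question:

```python
def classify_event(light, moving, past_safety_line):
    """Flag behaviour that breaks the crossing rules described in the
    article: objects may move on green but must stop behind the safety
    line when their light is red."""
    if light == "green":
        return "normal"
    # Red light: moving, or creeping past the safety line, is abnormal.
    if moving or past_safety_line:
        return "abnormal"
    return "normal"

events = [
    classify_event("green", True, False),   # car driving through on green
    classify_event("red", True, True),      # car running a red light
    classify_event("red", False, False),    # pedestrian waiting correctly
]
```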

More information:


27 September 2009

Augmented Reality Markup Language

The nascent field of Mobile Augmented Reality (AR) is on the verge of becoming mainstream. In recent months an explosion in the development of practical AR solutions has given consumers numerous AR applications to experience and ‘augment’ their daily lives. With this surge in AR development the potential arises for the multiplication of proprietary methods for aggregating and displaying geographic annotation and location-specific data. Mobilizy proposes creating an augmented reality mark-up language specification based on the OpenGIS® KML Encoding Standard (OGC KML) with extensions. The impetus for proposing the creation of an open Augmented Reality Markup Language (ARML) specification to The AR Consortium is to help establish and shape a long-term, sustainable framework for displaying geographic annotation and location-specific data within Augmented Reality browsers.

In addition to proposing the ARML specification to The AR Consortium, Mobilizy will be presenting an overview of the ARML specification at the Emerging Technologies Conference @MIT, Boston, and at the Over The Air event held at Imperial College in London. The purpose of establishing an open ARML specification is to ensure that all data created for augmentation of the physical world can be universally accessed and viewed on any augmented reality browser. ARML allows individuals and organizations to easily create and style their own AR content (e.g. points of interest) without advanced knowledge of AR, APIs or tools. The ARML specification is analogous to HTML for the Web, which is used for creating web pages and web sites. Mobilizy has taken an exciting giant step forward in proposing one of the first specifications for the commercial augmented reality sector.
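
Since ARML extends OGC KML, an AR browser would consume it much like any KML file. The snippet below is a hypothetical, illustrative document - the element names in the ar: namespace are invented, not taken from the actual ARML specification - parsed with Python's standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical ARML-style document: a KML Placemark carrying an extra
# vendor namespace for AR data (the ar: names here are illustrative only).
DOC = """<kml xmlns="http://www.opengis.net/kml/2.2"
             xmlns:ar="http://example.org/arml">
  <Placemark>
    <name>Cafe Mozart</name>
    <ar:provider>demo-layer</ar:provider>
    <Point><coordinates>13.0433,47.8095,0</coordinates></Point>
  </Placemark>
</kml>"""

NS = {"kml": "http://www.opengis.net/kml/2.2", "ar": "http://example.org/arml"}

def read_pois(xml_text):
    """Extract (name, provider, lon, lat) tuples from each Placemark."""
    root = ET.fromstring(xml_text)
    pois = []
    for pm in root.findall(".//kml:Placemark", NS):
        name = pm.findtext("kml:name", namespaces=NS)
        provider = pm.findtext("ar:provider", namespaces=NS)
        lon, lat, _alt = pm.findtext(".//kml:coordinates",
                                     namespaces=NS).split(",")
        pois.append((name, provider, float(lon), float(lat)))
    return pois

pois = read_pois(DOC)
```

Note that KML orders coordinates longitude-first; an AR browser that parses them lat-first would place every annotation in the wrong hemisphere.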

More information:



19 September 2009

Digitization of Ancient Rome

The ancient city of Rome was not built in a day. It took nearly a decade to build the Colosseum, and almost a century to construct St. Peter's Basilica. But now the city, including these landmarks, can be digitized in just a matter of hours. A new computer algorithm developed at the University of Washington uses hundreds of thousands of tourist photos to automatically reconstruct an entire city in about a day. The tool is the most recent in a series developed at the UW to harness the increasingly large digital photo collections available on photo-sharing Web sites. The digital Rome was built from 150,000 tourist photos tagged with the word ‘Rome’ or ‘Roma’ that were downloaded from the popular photo-sharing Web site, Flickr. Computers analyzed each image and in 21 hours combined them to create a 3D digital model. With this model a viewer can fly around Rome's landmarks, from the Trevi Fountain to the Pantheon to the inside of the Sistine Chapel. Earlier versions of the UW photo-stitching technology are known as Photo Tourism. That technology was licensed in 2006 to Microsoft, which now offers it as a free tool called Photosynth. With Photosynth and Photo Tourism it is possible to reconstruct individual landmarks.

In addition to Rome, the team recreated the Croatian coastal city of Dubrovnik, processing 60,000 images in less than 23 hours using a cluster of 350 computers, and Venice, Italy, processing 250,000 images in 65 hours using a cluster of 500 computers. Many historians see Venice as a candidate for digital preservation before water does more damage to the city, the researchers said. Previous versions of the Photo Tourism software matched each photo to every other photo in the set. But as the number of photos increases the number of matches explodes, increasing with the square of the number of photos. A set of 250,000 images would take at least a year for 500 computers to process. A million photos would take more than a decade. The newly developed code works more than a hundred times faster than the previous version. It first establishes likely matches and then concentrates its effort on those pairs. The code also uses parallel processing techniques, allowing it to run simultaneously on many computers, or even on remote servers connected through the Internet. This technique could create online maps that offer viewers a virtual-reality experience. The software could build cities for video games automatically, instead of doing so by hand. It also might be used in architecture for digital preservation of cities, or integrated with online maps. The research was supported by the National Science Foundation, the Office of Naval Research and its Spawar lab, Microsoft Research, and Google.
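
The arithmetic behind that speed-up is easy to reproduce: exhaustive matching grows with the square of the collection size, while matching each image only against a short list of likely candidates grows linearly. The 40-candidate figure below is an assumption for illustration:

```python
def all_pairs(n):
    """Image pairs under exhaustive matching: grows as n squared."""
    return n * (n - 1) // 2

def pruned_pairs(n, candidates_per_image=40):
    """Pairs examined if each image is matched only against a short list
    of likely candidates (e.g. found via fast image-similarity search)."""
    return n * candidates_per_image

# For the 250,000-image Venice set, pruning removes over 99.9% of the work.
exhaustive = all_pairs(250_000)
pruned = pruned_pairs(250_000)
```

For 250,000 photos, exhaustive matching means over 31 billion pair comparisons; the pruned scheme needs about 10 million, which is consistent with the "more than a hundred times faster" claim.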

More information:


13 September 2009

AR Visual Time Machine

A ruined temple, ancient frescos and even a long-dead king have been brought to life by a ‘visual time machine’ developed by European researchers. The Palace of Venaria near Turin, Italy, and Winchester Castle in the United Kingdom have already benefited from the technology, which combines augmented reality (AR) content with location awareness on mobile devices to give visitors to historic and cultural sites a deeper, richer and more enjoyable experience. Other places of interest are also set for a virtual renaissance in the near future with a commercial version of the system being developed to run on smart phones. Users of the system can look at a historic site and, by taking a photo or viewing it through the camera on their mobile device, be able to access much more information about it. They are even able to visualise, in real time, how it looked at different stages in history. The AR system is one component of a comprehensive mobile information platform for tourists developed in the EU-funded iTacitus project, which also created location-based services and smart itinerary-generating software to help users get the most out of any trip.

Visitors to historic cities provide the iTacitus system with their personal preferences – a love of opera or an interest in Roman history, for example – and the platform automatically suggests places to visit and informs them of events currently taking place. The smart itinerary application ensures that tourists get the most out of each day, dynamically helping them schedule visits and directing them between sites. Once at their destination, be it an archaeological site, museum or famous city street, the AR component helps bring the cultural and historic significance to life by downloading suitable AR content from a central server. At the Palace of Venaria, a UNESCO World Heritage site, the iTacitus system allowed users to see how frescos on the walls of the Sale Diana once appeared, and to superimpose a long-gone temple onto pictures of the ruins in the colourful gardens on their mobile phone. In Winchester, the system showed visitors the court inside the castle’s Great Hall and even offered an introduction by a virtual King Alfred.

More information:


10 September 2009

Virtual Maps For The Blind

The blind and visually impaired often rely on others to provide cues and information on navigating through their environments. The problem with this method is that it doesn't give them the tools to venture out on their own, says Dr. Orly Lahav of Tel Aviv University's School of Education and Porter School for Environmental Studies. To give navigational ‘sight’ to the blind, researchers from Tel Aviv University have invented a new software tool to help the blind navigate through unfamiliar places. It is connected to an existing joystick, a 3D haptic device that interfaces with the user through the sense of touch. People can feel tension beneath their fingertips as a physical sensation through the joystick as they navigate around a virtual environment which they cannot see, only feel: the joystick stiffens when the user meets a virtual wall or barrier. The software can also be programmed to emit sounds - a cappuccino machine firing up in a virtual café, or phones ringing when the explorer walks by a reception desk. Exploring 3D virtual worlds based on maps of real-world environments, the blind are able to ‘feel out’ streets, sidewalks and hallways with the joystick as they move the cursor, like a white cane, across a computer screen that they will never see. The new solution gives them the control, confidence and ability to explore new streets, making unknown spaces familiar before they go out alone.

In other words, it allows people who can't see, to make mental maps in their mind. The software takes physical information from our world and digitizes it for transfer to a computer, with which the user interacts using a mechanical device. The hope is that the blind will be able to explore the virtual environment of a new neighborhood in the comfort of their homes before venturing out into the real world. This tool lets the blind ‘touch’ and ‘hear’ virtual objects and deepens their sense of space, distance and perspective. They can ‘feel’ intersections, buildings, paths, and obstacles with the joystick, and even navigate inside a shopping mall or a museum like the Louvre in a virtual environment before they go out to explore on their own. The tool transmits textures to the fingers and can distinguish among surfaces like tiled floors, asphalt, sidewalks and grass. In theory, any unknown spaces can be virtually pre-explored. The territory just needs to be mapped first - and with existing applications like GIS. The tool, called the BlindAid, was piloted to users at the Carroll Center for the Blind, a rehabilitation center in Newton, Massachusetts.
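
The stiffening the article describes is commonly rendered with a spring model in haptics: no force until the cursor penetrates the virtual wall, then a restoring force proportional to penetration depth. A one-dimensional sketch - the stiffness constant is illustrative, and this is a generic technique, not BlindAid's published implementation:

```python
def wall_force(cursor_pos, wall_pos, stiffness=200.0):
    """Spring-model resistance for the haptic joystick: zero force in free
    space, then force growing linearly with penetration into the wall."""
    penetration = cursor_pos - wall_pos
    if penetration <= 0:
        return 0.0                       # cursor still in free space
    return stiffness * penetration       # Hooke's law: F = k * x

# Force as the cursor approaches and then pushes into a wall at x = 1.0.
forces = [wall_force(x, wall_pos=1.0) for x in (0.5, 1.0, 1.02, 1.05)]
```

The same mechanism, with different stiffness and friction parameters per surface, is one way a device can distinguish tiled floor from grass under the user's fingers.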

More information:


05 September 2009

VR and Interactive 3D Learning

These are not industry professionals. They are the students of tomorrow using interactive 3D technology to become fully immersed in the virtual learning environment. In this era of 21st-century teaching tools, the Kentucky Community & Technical College System (KCTCS) is leading the new wave of institutions that fuse interactive 3D models with hands-on simulations to provide multiple opportunities to experiment without risk and enhance learning for the future workforce. Traditionally, academic institutions have relied on tools such as blackboard outlines, physical demonstrations and videos to facilitate learning. But through computers and projectors, 3D technology allows users to see a person, place or thing as it would appear in real life. This opens the door to a virtual world of possibilities in the classroom, where students can learn about science, anatomy, geography, architecture and astronomy by interacting with the content rather than reading about it in a textbook.

Although KCTCS leadership had been looking to integrate the 3D technologies into the classroom for the past seven years, the push really came in the wake of the coal mining tragedies in 2006. That's when KCTCS launched its first virtual project for the Kentucky Coal Academy to show advantages of simulation-based training. A simulation-based training application was developed that takes miners through daily inspection, has them go through parts and demonstrates how the breathing process works in addition to the actual donning process. Such innovative units of instruction can be viewed on a laptop, while others use 3D stereographic projection technology, which allows learning objects to pop out in the middle of the room. For some projects, students enter a space called a CAVE, which has screens on the walls that project a real environment of the respective field such as a hospital room, for instance.

More information:


01 September 2009

Virtual 3D Lab Stimulates Learning

Students at a Baltimore County high school this fall will explore the area surrounding Mount St. Helens in a vehicle that can morph from an aircraft to a car to a boat to learn about how the environment has changed since the volcano’s 1980 eruption. But they’ll do it all without ever leaving their Chesapeake High School classroom--they will be using a 3D Virtual Learning Environment developed by the Johns Hopkins University Applied Physics Laboratory (APL) with the university’s Center for Technology Education. Researchers are deploying the environment, which was modeled after a state-of-the-art, 3D visualization facility at APL that was used for projects by the Department of Defense and NASA. The Virtual Learning Environment is the first of its kind in the nation. According to the project team, there is not a lot of research showing that it directly improves student achievement--they have a hunch that it does--but they do know that it improves student involvement, and teacher involvement as well.

Initial results showed that when students have interest in something, they are more willing and able to learn--and gaming is something that students are interested in. People can learn anything, but they have to be interested in it. There are people who can recite sports statistics for the past 10 years, because it’s something that they’re interested in. The team will work to develop other environments, and hopes that eventually students will be able to create their own. The Virtual Learning Environment includes 10 high-definition, 72-inch TV monitors, arranged in two five-screen semicircles that allow students to interact with what they see on screen using a custom-designed digital switch and touch-panel controller. In an adjoining lab, 30 workstations, each outfitted with three interconnected monitors, will display the same environments, allowing lessons to be translated and understood on a team or individual basis.

More information:


30 August 2009

Gaming Takes On Augmented Reality

Augmented reality - the ability to overlay digital information on the real world - is increasingly finding its way into different aspects of our lives. Mobile phone applications are already in use to find the nearest restaurants, shops and underground stations. And the technology is also starting to enter the world of gaming. Developers are now exploring its potential for a new genre of entertainment, and to create applications that seamlessly integrate our real and virtual identities. The gaming world has been toying with the idea of using augmented reality to enhance the user's experience. Virus Killer 360 by UK firm Acrossair uses a mobile phone's GPS and compass to turn real images of the world into the game board. This immersive 360-degree game shows the user surrounded by viruses when moving the handset - the aim is to kill the spores before they multiply. The possibilities of augmented reality are explored afresh in the Eye of Judgement. The PlayStation 3 game uses a camera and a real set of cards to bring monsters to "life" to do battle. Gamers move the device around over a 2D map to recreate it as a 3D gaming environment. As with Nintendo's Wii, the player's physical actions affect the game, except that this game offers 360-degree freedom of movement. Handsets and handheld consoles are not powerful enough to do this yet, but graphics specialists and researchers believe the tech is only one to two years away.

We could soon be using augmented reality to tell us more about the people around us. Users in a Swedish trial set their profiles on a mobile phone, deciding what elements of their online self they want to share, including photos, interests, and Facebook updates. For instance, someone giving a presentation at a meeting could choose to share their name and the slides for others to see. Others in the room could point their phone at the person to download the information directly to their handset. But for this mash-up of social networking and augmented reality to work, face-recognition software will have to be improved. The development of this technology could be sped up if the app were limited to the contacts in a user's mobile phone. The ultimate goal of augmented reality is for information simply to appear as people go about their daily tasks. For instance, a camera worn around the neck would read a book title, get reviews from Amazon, and project the results back onto the book. Researchers at the Massachusetts Institute of Technology are exploring the possibilities of object recognition. One day, they believe, it may be possible to take a photo simply by making a rectangle shape with one's fingers.

More information:


26 August 2009

HCI 2009 Article

Last month, a co-authored paper titled ‘Assessing the Usability of a Brain-Computer Interface (BCI) that Detects Attention Levels in an Assessment Exercise’ was presented by a colleague of mine at the 13th International Conference on Human-Computer Interaction in San Diego, California, USA. The paper presented the results of a usability evaluation of NeuroSky’s MindBuilder-EM (MB). Until recently, most Brain-Computer Interfaces (BCIs) were designed for clinical and research purposes, partly due to their size and complexity. However, a new generation of consumer-oriented BCIs has appeared for the video game industry. The MB, a headset with a single electrode, is based on electroencephalogram (EEG) readings, capturing faint electrical signals generated by neural activity.

The electrical signals across the electrode are measured to determine levels of attention and then translated into binary data. The paper presented the results of an evaluation assessing the usability of the MB, using a model of attention to fuse attention signals with user-generated data in a Second Life assessment exercise. The results suggest that the MB provides accurate attention readings, since there is a positive correlation between measured and self-reported attention levels. They also point to some usability and technical problems with its operation. Future research is outlined, including the definition of a standardized reading methodology and an algorithm to smooth out the natural fluctuation of users’ attention levels when used as inputs.
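The correlation check described above is the standard Pearson's r between two series. The sketch below illustrates the kind of computation involved; the sample values are invented for illustration and are not the paper's data.

```python
# Sketch: Pearson correlation between headset-measured attention and
# self-reported attention. Sample values are invented, not the paper's data.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

measured      = [42, 55, 61, 70, 80]   # headset attention readings (0-100)
self_reported = [40, 50, 65, 72, 78]   # questionnaire scores (0-100)
print(round(pearson_r(measured, self_reported), 3))
```

A value close to +1 indicates a strong positive correlation, which is the evidence the paper cites for the MB's readings being meaningful.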

A draft version of the paper can be downloaded from here.

24 August 2009

Modified 3D HDTV LCD Screens

For the first time, a team of researchers at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego, has designed a 9-panel, 3D visualization display from HDTV LCD flat-screens developed by JVC. The technology, dubbed "NexCAVE," was inspired by Calit2's StarCAVE virtual reality environment and was designed and developed by Calit2 researchers. Although the StarCAVE's unique pentagon shape and 360-degree views make it possible for groups of scientists to venture into worlds as small as nanoparticles and as big as the cosmos, its expensive projection system requires constant maintenance — an obstacle that researchers Tom DeFanti and Greg Dawe were determined to overcome. They developed the NexCAVE technology at the behest of Saudi Arabia's King Abdullah University of Science and Technology (KAUST), which established a special partnership with UC San Diego last year to collaborate on world-class visualization and virtual-reality research and training activities. The KAUST campus includes a Geometric Modeling and Scientific Visualization Research Center featuring a 21-panel NexCAVE and several other new visualization displays developed at Calit2. Classes at the brand-new, state-of-the-art, 36-million-square-meter campus start Sept. 5. When paired with polarized stereoscopic glasses, the NexCAVE's modular, micropolarized panels and related software will make it possible for a broad range of UCSD and KAUST scientists — from geologists and oceanographers to archaeologists and astronomers — to visualize massive datasets in three dimensions, at unprecedented speeds and at a level of detail impossible to obtain on a myopic desktop display.

The NexCAVE delivers a faithful, deep 3D experience with strong color saturation, contrast and stereo separation. The JVC panels' xpol technology circularly polarizes successive lines of the screen clockwise and anticlockwise, and matching glasses deliver the clockwise lines to one eye and the anticlockwise lines to the other; this is how the data appears in three dimensions. Since these HDTVs are very bright, 3D data in motion can be viewed even with the room lights on. The NexCAVE's data resolution is also superb, close to human visual acuity (20/20 vision). The 9-panel, 3-column prototype the team developed for Calit2's VirtuLab has a 6,000x1,500-pixel resolution, while the 21-panel, 7-column version being built for KAUST boasts a 15,000x1,500-pixel resolution. The NexCAVE is also considerably cheaper than the StarCAVE: the 9-panel version cost under $100,000 to construct, whereas the StarCAVE is valued at $1 million. One-third of that cost comes from the StarCAVE's projectors, which burn through $15,000 in bulbs per year. Every time a projector needs to be relamped, the research team must readjust the color balance and alignment, a long, involved process. Since the NexCAVE requires no projectors, those costs and alignment issues are eliminated. The NexCAVE's tracker (the device used to manipulate data) is also far less expensive at $5,000, compared to the StarCAVE's $75,000 tracker, although its range is more limited. The NexCAVE's specially designed COVISE software (developed at Germany's University of Stuttgart) combines the latest developments in real-time graphics and PC hardware to let users transcend the capabilities of the machine itself. The NexCAVE runs on gaming PCs with high-end Nvidia graphics hardware, and will be connected via 10-gigabit/second networks so that researchers at KAUST can collaborate remotely with UCSD colleagues.
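The xpol line-interleaving scheme can be pictured in software: a displayed frame effectively alternates rows from the left-eye and right-eye images, and each eye's polarizing filter passes only its own rows. The real panels do this optically; the sketch below (function name and dummy images invented) just builds such an interleaved frame to show the structure.

```python
# Sketch: row-interleaved stereo frame, as on an xpol-style micropolarized
# panel. Even rows carry one eye's image, odd rows the other's.
def interleave_stereo(left, right):
    """left, right: equal-sized images as lists of pixel rows.
    Even rows come from the left-eye image (clockwise-polarized lines),
    odd rows from the right-eye image (anticlockwise lines)."""
    assert len(left) == len(right)
    return [left[y] if y % 2 == 0 else right[y] for y in range(len(left))]

left_img = [[0] * 6 for _ in range(4)]     # dummy left-eye image (black)
right_img = [[255] * 6 for _ in range(4)]  # dummy right-eye image (white)
for row in interleave_stereo(left_img, right_img):
    print(row)
```

Halving the vertical resolution per eye this way is the trade-off that lets a single bright LCD panel show passive-glasses 3D without projectors.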

More information:




20 August 2009

Serious Virtual Worlds '09 Conference

The Serious Virtual Worlds 2009 (SVW09) conference, titled ‘Real Value for Public and Private Organisations’, will be hosted by the globally renowned Serious Games Institute at Coventry University and jointly run by Ambient Performance. It is also supported by the Digital and Creative Technologies Network, and aims to show delegates a whole new world of business.

Large and small organisations are using virtual worlds to meet, work and simulate working situations. SVW09 focuses on business applications of Virtual Worlds and will highlight a variety of case studies demonstrating the economic and ecological benefits of using these virtual spaces. Case studies will feature organisations already working in Virtual Worlds, including BP, Afiniti, the Highways Agency, StormFjord and Schools for the Future.

More information:


18 August 2009

Unraveling Ancient Documents

Computer science and humanities departments have joined forces at Ben-Gurion University in Beersheba to decipher historical Hebrew documents, a large number of which have been overwritten with Arabic texts. The unique algorithm used to determine the wording was developed by BGU computer scientists. The documents are searched electronically, letter by letter, for similarities in handwriting, which help determine the date and author of the texts. The documents being deciphered at BGU are degraded texts from sources such as the Cairo Geniza, the Al-Aksa manuscript library in Jerusalem, and the Al-Azar manuscript library in Cairo. Altogether, the corpus consists of 100,000 medieval Hebrew codices and fragments, representing the book production of the last six centuries of the Middle Ages. The purpose of the project is to classify the handwritten documents and determine their authorship. One problem is that many of the original Hebrew texts found in the Cairo Geniza have been scratched off and the parchment reused to write Arabic texts.

Although the texts are in Hebrew, deciphering what is written is difficult because the historical documents have degraded over time. The foreground and background lettering are now hard to separate: much of the ink is smudged, which intensifies the background coloring, and ink from the reverse side of the document adds blotches to the lettering. To solve the problem, the algorithm overlays the text with a dark grey color, then classifies lighter pixels as background space and darker pixels as outlining the original Hebrew lettering. Two separate academic disciplines are driving this project forward. First, linguistic specialists seek a deeper appreciation of the origins of the Hebrew language. Second, Jewish philosophers are interested in studying ancient forms of prayer thought to be contained in the texts. With the new algorithm, researchers hope to create a catalogue of all the texts and piece together the ancient prayers and other documents, including those citing Jewish law.
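The pixel-classification step described above is, at its simplest, a thresholding operation: anything lighter than a dark-grey cutoff is background, anything darker is lettering. The BGU algorithm is certainly more sophisticated than this; the sketch below (with an invented cutoff and a toy image patch) only illustrates the principle.

```python
# Sketch: classify pixels as lettering (1) or background (0) using a
# dark-grey threshold. Intensities run 0 (black ink) to 255 (white).
def binarize(gray_image, dark_grey=90):
    """gray_image: list of rows of 0-255 intensities.
    Returns 1 where a pixel is dark enough to be lettering, else 0."""
    return [[1 if pixel <= dark_grey else 0 for pixel in row]
            for row in gray_image]

# A toy 3x5 patch: faded ink (60-80) over smudged parchment (120-200).
patch = [
    [200, 180, 60, 175, 190],
    [160, 70, 65, 80, 150],
    [190, 165, 75, 170, 200],
]
print(binarize(patch))  # the 1s trace the stroke running through the patch
```

On real degraded parchment a single global threshold fails, which is why separating smudges and bleed-through from genuine lettering is the hard part of the project.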

More information:


17 August 2009

Smarter GPS Using Linux

Sick of having your GPS tell you to turn the wrong way up a one-way street or lead you to a dead end? Fear not: Linux-based technology developed at NICTA is on its way to help make personal navigation systems more accurate. AutoMap, developed by National ICT Australia (NICTA), uses machine vision techniques to detect and classify geometric shapes from video footage. These shapes include things like road signs and company logos: the type of fixtures that change frequently in a neighborhood and make it difficult for digital map makers to keep their products up to date. Currently, to keep on top of this, mapping companies need someone to physically drive up and down each street in a van with five or six cameras fixed in all directions, with a driver and a co-driver making annotations. They then take this footage back to the office, where teams of workers review it frame by frame and record where all the signs are. AutoMap provides an intelligent alternative that can detect signs from video footage automatically, without this laborious manual review. The system uses some of the technology developed as part of an earlier smart cars project.

Although the product is now ready for commercial deployment and discussions are underway with major mapping companies, research on the project will continue. The team is looking at placing this technology inside a small camera and fitting it to taxis, fleet vehicles and garbage trucks going about their business. These vehicles traverse the whole road network on a regular basis, so they could automatically detect points of interest and send the information back to base, where a complete and constantly updating map emerges over time. The research team will also develop methods to recognise three-dimensional objects such as park benches and speed cameras. This research and technology is almost entirely Linux-based; the team also uses an Intel-based UMPC (an ASUS R50A). NICTA predicts that the digital mapping market will expand significantly as companies like Google, Microsoft and Yahoo continue to develop and release location-based services. Whilst these companies currently purchase some mapping information from digital map producers, it is expected they will quickly shift to developing and maintaining their own databases.
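One small step in the sign-classification pipeline described above can be sketched simply: once a candidate contour has been extracted from a video frame, the number of corners already gives a first guess at the sign category. The shape-to-sign mapping below is an illustrative assumption, not AutoMap's actual rules, and real systems would also use colour and learned classifiers.

```python
# Sketch: a first-pass guess at a road-sign category from the corner count
# of a detected contour. The mapping is illustrative only.
def classify_sign(corner_count):
    """Map a contour's corner count to a plausible sign category."""
    shapes = {
        3: "warning (triangle)",
        4: "information (rectangle)",
        8: "stop (octagon)",
    }
    return shapes.get(corner_count, "unknown shape")

print(classify_sign(8))   # -> "stop (octagon)"
print(classify_sign(5))   # -> "unknown shape"
```

Chaining many such cheap geometric cues is what lets a system flag sign candidates in video automatically, leaving only the ambiguous cases for a human to check.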

More information:


11 August 2009

Games Solve Complex Problems

A new computer game prototype combines work and play to help solve a fundamental problem underlying many computer hardware design tasks. The online logic puzzle is called FunSAT, and it could help integrated circuit designers select and arrange transistors and their connections on silicon microchips, among other applications. Designing chip architecture for the best performance and smallest size is an exceedingly difficult task that is outsourced to computers these days. But computers simply flip through possible arrangements in their search; they lack the human capacities for intuition and visual pattern recognition that could yield a better, or even optimal, design. That's where FunSAT comes in. Developed by University of Michigan computer science researchers, FunSAT is designed to harness humans' abilities to strategize, visualize and understand complex systems. A single-player prototype implemented in Java already exists, and the researchers are working on extending it into a multi-player game, which would allow more complicated problems to be solved. By solving challenging problems on the FunSAT board, players can contribute to the design of complex computer systems, but you don't have to be a computer scientist to play. The game is a sort of puzzle that might appeal to Sudoku fans. The board consists of rows and columns of green, red and gray bubbles in various sizes.

Around the perimeter are buttons that players can turn yellow or blue with the click of a mouse. The buttons' colors determine the color of bubbles on the board, and the goal of the game is to use the perimeter buttons to toggle all the bubbles green. Right-clicking on a bubble reveals which buttons control its color, giving the player a hint of what to do next; the larger a bubble is, the more buttons control it. The game may be challenging because each button affects many bubbles at the same time and in different ways. A button that turns several bubbles green will also turn others from green to red or gray. The game actually encodes so-called satisfiability problems: classic and highly complicated mathematical questions that involve selecting the best arrangement of options. The solver must assign each of a set of variables to true or false so as to fulfill all the constraints of the problem. In the game, the bubbles represent constraints; they become green when they are satisfied. The perimeter buttons represent the variables, which are assigned true or false when players click the mouse to make them yellow (true) or blue (false). Once the puzzle is solved and all the bubbles are green, a computer scientist can simply read the color of each button to recover the solution of that particular problem. Satisfiability problems arise not only in complex chip design, but in many other areas, such as packing a backpack with as many items as possible or searching for the shortest postal route to deliver mail in a neighborhood.
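The button/bubble mapping above is exactly a Boolean satisfiability (SAT) instance: buttons are variables, bubbles are clauses, and a bubble "turns green" when at least one of its literals is satisfied. The brute-force sketch below (an illustrative solver, not FunSAT's code) shows the underlying problem on a tiny invented instance; real SAT instances are far too large for exhaustive search, which is why human intuition is valuable.

```python
# Sketch: brute-force SAT solver for tiny instances. A literal i means
# variable i is True, -i means variable i is False (variables from 1).
from itertools import product

def solve_sat(num_vars, clauses):
    """Return a satisfying {variable: bool} assignment, or None."""
    for bits in product([False, True], repeat=num_vars):
        assign = {i + 1: bits[i] for i in range(num_vars)}
        # Every clause (bubble) needs at least one satisfied literal.
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assign          # all bubbles green
    return None                    # unsatisfiable

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(solve_sat(3, [[1, -2], [2, 3], [-1, -3]]))  # prints a satisfying assignment
```

Reading off the final button colors in FunSAT corresponds to reading off this returned true/false assignment.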

More information: