30 December 2008

Mobile College Application

Professors usually don’t ask everyone in a class of 300 students to shout out answers to a question at the same time. But a new application for the iPhone lets a roomful of students beam in answers that can be quietly displayed on a screen to allow instant group feedback. The application was developed by programmers at Abilene Christian University, which handed out free iPhones and iPod Touch devices to all first-year students this year. The university was the first to do such a large-scale iPhone handout, and officials there have been experimenting with ways to use the gadgets in the classroom. The application lets professors set up instant polls in various formats. They can ask true-or-false questions or multiple-choice questions, and they can allow for free-form responses. The software can quickly sort and display the answers so that a professor can view responses privately or share them with the class by projecting them on a screen. For open-ended questions, the software can display answers in “cloud” format, showing frequently repeated answers in large fonts and less frequent ones in smaller fonts.
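
The “cloud” display comes down to tallying the free-form answers and scaling each distinct answer’s font by how often it was given. A minimal sketch of that tallying step is below; the function name and font-size range are illustrative and are not taken from the university’s NANOtools code.

```python
from collections import Counter

def answer_cloud(answers, min_pt=12, max_pt=48):
    """Map each distinct free-form answer to a font size scaled by its frequency."""
    counts = Counter(a.strip().lower() for a in answers if a.strip())
    if not counts:
        return {}
    lo, hi = min(counts.values()), max(counts.values())
    if lo == hi:                      # every answer equally common: use the largest font
        return {ans: max_pt for ans in counts}
    return {ans: min_pt + (n - lo) * (max_pt - min_pt) // (hi - lo)
            for ans, n in counts.items()}

# Three students answered "photosynthesis", one answered "osmosis":
print(answer_cloud(["Photosynthesis", "photosynthesis", "photosynthesis ", "osmosis"]))
```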

The idea for such a system is far from new. Several companies sell classroom response systems, often called “clickers,” that typically involve small wireless gadgets that look like television remote controls. Most clickers allow students to answer true-or-false or multiple-choice questions (but do not allow open-ended feedback), and many colleges have experimented with the devices, especially in large lecture courses. There are several drawbacks to many clicker systems, however. First of all, every student in a course must have one of the devices, so in courses that use clickers, students are often required to buy them. Then, students have to remember to bring the gadgets to class, which doesn’t always happen. Using cellphones instead of dedicated clicker devices solves those issues. Because students rely on their phones for all kinds of communication, they usually keep the devices on hand. The university calls its iPhone software NANOtools — NANO stands for No Advance Notice, emphasizing that professors can spring a poll on a class without preparing it ahead of time. Some companies that make clickers, such as TurningPoint, are starting to sell similar software to turn smartphones into student feedback systems as well.

More information:

http://chronicle.com/wiredcampus/article/3518/mobile-college-app-turning-iphones-into-super-clickers-for-classroom-feedback

26 December 2008

Virtual Battle of Sexes

Ask most people to picture a typical player of a massively multiplayer game such as World of Warcraft and they will imagine an overweight, solitary male. But this stereotype has been challenged by a study investigating gender differences among gamers. It found that the most hard-core players are female, that gamers are healthier than average, and that game playing is an increasingly social activity. Despite gaming being seen as a male activity, female players now make up about 40% of the gaming population. The study looked at gender differences in more than 2,400 gamers playing EverQuest II. The participants, who were recruited directly out of the game, completed a web-based questionnaire about their gaming habits and lifestyles. They received an in-game item as a reward for taking part - a condition which has led to some questioning of the results.

In addition, Sony Online Entertainment, EverQuest's creator, gave the US researchers access to information about the players' in-game behaviours. The results showed that, although more of the players were male, it was the female players who were the most dedicated, spending more time each day playing the game than their male counterparts. The pressure to conform to traditional gender roles might mean that some women are put off activities seen as ‘masculine’, whereas women who reject traditional gender roles might be more likely to play MMOs such as EverQuest II. Perhaps in support of this, the survey revealed an unusually high level of bisexuality among the women who took part in the study - over five times higher than the general population.

More information:

http://news.bbc.co.uk/2/hi/technology/7796482.stm

22 December 2008

Cognitive Computing

Suppose you want to build a computer that operates like the brain of a mammal. How hard could it be? After all, there are supercomputers that can decode the human genome, play chess and calculate prime numbers out to 13 million digits. But a University of Wisconsin-Madison research psychiatrist says the goal of building a computer as quick and flexible as a small mammalian brain is more daunting than it sounds. Scientists from Columbia University and IBM will work on the software for the thinking computer, while nanotechnology and supercomputing experts from Cornell, Stanford and the University of California-Merced will create the hardware. The idea is to create a computer capable of sorting through multiple streams of changing data, to look for patterns and make logical decisions. There's another requirement: The finished cognitive computer should be as small as the brain of a small mammal and use as little power as a 100-watt light bulb. It's a major challenge. But it's what our brains do every day. Our brains can do it, so we have proof that it is possible. What our brains are good at is being flexible, learning from experience and adapting to different situations. While the project will take its inspiration from the brain's architecture and function, it isn't possible or even desirable to recreate the entire structure of the brain down to the level of the individual synapse.

A lot of the work will be to determine what kinds of neurons are crucial and which ones we can do without. It all comes down to an understanding of what is necessary for teaching an artificial brain to reason and learn from experience. Value systems or reward systems are important aspects. Learning is crucial because the system needs to learn from experience just as we do. So a system modeled after the neurons that release neuromodulators could be important. For example, neurons in the brain stem flood the brain with a neurotransmitter during times of sudden stress, signaling the "fight-or-flight" response. Every neuron in the brain knows that something has changed. Thus, a cat landing on a hot stovetop not only jumps off immediately, it learns not to do that again. The ideal artificial brain will need to be plastic, meaning it is capable of changing as it learns from experience. The design will likely convey information using electrical impulses modeled on the spiking neurons found in mammal brains. And advances in nanotechnology should allow a small artificial brain to contain as many artificial neurons as a small mammal brain. It won't be an easy task, according to a veteran of earlier efforts to create cognitive computers. Even the brains of the smallest mammals are quite impressive when you consider what tasks they perform with a relatively small volume and energy input.
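
Two of the ingredients mentioned here, spiking neurons and a reward or neuromodulator signal that gates plasticity, can be pictured with a toy leaky integrate-and-fire unit whose weight update is scaled by a global modulator. This is a didactic sketch only, not the architecture the project will build; all constants are invented.

```python
import random

def simulate(steps=200, threshold=1.0, leak=0.95, lr=0.05):
    """Toy leaky integrate-and-fire neuron with reward-modulated weight updates."""
    w = 0.5           # synaptic weight
    v = 0.0           # membrane potential
    for t in range(steps):
        x = 1.0 if random.random() < 0.3 else 0.0   # presynaptic spike arrives or not
        v = leak * v + w * x                        # integrate the input, let charge leak
        spiked = v >= threshold
        if spiked:
            v = 0.0                                 # reset after firing
        # A global neuromodulator (stress or reward broadcast to every neuron) scales plasticity:
        neuromodulator = 2.0 if t == 150 else 1.0
        if spiked and x > 0:
            w += lr * neuromodulator                # Hebbian-style potentiation, gated by the modulator
    return w

print("final weight:", simulate())
```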

More information:

http://www.sciencedaily.com/releases/2008/12/081221215537.htm

21 December 2008

Modeling Brain Blasts

Traumatic brain injury (TBI) is often called the signature injury of the war in Iraq. Medical experts have yet to determine exactly what causes the condition, but the violent waves of air pressure emitted by an improvised explosive device (IED) or a rocket-propelled grenade are most likely to blame. These pressure waves travel close to the speed of sound and can rattle the brain's soft tissue, causing permanent, yet invisible, damage. In an effort to better understand how the waves shake soldiers' brains, researchers at the Naval Research Laboratory (NRL), in Washington, DC, developed a computer simulation that models the motion of a propagating blast wave using data gathered from laboratory experiments with sensor-studded mannequins. According to the researchers, the simulation gives the full 3D flow field, velocities, and pressure distributions surrounding the head and the helmet. Initial testing has already revealed some compelling results. The NRL researchers are collaborating with a team of researchers at Allen-Vanguard Technologies, in Canada.

The group placed Marine Corps ballistic helmets on mannequins equipped with pressure sensors and accelerometers, and these modified mannequins were placed at various orientations and distances from controlled explosions. The researchers collected data from more than 40 different blast scenarios and integrated the data into their computer simulation. The simulation uses a set of well-established flow-modeling algorithms for simulating reacting and compressible flow to create a 3D simulation of the pressure wave that would be experienced by a real soldier. The researchers note that these algorithms have been used in the past, but are being combined in a new way to make software for this particular problem. The calculations are done in two steps. First, the algorithms are used to model the initial blast to get a realistic blast profile from the explosion. This includes the chemistry of the explosion, yielding the strength of the pressure waves and the velocity field. Second, as the wave approaches the mannequin, this information is fed into a compressible flow simulation that produces a more complex 3D simulation around the head-helmet geometry. This combined approach makes the calculations more realistic and efficient.
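
The two-step calculation can be thought of as a pipeline: compute a free-field blast profile first, then hand that time history to a 3D compressible-flow solver as a boundary condition around the head-helmet mesh. The sketch below uses the classic Friedlander waveform as a stand-in for step one; the parameter values are invented and the solver hand-off is only a placeholder, since the NRL codes are not public.

```python
import math

def friedlander_overpressure(t, peak_kpa=500.0, t_plus=0.002, b=1.0):
    """Free-field blast overpressure (kPa) at time t (s) after shock arrival,
    using the Friedlander waveform as a simple stand-in for step one."""
    if t < 0.0:
        return 0.0
    return peak_kpa * (1.0 - t / t_plus) * math.exp(-b * t / t_plus)

def inflow_boundary(times, **kw):
    """Step two would feed this time history into a 3D compressible-flow solver
    as the inflow boundary condition around the head-helmet mesh (placeholder)."""
    return [(t, friedlander_overpressure(t, **kw)) for t in times]

profile = inflow_boundary([i * 1e-4 for i in range(30)])
print(max(p for _, p in profile), "kPa peak pressure at the boundary")
```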

More information:

http://www.technologyreview.com/computing/21712/?a=f

18 December 2008

Virtual Cognitive Telerehabilitation

The Guttmann Institute, the Biomedical Engineering Research Center (CREB) and the Department of Software of the Universitat Politècnica de Catalunya (UPC), as well as other science and technology partners, are working on a telerehabilitation program for treating people with cognitive deficits caused by acquired brain damage. A three-dimensional space has been designed to help these people improve their functional capacity in daily life activities. The PREVIRNEC platform enables therapists to personalize treatment plans: intensive rehabilitation can be programmed automatically for the required length of time, the results monitored and the level of difficulty adjusted according to patients’ performance in previous sessions. The aim of this project is to use software to meet the treatment needs of patients with acquired brain damage. The software promotes the rehabilitation of affected cognitive functions by representing everyday, real life situations in a virtual world. The software that has been designed has two components. It offers patients a three-dimensional IT platform on which to carry out their cognitive rehabilitation exercises. In addition, it provides a web interface for the therapist, through which different exercises can be programmed for each individual, their performance monitored, their progress assessed and their rehabilitation treatment plan adapted, if required.
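
The between-session adaptation described here reduces to a simple rule: raise the exercise level when the previous session's success rate was high, lower it when performance dropped. A sketch of such a rule follows; the thresholds and level range are illustrative assumptions, not the values used in PREVIRNEC.

```python
def next_difficulty(current_level, success_rate, min_level=1, max_level=10):
    """Pick the next session's difficulty from last session's success rate (0.0-1.0)."""
    if success_rate >= 0.8:          # the patient found the task easy: raise the bar
        current_level += 1
    elif success_rate < 0.5:         # the patient struggled: ease off
        current_level -= 1
    return max(min_level, min(max_level, current_level))

print(next_difficulty(4, 0.85))  # -> 5
print(next_difficulty(4, 0.30))  # -> 3
```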

Currently, the main limitation of conventional cognitive rehabilitation is the difference between the types of activities used in therapeutic sessions and the real difficulties that patients face after treatment. The introduction of virtual reality applications helps to reduce this gap between clinical practice and everyday life. In this innovative proposal, the contribution of the UPC’s Computer Science in Engineering Group (GIE) involves developing virtual realities of everyday spaces, for example the kitchen, in which the patient has to carry out several tasks, such as putting things away in the fridge or preparing a salad. These kinds of tasks, which are difficult to simulate in a clinical setting, can help patients to work on their ability to plan, sequence, categorize or use their memory. The flexibility and adaptability of this computer technology means that it can be used in the rehabilitation of other types of patients who also require cognitive treatment to improve their quality of life. The project is still in progress and the following have been incorporated: new technology partners, such as the Biomedical Engineering and Telemedicine Center (GBT) of the Technical University of Madrid; knowledge in the field of neurosciences, provided by the Catalan Institute of Aging (FICE); the UAB, in the form of the Cognitive Neuroscience Research Group; and the recognition and drive of ICT industries, represented by ICA and Vodafone, as the result of a research, development and innovation grant that was awarded through the AVANZA program.

More information:

http://www.sciencedaily.com/releases/2008/12/081201082357.htm

17 December 2008

Future Computer Interfaces

This month, the humble computer mouse celebrated its 40th birthday. Thanks to the popularity of the iPhone, the touch screen has gained recognition as a practical interface for computers. In the coming years, we may see increasingly useful variations on the same theme. A couple of projects, in particular, point the way toward interacting more easily with miniature touch screens, as well as with displays the size of walls. One problem with devices like the iPhone is that users' fingers tend to cover up important information on the screen. Yet making touch screens much larger would make a device too bulky to slip discreetly into a pocket. A project called nanoTouch, developed at Microsoft Research, tackles the challenges of adding touch sensitivity to ever-shrinking displays. Researchers have added touch interaction to the back of devices that range in size from an iPod nano to a watch or a pendant. The researchers' concept is for a gadget to have a front that is entirely a display, a back that is entirely touch sensitive, and a side that features buttons.

To make the back of a gadget touch sensitive, the researchers added a capacitive surface, similar to those used on laptop touch pads. In one demonstration, the team shows that the interface can be used to play a first-person video game on a screen the size of a credit card. In another demo, the device produces a semitransparent image of a finger as if the device were completely see-through. When a transparent finger or a cursor is shown onscreen, people can still operate the device reliably. Details of the device will be presented at the Computer Human Interaction conference in Boston next April. The researchers tested four sizes of square displays, measuring 2.4 inches, 1.2 inches, 0.6 inches, and 0.3 inches wide. They found that people could complete tasks at roughly the same speed using even the smallest display, and that they made about the same number of errors using all sizes of the device. Furthermore, the back-of-the-screen prototypes performed better than the smallest front-touch device.
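
The heart of back-of-device interaction is a coordinate mapping: a touch on the rear panel is mirrored left-to-right so the on-screen cursor (or pseudo-transparent finger) appears where the finger would be if the device really were see-through. A minimal sketch, where the mirroring convention and function name are assumptions rather than details of the nanoTouch prototype:

```python
def back_touch_to_screen(x_back, y_back, width, height):
    """Map a rear-panel touch to front-display coordinates.

    Mirroring the x-axis makes the cursor sit 'under' the finger,
    as if the device were transparent (y is unchanged)."""
    return (width - 1 - x_back, y_back)

# A touch near the back's top-left appears near the front's top-right:
print(back_touch_to_screen(10, 20, width=320, height=240))   # -> (309, 20)
```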

More information:

http://www.technologyreview.com/computing/21799/?a=f

11 December 2008

Virtual Emotions, Moods, Personality

A team of researchers from the University of the Balearic Islands (UIB) has developed a computer model that enables the generation of faces which for the first time display emotions and moods according to personality traits. The aim of this work has been to design a model that reveals a person's moods and displays them on a virtual face. In the same 3-D space the model integrates personality, emotions and moods, which had previously been dealt with separately. Researchers pointed out that emotions (such as fear, joy or surprise) are almost instantaneous mood alterations, in contrast to emotional states (such as boredom or anxiety), which are more long-lasting, or personality, which normally lasts someone's entire life. The designers based the model on the five personality traits established by American psychologists, known as the 'Big Five': extraversion, neuroticism, openness, conscientiousness and agreeableness. An introverted and neurotic personality is therefore related to an anxious emotional state. The points of the face that define these emotions can be determined mathematically, and the algorithms developed by computer experts can be used to obtain different facial expressions ‘quickly and easily’.

The system, which uses the MPEG-4 video coding standard for creating images, makes it possible to display basic emotions (anger, disgust, fear, joy, sadness, surprise) and intermediate situations. The results of the method have been assessed objectively (through an automatic recognizer which identified 82% of the expressions generated) and subjectively, through a survey carried out among a group of 75 university students. The students successfully recognised 86% of the emotions and 73% of the emotional states shown on the computer. Even so, the researchers have detected that some emotions, such as fear and surprise, are difficult to tell apart, with context helping to differentiate between the two. The team is already working along these lines and has prepared a virtual storyteller that enriches the narration by using its face to express the emotions generated by the story being told. The researchers believe that this model could be applied both in educational environments (virtual tutors and presenters with personality traits) and in video game characters or interactive stories that have their own emotional engine.
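
One way to picture how personality, mood and momentary emotion can live in a single space is to treat each as a vector of expression weights and blend them with different persistence. The sketch below shows that idea; the traits, weights and blend factors are invented for illustration and do not reproduce the UIB model's mapping onto MPEG-4 facial animation parameters.

```python
EXPRESSIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

def blend_face(personality_bias, mood, emotion,
               w_personality=0.2, w_mood=0.3, w_emotion=0.5):
    """Blend three layers (slow-changing to fast-changing) into one expression vector,
    which would then drive the facial animation parameters."""
    return {e: w_personality * personality_bias.get(e, 0.0)
               + w_mood * mood.get(e, 0.0)
               + w_emotion * emotion.get(e, 0.0)
            for e in EXPRESSIONS}

# An introverted, neurotic baseline plus an anxious mood and a sudden scare:
face = blend_face({"sadness": 0.3, "fear": 0.2},
                  {"fear": 0.4, "sadness": 0.2},
                  {"fear": 1.0, "surprise": 0.6})
print(face)
```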

More information:

http://www.sciencedaily.com/releases/2008/12/081204133855.htm

07 December 2008

VAST2008 Article

A few days ago I presented a co-authored paper at the 9th International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST) in Braga, Portugal. Museums and other cultural institutions try to communicate the theme of their exhibitions and attract the visitors’ attention by presenting audio-visual information in a number of different ways. Traditional museum exhibitions have evolved from passive presentations of artefacts to interactive displays, such as pre-recorded audio guides and static information kiosks. However, even though some technological advances have been adopted by current museum and mobile exhibitions, they provide very simplistic presentations compared to the potential of current information technologies. It is therefore essential to provide a unifying framework that can be highly customisable, user-friendly and intuitive to use in order to engage a broad spectrum of users and take into account the diverse needs of museum visitors.

This paper presents solutions for both museum exhibitions and mobile guides, moving towards a unifying framework based on open standards. This can offer more customisable experiences, attracting and engaging a broader spectrum of users. Our solution takes into account the diverse needs of visitors to heritage and mobile guide exhibitions, allowing for multimedia representations of the same content using diverse interfaces, including web, map, virtual reality and augmented reality domains. Different case studies illustrate the majority of the capabilities of the multimodal interfaces used, and also how personalisation and customisation can be performed in both kiosk and mobile guide exhibitions to meet user needs.

A draft version of the paper can be downloaded from here.

30 November 2008

Social Inclusion Workshop

The Serious Games Institute (SGI) is organising another workshop, on the 10th of December 2008, in the area of social inclusion and serious games. The issues around social inclusion and digital equity are extremely significant for supporting social communities through the use of virtual and games technologies.

The workshop will explore the key issues with leading experts from the field and give examples of how virtual technologies can be used to create new relationships and communities that are socially inclusive.

More information:

http://www.seriousgamesinstitute.co.uk/events.aspx?item=554

27 November 2008

Jacking into the Brain

Futurists and science-fiction writers speculate about a time when brain activity will merge with computers. Technology now exists that uses brain signals to control a cursor or prosthetic arm. How much further development of brain-machine interfaces might progress is still an imponderable. It is at least possible to conceive of inputting text and other high-level information into an area of the brain that helps to form new memories. But the technical hurdles to achieving this task probably require fundamental advances in understanding the way the brain functions. The cyberpunk science fiction that emerged in the 1980s routinely paraded “neural implants” for hooking a computing device directly to the brain. The genius of the then emergent genre (back in the days when a megabyte could still wow) was its juxtaposition of low-life retro culture with technology that seemed only barely beyond the capabilities of the deftest biomedical engineer.

In the past 10 years, however, more realistic approximations of technologies originally evoked in the cyberpunk literature have made their appearance. A person with electrodes implanted inside his brain has used neural signals alone to control a prosthetic arm, a prelude to allowing a human to bypass limbs immobilized by amyotrophic lateral sclerosis or stroke. Researchers are also investigating how to send electrical messages in the other direction as well, providing feedback that enables a primate to actually sense what a robotic arm is touching. But how far can we go in fashioning replacement parts for the brain and the rest of the nervous system? Besides controlling a computer cursor or robot arm, will the technology somehow actually enable the brain’s roughly 100 billion neurons to function as a clandestine repository for pilfered industrial espionage data or another plot element borrowed from Gibson?

More information:

http://www.sciam.com/article.cfm?id=jacking-into-the-brain

24 November 2008

Musical Instruments Cellphones

The satisfying thud of a bass drum sounds every time Gil Weinberg strikes thin air with his iPhone. A pal nearby swings his Nokia smartphone back and forth, adding a rippling bass line. A third phone-wielding friend sprinkles piano and guitar phrases on top. Weinberg's trio are using software that turns ordinary cellphones into musical instruments. Commuters regularly assailed by tinny recorded music played on other passengers' phones might not share his enthusiasm, but air guitarists and would-be drummers will probably be delighted. Smart gesture-recognition software will democratise music-making as never before. The software, dubbed ZoozBeats and launched this week, monitors a phone's motion and plays a corresponding sound. For example, you might play a rhythm based on a snare drum by beating the air with the phone as if it's a drumstick.

Or you could strum with it to play a sequence of guitar chords. The software runs on a wide range of phones because it uses many different ways to sense gestures. The obvious way is to use the accelerometers built into gadgets like the Apple iPhone and Nokia N96 smartphone. But ZoozBeats can also trigger sounds when the view through a phone's camera lens changes rapidly, or generate a beat or bassline from simple taps on the mobile's microphone. Of course, people who aren't well versed in music-making are more likely to make an infernal racket than beautiful melodies, so ZoozBeats incorporates an algorithm called Musical Wizard to make sure their musical decisions are harmonious. ZoozBeats comes with instruments for three types of music: rock, techno and hip hop. It also allows users to produce vocal effects by singing into the phone and will be downloadable in two versions. One of these will be for solo use, the other a Bluetooth networkable version that supports jamming by groups of people - using the Musical Wizard to keep everybody's input melodious.
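
Gesture-triggered instruments of this kind typically boil down to two steps: detect a 'strike' when the accelerometer magnitude jumps well above gravity, then let a harmoniser snap the resulting pitch onto a musically safe scale. The sketch below illustrates both steps; the threshold, scale and note mapping are assumptions, not ZoozBeats' actual Musical Wizard algorithm.

```python
import math

C_MAJOR_PENTATONIC = [60, 62, 64, 67, 69, 72]   # MIDI note numbers

def detect_strike(ax, ay, az, threshold=18.0):
    """A 'strike' is an accelerometer magnitude well above gravity (~9.8 m/s^2)."""
    return math.sqrt(ax * ax + ay * ay + az * az) > threshold

def harmonise(raw_pitch, scale=C_MAJOR_PENTATONIC):
    """Snap an arbitrary pitch onto the nearest note of a safe scale."""
    return min(scale, key=lambda n: abs(n - raw_pitch))

if detect_strike(2.0, 25.0, 4.0):            # a sharp swing of the phone
    print("play note", harmonise(65))        # 65 (F) snaps to 64 (E)
```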

More information:

http://www.newscientist.com/article/mg20026816.200-cellphone-app-will-get-air-guitarists-wailing.html

21 November 2008

Voice and Gesture Mobile Phones

Five years from now, it is likely that the mobile phone you will be holding will be a smooth, sleek brick — a piece of metal and plastic with a few grooves in it and little more. Like the iPhone, it will be mostly display; unlike the iPhone, it will respond to voice commands and gestures as well as touch. You could listen to music, access the internet, use the camera and shop for gadgets by just telling your phone what you want to do, by waving your fingers at it, or by aiming its camera at an object you're interested in buying. Over the last few years, advances in display technology and processing power have turned smartphones into capable tiny computers. Mobile phones have gone beyond traditional audio communication and texting to support a wide range of multimedia and office applications. The one thing that hasn't changed, until recently, is the tiny keypad. Sure, there have been some tweaks, such as T9 predictive text input that cuts down on the time it takes to type, a QWERTY keyboard instead of a 12-key one, or the touchscreen version of a keyboard found on the iPhone. But fundamentally, the act of telling your phone what to do still involves a lot of thumb-twiddling. Experts say the industry needs a new wave of interface technologies to transform how we relate to our phones. The traditional keypads and scroll wheels will give way to haptics, advanced speech recognition and motion sensors. Until Apple's iPhone came along, keypads were a standard feature on all mobile phones. The iPhone paved the way for a range of touchscreen-based phones, including the T-Mobile G1 and the upcoming BlackBerry Storm. So far, even iPhone clones require navigation across multiple screens to complete a task. That will change as touchscreens become more sophisticated and cheaper. Instead of a single large screen that is fragile and smudged by fingerprints, phone designers could create products with multiple touch screens.

Users could also interact with their phone by simply speaking to it using technology from companies such as Cambridge, Massachusetts-based Vlingo. Vlingo's application allows users to command their phones by voice. That could enable you to speak the URLs for web pages or dictate e-mail messages. Natural speech recognition has long been challenging for human-computer interface researchers. Most devices with speech-recognition capabilities require users to speak commands in an artificially clear, stilted way. They also tend to have high error rates, leading to user disenchantment. Unlike conventional voice-recognition technologies, which require specific applications built to recognize selected language commands, Vlingo uses a more open-ended approach. User voice commands are captured as audio files and transferred over the wireless connection to a server, where they're processed. The technology personalizes itself for each individual user, recognizing and training itself based on that user's speech patterns. The technology has already found a major partner in Yahoo, which offers voice-enabled search on BlackBerry phones. Vlingo's completely voice-powered user interface is also available on Research In Motion phones, such as the BlackBerry Curve and Pearl. Vlingo hopes to expand its services to additional platforms such as Symbian, Android and feature phones over the next few months. Moreover, even the traditional keypad is set to get a face lift. Typing on a touchscreen keypad is slow and difficult, even for those without stubby fingers or long nails. That's where Swype comes in. It allows users to trace a continuous motion over an onscreen QWERTY keypad instead of tapping individual characters. For instance, instead of typing the word infinity, users can just draw a line through each of the characters.
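
The trace-based typing described at the end can be approximated with a tiny decoder: keep only dictionary words whose letters occur, in order, along the sequence of keys the finger passed over, and whose first and last letters match the trace endpoints. A toy sketch follows; the dictionary and filtering rules are placeholders, not Swype's engine, which also uses key geometry and language models.

```python
def subsequence(word, trace):
    """True if the word's letters appear in order within the traced key sequence."""
    it = iter(trace)
    return all(ch in it for ch in word)   # 'in' advances the iterator, enforcing order

def decode_trace(trace, dictionary):
    """Return candidate words compatible with the traced keys."""
    return [w for w in dictionary
            if w and w[0] == trace[0] and w[-1] == trace[-1]
            and subsequence(w, trace)]

# A finger sliding from 'c' to 'a' to 't' also passes over a few keys in between:
print(decode_trace("cxsart", ["cat", "cast", "cart", "cut"]))   # -> ['cat', 'cart']
```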

More information:

http://blog.wired.com/gadgets/2008/11/buttons-make-wa.html

16 November 2008

Google Earth: Ancient Rome 3D

Google Earth has embraced a frontier dating back 17 centuries: ancient Rome under Constantine the Great. Ancient Rome 3D, as the new feature is known, is a digital elaboration of some 7,000 buildings recreating Rome circa A.D. 320, at the height of Constantine’s empire, when more than a million inhabitants lived within the city’s Aurelian walls. In Google Earth-speak it is a “layer” to which visitors gain access through its Gallery database of images and information. Google had planned to activate the feature on Wednesday morning, but a spokesman said there would be a short delay because of technical difficulties. By Wednesday night, however, the feature was up and running. The Google Earth feature is based on Rome Reborn 1.0, a 3D reconstruction first developed in 1996 at the University of California, Los Angeles, and fine-tuned over the years with partners in the United States and Europe.

Of the 7,000 buildings in the 1.0 version, around 250 are extremely detailed. Thirty-one of them are based on 1:1 scale models built at U.C.L.A. The others are sketchier and derived from a 3D scan of data collected from a plaster model of ancient Rome at the Museum of Roman Civilization in Rome. Archaeologists and scholars verified the data used to create the virtual reconstruction, although debates continue about individual buildings. The Rome Reborn model went through various incarnations over the years as the technology improved. Originally it was developed to be screened in theaters for viewers wearing 3D glasses or on powerful computers at the universities contributing to the project, rather than run on the Internet. To experience Ancient Rome 3D, a user must install the Google Earth software at earth.google.com, select the Gallery folder on the left side of the screen and then click on “Ancient Rome 3D.”

More information:

http://www.nytimes.com/2008/11/13/arts/design/13anci.html?_r=1&oref=slogin

http://earth.google.com/rome/index.html

14 November 2008

Telemedicine Using Internet2

Imagine a scenario where doctors from different hospitals can collaborate on a surgery without having to actually be in the operating room. What if doctors in remote locations could receive immediate expert support from top specialists in hospitals around the world? This environment could soon become a reality thanks to research by a multi-university partnership that is testing the live broadcast of surgeries using the advanced networking consortium Internet2. Rochester Institute of Technology is collaborating with a team led by the University of Puerto Rico School of Medicine that recently tested technology that allows for the transmission of high-quality, real-time video to multiple locations. Using a secure, high-speed network, an endoscopic surgery at the University of Puerto Rico was broadcast to multiple locations in the United States. The experiment also included a multipoint videoconference that was connected to the video stream, allowing for live interaction between participants.

Results from the test were presented at a meeting of the collaboration special interest group at the fall 2008 Internet2 member meeting in New Orleans. The experiment demonstrates that by using the speed and advanced protocol support provided by the Internet2 network, it is possible to develop real-time, remote consultation and diagnosis during surgery, taking telemedicine to the next level. The researchers utilized a 30-megabit-per-second broadcast-quality video stream, which produces high-quality images, and configured it to be transmitted via multicast using Microsoft Research’s ConferenceXP system. This level of real-time video was not possible in the past due to slower and lower-quality computer networks. The team also utilized a Polycom videoconferencing system to connect all parties. The team will next conduct additional tests with different surgical procedures and an expanded number of remote locations. The researchers’ goal is to transfer the technology for use in medical education and actual diagnostic applications.

More information:

http://www.sciencedaily.com/releases/2008/11/081112160853.htm

06 November 2008

Second Life: 'Second China'

A team of University of Florida computer engineers and scholars has used the popular online world Second Life to create a virtual Chinese city, one that hands a key to users who want to familiarize themselves with the sights and experiences they will encounter as first-time visitors. The goal of the federally funded research project: To educate and prepare foreign service or other government professionals to arrive in the country prepared and ready to work. People have long prepared for international travel with language and cultural instruction, role-playing and, in recent years, distance-learning experiences. The “Second China Project” seeks to add another element: Simulated experiences aimed at introducing users not only to typical sights and the Chinese language, but also to expectations of politeness, accepted business practices and cultural norms.

As with all Second Life worlds, users’ avatars simply “teleport” in to Second China, a city with both old and new buildings that looks surprisingly similar to some of China’s fastest growing metropolises. There, they can try a number of different activities — including, for example, visiting an office building for a conference. In the office simulation, the user’s avatar chooses appropriate business attire and a gift, greets a receptionist, and is guided to a conference room to be seated, among other activities. With each scenario, the user gains understanding or awareness: the Chinese formal greeting language and procedure, that it’s traditional to bring a gift to a first meeting, that guests typically are seated facing the door in a Chinese meeting room, and so on. In the teahouse simulation, a greeter shows the visitor photos of well-known personalities who have visited as patrons, a typical practice in many establishments in China.

More information:

http://www.sciencedaily.com/releases/2008/10/081029154856.htm

04 November 2008

New Model Predicts A Glacier's Life

EPFL researchers have developed a numerical model that can re-create the state of Switzerland's Rhône Glacier as it was in 1874 and predict its evolution until the year 2100. This is the longest period of time ever modeled in the life of a glacier, involving complex data analysis and mathematical techniques. The work will serve as a benchmark study for those interested in the state of glaciers and their relation to climate change. The Laboratory of Hydraulics, Hydrology and Glaciology at ETH Zurich has been a repository for temperature, rainfall and flow data on the Rhône Glacier since the 1800s. Researchers there have used this data to reconstruct the glacier's mass balance, i.e. the difference between the amount of ice it accumulates over the winter and the amount that melts during the summer. Now, a team of mathematicians has taken the next step, using all this information to create a numerical model of glacier evolution, which they have used to simulate the history and predict the future of Switzerland's enormous Rhône Glacier over a 226-year period.

The mathematicians developed their model using three possible future climate scenarios. With a temperature increase of 3.6 degrees Celsius and a decrease in rainfall of 6% over a century, the glacier's ‘equilibrium line’, or the transition from the snowfall accumulation zone to the melting zone (currently situated at an altitude of around 3000 meters), rose significantly. According to this same scenario, the simulation anticipates a loss of 50% of the volume by 2060 and forecasts the complete disappearance of the Rhône glacier around 2100. Even though measurements have been taken for quite some time, the sophisticated numerical techniques that were needed to analyze them have only been developed very recently. To verify their results, the mathematicians have also reconstructed a long-vanished glacier in Eastern Switzerland. They were able to pinpoint the 10,000-year-old equilibrium line from vestiges of moraines that still exist. The scientists' work will be of interest not only to climate change experts, but also to those to whom glaciers are important – from tourism professionals to hydroelectric energy suppliers.
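
The equilibrium line is what makes the scenario tangible: as the climate warms the line climbs, and the accumulation zone shrinks. A back-of-the-envelope sketch of that relationship is below; the sensitivity, the glacier's elevation range and the intermediate warming values are assumptions for illustration, not outputs of the EPFL model.

```python
def equilibrium_line(ela_0=3000.0, warming_c=0.0, sensitivity_m_per_c=150.0):
    """Altitude (m) of the equilibrium line as the climate warms (toy sensitivity)."""
    return ela_0 + sensitivity_m_per_c * warming_c

def accumulation_area_ratio(ela, glacier_top=3600.0, glacier_bottom=2200.0):
    """Fraction of the glacier's altitude range that still lies in the accumulation zone."""
    if ela >= glacier_top:
        return 0.0
    return (glacier_top - max(ela, glacier_bottom)) / (glacier_top - glacier_bottom)

for year, warming in [(2008, 0.0), (2060, 1.9), (2100, 3.6)]:
    ela = equilibrium_line(warming_c=warming)
    print(year, round(ela), "m equilibrium line,",
          round(100 * accumulation_area_ratio(ela)), "% accumulation area left")
```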

More information:

http://www.sciencedaily.com/releases/2008/10/081029104258.htm

31 October 2008

Undressing the Human Body

Imagine you are a police detective trying to identify a suspect wearing a trench coat, baggy pants and a baseball cap pulled low. Or imagine you are a fashion industry executive who wants to market virtual clothing that customers of all shapes and sizes can try online before they purchase. Perhaps you want to create the next generation of “Guitar Hero” in which the user, not some character, is pumping out the licks. The main obstacle to these and other pursuits is creating a realistic, 3D body shape — especially when the figure is clothed or obscured. Researchers have created a computer program that can accurately map the human body’s shape from digital images or video. This is an advance from current body scanning technology, which requires people to stand still without clothing in order to produce a 3D model of the body. With the new 3D body-shape model, the scientists can determine a person’s gender and calculate an individual’s waist size, chest size, height, weight and other features.

The potential applications are broad. Besides forensics and fashion, the researchers’ work could benefit the film industry. Currently, actors must wear tight-fitting suits covered with reflective markers to have their motion captured. The new approach could capture both the actors’ shape and motion, while doing away with the markers and suits. In sports medicine, doctors would be able to use accurate, computerized models of athletes’ bodies to better identify susceptibility to injury. In the gaming world, it could mean the next generation of interactive technology. Instead of acting through a character, a camera could track the user, create a 3D representation of that person’s body and insert the user into the video game. The researchers stress the technique is not invasive; it does not use X-rays, nor does it actually see through clothing. The software makes an intelligent guess about the person’s exact body shape.

More information:

http://www.sciencedaily.com/releases/2008/10/081027101350.htm

http://www.cs.brown.edu/~alb/scapeClothing/

29 October 2008

Maps You Can Feel

“Eyes on the future” is the mantra of the ‘World Sight Day’ held this month to raise awareness of blindness and vision impairment. New technologies developed by European researchers, offering the visually impaired greater independence, live up to this vision. Many of the most innovative systems have been created by a consortium of companies and research institutes working in the EU-funded ENABLED project. The project has led to 17 prototype devices and software platforms being developed to help the visually impaired, two of which have been patented. Guide dogs, canes, Braille and screen readers that turn digital text into spoken audio all help to improve the lives of the blind or severely visually impaired, but none of these tools can make up for having a friend or relative accompany a blind person around and assist them in their daily life. However, a human helper is not always available. Activities that the sighted take for granted, such as going for a walk in the park or trying out a new restaurant, become an odyssey for the visually impaired, particularly when they do not already know the route by heart. A guide dog can help them avoid dangers in the street, be it a curb or a lamppost, but it cannot show them a new route. People can be asked for directions, but following them is another matter entirely when you cannot read street signs or see landmarks. Those barriers have typically prevented the visually impaired from exploring the world around them on their own, but now, with the new technologies, they can surmount some of these barriers.

To achieve that, the project partners worked in two broad areas. On the one hand, they developed software applications with tactile, haptic and audio feedback devices to help visually impaired people feel and hear digital maps of where they want to go. On the other hand, they created new haptic and tactile devices to guide them when they are out in the street. One of the patented prototypes, called VITAL, allows users to access a tactile map of an area. Using a device akin to a computer mouse they can move a cursor around the map and small pins will create shapes under the palm of their hand. The device could produce the sensation of a square block to define a building, or form into different icons to depict different shops and services – an ‘H’ for a hospital, for example. Having obtained a ‘mental image’ of the map from the computer, users can then take the route information with them when they venture outside. For that purpose, the project partners used a commercially available navigation aid called the Trekker, which uses GPS to guide users as they walk around, much like a navigation system in a car. However, the Trekker gives only spoken directions, something that can be disconcerting for blind people, who may not want to draw attention to themselves. The device can often be hard to hear in noisy, city environments. The ENABLED team therefore developed prototypes to provide directions through tactile and haptic feedback, rather than via audio alone. One patented device developed by the project team, the VIFLEX, looks similar to a TV remote control with a movable plate at the front. The user rests his thumb on the plate, which tilts in eight directions to guide users based on the directions given by the Trekker.
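
Turning a GPS route into one of the VIFLEX's eight tilt directions is essentially a quantisation of the bearing error: compare the bearing to the next waypoint with the user's current heading and round the difference to the nearest 45 degrees. The sketch below shows that mapping; the direction labels and the command interface are assumptions, not the patented device's actual protocol.

```python
DIRECTIONS = ["ahead", "ahead-right", "right", "back-right",
              "back", "back-left", "left", "ahead-left"]

def tilt_command(bearing_to_waypoint, user_heading):
    """Quantise the bearing error (degrees) into one of eight tilt directions."""
    error = (bearing_to_waypoint - user_heading) % 360.0
    sector = int((error + 22.5) // 45) % 8     # 45-degree sectors centred on each direction
    return DIRECTIONS[sector]

print(tilt_command(bearing_to_waypoint=90.0, user_heading=80.0))    # -> 'ahead'
print(tilt_command(bearing_to_waypoint=270.0, user_heading=80.0))   # waypoint behind: 'back'
```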

More information:

http://cordis.europa.eu/ictresults/index.cfm/section/news/tpl/article/BrowsingType/Features/ID/90170

28 October 2008

AR Makes Commercial Headway

Media Power is a New York City–based firm that develops mobile communications applications. Media Power is part of a vanguard of organizations that is working to commercialize augmented-reality (AR) technology, which can be characterized as the timely overlay of useful virtual information onto the real world. AR incorporates three key features: virtual information that is tightly registered, or aligned, with the real world; the ability to deliver information and interactivity in real time; and seamless mixing of real-world and virtual information. A commonly invoked example of AR is the virtual first-down marker seen as a yellow stripe in televised football games. The technical challenge of AR is to do something similar but more complex with the live video feed from a cell phone camera and without the 10-second delay required to generate the virtual marker.

Although AR has mostly lived in the lab, the recent emergence of highly capable mobile devices is fueling a surge in interest. In one early gaming application, players look at cards through a camera and watch animated versions of the game characters on the cards fight one another. The ability is based on identifying real-world objects and estimating their locations in space. AR-like technology is also finding its way into industrial manufacturing. InterSense offers process-verification systems that use sensors and cameras to track the positions and motions of tools as workers do their jobs. Computers then compare the actual tool movements with ideal procedures to detect errors or confirm correct completion, information that is then provided graphically to the workers in real time. Media Power will also be introducing cell phone–enabled museum exhibit tours based on the same technology, as well as the means by which consumers can trigger delivery of targeted advertising by directing camera phones at brand logos.
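
Tight registration in practice means projecting a known 3D point (a first-down line, a museum label, a logo anchor) through the camera model so the overlay lands on the right pixels in every frame. Below is a minimal pinhole-camera sketch with made-up intrinsics; real AR systems add pose tracking and lens-distortion correction on top.

```python
def project(point_cam, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Project a 3D point in camera coordinates (metres) to pixel coordinates
    with a simple pinhole model; the overlay is drawn at the returned pixel."""
    x, y, z = point_cam
    if z <= 0:
        return None                   # behind the camera: nothing to draw
    return (fx * x / z + cx, fy * y / z + cy)

# A virtual marker 0.5 m to the right and 4 m in front of the camera:
print(project((0.5, 0.0, 4.0)))       # -> (420.0, 240.0)
```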

More information:

http://www.sciam.com/article.cfm?id=digitally-augmented-reality

25 October 2008

Environmental VEs Workshop

On Wednesday, the 12th of November, another workshop is taking place at the Serious Games Institute (SGI). This month’s focus is on environmental issues. The title of the workshop is ‘Using virtual environments to support environmental issues’.

The use of virtual and smart spaces is significantly changing how we design and interact with our real environments. This workshop will explore some of the ways that virtualising spaces through virtual environments and serious games can help to reshape the debate about the environment.

More information:

http://www.seriousgamesinstitute.co.uk/events.aspx?item=553

21 October 2008

Real Pilots And 'Virtual Flyers'

Stunt pilots have raced against computer-generated opponents for the first time — in a contest that combines the real and the ‘virtual’ at 250 miles per hour. In an air-race in the skies above Spain, run using technology developed, in part, by a University of Nottingham spin-out company, two stunt pilots battled it out with a ‘virtual’ plane which they watched on screens in their cockpits. The ‘virtual’ aircraft was piloted by a computer-gamer who never left the ground, but could likewise see the relative location of the real planes on his own computer screens as the trio swooped around each other during the ‘Sky Challenge’ race. The event could pave the way for massive online competitions, and also demonstrates the power and scope of the very latest in GPS and related systems.

The 'Sky Challenge' was organised by Air Sports Ltd, a New Zealand company which specialises in advanced sports TV technology. The technology that made 'Sky Challenge' possible was supplied by the Geospatial Research Centre (GRC), a joint venture between The University of Nottingham, the University of Canterbury in New Zealand and Canterbury Development Corporation. They were able to merge an electronically-generated world with the real world using a combination of satellite navigation technology (GPS, or global positioning system) and inertial navigation system technology (INS). The result of the Sky Challenge was a narrow victory for one of the real pilots — but he was only 1.5 seconds ahead of his virtual rival.

More information:

http://www.sciencedaily.com/releases/2008/10/081017103640.htm

13 October 2008

IEEE VS-GAMES '09 Conference

The first IEEE International Conference in Games and Virtual Worlds for Serious Applications 2009 (VS-GAMES 2009) will be held on 23-24 March 2009 at Coventry University, UK. It aims to meet the significant challenges of the cross-disciplinary community that works around these serious application areas by bringing the community together to share case studies of practice, to present new frameworks, methodologies and theories, and to begin the process of developing shared cross-disciplinary outputs. In order to achieve this main aim, the conference will pioneer new methods for bringing together and supporting communities of practice emerging in themed areas beyond the duration of the conference, using the event as an ignition point for a wider aspiration to form and sustain a community of practice around the field. To achieve this, the team at the Serious Games Institute (SGI) will use innovative software called Intronetworks, which allows conference participants to create their own profile and identify like-minded colleagues with complementary skills.


The term 'Serious Games' covers a broad range of applications, from flash-based animations to totally immersive, code-driven 3D environments where users interface with large volumes of data through sophisticated and interactive digital interfaces. This shift towards immersive world applications being used to support education, health and training activities marks the beginning of new challenges that offer real scope for collaborative and multi-disciplinary research solutions, and real opportunities for innovative development. We invite researchers, developers, practitioners and decision-makers working with or applying serious games in their communities to present papers in the two main streams of the conference: games for serious applications and virtual worlds for serious applications. The conference will explore games and virtual worlds in relation to: applications, methodologies, theories, frameworks, evaluation approaches and user-studies.

More information:

http://www.vs-games.org.uk/

07 October 2008

Pervasive Open Infrastructure

Pervasive computing provides a means of broadening and deepening the reach of information technology (IT) in society. It can be used to simplify interactions with Web sites, provide advanced location-specific services for people on the move, and support all aspects of citizens' life in the community. The Construct system identifies the best-of-breed techniques that have been successfully implemented for pervasive systems. They are collected together into a middleware platform, an intermediary between sensors and services. Construct provides a uniform framework for situation identification and context fusion, while providing transparent data dissemination and node management. Construct's basic architecture relies on services and sensors that access a distributed collection of nodes, which are responsible for aggregating data from the sensors. Construct regards all data sources as sensors: for example, physical ones for temperature, pressure, and location are included along with virtual ones that access digital and Web resources.

A sensor injects information into Construct's resource description framework (RDF) triple-store database. The triple store provides a set of common descriptions for concepts across domains. This model means that different sensors can be used to detect the same information. Location may be sensed directly from RFID (radio frequency identification) or Ubisense, or inferred from diary or proximity information. Yet all this information can be accessed by services using a common data model. To request information from the database, applications query the triple store using the standard SPARQL language. Construct does not, however, provide remote access to sensors: instead, sensor data is transmitted around the network using the Zeroconf protocol for node discovery and gossiping to exchange data. Gossiping means that nodes randomly synchronise their triple stores. This can lead to substantial background communications traffic, but increases the robustness of the system, since a node failure will not cause sensed data to be lost.
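
The sensor-to-service path described above, inserting triples into an RDF store and querying them with SPARQL, can be sketched with the rdflib library. The namespace, predicate name and location values below are invented for illustration and are not Construct's actual vocabulary.

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/construct/")    # illustrative namespace
g = Graph()                                        # stands in for one node's triple store

# A (virtual) location sensor injects readings as triples:
g.add((EX.alice, EX.locatedIn, Literal("Room 217")))
g.add((EX.bob, EX.locatedIn, Literal("Lobby")))

# A context-aware service asks the store who is where, via SPARQL:
results = g.query("""
    PREFIX ex: <http://example.org/construct/>
    SELECT ?person ?place WHERE { ?person ex:locatedIn ?place . }
""")
for person, place in results:
    print(person, "is in", place)
```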

More information:

http://www.perada-magazine.eu/view.php?article=1262-2008-09-22&category=Middleware

01 October 2008

Major Incident Training Workshop

Major incident training can be difficult and expensive using traditional methods. In this workshop we explore different approaches to solving these challenges through the use of game technology and virtual world applications. The workshop will be held on the 8th of October 2008 at the Serious Games Institute.

The aim is to discuss the issues of virtual training in the area of healthcare, training and disaster management. Colleagues from all over the UK are invited to join the hub area in major incident training at the Serious Games Institute, and help to develop a roadmap for future work in the field.

More information:

http://www.seriousgames.org.uk/events.aspx?item=550

27 September 2008

Less Virtual Games

While most massively multiplayer online games (MMOs) are based on fantasy worlds, there is a growing trend for a new kind of game that merges the real world with the virtual. Rather than taking on the persona of a mythical character who goes on quests, players of this new breed of game compete against one another in real sports, based in the real world. At first glance, these games resemble racing simulations, but with unparalleled realism and the ability to race against a large number of people, including professionals, they represent a cut above the rest. iRacing is an internet-based auto racing simulation system in which drivers can race against dozens of other online participants on race tracks modelled on the real thing. But the makers of iRacing are keen to stress that it's more than just a game. iRacing uses laser-scanning technology to accurately replicate real racetracks, while vehicle-handling dynamics are reproduced using a physics engine and tire model so that each car feels different to drive.

Sky Challenge takes this link to reality a step further, allowing players to race against real jets. High-performance aircraft race through a virtual computer-generated obstacle course in the sky. The course is stored in onboard computers and the pilots flying the planes see the series of animated objects through which they must fly on a small screen display. The course is also dynamic. It can adjust to punish or reward competitors for penalties and bonuses, so that if a pilot hits a virtual object, the course for that pilot gets longer. While iRacing opened to the public this summer, Sky Challenge is yet to become available to internet participants. A test event is due to be held next week on October 2nd 2008 over the beaches of Barcelona. In iRacing, drivers are grouped according to skill level so that races are evenly matched and in Sky Challenge, internet participants start by practising alone, then once they've learnt the course they race against other online players, finally earning the chance to take on the real pilots in a real-time race.

More information:

http://news.bbc.co.uk/1/hi/technology/7633110.stm

25 September 2008

Virtual RFID-enabled Hospital

Students at the University of Arkansas, and at neighboring high schools, are employing avant-garde technology to help the health-care industry learn just how RFID can make a difference in the operations of a company or organization. The researchers hope the technology will provide a modeling and simulation environment that lets organizations test RFID implementations—down to such details as the number of RFID readers and tags, and where to put them—prior to physical deployment. They have digitally created a hospital in Second Life, a three-dimensional virtual world developed and owned by Linden Lab that millions of people visit to work and play online. The project is connected with the University of Arkansas' Center for Innovation in Healthcare Logistics, in its College of Engineering, as well as with the RFID Research Center, part of the Information Technology Research Institute at the Sam M. Walton College of Business. The Center for Innovation in Healthcare Logistics, which opened in 2007, includes an interdisciplinary team of researchers who investigate supply chain networks and information and logistics systems within the broad spectrum of U.S. health care. Since 2005, the RFID Research Center has conducted studies regarding the use of radio frequency identification in retail.

The virtual world allows hospitals to model their environments in great detail. On the University of Arkansas' Second Life Island, the students have created a virtual hospital containing operating suites, patient rooms, laboratories, a pharmacy, waiting rooms, stock rooms and bathrooms. The virtual facility also includes furnishings, such as working toilets, sinks, showers, chairs and beds, along with various diagnostic and medical equipment including electrocardiogram machines, respiratory rate monitors and portable X-ray machines. The avatars (in this case, doctors, nurses, staff members and patients) and various assets are tagged with virtual RFID tags, each with its own unique number. There are also virtual RFID interrogators positioned in doorways and various other places throughout the hospital. Using the tags and readers, the researchers have modeled a variety of business processes. For instance, one process simulates the delivery of equipment and goods to the hospital: A delivery truck drives up to a warehouse, where RFID-tagged items have been placed on a smart pallet (which has scanned the items' RFID tags to create a bill of lading). The avatar loads the smart pallet onto the delivery truck, then drives to the hospital. Once the truck backs up to a dock, an RFID-enabled robot picks up the pallet, scans the items' tags and transports the goods to the appropriate locations within the hospital.
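
Whether the readers are physical devices or scripted objects in Second Life, the underlying event is the same: a reader at a known location reports a tag ID at a timestamp, and the process model updates that item's last known location. A toy event handler follows, with invented reader IDs, tag IDs and locations.

```python
from dataclasses import dataclass
import time

@dataclass
class ReadEvent:
    reader_id: str      # e.g. a portal at the loading dock or an operating-suite door
    tag_id: str         # identifier on a pallet, device or badge
    timestamp: float

READER_LOCATIONS = {"dock-01": "loading dock", "or-door-3": "operating suite 3"}
asset_locations = {}

def handle_read(event: ReadEvent):
    """Update the last known location of whatever carries this tag."""
    asset_locations[event.tag_id] = (READER_LOCATIONS[event.reader_id], event.timestamp)

handle_read(ReadEvent("dock-01", "PALLET-0042", time.time()))
handle_read(ReadEvent("or-door-3", "ECG-UNIT-7", time.time()))
print(asset_locations["PALLET-0042"][0])   # -> 'loading dock'
```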

More information:

http://www.rfidjournal.com/article/articleview/4326/1/1/definitions_off

http://vw.ddns.uark.edu/index.php?page=media

23 September 2008

Google's Android Mobile Unveiled

The first mobile telephone using Google's Android software has been unveiled. The T-Mobile G1 handset will be available in the UK in time for Christmas. The first device to run the search giant's operating system will feature a touch screen as well as a Qwerty keyboard. It will be available for free on T-Mobile tariffs of over £40 a month and includes unlimited net browsing. Other features include a three megapixel camera, a 'one click' contextual search and a browser that users can zoom in on by tapping the screen. The handset will be wi-fi and 3G enabled and has built-in support for YouTube. Users will also have access to the so-called Android Market, where they will be able to download a variety of applications. Google announced its plans for the Android phone software in November 2007 with a declared aim of making it easier to get at the web while on the move.

The idea behind Android is to do for phone software what the open source Linux software has done for PCs. Developers of phone software can get at most of the core elements of the Android software to help them write better applications. However, in launching Android, Google faces stiff competition from established players such as Nokia with its Symbian software and Microsoft with its Windows Mobile operating system. More recently Apple has been gaining customers with its much hyped iPhone. The Android software is squarely aimed at the smartphone segment of the handset market, which adds sophisticated functions to the basic calling and texting capabilities of most phones. Current estimates suggest that only 12-13% of all handsets can be considered smartphones.

More information:

http://news.bbc.co.uk/1/hi/technology/7630888.stm

21 September 2008

From Xbox To T-cells

A team of researchers at Michigan Technological University is harnessing the computing muscle behind the leading video games to understand the most intricate of real-life systems. The group has supercharged agent-based modeling, a powerful but computationally massive forecasting technique, by using graphics processing units (GPUs), which drive the spectacular imagery beloved of video gamers. In particular, the team aims to model complex biological systems, such as the human immune response to a tuberculosis bacterium. During a demonstration, a swarm of bright green immune cells surrounds and contains a yellow TB germ. These busy specks look like 3D animations from a PBS documentary, but they are actually virtual T-cells and macrophages—the visual reflection of millions of real-time calculations.

Researchers from the University of Michigan in Ann Arbor developed the TB model and gave it to the Michigan Tech team, which programmed it to run on a graphics processing unit. Agent-based modeling hasn't replaced test tubes, but it is providing a powerful new tool for medical research. Computer models offer significant advantages: it is possible to create a mouse that is missing a gene and see how important that gene is, but with agent-based modeling researchers can knock out two or three genes at once. In particular, agent-based modeling allows researchers to do something other methodologies cannot: virtually test the human response to serious insults, such as injury and infection. While agent-based modeling may never replace the laboratory entirely, it could reduce the number of dead-end experiments.
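
As a rough illustration of what an agent-based model is (a toy sketch with arbitrary parameters, not the Michigan Tech GPU implementation), the following Python snippet lets simple macrophage agents wander toward a TB bacterium until enough of them surround it.

import random

GRID = 20
N_MACROPHAGES = 30
CONTAIN_THRESHOLD = 4            # neighbouring macrophages needed to contain the germ
bacterium = (GRID // 2, GRID // 2)

def neighbours(cell):
    x, y = cell
    return {((x + dx) % GRID, (y + dy) % GRID)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)}

def step(agent):
    # Each agent follows a noisy walk with a drift toward the bacterium.
    x, y = agent
    bx, by = bacterium
    dx = random.choice([-1, 0, 1]) if random.random() < 0.5 else (bx > x) - (bx < x)
    dy = random.choice([-1, 0, 1]) if random.random() < 0.5 else (by > y) - (by < y)
    return ((x + dx) % GRID, (y + dy) % GRID)

random.seed(0)
agents = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(N_MACROPHAGES)]
for t in range(1000):
    agents = [step(a) for a in agents]
    surrounding = sum(a in neighbours(bacterium) for a in agents)
    if surrounding >= CONTAIN_THRESHOLD:
        print(f"Bacterium contained after {t + 1} steps by {surrounding} macrophages")
        break
else:
    print("Bacterium not contained within 1000 steps")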

More information:

http://www.sciencedaily.com/releases/2008/09/080916155058.htm

16 September 2008

Watch And Learn

In work that could aid efforts to develop more brain-like computer vision systems, MIT neuroscientists have tricked the visual brain into confusing one object with another, thereby demonstrating that time teaches us how to recognize objects. It may sound strange, but human eyes never see the same image twice. An object such as a cat can produce innumerable impressions on the retina, depending on the direction of gaze, angle of view, distance and so forth. Every time our eyes move, the pattern of neural activity changes, yet our perception of the cat remains stable. This stability, which is called 'invariance,' is fundamental to our ability to recognize objects; it feels effortless, but it is a central challenge for computational neuroscience. A possible explanation is suggested by the fact that our eyes tend to move rapidly (about three times per second), whereas physical objects usually change more slowly. Therefore, differing patterns of activity in rapid succession often reflect different images of the same object. Could the brain take advantage of this simple rule of thumb to learn object invariance?

In this study, the researchers recorded from neurons in the monkeys' inferior temporal (IT) cortex — a high-level visual brain area where object invariance is thought to arise — while the animals watched a similarly altered world in which one object was swapped for another at a single location. IT neurons "prefer" certain objects and respond to them regardless of where they appear within the visual field. After the monkeys spent time in this altered world, their IT neurons became confused, just like the previous human subjects. The sailboat neuron, for example, still preferred sailboats at all locations — except at the swap location, where it learned to prefer teacups. The longer the manipulation, the greater the confusion, exactly as predicted by the temporal contiguity hypothesis. Importantly, just as human infants can learn to see without adult supervision, the monkeys received no feedback from the researchers. Instead, the changes in their brain occurred spontaneously as the monkeys looked freely around the computer screen. The team is now testing this idea further using computer vision systems viewing real-world videos. This work was funded by the NIH, the McKnight Endowment Fund for Neuroscience and a gift from Marjorie and Gerald Burnett.
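
A toy version of the temporal-contiguity idea (a sketch under simplifying assumptions, not the lab's model) can be written in a few lines: a unit stores a response weight for each object at each position, and a simple trace rule lets the object that repeatedly follows another at the swap position inherit its response, eventually reversing the preference there while leaving other positions untouched.

RATE = 0.05
positions = ["normal", "swap"]
objects = ["sailboat", "teacup"]

# Initial tuning: the unit prefers sailboats at every position.
w = {(obj, pos): (1.0 if obj == "sailboat" else 0.2)
     for obj in objects for pos in positions}

def swap_exposure(pre_obj, post_obj, pos, steps):
    """Each exposure shows pre_obj at pos, then post_obj replaces it.
    The lingering response (trace) to the first object is credited to the
    second, while the first object's weight slowly decays."""
    for _ in range(steps):
        trace = w[(pre_obj, pos)]
        w[(post_obj, pos)] += RATE * trace
        w[(pre_obj, pos)] -= 0.5 * RATE * trace

swap_exposure("sailboat", "teacup", "swap", steps=200)

for pos in positions:
    preferred = max(objects, key=lambda o: w[(o, pos)])
    print(f"{pos} position: prefers {preferred} "
          f"(sailboat={w[('sailboat', pos)]:.2f}, teacup={w[('teacup', pos)]:.2f})")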

More information:

http://www.sciencedaily.com/releases/2008/09/080911150046.htm

09 September 2008

High-Resolution GeoEye-1 Satellite

GeoEye-1, the super-sharp Earth-imaging satellite, was launched into orbit on 6 September from Vandenberg Air Force Base on the Central California coast. A Delta 2 rocket carrying the GeoEye-1 satellite lifted off at 11:50 a.m. Video on the GeoEye Web site showed the satellite separating from the rocket moments later on its way to an eventual polar orbit. The satellite makers say GeoEye-1 has the highest resolution of any commercial imaging system. It can collect images from orbit with enough detail to show home plate on a baseball diamond. The company says the satellite's imaging services will be sold for uses that could range from environmental mapping to agriculture and defence. GeoEye-1 was lifted into a near-polar orbit by a 12-story-tall United Launch Alliance Delta II 7420-10 configuration launch vehicle. The launch vehicle and associated support services were procured by Boeing Launch Services. The company expects to offer imagery and products to customers in the mid- to late-October timeframe. GeoEye-1, designed and built by General Dynamics Advanced Information Systems, is the world's highest resolution commercial imaging satellite.

Designed to take color images of the Earth from 423 miles (681 kilometers) in space and moving at a speed of about four-and-a-half miles (seven kilometers) per second, the satellite will make 15 Earth orbits per day and collect imagery with its ITT-built imaging system that can distinguish objects on the Earth's surface as small as 0.41 meters (16 inches) in size in the panchromatic (black and white) mode. The 4,300-pound satellite will also be able to collect multispectral or color imagery at 1.65-meter ground resolution. While the satellite will be able to collect imagery at 0.41 meters, GeoEye's operating license from NOAA requires re-sampling the imagery to half-meter resolution for all customers not explicitly granted a waiver by the U.S. Government. The satellite will not only be able to see an object the size of home plate on a baseball diamond, but also map the location of an object that size to within about nine feet (three meters) of its true location on the surface of the Earth without the need for ground control points. Together, GeoEye's IKONOS and GeoEye-1 satellites can collect almost one million square kilometers of imagery per day.
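
As a back-of-the-envelope check on those figures, assuming a simple circular orbit and standard values for the Earth's radius and gravitational parameter, the short calculation below gives an orbital speed of roughly 7.5 km/s and about 15 orbits per day, broadly consistent with the numbers quoted above.

import math

MU = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m
altitude = 681_000.0       # GeoEye-1 altitude, m

r = R_EARTH + altitude
speed = math.sqrt(MU / r)              # circular orbital speed, m/s
period = 2 * math.pi * r / speed       # orbital period, s
orbits_per_day = 86_400 / period

print(f"orbital speed  ~ {speed / 1000:.1f} km/s")
print(f"orbital period ~ {period / 60:.0f} minutes")
print(f"orbits per day ~ {orbits_per_day:.1f}")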

More information:

http://www.gisdevelopment.net/news/viewn.asp?id=GIS:N_nvjzcqpdlg&Ezine=sept0808&section=News

http://geoeye.mediaroom.com/

02 September 2008

LIDAR Bringing High-Res 3D Data

To make accurate forecasts, meteorologists need data on the vertical distribution of temperature and humidity in the atmosphere. The LIDAR system developed by EPFL can collect these data continuously and automatically up to an altitude of 10 km. On August 26, EPFL will officially transfer this custom-developed LIDAR to MeteoSwiss, and from this point on Swiss forecasters will have access to this source of vertical humidity data for the models they use to calculate weather predictions. The project was supported by funding from the Swiss National Science Foundation. The LIDAR system developed by EPFL is a relative of the familiar RADAR systems used widely in weather forecasting. Instead of sending radio waves out looking for water droplets, however, the LIDAR sends a beam of light vertically into the sky. The ‘echo’ here is a reflection of that light from different layers in the atmosphere. This reflection is used to build an instantaneous vertical profile of temperature and humidity.
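
The ranging geometry behind a vertically pointed LIDAR is simple enough to sketch (an illustration only, not EPFL's instrument software): the round-trip delay of each returned echo maps directly to the altitude of the reflecting layer.

C = 299_792_458.0  # speed of light, m/s

def echo_altitude(delay_s):
    """Altitude of the reflecting layer for a given round-trip delay in seconds."""
    return C * delay_s / 2.0

# Example: echoes returning after 13.3, 33.3 and 66.7 microseconds
for delay in (13.3e-6, 33.3e-6, 66.7e-6):
    print(f"delay {delay * 1e6:5.1f} microseconds -> layer at {echo_altitude(delay) / 1000:5.1f} km")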

Traditional LIDAR systems are more finicky, typically needing to be tuned on a daily basis. The new LIDAR will operate at the Center for Technical Measurements at MeteoSwiss' Payerne weather service. It will provide an ideal complement to the traditional instrumentation already in place: a ground-based measurement network, balloon-launched radio soundings, radar equipment, remotely sensed wind speed and temperature measurements, and a station of the Baseline Surface Radiation Network, part of a world-wide network that measures radiation changes at the Earth's surface. The combination of all these measurements will open up new possibilities, and weather forecasting models stand to benefit. The acquisition of the LIDAR will bring high-resolution three-dimensional humidity data to Swiss weather forecasting for the first time.

More information:

http://azooptics.com/Details.asp?newsID=2990

01 September 2008

Serious Virtual Worlds Conference 08

Building on the success of the first Serious Virtual Worlds conference in 2007, this is your invitation to be a part of the newly emerging professional community for the serious uses of virtual worlds. Serious Virtual Worlds'08 is the only event focussing on the serious uses of these environments.

SVW'08 will address the live issue of how virtual worlds will cross boundaries, both between the real world and virtual worlds and between one virtual world and another. As people spend increasing time in virtual worlds, how will they interoperate between these virtual and real spaces? SVW'08 is the only international event that takes these leading-edge issues and addresses them in a compact two-day event.

More information:

http://www.seriousvirtualworlds.net/

28 August 2008

Personalised Maps Show Street View

Finding your way across an unfamiliar city is a challenge for most people's sense of direction. Software that generates personalised maps showing only relevant information, and carefully chosen views of selected landmarks, could make disorientation a thing of the past. Thanks to online services such as Google Maps and Microsoft Live, maps now contain more information than ever. It is possible to toggle between a regular schematic, a ‘bird's eye view’ that uses aerial photos and even three-dimensional representations of a city's buildings. Those multiple perspectives can help users locate themselves more accurately. Grabler's team at Berkeley, working with researchers at ETH Zurich, used a perceptual study of San Francisco from the 1960s to help identify which landmark buildings to include on a map of the city. They found that landmark buildings came in three varieties, and each building in San Francisco was given a rating on the basis of its score in each of the three categories.
When generating a map, the user can choose to display those landmarks in one of two ways. They can be displayed as straightforward three-dimensional depictions, but that can hide some of the buildings' facades. To provide the user with more information, the team added an oblique projection option, which shows all visible sides of the building. Although the buildings look distorted compared with a regular three-dimensional depiction, it is possible to see all the facades a building presents to the street, including both facades for a building on a corner. But buildings depicted this way can hide some streets. This is avoided by widening the map's roads and shrinking the height of the buildings so that roads remain visible behind even tall buildings. The user's final decision is to choose the purpose of their map. On a shopping map, all the major shops become semantically important and are included on the map. A food map, by contrast, will show fewer shops but more of the city's restaurants.
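
The oblique projection itself can be thought of as a simple shear applied to each building's roof. The sketch below (hypothetical data and parameter names, not the Berkeley/ETH code) extrudes a footprint and shears the roof corners sideways so that the street-facing facades remain drawable.

import math

def oblique_project(footprint, height, angle_deg=45.0, depth_scale=0.5):
    """Project a building into 2D map space.

    footprint : list of (x, y) ground corners in map coordinates
    height    : building height (already shrunk, per the paper, so that
                widened roads stay visible behind tall buildings)
    Returns (base, roof) corner lists; facades are drawn between the two.
    """
    a = math.radians(angle_deg)
    shear = (depth_scale * math.cos(a), depth_scale * math.sin(a))
    base = list(footprint)
    roof = [(x + height * shear[0], y + height * shear[1]) for x, y in footprint]
    return base, roof

# A 20 m by 10 m corner building, 30 m tall (figures chosen for illustration)
base, roof = oblique_project([(0, 0), (20, 0), (20, 10), (0, 10)], height=30)
print("base corners:", base)
print("roof corners:", roof)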

More information:

http://technology.newscientist.com/channel/tech/dn14562-personalised-maps-show-the-view-from-the-street-.html

27 August 2008

Sign Language Over Cell Phones

A group at the University of Washington has developed software that for the first time enables deaf and hard-of-hearing Americans to use sign language over a mobile phone. UW engineers got the phones working together this spring, and recently received a National Science Foundation grant for a 20-person field project that will begin next year in Seattle. This is the first time two-way real-time video communication has been demonstrated over cell phones in the United States. Since the team posted a video of the working prototype on YouTube, deaf people around the country have been writing in on a daily basis. For mobile communication, deaf people currently rely on text messages. Video is much better than text messaging because it is faster and better at conveying emotion. Low data transmission rates on U.S. cellular networks have so far prevented real-time video transmission with enough frames per second to convey sign language.

Data rates on United States cellular networks are about one tenth of those common in places such as Europe and Asia (sign language over cell phones is already possible in Sweden and Japan). The current version of MobileASL uses a standard video compression tool to stay within the data transmission limit. Future versions will incorporate custom tools to get better quality. The team developed a scheme to transmit the person's face and hands in high resolution, and the background in lower resolution. Now they are working on another feature that identifies when people are moving their hands, to reduce battery consumption and processing power when the person is not signing. Mobile video sign language won't be widely available until the service is provided through a commercial cell-phone manufacturer.
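
The hand-motion feature can be illustrated with a simple frame-differencing check (a toy sketch with arbitrary thresholds, not the MobileASL implementation): full-rate encoding is only triggered when enough pixels change between successive low-resolution frames, that is, when the user appears to be signing.

def motion_fraction(prev, curr, pixel_threshold=25):
    """Fraction of pixels whose grayscale value changed noticeably."""
    changed = sum(abs(a - b) > pixel_threshold
                  for row_a, row_b in zip(prev, curr)
                  for a, b in zip(row_a, row_b))
    total = len(prev) * len(prev[0])
    return changed / total

def should_encode(prev, curr, activity_threshold=0.05):
    """Spend battery and processing power only when the scene is moving."""
    return motion_fraction(prev, curr) >= activity_threshold

# Example with tiny 4x4 grayscale "frames": a static scene vs. a moving hand
still = [[10] * 4 for _ in range(4)]
moving = [[10, 10, 200, 200]] + [[10] * 4 for _ in range(3)]
print(should_encode(still, still))    # False: skip full-rate encoding
print(should_encode(still, moving))   # True: hands are moving, so encode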

More information:

http://www.sciencedaily.com/releases/2008/08/080821164609.htm

http://mobileasl.cs.washington.edu/index.html

http://youtube.com/watch?v=FaE1PvJwI8E

25 August 2008

High Res Images for Video Games

The images of rocks, clouds, marble and other textures that serve as background images and details for 3D video games are often hand painted and thus costly to generate. A breakthrough from a UC San Diego computer science undergraduate now offers video game developers the possibility of high quality yet lightweight images for 3D video games that are generated "on the fly" and are free of stretch marks, flickering and other artifacts. The 2008 SIGGRAPH paper marks an important improvement over Perlin noise, an established technique in which small computer programs create many layers of noise that are piled on top of each other. The layers are then manipulated -- like layers of paint on a canvas -- in order to develop detailed and realistic textures such as rock, soil, cloud, water and marble.

The new approach also eliminates the need to store the textures as huge images that take up valuable memory. Instead the textures are generated by computer programs on the fly every time an image is rendered. Both the stretch marks and the flickering in 3D video game backgrounds often stem from the same technical issue: choosing what color to make individual pixels. They mapped elliptical areas of background images back to circular pixels and found that their technique yielded higher quality background images with less stretching and other distortions. The reason elliptical shapes are a better fit for circular pixels in backgrounds for 3D video games goes back to basic geometry: when a cone that extends from a circular pixel intersects with the background of a 3D video game scene, the region of the cone that hits the background is an ellipse rather than a circle.
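
The layering idea behind Perlin-style noise can be sketched with a simplified one-dimensional value-noise stand-in (an illustration of the layering only, not the UCSD filtering method): each octave doubles in frequency and halves in amplitude, and their sum produces the cloud- and marble-like detail described above.

import math, random

random.seed(42)
LATTICE = 256
values = [random.random() for _ in range(LATTICE)]   # random values at lattice points

def smooth(t):
    return t * t * (3 - 2 * t)       # smoothstep easing between lattice points

def value_noise(x):
    i = math.floor(x)
    t = smooth(x - i)
    a = values[i % LATTICE]
    b = values[(i + 1) % LATTICE]
    return a + t * (b - a)           # interpolated 1D noise in [0, 1]

def fractal_noise(x, octaves=5):
    total, amplitude, frequency, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amplitude * value_noise(x * frequency)
        norm += amplitude
        amplitude *= 0.5             # each layer is fainter...
        frequency *= 2.0             # ...and finer
    return total / norm

# Sample a short slice of the resulting "texture"
print([round(fractal_noise(x * 0.1), 3) for x in range(8)])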

More information:

http://www.physorg.com/news137771248.html

22 August 2008

Archaeologists Reconnect Fragments

For several decades, archaeologists in Greece have been painstakingly attempting to reconstruct wall paintings that hold valuable clues to the ancient culture of Thera, an island civilization that was buried under volcanic ash more than 3,500 years ago. Researchers from Princeton University report on their work in a paper to be presented Aug. 15 in Los Angeles at the Association for Computing Machinery's annual SIGGRAPH conference, widely considered the premier meeting in the field of computer graphics. To design their system, the Princeton team collaborated closely with the archaeologists and conservators working at Akrotiri, which flourished in the Late Bronze Age, around 1630 B.C.E. Reconstructing an excavated fresco, mosaic or similar archaeological object is like solving a giant jigsaw puzzle, only far more difficult. The original object often has broken into thousands of tiny pieces -- many of which lack any distinctive color, pattern or texture and possess edges that have eroded over the centuries. As a result, the task of reassembling artifacts often requires a lot of human effort, as archaeologists sift through fragments and use trial and error to hunt for matches. While other researchers have endeavored to create computer systems to automate parts of this undertaking, their attempts relied on expensive, unwieldy equipment that had to be operated by trained computer experts. The Princeton system, by contrast, uses inexpensive, off-the-shelf hardware and is designed to be operated by archaeologists and conservators rather than computer scientists. The system employs a combination of powerful computer algorithms and a processing system that mirrors the procedures traditionally followed by archaeologists. In 2007, a large team of Princeton researchers made a series of trips to Akrotiri, initially to observe and learn from the highly skilled conservators at the site, and later to test their system. During a three-day visit to the island in September 2007, they successfully measured 150 fragments using their automated system. Although the system is still being perfected, it already has yielded promising results on real-world examples.

The setup used by the Princeton researchers consists of a flatbed scanner (of the type commonly used to scan documents and which scans the surface of the fragment), a laser rangefinder (essentially a laser beam that scans the width and depth of the fragment) and a motorized turntable (which allows for precise rotation of the fragment as it is being measured). These devices are connected to a laptop computer. By following a precisely defined and intuitive sequence of actions, a conservator working under the direction of an archaeologist can use the system to measure, or acquire, up to 10 fragments an hour. The flatbed scanner first is used to record several high-resolution color images of the fragment. Next, the fragment is placed on the turntable, and the laser rangefinder measures its visible surface from various viewpoints. The fragment is then turned upside down and the process is repeated. Finally, computer software, or algorithms, undertake the challenging work of making sense of this information. The Princeton researchers have dubbed the software that they have developed ‘Griphos’, which is Greek for puzzle or riddle. One algorithm aligns the various partial surface measurements to create a complete and accurate three-dimensional image of the piece. Another analyzes the scanned images to detect cracks or other minute surface markings that the rangefinder might have missed. The system then integrates all of the information gathered -- shape, image and surface detail -- into a rich and meticulous record of each fragment. Once it has acquired an object's fragments, the system begins to reassemble them, examining a pair of fragments at a time. Using only the information from edge surfaces, it acts as a virtual archaeologist, sorting through the fragments to see which ones fit snugly together. Analyzing a typical pair of fragments to see whether they match is very fast, taking only a second or two. However, the time needed to reassemble a large fresco may be significant, as the system must examine all possible pairs of fragments. To make the system run faster, the researchers are planning to incorporate a number of additional cues that archaeologists typically use to simplify their searching for matching fragments. These data include information such as where fragments were found, their pigment texture and their state of preservation.
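
The pairwise matching step can be illustrated with a toy example (hypothetical edge profiles, not the Griphos algorithms themselves): if each fragment edge is reduced to a one-dimensional profile of surface measurements, then two edges fit snugly when one profile closely mirrors the other.

def match_score(edge_a, edge_b):
    """Lower is better: mean squared mismatch between edge_a and the reversed
    edge_b (two touching edges are traversed in opposite directions)."""
    reversed_b = edge_b[::-1]
    n = min(len(edge_a), len(reversed_b))
    return sum((a - b) ** 2 for a, b in zip(edge_a[:n], reversed_b[:n])) / n

def best_match(fragment_edge, candidate_edges):
    """Rank candidate fragments by how well one of their edges fits."""
    scores = {name: match_score(fragment_edge, edge)
              for name, edge in candidate_edges.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])

# Hypothetical edge profiles, e.g. fragment thickness sampled along the break
edge_of_piece_A = [4.0, 4.2, 4.5, 4.4, 4.1, 3.9]
candidates = {
    "piece_B": [3.9, 4.1, 4.4, 4.5, 4.2, 4.0],   # mirror of A: a snug fit
    "piece_C": [2.0, 2.5, 3.0, 3.5, 3.0, 2.5],   # clearly different
}
print(best_match(edge_of_piece_A, candidates))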

More information:

http://www.sciencedaily.com/releases/2008/08/080815130417.htm