30 November 2008

Social Inclusion Workshop

The Serious Games Institute (SGI) is organising another workshop, on 10 December 2008, in the area of social inclusion and serious games. Issues around social inclusion and digital equity are highly significant for supporting social communities through the use of virtual-world and games technologies.

The workshop will explore the key issues with leading experts from the field and give examples of how virtual technologies can be used to create new relationships and communities that are socially inclusive.

More information:


27 November 2008

Jacking into the Brain

Futurists and science-fiction writers speculate about a time when brain activity will merge with computers. Technology now exists that uses brain signals to control a cursor or a prosthetic arm. How much further brain-machine interfaces might progress remains an open question. It is at least possible to conceive of inputting text and other high-level information into an area of the brain that helps to form new memories, but the technical hurdles to achieving this probably require fundamental advances in our understanding of how the brain functions. The cyberpunk science fiction that emerged in the 1980s routinely paraded “neural implants” for hooking a computing device directly to the brain. The genius of the then-emergent genre (back in the days when a megabyte could still wow) was its juxtaposition of low-life retro culture with technology that seemed only barely beyond the capabilities of the deftest biomedical engineer.

In the past 10 years, however, more realistic approximations of technologies originally evoked in the cyberpunk literature have made their appearance. A person with electrodes implanted inside his brain has used neural signals alone to control a prosthetic arm, a prelude to allowing people whose limbs are immobilized by amyotrophic lateral sclerosis or stroke to bypass them. Researchers are also investigating how to send electrical messages in the other direction, providing feedback that enables a primate to actually sense what a robotic arm is touching. But how far can we go in fashioning replacement parts for the brain and the rest of the nervous system? Besides controlling a computer cursor or robot arm, will the technology somehow enable the brain’s roughly 100 billion neurons to function as a clandestine repository for pilfered industrial espionage data, or some other plot element borrowed from Gibson?

More information:


24 November 2008

Cellphones as Musical Instruments

The satisfying thud of a bass drum sounds every time Gil Weinberg strikes thin air with his iPhone. A pal nearby swings his Nokia smartphone back and forth, adding a rippling bass line. A third phone-wielding friend sprinkles piano and guitar phrases on top. Weinberg's trio are using software that turns ordinary cellphones into musical instruments. Commuters regularly assailed by tinny recorded music played on other passengers' phones might not share his enthusiasm, but air guitarists and would-be drummers will probably be delighted. Smart gesture-recognition software will democratise music-making as never before. The software, dubbed ZoozBeats and launched this week, monitors a phone's motion and plays a corresponding sound. For example, you might play a rhythm based on a snare drum by beating the air with the phone as if it's a drumstick.

Or you could strum with it to play a sequence of guitar chords. The software runs on a wide range of phones because it uses many different ways to sense gestures. The obvious way is to use the accelerometers built into gadgets like the Apple iPhone and Nokia N96 smartphone. But ZoozBeats can also trigger sounds when the view through a phone's camera lens changes rapidly, or generate a beat or bassline from simple taps on the mobile's microphone. Of course, people who aren't well versed in music-making are more likely to make an infernal racket than beautiful melodies, so ZoozBeats incorporates an algorithm called Musical Wizard to make sure their musical decisions are harmonious. ZoozBeats comes with instruments for three types of music: rock, techno and hip hop. It also allows users to produce vocal effects by singing into the phone and will be downloadable in two versions. One of these will be for solo use, the other a Bluetooth networkable version that supports jamming by groups of people - using the Musical Wizard to keep everybody's input melodious.
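The motion-to-sound mapping described above can be illustrated with a toy threshold detector: when the accelerometer reading spikes past a cut-off, treat it as a drum strike. This is a hypothetical sketch, not ZoozBeats' actual algorithm; the threshold value and the (x, y, z) sample format are assumptions.

```python
import math

STRIKE_THRESHOLD = 2.5  # assumed g-force magnitude above which we call it a hit

def magnitude(sample):
    """Euclidean magnitude of a 3-axis accelerometer sample (x, y, z) in g."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def detect_strikes(samples):
    """Return indices where motion crosses the threshold upward, i.e. the
    start of a strike gesture (one trigger per crossing, not per sample)."""
    strikes = []
    above = False
    for i, s in enumerate(samples):
        if magnitude(s) > STRIKE_THRESHOLD and not above:
            strikes.append(i)  # rising edge: play the drum sound here
            above = True
        elif magnitude(s) <= STRIKE_THRESHOLD:
            above = False
    return strikes

# A resting phone reads about 1 g; a sharp air-drum swing spikes well above it.
resting, swing = (0.0, 0.0, 1.0), (2.0, 1.5, 1.0)
samples = [resting, resting, swing, swing, resting, swing, resting]
print(detect_strikes(samples))  # → [2, 5]
```

A real implementation would debounce the trigger and map different swing directions to different instruments, but rising-edge logic like this is the core of any threshold-based gesture trigger.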

More information:


21 November 2008

Voice- and Gesture-Controlled Mobile Phones

Five years from now, it is likely that the mobile phone you will be holding will be a smooth, sleek brick — a piece of metal and plastic with a few grooves in it and little more. Like the iPhone, it will be mostly display; unlike the iPhone, it will respond to voice commands and gestures as well as touch. You could listen to music, access the internet, use the camera and shop for gadgets by just telling your phone what you want to do, by waving your fingers at it, or by aiming its camera at an object you're interested in buying. Over the last few years, advances in display technology and processing power have turned smartphones into capable tiny computers. Mobile phones have gone beyond traditional audio communication and texting to support a wide range of multimedia and office applications. The one thing that hasn't changed, until recently, is the tiny keypad. Sure, there have been some tweaks, such as T9 predictive text input that cuts down on the time it takes to type, a QWERTY keyboard instead of a 12-key one, or the touchscreen version of a keyboard found on the iPhone. But fundamentally, the act of telling your phone what to do still involves a lot of thumb-twiddling. Experts say the industry needs a new wave of interface technologies to transform how we relate to our phones. The traditional keypads and scroll wheels will give way to haptics, advanced speech recognition and motion sensors. Until Apple's iPhone came along, keypads were a standard feature on all mobile phones. The iPhone paved the way for a range of touchscreen-based phones, including the T-Mobile G1 and the upcoming BlackBerry Storm. So far, even iPhone clones require navigation across multiple screens to complete a task. That will change as touchscreens become more sophisticated and cheaper. Instead of a single large screen that is fragile and smudged by fingerprints, phone designers could create products with multiple touch screens.

Users could also interact with their phone by simply speaking to it, using technology from companies such as Cambridge, Massachusetts-based Vlingo. Vlingo's application allows users to command their phones by voice. That could enable you to speak the URLs of web pages or dictate e-mail messages. Natural speech recognition has long been challenging for human-computer interface researchers. Most devices with speech-recognition capabilities require users to speak commands in an artificially clear, stilted way. They also tend to have high error rates, leading to user disenchantment. Unlike conventional voice-recognition technologies, which require specific applications built to recognize selected language commands, Vlingo uses a more open-ended approach. User voice commands are captured as audio files and transferred over the wireless connection to a server, where they're processed. The technology personalizes itself for each individual user, training itself on that user's speech patterns. The technology has already found a major partner in Yahoo, which offers voice-enabled search on BlackBerry phones. Vlingo's completely voice-powered user interface is also available on Research In Motion phones, such as the BlackBerry Curve and Pearl. Vlingo hopes to expand its services to additional platforms such as Symbian, Android and feature phones over the next few months. Moreover, even the traditional keypad is set to get a face lift. Typing on a touchscreen keypad is slow and difficult, even for those without stubby fingers or long nails. That's where Swype comes in. It allows users to trace a continuous motion on an onscreen QWERTY keypad instead of tapping individual characters. For instance, instead of typing the word infinity, users can just draw a line through each of its characters.
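Swype-style tracing can be sketched as a dictionary-matching problem: the finger's path yields an ordered sequence of keys it passed over, and candidate words are those whose letters occur in order within that sequence, anchored at the trace's first and last keys. A minimal illustration follows; the trace string and mini-dictionary are invented, and real engines also use path geometry and language models.

```python
def is_subsequence(word, trace):
    """True if the letters of word appear in order (not necessarily
    contiguously) within the traced key sequence."""
    it = iter(trace)
    return all(ch in it for ch in word)  # 'in' advances the iterator

def match_trace(trace, dictionary):
    """Words consistent with the trace: endpoints match and letters
    appear in order along the path."""
    return [w for w in dictionary
            if w[0] == trace[0] and w[-1] == trace[-1]
            and is_subsequence(w, trace)]

# Keys a finger might cross while drawing a line through i-n-f-i-n-i-t-y:
trace = "iknhgfghjuijnbhjuiuyty"
print(match_trace(trace, ["infinity", "into", "it", "unity"]))  # → ['infinity']
```

The endpoint check prunes most of the dictionary cheaply; the subsequence test then keeps only words whose letters the path actually swept over in order.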

More information:


16 November 2008

Google Earth: Ancient Rome 3D

Google Earth has embraced a frontier dating back 17 centuries: ancient Rome under Constantine the Great. Ancient Rome 3D, as the new feature is known, is a digital elaboration of some 7,000 buildings recreating Rome circa A.D. 320, at the height of Constantine’s empire, when more than a million inhabitants lived within the city’s Aurelian walls. In Google Earth-speak it is a “layer” to which visitors gain access through its Gallery database of images and information. Google had planned to activate the feature on Wednesday morning, but a spokesman said there would be a short delay because of technical difficulties. By Wednesday night, however, the feature was up and running. The Google Earth feature is based on Rome Reborn 1.0, a 3D reconstruction first developed in 1996 at the University of California, Los Angeles, and fine-tuned over the years with partners in the United States and Europe.

Of the 7,000 buildings in the 1.0 version, around 250 are extremely detailed. Thirty-one of them are based on 1:1 scale models built at U.C.L.A. The others are sketchier and derived from a 3D scan of data collected from a plaster model of ancient Rome at the Museum of Roman Civilization in Rome. Archaeologists and scholars verified the data used to create the virtual reconstruction, although debates continue about individual buildings. The Rome Reborn model went through various incarnations over the years as the technology improved. Originally it was developed to be screened in theaters for viewers wearing 3D glasses or on powerful computers at the universities contributing to the project, rather than run on the Internet. To experience Ancient Rome 3D, a user must install the Google Earth software at earth.google.com, select the Gallery folder on the left side of the screen and then click on “Ancient Rome 3D.”

More information:


14 November 2008

Telemedicine Using Internet2

Imagine a scenario where doctors from different hospitals can collaborate on a surgery without having to actually be in the operating room. What if doctors in remote locations could receive immediate expert support from top specialists in hospitals around the world? This environment could soon become a reality thanks to research by a multi-university partnership that is testing the live broadcast of surgeries using the advanced networking consortium Internet2. Rochester Institute of Technology is collaborating with a team led by the University of Puerto Rico School of Medicine that recently tested technology that allows the transmission of high-quality, real-time video to multiple locations. Using a secure, high-speed network, an endoscopic surgery at the University of Puerto Rico was broadcast to multiple locations in the United States. The experiment also included a multipoint videoconference that was connected to the video stream, allowing for live interaction between participants.

Results from the test were presented at a meeting of the collaboration special interest group at the fall 2008 Internet2 member meeting in New Orleans. The experiment demonstrates that by using the speed and advanced protocol support provided by the Internet2 network, it is possible to develop real-time, remote consultation and diagnosis during surgery, taking telemedicine to the next level. The researchers utilized a 30-megabit-per-second broadcast-quality video stream, which produces high-quality images, and configured it to be transmitted via multicast using Microsoft Research’s ConferenceXP system. This level of real-time video was not possible in the past due to slower and lower-quality computer networks. The team also utilized a Polycom videoconferencing system to connect all parties. The team will next conduct additional tests with different surgical procedures and an expanded number of remote locations. The researchers’ goal is to transfer the technology for use in medical education and actual diagnostic applications.
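The choice of multicast in the experiment matters for scale. A back-of-the-envelope sketch, using only the article's 30 Mbit/s stream rate (the site counts are illustrative): with unicast, the sender must push one copy of the stream per receiving site, while multicast sends a single copy that routers replicate inside the network.

```python
STREAM_MBPS = 30  # broadcast-quality stream rate from the experiment

def unicast_uplink(receivers, rate=STREAM_MBPS):
    """Sender uplink bandwidth needed if every site gets its own copy."""
    return receivers * rate

def multicast_uplink(receivers, rate=STREAM_MBPS):
    """Sender uplink with multicast: one copy, replicated in-network."""
    return rate

for n in (1, 5, 20):
    print(f"{n:2d} sites: unicast {unicast_uplink(n):4d} Mbit/s, "
          f"multicast {multicast_uplink(n)} Mbit/s")
```

At 20 remote sites, unicast would demand 600 Mbit/s of sender uplink against multicast's constant 30 Mbit/s, which is why one-to-many distribution via multicast (here through ConferenceXP) is the practical choice for broadcasting surgery to many locations at once.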

More information:


06 November 2008

Second Life: 'Second China'

A team of University of Florida computer engineers and scholars has used the popular online world Second Life to create a virtual Chinese city, one that hands a key to users who want to familiarize themselves with the sights and experiences they will encounter as first-time visitors. The goal of the federally funded research project: To educate and prepare foreign service or other government professionals to arrive in the country prepared and ready to work. People have long prepared for international travel with language and cultural instruction, role-playing and, in recent years, distance-learning experiences. The “Second China Project” seeks to add another element: Simulated experiences aimed at introducing users not only to typical sights and the Chinese language, but also to expectations of politeness, accepted business practices and cultural norms.

As with all Second Life worlds, users’ avatars simply “teleport” in to Second China, a city with both old and new buildings that looks surprisingly similar to some of China’s fastest growing metropolises. There, they can try a number of different activities — including, for example, visiting an office building for a conference. In the office simulation, the user’s avatar chooses appropriate business attire and a gift, greets a receptionist, and is guided to a conference room to be seated, among other activities. With each scenario, the user gains understanding or awareness: the Chinese formal greeting language and procedure, that it’s traditional to bring a gift to a first meeting, that guests typically are seated facing the door in a Chinese meeting room, and so on. In the teahouse simulation, a greeter shows the visitor photos of well-known personalities who have visited as patrons, a typical practice in many establishments in China.

More information:


04 November 2008

New Model Predicts A Glacier's Life

EPFL researchers have developed a numerical model that can re-create the state of Switzerland's Rhône Glacier as it was in 1874 and predict its evolution until the year 2100. This is the longest period of time ever modeled in the life of a glacier, involving complex data analysis and mathematical techniques. The work will serve as a benchmark study for those interested in the state of glaciers and their relation to climate change. The Laboratory of Hydraulics, Hydrology and Glaciology at ETH Zurich has been a repository for temperature, rainfall and flow data on the Rhône Glacier since the 1800s. Researchers there have used this data to reconstruct the glacier's mass balance, i.e. the difference between the amount of ice it accumulates over the winter and the amount that melts during the summer. Now, a team of mathematicians has taken the next step, using all this information to create a numerical model of glacier evolution, which they have used to simulate the history and predict the future of Switzerland's enormous Rhône Glacier over a 226-year period.

The mathematicians developed their model using three possible future climate scenarios. With a temperature increase of 3.6 degrees Celsius and a decrease in rainfall of 6% over a century, the glacier's ‘equilibrium line’, or the transition from the snowfall accumulation zone to the melting zone (currently situated at an altitude of around 3000 meters), rises significantly. According to this same scenario, the simulation anticipates a loss of 50% of the volume by 2060 and forecasts the complete disappearance of the Rhône Glacier around 2100. Even though measurements have been taken for quite some time, the sophisticated numerical techniques needed to analyze them have only been developed very recently. To verify their results, the mathematicians also reconstructed a long-vanished glacier in Eastern Switzerland. They were able to pinpoint the 10,000-year-old equilibrium line from vestiges of moraines that still exist. The scientists' work will be of interest not only to climate change experts, but also to those to whom glaciers are important – from tourism professionals to hydroelectric energy suppliers.
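The mass-balance bookkeeping the article describes (winter accumulation minus summer melt, with melt growing as temperatures rise) can be mimicked with a toy year-by-year budget. All the numbers below are invented for illustration; the EPFL model solves full ice-flow equations over real measurement series, not this simple accounting.

```python
def simulate(volume_km3, years, accumulation=0.05, melt0=0.05,
             extra_melt_per_year=0.0005):
    """Toy glacier volume model: each year the glacier gains `accumulation`
    and loses a melt term that grows linearly as the climate warms."""
    history = [volume_km3]
    for t in range(years):
        melt = melt0 + extra_melt_per_year * t  # warming raises melt
        volume_km3 = max(0.0, volume_km3 + accumulation - melt)
        history.append(volume_km3)
    return history

h = simulate(2.0, 100)
half = next(t for t, v in enumerate(h) if v < 1.0)   # first year below 50%
gone = next(t for t, v in enumerate(h) if v == 0.0)  # ice fully gone
print(half, gone)  # → 64 90
```

With these assumed parameters the toy glacier loses half its volume after about 64 years and vanishes after about 90, loosely echoing the article's 2060/2100 trajectory; the agreement comes from the chosen parameters, not from any predictive power.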

More information: