28 November 2009

Feeling the Way

For many people, it has become routine to go online to check out a map before traveling to a new place. But for blind people, Google Maps and other visual mapping applications are of little use. Now, a unique device developed at MIT could give the visually impaired the same kind of benefit that sighted people get from online maps. The BlindAid system, developed in MIT’s Touch Lab, allows blind people to ‘feel’ their way around a virtual model of a room or building, familiarizing themselves with it before going there. The director of the Touch Lab is working with the Carroll Center for the Blind in Newton, Mass., to develop and test the device. Preliminary results show that when blind people have the chance to preview a virtual model of a room, they have an easier time navigating the actual room later on. That advantage could be invaluable, since one of the toughest challenges a visually impaired person faces is entering an unfamiliar environment with no human or dog to offer guidance.

The BlindAid system builds on a device called the Phantom, developed at MIT in the early 1990s and commercialized by SensAble Technologies. The Phantom consists of a robotic arm that the user grasps as if holding a stylus. The stylus can create the sensation of touch by exerting a small, precisely controlled force on the fingers of the user. The BlindAid stylus functions much like a blind person’s cane, allowing the user to feel virtual floors, walls, doors and other objects. The stylus is connected to a computer programmed with a three-dimensional map of the room. Whenever a virtual obstacle is encountered, the computer directs the stylus to exert a force against the user’s hand, mimicking the reaction force from a real obstacle. The team has tested the device on about 10 visually impaired subjects at the Carroll Center, a non-profit agency that offers education, training and rehabilitation programs to about 2,000 visually impaired people per year. To use such a system successfully, the visually impaired person must have a well-developed sense of space.
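
How the stylus conveys a virtual wall can be illustrated with a simple penalty-based (spring) force model, a standard approach in haptic rendering. The sketch below is only a minimal illustration of that idea, not the Touch Lab's actual software; the stiffness value, the plane-shaped wall and the function names are assumptions chosen for clarity.

```python
import numpy as np

# Minimal penalty-based haptic rendering sketch (not the BlindAid code).
# A virtual wall is modelled as a plane; when the stylus tip penetrates it,
# a spring force pushes the user's hand back out, mimicking a real obstacle.

STIFFNESS = 500.0  # N/m, assumed value; real devices are tuned per hardware

def wall_force(tip_pos, wall_point, wall_normal, k=STIFFNESS):
    """Return the force (N) the device should exert for one virtual wall."""
    n = wall_normal / np.linalg.norm(wall_normal)
    penetration = np.dot(wall_point - tip_pos, n)  # > 0 means the tip is inside the wall
    if penetration <= 0.0:
        return np.zeros(3)          # no contact, no force
    return k * penetration * n      # push the stylus back along the wall normal

# Example: stylus tip 2 mm inside a wall whose surface passes through the origin
force = wall_force(np.array([0.0, 0.0, -0.002]),
                   np.array([0.0, 0.0, 0.0]),
                   np.array([0.0, 0.0, 1.0]))
print(force)  # -> [0. 0. 1.] N for the assumed stiffness
```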

More information:

http://web.mit.edu/newsoffice/2009/touch-map.html

22 November 2009

Games Graduates Workshop

On Wednesday 9th December 2009, the Serious Games Institute (SGI) is organising another workshop, titled ‘Get ahead of the game’. The workshop is aimed at graduates who want to work in the games industry.

Join us at the Serious Games Institute for this workshop and discover the opportunities available to you within this exciting industry. Graduates will also get the chance to take their first steps towards securing a placement in a fun and innovative industry.

More information:

http://www.seriousgamesinstitute.co.uk/events.aspx?item=754

21 November 2009

Rendering Cloaked Objects

Scientists and curiosity seekers who want to know what a partially or completely cloaked object would look like in real life can now get their wish -- virtually. A team of researchers at the Karlsruhe Institute of Technology in Germany has created a new visualization tool that can render a room containing such an object, showing the visual effects of such a cloaking mechanism and its imperfections. To illustrate their new tool, the researchers have published an article in the latest issue of Optics Express, the Optical Society's (OSA) open-access journal, with a series of full-color images. These images show a museum nave with a large bump in the reflecting floor covered by an invisibility device known as the carpet cloak. They reveal that even as an invisibility cloak hides the effect of the bump, the cloak itself is apparent due to surface reflections and imperfections. The researchers call this the "ostrich effect" -- in reference to the bird's mythic penchant for partial invisibility. The software, which is not yet commercially available, is a visualization tool designed specifically to handle complex media, such as metamaterial optical cloaks. Metamaterials are man-made structured composite materials that exhibit optical properties not found in nature. By tailoring these optical properties, these media can guide light so that cloaking and other optical effects can be achieved. In 2006, scientists at Duke University demonstrated in the laboratory that an object made of metamaterials can be partially invisible to particular wavelengths of light (not visible light, but rather microwaves).

A few groups, including one at the University of California, Berkeley, have achieved a microscopically-sized carpet cloak. These and other studies have suggested that the Hollywood fantasy of invisibility may one day be reality. While such invisibility has been achieved in the laboratory, so far it is very limited. It works, but only for a narrow band of light wavelengths. Nobody has ever made an object invisible to the broad range of wavelengths our eyes can see, and doing so remains a challenge. Another challenge has been visualizing a cloaked object. It is very likely that any invisibility cloak would remain partly visible because of imperfections and optical effects. Up to now, nobody has been able to show what this would look like -- even on a computer. The problem is that metamaterials may have optical properties that vary over their length. Rendering a room with such an object in it requires building hundreds of thousands of distinct volume elements that each independently interact with the light in the room. The standard software that scientists and engineers use to simulate light in a room only allows for a few hundred volume elements, which is nowhere close to the complexity needed to handle many metamaterials such as the carpet cloak. So the researchers built the software needed to do just that. Wanting to demonstrate it, they rendered a virtual museum niche with three walls, a ceiling, and a floor. In the middle of the room, they placed the carpet cloak -- leading the observer to perceive a flat reflecting floor, thus cloaking the bump and any object hidden underneath it.
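
The core rendering difficulty described above is that light no longer travels in straight lines inside a spatially varying medium, so every volume element bends a ray slightly. A heavily reduced sketch of that idea, stepping a ray through a smooth refractive-index field with an approximate form of the eikonal ray equation, is shown below; the index profile, step size and function names are illustrative assumptions, and this is not the Karlsruhe group's renderer.

```python
import numpy as np

# Toy ray march through a spatially varying refractive index n(x, y, z),
# the kind of gradient-index medium a metamaterial cloak approximates.
# This is an illustrative sketch, not the renderer described in the article.

def n(p):
    # Assumed smooth index profile: a weak "lens" bump around the origin.
    return 1.0 + 0.3 * np.exp(-np.dot(p, p) / 0.5)

def grad_n(p, h=1e-4):
    # Numerical gradient of the index field.
    g = np.zeros(3)
    for i in range(3):
        dp = np.zeros(3)
        dp[i] = h
        g[i] = (n(p + dp) - n(p - dp)) / (2 * h)
    return g

def trace(p, d, steps=2000, ds=1e-3):
    """March a ray using an approximation of the eikonal equation d/ds (n * d) = grad(n)."""
    d = d / np.linalg.norm(d)
    for _ in range(steps):
        d = d + ds * grad_n(p) / n(p)   # bend the ray toward higher index
        d /= np.linalg.norm(d)
        p = p + ds * d                  # advance along the (curved) path
    return p, d

end_pos, end_dir = trace(np.array([-2.0, 0.2, 0.0]), np.array([1.0, 0.0, 0.0]))
print(end_pos, end_dir)  # the ray exits slightly deflected by the index bump
```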

More information:

http://www.sciencedaily.com/releases/2009/11/091112171409.htm

20 November 2009

Mobile Maps of Noise Pollution

Mobile phones could soon be used to fight noise pollution - an irony that won't be lost on those driven to distraction by mobile phones' ringtones. In a bid to make cities quieter, the European Union requires member states to create noise maps of their urban areas once every five years. Rather than deploying costly sensors all over a city, the maps are often created using computer models that predict how various sources of noise, such as airports and railway stations, affect the areas around them. Researchers at the Sony Computer Science Laboratory in Paris, France, say that those maps are not an accurate reflection of residents' exposure to noise. To get a more precise picture, the team has developed NoiseTube, a downloadable app that uses people's smartphones to monitor noise pollution. The goal of this project was to turn the mobile phone into an environmental sensor. The app records any sound picked up by the phone's microphone, along with its GPS location.
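
A rough sketch of the measurement the app makes is shown below: converting a buffer of microphone samples into a decibel estimate and pairing it with a GPS fix and user tags. The reference level, record layout and function names are assumptions for illustration; NoiseTube's own processing includes per-handset calibration and is more involved.

```python
import math, time

# Sketch: turn a buffer of microphone samples into a sound level estimate
# and pair it with the phone's GPS fix. Reference level and record layout
# are illustrative assumptions, not NoiseTube's actual implementation.

def sound_level_db(samples, ref=1.0):
    """Approximate sound level in dB from normalised samples in [-1, 1]."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0:
        return float("-inf")
    return 20.0 * math.log10(rms / ref)

def make_measurement(samples, lat, lon, tags=None):
    return {
        "timestamp": time.time(),
        "db": round(sound_level_db(samples), 1),
        "lat": lat,
        "lon": lon,
        "tags": tags or [],          # e.g. ["traffic", "roadworks"]
    }

m = make_measurement([0.02, -0.03, 0.05, -0.04], 48.8566, 2.3522, ["traffic"])
print(m)
```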

Users can label the data with extra information, such as the source of the noise, before it is transmitted to NoiseTube's server. There the sample is tagged with the name of the street and the city it was recorded in and converted into a format that can be used with Google Earth. Software on the server checks against weather information, and rejects data that might have been distorted by high winds, for instance. Locations that have been subjected to sustained levels of noise are labelled as dangerous. The data is then added to a file, which can be downloaded from the NoiseTube website and displayed using Google Earth. Currently the software works on only a handful of Sony Ericsson and Nokia smartphones as it has to be calibrated by researchers to work with the microphone on any given model. They are currently working on a method to automatically calibrate microphones.
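
Server-side, the steps described above (rejecting wind-distorted samples, flagging locations with sustained high noise, and exporting data for Google Earth) might look roughly like the sketch below. The thresholds and the minimal KML snippet are assumptions; the real NoiseTube pipeline also performs street and city tagging.

```python
# Sketch of the server-side steps described above: drop samples likely
# distorted by wind, flag locations with sustained high noise, and write a
# minimal KML placemark for Google Earth. Thresholds are assumed values.

WIND_SPEED_LIMIT_MS = 8.0   # assumed: reject samples taken in strong wind
DANGEROUS_DB = 80.0         # assumed: label sustained levels above this

def keep_sample(measurement, wind_speed_ms):
    return wind_speed_ms < WIND_SPEED_LIMIT_MS

def is_dangerous(levels_db):
    # "Sustained" here means at least five consecutive readings over the limit.
    return len(levels_db) >= 5 and min(levels_db) >= DANGEROUS_DB

def to_kml_placemark(name, lat, lon, db):
    return (
        "<Placemark>"
        f"<name>{name}</name>"
        f"<description>{db:.1f} dB</description>"
        f"<Point><coordinates>{lon},{lat},0</coordinates></Point>"
        "</Placemark>"
    )

readings_db = [82.1, 84.0, 83.5, 85.2, 81.9]
if is_dangerous(readings_db):
    avg = sum(readings_db) / len(readings_db)
    print(to_kml_placemark("Rue de Rivoli, Paris", 48.8592, 2.3417, avg))
```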

More information:

http://www.newscientist.com/article/mg20427346.900-cellphone-app-to-make-maps-of-noise-pollution.html

18 November 2009

Contact Lenses Virtual Displays

A contact lens that harvests radio waves to power an LED is paving the way for a new kind of display. The lens is a prototype of a device that could display information beamed from a mobile device. Realising that display size is increasingly a constraint in mobile devices, researchers at the University of Washington, in Seattle, hit on the idea of projecting images into the eye from a contact lens. One of the limitations of current head-up displays is their limited field of view. A contact lens display can have a much wider field of view. Researchers hope to create images that effectively float in front of the user perhaps 50 cm to 1 m away. This involves embedding nanoscale and microscale electronic devices in substrates like paper or plastic. Fitting a contact lens with circuitry is challenging. The polymer cannot withstand the temperatures or chemicals used in large-scale microfabrication. So, some components – the power-harvesting circuitry and the micro light-emitting diode – had to be made separately, encased in a biocompatible material and then placed into crevices carved into the lens.

One obvious problem is powering such a device. The circuitry requires 330 microwatts but doesn't need a battery. Instead, a loop antenna picks up power beamed from a nearby radio source. The team has tested the lens by fitting it to a rabbit. Researchers mention that future versions will be able to harvest power from a user's cell phone, perhaps as it beams information to the lens. They will also have more pixels and an array of microlenses to focus the image so that it appears suspended in front of the wearer's eyes. Despite the limited space available, each component can be integrated into the lens without obscuring the wearer's view, the researchers claim. As to what kinds of images can be viewed on this screen, the possibilities seem endless. Examples include subtitles when conversing with a foreign-language speaker, directions in unfamiliar territory and captioned photographs. The lens could also serve as a head-up display for pilots or gamers. Other researchers in Christchurch, New Zealand, mentioned that this new technology could provide a compelling augmented reality experience.
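
To give a sense of scale for the 330 microwatt figure, the sketch below runs the standard Friis free-space link budget for a small antenna near a handheld transmitter. All of the parameters (transmit power, frequency, antenna gains, distance) are assumptions for illustration; the article does not give the team's actual link parameters.

```python
import math

# Friis free-space estimate of RF power received by a small antenna.
# All parameters below are illustrative assumptions, not the Washington
# team's figures; the point is only the order of magnitude involved.

def friis_received_power(pt_w, gt, gr, freq_hz, dist_m):
    lam = 3.0e8 / freq_hz                    # wavelength in metres
    return pt_w * gt * gr * (lam / (4 * math.pi * dist_m)) ** 2

# Assumed: 0.5 W transmitter at 900 MHz, modest antenna gains, 1 m away.
pr = friis_received_power(pt_w=0.5, gt=1.5, gr=0.5, freq_hz=900e6, dist_m=1.0)
print(f"{pr * 1e6:.0f} microwatts")  # a few hundred microwatts at this range
```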

More information:

http://www.newscientist.com/article/dn18146-contact-lenses-to-get-builtin-virtual-graphics.html

16 November 2009

Creating 3D Models with a Webcam

Constructing virtual 3D models usually requires heavy and expensive equipment, or takes lengthy amounts of time. A group of researchers at the Department of Engineering, University of Cambridge, has created a program able to build 3D models of textured objects in real-time, using only a standard computer and webcam. This makes 3D modeling accessible to everybody. During the last few years, many methods have been developed to build a realistic 3D model of a real object. Various kinds of equipment have been used: 2D/3D lasers (in the visible spectrum or other wavelengths), scanners, projectors, cameras, etc. These pieces of equipment are usually expensive, complicated to use or inconvenient, and the model is not built in real-time. The data (for example laser measurements or photos) must first be acquired, before going through the lengthy reconstruction process to form the model.

If the 3D reconstruction is unsatisfactory, the data must be acquired again. The method proposed by the researchers needs only a simple webcam. The object is moved about in front of the webcam, and the software reconstructs the object ‘on-line’ while collecting live video. The system uses points detected on the object to estimate its structure from the motion of the camera or the object, and then computes the Delaunay tetrahedralisation of the points (the extension of the 2D Delaunay triangulation to 3D). The points are connected into a mesh of tetrahedra, within which the surface mesh of the object is embedded. The software then tidies up the final reconstruction by removing invalid tetrahedra with a probabilistic carving algorithm to obtain the surface mesh, and the object texture is applied to the 3D mesh to obtain a realistic model.
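
The geometric core of the method, computing a Delaunay tetrahedralisation of tracked 3D points and then extracting a surface, can be sketched with standard tools as below. The sketch uses SciPy's Delaunay routine and replaces the probabilistic carving step with a trivial 'keep every tetrahedron' rule, so it only recovers the convex hull; it is an illustration of the data structures involved, not the Cambridge system.

```python
from collections import Counter

import numpy as np
from scipy.spatial import Delaunay

# Sketch of the geometric core described above: tetrahedralise tracked 3D
# points, then extract the boundary triangles as a crude surface mesh.
# The probabilistic carving and texturing steps of the real system are
# replaced here by "keep every tetrahedron", purely for illustration.

points = np.random.rand(200, 3)            # stand-in for points from tracking
tetra = Delaunay(points)                   # 3D Delaunay tetrahedralisation

# Count how many tetrahedra share each triangular face; faces used only once
# lie on the boundary of the kept volume and form the surface mesh.
faces = Counter()
for simplex in tetra.simplices:            # each simplex has 4 vertices
    for face in ((0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)):
        faces[tuple(sorted(simplex[list(face)]))] += 1

surface = [f for f, count in faces.items() if count == 1]
print(f"{len(tetra.simplices)} tetrahedra, {len(surface)} surface triangles")
```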

More information:

http://www.eng.cam.ac.uk/news/stories/2009/3D_models/

09 November 2009

Art History in 3D

If you don't have the time to travel to Florence, you can still see Michelangelo's statue of David on the Internet, revolving in true-to-life 3D around its own axis. This is a preview of what scientists are developing in the European joint project 3D-COFORM. The project aims to digitize the heritage in museums and provide a virtual archive for works of art from all over the world. Vases, ancient spears and even complete temples will be reproduced three-dimensionally. In a few years' time museum visitors will be able to revolve Roman amphorae through 360 degrees on screen, or take off on a virtual flight around a temple. The virtual collection will be especially useful to researchers seeking comparable works by the same artist, or related anthropological artifacts otherwise forgotten in some remote archive. The digital archive will be intelligent, searching for and linking objects stored in its database. For instance, a search for Greek vases from the sixth century BC with at least two handles will retrieve corresponding objects from collections all over the world.
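
The kind of search the archive is meant to support (for example, Greek vases from the sixth century BC with at least two handles) amounts to a structured query over object metadata. The sketch below shows how such a query could look over a toy in-memory catalogue; the field names and records are invented for illustration and are not 3D-COFORM's actual schema.

```python
# Toy illustration of a structured search over 3D-object metadata, in the
# spirit of the example query in the text. The schema and records are
# invented for illustration; they are not 3D-COFORM's actual data model.

catalogue = [
    {"id": "V001", "type": "vase", "culture": "Greek",
     "date_bc": 540, "handles": 2, "collection": "Museum A"},
    {"id": "V002", "type": "vase", "culture": "Greek",
     "date_bc": 610, "handles": 3, "collection": "Museum B"},
    {"id": "S001", "type": "statue", "culture": "Roman",
     "date_bc": 50, "handles": 0, "collection": "Museum C"},
]

def greek_vases_sixth_century_bc(min_handles=2):
    return [obj for obj in catalogue
            if obj["type"] == "vase"
            and obj["culture"] == "Greek"
            and 500 <= obj["date_bc"] < 600   # sixth century BC
            and obj["handles"] >= min_handles]

print(greek_vases_sixth_century_bc())   # -> matches V001 only
```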

3D documentation provides a major advance over the current printed catalogs containing pictures of objects, or written descriptions. A set of 3D data presents the object from all angles, providing information of value to conservators, such as the condition of the surface or a particular color. As the statue of David shows, impressive 3D animations of art objects already exist. Researchers are generating 3D models and processing them for the digital archive. They are developing calculation specifications to derive the actual object from the measured data. The software must be able to identify specific structures, such as the arms on a statue or columns on a building, as well as recognizing recurring patterns on vases. A virtual presentation also needs to include a true visual image -- a picture of a temple would not be realistic if the shadows cast by its columns were not properly depicted. The research group in Darmstadt is therefore combining various techniques to simulate light effects.

More information:

http://www.3d-coform.eu/

http://www.sciencedaily.com/releases/2009/11/091104101537.htm

04 November 2009

Muscle-Bound Computer Interface

It's a good time to be communicating with computers. No longer are we constrained by the mouse and keyboard--touch screens and gesture-based controllers are becoming increasingly common. A startup called Emotiv Systems even sells a cap that reads brain activity, allowing the wearer to control a computer game with her thoughts. Now, researchers at Microsoft, the University of Washington in Seattle, and the University of Toronto in Canada have come up with another way to interact with computers: a muscle-controlled interface that allows for hands-free, gestural interaction. A band of electrodes attaches to a person's forearm and reads electrical activity from different arm muscles. These signals are then correlated to specific hand gestures, such as touching a finger and thumb together, or gripping an object tighter than normal. The researchers envision using the technology to change songs in an MP3 player while running or to play a game like Guitar Hero without the usual plastic controller. Muscle-based computer interaction isn't new. In fact, the muscles near an amputated or missing limb are sometimes used to control mechanical prosthetics. But while researchers have explored muscle-computer interaction for non-disabled users before, the approach has had limited practicality.

Inferring gestures reliably from muscle movement is difficult, so such interfaces have often been restricted to sensing a limited range of gestures or movements. The new muscle-sensing project is going after healthy consumers who want richer input modalities. The researchers had to come up with a system that was inexpensive and unobtrusive and that reliably sensed a range of gestures. The group's most recent interface, presented at the User Interface Software and Technology conference earlier this month in Victoria, British Columbia, uses six electromyography (EMG) sensors and two ground electrodes arranged in a ring around a person's upper right forearm for sensing finger movement, and two sensors on the upper left forearm for recognizing hand squeezes. While these sensors are wired and individually placed, their orientation isn't exact--that is, specific muscles aren't targeted. This means that the results should be similar for a thin EMG armband that an untrained person could slip on without assistance. The research builds on previous work that involved a more expensive EMG system to sense finger gestures when a hand is laid on a flat surface. The sensors cannot accurately interpret muscle activity straight away. Software must be trained to associate the electrical signals with different gestures. The researchers used standard machine-learning algorithms, which improve their accuracy over time.
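
The training step described above, mapping windows of EMG signals to gestures with standard machine learning, can be sketched as a conventional feature-extraction-plus-classifier pipeline. The example below uses per-channel RMS amplitude and a support vector machine on synthetic data; the researchers' actual features, sensor layout and algorithms are not detailed in the article, so everything here is an assumption for illustration.

```python
import numpy as np
from sklearn.svm import SVC

# Sketch of a gesture classifier over multi-channel EMG, in the spirit of
# the system described above. The data is synthetic and the feature choice
# (per-channel RMS over a short window) is an assumption for illustration.

N_CHANNELS, WINDOW = 8, 128   # 8 sensors, 128-sample analysis window

def rms_features(window):
    """One RMS value per channel for a (N_CHANNELS, WINDOW) signal block."""
    return np.sqrt(np.mean(window ** 2, axis=1))

def synthetic_gesture(label, rng):
    # Each fake gesture excites a different pair of channels more strongly.
    base = rng.normal(0, 0.05, (N_CHANNELS, WINDOW))
    base[label] += rng.normal(0, 0.5, WINDOW)
    base[(label + 1) % N_CHANNELS] += rng.normal(0, 0.3, WINDOW)
    return base

rng = np.random.default_rng(0)
X, y = [], []
for label in range(4):                      # four pretend gestures
    for _ in range(50):
        X.append(rms_features(synthetic_gesture(label, rng)))
        y.append(label)

clf = SVC(kernel="rbf").fit(X, y)           # standard machine-learning classifier
test = rms_features(synthetic_gesture(2, rng))
print("predicted gesture:", clf.predict([test])[0])
```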

More information:

http://www.technologyreview.com/video/?vid=473

http://www.technologyreview.com/computing/23813/?a=f