30 June 2010

3D Without Glasses

Today's 3D movies are far more spectacular than the first ones screened more than 50 years ago, but watching them--both at the movie theatre and at home--still means donning a pair of dorky, oversized glasses. Now a new type of lens developed by researchers in Microsoft's Applied Sciences Group could help make glasses-free 3D displays more practical. The new lens, which is thinner at the bottom than at the top, steers light to a viewer's eyes by switching light-emitting diodes along its bottom edge on and off. Combined with a backlight, this makes it possible to show different images to different viewers, or to create a stereoscopic (3D) effect by presenting different images to a person's left and right eye. 3D technology has seen a renaissance recently. Thanks to the success of movies like ‘Coraline’, ‘Up’, and ‘Avatar’, Hollywood is spending more money than ever to give audiences a stereoscopic experience. And electronics manufacturers are racing to replicate the 3D theatre experience in the home.
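The steering scheme lends itself to a short illustration. The sketch below simulates time-multiplexed stereo, in which the display alternates left-eye and right-eye frames while lighting the LED bank that aims each frame at the corresponding eye; the bank count, angle range, and function names are all assumptions made for illustration, not Microsoft's actual design.

# Minimal sketch (Python) of time-multiplexed light steering. Assumed:
# 16 LED banks along the lens's bottom edge covering +/-40 degrees.
def led_bank_for_angle(angle_deg, banks=16, half_range=40.0):
    """Pick the LED bank that steers light toward a given viewing angle."""
    fraction = (angle_deg + half_range) / (2 * half_range)
    return min(banks - 1, max(0, int(fraction * banks)))

def show_stereo_frame(frame_index, left_angle, right_angle):
    # Even frames carry the left-eye view, odd frames the right-eye view.
    eye, angle = (("left", left_angle) if frame_index % 2 == 0
                  else ("right", right_angle))
    print(f"frame {frame_index}: LED bank {led_bank_for_angle(angle)} on, "
          f"{eye}-eye image shown")

for i in range(4):  # a viewer sitting slightly left of centre
    show_stereo_frame(i, left_angle=-3.0, right_angle=1.0)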

The market for 3D-capable televisions is expected to grow from 2.5 million sets shipped in 2010 to 27 million in 2013, according to the research firm DisplaySearch. However, the glasses required to watch 3D video are a turnoff for many would-be early adopters. At the Society for Information Display International Symposium in Seattle last month, companies showed off 3D displays that don't require glasses. These sets often use lenticular lenses, which are integrated into the display and project different images in two fixed directions. But a viewer needs to stand in designated zones to experience a 3D effect; otherwise the screen becomes an out-of-focus blur. Microsoft's prototype display can deliver 3D video to two viewers at the same time (one video for each individual eye), regardless of where they are positioned. It can also show ordinary 2D video to up to four people simultaneously (one video for each person). The 3D display uses a camera to track viewers so that it knows where to steer the light.
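Because the prototype tracks viewers with a camera, the steering target can be recomputed as people move. A rough sketch of that loop, assuming the tracker reports each viewer's head position in camera coordinates (the names here are invented, not Microsoft's):

# Hypothetical tracking loop: convert each tracked head position into a
# steering angle so the display can aim a dedicated video stream at it.
import math

def steering_angle(x_metres, z_metres):
    """Angle of a viewer relative to the screen normal."""
    return math.degrees(math.atan2(x_metres, z_metres))

tracked_viewers = [(-0.5, 2.0), (0.4, 1.5)]  # assumed (x, z) positions
for channel, (x, z) in enumerate(tracked_viewers):
    print(f"steer video channel {channel} to {steering_angle(x, z):+.1f} degrees")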

More information:

http://www.technologyreview.com/computing/25524/?a=f

26 June 2010

AR Mobile Teaching

At the University of New Mexico, some students in second-year Spanish classes become detectives. They travel to Los Griegos, an Albuquerque neighbourhood 15 minutes northwest of the campus, on a mission: Clear the names of four families accused of conspiring to murder a local resident. It's a fictional murder mystery, and instead of guns and badges, the students are armed with iPod Touches, provided by the university. When students enter their location into the wireless handheld devices, a clue might turn up: a bloody machete, for example, or a virtual character who may converse with them—in Spanish—about a suspect. But Los Griegos and the language skills needed to navigate the locale are no fiction. By integrating mobile computing and actual surroundings, the educational game, Mentira—Spanish for ‘lie’ and a reference to the claim of conspiracy the students are assigned to debunk—helps take teaching to a new place outside the classroom: augmented reality. Video and computer games are commonly criticized for isolating players from reality, but augmented-reality developers who work in higher education see the technology as a way to accomplish just the opposite. Researchers have developed a software tool called ARIS, or Augmented Reality and Interactive Storytelling.

ARIS lets designers link text, images, video, or audio to a physical location, making the real world into a map of virtual characters and objects that people can navigate with iPhones, iPads, or iPod Touches. The open-source tool, which is the brainchild of a Madison research group that focuses on games and learning, was built with students and educators in mind. It has not yet been released to the public; developers are aiming for a fall rollout. The researchers and educators in this small, emerging field see clear advantages to using real-world sites as the backdrop for educational games. A major goal of Mentira is to motivate students "to get their heads out of the textbook" by showing them that language has a vibrant local context, Ms. Sykes says. By setting the story in a nearby neighbourhood, researchers took advantage of its historic sites and folklore to integrate learning about its history and culture into the game. Teaching with augmented reality is not all fun and games, however: the researchers struggled to find an affordable way to make their game a reality. They chose iPod Touches instead of costlier iPhones. As a result, they had to design a game that would work without GPS navigation and persuade the university to sign a contract for a mobile wireless hotspot.
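ARIS itself is not yet public, so the snippet below only illustrates the general idea of pinning media to places: a table of virtual objects and characters keyed to locations, checked against a position the player supplies (entered by hand in Mentira, since the iPod Touches lack GPS). Every name here is invented for illustration, not part of the actual ARIS data model.

# Illustrative location-triggered content table in the spirit of ARIS.
CLUES = {
    "plaza": {"kind": "item",
              "content": "a bloody machete"},
    "old church": {"kind": "character",
                   "content": "a virtual villager who discusses a suspect in Spanish"},
}

def enter_location(place_name):
    """Return whatever the game has pinned to the named place."""
    clue = CLUES.get(place_name.strip().lower())
    if clue is None:
        return "Nothing of interest here."
    return f"You encounter {clue['content']}."

print(enter_location("Plaza"))  # -> You encounter a bloody machete.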

More information:

http://chronicle.com/article/Augmented-Reality-on/65991/

24 June 2010

Private Mobile Social Network

Researchers at Microsoft have developed mobile social networking software, called Contrail, that lets users share personal information with friends while keeping it hidden from the network itself. When a Contrail user updates his information on the network, by adding a new photo, for example, the image file is sent to a server operating within the network's cloud, just as with a conventional social network. But it is encrypted and appended with a list that specifies which other users are allowed to see the file. When those users' devices check in with the social network, they download the data and decrypt it to reveal the photo. Contrail requires users to opt in if they want to receive information from friends. When a person wants to receive a particular kind of update from a contact, a ‘filter’ is sent to that friend's device.
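In outline, the upload path might look like the sketch below: the cloud stores only ciphertext plus a recipient list, never the photo itself. The toy XOR cipher and all names are stand-ins for illustration; Contrail's real implementation is nothing this simple.

# Contrail-style upload sketch: the server sees who may read the file,
# but not its contents. Toy cipher for illustration only.
def toy_encrypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def upload_photo(cloud_store, photo: bytes, shared_key: bytes, recipients):
    cloud_store.append({
        "recipients": list(recipients),                # readable by the cloud
        "ciphertext": toy_encrypt(photo, shared_key),  # opaque to the cloud
    })

cloud = []
upload_photo(cloud, b"<photo bytes>", b"secret-key", ["mom_device"])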

If, for example, a mother wants to see all the photos tagged with the word ‘family’ by her son, she creates the filter on her phone. The filter is encrypted and sent via the cloud to her son's device. Once decrypted, the filter ensures that every time he shares a photo tagged ‘family’, an encrypted version is sent to the cloud with a header directing it to the cell phone belonging to his mother (as well as to anyone else who has installed a similar filter on his device). Encryption hides the mother's preferences from the cloud, as well as the photos themselves. Each user's device holds a cryptographic key for every friend, which is used to encrypt and decrypt the information they share. Contrail runs on Microsoft's cloud computing service, Windows Azure, and the team has developed three compatible applications running on HTC Windows Mobile cell phones.
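A minimal sketch of that filter mechanism, reusing the toy cipher from the previous sketch; the tag handling, key management, and routing header are all simplified assumptions rather than Contrail's actual protocol.

# Filter matching on the son's device: a shared photo is tested against
# each installed filter, encrypted under the matching friend's key, and
# addressed to that friend's device. Toy code, not real Contrail.
def toy_encrypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

installed_filters = [
    {"owner": "mom_device", "tag": "family", "key": b"mom-key"},
]

def share_photo(cloud_store, photo: bytes, tags):
    for f in installed_filters:
        if f["tag"] in tags:
            cloud_store.append({
                "to": f["owner"],                           # routing header
                "ciphertext": toy_encrypt(photo, f["key"]),
            })

cloud = []
share_photo(cloud, b"<photo bytes>", tags={"family", "holiday"})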

More information:

http://www.technologyreview.com/web/25640/?a=f

23 June 2010

Clouds Add Depth to Landscapes

Clouds are not normally a boon for image-processing algorithms because their shadows can distort objects in a scene, making them difficult for software to recognise. However, researchers at Washington University in St Louis, Missouri, are making shadows work for them, helping them to create a depth map of a scene from a single camera. Depth maps record the geography of a 3D landscape and represent it in 2D for surveillance and atmospheric monitoring. They are usually created using lasers, because adjacent pixels in camera images do not equate to adjacent geographic points: one pixel might fall on a hill in the near distance, while an adjoining one belongs to a more distant landmark.

Enter the clouds - the shadows they cast can hint at real-world geography, researchers say. By comparing a series of images and recording the times at which passing shadows change each pixel's colour, they can estimate the distance between the real-world points that neighbouring pixels represent. If the wind speed is known, the scene can be reconstructed at the correct scale, something that is otherwise very difficult from a single camera viewpoint. Compared with laser-created maps, the average positional error in the cloud-derived map was just 2 per cent. The work is to be presented at the Computer Vision and Pattern Recognition conference in San Francisco.
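The core arithmetic is simple enough to sketch: the lag between a shadow edge darkening one pixel and then another, multiplied by the wind speed, approximates the ground distance between the two imaged points. The threshold and data below are made up, and the actual method to appear at CVPR is considerably more sophisticated.

# Toy version (Python/NumPy) of depth-from-cloud-shadows.
import numpy as np

wind_speed = 5.0  # metres per second, assumed known
# Brightness traces for two pixels over frames captured 1 s apart:
brightness = np.array([
    [1.0, 1.0, 0.3, 0.3, 0.3],   # pixel A: shadow arrives at frame 2
    [1.0, 1.0, 1.0, 1.0, 0.3],   # pixel B: shadow arrives at frame 4
])

def shadow_arrival(trace, threshold=0.5):
    """Index of the first frame in which the pixel is in shadow."""
    return int(np.argmax(trace < threshold))

lag_seconds = shadow_arrival(brightness[1]) - shadow_arrival(brightness[0])
print(f"estimated ground distance: {lag_seconds * wind_speed:.1f} m")  # 10.0 m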

More information:

http://www.newscientist.com/article/mg20627655.500-clouds-add-depth-to-computer-landscapes.html

16 June 2010

Xbox 360 Kinect

Microsoft's Project Natal motion control system now has an official name: Kinect. The new title for the hands-free gaming peripheral was announced at a flashy event in Los Angeles on Sunday night, a few hours before Microsoft's Xbox press briefing kicked off E3 in earnest. There is still no word on a price for the gaming add-on, and - although it's been confirmed that the system will be launched in November - we are still waiting on a definite street date. Kinect uses a camera, microphone and motion sensors to enable people to play games without the need for a controller.

Kinect Sports was among the games shown off, a sports title with six different activities: boxing, bowling, beach volleyball, track and field, football and table tennis. Xbox owners can also look forward to playing racing game Joyride, pet training game Kinectimals, dancing title Dance Central and outdoor pursuits game Kinect Adventures. To play any of the titles, the gamer must perform the real-life action. For instance, driving a car in Joyride requires you to move your hands as if you were holding a steering wheel. To play football in Kinect Sports, you must kick the imaginary ball.

More information:

http://www.xbox.com/en-US/kinect

http://tech.uk.msn.com/gaming/articles.aspx?cp-documentid=153764468

15 June 2010

Effects of Multi-Touch Devices

The evolution of computer systems has freed us from keyboards and is now focusing on multi-touch systems, those finger-flicking, intuitive and easy-to-learn computer manipulations that speed the use of any electronic device from cell phones to iPads. But little is known about the long-term stresses these systems place on our bodies. Now, a team of researchers at Arizona State University (ASU) is engaged in a project to determine the long-term musculoskeletal stresses that multi-touch devices place on us. The team, which includes computer interaction researchers, kinesiologists and ergonomics experts from ASU and Harvard University, is also developing a tool kit that designers could use when they refine new multi-touch systems.

When we use our iPhone or iPad, we don’t naturally think that it might lead to a musculoskeletal disorder, the researchers note. But the fact is it could, and we don’t even know it. We are all part of a large experiment. Multi-touch systems might be great for the usability of a device, but we just don’t know what they do to our musculoskeletal system. As we move towards a world where human-computer interaction is based on body movements that are not well documented or studied, we face a serious risk of creating technology and systems that may lead to musculoskeletal disorders (MSDs). Many of today’s multi-touch systems give no consideration to eliminating gestures that are known to lead to MSDs, or gestures that are symptomatic of a patient population.

More information:

http://asunews.asu.edu/20100608_multitouchdevicestudy

09 June 2010

How the Brain Recognizes Objects

Researchers at MIT’s McGovern Institute for Brain Research have developed a new mathematical model to describe how the human brain visually identifies objects. The model accurately predicts human performance on certain visual-perception tasks, which suggests that it’s a good indication of what actually happens in the brain, and it could also help improve computer object-recognition systems. The model was designed to reflect neurological evidence that in the primate brain, object identification — deciding what an object is — and object location — deciding where it is — are handled separately. Although what and where are processed in two separate parts of the brain, they are integrated during perception to analyze the image, researchers say. The mechanism of integration, the researchers argue, is attention. According to their model, when the brain is confronted by a scene containing a number of different objects, it can’t keep track of all of them at once. So instead it creates a rough map of the scene that simply identifies some regions as being more visually interesting than others. If it’s then called upon to determine whether the scene contains an object of a particular type, it begins by searching — turning its attention toward — the regions of greatest interest.
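The attention step can be caricatured in a few lines: score regions of a scene for visual interest, then examine them from most to least interesting until the target turns up. This is an illustration of the idea only, with invented regions and scores, not the MIT group's model.

# Caricature (Python) of attention-guided object search.
regions = [
    {"name": "sky",      "interest": 0.1, "objects": []},
    {"name": "road",     "interest": 0.7, "objects": ["car", "car"]},
    {"name": "sidewalk", "interest": 0.5, "objects": ["pedestrian"]},
]

def find_object(target):
    # Attend to regions in decreasing order of interest.
    for region in sorted(regions, key=lambda r: -r["interest"]):
        print(f"attending to the {region['name']}...")
        if target in region["objects"]:
            return f"found a {target} in the {region['name']}"
    return f"no {target} in the scene"

print(find_object("pedestrian"))  # checks the road first, then the sidewalk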

To test the model, the researchers compared its predictions with the behaviour of human subjects. The subjects were asked first to simply regard a street scene depicted on a computer screen, then to count the cars in the scene, and then to count the pedestrians, while an eye-tracking system recorded their eye movements. The software predicted with great accuracy which regions of the image the subjects would attend to during each task. The software’s analysis of an image begins with the identification of interesting features — rudimentary shapes common to a wide variety of images. It then creates a map that depicts which features are found in which parts of the image. But thereafter, shape information and location information are processed separately, as they are in the brain. The software creates a list of all the interesting features in the feature map, and from that, it creates another list, of all the objects that contain those features. But it doesn’t record any information about where or how frequently the features occur. At the same time, it creates a spatial map of the image that indicates where interesting features are to be found, but not what sorts of features they are. It does, however, interpret the ‘interestingness’ of the features probabilistically. If a feature occurs more than once, its interestingness is spread out across all the locations at which it occurs. If another feature occurs at only one location, its interestingness is concentrated at that one location.
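The bookkeeping described above can be sketched as two separate structures, one for 'what' and one for 'where', with each feature's interestingness divided evenly across the locations where it occurs. The feature names and positions below are invented for illustration.

# Separate 'what' and 'where' pathways, following the description above.
from collections import defaultdict

feature_map = {                 # invented: feature -> where it occurs
    "wheel":   [(2, 3), (7, 3)],
    "doorway": [(5, 1)],
}

what_list = set(feature_map)    # features present, no locations or counts
where_map = defaultdict(float)  # location -> interestingness, no identities
for feature, locations in feature_map.items():
    for loc in locations:
        where_map[loc] += 1.0 / len(locations)  # spread interestingness

print(sorted(what_list))        # ['doorway', 'wheel']
print(dict(where_map))          # doorway's 1.0 concentrated; wheel's split 0.5/0.5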

More information:

http://web.mit.edu/newsoffice/2010/people-images-0607.html

05 June 2010

Virtual Lab for Infectious Diseases

Doctors around the world will soon have a powerful new tool at their disposal in the fight against HIV and other infectious diseases: a virtual laboratory that will help them match drugs to patients and make treatments more effective. The ViroLab Virtual Laboratory, the core components of which are scheduled to be available online in 2010, uses the latest advances in machine learning, data mining, grid computing, modelling and simulation to turn the content of millions of scientific journal articles, disparate databases and patients’ own medical histories into knowledge that can effectively be used to treat disease. Developed by a multidisciplinary team of European researchers working in the EU-funded ViroLab project, the virtual laboratory is already being used in seven hospitals to provide personalised treatment to HIV patients and is eliciting widespread interest as a potent decision-support tool for doctors. Accessed through a simple-to-use web interface, the ViroLab Virtual Laboratory uses a combination of technologies and methods to help doctors make decisions about the best medication to give each individual patient.

The system continuously crawls grid-connected databases of virological, immunological, clinical, genetic and experimental data, extracts information from scientific journal articles (such as the results of drug resistance experiments) and draws on other sources of information. This data is then processed to give it machine-readable semantic meaning and analysed to produce models of the likely effects of different drugs on a given patient. Each medication is ranked according to its predicted effectiveness in light of the patient’s personal medical history. Crucially, the system incorporates the concept of provenance, ensuring that every step a doctor takes in creating a workflow to find the right drug for a patient and every step the system takes to provide a recommendation is recorded. Because of the distributed nature of the virtual laboratory, cases can be compared to those of other patients living a few streets or thousands of kilometres away. And the system can even generate models simulating the likely spread and progression of different mutations of viruses based not only on medical data but also on sociological information.
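At its core, the decision support comes down to ranking candidate drugs by predicted effectiveness for one patient. The toy scoring below, which counts how many of a patient's viral mutations confer resistance to each drug, is a placeholder for illustration; ViroLab's real models draw on resistance experiments, literature mining and grid-based simulation.

# Toy stand-in (Python) for the drug-ranking step.
resistance = {   # illustrative table: drug -> mutations reducing its effect
    "drug_A": {"M184V"},
    "drug_B": {"K103N", "M184V"},
    "drug_C": set(),
}

def rank_drugs(patient_mutations):
    """Drugs with the fewest matching resistance mutations rank first."""
    return sorted(resistance,
                  key=lambda d: len(resistance[d] & patient_mutations))

print(rank_drugs({"M184V"}))   # -> ['drug_C', 'drug_A', 'drug_B']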

More information:

http://cordis.europa.eu/ictresults/index.cfm?section=news&tpl=article&BrowsingType=Features&ID=91302