30 April 2010

A Flexible Color Display

Researchers at HP Labs are testing a flexible, full-color display that saves power by reflecting ambient light instead of using a backlight. The prototype display's pixels are controlled by fast-switching silicon transistors printed on top of plastic. If the technology can be commercialized, the display will compete with liquid crystal screens as well as other low-power color flexible displays in the works. HP is collaborating with Phicot, a subsidiary of Ames, IA-based Powerfilm, which prints high-performance transistors on plastic. HP plans to target both the e-reader and tablet PC markets. The e-reader screen market is dominated by E-Ink, a company based in Cambridge, MA, that makes black-and-white reflective displays incorporating tiny microcapsules.

E-Ink's screens have the look of paper, do not need a backlight, and draw no power once the pixels have switched between black and white. But they are too slow to show video and, as yet, are available only in black and white. In contrast, Apple's iPad uses a more conventional liquid crystal display. This means it produces vibrant color, but it is also expensive, power-hungry, and vulnerable to glare. The display is also relatively fragile because it is built on top of glass. Many manufacturers believe there is still a market for low-power reflective displays, and they are working to develop robust reflective displays built on plastic that consume less power without giving up the functionality of LCDs.

More information:


25 April 2010

Handheld Projector Image Interaction

PowerPoint presentations are about to get a sprinkle of fairy dust. A hand-held projector can now create virtual characters and objects that interact with the real world. The device - called Twinkle - projects animated graphics that respond to patterns, shapes or colours on a surface, or even 3D objects such as your hand. It uses a camera to track relevant elements - say, a line drawn on a wall - in the scene illuminated by the projector, while an accelerometer senses the projector's rapid motion and position.

Software then matches up the pixels detected by the camera with the animation, correcting for the angle of projection and the distance from the surface. The device could eventually fit inside a cellphone, say the researchers at the University of Tokyo. A prototype, which projects a cartoon fairy that bounces off or runs along paintings on a wall or even the surface of a bottle, was presented at the recent Virtual Reality 2010 meeting in Waltham, Massachusetts.
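The correction the software performs can be sketched as a planar homography: once four points seen by the camera are matched to the projector's framebuffer, the same mapping warps every animated pixel to compensate for the projection angle and distance. A minimal sketch in Python with NumPy - the point values and the direct-linear-transform approach are illustrative assumptions, not details of the Twinkle system:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography mapping src points to dst points
    via the direct linear transform (four or more correspondences)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, i.e. the last row of Vt.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Apply a homography to a single 2D point."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Hypothetical calibration: the camera sees the projection surface's
# corners at these pixels, which correspond to the projector's
# own 640x480 framebuffer corners.
camera_corners = [(102, 98), (518, 110), (530, 470), (90, 455)]
projector_corners = [(0, 0), (640, 0), (640, 480), (0, 480)]

H = fit_homography(camera_corners, projector_corners)
# A feature tracked at camera pixel (102, 98) maps back to projector (0, 0).
print(project(H, (102, 98)))
```

With the mapping in hand, the fairy's pixels can be pre-warped so they land undistorted on the wall however the projector is held.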

More information:


21 April 2010

No Gain From Brain Training

The largest trial to date of 'brain-training' computer games suggests that people who use the software to boost their mental skills are likely to be disappointed. The study, a collaboration between British researchers and the BBC Lab UK website, recruited viewers of the BBC science programme Bang Goes the Theory to practise a series of online tasks for a minimum of ten minutes a day, three times a week, for six weeks. In one group, the tasks focused on reasoning, planning and problem-solving abilities — skills correlated with general intelligence. A second group was trained on mental functions targeted by commercial brain-training programs - short-term memory, attention, visuospatial abilities and maths. A third group, the control subjects, simply used the Internet to find answers to obscure questions.

A total of 11,430 volunteers aged 18 to 60 completed the study, and although they improved on the tasks they practised, the researchers found that none of the groups boosted their performance on tests measuring general cognitive abilities such as memory, reasoning and learning. There were no transfer effects from the training tasks to more general tests of cognition. The researchers conclude that the expectation that practising a broad range of cognitive tasks makes people smarter is unsupported. Most commercial programs are aimed at adults well over 60 who fear that their memory and mental sharpness are slipping. An older test group would have a lower mean starting score and more variability in performance, leaving more room for training to cause meaningful improvement.
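The pattern the study reports - clear gains on the practised tasks, but none on untrained benchmarks - can be illustrated with a toy calculation. All numbers below are invented for illustration; they are not the study's data:

```python
# Hypothetical mean scores (before, after) for each group, on the
# tasks they trained on and on an untrained general-cognition benchmark.
scores = {
    "reasoning group":   {"trained": (50, 65), "benchmark": (100, 101)},
    "brain-train group": {"trained": (50, 68), "benchmark": (100, 100)},
    "control group":     {"trained": (50, 52), "benchmark": (100, 101)},
}

def gain(pair):
    """Improvement from the before-score to the after-score."""
    before, after = pair
    return after - before

for group, tests in scores.items():
    print(group, "practice gain:", gain(tests["trained"]),
          "transfer gain:", gain(tests["benchmark"]))
```

The trained groups show large practice gains but transfer gains no better than the control group's - the signature of task-specific learning without any general boost.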

More information:


20 April 2010

Multitoe Touch Interface

Researchers from Germany's Hasso Plattner Institute previewed a new touch interface called Multitoe, which uses feet instead of fingers or hands, at the Computer Human Interaction conference in Atlanta Sunday. The researchers built a floor based on the same concept as multitouch tables. The system sits flush with the floor, and when someone stands on it, the floor lights up. The system can store user profiles based on each user's shoe sole. Each sole is slightly different - even different sizes of the same shoe model look different to the system - and the software can tell them apart. Once the profiles are stored, the interface can identify users. To enable direct manipulation on floors, the group uses a technique called frustrated total internal reflection (FTIR) with a high-resolution camera. The concept is complex, but in essence light is injected from below into the pane of glass on which a user stands.

With pressure sensors and gait detection, the software can recognize when someone is merely walking and ignore that input, focusing only on users who want to interact with the system. The current prototype lets users draw, control a game and type on a keyboard. Even though each "key" of the keyboard is smaller than a foot, the group found that error rates per letter were relatively low when large (5.3cm by 5.8cm) or medium (3.1cm by 3.5cm) keys were used: 3 percent and 9.5 percent, respectively, compared to 28.6 percent for small (1.1cm by 1.7cm) keys. The prototype Multitoe system measures less than one square meter, but the team plans to install a much larger unit - three meters by 2.15 meters, weighing 1.2 metric tons - when a new research building opens at the Hasso Plattner Institute in July.
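Identifying a user from a sole print can be sketched as nearest-neighbour matching against enrolled profiles. The sketch below is a guess at the general idea, not the Institute's actual algorithm; the 8x8 binary "prints" and user names are invented:

```python
import numpy as np

def make_profile(seed):
    """A hypothetical 8x8 binary 'sole print' as the FTIR camera
    might capture it (random stand-in for a real enrolment scan)."""
    rng = np.random.default_rng(seed)
    return (rng.random((8, 8)) > 0.5).astype(float)

profiles = {"alice": make_profile(1), "bob": make_profile(2)}

def identify(observed, profiles):
    """Return the enrolled user whose stored sole print is closest
    (smallest pixel-wise distance) to the observed one."""
    return min(profiles, key=lambda name: np.abs(profiles[name] - observed).sum())

# A slightly noisy observation of Alice's sole still matches her profile.
noisy = profiles["alice"].copy()
noisy[0, 0] = 1.0 - noisy[0, 0]   # flip one pixel
print(identify(noisy, profiles))  # → alice
```

A real system would extract more robust features than raw pixels, but the principle - each sole is a distinctive template the camera can match - is the same.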

More information:


17 April 2010

Augmented Reality City Visits

Using a combination of personalised location-based services and augmented reality, in which multimedia content is superimposed and merged with real-time images, a team of European researchers and city authorities has created a device to bring a little movie magic to city visits by tourists, cinema lovers, inquisitive local residents and film professionals. The device, which resembles a pair of binoculars with an integrated camera and LCD screen, was tested in San Sebastián, Spain, and Venice, Italy, and is continuing to be developed with a view to creating a commercial product. It uses a hybrid tracking system to provide location-based information, and cities’ wireless communications networks to download and share multimedia content. Though smart phones incorporating features such as location-awareness and augmented reality applications have come onto the market in the three years since the CINeSPACE project began, researchers note that none offer the same immersive experience provided by a dedicated platform and device. Unlike staring at the small screen of a smart phone, the CINeSPACE device is held up to the eye like a pair of binoculars, allowing users to see multimedia content superimposed on a city scene, be it a popular shopping street or an historical square.

Users are guided around a city by an intelligent sensor-fusion system incorporating GPS, WLAN tags, inertia cubes and marker-less optical tracking. Personalised location-aware services tell them where to go and where to stand for the best augmented reality experience. Maps and other multimedia content are provided via a 4.5-inch augmented reality touch panel on the binocular device, with user preferences taken into account when selecting points of interest and content. The project partners say the device could be rented out by local tourism offices. Content may consist of video, photos or audio recordings, stored on a central server of the municipality and downloaded as required, and can come from a variety of sources, including the users themselves. The CINeSPACE system was tested in San Sebastián and Venice last summer, with trial users rating the overall concept of the system and the quality of the augmented reality content highly. Further work, aimed at addressing user feedback on the device and interface, has since led to a third prototype being developed by German micro-electro-optical device manufacturer and project partner Trivisio, which is planning to commercialise it.
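Fusing several position sources, as described above, is often done with a complementary blend: trust the smooth inertial estimate from moment to moment, and let the drift-free GPS fix pull it back over time. A minimal sketch - the weighting and coordinates are illustrative assumptions, not CINeSPACE's published fusion method:

```python
def fuse(gps, inertial, alpha=0.9):
    """Complementary blend of two 2D position estimates: a high alpha
    favours the smooth inertial estimate in the short term, while the
    GPS term continually corrects its slow drift."""
    return tuple(alpha * i + (1 - alpha) * g for i, g in zip(inertial, gps))

# Inertial dead reckoning has drifted 2 m east; the GPS fix pulls it back.
gps_fix = (100.0, 50.0)
inertial_est = (102.0, 50.0)
print(fuse(gps_fix, inertial_est))  # ≈ (101.8, 50.0)
```

Repeating this blend every update step keeps the estimate responsive to fast motion while bounding accumulated drift, which is what lets the device keep overlays locked onto the scene.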

More information:


10 April 2010

Machine Consciousness

Challenges don't get much bigger than trying to create artificial consciousness. Some doubt whether it can be done - or whether it ever should be. Bolder researchers are not put off, though: they consider machine consciousness a grand challenge, like putting a man on the moon. One landmark is the recently developed ‘ConsScale’, created by researchers at the University of Madrid in Spain to compare the intelligence of various software agents - and biological ones too. Another is IDA, a software agent that assigns sailors in the US Navy to new jobs when they finish a tour of duty, juggling naval policies, job requirements, changing costs and sailors' needs. Like people, IDA has ‘conscious’ and ‘unconscious’ levels of processing. At the unconscious level she deploys software agents to gather data and process information. These agents compete to enter IDA's ‘conscious’ workspace, where they interact with each other and decisions get made. The updated Learning IDA, or LIDA, was completed this year. She learns from what reaches her consciousness and uses this to guide future decisions. LIDA also has the benefit of ‘emotions’ - high-level goals that guide her decision-making. Another advance emerged from designing robots able to maintain their function after being damaged.
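The competition to enter the 'conscious' workspace resembles a global-workspace architecture, which can be sketched in a few lines. The class and method names here are illustrative, not IDA's real interface:

```python
class Workspace:
    """Toy global workspace: agents post 'coalitions' with an
    activation level; each cycle the strongest one wins and its
    content is broadcast to every registered module."""

    def __init__(self):
        self.coalitions = []   # (activation, content) posted this cycle
        self.listeners = []    # modules that receive the broadcast

    def post(self, activation, content):
        self.coalitions.append((activation, content))

    def cycle(self):
        """Pick the most active coalition, broadcast it, and clear."""
        if not self.coalitions:
            return None
        activation, content = max(self.coalitions)
        self.coalitions.clear()
        for listener in self.listeners:
            listener(content)
        return content

ws = Workspace()
received = []
ws.listeners.append(received.append)

# Two unconscious-level agents compete; the urgent one reaches
# "consciousness" and is broadcast system-wide.
ws.post(0.3, "routine billet update")
ws.post(0.9, "sailor rotation deadline")
print(ws.cycle())  # → sailor rotation deadline
```

In LIDA the broadcast also feeds learning, so what wins the competition today biases which coalitions form tomorrow.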

In 2006, researchers at the University of Vermont in Burlington designed a walking robot with a continuously updated internal model of itself. If the robot is damaged, this self-knowledge allows it to devise an alternative gait using its remaining abilities. Having an internal ‘imagined’ model of ourselves is considered a key part of human sentience, so this takes the robot a step closer to self-awareness. Along with an internal model, the robot developed by researchers at the University of Sussex, UK, is also anatomically human-like; the idea is that a robot with a body very close to a human's will develop cognition closer to the human variety. None of these approaches solves what many consider to be the ‘hard problem’ of consciousness: subjective awareness. No one yet knows how to design software for that. But as machines grow in sophistication, the hard problem may simply evaporate - either because awareness emerges spontaneously, or because we will simply assume it has emerged without knowing for sure. After all, when it comes to other humans, we can only assume they have subjective awareness too: we have no way of proving we are not the only self-aware individual in a world of unaware ‘zombies’.

More information:


08 April 2010

Haptics Model Industrial Designs

Industrial design modelling, used to make prototypes of home appliances or mock-ups of car parts, could soon make the leap from the world of plaster, plastic and sticky tape into the digital domain thanks to an augmented reality design system developed in Europe. The system, developed by a team of researchers from six EU countries, merges touch-sensitive haptic technology with 3D digital modelling and computer-aided design (CAD) to allow professional designers to feel and shape their creations physically and virtually. Implemented commercially, the system promises to save companies time and money, raise designers’ productivity and improve the quality of new products. Though designers use computer programs to create mathematically precise models of products, they still need to be able to see and handle the model physically. Until now, the only way they have been able to do that is to turn to a model-maker to create a real, physical sample. It’s a labour-intensive, time-consuming and costly process. Haptic technology, which uses mechanics and/or special materials to transmit and receive information through the sense of touch, offers a practical solution, providing many of the benefits of physical models with none of the drawbacks.

However, haptics is far from a mature technology, and this project was one of the first to build a haptic system for industrial designers. The multimodal and multisensory SATIN system consists of two FCS-HapticMASTER devices, in essence robotic arms more commonly used for remote welding or dental surgery, which position and rotate a robotic spline, an electronic version of the flexible strip of material, typically wood or metal, long used by designers to draw curves. Fitted with actuators and sensors, the spline automatically twists and bends to the shape of a digital representation of the product uploaded by the designer into the system. Standing in front of a workstation and wearing 3D glasses, the designer views, through a set of mirrors, a virtual 3D model of the product superimposed where the spline actually is. By pressing the centre or pushing or pulling the ends of the robotic spline with their hands, the designers can reshape and reform the 3D model. Models can be saved and compared, and any changes made much more quickly and simply than using traditional modelling methods. Additional information about the model that cannot be perceived tactilely on the spline, such as discontinuities of a curve or inflection points, is transmitted through audio signals as the designer runs a finger along it.
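Flagging inflection points on the spline, as the audio cues do, amounts to finding sign changes in the curve's second derivative. A sketch on a uniformly sampled curve - the sampling and the threshold-free sign test are simplifying assumptions, not SATIN's actual signal processing:

```python
def inflection_indices(ys, dx=1.0):
    """Indices where the discrete second derivative of a uniformly
    sampled curve changes sign, i.e. candidate inflection points
    that a system like SATIN could flag with an audio cue."""
    second = [(ys[i - 1] - 2 * ys[i] + ys[i + 1]) / dx**2
              for i in range(1, len(ys) - 1)]
    return [i + 1 for i in range(len(second) - 1)
            if second[i] * second[i + 1] < 0]

# A cubic profile, concave up then concave down, sampled at step 0.5.
xs = (-1.75, -1.25, -0.75, -0.25, 0.25, 0.75, 1.25, 1.75)
samples = [x**3 - 3 * x for x in xs]
print(inflection_indices(samples, dx=0.5))  # → [3]
```

The returned index sits next to x = 0, where the cubic's curvature flips - the point where the designer's finger would trigger the audio signal.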

More information:


06 April 2010

Virtual Social Gathering

By marrying state-of-the-art video and audio communications technology with digital media, interactive devices and ambient intelligence, a team of European researchers hope to give people of all ages the opportunity to get together, play games, share experiences and generally communicate, interact and have fun even if they are thousands of kilometres apart. Their goal is to bring down the barriers - technological and social - between people. The solitary pull of personal media, coupled with people moving and travelling more frequently for work and study, has led to families and friends spending less time together. Even in the same home, many people now tend to entertain and educate themselves alone, whether it is the teenager playing computer games in her room, the father listening to music on his MP3 player in the lounge or the mother studying on her laptop in the kitchen. Technology has encouraged this isolation, but advances in that same technology could now reverse it. Working on the EU-funded TA2 (Together anywhere, together anytime) project, a team of researchers from seven European countries are aiming to turn the tables on technology by simply and affordably bringing telepresence into ordinary households.

Their vision is of groups of friends and family members seeing each other on their TVs, hearing each other through their stereo systems, sharing photos and videos and playing games almost as naturally as if they were in the same room. To make that possible, the TA2 researchers are developing the components necessary to build an affordable, easy-to-install in-home telepresence system. The components can then be used to build complete telepresence systems without the need for special rooms or big screens to bring people together virtually. A television set, sound system, cameras and microphones placed in a living room suffice to create a sufficiently interactive and immersive experience, while state-of-the-art software that is transparent to the end user manages the communications backbone. Children and the elderly, who often find themselves more isolated than other social groups in the modern world, stand to benefit particularly from the technology. One scenario envisages a grandparent and grandchild playing a picture-matching game called pairs, in which old photos trigger conversations and pass stories down through the generations.

More information: