30 May 2009

Really Virtual Reality

Far from being geeky and exotic, virtual reality could be the key to a new range of innovative products. European researchers and industrialists have come together to build a world-leading community ready to exploit that promise. Made famous by the ‘holodeck’ in Star Trek: The Next Generation, virtual reality (VR) has long had the reputation of being slightly frivolous. Yet Europe’s VR industry is emerging as a world leader thanks to new efforts to coordinate developments on a continental scale.

In VR, the user can enter a virtual world and interact with it as if it were real. In the simpler VR systems, the user views a virtual scene on a normal computer screen. This is the method used by many computer games and the well-known online simulation Second Life. In more sophisticated ‘fully immersive’ systems, the user can move through a surrounding virtual environment, though not yet as realistically as portrayed in Star Trek. VR is already in use in medicine, education, training and the energy, aeronautics and car industries, but until the last few years there was little sense of cohesion amongst those working in the field.

That began to change with INTUITION, an EU-funded Network of Excellence set up in 2004 to pull together Europe’s fragmented efforts in VR. Over the previous ten years, a string of new developments had made the wide use of such technologies more realistic and cost-effective. As well as more than 60 formal partners, INTUITION attracted a further 80 associated organisations. Practical services included an online knowledge base in VR, a ‘virtual lab’ where partners could use one another’s infrastructure, and an employment exchange and mobility scheme. All these things helped to build cooperation and a sense of community. An annual workshop soon grew into a major conference and has now become one of the world’s biggest trade exhibitions for the VR industry.

One important application is industrial prototyping. By building virtual prototypes rather than physical ones, the time needed to develop and commercialise a product can be greatly reduced, along with the costs. Links with the European Space Agency have led to projects on prototyping, astronaut training and remote maintenance. A remarkably simple project, developed by INRIA, showed how an illusion of texture in web pages could be created without any special equipment.

More information:

http://cordis.europa.eu/ictresults/index.cfm?section=news&tpl=article&BrowsingType=Features&ID=90613

25 May 2009

Virtual Karnak

For the past two years, a team of UCLA Egyptologists, digital modelers, web designers, staff and students has been building a three-dimensional virtual-reality model of the ancient Egyptian religious site known as Karnak, one of the largest temple complexes ever constructed. The result is Digital Karnak, a high-tech model that runs in real time and allows users to navigate 2,000 years of history at the popular ancient Egyptian tourist site near modern-day Luxor, where generations of pharaohs constructed temples, chapels, obelisks, sphinxes, shrines and other sacred structures beginning in the 20th century B.C. Developed by UCLA's Experiential Technologies Center, the Karnak model and a host of additional digital resources are now available for educators, students, scholars and the public to explore for free. The website features videos from the 3D model, instructional resources for educators, a Google Earth version of the model and pages detailing the chronology and construction of individual structures at the temple complex.

The collective resources offer a window onto the incredibly rich architectural, religious, economic, social and political history of ancient Egypt. In recent years, scientists, historians and archaeologists around the world have embraced the 3D modeling of cultural heritage sites. Information technology has permitted them to recreate buildings and monuments that no longer exist or to digitally restore sites that have been damaged by the passage of time. The results can be used both in research, to test new theories, and in teaching, to take students on virtual tours of historical sites they are studying. The Experiential Technologies Center (ETC) at UCLA, which promotes the critical incorporation of new technologies into research and teaching, has been a leader in this cutting-edge movement, having digitally reconstructed a wide variety of sites of historical and cultural importance in Europe, the Middle East, South America and the Caribbean.

More information:

http://dlib.etc.ucla.edu/projects/Karnak

http://www.sciencedaily.com/releases/2009/04/090429172224.htm

22 May 2009

Wearable Sensors Watch Workers

Office workers who make time to chat face to face with colleagues may be far more productive than those who rely on e-mail, the phone, or Facebook, suggests a study carried out by researchers at MIT and New York University. The researchers outfitted workers in a Rhode Island call center with a wearable sensor pack that records details of social interactions. They discovered that those employees who had in-person conversations with coworkers throughout the day also tended to be more productive. The results aren't yet published, but they support research published last December by the same team. This study showed that employees at an IT company who completed tasks within a tight-knit group that communicated face to face were about 30 percent more productive than those who did not communicate in a face-to-face network. Many managers probably suspect a link between personal communication and productivity. Conventional wisdom suggests that face-to-face conversations are a useful way to create and maintain strong social networks, which could help workers solve complex customer problems or complete more calls at the center.

Researchers used a sociometer, a device about the size of a deck of cards, which participants wear around their necks as they would an identification badge. Each sociometer contains an accelerometer to measure their movement; a microphone that picks up their speech characteristics, such as intonation and cadence; a Bluetooth radio to detect other people wearing sociometers nearby; and an infrared sensor that can detect face-to-face interactions. Worn all day, the sociometers log workers' activity and conversations. The data collected by each sociometer can, for instance, reveal how central a person is to a social network and how cohesive the network is overall. A more cohesive network is one in which all people talk to each other, thereby forming a closed loop. This may be an important measure of workplace social dynamics: workers in the most cohesive networks were about 30 percent more productive than those who weren't in such networks, according to the call-center study. The researchers chose a call center for their research because productivity is constantly monitored and recorded.
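The cohesion measure described above can be sketched in a few lines of code. This is purely an illustrative computation, not the researchers' actual analysis: the badge logs and names below are invented, and real sociometer data would be far richer.

```python
from itertools import combinations

def cohesion(people, interactions):
    """Network density: the fraction of possible pairs that actually
    talked face to face. A value of 1.0 means everyone talked to
    everyone, i.e. a fully 'closed loop'."""
    possible = len(list(combinations(people, 2)))
    observed = {frozenset(pair) for pair in interactions
                if len(set(pair)) == 2}       # ignore self-loops
    return len(observed) / possible if possible else 0.0

def centrality(person, interactions):
    """Degree centrality: how many distinct partners a person had
    face-to-face contact with."""
    return len({q for p, q in interactions if p == person} |
               {p for p, q in interactions if q == person})

# Hypothetical badge logs: each tuple is one detected conversation.
log = [("ana", "bo"), ("bo", "cy"), ("ana", "cy"), ("ana", "dee")]
team = ["ana", "bo", "cy", "dee"]
```

With these toy logs, four of the six possible pairs talked, so the team's cohesion is about 0.67, and "ana" is the most central member with three distinct conversation partners.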

More information:

http://www.technologyreview.com/communications/22642/page1/

21 May 2009

3D for Mobile Phones

Three-dimensional viewing has not yet made it in a big way onto our television and cinema screens. According to European researchers, the story of 3D TV is set to be quite different with mobile devices, as the right standards and technology fall into place. Simulating the third dimension is something of a Holy Grail for cinema and television. The key advantage of 3D film over the conventional two dimensions is the illusion of depth and the sense of ‘body’ the viewer experiences – as if the action is leaping out of the screen rather than occurring within it. Despite the images it evokes of high-tech wizardry, rudimentary 3D technologies have been around practically since the dawn of filmmaking. The mobile market has always been much more dynamic and receptive to new technologies than the television market, as the whole idea of mobility is based on dynamism. Viewing conditions, and hence technical requirements, for mobile devices are not as exacting as they are for cinema, which targets a mass audience who expect a thrilling experience, and television, which needs to be of ‘home entertainment’ quality. In mobile 3D technology, the viewing mode is personal, the required display size is small and the user is expected to adjust the display position for the best viewing experience.

The story of 3D television for mobile phones has been punctuated by stops and starts. As early as 2003, Sharp launched a 3D mobile phone in Japan; Korea’s SK Telecom launched a 3D phone, made by Samsung, in 2007; and Japan’s Hitachi launched one in 2009. But the big challenges have been the paucity of content and the difficulty of finding a profitable business model. Apple’s iPhone also supports 3D content, though it can currently only be viewed with special glasses. Mobile3DTV is developing the core elements of the next generation of three-dimensional television for mobile devices. Ideally, the format should be adopted by all industrial players, so the project decided to build its system around the EU standard known as Digital Video Broadcasting – Handheld (DVB-H). Mobile3DTV is employing so-called auto-stereoscopic displays, which produce 3D images that do not require those awkward glasses – good news for people who want to be incognito about their mobile viewing. Auto-stereoscopic displays use additional optical elements aligned on the surface of an LCD to ensure that the observer sees a different image with each eye. As mobile devices are normally watched by a single observer, two independent views are sufficient for satisfactory 3D perception.
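The two-view principle can be illustrated with a toy column-interleaving routine – the kind of spatial multiplexing that a parallax-barrier or lenticular layer performs optically, steering alternate pixel columns to alternate eyes. This is a schematic sketch, not any part of the Mobile3DTV pipeline:

```python
def interleave(left, right):
    """Column-interleave two views for a two-view auto-stereoscopic
    display: even pixel columns come from the left-eye image, odd
    columns from the right-eye image. Images are lists of rows
    (lists of pixel values) with identical dimensions."""
    out = []
    for lrow, rrow in zip(left, right):
        out.append([lrow[x] if x % 2 == 0 else rrow[x]
                    for x in range(len(lrow))])
    return out

# Tiny 1x4 example: 'L'/'R' mark which view each column shows.
left  = [["L", "L", "L", "L"]]
right = [["R", "R", "R", "R"]]
```

The barrier in front of the panel then hides the 'R' columns from the left eye and the 'L' columns from the right eye, so each eye reconstructs its own full view at half the horizontal resolution.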

More information:

http://sp.cs.tut.fi/mobile3dtv/

http://cordis.europa.eu/ictresults/index.cfm?section=news&tpl=article&ID=90580

18 May 2009

Stretchable Displays

Researchers at the University of Tokyo have moved a step closer to displays and simple computers that you can wear on your sleeve or wrap around your couch. And they have opened up the possibility of printing such devices, which would make them cheap. The researchers made a stretchable display by connecting organic light-emitting diodes (OLEDs) and organic transistors with a new rubbery conductor. They can spread the display over a curved surface without affecting performance. The display can also be folded in half or crumpled up without incurring any damage. In a previous prototype, the researchers used their elastic conductor – a mix of carbon nanotubes and rubber – to make a stretchy electronic circuit.

The new version of the conductor is significantly more conductive and can stretch to more than twice its original size. Combined with printable transistors and OLEDs, this could pave the way for rolling out large, cheap, wearable displays and electronics. Bendy, flexible electronics that can be rolled up like paper are already available. But rubber-like stretchable electronics offer the additional advantage that they can cover complex three-dimensional objects. To make such materials, researchers have tried several approaches. In one approach, ultrathin silicon sheets are used to make complex circuits on stretchy surfaces. Others have made elastic conductors using graphene sheets or by combining gold and rubbery polymers. The new carbon nanotube conductor offers the advantage of being printable.

More information:

http://www.technologyreview.com/computing/22632/

15 May 2009

When Virtual Reality Feels Real

Despite advances in computer graphics, few people would think virtual characters or objects are real. Yet placed in a virtual reality environment most people will interact with them as if they are really there. European researchers are finding out why. In trying to understand presence – the propensity of humans to respond to fake stimuli as if they are real – the researchers are not just gaining insights into how the human brain functions. They are also learning how to create more intense and realistic virtual experiences, opening the door to myriad applications for healthcare, training, social research and entertainment. Working in the EU-funded Presenccia project, researchers, drawn from fields as diverse as neuroscience, psychology, psychophysics, mechanical engineering and philosophy, conducted a variety of experiments to understand why humans interpret and respond to virtual stimuli the way they do and how those experiences can be made more intense. For one experiment they developed a virtual bar, which test subjects enter by donning a virtual reality (VR) headset or immersing themselves in a VR CAVE in which stereo images are projected onto the walls. As the virtual patrons socialise, drink and dance, a fire breaks out. Sometimes the virtual characters ignore it, sometimes they flee in panic. That in turn dictates how the real test subjects, immersed in the virtual environment, respond. In another instance, the researchers re-enacted controversial experiments conducted by American social psychologist Stanley Milgram in the 1960s that showed people’s propensity to follow orders even if they know what they are doing is wrong.

Instead of using a real actor, as Milgram did, the Presenccia team used a virtual character to which the test subject was instructed to give progressively more intense electric shocks whenever it answered questions incorrectly. The howls of pain and protest from the character, a virtual woman, increased as the experiment went on. Some of the test subjects felt so uncomfortable that they actually stopped participating and left the VR environment. Around half said they wanted to leave, but did not because they kept telling themselves it wasn’t real. All had physical reactions, measured by their skin conductivity, perspiration and heart rate, showing that, at a subconscious level, people’s responses are similar regardless of whether what they are experiencing is real or virtual. The plausibility of the events enhances the sense that what is happening is real. Plausibility is therefore more important to presence than the quality of the graphics in a VR environment. For example, when a test subject was made to stand on the edge of a virtual pit, staring down at an 18-metre drop, their level of anxiety increased if they could see dynamically changing shadows and reflections of their virtual body, even if the graphics were poor. In other experiments, the researchers made people believe that a virtual hand was their own – replicating in VR the so-called ‘rubber hand illusion’ – or that they were looking at themselves from another angle, creating a kind of out-of-body experience. In one trial, they even gave male test subjects a woman’s body. By understanding what makes people perceive virtual objects and experiences to be real, the researchers hope to create applications that could revolutionise certain psychiatric treatments. Patients with a fear of spiders or heights, for example, could be exposed to and helped to overcome their fears in virtual reality.

More information:

http://www.presenccia.org/

http://cordis.europa.eu/ictresults/index.cfm?section=news&tpl=article&BrowsingType=Features&ID=90561

08 May 2009

Pop-up Button Touch Screen

Touch-screen technology has become wildly popular, thanks to smart phones designed for nimble fingers. But most touch screens have a major drawback: you need to keep a close eye on the screen as you tap, to make sure that you hit the right virtual buttons. As touch screens become more popular in other contexts, such as in-car navigation and entertainment systems, this lack of sensory feedback could become a dangerous distraction. Now researchers at Carnegie Mellon University have developed buttons that pop out from a touch-screen surface. The design retains the dynamic display capabilities of a normal touch screen but can also produce tactile buttons for certain functions. Researchers have built a handful of proof-of-concept displays with the morphing buttons. The screens are covered in semitransparent latex, which sits on top of an acrylic plate with shaped holes and an air chamber connected to a pump. When the pump is off, the screen is flat; when it's switched on, the latex forms concave or convex features around the cutouts, depending on negative or positive pressure. To illuminate the screens and give them multitouch capabilities, the researchers use projectors, infrared light, and cameras positioned below the surface.

The projectors cast images onto the screens while the cameras sense infrared light scattered by fingers at the surface. The idea of physically dynamic interfaces isn't new: in recent years, researchers have explored using screens made from polymers that alter their shape when exposed to heat, light, or changes in a magnetic field. However, these materials are still experimental and relatively expensive to make. Simpler systems, such as those that use a flexible material like latex and a pneumatic pump, have also been explored in the past, but they haven't had all the capabilities of the Carnegie Mellon project. The display is the first to combine moving parts (the pop-up buttons), a dynamic information display and touch sensitivity; other projects and products usually achieve only two of these three. Because the system is pressurized, the researchers say, the pressure information can itself be used as an input. For example, if the screen were used to control an MP3 player, a person could press a button harder to scan through radio stations or songs faster. While many touch-screen displays can also register different levels of pressure, the glass or rigid plastic used doesn't provide any tactile feedback.
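The press-harder-to-scan-faster idea amounts to a simple mapping from a pressure reading to a scan step. The thresholds and ranges below are invented for illustration; the article does not say how the CMU prototype actually maps pressure to behaviour.

```python
def scan_step(pressure, threshold=0.2, max_pressure=1.0, max_step=10):
    """Map a normalized pressure reading (0.0 to 1.0) to a scan step:
    below the threshold nothing registers, a light press advances one
    track, and harder presses skip proportionally more tracks, up to
    max_step at full pressure. All parameters here are illustrative."""
    if pressure < threshold:
        return 0                      # too light to count as a press
    frac = (pressure - threshold) / (max_pressure - threshold)
    return max(1, round(frac * max_step))
```

So a gentle touch at 0.2 advances one song, a firm press at 0.6 skips five, and pressing as hard as the sensor range allows skips ten.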

More information:

http://www.technologyreview.com/computing/22550/?a=f

05 May 2009

Second Skin Captures Motion

Researchers at MIT have developed a new system that may provide a cheaper and more efficient way to track motion. The system, called Second Skin, could be a cheaper alternative for creating special effects for movies. The researchers say that they hope it will also be used to help people monitor their own motions so that they can practice physical therapy or perfect their tai chi moves. Traditional tracking systems involve high-speed cameras placed around a specially lit set. The subject being tracked wears special markers that reflect light emitted by the cameras. The cameras capture and record the reflected light several times a second to track the subject's motion. When the system is used to make movies, software programs and a team of animators convert the data into an animated character. These motion-tracking systems can cost up to hundreds of thousands of dollars. Alternative systems that use magnets, accelerometers, or exoskeletons have their own drawbacks: magnetic systems need even more extensive setup and calibration, accelerometer-based systems are error prone, and exoskeletons are bulky and inflexible.

In contrast to traditional optical tracking systems, Second Skin doesn't rely on cameras at all. Instead, the system uses inexpensive projectors that can be mounted in ceilings or outdoors. Therefore, the system can be used indoors and out without special lighting, and it costs only a few thousand dollars. Tiny photosensors embedded in regular clothes record movement. The projectors send out patterns of near infrared light – approximately 10,000 different patterns a second. When the patterns hit the tiny photosensors embedded in the subject's clothes, the photosensors capture the coded light and convert it into a binary signal that indicates the position of the sensor. Because the patterns of light will hit the sensors differently, depending on where they are, each sensor receives a unique light pattern. These patterns are recorded about 500 times a second for each sensor. The sensors send the information to a thin, lightweight microcontroller worn by the subject under her clothes, which then transmits the data back to a computer via Bluetooth. The whole system can cost less than $1,000, with each photosensor costing about $2, a vibrating sensor $80, and a projector $50.
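The coded-light scheme lends itself to a small sketch: if each time slot lights exactly the columns whose corresponding position bit is 1, a sensor can recover its own column from the on/off sequence it records. The article mentions only "patterns of near infrared light" and a binary signal, so the plain binary code below is an assumption on our part; real structured-light systems often prefer Gray codes to limit errors at pattern boundaries.

```python
def encode_patterns(width, n_bits):
    """For each of n_bits time slots, return which columns are lit.
    In slot b, column x is lit iff bit b of x is 1, so a sensor at
    column x sees the binary expansion of x played out over time."""
    return [[(x >> b) & 1 == 1 for x in range(width)]
            for b in range(n_bits)]

def decode(flashes):
    """Turn the on/off sequence one sensor recorded back into its
    column index (least-significant bit first)."""
    return sum(1 << b for b, lit in enumerate(flashes) if lit)

# A sensor sitting at column 5 of an 8-column projection:
patterns = encode_patterns(width=8, n_bits=3)
seen = [slot[5] for slot in patterns]   # what that sensor observes
```

Three flashes are enough to distinguish eight columns; log2 of the projector resolution sets the number of patterns needed, which is why thousands of patterns per second yield hundreds of position fixes per second per sensor.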

More information:

http://www.technologyreview.com/computing/22555/

http://www.technologyreview.com/video/?vid=327