27 June 2009

Simpler Data Visualization

There are many ways to slice and dice data to better understand what it means. Software like Microsoft's Excel offers a simple way to create charts and graphs, while applications such as IBM's Many Eyes provide richer ways to visualize more complex data. Specialized programming languages can go further by letting users tweak the design of visualizations, but these languages tend to be difficult for non-experts to use. Now researchers at Stanford are offering a suite of tools called Protovis that streamlines the process of building data visualizations. The tools still require some programming, but they are designed to be approachable for someone with little programming experience.

The level of skill required to use and modify the tools sits somewhere above HTML but below full JavaScript, the common Web scripting language in which Protovis is implemented. One of the main benefits of Protovis is its structure: a person who thinks first in terms of visualizations, and only then in terms of data, should find it easy to use. Instead of forcing the user to focus on how to structure the program's code, Protovis lets a user define simple building blocks, such as the colors and shapes needed for the visualization, and then piece those blocks together into the complete picture. One example of the kind of visualization that Protovis makes easier is the Job Voyager.
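To give a flavor of the building-block style described above, here is a minimal sketch of a Protovis bar chart, written as ordinary JavaScript embedded in a Web page (the data values and layout numbers are invented for illustration, and the snippet assumes the protovis.js library has been loaded):

```javascript
// Some invented data to plot
var data = [1, 1.2, 1.7, 1.5, 0.7];

// The root panel is the drawing surface
var vis = new pv.Panel()
    .width(150)
    .height(150);

// Each "mark" (here, a set of bars) is declared by its visual
// properties; the library assembles the picture from the declarations.
vis.add(pv.Bar)
    .data(data)
    .width(20)
    .left(function() { return this.index * 25; })   // position by datum index
    .bottom(0)
    .height(function(d) { return d * 80; })         // height from the datum
    .fillStyle("steelblue");

vis.render();
```

Note that the user writes no drawing loop: each property is declared once, and Protovis evaluates it per datum, which is what lets someone think in terms of the visualization rather than the control flow of the code.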



24 June 2009

Mixed Reality Medical Breast Exam

Intimate procedures such as breast exams, while a routine and critical part of medical care, are notoriously difficult to teach. Medical students practice on disembodied prosthetics but have limited opportunities to perform exams on real people, especially patients who have an abnormality. In a collaboration with the Augusta, Ga.-based Medical College of Georgia and three other universities, University of Florida engineers have crafted a solution: a hybrid computer/mannequin that trains students not only to perform a breast exam correctly, but also to talk to, and glean information from, the patient during the procedure. The project matters because correct examinations and good doctor-patient communication are critical to successful medical treatment; studies have shown that communication skills are actually a better predictor of outcome than medical skills. With the virtual patient, students must practice not only their technique but also their empathy. The mixed reality human, named Amanda Jones, ‘talks’ to students, who respond via a computer speech and voice recognition system. Her physical form is immobile, but her virtual representation, created by the engineers, moves and speaks from a large flat screen above her physical body. Students can also view Jones through a head-mounted display.

The interaction is unscripted, but it follows the typical pattern of a woman’s visit and examination, with both verbal and tactile challenges for the medical students. The student must tease out Jones’ medical history, listen to her concerns and respond to her questions. Just as in a real exam, this interaction occurs simultaneously with the physical examination, for which the student must use the correct palpating technique and apply the proper pressure. Sensors within the prosthetic breast provide pressure readings, depicted as colors on the virtual breast, that guide students through the exam. The engineers can program the system to include or exclude an abnormality, along with the attendant conversation. It sounds awkward, and to be sure, the speech recognition element has its hiccups. But especially for students reared in an era of sophisticated 3D video games, the system turns out to be surprisingly convincing. The researchers have tested it on about 100 medical students from the Medical College of Georgia, and one of their most consistent and prominent findings is that students do not hesitate to express empathy to Jones. A pilot study concluded that students who practiced with the mixed reality human improved both their communication skills and their technical abilities, but more trials are needed to determine whether those skills persist once the students examine real patients.
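The pressure-to-color feedback loop can be sketched in a few lines. This is purely illustrative: the article does not describe the actual sensor scale, thresholds or color scheme used by the UF system, so all of those are invented here.

```javascript
// Map a pressure reading to an RGB triple for the virtual breast display:
// blue = too light, green = within the (hypothetical) target range, red = too firm.
function pressureToColor(pressure, low = 1.0, high = 4.0) {
  if (pressure < low) {
    // Fade blue toward green as pressure approaches the target range
    const t = Math.max(pressure / low, 0);
    return [0, t, 1 - t];
  }
  if (pressure <= high) {
    return [0, 1, 0]; // correct palpation pressure
  }
  // Fade green toward red as pressure exceeds the range
  const t = Math.min((pressure - high) / high, 1);
  return [t, 1 - t, 0];
}
```

In the real system each sensor in the prosthetic would drive the color of the corresponding region of the on-screen model, giving the student continuous feedback on technique.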



23 June 2009

Cell Phones That Listen and Learn

Researchers are increasingly using cell phones to better understand users' behavior and social interactions. The data collected from a phone's GPS chip or accelerometer, for example, can reveal trends that are relevant to modeling the spread of disease, determining personal health-care needs, improving time management, and even updating social networks. The approach, known as reality mining, has also been suggested as a way to improve targeted advertising or make cell phones smarter: a device that knows its owner is in a meeting could automatically switch its ringer off, for example. Now a group at Dartmouth College, in Hanover, NH, has created software that uses the microphone on a cell phone to track and interpret a user's activity. The software, called SoundSense, picks up sounds and tries to classify them into certain categories. In contrast to similar software developed previously, SoundSense can recognize completely unfamiliar sounds, and it runs entirely on the device.

SoundSense automatically classifies sounds as ‘voice’, ‘music’, or ‘ambient noise’. If a sound recurs often enough, or lasts long enough, SoundSense gives it a high ‘sound rank’, asks the user to confirm that it is significant, and offers the option to label it. The Dartmouth team focused on monitoring sound because every phone has a microphone and because accelerometers provide only limited information. The researchers kept the program small so that it doesn't use too much power. To address privacy concerns, they designed SoundSense so that no information leaves the device for processing; additionally, the program doesn't store raw audio clips, and a user can tell the software to ignore any sounds deemed off limits. In testing, SoundSense correctly determined when the user was in a particular coffee shop, walking outside, brushing her teeth, cycling, and driving a car. It also picked up the noise of an ATM and of a fan in a particular room.
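The ‘sound rank’ idea, counting recurring unfamiliar sounds until they are worth asking the user about, can be sketched as follows. This is a toy illustration, not the Dartmouth code: the threshold and the clustering step that produces the IDs are hypothetical stand-ins.

```javascript
// Toy sketch of SoundSense-style ranking of unfamiliar sounds.
class SoundRanker {
  constructor(threshold = 5) {
    this.threshold = threshold;     // how often a sound must recur (assumed)
    this.counts = new Map();        // clusterId -> times observed
    this.labels = new Map();        // clusterId -> user-supplied label
  }

  // Called whenever a clip doesn't match voice/music/ambient well and is
  // assigned to an "unfamiliar sound" cluster.
  observe(clusterId) {
    const n = (this.counts.get(clusterId) || 0) + 1;
    this.counts.set(clusterId, n);
    // Surface frequent, still-unlabeled clusters for the user to confirm
    if (n >= this.threshold && !this.labels.has(clusterId)) {
      return "ask_user"; // e.g. prompt: "Is this sound significant? Label it?"
    }
    return null;
  }

  label(clusterId, name) {
    this.labels.set(clusterId, name);
  }
}
```

Once a cluster is labeled (say, as the ATM near the user's office), future matches can be reported by name, which is how the system grows beyond its three built-in categories.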



20 June 2009

Hybrid Human Machine Interaction

In a groundbreaking study, scientists at Florida Atlantic University have created a "hybrid" system to examine real-time interactions between humans and machines (virtual partners). For more than 25 years, scientists in the Center for Complex Systems and Brain Sciences (CCSBS) in Florida Atlantic University’s Charles E. Schmidt College of Science, and others around the world, have been trying to decipher the laws of coordinated behavior called ‘coordination dynamics’. Unlike the laws of motion of physical bodies, the equations of coordination dynamics describe how the coordination states of a system evolve over time, as observed through special quantities called collective variables. These collective variables typically span the interaction of organism and environment. Imagine a machine whose behavior is based on the very equations that are supposed to govern human coordination.

Then imagine a human interacting with such a machine, where the human can modify the behavior of the machine and the machine can modify the behavior of the human. An interdisciplinary group of scientists in the CCSBS created just that: VPI, or Virtual Partner Interaction, a hybrid system in which a human interacts with a machine. The scientists placed the equations of human coordination dynamics into the machine and studied real-time interactions between humans and these virtual partners. Their findings open up the possibility of exploring and understanding a wide variety of interactions between minds and machines. VPI may be the first step toward establishing a much friendlier union of man and machine, and perhaps even creating a different kind of machine altogether.
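The best-known equation of coordination dynamics, from the same FAU group, is the HKB (Haken-Kelso-Bunz) relative-phase law, which describes the phase difference phi between two coordinated rhythmic movements. A minimal simulation sketch (parameter values chosen for illustration, not taken from the VPI study):

```javascript
// HKB relative-phase dynamics:  dphi/dt = dw - a*sin(phi) - 2b*sin(2*phi)
// phi = 0 is in-phase coordination, phi = pi is anti-phase.
// Simple forward-Euler integration of the equation.
function simulateHKB(phi0, { a = 1.0, b = 1.0, dw = 0.0, dt = 0.01, steps = 5000 } = {}) {
  let phi = phi0;
  for (let i = 0; i < steps; i++) {
    phi += dt * (dw - a * Math.sin(phi) - 2 * b * Math.sin(2 * phi));
  }
  return phi; // with these parameters, settles at 0 or pi
}
```

This is the collective-variable picture the article describes: the equation tracks not the limbs themselves but the relative phase between them, and putting such equations inside the machine is what lets the virtual partner coordinate with, and push back against, a human.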



17 June 2009

3D Cinema Without Glasses

Most people’s experience with 3D involves wearing tinted glasses in a cinema. But a new technology that does not require glasses, and that may enable 3DTV, is being developed by European researchers. While the first applications are likely to be in industry and science, there are major implications for the future of entertainment, at the cinema, on television and in video gaming. From the user’s perspective, the most important aspect of the new system is that nothing is required of the viewer: no special glasses, and no need to hold your head in a specific position to get the 3D effect, as with a holographic image. It provides the moving-image 3D experience closest to well-known static holography, in which the user can move freely to change the viewing angle. The breakthrough has come thanks to two EU-funded projects: HOLOVISION, which ended in April last year, and its successor OSIRIS, which runs until the end of 2009. These projects organised projection engines in a special way and used holographic imaging film for the display screen. That combination, with the projection engines driven by a cluster of nine high-end PCs and new, sophisticated software, allowed the researchers to achieve their aims.

A prototype system was produced with a resolution of 100 Mpixel, around 10 times that of HDTV, at 25 frames per second in six colours rather than the standard RGB. The researchers were able to triple the resolution to nearly 300 Mpixel by using greyscales instead of colours. Although nothing commercial has yet come out of the HOLOVISION project, it provided a major stepping-stone for the much larger OSIRIS project. An early prototype of an OSIRIS system was demonstrated at the ICT 2008 exhibition in Lyon, and impressed enough to win the silver prize in the Best Exhibit Award. A major aim of OSIRIS is to develop high-resolution, big-screen, reflective-projection 3D cinema. The prototype under development has a wall-mounted 1.7m x 3m screen, with the projector on the ceiling. In HOLOVISION, the images were back-projected, which meant a very unwieldy display system 35 inches deep. In OSIRIS, which uses a complex system of mirrors and light sources to provide the re-projected images, the screen display will have a depth of only 15 to 20 inches, giving a much less bulky and more modern look. The glasses-free technology presents the 3D image in a way very similar to light coming from a normal object, putting far less strain on the brain than current 3D projections. Although the technology is still under development, the commercial prospects are many and varied.



16 June 2009

ICT for Dementia Patients

The labour force in the health services is shrinking, there are more and more old people, and a very high proportion of them are plagued by deteriorating short- and long-term memory. All this has created a need for computer-based solutions that enable elderly people to live safely in their own homes; at the same time, the technology needed to take special care of them is expensive, and differing standards for home sensors create problems. This situation formed the backdrop for the EU’s decision a couple of years ago to launch a series of projects to make it simpler for industry to develop new equipment in this field. One of these projects was called Mpower, and its aim was to create a computer platform that could be used for various purposes and meet a wide range of needs among its target group. What is being tested in Norway today is a simple communication system based on a computer screen, aimed at elderly people who live at home but whose memory is failing.

No keyboard is needed, only a touch on the screen, which displays a sun or a moon to indicate whether it is day or night, while a large clock face shows the time. The families of these patients are often anxious about how their parents are getting on, and the system allows both relatives and the home help to enter messages that are automatically displayed. On the screen, for example, the elderly person might find “Remember to drink some water”, “Take the number 52 bus”, or a current message such as “The home help will be coming at nine o’clock this morning to give you a shower”. Another useful feature is that family members can access the system to check whether the elderly person’s appointments have been kept. Has she been to the doctor? Has he remembered to go to the day-care unit today? Since last summer, a handful of elderly people have been trialling the system in Trondheim and Grimstad. Meanwhile, a variant of the system is being tested in a nursing home near Krakow in Poland. This version uses sensors and GPS to offer smart solutions both in the house and outdoors, sounding an alarm if an elderly person is moving around in an unsafe area.
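The display logic described above is deliberately simple, and a sketch makes that concreteness clear. The message format and the day/night cut-over hours here are assumptions for illustration; the article does not specify them.

```javascript
// Toy sketch of the memory-support screen: pick a sun or moon icon for
// day or night, and list any messages that have come due for display.
function screenState(now, messages) {
  const hour = now.getHours();
  const icon = hour >= 7 && hour < 19 ? "sun" : "moon"; // assumed day hours
  const due = messages.filter(m => m.when <= now).map(m => m.text);
  return { icon, due };
}
```

The point of such a design is that the elderly user never enters anything; relatives and the home help push messages in remotely, and the screen simply shows what is currently relevant.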



08 June 2009

Moon Magic

Lunar eclipses are well documented throughout human history. These rare and breathtaking phenomena, which occur when the moon passes into the Earth’s shadow and seemingly changes shape or color, or disappears from the night sky completely, have caught the attention of poets, farmers, leaders, and scientists alike. Researchers at Rensselaer Polytechnic Institute have developed a new method for using computer graphics to simulate and render an accurate visualization of a lunar eclipse. The model uses the celestial geometry of the sun, Earth, and moon, along with data on the Earth’s atmosphere and the moon’s peculiar optical properties, to create picture-perfect images of lunar eclipses. The computer-generated images, which are virtually indistinguishable from actual photos of eclipses, offer a chance to look back at famous eclipses in history, or to peek at future eclipses predicted to occur in the coming years and decades.

The model can also be configured to show how an eclipse would appear from any geographical vantage point on Earth; the same eclipse would look different depending on whether the viewer was in New York, Seattle, or Rome. Other researchers have rendered the night sky, the moon, and sunsets, but this is the first time anyone has rendered lunar eclipses. The appearance of lunar eclipses can vary considerably, from nearly invisible jet black through deep red and rust to bright copper-red or orange. The appearance depends on several factors, including how sunlight is refracted and scattered in the Earth’s atmosphere. The researchers combined and configured models for sunlight, the solar system, and the different layers and effects of the Earth’s atmosphere to develop their lunar eclipse model.
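The celestial geometry at the heart of such a model starts from the shape of Earth's shadow cone. The sketch below uses mean distances and simple similar-triangles geometry; a real rendering would use the actual ephemeris positions, but this is the starting point.

```javascript
// All distances and radii in kilometres (mean values)
const R_SUN = 696000;
const R_EARTH = 6371;
const SUN_EARTH = 149600000;   // mean Sun-Earth distance
const EARTH_MOON = 384400;     // mean Earth-Moon distance
const R_MOON = 1737;

// Radius of Earth's umbral (full) shadow at distance d behind Earth,
// from similar triangles on the Sun-Earth shadow cone.
function umbraRadius(d = EARTH_MOON) {
  return R_EARTH - d * (R_SUN - R_EARTH) / SUN_EARTH;
}
```

At the Moon's distance the umbra is roughly 4,600 km across in radius, comfortably wider than the 1,737 km Moon, which is why the Moon can be swallowed entirely and a total lunar eclipse is possible; the refracted, reddened sunlight that still leaks into that shadow is what the atmospheric part of the Rensselaer model accounts for.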



04 June 2009

Computer Graphics Liquid Simulation

Splashes, gurgles and pours are some of the sounds that have been missing from computer graphics simulations of water and other fluids, according to researchers in Cornell's Department of Computer Science, who have come up with new algorithms to simulate such sounds to go with the images. In computer-animated movies, sound can be added after the fact from recordings or by Foley artists. But as virtual worlds grow increasingly interactive and immersive, the researchers point out, sounds will need to be generated automatically to fit events that can't be predicted in advance. Recordings can be cued in, but they can be repetitive and not always well matched to what's happening, and until now there has been no way to efficiently compute the sounds of water splashing, paper crumpling, hands clapping, wind in trees or a wine glass dropped onto the floor. Along with fluid sounds, the research will also simulate sounds made by objects in contact, like a bin of Legos; the noisy vibrations of thin shells, like trash cans or cymbals; and the sounds of brittle fracture, like breaking glass and the clattering of the resulting debris.

All the simulations will be based on the physics of the objects being simulated in computer graphics, calculating how those objects would vibrate if they actually existed, and how those vibrations would produce acoustic waves in the air. Physics-based sound simulation can also be used in design, just as visual simulation is now. The method developed by the Cornell researchers starts with the geometry of the scene, figures out where the bubbles would be and how they're moving, computes the expected vibrations, and finally the sounds they would produce. The simulation runs on a highly parallel computer, with each processor computing the effects of multiple bubbles, and the researchers have fine-tuned the results by comparing their simulations with real water sounds. The current methods still require hours of offline computing time and work best on compact sound sources, the researchers note, but they say further development should bring the real-time performance needed for interactive virtual environments and handle larger sound sources such as swimming pools, or perhaps even Niagara Falls.
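The bubble-based approach can be made concrete with the classical Minnaert resonance model, the standard physical starting point for the sound of a single air bubble in water. This is a textbook sketch under standard assumptions (sea-level air, room-temperature water), not the Cornell authors' exact formulation:

```javascript
// Minnaert resonance of an air bubble in water:
//   f = sqrt(3 * gamma * P / rho) / (2 * pi * r)
// gamma: adiabatic index of air, P: ambient pressure (Pa),
// rho: water density (kg/m^3), r: bubble radius (m).
function minnaertFrequency(radiusM, gamma = 1.4, pressurePa = 101325, rhoWater = 998) {
  return Math.sqrt(3 * gamma * pressurePa / rhoWater) / (2 * Math.PI * radiusM);
}
```

A 1 mm bubble rings at roughly 3 kHz; summing many such decaying oscillators, one per bubble tracked in the fluid simulation, is what produces the characteristic splashing and gurgling, and is why the Cornell system assigns multiple bubbles to each processor.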
