30 December 2008

Mobile College Application

Professors usually don’t ask everyone in a class of 300 students to shout out answers to a question at the same time. But a new application for the iPhone lets a roomful of students beam in answers that can be quietly displayed on a screen, allowing instant group feedback. The application was developed by programmers at Abilene Christian University, which handed out free iPhones and iPod Touch devices to all first-year students this year. The university was the first to do such a large-scale iPhone handout, and officials there have been experimenting with ways to use the gadgets in the classroom. The application lets professors set up instant polls in various formats: they can ask true-or-false or multiple-choice questions, and they can allow free-form responses. The software can quickly sort and display the answers, so a professor can view responses privately or share them with the class by projecting them on a screen. For open-ended questions, the software can display answers in “cloud” format, showing frequently repeated answers in large fonts and less frequent ones in smaller fonts.
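As a rough illustration of how such a cloud display might weight answers, the short Python sketch below maps each answer's frequency to a font size by linear interpolation. The sample answers and the font-size bounds are invented for illustration; they are not details of the university's software.

from collections import Counter

# Hypothetical free-form answers collected from a class poll.
answers = ["mitosis", "meiosis", "mitosis", "osmosis", "mitosis", "meiosis"]

MIN_PT, MAX_PT = 12, 48   # assumed font-size bounds, in points

counts = Counter(answers)
lo, hi = min(counts.values()), max(counts.values())

def font_size(freq):
    """Linearly map an answer's frequency to a font size."""
    if hi == lo:                   # all answers equally common
        return (MIN_PT + MAX_PT) // 2
    return MIN_PT + (MAX_PT - MIN_PT) * (freq - lo) // (hi - lo)

for word, freq in counts.most_common():
    print(f"{word}: {freq} votes -> {font_size(freq)}pt")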

The idea for such a system is far from new. Several companies sell classroom response systems, often called “clickers,” which typically involve small wireless gadgets that look like television remote controls. Most clickers allow students to answer true-or-false or multiple-choice questions (but do not allow open-ended feedback), and many colleges have experimented with the devices, especially in large lecture courses. Many clicker systems have drawbacks, however. First, every student in a course must have one of the devices, so in courses that use clickers students are often required to buy them. Second, students have to remember to bring the gadgets to class, which doesn’t always happen. Using cellphones instead of dedicated clicker devices solves both issues: because students rely on their phones for all kinds of communication, they usually keep the devices on hand. The university calls its iPhone software NANOtools; NANO stands for No Advance Notice, emphasizing how little preparation the system demands of professors and students. Some companies that make clickers, such as TurningPoint, are starting to sell similar software that turns smartphones into student feedback systems as well.

More information:

http://chronicle.com/wiredcampus/article/3518/mobile-college-app-turning-iphones-into-super-clickers-for-classroom-feedback

26 December 2008

Virtual Battle of Sexes

Ask most people to picture a typical player of a massively multiplayer game such as World of Warcraft and they will imagine an overweight, solitary male. But this stereotype has been challenged by a study investigating gender differences among gamers. It found that the most hard-core players are female, that gamers are healthier than average, and that game playing is an increasingly social activity. Despite gaming being seen as a male activity, female players now make up about 40% of the gaming population. The study looked at gender differences among more than 2,400 gamers playing EverQuest II. The participants, who were recruited directly from within the game, completed a web-based questionnaire about their gaming habits and lifestyles. They received an in-game item as a reward for taking part, a condition that has led some to question the results.

In addition, Sony Online Entertainment, the creator of EverQuest II, gave the US researchers access to information about the players' in-game behaviours. The results showed that, although more of the players were male, the female players were the most dedicated, spending more time each day playing the game than their male counterparts. The pressure to conform to traditional gender roles might mean that some women are put off activities seen as ‘masculine’, whereas women who reject traditional gender roles might be more likely to play MMOs such as EverQuest II. Perhaps in support of this, the survey revealed an unusually high level of bisexuality among the women who took part in the study: more than five times the rate found in the general population.

More information:

http://news.bbc.co.uk/2/hi/technology/7796482.stm

22 December 2008

Cognitive Computing

Suppose you want to build a computer that operates like the brain of a mammal. How hard could it be? After all, there are supercomputers that can decode the human genome, play chess and calculate prime numbers out to 13 million digits. But a University of Wisconsin-Madison research psychiatrist says the goal of building a computer as quick and flexible as a small mammalian brain is more daunting than it sounds. Scientists from Columbia University and IBM will work on the software for the thinking computer, while nanotechnology and supercomputing experts from Cornell, Stanford and the University of California-Merced will create the hardware. The idea is to create a computer capable of sorting through multiple streams of changing data, looking for patterns and making logical decisions. There's another requirement: the finished cognitive computer should be as small as the brain of a small mammal and use as little power as a 100-watt light bulb. It's a major challenge, but it's what our brains do every day, so we have proof that it is possible. What brains are good at is being flexible, learning from experience and adapting to different situations. While the project will take its inspiration from the brain's architecture and function, it isn't possible, or even desirable, to recreate the entire structure of the brain down to the level of the individual synapse.

A lot of the work will be to determine what kinds of neurons are crucial and which ones can be done without. It all comes down to an understanding of what is necessary for teaching an artificial brain to reason and learn from experience. Value systems, or reward systems, are important aspects: learning is crucial, because the system needs to learn from experience just as we do. So a system modeled after the neurons that release neuromodulators could be important. For example, neurons in the brain stem flood the brain with a neurotransmitter during times of sudden stress, signaling the "fight-or-flight" response; every neuron in the brain knows that something has changed. Thus, a cat landing on a hot stovetop not only jumps off immediately, it learns not to do that again. The ideal artificial brain will need to be plastic, meaning it is capable of changing as it learns from experience. The design will likely convey information using electrical impulses modeled on the spiking neurons found in mammal brains. And advances in nanotechnology should allow a small artificial brain to contain as many artificial neurons as a small mammal brain. It won't be an easy task, according to a veteran of earlier efforts to create cognitive computers. Even the brains of the smallest mammals are quite impressive when you consider what tasks they perform with a relatively small volume and energy input.
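To make the idea of spiking neurons concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the simplest standard spiking model. The parameters are textbook illustrative values and the code is our own sketch, not anything specified by the project.

# Leaky integrate-and-fire neuron: the membrane voltage decays toward a
# resting value under a constant input drive, and the neuron "spikes"
# (then resets) whenever the voltage crosses a threshold.
dt, tau = 0.1, 10.0                               # time step and membrane constant, ms
v_rest, v_thresh, v_reset = -70.0, -55.0, -75.0   # voltages in mV
i_drive = 20.0                                    # constant input, in mV of equivalent drive

v = v_rest
spike_times = []
for step in range(1000):                          # simulate 100 ms
    v += (-(v - v_rest) + i_drive) * dt / tau     # leaky integration
    if v >= v_thresh:
        spike_times.append(step * dt)             # record spike time in ms
        v = v_reset                               # reset after the spike

print(f"{len(spike_times)} spikes in 100 ms at t = {spike_times}")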

More information:

http://www.sciencedaily.com/releases/2008/12/081221215537.htm

21 December 2008

Modeling Brain Blasts

Traumatic brain injury (TBI) is often called the signature injury of the war in Iraq. Medical experts have yet to determine exactly what causes the condition, but the violent waves of air pressure emitted by an improvised explosive device (IED) or a rocket-propelled grenade are the most likely culprit. These pressure waves travel close to the speed of sound and can rattle the brain's soft tissue, causing permanent, yet invisible, damage. In an effort to better understand how the waves shake soldiers' brains, researchers at the Naval Research Laboratory (NRL), in Washington, DC, developed a computer simulation that models the motion of a propagating blast wave using data gathered from laboratory experiments with sensor-studded mannequins. The simulation gives the researchers the full 3D flow field: the velocities and pressure distributions surrounding the head and the helmet. Initial testing has already revealed some compelling results. The NRL researchers are collaborating with a team of researchers at Allen-Vanguard Technologies, in Canada.

The group placed Marine Corps ballistic helmets on mannequins equipped with pressure sensors and accelerometers, and these modified mannequins were placed at various orientations and distances from controlled explosions. The researchers collected data from more than 40 different blast scenarios and integrated the data into their computer simulation. The simulation uses a set of well-established flow-modeling algorithms for simulating reacting and compressible flow to create a 3D simulation of the pressure wave that would be experienced by a real soldier. The algorithms have been used in the past, but the researchers are combining them in a new way to build software for this particular problem. The calculations are done in two steps. First, the algorithms model the initial blast to get a realistic blast profile from the explosion; this includes the chemistry, so the simulation captures the strength of the pressure waves and the velocity field. Second, as the wave approaches the mannequin, this information is fed into a compressible flow simulation that produces a more detailed 3D simulation around the head-helmet geometry. This combined approach makes the calculations more realistic and efficient.
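For a sense of what a "realistic blast profile" looks like, free-field blast overpressure is commonly idealized with the Friedlander waveform: an instantaneous jump to a peak pressure followed by an exponential decay through a negative phase. The Python sketch below uses illustrative peak-pressure and duration values of our own choosing; the NRL work computes full 3D reacting-flow fields rather than this 1D idealization.

import math

def friedlander(t, p_peak, t_dur, b=1.0):
    """Classic Friedlander overpressure at time t after wave arrival."""
    if t < 0:
        return 0.0
    return p_peak * (1.0 - t / t_dur) * math.exp(-b * t / t_dur)

# Illustrative values only: 500 kPa peak, 2 ms positive-phase duration.
p_peak, t_dur = 500.0, 2.0
for t in [0.0, 0.5, 1.0, 2.0, 3.0]:   # times in ms
    print(f"t = {t} ms  overpressure = {friedlander(t, p_peak, t_dur):8.1f} kPa")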

More information:

http://www.technologyreview.com/computing/21712/?a=f

18 December 2008

Virtual Cognitive Telerehabilitation

The Guttmann Institute, the Biomedical Engineering Research Center (CREB) and the Department of Software of the Universitat Politècnica de Catalunya (UPC), as well as other science and technology partners, are working on a telerehabilitation program for treating people with cognitive deficits caused by acquired brain damage. A three-dimensional space has been designed to help these people improve their functional capacity in daily life activities. The PREVIRNEC platform enables therapists to personalize treatment plans: intensive rehabilitation can be programmed automatically for the required length of time, the results monitored and the level of difficulty adjusted according to patients’ performance in previous sessions. The aim of this project is to use software to meet the treatment needs of patients with acquired brain damage. The software promotes the rehabilitation of affected cognitive functions by representing everyday, real-life situations in a virtual world. The software has two applications. It offers patients a three-dimensional IT platform on which to carry out their cognitive rehabilitation exercises. In addition, it provides a web interface for the therapist, through which different exercises can be programmed for each individual, their performance monitored, their progress assessed and their rehabilitation treatment plan adapted, if required.
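The adjustment of difficulty according to performance in previous sessions could, in the simplest case, be a threshold rule like the hypothetical Python sketch below. The thresholds, level range and step size are invented for illustration and are not taken from PREVIRNEC.

# Hypothetical difficulty-adaptation rule: raise the level after strong
# sessions, lower it after weak ones, otherwise leave it unchanged.
RAISE_AT, LOWER_AT = 0.8, 0.4     # assumed success-rate thresholds
MIN_LEVEL, MAX_LEVEL = 1, 10      # assumed difficulty range

def next_level(level, success_rate):
    if success_rate >= RAISE_AT:
        return min(level + 1, MAX_LEVEL)
    if success_rate <= LOWER_AT:
        return max(level - 1, MIN_LEVEL)
    return level

# Example: a patient's success rates over five sessions.
level = 3
for rate in [0.85, 0.9, 0.55, 0.3, 0.75]:
    level = next_level(level, rate)
    print(f"success {rate:.0%} -> next session at level {level}")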

Currently, the main limitation of conventional cognitive rehabilitation is the difference between the types of activities used in therapeutic sessions and the real difficulties that patients face after treatment. The introduction of virtual reality applications helps to reduce this gap between clinical practice and everyday life. In this innovative proposal, the contribution of the UPC’s Computer Science in Engineering Group (GIE) involves developing virtual recreations of everyday spaces, for example a kitchen, in which the patient has to carry out several tasks, such as putting things away in the fridge or preparing a salad. These kinds of tasks, which are difficult to simulate in a clinical setting, can help patients to work on their ability to plan, sequence, categorize or use their memory. The flexibility and adaptability of this computer technology mean that it can be used in the rehabilitation of other types of patients who also require cognitive treatment to improve their quality of life. The project is still in progress, and the following have been incorporated: new technology partners, such as the Biomedical Engineering and Telemedicine Center (GBT) of the Technical University of Madrid; knowledge in the field of neurosciences, provided by the Catalan Institute of Aging (FICE) and by the Universitat Autònoma de Barcelona (UAB), in the form of its Cognitive Neuroscience Research Group; and the recognition and drive of ICT industries, represented by ICA and Vodafone, as the result of a research, development and innovation grant awarded through the AVANZA program.

More information:

http://www.sciencedaily.com/releases/2008/12/081201082357.htm

17 December 2008

Future Computer Interfaces

This month, the humble computer mouse celebrated its 40th birthday. Thanks to the popularity of the iPhone, the touch screen has gained recognition as a practical interface for computers. In the coming years, we may see increasingly useful variations on the same theme. A couple of projects, in particular, point the way toward interacting more easily with miniature touch screens, as well as with displays the size of walls. One problem with devices like the iPhone is that users' fingers tend to cover up important information on the screen. Yet making touch screens much larger would make a device too bulky to slip discreetly into a pocket. A project called nanoTouch, developed at Microsoft Research, tackles the challenges of adding touch sensitivity to ever-shrinking displays. Researchers have added touch interaction to the back of devices that range in size from an iPod nano to a watch or a pendant. The researchers' concept is for a gadget to have a front that is entirely a display, a back that is entirely touch sensitive, and a side that features buttons.

To make the back of a gadget touch sensitive, the researchers added a capacitive surface, similar to those used on laptop touch pads. In one demonstration, the team shows that the interface can be used to play a first-person video game on a screen the size of a credit card. In another demo, the device produces a semitransparent image of a finger as if the device were completely see-through. When a transparent finger or a cursor is shown onscreen, people can still operate the device reliably. Details of the device will be presented at the Computer-Human Interaction (CHI) conference in Boston next April. The researchers tested four sizes of square displays, measuring 2.4 inches, 1.2 inches, 0.6 inches, and 0.3 inches wide. They found that people could complete tasks at roughly the same speed using even the smallest display, and that they made about the same number of errors using all sizes of the device. Furthermore, the back-of-the-screen prototypes performed better than the smallest front-touch device.
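The pseudo-transparency hints at the underlying coordinate transform: a touch on the back of the device must be mirrored horizontally to line up with what the user sees on the front. Below is a minimal Python sketch of that mapping; the screen width and function name are our own illustration, not part of nanoTouch.

# Map a touch on the device's back to the on-screen point it visually
# lines up with from the front: same height, horizontally mirrored.
def back_touch_to_screen(x_back, y_back, screen_width):
    return screen_width - x_back, y_back

# Example on an assumed 240-pixel-wide display: a touch near the back's
# left edge appears near the screen's right edge.
print(back_touch_to_screen(10, 50, 240))   # -> (230, 50)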

More information:

http://www.technologyreview.com/computing/21799/?a=f

11 December 2008

Virtual Emotions, Moods, Personality

A team of researchers from the University of the Balearic Islands (UIB) has developed a computer model that, for the first time, generates faces displaying emotions and moods according to personality traits. The aim of this work has been to design a model that reveals a person's moods and displays them on a virtual face. In the same 3-D space, the researchers have integrated personality, emotions and moods, which had previously been dealt with separately. The researchers pointed out that emotions (such as fear, joy or surprise) are almost instantaneous mood alterations, in contrast to emotional states (such as boredom or anxiety), which are longer-lasting, and to personality, which normally lasts someone's entire life. The designers drew up the model on the basis of the five personality traits established by American psychologists: extraversion, neuroticism, openness, conscientiousness and agreeableness. An introverted and neurotic personality is therefore related to an anxious emotional state. The points of the face that define these emotions can be determined mathematically, and the algorithms developed by the computer experts can be used to obtain different facial expressions ‘quickly and easily’.
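One way to picture a single 3D space holding personality, moods and emotions is a pleasure-arousal-dominance cube in which personality fixes a resting point and emotions are short-lived displacements from it. The Python sketch below is a loose illustration of that idea, not the UIB model itself; the trait-to-coordinate weights and the "fear" direction are invented.

# Illustrative 3D emotional space (pleasure, arousal, dominance).
# Personality traits (0..1) set a resting point; an emotion is a brief
# displacement away from it. All weights here are invented.
def resting_point(extraversion, neuroticism, agreeableness):
    pleasure  = 0.3 * extraversion + 0.3 * agreeableness - 0.3 * neuroticism
    arousal   = 0.4 * neuroticism + 0.2 * extraversion
    dominance = 0.4 * extraversion - 0.3 * neuroticism
    return (pleasure, arousal, dominance)

def apply_emotion(state, emotion, intensity):
    return tuple(s + intensity * e for s, e in zip(state, emotion))

# An introverted, neurotic personality rests at low pleasure and high
# arousal, consistent with the anxious state described above.
rest = resting_point(extraversion=0.2, neuroticism=0.8, agreeableness=0.5)
fear = (-1.0, 0.8, -0.8)          # invented displacement for "fear"
print(rest, apply_emotion(rest, fear, 0.5))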

The system, which uses the MPEG-4 video coding standard for creating images, makes it possible to display basic emotions (anger, disgust, fear, joy, sadness, surprise) and intermediate situations. The results of the method have been assessed objectively (through an automatic recognizer, which identified 82% of the expressions generated) and subjectively, through a survey carried out among a group of 75 university students. The students successfully recognised 86% of the emotions and 73% of the emotional states shown on the computer. Even so, the researchers have detected that some emotions, such as fear and surprise, are difficult to tell apart, with context helping to differentiate between the two. The team is already working along these lines and has prepared a virtual storyteller that enriches the narration by using its face to express the emotions generated by the story being told. The researchers believe that this model could be applied both in educational environments (virtual tutors and presenters with personality traits) and in video game characters or interactive stories that have their own emotion engine.

More information:

http://www.sciencedaily.com/releases/2008/12/081204133855.htm

07 December 2008

VAST2008 Article

A few days ago I presented a co-authored paper at the 9th International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST) in Braga, Portugal. Museums and other cultural institutions try to communicate the theme of their exhibitions and attract visitors’ attention by presenting audio-visual information in a number of different ways. Traditional museum exhibitions have evolved from passive presentations of artefacts to interactive displays, such as pre-recorded audio guides and static information kiosks. However, even though some technological advances have been adopted by current museum and mobile exhibitions, they provide very simplistic presentations compared to the potential of current information technologies. It is therefore essential to provide a unifying framework that is highly customisable, user-friendly and intuitive to use, in order to engage a broad spectrum of users and take into account the diverse needs of museum visitors.

This paper presents solutions for both museum exhibitions and mobile guides, moving towards a unifying framework based on open standards. This can offer more customisable experiences, attracting and engaging a broader spectrum of users. Our solution takes into account the diverse needs of visitors to heritage and mobile guide exhibitions, allowing for multimedia representations of the same content through diverse interfaces, including web, map, virtual reality and augmented reality domains. Different case studies illustrate the majority of the capabilities of the multimodal interfaces used, and also how personalisation and customisation can be performed in both kiosk and mobile guide exhibitions to meet user needs.

A draft version of the paper can be downloaded from here.