30 December 2010

The Emotional Computer

Cambridge University film provides a glimpse of how robots and humans could interact in the future. Can computers understand emotions? Can computers express emotions? Can they feel emotions? The latest video from the University of Cambridge shows how emotions can be used to improve interaction between humans and computers. When people talk to each other, they express their feelings through facial expressions, tone of voice and body postures. They even do this when they are interacting with machines. These hidden signals are an important part of human communication, but computers ignore them.

The research team is collaborating closely with researchers from the University's Autism Research Centre. Because those researchers study the difficulties that some people have understanding emotions, their insights help to address the same problems in computers. Facial expressions are an important way of understanding people's feelings. One system tracks features on a person's face, calculates the gestures that are being made and infers emotions from them. It gets the right answer over 70% of the time, which is as good as most human observers.
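
To give a flavour of the kind of inference such a system performs, the sketch below maps scores for tracked facial gestures to a most likely emotion. The gesture names, weights and example input are illustrative assumptions, not the Cambridge team's actual features or model.

    # Minimal sketch: infer an emotion from facial-gesture scores.
    # Gesture names, weights and the example input are illustrative only.
    GESTURE_WEIGHTS = {
        "happy":     {"smile": 0.8, "brow_raise": 0.2},
        "surprised": {"brow_raise": 0.7, "jaw_drop": 0.3},
        "confused":  {"brow_furrow": 0.6, "head_tilt": 0.4},
    }

    def infer_emotion(gesture_scores):
        """Return the emotion whose weighted gesture evidence is strongest."""
        def evidence(emotion):
            weights = GESTURE_WEIGHTS[emotion]
            return sum(w * gesture_scores.get(g, 0.0) for g, w in weights.items())
        return max(GESTURE_WEIGHTS, key=evidence)

    # Example: a tracked face showing a strong smile and a slight brow raise.
    print(infer_emotion({"smile": 0.9, "brow_raise": 0.3}))  # -> happy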

More information:

http://www.admin.cam.ac.uk/news/dp/2010122303

23 December 2010

Preserving Time in 3D

A computer science professor hopes to use open-source software and super-high resolution photos to capture three-dimensional lifelike models of the world's treasures, effectively preserving their current state. Under the plans, a sequence of many thousands of super-high resolution photographs taken in batches from several angles would be stitched together to form detailed pictures and then rendered into 3D form. The effect would reveal minute detail of an object rendered in 3D, allowing future generations to view objects in their present state.

Researchers used a US$1,184 camera, an 800mm lens, a robotic arm and a free open-source application that combined some 11,000 18-megapixel images. The 150 billion-pixel photo was shrunk down to a small image to allow for manual smoothing out of the brightness variation between the combined photos. The adjustments took about three weeks and were then mapped back onto the full-size image. The 700GB photo took about a week to upload to the internet, and was processed with a standard PC beefed up with 24GB of RAM.
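
A quick back-of-the-envelope check, using only the figures quoted above, shows how the numbers hang together; the bytes-per-pixel value is a derived approximation, not a figure from the project.

    # Sanity-check the quoted figures for the stitched photograph.
    images = 11_000
    megapixels_each = 18
    raw_pixels = images * megapixels_each * 1_000_000    # ~198 billion pixels captured
    final_pixels = 150 * 1_000_000_000                   # the 150 billion-pixel mosaic
    file_bytes = 700 * 1_000_000_000                     # the 700GB photo

    print(f"raw pixels captured:    {raw_pixels / 1e9:.0f} billion")
    print(f"overlap discarded:      {(1 - final_pixels / raw_pixels) * 100:.0f}%")
    print(f"approx bytes per pixel: {file_bytes / final_pixels:.1f}")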

More information:

http://asia.cnet.com/crave/2010/12/20/photo-project-aims-to-to-preserve-time-in-3d/

22 December 2010

Video DNA Matching

You know when you're watching a pirated film downloaded from the Internet -- there's no mistaking the fuzzy footage, or the guy in the front row getting up for popcorn. Despite the poor quality, pirated video is a serious problem around the world. Criminal copyright infringement occurs on a massive scale over the Internet, costing the film industry billions of dollars annually. Now researchers of Tel Aviv University’s Department of Electrical Engineering have a new way to stop video pirates by treating video footage like DNA. Of course, video does not have a real genetic code like members of the animal kingdom, so researchers created a DNA analogue, like a unique fingerprint, that can be applied to video files. The result is a unique DNA fingerprint for each individual movie anywhere on the planet. When scenes are altered, colors changed, or film is bootlegged on a camera at the movie theatre, the film can be tracked and traced on the Internet. And, like the films, video thieves can be tracked and caught. The technology employs an invisible sequence and series of grids applied over the film, turning the footage into a series of numbers.

The tool can then scan the content of Web sites where pirated films are believed to be offered, pinpointing subsequent mutations of the original. The technique is called ‘video DNA matching’. It detects aberrations in pirated video in the same way that biologists detect mutations in the genetic code to determine, for example, an individual's family connections. The technique works by identifying features of the film that remain basically unchanged by typical color and resolution manipulations, and geometric transformations. It's effective even with border changes, commercials added or scenes edited out. The researchers have set their sights on popular video-sharing web sites like YouTube. YouTube, they say, automates the detection of copyright infringement to some degree, but its detection fails when the video has been altered. The problem with catching bootlegged and pirated video is that it requires thousands of man-hours to watch the content being downloaded.
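
The article does not disclose the Tel Aviv algorithm, but the general idea of a fingerprint that survives colour changes and re-encoding can be sketched as follows: reduce each frame to a coarse grid of brightness ranks and compare fingerprint sequences. The grid size, ranking scheme and scoring below are illustrative assumptions, not the actual 'video DNA' method.

    import numpy as np

    def frame_fingerprint(frame, grid=4):
        """Reduce a greyscale frame (2-D array) to a coarse grid of brightness ranks.
        Ranks survive global brightness and colour shifts better than raw values."""
        h, w = frame.shape
        cells = [frame[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid].mean()
                 for i in range(grid) for j in range(grid)]
        return np.argsort(np.argsort(cells))   # rank of each cell

    def sequence_similarity(fps_a, fps_b):
        """Average agreement of the grid ranks, frame by frame (1.0 = strong match)."""
        return float(np.mean([np.mean(a == b) for a, b in zip(fps_a, fps_b)]))

    # Example: a clip and a brightness-shifted copy of it still match closely.
    rng = np.random.default_rng(0)
    clip = [rng.random((48, 64)) for _ in range(30)]
    bootleg = [f * 0.7 + 0.2 for f in clip]               # global brightness/colour change
    print(sequence_similarity([frame_fingerprint(f) for f in clip],
                              [frame_fingerprint(f) for f in bootleg]))  # close to 1.0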

More information:

http://www.sciencedaily.com/releases/2010/12/101221101841.htm

18 December 2010

Sun Visualisation by ESA

New software developed by ESA makes available online to everyone, everywhere at any time, the entire library of images from the SOHO Solar and Heliospheric Observatory. Just download the viewer and begin exploring the Sun. JHelioviewer is new visualisation software that enables everyone to explore the Sun. Developed as part of the ESA/NASA Helioviewer Project, it provides a desktop program that enables users to call up images of the Sun from the past 15 years. More than a million images from SOHO can already be accessed, and new images from NASA's Solar Dynamics Observatory are being added every day. The downloadable JHelioviewer is complemented by the website Helioviewer.org, a web-based image browser. Using this new software, users can create their own movies of the Sun, colour the images as they wish, and image-process the movies in real time.

They can export their finished movies in various formats, and track features on the Sun by compensating for solar rotation. JHelioviewer is written in the Java programming language, hence the 'J' at the beginning of its name. It is open-source software, meaning that all its components are freely available so others can help to improve the program. The code can even be reused for other purposes; it is already being used for Mars data and in medical research. This is because JHelioviewer does not need to download entire datasets, which can often be huge -- it can just choose enough data to stream smoothly over the Internet. It also allows data to be annotated, say, solar flares of a particular magnitude to be marked or diseased tissues in medical images to be highlighted.

More information:

http://www.sciencedaily.com/releases/2010/12/101215083400.htm

14 December 2010

Creating Better Digital Denizens

We are incredibly sensitive to human movement and appearance, which makes it a big challenge to create believable computerised crowds, but researchers at Trinity are working on improving that. Getting those computer-generated avatars to act in engaging and more human ways is trickier than it looks. But researchers at Trinity College Dublin are delving into how we perceive graphical characters and coming up with insights to create more socially realistic virtual humans without incurring too much processing expense. Getting the crowds right in this computerised cityscape is important, according to researchers.

The team has been trying to work out smarter ways of making simulated crowds look more varied without the expense of creating a model for each individual, and they are finding that altering the upper bodies and faces on common templates is a good way to get more bang for your buck. Researchers from the team also sat together and attached markers to themselves so they could capture their movements and voices on camera as they conversed. That built up a large corpus of data to tease out the subtle synchronies between gestures and sounds that our brains register without us even thinking about it.

More information:

http://www.irishtimes.com/newspaper/sciencetoday/2010/1209/1224285096674.html

08 December 2010

Virtual Training Gets Real

Computerised training systems are getting an extra dose of reality, thanks to an EU-funded research project led by the University of Leeds. PC-based virtual reality training is typically cheaper than face-to-face sessions with a mentor or coach. As the recent Hollywood blockbuster Up in the Air showed, multiple members of staff can be trained by practising various scenarios in a virtual reality environment without having to leave their desks. However, virtual reality training tools are seldom as effective as working with a real person because the simulation package cannot respond to trainees' past experiences or preconceptions.

For example, software designed to help managers conduct job interviews may include a number of different simulated scenarios that appear true to life. However, if the trainee is consistently hostile to the virtual interviewee or overly sympathetic, the system will not flag this up or suggest they try an alternative approach. The project involves seven partners from six European countries: Austria, Germany, Ireland, Italy, the Netherlands and the UK. ImREAL will develop intelligent tools that will encourage trainees to detect subtle differences in communication and social cues across different cultures.

More information:

http://www.imreal-project.eu/

http://www.leeds.ac.uk/news/article/1307/virtual_training_gets_real

05 December 2010

Computer Generated Robots

Genetic Robots are moving robots that can be created fully automatically. The robot structures are created using genetic software algorithms and additive manufacturing. The important role robots play is not limited to industrial production in the automotive industry. They are also used for exploration, transportation and as service robots. Modeling the movements to make them mobile or enabling them to grip objects is a complex yet central challenge for engineers. With its ‘Genetic Robots’, the Fraunhofer Institute for Manufacturing Engineering and Automation (IPA) in Stuttgart has successfully had a moving robot automatically designed – without the intervention of a designing engineer – by a genetic software algorithm. The robots consist of cylinder-shaped tubes with ball-and-socket joints that can assume different shapes depending on external factors and the purpose at hand.

Fitness functions within the software algorithm select the movement elements with which the Genetic Robot can advance along a given surface; the software determines the shape of the tubes, the position of the movement points and the position of the drives (actuators). The basis for the development is a physics engine in which the most important environmental influences – such as the friction of the ground or gravity – are implemented. If the Genetic Robot is to withstand unevenness, climb stairs or swim in water, these environmental conditions can be simulated. The result is not just one but a multitude of solutions from which the designer can choose the best one. The Genetic Robots system can also be used to design subcomponents such as gripping systems for robots in industry.
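
The Fraunhofer software itself is not public, but the overall loop of a genetic algorithm driven by a fitness function can be sketched as below. The genome encoding, the stubbed stand-in for the physics engine and all parameters are assumptions for illustration only.

    import random

    random.seed(1)
    GENES, POP, GENERATIONS = 6, 30, 40   # genome length, population size, generations

    def fitness(genome):
        """Stand-in for the physics engine: score how far a robot with these
        actuator settings would advance. Here: an arbitrary smooth test function."""
        return -sum((g - 0.3 * i) ** 2 for i, g in enumerate(genome))

    def mutate(genome, rate=0.2):
        return [g + random.gauss(0, 0.1) if random.random() < rate else g for g in genome]

    def crossover(a, b):
        cut = random.randrange(1, GENES)
        return a[:cut] + b[cut:]

    population = [[random.uniform(-1, 2) for _ in range(GENES)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP // 2]                   # selection of the fittest designs
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP - len(parents))]
        population = parents + children

    print([round(g, 2) for g in max(population, key=fitness)])   # best design found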

More information:

http://www.fraunhofer.de/en/press/research-news/2010/11/euromold-genetic-robots.jsp

03 December 2010

Brain Boost for Information Overload

Imagine you have thousands of photographs and only minutes to find a handful that contain Dalmatian puppies. Or that you’re an intelligence analyst and you need to scan 5 million satellite pictures and pull out all the images with a helipad. Researchers have proposed a solution to such information overload that could revolutionize how vast amounts of visual information are processed—allowing users to riffle through potentially millions of images and home in on what they are looking for in record time. This is called a cortically coupled computer vision (C3Vision) system, and it uses a computer to amplify the power of the quickest and most accurate tool for object recognition ever created: the human brain. The human brain has the capacity to process very complicated scenes and pick out relevant material before we’re even consciously aware we’re doing so. These ‘aha’ moments of recognition generate an electrical signal that can be picked up using electroencephalography (EEG), the recording of electrical activity along the scalp caused by the firing of neurons in the brain.

Researchers designed a device that monitors brain activity as a subject rapidly views a small sample of photographs culled from a much larger database—as many as 10 pictures a second. The device transmits the data to a computer that ranks which photographs elicited the strongest cortical recognition responses. The computer looks for similarities in the visual characteristics of different high-ranking photographs, such as color, texture and the shapes of edges and lines. Then it scans the much larger database—it could contain upward of 50 million images—and pulls out those that rank high in visual characteristics most highly correlated with the ‘aha’ moments detected by the EEG. It’s an idea that has already drawn significant interest from the U.S. government. The Defense Advanced Research Projects Agency (DARPA), which pioneered such breakthrough technologies as computer networking, provided $2.4 million to test the device over the next 18 months. Analysts at the National Geospatial-Intelligence Agency will attempt to use the device to look for objects of interest within vast satellite images.
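
A much-simplified version of that pipeline can be expressed in a few lines: rank the viewed sample images by the strength of their EEG response, build a visual query from the top-ranked ones, and sort the large database by similarity to that query. The feature vectors and scores below are synthetic placeholders, not the actual C3Vision system.

    import numpy as np

    rng = np.random.default_rng(0)
    sample_features = rng.random((200, 32))        # colour/texture/edge features, viewed sample
    database_features = rng.random((100_000, 32))  # the much larger unviewed database
    eeg_scores = rng.random(200)                   # strength of the EEG response per viewed image

    # Build a query from the images that elicited the strongest responses.
    top = np.argsort(eeg_scores)[-10:]
    query = sample_features[top].mean(axis=0)

    # Rank the whole database by cosine similarity to that query.
    norms = np.linalg.norm(database_features, axis=1) * np.linalg.norm(query)
    similarity = database_features @ query / norms
    print(np.argsort(similarity)[::-1][:5])        # indices of the most promising images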

More information:

http://news.columbia.edu/record/2188#

30 November 2010

Sensors Monitor Elderly at Home

The sensors know when an elderly user wakes up to go to the bathroom. They know how much time he spends in bed. They watch him do jigsaw puzzles in the den. They tattle when he opens the refrigerator. Sensor networks, which made their debut in hospitals and assisted living centers, have been creeping into the homes of some older Americans in recent years. The systems -- which can monitor a host of things, from motion in particular rooms to whether a person has taken his or her medicine -- collect information about a person's daily habits and condition, and then relay that in real-time to doctors or family members. If the user opens an exterior door at night, for example, an alert goes out to the doctor, a monitoring company and two of his closest friends, since there is no family nearby.

The monitoring network, made by a company called GrandCare Systems, features motion-sensors in every room as well as sensors on every exterior door. A sensor beneath the mattress pad on his bed tells health care professionals if he's sleeping regularly. All of this connects wirelessly with vital sign monitors, which send his doctor daily reports about his blood-sugar levels, blood pressure and weight. He can see charts about how he's doing on a touch-screen monitor that sits on a desk in his home office. University researchers are testing robots that help take care of older people, keep them company -- and even give them sponge baths. Meanwhile, some younger people have taken to collecting information on their own, often going to extremes to document exercise routines, caffeine intake and the like and posting the data online.
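
The alerting logic described above boils down to simple rules over sensor events; a minimal sketch, in which the event fields, recipients and night-time window are assumptions, might look like this:

    from datetime import datetime

    RECIPIENTS = ["doctor", "monitoring company", "friend 1", "friend 2"]  # illustrative

    def handle_event(sensor, event, timestamp):
        """Raise an alert if an exterior door opens during the night."""
        if sensor == "exterior_door" and event == "opened" and (
                timestamp.hour >= 22 or timestamp.hour < 6):
            for recipient in RECIPIENTS:
                print(f"ALERT to {recipient}: exterior door opened at {timestamp:%H:%M}")

    handle_event("exterior_door", "opened", datetime(2010, 11, 19, 2, 30))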

More information:

http://www.cnn.com/2010/TECH/innovation/11/19/sensors.aging/

28 November 2010

When the Playroom is the Computer

For all the work that’s gone into developing educational media, even the most stimulating TV shows and video games leave kids stationary. Researchers at the MIT Media Laboratory are hoping to change that with a system called Playtime Computing, which gives new meaning to the term ‘computing environment’. The prototype of the Playtime Computing system consists mainly of three door-high panels with projectors behind them; a set of ceiling-mounted projectors that cast images onto the floor; and a cube-shaped, remote-controlled robot, called the Alphabot, with infrared emitters at its corners that are tracked by cameras mounted on the ceiling. But the system is designed to make the distinctions between its technical components disappear. The three panels together offer a window on a virtual world that, courtesy of the overhead projectors, appears to spill into the space in front of it. And most remarkably, when the Alphabot heads toward the screen, it slips into a box, some robotic foliage closes behind it, and it seems to simply continue rolling, at the same speed, up the side of a virtual hill. Among the symbols are letters of the Roman alphabet, Japanese characters, a heart and a pair of musical notes. When children attach the notes to the Alphabot, music begins to play from the system’s speakers, illustrating the principle that symbolic reasoning can cut across sensory modalities.

Another, vital element of the system is what the researchers call the Creation Station, a tabletop computer on which children can arrange existing objects or draw their own pictures. Whatever’s on the tabletop can be displayed by the projectors, giving children direct control over their environment. To make the Playtime Computing system even more interactive, the researchers have outfitted baseball caps with infrared light emitters, so that the same system that tracks the Alphabot could also track playing children. That would make it possible for on-screen characters — or a future, autonomous version of the Alphabot — to engage directly with the children. The researchers are eager, however, to begin experimenting with the new Microsoft Kinect, a gaming system that, unlike the Nintendo Wii, uses cameras rather than sensor-studded controllers to track gamers’ gestures. Kinect could offer an affordable means of tracking motion in the Playtime Computing environment, without requiring kids to wear hats. The prototype of the Alphabot, the researchers say, uses a few hundred dollars’ worth of off-the-shelf parts, and if the robot were mass-produced, its price would obviously fall. The researchers believe that simple, affordable versions of the Playtime Computing system could be designed for home use, while more elaborate versions, with multiple, multifunctional robots, could be used in the classroom or at museums.

More information:

http://web.mit.edu/newsoffice/2010/rolling-robot-1122.html

22 November 2010

Robot That Learns Via Touch

Researchers in Europe have created a robot that uses its body to learn how to think. It is able to learn how to interact with objects by touching them without needing to rely on a massive database of instructions for every object it might encounter. The robot is a product of the Europe-wide PACO-PLUS research project and operates on the principle of “embodied cognition,” which relies on two-way communication between the robot’s sensors in its hands and “eyes” and its processor. Embodied cognition enables the robot, known as AMAR, to solve problems that were unforeseen by its programmers, so when faced with a new task it investigates ways of moving or looking at things until the processor makes the required connections.

AMAR has learned to recognize common objects to be found in a kitchen, such as cups of various colors, plates, and boxes of cereal, and it responds to commands to interact with these objects by fetching them or placing them in a dishwasher, for example. One example of the tasks AMAR has learned to carry out is setting a table, and it is able to do this even if a cup is placed in its way. The robot worked out that the cup was in the way, was movable, and would be knocked over if left in the way, and so it moved the cup out of the way before continuing with its task. The type of thinking demonstrated by AMAR mimics the way humans perceive their environment in terms that depend on their ability to interact with it physically.

More information:

http://www.physorg.com/news/2010-11-armar-iii-robot-video.html

19 November 2010

Mouse Brain Visualisation

The most detailed magnetic resonance images ever obtained of a mammalian brain are now available to researchers in a free, online atlas of an ultra-high-resolution mouse brain, thanks to work at the Duke Center for In Vivo Microscopy. In a typical clinical MRI scan, each pixel in the image represents a cube of tissue, called a voxel, which is typically 1x1x3 millimeters. The atlas images, however, are more than 300,000 times higher resolution than an MRI scan, with voxels that are 20 micrometers on a side. The interactive images in the atlas will allow researchers worldwide to evaluate the brain from all angles and assess and share their mouse studies against this reference brain in genetics, toxicology and drug discovery. The brain atlas' detail reaches a resolution of 21 microns. A micron is a millionth of a meter, or 0.00003937 of an inch.
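
The 'more than 300,000 times' figure follows directly from comparing voxel volumes; the arithmetic below simply restates the numbers quoted above.

    # Volume of a typical clinical MRI voxel versus an atlas voxel, in cubic micrometres.
    clinical_voxel = 1000 * 1000 * 3000   # 1 x 1 x 3 millimetres
    atlas_voxel = 20 * 20 * 20            # 20 micrometres on a side
    print(clinical_voxel / atlas_voxel)   # 375000.0 -> "more than 300,000 times"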

The atlas used three different magnetic resonance microscopy protocols of the intact brain followed by conventional histology to highlight different structures in the reference brain. The brains were scanned using an MR system operating at a magnetic field more than 6 times higher than is routinely used in the clinic. The images were acquired on fixed tissues, with the brain in the cranium to avoid the distortion that occurs when tissues are thinly sliced for conventional histology. The new Waxholm Space brain can be digitally sliced from any plane or angle, so that researchers can precisely visualize any regions in the brain, along any axis without loss of spatial resolution. The team was also able to digitally segment 37 unique brain structures using the three different data acquisition strategies.

More information:

http://mouse.brain-map.org/

http://www.civm.duhs.duke.edu/neuro201001/

http://www.sciencedaily.com/releases/2010/10/101025123906.htm

16 November 2010

3D Maps of Brain Wiring

A new visualisation tool now makes it possible to view a total picture of the brain's winding roads and their contacts without having to operate. Doctors can virtually browse along the spaghetti-like ‘wiring’ of the brain with this new tool. Knowing accurately where the main nerve bundles in the brain are located is of immense importance for neurosurgeons. As an example, the researchers cite ‘deep brain stimulation’, with which tremors in patients with Parkinson’s disease can be suppressed.

With this new tool, it is possible to determine exactly where to place the stimulation electrode in the brain. The guiding map has been improved: because we now see the roads on the map, we know better where to stick the needle. The technique may also yield many new insights into neurological and psychiatric disorders. And it is important for brain surgeons to know in advance where the critical nerve bundles are, to avoid damaging them.

More information:

http://w3.tue.nl/en/news/news_article/?tx_ttnews[tt_news]=10122&tx_ttnews[backPid]=361&cHash=e497383d04

15 November 2010

Taking Movies Beyond Avatar

A new virtual camera development at the University of Abertay Dundee builds on the pioneering work of James Cameron’s blockbuster Avatar using a Nintendo Wii-like motion controller – all for less than £100. Avatar, the highest-grossing film of all time, used several completely new filming techniques to bring to life its ultra-realistic 3D action. Now computer games researchers have found a way of taking those techniques further using home computers and motion controllers. James Cameron invented a new way of filming called Simul-cam, where the image recorded is processed in real-time before it reaches the director’s monitor screen. This allows actors in motion-capture suits to be instantly seen as the blue Na’vi characters, without days spent creating computer-generated images. The Abertay researchers have linked the power of a virtual camera – where a computer dramatically enhances what a film camera could achieve – with a motion sensor.

This allows completely intuitive, immediately responsive camera actions within any computer-generated world. The applications of the project are substantial. Complex films and animations could be produced at a very low cost, giving new creative tools to small studios or artists at home. Computer environments can be manipulated in the same way as a camera, opening new opportunities for games, and for education. This tool uses electromagnetic sensors to capture the controller’s position to within a single millimetre, and unlike other controllers it still works even when an object is in the way. It will work on any home PC, and is expected to retail for under £100 from early 2011. A patent application for the invention and unique applications of the technology has recently been filed in the UK.

More information:

http://www.abertay.ac.uk/about/news/newsarchive/2010/name,6983,en.html

11 November 2010

Robotic Limbs that Plug into the Brain

Most of the robotic arms now in use by some amputees are of limited practicality; they have only two to three degrees of freedom, allowing the user to make a single movement at a time. And they are controlled with conscious effort, meaning the user can do little else while moving the limb. A new generation of much more sophisticated and lifelike prosthetic arms, sponsored by the Department of Defense's Defense Advanced Research Projects Agency (DARPA), may be available within the next five to 10 years. Two different prototypes that move with the dexterity of a natural limb and can theoretically be controlled just as intuitively--with electrical signals recorded directly from the brain--are now beginning human tests. The new designs have about 20 degrees of independent motion, a significant leap over existing prostheses, and they can be operated via a variety of interfaces. One device, developed by DEKA Research and Development, can be consciously controlled using a system of levers in a shoe.

In a more invasive but also more intuitive approach, amputees undergo surgery to have the remaining nerves from their lost limbs moved to the muscles of the chest. Thinking about moving the arm contracts the chest muscles, which in turn moves the prosthesis. But this approach only works in those with enough remaining nerve capacity, and it provides a limited level of control. To take full advantage of the dexterity of these prostheses, and make them function like a real arm, scientists want to control them with brain signals. Limited testing of neural implants in severely paralyzed patients has been underway for the last five years. About five people have been implanted with chips to date, and they have been able to control cursors on a computer screen, drive a wheelchair, and even open and close a gripper on a very simple robotic arm. More extensive testing in monkeys implanted with a cortical chip shows the animals can learn to control a relatively simple prosthetic arm in a useful way, using it to grab and eat a piece of marshmallow.

More information:

http://www.technologyreview.com/biomedicine/26622/

08 November 2010

Moving Holograms

A team of optical sciences researchers has developed a new type of holographic telepresence that allows the projection of a three-dimensional, moving image without the need for special eyewear such as 3D glasses or other auxiliary devices. The technology is likely to take applications ranging from telemedicine, advertising, updatable 3D maps and entertainment to a new level. Holographic telepresence means we can record a three-dimensional image in one location and show it in another location, in real-time, anywhere in the world. The prototype device uses a 10-inch screen, but researchers are already successfully testing a much larger version with a 17-inch screen. The image is recorded using an array of regular cameras, each of which views the object from a different perspective. The more cameras that are used, the more refined the final holographic presentation will appear.

That information is then encoded onto a fast-pulsed laser beam, which interferes with another beam that serves as a reference. The resulting interference pattern is written into the photorefractive polymer, creating and storing the image. Each laser pulse records an individual hogel (or holographic pixel) in the polymer. A hogel is the three-dimensional version of a pixel, the basic units that make up the picture. The hologram fades away by natural dark decay after a couple of minutes or seconds depending on experimental parameters. Or it can be erased by recording a new 3D image, creating a new diffraction structure and deleting the old pattern. The overall recording setup is insensitive to vibration because of the short pulse duration and therefore suited for industrial environment applications without any special need for vibration, noise or temperature control. Potential applications of holographic telepresence include advertising, updatable 3D maps, telemedicine and entertainment.

More information:

http://uanews.org/node/35220

28 October 2010

BCI Eavesdrops on a Daydream

New research points to the ability to snoop on people’s visual imagination—although it’s still a long way away from the full-fledged dream-reading technologies popularized in this summer’s blockbuster movie Inception. Scientists from Germany, Israel, Korea, the United Kingdom, and the United States have performed experiments in which they were able to monitor individual neurons in a human brain associated with specific visual memories. They then taught people to will one visual memory onto a television monitor to replace another. The results suggest that scientists have found a neural mechanism equivalent to imagination and daydreaming, in which the mental creation of images overrides visual input. And, if technology someday advances to enable reading the electrical activity of many thousands or millions of individual neurons (as opposed to the dozens typically available by hard-wiring methods today), scientists might begin to access snippets of real daydreams or actual dreams. The researchers inserted microwires into the brains of patients with severe epilepsy as part of a presurgery evaluation to treat their seizures.

The microwires were threaded into the medial temporal lobe (MTL), a region of the brain associated with both visual processing and visual memory. A typical patient might have 64 microwires cast into his MTL, like fishing lines into the ocean, researchers at Caltech mentioned. Soon after the patients’ surgery, researchers interviewed the subjects about places they’d recently visited or movies or television shows they’d recently seen. Then, on a display, the researchers would show images of the actors or visual landmarks the subjects had described. Slides of the Eiffel Tower, for instance, or Michael Jackson—who had recently died at the time of the experiment—would appear on a screen. Any image that reliably caused voltage spikes in one or more of the microwires would become one of the subject’s go-to images. There are about 5 million neurons in our brain that encode for the same concept. There are many neurons that fire all together when you think of Michael Jackson. But, the researchers add, each neuron also codes for numerous other people, ideas, or images, which is partly how we associate one memory with another thought, place, idea, or person.

More information:

http://www.youtube.com/user/NatureVideoChannel?feature=mhump/a/u/0/bqkUbiUkR5k

http://spectrum.ieee.org/biomedical/bionics/braincomputer-interface-eavesdrops-on-a-daydream/?utm_source=techalert&utm_medium=email&utm_campaign=102810

26 October 2010

Learning Neural Mechanisms

Learning from competitors is a critically important form of learning for animals and humans. A new study has used brain imaging to reveal how people and animals learn from failure and success. The team from Bristol University scanned the brains of players as they battled against an artificial opponent in a computer game. In the game, each player took turns with the computer to select one of four boxes whose payouts simulated the ebb and flow of natural food sources. Players were able to learn from their own successful selections, but their competitor’s successes completely failed to increase their neural activity. Instead, it was their competitor’s unexpected failures that generated this additional brain activity.

Such failures generated both reward signals in the brains of the players, and learning signals in regions involved with inhibiting response. This suggests that we benefit from our competitors’ failures by learning to inhibit the actions that lead to them. Surprisingly, when players were observing their competitor make selections, the players’ brains were activated as if they were performing these actions themselves. Such ‘mirror neuron’ activities occur when we observe the actions of other humans but here the players knew their opponent was just a computer and no animated graphics were used. Previously, it has been suggested that the mirror neuron system supports a type of unconscious mind-reading that helps us, for example, judge others’ intentions.

More information:


21 October 2010

Lightweight Mobile AR Navigation

A lightweight pair of augmented reality glasses that overlay the world with digital content, such as directions or a travel guide, has debuted in Japan. The headset, created by Olympus and phone-maker NTT Docomo, uses augmented reality software on an attached phone. A virtual tour of Kyoto was used as the first demonstration of the technology. While AR glasses are nothing new, these are among the first to add a miniature projecting display without causing too much encumbrance to the wearer. Researchers at the two companies said they had managed to whittle an earlier "AV Walker" prototype down from 91g to no more than 20g. The retinal display projects text and images directly into the user's peripheral vision, allowing the wearer to maintain eye contact with whatever they are observing normally.

As the glasses are attached to a smartphone with AR software, an acceleration sensor and a direction sensor, the AR Walker knows approximately what you are looking at and provides augmented information relevant to where you may be. The display can also be used to give directions with arrows, and if a person lifts their head up to the sky a weather forecast is automatically projected into their peripheral vision. Augmented reality apps for smartphones such as Layar and Wikitude are already having some success as guides to our immediate surroundings. But as this usually involves holding up and pointing the mobile's camera in the direction you are looking, AR Walker and its like have the added benefit of accessing information about your surroundings without altering your natural behaviour. According to the developers, a release date for the AR glasses has yet to be determined.
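
Combining GPS with the direction sensor to decide which arrow to project is, at its core, a bearing calculation; the sketch below uses the standard great-circle bearing formula plus a hypothetical decision rule (the thresholds and example point of interest are assumptions).

    import math

    def bearing(lat1, lon1, lat2, lon2):
        """Initial great-circle bearing from point 1 to point 2, in degrees."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        return math.degrees(math.atan2(y, x)) % 360

    def guidance_arrow(user_lat, user_lon, heading, poi_lat, poi_lon):
        """Turn the gap between compass heading and bearing into an on-screen arrow."""
        diff = (bearing(user_lat, user_lon, poi_lat, poi_lon) - heading + 180) % 360 - 180
        if abs(diff) < 15:
            return "straight ahead"
        return "turn right" if diff > 0 else "turn left"

    # Example: walking north in Kyoto with a point of interest to the north-east.
    print(guidance_arrow(35.0116, 135.7681, 0, 35.02, 135.78))   # -> turn right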

More information:

http://www.bbc.co.uk/news/technology-11494729

19 October 2010

Vital Signs On Camera

You can check a person’s vital signs — pulse, respiration and blood pressure — manually or by attaching sensors to the body. But a student in the Harvard-MIT Health Sciences and Technology program is working on a system that could measure these health indicators just by putting a person in front of a low-cost camera such as a laptop computer’s built-in webcam. So far, the graduate student has demonstrated that the system can indeed extract accurate pulse measurements from ordinary low-resolution webcam imagery. Now the student is working on extending the capabilities so it can measure respiration and blood-oxygen levels. The system measures slight variations in brightness produced by the flow of blood through blood vessels in the face. Public-domain software is used to identify the position of the face in the image, and then the digital information from this area is broken down into the separate red, green and blue portions of the video image.

In tests, the pulse data derived from this setup were compared with the pulse determined by a commercially available FDA-approved blood-volume pulse sensor. The big challenge was dealing with movements of the subject and variations in the ambient lighting. But researchers were able to adapt signal-processing techniques originally developed to extract a single voice from a roomful of conversations, a method called Independent Component Analysis, in order to extract the pulse signal from the ‘noise’ of these other variations. The system produced pulse rates that agreed to within about three beats per minute with the rates obtained from the approved monitoring device, and was able to obtain valid results even when the subject was moving a bit in front of the camera. In addition, the system was able to get accurate pulse signals from three people in the camera’s view at the same time.
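
A rough reconstruction of that signal path - average the red, green and blue values over the face region, unmix them with Independent Component Analysis, and read off the dominant frequency - can be sketched with standard libraries. The data here are synthetic and the code is a generic illustration using scikit-learn's FastICA, not the student's actual implementation.

    import numpy as np
    from sklearn.decomposition import FastICA

    fps, seconds, pulse_hz = 30, 30, 1.2          # 1.2 Hz is roughly 72 beats per minute
    t = np.arange(fps * seconds) / fps
    rng = np.random.default_rng(0)

    # Synthetic per-frame mean R, G, B values: a weak pulse buried in drift and noise.
    pulse = 0.05 * np.sin(2 * np.pi * pulse_hz * t)
    drift = 0.3 * np.sin(2 * np.pi * 0.05 * t)
    rgb = np.column_stack([w * pulse + drift + 0.1 * rng.standard_normal(t.size)
                           for w in (0.6, 1.0, 0.4)])

    # Unmix the three channels into independent sources.
    sources = FastICA(n_components=3, random_state=0).fit_transform(rgb)

    # Pick the source whose spectrum peaks in a plausible pulse band (0.75-4 Hz).
    freqs = np.fft.rfftfreq(t.size, d=1 / fps)
    band = (freqs > 0.75) & (freqs < 4.0)
    powers = [np.abs(np.fft.rfft(s))[band].max() for s in sources.T]
    best = sources[:, int(np.argmax(powers))]
    peak = freqs[band][np.argmax(np.abs(np.fft.rfft(best))[band])]
    print(f"estimated pulse: {peak * 60:.0f} beats per minute")   # roughly 72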

More information:

http://web.mit.edu/newsoffice/2010/pulse-camera-1004.html

16 October 2010

Mobile Health Monitoring

Imec and Holst Centre, together with TASS software professionals, have developed a mobile heart monitoring system that allows users to view their electrocardiogram on an Android mobile phone. The innovation is a low-power interface that transmits signals from a wireless ECG (electrocardiogram or heart monitoring) sensor system to an Android mobile phone. With this interface, imec, Holst Centre and TASS are the first to demonstrate a complete Body Area Network (BAN) connected to a mobile phone enabling reliable long-term ambulatory monitoring of various health parameters such as cardiac performance (ECG), brain activity (EEG), muscle activity (EMG), etc. The system will be demonstrated at the Wireless Health Conference in San Diego (US, October 5-7).

The aging population, combined with the increasing need for care and the rising costs of healthcare, has become a challenge for our society. Mobile health, which integrates mobile computing technologies with healthcare delivery systems, will play a crucial role in solving this problem by delivering more comfortable, more efficient and more cost-efficient healthcare. Body Area Networks (BANs) are an essential component of mHealth. BANs are miniaturized sensor networks consisting of lightweight, ultra-low-power, wireless sensor nodes which continuously monitor physical and vital parameters. They provide long-term monitoring, while maintaining user mobility and comfort. For example, patients could be monitored at home rather than being compelled to stay in a hospital.

More information:

http://www2.imec.be/be_en/press/imec-news/wirelesshealthnecklaceinterface.html

12 October 2010

Pin-Size Tracking Device

Optical gyroscopes, also known as rotation sensors, are widely used as a navigational tool in vehicles from ships to airplanes, measuring the rotation rates of a vehicle on three axes to evaluate its exact position and orientation. Researchers of Tel Aviv University's School of Physical Engineering are now scaling down this crucial sensing technology for use in smartphones, medical equipment and more futuristic technologies. Working in collaboration with Israel's Department of Defense, researchers have developed nano-sized optical gyroscopes that can fit on the head of a pin. These gyroscopes will have the ability to pick up smaller rotation rates, delivering higher accuracy while maintaining smaller dimensions.

At the core of the new device are extremely small semiconductor lasers. As the devices start to rotate, the properties of the light produced by the lasers change, including the light's intensity and wavelength. Rotation rates can be determined by measuring these differences. These lasers are a few tens of micrometers in diameter, as compared to the conventional gyroscope, which measures about 6 to 8 inches. The device itself, when finished, will look like a small computer chip. Measuring a millimeter by a millimeter (0.04 inches by 0.04 inches), about the size of a grain of sand, the device can be built onto a larger chip that also contains other necessary electronics.

More information:

http://www.aftau.org/site/News2?page=NewsArticle&id=13047

04 October 2010

Cars As Traffic Sensors

Data about road and traffic conditions can come from radio stations’ helicopters, the Department of Transportation’s roadside sensors, or even, these days, updates from ordinary people with cell phones. But all of these approaches have limitations: Helicopters are costly to deploy and can observe only so many roads at once, and it could take a while for the effects of congestion to spread far enough that a road sensor will detect them. MIT’s CarTel project is investigating how cars themselves could be used as ubiquitous, highly reliable mobile sensors. Members of the CarTel team recently presented a new algorithm that would optimize the dissemination of data through a network of cars with wireless connections.

Researchers at Ford are already testing the new algorithm for possible inclusion in future versions of Sync, the in-car communications and entertainment system developed by Ford and Microsoft. For the last four years, CarTel has been collecting data about the driving patterns of Boston-area taxicabs equipped with GPS receivers. On the basis of those data, the CarTel researchers have been developing algorithms for the collection and dissemination of information about the roadways. Once the algorithms have been evaluated and refined, the CarTel researchers plan to test them in an additional, real-world experiment involving networked vehicles. The new algorithm is among those that the group expects to test.

More information:

http://web.mit.edu/newsoffice/2010/cars-sensors-0924.html

01 October 2010

Feelings by Phone

A system which enables psychologists to track people’s emotional behaviour through their mobile phones has been successfully road-tested by researchers. ‘EmotionSense’ uses speech-recognition software and phone sensors in standard smart phones to assess how people's emotions are influenced by factors such as their surroundings, the time of day, or their relationships with others. It was developed by a University of Cambridge-led team of academics, including both psychologists and computer scientists. They will report the first successful trial of the system today at the Association for Computing Machinery's conference on Ubiquitous Computing in Copenhagen. Early results suggest that the technology could provide psychologists with a much deeper insight into how our emotional peaks - such as periods of happiness, anger or stress - are related to where we are, what we are doing or who we are with. EmotionSense uses the recording devices which already exist in many mobile phones to analyse audio samples of the user speaking.

The samples are compared with an existing speech library (known as the ‘Emotional Prosody Speech and Transcripts Library’) which is widely used in emotion and speech processing research. The library consists of actors reading a series of dates and numbers in tones representing 14 different emotional categories. From here, the samples are grouped into five broader categories: "Happy" emotions (such as elation or interest); "Sadness"; "Fear"; "Anger" (which includes related emotions such as disgust); and "Neutral" emotions (such as boredom or passivity). The data can then be compared with other information which is also picked up by the phone. Built-in GPS software enables researchers to cross-refer the audio samples with the user's location, Bluetooth technology can be used to identify who they were with, and the phone also records data about who they were talking to and at what time the conversation took place. The software is also set up so that the analysis is carried out on the phone itself. This means that data does not need to be transmitted elsewhere and can be discarded post-analysis with ease to maintain user privacy.
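
The grouping step from fine-grained prosody labels into the five broad categories is essentially a lookup table; the sketch below includes only the labels actually named in the article, not the library's full 14-category inventory.

    # Map fine-grained prosody labels onto the five broad EmotionSense groups.
    # Only labels mentioned above are listed; the full inventory is omitted.
    BROAD_CATEGORY = {
        "elation": "Happy", "interest": "Happy",
        "sadness": "Sadness",
        "fear": "Fear",
        "anger": "Anger", "disgust": "Anger",
        "boredom": "Neutral", "passivity": "Neutral",
    }

    def classify_sample(fine_label):
        return BROAD_CATEGORY.get(fine_label.lower(), "Neutral")

    print(classify_sample("disgust"))   # -> Anger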

More information:

http://www.admin.cam.ac.uk/news/dp/2010092804

28 September 2010

Simulations of Real Earthquakes

A Princeton University-led research team has developed the capability to produce realistic movies of earthquakes based on complex computer simulations that can be made available worldwide within hours of a disastrous upheaval. The videos show waves of ground motion spreading out from an epicenter. In making them widely available, the team of computational seismologists and computer scientists aims to aid researchers working to improve understanding of earthquakes and develop better maps of the Earth's interior. When an earthquake takes place, data from seismograms measuring ground motion are collected by a worldwide network of more than 1,800 seismographic stations operated by members of the international Federation of Digital Seismograph Networks. The earthquake's location, depth and intensity also are determined. The ShakeMovie system at Princeton will now collect these recordings automatically using the Internet. The scientists will input the recorded data into a computer model that creates a virtual earthquake. The videos will incorporate both real data and computer simulations known as synthetic seismograms. These simulations fill the gaps between the actual ground motion recorded at specific locations in the region, providing a more complete view of the earthquake. The animations rely on software that produces numerical simulations of seismic wave propagation in sedimentary basins.

The software computes the motion of the Earth in 3D based on the actual earthquake recordings, as well as what is known about the subsurface structure of the region. The shape of underground geological structures in the area not recorded on seismograms is key, the researchers said, as the structures can greatly affect wave motion by bending, speeding, slowing or simply reflecting energy. The simulations are created on a parallel processing computer cluster built and maintained by the Princeton Institute for Computational Science and Engineering (PICSciE) and on a computer cluster located at the San Diego Supercomputing Center. After the three-dimensional simulations are computed, the software program plugs in data capturing surface motion, including displacement, velocity and acceleration, and maps it onto the topography of the region around the earthquake. The movies are then automatically published via the ShakeMovie portal. An e-mail is also sent to subscribers, including researchers, news media and the public. The simulations will be made available to scientists through the data management center of the Incorporated Research Institutions for Seismology (IRIS) in Seattle. The organization distributes global scientific data to the seismological community via the Internet. Scientists can visit the IRIS website and download information. Due to the research team's work, scientists will now be able to compare seismograms directly with synthetic versions.
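
Once synthetic seismograms exist alongside recorded ones, the natural comparison is waveform against waveform; one common, simple measure is the normalised correlation sketched below. The traces here are synthetic and this is not the ShakeMovie code.

    import numpy as np

    def waveform_fit(recorded, synthetic):
        """Normalised zero-lag correlation between a recorded and a synthetic
        seismogram: 1.0 means identical shape, 0 means no resemblance."""
        r = (recorded - recorded.mean()) / recorded.std()
        s = (synthetic - synthetic.mean()) / synthetic.std()
        return float(np.mean(r * s))

    # Illustration: a synthetic "recording" and a slightly imperfect simulation of it.
    t = np.linspace(0, 60, 3000)
    recorded = np.sin(2 * np.pi * 0.2 * t) * np.exp(-t / 30)
    synthetic = 0.9 * np.sin(2 * np.pi * 0.2 * t + 0.1) * np.exp(-t / 28)
    print(round(waveform_fit(recorded, synthetic), 3))   # close to 1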

More information:

http://www.sciencedaily.com/releases/2010/09/100922171608.htm

24 September 2010

Virtual Mediterranean Islands

Three-dimensional versions of Mediterranean islands will be updated virtually automatically with current information from a range of public and private databases. The European research project may launch a revolution in the tourist trade sector. MedIsolae-3D is a project that combined software designed for aircraft landing simulations with orthophotography and satellite images of the islands, as well as public data such as digital terrain models, maps and tourist services to create the portal to the 3D island experience. It has capitalised on the LANDING project that was also funded by the Aviation Sector of the EC/RTD programme. The plan is to link the virtual-visiting tool to web-geoplatforms such as Google Earth, MS Virtual Earth, or ESRI ArcGlobe to make it available to people across the globe. The EU-funded MedIsolae-3D project planned to deliver the service to more than 100 European Mediterranean islands – territories of Greece, Cyprus, France, Italy, Malta and Spain offer platforms for island visualisation.

One of the biggest challenges for the MedIsolae-3D team was to take data from local governments and other providers in a range of formats and data standards, and to use this data to produce a system capable of interoperating its sources to deliver a single virtual visiting service. The project builds on the recent development of Inspire, a standardised Spatial Data Infrastructure (SDI) for Europe. Inspire, backed by an EU Directive, creates a standard that allows the integration of spatial information services across the Union. Once standardised, users can access local and global level social services, in an interoperable way. The result of the combined datasets must be seamless for the user as they move from satellite-generated images above the islands and onto the island’s roads and streets. Once the MedIsolae-3D framework is in place, it can work in combination with a range of spatial data services to aid tourism, transportation and other money-earners for the island economies, but it can also provide services for health and disaster planning, the environment, and policy-making.

More information:

http://cordis.europa.eu/ictresults/index.cfm?section=news&tpl=article&BrowsingType=Features&ID=91441

23 September 2010

VS-GAMES '11 Conference

The 3rd International Conference in Games and Virtual Worlds for Serious Applications 2011 (VS-GAMES 2011) will be held between 4-6 May, at the National Technical University of Athens (NTUA) in Athens, Greece. The emergence of serious or non-leisure uses of games technologies and virtual worlds applications has been swift and dramatic over the last few years. The 3rd International Conference in Games and Virtual Worlds for Serious Applications (VS-GAMES’11) aims to meet the significant challenges of the cross-disciplinary community that work around these serious application areas by bringing the community together to share case studies of practice, to present virtual world infrastructure developments, as well as new frameworks, methodologies and theories, and to begin the process of developing shared cross-disciplinary outputs.

We are seeking contributions that advance the state of the art in the technologies available to support the sustainability of serious games. Topics in the areas of environment, military, cultural heritage, health, smart buildings, v-commerce and education are particularly encouraged. Invited speakers include Prof. Carol O'Sullivan, Head of the Graphics, Vision and Visualisation Group (GV2) at Trinity College Dublin, and Prof. Peter Comninos, Director of the National Centre for Computer Animation (NCCA) at Bournemouth University and MD of CGAL Software Limited. The best technical full papers will be published in a special issue of the International Journal of Interactive Worlds (IJIW). The best educational papers will be submitted to the IEEE Transactions on Learning Technologies. The paper submission deadline is 1st Nov 2010.

More information:

http://www.vs-games.org/

22 September 2010

Virtual Human Unconsciousness

Virtual characters can behave according to actions carried out unconsciously by humans. Researchers at the University of Barcelona have created a system which measures human physiological parameters, such as respiration or heart rate, and introduces them into computer designed characters in real time. The system uses sensors and wireless devices to measure three physiological parameters in real time: heart rate, respiration, and the galvanic (electric) skin response. Immediately, the data is processed with a software programme that is used to control the behaviour of a virtual character who is sitting in a waiting room.

The heart rate is reflected in the movement of the character's feet; respiration in the rising of their chest (exaggerated movements so that it can be noticed); and the galvanic skin response in the more or less reddish colour of the face. The researchers conducted an experiment to see if the people whose physiological parameters were recorded had any preference as regards the virtual actor who was to use them, without them knowing in advance. But the result was negative, probably because other factors also influence the choice such as the character's appearance or their situation in the scene. The team is now studying how to solve this problem.
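
The mapping described above - heart rate into foot movement, respiration into chest rise, skin response into facial redness - is a direct signal-to-animation translation; a minimal sketch, in which all ranges and scalings are assumptions, could look like this:

    def avatar_parameters(heart_rate_bpm, breaths_per_min, gsr_microsiemens):
        """Translate physiological readings into animation parameters for the
        seated virtual character. All ranges and scalings are illustrative."""
        def normalise(value, low, high):
            return min(1.0, max(0.0, (value - low) / (high - low)))

        return {
            "foot_tap_rate_hz": heart_rate_bpm / 60.0,               # one tap per heartbeat
            "chest_rise_amplitude": 0.5 + 0.5 * normalise(breaths_per_min, 8, 25),
            "face_redness": normalise(gsr_microsiemens, 1.0, 12.0),  # 0 = pale, 1 = flushed
        }

    print(avatar_parameters(heart_rate_bpm=72, breaths_per_min=14, gsr_microsiemens=5.0))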

More information:

http://www.sciencedaily.com/releases/2010/09/100902073637.htm

18 September 2010

The Brain Speaks

In an early step toward letting severely paralyzed people speak with their thoughts, University of Utah researchers translated brain signals into words using two grids of 16 microelectrodes implanted beneath the skull but atop the brain. Because the method needs much more improvement and involves placing electrodes on the brain, it will be a few years before clinical trials can begin on paralyzed people who cannot speak due to so-called ‘locked-in syndrome’. The University of Utah research team placed grids of tiny microelectrodes over speech centers in the brain of a volunteer with severe epileptic seizures. The man already had a craniotomy - temporary partial skull removal - so doctors could place larger, conventional electrodes to locate the source of his seizures and surgically stop them. Using the experimental microelectrodes, the scientists recorded brain signals as the patient repeatedly read each of 10 words that might be useful to a paralyzed person: yes, no, hot, cold, hungry, thirsty, hello, goodbye, more and less. Later, they tried figuring out which brain signals represented each of the 10 words. When they compared any two brain signals - such as those generated when the man said the words ‘yes’ and ‘no’ - they were able to distinguish brain signals for each word 76 percent to 90 percent of the time.

When they examined all 10 brain signal patterns at once, they were able to pick out the correct word any one signal represented only 28 percent to 48 percent of the time - better than chance (which would have been 10 percent) but not good enough for a device to translate a paralyzed person's thoughts into words spoken by a computer. People who eventually could benefit from a wireless device that converts thoughts into computer-spoken words include those paralyzed by stroke, Lou Gehrig's disease and trauma. The study used a new kind of nonpenetrating microelectrode that sits on the brain without poking into it. These electrodes are known as microECoGs because they are a small version of the much larger electrodes used for electrocorticography, or ECoG, developed a half century ago. For patients with severe epileptic seizures uncontrolled by medication, surgeons remove part of the skull and place a silicone mat containing ECoG electrodes over the brain for days to weeks while the cranium is held in place but not reattached. The button-sized ECoG electrodes don't penetrate the brain but detect abnormal electrical activity and allow surgeons to locate and remove a small portion of the brain causing the seizures.

More information:

http://www.unews.utah.edu/p/?r=062110-3

14 September 2010

Electric Skin Rivals the Real Thing

The tactile sensitivity of human skin is hard to re-create, especially over large, flexible surfaces. But two California research groups have made pressure-sensing devices that significantly advance the state of the art. One, made by researchers at Stanford University, is based on organic electronics and is 1,000 times more sensitive than human skin. The second, made by researchers at the University of California, Berkeley, uses integrated arrays of nanowire transistors and requires very little power. Both devices are flexible and can be printed over large areas.

Highly sensitive surfaces could help robots pick up delicate objects without breaking them, give prosthetics a sense of touch, and give surgeons finer control over tools used for minimally invasive surgery. The researchers' goal is to mimic human skin, which responds quickly to pressure and can detect objects as small as a grain of sand and as light as an insect. This approach can be used to make flexible materials with inexpensive printing techniques, but the resulting device requires high voltages to operate.

More information:

http://www.technologyreview.com/computing/26256/?a=f

13 September 2010

3D Movies via Internet & Satellite

Multiview Video Coding (MVC) is the new standard for 3D movie compression. While significantly reducing the amount of data, MVC at the same time provides full high-resolution quality. Blockbusters like Avatar, UP or Toy Story 3 will bring 3D into home living rooms, televisions and computers. There are already displays available and the new Blu-Ray players can already play 3D movies based on MVC. The first soccer games were recorded stereoscopically at the Football World Championships in South Africa. What is missing is an efficient form of transmission. The problem is the data rate required by the movies – in spite of fast Internet and satellite links. 3D movies have higher data rate requirements than 2D movies since at least two images are needed for the spatial representation. This means that a 3D screen has to show two images – one for the left and one for the right eye.

Researchers at the Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institut, HHI in Berlin, Germany, have already come up with a compression technique, particularly for movies in HD quality, that squeezes films while maintaining their quality: the H.264/AVC video format. What H.264/AVC is for HD movies, Multiview Video Coding (MVC) is for 3D movies. The benefit is a reduced data rate on the transmission channel at the same high-definition quality. Videos on the Internet have to load quickly so that the viewer can watch them without interruptions. MVC packs the two images needed for the stereoscopic 3D effect so that the bit rate of the movies is significantly reduced; the resulting 3D movies are up to 40 percent smaller. Users will be able to experience 3D movies in their living rooms in the near future.
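
To put that 40 percent figure in perspective, here is a hedged back-of-the-envelope calculation. The 8 Mbit/s figure for a single H.264/AVC HD stream is an assumption for illustration, and the reading that "up to 40 percent smaller" is measured against transmitting both views independently (simulcast) is an interpretation, not something the article states explicitly.

# Illustrative bandwidth comparison with assumed numbers.
MONO_HD_MBPS = 8.0                      # assumed bit rate of one HD view
simulcast = 2 * MONO_HD_MBPS            # left + right view coded independently
mvc = simulcast * (1 - 0.40)            # up to 40% reduction reported for MVC

print(f"simulcast: {simulcast:.1f} Mbit/s, MVC: {mvc:.1f} Mbit/s")
# -> simulcast: 16.0 Mbit/s, MVC: 9.6 Mbit/s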

More information:

http://www.fraunhofer.de/en/press/research-news/2010/08/3d-movies-via-internet-und-satellit.jsp

06 September 2010

EmotionML

For all those who believe the computing industry is populated by people who are out of touch with the world of emotion, it's time to think again. The World Wide Web Consortium (W3C), which standardizes many Web technologies, is working on formalizing emotional states in a way that computers can handle. The name of the specification, which in July reached second-draft status, is Emotion Markup Language. EmotionML combines the rigor of computer programming with the squishiness of human emotion. But the Multimodal Interaction Working Group that's overseeing creation of the technology really does want to marry the two worlds. Some of the work is designed to provide a more sophisticated alternative to smiley faces and other emoticons for people communicating with other people.

It's also geared to improve communication between people and computers. The idea is called affective computing in academic circles, and if it catches on, computer interactions could be very different. Avatar faces could show their human master's expression during computer chats. Games could adjust play intensity according to the player's reactions. Customer service representatives could be alerted when customers are really angry. Computers could respond to your expressions as people do. Computer help technologies like Microsoft's Clippy, or a robot waiter, could discern when to make themselves scarce. EmotionML thus embodies two very different forms of expression: the squishy nature of emotion and the rigorously precise language of a standard.
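
To make the idea concrete, the snippet below generates a tiny EmotionML-style annotation with Python's standard library. The element and attribute names (emotionml, emotion, category, and the 2009 namespace URI) follow my reading of the W3C draft and should be treated as illustrative rather than normative.

# Hedged sketch of an EmotionML-style annotation built with the standard library.
import xml.etree.ElementTree as ET

NS = "http://www.w3.org/2009/10/emotionml"   # namespace used by the draft

root = ET.Element("emotionml", xmlns=NS)
emotion = ET.SubElement(root, "emotion")
# annotate a chat message as moderately angry (scale 0..1)
ET.SubElement(emotion, "category", name="anger", value="0.6")

print(ET.tostring(root, encoding="unicode"))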

More information:

http://news.cnet.com/8301-30685_3-20014967-264.html

30 August 2010

Thought-Controlled Computer

Mind-controlled computing is set to go a step further with Intel's latest development. Existing computers operated by brain power require the user to mentally move a cursor on the screen, but the new computers will be designed to read the words a user is thinking directly. Intel scientists are currently mapping out the brain activity produced when people think of particular words, by measuring activity at about 20,000 locations in the brain. The devices being used for the mapping at the moment are expensive and bulky MRI scanners, similar to those used in hospitals, but smaller gadgets that could be worn on the head are being developed. Once the brain activity is mapped out, the computer will be able to determine which words are being thought by identifying similar brain patterns and the differences between them.

Words produce activity in parts of the brain associated with what the word represents. So thinking of a word for a type of food, such as apple, results in activity in the parts of the brain associated with hunger, while a word with a physical association such as spade produces activity in the areas of the motor cortex related to making the physical movements of digging. In this way the computer can infer attributes of a word to narrow it down and identify it quickly. A working prototype can already detect words like house, screwdriver and barn, but as brain scanning becomes more advanced the computer's ability to understand thoughts will improve. If the plans are successful users will be able to surf the Internet, write emails and carry out a host of other activities on the computer simply by thinking about them.
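
The sketch below illustrates, in a deliberately toy form that is not Intel's implementation, how attribute-based narrowing could work: each candidate word carries a hypothetical expected activation pattern over a few named brain regions, and a thought is identified by the template that correlates best with the observed activation. All region names and numbers are made up for illustration.

# Toy template-correlation identifier for "thought" words.
import numpy as np

REGIONS = ["hunger/appetite", "motor cortex", "visual cortex", "auditory cortex"]

# Hypothetical templates: relative activation expected for each word.
TEMPLATES = {
    "apple":       np.array([0.9, 0.1, 0.6, 0.1]),
    "spade":       np.array([0.1, 0.9, 0.5, 0.1]),
    "screwdriver": np.array([0.1, 0.8, 0.6, 0.2]),
}

def identify(observed):
    """Pick the word whose template correlates best with the observed pattern."""
    def corr(a, b):
        return np.corrcoef(a, b)[0, 1]
    return max(TEMPLATES, key=lambda w: corr(observed, TEMPLATES[w]))

print(identify(np.array([0.85, 0.15, 0.55, 0.1])))   # -> "apple"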

More information:

http://www.physorg.com/news201939898.html

27 August 2010

Robots Learning from Experience

Software that enables robots to move objects about a room, building up ever more knowledge about their environment, is an important step forward in artificial intelligence. Some objects can be moved, while others cannot. Balls can be placed on top of boxes, but boxes cannot be stably stacked on top of balls. A typical one-year-old child can discover this kind of information about its environment very quickly. But it is a massive challenge for a robot – a machine – to learn concepts such as ‘movability’ and ‘stability’, according to researchers at Bonn-Rhein-Sieg University who are members of the Xpero robotics research project team. The aim of the Xpero project was to develop a cognitive system for a robot that would enable it to explore the world around it and learn through physical experimentation. The first step was to create an algorithm that enabled the robot to discover its environment from the data it received from its sensors. The Xpero researchers gave the robot some very basic predefined knowledge, expressed in logic: the robot believes that things are either true or false. The robot uses the data from its sensors as it moves about to test that knowledge. When the robot finds that an expectation is false, it starts to experiment to find out why, and to correct its hypotheses. Picking out the important factors in the massive and continuous flow of data from the robot’s sensors was one challenge for the EU-funded Xpero project team. Finding a way for a logic-based system to deal with the concept of time was a second.

Part of the Xpero team’s solution was to ignore some of the data flowing in every millisecond and instead have the robot compare snapshots of the situation taken a few seconds apart. When an expectation proved false, they also had the robot build a new hypothesis that kept the logic connectors from its old hypothesis and simply changed the variables, which drastically cut down the number of possible solutions. An important development from Xpero is the robot’s ability to build up its knowledge base. In award-winning demonstrations, robots with the Xpero cognitive system on board have moved about, pushed and placed objects, learning all the time about their environment. In an exciting recent development, the robot has started to use objects as tools, using one object to move or manipulate another object that it cannot reach directly. The Xpero project lays the first cornerstones for what could become a key technology for the next generation of so-called service robots, which clean our houses and mow our lawns, replacing the rather dumb, pre-programmed devices on the market today. A robotics manufacturer is already planning to use parts of the Xpero platform in the edutainment market.
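
A strongly simplified sketch of that revision loop is given below. It is not the Xpero code: hypotheses are single implications, snapshots are lists of attribute dictionaries, and the predicate names are invented, but it shows the pattern of testing an expectation against periodic snapshots and, on failure, keeping the logical connector while swapping the variables.

# Toy hypothesis-revision loop: a hypothesis is an implication "P(x) -> Q(x)".
PREDICATES = ["is_ball", "is_box", "is_movable", "is_stable_base"]

def holds(pred, obj):
    return obj.get(pred, False)

def falsified(hyp, snapshot):
    """Return an object that satisfies the premise but not the conclusion."""
    premise, conclusion = hyp
    for obj in snapshot:
        if holds(premise, obj) and not holds(conclusion, obj):
            return obj
    return None

def revise(hyp, snapshots):
    """Keep the implication structure, swap in new predicates, keep survivors."""
    premise, _ = hyp
    candidates = [(premise, q) for q in PREDICATES if q != premise]
    return [c for c in candidates
            if all(falsified(c, s) is None for s in snapshots)]

# Initial (wrong) belief: "anything that is a ball makes a stable base".
snapshot = [{"is_box": True, "is_movable": True, "is_stable_base": True},
            {"is_ball": True, "is_movable": True}]
hypothesis = ("is_ball", "is_stable_base")
if falsified(hypothesis, snapshot):
    print(revise(hypothesis, [snapshot]))    # -> [('is_ball', 'is_movable')]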

More information:

http://cordis.europa.eu/ictresults/index.cfm?section=news&tpl=article&BrowsingType=Features&ID=91421

26 August 2010

VR You Can Touch

Researchers at the Computer Vision Lab at ETH Zurich have developed a method with which they can produce virtual copies of real objects. The copies can be touched and even sent via the Internet. By incorporating the sense of touch, the user can delve deeper into virtual reality: the virtual object is projected into the actual environment and can be felt using a sensor rod. Sending a friend a virtual birthday present, or quickly beaming a new product over to a customer in America to try out – it sounds like science fiction, but this is what the researchers at the Computer Vision Lab want to make possible with the aid of the new technology. Their first step was to successfully transmit a virtual object to a spatially remote person, who could not only see the object but also feel it and move it. The more senses are stimulated, the greater the degree of immersion in the virtual reality. While visual and acoustic simulation of virtual reality has become increasingly realistic in recent years, development in the haptic area, in other words the sense of touch, lags far behind. Up to now, it has not been possible to touch the virtual copy of an object, or to move it.

The researchers developed a method for combining visual and haptic impressions with one another. Whilst a 3D scanner records an image of the object, which in one experiment was a soft toy frog, a user simultaneously senses the object using a haptic device. The sensor arm, which can be moved in any direction and is equipped with force, acceleration, and slip sensors, collects information about shape and solidity. With the aid of an algorithm, a virtual copy is created on the computer from the measurements – even while the toy frog is still being scanned and probed. The virtual copy can be sent to another person over the Internet if desired. In order for this other person to be able to see and feel the virtual frog, special equipment is needed: data goggles with a monitor onto which the virtual object is projected, and a sensor rod which is equipped with small motors. A computer program calculates when the virtual object and the sensor rod meet, and then sends a signal to the motors in the rod. These brake the movement that is being made by the user, thereby simulating resistance. The user has the sensation of touching the frog, whilst from the outside it appears that he is touching air.
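
The force calculation in such a system can be as simple as a penalty (spring) model, a standard technique in haptic rendering; the sketch below applies it to a virtual cylinder. The stiffness value, cylinder radius and function names are assumptions for illustration, not details taken from the ETH system.

# Penalty-based haptic rendering sketch: when the tracked rod tip penetrates
# the virtual cylinder wall, command a braking force proportional to depth.
import math

STIFFNESS = 600.0            # N/m, assumed virtual surface stiffness
CYL_RADIUS = 0.05            # m, radius of the virtual cylinder (axis along z)

def feedback_force(tip_x, tip_y):
    """Force pushing the rod back out of the cylinder wall (2D cross-section)."""
    dist = math.hypot(tip_x, tip_y)          # distance of tip from cylinder axis
    penetration = CYL_RADIUS - dist
    if penetration <= 0:
        return (0.0, 0.0)                    # tip is outside: free motion
    if dist < 1e-9:                          # tip exactly on the axis: pick a direction
        nx, ny = 1.0, 0.0
    else:
        nx, ny = tip_x / dist, tip_y / dist  # outward surface normal
    f = STIFFNESS * penetration              # Hooke-style penalty force
    return (f * nx, f * ny)

print(feedback_force(0.048, 0.0))            # roughly (1.2 N, 0): a small push back along +x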

More information:

http://www.ethlife.ethz.ch/archive_articles/100816_virtuelle_realitaet_cho/index_EN

23 August 2010

Desk Lamp Turns TableTop Into 3D

Switching on a lamp is all it takes to turn a table-top into an interactive map with this clever display, on show at the SIGGRAPH computer graphics and animation conference in Los Angeles. Multi-touch table-top displays project content through glass and respond to touch – imagine a table-sized smartphone screen. But researchers from the National Taiwan University in Taipei wanted to make these types of screens more appealing for multiple users. The idea is that several people could look at the same images, and get more information about the areas that interest them, using moveable objects. Users viewing an image such as a map projected onto a table-top display can zoom in on specific areas – seeing street names for example – simply by positioning the lamp device over them.

The team have also created a tablet computer which lets viewers see a two-dimensional scene in 3D. If you hold the computer over the area of the map you are interested in, a 3D view of that area will appear on the screen. The lamp also comes in a handheld flashlight design, which could be used with high-res scans of paintings in museums, for example, so that people could zoom in to see more detail of things that have caught their eye. Using the tablet computer to show up areas of a 3D map would allow several users, each with their own tablet, to examine and discuss the map at once. This could be useful for the military, when examining a map of unfamiliar territory and discussing strategy, for example.
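
A minimal sketch of the core lamp interaction might look like the following: the lamp's tracked position over the table selects a region of a high-resolution map, and a magnified crop of that region is returned for projection beneath the lamp. The normalised coordinates, window size and NumPy representation are assumptions, not details from the Taiwan system.

# Illustrative lamp-to-zoom mapping over a high-resolution map image.
import numpy as np

def magnified_view(hires_map, lamp_u, lamp_v, window=400):
    """lamp_u, lamp_v are the lamp position normalised to [0, 1] over the table."""
    h, w = hires_map.shape[:2]
    cx, cy = int(lamp_u * w), int(lamp_v * h)
    half = window // 2
    x0, x1 = max(0, cx - half), min(w, cx + half)
    y0, y1 = max(0, cy - half), min(h, cy + half)
    return hires_map[y0:y1, x0:x1]          # crop to display at higher magnification

fake_map = np.zeros((4000, 6000, 3), dtype=np.uint8)
print(magnified_view(fake_map, 0.5, 0.5).shape)   # -> (400, 400, 3)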

More information:

http://www.newscientist.com/article/dn19249-future-on-display-desk-lamp-turns-table-top-into-3d.html

19 August 2010

Game Immersion

How do you know you are immersed in a game? There are lots of obvious signifiers: time passes unnoticed; you become unaware of events or people around you; your heart rate quickens in scary or exciting sections; you empathise with the characters. But while we can reel off the symptoms, what are the causes? And why do many games get it wrong? Stimulated by all the Demon's Souls obsessives on Chatterbox at the moment, Gamesblog decided to jumble together some tangential thoughts on the subject. This might not make a whole lot of sense. But then neither does video game immersion. Back in May 2010, the video game designer responsible for creating Lara Croft wrote an interesting feature for Gamasutra in which he listed some ways in which developers often accidentally break the immersive spell. One example is poor research, resulting in props that don't belong in the game environment. That might mean an American road sign in a European city, or an eighties car model in a seventies-based game. The interesting thing is that we pick up on most of these clues almost unconsciously – we don't need to process a whole game environment to understand what it is that's making us feel unimmersed. Indeed, in the midst of a first-person shooter, where we often get mere seconds to assess our surroundings before being shot at, we can't process the whole environment.

Neuroscientists and psychologists are divided on this, but while many accept that we're only able to hold three or four objects from our visual field in our working memory at any one time, others believe we actually have a rich perception and that we're conscious of our whole field of vision even if we're not able to readily access that information. So we know we're in a crap, unconvincing game world, even if we don't know we're in a crap, unconvincing game world. But there's more to immersion than simply responding to what a game designer has created. Researchers at York University are currently studying immersion, and how it relates to human traits of attentiveness, imagination and absorption. Generally, though, what researchers are finding is that players do a lot of the work toward immersion themselves. People more prone to fantasising and daydreaming – i.e. more absorptive personalities – are able to become more immersed in game worlds. So while we're often being told that gamers are drooling, passive consumers of digital entertainment, we're actually highly imaginative and emotional – we have to be to get the most out of digital environments that can only hint at the intensity of real-life experiences. The best games help us to build immersive emotional reactions through subtle human clues. Believable relationships with other characters are good examples.

More information:

http://www.guardian.co.uk/technology/gamesblog/2010/aug/10/games-science-of-immersion

09 August 2010

Adding Temperature to HCI

An experimental new game controller adds the sensation of hot and cold to users' experience of a simulated environment. Touch interfaces and haptic feedback are already part of how we interact with computers, in the form of iPads, rumbling video game controllers and even 3D joysticks. As the range of interactions with digital environments expands, it's logical to ask what's next. Smell-o-vision has been on the horizon for something like 50 years, but there's a dark horse stalking this race: thermoelectrics. Based on the Peltier effect, these solid-state devices are easy to incorporate into objects of reasonable size, such as video game controllers.

In this configuration, a pair of thermoelectric surfaces on either side of a controller rapidly heat up or cool down in order to simulate the appropriate conditions in a virtual environment. The temperature difference isn't large (less than 10 degrees of heating or cooling after five seconds), but the researchers discovered that, as with haptics, just a little sensory nudge can be enough to convince participants in a virtual environment that they are experiencing something like the real thing. The research was conducted at Tokyo Metropolitan University, in collaboration with the National Institute of Special Needs Education.
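
As a rough model of that behaviour, the sketch below ramps a simulated Peltier surface toward a commanded temperature offset with a first-order response. The time constant and target offset are assumptions chosen so that the surface has shifted a little under 10 degrees after five seconds, matching the figure quoted above.

# Assumed first-order thermal response of a Peltier surface to a commanded offset.
from math import exp

def surface_offset(target_offset_c, t_seconds, tau=3.0):
    """offset(t) = target * (1 - e^(-t/tau)); tau is an assumed time constant."""
    return target_offset_c * (1.0 - exp(-t_seconds / tau))

# Command a +12 C "hot" cue and check where the surface is after 5 s.
print(round(surface_offset(12.0, 5.0), 1))   # about +9.7 C after five seconds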

More information:

http://www.technologyreview.com/blog/post.aspx?bid=377&bpid=25544

08 August 2010

New Ideas for Touch Panels

An increasing number of proposals are being made for entirely new methods of tactile feedback, and new technologies are appearing that use them alone or in conjunction with existing techniques. Toshiba Information Systems (Japan) Corp. has prototyped a device based on a technology that uses weak electric fields to produce a variety of tactile sensations. Until now, tactile feedback technology has usually meant using a small motor or piezoelectric device to generate vibration, with very few examples of electric field variation as the mechanism.

The new technique not only produces a variety of sensations; it is also highly resistant to breakage, and because it has no mechanical parts it makes no vibration noise. It operates in any situation and can be used even in places where conventional technologies are difficult to implement, such as on the sides or backs of equipment, or even on curved surfaces. The area where the sensation is felt can also be controlled freely, so that, for example, it is possible to provide tactile feedback when touching a button displayed on a screen.

More information:

http://techon.nikkeibp.co.jp/article/HONSHI/20100723/184468/

06 August 2010

Acrobatic Robots

The Robotics and Mechanisms Laboratory (RoMeLa) at Virginia Tech is filled with robots that would fit right into a ‘Star Wars’ sequel. With support from the National Science Foundation (NSF), researchers are creating ‘Star Wars’-inspired robots aimed at lending a helping hand. For example, the Robotic Air Powered Hand with Elastic Ligaments (RAPHaEL) is a relatively inexpensive robot that uses compressed air to move and could one day help improve prosthetics. Another series of robots nicknamed CLIMBeR, short for Cable-suspended Limbed Intelligent Matching Behavior Robot, was built with NASA in mind; the robots scale steep cliffs and are rugged enough to handle the terrain on Mars. The Intelligent Mobility Platform with Active Spoke System (IMPASS) is a robot with a circle of spokes that individually move in and out so it can walk and roll.

Hyper-redundant Discrete Robotic Articulated Serpentine (HyDRAS) snakes its way up dangerous scaffolding so humans don't have to. The team is also building a family of humanoid robots, some of which are even learning to play soccer. There's a team of kid-sized robots called DARwIn, short for Dynamic Anthropomorphic Robot with Intelligence; DARwIn robots compete for Virginia Tech in the collegiate RoboCup competition. CHARLI (Cognitive Humanoid Autonomous Robot with Learning Intelligence) is an adult-sized robot getting into the game as well. It has two cameras on its head, looks around, searches for the ball, figures out where it is and, based on that, kicks the ball towards the goal. For another project, called the Blind Driver Challenge, the Virginia Tech team developed the first prototype car that can be driven by the blind. The vehicle's name is DAVID, an acronym for Demonstrative Automobile for the Visually Impaired Driver.

More information:

http://www.nsf.gov/news/special_reports/science_nation/acrobaticrobots.jsp

30 July 2010

A Smoother Street View

New street-level imaging software developed by Microsoft could help people find locations more quickly on the Web. The software could also open up new space for online advertising. Services like Google Street View and Bing Streetside instantly teleport Web surfers to any street corner from Tucson to Tokyo. However, the panoramic photos these services offer provide only a limited perspective: you can't travel smoothly down a street.

Instead, you have to jump from one panoramic ‘bubble’ to the next, which is not the ideal way to identify a specific address or explore a new neighborhood. Microsoft researchers have come up with a refinement to Bing Streetside called Street Slide. It combines slices from multiple panoramas captured along a stretch of road into one continuous view, which can be viewed from a distance or ‘smooth scrolled’ sideways.
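
In spirit, though certainly not in Microsoft's actual implementation, the strip can be thought of as the concatenation of a narrow vertical slice from the middle of each panorama captured along the street, as in the toy sketch below; the slice width and image sizes are arbitrary.

# Toy Street Slide-style strip: centre slices of successive panoramas, side by side.
import numpy as np

def street_slide_strip(panoramas, slice_width=120):
    """Concatenate the centre slice of each panorama into one long image."""
    slices = []
    for pano in panoramas:
        mid = pano.shape[1] // 2                       # horizontal centre
        half = slice_width // 2
        slices.append(pano[:, mid - half: mid + half])
    return np.hstack(slices)

# Toy example: five fake 512x2048 RGB panoramas -> one 512x600 strip.
fake_panos = [np.zeros((512, 2048, 3), dtype=np.uint8) for _ in range(5)]
print(street_slide_strip(fake_panos).shape)            # -> (512, 600, 3)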

More information:

http://www.technologyreview.com/web/25880/