30 December 2010

The Emotional Computer

Cambridge University film provides a glimpse of how robots and humans could interact in the future. Can computers understand emotions? Can computers express emotions? Can they feel emotions? The latest video from the University of Cambridge shows how emotions can be used to improve interaction between humans and computers. When people talk to each other, they express their feelings through facial expressions, tone of voice and body postures. They even do this when they are interacting with machines. These hidden signals are an important part of human communication, but computers ignore them.

The research team is collaborating closely with researchers from the University's Autism Research Centre. Because those researchers study the difficulties that some people have in understanding emotions, their insights help to address the same problems in computers. Facial expressions are an important way of understanding people's feelings. One system tracks features on a person's face, calculates the gestures being made and infers emotions from them. It gets the right answer over 70% of the time, which is as good as most human observers.
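
The Cambridge system itself is not described in enough detail here to reproduce, but the pipeline the article outlines (track facial features, score the gestures they form, infer an emotion from those scores) can be sketched in a few lines. In the hypothetical Python sketch below, the gesture names, prototype values and the nearest-prototype rule are all invented for illustration.

```python
# Illustrative sketch only: gesture names, prototypes and scores are invented;
# a real system would derive gesture scores from a facial landmark tracker.

EMOTION_PROTOTYPES = {
    "happy":     {"lip_corner_pull": 0.9, "brow_raise": 0.3, "lip_press": 0.0},
    "surprised": {"lip_corner_pull": 0.2, "brow_raise": 0.9, "lip_press": 0.0},
    "thinking":  {"lip_corner_pull": 0.1, "brow_raise": 0.4, "lip_press": 0.8},
}

def infer_emotion(gesture_scores):
    """Return the emotion whose prototype is closest to the observed gesture scores."""
    def distance(proto):
        return sum((proto[g] - gesture_scores.get(g, 0.0)) ** 2 for g in proto)
    return min(EMOTION_PROTOTYPES, key=lambda e: distance(EMOTION_PROTOTYPES[e]))

if __name__ == "__main__":
    # Scores a landmark tracker might produce for a smiling face.
    observed = {"lip_corner_pull": 0.85, "brow_raise": 0.25, "lip_press": 0.05}
    print(infer_emotion(observed))  # -> "happy"
```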

More information:

http://www.admin.cam.ac.uk/news/dp/2010122303

23 December 2010

Preserving Time in 3D

A computer science professor hopes to use open-source software and super-high-resolution photos to capture lifelike three-dimensional models of the world's treasures, effectively preserving their current state. Under the plans, many thousands of super-high-resolution photographs taken in batches from several angles would be stitched together into detailed pictures and then rendered in 3D. The result would reveal an object's minute detail, allowing future generations to view it as it exists today.

Researchers used a US$1,184 camera, an 800mm lens, a robotic arm and a free open-source application that combined some 11,000 18-megapixel images. The resulting 150-billion-pixel photo was shrunk down to a smaller image so that the brightness variation between the combined photos could be smoothed out manually. These corrections took about three weeks and were then mapped back onto the full-size image. The 700GB photo took about a week to upload to the internet, and was processed on a standard PC beefed up with 24GB of RAM.
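
The article does not name the open-source application that was used, so the sketch below substitutes OpenCV's stitching module to illustrate the basic step of combining overlapping photos into one mosaic; the file paths are placeholders, and a real 150-gigapixel pipeline would work tile by tile and handle brightness blending separately, as described above.

```python
# Hedged illustration using OpenCV as a stand-in for the unnamed application:
# load a batch of overlapping photos and stitch them into a single mosaic.
import glob
import cv2

def stitch_batch(pattern="tiles/*.jpg", out_path="mosaic.jpg"):
    images = [cv2.imread(p) for p in sorted(glob.glob(pattern))]
    images = [im for im in images if im is not None]

    stitcher = cv2.Stitcher_create()        # feature matching, warping, blending
    status, mosaic = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    cv2.imwrite(out_path, mosaic)

if __name__ == "__main__":
    stitch_batch()
```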

More information:

http://asia.cnet.com/crave/2010/12/20/photo-project-aims-to-to-preserve-time-in-3d/

22 December 2010

Video DNA Matching

You know when you're watching a pirated film downloaded from the Internet -- there's no mistaking the fuzzy footage, or the guy in the front row getting up for popcorn. Despite the poor quality, pirated video is a serious problem around the world. Criminal copyright infringement occurs on a massive scale over the Internet, costing the film industry billions of dollars annually. Now researchers at Tel Aviv University’s Department of Electrical Engineering have a new way to stop video pirates by treating video footage like DNA. Of course, video does not have a real genetic code like members of the animal kingdom, so the researchers created a DNA analogue, like a unique fingerprint, that can be applied to video files. The result is a unique DNA fingerprint for each individual movie anywhere on the planet. When scenes are altered, colors changed, or a film is bootlegged on a camera at the movie theatre, the film can be tracked and traced on the Internet. And, like the films, video thieves can be tracked and caught. The technology applies an invisible series of grids over the film, turning the footage into a series of numbers.

The tool can then scan the content of Web sites where pirated films are believed to be offered, pinpointing subsequent mutations of the original. The technique is called ‘video DNA matching’. It detects aberrations in pirated video in the same way that biologists detect mutations in the genetic code to determine, for example, an individual's family connections. The technique works by identifying features of the film that remain essentially unchanged by typical color and resolution manipulations and geometric transformations. It's effective even with border changes, commercials added or scenes edited out. The researchers have set their sights on popular video-sharing web sites like YouTube. YouTube, they say, automates the detection of copyright infringement to some degree, but its detection fails when the video has been altered. The problem with catching bootlegged and pirated video is that it requires thousands of man-hours to watch the content being downloaded.
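
The researchers' actual fingerprinting scheme is not spelled out in the article, but the grid-of-numbers idea can be sketched: reduce each frame to a coarse signature that survives recompression and colour grading, then compare two signature sequences frame by frame, much as DNA sequences are aligned. The grid size, threshold and matching rule below are assumptions made for illustration only.

```python
# Toy "video DNA" sketch: per-frame grid signatures plus a simple alignment score.
import numpy as np

def frame_signature(frame, grid=(4, 4)):
    """Coarse binary signature: is each grid cell brighter than the frame average?"""
    gray = frame.mean(axis=2) if frame.ndim == 3 else frame
    h, w = gray.shape
    cells = [gray[i*h//grid[0]:(i+1)*h//grid[0], j*w//grid[1]:(j+1)*w//grid[1]].mean()
             for i in range(grid[0]) for j in range(grid[1])]
    return tuple(int(c > gray.mean()) for c in cells)

def video_dna(frames):
    return [frame_signature(f) for f in frames]

def match_score(dna_a, dna_b):
    """Fraction of aligned frames whose signatures agree in at least 80% of cells."""
    n = min(len(dna_a), len(dna_b))
    agree = sum(1 for a, b in zip(dna_a, dna_b)
                if sum(x == y for x, y in zip(a, b)) >= 0.8 * len(a))
    return agree / n if n else 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = [rng.integers(0, 256, (90, 160, 3)) for _ in range(30)]
    pirated = [np.clip(f * 0.7 + 20, 0, 255) for f in original]  # darker, re-graded copy
    print(match_score(video_dna(original), video_dna(pirated)))  # close to 1.0
```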

More information:

http://www.sciencedaily.com/releases/2010/12/101221101841.htm

18 December 2010

Sun Visualisation by ESA

New software developed by ESA makes the entire image library of the SOHO Solar and Heliospheric Observatory available online to everyone, everywhere, at any time. Just download the viewer and begin exploring the Sun. JHelioviewer is new visualisation software that enables everyone to explore the Sun. Developed as part of the ESA/NASA Helioviewer Project, it provides a desktop program that enables users to call up images of the Sun from the past 15 years. More than a million images from SOHO can already be accessed, and new images from NASA's Solar Dynamics Observatory are being added every day. The downloadable JHelioviewer is complemented by the website Helioviewer.org, a web-based image browser. Using this new software, users can create their own movies of the Sun, colour the images as they wish, and image-process the movies in real time.

They can export their finished movies in various formats, and track features on the Sun by compensating for solar rotation. JHelioviewer is written in the Java programming language, hence the 'J' at the beginning of its name. It is open-source software, meaning that all its components are freely available so others can help to improve the program. The code can even be reused for other purposes; it is already being used for Mars data and in medical research. This is possible because JHelioviewer does not need to download entire datasets, which can often be huge -- it streams just enough data to run smoothly over the Internet. It also allows data to be annotated: solar flares of a particular magnitude can be marked, for example, or diseased tissue in medical images highlighted.
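
As a rough illustration of the "stream only enough data" idea (JHelioviewer itself is a Java application with its own streaming machinery, so nothing here is its actual code), the toy sketch below picks the coarsest level of an image pyramid that still fills the viewer window, so only a small fraction of a huge image ever has to be transferred.

```python
# Conceptual sketch: resolution-on-demand rather than downloading full datasets.
def pick_level(full_size, view_size, num_levels):
    """Return the pyramid level (0 = full resolution) that still fills the window."""
    level, size = 0, full_size
    while level + 1 < num_levels and size // 2 >= view_size:
        size //= 2
        level += 1
    return level

def pixels_needed(full_size, view_size, num_levels):
    level = pick_level(full_size, view_size, num_levels)
    side = full_size >> level            # image side length at that level
    return level, side * side

if __name__ == "__main__":
    # A 4096x4096 solar image shown in a 512-pixel window: only the 512x512
    # level needs streaming, roughly 1/64th of the full data volume.
    print(pixels_needed(full_size=4096, view_size=512, num_levels=6))
```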

More information:

http://www.sciencedaily.com/releases/2010/12/101215083400.htm

14 December 2010

Creating Better Digital Denizens

We are incredibly sensitive to human movement and appearance, which makes it a big challenge to create believable computerised crowds, but researchers at Trinity are working on improving that. Getting computer-generated avatars to act in engaging and more human ways is trickier than it looks. But researchers at Trinity College Dublin are delving into how we perceive graphical characters and coming up with insights to create more socially realistic virtual humans without excessive processing cost. Getting the crowds right in a computerised cityscape is important, according to the researchers.

The team has been trying to work out smarter ways of making simulated crowds look more varied without the expense of creating a model for each individual, and they are finding that altering the upper bodies and faces on common templates is a good way to get more bang for your buck. Researchers from the team also sat together and attached markers to themselves so they could capture their movements and voices on camera as they conversed. That built up a large corpus of data to tease out the subtle synchronies between gestures and sounds that our brains register without us even thinking about it.
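
As a back-of-the-envelope illustration of that template idea (the asset names and counts below are invented), a few shared body templates combined with interchangeable heads and upper-body textures already yield dozens of distinct-looking characters without modelling each one from scratch.

```python
# Hypothetical sketch: vary heads and torso textures on a few shared body templates.
import itertools
import random

BODY_TEMPLATES = ["male_a", "male_b", "female_a"]             # expensive full models
HEADS          = ["head_01", "head_02", "head_03", "head_04"]
TORSO_TEXTURES = ["shirt_red", "shirt_blue", "jacket", "hoodie"]

def crowd(n, seed=0):
    """Sample n characters from the combinatorial pool of template variations."""
    pool = list(itertools.product(BODY_TEMPLATES, HEADS, TORSO_TEXTURES))
    random.Random(seed).shuffle(pool)
    return pool[:n]

if __name__ == "__main__":
    # 3 bodies x 4 heads x 4 textures = 48 visually distinct combinations.
    for body, head, texture in crowd(5):
        print(body, head, texture)
```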

More information:

http://www.irishtimes.com/newspaper/sciencetoday/2010/1209/1224285096674.html

08 December 2010

Virtual Training Gets Real

Computerised training systems are getting an extra dose of reality, thanks to an EU-funded research project led by the University of Leeds. PC-based virtual reality training is typically cheaper than face-to-face sessions with a mentor or coach. As the recent Hollywood blockbuster Up in the Air showed, multiple members of staff can be trained by practising various scenarios in a virtual reality environment without having to leave their desks. However, virtual reality training tools are seldom as effective as working with a real person, because the simulation package cannot respond to trainees' past experiences or preconceptions.

For example, software designed to help managers conduct job interviews may include a number of different simulated scenarios that appear true to life. However, if the trainee is consistently hostile to the virtual interviewee or overly sympathetic, the system will not flag this up or suggest they try an alternative approach. The project involves seven partners from six European countries: Austria, Germany, Ireland, Italy, the Netherlands and the UK. ImREAL will develop intelligent tools that will encourage trainees to detect subtle differences in communication and social cues across different cultures.
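
ImREAL's design is not described in detail in the article; the hypothetical sketch below simply illustrates the kind of feedback current packages reportedly lack, tracking the tone of a trainee's responses across a session and flagging a consistent bias. The scoring scale and threshold are invented.

```python
# Hypothetical session feedback: flag consistently hostile or over-sympathetic trainees.
def session_feedback(tone_scores, threshold=0.6):
    """tone_scores: per-response scores from -1.0 (hostile) to +1.0 (sympathetic)."""
    if not tone_scores:
        return "No responses recorded."
    mean_tone = sum(tone_scores) / len(tone_scores)
    if mean_tone <= -threshold:
        return "Consistently hostile tone detected: try a more neutral approach."
    if mean_tone >= threshold:
        return "Consistently over-sympathetic tone: try probing more critically."
    return "Tone within the normal range."

if __name__ == "__main__":
    print(session_feedback([-0.8, -0.7, -0.9, -0.6]))  # a hostile trainee is flagged
```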

More information:

http://www.imreal-project.eu/

http://www.leeds.ac.uk/news/article/1307/virtual_training_gets_real

05 December 2010

Computer Generated Robots

Genetic Robots are moving robots that can be created fully automatically. The robot structures are created using genetic software algorithms and additive manufacturing. The important role robots play is not limited to industrial production in the automotive industry. They are also used for exploration, transportation and as service robots. Modeling the movements to make them mobile or enabling them to grip objects is a complex yet central challenge for engineers. With its ‘Genetic Robots’, the Fraunhofer Institute for Manufacturing Engineering and Automation (IPA) in Stuttgart has successfully had a moving robot automatically designed – without the intervention of a designing engineer – by a genetic software algorithm. The robots consist of cylinder-shaped tubes with ball-and-socket joints that can assume different shapes depending on external factors and the purpose at hand.

Fitness functions within the software algorithm select the movement elements with which the Genetic Robot can best advance along a given surface; the software determines the shape of the tubes, the position of the movement points and the position of the drives (actuators). The basis for the development is a physics engine in which the most important environmental influences – such as the friction of the ground or gravity – are implemented. If the Genetic Robot is to withstand unevenness, climb stairs or swim in water, these environmental conditions can be simulated. The result is not just one solution but a multitude of solutions from which the designer can choose the best one. The Genetic Robots system can also be used to design subcomponents such as gripping systems for robots in industry.
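
The sketch below shows the general shape of such a genetic algorithm. The real system scores each candidate design in the physics engine; here a toy fitness function stands in for that simulation, and the genome layout (a handful of normalised design parameters) is an assumption made purely for illustration.

```python
# Minimal genetic algorithm sketch; toy_fitness is a placeholder for the
# physics-engine evaluation of how far a candidate robot can advance.
import random

GENOME_LEN  = 8    # e.g. tube lengths, joint positions, actuator phases (assumed)
POP_SIZE    = 30
GENERATIONS = 40

def toy_fitness(genome):
    """Placeholder for 'distance advanced in the simulated environment'."""
    return -sum((g - 0.7) ** 2 for g in genome)

def mutate(genome, rate=0.2):
    return [min(1.0, max(0.0, g + random.gauss(0, 0.1))) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def evolve():
    pop = [[random.random() for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=toy_fitness, reverse=True)
        parents = pop[:POP_SIZE // 3]                       # selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children
    return max(pop, key=toy_fitness)

if __name__ == "__main__":
    best = evolve()
    print([round(g, 2) for g in best])  # genes drift towards the optimum of 0.7
```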

More information:

http://www.fraunhofer.de/en/press/research-news/2010/11/euromold-genetic-robots.jsp

03 December 2010

Brain Boost for Information Overload

Imagine you have thousands of photographs and only minutes to find a handful that contain Dalmatian puppies. Or that you’re an intelligence analyst and you need to scan 5 million satellite pictures and pull out all the images with a helipad. Researchers proposed a solution to such information overload that could revolutionize how vast amounts of visual information are processed—allowing users to riffle through potentially millions of images and home in on what they are looking for in record time. This is called a cortically coupled computer vision (C3Vision) system, and it uses a computer to amplify the power of the quickest and most accurate tool for object recognition ever created: the human brain. The human brain has the capacity to process very complicated scenes and pick out relevant material before we’re even consciously aware we’re doing so. These ‘aha’ moments of recognition generate an electrical signal that can be picked up using electroencephalography (EEG), the recording of electrical activity along the scalp caused by the firing of neurons in the brain.

Researchers designed a device that monitors brain activity as a subject rapidly views a small sample of photographs culled from a much larger database—as many as 10 pictures a second. The device transmits the data to a computer that ranks which photographs elicited the strongest cortical recognition responses. The computer looks for similarities in the visual characteristics of different high-ranking photographs, such as color, texture and the shapes of edges and lines. Then it scans the much larger database—it could contain upward of 50 million images—and pulls out those whose visual characteristics correlate most strongly with the ‘aha’ moments detected by the EEG. It’s an idea that has already drawn significant interest from the U.S. government. The Defense Advanced Research Projects Agency (DARPA), which pioneered such breakthrough technologies as computer networking, provided $2.4 million to test the device over the next 18 months. Analysts at the National Geospatial-Intelligence Agency will attempt to use the device to look for objects of interest within vast satellite images.
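
A schematic sketch of that two-stage pipeline follows: rank a small sample of images by their EEG response scores (simulated here), build a visual descriptor from the strongest responses, and order the larger database by similarity to it. The colour-histogram feature and the averaging rule are simplifications chosen for illustration, not the published system.

```python
# Schematic C3Vision-style pipeline: EEG-ranked samples guide a feature search.
import numpy as np

def colour_histogram(image, bins=8):
    """Simple visual descriptor: normalised joint histogram of quantised RGB values."""
    hist, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins,) * 3,
                             range=[(0, 256)] * 3)
    hist = hist.ravel()
    return hist / hist.sum()

def rank_database(sample_images, eeg_scores, database, top_k=3):
    """Build a target descriptor from the strongest EEG responses, then rank
    the (much larger) database by similarity to it."""
    order = np.argsort(eeg_scores)[::-1][:top_k]        # strongest 'aha' responses
    target = np.mean([colour_histogram(sample_images[i]) for i in order], axis=0)
    dists = [np.linalg.norm(colour_histogram(img) - target) for img in database]
    return np.argsort(dists)                            # most similar first

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    samples  = [rng.integers(0, 256, (32, 32, 3)) for _ in range(10)]
    eeg      = rng.random(10)                           # stand-in EEG scores
    database = [rng.integers(0, 256, (32, 32, 3)) for _ in range(100)]
    print(rank_database(samples, eeg, database)[:5])
```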

More information:

http://news.columbia.edu/record/2188#