31 October 2011

Computer Inspired Creativity

Constraints on creativity imposed by computer-aided design (CAD) tools are being overcome, thanks to a novel system that incorporates eye-tracking technology. 'Designing with Vision', a system devised by researchers at The Open University and the University of Leeds, is breaking down rigid distinctions between human and machine. This should help designers to recover intuitive elements of the design process that are otherwise suppressed when working with CAD.


Traditional design tools, such as pen and paper, are increasingly being replaced by 2D and 3D computerised drawing packages. The uptake of CAD is helping to increase productivity and improve the quality of designs, reducing errors and unnecessary wastage when the goods are made. However, the switch to CAD may have a downside too. The introduction of digital technologies often forces people to change how they work so that they fit with the technology, rather than the other way around. In creative disciplines, this inevitably constrains the results produced, a scenario that would be a disaster for designers, according to researchers at The Open University.


Researchers focused on an early stage in the design process that involves drawing, viewing, selecting and manipulating shapes. This process is common to designers working in areas such as fashion, graphics and consumer goods packaging. Designers who work with shapes tend to intuitively home in on certain areas in initial sketches, using these as a starting point to move forward. However, this element of subconscious selection is difficult to replicate with CAD, because the software package is unable to 'see' what might be catching the designer's eye. To redress this, researchers added eye-tracking technology to a CAD system, giving the digital technology a more fluid human-machine interface. This produced a design system that could identify and select shapes of interest automatically within a drawn sketch, according to the designer's gaze. The system was put through its paces by groups of professional and student designers to check that it worked in practice. The tests confirmed that the combination of eye-tracking technology and conventional mouse-based input allowed initial design sketches to be manipulated and developed according to the user's subconscious visual cues.
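The core idea, selecting whichever sketched shape the designer's gaze dwells on, can be sketched in a few lines. The shape representation, dwell threshold and sampling rate below are invented for illustration and are not taken from the Designing with Vision system:

```python
# Hypothetical sketch of dwell-time gaze selection. A shape is
# "selected" once accumulated gaze time inside its bounding box
# exceeds a threshold; all numbers here are assumptions.

DWELL_THRESHOLD = 0.3  # seconds of sustained gaze needed to select a shape

def contains(shape, x, y):
    """True if gaze point (x, y) falls inside the shape's bounding box."""
    x0, y0, x1, y1 = shape["bbox"]
    return x0 <= x <= x1 and y0 <= y <= y1

def select_by_gaze(shapes, gaze_samples, sample_dt=0.02):
    """Accumulate gaze dwell time per shape; return names over threshold."""
    dwell = {s["name"]: 0.0 for s in shapes}
    for x, y in gaze_samples:
        for s in shapes:
            if contains(s, x, y):
                dwell[s["name"]] += sample_dt
    return [name for name, t in dwell.items() if t >= DWELL_THRESHOLD]

shapes = [
    {"name": "circle", "bbox": (0, 0, 10, 10)},
    {"name": "square", "bbox": (20, 0, 30, 10)},
]
# 20 samples (0.4 s) on the circle, 5 samples (0.1 s) on the square
gaze = [(5, 5)] * 20 + [(25, 5)] * 5
print(select_by_gaze(shapes, gaze))  # ['circle']
```

A real system would also need fixation filtering and eye-tracker calibration, but the dwell-time heuristic captures the "subconscious selection" behaviour the article describes.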

More information:

http://www.leeds.ac.uk/news/article/2558/the_eyes_have_it_computer-inspired_creativity

18 October 2011

Visualizing the Future

It appears that we really can be in two places at once. 'We call these ubiquitous displays,' say researchers from the California Institute for Telecommunications & Information Technology (Calit2). As the term implies, ubiquitous displays may soon be used just about everywhere: from huge domes to small cell phones, and from amusement parks to doctors' exam rooms.


While amusement parks, flight training operations and others have long created virtual reality environments, the UCI group's software is compatible with new digital equipment and allows the use of everyday cameras and far cheaper projectors. Perhaps most important, the calibration process between the camera and the projectors, which is key to image quality, is completely automated.
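The article does not detail the calibration math, but automated camera-projector calibration commonly works by projecting known feature points, detecting them in the camera image, and solving for the homography that maps projector coordinates to camera pixels. A minimal sketch of that standard step, using synthetic correspondences rather than Calit2's actual pipeline:

```python
import numpy as np

# Direct Linear Transform (DLT): estimate the 3x3 homography H such
# that camera_point ~ H @ projector_point in homogeneous coordinates.

def fit_homography(src, dst):
    """Solve dst ~ H @ src for a 3x3 homography H via SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)      # null-space vector = flattened H
    return H / H[2, 2]            # normalize so H[2, 2] == 1

def apply_homography(H, pt):
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Projector points and where the camera "saw" them (synthetic case:
# a scale of 2 plus a translation by (5, -3))
proj = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.5)]
cam = [(2 * x + 5, 2 * y - 3) for x, y in proj]

H = fit_homography(proj, cam)
print(apply_homography(H, (0.25, 0.75)))  # ≈ (5.5, -1.5)
```

In a real installation the correspondences come from detecting a projected pattern in the camera image, which is what makes the process fully automatic.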

More information:

http://www.uci.edu/features/2011/10/feature_panorama_111010.php

17 October 2011

Robot Biologist

Computers are at it again, but this time they are trying to automate the scientific process itself. An interdisciplinary team of scientists at Vanderbilt University, Cornell University and CFD Research Corporation, Inc., has taken a major step toward this goal by demonstrating that a computer can analyze raw experimental data from a biological system and derive the basic mathematical equations that describe the way the system operates. According to the researchers, it is one of the most complex scientific modeling problems that a computer has solved completely from scratch. The biological system that the researchers used to test their software, dubbed the Automated Biology Explorer (ABE), is glycolysis, the primary process that produces energy in a living cell.


Specifically, they focused on the manner in which yeast cells control fluctuations in the chemical compounds produced by the process. The researchers chose this specific system, called glycolytic oscillations, to perform a virtual test of the software because it is one of the most extensively studied biological control systems. They used a detailed mathematical model of the process to generate a data set corresponding to the measurements a scientist would make under various conditions. To increase the realism of the test, the researchers salted the data with 10 percent random error. When they fed the data into Eureqa, the equation-discovery engine at the heart of ABE, it derived a series of equations that were nearly identical to the known equations.
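The test protocol described above, generating data from a known model and salting it with random error before checking that the fit still recovers the model, can be illustrated with a toy equation (a sine curve, not the actual glycolysis model):

```python
import numpy as np

# Illustrative protocol only: synthesize "measurements" from a known
# equation y = a*sin(t) + b, add 10% multiplicative random error, and
# verify that a least-squares fit still recovers the parameters.

rng = np.random.default_rng(0)

t = np.linspace(0, 10, 200)
true_a, true_b = 2.5, 1.0
clean = true_a * np.sin(t) + true_b                 # the "known model"

noisy = clean * (1 + 0.10 * rng.standard_normal(t.size))  # 10% error

# Least-squares fit of y = a*sin(t) + b to the salted data
A = np.column_stack([np.sin(t), np.ones_like(t)])
a_hat, b_hat = np.linalg.lstsq(A, noisy, rcond=None)[0]
print(a_hat, b_hat)  # close to 2.5 and 1.0 despite the noise
```

Eureqa goes much further, searching over the *form* of the equations as well as their parameters, but the noise-robustness being tested is the same idea.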

More information:

http://news.vanderbilt.edu/2011/10/robot-biologist/

16 October 2011

Kinect Merges Real and Virtual Worlds

Microsoft's Kinect Xbox controller, which lets gamers control on-screen action with their body movements, has been adapted in hundreds of interesting, useful, and occasionally bizarre ways since its release in November 2010. It's been used for robotic vision and automated home lighting. It's helped wheelchair users with their shopping. Yet these uses could look like child's play compared to the new 3D modeling capabilities Microsoft has developed for the Kinect: KinectFusion, a research project that lets users generate high-quality 3D models in real time using a standard $100 Kinect.


KinectFusion also includes a realistic physics engine that allows scanned objects to be manipulated in realistic ways. The technology allows objects, people, and entire rooms to be scanned in 3D at a fraction of the normal cost. Imagine true-to-life avatars and objects being imported into virtual environments. Or a crime scene that can be re-created within seconds. Visualizing a new sofa in your living room and other virtual interior design tricks could become remarkably simple. 3D scanners already exist, but none of them approach KinectFusion in ease of use and speed, and even desktop versions cost around $3,000.
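Any Kinect-based scanner starts the same way: back-projecting each pixel of the sensor's depth image into a 3D point using the pinhole camera model, before fusing points from many frames into a surface. A sketch of that first step, with nominal, assumed Kinect-like intrinsics rather than Microsoft's calibrated values:

```python
import numpy as np

# Back-project a depth image into a point cloud with the pinhole model:
# X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy. Intrinsics are assumed.

FX, FY = 525.0, 525.0   # focal lengths in pixels (nominal)
CX, CY = 319.5, 239.5   # principal point (nominal)

def depth_to_points(depth):
    """depth: (H, W) array in metres; returns (N, 3) points, zeros skipped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]   # drop invalid (zero-depth) pixels

# Tiny synthetic depth map: 2x2 pixels, one invalid reading
depth = np.array([[1.0, 1.0],
                  [0.0, 2.0]])
print(depth_to_points(depth).shape)  # (3, 3): three valid 3D points
```

KinectFusion's contribution is what happens next: aligning and averaging these per-frame point clouds into a single dense model in real time on the GPU.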

More information:

http://www.technologyreview.com/computing/38731/

http://research.microsoft.com/en-us/projects/surfacerecon/

14 October 2011

Games May Not Boost Cognition

Over the past decade, many studies and news media reports have suggested that action video games such as Medal of Honor or Unreal Tournament improve a variety of perceptual and cognitive abilities. But in a paper published this week in the journal Frontiers in Psychology, Walter Boot, an assistant professor in Florida State University's Department of Psychology, critically re-evaluates those claims. Boot and his colleagues argue that much of the work done over the past decade demonstrating the benefits of video game play is fundamentally flawed. Many of those studies compared the cognitive skills of frequent gamers to non-gamers and found gamers to be superior.


However, new research points out that this doesn't necessarily mean that their game experience caused better perceptual and cognitive abilities. It could be that individuals who have the abilities required to be successful gamers are simply drawn to gaming. Researchers looking for cognitive differences between expert and novice gamers often recruit research participants by circulating ads on college campuses seeking "expert" video game players. Media reports on the superior skills of gamers heighten gamers' awareness of these expectations. Even studies in which non-gamers are trained to play action video games have their own problems, often in the form of weak control groups.

More information:

http://www.sciencedaily.com/releases/2011/09/110915131637.htm

12 October 2011

It's All About the Hair

Researchers from the Jacobs School of Engineering at UC San Diego got to rub shoulders with Hollywood celebrities. They have developed a new way to light and animate characters’ hair. It is now part of Disney’s production pipeline and will be used in the company’s upcoming movies.


Researchers surveyed the available research on improving the appearance of animated hair. The new software they developed allows artists to control the sheen, color and highlights of characters' hair. It models how light scatters within hair, and blonde hair scatters a lot more light than brunette hair.

More information:

http://www.jacobsschool.ucsd.edu/news/news_releases/release.sfe?id=1122

11 October 2011

Robot Revolution?

From performing household chores, to entertaining and educating our children, to looking after the elderly, roboticists say we will soon be welcoming their creations into our homes and workplaces. Researchers believe we are on the cusp of a robot revolution that will mirror the explosive growth of the computer revolution from the 1980s onwards. They are developing new laws for robot behaviour, and designing new ways for humans and robots to interact.


Commercially available robots are already beginning to perform everyday tasks like vacuuming our floors. The latest prototypes from Japan are able to help the elderly to get out of bed or get up after a fall. They can also remind them when to take medication, or even help wash their hair. Researchers found that people react well to a robot gym instructor, and seem to get less frustrated with it than with instructions given on a computer screen. The robot can act as a perfect trainer, with infinite patience.

More information:

http://www.bbc.co.uk/news/technology-15146053

10 October 2011

Mind-Reading Car

One of the world's largest motor manufacturers is working with scientists based in Switzerland to design a car that can read its driver's mind and predict his or her next move. The collaboration, between Nissan and the École Polytechnique Fédérale de Lausanne (EPFL), is intended to balance the necessities of road safety with demands for personal transport. Scientists at the EPFL have already developed brain-machine interface (BMI) systems that allow wheelchair users to manoeuvre their chairs by thought transference. Their next step will be finding a way to incorporate that technology into the way motorists interact with their cars.


If the endeavour proves successful, the vehicles of the future may be able to prepare themselves for a left or right turn by gauging that their drivers are thinking about making such a turn. However, although BMI technology is well established, the levels of human concentration needed to make it work are extremely high, so the research team is working on systems that will use statistical analysis to predict a driver's next move and to evaluate a driver's cognitive state relevant to the driving environment. By measuring brain activity, monitoring patterns of eye movement and scanning the environment around the car, the team thinks the car will be able to predict what a driver is planning to do and help him or her complete the manoeuvre safely.
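As a toy illustration of the statistical idea, several weak cues (a gaze bias, road context, a brain-activity score) could be combined into a single probability of an upcoming manoeuvre. The feature names and weights below are invented for illustration; the EPFL/Nissan models are not described in the article:

```python
import math

# Hypothetical logistic combination of driver-intent cues. Each cue is
# a score roughly in [0, 1]; the weights and bias are made up.

WEIGHTS = {"gaze_left_bias": 2.0, "left_lane_clear": 1.0, "bmi_left_score": 1.5}
BIAS = -2.0

def p_left_turn(features):
    """Probability of a left turn from a weighted sum of cue scores."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))   # logistic function

driver_state = {"gaze_left_bias": 0.9, "left_lane_clear": 1.0,
                "bmi_left_score": 0.8}
print(round(p_left_turn(driver_state), 2))  # 0.88
```

The point of such a fusion model is exactly what the researchers describe: no single noisy signal (gaze, EEG, environment) is reliable enough on its own, but a statistical combination can be.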

More information:

http://www.guardian.co.uk/technology/2011/sep/28/nissan-car-reads-drivers-mind?newsfeed=true