30 August 2010

Thought-Controlled Computer

Mind-controlled computing is set to go a step further with Intel's development of computers that read words directly from the brain. Existing computers operated by brain power require the user to mentally move a cursor on the screen, but the new machines are designed to read the words a user is thinking. Intel scientists are currently mapping the brain activity produced when people think of particular words, measuring activity at about 20,000 locations in the brain. The mapping is currently done with expensive, bulky MRI scanners similar to those used in hospitals, but smaller devices that could be worn on the head are in development. Once the brain activity is mapped, the computer will be able to determine which words are being thought by identifying similar brain patterns and the differences between them.

Words produce activity in the parts of the brain associated with what the word represents. Thinking of a word for a type of food, such as apple, produces activity in the parts of the brain associated with hunger, while a word with a physical association, such as spade, produces activity in the areas of the motor cortex related to the movements of digging. In this way the computer can infer attributes of a word to narrow it down and identify it quickly. A working prototype can already detect words like house, screwdriver and barn, and as brain scanning becomes more advanced the computer's ability to understand thoughts will improve. If the plans are successful, users will be able to surf the Internet, write emails and carry out a host of other activities simply by thinking about them.
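The narrowing-down step can be pictured as a nearest-pattern lookup. The sketch below is purely illustrative (toy three-region activation vectors rather than the roughly 20,000 real measurement locations, and not Intel's actual method): each known word is stored as a vector of activation levels, and a new scan is matched to the word with the most similar pattern.

```python
import math

# Hypothetical reference scans: per-word activation levels at a few
# brain regions (real systems sample ~20,000 locations).
reference_scans = {
    "apple":       [0.9, 0.1, 0.2],   # strong "hunger"-area response
    "spade":       [0.1, 0.8, 0.7],   # strong motor-cortex response
    "screwdriver": [0.2, 0.9, 0.6],
}

def cosine(a, b):
    """Cosine similarity between two activation vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify_word(scan):
    """Return the reference word whose pattern best matches the scan."""
    return max(reference_scans, key=lambda w: cosine(scan, reference_scans[w]))

print(identify_word([0.85, 0.15, 0.25]))  # closest pattern: "apple"
```

A real system would learn these reference patterns per user from many scans, but the matching idea is the same.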

More information:


27 August 2010

Robots Learning from Experience

Software that enables robots to move objects around a room, building up ever-more knowledge about their environment, is an important step forward in artificial intelligence. Some objects can be moved, while others cannot. Balls can be placed on top of boxes, but boxes cannot be stably stacked on top of balls. A typical one-year-old child discovers this kind of information about its environment very quickly, but it is a massive challenge for a robot – a machine – to learn concepts such as ‘movability’ and ‘stability’, according to researchers at Bonn-Rhein-Sieg University and members of the Xpero robotics research project team. The aim of the Xpero project was to develop a cognitive system that would enable a robot to explore the world around it and learn through physical experimentation. The first step was to create an algorithm that let the robot discover its environment from its sensor data. The Xpero researchers installed some very basic predefined, logic-based knowledge in the robot: it holds hypotheses that are either true or false, and it uses the data from its sensors as it moves about to test that knowledge. When the robot finds that an expectation is false, it starts to experiment to find out why and to correct its hypotheses. Picking out the important factors in the massive, continuous flow of data from the robot’s sensors was one challenge for the EU-funded Xpero project team. Finding a way for a logic-based system to deal with the concept of time was a second.

Part of the Xpero team’s solution was to ignore some of the data flowing in every millisecond and instead have the robot compare snapshots of the situation taken a few seconds apart. When an expectation proved false, they also cut down the number of possible solutions by having the robot build a new hypothesis that kept the logic connectors from its old hypothesis and simply changed the variables. That drastically reduced the search space. An important outcome of Xpero is the robot’s ability to build its own knowledge base. In award-winning demonstrations, robots running the Xpero cognitive system have moved about, pushed and placed objects, learning all the time about their environment. In an exciting recent development, the robot has started to use objects as tools, using one object to move or manipulate another that it cannot reach directly. The Xpero project lays the cornerstones for what could become a key technology for the next generation of so-called service robots, which clean our houses and mow our lawns – replacing the rather dumb, pre-programmed devices on the market today. A robotics manufacturer is already planning to use parts of the Xpero platform in the edutainment market.
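The revision loop can be sketched roughly as follows. All names and rule encodings here are invented for illustration, not taken from the Xpero codebase: a hypothesis is a simple IF-THEN rule, snapshots a few seconds apart replace the raw millisecond stream, and revision keeps the rule's structure while swapping in new variables.

```python
# Toy sketch of an Xpero-style revision loop. A hypothesis is a
# (condition, effect) pair read as "IF condition THEN effect".

def predict(hypothesis, snapshot):
    """Apply the IF-THEN rule to one snapshot of the world."""
    cond, effect = hypothesis
    if snapshot[cond]:
        return effect        # the robot expects this property to hold next
    return None

def revise(hypothesis, candidates, before, after):
    """Keep the IF-THEN structure; swap in a variable that fits the data."""
    cond, _ = hypothesis
    for new_effect in candidates:
        if before[cond] and after.get(new_effect):
            return (cond, new_effect)
    return hypothesis

# Snapshots before and after pushing a box that turned out to be too heavy:
before = {"pushed": True, "moved": False, "tilted": False}
after  = {"pushed": True, "moved": False, "tilted": True}

hypothesis = ("pushed", "moved")          # "IF pushed THEN moved"
expected = predict(hypothesis, before)
if expected and not after[expected]:      # expectation proved false
    hypothesis = revise(hypothesis, ["tilted", "moved"], before, after)

print(hypothesis)  # ('pushed', 'tilted')
```

Because only the variables change while the connectors stay fixed, the space of candidate hypotheses the robot must search stays small, which is the reduction the paragraph above describes.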

More information:


26 August 2010

VR You Can Touch

Researchers at the Computer Vision Lab at ETH Zurich have developed a method for producing virtual copies of real objects. The copies can be touched and even sent via the Internet. By incorporating the sense of touch, the user can delve deeper into virtual reality: the virtual object, in one demonstration a white cylinder, is projected into the real environment and can be felt using a sensor rod. Sending a friend a virtual birthday present, or quickly beaming a new product over to a customer in America to try out – it sounds like science fiction, but this is what the researchers at the Computer Vision Lab want to make possible with the aid of the new technology. Their first step was to successfully transmit a virtual object to a spatially remote person, who could not only see the object but also feel it and move it. The more senses are stimulated, the greater the degree of immersion in the virtual reality. While the visual and acoustic simulation of virtual reality has become increasingly realistic in recent years, development in the haptic area – in other words, the sense of touch – lags far behind. Up to now, it has not been possible to touch or move the virtual copy of an object.

The researchers developed a method for combining visual and haptic impressions with one another. Whilst a 3D scanner records an image of the object, which in one experiment was a soft toy frog, a user simultaneously senses the object using a haptic device. The sensor arm, which can be moved in any direction and is equipped with force, acceleration, and slip sensors, collects information about shape and solidity. With the aid of an algorithm, a virtual copy is created on the computer from the measurements – even while the toy frog is still being scanned and probed. The virtual copy can be sent to another person over the Internet if desired. In order for this other person to be able to see and feel the virtual frog, special equipment is needed: data goggles with a monitor onto which the virtual object is projected, and a sensor rod which is equipped with small motors. A computer program calculates when the virtual object and the sensor rod meet, and then sends a signal to the motors in the rod. These brake the movement that is being made by the user, thereby simulating resistance. The user has the sensation of touching the frog, whilst from the outside it appears that he is touching air.
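The braking logic can be sketched in one dimension. This is a minimal illustration under assumed numbers (a flat virtual surface at the origin and a made-up stiffness constant), not the lab's actual rendering code: the motors apply no force while the rod tip is in free air, and push back with a spring-like force once the tip penetrates the virtual surface.

```python
# Minimal 1-D haptic rendering sketch: virtual surface at x = 0,
# free space at x > 0, object interior at x < 0.

STIFFNESS = 200.0  # N/m, assumed spring constant of the virtual surface

def braking_force(rod_tip_x):
    """Force the rod's motors should apply against the user's hand."""
    penetration = -rod_tip_x          # how far the tip is inside the object
    if penetration <= 0:
        return 0.0                    # tip in free air: no resistance
    return STIFFNESS * penetration    # push back like a stiff spring

# As the user moves the rod toward the virtual object, resistance ramps up:
for x in (0.05, 0.0, -0.01, -0.02):
    print(f"tip at {x:+.2f} m -> force {braking_force(x):.1f} N")
```

Real haptic devices run this collision-and-response loop around a thousand times per second so the resistance feels continuous rather than jerky.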

More information:


23 August 2010

Desk Lamp Turns Table-Top Into 3D

Switching on a lamp is all it takes to turn a table-top into an interactive map with this clever display, on show at the SIGGRAPH computer graphics and animation conference in Los Angeles. Multi-touch table-top displays project content through glass and respond to touch – imagine a table-sized smartphone screen. But researchers from the National Taiwan University in Taipei wanted to make these types of screens more appealing for multiple users. The idea is that several people could look at the same images, and get more information about the areas that interest them, using moveable objects. Users viewing an image such as a map projected onto a table-top display can zoom in on specific areas – seeing street names for example – simply by positioning the lamp device over them.

The team have also created a tablet computer which lets viewers see a two-dimensional scene in 3D: hold the tablet over the area of the map you are interested in, and a 3D view of that area appears on its screen. The lamp also comes in a handheld flashlight design, which could be used with high-res scans of paintings in museums, for example, so that visitors could zoom in on details that catch their eye. Using the tablets to call up 3D views of areas of a map would allow several users, each with their own tablet, to examine and discuss the map at once. This could be useful for the military when examining a map of unfamiliar territory and discussing strategy, for example.
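The position-to-detail idea amounts to a simple hit test. The names and coordinates below are hypothetical, not from the Taipei system: the tracked position of the lamp or tablet over the table selects the patch of the map directly beneath it, and extra detail such as street names is revealed only inside that patch.

```python
# Toy sketch: map coordinates normalized to [0, 1] on the table-top.

def region_under_device(device_x, device_y, radius=0.1):
    """Bounding box of the map patch directly below the device."""
    return (device_x - radius, device_y - radius,
            device_x + radius, device_y + radius)

def visible_labels(labels, region):
    """Street names appear only once their location falls inside the region."""
    x0, y0, x1, y1 = region
    return [name for name, (x, y) in labels.items()
            if x0 <= x <= x1 and y0 <= y <= y1]

street_labels = {"Main St": (0.52, 0.48), "1st Ave": (0.10, 0.90)}
region = region_under_device(0.5, 0.5)
print(visible_labels(street_labels, region))  # ['Main St']
```

Each user's device gets its own region, which is what lets several people inspect different parts of the same shared map at once.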

More information:


19 August 2010

Game Immersion

How do you know you are immersed in a game? There are lots of obvious signifiers: time passes unnoticed; you become unaware of events or people around you; your heart rate quickens in scary or exciting sections; you empathise with the characters. But while we can reel off the symptoms, what are the causes? And why do many games get it wrong? Stimulated by all the Demon's Souls obsessives on Chatterbox at the moment, Gamesblog decided to jumble together some tangential thoughts on the subject. This might not make a whole lot of sense. But then neither does video game immersion. Back in May 2010, the video game designer responsible for creating Lara Croft wrote an interesting feature for Gamasutra in which he listed some of the ways developers accidentally break the immersive spell. One example is poor research: the placing of incongruous props in a game environment. That might mean an American road sign in a European city, or an eighties car model in a seventies-based game. The interesting thing is that we pick up on most of these clues almost unconsciously – we don't need to process the whole game environment to understand what is making us feel unimmersed. Indeed, in the midst of a first-person shooter, where we often get mere seconds to assess our surroundings before being shot at, we can't process the whole environment.

Neuroscientists and psychologists are divided on this, but while many accept that we're only able to hold three or four objects from our visual field in our working memory at any one time, others believe we actually have a rich perception and that we're conscious of our whole field of vision even if we're not able to readily access that information. So we know we're in a crap, unconvincing game world, even if we don't know we're in a crap, unconvincing game world. But there's more to immersion than simply responding to what a game designer has created. Researchers at York University are currently studying immersion, and how it relates to human traits of attentiveness, imagination and absorption. Generally, though, what researchers are finding is that players do a lot of the work toward immersion themselves. People more prone to fantasising and daydreaming – i.e. more absorptive personalities – are able to become more immersed in game worlds. So while we're often being told that gamers are drooling, passive consumers of digital entertainment, we're actually highly imaginative and emotional – we have to be to get the most out of digital environments that can only hint at the intensity of real-life experiences. The best games help us to build immersive emotional reactions through subtle human clues. Believable relationships with other characters are good examples.

More information:


09 August 2010

Adding Temperature to HCI

An experimental new game controller adds the sensation of hot and cold to the user's experience of a simulated environment. Touch interfaces and haptic feedback are already part of how we interact with computers, in the form of iPads, rumbling video game controllers and even 3D joysticks. As the range of interactions with digital environments expands, it's logical to ask what's next. Smell-o-vision has been on the horizon for something like 50 years, but there's a dark horse stalking this race: thermoelectrics. Based on the Peltier effect, these solid-state devices are easy to incorporate into objects of reasonable size, such as video game controllers.

In this configuration, a pair of thermoelectric surfaces on either side of a controller rapidly heat up or cool down to simulate conditions in a virtual environment. The temperature difference isn't large – less than 10 degrees of heating or cooling after five seconds – but the researchers found that, as with haptics, just a little sensory nudge can be enough to convince participants in a virtual environment that they are experiencing something like the real thing. The research was conducted at Tokyo Metropolitan University, in collaboration with the National Institute of Special Needs Education.
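Driving such a device can be sketched as mapping a requested temperature offset to a signed, clamped drive level, where the sign of the current through the Peltier element chooses between heating and cooling. The interface below is hypothetical, not the researchers' code; the clamp reflects the modest range the article mentions.

```python
# Hypothetical Peltier drive mapping for thermal game feedback.

MAX_OFFSET = 10.0  # degrees C above/below ambient after a few seconds

def drive_level(target_offset):
    """Normalized drive in [-1, 1]: positive current heats, negative cools."""
    clamped = max(-MAX_OFFSET, min(MAX_OFFSET, target_offset))
    return clamped / MAX_OFFSET

# Game events mapped to thermal cues:
print(drive_level(+6.0))   # warm desert scene  -> 0.6  (partial heating)
print(drive_level(-15.0))  # icy cave, clamped  -> -1.0 (full cooling)
```

A real controller would layer a feedback loop (a temperature sensor and PID control) on top of this, but the sign-and-clamp mapping is the core of using one element for both hot and cold cues.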

More information:


08 August 2010

New Ideas for Touch Panels

An increasing number of proposals are being made for entirely new methods of tactile feedback, and new technologies are appearing that use them alone or in combination with existing techniques. Toshiba Information Systems (Japan) Corp. has prototyped a device based on a technology that uses weak electric fields to produce a variety of tactile sensations. Until now, tactile feedback has usually meant using a small motor or piezoelectric device to generate vibration, with very few examples of electric field variation as the mechanism.

The new technique not only expresses a variety of sensations, it is also highly resistant to breakage, and because it has no mechanical parts it makes no vibration noise. It can be used even in places where conventional technologies are difficult to implement, such as on the sides or backs of devices, or on curved surfaces. The area where the sensation is felt can also be controlled freely, so it is possible, for example, to provide tactile feedback only when touching a button displayed on the screen.
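Limiting the sensation to an on-screen button comes down to a hit test that gates the actuator. The sketch below is illustrative only, not Toshiba's implementation: the field-based actuator is enabled only while the touch point lies inside a button's rectangle.

```python
# Toy sketch of region-limited tactile feedback.

buttons = {"OK": (100, 300, 60, 40)}  # x, y, width, height in pixels

def feedback_on(touch_x, touch_y):
    """True if the touch point lies on any button's active area."""
    return any(x <= touch_x <= x + w and y <= touch_y <= y + h
               for (x, y, w, h) in buttons.values())

print(feedback_on(120, 320))  # True: finger on the OK button
print(feedback_on(10, 10))    # False: bare screen, no sensation
```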

More information:


06 August 2010

Acrobatic Robots

The Robotics and Mechanisms Laboratory (RoMeLa) at Virginia Tech is filled with robots that would fit right into a ‘Star Wars’ sequel. With support from the National Science Foundation (NSF), researchers are creating ‘Star Wars’-inspired robots aimed at lending a helping hand. For example, the Robotic Air Powered Hand with Elastic Ligaments (RAPHaEL) is a relatively inexpensive robot that uses compressed air to move and could one day help improve prosthetics. Another series of robots nicknamed CLIMBeR, short for Cable-suspended Limbed Intelligent Matching Behavior Robot, was built with NASA in mind: the robots scale steep cliffs and are rugged enough to handle the terrain on Mars. The Intelligent Mobility Platform with Active Spoke System (IMPASS) is a robot with a circle of spokes that move in and out individually so it can both walk and roll.

The Hyper-redundant Discrete Robotic Articulated Serpentine (HyDRAS) snakes its way up dangerous scaffolding so humans don't have to. The team is also building a family of humanoid robots, some of which are even learning to play soccer. A team of kid-sized robots called DARwIn, short for Dynamic Anthropomorphic Robot with Intelligence, competes for Virginia Tech in the collegiate RoboCup competition. CHARLI (Cognitive Humanoid Autonomous Robot with Learning Intelligence) is an adult-sized robot getting into the game as well: it has two cameras on its head, looks around, searches for the ball, works out where it is and, based on that, kicks the ball toward the goal. For another project, called the Blind Driver Challenge, the Virginia Tech team developed the first prototype car that can be driven by the blind. The vehicle's name is DAVID, an acronym for Demonstrative Automobile for the Visually Impaired Driver.

More information: