31 July 2012

Mapping the Uncanny Valley

Artificially created beings, whether they be drawn or sculpted, are warmly accepted by viewers when they are distinctively inhuman. As their appearance is made more realistic, however, acceptance turns to discomfort until the similarity is almost perfect, at which point comfort returns. This effect, called ‘the uncanny valley’ because of the dip in acceptance between clearly inhuman and clearly human forms, is well known, particularly to animators, but why it happens is a mystery. Some suggest it is all about outward appearance, but a study just published in Cognition by researchers at the University of North Carolina and Harvard argues that something else can be involved as well: the apparent presence of a mind where it ought not to be.


According to some philosophers the mind is made up of two parts: agency (the capacity to plan and do things) and experience (the capacity to feel and sense things). Both set people apart from robots, but the researchers speculated that experience in particular was playing a crucial role in generating the uncanny-valley effect. They theorised that adding human-like eyes and facial expressions to robots conveys emotion where viewers do not expect emotion to be present. The resulting clash of expectations, they thought, might be where the unease was coming from.


To test this idea, the researchers presented 45 participants, recruited from subway stations and campus dining halls in Massachusetts, with a questionnaire about the ‘Delta-Cray supercomputer’. A third were told this machine was ‘like a normal computer but much more powerful’. Another third heard that it was capable of experience: they were told it could feel ‘hunger, fear and other emotions’. The remainder were told it was capable of ‘self-control and the capacity to plan ahead’, thus suggesting it had agency. Participants were then asked to rate how unnerved they were by the supercomputer on a scale where one was ‘not at all’ and five was ‘extremely’.


The researchers found that those presented with the idea of a supercomputer that was much more powerful than other computers, or that was capable of planning ahead, were not much unnerved, giving it average scores of 1.3 and 1.4 respectively. By contrast, those presented with the idea of a computer capable of experiencing emotions gave the machine an average of 3.4. These findings are consistent with the researchers’ hypothesis: there seems to be something about finding emotion in a place where it is not expected that upsets people. This led the researchers to wonder whether the reverse, discovering a lack of experience in a place where it was expected, might prove just as upsetting.


More information:

19 July 2012

Controlling Computer with Eyes

Millions of people with multiple sclerosis, Parkinson's disease, muscular dystrophy, spinal-cord injuries or amputations could soon interact with their computers and surroundings using just their eyes, thanks to a new device that costs less than £40. Built from off-the-shelf components, the new device can work out exactly where a person is looking by tracking their eye movements, allowing them to control a cursor on a screen just like a normal computer mouse. The GT3D device is made up of two fast video-game-console cameras, costing less than £20 each, that are attached, outside the line of vision, to a pair of glasses that cost just £3. The cameras constantly take pictures of the eye, working out where the pupil is pointing, and from this the researchers can use a set of calibrations to work out exactly where a person is looking on the screen.


Even more impressively, the researchers are also able to use more detailed calibrations to work out the 3D gaze of the subjects -- in other words, how far into the distance they are looking. It is believed that this could allow people to control an electronic wheelchair simply by looking where they want to go, or to control a robotic prosthetic arm. To demonstrate the effectiveness of the eye-tracker, the researchers got subjects to play the video game Pong. In this game, the subject used his or her eyes to move a bat to hit a ball that was bouncing around the screen -- a feat that is difficult to accomplish with other read-out mechanisms such as brain waves (EEG). The commercially viable device uses just one watt of power and can transmit data wirelessly over Wi-Fi or via USB to any Windows or Linux computer.
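

As a rough illustration of how such a calibration might work (a sketch of the general technique, not the GT3D team's actual procedure), the snippet below fits a simple polynomial mapping from pupil-centre coordinates in the eye camera to gaze points on the screen, using a handful of known calibration targets; all coordinates are invented example values.

```python
# Sketch: least-squares calibration from pupil position to on-screen gaze point.
import numpy as np

def features(pupil_xy):
    """Second-order polynomial features of the pupil centre (px, py)."""
    px, py = pupil_xy[:, 0], pupil_xy[:, 1]
    return np.column_stack([np.ones_like(px), px, py, px * py, px**2, py**2])

# Pupil positions (pixels in the eye camera) recorded while the user fixated
# nine known calibration targets on the screen -- made-up example values.
pupil = np.array([[312, 208], [355, 210], [398, 213],
                  [310, 245], [354, 247], [399, 250],
                  [308, 282], [352, 284], [397, 287]], dtype=float)
screen = np.array([[160, 120], [640, 120], [1120, 120],
                   [160, 360], [640, 360], [1120, 360],
                   [160, 600], [640, 600], [1120, 600]], dtype=float)

# Fit the mapping by least squares (one set of coefficients per screen axis).
coeffs, *_ = np.linalg.lstsq(features(pupil), screen, rcond=None)

def gaze_to_screen(pupil_xy):
    """Map a new pupil measurement to an estimated on-screen gaze point."""
    return features(np.atleast_2d(np.asarray(pupil_xy, dtype=float))) @ coeffs

print(gaze_to_screen([354, 247]).round(1))   # should land near (640, 360)
```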

More information:

18 July 2012

Hummingbird Robotics Kit

The Hummingbird robotics kit is a spin-off product of Carnegie Mellon's CREATE Lab. Hummingbird is designed to enable engineering and robotics activities for ages 10 and up that involve building robots, kinetic sculptures and animatronics out of a combination of kit parts and crafting materials.


Combined with a cross-platform, easy-to-use visual programming environment, Hummingbird provides a great way to introduce kids to robotics and engineering using construction materials that they are already familiar with. The Hummingbird kit contains more sensors, lights and motors than similarly priced kits.

More information:

17 July 2012

Robotic Space Travel

A NASA-created application that brings some of the agency's robotic spacecraft to life in 3D is now available for free on the iPhone and iPad. Called Spacecraft 3D, the app uses animation to show how spacecraft can maneuver and manipulate their outside components. Spacecraft 3D is among the first of what are known as augmented-reality apps for Apple devices. Augmented reality gives users a view of a real-world environment in which elements are enhanced by computer-generated input. Spacecraft 3D uses the iPhone or iPad camera to overlay information on the device's main screen. The app instructs users to print an augmented-reality target on a standard sheet of paper.


When the device's camera is pointed at the target, the spacecraft chosen by the user materializes on screen. Spacecraft 3D also has a feature that lets you take your own augmented-reality picture of the rover or GRAIL spacecraft. You can even make a self-portrait with a spacecraft, putting yourself or someone else in the picture. Spacecraft 3D is currently available only for Apple devices, but should be available on other platforms in the near future. The detailed computer models of the spacecraft used in Spacecraft 3D were originally generated for NASA's ‘Eyes on the Solar System’ Web application. ‘Eyes on the Solar System’ is a 3D environment full of NASA mission data that allows anyone to explore the cosmos from their computer.
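

For readers curious how marker-based augmented reality of this kind generally works, here is a minimal sketch (not NASA's code) using OpenCV's ArUco module: detect a printed target in the camera feed and estimate its pose relative to the camera; that pose is where a renderer would anchor the 3D spacecraft model. It assumes the OpenCV 4.7+ ArUco API, a 5 cm marker and made-up camera intrinsics.

```python
import cv2
import numpy as np

# Assumed camera intrinsics; a real app would calibrate the device camera.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)          # assume negligible lens distortion
marker_size = 0.05                 # printed target assumed to be 5 cm wide
s = marker_size / 2.0
# 3D corners of the printed target in its own coordinate frame.
obj_pts = np.array([[-s,  s, 0], [ s,  s, 0],
                    [ s, -s, 0], [-s, -s, 0]], dtype=np.float32)

detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is not None:
        # Recover the target's position and orientation relative to the camera;
        # a renderer would draw the chosen spacecraft model at this pose.
        found, rvec, tvec = cv2.solvePnP(
            obj_pts, corners[0].reshape(4, 2).astype(np.float32),
            camera_matrix, dist_coeffs)
        if found:
            cv2.drawFrameAxes(frame, camera_matrix, dist_coeffs,
                              rvec, tvec, marker_size)
    cv2.imshow("ar-sketch", frame)
    if cv2.waitKey(1) == 27:       # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```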

More information:

16 July 2012

A Robot Takes Stock

The short figure creeping around the Carnegie Mellon University campus store in a hooded sweatshirt recently isn't some shoplifter, but a robot taking inventory. Andyvision, as it's called, scans the shelves to generate a real-time interactive map of the store, which customers can browse via an in-store screen. At the same time, the robot performs a detailed inventory check, identifying each item on the shelves, and alerting employees if stock is low or if an item has been misplaced. While making its rounds, the robot uses a combination of image-processing and machine-learning algorithms; a database of 3D and 2D images showing the store's stock; and a basic map of the store's layout—for example, where the T-shirts are stacked, and where the mugs live.


The robot has proximity sensors so that it doesn't run into anything. None of the technologies it uses are new in themselves; it's the combination of different types of algorithms running on a low-power system that makes it unique. The map generated by the robot is sent to a large touch-screen display in the store, and a real-time inventory list is sent to iPad-carrying staff. The robot uses a few different tricks to identify items: it looks for barcodes and text, and uses information about the shape, size and color of an object to determine its identity. These are all fairly conventional computer-vision tasks. But the robot also identifies objects based on information about the structure of the store and which items belong next to each other.
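

As a toy illustration of that last idea (not CMU's actual algorithm), the snippet below blends a visual classifier's scores with a prior over what usually sits on a given shelf, so an ambiguous detection is resolved by its context; the products, scores and shelf layout are invented.

```python
# Sketch: combine visual evidence with a shelf-context prior.
shelf_context = {
    "apparel": {"t-shirt": 0.6, "hoodie": 0.3, "mug": 0.1},
    "drinkware": {"mug": 0.7, "water bottle": 0.25, "t-shirt": 0.05},
}

def identify(vision_scores, shelf):
    """Weight classifier scores by the shelf's contextual prior and normalise."""
    prior = shelf_context[shelf]
    combined = {item: score * prior.get(item, 0.01)
                for item, score in vision_scores.items()}
    total = sum(combined.values())
    return {item: value / total for item, value in combined.items()}

# The classifier alone is unsure whether a folded grey object is a shirt or a boxed mug.
scores = {"t-shirt": 0.45, "mug": 0.40, "hoodie": 0.15}
print(identify(scores, "apparel"))   # context tips the decision towards the t-shirt
```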

More information:

15 July 2012

Create Music Through Movement

A UK team has developed a musical suit that allows users to create and manipulate sounds through the movement of their bodies. The suit was developed by a team of electronic, software and sound engineers, together with a fashion designer and artist, from Bristol University, the University of the West of England (UWE) and Queen Mary, University of London. The suit uses sensors known as inertial measurement units (IMUs), which combine a gyroscope, accelerometer and magnetometer and are conventionally used to manoeuvre aircraft and spacecraft, to map the exact position, orientation, movement and speed of the wearer’s body parts in a similar way to motion-capture animation technology.


Microphones on the wrists capture sounds that can then be manipulated with different movements, each corresponding to a different production effect or additional sound in the software’s toolbox, as well as to the volume and stereo position of each sound. Sensor signals run through a central processor on the user’s back that is connected wirelessly to a nearby computer, which runs several pieces of music-production software to convert the movements into sounds in real time. As well as extending the system, the team replaced the original off-the-shelf gloves with custom-designed components, a more stable and efficient software system and more durable fabric.
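

A rough sketch of the kind of mapping such a system might use (purely illustrative, not the team's software): orientation angles estimated from a wrist IMU are converted into normalised sound-control parameters such as filter cutoff, volume and reverb.

```python
import math

def orientation_to_params(accel, gyro):
    """Map a wrist IMU reading to hypothetical sound-control parameters."""
    ax, ay, az = accel
    roll = math.atan2(ay, az)                        # rotation about the forward axis
    pitch = math.atan2(-ax, math.hypot(ay, az))      # tilt of the wrist
    # Map the angles (roughly -pi/2..pi/2) onto 0..1 control values.
    cutoff = (roll / math.pi) + 0.5                  # e.g. filter cutoff
    volume = (pitch / math.pi) + 0.5                 # e.g. channel volume
    reverb = min(1.0, sum(abs(w) for w in gyro) / 10.0)  # faster motion, more reverb
    return {"cutoff": cutoff, "volume": volume, "reverb": reverb}

# Example: wrist roughly level and rotating slowly.
print(orientation_to_params((0.0, 0.1, 9.7), (0.2, 0.0, 0.1)))
```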


More information:

14 July 2012

Robot Vision

Using piezoelectric materials, researchers have replicated the muscle motion of the human eye to control camera systems in a way designed to improve the operation of robots. This new muscle-like action could help make robotic tools safer and more effective for MRI-guided surgery and robotic rehabilitation. Key to the new control system is a piezoelectric cellular actuator that uses a novel biologically inspired technology that will allow a robot eye to move more like a real eye. This will be useful for research studies on human eye movement as well as making video feeds from robots more intuitive.


Researchers at the Georgia Tech Bio-Robotics and Human Modeling Laboratory in the School of Mechanical Engineering said that this technology will lay the groundwork for investigating research questions in systems that possess a large number of active units operating together. The applications range from industrial robots to medical, rehabilitation and intelligent assistive robots. Piezoelectric materials expand or contract when electricity is applied to them, providing a way to transform input signals into motion. This principle is the basis for piezoelectric actuators that have been used in numerous applications, but their use in robotics has been limited by the minuscule displacement of piezoelectric ceramics.
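

To see why that displacement is so limiting, a back-of-the-envelope calculation helps: the free stroke of a stacked piezo actuator is roughly the number of layers times the d33 charge constant times the drive voltage. The values below are typical textbook figures, not numbers from the Georgia Tech work.

```python
# Illustrative free-stroke estimate for a multilayer piezoelectric stack.
d33 = 600e-12   # m/V, piezoelectric charge constant of a PZT-type ceramic (assumed)
layers = 200    # number of ceramic layers in the stack (assumed)
voltage = 150   # V applied across each layer (assumed)

free_stroke = layers * d33 * voltage
print(f"Free stroke: {free_stroke * 1e6:.1f} micrometres")  # ~18 µm for a cm-scale stack
```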

More information:

13 July 2012

Realistic Robot Legs

US experts have developed what they say are the most biologically accurate robotic legs yet. They created a version of the message system that generates the rhythmic muscle signals that control walking. The team, from the University of Arizona, were able to replicate the central pattern generator (CPG) - a nerve cell (neuronal) network in the lumbar region of the spinal cord that generates rhythmic muscle signals.


The CPG produces, and then controls, these signals by gathering information from different parts of the body involved in walking and responding to the environment. This is what allows people to walk without thinking about it. The simplest form of a CPG is called a half-centre, which consists of just two neurons that fire signals alternately, producing a rhythm, together with sensors that deliver information, such as when a leg meets a surface, back to the half-centre.
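

A minimal computational sketch of such a half-centre (a Matsuoka-style oscillator with illustrative parameters, not those used for the robotic legs) shows how two mutually inhibiting, fatiguing neurons settle into the alternating rhythm described above.

```python
import numpy as np

def half_centre(steps=4000, dt=0.001, tau=0.05, tau_a=0.6, beta=2.5, w=2.0, drive=1.0):
    """Two-neuron half-centre: mutual inhibition plus adaptation yields alternation."""
    x = np.array([0.1, 0.0])   # membrane states (slight asymmetry starts the rhythm)
    a = np.zeros(2)            # adaptation (fatigue) states
    out = np.zeros((steps, 2))
    for t in range(steps):
        y = np.maximum(x, 0.0)             # firing rates
        inhibition = w * y[::-1]           # each neuron inhibits the other
        dx = (-x - inhibition - beta * a + drive) / tau
        da = (y - a) / tau_a
        x += dt * dx
        a += dt * da
        out[t] = y
    return out

rhythm = half_centre()
# The two columns of `rhythm` peak alternately, like antagonist muscle commands.
print(rhythm[::500].round(2))
```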

More information: