30 March 2012

Designing Human-Like Robots

Researchers at the University of Wisconsin-Madison are developing computer algorithms based on how people communicate without words. These algorithms are then used to program devices, such as robots, to look and act more human-like, helping to bridge the gap between man and machine. The research shows, for example, that when you finish saying something in a conversation and direct your gaze at one particular person, that person is likely to take the next turn speaking. These nonverbal cues tell people where our attention is focused and what we mean when we direct a question or comment at someone. When people really mean what they’re saying, they might open their eyes wider, look directly at the person they’re addressing, and reinforce their message through facial and other cues.


To convert these subtle cues of human communication into data and language that a robot can use, the researchers take a computational approach. They break down each human cue or gesture into minute segments or sub-mechanisms – such as the direction of the eyes versus the direction of the head, or how the body is oriented – which can be modeled. Certain temporal dimensions are then added to the model, such as how long a target is looked at and whether the gaze stays on the face or shifts elsewhere after a time. The research team has found that learning improves when a robot teacher uses these cues, compared with a robot that lacks these abilities. Their goal is to find the key mechanisms that help us communicate effectively, reproduce them in robots, and enable these systems to connect with us.
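
As a concrete illustration, here is a minimal sketch of how one such gaze cue might be parameterised and replayed on a robot. This is not the Wisconsin team's code; the class, field and target names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class GazeCue:
    """One modeled sub-mechanism: where to look and for how long (illustrative)."""
    target: str            # e.g. "listener_A", "object", "away"
    eye_direction: float   # horizontal eye angle in degrees, relative to the head
    head_direction: float  # horizontal head angle in degrees, relative to the body
    duration: float        # seconds the gaze stays on this target

def end_of_turn_sequence(next_speaker: str) -> list[GazeCue]:
    """Hypothetical cue sequence: finish an utterance, then hand the turn to one
    listener by fixing gaze on them (per the turn-taking finding above)."""
    return [
        GazeCue(target="group", eye_direction=0.0, head_direction=0.0, duration=1.5),
        GazeCue(target=next_speaker, eye_direction=15.0, head_direction=10.0, duration=2.0),
    ]

for cue in end_of_turn_sequence("listener_A"):
    print(f"look at {cue.target} for {cue.duration:.1f}s "
          f"(eyes {cue.eye_direction} deg, head {cue.head_direction} deg)")
```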

More information:

http://blogs.voanews.com/science-world/2012/03/23/designing-human-like-robots/

27 March 2012

Ancient Sites Spotted From Space

Thousands of possible early human settlements have been discovered by archaeologists using computers to scour satellite images. Computers scanned the images for soil discolouration and mounds caused when mud-brick settlements collapsed.


With these computer science techniques it is possible to generate, almost immediately, an enormous map that is methodologically very interesting in its own right, but that also reveals the staggering extent of human occupation over the last 7,000 or 8,000 years.
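
As a rough illustration of the kind of automated scanning described above, the sketch below flags spectrally anomalous patches in a pair of satellite image bands and keeps only blobs large enough to be collapsed mud-brick mounds. It is not the researchers' pipeline; the band index, thresholds and synthetic data are all invented for the example.

```python
import numpy as np
from scipy import ndimage

def candidate_settlements(red: np.ndarray, nir: np.ndarray,
                          threshold: float = 0.15, min_pixels: int = 50):
    """Flag pixels whose spectral signature departs from the surrounding terrain
    (a crude stand-in for 'soil discolouration') and group them into blobs
    large enough to be candidate settlement mounds."""
    index = (nir - red) / (nir + red + 1e-6)        # simple band index
    anomaly = np.abs(index - np.median(index)) > threshold
    labels, n = ndimage.label(anomaly)              # connected blobs
    sizes = ndimage.sum(anomaly, labels, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_pixels]
    return [ndimage.center_of_mass(anomaly, labels, i) for i in keep]

# Usage with synthetic data standing in for two satellite bands:
rng = np.random.default_rng(0)
red = rng.normal(0.3, 0.02, (500, 500))
nir = rng.normal(0.5, 0.02, (500, 500))
nir[200:230, 300:330] += 0.3      # a fake "mound" anomaly
print(candidate_settlements(red, nir))
```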

More information:

http://www.bbc.co.uk/news/science-environment-17436400

22 March 2012

Camera That Peers Around Corners

MIT Media Lab researchers caused a stir by releasing a slow-motion video of a burst of light traveling the length of a plastic bottle. But the experimental setup that enabled that video was designed for a much different application: a camera that can see around corners. Researchers describe using their system to produce recognizable 3D images of a wooden figurine and of foam cutouts outside their camera’s line of sight. The research could ultimately lead to imaging systems that allow emergency responders to evaluate dangerous environments or vehicle navigation systems that can negotiate blind turns, among other applications. The principle behind the system is essentially that of the periscope.


But instead of using angled mirrors to redirect light, the system uses ordinary walls, doors or floors. The system exploits a device called a femtosecond laser, which emits bursts of light so short that their duration is measured in quadrillionths of a second. To peer into a room that’s outside its line of sight, the system might fire femtosecond bursts of laser light at the wall opposite the doorway. The light would reflect off the wall and into the room, then bounce around and re-emerge, ultimately striking a detector that can take measurements every few picoseconds. Because the light bursts are so short, the system can gauge how far they’ve traveled by measuring the time it takes them to reach the detector.
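
The underlying arithmetic is ordinary time-of-flight: light covers roughly 0.3 millimetres per picosecond, so arrival times translate directly into path lengths. A minimal sketch, not MIT's reconstruction code, with illustrative numbers:

```python
SPEED_OF_LIGHT = 2.998e8  # metres per second

def path_length_metres(arrival_time_ps: float) -> float:
    """Total distance travelled by a light burst, given its time of flight."""
    return SPEED_OF_LIGHT * arrival_time_ps * 1e-12

# Light covers about 0.3 mm per picosecond, so a detector sampling every few
# picoseconds resolves path-length differences of around a millimetre:
for t_ps in (1.0, 4.0, 10_000.0):
    print(f"{t_ps:>9.1f} ps  ->  {path_length_metres(t_ps) * 1000:.2f} mm")

# Subtracting the known laser-to-wall and wall-to-detector legs leaves the
# unknown bounce inside the hidden room; combining many such measurements from
# different laser spots is what lets the full algorithm recover 3D shape.
```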

More information:

http://web.mit.edu/newsoffice/2012/camera-sees-around-corners-0321.html

20 March 2012

Meaningful Gestures

Kinect, Microsoft’s video-game controller that registers a user’s intentions from his gestures, may well be the shape of things to come. Researchers at the Human-Computer Interaction Institute at Carnegie Mellon University, in Pittsburgh, think the Kinect’s basic principles could be used to make a technological panopticon that monitors people’s movements and gives them what they want, wherever they want it. Someone in a shopping mall, for example, might hold up his hand and see a map appear instantly at his fingertips. This image might then be locked in place by the user sticking his thumb out. A visitor to a museum, armed with a suitable earpiece, could get the lowdown on what he was looking at simply by pointing at it. And a person who wanted to send a text message could tap it out with one hand on a keyboard projected onto the other, and then send it by flipping his hand over. In each case, sensors in the wall or ceiling would be watching what he was up to, looking for significant gestures and reacting accordingly.


An older project, OmniTouch, combined a Kinect-like array of sensors with a small, shoulder-mounted projector to project interactive displays onto nearby surfaces, including the user’s body. The new prototype, Armura, takes the idea a stage further by mounting both sensors and projector in the ceiling. This frees the user from the need to carry anything, and also provides a convenient vantage point from which to spot his gestures. The actual detection is done by infra-red light, which reflects off the user’s skin and clothes. A camera records the various shapes made by the user’s hands and arms. Software then identifies different arrangements of the user’s arms, hands and fingers, such as arms-crossed, thumbs-in, thumbs-out, book, palms-up, palms-down and so on. The hands alone are capable of tens of thousands of interactions and gestures. The trick is to distinguish between them, matching the gesturer’s intention to his pose precisely enough that the correct consequence follows, but not so precisely that slightly non-standard gestures are ignored.
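
That trade-off between precision and tolerance can be made concrete with a toy classifier, which is not the Armura software: each gesture is a template feature vector, an observed pose is matched to its nearest template, and a distance threshold decides when a near-miss still counts. All names and numbers here are invented.

```python
import numpy as np

# Hypothetical pose templates: a few normalised hand/arm features per gesture
# (e.g. wrist angles, thumb spread, palm orientation).
TEMPLATES = {
    "palms_up":   np.array([0.9, 0.1, 0.5, 0.5]),
    "palms_down": np.array([0.1, 0.9, 0.5, 0.5]),
    "thumbs_out": np.array([0.5, 0.5, 0.9, 0.2]),
    "book":       np.array([0.7, 0.3, 0.2, 0.9]),
}

def classify(pose: np.ndarray, tolerance: float = 0.35) -> str | None:
    """Return the closest template within `tolerance`, else None.
    The tolerance is the 'not too precise' part: close-but-imperfect poses
    still match, while unrecognised poses are ignored."""
    best_label, best_dist = None, float("inf")
    for label, template in TEMPLATES.items():
        dist = float(np.linalg.norm(pose - template))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= tolerance else None

print(classify(np.array([0.85, 0.15, 0.45, 0.55])))  # slightly off -> "palms_up"
print(classify(np.array([0.5, 0.5, 0.5, 0.5])))      # ambiguous -> None
```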

More information:

http://www.economist.com/node/21548486

14 March 2012

3D Animations for Everyone

3D movies like ‘Toy Story’ or ‘Transformers’ are built around everyday objects that are able to move like humans. Such 3D characters are created by skilled experts in time-consuming manual work. Computer scientists at the Max Planck Institute for Informatics have now developed two computer programs that can accomplish the same process in mere seconds and can easily be handled even by inexperienced users. In the 3D movie ‘Toy Story’, the astronaut ‘Space Ranger Buzz Lightyear’ elicits great laughs from the audience. In ‘Transformers’, cars and trucks amaze viewers by turning into robots and then fighting each other, as agile as professional boxers. Their spectacular on-screen movements are hand-crafted and take a lot of time to produce, regardless of the hardware involved. After a static digital representation of the character has been created, the ability to move is achieved by rigging the character, i.e. manually defining a motion skeleton and attaching it to the character's individual components. The Max Planck researchers are now the first worldwide to have developed two novel approaches that not only significantly shorten these two important steps of the creation process, but also considerably simplify them. Their software uses databases such as Dosch Design, Turbosquid or Google Warehouse, which, either free of charge or for a small fee, offer data sets defining the shape of a character or an object. That way users do not need to create their own 3D models, although they cannot yet customize them either.


This is where the first of the two novel algorithms comes in. It cleverly splits the 3D models in the database into components and remembers how they were connected. Users can then select two of the processed models that they want to combine into a new and unique model. An amateur designer can thus, for example, assemble his or her own ultimate robot for a video game. Using a slider, the designer can decide in real time how much of component A or B to use, and can always view the resulting combination. To make sure that only matching components can be exchanged, e.g. the arms of A with the arms of B, the program uses segmentation based on identified symmetries. Finally, the newly created model can be animated with the second algorithm. All that is needed is a defined movement sequence and a target skeleton. These, too, are freely available on the internet, for example at the Mocap Database maintained by Carnegie Mellon University. The software developed by the research group applies the movement and the skeleton to the 3D model. This is done by a clever algorithm that identifies a similar skeleton, including the appropriate joints, in the target model. The movement is then transferred to that skeleton, animating the model. In this way, the clunky astronaut figure of Toy Story star ‘Buzz Lightyear’ can, within mere seconds, be made to move on screen like kung fu legend Bruce Lee.
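
A toy sketch of the slider idea follows, with a crude linear blend standing in for whatever shape interpolation the real software uses. The segmented 'components' here are just random point clouds, and all names and numbers are invented for illustration.

```python
import numpy as np

# Hypothetical segmented models: each named component is a small point cloud.
# In the real system the parts come from symmetry-based segmentation of
# database models; here they are random stand-ins with matching point counts.
rng = np.random.default_rng(1)
model_a = {"arm": rng.normal(size=(100, 3)), "torso": rng.normal(size=(200, 3))}
model_b = {"arm": rng.normal(size=(100, 3)), "torso": rng.normal(size=(200, 3))}

def blend_component(part: str, slider: float) -> np.ndarray:
    """Interpolate one matched component between model A (slider=0.0) and
    model B (slider=1.0); the slider is the real-time control described in
    the article. Assumes the two parts have corresponding points."""
    a, b = model_a[part], model_b[part]
    return (1.0 - slider) * a + slider * b

hybrid_arm = blend_component("arm", slider=0.3)   # 70% of A, 30% of B
print(hybrid_arm.shape)
```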

More information:

http://www.mpg.de/5158289/cebit-2012_3D

13 March 2012

Teach Your Robot Well

Within a decade, personal robots could become as common in U.S. homes as any other major appliance, and many if not most of these machines will be able to perform innumerable tasks not explicitly imagined by their manufacturers. This opens up a wider world of personal robotics, in which machines do whatever their owners can program them to do, without the owners actually being programmers. A new study by researchers in Georgia Tech’s Center for Robotics & Intelligent Machines (RIM) identified the types of questions a robot can ask during a learning interaction that are most likely to make for a smooth and productive human-robot relationship.


These questions are about certain features of tasks, more so than labels of task components or real-time demonstrations of the task itself, and the researchers identified them not by studying robots, but by studying the everyday people who will one day be their masters. The study set out to discover the role that ‘active learning’ concepts play in human-robot interaction. In a nutshell, active learning refers to giving machine learners more control over the information they receive. Simon, a humanoid robot created in the lab of Georgia Tech’s School of Interactive Computing, is well acquainted with active learning; researchers are programming him to learn new tasks by asking questions.
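
As an illustration of the feature-query idea, here is a toy active-learning sketch, not the Georgia Tech system: the learner keeps an uncertainty score per task feature and asks the human about the feature it is least sure about. Feature names and scores are made up.

```python
# Toy feature-query active learner: the robot tracks, for each task feature,
# how unsure it is that the feature matters, and asks about the most
# uncertain one first.
uncertainty = {
    "cup_colour": 0.9,      # almost no evidence either way
    "cup_position": 0.4,
    "gripper_force": 0.1,   # demonstrations already pinned this down
}

def next_question() -> str:
    feature = max(uncertainty, key=uncertainty.get)
    return f"Does {feature.replace('_', ' ')} matter for this task?"

def record_answer(feature: str, matters: bool) -> None:
    """A yes/no answer from the human removes most of that feature's uncertainty."""
    uncertainty[feature] = 0.05
    print(f"learned: {feature} {'matters' if matters else 'does not matter'}")

print(next_question())                 # asks about cup_colour first
record_answer("cup_colour", False)
print(next_question())                 # then moves on to cup_position
```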

More information:

http://www.cc.gatech.edu/news/teach-your-robot-well-georgia-tech-shows-how

08 March 2012

New Direction for Game Controllers

University of Utah engineers designed a new kind of video game controller that not only vibrates like existing devices, but also pulls and stretches the thumb tips in different directions to simulate the tug of a fishing line, the recoil of a gun or the feeling of ocean waves. They are demonstrating the device and presenting studies about it at the Institute of Electrical and Electronics Engineers’ Haptics Symposium. Haptics deals with research about touch, just as optics deals with vision. A patent is pending on the device. The first haptic, or touch, feedback in game controllers came in 1997 with the Nintendo 64 system’s ‘Rumble Pak’, which made the hands vibrate using an off-balance motor to simulate the feel of driving a race car on a gravel road, flying a jet or dueling with Star Wars light sabers.


The latest game controller prototype looks like a controller for Microsoft’s Xbox or Sony’s PlayStation, but with an addition to the normal thumb joysticks, on which the thumbs are placed and moved in different directions to control the game. The middle of each ring-shaped thumb stick holds a round, red ‘tactor’ that looks like the eraser-head-shaped IBM TrackPoint, or pointing stick, now found on a number of laptop computer brands. Video games are commonly designed so that the left thumb stick controls motion and the right controls the player’s gaze or aim. With the new controller, as a soldier avatar crawls forward, the player pushes the left thumb stick forward and feels the tactors tugging alternately back and forth under both thumbs, mimicking the soldier crawling first with one arm, then the other.
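
One way to picture that alternating tug is as two displacement signals in opposite phase, one per thumb tactor. The sketch below is purely illustrative and is not the Utah team's firmware; the crawl rate and displacement limits are invented.

```python
import math

def crawl_tactor_offsets(t: float, crawl_rate_hz: float = 1.0,
                         max_offset_mm: float = 1.5) -> tuple[float, float]:
    """Front/back displacement in millimetres for the left and right thumb
    tactors at time t seconds: the two sides tug in opposite phase, mimicking
    the avatar crawling with one arm and then the other."""
    phase = 2.0 * math.pi * crawl_rate_hz * t
    left = max_offset_mm * math.sin(phase)
    right = max_offset_mm * math.sin(phase + math.pi)  # opposite phase
    return left, right

for step in range(5):
    t = step * 0.25
    left, right = crawl_tactor_offsets(t)
    print(f"t={t:.2f}s  left={left:+.2f}mm  right={right:+.2f}mm")
```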

More information:

http://unews.utah.edu/news_releases/a-new-direction-for-game-controllers/

04 March 2012

Interactive 3D Web Graphical Objects

When customers visit an online shop, they want to see all parts of a product; they want to enlarge it, or see what it looks like when individual elements are adjusted. Until now, web developers have had to juggle a multiplicity of different programs in order to present articles on the Internet in such a complex way. The new HTML extension XML3D simplifies that by offering the capability to describe three-dimensional scenes directly within the website's code.


An online shop can be extended with XML3D in just a few clicks, as researchers at Saarland University's Intel Visual Computing Institute demonstrated. The online shop's website fills the whole screen of the laptop. In the center, the image of a high-end digital camera appears. Just a few finger movements on the touchpad are enough to rotate the model freely and to enlarge or shrink it, no matter which lens has been selected with a mouse click.
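
To give a rough feel for what describing a scene directly within the website's code means, here is a small Python sketch that assembles a schematic XML3D-like fragment and embeds it in a page. The element and attribute names (xml3d, view, mesh) are approximations based on the project's published examples and may not match the actual specification.

```python
import xml.etree.ElementTree as ET

# Build a schematic 3D scene description and embed it in the page markup.
# The tag and attribute names below are illustrative approximations of
# XML3D-style markup, not a guaranteed match for the real specification.
scene = ET.Element("xml3d", {"style": "width: 480px; height: 320px;"})
ET.SubElement(scene, "view", {"position": "0 0 100"})
ET.SubElement(scene, "mesh", {"src": "camera.xml#body", "type": "triangles"})

page = (
    "<html><body><h1>Product view</h1>"
    + ET.tostring(scene, encoding="unicode")
    + "</body></html>"
)
print(page)
```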

More information:

http://www.rdmag.com/News/2012/02/Information-Tech-Computing-Internet-Interactive-3-D-graphical-objects-may-soon-be-common-on-the-web/