30 May 2012

Robot Cleans Room

Sooner than you think, we may have robots to tidy up our homes. Researchers in Cornell's Personal Robotics Lab have trained a robot to survey a room, identify all the objects, figure out where they belong and put them away. Previous work has dealt with placing single objects on a flat surface. Now researchers are looking at a group of objects, and this is the first work that places objects in non-trivial places. The new algorithms allow the robot to consider the nature of an object in deciding what to do with it. The researchers tested placing dishes, books, clothing and toys on tables and in bookshelves, dish racks, refrigerators and closets. The robot was up to 98 percent successful in identifying and placing objects it had seen before. It was able to place objects it had never seen before, but success rates fell to an average of 80 percent. Ambiguously shaped objects, such as clothing and shoes, were most often misidentified. The robot begins by surveying the room with a Microsoft Kinect 3D camera, originally made for video gaming but now being widely used by robotics researchers. Many images are stitched together to create an overall view of the room, which the robot's computer divides into blocks based on discontinuities of color and shape. The robot has been shown several examples of each kind of object and learns what characteristics they have in common.
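
The article does not spell out the segmentation algorithm, so the sketch below only illustrates the idea it describes: splitting an RGB-D view into blocks wherever neighbouring pixels jump in color or depth. The flood-fill approach and the color_tol and depth_tol tolerances are assumptions for illustration, not the Cornell group's method.

    import numpy as np
    from collections import deque

    def segment_blocks(rgb, depth, color_tol=30.0, depth_tol=0.02):
        """Group pixels into blocks; a large jump in color or depth
        between neighbours marks a block boundary.
        rgb: (H, W, 3) uint8 image, depth: (H, W) array in metres."""
        h, w = depth.shape
        labels = -np.ones((h, w), dtype=int)
        next_label = 0
        for sy in range(h):
            for sx in range(w):
                if labels[sy, sx] != -1:
                    continue
                labels[sy, sx] = next_label
                queue = deque([(sy, sx)])          # breadth-first flood fill
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1:
                            color_jump = np.abs(rgb[ny, nx].astype(float)
                                                - rgb[y, x].astype(float)).max()
                            depth_jump = abs(depth[ny, nx] - depth[y, x])
                            if color_jump < color_tol and depth_jump < depth_tol:
                                labels[ny, nx] = next_label
                                queue.append((ny, nx))
                next_label += 1
        return labels                              # (H, W) map of block ids

    # Toy scene: a flat wall 2 m away with a brighter, raised box in front of it.
    depth = np.full((60, 80), 2.0)
    depth[20:40, 30:50] = 1.5
    rgb = np.zeros((60, 80, 3), dtype=np.uint8)
    rgb[20:40, 30:50] = 200
    print(np.unique(segment_blocks(rgb, depth)))   # expect two block ids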


For each block it computes the probability of a match with each object in its database and chooses the most likely match. For each object the robot then examines the target area to decide on an appropriate and stable placement. Again it divides a 3D image of the target space into small chunks and computes a series of features of each chunk, taking into account the shape of the object it's placing. The researchers train the robot for this task by feeding it graphic simulations in which placement sites are labeled as good and bad, and it builds a model of what good placement sites have in common. It chooses the chunk of space with the closest fit to that model. Finally the robot creates a graphic simulation of how to move the object to its final location and carries out those movements. These are practical applications of computer graphics far removed from gaming and animating movie monsters. A robot with a success rate less than 100 percent would still break an occasional dish. Performance could be improved, the researchers say, with cameras that provide higher-resolution images, and by pre-programming the robot with 3D models of the objects it is going to handle, rather than leaving it to create its own model from what it sees. The robot sees only part of a real object, so a bowl could look the same as a globe. Tactile feedback from the robot's hand would also help it to know when the object is in a stable position and can be released.
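
The article gives the two steps (probabilistic matching of each block against known objects, then scoring chunks of the target space against a learned model of good placements) without the underlying models. The sketch below stands in for both with simple numpy placeholders: a softmax over feature distances for recognition and a learned linear score for placement. The feature names, prototypes and weights are hypothetical.

    import numpy as np

    def classify_block(block_features, prototypes):
        """Pick the most likely object label for one segmented block.
        prototypes: {label: mean feature vector} learned from examples.
        A softmax over negative distances stands in for the real model."""
        labels = list(prototypes)
        dists = np.array([np.linalg.norm(block_features - prototypes[k]) for k in labels])
        probs = np.exp(-dists) / np.exp(-dists).sum()
        best = int(np.argmax(probs))
        return labels[best], probs[best]

    def choose_placement(chunk_features, w, b):
        """Score every chunk of the target area with a learned linear model
        of 'good placement' and return the index of the best-fitting chunk."""
        scores = chunk_features @ w + b            # higher = better fit
        return int(np.argmax(scores)), scores

    # Hypothetical two-feature prototypes (say, height and roundness).
    prototypes = {"plate": np.array([0.02, 0.9]), "book": np.array([0.03, 0.1])}
    label, p = classify_block(np.array([0.025, 0.85]), prototypes)
    idx, _ = choose_placement(np.random.rand(5, 2), w=np.array([0.5, -1.0]), b=0.1)
    print(label, round(float(p), 2), idx)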

More information:

29 May 2012

Robotic Gesturing

Many works of science fiction have imagined robots that could interact directly with people to provide entertainment, services or even health care. Robotics is now at a stage where some of these ideas can be realized, but it remains difficult to make robots easy to operate. One option is to train robots to recognize and respond to human gestures. In practice, however, this is difficult because a simple gesture such as waving a hand may look very different when performed by different people. Designers must therefore develop intelligent computer algorithms that can be ‘trained’ to identify general patterns of motion and relate them correctly to individual commands.

Researchers at the A*STAR Institute for Infocomm Research in Singapore have adapted a cognitive memory model called a localist attractor network (LAN) to develop a new system that recognizes gestures quickly and accurately, and requires very little training. They tested their software by integrating it with ShapeTape, a special jacket that uses fibre optics and inertial sensors to monitor the bending and twisting of the wearer's hands and arms. They programmed the ShapeTape to report the three-dimensional orientation of shoulders, elbows and wrists 80 times per second, and applied velocity thresholds to detect when gestures were starting.
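
The paper's actual thresholds and the LAN recogniser itself are not reproduced here; the snippet below only sketches the first step the article describes: watching an 80 Hz stream of joint orientations and cutting out a segment when overall angular speed rises above a start threshold and later stays below a stop threshold. All threshold values are illustrative.

    import numpy as np

    def segment_gestures(orientations, rate_hz=80.0, start_thresh=0.8,
                         stop_thresh=0.2, min_quiet_s=0.25):
        """Cut a stream of joint-orientation samples into gesture segments.
        orientations: (T, D) array sampled at rate_hz.  A gesture starts when
        overall angular speed exceeds start_thresh and ends once the speed
        has stayed below stop_thresh for min_quiet_s seconds."""
        speed = np.linalg.norm(np.diff(orientations, axis=0), axis=1) * rate_hz
        segments, start, quiet = [], None, 0
        for t, s in enumerate(speed):
            if start is None:
                if s > start_thresh:
                    start, quiet = t, 0
            else:
                quiet = quiet + 1 if s < stop_thresh else 0
                if quiet >= int(min_quiet_s * rate_hz):
                    segments.append((start, t - quiet))
                    start = None
        if start is not None:                      # stream ended mid-gesture
            segments.append((start, len(speed)))
        return segments

    # Toy stream: one second of rest, a half-second sweep, then one second of rest.
    angle = np.concatenate([np.zeros(80), np.linspace(0.0, 1.5, 40), np.full(80, 1.5)])
    stream = np.tile(angle[:, None], (1, 9))       # pretend nine joint angles move together
    print(segment_gestures(stream))                # expect one segment near samples 80-118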

More information:

24 May 2012

Robotic Fish Sniffs Out Pollution

There is something unnatural lurking in the waters of the port of Gijon, Spain, and researchers are tracking its every move. It is not some bizarre new form of marine life, but an autonomous robotic fish designed to sense marine pollution, taking to the open waves for the first time. Currently the port relies on divers to monitor water quality, which is a lengthy process costing €100,000 per year. The divers take water samples from hundreds of points in the port and then send them off for analysis, with the results taking weeks to return. By contrast, the SHOAL robots would continuously monitor the water, letting the port respond immediately to the causes of pollution, such as a leaking boat or industrial spillage, and work to mitigate its effects. The SHOAL fish are one and a half metres long, comparable to the size and shape of a tuna, but their neon-yellow plastic shell means they are unlikely to be mistaken for the real thing.


A range of on-board chemical sensors detects lead, copper and other pollutants, and also measures water salinity. The fish are driven by a dual-hinged tail capable of making tight turns that would be impossible for a propeller-driven robot. They are also less noisy, reducing the impact on marine life. The robots are battery powered and capable of running for 8 hours between charges. At the moment the researchers have to recover them by boat, but the plan is for the fish to return to a charging station by themselves. Working as a group, the fish can cover a region of water one kilometre square, down to a depth of 30 metres. They communicate with each other and with a nearby base station using very low-frequency sound waves, which penetrate water more easily than radio waves. However, this means the fish have a low data-transmission rate and can only send short, predefined messages.
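
The article only notes that the acoustic link is slow and restricted to short, predefined messages. As an illustration of what such a constraint implies, the sketch below packs a hypothetical fish status report into four bytes; the message set and frame layout are invented for the example and are not the SHOAL project's protocol.

    import struct

    MESSAGES = {
        0: "status_ok",
        1: "pollutant_detected",
        2: "low_battery",
        3: "returning_to_base",
    }

    def encode(fish_id, msg_id, reading_centi=0):
        """Pack one report into 4 bytes: fish id, message id, 16-bit reading."""
        return struct.pack(">BBH", fish_id, msg_id, reading_centi)

    def decode(frame):
        fish_id, msg_id, reading_centi = struct.unpack(">BBH", frame)
        return fish_id, MESSAGES[msg_id], reading_centi / 100.0

    frame = encode(fish_id=2, msg_id=1, reading_centi=350)   # e.g. a reading of 3.50 units
    print(len(frame), "bytes ->", decode(frame))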

More information:

http://www.newscientist.com/article/dn21836-robotic-fish-shoal-sniffs-out-pollution-in-harbours.html

23 May 2012

Paralyzed, Moving Robots With Their Minds

Two people who are virtually paralyzed from the neck down have learned to manipulate a robotic arm with just their thoughts, using it to reach out and grab objects. One of them, a woman, was able to retrieve a bottle containing coffee and drink it from a straw — the first time she had served herself since her stroke 15 years earlier. The report, released online by the journal Nature, is the first published demonstration that humans with severe brain injuries can effectively control a prosthetic arm, using tiny brain implants that transmit neural signals to a computer.


Scientists have predicted for years that this brain-computer connection would one day allow people with injuries to the brain and spinal cord to live more independent lives. Previously, researchers had shown that humans could learn to move a computer cursor with their thoughts, and that monkeys could manipulate a robotic arm. The technology is not yet ready for use outside the lab, experts said, but the new study is an important step forward, providing dramatic evidence that brain-controlled prosthetics are within reach.
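
The article does not describe the decoding algorithm itself. As a rough illustration of the general idea (mapping the firing rates of many recorded neurons onto an intended movement), the sketch below fits a simple linear decoder by least squares on synthetic data; real systems of this kind use more sophisticated decoders, and the channel count and noise levels here are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n_neurons, n_samples = 96, 500                 # e.g. a 96-channel implant

    # Synthetic training data: intended 2-D velocities and the noisy firing
    # rates they evoke through each neuron's (random) preferred direction.
    velocity = rng.standard_normal((n_samples, 2))
    tuning = rng.standard_normal((2, n_neurons))
    rates = velocity @ tuning + 0.5 * rng.standard_normal((n_samples, n_neurons))

    # Fit decoder weights W so that rates @ W approximates the intended velocity.
    W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

    # At run time, one new bin of firing rates becomes a movement command.
    new_rates = rng.standard_normal((1, n_neurons))
    arm_velocity_command = new_rates @ W
    print(W.shape, arm_velocity_command)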

More information:

http://www.nytimes.com/2012/05/17/science/bodies-inert-they-moved-a-robot-with-their-minds.html?_r=1  

14 May 2012

Implanted User Interfaces


Pacemakers and other implanted medical devices have become commonplace. But being able to interact directly with implants, via user interfaces that are themselves implanted, might still strike some as science fiction à la the Terminator. Researchers who are testing implanted user interfaces say such devices will enable people who have implanted medical devices such as pacemakers to recharge and reprogram them without the use of wireless transmissions, which are considered vulnerable to hacking. Using a cadaver as their subject, researchers from Autodesk Research in Toronto and the University of Toronto showed that it is possible to communicate with a small UI device implanted just below the skin of the arm. Some of the output is sensory, such as vibrations or sounds that might alert a patient with a pacemaker that the device's battery is nearly discharged. They also tested pressure and light sensors for entering information. In addition, they successfully recharged batteries through a ‘powering mat’ placed on top of the skin.
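
None of the device's firmware is described in the article; the loop below is only a sketch of how the tested inputs (pressure and light sensors) and outputs (vibration and sound) might be tied together. Every hardware call is a hypothetical stub, not a real device API.

    import time

    # All hardware calls below are hypothetical stubs, not a real device API.
    def read_pressure():  return 0.9    # stub: firmness of a tap on the skin (0-1)
    def read_light():     return 0.1    # stub: light level at the implant (0-1)
    def battery_level():  return 0.15   # stub: fraction of charge remaining
    def vibrate(pattern): print("vibrate:", pattern)
    def beep(times):      print("beep x", times)

    def ui_loop(poll_hz=10, cycles=3):
        """Poll the implanted inputs and answer with tactile or audible output."""
        for _ in range(cycles):                  # a real device would loop forever
            if battery_level() < 0.2:
                vibrate("short-short")           # warn the patient: recharge soon
            if read_pressure() > 0.8:            # a firm tap through the skin = user input
                beep(1)                          # acknowledge the tap
            if read_light() > 0.9:               # bright light at the sensor = another input
                beep(2)
            time.sleep(1.0 / poll_hz)

    ui_loop()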


Despite the security issues, the researchers also tested Bluetooth transmissions that could prompt a smartphone or other wireless hub to send signals to a care manager or physician. They found that data transmission was hardly affected by the skin covering their UI device. In contrast to current implanted medical devices, which can do only what they're programmed to do, those equipped with or attached to an implanted UI "could support a wide range of applications and tasks," the paper says. For example, a malfunctioning pacemaker could be reprogrammed. Implanted units have several advantages over mobile and wearable UI devices, the study says: they travel with the user, are invisible, and are impervious to the weather. So far, there has been no other research on this type of device. Among other things, studies must assess the infection risks of implanted UI devices. It is also not yet clear exactly how people would interact with devices implanted under their skin.

More information:

13 May 2012

3D Videoconferencing Pod

A Queen's University researcher has created a Star Trek-like human-scale 3D videoconferencing pod that allows people in different locations to videoconference as if they are standing in front of each other. The technology developed by researchers at the Queen's Human Media Lab is called TeleHuman and looks like something from the Star Trek holodeck. Two people simply stand in front of their own life-size cylindrical pods and talk to 3D hologram-like images of each other. Cameras capture and track 3D video and convert it into a life-size image. Since the 3D video image is visible 360 degrees around the pod, a person can walk around it to see the other person's side or back.


While the technology may seem like it comes from a galaxy far, far away, it is not as complicated as most would think: the researchers used mostly existing hardware, including a 3D projector, a 1.8-metre-tall translucent acrylic cylinder and a convex mirror. The researchers used the same pod to create another application called BodiPod, which presents an interactive 3D anatomy model of the human body. The model can be explored through a full 360 degrees using gestures and speech interactions. When people approach the pod, they can wave in thin air to peel off layers of tissue. In X-ray mode, users see deeper into the anatomy as they get closer to the pod.
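
How BodiPod maps a user's distance from the pod onto anatomy depth is not specified in the article; the sketch below is just one plausible mapping, with an invented layer list and invented near/far distances rather than the Human Media Lab's parameters.

    LAYERS = ["skin", "muscle", "organs", "skeleton"]

    def layer_for_distance(distance_m, near=0.5, far=2.5):
        """Closest distance shows the deepest layer, farthest shows the skin."""
        d = min(max(distance_m, near), far)
        frac = (far - d) / (far - near)          # 0 when far away, 1 when close
        return LAYERS[min(int(frac * len(LAYERS)), len(LAYERS) - 1)]

    for d in (3.0, 2.0, 1.2, 0.5):
        print(f"{d:.1f} m -> {layer_for_distance(d)}")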

More information:

http://www.queensu.ca/news/print/36243

10 May 2012

Neural Recordings

Gaining access to the inner workings of a neuron in the living brain offers a wealth of useful information: its patterns of electrical activity, its shape, even a profile of which genes are turned on at a given moment. However, achieving this entry is such a painstaking task that it is considered an art form; it is so difficult to learn that only a small number of labs in the world practice it. But that could soon change: Researchers at MIT and the Georgia Institute of Technology have developed a way to automate the process of finding and recording information from neurons in the living brain. The researchers have shown that a robotic arm guided by a cell-detecting computer algorithm can identify and record from neurons in the living mouse brain with better accuracy and speed than a human experimenter.


The new automated process eliminates the need for months of training and provides long-sought information about living cells’ activities. Using this technique, scientists could classify the thousands of different types of cells in the brain, map how they connect to each other, and figure out how diseased cells differ from normal cells. The method could be particularly useful in studying brain disorders such as schizophrenia, Parkinson’s disease, autism and epilepsy. In all these cases, a molecular description of a cell that is integrated with its electrical and circuit properties has remained elusive. The researchers also showed that their method can be used to determine the shape of the cell by injecting a dye; they are now working on extracting a cell’s contents to read its genetic profile.
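
The published cell-detection algorithm is not reproduced in the article. As a rough sketch of the general approach (lowering a pipette in small steps and watching an electrical measurement for the signature of a nearby cell), the snippet below stops the descent when a simulated resistance reading jumps; the step size, threshold and units are illustrative only.

    import numpy as np

    def descend_until_cell(resistance_at, start_um, step_um=2.0,
                           jump_kohm=200.0, max_steps=500):
        """Step the pipette deeper until the resistance rises sharply,
        which suggests the tip is pressing against a cell membrane."""
        depth = start_um
        baseline = resistance_at(depth)
        for _ in range(max_steps):
            depth += step_um
            r = resistance_at(depth)
            if r - baseline > jump_kohm:
                return depth, r                  # candidate cell found here
            baseline = 0.9 * baseline + 0.1 * r  # slowly track the baseline
        return None, None

    # Toy tissue model: resistance is flat until a 'cell' is reached at 850 um.
    def fake_resistance(depth_um):
        return 5000.0 + (400.0 if depth_um >= 850 else 0.0) + np.random.randn() * 5.0

    print(descend_until_cell(fake_resistance, start_um=700.0))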

More information:

08 May 2012

Gesture Control System

When you learned about the Doppler effect in high school physics class (the shift in a wave's frequency that occurs when its source is moving, easily illustrated by a passing ambulance), you probably didn't envision it helping control your computer one day. But that is exactly what a group of researchers is doing at Microsoft Research, the software giant's Redmond, Washington-based lab, with a system called SoundWave. Gesture control is becoming increasingly common and is even built into some TVs. While other motion-sensing technologies, such as Microsoft's own Kinect device, use cameras to sense and interpret movement and gestures, SoundWave does this using only sound, thanks to the Doppler effect, some clever software, and the built-in speakers and microphone on a laptop.


Researchers at Microsoft Research say the technology can already be used to sense a number of simple gestures, and with smartphones and laptops starting to include multiple speakers and microphones, it could become even more sensitive. The idea for SoundWave emerged last summer, when the researchers were working on a project that used ultrasonic transducers to create haptic effects and noticed a sound wave changing in a surprising way as one of them moved around. The transducers were emitting an ultrasonic sound wave that bounced off the researchers' bodies, and their movements changed both the tone of the sound that was picked up and the waveform they viewed on the back end.
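
The exact pilot-tone frequency and detection pipeline used by SoundWave are not given in the article. The sketch below simulates the core measurement with numpy rather than real audio I/O: an inaudible tone plus a Doppler-shifted reflection, with the spread of energy around the tone serving as a crude motion indicator. The 18 kHz tone, window length and threshold are assumptions.

    import numpy as np

    FS, TONE_HZ, N = 44100, 18000, 4096            # sample rate, pilot tone, window size

    def doppler_bandwidth(mic_samples, fs=FS, floor_db=-40.0):
        """Return how far (in Hz) significant energy spreads in the spectrum;
        a wide spread around the pilot tone suggests a hand is moving."""
        spectrum = np.abs(np.fft.rfft(mic_samples * np.hanning(len(mic_samples))))
        freqs = np.fft.rfftfreq(len(mic_samples), 1.0 / fs)
        strong = freqs[spectrum > spectrum.max() * 10 ** (floor_db / 20.0)]
        return strong.max() - strong.min()

    t = np.arange(N) / FS
    still = np.sin(2 * np.pi * TONE_HZ * t)                         # no motion: pure tone
    moving = still + 0.2 * np.sin(2 * np.pi * (TONE_HZ + 120) * t)  # reflection shifted ~120 Hz
    print(round(doppler_bandwidth(still)), round(doppler_bandwidth(moving)))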

More information: