25 July 2011

Humanlike Computer Vision

Two new computer-vision techniques mimic how humans perceive three-dimensional shapes, instantly recognizing objects no matter how they are twisted or bent – an advance that could help machines see more like people. The techniques, called heat mapping and heat distribution, apply mathematical methods to enable machines to perceive three-dimensional objects, Purdue University researchers say. Both techniques build on the basic physics and mathematics of how heat diffuses over surfaces: as heat spreads across a surface, it follows and captures the precise contours of the shape. The system takes advantage of this 'intelligence' of heat, simulating heat flowing from one point to another and, in the process, characterizing the shape of an object. A major limitation of existing methods is that they require prior information about a shape before it can be analyzed. The researchers tested their method on complex shapes, including the human form and a centaur – a mythical half-human, half-horse creature.


The new methods mimic the human ability to perceive objects properly because they don't require a preconceived idea of how many segments an object contains. They have many potential applications: a 3D search engine for finding mechanical parts, such as automotive components, in a database; robot vision and navigation; 3D medical imaging; military drones; multimedia gaming; creating and manipulating animated characters in film production; helping 3D cameras understand human gestures in interactive games; and contributing to progress in areas of science and engineering related to pattern recognition, machine learning and computer vision. The heat-mapping method works by first breaking an object into a mesh of triangles, the simplest shape that can characterize a surface, and then calculating the flow of heat over the meshed object. The method does not track real heat; it simulates heat flow using well-established mathematical principles. Heat mapping allows a computer to recognize an object, such as a hand or a nose, no matter how the fingers are bent or the nose is deformed, and to ignore noise introduced by imperfect laser scanning or other erroneous data.
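
A minimal Python sketch of the idea, assuming a toy triangle mesh and a simple combinatorial Laplacian (this illustrates heat diffusion on a surface, not the Purdue implementation): heat placed on one vertex spreads across the mesh, and the resulting per-vertex values reflect the surface's intrinsic connectivity rather than its pose.

```python
import numpy as np

def mesh_laplacian(num_vertices, triangles):
    # Uniform (combinatorial) graph Laplacian L = D - A of a triangle mesh.
    # A cotangent-weighted Laplacian would track the geometry more faithfully;
    # the uniform version keeps the sketch short.
    A = np.zeros((num_vertices, num_vertices))
    for i, j, k in triangles:
        for a, b in ((i, j), (j, k), (k, i)):
            A[a, b] = A[b, a] = 1.0
    return np.diag(A.sum(axis=1)) - A

def diffuse_heat(L, source, t=0.5, steps=200):
    # Explicit Euler integration of the heat equation du/dt = -L u,
    # starting from a unit of heat placed on one vertex.
    u = np.zeros(L.shape[0])
    u[source] = 1.0
    dt = t / steps
    for _ in range(steps):
        u = u - dt * (L @ u)
    return u

# Toy example: a square sheet made of two triangles (4 vertices).
L = mesh_laplacian(4, [(0, 1, 2), (0, 2, 3)])
print(diffuse_heat(L, source=0))  # heat signature as seen from vertex 0
```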

More information:

http://www.purdue.edu/newsroom/research/2011/110620RamaniHeat.html

24 July 2011

Geo-Immersion

At the site of a terrorist attack, an earthquake or a tsunami, emergency responders are focused on search and rescue, and on saving lives. But some disaster sites also present an opportunity for experts with skills different from those of the police, firefighters and aid organizations that are first on the scene.

With support from the National Science Foundation (NSF), researchers from the Disaster Research Center (DRC) go to devastated locations to learn more about how lives may be saved in the future. The DRC started in 1963 at the Ohio State University, and moved in the mid-1980s to the University of Delaware in Newark.

More information:

http://www.nsf.gov/news/special_reports/science_nation/index.jsp

23 July 2011

Robots Identify Human Activities

Cornell researchers are programming robots to identify human activities by observation. They report that they have trained a robot to recognize 12 different human activities, including brushing teeth, drinking water, relaxing on a couch and working on a computer. Others have tried to teach robots to identify human activities using ordinary video cameras, the researchers note. The Cornell team instead used a 3D camera, which, they said, greatly improves reliability because it helps separate the human image from background clutter. Specifically, they used an inexpensive Microsoft Kinect camera, designed to control video games with body movements; it combines a video image with infrared ranging to create a point cloud giving the 3D coordinates of every point in the image. To simplify computation, images of people are reduced to skeletons. The computer then breaks activities into a series of steps: brushing teeth, for example, can be broken down into squeezing toothpaste, bringing hand to mouth, moving hand up and down and so on.
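
As a rough sketch of the skeleton step, the Python below turns raw 3D joint coordinates, as a Kinect-style tracker might report them, into a pose vector that no longer depends on where the person is standing. The joint names and the torso-based normalization are assumptions made for the example, not the Cornell pipeline.

```python
import numpy as np

def pose_vector(joints):
    # joints: dict mapping joint name -> (x, y, z) in camera coordinates.
    # Centering on the torso removes the person's position in the room;
    # dividing by the torso-to-neck distance removes body size.
    torso = np.array(joints["torso"])
    neck = np.array(joints["neck"])
    scale = np.linalg.norm(neck - torso) or 1.0
    return np.concatenate([(np.array(p) - torso) / scale
                           for p in joints.values()])

frame = {"torso": (0.0, 1.0, 2.5), "neck": (0.0, 1.4, 2.5),
         "left_hand": (-0.3, 1.2, 2.3), "right_hand": (0.3, 0.9, 2.4)}
print(pose_vector(frame))  # 12 numbers describing pose, not position
```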


The computer is trained by watching a person perform the activity several times; each time it breaks down what it sees into a chain of sub-activities and stores the result, ending with an average of all the observations. When it's time to recognize what a person is doing, the computer again breaks down the activity it observes into a chain of sub-activities, then compares that with the various options in its memory. Of course no human will produce the exact same movements every time, so the computer calculates the probability of a match for each stored chain and chooses the most likely one. In experiments with four different people in five environments, including a kitchen, living room and office, the computer correctly identified one of the 12 specified activities 84 percent of the time when it was observing a person it had trained with, and 64 percent of the time when working with a person it had not seen before. It also was successful at ignoring random activities that didn't fit any of the known patterns.
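
A minimal sketch of the matching step, assuming the sub-activities arrive as discrete labels: compare the observed chain against stored template chains, score each, and fall back to "unknown" when nothing fits. The simple per-step agreement score below stands in for the researchers' actual probabilistic model.

```python
# Stored templates: each activity is a chain of sub-activity labels.
TEMPLATES = {
    "brushing_teeth": ["squeeze_paste", "hand_to_mouth", "move_up_down"],
    "drinking_water": ["grasp_cup", "hand_to_mouth", "tilt_cup"],
}

def match_probability(observed, template):
    # Fraction of aligned steps that agree, with the longer chain's length
    # as the denominator so extra or missing steps count against the match.
    n = max(len(observed), len(template))
    return sum(o == t for o, t in zip(observed, template)) / n

def recognize(observed, threshold=0.5):
    # Score every stored chain and pick the most likely; report "unknown"
    # when even the best match is poor (ignoring random activities).
    scores = {name: match_probability(observed, chain)
              for name, chain in TEMPLATES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "unknown"

print(recognize(["squeeze_paste", "hand_to_mouth", "move_up_down"]))  # brushing_teeth
print(recognize(["wave_arms", "jump", "spin"]))                       # unknown
```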

More information:

http://www.news.cornell.edu/stories/July11/Activity.html

22 July 2011

Who Needs Humans?

Amid all the job losses of the Great Recession, there is one category of worker that the economic disruption has been good for: nonhumans. From self-service checkout lines at the supermarket to industrial robots armed with saws and taught to carve up animal carcasses in slaughterhouses, these ever-more-intelligent machines are now not just assisting workers but actually kicking them out of their jobs. Automation isn’t just affecting factory workers, either. Some law firms now use artificial-intelligence software to scan and read mountains of legal documents, work that was previously performed by highly paid human lawyers.


It’s not that robots are cheaper than humans, though often they are. It’s that they are better. In some cases the quality requirements are so stringent that even if you wanted to have a human do the job, you couldn’t. The same goes for surgery, where robotic systems are used to perform an ever-growing list of operations – not because the machines save money but because, thanks to their greater precision, patients recover in less time and with fewer complications. The surgery bots don’t replace surgeons – you still need a surgeon to drive the robot – and prices for the machines run as high as $2.2 million. Nevertheless, their maker, Intuitive Surgical, sold 400 of them last year.

More information:

http://www.newsweek.com/2011/07/17/the-threat-of-automation-robots-threaten-american-jobs.html

19 July 2011

Robots Get Kinect's 'Eyes and Ears'

Microsoft Robotics has been giving away its free Robotics Developer Studio, complete with a 3D simulator, for the last six years, but without gaining much visibility. Microsoft, however, is convinced that will change when the company launches added services that let users plug the Kinect hands-free hardware – intended for gesture control of its Xbox gaming console – directly into any robot. In essence, the Kinect will add eyes and ears to any robot, which can then be controlled through sophisticated gesture recognition running on an embedded Windows-based computer. Robotics Developer Studio users will not just have access to the raw sensor data; they will also be able to use all of the Kinect's pattern-recognition algorithms that let the Xbox be controlled with gestures, so roboticists will be able to control their robots with gestures too.
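
As a rough illustration of what gesture-controlled robotics looks like in code, the sketch below maps Kinect-style skeleton data to drive commands. Robotics Developer Studio is actually programmed against Microsoft's C# APIs, so this Python, along with its skeleton format and command tuples, is a hypothetical stand-in for the concept only.

```python
def gesture_to_command(skeleton):
    # skeleton: dict of joint name -> (x, y, z); x is lateral, y is height.
    rx, ry, _ = skeleton["right_hand"]
    _, head_y, _ = skeleton["head"]
    if ry > head_y:          # hand raised above the head: stop
        return ("stop", 0.0)
    if rx > 0.3:             # hand held out to the right: turn right
        return ("turn", 0.5)
    if rx < -0.3:            # hand held out to the left: turn left
        return ("turn", -0.5)
    return ("forward", 0.3)  # neutral pose: drive ahead slowly

frame = {"head": (0.0, 1.6, 2.0), "right_hand": (0.45, 1.1, 1.8)}
print(gesture_to_command(frame))  # ('turn', 0.5)
```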


While users are prohibited from developing commercial products with the Kinect SDK, non-profits will be able to add the navigation algorithms that enable robots to use the Kinect to follow paths, plan routes and generally re-enact the kinds of behaviors that search-and-rescue robots can now perform only by remote control. Last year Microsoft acquired the fabless chip maker Canesta Inc., which makes a chip-level pattern-recognition engine said to outperform the PrimeSensor that Microsoft currently licenses from PrimeSense Ltd. When Microsoft commercializes the Canesta-designed chip-level work-alike of the PrimeSensor, it will be able to shrink the foot-long Kinect to about a square centimeter, enabling tiny robots and other mobile devices to perform sophisticated gesture recognition for natural user interfaces, autonomous navigation and many other tasks.

More information:

http://www.eetimes.com/electronics-news/4217801/Robots-get-Kinect-s--eyes-and-ears-

16 July 2011

Learn Language Using Games

Computers are great at treating words as data: Word-processing programs let you rearrange and format text however you like, and search engines can quickly find a word anywhere on the Web. But what would it mean for a computer to actually understand the meaning of a sentence written in ordinary English — or French, or Urdu, or Mandarin? One test might be whether the computer could analyze and follow a set of instructions for an unfamiliar task. And indeed, in the last few years, researchers at MIT’s Computer Science and Artificial Intelligence Lab have begun designing machine-learning systems that do exactly that, with surprisingly good results.


In 2009, at the annual meeting of the Association for Computational Linguistics (ACL), researchers in the lab of Regina Barzilay took the best-paper award for a system that generated scripts for installing a piece of software on a Windows computer by reviewing instructions posted on Microsoft’s help site. At this year’s ACL meeting, the researchers applied a similar approach to a more complicated problem: learning to play ‘Civilization’, a computer game in which the player guides the development of a city into an empire across centuries of human history. When the researchers augmented a machine-learning system so that it could use a player’s manual to guide the development of a game-playing strategy, its rate of victory jumped from 46 percent to 79 percent.
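
A toy sketch of the underlying idea, letting text from a manual bias the choice among candidate game actions: the real system learns which manual sentences are relevant as part of playing, whereas the crude word-overlap score, actions and manual lines below are invented purely to illustrate the mechanism.

```python
# Invented manual sentences and candidate actions for illustration.
MANUAL = [
    "build a city on a river to speed early growth",
    "irrigate grassland tiles to increase food output",
]

def text_score(action):
    # Best word overlap between the action's description and any manual
    # sentence; a stand-in for the learned relevance model.
    words = set(action.split())
    return max(len(words & set(sentence.split())) for sentence in MANUAL)

candidates = ["build city on river", "attack with warrior", "irrigate grassland"]
print(max(candidates, key=text_score))  # the manual favors "build city on river"
```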

More information:

http://web.mit.edu/newsoffice/2011/language-from-games-0712.html

13 July 2011

The Virtue in Virtuality

What if a fifth grader could learn college-level physics concepts? What if the platform used to teach those concepts could be accessed simply online through a Web browser? What if that new methodology allowed students to write computer programs, progress at their own pace and give the teacher immediate feedback on individual progress? As it turns out, these questions are not just ‘what ifs’, thanks to several groundbreaking education technology platforms under development in labs across the Peabody campus – innovations their creators call the virtue in virtuality.


Common among these developing platforms are a commitment to accessibility, a focus on efficiency and effectiveness, and an emphasis on STEM (science, technology, engineering and math) topics. A driving theme is the desire to free teachers up for more instructional time and, ultimately, to improve learning outcomes. Technological innovations always begin with a passion to tackle intransigent problems. Once a problem in need of a solution is identified, the researchers say, they ask, ‘To what extent could the use of technology make this more accessible to learners?’ The technology, they state, comes in on the back end.

More information:

http://www.vanderbilt.edu/magazines/peabody-reflector/2011/06/the-virtue-in-virtuality/

12 July 2011

Robo-Paparazzi

To create a robot photographer, computer scientists at the International Institute of Information Technology in Hyderabad, India, turned to a humanoid robot called NAO that is equipped with a head-mounted camera. The team programmed NAO to obey two simple photographic guidelines known as the rule of thirds and the golden ratio. The former states that an image should be divided into thirds, both vertically and horizontally, with interesting features placed where the dividing lines cross. The latter suggests the horizon should divide a photo into two rectangles, with the larger being 1.62 times the size of the smaller – the golden ratio.
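
Both guidelines reduce to simple arithmetic on the frame's dimensions. A minimal sketch, assuming only an image's width and height in pixels (this is an illustration, not the NAO code itself):

```python
def rule_of_thirds_points(width, height):
    # The four intersections of the lines dividing the frame into thirds;
    # interesting features should sit near these points.
    return [(width * i / 3, height * j / 3) for i in (1, 2) for j in (1, 2)]

def golden_horizons(height):
    # Horizon heights splitting the frame into two rectangles whose sizes
    # are in the golden ratio (larger ~1.62 times the smaller).
    phi = 1.62
    return (height / (1 + phi), height * phi / (1 + phi))  # low or high horizon

print(rule_of_thirds_points(640, 480))  # [(213.3, 160.0), ...]
print(golden_horizons(480))             # (~183.2, ~296.8)
```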


The robot is also programmed to assess the quality of its photos by rating focus, lighting and colour. The researchers taught it what makes a great photo by analysing the top and bottom 10 per cent, as rated by humans, of 60,000 images from a website hosting a photography contest. Armed with this knowledge, the robot can take photos when told to, then determine their quality. If an image scores below a certain quality threshold, the robot automatically makes another attempt, improving on the first shot by working out the photo's deviation from the guidelines and making the appropriate correction to its camera's orientation.
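
A hedged sketch of the shoot, score and retake loop described above; the camera interface and the placeholder scoring function are invented for the example and stand in for the robot's trained quality model.

```python
def composition_error(photo):
    # Placeholder score: how far the photo deviates from the guidelines,
    # normalized to [0, 1]. The robot's trained quality model goes here.
    return abs(photo["thirds_deviation"])

def take_good_photo(camera, threshold=0.2, max_attempts=5):
    for _ in range(max_attempts):
        photo = camera.shoot()
        if composition_error(photo) < threshold:
            return photo
        # Re-aim by the measured deviation and try again.
        camera.pan_tilt_by(-photo["dx"], -photo["dy"])
    return photo  # best effort after max_attempts

class MockCamera:
    # Stand-in camera whose aim actually improves after each correction.
    def __init__(self):
        self.offset = 0.6
    def shoot(self):
        return {"thirds_deviation": self.offset, "dx": self.offset, "dy": 0.0}
    def pan_tilt_by(self, dx, dy):
        self.offset += dx

print(take_good_photo(MockCamera()))  # succeeds on the second attempt
```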

More information:

http://www.newscientist.com/article/mg21128195.300-robopaparazzi-learn-how-to-take-the-perfect-photo.html