28 June 2012

Brain’s Taste for Size

The human brain can recognize thousands of different objects, but neuroscientists have long grappled with how the brain organizes object representation — in other words, how the brain perceives and identifies different objects. Now researchers at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) and Department of Brain and Cognitive Sciences have discovered that the brain organizes objects based on their physical size, with a specific region of the brain reserved for recognizing large objects and another reserved for small objects. As part of their study, they took 3D scans of brain activity during experiments in which participants were asked to look at images of big and small objects or visualize items of differing size. By evaluating the scans, the researchers found that there are distinct regions of the brain that respond to big objects and small objects.


By looking at the arrangement of the responses, they found a systematic organization of big-to-small object responses across the brain’s cerebral cortex. Large objects, they learned, are processed in the parahippocampal region of the brain, an area next to the hippocampus that is also responsible for navigating through spaces and for processing the location of different places, like the beach or a building. Small objects are handled in the inferior temporal region of the brain, near regions that become active when we manipulate tools like a hammer or a screwdriver. The work could have major implications for the field of robotics, in particular in developing techniques for how robots deal with different objects, from grasping a pen to sitting in a chair.


27 June 2012

First GPS for the Blind

A new application for devices running the Android operating system, called OnTheBus, helps people find their way and move around in large cities. The application is based on universal design principles and is therefore useful for anyone travelling around a big city, and especially for people with visual, hearing or cognitive impairments. The application, already available on Google Play, offers a set of optimal routes users can choose from. Once a route is chosen, the application guides users from their current location to the nearest bus stop and tells them how long remains until their bus arrives. On board, it informs users of the number of stops remaining and signals when it is time to press the button and get off the bus. It then guides users to their destination.
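
The flow described above (pick a route, walk to the boarding stop, wait for the bus, count down the stops, press the button) maps onto a simple guidance loop. The sketch below is a hypothetical Python illustration, not OnTheBus code; the Stop and Route classes, the announce callback and the example coordinates are all invented for the example.

```python
# Hypothetical sketch of the guidance loop described above -- not OnTheBus code.
from dataclasses import dataclass

@dataclass
class Stop:
    name: str
    lat: float
    lon: float

@dataclass
class Route:
    boarding_stop: Stop
    stops_to_ride: int          # stops between boarding and alighting
    minutes_to_arrival: int     # real-time estimate for the next bus

def guide(user_position, route, announce):
    """Walk the user through the three phases the article describes."""
    # 1. Guide the user from their current position to the boarding stop.
    announce(f"Walk to stop '{route.boarding_stop.name}'.")
    # 2. Report the waiting time for the chosen bus.
    announce(f"Your bus arrives in about {route.minutes_to_arrival} minutes.")
    # 3. On board, count down the stops and signal when to request the stop.
    for remaining in range(route.stops_to_ride, 0, -1):
        announce(f"{remaining} stop(s) remaining.")
    announce("Press the stop button and get off at the next stop.")

# Example usage with invented coordinates and stop names.
guide((41.38, 2.17), Route(Stop("Pl. Catalunya", 41.386, 2.170), 4, 6), print)
```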


The system uses the newest technologies available on mobile devices, such as GPS, compass, accelerometer, speech recognition and synthesis, and 3G or WiFi connectivity. Currently it can be used in Barcelona, Madrid and Rome, and will soon be available for the cities of Valencia, Zaragoza and Helsinki. The application is offered in Spanish, Catalan, English and Italian; versions in other languages and for other cities are also being prepared. The researchers are already working on improvements, including support for other modes of public transport, basic services such as finding taxis, the nearest chemist's or assistance centres, augmented reality techniques for locating stop signs and public transport stops, and integration with social networks.


26 June 2012

Next Cameras Come Into View

Scientists at Duke University have built an experimental camera that allows the user—after a photo is taken—to zoom in on portions of the image in extraordinary detail, a development that could fundamentally alter the way images are captured and viewed. The new camera collects more than 30 times as much picture data as today's best consumer digital devices. While existing cameras can take photographs that have pixel counts in the tens of millions, the Duke device produces a still or video image with a billion pixels—five times as much detail as can be seen by a person with 20/20 vision. A pixel is one of the many tiny areas of illumination on a display screen from which an image is composed. The more pixels, the more detailed the image. The Duke device, called Aware-2, is a long way from being a product.
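
As a quick sanity check on the figures above, the ratio between a one-gigapixel frame and a high-end consumer sensor is easy to compute. The 30-megapixel value below is an assumption, chosen only to be consistent with the article's "pixel counts in the tens of millions".

```python
# Back-of-the-envelope check of the figures quoted above.
aware2_pixels = 1_000_000_000        # one gigapixel (Aware-2)
consumer_pixels = 30_000_000         # assumed high-end consumer camera, circa 2012
print(aware2_pixels / consumer_pixels)   # ~33, i.e. "more than 30 times" the data
```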


The current version needs lots of space to house and cool its electronic boards; it weighs 100 pounds and is about the size of two stacked microwave ovens. It also takes about 18 seconds to shoot a frame and record the data on a disk. The $25 million project is funded by the Defense Advanced Research Projects Agency, part of the U.S. Department of Defense. The military is interested in high-resolution cameras as tools for aerial or land-based surveillance. If the Duke device can be shrunk to hand-held size, it could spark an alternative approach to photography. Instead of deciding where to focus a camera, a user would simply shoot a scene, then later zoom in on any part of the picture and view it in extreme detail. That means desirable or useful portions of a photo could be identified after the image was captured.


21 June 2012

Robotic Factory Assistants

In today’s manufacturing plants, the division of labor between humans and robots is quite clear: Large, automated robots are typically cordoned off in metal cages, manipulating heavy machinery and performing repetitive tasks, while humans work in less hazardous areas on jobs requiring finer detail. But according to researchers at MIT, the factory floor of the future may host humans and robots working side by side, each helping the other in common tasks. Julie Shah, one of the MIT researchers, envisions robotic assistants performing tasks that would otherwise hinder a human’s efficiency, particularly in airplane manufacturing.

“If the robot can provide tools and materials so the person doesn’t have to walk over to pick up parts and walk back to the plane, you can significantly reduce the idle time of the person,” Shah says. “It’s really hard to make robots do careful refinishing tasks that people do really well. But providing robotic assistants to do the non-value-added work can actually increase the productivity of the overall factory.” A robot working in isolation simply has to follow a set of pre-programmed instructions to perform a repetitive task; working with humans is a different matter.


20 June 2012

3D Tracking Tech

Technology originally developed to track badgers underground could soon be used to locate people in an emergency situation such as a bomb attack or earthquake. GPS is good at pinpointing locations in open spaces but below the surface it's a different story. The limitations of conventional tracking technology were exposed in the 2005 London bombings, and numerous earthquakes since, where the emergency services struggled to locate people in underground areas or buried beneath debris. Positioning indoors is also a challenge, with no clear winning technology that is able to address people's day-to-day needs, such as finding their way around an airport.


In 2009, researchers from Oxford University's Department of Computer Science faced similar problems when they joined a project to study badgers in Oxford's Wytham Woods. The animals spend much of their lives underground, where conventional technology couldn't keep tabs on them. The solution developed by the researchers is a technology based on generating very low frequency fields. This has the unique advantage of penetrating obstacles, enabling positioning and communication even through thick layers of rock, soil and concrete. After the work with badgers, the team realised the technology had potential applications in many areas, such as location-based advertising, finding victims in emergencies, and tracking people and equipment in modern mines.


11 June 2012

Zoomable User Interfaces

Zoomable user interfaces (ZUIs), as they are known, are arriving on the coat-tails of touch-screen gadgets such as the iPhone that have popularised zooming to magnify graphics. With ZUIs information need not be chopped up to fit on uniformly sized slides. Instead, text, images and even video sit on a single, limitless surface and can be viewed at whatever size makes most sense—up close for details, or zoomed out for the big picture. Forthcoming software for timeline presentations, dubbed ChronoZoom, offers another zoom-based approach. Events are described or represented along a timeline using text, images, and video. Zoom in so that a recent 24-hour section of the timeline fits on a laptop screen, and at this scale, the timeline stretches about 17 billion kilometres to the left.
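
The core mechanism can be captured in a few lines: content keeps fixed coordinates on one limitless surface, and only the view's pan and zoom change. The Python sketch below is a hypothetical illustration of that view transform, not code from ChronoZoom or any of the systems mentioned; all names are invented.

```python
# Minimal sketch of the core ZUI idea: content lives at fixed "world"
# coordinates on one limitless surface, and the view is just a pan offset
# plus a zoom factor applied at draw time.

class ZoomableView:
    def __init__(self):
        self.zoom = 1.0      # 1.0 = natural size; larger values zoom in
        self.pan_x = 0.0     # world coordinate at the left edge of the screen
        self.pan_y = 0.0     # world coordinate at the top edge of the screen

    def world_to_screen(self, wx, wy):
        """Map a point on the content surface to screen pixels."""
        return (wx - self.pan_x) * self.zoom, (wy - self.pan_y) * self.zoom

    def zoom_about(self, sx, sy, factor):
        """Zoom in or out while keeping the point under (sx, sy) fixed on screen."""
        wx = self.pan_x + sx / self.zoom
        wy = self.pan_y + sy / self.zoom
        self.zoom *= factor
        self.pan_x = wx - sx / self.zoom
        self.pan_y = wy - sy / self.zoom

view = ZoomableView()
view.zoom_about(400, 300, 2.0)       # double the magnification around the screen centre
print(view.world_to_screen(400, 300))  # the point under the cursor stays put
```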


The zoom-based approach can transform multi-page websites into a single broad surface that simultaneously displays all content. Instead of clicking and waiting for a new page to appear, a visitor can zoom directly to areas of interest. On the Hard Rock Café website, a page built using Microsoft’s Silverlight software shows 1,610 memorabilia items. By using the scroll wheel to zoom, details of each one can be expanded to fill the entire screen. Software that zooms deep into moving imagery may be next. America’s Department of Energy is developing software to drill into scientific animations of particle behaviour in nuclear reactions. The software, called VisIt, has a zooming range equivalent to zipping from a view of the Milky Way to a grain of sand.


07 June 2012

Tree-Thinking Through Touch

A pair of new studies by computer scientists, biologists, and cognitive psychologists at Harvard, Northwestern, Wellesley, and Tufts suggest that collaborative touch-screen games have value beyond just play. Two games, developed with the goal of teaching important evolutionary concepts, were tested on families in a busy museum environment and on pairs of college students. In both cases, the educational games succeeded at making the process of learning difficult material engaging and collaborative. The games take advantage of the multi-touch-screen tabletop, which is essentially a desk-sized tablet computer. In a classroom or a museum, several users can gather around the table and use it simultaneously, either working on independent problems in the same space, or collaborating on a single project.


The table accommodates multiple users and can also interact with physical objects like cards or blocks that are placed onto its surface. The new research moves beyond the novelty of the system, however, and investigates the actual learning outcomes of educational games in both formal and informal settings. The two collaborative games that have been developed for the system, Phylo-Genie and Build-a-Tree, are designed to help people understand phylogeny—specifically, the tree diagrams that evolutionary biologists use to indicate the evolutionary history of related species. Learners new to the discipline sometimes think of evolution as a linear progression, from the simple to the complex, with humans as the end point. Both of the phylogeny games were designed and evaluated in accordance with accepted principles of cognitive psychology and learning sciences.
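
For readers unfamiliar with the notation, a phylogenetic tree is just a nested grouping of species by relatedness rather than a ladder from "simple" to "complex". The snippet below is a hypothetical Python illustration of that structure, not content from Build-a-Tree or Phylo-Genie; the species are arbitrary examples.

```python
# Nested tuples stand in for clades: each pair groups the two most closely
# related branches, so the tree encodes relatedness, not a linear progression.
tree = ((("human", "chimpanzee"), "mouse"), "lizard")

def leaves(node):
    """List the species in a clade by walking the nested structure."""
    if isinstance(node, str):
        return [node]
    left, right = node
    return leaves(left) + leaves(right)

print(leaves(tree))        # every species in the tree
print(leaves(tree[0]))     # the clade containing human, chimpanzee and mouse
```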


06 June 2012

Biometric Ears

Research into ear biometrics by researchers in ECS (Electronics and Computer Science) has raised new potential for security systems. The work is currently profiled on the website All Analytics, where the researchers explain the potential uses of their pioneering work on ear identification. They believe that photographs of individual ears, matched against a comparative database, could provide a form of identification as distinctive as fingerprints.


Using ears for identification has clear advantages over other kinds of biometric identification: once developed, the ear changes little throughout a person’s life. During walk-throughs at security checkpoints, cameras could digitally photograph passers-by and compare their ears against others in a database. Used in combination with face recognition, ear recognition offers a second point of comparison in cases where all or part of a face is obscured, for example by make-up.
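
At its simplest, matching a photographed ear against a database is a gallery search: compute a feature vector from the probe image and return the closest enrolled identity. The sketch below shows that generic nearest-neighbour step in Python; it is not the ECS researchers' method, and extract_features is a crude placeholder for whatever descriptor a real ear-recognition system would compute.

```python
# Generic gallery matching for ear images -- an illustrative sketch only.
import numpy as np

def extract_features(ear_image: np.ndarray) -> np.ndarray:
    # Placeholder: flatten and normalise the image as a crude feature vector.
    v = ear_image.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def identify(probe_image, gallery):
    """Return the enrolled identity whose ear features are closest to the probe."""
    probe = extract_features(probe_image)
    best_id, best_dist = None, np.inf
    for person_id, enrolled_image in gallery.items():
        dist = np.linalg.norm(probe - extract_features(enrolled_image))
        if dist < best_dist:
            best_id, best_dist = person_id, dist
    return best_id, best_dist

# Toy usage with random images standing in for enrolled ear photographs.
gallery = {"alice": np.random.rand(64, 64), "bob": np.random.rand(64, 64)}
print(identify(np.random.rand(64, 64), gallery))
```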
