27 January 2008

Website Converts 2D into 3D Models

The Make3d algorithm, developed by Stanford computer scientists, can take any 2-D image and create a 3-D ‘fly around’ model of its content, giving viewers access to the scene's depth and a range of points of view. The algorithm uses many of the visual cues that humans use to estimate the 3-D aspects of a scene. Applications of extracting 3-D models from 2-D images could range from enhanced pictures for online real-estate sites to rapid creation of environments for video games and improved vision and dexterity for mobile robots navigating the spatial world. Extracting 3-D information from still images is an emerging class of technology. Make3d creates accurate and smooth models about twice as often as competing approaches, by abandoning limiting assumptions in favour of a new, deeper analysis of each image and the powerful artificial intelligence technique of machine learning.

To teach the algorithm about depth, orientation and position in 2-D images, the researchers fed it still images of campus scenes along with 3-D data of the same scenes gathered with laser scanners. The algorithm correlated the two data sets, eventually gaining a good idea of the trends and patterns associated with being near or far. To make these judgments, the algorithm breaks the image up into tiny planes called ‘superpixels’: small regions of the image with very uniform colour, brightness and other attributes. By looking at a superpixel in concert with its neighbours, analyzing changes such as gradations of texture, the algorithm judges how far away the superpixel is from the viewer and how it is oriented in space. Although the technology works better than any other has so far, it is not perfect. The software is at its best with landscapes and scenery rather than close-ups of individual objects.
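The superpixel idea above can be illustrated with a deliberately simplified sketch. Everything here is a hypothetical stand-in: real superpixels come from image segmentation rather than a fixed grid, and Make3d's depth model is learned from laser-scan training data rather than hand-set weights. The sketch only shows the shape of the pipeline, i.e. computing per-region cues (brightness, texture variation) and mapping them to a relative depth estimate.

```python
# Toy sketch of superpixel-style depth estimation, NOT the Make3d algorithm.
# Assumptions: fixed 2x2 patches stand in for segmented superpixels, and a
# hand-set linear model stands in for the model trained on laser-scan data.

def patch_features(img, r, c, size=2):
    """Mean brightness and local variance of one patch ('superpixel')."""
    vals = [img[r + i][c + j] for i in range(size) for j in range(size)]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return mean, var

def predict_depth(img, size=2, w_bright=0.01, w_texture=-0.05, bias=1.0):
    """Assign each patch a relative depth score from its visual cues.

    Toy rule (an assumption, not the learned model): brighter, smoother
    patches (sky, haze) read as farther; strongly textured patches (fine
    nearby detail) read as nearer -- a crude stand-in for the texture
    cues the trained system exploits.
    """
    rows, cols = len(img), len(img[0])
    depth = {}
    for r in range(0, rows - size + 1, size):
        for c in range(0, cols - size + 1, size):
            mean, var = patch_features(img, r, c, size)
            depth[(r, c)] = bias + w_bright * mean + w_texture * var
    return depth

# 4x4 grayscale image: smooth bright top half (reads as far),
# dark textured bottom half (reads as near).
img = [
    [200, 200, 200, 200],
    [200, 200, 200, 200],
    [ 30,  90,  20, 100],
    [ 80,  10,  95,  25],
]
depth = predict_depth(img)
assert depth[(0, 0)] > depth[(2, 0)]  # top patch judged farther than bottom
```

In the real system each superpixel is also scored jointly with its neighbours, so a full implementation would add pairwise terms linking adjacent regions rather than scoring each patch in isolation.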

More information:

http://make3d.stanford.edu/

http://www.sciencedaily.com/releases/2008/01/080126100444.htm